An Application is composed of one or more Components, where each Component maps to a construct in the Kubernetes Workloads API. In most cases an Application contains Kubernetes controllers like Deployments and StatefulSets, exposed as Services. The number of Components depends on your application architecture. For example, some 3-tier applications may have a few Components, or even a single Component, whereas microservices-style applications may have hundreds of services.

Applications are defined and stored in the Catalog. Once defined, an Application can be run in one or more Environments.


Changes made to an Application in the Catalog can be propagated to Application instances running in Environments, based on the Environment's Update Policy.

Here are some of the common constructs you can use to build Kubernetes applications with Nirmata:

Deployments (stateless components)

You can use a Deployment for stateless services in your application. As part of your Deployment, you can define a Pod template with one or more containers. For each container, you can add settings such as image information, run command, health checks, environment variables, volume mounts, container ports, etc.
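As an illustration, a minimal Deployment manifest covering these settings might look like the following sketch (the names, image, environment variable, and probe values are placeholders, not part of any specific application):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17        # container image information
        ports:
        - containerPort: 80      # container port
        env:
        - name: LOG_LEVEL        # environment variable
          value: info
        livenessProbe:           # health check
          httpGet:
            path: /
            port: 80
```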

To enable communication with other components in your application, or with external clients, you can expose one or more ports of your containers by defining a Service.
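For example, a Service that selects pods by label and exposes one container port could be sketched as follows (names and port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # routes traffic to pods carrying this label
  ports:
  - port: 80          # port exposed by the Service
    targetPort: 80    # container port receiving the traffic
```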

StatefulSets (stateful components)

Some application components require stable identities; for example, distributed software tools may require that cluster members retain the same names and addresses within the cluster. Other software may manage large data sets, requiring reuse of the same volume. StatefulSets address these challenges, and provide additional controls over upgrades and restarts suitable for stateful services.

As with a Deployment, StatefulSets also contain a Pod template and can be referenced by a Service.
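A minimal StatefulSet sketch, assuming an illustrative database workload (all names, the image, and the storage size are placeholders):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db          # headless Service that gives each pod a stable network identity
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:11
  volumeClaimTemplates:    # each replica gets, and keeps, its own volume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```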


Ingress

An Ingress manages external access to the services in a cluster, typically HTTP. Ingress policies can provide load balancing, SSL termination, and name-based virtual hosting.

Prior to creating an Ingress for your application, ensure that an Ingress Controller is available in the Kubernetes cluster to fulfill the ingress. Note that Google Kubernetes Engine (GKE) deploys an ingress controller on the master. Review the beta limitations of this controller if using GKE.

Nirmata allows you to use the same application YAML to deploy applications multiple times to the same cluster within different namespaces. To do so, the Ingress rule must be configured differently for each application.

In support, Nirmata allows multiple ingresses per cluster, and specific ingress settings per environment and/or application, by allowing the Ingress class annotation to be specified for each Ingress rule. Configuring an Ingress rule at the application level allows you to add an Ingress policy to an environment to apply environment-specific Ingress settings.
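As a sketch, an Ingress class is typically selected with the standard kubernetes.io/ingress.class annotation (the metadata names and backend here are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    # selects which ingress controller handles this rule,
    # allowing several controllers to coexist in one cluster
    kubernetes.io/ingress.class: nginx
spec:
  backend:
    serviceName: web
    servicePort: 80
```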

Additionally, some aspects of the Ingress rule, such as the Host name (subdomain) can be customized at a later date by another team to enforce Ingress policies when needed.

An Ingress policy contains:

How to Configure an Ingress Rule at the Application Level

Open the application by selecting Catalog from the sidebar menu and then opening Applications. Click on the application card.


Once inside the application, select Discover & Routing and choose the option to Add an Ingress.


Create a Name for the ingress and select a Default Backend Service. Then click Next.

The backend is a combination of service and port names. HTTP (and HTTPS) requests to the Ingress that match the host and path of a rule are sent to the listed backend. In most cases, a default backend is configured in the ingress controller; it serves any requests that do not match any of the specified paths.

An ingress without rules sends all traffic to a single backend.
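Such a rule-less Ingress can be sketched as follows (the name and backend values are placeholders):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: default-backend-ingress
spec:
  backend:              # no rules: every request goes to this single backend
    serviceName: web
    servicePort: 80
```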


To keep the application portable, leave the Host field empty or provide a default value that can be customized when the application is deployed in an environment. This ensures that the host name is unique to each deployment of the application.

When a Host is not specified, the ingress rule applies to all inbound HTTP traffic through the specified IP address.

If a Host is specified, the ingress rule applies only to the specified host.

Next, add one or more paths.

Each path must be associated with a backend serviceName and servicePort. Both the host and path must match the content of an incoming request before the load balancer will direct traffic to the referenced service.
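For example, an Ingress with two paths routed to different backends could be sketched as follows (the host name, paths, service names, and ports are all illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - host: app.example.com        # both host and path must match the request
    http:
      paths:
      - path: /api
        backend:
          serviceName: api-service
          servicePort: 8080
      - path: /
        backend:
          serviceName: web-service
          servicePort: 80
```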

Note that on most Kubernetes project-maintained distributions, communication from the user to the apiserver, and from the apiserver to the kubelets, is protected by SSL/TLS. Secrets are protected when transmitted over these channels.

Click Finish.


TLS Support

Ingress can be secured by specifying a secret that contains a TLS private key and certificate. Kubernetes Ingress only supports TLS port 443 and assumes TLS termination.

If TLS is enabled, both Host name and Secret name must be configured. The TLS secret must contain keys named tls.crt and tls.key that contain the certificate and private key to use for TLS.

apiVersion: v1
kind: Secret
metadata:
  name: testsecret-tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key

Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. The TLS secret must come from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN), for the host.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    secretName: testsecret-tls
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: service1
          servicePort: 80

The Host name is generated using the same Ingress Policy settings as in non-TLS configurations.

The Secret name is stored in the ConfigMap selected by configMapSelector. The name of the key is stored as an attribute of the IngressPolicy called configMapSecretKey.

To create a new Secret name:

  1. Check that TLSEnabled is set to true
  2. Apply configMapSelector to retrieve the ConfigMap containing the Secret name
  3. Use the value of configMapSecretKey as the key to retrieve the value of the Secret name in ConfigMap
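The ConfigMap that such a lookup resolves against might be sketched as follows (the ConfigMap name, label, and key name are illustrative assumptions; only testsecret-tls comes from the earlier example):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-tls-config
  labels:
    ingress-policy: frontend        # matched by configMapSelector
data:
  tlsSecretName: testsecret-tls     # key named by configMapSecretKey; value is the Secret name
```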

How to Setup Elastic Load Balancing (ELB) on Amazon Web Services (AWS) with Nutanix

This tutorial explains how to set up a Classic Load Balancer on AWS with Nutanix. Check AWS for guidance on setting up an Application Load Balancer or Network Load Balancer.

ELB on AWS performs routine health checks on registered EC2 instances and automatically distributes requests sent to the DNS name of your load balancer across the registered, healthy EC2 instances.

To enable ELB through Nutanix: Log in to the AWS Management Console. In the Compute section, click EC2.


In the left navigation pane, click Load Balancers.


Select the required load balancer.


On the Description tab, in the Cross-Zone Load Balancing section, click Edit.


Click Enable load balancing, and then click Save in the Configure Cross-Zone Load Balancing window.


For information on creating a High Availability cluster, click here.

For information on installation and setup of a load balancer, click here.

Persistent Volume Claims

A PersistentVolumeClaim (PVC) is a request for storage for a pod. PVCs are used to create Persistent Volumes (PVs) for your pods. PVCs can request a specific size and access modes for storage.

Tip: When creating a StatefulSet, you can use VolumeClaimTemplates instead of PVCs. This allows you to scale your StatefulSet.
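A PVC requesting a size and access mode can be sketched as follows (the claim name and storage size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce        # volume can be mounted read-write by a single node
  resources:
    requests:
      storage: 5Gi       # requested size
```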


ConfigMaps

ConfigMaps enable you to decouple configuration from your container image, ensuring that your containerized application is portable.

ConfigMaps can be made available to your pod as:

  1. Environment variables
  2. Volumes

A ConfigMap can be shared across multiple pod templates, further simplifying your application configuration.
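As a sketch, a ConfigMap consumed as an environment variable might look like this (the ConfigMap name, key, and pod details are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    env:
    # value is injected from the ConfigMap when the pod starts
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: LOG_LEVEL
```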


Secrets

Secrets can be created to store sensitive information such as passwords, certificates, etc. Putting sensitive information in Secrets is relatively secure compared to including it in your container image, and provides flexibility in how the information is accessed.

Secrets can be made available to your pod as:

  1. Environment variables
  2. Volumes

A Secret can be shared across multiple pod templates, further simplifying your application configuration.
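For example, a Secret consumed as an environment variable could be sketched as follows (the Secret name, key, and pod details are illustrative placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: cGFzc3dvcmQ=      # values are base64-encoded
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    env:
    # decoded value is injected when the pod starts
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```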

Network Policies

If your cluster supports network policy, you can configure them as part of your application. A network policy specifies how groups of pods are allowed to communicate with each other and other network endpoints.

Network Policies use labels to select pods and define rules which specify what traffic is allowed to the selected pods. By default, all pods in a Namespace can communicate with each other.
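A network policy allowing only one group of pods to reach another might be sketched as follows (the labels, policy name, and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
spec:
  podSelector:
    matchLabels:
      app: db             # the policy applies to pods with this label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web        # only pods labeled app=web may connect
    ports:
    - protocol: TCP
      port: 5432
```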

See Network Policies for more information.

Custom and Other Resources

Nirmata offers native support for the most commonly used Kubernetes workload concepts. However, a major benefit of Kubernetes is its extensibility. To enable use of any other Kubernetes resource, including Custom Resource Definitions (CRDs), Nirmata can import and manage any Kubernetes resource in YAML or JSON format.

Kubernetes References

For additional details on defining applications with Kubernetes concepts, you can reference the Kubernetes documentation using the links below: