Using Kubectl Expose | Tutorial and Best Practices

July 3, 2022

Kubernetes comes with many tools to help you, and kubectl expose is one of them. In this post, you’ll learn how kubectl expose is used and the best practices for using it.

Ujjwal Sharma
Software Engineer

Each pod inside a running Kubernetes cluster is assigned an IP address that’s used as the basis for communication. However, pods are ephemeral: the IP changes whenever a pod is recreated. Moreover, a workload can run multiple pod replicas, each with a different dynamic IP.

This means that if you want to communicate with a given resource, a more robust mechanism than raw IP addresses is required. This issue is addressed by Kubernetes Services. A Kubernetes Service groups a set of pods under a single, stable IP address and DNS name, and selects the pods it routes to using a label selector.
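
As a rough sketch of that linkage, the following hypothetical Service manifest (the names here are placeholders) groups every pod labeled app: my-app under one stable address:


apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical service name
spec:
  selector:
    app: my-app             # pods carrying this label are routed to by the service
  ports:
    - port: 80              # port the service exposes
      targetPort: 8080      # port the selected pods listen on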

kubectl expose helps you to expose your resources by creating a service, and will create different services to serve different use cases. For example, if you’re debugging your application from an external environment by sending mock HTTP requests to your pods, this is made possible by exposing the pods and interacting with them using a NodePort service.

Similarly, pods can be exposed internally to other pods in a cluster using a ClusterIP service. This might be required for accessing an internal engine that processes your data and returns results that you don’t want exposed externally. Another important use case is when you’re deploying your application to a cloud platform and you have a large number of pods. Exposing the whole deployment using the LoadBalancer service can help to distribute the incoming traffic across the available and healthy pods in the deployment.

In this article, you’ll learn how to expose your pods with the kubectl expose command in different practical scenarios, and will be provided with code samples and configurations that you can run on your local machine using minikube and kubectl. You’ll also learn about best practices to follow when using the kubectl expose command.

What is kubectl Expose?

kubectl is a command-line tool that lets an end user interact with a Kubernetes cluster by way of the Kubernetes API. Every action in Kubernetes is carried out by the control plane, and the instructions for those actions arrive through the Kubernetes API server. The API server validates an incoming request, transforms the data, and passes it to the control plane to act on. The control plane maintains the DNS and networking information of Kubernetes resources and interacts with them using IP addresses. A Service lets you group a set of pods under a single, stable IP address.

When the kubectl expose command is run, it takes the name of the resource, the port, the target port to expose, the protocol to expose over, and the type of the service. It sends this data in a REST request to the Kubernetes API server, which then instructs the control plane to create the Service and assign an IP to it.

The Kubernetes resources that can be exposed using a service are pods, deployments, replica sets, and replication controllers. kubectl expose uses label selectors to route traffic for the exposed resource to a given port.
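
If you want to see the Service manifest kubectl expose would generate without creating anything, kubectl’s standard client-side dry run works here too (shown against a hypothetical deployment named my-deployment):


console@bash:~$ kubectl expose deployment my-deployment --port=80 --target-port=8080 --dry-run=client -o yaml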

Use Cases for kubectl Expose

When deploying data-intensive applications, it’s common practice to use internal containers that do concurrent processing, so that no single pod consumes all of the CPU and memory available to it. Distributing the load this way might require you to interact with internal pods, for which you’ll need an internal IP address. This is the ideal use case for ClusterIP. The internal pods are deployed and exposed, and the resulting IP is used by the main pods to interact with them. Thanks to kubectl expose, you only need to provide the IP of the service, not of the pod. If an internal pod becomes unhealthy and inactive, a new pod with a new IP will be created, but because of the selector, it will be mapped to the same service.
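
For instance, assuming a hypothetical internal service named internal-engine in the default namespace, a main pod would call it through the cluster DNS name rather than any pod IP:


[ root@main-pod:/ ]$ curl http://internal-engine.default.svc.cluster.local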

The pods run on nodes. In a network with multiple nodes, each node has a unique, stable IP address and, like any physical machine, you can interact with that IP and a port. This is the purpose of the NodePort service, which exposes your resources externally on a port of every node, allocated by default from the 30000-32767 range. The application’s port needs to be mapped to the service, and kubectl expose lets you configure this: for example, if you have a container serving on port 8000, you pass --target-port=8000 so the service forwards traffic to that container port.
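
As a sketch, exposing a hypothetical deployment named web whose containers listen on port 8000 would look like the following; Kubernetes assigns the node port itself from the default range:


console@bash:~$ kubectl expose deployment web --type=NodePort --port=80 --target-port=8000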

In an application with a high number of active users, the traffic must be distributed among the available pods. This is handled by the LoadBalancer service. Without load balancing, a single pod’s finite resources can be overwhelmed, and the pod can go down.

Using kubectl Expose | Example

To demonstrate the kubectl expose command, you’ll use minikube to spin up a local Kubernetes cluster, with kubectl configured to point to the default namespace:


console@bash:~$ minikube start


To follow along with the section below, you’ll need to have your kubectl command configured to point to the Kubernetes cluster and namespace of your choice.

Start by creating two deployments that need to interact with each other. For this example, you’ll look at a basic use case where the deployments must interact internally. Note that the curl command can be used to call a service IP from inside a container.

To create the internal deployment with two pods of the nginx:latest image, run the following command:


console@bash:~$ kubectl create deployment internal-deployment --image=nginx --replicas=2

This will be accessible only from inside the cluster, and you can see the running pods in internal-deployment using kubectl get.

To create an external deployment with three pods of the nginx:latest image, run the following command:


console@bash:~$ kubectl create deployment external-deployment --image=nginx --replicas=3

This deployment will be available from outside the cluster.

You can see the available set of pods using the kubectl get command:


console@bash:~$ kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
external-deployment-6c9f8d8dcc-47xgw   1/1     Running   0          16m
external-deployment-6c9f8d8dcc-8qkmm   1/1     Running   0          16m
external-deployment-6c9f8d8dcc-9k8wj   1/1     Running   0          16m
internal-deployment-666749b99-fz6lp    1/1     Running   0          16m
internal-deployment-666749b99-pltxl    1/1     Running   0          16m

You can see there are three pods with the prefix of “external-deployment” and two pods with the prefix of “internal-deployment”.

Expose the internal-deployment using ClusterIP:


console@bash:~$ kubectl expose deployment internal-deployment --port=80 --target-port=8008 --name=internal-service

This will create a service for internal-deployment. The service is named internal-service and targets port 8008 on the pods. (Note that nginx itself listens on port 80, so to actually serve traffic through this service you would use --target-port=80; port 8008 is fine for this demonstration because you’ll only verify DNS resolution.) As you can see in the output below, the service type is ClusterIP over the TCP protocol, which is the default for the kubectl expose command:


console@bash:~$ kubectl describe service internal-service

Name:              internal-service
Namespace:         default
Labels:            app=internal-deployment
Annotations:       <none>
Selector:          app=internal-deployment
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.103.67.104
IPs:               10.103.67.104
Port:              <unset>  80/TCP
TargetPort:        8008/TCP
Endpoints:         172.17.0.3:8008,172.17.0.4:8008
Session Affinity:  None
Events:            <none>

To access the pods with ClusterIP, you can run a BusyBox curl container, then run nslookup for the internal service. If the service is discoverable, this confirms that the pods are available from inside the cluster:


console@bash:~$ kubectl run curl --image=radial/busyboxplus:curl -i --tty

You can then run nslookup internal-service, and you’ll see the following output:


[ root@curl:/ ]$ nslookup internal-service
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      internal-service
Address 1: 10.103.67.104 internal-service.default.svc.cluster.local

You can exit the pod using Ctrl+D, or by running exit in the console.
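
As a side note, if the service’s target port matched the port nginx actually listens on (80), you could also fetch the page from the same BusyBox pod using the service’s DNS name:


[ root@curl:/ ]$ curl http://internal-service.default.svc.cluster.local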

To expose the external deployment as a NodePort service, you can specify the type as NodePort in kubectl expose:


console@bash:~$ kubectl expose deployment external-deployment --port=80 --target-port=8000 --name=external-service --type=NodePort

After this, you can see the details of external-service using the following command:


console@bash:~$ kubectl describe service external-service

Name:                     external-service
Namespace:                default
Labels:                   app=external-deployment
Annotations:              <none>
Selector:                 app=external-deployment
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.230.202
IPs:                      10.109.230.202
Port:                     <unset>  80/TCP
TargetPort:               8000/TCP
NodePort:                 <unset>  30193/TCP
Endpoints:                172.17.0.5:8000,172.17.0.6:8000,172.17.0.7:8000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

This is exposed as an external service: any node’s IP address, on node port 30193, now routes to the deployment’s pods.
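
With minikube, you can retrieve a reachable URL for this node port using the minikube service command. As with the ClusterIP example, an HTTP request will only get a response from nginx if the service’s target port matches the port the container actually listens on:


console@bash:~$ minikube service external-service --url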

Finally, you’ll create a LoadBalancer service that will help demonstrate how load balancing works. You can expose the pods from the external deployment using this load balancer. In a cloud environment, the load balancer is assigned the actual IP of the cloud service hosting it. As minikube doesn’t have a cloud provider, it instead lets you set up a load balancer tunnel using the following command:


console@bash:~$ minikube tunnel

This simulates a cloud provider’s load balancer. Now you can expose the deployment as a LoadBalancer service; note that the same set of resources can be exposed by multiple services:


console@bash:~$ kubectl expose deployment external-deployment --port=80 --target-port=8000 --name=lb-service --type=LoadBalancer

To see the details of the lb-service, run the following command:


console@bash:~$ kubectl describe service lb-service

Name:                     lb-service
Namespace:                default
Labels:                   app=external-deployment
Annotations:              <none>
Selector:                 app=external-deployment
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.196.199
IPs:                      10.111.196.199
LoadBalancer Ingress:     127.0.0.1
Port:                     <unset>  80/TCP
TargetPort:               8000/TCP
NodePort:                 <unset>  32561/TCP
Endpoints:                172.17.0.5:8000,172.17.0.6:8000,172.17.0.7:8000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

When requests are routed to the LoadBalancer ingress point, they are distributed across the available, healthy endpoints. In multiple-node clusters, traffic is spread at the node level first and then across the pods on each node. Note that kube-proxy spreads connections roughly evenly across endpoints rather than by measured load; the effect is to absorb excess traffic while maintaining pod health and balanced resource utilization.
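
With the tunnel from minikube tunnel still running, you can read the load balancer’s ingress IP from kubectl get service and send requests to it (here 127.0.0.1, per the describe output above; as before, nginx will only respond if the target port matches its listening port):


console@bash:~$ kubectl get service lb-service
console@bash:~$ curl http://127.0.0.1:80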

To recap the examples in this section: internal-deployment sits behind internal-service (a ClusterIP service), while external-deployment is reachable through both external-service (a NodePort service) and lb-service (a LoadBalancer service).

Best Practices for kubectl Expose

It’s essential to understand networking concepts when dealing with Kubernetes services. If you have a set of IP addresses available, you can expose resources on them by passing an address to the --external-ip option. Load balancers in production are given an IP, which can be provided using the --load-balancer-ip option; this becomes the ingress point, and needs to be specified when running in a production environment.
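
As a sketch, with a hypothetical deployment named my-deployment, these options look like the following; the 192.0.2.x addresses are placeholders for IPs you actually control:


console@bash:~$ kubectl expose deployment my-deployment --port=80 --external-ip=192.0.2.10
console@bash:~$ kubectl expose deployment my-deployment --port=80 --type=LoadBalancer --load-balancer-ip=192.0.2.20 --name=my-lb-service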

In Kubernetes, it’s generally preferred to work with resource definition files. In the examples above, the kubectl create command was used with different parameters, but keeping the deployment details in files in a directory is a widely followed best practice, and for good reason.

Maintaining the files is easier than deploying with lengthy CLI commands or scripts. kubectl expose accepts the -f flag to identify the resource to expose from its file, and the -R flag to process a directory of resource files recursively.
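
For example, if the internal deployment from earlier were defined in a hypothetical manifest file named internal-deployment.yaml, you could expose it directly from that file:


console@bash:~$ kubectl expose -f internal-deployment.yaml --port=80 --target-port=8008 --name=internal-service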

Final Thoughts

In this article, you learned how to create ClusterIP, NodePort, and LoadBalancer services using kubectl expose, and explored the details of the kubectl expose command in practical scenarios. These use cases resemble real-world scenarios and show how services and workloads interact with each other. Finally, you looked at some of the best practices to follow when using kubectl expose.

Kubernetes can be a black box to the external world, but developers and operators need to know what’s going on inside of it. This is where ContainIQ steps in, enabling you to monitor Kubernetes applications in real time, with analysis of logs, events, and traces, installed using a simple Helm chart or YAML file.


Ujjwal is a software engineer with a unique perspective, having built observability for distributed systems at Intuit. He was previously an open-source contributor to Mattermost. Ujjwal has a Bachelor of Technology in Information Technology from ABV-Indian Institute of Technology and Management.
