
Using Kubectl Port-Forward to Access Kubernetes Applications

October 15, 2021

Accessing an application running in a Kubernetes cluster isn’t always simple. Readers will learn how to use the kubectl port-forward functionality to access Kubernetes applications.

Lukonde Mwila
DevOps Engineer

When it comes to exposing your Kubernetes workload to external traffic, creating Ingresses or Services such as NodePort and LoadBalancer is the standard practice. Each of these options differs in how it allows Pods to be accessed.

However, they don’t offer a secure and optimal model for debugging applications that you don’t want exposed to the outside world. Port forwarding, on the other hand, offers you the opportunity to investigate issues and adjust your applications locally without the need to expose them beforehand.

In this article, you will learn the fundamentals of port forwarding in the context of Network Address Translation and how this networking concept can be put into practice with Pods on your Kubernetes cluster. Lastly, you will provision an EKS cluster and deploy a basic application that will be exposed exclusively for your local access through port forwarding.

What Is Port Forwarding?

Port forwarding diagram

Before getting into the details of port forwarding, it’s important to understand Network Address Translation (NAT) and the basics of how it works. NAT is the process of modifying IP addresses that pass through a router. This is a built-in functionality that conceals an entire IP address space. A computer or laptop that wants to communicate with servers on the internet will make a client request to a specific public-facing IP address.

That public IP address will then be converted or translated to a private IP address. In addition, every time you attempt to establish a connection to a server on the internet, you have to do so via a specific port. This is where port forwarding comes in.

Port forwarding is a part of NAT that redirects a single system’s IP address and port number to another system. It deals with a single IP address and port and is often used between hosts on the Internet and an individual host on a Local Area Network (LAN) or demilitarized zone (DMZ).

As the diagram above depicts, a client request made from a laptop to a web server on the internet will be sent to a public-facing address (207.172.15.60) on a specific port (443). The router will then redirect this request to a destination server (192.168.1.5) and the relevant port (8080).
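The redirection described above can be sketched with a small, illustrative TCP forwarder. This is not how a router implements NAT internally (routers rewrite packets with NAT rules rather than relaying in userspace), but the traffic flow is the same: a connection to one address and port is transparently handed to another. The addresses and ports here are placeholders.

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until the connection closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def start_port_forward(target_host, target_port, listen_port=0):
    """Listen on a local port and relay every connection to the target,
    much like a router's port-forwarding rule. Returns the local port
    actually bound (useful when listen_port is 0, i.e. ephemeral)."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", listen_port))
    listener.listen(5)

    def serve():
        while True:
            client, _ = listener.accept()
            upstream = socket.create_connection((target_host, target_port))
            # Relay traffic in both directions concurrently.
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=serve, daemon=True).start()
    return listener.getsockname()[1]
```

Any client that connects to the forwarder’s listening port ends up talking to the target server, without needing to know the target’s real address.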

Port Forwarding in Kubernetes

At this point, you may be wondering how port forwarding works in the context of Kubernetes. You can use kubectl to set up a proxy that will forward all traffic from a local port that you specify to a port associated with the Pod that you determine.

This is especially useful when you want to communicate directly from your local machine with a given port on a Pod. Also, this is accomplished without you having to manually expose Services. Instead, you would simply use the kubectl port-forward command as demonstrated later on in this post.

Kubernetes may be a highly automated orchestration platform, but the port forwarding process requires direct and recurrent user input. If a Pod instance fails, the connection is terminated, and you will need to establish a new port-forward to the desired Pod on your cluster by entering the same command again.
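Since the tunnel does not survive a Pod restart, one common workaround is a small supervisor loop that re-runs the command whenever it exits. The helper below is a hypothetical sketch, not part of kubectl; the Pod name and ports in the example command are placeholders.

```python
import subprocess
import time

def keep_forwarding(cmd, max_restarts=5, backoff=1.0):
    """Run cmd and re-run it each time it exits, e.g.
    cmd = ["kubectl", "port-forward", "my-pod", "8080:8080"].
    Returns the number of times the command was run."""
    runs = 0
    while runs <= max_restarts:
        subprocess.run(cmd)  # blocks until the tunnel drops
        runs += 1
        time.sleep(backoff)  # brief pause before re-establishing
    return runs
```

In practice you would run this in a terminal and stop it with Ctrl + C; the restart budget (max_restarts) keeps it from spinning forever if the Pod is gone for good.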

It’s important to note that port forwarding connects you to a single Pod at a time. Although kubectl port-forward also accepts Deployments and Services as targets (for example, kubectl port-forward svc/my-service 8080:80), it still forwards to one Pod selected from that resource, so it does not load-balance across replicas. The entire process of port forwarding in K8s is simplified by the fact that the kubectl CLI tool already has built-in port forwarding functionality.

To carry this out, you have to have kubectl installed on your workstation. You will then interact with the relevant Kubernetes cluster using the kubectl command-line from your local machine.
When executing this, the port-forward command has to include the cluster resource name as well as the port number to port-forward to.

The Kubernetes API server will then establish a single connection between your localhost (on your machine) and the resource running on your K8s cluster. Once this is successfully set up, you will be able to engage with that specific Pod directly, either to investigate an issue or for debugging purposes.

In a nutshell, port forwarding in Kubernetes can be accomplished by using kubectl to create a tunnel from your localhost to the target Pod.

When running applications like databases in a K8s cluster, you will make use of the ClusterIP Service to expose the application exclusively to internal traffic within the cluster. Database applications are a fitting use case when it comes to exposing Pods via port forwarding. Only authenticated users with the relevant permissions defined using RBAC should be able to interact with the API server using kubectl. This provides an added layer of security, allowing you to determine who is allowed to access certain Pods directly from their local machine.
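As a sketch of such a permission, the RBAC rule that governs kubectl port-forward is the create verb on the pods/portforward subresource. The Role below is illustrative; the name and namespace are placeholders, and you would bind it to a user or group with a RoleBinding.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-port-forwarder   # illustrative name
  namespace: default
rules:
  # Allow reading Pods so the target can be found.
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  # Allow opening port-forward tunnels to Pods in this namespace.
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
```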

The kubectl command to establish port forwarding is as follows:


kubectl port-forward <pod-name> 8080:8080

You should see the following response or output to the above command:


Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

To cancel or quit the kubectl command, you can simply press Ctrl + C and the port forwarding will end immediately.

In addition to this, you can use the kubectl proxy command to establish a direct connection from your local machine to your cluster’s API server. To expose the K8s API on a specific port, run the following command:


kubectl proxy --port=6443

The above command will give you the following response:


Starting to serve on 127.0.0.1:6443

You can then head over to your browser and navigate to http://localhost:6443/api/v1/namespaces/default/pods to view a list of all running Pods in the default namespace of your Kubernetes cluster.

Testing On A Live Kubernetes Cluster

In this section, you will provision an Amazon EKS cluster in AWS, before deploying a single replica of a basic Node.js application. Once the Pod is running, you will establish a direct connection to it from your local machine using the kubectl port-forward command. After that, you will run the kubectl proxy command to expose the K8s API server on your machine to list all of the Pods in the default namespace of your EKS cluster.

The EKS cluster will be provisioned using Terraform, an infrastructure as code (IaC) tool that will be used to create the infrastructure in the AWS environment. All the source code for the following demonstration is available in this public repository.

To carry out the next steps, you will need to make sure you fulfill all of the following prerequisites:

  • An AWS account (create one if you don’t already have it)
  • Your AWS profile configured with the AWS CLI on your local machine
  • Terraform installed
  • kubectl installed

Provision EKS Cluster

Once you have cloned the repository and met the necessary requirements outlined above, you can proceed to run the following command from the root level of the cloned repository:

terraform apply

You will then be prompted to give a name to your cluster and to type out the AWS profile that should be used to create it. After giving your responses, an execution plan with the details of all the infrastructure to be provisioned will be displayed. You can then type ‘yes’ for the infrastructure to be provisioned. This may take a few minutes.

Once the cluster has been created, you can update your kube config file with the newly provisioned cluster’s details with this command:


aws eks --region eu-west-1 update-kubeconfig --name <your-cluster-name>

Finally, you can ensure that your kube config has been updated with the correct context and that your cluster is running as expected.


kubectl config current-context
kubectl get pods --all-namespaces

Port Forwarding on EKS Cluster

The next step is to deploy a Node.js application to your cluster, and then expose a Pod for local accessibility. This approach will allow you to access and interact with internal Kubernetes cluster processes from your localhost.

Deploy the Node.js Application

The command below will be used to create a Pod in the EKS cluster that just got provisioned.

kubectl apply -f https://raw.githubusercontent.com/LukeMwila/setting-up-eks-cluster-dojo/master/manifests/pod.yaml

Set Up Port Forwarding with kubectl

Once the Pod has been created and is running as expected, you can then proceed to set up port forwarding with the kubectl port-forward command, specifying the local port that you want to expose for traffic to the application.

kubectl port-forward <pod-name> 8080:8080

After getting the expected output, you can open your browser and navigate to http://localhost:8080/test. You should then see the following response.

Simple Node App
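If you prefer to verify the forwarded port from a script instead of a browser, a short polling helper works well, since the tunnel can take a moment to come up after you run the command. This is an illustrative sketch: it assumes the port-forward from the previous step is running, and the /test path comes from the sample app.

```python
import time
import urllib.error
import urllib.request

def wait_for_endpoint(url, timeout=30):
    """Poll url until it responds, e.g. after starting kubectl port-forward.
    Returns the HTTP status and response body, or raises TimeoutError."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.status, resp.read()
        except (urllib.error.URLError, ConnectionError):
            time.sleep(0.5)  # tunnel not up yet; retry shortly
    raise TimeoutError(f"{url} did not respond within {timeout}s")
```

With the port-forward active, wait_for_endpoint("http://localhost:8080/test") should return the same response you saw in the browser.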

Conclusion

In this post, you learned what port-forwarding is and how it fits into NAT. More importantly, this post covered how port forwarding can be used in the context of Kubernetes to achieve secure local exposure of Pods running on your cluster. Port forwarding is simply one way to access your Pods.

As mentioned above, port forwarding is a useful method for testing accessibility, debugging, and other investigative tasks on your Pods in Kubernetes. It is not meant to be used for exposing applications to external traffic from end users. Services and ingresses are designed to specifically fulfill the purpose of Pod accessibility for external users in a more advanced way.

Article by

Lukonde Mwila

DevOps Engineer

Lukonde Mwila specializes in cloud and DevOps engineering, cloud architecture designs, and cloud security at an enterprise level in the AWS landscape. He currently consults in the financial services sector. He has a passion for sharing knowledge through speaking engagements such as meetups and tech conferences, as well as writing technical articles. His talk at DockerCon 2020 on deploying multi-container applications to AWS was one of the top-rated and most viewed sessions of the event. He is 5x AWS certified and is an advocate for containerization and serverless technologies.
