Expose the App using Ingress
This guide provides a comprehensive overview of setting up Ingress in Kubernetes environments. While various Ingress options are available, this guide focuses on implementing Ingress with NGINX. You can choose another Ingress controller to suit your requirements, but this tutorial concentrates on NGINX Ingress.
Why NGINX Ingress? #
NGINX is a popular choice due to its performance, reliability, and flexibility. However, depending on your specific use case and preferences, you may opt for other Ingress controllers. This guide is tailored for those who are specifically interested in setting up NGINX as their Ingress controller.
After completing the Kubernetes Setup, the next step is to expose your application. Rather than exposing each service through its own NodePort, this guide recommends using Ingress. Ingress provides a single external entry point that can be directly associated with your domain, simplifying access.
In scenarios where your cloud provider does not offer a Load Balancer, this guide includes steps to manually set up an alternative using a new instance. This ensures continued accessibility and efficiency of your applications.
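To illustrate what this looks like in practice, here is a minimal sketch of an Ingress resource for host-based routing. The service name `my-app-service`, the domain `app.example.com`, and the Ingress name are hypothetical placeholders; substitute your own:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress        # hypothetical name
spec:
  ingressClassName: nginx     # must match the installed IngressClass
  rules:
  - host: app.example.com     # hypothetical domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service   # hypothetical backend service
            port:
              number: 80
```

With this in place, all traffic for the domain flows through the single Ingress controller endpoint rather than per-service NodePorts.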
Prerequisites #
- Compatible operating system (e.g., Amazon Linux 2, Ubuntu)
- Administrative system access
Step 1: Install NGINX Ingress Controller #
For detailed instructions, see the NGINX Ingress Controller documentation. This guide uses the manifest method for installation. Alternatively, you can install NGINX Ingress using Helm with the following command:
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace \
--set controller.service.type=NodePort
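If you go the Helm route, you can check which NodePort was assigned to the controller service (the namespace and release name below match the Helm command above):

```shell
kubectl get svc -n ingress-nginx ingress-nginx-controller
```

The `PORT(S)` column shows the host-level node ports mapped to ports 80 and 443; you will need these values when configuring an external load balancer.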
Clone the NGINX Repository #
git clone https://github.com/nginxinc/kubernetes-ingress.git --branch <version_number>
cd kubernetes-ingress/deployments
Setting Up Role-Based Access Control (RBAC) #
- Create a namespace and service account:
kubectl apply -f common/ns-and-sa.yaml
- Establish a cluster role and binding:
kubectl apply -f rbac/rbac.yaml
Additional steps are required for NGINX App Protect or NGINX App Protect DoS.
Creating Common Resources #
- (Optional) Set up a default server TLS secret, if using that option:
kubectl apply -f ../examples/shared-examples/default-server-secret/default-server-secret.yaml
- Customize NGINX settings with a ConfigMap:
kubectl apply -f common/nginx-config.yaml
- An IngressClass resource is essential for the controller’s operation:
kubectl apply -f common/ingress-class.yaml
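For reference, the IngressClass manifest applied above looks roughly like the following; the exact contents are defined in the upstream repository and may differ between releases:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: nginx.org/ingress-controller
```

The `metadata.name` here is what Ingress resources reference via `ingressClassName`, so the controller knows which Ingress objects it is responsible for.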
Deploying the NGINX Ingress Controller #
- Deployment Method:
kubectl apply -f deployment/nginx-ingress.yaml
- DaemonSet Method:
kubectl apply -f daemon-set/nginx-ingress.yaml
Verifying NGINX Ingress Controller #
Confirm operational status:
kubectl get pods --namespace=nginx-ingress
Step 2: Accessing the NGINX Ingress Controller #
Create a NodePort service to expose the controller outside the cluster:
kubectl create -f service/nodeport.yaml
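The applied manifest creates a Service of type NodePort in front of the controller pods. A minimal sketch of such a service is shown below; the names, labels, and ports are illustrative, not the exact upstream file:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  type: NodePort
  selector:
    app: nginx-ingress      # illustrative label; must match the controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
```

Note the node ports Kubernetes assigns (visible via `kubectl get svc -n nginx-ingress`); these are the ports your external load balancer will forward traffic to.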
Step 3: Manual Load Balancer Setup #
Using NGINX as a Load Balancer:
On Amazon Linux 2 #
sudo yum update -y
sudo amazon-linux-extras install nginx1 -y
sudo systemctl start nginx
On Ubuntu #
sudo apt update
sudo apt install nginx -y
sudo systemctl start nginx
Step 4: Configuring the Load Balancer #
- Open and edit the NGINX configuration:
sudo nano /etc/nginx/nginx.conf
- Configure your upstream server and restart NGINX. Here’s a basic example:
http {
    upstream backend {
        server <Worker_Node_IP>:<NodePort>; # Add additional worker nodes if any
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
Replace placeholders with your specific node and port details.
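Before restarting NGINX, it is a good habit to validate the edited configuration and then apply it without dropping connections:

```shell
sudo nginx -t                  # check configuration syntax
sudo systemctl reload nginx    # apply the new configuration
```

If `nginx -t` reports an error, fix the configuration before reloading; a reload with a broken config will fail and leave the old configuration active.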
Additional Considerations #
- SSL/TLS Configuration: For SSL/TLS termination, configure NGINX with the necessary certificates.
- Firewall Rules: Adjust firewall rules on Lightsail to allow required traffic.
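The SSL/TLS point above can be sketched as an additional `server` block in the load balancer's NGINX configuration. The domain and certificate paths below are hypothetical placeholders; use the locations of your own certificate and key:

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;                       # hypothetical domain

    ssl_certificate     /etc/nginx/ssl/fullchain.pem;  # hypothetical cert path
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;    # hypothetical key path

    location / {
        proxy_pass http://backend;                     # upstream from Step 4
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

This terminates TLS at the load balancer and forwards plain HTTP to the cluster's NodePorts; if you need end-to-end encryption instead, terminate TLS at the Ingress controller.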
This manual load balancing approach suits a variety of workloads, providing a basic yet effective solution without relying on cloud-specific services. It is well suited to small and medium applications, but may require additional configuration for scaling and high availability.