Managing Kubernetes Traffic with F5 NGINX
A Practical Guide
By Amir Rawdat
Technical Marketing Manager, NGINX
Table of Contents
Foreword
1. Installing and Deploying F5 NGINX Ingress Controller and F5 NGINX Service Mesh
   What Is an Ingress Controller and Why Is It Important?
   What’s Special About NGINX Ingress Resources?
   Prerequisites
   Installation and Deployment Instructions for NGINX Ingress Controller
   What Is a Service Mesh and Do I Need One?
   Why Should I Try NGINX Service Mesh?
   Why Integrate NGINX Ingress Controller with NGINX Service Mesh?
   NGINX Service Mesh Architecture
   Installation and Deployment Instructions for NGINX Service Mesh
      Installation with the NGINX Service Mesh CLI
         Install the NGINX Service Mesh CLI
         Install NGINX Service Mesh
      Installation with Helm
         Prerequisites
         Installing with Helm Repository
         Installing with Chart Sources
   Migrating from the Community Ingress Controller to F5 NGINX Ingress Controller
      Option 1: Migrate Using NGINX Ingress Resources
         Set Up SSL Termination and HTTP Path-Based Routing
         Set Up TCP/UDP Load Balancing and TLS Passthrough
         Convert Community Ingress Controller Annotations to NGINX Ingress Resources
            Canary Deployments
            Traffic Control
            Header Manipulation
            Other Proxying and Load Balancing Annotations
            mTLS Authentication
            Session Persistence (Exclusive to NGINX Plus)
      Option 2: Migrate Using the Kubernetes Ingress Resource
         Advanced Configuration with Annotations
         Global Configuration with ConfigMaps
   Chapter Summary
3. Monitoring and Visibility Use Cases
   Monitoring with the NGINX Plus Live Activity Monitoring Dashboard
   Distributed Tracing, Monitoring, and Visualization with Jaeger, Prometheus, and Grafana
      Enabling Distributed Tracing, Monitoring, and Visualization for NGINX Service Mesh
      Enabling Distributed Tracing for NGINX Ingress Controller
      Enabling Monitoring and Visualization for NGINX Ingress Controller
      Visualizing Distributed Tracing and Monitoring Data
   Logging and Monitoring with the Elastic Stack
      Configuring the NGINX Ingress Controller Access and Error Logs
      Enabling Filebeat
      Displaying NGINX Ingress Controller Log Data with Filebeat
      Enabling Metricbeat and Displaying NGINX Ingress Controller and NGINX Service Mesh Metrics
   Displaying Logs and Metrics with Amazon CloudWatch
      Configuring CloudWatch
      Creating Graphs in CloudWatch
      Capturing Logs in CloudWatch with Fluent Bit
   Chapter Summary
Appendix
   Document Revision History
Foreword

Microservices architectures introduce several benefits to the application development and delivery process.
Microservices-based apps are easier to build, test, maintain, and scale. They also reduce downtime through
better fault isolation.
While container-based microservices apps have profoundly changed the way DevOps teams deploy applications, they have also
introduced challenges. Kubernetes – the de facto container orchestration platform – is designed to simplify management of
containerized apps, but it has its own complexities and a steep learning curve. This is because responsibility for many functions that
traditionally run inside an app (security, logging, scaling, and so on) is shifted to the Kubernetes networking fabric.
To manage this complexity, DevOps teams need a data plane that gives them control of Kubernetes networking. The data plane
is the key component that connects microservices to end users and each other, and managing it effectively is critical to achieving
stability and predictability in an environment where modern apps are evolving constantly.
Ingress controller and service mesh are the two Kubernetes-native technologies that provide the control you need over the
data plane. This hands-on guide to F5 NGINX Ingress Controller and F5 NGINX Service Mesh includes thorough explanations,
diagrams, and code samples to prepare you to deploy and manage production-grade Kubernetes environments.
Chapter 1 introduces NGINX Ingress Controller and NGINX Service Mesh and walks you through installation and deployment,
including an integrated solution for managing both north-south and east-west traffic.
Chapter 2 explores traffic-management use cases, including:
• Multi-tenancy and delegation – For safe and effective sharing of resources in a cluster
• Traffic splitting – Blue-green and canary deployments, A/B testing, and debug routing
Chapter 3 covers monitoring, logging, and tracing, which are essential for visibility and insight into your distributed applications.
You’ll learn how to export NGINX metrics to third-party tools including AWS, Elastic Stack, and Prometheus.
And of course, we can’t forget about security. Chapter 4 addresses several mechanisms for protecting your apps, including
centralized authentication on the Ingress controller, integration with third-party SSO solutions, and F5 NGINX App Protect WAF
policies for preventing advanced attacks and data exfiltration methods.
I’d like to thank my collaborators on this eBook: Jenn Gile for project conception and management, Sandra Kennedy for the cover
design, Tony Mauro for editing, and Michael Weil for the layout and diagrams.
This is our first edition of this eBook and we welcome your input on important scenarios to include in future editions.
Amir Rawdat
Technical Marketing Engineer, F5 NGINX
1. Installing and Deploying F5 NGINX Ingress
Controller and F5 NGINX Service Mesh
In this chapter we explain how to install and deploy NGINX Ingress Controller and
NGINX Service Mesh. We also detail how to migrate from the NGINX Ingress Controller
maintained by the Kubernetes community (kubernetes/ingress-nginx) to our version
(nginxinc/kubernetes-ingress).
INSTALLING AND DEPLOYING NGINX INGRESS CONTROLLER
As you start off using Kubernetes, your cluster typically has just a few simple applications
that serve requests from external clients and don’t exchange much data with other services
in the cluster. For this use case, NGINX Ingress Controller is usually sufficient on its own,
and we begin with instructions for a stand-alone NGINX Ingress Controller deployment.
As your cluster topology becomes more complicated, adding a service mesh often becomes
necessary. We cover installation and deployment of NGINX Service Mesh in the next section.
What Is an Ingress Controller and Why Is It Important?
In Kubernetes, the Ingress controller is a specialized load balancer that bridges between
the internal network, which connects the containerized apps running within the Kubernetes
cluster, and the external network outside the Kubernetes cluster. Ingress controllers are
used to configure and manage external interactions with Kubernetes pods that are labeled
to a specific service. Ingress controllers have many features of traditional external load
balancers, like TLS termination, handling multiple domains and namespaces, and of course,
load balancing traffic.
You configure the Ingress controller with the Kubernetes API. The Ingress controller integrates
with Kubernetes components so that it can automatically reconfigure itself appropriately
when service endpoints scale up and down. And there’s another bonus! Ingress controllers
can also enforce egress rules which permit outgoing traffic from certain pods only to specific
external services, or ensure that traffic is secured using mTLS.
What’s Special About NGINX Ingress Resources?
NGINX Ingress Controller supports the standard Kubernetes Ingress resource, but also supports
NGINX Ingress resources, which provide enterprise-grade features such as more flexible
load-balancing options, circuit breaking, routing, header manipulation, mutual TLS (mTLS)
authentication, and web application firewall (WAF). In contrast, the native Kubernetes
Ingress resource facilitates configuration of load balancing in Kubernetes but does not provide
those enterprise-grade features nor other customizations.
Prerequisites
• A working Kubernetes environment where you have administrative privilege. See the
Kubernetes documentation to get started.
• A subscription to the NGINX Ingress Controller based on NGINX Plus, if you also want
to deploy NGINX Service Mesh and NGINX App Protect. To explore all use cases in
later chapters of this eBook, you must have NGINX Service Mesh. If you don’t already
have a paid subscription, start a 30-day free trial before continuing.
1. Complete the indicated steps in these sections of the NGINX Ingress Controller
documentation:
• Prerequisites: Step 2
• 1. Configure RBAC: Steps 1–3
• 2. Create Common Resources: Steps 1–3, plus the two steps in the Create
Custom Resources subsection and the one step in the Resources for
NGINX App Protect subsection
2. Clone the GitHub repo for this eBook, which includes configuration files for the
NGINX Ingress Controller based on NGINX Plus and the sample bookinfo application
used in later chapters:
3. Deploy NGINX Ingress Controller.
Note (again!): You must use the NGINX Ingress Controller based on NGINX Plus if you
later want to deploy NGINX Service Mesh as well as explore all use cases in this guide.
• If deploying the NGINX Open Source-based NGINX Ingress Controller, apply the
nginx-ingress.yaml file provided in the nginxinc/kubernetes-ingress repo on GitHub:
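A minimal sketch of the command, assuming you run it from the root of a clone of the nginxinc/kubernetes-ingress repo (the exact path can differ between releases):

$ kubectl apply -f deployments/deployment/nginx-ingress.yaml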
• If deploying the NGINX Ingress Controller based on NGINX Plus, along with
NGINX App Protect:
a) Download the JSON Web Token (JWT) provided with your NGINX Ingress Controller
subscription from MyF5 (if your subscription covers multiple NGINX Ingress Controller
instances, there is a separate JWT for each instance).
b) Create a Kubernetes Secret, which is required for pulling images from the NGINX
private registry. Substitute the JWT obtained in the previous step for <your_JWT>:
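One way to create the Secret follows the pattern documented for the NGINX private registry; the Secret name regcred and the nginx-ingress namespace are assumptions, so adjust them to your deployment:

$ kubectl create secret docker-registry regcred \
    --docker-server=private-registry.nginx.com \
    --docker-username=<your_JWT> \
    --docker-password=none \
    -n nginx-ingress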
c) In nginx-plus-ingress.yaml (provided in the eBook repo), make sure the image field
references an NGINX Plus-based image with NGINX App Protect from the NGINX
private registry, as on line 29:

29   - image: private-registry.nginx.com/nginx-ic-nap/nginx-plus-ingress:2.x.y
View on GitHub
For a list of the available images, see NGINX Ingress Controller Technical
Specifications.
d) Apply nginx-plus-ingress.yaml to deploy NGINX Ingress Controller with
NGINX App Protect:
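For example, assuming the manifest sits in the Installation-Deployment directory of the eBook repo:

$ kubectl apply -f Installation-Deployment/nginx-plus-ingress.yaml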
4. Check that the NGINX Ingress Controller pod is up and running, as confirmed by the
value Running in the STATUS column:
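For example, assuming NGINX Ingress Controller runs in the nginx-ingress namespace:

$ kubectl get pods -n nginx-ingress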
The file includes the following keys, which enable the PROXY Protocol for proper
interaction with AWS Elastic Load Balancing (ELB).
6 data:
7 proxy-protocol: "True"
8 real-ip-header: "proxy_protocol"
9 set-real-ip-from: "0.0.0.0/0"
View on GitHub
b) Apply the LoadBalancer configuration defined in
Installation-Deployment/loadbalancer-aws-elb.yaml:
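A typical way to apply it:

$ kubectl apply -f Installation-Deployment/loadbalancer-aws-elb.yaml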
• For Azure or Google Cloud Platform, apply the LoadBalancer configuration defined
in Installation-Deployment/loadbalancer.yaml:
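Likewise, for example:

$ kubectl apply -f Installation-Deployment/loadbalancer.yaml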
• For on-premises deployments, you need to deploy your own network load balancer that
integrates with the cluster, because Kubernetes does not offer a native implementation
of a LoadBalancer service for this use case. Network operators typically deploy systems
like the following in front of an on-premises Kubernetes cluster:
• F5 BIG-IP or Citrix ADC hardware load balancer
• Software network load balancers like MetalLB
At this point NGINX Ingress Controller is deployed. You can either:
• Continue to the next section to add NGINX Service Mesh (required to explore all of the
traffic-management use cases in Chapter 2).
INSTALLING AND DEPLOYING NGINX SERVICE MESH
As previously mentioned, NGINX Ingress Controller on its own is typically sufficient for
Kubernetes clusters with simple applications that serve requests from external clients and
don’t exchange much data with other services in the cluster. But as your cluster topology
becomes more complicated, adding a service mesh often becomes helpful, if not required,
for proper operation. In this section we install and deploy NGINX Service Mesh and integrate
it with NGINX Ingress Controller.
Also as previously mentioned, you must use the NGINX Ingress Controller based on NGINX Plus
to integrate with NGINX Service Mesh, and some use cases in Chapter 2 are possible
only with the combination. (For ease of reading, the remainder of this section uses the term
NGINX Ingress Controller for the NGINX Plus-based model of the product.)
Let’s start with a look at the capabilities provided by NGINX Service Mesh and NGINX Ingress
Controller, when they are used, and how they can be used together.
Service meshes typically handle traffic management and security in a way that’s transparent
to the containerized applications. By offloading functions like SSL/TLS and load balancing,
service meshes free developers from having to implement security or service availability
separately in each application. An enterprise‑grade service mesh provides solutions for a
variety of “problems”:
• Orchestration via injection and sidecar management, and Kubernetes API integration
• Management of service traffic, including load balancing, traffic control (rate limiting and
circuit breaking), and traffic splitting (canary and blue-green deployments, A/B testing,
and debug routing)
• Improved monitoring and visibility of service-to-service traffic with popular tools like
Prometheus and Grafana
Service meshes range in focus from small and very focused (like NGINX Service Mesh) to
very large with a comprehensive set of network and cluster management tools (like Istio), and
everywhere in between. The larger and more complex the service mesh, the more helpful
an independent management plane can be.
At NGINX, we think it’s no longer a binary question of “Do I have to use a service mesh?”
but rather “When will I be ready for a service mesh?” We believe that anyone deploying
containers in production and using Kubernetes to orchestrate them has the potential to
reach the level of app and infrastructure maturity where a service mesh adds value. But as
with any technology, implementing a service mesh before you need one just adds risk and
expense that outweigh the possible benefits to your business. For our six-point readiness
checklist, read How to Choose a Service Mesh on our blog.
If you are ready for a service mesh, NGINX Service Mesh is a great option because it is
lightweight, turnkey, and developer-friendly. You don’t need a team of people to run it.
It leverages NGINX Plus as the sidecar to operate highly available and scalable containerized
environments, providing a level of enterprise traffic management, performance, and scalability
to the market that other sidecars don’t offer. NGINX Service Mesh provides the seamless and
transparent load balancing, reverse proxy, traffic routing, identity, and encryption features
needed for production-grade service mesh deployments.
If you run containers on Kubernetes in production, then you can use NGINX Service Mesh
to reliably deploy and orchestrate your services for many use cases and features such as
configuration of mTLS between app services. For especially large, distributed app topologies,
NGINX Service Mesh provides full visibility, monitoring, and security.
Not all Ingress controllers integrate with all service meshes, and when they do, it’s not
always pretty. NGINX Service Mesh was designed to tightly and perfectly integrate with
NGINX Ingress Controller, which provides benefits including:
• A unified data plane which can be managed in a single configuration, saving you time
and helping avoid errors resulting in improper traffic routing and security issues
Integrating NGINX Ingress Controller with NGINX Service Mesh yields a unified data plane
with production‑grade security, functionality, and scale.
If you are ready to integrate NGINX Ingress Controller with NGINX Service Mesh, where do
you start? Before we step through the details of the installation, it is important to understand
the architectural components of NGINX Service Mesh.

NGINX Service Mesh consists of two key components:
• Data plane – Handles the traffic between services in the Kubernetes cluster and performs
traffic-management functions that include load balancing, reverse proxy, traffic routing,
identity, and encryption. The data plane component is implemented with sidecars, which
are proxy instances responsible for intercepting and routing all traffic for their service
and executing traffic-management rules.
• Control plane – Configures and manages the data plane, distributing configuration and
policy updates to the sidecar proxies and keeping them protected and integrated with the
other mesh components.
The following diagram depicts the interaction of the control and data planes in
NGINX Service Mesh.
[Diagram: the NGINX Service Mesh control plane and data plane inside a Kubernetes cluster.
NGINX Ingress Controller handles ingress and egress traffic to external VM and Kubernetes
clusters, and the mesh integrates with the Kubernetes API, Grafana, OpenTelemetry, and SPIRE.]
As shown in the diagram, sidecar proxies interoperate with the following open source solutions:
• Grafana
• NATS
• OpenTelemetry
There are several methods available for installing NGINX Service Mesh and integrating it
with NGINX Ingress Controller. In this section we provide instructions for two popular methods:
• Installation with the NGINX Service Mesh CLI
• Installation with Helm
The NGINX Service Mesh control plane is designed to connect to an API, a CLI, and a GUI for
managing the app. Here you install the NGINX Service Mesh CLI (nginx-meshctl).
The following instructions apply to Linux, but you can also install the CLI on macOS and
Windows; for instructions, see the NGINX Service Mesh documentation.
2. Open a terminal and run the following command to unzip the downloaded binary file:
$ gunzip nginx-meshctl_linux.gz
3. Copy the tool to the local /usr/bin/ directory and make it executable:
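A minimal sketch, assuming the unzipped binary is named nginx-meshctl_linux in the current directory:

$ sudo cp nginx-meshctl_linux /usr/bin/nginx-meshctl
$ sudo chmod +x /usr/bin/nginx-meshctl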
4. Verify that the CLI is working – if it is, usage instructions and a list of available commands
and flags are displayed.
$ nginx-meshctl help
You configure NGINX Service Mesh in one of three mutual TLS (mTLS) modes depending on
which types of traffic need to be protected by encryption:
• off – mTLS is disabled and incoming traffic is accepted from any source
• permissive – mTLS secures communication among injected pods, which can also
communicate in clear text with external services
• strict – mTLS secures communication among injected pods and external traffic
cannot enter the Kubernetes cluster
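1. Deploy NGINX Service Mesh with the nginx-meshctl deploy command, specifying the
mTLS mode you want (this guide assumes strict mode). A minimal sketch based on the
NGINX Service Mesh documentation; check your release for the exact flags:

$ nginx-meshctl deploy --mtls-mode strict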
2. Verify that all pods are up and running in the nginx-mesh namespace. A list of pods like
the following indicates a successful deployment.
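For example:

$ kubectl get pods -n nginx-mesh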
Installation with Helm

Prerequisites
• Helm 3.0+
• A clone of the NGINX Service Mesh GitHub repository on your local machine
2. Install NGINX Service Mesh from the Helm repository, substituting a deployment name
such as my-service-mesh-release for <your_deployment_name>:
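A sketch of the commands, assuming the chart is published in the nginx-stable Helm repository as nginx-service-mesh (verify the repository URL and chart name against the NGINX Service Mesh documentation for your release):

$ helm repo add nginx-stable https://helm.nginx.com/stable
$ helm repo update
$ helm install <your_deployment_name> nginx-stable/nginx-service-mesh -n nginx-mesh --create-namespace --wait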
Installing with Chart Sources

With chart sources, you specify configuration parameters in values.yaml (not included in
the eBook repo), which are referenced when the following command installs the
nginx-service-mesh Helm chart. For <your_deployment_name> substitute a name such as
my-service-mesh-release.

$ cd helm-chart
$ helm install <your_deployment_name> -f values.yaml . -n nginx-mesh --create-namespace --wait
Any configurable parameters that you do not specify in values.yaml are set to their default value.
For more details on chart sources and the full list of configurable parameters for the Helm chart,
see the NGINX Ingress Controller repository and NGINX Service Mesh documentation.
MIGRATING FROM THE COMMUNITY INGRESS CONTROLLER TO F5 NGINX INGRESS CONTROLLER
Many organizations setting up Kubernetes for the first time start with the NGINX Ingress
Controller developed and maintained by the Kubernetes community (kubernetes/ingress-nginx).
As their Kubernetes deployment matures, however, some organizations find they need
advanced features or want commercial support while keeping NGINX as the data plane.

One option is to migrate to the NGINX Ingress Controller based on NGINX Plus and maintained
by F5 NGINX (nginxinc/kubernetes-ingress), and here we provide complete instructions so
you can avoid some complications that result from differences between the two projects.
(As mentioned previously, you must use the NGINX Plus-based NGINX Ingress Controller
and NGINX Service Mesh if you want to explore all use cases in this guide.)
To distinguish between the two projects in the remainder of this guide, we refer to the
NGINX Ingress Controller maintained by the Kubernetes community (kubernetes/ingress-nginx)
as the “community Ingress controller” and the one maintained by F5 NGINX
(nginxinc/kubernetes-ingress) as “NGINX Ingress Controller”.
There are two ways to migrate from the community Ingress controller to NGINX Ingress Controller:
with NGINX Ingress resources (Option 1) or with the standard Kubernetes Ingress resource,
annotations, and ConfigMaps (Option 2).

Option 1: Migrate Using NGINX Ingress Resources

With this migration option, you use the standard Kubernetes Ingress resource to set root
capabilities and NGINX Ingress resources to enhance your configuration with increased
capabilities and ease of use.
The custom resource definitions (CRDs) for NGINX Ingress resources – VirtualServer,
VirtualServerRoute, TransportServer, GlobalConfiguration, and Policy – enable you to easily
delegate control over various parts of the configuration to different teams (such as AppDev
and security teams) as well as provide greater configuration safety and validation.
Set Up SSL Termination and HTTP Path-Based Routing
The table maps the configuration of SSL termination and Layer 7 path-based routing in the
spec field of the standard Kubernetes Ingress resource with the spec field in the NGINX
VirtualServer resource. The syntax and indentation differ slightly in the two resources, but
they accomplish the same basic Ingress functions.
Set Up TCP/UDP Load Balancing and TLS Passthrough

With NGINX Ingress Controller, TransportServer resources define a broad range of options for
TLS Passthrough and TCP and UDP load balancing. TransportServer resources are used in
conjunction with GlobalConfiguration resources to control inbound and outbound connections.
For more information, see Load Balancing TCP and UDP Traffic and Load Balancing
TLS-Encrypted Traffic with TLS Passthrough in Chapter 2.
In the following sections we show how to convert community Ingress controller annotations
into NGINX Ingress Controller resources.
Canary Deployments
Even as you push frequent code changes to your production container workloads, you must
continue to serve your existing users. Canary and blue-green deployments enable you to
do this, and you can perform them on the NGINX Ingress Controller data plane to achieve
stable and predictable updates in production-grade Kubernetes environments.

The table shows the fields in NGINX VirtualServer and VirtualServerRoute resources that
correspond to community Ingress Controller annotations for canary deployments.
The community Ingress controller evaluates canary annotations in this order of precedence:
1. nginx.ingress.kubernetes.io/canary-by-header
2. nginx.ingress.kubernetes.io/canary-by-cookie
3. nginx.ingress.kubernetes.io/canary-by-weight
For NGINX Ingress Controller to evaluate them the same way, they must appear in that order
in the NGINX VirtualServer or VirtualServerRoute manifest.
COMMUNITY INGRESS CONTROLLER:
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-cookie: "cookieName"

NGINX INGRESS CONTROLLER:
matches:
- conditions:
  - cookie: cookieName
    value: never
  action:
    pass: echo
- conditions:
  - cookie: cookieName
    value: always
  action:
    pass: echo-canary
action:
  pass: echo
Traffic Control
The table shows the fields in NGINX VirtualServer and VirtualServerRoute resources that
correspond to community Ingress controller annotations for rate limiting, custom HTTP errors,
a custom default backend, and URI rewriting.
COMMUNITY INGRESS CONTROLLER:
nginx.ingress.kubernetes.io/limit-rate: "number"
nginx.ingress.kubernetes.io/limit-rate-after: "number"

NGINX INGRESS CONTROLLER:
location-snippets: |
  limit_rate number;
  limit_rate_after number;
As indicated in the table, as of this writing NGINX Ingress resources do not include fields
that directly translate the following four community Ingress controller annotations, and you
must use snippets. Direct support for the four annotations, using Policy objects, is planned
for future releases of NGINX Ingress Controller.
• nginx.ingress.kubernetes.io/limit-connections
• nginx.ingress.kubernetes.io/limit-rate
• nginx.ingress.kubernetes.io/limit-rate-after
• nginx.ingress.kubernetes.io/limit-whitelist
Header Manipulation
Manipulating HTTP headers is useful in many use cases, as they contain additional
information that is important and relevant for systems involved in an HTTP transaction.
For example, the community Ingress controller supports enabling and setting cross-origin
resource sharing (CORS) headers, which are used with AJAX applications, where front-end
JavaScript code from a browser is connecting to a backend app or web server.
The table shows the fields in NGINX VirtualServer and VirtualServerRoute resources that
correspond to community Ingress Controller annotations for header manipulation.
- name: Access-Control-Allow-Origin
value: "*"
- name: Access-Control-Max-Age
value: "seconds"
Other Proxying and Load Balancing Annotations
There are other proxying and load-balancing functionalities you might want to configure in
NGINX Ingress Controller depending on the specific use case. These functionalities include
setting load-balancing algorithms and timeouts and buffering settings for proxied connections.
The table shows the statements in the upstream field of NGINX VirtualServer and
VirtualServerRoute resources that correspond to community Ingress Controller annotations
for custom NGINX load balancing, proxy timeouts, proxy buffering, and routing connections
to a service’s Cluster IP address and port.
COMMUNITY INGRESS CONTROLLER NGINX INGRESS CONTROLLER (upstream field)
nginx.ingress.kubernetes.io/proxy-buffering buffering
nginx.ingress.kubernetes.io/proxy-buffers-number buffers
nginx.ingress.kubernetes.io/proxy-buffer-size
nginx.ingress.kubernetes.io/proxy-connect-timeout connect-timeout
nginx.ingress.kubernetes.io/proxy-next-upstream next-upstream
nginx.ingress.kubernetes.io/proxy-next-upstream-timeout next-upstream-timeout
nginx.ingress.kubernetes.io/proxy-read-timeout read-timeout
nginx.ingress.kubernetes.io/proxy-send-timeout send-timeout
nginx.ingress.kubernetes.io/service-upstream use-cluster-ip
mTLS Authentication
As previously noted, a service mesh is particularly useful in a strict zero-trust environment, where
distributed applications inside a cluster communicate securely by mutually authenticating.
What if we need to impose that same level of security on traffic entering and exiting the cluster?
We can configure mTLS authentication at the Ingress Controller layer so that the end
systems of external connections authenticate each other by presenting a valid certificate.
The table shows the fields in NGINX Policy resources that correspond to community Ingress
Controller annotations for client certificate authentication and backend certificate authentication.
Session Persistence (Exclusive to NGINX Plus)

The table shows the fields in NGINX Policy resources that are exclusive to the
NGINX Ingress Controller based on NGINX Plus and correspond to community Ingress
Controller annotations for session persistence (affinity).
Option 2: Migrate Using the Kubernetes Ingress Resource
The second option for migrating from the community Ingress controller to NGINX Ingress
Controller is to use only annotations and ConfigMaps in the standard Kubernetes Ingress
resource and potentially rely on “master/minion”-style processing. This keeps all the
configuration in the Ingress object.
Note: With this method, do not alter the spec field of the Ingress resource.
1. The community Ingress controller uses Lua to implement some of its load-balancing algorithms. NGINX Ingress Controller doesn’t have an
equivalent for all of them.
2. Redirects HTTP traffic to HTTPS. The community Ingress controller implements this with Lua code, while NGINX Ingress Controller uses
native NGINX if conditions.
The following table outlines the community Ingress controller annotations that correspond
directly to annotations supported by the NGINX Ingress Controller based on NGINX Plus.
nginx.ingress.kubernetes.io/session-cookie-path: "/route"
Note: The NGINX Ingress Controller based on NGINX Plus has additional annotations for
features that the community Ingress controller doesn’t support at all, including active health
checks and authentication using JSON Web Tokens (JWTs).
The following table maps community Ingress controller ConfigMap keys to their directly
corresponding NGINX Ingress Controller ConfigMap keys. Note that a handful of ConfigMap
key names are identical. Also, both the community Ingress controller and NGINX Ingress
Controller have ConfigMap keys that the other does not (not shown in the table).
COMMUNITY INGRESS CONTROLLER NGINX INGRESS CONTROLLER
proxy-body-size client-max-body-size
proxy-buffering proxy-buffering
proxy-buffers-number: "number" proxy-buffers: number size
proxy-buffer-size: "size"
proxy-connect-timeout proxy-connect-timeout
proxy-read-timeout proxy-read-timeout
proxy-send-timeout proxy-send-timeout
server-name-hash-bucket-size server-names-hash-bucket-size
server-name-hash-max-size server-names-hash-max-size
server-snippet server-snippets
server-tokens server-tokens
ssl-ciphers ssl-ciphers
ssl-dh-param ssl-dhparam-file
ssl-protocols ssl-protocols
ssl-redirect ssl-redirect
upstream-keepalive-connections keepalive
use-http2 http2
use-proxy-protocol proxy-protocol
variables-hash-bucket-size variables-hash-bucket-size
worker-cpu-affinity worker-cpu-affinity
worker-processes worker-processes
worker-shutdown-timeout worker-shutdown-timeout
CHAPTER SUMMARY
We defined what an Ingress controller and service mesh are and what they do, explained why
it’s beneficial to deploy them together, and showed how to install NGINX Ingress Controller
and NGINX Service Mesh.
• NGINX Service Mesh comes in handy when setting up zero-trust production environments,
especially those with large-scale distributed app topologies.
• NGINX Service Mesh has two key components: the data plane and the control plane.
The data plane (implemented as sidecar proxy instances) manages the traffic between
instances, and its behavior is configured and controlled by the control plane.
• You can migrate from the community Ingress controller to NGINX Ingress Controller using
either custom NGINX Ingress resources or the standard Kubernetes Ingress resource with
annotations and ConfigMaps. The former option supports a broader set of networking
capabilities and so is more suitable for production-grade Kubernetes environments.
• NGINX Ingress resources not only enable configuration of load balancing, but also
provide additional customization, including circuit breaking, routing, header manipulation,
mTLS authentication, and web application firewall (WAF).
2. Traffic Management Use Cases
In this chapter we show how to configure NGINX Ingress Controller for several traffic
management use cases.
NGINX Ingress Controller supports TCP and UDP load balancing, so you can use it to manage
traffic for a wide range of apps and utilities based on those protocols, including:
• MySQL, LDAP, and MQTT – TCP-based apps used by many popular applications
• DNS, syslog, and RADIUS – UDP-based utilities used by edge devices and
non-transactional applications
TCP and UDP load balancing with NGINX Ingress Controller is also an effective solution for
distributing network traffic to Kubernetes applications in the following circumstances:
• You are using end-to-end encryption (E2EE), with the application rather than
NGINX Ingress Controller handling encryption and decryption
• You need high-performance load balancing for applications that are based on TCP or UDP
• You want to minimize the amount of change when migrating an existing network (TCP/UDP)
load balancer to a Kubernetes environment
NGINX Ingress Controller comes with two NGINX Ingress resources that support TCP/UDP
load balancing:
• GlobalConfiguration – Defines the TCP and UDP listeners that NGINX Ingress Controller exposes
• TransportServer – Routes traffic from a listener to the appropriate upstream service
The following diagram depicts a sample use case for the GlobalConfiguration and
TransportServer resources. In gc.yaml, the cluster administrator defines TCP and UDP listeners
in a GlobalConfiguration resource. In ts.yaml, a DevOps engineer references the TCP listener
in a TransportServer resource that routes traffic to a MySQL deployment.
[Diagram: the cluster administrator applies gc.yaml and a DevOps engineer applies ts.yaml
through the Kubernetes API.]
1 apiVersion: k8s.nginx.org/v1alpha1
2 kind: GlobalConfiguration
3 metadata:
4 name: nginx-configuration
5 namespace: nginx-ingress
6 spec:
7 listeners:
8 - name: syslog-udp
9 port: 541
10 protocol: UDP
11 - name: mysql-tcp
12 port: 5353
13 protocol: TCP
1 apiVersion: k8s.nginx.org/v1alpha1
2 kind: TransportServer
3 metadata:
4 name: mysql-tcp
5 spec:
6 listener:
7 name: mysql-tcp
8 protocol: TCP
9 upstreams:
10 - name: mysql-db
11 service: mysql
12 port: 3306
13 action:
14 pass: mysql-db
In this example, a DevOps engineer uses the MySQL client to verify that the configuration is
working, as confirmed by the output with the list of tables in the rawdata_content_schema
database inside the MySQL deployment.
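One way to run that check, assuming the MySQL service is reachable through the NGINX Ingress Controller entry point at <public_IP> on the listener port 5353 defined in gc.yaml (substitute real credentials for <user>):

$ mysql -h <public_IP> -P 5353 -u <user> -p -e "SHOW TABLES FROM rawdata_content_schema;"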
TransportServer resources for UDP traffic are configured similarly; for a complete example, see
Basic TCP/UDP Load Balancing in the NGINX Ingress Controller repo on GitHub. Advanced
NGINX users can extend the TransportServer resource with native NGINX configuration using
the stream-snippets ConfigMap key, as shown in the Support for TCP/UDP Load Balancing
example in the repo.
LOAD BALANCING TLS-ENCRYPTED TRAFFIC WITH TLS PASSTHROUGH
Beyond TCP and UDP load balancing, you can use NGINX Ingress Controller for
TLS Passthrough, which means load balancing encrypted TLS traffic on port 443 among
different applications without decryption.
There are several other options for load balancing TLS-encrypted traffic with NGINX
Ingress Controller:
• TLS termination – NGINX Ingress Controller terminates inbound TLS connections and
routes them unencrypted to service endpoints, using either an NGINX VirtualServer or
a standard Kubernetes Ingress resource. This option is not secure if hackers are able
to access your private network and view traffic between NGINX Ingress Controller and
backend services.
• TCP load balancing with a TransportServer resource – NGINX Ingress Controller load
balances TCP connections that are bound to a specific port defined by the listeners
field in a GlobalConfiguration resource. Connections can be either clear text or
encrypted (in the second case the backend app decrypts them).
TLS Passthrough is an alternative to E2EE when encrypting traffic in the local area network
is important. The difference is that with TLS Passthrough NGINX Ingress Controller does not
terminate inbound TLS connections, instead forwarding them encrypted to service endpoints.
As with E2EE, the service endpoints do the decryption and so require the certificate and keys.
A limitation with TLS Passthrough compared to E2EE is that NGINX Ingress Controller cannot
make additional Layer 7 routing decisions or transform the headers and body because it
doesn’t decrypt traffic coming from the client.
TLS Passthrough is also an alternative to TCP load balancing with a TransportServer resource
(the third option above). The difference is that with the third option the GlobalConfiguration
resource can specify multiple ports for load balancing TCP/UDP connections, but only
one TransportServer resource can reference the GlobalConfiguration resource. With TLS
Passthrough, there is a single built-in listener that exposes port 443 only, but many
TransportServer resources can reference the built-in listener for load balancing encrypted
TLS connections.
TLS Passthrough is also extremely useful when the backend configures and performs the
process for TLS verification of the client, and it is not possible to move authentication to
NGINX Ingress Controller.
[Diagram: a MySQL client connects to the public entry point on port 443; NGINX Ingress Controller
passes the TLS connection through to the MySQL deployment at mysql-svc:3306.]
The following TransportServer resource for TLS Passthrough references a built-in listener
named tls-passthrough and sets the protocol to TLS_PASSTHROUGH (lines 7–8). This exposes
port 443 on NGINX Ingress Controller for load balancing TLS-encrypted traffic. Users can
establish secure connections with the application workload by accessing the hostname
app.example.com (line 9), which resolves to NGINX Ingress Controller’s public entry point.
NGINX Ingress Controller passes the TLS-secured connections to the secure-app upstream
for decryption (lines 10–15).
1 apiVersion: k8s.nginx.org/v1alpha1
2 kind: TransportServer
3 metadata:
4 name: secure-app
5 spec:
6 listener:
7 name: tls-passthrough
8 protocol: TLS_PASSTHROUGH
9 host: app.example.com
10 upstreams:
11 - name: secure-app
12 service: secure-app
13 port: 8443
14 action:
15 pass: secure-app
For more information about features you can configure in TransportServer resources, see
the NGINX Ingress Controller documentation.
ENABLING MULTI-TENANCY AND NAMESPACE ISOLATION
As organizations scale up, development and operational workflows in Kubernetes get more
complex. It’s generally more cost-effective – and can be more secure – for teams to share
Kubernetes clusters and resources, rather than each team getting its own cluster. But there
can be critical damage to your deployments if teams don’t share those resources in a safe
and secure manner or hackers exploit your configurations.
Multi-tenancy practices and namespace isolation at the network and resource level help teams
share Kubernetes resources safely. You can also significantly reduce the magnitude of breaches
by isolating applications on a per-tenant basis. This method helps boost resiliency because
only subsections of applications owned by specific teams can be compromised, while systems
providing other functionalities remain intact.
NGINX Ingress Controller supports multiple multi-tenancy models, but we see two primary
patterns. The infrastructure service provider pattern typically includes multiple NGINX
Ingress Controller deployments with physical isolation, while the enterprise pattern typically
uses a shared NGINX Ingress Controller deployment with namespace isolation. In this
section we explore the enterprise pattern in depth; for information about running multiple
NGINX Ingress Controllers see our documentation.
NGINX Ingress Controller supports both the standard Kubernetes Ingress resource and
custom NGINX Ingress resources, which enable both more sophisticated traffic management
and delegation of control over configuration to multiple teams. The custom resources are
VirtualServer, VirtualServerRoute, GlobalConfiguration, TransportServer, and Policy.
There are two models you can choose from when implementing multi-tenancy in your
Kubernetes cluster: full self-service and restricted self-service.
To illustrate the full self-service model, we replicate the bookinfo application with two
subdomains, a.bookinfo.com and b.bookinfo.com, as depicted in the following diagram. Once the
administrator installs and deploys NGINX Ingress Controller in the nginx-ingress
namespace (highlighted in green), teams DevA (pink) and DevB (purple) create their
own VirtualServer resources and deploy applications isolated within their namespaces
(A and B respectively).
[Diagram: Client A reaches https://a.bookinfo.com through the public entry point. Team DevA’s
VirtualServer resource, applied with kubectl, sets load-balancing rules with TLS termination
for host a.bookinfo.com and routes traffic to pods in namespace A.]
Teams DevA and DevB set Ingress rules for their domains to route external connections to
their applications.
Team DevA applies the following VirtualServer resource to expose applications for the
a.bookinfo.com domain in the A namespace.
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServer
3 metadata:
4 name: bookinfo
5 namespace: A
6 spec:
7 host: a.bookinfo.com
8 upstreams:
9 - name: productpageA
10 service: productpageA
11 port: 9080
12 routes:
13 - path: /
14 action:
15 pass: productpageA
Similarly, team DevB applies the following VirtualServer resource to expose applications for
the b.bookinfo.com domain in the B namespace.
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServer
3 metadata:
4 name: bookinfo
5 namespace: B
6 spec:
7 host: b.bookinfo.com
8 upstreams:
9 - name: productpageB
10 service: productpageB
11 port: 9080
12 routes:
13 - path: /
14 action:
15 pass: productpageB
[Diagram: in the restricted self-service model, clients reach bookinfo.example.com through the
public entry point. The administrator’s VirtualServer resource sets load-balancing rules with
TLS termination for the hostname and delegates the /productpage-A and /productpage-B
subroutes to VirtualServerRoute resources owned by the teams in namespaces A and B.]
As illustrated in the diagram, the cluster administrator installs and deploys NGINX Ingress
Controller in the nginx-ingress namespace (highlighted in green), and defines a VirtualServer
resource that sets path-based rules referring to VirtualServerRoute resource definitions.
This VirtualServer resource definition sets two path-based rules that refer to VirtualServerRoute
resource definitions for two subroutes, /productpage-A and /productpage-B.
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServer
3 metadata:
4 name: example
5 spec:
6 host: bookinfo.example.com
7 routes:
8 - path: /productpage-A
9 route: A/ingress
10 - path: /productpage-B
11 route: B/ingress
The developer teams responsible for the apps in namespaces A and B then define
VirtualServerRoute resources to expose application subroutes within their namespaces.
The teams are isolated by namespace and restricted to deploying application subroutes set
by VirtualServer resources provisioned by the administrator:
• Team DevA (pink in the diagram) applies the following VirtualServerRoute resource to
expose the application subroute rule set by the administrator for /productpage-A .
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServerRoute
3 metadata:
4 name: ingress
5 namespace: A
6 spec:
7 host: bookinfo.example.com
8 upstreams:
9 - name: productpageA
10 service: productpageA-svc
11 port: 9080
12 subroutes:
13 - path: /productpage-A
14 action:
15 pass: productpageA
• Team DevB (purple in the diagram) applies the following VirtualServerRoute resource to
expose the application subroute rule set by the administrator for /productpage-B.

1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServerRoute
3 metadata:
4 name: ingress
5 namespace: B
6 spec:
7 host: bookinfo.example.com
8 upstreams:
9 - name: productpageB
10 service: productpageB-svc
11 port: 9080
12 subroutes:
13 - path: /productpage-B
14 action:
15 pass: productpageB
For more information about features you can configure in VirtualServer and VirtualServerRoute
resources, see the NGINX Ingress Controller documentation.
Note: You can use mergeable Ingress types to configure cross-namespace routing, but in
a restricted self-service delegation model that approach has three downsides compared to
VirtualServer and VirtualServerRoute resources:
1. It is less secure.
2. As your Kubernetes deployment grows larger and more complex, it becomes
increasingly prone to accidental modifications, because mergeable Ingress types do not
prevent developers from setting Ingress rules for hostnames within their namespace.
You can use Kubernetes role-based access control (RBAC) to regulate a user’s access to
namespaces and NGINX Ingress resources based on the roles assigned to the user.

For instance, in a restricted self-service model, only administrators with special privileges
can safely be allowed to access VirtualServer resources – because those resources define
the entry point to the Kubernetes cluster, misuse can lead to system-wide outages.
Developers use VirtualServerRoute resources to configure Ingress rules for the application
routes they own, so administrators set RBAC policies that allow developers to create only
those resources. They can even restrict that permission to specific namespaces if they need
to regulate developer access even further.
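As an illustration only (the role and group names are hypothetical), a namespaced Role and RoleBinding like the following would let members of a dev-a group manage VirtualServerRoute resources, and nothing else, in namespace A:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vsr-editor          # hypothetical name
  namespace: A
rules:
- apiGroups: ["k8s.nginx.org"]
  resources: ["virtualserverroutes"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-a-vsr-editor    # hypothetical name
  namespace: A
subjects:
- kind: Group
  name: dev-a               # hypothetical group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: vsr-editor
  apiGroup: rbac.authorization.k8s.io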
In a full self-service model, developers can safely be granted access to VirtualServer resources,
but again the administrator might restrict that permission to specific namespaces.
Adding Policies
NGINX Policy resources are another tool for enabling distributed teams to configure
Kubernetes in multi-tenancy deployments. Policy resources enable functionalities like
authentication using OAuth and OpenID Connect (OIDC), rate limiting, and web application
firewall (WAF). Policy resources are referenced in VirtualServer and VirtualServerRoute
resources to take effect in the Ingress configuration.
For instance, a team in charge of identity management in a cluster can define JSON Web
Token (JWT) or OIDC policies like this one (defined in Identity-Security/okta-oidc-policy.yaml)
for using Okta as the OIDC identity provider (IdP), which we discuss in detail in Chapter 4.
1 apiVersion: k8s.nginx.org/v1
2 kind: Policy
3 metadata:
4 name: okta-oidc-policy
5 spec:
6 oidc:
7 clientID: client-id
8 clientSecret: okta-oidc-secret
9 authEndpoint: https://your-okta-domain/oauth2/v1/authorize
10 tokenEndpoint: https://your-okta-domain/oauth2/v1/token
11 jwksURI: https://your-okta-domain/oauth2/v1/keys
View on GitHub
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServer
3 metadata:
4 name: bookinfo-vs
5 spec:
6 host: bookinfo.example.com
7 tls:
8 secret: bookinfo-secret
9 upstreams:
10 - name: backend
11 service: productpage
12 port: 9080
(continues)
View on GitHub
Together, the NGINX Policy, VirtualServer, and VirtualServerRoute resources enable distributed
configuration architectures, where administrators can easily delegate configuration to other
teams. Teams can assemble modules across namespaces and configure the NGINX Ingress
Controller with sophisticated use cases in a secure, scalable, and manageable fashion.
For more information about Policy resources, see the NGINX Ingress Controller documentation.
CONFIGURING TRAFFIC CONTROL AND TRAFFIC SPLITTING
Traffic control and traffic splitting are two key traffic-management approaches that have
become critical in modern application topologies.
This section uses a sample application called bookinfo, originally created by Istio, to
illustrate how traffic control and traffic splitting affect application behavior. Some of the
use cases leverage features that are available only when you deploy both NGINX Service
Mesh and the NGINX Ingress Controller based on NGINX Plus, but many are also available
with just the NGINX Ingress Controller based on NGINX Open Source.
We have prepared a GitHub repo that includes all the files for deploying the bookinfo app
and implementing the sample use cases in this section as well as Chapter 4. To get started,
see Deploying the Sample Application.
Customer satisfaction and “always on” accessibility of resources are paramount for most
companies delivering services online. Loss of customers, loss of revenue (up to $550,000
for each hour of downtime), and loss of employee productivity all directly hurt not only the
bottom line but also the reputation of the company. If a company isn’t successfully using
modern app-delivery technologies and approaches to manage its online traffic, customers
are quick to react on social media, and you just don’t want to be known as “that company”.
Traffic control and traffic splitting are both essential for maximizing application performance,
but the method to choose depends on your goals:
• To protect services from being overwhelmed with requests, use rate limiting (traffic control)
• To prevent cascading failure, use circuit breaking (traffic control)
• To test how a new application version handles load by gradually increasing the amount
of traffic directed to it, use a canary release (traffic splitting)
• To determine which version of an application users prefer, use A/B testing (traffic splitting)
• To expose a new application or feature only to a defined set of users, use debug routing
(traffic splitting)
You can implement all of these methods with NGINX Ingress Controller and NGINX Service
Mesh, configuring robust traffic routing and splitting policies in seconds.
Both NGINX Ingress Controller and NGINX Service Mesh help you implement robust traffic
control and traffic splitting in seconds. However, they are not equally suitable for all use
cases and app architectures. As a general rule of thumb:
• NGINX Ingress Controller is appropriate when there is no service-to-service communication
in your cluster or apps are direct endpoints from NGINX Ingress Controller.
• NGINX Service Mesh is appropriate when you need to control east-west traffic within
the cluster, for example when testing and upgrading individual microservices.
In this section you use NGINX Ingress Controller to expose the sample bookinfo application,
using the VirtualServer resource defined in Traffic-Management/bookinfo-vs.yaml.
Line 8 of bookinfo-vs.yaml references the Kubernetes Secret for the bookinfo app, which
is defined in Traffic-Management/bookinfo-secret.yaml. For the purposes of the sample
application, the Secret is self-signed; in a production environment we strongly recommend
that you use real keys and certificates generated by a Certificate Authority.
Lines 9–16 of bookinfo-vs.yaml define the routing rule that directs requests for
bookinfo.example.com to the productpage service.
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServer
3 metadata:
4 name: bookinfo-vs
5 spec:
6 host: bookinfo.example.com
7 tls:
8 secret: bookinfo-secret
9 upstreams:
10 - name: backend
11 service: productpage
12 port: 9080
13 routes:
14 - path: /
15 action:
16 pass: backend
View on GitHub
Here is bookinfo-secret.yaml, with a self-signed key and certificate for the purposes of
this example:
1 apiVersion: v1
2 kind: Secret
3 metadata:
4 name: bookinfo-secret
5 type: kubernetes.io/tls
6 data:
7 tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0...
8 tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVk...
View on GitHub
1. Load the key and certificate, and activate the VirtualServer resource for bookinfo:
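A minimal sketch of the commands, using the two manifests named in this section (Traffic-Management/bookinfo-secret.yaml and Traffic-Management/bookinfo-vs.yaml):
$ kubectl apply -f Traffic-Management/bookinfo-secret.yaml
$ kubectl apply -f Traffic-Management/bookinfo-vs.yaml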
2. To enable external clients to access resources in the cluster via NGINX Ingress Controller,
you need to advertise a public IP address as the entry point for the cluster.
In cloud deployments, this is the public IP address of the LoadBalancer service you
created in Step 5.
Obtain the public IP address of the LoadBalancer service (the output is spread across
two lines for legibility):
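A sketch of the command and representative output; the service name and namespace (nginx-ingress) are assumptions based on the generate-traffic.sh script shown later, and the x-filled values are placeholders (the AWS DNS name matches the nslookup example below):
$ kubectl get svc -n nginx-ingress
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP                    PORT(S)
nginx-ingress   LoadBalancer   10.x.x.x     a309c13t-2.elb.amazonaws.com   80:3xxxx/TCP,443:3xxxx/TCP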
For Azure and Google Cloud Platform, the public IP address is reported in the EXTERNAL-IP
field. For AWS, however, the public IP address of the Network Load Balancer (NLB) is
not static and the EXTERNAL-IP field instead reports its DNS name, as in the sample
output above. To find the public IP address, run the nslookup command (here the public
address is 203.0.x.66):
$ nslookup a309c13t-2.elb.amazonaws.com
Server: 198.51.100.1
Address: 198.51.100.1#53
Non-authoritative answer:
Name: a309c13t-2.elb.amazonaws.com
Address: 203.0.x.66
3. Edit the local /etc/hosts file, adding an entry for bookinfo.example.com with the public
IP address. For example:
203.0.x.66 bookinfo.example.com
5. Verify that all pods have a sidecar injected, as indicated by nginx-mesh-sidecar in the
CONTAINERS field:
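A sketch of one way to list each pod's containers; the custom-columns output format is standard kubectl, and the CONTAINERS heading is chosen to match the text:
$ kubectl get pods -o custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name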
8. To verify that external clients can access the app by connecting to NGINX Ingress
Controller, navigate to https://bookinfo.example.com/ in a browser.
In this guide we deploy NGINX Service Mesh in mTLS strict mode. If you use off or
permissive mode, an alternative way to verify that clients can access the app is to run
this command to port-forward the product page to your local environment, and then
open http://localhost:9080/ in your browser.
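A sketch of the port-forward command, assuming the productpage service name and port used elsewhere in this section:
$ kubectl port-forward svc/productpage 9080:9080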
Traffic control refers to the act of regulating the flow of traffic to your apps in terms of source,
volume, and destination. It’s a necessity when running Kubernetes in production because
it allows you to protect your infrastructure and apps from attacks and traffic spikes. In simple
terms, it’s always an advantage to regulate the traffic coming to your app or service.
Traffic control incorporates two techniques:
• Rate limiting
• Circuit breaking
Activating Client Rate Limiting with NGINX Ingress Controller
A simple way to rate-limit all clients is to create an NGINX Ingress Controller Policy resource
and apply it to VirtualServer and VirtualServerRoute resources.
1 apiVersion: k8s.nginx.org/v1
2 kind: Policy
3 metadata:
4 name: nic-rate-limit-policy
5 spec:
6 rateLimit:
7 rate: 1r/s
8 zoneSize: 10M
9 key: ${binary_remote_addr}
10 logLevel: warn
11 rejectCode: 503
12 dryRun: false
View on GitHub
To apply the policy, reference it in the bookinfo VirtualServer resource, as in this excerpt:
17 policies:
18 - name: nic-rate-limit-policy
View on GitHub
3. Apply the changes to the bookinfo application you exposed in Deploying the
Sample Application:
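A minimal sketch, assuming the policy reference was added to the Traffic-Management/bookinfo-vs.yaml manifest used earlier:
$ kubectl apply -f Traffic-Management/bookinfo-vs.yaml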
4. To verify that the policy is in effect, use the Traffic-Management/generate-traffic.sh script,
which generates and directs traffic to the Ingress Controller:
1 #!/bin/bash
2
3 # get IP address of NGINX Ingress Controller
4 IC_IP=$(kubectl get svc -n nginx-ingress -o jsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")
5 [ -z "$IC_IP" ] && IC_IP=$(kubectl get svc -n nginx-ingress -o jsonpath="{.items[0].status.loadBalancer.ingress[0].hostname}")
6
7 # send 300 requests to bookinfo
8 for i in $(seq 1 300);
9 do curl -I -k https://$IC_IP:443/productpage\?u=normal -H "host: bookinfo.example.com";
10 done
View on GitHub
5. Run the script on your local machine. Requests that exceed the rate limit get rejected
with error code 503 (Service Unavailable) as for the second request in this example:
$ bash generate-traffic.sh
HTTP/1.1 200 OK
Server: nginx/1.21.5
Date: Day, DD Mon HH:MM:SS YYYY TZ
Content-Type: text/html; charset=utf-8
Content-Length: 4183
Connection: keep-alive
X-Mesh-Request-ID: c9df5e030d3c6871745527ea93e403b8
To preserve a satisfactory user experience, you often need to make rate-limiting policies
more flexible, for example to accommodate "bursty" apps. Such apps tend to send multiple
requests in rapid succession followed by a period of inactivity. If the rate-limiting policy is
set such that it always rejects bursty traffic, many legitimate client requests don't succeed.

To avoid this, instead of immediately rejecting requests that exceed the limit, you can buffer
them in a queue and service them in a timely manner. The burst field in a rateLimit policy
defines how many requests a client can make in excess of the rate; requests that exceed
burst are rejected immediately. You can also control how quickly the queued requests are sent:
by default they are delayed so that the defined rate limit is not exceeded, but when noDelay
is set to true (as in the following policy) NGINX Ingress Controller proxies queued requests
to the app without delay.
1 apiVersion: k8s.nginx.org/v1
2 kind: Policy
3 metadata:
4 name: nic-rate-limit-policy
5 spec:
6 rateLimit:
7 rate: 10r/s
8 zoneSize: 10M
9 key: ${binary_remote_addr}
10 logLevel: warn
11 rejectCode: 503
12 dryRun: false
13 noDelay: true
14 burst: 10
View on GitHub
2. Verify that the new rate limit is in effect by running the Traffic-Management/generate-
traffic.sh script as in Step 4 of the previous section.
Rate limiting with NGINX Ingress Controller is implemented with the NGINX Limit Requests
module. For a more detailed explanation of how rate limiting works, see the NGINX blog.
Activating Interservice Rate Limiting with NGINX Service Mesh
Traffic between services in a Kubernetes cluster doesn’t necessarily fall under the scope
of the NGINX Ingress Controller, making NGINX Service Mesh the more appropriate way to
rate limit it.
As with NGINX Ingress Controller, you create a rate-limiting policy to have NGINX Service Mesh
limit the number of requests an app accepts from each service within a defined period of time.
The NGINX Service Mesh RateLimit object takes different parameters from the
NGINX Ingress Controller rateLimit policy:
• destination – The service whose incoming requests are limited (here the productpage service)
• sources – The clients subject to the limit (here the bash Deployment)
• name – A name for the rate limit
• rate – Allowed number of requests per second or minute from each client
• burst and delay – How many requests in excess of the rate are queued rather than rejected, and whether queued requests are delayed before being forwarded
1 apiVersion: specs.smi.nginx.com/v1alpha1
2 kind: RateLimit
3 metadata:
4 name: nsm-rate-limit
5 namespace: default
6 spec:
7 destination:
8 kind: Service
9 name: productpage
10 namespace: default
11 sources:
12 - kind: Deployment
13 name: bash
14 namespace: default
15 name: 10rm
16 rate: 10r/m
17 burst: 0
18 delay: nodelay
View on GitHub
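The earlier steps are not reproduced here; a minimal sketch of applying the RateLimit resource and opening a shell in the bash container (the manifest path and the deploy/bash reference are assumptions, not files from the repo):
$ kubectl apply -f Traffic-Management/nsm-rate-limit.yaml   # hypothetical path
$ kubectl exec -it deploy/bash -- /bin/sh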
3. Run the following curl command several times in rapid succession in the bash container
to verify the rate limit is being imposed. As shown for the second request, the error
code 503 (Service Unavailable) indicates the request was rejected because it
exceeded the limit.
$ curl -I -k http://productpage:9080/productpage\?u=normal
HTTP/1.1 200 OK
Server: nginx/1.21.5
Date: Day, DD Mon HH:MM:SS YYYY TZ
Content-Type: text/html; charset=utf-8
Content-Length: 5183
Connection: keep-alive
X-Mesh-Request-ID: 27c4030698264b7136f2218002d9933f
$ curl -I -k http://productpage:9080/productpage\?u=normal
HTTP/1.1 503 Service Unavailable
Server: nginx/1.21.5
Date: Day, DD Mon HH:MM:SS YYYY TZ
Content-Type: text/html
Content-Length: 198
Connection: keep-alive
To enable a circuit breaker with NGINX Service Mesh, you set a limit on the number of errors
that occur within a defined period. When the number of failures exceeds the limit, the circuit
breaker starts returning an error response to clients as soon as a request arrives. You can also
define a custom informational page to return when your service is not functioning correctly
or under maintenance, as detailed in Returning a Custom Page.
The use of a circuit breaker can improve the performance of an application by eliminating calls
to a failed component that would otherwise time out or cause delays, and it can often mitigate
the impact of a failed non‑essential component.
The following lines in Traffic-Management/broken-deployment.yaml simulate a service failure
in which the release of version 2 of the reviews service is followed by a command (lines 44–45)
that causes the associated pod to crash and start returning error code 502 (Bad Gateway).
18 apiVersion: apps/v1
19 kind: Deployment
25 spec:
31 template:
32 metadata:
33 labels:
34 app: reviews-v2
35 version: v2
36 spec:
37 serviceAccountName: bookinfo-reviews
38 containers:
39 - name: reviews-v2
40 image: docker.io/istio/examples-bookinfo-reviews-v2:1.15.0
41 imagePullPolicy: IfNotPresent
42 ports:
43 - containerPort: 9080
44 command: ["/bin/sh","-c"]
45 args: ["timeout --signal=SIGINT 5 /opt/ibm/wlp/bin/server run defaultServer"]
View on GitHub
1. Apply the failure simulation. In the STATUS column of the output from kubectl get pods,
the value CrashLoopBackOff for the reviews-v2 pod indicates that it is repeatedly
starting up and crashing. When you send a curl request to the reviews service, you
get a 502 (Bad Gateway) error response:
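A minimal sketch of the commands, using the broken-deployment.yaml manifest named above; the in-cluster request is sent from the bash Deployment shown in the pod list below:
$ kubectl apply -f Traffic-Management/broken-deployment.yaml
$ kubectl exec deploy/bash -- curl -s -o /dev/null -w "%{http_code}\n" http://reviews:9080/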
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
bash-5bbdcb458d-tbzrb 2/2 Running 0 42h
details-v1-847c7999fb-47fsw 2/2 Running 0 9d
maintenance-v1-588566b84f-57wrr 2/2 Running 0 3h53m
productpage-v1-764fd8c446-px5p9 2/2 Running 0 4m47s
ratings-v1-7c46bc6f4d-qjqff 2/2 Running 0 9d
reviews-v1-76ddd45467-vvw56 2/2 Running 0 9d
reviews-v2-7fb86bc686-5jkhq 1/2 CrashLoopBackOff 9 2m
2. Configure an NGINX Service Mesh CircuitBreaker object to route traffic away from the
reviews-v2 pod and to reviews-v1 instead, preventing clients from receiving the 502
error. This configuration (defined in Traffic-Management/nsm-circuit-breaker.yaml) trips
the circuit when there are more than 3 errors within 30 seconds.
1 apiVersion: specs.smi.nginx.com/v1alpha1
2 kind: CircuitBreaker
3 metadata:
4 name: nsm-circuit-breaker
5 namespace: default
6 spec:
7 destination:
8 kind: Service
9 name: reviews
10 namespace: default
11 errors: 3
12 timeoutSeconds: 30
13 fallback:
14 service: default/reviews-v1
15 port: 9080
View on GitHub
Note: The NGINX Service Mesh circuit breaker relies on passive health checks to monitor
the status of service endpoints. With the configuration shown above, it marks the broken
deployment as unhealthy when more than 3 requests issued from the bash container trigger
an error response during a 30-second period.
When implementing the circuit breaker pattern with the NGINX Ingress Controller based
on NGINX Plus, you can use active health checks instead. For more information, see the
NGINX blog.
Returning a Custom Page
A circuit breaker with a backup service improves the user experience by reducing the number
of error messages clients see, but it doesn't eliminate such messages entirely. We can
enhance the user experience further by returning a response that's more helpful than an
error code when failure occurs.
Consider, for example, an application with a web or mobile interface that presents a list
of ancillary items – comments on an article, recommendations, advertisements, and
so on – in addition to the information specifically requested by clients. If the Kubernetes
service that generates this list fails, by default it returns error code 502 (Bad Gateway).
You can create a more appropriate response for the circuit breaker to send, such as a redirect
to a URL that explains the failure.
The following errorPages definition, added to the route in the bookinfo VirtualServer resource, redirects clients to such a page when the backend returns a 502:
17 errorPages:
18 - codes: [502]
19 redirect:
20 code: 301
21 url: https://cdn.f5.com/maintenance/f5.com/SorryPage.html
View on GitHub
$ curl -k -I https://bookinfo.example.com
HTTP/1.1 301 Moved Permanently
Server: nginx/1.21.5
Date: Day, DD Mon HH:MM:SS YYYY TZ
Content-Type: text/html
Content-Length: 169
Connection: keep-alive
Location: https://cdn.f5.com/maintenance/f5.com/SorryPage.html
Traffic splitting directs different proportions of incoming traffic to two versions of a backend
app running simultaneously in an environment (usually the current production version and an
updated version). One use case is testing the stability and performance of the new version as
you gradually increase the amount of traffic to it. Another is seamlessly updating the app
version by changing routing rules to transfer all traffic at once from the current version to the
new version. Traffic splitting techniques include:
• Blue-green deployment
• Canary deployment
• A/B testing
• Debug routing
2. Create the script:
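The script is not reproduced here; a minimal sketch of what such a traffic generator might look like, assuming a bash Deployment in the default namespace and the reviews service on port 9080 (the Firefox User-Agent header matches the A/B testing example later in this section):
#!/bin/bash
# traffic.sh - generate cluster-internal traffic to the reviews service
# by running curl in a loop inside the bash container
kubectl exec deploy/bash -- sh -c 'for i in $(seq 1 200); do curl -s -o /dev/null -H "user-agent: Firefox" http://reviews:9080/; sleep 0.5; done'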
Blue-Green Deployment with NGINX Service Mesh
The following example implements a blue-green deployment that directs traffic to the new
version of a sample service (reviews-v2-1) without removing the old one (reviews-v1).
Keeping reviews-v1 in place means it's easy to roll back if reviews-v2-1 has problems or fails.
1. Generate traffic to the reviews service (if necessary, repeat the instructions in
Generating Cluster-Internal Traffic to Split to create the script):
$ bash traffic.sh
2. The nginx-meshctl top command shows that all traffic is flowing to the
reviews-v1 service:
$ nginx-meshctl top
Deployment Incoming Success Outgoing Success NumRequests
bash 100.00% 253
reviews-v1 100.00% 253
The TrafficSplit resource sends all traffic to reviews-v2-1 and none to reviews-v1:
1 apiVersion: split.smi-spec.io/v1alpha3
2 kind: TrafficSplit
3 metadata:
4 name: reviews
5 spec:
6 service: reviews
7 backends:
8 - service: reviews-v1
9 weight: 0
10 - service: reviews-v2-1
11 weight: 100
View on GitHub
5. Run nginx-meshctl top again to verify that all traffic is flowing to the
reviews-v2-1 service:
$ nginx-meshctl top
Deployment Incoming Success Outgoing Success NumRequests
reviews-v2-1 100.00% 129
bash 100.00% 129
Blue-Green Deployment with NGINX Ingress Controller
Blue-green deployment with NGINX Ingress Controller is similar to NGINX Service Mesh,
except that traffic originates from outside the cluster rather than from inside the bash container.
(You can generate that traffic with the Traffic-Management/generate-traffic.sh script;
see Step 4 in Activating Client Rate Limiting with NGINX Ingress Controller.)
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServer
3 metadata:
4 name: reviews
5 spec:
6 host: reviews.example.com
7 upstreams:
8 - name: reviews-v1
9 service: reviews-v1
10 port: 9080
11 - name: reviews-v2-1
12 service: reviews-v2-1
13 port: 9080
14 routes:
15 - path: /
16 splits:
17 - weight: 1
18 action:
19 pass: reviews-v1
20 - weight: 99
21 action:
22 pass: reviews-v2-1
View on GitHub
Canary Deployment with NGINX Service Mesh
The following example uses NGINX Service Mesh to implement a canary deployment that
directs 10% of traffic to the new version (reviews-v2-1) of the sample service and the rest to
the old one (reviews-v1).
1 apiVersion: split.smi-spec.io/v1alpha3
2 kind: TrafficSplit
3 metadata:
4 name: reviews
5 spec:
6 service: reviews
7 backends:
8 - service: reviews-v1
9 weight: 90
10 - service: reviews-v2-1
11 weight: 10
View on GitHub
2. Generate traffic to the reviews service (if necessary, repeat the instructions in Generating
Cluster-Internal Traffic to Split to create the script):
$ bash traffic.sh
4. Run the nginx-meshctl top command to verify the split: about 10% of requests (16) are
going to the reviews-v2-1 service:
$ nginx-meshctl top
Deployment Incoming Success Outgoing Success NumRequests
bash 100.00% 16
reviews-v1 100.00% 164
reviews-v2-1 100.00% 16
Canary Deployment with NGINX Ingress Controller
Canary deployment with NGINX Ingress Controller is similar to NGINX Service Mesh, except
that traffic originates from outside the cluster rather than from inside the bash container.
(You can generate that traffic with the Traffic-Management/generate-traffic.sh script; see
Step 4 in Activating Client Rate Limiting with NGINX Ingress Controller.)
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServer
3 metadata:
4 name: reviews
5 spec:
6 host: reviews.example.com
7 upstreams:
8 - name: reviews-v1
9 service: reviews-v1
10 port: 9080
11 - name: reviews-v2-1
12 service: reviews-v2-1
13 port: 9080
14 routes:
15 - path: /
16 splits:
17 - weight: 90
18 action:
19 pass: reviews-v1
20 - weight: 10
21 action:
22 pass: reviews-v2-1
View on GitHub
A/B Testing with NGINX Service Mesh
The following example uses NGINX Service Mesh to split traffic between two app versions
based on which browser the client is using.
1 apiVersion: specs.smi-spec.io/v1alpha3
2 kind: HTTPRouteGroup
3 metadata:
4 name: reviews-testgroup-rg
5 namespace: default
6 spec:
7 matches:
8 - name: test-users
9 headers:
10 - user-agent: ".*Firefox.*"
View on GitHub
12 apiVersion: split.smi-spec.io/v1alpha3
13 kind: TrafficSplit
14 metadata:
15 name: reviews
16 spec:
17 service: reviews
18 backends:
19 - service: reviews-v1
20 weight: 0
21 - service: reviews-v3
22 weight: 100
23 matches:
24 - kind: HTTPRouteGroup
25 name: reviews-testgroup-rg
View on GitHub
3. Generate traffic to the reviews service (if necessary, repeat the instructions in Generating
Cluster-Internal Traffic to Split to create the script):
$ bash traffic.sh
Recall that traffic.sh includes this command to generate traffic from the test group by
setting the User-Agent header to Firefox:
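The exact command is not reproduced here; a sketch consistent with the traffic script above (the header value and service port are assumptions):
curl -s -o /dev/null -H "user-agent: Firefox" http://reviews:9080/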
$ nginx-meshctl top
Deployment Incoming Success Outgoing Success NumRequests
bash 100.00% 65
reviews-v3 100.00% 65
reviews-v1 100.00% 65
A/B Testing with NGINX Ingress Controller
A/B testing with NGINX Ingress Controller is similar to NGINX Service Mesh, except that traffic
originates from outside the cluster rather than from inside the bash container. (You can generate
that traffic with the Traffic-Management/generate-traffic.sh script; see Step 4 in Activating
Client Rate Limiting with NGINX Ingress Controller.)
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServer
3 metadata:
4 name: reviews
5 spec:
6 host: reviews.example.com
7 upstreams:
8 - name: reviews-v1
9 service: reviews-v1
10 port: 9080
11 - name: reviews-v2-1
12 service: reviews-v2-1
13 port: 9080
14 - name: reviews-v3
15 service: reviews-v3
16 port: 9080
17 routes:
18 - path: /
19 matches:
20 - conditions:
21 - header: "user-agent"
22 value: ".*Firefox.*"
23 action:
24 pass: reviews-v3
25 action:
26 pass: reviews-v1
View on GitHub
Implementing Debug Routing
Suppose you've added a new feature to the sample reviews service and want to test how the
feature performs in production. This is a use case for debug routing, which restricts access to
the service to a defined group of users, based on Layer 7 attributes such as a session cookie,
session ID, or group ID. This makes the updating process safer and more seamless.
Debug Routing with NGINX Service Mesh
The following example uses NGINX Service Mesh to direct traffic from users who have a
session cookie to a development version of the app.
1 apiVersion: specs.smi-spec.io/v1alpha3
2 kind: HTTPRouteGroup
3 metadata:
4 name: reviews-session-cookie
5 namespace: default
6 spec:
7 matches:
8 - name: get-session-cookie
9 headers:
10 - Cookie: "session_token=xxx-yyy-zzz"
11 - name: get-api-requests
12 pathRegex: "/api/reviews"
13 methods:
14 - GET
View on GitHub
16 apiVersion: split.smi-spec.io/v1alpha3
17 kind: TrafficSplit
18 metadata:
19 name: reviews
20 spec:
21 service: reviews
22 backends:
23 - service: reviews-v1
24 weight: 0
25 - service: reviews-v3
26 weight: 100
27 matches:
28 - kind: HTTPRouteGroup
29 name: reviews-session-cookie
View on GitHub
$ nginx-meshctl top
Deployment Incoming Success Outgoing Success NumRequests
bash 100.00% 65
reviews-v1 100.00% 65
reviews-v3 100.00% 65
Debug Routing with NGINX Ingress Controller
Debug routing with NGINX Ingress Controller is similar to NGINX Service Mesh, except that
traffic originates from outside the cluster rather than from inside the bash container. (You can
generate that traffic with the Traffic-Management/generate-traffic.sh script; see Step 4 in
Activating Client Rate Limiting with NGINX Ingress Controller.)
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServer
3 metadata:
4 name: reviews
5 spec:
6 host: reviews.example.com
7 upstreams:
8 - name: reviews-v1
9 service: reviews-v1
10 port: 9080
11 - name: reviews-v2-1
12 service: reviews-v2-1
13 port: 9080
14 - name: reviews-v3
15 service: reviews-v3
16 port: 9080
17 routes:
18 - path: /api/reviews
19 matches:
20 - conditions:
21 - header: "cookie"
22 value: "session_token=xxx-yyy-zzz"
23 - variable: $request_method
24 value: GET
25 action:
26 pass: reviews-v3
27 action:
28 pass: reviews-v1
View on GitHub
CHAPTER SUMMARY
Application traffic management is an important contributor to successful operation of apps
and services. In this chapter we used NGINX Ingress Controller and NGINX Service Mesh
to apply traffic control – for better app performance and resilience – and traffic splitting for
seamless upgrades without downtime and testing of new app versions and features.
Let’s summarize some of the key concepts from this chapter:
• NGINX Ingress Controller supports TCP and UDP load balancing for use cases involving
TCP- and UDP-based apps and utilities.
• TLS Passthrough is an effective option for Layer 4 (TCP) routing where the encrypted
traffic is passed to application workloads in Kubernetes.
• Traffic control and traffic splitting are two basic categories of traffic-management methods.
– Traffic control refers to the act of regulating the flow of traffic to your apps in terms of
source, volume, and destination, using rate limiting and circuit breaking.
◦ Rate limiting refers to setting the maximum number of requests that an app or
service accepts from each client within a certain time period. Set rate limits with
NGINX Ingress Controller for traffic from cluster-external clients, and with NGINX
Service Mesh for traffic from other services within the Kubernetes cluster.
◦ Canary deployment directs a small proportion of traffic to a new app version to verify
that it performs well, while the rest of the traffic continues to go to the current version.
When the new version proves stable, it receives increasing amounts of traffic.
◦ A/B testing helps determine which version of an app users prefer. A group of users
is defined based on characteristics (such as the value of an HTTP header) and their
requests are always sent to one of the versions.
◦ Debug routing is great for verifying the performance or stability of a new app
version or feature by directing traffic to it from a group of users selected on the
basis of Layer 7 attributes (such as the session cookie they are using).
3. Monitoring and Visibility Use Cases
In this chapter we explore how to use NGINX and third-party tools and services for monitoring,
visibility, tracing, and the insights required for successful traffic management in Kubernetes.
• Distributed Tracing, Monitoring, and Visualization with Jaeger, Prometheus, and Grafana
• Chapter Summary
Monitoring and visibility are crucial for successful app delivery, as tracking of app availability
and request processing helps you identify issues quickly and resolve them in a timely way.
For this reason, by default the NGINX Plus API and live monitoring dashboard are enabled
for the NGINX Ingress Controller based on NGINX Plus.
(The stub_status module is enabled by default for the NGINX Ingress Controller based
on NGINX Open Source, but no dashboard is provided.)
The NGINX Plus live monitoring dashboard is enabled on port 8080 by default, but you can
designate a different port by adding the following line to the args section (starting on line 66)
of Installation-Deployment/nginx-plus-ingress.yaml:
- -nginx-status-port=<port_number>
View on GitHub
Although we don’t recommend it, you can disable statistics gathering completely by adding
this line to the args section:
- -nginx-status=false
View on GitHub
The main page of the dashboard displays summary metrics, which you can explore in
fine‑grained detail, down to the level of a single pod, on the tabs:
• HTTP Zones – Statistics for each server{} and location{} block in the http{}
context that includes the status_zone directive
• HTTP Upstreams – Statistics for each upstream{} block in the http{} context that
includes the zone directive
• Caches – Statistics for each cache
• Shared Zones – The amount of memory currently used by each shared memory zone
For more information about the tabs, see the NGINX Plus documentation.
DISTRIBUTED TRACING, MONITORING, AND VISUALIZATION WITH JAEGER, PROMETHEUS, AND GRAFANA
Although a microservices-based application looks like a single entity to its clients, internally
it’s a daisy-chain network of several microservices involved in completing a request from
end users. How can you troubleshoot issues as requests are routed through this potentially
complex network? Distributed tracing is a method for tracking services that shows detailed
session information for all requests as they are processed, helping you diagnose issues with
your apps and services.
Traffic management tools such as load balancers, reverse proxies, and Ingress controllers
generate a lot of information about the performance of your services and applications. You can
configure NGINX Ingress Controller and NGINX Service Mesh to feed such information to
third‑party monitoring tools, which among other features give you extra insight with visualization
of performance over time.
In this section we show how to deploy three of the most popular tools: Jaeger, Prometheus, and Grafana.
Distributed tracing, monitoring, and visualization are enabled by default for NGINX Service Mesh,
but a server for each of Jaeger, Prometheus, and Grafana must be deployed in the cluster.
At the time of writing, NGINX Service Mesh automatically deploys default Jaeger, Prometheus,
and Grafana servers. These deployments are intended for onboarding and evaluation, and
might not be feature-complete or robust enough for production environments. In addition,
by default the data they gather and present does not persist.
For production environments, we recommend that you separately deploy Jaeger, Prometheus,
and Grafana. In the Monitoring-Visibility directory of the eBook repo, we provide configuration
for each server: jaeger.yaml, prometheus.yaml, and grafana.yaml.
Notes:
• The next planned release of NGINX Service Mesh, version 1.5, will not create default
deployments of these servers, making the commands in Step 1 below mandatory even
for non-production environments.
1. Create the monitoring namespace and configure Grafana, Prometheus, and Jaeger:
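A minimal sketch of the commands, using the manifests named above (whether the namespace is created separately or inside those manifests is an assumption):
$ kubectl create namespace monitoring
$ kubectl apply -f Monitoring-Visibility/jaeger.yaml
$ kubectl apply -f Monitoring-Visibility/prometheus.yaml
$ kubectl apply -f Monitoring-Visibility/grafana.yaml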
2. Remove the existing NGINX Service Mesh deployment and redeploy it, pointing it at the new Prometheus and Jaeger instances:
$ nginx-meshctl remove
$ nginx-meshctl deploy --sample-rate 1 --prometheus-address "prometheus-service.monitoring:9090" --tracing-address "jaeger.monitoring:6831"
NGINX Ingress Controller supports distributed tracing with a third-party OpenTracing module
that works with Datadog, Jaeger, and Zipkin. Distributed tracing is disabled by default.
To enable distributed tracing with OpenTracing and Jaeger:
1. Add lines 7–20 to the data section of the ConfigMap for NGINX Ingress Controller, in
Monitoring-Visibility/nginx-config.yaml:
6 data:
7 opentracing: "True"
8 opentracing-tracer: "/usr/local/lib/libjaegertracing_plugin.so"
9 opentracing-tracer-config: |
10 {
11 "service_name": "nginx-ingress",
12 "propagation_format": "w3c",
13 "sampler": {
14 "type": "const",
15 "param": 1
16 },
17 "reporter": {
18 "localAgentHostPort": "jaeger.monitoring.svc.cluster.local:6831"
19 }
20 }
View on GitHub
2. Apply the ConfigMap:
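A minimal sketch, using the nginx-config.yaml file named above:
$ kubectl apply -f Monitoring-Visibility/nginx-config.yaml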
3. Run the kubectl port-forward command to forward connections made to port 16686
on your local machine to the Jaeger service in the monitoring namespace:
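A sketch of the command; the Jaeger Service name is an assumption based on the tracing address used when redeploying the mesh:
$ kubectl port-forward -n monitoring svc/jaeger 16686:16686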
Metrics from the NGINX Ingress Controller based on NGINX Plus (as well as general latency
metrics) are exposed in Prometheus format at /metrics, on port 9113 by default. To change
the port, see Step 1.
1. Include the following settings in the configuration for the NGINX Ingress Controller
based on NGINX Plus, in Installation-Deployment/nginx-plus-ingress.yaml:
• A label with nginx-ingress as the resource name:
13 labels:
14 app: nginx-ingress
15 nsm.nginx.com/deployment: nginx-ingress
View on GitHub
• Annotations that enable Prometheus to scrape the metrics endpoint:
17 annotations:
20 prometheus.io/scrape: "true"
21 prometheus.io/port: "9113"
22 prometheus.io/scheme: "http"
View on GitHub
• Command-line arguments that enable Prometheus metrics and latency metrics:
66 args:
71 - -enable-prometheus-metrics
72 - -enable-latency-metrics
View on GitHub
• (Optional) An argument that changes the port on which metrics are exposed:
- -prometheus-metrics-listen-port=<port_number>
View on GitHub
Visualizing Distributed Tracing and Monitoring Data
There are several ways to visualize data from distributed tracing and monitoring of NGINX
Ingress Controller and NGINX Service Mesh.
To display distributed tracing data for NGINX Ingress Controller, open the Jaeger dashboard
in a browser at http://localhost:16686.
This sample Jaeger dashboard shows details for four requests, with the most recent at the top.
The information includes how much time a response took, and the length of time between
responses. In the example, the most recent response, with ID 00496e0, took 2.83ms, starting
about 4 seconds after the previous response.
Prometheus metrics exported from the NGINX Ingress Controller are prefixed with nginx_ingress
and metrics exported from the NGINX Service Mesh sidecars are prefixed with nginxplus.
For example, nginx_ingress_controller_upstream_server_response_latency_ms_count
is specific to NGINX Ingress Controller, while nginxplus_upstream_server_response_latency_ms_count
is specific to NGINX Service Mesh sidecars.
For more information, see:
• NGINX Plus Ingress Controller Metrics in the NGINX Service Mesh documentation
To display metrics for NGINX Ingress Controller and NGINX Service Mesh with Grafana, open
the Grafana UI in a browser at http://localhost:3000. Add Prometheus as a data source and
create a dashboard. This example includes global success rate and request volume per second,
memory usage, and more:
The Elastic Stack (formerly called the ELK stack) is a popular open source logging tool made
up of three base tools: Elasticsearch for search and analytics, Logstash as the data processing
pipeline, and Kibana for charts and graphs.
In this section we explain how to collect and visualize NGINX Ingress Controller logs with
Elastic Stack, using the Filebeat module for NGINX. Filebeat monitors the log files or locations
that you specify, collects log events, and forwards them to either Elasticsearch or Logstash
for indexing.
• Access log – Information about client requests recorded right after the request is processed.
To customize the information included in the access log entries, add these ConfigMap
keys to the data section (starting on line 6) of Monitoring-Visibility/nginx-config.yaml:
– log-format for HTTP and HTTPS traffic
– stream-log-format for TCP, UDP, and TLS Passthrough traffic
For an example of log-entry customization, see the NGINX Ingress Controller repo on
GitHub. For a list of all the NGINX built-in variables you can include in log entries, see
the NGINX reference documentation.
Although we do not recommend that you disable access logging, you can do so
by including this key in the data section of nginx-config.yaml:
access-log-off: "true"
View on GitHub
Run:
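Presumably this applies the updated ConfigMap; a minimal sketch, assuming the change was made in the same nginx-config.yaml referenced above:
$ kubectl apply -f Monitoring-Visibility/nginx-config.yaml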
• Error log – Information about error conditions at the severity levels you configure with
the error-log-level ConfigMap key.
To enable debug logging, include this key in the data section (starting on line 6) of
Monitoring-Visibility/nginx-config.yaml:
error-log-level: "debug"
View on GitHub
Also include this line in the args section (starting on line 66) of Installation-Deployment/
nginx-plus-ingress.yaml. This starts NGINX Ingress Controller in debug mode.
- -nginx-debug
View on GitHub
Enabling Filebeat
Filebeat is a module in the Elastic stack that “monitors the log files or locations that you specify,
collects log events, and forwards them either to Elasticsearch or Logstash for indexing”.
Here we use two Filebeat features:
• The module for NGINX, which parses the NGINX access and error logs
• The autodiscover feature, which tracks containers as their status changes (they spin up
and down or change locations) and adapts logging settings automatically
1. Sign in to your Elastic Cloud account (start a free trial if you don’t already have an account)
and create a deployment. Record the username and password for the deployment in a
secure location, as you need the password in the next step and cannot retrieve it after
the deployment is created.
On line 26, replace cloud_ID with the Cloud ID associated with your Elastic Cloud
deployment. (To access the Cloud ID, select Manage this deployment in the left-hand
navigation column in Elastic Cloud. The value appears in the Cloud ID field on the page
that opens.)
On line 27, replace password with the password associated with the deployment, which
you noted in Step 1.
11 filebeat.autodiscover:
12 providers:
13 - type: kubernetes
14 templates:
15 - condition:
16 equals:
17 kubernetes.container.name: "nginx-plus-ingress"
18 config:
19 - module: nginx
20 access:
21 enabled: true
22 input:
23 type: container
24 paths:
25 - /var/log/containers/*-${data.kubernetes.container.id}.log
26 cloud.id: "cloud_ID"
27 cloud.auth: "elastic:password"
View on GitHub
4. Confirm that your Filebeat deployment appears on the Data Streams tab of the Elastic
Cloud Index Management page. (To navigate to the tab, click Stack Management in the
Management section of the left-hand navigation column. Then click Index Management
in the navigation column and Data Streams on the Index Management page.) In the
screenshot, the deployment is called filebeat-8.1.1.
To display the NGINX Ingress Controller access and error logs that Filebeat has forwarded
to Elasticsearch, access the Stream page. (In the left-hand navigation column, click Logs in
the Observability section. The Stream page opens by default.)
The Filebeat module for NGINX comes with a pre-configured dashboard. To load the
Filebeat dashboards, run the following command in the Filebeat pod:
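The exact command is not shown here; a sketch using Filebeat's standard setup subcommand, run inside the Filebeat pod (the pod name placeholder is an assumption):
$ kubectl exec -it <filebeat_pod> -- filebeat setup --dashboards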
Navigate to Dashboards and search for nginx to see the available dashboards:
Enabling Metricbeat and Displaying NGINX Ingress Controller and
NGINX Service Mesh Metrics
Just as the Filebeat module for NGINX exports NGINX Ingress Controller logs to Elasticsearch,
the Metricbeat module for NGINX scrapes Prometheus metrics from NGINX Ingress Controller
and NGINX Service Mesh and sends them to Elasticsearch.
1. Sign in to your Elastic Cloud account if you have not already done so. (For information
about creating an account, as well as the autodiscover feature, see Enabling Filebeat.)
2. Configure the Metricbeat NGINX module with the autodiscover feature to scrape
metrics and display them in the Elastic Metrics Explorer. The templates section
(starting on line 14) of Monitoring-Visibility/elk/metricbeat.yaml directs the
autodiscover subsystem to start monitoring new services when they initialize.
By default, the NGINX Service Mesh sidecar and NGINX Ingress Controller expose
metrics in Prometheus format at /metrics, on ports 8887 and 9113 respectively.
To change the defaults, edit lines 20–21 for NGINX Service Mesh and lines 29–30
for NGINX Ingress Controller.
On line 31, replace cloud_ID with the Cloud ID associated with your Elastic Cloud
deployment. (To access the Cloud ID, select Manage this deployment from the
left-hand navigation column in Elastic Cloud. The value appears in the Cloud ID field
on the page that opens.)
On line 32, replace password with the password associated with the deployment.
11 metricbeat.autodiscover:
12 providers:
13 - type: kubernetes
14 templates:
15 - condition.equals:
16 kubernetes.container.name: "nginx-mesh-sidecar"
17 config:
18 - module: prometheus
19 period: 10s
20 hosts: ["${data.host}:8887"]
21 metrics_path: /metrics
22 - type: kubernetes
23 templates:
24 - condition.equals:
25 kubernetes.container.name: "nginx-plus-ingress"
26 config:
27 - module: prometheus
28 period: 10s
29 hosts: ["${data.host}:9113"]
30 metrics_path: /metrics
31 cloud.id: "cloud_ID"
32 cloud.auth: "elastic:password"
View on GitHub
For more information about logging and the Elastic Stack, see:
• How to monitor NGINX web servers with the Elastic Stack on the Elastic blog
• Run Filebeat on Kubernetes and Run Metricbeat on Kubernetes in the Elastic documentation
• Nginx module (Filebeat) and Nginx module (Metricbeat) in the Elastic documentation
DISPLAYING LOGS AND METRICS WITH AMAZON CLOUDWATCH
Amazon CloudWatch is a monitoring and observability service that provides a unified view of your
NGINX Ingress Controller and NGINX Service Mesh deployment in the CloudWatch console.
Configuring CloudWatch
To configure and use CloudWatch, you create two configurations: a standard Prometheus
<scrape_config> configuration and a CloudWatch agent configuration.
For descriptions of the fields, see CloudWatch agent configuration for Prometheus in
the CloudWatch documentation.
53 data:
54 cwagentconfig.json: |
55 {
60 "logs": {
61 "metrics_collected": {
62 "prometheus": {
63 "prometheus_config_path": "/etc/prometheusconfig/prometheus.yaml",
64 "log_group_name":"nginx-metrics",
65 "cluster_name":"nginx-demo-cluster",
66 "emf_processor": {
67 "metric_declaration": [
76 {
77 "source_labels": ["job"],
78 "label_matcher": "nic",
79 "dimensions": [["PodNamespace","PodName"]],
80 "metric_selectors": [
81 "^nginx*"
82 ]
83 }
84 ]
85 }
86 }
87 },
89 }
90 }
View on GitHub
97 data:
98 prometheus.yaml: |
99 global:
100 scrape_interval: 1m
101 scrape_timeout: 5s
102 scrape_configs:
126 - job_name: nic
127 sample_limit: 10000
128 kubernetes_sd_configs:
129 - role: pod
130 relabel_configs:
131 - source_labels: [ __meta_kubernetes_pod_container_name ]
132 action: keep
133 regex: '^nginx-plus-ingress$'
134 - action: replace
135 source_labels:
136 - __meta_kubernetes_namespace
137 target_label: PodNamespace
138 - action: replace
139 source_labels:
140 - __meta_kubernetes_pod_name
141 target_label: PodName
142 - action: labelmap
143 regex: __meta_kubernetes_pod_label_(.+)
View on GitHub
4. Specify your AWS credentials by replacing:
• << AWS_access_key >> on line 153 with your AWS access key
• << AWS_secret_access_key >> on line 154 with your secret access key
(For instructions about creating and accessing AWS access keys, see the AWS
documentation.)
150 data:
151 credentials: |
152 [AmazonCloudWatchAgent]
153 aws_access_key_id = << AWS_access_key >>
154 aws_secret_access_key = << AWS_secret_access_key >>
View on GitHub
If you have an AWS session token, add this line directly below line 154:
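The line itself is not reproduced here; a sketch using the standard AWS credentials-file key and the same placeholder convention as lines 153-154:
aws_session_token = << AWS_session_token >>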
With this configuration in place, the nginx-metrics log group appears on the
Log Groups tab.
1. In the left-hand navigation column, select All metrics in the Metrics section.
3. On the Browse tab, click the checkbox at the left end of the row for a metric to
graph it in the upper part of the page. This screenshot displays a graph of the
nginx_ingress_nginxplus_http_requests_total metric.
Capturing Logs in CloudWatch with Fluent Bit
There are two ways to send logs from your containers to CloudWatch: Fluent Bit and
Fluentd. Here we use Fluent Bit because it has the following advantages over Fluentd:
• A smaller resource footprint and more resource-efficient usage of memory and CPU
• The image is developed and maintained by AWS, resulting in quicker adoption of new
Fluent Bit image features and faster reaction to bugs or other issues
66 nginx-ingress.conf: |
67 [INPUT]
68 Name tail
69 Tag nic.data
70 Path /var/log/containers/nginx-ingress*.log
71 Parser docker
72 DB /var/fluent-bit/state/flb_log.db
73 Mem_Buf_Limit 5MB
74 Skip_Long_Lines On
75 Refresh_Interval 10
76 [FILTER]
77 Name parser
78 Match nic.*
79 Key_Name log
80 Parser nginx_nic
81 Reserve_Data On
82 Preserve_Key On
83 [OUTPUT]
84 Name cloudwatch_logs
85 Match nic.*
86 region ${AWS_REGION}
87 log_group_name nginx-logs
88 log_stream_prefix ${HOST_NAME}-
89 auto_create_group true
90 extra_user_agent container-insights
View on GitHub
For more information, see:
• Set up NGINX with sample traffic on Amazon EKS and Kubernetes in the
CloudWatch documentation
CHAPTER SUMMARY
We discussed why actionable insights into app and service performance are crucial to
successful traffic management in Kubernetes clusters – they help you quickly identify and
resolve issues that worsen user experience. We showed how to configure tools for tracing,
monitoring, and visibility of NGINX Ingress Controller and NGINX Service Mesh.
Let’s summarize some of the key concepts from this chapter:
• With the NGINX Ingress Controller based on NGINX Plus, the NGINX Plus API and live
activity monitoring dashboard are enabled by default and report key load-balancing
and performance metrics for insight into app performance and availability.
• The Jaeger distributed tracing service tracks and shows detailed session information
for all requests as they are processed, which is key to diagnosing and troubleshooting
issues with apps and services.
• The Elastic Stack (formerly the ELK stack) is a popular open source logging tool made
up of Elasticsearch for search and analytics, Logstash as the data processing pipeline,
and Kibana for charts and graphs.
4. Identity and Security Use Cases
While there are many ways to protect applications, authenticating user identities and
enforcing permissions are probably the most common way to prevent unauthorized access
to application resources. App developers commonly leverage a third-party identity provider
to manage user credentials and digital profiles. Identity providers eliminate the need for app
developers and administrators to write and manage bespoke solutions for authenticating
users' digital identities and controlling access to application resources.
Identity-provider solutions increase overall user satisfaction by enabling single sign-on (SSO).
Users do not need to input profile data separately for each app and then remember the
associated unique usernames and passwords. Instead, one paired username and password
enables access to all apps. The identity provider enforces consistent security criteria for
identity attributes like passwords, reducing end-user frustration with the registration process
which can lead to abandonment.
OpenID Connect (OIDC) is an authentication protocol for SSO built on the industry-standard
OAuth 2.0 protocol. We show how to implement a full-fledged SSO solution that supports
the OIDC Authorization Code Flow, with the NGINX Ingress Controller based on
F5 NGINX Plus as the relying party, which analyzes user requests to determine the
requester's level of authorization and routes requests to the appropriate app service.
(For ease of reading, the remainder of this chapter uses the term NGINX Ingress Controller
for the NGINX Plus-based model.)
We provide instructions for implementing SSO with three OIDC identity providers (IdPs):
Okta, Azure Active Directory (AD), and Ping Identity.
Finally, we show how to “shift security left” by integrating F5 NGINX App Protect WAF with
NGINX Ingress Controller.
• Chapter Summary
IMPLEMENTING SSO WITH OKTA
In this section you use the Okta CLI to preconfigure Okta as the OIDC identity provider (IdP)
for SSO and then configure NGINX Ingress Controller as the relying party.
• Prerequisites
Prerequisites
$ okta register
First name: <your_first_name>
Last name: <your_last_name>
Email address: <your_email_address>
Country: <your_country>
Creating new Okta Organization, this may take a minute:
An account activation email has been sent to you.
4. In the browser window that opens, the email address you provided in Step 2 appears
in the upper righthand corner. Click the down-arrow to the right of it and note the value
that appears below your email address in the pop-up (here, dev-609627xx.okta.com).
5. In the browser window, click the Create Token button. Follow Steps 3–5 of
Create the Token in the Okta documentation and record the token value.
1. Sign in to your Okta Developer account using the Okta CLI, substituting these values:
• <your_okta_domain> – URL starting with https://, followed by the
dev-xxxxxxxx.okta.com value you obtained in Step 4 of Prerequisites.
• <your_okta_API_token> – The token value you obtained in Step 5 of Prerequisites.
$ okta login
Okta Org URL: <your_okta_domain>
Okta API token: <your_okta_API_token>
2. Create an app integration for the bookinfo sample app. In response to the prompts,
type 1 (Web) and 5 (Other):
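The Okta CLI invocation is not reproduced here; a sketch using the CLI's standard app-creation subcommand (the prompts mentioned above then appear):
$ okta apps create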
3. Obtain the integrated app's client ID and client secret from the
OKTA_OAUTH2_CLIENT_ID and OKTA_OAUTH2_CLIENT_SECRET fields in .okta.env:
$ cat .okta.env
export OKTA_OAUTH2_ISSUER="https://dev-xxxxxxxx.okta.com/
oauth2/default"
export OKTA_OAUTH2_CLIENT_ID="0oa4go...b735d7"
export OKTA_OAUTH2_CLIENT_SECRET="CMRErvVMJKM...PINeaofZZ6I"
4. Base64-encode the secret:
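A sketch of the command, where the placeholder stands for the secret value copied from .okta.env:
$ echo -n <OKTA_OAUTH2_CLIENT_SECRET_value> | base64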
1 apiVersion: v1
2 kind: Secret
3 metadata:
4 name: okta-oidc-secret
5 type: nginx.org/oidc
6 data:
7 client-secret: client-secret
View on GitHub
3. Obtain the URLs for the integrated app’s authorization endpoint, token endpoint,
and JSON Web Key (JWK) file from the Okta configuration. Run the following curl
command, piping the output to the indicated python command to output the entire
configuration in an easily readable format. The output is abridged to show only the
relevant fields. In the command, for <your_okta_domain> substitute the value
(https://dev-xxxxxxxx.okta.com) you used in Step 1 of Configuring Okta as the IdP.
$ curl -s https://<your_okta_domain>/.well-known/openid-configuration | python -m json.tool
{
"authorization_endpoint":
"https://<your_okta_domain>/oauth2/v1/authorize",
...
"jwks_uri": "https://<your_okta_domain>/oauth2/v1/keys",
...
"token_endpoint": "https://<your_okta_domain>/oauth2/v1/token",
...
}
1 apiVersion: k8s.nginx.org/v1
2 kind: Policy
3 metadata:
4 name: okta-oidc-policy
5 spec:
6 oidc:
7 clientID: client-id
8 clientSecret: okta-oidc-secret
9 authEndpoint: https://your-okta-domain/oauth2/v1/authorize
10 tokenEndpoint: https://your-okta-domain/oauth2/v1/token
11 jwksURI: https://your-okta-domain/oauth2/v1/keys
View on GitHub
6. Apply the VirtualServer resource (Identity-Security/okta-oidc-bookinfo-vs.yaml) that
references okta-oidc-policy:
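A minimal sketch, applying the manifest named in this step:
$ kubectl apply -f Identity-Security/okta-oidc-bookinfo-vs.yaml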
The VirtualServer resource references okta-oidc-policy on line 16. The path definition
on line 14 means that users who request a URL starting with / are authenticated
before the request is proxied to the upstream called backend (line 18) which maps to
the productpage service (lines 10–11):
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServer
3 metadata:
4 name: bookinfo-vs
5 spec:
6 host: bookinfo.example.com
7 tls:
8 secret: bookinfo-secret
9 upstreams:
10 - name: backend
11 service: productpage
12 port: 9080
13 routes:
14 - path: /
15 policies:
16 - name: okta-oidc-policy
17 action:
18 pass: backend
View on GitHub
IMPLEMENTING SSO WITH AZURE ACTIVE DIRECTORY
In this section you use the Microsoft Azure portal to preconfigure Azure Active Directory (AD)
as the OIDC identity provider (IdP) for SSO and then configure NGINX Ingress Controller as
the relying party.
Note: The instructions and screenshots in this section are accurate as of the time of
publication, but are subject to change by Microsoft.
2. Log in at Azure Portal and click on the Azure Active Directory icon in the Azure
services section.
5. On the Register an application page that opens:
b) Click the top radio button (Single tenant) in the Supported account types section.
7. On the Certificates & secrets tab that appears, click + New client secret.
8. On the Add a client secret card that opens, enter a name in the Description field
(here, oidc-demo-secret) and select an expiration time from the Expires drop-down menu
(here, 12 months). Click the Add button.
9. The new secret appears in the table on the Certificates & secrets tab. Copy the character
string in the Value column to a safe location; you cannot access it after you leave this page.
10. In the left-hand navigation column, click Overview and note the values in these fields:
• Application (client) ID
• Directory (tenant) ID
1 apiVersion: v1
2 kind: Secret
3 metadata:
4 name: ad-oidc-secret
5 type: nginx.org/oidc
6 data:
7 client-secret: client-secret
View on GitHub
3. Obtain the URLs for the registered app’s authorization endpoint, token endpoint, and
JSON Web Key (JWK) file. Run the following curl command, piping the output to the
indicated python command to output the entire configuration in an easily readable format.
The output is abridged to show only the relevant fields. In the command, make the
following substitutions as appropriate:
• For <tenant>, substitute the value from the Directory (tenant) ID field obtained in
Step 10 of the previous section.
• Be sure to include v2.0 in the path to obtain Azure AD Endpoint V2 endpoints.
• If you are using an Azure national cloud rather than the Azure “global” cloud, substitute
your Azure AD authentication endpoint for login.microsoftonline.com.
• If your app has custom signing keys because you’re using the Azure AD claims-mapping
feature in a multi-tenant environment, also append the appid query parameter to get
the jwks_uri value that is specific to your app’s signing key. For <app_id>, substitute
the value from the Application (client) ID field obtained in Step 10 of the previous section.
$ curl -s https://login.microsoftonline.com/<tenant>/v2.0/.well-known/openid-configuration?appid=<app_id> | python -m json.tool
{
"authorization_endpoint":
"https://login.microsoftonline.com/<tenant>/oauth2/v2.0/
authorize",
...
"jwks_uri":
"https://login.microsoftonline.com/<tenant>/discovery/v2.0/
keys?appid=<app_id> ",
...
"token_endpoint": "https://login.microsoftonline.com/
<tenant>/oauth2/v2.0/token",
...
}
4. Edit the NGINX Ingress OIDC Policy in Identity-Security/ad-oidc-policy.yaml, replacing
the parameters as indicated:
• ad-client-id on line 7 – The value from the Application (client) ID field obtained in
Step 10 of the previous section
• token on lines 9–11 – The value used for <tenant> in the command in the previous step
(as obtained from the Directory (tenant) ID field in Step 10 of the previous section)
• appid on line 11 – The value used for <app_id> in the command in the previous step
(and the same as ad-client-id on line 7)
1 apiVersion: k8s.nginx.org/v1
2 kind: Policy
3 metadata:
4 name: ad-oidc-policy
5 spec:
6 oidc:
7 clientID: ad-client-id
8 clientSecret: ad-oidc-secret
9 authEndpoint: https://login.microsoftonline.com/token/oauth2/
v2.0/authorize
10 tokenEndpoint: https://login.microsoftonline.com/token/oauth2/
v2.0/token
11 jwksURI: https://login.microsoftonline.com/token/discovery/v2.0/
keys?appid=appid
View on GitHub
The VirtualServer resource references ad-oidc-policy on line 16. The path definition
on line 14 means that users who request a URL starting with / are authenticated before
the request is proxied to the upstream called backend (line 18) which maps to the
productpage service (lines 10–11):
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServer
3 metadata:
4 name: bookinfo-vs
5 spec:
6 host: bookinfo.example.com
7 tls:
8 secret: bookinfo-secret
9 upstreams:
10 - name: backend
11 service: productpage
12 port: 9080
13 routes:
14 - path: /
15 policies:
16 - name: ad-oidc-policy
17 action:
18 pass: backend
View on GitHub
For more information about OIDC with Azure, see the Microsoft documentation.
IMPLEMENTING SSO WITH PING IDENTITY
In this section you use the Ping Identity portal to preconfigure Ping Identity as the OIDC identity
provider (IdP) for SSO and then configure NGINX Ingress Controller as the relying party.
• Configuring NGINX Ingress Controller as the Relying Party with Ping Identity
Note: The instructions and screenshots in this section are accurate for the PingOne for
Customers product as of the time of publication, but are subject to change by Ping Identity.
3. From the ADVANCED CONFIGURATION box, click the Configure button for the OIDC
connection type.
7. Access the Configuration tab for the sample application (here, demo-oidc) and note
the following:
• In the GENERAL section, the value in both the CLIENT ID and CLIENT SECRET fields
(to see the actual client secret, click the eye icon).
8. In a terminal, Base64-encode the secret you obtained from the CLIENT SECRET field in
the previous step.
1 apiVersion: v1
2 kind: Secret
3 metadata:
4 name: ping-oidc-secret
5 type: nginx.org/oidc
6 data:
7 client-secret: client-secret
View on GitHub
1 apiVersion: k8s.nginx.org/v1
2 kind: Policy
3 metadata:
4 name: ping-oidc-policy
5 spec:
6 oidc:
7 clientID: ping-client-id
8 clientSecret: ping-oidc-secret
9 authEndpoint: https://auth.pingone.com/token/as/authorize
10 tokenEndpoint: https://auth.pingone.com/token/as/token
11 jwksURI: https://auth.pingone.com/token/as/jwks
View on GitHub
5. Apply the VirtualServer resource (Identity-Security/ping-oidc-bookinfo-vs.yaml) that
references ping-oidc-policy:
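A minimal sketch, applying the manifest named in this step:
$ kubectl apply -f Identity-Security/ping-oidc-bookinfo-vs.yaml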
The VirtualServer resource references ping-oidc-policy on line 16. The path definition
on line 14 means that users who request a URL starting with / are authenticated before
the request is proxied to the upstream called backend (line 18) which maps to the
productpage service (lines 10–11):
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServer
3 metadata:
4 name: bookinfo-vs
5 spec:
6 host: bookinfo.example.com
7 tls:
8 secret: bookinfo-secret
9 upstreams:
10 - name: backend
11 service: productpage
12 port: 9080
13 routes:
14 - path: /
15 policies:
16 - name: ping-oidc-policy
17 action:
18 pass: backend
View on GitHub
In the previous three sections, we showed how to move the authentication process from the
application layer to three third-party OIDC IdPs (Okta, Microsoft Azure Active Directory, and
Ping Identity). What if we want to enable each user to use the same set of credentials to
access more than one application or application service? What if your system scales to tens
or even hundreds of applications that users need to access using the same set of credentials?
Single sign-on (SSO) is the solution that addresses this problem.
You can easily add new application integrations by defining other OIDC policies if necessary
and referencing the policies in VirtualServer resources; a minimal sketch of such a
VirtualServer appears after the diagram below. In the following diagram, there are
two subdomains, unit-demo.marketing.net and unit-demo.engineering.net, which both
resolve to the external IP address of NGINX Ingress Controller. NGINX Ingress Controller
routes requests to either the Marketing app or the Engineering app based on the subdomain.
Once the identity of a user is verified, the user can access both applications until the
ID token issued by the IdP for the session expires or is no longer valid.
[Diagram: Clients reach NGINX Ingress Controller, which verifies identity with the IdP and routes unit-demo.marketing.net to the Marketing app and unit-demo.engineering.net to the Engineering app inside the Kubernetes cluster]
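As a minimal sketch of this pattern (the resource, upstream, Secret, and Service names here are hypothetical), a second VirtualServer for the Engineering app can reference an existing OIDC Policy such as the ping-oidc-policy defined earlier:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: engineering-vs
spec:
  host: unit-demo.engineering.net
  tls:
    secret: engineering-secret
  upstreams:
  - name: engineering
    service: engineering-svc
    port: 80
  routes:
  - path: /
    policies:
    - name: ping-oidc-policy
    action:
      pass: engineering
This matches the behavior described above: once authenticated, the user can access both applications until the ID token expires.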
DEPLOYING NGINX APP PROTECT WITH
NGINX INGRESS CONTROLLER
Having discussed authentication approaches, let’s look at how to improve application security.
NGINX App Protect Web Application Firewall (WAF) provides advanced protection for apps and
APIs against attacks by bad actors, with minimal configuration and management overhead. The
diagram shows how NGINX App Protect WAF can be embedded in NGINX Ingress Controller:
[Diagram: NGINX App Protect WAF embedded in NGINX Ingress Controller, which applies a WAF policy and WAF signature database, configured through an Ingress resource, to protect the apps in the Kubernetes cluster]
There are several benefits to integrating NGINX App Protect WAF into NGINX
Ingress Controller:
• Consolidating the data plane – Embedding the WAF within the Ingress controller
eliminates the need for a separate WAF device. This reduces complexity, cost, and the
number of points of failure.
The configuration objects for NGINX App Protect are consistent across both NGINX Ingress
Controller (using YAML files) and NGINX Plus (using JSON). A master configuration can easily
be translated and deployed to either device, making it even easier to manage WAF configuration
as code and deploy it to any application environment.
You installed NGINX App Protect along with NGINX Ingress Controller in Step 3 of Installation
and Deployment Instructions for NGINX Ingress Controller.
NGINX App Protect is configured in NGINX Ingress Controller with three custom resources:
• APPolicy defines a WAF policy for NGINX App Protect to apply. An APPolicy WAF policy
is the YAML version of a standalone, JSON-formatted NGINX App Protect policy.
• APLogConf defines the security logging behavior for NGINX App Protect WAF, such as the
log format, size limits, and which types of requests are logged.
• APUserSig defines a custom signature to protect against an attack type not covered by
the standard signature sets.
The NGINX Ingress Controller image also includes an NGINX App Protect signature set,
which is embedded at build time.
NGINX App Protect policies protect your web applications against many threats, including
the OWASP Top Ten, cross-site scripting (XSS), SQL injections, evasion techniques, information
leakage (with Data Guard), and more.
You can configure NGINX App Protect in two ways: with NGINX Ingress resources or with
the standard Kubernetes Ingress resource.
Configuring NGINX App Protect WAF with NGINX Ingress Resources
To configure NGINX App Protect WAF with NGINX Ingress resources, you define policies,
logging configuration, and custom signatures with resources like the following from the
eBook repo. You then reference these resources in an NGINX Ingress Controller Policy
resource and the Policy resource in a VirtualServer resource.
1 apiVersion: appprotect.f5.com/v1beta1
2 kind: APPolicy
3 metadata:
4 name: dataguard-alarm
5 spec:
6 policy:
16 applicationLanguage: utf-8
17 blocking-settings:
18 violations:
19 - alarm: true
20 block: false
21 name: VIOL_DATA_GUARD
22 data-guard:
23 creditCardNumbers: true
24 enabled: true
25 enforcementMode: ignore-urls-in-list
29 maskData: true
30 usSocialSecurityNumbers: true
31 enforcementMode: blocking
32 name: dataguard-alarm
33 template:
34 name: POLICY_TEMPLATE_NGINX_BASE
View on GitHub
1 apiVersion: appprotect.f5.com/v1beta1
2 kind: APLogConf
3 metadata:
4 name: logconf
5 spec:
6 content:
7 format: default
8 max_message_size: 64k
9 max_request_size: any
10 filter:
11 request_type: all
View on GitHub
1 apiVersion: appprotect.f5.com/v1beta1
2 kind: APUserSig
3 metadata:
4 name: apple
5 spec:
6 signatures:
7 - accuracy: medium
8 attackType:
9 name: Brute Force Attack
10 description: Medium accuracy user defined signature with tag (Fruits)
11 name: Apple_medium_acc
12 risk: medium
13 rule: content:"apple"; nocase;
14 signatureType: request
15 systems:
16 - name: Microsoft Windows
17 - name: Unix/Linux
18 tag: Fruits
View on GitHub
The APPolicy and APLogConf resources are applied by references to dataguard-alarm on line 8
and logconf on line 11 in the Policy resource defined in Identity-Security/app-protect/waf.yaml:
1 apiVersion: k8s.nginx.org/v1
2 kind: Policy
3 metadata:
4 name: waf-policy
5 spec:
6 waf:
7 enable: true
8 apPolicy: "default/dataguard-alarm"
9 securityLog:
10 enable: true
11 apLogConf: "default/logconf"
12 logDest: "syslog:server=syslog-svc.default:514"
View on GitHub
The WAF policy is applied in turn by a reference to it on line 8 of the VirtualServer resource
defined in Identity-Security/app-protect/bookinfo-vs.yaml.
1 apiVersion: k8s.nginx.org/v1
2 kind: VirtualServer
3 metadata:
4 name: bookinfo-vs
5 spec:
6 host: bookinfo.example.com
7 policies:
8 - name: waf-policy
View on GitHub
2. Activate the APPolicy, APLogConf, and APUserSig resources by applying the WAF Policy
resource (defined in waf.yaml) that references them and the VirtualServer resource
(defined in bookinfo-vs.yaml) that references the Policy.
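Assuming you are working from a clone of the eBook repo, one way to do this is:
kubectl apply -f Identity-Security/app-protect/waf.yaml
kubectl apply -f Identity-Security/app-protect/bookinfo-vs.yaml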
kind: APPolicy
metadata:
  name: dataguard-alarm
spec:
  policy:
    data-guard:
      creditCardNumbers: true
      enabled: true
Configuring NGINX App Protect WAF with the Standard Ingress Resource
As an alternative to NGINX Ingress resources, you can use annotations in a standard
Kubernetes Ingress resource to reference NGINX App Protect policies, as in this example
(Identity-Security/app-protect/bookinfo-ingress.yaml):
1 apiVersion: networking.k8s.io/v1beta1
2 kind: Ingress
3 metadata:
4 name: bookinfo-ingress
5 annotations:
6 appprotect.f5.com/app-protect-policy: "default/dataguard-alarm"
7 appprotect.f5.com/app-protect-enable: "True"
8 appprotect.f5.com/app-protect-security-log-enable: "True"
9 appprotect.f5.com/app-protect-security-log: "default/logconf"
10 appprotect.f5.com/app-protect-security-log-destination:
"syslog:server=syslog-svc.default:514"
View on GitHub
To activate the policy and log settings, apply the Ingress resource:
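For example, using the manifest path given above:
kubectl apply -f Identity-Security/app-protect/bookinfo-ingress.yaml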
Logging
The logs for NGINX App Protect WAF and NGINX Ingress Controller are separate by design,
to accommodate the delegation of responsibility to different teams such as DevSecOps and
application owners.
NGINX Ingress Controller logs are forwarded to the local standard output, as for all
Kubernetes containers.
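For example, assuming NGINX Ingress Controller runs as the nginx-ingress Deployment in the nginx-ingress namespace (adjust the names to match your installation), you can follow its logs with:
kubectl logs -f deploy/nginx-ingress -n nginx-ingress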
• If using NGINX Ingress resources, set the logDest field of the WAF Policy resource
to the cluster IP address of the syslog service.
• If using the standard Ingress resource, set the
appprotect.f5.com/app-protect-security-log-destination annotation to the cluster IP
address of the syslog service.
The following Deployment and Service create a suitable syslog service:
1 apiVersion: apps/v1
2 kind: Deployment
3 metadata:
4 name: syslog
5 spec:
6 replicas: 1
7 selector:
8 matchLabels:
9 app: syslog
10 template:
11 metadata:
12 labels:
13 app: syslog
14 spec:
15 containers:
16 - name: syslog
17 image: balabit/syslog-ng:3.35.1
18 ports:
19 - containerPort: 514
20 - containerPort: 601
21 ---
22 apiVersion: v1
23 kind: Service
24 metadata:
25 name: syslog-svc
26 spec:
27 ports:
28 - port: 514
29 targetPort: 514
30 protocol: TCP
31 selector:
32 app: syslog
View on GitHub
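Assuming the Deployment and Service above are saved in a file named syslog.yaml (a hypothetical name), you can apply them and then look up the cluster IP address of the syslog service for use in the logDest field:
kubectl apply -f syslog.yaml
kubectl get svc syslog-svc -o jsonpath='{.spec.clusterIP}'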
Resource Thresholds
You can also use NGINX App Protect to set resource protection thresholds for both CPU
and memory utilization by NGINX App Protect processes. This is particularly important in
multi-tenant environments such as Kubernetes, which rely on resource sharing and can
potentially suffer from the “noisy neighbor” problem. The following sample ConfigMap sets
resource thresholds:
1 kind: ConfigMap
2 apiVersion: v1
3 metadata:
4 name: nginx-config
5 namespace: nginx-ingress
6 data:
7 app_protect_physical_memory_util_thresholds: "high=100 low=10"
8 app_protect_cpu_thresholds: "high=100 low=50"
9 app_protect_failure_mode_action: "drop"
For thresholds with high and low parameters, the former sets the percent utilization at
which NGINX App Protect enters failure mode and the latter the percent utilization at
which it exits failure mode. Here the high and low parameters are set to 100% and 10%
respectively for memory utilization, and to 100% and 50% for CPU utilization.
The app_protect_failure_mode_action parameter controls how NGINX App Protect handles
traffic while it is in failure mode:
• drop – App Protect rejects requests, returning 503 (Service Unavailable) and
closing the connection
• pass – App Protect forwards requests without inspecting them or enforcing any policies
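Assuming the ConfigMap above is saved in a file named nginx-config.yaml (a hypothetical name), applying it updates the existing NGINX Ingress Controller ConfigMap, and the controller picks up the new thresholds without a redeploy:
kubectl apply -f nginx-config.yaml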
Verifying user identity and enforcing security are key to successful operation for all types
of users and applications. We showed how to implement authentication of user identities
by integrating with third-party OIDC IdPs and how to secure applications with NGINX App
Protect WAF.
Let’s summarize some of the key concepts from this chapter:
• Authenticating user identities and enforcing authorization controls are as crucial for
protecting applications and APIs as any other protection technique.
• Working with an OIDC identity provider (IdP), NGINX Ingress Controller operates as the
relying party, which enforces access controls on incoming traffic and routes it to the
appropriate services in the cluster.
• Integrating NGINX Ingress Controller with an IdP involves configuring the IdP to
recognize the application, defining NGINX Policy resources, and referencing the policies
in NGINX VirtualServer resources.
• NGINX App Protect integrates into NGINX Ingress Controller to secure the application
perimeter quickly and reliably, helping to protect web applications against bad actors.
Appendix
©2022 F5, Inc. All rights reserved. F5, the F5 logo, NGINX, the NGINX logo, F5 NGINX, F5 NGINX App Protect, F5 NGINX App Protect DoS,
F5 NGINX App Protect WAF, F5 NGINX Ingress Controller, F5 NGINX Plus, F5 NGINX Service Mesh, and NGINX Open Source are trademarks
of F5 in the U.S. and in certain other countries. Other F5 trademarks are identified at f5.com. Any other products, services, or company names
referenced herein may be trademarks of their respective owners with no endorsement or affiliation, expressed or implied, claimed by F5.