Expose Microservices Using Amazon EKS (Reference Architecture)

This document describes exposing Amazon EKS microservices hosted in private subnets to the internet and to on-premises networks. It covers using Elastic Load Balancers to direct inbound traffic to pods in private subnets, NAT gateways for outbound internet traffic, and a virtual private gateway to reach an on-premises network over VPN.


Expose Microservices in a Hybrid Scenario Using Amazon EKS

Expose Amazon Elastic Kubernetes Service (Amazon EKS) microservices hosted in private subnets to the internet and to on-premises networks.

Architecture overview: a VPC (10.0.0.0/16) spans two Availability Zones. Each zone has a public subnet (10.0.1.0/24, 10.0.2.0/24) containing a NAT gateway, and a private subnet (10.0.3.0/24, 10.0.4.0/24) containing the EKS worker nodes and the AWS Load Balancer Controller. A public EKS load balancer receives internet traffic through the internet gateway, and a private EKS load balancer, resolved through a Route 53 private hosted zone, receives internal traffic. A virtual private gateway connects the VPC to the on-premises network (10.1.0.0/16) over AWS VPN or AWS Direct Connect.

Inbound external
1. Amazon Route 53 resolves incoming requests to the public Elastic Load Balancer (ELB) deployed by the AWS Load Balancer Controller.*

Inbound internal
1. Amazon Route 53 resolves incoming requests to the private ELB deployed by the AWS Load Balancer Controller.*

2. The ELBs forward traffic to applications. You can choose between two modes**:
   2.1 Instance mode: the traffic is sent to a worker node, and the service then redirects it to the pod.
   2.2 IP mode: the traffic is directed to the IP address of the pod directly.

Outbound external
A. When a pod in a private subnet initiates an outbound request to the internet, the private route table forwards the traffic to the NAT gateway (NGW).***
B. The public route table forwards the traffic from the NGW to the internet gateway (IGW).

Outbound internal
a. When a pod in a private subnet initiates an outbound request to the on-premises network, the private route table forwards the traffic to the virtual private gateway (VGW).***
b. The traffic is sent to the on-premises network over the virtual private network (VPN) or AWS Direct Connect connection.

Public route table (Public RT)
   Destination     Target
   10.0.0.0/16     local
   0.0.0.0/0       IGW

Private route table (Private RT, AZ n)
   Destination     Target
   10.0.0.0/16     local
   10.1.0.0/16     vpn-attach
   0.0.0.0/0       ngw-azn

* Recommended way to manage Create, Read, Update, and Delete (CRUD) operations on EKS-related ELBs. The AWS Load Balancer Controller satisfies Kubernetes Services with Network Load Balancers (NLBs) and Kubernetes Ingresses with Application Load Balancers (ALBs). You can also manage ingresses by implementing other ingress controllers, such as the NGINX ingress controller.
** More information here.
• You can also enable private access for your Amazon EKS cluster's Kubernetes API server endpoint and limit, or completely disable, public access from the internet. More information here.
• If you're using AWS Fargate for Amazon EKS, you will not have worker nodes, only the pod ENIs in the private subnets. You can only use ELBs with IP mode with AWS Fargate pods.

Reviewed for technical accuracy Feb 02, 2022
© 2022, Amazon Web Services, Inc. or its affiliates. All rights reserved. AWS Reference Architecture
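The Service and Ingress resources that drive the AWS Load Balancer Controller can be sketched as follows. This is a minimal example: the application names, namespace, hostname, and ports are illustrative, while the annotation keys are the ones documented for the controller.

```yaml
# Internet-facing NLB for a Kubernetes Service (illustrative app "external-app").
apiVersion: v1
kind: Service
metadata:
  name: external-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    # "instance" = mode 2.1 (traffic reaches a worker node first);
    # "ip" = mode 2.2 (traffic goes straight to the pod IP).
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  selector:
    app: external-app
  ports:
    - port: 80
      targetPort: 8080
---
# Internal ALB for a Kubernetes Ingress (illustrative app "internal-app").
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: internal.example.corp
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-app
                port:
                  number: 80
```

The private ELB in the "inbound internal" flow corresponds to the `internal` scheme above; the Route 53 private hosted zone resolves the internal hostname to that load balancer.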
Deal with Pod IP Exhaustion

Increase the IP addresses available to pods by adding dedicated subnets from the 100.64.0.0/10 and 198.19.0.0/16 ranges.*

Architecture overview: the VPC carries a primary CIDR block (10.0.0.0/16) and a secondary block (100.64.0.0/16). Each Availability Zone has a public subnet (10.0.1.0/24, 10.0.2.0/24) with a NAT gateway, a private subnet for nodes (10.0.3.0/24, 10.0.4.0/24), and a private subnet for pods (100.64.0.0/19, 100.64.32.0/19). As before, public and private EKS load balancers serve inbound traffic, and a virtual private gateway connects to the on-premises network (10.1.0.0/16) over AWS VPN or AWS Direct Connect.

Inbound external
1. Amazon Route 53 resolves incoming requests to the public ELB deployed by the AWS Load Balancer Controller.

Inbound internal
1. Amazon Route 53 resolves incoming requests to the private ELB deployed by the AWS Load Balancer Controller.

2. The ELBs forward traffic to applications. You can choose between two modes:
   2.1 Instance mode: the traffic is sent to a worker node, and the service then redirects it to the pod.
   2.2 IP mode: the traffic is directed to the IP address of the pod directly.

Outbound external
A. A pod in a private subnet initiates an outbound request to the internet. The private route table forwards the traffic to the NAT gateway (NGW).***
B. The public route table forwards the traffic from the NGW to the internet gateway (IGW).

Outbound internal
a. A pod in a private subnet initiates an outbound request to the on-premises network. The private route table forwards the traffic to the virtual private gateway (VGW).***
b. The traffic is sent to the on-premises network over the VPN or AWS Direct Connect connection.

Public route table (Public RT)
   Destination      Target
   10.0.0.0/16      local
   100.64.0.0/16    local
   0.0.0.0/0        IGW

Private nodes route table (AZ n)
   Destination      Target
   10.0.0.0/16      local
   100.64.0.0/16    local
   10.1.0.0/16      VGW
   0.0.0.0/0        ngw-azn

Private pods route table (AZ n)
   Destination      Target
   10.0.0.0/16      local
   100.64.0.0/16    local
   10.1.0.0/16      VGW
   0.0.0.0/0        ngw-azn

* By adding secondary CIDR blocks to a VPC from the RFC 6598 address space (in the example, 100.64.0.0/16), in conjunction with the CNI custom networking feature, pods no longer consume any RFC 1918 IP addresses in the VPC (in the example, pods are in subnets 100.64.0.0/19 and 100.64.32.0/19). Check out this post for a technical how-to.
** More information here.
• The default behavior of EKS is to source NAT pod traffic to the primary IP address of the hosting worker node.
• Check out this blog for multi-account settings.
• AWS Fargate for Amazon EKS supports additional CIDRs.
• The ENIConfig custom resource is used to define the subnet in which pods will be scheduled.
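The ENIConfig custom resource mentioned above can be sketched as follows. The subnet and security group IDs are placeholders; the resource kind and the `aws-node` environment variables are the ones documented for the Amazon VPC CNI plugin's custom networking feature.

```yaml
# One ENIConfig per Availability Zone, pointing pod ENIs at the
# dedicated 100.64.0.0/16 pod subnets (IDs below are placeholders).
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  # Named after the AZ so ENI_CONFIG_LABEL_DEF can select it automatically.
  name: us-east-1a
spec:
  subnet: subnet-0example1a          # e.g. the 100.64.0.0/19 pod subnet
  securityGroups:
    - sg-0examplenodegroup
```

Custom networking is then switched on through environment variables on the `aws-node` DaemonSet, for example: `kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone`.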
Expose Amazon EKS Microservices in IPv6 Clusters

Expose Amazon EKS microservices with IPv6 and connect to both IPv6 and IPv4 endpoints on the internet.

Architecture overview: a dual-stack VPC (192.168.0.0/16 and 2600:1f13:80f:8200::/56) spans two Availability Zones. Each zone has a public subnet (192.168.0.0/19 with 2600:1f13:80f:8200::/64, and 192.168.32.0/19 with 2600:1f13:80f:8201::/64) containing a NAT gateway, and a private subnet (192.168.96.0/19 with 2600:1f13:80f:8203::/64, and 192.168.128.0/19 with 2600:1f13:80f:8204::/64) containing the worker nodes. The VPC has both an internet gateway and an egress-only internet gateway, with an EKS public load balancer in front of the pods.

Inbound
a. Amazon Route 53 resolves incoming requests to the public ELBs in dual-stack mode* deployed by the AWS Load Balancer Controller.**
b. The ELB forwards traffic to the IPv6 pods (the ELB must use IP mode).

Outbound to IPv6
A. Any pod communication from within private subnets to IPv6 endpoints outside the cluster is routed through an egress-only internet gateway (EIGW).

Outbound to IPv4
0. A pod in a private subnet initiates an outbound request to an IPv4 address on the internet. It performs a DNS lookup for the endpoint and, upon receiving an IPv4 "A" response, establishes a connection with the IPv4 endpoint using an IPv4 address from the host-local 169.254.172.0/22 range.***
1. The pod's node-only unique IPv4 address is translated through NAT to the IPv4 (VPC) address of the primary network interface attached to the node.
2. The private route table forwards the traffic to the NGW, and the private IPv4 address of the node is translated by the NAT gateway to the public IPv4 address of the gateway.
3. The public route table forwards the traffic from the NGW to the IGW.

Public route table (Public RT)
   Destination                  Target
   192.168.0.0/16               local
   0.0.0.0/0                    IGW
   2600:1f13:80f:8200::/56      local
   ::/0                         IGW

Private route table (Private RT, AZ n)
   Destination                  Target
   192.168.0.0/16               local
   0.0.0.0/0                    ngw-azn
   64:ff9b::/96                 ngw-azn
   2600:1f13:80f:8200::/56      local
   ::/0                         eigw

* At the time of this writing, ALB and NLB support dual-stack for only internet-facing endpoints. More information on the ELB annotation here.
** The legacy in-tree service controller does not support IPv6.
*** EKS implements a host-local CNI plugin chained along with the VPC CNI to allocate and configure an IPv4 address for a pod. The CNI plugin configures a host-specific, non-routable IPv4 address for the pod from the 169.254.172.0/22 range.
• Moving to IPv6 also solves pod IP exhaustion, because you don't need to work around IPv4 limits.
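Requesting the dual-stack, internet-facing ELB from step a can be sketched as below. The service name and ports are illustrative; the annotation keys are taken from the AWS Load Balancer Controller documentation.

```yaml
# Dual-stack, internet-facing NLB in front of IPv6 pods.
apiVersion: v1
kind: Service
metadata:
  name: external-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    # Dual-stack is supported only for internet-facing endpoints
    # at the time of writing.
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: "dualstack"
    # IPv6 pods require IP targets (step b).
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  selector:
    app: external-app
  ports:
    - port: 80
      targetPort: 8080
```

Clients can then reach the load balancer over either IPv4 or IPv6, while the targets behind it are the cluster's IPv6 pod addresses.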
