Istio Gateway-API NodePort Access Issue On AWS
Hey guys,
I'm diving into the world of Istio and the Gateway API, and I've hit a snag. I'm trying to get the AWS Load Balancer Controller to work with an Istio gateway defined through the Gateway API. The setup uses a NodePort service, but I can only reach the gateway's NodePort on the node that hosts the gateway pod; from any other node, the port appears closed. I'm hoping someone in the Istio community can lend a hand!
Understanding the Setup and the Problem
Istio and the Gateway API are powerful tools for managing traffic in a Kubernetes environment. The Gateway API provides a more expressive and flexible way to configure ingress than the older Ingress resource. In this scenario, an AWS load balancer directs traffic to an Istio gateway, and the Istio gateway is exposed via a NodePort service. The problem: the gateway's NodePort is only reachable on the node where the gateway pod is running; on every other node the port appears closed. The cluster is running on AWS (EKS).
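For context, this is roughly how the symptom looks from the command line (a sketch; 32549 is the HTTP NodePort from the Service shown further down, and the node IP is from my cluster):

```bash
# Toward the node that hosts the gateway pod (10.0.38.213): the NodePort answers.
curl -v http://10.0.38.213:32549/

# Toward any other node in the cluster: the port appears closed (connection refused / timeout).
curl -v http://<other-node-ip>:32549/
```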
The Configuration Details
Here's a breakdown of the configurations I've put together:
- Gateway Resource: The Gateway resource is defined in the `gateway` namespace and uses the `istio` gateway class. The service type is set to NodePort via the `networking.istio.io/service-type` annotation, and it listens on port 80 for HTTP traffic.
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: general-gateway
  namespace: gateway
  annotations:
    networking.istio.io/service-type: NodePort
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
```
- Service Resource: A Service of type `NodePort` is created to expose the Gateway. The `externalTrafficPolicy` is set to `Cluster`.
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    argocd.argoproj.io/tracking-id: gateway:gateway.networking.k8s.io/Gateway:gateway/general-gateway
    networking.istio.io/service-type: NodePort
  creationTimestamp: "2025-11-12T09:42:29Z"
  labels:
    gateway.istio.io/managed: istio.io-gateway-controller
    gateway.networking.k8s.io/gateway-name: general-gateway
    istio.io/dataplane-mode: none
  name: general-gateway-istio
  namespace: gateway
  ownerReferences:
  - apiVersion: gateway.networking.k8s.io/v1beta1
    kind: Gateway
    name: general-gateway
    uid: 8227496b-5a66-478f-84b0-b0c1c3cfaebe
  resourceVersion: "6565369"
  uid: a8d82d17-d28c-4c57-afbb-43a1245e5dbf
spec:
  clusterIP: 172.20.18.246
  clusterIPs:
  - 172.20.18.246
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: PreferDualStack
  ports:
  - appProtocol: tcp
    name: status-port
    nodePort: 30667
    port: 15021
    protocol: TCP
    targetPort: 15021
  - appProtocol: http
    name: http
    nodePort: 32549
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    gateway.networking.k8s.io/gateway-name: general-gateway
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
```
- Pod Resource: The gateway pod runs the Istio proxy in router mode as its only container (no application sidecar; `sidecar.istio.io/inject` is `"false"`). The pod carries the usual Istio proxy settings.
```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    argocd.argoproj.io/tracking-id: gateway:gateway.networking.k8s.io/Gateway:gateway/general-gateway
    istio.io/rev: default
    networking.istio.io/service-type: NodePort
    prometheus.io/path: /stats/prometheus
    prometheus.io/port: "15020"
    prometheus.io/scrape: "true"
  creationTimestamp: "2025-11-12T09:42:29Z"
  generateName: general-gateway-istio-7c6fd54c6d-
  generation: 1
  labels:
    gateway.istio.io/managed: istio.io-gateway-controller
    gateway.networking.k8s.io/gateway-name: general-gateway
    istio.io/dataplane-mode: none
    pod-template-hash: 7c6fd54c6d
    service.istio.io/canonical-name: general-gateway-istio
    service.istio.io/canonical-revision: latest
    sidecar.istio.io/inject: "false"
  name: general-gateway-istio-7c6fd54c6d-rnkg7
  namespace: gateway
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: general-gateway-istio-7c6fd54c6d
    uid: 01c0dfdd-0ddf-495b-b76a-42152dfe6269
  resourceVersion: "6565433"
  uid: fc14fd0a-a6e0-48ee-92a7-d96d3f2484d9
spec:
  containers:
  - args:
    - proxy
    - router
    - --domain
    - $(POD_NAMESPACE).svc.cluster.local
    - --proxyLogLevel
    - warning
    - --proxyComponentLogLevel
    - misc:error
    - --log_output_level
    - default:info
    env:
    - name: PILOT_CERT_PROVIDER
      value: istiod
    - name: CA_ADDR
      value: istiod.istio-system.svc:15012
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.serviceAccountName
    - name: HOST_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.hostIP
    - name: ISTIO_CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          divisor: "0"
          resource: limits.cpu
    - name: PROXY_CONFIG
      value: |
        {"proxyMetadata":{"ISTIO_META_ENABLE_HBONE":"true"},"image":{"imageType":"distroless"}}
    - name: ISTIO_META_POD_PORTS
      value: '[]'
    - name: ISTIO_META_APP_CONTAINERS
    - name: GOMEMLIMIT
      valueFrom:
        resourceFieldRef:
          divisor: "0"
          resource: limits.memory
    - name: GOMAXPROCS
      valueFrom:
        resourceFieldRef:
          divisor: "0"
          resource: limits.cpu
    - name: ISTIO_META_CLUSTER_ID
      value: Kubernetes
    - name: ISTIO_META_NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    - name: ISTIO_META_INTERCEPTION_MODE
      value: REDIRECT
    - name: ISTIO_META_WORKLOAD_NAME
      value: general-gateway-istio
    - name: ISTIO_META_OWNER
      value: kubernetes://apis/apps/v1/namespaces/gateway/deployments/general-gateway-istio
    - name: ISTIO_META_MESH_ID
      value: cluster.local
    - name: TRUST_DOMAIN
      value: cluster.local
    - name: ISTIO_META_ENABLE_HBONE
      value: "true"
    image: docker.io/istio/proxyv2:1.27.2-distroless
    imagePullPolicy: IfNotPresent
    name: istio-proxy
    ports:
    - containerPort: 15020
      name: metrics
      protocol: TCP
    - containerPort: 15021
      name: status-port
      protocol: TCP
    - containerPort: 15090
      name: http-envoy-prom
      protocol: TCP
    readinessProbe:
      failureThreshold: 4
      httpGet:
        path: /healthz/ready
        port: 15021
        scheme: HTTP
      periodSeconds: 15
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits:
        cpu: "2"
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 128Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: true
      runAsGroup: 1337
      runAsNonRoot: true
      runAsUser: 1337
    startupProbe:
      failureThreshold: 30
      httpGet:
        path: /healthz/ready
        port: 15021
        scheme: HTTP
      initialDelaySeconds: 1
      periodSeconds: 1
      successThreshold: 1
      timeoutSeconds: 1
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/workload-spiffe-uds
      name: workload-socket
    - mountPath: /var/run/secrets/credential-uds
      name: credential-socket
    - mountPath: /var/run/secrets/workload-spiffe-credentials
      name: workload-certs
    - mountPath: /var/run/secrets/istio
      name: istiod-ca-cert
    - mountPath: /var/lib/istio/data
      name: istio-data
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /var/run/secrets/tokens
      name: istio-token
    - mountPath: /etc/istio/pod
      name: istio-podinfo
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-86hr9
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-10-0-38-213.us-west-2.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    sysctls:
    - name: net.ipv4.ip_unprivileged_port_start
      value: "0"
  serviceAccount: general-gateway-istio
  serviceAccountName: general-gateway-istio
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: workload-socket
  - emptyDir: {}
    name: credential-socket
  - emptyDir: {}
    name: workload-certs
  - emptyDir:
      medium: Memory
    name: istio-envoy
  - emptyDir: {}
    name: istio-data
  - downwardAPI:
      defaultMode: 420
      items:
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.labels
        path: labels
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.annotations
        path: annotations
    name: istio-podinfo
  - name: istio-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: istio-ca
          expirationSeconds: 43200
          path: istio-token
  - configMap:
      defaultMode: 420
      name: istio-ca-root-cert
    name: istiod-ca-cert
  - name: kube-api-access-86hr9
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-11-12T09:42:30Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2025-11-12T09:42:29Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2025-11-12T09:42:32Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2025-11-12T09:42:32Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2025-11-12T09:42:29Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - allocatedResources:
      cpu: 100m
      memory: 128Mi
    containerID: containerd://262658ab1e4e81ce6aa6a64d031bb6af2f7f2febfdbd762abe2b2c3f4d7529d5
    image: docker.io/istio/proxyv2:1.27.2-distroless
    imageID: docker.io/istio/proxyv2@sha256:91dfb31791224e6cc4a2e4b4050a148b0bae3c7e0d5fb56e2de1a417e4b39cce
    lastState: {}
    name: istio-proxy
    ready: true
    resources:
      limits:
        cpu: "2"
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 128Mi
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2025-11-12T09:42:30Z"
    user:
      linux:
        gid: 1337
        supplementalGroups:
        - 1337
        uid: 1337
    volumeMounts:
    - mountPath: /var/run/secrets/workload-spiffe-uds
      name: workload-socket
    - mountPath: /var/run/secrets/credential-uds
      name: credential-socket
    - mountPath: /var/run/secrets/workload-spiffe-credentials
      name: workload-certs
    - mountPath: /var/run/secrets/istio
      name: istiod-ca-cert
    - mountPath: /var/lib/istio/data
      name: istio-data
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /var/run/secrets/tokens
      name: istio-token
    - mountPath: /etc/istio/pod
      name: istio-podinfo
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-86hr9
      readOnly: true
      recursiveReadOnly: Disabled
  hostIP: 10.0.38.213
  hostIPs:
  - ip: 10.0.38.213
  phase: Running
  podIP: 10.0.43.141
  podIPs:
  - ip: 10.0.43.141
  qosClass: Burstable
  startTime: "2025-11-12T09:42:29Z"
```
The Versions
Here are the versions I'm using:
- istioctl version: Client version: 1.28.0, Control plane version: 1.27.2, Data plane version: 1.27.2 (9 proxies)
- kubectl version: Client Version: v1.32.1, Kustomize Version: v5.5.0, Server Version: v1.33.5-eks-ba24e9c
Troubleshooting Steps and Initial Thoughts
NodePort services are designed to make an application accessible on a static port on every node in the cluster. When a request hits that port on any node, kube-proxy forwards it to a backing pod, even if that pod runs on a different node (which is exactly what `externalTrafficPolicy: Cluster` allows). With Istio and the Gateway API, the expectation is therefore that traffic reaches the gateway pod no matter which node the request hits. Since it only works on the node hosting the pod, it looks like a problem with the network configuration or with how the traffic is being routed.
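A couple of sanity checks for that routing path (a sketch; the service name and NodePort come from the manifests above):

```bash
# Confirm the gateway pod is registered as an endpoint of the NodePort service.
kubectl get endpointslices -n gateway \
  -l kubernetes.io/service-name=general-gateway-istio -o wide

# From a node that does NOT host the pod, test the NodePort against that node's own private IP.
curl -sv --max-time 5 http://$(hostname -I | awk '{print $1}'):32549/
```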
Here's what I've tried and what I'm thinking might be the issue:
- Network Policies: Checked whether any network policies are interfering with the traffic flow and ensured that traffic to the NodePort is allowed on all nodes.
- AWS Security Groups and NACLs: Validated that the security groups and network ACLs on AWS are configured to allow traffic to the NodePort from all the necessary sources (e.g., the AWS Load Balancer Controller's load balancer).
- externalTrafficPolicy: The `externalTrafficPolicy` on the service is set to `Cluster`. This setting influences how traffic is routed: with `Cluster`, traffic arriving at any node is forwarded by kube-proxy to the gateway pod, so it should not cause this behaviour. I've considered switching to `Local` to see if that changes anything, keeping in mind that `Local` only delivers traffic to pods on the receiving node.
- Istio Proxy Configuration: Checked the Istio proxy logs for any errors or warnings related to routing or traffic handling, and ensured the proxy is configured correctly and has the necessary permissions.
- Gateway API Configuration: Reviewed the Gateway API configuration to make sure it correctly specifies the port and protocol, and confirmed that the `gatewayClassName` is set correctly.
- iptables Rules: Checked the `iptables` rules on the nodes; this might show whether any rules are blocking or redirecting the traffic (see the commands after this list).
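For the iptables point in particular, this is what I'd look at on a node that does not host the pod (a sketch; the kube-proxy DaemonSet name and label below are the usual EKS defaults and may differ):

```bash
# Check the NAT rules kube-proxy programmed for the HTTP NodePort (32549).
sudo iptables-save -t nat | grep -E 'KUBE-NODEPORTS|32549'

# If nothing shows up, check kube-proxy itself.
kubectl -n kube-system get ds kube-proxy -o wide
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=100
```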
Potential Causes and Solutions
Let's brainstorm some potential causes and possible solutions:
1. Incorrect Network Configuration
- Problem: Misconfigured AWS security groups, NACLs, or Kubernetes network policies might be blocking traffic to the NodePort on nodes other than the one running the pod.
- Solution: Double-check the AWS security group and NACL rules to ensure inbound traffic to the NodePort (e.g., 32549) is allowed from the load balancer and, with `externalTrafficPolicy: Cluster`, that the worker-node security group also permits node-to-node traffic on the NodePort range. Review Kubernetes network policies to ensure they are not blocking traffic to the service. A quick way to inspect the rules is sketched below.
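To rule out the security-group side, something like this helps (the security-group ID is a placeholder; which group applies depends on your node group setup):

```bash
# Placeholder: replace with your worker-node security group ID.
NODE_SG=sg-0123456789abcdef0

# List inbound rules and confirm the NodePort (or the 30000-32767 range) is allowed
# from the load balancer and from the node security group itself (node-to-node traffic).
aws ec2 describe-security-group-rules \
  --filters Name=group-id,Values="$NODE_SG" \
  --query 'SecurityGroupRules[?IsEgress==`false`]'
```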
2. externalTrafficPolicy Misconfiguration
- Problem: Although the `externalTrafficPolicy` is set to `Cluster`, which should work, there might be some interaction with the AWS environment causing unexpected behaviour. `Cluster` means traffic arriving at any node can be forwarded to the gateway pod on another node.
- Solution: Try setting the `externalTrafficPolicy` to `Local` and see if it makes a difference; this can help rule out routing issues. Keep in mind that with `Local`, traffic is only delivered to pods running on the receiving node, so other nodes are expected to stop answering. If switching the policy changes nothing at all, that points to a more fundamental networking problem. A quick way to run this experiment is shown below.
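A throwaway experiment for this (note that Istio's gateway controller owns the generated Service, so a manual patch may get reconciled away; this is only meant as a short-lived test):

```bash
# Temporarily switch the generated Service to Local and observe the behaviour.
kubectl -n gateway patch svc general-gateway-istio \
  --type merge -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Revert to Cluster afterwards.
kubectl -n gateway patch svc general-gateway-istio \
  --type merge -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
```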
3. Istio Proxy Issues
- Problem: There could be issues within the Istio proxy itself. The proxy might not be configured correctly or might be failing to route traffic as expected.
- Solution: Check the Istio proxy logs (the `istio-proxy` container) for any errors. Also, inspect the Envoy configuration to ensure it has the correct routes and listeners; this can be done with `istioctl proxy-config` or via the Envoy admin interface. Verify that the proxy is receiving its configuration from istiod. The commands I'd start with are below.
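Concretely, these are the checks against the gateway pod (pod and container names taken from the dump above; standard istioctl subcommands):

```bash
GW_POD=general-gateway-istio-7c6fd54c6d-rnkg7

# Proxy logs for routing/config errors.
kubectl -n gateway logs "$GW_POD" -c istio-proxy --tail=200

# Confirm the proxy is synced with istiod and inspect its listeners and routes.
istioctl proxy-status
istioctl proxy-config listeners "$GW_POD" -n gateway
istioctl proxy-config routes "$GW_POD" -n gateway
```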
4. Incorrect Gateway API Configuration
- Problem: A misconfiguration in the Gateway API resources can also lead to traffic routing problems. Incorrect port, protocol, or route specifications could cause the issue.
- Solution: Double-check the Gateway and HTTPRoute resources. Make sure the port, protocol, and allowed routes are correctly configured. Ensure the `gatewayClassName` matches the Istio gateway class. Use `kubectl describe gateway <gateway-name> -n <namespace>` to examine the status of the Gateway resource and confirm it's ready (see the example below).
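For reference, here is the status check plus the rough shape of an HTTPRoute I'd expect to attach to this Gateway (the route name, namespace, and backend are hypothetical; the post doesn't include my actual HTTPRoute):

```bash
kubectl describe gateway general-gateway -n gateway

# Hypothetical HTTPRoute attaching to the Gateway's "http" listener.
kubectl apply -f - <<'EOF'
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
  namespace: default
spec:
  parentRefs:
  - name: general-gateway
    namespace: gateway
  rules:
  - backendRefs:
    - name: my-app        # placeholder backend Service
      port: 8080
EOF
```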
5. AWS Load Balancer Controller Integration
- Problem: There might be an issue with how the AWS Load Balancer Controller interacts with Istio and the NodePort service.
- Solution: Make sure you're using a compatible version of the AWS Load Balancer Controller. Verify that the controller is provisioning the load balancer and that its target group actually points at the NodePort on the nodes. Examine the controller's logs for errors or warnings, and check the annotations on the Service that drive the load balancer configuration. A couple of checks are sketched below.
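On the controller side, these are the two things I'd look at (the deployment name and namespace are the Helm-chart defaults and may differ in your install):

```bash
# Controller logs for reconcile errors around this gateway/service.
kubectl -n kube-system logs deploy/aws-load-balancer-controller --tail=200

# TargetGroupBindings the controller created, to confirm targets point at the NodePort.
kubectl -n gateway get targetgroupbindings.elbv2.k8s.aws -o wide
```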
Seeking Community Help
Guys, I've tried a few things, but I'm still stuck. Any insights or suggestions from the community would be greatly appreciated! If anyone has encountered a similar issue or has any ideas, please share them. Any advice would be a great help!
Thanks in advance for your assistance!