
    Why Migrate from Ingress to Gateway API with Kgateway in 2026?

    By Bastien Genestal

    February 18, 2026

Kubernetes · Kgateway · Gateway API · Envoy · Cloud Native

    As you know, we are in the middle of a pivotal phase for our Kubernetes infrastructures. The wind is shifting, and the old habits we've built around the Ingress Controller (that trusty NGINX we've been dragging along for years) are about to be disrupted. And spoiler alert: this is excellent news.

    We've spent the last few weeks deep-diving into the subject, combing through CNCF documentation and benchmarking different solutions. The verdict is in, and it's clear. We are adopting the Gateway API as the new exposure standard for all our startup clients, and the technical implementation that won our hearts (and our load tests) is Kgateway.

    In this article, I'll walk you through the entire reasoning that led us to this conclusion. We'll talk about the end of the technical debt tied to NGINX annotations, the implementation of real multi-tenant governance, and above all, we'll see how Kgateway lets us stabilize our environments while offering a routing flexibility we had never cleanly achieved before.

    Sit back, grab a coffee, and let's dive deep into the future of our K8s networking.

    The Bitter Reality: The Ingress API Is a Technological Dead End

    Let's be honest: the Ingress API is still the default method everyone uses today. You define a host, a path, a backend, and it works immediately. This apparent simplicity is what made it successful for classic use cases, but it's precisely where the trap snaps shut the moment requirements become even slightly complex.

    And complexity is our daily reality. Between multi-tenancy, canary release strategies, fine-grained authentication, and the explosion of LLM (Large Language Model) requests, our clients now expect far more than a simple static route. This is where the Ingress API shows its structural limits: it is simply no longer sized to carry such technical ambitions.

    Why We Can't Continue Like This

    The major problem is that the Ingress API is no longer evolving. Kubernetes has officially frozen its features. It has become a "legacy" API surviving on life support, a conclusion shared by experts on the planned obsolescence of NGINX Ingress.

    The original design flaw was putting everything into a single resource. A developer who wants to expose their API must define the route, but they also potentially have access to TLS configuration or Load Balancer annotations. It's a governance nightmare. To compensate for the lack of native features (like traffic splitting or complex rewrites), each controller (NGINX, HAProxy, Traefik) invented its own annotations.

    The result? We end up with unreadable YAML files, stuffed with proprietary annotations that create massive vendor lock-in. Migrating from one controller to another requires rewriting all manifests. That's pure technical debt. Furthermore, Ingress natively only handles HTTP/HTTPS. The moment a client wants to expose a database over TCP or a streaming service over UDP, we have to hack around it with ConfigMaps or LoadBalancer-type Services, which completely breaks the unification of our network model.
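To make this concrete, here is the kind of Ingress we inherit on a typical cluster (a representative sketch, names hypothetical): a canary release expressed entirely through annotations that only ingress-nginx understands.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api-canary
  annotations:
    # All NGINX-specific: none of this survives a controller change.
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-api-v2
            port:
              number: 8080
```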

    The Gateway API: A Revolution, Not an Evolution

    The Gateway API is not a "version 2" of Ingress. It's a completely rethought, "Role-Oriented" architecture that transforms our approach to Kubernetes managed services. The brilliant idea behind this API is decoupling responsibilities, marking a clear break from the old model as explained in this comparison between Ingress and the Gateway API.

    The New Shared Responsibility Model

    Gone are the days when Infra and Dev teams stepped on each other's toes in the same YAML file. The Gateway API introduces three distinct resource levels that map perfectly to our internal organization.

The GatewayClass

This is the "Platform" level. This is where we define who will manage traffic. In our case, that will be Kgateway. It's a cluster-wide resource. Think of it like a StorageClass: we define what type of controller is available, and that's it. The Log'in Line Run team pilots this structural layer for the entire cluster.
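As a sketch, the whole "Platform" layer boils down to a resource like this (the controllerName value is the one documented by kgateway at the time of writing; verify it against the version you install):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: kgateway
spec:
  # Which controller implements Gateways of this class.
  # Value taken from the kgateway docs; check your release.
  controllerName: kgateway.dev/kgateway
```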

The Gateway

This is the heart of our managed services work. The Gateway resource defines the network entry point: ports (80, 443, etc.), protocols, and above all TLS configuration (wildcard certificates, for example). This is the network contract we expose to application teams. We retain full control over where traffic enters and how it is secured at the boundary.
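Here is a minimal sketch of such a network contract, assuming a wildcard certificate stored in a Secret (all names hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra          # owned by the platform team
spec:
  gatewayClassName: kgateway
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    hostname: "*.example.com"
    tls:
      mode: Terminate
      certificateRefs:
      - name: wildcard-example-com   # hypothetical wildcard cert Secret
```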

The Routes (HTTPRoute, TLSRoute, etc.)

This is where the magic happens for developers. They create HTTPRoute resources that belong to their namespace. They define their matching rules (path, headers, query params) and which Kubernetes Service the traffic should go to. But the best part is that they must explicitly "attach" to an existing Gateway.
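A minimal sketch of what a team would write, assuming the shared Gateway above (names hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: billing-api
  namespace: team-billing   # lives in the team's own namespace
spec:
  parentRefs:               # explicit attachment to the platform Gateway
  - name: shared-gateway
    namespace: infra
  hostnames:
  - billing.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: billing-svc
      port: 8080
```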

    Finally Robust Governance

    This separation gives us incredible native governance. With Ingress, it was "all or nothing". With the Gateway API, we can configure the Gateway to only accept routes from specific namespaces, or validate routes via label selectors.
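Concretely, this filtering lives on the Gateway's listeners. A sketch (the label convention is hypothetical):

```yaml
# Excerpt from the Gateway above: only namespaces carrying the
# right label may attach routes to this listener.
listeners:
- name: https
  port: 443
  protocol: HTTPS
  allowedRoutes:
    namespaces:
      from: Selector
      selector:
        matchLabels:
          exposure: external   # hypothetical label convention
```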

Security finally becomes declarative and, above all, locked down by design. We no longer cross our fingers hoping a developer hasn't accidentally overwritten a critical security annotation, a scenario we've lived through far too many times and one that remains extremely incident-prone in our production environments. Now, we define the immutable framework via the Gateway, and applications plug into it via Routes with no ability to alter the global security policy. This is a major qualitative leap for the stability of our multi-tenant SaaS clients, where a simple local configuration error can no longer compromise the integrity of the global exposure.

    Why Kgateway? The Team's Analysis

    Having a new standardized API is great, but you need an implementation (a controller) to run it. The market is vast: Istio, Cilium, Contour, Envoy Gateway... So why did we set our sights on Kgateway (formerly Gloo Gateway)?

    An Ultra-Performant Envoy Engine Under the Hood

    Kgateway is built on Envoy Proxy. For those who've been asleep at the back of the class for the last 5 years, Envoy has become the de facto standard for the cloud-native data plane. Unlike NGINX, which was designed in the era of monolithic servers and often requires costly process reloads to change configuration, Envoy is built for dynamism.

    Kgateway leverages this power by providing a lightweight but robust "Control Plane" that translates Gateway API resources into xDS configuration for Envoy. Benchmarks and real-world feedback show that Kgateway handles frequent route updates (the "churn") much better than NGINX-based controllers, which is crucial in our CI/CD environments where deployments are continuous.

    Kgateway vs Istio: The Strategic Positioning

    This was the big internal debate: "Why not just use Istio for everything?" Istio does everything, right? True, Istio is a powerhouse for Service Mesh (East-West traffic). But using Istio as an Ingress Controller (North-South) is often "overkill" and complex to maintain if you don't need the full mesh right away.

    Here is a summary table to understand how Kgateway positions itself in our stack compared to other components, illustrating our selection methodology.

| Criterion / Solution | Ingress NGINX (Current) | Istio Ingress Gateway | Kgateway (Target) |
| --- | --- | --- | --- |
| Primary Role | Legacy Ingress Controller | Service Mesh Entry Point | Autonomous API Gateway & Next-Gen Ingress |
| Gateway API Support | Partial / Experimental | Yes, but Mesh-oriented | Native and complete (Core feature) |
| Operational Complexity | Low (but high technical debt) | High (Istiod dependency) | Moderate (Lightweight dedicated Control Plane) |
| AI Traffic Management (LLM) | None | Via Wasm (complex) | Native features ("AI Gateway") |
| Service Mesh Integration | None | Native (Istio) | Transparent integration (Waypoint for Ambient) |

    Kgateway positions itself as the "best of breed" for North-South traffic (cluster ingress). It is more feature-rich than a simple NGINX Ingress, but more focused and lightweight than a full Istio.

    Furthermore, Kgateway was recently donated to the CNCF (Cloud Native Computing Foundation) by Solo.io. This is a critical point for our long-term strategy: it guarantees that the project is "vendor-neutral" and benefits from genuine community governance. We've all seen the recent turbulence and uncertainties around the Bitnami registry following successive acquisitions, or the brutal license changes at other open source pillars. We can no longer afford to build our foundations on solutions whose model can pivot overnight. With the CNCF, we ensure that Kgateway remains a common good, protecting us against aggressive monetization strategies or unexpected access restrictions that could jeopardize the stability of our clients.

    The Strategic Asset: Kgateway as an AI Gateway

    Beyond its network robustness and compliance with new standards, Kgateway stands out for a feature that is becoming crucial for our ecosystem: its AI Gateway capabilities. This is not the sole reason for our choice, but it is an undeniable competitive advantage as our startup clients massively integrate LLMs (Large Language Models) into their architectures.

    Kgateway doesn't just route classic HTTP traffic — it offers native AI Gateway features to unify flows to AI models at the infrastructure level, providing a layer of control, security, and observability that we sorely lacked until now.

    Securing and Controlling LLM Consumption

    The usual approach involves injecting API keys directly into code or Pod secrets, with very little visibility into costs or actual usage. By positioning Kgateway upstream of calls to OpenAI or Anthropic, we centralize authentication and can apply AI-specific Rate Limiting policies.

This allows us, for example, to limit the number of tokens consumed per minute for a given client: an effective strategy to reduce Kubernetes costs and avoid billing explosions caused by a development error (an intern's infinite loop, say) or unexpected usage.
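As an illustration of that centralization, kgateway models each LLM provider as a dedicated Backend resource that carries the credentials, so applications never see the API key. The sketch below approximates the v1alpha1 AI API; treat the field names as indicative and check the kgateway AI Gateway docs for your version:

```yaml
apiVersion: gateway.kgateway.dev/v1alpha1
kind: Backend
metadata:
  name: openai
  namespace: infra
spec:
  type: AI
  ai:
    llm:
      openai:
        model: gpt-4o-mini       # illustrative model name
        authToken:
          kind: SecretRef        # the key stays in a cluster Secret,
          secretRef:             # never in application code
            name: openai-api-key
```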

    Data Governance and Prompt Guards

    Data security is the number one concern. Kgateway allows implementing "Prompt Guards" directly at the gateway level. We can thus detect and filter sensitive information (PII - Personally Identifiable Information) before it leaves the cluster.

    This ability to analyze and mask patterns (emails, card numbers) in outgoing requests gives our clients a natively secure "AI-Ready" infrastructure, without having to reinvent the wheel in every microservice.
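In kgateway this is expressed as a policy attached to the LLM route. Again, a sketch against the v1alpha1 API with indicative field names (verify against the official docs):

```yaml
apiVersion: gateway.kgateway.dev/v1alpha1
kind: TrafficPolicy
metadata:
  name: mask-pii
  namespace: infra
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: openai-route          # hypothetical route to the AI Backend
  ai:
    promptGuard:
      request:                  # inspect outgoing prompts
        regex:
          action: Mask          # mask matches instead of rejecting
          builtins:             # built-in PII patterns
          - CREDIT_CARD
          - EMAIL
```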


    The Battle Plan for Migration

    Convincing is good. Executing is better. The transition from Ingress to Gateway API won't happen overnight, but there are now practical guides for migrating smoothly to the Gateway API.

    Automated Migration Tools

The Kubernetes community maintains tools like ingress2gateway (a kubernetes-sigs project). This utility can read our existing Ingress resources (even those stuffed with NGINX-specific annotations) and generate the corresponding Gateway API resources (Gateway + HTTPRoutes).

    We will use this tool to generate a migration base for each client. The idea is not to blindly apply the result, but to use it to audit what exists and refactor cleanly. It's an opportunity to remove years of "temporary fixes" that became permanent.
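Usage is deliberately read-only: the tool prints the generated manifests, and nothing touches the cluster until we review and apply them ourselves. Something like this (flag syntax from recent releases; check `ingress2gateway --help` for yours):

```bash
# Generate Gateway API equivalents of the Ingress resources
# found in the current kubecontext (ingress-nginx provider).
ingress2gateway print --providers ingress-nginx > migration-draft.yaml
```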

    The "Double Run" Strategy

    We won't cut NGINX overnight. The beauty of Kubernetes is that we can run Kgateway in parallel with NGINX Ingress on the same cluster.

    1. Deploy Kgateway and the GatewayClass.
    2. Create Gateways and Routes for a pilot service.
    3. Change the DNS to point to the Kgateway LoadBalancer (or use a weighted DNS record to progressively switch over).
    4. Once validated, decommission the Ingress for that service.

    This approach reduces risk to near zero. If something breaks, we point the DNS back to the old Ingress.

    In-Depth Technical Comparison

    For those who want raw data, here is a summary of our technical benchmark. We compared our current implementation (NGINX), the "Full Istio" option, and our target Kgateway.

    The chosen criteria are those that impact our daily work the most: ease of configuring non-HTTP protocols, extension management (WAF, Rate Limit), and observability.

| Feature | NGINX Ingress (Legacy) | Istio Ingress | Kgateway |
| --- | --- | --- | --- |
| Configuration Model | Ingress Resource + Annotations (Messy) | VirtualService + Gateway (Istio-specific) | Standard Gateway API (K8s Native) |
| Supported Protocols | HTTP/HTTPS (TCP/UDP via painful global ConfigMap) | HTTP, gRPC, TCP, TLS | Unified HTTP, gRPC, TCP, UDP, TLS Passthrough |
| Extensibility (Wasm) | Limited (often Lua scripts) | Yes, robust but complex | Yes, native Envoy & simplified extensions |
| Service Discovery | Standard K8s Service | Istio service registry | K8s Services + Upstreams (e.g., Lambda, S3) |
| Performance (Route Propagation) | Slow (frequent NGINX reloads) | Very fast (xDS) | Excellent (Optimized for high churn) |

This table highlights why NGINX Ingress is end-of-life for us. Managing TCP/UDP "via global ConfigMap" is an aberration in a multi-tenant context (one client could break everyone's config). Kgateway treats TCP/UDP as first-class citizens with TCPRoute and UDPRoute, isolated by namespace.
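For example, exposing a client's PostgreSQL becomes a namespaced route like this (TCPRoute currently ships in the Gateway API experimental channel; names hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: postgres
  namespace: client-a          # isolated per tenant
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
    sectionName: postgres-tcp  # a dedicated TCP listener on the Gateway
  rules:
  - backendRefs:
    - name: postgres
      port: 5432
```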

    Furthermore, Kgateway's ability to route to "Upstreams" that are not Kubernetes Pods (like an AWS Lambda function or a static S3 bucket) opens up hybrid architecture possibilities we didn't have before without intense hacking.

    Impact on Our Daily Work (Infra) and the Devs'

    Concretely, what does this change for us tomorrow morning?

For the Infra team: We regain control. We are no longer the janitors cleaning up invalid YAML annotations. We provide pre-configured, secured (hardened TLS, WAF enabled) GatewayClasses and Gateways. We can sleep soundly knowing that developers cannot accidentally expose port 22 or disable SSL, because they are limited to what the Gateway allows via allowedRoutes.

For Application teams: At first, they'll complain. That's normal; we're changing their cheese. They'll have to learn to write HTTPRoute instead of Ingress. But very quickly, they'll see the advantages: they'll be able to do A/B testing (90%/10% Traffic Splitting) with a few lines of standard YAML (see the sketch below), without having to ask us to install some obscure plugin. They'll be able to cleanly rewrite their request headers. They gain autonomy over advanced routing, which is the ultimate goal of DevOps.
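For the record, here is what that 90/10 split looks like inside a plain HTTPRoute rule (service names hypothetical):

```yaml
# Weighted backends in a standard HTTPRoute rule:
# no plugin, no annotation, just core Gateway API.
rules:
- backendRefs:
  - name: my-app-stable
    port: 8080
    weight: 90
  - name: my-app-canary
    port: 8080
    weight: 10
```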

    Security: The Reduced "Blast Radius" Concept

    I insist on this point because it's a major selling argument for our fintech and healthcare clients. With the classic Ingress Controller, a syntax error in a poorly handled Lua annotation could sometimes crash the controller for the entire cluster. The "blast radius" (impact radius) was total.

    With Kgateway and the Gateway API model, configurations are much more compartmentalized. An invalid HTTPRoute in namespace A does not impact the Gateway or the routes in namespace B. The controller simply rejects the invalid configuration and surfaces a clear error status on the affected resource (via the status field), without affecting ongoing traffic. This is the level of resilience we must aim for.
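This shows up directly on the resource itself. A sketch of what a rejected route's status looks like (the condition type and reason shown here are defined by the Gateway API spec):

```yaml
# Excerpt of the status the controller writes back on the HTTPRoute.
status:
  parents:
  - parentRef:
      name: shared-gateway
      namespace: infra
    conditions:
    - type: Accepted
      status: "False"
      reason: NotAllowedByListeners   # e.g. blocked by allowedRoutes
```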

    Conclusion and Vision

    The migration to the Gateway API with Kgateway is not just a "technical upgrade". It's a strategic realignment of our managed services offering.

    We are moving from a "best effort" model based on fragile conventions to a strong "contractual" model, standardized by Kubernetes. Kgateway brings us Envoy's performance, flexibility for AI, and a clear roadmap towards Service Mesh with Ambient.

    In summary:

    1. Ingress is dead, long live the Gateway API.
    2. Kgateway is our champion for its raw performance and native AI features.
    3. Infra takes the lead on network governance, while giving developers more power over routing.

    What's your next step? Read the official Gateway API documentation if you haven't already, and start experimenting with the lab cluster where I've pre-deployed Kgateway. We're kicking off the first client migrations next month.

    Get ready — it's going to be a great technical adventure!

    Bastien Genestal


    Lead Cloud Architect @ Log'in Line

As Lead Cloud Architect, I lead a team of architects and SREs who manage our clients' clusters on a daily basis. I am also a big fan of climbing 🧗‍♂️

