For more than a decade, F5 and Red Hat have collaborated to help organizations modernize application delivery and infrastructure. Over that time, one thing has remained constant: as platforms evolve, so do the security requirements that surround them.
At Red Hat Summit 2026, F5 highlights this ongoing collaboration with two new enhancements designed to strengthen security across modern applications and AI environments.
The updates focus on two distinct areas of growing importance. One brings security directly into AI workflows through a new AI quickstart. The other extends application-layer protection by integrating a web application firewall into the Kubernetes Gateway API model on Red Hat OpenShift. The two address different use cases and layers of the stack, but together reflect a broader shift toward embedding security directly into the platform teams already use.
“At Red Hat Summit 2026, F5 highlights its ongoing collaboration with two new enhancements designed to strengthen security across modern applications and AI environments.”
Bringing security into AI workflows with a new AI quickstart
AI quickstarts are designed to help teams move quickly from experimentation to real-world deployment. Available through the official Red Hat catalog, these AI quickstarts provide pre-built, deployable use cases that allow organizations to stand up and explore AI workflows on Red Hat OpenShift AI without starting from scratch.
Instead of assembling components manually, teams can use these guided blueprints to accelerate proof-of-concept development and shorten the path to production. The approach emphasizes practical, hands-on experience, with deployments delivered as reference designs featuring streamlined setup and clearly defined use cases.
For example, teams can deploy a complete retrieval-augmented generation (RAG) powered chat assistant—including LlamaStack for AI orchestration, vLLM for GPU-accelerated inference, a Streamlit chat interface, and PostgreSQL with pgvector for semantic retrieval—alongside the F5 AI Security Operator, which enforces runtime guardrails. The result is a fully functional application with integrated security capabilities, delivered in a single deployment.
F5’s new AI quickstart introduced at Red Hat Summit builds on this model by integrating F5 AI Guardrails and F5 AI Red Team into a single deployment. The goal is straightforward: enable teams to evaluate secure AI inference from the start, rather than layering in protections later.
In this context, F5 AI Guardrails provide runtime protections designed to address AI-specific risks such as prompt injection, policy violations, and sensitive data exposure. These controls operate without requiring changes to the underlying model, which is critical for teams working with pre-trained or third-party models.
At the same time, F5 AI Red Team introduces a continuous testing loop, allowing organizations to simulate adversarial inputs and identify weaknesses early in the development lifecycle.
Together, these capabilities turn the AI quickstart into more than a deployment shortcut. It becomes a practical way to explore how security can be embedded directly into AI workflows. For platform teams, machine learning operations (MLOps) engineers, and security teams, the value lies in being able to deploy, test, and refine protections in a controlled environment, with a faster path to meaningful results.
The underlying benefits align closely with what makes AI quickstarts effective in general: they serve as blueprints, support rapid deployment, and enable hands-on evaluation. In this case, they also demonstrate how AI security can be treated as a core component of the workflow rather than an afterthought.
Extending application security with WAF and Gateway API
The second announcement builds on earlier work between F5 and Red Hat. In late 2025, F5 introduced NGINX Gateway Fabric as a certified solution for Red Hat OpenShift, giving platform teams a consistent way to manage application traffic using emerging Kubernetes standards. That work aligned closely with the industry’s shift toward the Gateway API model, which is widely viewed as the next evolution of Kubernetes Ingress.
At Red Hat Summit 2026, F5 is extending that foundation with the addition of a web application firewall (WAF) integrated into this same model.
As Kubernetes networking evolves, so, too, must the way in which security is applied. Traditional approaches often rely on separate tools or configurations that sit outside of core platform workflows. By contrast, attaching WAF policies at the gateway layer allows organizations to bring security closer to the application while maintaining consistency with how traffic is already managed.
This approach enables a common baseline of protection across multiple services and routes, including defenses against common web exploits and malicious request patterns aligned with the OWASP Top 10.
Customers also gain operational advantages. Because WAF policies can be defined and managed within the same framework as traffic routing, such policies can be version-controlled, integrated into CI/CD pipelines, and applied consistently across clusters and environments. This supports both modernization and portability, especially in hybrid and multi-cloud deployments where consistency is often difficult to achieve.
From a broader perspective, this integration reflects an important transition point. As the ecosystem standardizes on Gateway API for traffic entry and policy attachment, the ability to deliver production-grade security controls within that model becomes increasingly important. By pairing a high-performance gateway data plane with a container-native WAF based on its established engine, F5 is positioned to take early advantage of that shift.
A broader view of modern security
While these two enhancements target different domains, they point to a common trend. Security is no longer something that can be applied externally or retrofitted after deployment. It is becoming an integral part of the platforms and workflows that teams rely on every day.
In one case, that means embedding protections directly into AI development and inference workflows. In the other, it means integrating application security into the evolving standards that govern how traffic flows through Kubernetes environments. Together, they illustrate how security must adapt alongside both modern applications and emerging AI use cases.
See demos at Red Hat Summit
Both the new AI quickstart and the WAF integration with NGINX Gateway Fabric will be demonstrated at Red Hat Summit 2026, taking place this week in Atlanta. Attendees can view how these capabilities work in practice and how they can be applied in real-world environments.
You can find the F5 AI Guardrails AI quickstart here.
About the Author

Related Blog Posts

Why sub-optimal application delivery architecture costs more than you think
Discover the hidden performance, security, and operational costs of sub‑optimal application delivery—and how modern architectures address them.

Keyfactor + F5: Integrating digital trust in the F5 platform
By integrating digital trust solutions into F5 ADSP, Keyfactor and F5 redefine how organizations protect and deliver digital services at enterprise scale.

Architecting for AI: Secure, scalable, multicloud
Operationalize AI-era multicloud with F5 and Equinix. Explore scalable solutions for secure data flows, uniform policies, and governance across dynamic cloud environments.

Nutanix and F5 expand successful partnership to Kubernetes
Nutanix and F5 have a shared vision of simplifying IT management. The two are joining forces for a Kubernetes service that is backed by F5 NGINX Plus.

AppViewX + F5: Automating and orchestrating app delivery
As an F5 ADSP Select partner, AppViewX works with F5 to deliver a centralized orchestration solution to manage app services across distributed environments.
F5 NGINX Gateway Fabric is a certified solution for Red Hat OpenShift
F5 collaborates with Red Hat to deliver a solution that combines the high-performance app delivery of F5 NGINX with Red Hat OpenShift’s enterprise Kubernetes capabilities.
