AI systems evolve faster than traditional security testing can keep up. Join F5 experts to learn how AI Red Team accelerates continuous adversarial testing across models, apps and agents—using an extensive attack database, multi-turn Agentic Resistance campaigns, and operational stress tests—to surface vulnerabilities before they’re exploited. We’ll demo how severity and risk-scored results and Agentic Fingerprints produce audit-ready, explainable reports and show how findings can be operationalized into runtime protections via F5 AI Guardrails.
Explain the evolving runtime-layer threat landscape and why traditional pen-testing is insufficient.
Demonstrate how to configure and run continuous red-team campaigns (signature + agentic tests) and interpret CASI/ARS risk scores.
Read and act on Agentic Fingerprints and audit-ready vulnerability reports to prioritize remediation and enable GRC initiatives.
Map AI Red Team findings into runtime protections with F5 AI Guardrails for continuous enforcement.

Kim Bieler
Senior Manager, Product Management
F5

Allan Healy
Senior Solutions Engineer
F5

Jessica Brennan
Senior Product Marketing Manager
F5