How Telcos Can Take Generative AI to the Next Level

Alix Leconte
Published February 21, 2024

Over the past 18 months, generative AI (GenAI) has taken the world by storm. 

New services, such as ChatGPT and DALL-E, can generate text, images, and software code in response to natural language prompts from users.  

New levels of productivity are now possible and, according to recent research by Bloomberg Intelligence, the GenAI market could be worth as much as US$1.3 trillion by 2032.

With the value of this technology now vividly apparent, we're starting to see a growing drive to create industry- and region-specific versions of the Large Language Models (LLMs) that enable computers to generate credible text and other content. 

An LLM is a statistical language model trained on a massive amount of data. It can generate and translate text and other content, and perform a range of natural language processing tasks. LLMs are typically based on deep-learning architectures.
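To make the "statistical language model" idea concrete, here is a toy bigram model in Python. It is purely illustrative: real LLMs use deep neural networks with billions of parameters and sample from learned probability distributions, whereas this sketch simply counts word pairs in a tiny made-up corpus and picks the most frequent next word.

```python
from collections import defaultdict

# Toy "statistical language model": count bigrams in a tiny corpus,
# then generate text by always choosing the most frequent next word.
corpus = "telcos deliver networks . telcos deliver services . networks carry traffic ."
tokens = corpus.split()

bigrams = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, length: int = 5) -> str:
    word, out = start, [start]
    for _ in range(length):
        followers = bigrams.get(word)
        if not followers:
            break
        # A real model samples from a probability distribution;
        # here we greedily take the most frequent follower.
        word = max(followers, key=followers.get)
        out.append(word)
    return " ".join(out)

print(generate("telcos"))  # prints: telcos deliver networks . telcos deliver
```

Even this crude model shows the core mechanic: predicting the next token from statistics over training data.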

Across the world, pioneering telecoms operators are already gearing up to play a major role in the delivery and security of these specialist LLMs. In particular, they are anticipating strong demand for end-to-end GenAI solutions from enterprises, start-ups, universities, and public administrations that can’t afford to build the necessary computing infrastructure themselves.

It is an eye-catching trend and, with appropriate security safeguards, LLM-as-a-service solutions could soon be used to develop specific GenAI applications for healthcare, education, transport and other key sectors (including telecoms).


So, what are the next steps to make it all work, and what are some of the key challenges that lie ahead?

As they need to be very responsive, highly reliable and always available, many LLMs will likely be distributed across multiple clouds and network edge locations.

Indeed, GenAI will be integral to telcos’ edge propositions, as users will need the low latency required for real-time “conversational” responses.

For telcos that have been struggling to grow revenue, delivering edge infrastructure to support specialist GenAI systems could be a major new market. Bloomberg Intelligence estimates that the GenAI infrastructure-as-a-service market (used for training LLMs) will be worth US$247 billion by 2032.

Nevertheless, those hoping to hit the GenAI jackpot need to tread carefully.

Distributed architectures, which can increase the potential attack surface, call for robust and scalable security solutions to prevent leaks of data and personally identifiable information during both the AI training and inference phases.

As bad actors increasingly employ lateral movement techniques to span multiple interconnected systems, it is critical that telcos secure both the apps and the APIs that third parties will use to access the LLM-as-a-service. To help raise awareness on this front, the Open Worldwide Application Security Project (OWASP) recently started a new project to educate developers, designers, architects, managers, and organizations about the potential security risks of deploying and managing LLMs.
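To illustrate what securing an LLM-as-a-service API can mean in practice, here is a minimal Python sketch of two gateway-side safeguards: API-key authentication and a crude screen for prompt-injection phrases (the risk OWASP catalogues as LLM01). All names, keys, and patterns below are hypothetical; production systems use far more sophisticated key management and input screening.

```python
import hashlib
import hmac

# Assumption: tenant API keys are stored as SHA-256 hashes, not in plaintext.
VALID_KEY_HASHES = {hashlib.sha256(b"demo-tenant-key").hexdigest()}

# Crude deny-list of known injection phrases; real screening is far richer.
SUSPICIOUS_PATTERNS = [
    "ignore previous instructions",
    "reveal your system prompt",
]

def authorize(api_key: str) -> bool:
    """Compare the caller's hashed key against stored hashes in constant time."""
    digest = hashlib.sha256(api_key.encode()).hexdigest()
    return any(hmac.compare_digest(digest, h) for h in VALID_KEY_HASHES)

def screen_prompt(prompt: str) -> bool:
    """Reject prompts matching known prompt-injection phrases (OWASP LLM01)."""
    lowered = prompt.lower()
    return not any(p in lowered for p in SUSPICIOUS_PATTERNS)

def handle_request(api_key: str, prompt: str) -> str:
    if not authorize(api_key):
        return "401 Unauthorized"
    if not screen_prompt(prompt):
        return "400 Rejected: possible prompt injection"
    return "200 Forwarded to model"  # placeholder for the real inference call
```

The point of the sketch is the layering: authenticate the caller first, then inspect the payload, and only then let traffic reach the model.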

One thing is certain: telcos need to maintain the customer trust required to become credible players in this market, as GenAI systems will often need to process personally or commercially sensitive data. For that reason, many governments and regulators are keen for these systems to run on compute capacity located within their jurisdictions. Meanwhile, enterprises are reluctant to share sensitive data that may threaten their intellectual property and therefore prefer private LLM offerings.

Other issues of note include the way AI clusters act as virtual user communities, requiring high-performance data paths to reach data residing in the private repositories of nations and enterprises.

Furthermore, AI's impact on network traffic and infrastructure will increasingly be shaped by plans from both nations and enterprises to self-host AI apps. Concerns about hallucinations, copyright, security, and the environmental impacts of AI are driving many to seek greater security and control over their data. In addition, they will need new ways to mitigate the anticipated strain on GPUs. All these considerations affect the overall total cost of ownership (TCO) of AI infrastructures.

Enter telcos: flexible and scalable protection across multiple environments 

Telcos can play a major role in the AI revolution. They own national infrastructures, have an existing B2B offering, and are a natural option to become providers of AI-as-a-service.

As a case in point, F5 is already helping a telco in Europe to secure its new GenAI proposition. In this instance, our customer is using NVIDIA DGX SuperPOD and NVIDIA AI Enterprise technologies to develop the first LLM trained natively in a local language. The goal is to capture the nuances of the language, as well as the specifics of its grammar, context, and cultural identity.

To secure the solution across multiple edge sites, the telco will leverage F5 Distributed Cloud Web Application and API Protection (WAAP), delivered as a cloud-based service. It is also harnessing F5’s ADC clusters to load balance the new AI platform across its edge infrastructure.
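The load-balancing role is conceptually simple, even though production ADCs add health checks, session persistence, and TLS offload. As an illustration only (not F5's implementation, and with made-up endpoint names), a round-robin scheduler across GPU-backed inference endpoints might look like this:

```python
import itertools

# Illustrative round-robin balancer across hypothetical inference endpoints.
class RoundRobinBalancer:
    def __init__(self, endpoints: list[str]):
        # itertools.cycle repeats the endpoint list indefinitely.
        self._cycle = itertools.cycle(endpoints)

    def next_endpoint(self) -> str:
        return next(self._cycle)

balancer = RoundRobinBalancer(["edge-gpu-1:8443", "edge-gpu-2:8443", "edge-gpu-3:8443"])
picks = [balancer.next_endpoint() for _ in range(4)]
print(picks)  # the fourth request wraps back to edge-gpu-1:8443
```

Round robin spreads inference requests evenly; real deployments typically layer on least-connections or latency-aware policies to account for the long, uneven duration of LLM inference calls.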

Crucially, F5’s solutions can be deployed across public clouds and multi-tenant data centres, as well as on premises and at the network edge.

What's more, F5 Distributed Cloud WAAP and its associated API security solutions can rapidly scale as traffic increases, reducing the overall cost of delivering the LLM-as-a-service. F5 also provides the visibility into traffic flows, latency, and response times that telcos and other managed service providers will need to offer enterprise customers service level agreements.

F5 can also help address the fact that LLM inference and other AI tasks are notoriously resource-intensive. These workloads call for extensive data exchanges and often hit bottlenecks because those exchanges must be secured at scale. The result can be lower utilization of valuable resources, which leads to increased operational costs and delays to desired outcomes.

If they play their cards right and smartly leverage scalable, robust security solutions, telcos have what it takes to become trusted providers of industry- and nation-specific LLMs. Those that succeed will undoubtedly gain a major competitive edge in the years ahead.

Keen to learn more? Book a meeting with F5 at Mobile World Congress in Barcelona, 26–29 February (Hall 5, Stand 5C60).