
SR Linux and ChatGPT combine for network AIOps


With the release of ChatGPT in November 2022, we witnessed one of those rare moments when it feels as though the world is living in the future, not the present. The uncanny facility ChatGPT has with language, and its ability to so closely mimic human intelligence, set off wonder and warning in equal measure. Suddenly AI was the darling of Wall Street while prognostications of doom filled the headlines.

Like many organizations, Nokia was interested to see what Generative AI and ChatGPT might contribute if they could be integrated with our existing tools. The result of this work is an application built into our SR Linux Network Operating System (NOS). It’s called “SR Linux GPT”, and it took one of our engineers a couple of weeks to build. I will quickly describe what the application does and how it can benefit engineers and operations teams before turning to the long-term implications for our customers.

ChatGPT and natural language

Most of us are familiar at this point with large language models (LLMs) and what they do. For our development team, the interest in ChatGPT lay in its potential to query network information and assist with configuring network parameters using everyday language. Its natural language processing (NLP) abilities enable it to hold query-based conversations. Would this make it easier for engineers and operations teams to issue commands in natural language rather than traditional command-line interface (CLI) commands? It might not only make the work easier, quicker and more efficient, it might also reduce errors.

With a ChatGPT-like console interface, issuing commands in natural language rather than in the CLI’s format and grammar feels far more natural. It reduces the need for specialized knowledge of the vendor CLI and cuts both the errors and the time involved in retrieving system information. For example, you can issue a natural language directive such as ‘Are there any dropped packets on any interfaces?’
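As a rough illustration of what happens behind such a console, here is a minimal sketch of packaging an operator’s plain-English question as a chat request in OpenAI’s message format. The model name, system instruction and function name are assumptions for illustration, not the actual SR Linux GPT implementation:

```python
import json

def build_nl_query(question: str) -> dict:
    """Hypothetical sketch: wrap an operator's plain-English question in a
    chat-completion style payload asking the LLM to answer as an SR Linux
    CLI assistant. Model and instructions are illustrative assumptions."""
    return {
        "model": "gpt-3.5-turbo",  # assumed model; any chat model would do
        "messages": [
            {
                "role": "system",
                "content": "You translate operator questions into SR Linux "
                           "CLI commands. Reply with the command only.",
            },
            {"role": "user", "content": question},
        ],
    }

payload = build_nl_query("Are there any dropped packets on any interfaces?")
print(json.dumps(payload, indent=2))
```

The payload would then be sent to the LLM, and the returned command (or its output) rendered back in the console.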

Figure 1

Context is king

Our initial tests of the new app live up to our expectations. But what about ChatGPT’s tendency to experience ‘hallucinations’? If you have been following the more sober discussions of ChatGPT, you will know that the key to getting reliable answers is to carefully set the context for your queries. In this case, our engineers directed OpenAI's GPT LLM to base its answers both on the documentation associated with SR Linux and its operations, and on the state of the network received through telemetry and logs. This ensures that it doesn’t invent answers from other data sets, such as publicly available information about other NOSs.

Network AIOps

This integration of NLP is an evolving use case related to AIOps for networking. Network AIOps capabilities may be enabled via OSS, BSS, network management or network automation platforms, or integrated directly into a networking hardware platform such as an IP router or Ethernet switch. The end goal is applying AI-based approaches to improve the human operator experience and enhance the day-to-day processes best suited to the operator’s unique business needs.

When combined with AI platforms like ChatGPT, network AIOps opens up new possibilities and use cases in advanced network automation and enhanced operations:

  • Automation of manual, tedious and error-prone tasks

  • Reduction of open trouble tickets

  • Faster detection and rectification of faults

  • Deduplication of alerts so operations staff are not inundated

Root-cause analysis and resolution is another network AIOps area that is growing fast. Machine learning software can be trained on historic telemetry data to establish a ‘normal’ baseline for the network. The baseline enables it to detect anomalies and suggest root causes for these variances or events. AI-enhanced infrastructure can pinpoint the exact origin of the fault and provide remediation to solve the issue, such as the configuration change required.
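The baseline idea above can be shown with a deliberately simple sketch: learn a ‘normal’ from historic telemetry samples and flag live samples that deviate by more than a z-score threshold. Production systems use far richer models; the data and threshold here are illustrative assumptions:

```python
from statistics import mean, stdev

def find_anomalies(history, live, threshold=3.0):
    """Flag live telemetry samples whose z-score against the historic
    baseline exceeds the threshold. A toy stand-in for the ML models
    a real network AIOps system would train."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in live if sigma and abs(x - mu) / sigma > threshold]

# e.g. per-minute interface error counts: history defines 'normal'
history = [2, 3, 1, 2, 4, 3, 2, 3, 2, 3]
anomalies = find_anomalies(history, [2, 3, 50])  # 50 is far outside baseline
```

Once a sample is flagged, correlating it with recent configuration changes and events is what lets the system suggest a root cause rather than just raise an alarm.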

A potential area for consideration is the use of a network digital twin capability within a fabric management system to sanity-check a change in emulation and, once verified, release it quickly to production through the IT service management (ITSM) workflow. This entire process could be automated end to end, enabling predictive repairs before customers even experience the fault.

Innovate quickly

Our ability to adapt ChatGPT-like functionality so quickly to SR Linux is rooted in our more modern and open NOS architecture. When we built SR Linux, we had the advantage of being able to see what our customers liked and disliked the most about the legacy systems and tools they were using. One of the most often-heard complaints was how they felt locked in by legacy solutions.

Fortunately for us, software development in the cloud era has shifted substantially with greater emphasis on openness and extensibility rather than monolithic integration. We built our system around Linux using a microservices architecture and open tools so that our customers, third parties and our own teams could innovate quickly and take advantage of advances like Generative AI when they occur.

Our state-of-the-art NetOps Development Kit (NDK) uses gRPC (Google Remote Procedure Call) and protocol buffers (Protobufs) to provide maximum flexibility when it comes to the languages supported, as well as backwards compatibility with previous versions of our NOS. With SR Linux and its NDK, it was as simple as creating an app that integrates with the OpenAI API and running it directly on the networking hardware platform running SR Linux. We believe this is one of the first implementations in the industry to feature this level of tight integration.
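To give a flavour of the OpenAI API side of such an on-box app (the NDK specifics are not shown here), this sketch packages a question as an HTTP request to OpenAI’s chat completions endpoint using only the standard library. Building the request without sending it keeps the example self-contained; the model name and placeholder key are assumptions:

```python
import json
import urllib.request

def make_openai_request(api_key: str, question: str) -> urllib.request.Request:
    """Sketch: build (but don't send) an HTTP request to OpenAI's chat
    completions endpoint. A real NDK app would send it and render the
    reply in the CLI; model name and key below are placeholders."""
    body = json.dumps({
        "model": "gpt-3.5-turbo",  # assumed model
        "messages": [{"role": "user", "content": question}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = make_openai_request("PLACEHOLDER_KEY",
                          "Show interfaces with dropped packets")
```

Because the NDK speaks gRPC and Protobufs, the same app pattern can be written in whichever language a team prefers.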

Because they are designed to produce natural language statements in response to everyday queries, LLMs are unusual among AI technologies in their ability to touch the lives of everyday citizens. This has garnered them a lot of attention, but the truth is that AI has seen dramatic results on many fronts recently, most of them invisible to the general public. As many commentators have noted, we are at a tipping point, where AI technology may be poised to usher in a new era in human progress.

We are happy to say that customers already enjoying the freedom and openness of SR Linux are very well positioned to integrate these ground-breaking technologies into their AIOps, just as we’ve done with SR Linux GPT. This ability to innovate quickly will let you profit from these AI advancements by lowering operations costs and improving the services you deliver to your own customers.

To learn more and test drive this functionality, check out the Packet Pushers Video Byte and download the “SR Linux GPT” app at the Learn SR Linux site.

Erwan James

About Erwan James

Erwan James is Product Line Manager driving strategy and execution for Nokia's datacenter portfolio.  Erwan has accumulated over 15 years in the networking industry, transitioning from roles in technical support to network engineering and architecture to his current position in product management.

Connect with Erwan on LinkedIn
