Unlocking innovation with generative AI: Nokia's journey
There’s a concept in interstellar travel called the “wait calculation”: should you embark on a decades-long voyage now, or wait a few years for a spaceship that can travel twice as fast? Over the past year, as I have observed the amazing pace of innovation in Generative Artificial Intelligence, I have frequently reflected on this concept. Do we build an AI solution with the technology available today, or wait for a ready-made solution that may arrive in just a few months?
At Nokia, our longstanding tradition of using AI/ML in R&D, Product Development, and Service Delivery has positioned us well for this exciting realm of Generative AI. The past year has been a period of significant advancement, with Generative AI not just taking shape but advancing by leaps and bounds. This progress is largely the result of advancements in cloud computing, which now enables the management of vast data sets at costs that were inconceivable just a decade ago.
For multinational corporations like ours, the pivotal choice is whether to develop custom Large Language Model (LLM)-based solutions or adopt readily available AI "Assistant" tools. While off-the-shelf solutions from technology partners offer convenience, they may not always cater to the nuanced needs of an organization as diverse as Nokia. It is a strategic decision that carries considerable weight for our long-term innovation trajectory.
We believe it’s imperative to learn fast, through trials, proofs-of-concept, and experimentation. In early 2023 we launched our in-house LLM Gateway, an initiative that has already seen over 200 use-case candidates, with 52 advancing to the proof-of-concept stage so far. The Nokia LLM Gateway is a testament to our commitment to providing a secure suite of tools, granting our teams access to a myriad of LLMs for use-case development. By collaborating with major cloud partners and leveraging on-prem computing, as well as our own Bell Labs LLMs, we make a diverse set of tools available to our employees, ranging from chatbot interfaces for product documentation to automated test case development. The most popular use cases are information retrieval using Retrieval-Augmented Generation (RAG) and programming support. We have recorded reductions in manual work of as much as 70% in some of these use cases. A landmark component of this is our own NokiaGPT, an LLM chat interface designed to address a variety of daily operational needs across the company.
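For readers unfamiliar with the RAG pattern mentioned above, the idea is simple: retrieve the documents most relevant to a question, then inject them into the model's prompt so the answer is grounded in your own data. The sketch below is a minimal, illustrative toy, not Nokia's implementation; it uses naive term overlap in place of the vector embeddings a production system would use, and all names and documents are hypothetical.

```python
import re

def tokenize(text):
    # Lowercase and strip punctuation so "protocols." matches "protocols".
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents, top_k=2):
    """Rank documents by naive term overlap with the query.
    A real system would rank by embedding similarity instead."""
    q = tokenize(query)
    scored = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:top_k]

def build_prompt(query, context_docs):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )

# Hypothetical product-documentation snippets.
docs = [
    "The 7750 router supports BGP and OSPF routing protocols.",
    "Firmware upgrades require a maintenance window of 30 minutes.",
    "The chassis ships with redundant power supplies.",
]
question = "Which routing protocols are supported?"
prompt = build_prompt(question, retrieve(question, docs))
print(prompt)
```

The prompt that results would then be sent to an LLM; because the answer is constrained to the retrieved context, the model can cite internal documentation it was never trained on, which is what makes RAG so well suited to product-documentation chatbots.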
Our journey doesn't only focus on in-house solutions. We have embarked on the trial and piloting of various AI Assistant tools, witnessing firsthand their potential to elevate productivity. These tools have proven invaluable in distilling insights from surveys and efficiently summarizing meetings, despite the recognition that they are yet to reach perfection. One day I asked for a 3-slide presentation, but the application returned seven slides, just as a human would! Sentient? A noteworthy mention is our trial of various Code Assistant tools, which, while currently more effective for common languages like Python, show promise for broader applications in the future. Equipped with these tools, we will see many more 10x developers. My personal favorite, the M365 Chat, has been instrumental in managing the plethora of information that crosses my desk. It serves as a digital assistant for tracking down conversations, documents, and action items, enhancing my day-to-day efficiency. While far from perfect, these tools already offer a glimpse of a very different future. Remember Amara's Law: we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
One of my wake-up calls has been realizing how Generative AI simplifies and accelerates content discovery, underscoring the necessity for meticulous data management. Last year we established the Enterprise Data Office at Nokia and hired our first Chief Data Officer. This unit now spearheads our enterprise AI initiatives and works to ensure our data remains well-organized, accessible, and secure, an endeavor that becomes even more salient in the age of Generative AI. This new era calls for a new level of vigilance and responsibility, a challenge we at Nokia are ready to meet head-on.
The potential of Generative AI is monumental, and its transformative impact is expected to surpass that of the Internet. Yet this optimism is tempered by an acute awareness of the privacy, security, social, and legal implications that accompany this new wave of technology. At Nokia we are pioneering Responsible AI in the telecom industry. We have defined six Responsible AI principles, put in place an AI Ethics Board, and established a thorough legal process to evaluate the potential impact of our PoCs and trials. We are also contributing to the development of government guidelines and helping to steer regulation. In embracing Generative AI, we are not just adopting new technology; we are paving the way for a more intelligent, efficient, and secure future, staying true to Nokia's legacy of pioneering innovation. Do we wait? No. We study, we experiment, we learn, and we prepare ourselves for a very different future.