Neural networks are taking us closer to Connected AI

The term Artificial Intelligence (AI) certainly isn’t new. What seemed like science fiction not so many years ago is now an integral part of our daily lives.

We utilize AI-enabled services all the time, sometimes without even realizing it: whenever we take a selfie, our smartphone uses AI to automatically enhance the quality of the image. In videoconferencing, our cameras detect the subject and segment it, enabling us to blur the background if we want to.

And artificial neural networks, which enable computing systems to mimic the functions of the human brain, offer even greater potential.

Neural networks enable deep learning

In a very basic sense, a neural network is a system that is designed to learn and recognize patterns. Neural networks help computers become more autonomous by advancing their intelligent decision-making abilities. As a result, computers can process and analyze enormous amounts of data in a fraction of a second and use it to make decisions. Autonomous driving is one example of this kind of rapid decision making. And deep neural networks have demonstrated impressive results in other types of tasks as well, such as generating images from descriptions, or detecting and identifying the underlying emotional meaning of a text through language sentiment analysis.
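To make the idea concrete, here is a minimal sketch (not from the article) of a two-layer feed-forward network in NumPy: it maps an input vector to class scores through learned weight matrices. Real deep networks stack many such layers and train the weights from data; this toy only illustrates the structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized weights stand in for learned parameters.
W1 = rng.standard_normal((4, 8))   # input -> hidden weights
W2 = rng.standard_normal((8, 3))   # hidden -> output weights

def forward(x):
    """One forward pass: linear map, ReLU non-linearity, linear map."""
    hidden = np.maximum(0.0, x @ W1)   # ReLU keeps only positive activations
    return hidden @ W2                 # raw scores, one per class

x = rng.standard_normal(4)             # a toy 4-dimensional input
scores = forward(x)
print(scores.shape)                    # (3,) -- one score per class
```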

From the Internet of Things to Connected AI

We are now moving from the Internet of Things (IoT) towards Artificial Intelligence of Things (AIoT) – or connected AI. While IoT can make machines and devices communicate with each other, AIoT takes that interaction one step further by enabling things to learn from each other and improve their communication over time.

AIoT allows us to build a connected network of intelligent devices and organizations. In a smart warehouse, for example, the placement and movement of goods can be automatically controlled by a network of intelligent sensors or cameras. Over time, the performance of the operating machines in the warehouse improves, increasing productivity and reducing errors.

Nokia is among the key developers of the Neural Network Compression (NNC) standards

Limited computing power and restricted bandwidth are the biggest barriers to implementing connected AI in devices. Increasing the efficiency of AI algorithms and compressing them for efficient transmission, in a way that works for all service providers, is the key to overcoming these hurdles.

ISO/IEC 15938-17, the international standard for Neural Network Compression (NNC), addresses this challenge by providing interoperability between vendors and enabling them to decode and process compressed neural networks. The standard supports both the deployment of neural networks and the carriage of updates. The latter enables incremental enhancement and collaborative learning of algorithms.

Nokia is one of the major contributors to this standard, which paves the way for connected AI. Nokia’s contribution covers almost every aspect of compression, from preparing a neural network for efficient computation to transporting it efficiently.

Some key aspects of Nokia’s contribution to NNC

A neural network can be interpreted as a computational graph consisting of many nodes. Every node in the graph defines operations and variables that enable the neural network to perform complex mathematical functions. Think of image classification, for example: as the neural network assigns a certain class label to an image, different nodes contribute to the decision making. Some nodes are more important, such as those that identify the object, whilst others, such as those that identify the background, matter less.

So, the nodes’ contributions aren’t equal. This is where pruning – removing redundant nodes that contribute less to the decision making – reduces the complexity of the neural network.
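As a hedged illustration of the basic principle, the sketch below performs simple magnitude-based pruning: weights with small absolute value are assumed to contribute little and are zeroed out. The pruning and sparsification tools in the NNC standard are considerably more sophisticated; this is only a toy.

```python
import numpy as np

def prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold   # keep only the larger weights
    return weights * mask

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))           # a toy weight matrix
W_pruned = prune(W, sparsity=0.75)

# 75% of the entries are now exactly zero, which makes the matrix
# cheaper to store, transmit, and (with sparse kernels) compute with.
print(np.mean(W_pruned == 0))             # 0.75
```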

The NNC standards combine Nokia’s pruning techniques with other specialized sparsification methods. This hybrid framework makes neural networks more computationally efficient and improves their characteristics so that they can be carried and deployed more efficiently.

Another important aspect of Nokia’s contribution to the NNC standards is the compression of weight updates. Weights are the tunable variables in a computational graph. During the learning process, the weights of the neural networks are tuned to learn new patterns. The difference between the weights at the start and end of a learning process is known as a weight update. Weight update compression is a key step in reducing the amount of data during a neural network update transfer.
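The sketch below shows what a weight update is in the simplest terms: the element-wise difference between a layer’s weights before and after a round of training. Transmitting this (typically small-valued) difference instead of the full weights is what weight-update compression targets; names and values here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

w_before = rng.standard_normal(1000)                    # weights before training
w_after = w_before + 0.01 * rng.standard_normal(1000)   # weights after a training round

# The weight update is the quantity that gets compressed and transferred.
update = w_after - w_before

# The receiver already holds the old weights, so it reconstructs the
# new ones from the old weights plus the transmitted update.
reconstructed = w_before + update
print(np.allclose(reconstructed, w_after))   # True
```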

Nokia developed quantization techniques for obtaining a low bit-width representation of the neural networks to enable better compression of weight updates. Furthermore, we exploited predictive coding techniques to enhance the encoding of weight updates. This helps to communicate the difference between two consecutive weight updates and significantly reduces the amount of exchanged information.
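The toy below combines the two ideas from this paragraph in their simplest form: uniform quantization to integer levels (a low bit-width representation) and predictive coding, where only the residual between two consecutive weight updates is encoded. The actual coding tools are specified in ISO/IEC 15938-17; this sketch, including the step size, is purely illustrative.

```python
import numpy as np

STEP = 0.05   # quantization step size (illustrative choice)

def quantize(values, step=STEP):
    """Uniform scalar quantization to integer levels."""
    return np.round(values / step).astype(np.int32)

def dequantize(levels, step=STEP):
    return levels.astype(np.float64) * step

rng = np.random.default_rng(0)
update_prev = 0.1 * rng.standard_normal(100)                  # previous weight update
update_curr = update_prev + 0.01 * rng.standard_normal(100)   # similar to the previous one

# Predictive coding: encode only the small residual between consecutive
# updates, so the quantized levels are mostly tiny integers that entropy
# coding can pack very compactly.
residual = update_curr - update_prev
levels = quantize(residual)

# Decoder side: reconstruct the current update from the previous one.
update_rec = update_prev + dequantize(levels)
print(np.max(np.abs(update_rec - update_curr)) <= STEP / 2)   # True
```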

Most importantly, Nokia helped ensure that a wide range of tools and techniques – not limited to Nokia-specific solutions – are supported by the standard, because the biggest advancements in technology are achieved through collaboration and openness. These are the fundamental values that guide our efforts in standardization as we approach a new era of connected AI.

Hamed R. Tavakoli

About Hamed R. Tavakoli

Hamed R. Tavakoli, Doctor of Science (Tech.), is the Head of Visual AI Systems Research at Nokia. He conducts research in machine learning and contributes to the standardization of technologies relevant to artificial intelligence, including neural network compression. He is an experienced researcher with a background in developing deep learning algorithms, including at Aalto University in Finland.
