AI shapes the radio architecture of future cellular devices
It is well-known that artificial intelligence will significantly impact the wireless industry. Consequently, 3GPP has decided to standardize its usage in Release 18 and Release 19 for three distinct use cases: localization, beam management and channel state information feedback. For all these use cases, AI is deployed either independently, on the device side or the network side, or collaboratively on both sides of the communication system.
The device plays a key role in all the above use cases, either by running AI functionalities itself, or by supporting the network in doing so without disclosing any information about its own inner workings. Proof of how AI can be used successfully across the communication link can be found in Nokia’s recent collaboration with Qualcomm, which shows how a device can cooperate with the network in a multi-vendor setup to boost wireless capacity.
The key aspect of the three 5G-Advanced use cases is that AI is ultimately used to learn the intricacies of the wireless environment around the device. Once the information is acquired, it is then used to either locate the device, or optimize the communication link to that device.
6G is expected to bring even more use cases targeting AI enhancements, with a north star of eventually creating a native AI air interface in which, for example, AI transceivers adapt not only to the specific conditions experienced by each device in the field, but also to the device- and base station-specific hardware impairments.
Context-aware devices
An AI-native 6G device should be enabled to learn about its extrinsic environment, such as field reflectors (pedestrians, buildings, vegetation, etc.), weather conditions, user behavior (with respect to handling the device) and other nearby devices. It should also learn about its intrinsic environment, such as how the different hardware components, subject to aging, supply-voltage or temperature variations, affect the radio signal that the device either emits or receives. The enablement comes partly from the network, via configuring the necessary radio signals for the device to understand what the propagation and interference conditions are, and partly from within the device itself, by allowing the AI models to access information about the device’s own hardware, such as RF nonlinearities, operating temperature, etc.
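As a minimal sketch of the idea above, an on-device model could fuse extrinsic radio measurements with intrinsic hardware telemetry into a single feature vector. All field names below are illustrative assumptions, not standardized quantities:

```python
# Hypothetical sketch: combining radio measurements (extrinsic context)
# with hardware telemetry (intrinsic context) into one feature vector
# that an on-device AI model could consume. Field names are illustrative.

def build_feature_vector(radio, context):
    """Concatenate extrinsic radio features with intrinsic device context."""
    return [
        radio["rsrp_dbm"],       # reference signal received power
        radio["sinr_db"],        # signal-to-interference-plus-noise ratio
        radio["doppler_hz"],     # coarse mobility indicator
        context["pa_temp_c"],    # power amplifier temperature
        context["supply_v"],     # supply voltage (aging/impairment proxy)
        context["rf_nl_index"],  # scalar summary of RF nonlinearity
    ]

radio = {"rsrp_dbm": -95.0, "sinr_db": 12.5, "doppler_hz": 30.0}
context = {"pa_temp_c": 41.0, "supply_v": 3.7, "rf_nl_index": 0.12}
vec = build_feature_vector(radio, context)
```

In a real device, the intrinsic fields would come from the hardware abstraction layer rather than a dictionary, but the principle is the same: the model sees the impairments alongside the radio conditions.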
Furthermore, research shows that AI can greatly improve the performance of 5G-Advanced and 6G devices when context information complementing the radio signals is made available to the AI models running in the device. Such context information may refer to device orientation, user posture, handgrip, device location and all kinds of other sensorial information.
As examples, context-aware AI has shown potential in circumventing the very common hand-blockage problem and in mitigating the impact of user posture (e.g., gaming versus video calls), and it can enable the device to select the best receive beams or to use smart antennas, both of which yield significant improvements in the quality of the communication link and of the overall service.
Similarly, harnessing the device location by AI has shown promise with respect to learning a device-specific waveform and the corresponding detection thereof. In other words, once the device’s location is made available to the model, the AI combines the location information with the radio signals in such a way that the AI-powered device becomes fully aware of its own spatial orientation and surroundings, and overcomes challenges such as link obstructions, high levels of interference, the device’s own mobility and more. Despite the obvious gains of using user context, preserving the privacy of this data is an equally important consideration. Maintaining data privacy while allowing the AI model to effectively harness this data is therefore paramount to ensuring AI’s adoption, and methods that purposely add noise to sensitive user data (or to the AI model data, for that matter) are becoming increasingly popular.
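One well-known family of such noise-adding methods is the Laplace mechanism from differential privacy. A minimal sketch, with purely illustrative numbers, might look like this:

```python
import random

def laplace_noise(scale, rng=random):
    """Sample from Laplace(0, scale): the difference of two
    independent Exp(1) draws, scaled by `scale`."""
    return scale * (rng.expovariate(1.0) - rng.expovariate(1.0))

def privatize(value, sensitivity, epsilon, rng=random):
    """Laplace mechanism: add noise with scale sensitivity/epsilon.
    Smaller epsilon means stronger privacy but a noisier value."""
    return value + laplace_noise(sensitivity / epsilon, rng)

# E.g., report a device coordinate with calibrated noise instead of
# its exact value (all numbers here are illustrative assumptions).
noisy_x = privatize(12.345, sensitivity=1.0, epsilon=0.5)
```

The key design choice is that the noise scale is calibrated to a privacy budget (epsilon), so the tradeoff between model utility and user privacy is explicit and tunable.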
Balancing between power efficiency and throughput
Given the energy constraint of the battery-powered device, the right tradeoff between AI model complexity and robustness to change must be established. Presently, limited on-device memory and overall device silicon size also mean that the number of AI models the device can store and process is limited, and so are the AI architecture variants that the device hardware can support. While these are critical considerations when designing an AI solution for a device-specific wireless communication problem, the advantage of using AI is that many of these constraints can be accounted for during the design phase and prior to deployment, e.g., via model pruning or model quantization.
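To make the last point concrete, here is a minimal sketch of magnitude pruning and symmetric int8 quantization on a toy weight vector (real deployments use framework tooling, but the arithmetic is the same):

```python
def prune(weights, fraction=0.5):
    """Magnitude pruning: zero out the smallest-magnitude fraction
    of weights, shrinking the model's effective size."""
    k = int(len(weights) * fraction)
    threshold = sorted(abs(w) for w in weights)[k] if k else 0.0
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: store each weight as an
    integer in [-127, 127] plus a single float scale factor."""
    scale = (max(abs(w) for w in weights) / 127.0) or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and scale."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.08, 0.9]
q, s = quantize_int8(prune(weights, 0.25))
restored = dequantize(q, s)
```

Pruning reduces the number of nonzero parameters, while quantization cuts storage per parameter by 4x versus float32, at the cost of a bounded rounding error of at most half the scale factor per weight.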
With rather limited on-device resources, the temptation is to train and deploy simpler AI models and fine-tune them during the operation of the device in the field. The biggest caveat of this approach is that it may lead to a phenomenon called “catastrophic forgetting”, a situation in which the AI model, once trained for previously unseen conditions, forgets how to handle the old set of conditions. The term “catastrophic forgetting” appears in the literature as early as the late 1980s, when it was observed that a neural network tends to forget how to handle a past task after it has been sequentially trained to solve the current task. Thus, ensuring that AI models remain robust and predictable, in both conformance and performance, across various propagation and interference conditions becomes paramount to the successful adoption of AI-native devices in 6G systems.
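A classic mitigation for catastrophic forgetting is experience replay (rehearsal): keep a small buffer of past training examples and mix a few into every fine-tuning batch. The sketch below, using reservoir sampling to keep the buffer representative, is one simple way this could be realized on a memory-constrained device:

```python
import random

class ReplayBuffer:
    """Fixed-size reservoir of past training examples. Mixing a few of
    these into each fine-tuning batch (experience replay) helps the
    model retain old conditions while adapting to new ones."""

    def __init__(self, capacity, rng=random):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = rng

    def add(self, example):
        # Reservoir sampling keeps a uniform sample of all examples seen,
        # regardless of how many have streamed past the buffer.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def mixed_batch(self, new_examples, replay_count):
        """Return new examples plus a few replayed old ones."""
        replay = self.rng.sample(self.buffer, min(replay_count, len(self.buffer)))
        return list(new_examples) + replay
```

The buffer capacity directly trades memory footprint against how much of the old condition distribution the model keeps rehearsing.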
Nevertheless, as Moore’s law remains an accurate prediction of compute power (i.e., the number of transistors on a microchip doubling roughly every twenty-four months), increasingly complex and powerful AI models are constantly under development and could soon be deployed on-device to solve ever more challenging wireless problems, e.g., when the 6G era starts.
Conclusion
The Release 19 AI-powered use cases are a clear indication that the wireless community has embraced the usage of AI for physical layer design. With the device's processing power increasing exponentially, AI also shows tremendous potential for both AI-native 6G air interface design and 6G modem design; for this to succeed, the wireless industry must balance the complexity and robustness of the AI models deployed onto future 6G devices.