Machine Learning for MU-MIMO Receive Processing in OFDM Systems
18 June 2021
Machine learning (ML)-based receivers are seen as a promising approach to enabling accurate detection in multi-user multiple-input multiple-output (MU-MIMO) systems. However, their suitability as alternatives to conventional detection methods has not yet been demonstrated. In addition to enabling accurate signal reconstruction over realistic channel models, MU-MIMO detection algorithms must allow for easy adaptation to a varying number of users without computationally demanding retraining. In this work, we propose an ML-enhanced receiver for MU-MIMO systems that builds on a conventional linear minimum mean squared error (LMMSE) architecture. The proposed architecture preserves the high versatility of the LMMSE receiver while improving its accuracy in two ways. First, a convolutional neural network (CNN) is used to compute an approximation of the second-order statistics of the channel estimation error, which are required for accurate equalization. Second, a CNN-based demapper jointly demaps a large number of orthogonal frequency-division multiplexing (OFDM) symbols and subcarriers. The resulting architecture can be used for both uplink and downlink, and is trained in an end-to-end manner, removing the need for hard-to-obtain perfect channel state information at training time. Simulations were performed on 3rd Generation Partnership Project (3GPP)-compliant channel models with two different pilot patterns. The results show that the proposed ML-enhanced receiver outperforms the conventional LMMSE architecture over the full range of user speeds considered, with particularly large gains in high-mobility scenarios.
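To indicate which second-order statistics are meant, the following is a minimal sketch of LMMSE equalization under imperfect channel knowledge, written in standard textbook notation that is not taken from the abstract; it assumes unit-power transmitted symbols and treats the estimation error as additional noise. With received signal $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}$, noise $\mathbf{n} \sim \mathcal{CN}(\mathbf{0}, \sigma^2 \mathbf{I})$, and channel estimate $\hat{\mathbf{H}} = \mathbf{H} - \tilde{\mathbf{H}}$ with estimation error $\tilde{\mathbf{H}}$, the equalized symbols read

\[
\hat{\mathbf{x}} = \hat{\mathbf{H}}^{\mathsf{H}} \left( \hat{\mathbf{H}} \hat{\mathbf{H}}^{\mathsf{H}} + \mathbb{E}\!\left[ \tilde{\mathbf{H}} \tilde{\mathbf{H}}^{\mathsf{H}} \right] + \sigma^{2} \mathbf{I} \right)^{-1} \mathbf{y},
\]

where the error covariance $\mathbb{E}[\tilde{\mathbf{H}} \tilde{\mathbf{H}}^{\mathsf{H}}]$ is the quantity that the CNN in the proposed receiver is described as approximating.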