
A Stochastic Conjugate Gradient Method for Approximation of Functions

01 March 2012


A stochastic conjugate gradient method for approximation of a function is proposed. The proposed method avoids computing and storing the covariance matrix in the normal equations for the least squares solution. In addition, the method performs the conjugate gradient steps by using an inner product that is based on stochastic sampling. Theoretical analysis shows that the method is convergent in probability. The method has applications in such fields as predistortion for the linearization of power amplifiers.

Index Terms -- Stochastic conjugate gradient, approximation of functions, convergence in probability, normal equations, least squares solution, polynomial predistortion, power amplifier linearization

The best approximation in a vector space is normally carried out by the least squares method, in which a linear combination of the basis functions is sought that best matches the observed output samples when evaluated at the observed input samples. The coefficients of the least squares solution satisfy the normal equations. The normal equations can be solved by an iterative method such as the conjugate gradient (CG) method. An iterative method for solving the normal equations has many advantages over a direct method such as the Cholesky decomposition. When the CG method is used to solve the normal equations, the solution of the approximation problem of interest takes the form of an inner-outer loop. In the outer loop, each iteration consists of taking a set of samples from the observation and forming the normal equations with those samples.
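To make the inner-outer structure concrete, the following is a minimal sketch in Python, not the authors' algorithm: each outer iteration draws a random batch of observations, forms the sampled normal equations B^T B c = B^T y, and refines the coefficient estimate with a few conjugate gradient steps. For clarity the sketch forms the sampled covariance matrix explicitly, which is precisely what the proposed method avoids; the function names, polynomial basis, batch size, and iteration counts are illustrative assumptions.

```python
# Minimal sketch (not the authors' algorithm): least-squares fitting of
# polynomial basis coefficients by forming the normal equations on a random
# subset of observations in each outer iteration and refining the solution
# with a few conjugate gradient (CG) steps. All names are illustrative.
import numpy as np

def basis_matrix(x, degree):
    """Evaluate the monomial basis 1, x, ..., x^degree at the samples x."""
    return np.vander(x, degree + 1, increasing=True)

def cg_solve(A, b, x0, n_steps):
    """Run a fixed number of CG iterations on A x = b (A symmetric positive definite)."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(n_steps):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-12:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def stochastic_cg_fit(x_obs, y_obs, degree=5, batch=256,
                      outer_iters=50, inner_steps=5, seed=0):
    """Outer loop: resample the observations, form the sampled normal
    equations, and refine the coefficients with warm-started CG steps."""
    rng = np.random.default_rng(seed)
    c = np.zeros(degree + 1)                  # current coefficient estimate
    for _ in range(outer_iters):
        idx = rng.choice(len(x_obs), size=batch, replace=False)
        B = basis_matrix(x_obs[idx], degree)  # basis evaluated on the batch
        A = B.T @ B / batch                   # sampled covariance matrix
        b = B.T @ y_obs[idx] / batch          # sampled cross-correlation
        c = cg_solve(A, b, c, inner_steps)    # inner CG refinement
    return c

# Usage: approximate a smooth nonlinearity from noisy observations.
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 5000)
y = np.tanh(2.0 * x) + 0.01 * rng.normal(size=x.size)
coeffs = stochastic_cg_fit(x, y)
print(coeffs)
```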