In this thesis, two new types of LMS adaptive filtering algorithms are proposed. The first is based on an algebraic reduction of the adaptive filtering equation, while the second is based on extrapolation of the filter coefficients. By rearranging the filtering equation, the algebraic-reduction LMS (ARLMS) adaptive filtering algorithm requires 50% fewer multiplications, at the expense of 50% more additions, than the direct-form LMS (DLMS) algorithm. In addition, the new algorithm introduces an extra adaptation parameter beyond the filter coefficients. The extrapolated LMS (ELMS) algorithm, on the other hand, extrapolates the adaptation coefficients to values close to their converged ones, thereby saving a considerable number of adaptation iterations.

The thesis first derives the optimal coefficient solution and minimum mean square error (MMSE) of the algebraic-reduction adaptive filtering (ARAF) structure. The derivations show that the optimal coefficient solutions and MMSEs of the ARAF structure and the direct-form Wiener filter coincide when the system satisfies a specific condition, namely that the mean of the input signal multiplied by the conjugated sum of the adaptive coefficients equals the mean of the desired signal. Otherwise, their optimal coefficient solutions and MMSEs differ, and the ARAF structure has a smaller MMSE than the direct-form Wiener filter.

Secondly, the ARLMS algorithm is introduced and its properties are explored. From the ARAF structure, an ARLMS algorithm is proposed, in contrast to the DLMS algorithm, which is a realization of the Wiener adaptive filter. The mean square errors of the ARLMS and DLMS algorithms differ because the ARLMS algorithm introduces an extra adaptation parameter. Simulations show that the ARLMS algorithm has a smaller steady-state MSE than the DLMS algorithm when the step sizes are properly chosen and the system does not satisfy the specific condition mentioned above.
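The multiplication saving claimed above comes from an algebraic rearrangement of the filtering equation. The thesis's exact ARAF identity is not reproduced in this abstract; as an illustration only, the following sketch shows a classical rearrangement of the same flavor, in which an inner product of even length is computed with half the data-dependent multiplications plus extra additions and correction sums (the function name `half_mult_inner` is a hypothetical label introduced here).

```python
import numpy as np

def half_mult_inner(a, b):
    """Inner product with half the cross-term multiplications.

    Illustrative only -- not the thesis's ARAF identity. For
    even-length vectors, sum(a * b) is recovered from N/2 cross
    products of paired elements plus two correction sums; in a
    filtering context the coefficient correction sum is fixed and
    can be precomputed, which is how such rearrangements trade
    multiplications for additions.
    """
    a0, a1 = a[0::2], a[1::2]            # even/odd-indexed taps
    b0, b1 = b[0::2], b[1::2]
    cross = np.sum((a0 + b1) * (a1 + b0))  # N/2 multiplications
    # (a0+b1)(a1+b0) = a0*a1 + a0*b0 + a1*b1 + b0*b1, so removing
    # the two correction terms leaves the paired inner product.
    return cross - np.sum(a0 * a1) - np.sum(b0 * b1)
```

The correction term involving only the coefficients changes slowly under adaptation, which is what makes identities of this kind attractive for LMS-type filters.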
The conditions for convergence in the mean and in the mean square of the ARLMS algorithm are derived under the assumption of zero-mean input and desired signals. Accordingly, a closed-form steady-state mean square error (MSE) is derived as a function of the LMS step size μ and an extra compensation step size α; it is slightly larger than that of the DLMS algorithm. Convergence bounds for μ and α are also derived. It is shown that, when μ is small enough and α is properly chosen, the ARLMS algorithm performs comparably to the DLMS algorithm, and simple working rules and ranges for α and μ that ensure this comparability are provided. For verification, the ARLMS algorithm has been applied to HDTV and NTSC multipath equalization, echo cancellation, and HDSL equalization.

Thirdly, the ARLMS algorithm is combined with the signed LMS algorithm (SA), the signed regressor algorithm (SRA), the signed-signed (or signed product) algorithm (SSA), and Duhamel's fast exact least mean square (FELMS) adaptive algorithm. Simulations show that these combinations converge as fast as their DLMS counterparts and maintain comparable performance when α is properly chosen. Furthermore, the ARLMS algorithm is extended to two-dimensional adaptive filtering; simulations on image noise reduction show that the 2-D algorithm performs comparably to the conventional 2-D LMS algorithm when α is properly chosen.

Finally, the ELMS algorithms in various forms are detailed. Simulations show that the ELMS algorithm is very sensitive to the step size and to the time instants at which the extrapolation is made. To avoid wild and/or incorrect extrapolation, a precautionary scheme is included, which improves the extrapolation accuracy noticeably.
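The extrapolation idea can be sketched concretely. Since LMS coefficient trajectories typically approach their converged values roughly geometrically, one plausible realization of coefficient extrapolation (an Aitken-style scheme chosen here for illustration, not necessarily the thesis's exact ELMS rule) jumps each tap ahead from three equally spaced snapshots, with a guard that skips unreliable taps in the spirit of the precautionary scheme mentioned above; the function name and the ratio bound `r_max` are assumptions of this sketch.

```python
import numpy as np

def aitken_extrapolate(w0, w1, w2, r_max=0.95):
    """Aitken-style extrapolation of adaptive filter coefficients.

    w0, w1, w2: coefficient snapshots taken at three equally
    spaced iterations. Each tap is assumed to approach its
    converged value geometrically; taps with an unreliable or
    diverging estimated ratio are left untouched (a simple
    precaution against wild extrapolation).
    """
    d1 = w1 - w0                     # first trajectory increment
    d2 = w2 - w1                     # second trajectory increment
    w_ext = w2.copy()
    for i in range(len(w2)):
        if abs(d1[i]) < 1e-12:       # trajectory already flat
            continue
        r = d2[i] / d1[i]            # per-tap geometric ratio
        if 0 <= r < r_max:           # extrapolate only when safe
            # geometric-series jump toward the converged value
            w_ext[i] = w2[i] + d2[i] * r / (1 - r)
    return w_ext
```

For a tap following an exact geometric approach w(k) = w* − c·rᵏ, the jump lands on w* in one step; in practice the saving is the many LMS iterations that the tail of the geometric decay would otherwise take.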