Abstract
This paper addresses the problem of distributed estimation over an incremental network when the measurements taken by the nodes follow a widely linear model. The proposed algorithm, which we refer to as the incremental augmented affine projection algorithm (incAAPA), utilizes the full second-order statistical information in the complex domain. Moreover, it exploits spatio-temporal diversity to improve the estimation performance. We derive a steady-state performance expression for the incAAPA in terms of the mean-square deviation, and we further derive sufficient conditions that ensure mean-square convergence. Our analysis shows that the proposed algorithm is able to process both second-order circular (proper) and non-circular (improper) signals. The validity of the theoretical results and the good performance of the proposed algorithm are demonstrated by several computer simulations.








Notes
For any matrices of compatible dimensions \(\{{\mathbf {Z}}_1, {\mathbf {Z}}_2, {\varvec{\Sigma }}_k\}\), it holds that
$$
\mathsf{vec}\{\mathbf{Z}_1 \boldsymbol{\Sigma}_k \mathbf{Z}_2\} = \big(\mathbf{Z}_2^{\mathsf{T}} \otimes \mathbf{Z}_1\big)\,\mathsf{vec}\{\boldsymbol{\Sigma}_k\}.
$$

Note that \(\mathbf{x}_{k,i}=[x_{k,i}(1), x_{k,i}(2), \ldots, x_{k,i}(L)]\).
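As a quick sanity check of this vectorization identity (with column-major vec), it can be verified numerically; the snippet below is only an illustration with randomly generated matrices and is not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random complex matrices of compatible dimensions
Z1 = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
Sigma = rng.standard_normal((4, 5)) + 1j * rng.standard_normal((4, 5))
Z2 = rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))

vec = lambda A: A.flatten(order="F")   # column-major vectorization
lhs = vec(Z1 @ Sigma @ Z2)             # vec{Z1 Sigma Z2}
rhs = np.kron(Z2.T, Z1) @ vec(Sigma)   # (Z2^T kron Z1) vec{Sigma}
assert np.allclose(lhs, rhs)
```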
Appendices
Appendix A
Now, to develop a distributed solution for (5), we turn the constrained minimization into an unconstrained one as
where the \(T \times 1\) vector \({\pmb {\beta }}_{k,i}\) comprises the Lagrange multipliers. Note that (50) is a real-valued function of the complex variables \({\mathbf {h}}\) and \({\mathbf {g}}\). Each node could use only its local data \(\{d_{k}(i), {\mathbf {x}}_{k,i}\}\) to calculate an estimate of \(\{{\mathbf {h}}^{\circ }, {\mathbf {g}}^{\circ }\}\); however, when a set of nodes has access to data, we can take advantage of node cooperation and space-time diversity to improve the estimation performance. To solve (50), we first compute the gradients of \(J({\mathbf {h}}_i,{\mathbf {g}}_i)\) with respect to the weight vectors \({{\mathbf {h}}^{*}}\) and \({{\mathbf {g}}^{*}}\), which are defined as
The required gradients are given by
Setting the gradients of \(J({\mathbf {h}}_i,{\mathbf {g}}_i)\) with respect to \({{\mathbf {h}}^{*}}\) and \({{\mathbf {g}}^{*}}\) to zero, we get
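The displays of (55) and (56) do not appear in this appendix text; judging from the incremental split in (57) below, in which each node contributes one term to the sum, they presumably take the centralized form

$$
\mathbf{h}_i = \mathbf{h}_{i-1} + \frac{1}{2}\sum_{k=1}^{N}\mathbf{X}_{k,i}^{*}\,\boldsymbol{\beta}_{k,i},
\qquad
\mathbf{g}_i = \mathbf{g}_{i-1} + \frac{1}{2}\sum_{k=1}^{N}\mathbf{X}_{k,i}\,\boldsymbol{\beta}_{k,i},
$$

where the whole sum over the \(N\) nodes must be evaluated before the estimates can be updated.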
Clearly, the solution given by (55) and (56) is not a distributed solution for (50). To obtain a distributed and adaptive solution for (50), we apply the following modifications:

1. Split the update equations (55) and (56) into \(N\) separate steps, whereby each step adds one term to the summation. Then, for every \(k \in {\mathcal {K}}\), we have
$$\mathbf{h}_{1,i} = \mathbf{h}_{i-1}, \qquad \mathbf{g}_{1,i} = \mathbf{g}_{i-1} \tag{57a}$$
$$\mathbf{h}_{k,i} = \mathbf{h}_{k-1,i} + \frac{1}{2}\,\mathbf{X}_{k,i}^{*}\,\boldsymbol{\beta}_{k,i} \tag{57b}$$
$$\mathbf{g}_{k,i} = \mathbf{g}_{k-1,i} + \frac{1}{2}\,\mathbf{X}_{k,i}\,\boldsymbol{\beta}_{k,i} \tag{57c}$$
$$\mathbf{h}_{i} = \mathbf{h}_{N,i}, \qquad \mathbf{g}_{i} = \mathbf{g}_{N,i} \tag{57d}$$

where \(\mathbf{h}_{k,i}\) and \(\mathbf{g}_{k,i}\) denote the local estimates of \(\mathbf{h}^{\circ}\) and \(\mathbf{g}^{\circ}\) at node \(k\) and time \(i\), respectively.
2. Eliminate the Lagrange multiplier vectors \(\boldsymbol{\beta}_{k,i}\). To this end, we substitute (57b) and (57c) into the constraint relation of Eq. (5) to obtain
$$\mathbf{d}_{k,i} - \mathbf{X}_{k,i}^{\mathsf{T}}\Big(\mathbf{h}_{k-1,i} + \frac{1}{2}\,\mathbf{X}_{k,i}^{*}\,\boldsymbol{\beta}_{k,i}\Big) - \mathbf{X}_{k,i}^{\mathsf{H}}\Big(\mathbf{g}_{k-1,i} + \frac{1}{2}\,\mathbf{X}_{k,i}\,\boldsymbol{\beta}_{k,i}\Big) = \mathbf{0} \tag{58}$$

Solving (58) for \(\boldsymbol{\beta}_{k,i}\) yields
$$\boldsymbol{\beta}_{k,i} = 2\Big(\mathbf{X}_{k,i}^{\mathsf{H}}\mathbf{X}_{k,i} + \mathbf{X}_{k,i}^{\mathsf{T}}\mathbf{X}_{k,i}^{*}\Big)^{-1}\mathbf{e}_{k,i} \tag{59}$$

where \(\mathbf{e}_{k,i}\) is defined in (10). Finally, we substitute (59) into (57b) and (57c) and introduce a convergence factor \(\mu_k > 0\), which trades off the final misadjustment against the convergence speed, to obtain the following update equations
$$\mathbf{h}_{k,i} = \mathbf{h}_{k-1,i} + \mu_k\,\mathbf{X}_{k,i}^{*}\Big(\mathbf{X}_{k,i}^{\mathsf{H}}\mathbf{X}_{k,i} + \mathbf{X}_{k,i}^{\mathsf{T}}\mathbf{X}_{k,i}^{*}\Big)^{-1}\mathbf{e}_{k,i} \tag{60a}$$
$$\mathbf{g}_{k,i} = \mathbf{g}_{k-1,i} + \mu_k\,\mathbf{X}_{k,i}\Big(\mathbf{X}_{k,i}^{\mathsf{H}}\mathbf{X}_{k,i} + \mathbf{X}_{k,i}^{\mathsf{T}}\mathbf{X}_{k,i}^{*}\Big)^{-1}\mathbf{e}_{k,i} \tag{60b}$$
To avoid singularities caused by inverting a rank-deficient matrix, a positive constant \(\delta \), called the regularisation parameter, is added to the matrix being inverted in the above updates, giving
and the proof is complete.
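For concreteness, the NumPy sketch below implements one node's regularised update (60a)-(60b); it is a minimal illustration under assumed data shapes (an \(M \times T\) regressor block \(\mathbf{X}_{k,i}\) and a \(T \times 1\) desired vector \(\mathbf{d}_{k,i}\)), and the function name incaapa_node_update is our own, not from the paper.

```python
import numpy as np

def incaapa_node_update(h, g, X, d, mu, delta):
    """One regularised incAAPA update at a single node (cf. (60a)-(60b)).

    h, g  : current local estimates (length-M complex vectors)
    X     : M x T matrix of the T most recent regressors at this node
    d     : length-T vector of the corresponding desired samples
    mu    : step size (convergence factor)
    delta : regularisation parameter added before the matrix inversion
    """
    # A-priori error vector: e = d - X^T h - X^H g
    e = d - X.T @ h - X.conj().T @ g
    # Regularised version of (X^H X + X^T X^*)
    A = X.conj().T @ X + X.T @ X.conj() + delta * np.eye(X.shape[1])
    z = np.linalg.solve(A, e)
    # Widely linear (augmented) update of both weight vectors
    h_new = h + mu * (X.conj() @ z)
    g_new = g + mu * (X @ z)
    return h_new, g_new
```

In the incremental cycle of (57), node \(k\) receives \(\{\mathbf{h}_{k-1,i}, \mathbf{g}_{k-1,i}\}\) from its predecessor, applies this update with its own data, and passes the result to the next node; after the last node, \(\{\mathbf{h}_{N,i}, \mathbf{g}_{N,i}\}\) initialize the next cycle.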
Appendix B
Equating the weighted energies of both sides of (25), we arrive at the following space-time version of the weighted energy-conservation relation for the incAAPA:
Substituting (20) into (63) and rearranging the result, we obtain
By using the error signal \({\mathbf {e}}_{k,i}={\mathbf {U}}_{k,i}^{{\mathsf {H}}} {\tilde{\mathbf {w}}}_{k-1,i}+{\mathbf {v}}_{k,i}\) we have
Taking expectations of both sides of (65) and applying Assumptions 1 and 2, we obtain
where
Then, (66) and (67) can be rewritten as
and the proof is complete.
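As a rough numerical companion to this steady-state analysis, the sketch below (reusing incaapa_node_update from the Appendix A sketch) runs the incremental cycle over a small network driven by a non-circular input and estimates the steady-state MSD by averaging \(\|\mathbf{h}^{\circ}-\mathbf{h}_{k,i}\|^2+\|\mathbf{g}^{\circ}-\mathbf{g}_{k,i}\|^2\) over the final iterations; the network size, signal model, and parameter values are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, T = 10, 4, 2          # nodes, filter length, projection order (assumed values)
mu, delta = 0.2, 1e-3
h_o = rng.standard_normal(M) + 1j * rng.standard_normal(M)            # true h°
g_o = 0.5 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))    # true g° (improper model)

h = np.zeros(M, dtype=complex)
g = np.zeros(M, dtype=complex)
msd = []
for i in range(2000):
    for k in range(N):                    # incremental cycle over the ring
        # Non-circular (improper) input: correlated real and imaginary parts
        Xr = rng.standard_normal((M, T))
        X = Xr + 0.3j * Xr + 0.1j * rng.standard_normal((M, T))
        v = 0.01 * (rng.standard_normal(T) + 1j * rng.standard_normal(T))
        d = X.T @ h_o + X.conj().T @ g_o + v   # widely linear data model
        h, g = incaapa_node_update(h, g, X, d, mu, delta)
    msd.append(np.linalg.norm(h - h_o) ** 2 + np.linalg.norm(g - g_o) ** 2)

print("steady-state MSD estimate:", np.mean(msd[-200:]))
```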