Abstract
Recently, the dual neural network (DNN) model has been used to synthesize the k-winners-take-all (kWTA) process. The advantage of the DNN-kWTA model is its very simple structure: it contains only 2n + 1 connections. The convergence behavior of the model under noise has also been reported. However, no analytic expression for the equilibrium point is available, so it is difficult to study how noise affects the model's performance. Based on the energy function of the network, this paper proposes an efficient method for analyzing the performance of the DNN-kWTA model under input noise.
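The abstract does not reproduce the model equations. As a rough illustration only (a Monte Carlo sketch, not the authors' energy-function method), the single-state-variable DNN-kWTA dynamics eps·dy/dt = Σ_i g(u_i − y) − k, with the Heaviside activation approximated by a steep logistic, can be simulated to observe how additive input noise perturbs the selected winners. All parameter values (gain, step size, noise level) are hypothetical choices for the sketch:

```python
import numpy as np

def dnn_kwta(u, k, gain=1000.0, dt=1e-4, steps=20000):
    """Euler simulation of the DNN-kWTA dynamics
        eps * dy/dt = sum_i g(u_i - y) - k,
    where the Heaviside step is approximated by a steep logistic g.
    gain, dt and steps are illustrative values, not from the paper."""
    y = 0.0
    for _ in range(steps):
        z = np.clip(gain * (u - y), -50.0, 50.0)   # clip to avoid exp overflow
        x = 1.0 / (1.0 + np.exp(-z))               # neuron outputs in (0, 1)
        y += dt * (x.sum() - k)                    # update the single state variable
    return (u > y).astype(int)                     # winners are inputs above threshold y

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 8)              # noise-free inputs
k = 3
noisy = u + rng.normal(0.0, 0.01, 8)      # additive Gaussian input noise
out = dnn_kwta(noisy, k)
# the network is "correct" if the k largest noise-free inputs still win
correct = set(np.flatnonzero(out)) == set(np.argsort(u)[-k:])
```

Repeating this over many noise realizations estimates the probability that the network still selects the true k winners; the paper's contribution is to obtain this kind of performance measure efficiently and analytically from the energy function rather than by simulation.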
Ā© 2014 Springer International Publishing Switzerland
Cite this paper
Feng, R., Leung, CS., Ng, KT., Sum, J. (2014). The Performance of the Stochastic DNN-kWTA Network. In: Loo, C.K., Yap, K.S., Wong, K.W., Teoh, A., Huang, K. (eds) Neural Information Processing. ICONIP 2014. Lecture Notes in Computer Science, vol 8834. Springer, Cham. https://doi.org/10.1007/978-3-319-12637-1_35
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-12636-4
Online ISBN: 978-3-319-12637-1