Abstract
Dynamic power management (DPM) in wireless sensor nodes is a well-known technique for reducing idle energy consumption. DPM controls a node's operating mode by dynamically toggling the on/off status of its units based on predictions of event occurrences. However, since each mode change itself incurs overhead, guaranteeing DPM's efficiency is difficult in non-deterministic environments whose statistics are unknown. The solution suite in this paper, collectively referred to as cognitive power management (CPM), is a principled attempt to enable DPM in statistically unknown settings and comes with two different analytical guarantees. Our first design is based on learning automata and guarantees better-than-pure-chance DPM in the face of non-stationary event processes. Our second solution addresses an even more general setting in which event occurrences may be adversarial. In this case, we formulate the interaction of an individual mote with its environment as a repeated zero-sum game in which the node relies on a no-external-regret procedure to learn its minimax strategies online. We conduct numerical experiments to measure the performance of our schemes in terms of network lifetime and event loss percentage.
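The abstract names two online learners: a learning automaton for non-stationary event processes and a no-external-regret procedure for the adversarial, zero-sum-game setting. The sketch below is a minimal illustration of both ideas, not the paper's exact algorithms: the two-mode node (active/sleep), the binary reinforcement signal, the per-mode losses, and the step sizes are assumptions made for exposition, and the no-regret learner shown is the standard multiplicative-weights (Hedge) update, which the paper may instantiate differently.

import math
import random

# Illustrative sketch only: the two-mode node, reward signal, loss definitions,
# and step sizes are assumptions, not the paper's exact parameterization.

MODES = ["active", "sleep"]


class LinearRewardPenaltyLA:
    """Linear reward-penalty learning automaton over power modes."""

    def __init__(self, n_actions, a=0.1, b=0.05):
        self.p = [1.0 / n_actions] * n_actions   # action-probability vector
        self.a, self.b = a, b                    # reward / penalty step sizes
        self.n = n_actions

    def choose(self):
        return random.choices(range(self.n), weights=self.p, k=1)[0]

    def update(self, action, rewarded):
        # rewarded: True if the chosen mode paid off this epoch, e.g., an event
        # was captured while active, or energy was saved while asleep and no event occurred.
        for i in range(self.n):
            if rewarded:
                self.p[i] = (self.p[i] + self.a * (1.0 - self.p[i])
                             if i == action else (1.0 - self.a) * self.p[i])
            else:
                self.p[i] = ((1.0 - self.b) * self.p[i]
                             if i == action
                             else self.b / (self.n - 1) + (1.0 - self.b) * self.p[i])


class HedgeLearner:
    """Multiplicative-weights (Hedge) no-external-regret learner: the node plays
    the rows of a repeated zero-sum game against the event process."""

    def __init__(self, n_actions, eta=0.1):
        self.w = [1.0] * n_actions
        self.eta = eta

    def strategy(self):
        total = sum(self.w)
        return [w / total for w in self.w]

    def choose(self):
        return random.choices(range(len(self.w)), weights=self.w, k=1)[0]

    def update(self, losses):
        # losses[i] in [0, 1]: cost attributed to mode i this round, e.g., a missed
        # event, wasted idle energy, or mode-switch overhead.
        self.w = [w * math.exp(-self.eta * l) for w, l in zip(self.w, losses)]


# Example per-epoch use (hypothetical reward/loss values):
#   la = LinearRewardPenaltyLA(len(MODES)); i = la.choose(); la.update(i, rewarded=True)
#   hedge = HedgeLearner(len(MODES)); j = hedge.choose(); hedge.update([0.0, 1.0])

In a simulation loop, the automaton would pick a mode each epoch, observe whether that choice captured the event or wasted energy, and apply its probability update; the Hedge learner instead consumes a full loss vector per round, which is what yields its external-regret guarantee against an adversarial event sequence. If only the chosen mode's cost were observable, a bandit variant such as Exp3 would be needed instead.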
Cite this article
Tabatabaei, S.M., Hakami, V. & Dehghan, M. Cognitive Power Management in Wireless Sensor Networks. J. Comput. Sci. Technol. 30, 1306–1317 (2015). https://doi.org/10.1007/s11390-015-1600-8