Abstract
In this paper we propose a new architecture for learning action sequences through imitation. Imitation occurs by observing and applying sequences of basic behaviors. When an agent has observed another agent and later applies the observed action sequence itself, this imitated sequence can be regarded as a meme. Agents that behave similarly can therefore be grouped by their typical behavioral patterns. This paper thus explores imitation from the viewpoint of memetic proliferation.
Combining imitation learning with meme theory, we show through simulations of agent societies that imitation yields significant performance improvements. Performance is quantified with an entropy measure that evaluates the emerging behavioral clusters (sketched below).
Our approach is demonstrated with a society of emotion-driven agents that imitate each other in order to reach a pleasant emotional state.
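The entropy measure is only named here, not defined. The following Python sketch shows one plausible way such a measure could be computed, assuming each agent is assigned to a cluster by the meme (imitated action sequence) it most frequently reproduces; the function name cluster_entropy and the example memes are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' exact measure): quantify how strongly a
# society of agents has clustered into groups sharing the same dominant meme,
# using Shannon entropy over the cluster sizes.
from collections import Counter
from math import log2

def cluster_entropy(dominant_memes):
    """dominant_memes: one entry per agent, each the meme (e.g. a tuple of
    basic behaviors) that agent most frequently reproduces.
    Returns entropy in bits: 0.0 if all agents share one meme,
    log2(k) if the agents are spread uniformly over k clusters."""
    counts = Counter(dominant_memes)
    n = len(dominant_memes)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical example: 6 agents forming two behavioral clusters (4 vs. 2).
society = [("explore", "feed"), ("explore", "feed"), ("explore", "feed"),
           ("explore", "feed"), ("flee", "rest"), ("flee", "rest")]
print(cluster_entropy(society))  # ~0.918 bits
```

Lower entropy indicates that imitation has driven the society toward a few dominant behavioral patterns; higher entropy indicates that many distinct memes coexist.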
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Richert, W., Kleinjohann, B., Kleinjohann, L. (2005). Learning Action Sequences Through Imitation in Behavior Based Architectures. In: Beigl, M., Lukowicz, P. (eds) Systems Aspects in Organic and Pervasive Computing - ARCS 2005. ARCS 2005. Lecture Notes in Computer Science, vol 3432. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-31967-2_7
DOI: https://doi.org/10.1007/978-3-540-31967-2_7
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-25273-3
Online ISBN: 978-3-540-31967-2
eBook Packages: Computer Science (R0)