Self-Organizing Maps (SOM) are a type of artificial neural network that creates topologically ordered maps of input patterns through a competitive learning process. The neurons in a SOM are organized in a lattice structure and adaptively adjust their synaptic weights based on the input patterns, allowing high-dimensional data to be represented effectively in a lower-dimensional space. The key processes involved in a SOM are competition among neurons, cooperation within a topological neighborhood, and synaptic adaptation that enhances the response to similar input patterns.
Self Organizing Maps (SOM)
NN Based Unsupervised Learning
Introduction – Neurobiological Motivation
• The brain is organized in many places in such a way that different sensory inputs are represented by topologically ordered computational maps.
• Sensory inputs such as tactile, visual, and acoustic inputs are mapped onto different areas of the cerebral cortex in a topologically ordered manner.

Introduction – Features of Computational Mapping Models
• The objective is to build artificial topographic maps that learn through self-organization in a neurobiologically inspired manner.
• The computational maps should offer four properties:
1. In each map, neurons act in parallel and process pieces of information that are similar in nature but originate from different regions in the sensory input space.
2. At each stage of representation, each incoming piece of information is kept in its proper context.
3. Neurons dealing with closely related pieces of information are close together, so that they can interact via short synaptic connections.
4. Contextual maps can be understood in terms of decision-reducing mappings from higher-dimensional parameter spaces onto the cortical surface.
• The spatial location of an output neuron in a topographic map should correspond to a particular domain or feature of data drawn from the input space.

Introduction – Self Organizing Maps
• In a self-organizing map, the neurons are placed at the nodes of a lattice that is usually one- or two-dimensional. Higher-dimensional maps are also possible but not as common.
• The neurons become selectively tuned to various input patterns (stimuli) or classes of input patterns in the course of a competitive-learning process.
• The locations of the neurons so tuned (i.e., the winning neurons) become ordered with respect to each other in such a way that a meaningful coordinate system for different input features is created over the lattice.
• SOM is a type of artificial neural network that falls into the category of self-organizing systems.
• These networks are based on competitive learning; the output neurons of the network compete among themselves to be activated or fired, with the result that only one output neuron, or one neuron per group, is on at any one time.
• An output neuron that wins the competition is called a winner-takes-all neuron, or simply a winning neuron.
• One way of inducing a winner-takes-all competition among the output neurons is to use lateral inhibitory connections (i.e., negative feedback paths) between them; such an idea was originally proposed by Rosenblatt (1958).

Introduction
• A self-organizing map is therefore characterized by the formation of a topographic map of the input patterns, in which the spatial locations (i.e., coordinates) of the neurons in the lattice are indicative of intrinsic statistical features contained in the input patterns; hence the name "self-organizing map."

Two Basic Feature-Mapping Models – Self-Organizing Map
• The principal goal of the self-organizing map (SOM) is to transform an incoming signal pattern of arbitrary dimension into a one- or two-dimensional discrete map, and to perform this transformation adaptively in a topologically ordered fashion.

SOM – Algorithm Overview
• Initialization: Initialize the synaptic weights in the network. This can be done by assigning them small values picked from a random-number generator.
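To make the initialization step concrete, here is a minimal sketch in NumPy. It is not part of the original slides: the 10 x 10 lattice size, the 3-dimensional input space, and the uniform range of the random weights are all illustrative assumptions.

```python
import numpy as np

# Assumed lattice: 10 x 10 output neurons arranged on a 2-D grid, each holding
# a synaptic weight vector of the same dimension as the input patterns (d = 3).
rows, cols, d = 10, 10, 3

rng = np.random.default_rng(seed=0)

# Initialization: small random values for every synaptic weight.
weights = rng.uniform(low=-0.1, high=0.1, size=(rows, cols, d))
```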
• Once the network has been properly initialized, there are three essential processes involved in the formation of the self-organizing map, summarized as follows:
• Competition: For each input pattern, the neurons in the network compute their respective values of a discriminant function. This discriminant function provides the basis for competition among the neurons. The particular neuron with the largest value of the discriminant function is declared the winner of the competition.
• Cooperation: The winning neuron determines the spatial location of a topological neighborhood of excited neurons, thereby providing the basis for cooperation among such neighboring neurons.
• Synaptic Adaptation: This last mechanism enables the excited neurons to increase their individual values of the discriminant function in relation to the input pattern through suitable adjustments applied to their synaptic weights. The adjustments are made such that the response of the winning neuron to a subsequent application of a similar input pattern is enhanced.

Competitive Process
• Here A denotes the lattice of neurons.
• The index i(x), given by i(x) = arg min_j ||x - w_j||, j in A, is the subject of attention because it represents the identity of neuron i.
• The particular neuron i that satisfies this condition is called the best-matching, or winning, neuron for the input vector x.
• A continuous input space of activation patterns is mapped onto a discrete output space of neurons by a process of competition among the neurons in the network.
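A common realization of the discriminant function is the Euclidean distance between the input vector x and each neuron's weight vector w_j, so the winner is the neuron whose weights lie closest to x. The sketch below continues the hypothetical NumPy example above; the function name find_winner is illustrative, not from the slides.

```python
def find_winner(x, weights):
    """Competitive process: return the lattice coordinates (row, col)
    of the best-matching (winning) neuron i(x) for input vector x."""
    # Squared Euclidean distance from x to every neuron's weight vector.
    dists = np.sum((weights - x) ** 2, axis=-1)   # shape: (rows, cols)
    # The winner minimizes the distance (equivalently, maximizes similarity).
    return np.unravel_index(np.argmin(dists), dists.shape)
```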
Cooperative Process
• The winning neuron locates the center of a topological neighborhood of cooperating neurons.
• Similar to the lateral interaction that exists among a set of excited neurons in the human brain, a neuron that fires in the competitive process tends to excite the neurons in its immediate neighborhood more than those farther away from it.
• This observation motivates introducing a topological neighborhood around the winning neuron i that decays smoothly with lateral distance.
• To be specific, let h_{j,i} denote the topological neighborhood function centered on the winning neuron i and encompassing a set of excited (cooperating) neurons, a typical one of which is denoted by j.

Adaptive Process

Phases of Adaptive Process
• Self-Organizing or Ordering Phase
• Convergence Phase
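The cooperative and adaptive processes are typically realized with a Gaussian neighborhood function h_{j,i} = exp(-d_{j,i}^2 / (2 sigma^2)), where d_{j,i} is the lateral distance between neuron j and the winning neuron i on the lattice, together with the weight update w_j <- w_j + eta * h_{j,i} * (x - w_j). The sketch below continues the hypothetical NumPy example; the exponential decay schedules for eta and sigma (large during the ordering phase, small and roughly constant during the convergence phase) are illustrative assumptions, not values from the slides.

```python
def som_update(x, weights, winner, eta, sigma):
    """Cooperative + adaptive processes for a single input vector x."""
    rows, cols, _ = weights.shape
    # Lattice coordinates of every neuron, used for the lateral distance d_{j,i}.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    d2 = np.sum((grid - np.array(winner)) ** 2, axis=-1)
    # Cooperative process: Gaussian topological neighborhood centered on the winner.
    h = np.exp(-d2 / (2.0 * sigma ** 2))
    # Adaptive process: move every neuron's weights toward x, scaled by h and eta,
    # so the winner and its neighbors respond more strongly to similar inputs.
    weights += eta * h[..., None] * (x - weights)
    return weights


def train(data, weights, n_iter=1000, eta0=0.1, sigma0=5.0, tau=200.0):
    """Illustrative training loop: an ordering phase in which eta and sigma decay
    exponentially, followed by a convergence phase with small, roughly constant
    values (floors of 0.01 and 1.0)."""
    rng = np.random.default_rng(seed=1)
    for n in range(n_iter):
        x = data[rng.integers(len(data))]            # draw a random input pattern
        eta = max(eta0 * np.exp(-n / tau), 0.01)     # learning-rate schedule
        sigma = max(sigma0 * np.exp(-n / tau), 1.0)  # neighborhood-width schedule
        winner = find_winner(x, weights)
        weights = som_update(x, weights, winner, eta, sigma)
    return weights
```

With the earlier weights array and a data set of shape (N, 3), calling train(data, weights) would gradually order the lattice so that neighboring neurons become tuned to similar input patterns.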