The most popular entropy-based encoding technique is the Huffman code [1]. Among codes that assign an integral number of bits to each source symbol, it achieves the smallest possible average codeword length. This short article describes how it works.
The first step in the Huffman algorithm consists in creating a series of source reductions by sorting the probabilities of the symbols and combining the two least probable symbols into a single composite symbol, which is then used in the next source reduction stage. Figure 1 shows an example of consecutive source reductions. The original source symbols appear on the left-hand side, sorted in decreasing order of their probability of occurrence. In the first reduction, the two least probable symbols (a3, with probability 0.06, and a5, with probability 0.04) are combined into a composite symbol whose probability is 0.06 + 0.04 = 0.1. This composite symbol and its probability are copied into the first source-reduction column at the proper slot (so as to enforce the requirement that the probabilities are sorted...
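The reduction process described above can be sketched in a few lines of Python using a priority queue: at each step the two least probable entries are popped, merged into a composite, and pushed back until one entry remains, at which point the codewords can be read off. The probabilities below are illustrative; only a3 = 0.06 and a5 = 0.04 come from the example above, and the remaining values are assumed for the sake of a complete six-symbol source.

```python
import heapq
import itertools

def huffman_codes(probs):
    """Build a Huffman code: repeatedly merge the two least probable
    entries until a single one remains, prepending a distinguishing
    bit to each symbol of the two merged groups at every step."""
    counter = itertools.count()  # unique tie-breaker so dicts are never compared
    # each heap entry: (probability, unique_id, {symbol: codeword-so-far})
    heap = [(p, next(counter), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # least probable entry
        p2, _, c2 = heapq.heappop(heap)  # second least probable entry
        # prepend a bit to tell the two merged groups apart
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

# a3 and a5 as in the example above; the other probabilities are assumed
probs = {"a1": 0.1, "a2": 0.4, "a3": 0.06, "a4": 0.1, "a5": 0.04, "a6": 0.3}
codes = huffman_codes(probs)
```

With these probabilities the most likely symbol (a2) receives a one-bit codeword and the average codeword length works out to 2.2 bits per symbol; the exact codewords depend on how ties between equal probabilities are broken, but the average length of any Huffman code for a given distribution is the same.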
References
[1] D.A. Huffman, "A Method for the Construction of Minimum-Redundancy Codes," Proceedings of the IRE, Vol. 40, No. 9, 1952, pp. 1098–1101.
[2] Huffman Coding, http://datacompression.info/Huffman.shtml (accessed July 13, 2005).
[3] Arithmetic Coding, http://datacompression.info/Arithmetic Coding.shtml (accessed July 13, 2005).
© 2008 Springer-Verlag
(2008). Huffman Coding. In: Furht, B. (eds) Encyclopedia of Multimedia. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-78414-4_338