
Virtualization Primer

Created 2017-07-07
Contents

1 Virtualization 1
1.1 Hardware virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.3 Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.4 Video game console emulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.5 Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Desktop virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Other types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Nested virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 Hardware virtualization 6
2.1 Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Reasons for virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Full virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 Hardware-assisted virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.5 Paravirtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.6 Operating-system-level virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.7 Hardware virtualization disaster recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

3 I/O virtualization 10
3.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.2 Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.3 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11


3.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

4 Hypervisor 12
4.1 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4.2 Mainframe origins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4.3 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.4 x86 systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.5 Embedded systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.6 Security implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

5 Virtual machine 17
5.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.1.1 System virtual machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.3 Full virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.3.1 Hardware-assisted virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5.4 Operating-system-level virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5.7 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

6 Virtual memory 21
6.1 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
6.2 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
6.3 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
6.4 Paged virtual memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.4.1 Page tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.4.2 Paging supervisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.4.3 Pinned pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.4.4 Thrashing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.5 Segmented virtual memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.6 Address space swapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

6.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

7 Kernel (operating system) 27


7.1 Functions of the kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
7.1.1 Memory management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
7.1.2 Device management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
7.1.3 System calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
7.2 Kernel design decisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
7.2.1 Issues of kernel support for protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
7.2.2 Process cooperation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
7.2.3 I/O devices management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
7.3 Kernel-wide design approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
7.3.1 Monolithic kernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
7.3.2 Microkernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
7.3.3 Monolithic kernels vs. microkernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
7.3.4 Hybrid (or modular) kernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
7.3.5 Nanokernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
7.3.6 Exokernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
7.4 History of kernel development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
7.4.1 Early operating system kernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
7.4.2 Time-sharing operating systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
7.4.3 Amiga . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
7.4.4 Unix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
7.4.5 Mac OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
7.4.6 Microsoft Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
7.4.7 Development of microkernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
7.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
7.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
7.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
7.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
7.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

8 Kernel-based Virtual Machine 42


8.1 Internals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
8.2 Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
8.3 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
8.4 Graphical management tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
8.5 Emulated hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

8.6 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
8.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
8.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
8.9 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
8.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

9 QEMU 45
9.1 Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
9.2 Operating modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
9.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
9.3.1 Tiny Code Generator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
9.3.2 Accelerator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
9.3.3 Supported disk image formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
9.4 Hardware-assisted emulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
9.5 Parallel emulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
9.6 Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
9.6.1 VirtualBox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
9.6.2 Xen-HVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
9.6.3 KVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
9.6.4 Win4Lin Pro Desktop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
9.6.5 SerialICE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
9.6.6 WinUAE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
9.7 Emulated hardware platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
9.7.1 x86 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
9.7.2 PowerPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
9.7.3 ARM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
9.7.4 SPARC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
9.7.5 MicroBlaze . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
9.7.6 LatticeMico32 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
9.7.7 CRIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
9.7.8 OpenRISC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
9.7.9 External patches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
9.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
9.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
9.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
9.11 Text and image sources, contributors, and licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
9.11.1 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
9.11.2 Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
9.11.3 Content license . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Chapter 1

Virtualization

In computing, virtualization refers to the act of creating a virtual (rather than actual) version of something, including virtual computer hardware platforms, storage devices, and computer network resources.

Virtualization began in the 1960s, as a method of logically dividing the system resources provided by mainframe computers between different applications. Since then, the meaning of the term has broadened.[1]

1.1 Hardware virtualization

Main article: Hardware virtualization
See also: Mobile virtualization

Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer that is running Microsoft Windows may host a virtual machine that looks like a computer with the Ubuntu Linux operating system; Ubuntu-based software can be run on the virtual machine.[2][3]

In hardware virtualization, the host machine is the actual machine on which the virtualization takes place, and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or Virtual Machine Manager.

Different types of hardware virtualization include:

• Full virtualization – almost complete simulation of the actual hardware to allow software, which typically consists of a guest operating system, to run unmodified.

• Paravirtualization – a hardware environment is not simulated; however, the guest programs are executed in their own isolated domains, as if they are running on a separate system. Guest programs need to be specifically modified to run in this environment.

• Hardware-assisted virtualization – a way of improving overall efficiency of virtualization. It involves CPUs that provide support for virtualization in hardware, and other hardware components that help improve the performance of a guest environment.

Hardware virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients can pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization. With virtualization, several operating systems can be run in parallel on a single central processing unit (CPU). This parallelism tends to reduce overhead costs and differs from multitasking, which involves running several programs on the same OS. Using virtualization, an enterprise can better manage updates and rapid changes to the operating system and applications without disrupting the user. “Ultimately, virtualization dramatically improves the efficiency and availability of resources and applications in an organization. Instead of relying on the old model of ‘one server, one application’ that leads to underutilized resources, virtual resources are dynamically applied to meet business needs without any excess fat” (ConsonusTech).

Hardware virtualization is not the same as hardware emulation. In hardware emulation, a piece of hardware imitates another, while in hardware virtualization, a hypervisor (a piece of software) imitates a particular piece of computer hardware or the entire computer. Furthermore, a hypervisor is not the same as an emulator; both are computer programs that imitate hardware, but their domain of use in language differs.

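To make the “full virtualization” and “hardware-assisted virtualization” items above concrete, the following Python sketch creates a virtual disk and boots an unmodified guest operating system with QEMU on a Linux host. The choice of QEMU, the file names, and the disk and memory sizes are illustrative assumptions, not details taken from this article.

    import shutil
    import subprocess

    def boot_unmodified_guest(iso_path="ubuntu.iso", disk_path="guest.qcow2",
                              memory_mb=2048, use_kvm=True):
        """Create a virtual disk and boot an unmodified guest OS under QEMU.

        All paths and sizes are placeholder assumptions for illustration.
        """
        if shutil.which("qemu-img") is None or shutil.which("qemu-system-x86_64") is None:
            raise RuntimeError("QEMU does not appear to be installed on this host")

        # The guest's "hard disk" is just a copy-on-write file on the host.
        subprocess.run(["qemu-img", "create", "-f", "qcow2", disk_path, "20G"], check=True)

        cmd = [
            "qemu-system-x86_64",
            "-m", str(memory_mb),                        # guest RAM in megabytes
            "-drive", f"file={disk_path},format=qcow2",  # virtual hard disk
            "-cdrom", iso_path,                          # unmodified guest installer image
            "-boot", "d",                                # boot from the CD-ROM image first
        ]
        if use_kvm:
            cmd.append("-enable-kvm")                    # use hardware-assisted virtualization
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        boot_unmodified_guest()

With -enable-kvm the CPU’s virtualization extensions are used (hardware-assisted virtualization); without it, QEMU falls back to binary translation, i.e. pure software emulation of the guest.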

1.1.1 Snapshots

Main article: Snapshot (computer storage)

A snapshot is a state of a virtual machine, and generally its storage devices, at an exact point in time. A snapshot enables the virtual machine’s state at the time of the snapshot to be restored later, effectively undoing any changes that occurred afterwards. This capability is useful as a backup technique, for example, prior to performing a risky operation.

Virtual machines frequently use virtual disks for their storage; in a very simple example, a 10-gigabyte hard disk drive is simulated with a 10-gigabyte flat file. Any requests by the VM for a location on its physical disk are transparently translated into an operation on the corresponding file. Once such a translation layer is present, however, it is possible to intercept the operations and send them to different files, depending on various criteria. Every time a snapshot is taken, a new file is created, and used as an overlay for its predecessors. New data are written to the topmost overlay; reading existing data, however, needs the overlay hierarchy to be scanned, resulting in accessing the most recent version. Thus, the entire stack of snapshots is virtually a single coherent disk; in that sense, creating snapshots works similarly to the incremental backup technique.

Other components of a virtual machine can also be included in a snapshot, such as the contents of its random-access memory (RAM), BIOS settings, or its configuration settings. The “save state” feature in video game console emulators is an example of such snapshots.

Restoring a snapshot consists of discarding or disregarding all overlay layers that are added after that snapshot, and directing all new changes to a new overlay.
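The overlay mechanism described above can be illustrated with a small, self-contained model. The Python sketch below is a toy block store, not any particular hypervisor’s disk format: each snapshot adds a new writable overlay, writes always go to the topmost overlay, and reads walk the overlay stack from newest to oldest until a block is found, which is exactly the scan described in the previous paragraphs.

    class VirtualDisk:
        """Toy copy-on-write disk: a base image plus a stack of snapshot overlays."""

        def __init__(self):
            # Each layer maps block number -> block contents; index 0 is the base image.
            self.layers = [{}]

        def write(self, block, data):
            # New data always lands in the topmost (most recent) overlay.
            self.layers[-1][block] = data

        def read(self, block):
            # Scan overlays from newest to oldest; the first hit is the current version.
            for layer in reversed(self.layers):
                if block in layer:
                    return layer[block]
            return b"\x00"  # unwritten blocks read as zeroes

        def snapshot(self):
            # Freeze the current state by starting a fresh overlay for future writes.
            self.layers.append({})
            return len(self.layers) - 1  # snapshot id = index of the new overlay

        def restore(self, snapshot_id):
            # Discard overlays added after the snapshot and direct new writes to a new overlay.
            self.layers = self.layers[:snapshot_id]
            self.layers.append({})

    disk = VirtualDisk()
    disk.write(0, b"original")
    snap = disk.snapshot()
    disk.write(0, b"risky change")
    assert disk.read(0) == b"risky change"
    disk.restore(snap)
    assert disk.read(0) == b"original"   # the change made after the snapshot is undone

Production hypervisors implement the same idea with on-disk overlay formats (for example, qcow2 backing files) rather than in-memory dictionaries.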
1.1.2 Migration

Main article: Migration (virtualization)

The snapshots described above can be moved to another host machine with its own hypervisor; when the VM is temporarily stopped, snapshotted, moved, and then resumed on the new host, this is known as migration. If the older snapshots are kept in sync regularly, this operation can be quite fast, and allow the VM to provide uninterrupted service while its prior physical host is, for example, taken down for physical maintenance.

1.1.3 Failover

Main article: Failover

Similarly to the migration mechanism described above, failover allows the VM to continue operations if the host fails. However, in this case, the VM continues operation from the last-known coherent state, rather than the current state, based on whatever materials the backup server was last provided with.

1.1.4 Video game console emulation

Main article: Video game console emulator

A video game console emulator is a program that allows a personal computer or video game console to emulate a different video game console’s behavior. Video game console emulators and hypervisors both perform hardware virtualization; words like “virtualization”, “virtual machine”, “host” and “guest” are not used in conjunction with console emulators.

1.1.5 Licensing

Virtual machines running proprietary operating systems require licensing, regardless of the host machine’s operating system. For example, installing Microsoft Windows into a VM guest requires its licensing requirements to be satisfied.

1.2 Desktop virtualization

Main article: Desktop virtualization

Desktop virtualization is the concept of separating the logical desktop from the physical machine.

One form of desktop virtualization, virtual desktop infrastructure (VDI), can be thought of as a more advanced form of hardware virtualization. Rather than interacting with a host computer directly via a keyboard, mouse, and monitor, the user interacts with the host computer using another desktop computer or a mobile device by means of a network connection, such as a LAN, Wireless LAN or even the Internet. In addition, the host computer in this scenario becomes a server computer capable of hosting multiple virtual machines at the same time for multiple users.[4]

As organizations continue to virtualize and converge their data center environment, client architectures also continue

to evolve in order to take advantage of the predictability, continuity, and quality of service delivered by their converged infrastructure. For example, companies like HP and IBM provide a hybrid VDI model with a range of virtualization software and delivery models to improve upon the limitations of distributed client computing.[5] Selected client environments move workloads from PCs and other devices to data center servers, creating well-managed virtual clients, with applications and client operating environments hosted on servers and storage in the data center. For users, this means they can access their desktop from any location, without being tied to a single client device. Since the resources are centralized, users moving between work locations can still access the same client environment with their applications and data.[5] For IT administrators, this means a more centralized, efficient client environment that is easier to maintain and able to more quickly respond to the changing needs of the user and business.[6][7]

Another form, session virtualization, allows multiple users to connect and log into a shared but powerful computer over the network and use it simultaneously. Each is given a desktop and a personal folder in which they store their files.[4] With multiseat configuration, session virtualization can be accomplished using a single PC with multiple monitors, keyboards and mice connected.

Thin clients, which are seen in desktop virtualization, are simple and/or cheap computers that are primarily designed to connect to the network. They may lack significant hard disk storage space, RAM or even processing power, but many organizations are beginning to look at the cost benefits of eliminating “thick client” desktops that are packed with software (and require software licensing fees) and making more strategic investments.[8] Desktop virtualization simplifies software versioning and patch management, where the new image is simply updated on the server, and the desktop gets the updated version when it reboots. It also enables centralized control over what applications the user is allowed to have access to on the workstation.

Moving virtualized desktops into the cloud creates hosted virtual desktops (HVDs), in which the desktop images are centrally managed and maintained by a specialist hosting firm. Benefits include scalability and the reduction of capital expenditure, which is replaced by a monthly operational cost.[9]

1.3 Other types

Software

• Operating system-level virtualization, hosting of multiple virtualized environments within a single OS instance.

• Application virtualization and workspace virtualization, the hosting of individual applications in an environment separated from the underlying OS. Application virtualization is closely associated with the concept of portable applications.

• Service virtualization, emulating the behavior of dependent (e.g., third-party, evolving, or not implemented) system components that are needed to exercise an application under test (AUT) for development or testing purposes. Rather than virtualizing entire components, it virtualizes only specific slices of dependent behavior critical to the execution of development and testing tasks.

Memory

• Memory virtualization, aggregating random-access memory (RAM) resources from networked systems into a single memory pool

• Virtual memory, giving an application program the impression that it has contiguous working memory, isolating it from the underlying physical memory implementation

Storage

• Storage virtualization, the process of completely abstracting logical storage from physical storage

• Distributed file system, any file system that allows access to files from multiple hosts sharing via a computer network

• Virtual file system, an abstraction layer on top of a more concrete file system, allowing client applications to access different types of concrete file systems in a uniform way

• Storage hypervisor, the software that manages storage virtualization and combines physical storage resources into one or more flexible pools of logical storage[10]

• Virtual disk drive, a computer program that emulates a disk drive such as a hard disk drive or optical disk drive (see comparison of disc image software)

Data

• Data virtualization, the presentation of data as an abstract layer, independent of underlying database systems, structures and storage.

• Database virtualization, the decoupling of the database layer, which lies between the storage and application layers within the application stack overall.

Network

• Network virtualization, creation of a virtualized network addressing space within or across network subnets

• Virtual private network (VPN), a network protocol that replaces the actual wire or other physical media in a network with an abstract layer, allowing a network to be created over the Internet

1.4 Nested virtualization

Nested virtualization refers to the ability of running a virtual machine within another, with this general concept extendable to an arbitrary depth. In other words, nested virtualization refers to running one or more hypervisors inside another hypervisor. The nature of a nested guest virtual machine does not need to be homogeneous with its host virtual machine; for example, application virtualization can be deployed within a virtual machine created by using hardware virtualization.[11]

Nested virtualization becomes more necessary as widespread operating systems gain built-in hypervisor functionality, which in a virtualized environment can be used only if the surrounding hypervisor supports nested virtualization; for example, Windows 7 is capable of running Windows XP applications inside a built-in virtual machine. Furthermore, moving already existing virtualized environments into a cloud, following the Infrastructure as a Service (IaaS) approach, is much more complicated if the destination IaaS platform does not support nested virtualization.[12][13]

The way nested virtualization can be implemented on a particular computer architecture depends on supported hardware-assisted virtualization capabilities. If a particular architecture does not provide hardware support required for nested virtualization, various software techniques are employed to enable it.[12] Over time, more architectures gain required hardware support; for example, since the Haswell microarchitecture (announced in 2013), Intel started to include VMCS shadowing as a technology that accelerates nested virtualization.[14]
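On a Linux/KVM host, whether nested virtualization is available can be checked from the kernel module parameters exposed under /sys. The Python sketch below is a small, hedged example of such a check; the sysfs paths are those used by the kvm_intel and kvm_amd modules, and the script only reports what it finds rather than configuring anything.

    from pathlib import Path

    # kvm_intel reports "Y"/"N" (or "1"/"0" on older kernels); kvm_amd reports "1"/"0".
    NESTED_PARAMS = [
        Path("/sys/module/kvm_intel/parameters/nested"),
        Path("/sys/module/kvm_amd/parameters/nested"),
    ]

    def nested_virtualization_enabled():
        """Return True/False, or None if no KVM nested parameter is present on this host."""
        for param in NESTED_PARAMS:
            if param.exists():
                value = param.read_text().strip()
                return value in ("Y", "y", "1")
        return None  # KVM not loaded, or not a Linux/KVM host

    if __name__ == "__main__":
        state = nested_virtualization_enabled()
        if state is None:
            print("No KVM nested-virtualization parameter found on this host.")
        else:
            print(f"Nested virtualization enabled: {state}")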

1.5 See also

• Timeline of virtualization development
• Network Functions Virtualization
• Emulation (computing)
• Computer simulation
• Numeronym (explains that “V12N” is an abbreviation for “virtualization”)
• Consolidation ratio
• I/O virtualization
• Application checkpointing

1.6 References

[1] Graziano, Charles. “A performance analysis of Xen and KVM hypervisors for hosting the Xen Worlds Project”. Retrieved 2013-01-29.

[2] Turban, E; King, D; Lee, J; Viehland, D (2008). “Chapter 19: Building E-Commerce Applications and Infrastructure”. Electronic Commerce: A Managerial Perspective. Prentice-Hall. p. 27.

[3] “Virtualization in education” (PDF). IBM. October 2007. Retrieved 6 July 2010. “A virtual computer is a logical representation of a computer in software. By decoupling the physical hardware from the operating system, virtualization provides more operational flexibility and increases the utilization rate of the underlying physical hardware.”

[4] “Strategies for Embracing Consumerization” (PDF). Microsoft Corporation. April 2011. p. 9. Retrieved 22 July 2011.

[5] Chernicoff, David. “HP VDI Moves to Center Stage”. ZDNet. August 19, 2011.

[6] Baburajan, Rajani. “The Rising Cloud Storage Market Opportunity Strengthens Vendors”. infoTECH. It.tmcnet.com. August 24, 2011.

[7] Oestreich, Ken. “Converged Infrastructure”. CTO Forum. Thectoforum.com. November 15, 2010.

[8] “Desktop Virtualization Tries to Find Its Place in the Enterprise”. Dell.com. Retrieved 2012-06-19.

[9] “HVD: the cloud’s silver lining” (PDF). Intrinsic Technology. Retrieved 30 August 2012.

[10] “Enterprise Systems Group White paper, Page 5” (PDF). Enterprise Strategy Group White Paper written and published on August 20, 2011 by Mark Peters.

[11] Orit Wasserman, Red Hat (2013). “Nested virtualization: Shadow turtles” (PDF). KVM forum. Retrieved 2014-04-07.

[12] Muli Ben-Yehuda; Michael D. Day; Zvi Dubitzky; Michael Factor; Nadav Har’El; Abel Gordon; Anthony Liguori; Orit Wasserman; Ben-Ami Yassour (2010-09-23). “The Turtles Project: Design and Implementation of Nested Virtualization” (PDF). usenix.org. Retrieved 2014-12-16.

[13] Alex Fishman; Mike Rapoport; Evgeny Budilovsky; Izik Eidus (2013-06-25). “HVX: Virtualizing the Cloud” (PDF). rackcdn.com. Retrieved 2014-12-16.

[14] “4th-Gen Intel Core vPro Processors with Intel VMCS Shadowing” (PDF). Intel. 2013. Retrieved 2014-12-16.

1.7 External links


• An Introduction to Virtualization, January 2004, by Amit Singh
Chapter 2

Hardware virtualization

Computer hardware virtualization is the virtualization of computers as complete hardware platforms, certain logical abstractions of their componentry, or only the functionality required to run various operating systems. Virtualization hides the physical characteristics of a computing platform from the users, presenting instead another abstract computing platform.[1][2] At its origins, the software that controlled virtualization was called a “control program”, but the terms "hypervisor" or “virtual machine monitor” became preferred over time.[3]

2.1 Concept

The term “virtualization” was coined in the 1960s to refer to a virtual machine (sometimes called “pseudo machine”), a term which itself dates from the experimental IBM M44/44X system. The creation and management of virtual machines has been called “platform virtualization”, or “server virtualization”, more recently.

Platform virtualization is performed on a given hardware platform by host software (a control program), which creates a simulated computer environment, a virtual machine (VM), for its guest software. The guest software is not limited to user applications; many hosts allow the execution of complete operating systems. The guest software executes as if it were running directly on the physical hardware, with several notable caveats. Access to physical system resources (such as the network access, display, keyboard, and disk storage) is generally managed at a more restrictive level than the host processor and system memory. Guests are often restricted from accessing specific peripheral devices, or may be limited to a subset of the device’s native capabilities, depending on the hardware access policy implemented by the virtualization host.

Virtualization often exacts performance penalties, both in the resources required to run the hypervisor and in reduced performance on the virtual machine compared to running natively on the physical machine.

2.2 Reasons for virtualization

• In the case of server consolidation, many small physical servers are replaced by one larger physical server to increase the utilization of costly hardware resources such as CPU. Although hardware is consolidated, typically OSs are not. Instead, each OS running on a physical server becomes converted to a distinct OS running inside a virtual machine. The large server can “host” many such “guest” virtual machines. This is known as Physical-to-Virtual (P2V) transformation.

• Consolidating servers can also have the added benefit of reducing energy consumption. A typical server runs at 425 W[4] and VMware estimates a hardware reduction ratio of up to 15:1.[5]

• A virtual machine can be more easily controlled and inspected from outside than a physical one, and its configuration is more flexible. This is very useful in kernel development and for teaching operating system courses.[6]

• A new virtual machine can be provisioned as needed without the need for an up-front hardware purchase.

• A virtual machine can easily be relocated from one physical machine to another as needed. For example, a salesperson going to a customer can copy a virtual machine with the demonstration software to his laptop, without the need to transport the physical computer. Likewise, an error inside a virtual machine does not harm the host system, so there is no risk of breaking down the OS on the laptop.

• Because of the easy relocation, virtual machines can be used in disaster recovery scenarios.

However, when multiple VMs are concurrently running on the same physical host, each VM may exhibit varying and unstable performance, which highly depends on the workload imposed on the system by other VMs, unless proper

techniques are used for temporal isolation among virtual machines.

There are several approaches to platform virtualization.

Examples of virtualization scenarios:

• Running one or more applications that are not supported by the host OS: A virtual machine running the required guest OS could allow the desired applications to be run, without altering the host OS.

• Evaluating an alternate operating system: The new OS could be run within a VM, without altering the host OS.

• Server virtualization: Multiple virtual servers could be run on a single physical server, in order to more fully utilize the hardware resources of the physical server.

• Duplicating specific environments: A virtual machine could, depending on the virtualization software used, be duplicated and installed on multiple hosts, or restored to a previously backed-up system state.

• Creating a protected environment: if a guest OS running on a VM becomes damaged in a way that is difficult to repair, such as may occur when studying malware or installing badly behaved software, the VM may simply be discarded without harm to the host system, and a clean copy used next time.

2.3 Full virtualization

Main article: Full virtualization

[Figure: Logical diagram of full virtualization — hardware (CPU, memory, NIC, disk), a hypervisor (Hyper-V, Xen, ESX Server), and several guest OSes, each with its own virtual hardware and applications.]

In full virtualization, the virtual machine simulates enough hardware to allow an unmodified “guest” OS (one designed for the same instruction set) to be run in isolation. This approach was pioneered in 1966 with the IBM CP-40 and CP-67, predecessors of the VM family.

Examples outside the mainframe field include Parallels Workstation, Parallels Desktop for Mac, VirtualBox, Virtual Iron, Oracle VM, Virtual PC, Virtual Server, Hyper-V, VMware Workstation, VMware Server (discontinued, formerly called GSX Server), VMware ESXi, QEMU, Adeos, Mac-on-Linux, Win4BSD, Win4Lin Pro, and Egenera vBlade technology.

2.4 Hardware-assisted virtualization

Main article: Hardware-assisted virtualization

In hardware-assisted virtualization, the hardware provides architectural support that facilitates building a virtual machine monitor and allows guest OSes to be run in isolation.[7] Hardware-assisted virtualization was first introduced on the IBM System/370 in 1972, for use with VM/370, the first virtual machine operating system.

In 2005 and 2006, Intel and AMD provided additional hardware to support virtualization. Sun Microsystems (now Oracle Corporation) added similar features in their UltraSPARC T-Series processors in 2005. Examples of virtualization platforms adapted to such hardware include KVM, VMware Workstation, VMware Fusion, Hyper-V, Windows Virtual PC, Xen, Parallels Desktop for Mac, Oracle VM Server for SPARC, VirtualBox and Parallels Workstation.

In 2006, first-generation 32- and 64-bit x86 hardware support was found to rarely offer performance advantages over software virtualization.[8]
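Whether a given x86 host has this kind of hardware support can be checked from user space. The following Python sketch (an illustration for Linux hosts, not something prescribed by the text above) looks for the Intel VT-x and AMD-V CPU flags, which appear as “vmx” and “svm” in /proc/cpuinfo.

    def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
        """Return the set of hardware-virtualization CPU flags found on a Linux host."""
        wanted = {"vmx", "svm"}   # vmx = Intel VT-x, svm = AMD-V
        found = set()
        with open(cpuinfo_path) as cpuinfo:
            for line in cpuinfo:
                if line.startswith("flags"):
                    found |= wanted.intersection(line.split())
        return found

    if __name__ == "__main__":
        flags = hardware_virtualization_flags()
        if flags:
            print("Hardware-assisted virtualization available:", ", ".join(sorted(flags)))
        else:
            print("No vmx/svm flag found; only software virtualization is possible.")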

2.5 Paravirtualization

Main article: Paravirtualization

In paravirtualization, the virtual machine does not necessarily simulate hardware, but instead (or in addition) offers a special API that can only be used by modifying the “guest” OS. For this to be possible, the “guest” OS’s source code must be available. If the source code is available, it is sufficient to replace sensitive instructions with calls to VMM APIs (e.g., “cli” with “vm_handle_cli()”), then recompile the OS and use the new binaries. This system call to the hypervisor is called a “hypercall” in TRANGO and Xen; it is implemented via a DIAG (“diagnose”) hardware instruction in IBM’s CMS under VM (which was the origin of the term hypervisor). Examples include IBM’s LPARs,[9] Win4Lin 9x, Sun’s Logical Domains, z/VM, and TRANGO.

2.6 Operating-system-level virtualization

Main article: Operating-system-level virtualization

In operating-system-level virtualization, a physical server is virtualized at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server. The “guest” operating system environments share the same running instance of the operating system as the host system. Thus, the same operating system kernel is also used to implement the “guest” environments, and applications running in a given “guest” environment view it as a stand-alone system. The pioneer implementation was FreeBSD jails; other examples include Docker, Solaris Containers, OpenVZ, Linux-VServer, LXC, AIX Workload Partitions, Parallels Virtuozzo Containers, and iCore Virtual Accounts.
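A quick way to observe the kernel sharing described above is to compare the kernel version reported inside a container with the one on the host. The Python sketch below assumes a Linux host with Docker installed and the public alpine image available; these tool choices are illustrative, not part of the text above.

    import platform
    import subprocess

    def compare_kernels():
        """Show that a container reports the host's kernel, unlike a full virtual machine."""
        host_kernel = platform.release()
        container_kernel = subprocess.run(
            ["docker", "run", "--rm", "alpine", "uname", "-r"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print("host kernel:     ", host_kernel)
        print("container kernel:", container_kernel)
        print("same kernel instance:", host_kernel == container_kernel)

    if __name__ == "__main__":
        compare_kernels()

A hardware-virtualized guest, by contrast, boots its own kernel, so the two version strings would normally differ.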

2.7 Hardware virtualization disaster recovery

A disaster recovery (DR) plan is good business practice for a hardware virtualization platform solution. DR of a virtualization environment can ensure a high rate of availability during a wide range of situations that disrupt normal business operations. Continued operation of VMs is mission critical, and a DR plan can compensate for concerns of hardware performance and maintenance requirements. A hardware virtualization DR environment involves hardware and software protection solutions based on business continuity needs, which include the methods described below.[10][11]

Tape backup for software data long-term archival needs. This common method can be used to store data offsite but can be a difficult and lengthy process to recover your data. Tape backup data is only as good as the latest copy stored. Tape backup methods will require a backup device and ongoing storage material.

Whole-file and application replication. The implementation of this method will require control software and storage capacity for application and data file storage replication, typically on the same site. The data is replicated on a different disk partition or separate disk device and can be a scheduled activity for most servers and is implemented more for database-type applications.

Hardware and software redundancy. Ensures the highest level of disaster recovery protection for a hardware virtualization solution, by providing duplicate hardware and software replication in two distinct geographic areas.[12]

2.8 See also

• Application virtualization
• Comparison of platform virtualization software
• Desktop virtualization
• Dynamic infrastructure
• Hyperjacking
• Instruction set simulator
• Popek and Goldberg virtualization requirements
• Physicalization
• Virtual appliance
• Virtualization for aggregation
• Workspace virtualization

2.9 References

[1] Turban, E; King, D.; Lee, J.; Viehland, D. (2008). “19”. Electronic Commerce: A Managerial Perspective (PDF) (5th ed.). Prentice-Hall. p. 27.

[2] “Virtualization in education” (PDF). IBM. October 2007. Retrieved 6 July 2010.

[3] Creasy, R.J. (1981). “The Origin of the VM/370 Time-sharing System” (PDF). IBM. Retrieved 26 February 2013.

[4] Profiling Energy Usage for Efficient Consumption; Rajesh Chheda, Dan Shookowsky, Steve Stefanovich, and Joe Toscano.

[5] VMware server consolidation overview.

[6] Examining VMware. Dr. Dobb’s Journal, August 2000. By Jason Nieh and Ozgur Can Leonard.

[7] Uhlig, R. et al. “Intel virtualization technology”. Computer, vol. 38, no. 5, pp. 48–56, May 2005.

[8] A Comparison of Software and Hardware Techniques for x86 Virtualization, Keith Adams and Ole Agesen, VMware, ASPLOS ’06, 21–25 October 2006, San Jose, California, USA. “Surprisingly, we find that the first-generation hardware support rarely offers performance advantages over existing software techniques. We ascribe this situation to high VMM/guest transition costs and a rigid programming model that leaves little room for software flexibility in managing either the frequency or cost of these transitions.”

[9] Borden, T.L. et al. “Multiple Operating Systems on One Processor Complex”. IBM Systems Journal, vol. 28, no. 1, pp. 104–123, 1989.

[10] “The One Essential Guide to Disaster Recovery: How to Ensure IT and Business Continuity” (PDF). Vision Solutions, Inc. 2010.

[11] Wold, G (2008). “Disaster Recovery Planning Process”.

[12] “Disaster Recovery Virtualization Protecting Production Systems Using VMware Virtual Infrastructure and Double-Take” (PDF). VMware. 2010.

2.10 External links


• An introduction to Virtualization
• Xen and the Art of Virtualization, ACM, 2003, by a group of authors
• Linux Virtualization Software

• Zoppis, Bruno (August 27, 2007) [1st pub. LinuxDevices:2007]. “Using a hypervisor to reconcile GPL and proprietary embedded code”. Linux Gizmos. Archived from the original on 2013-12-24.
cile GPL and proprietary embedded code”. Linux
Gizmos. Archived from the original on 2013-12-24.
Chapter 3

I/O virtualization

Input/output (I/O) virtualization is a methodology to simplify management, lower costs and improve performance of servers in enterprise environments. I/O virtualization environments are created by abstracting the upper layer protocols from the physical connections.[1]

The technology enables one physical adapter card to appear as multiple virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs).[2] Virtual NICs and HBAs function as conventional NICs and HBAs, and are designed to be compatible with existing operating systems, hypervisors, and applications. To networking resources (LANs and SANs), they appear as normal cards.

In the physical view, virtual I/O replaces a server’s multiple I/O cables with a single cable that provides a shared transport for all network and storage connections. That cable (or commonly two cables for redundancy) connects to an external device, which then provides connections to the data center networks.[2]
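One widely used mechanism for making a single physical adapter appear as several vNICs is SR-IOV, in which the PCI device exposes virtual functions that can be handed to individual virtual machines. The Python sketch below is an illustration of that idea on a Linux host and is not a mechanism taken from this chapter’s references: it reads and writes the standard sriov_totalvfs / sriov_numvfs sysfs attributes for a NIC, and it needs root privileges and SR-IOV-capable hardware to actually do anything. The interface name and count are placeholders.

    from pathlib import Path

    def enable_virtual_functions(interface="eth0", count=4):
        """Expose `count` virtual functions (vNICs) on an SR-IOV capable NIC."""
        device = Path(f"/sys/class/net/{interface}/device")
        total_vfs_attr = device / "sriov_totalvfs"
        num_vfs_attr = device / "sriov_numvfs"

        if not total_vfs_attr.exists():
            raise RuntimeError(f"{interface} does not support SR-IOV")

        total = int(total_vfs_attr.read_text())
        if count > total:
            raise ValueError(f"{interface} supports at most {total} virtual functions")

        # The kernel requires the count to be reset to 0 before changing it.
        num_vfs_attr.write_text("0")
        num_vfs_attr.write_text(str(count))
        print(f"{interface}: exposed {count} of {total} possible virtual functions")

    if __name__ == "__main__":
        enable_virtual_functions()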

3.1 Background

Server I/O is a critical component to successful and effective server deployments, particularly with virtualized servers. To accommodate multiple applications, virtualized servers demand more network bandwidth and connections to more networks and storage. According to a survey, 75% of virtualized servers require 7 or more I/O connections per device, and are likely to require more frequent I/O reconfigurations.[3]

In virtualized data centers, I/O performance problems are caused by running numerous virtual machines (VMs) on one server. In early server virtualization implementations, the number of virtual machines per server was typically limited to six or less. But it was found that servers could safely run seven or more applications per server, often using 80 percent of total server capacity, an improvement over the average 5 to 15 percent utilized with non-virtualized servers.

However, increased utilization created by virtualization placed a significant strain on the server’s I/O capacity. Network traffic, storage traffic, and inter-server communications combine to impose increased loads that may overwhelm the server’s channels, leading to backlogs and idle CPUs as they wait for data.[4]

Virtual I/O addresses performance bottlenecks by consolidating I/O to a single connection whose bandwidth ideally exceeds the I/O capacity of the server itself, thereby ensuring that the I/O link itself is not a bottleneck. That bandwidth is then dynamically allocated in real time across multiple virtual connections to both storage and network resources. In I/O intensive applications, this approach can help increase both VM performance and the potential number of VMs per server.[2]

Virtual I/O systems that include quality of service (QoS) controls can also regulate I/O bandwidth to specific virtual machines, thus ensuring predictable performance for critical applications. QoS thus increases the applicability of server virtualization for both production server and end-user applications.[4]

3.2 Benefits

• Management agility: By abstracting upper layer protocols from physical connections, I/O virtualization provides greater flexibility, greater utilization and faster provisioning when compared to traditional NIC and HBA card architectures.[1] Virtual I/O technologies can be dynamically expanded and contracted (versus traditional physical I/O channels that are fixed and static), and usually replace multiple network and storage connections to each server with a single cable that carries multiple traffic types.[5] Because configuration changes are implemented in software rather than hardware, time periods to perform common data center tasks – such as adding servers, storage or network connectivity – can be reduced from days to minutes.[6]

• Reduced cost: Virtual I/O lowers costs and enables simplified server management by using fewer cards, cables, and switch ports, while still achieving full network I/O performance.[7] It also simplifies data center network design by consolidating and better utilizing LAN and SAN network switches.[8]

• Reduced cabling: In a virtualized I/O environment, only one cable is needed to connect servers to both storage and network traffic. This can reduce data center server-to-network and server-to-storage cabling within a single server rack by more than 70 percent, which equates to reduced cost, complexity, and power requirements. Because the high-speed interconnect is dynamically shared among various requirements, it frequently results in increased performance as well.[8]

• Increased density: I/O virtualization increases the practical density of I/O by allowing more connections to exist within a given space. This in turn enables greater utilization of dense 1U high servers and blade servers that would otherwise be I/O constrained.

Blade server chassis enhance density by packaging many servers (and hence many I/O connections) in a small physical space. Virtual I/O consolidates all storage and network connections to a single physical interconnect, which eliminates any physical restrictions on port counts. Virtual I/O also enables software-based configuration management, which simplifies control of the I/O devices. The combination allows more I/O ports to be deployed in a given space, and facilitates the practical management of the resulting environment.[9]

3.3 See also

• Intel VT-d and AMD-Vi
• Intel VT-x
• PCI-SIG I/O virtualization
• x86 virtualization

3.4 References

[1] Scott Lowe (2008-04-21). “Virtualization strategies > Benefiting from I/O virtualization”. TechTarget. Retrieved 2009-11-04.

[2] Scott Hanson. “Strategies to Optimize Virtual Machine Connectivity” (PDF). Dell. Retrieved 2009-11-04.

[3] Keith Ward (March 31, 2008). “New Things to Virtualize”. Virtualization Review. virtualizationreview.com. Retrieved 2009-11-04.

[4] Charles Babcock (May 16, 2008). “Virtualization’s Promise And Problems”. InformationWeek. Retrieved 2009-11-04.

[5] “Tech Road Map: Keep An Eye On Virtual I/O”. Network Computing. June 8, 2009. Retrieved 2009-11-04.

[6] “PrimaCloud offers new cloud computing service built on Xsigo’s Virtual I/O”. InfoWorld. July 20, 2009. Retrieved 2009-11-04.

[7] “I/O Virtualization (IOV) & its uses in the network infrastructure: Part 1”. Embedded.com. June 1, 2009. Retrieved 2009-11-04.

[8] “Unified Fabric Options Are Finally Here”. Lippis Report: 126. May 2009. Retrieved 2009-11-04.

[9] “I/O Virtualization for Blade Servers”. Windows IT Pro. Retrieved 2009-11-04.
Chapter 4

Hypervisor

A hypervisor or virtual machine monitor (VMM) is computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and macOS instances can all run on a single physical x86 machine. This contrasts with operating-system-level virtualization, where all instances (usually called containers) must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.

The term hypervisor is a variant of supervisor, a traditional term for the kernel of an operating system: the hypervisor is the supervisor of the supervisor,[1] with hyper- used as a stronger variant of super-.[lower-alpha 1] The term dates to circa 1970;[2] in the earlier CP/CMS (1967) system the term Control Program was used instead.

4.1 Classification

[Figure: Type-1 and type-2 hypervisors]

In their 1974 article, Formal Requirements for Virtualizable Third Generation Architectures, Gerald J. Popek and Robert P. Goldberg classified two types of hypervisor:[3]

Type-1, native or bare-metal hypervisors. These hypervisors run directly on the host’s hardware to control the hardware and to manage guest operating systems. For this reason, they are sometimes called bare metal hypervisors. The first hypervisors, which IBM developed in the 1960s, were native hypervisors.[4] These included the test software SIMMON and the CP/CMS operating system (the predecessor of IBM’s z/VM). Modern equivalents include Xen, Oracle VM Server for SPARC, Oracle VM Server for x86, Microsoft Hyper-V and VMware ESX/ESXi.

Type-2 or hosted hypervisors. These hypervisors run on a conventional operating system (OS) just as other computer programs do. A guest operating system runs as a process on the host. Type-2 hypervisors abstract guest operating systems from the host operating system. VMware Workstation, VMware Player, VirtualBox, Parallels Desktop for Mac and QEMU are examples of type-2 hypervisors.

However, the distinction between these two types is not necessarily clear. Linux’s Kernel-based Virtual Machine (KVM) and FreeBSD's bhyve are kernel modules[5] that effectively convert the host operating system to a type-1 hypervisor.[6] At the same time, since Linux distributions and FreeBSD are still general-purpose operating systems, with other applications competing for VM resources, KVM and bhyve can also be categorized as type-2 hypervisors.[7]

4.2 Mainframe origins

The first hypervisors providing full virtualization were the test tool SIMMON and IBM’s one-off research CP-40 system, which began production use in January 1967, and became the first version of IBM’s CP/CMS operating system.


CP-40 ran on a S/360-40 that was modified at the IBM Cambridge Scientific Center to support Dynamic Address Translation, a key feature that allowed virtualization. Prior to this time, computer hardware had only been virtualized enough to allow multiple user applications to run concurrently (see CTSS and IBM M44/44X). With CP-40, the hardware’s supervisor state was virtualized as well, allowing multiple operating systems to run concurrently in separate virtual machine contexts.

Programmers soon re-implemented CP-40 (as CP-67) for the IBM System/360-67, the first production computer system capable of full virtualization. IBM first shipped this machine in 1966; it included page-translation-table hardware for virtual memory, and other techniques that allowed a full virtualization of all kernel tasks, including I/O and interrupt handling. (Note that its “official” operating system, the ill-fated TSS/360, did not employ full virtualization.) Both CP-40 and CP-67 began production use in 1967. CP/CMS was available to IBM customers from 1968 to early 1970s, in source code form without support.

CP/CMS formed part of IBM’s attempt to build robust time-sharing systems for its mainframe computers. By running multiple operating systems concurrently, the hypervisor increased system robustness and stability: Even if one operating system crashed, the others would continue working without interruption. Indeed, this even allowed beta or experimental versions of operating systems—or even of new hardware[8]—to be deployed and debugged, without jeopardizing the stable main production system, and without requiring costly additional development systems.

IBM announced its System/370 series in 1970 without any virtualization features, but added virtual memory support in the August 1972 Advanced Function announcement. Virtualization has been featured in all successor systems (all modern-day IBM mainframes, such as the zSeries line, retain backward compatibility with the 1960s-era IBM S/360 line). The 1972 announcement also included VM/370, a reimplementation of CP/CMS for the S/370. Unlike CP/CMS, IBM provided support for this version (though it was still distributed in source code form for several releases). VM stands for Virtual Machine, emphasizing that all, and not just some, of the hardware interfaces are virtualized. Both VM and CP/CMS enjoyed early acceptance and rapid development by universities, corporate users, and time-sharing vendors, as well as within IBM. Users played an active role in ongoing development, anticipating trends seen in modern open source projects. However, in a series of disputed and bitter battles, time-sharing lost out to batch processing through IBM political infighting, and VM remained IBM’s “other” mainframe operating system for decades, losing to MVS. It enjoyed a resurgence of popularity and support from 2000 as the z/VM product, for example as the platform for Linux for zSeries.

As mentioned above, the VM control program includes a hypervisor-call handler that intercepts DIAG (“Diagnose”) instructions used within a virtual machine. This provides fast-path non-virtualized execution of file-system access and other operations (DIAG is a model-dependent privileged instruction, not used in normal programming, and thus is not virtualized. It is therefore available for use as a signal to the “host” operating system). When first implemented in CP/CMS release 3.1, this use of DIAG provided an operating system interface that was analogous to the System/360 Supervisor Call instruction (SVC), but that did not require altering or extending the system’s virtualization of SVC.

In 1985 IBM introduced the PR/SM hypervisor to manage logical partitions (LPAR).

4.3 Operating system support

Several factors led to a resurgence around 2005 in the use of virtualization technology among Unix, Linux, and other Unix-like operating systems:[9]

• Expanding hardware capabilities, allowing each single machine to do more simultaneous work

• Efforts to control costs and to simplify management through consolidation of servers

• The need to control large multiprocessor and cluster installations, for example in server farms and render farms

• The improved security, reliability, and device independence possible from hypervisor architectures

• The ability to run complex, OS-dependent applications in different hardware or OS environments

Major Unix vendors, including Sun Microsystems, HP, IBM, and SGI, have been selling virtualized hardware since before 2000. These have generally been large, expensive systems (in the multimillion-dollar range at the high end), although virtualization has also been available on some low- and mid-range systems, such as IBM’s pSeries servers, Sun/Oracle's T-series CoolThreads servers and HP Superdome series machines.

Although Solaris has always been the only guest domain OS officially supported by Sun/Oracle on their Logical Domains hypervisor, as of late 2006, Linux (Ubuntu and Gentoo), and FreeBSD have been ported to run on top of the hypervisor (and can all run simultaneously on the same processor, as fully virtualized independent guest OSes). Wind River "Carrier Grade Linux" also runs on Sun’s Hypervisor.[10]
Hypervisor.[10] Full virtualization on SPARC processors proved straightforward: since its inception in the mid-1980s Sun deliberately kept the SPARC architecture clean of artifacts that would have impeded virtualization. (Compare with virtualization on x86 processors below.)[11]

HP calls its technology to host multiple OS technology on its Itanium powered systems "Integrity Virtual Machines" (Integrity VM). Itanium can run HP-UX, Linux, Windows and OpenVMS. Except for OpenVMS, to be supported in a later release, these environments are also supported as virtual servers on HP's Integrity VM platform. The HP-UX operating system hosts the Integrity VM hypervisor layer that allows for many important features of HP-UX to be taken advantage of and provides major differentiation between this platform and other commodity platforms - such as processor hotswap, memory hotswap, and dynamic kernel updates without system reboot. While it heavily leverages HP-UX, the Integrity VM hypervisor is really a hybrid that runs on bare-metal while guests are executing. Running normal HP-UX applications on an Integrity VM host is heavily discouraged, because Integrity VM implements its own memory management, scheduling and I/O policies that are tuned for virtual machines and are not as effective for normal applications. HP also provides more rigid partitioning of their Integrity and HP9000 systems by way of VPAR and nPar technology, the former offering shared resource partitioning and the latter offering complete I/O and processing isolation. The flexibility of virtual server environment (VSE) has given way to its use more frequently in newer deployments.

IBM provides virtualization partition technology known as logical partitioning (LPAR) on System/390, zSeries, pSeries and iSeries systems. For IBM's Power Systems, the POWER Hypervisor (PHYP) is a native (bare-metal) hypervisor in firmware and provides isolation between LPARs. Processor capacity is provided to LPARs in either a dedicated fashion or on an entitlement basis where unused capacity is harvested and can be re-allocated to busy workloads. Groups of LPARs can have their processor capacity managed as if they were in a "pool" - IBM refers to this capability as Multiple Shared-Processor Pools (MSPPs) and implements it in servers with the POWER6 processor. LPAR and MSPP capacity allocations can be dynamically changed. Memory is allocated to each LPAR (at LPAR initiation or dynamically) and is address-controlled by the POWER Hypervisor. For real-mode addressing by operating systems (AIX, Linux, IBM i), the POWER processors (POWER4 onwards) have designed virtualization capabilities where a hardware address-offset is evaluated with the OS address-offset to arrive at the physical memory address. Input/Output (I/O) adapters can be exclusively "owned" by LPARs or shared by LPARs through an appliance partition known as the Virtual I/O Server (VIOS).

The Power Hypervisor provides for high levels of reliability, availability and serviceability (RAS) by facilitating hot add/replace of many parts (model dependent: processors, memory, I/O adapters, blowers, power units, disks, system controllers, etc.)

Similar trends have occurred with x86/x86-64 server platforms, where open-source projects such as Xen have led virtualization efforts. These include hypervisors built on Linux and Solaris kernels as well as custom kernels. Since these technologies span from large systems down to desktops, they are described in the next section.

4.4 x86 systems

Main article: x86 virtualization

Starting in 2005, CPU vendors have added hardware virtualization assistance to their products, for example: Intel VT-x (codenamed Vanderpool) and AMD-V (codenamed Pacifica).

An alternative approach requires modifying the guest operating system to make system calls to the hypervisor, rather than executing machine I/O instructions that the hypervisor simulates. This is called paravirtualization in Xen, a "hypercall" in Parallels Workstation, and a "DIAGNOSE code" in IBM's VM. All are really the same thing, a system call to the hypervisor below. Some microkernels such as Mach and L4 are flexible enough such that "paravirtualization" of guest operating systems is possible.

4.5 Embedded systems

Embedded hypervisors, targeting embedded systems and certain real-time operating system (RTOS) environments, are designed with different requirements when compared to desktop and enterprise systems, including robustness, security and real-time capabilities. The resource-constrained nature of many embedded systems, especially battery-powered mobile systems, imposes a further requirement for small memory-size and low overhead. Finally, in contrast to the ubiquity of the x86 architecture in the PC world, the embedded world uses a wider variety of architectures and less standardized environments. Support for virtualization requires memory protection (in the form of a memory management unit or at least a memory protection unit) and a distinction between user mode and privileged mode, which rules out most microcontrollers. This still leaves x86, MIPS, ARM and PowerPC as widely deployed architectures on medium- to high-end embedded systems.[12]

As manufacturers of embedded systems usually have the source code to their operating systems, they have less need for full virtualization in this space. Instead, the performance advantages of paravirtualization make this usually the virtualization technology of choice. Nevertheless, ARM and MIPS have recently added full virtualization support as an IP option and have included it in their latest high-end processors and architecture versions, such as ARM Cortex-A15 MPCore and ARMv8 EL2.

Other differences between virtualization in server/desktop and embedded environments include requirements for efficient sharing of resources across virtual machines, high-bandwidth, low-latency inter-VM communication, a global view of scheduling and power management, and fine-grained control of information flows.[13]

4.6 Security implications

Main article: Hyperjacking

The use of hypervisor technology by malware and rootkits installing themselves as a hypervisor below the operating system, known as hyperjacking, can make them more difficult to detect because the malware could intercept any operations of the operating system (such as someone entering a password) without the anti-malware software necessarily detecting it (since the malware runs below the entire operating system). Implementation of the concept has allegedly occurred in the SubVirt laboratory rootkit (developed jointly by Microsoft and University of Michigan researchers[14]) as well as in the Blue Pill malware package. However, such assertions have been disputed by others who claim that it would be possible to detect the presence of a hypervisor-based rootkit.[15]

In 2009, researchers from Microsoft and North Carolina State University demonstrated a hypervisor-layer anti-rootkit called Hooksafe that can provide generic protection against kernel-mode rootkits.[16]

4.7 Notes

[1] super- is from Latin, meaning "above", while hyper- is from the cognate term in Ancient Greek, also meaning "above".

4.8 References

[1] Bernard Golden (2011). Virtualization For Dummies. p. 54.

[2] "How did the term "hypervisor" come into use?".

[3] Popek, Gerald J.; Goldberg, Robert P. (1974). "Formal requirements for virtualizable third generation architectures". Communications of the ACM. 17 (7): 412–421. doi:10.1145/361011.361073. Retrieved 2015-03-01.

[4] Meier, Shannon (2008). "IBM Systems Virtualization: Servers, Storage, and Software" (PDF). pp. 2, 15, 20. Retrieved 2015-12-22.

[5] Dexter, Michael. "Hands-on bhyve". CallForTesting.org. Retrieved 2013-09-24.

[6] Graziano, Charles (2011). "A performance analysis of Xen and KVM hypervisors for hosting the Xen Worlds Project". Graduate Theses and Dissertations. Iowa State University. Retrieved 2013-01-29.

[7] Pariseau, Beth (15 April 2011). "KVM reignites Type 1 vs. Type 2 hypervisor debate". SearchServerVirtualization. TechTarget. Retrieved 2013-01-29.

[8] See History of CP/CMS for virtual-hardware simulation in the development of the System/370.

[9] Loftus, Jack (19 December 2005). "Xen virtualization quickly becoming open source 'killer app'". TechTarget. Retrieved 26 October 2015.

[10] "Wind River To Support Sun's Breakthrough UltraSPARC T1 Multithreaded Next-Generation Processor". Wind River Newsroom (Press release). Alameda, California. 1 November 2006. Retrieved 26 October 2015.

[11] Fritsch, Lothar; Husseiki, Rani; Alkassar, Ammar. Complementary and Alternative Technologies to Trusted Computing (TC-Erg./-A.), Part 1. A study on behalf of the German Federal Office for Information Security (BSI) (PDF) (Report).

[12] Strobl, Marius (2013). Virtualization for Reliable Embedded Systems. Munich: GRIN Publishing GmbH. pp. 5–6. ISBN 978-3-656-49071-5. Retrieved 2015-03-07.

[13] Gernot Heiser (April 2008). "The role of virtualization in embedded systems". Proc. 1st Workshop on Isolation and Integration in Embedded Systems (IIES'08). pp. 11–16.

[14] "SubVirt: Implementing malware with virtual machines" (PDF). University of Michigan, Microsoft. 2006-04-03. Retrieved 2008-09-15.

[15] "Debunking Blue Pill myth". Virtualization.info. Retrieved 2010-12-10.

[16] Wang, Zhi; Jiang, Xuxian; Cui, Weidong; Ning, Peng (11 August 2009). "Countering Kernel Rootkits with Lightweight Hook Protection" (PDF). Proceedings of the 16th ACM Conference on Computer and Communications Security. CCS '09. Chicago, Illinois, USA: ACM. ISBN 978-1-60558-894-0. doi:10.1145/1653662.1653728. Retrieved 2009-11-11.

4.9 External links


• Hypervisors and Virtual Machines: Implementation
Insights on the x86 Architecture
• A Performance Comparison of Hypervisors, VMware
Chapter 5

Virtual machine

In computing, a virtual machine (VM) is an emulation of a 5.1.1 System virtual machines


computer system. Virtual machines are based on computer
architectures and provide functionality of a physical com- See also: Hardware virtualization and Comparison of
puter. Their implementations may involve specialized hard- platform virtualization software
ware, software, or a combination.
There are different kinds of virtual machines, each with dif- The desire to run multiple operating systems was the ini-
ferent functions: tial motive for virtual machines, so as to allow time-sharing
among several single-tasking operating systems. In some
• System virtual machines (also termed full virtualiza- respects, a system virtual machine can be considered a
tion VMs) provide a substitute for a real machine. generalization of the concept of virtual memory that his-
They provide functionality needed to execute entire torically preceded it. IBM’s CP/CMS, the first systems
operating systems. A hypervisor uses native execu- to allow full virtualization, implemented time sharing by
tion to share and manage hardware, allowing for mul- providing each user with a single-user operating system,
tiple environments which are isolated from one an- the Conversational Monitor System (CMS). Unlike virtual
other, yet exist on the same physical machine. Mod- memory, a system virtual machine entitled the user to write
ern hypervisors use hardware-assisted virtualization, privileged instructions in their code. This approach had cer-
virtualization-specific hardware, primarily from the tain advantages, such as adding input/output devices not al-
host CPUs. lowed by the standard system.[2]
As technology evolves virtual memory for purposes of vir-
• Process virtual machines are designed to execute com-
tualization, new systems of memory overcommitment may
puter programs in a platform-independent environ-
be applied to manage memory sharing among multiple vir-
ment.
tual machines on one computer operating system. It may be
possible to share memory pages that have identical contents
Some virtual machines, such as QEMU, are designed to also among multiple virtual machines that run on the same phys-
emulate different architectures and allow execution of soft- ical machine, what may result in mapping them to the same
ware applications and operating systems written for another physical page by a technique termed Kernel SamePage
CPU or architecture. Operating-system-level virtualization Merging. This is especially useful for read-only pages, such
allows the resources of a computer to be partitioned via the as those holding code segments, which is the case for mul-
kernel's support for multiple isolated user space instances, tiple virtual machines running the same or similar software,
which are usually called containers and may look and feel software libraries, web servers, middleware components,
like real machines to the end users. etc. The guest operating systems do not need to be com-
pliant with the host hardware, thus making it possible to
run different operating systems on the same computer (e.g.,
5.1 Definitions Windows, Linux, or prior versions of an operating system)
to support future software.[3]
A VM or virtual machine was originally defined by Popek The use of virtual machines to support separate guest op-
and Goldberg as “an efficient, isolated duplicate of a erating systems is popular in regard to embedded systems.
real computer machine.” Current use includes virtual ma- A typical use would be to run a real-time operating system
chines which have no direct correspondence to any real simultaneously with a preferred complex operating system,
hardware.[1] such as Linux or Windows. Another use would be for novel


and unproven software still in the developmental stage, so it been often generally called p-code machines. In addition
runs inside a sandbox. Virtual machines have other advan- to being an intermediate language, Pascal p-code was also
tages for operating system development, and may include executed directly by an interpreter implementing the virtual
improved debugging access and faster reboots.[4] machine, notably in UCSD Pascal (1978); this influenced
Multiple VMs running their own guest operating system are later interpreters, notably the Java virtual machine (JVM).
frequently engaged for server consolidation.[5] Another early example was SNOBOL4 (1967), which was
written in the SNOBOL Implementation Language (SIL),
an assembly language for a virtual machine, which was then
targeted to physical machines by transpiling to their na-
5.2 History tive assembler via a macro assembler.[9] Macros have since
fallen out of favor, however, so this approach has been
See also: History of CP/CMS and Timeline of virtualiza- less influential. Process virtual machines were a popular
tion development approach to implementing early microcomputer software,
including Tiny BASIC and adventure games, from one-
off implementations such as Pyramid 2000 to a general-
Both system virtual machines and process virtual machines
purpose engine like Infocom's z-machine, which Graham
date to the 1960s, and continue to be areas of active devel-
Nelson argues is “possibly the most portable virtual ma-
opment.
chine ever created”.[10]
System virtual machines grew out of time-sharing, as no-
Significant advances occurred in the implementation
tably implemented in the Compatible Time-Sharing System
of Smalltalk−80,[11] particularly the Deutsch/Schiffmann
(CTSS). Time-sharing allowed multiple users to use a com-
implementation[12] which pushed just-in-time (JIT) com-
puter concurrently: each program appeared to have full ac-
pilation forward as an implementation approach that uses
cess to the machine, but only one program was executed
process virtual machine.[13] Later notable Smalltalk VMs
at the time, with the system switching between programs
were VisualWorks, the Squeak Virtual Machine[14] and
in time slices, saving and restoring state each time. This
Strongtalk.[15] A related language that produced a lot of
evolved into virtual machines, notably via IBM’s research
virtual machine innovation was the Self programming
systems: the M44/44X, which used partial virtualization,
language,[16] which pioneered adaptive optimization[17] and
and the CP-40 and SIMMON, which used full virtualization
generational garbage collection. These techniques proved
and were early examples of hypervisors. The first widely
commercially successful in 1999 in the HotSpot Java virtual
available virtual machine architecture was the CP-67/CMS;
machine.[18] Other innovations include having a register-
see History of CP/CMS for details. An important distinc-
based virtual machine, to better match the underlying hard-
tion was between using multiple virtual machines on one
ware, rather than a stack-based virtual machine, which is a
host system for time-sharing, as in M44/44X and CP-40,
closer match for the programming language; in 1995, this
and using one virtual machine on a host system for prototyp-
was pioneered by the Dis virtual machine for the Limbo
ing, as in SIMMON. Emulators, with hardware emulation
language.
of earlier systems for compatibility, date back to the IBM
360 in 1963,[6][7] while the software emulation (then-called
“simulation”) predates it.
Process virtual machines arose originally as abstract plat- 5.3 Full virtualization
forms for an intermediate language used as the intermediate
representation of a program by a compiler; early examples
date to around 1966. An early 1966 example was the O- Main article: Full virtualization
code machine, a virtual machine which executes O-code In full virtualization, the virtual machine simulates enough
(object code) emitted by the front end of the BCPL com- hardware to allow an unmodified “guest” OS (one designed
piler. This abstraction allowed the compiler to be easily for the same instruction set) to be run in isolation. This
ported to a new architecture by implementing a new back approach was pioneered in 1966 with the IBM CP-40 and
end that took the existing O-code and compiled it to ma- CP-67, predecessors of the VM family.
chine code for the underlying physical machine. The Euler Examples outside the mainframe field include Parallels
language used a similar design, with the intermediate lan- Workstation, Parallels Desktop for Mac, VirtualBox,
guage named P (portable).[8] This was popularized around Virtual Iron, Oracle VM, Virtual PC, Virtual Server,
1970 by Pascal, notably in the Pascal-P system (1973) and Hyper-V, VMware Workstation, VMware Server (dis-
Pascal-S compiler (1975), in which it was termed p-code continued, formerly called GSX Server), VMware ESXi,
and the resulting machine as a p-code machine. This has QEMU, Adeos, Mac-on-Linux, Win4BSD, Win4Lin Pro,
been influential, and virtual machines in this sense have and Egenera vBlade technology.
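Returning to the process virtual machines discussed above (p-code, the JVM, the Z-machine): at their core they are loops that fetch and dispatch instructions of a machine defined by the language implementer rather than by hardware. The fragment below is only a toy sketch of that idea; its three-instruction stack-based bytecode is invented for illustration and corresponds to no real virtual machine.

```c
/* Toy stack-based process VM: push constants, add, print.
 * The instruction set is hypothetical, invented only to illustrate the
 * fetch-dispatch loop at the heart of p-code style virtual machines. */
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int *code)
{
    int stack[64];
    int sp = 0;                 /* next free stack slot */

    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];           break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];   break;
        case OP_PRINT: printf("%d\n", stack[--sp]);        break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* Equivalent to: print(2 + 40) */
    const int program[] = { OP_PUSH, 2, OP_PUSH, 40, OP_ADD, OP_PRINT, OP_HALT };
    run(program);               /* prints 42 */
    return 0;
}
```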

5.4 Operating-system-level virtual-


Hardware (CPU, Memory, NIC, Disk)
Hypervisor (Hyper-V, Xen, ESX Server) ization
Application Application Application
Main article: Operating-system-level virtualization

Guest OS Guest OS Guest OS In operating-system-level virtualization, a physical server


is virtualized at the operating system level, enabling mul-
Virtual Hardware Virtual Hardware Virtual Hardware tiple isolated and secure virtualized servers to run on a sin-
gle physical server. The “guest” operating system environ-
ments share the same running instance of the operating sys-
tem as the host system. Thus, the same operating system
kernel is also used to implement the “guest” environments,
and applications running in a given “guest” environment
view it as a stand-alone system. The pioneer implemen-
tation was FreeBSD jails; other examples include Docker,
Solaris Containers, OpenVZ, Linux-VServer, LXC, AIX
Workload Partitions, Parallels Virtuozzo Containers, and
iCore Virtual Accounts.
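On Linux, the kernel features behind several of the container systems listed above (Docker, LXC and similar) are namespaces and control groups. The sketch below shows just one ingredient, a UTS namespace, which gives the calling process a private hostname; it needs root (or CAP_SYS_ADMIN) to run, and it is nowhere near a complete container.

```c
/* Minimal sketch of one container building block: a private UTS namespace.
 * After unshare(CLONE_NEWUTS), hostname changes are invisible to the host.
 * Requires CAP_SYS_ADMIN (run as root). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    if (unshare(CLONE_NEWUTS) != 0) {       /* new UTS namespace for this process */
        perror("unshare(CLONE_NEWUTS)");
        return 1;
    }

    const char *name = "container-demo";    /* hypothetical hostname for the demo */
    if (sethostname(name, strlen(name)) != 0) {
        perror("sethostname");
        return 1;
    }

    char buf[64];
    gethostname(buf, sizeof buf);
    printf("hostname inside the namespace: %s\n", buf);  /* the host's own name is unchanged */
    return 0;
}
```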

5.5 See also


• Amazon Machine Image
Logical diagram of full virtualization.
• Linux Containers

• Native development kit

• Storage hypervisor
5.3.1 Hardware-assisted virtualization
• Universal Turing machine

Main article: Hardware-assisted virtualization • Virtual appliance

• Virtual backup appliance


In hardware-assisted virtualization, the hardware provides
architectural support that facilitates building a virtual • Virtual disk image
machine monitor and allows guest OSes to be run in
• Virtual machine escape
isolation.[19] Hardware-assisted virtualization was first in-
troduced on the IBM System/370 in 1972, for use with
VM/370, the first virtual machine operating system.
In 2005 and 2006, Intel and AMD provided additional
5.6 References
hardware to support virtualization. Sun Microsystems
[1] Smith, James; Nair, Ravi (2005). “The Architecture of Vir-
(now Oracle Corporation) added similar features in their
tual Machines”. Computer. IEEE Computer Society. 38 (5):
UltraSPARC T-Series processors in 2005. Examples of
32–38. doi:10.1109/MC.2005.173.
virtualization platforms adapted to such hardware include
KVM, VMware Workstation, VMware Fusion, Hyper-V, [2] Smith and Nair, pp. 395–396
Windows Virtual PC, Xen, Parallels Desktop for Mac,
Oracle VM Server for SPARC, VirtualBox and Parallels [3] Oliphant, Patrick. “Virtual Machines”. VirtualComputing.
Workstation. Archived from the original on 2016-07-29. Retrieved 23
September 2015. Some people use that capability to set up a
In 2006, first-generation 32- and 64-bit x86 hardware sup- separate virtual machine running Windows on a Mac, giving
port was found to rarely offer performance advantages over them access to the full range of applications available for
software virtualization.[20] both platforms.
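As a user-space illustration of the hardware support discussed in the hardware-assisted virtualization paragraph above, the sketch below queries CPUID for the Intel VT-x and AMD-V feature bits. It only checks that the CPU advertises the extensions, not that firmware has left them enabled, and it assumes an x86 machine with a GCC or Clang toolchain.

```c
/* Check whether the CPU advertises hardware virtualization extensions.
 * VT-x: CPUID leaf 1, ECX bit 5 (VMX). AMD-V: CPUID leaf 0x80000001, ECX bit 2 (SVM).
 * x86-only sketch; firmware may still have the feature disabled. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        puts("Intel VT-x (VMX) reported by CPUID");

    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        puts("AMD-V (SVM) reported by CPUID");

    return 0;
}
```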

[4] “Super Fast Server Reboots – Another reason Virtualization [18] Paleczny, Michael; Vick, Christopher; Click, Cliff (2001).
rocks”. vmwarez.com. May 9, 2006. Archived from the “The Java HotSpot server compiler”. Proceedings of the
original on 2006-06-14. Retrieved 2013-06-14. Java Virtual Machine Research and Technology Symposium
on Java Virtual Machine Research and Technology Sympo-
[5] “Server Consolidation and Containment With Virtual Infras- sium – Volume 1. Monterey, California: USENIX Associa-
tructure” (PDF). VMware. 2007. Retrieved 2015-09-29. tion.
[6] Pugh, Emerson W. (1995). Building IBM: Shaping an Indus- [19] Uhlig, R. et al.; “Intel virtualization technology,” Computer
try and Its Technology. MIT. p. 274. ISBN 0-262-16147-8. , vol.38, no.5, pp. 48–56, May 2005
[7] Pugh, Emerson W.; et al. (1991). IBM’s 360 and Early 370 [20] A Comparison of Software and Hardware Techniques for
Systems. MIT. ISBN 0-262-16123-0. pages 160–161 x86 Virtualization, Keith Adams and Ole Agesen, VMWare,
ASPLOS’06 21–25 October 2006, San Jose, California,
[8] Wirth, N.; Weber, H. (1966). EULER: a generalization of USA “Surprisingly, we find that the first-generation hard-
ALGOL, and its formal definition: Part II, Communications ware support rarely offers performance advantages over ex-
of the Association for Computing Machinery, Vol.9, No.2, isting software techniques. We ascribe this situation to high
pp.89–99. New York: ACM. VMM/guest transition costs and a rigid programming model
that leaves little room for software flexibility in managing
[9] Griswold, Ralph E. The Macro Implementation of SNOBOL4.
either the frequency or cost of these transitions.”
San Francisco, CA: W. H. Freeman and Company, 1972
(ISBN 0-7167-0447-1), Chapter 1.

[10] Nelson, Graham. “About Interpreters”. Inform website. Re- 5.7 Further reading
trieved 2009-11-07.

[11] Goldberg, Adele; Robson, David (1983). Smalltalk-80: The • James E. Smith, Ravi Nair, Virtual Machines: Ver-
Language and its Implementation. Addison-Wesley Series in satile Platforms For Systems And Processes, Morgan
Computer Science. Addison-Wesley. ISBN 0-201-11371- Kaufmann, May 2005, ISBN 1-55860-910-5, 656
6. pages (covers both process and system virtual ma-
chines)
[12] Deutsch, L. Peter; Schiffman, Allan M. (1984). “Efficient
implementation of the Smalltalk-80 system”. POPL. • Craig, Iain D. Virtual Machines. Springer, 2006, ISBN
Salt Lake City, Utah: ACM. ISBN 0-89791-125-3. 1-85233-969-1, 269 pages (covers only process virtual
doi:10.1145/800017.800542.
machines)
[13] Aycock, John (2003). “A brief history of just-in-
time”. ACM Comput. Surv. 35 (2): 97–113.
doi:10.1145/857076.857077. 5.8 External links
[14] Ingalls, Dan; Kaehler, Ted; Maloney, John; Wallace, Scott;
Kay, Alan (1997). “Back to the future: the story of Squeak, • The Reincarnation of Virtual Machines, Article on
a practical Smalltalk written in itself”. OOPSLA '97: Pro- ACM Queue by Mendel Rosenblum, Co-Founder,
ceedings of the 12th ACM SIGPLAN conference on Object- VMware
oriented programming, systems, languages, and applications.
New York, NY, USA: ACM Press. pp. 318–326. ISBN • Sandia National Laboratories Runs 1 Million Linux
0-89791-908-4. doi:10.1145/263698.263754. Kernels as Virtual Machines

[15] Bracha, Gilad; Griswold, David (1993). “Strongtalk: Type- • The design of the Inferno virtual machine by Phil Win-
checking Smalltalk in a Production Environment”. Proceed- terbottom and Rob Pike
ings of the Eighth Annual Conference on Object-oriented Pro-
gramming Systems, Languages, and Applications. OOPSLA • Software Portability by Virtual Machine Emulation by
'93. New York, NY, USA: ACM. pp. 215–230. ISBN 978- Stefan Vorkoetter
0-89791-587-8. doi:10.1145/165854.165893.
• Create new Virtual Machine in Windows Azure by
[16] Ungar, David; Smith, Randall B (December 1987). “Self: Rahul Vijay Manekari
The power of simplicity”. ACM SIGPLAN Notices. 22: 227–
242. ISSN 0362-1340. doi:10.1145/38807.38828. • A Virtual Machine for building Virtual Machines

[17] Hölzle, Urs; Ungar, David (1994). “Optimizing


dynamically-dispatched calls with run-time type feedback”.
PLDI. Orlando, Florida, United States: ACM. pp. 326–336.
ISBN 0-89791-662-X. doi:10.1145/178243.178478.
Chapter 6

Virtual memory

This article is about the computational technique. For the TBN game show, see Virtual Memory (game show).

In computing, virtual memory is a memory management technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage as seen by a process or task appears as a contiguous address space or collection of contiguous segments. The operating system manages virtual address spaces and the assignment of real memory to virtual memory. Address translation hardware in the CPU, often referred to as a memory management unit or MMU, automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer.

[Figure: Virtual memory combines active RAM and inactive memory on DASD[NB 1] to form a large range of contiguous addresses.]

The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging.

6.1 Properties

Virtual memory makes application programming easier by hiding fragmentation of physical memory; by delegating to the kernel the burden of managing the memory hierarchy (eliminating the need for the program to handle overlays explicitly); and, when each process is run in its own dedicated address space, by obviating the need to relocate program code or to access memory with relative addressing.

Memory virtualization can be considered a generalization of the concept of virtual memory.

6.2 Usage

Virtual memory is an integral part of a modern computer architecture; implementations usually require hardware support, typically in the form of a memory management unit built into the CPU. While not necessary, emulators and virtual machines can employ hardware support to increase performance of their virtual memory implementations.[1] Consequently, older operating systems, such as those for the

mainframes of the 1960s, and those for personal computers 6.3 History
of the early to mid-1980s (e.g., DOS),[2] generally have no
virtual memory functionality, though notable exceptions for
mainframes of the 1960s include: In the 1940s and 1950s, all larger programs had to contain
logic for managing primary and secondary storage, such as
overlaying. Virtual memory was therefore introduced not
• the Atlas Supervisor for the Atlas only to extend primary memory, but to make such an ex-
tension as easy as possible for programmers to use.[3] To
• THE multiprogramming system for the Electrologica allow for multiprogramming and multitasking, many early
X8 (software based virtual memory without hardware systems divided memory between multiple programs with-
support) out virtual memory, such as early models of the PDP-10 via
registers.
• MCP for the Burroughs B5000 The concept of virtual memory was first developed by Ger-
man physicist Fritz-Rudolf Güntsch at the Technische Uni-
• MTS, TSS/360 and CP/CMS for the IBM System/360 versität Berlin in 1956 in his doctoral thesis, Logical Design
Model 67 of a Digital Computer with Multiple Asynchronous Rotating
Drums and Automatic High Speed Memory Operation; it de-
scribed a machine with 6 100-word blocks of primary core
• Multics for the GE 645
memory and an address space of 1,000 100-word blocks,
with hardware automatically moving blocks between pri-
• the Time Sharing Operating System for the RCA mary memory and secondary drum memory.[4][5] Paging
Spectra 70/46 was first implemented at the University of Manchester as
a way to extend the Atlas Computer's working memory by
combining its 16 thousand words of primary core mem-
and the operating system for the Apple Lisa is an example ory with an additional 96 thousand words of secondary
of a personal computer operating system of the 1980s that drum memory. The first Atlas was commissioned in 1962
features virtual memory. but working prototypes of paging had been developed by
During the 1960s and early 70s, computer memory was 1959.[3](p2)[6][7] In 1961, the Burroughs Corporation inde-
very expensive. The introduction of virtual memory pro- pendently released the first commercial computer with vir-
vided an ability for software systems with large memory tual memory, the B5000, with segmentation rather than
demands to run on computers with less real memory. The paging.[8][9]
savings from this provided a strong incentive to switch to Before virtual memory could be implemented in main-
virtual memory for all systems. The additional capability
stream operating systems, many problems had to be ad-
of providing virtual address spaces added another level of dressed. Dynamic address translation required expen-
security and reliability, thus making virtual memory even
sive and difficult to build specialized hardware; initial im-
more attractive to the market place. plementations slowed down access to memory slightly.[3]
Most modern operating systems that support virtual mem- There were worries that new system-wide algorithms uti-
ory also run each process in its own dedicated address space. lizing secondary storage would be less effective than pre-
Each program thus appears to have sole access to the virtual viously used application-specific algorithms. By 1969,
memory. However, some older operating systems (such as the debate over virtual memory for commercial comput-
OS/VS1 and OS/VS2 SVS) and even modern ones (such as ers was over;[3] an IBM research team led by David Sayre
IBM i) are single address space operating systems that run showed that their virtual memory overlay system consis-
all processes in a single address space composed of virtual- tently worked better than the best manually controlled
ized memory. systems.[10] Throughout the 1970s, the IBM 370 series run-
Embedded systems and other special-purpose computer ning their virtual-storage based operating systems provided
systems that require very fast and/or very consistent re- a means for business users to migrate multiple older sys-
sponse times may opt not to use virtual memory due to tems into fewer, more powerful, mainframes that had im-
decreased determinism; virtual memory systems trigger un- proved price/performance. The first minicomputer to intro-
predictable traps that may produce unwanted "jitter" during duce virtual memory was the Norwegian NORD-1; during
I/O operations. This is because embedded hardware costs the 1970s, other minicomputers implemented virtual mem-
are often kept low by implementing all such operations with ory, notably VAX models running VMS.
software (a technique called bit-banging) rather than with Virtual memory was introduced to the x86 architecture with
dedicated hardware. the protected mode of the Intel 80286 processor, but its

segment swapping technique scaled poorly to larger seg- paging supervisor accesses secondary storage, returns the
ment sizes. The Intel 80386 introduced paging support page that has the virtual address that resulted in the page
underneath the existing segmentation layer, enabling the fault, updates the page tables to reflect the physical location
page fault exception to chain with other exceptions without of the virtual address and tells the translation mechanism to
double fault. However, loading segment descriptors was an restart the request.
expensive operation, causing operating system designers to When all physical memory is already in use, the paging su-
rely strictly on paging rather than a combination of paging pervisor must free a page in primary storage to hold the
and segmentation. swapped-in page. The supervisor uses one of a variety of
page replacement algorithms such as least recently used to
determine which page to free.
6.4 Paged virtual memory
Nearly all implementations of virtual memory divide a
6.4.3 Pinned pages
virtual address space into pages, blocks of contiguous vir-
Operating systems have memory areas that are pinned
tual memory addresses. Pages on contemporary[NB 2] sys-
(never swapped to secondary storage). Other terms used
tems are usually at least 4 kilobytes in size; systems with
are locked, fixed, or wired pages. For example, interrupt
large virtual address ranges or amounts of real memory gen-
mechanisms rely on an array of pointers to their handlers,
erally use larger page sizes.
such as I/O completion and page fault. If the pages contain-
ing these pointers or the code that they invoke were page-
6.4.1 Page tables able, interrupt-handling would become far more complex
and time-consuming, particularly in the case of page fault
Page tables are used to translate the virtual addresses seen interruptions. Hence, some part of the page table structures
by the application into physical addresses used by the hard- is not pageable.
ware to process instructions; such hardware that handles this Some pages may be pinned for short periods of time, others
specific translation is often known as the memory manage- may be pinned for long periods of time, and still others may
ment unit. Each entry in the page table holds a flag indi- need to be permanently pinned. For example:
cating whether the corresponding page is in real memory or
not. If it is in real memory, the page table entry will con- • The paging supervisor code and drivers for secondary
tain the real memory address at which the page is stored. storage devices on which pages reside must be perma-
When a reference is made to a page by the hardware, if the nently pinned, as otherwise paging wouldn't even work
page table entry for the page indicates that it is not currently because the necessary code wouldn't be available.
in real memory, the hardware raises a page fault exception,
invoking the paging supervisor component of the operating • Timing-dependent components may be pinned to
system. avoid variable paging delays.
Systems can have one page table for the whole system, sep- • Data buffers that are accessed directly by peripheral
arate page tables for each application and segment, a tree devices that use direct memory access or I/O chan-
of page tables for large segments or some combination of nels must reside in pinned pages while the I/O opera-
these. If there is only one page table, different applications tion is in progress because such devices and the buses
running at the same time use different parts of a single range to which they are attached expect to find data buffers
of virtual addresses. If there are multiple page or segment located at physical memory addresses; regardless of
tables, there are multiple virtual address spaces and concur- whether the bus has a memory management unit for
rent applications with separate page tables redirect to dif- I/O, transfers cannot be stopped if a page fault occurs
ferent real addresses. and then restarted when the page fault has been pro-
Some earlier systems with smaller real memory sizes, such
as the SDS 940, used page registers instead of page tables
in memory for address translation. In IBM’s operating systems for System/370 and successor
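To make the translation step described above concrete, here is a small simulation of a single-level page-table lookup. The 4 KiB page size matches the text, but the table contents and address width are made up for the example, and a real MMU performs this walk in hardware, usually over multi-level tables.

```c
/* Toy single-level page-table walk: virtual address -> physical address.
 * Page size 4 KiB; the tiny table below is invented for illustration. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                        /* 4 KiB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

struct pte { uint32_t frame; int present; }; /* one page-table entry */

static struct pte page_table[16] = {
    [0] = { .frame = 7, .present = 1 },
    [1] = { .frame = 3, .present = 1 },      /* other pages are "not in real memory" */
};

int main(void)
{
    uint32_t vaddr  = (1u << PAGE_SHIFT) + 0x1A4;  /* page 1, offset 0x1A4 */
    uint32_t page   = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & (PAGE_SIZE - 1);

    if (!page_table[page].present) {
        puts("page fault: the paging supervisor would load the page here");
        return 1;
    }
    uint32_t paddr = (page_table[page].frame << PAGE_SHIFT) | offset;
    printf("virtual 0x%05x -> physical 0x%05x\n", vaddr, paddr);
    return 0;
}
```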
systems, the term is “fixed”, and such pages may be long-
term fixed, or may be short-term fixed, or may be unfixed
6.4.2 Paging supervisor (i.e., pageable). System control structures are often long-
term fixed (measured in wall-clock time, i.e., time mea-
This part of the operating system creates and manages page sured in seconds, rather than time measured in fractions
tables. If the hardware raises a page fault exception, the of one second) whereas I/O buffers are usually short-term

fixed (usually measured in significantly less than wall-clock reside in a 32-bit linear, paged address space. Segments can
time, possibly for tens of milliseconds). Indeed, the OS be moved in and out of that space; pages there can “page”
has a special facility for “fast fixing” these short-term fixedin and out of main memory, providing two levels of virtual
data buffers (fixing which is performed without resorting to memory; few if any operating systems do so, instead using
a time-consuming Supervisor Call instruction). only paging. Early non-hardware-assisted x86 virtualization
Multics used the term “wired”. OpenVMS and Windows solutions combined paging and segmentation because x86
refer to pages temporarily made nonpageable (as for I/O paging offers only two protection domains whereas a VMM
[16]:22
buffers) as “locked”, and simply “nonpageable” for those / guest OS / guest applications stack needs three. The
difference between paging and segmentation systems is not
that are never pageable.
only about memory division; segmentation is visible to user
processes, as part of memory model semantics. Hence, in-
Virtual-real operation stead of memory that looks like a single large space, it is
structured into multiple spaces.
In OS/VS1 and similar OSes, some parts of systems mem- This difference has important consequences; a segment is
ory are managed in “virtual-real” mode, called “V=R”. In not a page with variable length or a simple way to lengthen
this mode every virtual address corresponds to the same the address space. Segmentation that can provide a single-
real address. This mode is used for interrupt mechanisms, level memory model in which there is no differentiation be-
for the paging supervisor and page tables in older systems, tween process memory and file system consists of only a list
and for application programs using non-standard I/O man- of segments (files) mapped into the process’s potential ad-
agement. For example, IBM’s z/OS has 3 modes (virtual- dress space.[17]
virtual, virtual-real and virtual-fixed).[11]
This is not the same as the mechanisms provided by calls
such as mmap and Win32's MapViewOfFile, because inter-
6.4.4 Thrashing file pointers do not work when mapping files into semi-
arbitrary places. In Multics, a file (or a segment from a
When paging and page stealing are used, a problem called multi-segment file) is mapped into a segment in the address
"thrashing" can occur, in which the computer spends an un- space, so files are always mapped at a segment boundary. A
suitably large amount of time transferring pages to and from file’s linkage section can contain pointers for which an at-
a backing store, hence slowing down useful work. A task’s tempt to load the pointer into a register or make an indirect
working set is the minimum set of pages that should be in reference through it causes a trap. The unresolved pointer
memory in order for it to make useful progress. Thrash- contains an indication of the name of the segment to which
ing occurs when there is insufficient memory available to the pointer refers and an offset within the segment; the han-
store the working sets of all active programs. Adding real dler for the trap maps the segment into the address space,
memory is the simplest response, but improving application puts the segment number into the pointer, changes the tag
design, scheduling, and memory usage can help. Another field in the pointer so that it no longer causes a trap, and
solution is to reduce the number of active tasks on the sys- returns to the code where the trap [18] occurred, re-executing
tem. This reduces demand on real memory by swapping out the instruction that caused the trap. This eliminates the
[3]
the entire working set of one or more processes. need for a linker completely and works when different
processes map the same file into different places in their
private address spaces.[19]

6.5 Segmented virtual memory


Some systems, such as the Burroughs B5500,[12] use 6.6 Address space swapping
segmentation instead of paging, dividing virtual address
spaces into variable-length segments. A virtual address here Some operating systems provide for swapping entire
consists of a segment number and an offset within the seg- address spaces, in addition to whatever facilities they have
ment. The Intel 80286 supports a similar segmentation for paging and segmentation. When this occurs, the OS
scheme as an option, but it is rarely used. Segmentation and writes those pages and segments currently in real memory
paging can be used together by dividing each segment into to swap files. In a swap-in, the OS reads back the data from
pages; systems with this memory structure, such as Multics the swap files but does not automatically read back pages
and IBM System/38, are usually paging-predominant, seg- that had been paged out at the time of the swap out opera-
mentation providing memory protection.[13][14][15] tion.
In the Intel 80386 and later IA-32 processors, the segments IBM’s MVS, from OS/VS2 Release 2 through z/OS, pro-

vides for marking an address space as unswappable; doing 6.9 Notes


so does not pin any pages in the address space. This can
be done for the duration of a job by entering the name of [1] Early systems used drums; contemporary systems use disks
an eligible[20] main program in the Program Properties Ta- or solid state memory
ble with an unswappable flag. In addition, privileged code
can temporarily make an address space unswappable With [2] IBM DOS/VS and OS/VS1 only supported 2 KiB pages.
a SYSEVENT Supervisor Call instruction (SVC); certain
changes[21] in the address space properties require that the
OS swap it out and then swap it back in, using SYSEVENT 6.10 References
TRANSWAP.[22]
• This article is based on material taken from the Free On-line
Dictionary of Computing prior to 1 November 2008 and in-
6.7 See also corporated under the “relicensing” terms of the GFDL, ver-
sion 1.3 or later.

• Computer memory
[1] “AMD-V™ Nested Paging” (PDF). AMD. Retrieved 28
• Memory address April 2015.

[2] “Windows Version History”. Microsoft. September 23,


• Address space 2011. Retrieved March 9, 2015.
• Virtual address space
[3] Denning, Peter (1997). “Before Memory Was Virtual”
• CPU design (PDF). In the Beginning: Recollections of Software Pioneers.

[4] Jessen, Elke (2004). “Origin of the Virtual Memory Con-


• Page (computing) cept”. IEEE Annals of the History of Computing. 26 (4):
71–72.
• Page table
• Paging [5] Jessen, E. (1996). “Die Entwicklung des virtuellen Speich-
ers”. Informatik-Spektrum (in German). Springer Berlin
• Working set / Heidelberg. 19 (4): 216–219. ISSN 0170-6012.
• Memory management unit doi:10.1007/s002870050034.

• Cache algorithms [6] R. J. Creasy, "The origin of the VM/370 time-sharing sys-
tem", IBM Journal of Research & Development, Vol. 25, No.
• Page replacement algorithm 5 (September 1981), p. 486

• Segmentation (memory) [7] Atlas design includes virtual memory

• System/38 [8] Ian Joyner on Burroughs B5000

• Memory management [9] Cragon, Harvey G. (1996). Memory Systems and Pipelined
Processors. Jones and Bartlett Publishers. p. 113. ISBN
• Memory allocation 0-86720-474-5.

[10] Sayre, D. (1969). “Is automatic “folding” of programs effi-


• Memory management (operating systems) cient enough to displace manual?". Communications of the
ACM. 12 (12): 656. doi:10.1145/363626.363629.
• Protected mode, an x86 mode that allows for virtual
memory. [11] “z/OS Basic Skills Information Center: z/OS Concepts”
(PDF).
• CUDA Pinned memory
[12] Burroughs (1964). Burroughs B5500 Information Process-
ing System Reference Manual (PDF). Burroughs Corpora-
tion. 1021326. Retrieved November 28, 2013.
6.8 Further reading
[13] GE-645 System Manual (PDF). January 1968. pp. 21–30.
Retrieved 28 April 2015.
• Hennessy, John L.; and Patterson, David A.; Com-
puter Architecture, A Quantitative Approach (ISBN 1- [14] Corbató, F.J.; and Vyssotsky, V. A. “Introduction and
55860-724-2) Overview of the Multics System”. Retrieved 2007-11-13.

[15] Glaser, Edward L.; Couleur, John F. & Oliver, G. A.


“System Design of a Computer for Time Sharing Applica-
tions”.

[16] J. E. Smith, R. Uhlig (August 14, 2005) Virtual Ma-


chines: Architectures, Implementations and Applications,
HOTCHIPS 17, Tutorial 1, part 2

[17] Bensoussan, André; Clingen, CharlesT.; Daley, Robert C.


(May 1972). “The Multics Virtual Memory: Concepts and
Design”. Communications of the ACM. 15 (5): 308–318.
doi:10.1145/355602.361306.

[18] “Multics Execution Environment”. Multicians.org. Re-


trieved October 9, 2016.

[19] Organick, Elliott I. (1972). The Multics System: An Exami-


nation of Its Structure. MIT Press. ISBN 0-262-15012-3.

[20] The most important requirement is that the program be APF


authorized.

[21] E.g., requesting use of preferred memory

[22] “Control swapping (DONTSWAP, OKSWAP, TRAN-


SWAP)". IBM Knowledge Center. z/OS MVS Program-
ming: Authorized Assembler Services Reference SET-
WTO SA23-1375-00. 1990–2014. Retrieved October 9,
2016.

6.11 External links


• Operating Systems: Three Easy Pieces, by Remzi
H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau.
Arpaci-Dusseau Books, 2014. Relevant chapters:
Address Spaces Address Translation Segmentation
Introduction to Paging TLBs Advanced Page Tables
Swapping: Mechanisms Swapping: Policies

• “Time-Sharing Supervisor Programs” by Michael T.


Alexander in Advanced Topics in Systems Program-
ming, University of Michigan Engineering Summer
Conference 1970 (revised May 1971), compares the
scheduling and resource allocation approaches, includ-
ing virtual memory and paging, used in four main-
frame operating systems: CP-67, TSS/360, MTS, and
Multics.

• LinuxMM: Linux Memory Management.

• Birth of Linux Kernel, mailing list discussion.


• The Virtual-Memory Manager in Windows NT,
Randy Kath, Microsoft Developer Network Technol-
ogy Group, 12 December 1992 at the Wayback Ma-
chine (archived June 22, 2010)
Chapter 7

Kernel (operating system)

"Kernel (computing)" and "Kernel (computer science)" redirect here. For other uses, see Kernel (disambiguation).

The kernel is a computer program that is the core of a computer's operating system, with complete control over everything in the system.[1] It is the first program loaded on start-up. It handles the rest of start-up as well as input/output requests from software, translating them into data-processing instructions for the central processing unit. It handles memory and peripherals like keyboards, monitors, printers, and speakers.

[Figure: A kernel connects the application software to the hardware of a computer.]

The critical code of the kernel is usually loaded into a protected area of memory, which prevents it from being overwritten by applications or other, more minor parts of the operating system. The kernel performs its tasks, such as running processes and handling interrupts, in kernel space. In contrast, everything a user does is in user space: writing text in a text editor, running programs in a GUI, etc. This separation prevents user data and kernel data from interfering with each other and causing instability and slowness.[1]

The kernel's interface is a low-level abstraction layer. When a process makes requests of the kernel, it is called a system call. Kernel designs differ in how they manage these system calls and resources. A monolithic kernel runs all the operating system instructions in the same address space, for speed. A microkernel runs most processes in user space,[2] for modularity.[3]

7.1 Functions of the kernel

The kernel's primary function is to mediate access to the computer's resources, including:[4]

The central processing unit This central component of a computer system is responsible for running or executing programs. The kernel takes responsibility for deciding at any time which of the many running programs should be allocated to the processor or processors (each of which can usually run only one program at a time).

Random-access memory Random-access memory is used to store both program instructions and data. Typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available. The kernel is responsible for deciding which memory each process can use, and determining what to do when not enough memory is available.

Input/output (I/O) devices I/O devices include such peripherals as keyboards, mice, disk drives, printers, network adapters, and display devices. The kernel allocates requests from applications to perform I/O to an appropriate device and provides convenient methods for using the device (typically abstracted to the point where the application does not need to know implementation details of the device).

Key aspects necessary in resource management are the def-cessor to address kernel memory, thus preventing an appli-
inition of an execution domain (address space) and the pro-
cation from damaging the running kernel. This fundamental
tection mechanism used to mediate the accesses to the re-partition of memory space has contributed much to the cur-
sources within a domain.[4] rent designs of actual general-purpose kernels and is almost
Kernels also usually provide methods for synchronization universal in such systems, although some research kernels
and communication between processes called inter-process (e.g. Singularity) take other approaches.
communication (IPC).
A kernel may implement these features itself, or rely on 7.1.2 Device management
some of the processes it runs to provide the facilities to
other processes, although in this case it must provide some To perform useful functions, processes need access to the
means of IPC to allow processes to access the facilities pro- peripherals connected to the computer, which are con-
vided by each other. trolled by the kernel through device drivers. A device driver
Finally, a kernel must provide running programs with a is a computer program that enables the operating system to
method to make requests to access these facilities. interact with a hardware device. It provides the operating
system with information of how to control and communi-
cate with a certain piece of hardware. The driver is an im-
7.1.1 Memory management portant and vital piece to a program application. The design
goal of a driver is abstraction; the function of the driver is
For more details on this topic, see Memory management to translate the OS-mandated function calls (programming
(operating systems). calls) into device-specific calls. In theory, the device should
work correctly with the suitable driver. Device drivers are
used for such things as video cards, sound cards, printers,
The kernel has full access to the system’s memory and must scanners, modems, and LAN cards. The common levels of
allow processes to safely access this memory as they require abstraction of device drivers are:
it. Often the first step in doing this is virtual addressing,
usually achieved by paging and/or segmentation. Virtual 1. On the hardware side:
addressing allows the kernel to make a given physical ad-
dress appear to be another address, the virtual address. Vir- • Interfacing directly.
tual address spaces may be different for different processes;
the memory that one process accesses at a particular (vir- • Using a high level interface (Video BIOS).
tual) address may be different memory from what another
process accesses at the same address. This allows every pro- • Using a lower-level device driver (file drivers using
gram to behave as if it is the only one (apart from the kernel) disk drivers).
running and thus prevents applications from crashing each
• Simulating work with hardware, while doing some-
other.[5]
thing entirely different.
On many systems, a program’s virtual address may refer to
data which is not currently in memory. The layer of in- 2. On the software side:
direction provided by virtual addressing allows the oper-
ating system to use other data stores, like a hard drive, to
store what would otherwise have to remain in main memory • Allowing the operating system direct access to hard-
(RAM). As a result, operating systems can allow programs ware resources.
to use more memory than the system has physically avail-
• Implementing only primitives.
able. When a program needs data which is not currently in
RAM, the CPU signals to the kernel that this has happened, • Implementing an interface for non-driver software
and the kernel responds by writing the contents of an inac- (Example: TWAIN).
tive memory block to disk (if necessary) and replacing it
with the data requested by the program. The program can • Implementing a language, sometimes high-level (Ex-
then be resumed from the point where it was stopped. This ample PostScript).
scheme is generally known as demand paging.
Virtual addressing also allows creation of virtual partitions For example, to show the user something on the screen, an
of memory in two disjointed areas, one being reserved for application would make a request to the kernel, which would
the kernel (kernel space) and the other for the applications forward the request to its display driver, which is then re-
(user space). The applications are not permitted by the pro- sponsible for actually plotting the character/pixel.[5]
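The translation idea can be sketched as a table of function pointers that kernel-side code calls and that each driver fills with hardware-specific routines. The names below (display_driver_ops, vga_*) are invented purely for illustration and do not belong to any real kernel:

    #include <stdint.h>
    #include <stdio.h>

    /* Generic interface the kernel-side code programs against. */
    struct display_driver_ops {
        int  (*init)(void);
        int  (*plot_pixel)(uint32_t x, uint32_t y, uint32_t rgb);
        void (*shutdown)(void);
    };

    /* One concrete (hypothetical) driver maps the calls to its hardware. */
    static int  vga_init(void) { /* program controller registers here */ return 0; }
    static int  vga_plot(uint32_t x, uint32_t y, uint32_t rgb) { (void)x; (void)y; (void)rgb; return 0; }
    static void vga_shutdown(void) { }

    static const struct display_driver_ops vga_ops = {
        .init = vga_init, .plot_pixel = vga_plot, .shutdown = vga_shutdown,
    };

    /* The rest of the system only ever talks to the generic operations. */
    static int draw_dot(const struct display_driver_ops *drv)
    {
        return drv->plot_pixel(10, 10, 0x00FF00);
    }

    int main(void)
    {
        vga_ops.init();
        printf("plot returned %d\n", draw_dot(&vga_ops));
        vga_ops.shutdown();
        return 0;
    }

Swapping in a different driver only means filling the same table with different functions; the calling code is unchanged.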
A kernel must maintain a list of available devices. This list may be known in advance (e.g. on an embedded system where the kernel will be rewritten if the available hardware changes), configured by the user (typical on older PCs and on systems that are not designed for personal use) or detected by the operating system at run time (normally called plug and play). In a plug and play system, a device manager first performs a scan on different hardware buses, such as Peripheral Component Interconnect (PCI) or Universal Serial Bus (USB), to detect installed devices, then searches for the appropriate drivers.
As device management is a very OS-specific topic, these drivers are handled differently by each kind of kernel design, but in every case, the kernel has to provide the I/O to allow drivers to physically access their devices through some port or memory location. Very important decisions have to be made when designing the device management system, as in some designs accesses may involve context switches, making the operation very CPU-intensive and easily causing a significant performance overhead.

7.1.3 System calls

Main article: System call

In computing, a system call is how a program requests a service from an operating system's kernel that it does not normally have permission to run. System calls provide the interface between a process and the operating system. Most operations interacting with the system require permissions not available to a user-level process; for example, I/O performed with a device present on the system, or any form of communication with other processes, requires the use of system calls.
A system call is a mechanism that is used by the application program to request a service from the operating system. System calls use a machine-code instruction that causes the processor to change mode, for example switching from user mode to supervisor mode, in which the operating system performs actions like accessing hardware devices or the memory management unit. Generally the operating system provides a library that sits between the operating system and normal programs, usually a C library such as glibc or the Windows API. The library handles the low-level details of passing information to the kernel and switching to supervisor mode. System calls include close, open, read, wait and write.
To actually perform useful work, a process must be able to access the services provided by the kernel. This is implemented differently by each kernel, but most provide a C library or an API, which in turn invokes the related kernel functions.[6]
The method of invoking the kernel function varies from kernel to kernel. If memory isolation is in use, it is impossible for a user process to call the kernel directly, because that would be a violation of the processor's access control rules. A few possibilities are:

• Using a software-simulated interrupt. This method is available on most hardware, and is therefore very common.
• Using a call gate. A call gate is a special address stored by the kernel in a list in kernel memory at a location known to the processor. When the processor detects a call to that address, it instead redirects to the target location without causing an access violation. This requires hardware support, but the hardware for it is quite common.
• Using a special system call instruction. This technique requires special hardware support, which common architectures (notably, x86) may lack. System call instructions have been added to recent models of x86 processors, however, and some operating systems for PCs make use of them when available.
• Using a memory-based queue. An application that makes large numbers of requests but does not need to wait for the result of each may add details of requests to an area of memory that the kernel periodically scans to find requests.
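On a Linux system with glibc, the relationship between the library wrapper and the underlying trap can be seen in a few lines of user-space C. This is a minimal sketch; the headers, SYS_write number and syscall(2) helper are Linux-specific:

    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        const char msg[] = "hello via the C library\n";

        /* Typical path: the libc wrapper hides the mode switch. */
        write(STDOUT_FILENO, msg, sizeof msg - 1);

        /* The same request issued through the raw system call number;
         * syscall() executes the architecture's trap or system call
         * instruction on the caller's behalf. */
        syscall(SYS_write, STDOUT_FILENO, "hello via syscall(2)\n", 21);

        return 0;
    }

Either path ends in the same kernel routine; the wrapper merely hides the calling convention and the switch to supervisor mode.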
7.2 Kernel design decisions

7.2.1 Issues of kernel support for protection

An important consideration in the design of a kernel is the support it provides for protection from faults (fault tolerance) and from malicious behaviours (security). These two aspects are usually not clearly distinguished, and the adoption of this distinction in the kernel design leads to the rejection of a hierarchical structure for protection.[4]
The mechanisms or policies provided by the kernel can be classified according to several criteria, including: static (enforced at compile time) or dynamic (enforced at run time); pre-emptive or post-detection; according to the protection principles they satisfy (e.g. Denning[7][8]); whether they are hardware supported or language based; whether they are more an open mechanism or a binding policy; and many more.
Support for hierarchical protection domains[9] is typically implemented using CPU modes.
Many kernels provide implementation of “capabilities”, i.e. objects that are provided to user code which allow limited access to an underlying object managed by the kernel. A common example occurs in file handling: a file is a representation of information stored on a permanent storage device. The kernel may be able to perform many different operations (e.g. read, write, delete or execute the file contents) but a user-level application may only be permitted to perform some of these operations (e.g. it may only be allowed to read the file). A common implementation of this is for the kernel to provide an object to the application (typically called a “file handle”) which the application may then invoke operations on, the validity of which the kernel checks at the time the operation is requested. Such a system may be extended to cover all objects that the kernel manages, and indeed to objects provided by other user applications.
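A toy model of that per-operation check might look as follows; it is purely illustrative and does not reflect any real kernel's data structures:

    #include <stdbool.h>
    #include <stdio.h>

    enum { CAP_READ = 1 << 0, CAP_WRITE = 1 << 1 };

    struct handle {
        int      object_id;   /* which kernel-managed object this refers to */
        unsigned rights;      /* operations the holder may perform          */
    };

    static struct handle handle_table[16];   /* per-process, owned by the kernel */

    /* Re-checked by the kernel every time an operation is requested. */
    static bool kernel_check(int fd, unsigned wanted)
    {
        if (fd < 0 || fd >= 16)
            return false;
        return (handle_table[fd].rights & wanted) == wanted;
    }

    int main(void)
    {
        handle_table[3] = (struct handle){ .object_id = 42, .rights = CAP_READ };

        printf("read allowed:  %d\n", kernel_check(3, CAP_READ));   /* 1 */
        printf("write allowed: %d\n", kernel_check(3, CAP_WRITE));  /* 0 */
        return 0;
    }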
An efficient and simple way to provide hardware support for capabilities is to delegate to the MMU the responsibility of checking access rights for every memory access, a mechanism called capability-based addressing.[10] Most commercial computer architectures lack such MMU support for capabilities.
An alternative approach is to simulate capabilities using commonly supported hierarchical domains; in this approach, each protected object must reside in an address space that the application does not have access to, and the kernel also maintains a list of capabilities in such memory. When an application needs to access an object protected by a capability, it performs a system call and the kernel then checks whether the application's capability grants it permission to perform the requested action, and if it is permitted performs the access for it (either directly, or by delegating the request to another user-level process). The performance cost of address space switching limits the practicality of this approach in systems with complex interactions between objects, but it is used in current operating systems for objects that are not accessed frequently or which are not expected to perform quickly.[11][12] Approaches where the protection mechanism is not firmware supported but is instead simulated at higher levels (e.g. simulating capabilities by manipulating page tables on hardware that does not have direct support) are possible, but there are performance implications.[13] Lack of hardware support may not be an issue, however, for systems that choose to use language-based protection.[14]
An important kernel design decision is the choice of the abstraction levels where the security mechanisms and policies should be implemented. Kernel security mechanisms play a critical role in supporting security at higher levels.[10][15][16][17][18]
One approach is to use firmware and kernel support for fault tolerance (see above), and build the security policy for malicious behavior on top of that (adding features such as cryptography mechanisms where necessary), delegating some responsibility to the compiler. Approaches that delegate enforcement of security policy to the compiler and/or the application level are often called language-based security.
The lack of many critical security mechanisms in current mainstream operating systems impedes the implementation of adequate security policies at the application abstraction level.[15] In fact, a common misconception in computer security is that any security policy can be implemented in an application regardless of kernel support.[15]

Hardware-based or language-based protection

Typical computer systems today use hardware-enforced rules about what programs are allowed to access what data. The processor monitors the execution and stops a program that violates a rule (e.g., a user process that is about to read or write to kernel memory). In systems that lack support for capabilities, processes are isolated from each other by using separate address spaces.[19] Calls from user processes into the kernel are regulated by requiring them to use one of the above-described system call methods.
An alternative approach is to use language-based protection. In a language-based protection system, the kernel will only allow code to execute that has been produced by a trusted language compiler. The language may then be designed such that it is impossible for the programmer to instruct it to do something that will violate a security requirement.[14]

Advantages of this approach include:

• No need for separate address spaces. Switching between address spaces is a slow operation that causes a great deal of overhead, and a lot of optimization work is currently performed in order to prevent unnecessary switches in current operating systems. Switching is completely unnecessary in a language-based protection system, as all code can safely operate in the same address space.
• Flexibility. Any protection scheme that can be designed to be expressed via a programming language can be implemented using this method. Changes to the protection scheme (e.g. from a hierarchical system to a capability-based one) do not require new hardware.

Disadvantages include:

• Longer application start-up time. Applications must be verified when they are started to ensure they have been compiled by the correct compiler, or may need recompiling either from source code or from bytecode.
• Inflexible type systems. On traditional systems, applications frequently perform operations that are not type safe. Such operations cannot be permitted in a language-based protection system, which means that applications may need to be rewritten and may, in some cases, lose performance.

Examples of systems with language-based protection include JX and Microsoft's Singularity.

7.2.2 Process cooperation

Edsger Dijkstra proved that, from a logical point of view, atomic lock and unlock operations operating on binary semaphores are sufficient primitives to express any functionality of process cooperation.[20] However this approach is generally held to be lacking in terms of safety and efficiency, whereas a message passing approach is more flexible.[21] A number of other approaches (either lower- or higher-level) are available as well, with many modern kernels providing support for systems such as shared memory and remote procedure calls.
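Dijkstra's primitives map directly onto what POSIX still exposes today. The following minimal sketch (Linux assumed, compiled with -pthread) has two threads cooperating on shared state using nothing but a binary semaphore's lock and unlock operations:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t lock;
    static long counter;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&lock);      /* P: acquire the binary semaphore */
            counter++;            /* critical section                */
            sem_post(&lock);      /* V: release                      */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        sem_init(&lock, 0, 1);    /* initial value 1 => binary semaphore */
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);   /* always 200000 */
        return 0;
    }

Without the semaphore the two increments would race; with it, the final count is deterministic, which is the whole of what the lock/unlock pair guarantees.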
7.2.3 I/O devices management

The idea of a kernel where I/O devices are handled uniformly with other processes, as parallel co-operating processes, was first proposed and implemented by Brinch Hansen (although similar ideas were suggested in 1967[22][23]). In Hansen's description of this, the “common” processes are called internal processes, while the I/O devices are called external processes.[21]
Similar to physical memory, allowing applications direct access to controller ports and registers can cause the controller to malfunction or the system to crash. In addition, depending on the complexity of the device, some devices can get surprisingly complex to program, and may use several different controllers. Because of this, providing a more abstract interface to manage the device is important. This interface is normally provided by a device driver or hardware abstraction layer. Frequently, applications will require access to these devices. The kernel must maintain the list of these devices by querying the system for them in some way. This can be done through the BIOS, or through one of the various system buses (such as PCI/PCIe, or USB). When an application requests an operation on a device (such as displaying a character), the kernel needs to send this request to the current active video driver. The video driver, in turn, needs to carry out this request. This is an example of inter-process communication (IPC).

7.3 Kernel-wide design approaches

Naturally, the above listed tasks and features can be provided in many ways that differ from each other in design and implementation.
The principle of separation of mechanism and policy is the substantial difference between the philosophy of micro and monolithic kernels.[24][25] Here a mechanism is the support that allows the implementation of many different policies, while a policy is a particular “mode of operation”. For instance, a mechanism may provide for user log-in attempts to call an authorization server to determine whether access should be granted; a policy may be for the authorization server to request a password and check it against an encrypted password stored in a database. Because the mechanism is generic, the policy could more easily be changed (e.g. by requiring the use of a security token) than if the mechanism and policy were integrated in the same module.
In a minimal microkernel just some very basic policies are included,[25] and its mechanisms allow what is running on top of the kernel (the remaining part of the operating system and the other applications) to decide which policies to adopt (such as memory management, high-level process scheduling, file system management, etc.).[4][21] A monolithic kernel instead tends to include many policies, therefore restricting the rest of the system to rely on them.
Per Brinch Hansen presented arguments in favour of separation of mechanism and policy.[4][21] The failure to properly fulfill this separation is one of the major causes of the lack of substantial innovation in existing operating systems,[4] a problem common in computer architecture.[26][27][28] The monolithic design is induced by the “kernel mode”/“user mode” architectural approach to protection (technically called hierarchical protection domains), which is common in conventional commercial systems;[29] in fact, every module needing protection is therefore preferably included into the kernel.[29] This link between monolithic design and “privileged mode” can be reconducted to the key issue of mechanism-policy separation;[4] in fact the “privileged mode” architectural approach melts together the protection mechanism with the security policies, while the major alternative architectural approach, capability-based addressing, clearly distinguishes between the two, leading naturally to a microkernel design[4] (see Separation of protection and security).
While monolithic kernels execute all of their code in the same address space (kernel space), microkernels try to run most of their services in user space, aiming to improve maintainability and modularity of the codebase.[3] Most kernels do not fit exactly into one of these categories, but are rather found in between these two designs. These are called hybrid kernels. More exotic designs such as nanokernels and exokernels are available, but are seldom used for production systems. The Xen hypervisor, for example, is an exokernel.
7.3.1 Monolithic kernels

Main article: Monolithic kernel

[Figure: Diagram of a monolithic kernel]

In a monolithic kernel, all OS services run along with the main kernel thread, thus also residing in the same memory area. This approach provides rich and powerful hardware access. Some developers, such as UNIX developer Ken Thompson, maintain that it is “easier to implement a monolithic kernel”[30] than microkernels. The main disadvantages of monolithic kernels are the dependencies between system components – a bug in a device driver might crash the entire system – and the fact that large kernels can become very difficult to maintain.
Monolithic kernels, which have traditionally been used by Unix-like operating systems, contain all the operating system core functions and the device drivers (small programs that allow the operating system to interact with hardware devices, such as disk drives, video cards and printers). This is the traditional design of UNIX systems. A monolithic kernel is one single program that contains all of the code necessary to perform every kernel-related task. Every part which is to be accessed by most programs and which cannot be put in a library is in the kernel space: device drivers, scheduler, memory handling, file systems, and network stacks. Many system calls are provided to applications to allow them to access all those services. A monolithic kernel, while initially loaded with subsystems that may not be needed, can be tuned to a point where it is as fast as or faster than one that was specifically designed for the hardware, while remaining more broadly applicable.
Modern monolithic kernels, such as those of Linux and FreeBSD, both of which fall into the category of Unix-like operating systems, feature the ability to load modules at runtime, thereby allowing easy extension of the kernel's capabilities as required, while helping to minimize the amount of code running in kernel space. In the monolithic kernel, some advantages hinge on these points:

• Since there is less software involved, it is faster.
• As it is one single piece of software, it should be smaller both in source and compiled forms.
• Less code generally means fewer bugs, which can translate to fewer security problems.
Most work in the monolithic kernel is done via system calls. These are interfaces, usually kept in a tabular structure, that access some subsystem within the kernel such as disk operations. Essentially, calls are made within programs and a checked copy of the request is passed through the system call; hence, the request does not have far to travel at all.
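The “tabular structure” can be pictured as an array of function pointers indexed by the system call number. The sketch below is illustrative only; it is not the actual Linux table, and the handler names are invented:

    #include <stddef.h>

    typedef long (*syscall_fn)(long a0, long a1, long a2);

    static long sys_read(long fd, long buf, long count)        { (void)fd; (void)buf; (void)count; return 0; }
    static long sys_write(long fd, long buf, long count)       { (void)fd; (void)buf; (void)count; return 0; }
    static long sys_close(long fd, long unused1, long unused2) { (void)fd; (void)unused1; (void)unused2; return 0; }

    static const syscall_fn syscall_table[] = {
        [0] = sys_read,
        [1] = sys_write,
        [2] = sys_close,
    };

    /* Called from the architecture's trap handler with the number and
     * arguments the user process placed in registers. */
    long do_syscall(unsigned num, long a0, long a1, long a2)
    {
        if (num >= sizeof syscall_table / sizeof syscall_table[0])
            return -1;                    /* a real kernel would return -ENOSYS */
        return syscall_table[num](a0, a1, a2);
    }

Because the dispatch is a single indexed call inside one address space, the cost of reaching a subsystem is essentially the cost of the trap itself.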
The monolithic Linux kernel can be made extremely small, not only because of its ability to dynamically load modules but also because of its ease of customization. In fact, there are some versions that are small enough to fit together with a large number of utilities and other programs on a single floppy disk and still provide a fully functional operating system (one of the most popular of which is muLinux). This ability to miniaturize its kernel has also led to a rapid growth in the use of Linux in embedded systems.
These types of kernels consist of the core functions of the operating system and the device drivers, with the ability to load modules at runtime. They provide rich and powerful abstractions of the underlying hardware. They provide a small set of simple hardware abstractions and use applications called servers to provide more functionality. This particular approach defines a high-level virtual interface over the hardware, with a set of system calls to implement operating system services such as process management, concurrency and memory management in several modules that run in supervisor mode. This design has several flaws and limitations:

• Coding in kernel can be challenging, in part because one cannot use common libraries (like a full-featured libc), and because one needs to use a source-level debugger like gdb. Rebooting the computer is often required. This is not just a problem of convenience to the developers. When debugging is harder, and as difficulties become stronger, it becomes more likely that code will be “buggier”.
• Bugs in one part of the kernel have strong side effects; since every function in the kernel has all the privileges, a bug in one function can corrupt the data structure of another, totally unrelated part of the kernel, or of any running program.
• Kernels often become very large and difficult to maintain.
• Even if the modules servicing these operations are separate from the whole, the code integration is tight and difficult to do correctly.
• Since the modules run in the same address space, a bug can bring down the entire system.
• Monolithic kernels are not portable; therefore, they must be rewritten for each new architecture that the operating system is to be used on.

7.3.2 Microkernels

Main article: Microkernel

[Figure: In the microkernel approach, the kernel itself only provides basic functionality that allows the execution of servers, separate programs that assume former kernel functions, such as device drivers, GUI servers, etc.]

Microkernel (also abbreviated μK or uK) is the term describing an approach to operating system design by which the functionality of the system is moved out of the traditional “kernel”, into a set of “servers” that communicate through a “minimal” kernel, leaving as little as possible in “system space” and as much as possible in “user space”. A microkernel that is designed for a specific platform or device is only ever going to have what it needs to operate. The microkernel approach consists of defining a simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as memory management, multitasking, and inter-process communication. Other services, including those normally provided by the kernel, such as networking, are implemented in user-space programs referred to as servers. Microkernels are easier to maintain than monolithic kernels, but the large number of system calls and context switches might slow down the system because they typically generate more overhead than plain function calls.
Only the parts which really require being in a privileged mode are in kernel space: IPC (inter-process communication), a basic scheduler or scheduling primitives, basic memory handling, and basic I/O primitives. Many critical parts now run in user space: the complete scheduler, memory handling, file systems, and network stacks. Microkernels were invented as a reaction to traditional “monolithic” kernel design, whereby all system functionality was put in one static program running in a special “system” mode of the processor. In the microkernel, only the most fundamental of tasks are performed, such as being able to access some (not necessarily all) of the hardware, manage memory and coordinate message passing between the processes. Some systems that use microkernels are QNX and the HURD. In the case of QNX and Hurd, user sessions can be entire snapshots of the system itself, or "views" as they are referred to. The very essence of the microkernel architecture illustrates some of its advantages:

• Maintenance is generally easier.
• Patches can be tested in a separate instance, and then swapped in to take over a production instance.
• Rapid development time: new software can be tested without having to reboot the kernel.
• More persistence in general: if one instance goes haywire, it is often possible to substitute it with an operational mirror.

Most microkernels use a message passing system of some sort to handle requests from one server to another. The message passing system generally operates on a port basis with the microkernel. As an example, if a request for more memory is sent, a port is opened with the microkernel and the request sent through. Once within the microkernel, the steps are similar to system calls.
The rationale was that it would bring modularity to the system architecture, which would entail a cleaner system, easier to debug or dynamically modify, customizable to users' needs, and better performing. Microkernels are part of operating systems such as AIX, BeOS, Hurd, Mach, macOS, MINIX and QNX. Although microkernels are very small by themselves, in combination with all their required auxiliary code they are, in fact, often larger than monolithic kernels. Advocates of monolithic kernels also point out that the two-tiered structure of microkernel systems, in which most of the operating system does not interact directly with the hardware, creates a not-insignificant cost in terms of system efficiency. These types of kernels normally provide only minimal services such as defining memory address spaces, inter-process communication (IPC) and process management. Other functions, such as running the hardware processes, are not handled directly by microkernels. Proponents of microkernels point out that monolithic kernels have the disadvantage that an error in the kernel can cause the entire system to crash. With a microkernel, however, if a kernel process crashes, it is still possible to prevent a crash of the system as a whole by merely restarting the service that caused the error.
Other services provided by the kernel, such as networking, are implemented in user-space programs referred to as servers. Servers allow the operating system to be modified by simply starting and stopping programs. For a machine without networking support, for instance, the networking server is not started. The task of moving in and out of the kernel to move data between the various applications and servers creates overhead which is detrimental to the efficiency of microkernels in comparison with monolithic kernels.
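What “sending a request through a port” looks like from a server's point of view is sketched below. The types and entry points are invented for illustration only and do not correspond to any particular microkernel's API (Mach, QNX and L4 each define their own primitives):

    #include <stddef.h>
    #include <stdint.h>

    typedef uint32_t port_t;                /* handle to a kernel-managed port    */

    struct message {
        uint32_t sender;                    /* filled in by the kernel            */
        uint32_t op;                        /* e.g. MEM_ALLOC, FS_READ, NET_SEND  */
        size_t   len;                       /* bytes used in payload              */
        uint8_t  payload[256];              /* small inline payload, copied       */
    };

    /* Hypothetical kernel entry points: each one traps into the microkernel,
     * which copies the message to the server owning the destination port and,
     * for a receive, blocks until something arrives. */
    int msg_send(port_t dest, const struct message *m);
    int msg_receive(port_t own_port, struct message *out);

    /* A memory server's main loop would then be little more than: */
    void memory_server_loop(port_t own_port)
    {
        struct message req;
        while (msg_receive(own_port, &req) == 0) {
            /* ... handle req.op, then msg_send() a reply to req.sender ... */
        }
    }

Every hop in such a chain is a kernel entry plus a message copy, which is exactly the overhead the surrounding text describes.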
Disadvantages of the microkernel exist, however. Some are:

• Larger running memory footprint.
• More software for interfacing is required, so there is a potential for performance loss.
• Messaging bugs can be harder to fix due to the longer trip they have to take versus the one-off copy in a monolithic kernel.
• Process management in general can be very complicated.

The disadvantages of microkernels are extremely context-dependent. As an example, they work well for small single-purpose (and critical) systems because, if not many processes need to run, the complications of process management are effectively mitigated.
A microkernel allows the implementation of the remaining part of the operating system as a normal application program written in a high-level language, and the use of different operating systems on top of the same unchanged kernel.[21] It is also possible to dynamically switch among operating systems and to have more than one active simultaneously.[21]

7.3.3 Monolithic kernels vs. microkernels

As the computer kernel grows, so grows the size and vulnerability of its trusted computing base; and, besides reducing security, there is the problem of enlarging the memory footprint. This is mitigated to some degree by perfecting the virtual memory system, but not all computer architectures have virtual memory support.[31] To reduce the kernel's footprint, extensive editing has to be performed to carefully remove unneeded code, which can be very difficult with non-obvious interdependencies between parts of a kernel with millions of lines of code.
By the early 1990s, due to the various shortcomings of monolithic kernels versus microkernels, monolithic kernels were considered obsolete by virtually all operating system researchers. As a result, the design of Linux as a monolithic kernel rather than a microkernel was the topic of a famous debate between Linus Torvalds and Andrew Tanenbaum.[32] There is merit on both sides of the argument presented in the Tanenbaum–Torvalds debate.

Performance

Monolithic kernels are designed to have all of their code in the same address space (kernel space), which some developers argue is necessary to increase the performance of the system.[33] Some developers also maintain that monolithic systems are extremely efficient if well written.[33] The monolithic model tends to be more efficient[34] through the use of shared kernel memory, rather than the slower IPC system of microkernel designs, which is typically based on message passing.
The performance of microkernels was poor in both the 1980s and early 1990s.[35][36] However, studies that empirically measured the performance of these microkernels did not analyze the reasons for such inefficiency.[35] The explanations of this data were left to “folklore”, with the assumption that they were due to the increased frequency of switches from “kernel-mode” to “user-mode”,[35] to the increased frequency of inter-process communication[35] and to the increased frequency of context switches.[35]
In fact, as conjectured in 1995, the reasons for the poor performance of microkernels might as well have been: (1) an actual inefficiency of the whole microkernel approach, (2) the particular concepts implemented in those microkernels, and (3) the particular implementation of those concepts.[35] Therefore it remained to be studied whether the solution to building an efficient microkernel was, unlike previous attempts, to apply the correct construction techniques.[35]
On the other hand, the hierarchical protection domains architecture that leads to the design of a monolithic kernel[29] has a significant performance drawback each time there is an interaction between different levels of protection (i.e. when a process has to manipulate a data structure both in 'user mode' and 'supervisor mode'), since this requires message copying by value.[37]
By the mid-1990s, most researchers had abandoned the belief that careful tuning could reduce this overhead dramatically, but recently, newer microkernels optimized for performance, such as L4[38] and K42, have addressed these problems.
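The overhead being argued about is easy to get a feel for from user space. The rough sketch below (Linux assumed; results vary widely by CPU, kernel version and mitigations) times a trivial kernel entry, the cost that message-passing designs pay on every hop:

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        enum { N = 1000000 };
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < N; i++)
            syscall(SYS_getpid);          /* forces a real user/kernel round trip */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9
                  + (end.tv_nsec - start.tv_nsec);
        printf("about %.0f ns per kernel entry\n", ns / N);
        return 0;
    }

A plain function call on the same machine costs a few nanoseconds at most, which is why reducing the number and cost of such crossings was central to the L4 line of work.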
7.3.4 Hybrid (or modular) kernels

Main article: Hybrid kernel

[Figure: The hybrid kernel approach combines the speed and simpler design of a monolithic kernel with the modularity and execution safety of a microkernel.]

Hybrid kernels are used in most commercial operating systems such as Microsoft Windows NT 3.1, NT 3.5, NT 3.51, NT 4.0, 2000, XP, Vista, 7, 8, 8.1 and 10. Apple Inc's own macOS uses a hybrid kernel called XNU which is based upon code from Carnegie Mellon's Mach kernel and FreeBSD's monolithic kernel. Hybrid kernels are similar to microkernels, except they include some additional code in kernel-space to increase performance. These kernels represent a compromise that was implemented by some developers before it was demonstrated that pure microkernels can provide high performance. These types of kernels are extensions of microkernels with some properties of monolithic kernels. Unlike monolithic kernels, these types of kernels are unable to load modules at runtime on their own. Hybrid kernels are microkernels that have some “non-essential” code in kernel-space in order for the code to run more quickly than it would were it to be in user-space. Hybrid kernels are a compromise between the monolithic and microkernel designs. This implies running some services (such as the network stack or the filesystem) in kernel space to reduce the performance overhead of a traditional microkernel, but still running kernel code (such as device drivers) as servers in user space.
Many traditionally monolithic kernels are now at least adding (if not actively exploiting) the module capability. The most well known of these kernels is the Linux kernel. The modular kernel essentially can have parts of it that are built into the core kernel binary, or binaries that load into memory on demand. It is important to note that a code-tainted module has the potential to destabilize a running kernel. Many people become confused on this point when discussing microkernels. It is possible to write a driver for a microkernel in a completely separate memory space and test it before “going live”. When a kernel module is loaded, it accesses the monolithic portion's memory space by adding to it what it needs, therefore opening the doorway to possible pollution. A few advantages of the modular (or hybrid) kernel are:

• Faster development time for drivers that can operate from within modules. No reboot is required for testing (provided the kernel is not destabilized).
• On-demand capability, versus spending time recompiling a whole kernel for things like new drivers or subsystems.
• Faster integration of third party technology (related to development but pertinent unto itself nonetheless).

Modules, generally, communicate with the kernel using a module interface of some sort. The interface is generalized (although particular to a given operating system), so it is not always possible to use modules. Often the device drivers may need more flexibility than the module interface affords. Essentially, it is two system calls, and often the safety checks that only have to be done once in the monolithic kernel now may be done twice.
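On Linux, for example, that interface is the pair of init and exit entry points a module registers with the kernel. A minimal sketch is shown below; building it requires the kernel headers and the usual Kbuild makefile, and it is given here only to illustrate the shape of the interface:

    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    /* Called when the module is loaded (e.g. via insmod or modprobe). */
    static int __init hello_init(void)
    {
        pr_info("hello: module loaded\n");
        return 0;
    }

    /* Called when the module is removed (e.g. via rmmod). */
    static void __exit hello_exit(void)
    {
        pr_info("hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

Once loaded, the module's code runs with full kernel privileges in the same address space as the rest of the kernel, which is precisely why a faulty module can destabilize the whole system.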
Some of the disadvantages of the modular approach are:

• With more interfaces to pass through, the possibility of increased bugs exists (which implies more security holes).
• Maintaining modules can be confusing for some administrators when dealing with problems like symbol differences.

7.3.5 Nanokernels

Main article: Nanokernel

A nanokernel delegates virtually all services – including even the most basic ones like interrupt controllers or the timer – to device drivers to make the kernel memory requirement even smaller than a traditional microkernel.[39]

7.3.6 Exokernels

Main article: Exokernel

Exokernels are a still-experimental approach to operating system design. They differ from the other types of kernels in that their functionality is limited to the protection and multiplexing of the raw hardware, providing no hardware abstractions on top of which to develop applications. This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program.
Exokernels in themselves are extremely small. However, they are accompanied by library operating systems (see also unikernel), providing application developers with the functionalities of a conventional operating system. A major advantage of exokernel-based systems is that they can incorporate multiple library operating systems, each exporting a different API, for example one for high-level UI development and one for real-time control.

7.4 History of kernel development

7.4.1 Early operating system kernels

Main article: History of operating systems

Strictly speaking, an operating system (and thus, a kernel) is not required to run a computer. Programs can be directly loaded and executed on the “bare metal” machine, provided that the authors of those programs are willing to work without any hardware abstraction or operating system support. Most early computers operated this way during the 1950s and early 1960s; they were reset and reloaded between the execution of different programs. Eventually, small ancillary programs such as program loaders and debuggers were left in memory between runs, or loaded from ROM. As these were developed, they formed the basis of what became early operating system kernels. The “bare metal” approach is still used today on some video game consoles and embedded systems,[40] but in general, newer computers use modern operating systems and kernels.
In 1969, the RC 4000 Multiprogramming System introduced the system design philosophy of a small nucleus “upon which operating systems for different purposes could be built in an orderly manner”,[41] what would be called the microkernel approach.

7.4.2 Time-sharing operating systems

Main article: Time-sharing

In the decade preceding Unix, computers had grown enormously in power – to the point where computer operators were looking for new ways to get people to use their spare time on their machines. One of the major developments during this era was time-sharing, whereby a number of users would get small slices of computer time, at a rate at which it appeared they were each connected to their own, slower, machine.[42]
The development of time-sharing systems led to a number of problems. One was that users, particularly at universities where the systems were being developed, seemed to want to hack the system to get more CPU time. For this reason, security and access control became a major focus of the Multics project in 1965.[43] Another ongoing issue was properly handling computing resources: users spent most of their time staring at the screen and thinking instead of actually using the resources of the computer, and a time-sharing system should give the CPU time to an active user during these periods. Finally, the systems typically offered a memory hierarchy several layers deep, and partitioning this expensive resource led to major developments in virtual memory systems.

7.4.3 Amiga

Main article: AmigaOS

The Commodore Amiga was released in 1985, and was among the first – and certainly most successful – home computers to feature an advanced kernel architecture. The AmigaOS kernel's executive component, exec.library, uses a microkernel message-passing design, but there are other kernel components, like graphics.library, that have direct access to the hardware. There is no memory protection, and the kernel is almost always running in user mode. Only special actions are executed in kernel mode, and user-mode applications can ask the operating system to execute their code in kernel mode.
7.4.4 Unix

Main article: Unix

[Figure: A diagram of the predecessor/successor family relationship for Unix-like systems]

During the design phase of Unix, programmers decided to model every high-level device as a file, because they believed the purpose of computation was data transformation.[44]
For instance, printers were represented as a “file” at a known location – when data was copied to the file, it printed out. Other systems, to provide a similar functionality, tended to virtualize devices at a lower level – that is, both devices and files would be instances of some lower-level concept. Virtualizing the system at the file level allowed users to manipulate the entire system using their existing file management utilities and concepts, dramatically simplifying operation. As an extension of the same paradigm, Unix allows programmers to manipulate files using a series of small programs, using the concept of pipes, which allowed users to complete operations in stages, feeding a file through a chain of single-purpose tools. Although the end result was the same, using smaller programs in this way dramatically increased flexibility as well as ease of development and use, allowing the user to modify their workflow by adding or removing a program from the chain.
In the Unix model, the operating system consists of two parts: first, the huge collection of utility programs that drive most operations; the other, the kernel that runs the programs.[44] Under Unix, from a programming standpoint, the distinction between the two is fairly thin; the kernel is a program, running in supervisor mode,[45] that acts as a program loader and supervisor for the small utility programs making up the rest of the system, and that provides locking and I/O services for these programs; beyond that, the kernel didn't intervene at all in user space.
Over the years the computing model changed, and Unix's treatment of everything as a file or byte stream was no longer as universally applicable as it was before. Although a terminal could be treated as a file or a byte stream, which is printed to or read from, the same did not seem to be true for a graphical user interface. Networking posed another problem. Even if network communication can be compared to file access, the low-level packet-oriented architecture dealt with discrete chunks of data and not with whole files. As the capability of computers grew, Unix became increasingly cluttered with code. It is also because the modularity of the Unix kernel is extensively scalable.[46] While kernels might have had 100,000 lines of code in the seventies and eighties, kernels of modern Unix successors like Linux have more than 13 million lines.[47]
Modern Unix-derivatives are generally based on module-loading monolithic kernels. Examples of this are the Linux kernel in its many distributions as well as the Berkeley software distribution variant kernels such as FreeBSD, DragonflyBSD, OpenBSD, NetBSD, and macOS. Apart from these alternatives, amateur developers maintain an active operating system development community, populated by self-written hobby kernels which mostly end up sharing many features with Linux, FreeBSD, DragonflyBSD, OpenBSD or NetBSD kernels and/or being compatible with them.[48]

7.4.5 Mac OS

Main articles: Classic Mac OS and macOS
Apple first launched its classic Mac OS in 1984, bundled with its Macintosh personal computer. Apple moved to a nanokernel design in Mac OS 8.6. Against this, the modern macOS (originally named Mac OS X) is based on Darwin, which uses a hybrid kernel called XNU, which was created by combining the 4.3BSD kernel and the Mach kernel.[49]

7.4.6 Microsoft Windows

Main article: History of Microsoft Windows

Microsoft Windows was first released in 1985 as an add-on to MS-DOS. Because of its dependence on another operating system, initial releases of Windows, prior to Windows 95, were considered an operating environment (not to be confused with an operating system). This product line continued to evolve through the 1980s and 1990s, culminating with release of the Windows 9x series (upgrading the system's capabilities to 32-bit addressing and pre-emptive multitasking) through the mid-1990s and ending with the release of Windows Me in 2000. Microsoft also developed Windows NT, an operating system intended for high-end and business users. This line started with the release of Windows NT 3.1 in 1993, and still continues with Windows 10.
The release of Windows XP in October 2001 brought the NT kernel version of Windows to general users, replacing Windows 9x with a completely different operating system. The architecture of Windows NT's kernel is considered a hybrid kernel because the kernel itself contains tasks such as the Window Manager and the IPC Managers, with a client/server layered subsystem model.[50]

7.4.7 Development of microkernels

Although Mach, developed at Carnegie Mellon University from 1985 to 1994, is the best-known general-purpose microkernel, other microkernels have been developed with more specific aims. The L4 microkernel family (mainly the L3 and the L4 kernel) was created to demonstrate that microkernels are not necessarily slow.[38] Newer implementations such as Fiasco and Pistachio are able to run Linux next to other L4 processes in separate address spaces.[51][52]
Additionally, QNX is a microkernel which is principally used in embedded systems.[53]

7.5 See also

• Comparison of operating system kernels
• Inter-process communication

7.6 Notes

[1] “Kernel”. Linfo. Bellevue Linux Users Group. Retrieved 15 September 2016.
[2] cf. Daemon (computing)
[3] Roch 2004
[4] Wulf 1974 pp.337–345
[5] Silberschatz 1991
[6] Tanenbaum, Andrew S. (2008). Modern Operating Systems (3rd ed.). Prentice Hall. pp. 50–51. ISBN 0-13-600663-9. “... nearly all system calls [are] invoked from C programs by calling a library procedure ... The library procedure ... executes a TRAP instruction to switch from user mode to kernel mode and start execution ...”
[7] Denning 1976
[8] Swift 2005, p.29 quote: “isolation, resource control, decision verification (checking), and error recovery.”
[9] Schroeder 72
[10] Linden 76
[11] Stephane Eranian and David Mosberger, Virtual Memory in the IA-64 Linux Kernel, Prentice Hall PTR, 2002
[12] Silberschatz & Galvin, Operating System Concepts, 4th ed., pp. 445 & 446
[13] Hoch, Charles; J. C. Browne (July 1980). “An implementation of capabilities on the PDP-11/45” (PDF). ACM SIGOPS Operating Systems Review. 14 (3): 22–32. doi:10.1145/850697.850701. Retrieved 2007-01-07.
[14] A Language-Based Approach to Security, Schneider F., Morrissett G. (Cornell University) and Harper R. (Carnegie Mellon University)
[15] P. A. Loscocco, S. D. Smalley, P. A. Muckelbauer, R. C. Taylor, S. J. Turner, and J. F. Farrell. The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments. In Proceedings of the 21st National Information Systems Security Conference, pages 303–314, Oct. 1998.
[16] J. Lepreau et al. The Persistent Relevance of the Local Operating System to Global Applications. Proceedings of the 7th ACM SIGOPS European Workshop. Information Security: An Integrated Collection of Essays, IEEE Comp. 1995.
[17] J. Anderson, Computer Security Technology Planning Study, Air Force Elect. Systems Div., ESD-TR-73-51, October 1972.
[18] Jerry H. Saltzer; Mike D. Schroeder (September 1975). “The protection of information in computer systems”. Proceedings of the IEEE. 63 (9): 1278–1308. doi:10.1109/PROC.1975.9939.
[19] Jonathan S. Shapiro; Jonathan M. Smith; David J. Farber (1999). “EROS: a fast capability system”. Proceedings of the seventeenth ACM symposium on Operating systems principles. 33 (5): 170–185. doi:10.1145/319344.319163.
[20] Dijkstra, E. W. Cooperating Sequential Processes. Math. Dep., Technological U., Eindhoven, Sept. 1965.
[21] Brinch Hansen 70 pp.238–241
[22] “SHARER, a time sharing system for the CDC 6600”. Retrieved 2007-01-07.
[23] “Dynamic Supervisors – their design and construction”. Retrieved 2007-01-07.
[24] Baiardi 1988
[25] Levin 75
[26] Denning 1980
[27] Jürgen Nehmer, The Immortality of Operating Systems, or: Is Research in Operating Systems still Justified? Lecture Notes in Computer Science, Vol. 563. Proceedings of the International Workshop on Operating Systems of the 90s and Beyond. pp. 77–83 (1991) ISBN 3-540-54987-0. Quote: “The past 25 years have shown that research on operating system architecture had a minor effect on existing main stream [sic] systems.”
[28] Levy 84, p.1 quote: “Although the complexity of computer applications increases yearly, the underlying hardware architecture for applications has remained unchanged for decades.”
[29] Levy 84, p.1 quote: “Conventional architectures support a single privileged mode of operation. This structure leads to monolithic design; any module needing protection must be part of the single operating system kernel. If, instead, any module could execute within a protected domain, systems could be built as a collection of independent modules extensible by any user.”
[30] Open Sources: Voices from the Open Source Revolution
[31] Virtual addressing is most commonly achieved through a built-in memory management unit.
[32] Recordings of the debate between Torvalds and Tanenbaum can be found at dina.dk, groups.google.com, oreilly.com and Andrew Tanenbaum's website
[33] Matthew Russell. “What Is Darwin (and How It Powers Mac OS X)”. O'Reilly Media. Quote: “The tightly coupled nature of a monolithic kernel allows it to make very efficient use of the underlying hardware [...] Microkernels, on the other hand, run a lot more of the core processes in userland. [...] Unfortunately, these benefits come at the cost of the microkernel having to pass a lot of information in and out of the kernel space through a process known as a context switch. Context switches introduce considerable overhead and therefore result in a performance penalty.”
[34] Operating Systems/Kernel Models
[35] Liedtke 95
[36] Härtig 97
[37] Hansen 73, section 7.3 p.233: “interactions between different levels of protection require transmission of messages by value”
[38] The L4 microkernel family – Overview
[39] KeyKOS Nanokernel Architecture
[40] Ball: Embedded Microprocessor Designs, p. 129
[41] Hansen 2001 (os), pp.17–18
[42] BSTJ version of C.ACM Unix paper
[43] Introduction and Overview of the Multics System, by F. J. Corbató and V. A. Vissotsky.
[44] “The Single Unix Specification”. The Open Group.
[45] The highest privilege level has various names throughout different architectures, such as supervisor mode, kernel mode, CPL0, DPL0, ring 0, etc. See Ring (computer security) for more information.
[46] Unix's Revenge by Horace Dediu
[47] Linux Kernel 2.6: It's Worth More!, by David A. Wheeler, October 12, 2004
[48] This community mostly gathers at Bona Fide OS Development, The Mega-Tokyo Message Board and other operating system enthusiast web sites.
[49] XNU: The Kernel
[50] Windows History: Windows Desktop Products History
[51] The Fiasco microkernel – Overview
[52] L4Ka – The L4 microkernel family and friends
[53] QNX Realtime Operating System Overview

7.7 References

• Roch, Benjamin (2004). “Monolithic kernel vs. Microkernel” (PDF). Archived from the original (PDF) on 2006-11-01. Retrieved 2006-10-12.
• Silberschatz, Abraham; James L. Peterson; Peter B. Galvin (1991). Operating system concepts. Boston, Massachusetts: Addison-Wesley. p. 696. ISBN 0-201-51379-X.
• Ball, Stuart R. (2002) [2002]. Embedded Microprocessor Systems: Real World Designs (first ed.). Elsevier Science. ISBN 0-7506-7534-9.
• Deitel, Harvey M. (1984) [1982]. An introduction to operating systems (revisited first ed.). Addison-Wesley. p. 673. ISBN 0-201-14502-2.
• Denning, Peter J. (December 1976). “Fault tolerant operating systems”. ACM Computing Surveys. 8 (4): 359–389. ISSN 0360-0300. doi:10.1145/356678.356680.
• Denning, Peter J. (April 1980). “Why not innovations in computer architecture?”. ACM SIGARCH Computer Architecture News. 8 (2): 4–7. ISSN 0163-5964. doi:10.1145/859504.859506.
• Hansen, Per Brinch (April 1970). “The nucleus of a Multiprogramming System”. Communications of the ACM. 13 (4): 238–241. ISSN 0001-0782. doi:10.1145/362258.362278.
• Hansen, Per Brinch (1973). Operating System Principles. Englewood Cliffs: Prentice Hall. p. 496. ISBN 0-13-637843-9.
• Hansen, Per Brinch (2001). “The evolution of operating systems” (PDF). Retrieved 2006-10-24. Included in: Per Brinch Hansen, ed. (2001). Classic operating systems: from batch processing to distributed systems. New York: Springer-Verlag. pp. 1–36. ISBN 0-387-95113-X.
• Härtig, Hermann; Hohmuth, Michael; Liedtke, Jochen; Schönberg, Sebastian; Wolter, Jean (1997). “The performance of μ-kernel-based systems”. Proceedings of the sixteenth ACM symposium on Operating systems principles – SOSP '97. p. 66. ISBN 0897919165. doi:10.1145/268998.266660. ACM SIGOPS Operating Systems Review, v.31 n.5, pp. 66–77, Dec. 1997.
• Houdek, M. E., Soltis, F. G., and Hoffman, R. L. (1981). IBM System/38 support for capability-based addressing. In Proceedings of the 8th ACM International Symposium on Computer Architecture. ACM/IEEE, pp. 341–348.
• Intel Corporation (2002). The IA-32 Architecture Software Developer's Manual, Volume 1: Basic Architecture.
• Levin, R.; Cohen, E.; Corwin, W.; Pollack, F.; Wulf, William (1975). “Policy/mechanism separation in Hydra”. Proceedings of the fifth ACM symposium on Operating systems principles. 9 (5): 132–140. doi:10.1145/1067629.806531.
• Levy, Henry M. (1984). Capability-based computer systems. Maynard, Mass.: Digital Press. ISBN 0-932376-22-3.
• Liedtke, Jochen. On µ-Kernel Construction, Proc. 15th ACM Symposium on Operating System Principles (SOSP), December 1995.
• Linden, Theodore A. (December 1976). “Operating System Structures to Support Security and Reliable Software”. ACM Computing Surveys. 8 (4): 409–445. ISSN 0360-0300. doi:10.1145/356678.356682. “Operating System Structures to Support Security and Reliable Software” (PDF). Retrieved 2010-06-19.
• Lorin, Harold (1981). Operating systems. Boston, Massachusetts: Addison-Wesley. pp. 161–186. ISBN 0-201-14464-6.
• Schroeder, Michael D.; Jerome H. Saltzer (March 1972). “A hardware architecture for implementing protection rings”. Communications of the ACM. 15 (3): 157–170. ISSN 0001-0782. doi:10.1145/361268.361275.
• Shaw, Alan C. (1974). The logical design of Operating systems. Prentice-Hall. p. 304. ISBN 0-13-540112-7.
• Tanenbaum, Andrew S. (1979). Structured Computer Organization. Englewood Cliffs, New Jersey: Prentice-Hall. ISBN 0-13-148521-0.
• Wulf, W.; E. Cohen; W. Corwin; A. Jones; R. Levin; C. Pierson; F. Pollack (June 1974). “HYDRA: the kernel of a multiprocessor operating system” (PDF). Communications of the ACM. 17 (6): 337–345. ISSN 0001-0782. doi:10.1145/355616.364017.
• Baiardi, F.; A. Tomasi; M. Vanneschi (1988). Architettura dei Sistemi di Elaborazione, volume 1 (in Italian). Franco Angeli. ISBN 88-204-2746-X.
• Swift, Michael M.; Brian N. Bershad; Henry M. Levy. Improving the reliability of commodity operating systems (PDF). “Improving the reliability of commodity operating systems”. Doi.acm.org. doi:10.1002/spe.4380201404. Retrieved 2010-06-19. ACM Transactions on Computer Systems (TOCS), v.23 n.1, pp. 77–110, February 2005.
7.8 Further reading


• Andrew Tanenbaum, Operating Systems – Design and Implementation (Third edition);

• Andrew Tanenbaum, Modern Operating Systems (Second edition);

• Daniel P. Bovet, Marco Cesati, The Linux Kernel;

• David A. Patterson, John L. Hennessy, Computer Organization and Design, Morgan Kaufmann (ISBN 1-55860-428-6);

• B. S. Chalk, Computer Organisation and Architecture, Macmillan (ISBN 0-333-64551-0).

7.9 External links


• Detailed comparison between most popular operating system kernels
Chapter 8

Kernel-based Virtual Machine

Kernel-based Virtual Machine (KVM) is a virtualization infrastructure for the Linux kernel that turns it into a hypervisor. It was merged into the Linux kernel mainline in kernel version 2.6.20, which was released on February 5, 2007.[1] KVM requires a processor with hardware virtualization extensions.[2] KVM has also been ported to FreeBSD[3] and illumos[4] in the form of loadable kernel modules.

KVM originally supported x86 processors and has been ported to S/390,[5] PowerPC,[6] and IA-64. An ARM port was merged during the 3.9 kernel merge window.[7]

A wide variety of guest operating systems work with KVM, including many flavours and versions of Linux, BSD, Solaris, Windows, Haiku, ReactOS, Plan 9, AROS Research Operating System[8] and OS X.[9] In addition, Android 2.2, GNU/Hurd[10] (Debian K16), Minix 3.1.2a, Solaris 10 U3 and Darwin 8.0.1, together with other operating systems and some newer versions of these listed, are known to work with certain limitations.[11]
Paravirtualization support for certain devices is available for Linux, OpenBSD,[12] FreeBSD,[13] NetBSD,[14] Plan 9[15] and Windows guests using the VirtIO API.[16] This supports a paravirtual Ethernet card, a paravirtual disk I/O controller,[17] a balloon device for adjusting guest memory usage, and a VGA graphics interface using SPICE or VMware drivers.

[Figure: A high-level overview of the KVM/QEMU virtualization environment[18]:3]

8.1 Internals

By itself, KVM does not perform any emulation. Instead, it exposes the /dev/kvm interface, which a userspace host can then use to:

• Set up the guest VM’s address space. The host must also supply a firmware image (usually a custom BIOS when emulating PCs) that the guest can use to bootstrap into its main OS.
• Feed the guest simulated I/O.
• Map the guest’s video display back onto the host.

On Linux, QEMU versions 0.10.1 and later are one such userspace host. QEMU uses KVM when available to virtualize guests at near-native speeds, but otherwise falls back to software-only emulation. A minimal sketch of this ioctl-based interface follows below.

Internally, KVM uses SeaBIOS as an open source implementation of a 16-bit x86 BIOS.[19]
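The fragment below is a minimal, illustrative sketch of how a userspace host talks to the /dev/kvm interface described above: it only checks the API version and creates an empty VM and vCPU, and deliberately omits the error handling, guest-memory registration and run loop that a real host such as QEMU performs.

    /* Minimal sketch of the /dev/kvm ioctl interface (illustrative only). */
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) {
            perror("open /dev/kvm");
            return 1;
        }

        /* The KVM API version has been stable at 12 since kernel 2.6.22. */
        int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
        printf("KVM API version: %d\n", version);

        /* Create a VM, then a single virtual CPU inside it. */
        int vm   = ioctl(kvm, KVM_CREATE_VM, 0);
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

        /* A real userspace host would now register guest memory with
         * KVM_SET_USER_MEMORY_REGION, load a firmware image into it,
         * mmap the per-vCPU kvm_run structure and drive the guest
         * with repeated KVM_RUN ioctls. */
        close(vcpu);
        close(vm);
        close(kvm);
        return 0;
    }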

8.2 Licensing

KVM’s parts are licensed under various GNU licenses:[20]

• KVM kernel module: GPL v2
• KVM user module: LGPL v2
• QEMU virtual CPU core library (libqemu.a) and QEMU PC system emulator: LGPL
• Linux user mode QEMU emulator: GPL
• BIOS files (bios.bin, vgabios.bin and vgabios-cirrus.bin): LGPL v2 or later

8.3 History

Avi Kivity began the development of KVM at Qumranet, a technology startup company[21] that was acquired by Red Hat in 2008.[22]

KVM was merged into the Linux kernel mainline in kernel version 2.6.20, which was released on 5 February 2007.[1]

KVM is maintained by Paolo Bonzini.[23]

8.4 Graphical management tools

[Figure: libvirt supports KVM – management clients such as virsh, virt-manager, OpenStack and oVirt use the libvirt virtualization API, which in turn can drive KVM, LXC, OpenVZ, UML, Xen, ESX and other hypervisors.]

• Kimchi – web-based virtualization management tool for KVM
• Virtual Machine Manager – supports creating, editing, starting, and stopping KVM-based virtual machines, as well as live or cold drag-and-drop migration of VMs between hosts.
• Proxmox Virtual Environment – an open-source virtualization management package including KVM and LXC. It has a bare-metal installer, a web-based remote management GUI, a HA cluster stack, unified storage, flexible network, and optional commercial support.
• OpenQRM – management platform for managing heterogeneous data center infrastructures.
• GNOME Boxes – Gnome interface for managing libvirt guests on Linux.
• oVirt – open-source virtualization management tool for KVM built on top of libvirt
• ArchivistaMini

8.5 Emulated hardware

8.6 Implementations

• Debian 5.0 and above
• Gentoo Linux
• illumos-based distributions
• OpenIndiana
• Red Hat Enterprise Linux (RHEL) 5.4 and above and Red Hat Virtualization
• SmartOS
• SUSE Linux Enterprise Server (SLES) 11 SP1 and above
• Ubuntu 10.04 LTS and above
• Univention Corporate Server

8.7 See also

• CloudStack
• Comparison of platform virtualization software
• Kernel same-page merging (KSM)
• Lguest
• libguestfs
• libvirt
• Open Virtualization Alliance
• OpenNebula
• OpenStack
• oVirt
• Vx32
• Xen
8.8 References

[1] “Linux kernel 2.6.20, Section 2.2. Virtualization support through KVM”. kernelnewbies.org. 2007-02-05. Retrieved 2014-06-16.
[2] KVM FAQ: What do I need to use KVM?
[3] “FreeBSD Quarterly Status Report: Porting Linux KVM to FreeBSD”.
[4] “KVM on illumos”. lwn.net. Retrieved 2017-02-10.
[5] Gmane – Mail To News And Back Again
[6] Gmane Loom
[7] KVM/ARM Open Source Project
[8] “KVM wiki: Guest support status”. Retrieved 2007-05-27.
[9] “Running Mac OS X as a QEMU/KVM Guest”. Retrieved 2014-08-20.
[10] “status”. Gnu.org. Retrieved 2014-02-12.
[11] “Guest Support Status - KVM”. Linux-kvm.org. Retrieved 2014-02-12.
[12] “OpenBSD man page virtio(4)". Retrieved 2013-07-15.
[13] “virtio binary packages for FreeBSD”. Retrieved 2012-10-29.
[14] “NetBSD man page virtio(4)". Retrieved 2013-07-15.
[15] “plan9front”. Retrieved 2013-02-11.
[16] “An API for virtual I/O: virtio”. LWN.net. 2007-07-11. Retrieved 2014-04-16.
[17] “SCSI target for KVM wiki”. linux-iscsi.org. 2012-08-07. Retrieved 2012-08-12.
[18] Khoa Huynh; Stefan Hajnoczi (2010). “KVM/QEMU Storage Stack Performance Discussion” (PDF). ibm.com. Linux Plumbers Conference. Retrieved January 3, 2015.
[19] “SeaBIOS”. seabios.org. 2013-12-21. Retrieved 2014-06-16.
[20] Licensing info from Ubuntu 7.04, /usr/share/doc/kvm/copyright
[21] Interview: Avi Kivity, on KernelTrap. Archived 2007-04-26 at the Wayback Machine.
[22] “Red Hat Advances Virtualization Leadership with Qumranet, Inc. Acquisition”. Red Hat. 4 September 2008. Retrieved 16 June 2015.
[23] Libby Clark (7 April 2015). “Git Success Stories and Tips from KVM Maintainer Paolo Bonzini”. Linux.com. Retrieved 17 June 2015.
[24] wiki.qemu.org – QEMU Emulator User Documentation, read 2010-05-06
[25] “Introducing Virgil - 3D virtual GPU for qemu”. 2013-07-18. Archived from the original on 2013-07-25.

8.9 Bibliography

• Amit Shah (2016-11-02). “Ten years of KVM”.

8.10 External links

• Official website
• Best practices for the Kernel-based Virtual Machine, IBM, second edition, April 2012
• Virtio-blk Performance Improvement, KVM Forum 2012, November 8, 2012, by Asias He
• Wikibook QEMU & KVM
Chapter 9

QEMU

QEMU (short for Quick Emulator) is a free and open-source hosted hypervisor that performs hardware virtualization (not to be confused with hardware-assisted virtualization).

QEMU is a hosted virtual machine monitor: it emulates CPUs through dynamic binary translation and provides a set of device models, enabling it to run a variety of unmodified guest operating systems. It can also be used together with KVM in order to run virtual machines at near-native speed (requiring hardware virtualization extensions on x86 machines). QEMU can also do CPU emulation for user-level processes, allowing applications compiled for one architecture to run on another.

9.1 Licensing

QEMU was written by Fabrice Bellard and is free software, mainly licensed under the GNU General Public License (GPL). Various parts are released under the BSD license, GNU Lesser General Public License (LGPL) or other GPL-compatible licenses.[2] There is an option to use the proprietary FMOD library when running on Microsoft Windows, which, if used, disqualifies the use of a single open source software license. However, the default is to use DirectSound.

9.2 Operating modes

QEMU has multiple operating modes:[3]

• User-mode emulation – In this mode QEMU runs single Linux or Darwin/macOS programs that were compiled for a different instruction set. System calls are thunked for endianness and for 32/64 bit mismatches. Fast cross-compilation and cross-debugging are the main targets for user-mode emulation.
• System emulation – In this mode QEMU emulates a full computer system, including peripherals. It can be used to provide virtual hosting of several virtual computers on a single computer. QEMU can boot many guest operating systems, including Linux, Solaris, Microsoft Windows, DOS, and BSD;[4] it supports emulating several instruction sets, including x86, MIPS, 32-bit ARMv7, ARMv8, PowerPC, SPARC, ETRAX CRIS and MicroBlaze.
• KVM Hosting – Here QEMU deals with the setting up and migration of KVM images. It is still involved in the emulation of hardware, but the execution of the guest is done by KVM as requested by QEMU.
• Xen Hosting – QEMU is involved only in the emulation of hardware; the execution of the guest is done within Xen and is totally hidden from QEMU.
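As an illustration of the first three modes, the following command lines are typical invocations (the binary names are those shipped by most Linux distributions; the guest program and disk image names are placeholders):

    # User-mode emulation: run an ARM Linux binary on a different host architecture
    qemu-arm ./hello-arm

    # Full system emulation using pure dynamic translation (TCG)
    qemu-system-x86_64 -m 1024 -hda guest-disk.qcow2

    # The same system emulation, accelerated by KVM on a capable host
    qemu-system-x86_64 -enable-kvm -m 1024 -hda guest-disk.qcow2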
9.3 Features

QEMU can save and restore the state of the virtual machine with all programs running. Guest operating systems do not need patching in order to run inside QEMU.

QEMU supports the emulation of various architectures, including:

• IA-32 (x86) PCs
• x86-64 PCs
• MIPS64 Release 6[5] and earlier variants
• Sun’s SPARC sun4m
• Sun’s SPARC sun4u
• ARM development boards (Integrator/CP and Versatile/PB)
• SH4 SHIX board
• PowerPC (PReP and Power Macintosh)
• ETRAX CRIS
• MicroBlaze

The virtual machine can interface with many types of physical host hardware, including hard disks, CD-ROM drives, network cards, audio interfaces, and USB devices. USB devices can be completely emulated (mass storage from image files, input devices), or the host’s USB devices can be used (however, this requires administrator privileges and does not work with all devices).

Virtual disk images can be stored in a special format (qcow or qcow2) that only takes up as much disk space as the guest OS actually uses. This way, an emulated 120 GB disk may occupy only a few hundred megabytes on the host. The QCOW2 format also allows the creation of overlay images that record the differences from another (unmodified) base image file. This provides the possibility of reverting the emulated disk’s contents to an earlier state. For example, a base image could hold a fresh install of an operating system that is known to work, and the overlay images are used on top of it. Should the guest system become unusable (through virus attack, accidental system destruction, etc.), the user can delete the overlay and reconstruct an earlier emulated disk-image version; a command-line sketch of this follows below.
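For instance, an overlay can be created and later discarded with qemu-img; the file names here are placeholders, and recent qemu-img versions may additionally require the backing format to be spelled out with -F:

    # Create a copy-on-write overlay on top of an existing base image
    qemu-img create -f qcow2 -b base.qcow2 overlay.qcow2

    # Boot the guest from the overlay; base.qcow2 stays unmodified
    qemu-system-x86_64 -m 1024 -hda overlay.qcow2

    # Revert: delete the overlay and create a fresh one
    rm overlay.qcow2 && qemu-img create -f qcow2 -b base.qcow2 overlay.qcow2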
QEMU can emulate network cards (of different models) which share the host system’s connectivity by doing network address translation, effectively allowing the guest to use the same network as the host. The virtual network cards can also connect to network cards of other instances of QEMU or to local TAP interfaces. Network connectivity can also be achieved by bridging a TUN/TAP interface used by QEMU with a non-virtual Ethernet interface on the host OS, using the host OS’s bridging features.

QEMU integrates several services to allow the host and guest systems to communicate, for example an integrated SMB server and network-port redirection (to allow incoming connections to the virtual machine); an example invocation is shown below. It can also boot Linux kernels without a bootloader.
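A sketch of the user-mode (NAT) networking described above, with a port redirection that forwards host port 2222 to the guest’s SSH port; the emulated NIC model, image name and port numbers are arbitrary choices for illustration:

    qemu-system-x86_64 -m 1024 -hda guest-disk.qcow2 \
        -netdev user,id=net0,hostfwd=tcp::2222-:22 \
        -device e1000,netdev=net0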
QEMU does not depend on the presence of graphical output methods on the host system. Instead, it can allow one to access the screen of the guest OS via an integrated VNC server. It can also use an emulated serial line, without any screen, with applicable operating systems.

Simulating multiple CPUs running SMP is possible.

QEMU does not require administrative rights to run, unless additional kernel modules for improving speed are used (like KQEMU), or when some modes of its network connectivity model are utilized.

9.3.1 Tiny Code Generator

The Tiny Code Generator (TCG) aims to remove the shortcoming of relying on a particular version of GCC or any compiler, instead incorporating the compiler (code generator) into other tasks performed by QEMU at run time. The whole translation task thus consists of two parts: blocks of target code (TBs) are rewritten in TCG ops, a kind of machine-independent intermediate notation, and this notation is subsequently compiled for the host’s architecture by TCG. Optional optimisation passes are performed between them.

TCG requires dedicated code written to support every architecture it runs on. It also requires that the target instruction translation be rewritten to take advantage of TCG ops, instead of the previously used dyngen ops.

Starting with QEMU version 0.10.0, TCG ships with the QEMU stable release.[6]

9.3.2 Accelerator

KQEMU was a Linux kernel module, also written by Fabrice Bellard, which notably sped up emulation of x86 or x86-64 guests on platforms with the same CPU architecture. It worked by running user-mode code (and optionally some kernel code) directly on the host computer’s CPU, and by using processor and peripheral emulation only for kernel-mode and real-mode code. KQEMU could execute code from many guest OSes even if the host CPU did not support hardware-assisted virtualization. KQEMU was initially a closed-source product available free of charge, but starting from version 1.3.0pre10,[7] it was relicensed under the GNU General Public License. QEMU versions starting with 0.12.0 (as of August 2009) support large memory, which makes them incompatible with KQEMU.[8] Newer releases of QEMU have completely removed support for KQEMU.

QVM86 was a GNU GPLv2 licensed drop-in replacement for the then closed-source KQEMU. The developers of QVM86 ceased development in January 2007.

Kernel-based Virtual Machine (KVM) has mostly taken over as the Linux-based hardware-assisted virtualization solution for use with QEMU in the wake of the lack of support for KQEMU and QVM86.

Intel’s Hardware Accelerated Execution Manager (HAXM) is a cost-free (but not open-source) alternative to KVM for x86-based hardware-assisted virtualization on Windows and macOS. As of 2013 Intel mostly solicits its use with QEMU for Android development.[9] Starting with version 2.9.0, the official QEMU includes support for HAXM.
9.3.3 Supported disk image formats

QEMU supports the following disk image formats[10] (a conversion example follows the list):

• macOS Universal Disk Image Format (.dmg) – Read-only
• Bochs – Read-only
• Linux cloop – Read-only
• Parallels disk image (.hdd, .hds) – Read-only
• QEMU copy-on-write (.qcow2, .qed, .qcow, .cow)
• VirtualBox Virtual Disk Image (.vdi)
• Virtual PC Virtual Hard Disk (.vhd)
• Virtual VFAT
• VMware Virtual Machine Disk (.vmdk)
• Raw images (.img) that contain sector-by-sector contents of a disk
• CD/DVD images (.iso) that contain sector-by-sector contents of an optical disk (e.g. booting live OSes[11])
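As an illustration, qemu-img can convert between several of these formats; the file names are placeholders:

    # Convert a VMware disk image to qcow2
    qemu-img convert -f vmdk -O qcow2 appliance.vmdk appliance.qcow2

    # Inspect the resulting image
    qemu-img info appliance.qcow2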
9.4 Hardware-assisted emulation

The MIPS-compatible Loongson-3 processor adds 200 new instructions to help QEMU translate x86 instructions; those new instructions lower the overhead of executing x86/CISC-style instructions in the MIPS pipeline. With additional improvements in QEMU by the Chinese Academy of Sciences, Loongson-3 achieves an average of 70% of the performance of executing native binaries while running x86 binaries from nine benchmarks.[12]

9.5 Parallel emulation

Virtualization solutions that use QEMU are able to execute multiple virtual CPUs in parallel. For user-mode emulation QEMU maps emulated threads to host threads. For full system emulation QEMU is capable of running a host thread for each emulated virtual CPU (vCPU). This is dependent on the guest having been updated to support parallel system emulation, currently ARM and Alpha. Otherwise a single thread is used to emulate all virtual CPUs (vCPUs), executing each vCPU in a round-robin manner.

9.6 Integration

9.6.1 VirtualBox

VirtualBox, released in January 2007, uses some of QEMU’s virtual hardware devices, and has a built-in dynamic recompiler based on QEMU. As with KQEMU, VirtualBox runs nearly all guest code natively on the host via the VMM (Virtual Machine Manager) and uses the recompiler only as a fallback mechanism, for example when guest code executes in real mode.[13] In addition, VirtualBox does a lot of code analysis and patching using a built-in disassembler in order to minimize recompilation. VirtualBox is free and open-source (available under GPL), except for certain features.

9.6.2 Xen-HVM

Xen, a virtual machine monitor, can run in HVM (hardware virtual machine) mode, using Intel VT-x or AMD-V hardware x86 virtualization extensions and the ARM Cortex-A7 and Cortex-A15 virtualization extensions.[14] This means that instead of paravirtualized devices, a real set of virtual hardware is exposed to the domU, which uses real device drivers to talk to it.

QEMU includes several components: CPU emulators, emulated devices, generic devices, machine descriptions, user interface, and a debugger. The emulated devices and generic devices in QEMU make up its device models for I/O virtualization.[15] They comprise a PIIX3 IDE (with some rudimentary PIIX4 capabilities), Cirrus Logic or plain VGA emulated video, RTL8139 or E1000 network emulation, and ACPI support.[16] APIC support is provided by Xen.

Xen-HVM has device emulation based on the QEMU project to provide I/O virtualization to the VMs. Hardware is emulated via a QEMU “device model” daemon running as a backend in dom0. Unlike other QEMU running modes (dynamic translation or KVM), virtual CPUs are completely managed by the hypervisor, which takes care of stopping them while QEMU is emulating memory-mapped I/O accesses.

9.6.3 KVM

KVM (Kernel-based Virtual Machine) is a FreeBSD and Linux kernel module that allows a user space program access to the hardware virtualization features of various processors, with which QEMU is able to offer virtualization for x86, PowerPC, and S/390 guests. When the target architecture is the same as the host architecture, QEMU can make use of KVM-specific features, such as acceleration.

9.6.4 Win4Lin Pro Desktop

In early 2005, Win4Lin introduced Win4Lin Pro Desktop, based on a ‘tuned’ version of QEMU and KQEMU; it hosts NT versions of Windows. In June 2006,[17] Win4Lin released Win4Lin Virtual Desktop Server, based on the same code base, which serves Microsoft Windows sessions to thin clients from a Linux server.

In September 2006, Win4Lin announced a change of the company name to Virtual Bridges with the release of Win4BSD Pro Desktop, a port of the product to FreeBSD and PC-BSD. Solaris support followed in May 2007 with the release of Win4Solaris Pro Desktop and Win4Solaris Virtual Desktop Server.[18]

9.6.5 SerialICE

SerialICE is a QEMU-based firmware debugging tool that runs system firmware inside QEMU while accessing real hardware through a serial connection to a host system. This can be used as a cheap replacement for hardware ICEs.[19]

9.6.6 WinUAE

The WinUAE Amiga emulator introduced, in version 3.0.0, support for CyberStorm PPC and Blizzard 603e boards using the QEMU PPC core.[20]

9.7 Emulated hardware platforms

9.7.1 x86

Besides the CPU (which is also configurable and can emulate the Intel Sandy Bridge[21]), the following devices are emulated:

• CD-ROM/DVD-drive using an ISO image
• Floppy disk
• ATA controller or Serial ATA AHCI controller
• Graphics card (Cirrus CLGD 5446 PCI VGA card; standard VGA graphics card with Bochs VESA BIOS Extensions at the hardware level, including all non-standard modes, with an experimental patch that can accelerate simple 3D graphics via OpenGL; Red Hat QXL VGA; or VirtIO GPU)
• Network card (Realtek 8139C+ PCI network adapter)
  • Depending on target: i82551, i82557b, i82559er, NE2000 PCI, NE2000 ISA, PCnet, RTL8139, e1000 (PCI Intel Gigabit Ethernet), SMC91c111, Lance and mcf_fec[22][23]
• NVMe disk interface
• Parallel port
• PC speaker
• i440FX/PIIX3 (PCI and ISA) or Q35/ICH9 (PCIe and LPC) chipsets
• PS/2 mouse and keyboard
• SCSI controller (LSI MegaRAID SAS 1078, LSI53C895A, NCR53C9x as found in the AMD PCscsi and Tekram DC-390 controllers)
• Serial interface
• Sound card (Sound Blaster 16, ES1370 PCI, Gravis Ultrasound, AC97, and Intel HD Audio[24])
• Watchdog timer (Intel 6300 ESB PCI, or iB700 ISA)
• USB 1.x/2.x/3.x controllers (UHCI, EHCI, xHCI)
• USB devices: audio, Bluetooth dongle, HID (keyboard/mouse/tablet), MTP, serial interface, CAC smartcard reader, storage (bulk-only transfer and USB Attached SCSI), Wacom tablet
• Paravirtualized VirtIO devices: block device, network card, SCSI controller, serial interface, balloon driver, 9pfs filesystem driver
• Paravirtualized Xen devices: block device, network card, console, framebuffer and input device

The BIOS implementation used by QEMU starting from version 0.12 is SeaBIOS. The VGA BIOS implementation comes from Plex86/Bochs. The UEFI firmware for QEMU is OVMF.

9.7.2 PowerPC

PowerMac

QEMU emulates the following PowerMac peripherals:

• UniNorth PCI bridge
• PCI-VGA-compatible graphics card which maps the VESA Bochs Extensions
• Two PMAC-IDE interfaces with hard disk and CD-ROM support
• NE2000 PCI adapter
• Non-volatile RAM
• VIA-CUDA with ADB keyboard and mouse

OpenBIOS is used as the firmware.

PREP

QEMU emulates the following PREP peripherals:

• PCI bridge
• PCI VGA-compatible graphics card with VESA Bochs Extensions
• Two IDE interfaces with hard disk and CD-ROM support
• Floppy disk drive
• NE2000 network adapter
• Serial interface
• PREP non-volatile RAM
• PC-compatible keyboard and mouse

On the PREP target, Open Hack'Ware, an Open Firmware–compatible BIOS, is used.

IBM System p

QEMU can emulate the paravirtual sPAPR interface with the following peripherals:

• PCI bridge, for access to virtio devices, VGA-compatible graphics, USB, etc.
• Virtual I/O network adapter, SCSI controller, and serial interface
• sPAPR non-volatile RAM

On the sPAPR target, another Open Firmware–compatible BIOS is used, called SLOF.

[Figure: QEMU booted into the ARM port of Fedora 8]

9.7.3 ARM

QEMU emulates the ARMv7 instruction set (and down to ARMv5TEJ) with the NEON extension.[25] It emulates full systems such as the Integrator/CP board, Versatile baseboard, RealView Emulation baseboard, XScale-based PDAs, Palm Tungsten|E PDA, Nokia N800 and Nokia N810 Internet tablets, etc. (a typical boot command line for one of these boards is sketched after the list below). QEMU also powers the Android emulator, which is part of the Android SDK (most current Android implementations are ARM based). Starting from version 2.0.0 of their Bada SDK, Samsung has chosen QEMU to help development on emulated ‘Wave’ devices.

In 1.5.0 and 1.6.0, the Samsung Exynos 4210 (dual-core Cortex-A9) and Versatile Express (ARM Cortex-A9, ARM Cortex-A15) are emulated. In 1.6.0, the 32-bit instructions of the ARMv8 (AARCH64) architecture are emulated, but 64-bit instructions are unsupported.

The Xilinx Cortex-A9-based Zynq SoC is modelled, with the following elements:

• Zynq-7000 ARM Cortex-A9 CPU
• Zynq-7000 ARM Cortex-A9 MPCore
• Triple Timer Counter
• DDR Memory Controller
• DMA Controller (PL330)
• Static Memory Controller (NAND/NOR Flash)
• SD/SDIO Peripheral Controller (SDHCI)
• Zynq Gigabit Ethernet Controller
• USB Controller (EHCI – host support only)
• Zynq UART Controller
• SPI and QSPI Controllers
• I2C Controller
9.7.4 SPARC

QEMU has support for both 32- and 64-bit SPARC architectures.

For the JavaStation (sun4m architecture), Proll,[26] a PROM replacement, was used up to version 0.8.1; from version 0.8.2 it was replaced with OpenBIOS.

Sparc32

QEMU emulates the following sun4m/sun4c/sun4d peripherals:

• IOMMU or IO-UNITs
• TCX frame buffer (graphics card)
• Lance (Am7990) Ethernet
• Non-volatile RAM M48T02/M48T08
• Slave I/O: timers, interrupt controllers, Zilog serial ports, keyboard and power/reset logic
• ESP SCSI controller with hard disk and CD-ROM support
• Floppy drive (not on SS-600MP)
• CS4231 sound device (only on SS-5, not working yet)

Sparc64

QEMU can emulate a Sun4u (UltraSPARC PC-like machine), Sun4v (T1 PC-like machine), or generic Niagara (T1) machine with the following peripherals:

• UltraSparc IIi APB PCI bridge
• PCI VGA-compatible card with VESA Bochs Extensions
• PS/2 mouse and keyboard
• Non-volatile RAM M48T59
• PC-compatible serial ports
• 2 PCI IDE interfaces with hard disk and CD-ROM support
• Floppy disk

9.7.5 MicroBlaze

Supported peripherals:

• MicroBlaze with/without MMU, including
• AXI Timer and Interrupt controller peripherals
• AXI External Memory Controller
• AXI DMA Controller
• Xilinx AXI Ethernet
• AXI Ethernet Lite
• AXI UART 16650 and UARTLite
• AXI SPI Controller

9.7.6 LatticeMico32

Supported peripherals, from the Milkymist SoC:

• UART
• VGA
• Memory card
• Ethernet
• pfu
• timer

9.7.7 CRIS

Main article: ETRAX CRIS

9.7.8 OpenRISC

Main article: OpenRISC

9.7.9 External patches

External trees exist supporting the following targets:

• Zilog Z80,[27] emulating a Sinclair 48K ZX Spectrum
• HP PA-RISC[28]
• RISC-V[29]
9.8 See also

• qcow
• Comparison of platform virtualization software
• Mtools
• OVPsim
• Q (emulator)
• SPIM
• GXemul
• Gnome Boxes

9.9 References

[1] “qemu.git/summary”. Retrieved 20 April 2017.
[2] “License - QEMU”. wiki.qemu.org.
[3] “QEMU Internals”. qemu.weilnetz.de.
[4] “QEMU OS Support List”. www.claunia.com.
[5] “QEMU PRIP 1 - support for MIPS64 Release 6 - PRPL”. wiki.prplfoundation.org.
[6] "[Qemu-devel] ANNOUNCE: Release 0.10.0 of QEMU”. lists.gnu.org.
[7] “KQEMU 1.3.0pre10 released - under the GPL [LWN.net]". Lwn.net. February 6, 2007. Retrieved 2009-01-03.
[8] Liguori, Anthony (10 August 2009). "[Qemu-devel] [PATCH 1/2] Unbreak large mem support by removing kqemu”. Retrieved 2010-03-11.
[9] “Intel Hardware Accelerated Execution Manager”. Intel. 2013-11-27. Retrieved 2014-05-12. “The Intel Hardware Accelerated Execution Manager (Intel® HAXM) is a hardware-assisted virtualization engine (hypervisor) that uses Intel Virtualization Technology (Intel® VT) to speed up Android app emulation on a host machine.”
[10] “QEMU Emulator User Documentation”. qemu.weilnetz.de.
[11] “Booting from an ISO image using qemu”. Linux Tips.
[12] “Godson-3: A Scalable Multicore RISC Processor with x86 Emulation”. IEEE. Retrieved 2009-04-16.
[13] “VirtualBox Developer FAQ”. Retrieved 2015-02-02.
[14] “Xen ARM with Virtualization Extensions”.
[15] “Oracle and Sun Microsystems - Strategic Acquisitions - Oracle” (PDF). www.sun.com.
[16] Demystifying Xen HVM. Archived December 22, 2007, at the Wayback Machine.
[17] win4lin VDS announcement. Archived February 10, 2008, at the Wayback Machine.
[18] Win4Solaris announcement. Archived December 23, 2007, at the Wayback Machine.
[19] “SerialICE”. serialice.com.
[20] “WinUAE 3.0.0”. English Amiga Board. 2014-12-17. Retrieved 2016-03-25.
[21] "[Qemu-devel] [PATCH 3/3] add SandyBridge CPU model”. lists.gnu.org.
[22] “i82551, i82557b, i82559er, ne2k_pci, ne2k_isa, pcnet, rtl8139, e1000, smc91c111, lance and mcf_fec”. https://qemu.weilnetz.de/qemu-doc.html#pcsys_005fnetwork
[23] “Networking on QEMU: Setting Up The E1000 & Novell NE2000 ISA Evaluation”. http://pclosmag.com/html/issues/201208/page11.html
[24] “ChangeLog/0.14”. Retrieved 2011-08-08.
[25] “gitorious.org Git - rowboat:external-qemu.git/commit". gitorious.org.
[26] “Zaitcev’s Linux”. people.redhat.com. Retrieved 2009-04-27.
[27] “QEMU Z80 Target”. homepage.ntlworld.com. Retrieved 2009-05-06.
[28] “QEMU links”. nongnu.org. Retrieved 2009-05-06.
[29] “Download - RISC-V”.

9.10 External links

• Official website
• Systems emulation with QEMU – an IBM developerWorks article by M. Tim Jones
• QVM86 project page
• Debian on an emulated ARM machine
• Fedora ARM port emulation with QEMU
• The Wikibook “QEMU and KVM” (in German, or computer-translated to English)
• QEMU on Windows
• QEMU Binaries for Windows
• Microblaze emulation with QEMU
• QEMU speed comparison
• UnifiedSessionsManager – An unofficial QEMU/KVM configuration file definition
• Couverture, a code coverage project based on QEMU
9.11. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 53

9.11 Text and image sources, contributors, and licenses


9.11.1 Text
• Virtualization Source: https://en.wikipedia.org/wiki/Virtualization?oldid=784088614 Contributors: Breakpoint, Expatrick, Marteau, Furrykef,
Jeffq, Bearcat, Pengo, Tea2min, Kak, Uzume, Ency, Velella, Versageek, Mahanga, Woohookitty, Pol098, BD2412, Siddhant, Wavelength, J. M.,
Bhny, Jengelh, Mitte, SmackBot, Gribeco, Vermorel, Yamaguchi , Gilliam, Thumperward, Nick Levine, Warren, 16@r, Kvng, Quaeler, Ter-
ryE, Colonel Warden, UncleDouggie, Artemgy, FleetCommand, Sir Lothar, Yukoba~enwiki, Gogo Dodo, Kubanczyk, Widefox, VictorAnyakin,
JAnDbot, Wongba, NapoliRoma, Kitdaddio, Dekimasu, GermanX, In Transit, Funandtrvl, VolkovBot, Wrev, Zhenqinli, Heiser, JerzyTarasiuk,
Svick, StaticGull, Farrahi, CiudadanoGlobal, Lms2007, RafaAzevedo, Pointillist, Swtechwr, Vdmeraj, SF007, XLinkBot, Nepenthes, Dsimic,
Cabayi, Addbot, Ghettoblaster, KitchM, MrOllie, Jasper Deng, Jarble, Luckas-bot, Yobot, Pcap, Wadamja, Jkeogh, AnomieBOT, Materialsci-
entist, M.amjad.saeed, Nakakapagpabagabag, Zero Thrust, Geek2003, Wisewave, Boleyn3, Cannolis, Umawera, DeWriterMD, Pinethicket,
HRoestBot, MHPSM, Yunshui, Wo.luren, Rakarlin, Ripchip Bot, Smbat.petrosyan, Virtimo, Chibby0ne, Dbastro, EmausBot, Sliceofmiami,
Bathawes, GoingBatty, Icantthinkofaname4msn, L Kensington, Sunil256, Petrb, ClueBot NG, MelbourneStar, Ardahal.nitw, Cntras, Firowkp,
Widr, Opensystemsmedia, Blewisopensystemsmedia, Bslewis1, Texasfight2010, Titodutta, BG19bot, Wasbeer, Titizebioutifoul, Michael Vick
PHI, JMJeditor, Collinusc, Glacialfox, Anshuln95, Yousufatik, Pdelvecchio, Darylgolden, Softwareqa, Calsoft123, ChrisGualtieri, Amirhossini,
Dwright7861, SimonBramfitt, Giso6150, Codename Lisa, Webclient101, Jamesey1, Lugia2453, Vahid alpha, Stefmorrow, Dave Braunschweig,
DBhavsar709, Ruby Murray, Ckolias, Brk0 0, ScotXW, Tazi Mehdi, Wills64, Melcous, Monkbot, Shrutib6182, RippleSax, Davidjpfeiffer,
Majora, Vishwas.D.B, Kamcdonald, Orosk, NoToleranceForIntolerance, Bladefrost, Tsantanamm and Anonymous: 165
• Hardware virtualization Source: https://en.wikipedia.org/wiki/Hardware_virtualization?oldid=766731056 Contributors: Pnm, Ronz, Nurg,
Rsduhamel, Dratman, Cmprince, Pol098, Bilbo1507, Rjwilmsi, Bernard van der Wees, Chobot, Wavelength, SmackBot, DagErlingSmørgrav,
Dl2000, Mrozlog, UncleDouggie, Sir Lothar, Mblumber, JAnDbot, Honette, Mathglot, Kwesterh, Cucinotta, Billinghurst, Heiser, Ooiiww,
Jerryobject, Svick, Niceguyedc, Rprpr, Jusdafax, Zodon, Dsimic, Addbot, Mortense, Thecarpy, MrOllie, Jasper Deng, Yobot, Pcap, Becky
Sayles, KamikazeBot, AnomieBOT, Xqbot, Tayloj, Animist, Anna Frodesiak, FrescoBot, Skatesmen, William Surya Permana, A.wasylewski,
Jesse V., Jfmantis, Orphan Wiki, Jsdustin, Manasprakash79, Shencypeter, Robbiemorrison, John Aplessed, ClueBot NG, Jfalkent, Harihs,
Pgkaila, Ctopham, Cyberbot II, Codename Lisa, Bbartlomiej, Rotlink, YiFeiBot, James Cage, Monkbot, ComsciStudent, Parthkamdar, Renamed
user 8gzkdu68phsoyd2 and Anonymous: 60
• I/O virtualization Source: https://en.wikipedia.org/wiki/I/O_virtualization?oldid=745559319 Contributors: Rich Farmbrough, Xezbeth, Red-
vers, Woohookitty, RHaworth, BD2412, Josh Parris, MZMcBride, Wavelength, Henriok, Nick Number, Dougher, CosineKitty, Tinucherian,
EagleFan, R'n'B, Reedy Bot, Spellicy, Dsimic, Yobot, LGee LGee, Bongeman, Δ, Piggybank25, BG19bot, BattyBot and Anonymous: 14
• Hypervisor Source: https://en.wikipedia.org/wiki/Hypervisor?oldid=786000801 Contributors: AxelBoldt, Chuckhoffmann, Ghakko, Dcoet-
zee, Havardk, Visorstuff, Earthsound, Nurg, DocWatson42, Brouhaha, Levin, Dratman, Dav4is, Mboverload, Uzume, Pgan002, ConradPino,
Pbannister, TheObtuseAngleOfDoom, Moxfyre, Thorwald, Praxis, Grstain, RossPatterson, Rich Farmbrough, Sbb, ChrisJ, Bender235, Paul-
MEdwards, Csabo, Guy Harris, Andrewpmk, Snowolf, Velella, Ronark, DanShearer, Rothgar, Pauli133, Ringbang, Linas, JFG, GregorB, Mark
Williamson, Marudubshinki, BD2412, Rjwilmsi, Mfwills, PHenry, Pmc, FlaBot, Pruefer, Quuxplusone, Eugene Esterly III, Pasin~enwiki, Bg-
white, FrankTobia, YurikBot, Pcoulombeau, Jeffhoy, Bovineone, Роман Беккер, Dwarfpower, Cleared as filed, Voidxor, Scope creep, Drdavis,
ThinkingInBinary, Zzuuzz, JLaTondre, ViperSnake151, Renegade54, Benandorsqueaks, Finell, Mark hermeling, SmackBot, Mmernex, Hen-
riok, Omniuni, Gilliam, Brianski, Hmains, Chris the speller, Thumperward, Kevin Ryde, Nbarth, Tonyhinkle, Liontooth, Frap, Eliyahu S,
Jacob Poon, Dweaver, Adamantios, UU, Meson537, Schultmc, Warren, Rudolph04, Spinality, Rmohns, Martinkop, Soumyasch, Nagle, Ko-
rejora, Smswigart, UncleDouggie, Thewebdruid, Raysonho, Cydebot, PamD, Thijs!bot, Kubanczyk, Nick Number, Applemeister, Changlinn,
Dougher, Skarkkai, JAnDbot, Magioladitis, Appraiser, Bswinnerton, Ehdr, Seashorewiki, Symbolt, Gwern, Delucardenal~enwiki, Hlsu, Panar-
chy, Cyprien44, ItsProgrammable, Dave Tuttle, RenniePet, Juliancolton, GCFreak2, Milnivlek, Oshwah, MaxBrowne, Mezzaluna, Mwilso24,
DRady, Kbrose, Gambler132, Heiser, SieBot, EwokiWiki, Noveltyghost, XMNboy, AlanUS, Edifyyo, CP2002, Echo95, HumanBeing01, Ciu-
dadanoGlobal, Chaosfreak69, Glenikins, RanceDeLong, RobChafer, Niceguyedc, DragonBot, Socrates2008, Aeolian145, Anon lynx, Bennopia,
Paul Lizer, Neo9922, SF007, Atxbyea, Dsimic, Addbot, Xionglingfeng, Eamei, Elsendero, Ratokeshi, MrOllie, Enormator, Favonian, Tide
rolls, Lightbot, Jarble, Ben Ben, Yobot, Bunnyhop11, Pcap, KamikazeBot, Peter Flass, Kehuston, Fultheim, AnomieBOT, Aneah, PabloCastel-
lano, Smredo, LarryStevens, Kicekpicek, Technopedian, Terrivsm, Nasa-verve, RibotBOT, Locobot, Alfons.crespo, Breadtk, Baishuwei, Fres-
coBot, Llukomsk, ChrixH, Winterst, Skyerise, Jandalhandler, Alexey Izbyshev, A.wasylewski, Linuxpundit, Hanky27, Phlegat, Dbastro, Emaus-
Bot, John of Reading, Dewritech, Barnet.Herts, LiamWestley, Slashusrbin, Bbalban, Bollyjeff, Kiwi128, Singaporian, Demonkoryu, Acwil-
son9, OnePt618, Erget2005, MainFrame, Voomoo, ClueBot NG, Crowdthink, Jeff Song, Bbruce32, IAmTheCandyman, Khajaea, Alexbee2,
BG19bot, Raghu Udiyar, Geekbychoice, Sumercip, GGShinobi, Snow Blizzard, Meabandit, JMtB03, Vxassist, HistoryStamp, Vicentenico-
laugallego, Amirhossini, Khazar2, SimonBramfitt, Codename Lisa, Jnargus, Debrell, Me, Myself, and I are Here, Wanderenvy, Bouncyshot,
ScotXW, Aswirthm, Monkbot, Negative24, Jiten Dhandha, Vmdocker, SoSivr, PerfectBike, BikeyHS, Seeekr, DSmurf and Anonymous: 257
• Virtual machine Source: https://en.wikipedia.org/wiki/Virtual_machine?oldid=788338771 Contributors: Tobias Hoevekamp, Derek Ross, Wo-
jPob, Zundark, Espen, Robertl30, Andre Engels, SimonP, Hannes Hirzel, Hirzel, K.lee, RTC, EddEdmondson, Fuzzie, Norm, Haakon, Speuler,
Charles Matthews, Chatool, Zoicon5, SatyrTN, Didactylos, Jake Nelson, Maximus Rex, Furrykef, Jnc, Bevo, Robbot, Nurg, Samrolken, Aca-
demic Challenger, Rholton, Wlievens, Hadal, Xanzzibar, Tea2min, Giftlite, Thv, Brouhaha, MaGioZal, Jhf, Gilgamesh~enwiki, Mboverload,
AlistairMcMillan, VampWillow, Uzume, Tagishsimon, Wmahan, Bact, J~enwiki, Beland, Icairns, Marc Mongenet, Picapica, Chmod007, Zon-
dor, Thorwald, Porges, Ta bu shi da yu, Shipmaster, RossPatterson, Discospinster, Michal Jurosz, Notinasnaid, Brynosaurus, Leif, Etz Haim,
Agoode, John Vandenberg, Flxmghvgvk, R. S. Shaw, Jakew, Hackwrench, Guy Harris, PatrickFisher, Sl, Docboat, Tobyc75, Mcsee, Lost.goblin,
RHaworth, Ruud Koot, Tabletop, BD2412, Terryn3, SpiralOut, Kbdank71, Phoenix-forgotten, Ketiltrout, Rjwilmsi, Amire80, Trlovejoy, Jef-
frey Henning, ScottJ, Fred Bradstadt, Maxim Razin, StuartBrady, FlaBot, Mga, Ewlyahoocom, Intgr, Ahunt, Moocha, Pinecar, YurikBot,
Pcoulombeau, Crazytales, Martinhenz, Arado, Pi Delport, Jengelh, Gaius Cornelius, Bovineone, EngineerScotty, Welsh, Mjchonoles, Lomedae,
DGJM, Falcon9x5, Wknight94, Abune, Josh3580, Ansud~enwiki, Tvarnoe~enwiki, Back ache, JLaTondre, ViperSnake151, GrinBot~enwiki,
SmackBot, Mmernex, Gribeco, Clpo13, BurntSky, Brick Thrower, Commander Keane bot, Hmains, Bluebot, MalafayaBot, Jerome Charles
Potts, Nbarth, DHN-bot~enwiki, Colonies Chris, Krallja, Nintendude, Can't sleep, clown will eat me, Makewa, Frap, Neo139, JonHarder, Zoon-
fafer, Kennovak, Zvar, Phaedriel, B^4, Corby, Tossrock, Duckbill, MichaelBillington, Wybot, FlyHigh, Spinality, Lester, Doug Bell, Gobonobo,
54 CHAPTER 9. QEMU

Soumyasch, Breno, NJA, Ryulong, Mrozlog, Dreftymac, UncleDouggie, Arto B, Courcelles, Bclaremont, FatalError, Raysonho, Elendal, Mblum-
ber, MC10, Quibik, Torc2, Dayyan, Thijs!bot, Kubanczyk, Coelacan, Hervegirod, TheWhiteCrowsShadow, K001, JeffV, JAnDbot, MER-C,
Albany NY, Bradmkjr~enwiki, Fuchsias, JamesBWatson, Swpb, Gusunyzu, Twsx, Sachingopal, Japo, Doesnotexist, P Escobar, Donsez, Mi5key,
Gwern, MartinBot, Hlsu, R'n'B, Cyprien44, Jesant13, Raise exception, Pterre, Jcea, VolkovBot, Part Deux, MenasimBot, Xandell, TXiKiBoT,
Aatch, Oshwah, Quackdave, Slowish guitar, Rponamgi, Cfbolz, Pmedema, BwDraco, VisionQwest, Bejer, Cucinotta, Dirkbb, Dpleibovitz,
YordanGeorgiev, SaltyBoatr, Heiser, SieBot, Winchelsea, Missy Prissy, Jerryobject, Hello71, Alexey.noskov, XMNboy, Smearly, Priyakishan,
Anchor Link Bot, Martarius, ClueBot, Alksentrs, Adamin91, Nonxero, DragonBot, PixelBot, MorrisRob, Posix memalign, A Pirard, Muro
Bot, BOTarate, Bob13377, SF007, Vayalir, Galzigler, Atxbyea, Dsimic, Mortense, DOI bot, Faltschuler, AkhtaBot, Elsendero, Lclam, Cst17,
MrOllie, Glane23, Mob590, Jasper Deng, Jarble, Luckas-bot, Yobot, Pcap, Jerebin, Wonderfl, Peter Flass, Fultheim, AnomieBOT, Houselifter,
ThaddeusB, Jim1138, JamesBKenney, LirazSiri, Materialscientist, Larry.Imperas, Ms.wiki.us, ArthurBot, Sirgorpster, Terrivsm, Pmlineditor,
Bugefun, Stevenbshaffer, Nits.singla, Zaokski, Prari, FrescoBot, W Nowicki, Jc3s5h, Saehrimnir, Pipoetmollo, Expertour, Helenistic, Skyerise,
Txt.file, Trappist the monk, Rangsynth, Boogaart, Vrenator, Diannaa, Virtimo, EmausBot, Jjacobso, Dewritech, Barnet.Herts, Jasonanaggie,
Pome wiki, Lambda-mon key, HiW-Bot, Grondilu, Cogiati, Josve05a, Wackywace, Dznidarsic, L Kensington, , MainFrame, Aravin-
dsrivatsa, Kinkreet, Bhaskarmnnit, Petrb, ClueBot NG, Matthiaspaul, Strcat, Nickholbrook, BigTheta, Electriccatfish2, BG19bot, Murry1975,
Kendall-K1, Mark Arsten, Compfreak7, Mariraja2007, Neel.pant18, Baxter4173, Thegreatgrabber, Sondhic, David.moreno72, ChrisGualtieri,
Amirhossini, Earl King Jr., Codename Lisa, Pavel Tkachuk, Dbasellc, Frosty, Me, Myself, and I are Here, BurritoBazooka, Stefmorrow, Epicge-
nius, Tdshepard, François Robere, Tljplslc, Comp.arch, Spyglasses, Melody Lavender, Ginsuloft, Reda cosi, HarlemChocolateShake, Dough34,
Hrbm14, XHeliotrope, Lagoset, Monkbot, Pedrotangtang, CerealKillerYum, Snowbooks1419, BlackCat1978, Eurodyne, Fatdaddy12222, Guet-
tli, PerfectBike, Mr. Artem Kerpatenko, , Intercambrian, UnequivocalAmbivalence, Kethrus II, Daveprochnow, 1 bicbluepen 1, Fmadd,
NoToleranceForIntolerance, Marvellous Spider-Man, Ashleyeinspahr, PrimeBOT and Anonymous: 462

• Virtual memory Source: https://en.wikipedia.org/wiki/Virtual_memory?oldid=788501636 Contributors: Derek Ross, WojPob, Bryan Derk-
sen, XJaM, Ray Van De Walker, Stevertigo, Frecklefoot, Ubiquity, Alan Peakall, Goatasaur, Mkweise, Ahoerstemeier, Nanshu, Snoyes,
TUF-KAT, Kingturtle, Julesd, Nikai, Dwo, Emperorbma, Bemoeial, Dysprosia, Furrykef, Jnc, Omegatron, Bevo, Jdcope, Aliekens, Rob-
bot, Kstailey, Fredrik, RedWolf, Kday~enwiki, SchmuckyTheCat, Caknuck, Smb1001, Tea2min, Martinwguy, Connelly, Centrx, Giftlite,
Arved, Noone~enwiki, Radius, AlistairMcMillan, Falcon Kirtaran, VampWillow, Alanl, Geni, Knutux, Yath, Stephan Leclercq, Antandrus,
Beland, Apotheon, Nils~enwiki, Bumm13, Sam Hocevar, Urhixidur, Edsanville, Ehamberg, Abdull, RossPatterson, UrmasU, Discospinster,
YUL89YYZ, Pavel Vozenilek, Joepearson, Dyl, Plugwash, CanisRufus, El C, Shanes, Alereon, Seiji, Underdog~enwiki, Bobo192, Yonghokim,
Bradkittenbrink, R. S. Shaw, .:Ajvol:., Unquietwiki, Espoo, Poweroid, Guy Harris, ABCD, Fritzpoll, Seans Potato Business, Super-Magician,
Anthony Ivanoff, Feezo, Jkt, Kelly Martin, Simetrical, Awk~enwiki, Damicatz, Isnow, Palica, Marudubshinki, Graham87, Qwertyus, Kbdank71,
Rjwilmsi, Pdelong, Loudenvier, Vegaswikian, Eptalon, Guinness2702, Boccobrock, GeorgeBills, Silvestre Zabala, Sydbarrett74, Toresbe, Soup
man, Perteghella~enwiki, Nimur, DevastatorIIC, Intgr, BradBeattie, Chobot, Alpinesol, YurikBot, Wavelength, Borgx, Sceptre, CTR, Armistej,
Pseudomonas, Anomie, Aeusoes1, Jaxl, Amcfreely, Zwobot, DeadEyeArrow, Jeh, K.Nevelsteen, Lt-wiki-bot, Closedmouth, Abune, Tyomitch,
GrinBot~enwiki, SmackBot, Mmernex, PEHowland, Incnis Mrsi, Germ~enwiki, Unyoyega, Pgk, Ccalvin, Vald, PizzaMargherita, Agentbla,
Nil Einne, Msrkiran, Aksi great, Gilliam, Brianski, Hmains, Manavkataria, Persian Poet Gal, Justforasecond, Thumperward, HartzR, Can't
sleep, clown will eat me, Frap, JonHarder, Kittybrewster, Ghiraddje, VegaDark, MichaelBillington, Thomasyen, Warren, NickPenguin, Kleuske,
Daniel Santos, Haniefdar, Curly Turkey, Dragonfly298, Writtenonsand, JoshuaZ, Doodan, EdC~enwiki, Yodaat, Phuzion, Quaeler, JeffW, Chief
of Staff, Paul Koning, J Di, Ramack, Tawkerbot2, JForget, CRGreathouse, CmdrObot, JohnCD, GHe, Lentower, VTBassMatt, Weyrick, Erar-
chit007, Ankit jn, Acolyte of Discord, Christian75, DumbBOT, Thijs!bot, Kubanczyk, Edupedro, Marek69, DmitTrix, Jirislaby, AntiVandal-
Bot, Widefox, Quintote, Doktor Who, Rsocol, Alphachimpbot, Hyarmion, Lfstevens, E.James, Avocado27, Bearclause, Arch dude, Phospho-
ricx, Jed S, Bongwarrior, VoABot II, Midgrid, MCG, DerHexer, Evanh, ItsProgrammable, Artaxiad, Huggie, Philcha, Kojozone, Theo Mark,
Yonidebot, SparsityProblem, Teac77, JavierMC, Asorini, Jcea, VolkovBot, Allstar87, Philip Trueman, Nigelrees, TXiKiBoT, WatchAndOb-
serve, Sailorman2003, Someguy1221, Monkey Bounce, Duncan.Hull, Enigmaman, Pthibault, Jonnyspace, Insanity Incarnate, Why Not A Duck,
Stirlingstout, Winterspan, Yintan, Xelgen, Flyer22 Reborn, Radon210, Dwiakigle, JCLately, EnriqueVillar, OKBot, Svick, AlanUS, Orever,
Ken123BOT, Denisarona, ClueBot, The Thing That Should Not Be, Rilak, Rbakels, Excirial, Alexbot, Friendlydata, Sol Blue, SpikeToronto,
Eldri005, Stypex, Vendeka, AlexGWU, Gnowor, G.ardaud, Beach drifter, Edepa, BitterTwitter, Addbot, Markuswise, DOI bot, Shervine-
mami, Elsendero, Ronhjones, MrOllie, Bazza1971, LinkFA-Bot, Numbo3-bot, Lightbot, Zorrobot, Bezymov, Ochib, Softy, Yobot, OrgasGirl,
Fraggle81, VitalyLipatov, Pcap, Crispmuncher, Nallimbot, Pmalmsten, Peter Flass, AnomieBOT, Jim1138, Materialscientist, Citation bot, Lil-
Helpa, NotAnEditor, Capricorn42, Jwaustin188, Mathonius, WillMall, Chatul, FrescoBot, RoyGoldsmith, Mfwitten, Dhtwiki, Citation bot 1,
Pinethicket, E.w.bullock, RjwilmsiBot, Random2001, EmausBot, Angrytoast, Olof nord, Dcirovic, AvicBot, Ebrambot, AndrewN, Makecat,
Openstrings, W163, Peterh5322, Bomazi, BioPupil, ClueBot NG, Cracked acorns, Shaddim, Parcly Taxel, 123Hedgehog456, Zakblade2000,
‫ساجد امجد ساجد‬, Helpful Pixie Bot, BG19bot, Parthasarathinag, Walk&check, SSDPenguin, MusikAnimal, Mark Arsten, Compfreak7, Tre-
vayne08, Thesquaregroot, HripsimeHripsime, Jörg Olschewski, ChrisGualtieri, Khazar2, Dexbot, Kushalbiswas777, Cameron Montgomery,
Mogism, ZaferXYZ, IVORK, Me, Myself, and I are Here, Godswearhats, Shurakai, Johnny-−34, Andiamo1, EnricoMartignetti, Spyglasses,
Capteroh, Gabriel Southern, Visayan Spotted Deer, Ian.joyner, Monkbot, Sami.ullah052, BethNaught, Tsolakic, KasparBot, My Chemistry
romantic, Andiamo1980, GreenC bot, Bender the Bot, SavageEdits, CPU 5, PrimeBOT, Riyashi and Anonymous: 466

• Kernel (operating system) Source: https://en.wikipedia.org/wiki/Kernel_(operating_system)?oldid=788744058 Contributors: Mav, The


Anome, Larry Sanger, Arvindn, Toby Bartels, Mbp (usurped), Merphant, Maury Markowitz, Ark~enwiki, Mjb, OlofE~enwiki, Edward, Bde-
sham, Aholstenson, Nixdorf, Wapcaplet, Ixfd64, TakuyaMurata, Karada, Delirium, Dori, Iluvcapra, Ellywa, Mac, Nanshu, Hyungjin Ahn, Marco
Krohn, Evercat, Astudent, Kaysov, Ghewgill, Mulad, David Latapie, Paul Stansifer, Dysprosia, Jay, Wik, Zoicon5, Furrykef, Tero~enwiki, Jnc,
IanM, Wernher, Samsara, Joy, Raul654, Olathe, Finlay McWalter, Chris 73, Romanm, Rfc1394, Anthony, HaeB, Joshays, Tea2min, Cedars,
Giftlite, DocWatson42, Uday, Kenny sh, Ævar Arnfjörð Bjarmason, Tom harrison, Levin, Siroxo, AlistairMcMillan, Timbatron, Pne, Bob-
blewik, Knutux, Beland, Luky-amiga, MJA, DNewhall, MFNickster, Vbs, Bk0, Kutulu, Positron, Julien~enwiki, Abdull, Yuriz, Gazpacho,
Mernen, Mormegil, Rfl, Poccil, Discospinster, Rich Farmbrough, Leibniz, Jpk, Martpol, Paul August, Bender235, Mykhal, Rubicon, CanisRu-
fus, Shanes, Twilo, RAM, R. S. Shaw, Polluks, Dungodung, MARQUIS111, WikiLeon, Helix84, Justinc, Jakew, Jumbuck, Alansohn, Hack-
wrench, Guy Harris, DariuszT, Jeltz, Ferrierd, Sligocki, Daniel.inform, Danaman5, Stephan Leeds, Suruena, Ringbang, TheCoffee, Mikenolte,
Zntrip, Fandyllic, Simetrical, Reinoutr, Roboshed, Woohookitty, Uncle G, Unixer, Cruccone, MattGiuca, Ruud Koot, Dah31, Thruston, Eyre-
land, Palica, Samvscat, Marudubshinki, Qwertyus, Yurik, Icey, Ryan Norton, Rjwilmsi, .digamma, Lofrsh, Xosé, Bruce1ee, Raffaele Megabyte,
9.11. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 55

TPIRman, LjL, The wub, Yahoolian, Muthukumar, JanSuchy, LVDX, Titoxd, FlaBot, Ysangkok, Fragglet, Ahunt, BMF81, Chobot, DVdm,
Bgwhite, YurikBot, Wavelength, RobotE, RussBot, Arado, Jellocube27, Epolk, Gardar Rurak, Jengelh, Manop, Gaius Cornelius, Rsrikanth05,
Aleczopf, CarlHewitt, Jehoshaphat, NawlinWiki, Joshf, Ino5hiro, Dureo, Goffrie, Bobbo, Ravedave, JulesH, Beanyk, Tony1, Alex43223, Nat-
keeran, Aaron Schulz, Johndrinkwater, Lt-wiki-bot, Ka-Ping Yee, Shawnc, Snoopen, GrinBot~enwiki, TuukkaH, That Guy, From That Show!,
Chris Chittleborough, Attilios, SmackBot, PaulWay, Mmernex, Smitz, Herostratus, Vald, Bomac, Milowarmerdam, TheTweaker, Gilliam, Jon-
nymay, Alias777, Chris the speller, @modi, RevenDS, DStoykov, MK8, CrookedAsterisk, Thumperward, Snori, Omniplex, ACupOfCoffee,
Cfallin, Frap, Sephiroth BCR, ZachPruckowski, JonHarder, VMS Mosaic, Coastalsteve984, Cybercobra, Fiosharishs, Warren, Freedom to share,
Wybot, Phoenix314, Sigma 7, Luigi.a.cruz, Ugur Basak Bot~enwiki, Midkay, SashatoBot, Rory096, Harryboyles, Ninjagecko, Soumyasch,
Candamir, Minna Sora no Shita, IronGargoyle, Brainix, SandyGeorgia, Peyre, LaMenta3, Beno1000, MrRedwood, Tawkerbot2, Shakespeare-
Fan00, Ale jrb, Hertzsprung, Orannis, WeggeBot, Cydebot, Vatekor, Gogo Dodo, Jayen466, Dinnerbone, Kozuch, Malleus Fatuorum, Thijs!bot,
Epbr123, Jdm64, Elrohir4, RickinBaltimore, Escarbot, Marvoir, Mg55~enwiki, Gioto, Zachwoo, Lupusrex, Farosdaughter, AlekseyFy, .ana-
conda, JAnDbot, CombatWombat42, .anacondabot, I80and, Magioladitis, Bongwarrior, Bobby D. DS., CS46, Indon, DrSeehas, Adrian J.
Hunter, Vssun, Tom Herbert, Seba5618, Gwern, Infrangible, Okloster, Vigyani, Padillah, Sibi antony, Lilac Soul, Trusilver, Hans Dunkelberg,
Theo Mark, Jesant13, Maxturk, Mozzley, Starnestommy, Jayden54, Plasticup, Anakletos, SJP, Gemini1980, Pdcook, Hwbehrens, Squids and
Chips, Endorf, VolkovBot, DrDentz, OliviaGuest, Infomniac, Philip Trueman, Benjamin Barenblat, Oshwah, A4bot, Rei-bot, Combatentropy,
Chrisleonard, Anna Lincoln, Una Smith, Wordsmith, UnitedStatesian, 3DS Mike, Lejarrag, Haseo9999, Synthebot, Miko3k, Romatrast, Allebor-
goBot, S.Örvarr.S, SieBot, Weeliljimmy, Gerakibot, Phe-bot, Josh the Nerd, Android Mouse, Fisico~enwiki, Anchor Link Bot, Swampsparrow,
Curtdbz, Sak31122, Relytmcd, Imran22, ClueBot, SummerWithMorons, The Thing That Should Not Be, Starkiller88, Campbellssource, Ri-
lak, Niceguyedc, DragonBot, Excirial, Ykhwong, Sun Creator, Arjayay, Yes-minister, SchreiberBike, Dimonic, Chiefmanzzz, John318, Mitch
Ames, SilvonenBot, Dekart, Badgernet, Jabberwoch, Dsimic, Zirguezi, Addbot, Professor Calculus, RPHv, Some jerk on the Internet, DOI bot,
Melab-1, Kongr43gpen, Rjpryan, Download, AndersBot, Kisbesbot, Tassedethe, Numbo3-bot, Llakais, Rusty778, Zorrobot, Fiftyquid, Jarble,
Chaprash27, Luckas-bot, Senator Palpatine, Wikipaddn, Peter Flass, AnomieBOT, Götz, Jim1138, Galoubet, Giants27, Materialscientist, Cita-
tion bot, Crimsonmargarine, Ryan Sawhill, Xqbot, Xorxos, Miym, GrouchoBot, RibotBOT, SCΛRECROW, K;;m5m k;;m5m, Moritz schlarb,
=Josh.Harris, Shadowjams, FrescoBot, Komitsuki, Rkr1991, Fchristophersen, Citation bot 1, HRoestBot, PrincessofLlyr, Skyerise, Abion47,
Turian, FoxBot, Trappist the monk, Dinamik-bot, Zvn, Gzorg, Gardrek, Peacedance, Lysander89, Brian the Editor, Jesse V., DARTH SIDI-
OUS 2, Electricninja, Lauri.pirttiaho, From Adam, Slon02, EmausBot, Simple.fatima, Klbrain, TuHan-Bot, Dcirovic, Rkononenko, ZéroBot,
Midas02, Hazard-SJ, Demonkoryu, GeorgeBarnick, Spicemix, ClueBot NG, Jmreinhart, Bped1985, Wdchk, Uğurkent, Widr, Amircrypto, Help-
ful Pixie Bot, පසිඳු කාවින්ද, BG19bot, MKar, ‫بلال الدويك‬, Yowanvista, Emerge --sync, Evanharmon, Bourne4war, AnDDreas52, Wallmani,
Minsbot, Johnt9000, C10191, Hoopya11042010, Khazar2, UnderstandingSP, HelicopterLlama, Irideforhomieeees, Pearsejward, Monkeydoes-
better, Coolermaster1314, Mabbage, Comp.arch, Yblad, Skr15081997, Yoshi24517, Cwbr77, Monkbot, Dmunafo, Wikigeek244, DSCrowned,
Sangram12, Jaysheel10, Mrithyunjaya, Kristianwlt, Absolution provider 1999, Luke1337, Dnivogkrishnan, Eric0928, KasparBot, Qxz qxz,
Tom29739, Chaitanya gurivisetti, LORDTEK, Qzd, Shaconjc, InternetArchiveBot, Ellyawidarto, Ddogmc101, Algmwc5, Magic links bot and
Anonymous: 551

• Kernel-based Virtual Machine Source: https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine?oldid=784473347 Contributors: Bryan


Derksen, AlexWasFirst, Furrykef, Rasmus Faber, Jonabbey, Uzume, Beland, Bumm13, Ezhar Fairlight, Apalsola, RossPatterson, Bender235,
Richard W.M. Jones, John Vandenberg, SpeedyGonsales, Sebastian Goll, Frodet, Mrzaius, Jsierra, Qwertyus, Rjwilmsi, Wiebel~enwiki, FlaBot,
Intersofia, Ysangkok, TheAnarcat, Intgr, Jengelh, Роман Беккер, Daniel Bonniot de Ruisselet, Gronau~enwiki, BOT-Superzerocool, SMcCan-
dlish, Fmyhr, Smithj, Liujiang, SmackBot, Stux, Gribeco, Anthony Liguori, Eaglizard, Brianski, Kitamozihr, Thumperward, Jerome Charles
Potts, Chisophugis, Jdthood, Harumphy, Frap, Nharipra, Kvng, UncleDouggie, Btate, Linus M., Cydebot, Kckid, Electron9, Btuttle, Samat-
Jain, Xhienne, Wilee, Lathama, Tanger, Magioladitis, Brownout, Scostas, Rajpaj, RadioGuyTed, Dispenser, It Is Me Here, Nemo bis, Richard
Wolf VI, My wing hk, Linportal, Manndl, Yosnoop, Javy tahu, Xandell, Oshwah, Symbology101, Canaima, Jamelan, EmxBot, SieBot, Jerry-
object, Rikkrdo, Hxhbot, Free Software Knight, Svick, Camelek, Treekids, ImageRemovalBot, PipepBot, RandallB, Pot, SF007, DumZiBoT,
DeirdreStraughan, TimTay, Imllorente, Dsimic, Addbot, AkhtaBot, Scientus, Balabiot, Yobot, Rjb1000, AnomieBOT, OttoTheFish, Quebec99,
Happyrabbit, Mmahut, FrescoBot, Winterst, Stefan Weil, Sss41, Jandalhandler, Txt.file, Evert:Meulie, Etenil, KommissärMatthäi, Jæs, Hatchmt,
Marcfl, Jenks24, Galdortakacs, Netinept, Pankajr141, Christofferdall, Demonkoryu, Palosirkka, Tarian.liber, Acue wiki, M00dawg, Mum-
boJumboDumbo, Main2frame, Mattmorr, Ris icle, BG19bot, Tyatsumi, V4711, YFdyh-bot, Snarespenguin, Dexbot, Marevoula, Codename
Lisa, Kephir, MartinMichlmayr, Vmblog, Kgsw, Comp.arch, ScotXW, Wikisansan, Ramonmedeiros, Gabrielsaragoca, Mpolednik, KasparBot,
VZzePGy, InternetArchiveBot, GreenC bot and Anonymous: 127

• QEMU Source: https://en.wikipedia.org/wiki/QEMU?oldid=788760843 Contributors: AxelBoldt, AlexWasFirst, Michael Hardy, Ixfd64, Delir-
ium, Rvalles, Furrykef, Joy, Jni, Chealer, Jwbrown77, Scott McNay, Rursus, Wereon, Alerante, Reub2000, Ds13, Rchandra, Djegan, AlistairM-
cMillan, Khalid hassani, Uzume, Comatose51, Robert Brockway, Am088, DNewhall, Reagle, Urhixidur, Popolon, Rich Farmbrough, Evert,
Sn0wflake, Gronky, Bender235, Wwahammy, Plugwash, Evice, Chungy, Kiand, Bletch, Richard W.M. Jones, Dreish, Polluks, Giraffedata,
MARQUIS111, Martinultima, Minghong, Michael Drüing, Walter Görlitz, Guy Harris, CyberSkull, Kocio, Melaen, Ronark, Gortu, Itsmine,
Cristan, NicM, QUILzhunter931, Evan Deaubl, Pol098, Sega381, Hughcharlesparker, Alecv, Toussaint, Marudubshinki, Cuvtixo, Amr Ra-
madan, Fred Bradstadt, Buo, Pmc, StuartBrady, Mikecron, FlaBot, Ian Pitchford, Ysangkok, Crazycomputers, Fragglet, Ewlyahoocom, Chobot,
Stuorguk, SaidinUnleashed, Mortenoesterlundjoergensen, Wengier, IByte, David Woodward, Lavenderbunny, Bovineone, Claunia, Antidrugue,
Raim, Goffrie, ThomasChung, Carl A. Dunham, Ali Akbar~enwiki, Lod, Speculatrix, Emc2, SmackBot, Hydrology, Imz, Eric boutilier, Anas-
trophe, Draga~enwiki, Eskimbot, Arny, Scott Paeth, Valent, JorgePeixoto, Chris the speller, Carpetsmoker, Thumperward, Jerome Charles
Potts, Jdthood, Can't sleep, clown will eat me, Adamc83, Dbiagioli, Frap, Radagast83, BfmcGuS, Daniel Santos, A5b, Towsonu2003~enwiki,
Smremde, AThing, RoboDick~enwiki, Obram, BioTube, Arkrishna, Christian Juner, JeffryJohnston, Balrog, Alan.ca, Gautam p, TH-Foreigner,
RaviC, JerkerNyberg, Raysonho, Fumblebruschi, Firklar, 67-21-48-122, MichaelPloujnikov, Metatinara, Willemijns~enwiki, TempestSA, Cy-
debot, Webaware, Vezhlys, Cyfdecyf, CritterNYC, SJ2571, JLD, Dan Forward, PamD, Electron9, Nimakha, Grayshi, Druiloor, Henderr, Il-
ion2, Widefox, TimingClock, Madd the sane, I am neuron~enwiki, Isilanes, Blastwave, Carewolf, DOSGuy, Dirkjot~enwiki, Mwarren us,
Toutoune25, Bongwarrior, SwH, Pixel ;-), Teceha, Elopio, TimSmall, EoD, Ariel., Chibimaru, AAA!, KenSharp, Arite, Dispenser, Tkgd2007,
Imperator3733, TheOtherJesse, Davidwr, Flyte35, OlavN, Phobos11, Finity, Prolixium, EmxBot, Josh the Nerd, Ender8282, Jim manley, Jer-
ryobject, Flyer22 Reborn, MinorContributor, Ilhanli, Ddgromit, ImageRemovalBot, Jacob Myers, Bekuletz, Jt, Porchdoor~enwiki, Kl4m-AWB,
Mtimjones, Niceguyedc, Edgar igl, Da rulz07, Cygwinxp, Technobadger, Pot, Yes-minister, Damiansoul, SF007, DumZiBoT, XLinkBot, Agent-
56 CHAPTER 9. QEMU

lame, Kwjbot, SilvonenBot, Zodon, Addbot, Mortense, Ghettoblaster, Cdshi, Aggplanta, Download, Ovpa, Fiftyquid, Balabiot, Jarble, Fale,
Yobot, Themfromspace, AnomieBOT, Лъчезар, LilHelpa, TheAMmollusc, SkyKiDS, Flying sheep, J04n, Pavlor, Kernel.package, MrAronnax,
Baishuwei, FrescoBot, W Nowicki, Azru0512, Winterst, Stefan Weil, Jandalhandler, Alexey Izbyshev, Txt.file, FoxBot, Captchad, Bunchosia,
Pythomit, Firefoxian, Hex539, EmausBot, Tuankiet65, Dewritech, Kiwi128, Demonkoryu, Bomazi, Voomoo, Acue wiki, Shaddim, Buggybug,
Helpful Pixie Bot, Strike Eagle, Tomko222, Clopez-igalia, Gmlucas, Christmasboy 81, Isacdaavid, TheJJJunk, Snarespenguin, Dexbot, Co-
dename Lisa, MartinMichlmayr, Kap 7, Comp.arch, Semsi Paco Virchow, FockeWulf FW 190, ScotXW, Anthonym7009, Maxime Vernier,
Sofia Koutsouveli, Omikrofan, Olzirg, Djordan1337, , So-retro-it-hurts, Transfat0g, FindMeLost, InternetArchiveBot, GreenC bot,
Bytesock, 95, 50504F and Anonymous: 281

9.11.2 Images
• File:Ambox_important.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Ambox_important.svg License: Public domain
Contributors: Own work based on: Ambox scales.svg Original artist: Dsmurat, penubag
• File:Bus_icon.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/ca/Bus_icon.svg License: Public domain Contributors: No
machine-readable source provided. Own work assumed (based on copyright claims). Original artist: No machine-readable author provided.
Booyabazooka assumed (based on copyright claims).
• File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: PD Contributors: ? Original
artist: ?
• File:Desktop_computer_clipart_-_Yellow_theme.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d7/Desktop_computer_
clipart_-_Yellow_theme.svg License: CC0 Contributors: https://openclipart.org/detail/17924/computer Original artist: AJ from openclipart.org
• File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The Tango!
Desktop Project. Original artist:
The people from the Tango! project. And according to the meta-data in the file, specifically: “Andreas Nilsson, and Jakub Steiner (although
minimally).”
• File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-by-sa-3.0
Contributors: ? Original artist: ?
• File:Free_and_open-source_software_logo_(2009).svg Source: https://upload.wikimedia.org/wikipedia/commons/3/31/Free_and_
open-source_software_logo_%282009%29.svg License: Public domain Contributors: FOSS Logo.svg Original artist: Free Software Portal
Logo.svg (FOSS Logo.svg): ViperSnake151
• File:Hardware_Virtualization_(copy).svg Source: https://upload.wikimedia.org/wikipedia/commons/0/08/Hardware_Virtualization_
%28copy%29.svg License: Public domain Contributors: Own work Original artist: John Aplessed
• File:Hyperviseur.png Source: https://upload.wikimedia.org/wikipedia/commons/e/e1/Hyperviseur.png License: CC0 Contributors: Own work
Original artist: Scsami
• File:Internet_map_1024.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/d2/Internet_map_1024.jpg License: CC BY 2.5
Contributors: Originally from the English Wikipedia; description page is/was here. Original artist: The Opte Project
• File:Kernel-based_Virtual_Machine.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/40/Kernel-based_Virtual_Machine.
svg License: CC BY-SA 4.0 Contributors: Own work Original artist: V4711
• File:Kernel-hybrid.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/39/Kernel-hybrid.svg License: Public domain Contribu-
tors: Own work Original artist: Mattia Gentilini
• File:Kernel-microkernel.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ec/Kernel-microkernel.svg License: Public domain
Contributors: Own work Original artist: Mattia Gentilini
• File:Kernel-simple.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/64/Kernel-simple.svg License: CC BY-SA 3.0 Contribu-
tors: https://en.wikipedia.org/wiki/File:Kernel-simple.png Original artist: https://en.wikipedia.org/wiki/User:Ysangkok
• File:Kernel_Layout.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8f/Kernel_Layout.svg License: CC BY-SA 3.0 Contrib-
utors: Own work Original artist: Bobbo
• File:Kvm_running_various_guests.png Source: https://upload.wikimedia.org/wikipedia/commons/3/33/Kvm_running_various_guests.png License: CC BY-SA 3.0 Contributors: My computer - Guillaume Pasquet Original artist: Various - NetBSD Developers, Free Software Foundation, Sun Microsystems
• File:Kvmbanner-logo2_1.png Source: https://upload.wikimedia.org/wikipedia/commons/7/70/Kvmbanner-logo2_1.png License: CC BY-SA
3.0 Contributors: http://openvirtualizationalliance.org/downloads/kvm-logo_300dpi.png Original artist: O.T.S.U.
• File:Libvirt_support.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d0/Libvirt_support.svg License: CC BY-SA 3.0 Contributors: This vector image includes elements that have been taken or adapted from Libvirt_logo.svg. Original artist: ScotXW
• File:Mergefrom.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0f/Mergefrom.svg License: Public domain Contributors: ?
Original artist: ?
• File:NewTux.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b0/NewTux.svg License: Attribution Contributors: New Tux,
created using Sodipodi. Based on original image by Larry Ewing, made in GIMP. Original artist: Larry Ewing, gg3po
• File:Portal-puzzle.svg Source: https://upload.wikimedia.org/wikipedia/en/f/fd/Portal-puzzle.svg License: Public domain Contributors: ? Original artist: ?
• File:QEMU_ARM_Fedora_Login1.png Source: https://upload.wikimedia.org/wikipedia/commons/8/80/QEMU_ARM_Fedora_Login1.
png License: CC BY-SA 3.0 Contributors: Own work Original artist: MrAronnax
• File:Qemu_linux.png Source: https://upload.wikimedia.org/wikipedia/commons/a/a0/Qemu_linux.png License: GPL Contributors: own
screenshot Original artist: nurnware
• File:Qemu_logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/45/Qemu_logo.svg License: CC BY 3.0 Contributors: http:
//lists.gnu.org/archive/html/qemu-devel/2012-02/msg02865.html Original artist: Benoît Canet
• File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0 Con-
tributors:
Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist:
Tkgd2007
• File:Symbol_book_class2.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/89/Symbol_book_class2.svg License: CC BY-SA 2.5 Contributors: Made by Lokal_Profil by combining: Original artist: Lokal_Profil
• File:Unix-history.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/11/Unix-history.svg License: CC-BY-SA-3.0 Contributors: Based on work by Jean-Baptiste Campesato. Campesato's chart, with different creative design elements, is available under the Creative Commons Attribution v2.5 license. Original artist: Original uploader was Phidauex at en.wikipedia
• File:Virtual_memory.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6e/Virtual_memory.svg License: CC BY-SA 3.0 Con-
tributors: Own work Original artist: Ehamberg
• File:Wiki_letter_w.svg Source: https://upload.wikimedia.org/wikipedia/en/6/6c/Wiki_letter_w.svg License: Cc-by-sa-3.0 Contributors: ?
Original artist: ?
• File:Wiki_letter_w_cropped.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1c/Wiki_letter_w_cropped.svg License: CC-BY-SA-3.0 Contributors: This file was derived from Wiki_letter_w.svg. Original artist: Derivative work by Thumperward
• File:Wikibooks-logo-en-noslogan.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/df/Wikibooks-logo-en-noslogan.svg Li-
cense: CC BY-SA 3.0 Contributors: Own work Original artist: User:Bastique, User:Ramac et al.
• File:Wikisource-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4c/Wikisource-logo.svg License: CC BY-SA 3.0 Con-
tributors: Rei-artur Original artist: Nicholas Moreau
• File:Wikiversity-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/91/Wikiversity-logo.svg License: CC BY-SA 3.0 Con-
tributors: Snorky (optimized and cleaned up by verdy_p) Original artist: Snorky (optimized and cleaned up by verdy_p)
• File:Wiktionary-logo-v2.svg Source: https://upload.wikimedia.org/wikipedia/en/0/06/Wiktionary-logo-v2.svg License: CC-BY-SA-3.0 Con-
tributors: ? Original artist: ?

9.11.3 Content license


• Creative Commons Attribution-Share Alike 3.0
