Ada 620757
NAVAL POSTGRADUATE SCHOOL
MONTEREY, CALIFORNIA
THESIS
by
Katherine K. Sheridan-Barbian
March 2015
11. SUPPLEMENTARY NOTES The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government. IRB Protocol number ____N/A____.
12a. DISTRIBUTION / AVAILABILITY STATEMENT 12b. DISTRIBUTION CODE
Approved for public release; distribution is unlimited A
13. ABSTRACT (maximum 200 words)
The Department of Defense and the intelligence community rely on space systems for a broad spectrum of services. These systems operate in highly constrained environments (in terms of space, weight and power), making virtualization and resource sharing a desirable approach. Agencies are actively exploring new architectures, such as those employing virtualization, to support their growing space mission. In this thesis, we review how virtualization architectures claim to support the real-time requirements of their guests. We survey real-time systems and virtualization architectures proposed for use in space systems. Further, we investigate the behaviors of virtualized operating systems using a method of remote network-based fingerprinting with TCP timestamps. Our work provides insights into how guests, both general purpose and real-time, behave in virtualized environments. Our survey work and experimental analysis aim to further understanding of how virtualization can be securely incorporated into space systems.
Approved for public release; distribution is unlimited
Katherine K. Sheridan-Barbian
Civilian, Department of Defense
B.A., Barnard College, 2004
from the
NAVAL POSTGRADUATE SCHOOL
Mark Gondree
Thesis Co-Advisor
Peter Denning
Chair, Department of Computer Science
ABSTRACT
The Department of Defense and the intelligence community rely on space systems for a
broad spectrum of services. These systems operate in highly constrained environments (in
terms of space, weight and power), making virtualization and resource sharing a desirable
approach. Agencies are actively exploring new architectures, such as those employing
virtualization, to support their growing space mission. In this thesis, we review how
virtualization architectures claim to support the real-time requirements of their guests.
We survey real-time systems and virtualization architectures proposed for use in space
systems. Further, we investigate the behaviors of virtualized operating systems using a
method of remote network-based fingerprinting with TCP timestamps. Our work
provides insights into how guests, both general purpose and real-time, behave in
virtualized environments. Our survey work and experimental analysis aim to further
understanding of how virtualization can be securely incorporated into space systems.
TABLE OF CONTENTS
I. INTRODUCTION........................................................................................................1
A. MOTIVATION ................................................................................................1
B. IMA AND IMA-SP ..........................................................................................3
C. THESIS ORGANIZATION ............................................................................4
II. BACKGROUND ..........................................................................................................5
A. REAL-TIME OPERATING SYSTEMS .......................................................5
B. REAL-TIME OPERATING SYSTEMS IN SPACE ....................................6
C. SOFTWARE COMPLIANCE IN SPACE SYSTEMS.................................7
1. DOD Standards ....................................................................................7
a. IEEE 1228 and NASA-STD-8719.13B ................................................8
D. VIRTUALIZATION BACKGROUND .......................................................11
1. Hypervisor Terminology ...................................................................12
2. Full Virtualization Architectures .....................................................13
3. Paravirtualization Architectures ......................................................13
4. Software Emulation Architectures ...................................................15
5. Hardware-Assisted Virtualization Architectures ...........................15
6. Example Architectures ......................................................................16
7. Microkernel and Microvisor .............................................................20
III. REAL-TIME OPERATING SYSTEMS FOR SPACE ..........................................23
A. SCOPE ............................................................................................................23
B. VXWORKS ....................................................................................................27
1. Design ..................................................................................................28
2. Analysis ...............................................................................................31
C. REAL-TIME LINUX.....................................................................................31
1. RTLinux, Xenomai, and RTAI .........................................................32
2. PREEMPT_RT ..................................................................................34
3. Analysis ...............................................................................................35
D. GREEN HILLS INTEGRITY-178B ............................................................36
1. Design ..................................................................................................36
2. Analysis ...............................................................................................38
E. FREERTOS ....................................................................................................39
1. Design ..................................................................................................40
2. Analysis ...............................................................................................41
F. LYNXOS-178..................................................................................................41
1. Design ..................................................................................................42
2. Analysis ...............................................................................................43
G. RTEMS ...........................................................................................................44
1. Space Standards Compliance............................................................44
2. Design ..................................................................................................45
3. Analysis ...............................................................................................47
H. ADDITIONAL REAL-TIME OPERATING SYSTEMS...........................47
1. LithOS .................................................................................................47
2. VxWorks 653 ......................................................................................48
IV. VIRTUALIZATION ARCHITECTURES USED IN SPACE...............................51
A. XTRATUM .....................................................................................................53
1. Design ..................................................................................................54
2. Partition Management .......................................................................54
3. Memory Management .......................................................................55
4. Scheduling Management ...................................................................55
5. Analysis ...............................................................................................56
B. ARLX ..............................................................................................................56
1. Design ..................................................................................................57
2. Partition Management .......................................................................58
3. Analysis ...............................................................................................59
C. PIKEOS ..........................................................................................................60
1. Design ..................................................................................................61
2. Partition Management .......................................................................61
3. Memory Management .......................................................................62
4. Scheduling Management ...................................................................62
5. Analysis ...............................................................................................63
D. AIR ..................................................................................................................63
1. Design ..................................................................................................64
2. Scheduling Management ...................................................................64
3. Memory Management .......................................................................64
4. Analysis ...............................................................................................64
E. ADDITIONAL VIRTUALIZATION ARCHITECTURES .......................65
1. Green Hills Multivisor .......................................................................65
2. Wind River Hypervisor .....................................................................66
3. SafeHype .............................................................................................67
4. NOVA ..................................................................................................67
5. Proteus ................................................................................................68
6. X-Hyp ..................................................................................................70
7. RT-Xen ................................................................................................70
V. REMOTE FINGERPRINTING OF VIRTUALIZED OPERATING
SYSTEMS ...................................................................................................................73
A. MOTIVATION ..............................................................................................73
B. TEST METHODOLOGY .............................................................................73
1. TCP Timestamp Option ....................................................................73
2. Prior Work .........................................................................................74
C. TEST PLAN ...................................................................................................75
1. Hardware and Software Decisions ...................................................77
2. Test Execution ....................................................................................77
3. Test Notation ......................................................................................79
D. ANALYSIS .....................................................................................................80
1. Observation 1: MSE Is Not Sensitive to Session Length ................80
2. Observation 2: Frequency Calculation Appears Relatively
Stable with Respect to Packet Selection ...........................................81
3. Observation 3: MSE[A] ≠ MSE[B] (for all A≠B, except [RT]) ......82
4. Observation 4: No Obvious Difference in MSE Behavior
between Virtualized and Bare Metal Configurations .....................82
5. Observation 5: MSE[A/F] ≠ MSE[A/W] ..........................................84
6. Observation 6: MSE[A/X] > {MSE[A/F], MSE[A/W], MSE[A]}
for all A ≠ [RT] ...................................................................................84
7. Observation 7: MSE[RT] ≠ MSE[A] for all A ≠ RT .......................85
8. Observation 8: MSE[RT] ≈ MSE[RT/F] ≈ MSE[RT/W] ≈
MSE[RT/X].........................................................................................85
9. Observation 9: MSE[RT] ≈ MSE[RT-1FF] ≈ MSE[RT-1RR] .......86
10. Observation 10: MSE[RT-S/W] > {MSE[RT], MSE[RT-T],
MSE[RT/A]} .......................................................................................87
11. Observation 11: MSE[RT-S/A] ≈ MSE[RT-T] ≈ MSE[RT] for
A ≠ W ...................................................................................................87
12. Observation 12: MSE[RT-S/A] ≈ MSE[RT-T/B] for A, B ≠ W .....87
13. Observation 13: [A/B] is more like [A] than [B] for A ≠ B and
A≠F ......................................................................................................88
E. DISCUSSION .................................................................................................88
VI. CONCLUSION AND FUTURE WORK .................................................................91
APPENDIX A. BARE METAL, 1.5-HOUR RUN .............................................................93
APPENDIX B. BARE METAL, 10-MINUTE RUN ..........................................................95
APPENDIX C. VIRTUALIZED LINUX ............................................................................97
APPENDIX D. VIRTUALIZED WINDOWS ....................................................................99
APPENDIX E. VIRTUALIZED PREEMPT_RT ............................................................101
APPENDIX F. PREEMPT_RT, FIFO SCHEDULING ..................................................103
APPENDIX G. PREEMPT_RT, ROUND ROBIN SCHEDULING ..............................105
SUPPLEMENTAL ...............................................................................................................107
LIST OF REFERENCES ....................................................................................................109
INITIAL DISTRIBUTION LIST .......................................................................................127
LIST OF FIGURES
LIST OF TABLES
LIST OF ACRONYMS AND ABBREVIATIONS
APEX applications/executive
ARLX ARINC-653 Real-time Linux on Xen
AIR ARINC-653 Interface in RTEMS
I/O input/output
IMA Integrated Modular Avionics
IMA-SP Integrated Modular Avionics for Space
ISA instruction set architecture
OS operating system
RAD-HARD radiation-hardened
RTEMS Real-Time Executive for Multiprocessor Systems
RTOS real-time operating system
RTP real-time process
SMP symmetric multiprocessing
SPAWAR Space and Naval Warfare Systems Command
SWaP size, weight and power
ACKNOWLEDGMENTS
I thank my family more than anything for supporting me over the past two years
and for being patient while I worked on this thesis. I also thank my fellow classmates,
especially Francisco Gutierrez-Villarreal, for helping me get through this program and
helping refine my Python graphing skills. Lastly, I thank my thesis advisors for putting
up with me and mentoring me through this process.
I. INTRODUCTION
A. MOTIVATION
The use of space systems has grown dramatically since their inception. This
motivates the development of new space system architectures able to support this
demand. General William Shelton of Air Force Space Command observes that space, once a domain in which a single satellite orbited Earth, now supports nearly every United States military operation across the world (Garamone, 2014). The
Department of Defense (DOD) relies on space systems for a broad spectrum of services,
including communications, mission specific intelligence, operational awareness and
weather analysis. The 2000 National Reconnaissance Office (NRO) Commission Report
describes how the demand for data from NRO satellites has increased disproportionately to the resources provisioned, putting pressure on the office to meet all the requirements from its customers (“Report of the National Commission for the Review of the National Reconnaissance Office,” 2000). The DOD recognizes this strain on space
system resources and is developing strategies to overcome such issues. One such strategy
is the development of alternative architectures to make space systems more flexible, more
secure and less costly. The 2011 National Security Space Strategy emphasizes the need to
develop a “resilient, flexible, and healthy space industrial base” and states that it will
“continue to explore a mix of capabilities with shorter development cycles to minimize
delays, cut cost growth, and enable more rapid technology maturation, innovation, and
exploitation” (Department of Defense [DOD], 2011).
At the same time, the functional requirements of embedded systems in the space
domain and the hardware that supports them have become more complex over the past
two decades (Andrews, Bate, Nolte, Otero-Perez, & Petters, 2005; Windsor, Deredempt,
& De-Ferluc, 2011). Many systems are now moving to multicore processors instead of
single core processors, which complicate the systems’ ability to safely and securely
support isolated real-time processes (Santangelo, 2013). As a result, efforts are being
made to consolidate the code base of these complex systems and to design a robust
management infrastructure to maintain temporal and spatial isolation between real-time
applications and to limit security vulnerabilities (Joe et al., 2012; DaSilva, 2012; Windsor
et al., 2011).
The DOD faces a number of challenges in the space domain given the numerous
requirements for space systems vital to national security today. The DOD needs to
incorporate the growing complexities of embedded systems in space while
simultaneously cutting the costs of space missions and increasing the flexibility and
adaptability of these systems. The U.S. space industry is exploring different ways to
effectively address these needs (Cudmore, 2013). One solution that has gained
considerable traction is to move away from federated system architectures, integrating
software components into a tightly-coupled, modular architecture. The avionics industry
paved the way to such an integrated architecture with its development of the integrated
modular avionics (IMA) architecture. The space industry is now in the process of
developing an architecture similar to IMA that addresses the unique requirements of
space systems (Windsor et al., 2011).
The IMA conceptual architecture centralizes the various functions and services
involved in a complex avionics system onto a single set of physical resources (Rushby,
2000; DaSilva, 2012). IMA was introduced by the commercial avionics industry in the
1990s (Ramsey, 2007). The motivation for IMA was to reduce costs associated with
distributed hardware systems while maintaining the ability to manage the software in
avionics systems efficiently, safely and securely. IMA was also meant to make system
development easier by enabling incremental validation and parallel development of
components (Windsor & Hjortnaes, 2009). The two key principles of security in the IMA
construct are spatial and temporal isolation (Parkinson, 2011). Spatial isolation is
achieved through software partitions, which are implemented in order to handle fault
containment. If a fault event occurs in one partition, it is isolated to that partition and
does not affect the other partitions in the system (Rushby, 2000). Temporal isolation is
achieved through a statically defined scheduling algorithm for each partition, which
regulates the amount of processing power each partition receives (DaSilva, 2012). An
attractive method for implementing the IMA concept is through virtualization. Instead of
having a distributed network of hardware devices that are each dedicated to specific
functions, virtualization allows applications running in different software partitions to
share the same hardware resources. IMA’s use is widespread throughout the commercial
avionics industry (FAA, 2007) and its successful implementation has motivated the space
industry to consider a similar conceptual framework (Diniz & Rufino, 2005).
considered in the development of an IMA framework for space, including the limited
power, mass and volume resources of space systems, which the ESA is currently studying
(DaSilva, 2012). Windsor et al. (2011) also discuss the ESA’s work in evaluating
Integrated Modular Avionics for Space (IMA-SP) and the current work in defining and
demonstrating the IMA-SP construct with other members of the space community.
NASA is cognizant of the need for more modular software architectures in space and is
currently researching the benefits of virtualization and partitioning architectures in space,
with the same goals as IMA-SP (Cudmore, 2013; Rushby, 2011). Many U.S. companies
developing software products for the aerospace industry are also aware of the movement
towards integrated architectures in space and are developing products that adhere to the
IMA architecture.
C. THESIS ORGANIZATION
II. BACKGROUND
In this chapter, we review a number of topics that provide context for the real-
time operating systems and virtualization architectures we survey later. First, we discuss
real-time operating systems and the requirements for real-time operating systems in
space. We review security criteria for space systems and software standards for space
applications. Finally, we review common virtualization architectures and prior work
relating to virtualization with real-time operating systems.
Characteristics of an RTOS
There are three primary categories for deadlines of real-time tasks: soft, firm and
hard. Soft deadlines are those that are desirable but, if not met, will not cause serious
damage to the system. If a firm deadline is missed, the system will not encounter total
failure but consecutive firm deadline misses could lead to system failure. Hard deadlines
are ones that, if missed, result in catastrophic consequences to the system (“RTOS 101,”
n.d.).
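The soft/firm/hard taxonomy can be captured in a small decision function. The tolerance threshold for consecutive firm-deadline misses below is an invented illustration; the actual number is system-specific.

```python
# Toy sketch of the soft/firm/hard deadline taxonomy described above.
# The firm_tolerance threshold is an illustrative assumption.

def assess_miss(kind, consecutive_misses, firm_tolerance=3):
    """Judge the consequence of a missed deadline.

    kind: "soft", "firm", or "hard"
    consecutive_misses: deadlines missed in a row
    """
    if kind == "hard":
        return "system failure"        # any miss is catastrophic
    if kind == "firm":
        if consecutive_misses >= firm_tolerance:
            return "system failure"    # repeated misses accumulate
        return "degraded"              # a single miss is survivable
    if kind == "soft":
        return "degraded"              # undesirable but never fatal
    raise ValueError(f"unknown deadline kind: {kind!r}")

if __name__ == "__main__":
    print(assess_miss("hard", 1))    # system failure
    print(assess_miss("firm", 1))    # degraded
    print(assess_miss("soft", 10))   # degraded
```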
RTOSs are used extensively in space operations due to the time-sensitive and
safety-critical operations handled, such as attitude and orbit control, navigation,
communications, critical payload management and power management (Keesee, 2003).
Unlike those for terrestrial systems, RTOSs for space systems must perform their
functions under harsh environments over the lifetime of the space mission, which can be
over a decade in some cases (Air Force Space Command, 2013). Additionally, RTOSs
must be compatible with space-qualified hardware. For example, a relatively small
number of processors are designed to withstand the radiation present in space
environments by being radiation-hardened (RAD-HARD) (Beus-Dukic, 2001; Ginosar,
2012). Further, efforts need to be taken to manage the size, weight and power (SWaP) of
all space system components, including the operating system. Thus, RTOSs used for
space systems often have a smaller memory footprint to accommodate SWaP constraints
(Jones & Gross, 2014; Cudmore, 2013).
programming languages and the availability of development tools. Unfortunately, there has not been a comparable survey since, but their data give us some insight into what criteria developers might use when choosing a commercial RTOS.
1. DOD Standards
b. DO-178B
c. ARINC-653
Figure 1. Example Application of the ARINC-653 Specification (from “ARINC
653,” 2008)
At the heart of the ARINC-653 specification are two main concepts: the partition
and the applications/executive (APEX) layer. The partition is intended to be a container
for applications running on the operating system, ensuring applications are separated
spatially and temporally from one another to avoid fault propagation (Gomes, 2012).
Partitions can also be used for system services not available through the APEX interface,
like fault management or device drivers (Samolej, 2011).
services; instead, it assumes memory is statically allocated to partitions at configuration
time (Samolej, 2011).
D. VIRTUALIZATION BACKGROUND
1. Hypervisor Terminology
Popek and Goldberg (1974) define two primary types of hypervisors: type-1 (or
native) and type-2 (or hosted). Type-1 hypervisors run directly above the host system’s
hardware and provide all VM resources. Type-2 hypervisors operate on top of a host
environment and are dependent on this underlying OS for maintenance and distribution of
resources. For example, type-2 hypervisors cannot boot until the host operating system
has booted and, in the event the host operating system crashes, so too does the type-2
hypervisor (Jones, 2010). Figure 2 illustrates type-1 and type-2 hypervisors.
2. Full Virtualization Architectures
Full virtualization relies on binary translation to mediate the VM’s privileged machine code before it reaches the hardware. Binary translation is a process whereby the hypervisor scans a
VM’s memory for privileged instructions before they are executed, and dynamically
modifies these into code that the hypervisor can emulate for the hardware (Binu &
Kumar, 2011). Full virtualization tends to have high overhead due to the need to translate
machine code, and the frequency of traps between the VM and the hypervisor (Jeong,
2013).
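As a rough sketch of the scan-and-rewrite idea, on an invented, string-based instruction set rather than real machine code:

```python
# Toy illustration of binary translation as described above: the
# "hypervisor" scans guest code for privileged instructions before they
# run and rewrites them into safe emulation calls. The instruction set
# and the set of privileged ops are invented for illustration.

PRIVILEGED = {"cli", "sti", "hlt", "out"}   # assumed privileged ops

def translate(block):
    """Rewrite one basic block, trapping privileged instructions."""
    rewritten = []
    for insn in block:
        op = insn.split()[0]
        if op in PRIVILEGED:
            # Replace with a call into the hypervisor's emulator.
            rewritten.append(f"emulate({insn!r})")
        else:
            rewritten.append(insn)          # unprivileged: run as-is
    return rewritten

if __name__ == "__main__":
    guest = ["mov r1, 4", "cli", "add r1, r2", "out 0x3f8"]
    for line in translate(guest):
        print(line)
```

The overhead the text mentions comes from exactly this extra pass: every privileged instruction becomes an excursion into the hypervisor instead of a direct hardware operation.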
3. Paravirtualization Architectures
The modified instructions used by paravirtualized guest OSs are called hypercalls.
Hypercalls are software traps from the VM’s virtual driver to the hypervisor (LeVasseur
et al., 2005; “Xen Hypercall,” n.d.). Paravirtualization tends to be simpler and faster than
full virtualization but has considerable engineering cost, since each guest OS is modified
to be aware that it does not run on native hardware (Barham et al., 2003).
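As a toy contrast with binary translation, a paravirtualized guest is modified to request privileged services explicitly. The hypercall names below are invented for illustration and do not correspond to any real hypervisor interface.

```python
# Toy sketch of the hypercall mechanism described above: the guest is
# modified at the source level to trap into the hypervisor rather than
# issue privileged instructions. All names here are illustrative.

class TinyHypervisor:
    def __init__(self):
        # Dispatch table of services exposed to guests (assumed names).
        self.calls = {
            "set_timer": lambda ms: f"timer armed for {ms}ms",
            "map_page": lambda addr: f"page mapped at {addr:#x}",
        }

    def hypercall(self, name, *args):
        """Entry point a guest traps into instead of touching hardware."""
        if name not in self.calls:
            raise PermissionError(f"unknown hypercall: {name}")
        return self.calls[name](*args)

class ParavirtGuest:
    """A guest OS aware it is virtualized: it calls the hypervisor
    directly rather than executing privileged instructions."""
    def __init__(self, hv):
        self.hv = hv

    def schedule_tick(self):
        # Instead of programming timer hardware, ask the hypervisor.
        return self.hv.hypercall("set_timer", 10)

if __name__ == "__main__":
    guest = ParavirtGuest(TinyHypervisor())
    print(guest.schedule_tick())
```

Compared with the binary-translation sketch, no scanning or rewriting happens at run time; the engineering cost is moved into modifying the guest's source.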
4. Software Emulation Architectures
6. Example Architectures
a. VMWare Workstation
Non-privileged instructions executed on the guest OS are sent through the VMM
directly to the host system to be processed. Privileged instructions, however, are trapped
by the VMM and translated via binary translation. The VMDriver then facilitates a
transfer so that the VMM can communicate with the host OS. Once in the “host world,”
the VMApp-translated instructions are communicated via the VMApp to the host OS,
which executes the instruction (Rosenblum & Garfinkel, 2005; Chiueh & Brook, 2005;
USENIX, 2001).
b. XEN
c. QEMU
d. KVM
e. VMware ESXi
Armand and Gien suggest that the use of microkernels is motivated by the
increasing complexity of operating systems (Armand, 2009). Microkernels are well suited
for use in embedded systems, which are often not designed to support a full-featured,
monolithic kernel. Microkernels allow systems to be designed in less complex ways and
in a more modular fashion since less functionality is included at the kernel level
(Armand, 2009). Security is another motivation for the development of microkernels.
Iqbal et al. observe that microkernels support the principle of least privilege:
functionalities at higher privilege levels are as limited as possible (Iqbal et al., 2009).
Only essential tasks, such as low-level address space management, thread management
and inter-process communication are handled by the microkernel.
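This division of labor can be sketched with a toy kernel that does nothing but route IPC messages, while a "file system server" runs as an ordinary task. All names here are illustrative, not drawn from any real microkernel.

```python
# Toy sketch of the microkernel principle described above: only message
# passing lives in the kernel; services such as a file system run as
# unprivileged tasks and communicate via IPC. Names are invented.

import queue

class MicroKernel:
    """Minimal kernel: its only job is routing messages between tasks."""
    def __init__(self):
        self.mailboxes = {}

    def register(self, task_name):
        self.mailboxes[task_name] = queue.Queue()

    def send(self, dest, message):
        self.mailboxes[dest].put(message)     # IPC is the kernel's job

    def receive(self, task_name):
        return self.mailboxes[task_name].get_nowait()

if __name__ == "__main__":
    kernel = MicroKernel()
    kernel.register("fs_server")              # file system is a user task
    # An application asks the file-system server for a file, via the kernel.
    kernel.send("fs_server", ("read", "/etc/motd"))
    print(kernel.receive("fs_server"))
```

The least-privilege argument falls out of the structure: if the file-system task misbehaves, it holds no kernel-level authority, because nothing beyond message routing runs in the privileged layer.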
III. REAL-TIME OPERATING SYSTEMS FOR SPACE
A. SCOPE
The purpose of our survey work is to review fundamental RTOS designs and
identify different methods of implementing key functionalities (see Table 2). Some
RTOSs have been excluded from this study, due to lack of industry adoption or lack of
available system information. This includes eCos (“Home Page,” n.d.), ThreadX
(“ThreadX,” n.d.), Wind River Linux (“Wind River Linux,” n.d.), QNX (QNX, n.d.),
Deos, HeartOS (“A Time,” n.d.), and Salvo (“Welcome,” n.d.). Table 2 summarizes the
pertinent attributes of an RTOS.
Table 2. RTOS Attributes Chart

RTEMS
  License: Open-source (GNU GPL)
  Supported languages: C, Ada, C++, Java, Go, Lua
  Supported APIs: POSIX, BSD Sockets, SAPI, Classic RTEMS API
  Relevant standards: None (the Space Qualified version is GSWS qualified)
  Hardware support: ERC32, LEON, ARM, Pentium, x86, MIPS, PowerPC
  Security modes: supervisor
  Memory footprint (kernel): ~1200MB (On-Line Applications Research Corporation, 2013)
  Memory protection: None (Evans, 2007)
  Scheduling: round robin, fixed priority, earliest deadline first, constant bandwidth, simple SMP, partitioned/cluster scheduler
  Performance evaluation: Yes
  Task1 execution mode: Privileged

FreeRTOS
  License: Open-source (Modified GNU GPL)
  Supported languages: C
  Supported APIs: FreeRTOS API
  Relevant standards: None (SafeRTOS is DO-178B certified)
  Hardware support: x86, Xilinx, ARM, PIC, Freescale, Cortex-M3 and ARM processors
  Security modes: user, supervisor (PowerPC)
  Memory footprint (kernel): ~5-10KB
  Memory protection: use of hardware MPU
  Scheduling: priority-based preemptive, cooperative, hybrid
  Performance evaluation: No
  Task execution mode: Privileged

PREEMPT_RT
  License: Open-source (GNU GPL)
  Supported languages: all Linux
  Supported APIs: POSIX
  Relevant standards: None
  Hardware support: all Linux
  Security modes: user, kernel
  Memory footprint (kernel): ~100MB
  Memory protection: None
  Scheduling: FIFO, round robin, batch, idle, other
  Performance evaluation: Yes
  Task execution mode: User

RTLinux
  License: Open-source (GNU GPL) or commercial (“Computer as a controller,” n.d.)
  Supported languages: all Linux
  Supported APIs: POSIX
  Relevant standards: None
  Hardware support: all Linux
  Security modes: user, kernel
  Memory footprint (kernel): ~9MB (Pettersson & Svensson, 2006)
  Memory protection: None
  Scheduling: FIFO, round robin, batch, idle, other
  Performance evaluation: Yes
  Task execution mode: Kernel

RTAI
  License: Open-source (GNU GPL)
  Supported languages: all Linux
  Supported APIs: RTAI Native API, POSIX
  Relevant standards: None
  Hardware support: ARM, x86, PowerPC
  Security modes: user, kernel
  Memory footprint (kernel): ~4.5MB (size of latest tar file)
  Memory protection: None (though use of the LXRT2 module allows applications to be written in user space) (Contributing Editor, 2001)
  Scheduling: FIFO, round robin, dual scheduler (RT microkernel and userland non-RT kernel)
  Performance evaluation: Yes
  Task execution mode: Kernel

Xenomai
  License: Open-source (LGPL)
  Supported languages: all Linux
  Supported APIs: Xenomai Native API, POSIX (skin)
  Relevant standards: None
  Hardware support: ARM, Blackfin, x86, PowerPC, Nios II (“Embedded Hardware,” n.d.)
  Security modes: user, kernel
  Memory footprint (kernel): ~20MB (size of stable release tar file)
  Memory protection: mmap POSIX facility
  Scheduling: FIFO, round robin, sporadic, TP, other
  Performance evaluation: Yes
  Task execution mode: Primary, secondary3

VxWorks
  License: Commercial
  Supported languages: C, C++, Ada, Java
  Supported APIs: VxWorks API, POSIX
  Relevant standards: customizable to be DO-178B certified
  Hardware support: ARM, Freescale, MIPS, Pentium, x86, etc.
  Security modes: user, kernel
  Memory footprint (kernel): ~20KB
  Memory protection: hardware MMU support configuration options; stack protection; POSIX
  Scheduling: round robin, preemptive priority-based
  Performance evaluation: Yes
  Task execution mode: Privileged or user (RTP tasks4 run in user mode)

VxWorks 653
  License: Commercial
  Supported languages: C, C++, Ada, Java
  Supported APIs: POSIX, VxWorks API, ARINC-653
  Relevant standards: ARINC-653
  Hardware support: Freescale, PowerPC, Intel IA-32
  Security modes: user, kernel
  Memory footprint (kernel): Unknown
  Memory protection: POSIX memory lock facility
  Scheduling: ARINC time-preemptive scheduling; priority-preemptive scheduling
  Performance evaluation: No
  Task execution mode: Supervisor, user (partitions run in user mode)

INTEGRITY-178B
  License: Commercial
  Supported languages: C, C++, Ada
  Supported APIs: ARINC-653; Integrity Kernel API
  Relevant standards: DO-178B; SKPP High Robustness
  Hardware support: x86, PowerPC, ARM, MIPS, Freescale, etc.
  Security modes: supervisor, user
  Memory footprint (kernel): Unknown
  Memory protection: access verification; processor MMU support
  Scheduling: ARINC partition scheduler (preemptive scheduler)
  Performance evaluation: No
  Task execution mode: Privileged or user

LithOS
  License: Open-source (license unknown)
  Supported languages: Unknown
  Supported APIs: ARINC-653
  Relevant standards: Unknown
  Hardware support: x86
  Security modes: Unknown
  Memory footprint (kernel): Unknown
  Memory protection: Unknown
  Scheduling: whatever is defined at configuration
  Performance evaluation: No
  Task execution mode: Unknown

LynxOS-178
  License: Commercial
  Supported languages: C, C++
  Supported APIs: ARINC-653; POSIX
  Relevant standards: DO-178B
  Hardware support: x86, PowerPC
  Security modes: user, kernel
  Memory footprint (kernel): Unknown
  Memory protection: POSIX memory lock facility
  Scheduling: FIFO, round robin, priority-based quantum (proprietary)
  Performance evaluation: No
  Task execution mode: User, kernel

1 The term “task” refers to the basic unit of execution for an RTOS.
2 LXRT is an RTAI module that allows real-time tasks to be developed and run in user space. LXRT processes can be migrated to kernel space.
3 Primary mode is equivalent to kernel mode and secondary mode is equivalent to user mode.
4 See the “VxWorks” section, where RTPs are discussed.
B. VXWORKS
Over the past 20 years, NASA has used VxWorks in a number of its missions
(“VxWorks Space,” n.d.). VxWorks 5.3.1 was used on a MIPS processor by the Mars
Exploration Rover (“VxWorks,” n.d.). Other versions of the operating system are being
used on other missions including the Cygnus Spacecraft, an unmanned cargo transport
vessel where VxWorks is running on the main flight computer (“Genesis,” n.d.).
VxWorks is also being used to control the flight computer of the MESSENGER probe, an
unmanned spacecraft orbiting Mercury (“Messenger,” n.d.; “VxWorks Space,” n.d.).
SpaceX, the private space travel company, uses an unspecified VxWorks platform on its
Dragon reusable spacecraft (“SpaceX,” n.d.).
5 Supports the 1003.1 standard but does not provide process creation capability with fork() or exec() or
file ownership and file permissions.
The VxWorks operating system is tightly coupled with the additional software
products designed for embedded systems that Wind River offers. As such, the operating
system is compatible with the Wind River Hypervisor. VxWorks can also run as an
unmodified guest operating system on the Green Hills Multivisor (“Integrity Multivisor,”
n.d.).
Wind River offers a suite of highly customizable and modular software products
with different design features based on the certifications or architectures required. As
such, there is no set of standards with which the core VxWorks operating system alone
complies. Wind River offers separate products, such as VxWorks 653, which complies
with the ARINC-653 specification, and the VxWorks CERT platform, which complies
with the DO-178 standard ("Profiles," n.d.).
1. Design
Conceptually, VxWorks reflects the “process model” similar to UNIX and Linux,
whereby kernel space and user space are clearly delineated and the applications that run
in these two spaces run at different privilege levels (“6.9 Guide,” n.d.). VxWorks can be
configured as a micro-kernel, a basic kernel or as a full-featured operating system. It is
unclear which versions of the operating system are commonly used in spacecraft but
documentation does confirm that VxWorks has been used in space systems of different
sizes, such as microsatellites (Teston, Vuilleumier, Hardy, & Bernaerts, 2004) and
unmanned spacecraft (“CIRA,” n.d.), which might indicate the use of different VxWorks
configurations in space systems. Figure 11 illustrates the various capabilities included in
each configuration.
Figure 11. VxWorks Kernel Scale Options (from “6.9 Guide,” n.d.)
a. Task Management
b. Scheduling Management
VxWorks supports three types of task schedulers, listed in Table 3. For all
schedulers, the default scheduling option is priority-based preemptive scheduling in
which a higher priority task can preempt a lower priority task to run.
c. Memory Management
VxWorks also offers a proprietary mapping facility called sdLib, which enables
RTP applications to share memory through a shared data region. Once established, user-
mode applications and kernel tasks have access to these shared data regions (“6.9 Guide,”
n.d., p. 66).
6 This applies to ARM, Intel and SuperH processors. On MIPS processors, if RTPs are not supported,
tasks run in kernel mode.
2. Analysis
VxWorks is a legacy RTOS that has proven its ability to perform on space
missions for a number of years. Reliability is a major decision factor for use in space
systems given the time and money involved in validating a new system. Space system
developers tend to choose VxWorks due to its proven reputation on high profile space
missions ("CIRA," n.d.; Volpe et al., 2000, p. 30).
C. REAL-TIME LINUX
There are several projects dedicated to making Linux capable of handling real-
time requirements (“Introduction to Linux,” 2002). These projects offer different
solutions to making Linux an RTOS. One approach taken by the RTLinux, Xenomai and
RTAI projects is to develop a software layer below the Linux kernel that handles real-
time requirements. A second approach, taken by the CONFIG_PREEMPT_RT
community (“Real-Time Linux Wiki,” n.d.), is to improve the existing Linux kernel to
meet real-time requirements with the PREEMPT_RT patch (McKenney, 2005;
Opdenacker, 2004; Clark, 2013). Each version of real-time Linux comes in the form of a
patch to the standard Linux kernel. With this approach, the portability of these RTOSs to
various hypervisors is comparable to main line Linux.
To the best of our knowledge, the different implementations of real-time Linux
run on all of the virtualization architectures surveyed in this thesis. The implementations
of real-time Linux do not comply with any space standards and the developers are open
about the fact that there are no guarantees with the real-time Linux code.
In a 2013 presentation, Keven Scharpf of the PTR group cited the PREEMPT_RT
patch as a viable solution to hard-real-time requirements for space systems. The PTR
group has worked on a number of space missions, including the Tacsat-2 microsatellite
mission, which was the first mission to use Linux in space (Scharpf, 2013). Wind River
also makes use of the PREEMPT_RT patch in its WindRiver Linux 4 and 6 products
(“Wind River Linux 4,” n.d.; “Wind River Linux 6,” n.d.).
RTLinux, Xenomai, and RTAI are all designed as “dual kernel” configurations.
These operating systems have some minor differences, but their fundamental approach to
making Linux real-time is the same. We will focus on the architecture of RTLinux for the
remainder of this section.
In RTLinux (see Figure 12), a microkernel extension is added to the Linux kernel
(Opdenacker, 2004). This extension is a set of Linux kernel modules that deal specifically
with real-time tasks by providing a subset of the POSIX API (“RTLinux,” n.d.). With this
alteration to the standard Linux kernel, a second real-time microkernel, i.e., RTLinux
Kernel, is placed under the standard Linux kernel, which runs as an idle task on top of the
RTLinux Kernel (Balasubramaniam, n.d.). Real-time applications are created as modules
that run on the RTLinux Kernel and are written using a subset of the POSIX API, based
on the POSIX Minimal Realtime System Profile, or PSE51 (Terrasa, Garcia-Fornes, &
Espinosa, 2002).
a. Task Management
All real-time tasks run at kernel privilege level and have direct access to the
hardware. All interrupts are intercepted by the RT-microkernel, which decides what to
do. If these interrupts have real-time handlers, then the RT-microkernel schedules them
first (Yodaiken, 1999).
b. Scheduling Management
The RT-microkernel has its own scheduler that is responsible for scheduling both
real-time and non-real-time tasks (Yodaiken, 2001). This scheduler is generally a
preemptive, priority-based scheduler, with task priorities statically determined.
c. Memory Management
In RTLinux and Xenomai, real-time tasks are allocated fixed amounts of memory
for data and code (Balasubramaniam, n.d.) and do not use virtual memory (Yodaiken,
2001). RTAI on the other hand, uses dynamic memory allocation (Balasubramaniam,
n.d.). For all three dual-kernel configurations of Linux, the real-time applications running
on top of the RT-microkernel share a common address space (Haas, n.d.).
2. PREEMPT_RT
The PREEMPT_RT patch (see Figure 13) makes the Linux kernel fully pre-
emptible through optimizations inside the kernel. The patch is sometimes referred to as
RT-PREEMPT, PREEMPT-RT, CONFIG_PREEMPT_RT or CONFIG_PREEMPT
(“Real-Time Linux Wiki,” n.d.). Unlike RTLinux, RTAI and Xenomai, PREEMPT_RT
does not include a separate kernel to handle real-time tasks. The goal of the
PREEMPT_RT project is to make the existing Linux kernel 100% pre-emptible (Rostedt
& Hart, 2007, pp. 161–172).
a. Design
The PREEMPT_RT patch allows the Linux kernel to become a predictable and
deterministic operating system (Rostedt & Hart, 2007). This is accomplished in two ways:
using threads to service selected device interrupts, and replacing existing spin locks with
mutexes that are preemptible and support priority inheritance (Fayyad-Kazan, 2014).
b. Task Management
c. Scheduling Management
The PREEMPT_RT patch does not include any modification to the schedulers
already available in the standard Linux kernel.
d. Memory Management
The PREEMPT_RT patch does not include any additional memory management
functionalities that are not already in use in the standard Linux kernel.
3. Analysis
The Naval Research Laboratory cited that Linux was used on its TacSat-1
spacecraft, primarily because accessibility to source code was vital for debugging
purposes and because of the ease of migrating development software on x86 platforms to
the actual PowerPC space processor. The TacSat-1, however, did not have any hard
real-time requirements, which was a reason why NRL chose Linux as opposed to a
proprietary RTOS like VxWorks (Huffine, 2005).
Linux is an attractive operating system for space systems given its widespread use
and legacy reliability in terrestrial systems. The real-time Linux projects surveyed offer
features like task prioritization and bounded latencies that provide useful determinism for
space systems. The projects however, have not been certified to any space standard and
the developers make no guarantee that the real-time Linux projects are suitable for hard
real-time systems. Key safety features like memory protection or static scheduling
policies (in PREEMPT_RT) are only as good as the standard kernel.
D. INTEGRITY-178B
NASA selected INTEGRITY-178B to operate the flight control module and the
backup emergency controller on the Orion Crew Exploration Vehicle, a space vessel
designed to carry astronauts to the moon. NASA chose INTEGRITY-178B since it was
considered the most mature RTOS and was the most cost-effective (“NASA’s Orion,”
2008). INTEGRITY-178B is also used on NASA’s Pad Abort Demonstrator, a test bed
platform meant to evaluate emergency abort scenarios for spacecraft crewmembers on the
International Space Station (“Green Hills Software,” 2003; “Pad Abort,” 2003).
1. Design
a. Task Management
All tasks associated with a partition (i.e., an AddressSpace) have an identifier that
links them to that AddressSpace. This task identifier is used for authentication and to
enforce authorized information flow and resource sharing. Tasks within a partition can
freely access resources allocated to the partition, but if a task tries to access resources
from a different partition, the task will be terminated. Access policies for each
AddressSpace are defined at configuration time.
b. Scheduling Management
c. Memory Management
The INTEGRITY kernel runs in a physical address space and leverages the
processor MMU to manage the virtual address spaces allocated to the partitions. Each
partition has its memory and data statically assigned. INTEGRITY does not support
dynamic memory allocation.
2. Analysis
INTEGRITY-178B is the only RTOS surveyed that has a separation kernel that
has undergone formal verification and been proven to perform at “high robustness” levels
by the National Information Assurance Partnership evaluation scheme. Security and
safety design considerations, such as memory protection, ARINC-653 scheduling
compliance and access policies for tasks are built into the RTOS, which make it suitable
for safety-critical missions. The RTOS is also a proven RTOS for space systems, given
its use in NASA missions.
E. FREERTOS
A port of FreeRTOS to Xen on ARM was introduced by the Oregon-based company
Galois (Daugherty, 2014).
1. Design
The kernel itself is composed of only three C source files: queue.c (queue
structures), list.c (the linked list used in the queue structure), and tasks.c (task and
scheduling logic) (Douglas, 2010).
a. Task Management
Tasks are defined as basic C functions and are the unit of execution. Applications
that run on FreeRTOS are treated as a set of independent tasks (Real Time Engineers,
Ltd., 2014). FreeRTOS supports one to one mapping of resources to tasks through the use
of “gatekeeper tasks,” which are tasks that have sole ownership of a resource. Only this
task can communicate with the resource directly; other tasks needing the resource need to
communicate with the resource’s gatekeeper (via a queue) which will then make the
resource available.
b. Scheduling Management
c. Memory Management
FreeRTOS supports a macro that is used to allocate protected regions on the ARM
memory protection unit (MPU), but this requires the specific port of FreeRTOS
to run on processors that include an MPU, such as the ARM Cortex-M3 (Real Time
Engineers, Ltd., 2014).
2. Analysis
The fact that manufacturers of microprocessors for small satellites are including
FreeRTOS on their chip sets indicates that FreeRTOS has a legacy in the small satellite
domain (“NanoMind Computers,” n.d.). Being open-source also makes FreeRTOS an
attractive option for missions with limited budgets. FreeRTOS is well documented and its
core code development is maintained separately from community contributions, which
makes revisions to the code consistent and traceable. The proprietary SafeRTOS version
of FreeRTOS offers potential flexibility to developers who might be interested in a more
secure version of the RTOS.
FreeRTOS, however, does not provide much in the way of security for its
applications. The small code base of the kernel limits the potential vectors for security
breaches, but protection mechanisms, such as memory protection, are not consistently
available across all versions of the RTOS. Furthermore, tasks can execute at the same
privilege level as the kernel.
F. LYNXOS-178
ranging and timing starting in the early 1990s but switched to RTAI in 2011 for cost
reasons (Ricklefs, n.d.). To the best of our knowledge, LynxOS-178 can only run as a
guest OS on the LynxSecure Microkernel hypervisor (“LynxOS-178,” n.d.).
1. Design
LynxOS-178 is fully POSIX compliant and uses POSIX as its native interface
(see Figure 15). LynxOS-178 also includes some ARINC-653 functions, such as health
monitoring, partition management, time and process management and the ARINC-653
API.
a. Task Management
POSIX threads are the basic scheduling entity. A task in LynxOS-178 is a group
of threads. Tasks run within partitions, which are spatially isolated blocks of memory
managed via the processor's MMU. LynxOS-178 uses a patented approach called
“priority tracking” to prevent priority inversion. Each task has two priority values
associated with it, one for kernel threads and one for user threads. Kernel threads that
handle interrupts do so “in step” with the user thread that actually requires the interrupt
(“Linux Software,” n.d.). This allows kernel threads to have their priority dynamically
changed so that they always have higher priority than user tasks (Carlgren & Ferej, n.d.).
b. Scheduling Management
c. Memory Management
2. Analysis
G. RTEMS
RTEMS has been and continues to be used in many different space projects.
RTEMS was used on the FedSat, a research microsatellite developed by an Australian
cooperative research group composed of university, commercial and government
organizations (“Operating Systems,” 2008; “Fed Sat 1,” n.d.) between 2003 and 2006.
RTEMS was also used on the Galileo GIOVE-A, ESA’s first prototype for a navigation
satellite (“Galileo Pathfinder,” 2010). RTEMS is a supported operating system on
NASA’s SpaceCube satellites (Seagrave, 2008) and is being used on NASA’s Mars
Reconnaissance Orbiter (Komolafe & Sventek, 2006/07; “Mars Reconnaissance Orbiter,”
n.d.).
RTEMS version 4.8.1 has been ported to run on the XtratuM hypervisor as a para-
virtualized guest OS. The ported code includes board support packages for the LEON2
and LEON3 processors (“RTEMS,” n.d.). RTEMS can also run on the PikeOS
microkernel developed by SYSGO (“SYSGO’s Safe and Secure,” 2010) and on the AIR
microkernel. RTEMS is the basis for the hardware abstraction layers of AIR but can also
run as a client partition alongside the ARINC-653 API (Schoofs, 2011).
The European Space Agency used version 4.8.0 of RTEMS to develop a “space-
qualified” version of RTEMS that was qualified under the Galileo software standard
(GSWS) to work on the ERC32, LEON2 and LEON3 processors. The GSWS is a space
system software compliance policy that sets standards for the development, integration
and testing of software used specifically in ESA's Galileo program. GSWS requires
independent module/unit testing to ensure software safety and assurance (Feldt, Torkar,
Ahmad, & Raza, 2010). The ESA considered validating RTEMS with DO-178B but
decided GSWS was a more complete standard at the time, hence its use. The
space-qualified version of RTEMS consists of a series of scripts and patches that, when
applied to RTEMS code, delete some managers and add others, making the system
qualified up to GSWS Development Assurance Level B, which means that the OS does
not contain any unused code (Silva, 2009).
RTEMS has continued to evolve and as of version 4.10 ESA’s version is not
maintained in the main RTEMS repository (Lee, 2012), which makes consistent
development a challenge. ESA’s goal was to make RTEMS a building block in space
missions but it first needed to get RTEMS TRL6 certified (“Definition of Technology,”
n.d.). To achieve this goal, the ESA decided to focus on the components of RTEMS that
were relevant to ESA space missions and enlisted the firm Edisoft to establish an RTEMS
maintenance center that dealt only with the RTEMS developments being made by ESA
instead of the general RTEMS community (“Operating Systems,” 2008). This divergence
has led to some confusion and frustration among developers, who are unclear about
which version of RTEMS to work with for space projects (Lee, 2012).
2. Design
Figure 16. RTEMS Conceptual Architecture (from “RTEMS Architecture,” n.d.)
a. Task Management
Tasks are defined in RTEMS as the “smallest thread of execution that can
compete on its own for resources” (On-Line Applications Research Corporation, 2013, p.
64). When a task is created, it is allocated a task control block data structure. The TCB is
the only RTEMS internal data structure that an application can access and modify. Tasks
have a priority assigned to them when they are initially created (On-Line Applications
Research Corporation, 2013).
b. Scheduling Management
The RTEMS scheduler is in charge of managing a given set of tasks in the ready
state and determining when tasks get executed. The default scheduling algorithm is a
priority-based scheduler; however, developers can also work with the following: a simple
priority scheduler that maintains a single linear list (meant for small applications), an
earliest-deadline-first scheduler, a constant bandwidth server scheduler (each task is given
a CPU budget, and if the budget is exceeded a callback is invoked), a simple SMP
(symmetric multiprocessing) scheduler, or a partitioned/clustered scheduler, which allows
developers to choose different policies for different cores (On-Line Applications
Research Corporation, 2013).
c. Memory Management
RTEMS uses a flat memory model and does not support virtual memory
allocation, segmentation or MMU hardware support. The partition manager creates and
deletes partitions and dynamically allocates memory to them in fixed-sized units (On-
Line Applications Research Corporation, 2013). The POSIX mprotect() function can be
used to protect regions of memory (“RTEMS 4.10.99.0 On-line Library,” 2014).
3. Analysis
RTEMS is proven in the space community given its use in many different space
applications and its use by the ESA to develop a “space qualified” version of the RTOS.
Its compatibility with many different processors, as well as its extensive documentation
makes it an attractive RTOS for space system developers.
The following real-time operating systems are worth surveying due to their
compatibility with key virtualization architectures, despite the limited documentation on
how they function. We discuss the important attributes of each RTOS.
1. LithOS
spatial and temporal isolation that run on the XtratuM hypervisor, which the ESA is using
to evaluate virtualization and IMA-SP.
The XtratuM hypervisor (see Chapter IV) incorporates many of the ARINC-653
spatial and temporal isolation mechanisms. LithOS leverages these when running as a
virtual machine on the hypervisor. Additionally, LithOS provides support for multi-
processing, intra-process communication and process scheduling, which are services that
XtratuM does not provide.
LithOS follows the ARINC-653 standard and implements the ARINC-653 API, as
well as its own native API. LithOS also includes a few non-portable services, relating to
time and partition management, that the ARINC-653 API does not provide.
2. VxWorks 653
VxWorks 653 is an ARINC-653 certified operating system comprising a module
OS and a partition OS. The module OS is the supervisor-mode OS that enforces
time-space partitioning through memory management services and static schedules to
ensure fault isolation. The partition OS is designed to run within a VxWorks 653
user-mode partition, which is a virtualized run-time environment that supports applications.
The partition OS is also known as “vThreads,” a multi-threading system based on
VxWorks 5.5, which includes additional libraries that support the ARINC-653 APEX and
POSIX APIs. Each instance of vThreads also contains its own scheduler. Figure 18
illustrates the architecture of VxWorks 653.
Figure 18. VxWorks 653 Architecture (from Parkinson & Kinnan, n.d.)
Next, we discuss the virtualization architectures that are designed for, or are
applicable to the space domain.
IV. VIRTUALIZATION ARCHITECTURES USED IN SPACE
Table 5. Summary of Virtualization Architecture Key Attributes

INTEGRITY Multivisor
  License: Proprietary
  Internal design: Security kernel
  Development tools: Yes
  Documentation: Unavailable openly (see INTEGRITY RTOS)
  API and guests supported: All guests (designed to be OS agnostic)
  Standards: DO-178B, ARINC-653, EAL 6+
  Footprint (kernel): Unknown
  Performance evaluation: No
  Space use status: No

VxWorks Hypervisor
  License: Proprietary
  Internal design: Configurable
  Development tools: Wind River Workbench
  Documentation: Unavailable openly (see VxWorks Hypervisor)
  API and guests supported: All guests (designed to be OS agnostic)
  Standards: None
  Footprint (kernel): Depends; highly modular
  Performance evaluation: No
  Space use status: No

XtratuM
  License: Open-source GPL or proprietary
  Internal design: Monolithic kernel
  Development tools: No
  Documentation: Yes
  Hardware support: x86, ARM, PowerPC
  API and guests supported: LithOS, paRTiKle, Linux, RTEMS
  Standards: Unknown
  Footprint (kernel): 10K lines of code
  Performance evaluation: ESA
  Space use status: ESA

ARLX
  License: Permissive after subscription
  Internal design: Xen-based
  Development tools: No
  Documentation: Some
  Hardware support: ARM, x86
  API and guests supported: All guests supported by Xen
  Standards: DO-178C
  Footprint (kernel): ~70K
  Performance evaluation: Yes
  Space use status: Yes

PikeOS
  License: Proprietary
  Internal design: Microkernel
  Development tools: Yes
  Documentation: Some
  Hardware support: x86, MIPS, PowerPC, ARM, SPARC V8/LEON
  API and guests supported: Linux; RTEMS; POSIX, Ada
  Standards: DO-178B, MILS and ARINC-653
  Footprint (kernel): Unknown
  Performance evaluation: NASA
  Space use status: NASA

AIR
  License: Open-source
  Internal design: Microkernel
  Development tools: Unknown
  Documentation: No
  Hardware support: All
  API and guests supported: All guests (designed to support most OSes)
  Standards: ARINC-653
  Footprint (kernel): Unknown
  Performance evaluation: Yes (ESA)
  Space use status: Unclear; current status unknown

NOVA
  License: Open-source
  Internal design: Separation kernel
  Development tools: No
  Documentation: Yes
  Hardware support: x86
  API and guests supported: All guests (via emulation)
  Standards: None
  Footprint (kernel): 9K lines of code
  Performance evaluation: No
  Space use status: No

X-hyp
  License: Proprietary
  Internal design: Unknown
  Development tools: Unknown
  Documentation: No
  Hardware support: ARM-9, Cortex
  API and guests supported: FreeRTOS, Linux, RTEMS
  Standards: None
  Footprint (kernel): Unknown
  Performance evaluation: No
  Space use status: No

Proteus
  License: Unknown
  Development tools: No
  Documentation: No
  Hardware support: PowerPC
  API and guests supported: All guests (via full virtualization)
  Standards: None
  Footprint (kernel): 15 KB
  Performance evaluation: No
  Space use status: No

RT-Xen
  License: Open-source
  Internal design: Xen-based
  Development tools: No
  Documentation: No
  Hardware support: All (Xen-supported)
  API and guests supported: Linux guests (unspecified versions)
  Standards: None
  Footprint (kernel): Unknown
  Performance evaluation: No
  Space use status: No
A. XTRATUM
To the best of our knowledge, XtratuM has yet to be deployed in space; however,
considerable research is in progress focusing on its ability to support space systems.
Since 2012, the ESA has been conducting a set of studies to evaluate the effectiveness of
using time-space partitioned (TSP) architectures in space, using XtratuM as the base for
this research. These studies are conducted under the ESA’s EagleEye virtual space
mission intended for software testing (“New-generation Aircraft,” n.d.). As of 2013, the
EagleEye TSP project has tested XtratuM version 3.4 with support for the LEON3
processor with a memory management unit (Bos et al., 2013). In 2014, Carrascosa et al.
(2014) documented porting XtratuM to the LEON4 multicore processor in support of the
ESA’s ongoing efforts to test and evaluate XtratuM’s performance with multicore
processors.
NASA (n.d.) also carried out some research with the XtratuM hypervisor during
the 2012 Internal Research and Development Program (IRAD) that sought to demonstrate
the benefits of virtualization on the LEON 3 flight processor.
1. Design
2. Partition Management
inter-process communication policies defined at configuration time for each partition.
System partitions are able to manage the system but still rely on the hypervisor to access
hardware. For multi-thread applications, the operating system or run-time support
libraries on which the applications run must support threads (“Xtratum Hypervisor,”
2011). This is different from the ARINC-653 specification for partitions, which isolates
and manages threads and processes inside a partition through the use of a defined API.
3. Memory Management
4. Scheduling Management
5. Analysis
B. ARLX
ARLX is available via subscription under a permissive license, meaning that with
an initial purchase all source code is available and can be modified. ARLX is compatible
with ARM and x86 family processors and supports any operating systems compatible
with Xen (VanderLeest, Greve, & Skentzos, 2013). A Navy-fielded deployment of
ARLX runs VxWorks and Integrity in guest domains (Santangelo, 2013).
MicroSATs. QuickSAT is being used in NASA research centers and by the Air Force
Research Laboratory’s University NanoSat program (Santangelo, 2013).
1. Design
ARLX's core architecture follows that of Xen, but it modifies the kernel and adds
another privileged domain, in addition to Dom0, to support input and output. The code
base of ARLX is 30–50% smaller than the generic code base of Xen, which
DornerWorks claims is over 150,000 lines of code. The designers of ARLX point out that
ARLX is still a work-in-progress and that the hypervisor is in heavy development. As a
result, some features, like minimized partition memory footprints, optimized partition
switching mechanisms and full ARINC-653 compliance are still future projects (Greve &
VanderLeest, 2013). The current status of these projects is unknown.
In ARLX, the Xen kernel is modified so that it implements time and space
partitioning according to the ARINC-653 standard. The typical Xen scheduler is replaced
with the ARINC-653 scheduler. Additionally, an ARINC-653 memory manager replaces
the traditional Xen memory manager in the Xen kernel. To the best of our knowledge,
ARLX requires an MMU to enforce spatial isolation. The inter-partition ARINC-653 API
is added to Xen’s communication architecture, which allows for ARINC-653 compliant
inter-partition communication mechanisms (Greve & VanderLeest, 2013). The
developers of ARLX define five security policy domains that are used to enforce
information flow between partitions. Security domains refer to information flow levels
and not to the guest domains running on top of Xen. These security domains are listed in
Table 6.
Table 6. ARLX Security Domains (from Greve & VanderLeest, 2013)
ARLX_INIT: Initialization read-only data for system startup
ARLX_CONFIG: Configuration data, only written at system initialization; read-only while the system is running
ARLX_XEN: State of the Xen hypervisor
ARLX_DOM0: State of Xen Dom0 (privileged domain)
ARLX_DOMU: State of Xen DomUs (non-privileged)
Information flows from top down with two exceptions: Dom0 and Xen are able to
communicate freely and each DomU can communicate if specified by configuration.
There is no domain defined for the privileged I/O domain.
2. Partition Management
3. Analysis
C. PIKEOS
PikeOS (see Figure 21) was used as the hypervisor in NASA’s 2013 Internal
Research and Development Program. This program explored flight hardware
virtualization for science data processing, to consolidate multiple physical processors to
reduce their size, weight and power consumption and to increase security on flight
systems (“Fall 2013,” 2013). Their test configuration consisted of PikeOS running on a
LEON3 processor, supporting ElinOS (SYSGO’s version of embedded Linux) in one
partition and custom Goddard Space Flight Center (GSFC) software running in another
partition. The ElinOS VM was used to do non-critical science data processing that did not
have real-time requirements, and the GSFC partition was used as the core flight executive
that handled critical functions with hard real-time requirements. The project
demonstrated that when the ElinOS partition crashed, it had no effect on the GSFC
partition. The 2013 tests with PikeOS also demonstrated that multiple flight processors
can be booted in virtual machines and that virtual machines can be rebooted individually
mid-flight (NASA, n.d.; Cudmore, 2013).
9 Paravirtualized virtual machines can also leverage hardware assisted virtualization if the processor
supports it.
Figure 21. PikeOS Architecture (from Lehrbaum, 2013)
1. Design
There are two layers to the PikeOS architecture: the microkernel layer and the
virtualization layer (see Figure 21). The microkernel is responsible for managing address
space separation, partition scheduling, inter-partition communication and enforcing
communication control and access measures for threads and tasks (Tverdyshev, 2011;
Müller, Paulitsch, Tverdyshev, & Blasum, 2012). The virtualization layer is responsible
for implementing the API for partitions and guest applications.
PikeOS has two primary abstractions: tasks and threads. Threads are always
associated with a task and execute based on the task’s state. Tasks consist of a virtual
address space, threads and other resources that they might be allocated. The microkernel
controls all resources in the system, is responsible for managing communication for tasks
and threads and delegating use of resources to partitions based on the security policy set
at configuration time (Tverdyshev, 2011; Baumann, Bormer, Blasum, & Tverdyshev,
2011).
2. Partition Management
Each partition consists of a set of tasks, threads and communication ports (as
defined in the ARINC-653 API). It is the job of the virtualization layer to instantiate these
partitions, mediate communication with other partitions based on a pre-defined security
policy and control access to system resources.
3. Memory Management
The microkernel has a memory manager that assigns address space through
memory pages to the partitions. Partition memory pages are statically defined at
configuration time and assigned to partitions by the memory manager at run-time. At
run-time, each partition can dynamically store data and allocate memory to its
applications through these memory pages (Baumann et al., 2011).
4. Scheduling Management
Partitions running on PikeOS are statically assigned a priority level, and the
microkernel schedules partitions based on this priority. In addition, PikeOS uses what are
referred to as “time domains” in which priority-based scheduling of threads is based on
their “class,” i.e., time-driven, event-driven or non-real-time. Event-driven and time-
driven threads are assigned a higher priority than other threads. Threads are grouped into
time domains and can only execute when their time domain is active, no matter their
priority.
There are two types of time domains: foreground and background domains. The
foreground domain is always running, and the background domain is scheduled by the
microkernel based on a static schedule determined at configuration. The background
domain can run at the same time as one other domain. Event-driven threads are assigned
to the background domain. The highest priority task between the two active domains gets
scheduled first. Low priority threads get executed when all event and time-driven threads
within their time domains are completed (Kaiser, 2007; Kaiser, 2009).
5. Analysis
D. AIR
1. Design
2. Scheduling Management
AIR has an ARINC-653 scheduling manager within the PMK that ensures priority-based partition scheduling, as well as POS schedulers that are responsible for scheduling processes within each partition. AIR also includes “timeliness enhancement mechanisms” within the PMK layer, which are meant to further ensure robust scheduling within the system (Rufino et al., 2009). One enhancement mechanism is mode-based scheduling, which gives the option of switching a partition between different scheduling modes. Another is process deadline monitoring, whereby the PMK verifies that the earliest-deadline tasks in a partition complete by their intended deadlines. If they do not, the PMK reports this to the ARINC-653-compliant health monitor.
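A deadline-monitoring check of this kind can be sketched in Python as follows. The task structure and the report callback standing in for the ARINC-653 health monitor are hypothetical illustrations, not AIR's actual interface.

```python
def monitor_deadlines(tasks, now, report):
    """Check each task against its deadline, reporting misses.  `tasks`
    is a list of dicts with 'name', 'deadline' (absolute time) and
    'done' flags; `report` stands in for the ARINC-653 health monitor
    interface that would receive the violation."""
    for task in tasks:
        if not task["done"] and now > task["deadline"]:
            report(f"deadline miss: {task['name']}")

# Hypothetical partition state at time 4.0: one task met its deadline,
# one is overdue and incomplete.
missed = []
tasks = [
    {"name": "attitude_ctrl", "deadline": 5.0, "done": True},
    {"name": "telemetry",     "deadline": 3.0, "done": False},
]
monitor_deadlines(tasks, now=4.0, report=missed.append)
print(missed)  # -> ['deadline miss: telemetry']
```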
3. Memory Management
AIR accounts for memory protection and management with the use of the
processor’s MMU or MPU. Each partition has its own page directory. Memory pages and
shared libraries can be shared between partitions. POS and APEX code can also be
shared across partitions (“Air Overview,” 2011). Memory and code sharing between
partitions is done based on pre-defined inter-partition communication policies established
at configuration time (Rosa, 2011).
4. Analysis
Figure 22. AIR Architecture (from Rosa, 2011 ; Rufino et al. , 2009).
The following section briefly surveys several virtualization architectures worthy of mention, despite lack of consideration by the space community and/or lack of sufficient documentation to survey adequately.
The hypervisor, like the VxWorks operating system, is highly configurable and
offers different scheduling options on single or multicore processors, different means of
configuring external devices and different ways to virtualize each partition (full or partial
virtualization). The hypervisor is responsible for scheduling partitions (called virtual
boards) and uses time-slice or priority-driven methods. Threads are completely event-
driven, meaning they are only executed when an event prompts them. Developers have
the customization option of replacing the hypervisor scheduler. External device driver
management is also configurable: drivers can be located within partitions or within the
hypervisor and can be shared or private resources (“Wind River Hypervisor,” n.d.).
The hypervisor is not known to be compliant with any relevant standards, though
Wind River offers a separation kernel for systems requiring high assurance (not part of
this survey). The relationship between these two products is unclear. To our knowledge,
the Wind River hypervisor has not been deployed or considered for deployment in any
space system. The hypervisor is marketed primarily to the industrial control and
telecommunications industries.
3. SafeHype
4. NOVA
Table 7. The Five Kernel Objects in the NOVA Microvisor (from Steinberg &
Kauer, 2010)

Kernel Object        Function
Protection Domain    Spatial isolation
Execution Context    Thread and CPU execution
Scheduling Context   Temporal isolation
Portals              Intra-partition (domain) communication
Semaphores           Execution synchronization
What makes NOVA an interesting virtualization solution for space is the fact that it has a small code base, controls access to critical resources through its capability-based interface and is open source. The object-oriented approach to access control employed by the NOVA microvisor is similar to that of the proprietary INTEGRITY kernel, which regulates information flow through statically defined policies for subjects and objects (discussed in Chapter III). The main drawbacks of the microvisor are that it is compatible only with x86 processors, relies on processor virtualization support and makes no claim to support real-time systems.
5. Proteus
Figure 23. The Proteus Hypervisor Architecture (from Baldin & Kerstan, 2009)
There are two execution modes on the PowerPC processor that the hypervisor
uses: applications run in problem mode; interrupts, the virtual machine scheduler and the
inter-partition communication manager run in supervisor mode. Device drivers and other
non-critical resources are run in a separate partition on top of the hypervisor and run in
problem mode. Problem mode is subdivided into two logical modes: VM privileged
mode and VM problem mode. System calls made by the virtual machine are executed in
the VM privileged mode.
Proteus uses the PowerPC MMU for memory management and each VM running
on the hypervisor has its own dedicated address space that is statically defined. For
temporal isolation, Proteus supports different configurations of core support for virtual
machines. VMs can be dedicated to one core or can be divided among multiple cores.
The hypervisor uses a fixed time slice approach to scheduling, based on statically
assigned priorities.
6. X-Hyp
Figure 24. The Basic X-Hyp Architecture (from “X-hyp Paravirtualized,” n.d.)
7. RT-Xen
Though not designed for space systems, RT-Xen is an interesting technology that
might be considered in conjunction with ARLX. Whereas ARLX is designed for high
assurance, RT-Xen is designed for real-time guarantees, both of which are attributes
required for mission-critical space systems. RT-Xen, however, is only meant to meet soft
real-time requirements and suffers from the same drawbacks as ARLX, namely that it is
based on a large, legacy code base not intended for high assurance applications.
V. REMOTE FINGERPRINTING OF VIRTUALIZED
OPERATING SYSTEMS
In this chapter, we discuss our work in measuring and comparing fingerprints for
virtualized operating systems, employing methods explored previously by Chen et al.
(2008). We use TCP timestamp measurements to derive a timestamp skew, which prior
work shows can be used to characterize some operating systems remotely. Our work
focuses both on (1) validating prior experiments with fingerprinting general-purpose
operating systems under different virtualization scenarios, and (2) extending these results
to real-time systems, using Real-Time Linux (i.e., Linux with the PREEMPT_RT patch
enabled) as a target.
A. MOTIVATION
B. TEST METHODOLOGY
The TCP timestamp option (TSopt field) is an optional field in the TCP packet header, carrying two 32-bit timestamp values, that was first introduced in 1992 in RFC 1323. Its purpose is to improve performance and provide reliable operation over high-speed network paths (Jacobson, 1992).
The timestamp is a number that represents the perception of time for each party in every
packet of a TCP flow. RFC 1323 states that the timestamp measurement should be taken
from a virtual clock that “must be at least approximately proportional to real time”
(Jacobson, 1992, section 3.3). The virtual clock is not required to be synchronized with
the system clock and is often independent of a system’s adjustments if network time
protocol (NTP) is enabled. This virtual clock is usually reset every time a system is
rebooted. The TCP timestamp clock increases monotonically with a predefined frequency
between 1 and 1,000 Hz.
The timestamp option is enabled if the initiator of the TCP flow includes a TSopt
payload with a timestamp value in its original SYN packet and if the reply indicates that
both hosts implement the option. For the fingerprinting methodology we employ, we
require the remote host to support the TCP timestamp option and have open ports that can
be used to initiate a TCP session.
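The option layout can be illustrated with a short Python sketch that scans a TCP options byte string for the timestamp option (kind 8, length 10, then two 32-bit big-endian values, per RFC 1323). This helper is our own illustration and not part of the thesis toolchain.

```python
import struct

def parse_tcp_timestamps(options: bytes):
    """Scan a TCP options byte string for the timestamp option (kind 8).

    Returns (TSval, TSecr) if present, else None.  Layout per RFC 1323:
    kind=8, length=10, followed by two 32-bit big-endian values.
    """
    i = 0
    while i < len(options):
        kind = options[i]
        if kind == 0:            # End-of-option-list
            break
        if kind == 1:            # NOP padding
            i += 1
            continue
        if i + 1 >= len(options):
            break                # truncated option
        length = options[i + 1]
        if length < 2 or i + length > len(options):
            break                # malformed option
        if kind == 8 and length == 10:
            tsval, tsecr = struct.unpack("!II", options[i + 2:i + 10])
            return tsval, tsecr
        i += length
    return None

# Example: two NOPs, then a timestamp option with TSval=1000, TSecr=0.
opts = b"\x01\x01\x08\x0a" + struct.pack("!II", 1000, 0)
print(parse_tcp_timestamps(opts))  # -> (1000, 0)
```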
2. Prior Work
Chen et al. (2008) extend techniques originally introduced by Kohno et al. (2005)
for remote OS fingerprinting. Chen et al. (2008) examine timestamp skew behavior
between (unspecified versions of) Windows and Linux, both running on bare metal and
running as virtualized guest operating systems on either VMWare or Xen. In their
experiment, they send several hundred SYN packets to the target host for an unspecified
amount of time. They calculate the frequency at which the TCP timestamp clock
increases and use this to calculate the skew of the target’s time source. This is achieved
by comparing the actual time the target’s response packet is received and the time
recorded in the response’s TCP options. The perceived skew is measured over time and
used to generate a mean squared error (MSE) or randomness indicator associated with
the target. They compare the MSEs associated with bare metal and virtualized targets,
concluding that virtualized operating systems can be fingerprinted based on MSE
behavior. In particular, Chen et al. (2008) suggest skew can be used to distinguish
virtualized systems from bare metal systems, and to distinguish identical guest OSes
hosted on different hypervisors.
C. TEST PLAN
We conduct all tests in an isolated environment on a small local network. Our test
environment consists of five Optiplex 755 desktop machines with Intel Core 2 Duo CPUs
and 8GB of RAM. One of these machines, called sniffer, serves as the active host
performing remote fingerprinting. The remaining machines (M1, M2, M3, M4) act as
targets in various configurations (see Table 8). Details of the versions of the hypervisors
and operating systems used in the M1–M4 host configurations are summarized in Table
9. The sniffer machine employs the same version of Fedora 19 used in the target host
configurations. All virtualized configurations are run in full virtualization mode, meaning
the guest operating system is unaware that it is being virtualized. Xen supports full
virtualization by using Qemu (see Chapter II).
Table 8. Target Host Configuration Summary

NOTATION  CONFIGURATION                                     TYPE OF VIRTUALIZATION  IP ADDRESS   MACHINE
[F]       Fedora 19 bare metal                              -                       10.10.10.2   M1
[F/F]     Fedora 19 running VMWare with Fedora 19 guest     Full                    10.10.10.21  M1
[W/F]     Fedora 19 running VMWare with Windows 7 guest     Full                    10.10.10.22  M1
[RT/F]    Fedora 19 running VMWare with PREEMPT_RT guest    Full                    10.10.10.23  M1
[X]       Xen bare metal                                    -                       10.10.10.3   M2
[F/X]     Xen running Fedora 19 guest / DomU                Full                    10.10.10.31  M2
[W/X]     Xen running Windows 7 guest / DomU                Full                    10.10.10.32  M2
[RT/X]    Xen running PREEMPT_RT guest / DomU               Full                    10.10.10.33  M2
[RT]      PREEMPT_RT bare metal                             -                       10.10.10.4   M3
[W]       Windows 7 bare metal                              -                       10.10.10.5   M4
[F/W]     Windows 7 running VMWare with Fedora 19 guest     Full                    10.10.10.51  M4
[W/W]     Windows 7 running VMWare with Windows 7 guest     Full                    10.10.10.52  M4
[RT/W]    Windows 7 running VMWare with PREEMPT_RT guest    Full                    10.10.10.53  M4
In the test environment, all machines are connected to a local switch; IP addresses are statically assigned; firewalls and Network Time Protocol services are disabled on all operating systems; and all hypervisors use bridged devices for networking.
The intent of our test environment and target host configurations is to replicate prior work as closely as possible; however, Chen et al. (2008) do not indicate the specific versions of operating systems or hypervisors they employ. Further, Kohno et al.’s (2005) experiments, cited by Chen et al. (2008), employ software that (presumably) was current circa 2005. Beyond VMWare Workstation, Xen and some Linux distribution current as of 2005 or 2008, the prior work gave us no selection criteria to match; thus, software version was not a selection criterion for us.
Hardware decisions were based on the availability of five machines with identical
physical profiles. We chose VMWare Workstation 10 because we were unable to obtain
an older version of VMWare. Our choice of Windows 7 Service Pack 1 was based on its
compatibility with VMWare Workstation 10 and its status as an older but still heavily
used Windows distribution. We chose Xen release 3.0 with Debian running in Dom0
because installation instructions were readily obtainable. We chose Fedora 19 because
one of our planned 10 target configurations used RTEMS, whose build instructions
required Fedora 19. We chose real-time Linux using the PREEMPT_RT patch because it
is open-source and readily available. Our decision to build real-time Linux using Ubuntu
12.04-LTS with the PREEMPT_RT patch was based on forum recommendations (Ask
Ubuntu, n.d.) suggesting this is a stable distribution for which the patch works, and based
on availability of patch instructions.
2. Test Execution
For each test configuration, we capture two separate TCP sessions with sniffer, one 90 minutes long and one 10 minutes long. For each session, we probe each host configuration through banner grabbing with netcat. During each session, we capture all traffic using tcpdump. Table 10 summarizes the ports used for each operating system. Chen et al. (2008) only capture SYN packets, whereas we capture all packets in the session.

10 Later, we abandoned employing RTEMS in our experiments, due to difficulty in configuring the RTOS to run on our physical machine profile.
To obtain TCP timestamp values from a TCP session, we employ a Python script (tcp_skew.py) written by Russell Fink of the University of Maryland, Baltimore County (Fink, n.d.) to parse the packet capture. For each packet, this script extracts the time recorded in the options field of the TCP packet (T) and the timestamp recorded by tcpdump running on sniffer (t). Figure 25 shows sample output from this script.
From these (t, T) pairs, we derive the target’s TCP clock frequency. Chen et al.’s original formula is F = (T1 - T2) / (t1 - t2). We use F = (Tlast - T0) / (tlast - t0), believing this may provide a similarly accurate reading. We validate this assumption in testing (see Observation 2).
We use the derived frequency F for each operating system to generate clock
readings. We translate TCP timestamps into a set of clock readings following Chen et al.
(2008), by calculating (Ti - T0)/F. There are two clocks that can be compared with these
values: the time elapsed locally (xi = ti - t0) and the time elapsed on the target (wi = (Ti -
T0)/F). For each configuration, we generate a scatter plot of the target’s skew, plotting
time elapsed on sniffer (xi) on the x-axis and the skew (yi = wi - xi) on the y-axis.
Appendices A through G include all graphs generated for our experiment.
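The frequency and skew derivation above can be sketched as follows. The function name and input layout are our own, assuming (t, T) pairs like those produced by a script such as tcp_skew.py.

```python
def skew_series(samples):
    """Given [(t_i, T_i)] pairs -- local receipt time in seconds and the
    TSval from the target's TCP options -- estimate the target's TCP
    clock frequency F = (T_last - T_0) / (t_last - t_0), then return F
    and the skew series (x_i, y_i), where x_i = t_i - t_0 is time
    elapsed locally and y_i = w_i - x_i with w_i = (T_i - T_0) / F the
    time elapsed on the target."""
    (t0, T0), (tlast, Tlast) = samples[0], samples[-1]
    F = (Tlast - T0) / (tlast - t0)       # target TCP clock frequency (Hz)
    series = []
    for t, T in samples:
        x = t - t0                        # time elapsed on sniffer
        w = (T - T0) / F                  # time elapsed on the target
        series.append((x, w - x))         # y_i = w_i - x_i is the skew
    return F, series
```

For a target whose TCP clock ticks at exactly 250 Hz with no skew, every y_i is zero; a drifting or perturbed clock shows up as nonzero skew values.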
Given the calculated skew, we use Chen et al.’s (2008) method to calculate the
MSE for each configuration. We use linear least-squares fitting to find a best-fit line, f(x)
for the timeseries data. We calculate the MSE for the best-fit line by adding the squares
of the offsets and dividing by the number of TCP packets in the traffic capture, N (See
Figure 26). Chen et al. (2008) characterize the MSE as a randomness indicator, to be
used as the baseline for comparison between bare-metal and virtualized operating
systems.
MSE = (1/N) Σi (f(xi) − yi)²

Figure 26. MSE Equation
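A minimal implementation of this fit-and-MSE computation, using closed-form linear least squares over the (xi, yi) skew points, can be sketched as follows. This is our own sketch, not the script used in the experiment.

```python
def mse_of_fit(series):
    """Fit a least-squares line f(x) = a*x + b through the (x_i, y_i)
    skew points, then return the mean squared error of the residuals,
    MSE = (1/N) * sum((f(x_i) - y_i)**2), as in Figure 26."""
    n = len(series)
    sx = sum(x for x, _ in series)
    sy = sum(y for _, y in series)
    sxx = sum(x * x for x, _ in series)
    sxy = sum(x * y for x, y in series)
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom       # slope of best-fit line
    b = (sy - a * sx) / n                 # intercept
    return sum((a * x + b - y) ** 2 for x, y in series) / n
```

Perfectly linear skew (a steady clock drift) yields an MSE of zero; the more "perturbed" the skew series, the larger the MSE, which is what makes it usable as a randomness indicator.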
3. Test Notation
D. ANALYSIS
We validate many of Chen et al.’s (2008) original findings; however, we find one
of their conclusions—that virtualized operating systems can be easily fingerprinted
because of their dramatically different TCP time skew variation—is not entirely
convincing in light of our experimentation with some (previously unevaluated)
configurations. We divide the analysis that follows into a series of individual
observations.
Chen et al. (2008) do not specify the amount of time they run each packet capture
but state that experiments conclude within “a few minutes.” We want to determine if the
length of the packet capture has any impact on the MSE calculation. We do this by
comparing two packet captures for each bare metal target ([F], [W], [X] and [RT]). We find that the average MSE difference between 10-minute and 90-minute captures is 0.026 ms, leading us to conclude that the capture length does not have a significant impact
on MSE calculation. Figure 27 shows the time series data for [F] under both time frames
(see Appendix A and B for other configurations). The time values on these two packet
captures are different since packet times vary for each packet capture. This explains the
visually incongruous lines in Figure 27. The skew behavior however is comparable. We
conclude that Chen et al.’s (2008) “a few minutes” timeframe provides a relatively stable
MSE calculation, as longer time frames do not significantly impact these calculations.
Based on this observation, we conduct all subsequent tests using 10-minute packet
captures.
Figure 27. Configuration [F], Skew vs. Time, 1.5 hour Capture (Blue) and 10-
Minute Capture (Red)
Chen et al. (2008) present a method for measuring an operating system’s TCP clock frequency remotely. As explained in Section 3, we modify their equation by looking at the first and last timestamps: F = (Tlast - T0) / (tlast - t0). We verify that our modified equation has no impact on this calculation after rounding the result to the nearest real frequency interval, as Chen et al. (2008) suggest. To confirm that the choice of packets used to calculate frequency is arbitrary, we calculated (Tj - Ti) / (tj - ti) for every j > i > 0. These calculations likewise have no impact on the frequency result. For our Linux configurations, we compare our result to the actual operating system’s clock frequency by inspecting the kernel configuration file. We do not do this for our Windows configurations because, to our knowledge, this information is not accessible within Windows. Table 12 summarizes our frequency results.
Table 12. Frequency Results

Operating  Calculated Frequency (Hz)    Calculated Frequency (Hz)  Reported Host TCP Clock
System     (Tlast - T0) / (tlast - t0)  (Tj - Ti) / (tj - ti)      Frequency (Hz)
[F]        1000                         1000                       1000
[W]        100                          100                        N/A
[X]        250                          250                        250
[RT]       250                          250                        250
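The rounding step described in Observation 2 can be sketched as below. The candidate set is an assumption based on the rates in Table 12, plus the 10 Hz value reported in prior work and 1 Hz as the lower bound of the standard range.

```python
def nominal_frequency(f_measured, candidates=(1, 10, 100, 250, 1000)):
    """Round a measured TCP clock frequency to the nearest nominal rate.
    The candidate set here is an assumption drawn from the frequencies
    observed in our configurations (Table 12) and in prior work."""
    return min(candidates, key=lambda c: abs(c - f_measured))
```

A raw estimate of, say, 249.3 Hz thus snaps to the 250 Hz nominal rate, which is why small differences in which packets are used to estimate F have no effect on the final frequency.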
Chen et al. (2008) observe different MSE behaviors for Windows running on bare metal and for Linux running on bare metal. They did not record a bare metal MSE value for Xen’s Dom0. Chen et al. (2008) found the MSE value for bare metal Windows to be very high, attributing this to that configuration’s clock frequency (10 Hz), the lowest measured among all target configurations. Our results match Chen et al.’s, as illustrated in Table 13. Excluding [RT] configurations, all our bare metal configurations exhibit dissimilar MSE behavior. In agreement with Chen et al.’s observations, our [W] configuration has the highest MSE value, possibly due to its low clock frequency compared to that of other configurations (see Table 13).
Chen et al. (2008) conclude that virtualized hosts have “more perturbed clock skew behavior” than bare metal hosts, which they claim is observable through MSE. Our results also reflect a difference between bare metal and virtualized MSE, but one less pronounced than in prior work.
a. Observation 4a: MSE[F/A] ≈ MSE[F]
Chen et al. (2008) conclude that virtualized instances of Linux exhibit orders of magnitude larger MSE than Linux running on bare metal. In particular, their results show almost 300,000% change between bare metal Linux and Linux on VMWare, and a 173% change11 between bare metal Linux and Linux running on Xen (Chen et al., 2008). We find our virtualized Fedora configurations demonstrate at least one order of magnitude change compared to the bare metal configuration, but the changes are smaller than Chen et al.’s observations suggest (see Table 14).
b. Observation 4b: MSE[W/A] ≈ MSE[W]
Chen et al. (2008) find noticeable differences in MSE behavior among virtualized and bare metal Windows configurations. In particular, they observe a 22% change between bare metal Windows and Windows running on VMWare and an 8% change between bare metal Windows and Windows running on Xen. Chen et al. (2008) claim these changes are statistically meaningful under Z-test analysis, making “the randomness introduced by VMM very obvious.” We find, however, that there is not a substantial difference between [W] and its virtualized counterparts ([W/W], [W/F], [W/X]) in terms of MSE. In fact, comparing [W] with [W/W], the change in MSE behavior appears fairly negligible (see Table 15).
11 See 0.083 ms2 MSE for baseline Linux and 245.8 ms2 MSE for Linux on VMWare; we calculate the percent change as ((245.8 - 0.083) / 0.083) * 100.
Table 15. Windows Configuration MSEs

CONFIGURATION  MSE (ms)  DIFFERENCE (ms) from MSE[W]  % CHANGE from MSE[W]
[W]            8.156     -                            -
[W/W]          8.066     0.09                         1.1%
[W/F]          6.873     1.283                        15.7%
[W/X]          8.658     -0.502                       -6%
Chen et al. (2008) do not clarify what configuration of VMWare they use in their
experiment and do not comment on any difference in behavior of VMWare on Windows
vs. VMWare on Linux. We find that configurations [A/W] and [A/F] appear different in
terms of MSE, suggesting that the host OS for VMWare Workstation impacts
fingerprinting substantially (see Table 16).
Chen et al. (2008) observe that Windows on Xen and Linux on Xen exhibit
smaller MSE values than Linux on VMWare and Windows on VMWare. They suggest
“Xen introduces much less randomness than VMWare does, probably because they have
different algorithms for firing software interrupts.” In contrast, we observe [F/X]
demonstrates higher MSE than [F], [F/W] or [F/F] (see Table 14); also, [W/X]
demonstrates higher MSE than [W], [W/W] or [W/F] (see Table 15). This contradicts
Chen et al.’s observations that Xen introduces less randomness than VMWare. It is,
however, in-line with their larger observation that one can observe MSE differences
among hypervisors, albeit somewhat more limited.
7. Observation 7: MSE[RT] ≠ MSE[A] for all A ≠ RT
12 Default priority is 0 since there is no prioritization associated with SCHED_NORMAL, which is the
default universal time-sharing scheduler policy in our configuration.
Table 18. PREEMPT_RT Configuration MSEs

CONFIGURATION  MSE (ms)  DIFFERENCE (ms) from MSE[RT]  % CHANGE from MSE[RT]
[RT]           1.337     -                             -
[RT/F]         1.395     -0.058                        -4.3%
[RT/W]         1.788     -0.451                        -34%
[RT/X]         1.297     0.04                          2.99%
For configuration [RT-1FF], we make the following two changes: we adjust the priority of sshd using the chrt command to real-time priority 1, so that it takes precedence over all normal (SCHED_NORMAL) processes, and we change its scheduling class to FIFO. For configuration [RT-1RR], we make the same changes but use the Round Robin scheduling class instead of FIFO. We find these [RT-S] configurations have similar MSE behavior relative to our [RT] configuration. Table 19 summarizes our findings for the FIFO configurations and Table 20 summarizes our findings for the Round Robin configurations.
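On Linux, the same change can be made programmatically via Python's os.sched_setscheduler, roughly equivalent to chrt -f -p 1 (or -r for Round Robin) on a given pid. The sketch below is illustrative; the pid is hypothetical and the calls require root privileges (CAP_SYS_NICE) and a Linux kernel.

```python
import os

def make_fifo_realtime(pid: int, priority: int = 1) -> None:
    """Place an existing process (e.g., sshd) in the SCHED_FIFO class at
    the given real-time priority, similar to `chrt -f -p 1 <pid>`.
    Linux only; requires root."""
    os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(priority))

def make_rr_realtime(pid: int, priority: int = 1) -> None:
    """Round Robin variant, similar to `chrt -r -p 1 <pid>`."""
    os.sched_setscheduler(pid, os.SCHED_RR, os.sched_param(priority))
```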
10. Observation 10: MSE[RT-S/W] > {MSE[RT], MSE[RT-T],
MSE[RT/A]}
We observe our [RT-S/W] configurations result in a much higher MSE than all
other [RT] configurations, indicating that Windows 7 has an impact on our [RT]
configuration when scheduling class and process priority are altered. Table 21 lists MSEs
for other configurations not listed in Tables 19 and 20 as points of comparison.
We observe that, aside from our [RT-S/W] configurations, MSE for [RT-S/A] are
similar to both [RT] and [RT-S] configurations. This observation agrees with
Observations 4, 8 and 9. Continuing the trend in Observations 4 and 8, we see no obvious
difference in MSE between [RT-S/A] and [RT-S]. Combined with Observation 9 on the
similarity between [RT-S] and [RT-T], this implies the similarity in MSE for all
configurations [RT-S/A] compared to [RT] (see Tables 19, 20 and 21).
We observe that, aside from our [RT-S/W] configurations, the MSE behavior for
all virtualized [RT-T] configurations is similar. In fact, our results show identical MSE
for [RT-1FF/X] and [RT-1FF/F] (see Tables 19, 20 and 21).
13. Observation 13: [A/B] is more like [A] than [B] for A ≠ B and A ≠ F
Chen et al. (2008) do not report MSE comparing the virtualized guest and its bare metal host. We extend this work by investigating which MSE virtualized guests most closely resemble. We find that (with the exception of our [F/B] configurations) all [A/B] configurations more closely resemble the MSE of [A] instead of [B]. Table 22 summarizes our findings.
Table 22. MSE Comparisons (Blue Indicates Most Similar MSE Based on %)

(Columns: CONFIG; MSE; difference (ms) and change (%) from MSE[F], from MSE[W], from MSE[X] and from MSE[RT].)
E. DISCUSSION
Our work also reveals some interesting behavior of virtualized operating systems, particularly in the [RT-1FF/W] and [RT-1RR/W] configurations. The MSE behavior for these configurations is dramatically different from the [RT], [RT-S] and [RT-T/A] configurations. Of note is the observation that only the [F/F] and [F/W] configurations have MSE behavior that more closely resembles the host OS instead of the guest. We leave investigating the reason for this behavior as future work.
There are several limitations to our experiment that may have impacted the
generality of our results. Our setup lacked extraneous network and CPU load, as host and
guest had limited background processes running and had exclusive use of a local
network. As future work, these experiments may be re-run on a typical network for an
enterprise or in a setting with multiple processes competing for CPU time to see if the
results change. We also do not run our experiments on multiple physical machine
profiles. To confirm the generality of our observed behaviors one would re-run these
experiments on different physical machine profiles, i.e., to investigate how much TCP
timestamp skew variation can be attributed to the operating system and how much can be
attributed to the hardware. Also, all the tested virtualized configurations are based on full
virtualization. We suggest re-running our tests with different virtualization settings, such
as paravirtualization and hardware-assisted virtualization to see how MSE behavior
compares.
A possible limitation of our work is the use of tcpdump to label the time of receipt for each TCP packet at the sniffer machine. We suggest re-running these experiments using a system clock timestamp, rather than relying on a user-land application’s perception of time. Our experiment could also benefit from more consistent operating system choices: we chose a different version of Linux, with a different clock frequency, to run our Xen Dom0 ([X] configuration) compared to our other Linux configurations. We suggest standardizing these software choices for consistency and comparison. We further suggest experimenting with different Linux distributions and different kernel versions. It would be interesting to see how our results compare to newer operating systems. Finally,
additional research should consider statistical metrics for comparison to see if they offer
more insight into the behavior of different hypervisors and virtualized operating systems
in the context of fingerprinting.
Our work is an attempt to capture the TCP timestamp skew behavior of a set of
general-purpose and real-time operating systems in an isolated, controlled environment.
Our results differ from Chen et al. (2008) and suggest that hypervisor and operating
system fingerprinting is not clearly predictable from MSE. We propose some future work
to carry this research forward.
VI. CONCLUSION AND FUTURE WORK
Virtualization is a promising field of research for the space community, and its use in space research projects indicates that the community appears committed to utilizing it. In this thesis, we have sought to highlight some key security-relevant properties of real-time operating systems and virtualization architectures for space systems. Our work has revealed the diversity of architectures supporting virtualization for the space domain, and the ways in which these virtualization architectures handle the real-time requirements of their guests. Our work highlights some tradeoffs
associated with security, flexibility, popularity and compatibility with other systems and
hardware. The purpose of our survey was to explain, at a high level, the fundamental
differences and similarities between real-time operating systems and virtualization
solutions for space. A limitation of this survey was that we did not analyze the
implementation of consequential security features in the surveyed systems. We leave as
future work the analysis of enforcement mechanisms for key security functionality, such
as memory management or spatial isolation. For unevaluated systems, penetration testing
may be warranted to investigate these security properties.
APPENDIX A. BARE METAL, 1.5-HOUR RUN
Figure 28. Configuration [F], Skew vs. Time, 1.5 Hour Packet Capture
Figure 29. Configuration [X], Skew vs. Time, 1.5 Hour Packet Capture
Figure 30. Configuration [W], Skew vs. Time, 1.5 hour Packet Capture
Figure 31. Configuration [RT], Skew vs. Time, 1.5 Hour Packet Capture
APPENDIX B. BARE METAL, 10-MINUTE RUN
Figure 32. Configuration [F], Skew vs. Time, 10-Minute Packet Capture
Figure 33. Configuration [X], Skew vs. Time, 10-Minute Packet Capture
Figure 34. Configuration [W], Skew vs. Time, 10-Minute Packet Capture
Figure 35. Configuration [RT], Skew vs. Time, 10-Minute Packet Capture
APPENDIX C. VIRTUALIZED LINUX
Figure 38. Configuration [F/X], Skew vs. Time
APPENDIX D. VIRTUALIZED WINDOWS
Figure 41. Configuration [W/X], Skew vs. Time
APPENDIX E. VIRTUALIZED PREEMPT_RT
Figure 44. Configuration [RT/X], Skew vs. Time
APPENDIX F. PREEMPT_RT, FIFO SCHEDULING
Figure 47. Configuration [RT-1FF/W], Skew vs. Time
APPENDIX G. PREEMPT_RT, ROUND ROBIN SCHEDULING
Figure 51. Configuration [RT-1RR/W], Skew vs. Time
SUPPLEMENTAL
Code to run the experiment and generated data from Chapter V is available in the
CISR Archive, which may be accessed at the Computer Science Department of the Naval
Postgraduate School.
LIST OF REFERENCES
A time & space partitioned DO-178 level A certifiable RTOS. (n.d.). Retrieved January
15, 2015, from http://www.ddci.com/products_deos.php
Air Force Space Command. (2013). Resiliency and disaggregated space architectures.
Retrieved from http://www.afspc.af.mil/shared/media/document/AFD-130821-
034.pdf
Andrews, D., Bate, I., Nolte, T., Otero-Perez, C., & Petters, S. M. (2005, July). Impact of
embedded systems evolution on RTOS use and design. 1st International
Workshop Operating System Platforms for Embedded Real-Time Applications
(OSPERT’05).
Apecechea, G., Inci, M. S., Eisenbarth, T., & Sunar, B. (2014). Fine grain cross-VM
attacks on Xen and VMware are possible. Retrieved from https://eprint.iacr.org/
2014/248.pdf
Architecture of VMware ESXi, The. (n.d.). Retrieved March 12, 2014, from http://www.
vmware.com/files/pdf/ESXi_architecture.pdf
ARINC Standards Store. (n.d.). ARINC standards 600 series. Retrieved from http://store.
aviation-ia.com/cf/store/catalog.cfm?prod_group_id=1&category_group_id=3
Ask Ubuntu. (n.d.). How can I install a real-time kernel? Retrieved March 18, 2014 from
http://askubuntu.com/questions/72964/how-can-i-install-a-realtime-kernel
Balasubramaniam, M. (n.d.). Introduction to real-time operating systems. Retrieved
December 3, 2014, from http://www.cis.upenn.edu/~lee/06cse480/lec-
RTOS_RTlinux.pdf
Baldin, D., & Kerstan, T. (2009). Proteus, a hybrid virtualization platform for embedded
systems. In Analysis, Architectures and Modelling of Embedded Systems (pp.
185–194).
Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., & Warfield, A. (2003).
Xen and the art of virtualization. ACM SIGOPS Operating Systems Review, 37(5),
164–177.
Baumann, C., Bormer, T., Blasum, H., & Tverdyshev, S. (2011, March). Proving
memory separation in a microkernel by code level verification. In Object/
Component/Service-Oriented Real-Time Distributed Computing Workshops
(ISORCW), 2011 14th IEEE International Symposium (pp. 25–32).
Bellard, F. (2005, April). QEMU, a fast and portable dynamic translator. In USENIX
Annual Technical Conference, FREENIX Track (pp. 41–46).
Beus-Dukic, L. (2001). COTS real-time operating systems in space. Safety Systems: The
Safety-Critical Systems Club Newsletter, 10(3), 11–14.
Bloom, G., & Sherrill, J. (2014). Scheduling and thread management with RTEMS. ACM
SIGBED Review, 11(1), 20–25.
Board support packages. (n.d.). Retrieved January 17, 2015, from https://bsp.windriver.
com/index.php?bsp&on=list&type=platform&value=VxWorks:%206.8%20-
%20Wind%20River%20Workbench%203.2
Bos, V., Mendham, P., Kauppinen, P. K., Holst, N., Crespo Lorente, A., Masmano, M., ...
& Zamorano Flores, J. R. (2013). Time and space partitioning the EagleEye
reference mission. Data Systems in Aerospace (DASIA 2013), May 14, 2013–
May 16, 2013, Porto, Portugal.
Carlgren, H., & Ferej, R. (n.d.). Comparison of CPU scheduling in VxWorks and
LynxOS. Retrieved January 2, 2015, from http://class.ece.iastate.edu/cpre584/ref/embedded_OS/vxworks_vs_lynxOS.pdf
Carrascosa, E., Coronel, J., Masmano, M., Balbastre, P., & Crespo, A. (2014). XtratuM
hypervisor redesign for LEON4 multicore processor. ACM SIGBED Review,
11(2), 27–31.
Chen, X., Andersen, J., Mao, Z. M., Bailey, M., & Nazario, J. (2008, June). Towards an
understanding of anti-virtualization and anti-debugging behavior in modern
malware. In Dependable Systems and Networks with FTCS and DCC, 2008. DSN
2008. IEEE International Conference (pp. 177–186).
Clark, L. (2013, March 21). Intro to real-time Linux for embedded developers.
Retrieved from https://www.linux.com/news/featured-blogs/200-libby-
clark/710319-intro-to-real-time-linux-for-embedded-developers
Contributing editor. (2001, July 23). Create hard real-time tasks with precision under
Linux. Retrieved from http://electronicdesign.com/embedded/create-hard-real-
time-tasks-precision-under-linux
Crespo, A., Masmano, M., Coronel, J., Peiró, S., Balbastre, P., & Simó, J. (2014).
Multicore partitioned systems based on hypervisor. Preprints of the 19th World
Congress. The International Federation of Automatic Control, Cape Town, South
Africa, August 24–29, 2014.
Cudmore, A. (2007, November). Flight software workshop 2007 (FSW-07). Retrieved
from http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20080040872.pdf
Daugherty, J. (2014, August 19). Porting FreeRTOS to Xen on ARM. Retrieved from
http://www.slideshare.net/xen_com_mgr/free-rtos-xensummit
Definition of technology readiness levels. (n.d.). Retrieved October 21, 2014, from
http://esto.nasa.gov/files/trl_definitions.pdf
Department of Defense. (2010). Information assurance (IA) policy for space systems used
by the Department of Defense. Retrieved from http://www.dtic.mil/whs/
directives/corres/pdf/858101p.pdf
Diniz, N., & Rufino, J. (2005). ARINC 653 in space. In Dasia 2005, EUROSPACE,
Edinburgh, Scotland.
DornerWorks. (2014, October 28). DornerWorks wins SBIR phase 2 award from
DARPA. Retrieved from http://dornerworks.com/about/news
Edge, J. (2013, March 6). ELC: SpaceX lessons learned. Retrieved from http://lwn.net/
Articles/540368/
Embedded hardware. (n.d.). Retrieved December 17, 2014, from https://xenomai.org/
embedded-hardware/
Evans, P. (2007, February 27). How big is RTEMS? Retrieved from http://lists.rtems.org/
pipermail/users/2007-February/015838.html
Fayyad-Kazan, H., Perneel, L., & Timmerman, M. (2014). Linux PREEMPT-RT v2.6.33
versus v3.6.6: Better or worse for real-time applications? ACM SIGBED Review,
11(1), 26–31.
Feldt, R., Torkar, R., Ahmad, E., & Raza, B. (2010). Challenges with software
verification and validation activities in the space industry. Retrieved from
http://www.cse.chalmers.se/~feldt/publications/feldt_2010_icst_space_vav_challe
nges.pdf
Five ways NASA is using Linux OS to run. (n.d.). Retrieved July 20, 2014, from http://
www.100tb.com/blog/?p=485
Galileo pathfinder achieves five years in orbit. (2010, December 28). Retrieved from
http://www.esa.int/Our_Activities/Navigation/Galileo_pathfinder_GIOVE-
A_achieves_five_years_in_orbit
General Dynamics. (n.d.). OKL4 microvisor. Retrieved from http://www.ok-labs.com/
products/okl4-microvisor
General Dynamics. (2008, April). Microkernels vs. hypervisors. Retrieved from http://
www.ok-labs.com/blog/entry/microkernels-vs-hypervisors/
Gilles, K., Groesbrink, S., Baldin, D., & Kerstan, T. (2013). Proteus hypervisor: Full
virtualization and paravirtualization for multi-core embedded systems. In
Embedded Systems: Design, Analysis and Verification (pp. 293–305).
Gomes, A. O. (2012, March). Formal specification of the ARINC 653 architecture using
Circus (Master’s thesis). Retrieved from http://etheses.whiterose.ac.uk/2683/
Green Hills software to power spaceflight crew escape system demonstrator. (2003,
December 22). Retrieved from http://www.spaceref.com/news/viewpr.html?
pid=13275
Green Hills software INTEGRITY-178B separation kernel security target. (2008, May
30). Retrieved from http://www.niap-ccevs.org/st/st_vid10119-st.pdf
Greve, D., & VanderLeest, S. H. (2013). Data flow analysis of a Xen-based separation
kernel. In 7th Layered Assurance Workshop (pp. 1–34).
Haas, J. (n.d.). RTLinux HOWTO, 4.2 creating RTLinux threads. Retrieved November
10, 2014 from http://linux.about.com/od/howtos/a/rtlinuxhowto4b.htm
Han, S., & Jin, H. (2011, October). Full virtualization based ARINC 653 partitioning. In
Digital Avionics Systems Conference (DASC), 2011 IEEE/AIAA 30th (pp. 7E1-1).
Heiser, G., & Leslie, B. (2010, August). The OKL4 microvisor: Convergence point of
microkernels and hypervisors. Proceedings of the First ACM Asia-Pacific
Workshop on Workshop on Systems. pp. 19–24.
Home page. (n.d.). Retrieved January 1, 2015, from http://ecos.sourceware.org/
Howard, C. (2007, April 4). LynuxWorks provides safety-critical RTOS for European
Space Agency’s Galileo satellite navigation system. Military and Aerospace
Electronics. Retrieved from http://www.militaryaerospace.com/articles/2007/04/
lynuxworks-provides-safety-critical-rtos-for-european-space-agencys-galileo-
satellite-navigation-system.html
Howard, C. (2011, March 1). RTOS for a software driven world. Military and Aerospace
Electronics. Retrieved from http://www.militaryaerospace.com/articles/print/
volume-22/issue-30/technology-focus/rtos-for-a-software-driven-world.html
Huffine, C. (2005, March 1). Linux on a small satellite. Retrieved from
http://www.linuxjournal.com/article/7767
Hussein, S. (2009, May). Containing Linux instances with OpenVZ. Retrieved from
http://www.opensourceforu.com/2009/05/containing-linux-instances-with-
openvz/
Introduction to Linux for real-time control: Introductory guidelines and references for
control engineers and managers. (2002). Retrieved from
http://www.aeolean.com/html/RealTimeLinux/RealTimeLinuxReport-2.0.0.pdf
Iqbal, A., Sadeque, N., & Mutia, R. I. (2009). An overview of microkernel, hypervisor
and microvisor virtualization approaches for embedded systems. Report,
Department of Electrical and Information Technology, Lund University, Sweden,
2110.
Jacobson, V., Braden, R., & Borman, D. (1992). TCP extensions for high performance
(RFC 1323). Retrieved from https://www.ietf.org/rfc/rfc1323.txt
Jaekel, S., Stelzer, M., & Herpel, H. J. (2014, March). Robust and modular on-board
architecture for future robotic spacecraft. In Aerospace Conference, 2014 IEEE
(pp. 1–11).
Jeong, S. (2013). In-depth overview of x86 server virtualization technology. Retrieved
from http://www.cubrid.org/blog/dev-platform/x86-server-virtualization-technology/
Joe, H., Jeong, H., Yoon, Y., Kim, H., Han, S., & Jin, H. W. (2012, October). Full
virtualizing micro hypervisor for spacecraft flight computer. In Digital Avionics
Systems Conference (DASC), 2012 IEEE/AIAA 31st (pp. 6C5-1–6C5-9).
Jones, K. H., & Gross, J. (2014). Reducing size, weight, and power (SWaP) of perception
systems in small autonomous aerial systems. Retrieved from http://arc.aiaa.org/
doi/abs/10.2514/6.2014-2705
Jones, M. T. (2011, January 25). Platform emulation with bochs. Retrieved from http://
www.ibm.com/developerworks/library/l-bochs/
Jones, T. (2008, April 15). Anatomy of real-time Linux architectures. Retrieved from
http://www.ibm.com/developerworks/library/l-real-time-linux/
Kang, S., & Kim, H. (2014, March). The study of the virtual machine for space real-time
embedded systems. In Aerospace Conference, 2014 IEEE (pp. 1–7).
Katz, D. S., & Some, R. R. (2003). NASA advances robotic space exploration. Retrieved from
http://web.ci.uchicago.edu/~dsk/papers/computer2003.pdf
Kenyon, S., Bridges, C. P., Liddle, D., Dyer, R., Parsons, J., Feltham, D., Taylor, R.,
Mellor, D., Schofield, A., & Linehan, R. (2011, October). STRaND-1: Use of a
$500 smartphone as the central avionics of a nanosatellite. In Proceedings of
the 62nd International Astronautical Congress (IAC’11) (pp. 1–19).
Kirch, J. (2007, September). Virtual machine security guidelines, version 1.0. Retrieved
from http://benchmarks.cisecurity.org/tools2/vm/CIS_VM_Benchmark_v1.0.pdf
Kohno, T., Broido, A., & Claffy, K. C. (2005). Remote physical device fingerprinting.
Dependable and Secure Computing, IEEE Transactions, 2(2), 93–108.
Komolafe, O., & Sventek, J. (2006/07). Information for practical sessions. Retrieved
from http://www.dcs.gla.ac.uk/~joe/Teaching/ESW1/Session4/esw1-
practicalsinfo.pdf
Kovacs, E. (2014, October 2). Xen Hypervisor vulnerability exposed virtualized servers.
Retrieved from http://www.securityweek.com/xen-hypervisor-vulnerability-
exposed-virtualized-servers
Krüger, T., Schiele, A., & Hambuchen, K. (2013, May). Exoskeleton control of the
Robonaut through RAPID and ROS. In Proceedings of the 12th Symposium on
Advanced Space Technologies in Robotics and Automation, Noordwijik,
Netherlands.
Landley, R. (2009, September). Developing for non-x86 targets using QEMU. Retrieved
from http://landley.net/aboriginal/presentation.html
Lee, M. (2012, January 24). Space qualified RTEMS. Retrieved from http://comments.
gmane.org/gmane.os.rtems.user/18900
Lehrbaum, R. (2013, April 11). Android app taps secure resources via ARM TrustZone.
Retrieved from http://linuxgizmos.com/android-app-taps-secure-resources-via-
arm-trustzone/
Leiner, B., Schlager, M., Obermaisser, R., & Huber, B. (2007). A comparison of
partitioning operating systems for integrated systems. In Computer Safety,
Reliability, and Security (pp. 342–355). Springer, Berlin Heidelberg.
Leroux, P. (2005). RTOS vs. GPOS: What is best for embedded development. Embedded
Computing Design.
LeVasseur, J., Uhlig, V., Chapman, M., Chubb, P., Leslie, B., & Heiser, G. (2005). Pre-
virtualization: Slashing the cost of virtualization. Karlsruhe, Germany:
Universität Karlsruhe, Fakultät für Informatik
Mars Reconnaissance Orbiter. (n.d.). Retrieved October 11, 2014, from http://mars.jpl.
nasa.gov/mro/
Masmano, M., Ripoll, I., Peiró, S., & Crespo, A. (2010, May). XtratuM for LEON3: An
open source hypervisor for high integrity systems. In European Conference on
Embedded Real Time Software and Systems. ERTS2 (Vol. 2010).
McKenney, P. (2005, August 10). A realtime preemption overview. Retrieved from
http://lwn.net/Articles/146861/
Moore, J. W. (1998, October). IEEE/EIA 12207 as the foundation for enterprise software
processes. Sixteenth Annual Pacific Northwest Software Quality Conference.
Moore, R. (2005). Mutex tech note: Mutexes provide a level of safety for mutual
exclusion not possible with counting or binary semaphores. Retrieved from
http://www.smxrtos.com/articles/techppr/mutex.htm
Müller, K., Paulitsch, M., Tverdyshev, S., & Blasum, H. (2012). MILS-related
information flow control in the avionic domain: A view on security-enhancing
software architectures. In DSN Workshops (pp. 1–6).
Munro, J. (2001). Virtual machines and VMware, part I. Retrieved from http://www.
extremetech.com/computing/72186-virtual-machines-vmware-part-i/6
Murray, D., Milos, G., & Hand, S. (2008). Improving Xen security through
disaggregation. Retrieved from https://www.cl.cam.ac.uk/research/srg/netos/
papers/2008-murray2008improving.pdf
NASA. (2004b). Software safety standard NASA technical standard. Retrieved from
http://www.system-safety.org/Documents/NASA-STD-8719.13B.pdf
NASA’s Mars rover Curiosity powered by Wind River. (n.d.). Retrieved February 15,
2015, from http://www.windriver.com/announces/curiosity/Wind-River_NASA_
0812.pdf
NASA’s Orion crew exploration vehicle built with INTEGRITY-178B. New generation
of space exploration utilizes Green Hills Software. (2008, September 8). Retrieved
from http://www.ghs.com/news/20080908_integrity178b_nasa.html
New-generation aircraft offer key to slimmer, smarter satellites. (n.d.). Retrieved October
15, 2014, from http://www.esa.int/Our_Activities/Technology/New-generation_
aircraft_offer_key_to_slimmer_smarter_satellites
One-stop-shop for all your CubeSat and nanosat systems, The. (n.d.). Retrieved
December 16, 2014, from http://www.cubesatshop.com/
Pad abort demonstrator to test crew escape technologies. (2003, September). Retrieved
from http://www.nasa.gov/centers/marshall/pdf/104862main_padabort.pdf
Paravirtualized guests for Xhyp. (n.d.). Retrieved January 3, 2015, from http://x-hyp.org/
products/guests/
Parkinson, P. (2011). Safety, security and multicore. In C. Dale & T. Anderson (Eds.),
Advances in systems safety (pp. 215–232). London: Springer.
Parkinson, P., & Kinnan, L. (n.d.). Safety-critical software development for integrated
modular avionics [white paper]. Retrieved May 30, 2014, from http://www.
element14.com/community/servlet/JiveServlet/previewBody/19565-102-1-
59593/Safety-Critical%20Software%20Development%20for.pdf
Prieto, S. S., Tejedor, I. G., Meziat, D., & Sánchez, A. V. (2004). Is Linux ready for
space applications? Madrid, Spain: Computer Engineering Department
(University of Alcala).
Prisaznuk, P. J. (2008, October). ARINC 653 role in integrated modular avionics (IMA).
Digital Avionics Systems Conference, 2008. DASC 2008. IEEE/AIAA 27th (pp. 1–
E).
Products PikeOS hypervisor. (n.d.). Retrieved January 30, 2015, from http://www.sysgo.
com/products/pikeos-rtos-and-virtualization-concept/
Ramsey, J. (2007, February). Integrated modular avionics: Less is more. Retrieved from
http://www.aviationtoday.com/av/commercial/Integrated-Modular-Avionics-Less-
is-More_8420.html#.VOKaFvnF-Sp
Real Time Engineers, Ltd. (2014). The FreeRTOS reference manual for FreeRTOS
version 8.2.0. Bristol, United Kingdom: Texas Instruments.
Real-Time Linux wiki. (n.d.). Retrieved March 21, 2015, from https://rt.wiki.kernel.
org/index.php/Main_Page
Report of the National Commission for the Review of the National Reconnaissance
Office. (2000). [Executive Summary]. Retrieved from http://fas.org/irp/nro/
commission/nro.pdf
Ricklefs, R. (n.d.). Real-time Linux at MLRS. Retrieved November 18, 2014, from
http://cddis.gsfc.nasa.gov/lw18/docs/posters/13-Po25-Ricklefs.pdf
Rosenblum, M., & Garfinkel, T. (2005). Virtual machine monitors: Current technology
and future trends. Computer, 38(5), 39–47.
Rostedt, S., & Hart, D. V. (2007, June). Internals of the RT patch. In Proceedings of the
Linux Symposium.
RT-Xen project. (2013, July 16). RT-Xen project receives grant from Office of Naval
Research. Retrieved from http://cse.wustl.edu/Research/Pages/news-story.aspx?
news=476
RTEMS. (n.d.). Retrieved October 11, 2014, from http://www.fentiss.com/en/
products/rtems.html
Rufino, J., & Craveiro, J. (2008, July). Robust partitioning and composability in ARINC
653 conformant real-time operating systems. In 1st INTERAC Research Network
Plenary Workshop, Braga, Portugal.
Rufino, J., Craveiro, J., Schoofs, T., Tatibana, C., & Windsor, J. (2009, May). AIR
Technology: A step towards ARINC 653 in space. In Proceedings DASIA.
Rufino J., & Filipe, S. (2007, December). AIR project final report. Technical report TR
07–35. Retrieved from http://air.di.fc.ul.pt/air/downloads/07-35.pdf
Rushby, J. (2011). New challenges in certification for aircraft software. Retrieved from
http://www.csl.sri.com/users/rushby/papers/emsoft11.pdf
Rutkowska, J., & Tereshkin, A. (2008). Bluepilling the Xen hypervisor. In Black Hat
USA, 2008.
Safety critical products: INTEGRITY®-178B RTOS. (n.d.). Retrieved January 11, 2015, from
http://www.ghs.com/products/safety_critical/integrity-do-178b.html
Sahoo, J., Mohapatra, S., & Lath, R. (2010, April). Virtualization: A survey on concepts,
taxonomy and associated security issues. In Computer and Network Technology
(ICCNT), 2010 Second International Conference (pp. 222–226).
Santangelo, A. D. (2013). An open source space hypervisor for small satellites. In AIAA
SPACE 2013 Conference and Exposition (pp. 1–10).
Scharpf, K. (2013, December 11). The last cathedral: Democratizing flight software.
Retrieved from http://flightsoftware.jhuapl.edu/files/2013/talks/FSW-13-
TALKS/KS_FSW2013.pdf
Schoofs, T., Santos, S., Tatibana, C., & Anjos, J. (2009, October). An integrated modular
avionics development environment. Digital Avionics Systems Conference, 2009.
DASC’09. IEEE/AIAA 28th (pp. 1–A).
Secure separation architecture white paper PDF download form. (n.d.). Retrieved June
10, 2014, from http://www.ghs.com/articles/index.php?wp=secure_separation
Silva, H., Sousa, J., Freitas, D., Faustino, S., Constantino, A., & Coutinho, M. (2009).
RTEMS improvement-space qualification of RTEMS executive. In 1st Simpósio
de Informática-INFORUM, University of Lisbon.
Smith, J. E., & Nair, R. (2005). The architecture of virtual machines. Computer, 38(5),
32–38.
Studer, N. (2014). Xen and the art of certification. Xen developer summit 2014.
Retrieved from http://www.xenproject.org/presentations-and-videos/video/
xpds14v-certification.html
SYSGO’s safe and secure virtualization PikeOS now available for LEON and RTEMS.
(2010, August 26). Retrieved from http://www.sysgo.com/news-events/press/
press/details/article/sysgos-safe-and-secure-virtualization-pikeos-now-available-
for-leon-and-rtems/
Tavares, A., Carvalho, A., Rodrigues, P., Garcia, P., Gomes, T., Cabral, J., &
Ekpanyapong, M. (2012, March). A customizable and ARINC 653 quasi-
compliant hypervisor. Industrial Technology (ICIT), 2012 IEEE International
Conference (pp. 140–147).
Terrasa, A., Garcia-Fornes, A., & Espinosa, A. (2002, September 10). RTL POSIX trace
1.0 (a POSIX trace system in RT-Linux). Retrieved from http://www.gti-
ia.upv.es/sma/tools/rtl-ptm/archivos/documentation/rtl-posixtrace.pdf
Teston, F., Vuilleumier, P., Hardy, D., & Bernaerts, D. (2004, October). The PROBA-1
microsatellite. In Proc. of SPIE (Vol. 5546, pp. 132–140).
Tverdyshev, S. (2011). Extending the GWV security policy and its modular application
to a separation kernel. In NASA Formal Methods (pp. 391–405). Springer Verlag
Berlin, Heidelberg.
USENIX. (2001). Proceedings of the 2001 USENIX Annual Technical Conference.
Retrieved from http://www.vmware.com/pdf/usenix_io_devices.pdf
VanderLeest, S. H., Greve, D., & Skentzos, P. (2013, October). A safe & secure ARINC
653 hypervisor. In Digital Avionics Systems Conference (DASC), 2013
IEEE/AIAA 32nd (pp. 7B4–1).
Volpe, R., Nesnas, I. A.D., Estlin, T., Mutz, D., Petras, R., & Das, H. (2000). CLARAty:
Coupled layer architecture for robotic autonomy. Retrieved from https://www-
robotics.jpl.nasa.gov/publications/Issa_Nesnas/CLARAty.pdf
vSphere ESXi. (n.d.). Retrieved January 2, 2014, from
http://www.vmware.com/products/vsphere/features-esxi-hypervisor
VxWorks on the Mars Exploration Rovers. (n.d.). Retrieved March 20, 2014, from
http://trs-new.jpl.nasa.gov/dspace/bitstream/2014/37779/1/05-0825.pdf
Wind River hypervisor. (n.d.). Retrieved February 11, 2015, from http://www.windriver.
com/products/product-notes/wind-river-hypervisor-product-note.pdf
Wind River Linux. (n.d.). Retrieved January 15, 2015, from http://www.windriver.com/
products/linux/
Windsor, J., Deredempt, M. H., & De-Ferluc, R. (2011, October). Integrated modular
avionics for spacecraft—User requirements, architecture and role definition. In
Digital Avionics Systems Conference (DASC), 2011 IEEE/AIAA 30th (pp. 8A6-1–
8A6-16).
Windsor, J., & Hjortnaes, K. (2009, July). Time and space partitioning in spacecraft
avionics. In Space Mission Challenges for Information Technology, 2009. Third
IEEE International Conference (pp. 13–20).
Wojtczuk, R. (2008). Subverting the Xen hypervisor. In Black Hat USA, 2008.
Wright, C. W., & Walsh, E. J. (1999, February 1). Hunting hurricanes. Retrieved from
http://www.linuxjournal.com/article/3212?page=0,0
Xi, S., Wilson, J., Lu, C., & Gill, C. (2011, October). RT-Xen: Towards real-time
hypervisor scheduling in Xen. In Embedded Software (EMSOFT), 2011
Proceedings of the International Conference (pp. 39–48).
Xtratum hypervisor. (2011). XtratuM hypervisor for LEON3, volume 2: User manual.
Retrieved from http://www.xtratum.org/files/xm-3-usermanual-022c.pdf
Zhang, J., Chen, K., Zuo, B., Ma, R., Dong, Y., & Guan, H. (2010, November).
Performance analysis towards a KVM-based embedded real-time virtualization
architecture. In Computer Sciences and Convergence Information Technology
(ICCIT), 2010 5th International Conference (pp. 421–426).
INITIAL DISTRIBUTION LIST