Chorus/Amoeba 2011 V2
Processor Pool of 80 single-board SPARC computers.
Design Goals (1)
• Distribution:
Connecting together many machines so that multiple independent users can work
on different projects. The machines need not be of the same type, and may be
spread around a building on a LAN.
• Parallelism:
Allowing individual jobs to use multiple CPUs easily. For example, a
branch-and-bound problem, such as the TSP, would be able to use tens or
hundreds of CPUs, as would a chess program in which the CPUs evaluate
different parts of the game tree.
• Transparency:
Having the collection of computers act like a single system. So, the user should
not log into a specific machine, but into the system as a whole. Storage and
location transparency, just-in-time binding
• Performance:
Achieving all of the above in an efficient manner. The basic communication
mechanism should be optimized to allow messages to be sent and received with a
minimum of delay. Also, large blocks of data should be moved from machine to
machine at high bandwidth.
Architectural Models
The Amoeba System Architecture
Micro-kernel
Provides low-level memory management. Threads can allocate and
de-allocate segments of memory.
Threads can be kernel threads or user threads that are part of a
process.
The micro-kernel provides communication between different threads
regardless of the nature or location of the threads.
The RPC mechanism is carried out via client and server stubs. All
communication in the Amoeba system is RPC based.
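The stub mechanism described above can be sketched as follows. This is a minimal illustration with hypothetical names, not the actual Amoeba interface: the client stub marshals a call into a message, a direct function call stands in for the kernel trap/transport step, and the server stub unmarshals and dispatches to the real procedure.

```python
import pickle

def server_stub(handlers, request_bytes):
    """Unmarshal the request, invoke the real procedure, marshal the reply."""
    op, args = pickle.loads(request_bytes)
    result = handlers[op](*args)
    return pickle.dumps(result)

class ClientStub:
    """Presents remote operations as ordinary local procedure calls."""
    def __init__(self, transport):
        self.transport = transport  # stands in for the kernel RPC path

    def call(self, op, *args):
        request = pickle.dumps((op, args))   # marshal the call
        reply = self.transport(request)      # send, block until the reply
        return pickle.loads(reply)           # unmarshal the result

# A toy "file server" with a single operation.
handlers = {"read": lambda name: f"contents of {name}"}
transport = lambda req: server_stub(handlers, req)
stub = ClientStub(transport)
print(stub.call("read", "notes.txt"))  # -> contents of notes.txt
```

The point of the stubs is that the client sees an ordinary procedure call; all the marshaling and transport detail is hidden below the `call` interface.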
Microkernel and Server Architecture
Modular design:
1. For example, the file server is isolated from the kernel.
2. Users may implement a specialized file server.
Threads
Each process has its own address space, but may contain multiple
threads of control.
Each thread logically has its own registers, program counter, and
stack.
Each thread shares code and global data with all other threads in the
process.
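The thread model above (shared globals, private stacks) can be demonstrated with a short sketch; Python's `threading` module stands in for Amoeba threads here.

```python
import threading

counter = 0                       # global data, shared by all threads
lock = threading.Lock()

def worker(n):
    global counter
    local_total = 0               # lives on this thread's own stack
    for _ in range(n):
        local_total += 1
    with lock:                    # updates to shared data need synchronization
        counter += local_total

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: all four threads updated the same shared variable
```

Each thread accumulated into its own `local_total` independently, but all of them see and modify the single shared `counter`.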
Great effort was made to optimize performance of RPCs between a
client and server running as user processes on different machines.
1.1 msec from client RPC initiation until reply is received and client
unblocks.
Objects and Capabilities
• During object creation, the server constructs a 128-bit value called a
“capability” and returns it to the caller.
• Subsequent operations on the object require the user to send its
capability to the server to both specify the object and prove that the
user has permission to manipulate the object.
128-bit Capability
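A sketch of the 128-bit capability. The field widths follow the published Amoeba design (48-bit server port, 24-bit object number, 8-bit rights field, 48-bit check field); the packing code itself is illustrative, not the actual Amoeba wire representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    server_port: int  # 48 bits: identifies the server that manages the object
    object_num: int   # 24 bits: which object at that server
    rights: int       #  8 bits: permitted operations (one bit per operation)
    check: int        # 48 bits: protects the capability against forgery

    def pack(self) -> int:
        """Pack the four fields into a single 128-bit integer."""
        assert self.server_port < 2**48 and self.object_num < 2**24
        assert self.rights < 2**8 and self.check < 2**48
        return (self.server_port << 80) | (self.object_num << 56) \
             | (self.rights << 48) | self.check

cap = Capability(server_port=0xABCD, object_num=7, rights=0xFF, check=0x1234)
assert cap.pack().bit_length() <= 128
```

Because the check field is unguessable, possession of a valid capability is itself the proof of permission; the server only has to verify the check field, not consult an access-control list.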
Memory Management
No swapping or paging.
Amoeba servers (outside the kernel)
Underlying concept: the services (objects) they provide
To create an object, the client does an RPC with the appropriate server.
To perform an operation, the client calls the stub procedure, which builds a
message containing the object’s capability and then traps to the kernel.
The kernel extracts the server port field from the capability and looks it up
in a cache to locate the machine on which the server resides.
If no cache entry is found, the kernel locates the server by broadcasting.
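The locate step above (port cache with broadcast fallback) can be sketched as follows; the names are hypothetical and a stub function stands in for the network broadcast.

```python
class PortCache:
    """Maps a server port to the machine where that server resides."""
    def __init__(self, broadcast_locate):
        self.cache = {}                          # port -> machine address
        self.broadcast_locate = broadcast_locate

    def locate(self, server_port):
        machine = self.cache.get(server_port)
        if machine is None:                      # cache miss: ask the network
            machine = self.broadcast_locate(server_port)
            self.cache[server_port] = machine    # remember for next time
        return machine

# Stand-in for the broadcast: pretend the file server's machine answers.
lookups = []
def fake_broadcast(port):
    lookups.append(port)
    return "machine-3"

cache = PortCache(fake_broadcast)
print(cache.locate("file-server"))  # broadcast on first use -> machine-3
print(cache.locate("file-server"))  # served from the cache, no broadcast
print(len(lookups))                 # 1: only the first lookup hit the network
```

The expensive broadcast is paid only once per port; every later RPC to the same server resolves its location from the cache.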
Directory Server
One-to-Many Communication:
Software Outside the Kernel
Applications
Use as a program development environment: Amoeba has a partial UNIX
emulation library. Most of the common library calls, like open, write,
close, and fork, have been emulated.
Use it for parallel programming: the large pool of processors makes it
possible to carry out computations in parallel.
Use it in embedded industrial applications, as shown in the diagram
below.
Amoeba – Lessons Learned
After more than eight years of development and use, the researchers
assessed Amoeba [Tanenbaum 1990, 1991]. Amoeba has demonstrated that it
is possible to build an efficient, high-performance distributed
operating system.
[Tanenbaum 1991]
Tanenbaum, A.S., Kaashoek, M.F., Renesse, R. van, and Bal, H.:
"The Amoeba Distributed Operating System-A Status Report,"
Computer Communications, vol. 14, pp. 324-335, July/August 1991.
[Amoeba 1996]
The Amoeba Distributed Operating System,
http://www.cs.vu.nl/pub/amoeba/amoeba.html
Chorus Distributed OS - Goals
Research project at INRIA (1979–1986)
Separate applications from different suppliers running on different
operating systems
need some higher level of coupling
Applications often evolve by growing in size leading to distribution of
programs to different machines
need for a gradual on-line evolution
Applications grow in complexity
need for the modularity of the application to be mapped onto the operating
system, concealing the unnecessary details of distribution from the
application
Chorus – Basic Architecture
• Nucleus
There is a general nucleus running on each machine
Communication and distribution are managed at the lowest level
by this nucleus
The CHORUS nucleus implements the real-time services required by real-time
applications
Traditional operating systems like UNIX are built on top of the
Nucleus and use its basic services.
Chorus versions
Chorus V0 (Pascal implementation)
Actor concept - Alternating sequence of indivisible
execution and communication phases
Distributed application as actors communicating by
messages through ports or groups of ports
Nucleus on each site
Chorus V1
Multiprocessor configuration
Structured messages, activity messages
Chorus V2, V3 (C++ implementation)
Unix subsystem (distant fork, distributed signals,
distributed files)
Nucleus Architecture
Chorus Nucleus
Supervisor (machine dependent)
dispatches interrupts, traps, and exceptions raised by the hardware
Real-time executive
controls the allocation of processors and provides synchronization and
scheduling
Virtual Memory Manager
manipulates the virtual memory hardware and local memory resources. It
uses IPC to request remote data in case of a page fault
IPC manager
provides asynchronous message exchange and RPC in a location-independent
fashion.
From version V3 onwards, actor, RPC, and port management were made part
of the Nucleus functions
Chorus Architecture
The Subsystems provide
applications with
traditional operating
system services
Nucleus Interface
Provides direct access to
the low-level services of the
CHORUS Nucleus
Subsystem Interface
e.g. UNIX emulation
environment, CHORUS/MiX
Thus, functions of an
operating system are split
into groups of services
provided by System Servers
(Subsystems)
User libraries – e.g. “C”
Chorus Architecture (cont.)
System servers work together to form what is called a subsystem
The Subsystem interface –
implemented as a set of cooperating servers representing complex
operating system abstractions
Note: this is distinct from the Nucleus interface
Abstractions in the Chorus Nucleus
Actor – a collection of resources in a Chorus system.
It defines a protected address space. Three types of actors: user (in the user
address space), system, and supervisor
Thread
Message (a byte string addressed to a port)
Port and Port Groups –
A port is attached to one actor and allows the threads of that actor to
receive messages sent to that port
Region
Actors, ports, and port groups have UIs (unique identifiers)
Actors
* trusted if the Nucleus allows it to perform sensitive Nucleus operations
* privileged if allowed to execute privileged instructions.
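The Chorus abstractions listed above (an actor owning ports, a message as a byte string addressed to a port, threads of the owning actor receiving from it) can be sketched as follows. The names are hypothetical, not the Chorus API.

```python
from collections import deque

class Port:
    """Attached to one actor; queues messages for that actor's threads."""
    def __init__(self, uid, owner):
        self.uid = uid            # globally unique identifier (UI)
        self.owner = owner        # the actor this port is attached to
        self.queue = deque()

class Actor:
    """A protected address space holding ports (and, in Chorus, threads)."""
    def __init__(self, uid):
        self.uid = uid            # actors also carry a UI
        self.ports = {}

    def create_port(self, port_uid):
        port = Port(port_uid, self)
        self.ports[port_uid] = port
        return port

def send(port, message: bytes):   # messages are byte strings
    port.queue.append(message)

def receive(actor, port_uid) -> bytes:
    return actor.ports[port_uid].queue.popleft()

server = Actor("actor-1")
p = server.create_port("port-42")
send(p, b"hello")
print(receive(server, "port-42"))  # b'hello'
```

Because ports carry their own UIs, a sender only needs a port's identifier, not the location of the actor behind it; that is what lets the IPC layer deliver messages in a location-independent fashion.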