
Pattern-Oriented Software Architecture

Applying Concurrent & Networked Objects to Develop & Use
Distributed Object Computing Middleware
Dr. Douglas C. Schmidt
schmidt@uci.edu
http://www.posa.uci.edu/

Electrical & Computer Engineering Department


The Henry Samueli School of Engineering
University of California, Irvine

Monday, 19 April 2004


Middleware Patterns Tutorial Outline
Illustrate how/why it’s hard to build robust, efficient, & extensible
concurrent & networked applications
• e.g., we must address many complex topics that are less problematic
  for non-concurrent, stand-alone applications
[Figure: a stand-alone architecture contrasted with a distributed
architecture]
Describe OO techniques & language features to enhance software quality
OO techniques & language features include:
• Patterns (25+), which embody reusable software architectures & designs
• Frameworks & components, which embody reusable software
  implementations
• OO language features, e.g., classes, inheritance & dynamic binding,
  parameterized types & exceptions
Tutorial Organization
1. Background & motivation
2. Concurrent & network challenges & solution approaches
3. Multiple case studies
4. Wrap-up and Q&A
The Road Ahead
CPUs and networks have increased by 3-7 orders of magnitude in the past
decade
• e.g., 2,400 bits/sec to 1 Gigabit/sec networks & 10 Megahertz to
  1 Gigahertz CPUs
Extrapolating this trend to 2010 yields
• ~100 Gigahertz desktops
• ~100 Gigabits/sec LANs
• ~100 Megabits/sec wireless
• ~10 Terabits/sec Internet backbone
These advances stem largely from standardizing hardware & software APIs
and protocols, e.g.:
• Intel x86 & Power PC chipsets
• TCP/IP, ATM
• POSIX & JVMs
• Middleware & components
• Ada, C, C++, RT Java
In general, software has not improved as rapidly or as effectively as
hardware, so increasing software productivity and QoS depends heavily
on COTS
Addressing the COTS “Crisis”
Distributed systems must increasingly reuse commercial-off-the-shelf
(COTS) hardware & software
• i.e., COTS is essential to R&D success
However, this trend presents many vexing R&D challenges for
mission-critical systems, e.g.,
• Inflexibility and lack of QoS
• Security & global competition
Why we should care:
• Despite IT commoditization, progress in COTS hardware & software is
  often not applicable for mission-critical distributed systems
• Recent advances in COTS software technology can help to fundamentally
  reshape distributed system R&D
R&D Challenges & Opportunities
Challenges:
• High-performance, real-time, fault-tolerant, & secure systems
• Autonomous systems
• Power-aware ad hoc, mobile, distributed, & embedded systems
Opportunities:
• Middleware, frameworks, & components
• Patterns & pattern languages
• Standards & open-source
[Figure: example challenge systems, including a large-scale router
fabric of IOM & BSE elements, paired with these opportunity areas]
The Evolution of COTS
Historically, mission-critical apps were built directly atop hardware
& OS
• Tedious, error-prone, & costly over lifecycles
Standards-based COTS middleware helps:
• Manage end-to-end resources
• Leverage HW/SW technology advances
• Evolve to new environments & requirements
The domain-specific services layer is where system integrators can
provide the most value & derive the most benefits
There are multiple COTS layers & research/business opportunities
Key R&D challenges include:
• Layered QoS specification & enforcement
• Separating policies & mechanisms across layers
• Time/space optimizations for middleware & apps
• Multi-level global resource mgmt. & optimization
• High confidence
• Stable & robust adaptive systems
Prior R&D efforts have addressed some, but by no means all, of these
issues
Consequences of COTS & IT Commoditization
• More emphasis on integration rather than programming
• Increased technology convergence & standardization
• Mass market economies of scale for technology & personnel
• More disruptive technologies & global competition
• Lower priced--but often lower quality--hardware & software components
• The decline of internally funded R&D
• Potential for complexity cap in next-generation complex systems
Not all trends bode well for long-term competitiveness of traditional
R&D leaders; ultimately, competitiveness will depend on the success of
long-term R&D efforts on complex distributed & embedded systems
Why We are Succeeding Now
Recent synergistic advances in fundamentals explain why
middleware-centric reuse works:
1. Hardware advances
   • e.g., faster CPUs & networks
2. Software/system architecture advances
   • e.g., inter-layer optimizations & meta-programming mechanisms
3. Economics & necessity
   • e.g., global competition for customers & engineers
Standards-based QoS-enabled middleware: pluggable service &
micro-protocol components & reusable “semi-complete” application
frameworks
Patterns & pattern languages: generate software architectures by
capturing recurring structures & dynamics & by resolving design forces
Revolutionary changes in software process: open-source, refactoring,
extreme programming (XP), advanced V&V techniques
Example: Applying COTS in Real-time Avionics
Goals
• Apply COTS & open systems to mission-critical real-time avionics
Key System Characteristics
• Deterministic & statistical deadlines
  • ~20 Hz
• Low latency & jitter
  • ~250 usecs
• Periodic & aperiodic processing
• Complex dependencies
• Continuous platform upgrades
Key Results
• Test flown at China Lake NAWS by Boeing OSAT II ‘98, funded by OS-JTF
  • www.cs.wustl.edu/~schmidt/TAO-boeing.html
• Also used on SOFIA project by Raytheon
  • sofia.arc.nasa.gov
• First use of RT CORBA in mission computing
• Drove Real-time CORBA standardization
Example: Applying COTS to Time-Critical Targets
Goals
• Detect, identify, track, & destroy time-critical targets
• The Joint Forces Global Info Grid challenge is to make this possible
• These challenges are also relevant to TBMD & NMD
Key System Characteristics
• Real-time mission-critical sensor-to-shooter needs
• Highly dynamic QoS requirements & environmental conditions
• Multi-service & asset coordination
Time-critical targets require immediate response because:
• They pose a clear & present danger to friendly forces
• They are highly lucrative, fleeting targets of opportunity
Key Solution Characteristics
• Adaptive & reflective
• Efficient & scalable
• High confidence
• Affordable & flexible
• Safety critical
• COTS-based
Adapted from “The Future of AWACS”, by LtCol Joe Chapa
Example: Applying COTS to Large-scale Routers
Goal
• Switch ATM cells + IP packets at terabit rates
Key System Characteristics
• Very high-speed WDM links
• 10^2/10^3 line cards
• Stringent requirements for availability
• Multi-layer load balancing, e.g.:
  • Layer 3+4
  • Layer 5
Key Software Solution Characteristics
• High confidence & scalable computing architecture
  • Networked embedded processors
  • Distribution middleware
  • FT & load sharing
  • Distributed & layered resource management
• Affordable, flexible, & COTS
[Figure: router fabric of IOM & BSE elements; see www.arl.wustl.edu]
Example: Applying COTS to Hot Rolling Mills
Goals
• Control the processing of molten steel moving through a hot rolling
  mill in real-time
System Characteristics
• Hard real-time process automation requirements
  • i.e., 250 ms real-time cycles
• System acquires values representing plant’s current state, tracks
  material flow, calculates new settings for the rolls & devices, &
  submits new settings back to plant
Key Software Solution Characteristics (www.siroll.de)
• Affordable, flexible, & COTS
• Product-line architecture
• Design guided by patterns & frameworks
• Windows NT/2000
• Real-time CORBA (ACE+TAO)
Example: Applying COTS to Real-time Image Processing
Goals (www.krones.com)
• Examine glass bottles for defects in real-time
System Characteristics
• Process 20 bottles per sec
  • i.e., ~50 msec per bottle
• Networked configuration
  • ~10 cameras
Key Software Solution Characteristics
• Affordable, flexible, & COTS
• Embedded Linux (Lem)
• Compact PCI bus + Celeron processors
• Remote booted by DHCP/TFTP
• Real-time CORBA (ACE+TAO)
Key Opportunities & Challenges in Concurrent & Networked Applications
Concurrency & Synchronization Motivations
• Leverage hardware/software advances
• Simplify program structure
• Increase performance
• Improve response-time
Networking & Distribution Motivations
• Collaboration
• Performance
• Reliability & availability
• Scalability & portability
• Extensibility
• Cost effectiveness
Accidental Complexities
• Low-level APIs
• Poor debugging tools
• Algorithmic decomposition
• Continuous re-invention & re-discovery of core concepts & components
Inherent Complexities
• Latency
• Reliability
• Load balancing
• Scheduling
• Causal ordering
• Synchronization
• Deadlocks
Overview of Patterns & Pattern Languages (www.posa.uci.edu)
Patterns
• Present solutions to common software problems arising within a
  certain context
• Help resolve key design forces, e.g., flexibility, extensibility,
  dependability, predictability, scalability, & efficiency
• Capture recurring structures & dynamics among software participants
  to facilitate reuse of successful designs
• Generally codify expert knowledge of design constraints & “best
  practices”
Pattern Languages
• Define a vocabulary for talking about software development problems
• Provide a process for the orderly resolution of these problems
• Help to generate & reuse software architectures
[Figure: the Proxy pattern as an example]
Software Design Abstractions for Concurrent & Networked Applications
Problem
• Distributed application functionality is subject to change since it
  is often reused in unforeseen contexts, e.g.,
  • Accessed from different clients
  • Run on different platforms
  • Configured into different run-time contexts
Solution
• Don‘t structure distributed applications as monoliths, but instead
  decompose them into classes, frameworks, & components
• A class is a unit of abstraction & implementation in an OO
  programming language
• A framework is an integrated collection of classes that collaborate
  to produce a reusable architecture for a family of related
  applications
• A component is an encapsulation unit with one or more interfaces
  that provide clients with access to its services
A Comparison of Class Libraries, Frameworks, & Components
[Figure: three architectures. In a class library architecture,
application-specific functionality & glue code make local invocations
on stand-alone classes (ADTs, math, files, strings, GUI, IPC, locks) &
drive the event loop. In a framework architecture, the framework’s
event loop invokes application-specific functionality. In a component
architecture, application-specific functionality makes local/remote
invocations on components such as locking, naming, trading, logging, &
events.]

              Class Libraries      Frameworks          Components
Granularity   Micro-level          Meso-level          Macro-level
Packaging     Stand-alone          “Semi-complete”     Stand-alone
              language entities    applications        composition entities
Domain        Domain-independent   Domain-specific     Domain-specific or
                                                       domain-independent
Control flow  Borrow caller’s      Inversion of        Borrow caller’s
              thread               control             thread
Overview of the ACE Framework (www.cs.wustl.edu/~schmidt/ACE.html)
Features
• Open-source
• 200,000+ lines of C++
• 30+ person-years of effort
• Ported to Win32, UNIX, & RTOSs
  • e.g., VxWorks, pSoS, LynxOS, Chorus, QNX
• Large open-source user community
  • www.cs.wustl.edu/~schmidt/ACE-users.html
• Commercial support by Riverace
  • www.riverace.com/
Key Capabilities Provided by ACE
• Service Access & Control
• Event Handling
• Concurrency
• Synchronization
The POSA2 Pattern Language
Observation
• Failure rarely results from unknown scientific principles, but from
  failing to apply proven engineering practices & patterns
Benefits of POSA2 Patterns
• Preserve crucial design information used by applications & underlying
  frameworks/components
• Facilitate design reuse
• Guide design choices for application developers
URL for POSA books: www.posa.uci.edu
POSA2 Pattern Abstracts
Service Access & Configuration Patterns
• The Wrapper Facade design pattern encapsulates the functions and data
  provided by existing non-object-oriented APIs within more concise,
  robust, portable, maintainable, and cohesive object-oriented class
  interfaces.
• The Component Configurator design pattern allows an application to
  link and unlink its component implementations at run-time without
  having to modify, recompile, or statically relink the application.
  Component Configurator further supports the reconfiguration of
  components into different application processes without having to
  shut down and re-start running processes.
• The Interceptor architectural pattern allows services to be added
  transparently to a framework and triggered automatically when certain
  events occur.
• The Extension Interface design pattern allows multiple interfaces to
  be exported by a component, to prevent bloating of interfaces and
  breaking of client code when developers extend or modify the
  functionality of the component.
Event Handling Patterns
• The Reactor architectural pattern allows event-driven applications to
  demultiplex and dispatch service requests that are delivered to an
  application from one or more clients.
• The Proactor architectural pattern allows event-driven applications
  to efficiently demultiplex and dispatch service requests triggered by
  the completion of asynchronous operations, to achieve the performance
  benefits of concurrency without incurring certain of its liabilities.
• The Asynchronous Completion Token design pattern allows an
  application to demultiplex and process efficiently the responses of
  asynchronous operations it invokes on services.
• The Acceptor-Connector design pattern decouples the connection and
  initialization of cooperating peer services in a networked system
  from the processing performed by the peer services after they are
  connected and initialized.
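
To make the Reactor abstract concrete, here is a minimal select()-based
reactor sketch in C++. The Reactor & EventHandler names & the single
handle_input() hook are illustrative simplifications, not ACE’s actual
API; error handling & removal of handlers during dispatch are elided:

  #include <sys/select.h>
  #include <map>

  // Hypothetical names for illustration; POSA2 defines similar roles.
  class EventHandler {
  public:
    virtual ~EventHandler () {}
    virtual void handle_input (int handle) = 0;  // handle is readable
  };

  class Reactor {
    std::map<int, EventHandler *> handlers_;     // handle -> handler table
  public:
    void register_handler (int handle, EventHandler *h) { handlers_[handle] = h; }
    void remove_handler (int handle) { handlers_.erase (handle); }

    // Demultiplex one round of input events & dispatch to handlers.
    void handle_events () {
      fd_set read_set;
      FD_ZERO (&read_set);
      int max_handle = -1;
      for (std::map<int, EventHandler *>::iterator i = handlers_.begin ();
           i != handlers_.end (); ++i) {
        FD_SET (i->first, &read_set);
        if (i->first > max_handle) max_handle = i->first;
      }
      if (::select (max_handle + 1, &read_set, 0, 0, 0) <= 0) return;
      for (std::map<int, EventHandler *>::iterator i = handlers_.begin ();
           i != handlers_.end (); ++i)
        if (FD_ISSET (i->first, &read_set))
          i->second->handle_input (i->first);    // dispatch service request
    }
  };

A server registers one handler per connected socket & then calls
handle_events() in a loop; the Proactor abstract differs in dispatching
on completion of asynchronous operations rather than on readiness.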
POSA2 Pattern Abstracts (cont’d)
Synchronization Patterns
• The Scoped Locking C++ idiom ensures that a lock is acquired when
  control enters a scope and released automatically when control leaves
  the scope, regardless of the return path from the scope.
• The Strategized Locking design pattern parameterizes synchronization
  mechanisms that protect a component’s critical sections from
  concurrent access.
• The Thread-Safe Interface design pattern minimizes locking overhead
  and ensures that intra-component method calls do not incur
  ‘self-deadlock’ by trying to reacquire a lock that is held by the
  component already.
• The Double-Checked Locking Optimization design pattern reduces
  contention and synchronization overhead whenever critical sections of
  code must acquire locks in a thread-safe manner just once during
  program execution.
Concurrency Patterns
• The Active Object design pattern decouples method execution from
  method invocation to enhance concurrency and simplify synchronized
  access to objects that reside in their own threads of control.
• The Monitor Object design pattern synchronizes concurrent method
  execution to ensure that only one method at a time runs within an
  object. It also allows an object’s methods to cooperatively schedule
  their execution sequences.
• The Half-Sync/Half-Async architectural pattern decouples asynchronous
  and synchronous service processing in concurrent systems, to simplify
  programming without unduly reducing performance. The pattern
  introduces two intercommunicating layers, one for asynchronous and
  one for synchronous service processing.
• The Leader/Followers architectural pattern provides an efficient
  concurrency model where multiple threads take turns sharing a set of
  event sources in order to detect, demultiplex, dispatch, and process
  service requests that occur on the event sources.
• The Thread-Specific Storage design pattern allows multiple threads to
  use one ‘logically global’ access point to retrieve an object that is
  local to a thread, without incurring locking overhead on each object
  access.
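
As a concrete illustration of the Scoped Locking idiom described above,
a minimal C++ sketch follows. The Thread_Mutex & Guard names echo ACE’s
classes, but this is an illustrative reconstruction, not their actual
implementations:

  #include <pthread.h>

  // Minimal mutex wrapper, assumed to provide acquire()/release().
  class Thread_Mutex {
  public:
    Thread_Mutex () { pthread_mutex_init (&m_, 0); }
    ~Thread_Mutex () { pthread_mutex_destroy (&m_); }
    void acquire () { pthread_mutex_lock (&m_); }
    void release () { pthread_mutex_unlock (&m_); }
  private:
    pthread_mutex_t m_;
  };

  // Scoped Locking: acquire in the constructor, release in the destructor.
  template <class LOCK>
  class Guard {
  public:
    Guard (LOCK &lock) : lock_ (lock) { lock_.acquire (); }
    ~Guard () { lock_.release (); }
  private:
    LOCK &lock_;
  };

  // Every return path out of the scope releases the mutex automatically.
  int increment (Thread_Mutex &m, int &count) {
    Guard<Thread_Mutex> guard (m);
    if (count < 0) return -1;  // early return still releases m
    return ++count;
  }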
Example of Applying Patterns & Frameworks: Real-time CORBA & The ACE
ORB (TAO) (www.cs.wustl.edu/~schmidt/TAO.html)
TAO Features
• Open-source
• 500+ classes & 500,000+ lines of C++
• ACE/patterns-based
• 30+ person-years of effort
• Ported to UNIX, Win32, MVS, & many RT & embedded OSs
  • e.g., VxWorks, LynxOS, Chorus, QNX
• Large open-source user community
  • www.cs.wustl.edu/~schmidt/TAO-users.html
• Commercial support by OCI
  • www.theaceorb.com/
[Figure: Real-time CORBA features, e.g., end-to-end priority
propagation, scheduling service, thread pools, standard synchronizers,
protocol properties, explicit binding, & portable priorities]
Tutorial Example 1: Electronic Medical Imaging Systems
Goal
• Route, manage, & manipulate electronic medical images robustly,
  efficiently, & securely throughout a distributed environment
System Characteristics
• Large volume of “blob” data
  • e.g., 10 to 40 Mbps
• “Lossy” compression isn’t viable due to liability concerns
• Diverse QoS requirements, e.g.,
  • Synchronous & asynchronous communication
  • Streaming communication
  • Prioritization of requests & streams
  • Distributed resource management
Modalities, e.g., MRI, CT, CR, Ultrasound, etc. (www.syngo.com)
Key Software Solution Characteristics
• Affordable, flexible, & COTS
• Product-line architecture
• Design guided by patterns & frameworks
• General-purpose & embedded OS platforms
• Middleware technology agnostic
Image Acquisition Scenario
Key Tasks
1. Image routing
2. Image delivery
[Figure: diagnostic & clinical workstations interact with a radiology
client, the naming service, factory & xfer proxies, an image transfer
component (servant, factory, container), an image database, a
configuration database, a security service, & a factory/finder. The
numbered interactions are: 1. Deploy; 2. Enter config. info; 3. Bind
Factory; 4. Find Factory; 5. Find Image; 6. Intercept & delegate;
7. New; 8. New; 9. Return Ref; 10. Invoke get_image() call; 11. Query
Config.; 12. Check Authorization; 13. Activate; 14. Delegate]
Applying Patterns to Resolve Key Design Challenges
[Figure: the image acquisition scenario annotated with the patterns
applied at each point: Layers, Proxy, Broker, Publisher/Subscriber,
Async Forwarder/Receiver, Extension Interface, Active Object,
Activator, Interceptor, Factory/Finder, & Service Configurator]
Patterns help resolve the following common design challenges:
• Separating concerns between tiers
• Improving type-safety & performance
• Enabling client extensibility
• Ensuring platform-neutral & network-transparent OO communication
• Supporting async communication
• Supporting OO async communication
• Decoupling suppliers & consumers
• Providing mechanisms to find & create remote components
• Locating & creating components effectively
• Extending components transparently
• Minimizing resource utilization
• Enhancing server (re)configurability
Separating Concerns Between Tiers
Context
• Distributed systems are now common due to the advent of
  • The global Internet
  • Ubiquitous mobile & embedded devices
Problem
• One reason it’s hard to build COTS-based distributed systems is
  because a large number of capabilities must be provided to meet
  end-to-end application requirements
Solution
• Apply the Layers architectural pattern to create a multi-tier
  architecture that separates concerns between groups of subtasks
  occurring at distinct layers in the distributed system
  • Presentation tier, e.g., thin clients
  • Middle tier, e.g., common business logic
  • Database tier, e.g., persistent data
• Services in the middle tier participate in various types of tasks,
  e.g.,
  • Workflow of integrated “business” processes
  • Connect to databases & other backend systems for data storage &
    access
Applying the Layers Pattern to Image Acquisition
Presentation tier, e.g., radiology clients
• Diagnostic & clinical workstations are presentation tier components
  that:
  • Typically represent sophisticated GUI elements
  • Share the same address space with their clients
  • Their clients are containers that provide all the resources
  • Exchange messages with the middle tier components
Middle tier, e.g., image routing & image transfer logic
• Image servers are middle tier components that:
  • Provide server-side functionality
    • e.g., they are responsible for scalable concurrency & networking
  • Can run in their own address space
  • Are integrated into containers that hide low-level system details
Database tier, e.g., persistent image data
• Image databases
Pros & Cons of the Layers Pattern
This pattern has four benefits:
• Reuse of layers
  • If an individual layer embodies a well-defined abstraction & has a
    well-defined & documented interface, the layer can be reused in
    multiple contexts
• Support for standardization
  • Clearly-defined and commonly-accepted levels of abstraction enable
    the development of standardized tasks & interfaces
• Dependencies are localized
  • Standardized interfaces between layers usually confine the effect
    of code changes to the layer that is changed
• Exchangeability
  • Individual layer implementations can be replaced by
    semantically-equivalent implementations without undue effort
This pattern also has liabilities:
• Cascades of changing behavior
  • If layer interfaces & semantics aren’t abstracted properly then
    changes can ripple when behavior of a layer is modified
• Lower efficiency
  • A layered architecture can be less efficient than a monolithic
    architecture
• Unnecessary work
  • If some services performed by lower layers perform excessive or
    duplicate work not actually required by the higher layer,
    performance can suffer
• Difficulty of establishing the correct granularity of layers
  • It’s important to avoid too many & too few layers
Overview of Distributed Object Computing Communication Mechanisms
Context
• In multi-tier systems both the tiers & the components within the
  tiers must be connected via communication mechanisms
Problem
• A single communication mechanism does not fit all uses!
Solution
• DOC middleware provides multiple types of communication mechanisms
  • Collocated client/server (i.e., native function call)
  • Synchronous & asynchronous RPC/IPC
  • Group communication
  • Data streaming
Next, we’ll explore various patterns that applications can apply to
leverage these communication mechanisms
Improving Type-safety & Performance
Context
• The configuration of components in distributed systems is often
  subject to change as requirements evolve
Problems
• Low-level message passing is fraught with accidental complexity
• Remote components should look like local components from an
  application perspective
  • i.e., clients & servers should be oblivious to communication issues
Solution
• Apply the Proxy design pattern to provide an OO surrogate through
  which clients can access remote objects
Structure & participants:
• A Service implements the object, which is not accessible directly
• A Proxy represents the Service and ensures the correct access to it
  • Proxy offers same interface as Service
  • The Proxy performs pre-processing (e.g., marshaling) &
    post-processing (e.g., unmarshaling) around each forwarded call
• Clients use the Proxy to access the Service
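
The structure above fits in a few lines of C++. The names below
(AbstractService, Proxy, Service) mirror the participants on this
slide; the networking is reduced to comments, so this is a shape
sketch rather than a working remote proxy:

  #include <string>

  // Hypothetical interfaces mirroring the structure above.
  class AbstractService {
  public:
    virtual ~AbstractService () {}
    virtual std::string service () = 0;
  };

  class Service : public AbstractService {  // the real object, not
  public:                                   // directly accessible
    std::string service () { return "image bytes"; }
  };

  class Proxy : public AbstractService {    // same interface as Service
  public:
    Proxy (Service &s) : service_ (s) {}
    std::string service () {
      // pre-processing: e.g., marshal the request & send it over the net
      std::string result = service_.service ();  // forward to the Service
      // post-processing: e.g., unmarshal the reply
      return result;
    }
  private:
    Service &service_;  // in a real broker this would be a connection
  };

The commented pre-/post-processing steps are the hooks where
middleware-generated proxies insert optimized marshaling code, as the
next slide notes.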
Applying the Proxy Pattern to Image Acquisition
We can apply the Proxy pattern to provide a strongly-typed interface
(get_image()) to initiate & coordinate the downloading of images from
an image database
[Figure: the Radiology Client invokes a get_image() call on an Xfer
Proxy, which forwards it to the Image Xfer Service backed by the Image
Database]
When proxies are generated automatically by middleware they can be
optimized to be much more efficient than manual message passing
• e.g., improved memory management, data copying, & compiled
  marshaling/demarshaling
Pros & Cons of the Proxy Pattern
This pattern provides three benefits:
• Decoupling clients from the location of server components
  • By putting all location information & addressing functionality into
    a proxy, clients are not affected by migration of servers or
    changes in the networking infrastructure
• Potential for time & space optimizations
  • Proxy implementations can be loaded “on-demand” and can also be
    used to cache values to avoid remote calls
  • Proxies can also be optimized to improve both type-safety &
    performance
• Separation of housekeeping & functionality
  • A proxy relieves clients from burdens that do not inherently belong
    to the task the client performs
This pattern has two liabilities:
• Potential overkill via sophisticated strategies
  • If proxies include overly sophisticated functionality they may
    introduce overhead that defeats their intended purpose
• Less efficiency due to indirection
  • Proxies introduce an additional layer of indirection that can be
    excessive if the proxy implementation is inefficient
Enabling Client Extensibility
Context
• Object models define how components import & export functionality
  • e.g., UML class diagrams specify well-defined OO interfaces
Problem
• Many object models assign a single interface to each component
• This design makes it hard to evolve components without
  • Breaking existing client interfaces
  • Bloating client interfaces
Solution
• Apply the Extension Interface design pattern to allow multiple
  interfaces to be exported by a component, to prevent bloating of
  interfaces & breaking of client code when developers extend or modify
  component functionality
Structure & participants:
• A Root interface (QueryInterface, CreateInstance) lets clients ask
  for a reference to an interface & then call operations on it
• A Factory (CreateComponent) creates components
• A Component implements one or more Extension Interfaces, each with
  its service operations
• A Server (initialize, uninitialize) hosts the components
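
A minimal C++ sketch of the QueryInterface-style navigation above
follows (COM-flavored & illustrative; the interface names & the
string-based interface IDs are assumptions, not part of the pattern’s
definition):

  #include <cstring>
  #include <cstdio>

  // "Root" role: every extension interface supports navigation.
  class Root {
  public:
    virtual ~Root () {}
    // Returns the requested extension interface, or 0 if unsupported.
    virtual void *query_interface (const char *id) = 0;
  };

  class IImageXfer : public Root {        // one extension interface
  public:
    virtual void get_image (const char *name) = 0;
  };

  class IImageAnnotate : public Root {    // added later without breaking
  public:                                 // existing IImageXfer clients
    virtual void annotate (const char *note) = 0;
  };

  class ImageComponent : public IImageXfer, public IImageAnnotate {
  public:
    void *query_interface (const char *id) {
      if (std::strcmp (id, "IImageXfer") == 0)
        return static_cast<IImageXfer *> (this);
      if (std::strcmp (id, "IImageAnnotate") == 0)
        return static_cast<IImageAnnotate *> (this);
      return 0;
    }
    void get_image (const char *name) { std::printf ("fetch %s\n", name); }
    void annotate (const char *note) { std::printf ("note: %s\n", note); }
  };

  int main () {
    ImageComponent comp;
    IImageXfer *xfer =
        static_cast<IImageXfer *> (comp.query_interface ("IImageXfer"));
    xfer->get_image ("ct-001");
    // Navigate from one extension interface to another, as in the
    // dynamics on the next slide.
    IImageAnnotate *edit =
        static_cast<IImageAnnotate *> (xfer->query_interface ("IImageAnnotate"));
    edit->annotate ("lesion at slice 42");
  }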
Extension Interface Pattern Dynamics
1. The client calls CreateInstance(Ext. Intf. 1) on the factory, which
   creates the component & its first extension interface & returns a
   reference to Extension 1
2. The client invokes service_1 on Extension 1
3. The client calls QueryInterface(Extension Interface 2), which
   creates Extension 2 & returns a reference to it
4. The client invokes service_2 on Extension 2
Note how each extension interface can serve as a “factory” to return
object references to other extension interfaces
Pros & Cons of the Extension Interface Pattern
This pattern has five benefits:
• Separation of concerns
  • Interfaces are strictly decoupled from implementations
• Exchangeability of components
  • Component implementations can evolve independently from clients
    that access them
• Extensibility through interfaces
  • Clients only access components via their interfaces, which reduces
    coupling to representation & implementation details
• Prevention of interface bloating
  • Interfaces need not contain all possible methods, just the ones
    associated with a particular capability
• No subclassing required
  • Delegation, rather than inheritance, is used to customize
    components
This pattern also has liabilities:
• Overhead due to indirection
  • Clients must incur the overhead of several round-trips to obtain
    the appropriate object reference from a server component
• Complexity & cost for development & deployment
  • This pattern off-loads the responsibility for determining the
    appropriate interface from the component designer to the client
    application
Ensuring Platform-neutral & Network-transparent OO Communication
Context
• Using the Proxy pattern is insufficient since it doesn‘t address how
  • Remote components are located
  • Connections are established
  • Messages are exchanged across a network
  • etc.
Problem
• We need an architecture that:
  • Supports remote method invocation
  • Provides location transparency
  • Allows the addition, exchange, or removal of services dynamically
  • Hides system details from the developer
Solution
• Apply the Broker pattern to provide OO platform-neutral communication
  between networked client & server components
Structure & participants:
• A Client Proxy (marshal, unmarshal, receive_result, service_p) gives
  the client an OO view of the service
• The Broker (main_loop, srv_registration, srv_lookup,
  transmit_message, message exchange) registers servers, locates
  services, & exchanges messages
• A Server Proxy (marshal, unmarshal, receive_request, dispatch)
  receives requests & dispatches them to the Server (start_up,
  main_loop, start_task, service_i)
• Bridges (marshal, unmarshal, forward_message, transmit_message)
  connect brokers across networks
Broker Pattern Dynamics
1. The server registers itself with the broker (register_service) &
   starts up; the broker assigns it a port
2. The client invokes a method on its client proxy, which marshals the
   request; the broker locates the server (locate_server) & returns the
   server port
3. The request is transmitted to the server proxy, which receives it
   (receive_request), unmarshals it, & dispatches the method on the
   server implementation
4. The result is marshaled & sent back; the client proxy receives it
   (receive_result), unmarshals it, & returns the result to the client
Broker tools provide the generation of the necessary client & server
proxies from higher-level interface definitions (interface
specification → proxy generator → proxy code)
Applying the Broker Pattern to Image Acquisition
Common Object Request Broker Architecture (CORBA)
• A family of specifications
  • OMG is the standards body
  • Over 800 companies
• CORBA defines interfaces
  • Rather than implementations
• Simplifies development of distributed applications by automating
  • Object location
  • Connection management
  • Memory management
  • Parameter (de)marshaling
  • Event & request demuxing
  • Error handling
  • Object/server activation
  • Concurrency
• CORBA shields applications from environment heterogeneity
  • e.g., programming languages, operating systems, networking
    protocols, hardware
[Figure: a client holding an object reference invokes operation() with
in args & receives out args + return values from an object (servant);
requests flow through IDL stubs or the DII, the ORB core
(GIOP/IIOP/ESIOPs), the object adapter, & IDL skeletons or the DSI; an
IDL compiler, interface repository, & implementation repository support
this machinery]
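
A hypothetical CORBA C++ client illustrates how little the broker
machinery shows through. It assumes an IDL interface such as
“interface ImageXfer { string get_image (in string name); };” whose
generated stub (the client proxy) is in ImageXferC.h; the header name,
interface, & the source of the object reference are assumptions that
vary by ORB & deployment, & exception handling is omitted:

  #include "ImageXferC.h"  // stub header generated from the IDL

  int main (int argc, char *argv[]) {
    CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);
    // Obtain & narrow an object reference (here a stringified IOR arg).
    CORBA::Object_var obj = orb->string_to_object (argv[1]);
    ImageXfer_var xfer = ImageXfer::_narrow (obj.in ());
    // The call looks local; the ORB marshals the request, locates &
    // possibly activates the servant, & unmarshals the reply.
    CORBA::String_var image = xfer->get_image ("patient-42/ct-001");
    orb->destroy ();
    return 0;
  }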
Pros & Cons of the Broker Pattern
This pattern has five benefits:
• Portability enhancements
  • A broker hides OS & network system details from clients and servers
    by using indirection & abstraction layers, such as APIs, proxies,
    adapters, & bridges
• Interoperability with other brokers
  • Different brokers may interoperate if they understand a common
    protocol for exchanging messages
• Reusability of services
  • When building new applications, brokers enable application
    functionality to reuse existing services
• Location transparency
  • A broker is responsible for locating servers, so clients need not
    know where servers are located
• Changeability & extensibility of components
  • If server implementations change without affecting interfaces,
    clients should not be affected
This pattern also has liabilities:
• Restricted efficiency
  • Applications using brokers may be slower than applications written
    manually
• Lower fault tolerance
  • Compared with non-distributed software applications, distributed
    broker systems may incur lower fault tolerance
• Testing & debugging may be harder
  • Testing & debugging of distributed systems is tedious because of
    all the components involved
Supporting Async Communication
Context
• Some clients want to send requests, continue their work, & receive
  the results at some later point in time
Problem
• Broker implementations based on conventional RPC semantics often just
  support blocking operations
  • i.e., clients must wait until two-way invocations return
• Unfortunately, this design can reduce scalability & complicate
  certain use-cases
Solution
• Apply the Async Forwarder/Receiver design pattern to allow
  asynchronous communication between clients & servers
Introduce intermediary queue(s) between clients & servers:
• A queue is used to store messages
• A queue can cooperate with other queues to route messages
• Messages are sent from sender to receiver
• A client sends a message, which is queued & then forwarded to a
  message processor on a server that receives & executes it
• A Message API is provided for clients & servers to send/receive
  messages
[Figure: Client → <<send>> Message API → Local Queue (store, forward,
remove) → <<route>> → Remote Queue (store, forward, remove) →
<<exec>> → Message Processor → <<recv>> Message API]
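
The queueing machinery is easy to sketch in one process. The following
C++11 sketch collapses the local & remote queues into one MessageQueue
& represents messages as callables; it illustrates the decoupling, not
the cross-network routing (names are illustrative):

  #include <condition_variable>
  #include <functional>
  #include <mutex>
  #include <queue>
  #include <thread>

  class MessageQueue {
    std::queue<std::function<void ()>> q_;
    std::mutex m_;
    std::condition_variable cv_;
  public:
    void store (std::function<void ()> msg) {   // client: send & continue
      { std::lock_guard<std::mutex> g (m_); q_.push (std::move (msg)); }
      cv_.notify_one ();
    }
    std::function<void ()> remove () {          // message processor side
      std::unique_lock<std::mutex> g (m_);
      cv_.wait (g, [this] { return !q_.empty (); });
      auto msg = std::move (q_.front ());
      q_.pop ();
      return msg;
    }
  };

  int main () {
    MessageQueue queue;
    std::thread processor ([&] { queue.remove () (); });  // receive & exec
    queue.store ([] { /* e.g., fetch & return an image */ });  // no block
    processor.join ();
  }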
Async Forwarder/Receiver Pattern Dynamics
1. The client creates a message & sends it via the Message API
2. The local queue stores the message & forwards it to the remote queue
3. On the server side the message processor receives & executes the
   message
4. While this happens, the client continues with other processing
5. The reply travels back the same way: it is stored, forwarded, &
   finally received by the client via the Message API
Applying the Async Forwarder/Receiver Pattern to Image Acquisition
We can apply the Async Forwarder/Receiver pattern to
• Queue up image request messages remotely without blocking the
  diagnostic/clinical workstation clients
• Execute the requests at a later point & return the results to the
  client
[Figure: the Radiology Client sends messages through the Message API &
local queue, which routes them to the remote queue at the Image Xfer
Service; a message processor on the image server executes them against
the Image Database]
This design also enables other, more advanced capabilities, e.g.,
• Multi-hop store & forward persistence
• QoS-driven routing, where requests can be delivered to the “best”
  image database
Pros & Cons of the Async Forwarder/Receiver Pattern
This pattern provides three benefits:
• Enhances concurrency by transparently leveraging available
  parallelism
  • Messages can be executed remotely on servers while clients perform
    other processing
• Simplifies synchronized access to a shared object that resides in its
  own thread of control
  • Since messages are processed serially by a message processor,
    target objects often need not be concerned with synchronization
    mechanisms
• Message execution order can differ from message invocation order
  • This allows reprioritizing of messages to enhance quality of
    service
This pattern also has some liabilities:
• Message execution order can differ from message invocation order
  • As a result, clients must be careful not to rely on ordering
    dependencies
• Lack of type-safety
  • Clients & servers are responsible for formatting & passing messages
• Complicated debugging
  • As with all distributed systems, debugging & testing is complex
Supporting OO Async Communication
Context
• Some clients want to invoke remote operations, continue their work, &
  retrieve the results at a later point in time
Problem
• Using the explicit message-passing API of the Async
  Forwarder/Receiver pattern can reduce type-safety & performance
  • Similar to motivation for Proxy pattern
Solution
• Apply the Active Object design pattern to decouple method invocation
  from method execution using an object-oriented programming model
Structure & participants:
• A proxy provides an interface that allows clients to access methods
  of an object
• A concrete method request is created for every method invoked on the
  proxy
• A scheduler receives the method requests & dispatches them on the
  servant when they become runnable
• An activation list maintains pending method requests
• A servant implements the methods
• A future allows clients to access the results of a method call on the
  proxy
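
A compressed C++11 sketch of these participants follows. It uses
std::packaged_task & std::future for the method request & future roles
& folds the activation list & scheduler into one queue & thread, so it
is a simplification of the POSA2 structure, not a reproduction of it:

  #include <condition_variable>
  #include <functional>
  #include <future>
  #include <mutex>
  #include <queue>
  #include <thread>

  class ImageServant {                    // servant: implements the method
  public:
    int get_image (int id) { return id * 2; /* stand-in for real work */ }
  };

  class ImageProxy {
    ImageServant servant_;
    std::queue<std::function<void ()>> activation_list_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::thread scheduler_;              // runs in its own thread of control
  public:
    ImageProxy () : scheduler_ ([this] {
      for (;;) {                         // scheduler: dequeue & dispatch
        std::unique_lock<std::mutex> g (m_);
        cv_.wait (g, [this] { return done_ || !activation_list_.empty (); });
        if (activation_list_.empty ()) return;
        auto req = std::move (activation_list_.front ());
        activation_list_.pop ();
        g.unlock ();
        req ();                          // execute on the servant's thread
      }
    }) {}
    ~ImageProxy () {
      { std::lock_guard<std::mutex> g (m_); done_ = true; }
      cv_.notify_one ();
      scheduler_.join ();
    }
    std::future<int> get_image (int id) {  // invocation: non-blocking
      auto task = std::make_shared<std::packaged_task<int ()>> (
          [this, id] { return servant_.get_image (id); });
      std::future<int> result = task->get_future ();
      { std::lock_guard<std::mutex> g (m_);
        activation_list_.push ([task] { (*task) (); }); }
      cv_.notify_one ();
      return result;                     // future: client retrieves later
    }
  };

A client calls proxy.get_image (42), keeps working, & later calls
.get () on the returned future; blocking, polling, & callback
retrieval styles are noted on the next slide.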
Active Object Pattern Dynamics
• A client invokes a method on the proxy
• The proxy returns a future to the client, & creates a method request,
  which it passes to the scheduler
• The scheduler enqueues the method request into the activation list
• When the method request becomes runnable, the scheduler dequeues it
  from the activation list & executes it in a different thread than the
  client
• The method request executes the method on the servant & writes
  results, if any, to the future
• Clients obtain the method’s results via the future
Clients can obtain results from futures via blocking, polling, or
callbacks
Applying the Active Object Pattern to Image Acquisition
• OO developers generally prefer method-oriented request/response
  semantics to message-oriented semantics
• The Active Object pattern supports this preference via strongly-typed
  async method APIs:
  • Several types of parameters can be passed:
    • Requests contain in/inout arguments
    • Results carry out/inout arguments & results
  • Callback object or poller object can be used to retrieve results
[Figure: the Radiology Client calls get_image() on a proxy & later
obtains the results from a future, while the scheduler dispatches
get_image()/put_image() method requests on the Image Xfer Service
servant backed by the Image Database]
Pros & Cons of the Active Object Pattern
This pattern provides four benefits:
• Enhanced type-safety
  • Compared with async message passing
• Enhances concurrency & simplifies synchronization complexity
  • Concurrency is enhanced by allowing client threads & asynchronous
    method executions to run simultaneously
  • Synchronization complexity is simplified by using a scheduler that
    evaluates synchronization constraints to guarantee serialized
    access to servants
• Transparent leveraging of available parallelism
  • Multiple active object methods can execute in parallel if supported
    by the OS/hardware
• Method execution order can differ from method invocation order
  • Methods invoked asynchronously are executed according to the
    synchronization constraints defined by their guards & by scheduling
    policies
This pattern also has some liabilities:
• Performance overhead
  • Depending on how an active object’s scheduler is implemented,
    context switching, synchronization, & data movement overhead may
    occur when scheduling & executing active object invocations
• Complicated debugging
  • It is hard to debug programs that use the Active Object pattern due
    to the concurrency & non-determinism of the various active object
    schedulers & the underlying OS thread scheduler
Decoupling Suppliers & Consumers
Context
• In large-scale electronic medical imaging systems, radiologists may
  share “work lists” of patient images to balance workloads effectively
Problem
• Having each client call a specific server is inefficient &
  non-scalable
  • A polling strategy leads to performance bottlenecks
  • Work lists could be spread across different servers
  • More than one client may be interested in work list content
Solution
• Apply the Publisher/Subscriber pattern to decouple image suppliers
  (publishers) from image consumers (subscribers) of events:
  • An Event Channel stores/forwards events
  • Publishers create events & store them in a queue maintained by the
    Event Channel
  • Consumers register with event queues, from which they retrieve
    events
  • Events are used to transmit state change info from publishers to
    consumers
  • For event transmission push-models & pull-models are possible
  • Filters can filter events for consumers
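
A minimal push-model event channel can be sketched in C++ as follows
(illustrative names; subscribers are plain callbacks & the filter role
is folded into the callback body):

  #include <cstdio>
  #include <functional>
  #include <string>
  #include <vector>

  struct Event { std::string topic, payload; };

  class EventChannel {
    std::vector<std::function<void (const Event &)>> subscribers_;
  public:
    void attach_subscriber (std::function<void (const Event &)> consume) {
      subscribers_.push_back (std::move (consume));
    }
    void produce (const Event &e) {    // publisher pushes; channel forwards
      for (auto &consume : subscribers_) consume (e);
    }
  };

  int main () {
    EventChannel channel;
    // A radiologist-style subscriber; a filter could wrap this callback.
    channel.attach_subscriber ([] (const Event &e) {
      if (e.topic == "new-image")
        std::printf ("notified: %s\n", e.payload.c_str ());
    });
    channel.produce ({"new-image", "modality MRI, study 17"});  // publisher
  }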
Publisher/Subscriber Pattern Dynamics
• The Publisher/Subscriber pattern helps keep the state of cooperating
  components synchronized
• To achieve this it enables one-way propagation of changes: one
  publisher notifies any number of subscribers about changes to its
  state
• Dynamics: a subscriber attaches to the event channel
  (attachSubscriber); the publisher produces events, which the channel
  pushes to subscribers (pushEvent); subscribers consume them; finally
  the subscriber detaches (detachSubscriber)
Key design considerations for the Publisher/Subscriber pattern include:
• Push vs. pull interaction models
• Control vs. data event notification models
• Multicast vs. unicast communication models
• Persistent vs. transient queueing models
Applying the Publisher/Subscriber Pattern to Image Acquisition
• Radiologists can subscribe to an event channel in order to receive
  notifications produced when modalities publish events indicating the
  arrival of a new image
• This design enables a group of distributed radiologists to
  collaborate effectively in a networked environment
[Figure: modalities publish to the event channel
(attachPublisher/detachPublisher, produce); radiology clients subscribe
(attachSubscriber/detachSubscriber, consume); filters can be applied to
events; the image database also feeds the channel]
Pros & Cons of the Publisher/Subscriber Pattern
This pattern has two benefits:
• Decouples consumers & producers of events
  • All an event channel knows is that it has a list of consumers, each
    conforming to the simple interface of the Subscriber class
  • Thus, the coupling between the publishers and subscribers is
    abstract & minimal
• n:m communication models are supported
  • Unlike an ordinary request/response interaction, the notification
    that a publisher sends needn’t designate its receiver, which
    enables a broader range of communication topologies, including
    multicast & broadcast
There is also a liability:
• Must be careful with potential update cascades
  • Since subscribers have no knowledge of each other’s presence,
    applications can be blind to the ultimate cost of publishing events
    through an event channel
  • Thus, a seemingly innocuous operation on the subject may cause a
    cascade of updates to observers & their dependent objects
Locating & Creating Components Effectively
Context
• Our electronic medical imaging system contains many components
  distributed in a network
Problem
• How to create new components and/or find existing ones
  • Simple solutions appropriate for stand-alone applications don’t
    scale
  • “Obvious” solutions for distribution also don’t scale
Solution
• Apply the Factory/Finder pattern to separate the management of
  component lifecycles from their use by client applications
Structure & participants:
• An Abstract Home declares an interface for operations that find
  and/or create abstract instances of components
• Concrete Homes implement the abstract Home interface to find specific
  instances and/or create new ones
• An Abstract Comp declares the interface for a specific type of
  component class
• Concrete Comps define instances
• A Primary Key is associated with a component
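
A minimal C++ sketch of the Home role follows, with the
ImageXferHome/ImageXferComp names borrowed from the application slide
below & a string primary key assumed (a real home would also consult
remote registries rather than an in-process map):

  #include <map>
  #include <memory>
  #include <string>

  class ImageXferComp {                    // concrete component
  public:
    explicit ImageXferComp (std::string key) : key_ (std::move (key)) {}
    const std::string &key () const { return key_; }
  private:
    std::string key_;                      // the component's primary key
  };

  class ImageXferHome {                    // concrete home: find/create
    std::map<std::string, std::shared_ptr<ImageXferComp>> instances_;
  public:
    std::shared_ptr<ImageXferComp> find (const std::string &primary_key) {
      auto i = instances_.find (primary_key);
      return i == instances_.end () ? nullptr : i->second;
    }
    std::shared_ptr<ImageXferComp> find_or_create (const std::string &primary_key) {
      if (auto existing = find (primary_key))
        return existing;                   // reuse a suitable component
      auto created = std::make_shared<ImageXferComp> (primary_key);
      instances_[primary_key] = created;   // otherwise create & register it
      return created;
    }
  };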
Factory/Finder Pattern Dynamics
• The client asks the home to find a component by primary key, e.g.,
  find(“ImageXYZ”); the home looks up the primary key &, if necessary,
  creates the component; the client then invokes operations on it
• The Factory/Finder pattern is supported by distributed component
  models
  • e.g., EJB, COM+, & the CCM
• Homes enable the creation & location of components, but we still need
  a global naming service to locate the homes
  • [Figure: a directory of nodes & bindings (resolve, listNodes,
    navigate, newBinding, newSubDir, remove, getName, getObject)
    provides this lookup]
Applying the Factory/Finder Pattern to Image Acquisition
• We can apply the Factory/Finder pattern to create/locate image
  transfer components for images needed by radiologists
• If a suitable component already exists the component home will use
  it, otherwise it will create a new component
[Figure: 1. Deploy; 2. Bind Factory; 3. Find Factory via the Naming
Service; 4. Intercept & delegate; 5. New; 6. Find Image. The Radiology
Client uses the factory proxy & the ImageXferHome to obtain an
ImageXferComp connected to the Image Database]
Pros & Cons of the Factory/Finder Pattern
This pattern has three benefits:
• Separation of concerns
  • Finding/creating individual components is decoupled from locating
    the factories that find/create these components
• Improved scalability
  • e.g., general-purpose directory mechanisms need not manage the
    creation & location of large amounts of finer-grained components
    whose lifetimes may be short
• Customized capabilities
  • The location/creation mechanism can be specialized to support key
    capabilities that are unique for various types of components
This pattern also has some liabilities:
• Overhead due to indirection
  • Clients must incur the overhead of several round-trips to obtain
    the appropriate object reference
• Complexity & cost for development & deployment
  • There are more steps involved in obtaining object references, which
    can complicate client programming
Extending Components Transparently
Context
• Component developers may not know a priori where their components
  will execute
• Thus, containers are introduced to:
  • Shield clients & components from the details of the underlying
    middleware, services, network & OS
  • Manage the lifecycle of components & notify components about
    lifecycle events
    • e.g., activation, passivation, & transaction progress
  • Provide components uniform access to infrastructure services
    • e.g., transactions, security, & persistence
  • Register & deploy components
[Figure: clients invoke server components through a container, which
supplies transactions, security, resources, etc., via declarative or
imperative programming]
Extending Components Transparently (cont‘d)
Problem
• Components should be able to specify declaratively in configuration
  files which execution environment they require
• Containers then should provide the right execution environment
  • e.g., by creating a new transaction or new servant when required
Solution
• Apply the Interceptor architectural pattern to attach interceptors to
  a framework that can handle particular events by invoking associated
  interceptors automatically
Structure & participants:
• Framework represents the concrete framework to which we attach
  interceptors
• Concrete Interceptors implement the event handler for the
  system-specific events they have subscribed for
• Context contains information about the event & allows modification of
  system behavior after interceptor completion
• The Dispatcher allows applications to register & remove interceptors
  with the framework & to delegate events to interceptors
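
A minimal C++ sketch of these participants follows (illustrative names;
the “framework” is reduced to whoever calls delegate()):

  #include <cstdio>
  #include <string>
  #include <vector>

  struct Context { std::string event; };  // info about the intercepted event

  class AbstractInterceptor {
  public:
    virtual ~AbstractInterceptor () {}
    virtual void handle_event (Context &ctx) = 0;
  };

  class Dispatcher {
    std::vector<AbstractInterceptor *> interceptors_;
  public:
    void attach (AbstractInterceptor *i) { interceptors_.push_back (i); }
    void delegate (Context &ctx) {        // called by the framework per event
      for (auto *i : interceptors_) i->handle_event (ctx);
    }
  };

  class SecurityInterceptor : public AbstractInterceptor {  // e.g., auth check
  public:
    void handle_event (Context &ctx) {
      std::printf ("checking %s\n", ctx.event.c_str ());
    }
  };

  int main () {
    Dispatcher dispatcher;                // owned by the concrete framework
    SecurityInterceptor security;
    dispatcher.attach (&security);
    Context ctx{"incoming get_image request"};
    dispatcher.delegate (ctx);            // framework intercepts the request
  }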
Interceptor Pattern Dynamics
• Interceptors are a “meta-programming mechanism”
• Other meta-programming mechanisms include
  • Smart proxies
  • Pluggable protocols
  • Gateways/bridges
  • Interface repositories & DII
• These mechanisms provide building-blocks to handle variation
  translucently & reflectively
• More information on meta-programming mechanisms can be found at
  www.cs.wustl.edu/~schmidt/PDF/IEEE.pdf
Dynamics: the application creates an interceptor & attaches it via the
dispatcher, which places the interceptor in an internal interceptor
map; when the framework’s event loop (run_event_loop) detects an event,
it creates a context & delegates to the dispatcher, which looks for
registered interceptors & invokes handle_event(context) on each
• Interception can also enable performance enhancement strategies
  • e.g., just-in-time activation, object pooling, & caching
Applying the Interceptor Pattern to Image Acquisition
• A container provides generic interfaces to a component that it can
  use to access container functionality
  • e.g., transaction control, persistence, security, etc.
• A container intercepts all incoming requests from clients
  • It reads the component’s requirements from an XML configuration
    file & does some pre-processing before actually delegating the
    request to the component
• A component provides event interfaces the container invokes
  automatically when particular events occur
  • e.g., activation or passivation
[Figure: the image server framework attaches the container as a
concrete interceptor via the dispatcher; the container consults the XML
config before delegating to the component]
Pros & Cons of the Interceptor Pattern
This pattern has five benefits:
• Extensibility & flexibility
  • Interceptors allow an application to evolve without breaking
    existing APIs & components
• Separation of concerns
  • Interceptors decouple the “functional” path from the “meta” path
• Support for monitoring & control of frameworks
  • e.g., generic logging mechanisms can be used to unobtrusively track
    application behavior
• Layer symmetry
  • Interceptors can perform transformations on the client-side whose
    inverses are performed on the server-side
• Reusability
  • Interceptors can be reused for various general-purpose behaviors
This pattern also has liabilities:
• Complex design issues
  • Determining interceptor APIs & semantics is non-trivial
• Malicious or erroneous interceptors
  • Mis-behaving interceptors can wreak havoc on application stability
• Potential interception cascades
  • Interceptors can result in infinite recursion
Minimizing Resource Utilization
Context
• Image servers are simply one of many services running throughout a
  distributed electronic medical imaging system
Problem
• It may not be feasible to have all image server implementations
  running all the time since this ties up end-system resources
  unnecessarily
Solution
• Apply the Activator pattern to spawn servers on-demand in order to
  minimize end-system resource utilization
Structure & participants:
• When incoming requests arrive, the Activator looks up whether a
  target object is already active &, if the object is not running,
  activates the implementation
• The Activation Table stores associations between services & their
  physical location
• The Client uses the Activator to get service access
• A Service implements a specific type of functionality that it
  provides to clients
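
A minimal C++ sketch of the activation-table lookup follows. A real
activator such as inetd or a CORBA Implementation Repository spawns a
server process & records its port; that step is reduced here to
constructing the implementation in-process (names are illustrative):

  #include <map>
  #include <memory>
  #include <string>

  class Service { public: virtual ~Service () {} };
  class ImageXferService : public Service {};

  class Activator {
    struct Entry {
      bool active = false;
      std::unique_ptr<Service> impl;
    };
    std::map<std::string, Entry> activation_table_;  // service -> state
  public:
    Service *get_service (const std::string &name) {
      Entry &e = activation_table_[name];
      if (!e.active) {                       // not running: activate it
        e.impl.reset (new ImageXferService); // real version spawns a process
        e.active = true;
      }
      return e.impl.get ();
    }
  };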
Activator Pattern Dynamics
• The client calls getService on the Activator, which looks up the
  entry in the activation table; if the service is not active, the
  Activator activates the implementation & updates the table entry
  (changeEntry) with the resulting object & port
• The client then invokes the service directly
• On shutdown (onShutdown), the Activator updates the table entry again
• A container can be used to activate & passivate a component
• A component can be activated/passivated by itself, the container,
  after each method call, after each transaction, etc.
Applying the Activator Pattern to Image Acquisition
• We can use the Activator pattern to launch image transfer servers
  on-demand
• The Activator pattern is available in various COTS technologies:
  • UNIX Inetd “super server”
  • CORBA Implementation Repository
Example: a CORBA Implementation Repository (ImR)
1. The client sends some_request to
   iiop://ringil:5000/poa_name/object_name, i.e., to the ImR at
   ringil:5000, whose activation table maps poa_name → server.exe at
   ringil:5500 & airplane_poa → plane.exe at ringil:4500
2. The ImR pings the server (2.1: starts it if necessary)
3. The ImR confirms the server is running (is_running)
4. The ImR replies with LOCATION_FORWARD to
   iiop://ringil:5500/poa_name/object_name
5. The client re-sends some_request to the server at ringil:5500
6. The server returns some_response
Pros & Cons of the Activator Pattern
This pattern has three benefits:
• Uniformity
  • By imposing a uniform activation interface to spawn & control
    servers
• Modularity, testability, & reusability
  • Application modularity & reusability is improved by decoupling
    server implementations from the manner in which the servers are
    activated
• More effective resource utilization
  • Servers can be spawned “on-demand,” thereby minimizing resource
    utilization until clients actually require them
This pattern also has liabilities:
• Lack of determinism & ordering dependencies
  • This pattern makes it hard to determine or analyze the behavior of
    an application until its components are activated at run-time
• Reduced security or reliability
  • An application that uses the Activator pattern may be less secure
    or reliable than an equivalent statically-configured application
• Increased run-time overhead & infrastructure complexity
  • By adding levels of abstraction & indirection when activating &
    executing components
Enhancing Server (Re)Configurability
Context
• The implementation of certain image server components depends on a
  variety of factors:
  • Certain factors are static, such as the number of available CPUs &
    operating system support for asynchronous I/O
  • Other factors are dynamic, such as system workload
Problem
• Prematurely committing to a particular image server component
  configuration is inflexible and inefficient:
  • No single image server configuration is optimal for all use cases
  • Certain design decisions cannot be made efficiently until run-time
Solution
• Apply the Component Configurator design pattern to enhance server
  configurability
• This pattern allows an application to link & unlink its component
  implementations at run-time, so new & enhanced services can be added
  without having to modify, recompile, statically relink, or shut down
  & restart a running application
[Structure: a Component Repository <<contains>> Components, each with
init(), fini(), suspend(), resume(), & info() hooks; the Component
Configurator manages concrete components A & B]
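
The pattern’s core step, linking a component implementation at
run-time, can be sketched with the POSIX dynamic-linking API
(illustrative; error checks abbreviated, link with -ldl):

  #include <dlfcn.h>

  class Component {
  public:
    virtual ~Component () {}
    virtual int init (int argc, char *argv[]) = 0;  // lifecycle hooks from
    virtual int fini () = 0;                        // the structure above
  };

  typedef Component *(*Factory) ();

  Component *link_component (const char *dll_path, const char *factory_name) {
    void *dll = ::dlopen (dll_path, RTLD_NOW);  // link implementation now
    if (dll == 0) return 0;
    Factory make = (Factory) ::dlsym (dll, factory_name);
    return make ? make () : 0;                  // e.g., "make_File_Cache"
  }

The suspend()/resume() hooks & the repository bookkeeping shown in the
pattern’s structure sit on top of this basic mechanism.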
Component Configurator Pattern Dynamics
1. Component initialization: the Component Configurator calls init() on
   Concrete Component A & inserts it into the Component Repository,
   then does the same for Concrete Component B
2. Component processing: the configurator runs the components
   (run_component())
3. Component termination: the configurator calls fini() on each
   concrete component & removes it from the repository
Applying the Component Configurator Pattern to Image Acquisition
Image servers can use the Component Configurator pattern to dynamically
optimize, control, & reconfigure the behavior of their components at
installation-time or during run-time
• For example, an image server can apply the Component Configurator
  pattern to configure various Cached Virtual Filesystem strategies
  • e.g., least-recently used (LRU) or least-frequently used (LFU) file
    caches
Concrete components can be packaged into a suitable unit of
configuration, such as a dynamically linked library (DLL)
• Only the components that are currently in use need to be configured
  into an image server
Reconfiguring an Image Server
Image servers can also be reconfigured dynamically to support new
components & new component implementations
[Reconfiguration state chart: IDLE -> CONFIGURE/init() -> RUNNING;
RUNNING -> EXECUTE/run_component() -> RUNNING; RUNNING ->
SUSPEND/suspend() -> SUSPENDED; SUSPENDED -> RESUME/resume() ->
RUNNING; SUSPENDED -> RECONFIGURE/init() -> RUNNING; RUNNING or
SUSPENDED -> TERMINATE/fini() -> IDLE]

Initial configuration (image server + LRU file cache):

  # Configure an image server.
  dynamic File_Cache Component *
    img_server.dll:make_File_Cache() "-t LRU"

After reconfiguration (image server + LFU file cache):

  # Reconfigure an image server.
  Remove File_Cache
  dynamic File_Cache Component *
    img_server.dll:make_File_Cache() "-t LFU"
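
The “dynamic ... img_server.dll:make_File_Cache()” directives above
name a factory entry point inside the DLL. A sketch of what such an
entry point could look like follows; it assumes the Component base & a
File_Cache implementation as in the earlier sketch, & is modeled on the
directive rather than on any particular framework’s macros:

  // Inside img_server.dll. extern "C" keeps the symbol unmangled so the
  // configurator's dlsym()/GetProcAddress() lookup can find it by name.
  extern "C" Component *make_File_Cache () {
    return new File_Cache;  // init() later parses "-t LRU" / "-t LFU"
  }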
Pros and Cons of the Component Configurator Pattern
This pattern offers four benefits:
• Uniformity
  • By imposing a uniform configuration & control interface to manage
    components
• Centralized administration
  • By grouping one or more components into a single administrative
    unit that simplifies development by centralizing common component
    initialization & termination activities
• Modularity, testability, & reusability
  • Application modularity & reusability is improved by decoupling
    component implementations from the manner in which the components
    are configured into processes
• Configuration dynamism & control
  • By enabling a component to be dynamically reconfigured without
    modifying, recompiling, statically relinking existing code & without
    restarting the component or other active components with which it
    is collocated
This pattern also incurs liabilities:
• Lack of determinism & ordering dependencies
  • This pattern makes it hard to determine or analyze the behavior of
    an application until its components are configured at run-time
• Reduced security or reliability
  • An application that uses the Component Configurator pattern may be
    less secure or reliable than an equivalent statically-configured
    application
• Increased run-time overhead & infrastructure complexity
  • By adding levels of abstraction & indirection when executing
    components
• Overly narrow common interfaces
  • The initialization or termination of a component may be too
    complicated or too tightly coupled with its context to be performed
    in a uniform manner
Tutorial Example 2:
High-performance Content Delivery Servers
[Figure: an HTTP client (GUI, Requester, Graphics Adapter) sends "GET /index.html HTTP/1.0" to the HTTP server at www.posa.uci.edu, which replies "<H1>POSA page</H1>..."; the server comprises an HTML Parser, File Cache, Protocol Handlers, & Event Dispatcher atop the OS kernel & TCP/IP network protocols; the transfer protocol is, e.g., HTTP 1.0]
Goal
• Download content scalably & efficiently
• e.g., images & other multi-media content types
Key System Characteristics
• Robust implementation
• e.g., stop malicious clients
• Extensible to other protocols
• e.g., HTTP 1.1, IIOP, DICOM
• Leverage advanced multi-processor hardware & software
Key Solution Characteristics
• Support many content delivery server design alternatives seamlessly
• e.g., different concurrency & event models
• Design is guided by patterns to leverage time-proven solutions
• Implementation is based on ACE framework components to reduce effort & amortize prior effort
• Open-source to control costs & to leverage technology advances
71
JAWS Content Server Framework
Key Sources of Variation
• Concurrency models
• e.g., thread pool vs. thread-per-request
• Event demultiplexing models
• e.g., sync vs. async
• File caching models
• e.g., LRU vs. LFU
• Content delivery protocols
• e.g., HTTP 1.0+1.1, HTTP-NG, IIOP, DICOM
Event Dispatcher
• Accepts client connection request events, receives HTTP GET requests, & coordinates JAWS's event demultiplexing strategy with its concurrency strategy.
• As events are processed they are dispatched to the appropriate Protocol Handler.
Protocol Handler
• Performs parsing & protocol processing of HTTP request events.
• JAWS Protocol Handler design allows multiple Web protocols, such as HTTP/1.0, HTTP/1.1, & HTTP-NG, to be incorporated into a Web server.
• To add a new protocol, developers just write a new Protocol Handler component & configure it into the JAWS framework.
Cached Virtual Filesystem
• Improves Web server performance by reducing the overhead of file system accesses when processing HTTP GET requests.
• Various caching strategies, such as least-recently used (LRU) or least-frequently used (LFU), can be selected according to the actual or anticipated workload & configured statically or dynamically.
72
Applying Patterns to Resolve Key JAWS Design Challenges
[Figure: the JAWS framework annotated with the patterns applied to each component, e.g., Double-checked Locking Optimization & Thread-specific Storage]
Patterns help resolve the following common design challenges:
•Encapsulating low-level OS APIs
•Decoupling event demultiplexing & connection management from protocol processing
•Scaling up performance via threading
•Implementing a synchronized request queue
•Minimizing server threading overhead
•Using asynchronous I/O effectively
•Efficiently demuxing asynchronous operations & completions
•Transparently parameterizing synchronization into components
•Ensuring locks are released properly
•Minimizing unnecessary locking
•Synchronizing singletons correctly
•Logging access statistics efficiently
73
Encapsulating Low-level OS APIs
Context
• A Web server must manage a variety of OS services, including processes, threads, Socket connections, virtual memory, & files. Most operating systems provide low-level APIs written in C to access these services
Problem
• The diversity of hardware & operating systems makes it hard to build portable & robust Web server software by programming directly to low-level operating system APIs, which are tedious, error-prone, & non-portable
Solution
• Apply the Wrapper Facade design pattern to avoid accessing low-level operating system APIs directly
[Class diagram: an Application calls methods on a Wrapper Facade (data, method1() … methodN()), whose methods call API FunctionA(), API FunctionB(), & API FunctionC(), e.g., void method1() { functionA(); functionB(); } void methodN() { functionA(); }]
The Wrapper Facade pattern encapsulates data & functions provided by existing non-OO APIs within more concise, robust, portable, maintainable, & cohesive OO class interfaces
[Sequence diagram: the Application invokes method() on the Wrapper Facade, which calls functionA() on APIFunctionA & functionB() on APIFunctionB]
74
Applying the Wrapper Façade Pattern in JAWS
JAWS uses the wrapper facades defined by ACE to ensure its framework components can run on many operating systems, including Windows, UNIX, & many real-time operating systems
For example, JAWS uses the Thread_Mutex wrapper facade in ACE to provide a portable interface to operating system mutual exclusion mechanisms
[Class diagram: JAWS calls acquire(), tryacquire(), & release() on Thread_Mutex, whose methods call mutex_lock(), mutex_trylock(), & mutex_unlock(), e.g., void acquire() { mutex_lock(mutex); } void release() { mutex_unlock(mutex); }]
The Thread_Mutex wrapper in the diagram is implemented using the Solaris thread API
The ACE Thread_Mutex wrapper facade is also available for other threading APIs, e.g., pSoS, VxWorks, Win32 threads or POSIX Pthreads
Other ACE wrapper facades used in JAWS encapsulate Sockets, process & thread management, memory-mapped files, explicit dynamic linking, & time operations
www.cs.wustl.edu/~schmidt/ACE/
75
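As an illustration, here is a minimal sketch of such a mutex wrapper facade layered over POSIX Pthreads rather than the Solaris API shown in the diagram; the method names mirror ACE's Thread_Mutex, but error handling & the portability machinery of the real ACE class are omitted.

#include <pthread.h>

// Wrapper facade encapsulating the POSIX Pthreads mutex API
// within a concise, type-safe C++ class interface.
class Thread_Mutex {
public:
  Thread_Mutex ()  { pthread_mutex_init (&mutex_, 0); }
  ~Thread_Mutex () { pthread_mutex_destroy (&mutex_); }

  void acquire ()    { pthread_mutex_lock (&mutex_); }
  bool tryacquire () { return pthread_mutex_trylock (&mutex_) == 0; }
  void release ()    { pthread_mutex_unlock (&mutex_); }

private:
  pthread_mutex_t mutex_;

  // Disallow copying, which is meaningless for a mutex.
  Thread_Mutex (const Thread_Mutex &);
  void operator= (const Thread_Mutex &);
};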
Pros and Cons of the Wrapper Façade Pattern
This pattern provides three benefits:
•Concise, cohesive, & robust higher-level object-oriented programming interfaces
• These interfaces reduce the tedium & increase the type-safety of developing applications, which decreases certain types of programming errors
•Portability & maintainability
• Wrapper facades can shield application developers from non-portable aspects of lower-level APIs
•Modularity, reusability & configurability
• This pattern creates cohesive & reusable class components that can be 'plugged' into other components in a wholesale fashion, using object-oriented language features like inheritance & parameterized types
This pattern can incur liabilities:
•Loss of functionality
• Whenever an abstraction is layered on top of an existing abstraction it is possible to lose functionality
•Performance degradation
• This pattern can degrade performance if several forwarding function calls are made per method
•Programming language & compiler limitations
• It may be hard to define wrapper facades for certain languages due to a lack of language support or limitations with compilers
76
Decoupling Event Demuxing & Connection
Management from Protocol Processing
Context
• Web servers can be accessed simultaneously by multiple clients
• They must demux & process multiple types of indication events arriving from clients concurrently
• A common way to demux events in a server is to use select()
[Figure: the Web server's Event Dispatcher uses select() on socket handles to demux HTTP GET & connect requests from multiple clients]
Problem
• Developers often couple event-demuxing & connection code with protocol-handling code
• This code cannot then be reused directly by other protocols or by other middleware & applications
• Thus, changes to event-demuxing & connection code affect the server protocol code directly & may yield subtle bugs
• e.g., porting it to use TLI or WaitForMultipleObjects()
Solution
Apply the Reactor architectural pattern & the Acceptor-Connector design pattern to separate the generic event-demultiplexing & connection-management code from the web server's protocol code
77
The Reactor Pattern
The Reactor architectural pattern allows event-driven applications to demultiplex & dispatch service requests that are delivered to an application from one or more clients.
[Class diagram: the Reactor (handle_events(), register_handler(), remove_handler()) owns a handle set & dispatches to Event Handlers (handle_event(), get_handle()); the Synchronous Event Demuxer (select()) notifies the Reactor; Concrete Event Handlers A & B specialize Event Handler]
[Sequence diagram (Reactor dynamics):
1. Initialization phase: the main program creates a Concrete Event Handler & register_handler()s it with the Reactor, which obtains its Handle via get_handle()
2. Event handling phase: handle_events() invokes select() on the Synchronous Event Demultiplexer; when an event arrives on a Handle, the Reactor dispatches handle_event() on the associated handler, which performs its service()]
Observations
• Note inversion of control
• Also note how long-running event handlers can degrade the QoS since callbacks steal the reactor's thread!
78
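For illustration, here is a bare-bones sketch of a select()-based reactor following the participants above; it handles a single read-style event type, omits error handling, & collects the ready handlers before dispatching so callbacks may register or remove handlers. It shows the structure of the callback dispatching, not the full ACE_Reactor.

#include <sys/select.h>
#include <map>
#include <vector>
#include <cstddef>

// Hook interface implemented by concrete event handlers.
class Event_Handler {
public:
  virtual ~Event_Handler () {}
  virtual int get_handle () const = 0;
  virtual void handle_event () = 0;  // called back by the reactor
};

class Reactor {
public:
  void register_handler (Event_Handler *eh)
  { handlers_[eh->get_handle ()] = eh; }

  void remove_handler (Event_Handler *eh)
  { handlers_.erase (eh->get_handle ()); }

  // Block in the synchronous event demuxer, then dispatch
  // handle_event() on each handler whose handle is ready.
  void handle_events () {
    fd_set ready;
    FD_ZERO (&ready);
    int max_handle = 0;
    std::map<int, Event_Handler *>::iterator i;
    for (i = handlers_.begin (); i != handlers_.end (); ++i) {
      FD_SET (i->first, &ready);
      if (i->first > max_handle) max_handle = i->first;
    }
    select (max_handle + 1, &ready, 0, 0, 0);  // synchronous demuxing
    std::vector<Event_Handler *> dispatch_list;
    for (i = handlers_.begin (); i != handlers_.end (); ++i)
      if (FD_ISSET (i->first, &ready))
        dispatch_list.push_back (i->second);
    for (std::size_t j = 0; j < dispatch_list.size (); ++j)
      dispatch_list[j]->handle_event ();       // inversion of control
  }

private:
  std::map<int, Event_Handler *> handlers_;    // the handle set
};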
The Acceptor-Connector Pattern
The Acceptor-Connector design pattern decouples the connection &
initialization of cooperating peer services in a networked system from the
processing performed by the peer services after being connected & initialized.
[Class diagram: a Dispatcher (select(), handle_events(), register_handler(), remove_handler()) notifies the Connector, Service Handlers, & Acceptor, each of which uses or owns a Transport Handle; the Connector (connect(), complete()) & the Acceptor (Accept()) <<create>> & <<activate>> Service Handlers (peer_stream_, open(), handle_event(), set_handle()); Concrete Connector, Concrete Service Handlers A & B, & Concrete Acceptor specialize these participants]
79
Acceptor Dynamics
[Sequence diagram: Application, Acceptor, Dispatcher, & Service Handler]
1. Passive-mode endpoint initialize phase: the application open()s the Acceptor, which register_handler()s its Handle for ACCEPT events with the Dispatcher, which then runs handle_events()
2. Service handler initialize phase: on an accept() the Acceptor creates a Service Handler, open()s it with the new data-mode Handle, & register_handler()s it with the Dispatcher
3. Service processing phase: the Dispatcher dispatches handle_event() on the Service Handler, which performs its service()
• The Acceptor ensures that passive-mode transport endpoints aren't used to read/write data accidentally
• And vice versa for data transport endpoints…
• There is typically one Acceptor factory per-service/per-port
• Additional demuxing can be done at higher layers, a la CORBA
80
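The sketch below layers an acceptor & service handler on the Event_Handler/Reactor interfaces sketched earlier to show these dynamics in code; socket setup is elided & the classes are illustrative simplifications of the ACE Acceptor-Connector framework, not its actual implementation.

#include <sys/socket.h>  // accept()

// Processes data events on a connected (data-mode) endpoint.
class Service_Handler : public Event_Handler {
public:
  Service_Handler (int handle): handle_ (handle) {}
  int get_handle () const { return handle_; }

  // Called back when the data-mode handle is ready to read.
  void handle_event () { /* read the request & perform service() */ }

private:
  int handle_;  // data-mode transport endpoint
};

// Owns the passive-mode endpoint; creates & activates one
// service handler per accepted connection.
class Acceptor : public Event_Handler {
public:
  Acceptor (int listen_handle, Reactor *reactor)
    : handle_ (listen_handle), reactor_ (reactor)
  { reactor_->register_handler (this); }

  int get_handle () const { return handle_; }

  // Called back on an ACCEPT event.
  void handle_event () {
    int data_handle = accept (handle_, 0, 0);
    reactor_->register_handler (new Service_Handler (data_handle));
  }

private:
  int handle_;        // passive-mode transport endpoint
  Reactor *reactor_;
};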
Synchronous Connector Dynamics
Motivation for Synchrony
• If connection latency is negligible
• e.g., connecting with a server on the same host via a 'loopback' device
• If multiple threads of control are available & it is efficient to use a thread-per-connection to connect each service handler synchronously
• If the services must be initialized in a fixed order & the client can't perform useful work until all connections are established
[Sequence diagram: Application, Connector, Service Handler, & Dispatcher]
1. Sync connection initiation phase: the application calls connect() on the Connector, passing the Service Handler & its Addr; the Connector obtains the Handle via get_handle()
2. Service handler initialize phase: the Connector open()s the Service Handler, which register_handler()s its Handle for events with the Dispatcher
3. Service processing phase: handle_events() dispatches handle_event() on the Service Handler, which performs its service()
81
Asynchronous Connector Dynamics
Motivation for Asynchrony
• If client is establishing connections over high latency links
• If client is a single-threaded application
• If client is initializing many peers that can be connected in an arbitrary order
[Sequence diagram: Application, Connector, Service Handler, & Dispatcher]
1. Async connection initiation phase: the application calls connect(); the Connector register_handler()s itself with the Dispatcher for CONNECT events on the Handle
2. Service handler initialize phase: when the Dispatcher notifies the Connector that the connection is complete(), the Connector open()s the Service Handler, which register_handler()s its Handle for events
3. Service processing phase: handle_events() dispatches handle_event() on the Service Handler, which performs its service()
82
Applying the Reactor and Acceptor-
Connector Patterns in JAWS
The Reactor architectural pattern decouples:
1. JAWS generic synchronous event demultiplexing & dispatching logic from
2. The HTTP protocol processing it performs in response to events
[Class diagram: the Reactor dispatches to Event Handlers; the HTTP Acceptor & HTTP Handler are JAWS's concrete event handlers; the Synchronous Event Demuxer (select()) notifies the Reactor]
The Acceptor-Connector design pattern can use a Reactor as its Dispatcher in order to help decouple:
1. The connection & initialization of peer client & server HTTP services from
2. The processing activities performed by these peer services once they are connected & initialized
83
Reactive Connection Management & Data Transfer in JAWS
[Sequence diagram omitted: reactive connection management & data transfer in JAWS]
84
Pros and Cons of the Reactor Pattern
This pattern offers four benefits:
•Separation of concerns
• This pattern decouples application-independent demuxing & dispatching mechanisms from application-specific hook method functionality
•Modularity, reusability, & configurability
• This pattern separates event-driven application functionality into several components, which enables the configuration of event handler components that are loosely integrated via a reactor
•Portability
• By decoupling the reactor's interface from the lower-level OS synchronous event demuxing functions used in its implementation, the Reactor pattern improves portability
•Coarse-grained concurrency control
• This pattern serializes the invocation of event handlers at the level of event demuxing & dispatching within an application process or thread
This pattern can incur liabilities:
•Restricted applicability
• This pattern can be applied efficiently only if the OS supports synchronous event demuxing on handle sets
•Non-pre-emptive
• In a single-threaded application, concrete event handlers that borrow the thread of their reactor can run to completion & prevent the reactor from dispatching other event handlers
•Complexity of debugging & testing
• It is hard to debug applications structured using this pattern due to its inverted flow of control, which oscillates between the framework infrastructure & the method call-backs on application-specific event handlers
85
Pros and Cons of the Acceptor-
Connector Pattern
This pattern provides three benefits:
•Reusability, portability, & extensibility
• This pattern decouples mechanisms for connecting & initializing service handlers from the service processing performed after service handlers are connected & initialized
•Robustness
• This pattern strongly decouples the service handler from the acceptor, which ensures that a passive-mode transport endpoint can't be used to read or write data accidentally
•Efficiency
• This pattern can establish connections actively with many hosts asynchronously & efficiently over long-latency wide area networks
• Asynchrony is important in this situation because a large networked system may have hundreds or thousands of hosts that must be connected
This pattern also has liabilities:
•Additional indirection
• The Acceptor-Connector pattern can incur additional indirection compared to using the underlying network programming interfaces directly
•Additional complexity
• The Acceptor-Connector pattern may add unnecessary complexity for simple client applications that connect with only one server & perform one service using a single network programming interface
86
Scaling Up Performance via Threading
Context
• HTTP runs over TCP, which uses flow control to ensure that senders do not produce data more rapidly than slow receivers or congested networks can buffer and process
• Since achieving efficient end-to-end quality of service (QoS) is important to handle heavy Web traffic loads, a Web server must scale up efficiently as its number of clients increases
Problem
• Processing all HTTP GET requests reactively within a single-threaded process does not scale up, because each server CPU time-slice spends much of its time blocked waiting for I/O operations to complete
• Similarly, to improve QoS for all its connected clients, an entire Web server process must not block while waiting for connection flow control to abate so it can finish sending a file to a client
Solution
• Apply the Half-Sync/Half-Async architectural pattern to scale up server performance by processing different HTTP requests concurrently in multiple threads
This solution yields two benefits:
1. Threads can be mapped to separate CPUs to scale up server performance via multi-processing
2. Each thread blocks independently, which prevents a flow-controlled connection from degrading the QoS other clients receive
87
The Half-Sync/Half-Async Pattern
The Half-Sync/Half-Async architectural pattern decouples async & sync service processing in concurrent systems, to simplify programming without unduly reducing performance
[Layered diagram: Sync Services 1–3 in the Sync Service Layer <<read/write>> a Queue in the Queueing Layer; the Async Service in the Async Service Layer <<dequeue/enqueue>>s on the Queue & is <<interrupt>>ed by an External Event Source]
• This pattern defines two service processing layers—one async & one sync—along with a queueing layer that allows services to exchange messages between the two layers
• The pattern allows sync services, such as HTTP protocol processing, to run concurrently, relative both to each other & to async services, such as event demultiplexing
[Sequence diagram: the External Event Source sends a notification to the Async Service, which read()s & work()s on the message, then enqueue()s it on the Queue; the Queue notifies a Sync Service, which read()s the message & work()s on it]
88
Applying the Half-Sync/Half-Async
Pattern in JAWS
[Layered diagram: Worker Threads 1–3 in the Synchronous Service Layer <<get>> requests from the Request Queue in the Queueing Layer; the HTTP Handlers & HTTP Acceptor <<put>> requests; the Reactor in the Asynchronous Service Layer demuxes <<ready to read>> socket event sources]
• JAWS uses the Half-Sync/Half-Async pattern to process HTTP GET requests synchronously from multiple clients, but concurrently in separate threads
• The worker thread that removes the request synchronously performs HTTP protocol processing & then transfers the file back to the client
• If flow control occurs on its client connection this thread can block without degrading the QoS experienced by clients serviced by other worker threads in the pool
89
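The two halves can be sketched in code as follows. The Request_Queue interface (put()/get()) follows the JAWS description above, while Request, parse_http_get(), process_http_request(), & transfer_file() are hypothetical helpers introduced only to illustrate the flow.

struct Request;                          // opaque request record
Request *parse_http_get (int handle);    // hypothetical helpers
void process_http_request (Request *request);
void transfer_file (Request *request);

// Queueing layer shared by the two halves; put()/get() are
// synchronized internally (see the Monitor Object pattern below).
class Request_Queue {
public:
  void put (Request *request);  // called from the Reactor thread
  Request *get ();              // blocks worker threads while empty
};

// Async half (runs in the Reactor thread): an HTTP handler's
// callback parses the request & enqueues it.
void enqueue_request (Request_Queue *queue, int handle) {
  queue->put (parse_http_get (handle));  // <<put>>
}

// Sync half: each worker thread runs this loop & blocks
// independently, so flow control on one connection doesn't
// degrade the QoS other clients receive.
void *worker_thread (void *arg) {
  Request_Queue *queue = (Request_Queue *) arg;
  for (;;) {
    Request *request = queue->get ();    // <<get>>
    process_http_request (request);      // synchronous HTTP processing
    transfer_file (request);             // may block on flow control
  }
  return 0;
}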
Pros & Cons of the
Half-Sync/Half-Async Pattern
This pattern has three benefits:
•Simplification & performance
• The programming of higher-level synchronous processing services is simplified without degrading the performance of lower-level system services
•Separation of concerns
• Synchronization policies in each layer are decoupled so that each layer need not use the same concurrency control strategies
•Centralization of inter-layer communication
• Inter-layer communication is centralized at a single access point, because all interaction is mediated by the queueing layer
This pattern also incurs liabilities:
•A boundary-crossing penalty may be incurred
• This overhead arises from context switching, synchronization, & data copying overhead when data is transferred between the sync & async service layers via the queueing layer
•Higher-level application services may not benefit from the efficiency of async I/O
• Depending on the design of operating system or application framework interfaces, it may not be possible for higher-level services to use low-level async I/O devices effectively
•Complexity of debugging & testing
• Applications written with this pattern can be hard to debug due to its concurrent execution
90
Implementing a Synchronized Request Queue
Context
• The Half-Sync/Half-Async pattern contains a queue
• The JAWS Reactor thread is a 'producer' that inserts HTTP GET requests into the queue
• Worker pool threads are 'consumers' that remove & process queued requests
[Figure: Worker Threads 1–3 <<get>> requests from the Request Queue; the HTTP Handlers & HTTP Acceptor running in the Reactor <<put>> requests]
Problem
• A naive implementation of a request queue will incur race conditions or 'busy waiting' when multiple threads insert & remove requests
• e.g., multiple concurrent producer & consumer threads can corrupt the queue's internal state if it is not synchronized properly
• Similarly, these threads will 'busy wait' when the queue is empty or full, which wastes CPU cycles unnecessarily
91
The Monitor Object Pattern
Solution
• Apply the Monitor Object design pattern to synchronize the queue efficiently & conveniently
• This pattern synchronizes concurrent method execution to ensure that only one method at a time runs within an object
• It also allows an object's methods to cooperatively schedule their execution sequences
[Class diagram: Clients (2..*) invoke sync_method1() … sync_methodN() on the Monitor Object, which uses a Monitor Lock (acquire(), release()) & Monitor Conditions (wait(), notify(), notify_all())]
• It's instructive to compare Monitor Object pattern solutions with Active Object pattern solutions
• The key tradeoff is efficiency vs. flexibility
92
Monitor Object Pattern Dynamics
[Sequence diagram: Client Threads 1 & 2, Monitor Object, Monitor Lock, & Monitor Condition]
1. Synchronized method invocation & serialization: Client Thread1 calls sync_method1(), which acquire()s the monitor lock & dowork()s
2. Synchronized method thread suspension: if Thread1 must wait() on a monitor condition, the monitor lock is atomically released & the OS thread scheduler automatically suspends the client thread; Client Thread2 can then call sync_method2() & acquire() the lock
3. Monitor condition notification: Thread2 dowork()s, notify()s the condition, & release()s the lock
4. Synchronized method thread resumption: the OS thread scheduler automatically resumes the suspended client thread & atomically reacquires the monitor lock; Thread1 dowork()s & release()s the lock
93
Applying the Monitor Object Pattern in JAWS
The JAWS synchronized request queue implements the queue's not-empty and not-full monitor conditions via a pair of ACE wrapper facades for POSIX-style condition variables
[Class diagram: the HTTP Handler <<put>>s & a Worker Thread <<get>>s requests from the Request Queue (put(), get()), which uses two Thread Conditions (wait(), notify(), notify_all()) & a Thread_Mutex (acquire(), release())]
• When a worker thread attempts to dequeue an HTTP GET request from an empty queue, the request queue's get() method atomically releases the monitor lock & the worker thread suspends itself on the not-empty monitor condition
• The thread remains suspended until the queue is no longer empty, which happens when an HTTP_Handler running in the Reactor thread inserts a request into the queue
94
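Below is a minimal sketch of such a synchronized queue, reusing the Thread_Mutex & Guard classes shown elsewhere in this tutorial plus a hypothetical Thread_Condition wrapper modeled on the ACE facade, where wait() atomically releases the associated mutex & reacquires it before returning, like pthread_cond_wait().

#include <deque>
#include <cstddef>

template <class T>
class Request_Queue {
public:
  Request_Queue (std::size_t max_size)
    : max_size_ (max_size), not_empty_ (lock_), not_full_ (lock_) {}

  // Synchronized method: blocks the caller while the queue is full.
  void put (const T &item) {
    Guard<Thread_Mutex> guard (lock_);
    while (queue_.size () >= max_size_)
      not_full_.wait ();        // atomically releases <lock_>
    queue_.push_back (item);
    not_empty_.notify ();       // wake one blocked consumer
  }

  // Synchronized method: blocks the caller while the queue is empty.
  T get () {
    Guard<Thread_Mutex> guard (lock_);
    while (queue_.empty ())
      not_empty_.wait ();       // atomically releases <lock_>
    T item = queue_.front ();
    queue_.pop_front ();
    not_full_.notify ();        // wake one blocked producer
    return item;
  }

private:
  std::deque<T> queue_;
  std::size_t max_size_;
  Thread_Mutex lock_;           // the monitor lock
  Thread_Condition not_empty_;  // monitor conditions, each bound
  Thread_Condition not_full_;   // to <lock_>
};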
Pros & Cons of the Monitor Object Pattern
This pattern provides two benefits:
•Simplification of concurrency control
• The Monitor Object pattern presents a concise programming model for sharing an object among cooperating threads where object synchronization corresponds to method invocations
•Simplification of scheduling method execution
• Synchronized methods use their monitor conditions to determine the circumstances under which they should suspend or resume their execution & that of collaborating monitor objects
This pattern can also incur liabilities:
•The use of a single monitor lock can limit scalability due to increased contention when multiple threads serialize on a monitor object
•Complicated extensibility semantics
• These result from the coupling between a monitor object's functionality & its synchronization mechanisms
• It is also hard to inherit from a monitor object transparently, due to the inheritance anomaly problem
•Nested monitor lockout
• This problem is similar to the preceding liability & can occur when a monitor object is nested within another monitor object
95
Minimizing Server Threading Overhead
Context
• Socket implementations in certain multi-threaded operating systems provide a concurrent accept() optimization to accept client connection requests & improve the performance of Web servers that implement the HTTP 1.0 protocol as follows:
• The OS allows a pool of threads in a Web server to call accept() on the same passive-mode socket handle
• When a connection request arrives, the operating system's transport layer creates a new connected transport endpoint, encapsulates this new endpoint with a data-mode socket handle & passes the handle as the return value from accept()
• The OS then schedules one of the threads in the pool to receive this data-mode handle, which it uses to communicate with its connected client
[Figure: a pool of threads blocked in accept() on the same passive-mode socket handle]
96
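The sketch below shows this optimization from the server's side: every pool thread blocks in accept() on the same passive-mode handle & the OS hands each new data-mode handle to exactly one of them. Socket setup & error handling are elided, & handle_http_connection() is a hypothetical helper.

#include <sys/socket.h>
#include <pthread.h>

void handle_http_connection (int data_handle);  // hypothetical helper

// Each pool thread runs this loop; the OS transport layer schedules
// exactly one blocked thread to receive each new connection.
void *accept_thread (void *arg) {
  int passive_handle = *(int *) arg;
  for (;;) {
    int data_handle = accept (passive_handle, 0, 0);
    handle_http_connection (data_handle);  // serve, then close
  }
  return 0;
}

void spawn_accept_pool (int &passive_handle, int n_threads) {
  for (int i = 0; i < n_threads; ++i) {
    pthread_t tid;
    pthread_create (&tid, 0, accept_thread, &passive_handle);
  }
}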
Drawbacks with the Half-Sync/
Half-Async Architecture
Problem
• Although the Half-Sync/Half-Async threading model is more scalable than the purely reactive model, it is not necessarily the most efficient design
• e.g., passing a request between the Reactor thread & a worker thread incurs:
• Dynamic memory (de)allocation,
• Synchronization operations,
• A context switch, &
• CPU cache updates
• This overhead makes JAWS' latency unnecessarily high, particularly on operating systems that support the concurrent accept() optimization
[Figure: the Half-Sync/Half-Async Request Queue sitting between the Reactor & Worker Threads 1–3]
Solution
• Apply the Leader/Followers architectural pattern to minimize server threading overhead
97
The Leader/Followers Pattern
The Leader/Followers architectural pattern provides an efficient concurrency model where multiple threads take turns sharing event sources to detect, demux, dispatch, & process service requests that occur on the event sources
This pattern eliminates the need for—& the overhead of—a separate Reactor thread & synchronized request queue used in the Half-Sync/Half-Async pattern
[Class diagram: the Thread Pool synchronizer (join(), promote_new_leader()) uses a Handle Set (handle_events(), deactivate_handle(), reactivate_handle(), select()), which demultiplexes Handles to Event Handlers (handle_event(), get_handle()); Concrete Event Handlers A & B specialize Event Handler]
Handle & handle-set combinations:
• Concurrent handle sets, concurrent handles: UDP Sockets + WaitForMultipleObjects()
• Concurrent handle sets, iterative handles: TCP Sockets + WaitForMultipleObjects()
• Iterative handle sets, concurrent handles: UDP Sockets + select()/poll()
• Iterative handle sets, iterative handles: TCP Sockets + select()/poll()
98
Leader/Followers Pattern Dynamics
[Sequence diagram: Threads 1 & 2, Thread Pool, Handle Set, & Concrete Event Handler]
1. Leader thread demuxing: Thread 1 join()s the pool & calls handle_events() on the handle set; Thread 2 join()s & sleeps until it becomes the leader
2. Follower thread promotion: when an event arrives, Thread 1 deactivate_handle()s the handle & promote_new_leader()s; Thread 2 wakes & waits for a new event
3. Event handler demuxing & event processing: Thread 1 dispatches handle_event() & processes the current event concurrently, then reactivate_handle()s the handle
4. Rejoining the thread pool: Thread 1 join()s & sleeps until it becomes the leader; meanwhile the new leader dispatches handle_event() for the next event
99
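The turn-taking protocol can be sketched as below, reusing the Thread_Mutex, Guard, & hypothetical Thread_Condition wrappers from earlier sketches; Event, wait_for_event(), & dispatch() are hypothetical stand-ins for the handle-set demuxing & handler dispatching. This illustrates the structure only, not a complete Leader/Followers implementation.

struct Event;                      // opaque event record
Event *wait_for_event ();          // hypothetical: demux on the shared handle set
void dispatch (Event *event);      // hypothetical: process the event

class Thread_Pool {
public:
  Thread_Pool (): followers_ (lock_), leader_active_ (false) {}

  // Block the calling thread until it can become the leader.
  void join () {
    Guard<Thread_Mutex> guard (lock_);
    while (leader_active_)
      followers_.wait ();     // sleep as a follower
    leader_active_ = true;    // this thread is now the leader
  }

  // Hand leadership to one sleeping follower.
  void promote_new_leader () {
    Guard<Thread_Mutex> guard (lock_);
    leader_active_ = false;
    followers_.notify ();
  }

private:
  Thread_Mutex lock_;
  Thread_Condition followers_;
  bool leader_active_;
};

// Every pool thread runs this loop, taking turns as the leader.
void *pool_thread (void *arg) {
  Thread_Pool *pool = (Thread_Pool *) arg;
  for (;;) {
    pool->join ();                     // wait to become the leader
    Event *event = wait_for_event ();  // demux as the leader
    pool->promote_new_leader ();       // a follower takes over demuxing
    dispatch (event);                  // process the event concurrently
  }                                    // loop around & rejoin as a follower
  return 0;
}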
Applying the Leader/Followers
Pattern in JAWS
Two options:
1. If the platform supports the accept() optimization then the Leader/Followers pattern can be implemented by the OS
2. Otherwise, this pattern can be implemented as a reusable framework
[Class diagram: the Thread Pool synchronizer uses a Reactor-based handle set (handle_events(), deactivate_handle(), reactivate_handle(), select()); the HTTP Acceptor & HTTP Handler are the concrete event handlers]
Although the Leader/Followers thread pool design is highly efficient, the Half-Sync/Half-Async design may be more appropriate for certain types of servers, e.g.:
• The Half-Sync/Half-Async design can reorder & prioritize client requests more flexibly, because it has a synchronized request queue implemented using the Monitor Object pattern
• It may be more scalable, because it queues requests in Web server virtual memory, rather than the OS kernel
100
Pros and Cons of the
Leader/Followers Pattern
This pattern provides two benefits:
•Performance enhancements
• This can improve performance as follows:
• It enhances CPU cache affinity and eliminates the need for dynamic memory allocation & data buffer sharing between threads
• It minimizes locking overhead by not exchanging data between threads, thereby reducing thread synchronization
• It can minimize priority inversion because no extra queueing is introduced in the server
• It doesn't require a context switch to handle each event, reducing dispatching latency
•Programming simplicity
• The Leader/Follower pattern simplifies the programming of concurrency models where multiple threads can receive requests, process responses, & demultiplex connections using a shared handle set
This pattern also incurs liabilities:
•Implementation complexity
• The advanced variants of the Leader/Followers pattern are hard to implement
•Lack of flexibility
• In the Leader/Followers model it is hard to discard or reorder events because there is no explicit queue
•Network I/O bottlenecks
• The Leader/Followers pattern serializes processing by allowing only a single thread at a time to wait on the handle set, which could become a bottleneck because only one thread at a time can demultiplex I/O events
101
Using Asynchronous I/O Effectively
Context
• Synchronous multi-threading may not be the most scalable way to implement a Web server on OS platforms that support async I/O more efficiently than synchronous multi-threading
• For example, highly-efficient Web servers can be implemented on Windows NT by invoking async Win32 operations that perform the following activities:
• Processing indication events, such as TCP CONNECT and HTTP GET requests, via AcceptEx() & ReadFile(), respectively
• Transmitting requested files to clients asynchronously via WriteFile() or TransmitFile()
• When these async operations complete, WinNT
1. Delivers the associated completion events containing their results to the Web server
2. Processes these events & performs the appropriate actions before returning to its event loop
[Figure: threads call GetQueuedCompletionStatus() on an I/O Completion Port; AcceptEx() operations are pending on the passive-mode socket handle]
102
The Proactor Pattern
Problem
• Developing software that achieves the potential efficiency & scalability of async I/O is hard due to the separation in time & space of async operation invocations & their subsequent completion events
Solution
• Apply the Proactor architectural pattern to make efficient use of async I/O
This pattern allows event-driven applications to efficiently demultiplex & dispatch service requests triggered by the completion of async operations, thereby achieving the performance benefits of concurrency without incurring its many liabilities
[Class diagram: an Initiator <<uses>> the Asynchronous Operation Processor (execute_async_op()) to <<invoke>> Asynchronous Operations (async_op()), which are associated with Handles; completions are <<enqueued>> on the Asynchronous Event Queue; the Proactor (handle_events(), get_completion_event()) <<dequeues>> & dispatches Completion Handlers (handle_event()); Concrete Completion Handlers specialize Completion Handler]
103
Dynamics in the Proactor Pattern
[Sequence diagram: Initiator, Asynchronous Operation Processor, Asynchronous Operation, Completion Event Queue, Proactor, & Completion Handler]
1. Initiate operation: the Initiator calls async_operation(), naming its Completion Handler & the Completion Event Queue
2. Process operation: the Asynchronous Operation Processor runs exec_async_operation()
3. Run event loop: the application calls handle_events() on the Proactor
4. Generate & queue completion event: when the operation finishes, its Result is enqueued as a completion event
5. Dequeue completion event & perform completion processing: the Proactor dequeues the Result & dispatches handle_event() on the Completion Handler, which performs its service()
Note similarities & differences with the Reactor pattern, e.g.:
• Both process events via callbacks
• However, it's generally easier to multi-thread a proactor
104
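To make the completion-side inversion of control concrete, here is a platform-neutral sketch of a proactor's dispatch loop. The Completion_Event_Queue is a hypothetical stand-in for an OS facility such as the Windows NT I/O completion port shown on the next slide; error handling & shutdown are omitted.

class Completion_Handler;

// Result of a completed async operation: identifies the handler
// to dispatch & carries the operation's outcome.
struct Result {
  Completion_Handler *handler;
  int bytes_transferred;
  int status;
};

// Hook interface implemented by concrete completion handlers.
class Completion_Handler {
public:
  virtual ~Completion_Handler () {}
  virtual void handle_event (const Result &result) = 0;
};

// Hypothetical queue fed by the asynchronous operation processor;
// get_completion_event() blocks until a completion is available.
class Completion_Event_Queue {
public:
  Result get_completion_event ();
};

class Proactor {
public:
  Proactor (Completion_Event_Queue *queue): queue_ (queue) {}

  // Event loop: dequeue completion events & dispatch each one to
  // the completion handler associated with the finished operation.
  void handle_events () {
    for (;;) {
      Result result = queue_->get_completion_event ();  // blocks
      result.handler->handle_event (result);  // completion callback
    }
  }

private:
  Completion_Event_Queue *queue_;
};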
Applying the Proactor Pattern in JAWS
The Proactor pattern structures the JAWS concurrent server to receive & process requests from multiple clients asynchronously
JAWS HTTP components are split into two parts:
1. Operations that execute asynchronously
• e.g., to accept connections & receive client HTTP GET requests
2. The corresponding completion handlers that process the async operation results
• e.g., to transmit a file back to a client after an async connection operation completes
[Class diagram: the Web Server <<uses>> the Windows NT Operating System's asynchronous operations (AcceptEx(), ReadFile(), WriteFile()); completions are <<enqueued>> on the Asynchronous I/O Completion Port; the Proactor (handle_events(), GetQueuedCompletionStatus()) <<dequeues>> & dispatches the HTTP Acceptor & HTTP Handler completion handlers]
105
Proactive Connection Management & Data Transfer in JAWS
[Sequence diagram omitted: proactive connection management & data transfer in JAWS]
106
Pros and Cons of the Proactor Pattern
This pattern offers five benefits:
•Separation of concerns
• Decouples application-independent async mechanisms from application-specific functionality
•Portability
• Improves application portability by allowing its interfaces to be reused independently of the OS event demuxing calls
•Decoupling of threading from concurrency
• The async operation processor executes long-duration operations on behalf of initiators so applications can spawn fewer threads
•Performance
• Avoids context switching costs by activating only those logical threads of control that have events to process
•Simplification of application synchronization
• If concrete completion handlers spawn no threads, application logic can be written with little or no concern for synchronization issues
This pattern incurs some liabilities:
•Restricted applicability
• This pattern can be applied most efficiently if the OS supports asynchronous operations natively
•Complexity of programming, debugging, & testing
• It is hard to program applications & higher-level system services using asynchrony mechanisms, due to the separation in time & space between operation invocation and completion
•Scheduling, controlling, & canceling asynchronously running operations
• Initiators may be unable to control the scheduling order in which asynchronous operations are executed by an asynchronous operation processor
107
Efficiently Demuxing Asynchronous
Operations & Completions
Context
• In a proactive Web server async I/O operations will yield I/O completion event responses that must be processed efficiently
Problem
• As little overhead as possible should be incurred to determine how the completion handler will demux & process completion events after async operations finish executing
• When a response arrives, the application should spend as little time as possible demultiplexing the completion event to the handler that will process the async operation's response
Solution
• Apply the Asynchronous Completion Token design pattern to demux & process the responses of asynchronous operations efficiently
• Together with each async operation that a client initiator invokes on a service, transmit information that identifies how the initiator should process the service's response
• Return this information to the initiator when the operation finishes, so that it can be used to demux the response efficiently, allowing the initiator to process it accordingly
108
The Asynchronous Completion Token Pattern
Structure and Participants
[Class diagram omitted]
Dynamic Interactions
[Sequence diagram omitted: the completion event carries the ACT back to the initiator, which uses it to dispatch handle_event() on the appropriate handler]
109
Applying the Asynchronous Completion Token Pattern in JAWS
Detailed Interactions (HTTP_Acceptor is both initiator & completion handler)
[Sequence diagram omitted]
110
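A sketch of why ACT demuxing is constant-time appears below: the initiator passes a pointer-sized token with the async call & the service returns it verbatim in the completion event, so no table lookup or parsing is needed. The async_read() call & Completion_Event layout are illustrative, not a real API.

// Hook interface for completion processing (simplified).
class Completion_Handler {
public:
  virtual ~Completion_Handler () {}
  virtual void handle_event (int status) = 0;
};

// The ACT is an opaque, pointer-sized value the service echoes
// back untouched with the completion event.
typedef void *Async_Completion_Token;

struct Completion_Event {
  Async_Completion_Token act;  // returned verbatim by the service
  int status;
};

void async_read (int handle, Async_Completion_Token act);  // illustrative

// Initiation: use the completion handler's address as the ACT, so
// the initiator needs no lookup table at all.
void initiate (int handle, Completion_Handler *handler) {
  async_read (handle, (Async_Completion_Token) handler);
}

// Completion: demux in constant time -- no parsing, no searching.
void on_completion (const Completion_Event &event) {
  Completion_Handler *handler = (Completion_Handler *) event.act;
  handler->handle_event (event.status);
}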
Pros and Cons of the Asynchronous
Completion Token Pattern
This pattern has four benefits:
•Simplified initiator data structures
• Initiators need not maintain complex data structures to associate service responses with completion handlers
•Efficient state acquisition
• ACTs are time efficient because they need not require complex parsing of data returned with the service response
•Space efficiency
• ACTs can consume minimal data space yet can still provide applications with sufficient information to associate large amounts of state to process asynchronous operation completion actions
•Flexibility
• User-defined ACTs are not forced to inherit from an interface to use the service's ACTs
This pattern has some liabilities:
•Memory leaks
• Memory leaks can result if initiators use ACTs as pointers to dynamically allocated memory & services fail to return the ACTs, for example if the service crashes
•Authentication
• When an ACT is returned to an initiator on completion of an asynchronous event, the initiator may need to authenticate the ACT before using it
•Application re-mapping
• If ACTs are used as direct pointers to memory, errors can occur if part of the application is re-mapped in virtual memory
111
Transparently Parameterizing
Synchronization into Components
Context
• The various concurrency patterns described earlier impact component synchronization strategies in various ways
• e.g., ranging from no locks to readers/writer locks
• In general, components must run efficiently in a variety of concurrency models
Problem
• It should be possible to customize JAWS component synchronization mechanisms according to the requirements of particular application use cases & configurations
• Hard-coding synchronization strategies into component implementations is inflexible
• Maintaining multiple versions of components manually is not scalable
Solution
• Apply the Strategized Locking design pattern to parameterize JAWS component synchronization strategies by making them 'pluggable' types
• Each type objectifies a particular synchronization strategy, such as a mutex, readers/writer lock, semaphore, or 'null' lock
• Instances of these pluggable types can be defined as objects contained within a component, which then uses these objects to synchronize its method implementations efficiently
112 efficiently
Applying Polymorphic Strategized
Locking in JAWS
Polymorphic Strategized Locking

class Lock {
public:
  // Acquire and release the lock.
  virtual void acquire () = 0;
  virtual void release () = 0;
  // ...
};

class Thread_Mutex : public Lock {
  // ...
};

class File_Cache {
public:
  // Constructor.
  File_Cache (Lock &l): lock_ (&l) { }

  // A method.
  const void *lookup (const string &path) const {
    lock_->acquire ();
    // Implement the <lookup> method.
    lock_->release ();
  }
  // ...
private:
  // The polymorphic strategized locking object.
  mutable Lock *lock_;
  // Other data members and methods go here...
};
113
Applying Parameterized
Strategized Locking in JAWS
Parameterized Strategized Locking

// Single-threaded file cache.
typedef File_Cache<Null_Mutex> Content_Cache;

// Multi-threaded file cache using a thread mutex.
typedef File_Cache<Thread_Mutex> Content_Cache;

// Multi-threaded file cache using a readers/writer lock.
typedef File_Cache<RW_Lock> Content_Cache;

template <class LOCK>
class File_Cache {
public:
  // A method.
  const void *lookup (const string &path) const {
    lock_.acquire ();
    // Implement the <lookup> method.
    lock_.release ();
  }
  // ...
private:
  // The parameterized strategized locking object.
  mutable LOCK lock_;
  // Other data members and methods go here...
};

Note that the various locks need not inherit from a common base class or use virtual methods!
114
Pros and Cons of the
Strategized Locking Pattern
This pattern provides three benefits:
•Enhanced flexibility & customization
• It is straightforward to configure & customize a component for certain concurrency models because the synchronization aspects of components are strategized
•Decreased maintenance effort for components
• It is straightforward to add enhancements & bug fixes to a component because there is only one implementation, rather than a separate implementation for each concurrency model
•Improved reuse
• Components implemented using this pattern are more reusable, because their locking strategies can be configured orthogonally to their behavior
This pattern also incurs liabilities:
•Obtrusive locking
• If templates are used to parameterize locking aspects this will expose the locking strategies to application code
•Over-engineering
• Externalizing a locking mechanism by placing it in a component's interface may actually provide too much flexibility in certain situations
• e.g., inexperienced developers may try to parameterize a component with the wrong type of lock, resulting in improper compile- or run-time behavior
115
Ensuring Locks are Released Properly
Context
• Concurrent applications, such as JAWS, contain shared resources that are manipulated by multiple threads concurrently
Problem
• Code that shouldn't execute concurrently must be protected by some type of lock that is acquired & released when control enters & leaves a critical section, respectively
• If programmers must acquire & release locks explicitly, it is hard to ensure that the locks are released in all paths through the code
• e.g., in C++ control can leave a scope due to a return, break, continue, or goto statement, as well as from an unhandled exception being propagated out of the scope

// A method.
const void *lookup (const string &path) const {
  lock_.acquire ();
  // The <lookup> method implementation
  // may return prematurely…
  lock_.release ();
}

Solution
• In C++, apply the Scoped Locking idiom to define a guard class whose constructor automatically acquires a lock when control enters a scope & whose destructor automatically releases the lock when control leaves the scope
116
Applying the Scoped Locking
Idiom in JAWS
Generic Guard Wrapper Facade

template <class LOCK>
class Guard {
public:
  // Store a pointer to the lock and acquire the lock.
  Guard (LOCK &lock): lock_ (&lock)
  { lock_->acquire (); }

  // Release the lock when the guard goes out of scope.
  ~Guard () { lock_->release (); }
private:
  // Pointer to the lock we're managing.
  LOCK *lock_;
};

Instances of the guard class can be allocated on the run-time stack to acquire & release locks in method or block scopes that define critical sections

Applying the Guard

template <class LOCK>
class File_Cache {
public:
  // A method.
  const void *lookup (const string &path) const {
    // Use Scoped Locking idiom to acquire
    // & release the <lock_> automatically.
    Guard<LOCK> guard (lock_);
    // Implement the <lookup> method.
    // <lock_> is released automatically…
  }
  // ...
private:
  mutable LOCK lock_;
};
117
Pros and Cons of the
Scoped Locking Idiom
This idiom has one benefit:
•Increased robustness
• This idiom increases the robustness of concurrent applications by eliminating common programming errors related to synchronization & multi-threading
• By applying the Scoped Locking idiom, locks are acquired & released automatically when control enters and leaves critical sections defined by C++ method & block scopes
This idiom also has liabilities:
•Potential for deadlock when used recursively
• If a method that uses the Scoped Locking idiom calls itself recursively, 'self-deadlock' will occur if the lock is not a 'recursive' mutex
•Limitations with language-specific semantics
• The Scoped Locking idiom is based on a C++ language feature & therefore will not be integrated with operating system-specific system calls
• Thus, locks may not be released automatically when threads or processes abort or exit inside a guarded critical section
• Likewise, they will not be released properly if the standard C longjmp() function is called because this function does not call the destructors of C++ objects as the run-time stack unwinds
118
Minimizing Unnecessary Locking
Context
• Components in multi-threaded applications that contain intra-component method calls
• Components that have applied the Strategized Locking pattern
Problem
• Thread-safe components should be designed to avoid unnecessary locking
• Thread-safe components should be designed to avoid "self-deadlock"
Solution
• Apply the Thread-safe Interface design pattern to minimize locking overhead & ensure that intra-component method calls do not incur 'self-deadlock' by trying to reacquire a lock that is held by the component already
This pattern structures all components that process intra-component method invocations according to two design conventions:
•Interface methods check
• All interface methods, such as C++ public methods, should only acquire/release component lock(s), thereby performing synchronization checks at the 'border' of the component.
•Implementation methods trust
• Implementation methods, such as C++ private and protected methods, should only perform work when called by interface methods.
119
Motivating the Need for the
Thread-safe Interface Pattern
template <class LOCK>
class File_Cache {
public:
  // Return a pointer to the memory-mapped file associated with
  // <path> name, adding it to the cache if it doesn't exist.
  const void *lookup (const string &path) const {
    // Use the Scoped Locking idiom to acquire
    // & release the <lock_> automatically.
    Guard<LOCK> guard (lock_);
    const void *file_pointer = check_cache (path);
    if (file_pointer == 0) {
      // Insert the <path> name into the cache.
      // Note the intra-class <insert> call.
      insert (path);
      file_pointer = check_cache (path);
    }
    return file_pointer;
  }

  // Add <path> name to the cache.
  void insert (const string &path) {
    // Use the Scoped Locking idiom to acquire
    // and release the <lock_> automatically.
    Guard<LOCK> guard (lock_);
    // ... insert <path> into the cache...
  }
private:
  mutable LOCK lock_;
  const void *check_cache (const string &) const;
  // ... other private methods and data omitted...
};

Since File_Cache is a template we don't know the type of lock used to parameterize it: if LOCK is a non-recursive mutex, the intra-class insert() call self-deadlocks because lookup() already holds <lock_>.
120
Applying the Thread-safe Interface
Pattern in JAWS
template <class LOCK>
class File_Cache {
public:
  // Return a pointer to the memory-mapped file associated with
  // <path> name, adding it to the cache if it doesn't exist.
  const void *lookup (const string &path) const {
    // Use the Scoped Locking idiom to acquire
    // and release the <lock_> automatically.
    Guard<LOCK> guard (lock_);
    return lookup_i (path);
  }
private:
  mutable LOCK lock_; // The strategized locking object.

  // This implementation method doesn't acquire or release
  // <lock_> and does its work without calling interface methods.
  const void *lookup_i (const string &path) const {
    const void *file_pointer = check_cache_i (path);
    if (file_pointer == 0) {
      // If <path> name isn't in the cache then
      // insert it and look it up again.
      insert_i (path);
      file_pointer = check_cache_i (path);
      // The calls to implementation methods <insert_i> and
      // <check_cache_i> assume that the lock is held & do work.
    }
    return file_pointer;
  }
  // ... other implementation methods & data omitted...
};

Note fewer constraints on the type of LOCK…
121
Pros and Cons of the Thread-safe
Interface Pattern
This pattern has three benefits:
•Increased robustness
• This pattern ensures that self-deadlock does not occur due to intra-component method calls
•Enhanced performance
• This pattern ensures that locks are not acquired or released unnecessarily
•Simplification of software
• Separating the locking and functionality concerns can help to simplify both aspects
This pattern has some liabilities:
•Additional indirection and extra methods
• Each interface method requires at least one implementation method, which increases the footprint of the component & may also add an extra level of method-call indirection for each invocation
•Potential for misuse
• OO languages, such as C++ and Java, support class-level rather than object-level access control
• As a result, an object can bypass the public interface to call a private method on another object of the same class, thus bypassing that object's lock
•Potential overhead
• This pattern prevents multiple components from sharing the same lock & prevents locking at a finer granularity than the component, which can increase lock contention
122
Synchronizing Singletons Correctly
Context
• JAWS uses various singletons to implement components where only one instance is required
• e.g., the ACE Reactor, the request queue, etc.
Problem
• Singletons can be problematic in multi-threaded programs

Either too little locking…

class Singleton {
public:
  static Singleton *instance () {
    if (instance_ == 0) {
      // Enter critical section.
      instance_ = new Singleton;
      // Leave critical section.
    }
    return instance_;
  }
  void method_1 ();
  // Other methods omitted.
private:
  static Singleton *instance_;
  // Initialized to 0 by linker.
};

… or too much

class Singleton {
public:
  static Singleton *instance () {
    Guard<Thread_Mutex> g (lock_);
    if (instance_ == 0) {
      // Enter critical section.
      instance_ = new Singleton;
      // Leave critical section.
    }
    return instance_;
  }
private:
  static Singleton *instance_;
  // Initialized to 0 by linker.
  static Thread_Mutex lock_;
};
123
The Double-checked Locking
Optimization Pattern
Solution
• Apply the Double-Checked Locking Optimization design pattern to reduce contention & synchronization overhead whenever critical sections of code must acquire locks in a thread-safe manner just once during program execution

// Pseudocode sketch of the optimization.
// Perform first-check to evaluate 'hint'.
if (first_time_in is FALSE) {
  acquire the mutex
  // Perform double-check to avoid race condition.
  if (first_time_in is FALSE) {
    execute the critical section
    set first_time_in to TRUE
  }
  release the mutex
}

class Singleton {
public:
  static Singleton *instance () {
    // First check
    if (instance_ == 0) {
      Guard<Thread_Mutex> g (lock_);
      // Double check.
      if (instance_ == 0)
        instance_ = new Singleton;
    }
    return instance_;
  }
private:
  static Singleton *instance_;
  static Thread_Mutex lock_;
};
124
Applying the Double-Checked Locking
Optimization Pattern in ACE
ACE defines a "singleton adapter" template to automate the double-checked locking optimization

template <class TYPE>
class ACE_Singleton {
public:
  static TYPE *instance () {
    // First check
    if (instance_ == 0) {
      // Scoped Locking acquires and releases the lock.
      Guard<Thread_Mutex> guard (lock_);
      // Double check instance_.
      if (instance_ == 0)
        instance_ = new TYPE;
    }
    return instance_;
  }
private:
  static TYPE *instance_;
  static Thread_Mutex lock_;
};

Thus, creating a "thread-safe" singleton is easy:

typedef ACE_Singleton<Request_Queue>
        Request_Queue_Singleton;
125
Pros and Cons of the Double-Checked
Locking Optimization Pattern
This pattern has two benefits:
•Minimized locking overhead
• By performing two first-time-in flag checks, this pattern minimizes overhead for the common case
• After the flag is set the first check ensures that subsequent accesses require no further locking
•Prevents race conditions
• The second check of the first-time-in flag ensures that the critical section is executed just once
This pattern has some liabilities:
•Non-atomic pointer or integral assignment semantics
• If an instance_ pointer is used as the flag in a singleton implementation, all bits of the singleton instance_ pointer must be read & written atomically in a single operation
• If the write to memory after the call to new is not atomic, other threads may try to read an invalid pointer
•Multi-processor cache coherency
• Certain multi-processor platforms, such as the COMPAQ Alpha & Intel Itanium, perform aggressive memory caching optimizations in which read & write operations can execute 'out of order' across multiple CPU caches, such that the CPU cache lines will not be flushed properly if shared data is accessed without locks held
126
Logging Access Statistics Efficiently
Context
• Web servers often need to log certain information
• e.g., count number of times web pages are accessed
Problem
• Having a central logging object in a multi-threaded server process can become a bottleneck
• e.g., due to synchronization required to serialize access by multiple threads
Solution
• Apply the Thread-Specific Storage pattern to allow multiple threads to use one 'logically global' access point to retrieve an object that is local to a thread, without incurring locking overhead on each object access
[Class diagram: m Application Threads <<use>> a Thread-Specific Object Proxy (key, method1() … methodN()), which calls get(key)/set(key, object) on the Thread-Specific Object Set (n x m) & create_key() on the Key Factory; the proxy maintains the Thread-Specific Object (method1() … methodN())]
127
Thread-Specific Storage Pattern Dynamics
The application thread identifier, thread-specific object set, & proxy cooperate to obtain the correct thread-specific object
[Figure: the Thread-Specific Object Set manages keys 1…n for threads 1…m; the Thread-Specific Object Proxy accesses the Thread-Specific Object at [k,t]]
[Sequence diagram: on method(), the Application Thread's Thread-Specific Object Proxy create_key()s via the Key Factory, set()s the Thread-Specific Object in the Thread-Specific Object Set under that key, & then invokes the method on it]
128
Applying the Thread-Specific
Storage Pattern to JAWS
[Class diagram: Application Threads <<use>> the ACE_TSS proxy (key, operator->()), which calls get(key)/set(key, object) on the Thread-Specific Object Set (n x m) & create_key() on the Key Factory; the thread-specific object here is an Error_Logger (last_error(), log())]

template <class TYPE>
class ACE_TSS {
public:
  TYPE *operator-> () const {
    TYPE *tss_data = 0;
    if (!once_) {
      Guard<Thread_Mutex> guard (keylock_);
      if (!once_) {
        ACE_OS::thr_keycreate (&key_, &cleanup_hook);
        once_ = true;
      }
    }
    ACE_OS::thr_getspecific (key_, (void **) &tss_data);
    if (tss_data == 0) {
      tss_data = new TYPE;
      ACE_OS::thr_setspecific (key_, (void *) tss_data);
    }
    return tss_data;
  }
private:
  mutable pthread_key_t key_;
  mutable bool once_;
  mutable Thread_Mutex keylock_;
  static void cleanup_hook (void *ptr);
};

class Error_Logger {
public:
  int last_error ();
  void log (const char *format, ...);
};

ACE_TSS<Error_Logger> my_logger;
// ...
if (recv (……) == -1
    && my_logger->last_error () != EWOULDBLOCK)
  my_logger->log ("recv failed, errno = %d",
                  my_logger->last_error ());
129
Pros & Cons of the Thread-Specific
Storage Pattern
This pattern has four benefits:
•Efficiency
• It's possible to implement this pattern so that no locking is needed to access thread-specific data
•Ease of use
• When encapsulated with wrapper facades, thread-specific storage is easy for application developers to use
•Reusability
• By combining this pattern with the Wrapper Façade pattern it's possible to shield developers from non-portable OS platform characteristics
•Portability
• It's possible to implement portable thread-specific storage mechanisms on most multi-threaded operating systems
This pattern also has liabilities:
•It encourages use of thread-specific global objects
• Many applications do not require multiple threads to access thread-specific data via a common access point
• In this case, data should be stored so that only the thread owning the data can access it
•It obscures the structure of the system
• The use of thread-specific storage potentially makes an application harder to understand, by obscuring the relationships between its components
•It restricts implementation options
• Not all languages support parameterized types or smart pointers, which are useful for simplifying the access to thread-specific data
130
Tutorial Example 3:
Applying Patterns to Real-time CORBA
http://www.posa.uci.edu
UML models of a software architecture can illustrate how a system is designed, but not why the system is designed in a particular way
Patterns are used throughout The ACE ORB (TAO) Real-time CORBA implementation to codify expert knowledge & to generate the ORB's software architecture by capturing recurring structures & dynamics & resolving common design forces
131
The Evolution of TAO
TAO ORB
• Compliant with CORBA
2.4 & some CORBA 3.0
• AMI
• INS
• Portable Interceptors
• Pattern-centric design
• Key capabilities
• QoS-enabled
• Configurable
• Pluggable protocols
• IIOP
• UIOP
• Shared memory
• SSL
• VME
• Open-source
• Commercially supported
• www.theaceorb.com
• Available now
• ZEN (RT Java/RT CORBA)
• www.zen.uci.edu
132
The Evolution of TAO
DYNAMIC/STATIC SCHEDULING, A/V STREAMING
Static Scheduling
• Rate monotonic analysis
Dynamic Scheduling
• Earliest deadline first
• Minimum laxity first
• Maximal urgency first
Hybrid Dynamic/Static
• Demo in WSOA
• ETA Winter 2001
A/V Streaming Service
• QoS mapping
• QoS monitoring
• QoS adaptation
ACE QoS API (AQoSA)
• GQoS + RAPI
• Integration with A/V Streaming, ETA Winter 2001
133
The Evolution of TAO
DYNAMIC/STATIC SCHEDULING, A/V STREAMING, FT-CORBA & LOAD BALANCING, SECURITY
FT-CORBA
• Entity redundancy
• Multiple models
• Cold passive
• Warm passive
• Active
• IOGR
• ETA Winter 2001
Load Balancing
• Static & dynamic
• LOCATION_FORWARDING
SSL Support
• Integrity
• Confidentiality
• Authentication (limited)
Security Service
• Authentication
• Access control
• Non-repudiation
• Audit
• ETA Winter 2001
134
The Evolution of TAO
DYNAMIC/STATIC SCHEDULING, A/V STREAMING, FT-CORBA & LOAD BALANCING, SECURITY, NOTIFICATIONS, TRANSACTIONS
Notification Service
• Structured events
• Event filtering
• QoS properties
• Priority
• Expiry times
• Order policy
• Compatible w/Events
Object Transaction Service
• Encapsulates RDBMs
• ETA Winter 2001
CORBA Component Model (CCM)
• Extension Interfaces
• Component navigation
• Standardized life-cycles
• Dynamic configuration
• QoS-enabled containers
• Reflective collocation
• ETA Winter 2001
135
Concluding Remarks
R&D Synergies
[Figure: R&D cycle linking user needs, standard COTS, & R&D]
• Researchers & developers of distributed applications face common challenges
• e.g., connection management, service initialization, error handling, flow & congestion control, event demuxing, distribution, concurrency control, fault tolerance, synchronization, scheduling, & persistence
• Patterns, frameworks, & components help to resolve these challenges
• These techniques can yield efficient, scalable, predictable, & flexible middleware & applications
"Secrets" to R&D success:
• Embrace & lead COTS standards
• Leverage open-source
• Be entrepreneurial & use the Web
• Solve "real" problems
• See ideas thru to completion
• Leave an enduring legacy
136