M 1 IA
High-Performance Computing (HPC)
Focus: HPC is designed for complex, computationally intensive tasks that require significant processing power and speed. It's about solving a single, large problem as quickly as possible.
Architecture:
o Nodes: Each node contains powerful CPUs, often with GPUs (Graphics Processing Units) for accelerated computing, and large amounts of RAM.
o High-Speed Interconnect: A crucial element is the high-speed interconnect (e.g., InfiniBand), which allows nodes to communicate rapidly and efficiently, minimizing latency.
o Shared Storage: HPC systems often use shared storage systems for fast data access.
Workload:
o HPC is used for tightly coupled applications, where tasks depend on each other and require frequent communication (see the MPI-style sketch after the examples below).
o Examples include:
Weather forecasting
Molecular modeling
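A tightly coupled computation can be sketched with MPI, where every process (rank) works on part of the problem and all ranks must exchange results. The snippet below is a minimal illustration using the mpi4py package (an assumption about the environment; any MPI binding would do), launched with mpirun:

```python
# Minimal sketch of a tightly coupled HPC job using mpi4py (assumed installed).
# Run with: mpirun -n 4 python tightly_coupled.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this process's id
size = comm.Get_size()          # total number of processes

# Each rank computes a partial sum of 1/i^2 over its own slice of the range.
local = sum(1.0 / (i * i) for i in range(rank + 1, 1_000_000, size))

# Allreduce forces every rank to communicate: the hallmark of tight coupling.
total = comm.allreduce(local, op=MPI.SUM)

if rank == 0:
    print("sum of 1/i^2 ~", total)   # approaches pi^2 / 6
```

The collective allreduce step is where the high-speed interconnect matters: every rank waits on communication with every other rank before the job can proceed.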
High-Throughput Computing (HTC)
Focus: HTC is designed for executing a large number of independent tasks (jobs) over a long period. It's about getting many things done, rather than doing one thing very quickly.
Architecture:
o Grid/Cluster: HTC systems can be organized as grids or clusters, often distributed across multiple locations.
o Nodes/Workers: Each node, or worker, executes individual jobs. Nodes can be less powerful than HPC nodes.
o Network/Internet: HTC relies on network connectivity, often the internet, to distribute jobs and manage resources.
o Distributed Storage: HTC often uses distributed storage systems, allowing jobs to access data from various locations.
Workload:
o HTC is used for loosely coupled applications, where tasks are independent and require minimal communication (see the parameter-sweep sketch after this list).
o Examples include:
Genome sequencing
Data mining
Image processing
Parameter sweeps
o Fault tolerance is very important, as jobs can run for long periods of time.
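A loosely coupled HTC workload can be sketched as a parameter sweep in which every job runs on its own and no job talks to another. The simulate function and parameter grid below are illustrative assumptions:

```python
# Minimal sketch of an HTC-style parameter sweep: many independent jobs,
# no communication between them, results collected at the end.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def simulate(alpha, beta):
    """Hypothetical per-job computation; a real HTC job might run for hours."""
    return sum((alpha * i - beta) ** 2 for i in range(10_000))

if __name__ == "__main__":
    grid = list(product([0.1, 0.5, 1.0], [1, 2, 3]))        # 9 independent jobs
    with ProcessPoolExecutor() as pool:                      # workers stand in for HTC nodes
        futures = [pool.submit(simulate, a, b) for a, b in grid]
        results = [f.result() for f in futures]              # a failed job could simply be resubmitted
    print(dict(zip(grid, results)))
```

Because the jobs never communicate, losing one worker only costs that worker's jobs, which is why fault tolerance in HTC usually amounts to resubmitting failed jobs.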
Multicore Processors
A multicore CPU consists of multiple processing units (cores) on a single chip. Each core functions as an independent processor, capable of executing tasks simultaneously, thereby improving parallel processing and overall computational efficiency.
Each core has its own private L1 cache, and all cores share an L2 cache.
Examples of multicore processors include Intel i7, Xeon, AMD Opteron, Sun Niagara, IBM Power 6, and X Cell processors.
High-performance computing (HPC) and cloud computing systems leverage multicore processors for parallel processing.
Multicore designs reduce power consumption compared to single-core processors running at higher clock speeds.
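To show how software exploits the cores of a multicore CPU, the sketch below splits one computation across all available cores using Python's multiprocessing module (the chunking scheme and work function are illustrative assumptions):

```python
# Minimal sketch: spread one computation across all cores of a multicore CPU.
import multiprocessing as mp

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n_cores = mp.cpu_count()                      # logical cores on this chip
    step = 1_000_000 // n_cores
    chunks = [(i * step, (i + 1) * step if i < n_cores - 1 else 1_000_000)
              for i in range(n_cores)]
    with mp.Pool(processes=n_cores) as pool:      # one worker process per core
        total = sum(pool.map(partial_sum, chunks))
    print(f"{n_cores} cores, sum of squares = {total}")
```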
Multithreading
Multithreading is a technique where a processor executes multiple instruction threads concurrently, improving the efficiency of CPU resource utilization.
1. Fine-Grain Multithreading: The processor switches between threads every cycle, reducing idle time.
2. Coarse-Grain Multithreading: The processor executes instructions from the same thread for multiple cycles before switching.
3. Simultaneous Multithreading (SMT): Allows instructions from multiple threads to execute simultaneously within a single cycle.
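SMT is what makes one physical core appear as two (or more) logical processors to the operating system. The sketch below, assuming the third-party psutil package is available, checks whether a machine exposes more logical processors than physical cores:

```python
# Minimal sketch: detect whether SMT (e.g., Intel Hyper-Threading) is enabled
# by comparing physical cores with the logical processors the OS schedules on.
import psutil  # third-party package, assumed installed

physical = psutil.cpu_count(logical=False)
logical = psutil.cpu_count(logical=True)

print(f"physical cores: {physical}, logical processors: {logical}")
if physical and logical and logical > physical:
    print(f"SMT appears enabled: {logical // physical} hardware threads per core")
else:
    print("SMT appears disabled or unsupported")
```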
1. Bare-metal VMs
In this configuration, the hypervisor runs directly on the host machine's hardware, without any underlying operating system. This approach offers high performance and security, as the hypervisor has direct access to the hardware resources. VMware ESXi and Microsoft Hyper-V are examples of bare-metal hypervisors.
2. Hosted VMs
In contrast to bare-metal VMs, hosted VMs rely on an existing operating system on the host machine. The hypervisor runs as an application on top of this operating system, creating a layer of abstraction between the VMs and the hardware. This configuration is simpler to set up and manage, but it may introduce some performance overhead due to the additional layer. Oracle VirtualBox and VMware Workstation are examples of hosted hypervisors.
3. Para-virtualized VMs
This type of VM configuration strikes a balance between bare-metal and hosted VMs. Para-virtualized VMs require modifications to the guest operating systems to make them aware of the virtualization layer. This allows for better performance compared to hosted VMs, as the guest OS can directly communicate with the hypervisor. However, it also introduces some complexity in terms of OS compatibility. Xen and KVM are examples of para-virtualized hypervisors.
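Whatever the hypervisor type, management tooling usually talks to it through a common API. The sketch below lists the VMs known to a host, assuming the libvirt-python bindings and a local KVM/QEMU hypervisor (both assumptions about the environment):

```python
# Minimal sketch: query a hypervisor (here KVM/QEMU via libvirt) for its VMs.
# Assumes the libvirt-python package and a running libvirtd on the host.
import libvirt

conn = libvirt.open("qemu:///system")      # connect to the local hypervisor
try:
    print("hypervisor host:", conn.getHostname())
    for dom in conn.listAllDomains():      # every defined VM (domain)
        state = "running" if dom.isActive() else "shut off"
        print(f"  {dom.name():20s} {state}")
finally:
    conn.close()
```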
1. Parallel Computing
Parallel computing involves multiple processors working on the same task simultaneously by breaking it into smaller sub-tasks. These processors are tightly coupled and share memory.
2. Distributed Computing
Distributed computing involves multiple computers (nodes) working together over a network to complete a task. Each node has its own private memory, and communication occurs via message passing.
Use case: parallel computing is typical of HPC, AI, and simulations, while distributed computing underpins cloud computing, big data, and IoT.
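The distinction can be sketched in a few lines: shared memory lets workers update one common value, while message passing sends explicit messages between processes that keep their memory private. The worker functions below are illustrative assumptions:

```python
# Minimal sketch contrasting shared memory (parallel) with message passing (distributed).
import multiprocessing as mp

def add_shared(counter, n):
    # Parallel-style: workers update the SAME memory location.
    with counter.get_lock():
        counter.value += n

def worker_msg(conn, n):
    # Distributed-style: the worker has private memory and sends a message back.
    conn.send(n * n)
    conn.close()

if __name__ == "__main__":
    # Shared memory between processes
    counter = mp.Value("i", 0)
    procs = [mp.Process(target=add_shared, args=(counter, k)) for k in (1, 2, 3)]
    for p in procs: p.start()
    for p in procs: p.join()
    print("shared-memory result:", counter.value)          # 6

    # Message passing over a pipe (stand-in for a network link)
    parent, child = mp.Pipe()
    p = mp.Process(target=worker_msg, args=(child, 5))
    p.start()
    print("message-passing result:", parent.recv())        # 25
    p.join()
```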
What is SOA? Explain the layered architecture for web services and grids.
1. Service-Oriented Architecture (SOA)
SOA applies to web services, grid computing, and cloud computing. It provides a framework for designing distributed applications where services interact through standard protocols like SOAP, REST, and XML (a minimal REST sketch follows the list below).
Characteristics of SOA:
Loose Coupling: Services operate independently and can be updated without affecting other services.
SOA has evolved to support grids, clouds, and inter-cloud computing, where services include:
Compute services
Storage services
Discovery services.
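Because SOA services expose themselves over standard protocols, a client can consume a service without knowing how it is implemented. The sketch below calls a hypothetical compute service over REST using only the Python standard library; the endpoint URL and its JSON contract are assumptions, not a real API:

```python
# Minimal sketch of consuming a loosely coupled SOA service over REST.
# The endpoint URL and its JSON schema are illustrative assumptions.
import json
import urllib.request

def call_compute_service(numbers):
    """Send a JSON request to a hypothetical compute service and return its reply."""
    payload = json.dumps({"operation": "sum", "values": numbers}).encode("utf-8")
    req = urllib.request.Request(
        "http://example.org/api/compute",          # assumed service endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:      # standard HTTP transport
        return json.loads(resp.read())

# The client depends only on the interface (URL plus JSON schema), not on the
# service's implementation language or location: loose coupling in practice.
```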
2. Layered Architecture for Web Services & Grids
The layered architecture for web services and grids builds on the OSI model, adding service-oriented layers on top.
1. Service Interfaces:
o Defines how services communicate (e.g., WSDL, Java methods, CORBA IDL); a small sketch of an interface separated from its implementation follows this list.
2. Communication Layer:
o Handles message exchange between services (e.g., SOAP, RMI, IIOP).
3. Middleware Layer:
o Provides higher-level capabilities such as brokering, routing, and workflow.
4. Service Management:
o Covers service discovery, monitoring, and fault tolerance.
5. Application Services:
o Domain- or application-specific services built on the layers below.
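The service-interface layer can be illustrated by separating a declared interface from its implementation, comparable in spirit to a WSDL contract. The StorageService interface and in-memory implementation below are hypothetical examples:

```python
# Minimal sketch of the service-interface idea: callers depend on the declared
# interface, not on any particular implementation behind it.
from abc import ABC, abstractmethod

class StorageService(ABC):
    """Hypothetical service interface: only the operations are specified."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStorage(StorageService):
    """One possible implementation; a grid or cloud provider could supply another."""

    def __init__(self):
        self._store = {}

    def put(self, key: str, data: bytes) -> None:
        self._store[key] = data

    def get(self, key: str) -> bytes:
        return self._store[key]

service: StorageService = InMemoryStorage()   # clients see only the interface
service.put("report", b"results")
print(service.get("report"))
```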