## Operating-System Structures
- **UNIX Structure**
- Two main parts: System programs and kernel
- Kernel manages core functions below system-call interface
- **Microkernel Structure**
- Minimal kernel design
- Most functionality in user space
- Uses message passing for communication
- Benefits: Extensibility, portability, reliability
- Drawback: Performance overhead
==============note 3==============
## Process States
- New: Process being created
- Running: Instructions being executed
- Waiting: Process waiting for event
- Ready: Process waiting for processor assignment
- Terminated: Process finished execution
## Process Scheduling
- Managed by process scheduler to maximize CPU usage
- Maintains different queues:
- Job queue: All processes in system
- Ready queue: Processes in memory ready to execute
- Device queues: Processes waiting for I/O
## Schedulers
- Long-term scheduler (job scheduler):
- Selects processes for ready queue
- Controls degree of multiprogramming
- Runs infrequently
- Short-term scheduler (CPU scheduler):
- Selects next process to execute
- Runs very frequently
- Medium-term scheduler:
- Handles swapping
- Reduces degree of multiprogramming
## Process Termination
- Normal completion (exit())
- Parent termination of child (abort())
- Cascading termination
- Zombie processes: Terminated but parent hasn't called wait()
- Orphan processes: Parent terminated before child
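The relationship between exit(), wait(), and zombies can be made concrete with a short C sketch (assumes a POSIX system; the printed messages are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                /* create a child process */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                    /* child: do some work, then terminate normally */
        printf("child %d exiting\n", (int)getpid());
        exit(0);                       /* normal completion */
    }
    int status;
    waitpid(pid, &status, 0);          /* parent reaps the child; without this call the
                                          terminated child would linger as a zombie */
    if (WIFEXITED(status))
        printf("parent reaped child %d, exit status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}
```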
## Interprocess Communication
1. Shared Memory
- Processes share a region of memory and exchange data by reading and writing it
2. Message Passing
- Processes exchange messages
- Can be direct or indirect communication
- Synchronous (blocking) or asynchronous (non-blocking)
- Uses mailboxes or ports for indirect communication
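A minimal sketch of blocking (synchronous) message passing between related processes, using a POSIX pipe rather than a full mailbox/port facility:

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                 /* child: sender */
        close(fd[0]);                  /* close unused read end */
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);   /* blocking send */
        close(fd[1]);
        return 0;
    }
    /* parent: receiver */
    close(fd[1]);                      /* close unused write end */
    char buf[64];
    read(fd[0], buf, sizeof buf);      /* blocking receive */
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);                        /* reap the child */
    return 0;
}
```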
## Context Switching
- Saves state of old process and loads state of new process
- Represents system overhead
- Time depends on hardware support and OS complexity
This comprehensive overview covers the fundamental concepts of processes in operating systems, their
management, scheduling, creation, termination, and communication mechanisms, as presented in Chapter
3 of the operating systems textbook.
==============note 4==============
## Background
- Cooperating processes can affect or be affected by other processes in the system
- Processes can share logical address space or share data through files/messages
- Concurrent access to shared data may result in data inconsistency
- Mechanisms needed to ensure orderly execution of cooperating processes
## Peterson's Solution
- Limited to two processes
- Uses shared variables:
- turn: Indicates whose turn it is
- flag array: Indicates if process is ready to enter critical section
- Provides mutual exclusion, but is not guaranteed on modern architectures, where compilers and processors may reorder loads and stores (see the sketch below)
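A sketch of Peterson's algorithm for two threads (i = 0 or 1); C11 sequentially consistent atomics are used here as one way to approximate the ordering the classic algorithm assumes:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Shared state for two threads. With plain loads and stores, modern
   compilers/CPUs may reorder these accesses and break the algorithm. */
static atomic_bool flag[2];        /* flag[i]: thread i wants to enter */
static atomic_int  turn;           /* whose turn it is to defer to */
static int shared_counter = 0;     /* protected by the critical section */

static void enter_region(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);  /* announce intent */
    atomic_store(&turn, j);        /* give the other thread priority */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                          /* busy wait */
}

static void leave_region(int i) {
    atomic_store(&flag[i], false); /* no longer interested */
}

static void *worker(void *arg) {
    int i = *(int *)arg;
    for (int n = 0; n < 100000; n++) {
        enter_region(i);
        shared_counter++;          /* critical section */
        leave_region(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = { 0, 1 };
    pthread_create(&t[0], NULL, worker, &id[0]);
    pthread_create(&t[1], NULL, worker, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("counter = %d (expected 200000)\n", shared_counter);
    return 0;
}
```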
## Semaphores
- Synchronization tool that can be implemented without busy waiting
- Two atomic operations:
- wait() - decrements semaphore
- signal() - increments semaphore
- Types:
- Counting semaphore: Unrestricted domain
- Binary semaphore: Values 0 or 1 only
- Implementation includes waiting queue for blocked processes
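A minimal sketch using POSIX unnamed semaphores, where sem_wait()/sem_post() play the roles of wait()/signal() and a blocked thread sleeps instead of busy waiting (thread and iteration counts are arbitrary):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                        /* binary semaphore guarding the shared counter */
int shared_counter = 0;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);           /* wait(): decrement, block if value is 0 */
        shared_counter++;           /* critical section */
        sem_post(&mutex);           /* signal(): increment, wake a blocked waiter */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);         /* initial value 1 -> binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);   /* expect 200000 */
    sem_destroy(&mutex);
    return 0;
}
```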
==============note 5==============
## Basic Concepts
- CPU scheduling is fundamental for multiprogrammed operating systems
- Process execution alternates between CPU execution and I/O wait (CPU-I/O Burst Cycle)
- Maximum CPU utilization is achieved through multiprogramming
- CPU burst distribution is a key consideration in scheduling
### Dispatcher
- Gives CPU control to selected process
- Functions include:
- Context switching
- Switching to user mode
- Program restart at proper location
- Dispatch latency: Time to stop one process and start another
## Scheduling Algorithms
==============note 6==============
#### Fragmentation
- External: Total space exists but not contiguous
- Internal: Allocated space slightly larger than needed
- Compaction can reduce external fragmentation
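For example (illustrative numbers): with fixed 4 KB allocation units, a 72,766-byte request is rounded up to 73,728 bytes (18 units), so 962 bytes are wasted as internal fragmentation.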
### Paging
- Divides physical memory into fixed-size frames
- Logical memory divided into pages of same size
- Page table translates logical to physical addresses
- Uses page number and page offset for address translation
- Translation Look-aside Buffer (TLB) speeds up address translation
- Protection implemented through valid-invalid bits
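The page-number/offset split can be shown with a small C sketch; the 4 KB page size and the toy page table are assumptions for illustration (real translation is done by the MMU with help from the TLB):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u                    /* hypothetical 4 KB pages */
#define OFFSET_BITS 12                       /* log2(PAGE_SIZE) */

/* Toy page table: page_table[p] holds the frame number for page p. */
static uint32_t page_table[] = { 5, 9, 6, 7 };

uint32_t translate(uint32_t logical) {
    uint32_t page   = logical >> OFFSET_BITS;        /* page number */
    uint32_t offset = logical & (PAGE_SIZE - 1);     /* page offset */
    uint32_t frame  = page_table[page];              /* page-table lookup */
    return (frame << OFFSET_BITS) | offset;          /* physical address */
}

int main(void) {
    uint32_t logical = 2 * PAGE_SIZE + 100;          /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}
```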
### Segmentation
- Supports user view of memory
- Programs divided into logical units (segments)
- Segment table contains:
- Base address
- Limit
- Protection bits
- Logical address consists of segment number and offset
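A sketch of a segment-table lookup with the limit (protection) check; the segment bases and limits below are made-up example values:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct segment {          /* one segment-table entry */
    uint32_t base;        /* starting physical address of the segment */
    uint32_t limit;       /* length of the segment in bytes */
};

static struct segment segment_table[] = {
    { 1400, 1000 },       /* segment 0 */
    { 6300,  400 },       /* segment 1 */
    { 4300, 1100 },       /* segment 2 */
};

uint32_t translate(uint32_t seg, uint32_t offset) {
    if (offset >= segment_table[seg].limit) {        /* protection check */
        fprintf(stderr, "trap: segment %u, offset %u beyond limit\n", seg, offset);
        exit(1);
    }
    return segment_table[seg].base + offset;         /* physical address */
}

int main(void) {
    printf("(2, 53) -> %u\n", translate(2, 53));     /* 4300 + 53 = 4353 */
    translate(1, 500);                               /* offset beyond limit of 400: traps */
    return 0;
}
```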
This organization of memory management techniques provides the foundation for modern operating
systems to efficiently manage system memory while ensuring protection and optimal performance.
==============note 7==============
## Directory Structure
### Organization
- Collection of nodes containing file information
- Stored on disk with backup on tapes
- Directory operations:
- Search, create, delete files
- List directory contents
- Rename files
- Traverse file system
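Listing a directory's contents, one of the operations above, can be sketched with the POSIX dirent API (error handling kept minimal):

```c
#include <dirent.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    const char *path = (argc > 1) ? argv[1] : ".";   /* directory to list */
    DIR *dir = opendir(path);
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL)           /* iterate over directory entries */
        printf("%s\n", entry->d_name);
    closedir(dir);
    return 0;
}
```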
1. Single-Level Directory
- One directory for all users
- Naming and grouping become problems as the number of files grows
2. Two-Level Directory
- Separate directory per user
- Supports path names
- Efficient searching
3. Tree-Structured Directory
- Hierarchical organization
- Supports grouping
- Uses working directory concept
- Absolute/relative path names
==============note 8==============
## Core Concepts
- I/O management is a crucial component of operating system design and operation
- Devices connect through ports, buses, and device controllers
- Device drivers encapsulate device details and provide uniform interface
- Wide variety of I/O devices: storage, transmission, and human-interface devices
### Polling
1. Host continuously checks device busy bit
2. Host sets read/write operations
3. Host signals command ready
4. Controller executes transfer
5. Controller clears status bits upon completion
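In outline, a polled write might look like the sketch below; the register addresses and bit layout are hypothetical, since every controller defines its own:

```c
#include <stdint.h>

/* Hypothetical memory-mapped controller registers. */
#define STATUS_REG  ((volatile uint32_t *)0x40001000u)
#define DATA_REG    ((volatile uint32_t *)0x40001004u)
#define COMMAND_REG ((volatile uint32_t *)0x40001008u)

#define STATUS_BUSY   0x1u    /* controller is busy */
#define COMMAND_READY 0x1u    /* host has a command for the controller */
#define COMMAND_WRITE 0x2u    /* the command is a write */

void polled_write(uint32_t byte_out) {
    while (*STATUS_REG & STATUS_BUSY)        /* 1. busy-wait on the busy bit */
        ;
    *DATA_REG = byte_out;                    /* 2. place data for the write */
    *COMMAND_REG = COMMAND_WRITE;            /* 2. select the operation */
    *COMMAND_REG |= COMMAND_READY;           /* 3. signal command ready */
    while (*STATUS_REG & STATUS_BUSY)        /* 4-5. controller transfers, then clears busy */
        ;
}
```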
### Interrupts
- More efficient than polling for infrequent events
- CPU checks interrupt-request line after each instruction
- Includes interrupt handler and vector for proper routing
- Supports priority-based processing
- Also used for exceptions and system calls
- Supports concurrent processing in multi-CPU systems
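The vector-based routing can be modeled in software as a table of handler pointers indexed by interrupt number (a simplified illustration, not any particular architecture's layout):

```c
#include <stdio.h>

#define NUM_VECTORS 256

typedef void (*interrupt_handler)(void);

/* Interrupt vector: one handler pointer per interrupt number. */
static interrupt_handler vector_table[NUM_VECTORS];

static void keyboard_handler(void) { printf("keyboard interrupt serviced\n"); }
static void timer_handler(void)    { printf("timer interrupt serviced\n"); }

/* Called (conceptually) when the interrupt-request line is raised. */
void dispatch_interrupt(int irq) {
    if (irq >= 0 && irq < NUM_VECTORS && vector_table[irq] != NULL)
        vector_table[irq]();                 /* jump through the vector */
}

int main(void) {
    vector_table[1]  = keyboard_handler;     /* register handlers (numbers are arbitrary) */
    vector_table[32] = timer_handler;
    dispatch_interrupt(32);                  /* simulate a timer interrupt */
    dispatch_interrupt(1);                   /* simulate a keyboard interrupt */
    return 0;
}
```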
## STREAMS
- Full-duplex communication channel in Unix System V
- Components include:
- Stream head for user process interface
- Driver end for device interface
- Optional intermediate modules
- Uses message passing between queues
- Supports flow control
- Combines asynchronous internal operation with synchronous user interface
This comprehensive overview of I/O systems demonstrates the complexity and importance of I/O
management in modern operating systems, from hardware interfaces to software abstractions and
performance optimization strategies.
==============note 9==============
## Core Concepts
- Protection refers to mechanisms controlling access of processes/users to OS-defined resources
- Security focuses on defending against internal/external attacks like denial-of-service, worms, viruses
- Systems use user IDs and group IDs to manage access control
- Privilege escalation allows users to temporarily gain increased rights
## Access Matrix
- Rows represent domains
- Columns represent objects
- Matrix entries show allowed operations
- Special access rights include:
- owner rights
- copy operations
- control rights
- transfer rights
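One direct, if space-hungry, representation is a 2-D array of rights bitmasks indexed by domain and object; the domains, objects, and rights below are illustrative:

```c
#include <stdbool.h>
#include <stdio.h>

enum { READ = 1, WRITE = 2, EXECUTE = 4 };   /* rights encoded as bit flags */

#define NUM_DOMAINS 3
#define NUM_OBJECTS 4

/* access_matrix[d][o] holds the rights domain d has over object o. */
static int access_matrix[NUM_DOMAINS][NUM_OBJECTS] = {
    { READ,         0,              READ | WRITE, 0       },  /* D0 */
    { 0,            READ | EXECUTE, 0,            WRITE   },  /* D1 */
    { READ | WRITE, 0,              0,            EXECUTE },  /* D2 */
};

bool allowed(int domain, int object, int right) {
    return (access_matrix[domain][object] & right) != 0;
}

int main(void) {
    printf("D0 may write O2: %s\n", allowed(0, 2, WRITE) ? "yes" : "no");
    printf("D1 may read  O0: %s\n", allowed(1, 0, READ)  ? "yes" : "no");
    return 0;
}
```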
## Implementation of Access Matrix
1. Global Table
- Single table of (domain, object, rights-set) triples
- Simple, but the table can be very large and cannot exploit groupings
2. Access Lists
- Per-object lists of domain, rights-set pairs
- Easily extendable with default permissions
3. Capability Lists
- Domain-based lists of objects with allowed operations
- Capabilities protected and managed by OS
- Like secure pointers
4. Lock-key
- Objects have unique bit pattern locks
- Domains have matching keys
- Access granted when keys match locks
## Language-Based Protection
- Java 2 protection through Java Virtual Machine
- Classes assigned protection domains on loading
- Stack inspection ensures proper privilege levels
- Allows high-level policy description
- Software enforcement when hardware support unavailable
==============note 11==============
## Core Concepts
- Virtual machines (VMs) abstract hardware of a single computer into multiple execution environments
- Key components include:
- Host: The underlying hardware system
- Virtual Machine Manager (VMM) or hypervisor: Creates and manages VMs
- Guest: Usually an operating system running in the virtual environment
## Types of Hypervisors
1. Type 0 Hypervisors
- Hardware-based solutions implemented in firmware
- Examples: IBM LPARs, Oracle LDOMs
- Each guest has dedicated hardware
- Can support virtualization-within-virtualization
2. Type 1 Hypervisors
- Operating system-like software built for virtualization
- Examples: VMware ESX, Citrix XenServer
- Common in corporate datacenters
- Can include general-purpose OS functionality (like Windows Server with Hyper-V)
3. Type 2 Hypervisors
- Applications running on standard operating systems
- Examples: VMware Workstation, Parallels Desktop, Oracle VirtualBox
- Less OS involvement in virtualization
- Suitable for personal use and testing
## Implementation Methods
### Trap-and-Emulate
- Guest executes in user mode
- Privileged instructions cause traps to VMM
- VMM executes operations on behalf of guest
- Performance impact on kernel mode operations
## Memory Management
- Uses nested page tables (NPTs) for memory virtualization
- VMware ESX employs three methods:
1. Double-paging
2. Balloon memory management
3. Memory deduplication
## Advanced Features
## Benefits
- System protection and isolation
- Snapshot and restore capabilities
- Resource consolidation
- Multiple OS support
- Templating for rapid deployment
- Cloud computing enablement
## Special Implementations
==============note 12==============
## Core Concepts
- A distributed system consists of loosely coupled processors connected by a communications network
- Processors are referred to as nodes, computers, machines, or hosts
- Sites represent processor locations
- Servers provide resources that client nodes at different sites want to use
## Network Structure
## Communication Structure
- Key design considerations:
1. Naming and name resolution
2. Routing strategies
3. Connection strategies
4. Contention management
## TCP/IP Implementation
- Requires both IP and MAC addresses for communication
- Uses DNS for IP address resolution
- Employs ARP for MAC address mapping
- Supports inter-network routing through routers
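DNS name resolution is available to programs through getaddrinfo(); a minimal IPv4-only sketch (the hostname is just an example):

```c
#include <arpa/inet.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;                    /* IPv4 only, for simplicity */
    hints.ai_socktype = SOCK_STREAM;

    /* Resolve a host name to IP addresses using the system resolver (DNS). */
    int err = getaddrinfo("www.example.com", NULL, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        char ip[INET_ADDRSTRLEN];
        struct sockaddr_in *addr = (struct sockaddr_in *)p->ai_addr;
        inet_ntop(AF_INET, &addr->sin_addr, ip, sizeof ip);
        printf("resolved to %s\n", ip);
    }
    freeaddrinfo(res);
    return 0;
}
```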
## System Robustness
### Reconfiguration
- System must adapt to failures
- Broadcasts failure information to all sites
- Updates system when failed components recover