
VIETNAM NATIONAL UNIVERSITY, HO CHI MINH CITY

UNIVERSITY OF TECHNOLOGY
FACULTY OF COMPUTER SCIENCE AND ENGINEERING

Course: Operating Systems

Assignment Report:
Simple Operating System

Instructor: Pham Hoang Anh


Students: Cao Tiến Đạt 2052434
Phan Bùi Tấn Minh 1852578

HO CHI MINH CITY, November 2021



Contents

1 Introduction 2
1.1 An overview 2
1.2 Processes 3

2 Scheduler 4
2.1 Operation of scheduler 4
2.2 Question - Priority Feedback Queue 4
2.3 Implementation 5
2.4 Gantt diagram 6
2.5 Result 8

3 Memory Management 9
3.1 Background 9
3.2 Question - Segmentation with Paging 10
3.2.1 Segmented Paging 10
3.2.2 Paged Segmentation 10
3.2.3 Conclusion 11
3.3 Implementation 11
3.4 Result 12

4 Put It All Together 13
4.1 Organization of the simple OS 13
4.2 Question 13
4.3 Result 14

5 Change log 22

6 Workload 23


1 Introduction
1.1 An overview
The assignment is about simulating a simple operating system to help students understand the fundamental
concepts of scheduling, synchronization and memory management. Figure 1 shows the overall architecture of
the “operating system” we are going to implement. Generally, the OS has to manage two “virtual” resources,
CPU(s) and RAM, using two core components:
• Scheduler (and Dispatcher): determines which process is allowed to run on which CPU.
• Virtual memory engine (VME): isolates the memory space of each process from the others. That is, although
RAM is shared by multiple processes, each process is unaware of the existence of the others. This is done
by giving each process its own virtual memory space; the virtual memory engine maps and translates the
virtual addresses used by processes to the corresponding physical addresses.

Figure 1: The general view of key modules in this assignment

Through those modules, the OS allows multiple processes created by users to share and use the “virtual”
computing resources. Therefore, in this assignment, we focus on implementing the scheduler/dispatcher and the
virtual memory engine.


1.2 Processes
We are going to build a multitasking OS which lets multiple processes run simultaneously so it is worth to spend
some space explaining the organization of processes. The OS manages processes through their PCB described
as follows:

// From include/common.h
struct pcb_t {
    uint32_t pid;
    uint32_t priority;
    struct code_seg_t * code;
    addr_t regs[10];
    uint32_t pc;
    struct seg_table_t * seg_table;
    uint32_t bp;
};
The meaning of the fields in the struct:

• pid: the process's PID.
• priority: the process's priority; the scheduler lets processes with higher priority run before those with
lower priority.
• code: the text segment of the process (to simplify the simulation, we do not put the text segment in RAM).
• regs: registers; each process can use up to 10 registers, numbered from 0 to 9.
• pc: the current position of the program counter.
• seg_table: the page table used to translate virtual addresses to physical addresses.
• bp: break pointer, used to manage the heap segment.


There are five instructions a process can perform (a sketch of how these might be represented follows the list):
• CALC: do some calculation using the CPU.
• ALLOC: allocate a chunk of bytes in main memory (RAM).
• FREE: free allocated memory.
• READ: read a byte from memory.
• WRITE: write a register value to memory.
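To make the relationship between the PCB and the text segment concrete, below is a minimal sketch of how the
code segment and these instructions could be represented in C. The enum and field names are illustrative
assumptions and may differ from the declarations actually provided in include/common.h.

#include <stdint.h>

/* Sketch only: names and fields are assumptions, not the provided header. */
enum ins_opcode_t {
    CALC,   /* do some calculation on the CPU              */
    ALLOC,  /* allocate a chunk of bytes in RAM            */
    FREE,   /* free a previously allocated region          */
    READ,   /* read a byte from memory into a register     */
    WRITE   /* write a register value to memory            */
};

struct inst_t {
    enum ins_opcode_t opcode;
    uint32_t arg_0;   /* e.g. size for ALLOC, address for READ/WRITE */
    uint32_t arg_1;   /* e.g. source/destination register            */
    uint32_t arg_2;
};

struct code_seg_t {
    struct inst_t *text;  /* array of instructions (kept outside RAM) */
    uint32_t size;        /* number of instructions                   */
};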


2 Scheduler
2.1 Operation of scheduler:
- For each new program, the loader creates a new process and assigns a new PCB to it.

- The loader then reads and copies the content of the program into the text segment of the new process.

- Finally, the PCB of the process is pushed into the ready queue, where it waits for a CPU.

- The CPU runs processes in round-robin style. Each process is allowed to run for up to a given period of time
(the time slice).

- After that, the CPU is forced to pause the process and push it into the run queue. The CPU then picks up
another process from the ready queue and continues running.

- Since the CPU does not put a process back into the ready queue after pausing it, the ready queue will sooner
or later become empty.

- When this happens, the scheduler moves all processes waiting in the run queue back to the ready queue so
the CPU can continue running the paused processes again.

Figure 2: The operation of scheduler in the assignment
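The behaviour described above can be summarized by a per-CPU loop along the following lines. This is only a
simplified sketch: get_proc() is the function shown in Section 2.3, while the remaining helpers (put_proc(),
finished(), run_one_instruction(), all_processes_loaded()) and the time_slice variable are assumptions
introduced for illustration, not necessarily the code in os.c.

/* Simplified per-CPU scheduling loop (illustrative sketch only). */
struct pcb_t;
extern struct pcb_t *get_proc(void);                  /* take the next PCB from the ready queue */
extern void put_proc(struct pcb_t *proc);             /* push a paused PCB to the run queue     */
extern int finished(struct pcb_t *proc);              /* hypothetical: all instructions done?   */
extern void run_one_instruction(struct pcb_t *proc);  /* hypothetical: execute one instruction  */
extern int all_processes_loaded(void);                /* hypothetical: is the loader finished?  */
extern int time_slice;

void cpu_loop(int cpu_id)
{
    struct pcb_t *proc = NULL;
    (void)cpu_id;
    while (1) {
        proc = get_proc();            /* may refill the ready queue from the run queue */
        if (proc == NULL) {
            if (all_processes_loaded())
                break;                /* nothing left to run: this CPU stops           */
            continue;                 /* otherwise wait for the loader                 */
        }
        int t;
        for (t = 0; t < time_slice && !finished(proc); t++)
            run_one_instruction(proc);
        if (!finished(proc))
            put_proc(proc);           /* pause it: push it back to the run queue       */
        /* finished processes are simply dropped                                       */
    }
}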

2.2 Question - Priority Feedback Queue


What is the advantage of using priority feedback queue in comparison with other scheduling algorithms you have
learned such as FIFO, Round Robin? Explain clearly your answer.

The Priority Feedback Queue (PFQ) algorithm is used to determine which process should be executed when a
CPU becomes available. It consists of two queues: ready_queue and run_queue.

• ready_queue: this queue contains processes with a higher execution priority than those in run_queue.
After the CPU is forced to pause a process and push it to run_queue, the CPU looks for its next process
in this queue.
• run_queue: this queue contains processes that are waiting to continue executing after being paused by the
CPU before completing their task. Processes in this queue can only continue executing when ready_queue
is empty; they are then moved back to ready_queue to be run by the CPU.


Advantages:
• Compared to SJF, SRTF, FCFS or Round Robin over FCFS, it is more flexible because it uses priority to let
urgent processes run first.
Actually, FCFS and SJF can be seen as special cases of priority scheduling, with FCFS using arrival time as
the priority and SJF using burst time as the priority. Priority scheduling is therefore a generalization of SJF
and FCFS, and it is more flexible in the sense that the programmer is allowed to decide a process's priority
without having to follow a strict rule.

• Compared to priority scheduling with one or multiple levels but without feedback, it prevents starvation by
applying an aging-like idea: processes can move between the queues, so a process that has waited too long
in a lower-priority queue can be moved to a higher one.
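To illustrate the aging idea mentioned above (note that the assignment's scheduler achieves its feedback
differently, by recycling the run queue), a hedged sketch of an aging sweep could look like the following. The
wait_time field and the AGING_THRESHOLD constant are assumptions for this sketch, and we assume, as in the
dequeue() comparison shown later, that a smaller numeric value means a higher priority.

/* Illustrative aging sweep (not part of the assignment's code).
 * Every tick, boost processes that have waited too long so they
 * cannot starve behind a stream of higher-priority arrivals.       */
#define AGING_THRESHOLD 8          /* assumed number of ticks before a boost */

void age_waiting_processes(struct queue_t *q)
{
    int i;
    for (i = 0; i < q->size; i++) {
        q->proc[i]->wait_time++;                     /* hypothetical per-PCB counter    */
        if (q->proc[i]->wait_time >= AGING_THRESHOLD && q->proc[i]->priority > 0) {
            q->proc[i]->priority--;                  /* smaller value = higher priority */
            q->proc[i]->wait_time = 0;
        }
    }
}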

2.3 Implementation:
The enqueue() and dequeue() functions put a new PCB into a queue and take the PCB with the highest priority
out of the queue, respectively.
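Both functions operate on a fixed-capacity, array-backed queue. A plausible definition, consistent with the bound
check and the proc[]/size fields used below, is sketched here; the exact declaration in include/queue.h may differ.

#define MAX_QUEUE_SIZE 10                    /* capacity assumed by the bound check */

struct queue_t {
    struct pcb_t *proc[MAX_QUEUE_SIZE];      /* PCBs currently waiting in the queue */
    int size;                                /* number of PCBs held                 */
};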
Implementation of enqueue():

void enqueue(struct queue_t *q, struct pcb_t *proc)
{
/* TODO: put a new process to queue [q] */
if (q->size >= MAX_QUEUE_SIZE)
{
return;
}
q->proc[q->size] = proc;
q->size += 1;
}
Implementation of dequeue():

struct pcb_t *dequeue(struct queue_t *q)
{
/* TODO: return a pcb whose prioprity is the highest
* in the queue [q] and remember to remove it from q
* */
if (q->size <= 0)
{
return NULL;
}
int i = 0, j;
for (j = 1; j < q->size; j++)
{
if (q->proc[j]->priority < q->proc[i]->priority)
{
i = j;
}
}
struct pcb_t *result = q->proc[i];
for (j = i + 1; j < q->size; j++)
{
q->proc[j - 1] = q->proc[j];
}
q->size -= 1;
return result;
}


The get_proc() function gets the PCB of a process waiting in the ready queue.


struct pcb_t *get_proc(void)
{
struct pcb_t *proc = NULL;
/*TODO: get a process from [ready_queue]. If ready queue
* is empty, push all processes in [run_queue] back to
* [ready_queue] and return the highest priority one.
* Remember to use lock to protect the queue.
* */
pthread_mutex_lock(&queue_lock);
if (empty(&ready_queue))
{
// move all processes waiting in run_queue back to ready_queue
while (!empty(&run_queue))
{
enqueue(&ready_queue, dequeue(&run_queue));
}
}
if (!empty(&ready_queue))
{
proc = dequeue(&ready_queue);
}
pthread_mutex_unlock(&queue_lock);
return proc;
}

2.4 Gantt diagram:


input: sched_0
2 1 2
0 s0
4 s1
=> time slice = 2
Number of CPUs = 1
Number of processes to be run = 2

Process   Burst time   Arrival time   Priority
P1        5            0              12
P2        7            4              20

Gantt diagram

Figure 3: Gantt diagram


input: sched_1
2 1 4
0 s0
4 s1
6 s2
7 s3
=> time slice = 2
Number of CPUs = 1
Number of processes to be run = 4

Process   Burst time   Arrival time   Priority
P1        15           0              12
P2        7            4              20
P3        12           6              20
P4        11           7              7

Gantt diagram

Figure 4: Gantt diagram

• Time slot 0: load process 1 into the ready queue.

• Time slot 1: load process 1 onto the CPU.

• Time slot 3: put process 1 into the run queue; the ready queue is empty, so we move the run queue back and
load process 1 onto the CPU again.

• Time slot 4: load process 2 into the ready queue.

• Time slot 5: move process 1 to the run queue, load process 2 from the ready queue onto the CPU.

• Time slot 6: load process 3 into the ready queue.

• Time slot 7: move process 2 to the run queue, load process 3 from the ready queue onto the CPU. Load s3
(process 4) into the ready queue.

• Time slot 9: put process 3 into the run queue. Load process 4 onto the CPU.

• Time slot 11: put process 4 into the run queue. The ready queue is empty, so we move the run queue back to
the ready queue. Although process 2 and process 3 have the same priority, process 2 was moved to the run
queue before process 3, so we load process 2 onto the CPU.

• Time slot 13: move process 2 to the run queue. Load process 3 onto the CPU.

• Time slot 15: move process 3 to the run queue. Load process 1 onto the CPU.

• Time slot 17: move process 1 to the run queue. Load process 4 onto the CPU.

• Time slot 19: move process 4 to the run queue. The ready queue is empty, so we move the run queue back and
execution continues in the same pattern as before.


2.5 Result:
Running make test_sched, the result of our test is similar to that of the provided output file.

Figure 5: Scheduler run result


3 Memory Management
3.1 Background:
A computer can address more memory than the amount physically installed on the system. This extra memory
is called virtual memory, and it is a section of a hard disk that is set up to emulate the computer's RAM.
Paging is a memory management technique in which the process address space is broken into blocks of the same
size called pages (the size is a power of 2, between 512 bytes and 8192 bytes). The size of a process is measured
in the number of pages.

Segmentation works very similarly to paging, but segments are of variable length, whereas pages are of fixed
size. Segmentation is a memory management technique in which each job is divided into several segments of
different sizes, one for each module that contains pieces performing related functions. Each segment is actually
a different logical address space of the program. When a process is to be executed, its segments are loaded into
non-contiguous memory, though every segment is loaded into a contiguous block of available memory.

By default, the size of the virtual RAM is 1 MB, so we must use 20 bits to represent the address of each of its
bytes. With the segmentation-with-paging mechanism, we use the first 5 bits for the segment index, the next 5 bits
for the page index and the last 10 bits for the offset.
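As a quick sanity check of this split: 10 offset bits give 1 KB pages, and the two 5-bit indices give at most
32 segments of at most 32 pages each, which covers the whole 1 MB space (32 x 32 x 1024 bytes). A hedged
sketch of how the three parts could be extracted from a 20-bit virtual address is shown below; the macro
names are ours and may not match the provided headers.

#include <stdint.h>

typedef uint32_t addr_t;

#define OFFSET_LEN   10   /* 2^10 = 1024-byte pages        */
#define PAGE_LEN      5   /* up to 32 pages per segment    */
#define SEGMENT_LEN   5   /* up to 32 segments per process */

/* Virtual address layout (20 bits): [ segment | page | offset ] */
#define get_offset(addr)     ((addr) & ((1u << OFFSET_LEN) - 1))
#define get_page_index(addr) (((addr) >> OFFSET_LEN) & ((1u << PAGE_LEN) - 1))
#define get_seg_index(addr)  (((addr) >> (OFFSET_LEN + PAGE_LEN)) & ((1u << SEGMENT_LEN) - 1))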

Figure 6 shows how we allocate new memory regions and create new entries in the segment and page tables
inside a process. In particular, for each new page we allocate, we must add a new entry to the page table
according to the page's segment number and page number.

Figure 6: The operation related to virtual memory in the assignment


3.2 Question - Segmentation with Paging


In which system is segmentation with paging used (give an example of at least one system)? Explain clearly the
advantage and disadvantage of segmentation with paging.

Segmentation with paging is the combination of the two techniques, segmentation and paging, to get the
best features out of both. Segmented paging is used, for example, in Linux on x86-32.

There are two popular combined non-contiguous memory allocation techniques: segmented paging and paged
segmentation.

3.2.1 Segmented Paging


Before going into the details of the combined strategy, let us first discuss the problems with paging. Paging
allows jobs and processes to be stored as a discontinuous space in memory; thus, it solves the problem of
external fragmentation.

To implement this technique, we divide the processes into fixed-size blocks called pages, and we also divide
physical memory into fixed-size blocks called frames.

The main limitation of paging is that when the virtual address space is large, the page table takes up a large
amount of actual memory. Programs tend to be large, so their page tables would consume a considerable amount
of memory even though many of the entries would be invalid. To solve this problem, we can use segmentation
along with paging in order to reduce the page table size.

Advantages:
• The main advantage of segmented paging is reduced memory usage. Since it allocates fixed-size pages, it
does not cause external fragmentation, and it makes memory allocation simpler.
• The size of each segment can vary with the number of pages it maps, up to a certain limit. Therefore, a
program can have multiple segments of varying size while still relying on the paging mechanism for
allocating each segment.
• It is more secure, because the virtual space of each process is isolated.
• It allows a process to use more memory than is physically available, by swapping unused pages to the
hard disk.
• It gives a programmer's view of memory along with the advantages of paging, and it reduces external
fragmentation in comparison with pure segmentation.
Disadvantages:
• The main drawback is that external fragmentation can still occur, because of the varying sizes of the page
tables and segment tables in today's systems, and extra hardware is required to handle the additional
translation level.
• The complexity level is much higher than with paging alone.
• It is slower, since both the page table and the page itself must be looked up, or, worse, swapped into a
frame.
• Communication between processes has to be done through message passing (through message queues in
kernel space or network sockets), since each process's memory is isolated, which prevents simple shared
memory.

3.2.2 Paged Segmentation


In segmented paging, not every process has the same number of segments, and the segment tables can be large,
which causes external fragmentation due to the varying segment table sizes. To solve this problem, we use paged
segmentation, which requires the segment table itself to be paged.


Paged segmentation was proposed as a solution to improve memory management. The technique consists of
partitioning the segment table into pages, thereby reducing the size of the segment table.

To be clear, even with segmented paging the page table can still contain a lot of invalid pages. Instead of
combining multi-level paging with segmented paging, the problem of a large page table can also be solved by
applying multi-level paging directly, without segmented paging.

Advantages:

• The main advantage of paged segmentation is eliminating external fragmentation and reducing the page
table size.
• As with segmented paging, the entire segment need not be swapped out.
Disadvantages:

• The problem of internal fragmentation is still not solved completely. There are occasions where internal
fragmentation can occur, but the probability is lower.
• There are delays when we access memory, because the extra level of paging in the first stage adds to the
memory access time.
• The hardware is more complex than for segmented paging.

3.2.3 Conclusion:
Despite the advantages of paging and segmentation, they remain among the leading causes of memory frag-
mentation. To reduce the chances of memory fragmentation, combined memory management techniques were
developed.

3.3 Implementation:
In this part, we implement the functions following the theory above and the instructions in the TODO parts of
the source code (a simplified sketch of the address translation step is given after the list):
1. Find the page table from the segment-level index and the segment table.

2. Translate a virtual address to a physical address.

3. Allocate memory:
• check whether memory is available for the request;
• allocate the memory.

4. Free memory:
• free the physical memory;
• update the virtual memory mapping;
• update the break pointer.
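As referenced above, here is a hedged sketch of steps 1 and 2 (locating a segment's page table, then translating
a virtual address). The two-level table layout and helper names are assumptions for illustration only; the real
seg_table_t and page table structures in the provided headers may differ.

#include <stdint.h>
#include <stddef.h>

typedef uint32_t addr_t;

#define OFFSET_LEN  10
#define PAGE_LEN     5
#define SEGMENT_LEN  5

/* Assumed two-level table layout, for this sketch only. */
struct page_table_t {
    int size;
    struct { addr_t v_index; addr_t p_index; } table[1 << PAGE_LEN];
};

struct seg_table_t {
    int size;
    struct { addr_t v_index; struct page_table_t *pages; } table[1 << SEGMENT_LEN];
};

/* Step 1: find the page table of a segment, or NULL if it does not exist. */
static struct page_table_t *get_page_table(addr_t seg_index, struct seg_table_t *seg_table)
{
    int i;
    for (i = 0; i < seg_table->size; i++)
        if (seg_table->table[i].v_index == seg_index)
            return seg_table->table[i].pages;
    return NULL;
}

/* Step 2: translate a virtual address into a physical address; returns 1 on success. */
static int translate(addr_t virtual_addr, addr_t *physical_addr, struct seg_table_t *seg_table)
{
    addr_t offset   = virtual_addr & ((1u << OFFSET_LEN) - 1);
    addr_t page_idx = (virtual_addr >> OFFSET_LEN) & ((1u << PAGE_LEN) - 1);
    addr_t seg_idx  = virtual_addr >> (OFFSET_LEN + PAGE_LEN);
    struct page_table_t *pages = get_page_table(seg_idx, seg_table);
    int i;

    if (pages == NULL)
        return 0;
    for (i = 0; i < pages->size; i++) {
        if (pages->table[i].v_index == page_idx) {
            *physical_addr = (pages->table[i].p_index << OFFSET_LEN) | offset;
            return 1;
        }
    }
    return 0; /* the page is not mapped */
}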


3.4 Result:
The result of our test is similar to that of the output file.

Figure 7: Memory Management Result


4 Put It All Together


4.1 Organization of the simple OS:
Finally, we combine the scheduler and the Virtual Memory Engine to form a complete OS.
Figure 8 shows the complete organization of the OS. The last remaining task is synchronization.

Figure 8: The model of assignment 2

Since the OS runs on multiple processors, it is possible for shared resources to be accessed concurrently by
more than one process at a time.

4.2 Question:
What will happen if synchronization is not handled in your system? Illustrate the problem with an example if
you have one.

When multiple processes execute concurrently and share system resources, inconsistent results may be produced
if synchronization is not handled.

The following example shows how inconsistent results may be produced when multiple processes execute con-
currently without any synchronization:

Consider:
• One bank account is shared between two users: the husband withdraws money and the wife deposits money.
• Two processes, P1 (withdraw money) and P2 (deposit money), execute concurrently.
• Both processes share a common variable named “Account Balance”.
• Process P1 tries to decrement the value of Account Balance.
• Process P2 tries to increment the value of Account Balance.
Now, when these processes execute concurrently without synchronization, different results may be produced.
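A minimal sketch of this race using POSIX threads is shown below; the variable and function names are ours,
not the assignment's code. Without the mutex, the two read-modify-write updates of account_balance can
interleave, so the printed final balance may differ from run to run; uncommenting the lock/unlock lines
serializes the updates and restores the expected value.

#include <pthread.h>
#include <stdio.h>

static long account_balance = 1000;
/* static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   -- needed for the fix */

static void *husband_withdraw(void *arg) {
    int i;
    (void)arg;
    for (i = 0; i < 100000; i++) {
        /* pthread_mutex_lock(&lock); */
        account_balance -= 1;          /* read-modify-write, not atomic */
        /* pthread_mutex_unlock(&lock); */
    }
    return NULL;
}

static void *wife_recharge(void *arg) {
    int i;
    (void)arg;
    for (i = 0; i < 100000; i++) {
        /* pthread_mutex_lock(&lock); */
        account_balance += 1;
        /* pthread_mutex_unlock(&lock); */
    }
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    pthread_create(&p1, NULL, husband_withdraw, NULL);
    pthread_create(&p2, NULL, wife_recharge, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    /* Expected 1000, but without the mutex the printed value can vary between runs. */
    printf("Final balance: %ld\n", account_balance);
    return 0;
}

Compile with gcc -pthread; only with the mutex enabled is the program guaranteed to always print 1000.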


4.3 Result:
We then complete the files and synchronize the scheduler and the Virtual Memory Engine to form a complete
operating system.
After running the command "make test_all" we get the following results:
------ MEMORY MANAGEMENT TEST 0 ------------------------------------
./mem input/proc/m0
000: 00000-003ff - PID: 01 (idx 000, nxt: 001)
003e8: 15
001: 00400-007ff - PID: 01 (idx 001, nxt: -01)
002: 00800-00bff - PID: 01 (idx 000, nxt: 003)
003: 00c00-00fff - PID: 01 (idx 001, nxt: 004)
004: 01000-013ff - PID: 01 (idx 002, nxt: 005)
005: 01400-017ff - PID: 01 (idx 003, nxt: 006)
006: 01800-01bff - PID: 01 (idx 004, nxt: -01)
014: 03800-03bff - PID: 01 (idx 000, nxt: 015)
03814: 66
015: 03c00-03fff - PID: 01 (idx 001, nxt: -01)
NOTE: Read file output/m0 to verify your result
------ MEMORY MANAGEMENT TEST 1 ------------------------------------
./mem input/proc/m1
NOTE: Read file output/m1 to verify your result (your implementation should print nothing)
------ SCHEDULING TEST 0 -------------------------------------------
./os sched_0
Time slot 0
Loaded a process at input/proc/s0, PID: 1
Time slot 1
CPU 0: Dispatched process 1
Time slot 2
Time slot 3
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 4
Loaded a process at input/proc/s1, PID: 2
Time slot 5
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 6
Time slot 7
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 1
Time slot 8
Time slot 9
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 10
Time slot 11
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 1
Time slot 12
Time slot 13
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 14
Time slot 15
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 1
Time slot 16


Time slot 17
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 18
CPU 0: Processed 2 has finished
CPU 0: Dispatched process 1
Time slot 19
Time slot 20
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 21
Time slot 22
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 23
CPU 0: Processed 1 has finished
CPU 0 stopped

MEMORY CONTENT:
NOTE: Read file output/sched_0 to verify your result
------ SCHEDULING TEST 1 -------------------------------------------
./os sched_1
Time slot 0
Loaded a process at input/proc/s0, PID: 1
Time slot 1
CPU 0: Dispatched process 1
Time slot 2
Time slot 3
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Loaded a process at input/proc/s1, PID: 2
Time slot 4
Time slot 5
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Loaded a process at input/proc/s2, PID: 3
Time slot 6
Time slot 7
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 3
Loaded a process at input/proc/s3, PID: 4
Time slot 8
Time slot 9
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 4
Time slot 10
Time slot 11
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 4
Time slot 12
Time slot 13
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 1
Time slot 14
Time slot 15
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 16


Time slot 17
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 3
Time slot 18
Time slot 19
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 4
Time slot 20
Time slot 21
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 1
Time slot 22
Time slot 23
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 24
Time slot 25
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 3
Time slot 26
Time slot 27
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 4
Time slot 28
Time slot 29
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 1
Time slot 30
Time slot 31
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 32
CPU 0: Processed 2 has finished
CPU 0: Dispatched process 3
Time slot 33
Time slot 34
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 4
Time slot 35
Time slot 36
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 1
Time slot 37
Time slot 38
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 3
Time slot 39
Time slot 40
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 4
Time slot 41
CPU 0: Processed 4 has finished
CPU 0: Dispatched process 1
Time slot 42
Time slot 43
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 3
Time slot 44


Time slot 45
CPU 0: Processed 3 has finished
CPU 0: Dispatched process 1
Time slot 46
CPU 0: Processed 1 has finished
CPU 0 stopped

MEMORY CONTENT:
NOTE: Read file output/sched_1 to verify your result
----- OS TEST 0 ----------------------------------------------------
./os os_0
Time slot 0
Loaded a process at input/proc/p0, PID: 1
CPU 1: Dispatched process 1
Time slot 1
Loaded a process at input/proc/p1, PID: 2
Time slot 2
CPU 0: Dispatched process 2
Time slot 3
Loaded a process at input/proc/p1, PID: 3
Time slot 4
Loaded a process at input/proc/p1, PID: 4
Time slot 5
Time slot 6
CPU 1: Put process 1 to run queue
CPU 1: Dispatched process 3
Time slot 7
Time slot 8
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 4
Time slot 9
Time slot 10
Time slot 11
Time slot 12
CPU 1: Put process 3 to run queue
CPU 1: Dispatched process 1
Time slot 13
Time slot 14
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 2
Time slot 15
Time slot 16
CPU 1: Processed 1 has finished
CPU 1: Dispatched process 3
Time slot 17
Time slot 18
CPU 0: Processed 2 has finished
CPU 0: Dispatched process 4
Time slot 19
Time slot 20
CPU 1: Processed 3 has finished
CPU 1 stopped
Time slot 21
Time slot 22
CPU 0: Processed 4 has finished
CPU 0 stopped

MEMORY CONTENT:


000: 00000-003ff - PID: 03 (idx 000, nxt: 001)
001: 00400-007ff - PID: 03 (idx 001, nxt: 002)
002: 00800-00bff - PID: 03 (idx 002, nxt: 003)
003: 00c00-00fff - PID: 03 (idx 003, nxt: -01)
004: 01000-013ff - PID: 04 (idx 000, nxt: 005)
005: 01400-017ff - PID: 04 (idx 001, nxt: 006)
006: 01800-01bff - PID: 04 (idx 002, nxt: 012)
007: 01c00-01fff - PID: 02 (idx 000, nxt: 008)
008: 02000-023ff - PID: 02 (idx 001, nxt: 009)
009: 02400-027ff - PID: 02 (idx 002, nxt: 010)
025e7: 0a
010: 02800-02bff - PID: 02 (idx 003, nxt: 011)
011: 02c00-02fff - PID: 02 (idx 004, nxt: -01)
012: 03000-033ff - PID: 04 (idx 003, nxt: -01)
014: 03800-03bff - PID: 03 (idx 000, nxt: 015)
015: 03c00-03fff - PID: 03 (idx 001, nxt: 016)
016: 04000-043ff - PID: 03 (idx 002, nxt: 017)
041e7: 0a
017: 04400-047ff - PID: 03 (idx 003, nxt: 018)
018: 04800-04bff - PID: 03 (idx 004, nxt: -01)
027: 06c00-06fff - PID: 02 (idx 000, nxt: 028)
028: 07000-073ff - PID: 02 (idx 001, nxt: 029)
029: 07400-077ff - PID: 02 (idx 002, nxt: 030)
030: 07800-07bff - PID: 02 (idx 003, nxt: -01)
047: 0bc00-0bfff - PID: 01 (idx 000, nxt: -01)
0bc14: 64
057: 0e400-0e7ff - PID: 04 (idx 000, nxt: 058)
058: 0e800-0ebff - PID: 04 (idx 001, nxt: 059)
059: 0ec00-0efff - PID: 04 (idx 002, nxt: 060)
0ede7: 0a
060: 0f000-0f3ff - PID: 04 (idx 003, nxt: 061)
061: 0f400-0f7ff - PID: 04 (idx 004, nxt: -01)
NOTE: Read file output/os_0 to verify your result
----- OS TEST 1 ----------------------------------------------------
./os os_1
Time slot 0
Loaded a process at input/proc/p0, PID: 1
CPU 1: Dispatched process 1
Time slot 1
Time slot 2
Loaded a process at input/proc/s3, PID: 2
CPU 3: Dispatched process 2
Time slot 3
CPU 1: Put process 1 to run queue
CPU 1: Dispatched process 1
Loaded a process at input/proc/m1, PID: 3
CPU 2: Dispatched process 3
Time slot 4
CPU 3: Put process 2 to run queue
CPU 3: Dispatched process 2
CPU 1: Put process 1 to run queue
CPU 1: Dispatched process 1
Time slot 5
Loaded a process at input/proc/s2, PID: 4
CPU 2: Put process 3 to run queue
CPU 0: Dispatched process 4
CPU 2: Dispatched process 3
Time slot 6


CPU 3: Put process 2 to run queue
Time slot 7
CPU 3: Dispatched process 2
CPU 1: Put process 1 to run queue
CPU 1: Dispatched process 1
Loaded a process at input/proc/m0, PID: 5
CPU 2: Put process 3 to run queue
Time slot 8
CPU 2: Dispatched process 5
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 3
Loaded a process at input/proc/p1, PID: 6
CPU 3: Put process 2 to run queue
CPU 3: Dispatched process 6
CPU 1: Put process 1 to run queue
CPU 1: Dispatched process 4
Time slot 9
CPU 2: Put process 5 to run queue
Time slot 10
CPU 2: Dispatched process 1
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 5
Loaded a process at input/proc/s0, PID: 7
CPU 3: Put process 6 to run queue
CPU 3: Dispatched process 2
CPU 1: Put process 4 to run queue
CPU 1: Dispatched process 7
Time slot 11
CPU 2: Processed 1 has finished
Time slot 12
CPU 0: Put process 5 to run queue
CPU 0: Dispatched process 6
CPU 2: Dispatched process 3
CPU 3: Put process 2 to run queue
CPU 3: Dispatched process 4
Time slot 13
CPU 1: Put process 7 to run queue
CPU 1: Dispatched process 5
CPU 2: Processed 3 has finished
CPU 2: Dispatched process 2
Time slot 14
CPU 0: Put process 6 to run queue
CPU 0: Dispatched process 7
CPU 3: Put process 4 to run queue
CPU 3: Dispatched process 6
CPU 1: Put process 5 to run queue
CPU 1: Dispatched process 4
Time slot 15
Loaded a process at input/proc/s1, PID: 8
CPU 2: Put process 2 to run queue
CPU 2: Dispatched process 8
Time slot 16
CPU 0: Put process 7 to run queue
CPU 0: Dispatched process 5
CPU 3: Put process 6 to run queue
CPU 0: Processed 5 has finished
CPU 0: Dispatched process 7
Time slot 17


CPU 3: Dispatched process 2
CPU 1: Put process 4 to run queue
CPU 1: Dispatched process 6
CPU 3: Processed 2 has finished
Time slot 18
CPU 3: Dispatched process 4
CPU 2: Put process 8 to run queue
CPU 2: Dispatched process 8
CPU 1: Put process 6 to run queue
CPU 1: Dispatched process 6
Time slot 19
CPU 0: Put process 7 to run queue
CPU 0: Dispatched process 7
CPU 3: Put process 4 to run queue
Time slot 20
CPU 3: Dispatched process 4
CPU 2: Put process 8 to run queue
CPU 2: Dispatched process 8
CPU 1: Processed 6 has finished
CPU 1 stopped
Time slot 21
CPU 0: Put process 7 to run queue
CPU 0: Dispatched process 7
CPU 3: Processed 4 has finished
Time slot 22
CPU 2: Put process 8 to run queue
CPU 2: Dispatched process 8
CPU 3 stopped
Time slot 23
CPU 2: Processed 8 has finished
CPU 2 stopped
CPU 0: Put process 7 to run queue
CPU 0: Dispatched process 7
Time slot 24
Time slot 25
CPU 0: Put process 7 to run queue
CPU 0: Dispatched process 7
Time slot 26
Time slot 27
CPU 0: Put process 7 to run queue
CPU 0: Dispatched process 7
Time slot 28
CPU 0: Processed 7 has finished
CPU 0 stopped

MEMORY CONTENT:
000: 00000-003ff - PID: 06 (idx 000, nxt: 001)
001: 00400-007ff - PID: 06 (idx 001, nxt: 031)
002: 00800-00bff - PID: 05 (idx 000, nxt: 003)
00be8: 15
003: 00c00-00fff - PID: 05 (idx 001, nxt: -01)
004: 01000-013ff - PID: 05 (idx 000, nxt: 005)
005: 01400-017ff - PID: 05 (idx 001, nxt: 006)
006: 01800-01bff - PID: 05 (idx 002, nxt: 007)
007: 01c00-01fff - PID: 05 (idx 003, nxt: 008)
008: 02000-023ff - PID: 05 (idx 004, nxt: -01)
013: 03400-037ff - PID: 06 (idx 000, nxt: 014)
014: 03800-03bff - PID: 06 (idx 001, nxt: 015)


015: 03c00-03fff - PID: 06 (idx 002, nxt: 016)
016: 04000-043ff - PID: 06 (idx 003, nxt: -01)
021: 05400-057ff - PID: 01 (idx 000, nxt: -01)
05414: 64
029: 07400-077ff - PID: 05 (idx 000, nxt: 030)
07414: 66
030: 07800-07bff - PID: 05 (idx 001, nxt: -01)
031: 07c00-07fff - PID: 06 (idx 002, nxt: 032)
07de7: 0a
032: 08000-083ff - PID: 06 (idx 003, nxt: 033)
033: 08400-087ff - PID: 06 (idx 004, nxt: -01)
NOTE: Read file output/os_1 to verify your result


5 Change log
No.   Date         Change/Work                         Participant
1     05/12/2021   Initialize LaTeX file               Cao Tien Dat
2     06/12/2021   1.1, 1.2, 2.1, Question 2.2, 2.3    Cao Tien Dat, Phan Bùi Tấn Minh
3     07/12/2021   3.1                                 Cao Tien Dat, Phan Bùi Tấn Minh
4     08/12/2021   2.3, 4.1, Question 4.2              Cao Tien Dat
5     09/12/2021   2.5, 3.4, 3.3                       Cao Tien Dat
6     10/12/2021   4.3                                 Cao Tien Dat, Phan Bùi Tấn Minh
7     12/12/2021   2.4, 2.5                            Cao Tien Dat, Phan Bùi Tấn Minh
8     12/12/2021   3.2, 2.4                            Phan Bùi Tấn Minh
9     14/12/2021   3.3                                 Cao Tien Dat
10    14/12/2021   Formatting, finishing               Cao Tien Dat


6 Workload
No.   Date         Workload   Participant
1     14/12/2021   65%        Cao Tien Dat
2     14/12/2021   35%        Phan Bùi Tấn Minh
