UNIVERSITY OF TECHNOLOGY
FACULTY OF COMPUTER SCIENCE AND ENGINEERING
Assignment Report:
Simple Operating System
Contents
1 Introduction
  1.1 An overview
  1.2 Processes
2 Scheduler
  2.1 Operation of scheduler
  2.2 Question - Priority Feedback Queue
  2.3 Implement
  2.4 Gantt diagram
  2.5 Result
3 Memory Management
  3.1 Background
  3.2 Question - Segmentation with Paging
    3.2.1 Segmented Paging
    3.2.2 Paged Segmentation
    3.2.3 Conclusion
  3.3 Implement
  3.4 Result
4 Synchronization
  4.2 Question
  4.3 Result
5 Change log
6 Workload
1 Introduction
1.1 An overview
The assignment is about simulating a simple operating system to help students understand the fundamental concepts of scheduling, synchronization and memory management. Figure 1 shows the overall architecture of the “operating system” we are going to implement. Generally, the OS has to manage two “virtual” resources, CPU(s) and RAM, using two core components:
• Scheduler (and Dispatcher): determines which process is allowed to run on which CPU.
• Virtual memory engine (VME): isolates the memory space of each process from the others. That is, although RAM is shared by multiple processes, each process is unaware of the existence of the others. This is achieved by giving each process its own virtual memory space; the virtual memory engine maps and translates the virtual addresses used by processes to the corresponding physical addresses.
Through these modules, the OS allows multiple processes created by users to share and use the “virtual” computing resources. Therefore, in this assignment, we focus on implementing the scheduler/dispatcher and the virtual memory engine.
1.2 Processes
We are going to build a multitasking OS that lets multiple processes run simultaneously, so it is worth spending some space explaining how processes are organized. The OS manages each process through its PCB, described as follows:
// From include/common.h
struct pcb_t {
    uint32_t pid;                   // process identifier
    uint32_t priority;              // scheduling priority of the process
    struct code_seg_t * code;       // code segment (text) of the process
    addr_t regs;                    // registers
    uint32_t pc;                    // program counter
    struct seg_table_t * seg_table; // root of the process's segment table
    uint32_t bp;                    // break pointer: current end of the heap
};
2 Scheduler
2.1 Operation of scheduler:
- For each new program, the loader creates a new process and assigns a new PCB to it.
- The loader then reads and copies the content of the program into the text segment of the new process.
- Finally, the PCB of the process is pushed to the ready queue, where it waits for the CPU.
- The CPU runs processes in round-robin style. Each process is allowed to run for up to a given period of time (the time slice).
- After that, the CPU is forced to pause the process and push it to the run queue. The CPU then picks up another process from the ready queue and continues running.
- Since the CPU does not put a process back into the ready queue after pausing it, the ready queue will sooner or later become empty.
- When this happens, the scheduler moves all processes waiting in the run queue back to the ready queue so that the CPU can continue running the paused processes.
The Priority Feedback Queue (PFQ) algorithm is used to determine which process to execute when a CPU becomes available. It consists of two queues: ready_queue and run_queue.
• ready_queue: contains processes with higher execution priority than those in run_queue. After the CPU is forced to pause a process and push it to run_queue, it searches this queue for the next process to run.
• run_queue: contains processes that were paused by the CPU before completing their task and are waiting to continue executing. Processes in this queue can only continue executing when ready_queue is empty, at which point they are moved back to ready_queue to be run by the CPU.
Advantages:
• Compared to SJF, SRTF, FCFS or Round Robin with FCFS, it is more flexible because it uses priorities to let urgent processes run first. In fact, FCFS and SJF are special cases of priority scheduling: FCFS uses arrival time as the priority and SJF uses burst time. Priority scheduling is therefore a generalization of SJF and FCFS, and it is more flexible in the sense that the programmer is allowed to decide a process's priority without following a strict rule.
• Compared to priority scheduling with one or multiple levels but no feedback, it prevents starvation by using aging: processes can move between the queues, so a process that waits too long in a lower-priority queue can be promoted to a higher one.
2.3 Implement:
We implement the enqueue() and dequeue() functions to put a new PCB into a queue and to get the PCB with the highest priority out of the queue.
Implement enqueue():
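A minimal sketch of these two primitives follows. The struct layout and MAX_QUEUE_SIZE mirror the assignment skeleton, but the exact fields here are assumptions, and we assume a smaller priority value means higher priority:

```c
#include <stddef.h>

#define MAX_QUEUE_SIZE 10

struct pcb_t {
    unsigned int pid;
    unsigned int priority;   /* smaller value = higher priority (assumption) */
};

struct queue_t {
    struct pcb_t *proc[MAX_QUEUE_SIZE];
    int size;
};

static int empty(struct queue_t *q) {
    return q == NULL || q->size == 0;
}

/* enqueue: append the PCB at the tail if there is still room. */
void enqueue(struct queue_t *q, struct pcb_t *proc) {
    if (q != NULL && q->size < MAX_QUEUE_SIZE)
        q->proc[q->size++] = proc;
}

/* dequeue: remove and return the PCB with the highest priority. */
struct pcb_t *dequeue(struct queue_t *q) {
    if (empty(q))
        return NULL;
    int best = 0;
    for (int i = 1; i < q->size; i++)
        if (q->proc[i]->priority < q->proc[best]->priority)
            best = i;
    struct pcb_t *proc = q->proc[best];
    /* Close the gap by shifting the remaining entries left. */
    for (int i = best; i < q->size - 1; i++)
        q->proc[i] = q->proc[i + 1];
    q->size--;
    return proc;
}
```

With these primitives, when the scheduler finds ready_queue empty it can move every PCB from run_queue back into ready_queue with enqueue() before calling dequeue() again, matching the refill behaviour described in Section 2.1.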
2.4 Gantt diagram:
Input: sched_1

2 1 4
0 s0
4 s1
6 s2
7 s3

=> time slice = 2, number of CPUs = 1, number of processes to be run = 4; the processes s0-s3 arrive at times 0, 4, 6 and 7.

P2 7 4 20
P3 12 6 20
P4 11 7 7

Gantt diagram
• Time slot 5: Move process 1 to the run queue; load process 2 from the ready queue to the CPU.
• Time slot 6: Load process 3 into the ready queue.
• Time slot 7: Move process 2 to the run queue; load process 3 from the ready queue to the CPU. Load s3 into the ready queue.
• Time slot 15: Move process 3 to the run queue. Load process 1 to the CPU.
• Time slot 17: Move process 1 to the run queue. Load process 4 to the CPU.
• Time slot 19: Move process 4 to the run queue. The ready queue is now empty, so we load the run queue back into it and continue executing as before.
2.5 Result:
Running make test_sched shows that the result of our test is similar to that of the expected output file.
3 Memory Management
3.1 Background:
A computer can address more memory than the amount physically installed on the system. This extra memory is called virtual memory, and it is a section of a hard disk that is set up to emulate the computer's RAM.
Paging is a memory management technique in which the process address space is broken into blocks of the same size called pages (the size is a power of 2, between 512 bytes and 8192 bytes). The size of a process is measured in the number of pages.
Segmentation works very similarly to paging, but segments are of variable length whereas pages are of fixed size. Segmentation is a memory management technique in which each job is divided into several segments of different sizes, one for each module containing pieces that perform related functions. Each segment is actually a different logical address space of the program. When a process is to be executed, its segments are loaded into non-contiguous memory, though every segment is loaded into a contiguous block of available memory.
By default, the size of virtual RAM is 1 MB, so we must use 20 bits to represent the address of each of its bytes. With the segmentation-with-paging mechanism, we use the first 5 bits for the segment index, the next 5 bits for the page index and the last 10 bits for the offset.
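This 5/5/10 split can be captured with a few macros. The macro names below are modeled on the assignment skeleton but should be treated as a sketch:

```c
#include <stdint.h>

typedef uint32_t addr_t;

/* 20-bit virtual address layout: [19:15] segment | [14:10] page | [9:0] offset */
#define OFFSET_LEN   10
#define PAGE_LEN     5
#define SEGMENT_LEN  5

#define OFFSET_OF(addr)  ((addr) & 0x3FFu)                            /* low 10 bits */
#define PAGE_OF(addr)    (((addr) >> OFFSET_LEN) & 0x1Fu)             /* next 5 bits */
#define SEGMENT_OF(addr) (((addr) >> (OFFSET_LEN + PAGE_LEN)) & 0x1Fu) /* top 5 bits */
```

For example, the virtual address 0x00400 decodes to segment 0, page 1, offset 0.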
Figure 6 shows how we allocate new memory regions and create new entries in the segment and page tables inside a process. In particular, for each new page we allocate, we must add a new entry to the page tables according to that page's segment number and page number.
Segmentation with paging combines the two techniques to get the best features of both. There are two popular non-contiguous memory allocation techniques of this kind: segmented paging and paged segmentation. Segmented paging is used in Linux/x86-32.
3.2.1 Segmented Paging
To implement this technique, we divide the processes into fixed-size blocks called pages, and we divide physical memory into fixed-size blocks called frames.
The main limitation of pure paging is that when the virtual address space is large, the page tables take up a large amount of actual memory. Programs tend to be large, so their page tables would consume a considerable amount of memory even though many entries would be invalid. To solve this problem, we can use segmentation along with paging in order to reduce the page table size.
Advantages:
• The main advantage of segmented paging is reduced memory usage. Since it allocates fixed-size pages, it does not cause external fragmentation, and it makes memory allocation simpler.
• The size of each segment can vary with the number of pages mapped to it, up to a certain limit. Therefore, a program can have multiple segments of varying sizes while still relying on the paging mechanism to allocate each segment.
• It is more secure because the virtual address space of each process is isolated.
• It allows a process to use more memory than is physically available, by swapping unused pages out to the hard disk.
• It gives the programmer's view of memory along with the advantages of paging, and reduces external fragmentation compared to pure segmentation.
Disadvantages:
• The main drawback is external fragmentation, which arises from the varying sizes of the page tables and segment tables in today's systems, so extra hardware is required.
• The complexity level is much higher than with pure paging.
• It is slower, as both the page content and the page table itself must be looked up, or worse, swapped into a frame.
• Communication between processes has to be done through message passing (through message queues in kernel space or network sockets), since each process's memory is isolated, which prevents shared memory.
3.2.2 Paged Segmentation
Paged segmentation was proposed as a further improvement to memory management. The technique consists of partitioning the segment table into pages, reducing the size of the segment table.
To clarify: even with segmented paging, the page table can still contain many invalid pages. Instead of combining multi-level paging with segmented paging, the problem of a large page table can be solved by directly applying multi-level paging rather than segmented paging.
Advantages:
• The main advantage of paged segmentation is that it eliminates external fragmentation and reduces the page table size.
• Similar to segmented paging, the entire segment need not be swapped out.
Disadvantages:
• The problem of internal fragmentation is still not solved completely: there are occasions where internal fragmentation can occur, though the probability is lower.
• There are delays when accessing memory, because the extra level of paging at the first stage adds to the memory access time.
3.2.3 Conclusion:
Despite their respective advantages, paging and segmentation remain leading causes of memory fragmentation. To reduce the chances of memory fragmentation, combined memory management techniques were developed.
3.3 Implement:
In this part, we implement functions following the theory above and the instructions in the TODO parts of the source code files.
1. Find the page table from the segment-level index and the segment table.
4. Free memory:
• Free the physical memory.
• Update the virtual memory.
• Update the break pointer.
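Step 1 above can be sketched as a linear scan of the segment table. The struct fields here are assumptions modeled on the assignment skeleton:

```c
#include <stddef.h>
#include <stdint.h>

struct page_table_t {
    int size;                      /* number of page entries (details elided) */
};

struct seg_entry_t {
    uint32_t v_index;              /* segment-level index of this entry */
    struct page_table_t *pages;    /* page table covering this segment  */
};

struct seg_table_t {
    int size;
    struct seg_entry_t table[32];  /* 5 segment bits -> at most 32 entries */
};

/* Return the page table registered for the given segment index, or NULL. */
struct page_table_t *get_page_table(uint32_t index,
                                    struct seg_table_t *seg_table) {
    if (seg_table == NULL)
        return NULL;
    for (int i = 0; i < seg_table->size; i++)
        if (seg_table->table[i].v_index == index)
            return seg_table->table[i].pages;
    return NULL;
}
```

A linear scan is enough here because the segment table holds at most 32 entries; a larger table would justify indexing directly by segment number instead.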
3.4 Result:
The result of our test is similar to that of the expected output file.
4 Synchronization
4.1 Background:
Since the OS runs on multiple processors, it is possible that shared resources are concurrently accessed by more than one process at a time.
4.2 Question:
What will happen if synchronization is not handled in your system? Illustrate the problem with an example if you have one.
When multiple processes that share system resources execute concurrently, inconsistent results may be produced if synchronization is not handled.
The following example shows how inconsistent results may be produced when multiple processes execute concurrently without any synchronization:
Consider:
• One bank account is shared between two users: the husband withdraws money and the wife recharges it.
• Two processes, P1 (withdraw money) and P2 (recharge), execute concurrently.
• Both processes share a common variable named “Account Balance”.
• Process P1 tries to decrement the value of Account Balance.
• Process P2 tries to increment the value of Account Balance.
When these processes execute concurrently without synchronization, different results may be produced depending on how their reads and writes interleave.
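This lost-update problem can be reproduced (and fixed) with a small pthread sketch. The names balance and run_demo are ours, not from the assignment:

```c
#include <pthread.h>

/* Shared state: the "Account Balance" from the example above. */
static long balance = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

#define N 100000

/* P2: the wife recharges the account N times. */
static void *recharge(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++) {
        pthread_mutex_lock(&lock);
        balance += 1;   /* read-modify-write must be atomic */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* P1: the husband withdraws from the account N times. */
static void *withdraw(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++) {
        pthread_mutex_lock(&lock);
        balance -= 1;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

long run_demo(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, recharge, NULL);
    pthread_create(&t2, NULL, withdraw, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* With the mutex the result is deterministic: +N and -N cancel out.
     * Remove the lock/unlock pairs and balance may end up != 0, because
     * the two unsynchronized read-modify-write sequences interleave. */
    return balance;
}
```

Removing the mutex turns each `balance += 1` into a load, an add and a store that can interleave with the other thread's sequence, which is exactly the inconsistent-result scenario described above.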
4.3 Result:
We then complete the files, synchronizing the scheduler and the virtual memory engine to form a complete operating system.
After running the command "make test_all" we get the following results:
------ MEMORY MANAGEMENT TEST 0 ------------------------------------
./mem input/proc/m0
000: 00000-003ff - PID: 01 (idx 000, nxt: 001)
	003e8: 15
001: 00400-007ff - PID: 01 (idx 001, nxt: -01)
002: 00800-00bff - PID: 01 (idx 000, nxt: 003)
003: 00c00-00fff - PID: 01 (idx 001, nxt: 004)
004: 01000-013ff - PID: 01 (idx 002, nxt: 005)
005: 01400-017ff - PID: 01 (idx 003, nxt: 006)
006: 01800-01bff - PID: 01 (idx 004, nxt: -01)
014: 03800-03bff - PID: 01 (idx 000, nxt: 015)
	03814: 66
015: 03c00-03fff - PID: 01 (idx 001, nxt: -01)
NOTE: Read file output/m0 to verify your result
------ MEMORY MANAGEMENT TEST 1 ------------------------------------
./mem input/proc/m1
NOTE: Read file output/m1 to verify your result (your implementation should print nothing)
------ SCHEDULING TEST 0 -------------------------------------------
./os sched_0
Time slot 0
Loaded a process at input/proc/s0, PID: 1
Time slot 1
CPU 0: Dispatched process 1
Time slot 2
Time slot 3
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 4
Loaded a process at input/proc/s1, PID: 2
Time slot 5
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 6
Time slot 7
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 1
Time slot 8
Time slot 9
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 10
Time slot 11
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 1
Time slot 12
Time slot 13
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 14
Time slot 15
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 1
Time slot 16
Time slot 17
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 18
CPU 0: Processed 2 has finished
CPU 0: Dispatched process 1
Time slot 19
Time slot 20
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 21
Time slot 22
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Time slot 23
CPU 0: Processed 1 has finished
CPU 0 stopped
MEMORY CONTENT:
NOTE: Read file output/sched_0 to verify your result
------ SCHEDULING TEST 1 -------------------------------------------
./os sched_1
Time slot 0
Loaded a process at input/proc/s0, PID: 1
Time slot 1
CPU 0: Dispatched process 1
Time slot 2
Time slot 3
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 1
Loaded a process at input/proc/s1, PID: 2
Time slot 4
Time slot 5
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Loaded a process at input/proc/s2, PID: 3
Time slot 6
Time slot 7
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 3
Loaded a process at input/proc/s3, PID: 4
Time slot 8
Time slot 9
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 4
Time slot 10
Time slot 11
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 4
Time slot 12
Time slot 13
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 1
Time slot 14
Time slot 15
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 16
Time slot 17
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 3
Time slot 18
Time slot 19
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 4
Time slot 20
Time slot 21
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 1
Time slot 22
Time slot 23
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 24
Time slot 25
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 3
Time slot 26
Time slot 27
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 4
Time slot 28
Time slot 29
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 1
Time slot 30
Time slot 31
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 2
Time slot 32
CPU 0: Processed 2 has finished
CPU 0: Dispatched process 3
Time slot 33
Time slot 34
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 4
Time slot 35
Time slot 36
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 1
Time slot 37
Time slot 38
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 3
Time slot 39
Time slot 40
CPU 0: Put process 3 to run queue
CPU 0: Dispatched process 4
Time slot 41
CPU 0: Processed 4 has finished
CPU 0: Dispatched process 1
Time slot 42
Time slot 43
CPU 0: Put process 1 to run queue
CPU 0: Dispatched process 3
Time slot 44
Time slot 45
CPU 0: Processed 3 has finished
CPU 0: Dispatched process 1
Time slot 46
CPU 0: Processed 1 has finished
CPU 0 stopped
MEMORY CONTENT:
NOTE: Read file output/sched_1 to verify your result
----- OS TEST 0 ----------------------------------------------------
./os os_0
Time slot 0
Loaded a process at input/proc/p0, PID: 1
CPU 1: Dispatched process 1
Time slot 1
Loaded a process at input/proc/p1, PID: 2
Time slot 2
CPU 0: Dispatched process 2
Time slot 3
Loaded a process at input/proc/p1, PID: 3
Time slot 4
Loaded a process at input/proc/p1, PID: 4
Time slot 5
Time slot 6
CPU 1: Put process 1 to run queue
CPU 1: Dispatched process 3
Time slot 7
Time slot 8
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 4
Time slot 9
Time slot 10
Time slot 11
Time slot 12
CPU 1: Put process 3 to run queue
CPU 1: Dispatched process 1
Time slot 13
Time slot 14
CPU 0: Put process 4 to run queue
CPU 0: Dispatched process 2
Time slot 15
Time slot 16
CPU 1: Processed 1 has finished
CPU 1: Dispatched process 3
Time slot 17
Time slot 18
CPU 0: Processed 2 has finished
CPU 0: Dispatched process 4
Time slot 19
Time slot 20
CPU 1: Processed 3 has finished
CPU 1 stopped
Time slot 21
Time slot 22
CPU 0: Processed 4 has finished
CPU 0 stopped
MEMORY CONTENT:
CPU 3: Dispatched process 2
CPU 1: Put process 4 to run queue
CPU 1: Dispatched process 6
CPU 3: Processed 2 has finished
Time slot 18
CPU 3: Dispatched process 4
CPU 2: Put process 8 to run queue
CPU 2: Dispatched process 8
CPU 1: Put process 6 to run queue
CPU 1: Dispatched process 6
Time slot 19
CPU 0: Put process 7 to run queue
CPU 0: Dispatched process 7
CPU 3: Put process 4 to run queue
Time slot 20
CPU 3: Dispatched process 4
CPU 2: Put process 8 to run queue
CPU 2: Dispatched process 8
CPU 1: Processed 6 has finished
CPU 1 stopped
Time slot 21
CPU 0: Put process 7 to run queue
CPU 0: Dispatched process 7
CPU 3: Processed 4 has finished
Time slot 22
CPU 2: Put process 8 to run queue
CPU 2: Dispatched process 8
CPU 3 stopped
Time slot 23
CPU 2: Processed 8 has finished
CPU 2 stopped
CPU 0: Put process 7 to run queue
CPU 0: Dispatched process 7
Time slot 24
Time slot 25
CPU 0: Put process 7 to run queue
CPU 0: Dispatched process 7
Time slot 26
Time slot 27
CPU 0: Put process 7 to run queue
CPU 0: Dispatched process 7
Time slot 28
CPU 0: Processed 7 has finished
CPU 0 stopped
MEMORY CONTENT:
000: 00000-003ff - PID: 06 (idx 000, nxt: 001)
001: 00400-007ff - PID: 06 (idx 001, nxt: 031)
002: 00800-00bff - PID: 05 (idx 000, nxt: 003)
	00be8: 15
003: 00c00-00fff - PID: 05 (idx 001, nxt: -01)
004: 01000-013ff - PID: 05 (idx 000, nxt: 005)
005: 01400-017ff - PID: 05 (idx 001, nxt: 006)
006: 01800-01bff - PID: 05 (idx 002, nxt: 007)
007: 01c00-01fff - PID: 05 (idx 003, nxt: 008)
008: 02000-023ff - PID: 05 (idx 004, nxt: -01)
013: 03400-037ff - PID: 06 (idx 000, nxt: 014)
014: 03800-03bff - PID: 06 (idx 001, nxt: 015)
5 Change log
No. | Date       | Change/Work                      | Participant
2   | 06/12/2021 | 1.1, 1.2, 2.1, Question 2.2, 2.3 | Cao Tien Dat, Phan Bùi Tấn Minh
7   | 12/12/2021 | 2.4, 2.5                         | Cao Tien Dat, Phan Bùi Tấn Minh
6 Workload
No. | Date | Workload | Participant