EmbeddedSoftware4

The document provides an overview of embedded systems, focusing on threads and processes, including their creation, termination, and inter-process communication. It discusses the differences between processes and threads, their advantages and disadvantages, and various methods for inter-thread communication. Additionally, it covers scheduling, mutexes, and condition variables essential for managing concurrent execution in embedded systems.

Uploaded by

Ramiz Karaeski

Embedded Systems

Prof. Dr. Faruk Bağcı


Prof. Dr. Mesut Güneş
CHAPTER OVERVIEW
• Introduction to Embedded Systems
• Embedded & Real-time Operating Systems
• Linux Kernel
• Peripheral Access
• Threads & Processes
• Memory
• Inter-process Communication
• Real-time Scheduling
• Interrupt Handling
• Advanced Topics
LEARNING ABOUT THREADS AND PROCESSES

• Process or Thread?
• Processes
• Threads
• Scheduling
PROCESS OR THREAD?

• Many embedded system developers who are familiar with real-time operating systems (RTOS) consider the Unix process model to be cumbersome
• On the other hand, they see a similarity between an RTOS task and a Linux thread
• There is a tendency to transfer an existing design using a one-to-one mapping of RTOS tasks to threads
• There are designs in which the entire application is implemented with one process containing 40 or more threads
• Is this a good idea or not?
PROCESS IN SHORT

• A process is a memory address space and a thread of execution
• The address space is private to the process
• So, threads running in different processes cannot access it
• This memory separation is created by the memory management subsystem in the kernel, which:
• keeps a memory page mapping for each process
• re-programs the memory management unit on each context switch
PROCESS IN SHORT (CONTD.)

• Part of the address space is mapped to a file that contains the code and static data of the program that is running
• As the program runs, it will allocate resources such as stack space, heap memory,
references to files, etc.
• When the process terminates, these resources are reclaimed by the system
• all the memory is freed up and all the file descriptors are closed
• Processes can communicate with each other using inter-process communication
(IPC), such as local sockets
THREAD IN SHORT
• A thread is a thread of execution within a process.
• All processes begin with one thread that runs the main() function
• This is called the main thread
• You can create additional threads, for example, using the POSIX function
pthread_create()
• Results in multiple threads executing in the same address space
• Being in the same process, the threads share resources with each other
• They can read and write the same memory and use the same file
descriptors
• Communication between threads is easy
• However, you must take care of synchronization and locking issues
TWO EXTREME DESIGNS

• Two extreme designs for a hypothetical system with 40 RTOS tasks being ported to Linux
• First design: map tasks to processes and have 40 individual programs
communicating through IPC
• Reduce memory corruption problems since the main thread running in each
process is protected from the others
• reduce resource leakage since each process is cleaned up after it exits
• However, the message interface between processes is quite complex, and
• the number of messages might be large and become a limiting factor in the performance of the system
• Furthermore, any one of the 40 processes may terminate, perhaps because of
a bug causing it to crash, leaving the other 39 to carry on
• Each process would have to handle the case that its neighbors are no longer
running and recover gracefully
TWO EXTREME DESIGNS (CONTD.)

• Second design: map tasks to threads and implement the system as a single process containing 40 threads
• Cooperation becomes much easier because they share the same address
space and file descriptors
• The overhead of sending messages is reduced or eliminated, and
• context switches between threads are faster than between processes
• The downside is that you have introduced the possibility of one task
corrupting the heap or the stack of another
• If any one of the threads encounters a fatal bug, the whole process will
terminate, taking all the threads with it
• Finally, debugging a complex multithreaded process can be a nightmare
CREATING A PROCESS

• The POSIX function to create a process is fork()
• It is an odd function because for each successful call, there are two returns:
• one in the process that made the call, known as the parent, and
• one in the newly created process, known as the child
• The child is an exact copy of the parent:
• it has the same stack, the same heap, and the same file descriptors,
• and it executes the same line of code after fork()
• The only difference is the return value: 0 in the child, and the child's PID in the parent
TERMINATING A PROCESS

• A process may be stopped voluntarily by calling the exit() function or,
• involuntarily, by receiving a signal that is not handled
• In particular, SIGKILL cannot be handled and so will always kill a process
• In all cases, terminating the process will stop all threads, close all file descriptors, and release all memory
• The system sends a SIGCHLD signal to the parent so that it knows the child has terminated
• Processes have a return value:
• either the argument to exit() if it terminated normally, or
• the signal number if it was killed
• The parent can collect the return value with wait() or waitpid()
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int pid;
    int status;

    pid = fork();
    if (pid == 0) {
        printf("I am the child, PID %d\n", getpid());
        sleep(10);
        exit(42);
    } else if (pid > 0) {
        printf("I am the parent, PID %d\n", getpid());
        wait(&status);
        printf("Child terminated, status %d\n", WEXITSTATUS(status));
    } else
        perror("fork:");
    return 0;
}
RUNNING A DIFFERENT PROGRAM

• The fork function creates a copy of a running program, but it does not run a different program
• For that, you need one of the exec functions:
• int execl(const char *path, const char *arg, ...);
• int execlp(const char *file, const char *arg, ...);
• int execle(const char *path, const char *arg, ..., char *const envp[]);
• int execv(const char *path, char *const argv[]);
• int execvp(const char *file, char *const argv[]);
• int execvpe(const char *file, char *const argv[], char *const envp[]);
INTER-PROCESS COMMUNICATION

• Each process is an island of memory
• There are two ways of passing information from one to another:
• Firstly, copy it from one address space to the other
• Secondly, create an area of memory that both can access, and share the data through it
• The first method is usually combined with a queue or buffer
• This implies copying the message twice: first to a holding area and then to the destination
• Some examples of this are sockets, pipes, and message queues
• The second method requires mapping memory into two (or more) address spaces at once
• It also requires a means of synchronizing access to that memory, for example, using semaphores or mutexes
MESSAGE-BASED IPC

• There are several options
• Message-based communication mechanisms differ in the following attributes:
• Whether the message flow is unidirectional or bidirectional
• Whether the data flow is a byte stream with no message boundaries, or discrete messages with boundaries preserved
• The maximum size of a message
• Whether messages can be tagged with a priority
OVERVIEW OF DIFFERENT OPTIONS
UNIX (OR LOCAL) SOCKETS

• Unix sockets fulfill most requirements
• They are by far the most common IPC mechanism
• Unix sockets are created with the AF_UNIX address family and bound to a pathname
• Access to the socket is determined by the access permission of the socket file
• As with internet sockets, the socket type can be SOCK_STREAM or SOCK_DGRAM,
• bidirectional byte stream or
• discrete messages with preserved boundaries
• Unix socket datagrams are reliable, which means that they will not be dropped or
reordered
• The maximum size for a datagram is system-dependent and is available via
/proc/sys/net/core/wmem_max
• It is typically 100 KiB or more
• Unix sockets do not have a mechanism to indicate the priority of a message
FIFO AND NAMED PIPES

• FIFO and named pipe are just different terms for the same thing
• An extension of the anonymous pipe used to communicate between parent and
child processes when implementing pipes in the shell
• A FIFO is a special sort of file, created by the mkfifo() function or the mkfifo command
• As with Unix sockets, the file access permissions determine who can read and write
• They are unidirectional
• There is one reader and usually one writer, but there may be several
• The data is a pure byte stream, but with a guarantee of the atomicity of messages that are smaller than the buffer associated with the pipe
• Writes of less than this size will not be split into several smaller writes, and
• a read will collect the whole message in one go, as long as the size of the read buffer is large enough
• The default size of the FIFO buffer is 64 KiB on modern kernels
• Can be increased using fcntl()with F_SETPIPE_SZ up to the value in
/proc/sys/fs/pipe-max-size
• typically 1 MiB
• There is no concept of priority
POSIX MESSAGE QUEUES

• Message queues are identified by a name
• The name must begin with a forward slash / and contain only that one / character: message queues are actually kept in a pseudo filesystem of the type mqueue
• You create a queue, or get a reference to an existing queue, through mq_open()
• It returns a file descriptor
• Each message has a priority
• Messages are read from the queue based first on priority and then on age
• Messages can be up to /proc/sys/fs/mqueue/msgsize_max bytes long
• The default value is 8 KiB, but it can be set to any size in the range 128 bytes to 1 MiB by writing the value to /proc/sys/fs/mqueue/msgsize_max
• Since the reference is a file descriptor, you can use select(), poll(), and other similar functions to wait for activity in the queue
POSIX SHARED MEMORY

• To share memory between processes:
• first create a new area of memory, and
• then map it to the address space of each process that wants access to it
POSIX SHARED MEMORY (CONTD.)
• Naming of POSIX shared memory segments follows the same pattern
as message queues:
• The segments are identified by names that begin with a / character and have
exactly one such character
• The shm_open() function takes the name and returns a file descriptor for it
• If it does not exist already and the O_CREAT flag is set, then a new segment
is created
• Initially, it has a size of zero
• Can use the ftruncate()function to expand it to the desired size
• Once you have a descriptor for the shared memory, you map it to the
address space of the process using mmap()
• So threads in different processes can access the memory
THREADS

• Now it is time to look at multithreaded processes
• The programming interface for threads is the POSIX threads API
• It was first defined in the IEEE POSIX 1003.1c standard (1995)
• It is commonly known as pthreads
• It is implemented as an additional part of the C library
• There have been two implementations of pthreads over the last 15 years:
LinuxThreads and Native POSIX Thread Library (NPTL)
• The latter is much more compliant with the specification, particularly with
regard to the handling of signals and process IDs
• It is pretty dominant now, but you may come across some older versions of
uClibc that use LinuxThreads
CREATING A NEW THREAD

• The function to create a thread is pthread_create():
• int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine) (void *), void *arg);
• It creates a new thread of execution that begins in the function
start_routine and places a descriptor in the pthread_t pointed to
by thread
• It inherits the scheduling parameters of the calling thread
• these can be overridden by passing a pointer to the thread attributes in attr
• The thread will begin to execute immediately
• pthread_t is the main way to refer to the thread within the program
• The thread can also be seen from outside using a command such as ps -eLf
EXAMPLE

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/syscall.h>

static void *thread_fn(void *arg)
{
    printf("New thread started, PID %d TID %d\n",
           getpid(), (pid_t)syscall(SYS_gettid));
    sleep(10);
    printf("New thread terminating\n");
    return NULL;
}

int main(void)
{
    pthread_t t;
    printf("Main thread, PID %d TID %d\n",
           getpid(), (pid_t)syscall(SYS_gettid));
    pthread_create(&t, NULL, thread_fn, NULL);
    pthread_join(t, NULL);
    return 0;
}
TERMINATING A THREAD

• A thread terminates when:
• It reaches the end of its start_routine
• It calls pthread_exit()
• It is canceled by another thread calling pthread_cancel()
• The process that contains the thread terminates, for example, because of a thread calling exit(), or because the process receives a signal that is not handled, masked, or ignored
• Note that if a multithreaded program calls fork(), only the thread that made the call will exist in the new child process
• fork() does not replicate all the threads
• A thread has a return value, which is a void pointer
• One thread can wait for another to terminate and collect its return value by
calling pthread_join().
• If threads remain unjoined, there is a resource leak in the program
INTER-THREAD COMMUNICATION

• Big advantage of threads: they share the address space and can share
memory variables
• Big disadvantage: It requires synchronization to preserve data consistency
• In fact, threads can create private memory using thread local storage (TLS), but that is not covered here
• The pthreads interface provides the basics necessary to achieve
synchronization:
• mutexes and condition variables
• If you want more complex structures, you will have to build them yourself
• All of the IPC methods described earlier, that is sockets, pipes and message
queues, work equally well between threads in the same process
MUTUAL EXCLUSION

• To write robust programs, you need to protect each shared resource with a
mutex lock
• make sure that every code path that reads or writes the resource has locked the
mutex first
• Problems which still remain:
• Deadlocks: system is locked completely, because of mutual wait for several mutexes
• Solution: same order locking, timeouts, back-off periods
• Priority inversion: high priority thread becomes blocked waiting for a mutex locked
by a low priority thread
• Solution: priority inheritance, priority ceiling
• Poor performance: mutexes usually have minimal overhead, but performance problems occur if threads block on them most of the time
• Solution: proper design
CHANGING CONDITIONS

• Cooperating threads need a method of alerting one another that something has changed and needs attention
• Called a condition and the alert is sent through a condition variable, or condvar
• A condition is just something that you can test to give a true or false result
• Simple example: buffer that contains either zero or some items
• One thread takes items from the buffer and sleeps when it is empty
• Another thread places items into the buffer and signals the other thread that it has done so
because the condition that the other thread is waiting on has changed
• If it is sleeping, it needs to wake up and do something
• The only complexity is that the condition is, by definition, a shared
resource and so has to be protected by a mutex
SCHEDULING

• The Linux scheduler has a queue of threads that are ready to run
• Its job is to schedule them on CPUs as they become available
• Each thread has a scheduling policy that may be time-shared or real-time
• The time-shared threads have a niceness value that increases or reduces their entitlement to
CPU time
• The real-time threads have a priority such that a higher priority thread will preempt a lower
one.
• The scheduler works with threads, not processes
• Each thread is scheduled regardless of which process it is running in
• The scheduler runs when:
• A thread is blocked by calling sleep() or another blocking system call
• A time-shared thread exhausts its time slice
• An interrupt causes a thread to be unblocked, for example, because of I/O completing
FAIRNESS VS DETERMINISM

• Scheduling policies are grouped into categories of time-shared and real-time
• Time-shared policies are based on the principle of fairness
• Designed to make sure that each thread gets a fair amount of processor time
and that no thread can hog the system
• If a thread runs for too long, it is put to the back of the queue so that others
can have a go
• At the same time, a fairness policy needs to adjust to threads that are doing a
lot of work and give them the resources to get the job done
• Time-shared scheduling is good because of the way it automatically adjusts to
a wide range of workloads
FAIRNESS VS DETERMINISM (CONTD.)

• On the other hand, in a real-time program, fairness is not helpful
• Instead, you want a policy that is deterministic, which will give you at least minimal guarantees that your real-time threads will be scheduled at the right time so that they don't miss their deadlines
• Real-time thread must preempt time-shared threads
• Real-time threads also have a static priority
• Scheduler can use it to choose between threads when there are several of them to run at
once
• The Linux real-time scheduler implements a fairly standard algorithm that
runs the highest priority real-time thread
• Most RTOS schedulers are also written in this way.
• Both types of thread can coexist
• Those requiring deterministic scheduling are scheduled first and the time remaining
is divided between the time-shared threads
TIME-SHARED POLICIES

• Time-shared policies are designed for fairness
• From Linux 2.6.23 onward, the scheduler used has been the Completely Fair Scheduler (CFS)
• It does not use timeslices in the normal sense
• Instead, it calculates a running tally of the length of time a thread would be
entitled to run if it had its fair share of CPU time, and it balances that with the
actual amount of time it has run for
• If it exceeds its entitlement and there are other time-shared threads waiting
to run, the scheduler will suspend the thread and run a waiting thread instead
TIME-SHARED POLICIES (CONTD.)

• The time-shared policies are as follows:
• SCHED_NORMAL (also known as SCHED_OTHER): This is the default policy.
The vast majority of Linux threads use this policy.
• SCHED_BATCH: This is similar to SCHED_NORMAL except that threads are
scheduled with a larger granularity; that is, they run for longer but have to
wait longer until they are scheduled again. The intention is to reduce the
number of context switches for background processing (batch jobs) and
reduce the amount of CPU cache churn.
• SCHED_IDLE: These threads are run only when there are no threads of any
other policy ready to run. It is the lowest possible priority.
NICENESS

• Some time-shared threads are more important than others
• This can be indicated with the nice value, which multiplies a thread's CPU entitlement by a scaling factor
• The name comes from the function call, nice()
• A thread becomes nice by reducing its load on the system, or moves in
the opposite direction by increasing it
• The range of values is from 19, which is really nice, to -20, which is really not nice
• The default value is 0
• The nice value can be changed for SCHED_NORMAL and SCHED_BATCH
threads
• To reduce niceness, which increases the CPU load, you need the
CAP_SYS_NICE capability, which is available to the root user
REAL-TIME POLICIES

• Real-time policies are intended for determinism
• The real-time scheduler will always run the highest priority real-time thread that is ready to run
• Real-time threads always preempt time-shared threads
REAL-TIME POLICIES (CONTD.)

• There are two real-time policies:
• SCHED_FIFO: This is a run-to-completion algorithm, which means that once the thread starts to run, it will continue until it is preempted by a higher priority real-time thread, it is blocked in a system call, or it terminates
• SCHED_RR: This is a round-robin algorithm that will cycle between threads of the same priority if they exceed their time slice, which is 100 ms by default
• Since Linux 3.9, it has been possible to control the timeslice value through
/proc/sys/kernel/sched_rr_timeslice_ms
• Apart from this, it behaves in the same way as SCHED_FIFO
• Each real-time thread has a priority in the range 1 to 99, with 99 being the
highest
• To give a thread a real-time policy, you need CAP_SYS_NICE, which is
given only to the root user by default
REAL-TIME POLICIES (CONTD.)

• One problem with real-time scheduling is that of a thread that becomes compute bound
• often because a bug has caused it to loop indefinitely
• It will prevent real-time threads of lower priority from running, along with all the time-shared threads
• In this case, the system becomes erratic and may lock up completely
REAL-TIME POLICIES (CONTD.)

• There are a couple of ways to guard against this possibility
• First, since Linux 2.6.25, the scheduler has, by default, reserved 5% of CPU time for non-real-time threads so that even a runaway real-time thread cannot completely halt the system
• It is configured via two kernel controls:
• /proc/sys/kernel/sched_rt_period_us
• /proc/sys/kernel/sched_rt_runtime_us
• They have default values of 1,000,000 (1 second) and 950,000 (950 ms), respectively, which means that out of every second, 50 ms is reserved for non-real-time processing
• If you want real-time threads to be able to take 100%, then set
sched_rt_runtime_us to -1
• The second option is to use a watchdog, either hardware or software, to
monitor the execution of key threads and take action when they begin to miss
deadlines
