Solutions CSC 310 2018 - 19

CSC 310

1.
a. A shared resource may be corrupted if it is accessed by multiple threads.
Discuss.
b. What is the difference when a lock is applied to an instance method and when a
lock is applied to a static method?
c. Use the program listing below to answer the following:
i. Identify the critical region in the listing
ii. What is the function of the method newCachedThreadPool() in the listing,
and how does it differ from newFixedThreadPool()?
iii. Do you think the code will yield desirable output? Justify your response.
iv. If need be, use the Java Lock interface to resolve the possible conflict in
the code listing.

import java.util.concurrent.*;

public class AccountWithoutSync {
    private static Account account = new Account();

    public static void main(String[] args) {
        ExecutorService executor = Executors.newCachedThreadPool();
        for (int i = 0; i < 100; i++) {
            executor.execute(new AddAPennyTask());
        }
        executor.shutdown();
        while (!executor.isTerminated()) {
        }
        System.out.println("What is balance ? " + account.getBalance());
    }

    private static class AddAPennyTask implements Runnable {
        public void run() {
            account.deposit(1);
        }
    }

    private static class Account {
        private int balance = 0;

        public int getBalance() {
            return balance;
        }

        public void deposit(int amount) {
            int newBalance = balance + amount;
            try {
                Thread.sleep(10);
            } catch (InterruptedException ex) {
            }
            balance = newBalance;
        }
    }
}

2.
a. Discuss the three factors that led to the development of operating systems
b. Differentiate between a process and a thread
c. Discuss the advantages of using multithreading in concurrent applications
d. What are the risks to avoid when writing a multithreaded program?
3.
a. Define the term synchronizer
b. Compare and contrast the following synchronizers: latches, barriers and
semaphores
c.
i. What is a synchronized collection?
ii. Is ArrayList synchronized? Justify your answer
4.
a. Designing a thread-safe class is about managing concurrent access to the same
shared mutable state variable in the class. Explain what thread-safe and shared
mutable state mean
b. List and explain four ways of fixing a broken multithreaded program
c. Study the following blocks of code carefully. State whether each block is
thread-safe or not. Justify your answers and if need be, make it thread safe.
public class Task1 {
    private int v;

    public int getNext() {
        return v++;
    }
}

public class Task2 implements Servlet {
    private long count = 0;

    public long getCount() {
        return count;
    }

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = factor(i);
        ++count;
        encodeIntoResponse(resp, factors);
    }
}

public class Task3 implements Servlet {
    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = factor(i);
        encodeIntoResponse(resp, factors);
    }
}

5.
a.
i. What is deadlock?
ii. How can deadlock be avoided?
b. Threads can be in one of these five states: New, Ready, Running, Blocked, or
Finished.
i. Explain each of these states.
ii. What are the defined method(s) in Java associated with each thread
state?
c. Give some examples of possible resource corruption when running multiple
threads.
6.
a. Discuss the following:
i. Reader/Writer problem
ii. Dining philosopher problem
iii. Producer/Consumer problem
b. Describe how deadlock, livelock and starvation can arise in the philosopher’s
problem

Answers
1.
a. If a shared resource is accessed by multiple threads without proper
synchronization, it can lead to data corruption. This is because threads can
interfere with each other's execution, causing unexpected behavior and
potentially leading to corruption of the shared resource.
To avoid this issue, it is important to use proper synchronization techniques, such
as locks or semaphores, to ensure that only one thread can access the shared
resource at a time. This can prevent threads from interfering with each other's
execution and help to prevent corruption of the shared resource.
It is also a good idea to design the code in such a way that threads do not need
to access shared resources simultaneously. For example, if multiple threads only
need to read from a shared resource, it may be possible to allow multiple threads
to access it at the same time without causing any issues. However, if multiple
threads need to write to the shared resource, it is important to use
synchronization to ensure that only one thread can write to it at a time.
In short, proper synchronization, combined with careful design of how threads
share data, is what prevents corruption and ensures that shared resources are
accessed safely and reliably.
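
A minimal sketch (not part of the question paper; the class and field names are
illustrative) of how unsynchronized access corrupts a shared counter:

// Two threads increment a shared counter without synchronization,
// so updates can be lost and the final value is usually too small.
public class LostUpdateDemo {
    private static int counter = 0;               // shared mutable state

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;                        // read-modify-write: not atomic
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but interleaved updates typically produce less.
        System.out.println("Counter = " + counter);
    }
}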

b. The main difference between applying a lock to an instance method and applying
a lock to a static method is the scope of the lock. When a lock is applied to an
instance method, the lock is specific to the instance of the object on which the
method is called. This means that multiple threads can access the method on
different instances of the object at the same time, without interfering with each
other.

On the other hand, when a lock is applied to a static method, the lock is applied
to the class itself, rather than to a specific instance of the object. This means that
only one thread can access the static method at a time, regardless of which
instance of the object the method is called on. This can be useful for ensuring
that the static method is thread-safe, but it can also limit concurrency if multiple
threads need to access the method simultaneously.

In general, the decision of whether to apply a lock to an instance method or a
static method will depend on the specific requirements of the code and the
desired behavior of the method. It is important to consider the trade-offs between
concurrency and thread-safety when deciding which approach to use.
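
A minimal sketch (the class and method names are illustrative) showing the two
lock scopes:

// Illustrative sketch: instance-level vs class-level locking.
public class Counter {
    private int instanceCount = 0;
    private static int classCount = 0;

    // Locks on "this": threads working on DIFFERENT Counter objects
    // can run this method at the same time.
    public synchronized void incrementInstance() {
        instanceCount++;
    }

    // Locks on Counter.class: only one thread in the whole JVM can be
    // inside this method at a time, regardless of which instance is used.
    public static synchronized void incrementClass() {
        classCount++;
    }
}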

c.
i. The critical region in the listing is the deposit method of the Account
class. It is the part of the code that reads and then writes the shared
balance field, and multiple threads may execute it concurrently, leading to
race conditions.
ii. newCachedThreadPool() creates a thread pool that creates new threads as
they are needed and reuses idle threads when they are available, so the pool
grows and shrinks with the workload. newFixedThreadPool(), by contrast,
creates a pool with a fixed number of threads; additional tasks wait in a
queue until a thread becomes free.
iii. No, the code will not reliably yield the desired output (a balance of
100), because the deposit method is not thread-safe. Several threads can read
the same balance, sleep, and then write back the same newBalance, so some
deposits are lost and the final value of the balance field is unpredictable.
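iv. The possible conflict can be resolved with the Java Lock interface by
guarding the critical region in deposit() with a ReentrantLock:

import java.util.concurrent.*;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class AccountWithoutSync {
    private static Account account = new Account();

    public static void main(String[] args) {
        ExecutorService executor = Executors.newCachedThreadPool();
        for (int i = 0; i < 100; i++) {
            executor.execute(new AddAPennyTask());
        }
        executor.shutdown();
        while (!executor.isTerminated()) {
        }
        System.out.println("What is balance ? " + account.getBalance());
    }

    private static class AddAPennyTask implements Runnable {
        public void run() {
            account.deposit(1);
        }
    }

    private static class Account {
        private int balance = 0;
        private final Lock lock = new ReentrantLock();  // guards balance

        public int getBalance() {
            return balance;
        }

        public void deposit(int amount) {
            lock.lock();   // only one thread at a time may update balance
            try {
                int newBalance = balance + amount;
                try {
                    Thread.sleep(10);
                } catch (InterruptedException ex) {
                }
                balance = newBalance;
            } finally {
                lock.unlock();   // always release, even if an exception occurs
            }
        }
    }
}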
2.
a. There are several factors that led to the development of operating systems,
including the need for:
i. Efficient resource utilization: As computers became more powerful and
could handle more complex tasks, it became necessary to develop ways
to efficiently utilize the available resources (such as CPU time, memory,
and storage) to run multiple programs concurrently.
ii. Convenient user interaction: Early computers were complex and difficult
to use, requiring users to have specialized knowledge and to interact with
the machine using low-level commands. Operating systems provided a
convenient interface for users to interact with the computer, allowing them
to run programs, manage files, and perform other tasks in a user-friendly
way.
iii. Reliable operation: As computers became more widely used in various
industries, it became important to ensure that they were reliable and could
run continuously without failing. Operating systems provided mechanisms
for managing resources, handling errors, and recovering from crashes,
making them more reliable and resilient.

These factors led to the development of operating systems, which provide a
platform for running applications and managing the underlying hardware
resources. Without operating systems, it would be much more difficult and
inefficient to use computers for a wide range of tasks.

b. One key difference between processes and threads is that processes are
independent of each other, while threads within the same process share memory
and system resources. This means that threads can communicate with each
other more easily than processes can, but it also means that the failure of one
thread can potentially affect the other threads within the same process.
Additionally, because threads share memory and system resources, they are
more efficient than processes, but they are also more susceptible to conflicts and
errors.
c.
i. Improved resource utilization: With multithreading, a single CPU or
processor core can be used to execute multiple threads concurrently,
which can improve the utilization of the CPU or processor and other
system resources. This can lead to better performance and higher
throughput for the application.
ii. Enhanced responsiveness: Multithreading can improve the
responsiveness of an application by allowing it to continue executing
other threads even if one thread is blocked or waiting for a resource. This
can make the application feel more responsive and smooth to the user.
iii. Improved parallelism: Multithreading can enable an application to take
advantage of parallelism, which is the ability to execute multiple threads
simultaneously on multiple CPU cores or processors. This can
significantly improve the performance of the application and make it run
faster.
iv. Easier program structure: Multithreading can make it easier to structure
an application as a set of concurrently executing threads, which can
simplify the design and implementation of the application. This can make
it easier to maintain and evolve the application over time.
d.
i. Deadlock: Deadlock is a situation where two or more threads are blocked
and unable to make progress because they are waiting for each other to
release a resource. This can cause the entire program to become stuck
and unresponsive. To avoid deadlock, it is important to carefully design
the program to avoid circular dependencies between threads, and to
properly manage and synchronize access to shared resources.
ii. Race condition: A race condition is a situation where the outcome of a
program depends on the relative timing or ordering of threads. This can
lead to unpredictable and inconsistent behavior in the program, and can
be difficult to diagnose and debug. To avoid race conditions, it is important
to properly synchronize access to shared resources, and to ensure that
the program's behavior is deterministic and consistent.
iii. Resource leaks: In a multithreaded program, it is important to properly
manage and release any resources that are allocated by the program,
such as memory, file handles, and network connections. Failing to
properly release these resources can lead to resource leaks, which can
cause the program to run out of memory or other system resources, and
can eventually crash or hang.
iv. Unhandled exceptions: In a multithreaded program, it is important to
properly handle any exceptions that may be thrown by the program, and
to ensure that the program does not crash or hang if an exception is
thrown. This can be particularly challenging in a multithreaded
environment, where multiple threads may be running concurrently and
interacting with each other. To avoid unhandled exceptions, it is important
to properly structure the program to catch and handle exceptions, and to
ensure that the program's behavior is predictable and consistent.
3.
a. A synchronizer is a mechanism or object that is used to coordinate the behavior
of multiple threads in a multithreaded program. Synchronizers can be used to
ensure that threads access shared resources in a safe and predictable manner,
and to avoid race conditions and other synchronization-related problems.
b. Latches, barriers, and semaphores all make threads wait, but they differ in
how and when they block. A latch (such as CountDownLatch) is a one-shot gate:
threads block until the latch reaches its terminal state (the count reaches
zero), after which it stays open and cannot be reset. A barrier (such as
CyclicBarrier) makes a group of threads wait for each other at a common point
in the program; once all parties have arrived they are all released, and a
cyclic barrier can be reused for the next round. A semaphore controls how many
threads may access a shared resource at the same time: a thread must acquire a
permit before proceeding and releases it afterwards, blocking only when no
permits are available. Latches and barriers therefore coordinate the timing of
threads, while semaphores limit the degree of concurrent access.
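
A minimal sketch of the three synchronizers using the java.util.concurrent
classes CountDownLatch, CyclicBarrier, and Semaphore; the thread count, permit
count, and class name are illustrative:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.Semaphore;

public class SynchronizerSketch {
    public static void main(String[] args) {
        // Latch: one-shot; waiting threads proceed once the count reaches zero.
        CountDownLatch startSignal = new CountDownLatch(1);

        // Barrier: reusable; each party waits until all 3 have arrived.
        CyclicBarrier barrier = new CyclicBarrier(3);

        // Semaphore: at most 2 threads may hold a permit at once.
        Semaphore permits = new Semaphore(2);

        Runnable worker = () -> {
            try {
                startSignal.await();      // wait for the start signal
                permits.acquire();        // limit concurrent access
                try {
                    System.out.println(Thread.currentThread().getName() + " working");
                } finally {
                    permits.release();
                }
                barrier.await();          // wait for the other workers
            } catch (Exception e) {
                Thread.currentThread().interrupt();
            }
        };

        for (int i = 0; i < 3; i++) {
            new Thread(worker).start();
        }
        startSignal.countDown();          // release all waiting workers at once
    }
}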
c.
i. A synchronized collection is a collection data structure, such as a list or
a map, that is designed to be accessed by multiple threads concurrently in a
safe and predictable manner. A synchronized collection typically guards every
operation with a lock (such as a mutex or monitor), so that only one thread at
a time can operate on the collection. Synchronized collections are often used
to store and manage shared data in a multithreaded program, in order to avoid
race conditions and other synchronization-related problems.
ii. ArrayList is not synchronized. ArrayList is a list implementation in the
Java class library that, by default, provides no internal locking or
coordination. If multiple threads access and modify the same ArrayList
concurrently, they can therefore interfere with each other and cause race
conditions or other synchronization-related problems.
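
A minimal sketch, assuming the Collections.synchronizedList wrapper, of how an
ArrayList can be made safe for concurrent use; the class name and values are
illustrative:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SynchronizedListSketch {
    public static void main(String[] args) {
        // new ArrayList<>() alone is not synchronized; the wrapper below
        // guards every individual method call with one mutex.
        List<Integer> safe = Collections.synchronizedList(new ArrayList<>());

        safe.add(1);

        // Compound actions (such as iteration) still need client-side locking.
        synchronized (safe) {
            for (int value : safe) {
                System.out.println(value);
            }
        }
    }
}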
4.
a. A thread-safe class is a class that can be used in a multi-threaded environment
without causing race conditions or other synchronization issues. In this context,
"shared mutable state" refers to variables in the class that are shared by multiple
threads and can be modified by those threads.
b. There are several ways to fix a broken multithreaded program. Here are four
common approaches:
i. Use synchronization constructs: If the problem is caused by race
conditions or other synchronization issues, you can use synchronization
constructs like locks, semaphores, or the synchronized keyword in Java
to ensure that only one thread can access a shared resource at a time.
ii. Avoid shared mutable state: One way to avoid synchronization issues is
to avoid using shared mutable state altogether. Instead, you can use
immutable objects or data structures, which cannot be modified by
multiple threads simultaneously.
iii. Use thread-safe data structures: If you must use shared mutable
state, you can use thread-safe data structures that provide built-in
synchronization and prevent race conditions. Examples of thread-safe
data structures include ConcurrentHashMap in Java and sync.Map in Go
(a short sketch of this and the next approach follows the list).
iv. Use a separate object for each thread: Another approach is to create a
separate object for each thread that accesses the shared state. This way,
each thread has its own local copy of the state, and there is no need to
synchronize access to the shared state. This approach can be useful in
certain situations, but it can also lead to increased memory usage and
reduced performance.
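
A small sketch of approaches iii and iv, using ConcurrentHashMap (with
AtomicLong values) and a ThreadLocal buffer; the class, method, and field
names are illustrative assumptions:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class FixesSketch {
    // Approach iii: a thread-safe data structure instead of a plain HashMap.
    private static final Map<String, AtomicLong> hits = new ConcurrentHashMap<>();

    // Approach iv: each thread gets its own builder, so nothing is shared.
    private static final ThreadLocal<StringBuilder> buffer =
            ThreadLocal.withInitial(StringBuilder::new);

    public static void record(String key) {
        // computeIfAbsent + AtomicLong keeps the read-modify-write atomic.
        hits.computeIfAbsent(key, k -> new AtomicLong()).incrementAndGet();
    }

    public static String label(String key) {
        StringBuilder sb = buffer.get();   // this thread's private builder
        sb.setLength(0);
        return sb.append(key).append('=').append(hits.get(key)).toString();
    }
}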
c. In Task1, the getNext method increments the v variable without any
synchronization, which could lead to race conditions if multiple threads
access and modify the v variable concurrently.

In Task2, the count variable is incremented in the service method without any
synchronization, which could also lead to race conditions if multiple threads
access and modify the count variable concurrently.

The third class, Task3, is thread-safe because it does not use any shared
mutable state. This means that multiple threads can access and use the class
simultaneously without causing synchronization issues.
public class Task1 {
    private int v;

    // synchronized makes the read-increment-write of v atomic and visible
    public synchronized int getNext() {
        return v++;
    }
}

public class Task2 implements Servlet {
    private long count = 0;

    public synchronized long getCount() {
        return count;
    }

    // synchronized guards the shared hit counter; the factoring itself uses
    // only local variables (an AtomicLong for count would avoid serializing
    // whole requests)
    public synchronized void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = factor(i);
        ++count;
        encodeIntoResponse(resp, factors);
    }
}

// Task3 is already thread-safe: it is stateless (only local variables),
// so it needs no changes.
public class Task3 implements Servlet {
    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = factor(i);
        encodeIntoResponse(resp, factors);
    }
}

5.
a.
i. A deadlock is a situation where two or more threads are blocked and unable
to proceed because each is waiting for a resource held by another. This can
happen when multiple threads need to acquire several locks: each thread
acquires one lock but cannot acquire the others because they are held by the
remaining threads.
ii. Deadlocks can be avoided by using synchronization constructs carefully and
by avoiding situations where threads must acquire multiple locks in
inconsistent orders. Here are some common strategies for avoiding deadlocks:

1. Avoiding nested locks: Nested locks can lead to deadlock situations,
where two threads are waiting for each other to release a lock. To avoid
this, try to avoid taking multiple locks in a single method.
2. Using timeouts for lock acquisition: If a thread waits indefinitely for
a lock, a deadlock can persist forever. To avoid this, you can use a
timeout for lock acquisition: if a thread is unable to acquire a lock
within the specified time, it can back off gracefully and release any
locks it already holds (see the tryLock sketch after this list).
3. Using lock ordering: In some cases, deadlock can occur when
two threads are trying to acquire locks in different orders. To avoid
this, you can establish a fixed order for acquiring locks and ensure
that all threads follow this order.
4. Using deadlock detection and prevention algorithms: There
are several algorithms that can be used to detect and prevent
deadlock situations. For example, the "wait-for" graph algorithm
can be used to detect whether a set of threads is in a deadlock
state, and the "banker's algorithm" can be used to prevent
deadlocks in resource allocation systems.
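
A minimal sketch of strategy 2 using ReentrantLock.tryLock with a timeout; the
class name, method, and 50 ms timeout are illustrative assumptions, not part
of the original answer:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSketch {
    private final ReentrantLock from = new ReentrantLock();
    private final ReentrantLock to = new ReentrantLock();

    // Give up (and release what we hold) if the second lock cannot be
    // acquired within a timeout, instead of waiting forever.
    public boolean transfer(Runnable work) throws InterruptedException {
        if (from.tryLock(50, TimeUnit.MILLISECONDS)) {
            try {
                if (to.tryLock(50, TimeUnit.MILLISECONDS)) {
                    try {
                        work.run();      // both locks held: safe to proceed
                        return true;
                    } finally {
                        to.unlock();
                    }
                }
            } finally {
                from.unlock();
            }
        }
        return false;                    // caller may retry or back off
    }
}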
b.
i.
1. New: A new thread begins its life cycle in the New state. It
remains in this state until the program starts the thread.
2. Ready: When a thread is ready to be executed, it is in the Ready
state. It is waiting for the processor to be available to execute the
thread.
3. Running: A thread in the Running state is currently being
executed by the processor
4. Blocked: A thread in the Blocked state is unable to be executed
because it is waiting for a resource or synchronization with
another thread, or waiting for a specific condition to be met, such
as the completion of an I/O operation or the acquisition of a lock,
before it can resume execution.
5. Finished: A thread in the Finished state has completed its
execution.
ii.
1. New: In Java, a thread is in the New state once the Thread object has
been constructed (new Thread(...)) but before start() has been called.
2. Ready: In Java, a thread transitions to the Ready state when start() is
called on it; it is then eligible to run but has not yet been scheduled.
3. Running: In Java, a thread transitions to the Running state when it
has been scheduled by the operating system and is currently
executing its run() method.
4. Blocked: In Java, a thread can be placed in the Blocked state
using the wait(), join(), or sleep() methods.
5. Finished: In Java, a thread transitions to the Finished state when its
run() method completes (or if the deprecated stop() method is called on
it). A small sketch of these transitions follows.
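
A minimal sketch of these transitions using the Thread API; note that Java's
own Thread.State names (NEW, RUNNABLE, TIMED_WAITING, TERMINATED) differ
slightly from the five-state teaching model used above, and the class name and
sleep durations are illustrative:

public class ThreadStatesSketch {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(100);                 // blocked on a timer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        System.out.println(worker.getState());    // NEW: constructed, not started
        worker.start();                            // Ready: eligible to be scheduled
        Thread.sleep(10);
        System.out.println(worker.getState());    // typically TIMED_WAITING (blocked)
        worker.join();                             // main blocks until worker finishes
        System.out.println(worker.getState());    // TERMINATED (finished)
    }
}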
c. Data corruption: When multiple threads access the same data concurrently,
they can interfere with each other and cause the data to become corrupted. For
example, if two threads are trying to update the same variable at the same time,
the resulting value may be incorrect or inconsistent.

Memory corruption: When multiple threads allocate and deallocate memory
concurrently, they can cause memory corruption. For example, if two threads are
trying to free the same memory block at the same time, it can cause the memory
manager to become confused and lead to memory errors or crashes.

File corruption: When multiple threads are accessing the same files
concurrently, they can cause file corruption. For example, if two threads are trying
to write to the same file at the same time, the resulting file may be incomplete or
inconsistent.

Device corruption: When multiple threads are accessing the same hardware
device concurrently, they can cause device corruption. For example, if two
threads are trying to control the same motor at the same time, the resulting
motion may be unpredictable or unsafe.

6.
a.
i. The reader/writer problem is a classic concurrency problem that involves
multiple threads that want to access a shared resource. In the
reader/writer problem, there are two types of threads: readers, which only
read from the shared resource, and writers, which read from and write to
the shared resource. The problem is to design a synchronization
mechanism that allows multiple readers to access the resource
concurrently, but that ensures that only one writer can access the
resource at a time, and that writers have priority over readers when
accessing the resource.
ii. The dining philosopher problem is a classic concurrency problem that
involves a group of philosophers who are sitting around a table and trying
to eat. In the dining philosopher problem, each philosopher has a plate of
food and a fork, and they must use the forks to eat their food. The
problem is to design a synchronization mechanism that allows the
philosophers to eat in a coordinated and deadlock-free manner.
iii. The producer/consumer problem is a classic concurrency problem that
involves multiple threads that are communicating and exchanging data. In
the producer/consumer problem, there are two types of threads:
producers, which produce data and put it into a shared buffer, and
consumers, which take data out of the shared buffer. The problem is
to design a synchronization mechanism that allows the producers and
consumers to exchange data in a safe and predictable manner, without
losing or corrupting data (a BlockingQueue sketch follows this list).
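
A minimal sketch of the producer/consumer problem, assuming an
ArrayBlockingQueue as the bounded shared buffer; the class name, buffer size,
and item count are illustrative:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerSketch {
    public static void main(String[] args) {
        // Bounded buffer: put() blocks when full, take() blocks when empty,
        // so no data is lost or read twice.
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    buffer.put(i);               // blocks if the buffer is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    int item = buffer.take();    // blocks if the buffer is empty
                    System.out.println("Consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}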
b.
i. In the philosopher's problem, deadlock can arise if the philosophers are
not careful about how they use their forks. If each philosopher picks up
one fork and then waits for the other fork to become available, it is
possible for all of the philosophers to be waiting for each other and for
none of them to be able to eat. This situation can be avoided by properly
designing the synchronization mechanism for the philosophers' forks.
ii. Livelock is a situation that can arise in the philosopher's problem if the
philosophers are too careful about how they use their forks. If each
philosopher repeatedly picks up and puts down their forks, trying to avoid
deadlock, it is possible for them to be constantly changing their state
without making any progress. This situation can be avoided by properly
designing the synchronization mechanism for the philosophers' forks to
ensure that they make progress.
iii. Starvation is a situation that can arise in the philosopher's problem if the
philosophers are not treated fairly. If some philosophers are always able
to acquire their forks before the others, it is possible for them to eat all of
their food and leave the others with nothing. This situation can be avoided
by designing the forks' synchronization mechanism to be fair (for example,
using fair locks or a queue of waiting philosophers) so that every
philosopher eventually gets to eat. A lock-ordering sketch that avoids
deadlock in this problem follows.
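
A minimal sketch of the lock-ordering remedy for the dining philosophers,
assuming forks modeled as ReentrantLocks; the class name and the number of
philosophers are illustrative assumptions:

import java.util.concurrent.locks.ReentrantLock;

public class DiningPhilosophersSketch {
    public static void main(String[] args) {
        int n = 5;
        ReentrantLock[] forks = new ReentrantLock[n];
        for (int i = 0; i < n; i++) {
            forks[i] = new ReentrantLock();
        }

        for (int i = 0; i < n; i++) {
            // Deadlock avoidance by lock ordering: every philosopher picks up
            // the lower-numbered fork first, so a circular wait cannot form.
            int lower = Math.min(i, (i + 1) % n);
            int higher = Math.max(i, (i + 1) % n);
            final int id = i;
            final ReentrantLock first = forks[lower];
            final ReentrantLock second = forks[higher];

            new Thread(() -> {
                first.lock();
                try {
                    second.lock();
                    try {
                        System.out.println("Philosopher " + id + " is eating");
                    } finally {
                        second.unlock();
                    }
                } finally {
                    first.unlock();
                }
            }).start();
        }
    }
}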
