
UNIT - IV

Transaction Management

Transaction
A Transaction is any action that reads from and/or writes to a database. It is a logical unit of
work that must be either entirely completed or aborted. A database request is the equivalent
of a single SQL statement in an application program.
A Transaction can be:
 A simple SELECT statement to list table contents
 A series of related UPDATE statements to change the values of attributes
 A series of INSERT statements to add rows to one or more tables
 A combination of SELECT, UPDATE, and INSERT statements
Example: Following is an example of a transaction. A transaction that changes the
contents of the database must alter the database from one consistent database state to
another, and every transaction must begin with the database in a known consistent
state.
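As a concrete sketch, the commit-or-abort behaviour can be shown with Python's built-in sqlite3 module. The account table and the transfer operation here are illustrative assumptions, not from the text:

```python
import sqlite3

# Toy schema: two accounts; a transfer must move money atomically.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` from src to dst; commit only if both updates succeed."""
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                     (amount, dst))
        conn.commit()          # transaction completed entirely
    except sqlite3.Error:
        conn.rollback()        # or aborted entirely
        raise

transfer(conn, 1, 2, 30)
balances = dict(conn.execute("SELECT id, balance FROM account"))
print(balances)  # {1: 70, 2: 80}
```

Both UPDATE statements form one logical unit of work: either both balances change, or neither does.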

Properties of Transactions:
Each transaction must display the ACID properties: Atomicity, Consistency, Isolation, and Durability.
 Atomicity: Atomicity requires that all operations of a transaction be completed; if
not, the transaction is aborted. There must be no state in a database where a
transaction is left partially completed.
 Consistency: Any database constraint that holds before the transaction begins
must also hold after the transaction completes.
 Isolation: It means that the data used during the execution of a transaction
cannot be used by a second transaction until the first one is completed.
 Durability: It means that the changes are permanent. Thus, once a transaction is
committed, no subsequent failure of the database can reverse the effect of the
transaction.
Transaction State Diagram: The following diagram shows the transaction states.

1. Active State: A database transaction is in this state while its statements are being
executed. This is the initial state of every transaction.
2. Partially Committed Phase: A database transaction enters this state when its final
statement has been executed. But, it is still possible for the transaction to be aborted in
the case of hardware or output failures.
3. Failed State: A database transaction enters the failed state when its normal
execution can no longer proceed due to hardware or program errors.
4. Aborted State: A database transaction, if determined by the DBMS to have failed,
enters the aborted state. An aborted transaction must have no effect on the database.
The database will return to its consistent state when the aborted transaction has been
rolled back.
5. Committed State: A database transaction enters the committed state when
information has been stored or altered after completing its execution successfully.

Concurrency:

Concurrency is the simultaneous execution of multiple transactions against the
database. Left unmanaged, it can create several data integrity and consistency
problems.
Concurrency control: The process of managing simultaneous operations on the
database so that they do not interfere with one another and data integrity is
maintained in a multi-user environment. The objective of concurrency control is
to ensure the serializability of transactions in a multi-user database
environment.
The three main problems associated with concurrency are:
1. Lost Updates.
2. Uncommitted Data.
3. Inconsistent retrievals.
1) Lost Updates: The lost update problem occurs when two concurrent
transactions, T1 and T2, are updating the same data element and one of the
updates is lost (overwritten by the other transaction). This is a kind of
write/write conflict.
Example: Assume that there are two concurrent transactions, T1 and T2, working
with X, and initially X = 50. T1 adds 10 to X and T2 multiplies X by 10.
As the non-interleaved execution performs the transactions as desired, the final
value of X will be 600.
But in the interleaved execution, T2 reads X (= 50) before its update by T1;
then T2 updates X to 500 (= 50 * 10) using the stale value of X. Thus we lose the
update by T1. The process is shown below: Non-Interleaving Execution vs. Interleaving
Execution.
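The interleaving above can be replayed step by step in plain Python (the +10 and *10 updates are the assumed operations from the example):

```python
# Interleaved execution of the lost-update example: X starts at 50,
# T1 computes X + 10, T2 computes X * 10, but T2 reads X before T1 writes.
X = 50

t1_local = X          # T1 reads X (50)
t2_local = X          # T2 reads X (50)  <-- before T1's update
X = t1_local + 10     # T1 writes X = 60
X = t2_local * 10     # T2 writes X = 500, overwriting T1's update

print(X)  # 500 -- T1's update is lost; a serial run would give 600
```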

2) Uncommitted Data (Dirty-Read): The uncommitted-data problem occurs when two
transactions, T1 and T2, are executed concurrently and the first transaction (T1) is
rolled back (due to a failure) after the second transaction (T2) has already accessed the
uncommitted data. It is a kind of write/read conflict and violates the isolation property.
This is also called the dirty-read problem.
Example: Assume that there are two concurrent transactions, T1 and T2, working with
X, and initially X = 50. T1 adds 10 to X and T2 multiplies X by 10. Then,
- As the non-interleaved execution performs the transactions as desired, the final value
of X will be 600.
- But in the interleaved execution, T1 first writes X (= 60) and then T2 reads that
uncommitted value and writes X (= 600). If T1 fails and is rolled back, X has to be
restored to its original value, i.e. 50. So X will be 50, and T2's work was based on a
value that never officially existed.

The process is shown below:
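The dirty read can be replayed the same way (before-image saved so T1 can roll back; the specific updates are the example's assumed operations):

```python
# Dirty read: T2 reads T1's uncommitted write, then T1 rolls back.
X = 50
before_t1 = X         # before-image saved so T1 can roll back

X = X + 10            # T1 writes X = 60 (uncommitted)
t2_read = X           # T2 reads the dirty value 60
X = t2_read * 10      # T2 writes X = 600 based on dirty data

X = before_t1         # T1 fails and is rolled back: X restored to 50
print(X, t2_read)     # 50 60 -- T2 worked with a value that was never committed
```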

3) Inconsistent Retrievals:
Inconsistent retrievals occur when a transaction accesses
data before and after another transaction finishes working with that data. For example,
an inconsistent retrieval would occur if transaction T1 calculates some summary
(aggregate) function over a set of data while another transaction (T2) is updating the
same data. The problem is that the transaction might read some data before it is
changed and other data after it is changed, thereby yielding inconsistent results.
This is a kind of write/read conflict.
Example: Assume that there are two concurrent transactions, T1 and T2, working with
X and Y, and initially X = 10 and Y = 20. T1 computes SUM = X + Y while T2 multiplies
both X and Y by 10. Then,
 As the non-interleaved execution performs the transactions as desired (T2 first),
the value of SUM will be 300.
 But in the interleaved execution, T1 first reads X (= 10) and adds it to
SUM (= 10). Then T2 writes X (= 100) and writes Y (= 200). Now T1 reads Y (= 200,
updated by T2) and adds it to SUM (= 210). Thus the transaction adds the before
value of X and the after value of Y to SUM, yielding 210. This is nothing
but an inconsistent retrieval.

The process is shown below:
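The interleaving can again be replayed in order (multiplication by 10 is the example's assumed update):

```python
# Inconsistent retrieval: T1 sums X and Y while T2 updates both.
X, Y = 10, 20

SUM = 0
SUM += X              # T1 reads X (=10) before T2's update
X = X * 10            # T2 writes X = 100
Y = Y * 10            # T2 writes Y = 200
SUM += Y              # T1 reads Y (=200) after T2's update

print(SUM)  # 210 -- serial runs would give 30 (T1 first) or 300 (T2 first)
```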

Serializability:
Concurrent transactions need to be processed in a controlled way so that they do not
interfere with each other. When two or more transactions execute concurrently and
access the same data, conflicts arise among the transactions, and favoring one
transaction's execution over another may have undesirable consequences.
Serializability: Serializability ensures that a schedule for executing concurrent
transactions is equivalent to one that executes the transactions serially in some order,
i.e. as if one transaction were processed before another with no interference.
Serializability can be achieved by a locking mechanism. However, not all schedules are
serializable; the possible conflicts are discussed below.
Read/Write Conflict Scenarios: Database operations are built from READ and WRITE
operations that can produce conflicts. The following table lists all the possible conflict
scenarios when two concurrent transactions T1 and T2 access the same data. Two
operations are in conflict when they access the same data item and at least one of them
is a WRITE operation.

Example: The following example shows the write/write conflict seen in the 'lost
update' problem. The lost update problem occurs when two concurrent transactions, T1 and
T2, update the same data element and one of the updates is lost (overwritten by the other
transaction). Assume that there are two concurrent transactions, T1 and T2, working with X,
and initially X = 50.
 In the case of schedule S1, the value of X will be 600.
 But in the case of schedule S2, T2 first reads X (= 50) before its update by T1; then T2
updates X to 500 (= 50 * 10) using the stale value of X. Thus we lose the update by T1.

The conflict is clearly evident: the two schedules produce different results.

Recoverability of Schedule
Sometimes a transaction may not execute completely due to a software issue, system
crash, or hardware failure. In that case, the failed transaction has to be rolled back. But
some other transaction may already have used a value produced by the failed
transaction, so that transaction is affected as well.

The above table 1 shows a schedule with two transactions. T1 reads and writes the
value of A, and that value is read and written by T2. T2 commits, but later on T1 fails.
Due to the failure, we have to roll back T1. T2 should also be rolled back because it read
the value written by T1, but T2 cannot be rolled back because it has already committed.
This type of schedule is known as an irrecoverable schedule.
Irrecoverable schedule: The schedule is irrecoverable if Tj reads the updated value
of Ti and Tj commits before Ti commits.
The above table 2 shows a schedule with two transactions. Transaction T1 reads and
writes A, and that value is read and written by transaction T2. But later on, T1 fails, so
we have to roll back T1. T2 must also be rolled back because T2 has read the value
written by T1. Since T2 has not committed before T1's failure, transaction T2 can be
rolled back as well. So the schedule is recoverable with cascading rollback.
Recoverable with cascading rollback: The schedule is recoverable with cascading
rollback if Tj reads the updated value of Ti; the commit of Tj is delayed until the commit of Ti.

The above table 3 shows a schedule with two transactions. Transaction T1 reads and writes A
and commits, and only then is that value read and written by T2. This is a cascadeless
recoverable schedule.
Locks:
A Lock is a variable associated with a data item that tracks the status of that data
item, so that isolation and non-interference are ensured during concurrent transactions.
A locking protocol in DBMS is a set of rules for managing access to data in databases. With
lock-based protocols, a transaction cannot read or write data until the appropriate lock is
obtained. This helps solve the concurrency problem by blocking conflicting access to a
data item by other transactions while a lock on it is held.
Types of Locks in DBMS
There are two types of Locking protocol in DBMS :
 Shared Lock (S)
 Exclusive Lock (X)
Types of Lock
1. Shared Lock (S): A shared lock is also known as a read-only lock. As the name suggests, it
can be shared between transactions, because while holding this lock a transaction does
not have permission to update the data item. An S-lock is requested using the lock-S
instruction.
2. Exclusive Lock (X): The data item can be both read and written. This lock is exclusive and
cannot be held simultaneously with any other lock on the same data item. An X-lock is
requested using the lock-X instruction.

Shared Lock vs. Exclusive Lock:
 A transaction holding a shared lock can only read the data; a transaction holding an
exclusive lock can both read and write.
 More than one transaction can acquire a shared lock on the same variable; at any given
time, only one transaction can hold an exclusive lock on a variable.
 A shared lock is represented by S; an exclusive lock is represented by X.
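The compatibility rules above can be sketched as a small lock-table check; the dictionary representation of held locks is an illustrative assumption:

```python
# Shared/exclusive lock compatibility: a request is granted only if it is
# compatible with every lock already held on the item by other transactions.
COMPATIBLE = {("S", "S"): True, ("S", "X"): False,
              ("X", "S"): False, ("X", "X"): False}

held = {}  # data item -> list of (transaction, mode)

def request_lock(txn, item, mode):
    """Grant the lock if compatible with all existing holders; else deny."""
    for holder, held_mode in held.get(item, []):
        if holder != txn and not COMPATIBLE[(held_mode, mode)]:
            return False                 # conflicting lock: must wait
    held.setdefault(item, []).append((txn, mode))
    return True

print(request_lock("T1", "A", "S"))  # True  -- first lock on A
print(request_lock("T2", "A", "S"))  # True  -- S is compatible with S
print(request_lock("T3", "A", "X"))  # False -- X conflicts with held S locks
```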

Types of Lock-Based Protocols


Two Phase Locking Protocol
The Two-Phase Locking (2PL) Protocol is an essential technique in database management
systems for maintaining data consistency and ensuring smooth operation when multiple
transactions run simultaneously. It helps prevent data conflicts, where two or more
transactions try to access or modify the same data at the same time, potentially
causing errors or corruption.

Two Phase Locking

The Two-Phase Locking Protocol resolves this by defining clear rules for managing data
locks. It divides a transaction into two phases:
1. Growing Phase: In this phase, the transaction acquires all the locks it needs to access the
required data. During this phase, it cannot release any locks.
2. Shrinking Phase: Once a transaction starts releasing locks, it cannot acquire any new
ones. This ensures that no other transaction interferes with the ongoing process.
Example:
Let’s see an example of a two-phase locking protocol. Let’s consider three transactions: T1,
T2, and T3. We have two data items, A and B. Consider the following table for more
understanding:

T1              T2              T3

Lock A          –               –
Read A          –               –
Lock B          –               –
Read B          –               –
Unlock A        –               Lock A
–               –               Write A
Unlock B        –               Unlock A
–               Lock B          –
–               Write B         –
–               Unlock B        –

In the above example, transaction T1 locks data items A and B first, so T2 and T3 must
wait: T3 cannot lock A until T1 unlocks it, and T2 cannot lock B until T1 unlocks it.

Transaction T1 proceeds by locking data item A, reading A, locking B, reading B, and
finally unlocking A and then B.
Transaction T3 then locks data item A, writes to A, and unlocks A.
Transaction T2 locks data item B, writes to B, and then unlocks B.

As per 2PL, each transaction enters the growing phase while obtaining locks and the
shrinking phase while releasing them. This ensures that T1's operations are isolated
from and consistent with T2's and T3's operations.

Rules for Two-Phase locking:

1. No two transactions can have conflicting locks.


2. No unlock operation can precede a lock operation in the same transaction.
3. No data is affected until all locks are obtained.
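The two-phase discipline can be enforced mechanically; this small class is an illustrative sketch, not a full lock manager:

```python
class TwoPhaseTransaction:
    """Rejects any lock request made after the first unlock (shrinking phase)."""
    def __init__(self):
        self.locks = set()
        self.shrinking = False

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: lock after unlock")
        self.locks.add(item)           # growing phase: acquire only

    def unlock(self, item):
        self.shrinking = True          # entering the shrinking phase
        self.locks.discard(item)

t1 = TwoPhaseTransaction()
t1.lock("A"); t1.lock("B")   # growing phase
t1.unlock("A")               # shrinking phase begins
try:
    t1.lock("C")             # violates 2PL
except RuntimeError as e:
    print(e)                 # 2PL violation: lock after unlock
```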

Deadlocks in DBMS

In database management systems (DBMS), a deadlock occurs when two or more
transactions are unable to proceed because each transaction is waiting for the
other to release locks on resources.

What is Deadlock?

A deadlock is a condition in a multi-user database environment where transactions are
unable to complete because each is waiting for resources held by other
transactions. This results in a cycle of dependencies in which no transaction can proceed.

Characteristics of Deadlock
 Mutual Exclusion: Only one transaction can hold a particular resource at a time.
 Hold and Wait: Transactions holding resources may request additional resources held
by others.
 No Preemption: Resources cannot be forcibly taken from the transaction holding them.
 Circular Wait: A cycle of transactions exists in which each transaction is waiting for a
resource held by the next transaction in the cycle.

Deadlock Avoidance
Rather than letting the database become stuck in a deadlock and then restarting or
aborting transactions, it is better to avoid the deadlock in the first place. The deadlock
avoidance method is suitable for smaller databases, whereas the deadlock prevention
method is suitable for larger databases.
One method of avoiding deadlock is using application-consistent logic. For example,
transactions that access two tables, say Students and Grades, should always access the
tables in the same order. That way, if transaction T2 already holds the lock on Grades,
transaction T1 simply waits for T2 to release it before T1 begins. When transaction T2
releases the lock, transaction T1 can proceed freely.
Another method for avoiding deadlock is to apply both a row-level locking mechanism and
the READ COMMITTED isolation level. However, this does not guarantee that deadlocks
are removed completely.
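The "same order" rule can be sketched as follows; the table names are from the example, and sorting by name is one possible choice of global order:

```python
# Deadlock avoidance by consistent lock ordering: every transaction acquires
# its locks in a single agreed-upon global order (here, sorted by table name),
# so no cycle of waits can ever form.
def lock_order(tables):
    """Return the tables in the one global acquisition order."""
    return sorted(tables)

# Both transactions need Students and Grades, in different "natural" orders:
t1_needs = ["Students", "Grades"]
t2_needs = ["Grades", "Students"]

print(lock_order(t1_needs))  # ['Grades', 'Students']
print(lock_order(t2_needs))  # ['Grades', 'Students'] -- same order, no deadlock
```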
Deadlock Detection
When a transaction waits indefinitely to obtain a lock, the database management system
should detect whether the transaction is involved in a deadlock.

The wait-for graph is one method for detecting a deadlock situation, and it is
suitable for smaller databases. In this method, a graph is drawn based on the transactions
and their locks on resources: an edge from one transaction to another means the first is
waiting for a resource held by the second. If the graph contains a closed loop (a cycle),
then there is a deadlock. For the scenario described above, the wait-for graph is drawn below:
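Checking a wait-for graph is a straightforward cycle detection; this sketch assumes the graph is given as an adjacency mapping:

```python
# Wait-for graph: an edge Ti -> Tj means Ti is waiting for a lock held by Tj.
# A cycle in this graph means deadlock.
def has_cycle(graph):
    """Detect a cycle via depth-first search with an on-stack marker."""
    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)
        for nxt in graph.get(node, []):
            if nxt in visiting:
                return True              # back edge: cycle found
            if nxt not in done and dfs(nxt):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph if n not in done)

wait_for = {"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}  # T1->T2->T3->T1
print(has_cycle(wait_for))                              # True: deadlock
print(has_cycle({"T1": ["T2"], "T2": []}))              # False
```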

Deadlock Prevention
For a large database, the deadlock prevention method is suitable. A deadlock can be
prevented if resources are allocated in such a way that a deadlock can never occur: the
DBMS analyzes whether the requested operations could create a deadlock situation and,
if they could, that transaction is never allowed to execute.
The deadlock prevention mechanism proposes two schemes:
 Wait-Die Scheme: If a transaction requests a resource that is locked by another
transaction, the DBMS compares the timestamps of the two transactions and allows
only the older transaction to wait for the resource.
Suppose there are two transactions T1 and T2, and let the timestamp of any transaction
T be TS(T). If T1 requests a resource held by T2, the DBMS performs the following:
If TS(T1) < TS(T2), i.e. T1 is the older transaction, T1 is allowed to wait until the
resource is available. That is, if a younger transaction has locked a resource and an
older transaction is waiting for it, the older transaction waits.
If T1 is the younger transaction, T1 is killed and restarted later, with a small delay but
with the same timestamp. That is, if an older transaction holds a resource and a younger
transaction requests it, the younger transaction is killed and restarted.
This scheme allows the older transaction to wait but kills the younger one.
 Wound-Wait Scheme: In this scheme, if an older transaction requests a resource held
by a younger transaction, the older transaction wounds the younger one: the younger
transaction is killed, releasing the resource, and is restarted after a small delay with
the same timestamp. If a younger transaction requests a resource that is held by an
older one, the younger transaction is asked to wait until the older one releases it.
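Both schemes reduce to a simple timestamp comparison; the function names and return strings below are illustrative:

```python
# Both schemes decide what happens when transaction `req` requests a resource
# held by `holder`; a smaller timestamp means an older transaction.
def wait_die(ts_req, ts_holder):
    """Older requester waits; younger requester dies (restarts, same ts)."""
    return "wait" if ts_req < ts_holder else "die (restart with same ts)"

def wound_wait(ts_req, ts_holder):
    """Older requester wounds (kills) the holder; younger requester waits."""
    return "wound holder (holder restarts)" if ts_req < ts_holder else "wait"

# Older T1 (ts=5) requests a resource held by younger T2 (ts=9):
print(wait_die(5, 9))    # wait
print(wound_wait(5, 9))  # wound holder (holder restarts)
# Younger T2 requests a resource held by older T1:
print(wait_die(9, 5))    # die (restart with same ts)
print(wound_wait(9, 5))  # wait
```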

Timestamp based Concurrency Control
Timestamp-based concurrency control is a method used in database systems to ensure that
transactions are executed safely and consistently without conflicts, even when multiple
transactions are being processed simultaneously. This approach relies on timestamps to
manage and coordinate the execution order of transactions.
What is Timestamp Ordering Protocol?
The Timestamp Ordering Protocol is a method used in database systems to order transactions
based on their timestamps. A timestamp is a unique identifier assigned to each transaction,
typically determined using the system clock or a logical counter. Transactions are executed in
the ascending order of their timestamps, ensuring that older transactions get higher priority.
For example:
 If Transaction T1 enters the system first, it gets a timestamp TS(T1) = 007 (assumption).
 If Transaction T2 enters after T1, it gets a timestamp TS(T2) = 009 (assumption).
This means T1 is “older” than T2 and T1 should execute before T2 to maintain consistency.
Basic Timestamp Ordering
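In basic timestamp ordering (as usually presented in textbooks), each data item X keeps Read_TS(X) and Write_TS(X), the largest timestamps of transactions that have read or written X. A read is rejected if a younger transaction has already written the item, and a write is rejected if a younger transaction has already read or written it; a rejected transaction is aborted and restarted. A minimal sketch of these checks:

```python
# Basic timestamp ordering: each item keeps the largest read and write
# timestamps seen so far; operations that arrive "too late" abort.
read_ts, write_ts = {}, {}

def read(item, ts):
    if ts < write_ts.get(item, 0):
        return "abort"                       # a younger txn already wrote item
    read_ts[item] = max(read_ts.get(item, 0), ts)
    return "ok"

def write(item, ts):
    if ts < read_ts.get(item, 0) or ts < write_ts.get(item, 0):
        return "abort"                       # a younger txn already read/wrote
    write_ts[item] = ts
    return "ok"

print(write("X", 9))   # ok    -- T(9) writes X
print(read("X", 7))    # abort -- older T(7) must not read the younger write
print(read("X", 12))   # ok
print(write("X", 10))  # abort -- T(12) has already read X
```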

Multiversion Timestamp Ordering


Multiversion Timestamp Ordering (MVTO) is a popular concurrency control technique used in
database management systems (DBMSs). MVTO allows multiple versions of a data item to
coexist at the same time, providing high concurrency and data consistency while preventing
conflicts and deadlocks.

In MVTO, each version of the data item has a unique timestamp associated with it. Transactions
that access the data item are assigned timestamps as well.
There are three components of MVTO: timestamps, versions, and ordering −
Timestamps are assigned to transactions and data item versions to determine the order of
operations.
Versions are created when a data item is modified, and each version has a unique
timestamp.
Ordering ensures that transactions only access the appropriate version of the data item
based on their timestamps.

Multiversion timestamp ordering technique for serializability

In the Multiversion timestamp ordering technique, unique timestamps are assigned to each
transaction and multiple versions are maintained for each data item. The technique ensures
serializability by following two rules.

Rule-1
If a transaction T issues a Read(X) request, the system selects the version Xi whose
write timestamp is the largest one less than or equal to TS(T), returns the value of Xi
to T, and updates Read_TS(Xi) to the maximum of Read_TS(Xi) and TS(T). Under this
rule a read never fails.
Rule-2
If a transaction T issues a Write(X) request, let Xi be the version whose write timestamp
is the largest one less than or equal to TS(T). If TS(T) < Read_TS(Xi), the system aborts
transaction T; if TS(T) = Write_TS(Xi), the system overwrites the contents of Xi;
otherwise it creates a new version of X with write timestamp TS(T).

Each version Xi of a data item maintains the following fields:

The value of the version.
Read_TS(Xi): the read timestamp of Xi, i.e. the largest timestamp of any transaction that
successfully read version Xi.
Write_TS(Xi): the write timestamp of Xi, i.e. the timestamp of the transaction that
wrote version Xi.
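Rule-1 (the multiversion read) can be sketched as follows; the dictionary representation of versions is an illustrative assumption:

```python
# Multiversion read: return the version with the largest write timestamp
# not exceeding the reader's timestamp; reads never abort under MVTO.
versions = [                       # versions of X: value, write_ts, read_ts
    {"value": 50,  "write_ts": 0, "read_ts": 0},
    {"value": 500, "write_ts": 9, "read_ts": 9},
]

def mv_read(versions, ts):
    visible = [v for v in versions if v["write_ts"] <= ts]
    chosen = max(visible, key=lambda v: v["write_ts"])
    chosen["read_ts"] = max(chosen["read_ts"], ts)   # record the read
    return chosen["value"]

print(mv_read(versions, 7))   # 50  -- T(7) sees the version written at ts 0
print(mv_read(versions, 12))  # 500 -- T(12) sees the version written at ts 9
```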

Advantages
High concurrency
Preventing conflicts and deadlocks
Data consistency
Support for long-running transactions

Disadvantages
Storage overhead
Increased computational complexity
Limited scalability

Optimistic Techniques:
Optimistic (validation-based) concurrency control assumes that conflicts among
transactions are rare, so transactions execute without acquiring locks. Each transaction
proceeds in three phases: a read phase, in which it reads from the database and makes
its updates to local copies; a validation phase, in which the DBMS checks that the
updates do not conflict with other concurrent transactions; and a write phase, in which
the updates are applied to the database if validation succeeds, or the transaction is
restarted if validation fails.
Granularity of Data Items:

Granularity: The size of data items chosen as the unit of protection by a concurrency
control protocol.
Lock Granularity: An important consideration in implementing concurrency control is
choosing the locking level. Lock granularity indicates the level at which locks are
applied. Locking can take place at the following levels: database, table, page, row, or
even field (attribute).
 Database Level: In a database-level lock, the entire database is locked, thus
preventing the use of any tables in the database by transaction T2 while transaction
T1 is being executed. This level of locking is good for batch processes, but it is
unsuitable for multiuser DBMSs.
 Table Level: In a table-level lock, the entire table is locked, preventing access to any
row by transaction T2 while transaction T1 is using the table. However, two
transactions can access the same database as long as they access different tables.
Table-level locks are less restrictive than database-level locks.
 Page level lock: The physical storage block (or page) containing a requested record is
locked. This level is the most commonly implemented locking level. A page will be a
fixed size (4K, 8K, etc.) and may contain records of one or more tables. Page-level
locks are currently the most frequently used multiuser DBMS locking method.
 Row Level Lock: A row-level lock is much less restrictive than the locks above. The
DBMS allows concurrent transactions to access different rows of the same table, even
when the rows are located on the same page.
 Field Level Lock: A field-level lock allows concurrent transactions to access the
same row as long as they use different fields (attributes) within that row.
It is rarely implemented in a DBMS because it requires an extremely high level of
overhead.

Database Recovery:

Database recovery is crucial to ensure that data integrity and availability are maintained.
When a failure occurs, it can cause data loss or corruption. Effective recovery methods help
restore data, minimize business interruptions, and ensure that operations can resume smoothly.

Recovery is the process of restoring or rebuilding the database to a consistent state
after a failure.

Why Recovery is Required in Database?


Here are some of the reasons why recovery is needed in DBMS.
 System failures: The DBMS can experience various types of failures, such as hardware
failures, software bugs, or power outages, which can lead to data corruption or loss.
Recovery mechanisms can help restore the database to a consistent state after such
failures.
 Transaction failures: Transactions can fail due to various reasons, such as network
failures, deadlock, or errors in application logic. Recovery mechanisms can help roll back
or undo the effects of such failed transactions to ensure data consistency.
 Human errors: Human errors such as accidental deletion, updating or overwriting data,
or incorrect data entry can cause data inconsistencies. Recovery mechanisms can help
recover the lost or corrupted data and restore it to the correct state.
 Security breaches: Security breaches such as hacking or unauthorized access can
compromise the integrity of data. Recovery mechanisms can help restore the database to
a consistent state and prevent further data breaches.
 Hardware upgrades: When a DBMS is upgraded to a new hardware system, the
migration process can potentially lead to data loss or corruption. Recovery mechanisms
can help ensure that the data is successfully migrated and the integrity of the database is
maintained.
 Natural disasters: Natural disasters such as earthquakes, floods, or fires can damage
the hardware on which the database is stored, leading to data loss. Recovery mechanisms
can help restore the data from backups and minimize the impact of the disaster.
 Compliance regulations: Many industries have regulations that require businesses to
retain data for a certain period of time. Recovery mechanisms can help ensure that the
data is available for compliance purposes even if it was deleted or lost accidentally.
 Data corruption: Data corruption can occur due to various reasons such as hardware
failure, software bugs, or viruses. Recovery mechanisms can help restore the database to
a consistent state and recover any lost or corrupted data.

Types of Recovery Techniques in DBMS

Database recovery techniques are used in database management systems (DBMS) to restore
a database to a consistent state after a failure or error has occurred. The main goal of
recovery techniques is to ensure data integrity and consistency and prevent data loss.

Techniques for Database Recovery


There are several techniques to recover a database after a failure. These methods help
restore data to a consistent state, ensuring minimal data loss and downtime.
1. Backup and Restore
The Backup and Restore method involves creating periodic backups of the database,
which can be used to restore data in case of a failure.

Example: A company creates daily backups of its customer database. In the event of a system
crash, it can use the latest backup to restore the data.

2. Log-Based Recovery
Log-Based Recovery keeps a record of all database changes in a log file. If a failure
occurs, the log file is used to redo completed transactions and undo incomplete ones.
Example: A bank uses log-based recovery to ensure that transactions are either fully
completed or rolled back if a failure happens during processing.
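The redo/undo pass can be sketched over an assumed log format of (transaction, operation, item, before-image, after-image) records; the data values are illustrative:

```python
# Minimal redo/undo pass over a write-ahead log: committed transactions are
# redone from their after-images, uncommitted ones undone from before-images.
log = [
    ("T1", "write", "A", 100, 150),
    ("T2", "write", "B", 20, 25),
    ("T1", "commit", None, None, None),
    # crash: T2 never committed
]

db = {"A": 100, "B": 25}          # B already reflects T2's uncommitted write

committed = {t for (t, op, *_rest) in log if op == "commit"}
for txn, op, item, before, after in log:
    if op != "write":
        continue
    if txn in committed:
        db[item] = after          # redo committed work
    else:
        db[item] = before         # undo uncommitted work

print(db)  # {'A': 150, 'B': 20}
```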
3. Shadow Paging
Shadow Paging maintains two copies of a database page: a shadow page and a current
page. During updates, changes are made to the current page while the shadow page
remains intact, providing a recovery point.
Example: A manufacturing company uses shadow paging to protect its inventory
database. If a failure occurs, the system reverts to the shadow page to restore the
previous state.
4. Checkpointing
Checkpointing involves creating a snapshot of the database at a particular point in
time. During recovery, the system starts from the last checkpoint, reducing the
amount of data that needs to be processed.
Example: An e-commerce platform creates checkpoints every 10 minutes, so in case of
a crash, only transactions after the last checkpoint need to be reprocessed.

Database Security
Database Security is the mechanism that protects the database against intentional or accidental
threats. Breaches of security may affect other parts of the system, which may in turn affect the
database. Database security encompasses hardware, software, people, and data.
Database security covers the following issues:
• Theft and Fraud: These affect not only the database environment but also the entire
organization.
• Confidentiality / Privacy: Confidentiality refers to the need to maintain secrecy over
data that is critical to the organization. Privacy refers to the need to protect data about
individuals.
• Loss of Integrity: Loss of data integrity results in invalid or corrupted data, which
may seriously affect the operation of an organization.
• Loss of Availability: Loss of availability means that the data, or the system, or both
cannot be accessed, which can seriously affect an organization's financial performance.
In some cases, events that cause a system to be unavailable may also cause data
corruption.

Measures of Control
The measures of control can be broadly divided into the following categories −
 Access Control − Access control includes security mechanisms in a database
management system to protect against unauthorized access. A user can gain
access to the database after clearing the login process through only valid user
accounts. Each user account is password protected.
 Flow Control − Distributed systems encompass a lot of data flow from one site to
another and also within a site. Flow control prevents data from being transferred
in such a way that it can be accessed by unauthorized agents.
 Data Encryption − Data encryption refers to encoding data when sensitive data is to
be communicated over public channels: converting plain text (a readable message) into
cipher text (an unreadable message).
The following are the computer-based security controls for a multi-user environment.
• Authorization
• Access controls
• Views
• Backup and Recovery
• Integrity
• Encryption
• RAID Technology
Authorization: The granting of a right or privilege that enables a subject to have legitimate
access to a system or a system's object.
The process of authorization involves
authentication of subjects requesting access to objects, where "subject" represents a
user or program and "object" represents a database table, view, procedure, trigger, or
any other object that can be created within the system.
Authentication: A mechanism that determines whether a user is who he or she claims
to be. To authenticate a user in the database environment, two elements are required:
(i) User ID
(ii) Authentication token
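The two-element check above can be sketched as a small, self-contained example. This is not part of any real DBMS API; the account store, function names, and the use of a salted PBKDF2 hash are illustrative assumptions.

```python
import hashlib
import hmac
import os

def hash_token(token: str, salt: bytes) -> bytes:
    # Derive a salted hash of the authentication token (e.g. a password),
    # so the plain token is never stored.
    return hashlib.pbkdf2_hmac("sha256", token.encode(), salt, 100_000)

# Hypothetical in-memory account store: user id -> (salt, token hash)
accounts = {}

def register(user_id: str, token: str) -> None:
    salt = os.urandom(16)
    accounts[user_id] = (salt, hash_token(token, salt))

def authenticate(user_id: str, token: str) -> bool:
    # Both elements are required: a valid user id AND the matching token.
    if user_id not in accounts:
        return False
    salt, stored = accounts[user_id]
    return hmac.compare_digest(stored, hash_token(token, salt))

register("anu", "s3cret")
print(authenticate("anu", "s3cret"))   # True
print(authenticate("anu", "wrong"))    # False
```

A real DBMS additionally rate-limits failed attempts and stores accounts in system catalogs, but the shape of the check is the same.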
Access Controls: A privilege allows a user to create or access (that is read, write, or modify)
some database object (such as a relation, view, or index) or to run certain DBMS utilities.
Views: A view is a virtual relation that does not actually exist in the database, but is produced
upon request by a particular user, at the time of request.
Backup and Recovery: Backup is the process of periodically copying the database and log file
(and possibly programs) to offline storage media.
Integrity: Integrity constraints also contribute to maintaining a secure database system by
preventing data from becoming invalid, and hence giving misleading or incorrect results.
Encryption: Encryption is the encoding of data by a special algorithm that renders the data
unreadable by any program without the decryption key. Transmitting data securely over
insecure networks requires the use of a cryptosystem, which includes:
• An encryption key to encrypt the data (the readable plaintext)
• An encryption algorithm that with the encryption key transforms the plaintext
into ciphertext
• A decryption key to decrypt the ciphertext
• A decryption algorithm that with the decryption key transforms the ciphertext
back into plaintext.
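The four components listed above can be demonstrated with a deliberately simple symmetric cryptosystem. This is a toy XOR stream cipher for illustration only (it is NOT secure); the function names are assumptions, and here the encryption and decryption keys and algorithms coincide.

```python
import hashlib

def keystream(key: str, n: int) -> bytes:
    # Toy keystream: repeat the SHA-256 digest of the key to length n.
    # Illustration only -- never use this construction for real security.
    digest = hashlib.sha256(key.encode()).digest()
    return (digest * (n // len(digest) + 1))[:n]

def encrypt(plaintext: bytes, key: str) -> bytes:
    # Encryption algorithm: XOR the plaintext with the key-derived stream.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

# XOR with the same keystream inverts itself, so here decrypt == encrypt.
decrypt = encrypt

msg = b"transfer 500 to account 42"
cipher = encrypt(msg, "shared-key")
assert cipher != msg                          # ciphertext is unreadable
assert decrypt(cipher, "shared-key") == msg   # the key restores the plaintext
```

Real cryptosystems (e.g. AES) follow the same plaintext/key/ciphertext pattern with mathematically much stronger algorithms.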
RAID Technology:
The hardware that the DBMS runs on must be fault-tolerant, meaning that the DBMS
should continue to operate even if one of the hardware components fails. This implies having
redundant components that can be seamlessly integrated into the working system whenever
one or more components fail. The solution is Redundant Array of
Independent Disks (RAID) technology. RAID uses a large disk array comprising an
arrangement of several independent disks that are organized to improve reliability and, at the
same time, increase performance.
Redundant Array of Independent Disks (RAID)
RAID, or Redundant Array of Independent Disks, is a technology that connects multiple
secondary storage devices and uses them as a single storage medium.
RAID consists of an array of disks in which multiple disks are connected together to
achieve different goals. RAID levels define the use of disk arrays.
RAID 0
In this level, a striped array of disks is implemented. The data is broken down into
blocks, and the blocks are distributed among the disks. Each disk receives a block of data to
write/read in parallel, which enhances the speed and performance of the storage device.
There is no parity or backup in level 0.
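Block-level striping can be sketched in a few lines. The block size, function names, and the list-of-lists model of "disks" are illustrative assumptions, not a real controller.

```python
def stripe(data: bytes, n_disks: int, block: int = 4) -> list:
    # Split data into fixed-size blocks and deal them round-robin
    # across the disks, as a RAID 0 controller would.
    disks = [[] for _ in range(n_disks)]
    blocks = [data[i:i + block] for i in range(0, len(data), block)]
    for i, blk in enumerate(blocks):
        disks[i % n_disks].append(blk)
    return disks

def read_back(disks: list) -> bytes:
    # Reassemble by reading one block per disk per round
    # (on real hardware these reads happen in parallel).
    rounds = max(len(d) for d in disks)
    out = []
    for i in range(rounds):
        for d in disks:
            if i < len(d):
                out.append(d[i])
    return b"".join(out)

disks = stripe(b"ABCDEFGHIJ", n_disks=2)
print(disks)               # [[b'ABCD', b'IJ'], [b'EFGH']]
print(read_back(disks))    # b'ABCDEFGHIJ'
```

Note that losing any one "disk" here loses data permanently, which is exactly why level 0 offers speed but no fault tolerance.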
RAID 1
RAID 1 uses mirroring techniques. When data is sent to a RAID controller, it sends a
copy of data to all the disks in the array. RAID level 1 is also called mirroring and
provides 100% redundancy in case of a failure.
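Mirroring is simple to model: every write goes to all disks, so any surviving copy can serve a read. The function name and the failure simulation below are illustrative assumptions.

```python
def mirrored_write(block: bytes, disks: list) -> None:
    # RAID 1: the controller sends a copy of every block to all disks.
    for disk in disks:
        disk.append(block)

disks = [[], []]
for blk in (b"AAAA", b"BBBB"):
    mirrored_write(blk, disks)

# Simulate one disk failing: any surviving mirror still holds the full data.
disks[0] = None
survivor = next(d for d in disks if d is not None)
print(survivor)   # [b'AAAA', b'BBBB']
```

The cost of this 100% redundancy is that usable capacity is halved, which is the usual trade-off cited against level 1.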
RAID 2
RAID 2 records an Error Correcting Code (ECC), computed using Hamming codes, for its data,
striped across different disks. As in level 0, each data bit in a word is recorded on a separate
disk, and the ECC codes of the data words are stored on a different set of disks. Due to its
complex structure and high cost, RAID 2 is not commercially available.
RAID 3
RAID 3 stripes the data onto multiple disks. The parity bit generated for each data word is
stored on a dedicated parity disk. This technique makes it possible to recover from single-disk
failures.
RAID 4
In this level, an entire block of data is written onto data disks and then the parity is generated
and stored on a different disk. Note that level 3 uses byte-level striping, whereas level 4 uses
block-level striping. Both level 3 and level 4 require at least three disks to implement RAID.
RAID 5
RAID 5 writes whole data blocks onto different disks, but the parity blocks generated for each
data-block stripe are distributed among all the disks rather than being stored on a single
dedicated disk.
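The parity mechanism shared by levels 3–5 is just XOR: the parity block is the XOR of the data blocks in a stripe, so any one lost block can be rebuilt from the survivors. The names below are illustrative; this sketch shows a single stripe.

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    # Byte-wise XOR of two equal-sized blocks.
    return bytes(x ^ y for x, y in zip(a, b))

# One stripe: equal-sized data blocks, one per data disk.
data_blocks = [b"AAAA", b"BBBB", b"CCCC"]

# The parity block is the XOR of all data blocks in the stripe.
parity = data_blocks[0]
for blk in data_blocks[1:]:
    parity = xor_blocks(parity, blk)

# Simulate losing the disk holding block 1: XORing the surviving
# blocks with the parity block reconstructs the lost data.
lost = data_blocks[1]
rebuilt = xor_blocks(xor_blocks(data_blocks[0], data_blocks[2]), parity)
assert rebuilt == lost
print(rebuilt)   # b'BBBB'
```

RAID 5 rotates which disk holds the parity block from stripe to stripe, removing the dedicated-parity-disk bottleneck of levels 3 and 4; RAID 6 computes a second, independent parity so two simultaneous failures can be survived.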
RAID 6
RAID 6 is an extension of level 5. In this level, two independent parities are generated and
stored in a distributed fashion among multiple disks. The two parities provide additional fault
tolerance. This level requires at least four disk drives to implement RAID.
Nested Transaction Model:
A nested transaction forms a tree of transactions with the root being called a top-level
transaction and all other nodes called nested transactions (subtransactions). Transactions
having no subtransactions are called leaf transactions. Transactions with subtransactions are
called parents (ancestors) and their subtransactions are called children (descendants).
A subtransaction can commit or rollback by itself. However, the effects of the commit
cannot take place unless the parent transaction also commits. Therefore, in order for any
subtransaction to commit, the top-level transaction must commit. If a subtransaction aborts,
all its children subtransactions (forming a subtree) are forced to abort even if they
committed locally.
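The commit and abort rules above can be sketched as a small transaction tree. The class and method names are illustrative assumptions; the point is that a subtransaction's effects become durable only if it and every ancestor up to the top level commit, while an abort cascades down its subtree.

```python
class Transaction:
    """Minimal sketch of the nested transaction model."""

    def __init__(self, name: str, parent: "Transaction" = None):
        self.name, self.parent, self.children = name, parent, []
        self.local_commit = False
        self.aborted = False
        if parent:
            parent.children.append(self)

    def commit(self) -> None:
        # A local commit only; it is provisional until all ancestors commit.
        self.local_commit = True

    def abort(self) -> None:
        # Aborting forces the entire subtree to abort, even nodes
        # that already committed locally.
        self.aborted = True
        for child in self.children:
            child.abort()

    def effective(self) -> bool:
        # Effects take place only if this node and every ancestor
        # committed and none aborted.
        node = self
        while node:
            if node.aborted or not node.local_commit:
                return False
            node = node.parent
        return True

top = Transaction("T")                 # top-level transaction (root)
t1 = Transaction("T1", top)            # subtransaction
t11 = Transaction("T1.1", t1)          # leaf transaction
t11.commit(); t1.commit()
print(t11.effective())   # False: the top-level transaction has not committed
top.commit()
print(t11.effective())   # True
```

Aborting `t1` at any point would mark `t1` and `t11` aborted, making `t11.effective()` false again despite its earlier local commit.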