
DBMS Unit-5

Database Management System Notes

Issues and Models for Resilient Operation

A computer system, like any other mechanical or electrical device, is subject to failure. There are many causes of such failure, such as disk crash, power failure, software error, etc. In each of these cases information may be lost. Therefore, the database system maintains an integral component known as the recovery manager, which is responsible for restoring the database to the consistent state that existed prior to the occurrence of the failure. The recovery manager of a DBMS is responsible for ensuring transaction atomicity and durability. It ensures atomicity by undoing the actions of transactions that do not commit, and durability by making sure that all actions of committed transactions survive system crashes and media failures. When a DBMS is restarted after a crash, the recovery manager is given control and must bring the database to a consistent state. The recovery manager is also responsible for undoing the actions of an aborted transaction.

Types of failure:
1) Transaction Failure: There are two types of errors that may cause a transaction failure.
i) Logical Error: The transaction can no longer continue with its normal execution because of some internal condition such as bad input, data not found, overflow, or a resource limit being exceeded.
ii) System Error: The system has entered an undesirable state (e.g., deadlock), as a result of which a transaction cannot continue with its normal execution. Such a transaction can be re-executed at a later time.
2) System Crash: A hardware failure, or an error in the database software or the operating system, causes the loss of the contents of temporary (volatile) storage and brings transaction processing to a halt. The contents of permanent storage remain intact and are not corrupted.
3) Disk Failure: A disk block loses its contents as a result of either a head crash or a failure during a data-transfer operation. Copies of the data on other disks, or backups on tape, are used to recover from the failure.

Causes of failures: Some failures might cause the database to go down; some others might be trivial. On the other hand, if a data file has been lost, recovery requires additional steps. Some common causes of failures include:
1) System crashes: can happen due to hardware or software errors, resulting in the loss of main memory.
2) User error: can happen due to a user inadvertently deleting a row or dropping a table.
3) Carelessness: can happen due to the destruction of data or facilities by operators/users because of lack of concentration.
4) Sabotage: can happen due to the intentional corruption or destruction of data, hardware or software facilities.
5) Statement failure: can happen due to the inability of the database to execute an SQL statement.
6) Application software errors: can happen due to logical errors in the program accessing the database, which cause one or more transactions to fail.
7) Network failure: can happen due to a network failure, communication-software failure, or aborted asynchronous connections.
8) Media failure: can happen due to a disk-controller failure, a disk-head crash, or a disk being lost. It is the most dangerous type of failure.
9) Natural physical disasters: can happen due to natural disasters like fire, floods, earthquakes, power failure, etc.

Undo Logging
Logging is a way to assure that transactions are atomic: they appear to the database either to have executed in their entirety or not to have executed at all.
A log is & sequence of lag records, cach telling something about what some transaction has done. The actions of several transactions can “interleave,” so that a step of one transaction may be executed and its effect logged, then the for a step of another tansaction, then fora second step of the first transaction or a step of athied transaction, and soon ‘This anterleaving of transactions complicates logging. itis not sufficient simply to log the entire story of transaction after that transaction completes SIS. CLLLITT TTT LTTE SII GGG GIGI IID GGT, LEG u when the crash o¢curred. The log also may be used, in conjunction with an archive, if there is a media failure of a disk that does not store the log. Generally, to repair the effect of the crash, some transactions will have their work done again, and the new values they wrote into the database are ‘waitten again. Other transactions will have their work undone, and the database restored so that it appears that they never executed The first style of logging, which 1s called undo logging, makes only repairs of the second type If 11s not absolutely certain that the effects of @ transaction have been completed and stored on disk, then any database changes that the transaction may have made to the database are undone, and the database state is restored to what existed prior to the transaction. Log Records The log 1s a file opened for appending only. As transactions execute, the log manager has the job of recording in the log each important event. One block of the log at a time 1s filled with log, records, each representing one of these events. Log blocks ate initially created in main memory and are allocated by the buffer manager like any other blocks that the DBMS needs. The log blocks are written to nonvolatile storage on disk as soon as is feasible. There are several forms of log record that are used with each of the types of logging. These are 1. : This record indicates that transaction T bas begun. 2. : Transaction T has completed successfully and will make no mote changes to database elements. Any changes to the database made by T should appear on disk. If we insist that the changes already be on disk, this requirement must be enforced by the log manager. 3. ; Transaction T could not complete successfully If transaction T aborts, no ‘changes it mace can have been copied t0 disk, and its the job ofthe transaction manager fo make sure that such changes never appeat on disk, or thal their eifect on disk is cancelled if they do ‘The Undo-Logging Rules There are two rules that transactions must obey in order that an undo log allows us to recover from a system failure, These niles affect what the buffer manager can do and also require that certain actions be taken whenever a transaction commits Us + [Ftransaction 7 modifies database element X, then the log record of the form <7; X.¥"* must — be written to disk before the new value of is writtertirdisk, ——— - U2: Ifa transaction commits, then its COMMIT log record must be written to disk only afier all database clements changed by the transaction have been written to disk, but as soon thereafter as possible, To summarize rules Uy and Us, material associated with one transaction must be written to disk in the following order 4) The log records mdicating changed database elements +b) The changed database elements themselves ©) The COMMIT fog record In order to force log records to disk, the log manager needs a flush-log command that tells. 
the buffer manager to copy to disk any log blocks that have not previously been copied to disk or that have been changed since they were last copied.

Example: the following table shows the actions of a transaction T that doubles A and B, together with the log entries and flush-log actions that have to take place under undo logging.

Step  Action        t    M-A   M-B   D-A   D-B   Log
1)                                               <START T>
2)    READ(A,t)     8    8           8     8
3)    t := t*2      16   8           8     8
4)    WRITE(A,t)    16   16          8     8     <T, A, 8>
5)    READ(B,t)     8    16    8     8     8
6)    t := t*2      16   16    8     8     8
7)    WRITE(B,t)    16   16    16    8     8     <T, B, 8>
8)    FLUSH LOG
9)    OUTPUT(A)     16   16    16    16    8
10)   OUTPUT(B)     16   16    16    16    16
11)                                              <COMMIT T>
12)   FLUSH LOG

Actions and their log entries under undo logging.

Recovery Using Undo Logging
It is the job of the recovery manager to use the log to restore the database state to some consistent state. The first task of the recovery manager is to divide the transactions into committed and uncommitted transactions. If there is a log record <COMMIT T>, then by undo rule U2 all changes made by transaction T were previously written to disk. Thus T by itself could not have left the database in an inconsistent state when the system failure occurred. If there is a <START T> record on the log but no <COMMIT T> record, then there could have been some changes to the database made by T that got written to disk before the crash, while other changes by T either were not made, even in the main-memory buffers, or were made in the buffers but not copied to disk. Such a transaction must be undone: scanning the log from the end back toward the beginning, the recovery manager restores the old value v for every record <T, X, v> written by an uncommitted transaction T. After making all these changes, the recovery manager must write a log record <ABORT T> for each incomplete transaction T that was not previously aborted, and then flush the log. Now normal operation of the database may resume, and new transactions may begin executing.
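To make the undo-recovery procedure concrete, here is a minimal sketch in Python. The list-of-tuples log format, the dict standing in for the disk, and the function name are invented for illustration only; they are not part of any real DBMS API.

```python
# Minimal sketch of recovery with an undo log (assumed log format, not a real DBMS API).
# Log records are tuples:
#   ('START', T), ('UPDATE', T, X, old_value), ('COMMIT', T), ('ABORT', T)

def undo_recovery(log, disk):
    # 1. Divide transactions into finished (committed or aborted) and incomplete ones.
    finished = {rec[1] for rec in log if rec[0] in ('COMMIT', 'ABORT')}

    # 2. Scan the log backwards (latest first); for every update made by an
    #    incomplete transaction, restore the old value of the element.
    for rec in reversed(log):
        if rec[0] == 'UPDATE' and rec[1] not in finished:
            _, t, x, old_value = rec
            disk[x] = old_value

    # 3. Write an <ABORT T> record for each incomplete transaction, then flush the log.
    started = {rec[1] for rec in log if rec[0] == 'START'}
    for t in started - finished:
        log.append(('ABORT', t))
    return disk

# Example: T wrote A (old value 8) but its COMMIT never reached the log.
log = [('START', 'T'), ('UPDATE', 'T', 'A', 8)]
disk = {'A': 16, 'B': 8}
print(undo_recovery(log, disk))   # {'A': 8, 'B': 8}: T's change is undone
```

The backward scan matters: if an uncommitted transaction changed the same element several times, the value finally restored is the earliest (original) one.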
Redo Logging
While undo logging provides a natural and simple strategy for maintaining a log and recovering from a system failure, it is not the only possible approach. The requirement for immediate backup of database elements to disk can be avoided if we use a logging mechanism called redo logging. The principal differences between redo and undo logging are:
1. While undo logging cancels the effect of incomplete transactions and ignores committed ones during recovery, redo logging ignores incomplete transactions and repeats the changes made by committed transactions.
2. While undo logging requires us to write changed database elements to disk before the COMMIT log record reaches disk, redo logging requires that the COMMIT record appear on disk before any changed values reach disk.
3. While the old values of changed database elements are exactly what we need to recover when the undo rules U1 and U2 are followed, to recover using redo logging we need the new values.

The Redo-Logging Rule
Redo logging represents changes to database elements by a log record that gives the new value, rather than the old value which undo logging uses. These records look the same as for undo logging: an update record <T, X, v> must be written to the log, but here v is the new value of X. The order in which data and log entries reach disk can be described by a single "redo rule," called the write-ahead logging rule.
R1: Before modifying any database element X on disk, it is necessary that all log records pertaining to this modification of X, including both the update record <T, X, v> and the <COMMIT T> record, appear on disk.
The order in which material associated with one transaction gets written to disk under redo logging is:
1. The log records indicating changed database elements.
2. The COMMIT log record.
3. The changed database elements themselves.

Step  Action        t    M-A   M-B   D-A   D-B   Log
1)                                               <START T>
2)    READ(A,t)     8    8           8     8
3)    t := t*2      16   8           8     8
4)    WRITE(A,t)    16   16          8     8     <T, A, 16>
5)    READ(B,t)     8    16    8     8     8
6)    t := t*2      16   16    8     8     8
7)    WRITE(B,t)    16   16    16    8     8     <T, B, 16>
8)                                               <COMMIT T>
9)    FLUSH LOG
10)   OUTPUT(A)     16   16    16    16    8
11)   OUTPUT(B)     16   16    16    16    16

Actions and their log entries using redo logging.

Recovery with Redo Logging
An important consequence of the redo rule R1 is that unless the log has a <COMMIT T> record, we know that no changes to the database made by transaction T have been written to disk. To recover using a redo log after a system crash, we do the following:
1. Identify the committed transactions.
2. Scan the log forward from the beginning. For each log record <T, X, v> encountered:
(a) If T is not a committed transaction, do nothing.
(b) If T is committed, write value v for database element X.
3. For each incomplete transaction T, write an <ABORT T> record to the log and flush the log.

Checkpointing a Redo Log
The steps to be taken to perform a nonquiescent checkpoint of a redo log are as follows:
1. Write a log record <START CKPT (T1, ..., Tk)>, where T1, ..., Tk are all the transactions that are active (have not yet committed), and flush the log.
2. Write to disk all database elements that were written to buffers but not yet to disk by transactions that had already committed when the START CKPT record was written to the log.
3. Write an <END CKPT> record to the log and flush the log.

Recovery with a Checkpointed Redo Log
As for an undo log, the insertion of records to mark the start and end of a checkpoint helps us limit our examination of the log when a recovery is necessary. Also, as with undo logging, there are two cases, depending on whether the last checkpoint record is START or END.
i) Suppose first that the last checkpoint record on the log before a crash is <END CKPT>. Then every value written by transactions that committed before the matching <START CKPT (T1, ..., Tk)> is already on disk, so we need to redo only those committed transactions that are either among the Ti's or that started after that log record appeared in the log. In searching the log, we do not look further back than the earliest of the <START Ti> records. Linking backwards all the log records for a given transaction helps us to find the necessary records, as it did for undo logging.
ii) The last checkpoint record on the log is a <START CKPT (T1, ..., Tk)> record. In this case we cannot be sure that committed transactions prior to the start of this checkpoint had their changes written to disk, so we must search back to the previous <END CKPT> record, find its matching <START CKPT (...)> record, and redo from that point as in case (i).

Undo/Redo Logging
The third logging method keeps both the old and the new value of a changed database element in a single update record. A record <T, X, v, w> means that transaction T changed the value of database element X; its former value was v, and its new value is w. The constraints that an undo/redo logging system must follow are summarized by the following rule:
UR1: Before modifying any database element X on disk because of changes made by some transaction T, it is necessary that the update record <T, X, v, w> appear on disk.
Rule UR1 for undo/redo logging thus enforces only the constraints enforced by both undo logging and redo logging. In particular, the <COMMIT T> log record can precede or follow any of the changes to the database elements on disk.

Step  Action        t    M-A   M-B   D-A   D-B   Log
1)                                               <START T>
2)    READ(A,t)     8    8           8     8
3)    t := t*2      16   8           8     8
4)    WRITE(A,t)    16   16          8     8     <T, A, 8, 16>
5)    READ(B,t)     8    16    8     8     8
6)    t := t*2      16   16    8     8     8
7)    WRITE(B,t)    16   16    16    8     8     <T, B, 8, 16>
8)    FLUSH LOG
9)    OUTPUT(A)     16   16    16    16    8
10)                                              <COMMIT T>
11)   OUTPUT(B)     16   16    16    16    16

A possible sequence of actions and their log entries using undo/redo logging.

Recovery with Undo/Redo Logging
When we need to recover using an undo/redo log, we have the information in the update records either to undo a transaction T, by restoring the old values of the database elements that T changed, or to redo T by repeating the changes it has made. The undo/redo recovery policy is:
1. Redo all the committed transactions in the order earliest-first, and
2. Undo all the incomplete transactions in the order latest-first.
It is necessary for us to do both.
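Using the same assumed log format as before, extended so that update records carry both the old and the new value, the undo/redo recovery policy can be sketched as follows; the tuple format and function name are invented for illustration.

```python
# Minimal sketch of the undo/redo recovery policy (assumed log format, not a real DBMS API).
# Update records carry both values: ('UPDATE', T, X, old_value, new_value).

def undo_redo_recovery(log, disk):
    committed = {rec[1] for rec in log if rec[0] == 'COMMIT'}

    # 1. Redo committed transactions, earliest-first (forward scan), using new values.
    for rec in log:
        if rec[0] == 'UPDATE' and rec[1] in committed:
            _, t, x, old, new = rec
            disk[x] = new

    # 2. Undo incomplete transactions, latest-first (backward scan), using old values.
    for rec in reversed(log):
        if rec[0] == 'UPDATE' and rec[1] not in committed:
            _, t, x, old, new = rec
            disk[x] = old
    return disk

# Example: T1 committed, T2 did not; only T2's change had reached disk.
log = [('START', 'T1'), ('UPDATE', 'T1', 'A', 8, 16),
       ('START', 'T2'), ('UPDATE', 'T2', 'B', 8, 20),
       ('COMMIT', 'T1')]
disk = {'A': 8, 'B': 20}
print(undo_redo_recovery(log, disk))   # {'A': 16, 'B': 8}
```

Committed transactions are redone in a forward (earliest-first) pass and incomplete ones undone in a backward (latest-first) pass, exactly as steps 1 and 2 of the policy state.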
Because of the flexibility allowed by undo/redo logging regarding the relative order in which COMMIT log records and the database changes themselves are copied to disk, we could have either a committed transaction with some or all of its changes not on disk, or an uncommitted transaction with some or all of its changes on disk.

Checkpointing an Undo/Redo Log
A nonquiescent checkpoint is somewhat simpler for undo/redo logging than for the other logging methods. We have only to do the following:
1. Write a <START CKPT (T1, ..., Tk)> record to the log, where T1, ..., Tk are all the active transactions, and flush the log.
2. Write to disk all the buffers that are dirty, i.e., that contain one or more changed database elements. Unlike redo logging, we flush all buffers, not just those written by committed transactions.
3. Write an <END CKPT> record to the log, and flush the log.

[Figure: an undo/redo log containing <START T1>, a <START CKPT (T2)> ... <END CKPT> pair, updates by T1, T2 and T3 (including <T2, B, ..., 10>, <T2, C, ..., 15> and <T3, D, 19, 20>), and <COMMIT T2>.]

Suppose the crash occurs just before the <COMMIT T3> record is written to disk. Then we identify T2 as committed but T3 as incomplete. We redo T2 by setting C to 15 on disk; it is not necessary to set B to 10, since we know that change reached disk before the <END CKPT>. However, unlike the situation with a redo log, we also undo T3; that is, we set D to 19 on disk. If T3 had been active at the start of the checkpoint, we would have had to look prior to the START CKPT record to find whether there were more actions by T3 that may have reached disk and need to be undone.

Protecting Against Media Failures
The log can protect us against system failures, where nothing is lost from disk but temporary data in main memory is lost. More serious failures involve the loss of one or more disks. We could, in principle, reconstruct the database from the log if:
a) the log were on a disk other than the disk(s) that hold the data,
b) the log were never thrown away after a checkpoint, and
c) the log were of the redo or the undo/redo type, so that new values are stored on the log.
However, the log will usually grow faster than the database, so it is not practical to keep the log forever.

The Archive
To protect against media failures, we are thus led to a solution involving archiving: maintaining a copy of the database separate from the database itself. If it were possible to shut down the database for a while, we could make a backup copy on some storage medium such as tape or optical disk, and store it remote from the database in some secure location. The backup would preserve the database state as it existed at that time, and if there were a media failure, the database could be restored to the state that existed then.

Since writing an archive is a lengthy process if the database is large, one generally tries to avoid copying the entire database at each archiving step. Thus, we distinguish between two levels of archiving:
1. A full dump, in which the entire database is copied.
2. An incremental dump, in which only those database elements changed since the previous full or incremental dump are copied.
It is also possible to have several levels of dump, with a full dump thought of as a "level 0" dump, and a "level i" dump copying everything changed since the last dump at level i or less.

We can restore the database from a full dump and its subsequent incremental dumps, in a process much like the way a redo or undo/redo log can be used to repair damage due to a system failure. We copy the full dump back to the database, and then, in an earliest-first order, make the changes recorded by the later incremental dumps.
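As a small illustration of that restore order, here is a sketch in Python. The dict-based "dump" representation is invented for this example; a real archive would of course consist of files on a separate, secure medium.

```python
# Minimal sketch: restore a database from a full (level-0) dump plus later
# incremental dumps, applied earliest-first. The dict-based dump format is
# invented for illustration only.

def restore_from_archive(full_dump, incremental_dumps):
    database = dict(full_dump)          # 1. copy the full dump back into the database
    for dump in incremental_dumps:      # 2. apply each incremental dump, earliest first;
        database.update(dump)           #    later dumps overwrite earlier values
    # 3. A real system would then replay the surviving redo or undo/redo log to
    #    reach a consistent state more recent than the last dump.
    return database

full_dump = {'A': 1, 'B': 2, 'C': 3}
increments = [{'B': 20}, {'A': 10, 'C': 30}]    # two later incremental dumps
print(restore_from_archive(full_dump, increments))   # {'A': 10, 'B': 20, 'C': 30}
```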
Since incremental dumps will tend to involve only the small fraction of the data changed since the last dump, they take less space and can be done more quickly.

Nonquiescent Archiving
A nonquiescent checkpoint attempts to make a copy on disk of the (approximate) database state that existed when the checkpoint started. Similarly, a nonquiescent dump tries to make a copy of the database that existed when the dump began, but database activity may change many database elements on disk during the minutes or hours that the dump takes. If it is necessary to restore the database from the archive, the log entries made during the dump can be used to sort things out and get the database to a consistent state.

A nonquiescent dump copies the database elements in some fixed order, possibly while those elements are being changed by executing transactions. As a result, the value of a database element that is copied to the archive may or may not be the value that existed when the dump began. As long as the log for the duration of the dump is preserved, the discrepancies can be corrected from the log.

[Figures: "Events during a nonquiescent dump" and "The analogy between checkpoints and dumps".]

The process of making an archive can be broken into the following steps. We assume that the logging method is either redo or undo/redo; an undo log is not suitable for use with archiving.
1. Write a log record <START DUMP>.
2. Perform a checkpoint appropriate for whichever logging method is being used.
3. Perform a full or incremental dump of the data disk(s), as desired, making sure that the copy of the data has reached the secure, remote site.
4. Make sure that enough of the log has been copied to the secure, remote site that at least the prefix of the log up to and including the checkpoint in item (2) will survive a media failure of the database.
5. Write a log record <END DUMP>.

[Figure: a log taken during a dump, containing <START DUMP>, a checkpoint, update records such as <T2, C, 3, 6> and <T1, B, 2, ...>, the point at which the dump completes, and <END DUMP>.]
Note that we did not show T1 committing. It would be unusual for a transaction to remain active during the entire time a full dump was in progress, but that possibility doesn't affect the correctness of the recovery method.

Recovery Using an Archive and Log
Suppose that a media failure occurs, and we must reconstruct the database from the most recent archive and whatever prefix of the log has reached the remote site and has not been lost in the crash. We perform the following steps:
1. Restore the database from the archive.
(a) Find the most recent full dump and reconstruct the database from it (i.e., copy the archive into the database).
(b) If there are later incremental dumps, modify the database according to each, earliest first.
2. Modify the database using the surviving log. Use the method of recovery appropriate to the log method being used.

PART A [2-mark questions]
1) Transaction Management: The two principal tasks of the transaction manager are assuring recoverability of database actions through logging, and assuring correct, concurrent behavior of transactions through the scheduler (not discussed in this chapter).
2) Database Elements: The database is divided into elements, which are typically disk blocks, but could be tuples, extents of a class, or many other units. Database elements are the units for both logging and scheduling.
3) Logging: A record of every important action of a transaction (beginning, changing a database element, committing, or aborting) is stored on a log.
The log must be backed up on disk at a time that is related to when the corresponding database changes migrate to disk, but that time depends on the particular logging method used.
4) Recovery: When a system crash occurs, the log is used to repair the database, restoring it to a consistent state.
5) Logging Methods: The three principal methods for logging are undo, redo, and undo/redo, named for the way(s) that they are allowed to fix the database during recovery.
6) Undo Logging: This method logs only the old value, each time a database element is changed. With undo logging, a new value of a database element can be written to disk only after the log record for the change has reached disk, but before the commit record for the transaction performing the change reaches disk. Recovery is done by restoring the old value for every uncommitted transaction.
7) Redo Logging: Here, only the new value of database elements is logged. With this form of logging, values of a database element can be written to disk only after both the log record of its change and the commit record for its transaction have reached disk. Recovery involves rewriting the new value for every committed transaction.
8) Undo/Redo Logging: In this method, both old and new values are logged. Undo/redo logging is more flexible than the other methods, since it requires only that the log record of a change appear on disk before the change itself does. There is no requirement about when the commit record appears. Recovery is effected by redoing committed transactions and undoing the uncommitted transactions.
9) Checkpointing: Since all methods require, in principle, looking at the entire log from the dawn of history when a recovery is necessary, the DBMS must occasionally checkpoint the log, to assure that no log records prior to the checkpoint will be needed during a recovery. Thus, old log records can eventually be thrown away and their disk space reused.
10) Nonquiescent Checkpointing: To avoid shutting down the system while a checkpoint is made, techniques associated with each logging method allow the checkpoint to be made while the system is in operation and database changes are occurring. The only cost is that some log records prior to the nonquiescent checkpoint may need to be examined during recovery.
11) Archiving: While logging protects against system failures involving only the loss of main memory, archiving is necessary to protect against failures where the contents of disk are lost. Archives are copies of the database stored in a safe place.
12) Recovery from Media Failures: When a disk is lost, it may be restored by starting with a full backup of the database, modifying it according to any later incremental backups, and finally recovering to a consistent database state by using an archived copy of the log.
13) Incremental Backups: Instead of copying the entire database to an archive periodically, a single complete backup can be followed by several incremental backups, where only the changed data is copied to the archive.
14) Nonquiescent Archiving: Techniques exist for making a backup of the data while the database is in operation. They involve making log records of the beginning and end of the archiving, as well as performing a checkpoint for the log during the archiving.

Serializability
Serializability is a widely accepted standard that ensures the consistency of a schedule. A schedule is consistent if and only if it is serializable.
A schedule is said to be serializable if the interleaved transactions produce a result that is equivalent to the result produced by executing the individual transactions separately, i.e., in some serial order.

[Figures: a serial schedule of transactions T1 and T2, and an interleaved schedule of the same two transactions; each transaction performs read(X), write(X), read(Y), write(Y).]

Because the two schedules produce the same result, the interleaved schedule is said to be serializable. The transactions may be interleaved in any order, and the DBMS doesn't provide any guarantee about the order in which they are executed.
There are two different types of serializability. They are:
i) Conflict Serializability
ii) View Serializability

i) Conflict Serializability:
Consider a schedule S1 containing two successive instructions Ia and Ib belonging to transactions Ta and Tb respectively. If Ia and Ib refer to different data items, then it is very easy to swap these instructions: the result of swapping them has no impact on the remaining instructions in the schedule. If Ia and Ib refer to the same data item x, then the following four cases must be considered:
Case 1: Ia = read(x), Ib = read(x)
Case 2: Ia = read(x), Ib = write(x)
Case 3: Ia = write(x), Ib = read(x)
Case 4: Ia = write(x), Ib = write(x)

Case 1: Both Ia and Ib are read instructions. In this case, the execution order of the instructions does not matter, since the same data item x is only read by both transactions Ta and Tb.
Case 2: Ia and Ib are read and write instructions respectively. If the execution order is Ia before Ib, then Ta does not read the value written by Tb in instruction Ib; but if the order is Ib before Ia, then Ta reads the value written by Tb. Therefore, in this case the execution order of the instructions is important.
Case 3: Ia and Ib are write and read instructions respectively. If the execution order is Ia before Ib, then Tb reads the value written by Ta; but if the order is Ib before Ia, then Tb does not read the value written by Ta. Therefore, in this case the execution order of the instructions is important.
Case 4: Both Ia and Ib are write instructions. The order of the two writes does not directly affect either Ta or Tb, but it does determine the value seen by any later read(x) and the final value of x stored in the database, so the order of the writes matters as well.

ii) View Serializability:
Two schedules S1 and S1', consisting of the same set of transactions, are said to be view equivalent if the following conditions are satisfied:
1) If a transaction Ta in schedule S1 reads the initial value of data item x, then the same transaction in schedule S1' must also read the initial value of x.
2) If a transaction Ta in schedule S1 reads the value of x that was written by transaction Tb, then Ta in schedule S1' must also read the value of x written by transaction Tb.
3) If a transaction Ta in schedule S1 performs the final write operation on data item x, then the same transaction in schedule S1' must also perform the final write operation on x.
View equivalence leads to another notion called view serializability. A schedule S is said to be view serializable if it is view equivalent to a serial schedule. Every conflict-serializable schedule is view serializable, but not every view-serializable schedule is conflict serializable.
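A common way to test conflict serializability in practice is to build a precedence graph from the conflicting pairs (cases 2, 3 and 4 above) and check it for cycles. The sketch below assumes an invented representation of a schedule as a list of (transaction, operation, data item) tuples; it is an illustration, not a standard library routine.

```python
# Minimal sketch of a conflict-serializability test using a precedence graph.
# A schedule is conflict serializable iff the precedence graph has no cycle.
from itertools import combinations

def is_conflict_serializable(schedule):
    edges = set()
    # Two operations conflict if they belong to different transactions, touch the
    # same data item, and at least one of them is a write (cases 2-4 above).
    for (t1, op1, x1), (t2, op2, x2) in combinations(schedule, 2):
        if t1 != t2 and x1 == x2 and 'W' in (op1, op2):
            edges.add((t1, t2))        # earlier operation's transaction -> later one's

    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)

    # Depth-first search for a cycle in the precedence graph.
    def has_cycle(node, visiting, done):
        visiting.add(node)
        for nxt in graph.get(node, ()):
            if nxt in visiting or (nxt not in done and has_cycle(nxt, visiting, done)):
                return True
        visiting.remove(node)
        done.add(node)
        return False

    done = set()
    return not any(has_cycle(n, set(), done) for n in graph if n not in done)

# T1 and T2 interleave reads and writes of the same item X: not conflict serializable.
s = [('T1', 'R', 'X'), ('T2', 'R', 'X'), ('T1', 'W', 'X'), ('T2', 'W', 'X')]
print(is_conflict_serializable(s))   # False
```

For the interleaved example shown, the graph contains both T1 -> T2 and T2 -> T1; the cycle means no equivalent serial order exists.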
Concurrency Control
In a multiprogramming environment where multiple transactions can be executed simultaneously, it is highly important to control the concurrency of transactions. We have concurrency control protocols to ensure atomicity, isolation, and serializability of concurrent transactions.

Why does a DBMS need concurrency control?
In general, concurrency control is an essential part of transaction management. It is a mechanism for ensuring correctness when two or more database transactions that access the same data or data set are executed concurrently with time overlap. According to Wikipedia.org, if multiple transactions are executed serially or sequentially, data is consistent in the database; however, if concurrent transactions with interleaving operations are executed, some unexpected data and inconsistent results may occur. Data interference is usually caused by a write operation among transactions on the same set of data in the DBMS. For example, the lost update problem may occur when a second transaction writes a second value of a data item on top of the first value written by a first concurrent transaction. Other problems include the dirty read problem and the incorrect summary problem.

Concurrency Control Techniques
The following are the various concurrency control techniques:
1. Concurrency Control by Locks
2. Concurrency Control by Timestamps
3. Concurrency Control by Validation

1. Concurrency Control by Locks
A lock is a mechanism that tells the DBMS whether a particular data item is being used by any transaction for read/write purposes. Since the two types of operations, read and write, are basically different in nature, the locks for read and write operations may behave differently. The simple rule for locking can be derived from here: if a transaction is reading the content of a sharable data item, then any number of other transactions can be allowed to read the content of the same data item; but if a transaction is writing into a sharable data item, then no other transaction is allowed to read or write that same data item.
Depending upon these rules, we can classify locks into two types.
Shared Lock: A transaction may acquire a shared lock on a data item in order to read its content. The lock is shared in the sense that any other transaction can acquire a shared lock on the same data item for reading purposes.
Exclusive Lock: A transaction may acquire an exclusive lock on a data item in order to both read and write it. The lock is exclusive in the sense that no other transaction can acquire any kind of lock (either shared or exclusive) on that same data item.
The relationship between shared and exclusive locks can be represented by the following table, which is known as the Lock Matrix.

              Shared    Exclusive
  Shared      TRUE      FALSE
  Exclusive   FALSE     FALSE

Two Phase Locking Protocol
The use of locks helps us to create neat and clean concurrent schedules. The Two Phase Locking Protocol defines the rules of how to acquire the locks on a data item and how to release the locks. The Two Phase Locking Protocol assumes that a transaction can only be in one of two phases.
Growing Phase:
> In this phase the transaction can only acquire locks, but cannot release any lock.
> The transaction enters the growing phase as soon as it acquires the first lock it wants.
> It cannot release any lock in this phase even if it has finished working with a locked data item.
> Ultimately the transaction reaches a point where all the locks it may need have been acquired. This point is called the Lock Point.
Shrinking Phase:
> After the Lock Point has been reached, the transaction enters the shrinking phase. In this phase the transaction can only release locks, but cannot acquire any new lock.
> The transaction enters the shrinking phase as soon as it releases the first lock after crossing the Lock Point.
There are two different versions of the Two Phase Locking Protocol. They are:
1. Strict Two Phase Locking Protocol
2. Rigorous Two Phase Locking Protocol
Strict Two Phase Locking Protocol: In this protocol, a transaction may release all the shared locks after the Lock Point has been reached, but it cannot release any of the exclusive locks until the transaction commits. This protocol helps in creating cascadeless schedules.
A cascading schedule is a typical problem faced while creating concurrent schedules. Consider the following schedule once again.

T1                          T2
Lock-X (A)
Read A;
A = A - 100;
Write A;
Unlock (A)
                            Lock-S (A)
                            Read A;
                            Temp = A * 0.1;
                            Unlock (A)
                            Lock-X (C)
                            Read C;
                            C = C + Temp;
                            Write C;
                            Unlock (C)
Lock-X (B)
Read B;
B = B + 100;
Write B;
Unlock (B)

The schedule is theoretically correct, but a very strange kind of problem may arise here. T1 releases the exclusive lock on A, and immediately after that a context switch is made. T2 acquires a shared lock on A to read its value, performs a calculation, updates the content of account C and then issues COMMIT. However, T1 is not finished yet. What if the remaining portion of T1 encounters a problem (power failure, disk failure, etc.) and cannot be committed? Then T1 would have to be rolled back, and T2, which has already used the value written by T1, would have to be rolled back along with it. This is a cascading rollback, which strict two phase locking prevents by holding exclusive locks until commit.
Rigorous Two Phase Locking Protocol: In this protocol, a transaction is not allowed to release any lock, either shared or exclusive, until it commits.
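As a rough illustration of how a lock manager might enforce the Lock Matrix and the two-phase rule, here is a small Python sketch. The LockManager class, its method names and the 'S'/'X' mode strings are invented for this example; a real lock manager would also need wait queues, deadlock handling and lock upgrades.

```python
# Minimal sketch of a lock table enforcing the Lock Matrix above and the two-phase
# rule (no new lock may be acquired after the first release). Illustrative only.

class TwoPhaseError(Exception):
    pass

class LockManager:
    def __init__(self):
        self.locks = {}         # data item -> (mode, set of holders)
        self.shrinking = set()  # transactions already past their Lock Point

    def acquire(self, txn, item, mode):            # mode is 'S' or 'X'
        if txn in self.shrinking:
            raise TwoPhaseError(f"{txn} cannot acquire locks in its shrinking phase")
        held_mode, holders = self.locks.get(item, (None, set()))
        # Lock Matrix: the request is compatible only if nothing is held,
        # the requester is the sole holder, or both modes are shared.
        if not (held_mode is None or holders == {txn} or (held_mode == 'S' and mode == 'S')):
            return False                            # caller must wait (or abort)
        new_mode = 'X' if 'X' in (mode, held_mode) else 'S'
        self.locks[item] = (new_mode, holders | {txn})
        return True

    def release(self, txn, item):
        mode, holders = self.locks.get(item, (None, set()))
        holders.discard(txn)
        if not holders:
            self.locks.pop(item, None)
        self.shrinking.add(txn)                     # first release: shrinking phase begins

lm = LockManager()
print(lm.acquire('T1', 'A', 'S'), lm.acquire('T2', 'A', 'S'))   # True True (S is compatible with S)
print(lm.acquire('T2', 'A', 'X'))                               # False (T1 still holds a shared lock)
lm.release('T1', 'A')
try:
    lm.acquire('T1', 'B', 'S')                     # violates the two-phase rule
except TwoPhaseError as e:
    print(e)
```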
2. Concurrency Control by Timestamps
A timestamp is a unique identifier assigned to each transaction, based on its age, i.e., on the order in which transactions enter the system. Timestamps can be generated in two ways:
i) System Clock: when a transaction enters the system, it is assigned a timestamp equal to the current value of the system clock.
ii) Logical Counter: when a transaction enters the system, it is assigned a timestamp from a counter that is incremented each time a new transaction arrives.
If Ts(Ta) < Ts(Tb), it can be said that Ta executes before Tb, and the protocol must produce a result equivalent to that serial order. For this purpose each data item x carries two values: RTS(x), the largest timestamp of any transaction that has successfully read x, and WTS(x), the largest timestamp of any transaction that has successfully written x.

1) If Ta executes a read(x) instruction, the following two cases must be considered:
i) Ts(Ta) < WTS(x)
ii) Ts(Ta) >= WTS(x)
Case 1: If transaction Ta wants to read a value of data item x that has already been overwritten by some younger transaction, then Ta cannot perform the read operation. The read is rejected, and Ta must be rolled back and restarted with a new timestamp.
Case 2: If the value of x has not been overwritten by a younger transaction, then Ta can execute the read operation. Once the value has been read, the read timestamp RTS(x) is set to the larger of RTS(x) and Ts(Ta).

2) If Ta executes a write(x) instruction, the following three cases must be considered:
i) Ts(Ta) < RTS(x)
ii) Ts(Ta) < WTS(x)
iii) Ts(Ta) >= RTS(x) and Ts(Ta) >= WTS(x)
Case 1: If transaction Ta wants to write a new value of some data item x on which a read operation has already been performed by some younger transaction, then Ta cannot execute the write. The value being produced by Ta was needed previously, and the system assumed it would never be produced; the write operation is therefore rejected, and Ta must be rolled back and restarted with a new timestamp value.
Case 2: If transaction Ta wants to write a new value to some data item x that has already been overwritten by a younger transaction, then the value being written by Ta is obsolete and would lead to inconsistency of the data item. Therefore, the write operation is rejected and Ta is rolled back and restarted with a new timestamp value.
Case 3: If transaction Ta wants to write a new value to some data item x that has not been read or overwritten by a younger transaction, then Ta can execute the write operation. Once the value has been written, WTS(x) is set to Ts(Ta).
Example: the schedule above can be executed under the timestamp protocol when Ts(T1) < Ts(T2).

3. Concurrency Control by Validation
Validation techniques are also called optimistic techniques.
> If read-only transactions are executed without employing any concurrency control mechanism, the result may be generated in an inconsistent state. However, if concurrency control schemes are used, the execution of transactions may be delayed and overhead may result. To avoid such issues, an optimistic concurrency control mechanism is used that reduces the execution overhead.
> The problem in reducing the overhead is that prior knowledge about which transactions will conflict is not available. Therefore, a mechanism called "monitoring" the system is required to gain such knowledge.
Let us consider that every transaction Ta is executed in two or three phases during its lifetime. The phases involved in optimistic concurrency control are:
1) Read Phase
2) Validation Phase
3) Write Phase
Read Phase: In this phase, copies of the data items (their values) are stored in local variables, and the modifications are made to these local variables; the actual values of the data items are not modified in this phase.
Validation Phase: This phase follows the read phase. Here the assurance of serializability is checked for each update: if conflicts with other transactions are detected, the transaction is aborted, else it is committed.
Write Phase: The successful completion of the validation phase leads to the write phase, in which all the changes are made to the original copy of the data items. This phase is applicable only to read-write transactions.
Each transaction is assigned three timestamps, as follows:
i) I(T): when its execution is initiated,
ii) V(T): at the start of its validation phase,
iii) E(T): at the end of its write phase, i.e., when the transaction completes.

Qualifying conditions for successful validation:
Consider two transactions Ta and Tb, and let the timestamp of Ta be less than the timestamp of Tb, i.e., Ts(Ta) < Ts(Tb). Then one of the following must hold:
1) Transaction Ta must complete its execution before transaction Tb starts, i.e., E(Ta) < I(Tb).
2) The set of data items written by transaction Ta must not intersect the set of data items read by transaction Tb, and Ta must complete its write phase before Tb starts its validation phase, i.e., I(Tb) < E(Ta) < V(Tb).
3) If transaction Tb starts its execution before transaction Ta completes, then the write phase of transaction Ta must be finished before transaction Tb starts its validation phase.
Advantages:
i) The efficiency of optimistic techniques lies in the scarcity of conflicts.
ii) They do not cause significant delays.
iii) Cascading rollbacks never occur.
Disadvantages:
i) Processing time is wasted during the rollback of aborting transactions, which can be very long.
Note: when one process is in its critical section (a portion of its code), no other process is allowed to enter; this is the principle of mutual exclusion.
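To illustrate how the first two qualifying conditions might be checked, here is a small Python sketch. The Txn class, its fields and the validate function are invented for this example and only model the timestamps I(T), V(T), E(T) and the read/write sets; a real validator would also handle the third condition and the order of concurrent validations.

```python
# Minimal sketch of the validation test for optimistic concurrency control.
# Only the first two qualifying conditions above are modelled; the data
# structures are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Txn:
    name: str
    start: int                 # I(T): execution initiated
    validation: int            # V(T): validation phase begins
    end: int                   # E(T): write phase ends (transaction completes)
    read_set: set = field(default_factory=set)
    write_set: set = field(default_factory=set)

def validate(tb, older_transactions):
    """Return True if Tb may commit, given every Ta with Ts(Ta) < Ts(Tb)."""
    for ta in older_transactions:
        if ta.end < tb.start:
            continue    # condition 1: Ta finished before Tb started
        if ta.end < tb.validation and not (ta.write_set & tb.read_set):
            continue    # condition 2: write set of Ta disjoint from read set of Tb
        return False    # otherwise Tb must be aborted and restarted
    return True

t1 = Txn('T1', start=1, validation=4, end=5, write_set={'A'})
t2 = Txn('T2', start=3, validation=6, end=7, read_set={'A'})
print(validate(t2, [t1]))   # False: T1 wrote A after T2 started, and T2 read A
```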
