
Concurrency problems in DBMS Transactions


Concurrency control is an essential aspect of database management systems (DBMS) that ensures transactions can execute concurrently without interfering with each other. However, concurrency control can be challenging to implement, and without it, several problems can arise, affecting the consistency of the database. In this article, we will discuss some of the concurrency problems that can occur in DBMS transactions and explore solutions to prevent them.

When multiple transactions execute concurrently in an uncontrolled or unrestricted manner, several problems can arise. These problems are commonly referred to as concurrency problems in a database environment.

The five concurrency problems that can occur in the database are: 

  • Temporary Update Problem
  • Incorrect Summary Problem
  • Lost Update Problem
  • Unrepeatable Read Problem
  • Phantom Read Problem

Each of these problems is explained below.

Temporary Update Problem: 

The temporary update (dirty read) problem occurs when one transaction updates a data item and then fails, but another transaction reads the updated item before it is rolled back to its previous value.

Example: 

Suppose transaction 1 updates a data item X and then fails for some reason, so X is rolled back to its previous value. Transaction 2, however, has already read the uncommitted value of X and has acted on data that was never committed.
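The following is a minimal sketch of this interleaving, assuming a plain Python dictionary stands in for the database and the steps of the two transactions are interleaved by hand; the item name X and the values used are purely illustrative.

```python
# Illustrative dirty-read interleaving; a dict stands in for the database.
db = {"X": 100}

# T1: updates X but has not committed yet
t1_old_x = db["X"]
db["X"] = t1_old_x - 50              # uncommitted write by T1

# T2: reads X and sees T1's uncommitted value (the dirty read)
t2_x = db["X"]
print("T2 read X =", t2_x)           # 50

# T1: fails and rolls back, restoring the previous value
db["X"] = t1_old_x
print("X after T1 rollback =", db["X"])        # 100
print("T2 used a value that was never committed:", t2_x)
```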

Incorrect Summary Problem: 

Consider a situation where one transaction is applying an aggregate function (such as a sum) to a set of records while another transaction is updating those records. The aggregate function may read some values before they have been updated and others after they are updated.

Example:

Suppose transaction 2 is calculating the sum of a set of records while transaction 1 is updating them. The sum may include some values read before transaction 1's updates and others read after, so the total corresponds to no consistent state of the database.
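Here is a minimal sketch of this anomaly, assuming three items A, B, and C and a transfer of 50 between two of them; the names and amounts are illustrative only.

```python
# Illustrative incorrect-summary interleaving; a dict stands in for the database.
db = {"A": 100, "B": 200, "C": 300}

# T2: starts summing the items one at a time
total = db["A"]                      # reads A = 100 (before T1's transfer)

# T1: transfers 50 from A to C and commits while T2 is still summing
db["A"] -= 50
db["C"] += 50

# T2: finishes the sum using the already-updated values
total += db["B"]                     # 200
total += db["C"]                     # reads C = 350 (after T1's transfer)

print("Sum computed by T2:", total)                       # 650
print("Sum in any consistent state:", sum(db.values()))   # 600
```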

Lost Update Problem: 

In the lost update problem, an update done to a data item by a transaction is lost as it is overwritten by the update done by another transaction. 

Example: 

Suppose transactions 1 and 2 both read X before either writes it back. Transaction 2 writes its new value of X first, but that value is then overwritten when transaction 1 writes X based on the stale value it read earlier. The update made by transaction 2 is therefore lost: the last write silently overwrites all earlier writes.
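A minimal sketch of the lost update, again assuming a dictionary stands in for the database and the interleaving is written out by hand; the deltas +10 and -20 are arbitrary.

```python
# Illustrative lost-update interleaving; a dict stands in for the database.
db = {"X": 100}

# Both transactions read X before either one writes it back
t1_x = db["X"]
t2_x = db["X"]

# T2 writes its update first
db["X"] = t2_x + 10                  # X = 110

# T1 then writes based on its stale read, overwriting T2's update
db["X"] = t1_x - 20                  # X = 80; T2's +10 is lost

print("Final X:", db["X"])           # 80 instead of the serial result 90
```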

Unrepeatable Read Problem: 

The unrepeatable read problem occurs when two or more read operations of the same transaction read different values of the same data item.

Example: 

Suppose transaction 2 reads the data item X, and a write operation in transaction 1 then changes the value of X. When transaction 2 reads X again within the same transaction, it sees the new value written by transaction 1, so its two reads of the same item do not agree.
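A minimal sketch of this interleaving, under the same illustrative assumptions as above (a dictionary as the database, hand-interleaved steps):

```python
# Illustrative unrepeatable-read interleaving; a dict stands in for the database.
db = {"X": 100}

# T2: first read of X
first_read = db["X"]

# T1: updates X (and commits) between T2's two reads
db["X"] = 75

# T2: reads X again within the same transaction
second_read = db["X"]

print(first_read, second_read)       # 100 75 -> the same read gives two values
```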

Phantom Read Problem: 

The phantom read problem occurs when a transaction reads a data item once, but when it tries to read the same item again, the item no longer exists, because another transaction has deleted it in the meantime.

Example: 

Suppose transaction 2 reads the data item X, and transaction 1 then deletes X without transaction 2's knowledge. When transaction 2 tries to read X again, the item is no longer there.
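A minimal sketch of this interleaving, using the same illustrative in-memory "database":

```python
# Illustrative phantom-read interleaving; a dict stands in for the database.
db = {"X": 100, "Y": 200}

# T2: first read of X succeeds
first_read = db["X"]

# T1: deletes X (and commits) while T2 is still running
del db["X"]

# T2: tries to read X again and finds that it no longer exists
second_read = db.get("X")            # None: the item has vanished

print(first_read, second_read)       # 100 None
```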

 

Solution:

To prevent concurrency problems in DBMS transactions, several concurrency control techniques can be used, including locking, timestamp ordering, and optimistic concurrency control.

Locking involves acquiring locks on the data items used by transactions, preventing other transactions from accessing the same data until the lock is released. There are different types of locks, such as shared and exclusive locks, and they can be used to prevent Dirty Read and Non-Repeatable Read.
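The sketch below shows the idea behind exclusive locking, using Python's threading.Lock as a stand-in for an exclusive lock on a single item; it is not how a real DBMS lock manager works, but it shows why holding the lock across the whole read-modify-write prevents the lost update above.

```python
import threading

db = {"X": 100}
x_lock = threading.Lock()            # stands in for an exclusive lock on item X

def update_x(delta):
    # Acquire the lock before reading and release it only after writing,
    # so the read-modify-write on X cannot interleave with another one.
    with x_lock:
        value = db["X"]
        db["X"] = value + delta

threads = [threading.Thread(target=update_x, args=(d,)) for d in (+10, -20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(db["X"])                       # always 90; neither update is lost
```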

Timestamp ordering assigns a unique timestamp to each transaction and ensures that transactions execute in timestamp order. Timestamp ordering can prevent Non-Repeatable Read and Phantom Read.
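A minimal sketch of the basic timestamp-ordering rule, assuming each item tracks the largest read and write timestamps and that a transaction arriving "too late" is simply aborted; the Item class and timestamps here are illustrative, not a real scheduler.

```python
class Item:
    def __init__(self, value):
        self.value = value
        self.read_ts = 0     # largest timestamp that has read this item
        self.write_ts = 0    # largest timestamp that has written this item

def read(item, ts):
    if ts < item.write_ts:                      # written by a younger transaction
        raise RuntimeError(f"abort T{ts}: read too late")
    item.read_ts = max(item.read_ts, ts)
    return item.value

def write(item, value, ts):
    if ts < item.read_ts or ts < item.write_ts:
        raise RuntimeError(f"abort T{ts}: write too late")
    item.value, item.write_ts = value, ts

x = Item(100)
print(read(x, ts=2))      # T2 reads X
write(x, 75, ts=3)        # T3 writes X
try:
    write(x, 50, ts=1)    # older T1 arrives late and is aborted
except RuntimeError as err:
    print(err)            # abort T1: write too late
```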

Optimistic concurrency control assumes that conflicts between transactions are rare and allows transactions to proceed without acquiring locks initially. If a conflict is detected, the transaction is rolled back, and the conflict is resolved. Optimistic concurrency control can prevent Dirty Read, Non-Repeatable Read, and Phantom Read.
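A minimal sketch of the optimistic (validate-at-commit) idea, assuming each row carries a version number that is checked during a validation phase; the read/commit helpers are hypothetical and stand in for a real OCC implementation.

```python
# One row with a version number; a dict stands in for the database.
db = {"X": {"value": 100, "version": 1}}

def read(key):
    row = db[key]
    return row["value"], row["version"]

def commit(key, new_value, version_read):
    row = db[key]
    if row["version"] != version_read:   # validation phase: conflict detected
        return False                     # caller must roll back and retry
    row["value"] = new_value             # write phase
    row["version"] += 1
    return True

# T1 and T2 both read X optimistically, without taking any locks
v1, ver1 = read("X")
v2, ver2 = read("X")

print(commit("X", v1 + 10, ver1))   # True: T1 validates and commits
print(commit("X", v2 - 20, ver2))   # False: T2 fails validation, must retry
```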

In conclusion, concurrency control is crucial in DBMS transactions to ensure data consistency and prevent concurrency problems such as Dirty Read, Non-Repeatable Read, and Phantom Read. By using techniques like locking, timestamp ordering, and optimistic concurrency control, developers can build robust database systems that support concurrent access while maintaining data consistency.

