
September 09, 2014

Implementing 'Dirty Reads'

I was recently asked about "dirty reads." The term refers to coding an application's SELECT statement to avoid taking locks, allowing it to read and process uncommitted data.
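For illustration, here's what a statement-level dirty read looks like in DB2 SQL. The WITH UR clause is standard DB2 syntax; the table and column names here are hypothetical:

    -- WITH UR runs this one query at the uncommitted read isolation
    -- level: DB2 takes no read locks, and the result may include rows
    -- changed by transactions that have not yet committed.
    SELECT ACCOUNT_ID, BALANCE
    FROM ACCOUNT_SUMMARY
    WITH UR;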

Of course, the benefit of doing this is reduced CPU cost, which can add up to significant savings. As the name implies, however, dirty reads come with risk: they can lead to failed transactions due to discrepancies in the data. So before implementing dirty reads, you must understand how your data is being used and whether discrepancies can be tolerated.

In DB2, locking behavior is controlled through the isolation level in effect when the statement is executed. I like this statement from the manual "DB2 11 for z/OS: Managing Performance":

"Various isolation levels offer less or more concurrency at the cost of more or less protection from other application processes."

The manual also includes a list of the isolation levels -- uncommitted read (UR), cursor stability (CS), read stability (RS) and repeatable read (RR) -- with a description of each.
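As a sketch of the two most common ways to choose an isolation level (the collection, member and table names below are hypothetical), you can bind an entire package with a default isolation and then override it for individual statements:

    -- Bind time: every statement in the package runs with UR unless
    -- overridden. (This is a DSN BIND subcommand, not SQL.)
    BIND PACKAGE(MYCOLL) MEMBER(MYPROG) ISOLATION(UR)

    -- Statement level: the isolation clause overrides the package
    -- default for this query only.
    SELECT ORDER_NO, STATUS
    FROM ORDER_HEADER
    WHERE STATUS = 'OPEN'
    WITH CS;  -- cursor stability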

Warning: Dirty reads -- aka the uncommitted read (UR) isolation level -- should never be used with transactions where the data must always be accurate and up-to-date (e.g., banking transactions or inventory data). To a DBA this may seem obvious, but I've worked at places where some developers, enticed by how fast programs can run at the UR level, decided on their own to apply it to business transactions. As powerful as UR can be, it's simply an unacceptable choice outside of development. CPU savings are not worth jeopardizing customer relationships or compromising your own critical business data.

Do you use dirty reads? When do you find them beneficial? Please share your experiences in the comments.