Read/Write Locks in Java (ReentrantReadWriteLock)

18 October 2015
By Gonçalo Marques
This article covers Read/Write locks in Java (ReentrantReadWriteLock implementation)

Introduction

Read/Write locks - also known as Shared/Exclusive locks - are designed for use cases where an application allows multiple processes to read some piece of data simultaneously, while restricting write access to that same data to a single process at a time.

Since read operations do not change the data being read, it is reasonable to allow multiple processes to read it at the same time, thus improving the application's concurrency and throughput. Each time a given process needs to read the information it must acquire a Read lock, which is granted if no other process currently holds the Write lock (the one a process needs to acquire before any write operation). Multiple processes may access the information at the same time if they only need the read lock (read access is shared).

When a process needs to write to the shared data it must acquire a Write lock. The lock is granted only if no other process currently holds a lock on the shared data, be it Read or Write (write access is exclusive). It should be clear by now that a read/write lock is suitable for scenarios where read operations are frequent and write operations are sporadic. If we spend too much time in exclusive lock mode because of a large number of write operations, the application will not gain the throughput that would otherwise be provided by simultaneous data access through shared read locks.

In Java this is provided by the ReadWriteLock interface, whose default implementation - bundled with the JDK - is ReentrantReadWriteLock.

The Read/Write lock (ReentrantReadWriteLock)

Let's start with an illustrative ReentrantReadWriteLock usage. Keep in mind that the following example only clarifies the basic read/write lock semantics and may not be a suitable use case for a shared lock (we will see why throughout the article).

Shared read/write lock semantics

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedIntegerArray {

  private final int[] integerArray = new int[10];
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private final Lock readLock = lock.readLock();
  private final Lock writeLock = lock.writeLock();

  public void write(int value, int index) {
    writeLock.lock();
    try {
      integerArray[index] = value;
    } finally {
      writeLock.unlock();
    }
  }

  public int read(int index) {
    readLock.lock();
    try {
      return integerArray[index];
    } finally {
      readLock.unlock();
    }
  }

}

The ReadWriteLock interface provides methods for obtaining read and write lock instances: the readLock() and writeLock() methods respectively. These locks must be obtained from the same ReadWriteLock instance. Both locks expose a lock() and unlock() method. Code that is between calls to lock() and unlock() is subject to read/write lock semantics. In this example, multiple processes may acquire the read lock and simultaneously fetch values from the integer array, but only if there is no process that holds the write lock.

The write operation needs to acquire the write lock, which is granted only if there is no other process that already holds the read or write locks: once again and to make it clear, write access is exclusive. No other process may be reading or writing when a given process holds the write lock.
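To make the shared/exclusive semantics concrete, here is a small self-contained sketch (the class SharedAccessDemo and its probe() method are ours, not from the article): while one thread holds the read lock, a second thread can still obtain the read lock with tryLock(), but an attempt to take the write lock fails.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedAccessDemo {

  // Returns {second reader acquired, writer acquired}
  // while the calling thread holds a read lock
  static boolean[] probe() throws InterruptedException {
    ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    Lock readLock = lock.readLock();

    boolean[] results = new boolean[2];
    readLock.lock(); // this thread now holds a read lock
    try {
      Thread other = new Thread(() -> {
        // Shared access: a second read lock is granted
        results[0] = lock.readLock().tryLock();
        if (results[0]) {
          lock.readLock().unlock();
        }
        // Exclusive access: the write lock is refused
        // while another thread holds a read lock
        results[1] = lock.writeLock().tryLock();
        if (results[1]) {
          lock.writeLock().unlock();
        }
      });
      other.start();
      other.join();
    } finally {
      readLock.unlock();
    }
    return results;
  }

  public static void main(String[] args) throws InterruptedException {
    boolean[] r = probe();
    System.out.println("second reader acquired: " + r[0]);
    System.out.println("writer acquired: " + r[1]);
  }
}
```

Running this prints "second reader acquired: true" and "writer acquired: false", which is exactly the shared/exclusive behavior described above.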

A lock acquisition should always be immediately followed by a try/finally block, where the finally section releases the lock by calling the unlock() method. This way we guarantee that the lock is always released no matter what happens inside the protected code section (e.g. an unexpected exception).

As we have said, this was just an illustrative example of ReentrantReadWriteLock usage and may not reflect a real usage scenario. The reason is that in this example both read and write operations are very fast: we are directly setting or reading a position in an integer array. Real read/write lock usage is meant for scenarios where read operations have some degree of complexity and may take a considerable amount of time to complete; allowing multiple processes to execute those non-trivial read operations at the same time is what really raises the application's throughput.

The vast majority of read/write lock implementations still serialize access at some point in their internals: they must update the internal lock state, for example to keep track of whether any thread currently holds a lock and, if so, which kind of lock is held (read or write) and by which thread(s). If you inspect the source code of ReentrantReadWriteLock you will see that the lock state is updated by means of compare-and-swap operations against fields of an internal synchronization data structure.

With this in mind one may conclude that if read and write operations are extremely fast - like our integer array example - it may be more suitable to use an exclusive access locking mechanism instead (a regular ReentrantLock, synchronized wrappers or even resort to synchronized primitives if the protected code sections are contiguous). The access will be serialized anyway and we don't have the read/write locking overhead penalty.
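For instance, the integer array example above could use a plain ReentrantLock instead (a hypothetical variant we name ExclusiveIntegerArray), avoiding the read/write bookkeeping overhead for such trivial operations:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ExclusiveIntegerArray {

  private final int[] integerArray = new int[10];
  // A single exclusive lock: simpler and cheaper when the
  // protected operations are extremely fast
  private final Lock lock = new ReentrantLock();

  public void write(int value, int index) {
    lock.lock();
    try {
      integerArray[index] = value;
    } finally {
      lock.unlock();
    }
  }

  public int read(int index) {
    lock.lock();
    try {
      return integerArray[index];
    } finally {
      lock.unlock();
    }
  }
}
```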

Regarding the Java Memory Model, it's also important to note that ReentrantReadWriteLock establishes a happens-before relationship between the release of a lock and a subsequent acquisition of that lock: actions performed while holding the write lock are visible to threads that subsequently acquire the read (or write) lock. More information about the happens-before relationship can be found in the following article: Java volatile example.
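As a sketch of this guarantee (the VisibilityDemo class is ours): a plain, non-volatile field written under the write lock is guaranteed to be visible to a thread that later acquires the read lock. Note that join() in this sketch also establishes ordering between the two threads; the point is that the write-lock release / read-lock acquire pair alone already provides the visibility guarantee.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class VisibilityDemo {

  private static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private static int sharedValue; // deliberately not volatile

  static int writeThenRead() throws InterruptedException {
    Thread writer = new Thread(() -> {
      lock.writeLock().lock();
      try {
        sharedValue = 42; // plain write, protected by the write lock
      } finally {
        lock.writeLock().unlock();
      }
    });
    writer.start();
    writer.join(); // the writer has released the write lock at this point

    // The write-lock release happens-before this read-lock acquisition,
    // so the value written above is guaranteed to be visible here
    lock.readLock().lock();
    try {
      return sharedValue;
    } finally {
      lock.readLock().unlock();
    }
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println("observed: " + writeThenRead());
  }
}
```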

Lock fairness

The ReentrantReadWriteLock instance may be constructed with an optional boolean fairness parameter:


ReadWriteLock lock = new ReentrantReadWriteLock(true);

By default, fairness is set to false. This means that locks are granted to waiting threads in an unspecified order. This mode of operation - unfair - provides the highest throughput but may also raise problems under some circumstances: if read operations are very frequent and long-running, writer threads may be left waiting indefinitely until no other thread holds any lock, so they can enter the protected code in exclusive mode (a situation known as writer starvation).

This is where lock fairness comes into play. If a lock is configured as fair, locks are granted to waiting threads in an approximately first-in, first-out order: if a thread requests a read lock, the lock is granted only if no other thread is already waiting for the write lock. If another thread is already waiting for the write lock, the read lock will only be granted after that thread executes the protected section or abandons the waiting queue in the meantime. This does not apply to writing threads that arrive after the reading thread is already waiting: in that case, the writing threads will not get in front of the already waiting reading thread.

Still in fair mode, a write lock request is granted when no other thread currently holds any lock - read or write - and no other thread is ahead of the writing thread in the waiting queue; reading threads that arrived at the queue after the writing thread will wait behind it.

As we may expect, fairness introduces a higher degree of complexity and consequently a performance penalty under some circumstances. Consider the following scenario:

  1. Thread A is currently holding a write lock and executing a protected section
  2. Thread B is waiting for a read lock to that protected section
  3. Thread A completes its execution and releases the lock, but at this precise moment thread C arrives and requests a write lock

In this scenario, in terms of throughput, it may be less expensive to let thread C acquire the lock and get in front of thread B, because we avoid the overhead of suspending - and later resuming - thread C. If the lock is configured as fair, thread C will only be granted the lock after thread B completes its execution, and the overhead of suspending and resuming thread C is added to the overall execution time.

Acquire timeout

Another factor that must be considered is the lock acquire timeout. An acquire operation may be accompanied with a timeout:


// Try to acquire and give up immediately if the lock
// is not available at this precise moment
boolean acquired = readLock.tryLock();

// Try to acquire and give up if the lock is not
// available within 10 seconds (this variant may
// throw InterruptedException while waiting)
boolean acquiredWithTimeout = readLock.tryLock(10, TimeUnit.SECONDS);

When a timeout duration is specified, the time unit must also be provided (e.g. seconds, milliseconds, etc.). Both methods return a boolean indicating whether the lock was acquired, so the return value should always be checked. Note that the no-argument tryLock() does not respect the fairness policy: the lock is granted immediately if it is available, disregarding the fact that other threads may already be waiting for it. The timed tryLock(long, TimeUnit) variant, on the other hand, does honor the fairness setting.
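A common usage pattern - checking the boolean result and handling interruption for the timed variant - could look like the following sketch (the TryLockDemo class and readWithTimeout() method are illustrative names of ours):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class TryLockDemo {

  private static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  // Attempts a timed read-lock acquisition; returns true if the
  // protected section was executed, false otherwise
  static boolean readWithTimeout() {
    Lock readLock = lock.readLock();
    try {
      // Give up if the lock is not available within 10 seconds
      if (readLock.tryLock(10, TimeUnit.SECONDS)) {
        try {
          // ... read the protected data ...
          return true;
        } finally {
          readLock.unlock();
        }
      }
      return false; // the lock was not granted within the timeout
    } catch (InterruptedException e) {
      // Restore the interrupt status so callers can observe it
      Thread.currentThread().interrupt();
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println("acquired: " + readWithTimeout());
  }
}
```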

Lock Upgrade and Downgrade

In order to understand lock upgrade and downgrade it's essential to know about lock reentrancy. Reentrancy is the ability to re-acquire a lock that we already hold. A reentrant lock is only released when the unlock operation has been called exactly the same number of times as the lock operation, i.e. if a thread holds a lock and re-acquires it while still holding it, it will need to call unlock twice in order to fully release it.
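The reentrancy rule can be observed directly through ReentrantReadWriteLock's monitoring methods, getWriteHoldCount() and isWriteLockedByCurrentThread() (the ReentrancyDemo class is a sketch of ours):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReentrancyDemo {

  // Acquires the write lock twice and releases it twice, returning
  // the hold count observed after the second acquisition
  static int demo() {
    ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    lock.writeLock().lock();              // first acquisition
    lock.writeLock().lock();              // reentrant acquisition
    int holds = lock.getWriteHoldCount(); // 2 holds by the current thread

    lock.writeLock().unlock();            // one unlock is not enough...
    boolean stillHeld = lock.isWriteLockedByCurrentThread(); // ...still held

    lock.writeLock().unlock();            // now fully released
    return stillHeld ? holds : -1;
  }

  public static void main(String[] args) {
    System.out.println("write holds after reentrant lock: " + demo());
  }
}
```

Running this prints "write holds after reentrant lock: 2", confirming that the lock remains held until unlock has been called as many times as lock.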

Reentrancy in ReentrantReadWriteLock also covers acquiring the read lock while still holding the write lock. If we release the write lock in this situation, we will still be holding the read lock: the lock is downgraded from write mode to read mode. The opposite operation - acquiring the write lock while still holding the read lock - is not supported by ReentrantReadWriteLock. In order to upgrade the lock from read to write, the read lock must first be released.

Lock downgrade and upgrade may be useful in scenarios where protected data is written in a lazy fashion, i.e. we detect that the data needs to be written while we are reading it:

Read/write lock upgrade and downgrade

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedIntegerArray {

  private int[] integerArray = new int[10];
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private final Lock readLock = this.lock.readLock();
  private final Lock writeLock = this.lock.writeLock();
  private volatile boolean reloadArray = true;

  public void useIntegerArray() {

    // Acquire read lock to prevent other threads from writing to the shared array
    readLock.lock();
    try {

      // Check if the array needs to be reloaded (first check)
      if (reloadArray) {

        // Release the read lock
        readLock.unlock();
        // Acquire the write lock (upgrade)
        writeLock.lock();
        try {

          // Check if the array still needs to be reloaded (second check).
          // Second check is needed because multiple threads may detect that the
          // array needs to be reloaded in the first check, and another thread
          // may have already acquired the write lock and updated the array
          if (reloadArray) {
            // reload the array
            reloadArray();
            reloadArray = false;
          }

          // Acquire the read lock before releasing the write lock (downgrade)
          readLock.lock();

        } finally {
          writeLock.unlock();
        }

      }

      // use the array
      useArray();

    } finally {
      readLock.unlock();
    }
  }

  private void reloadArray() {
    // Reload the integer array
  }

  private void useArray() {
    // Use the integer array
  }

  // Called by an external thread to trigger array reload
  public void forceArrayUpdate() {
    reloadArray = true;
  }

}

Conclusion

As we have seen, there are many variables and trade-offs in ReentrantReadWriteLock usage. As expected, if we need the lock to be stricter - or fair - there will be an associated performance or contention penalty. Every scenario is different, so we should always profile the lock usage throughput in order to assess whether it meets our application's requirements.

About the author
Gonçalo Marques is a Software Engineer with several years of experience in software development and architecture definition. During this period his main focus was delivering software solutions in banking, telecommunications and governmental areas. He created the Bytes Lounge website with one ultimate goal: share his knowledge with the software development community. His main area of expertise is Java and open source.

GitHub profile: http://github.com/gonmarques

He is also the author of the WiFi File Browser Android application.