Concurrency - Latches (System Lock)

1 - About

Latches are similar to semaphores. Latches guarantee the physical consistency of data, while locks ensure its logical consistency.

Latches are simple, low-level system locks (serialization mechanisms) that coordinate multi-user access (concurrency) to shared data structures, objects, and files.

Latches protect shared memory resources from corruption when accessed by multiple processes. Specifically, latches protect data structures from the following situations:

  • Concurrent modification by multiple sessions
  • Being read by one session while being modified by another session
  • Deallocation (aging out) of memory while being accessed

Typically, a single latch protects multiple objects in the SGA.
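
The protections above can be sketched as follows (a hypothetical Python sketch: threading.Lock stands in for a real low-level latch, and page_table/free_list are invented names for the multiple shared structures a single latch might guard):

```python
import threading

# One latch serializing access to several related shared structures,
# much as a single SGA latch protects multiple objects.
latch = threading.Lock()   # stand-in for a low-level latch
page_table = {}            # shared structures guarded by the latch
free_list = []

def insert_page(page_id, frame):
    # Hold the latch only for the brief structure manipulation,
    # never across long operations such as I/O.
    with latch:
        page_table[page_id] = frame
        if frame in free_list:
            free_list.remove(frame)

threads = [threading.Thread(target=insert_page, args=(i, i)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(page_table))  # 100: no insert was lost to concurrent modification
```

Without the latch, two sessions could interleave their read-modify-write steps on the same structures, which is exactly the corruption the bullet list describes.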

The implementation of latches is operating system-dependent, particularly with respect to whether, and for how long, a process waits for a latch.

As an auxiliary to locks, lighter-weight latches are also provided for mutual exclusion. Latches are more akin to monitors or semaphores than locks; they are used to provide exclusive access to internal data structures.

As an example in a database, the buffer pool page table has a latch associated with each frame, to guarantee that only one DBMS thread is replacing a given frame at any time. Latches are used in the implementation of locks and to briefly stabilize internal data structures potentially being concurrently modified.
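
The per-frame latching of the buffer pool page table can be sketched as follows (hypothetical Python sketch; the class and method names are invented for illustration):

```python
import threading

class BufferPool:
    """Buffer pool whose page table keeps one latch per frame."""
    def __init__(self, n_frames):
        self.frames = [None] * n_frames
        # One latch per frame: replacements of different frames can
        # proceed in parallel; only same-frame replacements serialize.
        self.latches = [threading.Lock() for _ in range(n_frames)]

    def replace(self, frame_no, page):
        # Guarantee that only one thread replaces a given frame at a time.
        with self.latches[frame_no]:
            self.frames[frame_no] = page

pool = BufferPool(4)
pool.replace(2, "page-42")
print(pool.frames[2])  # page-42
```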


3 - Latch vs Lock

Latches differ from locks in a number of ways:

  • Locks are kept in the lock table and located via hash tables; latches reside in memory near the resources they protect, and are accessed via direct addressing.
  • Lock acquisition is entirely driven by data access, and hence the order and lifetime of lock acquisitions is largely in the hands of applications and the query optimizer. Latches are acquired by specialized code inside the DBMS, and the DBMS internal code issues latch requests and releases strategically.
  • Locks are allowed to produce deadlock, and lock deadlocks are detected and resolved via transactional restart. Latch deadlock must be avoided; the occurrence of a latch deadlock represents a bug in the DBMS code.
  • Latches are implemented using an atomic hardware instruction or, in rare cases where such an instruction is not available, via mutual exclusion in the OS kernel.
  • Latch calls take at most a few dozen CPU cycles whereas lock requests take hundreds of CPU cycles.
  • The lock manager tracks all the locks held by a transaction and automatically releases the locks in case the transaction throws an exception, but internal DBMS routines that manipulate latches must carefully track them and include manual cleanup as part of their exception handling.
  • Latches are not tracked and so cannot be automatically released if the task faults.
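
The last three points can be sketched together (a hypothetical Python sketch: CPython exposes no user-level compare-and-swap, so Lock.acquire(blocking=False) stands in for the single atomic test-and-set instruction, and update_structure is an invented name for an internal DBMS routine):

```python
import threading

class Latch:
    """A latch as a thin wrapper over an atomic test-and-set."""
    def __init__(self):
        self._flag = threading.Lock()

    def try_acquire(self):
        # One "instruction": either we got the latch or we did not.
        # No lock table, no queue, no tracking by a lock manager.
        return self._flag.acquire(blocking=False)

    def release(self):
        self._flag.release()

def update_structure(latch, work):
    """Internal routine: must release the latch manually on any exit path."""
    if not latch.try_acquire():
        raise RuntimeError("latch busy")
    try:
        work()
    finally:
        # Manual cleanup: unlike locks, no one releases latches for us
        # if this routine faults.
        latch.release()

latch = Latch()
print(latch.try_acquire())  # True: first caller wins
print(latch.try_acquire())  # False: latch busy, caller must retry
latch.release()
```

The try/finally block is the code-level counterpart of the bullet above: any routine that faults while holding a latch must release it itself.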

4 - Example with Oracle

Background processes such as DBWn and LGWR allocate memory from the shared pool to create data structures. To allocate this memory, these processes use a shared pool latch that serializes access to prevent two processes from trying to inspect or modify the shared pool simultaneously. After the memory is allocated, other processes may need to access shared pool areas such as the library cache, which is required for parsing. In this case, processes latch only the library cache, not the entire shared pool.

5 - Concurrency

An increase in latching means a decrease in concurrency. For example, excessive hard parse operations create contention for the library cache latch.

Latches are a type of lightweight lock. Locks are serialization devices. Serialization devices inhibit concurrency.

To build applications that have the potential to scale, ones that can service 1 user as well as 1,000 or 10,000 users, the less latching we incur in our approaches, the better off we will be.

Given the choice, prefer an approach that takes longer on the wall clock but uses, say, 10 percent of the latches: the approach that uses fewer latches will scale substantially better than the approach that uses more latches.

Latch contention increases statement execution time and decreases concurrency.


6 - Queuing

Unlike enqueues (such as row locks), latches do not permit sessions to queue. When a latch becomes available, the first session to request it obtains exclusive access.

  • Latch spinning occurs when a process repeatedly requests the latch in a tight loop, whereas
  • Latch sleeping occurs when a process relinquishes the CPU before renewing the latch request.
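
The spin/sleep protocol can be sketched as follows (hypothetical Python sketch; spin_count and sleep_s are invented tuning parameters, and threading.Lock again stands in for the latch):

```python
import threading
import time

latch = threading.Lock()

def acquire_latch(spin_count=2000, sleep_s=0.001):
    while True:
        # Spin phase: retry the cheap test-and-set in a tight loop,
        # hoping the holder releases the latch within a few cycles.
        for _ in range(spin_count):
            if latch.acquire(blocking=False):
                return
        # Sleep phase: relinquish the CPU before renewing the request.
        time.sleep(sleep_s)

acquire_latch()
print("acquired")
latch.release()
```

Spinning avoids a context switch when the hold time is short, which matches how briefly latches are normally held; sleeping caps the CPU wasted when the latch stays busy.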

Typically, an Oracle process acquires a latch for an extremely short time while manipulating or looking at a data structure. For example, while processing a salary update of a single employee, the database may obtain and release thousands of latches.

7 - Documentation / Reference

  • Architecture of a Database System, by Joseph M. Hellerstein, Michael Stonebraker, and James Hamilton
data/concurrency/latches.txt · Last modified: 2018/12/08 11:30 by gerardnico