Concurrency - Latches (System Lock)

About

Latches are similar to semaphores: they grant short-term, exclusive access to a shared resource.

Latches are used to guarantee the physical consistency of data, while locks are used to ensure the logical consistency of data.

Latches are simple, low-level system locks (serialization mechanisms) that coordinate multi-user access (concurrency) to shared data structures, objects, and files.

Latches protect shared memory resources from corruption when accessed by multiple processes. Specifically, latches protect data structures from the following situations:

  • Concurrent modification by multiple sessions
  • Being read by one session while being modified by another session
  • Deallocation (aging out) of memory while being accessed
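The following is a minimal sketch in C of that idea, not Oracle's actual implementation: the latch is a single flag stored next to the structure it protects, and the hypothetical `latch_acquire`/`latch_release` helpers must be used both to modify and to read the two fields, so a reader can never observe a half-finished update.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* The latch is a single word that lives next to the structure it protects. */
typedef struct {
    atomic_flag latch;
    long        inserts;   /* invariant: inserts == entries */
    long        entries;
} index_stats;

static index_stats stats = { ATOMIC_FLAG_INIT, 0, 0 };

static void latch_acquire(atomic_flag *l) {
    /* spin on an atomic test-and-set; holders keep the latch very briefly */
    while (atomic_flag_test_and_set_explicit(l, memory_order_acquire))
        ;
}

static void latch_release(atomic_flag *l) {
    atomic_flag_clear_explicit(l, memory_order_release);
}

/* Modifier: both fields must change together (physical consistency). */
static void *modifier(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        latch_acquire(&stats.latch);
        stats.inserts++;
        stats.entries++;
        latch_release(&stats.latch);
    }
    return NULL;
}

/* Reader: without the latch it could observe a half-finished update. */
static void *reader(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        latch_acquire(&stats.latch);
        long a = stats.inserts, b = stats.entries;
        latch_release(&stats.latch);
        if (a != b)
            printf("inconsistent read: %ld != %ld\n", a, b);
    }
    return NULL;
}

int main(void) {
    pthread_t m, r;
    pthread_create(&m, NULL, modifier, NULL);
    pthread_create(&r, NULL, reader, NULL);
    pthread_join(m, NULL);
    pthread_join(r, NULL);
    printf("final: inserts=%ld entries=%ld\n", stats.inserts, stats.entries);
    return 0;
}
```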

Typically, a single latch protects multiple objects in the SGA.

The implementation of latches is operating system-dependent, especially with respect to whether, and for how long, a process waits for a latch.

As an auxiliary to locks, lighter-weight latches are also provided for mutual exclusion. Latches are more akin to monitors or semaphores than locks; they are used to provide exclusive access to internal data structures.

As an example in a database, the buffer pool page table has a latch associated with each frame, to guarantee that only one DBMS thread is replacing a given frame at any time. Latches are used in the implementation of locks and to briefly stabilize internal data structures potentially being concurrently modified.
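A hedged sketch of this per-frame latch in C follows, assuming a much simplified page table (frame pinning and the replacement policy are omitted, and `replace_frame` is an illustrative routine, not real DBMS code): each frame carries its own latch, so threads replacing different frames do not block each other, but only one thread at a time can replace a given frame.

```c
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

#define N_FRAMES  128
#define PAGE_SIZE 8192

/* One buffer-pool frame; its latch sits next to the data it guards. */
typedef struct {
    atomic_flag latch;            /* protects page_no and data         */
    int         page_no;          /* disk page currently in this frame */
    char        data[PAGE_SIZE];
} frame;

static frame page_table[N_FRAMES];

static void latch_acquire(atomic_flag *l) {
    while (atomic_flag_test_and_set_explicit(l, memory_order_acquire))
        ;                         /* busy-wait; holders finish quickly */
}

static void latch_release(atomic_flag *l) {
    atomic_flag_clear_explicit(l, memory_order_release);
}

/* Replace the page held in one frame. Threads replacing different
 * frames proceed in parallel; only replacement of the same frame
 * is serialized by that frame's latch. */
static void replace_frame(int frame_no, int new_page_no, const char *contents) {
    frame *f = &page_table[frame_no];
    latch_acquire(&f->latch);
    f->page_no = new_page_no;
    memcpy(f->data, contents, PAGE_SIZE);
    latch_release(&f->latch);
}

int main(void) {
    static char page[PAGE_SIZE];                 /* dummy page contents  */
    for (int i = 0; i < N_FRAMES; i++)
        atomic_flag_clear(&page_table[i].latch); /* all latches start free */
    replace_frame(7, 42, page);
    printf("frame 7 now holds page %d\n", page_table[7].page_no);
    return 0;
}
```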

Latch vs Lock

Latches differ from locks in a number of ways:

  • Locks are kept in the lock table and located via hash tables; latches reside in memory near the resources they protect, and are accessed via direct addressing.
  • Lock acquisition is entirely driven by data access, and hence the order and lifetime of lock acquisitions is largely in the hands of applications and the query optimizer. Latches are acquired by specialized code inside the DBMS, and the DBMS internal code issues latch requests and releases strategically.
  • Locks are allowed to produce deadlock, and lock deadlocks are detected and resolved via transactional restart. Latch deadlock must be avoided; the occurrence of a latch deadlock represents a bug in the DBMS code.
  • Latches are implemented using an atomic hardware instruction or, in the rare cases where this is not available, via mutual exclusion in the OS kernel (see the sketch after this list).
  • Latch calls take at most a few dozen CPU cycles whereas lock requests take hundreds of CPU cycles.
  • The lock manager tracks all the locks held by a transaction and automatically releases the locks in case the transaction throws an exception, but internal DBMS routines that manipulate latches must carefully track them and include manual cleanup as part of their exception handling.
  • Latches are not tracked and so cannot be automatically released if the task faults.
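The sketch below illustrates the last few points under simple assumptions; the latch type and the `insert_into_free_list` routine are hypothetical, not Oracle or any particular DBMS code. The latch is built directly on an atomic test-and-set instruction, and because nothing tracks it on the caller's behalf, the routine itself must release it on every exit path, including the error path.

```c
#include <stdatomic.h>
#include <stdio.h>

/* A latch built directly on an atomic test-and-set instruction: no lock
 * table, no owner tracking, no deadlock detection. */
typedef atomic_flag latch_t;

static void latch_acquire(latch_t *l) {
    while (atomic_flag_test_and_set_explicit(l, memory_order_acquire))
        ;
}

static void latch_release(latch_t *l) {
    atomic_flag_clear_explicit(l, memory_order_release);
}

static latch_t free_list_latch = ATOMIC_FLAG_INIT;

/* Hypothetical internal routine. Because nothing tracks the latch for us,
 * every exit path -- including the error path -- must release it manually,
 * otherwise the next caller would spin forever. */
static int insert_into_free_list(void *chunk) {
    latch_acquire(&free_list_latch);

    if (chunk == NULL) {                  /* error detected while latched */
        latch_release(&free_list_latch);  /* manual cleanup               */
        return -1;
    }

    /* ... manipulate the latched structure here ... */

    latch_release(&free_list_latch);
    return 0;
}

int main(void) {
    int chunk;
    printf("NULL chunk  -> %d\n", insert_into_free_list(NULL));
    printf("valid chunk -> %d\n", insert_into_free_list(&chunk));
    return 0;
}
```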

Example with Oracle

Background processes such as DBWn and LGWR allocate memory from the shared pool to create data structures. To allocate this memory, these processes use a shared pool latch that serializes access to prevent two processes from trying to inspect or modify the shared pool simultaneously. After the memory is allocated, other processes may need to access shared pool areas such as the library cache, which is required for parsing. In this case, processes latch only the library cache, not the entire shared pool.

Concurrency

An increase in latching means a decrease in concurrency. For example, excessive hard parse operations create contention for the library cache latch.

Latches are a type of lightweight lock. Locks are serialization devices. Serialization devices inhibit concurrency.

To build applications that have the potential to scale, ones that can service 1 user as well as 1,000 or 10,000 users, the less latching we incur in our approaches, the better off we will be.

It can be preferable to choose an approach that takes longer to run on the wall clock but uses only 10 percent of the latches, because the approach that uses fewer latches will scale substantially better than the approach that uses more latches.

Latch contention increases statement execution time and decreases concurrency.
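As a hedged illustration of the scaling claim (a generic C sketch, not an Oracle benchmark; the two counting routines are purely illustrative): both approaches below produce the same total, but the second acquires the shared latch once per thread instead of once per event, so it serializes far less and scales better as the thread count grows.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define N_THREADS 4
#define N_EVENTS  1000000

static atomic_flag counter_latch = ATOMIC_FLAG_INIT;
static long counter;

static void latch_acquire(atomic_flag *l) {
    while (atomic_flag_test_and_set_explicit(l, memory_order_acquire))
        ;
}
static void latch_release(atomic_flag *l) {
    atomic_flag_clear_explicit(l, memory_order_release);
}

/* Approach 1: one latch acquisition per event -- heavy serialization. */
static void *count_latch_per_event(void *arg) {
    (void)arg;
    for (int i = 0; i < N_EVENTS; i++) {
        latch_acquire(&counter_latch);
        counter++;
        latch_release(&counter_latch);
    }
    return NULL;
}

/* Approach 2: accumulate locally, latch once per thread -- same result,
 * N_EVENTS times fewer latch acquisitions. */
static void *count_latch_per_thread(void *arg) {
    (void)arg;
    long local = 0;
    for (int i = 0; i < N_EVENTS; i++)
        local++;
    latch_acquire(&counter_latch);
    counter += local;
    latch_release(&counter_latch);
    return NULL;
}

static void run(void *(*fn)(void *), const char *label) {
    pthread_t t[N_THREADS];
    counter = 0;
    for (int i = 0; i < N_THREADS; i++) pthread_create(&t[i], NULL, fn, NULL);
    for (int i = 0; i < N_THREADS; i++) pthread_join(t[i], NULL);
    printf("%s: counter=%ld\n", label, counter);
}

int main(void) {
    run(count_latch_per_event,  "latch per event ");
    run(count_latch_per_thread, "latch per thread");
    return 0;
}
```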

Queuing

Unlike enqueues (such as row locks), latches do not permit sessions to queue. When a latch becomes available, the first session to request it obtains exclusive access to it.

  • Latch spinning occurs when a process repeatedly requests a latch in a tight loop.
  • Latch sleeping occurs when a process releases the CPU before renewing the latch request (a sketch follows below).
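A minimal sketch of this spin-then-sleep policy, under stated assumptions: the `SPIN_LIMIT` value is purely illustrative (real systems tune it), and POSIX `sched_yield` stands in for the "sleep" step that releases the CPU before the request is renewed.

```c
#include <sched.h>        /* sched_yield (POSIX) */
#include <stdatomic.h>

#define SPIN_LIMIT 2000   /* illustrative; real systems tune this value */

/* Spin a bounded number of times; if the latch is still busy, release
 * the CPU before renewing the request. */
static void latch_acquire_spin_then_sleep(atomic_flag *latch) {
    for (;;) {
        for (int spin = 0; spin < SPIN_LIMIT; spin++) {
            if (!atomic_flag_test_and_set_explicit(latch, memory_order_acquire))
                return;               /* got the latch while spinning    */
        }
        sched_yield();                /* "sleep": give up the CPU, retry */
    }
}

static void latch_release(atomic_flag *latch) {
    atomic_flag_clear_explicit(latch, memory_order_release);
}

int main(void) {
    atomic_flag l = ATOMIC_FLAG_INIT;
    latch_acquire_spin_then_sleep(&l);
    /* ... briefly inspect or modify the protected structure ... */
    latch_release(&l);
    return 0;
}
```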

Typically, an Oracle process acquires a latch for an extremely short time while manipulating or looking at a data structure. For example, while processing a salary update of a single employee, the database may obtain and release thousands of latches.

Documentation / Reference

  • Architecture of a Database System, Joseph M. Hellerstein, Michael Stonebraker, and James Hamilton




