# Update

2020.6.1 / 2021.5 update:

• The common Chinese term "乐观锁" ("optimistic lock") implies the use of a lock, and a lock is inherently pessimistic, which contradicts the word "optimistic".
• Moreover, what the term actually describes is an approach to concurrency control in which no lock is involved at all.
• The name can therefore cause confusion.
• Calling it optimistic concurrency control is more accurate, so this article uses "optimistic concurrency control" rather than "optimistic lock" throughout.
• It is also sometimes called lock-free concurrency control.

# Background

## Concurrency Control

Typical uses of concurrency control include:

• In databases, ensuring the consistency (and isolation) guarantees of ACID.
• In programming, implementing mutual exclusion, e.g. Java's synchronized methods.

Without concurrency control, concurrent transactions can produce the following classic anomalies:

• The lost update problem: A second transaction writes a second value of a data-item (datum) on top of a first value written by a first concurrent transaction, and the first value is lost to other transactions running concurrently which need, by their precedence, to read the first value. The transactions that have read the wrong value end with incorrect results.
• The dirty read problem: Transactions read a value written by a transaction that has been later aborted. This value disappears from the database upon abort, and should not have been read by any transaction (“dirty read”). The reading transactions end with incorrect results.
• The incorrect summary problem: While one transaction takes a summary over the values of all the instances of a repeated data-item, a second transaction updates some instances of that data-item. The resulting summary does not reflect a correct result for any (usually needed for correctness) precedence order between the two transactions (if one is executed before the other), but rather some random result, depending on the timing of the updates, and whether certain update results have been included in the summary or not.

## Pessimistic Concurrency Control

• Traditional relational databases rely heavily on lock mechanisms of this kind: row locks, table locks, read locks, write locks, and so on. In every case, the lock is acquired before the operation is performed.
• In Java, the exclusive lock behind the synchronized keyword is an implementation of the pessimistic approach.

When a thread fails to acquire the lock, it has two typical options:

• It can keep retrying until it obtains the lock, which is called spinning.
• It can enter the blocked state, giving up its CPU time slice; when the lock is released, it is woken up and tries to acquire the lock again.
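As a minimal sketch of the pessimistic style, the (hypothetical) counter below guards all access with Java's `synchronized` exclusive lock; every caller must take the monitor before touching the state, blocking if another thread holds it:

```java
// A counter protected by a pessimistic exclusive lock.
// Every access acquires the monitor first, whether or not
// another thread is actually competing for it.
class PessimisticCounter {
    private long value = 0;

    // Blocks (or spins, depending on the JVM) if another
    // thread currently holds the lock.
    synchronized void increment() {
        value++;
    }

    synchronized long get() {
        return value;
    }
}
```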

## Optimistic Concurrency Control

### Representative techniques

• Multiversion concurrency control (MVCC)
• Non-blocking algorithms. These rely on atomic read-modify-write primitives provided by the CPU instruction set; the most typical example is the CAS (compare-and-swap) algorithm.

Other places where the optimistic idea shows up:

• Database mechanisms such as write_condition are embodiments of the optimistic concurrency control idea.
• In Java, the java.util.concurrent.atomic package provides many classes supporting atomic operations, such as AtomicBoolean, AtomicInteger, AtomicLong, and AtomicReference; all of them are built on the CAS algorithm, i.e. on the optimistic concurrency control idea.
• Version control systems such as SVN and Git embody the same idea: when you push, version numbers are compared, and if the remote repository's version differs from your local one, someone else has already committed. You must first pull and resolve the conflict locally; otherwise the push is rejected.
• HTTP's If-Match header: a GET returns an ETag for a resource, and subsequent PUTs send that ETag in the If-Match header. If two clients read the same ETag, the first PUT succeeds but the second does not, because its If-Match value is based on the first version of the resource.
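The If-Match pattern can be sketched with a tiny in-memory stand-in for the resource store (all class and method names here are illustrative, not a real HTTP API):

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal sketch of HTTP If-Match semantics: the resource carries an
// ETag (here just a number), and a PUT succeeds only when the client's
// If-Match value still matches the stored ETag.
class EtagStore {
    // Immutable (etag, body) pair, swapped atomically on update.
    static final class Versioned {
        final long etag;
        final String body;
        Versioned(long etag, String body) { this.etag = etag; this.body = body; }
    }

    private final AtomicReference<Versioned> resource =
            new AtomicReference<>(new Versioned(1, "initial"));

    long currentEtag() { return resource.get().etag; }
    String body() { return resource.get().body; }

    // Models PUT with If-Match: true on success (HTTP 200),
    // false when the ETag is stale (HTTP 412 Precondition Failed).
    boolean put(long ifMatch, String newBody) {
        Versioned cur = resource.get();
        if (cur.etag != ifMatch) return false;  // 412
        return resource.compareAndSet(cur, new Versioned(cur.etag + 1, newBody));
    }
}
```

Two clients that both read ETag 1 behave exactly like the description above: the first PUT succeeds and bumps the ETag, the second fails with its now-stale ETag.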

### The phases of optimistic concurrency control

Optimistic concurrency control transactions involve these phases:

• Begin: Record a timestamp marking the transaction’s beginning.
• Modify: Read database values, and tentatively write changes.
• Validate: Check whether other transactions have modified data that this transaction has used (read or written). This includes transactions that completed after this transaction’s start time, and optionally, transactions that are still active at validation time.
• Commit/Rollback: If there is no conflict, make all changes take effect. If there is a conflict, resolve it, typically by aborting the transaction, although other resolution schemes are possible. Care must be taken to avoid a TOCTTOU bug, particularly if this phase and the previous one are not performed as a single atomic operation.
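The four phases can be sketched as a (hypothetical) transaction object over a single shared record. Validation and commit are done inside one synchronized block, so the check and the write form a single atomic step and the TOCTTOU pitfall mentioned above is avoided; the monitor is held only for that instant, never for the transaction's whole lifetime:

```java
// Sketch of optimistic concurrency control over one shared record.
// A transaction reads freely, stages its write locally, and only at
// commit time validates that no other transaction committed meanwhile.
class OccRecord {
    private long version = 0;   // bumped on every successful commit
    private int value = 0;

    class Txn {
        private final long startVersion;  // Begin: remember where we started
        private int pendingValue;

        Txn() { startVersion = readVersion(); }

        // Modify: read the current value, tentatively stage a change.
        void addLocally(int delta) { pendingValue = readValue() + delta; }

        // Validate + Commit as one atomic step (single monitor),
        // so no window exists between the check and the write.
        boolean commit() {
            synchronized (OccRecord.this) {
                if (version != startVersion) return false;  // conflict: abort
                value = pendingValue;
                version++;
                return true;
            }
        }
    }

    synchronized long readVersion() { return version; }
    synchronized int readValue() { return value; }
}
```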

# An analysis of the optimistic concurrency control idea

## Two common implementations of the optimistic concurrency control idea

### 1 Multiversion concurrency control (MVCC)

A typical version-number scheme works like this:

• When reading a record, also read its current version.
• Carry that version along with the update.
• Perform the update as: set version = newVersion where version = oldVersion.
• If that set statement affects no rows, the update has failed;
• in that case, re-read and keep retrying.

A concrete example (an account with balance = $100 at version = 1):

1. Operator A reads the user record (version=1) and deducts $50 from the balance ($100 − $50).
2. While operator A is working, operator B also reads the record (version=1) and deducts $20 from the balance ($100 − $20).
3. Operator A finishes, increments the version (version=2), and submits the update together with the new balance (balance=$50). Because the submitted version is greater than the version currently recorded in the database, the update is applied and the record's version becomes 2.
4. Operator B finishes, also increments the version (version=2), and tries to submit balance=$80. But when the versions are compared, operator B's submitted version is 2 and the record's current version is also 2, which violates the optimistic policy that "the submitted version must be greater than the record's current version for the update to be applied". Operator B's submission is therefore rejected.
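Mapped onto the update pattern above, the scenario can be sketched with a plain in-memory stand-in for an SQL `UPDATE ... SET balance = ?, version = version + 1 WHERE version = ?` statement (all names are illustrative):

```java
// In-memory stand-in for an account row with an optimistic version column.
class Account {
    private long version = 1;
    private int balance = 100;

    synchronized long version() { return version; }
    synchronized int balance() { return balance; }

    // Emulates: UPDATE account SET balance = ?, version = version + 1
    //           WHERE version = ?
    // Returns true iff exactly one row was affected.
    synchronized boolean update(long expectedVersion, int newBalance) {
        if (version != expectedVersion) return false;  // 0 rows affected
        balance = newBalance;
        version++;
        return true;
    }
}
```

Both operators read version=1; operator A's `update(1, 50)` succeeds and bumps the version to 2, so operator B's `update(1, 80)` affects zero rows and is rejected.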

### 2 Non-blocking algorithms

// TODO

• Wait-freedom: every thread completes its operation within a bounded number of its own steps.
• Lock-freedom: some thread always makes progress system-wide, though individual threads may starve.
• Obstruction-freedom: any thread makes progress if it eventually runs without interference from others.

#### Benefits of non-blocking algorithms

There are several benefits of non-blocking algorithms compared to blocking algorithms. This section will describe these benefits.

##### Choice

The first benefit of non-blocking algorithms is that threads are given a choice about what to do when their requested action cannot be performed. Instead of just being blocked, the requesting thread has a choice about what to do. Sometimes there is nothing a thread can do. In that case it can choose to block or wait itself, thus freeing up the CPU for other tasks. But at least the requesting thread is given a choice.

On a single CPU system perhaps it makes sense to suspend a thread that cannot perform a desired action, and let other threads which can perform their work run on the CPU. But even on a single CPU system blocking algorithms may lead to problems like deadlock, starvation and other concurrency problems.

##### No Deadlocks

The second benefit of non-blocking algorithms is that the suspension of one thread cannot lead to the suspension of other threads. This means that deadlock cannot occur. Two threads cannot be blocked waiting for each other to release a lock they want. Since threads are not blocked when they cannot perform their requested action, they cannot be blocked waiting for each other. Non-blocking algorithms may still result in livelock, where two threads keep attempting some action, but keep being told that it is not possible (because of the actions of the other thread).

##### No Thread Suspension

Suspending and reactivating a thread is costly. Yes, the costs of suspension and reactivation have gone down over time as operating systems and thread libraries become more efficient. However, there is still a high price to pay for thread suspension and reactivation.

Whenever a thread is blocked it is suspended, thus incurring the overhead of thread suspension and reactivation. Since threads are not suspended by non-blocking algorithms, this overhead does not occur. This means that the CPUs can potentially spend more time performing actual business logic instead of context switching.

On a multi CPU system blocking algorithms can have more significant impact on the overall performance. A thread running on CPU A can be blocked waiting for a thread running on CPU B. This lowers the level of parallelism the application is capable of achieving. Of course, CPU A could just schedule another thread to run, but suspending and activating threads (context switches) are expensive. The less threads need to be suspended the better.

##### Reduced Thread Latency

Latency in this context means the time between the moment a requested action becomes possible and the moment the thread actually performs it. Since threads are not suspended in non-blocking algorithms, they do not have to pay the expensive, slow reactivation overhead. That means that when a requested action becomes possible, threads can respond faster and thus reduce their response latency.

Non-blocking algorithms often obtain this lower latency by busy-waiting until the requested action becomes possible. Of course, in a system with high thread contention on the non-blocking data structure, CPUs may end up burning a lot of cycles during these busy waits. This is a thing to keep in mind: non-blocking algorithms may not be the best choice if your data structure has high thread contention. However, there are often ways to redesign your application to have less thread contention.

#### The CAS algorithm

CAS, short for compare-and-swap, is a lock-free, non-blocking algorithm, and a concrete embodiment of the optimistic concurrency control idea.

The CAS algorithm involves three operands:

• V: the memory location to read and write
• A: the expected value to compare against
• B: the new value to be written

CAS relies on the CPU instruction set to guarantee atomicity, so it requires neither a switch between user mode and kernel mode nor a thread context switch.
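The three operands map directly onto `AtomicInteger.compareAndSet(expect, update)` from java.util.concurrent.atomic; the typical retry loop for a lock-free increment looks like this:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Lock-free increment built on CAS: re-read V, compute B from A,
// and retry until compareAndSet succeeds (i.e. V still equals A).
class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);  // memory value V

    int increment() {
        int expected;  // A: the value we compare against
        int updated;   // B: the new value we try to install
        do {
            expected = value.get();
            updated = expected + 1;
        } while (!value.compareAndSet(expected, updated));  // spin on conflict
        return updated;
    }

    int get() { return value.get(); }
}
```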

## Drawbacks of optimistic concurrency control

### High overhead from long spin loops

If a CAS keeps failing, the spin loop keeps running and imposes a heavy CPU cost. If the JVM can use the pause instruction provided by the processor, spinning becomes somewhat cheaper, because pause has two effects:

• It delays the pipelined execution of instructions (de-pipelines) so the CPU does not consume excessive execution resources; the length of the delay depends on the implementation, and on some processors it is zero.
• It avoids the CPU pipeline flush caused by a memory order violation when exiting the loop, improving CPU execution efficiency.

# When to use optimistic CAS vs. Java's pessimistic synchronized keyword

• Under light contention (few thread conflicts), synchronized wastes extra CPU on blocking and waking threads and on switching between user mode and kernel mode, whereas CAS is implemented on CPU instructions, never enters the kernel, never switches threads, and rarely has to spin, so it delivers better performance.
• Under heavy contention (severe thread conflicts), CAS is likely to spin repeatedly, wasting more CPU, and its efficiency drops below that of synchronized.

Under the hood, synchronized relies mainly on a lock-free queue. The basic strategy is to spin briefly, then block, and compete for the lock again after being switched back in. This sacrifices a little fairness in exchange for high throughput. Under light contention it achieves performance similar to CAS; under heavy contention its performance is far higher than CAS.
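A self-contained sketch contrasting the two styles on the same workload (illustrative only, not a rigorous benchmark; real performance depends on contention level, JVM, and hardware, and all names here are made up):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntSupplier;

// The same counter written both ways: pessimistic (synchronized)
// and optimistic (CAS). Under light contention the CAS version avoids
// blocking entirely; under heavy contention it may spin repeatedly.
class Counters {
    // Pessimistic: every increment takes the exclusive monitor lock.
    static class LockCounter {
        private int value;
        synchronized void inc() { value++; }
        synchronized int get() { return value; }
    }

    // Optimistic: incrementAndGet runs a CAS retry loop inside the JDK.
    static class CasCounter {
        private final AtomicInteger value = new AtomicInteger();
        void inc() { value.incrementAndGet(); }
        int get() { return value.get(); }
    }

    // Drives a counter with nThreads * perThread increments and
    // returns the final count (correct = nThreads * perThread).
    static int hammer(Runnable inc, IntSupplier get, int nThreads, int perThread) {
        Thread[] ts = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < perThread; j++) inc.run(); });
            ts[i].start();
        }
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return get.getAsInt();
    }
}
```

Both variants produce the correct total; which one does so faster depends on how hard the threads contend, as described above.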