<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /><title>13.2. Transaction Isolation</title><link rel="stylesheet" type="text/css" href="stylesheet.css" /><link rev="made" href="pgsql-docs@lists.postgresql.org" /><meta name="generator" content="DocBook XSL Stylesheets Vsnapshot" /><link rel="prev" href="mvcc-intro.html" title="13.1. Introduction" /><link rel="next" href="explicit-locking.html" title="13.3. Explicit Locking" /></head><body id="docContent" class="container-fluid col-10"><div class="navheader"><table width="100%" summary="Navigation header"><tr><th colspan="5" align="center">13.2. Transaction Isolation</th></tr><tr><td width="10%" align="left"><a accesskey="p" href="mvcc-intro.html" title="13.1. Introduction">Prev</a> </td><td width="10%" align="left"><a accesskey="u" href="mvcc.html" title="Chapter 13. Concurrency Control">Up</a></td><th width="60%" align="center">Chapter 13. Concurrency Control</th><td width="10%" align="right"><a accesskey="h" href="index.html" title="PostgreSQL 16.2 Documentation">Home</a></td><td width="10%" align="right"> <a accesskey="n" href="explicit-locking.html" title="13.3. Explicit Locking">Next</a></td></tr></table><hr /></div><div class="sect1" id="TRANSACTION-ISO"><div class="titlepage"><div><div><h2 class="title" style="clear: both">13.2. Transaction Isolation <a href="#TRANSACTION-ISO" class="id_link">#</a></h2></div></div></div><div class="toc"><dl class="toc"><dt><span class="sect2"><a href="transaction-iso.html#XACT-READ-COMMITTED">13.2.1. Read Committed Isolation Level</a></span></dt><dt><span class="sect2"><a href="transaction-iso.html#XACT-REPEATABLE-READ">13.2.2. Repeatable Read Isolation Level</a></span></dt><dt><span class="sect2"><a href="transaction-iso.html#XACT-SERIALIZABLE">13.2.3. Serializable Isolation Level</a></span></dt></dl></div><a id="id-1.5.12.5.2" class="indexterm"></a><p>
    The <acronym class="acronym">SQL</acronym> standard defines four levels of
    transaction isolation.  The most strict is Serializable,
    which is defined by the standard in a paragraph which says that any
    concurrent execution of a set of Serializable transactions is guaranteed
    to produce the same effect as running them one at a time in some order.
    The other three levels are defined in terms of phenomena, resulting from
    interaction between concurrent transactions, which must not occur at
    each level.  The standard notes that due to the definition of
    Serializable, none of these phenomena are possible at that level.  (This
    is hardly surprising -- if the effect of the transactions must be
    consistent with having been run one at a time, how could you see any
    phenomena caused by interactions?)
   </p><p>
    The phenomena which are prohibited at various levels are:

    </p><div class="variablelist"><dl class="variablelist"><dt><span class="term">
       dirty read
       <a id="id-1.5.12.5.4.1.1.1.1" class="indexterm"></a>
      </span></dt><dd><p>
        A transaction reads data written by a concurrent uncommitted transaction.
       </p></dd><dt><span class="term">
       nonrepeatable read
       <a id="id-1.5.12.5.4.1.2.1.1" class="indexterm"></a>
      </span></dt><dd><p>
        A transaction re-reads data it has previously read and finds that data
        has been modified by another transaction (that committed since the
        initial read).
       </p></dd><dt><span class="term">
       phantom read
       <a id="id-1.5.12.5.4.1.3.1.1" class="indexterm"></a>
      </span></dt><dd><p>
        A transaction re-executes a query returning a set of rows that satisfy a
        search condition and finds that the set of rows satisfying the condition
        has changed due to another recently-committed transaction.
       </p></dd><dt><span class="term">
       serialization anomaly
       <a id="id-1.5.12.5.4.1.4.1.1" class="indexterm"></a>
      </span></dt><dd><p>
        The result of successfully committing a group of transactions
        is inconsistent with all possible orderings of running those
        transactions one at a time.
       </p></dd></dl></div><p>
   </p><p>
    <a id="id-1.5.12.5.5.1" class="indexterm"></a>
    The SQL standard and PostgreSQL-implemented transaction isolation levels
    are described in <a class="xref" href="transaction-iso.html#MVCC-ISOLEVEL-TABLE" title="Table 13.1. Transaction Isolation Levels">Table 13.1</a>.
   </p><div class="table" id="MVCC-ISOLEVEL-TABLE"><p class="title"><strong>Table 13.1. Transaction Isolation Levels</strong></p><div class="table-contents"><table class="table" summary="Transaction Isolation Levels" border="1"><colgroup><col /><col /><col /><col /><col /></colgroup><thead><tr><th>
         Isolation Level
        </th><th>
         Dirty Read
        </th><th>
         Nonrepeatable Read
        </th><th>
         Phantom Read
        </th><th>
         Serialization Anomaly
        </th></tr></thead><tbody><tr><td>
         Read uncommitted
        </td><td>
         Allowed, but not in PG
        </td><td>
         Possible
        </td><td>
         Possible
        </td><td>
         Possible
        </td></tr><tr><td>
         Read committed
        </td><td>
         Not possible
        </td><td>
         Possible
        </td><td>
         Possible
        </td><td>
         Possible
        </td></tr><tr><td>
         Repeatable read
        </td><td>
         Not possible
        </td><td>
         Not possible
        </td><td>
         Allowed, but not in PG
        </td><td>
         Possible
        </td></tr><tr><td>
         Serializable
        </td><td>
         Not possible
        </td><td>
         Not possible
        </td><td>
         Not possible
        </td><td>
         Not possible
        </td></tr></tbody></table></div></div><br class="table-break" /><p>
    In <span class="productname">PostgreSQL</span>, you can request any of
    the four standard transaction isolation levels, but internally only
    three distinct isolation levels are implemented, i.e., PostgreSQL's
    Read Uncommitted mode behaves like Read Committed.  This is because
    it is the only sensible way to map the standard isolation levels to
    PostgreSQL's multiversion concurrency control architecture.
   </p><p>
    The table also shows that PostgreSQL's Repeatable Read implementation
    does not allow phantom reads.  This is acceptable under the SQL
    standard because the standard specifies which anomalies must
    <span class="emphasis"><em>not</em></span> occur at certain isolation levels;  higher
    guarantees are acceptable.
    The behavior of the available isolation levels is detailed in the
    following subsections.
   </p><p>
    To set the transaction isolation level of a transaction, use the
    command <a class="xref" href="sql-set-transaction.html" title="SET TRANSACTION"><span class="refentrytitle">SET TRANSACTION</span></a>.
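    For example, the level can be set for a single transaction (an
    illustrative sketch using standard <span class="productname">PostgreSQL</span>
    syntax):

</p><pre class="screen">
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- queries in this transaction now run at Repeatable Read
COMMIT;

-- equivalently, specify the level when starting the transaction:
BEGIN ISOLATION LEVEL SERIALIZABLE;
COMMIT;
</pre><p>

    The default level for new transactions can also be changed with the
    <code class="literal">default_transaction_isolation</code> configuration
    parameter.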
   </p><div class="important"><h3 class="title">Important</h3><p>
       Some <span class="productname">PostgreSQL</span> data types and functions have
       special rules regarding transactional behavior.  In particular, changes
       made to a sequence (and therefore the counter of a
       column declared using <code class="type">serial</code>) are immediately visible
       to all other transactions and are not rolled back if the transaction
       that made the changes aborts.  See <a class="xref" href="functions-sequence.html" title="9.17. Sequence Manipulation Functions">Section 9.17</a>
       and <a class="xref" href="datatype-numeric.html#DATATYPE-SERIAL" title="8.1.4. Serial Types">Section 8.1.4</a>.
     </p></div><div class="sect2" id="XACT-READ-COMMITTED"><div class="titlepage"><div><div><h3 class="title">13.2.1. Read Committed Isolation Level <a href="#XACT-READ-COMMITTED" class="id_link">#</a></h3></div></div></div><a id="id-1.5.12.5.11.2" class="indexterm"></a><a id="id-1.5.12.5.11.3" class="indexterm"></a><p>
    <em class="firstterm">Read Committed</em> is the default isolation
    level in <span class="productname">PostgreSQL</span>.  When a transaction
    uses this isolation level, a <code class="command">SELECT</code> query
    (without a <code class="literal">FOR UPDATE/SHARE</code> clause) sees only data
    committed before the query began; it never sees either uncommitted
    data or changes committed by concurrent transactions during the query's
    execution.  In effect, a <code class="command">SELECT</code> query sees
    a snapshot of the database as of the instant the query begins to
    run.   However, <code class="command">SELECT</code> does see the effects
    of previous updates executed within its own transaction, even
    though they are not yet committed.  Also note that two successive
    <code class="command">SELECT</code> commands can see different data, even
    though they are within a single transaction, if other transactions
    commit changes after the first <code class="command">SELECT</code> starts and
    before the second <code class="command">SELECT</code> starts.
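    For instance, using the <code class="literal">accounts</code> table from the
    example below (an illustrative sketch with made-up values):

</p><pre class="screen">
BEGIN;  -- Read Committed by default
SELECT balance FROM accounts WHERE acctnum = 12345;   -- returns, say, 500.00
-- run and committed from another session:
--   UPDATE accounts SET balance = 600.00 WHERE acctnum = 12345;
SELECT balance FROM accounts WHERE acctnum = 12345;   -- now returns 600.00
COMMIT;
</pre><p>

    Each <code class="command">SELECT</code> starts with its own snapshot, so the
    second one sees the concurrently committed change.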
   </p><p>
    <code class="command">UPDATE</code>, <code class="command">DELETE</code>, <code class="command">SELECT
    FOR UPDATE</code>, and <code class="command">SELECT FOR SHARE</code> commands
    behave the same as <code class="command">SELECT</code>
    in terms of searching for target rows: they will only find target rows
    that were committed as of the command start time.  However, such a target
    row might have already been updated (or deleted or locked) by
    another concurrent transaction by the time it is found.  In this case, the
    would-be updater will wait for the first updating transaction to commit or
    roll back (if it is still in progress).  If the first updater rolls back,
    then its effects are negated and the second updater can proceed with
    updating the originally found row.  If the first updater commits, the
    second updater will ignore the row if the first updater deleted it,
    otherwise it will attempt to apply its operation to the updated version of
    the row.  The search condition of the command (the <code class="literal">WHERE</code> clause) is
    re-evaluated to see if the updated version of the row still matches the
    search condition.  If so, the second updater proceeds with its operation
    using the updated version of the row.  In the case of
    <code class="command">SELECT FOR UPDATE</code> and <code class="command">SELECT FOR
    SHARE</code>, this means it is the updated version of the row that is
    locked and returned to the client.
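    A minimal two-session sketch of this behavior (illustrative values):

</p><pre class="screen">
-- session 1
BEGIN;
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 7534;
-- session 2 now blocks on the same row:
--   SELECT * FROM accounts WHERE acctnum = 7534 FOR UPDATE;
COMMIT;
-- session 2 resumes, locks the row, and returns the already-debited version
</pre><p>

    Had session 1 rolled back instead, session 2 would have locked and
    returned the original version of the row.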
   </p><p>
    <code class="command">INSERT</code> with an <code class="literal">ON CONFLICT DO UPDATE</code> clause
    behaves similarly. In Read Committed mode, each row proposed for insertion
    will either insert or update. Unless there are unrelated errors, one of
    those two outcomes is guaranteed.  If a conflict originates in another
    transaction whose effects are not yet visible to the <code class="command">INSERT</code>,
    the <code class="command">UPDATE</code> clause will affect that row,
    even though possibly <span class="emphasis"><em>no</em></span> version of that row is
    conventionally visible to the command.
   </p><p>
    <code class="command">INSERT</code> with an <code class="literal">ON CONFLICT DO
    NOTHING</code> clause may have insertion not proceed for a row due to
    the outcome of another transaction whose effects are not visible
    to the <code class="command">INSERT</code> snapshot.  Again, this is only
    the case in Read Committed mode.
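    Both clauses can be illustrated with a sketch, assuming a hypothetical
    table <code class="literal">counters(name text primary key, hits int)</code>:

</p><pre class="screen">
INSERT INTO counters (name, hits) VALUES ('home', 1)
    ON CONFLICT (name) DO UPDATE SET hits = counters.hits + 1;

INSERT INTO counters (name, hits) VALUES ('home', 1)
    ON CONFLICT (name) DO NOTHING;
</pre><p>

    In Read Committed mode the first statement always either inserts a new
    row or increments the existing one, even if the conflicting row was
    committed by a transaction whose effects are not visible to the
    <code class="command">INSERT</code>'s snapshot; the second statement may
    silently do nothing for the same reason.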
   </p><p>
    <code class="command">MERGE</code> allows the user to specify various
    combinations of <code class="command">INSERT</code>, <code class="command">UPDATE</code>
    and <code class="command">DELETE</code> subcommands. A <code class="command">MERGE</code>
    command with both <code class="command">INSERT</code> and <code class="command">UPDATE</code>
    subcommands looks similar to <code class="command">INSERT</code> with an
    <code class="literal">ON CONFLICT DO UPDATE</code> clause but does not
    guarantee that either <code class="command">INSERT</code> or
    <code class="command">UPDATE</code> will occur.
    If <code class="command">MERGE</code> attempts an <code class="command">UPDATE</code> or
    <code class="command">DELETE</code> and the row is concurrently updated but
    the join condition still passes for the current target and the
    current source tuple, then <code class="command">MERGE</code> will behave
    the same as the <code class="command">UPDATE</code> or
    <code class="command">DELETE</code> commands and perform its action on the
    updated version of the row.  However, because <code class="command">MERGE</code>
    can specify several actions and they can be conditional, the
    conditions for each action are re-evaluated on the updated version of
    the row, starting from the first action, even if the action that had
    originally matched appears later in the list of actions.
    On the other hand, if the row is concurrently updated or deleted so
    that the join condition fails, then <code class="command">MERGE</code> will
     evaluate the <code class="literal">NOT MATCHED</code> actions next,
    and execute the first one that succeeds.
    If <code class="command">MERGE</code> attempts an <code class="command">INSERT</code>
    and a unique index is present and a duplicate row is concurrently
    inserted, then a uniqueness violation error is raised;
    <code class="command">MERGE</code> does not attempt to avoid such
    errors by restarting evaluation of <code class="literal">MATCHED</code>
    conditions.
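    For example, a <code class="command">MERGE</code> with both subcommands might
    look like this (a sketch assuming a hypothetical staging table
    <code class="literal">new_deposits(acctnum, amount)</code>):

</p><pre class="screen">
MERGE INTO accounts a
    USING new_deposits d ON a.acctnum = d.acctnum
    WHEN MATCHED THEN
        UPDATE SET balance = a.balance + d.amount
    WHEN NOT MATCHED THEN
        INSERT (acctnum, balance) VALUES (d.acctnum, d.amount);
</pre><p>

    If another transaction concurrently inserts a row for the same
    <code class="literal">acctnum</code> under a unique constraint, this command can
    fail with a uniqueness violation rather than turning the insert into an
    update as <code class="literal">ON CONFLICT DO UPDATE</code> would.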
   </p><p>
    Because of the above rules, it is possible for an updating command to see
    an inconsistent snapshot: it can see the effects of concurrent updating
    commands on the same rows it is trying to update, but it
    does not see effects of those commands on other rows in the database.
    This behavior makes Read Committed mode unsuitable for commands that
    involve complex search conditions; however, it is just right for simpler
    cases.  For example, consider updating bank balances with transactions
    like:

</p><pre class="screen">
BEGIN;
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 7534;
COMMIT;
</pre><p>

    If two such transactions concurrently try to change the balance of account
    12345, we clearly want the second transaction to start with the updated
    version of the account's row.  Because each command is affecting only a
    predetermined row, letting it see the updated version of the row does
    not create any troublesome inconsistency.
   </p><p>
    More complex usage can produce undesirable results in Read Committed
    mode.  For example, consider a <code class="command">DELETE</code> command
    operating on data that is being both added and removed from its
    restriction criteria by another command, e.g., assume
    <code class="literal">website</code> is a two-row table with
    <code class="literal">website.hits</code> equaling <code class="literal">9</code> and
    <code class="literal">10</code>:

</p><pre class="screen">
BEGIN;
UPDATE website SET hits = hits + 1;
-- run from another session:  DELETE FROM website WHERE hits = 10;
COMMIT;
</pre><p>

    The <code class="command">DELETE</code> will have no effect even though
    there is a <code class="literal">website.hits = 10</code> row before and
    after the <code class="command">UPDATE</code>. This occurs because the
    pre-update row value <code class="literal">9</code> is skipped, and when the
    <code class="command">UPDATE</code> completes and <code class="command">DELETE</code>
    obtains a lock, the new row value is no longer <code class="literal">10</code> but
    <code class="literal">11</code>, which no longer matches the criteria.
   </p><p>
    Because Read Committed mode starts each command with a new snapshot
    that includes all transactions committed up to that instant,
    subsequent commands in the same transaction will see the effects
    of the committed concurrent transaction in any case.  The point
    at issue above is whether or not a <span class="emphasis"><em>single</em></span> command
    sees an absolutely consistent view of the database.
   </p><p>
    The partial transaction isolation provided by Read Committed mode
    is adequate for many applications, and this mode is fast and simple
    to use;  however, it is not sufficient for all cases.  Applications
    that do complex queries and updates might require a more rigorously
    consistent view of the database than Read Committed mode provides.
   </p></div><div class="sect2" id="XACT-REPEATABLE-READ"><div class="titlepage"><div><div><h3 class="title">13.2.2. Repeatable Read Isolation Level <a href="#XACT-REPEATABLE-READ" class="id_link">#</a></h3></div></div></div><a id="id-1.5.12.5.12.2" class="indexterm"></a><a id="id-1.5.12.5.12.3" class="indexterm"></a><p>
    The <em class="firstterm">Repeatable Read</em> isolation level only sees
    data committed before the transaction began; it never sees either
    uncommitted data or changes committed by concurrent transactions during
    the transaction's execution.  (However, each query does see the
    effects of previous updates executed within its own transaction,
    even though they are not yet committed.)  This is a stronger
    guarantee than is required by the <acronym class="acronym">SQL</acronym> standard
    for this isolation level, and prevents all of the phenomena described
    in <a class="xref" href="transaction-iso.html#MVCC-ISOLEVEL-TABLE" title="Table 13.1. Transaction Isolation Levels">Table 13.1</a> except for serialization
    anomalies.  As mentioned above, this is
    specifically allowed by the standard, which only describes the
    <span class="emphasis"><em>minimum</em></span> protections each isolation level must
    provide.
   </p><p>
    This level is different from Read Committed in that a query in a
    repeatable read transaction sees a snapshot as of the start of the
    first non-transaction-control statement in the
    <span class="emphasis"><em>transaction</em></span>, not as of the start
    of the current statement within the transaction.  Thus, successive
    <code class="command">SELECT</code> commands within a <span class="emphasis"><em>single</em></span>
    transaction see the same data, i.e., they do not see changes made by
    other transactions that committed after their own transaction started.
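    For example (an illustrative sketch, reusing the
    <code class="literal">accounts</code> table from the previous section):

</p><pre class="screen">
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT sum(balance) FROM accounts;   -- say, 1000.00; this takes the snapshot
-- committed meanwhile from another session:
--   UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;
SELECT sum(balance) FROM accounts;   -- still 1000.00
COMMIT;
SELECT sum(balance) FROM accounts;   -- 1100.00, now that the transaction is over
</pre><p>

    Under Read Committed, the second <code class="command">SELECT</code> inside the
    transaction would already have returned 1100.00.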
   </p><p>
    Applications using this level must be prepared to retry transactions
    due to serialization failures.
   </p><p>
    <code class="command">UPDATE</code>, <code class="command">DELETE</code>,
    <code class="command">MERGE</code>, <code class="command">SELECT FOR UPDATE</code>,
    and <code class="command">SELECT FOR SHARE</code> commands
    behave the same as <code class="command">SELECT</code>
    in terms of searching for target rows: they will only find target rows
    that were committed as of the transaction start time.  However, such a
    target row might have already been updated (or deleted or locked) by
    another concurrent transaction by the time it is found.  In this case, the
    repeatable read transaction will wait for the first updating transaction to commit or
    roll back (if it is still in progress).  If the first updater rolls back,
    then its effects are negated and the repeatable read transaction can proceed
    with updating the originally found row.  But if the first updater commits
    (and actually updated or deleted the row, not just locked it)
    then the repeatable read transaction will be rolled back with the message

</p><pre class="screen">
ERROR:  could not serialize access due to concurrent update
</pre><p>

    because a repeatable read transaction cannot modify or lock rows changed by
    other transactions after the repeatable read transaction began.
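    A two-session sketch that produces this error (illustrative values):

</p><pre class="screen">
-- session 1
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE acctnum = 12345;   -- takes the snapshot
-- session 2 then runs and commits:
--   UPDATE accounts SET balance = balance - 50.00 WHERE acctnum = 12345;
-- back in session 1:
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;
-- ERROR:  could not serialize access due to concurrent update
</pre><p>

    Session 1's transaction is now aborted and must be rolled back and
    retried from the beginning.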
   </p><p>
    When an application receives this error message, it should abort
    the current transaction and retry the whole transaction from
    the beginning.  The second time through, the transaction will see the
    previously-committed change as part of its initial view of the database,
    so there is no logical conflict in using the new version of the row
    as the starting point for the new transaction's update.
   </p><p>
    Note that only updating transactions might need to be retried; read-only
    transactions will never have serialization conflicts.
   </p><p>
    The Repeatable Read mode provides a rigorous guarantee that each
    transaction sees a completely stable view of the database.  However,
    this view will not necessarily always be consistent with some serial
    (one at a time) execution of concurrent transactions of the same level.
    For example, even a read-only transaction at this level may see a
    control record updated to show that a batch has been completed but
    <span class="emphasis"><em>not</em></span> see one of the detail records which is logically
    part of the batch because it read an earlier revision of the control
    record.  Attempts to enforce business rules by transactions running at
    this isolation level are not likely to work correctly without careful use
    of explicit locks to block conflicting transactions.
   </p><p>
    The Repeatable Read isolation level is implemented using a technique
    known in academic database literature and in some other database products
    as <em class="firstterm">Snapshot Isolation</em>.  Differences in behavior
    and performance may be observed when compared with systems that use a
    traditional locking technique that reduces concurrency.  Some other
    systems may even offer Repeatable Read and Snapshot Isolation as distinct
    isolation levels with different behavior.  The permitted phenomena that
    distinguish the two techniques were not formalized by database researchers
    until after the SQL standard was developed, and are outside the scope of
    this manual.  For a full treatment, please see
    <a class="xref" href="biblio.html#BERENSON95">[berenson95]</a>.
   </p><div class="note"><h3 class="title">Note</h3><p>
     Prior to <span class="productname">PostgreSQL</span> version 9.1, a request
     for the Serializable transaction isolation level provided exactly the
     same behavior described here.  To retain the legacy Serializable
     behavior, Repeatable Read should now be requested.
    </p></div></div><div class="sect2" id="XACT-SERIALIZABLE"><div class="titlepage"><div><div><h3 class="title">13.2.3. Serializable Isolation Level <a href="#XACT-SERIALIZABLE" class="id_link">#</a></h3></div></div></div><a id="id-1.5.12.5.13.2" class="indexterm"></a><a id="id-1.5.12.5.13.3" class="indexterm"></a><a id="id-1.5.12.5.13.4" class="indexterm"></a><a id="id-1.5.12.5.13.5" class="indexterm"></a><p>
    The <em class="firstterm">Serializable</em> isolation level provides
    the strictest transaction isolation.  This level emulates serial
     transaction execution for all committed transactions,
     as if transactions had been executed one after another, serially,
    rather than concurrently.  However, like the Repeatable Read level,
    applications using this level must
    be prepared to retry transactions due to serialization failures.
    In fact, this isolation level works exactly the same as Repeatable
    Read except that it also monitors for conditions which could make
    execution of a concurrent set of serializable transactions behave
    in a manner inconsistent with all possible serial (one at a time)
    executions of those transactions.  This monitoring does not
    introduce any blocking beyond that present in repeatable read, but
    there is some overhead to the monitoring, and detection of the
    conditions which could cause a
    <em class="firstterm">serialization anomaly</em> will trigger a
    <em class="firstterm">serialization failure</em>.
   </p><p>
    As an example,
    consider a table <code class="structname">mytab</code>, initially containing:
</p><pre class="screen">
 class | value
-------+-------
     1 |    10
     1 |    20
     2 |   100
     2 |   200
</pre><p>
    Suppose that serializable transaction A computes:
</p><pre class="screen">
SELECT SUM(value) FROM mytab WHERE class = 1;
</pre><p>
    and then inserts the result (30) as the <code class="structfield">value</code> in a
    new row with <code class="structfield">class</code><code class="literal"> = 2</code>.  Concurrently, serializable
    transaction B computes:
</p><pre class="screen">
SELECT SUM(value) FROM mytab WHERE class = 2;
</pre><p>
    and obtains the result 300, which it inserts in a new row with
    <code class="structfield">class</code><code class="literal"> = 1</code>.  Then both transactions try to commit.
    If either transaction were running at the Repeatable Read isolation level,
    both would be allowed to commit; but since there is no serial order of execution
    consistent with the result, using Serializable transactions will allow one
    transaction to commit and will roll the other back with this message:

</p><pre class="screen">
ERROR:  could not serialize access due to read/write dependencies among transactions
</pre><p>

    This is because if A had
    executed before B, B would have computed the sum 330, not 300, and
    similarly the other order would have resulted in a different sum
    computed by A.
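    The full scenario can be written out as two concurrent sessions (an
    illustrative sketch):

</p><pre class="screen">
-- transaction A
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT SUM(value) FROM mytab WHERE class = 1;     -- returns 30
INSERT INTO mytab (class, value) VALUES (2, 30);

-- transaction B, concurrently
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT SUM(value) FROM mytab WHERE class = 2;     -- returns 300
INSERT INTO mytab (class, value) VALUES (1, 300);

-- when both sessions issue COMMIT, one succeeds and the other fails
-- with the read/write dependency error shown above
</pre><p>

    The failed transaction must be rolled back and retried; on retry it will
    see the other transaction's insert and compute the correct sum.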
   </p><p>
    When relying on Serializable transactions to prevent anomalies, it is
    important that any data read from a permanent user table not be
    considered valid until the transaction which read it has successfully
    committed.  This is true even for read-only transactions, except that
    data read within a <em class="firstterm">deferrable</em> read-only
    transaction is known to be valid as soon as it is read, because such a
    transaction waits until it can acquire a snapshot guaranteed to be free
    from such problems before starting to read any data.  In all other cases
    applications must not depend on results read during a transaction that
    later aborted; instead, they should retry the transaction until it
    succeeds.
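    A deferrable read-only transaction can be started like this (a minimal
    sketch):

</p><pre class="screen">
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
SELECT sum(balance) FROM accounts;   -- results are valid as soon as they are read
COMMIT;
</pre><p>

    Such a transaction may pause before its first query executes, while it
    waits to obtain a snapshot known to be free of anomalies, as described
    further below.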
   </p><p>
    To guarantee true serializability <span class="productname">PostgreSQL</span>
    uses <em class="firstterm">predicate locking</em>, which means that it keeps locks
    which allow it to determine when a write would have had an impact on
    the result of a previous read from a concurrent transaction, had it run
    first.  In <span class="productname">PostgreSQL</span> these locks do not
    cause any blocking and therefore can <span class="emphasis"><em>not</em></span> play any part in
    causing a deadlock.  They are used to identify and flag dependencies
    among concurrent Serializable transactions which in certain combinations
    can lead to serialization anomalies.  In contrast, a Read Committed or
    Repeatable Read transaction which wants to ensure data consistency may
    need to take out a lock on an entire table, which could block other
    users attempting to use that table, or it may use <code class="literal">SELECT FOR
    UPDATE</code> or <code class="literal">SELECT FOR SHARE</code> which not only
    can block other transactions but cause disk access.
   </p><p>
    Predicate locks in <span class="productname">PostgreSQL</span>, like in most
    other database systems, are based on data actually accessed by a
    transaction.  These will show up in the
    <a class="link" href="view-pg-locks.html" title="54.12. pg_locks"><code class="structname">pg_locks</code></a>
    system view with a <code class="literal">mode</code> of <code class="literal">SIReadLock</code>.  The
    particular locks
    acquired during execution of a query will depend on the plan used by
    the query, and multiple finer-grained locks (e.g., tuple locks) may be
    combined into fewer coarser-grained locks (e.g., page locks) during the
    course of the transaction to prevent exhaustion of the memory used to
    track the locks.  A <code class="literal">READ ONLY</code> transaction may be able to
    release its SIRead locks before completion, if it detects that no
    conflicts can still occur which could lead to a serialization anomaly.
    In fact, <code class="literal">READ ONLY</code> transactions will often be able to
    establish that fact at startup and avoid taking any predicate locks.
    If you explicitly request a <code class="literal">SERIALIZABLE READ ONLY DEFERRABLE</code>
    transaction, it will block until it can establish this fact.  (This is
    the <span class="emphasis"><em>only</em></span> case where Serializable transactions block but
    Repeatable Read transactions don't.)  On the other hand, SIRead locks
     often need to be kept past transaction commit, until overlapping
     read-write transactions complete.
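    While the transactions involved are active, these locks can be
    inspected, for example (an illustrative query):

</p><pre class="screen">
SELECT locktype, relation::regclass, page, tuple
    FROM pg_locks
    WHERE mode = 'SIReadLock';
</pre><p>

    The <code class="literal">locktype</code> column shows whether each predicate
    lock currently covers a whole relation, a page, or an individual tuple.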
   </p><p>
    Consistent use of Serializable transactions can simplify development.
    Any set of successfully committed concurrent Serializable transactions
    is guaranteed to have the same effect as if they were run one at a
    time.  Therefore, if you can demonstrate that a single transaction, as
    written, will do the right thing when run by itself, you can have
    confidence that it will either do the right thing in any mix of
    Serializable transactions, even without any information about what
    those other transactions might do, or it will not successfully commit.  It is
    important that an environment which uses this technique have a
    generalized way of handling serialization failures (which always return
    with an SQLSTATE value of '40001'), because it will be very hard to
    predict exactly which transactions might contribute to the read/write
    dependencies and need to be rolled back to prevent serialization
    anomalies.  The monitoring of read/write dependencies has a cost, as does
    the restart of transactions which are terminated with a serialization
    failure, but balanced against the cost and blocking involved in use of
    explicit locks and <code class="literal">SELECT FOR UPDATE</code> or <code class="literal">SELECT FOR
    SHARE</code>, Serializable transactions are the best performance choice
    for some environments.
   </p><p>
    While <span class="productname">PostgreSQL</span>'s Serializable transaction isolation
    level only allows concurrent transactions to commit if it can prove there
    is a serial order of execution that would produce the same effect, it
    doesn't always prevent errors from being raised that would not occur in
    true serial execution.  In particular, it is possible to see unique
    constraint violations caused by conflicts with overlapping Serializable
    transactions even after explicitly checking that the key isn't present
    before attempting to insert it.  This can be avoided by making sure
    that <span class="emphasis"><em>all</em></span> Serializable transactions that insert potentially
    conflicting keys explicitly check if they can do so first.  For example,
    imagine an application that asks the user for a new key and then checks
    that it doesn't exist already by trying to select it first, or generates
    a new key by selecting the maximum existing key and adding one.  If some
    Serializable transactions insert new keys directly without following this
    protocol, unique constraint violations might be reported even in cases
    where they could not occur in a serial execution of the concurrent
    transactions.
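    For example (a sketch assuming a hypothetical table
    <code class="literal">keys</code> with an integer primary key column
    <code class="literal">k</code>):

</p><pre class="screen">
-- both sessions run this concurrently at the Serializable level:
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT 1 FROM keys WHERE k = 42;       -- returns no row in either session
INSERT INTO keys (k) VALUES (42);
COMMIT;
-- one session succeeds; the other may report a unique constraint
-- violation rather than a serialization failure
</pre><p>

    An application prepared to retry on serialization failures may therefore
    also need to handle unique-constraint errors from such transactions.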
   </p><p>
    For optimal performance when relying on Serializable transactions for
    concurrency control, these issues should be considered:

    </p><div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem"><p>
       Declare transactions as <code class="literal">READ ONLY</code> when possible.
      </p></li><li class="listitem"><p>
       Control the number of active connections, using a connection pool if
       needed.  This is always an important performance consideration, but
       it can be particularly important in a busy system using Serializable
       transactions.
      </p></li><li class="listitem"><p>
       Don't put more into a single transaction than needed for integrity
       purposes.
      </p></li><li class="listitem"><p>
       Don't leave connections dangling <span class="quote"><span class="quote">idle in transaction</span></span>
       longer than necessary.  The configuration parameter
       <a class="xref" href="runtime-config-client.html#GUC-IDLE-IN-TRANSACTION-SESSION-TIMEOUT">idle_in_transaction_session_timeout</a> may be used to
       automatically disconnect lingering sessions.
      </p></li><li class="listitem"><p>
       Eliminate explicit locks, <code class="literal">SELECT FOR UPDATE</code>, and
       <code class="literal">SELECT FOR SHARE</code> where no longer needed due to the
       protections automatically provided by Serializable transactions.
      </p></li><li class="listitem"><p>
       When the system is forced to combine multiple page-level predicate
       locks into a single relation-level predicate lock because the predicate
       lock table is short of memory, an increase in the rate of serialization
       failures may occur.  You can avoid this by increasing
       <a class="xref" href="runtime-config-locks.html#GUC-MAX-PRED-LOCKS-PER-TRANSACTION">max_pred_locks_per_transaction</a>,
       <a class="xref" href="runtime-config-locks.html#GUC-MAX-PRED-LOCKS-PER-RELATION">max_pred_locks_per_relation</a>, and/or
       <a class="xref" href="runtime-config-locks.html#GUC-MAX-PRED-LOCKS-PER-PAGE">max_pred_locks_per_page</a>.
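        (A configuration sketch follows this list.)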
      </p></li><li class="listitem"><p>
       A sequential scan will always necessitate a relation-level predicate
       lock.  This can result in an increased rate of serialization failures.
       It may be helpful to encourage the use of index scans by reducing
       <a class="xref" href="runtime-config-query.html#GUC-RANDOM-PAGE-COST">random_page_cost</a> and/or increasing
       <a class="xref" href="runtime-config-query.html#GUC-CPU-TUPLE-COST">cpu_tuple_cost</a>.  Be sure to weigh any decrease
       in transaction rollbacks and restarts against any overall change in
       query execution time.
      </p></li></ul></div><p>
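     As a sketch of adjusting some of the settings mentioned above
     (illustrative values only; suitable values depend on the workload, and
     <code class="literal">max_pred_locks_per_transaction</code> only takes effect
     after a server restart):

</p><pre class="screen">
ALTER SYSTEM SET max_pred_locks_per_transaction = 128;
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();   -- applies the reloadable settings
</pre><p>

     The same settings can instead be placed directly in
     <code class="literal">postgresql.conf</code>.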
   </p><p>
    The Serializable isolation level is implemented using a technique known
    in academic database literature as Serializable Snapshot Isolation, which
    builds on Snapshot Isolation by adding checks for serialization anomalies.
    Some differences in behavior and performance may be observed when compared
    with other systems that use a traditional locking technique.  Please see
    <a class="xref" href="biblio.html#PORTS12">[ports12]</a> for detailed information.
   </p></div></div><div class="navfooter"><hr /><table width="100%" summary="Navigation footer"><tr><td width="40%" align="left"><a accesskey="p" href="mvcc-intro.html" title="13.1. Introduction">Prev</a> </td><td width="20%" align="center"><a accesskey="u" href="mvcc.html" title="Chapter 13. Concurrency Control">Up</a></td><td width="40%" align="right"> <a accesskey="n" href="explicit-locking.html" title="13.3. Explicit Locking">Next</a></td></tr><tr><td width="40%" align="left" valign="top">13.1. Introduction </td><td width="20%" align="center"><a accesskey="h" href="index.html" title="PostgreSQL 16.2 Documentation">Home</a></td><td width="40%" align="right" valign="top"> 13.3. Explicit Locking</td></tr></table></div></body></html>