authorDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-13 14:07:11 +0000
committerDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-13 14:07:11 +0000
commit63847496f14c813a5d80efd5b7de0f1294ffe1e3 (patch)
tree01c7571c7c762ceee70638549a99834fdd7c411b /doc
parentInitial commit. (diff)
downloadsqlite3-63847496f14c813a5d80efd5b7de0f1294ffe1e3.tar.xz
sqlite3-63847496f14c813a5d80efd5b7de0f1294ffe1e3.zip
Adding upstream version 3.45.1.upstream/3.45.1
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc')
-rw-r--r--  doc/F2FS.txt                |   87
-rw-r--r--  doc/compile-for-windows.md  |  148
-rw-r--r--  doc/json-enhancements.md    |  144
-rw-r--r--  doc/jsonb.md                |  290
-rw-r--r--  doc/lemon.html              | 1264
-rw-r--r--  doc/pager-invariants.txt    |   76
-rw-r--r--  doc/testrunner.md           |  344
-rw-r--r--  doc/trusted-schema.md       |  142
-rw-r--r--  doc/vdbesort-memory.md      |   49
-rw-r--r--  doc/vfs-shm.txt             |  130
-rw-r--r--  doc/wal-lock.md             |   88
11 files changed, 2762 insertions, 0 deletions
diff --git a/doc/F2FS.txt b/doc/F2FS.txt
new file mode 100644
index 0000000..47ad229
--- /dev/null
+++ b/doc/F2FS.txt
@@ -0,0 +1,87 @@
+
+SQLite's OS layer contains the following definitions used in F2FS-related
+calls:
+
+#define F2FS_IOCTL_MAGIC 0xf5
+#define F2FS_IOC_START_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 1)
+#define F2FS_IOC_COMMIT_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 2)
+#define F2FS_IOC_START_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 3)
+#define F2FS_IOC_ABORT_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 5)
+#define F2FS_IOC_GET_FEATURES _IOR(F2FS_IOCTL_MAGIC, 12, u32)
+#define F2FS_FEATURE_ATOMIC_WRITE 0x0004
+
+After opening a database file on Linux (including Android), SQLite determines
+whether or not a file supports F2FS atomic commits as follows:
+
+ u32 flags = 0;
+ rc = ioctl(fd, F2FS_IOC_GET_FEATURES, &flags);
+ if( rc==0 && (flags & F2FS_FEATURE_ATOMIC_WRITE) ){
+ /* File supports F2FS atomic commits */
+ }else{
+ /* File does NOT support F2FS atomic commits */
+ }
+
+where "fd" is the file-descriptor open on the database file.
+
+Usually, when writing to a database file that supports atomic commits, SQLite
+accumulates the entire transaction in heap memory, deferring all writes to the
+db file until the transaction is committed.
+
+When it is time to commit a transaction on a file that supports atomic
+commits, SQLite does:
+
+ /* Take an F_WRLCK lock on the database file. This prevents any other
+ ** SQLite clients from reading or writing the file until the lock
+ ** is released. */
+ rc = fcntl(fd, F_SETLK, ...);
+ if( rc!=0 ) goto failed;
+
+ rc = ioctl(fd, F2FS_IOC_START_ATOMIC_WRITE);
+ if( rc!=0 ) goto fallback_to_legacy_journal_commit;
+
+ foreach (dirty page){
+ rc = write(fd, ...dirty page...);
+ if( rc!=0 ){
+ ioctl(fd, F2FS_IOC_ABORT_VOLATILE_WRITE);
+ goto fallback_to_legacy_journal_commit;
+ }
+ }
+
+ rc = ioctl(fd, F2FS_IOC_COMMIT_ATOMIC_WRITE);
+ if( rc!=0 ){
+ ioctl(fd, F2FS_IOC_ABORT_VOLATILE_WRITE);
+ goto fallback_to_legacy_journal_commit;
+ }
+
+ /* If we get here, the transaction has been successfully
+ ** committed to persistent storage. The following call
+ ** relinquishes the F_WRLCK lock. */
+ fcntl(fd, F_SETLK, ...);
+
+Assumptions:
+
+1. After either of the F2FS_IOC_ABORT_VOLATILE_WRITE calls returns,
+ the database file is in the state that it was in before
+ F2FS_IOC_START_ATOMIC_WRITE was invoked. This holds even if the
+ ioctl() fails, which is why its return code is ignored.
+
+ This is true regardless of the type of error that occurred in
+ ioctl() or write().
+
+2. If the system fails before the F2FS_IOC_COMMIT_ATOMIC_WRITE is
+ completed, then following a reboot the database file is in the
+ state that it was in before F2FS_IOC_START_ATOMIC_WRITE was invoked.
+ Or, if the write was committed right before the system failed, in a
+ state indicating that all write() calls were successfully committed
+ to persistent storage before the failure occurred.
+
+3. If the process crashes before the F2FS_IOC_COMMIT_ATOMIC_WRITE is
+ completed then the file is automatically restored to the state that
+ it was in before F2FS_IOC_START_ATOMIC_WRITE was called. This occurs
+ before the posix advisory lock is automatically dropped - there is
+ no chance that another client will be able to read the file in a
+ half-committed state before the rollback operation occurs.
+
+
+
+
diff --git a/doc/compile-for-windows.md b/doc/compile-for-windows.md
new file mode 100644
index 0000000..b8a50af
--- /dev/null
+++ b/doc/compile-for-windows.md
@@ -0,0 +1,148 @@
+# Notes On Compiling SQLite On Windows 11
+
+Here are step-by-step instructions on how to build SQLite from
+canonical source on a new Windows 11 PC, as of 2023-11-01:
+
+ 1. Install Microsoft Visual Studio. The free "community edition"
+ will work fine. Do a standard install for C++ development.
+ SQLite only needs the
+ "cl" compiler and the "nmake" build tool.
+
+ 2. Under the "Start" menu, find "All Apps" then go to "Visual Studio 20XX"
+ and find "x64 Native Tools Command Prompt for VS 20XX". Pin that
+ application to your task bar, as you will use it a lot. Bring up
+ an instance of this command prompt and do all of the subsequent steps
+ in that "x64 Native Tools" command prompt. (Or use "x86" if you want
+ a 32-bit build.) The subsequent steps will not work in a vanilla
+ DOS prompt. Nor will they work in PowerShell.
+
+ 3. Install TCL development libraries. This note assumes that you will
+ install the TCL development libraries in the "`c:\Tcl`" directory.
+ Make adjustments
+ if you want TCL installed somewhere else. SQLite needs both the
+ "tclsh.exe" command-line tool as part of the build process, and
+ the "tcl86.lib" library in order to run tests. You will need
+ TCL version 8.6 or later.
+ <ol type="a">
+ <li>Get the TCL source archive, perhaps from
+ [https://www.tcl.tk/software/tcltk/download.html](https://www.tcl.tk/software/tcltk/download.html).
+ <li>Untar or unzip the source archive. CD into the "win/" subfolder
+ of the source tree.
+ <li>Run: `nmake /f makefile.vc release`
+ <li>Run: `nmake /f makefile.vc INSTALLDIR=c:\Tcl install`
+ <li>CD to `c:\Tcl\lib`. In that subfolder make a copy of the
+ "`tcl86t.lib`" file to the alternative name "`tcl86.lib`"
+ (omitting the second 't'). Leave the copy in the same directory
+ as the original.
+ <li>CD to `c:\Tcl\bin`. Make a copy of the "`tclsh86t.exe`"
+ file into "`tclsh.exe`" (without the "86t") in the same directory.
+ <li>Add `c:\Tcl\bin` to your %PATH%. To do this, go to Settings
+ and search for "path". Select "edit environment variables for
+ your account" and modify your default PATH accordingly.
+ You will need to close and reopen your command prompts after
+ making this change.
+ </ol>
+
+ 4. Download the SQLite source tree and unpack it. CD into the
+ toplevel directory of the source tree.
+
+ 5. Set the TCLDIR environment variable to point to your TCL installation.
+ Like this:
+ <ul>
+ <li> `set TCLDIR=c:\Tcl`
+ </ul>
+
+ 6. Run the "`Makefile.msc`" makefile with an appropriate target.
+ Examples:
+ <ul>
+ <li> `nmake /f makefile.msc`
+ <li> `nmake /f makefile.msc sqlite3.c`
+ <li> `nmake /f makefile.msc devtest`
+ <li> `nmake /f makefile.msc releasetest`
+ </ul>
+
+## 32-bit Builds
+
+Doing a 32-bit build is just like doing a 64-bit build with the
+following minor changes:
+
+ 1. Use the "x86 Native Tools Command Prompt" instead of
+ "x64 Native Tools Command Prompt". "**x86**" instead of "**x64**".
+
+ 2. Use a different installation directory for TCL.
+ The recommended directory is `c:\tcl32`. Thus you end up
+ with two TCL builds:
+ <ul>
+ <li> `c:\tcl` &larr; 64-bit (the default)
+ <li> `c:\tcl32` &larr; 32-bit
+ </ul>
+
+ 3. Ensure that `c:\tcl32\bin` comes before `c:\tcl\bin` on
+ your PATH environment variable. You can achieve this using
+ a command like:
+ <ul>
+ <li> `set PATH=c:\tcl32\bin;%PATH%`
+ </ul>
+
+## Building a DLL
+
+The command the developers use for building the deliverable DLL on the
+[download page](https://sqlite.org/download.html) is as follows:
+
+> ~~~~
+nmake /f Makefile.msc sqlite3.dll USE_NATIVE_LIBPATHS=1 "OPTS=-DSQLITE_ENABLE_FTS3=1 -DSQLITE_ENABLE_FTS4=1 -DSQLITE_ENABLE_FTS5=1 -DSQLITE_ENABLE_RTREE=1 -DSQLITE_ENABLE_JSON1=1 -DSQLITE_ENABLE_GEOPOLY=1 -DSQLITE_ENABLE_SESSION=1 -DSQLITE_ENABLE_PREUPDATE_HOOK=1 -DSQLITE_ENABLE_SERIALIZE=1 -DSQLITE_ENABLE_MATH_FUNCTIONS=1"
+~~~~
+
+That command generates both the sqlite3.dll and sqlite3.def files. The same
+command works for both 32-bit and 64-bit builds.
+
+## Statically Linking The TCL Library
+
+Some utility programs associated with SQLite need to be linked
+with TCL in order to function. The [sqlite3_analyzer.exe program](https://sqlite.org/sqlanalyze.html)
+is an example. You can build as described above, and then
+enter:
+
+> ~~~~
+nmake /f Makefile.msc sqlite3_analyzer.exe
+~~~~
+
+And you will end up with a working executable. However, that executable
+will depend on having the "tcl86.dll" library somewhere on your %PATH%.
+Use the following steps to build an executable that has the TCL library
+statically linked so that it does not depend on a separate DLL:
+
+ 1. Use the appropriate "Command Prompt" window - either x86 or
+ x64, depending on whether you want a 32-bit or 64-bit executable.
+
+ 2. Untar the TCL source tarball into a fresh directory. CD into
+ the "win/" subfolder.
+
+ 3. Run: `nmake /f makefile.vc OPTS=nothreads,static shell`
+
+
+ 4. CD into the "Release*" subfolder that is created (note the
+ wildcard - the full name of the directory might vary). There
+ you will find the "tcl86s.lib" file. Copy this file into the
+ same directory that you put the "tcl86.lib" on your initial
+ installation. (In this document, that directory is
+ "`C:\Tcl32\lib`" for 32-bit builds and
+ "`C:\Tcl\lib`" for 64-bit builds.)
+
+ 5. CD into your SQLite source code directory and build the desired
+ utility program, but add the following extra arguments to the
+ nmake command line:
+ <blockquote><pre>
+ CCOPTS="-DSTATIC_BUILD" LIBTCL="tcl86s.lib netapi32.lib user32.lib"
+ </pre></blockquote>
+ <p>So, for example, to build a statically linked version of
+ sqlite3_analyzer.exe, you might type:
+ <blockquote><pre>
+ nmake /f Makefile.msc CCOPTS="-DSTATIC_BUILD" LIBTCL="tcl86s.lib netapi32.lib user32.lib" sqlite3_analyzer.exe
+ </pre></blockquote>
+
+ 6. After your executable is built, you can verify that it does not
+ depend on the TCL DLL by running:
+ <blockquote><pre>
+ dumpbin /dependents sqlite3_analyzer.exe
+ </pre></blockquote>
diff --git a/doc/json-enhancements.md b/doc/json-enhancements.md
new file mode 100644
index 0000000..bc03e89
--- /dev/null
+++ b/doc/json-enhancements.md
@@ -0,0 +1,144 @@
+# JSON Functions Enhancements (2022)
+
+This document summarizes enhancements to the SQLite JSON support added in
+early 2022.
+
+## 1.0 Change summary:
+
+ 1. New **->** and **->>** operators that work like MySQL and PostgreSQL (PG).
+ 2. JSON functions are built-in rather than being an extension. They
+ are included by default, but can be omitted using the
+ -DSQLITE_OMIT_JSON compile-time option.
+
+
+## 2.0 New operators **->** and **->>**
+
+The SQLite language adds two new binary operators **->** and **->>**.
+Both operators are similar to json_extract(). The left operand is
+JSON and the right operand is a JSON path expression (possibly abbreviated
+for compatibility with PG - see below). So they are similar to a
+two-argument call to json_extract().
+
+The difference between -> and ->> (and json_extract()) is as follows:
+
+ * The -> operator always returns JSON.
+
+ * The ->> operator converts the answer into a primitive SQL datatype
+ such as TEXT, INTEGER, REAL, or NULL. If a JSON object or array
+ is selected, that object or array is rendered as text. If a JSON
+ value is selected, that value is converted into its corresponding
+ SQL type.
+
+ * The json_extract() interface returns JSON when a JSON object or
+ array is selected, or a primitive SQL datatype when a JSON value
+ is selected. This is different from MySQL, in which json_extract()
+ always returns JSON, but the difference is retained because it has
+ worked that way for 6 years and changing it now would likely break
+ a lot of legacy code.
+
+In MySQL and PG, the ->> operator always returns TEXT (or NULL) and never
+INTEGER or REAL. This is due to limitations in the type handling capabilities
+of those systems. In MySQL and PG, the result type of a function or operator
+may only depend on the type of its arguments, never the value of its arguments.
+But the underlying JSON type depends on the value of the JSON path
+expression, not the type of the JSON path expression (which is always TEXT).
+Hence, the result type of ->> in MySQL and PG is unable to vary according
+to the type of the JSON value being extracted.
+
+The type system in SQLite is more general. Functions in SQLite are able
+to return different datatypes depending on the value of their arguments.
+So the ->> operator in SQLite is able to return TEXT, INTEGER, REAL, or NULL
+depending on the JSON type of the value being extracted. This means that
+the behavior of the ->> is slightly different in SQLite versus MySQL and PG
+in that it will sometimes return INTEGER and REAL values, depending on its
+inputs. It is possible to implement the ->> operator in SQLite so that it
+always operates exactly like MySQL and PG and always returns TEXT or NULL,
+but I have been unable to think of any situations where returning the
+actual JSON value would cause problems, so I'm including the enhanced
+functionality in SQLite.
+
+The table below attempts to summarize the differences between the
+-> and ->> operators and the json_extract() function, for SQLite, MySQL,
+and PG. JSON values are shown using their SQL text representation but
+in a bold font.
+
+
+<table border=1 cellpadding=5 cellspacing=0>
+<tr><th>JSON<th>PATH<th>-&gt; operator<br>(all)<th>-&gt;&gt; operator<br>(MySQL/PG)
+ <th>-&gt;&gt; operator<br>(SQLite)<th>json_extract()<br>(SQLite)
+<tr><td> **'{"a":123}'** <td>'$.a'<td> **'123'** <td> '123' <td> 123 <td> 123
+<tr><td> **'{"a":4.5}'** <td>'$.a'<td> **'4.5'** <td> '4.5' <td> 4.5 <td> 4.5
+<tr><td> **'{"a":"xyz"}'** <td>'$.a'<td> **'"xyz"'** <td> 'xyz' <td> 'xyz' <td> 'xyz'
+<tr><td> **'{"a":null}'** <td>'$.a'<td> **'null'** <td> NULL <td> NULL <td> NULL
+<tr><td> **'{"a":[6,7,8]}'** <td>'$.a'<td> **'[6,7,8]'** <td> '[6,7,8]' <td> '[6,7,8]' <td> **'[6,7,8]'**
+<tr><td> **'{"a":{"x":9}}'** <td>'$.a'<td> **'{"x":9}'** <td> '{"x":9}' <td> '{"x":9}' <td> **'{"x":9}'**
+<tr><td> **'{"b":999}'** <td>'$.a'<td> NULL <td> NULL <td> NULL <td> NULL
+</table>
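The SQLite column of the table can be modeled outside of SQLite. The following Python sketch is an illustrative model only, not SQLite's actual C implementation: it mimics the documented ->> behavior for a single top-level object label, where objects and arrays come back rendered as JSON text, primitives come back as their natural SQL datatypes, and a missing label yields NULL:

```python
import json

def arrow2(json_text, label):
    """Model of SQLite's ->> operator for a top-level object label."""
    obj = json.loads(json_text)
    if label not in obj:
        return None                                   # missing path -> SQL NULL
    v = obj[label]
    if isinstance(v, (dict, list)):
        return json.dumps(v, separators=(",", ":"))   # rendered as JSON text
    return v                                          # INTEGER/REAL/TEXT/NULL

assert arrow2('{"a":123}', "a") == 123           # INTEGER, not '123'
assert arrow2('{"a":4.5}', "a") == 4.5           # REAL, not '4.5'
assert arrow2('{"a":"xyz"}', "a") == "xyz"
assert arrow2('{"a":[6,7,8]}', "a") == "[6,7,8]"
assert arrow2('{"b":999}', "a") is None
```

Under MySQL/PG semantics the first two results would instead be the text values '123' and '4.5'; that conversion difference is the entire story of the table above.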
+
+Important points about the table above:
+
+ * The -> operator always returns either JSON or NULL.
+
+ * The ->> operator never returns JSON. It always returns TEXT or NULL, or in the
+ case of SQLite, INTEGER or REAL.
+
+ * The MySQL json_extract() function works exactly the same
+ as the MySQL -> operator.
+
+ * The SQLite json_extract() function works like -> for JSON objects and
+ arrays, and like ->> for JSON values.
+
+ * The -> operator works the same for all systems.
+
+ * The only difference in ->> between SQLite and other systems is that
+ when the JSON value is numeric, SQLite returns a numeric SQL value,
+ whereas the other systems return a text representation of the numeric
+ value.
+
+### 2.1 Abbreviated JSON path expressions for PG compatibility
+
+The table above always shows the full JSON path expression: '$.a'. But
+PG does not accept this syntax. PG only allows a single JSON object label
+name or a single integer array index. In order to provide compatibility
+with PG, the -> and ->> operators in SQLite are extended to also support
+a JSON object label or an integer array index for the right-hand side
+operand, in addition to a full JSON path expression.
+
+Thus, a -> or ->> operator that works in MySQL will work in
+SQLite. And a -> or ->> operator that works in PG will work in SQLite.
+But because SQLite supports the union of the disjoint capabilities of
+MySQL and PG, there will always be -> and ->> operators that work in
+SQLite that do not work in one of MySQL and PG. This is an unavoidable
+consequence of the different syntax for -> and ->> in MySQL and PG.
+
+In the following table, assume that "value1" is a JSON object and
+"value2" is a JSON array.
+
+<table border=1 cellpadding=5 cellspacing=0>
+<tr><th>SQL expression <th>Works in MySQL?<th>Works in PG?<th>Works in SQLite
+<tr><td>value1-&gt;'$.a' <td> yes <td> no <td> yes
+<tr><td>value1-&gt;'a' <td> no <td> yes <td> yes
+<tr><td>value2-&gt;'$[2]' <td> yes <td> no <td> yes
+<tr><td>value2-&gt;2 <td> no <td> yes <td> yes
+</table>
+
+The abbreviated JSON path expressions only work for the -> and ->> operators
+in SQLite. The json_extract() function, and all other built-in SQLite
+JSON functions, continue to require complete JSON path expressions for their
+PATH arguments.
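The shorthand acceptance rule can be stated compactly: a bare object label or a bare integer index on the right-hand side is treated as if it were the corresponding full JSON path expression. A hypothetical Python model (illustration only; `normalize_rhs` is not an SQLite function):

```python
def normalize_rhs(p):
    """Model of the PG-compat shorthand accepted by -> and ->> in SQLite:
    a bare integer becomes '$[N]', a bare label becomes '$.label', and a
    full JSON path expression (starting with '$') passes through as-is."""
    if isinstance(p, int):
        return f"$[{p}]"
    if p.startswith("$"):
        return p
    return f"$.{p}"

assert normalize_rhs("$.a") == "$.a"   # full path (MySQL style)
assert normalize_rhs("a") == "$.a"     # bare label (PG style)
assert normalize_rhs(2) == "$[2]"      # bare index (PG style)
assert normalize_rhs("$[2]") == "$[2]"
```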
+
+## 3.0 JSON moved into the core
+
+The JSON interface has now been moved into the SQLite core.
+
+When originally written in 2015, the JSON functions were an extension
+that could be optionally included at compile-time, or loaded at run-time.
+The implementation was in a source file named ext/misc/json1.c in the
+source tree. JSON functions were only compiled in if the
+-DSQLITE_ENABLE_JSON1 compile-time option was used.
+
+After these enhancements, the JSON functions are now built-ins.
+The source file that implements the JSON functions is moved to src/json.c.
+No special compile-time options are needed to load JSON into the build.
+Instead, there is a new -DSQLITE_OMIT_JSON compile-time option to leave
+them out.
diff --git a/doc/jsonb.md b/doc/jsonb.md
new file mode 100644
index 0000000..5beed16
--- /dev/null
+++ b/doc/jsonb.md
@@ -0,0 +1,290 @@
+# The JSONB Format
+
+This document describes SQLite's JSONB binary encoding of
+JSON.
+
+## 1.0 What Is JSONB?
+
+Beginning with version 3.45.0 (circa 2024-01-01), SQLite supports an
+alternative binary encoding of JSON which we call "JSONB". JSONB is
+a binary format that is stored as a BLOB.
+
+The advantage of JSONB over ordinary text RFC 8259 JSON is that JSONB
+is both slightly smaller (by between 5% and 10% in most cases) and
+can be processed in less than half the number of CPU cycles. The built-in
+[JSON SQL functions] of SQLite can accept either ordinary text JSON
+or the binary JSONB encoding for any of their JSON inputs.
+
+The "JSONB" name is inspired by [PostgreSQL](https://postgresql.org), but the
+on-disk format for SQLite's JSONB is not the same as PostgreSQL's.
+The two formats have the same name, but they have wildly different internal
+representations and are not in any way binary compatible.
+
+The central idea behind this JSONB specification is that each element
+begins with a header that includes the size and type of that element.
+The header takes the place of punctuation such as double-quotes,
+curly-braces, square-brackets, commas, and colons. Since the size
+and type of each element is contained in its header, the element can
+be read faster since it is no longer necessary to carefully scan forward
+looking for the closing delimiter. The payload of JSONB is the same
+as for corresponding text JSON. The same payload bytes occur in the
+same order. The only real difference between JSONB and ordinary text
+JSON is that JSONB includes a binary header on
+each element and omits delimiter and separator punctuation.
+
+### 1.1 Internal Use Only
+
+The details of the JSONB format are not intended to be visible to application
+developers. Application developers should look at JSONB as an opaque BLOB
+used internally by SQLite. Nevertheless, we want the format to be backwards
+compatible across all future versions of SQLite. To that end, the format
+is documented by this file in the source tree. But this file should be
+used only by SQLite core developers, not by developers of applications
+that only use SQLite.
+
+## 2.0 The Purpose Of This Document
+
+JSONB is not intended as an external format to be used by
+applications. JSONB is designed for internal use by SQLite only.
+Programmers do not need to understand the JSONB format in order to
+use it effectively.
+Applications should access JSONB only through the [JSON SQL functions],
+not by looking at individual bytes of the BLOB.
+
+However, JSONB is intended to be portable and backwards compatible
+for all future versions of SQLite. In other words, you should not have
+to export and reimport your SQLite database files when you upgrade to
+a newer SQLite version. For that reason, the JSONB format needs to
+be well-defined.
+
+This document is therefore similar in purpose to the
+[SQLite database file format] document that describes the on-disk
+format of an SQLite database file. Applications are not expected
+to directly read and write the bits and bytes of SQLite database files.
+The SQLite database file format is carefully documented so that it
+can be stable and enduring. In the same way, the JSONB representation
+of JSON is documented here so that it too can be stable and enduring,
+not so that applications can read or write individual bytes.
+
+## 3.0 Encoding
+
+JSONB is a direct translation of the underlying text JSON. The difference
+is that JSONB uses a binary encoding that is faster to parse compared to
+the detailed syntax of text JSON.
+
+Each JSON element is encoded as a header and a payload. The header
+determines the type of the element (string, numeric, boolean, null, object, or
+array) and the size of the payload. The header can be between 1 and
+9 bytes in size. The payload can be any size from zero bytes up to the
+maximum allowed BLOB size.
+
+### 3.1 Payload Size
+
+The upper four bits of the first byte of the header determine the size of the
+header and possibly also the size of the payload.
+If the upper four bits have a value between 0 and 11, then the header is
+exactly one byte in size and the payload size is determined by those
+upper four bits. If the upper four bits have a value between 12 and 15,
+that means that the total header size is 2, 3, 5, or 9 bytes and the
+payload size is an unsigned big-endian integer contained in the
+subsequent bytes. The size integer occupies the one byte following the
+initial header byte if the upper four bits
+are 12, two bytes if the upper bits are 13, four bytes if the upper bits
+are 14, and eight bytes if the upper bits are 15. The current design
+of SQLite does not support BLOB values larger than 2GiB, so the eight-byte
+variant of the payload size integer will never be used by the current code.
+The eight-byte payload size integer is included in the specification
+to allow for future expansion.
+
+The header for an element does *not* need to be in its simplest
+form. For example, consider the JSON numeric value "`1`".
+That element can be encoded in five different ways:
+
+ * `0x13 0x31`
+ * `0xc3 0x01 0x31`
+ * `0xd3 0x00 0x01 0x31`
+ * `0xe3 0x00 0x00 0x00 0x01 0x31`
+ * `0xf3 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x01 0x31`
+
+The shortest encoding is preferred, of course, and usually happens with
+primitive elements such as numbers. However, the total size of an array
+or object might not be known exactly when the header of the element is
+first generated. It is convenient to reserve space for the largest
+possible header and then go back and fill in the correct payload size
+at the end. This technique can result in array or object headers that
+are larger than absolutely necessary.
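The header rules above are mechanical enough to express in a few lines. This Python sketch is an illustrative decoder only (SQLite's real decoder is the C code in src/json.c); it returns the element type, payload size, and header size, and confirms that all five encodings of "1" shown earlier decode identically:

```python
def decode_header(blob):
    """Decode a JSONB element header: (element_type, payload_size, header_size)."""
    first = blob[0]
    element_type = first & 0x0F          # low nibble: element type
    upper = first >> 4                   # high nibble: size selector
    if upper <= 11:
        return element_type, upper, 1    # payload size is the nibble itself
    n = 1 << (upper - 12)                # 12,13,14,15 -> 1,2,4,8 size bytes
    size = int.from_bytes(blob[1:1 + n], "big")
    return element_type, size, 1 + n

# All five encodings of the numeric value "1" decode to
# element type 3 (INT) with a one-byte payload:
for enc in (b"\x13\x31",
            b"\xc3\x01\x31",
            b"\xd3\x00\x01\x31",
            b"\xe3\x00\x00\x00\x01\x31",
            b"\xf3\x00\x00\x00\x00\x00\x00\x00\x01\x31"):
    assert decode_header(enc)[:2] == (3, 1)
```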
+
+### 3.2 Element Type
+
+The least-significant four bits of the first byte of the header (the first
+byte masked against 0x0f) determine the element type. The following codes are
+used:
+
+<ol>
+<li type="0"><p><b>NULL</b> &rarr;
+The element is a JSON "null". The payload size for a true JSON NULL
+must be zero. Future versions of SQLite might extend the JSONB format
+with elements that have a zero element type but a non-zero size. In that
+way, legacy versions of SQLite will interpret the element as a NULL
+for backwards compatibility while newer versions will interpret the
+element in some other way.
+
+<li value="1"><p><b>TRUE</b> &rarr;
+The element is a JSON "true". The payload size must be zero for an actual
+"true" value. Elements with type 1 and a non-zero payload size are
+reserved for future expansion. Legacy implementations that see an element
+type of 1 with a non-zero payload size should continue to interpret that
+element as "true" for compatibility.
+
+<li value="2"><p><b>FALSE</b> &rarr;
+The element is a JSON "false". The payload size must be zero for an actual
+"false" value. Elements with type 2 and a non-zero payload size are
+reserved for future expansion. Legacy implementations that see an element
+type of 2 with a non-zero payload size should continue to interpret that
+element as "false" for compatibility.
+
+<li value="3"><p><b>INT</b> &rarr;
+The element is a JSON integer value in the canonical
+RFC 8259 format, without extensions. The payload is the ASCII
+text representation of that numeric value.
+
+<li value="4"><p><b>INT5</b> &rarr;
+The element is a JSON integer value that is not in the
+canonical format. The payload is the ASCII
+text representation of that numeric value. Because the payload is in a
+non-standard format, it will need to be translated when the JSONB is
+converted into RFC 8259 text JSON.
+
+<li value="5"><p><b>FLOAT</b> &rarr;
+The element is a JSON floating-point value in the canonical
+RFC 8259 format, without extensions. The payload is the ASCII
+text representation of that numeric value.
+
+<li value="6"><p><b>FLOAT5</b> &rarr;
+The element is a JSON floating-point value that is not in the
+canonical format. The payload is the ASCII
+text representation of that numeric value. Because the payload is in a
+non-standard format, it will need to be translated when the JSONB is
+converted into RFC 8259 text JSON.
+
+<li value="7"><p><b>TEXT</b> &rarr;
+The element is a JSON string value that does not contain
+any escapes nor any characters that need to be escaped for either SQL or
+JSON. The payload is the UTF8 text representation of the string value.
+The payload does <i>not</i> include string delimiters.
+
+<li value="8"><p><b>TEXTJ</b> &rarr;
+The element is a JSON string value that contains
+RFC 8259 character escapes (such as "<tt>\n</tt>" or "<tt>\u0020</tt>").
+Those escapes will need to be translated into actual UTF8 if this element
+is [json_extract|extracted] into SQL.
+The payload is the UTF8 text representation of the escaped string value.
+The payload does <i>not</i> include string delimiters.
+
+<li value="9"><p><b>TEXT5</b> &rarr;
+The element is a JSON string value that contains
+character escapes, including some character escapes that are part of JSON5
+and which are not found in the canonical RFC 8259 spec.
+Those escapes will need to be translated into standard JSON prior to
+rendering the JSON as text, or into their actual UTF8 characters if this
+element is [json_extract|extracted] into SQL.
+The payload is the UTF8 text representation of the escaped string value.
+The payload does <i>not</i> include string delimiters.
+
+<li value="10"><p><b>TEXTRAW</b> &rarr;
+The element is a JSON string value that contains
+UTF8 characters that need to be escaped if this string is rendered into
+standard JSON text.
+The payload does <i>not</i> include string delimiters.
+
+<li value="11"><p><b>ARRAY</b> &rarr;
+The element is a JSON array. The payload contains
+JSONB elements that comprise values contained within the array.
+
+<li value="12"><p><b>OBJECT</b> &rarr;
+The element is a JSON object. The payload contains
+pairs of JSONB elements that comprise entries for the JSON object.
+The first element in each pair must be a string (types 7 through 10).
+The second element of each pair may be of any type, including nested
+arrays or objects.
+
+<li value="13"><p><b>RESERVED-13</b> &rarr;
+Reserved for future expansion. Legacy implementations that encounter this
+element type should raise an error.
+
+<li value="14"><p><b>RESERVED-14</b> &rarr;
+Reserved for future expansion. Legacy implementations that encounter this
+element type should raise an error.
+
+<li value="15"><p><b>RESERVED-15</b> &rarr;
+Reserved for future expansion. Legacy implementations that encounter this
+element type should raise an error.
+</ol>
+
+Element types outside the range of 0 to 12 are reserved for future
+expansion. The current implementation raises an error if it sees an element
+type other than those listed above. However, future versions of SQLite might
+use the three remaining element types to implement indexing or similar
+optimizations, to speed up lookups against large JSON arrays and/or objects.
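Since every child element carries its own header, an ARRAY or OBJECT payload can be walked sequentially with no other framing. The following self-contained Python sketch (an illustrative model, not SQLite code) splits a hand-encoded JSONB array [6,7,8] into its children:

```python
def children(payload):
    """Yield (element_type, payload_bytes) for each element in a
    JSONB ARRAY or OBJECT payload."""
    i = 0
    while i < len(payload):
        first = payload[i]
        upper = first >> 4
        if upper <= 11:
            hdr, size = 1, upper
        else:
            n = 1 << (upper - 12)        # 1, 2, 4, or 8 size bytes
            hdr = 1 + n
            size = int.from_bytes(payload[i + 1:i + 1 + n], "big")
        yield first & 0x0F, payload[i + hdr:i + hdr + size]
        i += hdr + size

# JSON array [6,7,8]: header byte 0x6B (payload size 6, type 11 = ARRAY),
# followed by three INT elements, each "0x13 digit".
arr = bytes([0x6B, 0x13, 0x36, 0x13, 0x37, 0x13, 0x38])
assert arr[0] & 0x0F == 11                      # outer element is an ARRAY
vals = [(t, p.decode()) for t, p in children(arr[1:])]
assert vals == [(3, "6"), (3, "7"), (3, "8")]   # three INT children
```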
+
+### 3.3 Design Rationale For Element Types
+
+A key goal of JSONB is that it should be quick to translate
+to and from text JSON and/or be constructed from SQL values.
+When converting from text into JSONB, we do not want the
+converter subroutine to burn CPU cycles converting elements
+values into some standard format which might never be used.
+Format conversion is "lazy" - it is deferred until actually
+needed. This has implications for the JSONB format design:
+
+ 1. Numeric values are stored as text, not as numbers. The values are
+ a direct copy of the text JSON values from which they are derived.
+
+ 2. There are multiple element types depending on the details of value
+ formats. For example, INT is used for pure RFC-8259 integer
+ literals and INT5 exists for JSON5 extensions such as hexadecimal
+ notation. FLOAT is used for pure RFC-8259 floating point literals
+ and FLOAT5 is used for JSON5 extensions. There are four different
+ representations of strings, depending on where the string came from
+ and how special characters within the string are escaped.
+
+A second goal of JSONB is that it should be capable of serving as the
+"parse tree" for JSON when a JSON value is being processed by the
+various [JSON SQL functions] built into SQLite. Before JSONB was
+developed, operations such as [json_replace()] and [json_patch()]
+and similar worked in three stages:
+
+
+ 1. Translate the text JSON into an internal format that is
+ easier to scan and edit.
+ 2. Perform the requested operation on the JSON.
+ 3. Translate the internal format back into text.
+
+JSONB seeks to serve as the internal format directly - bypassing
+the first and third stages of that process. Since most of the CPU
+cycles are spent on the first and third stages, that suggests that
+JSONB processing will be much faster than text JSON processing.
+
+So when processing JSONB, only the second stage of the three-stage
+process is required. But when processing text JSON, it is still necessary
+to do stages one and three. If JSONB is to be used as the internal
+binary representation, this is yet another reason to store numeric
+values as text. Storing numbers as text minimizes the amount of
+conversion work needed for stages one and three. This is also why
+there are four different representations of text in JSONB. Different
+text representations are used for text coming from different sources
+(RFC-8259 JSON, JSON5, or SQL string values) and conversions only
+happen if and when they are actually needed.
+
+### 3.4 Valid JSONB BLOBs
+
+A valid JSONB BLOB consists of a single JSON element. The element must
+exactly fill the BLOB. This one element is often a JSON object or array,
+which usually contains additional elements as its payload, but the
+element can be a primitive value such as a string, number, boolean, or null.
+
+When the built-in JSON functions are attempting to determine if a BLOB
+argument is a JSONB or just a random BLOB, they look at the header of
+the outer element to see that it is well-formed and that the element
+completely fills the BLOB. If these conditions are met, then the BLOB
+is accepted as a JSONB value.
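That acceptance test can be modeled in a few lines. This Python sketch (illustrative only, not the actual C logic) accepts a BLOB when its outer header is well-formed, its element type is not reserved, and header plus payload exactly fill the BLOB:

```python
def looks_like_jsonb(blob):
    """Model of the 'is this BLOB a JSONB value?' acceptance test."""
    if not blob:
        return False
    first = blob[0]
    if (first & 0x0F) > 12:          # element types 13-15 are reserved
        return False
    upper = first >> 4
    if upper <= 11:
        hdr, size = 1, upper
    else:
        n = 1 << (upper - 12)        # 1, 2, 4, or 8 size bytes follow
        if len(blob) < 1 + n:
            return False             # truncated header
        hdr, size = 1 + n, int.from_bytes(blob[1:1 + n], "big")
    return hdr + size == len(blob)   # element must exactly fill the BLOB

assert looks_like_jsonb(b"\x13\x31")            # INT "1"
assert not looks_like_jsonb(b"\x13\x31\x00")    # trailing garbage
assert not looks_like_jsonb(b"")                # empty BLOB
```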
diff --git a/doc/lemon.html b/doc/lemon.html
new file mode 100644
index 0000000..66665f4
--- /dev/null
+++ b/doc/lemon.html
@@ -0,0 +1,1264 @@
+<html>
+<head>
+<title>The Lemon Parser Generator</title>
+</head>
+<body>
+<a id="main"></a>
+<h1 align='center'>The Lemon Parser Generator</h1>
+
+<p>Lemon is an LALR(1) parser generator for C.
+It does the same job as "bison" and "yacc".
+But Lemon is not a bison or yacc clone. Lemon
+uses a different grammar syntax which is designed to
+reduce the number of coding errors. Lemon also uses a
+parsing engine that is faster than yacc and
+bison and which is both reentrant and threadsafe.
+(Update: Since the previous sentence was written, bison
+has also been updated so that it too can generate a
+reentrant and threadsafe parser.)
+Lemon also implements features that can be used
+to eliminate resource leaks, making it suitable for use
+in long-running programs such as graphical user interfaces
+or embedded controllers.</p>
+
+<p>This document is an introduction to the Lemon
+parser generator.</p>
+
+<a id="toc"></a>
+<h2>1.0 Table of Contents</h2>
+<ul>
+<li><a href="#main">Introduction</a>
+<li><a href="#toc">1.0 Table of Contents</a>
+<li><a href="#secnot">2.0 Security Note</a>
+<li><a href="#optheory">3.0 Theory of Operation</a>
+ <ul>
+ <li><a href="#options">3.1 Command Line Options</a>
+ <li><a href="#interface">3.2 The Parser Interface</a>
+ <ul>
+ <li><a href="#onstack">3.2.1 Allocating The Parse Object On Stack</a>
+ <li><a href="#ifsum">3.2.2 Interface Summary</a>
+ </ul>
+ <li><a href="#yaccdiff">3.3 Differences With YACC and BISON</a>
+ <li><a href="#build">3.4 Building The "lemon" Or "lemon.exe" Executable</a>
+ </ul>
+<li><a href="#syntax">4.0 Input File Syntax</a>
+ <ul>
+ <li><a href="#tnt">4.1 Terminals and Nonterminals</a>
+ <li><a href="#rules">4.2 Grammar Rules</a>
+ <li><a href="#precrules">4.3 Precedence Rules</a>
+ <li><a href="#special">4.4 Special Directives</a>
+ </ul>
+<li><a href="#errors">5.0 Error Processing</a>
+<li><a href="#history">6.0 History of Lemon</a>
+<li><a href="#copyright">7.0 Copyright</a>
+</ul>
+
+<a id="secnot"></a>
+<h2>2.0 Security Note</h2>
+
+<p>The language parser code created by Lemon is very robust and
+is well-suited for use in internet-facing applications that need to
+safely process maliciously crafted inputs.</p>
+
+<p>The "lemon.exe" command-line tool itself works great when given a valid
+input grammar file and almost always gives helpful
+error messages for malformed inputs. However, it is possible for
+a malicious user to craft a grammar file that will cause
+lemon.exe to crash.
+We do not see this as a problem, as lemon.exe is not intended to be used
+with hostile inputs.
+To summarize:</p>
+
+<ul>
+<li>Parser code generated by lemon &rarr; Robust and secure
+<li>The "lemon.exe" command line tool itself &rarr; Not so much
+</ul>
+
+<a id="optheory"></a>
+<h2>3.0 Theory of Operation</h2>
+
+<p>Lemon is a computer program that translates a context-free grammar (CFG)
+for a particular language into C code that implements a parser for
+that language.
+The Lemon program has two inputs:</p>
+<ul>
+<li>The grammar specification.
+<li>A parser template file.
+</ul>
+<p>Typically, only the grammar specification is supplied by the programmer.
+Lemon comes with a default parser template
+("<a href="https://sqlite.org/src/file/tool/lempar.c">lempar.c</a>")
+that works fine for most applications. But the user is free to substitute
+a different parser template if desired.</p>
+
+<p>Depending on command-line options, Lemon will generate up to
+three output files.</p>
+<ul>
+<li>C code to implement a parser for the input grammar.
+<li>A header file defining an integer ID for each terminal symbol
+ (or "token").
+<li>An information file that describes the states of the generated parser
+ automaton.
+</ul>
+<p>By default, all three of these output files are generated.
+The header file is suppressed if the "-m" command-line option is
+used and the report file is omitted when "-q" is selected.</p>
+
+<p>The grammar specification file uses a ".y" suffix, by convention.
+In the examples used in this document, we'll assume the name of the
+grammar file is "gram.y". A typical use of Lemon would be the
+following command:</p>
+<pre>
+ lemon gram.y
+</pre>
+<p>This command will generate three output files named "gram.c",
+"gram.h" and "gram.out".
+The first is C code to implement the parser. The second
+is the header file that defines numerical values for all
+terminal symbols, and the last is the report that explains
+the states used by the parser automaton.</p>
+
+<a id="options"></a>
+<h3>3.1 Command Line Options</h3>
+
+<p>The behavior of Lemon can be modified using command-line options.
+You can obtain a list of the available command-line options together
+with a brief explanation of what each does by typing</p>
+<pre>
+ lemon "-?"
+</pre>
+<p>As of this writing, the following command-line options are supported:</p>
+<ul>
+<li><b>-b</b>
+Show only the basis for each parser state in the report file.
+<li><b>-c</b>
+Do not compress the generated action tables. The parser will be a
+little larger and slower, but it will detect syntax errors sooner.
+<li><b>-d</b><i>directory</i>
+Write all output files into <i>directory</i>. Normally, output files
+are written into the directory that contains the input grammar file.
+<li><b>-D<i>name</i></b>
+Define C preprocessor macro <i>name</i>. This macro is usable by
+"<tt><a href='#pifdef'>%ifdef</a></tt>",
+"<tt><a href='#pifdef'>%ifndef</a></tt>", and
+"<tt><a href="#pifdef">%if</a></tt> lines
+in the grammar file.
+<li><b>-E</b>
+Run the "%if" preprocessor step only and print the revised grammar
+file.
+<li><b>-g</b>
+Do not generate a parser. Instead write the input grammar to standard
+output with all comments, actions, and other extraneous text removed.
+<li><b>-l</b>
+Omit "#line" directives in the generated parser C code.
+<li><b>-m</b>
+Cause the output C source code to be compatible with the "makeheaders"
+program.
+<li><b>-p</b>
+Display all conflicts that are resolved by
+<a href='#precrules'>precedence rules</a>.
+<li><b>-q</b>
+Suppress generation of the report file.
+<li><b>-r</b>
+Do not sort or renumber the parser states as part of optimization.
+<li><b>-s</b>
+Show parser statistics before exiting.
+<li><b>-T<i>file</i></b>
+Use <i>file</i> as the template for the generated C-code parser implementation.
+<li><b>-x</b>
+Print the Lemon version number.
+</ul>
+
+<a id="interface"></a>
+<h3>3.2 The Parser Interface</h3>
+
+<p>Lemon doesn't generate a complete, working program. It only generates
+a few subroutines that implement a parser. This section describes
+the interface to those subroutines. It is up to the programmer to
+call these subroutines in an appropriate way in order to produce a
+complete system.</p>
+
+<p>Before a program begins using a Lemon-generated parser, the program
+must first create the parser.
+A new parser is created as follows:</p>
+<pre>
+ void *pParser = ParseAlloc( malloc );
+</pre>
+<p>The ParseAlloc() routine allocates and initializes a new parser and
+returns a pointer to it.
+The actual data structure used to represent a parser is opaque &mdash;
+its internal structure is not visible or usable by the calling routine.
+For this reason, the ParseAlloc() routine returns a pointer to void
+rather than a pointer to some particular structure.
+The sole argument to the ParseAlloc() routine is a pointer to the
+subroutine used to allocate memory. Typically this means malloc().</p>
+
+<p>After a program is finished using a parser, it can reclaim all
+memory allocated by that parser by calling</p>
+<pre>
+ ParseFree(pParser, free);
+</pre>
+<p>The first argument is the same pointer returned by ParseAlloc(). The
+second argument is a pointer to the function used to release bulk
+memory back to the system.</p>
+
+<p>After a parser has been allocated using ParseAlloc(), the programmer
+must supply the parser with a sequence of tokens (terminal symbols) to
+be parsed. This is accomplished by calling the following function
+once for each token:</p>
+<pre>
+ Parse(pParser, hTokenID, sTokenData, pArg);
+</pre>
+<p>The first argument to the Parse() routine is the pointer returned by
+ParseAlloc().
+The second argument is a small positive integer that tells the parser the
+type of the next token in the data stream.
+There is one token type for each terminal symbol in the grammar.
+The gram.h file generated by Lemon contains #define statements that
+map symbolic terminal symbol names into appropriate integer values.
+A value of 0 for the second argument is a special flag to the
+parser to indicate that the end of input has been reached.
+The third argument is the value of the given token. By default,
+the type of the third argument is "void*", but the grammar will
+usually redefine this type to be some kind of structure.
+Typically the second argument will be a broad category of tokens
+such as "identifier" or "number" and the third argument will
+be the name of the identifier or the value of the number.</p>
+
+<p>The Parse() function may have either three or four arguments,
+depending on the grammar. If the grammar specification file requests
+it (via the <tt><a href='#extraarg'>%extra_argument</a></tt> directive),
+the Parse() function will have a fourth parameter that can be
+of any type chosen by the programmer. The parser doesn't do anything
+with this argument except to pass it through to action routines.
+This is a convenient mechanism for passing state information down
+to the action routines without having to use global variables.</p>
+
+<p>A typical use of a Lemon parser might look something like the
+following:</p>
+<pre>
+ 1 ParseTree *ParseFile(const char *zFilename){
+ 2 Tokenizer *pTokenizer;
+ 3 void *pParser;
+ 4 Token sToken;
+ 5 int hTokenId;
+ 6 ParserState sState;
+ 7
+ 8 pTokenizer = TokenizerCreate(zFilename);
+ 9 pParser = ParseAlloc( malloc );
+ 10 InitParserState(&amp;sState);
+ 11 while( GetNextToken(pTokenizer, &amp;hTokenId, &amp;sToken) ){
+ 12 Parse(pParser, hTokenId, sToken, &amp;sState);
+ 13 }
+ 14 Parse(pParser, 0, sToken, &amp;sState);
+ 15 ParseFree(pParser, free );
+ 16 TokenizerFree(pTokenizer);
+ 17 return sState.treeRoot;
+ 18 }
+</pre>
+<p>This example shows a user-written routine that parses a file of
+text and returns a pointer to the parse tree.
+(All error-handling code is omitted from this example to keep it
+simple.)
+We assume the existence of some kind of tokenizer which is created
+using TokenizerCreate() on line 8 and deleted by TokenizerFree()
+on line 16. The GetNextToken() function on line 11 retrieves the
+next token from the input file and puts its type in the
+integer variable hTokenId. The sToken variable is assumed to be
+some kind of structure that contains details about each token,
+such as its complete text, what line it occurs on, etc.</p>
+
+<p>This example also assumes the existence of a structure of type
+ParserState that holds state information about a particular parse.
+An instance of such a structure is created on line 6 and initialized
+on line 10. A pointer to this structure is passed into the Parse()
+routine as the optional 4th argument.
+The action routine specified by the grammar for the parser can use
+the ParserState structure to hold whatever information is useful and
+appropriate. In the example, we note that the treeRoot field of
+the ParserState structure is left pointing to the root of the parse
+tree.</p>
+
+<p>The core of this example as it relates to Lemon is as follows:</p>
+<pre>
+ ParseFile(){
+ pParser = ParseAlloc( malloc );
+ while( GetNextToken(pTokenizer,&amp;hTokenId, &amp;sToken) ){
+ Parse(pParser, hTokenId, sToken);
+ }
+ Parse(pParser, 0, sToken);
+ ParseFree(pParser, free );
+ }
+</pre>
+<p>Basically, what a program has to do to use a Lemon-generated parser
+is first create the parser, then send it lots of tokens obtained by
+tokenizing an input source. When the end of input is reached, the
+Parse() routine should be called one last time with a token type
+of 0. This step is necessary to inform the parser that the end of
+input has been reached. Finally, we reclaim memory used by the
+parser by calling ParseFree().</p>
+
+<p>There is one other interface routine that should be mentioned
+before we move on.
+The ParseTrace() function can be used to generate debugging output
+from the parser. A prototype for this routine is as follows:</p>
+<pre>
+ ParseTrace(FILE *stream, char *zPrefix);
+</pre>
+<p>After this routine is called, a short (one-line) message is written
+to the designated output stream every time the parser changes states
+or calls an action routine. Each such message is prefaced using
+the text given by zPrefix. This debugging output can be turned off
+by calling ParseTrace() again with a first argument of NULL (0).</p>
+
+<a id="onstack"></a>
+<h4>3.2.1 Allocating The Parse Object On Stack</h4>
+
+<p>If all calls to the Parse() interface are made from within
+<a href="#pcode"><tt>%code</tt> directives</a>, then the parse
+object can be allocated from the stack rather than from the heap.
+These are the steps:
+
+<ul>
+<li> Declare a local variable of type "yyParser"
+<li> Initialize the variable using ParseInit()
+<li> Pass a pointer to the variable in calls to Parse()
+<li> Deallocate substructure in the parse variable using ParseFinalize().
+</ul>
+
+<p>The following code illustrates how this is done:
+
+<pre>
+ ParseFile(){
+ yyParser x;
+    ParseInit( &amp;x );
+    while( GetNextToken(pTokenizer,&amp;hTokenId, &amp;sToken) ){
+      Parse(&amp;x, hTokenId, sToken);
+    }
+    Parse(&amp;x, 0, sToken);
+    ParseFinalize( &amp;x );
+ }
+</pre>
+
+<a id="ifsum"></a>
+<h4>3.2.2 Interface Summary</h4>
+
+<p>Here is a quick overview of the C-language interface to a
+Lemon-generated parser:</p>
+
+<blockquote><pre>
+void *ParseAlloc( void *(*malloc)(size_t) );
+void ParseFree(void *pParser, void (*free)(void*) );
+void Parse(void *pParser, int tokenCode, ParseTOKENTYPE token, ...);
+void ParseTrace(FILE *stream, char *zPrefix);
+</pre></blockquote>
+
+<p>Notes:</p>
+<ul>
+<li> Use the <a href="#pname"><tt>%name</tt> directive</a> to change
+the "Parse" prefix names of the procedures in the interface.
+<li> Use the <a href="#token_type"><tt>%token_type</tt> directive</a>
+to define the "ParseTOKENTYPE" type.
+<li> Use the <a href="#extraarg"><tt>%extra_argument</tt> directive</a>
+to specify the type and name of the 4th parameter to the
+Parse() function.
+</ul>
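+
+<p>For example, a grammar that begins with the following directives
+(a hypothetical fragment) would instead expose GramAlloc(), GramFree(),
+Gram(), and GramTrace(), with token values of type Token:</p>
+<pre>
+   %name Gram
+   %token_type { Token }
+</pre>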
+
+<a id="yaccdiff"></a>
+<h3>3.3 Differences With YACC and BISON</h3>
+
+<p>Programmers who have previously used the yacc or bison parser
+generators will notice several important differences between
+yacc/bison and Lemon.</p>
+<ul>
+<li>In yacc and bison, the parser calls the tokenizer. In Lemon,
+ the tokenizer calls the parser.
+<li>Lemon uses no global variables. Yacc and bison use global variables
+ to pass information between the tokenizer and parser.
+<li>Lemon allows multiple parsers to be running simultaneously. Yacc
+ and bison do not.
+</ul>
+<p>These differences may cause some initial confusion for programmers
+with prior yacc and bison experience.
+But after years of experience using Lemon, I firmly
+believe that the Lemon way of doing things is better.</p>
+
+<p><i>Updated as of 2016-02-16:</i>
+The text above was written in the 1990s.
+We are told that Bison has lately been enhanced to support the
+tokenizer-calls-parser paradigm used by Lemon, eliminating the
+need for global variables.</p>
+
+<a id="build"></a>
+<h3>3.4 Building The "lemon" or "lemon.exe" Executable</h3>
+
+<p>The "lemon" or "lemon.exe" program is built from a single file
+of C-code named
+"<a href="https://sqlite.org/src/tool/lemon.c">lemon.c</a>".
+The Lemon source code is generic C89 code that uses
+no unusual or non-standard libraries. Any
+reasonable C compiler should suffice to compile the lemon program.
+A command-line like the following will usually work:</p>
+
+<blockquote><pre>
+cc -o lemon lemon.c
+</pre></blockquote>
+
+<p>On Windows machines with Visual C++ installed, bring up a
+"VS20<i>NN</i> x64 Native Tools Command Prompt" window and enter:
+
+<blockquote><pre>
+cl lemon.c
+</pre></blockquote>
+
+<p>Compiling Lemon really is that simple.
+Additional compiler options such as
+"-O2" or "-g" or "-Wall" can be added if desired, but they are not
+necessary.</p>
+
+
+<a id="syntax"></a>
+<h2>4.0 Input File Syntax</h2>
+
+<p>The main purpose of the grammar specification file for Lemon is
+to define the grammar for the parser. But the input file also
+specifies additional information Lemon requires to do its job.
+Most of the work in using Lemon is in writing an appropriate
+grammar file.</p>
+
+<p>The grammar file for Lemon is, for the most part, a free format.
+It does not have sections or divisions like yacc or bison. Any
+declaration can occur at any point in the file. Lemon ignores
+whitespace (except where it is needed to separate tokens), and it
+honors the same commenting conventions as C and C++.</p>
+
+<a id="tnt"></a>
+<h3>4.1 Terminals and Nonterminals</h3>
+
+<p>A terminal symbol (token) is any string of alphanumeric
+and/or underscore characters
+that begins with an uppercase letter.
+A terminal can contain lowercase letters after the first character,
+but the usual convention is to make terminals all uppercase.
+A nonterminal, on the other hand, is any string of alphanumeric
+and underscore characters that begins with a lowercase letter.
+Again, the usual convention is to make nonterminals use all lowercase
+letters.</p>
+
+<p>In Lemon, terminal and nonterminal symbols do not need to
+be declared or identified in a separate section of the grammar file.
+Lemon is able to generate a list of all terminals and nonterminals
+by examining the grammar rules, and it can always distinguish a
+terminal from a nonterminal by checking the case of the first
+character of the name.</p>
+
+<p>Yacc and bison allow terminal symbols to have either alphanumeric
+names or to be individual characters included in single quotes, like
+this: ')' or '$'. Lemon does not allow this alternative form for
+terminal symbols. With Lemon, all symbols, terminals and nonterminals,
+must have alphanumeric names.</p>
+
+<a id="rules"></a>
+<h3>4.2 Grammar Rules</h3>
+
+<p>The main component of a Lemon grammar file is a sequence of grammar
+rules.
+Each grammar rule consists of a nonterminal symbol followed by
+the special symbol "::=" and then a list of terminals and/or nonterminals.
+The rule is terminated by a period.
+The list of terminals and nonterminals on the right-hand side of the
+rule can be empty.
+Rules can occur in any order, except that the left-hand side of the
+first rule is assumed to be the start symbol for the grammar (unless
+specified otherwise using the <tt><a href='#start_symbol'>%start_symbol</a></tt>
+directive described below.)
+A typical sequence of grammar rules might look something like this:</p>
+<pre>
+ expr ::= expr PLUS expr.
+ expr ::= expr TIMES expr.
+ expr ::= LPAREN expr RPAREN.
+ expr ::= VALUE.
+</pre>
+
+<p>There is one non-terminal in this example, "expr", and five
+terminal symbols or tokens: "PLUS", "TIMES", "LPAREN",
+"RPAREN" and "VALUE".</p>
+
+<p>Like yacc and bison, Lemon allows the grammar to specify a block
+of C code that will be executed whenever a grammar rule is reduced
+by the parser.
+In Lemon, this action is specified by putting the C code (contained
+within curly braces <tt>{...}</tt>) immediately after the
+period that closes the rule.
+For example:</p>
+<pre>
+ expr ::= expr PLUS expr. { printf("Doing an addition...\n"); }
+</pre>
+
+<p>In order to be useful, grammar actions must normally be linked to
+their associated grammar rules.
+In yacc and bison, this is accomplished by embedding a "$$" in the
+action to stand for the value of the left-hand side of the rule and
+symbols "$1", "$2", and so forth to stand for the value of
+the terminal or nonterminal at position 1, 2 and so forth on the
+right-hand side of the rule.
+This idea is very powerful, but it is also very error-prone. The
+single most common source of errors in a yacc or bison grammar is
+to miscount the number of symbols on the right-hand side of a grammar
+rule and say "$7" when you really mean "$8".</p>
+
+<p>Lemon avoids the need to count grammar symbols by assigning symbolic
+names to each symbol in a grammar rule and then using those symbolic
+names in the action.
+In yacc or bison, one would write this:</p>
+<pre>
+ expr -&gt; expr PLUS expr { $$ = $1 + $3; };
+</pre>
+<p>But in Lemon, the same rule becomes the following:</p>
+<pre>
+ expr(A) ::= expr(B) PLUS expr(C). { A = B+C; }
+</pre>
+<p>In the Lemon rule, any symbol in parentheses after a grammar rule
+symbol becomes a place holder for that symbol in the grammar rule.
+This place holder can then be used in the associated C action to
+stand for the value of that symbol.</p>
+
+<p>The Lemon notation for linking a grammar rule with its reduce
+action is superior to yacc/bison on several counts.
+First, as mentioned above, the Lemon method avoids the need to
+count grammar symbols.
+Secondly, if a terminal or nonterminal in a Lemon grammar rule
+includes a linking symbol in parentheses but that linking symbol
+is not actually used in the reduce action, then an error message
+is generated.
+For example, the rule</p>
+<pre>
+ expr(A) ::= expr(B) PLUS expr(C). { A = B; }
+</pre>
+<p>will generate an error because the linking symbol "C" is used
+in the grammar rule but not in the reduce action.</p>
+
+<p>The Lemon notation for linking grammar rules to reduce actions
+also facilitates the use of destructors for reclaiming memory
+allocated by the values of terminals and nonterminals on the
+right-hand side of a rule.</p>
+
+<a id='precrules'></a>
+<h3>4.3 Precedence Rules</h3>
+
+<p>Lemon resolves parsing ambiguities in exactly the same way as
+yacc and bison. A shift-reduce conflict is resolved in favor
+of the shift, and a reduce-reduce conflict is resolved by reducing
+whichever rule comes first in the grammar file.</p>
+
+<p>Just like in
+yacc and bison, Lemon allows a measure of control
+over the resolution of parsing conflicts using precedence rules.
+A precedence value can be assigned to any terminal symbol
+using the
+<tt><a href='#pleft'>%left</a></tt>,
+<tt><a href='#pright'>%right</a></tt> or
+<tt><a href='#pnonassoc'>%nonassoc</a></tt> directives. Terminal symbols
+mentioned in earlier directives have a lower precedence than
+terminal symbols mentioned in later directives. For example:</p>
+
+<pre>
+ %left AND.
+ %left OR.
+ %nonassoc EQ NE GT GE LT LE.
+ %left PLUS MINUS.
+ %left TIMES DIVIDE MOD.
+ %right EXP NOT.
+</pre>
+
+<p>In the preceding sequence of directives, the AND operator is
+defined to have the lowest precedence. The OR operator is one
+precedence level higher. And so forth. Hence, the grammar would
+attempt to group the ambiguous expression</p>
+<pre>
+ a AND b OR c
+</pre>
+<p>like this</p>
+<pre>
+ a AND (b OR c).
+</pre>
+<p>The associativity (left, right or nonassoc) is used to determine
+the grouping when the precedence is the same. AND is left-associative
+in our example, so</p>
+<pre>
+ a AND b AND c
+</pre>
+<p>is parsed like this</p>
+<pre>
+ (a AND b) AND c.
+</pre>
+<p>The EXP operator is right-associative, though, so</p>
+<pre>
+ a EXP b EXP c
+</pre>
+<p>is parsed like this</p>
+<pre>
+ a EXP (b EXP c).
+</pre>
+<p>The nonassoc precedence is used for non-associative operators.
+So</p>
+<pre>
+ a EQ b EQ c
+</pre>
+<p>is an error.</p>
+
+<p>Precedence is assigned to grammar rules as follows:
+The precedence of a grammar rule is equal to the precedence of the
+left-most terminal symbol in the rule for which a precedence is
+defined.  This is normally what you want, but in those cases where
+you want the precedence of a grammar rule to be something different,
+you can specify an alternative precedence symbol by putting the
+symbol in square brackets after the period at the end of the rule and
+before any C-code.  For example:</p>
+
+<pre>
+   expr ::= MINUS expr.  [NOT]
+</pre>
+
+<p>This rule has a precedence equal to that of the NOT symbol, not the
+MINUS symbol as would have been the case by default.</p>
+
+<p>With the knowledge of how precedence is assigned to terminal
+symbols and individual
+grammar rules, we can now explain precisely how parsing conflicts
+are resolved in Lemon. Shift-reduce conflicts are resolved
+as follows:</p>
+<ul>
+<li> If either the token to be shifted or the rule to be reduced
+ lacks precedence information, then resolve in favor of the
+ shift, but report a parsing conflict.
+<li> If the precedence of the token to be shifted is greater than
+ the precedence of the rule to reduce, then resolve in favor
+ of the shift. No parsing conflict is reported.
+<li> If the precedence of the token to be shifted is less than the
+ precedence of the rule to reduce, then resolve in favor of the
+ reduce action. No parsing conflict is reported.
+<li> If the precedences are the same and the shift token is
+ right-associative, then resolve in favor of the shift.
+ No parsing conflict is reported.
+<li> If the precedences are the same and the shift token is
+ left-associative, then resolve in favor of the reduce.
+ No parsing conflict is reported.
+<li> Otherwise, resolve the conflict by doing the shift, and
+ report a parsing conflict.
+</ul>
+<p>Reduce-reduce conflicts are resolved this way:</p>
+<ul>
+<li> If either reduce rule
+ lacks precedence information, then resolve in favor of the
+ rule that appears first in the grammar, and report a parsing
+ conflict.
+<li> If both rules have precedence and the precedence is different,
+ then resolve the dispute in favor of the rule with the highest
+ precedence, and do not report a conflict.
+<li> Otherwise, resolve the conflict by reducing by the rule that
+ appears first in the grammar, and report a parsing conflict.
+</ul>
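+
+<p>As an illustration of these rules, the classic dangling-else
+ambiguity can be resolved quietly in favor of the shift (binding each
+ELSE to the nearest IF) by giving ELSE a higher precedence than the
+rule for the short form of IF. The following is a hypothetical
+fragment, not taken from any real grammar:</p>
+<pre>
+   %nonassoc IF.
+   %nonassoc ELSE.
+   stmt ::= IF expr stmt.            [IF]
+   stmt ::= IF expr stmt ELSE stmt.
+</pre>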
+
+<a id="special"></a>
+<h3>4.4 Special Directives</h3>
+
+<p>The input grammar to Lemon consists of grammar rules and special
+directives. We've described all the grammar rules, so now we'll
+talk about the special directives.</p>
+
+<p>Directives in Lemon can occur in any order. You can put them before
+the grammar rules, or after the grammar rules, or in the midst of the
+grammar rules. It doesn't matter. The relative order of
+directives used to assign precedence to terminals is important, but
+other than that, the order of directives in Lemon is arbitrary.</p>
+
+<p>Lemon supports the following special directives:</p>
+<ul>
+<li><tt><a href='#pcode'>%code</a></tt>
+<li><tt><a href='#default_destructor'>%default_destructor</a></tt>
+<li><tt><a href='#default_type'>%default_type</a></tt>
+<li><tt><a href='#destructor'>%destructor</a></tt>
+<li><tt><a href='#pifdef'>%else</a></tt>
+<li><tt><a href='#pifdef'>%endif</a></tt>
+<li><tt><a href='#extraarg'>%extra_argument</a></tt>
+<li><tt><a href='#pfallback'>%fallback</a></tt>
+<li><tt><a href='#pifdef'>%if</a></tt>
+<li><tt><a href='#pifdef'>%ifdef</a></tt>
+<li><tt><a href='#pifdef'>%ifndef</a></tt>
+<li><tt><a href='#pinclude'>%include</a></tt>
+<li><tt><a href='#pleft'>%left</a></tt>
+<li><tt><a href='#pname'>%name</a></tt>
+<li><tt><a href='#pnonassoc'>%nonassoc</a></tt>
+<li><tt><a href='#parse_accept'>%parse_accept</a></tt>
+<li><tt><a href='#parse_failure'>%parse_failure</a></tt>
+<li><tt><a href='#pright'>%right</a></tt>
+<li><tt><a href='#stack_overflow'>%stack_overflow</a></tt>
+<li><tt><a href='#stack_size'>%stack_size</a></tt>
+<li><tt><a href='#start_symbol'>%start_symbol</a></tt>
+<li><tt><a href='#syntax_error'>%syntax_error</a></tt>
+<li><tt><a href='#token'>%token</a></tt>
+<li><tt><a href='#token_class'>%token_class</a></tt>
+<li><tt><a href='#token_destructor'>%token_destructor</a></tt>
+<li><tt><a href='#token_prefix'>%token_prefix</a></tt>
+<li><tt><a href='#token_type'>%token_type</a></tt>
+<li><tt><a href='#ptype'>%type</a></tt>
+<li><tt><a href='#pwildcard'>%wildcard</a></tt>
+</ul>
+<p>Each of these directives will be described separately in the
+following sections:</p>
+
+<a id='pcode'></a>
+<h4>4.4.1 The <tt>%code</tt> directive</h4>
+
+<p>The <tt>%code</tt> directive is used to specify additional C code that
+is added to the end of the main output file. This is similar to
+the <tt><a href='#pinclude'>%include</a></tt> directive except that
+<tt>%include</tt> is inserted at the beginning of the main output file.</p>
+
+<p><tt>%code</tt> is typically used to include some action routines or perhaps
+a tokenizer or even the "main()" function
+as part of the output file.</p>
+
+<p>There can be multiple <tt>%code</tt> directives. The arguments of
+all <tt>%code</tt> directives are concatenated.</p>
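+
+<p>For example, a self-contained program might be structured like this
+(a hypothetical sketch; the tokenizer and token feed are assumed to be
+supplied elsewhere):</p>
+<pre>
+   %include { /* declarations needed by the actions go here */ }
+   %code {
+     int main(void){
+       /* create the parser, feed it tokens, then free it */
+       return 0;
+     }
+   }
+</pre>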
+
+<a id='default_destructor'></a>
+<h4>4.4.2 The <tt>%default_destructor</tt> directive</h4>
+
+<p>The <tt>%default_destructor</tt> directive specifies a destructor to
+use for non-terminals that do not have their own destructor
+specified by a separate <tt>%destructor</tt> directive. See the documentation
+on the <tt><a href='#destructor'>%destructor</a></tt> directive below for
+additional information.</p>
+
+<p>In some grammars, many different non-terminal symbols have the
+same data type and hence the same destructor. This directive is
+a convenient way to specify the same destructor for all those
+non-terminals using a single statement.</p>
+
+<a id='default_type'></a>
+<h4>4.4.3 The <tt>%default_type</tt> directive</h4>
+
+<p>The <tt>%default_type</tt> directive specifies the data type of non-terminal
+symbols that do not have their own data type defined using a separate
+<tt><a href='#ptype'>%type</a></tt> directive.</p>
+
+<a id='destructor'></a>
+<h4>4.4.4 The <tt>%destructor</tt> directive</h4>
+
+<p>The <tt>%destructor</tt> directive is used to specify a destructor for
+a non-terminal symbol.
+(See also the <tt><a href='#token_destructor'>%token_destructor</a></tt>
+directive which is used to specify a destructor for terminal symbols.)</p>
+
+<p>A non-terminal's destructor is called to dispose of the
+non-terminal's value whenever the non-terminal is popped from
+the stack. This includes all of the following circumstances:</p>
+<ul>
+<li> When a rule reduces and the value of a non-terminal on
+ the right-hand side is not linked to C code.
+<li> When the stack is popped during error processing.
+<li> When the ParseFree() function runs.
+</ul>
+<p>The destructor can do whatever it wants with the value of
+the non-terminal, but its design is to deallocate memory
+or other resources held by that non-terminal.</p>
+
+<p>Consider an example:</p>
+<pre>
+ %type nt {void*}
+ %destructor nt { free($$); }
+ nt(A) ::= ID NUM. { A = malloc( 100 ); }
+</pre>
+<p>This example is a bit contrived, but it serves to illustrate how
+destructors work. The example shows a non-terminal named
+"nt" that holds values of type "void*". When the rule for
+an "nt" reduces, it sets the value of the non-terminal to
+space obtained from malloc(). Later, when the nt non-terminal
+is popped from the stack, the destructor will fire and call
+free() on this malloced space, thus avoiding a memory leak.
+(Note that the symbol "$$" in the destructor code is replaced
+by the value of the non-terminal.)</p>
+
+<p>It is important to note that the value of a non-terminal is passed
+to the destructor whenever the non-terminal is removed from the
+stack, unless the non-terminal is used in a C-code action. If
+the non-terminal is used by C-code, then it is assumed that the
+C-code will take care of destroying it.
+More commonly, the value is used to build some
+larger structure, and we don't want to destroy it, which is why
+the destructor is not called in this circumstance.</p>
+
+<p>Destructors help avoid memory leaks by automatically freeing
+allocated objects when they go out of scope.
+To do the same using yacc or bison is much more difficult.</p>
+
+<a id='extraarg'></a>
+<h4>4.4.5 The <tt>%extra_argument</tt> directive</h4>
+
+<p>The <tt>%extra_argument</tt> directive instructs Lemon to add a 4th parameter
+to the parameter list of the Parse() function it generates. Lemon
+doesn't do anything itself with this extra argument, but it does
+make the argument available to C-code action routines, destructors,
+and so forth. For example, if the grammar file contains:</p>
+
+<pre>
+ %extra_argument { MyStruct *pAbc }
+</pre>
+
+<p>Then the Parse() function generated will have a 4th parameter
+of type "MyStruct*" and all action routines will have access to
+a variable named "pAbc" that is the value of the 4th parameter
+in the most recent call to Parse().</p>
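+<p>A sketch of how the extra argument might flow through a grammar and its
+call site (the struct name, token type, and makeAddNode() helper are
+illustrative, not part of Lemon itself):</p>
+
+<pre>
+  %extra_argument { MyStruct *pAbc }
+  %token_type {Token*}
+  expr(A) ::= expr(B) PLUS expr(C). {
+     A = makeAddNode(pAbc, B, C);   /* pAbc is visible in every action */
+  }
+</pre>
+
+<p>The caller would then invoke
+<tt>Parse(pParser, tokenCode, pTokenValue, pAbc)</tt>, supplying the extra
+argument as the 4th parameter on each call.</p>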
+
+<p>The <tt>%extra_context</tt> directive works the same except that it
+is passed in on the ParseAlloc() or ParseInit() routines instead of
+on Parse().</p>
+
+<a id='extractx'></a>
+<h4>4.4.6 The <tt>%extra_context</tt> directive</h4>
+
+<p>The <tt>%extra_context</tt> directive instructs Lemon to add a 2nd parameter
+to the parameter list of the ParseAlloc() and ParseInit() functions. Lemon
+doesn't do anything itself with this extra argument, but it does
+store the value and make it available to C-code action routines, destructors,
+and so forth. For example, if the grammar file contains:</p>
+
+<pre>
+ %extra_context { MyStruct *pAbc }
+</pre>
+
+<p>Then the ParseAlloc() and ParseInit() functions will have a 2nd parameter
+of type "MyStruct*" and all action routines will have access to
+a variable named "pAbc" that is the value of that 2nd parameter.</p>
+
+<p>The <tt>%extra_argument</tt> directive works the same except that it
+is passed in on the Parse() routine instead of on ParseAlloc()/ParseInit().</p>
+
+<a id='pfallback'></a>
+<h4>4.4.7 The <tt>%fallback</tt> directive</h4>
+
+<p>The <tt>%fallback</tt> directive specifies an alternative meaning for one
+or more tokens. The alternative meaning is tried if the original token
+would have generated a syntax error.</p>
+
+<p>The <tt>%fallback</tt> directive was added to support robust parsing of SQL
+syntax in <a href='https://www.sqlite.org/'>SQLite</a>.
+The SQL language contains a large assortment of keywords, each of which
+appears as a different token to the language parser. SQL contains so
+many keywords that it can be difficult for programmers to keep up with
+them all. Programmers will, therefore, sometimes mistakenly use an
+obscure language keyword for an identifier. The <tt>%fallback</tt> directive
+provides a mechanism to tell the parser: "If you are unable to parse
+this keyword, try treating it as an identifier instead."</p>
+
+<p>The syntax of <tt>%fallback</tt> is as follows:</p>
+
+<blockquote>
+<tt>%fallback</tt> <i>ID</i> <i>TOKEN...</i> <b>.</b>
+</blockquote>
+
+<p>In words, the <tt>%fallback</tt> directive is followed by a list of token
+names terminated by a period.
+The first token name is the fallback token &mdash; the
+token to which all the other tokens fall back. The second and subsequent
+arguments are tokens which fall back to the token identified by the first
+argument.</p>
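+<p>For example, a grammar might allow a couple of its keywords to double as
+ordinary identifiers (the keyword names here are illustrative):</p>
+
+<pre>
+  // If EXPLAIN or TEMP cannot be shifted as keywords at the current
+  // point in the parse, retry each of them as a plain ID token:
+  %fallback ID EXPLAIN TEMP.
+</pre>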
+
+<a id='pifdef'></a>
+<h4>4.4.8 The <tt>%if</tt> directive and its friends</h4>
+
+<p>The <tt>%if</tt>, <tt>%ifdef</tt>, <tt>%ifndef</tt>, <tt>%else</tt>,
+and <tt>%endif</tt> directives
+are similar to #if, #ifdef, #ifndef, #else, and #endif in the C-preprocessor,
+just not as general.
+Each of these directives must begin at the left margin. No whitespace
+is allowed between the "%" and the directive name.</p>
+
+<p>Grammar text in between "<tt>%ifdef MACRO</tt>" and the next nested
+"<tt>%endif</tt>" is
+ignored unless the "-DMACRO" command-line option is used. Grammar text
+between "<tt>%ifndef MACRO</tt>" and the next nested "<tt>%endif</tt>" is
+included except when the "-DMACRO" command-line option is used.</p>
+
+<p>The text in between "<tt>%if</tt> <i>CONDITIONAL</i>" and its
+corresponding <tt>%endif</tt> is included only if <i>CONDITIONAL</i>
+is true. The <i>CONDITIONAL</i> is one or more macro names, optionally connected
+using the "||" and "&amp;&amp;" binary operators, the "!" unary operator,
+and grouped using balanced parentheses. Each term is true if the
+corresponding macro exists, and false if it does not exist.</p>
+
+<p>An optional "<tt>%else</tt>" directive can occur anywhere in between a
+<tt>%ifdef</tt>, <tt>%ifndef</tt>, or <tt>%if</tt> directive and
+its corresponding <tt>%endif</tt>.</p>
+
+<p>Note that the argument to <tt>%ifdef</tt> and <tt>%ifndef</tt> is
+intended to be a single preprocessor symbol name, not a general expression.
+Use the "<tt>%if</tt>" directive for general expressions.</p>
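+<p>For example (the macro names are illustrative):</p>
+
+<pre>
+  %ifdef EXTENDED
+  cmd ::= EXPLAIN cmd.       // included only when -DEXTENDED is given
+  %endif
+
+  %if EXTENDED && !OMIT_VACUUM
+  cmd ::= VACUUM.            // included only if the condition is true
+  %endif
+</pre>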
+
+<a id='pinclude'></a>
+<h4>4.4.9 The <tt>%include</tt> directive</h4>
+
+<p>The <tt>%include</tt> directive specifies C code that is included at the
+top of the generated parser. You can include any text you want &mdash;
+the Lemon parser generator copies it blindly. If you have multiple
+<tt>%include</tt> directives in your grammar file, their values are concatenated
+so that all <tt>%include</tt> code ultimately appears near the top of the
+generated parser, in the same order as it appeared in the grammar.</p>
+
+<p>The <tt>%include</tt> directive is very handy for getting some extra #include
+preprocessor statements at the beginning of the generated parser.
+For example:</p>
+
+<pre>
+ %include {#include &lt;unistd.h&gt;}
+</pre>
+
+<p>This might be needed, for example, if some of the C actions in the
+grammar call functions that are prototyped in unistd.h.</p>
+
+<p>Use the <tt><a href="#pcode">%code</a></tt> directive to add code to
+the end of the generated parser.</p>
+
+<a id='pleft'></a>
+<h4>4.4.10 The <tt>%left</tt> directive</h4>
+
+<p>The <tt>%left</tt> directive is used (along with the
+<tt><a href='#pright'>%right</a></tt> and
+<tt><a href='#pnonassoc'>%nonassoc</a></tt> directives) to declare
+precedences of terminal symbols.
+Every terminal symbol whose name appears after
+a <tt>%left</tt> directive but before the next period (".") is
+given the same left-associative precedence value. Subsequent
+<tt>%left</tt> directives have higher precedence. For example:</p>
+
+<pre>
+ %left AND.
+ %left OR.
+ %nonassoc EQ NE GT GE LT LE.
+ %left PLUS MINUS.
+ %left TIMES DIVIDE MOD.
+ %right EXP NOT.
+</pre>
+
+<p>Note the period that terminates each <tt>%left</tt>,
+<tt>%right</tt> or <tt>%nonassoc</tt>
+directive.</p>
+
+<p>LALR(1) grammars can get into a situation where they require
+a large amount of stack space if you make heavy use of right-associative
+operators. For this reason, it is recommended that you use <tt>%left</tt>
+rather than <tt>%right</tt> whenever possible.</p>
+
+<a id='pname'></a>
+<h4>4.4.11 The <tt>%name</tt> directive</h4>
+
+<p>By default, the functions generated by Lemon all begin with the
+five-character string "Parse". You can change this string to something
+different using the <tt>%name</tt> directive. For instance:</p>
+
+<pre>
+ %name Abcde
+</pre>
+
+<p>Putting this directive in the grammar file will cause Lemon to generate
+functions named</p>
+<ul>
+<li> AbcdeAlloc(),
+<li> AbcdeFree(),
+<li> AbcdeTrace(), and
+<li> Abcde().
+</ul>
+<p>The <tt>%name</tt> directive allows you to generate two or more different
+parsers and link them all into the same executable.</p>
+
+<a id='pnonassoc'></a>
+<h4>4.4.12 The <tt>%nonassoc</tt> directive</h4>
+
+<p>This directive is used to assign non-associative precedence to
+one or more terminal symbols. See the section on
+<a href='#precrules'>precedence rules</a>
+or on the <tt><a href='#pleft'>%left</a></tt> directive
+for additional information.</p>
+
+<a id='parse_accept'></a>
+<h4>4.4.13 The <tt>%parse_accept</tt> directive</h4>
+
+<p>The <tt>%parse_accept</tt> directive specifies a block of C code that is
+executed whenever the parser accepts its input string. To "accept"
+an input string means that the parser was able to process all tokens
+without error.</p>
+
+<p>For example:</p>
+
+<pre>
+ %parse_accept {
+ printf("parsing complete!\n");
+ }
+</pre>
+
+<a id='parse_failure'></a>
+<h4>4.4.14 The <tt>%parse_failure</tt> directive</h4>
+
+<p>The <tt>%parse_failure</tt> directive specifies a block of C code that
+is executed whenever the parser fails to complete. This code is not
+executed until the parser has tried and failed to resolve an input
+error using its usual error recovery strategy. The routine is
+only invoked when parsing is unable to continue.</p>
+
+<pre>
+ %parse_failure {
+ fprintf(stderr,"Giving up. Parser is hopelessly lost...\n");
+ }
+</pre>
+
+<a id='pright'></a>
+<h4>4.4.15 The <tt>%right</tt> directive</h4>
+
+<p>This directive is used to assign right-associative precedence to
+one or more terminal symbols. See the section on
+<a href='#precrules'>precedence rules</a>
+or on the <a href='#pleft'>%left</a> directive for additional information.</p>
+
+<a id='stack_overflow'></a>
+<h4>4.4.16 The <tt>%stack_overflow</tt> directive</h4>
+
+<p>The <tt>%stack_overflow</tt> directive specifies a block of C code that
+is executed if the parser's internal stack ever overflows. Typically
+this just prints an error message. After a stack overflow, the parser
+will be unable to continue and must be reset.</p>
+
+<pre>
+ %stack_overflow {
+ fprintf(stderr,"Giving up. Parser stack overflow\n");
+ }
+</pre>
+
+<p>You can help prevent parser stack overflows by avoiding the use
+of right recursion and right-precedence operators in your grammar.
+Use left recursion and left-precedence operators instead to
+encourage rules to reduce sooner and keep the stack size down.
+For example, do rules like this:</p>
+<pre>
+ list ::= list element. // left-recursion. Good!
+ list ::= .
+</pre>
+<p>Not like this:</p>
+<pre>
+ list ::= element list. // right-recursion. Bad!
+ list ::= .
+</pre>
+
+<a id='stack_size'></a>
+<h4>4.4.17 The <tt>%stack_size</tt> directive</h4>
+
+<p>If stack overflow is a problem and you can't resolve the trouble
+by using left-recursion, then you might want to increase the size
+of the parser's stack using this directive. Put a positive integer
+after the <tt>%stack_size</tt> directive and Lemon will generate a parser
+with a stack of the requested size. The default value is 100.</p>
+
+<pre>
+ %stack_size 2000
+</pre>
+
+<a id='start_symbol'></a>
+<h4>4.4.18 The <tt>%start_symbol</tt> directive</h4>
+
+<p>By default, the start symbol for the grammar that Lemon generates
+is the first non-terminal that appears in the grammar file. But you
+can choose a different start symbol using the
+<tt>%start_symbol</tt> directive.</p>
+
+<pre>
+ %start_symbol prog
+</pre>
+
+<a id='syntax_error'></a>
+<h4>4.4.19 The <tt>%syntax_error</tt> directive</h4>
+
+<p>See <a href='#errors'>Error Processing</a>.</p>
+
+<a id='token'></a>
+<h4>4.4.20 The <tt>%token</tt> directive</h4>
+
+<p>Tokens are normally created automatically, the first time they are used.
+Any identifier that begins with an upper-case letter is a token.</p>
+
+<p>Sometimes it is useful to declare tokens in advance, however. The
+integer values assigned to each token are determined by the order in which
+the tokens are seen. So by declaring tokens in advance, it is possible to
+cause some tokens to have low-numbered values, which might be desirable in
+some grammars, or to have sequential values assigned to a sequence of
+related tokens. For this reason, the %token directive is provided to
+declare tokens in advance. The syntax is as follows:</p>
+
+<blockquote>
+<tt>%token</tt> <i>TOKEN</i> <i>TOKEN...</i> <b>.</b>
+</blockquote>
+
+<p>The %token directive is followed by zero or more token symbols and
+terminated by a single ".". Each token named is created if it does not
+already exist. Tokens are created in order.</p>
+
+
+<a id='token_class'></a>
+<h4>4.4.21 The <tt>%token_class</tt> directive</h4>
+
+<p>Undocumented. Appears to be related to the MULTITERMINAL concept.
+<a href='http://sqlite.org/src/fdiff?v1=796930d5fc2036c7&v2=624b24c5dc048e09&sbs=0'>Implementation</a>.</p>
+
+<a id='token_destructor'></a>
+<h4>4.4.22 The <tt>%token_destructor</tt> directive</h4>
+
+<p>The <tt>%destructor</tt> directive assigns a destructor to a non-terminal
+symbol. (See the description of the
+<tt><a href='#destructor'>%destructor</a></tt> directive above.)
+The <tt>%token_destructor</tt> directive does the same thing
+for all terminal symbols.</p>
+
+<p>Unlike non-terminal symbols, which may each have a different data type
+for their values, terminals all use the same data type (defined by
+the <tt><a href='#token_type'>%token_type</a></tt> directive)
+and so they use a common destructor.
+Other than that, the token destructor works just like the non-terminal
+destructors.</p>
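+<p>For example (the freeToken() helper is illustrative, not part of Lemon):</p>
+
+<pre>
+  %token_type {Token*}
+  %token_destructor { freeToken($$); }   // runs for any discarded token
+</pre>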
+
+<a id='token_prefix'></a>
+<h4>4.4.23 The <tt>%token_prefix</tt> directive</h4>
+
+<p>Lemon generates #defines that assign small integer constants
+to each terminal symbol in the grammar. If desired, Lemon will
+add a prefix specified by this directive
+to each of the #defines it generates.</p>
+
+<p>So if the default output of Lemon looked like this:</p>
+<pre>
+ #define AND 1
+ #define MINUS 2
+ #define OR 3
+ #define PLUS 4
+</pre>
+<p>You can insert a statement into the grammar like this:</p>
+<pre>
+ %token_prefix TOKEN_
+</pre>
+<p>to cause Lemon to produce these symbols instead:</p>
+<pre>
+ #define TOKEN_AND 1
+ #define TOKEN_MINUS 2
+ #define TOKEN_OR 3
+ #define TOKEN_PLUS 4
+</pre>
+
+<a id='token_type'></a><a id='ptype'></a>
+<h4>4.4.24 The <tt>%token_type</tt> and <tt>%type</tt> directives</h4>
+
+<p>These directives are used to specify the data types for values
+on the parser's stack associated with terminal and non-terminal
+symbols. The values of all terminal symbols must be of the same
+type. This turns out to be the same data type as the 3rd parameter
+to the Parse() function generated by Lemon. Typically, you will
+make the value of a terminal symbol be a pointer to some kind of
+token structure. Like this:</p>
+
+<pre>
+ %token_type {Token*}
+</pre>
+
+<p>If the data type of terminals is not specified, the default value
+is "void*".</p>
+
+<p>Non-terminal symbols can each have their own data types. Typically
+the data type of a non-terminal is a pointer to the root of a parse tree
+structure that contains all information about that non-terminal.
+For example:</p>
+
+<pre>
+ %type expr {Expr*}
+</pre>
+
+<p>Each entry on the parser's stack is actually a union containing
+instances of all data types for every non-terminal and terminal symbol.
+Lemon will automatically use the correct element of this union depending
+on what the corresponding non-terminal or terminal symbol is. But
+the grammar designer should keep in mind that the size of the union
+will be the size of its largest element. So if you have a single
+non-terminal whose data type requires 1K of storage, then your 100
+entry parser stack will require 100K of heap space. If you are willing
+and able to pay that price, fine. You just need to know.</p>
+
+<a id='pwildcard'></a>
+<h4>4.4.25 The <tt>%wildcard</tt> directive</h4>
+
+<p>The <tt>%wildcard</tt> directive is followed by a single token name and a
+period. This directive specifies that the identified token should
+match any input token.</p>
+
+<p>When the generated parser has the choice of matching an input against
+the wildcard token and some other token, the other token is always used.
+The wildcard token is only matched if there are no alternatives.</p>
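+<p>For example (the token and rule names are illustrative):</p>
+
+<pre>
+  %wildcard ANY.
+  // Matches whatever single token appears next, but only when no
+  // other rule can shift that token:
+  skip ::= ANY.
+</pre>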
+
+<a id='errors'></a>
+<h2>5.0 Error Processing</h2>
+
+<p>After extensive experimentation over several years, it has been
+discovered that the error recovery strategy used by yacc is about
+as good as it gets. And so that is what Lemon uses.</p>
+
+<p>When a Lemon-generated parser encounters a syntax error, it
+first invokes the code specified by the <tt>%syntax_error</tt> directive, if
+any. It then enters its error recovery strategy. The error recovery
+strategy is to begin popping the parser's stack until it enters a
+state where it is permitted to shift a special non-terminal symbol
+named "error". It then shifts this non-terminal and continues
+parsing. The <tt>%syntax_error</tt> routine will not be called again
+until at least three new tokens have been successfully shifted.</p>
+
+<p>If the parser pops its stack until the stack is empty, and it still
+is unable to shift the error symbol, then the
+<tt><a href='#parse_failure'>%parse_failure</a></tt> routine
+is invoked and the parser resets itself to its start state, ready
+to begin parsing a new file. This is what will happen at the very
+first syntax error, of course, if there are no instances of the
+"error" non-terminal in your grammar.</p>
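+<p>A grammar can arrange for more graceful recovery by mentioning the
+"error" symbol explicitly. A sketch (the rule names are illustrative):</p>
+
+<pre>
+  cmdlist ::= cmdlist cmd SEMI.
+  cmdlist ::= .
+  // After a syntax error inside a command, input tokens are discarded
+  // until the parse can resynchronize at the next semicolon:
+  cmd ::= error.
+</pre>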
+
+<a id='history'></a>
+<h2>6.0 History of Lemon</h2>
+
+<p>Lemon was originally written by Richard Hipp sometime in the late
+1980s on a Sun4 Workstation using K&amp;R C.
+There was a companion LL(1) parser generator program named "Lime".
+The Lime source code has been lost.</p>
+
+<p>The lemon.c source file was originally many separate files that were
+compiled together to generate the "lemon" executable. Sometime in the
+1990s, the individual source code files were combined together into
+the current single large "lemon.c" source file. You can still see traces
+of original filenames in the code.</p>
+
+<p>Since 2001, Lemon has been part of the
+<a href="https://sqlite.org/">SQLite project</a> and the source code
+to Lemon has been managed as a part of the
+<a href="https://sqlite.org/src">SQLite source tree</a> in the following
+files:</p>
+
+<ul>
+<li> <a href="https://sqlite.org/src/file/tool/lemon.c">tool/lemon.c</a>
+<li> <a href="https://sqlite.org/src/file/tool/lempar.c">tool/lempar.c</a>
+<li> <a href="https://sqlite.org/src/file/doc/lemon.html">doc/lemon.html</a>
+</ul>
+
+<a id="copyright"></a>
+<h2>7.0 Copyright</h2>
+
+<p>All of the source code to Lemon, including the template parser file
+"lempar.c" and this documentation file ("lemon.html") are in the public
+domain. You can use the code for any purpose and without attribution.</p>
+
+<p>The code comes with no warranty. If it breaks, you get to keep both
+pieces.</p>
+
+</body>
+</html>
diff --git a/doc/pager-invariants.txt b/doc/pager-invariants.txt
new file mode 100644
index 0000000..44444da
--- /dev/null
+++ b/doc/pager-invariants.txt
@@ -0,0 +1,76 @@
+ *** Throughout this document, a page is deemed to have been synced
+ automatically as soon as it is written when PRAGMA synchronous=OFF.
+ Otherwise, the page is not synced until the xSync method of the VFS
+ is called successfully on the file containing the page.
+
+ *** Definition: A page of the database file is said to be "overwriteable" if
+ one or more of the following are true about the page:
+
+ (a) The original content of the page as it was at the beginning of
+ the transaction has been written into the rollback journal and
+ synced.
+
+ (b) The page was a freelist leaf page at the start of the transaction.
+
+ (c) The page number is greater than the largest page that existed in
+ the database file at the start of the transaction.
+
+ (1) A page of the database file is never overwritten unless one of the
+ following are true:
+
+ (a) The page and all other pages on the same sector are overwriteable.
+
+ (b) The atomic page write optimization is enabled, and the entire
+ transaction other than the update of the transaction sequence
+ number consists of a single page change.
+
+ (2) The content of a page written into the rollback journal exactly matches
+ both the content in the database when the rollback journal was written
+ and the content in the database at the beginning of the current
+ transaction.
+
+ (3) Writes to the database file are an integer multiple of the page size
+ in length and are aligned to a page boundary.
+
+ (4) Reads from the database file are either aligned on a page boundary and
+ an integer multiple of the page size in length or are taken from the
+ first 100 bytes of the database file.
+
+ (5) All writes to the database file are synced prior to the rollback journal
+ being deleted, truncated, or zeroed.
+
+ (6) If a master journal file is used, then all writes to the database file
+ are synced prior to the master journal being deleted.
+
+ *** Definition: Two databases (or the same database at two points in time)
+ are said to be "logically equivalent" if they give the same answer to
+ all queries. Note in particular the content of freelist leaf
+ pages can be changed arbitrarily without affecting the logical equivalence
+ of the database.
+
+ (7) At any time, if any subset, including the empty set and the total set,
+ of the unsynced changes to a rollback journal are removed and the
+ journal is rolled back, the resulting database file will be logically
+ equivalent to the database file at the beginning of the transaction.
+
+ (8) When a transaction is rolled back, the xTruncate method of the VFS
+ is called to restore the database file to the same size it was at
+ the beginning of the transaction. (In some VFSes, the xTruncate
+ method is a no-op, but that does not change the fact that SQLite will
+ invoke it.)
+
+ (9) Whenever the database file is modified, at least one bit in the range
+ of bytes from 24 through 39 inclusive will be changed prior to releasing
+ the EXCLUSIVE lock.
+
+(10) The pattern of bits in bytes 24 through 39 shall not repeat in less
+ than one billion transactions.
+
+(11) A database file is well-formed at the beginning and at the conclusion
+ of every transaction.
+
+(12) An EXCLUSIVE lock must be held on the database file before making
+ any changes to the database file.
+
+(13) A SHARED lock must be held on the database file before reading any
+ content out of the database file.
diff --git a/doc/testrunner.md b/doc/testrunner.md
new file mode 100644
index 0000000..d420076
--- /dev/null
+++ b/doc/testrunner.md
@@ -0,0 +1,344 @@
+
+
+# The testrunner.tcl Script
+
+<ul type=none>
+ <li> 1. <a href=#overview>Overview</a>
+ <li> 2. <a href=#binary_tests>Binary Tests</a>
+<ul type=none>
+ <li> 2.1. <a href=#organization_tests>Organization of Tcl Tests</a>
+ <li> 2.2. <a href=#run_tests>Commands to Run Tests</a>
+ <li> 2.3. <a href=#binary_test_failures>Investigating Binary Test Failures</a>
+</ul>
+ <li> 3. <a href=#source_code_tests>Source Code Tests</a>
+<ul type=none>
+ <li> 3.1. <a href=#commands_to_run_tests>Commands to Run SQLite Tests</a>
+ <li> 3.2. <a href=#zipvfs_tests>Running ZipVFS Tests</a>
+ <li> 3.3. <a href=#source_code_test_failures>Investigating Source Code Test Failures</a>
+</ul>
+ <li> 4. <a href=#testrunner_options>Extra testrunner.tcl Options</a>
+ <li> 5. <a href=#cpu_cores>Controlling CPU Core Utilization</a>
+</ul>
+
+<a name=overview></a>
+# 1. Overview
+
+testrunner.tcl is a Tcl script used to run multiple SQLite tests using
+multiple jobs. It supports the following types of tests:
+
+ * Tcl test scripts.
+
+ * Tests run with [make] commands. Specifically, at time of writing,
+ [make fuzztest], [make mptest], [make sourcetest] and [make threadtest].
+
+testrunner.tcl pipes the output of all tests and builds run into log file
+**testrunner.log**, created in the current working directory. Searching this file for
+"failed" is a good way to find the output of a failed test.
+
+testrunner.tcl also populates SQLite database **testrunner.db**. This database
+contains details of all tests run, running and to be run. A useful query
+might be:
+
+```
+ SELECT * FROM script WHERE state='failed'
+```
+
+Running the command:
+
+```
+ ./testfixture $(TESTDIR)/testrunner.tcl status
+```
+
+in the directory containing the testrunner.db database runs various queries
+to produce a succinct report on the state of a running testrunner.tcl script.
+Running:
+
+```
+ watch ./testfixture $(TESTDIR)/testrunner.tcl status
+```
+
+in another terminal is a good way to keep an eye on a long running test.
+
+Sometimes testrunner.tcl uses the [testfixture] binary that it is run with
+to run tests (see "Binary Tests" below). Sometimes it builds testfixture and
+other binaries in specific configurations to test (see "Source Code Tests" below).
+
+<a name=binary_tests></a>
+# 2. Binary Tests
+
+The commands described in this section all run various combinations of the Tcl
+test scripts using the [testfixture] binary used to run the testrunner.tcl
+script (i.e. they do not invoke the compiler to build new binaries, or the
+[make] command to run tests that are not Tcl scripts). The procedure to run
+these tests is therefore:
+
+ 1. Build the "testfixture" (or "testfixture.exe" for windows) binary using
+ whatever method seems convenient.
+
+ 2. Test the binary built in step 1 by running testrunner.tcl with it,
+ perhaps with various options.
+
+The following sub-sections describe the various options that can be
+passed to testrunner.tcl to test binary testfixture builds.
+
+<a name=organization_tests></a>
+## 2.1. Organization of Tcl Tests
+
+Tcl tests are stored in files that match the pattern *\*.test*. They are
+found in both the $TOP/test/ directory, and in the various sub-directories
+of the $TOP/ext/ directory of the source tree. Not all *\*.test* files
+contain Tcl tests - a handful are Tcl scripts designed to invoke other
+*\*.test* files.
+
+The **veryquick** set of tests is a subset of all Tcl test scripts in the
+source tree. It includes most tests, but excludes some that are very slow.
+Almost all fault-injection tests (those that test the response of the library
+to OOM or IO errors) are excluded. It is defined in source file
+*test/permutations.test*.
+
+The **full** set of tests includes all Tcl test scripts in the source tree.
+To run a "full" test is to run all Tcl test scripts that can be found in the
+source tree.
+
+File *permutations.test* defines various test "permutations". A permutation
+consists of:
+
+ * A subset of Tcl test scripts, and
+
+ * Runtime configuration to apply before running each test script
+ (e.g. enabling auto-vacuum, or disabling lookaside).
+
+Running **all** tests is to run all tests in the full test set, plus a dozen
+or so permutations. The specific permutations that are run as part of "all"
+are defined in file *testrunner_data.tcl*.
+
+<a name=run_tests></a>
+## 2.2. Commands to Run Tests
+
+To run the "veryquick" test set, use either of the following:
+
+```
+ ./testfixture $TESTDIR/testrunner.tcl
+ ./testfixture $TESTDIR/testrunner.tcl veryquick
+```
+
+To run the "full" test suite:
+
+```
+ ./testfixture $TESTDIR/testrunner.tcl full
+```
+
+To run the subset of the "full" test suite for which the test file name matches
+a specified pattern (e.g. all tests that start with "fts5"), either of:
+
+```
+ ./testfixture $TESTDIR/testrunner.tcl fts5%
+ ./testfixture $TESTDIR/testrunner.tcl 'fts5*'
+```
+
+Strictly speaking, for a test to be run the pattern must match the script
+filename, not including the directory, using the rules of Tcl's
+\[string match\] command, except that before the matching is done, any "%"
+characters in the pattern are transformed to "\*".
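+The effect of the "%" rewrite can be sketched with ordinary shell globbing,
+which uses the same "\*" wildcard as Tcl's \[string match\]:
+
+```shell
+# "fts5%" with "%" rewritten to "*" yields the glob pattern "fts5*":
+pattern='fts5*'
+case "fts5corrupt.test" in
+  $pattern) echo "matches" ;;
+  *)        echo "no match" ;;
+esac
+```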
+
+
+To run "all" tests (full + permutations):
+
+```
+ ./testfixture $TESTDIR/testrunner.tcl all
+```
+
+<a name=binary_test_failures></a>
+## 2.3. Investigating Binary Test Failures
+
+If a test fails, testrunner.tcl reports the name of the Tcl test script and, if
+applicable, the name of the permutation, to stdout. This information can also
+be retrieved from either *testrunner.log* or *testrunner.db*.
+
+If there is no permutation, the individual test script may be run with:
+
+```
+ ./testfixture $PATH_TO_SCRIPT
+```
+
+Or, if the failure occurred as part of a permutation:
+
+```
+ ./testfixture $TESTDIR/testrunner.tcl $PERMUTATION $PATH_TO_SCRIPT
+```
+
+TODO: An example instead of "$PERMUTATION" and $PATH\_TO\_SCRIPT?
+
+<a name=source_code_tests></a>
+# 3. Source Code Tests
+
+The commands described in this section invoke the C compiler to build
+binaries from the source tree, then use those binaries to run Tcl and
+other tests. The advantages of this are that:
+
+ * it is possible to test multiple build configurations with a single
+ command, and
+
+ * it ensures that tests are always run using binaries created with the
+ same set of compiler options.
+
+The testrunner.tcl commands described in this section may be run using
+either a *testfixture* (or testfixture.exe) build, or any other Tcl
+shell that supports SQLite 3.31.1 or newer via "package require sqlite3".
+
+TODO: ./configure + Makefile.msc build systems.
+
+<a name=commands_to_run_tests></a>
+## 3.1. Commands to Run SQLite Tests
+
+The **mdevtest** command is equivalent to running the veryquick tests and
+the [make fuzztest] target once for each of two --enable-all builds - one
+with debugging enabled and one without:
+
+```
+ tclsh $TESTDIR/testrunner.tcl mdevtest
+```
+
+In other words, it is equivalent to running:
+
+```
+ $TOP/configure --enable-all --enable-debug
+ make fuzztest
+ make testfixture
+ ./testfixture $TOP/test/testrunner.tcl veryquick
+
+ # Then, after removing files created by the tests above:
+ $TOP/configure --enable-all OPTS="-O0"
+ make fuzztest
+ make testfixture
+ ./testfixture $TOP/test/testrunner.tcl veryquick
+```
+
+The **sdevtest** command is identical to the mdevtest command, except that the
+second of the two builds is a sanitizer build. Specifically, this means that
+OPTS="-fsanitize=address,undefined" is specified instead of OPTS="-O0":
+
+```
+ tclsh $TESTDIR/testrunner.tcl sdevtest
+```
+
+The **release** command runs lots of tests under lots of builds. It runs
+different combinations of builds and tests depending on whether it is run
+on Linux, Windows or OSX. Refer to *testrunner\_data.tcl* for the details
+of the specific tests run.
+
+```
+ tclsh $TESTDIR/testrunner.tcl release
+```
+
+As with <a href=#binary_tests>binary tests</a>, one or more patterns
+may be appended to any of the above commands (mdevtest, sdevtest or release).
+In that case only Tcl tests (no fuzz or other tests) that match the specified
+pattern are run. For example, to run just the Tcl rtree tests in all
+builds and configurations supported by "release":
+
+```
+ tclsh $TESTDIR/testrunner.tcl release rtree%
+```
+
+<a name=zipvfs_tests></a>
+## 3.2. Running ZipVFS Tests
+
+testrunner.tcl can build a zipvfs-enabled testfixture and use it to run
+tests from the Zipvfs project with the following command:
+
+```
+ tclsh $TESTDIR/testrunner.tcl --zipvfs $PATH_TO_ZIPVFS
+```
+
+This can be combined with any of "mdevtest", "sdevtest" or "release" to
+test both SQLite and Zipvfs with a single command:
+
+```
+ tclsh $TESTDIR/testrunner.tcl --zipvfs $PATH_TO_ZIPVFS mdevtest
+```
+
+<a name=source_code_test_failures></a>
+## 3.3. Investigating Source Code Test Failures
+
+Investigating a test failure that occurs during source code testing is a
+two-step process:
+
+ 1. Recreating the build configuration in which the test failed, and
+
+ 2. Re-running the actual test.
+
+To recreate a build configuration, use the testrunner.tcl **script** command
+to create a build script. A build script is a bash script on Linux or OSX, or
+a DOS \*.bat file on Windows. For example:
+
+```
+ # Create a script that recreates build configuration "Device-One" on
+ # Linux or OSX:
+ tclsh $TESTDIR/testrunner.tcl script Device-One > make.sh
+
+ # Create a script that recreates build configuration "Have-Not" on Windows:
+ tclsh $TESTDIR/testrunner.tcl script Have-Not > make.bat
+```
+
+The generated bash or \*.bat file script accepts a single argument - a makefile
+target to build. This may be used either to run a [make] test directly,
+or else to build a testfixture (or testfixture.exe) binary with which to
+run a Tcl test script, as <a href=#binary_test_failures>described above</a>.
+
+<a name=testrunner_options></a>
+# 4. Extra testrunner.tcl Options
+
+The testrunner.tcl script options in this section may be used with both source
+code and binary tests.
+
+The **--buildonly** option instructs testrunner.tcl just to build the binaries
+required by a test, not to run any actual tests. For example:
+
+```
+ # Build binaries required by release test.
+  tclsh $TESTDIR/testrunner.tcl --buildonly release
+```
+
+The **--dryrun** option prevents testrunner.tcl from building any binaries
+or running any tests. Instead, it just writes the shell commands that it
+would normally execute into the testrunner.log file. Example:
+
+```
+  # Log the shell commands that make up the mdevtest test.
+  tclsh $TESTDIR/testrunner.tcl --dryrun mdevtest
+```
+
+<a name=cpu_cores></a>
+# 5. Controlling CPU Core Utilization
+
+When running either binary or source code tests, testrunner.tcl reports the
+number of jobs it intends to use to stdout. e.g.
+
+```
+ $ ./testfixture $TESTDIR/testrunner.tcl
+ splitting work across 16 jobs
+ ... more output ...
+```
+
+By default, testrunner.tcl attempts to set the number of jobs to the number
+of real cores on the machine. This can be overridden using the "--jobs" (or -j)
+switch:
+
+```
+ $ ./testfixture $TESTDIR/testrunner.tcl --jobs 8
+ splitting work across 8 jobs
+ ... more output ...
+```
+
+The number of jobs may also be changed while an instance of testrunner.tcl is
+running by executing the following command from the directory containing the
+testrunner.log and testrunner.db files:
+
+```
+ $ ./testfixture $TESTDIR/testrunner.tcl njob $NEW_NUMBER_OF_JOBS
+```
+
+
+
diff --git a/doc/trusted-schema.md b/doc/trusted-schema.md
new file mode 100644
index 0000000..d431fd4
--- /dev/null
+++ b/doc/trusted-schema.md
@@ -0,0 +1,142 @@
+# The new-security-options branch
+
+## The problem that the [new-security-options](/timeline?r=new-security-options) branch tries to solve
+
+An attacker might modify the schema of an SQLite database by adding
+structures that cause code to run when some other application opens and
+reads the database. For example, the attacker might replace a table
+definition with a view. Or the attacker might add triggers to tables
+or views, or add new CHECK constraints or generated columns or indexes
+with expressions in the index list or in the WHERE clause. If the
+added features invoke SQL functions or virtual tables with side effects,
+that might cause harm to the system if run by a high-privilege victim.
+Or, the added features might exfiltrate information if the database is
+read by a high-privilege victim.
+
+The changes in this branch strive to make it easier for high-privilege
+applications to safely read SQLite database files that might have been
+maliciously corrupted by an attacker.
+
+## Overview of changes in [new-security-options](/timeline?r=new-security-options)
+
+The basic idea is to tag every SQL function and virtual table with one
+of three risk levels:
+
+ 1. Innocuous
+ 2. Normal
+ 3. Direct-Only
+
+Innocuous functions/vtabs are safe and can be used at any time.
+Direct-only elements, in contrast, might cause side-effects and
+should only be used from top-level SQL, not from within triggers or views, nor
+in elements of the schema such as CHECK constraints, DEFAULT values,
+generated columns, index expressions, or the WHERE clause of a
+partial index, all of which are potentially under the control of an attacker.
+Normal elements behave like Innocuous if TRUSTED\_SCHEMA=on
+and like Direct-Only if TRUSTED\_SCHEMA=off.
+
+Application-defined functions and virtual tables go in as Normal unless
+the application takes deliberate steps to change the risk level.
+
+For backwards compatibility, the default is TRUSTED\_SCHEMA=on. Documentation
+will be updated to recommend applications turn TRUSTED\_SCHEMA to off.
+
+An innocuous function or virtual table is one that can only read content
+from the database file in which it resides, and can only alter the database
+in which it resides. Most SQL functions are innocuous. For example, there
+is no harm in an attacker running the abs() function.
+
+Direct-only elements are those that have side-effects reaching outside the
+database file in which they live, or that return information from outside
+of the database file.
+Examples of direct-only elements include:
+
+ 1. The fts3\_tokenizer() function
+ 2. The writefile() function
+ 3. The readfile() function
+ 4. The zipvfs virtual table
+ 5. The csv virtual table
+
+We do not want an attacker to be able to add these kinds of things to
+the database schema and possibly trick a high-privilege application
+into performing any of these actions. Therefore, functions and vtabs
+with side-effects are marked as Direct-Only.
+
+Legacy applications might add other risky functions or vtabs. Those will
+go in as "Normal" by default. For optimal security, we want those risky
+app-defined functions and vtabs to be direct-only, but making that the
+default might break some legacy applications. Hence, all app-defined
+functions and vtabs go in as Normal, but the application can switch them
+over to "Direct-Only" behavior using a single pragma.
+
+The restrictions on the use of functions and virtual tables do not apply
+to TEMP. A TEMP VIEW or a TEMP TRIGGER can use any valid SQL function
+or virtual table. The idea is that TEMP views and triggers must be
+directly created by the application and are thus under the control of the
+application. TEMP views and triggers cannot be created by an attacker who
+corrupts the schema of a persistent database file. Hence TEMP views and
+triggers are safe.
+
+## Specific changes
+
+ 1. New sqlite3\_db\_config() option SQLITE\_DBCONFIG\_TRUSTED\_SCHEMA for
+ turning TRUSTED\_SCHEMA on and off. It defaults to ON.
+
+ 2. Compile-time option -DSQLITE\_TRUSTED\_SCHEMA=0 causes the default
+ TRUSTED\_SCHEMA setting to be off.
+
+ 3. New pragma "PRAGMA trusted\_schema=(ON\|OFF);". This provides access
+     to the TRUSTED_SCHEMA setting for applications coded using scripting
+ languages or other secondary languages where they are unable to make
+ calls to sqlite3\_db\_config().
+
+ 4. New options for the "enc" parameter to sqlite3\_create\_function() and
+ its kin:
+ <ol type="a">
+ <li> _SQLITE\_INNOCUOUS_ &rarr; tags the new functions as Innocuous
+ <li> _SQLITE\_DIRECTONLY_ &rarr; tags the new functions as Direct-Only
+ </ol>
+
+ 5. New options to sqlite3\_vtab\_config():
+ <ol type="a">
+ <li> _SQLITE\_VTAB\_INNOCUOUS_ &rarr; tags the vtab as Innocuous
+ <li> _SQLITE\_VTAB\_DIRECTONLY_ &rarr; tags the vtab as Direct-Only
+ </ol>
+
+ 6. Change many of the functions and virtual tables in the SQLite source
+ tree to use one of the tags above.
+
+ 7. Enhanced PRAGMA function\_list and virtual-table "pragma\_function\_list"
+ with additional columns. The columns now are:
+ <ul>
+ <li> _name_ &rarr; Name of the function
+ <li> _builtin_ &rarr; 1 for built-in functions. 0 otherwise.
+ <li> _type_ &rarr; 's'=Scalar, 'a'=Aggregate, 'w'=Window
+ <li> _enc_ &rarr; 'utf8', 'utf16le', or 'utf16be'
+     <li> _narg_  &rarr;  number of arguments
+ <li> _flags_ &rarr; Bitmask of SQLITE\_INNOCUOUS, SQLITE\_DIRECTONLY,
+ SQLITE\_DETERMINISTIC, SQLITE\_SUBTYPE, and
+ SQLITE\_FUNC\_INTERNAL flags.
+ </ul>
+ <p>The last four columns are new.
+
+ 8. The function\_list PRAGMA now also shows all entries for each function.
+ So, for example, if a function can take either 2 or 3 arguments,
+ there are separate rows for the 2-argument and 3-argument versions of
+ the function.
+
+## Additional Notes
+
+The function_list enhancements allow the application to query the set
+of SQL functions that meet various criteria. For example, to see all
+SQL functions that are never allowed to be used in the schema or in
+triggers or views:
+
+~~~
+ SELECT DISTINCT name FROM pragma_function_list
+ WHERE (flags & 0x80000)!=0
+ ORDER BY name;
+~~~
+
+Doing the same is not possible for virtual tables, as a virtual table
+might be Innocuous, Normal, or Direct-Only depending on the arguments
+passed into the xConnect method.
diff --git a/doc/vdbesort-memory.md b/doc/vdbesort-memory.md
new file mode 100644
index 0000000..5c3dd62
--- /dev/null
+++ b/doc/vdbesort-memory.md
@@ -0,0 +1,49 @@
+
+20-11-2020
+
+# Memory Allocation In vdbesort.c
+
+Memory allocation is slightly different depending on:
+
+ * whether or not SQLITE_CONFIG_SMALL_MALLOC is set, and
+ * whether or not worker threads are enabled.
+
+## SQLITE_CONFIG_SMALL_MALLOC=0
+
+Assuming SQLITE_CONFIG_SMALL_MALLOC is not set, keys passed to the sorter are
+added to an in-memory buffer. This buffer is grown using sqlite3Realloc() as
+required until it reaches the size configured for the main pager cache using
+"PRAGMA cache_size". For example, if the user has executed "PRAGMA
+main.cache_size = -2048", then this buffer is allowed to grow up to 2MB in size.
+
+Once the buffer has grown to its threshold, keys are sorted and written to
+a temp file. If worker threads are not enabled, this is the only significant
+allocation the sorter module makes. After keys are sorted and flushed out to
+the temp file, the buffer is reused to accumulate the next batch of keys.
+
+If worker threads are available, then the buffer is passed to a worker thread
+to sort and flush once it is full, and a new buffer allocated to allow the
+main thread to continue to accumulate keys. Buffers are reused once they
+have been flushed, so in this case at most (nWorker+1) buffers are allocated
+and used, where nWorker is the number of configured worker threads.
+
+There are no other significant users of heap memory in the sorter module.
+Once sorted buffers of keys have been flushed to disk, they are read back
+either by mapping the file (via sqlite3_file.xFetch()) or else read back
+in one page at a time.
+
+All buffers are allocated by the main thread. A sorter object is associated
+with a single database connection, to which it holds a pointer.
+
+## SQLITE_CONFIG_SMALL_MALLOC=1
+
+This case is similar to the above, except that instead of accumulating
+multiple keys in a single large buffer, sqlite3VdbeSorterWrite() stores
+keys in a regular heap-memory linked list (one allocation per element).
+List elements are freed as they are flushed to disk, either by the main
+thread or by a worker thread.
+
+Each time a key is added to the sorter (and an allocation made),
+sqlite3HeapNearlyFull() is called. If it returns true, the current
+list of keys is flushed to a temporary file, even if it has not yet
+reached the size threshold.
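
The cache_size-to-threshold arithmetic described above can be sketched as
follows. The function names are illustrative, not the actual vdbesort.c
symbols; the semantics (negative cache_size means KiB, positive means pages)
follow the "PRAGMA cache_size" documentation:

```c
/* Sketch of the sorter's flush threshold, assuming SMALL_MALLOC=0. */

/* "PRAGMA main.cache_size = -2048" => threshold of 2048*1024 bytes */
int cache_size_to_bytes(int cacheSize, int pageSize){
  if( cacheSize<0 ) return -1024*cacheSize;  /* negative => KiB */
  return cacheSize*pageSize;                 /* positive => pages */
}

/* Flush (sort and write to a temp file) once the buffer is full */
int sorter_should_flush(int nInMemory, int cacheSize, int pageSize){
  return nInMemory >= cache_size_to_bytes(cacheSize, pageSize);
}
```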
diff --git a/doc/vfs-shm.txt b/doc/vfs-shm.txt
new file mode 100644
index 0000000..c1f125a
--- /dev/null
+++ b/doc/vfs-shm.txt
@@ -0,0 +1,130 @@
+The 5 states of an historical rollback lock as implemented by the
+xLock, xUnlock, and xCheckReservedLock methods of the sqlite3_io_methods
+object are:
+
+ UNLOCKED
+ SHARED
+ RESERVED
+ PENDING
+ EXCLUSIVE
+
+The wal-index file has a similar locking hierarchy implemented using
+the xShmLock method of the sqlite3_vfs object, but with 7
+states. Each connection to a wal-index file must be in one of
+the following 7 states:
+
+ UNLOCKED
+ READ
+ READ_FULL
+ WRITE
+ PENDING
+ CHECKPOINT
+ RECOVER
+
+These roughly correspond to the 5 states of a rollback lock except
+that SHARED is split out into 2 states: READ and READ_FULL and
+there is an extra RECOVER state used for wal-index reconstruction.
+
+The meanings of the various wal-index locking states are as follows:
+
+ UNLOCKED - The wal-index is not in use.
+
+ READ - Some prefix of the wal-index is being read. Additional
+ wal-index information can be appended at any time. The
+ newly appended content will be ignored by the holder of
+ the READ lock.
+
+ READ_FULL - The entire wal-index is being read. No new information
+ can be added to the wal-index. The holder of a READ_FULL
+ lock promises never to read pages from the database file
+ that are available anywhere in the wal-index.
+
+ WRITE - It is OK to append to the wal-index file and to adjust
+ the header to indicate the new "last valid frame".
+
+ PENDING - Waiting on all READ locks to clear so that a
+ CHECKPOINT lock can be acquired.
+
+ CHECKPOINT - It is OK to write any WAL data into the database file
+ and zero the last valid frame field of the wal-index
+ header. The wal-index file itself may not be changed
+ other than to zero the last valid frame field in the
+ header.
+
+ RECOVER - Held during wal-index recovery. Used to prevent a
+ race if multiple clients try to recover a wal-index at
+ the same time.
+
+
+A particular lock manager implementation may coalesce one or more of
+the wal-index locking states, though with a reduction in concurrency.
+For example, an implementation might implement only exclusive locking,
+in which case all states would be equivalent to CHECKPOINT, meaning that
+only one reader or one writer or one checkpointer could be active at a
+time. Or, an implementation might combine READ and READ_FULL into
+a single state equivalent to READ, meaning that a writer could
+coexist with a reader, but no readers or writers could coexist with a
+checkpointer.
+
+The lock manager must obey the following rules:
+
+(1) A READ cannot coexist with CHECKPOINT.
+(2) A READ_FULL cannot coexist with WRITE.
+(3) None of WRITE, PENDING, CHECKPOINT, or RECOVER can coexist.
+
+The SQLite core will obey the next set of rules. These rules are
+assertions on the behavior of the SQLite core which might be verified
+during testing using an instrumented lock manager.
+
+(5) No part of the wal-index will be read without holding either some
+ kind of SHM lock or an EXCLUSIVE lock on the original database.
+ The original database is the file named in the 2nd parameter to
+ the xShmOpen method.
+
+(6) A holder of a READ_FULL will never read any page of the database
+ file that is contained anywhere in the wal-index.
+
+(7) No part of the wal-index other than the header will be written nor
+ will the size of the wal-index grow without holding a WRITE or
+ an EXCLUSIVE on the original database file.
+
+(8) The wal-index header will not be written without holding one of
+ WRITE, CHECKPOINT, or RECOVER on the wal-index or an EXCLUSIVE on
+    the original database file.
+
+(9) A CHECKPOINT or RECOVER must be held on the wal-index, or an
+ EXCLUSIVE on the original database file, in order to reset the
+ last valid frame counter in the header of the wal-index back to zero.
+
+(10) A WRITE can only increase the last valid frame pointer in the header.
+
+The SQLite core will only ever send requests for UNLOCK, READ, WRITE,
+CHECKPOINT, or RECOVER to the lock manager. The SQLite core will never
+request a READ_FULL or PENDING lock though the lock manager may deliver
+those locking states in response to READ and CHECKPOINT requests,
+respectively, if and only if the requested READ or CHECKPOINT cannot
+be delivered.
+
+The following are the allowed lock transitions:
+
+ Original-State Request New-State
+ -------------- ---------- ----------
+(11a) UNLOCK READ READ
+(11b) UNLOCK READ READ_FULL
+(11c) UNLOCK CHECKPOINT PENDING
+(11d) UNLOCK CHECKPOINT CHECKPOINT
+(11e) READ UNLOCK UNLOCK
+(11f) READ WRITE WRITE
+(11g) READ RECOVER RECOVER
+(11h) READ_FULL UNLOCK UNLOCK
+(11i) READ_FULL WRITE WRITE
+(11j) READ_FULL RECOVER RECOVER
+(11k) WRITE READ READ
+(11l) PENDING UNLOCK UNLOCK
+(11m) PENDING CHECKPOINT CHECKPOINT
+(11n) CHECKPOINT UNLOCK UNLOCK
+(11o) RECOVER READ READ
+
+These 15 transitions are all that need to be supported. The lock
+manager implementation can assert that fact. The other 27 possible
+transitions among the 7 locking states will never occur.
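
An instrumented lock manager could encode the transition table above and
assert on every request. The following is an illustrative sketch, not code
from SQLite; each pair below is the (Original-State, New-State) column of
one of rows 11a-11o:

```c
/* Sketch: check a wal-index lock transition against rows 11a-11o. */
typedef enum {
  UNLOCK, READ, READ_FULL, WRITE, PENDING, CHECKPOINT, RECOVER
} WalLock;

static const struct { WalLock from, to; } aOk[] = {
  {UNLOCK, READ},       {UNLOCK, READ_FULL},  {UNLOCK, PENDING},
  {UNLOCK, CHECKPOINT}, {READ, UNLOCK},       {READ, WRITE},
  {READ, RECOVER},      {READ_FULL, UNLOCK},  {READ_FULL, WRITE},
  {READ_FULL, RECOVER}, {WRITE, READ},        {PENDING, UNLOCK},
  {PENDING, CHECKPOINT},{CHECKPOINT, UNLOCK}, {RECOVER, READ}
};

/* Return 1 if the transition appears in the table, 0 otherwise. */
int wal_transition_ok(WalLock from, WalLock to){
  unsigned i;
  for(i=0; i<sizeof(aOk)/sizeof(aOk[0]); i++){
    if( aOk[i].from==from && aOk[i].to==to ) return 1;
  }
  return 0;
}
```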
diff --git a/doc/wal-lock.md b/doc/wal-lock.md
new file mode 100644
index 0000000..d74bb88
--- /dev/null
+++ b/doc/wal-lock.md
@@ -0,0 +1,88 @@
+# Wal-Mode Blocking Locks
+
+On some Unix-like systems, SQLite may be configured to use POSIX blocking locks
+by:
+
+ * building the library with SQLITE\_ENABLE\_SETLK\_TIMEOUT defined, and
+ * configuring a timeout in ms using the sqlite3\_busy\_timeout() API.
+
+Blocking locks may be advantageous as (a) waiting database clients do not
+need to continuously poll the database lock, and (b) using blocking locks
+facilitates transfer of OS priority between processes when a high priority
+process is blocked by a lower priority one.
+
+Only read/write clients use blocking locks. Clients that have read-only access
+to the \*-shm file never use blocking locks.
+
+Threads or processes that access a single database at a time never deadlock as
+a result of blocking database locks. But it is of course possible for threads
+that lock multiple databases simultaneously to do so. In most cases the OS will
+detect the deadlock and return an error.
+
+## Wal Recovery
+
+Wal database "recovery" is a process required when the number of connected
+database clients changes from zero to one. In this case, a client is
+considered to connect to the database when it first reads data from it.
+Before recovery commences, an exclusive WRITER lock is taken.
+
+Without blocking locks, if two clients attempt recovery simultaneously, one
+fails to obtain the WRITER lock and either invokes the busy-handler callback or
+returns SQLITE\_BUSY to the user. With blocking locks configured, the second
+client blocks on the WRITER lock.
+
+## Database Readers
+
+Usually, read-only clients are not blocked by any other database clients, so
+they have no need of blocking locks.
+
+If a read-only transaction is being opened on a snapshot, the CHECKPOINTER
+lock is required briefly as part of opening the transaction (to check that a
+checkpointer is not currently overwriting the snapshot being opened). A
+blocking lock is used to obtain the CHECKPOINTER lock in this case. A snapshot
+opener may therefore block on and transfer priority to a checkpointer in some
+cases.
+
+## Database Writers
+
+A database writer must obtain the exclusive WRITER lock. It uses a blocking
+lock to do so if any of the following are true:
+
+ * the transaction is an implicit one consisting of a single DML or DDL
+ statement, or
+ * the transaction is opened using BEGIN IMMEDIATE or BEGIN EXCLUSIVE, or
+ * the first SQL statement executed following the BEGIN command is a DML or
+ DDL statement (not a read-only statement like a SELECT).
+
+In other words, a blocking lock is used in all cases except when an open
+read-transaction is upgraded to a write-transaction. In that case a
+non-blocking lock is used.
+
+## Database Checkpointers
+
+Database checkpointers take the following locks, in order:
+
+ * The exclusive CHECKPOINTER lock.
+ * The exclusive WRITER lock (FULL, RESTART and TRUNCATE only).
+ * Exclusive lock on read-mark slots 1-N. These are immediately released after being taken.
+ * Exclusive lock on read-mark 0.
+ * Exclusive lock on read-mark slots 1-N again. These are immediately released
+ after being taken (RESTART and TRUNCATE only).
+
+All of the above use blocking locks.
+
+## Summary
+
+With blocking locks configured, the only cases in which clients should see an
+SQLITE\_BUSY error are:
+
+ * if the OS does not grant a blocking lock before the configured timeout
+ expires, and
+ * when an open read-transaction is upgraded to a write-transaction.
+
+In all other cases the blocking locks implementation should prevent clients
+from having to handle SQLITE\_BUSY errors and facilitate appropriate transfer
+of priorities between competing clients.
+
+Clients that lock multiple databases simultaneously must be wary of deadlock.
+
+