From b4b8efbd3826ac0af2d1c2e7c40fcf80a4bfba45 Mon Sep 17 00:00:00 2001
From: Daniel Baumann
This is a list of acronyms commonly used in the PostgreSQL
documentation and in discussions about PostgreSQL.
Pluggable Authentication Modules
PostgreSQL
Extensible Markup Language
Appendix L. Acronyms
This part covers topics that are of interest to a PostgreSQL database administrator. This includes installation of the software, and set up and configuration of the PostgreSQL database system. Readers are encouraged to look at Part I and Part II for additional information.
Table of Contents
pg_hba.conf File
adminpack
provides a number of support functions which
pgAdmin and other administration and management tools can
use to provide additional functionality, such as remote management
directory. The log_filename parameter must have its default setting (postgresql-%Y-%m-%d_%H%M%S.log) to use this function.
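The default log_filename value above is a strftime-style pattern. As an illustration only (this helper is hypothetical, not part of adminpack), the file name produced for a given timestamp can be previewed like this:

```python
from datetime import datetime

def default_log_filename(ts: datetime) -> str:
    # The default log_filename setting, postgresql-%Y-%m-%d_%H%M%S.log;
    # strftime expands the escapes to the timestamp components.
    return ts.strftime("postgresql-%Y-%m-%d_%H%M%S.log")

print(default_log_filename(datetime(2024, 1, 31, 13, 5, 9)))
```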
The amcheck
module provides functions that allow you to
verify the logical consistency of the structure of relations.
SET client_min_messages = DEBUG1;
The cause of an invariant violation should be sought. pageinspect may play a useful role in diagnosing corruption that amcheck detects. A REINDEX may not be effective in repairing corruption.
clusterdb — cluster a PostgreSQL database
clusterdb [connection-option...] [ --verbose | -v ] [ --table | -t table ]
xyzzy:
$ clusterdb --table=foo xyzzy
createdb — create a new PostgreSQL database
createdb [connection-option...] [option...] [dbname [description]]
createdb creates a new PostgreSQL database.
$ createdb -p 5000 -h eden -T template0 -e demo
CREATE DATABASE demo TEMPLATE template0;
createuser — define a new PostgreSQL user account
createuser [connection-option...] [option...] [username]
createuser creates a new PostgreSQL user (or more precisely, a role). Only superusers and users with CREATEROLE privilege can create new users.
In the above example, the new password isn't actually echoed when typed, but we show what was typed for clarity. As you see, the password is encrypted before it is sent to the server.
dropdb — remove a PostgreSQL database
dropdb [connection-option...] [option...] dbname
dropdb destroys an existing PostgreSQL database. The user who executes this command must be a database superuser or the owner of the database.
Database "demo" will be permanently deleted.
Are you sure? (y/n) y
DROP DATABASE demo;
dropuser — remove a PostgreSQL user account
dropuser [connection-option...] [option...] [username]
dropuser removes an existing PostgreSQL user. Only superusers and users with the CREATEROLE privilege can remove users.
Role "joe" will be permanently removed.
Are you sure? (y/n) y
DROP ROLE joe;
ecpg — embedded SQL C preprocessor
ecpg [option...] file...
ecpg is the embedded SQL preprocessor for C programs. It converts C programs with embedded SQL statements to normal C code by replacing the SQL invocations with special function calls.
ecpg prog1.pgc
cc -I/usr/local/pgsql/include -c prog1.c
cc -o prog1 prog1.o -L/usr/local/pgsql/lib -lecpg
initdb — create a new PostgreSQL database cluster
initdb [option...] [ --pgdata | -D ] directory
initdb creates a new PostgreSQL database cluster. A database cluster is a collection of databases that are managed by a single server instance.
initdb can also be invoked via pg_ctl initdb.
pg_ctl — initialize, start, stop, or control a PostgreSQL server
pg_ctl init[db] [-D datadir] [-s] [-o initdb-options]
pg_ctl start [-D datadir] [-l filename] [-W] [-t seconds] [-s] [-o options] [-p path] [-c]
pg_ctl stop [-D datadir] [-m s[mart] | f[ast] | i[mmediate] ] [-W] [-t seconds] [-s]
pg_ctl restart [-D datadir] [-m s[mart] | f[ast] | i[mmediate] ]
pg_ctl: server is running (PID: 13718)
/usr/local/pgsql/bin/postgres "-D" "/usr/local/pgsql/data" "-p" "5433" "-B" "128"
The second line is the command that would be invoked in restart mode.
pg_dumpall — extract a PostgreSQL database cluster into a script file
pg_dumpall [connection-option...] [option...]
pg_dumpall is a utility for writing out (“dumping”) all PostgreSQL databases of a cluster into one script file. The script file contains SQL commands that can be used as input to psql to restore the databases.
Check pg_dump for details on possible error conditions.
pg_isready — check the connection status of a PostgreSQL server
pg_isready [connection-option...] [option...]
pg_isready is a utility for checking the connection status of a PostgreSQL database server. The exit status specifies the result of the connection check.
$ echo $?
2
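The exit-status convention shown above (2 means the server did not respond) can be mimicked with a plain TCP probe. This is an illustration of the status codes only, not how pg_isready actually speaks the PostgreSQL protocol:

```python
import socket

def probe(host: str, port: int, timeout: float = 1.0) -> int:
    """Return a pg_isready-like status: 0 if something accepted the TCP
    connection, 2 if there was no response, 3 if no attempt was made."""
    if not (0 < port < 65536):
        return 3                      # invalid parameter: no attempt made
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return 0                  # something answered (not necessarily PostgreSQL)
    except OSError:
        return 2                      # connection refused or timed out

print(probe("127.0.0.1", 1))          # port 1 is almost certainly closed
```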
pg_amcheck — checks for corruption in one or more PostgreSQL databases
pg_amcheck [option...] [dbname]
pg_amcheck supports running amcheck's corruption checking functions against one or more databases.
pg_amcheck is designed to work with PostgreSQL 14.0 and later.
pg_basebackup — take a base backup of a PostgreSQL cluster
pg_basebackup [option...]
pg_basebackup is used to take a base backup of a running PostgreSQL database cluster. The backup is taken without affecting other clients of the database, and can be used both for point-in-time recovery and as the starting point for a standby server.
directory backup:
$ pg_basebackup -D backup -Ft --compress=gzip:9
pg_checksums — enable, disable or check data checksums in a PostgreSQL database cluster
pg_checksums [option...] [[ -D | --pgdata ] datadir]
pg_checksums checks, enables or disables data checksums in a PostgreSQL cluster. The server must be shut down cleanly before running pg_checksums. If the operation is interrupted while enabling or disabling checksums, the cluster's data checksum configuration remains unchanged, and pg_checksums can be re-run to perform the same operation.
pg_config — retrieve information about the installed version of PostgreSQL
pg_config [option...]
The pg_config utility prints configuration parameters of the currently installed version of PostgreSQL. It is intended, for example, to be used by software packages that want to interface to PostgreSQL to facilitate finding the required header files and libraries.
eval ./configure `pg_config --configure`
The configuration arguments are displayed with shell quotation marks so arguments with spaces are represented correctly. Therefore, using eval is required for proper results.
+
pg_controldata [option] [[ -D | --pgdata ] datadir]
pg_controldata prints information initialized during initdb, such as the catalog version. It also shows information about write-ahead logging and checkpoint processing.
Specifies whether to use color in diagnostic messages. Possible values are always, auto and never.
pg_dump — extract a PostgreSQL database into a script file or other archive file
pg_dump [connection-option...] [option...] [dbname]
pg_dump is a utility for backing up a PostgreSQL database. It makes consistent backups even if the database is being used concurrently.
$ pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql
pg_receivewal — stream write-ahead logs from a PostgreSQL server
pg_receivewal [option...]
pg_receivewal is used to stream the write-ahead log from a running PostgreSQL cluster. The write-ahead log is streamed using the streaming replication protocol, and is written to a local directory of files.
/usr/local/pgsql/archive:
$ pg_receivewal -h mydbserver -D /usr/local/pgsql/archive
pg_receivexlog renamed to pg_receivewal
PostgreSQL 9.6 and below provided a command named pg_receivexlog to fetch write-ahead-log (WAL) files. This command was renamed to pg_receivewal; see pg_receivewal for documentation of pg_receivewal, and see the release notes for PostgreSQL 10 for details on this change.
pg_recvlogical — control PostgreSQL logical decoding streams
pg_recvlogical [option...]
pg_recvlogical controls logical decoding replication slots and streams data from such replication slots.
See Section 49.1 for an example.
pg_resetwal — reset the write-ahead log and other control information of a PostgreSQL database cluster
pg_resetwal [ -f | --force ] [ -n | --dry-run ] [option...] [ -D | --pgdata ] datadir
pg_resetwal clears the write-ahead log (WAL) and optionally resets some other control information stored in the pg_control file. This function is sometimes needed if these files have become corrupted.
pg_resetwal works only with servers of the same major version.
pg_resetxlog renamed to pg_resetwal
PostgreSQL 9.6 and below provided a command named pg_resetxlog to reset the write-ahead-log (WAL) files. This command was renamed to pg_resetwal; see pg_resetwal for documentation of pg_resetwal, and see the release notes for PostgreSQL 10 for details on this change.
pg_restore — restore a PostgreSQL database from an archive file created by pg_dump
pg_restore [connection-option...] [option...] [filename]
To restore items 10 and 6, in that order:
$ pg_restore -L db.list db.dump
pg_rewind — synchronize a PostgreSQL data directory with another data directory that was forked from it
pg_rewind [option...] { -D | --target-pgdata } directory { --source-pgdata=directory | --source-server=connstr }
pg_rewind is a tool for synchronizing a PostgreSQL cluster with another copy of the same cluster, after the clusters' timelines have diverged. A typical scenario is to bring an old primary server back online after failover as a standby that follows the new primary. When starting the target, PostgreSQL replays all the required WAL, resulting in a data directory in a consistent state.
pg_verifybackup — verify the integrity of a base backup of a PostgreSQL cluster
pg_verifybackup [option...]
pg_verifybackup is used to check the integrity of a database cluster backup taken using pg_basebackup.
$ pg_basebackup -h mydbserver -D /usr/local/pgsql/data
$ edit /usr/local/pgsql/data/note.to.self
$ pg_verifybackup --ignore=note.to.self --skip-checksums /usr/local/pgsql/data
postgres — PostgreSQL database server
postgres [option...]
postmaster — PostgreSQL database server
postmaster [option...]
postmaster is a deprecated alias of postgres.
psql — PostgreSQL interactive terminal
psql [option...] [dbname [username]]
the string =>. For example:
$ psql testdb
psql (15.6)
Type "help" for help.
testdb=>
A version-specific startup file can be used by appending a dash and the PostgreSQL major or minor release identifier to the file name, for example ~/.psqlrc-15 or ~/.psqlrc-15.6. The most specific version-matching file will be read in preference to a non-version-specific file. These version suffixes are added after determining the file path.
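The "most specific version-matching file wins" rule can be sketched as a lookup order. A hypothetical helper, assuming version "15.6" (this is an illustration of the precedence, not psql's actual implementation):

```python
import os

def pick_psqlrc(home: str, version: str = "15.6"):
    """Prefer ~/.psqlrc-15.6 over ~/.psqlrc-15 over plain ~/.psqlrc."""
    major = version.split(".")[0]
    for name in (f".psqlrc-{version}", f".psqlrc-{major}", ".psqlrc"):
        path = os.path.join(home, name)
        if os.path.exists(path):
            return path
    return None        # no startup file found
```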
reindexdb — reindex a PostgreSQL database
reindexdb [connection-option...] [option...] [ -S | --schema schema ]
To reindex the table foo and the index bar in a database named abcd:
$ reindexdb --table=foo --index=bar abcd
vacuumdb — garbage-collect and analyze a PostgreSQL database
vacuumdb [connection-option...] [option...] [ -t | --table table ]
To analyze only the column bar of the table foo for the optimizer:
$ vacuumdb --analyze --verbose --table='foo(bar)' xyzzy
Table of Contents
Functionality is sometimes removed from PostgreSQL; feature, setting, and file names sometimes change; and documentation moves to different places. This section directs users coming from old versions of the documentation or from external links to the appropriate new location for the information they need.
Table of Contents
It is very difficult to enforce business rules regarding data integrity using Read Committed transactions because the view of the data is shifting with each statement, and even a single statement may not restrict itself to the statement's snapshot.
A transaction's snapshot is not frozen until its first query or data-modification statement (SELECT, INSERT, UPDATE, DELETE, or MERGE), so it is possible to obtain locks explicitly before the snapshot is frozen.
The archive callbacks define the actual archiving behavior of the module. The server will call them as required to process each individual WAL file.
typedef bool (*ArchiveFileCB) (const char *file, const char *path);
typedef void (*ArchiveShutdownCB) (void);
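The real callbacks are C function pointers; purely as an illustration of the ArchiveFileCB contract (archive one completed segment, report success), the semantics might be sketched like this in Python (the directory name is a stand-in, not a real setting of this sketch):

```python
import os, shutil

ARCHIVE_DIR = "/path/to/archive/directory"   # hypothetical archive location

def archive_file_cb(file: str, path: str, archive_dir: str = ARCHIVE_DIR) -> bool:
    """Mimic the ArchiveFileCB contract: 'file' is the WAL segment name,
    'path' its full path; return True once the segment is safely archived."""
    dest = os.path.join(archive_dir, file)
    if os.path.exists(dest):
        return True                  # already archived; treat as success
    tmp = dest + ".tmp"
    shutil.copyfile(path, tmp)       # copy to a temp name first, then rename,
    os.replace(tmp, dest)            # so readers never see a half-written file
    return True
```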
An archive library is loaded by dynamically loading a shared library with the archive_library's name as the library base name. The normal library search path is used to locate the library. To provide the required archive module callbacks, the library must define an initialization function:
typedef void (*ArchiveModuleInit) (struct ArchiveModuleCallbacks *cb);
Only the archive_file_cb callback is required. The others are optional.
Table of Contents
PostgreSQL provides infrastructure to create custom modules for continuous archiving (see Section 26.3). While archiving via a shell command (i.e., archive_command) is much simpler, a custom archive module will often be considerably more robust and performant.
The contrib/basic_archive module contains a working example, which demonstrates some useful techniques.
PostgreSQL allows columns of a table to be defined as variable-length multidimensional arrays. Arrays of any built-in or user-defined base type, enum type, composite type, range type, or domain can be created.
SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2
The ARRAY constructor syntax is often easier to work with than the array-literal syntax when writing array values in SQL commands. In ARRAY, individual element values are written the same way they would be written when not members of an array.
This authentication method operates similarly to password except that it uses BSD Authentication to verify the password. BSD Authentication is used only to validate user name/password pairs.
To use BSD Authentication, the PostgreSQL user account (that is, the operating system user running the server) must first be added to the auth group. The auth group exists by default on OpenBSD systems.
This authentication method uses SSL client certificates to perform authentication. It is therefore only available for SSL connections; see Section 19.9.2 for SSL configuration instructions.
It is redundant to use the clientcert option with cert authentication because cert authentication is effectively trust authentication with clientcert=verify-full.
-
+
auth_delay
causes the server to pause briefly before
reporting authentication failure, to make brute-force attacks on database
passwords more difficult. Note that it does nothing to prevent
@@ -25,4 +25,4 @@ shared_preload_libraries = 'auth_delay'
auth_delay.milliseconds = '500'
KaiGai Kohei <kaigai@ak.jp.nec.com>
-
The ident authentication method works by obtaining the client's operating system user name from an ident server and using it as the allowed database user name (with an optional user name mapping). An ident server that returns an encrypted string cannot be used with PostgreSQL, since PostgreSQL does not have any way to decrypt the returned string to determine the actual user name.
This authentication method operates similarly to password except that it uses LDAP as the password verification method. LDAP is used only to validate the user name/password pairs.
host ... ldap ldapbasedn="dc=example,dc=net"
Since LDAP often uses commas and spaces to separate the different parts of a DN, it is often necessary to use double-quoted parameter values when configuring LDAP options, as shown in the examples.
PostgreSQL provides various methods for authenticating users. The following sections describe each of these authentication methods in more detail.
This authentication method operates similarly to password except that it uses PAM (Pluggable Authentication Modules) as the authentication mechanism. The default PAM service name is postgresql.
If PAM is set up to read /etc/shadow, authentication will fail because the PostgreSQL server is started by a non-root user. However, this is not an issue when PAM is configured to use LDAP or other authentication methods.
There are several password-based authentication methods. These methods operate similarly but differ in how the users' passwords are stored on the server and how the password provided by a client is sent across the connection.
To upgrade an existing installation from md5 to scram-sha-256, after having ensured that all client libraries in use are new enough to support SCRAM, set password_encryption = 'scram-sha-256' in postgresql.conf, make all users set new passwords, and change the authentication method specifications in pg_hba.conf to scram-sha-256.
The pg_hba.conf File
Client authentication is controlled by a configuration file, which traditionally is named pg_hba.conf and is stored in the database cluster's data directory.
To get the DN of a client certificate in this format, do
openssl x509 -in myclient.crt -noout -subject -nameopt RFC2253 | sed "s/^subject=//"
Care needs to be taken when using this option, especially when using
regular expression matching against the DN
.
local all @admins,+support md5
# The database column can also use lists and file names:
local db1,db2,@demodbs all md5
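The record layout used in lines like these (type, database, user, address, method) can be tokenized mechanically. A simplified sketch only; real pg_hba.conf parsing also handles quoting, include directives, and trailing auth options:

```python
def parse_hba_line(line: str):
    """Split one pg_hba.conf record into its fields; returns None for
    blank lines and comments.  Simplified: no quoting, no auth options."""
    line = line.split("#", 1)[0].strip()
    if not line:
        return None
    fields = line.split()
    if fields[0] == "local":             # local records have no address column
        return {"type": fields[0], "database": fields[1],
                "user": fields[2], "method": fields[3]}
    conn_type, database, user, address, method = fields[:5]
    return {"type": conn_type, "database": database, "user": user,
            "address": address, "method": method}

print(parse_hba_line("local db1,db2,@demodbs all md5"))
```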
When trust authentication is specified, PostgreSQL assumes that anyone who can connect to the server is authorized to access the database with whatever database user name they specify (even superuser names).
trust authentication is only suitable for TCP/IP connections if you trust every user on every machine that is allowed to connect to the server by the pg_hba.conf lines that specify trust. It is seldom reasonable to use trust for any TCP/IP connections other than those from localhost (127.0.0.1).
When using an external authentication system such as Ident or GSSAPI, the name of the operating system user that initiated the connection might not be the same as the database user (role) that is to be used.
omicron ann ann
omicron robert bob
# bryanh can also connect as guest1
omicron bryanh guest1
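A user name map like the one above can be applied mechanically. A sketch of the exact-match case only (real maps also support regular expressions and keywords, omitted here):

```python
def map_user(ident_lines, map_name: str, system_user: str, requested: str) -> bool:
    """Return True if a pg_ident.conf-style map allows system_user to
    connect as the requested database user.  Exact matches only."""
    for line in ident_lines:
        line = line.split("#", 1)[0].strip()   # strip comments and blanks
        if not line:
            continue
        name, sys_u, db_u = line.split()
        if name == map_name and sys_u == system_user and db_u == requested:
            return True
    return False

OMICRON = [
    "omicron ann ann",
    "omicron robert bob",
    "# bryanh can also connect as guest1",
    "omicron bryanh guest1",
]
```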
The auto_explain module provides a means for logging execution plans of slow statements automatically, without having to run EXPLAIN by hand.
LOG: duration: 3.651 ms plan:
Filter: indisunique
Takahiro Itagaki <itagaki.takahiro@oss.ntt.co.jp>
The idea behind this dump method is to generate a file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump.
pg_dump -j num -F d -f out.dir dbname
You can use pg_restore -j to restore a dump in parallel. This will work for any archive of either the "custom" or the "directory" archive mode, whether or not it has been created with pg_dump -j.
An alternative backup strategy is to directly copy the files that PostgreSQL uses to store the data in the database; Section 19.2 explains where these files are located.
tar -cf backup.tar /usr/local/pgsql/data
A file system backup will typically be larger than an SQL dump. (pg_dump does not need to dump the contents of indexes for example, just the commands to recreate them.) However, taking a file system backup might be faster.
The object which describes a single file contains either a Path key or an Encoded-Path key. Normally, the Path key will be present.
Checksum
The checksum computed for this file, stored as a series of hexadecimal characters, two for each byte of the checksum.
Table of Contents
The backup manifest generated by pg_basebackup is primarily intended to permit the backup to be verified using pg_verifybackup. The manifest is a JSON document; it must be valid UTF-8, and unlike a supported server encoding there is no similar exception for backup manifests. The JSON document is always an object; the keys that are present in this object are described in the next section.
The backup manifest JSON document contains the following keys.
PostgreSQL-Backup-Manifest-Version
The associated value is always the integer 1.
A SHA-256 checksum is significantly more expensive than a CRC32C checksum, but the manifest should normally be small enough that the extra computation won't matter very much.
The object which describes a WAL range always has three keys:
Timeline
The timeline for this range of WAL records, as an integer.
Because of timeline switches after an upstream promotion, it is possible for multiple ranges to be present, each with a different timeline. There will never be multiple WAL ranges present for the same timeline.
Table of Contents
As with everything that contains valuable data, PostgreSQL databases should be backed up regularly. While the procedure is essentially simple, it is important to have a clear understanding of the underlying techniques and assumptions. There are three fundamentally different approaches to backing up PostgreSQL data:
SQL dump
File system level backup
Continuous archiving
Each has its own strengths and weaknesses; each is discussed in turn in the following sections.
Robert Haas <rhaas@postgresql.org>
basic_archive is an example of an archive module. This module copies completed WAL segment files to the specified directory. This may not be especially useful, but it can serve as a starting point for developing your own archive module.
basic_archive.archive_directory = '/path/to/archive/directory'
Nathan Bossart
PostgreSQL can be extended to run user-supplied code in separate processes. Such processes are started, stopped and monitored by postgres, which permits them to have a lifetime closely linked to the server's status.
typedef struct BackgroundWorker
The maximum number of registered background workers is limited by max_worker_processes.
Selected references and readings for SQL and PostgreSQL.
[ston89b] “The case for partial indexes”. SIGMOD Record 18(4). Dec. 1989. 4–11.
[ston90a] “The implementation of POSTGRES”. Transactions on Knowledge and Data Engineering 2(1). IEEE. March 1990.
[ston90b] “On Rules, Procedures, Caching and Views in Database Systems”. ACM-SIGMOD Conference on Management of Data, June 1990.
create tablename tableoid
In bootstrap mode, filling of the indices is postponed.
build indices
Fill in the indices that have previously been declared.
The following sequence of commands will create the table test_table with OID 420, having three columns oid, cola and colb:
create test_table 420 (oid = oid, cola = int4, colb = text)
open test_table
insert ( 421 1 'value 1' )
insert ( 422 2 _null_ )
close test_table
This section describes how the PostgreSQL backend interprets BKI files. This description will be easier to understand if the postgres.bki file is at hand as an example.
Tokens that need to contain whitespace are written as single-quoted strings. Everything is case sensitive. Lines starting with # are ignored.
The open command cannot be used until the tables it uses exist and have entries for the table that is to be opened. (These minimum tables are pg_class, pg_attribute, pg_proc, and pg_type.)
There are doubtless other, undocumented ordering dependencies.
Table of Contents
PostgreSQL uses many different system catalogs to keep track of the existence and properties of database objects, such as tables and functions. Physically there is no difference between a system catalog and a regular table. The system catalogs are populated during initdb from catalog declaration files and/or initial data files. The rest of this chapter gives some information about that, and for completeness describes the BKI file format.
bloom provides an index access method based on Bloom filters.
DEFAULT FOR TYPE text USING bloom AS
Oleg Bartunov <obartunov@postgrespro.ru>, Postgres Professional, Moscow, Russia
The core PostgreSQL distribution includes the BRIN operator classes shown in Table 71.1. The values_per_range parameter defines how many values are stored by minmax indexes to summarize a block range. Each value may represent either a point, or a boundary of an interval. Values must be between 8 and 256, and the default value is 32.
The BRIN interface has a high level of abstraction, requiring the access method implementer only to implement the semantics of the data type being accessed. The BRIN layer itself takes care of concurrency, logging and searching the structure.
typedef struct BrinOpcInfo
The consistent support function takes the right-hand-side argument of the supported operator. See float4_minmax_ops as an example of minmax, and box_inclusion_ops as an example of inclusion.
BRIN stands for Block Range Index. BRIN is designed for handling very large tables in which certain columns have some natural correlation with their physical location within the table.
LOG: request for BRIN range summarization for index "brin_wi_idx" page 128 was not recorded
Manual summarization is useful when the index tuple is no longer a very good representation because the existing values have changed. See Section 9.27.8 for details.
As shown in Table 38.3, a btree operator class must provide five comparison operators: <, <=, =, >= and >.
The comparison operators can be extended to cross-type comparisons among the family's data types. The extensions to three or more data types within a family are not strictly required by the btree index mechanism itself, but the planner relies on them for optimization purposes.
btree_gin provides sample GIN operator classes that implement B-tree equivalent behavior for the data types int2, int4, int8, float4, and several others.
SELECT * FROM test WHERE a < 10;
Oleg Bartunov (<oleg@sai.msu.su>). See http://www.sai.msu.su/~megera/oddmuse/index.cgi/Gin for additional information.
btree_gist provides GiST index operator classes that implement B-tree equivalent behavior for the data types int2, int4, int8, float4, and several others.
INSERT 0 1
Paul Jungwirth (<pj@illuminatedcomputing.com>). See http://www.sai.msu.su/~megera/postgres/gist/ for additional information.
This section covers B-Tree index implementation details that may be of use to advanced users. See src/backend/access/nbtree/README in the source distribution for a much more detailed, internals-focused description.
INCLUDE indexes can never use deduplication.
PostgreSQL includes an implementation of the standard btree (multi-way balanced tree) index data structure. Any data type that can be sorted into a well-defined linear order can be indexed by a btree index. Btree operator classes provide some features that go beyond what would be needed just to support btree indexes, and parts of the system that are quite distant from the btree AM make use of them.
As shown in Table 38.9, btree defines one required and four optional support functions. The five user-defined methods are described below.
The options support function is currently not used by the B-tree access method. Nevertheless, this support function was added to B-tree for uniformity, and will probably find uses during further evolution of B-tree in PostgreSQL.
When you find a bug in PostgreSQL we want to hear about it. Your bug reports play an important part in making PostgreSQL more reliable because even the utmost care cannot guarantee that every part of PostgreSQL will work on every platform under every circumstance. If you are talking about a Git snapshot, mention that, including the commit hash.
If your version is older than 15.6 we will almost certainly tell you to upgrade. There are many bug fixes and improvements in each new release, so it is quite possible that a bug you have encountered in an older release of PostgreSQL has already been fixed. If you wish to subscribe to the lists, please visit https://lists.postgresql.org/ for instructions.
pg_aggregate
The catalog pg_aggregate stores information about aggregate functions. An aggregate function is a function that operates on a set of values (typically one column from each row that matches a query condition) and returns a single value computed from all these values.
New aggregate functions are registered with the CREATE AGGREGATE command. See Section 38.12 for more information about writing aggregate functions and the meaning of the transition functions, etc.
pg_am
The catalog pg_am stores information about relation access methods. There is one row for each access method supported by the system.
However, pg_index_column_has_property() and related functions have been added to allow SQL queries to inspect index access method properties; see Table 9.71.
pg_amop
The catalog pg_amop stores information about operators associated with access method operator families. There is one row for each operator that is a member of an operator family. A family member can be either a search operator or an ordering operator.
An entry's amoplefttype and amoprighttype must match the oprleft and oprright fields of the referenced pg_operator entry.
pg_amproc
The catalog pg_amproc stores information about support functions associated with access method operator families. There is one row for each support function belonging to an operator family.
The default support functions used for an index are those with amproclefttype and amprocrighttype both equal to the index operator class's opcintype.
pg_attrdef
The catalog pg_attrdef stores column default values. The main information about columns is stored in pg_attribute.
adbin: The column default value, in nodeToString() representation. Use pg_get_expr(adbin, adrelid) to convert it to an SQL expression.
pg_attribute
The catalog pg_attribute stores information about table columns. There will be exactly one pg_attribute row for every column in every table in the database.
The type might be later dropped, and so there is no pg_type row anymore; attlen and the other fields can be used to interpret the contents of a row of the table.
pg_authid
The catalog pg_authid contains information about database authorization identifiers (roles). A role subsumes the concepts of “users” and “groups”. A user is essentially just a role with the rolcanlogin flag set.
SCRAM-SHA-256$<iteration count>:<salt>$<StoredKey>:<ServerKey>
A password that does not follow either of those formats is assumed to be unencrypted.
pg_cast
The catalog pg_cast stores data type conversion paths, both built-in and user-defined.
A matching pg_cast entry allows coercion in a single step. When no such entry is available, coercion to a type that uses a type modifier involves two steps, one to convert between data types and a second to apply the modifier.
pg_class
+
pg_class
The catalog pg_class
describes tables and
other objects that have columns or are otherwise similar to a
table. This includes indexes (but see also pg_index
),
DROP INDEX
. Instead, VACUUM
clears
relhasindex
if it finds the table has no indexes. This
arrangement avoids race conditions and improves concurrency.
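A quick way to get an overview of this catalog is to count relations by relkind:

```sql
-- Count relations by kind: r = ordinary table, i = index,
-- S = sequence, v = view, m = materialized view, t = TOAST table, ...
SELECT relkind, count(*)
FROM pg_class
GROUP BY relkind
ORDER BY relkind;
```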
pg_collation
The catalog pg_collation
describes the
available collations, which are essentially mappings from an SQL
name to operating system locale categories.
collations whose encoding does not match the database encoding,
since they could match the encodings of databases later cloned from
template0
. This would currently have to be done manually.
pg_constraint
The catalog pg_constraint
stores check, primary
key, unique, foreign key, and exclusion constraints on tables.
(Column constraints are not treated specially. Every column constraint is
pg_class.relchecks
needs to agree with the
number of check-constraint entries found in this table for each
relation.
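That invariant can be spot-checked from SQL; the query below should normally return no rows:

```sql
-- Tables whose pg_class.relchecks disagrees with the number of
-- CHECK constraints (contype = 'c') recorded in pg_constraint.
SELECT c.relname, c.relchecks, count(con.oid) AS check_constraints
FROM pg_class c
LEFT JOIN pg_constraint con
  ON con.conrelid = c.oid AND con.contype = 'c'
WHERE c.relkind = 'r'
GROUP BY c.relname, c.relchecks
HAVING c.relchecks <> count(con.oid);
```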
pg_conversion
The catalog pg_conversion
describes
encoding conversion functions. See CREATE CONVERSION
for more information.
True if this is the default conversion
pg_database
The catalog pg_database
stores information about
the available databases. Databases are created with the CREATE DATABASE
command.
Consult Chapter 23 for details about the meaning
Access privileges; see Section 5.7 for details
pg_default_acl
The catalog pg_default_acl
stores initial
privileges to be assigned to newly created objects.
Table 53.17. pg_default_acl
Columns
not whatever might be in 53.18.
pg_enum
The pg_enum
catalog contains entries
showing the values and labels for each enum type. The
internal representation of a given enum value is actually the OID
negative or fractional values of enumsortorder
.
The only requirement on these values is that they be correctly
ordered and unique within each enum type.
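For example, the labels of an enum type can be listed in their defined sort order; mood is a placeholder type name:

```sql
-- Labels of a (hypothetical) enum type "mood", in sort order.
SELECT enumlabel, enumsortorder
FROM pg_enum
WHERE enumtypid = 'mood'::regtype
ORDER BY enumsortorder;
```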
pg_event_trigger
The catalog pg_event_trigger
stores event triggers.
See Chapter 40 for more information.
Table 53.21. pg_event_trigger
Columns
Command tags for which this trigger will fire. If NULL, the firing of this trigger is not restricted on the basis of the command tag.
pg_extension
The catalog pg_extension
stores information
about the installed extensions. See Section 38.17
for details about extensions.
objects. If extrelocatable
is true, then
this schema must in fact contain all schema-qualifiable objects
belonging to the extension.
pg_index
The catalog pg_index
contains part of the information
about indexes. The rest is mostly in
pg_class
.
Expression tree (in nodeToString()
representation) for partial index predicate. Null if not a
partial index.
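As with pg_attrdef.adbin, the stored predicate tree can be decompiled with pg_get_expr:

```sql
-- List all partial indexes with their predicates rendered as SQL.
SELECT indexrelid::regclass AS index_name,
       pg_get_expr(indpred, indrelid) AS predicate
FROM pg_index
WHERE indpred IS NOT NULL;
```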
pg_inherits
The catalog pg_inherits
records information about
table and index inheritance hierarchies. There is one entry for each direct
parent-child table or index relationship in the database. (Indirect
true
for a partition that is in the process of
being detached; false
otherwise.
pg_init_privs
The catalog pg_init_privs
records information about
the initial privileges of objects in the system. There is one entry
for each object in the database which has a non-default (non-NULL)
The initial access privileges; see Section 5.7 for details
pg_language
The catalog pg_language
registers
languages in which you can write functions or stored procedures.
See CREATE LANGUAGE
Access privileges; see Section 5.7 for details
pg_largeobject_metadata
The catalog pg_largeobject_metadata
holds metadata associated with large objects. The actual large object
data is stored in
Access privileges; see Section 5.7 for details
pg_largeobject
The catalog pg_largeobject
holds the data making up
“large objects”. A large object is identified by an OID
assigned when it is created. Each large object is broken into
allows sparse storage: pages might be missing, and might be shorter than
LOBLKSIZE
bytes even if they are not the last page of the object.
Missing regions within a large object read as zeroes.
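A sketch of how the page structure can be inspected (reading this catalog normally requires superuser privileges):

```sql
-- Stored pages per large object; because storage may be sparse,
-- stored_pages can be smaller than last_page + 1.
SELECT loid,
       count(*) AS stored_pages,
       max(pageno) + 1 AS last_page
FROM pg_largeobject
GROUP BY loid;
```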
pg_namespace
The catalog pg_namespace
stores namespaces.
A namespace is the structure underlying SQL schemas: each namespace
can have a separate collection of relations, types, etc. without name
Access privileges; see Section 5.7 for details
pg_opclass
The catalog pg_opclass
defines
index access method operator classes. Each operator class defines
semantics for index columns of a particular data type and a particular
Also, there must be no more than one pg_opclass
row having opcdefault
true for any given combination of
opcmethod
and opcintype
.
pg_operator
The catalog pg_operator
stores information about operators.
See CREATE OPERATOR
and Section 38.14 for more information.
Join selectivity estimation function for this operator (zero if none)
pg_opfamily
The catalog pg_opfamily
defines operator families.
Each operator family is a collection of operators and associated
support routines that implement the semantics specified for a particular
pg_amproc
,
and
pg_opclass
.
pg_parameter_acl
The catalog pg_parameter_acl
records configuration
parameters for which privileges have been granted to one or more roles.
No entry is made for parameters that have default privileges.
Access privileges; see Section 5.7 for details
pg_partitioned_table
The catalog pg_partitioned_table
stores
information about how tables are partitioned.
Table 53.37. pg_partitioned_table
Columns
references. This is a list with one element for each zero
entry in
pg_policy
The catalog pg_policy
stores row-level
security policies for tables. A policy includes the kind of
command that it applies to (possibly all commands), the roles that it
Policies stored in pg_policy
are applied only when
pg_class
.relrowsecurity
is set for
their table.
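A consistency check along these lines: the query below finds tables that have policies defined but row-level security not enabled, so their policies are currently inert:

```sql
-- Tables with policies in pg_policy but relrowsecurity disabled.
SELECT DISTINCT c.oid::regclass AS table_name
FROM pg_policy p
JOIN pg_class c ON c.oid = p.polrelid
WHERE NOT c.relrowsecurity;
```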
pg_proc
The catalog pg_proc
stores information about
functions, procedures, aggregate functions, and window functions
(collectively also known as routines). See CREATE FUNCTION, CREATE PROCEDURE, and
text. probin
is null except for
dynamically-loaded C functions, for which it gives the name of the
shared library file containing the function.
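For instance, prosrc and probin of a given function can be inspected directly; which column is meaningful depends on the function's language:

```sql
-- Source text (interpreted languages) or shared-library reference
-- (C-language functions) for functions named "lower".
SELECT proname, prosrc, probin
FROM pg_proc
WHERE proname = 'lower';
```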
pg_publication_rel
The catalog pg_publication_rel
contains the
mapping between relations and publications in the database. This is a
many-to-many mapping. See also Section 54.17
part of the publication. For example, a value of 1 3
would mean that the first and the third table columns are published.
A null value indicates that all columns are published.
pg_publication
The catalog pg_publication
contains all
publications created in the database. For more on publications see
Section 31.1.
If true, operations on a leaf partition are replicated using the
identity and schema of its topmost partitioned ancestor mentioned in the
publication instead of its own.
pg_range
The catalog pg_range
stores information about
range types. This is in addition to the types' entries in
pg_type
.
type. rngcanonical
is used when the element type is
discrete. rngsubdiff
is optional but should be supplied to
improve performance of GiST indexes on the range type.
pg_replication_origin
The pg_replication_origin
catalog contains
all replication origins created. For more on replication origins
see Chapter 50.
The external, user-defined name of a replication origin.
pg_seclabel
The catalog pg_seclabel
stores security
labels on database objects. Security labels can be manipulated
with the SECURITY LABEL
command. For an easier
The security label applied to this object.
pg_shdepend
The catalog pg_shdepend
records the
dependency relationships between database objects and shared objects,
such as roles. This information allows
considered “pinned”. No entries are made
in pg_shdepend
that would have a pinned
object as either referenced or dependent object.
pg_shdescription
The catalog pg_shdescription
stores optional
descriptions (comments) for shared database objects. Descriptions can be
manipulated with the COMMENT
command and viewed with
Arbitrary text that serves as the description of this object
pg_shseclabel
The catalog pg_shseclabel
stores security
labels on shared database objects. Security labels can be manipulated
with the SECURITY LABEL
command. For an easier
The security label applied to this object.
pg_statistic_ext_data
The catalog pg_statistic_ext_data
holds data for extended planner statistics defined in
pg_statistic_ext
.
Per-expression statistics, serialized as an array of
pg_statistic
type
pg_statistic_ext
The catalog pg_statistic_ext
holds definitions of extended planner statistics.
Each row in this catalog corresponds to a statistics object
and populate an entry in the
pg_statistic_ext_data
catalog.
pg_statistic
The catalog pg_statistic
stores
statistical data about the contents of the database. Entries are
created by ANALYZE
values are actually of the specific column's data type, or a related
type such as an array's element type, so there is no way to define
these columns' type more specifically than anyarray
.
pg_subscription
The catalog pg_subscription
contains all existing
logical replication subscriptions. For more information about logical
replication see Chapter 31.
Array of subscribed publication names. These reference
publications defined in the upstream database. For more on publications
see Section 31.1.
pg_transform
The catalog pg_transform
stores information about
transforms, which are a mechanism to adapt data types to procedural
languages. See CREATE TRANSFORM for more information.
The OID of the function to use when converting output from the
procedural language (e.g., return values) to the data type. Zero is
stored if the default behavior should be used.
pg_trigger
The catalog pg_trigger
stores triggers on tables
and views.
See CREATE TRIGGER
pg_class.relhastriggers
must be true if a relation has any triggers in this catalog.
pg_type
The catalog pg_type
stores information about data
types. Base types and enum types (scalar types) are created with
CREATE TYPE
, and
of typcategory
. Any future additions to this list will
also be upper-case ASCII letters. All other ASCII characters are reserved
for user-defined categories.
Table 53.65. typcategory
Codes
Code | Category |
---|---|
A | Array types |
B | Boolean types |
C | Composite types |
D | Date/time types |
E | Enum types |
G | Geometric types |
I | Network address types |
N | Numeric types |
P | Pseudo-types |
R | Range types |
S | String types |
T | Timespan types |
U | User-defined types |
V | Bit-string types |
X | unknown type |
Z | Internal-use types |
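The distribution of these codes over the installed types can be seen with a simple aggregate:

```sql
-- Count pg_type rows per typcategory code.
SELECT typcategory, count(*)
FROM pg_type
GROUP BY typcategory
ORDER BY typcategory;
```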
Table 53.1 lists the system catalogs. More detailed documentation of each catalog follows below.
database creation and are thereafter database-specific. A few catalogs are physically shared across all databases in a cluster; these are noted in the descriptions of the individual catalogs.
Table 53.1. System Catalogs
Catalog Name | Purpose |
---|---|
pg_aggregate | aggregate functions |
pg_am | relation access methods |
pg_amop | access method operators |
pg_amproc | access method support functions |
pg_attrdef | column default values |
pg_attribute | table columns (“attributes”) |
pg_authid | authorization identifiers (roles) |
pg_auth_members | authorization identifier membership relationships |
pg_cast | casts (data type conversions) |
pg_class | tables, indexes, sequences, views (“relations”) |
pg_collation | collations (locale information) |
pg_constraint | check constraints, unique constraints, primary key constraints, foreign key constraints |
pg_conversion | encoding conversion information |
pg_database | databases within this database cluster |
pg_db_role_setting | per-role and per-database settings |
pg_default_acl | default privileges for object types |
pg_depend | dependencies between database objects |
pg_description | descriptions or comments on database objects |
pg_enum | enum label and value definitions |
pg_event_trigger | event triggers |
pg_extension | installed extensions |
pg_foreign_data_wrapper | foreign-data wrapper definitions |
pg_foreign_server | foreign server definitions |
pg_foreign_table | additional foreign table information |
pg_index | additional index information |
pg_inherits | table inheritance hierarchy |
pg_init_privs | object initial privileges |
pg_language | languages for writing functions |
pg_largeobject | data pages for large objects |
pg_largeobject_metadata | metadata for large objects |
pg_namespace | schemas |
pg_opclass | access method operator classes |
pg_operator | operators |
pg_opfamily | access method operator families |
pg_parameter_acl | configuration parameters for which privileges have been granted |
pg_partitioned_table | information about partition key of tables |
pg_policy | row-security policies |
pg_proc | functions and procedures |
pg_publication | publications for logical replication |
pg_publication_namespace | schema to publication mapping |
pg_publication_rel | relation to publication mapping |
pg_range | information about range types |
pg_replication_origin | registered replication origins |
pg_rewrite | query rewrite rules |
pg_seclabel | security labels on database objects |
pg_sequence | information about sequences |
pg_shdepend | dependencies on shared objects |
pg_shdescription | comments on shared objects |
pg_shseclabel | security labels on shared database objects |
pg_statistic | planner statistics |
pg_statistic_ext | extended planner statistics (definition) |
pg_statistic_ext_data | extended planner statistics (built statistics) |
pg_subscription | logical replication subscriptions |
pg_subscription_rel | relation state for subscriptions |
pg_tablespace | tablespaces within this database cluster |
pg_transform | transforms (data type to procedural language conversions) |
pg_trigger | triggers |
pg_ts_config | text search configurations |
pg_ts_config_map | text search configurations' token mappings |
pg_ts_dict | text search dictionaries |
pg_ts_parser | text search parsers |
pg_ts_template | text search templates |
pg_type | data types |
pg_user_mapping | mappings of users to foreign servers |
Table of Contents
pg_aggregate
pg_am
pg_amop
pg_amproc
pg_attrdef
pg_attribute
pg_authid
pg_auth_members
pg_cast
pg_class
pg_collation
pg_constraint
pg_conversion
pg_database
pg_db_role_setting
pg_default_acl
pg_depend
pg_description
pg_enum
pg_event_trigger
pg_extension
pg_foreign_data_wrapper
pg_foreign_server
pg_foreign_table
pg_index
pg_inherits
pg_init_privs
pg_language
pg_largeobject
pg_largeobject_metadata
pg_namespace
pg_opclass
pg_operator
pg_opfamily
pg_parameter_acl
pg_partitioned_table
pg_policy
pg_proc
pg_publication
pg_publication_namespace
pg_publication_rel
pg_range
pg_replication_origin
pg_rewrite
pg_seclabel
pg_sequence
pg_shdepend
pg_shdescription
pg_shseclabel
pg_statistic
pg_statistic_ext
pg_statistic_ext_data
pg_subscription
pg_subscription_rel
pg_tablespace
pg_transform
pg_trigger
pg_ts_config
pg_ts_config_map
pg_ts_dict
pg_ts_parser
pg_ts_template
pg_type
pg_user_mapping
The system catalogs are the place where a relational database management system stores schema metadata, such as information about tables and columns, and internal bookkeeping information.
particularly esoteric operations, but many of those have been made available as SQL commands over time, and so the need for direct manipulation of the system catalogs is ever decreasing.
This chapter describes the available localization features from the point of view of the administrator. PostgreSQL supports two localization
between client and server. This is covered in Section 24.3.
By default, data pages are not protected by checksums, but this can optionally be enabled for a cluster. When enabled, each data page includes a checksum that is updated when the page is written and verified each time
The pg_checksums application can be used to enable or disable data checksums, as well as verify checksums, on an offline cluster.
The citext
module provides a case-insensitive
character string type, citext
. Essentially, it internally calls
lower
when comparing values. Otherwise, it behaves almost
SELECT * FROM users WHERE nick = 'Larry';
David E. Wheeler <david@kineticode.com>
Inspired by the original citext
module by Donald Fraser.
Authentication failures and related problems generally manifest themselves through error messages like the following:
FATAL:  database "testdb" does not exist
The server log might contain more information about an authentication failure than is reported to the client. If you are confused about the reason for a failure, check the server log.
Table of Contents
The pg_hba.conf File
When a client application connects to the database server, it specifies which PostgreSQL database user name it wants to connect as, much the same way one logs into a Unix computer
operating system account, and in such cases there need be no connection between database user names and OS user names.
This part describes the client programming interfaces distributed with PostgreSQL. Each of these chapters can be read independently. Note that there are many other programming
familiar with using SQL commands to manipulate and query the database (see Part II) and of course with the programming language that the interface uses.
Table of Contents
COPY Command
information_schema_catalog_name
administrable_role_authorizations
applicable_roles
attributes
character_sets
check_constraint_routine_usage
check_constraints
collations
collation_character_set_applicability
column_column_usage
column_domain_usage
column_options
column_privileges
column_udt_usage
columns
constraint_column_usage
constraint_table_usage
data_type_privileges
domain_constraints
domain_udt_usage
domains
element_types
enabled_roles
foreign_data_wrapper_options
foreign_data_wrappers
foreign_server_options
foreign_servers
foreign_table_options
foreign_tables
key_column_usage
parameters
referential_constraints
role_column_grants
role_routine_grants
role_table_grants
role_udt_grants
role_usage_grants
routine_column_usage
routine_privileges
routine_routine_usage
routine_sequence_usage
routine_table_usage
routines
schemata
sequences
sql_features
sql_implementation_info
sql_parts
sql_sizing
table_constraints
table_privileges
tables
transforms
triggered_update_columns
triggers
udt_privileges
usage_privileges
user_defined_types
user_mapping_options
user_mappings
view_column_usage
view_routine_usage
view_table_usage
views
The collation feature allows specifying the sort order and character
classification behavior of data per-column, or even per-operation.
This alleviates the restriction that the
CREATE COLLATION ignore_accents (provider = icu, locale = 'und-u-ks-level1-kc-tr
normalize
and is normalized
to
preprocess or check the strings, instead of using nondeterministic
collations. There are different trade-offs for each approach.
To use colorized output, set the environment variable
PG_COLOR
as follows:
Otherwise, color is not used.
The actual colors to be used are configured using the environment variable
PG_COLORS
(note plural). The value is a colon-separated list of
This color specification format is also used by other software packages
such as GCC, GNU
coreutils, and GNU grep.
-
All parameter names are case-insensitive. Every parameter takes a value of one of five types: boolean, string, integer, floating point, or enumerated (enum). The type determines the syntax for setting the
include_dir 'conf.d'
their configurations all stored in one place, such as in a version control repository. (Storing database configuration files under version control is another good practice to consider.)
PostgreSQL implements a “process per user” client/server model. In this model, every
process parses the query, creates an execution plan, executes the plan, and returns the retrieved rows to the client by transmitting them over the established connection.
At all times, PostgreSQL maintains a
write ahead log (WAL) in the pg_wal/
subdirectory of the cluster's data directory. The log records
archive_command = 'local_backup_script.sh "%p" "%f"'
on. In the meantime, administrators might wish to reduce the number
of page snapshots included in WAL by increasing the checkpoint
interval parameters as much as feasible.
dblink_build_sql_delete — builds a DELETE statement using supplied values for primary key field values
dblink_build_sql_delete(text relname,
SELECT dblink_build_sql_delete('"MyFoo"', '1 2', 2, '{"1", "b"}');
---------------------------------------------
 DELETE FROM "MyFoo" WHERE f1='1' AND f2='b'
(1 row)
dblink_build_sql_insert — builds an INSERT statement using a local tuple, replacing the primary key field values with alternative supplied values
SELECT dblink_build_sql_insert('foo', '1 2', 2, '{"1", "a"}', '{"1", "b''a"}');
--------------------------------------------------
 INSERT INTO foo(f1,f2,f3) VALUES('1','b''a','1')
(1 row)
dblink_build_sql_update — builds an UPDATE statement using a local tuple, replacing the primary key field values with alternative supplied values
dblink_build_sql_update(text relname,
SELECT dblink_build_sql_update('foo', '1 2', 2, '{"1", "a"}', '{"1", "b"}');
-------------------------------------------------------------
 UPDATE foo SET f1='1',f2='b',f3='1' WHERE f1='1' AND f2='b'
(1 row)
dblink_cancel_query — cancels any active query on the named connection
Synopsis
dblink_cancel_query(text connname) returns text
Description
dblink_cancel_query
attempts to cancel any query that
the text of an error message on failure.
Examples
SELECT dblink_cancel_query('dtest1');
dblink_close — closes a cursor in a remote database
Synopsis
dblink_close(text cursorname [, bool fail_on_error]) returns text
dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text
Description
SELECT dblink_close('foo');
--------------
 OK
(1 row)
dblink_connect_u — opens a persistent connection to a remote database, insecurely
Synopsis
dblink_connect_u(text connstr) returns text
dblink_connect_u(text connname, text connstr) returns text
Description
not contain any records specifying a wildcard host name.
For further details see dblink_connect().
dblink_connect — opens a persistent connection to a remote database
Synopsis
dblink_connect(text connstr) returns text
dblink_connect(text connname, text connstr) returns text
Description
REVOKE SELECT ON TABLE foo FROM regress_dblink_user;
DROP USER MAPPING FOR regress_dblink_user SERVER fdtest;
DROP USER regress_dblink_user;
DROP SERVER fdtest;
dblink_disconnect — closes a persistent connection to a remote database
dblink_error_message — gets last error message on the named connection
Synopsis
dblink_error_message(text connname) returns text
Description
dblink_error_message
fetches the most recent remote
the asynchronous query will be visible.
Examples
SELECT dblink_error_message('dtest1');
dblink_exec — executes a command in a remote database
Synopsis
dblink_exec(text connname, text sql [, bool fail_on_error]) returns text
dblink_exec(text connstr, text sql [, bool fail_on_error]) returns text
dblink_exec(text sql [, bool fail_on_error]) returns text
DETAIL:  ERROR:  null value in column "relnamespace" violates not-null constrain
-------------
 ERROR
(1 row)
dblink_fetch — returns rows from an open cursor in a remote database
Synopsis
dblink_fetch(text cursorname, int howmany [, bool fail_on_error]) returns setof record dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) returns setof recordDescription
@@ -74,4 +74,4 @@ SELECT * FROM dblink_fetch('foo', 5) AS (funcname name, source text); funcname | source ----------+-------- (0 rows) -
dblink — executes a query in a remote database
Synopsis
dblink(text connname, text sql [, bool fail_on_error]) returns setof record dblink(text connstr, text sql [, bool fail_on_error]) returns setof record dblink(text sql [, bool fail_on_error]) returns setof record @@ -140,4 +140,4 @@ SELECT * FROM dblink('myconn', 'select proname, prosrc from pg_proc') byteain | byteain byteaout | byteaout (14 rows) -
dblink_get_connections — returns the names of all open named dblink connections
Synopsis
dblink_get_connections() returns text[]Description
dblink_get_connections
returns an array of the names of all open nameddblink
connections.Return Value
Returns a text array of connection names, or NULL if none.
Examples
SELECT dblink_get_connections();
dblink_get_notify — retrieve async notifications on a connection
Synopsis
dblink_get_notify() returns setof (notify_name text, be_pid int, extra text) dblink_get_notify(text connname) returns setof (notify_name text, be_pid int, extra text)Description
@@ -30,4 +30,4 @@ SELECT * FROM dblink_get_notify(); -------------+--------+------- virtual | 1229 | (1 row) -
dblink_get_pkey — returns the positions and field names of a relation's primary key fields
dblink_get_result — gets an async query result
Synopsis
dblink_get_result(text connname [, bool fail_on_error]) returns setof recordDescription
dblink_get_result
collects the results of an @@ -95,4 +95,4 @@ contrib_regression=# SELECT * FROM dblink_get_result('dtest1') AS t1(f1 int, f2 f1 | f2 | f3 ----+----+---- (0 rows) -
dblink_is_busy — checks if connection is busy with an async query
Synopsis
dblink_is_busy(text connname) returns intDescription
dblink_is_busy
tests whether an async query is in progress. @@ -11,4 +11,4 @@ dblink_is_busy(text connname) returns intdblink_get_result
will not block.Examples
SELECT dblink_is_busy('dtest1');
dblink_open — opens a cursor in a remote database
Synopsis
dblink_open(text cursorname, text sql [, bool fail_on_error]) returns text dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) returns textDescription
@@ -45,4 +45,4 @@ SELECT dblink_open('foo', 'select proname, prosrc from pg_proc'); ------------- OK (1 row) -
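Taken together, the cursor functions support a simple open/fetch/close loop. The following sketch assumes an already-established named connection `myconn` and a hypothetical remote table `foo`:

```sql
-- Open a cursor on the named connection (assumes "myconn" exists and
-- the remote table "foo" is readable by the connected role).
SELECT dblink_open('myconn', 'foo_cur', 'SELECT id, name FROM foo');

-- Fetch rows in batches; the column definition list must match
-- the cursor's result row type.
SELECT * FROM dblink_fetch('myconn', 'foo_cur', 100)
    AS t(id int, name text);

-- Close the cursor when done.
SELECT dblink_close('myconn', 'foo_cur');
```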
dblink_send_query — sends an async query to a remote database
Synopsis
dblink_send_query(text connname, text sql) returns intDescription
dblink_send_query
sends a query to be executed @@ -21,4 +21,4 @@ dblink_send_query(text connname, text sql) returns int Returns 1 if the query was successfully dispatched, 0 otherwise.Examples
SELECT dblink_send_query('dtest1', 'SELECT * FROM foo WHERE f1 < 3');
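A typical asynchronous round trip combines dblink_send_query, dblink_is_busy, and dblink_get_result. A minimal sketch, assuming a named connection `dtest1` and a remote table `foo` as in the examples above:

```sql
-- Dispatch the query without waiting (returns 1 on success).
SELECT dblink_send_query('dtest1', 'SELECT * FROM foo WHERE f1 < 3');

-- Poll until the remote query has finished (returns 0 when not busy).
SELECT dblink_is_busy('dtest1');

-- Collect the result set; this call does not block once
-- dblink_is_busy has returned 0. The column definition list is
-- an assumption about foo's row type.
SELECT * FROM dblink_get_result('dtest1')
    AS t(f1 int, f2 text, f3 text[]);
```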
This section covers PostgreSQL client
applications in contrib
. They can be run from anywhere,
independent of where the database server resides. See
also PostgreSQL Client Applications for information about client
applications that are part of the core PostgreSQL
distribution.
Some applications run on the PostgreSQL server
itself. Currently, no such applications are included in the
contrib
directory. See also PostgreSQL Server Applications for information about server applications that
are part of the core PostgreSQL distribution.
Table of Contents
This appendix and the previous one contain information regarding the modules that
can be found in the contrib
directory of the
PostgreSQL distribution. See Appendix F for
@@ -12,4 +12,4 @@
the bin
directory of the
PostgreSQL installation and can be used like any
other program.
The spi module provides several workable examples of using the Server Programming Interface (SPI) and triggers. While these functions are of @@ -78,4 +78,4 @@ time zone.
There is an example in moddatetime.example
.
Table of Contents
module_name
;
by way of
shared_preload_libraries. See the documentation of each
module for details.
Before you can do anything, you must initialize a database storage
area on disk. We call this a database cluster.
(The SQL standard uses the term catalog cluster.) A
@@ -200,4 +200,4 @@ postgres$ initdb -D /usr/local/pgsql/data
it. That approach might relieve the DBA from having to deal with some of
the idiosyncrasies of NFS, but of course the complexity of managing
remote storage then happens at other levels.
This chapter explains the interface between the core PostgreSQL system and custom WAL resource managers, which enable extensions to integrate directly with the WAL. @@ -78,4 +78,4 @@ extern void RegisterCustomRmgr(RmgrId rmid, RmgrData *rmgr); custom WAL records may exist in the system. Otherwise PostgreSQL will not be able to apply or decode the custom WAL records, which may prevent the server from starting. -
When a CustomScan
is executed, its execution state is
represented by a CustomScanState
, which is declared as
follows:
@@ -136,4 +136,4 @@ void (*ExplainCustomScan) (CustomScanState *node,
ScanState
, such as the target list and scan relation, will
be shown even without this callback, but the callback allows the display
of additional, private state.
A custom scan provider will typically add paths for a base relation by
setting the following hook, which is called after the core code has
generated all the access paths it can for the relation (except for
@@ -98,4 +98,4 @@ List *(*ReparameterizeCustomPathByChild) (PlannerInfo *root,
reparameterize_path_by_child
,
adjust_appendrel_attrs
or
adjust_appendrel_attrs_multilevel
as required.
A custom scan is represented in a finished plan tree using the following structure:
@@ -64,4 +64,4 @@ Node *(*CreateCustomScanState) (CustomScan *cscan); stage; afterExecInitCustomScan
performs basic initialization, theBeginCustomScan
callback will be invoked to give the custom scan provider a chance to do whatever else is needed. -
Table of Contents
PostgreSQL supports a set of experimental facilities which are intended to allow extension modules to add new scan types to the system. Unlike a foreign data wrapper, which is only @@ -18,4 +18,4 @@ Finally, it must be possible to execute the plan and generate the same results that would have been generated for any other access path targeting the same relation. -
Database roles are conceptually completely separate from operating system users. In practice it might be convenient to maintain a correspondence, but this is not required. Database roles @@ -67,4 +67,4 @@ SELECT rolname FROM pg_roles; identity determines the set of privileges available to a connected client, it is important to carefully configure privileges when setting up a multiuser environment. -
The bytea
data type allows storage of binary strings;
see Table 8.6.
Table 8.6. Binary Data Types
Name | Storage Size | Description |
---|---|---|
bytea | 1 or 4 bytes plus the actual binary string | variable-length binary string |
@@ -129,4 +129,4 @@ SELECT 'abc \153\154\155 \052\251\124'::bytea;
unescaping bytea
strings. For example, you might also
have to escape line feeds and carriage returns if your interface
automatically translates these.
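For instance, the two input formats can be compared directly; this sketch assumes the default bytea_output setting of hex:

```sql
-- Hex format: "\x" followed by pairs of hex digits.
SELECT '\xDEADBEEF'::bytea;

-- Escape format: octal escapes for non-printable octets.
SELECT '\000'::bytea;

-- With bytea_output = 'hex' (the default), both values are
-- displayed in hex form on output.
```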
Bit strings are strings of 1's and 0's. They can be used to store
or visualize bit masks. There are two SQL bit types:
bit(
and n
)bit
@@ -46,4 +46,4 @@ SELECT * FROM test;
5 or 8 bytes overhead depending on the length of the string
(but long values may be compressed or moved out-of-line, as explained
in Section 8.3 for character strings).
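A short illustration of the two bit types (the table and values are hypothetical):

```sql
CREATE TABLE bit_demo (a bit(3), b bit varying(5));

-- bit(3) requires exactly three bits; bit varying(5) accepts up to five.
INSERT INTO bit_demo VALUES (B'101', B'00');

-- Casting zero-pads or truncates on the right to the target length.
SELECT B'10'::bit(3);   -- 100
```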
PostgreSQL provides the
standard SQL type boolean
;
see Table 8.19.
@@ -55,4 +55,4 @@ SELECT * FROM test1 WHERE a;
example NULL::boolean
. Conversely, the cast can be
omitted from a string-literal Boolean value in contexts where the parser
can deduce that the literal must be of type boolean
.
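A brief illustration of the accepted literals (the table name is hypothetical):

```sql
CREATE TABLE bool_demo (flag boolean);

-- All of these literals are accepted boolean inputs.
INSERT INTO bool_demo VALUES (TRUE), ('yes'), ('off'), ('1');

-- Output always uses t or f.
SELECT flag FROM bool_demo;
```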
Table 8.4. Character Types
Name | Description |
---|---|
character varying( , varchar( | variable-length with limit |
character( , char( | fixed-length, blank padded |
text | variable unlimited length |
Table 8.4 shows the general-purpose character types available in PostgreSQL. @@ -139,4 +139,4 @@ SELECT b, char_length(b) FROM test2; only uses one byte of storage, and therefore can store only a single ASCII character. It is used in the system catalogs as a simplistic enumeration type. -
Table 8.5. Special Character Types
Name | Storage Size | Description |
---|---|---|
"char" | 1 byte | single-byte internal type |
name | 64 bytes | internal type for object names |
PostgreSQL supports the full set of
SQL date and time types, shown in Table 8.9. The operations available
on these data types are described in
@@ -547,4 +547,4 @@ SELECT EXTRACT(days from '80 hours'::interval);
The output of the iso_8601
style matches the “format
with designators” described in section 4.4.3.2 of the
ISO 8601 standard.
Table 8.18. Interval Output Style Examples
Style Specification | Year-Month Interval | Day-Time Interval | Mixed Interval |
---|---|---|---|
sql_standard | 1-2 | 3 4:05:06 | -1-2 +3 -4:05:06 |
postgres | 1 year 2 mons | 3 days 04:05:06 | -1 year -2 mons +3 days -04:05:06 |
postgres_verbose | @ 1 year 2 mons | @ 3 days 4 hours 5 mins 6 secs | @ 1 year 2 mons -3 days 4 hours 5 mins 6 secs ago |
iso_8601 | P1Y2M | P3DT4H5M6S | P-1Y-2M3DT-4H-5M-6S |
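The output style can be switched per session with SET intervalstyle, for example:

```sql
-- Select an interval output style for the current session.
SET intervalstyle = 'iso_8601';
SELECT INTERVAL '1 year 2 months 3 days 4 hours 5 minutes 6 seconds';
-- P1Y2M3DT4H5M6S

SET intervalstyle = 'postgres_verbose';
SELECT INTERVAL '1 year 2 months';
-- @ 1 year 2 mons
```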
Enumerated (enum) types are data types that
comprise a static, ordered set of values.
They are equivalent to the enum
@@ -112,4 +112,4 @@ SELECT person.name, holidays.num_weeks FROM person, holidays
kept in the system catalog
pg_enum
.
Querying this catalog directly can be useful.
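A compact example of defining and using an enum type (the table here is a hypothetical stand-in for the chapter's person example):

```sql
-- Define a static, ordered set of values.
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');

CREATE TABLE person_demo (name text, current_mood mood);
INSERT INTO person_demo VALUES ('Moe', 'happy');

-- Comparisons follow the order in which the values were declared.
SELECT * FROM person_demo WHERE current_mood > 'sad';
```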
Geometric data types represent two-dimensional spatial objects. Table 8.20 shows the geometric types available in PostgreSQL. @@ -149,4 +149,4 @@ circle.
Circles are output using the first syntax.
JSON data types are for storing JSON (JavaScript Object Notation)
data, as specified in RFC
7159. Such data can also be stored as text
, but
@@ -727,4 +727,4 @@ UPDATE table_name SET jsonb_field[1]['a'] = '1';
For this purpose, the term “value” includes array elements,
though JSON terminology sometimes considers array elements distinct
from values within objects.
The money
type stores a currency amount with a fixed
fractional precision; see Table 8.3. The fractional precision is
determined by the database's lc_monetary setting.
@@ -41,4 +41,4 @@ SELECT '52093.89'::money::numeric::float8;
When a money
value is divided by another money
value, the result is double precision
(i.e., a pure number,
not money); the currency units cancel each other out in the division.
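The division behavior can be seen directly (assuming an lc_monetary locale with two fractional digits):

```sql
-- money / money yields double precision: the units cancel.
SELECT '100.00'::money / '25.00'::money;   -- 4

-- money / number stays money.
SELECT '100.00'::money / 4;
```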
PostgreSQL offers data types to store IPv4, IPv6, and MAC addresses, as shown in Table 8.21. It is better to use these types instead of plain text types to store @@ -129,4 +129,4 @@ SELECT macaddr8_set7bit('08:00:2b:01:02:03');
Object identifiers (OIDs) are used internally by PostgreSQL as primary keys for various system tables. @@ -163,4 +163,4 @@ WHERE ... physical location of the row within its table.
(The system columns are further explained in Section 5.5.)
pg_lsn
Type
The pg_lsn
data type can be used to store LSN (Log Sequence
Number) data which is a pointer to a location in the WAL. This type is a
representation of XLogRecPtr
and an internal system type of
@@ -19,4 +19,4 @@
the calculated LSN should be in the range of pg_lsn
type,
i.e., between 0/0
and
FFFFFFFF/FFFFFFFF
.
The PostgreSQL type system contains a
number of special-purpose entries that are collectively called
pseudo-types. A pseudo-type cannot be used as a
@@ -56,4 +56,4 @@
follow this coding rule: do not create any function that is
declared to return internal
unless it has at least one
internal
argument.
PostgreSQL provides two data types that are designed to support full text search, which is the activity of searching through a collection of natural-language documents @@ -193,4 +193,4 @@ SELECT to_tsvector( 'postgraduate' ), to_tsquery( 'postgres:*' ); 'postgradu':1 | 'postgr':*
which will match the stemmed form of postgraduate
.
The data type uuid
stores Universally Unique Identifiers
(UUID) as defined by RFC 4122,
ISO/IEC 9834-8:2005, and related standards.
@@ -36,4 +36,4 @@ a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11
See Section 9.14 for how to generate a UUID in PostgreSQL.
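For example, gen_random_uuid() (a built-in function in recent PostgreSQL releases) produces a version-4 UUID:

```sql
-- Generate a random (version 4) UUID.
SELECT gen_random_uuid();

-- uuid input accepts the standard hyphenated form in either case.
SELECT 'A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11'::uuid;
```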
The xml
data type can be used to store XML data. Its
advantage over storing XML data in a text
field is that it
checks the input values for well-formedness, and there are support
@@ -148,4 +148,4 @@ SET xmloption TO { DOCUMENT | CONTENT };
up full-document searches of XML data. The necessary
preprocessing support is, however, not yet available in the PostgreSQL
distribution.
Table of Contents
pg_lsn
TypePostgreSQL has a rich set of native data types available to users. Users can add new types to PostgreSQL using the CREATE TYPE command. @@ -33,4 +33,4 @@ Some of the input and output functions are not invertible, i.e., the result of an output function might lose accuracy when compared to the original input. -
Table of Contents
PostgreSQL uses an internal heuristic parser for all date/time input support. Dates and times are input as strings, and are broken up into distinct fields with a preliminary @@ -12,4 +12,4 @@ This appendix includes information on the content of these lookup tables and describes the steps used by the parser to decode dates and times. -
Since timezone abbreviations are not well standardized,
PostgreSQL provides a means to customize
the set of abbreviations accepted by the server. The
@@ -95,4 +95,4 @@
If you modify files in .../share/timezonesets/
,
it is up to you to make backups — a normal database dump
will not include this directory.
Date/time input strings are decoded using the following procedure.
Break the input string into tokens and categorize each token as
@@ -71,4 +71,4 @@
Gregorian years AD 1–99 can be entered by using 4 digits with leading
zeros (e.g., 0099
is AD 99).
Ordinarily, if a date/time string is syntactically valid but contains out-of-range field values, an error will be thrown. For example, input specifying the 31st of February will be rejected. @@ -59,4 +59,4 @@ abbreviation that corresponds to a fixed UTC offset. The rule just given applies only when it is necessary to infer a UTC offset for a time zone in which the offset varies. -
Table B.1 shows the tokens that are recognized as names of months.
Table B.1. Month Names
Month | Abbreviations |
---|---|
January | Jan |
February | Feb |
March | Mar |
April | Apr |
May | |
June | Jun |
July | Jul |
August | Aug |
September | Sep, Sept |
October | Oct |
November | Nov |
December | Dec |
@@ -8,4 +8,4 @@
Table B.2. Day of the Week Names
Day | Abbreviations |
---|---|
Sunday | Sun |
Monday | Mon |
Tuesday | Tue, Tues |
Wednesday | Wed, Weds |
Thursday | Thu, Thur, Thurs |
Friday | Fri |
Saturday | Sat |
Table B.3 shows the tokens that serve various modifier purposes.
Table B.3. Date/Time Field Modifiers
Identifier | Description |
---|---|
AM | Time is before 12:00 |
AT | Ignored |
JULIAN , JD , J | Next field is Julian Date |
ON | Ignored |
PM | Time is on or after 12:00 |
T | Next field is time |
PostgreSQL can accept time zone specifications
that are written according to the POSIX standard's rules
for the TZ
environment
@@ -132,4 +132,4 @@
zone abbreviation(s). For example, SET TIMEZONE TO
FOOBAR0
will work, leaving the system effectively using a
rather peculiar abbreviation for UTC.
The SQL standard states that “Within the definition of a
‘datetime literal’, the ‘datetime
values’ are constrained by the natural rules for dates and
@@ -84,4 +84,4 @@ $ cal 9 1752
The People's Republic of China uses the Gregorian calendar
for civil purposes. The Chinese calendar is used for determining
festivals.
See also postgres_fdw, which provides roughly the same functionality using a more modern and standards-compliant infrastructure.
When you create a table and you realize that you made a mistake, or the requirements of the application change, you can drop the table and create it again. But this is not a convenient option if @@ -153,4 +153,4 @@ ALTER TABLE products RENAME COLUMN product_no TO product_number;
ALTER TABLE products RENAME TO items;
A table in a relational database is much like a table on paper: It consists of rows and columns. The number and order of the columns is fixed, and each column has a name. The number of rows is @@ -98,4 +98,4 @@ DROP TABLE products; security, or convenience. If you are eager to fill your tables with data now you can skip ahead to Chapter 6 and read the rest of this chapter later. -
Data types are a way to limit the kind of data that can be stored
in a table. For many applications, however, the constraint they
provide is too coarse. For example, a column containing a product
@@ -573,16 +573,16 @@ CREATE TABLE posts (
referencing column(s) as NOT NULL
.
A foreign key must reference columns that either are a primary key or
form a unique constraint, or are columns from a non-partial unique index.
This means that the referenced columns always have an index to allow
efficient lookups on whether a referencing row has a match. Since a
DELETE of a row from the referenced table or an
UPDATE of a referenced column will require a scan of
the referencing table for rows matching the old value, it is often a good
idea to index the referencing columns too. Because this is not always
needed, and there are many choices available on how to index, the
declaration of a foreign key constraint does not automatically create an
index on the referencing columns.
More information about updating and deleting data is in Chapter 6. Also see the description of foreign key constraint syntax in the reference documentation for @@ -604,4 +604,4 @@ CREATE TABLE circles (
Adding an exclusion constraint will automatically create an index of the type specified in the constraint declaration.
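Since the referencing side of a foreign key is not indexed automatically, it is common to create such an index by hand. A sketch using hypothetical table names:

```sql
CREATE TABLE products (
    product_no integer PRIMARY KEY,
    name text
);

CREATE TABLE orders (
    order_id integer PRIMARY KEY,
    product_no integer REFERENCES products (product_no)
);

-- Not created automatically: an index on the referencing column,
-- which speeds up the scans triggered by DELETE/UPDATE on products.
CREATE INDEX orders_product_no_idx ON orders (product_no);
```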
A column can be assigned a default value. When a new row is created and no values are specified for some of the columns, those columns will be filled with their respective default values. A @@ -46,4 +46,4 @@ CREATE TABLE products ( );
The SERIAL
shorthand is discussed further in Section 8.1.4.
When you create complex database structures involving many tables
with foreign key constraints, views, triggers, functions, etc. you
implicitly create a net of dependencies between the objects.
@@ -96,4 +96,4 @@ END;
then the function's dependency on the my_colors
table will be known and enforced by DROP
.
PostgreSQL implements portions of the SQL/MED specification, allowing you to access data that resides outside PostgreSQL using regular SQL queries. Such data is referred to as @@ -38,4 +38,4 @@ CREATE USER MAPPING, CREATE FOREIGN TABLE, and IMPORT FOREIGN SCHEMA. -
A generated column is a special column that is always computed from other
columns. Thus, it is for columns what a view is for tables. There are two
kinds of generated columns: stored and virtual. A stored generated column
@@ -82,4 +82,4 @@ CREATE TABLE people (
generated columns. But conversely, it is not allowed to access
generated columns in BEFORE
triggers.
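A minimal stored generated column looks like this (column names are assumptions, echoing the people example referenced above):

```sql
CREATE TABLE people_demo (
    height_cm numeric,
    -- Stored generated column: computed on write, occupies storage,
    -- and cannot be inserted into or updated directly.
    height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
);

INSERT INTO people_demo (height_cm) VALUES (180);
SELECT height_in FROM people_demo;
```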
PostgreSQL implements table inheritance, which can be a useful tool for database designers. (SQL:1999 and later define a type inheritance feature, which differs in many @@ -286,4 +286,4 @@ VALUES ('Albany', NULL, NULL, 'NY'); implemented for declarative partitioning. Considerable care is needed in deciding whether partitioning with legacy inheritance is useful for your application. -
+
Tables are the central objects in a relational database structure, because they hold your data. But they are not the only objects that exist in a database. Many other kinds of objects can be @@ -17,4 +17,4 @@
Detailed information on these topics appears in Part V.
PostgreSQL supports basic table partitioning. This section describes why and how to implement partitioning as part of your database design. @@ -368,7 +368,7 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02
CREATE INDEX measurement_usls_idx ON ONLY measurement (unitsales); -CREATE INDEX measurement_usls_200602_idx +CREATE INDEX CONCURRENTLY measurement_usls_200602_idx ON measurement_y2006m02 (unitsales); ALTER INDEX measurement_usls_idx ATTACH PARTITION measurement_usls_200602_idx; @@ -449,7 +449,9 @@ ALTER INDEX measurement_city_id_logdate_key UsingONLY
to add or drop a constraint on only the partitioned table is supported as long as there are no partitions. Once partitions exist, usingONLY
- will result in an error. Instead, constraints on the partitions + will result in an error for any constraints other than +UNIQUE
andPRIMARY KEY
. + Instead, constraints on the partitions themselves can be added and (if they are not present in the parent table) dropped.
@@ -989,4 +991,4 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; painfully slow. Simulations of the intended workload are often beneficial for optimizing the partitioning strategy. Never just assume that more partitions are better than fewer partitions, nor vice-versa. -
When an object is created, it is assigned an owner. The owner is normally the role that executed the creation statement. For most kinds of objects, the initial state is that only the owner @@ -249,9 +249,16 @@ REVOKE ALL ON accounts FROM PUBLIC;
Table 5.2. Summary of Access Privileges
Object Type | All Privileges | Default PUBLIC Privileges | psql Command |
---|---|---|---|
DATABASE | CTc | Tc | \l |
DOMAIN | U | U | \dD+ |
FUNCTION or PROCEDURE | X | X | \df+ |
FOREIGN DATA WRAPPER | U | none | \dew+ |
FOREIGN SERVER | U | none | \des+ |
LANGUAGE | U | U | \dL+ |
LARGE OBJECT | rw | none | \dl+ |
PARAMETER | sA | none | \dconfig+ |
SCHEMA | UC | none | \dn+ |
SEQUENCE | rwU | none | \dp |
TABLE (and table-like objects) | arwdDxt | none | \dp |
Table column | arwx | none | \dp |
TABLESPACE | C | none | \db+ |
TYPE | U | U | \dT+ |
The privileges that have been granted for a particular object are
displayed as a list of aclitem
entries, each having the format:

grantee=privilege-abbreviation[*].../grantor

Each aclitem
lists all the permissions of one grantee that
have been granted by a particular grantor. Specific privileges are
represented by one-letter abbreviations from
Table 5.1, with *
appended if the privilege was granted with grant option. For example,
calvin=r*w/hobbes
specifies that the role
calvin
has the privilege
SELECT
(r
) with grant option
@@ -304,4 +311,4 @@ GRANT SELECT (col1), UPDATE (col1) ON mytable TO miriam_rw;
Notice that the owner's implicit grant options are not marked in the
access privileges display. A *
will appear only when
grant options have been explicitly granted to someone.
In addition to the SQL-standard privilege system available through GRANT, tables can have row security policies that restrict, @@ -379,4 +379,4 @@ SELECT * FROM information WHERE group_id = 2 FOR UPDATE;
For additional details see CREATE POLICY and ALTER TABLE.
A PostgreSQL database cluster contains one or more named databases. Roles and a few other object types are shared across the entire cluster. A client connection to the server @@ -326,4 +326,4 @@ REVOKE CREATE ON SCHEMA public FROM PUBLIC; limited) cross-database access. If you need to work with those systems, then maximum portability would be achieved by not using schemas at all. -
+
Every table has several system columns that are implicitly defined by the system. Therefore, these names cannot be used as names of user-defined columns. (Note that these @@ -55,4 +55,4 @@ SQL commands, not the number of rows processed. Also, only commands that actually modify the database contents will consume a command identifier. -
Table of Contents
This chapter covers how one creates the database structures that will hold one's data. In a relational database, the raw data is stored in tables, so the majority of this chapter is devoted to @@ -10,4 +10,4 @@ we will briefly look at other features that affect the data storage, such as inheritance, table partitioning, views, functions, and triggers. -
PostgreSQL 13 and below used the term “Default Roles”. However, as these roles cannot actually be changed and are installed as part of the system at initialization time, the more appropriate term is “Predefined Roles”. See Section 22.5 for current documentation regarding Predefined Roles, and the release notes for PostgreSQL 14 for details on this change.
dict_int
is an example of an add-on dictionary template
for full-text search. The motivation for this example dictionary is to
control the indexing of integers (signed and unsigned), allowing such
@@ -59,4 +59,4 @@ ALTER TEXT SEARCH CONFIGURATION english
ALTER MAPPING FOR int, uint WITH intdict;
dict_xsyn
(Extended Synonym Dictionary) is an example of an
add-on dictionary template for full-text search. This dictionary type
replaces words with groups of their synonyms, and so makes it possible to
@@ -94,4 +94,4 @@ ALTER TEXT SEARCH CONFIGURATION english
ALTER MAPPING FOR word, asciiword WITH xsyn, english_stem;
Shared disk failover avoids synchronization overhead by having only one copy of the database. It uses a single disk array that is shared by multiple servers. If the main database server fails, the standby server @@ -134,4 +134,4 @@ taken PostgreSQL and created commercial closed-source solutions with unique failover, replication, and load balancing capabilities. These are not discussed here. -
The most important disk monitoring task of a database administrator is to make sure the disk doesn't become full. A filled data disk will not result in data corruption, but it might prevent useful activity @@ -17,4 +17,4 @@ will naturally be subject to whatever quota is placed on the user the server runs as. Exceeding the quota will have the same bad effects as running out of disk space entirely. -
Table of Contents
This chapter discusses how to monitor the disk usage of a PostgreSQL database system. -
So far we have explained how to add data to tables and how to change data. What remains is to discuss how to remove data that is no longer needed. Just as adding data is only possible in whole @@ -25,4 +25,4 @@ DELETE FROM products WHERE price = 10; DELETE FROM products;
then all rows in the table will be deleted! Caveat programmer.
When a table is created, it contains no data. The first thing to do before a database can be of much use is to insert data. Data is inserted one row at a time. You can also insert more than one row @@ -78,4 +78,4 @@ INSERT INTO products (product_no, name, price) command, but is more efficient. Refer to Section 14.4 for more information on improving bulk loading performance. -
Sometimes it is useful to obtain data from modified rows while they are
being manipulated. The INSERT
, UPDATE
,
and DELETE
commands all have an
@@ -50,4 +50,4 @@ DELETE FROM products
the data available to RETURNING
is the row as modified by
the triggers. Thus, inspecting columns computed by triggers is another
common use-case for RETURNING
.
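A typical use is retrieving a serial key generated by an INSERT; table and column names here are illustrative:

```sql
CREATE TABLE users_demo (
    id serial PRIMARY KEY,
    name text
);

-- RETURNING makes the generated key available without a second query.
INSERT INTO users_demo (name) VALUES ('Alice') RETURNING id;

-- It works for UPDATE and DELETE as well, returning the modified rows.
UPDATE users_demo SET name = 'Alicia' WHERE id = 1 RETURNING *;
```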
The modification of data that is already in the database is referred to as updating. You can update individual rows, all the rows in a table, or a subset of all rows. Each column can be @@ -58,4 +58,4 @@ UPDATE products SET price = price * 1.10;
UPDATE mytable SET a = 5, b = 3, c = 1 WHERE a > 0;
Table of Contents
The previous chapter discussed how to create tables and other structures to hold your data. Now it is time to fill the tables with data. This chapter covers how to insert, update, and delete table data. The chapter after this will finally explain how to extract your long-lost data from the database. -
The documentation sources are most conveniently modified with an editor that has a mode for editing XML, and even more so if it has some awareness of XML schema languages so that it can know about @@ -20,4 +20,4 @@
src/tools/editors/emacs.samples
contains
recommended settings for this mode.
Once you have everything set up, change to the directory
doc/src/sgml
and run one of the commands
described in the following subsections to build the
@@ -88,4 +88,4 @@ ADDITIONAL_FLAGS='-Xmx1500m'
doc/src/sgml$
make check
The documentation sources are written in DocBook, which is a markup language defined in XML. In what @@ -20,4 +20,4 @@ FreeBSD Documentation Project also uses DocBook and has some good information, including a number of style guidelines that might be worth considering. -
Reference pages should follow a standard layout. This allows users to find the desired information more quickly, and it also encourages writers to document all relevant aspects of a command. @@ -86,4 +86,4 @@ database system it is compatible. The See Also section of SQL commands should list SQL commands before cross-references to programs. -
The following tools are used to process the documentation. Some might be optional, as noted.

checking for dbtoepub... dbtoepub
./configure ... XMLLINT=/opt/local/bin/xmllint ...
Table of Contents
PostgreSQL has four primary documentation formats. HTML documentation and man pages are part of a standard distribution and are installed by default. PDF format documentation is available separately for download.
A domain is a user-defined data type that is based on another underlying type. Optionally, it can have constraints that restrict its valid values to a subset of what the underlying type would allow. For example:

INSERT INTO mytable VALUES(-1); -- fails

When a value is converted or assigned to the domain type, the domain's constraints will be checked.
For additional information see CREATE DOMAIN.
PostgreSQL provides facilities to support
dynamic tracing of the database server. This allows an external
utility to be called at specific points in the code and thereby trace
if (TRACE_POSTGRESQL_TRANSACTION_START_ENABLED())
Each trace macro has a corresponding ENABLED
macro.
The earthdistance
module provides two different approaches to
calculating great circle distances on the surface of the Earth. The one
described first depends on the cube
module.
If you work directly with latitude and longitude,
you need to be careful about the edge conditions near the poles
and near +/- 180 degrees of longitude. The cube-based
representation avoids these discontinuities.
Any SQL command can be run from within an embedded SQL application. Below are some examples of how to do that.
EXEC SQL DEALLOCATE PREPARE name
;
see PREPARE. Also
see Section 36.5 for more details about using
placeholders and input parameters.
An embedded SQL program consists of code written in an ordinary
programming language, in this case C, mixed with SQL commands in
specially marked sections. To build the program, the source code (*.pgc
)
EXEC SQL ...;
Of course, the C part of the program follows C quoting rules.
The following sections explain all the embedded SQL statements.
This section describes how to open, close, and switch database connections.
EXEC SQL DISCONNECT [connec
It is good style for an application to always explicitly disconnect from every connection it opened.
ECPG has some limited support for C++ applications. This section describes some caveats.
c++ -c test_cpp.cpp -o test_cpp.o
c++ test_cpp.o test_mod.o -lecpg -o test_cpp
An SQL descriptor area is a more sophisticated method for processing
the result of a SELECT
, FETCH
or
a DESCRIBE
statement. An SQL descriptor area groups
tup_fetched = 3253694 (type: 9)
tup_inserted = 0 (type: 9)
tup_updated = 0 (type: 9)
tup_deleted = 0 (type: 9)
This section explains how ECPG works internally. This information can occasionally be useful to help users understand how to use ECPG.

ECPGdo(__LINE__, NULL, "SELECT res FROM mytable WHERE index = ? ",

(The indentation here is added for readability and not something the preprocessor does.)
In many cases, the particular SQL statements that an application has to execute are known at the time the application is written. In some cases, however, the SQL statements are composed at run time.

EXEC SQL CLOSE cursor1;
EXEC SQL COMMIT;
EXEC SQL DISCONNECT ALL;
This section describes how you can handle exceptional conditions and warnings in an embedded SQL program. There are two nonexclusive facilities for this.

while (1)
ECPG_WARNING_PORTAL_EXISTS
An existing cursor name was specified. (SQLSTATE 42P03)
ecpg can be run in a so-called Informix compatibility mode. If
this mode is active, it tries to behave as if it were the Informix
precompiler for Informix E/SQL. Generally speaking, this will allow you to use
the dollar sign instead of the EXEC SQL primitive to introduce embedded SQL commands.

risnull(CINTTYPE, (char *) &i);

characters it cannot parse. Internally it is defined as -1264 (the
Informix definition).
The libecpg
library primarily contains
“hidden” functions that are used to implement the
functionality expressed by the embedded SQL commands. But there
returns true if you are connected to a database and false if not.
connection_name
can be NULL
if a single connection is being used.
Large objects are not directly supported by ECPG, but ECPG
application can manipulate large objects through the libpq large
object functions, obtaining the necessary PGconn
main(void)
EXEC SQL DISCONNECT ALL;
return 0;
}
ecpg
can be run in a so-called Oracle
compatibility mode. If this mode is active, it tries to
behave as if it were Oracle Pro*C.
Set the null indicator to -1
when character
arrays receive empty character string types
The pgtypes library maps PostgreSQL database
types to C equivalents that can be used in C programs. It also offers
functions to do basic calculations with those types within C, i.e., without
void PGTYPESdecimal_free(decimal *var);
errno != 0
after each call to
PGTYPEStimestamp_from_asc
.
Several preprocessor directives are available that modify how
the ecpg
preprocessor parses and processes a
file.
EXEC SQL else;
EXEC SQL SET TIMEZONE TO 'GMT';
EXEC SQL endif;
Now that you have an idea how to form embedded SQL C programs, you probably want to know how to compile them. Before compiling you run the file through the embedded SQL C preprocessor, which converts the SQL statements into special function calls.

ECPG = ecpg

The ecpg library is thread-safe by default. However, you might need to use some threading command-line options to compile your client code.
ALLOCATE DESCRIPTOR — allocate an SQL descriptor area
Synopsis
ALLOCATE DESCRIPTORname
Description
ALLOCATE DESCRIPTOR
allocates a new named SQL descriptor area.

EXEC SQL ALLOCATE DESCRIPTOR mydesc;

Compatibility
ALLOCATE DESCRIPTOR
is specified in the SQL standard.
This section describes all SQL commands that are specific to embedded SQL. Also refer to the SQL commands listed in SQL Commands, which can also be used in embedded SQL, unless stated otherwise.
CONNECT — establish a database connection
Synopsis
CONNECT TOconnection_target
[ ASconnection_name
] [ USERconnection_user
]
CONNECT TO DEFAULT
CONNECT connection_user
EXEC SQL END DECLARE SECTION;

CONNECT is specified in the SQL standard, but the format of the connection parameters is implementation-specific.
See Also
DISCONNECT, SET CONNECTION
DEALLOCATE DESCRIPTOR — deallocate an SQL descriptor area
Synopsis
DEALLOCATE DESCRIPTORname
Description
DEALLOCATE DESCRIPTOR
deallocates a named SQL descriptor area.

EXEC SQL DEALLOCATE DESCRIPTOR mydesc;

Compatibility
DEALLOCATE DESCRIPTOR
is specified in the SQL standard.
See Also
ALLOCATE DESCRIPTOR, GET DESCRIPTOR, SET DESCRIPTOR
DECLARE STATEMENT — declare SQL statement identifier
Synopsis
EXEC SQL [ ATconnection_name
] DECLAREstatement_name
STATEMENTDescription
DECLARE STATEMENT
declares an SQL statement identifier.

EXEC SQL CLOSE cursor_name;

Compatibility
DECLARE STATEMENT
is an extension of the SQL standard, but is supported by several well-known DBMSs.
DECLARE — define a cursor
Synopsis
DECLAREcursor_name
[ BINARY ] [ ASENSITIVE | INSENSITIVE ] [ [ NO ] SCROLL ] CURSOR [ { WITH | WITHOUT } HOLD ] FORprepared_name
DECLAREcursor_name
[ BINARY ] [ ASENSITIVE | INSENSITIVE ] [ [ NO ] SCROLL ] CURSOR [ { WITH | WITHOUT } HOLD ] FORquery
Description
EXEC SQL DECLARE cur1 CURSOR FOR stmt1;
Compatibility
DECLARE
is specified in the SQL standard.
DESCRIBE — obtain information about a prepared statement or result set
Synopsis
DESCRIBE [ OUTPUT ]prepared_name
USING [ SQL ] DESCRIPTORdescriptor_name
DESCRIBE [ OUTPUT ]prepared_name
INTO [ SQL ] DESCRIPTORdescriptor_name
DESCRIBE [ OUTPUT ]prepared_name
INTOsqlda_name
EXEC SQL GET DESCRIPTOR mydesc VALUE 1 :charvar = NAME;
EXEC SQL DEALLOCATE DESCRIPTOR mydesc;

Compatibility
DESCRIBE
is specified in the SQL standard.
See Also
ALLOCATE DESCRIPTOR, GET DESCRIPTOR
DISCONNECT — terminate a database connection
Synopsis
DISCONNECTconnection_name
DISCONNECT [ CURRENT ]
DISCONNECT ALL

main(void)
}

Compatibility
DISCONNECT
is specified in the SQL standard.
See Also
CONNECT, SET CONNECTION
EXECUTE IMMEDIATE — dynamically prepare and execute a statement
Synopsis
EXECUTE IMMEDIATEstring
Description
EXECUTE IMMEDIATE
immediately prepares and executes a dynamically specified SQL statement.

EXEC SQL EXECUTE IMMEDIATE :command;
Compatibility
EXECUTE IMMEDIATE
is specified in the SQL standard.
GET DESCRIPTOR — get information from an SQL descriptor area
Synopsis
GET DESCRIPTORdescriptor_name
:cvariable
=descriptor_header_item
[, ... ] GET DESCRIPTORdescriptor_name
VALUEcolumn_number
:cvariable
=descriptor_item
[, ... ]Description
d_data = testdb
Compatibility
GET DESCRIPTOR
is specified in the SQL standard.
See Also
ALLOCATE DESCRIPTOR, SET DESCRIPTOR
OPEN — open a dynamic cursor
Synopsis
OPENcursor_name
OPENcursor_name
USINGvalue
[, ... ] OPENcursor_name
USING SQL DESCRIPTORdescriptor_name
EXEC SQL OPEN c1 USING SQL DESCRIPTOR mydesc;
EXEC SQL OPEN :curname1;

Compatibility
OPEN
is specified in the SQL standard.
PREPARE — prepare a statement for execution
Synopsis
PREPAREprepared_name
FROMstring
Description
PREPARE
prepares a statement dynamically specified as a string for execution.

EXEC SQL PREPARE foo FROM :stmt;
EXEC SQL EXECUTE foo USING SQL DESCRIPTOR indesc INTO SQL DESCRIPTOR outdesc;

Compatibility
PREPARE
is specified in the SQL standard.
See Also
EXECUTE
SET AUTOCOMMIT — set the autocommit behavior of the current session
Synopsis
SET AUTOCOMMIT { = | TO } { ON | OFF }Description
SET AUTOCOMMIT
sets the autocommit behavior of the current database session.

SET AUTOCOMMIT { = | TO } { ON | OFF }

where each individual statement is committed implicitly.

Compatibility
SET AUTOCOMMIT
is an extension of PostgreSQL ECPG.
SET CONNECTION — select a database connection
Synopsis
SET CONNECTION [ TO | = ]connection_name
Description
SET CONNECTION
sets the “current” database connection, which is the one that all commands use unless overridden.

EXEC SQL SET CONNECTION TO con2;
EXEC SQL SET CONNECTION = con1;

Compatibility
SET CONNECTION
is specified in the SQL standard.
See Also
CONNECT, DISCONNECT
SET DESCRIPTOR — set information in an SQL descriptor area
Synopsis
SET DESCRIPTORdescriptor_name
descriptor_header_item
=value
[, ... ] SET DESCRIPTORdescriptor_name
VALUEnumber
descriptor_item
=value
[, ...]Description
EXEC SQL SET DESCRIPTOR indesc VALUE 2 INDICATOR = :val1, DATA = 'some string';
EXEC SQL SET DESCRIPTOR indesc VALUE 2 INDICATOR = :val2null, DATA = :val2;
Compatibility
SET DESCRIPTOR
is specified in the SQL standard.
See Also
ALLOCATE DESCRIPTOR, GET DESCRIPTOR
TYPE — define a new data type
Synopsis
TYPEtype_name
ISctype
Description
The
TYPE
command defines a new C type. It is equivalent to putting a typedef into a declare section.

t_ind.i_ind = 0
Compatibility
The
TYPE
command is a PostgreSQL extension.
VAR — define a variable
Synopsis
VARvarname
ISctype
Description
The
VAR
command assigns a new C data type to a host variable.

exec sql end declare section;
EXEC SQL VAR a IS int;

Compatibility
The
VAR
command is a PostgreSQL extension.
WHENEVER — specify the action to be taken when an SQL statement causes a specific class condition to be raised
Synopsis
WHENEVER { NOT FOUND | SQLERROR | SQLWARNING }action
Description
Define a behavior which is called on the special cases (rows not found, SQL warnings or errors) in the result of SQL execution.

main(void)
Compatibility
WHENEVER
is specified in the SQL standard, but most of the actions are PostgreSQL extensions.
In Section 36.3 you saw how you can execute SQL
statements from an embedded SQL program. Some of those statements
only used fixed values and did not provide a way to insert
EXEC SQL SELECT b INTO :val :val_ind FROM test1;
input and output) for character string types as empty string and
for integer types as the lowest possible value for type (for
example, INT_MIN
for int
).
Table of Contents
This chapter describes the embedded SQL package
for PostgreSQL. It was written by
Linus Tolke (<linus@epact.se>
) and Michael Meskes
This documentation is quite incomplete. But since this
interface is standardized, additional information can be found in
many resources about SQL.
PostgreSQL offers encryption at several levels, and provides flexibility in protecting data from disclosure due to database server theft, unscrupulous administrators, and insecure networks.

With client-side encryption, the data never appears unencrypted on the database server: data is encrypted on the client before being sent to the server, and database results have to be decrypted on the client before being used.
All messages emitted by the PostgreSQL server are assigned five-character error codes that follow the SQL standard's conventions for “SQLSTATE” codes. Applications that do not recognize a specific error code can often still infer what to do from the error class.

Table A.1 lists all the error codes defined in PostgreSQL 15.6. (Some are not actually used at present, but are defined by the SQL standard.) The error classes are also shown. For each error class there is a “standard” error code having the last three characters 000.

As of PostgreSQL 9.3, complete coverage for this feature exists only for errors in SQLSTATE class 23 (integrity constraint violation), but this is likely to be expanded in future.
Table A.1. PostgreSQL Error Codes
Error Code | Condition Name |
---|---|
Class 00 — Successful Completion | |
00000 | successful_completion |
Class 01 — Warning | |
01000 | warning |
0100C | dynamic_result_sets_returned |
01008 | implicit_zero_bit_padding |
01003 | null_value_eliminated_in_set_function |
01007 | privilege_not_granted |
01006 | privilege_not_revoked |
01004 | string_data_right_truncation |
01P01 | deprecated_feature |
Class 02 — No Data (this is also a warning class per the SQL standard) | |
02000 | no_data |
02001 | no_additional_dynamic_result_sets_returned |
Class 03 — SQL Statement Not Yet Complete | |
03000 | sql_statement_not_yet_complete |
Class 08 — Connection Exception | |
08000 | connection_exception |
08003 | connection_does_not_exist |
08006 | connection_failure |
08001 | sqlclient_unable_to_establish_sqlconnection |
08004 | sqlserver_rejected_establishment_of_sqlconnection |
08007 | transaction_resolution_unknown |
08P01 | protocol_violation |
Class 09 — Triggered Action Exception | |
09000 | triggered_action_exception |
Class 0A — Feature Not Supported | |
0A000 | feature_not_supported |
Class 0B — Invalid Transaction Initiation | |
0B000 | invalid_transaction_initiation |
Class 0F — Locator Exception | |
0F000 | locator_exception |
0F001 | invalid_locator_specification |
Class 0L — Invalid Grantor | |
0L000 | invalid_grantor |
0LP01 | invalid_grant_operation |
Class 0P — Invalid Role Specification | |
0P000 | invalid_role_specification |
Class 0Z — Diagnostics Exception | |
0Z000 | diagnostics_exception |
0Z002 | stacked_diagnostics_accessed_without_active_handler |
Class 20 — Case Not Found | |
20000 | case_not_found |
Class 21 — Cardinality Violation | |
21000 | cardinality_violation |
Class 22 — Data Exception | |
22000 | data_exception |
2202E | array_subscript_error |
22021 | character_not_in_repertoire |
22008 | datetime_field_overflow |
22012 | division_by_zero |
22005 | error_in_assignment |
2200B | escape_character_conflict |
22022 | indicator_overflow |
22015 | interval_field_overflow |
2201E | invalid_argument_for_logarithm |
22014 | invalid_argument_for_ntile_function |
22016 | invalid_argument_for_nth_value_function |
2201F | invalid_argument_for_power_function |
2201G | invalid_argument_for_width_bucket_function |
22018 | invalid_character_value_for_cast |
22007 | invalid_datetime_format |
22019 | invalid_escape_character |
2200D | invalid_escape_octet |
22025 | invalid_escape_sequence |
22P06 | nonstandard_use_of_escape_character |
22010 | invalid_indicator_parameter_value |
22023 | invalid_parameter_value |
22013 | invalid_preceding_or_following_size |
2201B | invalid_regular_expression |
2201W | invalid_row_count_in_limit_clause |
2201X | invalid_row_count_in_result_offset_clause |
2202H | invalid_tablesample_argument |
2202G | invalid_tablesample_repeat |
22009 | invalid_time_zone_displacement_value |
2200C | invalid_use_of_escape_character |
2200G | most_specific_type_mismatch |
22004 | null_value_not_allowed |
22002 | null_value_no_indicator_parameter |
22003 | numeric_value_out_of_range |
2200H | sequence_generator_limit_exceeded |
22026 | string_data_length_mismatch |
22001 | string_data_right_truncation |
22011 | substring_error |
22027 | trim_error |
22024 | unterminated_c_string |
2200F | zero_length_character_string |
22P01 | floating_point_exception |
22P02 | invalid_text_representation |
22P03 | invalid_binary_representation |
22P04 | bad_copy_file_format |
22P05 | untranslatable_character |
2200L | not_an_xml_document |
2200M | invalid_xml_document |
2200N | invalid_xml_content |
2200S | invalid_xml_comment |
2200T | invalid_xml_processing_instruction |
22030 | duplicate_json_object_key_value |
22031 | invalid_argument_for_sql_json_datetime_function |
22032 | invalid_json_text |
22033 | invalid_sql_json_subscript |
22034 | more_than_one_sql_json_item |
22035 | no_sql_json_item |
22036 | non_numeric_sql_json_item |
22037 | non_unique_keys_in_a_json_object |
22038 | singleton_sql_json_item_required |
22039 | sql_json_array_not_found |
2203A | sql_json_member_not_found |
2203B | sql_json_number_not_found |
2203C | sql_json_object_not_found |
2203D | too_many_json_array_elements |
2203E | too_many_json_object_members |
2203F | sql_json_scalar_required |
2203G | sql_json_item_cannot_be_cast_to_target_type |
Class 23 — Integrity Constraint Violation | |
23000 | integrity_constraint_violation |
23001 | restrict_violation |
23502 | not_null_violation |
23503 | foreign_key_violation |
23505 | unique_violation |
23514 | check_violation |
23P01 | exclusion_violation |
Class 24 — Invalid Cursor State | |
24000 | invalid_cursor_state |
Class 25 — Invalid Transaction State | |
25000 | invalid_transaction_state |
25001 | active_sql_transaction |
25002 | branch_transaction_already_active |
25008 | held_cursor_requires_same_isolation_level |
25003 | inappropriate_access_mode_for_branch_transaction |
25004 | inappropriate_isolation_level_for_branch_transaction |
25005 | no_active_sql_transaction_for_branch_transaction |
25006 | read_only_sql_transaction |
25007 | schema_and_data_statement_mixing_not_supported |
25P01 | no_active_sql_transaction |
25P02 | in_failed_sql_transaction |
25P03 | idle_in_transaction_session_timeout |
Class 26 — Invalid SQL Statement Name | |
26000 | invalid_sql_statement_name |
Class 27 — Triggered Data Change Violation | |
27000 | triggered_data_change_violation |
Class 28 — Invalid Authorization Specification | |
28000 | invalid_authorization_specification |
28P01 | invalid_password |
Class 2B — Dependent Privilege Descriptors Still Exist | |
2B000 | dependent_privilege_descriptors_still_exist |
2BP01 | dependent_objects_still_exist |
Class 2D — Invalid Transaction Termination | |
2D000 | invalid_transaction_termination |
Class 2F — SQL Routine Exception | |
2F000 | sql_routine_exception |
2F005 | function_executed_no_return_statement |
2F002 | modifying_sql_data_not_permitted |
2F003 | prohibited_sql_statement_attempted |
2F004 | reading_sql_data_not_permitted |
Class 34 — Invalid Cursor Name | |
34000 | invalid_cursor_name |
Class 38 — External Routine Exception | |
38000 | external_routine_exception |
38001 | containing_sql_not_permitted |
38002 | modifying_sql_data_not_permitted |
38003 | prohibited_sql_statement_attempted |
38004 | reading_sql_data_not_permitted |
Class 39 — External Routine Invocation Exception | |
39000 | external_routine_invocation_exception |
39001 | invalid_sqlstate_returned |
39004 | null_value_not_allowed |
39P01 | trigger_protocol_violated |
39P02 | srf_protocol_violated |
39P03 | event_trigger_protocol_violated |
Class 3B — Savepoint Exception | |
3B000 | savepoint_exception |
3B001 | invalid_savepoint_specification |
Class 3D — Invalid Catalog Name | |
3D000 | invalid_catalog_name |
Class 3F — Invalid Schema Name | |
3F000 | invalid_schema_name |
Class 40 — Transaction Rollback | |
40000 | transaction_rollback |
40002 | transaction_integrity_constraint_violation |
40001 | serialization_failure |
40003 | statement_completion_unknown |
40P01 | deadlock_detected |
Class 42 — Syntax Error or Access Rule Violation | |
42000 | syntax_error_or_access_rule_violation |
42601 | syntax_error |
42501 | insufficient_privilege |
42846 | cannot_coerce |
42803 | grouping_error |
42P20 | windowing_error |
42P19 | invalid_recursion |
42830 | invalid_foreign_key |
42602 | invalid_name |
42622 | name_too_long |
42939 | reserved_name |
42804 | datatype_mismatch |
42P18 | indeterminate_datatype |
42P21 | collation_mismatch |
42P22 | indeterminate_collation |
42809 | wrong_object_type |
428C9 | generated_always |
42703 | undefined_column |
42883 | undefined_function |
42P01 | undefined_table |
42P02 | undefined_parameter |
42704 | undefined_object |
42701 | duplicate_column |
42P03 | duplicate_cursor |
42P04 | duplicate_database |
42723 | duplicate_function |
42P05 | duplicate_prepared_statement |
42P06 | duplicate_schema |
42P07 | duplicate_table |
42712 | duplicate_alias |
42710 | duplicate_object |
42702 | ambiguous_column |
42725 | ambiguous_function |
42P08 | ambiguous_parameter |
42P09 | ambiguous_alias |
42P10 | invalid_column_reference |
42611 | invalid_column_definition |
42P11 | invalid_cursor_definition |
42P12 | invalid_database_definition |
42P13 | invalid_function_definition |
42P14 | invalid_prepared_statement_definition |
42P15 | invalid_schema_definition |
42P16 | invalid_table_definition |
42P17 | invalid_object_definition |
Class 44 — WITH CHECK OPTION Violation | |
44000 | with_check_option_violation |
Class 53 — Insufficient Resources | |
53000 | insufficient_resources |
53100 | disk_full |
53200 | out_of_memory |
53300 | too_many_connections |
53400 | configuration_limit_exceeded |
Class 54 — Program Limit Exceeded | |
54000 | program_limit_exceeded |
54001 | statement_too_complex |
54011 | too_many_columns |
54023 | too_many_arguments |
Class 55 — Object Not In Prerequisite State | |
55000 | object_not_in_prerequisite_state |
55006 | object_in_use |
55P02 | cant_change_runtime_param |
55P03 | lock_not_available |
55P04 | unsafe_new_enum_value_usage |
Class 57 — Operator Intervention | |
57000 | operator_intervention |
57014 | query_canceled |
57P01 | admin_shutdown |
57P02 | crash_shutdown |
57P03 | cannot_connect_now |
57P04 | database_dropped |
57P05 | idle_session_timeout |
Class 58 — System Error (errors external to PostgreSQL itself) | |
58000 | system_error |
58030 | io_error |
58P01 | undefined_file |
58P02 | duplicate_file |
Class 72 — Snapshot Failure | |
72000 | snapshot_too_old |
Class F0 — Configuration File Error | |
F0000 | config_file_error |
F0001 | lock_file_exists |
Class HV — Foreign Data Wrapper Error (SQL/MED) | |
HV000 | fdw_error |
HV005 | fdw_column_name_not_found |
HV002 | fdw_dynamic_parameter_value_needed |
HV010 | fdw_function_sequence_error |
HV021 | fdw_inconsistent_descriptor_information |
HV024 | fdw_invalid_attribute_value |
HV007 | fdw_invalid_column_name |
HV008 | fdw_invalid_column_number |
HV004 | fdw_invalid_data_type |
HV006 | fdw_invalid_data_type_descriptors |
HV091 | fdw_invalid_descriptor_field_identifier |
HV00B | fdw_invalid_handle |
HV00C | fdw_invalid_option_index |
HV00D | fdw_invalid_option_name |
HV090 | fdw_invalid_string_length_or_buffer_length |
HV00A | fdw_invalid_string_format |
HV009 | fdw_invalid_use_of_null_pointer |
HV014 | fdw_too_many_handles |
HV001 | fdw_out_of_memory |
HV00P | fdw_no_schemas |
HV00J | fdw_option_name_not_found |
HV00K | fdw_reply_handle |
HV00Q | fdw_schema_not_found |
HV00R | fdw_table_not_found |
HV00L | fdw_unable_to_create_execution |
HV00M | fdw_unable_to_create_reply |
HV00N | fdw_unable_to_establish_connection |
Class P0 — PL/pgSQL Error | |
P0000 | plpgsql_error |
P0001 | raise_exception |
P0002 | no_data_found |
P0003 | too_many_rows |
P0004 | assert_failure |
Class XX — Internal Error | |
XX000 | internal_error |
XX001 | data_corrupted |
XX002 | index_corrupted |
42723 | duplicate_function |
42P05 | duplicate_prepared_statement |
42P06 | duplicate_schema |
42P07 | duplicate_table |
42712 | duplicate_alias |
42710 | duplicate_object |
42702 | ambiguous_column |
42725 | ambiguous_function |
42P08 | ambiguous_parameter |
42P09 | ambiguous_alias |
42P10 | invalid_column_reference |
42611 | invalid_column_definition |
42P11 | invalid_cursor_definition |
42P12 | invalid_database_definition |
42P13 | invalid_function_definition |
42P14 | invalid_prepared_statement_definition |
42P15 | invalid_schema_definition |
42P16 | invalid_table_definition |
42P17 | invalid_object_definition |
Class 44 — WITH CHECK OPTION Violation | |
44000 | with_check_option_violation |
Class 53 — Insufficient Resources | |
53000 | insufficient_resources |
53100 | disk_full |
53200 | out_of_memory |
53300 | too_many_connections |
53400 | configuration_limit_exceeded |
Class 54 — Program Limit Exceeded | |
54000 | program_limit_exceeded |
54001 | statement_too_complex |
54011 | too_many_columns |
54023 | too_many_arguments |
Class 55 — Object Not In Prerequisite State | |
55000 | object_not_in_prerequisite_state |
55006 | object_in_use |
55P02 | cant_change_runtime_param |
55P03 | lock_not_available |
55P04 | unsafe_new_enum_value_usage |
Class 57 — Operator Intervention | |
57000 | operator_intervention |
57014 | query_canceled |
57P01 | admin_shutdown |
57P02 | crash_shutdown |
57P03 | cannot_connect_now |
57P04 | database_dropped |
57P05 | idle_session_timeout |
Class 58 — System Error (errors external to PostgreSQL itself) | |
58000 | system_error |
58030 | io_error |
58P01 | undefined_file |
58P02 | duplicate_file |
Class 72 — Snapshot Failure | |
72000 | snapshot_too_old |
Class F0 — Configuration File Error | |
F0000 | config_file_error |
F0001 | lock_file_exists |
Class HV — Foreign Data Wrapper Error (SQL/MED) | |
HV000 | fdw_error |
HV005 | fdw_column_name_not_found |
HV002 | fdw_dynamic_parameter_value_needed |
HV010 | fdw_function_sequence_error |
HV021 | fdw_inconsistent_descriptor_information |
HV024 | fdw_invalid_attribute_value |
HV007 | fdw_invalid_column_name |
HV008 | fdw_invalid_column_number |
HV004 | fdw_invalid_data_type |
HV006 | fdw_invalid_data_type_descriptors |
HV091 | fdw_invalid_descriptor_field_identifier |
HV00B | fdw_invalid_handle |
HV00C | fdw_invalid_option_index |
HV00D | fdw_invalid_option_name |
HV090 | fdw_invalid_string_length_or_buffer_length |
HV00A | fdw_invalid_string_format |
HV009 | fdw_invalid_use_of_null_pointer |
HV014 | fdw_too_many_handles |
HV001 | fdw_out_of_memory |
HV00P | fdw_no_schemas |
HV00J | fdw_option_name_not_found |
HV00K | fdw_reply_handle |
HV00Q | fdw_schema_not_found |
HV00R | fdw_table_not_found |
HV00L | fdw_unable_to_create_execution |
HV00M | fdw_unable_to_create_reply |
HV00N | fdw_unable_to_establish_connection |
Class P0 — PL/pgSQL Error | |
P0000 | plpgsql_error |
P0001 | raise_exception |
P0002 | no_data_found |
P0003 | too_many_rows |
P0004 | assert_failure |
Class XX — Internal Error | |
XX000 | internal_error |
XX001 | data_corrupted |
XX002 | index_corrupted |
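Every SQLSTATE code in the table above is five characters, and the first two characters identify the error class (the class's generic code ends in "000"). This structure can be decoded mechanically; a minimal Python sketch, using a small illustrative subset of the class names from the table (not the full list):

```python
# Decode the class of a PostgreSQL SQLSTATE code. The first two
# characters identify the error class; a small subset of class names
# from the table above is included here for illustration.
SQLSTATE_CLASSES = {
    "22": "Data Exception",
    "23": "Integrity Constraint Violation",
    "40": "Transaction Rollback",
    "42": "Syntax Error or Access Rule Violation",
    "53": "Insufficient Resources",
    "XX": "Internal Error",
}

def sqlstate_class(code: str) -> str:
    """Return the class name for a five-character SQLSTATE code."""
    if len(code) != 5:
        raise ValueError("SQLSTATE codes are exactly five characters")
    return SQLSTATE_CLASSES.get(code[:2], "unknown class")

# unique_violation (23505) belongs to class 23,
# deadlock_detected (40P01) to class 40:
print(sqlstate_class("23505"))
print(sqlstate_class("40P01"))
```

This is why client code can test `code[:2]` to handle a whole family of errors (for example, all integrity constraint violations) with one branch.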
Error, warning, and log messages generated within the server code should be created using ereport, or its older cousin elog. The use of these functions is complex enough to require some explanation.

ereport(level, errmsg_internal("format string", ...));

The errno value is captured when error processing begins, so intervening routines will not affect it. That would not be true if you were to write strerror(errno) explicitly in errmsg's parameter list; accordingly, do not do so.
This style guide is offered in the hope of maintaining a consistent, user-friendly style throughout all the messages generated by PostgreSQL. Keep in mind that error message texts need to be translated into other languages. Follow the guidelines in Section 57.2.2 to avoid making life difficult for translators.
To register a Windows event log library with the operating system, issue this command:

To enable event logging in the database server, modify log_destination to include eventlog in postgresql.conf.
An event trigger fires whenever the event with which it is associated occurs in the database in which it is defined. Currently, the only supported events are ddl_command_start, ddl_command_end, table_rewrite, and sql_drop. An event trigger can be restricted so that it is fired only for particular commands which the user wishes to intercept. A common use of such triggers is to restrict the range of DDL operations which users may perform.
Here is a very simple example of an event trigger function written in C. (Examples of triggers written in procedural languages can be found in the documentation of the procedural languages.)

(Recall that DDL commands on event triggers themselves are not affected by event triggers.)
This section describes the low-level details of the interface to an event trigger function. This information is only needed when writing event trigger functions in C. If you are using a higher-level language then these details are handled for you.

An event trigger function must return a NULL pointer (not an SQL null value; that is, do not set isNull true).
Table 40.1 lists all commands for which event triggers are supported.
Table 40.1. Event Trigger Support by Command Tag
Command Tag | ddl_command_start | ddl_command_end | sql_drop | table_rewrite | Notes |
---|---|---|---|---|---|
ALTER AGGREGATE | X | X | - | - | |
ALTER COLLATION | X | X | - | - | |
ALTER CONVERSION | X | X | - | - | |
ALTER DOMAIN | X | X | - | - | |
ALTER DEFAULT PRIVILEGES | X | X | - | - | |
ALTER EXTENSION | X | X | - | - | |
ALTER FOREIGN DATA WRAPPER | X | X | - | - | |
ALTER FOREIGN TABLE | X | X | X | - | |
ALTER FUNCTION | X | X | - | - | |
ALTER LANGUAGE | X | X | - | - | |
ALTER LARGE OBJECT | X | X | - | - | |
ALTER MATERIALIZED VIEW | X | X | - | X | |
ALTER OPERATOR | X | X | - | - | |
ALTER OPERATOR CLASS | X | X | - | - | |
ALTER OPERATOR FAMILY | X | X | - | - | |
ALTER POLICY | X | X | - | - | |
ALTER PROCEDURE | X | X | - | - | |
ALTER PUBLICATION | X | X | - | - | |
ALTER ROUTINE | X | X | - | - | |
ALTER SCHEMA | X | X | - | - | |
ALTER SEQUENCE | X | X | - | - | |
ALTER SERVER | X | X | - | - | |
ALTER STATISTICS | X | X | - | - | |
ALTER SUBSCRIPTION | X | X | - | - | |
ALTER TABLE | X | X | X | X | |
ALTER TEXT SEARCH CONFIGURATION | X | X | - | - | |
ALTER TEXT SEARCH DICTIONARY | X | X | - | - | |
ALTER TEXT SEARCH PARSER | X | X | - | - | |
ALTER TEXT SEARCH TEMPLATE | X | X | - | - | |
ALTER TRIGGER | X | X | - | - | |
ALTER TYPE | X | X | - | X | |
ALTER USER MAPPING | X | X | - | - | |
ALTER VIEW | X | X | - | - | |
COMMENT | X | X | - | - | Only for local objects |
CREATE ACCESS METHOD | X | X | - | - | |
CREATE AGGREGATE | X | X | - | - | |
CREATE CAST | X | X | - | - | |
CREATE COLLATION | X | X | - | - | |
CREATE CONVERSION | X | X | - | - | |
CREATE DOMAIN | X | X | - | - | |
CREATE EXTENSION | X | X | - | - | |
CREATE FOREIGN DATA WRAPPER | X | X | - | - | |
CREATE FOREIGN TABLE | X | X | - | - | |
CREATE FUNCTION | X | X | - | - | |
CREATE INDEX | X | X | - | - | |
CREATE LANGUAGE | X | X | - | - | |
CREATE MATERIALIZED VIEW | X | X | - | - | |
CREATE OPERATOR | X | X | - | - | |
CREATE OPERATOR CLASS | X | X | - | - | |
CREATE OPERATOR FAMILY | X | X | - | - | |
CREATE POLICY | X | X | - | - | |
CREATE PROCEDURE | X | X | - | - | |
CREATE PUBLICATION | X | X | - | - | |
CREATE RULE | X | X | - | - | |
CREATE SCHEMA | X | X | - | - | |
CREATE SEQUENCE | X | X | - | - | |
CREATE SERVER | X | X | - | - | |
CREATE STATISTICS | X | X | - | - | |
CREATE SUBSCRIPTION | X | X | - | - | |
CREATE TABLE | X | X | - | - | |
CREATE TABLE AS | X | X | - | - | |
CREATE TEXT SEARCH CONFIGURATION | X | X | - | - | |
CREATE TEXT SEARCH DICTIONARY | X | X | - | - | |
CREATE TEXT SEARCH PARSER | X | X | - | - | |
CREATE TEXT SEARCH TEMPLATE | X | X | - | - | |
CREATE TRIGGER | X | X | - | - | |
CREATE TYPE | X | X | - | - | |
CREATE USER MAPPING | X | X | - | - | |
CREATE VIEW | X | X | - | - | |
DROP ACCESS METHOD | X | X | X | - | |
DROP AGGREGATE | X | X | X | - | |
DROP CAST | X | X | X | - | |
DROP COLLATION | X | X | X | - | |
DROP CONVERSION | X | X | X | - | |
DROP DOMAIN | X | X | X | - | |
DROP EXTENSION | X | X | X | - | |
DROP FOREIGN DATA WRAPPER | X | X | X | - | |
DROP FOREIGN TABLE | X | X | X | - | |
DROP FUNCTION | X | X | X | - | |
DROP INDEX | X | X | X | - | |
DROP LANGUAGE | X | X | X | - | |
DROP MATERIALIZED VIEW | X | X | X | - | |
DROP OPERATOR | X | X | X | - | |
DROP OPERATOR CLASS | X | X | X | - | |
DROP OPERATOR FAMILY | X | X | X | - | |
DROP OWNED | X | X | X | - | |
DROP POLICY | X | X | X | - | |
DROP PROCEDURE | X | X | X | - | |
DROP PUBLICATION | X | X | X | - | |
DROP ROUTINE | X | X | X | - | |
DROP RULE | X | X | X | - | |
DROP SCHEMA | X | X | X | - | |
DROP SEQUENCE | X | X | X | - | |
DROP SERVER | X | X | X | - | |
DROP STATISTICS | X | X | X | - | |
DROP SUBSCRIPTION | X | X | X | - | |
DROP TABLE | X | X | X | - | |
DROP TEXT SEARCH CONFIGURATION | X | X | X | - | |
DROP TEXT SEARCH DICTIONARY | X | X | X | - | |
DROP TEXT SEARCH PARSER | X | X | X | - | |
DROP TEXT SEARCH TEMPLATE | X | X | X | - | |
DROP TRIGGER | X | X | X | - | |
DROP TYPE | X | X | X | - | |
DROP USER MAPPING | X | X | X | - | |
DROP VIEW | X | X | X | - | |
GRANT | X | X | - | - | Only for local objects |
IMPORT FOREIGN SCHEMA | X | X | - | - | |
REFRESH MATERIALIZED VIEW | X | X | - | - | |
REVOKE | X | X | - | - | Only for local objects |
SECURITY LABEL | X | X | - | - | Only for local objects |
SELECT INTO | X | X | - | - |
Thanks to the table_rewrite event, it is possible to implement a table rewriting policy that only allows rewrites in maintenance windows.

CREATE EVENT TRIGGER no_rewrite_allowed ON table_rewrite EXECUTE FUNCTION no_rewrite();
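The heart of a no_rewrite()-style policy is a simple time-window predicate. A minimal sketch of that check in Python, where the 01:00-06:00 window is an assumption chosen for illustration, not anything mandated by PostgreSQL:

```python
from datetime import time

# Assumed maintenance window for this sketch: 01:00 (inclusive)
# to 06:00 (exclusive), local server time.
WINDOW_START = time(1, 0)
WINDOW_END = time(6, 0)

def rewrite_allowed(now: time) -> bool:
    """Allow a table rewrite only inside the maintenance window."""
    return WINDOW_START <= now < WINDOW_END

# Inside the window the rewrite proceeds; outside it the event
# trigger function would raise an error to abort the command.
assert rewrite_allowed(time(2, 30))
assert not rewrite_allowed(time(14, 0))
```

In the actual event trigger the same comparison would be written in PL/pgSQL against current_time, raising an exception when the check fails.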
Table of Contents

To supplement the trigger mechanism discussed in Chapter 39, PostgreSQL also provides event triggers. Unlike regular triggers, which are attached to a single table and capture only DML events, event triggers are global to a particular database and are capable of capturing DDL events. Like regular triggers, event triggers can be written in any procedural language that includes event trigger support, or in C, but not in plain SQL.
The executor takes the plan created by the planner/optimizer and recursively processes it to extract the required set of rows. This is essentially a demand-pull pipeline mechanism: each time a plan node is called, it must deliver one more row, or report that it is done delivering rows. A simple INSERT ... VALUES command, for example, uses a trivial plan tree consisting of a single Result node, which computes just one result row, feeding that up to ModifyTable to perform the insertion.
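The demand-pull pipeline can be mimicked with Python generators: each "plan node" pulls a row from its child only when its own parent demands one, so a limit near the top of the tree stops the scan early. The node names below are illustrative, not PostgreSQL's executor API:

```python
# A toy demand-pull pipeline: SeqScan -> Filter -> Limit,
# modeled with generators. Rows flow upward one at a time,
# only when the consumer asks for the next row.
def seq_scan(rows):
    for row in rows:          # leaf node: produce stored rows
        yield row

def filter_node(child, predicate):
    for row in child:         # pull from child on demand
        if predicate(row):
            yield row

def limit_node(child, n):
    for _, row in zip(range(n), child):
        yield row             # stop pulling after n rows

table = [{"id": i} for i in range(1000)]
plan = limit_node(filter_node(seq_scan(table),
                              lambda r: r["id"] % 2 == 0), 3)
print(list(plan))
```

Because evaluation is lazy, the scan only reads as far into the table as the limit requires, which is the same property that lets the real executor answer LIMIT queries without materializing the full result.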
JOIN Clauses

It is possible to control the query planner to some extent by using the explicit JOIN syntax. To see why this matters, we first need some background.

SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;

The relevant planner parameters can be left at their defaults (which still allow you to control join order with explicit joins). But you might set them differently if you are trying to fine-tune the trade-off between planning time and run time.
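The background, in numbers: with n tables in a flat FROM list the planner's search space for join orders grows factorially, which is why exhaustive planning becomes infeasible and explicit JOIN syntax (or a collapse limit) is used to constrain it. A quick Python illustration of the raw permutation counts, ignoring join-tree shape:

```python
from math import factorial

# Number of orderings of n tables in a flat FROM list. The real
# planner's search space also varies with join-tree shape, so these
# permutation counts are only a lower-bound illustration of the growth.
for n in (2, 5, 8, 12):
    print(n, factorial(n))
```

Even at 12 tables the count is nearly half a billion orderings, which makes clear why constraining join order can cut planning time dramatically at the possible cost of a worse plan.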
PostgreSQL provides various lock modes to control concurrent access to data in tables. These modes can be used for application-controlled locking in situations where MVCC does not give the desired behavior. The functions provided to manipulate advisory locks are described in Section 9.27.10.
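Advisory lock functions such as pg_advisory_lock() key on a single bigint (or a pair of int4 values). A common application-side convention, sketched here as an assumption rather than anything PostgreSQL provides, is to hash a descriptive lock name down to a stable signed 64-bit key:

```python
import hashlib

# Map a descriptive lock name to a stable signed 64-bit key suitable
# for pg_advisory_lock(bigint). Hashing the name is an application
# convention assumed for this sketch, not a PostgreSQL API.
def advisory_key(name: str) -> int:
    """Derive a deterministic signed 64-bit advisory-lock key."""
    digest = hashlib.sha256(name.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big", signed=True)

k = advisory_key("nightly-report")
assert -2**63 <= k < 2**63   # fits in a PostgreSQL bigint
# The application would then run: SELECT pg_advisory_lock($1) with k.
```

Deterministic hashing lets every session derive the same key from the same name, so cooperating processes contend on the same advisory lock without coordinating key assignment out of band.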
A useful extension to PostgreSQL typically includes multiple SQL objects; for example, a new data type will require new functions, new operators, and probably new index operator classes. Once the files are installed, use the CREATE EXTENSION command to load the objects into any particular database.
PostgreSQL is extensible because its operation is catalog-driven. If you are familiar with standard relational database systems, you know that they store information about databases, tables, and columns in system catalogs. This ability to modify its operation “on the fly” makes PostgreSQL uniquely suited for rapid prototyping of new applications and storage structures.
If you are thinking about distributing your PostgreSQL extension modules, setting up a portable build system for them can be fairly difficult. Therefore the PostgreSQL installation provides a build infrastructure for extensions, called PGXS, so that simple extension modules can be built against an already installed server. Examine the generated result files, found in the results/ directory (or the output_iso/results/ directory, for tests in ISOLATION), then copy them to expected/ if they match what you expect from the test.
PostgreSQL data types can be divided into base types, container types, domains, and pseudo-types. The behavior is the same as if you had written the appropriate number of anynonarray or anycompatiblenonarray parameters.
Table of Contents
TABLE

In the sections that follow, we will discuss how you can extend the PostgreSQL SQL query language by adding:

packages of related objects (starting in Section 38.17)
There are several administration tools available for PostgreSQL. The most popular is pgAdmin, and there are several commercially available ones as well.
PostgreSQL is designed to be easily extensible. For this reason, extensions loaded into the database can function just like features that are built in. Some features are developed externally; for example, Slony-I is a popular primary/standby replication solution that is developed independently from the core project.
There are only two client interfaces included in the base PostgreSQL distribution: libpq and ECPG. All other language interfaces are external projects; for details about such an interface, refer to its website and documentation.
Table of Contents

PostgreSQL is a complex software project, and managing the project is difficult. We have found that many enhancements to PostgreSQL can be more efficiently developed separately from the core project.
TRUNCATE
EXPLAIN
ANALYZE
IMPORT FOREIGN SCHEMA

The FDW handler function returns a palloc'd FdwRoutine struct containing pointers to the callback functions described below. The scan-related functions are required; the rest are optional. The ReparameterizeForeignPathByChild callback may use reparameterize_path_by_child, adjust_appendrel_attrs, or adjust_appendrel_attrs_multilevel as required.
The FDW author needs to implement a handler function, and optionally a validator function. Both functions must be written in a compiled language such as C, using the version-1 interface. The handler function is called to retrieve the set of functions used to access foreign tables using the wrapper. The validator function must be registered as taking two arguments, a text array containing the options to be validated, and an OID representing the type of object the options are associated with. The latter corresponds to the OID of the system catalog the object would be stored in, one of:

AttributeRelationId
ForeignDataWrapperRelationId
ForeignServerRelationId
ForeignTableRelationId
UserMappingRelationId

If no validator function is supplied, options are not checked at object creation time or object alteration time.
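The essential work of a validator, checking each supplied option name against the set allowed for the catalog the object belongs to, can be sketched outside C. The catalog keys and option names below are illustrative assumptions, not the option list of any real wrapper such as postgres_fdw:

```python
# A Python sketch of FDW option validation: reject any option name not
# in the allowed set for the target catalog. Catalog identifiers mirror
# the RelationId constants above; the option names are made up for
# illustration.
ALLOWED_OPTIONS = {
    "ForeignServerRelationId": {"host", "port", "dbname"},
    "ForeignTableRelationId": {"schema_name", "table_name"},
}

def validate_options(options: dict, catalog: str) -> list:
    """Return error messages for unrecognized option names."""
    allowed = ALLOWED_OPTIONS.get(catalog, set())
    return [f'invalid option "{name}"'
            for name in options if name not in allowed]

assert validate_options({"host": "db1", "port": "5432"},
                        "ForeignServerRelationId") == []
assert validate_options({"hots": "db1"},
                        "ForeignServerRelationId") == ['invalid option "hots"']
```

A real validator would additionally check option values and raise an error (with a hint listing valid names) rather than returning a list, but the catalog-keyed lookup is the core of it.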
Several helper functions are exported from the core server so that authors of foreign data wrappers can get easy access to attributes of FDW-related objects, such as FDW options.

GetForeignServerByName(const char *name, bool missing_ok);

This function returns a ForeignServer object for the foreign server with the given name. If the server is not found, it returns NULL if missing_ok is true, and otherwise raises an error.
The FDW callback functions GetForeignRelSize, GetForeignPaths, GetForeignPlan, PlanForeignModify, and GetForeignJoinPaths are invoked during query planning. Unique constraints or exclusion constraints on remote tables are not locally known. This in turn implies that ON CONFLICT DO UPDATE is not supported, since the specification is mandatory there.
If an FDW's underlying storage mechanism has a concept of locking individual rows to prevent concurrent updates of those rows, it is usually worthwhile for the FDW to perform row-level locking with as close an approximation as practical to the semantics used in ordinary PostgreSQL tables. See the comments in src/include/nodes/plannodes.h, and the comments for ExecRowMark in src/include/nodes/execnodes.h, for additional information.
Table of Contents
TRUNCATE
EXPLAIN
ANALYZE
IMPORT FOREIGN SCHEMA

All operations on a foreign table are handled through its foreign data wrapper, which consists of a set of functions that the core server calls. The foreign data wrapper is responsible for fetching data from the remote data source and returning it to the PostgreSQL executor. The SQL/MED standard also specifies a foreign-data API. However, PostgreSQL does not implement that API, because the effort to accommodate it into PostgreSQL would be large, and the standard API hasn't gained wide adoption anyway.
Identifier | Core? | Description | Comment |
---|---|---|---|
B012 | Embedded C | ||
B021 | Direct SQL | ||
B128 | Routine language SQL | ||
E011 | Core | Numeric data types | |
E011-01 | Core | INTEGER and SMALLINT data types | |
E011-02 | Core | REAL, DOUBLE PRECISION, and FLOAT data types | |
E011-03 | Core | DECIMAL and NUMERIC data types | |
E011-04 | Core | Arithmetic operators | |
E011-05 | Core | Numeric comparison | |
E011-06 | Core | Implicit casting among the numeric data types | |
E021 | Core | Character data types | |
E021-01 | Core | CHARACTER data type | |
E021-02 | Core | CHARACTER VARYING data type | |
E021-03 | Core | Character literals | |
E021-04 | Core | CHARACTER_LENGTH function | trims trailing spaces from CHARACTER values before counting |
E021-05 | Core | OCTET_LENGTH function | |
E021-06 | Core | SUBSTRING function | |
E021-07 | Core | Character concatenation | |
E021-08 | Core | UPPER and LOWER functions | |
E021-09 | Core | TRIM function | |
E021-10 | Core | Implicit casting among the character string types | |
E021-11 | Core | POSITION function | |
E021-12 | Core | Character comparison | |
E031 | Core | Identifiers | |
E031-01 | Core | Delimited identifiers | |
E031-02 | Core | Lower case identifiers | |
E031-03 | Core | Trailing underscore | |
E051 | Core | Basic query specification | |
E051-01 | Core | SELECT DISTINCT | |
E051-02 | Core | GROUP BY clause | |
E051-04 | Core | GROUP BY can contain columns not in <select list> | |
E051-05 | Core | Select list items can be renamed | |
E051-06 | Core | HAVING clause | |
E051-07 | Core | Qualified * in select list | |
E051-08 | Core | Correlation names in the FROM clause | |
E051-09 | Core | Rename columns in the FROM clause | |
E061 | Core | Basic predicates and search conditions | |
E061-01 | Core | Comparison predicate | |
E061-02 | Core | BETWEEN predicate | |
E061-03 | Core | IN predicate with list of values | |
E061-04 | Core | LIKE predicate | |
E061-05 | Core | LIKE predicate ESCAPE clause | |
E061-06 | Core | NULL predicate | |
E061-07 | Core | Quantified comparison predicate | |
E061-08 | Core | EXISTS predicate | |
E061-09 | Core | Subqueries in comparison predicate | |
E061-11 | Core | Subqueries in IN predicate | |
E061-12 | Core | Subqueries in quantified comparison predicate | |
E061-13 | Core | Correlated subqueries | |
E061-14 | Core | Search condition | |
E071 | Core | Basic query expressions | |
E071-01 | Core | UNION DISTINCT table operator | |
E071-02 | Core | UNION ALL table operator | |
E071-03 | Core | EXCEPT DISTINCT table operator | |
E071-05 | Core | Columns combined via table operators need not have exactly the same data type | |
E071-06 | Core | Table operators in subqueries | |
E081 | Core | Basic Privileges | |
E081-01 | Core | SELECT privilege | |
E081-02 | Core | DELETE privilege | |
E081-03 | Core | INSERT privilege at the table level | |
E081-04 | Core | UPDATE privilege at the table level | |
E081-05 | Core | UPDATE privilege at the column level | |
E081-06 | Core | REFERENCES privilege at the table level | |
E081-07 | Core | REFERENCES privilege at the column level | |
E081-08 | Core | WITH GRANT OPTION | |
E081-09 | Core | USAGE privilege | |
E081-10 | Core | EXECUTE privilege | |
E091 | Core | Set functions | |
E091-01 | Core | AVG | |
E091-02 | Core | COUNT | |
E091-03 | Core | MAX | |
E091-04 | Core | MIN | |
E091-05 | Core | SUM | |
E091-06 | Core | ALL quantifier | |
E091-07 | Core | DISTINCT quantifier | |
E101 | Core | Basic data manipulation | |
E101-01 | Core | INSERT statement | |
E101-03 | Core | Searched UPDATE statement | |
E101-04 | Core | Searched DELETE statement | |
E111 | Core | Single row SELECT statement | |
E121 | Core | Basic cursor support | |
E121-01 | Core | DECLARE CURSOR | |
E121-02 | Core | ORDER BY columns need not be in select list | |
E121-03 | Core | Value expressions in ORDER BY clause | |
E121-04 | Core | OPEN statement | |
E121-06 | Core | Positioned UPDATE statement | |
E121-07 | Core | Positioned DELETE statement | |
E121-08 | Core | CLOSE statement | |
E121-10 | Core | FETCH statement implicit NEXT | |
E121-17 | Core | WITH HOLD cursors | |
E131 | Core | Null value support (nulls in lieu of values) | |
E141 | Core | Basic integrity constraints | |
E141-01 | Core | NOT NULL constraints | |
E141-02 | Core | UNIQUE constraints of NOT NULL columns | |
E141-03 | Core | PRIMARY KEY constraints | |
E141-04 | Core | Basic FOREIGN KEY constraint with the NO ACTION default for both referential delete action and referential update action | |
E141-06 | Core | CHECK constraints | |
E141-07 | Core | Column defaults | |
E141-08 | Core | NOT NULL inferred on PRIMARY KEY | |
E141-10 | Core | Names in a foreign key can be specified in any order | |
E151 | Core | Transaction support | |
E151-01 | Core | COMMIT statement | |
E151-02 | Core | ROLLBACK statement | |
E152 | Core | Basic SET TRANSACTION statement | |
E152-01 | Core | SET TRANSACTION statement: ISOLATION LEVEL SERIALIZABLE clause | |
E152-02 | Core | SET TRANSACTION statement: READ ONLY and READ WRITE clauses | |
E153 | Core | Updatable queries with subqueries | |
E161 | Core | SQL comments using leading double minus | |
E171 | Core | SQLSTATE support | |
E182 | Core | Host language binding | |
F021 | Core | Basic information schema | |
F021-01 | Core | COLUMNS view | |
F021-02 | Core | TABLES view | |
F021-03 | Core | VIEWS view | |
F021-04 | Core | TABLE_CONSTRAINTS view | |
F021-05 | Core | REFERENTIAL_CONSTRAINTS view | |
F021-06 | Core | CHECK_CONSTRAINTS view | |
F031 | Core | Basic schema manipulation | |
F031-01 | Core | CREATE TABLE statement to create persistent base tables | |
F031-02 | Core | CREATE VIEW statement | |
F031-03 | Core | GRANT statement | |
F031-04 | Core | ALTER TABLE statement: ADD COLUMN clause | |
F031-13 | Core | DROP TABLE statement: RESTRICT clause | |
F031-16 | Core | DROP VIEW statement: RESTRICT clause | |
F031-19 | Core | REVOKE statement: RESTRICT clause | |
F032 | CASCADE drop behavior | ||
F033 | ALTER TABLE statement: DROP COLUMN clause | ||
F034 | Extended REVOKE statement | ||
F034-01 | REVOKE statement performed by other than the owner of a schema object | ||
F034-02 | REVOKE statement: GRANT OPTION FOR clause | ||
F034-03 | REVOKE statement to revoke a privilege that the grantee has WITH GRANT OPTION | ||
F041 | Core | Basic joined table | |
F041-01 | Core | Inner join (but not necessarily the INNER keyword) | |
F041-02 | Core | INNER keyword | |
F041-03 | Core | LEFT OUTER JOIN | |
F041-04 | Core | RIGHT OUTER JOIN | |
F041-05 | Core | Outer joins can be nested | |
F041-07 | Core | The inner table in a left or right outer join can also be used in an inner join | |
F041-08 | Core | All comparison operators are supported (rather than just =) | |
F051 | Core | Basic date and time | |
F051-01 | Core | DATE data type (including support of DATE literal) | |
F051-02 | Core | TIME data type (including support of TIME literal) with fractional seconds precision of at least 0 | |
F051-03 | Core | TIMESTAMP data type (including support of TIMESTAMP literal) with fractional seconds precision of at least 0 and 6 | |
F051-04 | Core | Comparison predicate on DATE, TIME, and TIMESTAMP data types | |
F051-05 | Core | Explicit CAST between datetime types and character string types | |
F051-06 | Core | CURRENT_DATE | |
F051-07 | Core | LOCALTIME | |
F051-08 | Core | LOCALTIMESTAMP | |
F052 | Intervals and datetime arithmetic | ||
F053 | OVERLAPS predicate | ||
F081 | Core | UNION and EXCEPT in views | |
F111 | Isolation levels other than SERIALIZABLE | ||
F111-01 | READ UNCOMMITTED isolation level | ||
F111-02 | READ COMMITTED isolation level | ||
F111-03 | REPEATABLE READ isolation level | ||
F131 | Core | Grouped operations | |
F131-01 | Core | WHERE, GROUP BY, and HAVING clauses supported in queries with grouped views | |
F131-02 | Core | Multiple tables supported in queries with grouped views | |
F131-03 | Core | Set functions supported in queries with grouped views | |
F131-04 | Core | Subqueries with GROUP BY and HAVING clauses and grouped views | |
F131-05 | Core | Single row SELECT with GROUP BY and HAVING clauses and grouped views | |
F171 | Multiple schemas per user | ||
F181 | Core | Multiple module support | |
F191 | Referential delete actions | ||
F200 | TRUNCATE TABLE statement | ||
F201 | Core | CAST function | |
F202 | TRUNCATE TABLE: identity column restart option | ||
F221 | Core | Explicit defaults | |
F222 | INSERT statement: DEFAULT VALUES clause | ||
F231 | Privilege tables | ||
F231-01 | TABLE_PRIVILEGES view | ||
F231-02 | COLUMN_PRIVILEGES view | ||
F231-03 | USAGE_PRIVILEGES view | ||
F251 | Domain support | ||
F261 | Core | CASE expression | |
F261-01 | Core | Simple CASE | |
F261-02 | Core | Searched CASE | |
F261-03 | Core | NULLIF | |
F261-04 | Core | COALESCE | |
F262 | Extended CASE expression | ||
F271 | Compound character literals | ||
F281 | LIKE enhancements | ||
F292 | UNIQUE null treatment | SQL:202x draft | |
F302 | INTERSECT table operator | ||
F302-01 | INTERSECT DISTINCT table operator | ||
F302-02 | INTERSECT ALL table operator | ||
F304 | EXCEPT ALL table operator | ||
F311 | Core | Schema definition statement | |
F311-01 | Core | CREATE SCHEMA | |
F311-02 | Core | CREATE TABLE for persistent base tables | |
F311-03 | Core | CREATE VIEW | |
F311-04 | Core | CREATE VIEW: WITH CHECK OPTION | |
F311-05 | Core | GRANT statement | |
F312 | MERGE statement | ||
F313 | Enhanced MERGE statement | ||
F314 | MERGE statement with DELETE branch | ||
F321 | User authorization | ||
F341 | Usage tables | ||
F361 | Subprogram support | ||
F381 | Extended schema manipulation | ||
F381-01 | ALTER TABLE statement: ALTER COLUMN clause | ||
F381-02 | ALTER TABLE statement: ADD CONSTRAINT clause | ||
F381-03 | ALTER TABLE statement: DROP CONSTRAINT clause | ||
F382 | Alter column data type | ||
F383 | Set column not null clause | ||
F384 | Drop identity property clause | ||
F385 | Drop column generation expression clause | ||
F386 | Set identity column generation clause | ||
F391 | Long identifiers | ||
F392 | Unicode escapes in identifiers | ||
F393 | Unicode escapes in literals | ||
F394 | Optional normal form specification | ||
F401 | Extended joined table | ||
F401-01 | NATURAL JOIN | ||
F401-02 | FULL OUTER JOIN | ||
F401-04 | CROSS JOIN | ||
F402 | Named column joins for LOBs, arrays, and multisets | ||
F404 | Range variable for common column names | ||
F411 | Time zone specification | differences regarding literal interpretation | |
F421 | National character | ||
F431 | Read-only scrollable cursors | ||
F431-01 | FETCH with explicit NEXT | ||
F431-02 | FETCH FIRST | ||
F431-03 | FETCH LAST | ||
F431-04 | FETCH PRIOR | ||
F431-05 | FETCH ABSOLUTE | ||
F431-06 | FETCH RELATIVE | ||
F441 | Extended set function support | ||
F442 | Mixed column references in set functions | ||
F471 | Core | Scalar subquery values | |
F481 | Core | Expanded NULL predicate | |
F491 | Constraint management | ||
F501 | Core | Features and conformance views | |
F501-01 | Core | SQL_FEATURES view | |
F501-02 | Core | SQL_SIZING view | |
F502 | Enhanced documentation tables | ||
F531 | Temporary tables | ||
F555 | Enhanced seconds precision | ||
F561 | Full value expressions | ||
F571 | Truth value tests | ||
F591 | Derived tables | ||
F611 | Indicator data types | ||
F641 | Row and table constructors | ||
F651 | Catalog name qualifiers | ||
F661 | Simple tables | ||
F672 | Retrospective check constraints | ||
F690 | Collation support | but no character set support | |
F692 | Extended collation support | ||
F701 | Referential update actions | ||
F711 | ALTER domain | ||
F731 | INSERT column privileges | ||
F751 | View CHECK enhancements | ||
F761 | Session management | ||
F762 | CURRENT_CATALOG | ||
F763 | CURRENT_SCHEMA | ||
F771 | Connection management | ||
F781 | Self-referencing operations | ||
F791 | Insensitive cursors | ||
F801 | Full set function | ||
F850 | Top-level <order by clause> in <query expression> | ||
F851 | <order by clause> in subqueries | ||
F852 | Top-level <order by clause> in views | ||
F855 | Nested <order by clause> in <query expression> | ||
F856 | Nested <fetch first clause> in <query expression> | ||
F857 | Top-level <fetch first clause> in <query expression> | ||
F858 | <fetch first clause> in subqueries | ||
F859 | Top-level <fetch first clause> in views | ||
F860 | <fetch first row count> in <fetch first clause> | ||
F861 | Top-level <result offset clause> in <query expression> | ||
F862 | <result offset clause> in subqueries | ||
F863 | Nested <result offset clause> in <query expression> | ||
F864 | Top-level <result offset clause> in views | ||
F865 | <offset row count> in <result offset clause> | ||
F867 | FETCH FIRST clause: WITH TIES option | ||
S071 | SQL paths in function and type name resolution | ||
S091-01 | Arrays of built-in data types | ||
S091-03 | Array expressions | ||
S092 | Arrays of user-defined types | ||
S095 | Array constructors by query | ||
S096 | Optional array bounds | ||
S098 | ARRAY_AGG | ||
S111 | ONLY in query expressions | ||
S201 | SQL-invoked routines on arrays | ||
S201-01 | Array parameters | ||
S201-02 | Array as result type of functions | ||
S211 | User-defined cast functions | ||
S301 | Enhanced UNNEST | ||
S404 | TRIM_ARRAY | ||
T031 | BOOLEAN data type | ||
T071 | BIGINT data type | ||
T121 | WITH (excluding RECURSIVE) in query expression | ||
T122 | WITH (excluding RECURSIVE) in subquery | ||
T131 | Recursive query | ||
T132 | Recursive query in subquery | ||
T133 | Enhanced cycle mark values | SQL:202x draft | |
T141 | SIMILAR predicate | ||
T151 | DISTINCT predicate | ||
T152 | DISTINCT predicate with negation | ||
T171 | LIKE clause in table definition | ||
T172 | AS subquery clause in table definition | ||
T173 | Extended LIKE clause in table definition | ||
T174 | Identity columns | ||
T177 | Sequence generator support: simple restart option | ||
T178 | Identity columns: simple restart option | ||
T191 | Referential action RESTRICT | ||
T201 | Comparable data types for referential constraints | ||
T211-01 | Triggers activated on UPDATE, INSERT, or DELETE of one base table | ||
T211-02 | BEFORE triggers | ||
T211-03 | AFTER triggers | ||
T211-04 | FOR EACH ROW triggers | ||
T211-05 | Ability to specify a search condition that must be true before the trigger is invoked | ||
T211-07 | TRIGGER privilege | ||
T212 | Enhanced trigger capability | ||
T213 | INSTEAD OF triggers | ||
T241 | START TRANSACTION statement | ||
T261 | Chained transactions | ||
T271 | Savepoints | ||
T281 | SELECT privilege with column granularity | ||
T285 | Enhanced derived column names | ||
T312 | OVERLAY function | ||
T321-01 | Core | User-defined functions with no overloading | |
T321-02 | Core | User-defined stored procedures with no overloading | |
T321-03 | Core | Function invocation | |
T321-04 | Core | CALL statement | |
T321-05 | Core | RETURN statement | |
T321-06 | Core | ROUTINES view | |
T321-07 | Core | PARAMETERS view | |
T323 | Explicit security for external routines | ||
T325 | Qualified SQL parameter references | ||
T331 | Basic roles | ||
T332 | Extended roles | ||
T341 | Overloading of SQL-invoked functions and procedures | ||
T351 | Bracketed SQL comments (/*...*/ comments) | ||
T431 | Extended grouping capabilities | ||
T432 | Nested and concatenated GROUPING SETS | ||
T433 | Multiargument GROUPING function | ||
T434 | GROUP BY DISTINCT | ||
T441 | ABS and MOD functions | ||
T461 | Symmetric BETWEEN predicate | ||
T491 | LATERAL derived table | ||
T501 | Enhanced EXISTS predicate | ||
T521 | Named arguments in CALL statement | ||
T523 | Default values for INOUT parameters of SQL-invoked procedures | ||
T524 | Named arguments in routine invocations other than a CALL statement | ||
T525 | Default values for parameters of SQL-invoked functions | ||
T551 | Optional key words for default syntax | ||
T581 | Regular expression substring function | ||
T591 | UNIQUE constraints of possibly null columns | ||
T611 | Elementary OLAP operations | ||
T612 | Advanced OLAP operations | ||
T613 | Sampling | ||
T614 | NTILE function | ||
T615 | LEAD and LAG functions | ||
T617 | FIRST_VALUE and LAST_VALUE function | ||
T620 | WINDOW clause: GROUPS option | ||
T621 | Enhanced numeric functions | ||
T622 | Trigonometric functions | ||
T623 | General logarithm functions | ||
T624 | Common logarithm functions | ||
T631 | Core | IN predicate with one list element | |
T651 | SQL-schema statements in SQL routines | ||
T653 | SQL-schema statements in external routines | ||
T655 | Cyclically dependent routines | ||
T831 | SQL/JSON path language: strict mode | ||
T832 | SQL/JSON path language: item method | ||
T833 | SQL/JSON path language: multiple subscripts | ||
T834 | SQL/JSON path language: wildcard member accessor | ||
T835 | SQL/JSON path language: filter expressions | ||
T836 | SQL/JSON path language: starts with predicate | ||
T837 | SQL/JSON path language: regex_like predicate | ||
X010 | XML type | ||
X011 | Arrays of XML type | ||
X014 | Attributes of XML type | ||
X016 | Persistent XML values | ||
X020 | XMLConcat | ||
X031 | XMLElement | ||
X032 | XMLForest | ||
X034 | XMLAgg | ||
X035 | XMLAgg: ORDER BY option | ||
X036 | XMLComment | ||
X037 | XMLPI | ||
X040 | Basic table mapping | ||
X041 | Basic table mapping: nulls absent | ||
X042 | Basic table mapping: null as nil | ||
X043 | Basic table mapping: table as forest | ||
X044 | Basic table mapping: table as element | ||
X045 | Basic table mapping: with target namespace | ||
X046 | Basic table mapping: data mapping | ||
X047 | Basic table mapping: metadata mapping | ||
X048 | Basic table mapping: base64 encoding of binary strings | ||
X049 | Basic table mapping: hex encoding of binary strings | ||
X050 | Advanced table mapping | ||
X051 | Advanced table mapping: nulls absent | ||
X052 | Advanced table mapping: null as nil | ||
X053 | Advanced table mapping: table as forest | ||
X054 | Advanced table mapping: table as element | ||
X055 | Advanced table mapping: with target namespace | ||
X056 | Advanced table mapping: data mapping | ||
X057 | Advanced table mapping: metadata mapping | ||
X058 | Advanced table mapping: base64 encoding of binary strings | ||
X059 | Advanced table mapping: hex encoding of binary strings | ||
X060 | XMLParse: character string input and CONTENT option | ||
X061 | XMLParse: character string input and DOCUMENT option | ||
X070 | XMLSerialize: character string serialization and CONTENT option | ||
X071 | XMLSerialize: character string serialization and DOCUMENT option | ||
X072 | XMLSerialize: character string serialization | ||
X090 | XML document predicate | ||
X120 | XML parameters in SQL routines | ||
X121 | XML parameters in external routines | ||
X221 | XML passing mechanism BY VALUE | ||
X301 | XMLTable: derived column list option | ||
X302 | XMLTable: ordinality column option | ||
X303 | XMLTable: column default option | ||
X304 | XMLTable: passing a context item | must be XML DOCUMENT | |
X400 | Name and identifier mapping | ||
X410 | Alter column data type: XML type |
Table of Contents
This section attempts to outline to what extent PostgreSQL conforms to the current SQL standard. The following information is not a full statement of conformance, but it presents the main topics in as much detail as is both reasonable and useful for users.

Feature codes containing a hyphen are subfeatures. Therefore, if a particular subfeature is not supported, the main feature is listed as unsupported even if some other subfeatures are supported.
The file_fdw
module provides the foreign-data wrapper
file_fdw
, which can be used to access data
files in the server's file system, or to execute programs on the server
OPTIONS ( filename 'log/pglog.csv', format 'csv' );

That's it — now you can query your log directly. In production, of course, you would need to define some way to deal with log rotation.
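The OPTIONS clause above is the tail of a foreign-table definition. As a sketch (server and table names here are illustrative, and the column list is abridged — a foreign table for the real csvlog format must declare every csvlog column in order):

```sql
CREATE EXTENSION file_fdw;
CREATE SERVER pglog_server FOREIGN DATA WRAPPER file_fdw;

-- Abridged column list, for illustration only:
CREATE FOREIGN TABLE pglog (
    log_time  timestamp(3) with time zone,
    user_name text,
    message   text
) SERVER pglog_server
OPTIONS ( filename 'log/pglog.csv', format 'csv' );

SELECT message FROM pglog ORDER BY log_time DESC LIMIT 10;
```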
The functions described in this section are used to control and monitor a PostgreSQL installation.
Aggregate functions compute a single result
from a set of input values. The built-in general-purpose aggregate
functions are listed in Table 9.58
SELECT count(*) FROM sometable;
neither make
nor model
was grouped
by in the last row (which therefore is an aggregate over all the input
rows).
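The grand-total row described above can be produced with GROUPING SETS; a minimal sketch, assuming a hypothetical cars table with make and model columns:

```sql
-- Per-(make, model) counts, per-make subtotals, and one grand-total row
-- in which neither make nor model is grouped by:
SELECT make, model, count(*)
FROM cars
GROUP BY GROUPING SETS ((make, model), (make), ());
```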
Table 9.52 shows the specialized operators available for array types. In addition to those, the usual comparison operators shown in Table 9.1 are available for arrays.
See also Section 9.21 about the aggregate
function array_agg
for use with arrays.
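A brief sketch of the array operators and array_agg (the employees table is hypothetical):

```sql
SELECT ARRAY[1,4,3] @> ARRAY[3,1];   -- containment: true
SELECT ARRAY[1,2] || ARRAY[3,4];     -- concatenation: {1,2,3,4}

-- Collapse a column into a single array value:
SELECT array_agg(name ORDER BY name) FROM employees;
```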
This section describes functions and operators for examining and
manipulating binary strings, that is values of type bytea
.
Many of these are equivalent, in purpose and syntax, to the
See also the aggregate function string_agg
in
Section 9.21 and the large object functions
in Section 35.4.
This section describes functions and operators for examining and
manipulating bit strings, that is values of the types
bit
and bit varying
. (While only
cast(-44 as bit(12)) → 111111010100

Note that casting to just “bit” means casting to bit(1), and so will deliver only the least significant bit of the integer.
The usual comparison operators are available, as shown in Table 9.1.
Table 9.1. Comparison Operators
Operator | Description |
---|---|
datatype < datatype | less than |
datatype > datatype | greater than |
datatype <= datatype | less than or equal to |
datatype >= datatype | greater than or equal to |
datatype = datatype | equal |
datatype <> datatype | not equal |
datatype != datatype | not equal |
This section describes several specialized constructs for making multiple comparisons between groups of values. These forms are syntactically related to the subquery forms of the previous section, but do not involve subqueries.

Some of these constructs are used internally for materialized views and might be useful for other specialized purposes such as replication and B-Tree deduplication (see Section 67.4.3). They are not intended to be generally useful for writing queries, though.
This section describes the SQL-compliant conditional expressions available in PostgreSQL.
SELECT NULLIF(value, '(none)') ...

GREATEST and LEAST are not in the SQL standard, but are a common extension. Some other databases make them return NULL if any argument is NULL, rather than only when all are NULL.
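A short sketch of the conditional expressions (the items table and its columns are hypothetical):

```sql
-- COALESCE returns the first non-null argument:
SELECT coalesce(description, short_description, '(none)') FROM items;

-- NULLIF maps a sentinel value back to NULL:
SELECT nullif(value, '(none)') FROM items;
```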
Table 9.33 shows the available
functions for date/time value processing, with details appearing in
the following subsections. Table 9.32 illustrates the behaviors of the basic arithmetic operators (+, *, etc.).

SELECT pg_sleep_until('tomorrow 03:00');
when calling pg_sleep
or its variants. Otherwise
other sessions might have to wait for your sleeping process, slowing down
the entire system.
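A few illustrative date/time expressions (results shown for the literal inputs):

```sql
SELECT date '2001-09-28' + interval '1 hour';                -- 2001-09-28 01:00:00
SELECT date_trunc('hour', timestamp '2001-02-16 20:38:40');  -- 2001-02-16 20:00:00
SELECT pg_sleep(1.5);   -- pause the current session for 1.5 seconds
```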
For enum types (described in Section 8.7), there are several functions that allow cleaner programming without hard-coding particular values of an enum type.

CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple');

Either null or a specific value of the type can be passed, with the same result. It is more common to apply these functions to a table column or function argument than to a hardwired type name as used in the examples.
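Using the rainbow type shown above, the enum support functions behave as follows:

```sql
SELECT enum_first(null::rainbow);   -- red
SELECT enum_last(null::rainbow);    -- purple
SELECT enum_range(null::rainbow);   -- {red,orange,yellow,green,blue,purple}
SELECT enum_range('orange'::rainbow, 'green'::rainbow);  -- {orange,yellow,green}
```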
PostgreSQL provides these helper functions to retrieve information from event triggers.
CREATE EVENT TRIGGER test_table_rewrite_oid ON table_rewrite EXECUTE FUNCTION test_event_trigger_table_rewrite_oid();
The PostgreSQL formatting functions provide a powerful set of tools for converting various data types (date/time, integer, floating point, numeric) to formatted strings and for converting from formatted strings to specific data types.
Table 9.30. Template Pattern Modifiers for Numeric Formatting
Modifier | Description | Example |
---|---|---|
FM prefix | fill mode (suppress trailing zeroes and padding blanks) | FM99.99 |
TH suffix | upper case ordinal number suffix | 999TH |
th suffix | lower case ordinal number suffix | 999th |
Table 9.31 shows some
examples of the use of the to_char
function.
Table 9.31. to_char
Examples
Expression | Result |
---|---|
to_char(current_timestamp, 'Day, DD HH12:MI:SS') | 'Tuesday , 06 05:39:18' |
to_char(current_timestamp, 'FMDay, FMDD HH12:MI:SS') | 'Tuesday, 6 05:39:18' |
to_char(-0.1, '99.99') | ' -.10' |
to_char(-0.1, 'FM9.99') | '-.1' |
to_char(-0.1, 'FM90.99') | '-0.1' |
to_char(0.1, '0.9') | ' 0.1' |
to_char(12, '9990999.9') | ' 0012.0' |
to_char(12, 'FM9990999.9') | '0012.' |
to_char(485, '999') | ' 485' |
to_char(-485, '999') | '-485' |
to_char(485, '9 9 9') | ' 4 8 5' |
to_char(1485, '9,999') | ' 1,485' |
to_char(1485, '9G999') | ' 1 485' |
to_char(148.5, '999.999') | ' 148.500' |
to_char(148.5, 'FM999.999') | '148.5' |
to_char(148.5, 'FM999.990') | '148.500' |
to_char(148.5, '999D999') | ' 148,500' |
to_char(3148.5, '9G999D999') | ' 3 148,500' |
to_char(-485, '999S') | '485-' |
to_char(-485, '999MI') | '485-' |
to_char(485, '999MI') | '485 ' |
to_char(485, 'FM999MI') | '485' |
to_char(485, 'PL999') | '+485' |
to_char(485, 'SG999') | '+485' |
to_char(-485, 'SG999') | '-485' |
to_char(-485, '9SG99') | '4-85' |
to_char(-485, '999PR') | '<485>' |
to_char(485, 'L999') | 'DM 485' |
to_char(485, 'RN') | ' CDLXXXV' |
to_char(485, 'FMRN') | 'CDLXXXV' |
to_char(5.2, 'FMRN') | 'V' |
to_char(482, '999th') | ' 482nd' |
to_char(485, '"Good number:"999') | 'Good number: 485' |
to_char(485.8, '"Pre:"999" Post:" .999') | 'Pre: 485 Post: .800' |
to_char(12, '99V999') | ' 12000' |
to_char(12.4, '99V999') | ' 12400' |
to_char(12.45, '99V9') | ' 125' |
to_char(0.0004859, '9.99EEEE') | ' 4.86e-04' |
The geometric types point
, box
,
lseg
, line
, path
,
polygon
, and circle
have a large set of
UPDATE t SET p[1] = ...
changes the Y coordinate.
In the same way, a value of type box
or lseg
can be treated
as an array of two point
values.
Table 9.66 shows several functions that extract session and system information.
SELECT collation for ('foo' COLLATE "de_DE");

pg_control_recovery returns information about recovery state, as shown in Table 9.86.
Table 9.83. pg_control_checkpoint
Output Columns
Column Name | Data Type |
---|---|
checkpoint_lsn | pg_lsn |
redo_lsn | pg_lsn |
redo_wal_file | text |
timeline_id | integer |
prev_timeline_id | integer |
full_page_writes | boolean |
next_xid | text |
next_oid | oid |
next_multixact_id | xid |
next_multi_offset | xid |
oldest_xid | xid |
oldest_xid_dbid | oid |
oldest_active_xid | xid |
oldest_multi_xid | xid |
oldest_multi_dbid | oid |
oldest_commit_ts_xid | xid |
newest_commit_ts_xid | xid |
checkpoint_time | timestamp with time zone |
Table 9.84. pg_control_system
Output Columns
Column Name | Data Type |
---|---|
pg_control_version | integer |
catalog_version_no | integer |
system_identifier | bigint |
pg_control_last_modified | timestamp with time zone |
Table 9.85. pg_control_init
Output Columns
Column Name | Data Type |
---|---|
max_data_alignment | integer |
database_block_size | integer |
blocks_per_segment | integer |
wal_block_size | integer |
bytes_per_wal_segment | integer |
max_identifier_length | integer |
max_index_columns | integer |
max_toast_chunk_size | integer |
large_object_chunk_size | integer |
float8_pass_by_value | boolean |
data_page_checksum_version | integer |
Table 9.86. pg_control_recovery
Output Columns
Column Name | Data Type |
---|---|
min_recovery_end_lsn | pg_lsn |
min_recovery_end_timeline | integer |
backup_start_lsn | pg_lsn |
backup_end_lsn | pg_lsn |
end_of_backup_record_required | boolean |
The usual logical operators are available: AND, OR, and NOT. The AND and OR operators are commutative, that is, you can switch the left and right operands without affecting the result. (However, it is not guaranteed that the left operand is evaluated before the right operand. See Section 4.2.14 for more information about the order of evaluation of subexpressions.)
There are three separate approaches to pattern matching provided
by PostgreSQL: the traditional
SQL LIKE
operator, the more recent SIMILAR TO operator (added in SQL:1999), and POSIX-style regular expressions.

SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}');
backslash.
Mathematical operators are provided for many PostgreSQL types. For types without standard mathematical conventions (e.g., date/time types) we describe the actual behavior in subsequent sections.
atanh(0.5)
→ 0.5493061443340548
The IP network address types, cidr
and inet
,
support the usual comparison operators shown in
Table 9.1
macaddr8_set7bit(macaddr8 '00:34:56:ab:cd:ef')
→ 02:34:56:ff:fe:ab:cd:ef
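A few of the network address operators and functions in action:

```sql
SELECT inet '192.168.1.5' << inet '192.168.1.0/24';  -- is contained by: true
SELECT network(inet '192.168.1.5/24');               -- 192.168.1.0/24
SELECT host(inet '192.168.1.5/24');                  -- 192.168.1.5
```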
See Section 8.17 for an overview of range types.
Table 9.54 shows the specialized operators
The lower_inc
, upper_inc
,
lower_inf
, and upper_inf
functions all return false for an empty range or multirange.
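A brief sketch of the range operators and functions:

```sql
SELECT int4range(10, 20) @> 15;                    -- containment: true
SELECT numrange(1.0, 5.0) && numrange(4.0, 8.0);   -- overlap: true
SELECT upper(int8range(15, 25));                   -- 25
SELECT isempty(int4range(1, 1));                   -- true
```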
This section describes functions for operating on sequence objects, also called sequence generators or just sequences. Sequence objects are special single-row tables created with CREATE SEQUENCE.

SELECT setval('myseq', 42, false);

The sequence to be operated on by a sequence function is specified by a regclass argument. You do not have to look up the OID by hand, however, since the regclass data type's input converter will do the work for you. See Section 8.19 for details.
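A minimal sketch of the sequence functions (myseq as above):

```sql
CREATE SEQUENCE myseq;
SELECT nextval('myseq');            -- 1 (first call on a fresh sequence)
SELECT currval('myseq');            -- 1
SELECT setval('myseq', 42, false);  -- next nextval('myseq') will return 42
```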
This section describes functions that possibly return more than one row. The most widely used functions in this class are series generating functions, as detailed in Table 9.64 and Table 9.65.

SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
 ...
 pg_subtrans | 19
(19 rows)
This section describes functions and operators for examining and
manipulating string values. Strings in this context include values
of the types character
, character varying
,
SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three');
The %I
and %L
format specifiers are particularly
useful for safely constructing dynamic SQL statements. See
Example 43.1.
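A sketch of format with %I and %L (the table and value are illustrative):

```sql
-- %I quotes an identifier, %L quotes a literal:
SELECT format('SELECT * FROM %I WHERE name = %L', 'my table', 'O''Reilly');
-- → SELECT * FROM "my table" WHERE name = 'O''Reilly'
```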
This section describes the SQL-compliant subquery expressions available in PostgreSQL. All of the expression forms documented in this section return Boolean (true/false) results.

SELECT col1 FROM tab1 WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2);

See Section 9.24.5 for details about the meaning of a row constructor comparison.
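Beyond EXISTS, the IN and NOT EXISTS forms follow the same pattern (tab1 and tab2 as in the example above):

```sql
-- Rows of tab1 whose col2 appears in tab2:
SELECT col1 FROM tab1 WHERE col2 IN (SELECT col2 FROM tab2);

-- Rows of tab1 with no matching tab2 row:
SELECT col1 FROM tab1
WHERE NOT EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2);
```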
Table 9.42, Table 9.43 and Table 9.44 summarize the functions and operators that are provided for full text searching.
ts_stat('SELECT vector FROM apod')
→ (foo,10,15) ...
While many uses of triggers involve user-written trigger functions, PostgreSQL provides a few built-in trigger functions that can be used directly in user-defined triggers.

CREATE TRIGGER z_min_update
BEFORE UPDATE ON tablename
FOR EACH ROW EXECUTE FUNCTION suppress_redundant_updates_trigger();

To fire this trigger last among a table's BEFORE UPDATE triggers, choose a trigger name that comes after the name of any other trigger you might have on the table. (Hence the “z” prefix in the example.)
PostgreSQL includes one function to generate a UUID:
gen_random_uuid() → uuid
PostgreSQL also provides the usual comparison operators shown in Table 9.1 for UUIDs.
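For example:

```sql
SELECT gen_random_uuid();  -- a random (version 4) UUID, different on each call
```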
Window functions provide the ability to perform
calculations across sets of rows that are related to the current query
row. See Section 3.5 for an introduction to this
default FROM FIRST
behavior is supported. (You can achieve
the result of FROM LAST
by reversing the ORDER BY
ordering.)
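A classic illustration (the empsalary table is a hypothetical example table):

```sql
-- Rank rows within each department by salary, highest first:
SELECT depname, empno, salary,
       rank() OVER (PARTITION BY depname ORDER BY salary DESC)
FROM empsalary;
```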
The functions and function-like expressions described in this
section operate on values of type xml
. See Section 8.13 for information about the xml
type. The function-like expressions xmlparse
will be put into content form with each such disallowed node replaced by
its string value, as defined for the XPath 1.0
string
function.
Table of Contents
PostgreSQL provides a large number of
functions and operators for the built-in data types. This chapter
describes most of them, although additional special-purpose functions
repeat('Pg', 4) → PgPgPgPg
is present in other SQL database management
systems, and in many cases this functionality is compatible and
consistent between the various implementations.
Although all built-in WAL-logged modules have their own types of WAL records, there is also a generic WAL record type, which describes changes to pages in a generic way. This is useful for extensions that provide custom access methods. Changed regions of a page are identified by byte-by-byte comparison. This is not very compact for the case of moving data within a page, and might be improved in the future.
The following resources contain additional information about genetic algorithms:
Among all relational operators the most difficult one to process and optimize is the join. The number of possible query plans grows exponentially with the number of joins in the query. PostgreSQL's genetic query optimizer (GEQO) uses a genetic algorithm to solve the join ordering problem in a manner that is efficient for queries involving large numbers of joins.
The genetic algorithm (GA) is a heuristic optimization method which operates through randomized search. The set of possible solutions for the optimization problem is considered as a population of individuals, and the degree of adaptation of an individual to its environment is specified by its fitness.

It should be noted strongly that a GA is not a pure random search for a solution to a problem. A GA uses stochastic processes, but the result is distinctly non-random (better than random).
The GEQO module approaches the query optimization problem as though it were the well-known traveling salesman problem (TSP). In the TSP, the cost of any substring (partial tour) is independent of the rest of the tour, but this is certainly not true for query optimization. Thus it is questionable whether edge recombination crossover is the most effective mutation procedure.
Table of Contents
Written by Martin Utesch (<utesch@aut.tu-freiberg.de>
)
for the Institute of Automatic Control at the University of Mining and Technology in Freiberg, Germany.
The core PostgreSQL distribution
includes the GIN operator classes shown in
Table 70.1.
is the default. jsonb_path_ops
supports fewer operators but
offers better performance for those operators.
See Section 8.14.4 for details.
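For example, to index a jsonb column with each operator class (table and column names are illustrative):

```sql
-- Default jsonb_ops opclass, supporting the full operator set:
CREATE INDEX idxgin ON api USING GIN (jdoc);

-- jsonb_path_ops: fewer operators supported, but typically smaller
-- and faster for containment-style queries:
CREATE INDEX idxginp ON api USING GIN (jdoc jsonb_path_ops);
```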
The core PostgreSQL distribution includes the GIN operator classes previously shown in Table 70.1. The following contrib modules also contain GIN operator classes:
btree_gin
B-tree equivalent functionality for several data types
hstore
Module for storing (key, value) pairs
intarray
Enhanced support for int[]
pg_trgm
Text similarity using trigram matching
The GIN interface has a high level of abstraction,
requiring the access method implementer only to implement the semantics of
the data type being accessed. The GIN layer itself
recommended that the SQL declarations of these three support functions use
the opclass's indexed data type for the query
argument, even
though the actual type might be something else depending on the operator.
Internally, a GIN index contains a B-tree index constructed over keys, where each key is an element of one or more indexed items (a member of an array, for example) and where each tuple in a leaf page contains either a pointer to a B-tree of heap pointers (a “posting tree”) or a simple list of heap pointers (a “posting list”) when the list is small enough.

For partial-match searches, comparePartial should return zero for a matching index key, less than zero for a non-match that is still within the range to be searched, or greater than zero if the index key is past the range that could match.
GIN stands for Generalized Inverted Index. GIN is designed for handling cases where the items to be indexed are composite values, and the queries to be handled by the index need to search for element values that appear within the composite items.

GIN was developed and is maintained by Teodor Sigaev and Oleg Bartunov. There is more information about GIN on their website.
GIN assumes that indexable operators are strict. This
means that extractValue
will not be called at all on a null
item value (instead, a placeholder index entry is created automatically),
value either (instead, the query is presumed to be unsatisfiable). Note
however that null key values contained within a non-null composite item
or query value are supported.
Insertion into a GIN index can be slow due to the likelihood of many keys being inserted for each item. So, for bulk insertions into a table it is advisable to drop the GIN index and recreate it after finishing the bulk insertion.

From experience, values in the thousands (e.g., 5000 to 20000) work well.
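A sketch of the bulk-load advice (table and index names are hypothetical):

```sql
DROP INDEX IF EXISTS docs_tsv_idx;
COPY docs FROM '/tmp/docs.copy';   -- bulk load without index maintenance
CREATE INDEX docs_tsv_idx ON docs USING GIN (tsv);
```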
The core PostgreSQL distribution includes the GiST operator classes shown in Table 68.1. For example:
CREATE INDEX ON my_table USING GIST (my_inet_column inet_ops);
The PostgreSQL source distribution includes several examples of index methods implemented using GiST. The core system currently provides text search support (indexing for tsvector and tsquery) as well as R-Tree equivalent functionality for some of the built-in geometric data types. The following contrib modules also contain GiST operator classes:
btree_gist
B-tree equivalent functionality for several data types
cube
Indexing for multidimensional cubes
hstore
Module for storing (key, value) pairs
intarray
RD-Tree for one-dimensional array of int4 values
ltree
Indexing for tree-like structures
pg_trgm
Text similarity using trigram matching
seg
Indexing for “float ranges”
Traditionally, implementing a new index access method meant a lot of
difficult work. It was necessary to understand the inner workings of the
database, such as the lock manager and Write-Ahead Log. The
my_sortsupport(PG_FUNCTION_ARGS)
index scan, index build, or index tuple insertion). Be careful to pfree
the previous value when replacing a fn_extra
value, or the leak
will accumulate for the duration of the operation.
The simplest way to build a GiST index is just to insert all the entries, one by one. This tends to be slow for large indexes, because if the index tuples are scattered across the index and the index is large enough not to fit in cache, many random I/Os will be needed. PostgreSQL supports a buffering method for building GiST indexes, controlled by the buffering storage parameter of the CREATE INDEX command. The default behavior is good for most cases, but turning buffering off might speed up the build somewhat if the input data is ordered.
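The buffering parameter is given as a storage parameter; table and column names here are illustrative:

```sql
-- Force buffering off (may help when input data is already ordered):
CREATE INDEX pts_idx ON pts USING GIST (p) WITH (buffering = off);
```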
GiST stands for Generalized Search Tree. It is a balanced, tree-structured access method that acts as a base template in which to implement arbitrary indexing schemes. B-trees, R-trees and many other indexing schemes can be implemented in GiST.

GiST is maintained by Teodor Sigaev and Oleg Bartunov, and there is more information on their web site.
With Git you will make a copy of the entire code repository on your local machine, so you will have access to all history and branches offline. This is the fastest and most flexible way to develop or test patches.

git fetch

Git can do a lot more things than just fetch the source. For more information, consult the Git man pages, or see the website at https://git-scm.com.
+
This is a list of terms and their meaning in the context of
PostgreSQL and relational database
systems in general.
@@ -497,11 +497,20 @@
(just like a view),
but stores data in the same way that a
table does. It cannot be
- modified via INSERT
, UPDATE
, or
- DELETE
operations.
+ modified via INSERT
, UPDATE
,
+ DELETE
, or MERGE
operations.
For more information, see CREATE MATERIALIZED VIEW. +
+ An SQL command used to conditionally add, modify, + or remove rows + in a given table, + using data from a source + relation. +
+ For more information, see + MERGE.
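A sketch of the command's shape, using invented inventory and incoming tables as target and source:

```sql
MERGE INTO inventory AS t
USING incoming AS s
  ON t.item_id = s.item_id
WHEN MATCHED THEN
  UPDATE SET quantity = t.quantity + s.quantity
WHEN NOT MATCHED THEN
  INSERT (item_id, quantity) VALUES (s.item_id, s.quantity);
```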
A mechanism designed to allow several transactions to be @@ -1058,4 +1067,4 @@ It comprises many individual WAL records written sequentially to WAL files. -
+
GSSAPI is an industry-standard protocol
for secure authentication defined in
RFC 2743.
@@ -115,4 +115,4 @@
parameter. If that is set to true, client principals are matched to
user map entries case-insensitively. krb_realm
, if
set, is also matched case-insensitively.
-
+
PostgreSQL also has native support for using GSSAPI to encrypt client/server communications for increased security. Support requires that a GSSAPI @@ -28,4 +28,4 @@ behavior, GSSAPI encryption requires no setup beyond that which is necessary for GSSAPI authentication. (For more information on configuring that, see Section 21.6.) -
+
There are four kinds of pages in a hash index: the meta page (page zero),
which contains statically allocated control information; primary bucket
pages; overflow pages; and bitmap pages, which keep track of overflow
@@ -33,4 +33,4 @@
src/backend/access/hash/README
.
The split algorithm is crash safe and can be restarted if not completed
successfully.
-
+
PostgreSQL includes an implementation of persistent on-disk hash indexes, which are fully crash recoverable. Any data type can be indexed by a @@ -74,4 +74,4 @@ The expansion occurs in the foreground, which could increase execution time for user inserts. Thus, hash indexes may not be suitable for tables with a rapidly increasing number of rows. -
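For example (table and column names invented), a hash index is selected with USING hash and serves only equality comparisons:

```sql
CREATE INDEX accounts_token_idx ON accounts USING hash (token);

-- Can use the index:
SELECT * FROM accounts WHERE token = 'abc123';
-- Cannot use it; hash indexes do not support range comparisons:
SELECT * FROM accounts WHERE token < 'abc123';
```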
Table of Contents
+
Table of Contents
Database servers can work together to allow a second server to take over quickly if the primary server fails (high availability), or to allow several computers to serve the same @@ -54,4 +54,4 @@
The remainder of this section outlines various failover, replication, and load balancing solutions. -
+
The object-relational database management system now known as PostgreSQL is derived from the POSTGRES package written at the @@ -137,4 +137,4 @@
Details about what has happened in PostgreSQL since then can be found in Appendix E. -
+
Hot standby is the term used to describe the ability to connect to the server and run read-only queries while the server is in archive recovery or standby mode. This @@ -572,4 +572,4 @@ HINT: You can then restart the server after making the necessary configuration hot standby mode will generate an error.
-
+
When the optimizer determines that parallel query is the fastest execution
strategy for a particular query, it will create a query plan that includes
a Gather or Gather Merge
@@ -68,4 +68,4 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%';
order-preserving merge. In contrast, Gather
reads tuples
from the workers in whatever order is convenient, destroying any sort
order that may have existed.
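The plan shape described above can be observed with EXPLAIN; max_parallel_workers_per_gather is the setting that caps the number of workers under one Gather node (the exact plan and worker count vary by system):

```sql
SET max_parallel_workers_per_gather = 4;
EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%';
-- A parallel plan places a Gather node above the parallel portion,
-- e.g.:
--   Gather
--     Workers Planned: ...
--     ->  Parallel Seq Scan on pgbench_accounts
```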
-
+
This module implements the hstore
data type for storing sets of
key/value pairs within a single PostgreSQL value.
This can be useful in various scenarios, such as rows with many attributes
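A brief sketch of the type in use (table name invented):

```sql
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE items (id serial PRIMARY KEY, attrs hstore);
INSERT INTO items (attrs) VALUES ('color => red, size => large');

SELECT attrs -> 'color' AS color FROM items;  -- fetch a value by key
SELECT * FROM items WHERE attrs ? 'size';     -- rows containing a key
```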
@@ -696,4 +696,4 @@ ALTER TABLE tablename ALTER hstorecol TYPE hstore USING hstorecol || '';
Additional enhancements by Andrew Gierth <andrew@tao11.riddles.org.uk>
,
United Kingdom
-
+
Each index access method is described by a row in the
pg_am
system catalog. The pg_am
entry
@@ -177,4 +177,4 @@ typedef struct IndexAmRoutine
sensible: it means that there can only be one key column, but there can
also be included column(s). Also, included columns must be allowed to be
null, independently of amoptionalkey
.
-
+
The amcostestimate
function is given information describing
a possible index scan, including lists of WHERE and ORDER BY clauses that
have been determined to be usable with the index. It must return estimates
@@ -139,4 +139,4 @@ cost_qual_eval(&index_qual_cost, path->indexquals, root);
Examples of cost estimator functions can be found in
src/backend/utils/adt/selfuncs.c
.
-
+
The index construction and maintenance functions that an index access
method must provide in IndexAmRoutine
are:
@@ -484,4 +484,4 @@ amparallelrescan (IndexScanDesc scan);
must be restarted. It should reset any shared state set up by
aminitparallelscan
such that the scan will be restarted from
the beginning.
-
+
Index access methods must handle concurrent updates of the index by multiple processes. The core PostgreSQL system obtains @@ -88,4 +88,4 @@ integrity. When the flag is set, it indicates that the index access method implements finer-grained predicate locking, which will tend to reduce the frequency of such transaction cancellations. -
+
In an index scan, the index access method is responsible for regurgitating
the TIDs of all the tuples it has been told about that match the
scan keys. The access method is not involved in
@@ -120,4 +120,4 @@
Note that it is permitted for an access method to implement only
amgetbitmap
and not amgettuple
, or vice versa,
if its internal implementation is unsuited to one API or the other.
-
+
PostgreSQL enforces SQL uniqueness constraints using unique indexes, which are indexes that disallow multiple entries with identical keys. An access method that supports this @@ -106,4 +106,4 @@ target row is found in the recheck verifies that we are scanning for the same tuple values as were used in the original insertion.
-
Copyright © 1996–2023 The PostgreSQL Global Development Group
Table of Contents
Copyright © 1996–2024 The PostgreSQL Global Development Group
Table of Contents
Table of Contents
+
Table of Contents
This chapter defines the interface between the core PostgreSQL system and index access methods, which manage individual index types. The core system @@ -32,4 +32,4 @@ statement; but indexes do not deal with those, either.) Index entries for dead tuples are reclaimed (by vacuuming) when the dead tuples themselves are reclaimed. -
+
A single index scan can only use query clauses that use the index's
columns with operators of its operator class and are joined with
AND
. For example, given an index on (a, b)
@@ -58,4 +58,4 @@
common. If one of the types of query is much less common than the
others, you'd probably settle for creating just the two indexes that
best match the common types.
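Sketching the (a, b) example from the text (table and index names invented):

```sql
CREATE INDEX tab_a_b_idx ON tab (a, b);

-- One index scan: conditions on the index's columns joined with AND.
SELECT * FROM tab WHERE a = 5 AND b >= 42 AND b < 77;

-- Not a single scan of this index: an OR across different columns.
SELECT * FROM tab WHERE a = 5 OR b = 6;
```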
-
+
An index can support only one collation per index column. If multiple collations are of interest, multiple indexes may be needed.
@@ -28,4 +28,4 @@ SELECT * FROM test1c WHERE content > constant
CREATE INDEX test1c_content_y_index ON test1c (content COLLATE "y");
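Continuing the example, each index serves comparisons in its own collation, so a query must request the matching collation to use the second index (the collation names "x" and "y" are placeholders, as in the text):

```sql
CREATE INDEX test1c_content_index   ON test1c (content);
CREATE INDEX test1c_content_y_index ON test1c (content COLLATE "y");

-- Uses the second index, since the comparison is in collation "y":
SELECT * FROM test1c WHERE content > 'constant' COLLATE "y";
```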
-
+
Although indexes in PostgreSQL do not need maintenance or tuning, it is still important to check which indexes are actually used by the real-life query workload. @@ -79,4 +79,4 @@ appropriate, then you might have to resort to forcing index usage explicitly. You might also want to contact the PostgreSQL developers to examine the issue. -
+
An index column need not be just a column of the underlying table, but can be a function or scalar expression computed from one or more columns of the table. This feature is useful to obtain fast @@ -46,4 +46,4 @@ CREATE INDEX people_names ON people ((first_name || ' ' || last_name)); and so the speed of the search is equivalent to any other simple index query. Thus, indexes on expressions are useful when retrieval speed is more important than insertion and update speed. -
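Using the people_names index from the text, a query must repeat the indexed expression for the index to apply:

```sql
CREATE INDEX people_names ON people ((first_name || ' ' || last_name));

-- Matches the indexed expression, so the index can be used:
SELECT * FROM people
WHERE (first_name || ' ' || last_name) = 'John Smith';
```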
+
All indexes in PostgreSQL are secondary indexes, meaning that each index is stored separately from the table's main data area (which is called the @@ -206,4 +206,4 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; checked in the plan. PostgreSQL versions 9.6 and later will recognize such cases and allow index-only scans to be generated, but older versions will not. -
+
Suppose we have a table similar to this:
CREATE TABLE test1 ( @@ -58,6 +58,35 @@ CREATE INDEX test1_id_index ON test1 (id); Indexes can moreover be used in join searches. Thus, an index defined on a column that is part of a join condition can also significantly speed up queries with joins. ++ In general, PostgreSQL indexes can be used + to optimize queries that contain one or more
WHERE
+ orJOIN
clauses of the form + ++indexed-column
indexable-operator
comparison-value
++ + Here, the
indexed-column
is whatever + column or expression the index has been defined on. + Theindexable-operator
is an operator that + is a member of the index's operator class for + the indexed column. (More details about that appear below.) + And thecomparison-value
can be any + expression that is not volatile and does not reference the index's + table. ++ In some cases the query planner can extract an indexable clause of + this form from another SQL construct. A simple example is that if + the original clause was + +
+comparison-value
operator
indexed-column
++ + then it can be flipped around into indexable form if the + original
operator
has a commutator + operator that is a member of the index's operator class.Creating an index on a large table can take a long time. By default, PostgreSQL allows reads (
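For instance (table and index invented), both forms below are indexable; the second only after the planner commutes > into its commutator <:

```sql
CREATE TABLE orders (id integer, total numeric);
CREATE INDEX orders_total_idx ON orders (total);

SELECT * FROM orders WHERE total < 100;  -- already in indexable form
SELECT * FROM orders WHERE 100 > total;  -- flipped via the commutator
```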
SELECT
statements) to occur @@ -74,4 +103,4 @@ CREATE INDEX test1_id_index ON test1 (id); tuples. Therefore indexes that are seldom or never used in queries should be removed. -
+
An index can be defined on more than one column of a table. For example, if you have a table of this form:
@@ -79,4 +79,4 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); Section 11.5 and Section 11.9 for some discussion of the merits of different index configurations. -
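With the test2_mm_idx index shown above:

```sql
-- Uses both index columns:
SELECT name FROM test2 WHERE major = 5 AND minor = 17;
-- Uses the leading column only:
SELECT name FROM test2 WHERE major = 5;
-- A condition on minor alone scans the index far less efficiently:
SELECT name FROM test2 WHERE minor = 17;
```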
ORDER BY
+
ORDER BY
In addition to simply finding the rows to be returned by a query,
an index may be able to deliver them in a specific sorted order.
This allows a query's ORDER BY
specification to be honored
@@ -61,4 +61,4 @@ CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST);
speedups for certain queries. Whether it's worth maintaining such an
index depends on how often you use queries that require a special
sort ordering.
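With the test3_desc_index mentioned above, a matching ORDER BY can be satisfied directly from the index, with no separate sort step:

```sql
CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST);

SELECT * FROM test3 ORDER BY id DESC NULLS LAST;
```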
-
More information about partial indexes can be found in [ston89b], [olson93], and [seshadri95]. -
\ No newline at end of file + \ No newline at end of file diff --git a/doc/src/sgml/html/indexes-types.html b/doc/src/sgml/html/indexes-types.html index e043b7b..c5c8e25 100644 --- a/doc/src/sgml/html/indexes-types.html +++ b/doc/src/sgml/html/indexes-types.html @@ -1,9 +1,9 @@ -+
PostgreSQL provides several index types:
B-tree, Hash, GiST, SP-GiST, GIN, BRIN, and the extension bloom.
Each index type uses a different
- algorithm that is best suited to different types of queries.
+ algorithm that is best suited to different types of indexable clauses.
By default, the CREATE
INDEX
command creates
B-tree indexes, which fit the most common situations.
@@ -159,4 +159,4 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10;
The BRIN operator classes included in the standard distribution are
documented in Table 71.1.
For more information see Chapter 71.
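The index type is selected with the USING clause of CREATE INDEX; for example, the nearest-neighbor query shown above relies on a GiST index:

```sql
CREATE INDEX ON places USING gist (location);

SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10;
```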
-
Table of Contents
ORDER BY
+
Table of Contents
ORDER BY
Indexes are a common way to enhance database performance. An index allows the database server to find and retrieve specific rows much faster than it could do without an index. But indexes also add overhead to the database system as a whole, so they should be used sensibly. -
Table of Contents
information_schema_catalog_name
administrable_role_authorizations
applicable_roles
attributes
character_sets
check_constraint_routine_usage
check_constraints
collations
collation_character_set_applicability
column_column_usage
column_domain_usage
column_options
column_privileges
column_udt_usage
columns
constraint_column_usage
constraint_table_usage
data_type_privileges
domain_constraints
domain_udt_usage
domains
element_types
enabled_roles
foreign_data_wrapper_options
foreign_data_wrappers
foreign_server_options
foreign_servers
foreign_table_options
foreign_tables
key_column_usage
parameters
referential_constraints
role_column_grants
role_routine_grants
role_table_grants
role_udt_grants
role_usage_grants
routine_column_usage
routine_privileges
routine_routine_usage
routine_sequence_usage
routine_table_usage
routines
schemata
sequences
sql_features
sql_implementation_info
sql_parts
sql_sizing
table_constraints
table_privileges
tables
transforms
triggered_update_columns
triggers
udt_privileges
usage_privileges
user_defined_types
user_mapping_options
user_mappings
view_column_usage
view_routine_usage
view_table_usage
views
+
Table of Contents
information_schema_catalog_name
administrable_role_authorizations
applicable_roles
attributes
character_sets
check_constraint_routine_usage
check_constraints
collations
collation_character_set_applicability
column_column_usage
column_domain_usage
column_options
column_privileges
column_udt_usage
columns
constraint_column_usage
constraint_table_usage
data_type_privileges
domain_constraints
domain_udt_usage
domains
element_types
enabled_roles
foreign_data_wrapper_options
foreign_data_wrappers
foreign_server_options
foreign_servers
foreign_table_options
foreign_tables
key_column_usage
parameters
referential_constraints
role_column_grants
role_routine_grants
role_table_grants
role_udt_grants
role_usage_grants
routine_column_usage
routine_privileges
routine_routine_usage
routine_sequence_usage
routine_table_usage
routines
schemata
sequences
sql_features
sql_implementation_info
sql_parts
sql_sizing
table_constraints
table_privileges
tables
transforms
triggered_update_columns
triggers
udt_privileges
usage_privileges
user_defined_types
user_mapping_options
user_mappings
view_column_usage
view_routine_usage
view_table_usage
views
The information schema consists of a set of views that contain
information about the objects defined in the current database. The
information schema is defined in the SQL standard and can therefore
@@ -28,4 +28,4 @@
issues but contain the table name to help distinguish duplicate
rows, e.g., constraint_column_usage
,
constraint_table_usage
, table_constraints
.
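Because the information schema consists of ordinary views, it is queried like any other tables; for example ('public' here is just the common default schema):

```sql
SELECT table_name, table_type
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY table_name;
```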
-
administrable_role_authorizations
+
administrable_role_authorizations
The view administrable_role_authorizations
identifies all roles that the current user has the admin option
for.
@@ -25,4 +25,4 @@
Always YES
-
applicable_roles
+
applicable_roles
The view applicable_roles
identifies all roles
whose privileges the current user can use. This means there is
some chain of role grants from the current user to the role in
@@ -30,4 +30,4 @@
YES
if the grantee has the admin option on
the role, NO
if not
-
attributes
+
attributes
The view attributes
contains information about
the attributes of composite data types defined in the database.
(Note that the view does not give information about table columns,
@@ -223,4 +223,4 @@
See also under Section 37.17, a similarly structured view, for further information on some of the columns. -
\ No newline at end of file + \ No newline at end of file diff --git a/doc/src/sgml/html/infoschema-character-sets.html b/doc/src/sgml/html/infoschema-character-sets.html index ea75a22..ef3ad3c 100644 --- a/doc/src/sgml/html/infoschema-character-sets.html +++ b/doc/src/sgml/html/infoschema-character-sets.html @@ -1,5 +1,5 @@ -character_sets
+
character_sets
The view character_sets
identifies the character
sets available in the current database. Since PostgreSQL does not
support multiple character sets within one database, this view only
@@ -14,8 +14,8 @@
An encoding of some character repertoire. Most older character
repertoires only use one encoding form, and so there are no
- separate names for them (e.g., LATIN1
is an
- encoding form applicable to the LATIN1
+ separate names for them (e.g., LATIN2
is an
+ encoding form applicable to the LATIN2
repertoire). But for example Unicode has the encoding forms
UTF8
, UTF16
, etc. (not
all supported by PostgreSQL). Encoding forms are not exposed
@@ -83,4 +83,4 @@
settings of the current database. If there is no such
collation, then this column and the associated schema and
catalog columns are null.
-
check_constraint_routine_usage
+
check_constraint_routine_usage
The view check_constraint_routine_usage
identifies routines (functions and procedures) that are used by a
check constraint. Only those routines are shown that are owned by
@@ -39,4 +39,4 @@
The “specific name” of the function. See Section 37.45 for more information. -
check_constraints
+
check_constraints
The view check_constraints
contains all check
constraints, either defined on a table or on a domain, that are
owned by a currently enabled role. (The owner of the table or
@@ -29,4 +29,4 @@
The check expression of the check constraint -
collation_character_set_applicability
+
collation_character_set_applicability
The view collation_character_set_applicability
identifies which character set the available collations are
applicable to. In PostgreSQL, there is only one character set per
@@ -41,4 +41,4 @@
Name of the character set -
collations
+
collations
The view collations
contains the collations
available in the current database.
Table 37.8. collations
Columns
@@ -28,4 +28,4 @@
Always |
---|
column_column_usage
+
column_column_usage
The view column_column_usage
identifies all generated
columns that depend on another base column in the same table. Only tables
owned by a currently enabled role are included.
@@ -33,4 +33,4 @@
Name of the generated column -
column_domain_usage
+
column_domain_usage
The view column_domain_usage
identifies all
columns (of a table or a view) that make use of some domain defined
in the current database and owned by a currently enabled role.
@@ -43,4 +43,4 @@
Name of the column -
column_options
+
column_options
The view column_options
contains all the
options defined for foreign table columns in the current database. Only
those foreign table columns are shown that the current user has access to
@@ -39,4 +39,4 @@
Value of the option -
column_privileges
+
column_privileges
The view column_privileges
identifies all
privileges granted on columns to a currently enabled role or by a
currently enabled role. There is one row for each combination of
@@ -57,4 +57,4 @@
YES
if the privilege is grantable, NO
if not
-
column_udt_usage
+
column_udt_usage
The view column_udt_usage
identifies all columns
that use data types owned by a currently enabled role. Note that in
PostgreSQL, built-in data types behave
@@ -49,4 +49,4 @@
Name of the column -
columns
+
columns
The view columns
contains information about all
table columns (or view columns) in the database. System columns
(ctid
, etc.) are not included. Only those columns are
@@ -334,4 +334,4 @@
columns with their associated data types and treat domains as
separate types, you could write coalesce(domain_name,
udt_name)
, etc.
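Putting that coalesce hint to use ('mytable' is a placeholder name):

```sql
SELECT column_name,
       coalesce(domain_name, udt_name) AS effective_type
FROM information_schema.columns
WHERE table_name = 'mytable'
ORDER BY ordinal_position;
```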
-
constraint_column_usage
+
constraint_column_usage
The view constraint_column_usage
identifies all
columns in the current database that are used by some constraint.
Only those columns are shown that are contained in a table owned by
@@ -52,4 +52,4 @@
Name of the constraint -
constraint_table_usage
+
constraint_table_usage
The view constraint_table_usage
identifies all
tables in the current database that are used by some constraint and
are owned by a currently enabled role. (This is different from the
@@ -47,4 +47,4 @@
Name of the constraint -
data_type_privileges
+
data_type_privileges
The view data_type_privileges
identifies all
data type descriptors that the current user has access to, by way
of being the owner of the described object or having some privilege
@@ -49,4 +49,4 @@
The identifier of the data type descriptor, which is unique among the data type descriptors for that same object. -
+
The columns of the information schema views use special data types that are defined in the information schema. These are defined as simple domains over ordinary built-in types. You should not use @@ -30,4 +30,4 @@
Every column in the information schema has one of these five types. -
domain_constraints
+
domain_constraints
The view domain_constraints
contains all constraints
belonging to domains defined in the current database. Only those domains
are shown that the current user has access to (by way of being the owner or
@@ -49,4 +49,4 @@
YES
if the constraint is deferrable and initially deferred, NO
if not
-
domain_udt_usage
+
domain_udt_usage
The view domain_udt_usage
identifies all domains
that are based on data types owned by a currently enabled role.
Note that in PostgreSQL, built-in data
@@ -40,4 +40,4 @@
Name of the domain -
domains
+
domains
The view domains
contains all
domains defined in the
current database. Only those domains are shown that the current user has
@@ -194,4 +194,4 @@
instances of such identifiers. (The specific format of the
identifier is not defined and not guaranteed to remain the same
in future versions.)
-
element_types
+
element_types
The view element_types
contains the data type
descriptors of the elements of arrays. When a table column, composite-type attribute,
domain, function parameter, or function return value is defined to
@@ -191,4 +191,4 @@ ORDER BY c.ordinal_position;
An identifier of the data type descriptor of the element. This is currently not useful. -
enabled_roles
+
enabled_roles
The view enabled_roles
identifies the currently
“enabled roles”. The enabled roles are recursively
defined as the current user together with all roles that have been
@@ -25,4 +25,4 @@
Name of a role -
foreign_data_wrapper_options
+
foreign_data_wrapper_options
The view foreign_data_wrapper_options
contains
all the options defined for foreign-data wrappers in the current
database. Only those foreign-data wrappers are shown that the
@@ -30,4 +30,4 @@
Value of the option -
foreign_data_wrappers
+
foreign_data_wrappers
The view foreign_data_wrappers
contains all
foreign-data wrappers defined in the current database. Only those
foreign-data wrappers are shown that the current user has access to
@@ -35,4 +35,4 @@
Language used to implement this foreign-data wrapper -
foreign_server_options
+
foreign_server_options
The view foreign_server_options
contains all the
options defined for foreign servers in the current database. Only
those foreign servers are shown that the current user has access to
@@ -29,4 +29,4 @@
Value of the option -
foreign_servers
+
foreign_servers
The view foreign_servers
contains all foreign
servers defined in the current database. Only those foreign
servers are shown that the current user has access to (by way of
@@ -45,4 +45,4 @@
Name of the owner of the foreign server -
foreign_table_options
+
foreign_table_options
The view foreign_table_options
contains all the
options defined for foreign tables in the current database. Only
those foreign tables are shown that the current user has access to
@@ -34,4 +34,4 @@
Value of the option -
foreign_tables
+
foreign_tables
The view foreign_tables
contains all foreign
tables defined in the current database. Only those foreign
tables are shown that the current user has access to (by way of
@@ -34,4 +34,4 @@
Name of the foreign server -
information_schema_catalog_name
+
information_schema_catalog_name
information_schema_catalog_name
is a table that
always contains one row and one column containing the name of the
current database (current catalog, in SQL terminology).
@@ -13,4 +13,4 @@
Name of the database that contains this information schema -
key_column_usage
+
key_column_usage
The view key_column_usage
identifies all columns
in the current database that are restricted by some unique, primary
key, or foreign key constraint. Check constraints are not included
@@ -62,4 +62,4 @@
For a foreign-key constraint, ordinal position of the referenced
column within its unique constraint (count starts at 1);
otherwise null
-
parameters
+
parameters
The view parameters
contains information about
the parameters (arguments) of all functions in the current database.
Only those functions are shown that the current user has access to
@@ -185,4 +185,4 @@
The default expression of the parameter, or null if none or if the function is not owned by a currently enabled role. -
referential_constraints
+
referential_constraints
The view referential_constraints
contains all
referential (foreign key) constraints in the current database.
Only those constraints are shown for which the current user has
@@ -67,4 +67,4 @@
CASCADE
, SET NULL
,
SET DEFAULT
, RESTRICT
, or
NO ACTION
.
-
role_column_grants
+
role_column_grants
The view role_column_grants
identifies all
privileges granted on columns where the grantor or grantee is a
currently enabled role. Further information can be found under
@@ -55,4 +55,4 @@
YES
if the privilege is grantable, NO
if not
-
role_routine_grants
+
role_routine_grants
The view role_routine_grants
identifies all
privileges granted on functions where the grantor or grantee is a
currently enabled role. Further information can be found under
@@ -63,4 +63,4 @@
YES
if the privilege is grantable, NO
if not
-
role_table_grants
+
role_table_grants
The view role_table_grants
identifies all
privileges granted on tables or views where the grantor or grantee
is a currently enabled role. Further information can be found
@@ -61,4 +61,4 @@
in the SELECT
privilege, so this column
shows YES
if the privilege
is SELECT
, else NO
.
-
role_udt_grants
+
role_udt_grants
The view role_udt_grants
is intended to identify
USAGE
privileges granted on user-defined types
where the grantor or grantee is a currently enabled role. Further
@@ -50,4 +50,4 @@
YES
if the privilege is grantable, NO
if not
-
role_usage_grants
+
role_usage_grants
The view role_usage_grants
identifies
USAGE
privileges granted on various kinds of
objects where the grantor or grantee is a currently enabled role.
@@ -54,4 +54,4 @@
YES
if the privilege is grantable, NO
if not
-
routine_column_usage
- The view routine_column_usage
is meant to identify all
- columns that are used by a function or procedure. This information is
- currently not tracked by PostgreSQL.
+
routine_column_usage
+ The view routine_column_usage
identifies all columns
+ that are used by a function or procedure, either in the SQL body or in
+ parameter default expressions. (This only works for unquoted SQL bodies,
+ not quoted bodies or functions in other languages.) A column is only
+ included if its table is owned by a currently enabled role.
Table 37.38. routine_column_usage
Columns
Column Type @@ -59,4 +61,4 @@Name of the column that is used by the function - |
---|
routine_privileges
+
routine_privileges
The view routine_privileges
identifies all
privileges granted on functions to a currently enabled role or by a
currently enabled role. There is one row for each combination of function,
@@ -59,4 +59,4 @@
YES
if the privilege is grantable, NO
if not
-
routine_routine_usage
- The view routine_routine_usage
is meant to identify all
- functions or procedures that are used by another (or the same) function or
- procedure, either in the body or in parameter default expressions.
- Currently, only functions used in parameter default expressions are
- tracked. An entry is included here only if the used function is owned by a
- currently enabled role. (There is no such restriction on the using
+
routine_routine_usage
+ The view routine_routine_usage
identifies all functions
+ or procedures that are used by another (or the same) function or procedure,
+ either in the SQL body or in parameter default expressions. (This only
+ works for unquoted SQL bodies, not quoted bodies or functions in other
+ languages.) An entry is included here only if the used function is owned
+ by a currently enabled role. (There is no such restriction on the using
function.)
Note that the entries for both functions in the view refer to the @@ -52,4 +52,4 @@
The “specific name” of the function that is used by the first function. -
routine_sequence_usage
- The view routine_sequence_usage
is meant to identify all
- sequences that are used by a function or procedure, either in the body or
- in parameter default expressions. Currently, only sequences used in
- parameter default expressions are tracked. A sequence is only included if
- that sequence is owned by a currently enabled role.
+
routine_sequence_usage
+ The view routine_sequence_usage
identifies all sequences
+ that are used by a function or procedure, either in the SQL body or in
+ parameter default expressions. (This only works for unquoted SQL bodies,
+ not quoted bodies or functions in other languages.) A sequence is only
+ included if that sequence is owned by a currently enabled role.
Table 37.41. routine_sequence_usage
Columns
Column Type @@ -56,4 +56,4 @@Name of the sequence that is used by the function - |
---|
routine_table_usage
+
routine_table_usage
The view routine_table_usage
is meant to identify all
tables that are used by a function or procedure. This information is
currently not tracked by PostgreSQL.
@@ -54,4 +54,4 @@
Name of the table that is used by the function -
routines
+
routines
The view routines
contains all functions and procedures in the
current database. Only those functions and procedures are shown that the current
user has access to (by way of being the owner or having some
@@ -461,4 +461,4 @@
Applies to a feature not available in PostgreSQL -
+
The information schema itself is a schema named
information_schema
. This schema automatically
exists in all databases. The owner of this schema is the initial
@@ -13,4 +13,4 @@
schema are generic names that might occur in user applications, you
should be careful if you want to put the information schema in the
path.
-
schemata
+
schemata
The view schemata
contains all schemas in the current
database that the current user has access to (by way of being the owner or
having some privilege).
@@ -43,4 +43,4 @@
Applies to a feature not available in PostgreSQL -
sequences
+
sequences
The view sequences
contains all sequences
defined in the current database. Only those sequences are shown
that the current user has access to (by way of being the owner or
@@ -84,4 +84,4 @@
Note that in accordance with the SQL standard, the start, minimum, maximum, and increment values are returned as character strings. -
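Since those values arrive as character strings, cast them when numbers are needed:

```sql
SELECT sequence_name,
       start_value::bigint,
       increment::bigint
FROM information_schema.sequences;
```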
\ No newline at end of file + \ No newline at end of file diff --git a/doc/src/sgml/html/infoschema-sql-features.html b/doc/src/sgml/html/infoschema-sql-features.html index 4bf038d..a4d94dd 100644 --- a/doc/src/sgml/html/infoschema-sql-features.html +++ b/doc/src/sgml/html/infoschema-sql-features.html @@ -1,5 +1,5 @@ -sql_features
+
sql_features
The table sql_features
contains information
about which formal features defined in the SQL standard are
supported by PostgreSQL. This is the
@@ -47,4 +47,4 @@
Possibly a comment about the supported status of the feature -
sql_implementation_info
+
sql_implementation_info
The table sql_implementation_info
contains
information about various aspects that are left
implementation-defined by the SQL standard. This information is
@@ -42,4 +42,4 @@
Possibly a comment pertaining to the implementation information item -
sql_parts
+
sql_parts
The table sql_parts
contains information about
which of the several parts of the SQL standard are supported by
PostgreSQL.
@@ -36,4 +36,4 @@
Possibly a comment about the supported status of the part -
sql_sizing
+
sql_sizing
The table sql_sizing
contains information about
various size limits and maximum values in
PostgreSQL. This information is
@@ -35,4 +35,4 @@
Possibly a comment pertaining to the sizing item -
table_constraints
+
table_constraints
The view table_constraints
contains all
constraints belonging to tables that the current user owns or has
some privilege other than SELECT
on.
@@ -70,4 +70,4 @@
if the constraint treats nulls as distinct or NO
if
it treats nulls as not distinct, otherwise null for other types of
constraints.
-
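The table_constraints view described above can be queried like any other; an illustrative example (the public schema here is just the usual default, substitute your own):

```sql
-- List constraints on tables in the public schema, grouped by kind.
SELECT table_name, constraint_name, constraint_type
  FROM information_schema.table_constraints
 WHERE table_schema = 'public'
 ORDER BY table_name, constraint_type;
```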
table_privileges
+
table_privileges
The view table_privileges
identifies all
privileges granted on tables or views to a currently enabled role
or by a currently enabled role. There is one row for each
@@ -57,4 +57,4 @@
in the SELECT
privilege, so this column
shows YES
if the privilege
is SELECT
, else NO
.
-
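For example, to see what the table_privileges view reports for the role you are connected as (an illustrative query, not from the patch; note that grants to PUBLIC appear under the grantee PUBLIC rather than your role name):

```sql
-- Table and view privileges granted directly to the current role.
SELECT table_schema, table_name, privilege_type, is_grantable
  FROM information_schema.table_privileges
 WHERE grantee = current_user
 ORDER BY table_schema, table_name;
```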
tables
+
tables
The view tables
contains all tables and views
defined in the current database. Only those tables and views are
shown that the current user has access to (by way of being the
@@ -79,4 +79,4 @@
Not yet implemented -
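A common use of the tables view described above is to list user objects while filtering out the system schemas; a sketch, assuming a connected session:

```sql
-- User tables and views, excluding PostgreSQL's own schemas.
SELECT table_schema, table_name, table_type
  FROM information_schema.tables
 WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
 ORDER BY table_schema, table_name;
```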
transforms
+
transforms
The view transforms
contains information about the
transforms defined in the current database. More precisely, it contains a
row for each function contained in a transform (the “from SQL”
@@ -52,4 +52,4 @@
FROM SQL
or TO SQL
-
triggered_update_columns
+
triggered_update_columns
For triggers in the current database that specify a column list
(like UPDATE OF column1, column2
), the
view triggered_update_columns
identifies these
@@ -48,4 +48,4 @@
Name of the column that the trigger is defined on -
triggers
+
triggers
The view triggers
contains all triggers defined
in the current database on tables and views that the current user owns
or has some privilege other than SELECT
on.
@@ -147,4 +147,4 @@
respectively.
That was how they were named in the SQL:1999 standard.
The new naming conforms to SQL:2003 and later.
-
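The triggers view described above can be summarized per table; an illustrative query (output depends on the triggers defined in your database):

```sql
-- Visible triggers with their firing event and timing.
SELECT trigger_schema, trigger_name,
       event_manipulation, action_timing, event_object_table
  FROM information_schema.triggers
 ORDER BY trigger_schema, event_object_table;
```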
udt_privileges
+
udt_privileges
The view udt_privileges
identifies
USAGE
privileges granted on user-defined types to a
currently enabled role or by a currently enabled role. There is one row for
@@ -47,4 +47,4 @@
YES
if the privilege is grantable, NO
if not
-
usage_privileges
+
usage_privileges
The view usage_privileges
identifies
USAGE
privileges granted on various kinds of
objects to a currently enabled role or by a currently enabled role.
@@ -63,4 +63,4 @@
YES
if the privilege is grantable, NO
if not
-
user_defined_types
+
user_defined_types
The view user_defined_types
currently contains
all composite types defined in the current database.
Only those types are shown that the current user has access to (by way
@@ -165,4 +165,4 @@
Applies to a feature not available in PostgreSQL -
user_mapping_options
+
user_mapping_options
The view user_mapping_options
contains all the
options defined for user mappings in the current database. Only
those user mappings are shown where the current user has access to
@@ -42,4 +42,4 @@
server owner, or the current user is a superuser. The intent is
to protect password information stored as a user mapping
option.
-
user_mappings
+
user_mappings
The view user_mappings
contains all user
mappings defined in the current database. Only those user mappings
are shown where the current user has access to the corresponding
@@ -27,4 +27,4 @@
Name of the foreign server used by this mapping -
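Putting the columns of the user_mappings view together, a minimal query (illustrative; it returns rows only if foreign servers and user mappings exist in the database):

```sql
-- Each user mapping and the foreign server it targets.
SELECT authorization_identifier, foreign_server_name
  FROM information_schema.user_mappings;
```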
view_column_usage
+
view_column_usage
The view view_column_usage
identifies all
columns that are used in the query expression of a view (the
SELECT
statement that defines the view). A
@@ -51,4 +51,4 @@
Name of the column that is used by the view -
view_routine_usage
+
view_routine_usage
The view view_routine_usage
identifies all
routines (functions and procedures) that are used in the query
expression of a view (the SELECT
statement that
@@ -40,4 +40,4 @@
The “specific name” of the function. See Section 37.45 for more information. -
view_table_usage
+
view_table_usage
The view view_table_usage
identifies all tables
that are used in the query expression of a view (the
SELECT
statement that defines the view). A
@@ -44,4 +44,4 @@
Name of the table that is used by the view -
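The view_table_usage view described above is handy for dependency checks, e.g. finding every view that would be affected by changing a given table. A sketch, assuming a connected session:

```sql
-- Which base tables each view's defining query reads from.
SELECT view_schema, view_name, table_schema, table_name
  FROM information_schema.view_table_usage
 ORDER BY view_schema, view_name;
```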
views
+
views
The view views
contains all views defined in the
current database. Only those views are shown that the current user
has access to (by way of being the owner or having some privilege).
@@ -67,4 +67,4 @@
YES
if the view has an INSTEAD OF
INSERT
trigger defined on it, NO
if not
-
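The updatability columns of the views view described above can be inspected directly; an illustrative query (not part of this patch), filtering out system schemas:

```sql
-- Each user view, whether it is updatable, and whether an
-- INSTEAD OF INSERT trigger is defined on it.
SELECT table_schema, table_name,
       is_updatable, is_trigger_insertable_into
  FROM information_schema.views
 WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
 ORDER BY table_schema, table_name;
```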
+
PostgreSQL is available in the form of binary packages for most common operating systems today. When available, this is the recommended way to install PostgreSQL for users of the system. Building
@@ -10,4 +10,4 @@
the download section on the PostgreSQL website at https://www.postgresql.org/download/ and follow the instructions for the specific platform. -
+
The PostgreSQL source code for released versions can be obtained from the download section of our website: https://www.postgresql.org/ftp/source/.
@@ -17,4 +17,4 @@
Alternatively, you can use the Git version control system; see Section I.1 for more information. -
+
On some systems with shared libraries
you need to tell the system how to find the newly installed
shared libraries. The systems on which this is
@@ -100,4 +100,4 @@ export MANPATH
user that plans to use the database sets PGHOST
. This
is not required, however; the settings can be communicated via command
line options to most client programs.
-
Configuration
+
Configuration
The first step of the installation procedure is to configure the
source tree for your system and choose the options you would like.
This is done by running the configure
script. For a
@@ -815,4 +815,4 @@ build-postgresql:
makefiles. Which to use is a matter of preference, but a common habit
among developers is to use PROFILE
for one-time flag
adjustments, while COPT
might be kept set all the time.
-
+
In general, a modern Unix-compatible platform should be able to run
PostgreSQL.
The platforms that had received specific testing at the
@@ -194,4 +194,4 @@
run the regression tests you will temporarily need up to an extra
300 MB. Use the df
command to check free disk
space.
-
+
./configure
make
@@ -16,4 +16,4 @@
su - postgres
The long version is the rest of this chapter. -
PostgreSQL can be built using the Visual C++ compiler suite from Microsoft.
These compilers can be either from Visual Studio,
@@ -338,4 +338,4 @@ $ENV{PROVE_TESTS}='t/020*.pl t/010*.pl'
zstd
, which will search for a command by that
name in the configured PATH
.
-