author      Daniel Baumann <daniel.baumann@progress-linux.org>   2024-05-04 17:41:08 +0000
committer   Daniel Baumann <daniel.baumann@progress-linux.org>   2024-05-04 17:41:08 +0000
commit      506ed8899b3a97e512be3fd6d44d5b11463bf9bf (patch)
tree        808913770c5e6935d3714058c2a066c57b4632ec /docs/api
parent      Initial commit. (diff)
download    psycopg3-506ed8899b3a97e512be3fd6d44d5b11463bf9bf.tar.xz
            psycopg3-506ed8899b3a97e512be3fd6d44d5b11463bf9bf.zip

Adding upstream version 3.1.7.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to '')
-rw-r--r--  docs/api/abc.rst           75
-rw-r--r--  docs/api/adapt.rst         91
-rw-r--r--  docs/api/connections.rst  489
-rw-r--r--  docs/api/conninfo.rst      24
-rw-r--r--  docs/api/copy.rst         117
-rw-r--r--  docs/api/crdb.rst         120
-rw-r--r--  docs/api/cursors.rst      517
-rw-r--r--  docs/api/dns.rst          145
-rw-r--r--  docs/api/errors.rst       540
-rw-r--r--  docs/api/index.rst         29
-rw-r--r--  docs/api/module.rst        59
-rw-r--r--  docs/api/objects.rst      256
-rw-r--r--  docs/api/pool.rst         331
-rw-r--r--  docs/api/pq.rst           218
-rw-r--r--  docs/api/rows.rst          74
-rw-r--r--  docs/api/sql.rst          151
-rw-r--r--  docs/api/types.rst        168
17 files changed, 3404 insertions, 0 deletions
diff --git a/docs/api/abc.rst b/docs/api/abc.rst new file mode 100644 index 0000000..9514e9b --- /dev/null +++ b/docs/api/abc.rst @@ -0,0 +1,75 @@ +`!abc` -- Psycopg abstract classes +================================== + +The module exposes Psycopg definitions which can be used for static type +checking. + +.. module:: psycopg.abc + +.. autoclass:: Dumper(cls, context=None) + + :param cls: The type that will be managed by this dumper. + :type cls: type + :param context: The context where the transformation is performed. If not + specified the conversion might be inaccurate, for instance it will not + be possible to know the connection encoding or the server date format. + :type context: `AdaptContext` or None + + A partial implementation of this protocol (implementing everything except + `dump()`) is available as `psycopg.adapt.Dumper`. + + .. autoattribute:: format + + .. automethod:: dump + + The format returned by dump shouldn't contain quotes or escaped + values. + + .. automethod:: quote + + .. tip:: + + This method will be used by `~psycopg.sql.Literal` to convert a + value client-side. + + This method only makes sense for text dumpers; the result of calling + it on a binary dumper is undefined. It might scratch your car, or burn + your cake. Don't tell me I didn't warn you. + + .. autoattribute:: oid + + If the OID is not specified, PostgreSQL will try to infer the type + from the context, but this may fail in some contexts and may require a + cast (e.g. specifying :samp:`%s::{type}` for its placeholder). + + You can use the `psycopg.adapters`\ ``.``\ + `~psycopg.adapt.AdaptersMap.types` registry to find the OID of builtin + types, and you can use `~psycopg.types.TypeInfo` to extend the + registry to custom types. + + .. automethod:: get_key + .. automethod:: upgrade + + +.. autoclass:: Loader(oid, context=None) + + :param oid: The type that will be managed by this dumper. + :type oid: int + :param context: The context where the transformation is performed. If not + specified the conversion might be inaccurate, for instance it will not + be possible to know the connection encoding or the server date format. + :type context: `AdaptContext` or None + + A partial implementation of this protocol (implementing everything except + `load()`) is available as `psycopg.adapt.Loader`. + + .. autoattribute:: format + + .. automethod:: load + + +.. autoclass:: AdaptContext + :members: + + .. seealso:: :ref:`adaptation` for an explanation about how contexts are + connected. diff --git a/docs/api/adapt.rst b/docs/api/adapt.rst new file mode 100644 index 0000000..e47816c --- /dev/null +++ b/docs/api/adapt.rst @@ -0,0 +1,91 @@ +`adapt` -- Types adaptation +=========================== + +.. module:: psycopg.adapt + +The `!psycopg.adapt` module exposes a set of objects useful for the +configuration of *data adaptation*, which is the conversion of Python objects +to PostgreSQL data types and back. + +These objects are useful if you need to configure data adaptation, i.e. +if you need to change the default way that Psycopg converts between types or +if you want to adapt custom data types and objects. You don't need this object +in the normal use of Psycopg. + +See :ref:`adaptation` for an overview of the Psycopg adaptation system. + +.. _abstract base class: https://docs.python.org/glossary.html#term-abstract-base-class + + +Dumpers and loaders +------------------- + +.. 
autoclass:: Dumper(cls, context=None) + + This is an `abstract base class`_, partially implementing the + `~psycopg.abc.Dumper` protocol. Subclasses *must* at least implement the + `.dump()` method and optionally override other members. + + .. automethod:: dump + + .. attribute:: format + :type: psycopg.pq.Format + :value: TEXT + + Class attribute. Set it to `~psycopg.pq.Format.BINARY` if the class + `dump()` methods converts the object to binary format. + + .. automethod:: quote + + .. automethod:: get_key + + .. automethod:: upgrade + + +.. autoclass:: Loader(oid, context=None) + + This is an `abstract base class`_, partially implementing the + `~psycopg.abc.Loader` protocol. Subclasses *must* at least implement the + `.load()` method and optionally override other members. + + .. automethod:: load + + .. attribute:: format + :type: psycopg.pq.Format + :value: TEXT + + Class attribute. Set it to `~psycopg.pq.Format.BINARY` if the class + `load()` methods converts the object from binary format. + + +Other objects used in adaptations +--------------------------------- + +.. autoclass:: PyFormat + :members: + + +.. autoclass:: AdaptersMap + + .. seealso:: :ref:`adaptation` for an explanation about how contexts are + connected. + + .. automethod:: register_dumper + .. automethod:: register_loader + + .. attribute:: types + + The object where to look up for types information (such as the mapping + between type names and oids in the specified context). + + :type: `~psycopg.types.TypesRegistry` + + .. automethod:: get_dumper + .. automethod:: get_dumper_by_oid + .. automethod:: get_loader + + +.. autoclass:: Transformer(context=None) + + :param context: The context where the transformer should operate. + :type context: `~psycopg.abc.AdaptContext` diff --git a/docs/api/connections.rst b/docs/api/connections.rst new file mode 100644 index 0000000..db25382 --- /dev/null +++ b/docs/api/connections.rst @@ -0,0 +1,489 @@ +.. currentmodule:: psycopg + +Connection classes +================== + +The `Connection` and `AsyncConnection` classes are the main wrappers for a +PostgreSQL database session. You can imagine them similar to a :program:`psql` +session. + +One of the differences compared to :program:`psql` is that a `Connection` +usually handles a transaction automatically: other sessions will not be able +to see the changes until you have committed them, more or less explicitly. +Take a look to :ref:`transactions` for the details. + + +The `!Connection` class +----------------------- + +.. autoclass:: Connection() + + This class implements a `DBAPI-compliant interface`__. It is what you want + to use if you write a "classic", blocking program (eventually using + threads or Eventlet/gevent for concurrency). If your program uses `asyncio` + you might want to use `AsyncConnection` instead. + + .. __: https://www.python.org/dev/peps/pep-0249/#connection-objects + + Connections behave as context managers: on block exit, the current + transaction will be committed (or rolled back, in case of exception) and + the connection will be closed. + + .. automethod:: connect + + :param conninfo: The `connection string`__ (a ``postgresql://`` url or + a list of ``key=value`` pairs) to specify where and how to connect. + :param kwargs: Further parameters specifying the connection string. + They override the ones specified in `!conninfo`. + :param autocommit: If `!True` don't start transactions automatically. + See :ref:`transactions` for details. 
+ :param row_factory: The row factory specifying what type of records + to create fetching data (default: `~psycopg.rows.tuple_row()`). See + :ref:`row-factories` for details. + :param cursor_factory: Initial value for the `cursor_factory` attribute + of the connection (new in Psycopg 3.1). + :param prepare_threshold: Initial value for the `prepare_threshold` + attribute of the connection (new in Psycopg 3.1). + + More specialized use: + + :param context: A context to copy the initial adapters configuration + from. It might be an `~psycopg.adapt.AdaptersMap` with customized + loaders and dumpers, used as a template to create several connections. + See :ref:`adaptation` for further details. + + .. __: https://www.postgresql.org/docs/current/libpq-connect.html + #LIBPQ-CONNSTRING + + This method is also aliased as `psycopg.connect()`. + + .. seealso:: + + - the list of `the accepted connection parameters`__ + - the `environment variables`__ affecting connection + + .. __: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS + .. __: https://www.postgresql.org/docs/current/libpq-envars.html + + .. versionchanged:: 3.1 + added `!prepare_threshold` and `!cursor_factory` parameters. + + .. automethod:: close + + .. note:: + + You can use:: + + with psycopg.connect() as conn: + ... + + to close the connection automatically when the block is exited. + See :ref:`with-connection`. + + .. autoattribute:: closed + .. autoattribute:: broken + + .. method:: cursor(*, binary: bool = False, \ + row_factory: Optional[RowFactory] = None) -> Cursor + .. method:: cursor(name: str, *, binary: bool = False, \ + row_factory: Optional[RowFactory] = None, \ + scrollable: Optional[bool] = None, withhold: bool = False) -> ServerCursor + :noindex: + + Return a new cursor to send commands and queries to the connection. + + :param name: If not specified create a client-side cursor, if + specified create a server-side cursor. See + :ref:`cursor-types` for details. + :param binary: If `!True` return binary values from the database. All + the types returned by the query must have a binary + loader. See :ref:`binary-data` for details. + :param row_factory: If specified override the `row_factory` set on the + connection. See :ref:`row-factories` for details. + :param scrollable: Specify the `~ServerCursor.scrollable` property of + the server-side cursor created. + :param withhold: Specify the `~ServerCursor.withhold` property of + the server-side cursor created. + :return: A cursor of the class specified by `cursor_factory` (or + `server_cursor_factory` if `!name` is specified). + + .. note:: + + You can use:: + + with conn.cursor() as cur: + ... + + to close the cursor automatically when the block is exited. + + .. autoattribute:: cursor_factory + + The type, or factory function, returned by `cursor()` and `execute()`. + + Default is `psycopg.Cursor`. + + .. autoattribute:: server_cursor_factory + + The type, or factory function, returned by `cursor()` when a name is + specified. + + Default is `psycopg.ServerCursor`. + + .. autoattribute:: row_factory + + The row factory defining the type of rows returned by + `~Cursor.fetchone()` and the other cursor fetch methods. + + The default is `~psycopg.rows.tuple_row`, which means that the fetch + methods will return simple tuples. + + .. seealso:: See :ref:`row-factories` for details about defining the + objects returned by cursors. + + .. automethod:: execute + + :param query: The query to execute. 
+ :type query: `!str`, `!bytes`, `sql.SQL`, or `sql.Composed` + :param params: The parameters to pass to the query, if any. + :type params: Sequence or Mapping + :param prepare: Force (`!True`) or disallow (`!False`) preparation of + the query. By default (`!None`) prepare automatically. See + :ref:`prepared-statements`. + :param binary: If `!True` the cursor will return binary values from the + database. All the types returned by the query must have a binary + loader. See :ref:`binary-data` for details. + + The method simply creates a `Cursor` instance, `~Cursor.execute()` the + query requested, and returns it. + + See :ref:`query-parameters` for all the details about executing + queries. + + .. automethod:: pipeline + + The method is a context manager: you should call it using:: + + with conn.pipeline() as p: + ... + + At the end of the block, a synchronization point is established and + the connection returns in normal mode. + + You can call the method recursively from within a pipeline block. + Innermost blocks will establish a synchronization point on exit, but + pipeline mode will be kept until the outermost block exits. + + See :ref:`pipeline-mode` for details. + + .. versionadded:: 3.1 + + + .. rubric:: Transaction management methods + + For details see :ref:`transactions`. + + .. automethod:: commit + .. automethod:: rollback + .. automethod:: transaction + + .. note:: + + The method must be called with a syntax such as:: + + with conn.transaction(): + ... + + with conn.transaction() as tx: + ... + + The latter is useful if you need to interact with the + `Transaction` object. See :ref:`transaction-context` for details. + + Inside a transaction block it will not be possible to call `commit()` + or `rollback()`. + + .. autoattribute:: autocommit + + The property is writable for sync connections, read-only for async + ones: you should call `!await` `~AsyncConnection.set_autocommit` + :samp:`({value})` instead. + + The following three properties control the characteristics of new + transactions. See :ref:`transaction-characteristics` for details. + + .. autoattribute:: isolation_level + + `!None` means use the default set in the default_transaction_isolation__ + configuration parameter of the server. + + .. __: https://www.postgresql.org/docs/current/runtime-config-client.html + #GUC-DEFAULT-TRANSACTION-ISOLATION + + .. autoattribute:: read_only + + `!None` means use the default set in the default_transaction_read_only__ + configuration parameter of the server. + + .. __: https://www.postgresql.org/docs/current/runtime-config-client.html + #GUC-DEFAULT-TRANSACTION-READ-ONLY + + .. autoattribute:: deferrable + + `!None` means use the default set in the default_transaction_deferrable__ + configuration parameter of the server. + + .. __: https://www.postgresql.org/docs/current/runtime-config-client.html + #GUC-DEFAULT-TRANSACTION-DEFERRABLE + + + .. rubric:: Checking and configuring the connection state + + .. attribute:: pgconn + :type: psycopg.pq.PGconn + + The `~pq.PGconn` libpq connection wrapper underlying the `!Connection`. + + It can be used to send low level commands to PostgreSQL and access + features not currently wrapped by Psycopg. + + .. autoattribute:: info + + .. autoattribute:: prepare_threshold + + See :ref:`prepared-statements` for details. + + + .. autoattribute:: prepared_max + + If more queries need to be prepared, old ones are deallocated__. + + .. __: https://www.postgresql.org/docs/current/sql-deallocate.html + + + .. 
rubric:: Methods you can use to do something cool + + .. automethod:: cancel + + .. automethod:: notifies + + Notifies are received after using :sql:`LISTEN` in a connection, when + any sessions in the database generates a :sql:`NOTIFY` on one of the + listened channels. + + .. automethod:: add_notify_handler + + See :ref:`async-notify` for details. + + .. automethod:: remove_notify_handler + + .. automethod:: add_notice_handler + + See :ref:`async-messages` for details. + + .. automethod:: remove_notice_handler + + .. automethod:: fileno + + + .. _tpc-methods: + + .. rubric:: Two-Phase Commit support methods + + .. versionadded:: 3.1 + + .. seealso:: :ref:`two-phase-commit` for an introductory explanation of + these methods. + + .. automethod:: xid + + .. automethod:: tpc_begin + + :param xid: The id of the transaction + :type xid: Xid or str + + This method should be called outside of a transaction (i.e. nothing + may have executed since the last `commit()` or `rollback()` and + `~ConnectionInfo.transaction_status` is `~pq.TransactionStatus.IDLE`). + + Furthermore, it is an error to call `!commit()` or `!rollback()` + within the TPC transaction: in this case a `ProgrammingError` + is raised. + + The `!xid` may be either an object returned by the `xid()` method or a + plain string: the latter allows to create a transaction using the + provided string as PostgreSQL transaction id. See also + `tpc_recover()`. + + + .. automethod:: tpc_prepare + + A `ProgrammingError` is raised if this method is used outside of a TPC + transaction. + + After calling `!tpc_prepare()`, no statements can be executed until + `tpc_commit()` or `tpc_rollback()` will be + called. + + .. seealso:: The |PREPARE TRANSACTION|_ PostgreSQL command. + + .. |PREPARE TRANSACTION| replace:: :sql:`PREPARE TRANSACTION` + .. _PREPARE TRANSACTION: https://www.postgresql.org/docs/current/static/sql-prepare-transaction.html + + + .. automethod:: tpc_commit + + :param xid: The id of the transaction + :type xid: Xid or str + + When called with no arguments, `!tpc_commit()` commits a TPC + transaction previously prepared with `tpc_prepare()`. + + If `!tpc_commit()` is called prior to `!tpc_prepare()`, a single phase + commit is performed. A transaction manager may choose to do this if + only a single resource is participating in the global transaction. + + When called with a transaction ID `!xid`, the database commits the + given transaction. If an invalid transaction ID is provided, a + `ProgrammingError` will be raised. This form should be called outside + of a transaction, and is intended for use in recovery. + + On return, the TPC transaction is ended. + + .. seealso:: The |COMMIT PREPARED|_ PostgreSQL command. + + .. |COMMIT PREPARED| replace:: :sql:`COMMIT PREPARED` + .. _COMMIT PREPARED: https://www.postgresql.org/docs/current/static/sql-commit-prepared.html + + + .. automethod:: tpc_rollback + + :param xid: The id of the transaction + :type xid: Xid or str + + When called with no arguments, `!tpc_rollback()` rolls back a TPC + transaction. It may be called before or after `tpc_prepare()`. + + When called with a transaction ID `!xid`, it rolls back the given + transaction. If an invalid transaction ID is provided, a + `ProgrammingError` is raised. This form should be called outside of a + transaction, and is intended for use in recovery. + + On return, the TPC transaction is ended. + + .. seealso:: The |ROLLBACK PREPARED|_ PostgreSQL command. + + .. |ROLLBACK PREPARED| replace:: :sql:`ROLLBACK PREPARED` + .. 
_ROLLBACK PREPARED: https://www.postgresql.org/docs/current/static/sql-rollback-prepared.html + + + .. automethod:: tpc_recover + + Returns a list of `Xid` representing pending transactions, suitable + for use with `tpc_commit()` or `tpc_rollback()`. + + If a transaction was not initiated by Psycopg, the returned Xids will + have attributes `~Xid.format_id` and `~Xid.bqual` set to `!None` and + the `~Xid.gtrid` set to the PostgreSQL transaction ID: such Xids are + still usable for recovery. Psycopg uses the same algorithm of the + `PostgreSQL JDBC driver`__ to encode a XA triple in a string, so + transactions initiated by a program using such driver should be + unpacked correctly. + + .. __: https://jdbc.postgresql.org/ + + Xids returned by `!tpc_recover()` also have extra attributes + `~Xid.prepared`, `~Xid.owner`, `~Xid.database` populated with the + values read from the server. + + .. seealso:: the |pg_prepared_xacts|_ system view. + + .. |pg_prepared_xacts| replace:: `pg_prepared_xacts` + .. _pg_prepared_xacts: https://www.postgresql.org/docs/current/static/view-pg-prepared-xacts.html + + +The `!AsyncConnection` class +---------------------------- + +.. autoclass:: AsyncConnection() + + This class implements a DBAPI-inspired interface, with all the blocking + methods implemented as coroutines. Unless specified otherwise, + non-blocking methods are shared with the `Connection` class. + + The following methods have the same behaviour of the matching `!Connection` + methods, but should be called using the `await` keyword. + + .. automethod:: connect + + .. versionchanged:: 3.1 + + Automatically resolve domain names asynchronously. In previous + versions, name resolution blocks, unless the `!hostaddr` + parameter is specified, or the `~psycopg._dns.resolve_hostaddr_async()` + function is used. + + .. automethod:: close + + .. note:: You can use ``async with`` to close the connection + automatically when the block is exited, but be careful about + the async quirkness: see :ref:`async-with` for details. + + .. method:: cursor(*, binary: bool = False, \ + row_factory: Optional[RowFactory] = None) -> AsyncCursor + .. method:: cursor(name: str, *, binary: bool = False, \ + row_factory: Optional[RowFactory] = None, \ + scrollable: Optional[bool] = None, withhold: bool = False) -> AsyncServerCursor + :noindex: + + .. note:: + + You can use:: + + async with conn.cursor() as cur: + ... + + to close the cursor automatically when the block is exited. + + .. autoattribute:: cursor_factory + + Default is `psycopg.AsyncCursor`. + + .. autoattribute:: server_cursor_factory + + Default is `psycopg.AsyncServerCursor`. + + .. autoattribute:: row_factory + + .. automethod:: execute + + .. automethod:: pipeline + + .. note:: + + It must be called as:: + + async with conn.pipeline() as p: + ... + + .. automethod:: commit + .. automethod:: rollback + + .. automethod:: transaction + + .. note:: + + It must be called as:: + + async with conn.transaction() as tx: + ... + + .. automethod:: notifies + .. automethod:: set_autocommit + .. automethod:: set_isolation_level + .. automethod:: set_read_only + .. automethod:: set_deferrable + + .. automethod:: tpc_prepare + .. automethod:: tpc_commit + .. automethod:: tpc_rollback + .. automethod:: tpc_recover diff --git a/docs/api/conninfo.rst b/docs/api/conninfo.rst new file mode 100644 index 0000000..9e5b01d --- /dev/null +++ b/docs/api/conninfo.rst @@ -0,0 +1,24 @@ +.. 
_psycopg.conninfo: + +`conninfo` -- manipulate connection strings +=========================================== + +This module contains a few utility functions to manipulate database +connection strings. + +.. module:: psycopg.conninfo + +.. autofunction:: conninfo_to_dict + + .. code:: python + + >>> conninfo_to_dict("postgres://jeff@example.com/db", user="piro") + {'user': 'piro', 'dbname': 'db', 'host': 'example.com'} + + +.. autofunction:: make_conninfo + + .. code:: python + + >>> make_conninfo("dbname=db user=jeff", user="piro", port=5432) + 'dbname=db user=piro port=5432' diff --git a/docs/api/copy.rst b/docs/api/copy.rst new file mode 100644 index 0000000..81a96e2 --- /dev/null +++ b/docs/api/copy.rst @@ -0,0 +1,117 @@ +.. currentmodule:: psycopg + +COPY-related objects +==================== + +The main objects (`Copy`, `AsyncCopy`) present the main interface to exchange +data during a COPY operations. These objects are normally obtained by the +methods `Cursor.copy()` and `AsyncCursor.copy()`; however, they can be also +created directly, for instance to write to a destination which is not a +database (e.g. using a `~psycopg.copy.FileWriter`). + +See :ref:`copy` for details. + + +Main Copy objects +----------------- + +.. autoclass:: Copy() + + The object is normally returned by `!with` `Cursor.copy()`. + + .. automethod:: write_row + + The data in the tuple will be converted as configured on the cursor; + see :ref:`adaptation` for details. + + .. automethod:: write + .. automethod:: read + + Instead of using `!read()` you can iterate on the `!Copy` object to + read its data row by row, using ``for row in copy: ...``. + + .. automethod:: rows + + Equivalent of iterating on `read_row()` until it returns `!None` + + .. automethod:: read_row + .. automethod:: set_types + + +.. autoclass:: AsyncCopy() + + The object is normally returned by ``async with`` `AsyncCursor.copy()`. + Its methods are similar to the ones of the `Copy` object but offering an + `asyncio` interface (`await`, `async for`, `async with`). + + .. automethod:: write_row + .. automethod:: write + .. automethod:: read + + Instead of using `!read()` you can iterate on the `!AsyncCopy` object + to read its data row by row, using ``async for row in copy: ...``. + + .. automethod:: rows + + Use it as `async for record in copy.rows():` ... + + .. automethod:: read_row + + +.. _copy-writers: + +Writer objects +-------------- + +.. currentmodule:: psycopg.copy + +.. versionadded:: 3.1 + +Copy writers are helper objects to specify where to write COPY-formatted data. +By default, data is written to the database (using the `LibpqWriter`). It is +possible to write copy-data for offline use by using a `FileWriter`, or to +customize further writing by implementing your own `Writer` or `AsyncWriter` +subclass. + +Writers instances can be used passing them to the cursor +`~psycopg.Cursor.copy()` method or to the `~psycopg.Copy` constructor, as the +`!writer` argument. + +.. autoclass:: Writer + + This is an abstract base class: subclasses are required to implement their + `write()` method. + + .. automethod:: write + .. automethod:: finish + + +.. autoclass:: LibpqWriter + + This is the writer used by default if none is specified. + + +.. autoclass:: FileWriter + + This writer should be used without executing a :sql:`COPY` operation on + the database. For example, if `records` is a list of tuples containing + data to save in COPY format to a file (e.g. for later import), it can be + used as: + + .. 
code:: python + + with open("target-file.pgcopy", "wb") as f: + with Copy(cur, writer=FileWriter(f)) as copy: + for record in records + copy.write_row(record) + + +.. autoclass:: AsyncWriter + + This class methods have the same semantics of the ones of `Writer`, but + offer an async interface. + + .. automethod:: write + .. automethod:: finish + +.. autoclass:: AsyncLibpqWriter diff --git a/docs/api/crdb.rst b/docs/api/crdb.rst new file mode 100644 index 0000000..de8344e --- /dev/null +++ b/docs/api/crdb.rst @@ -0,0 +1,120 @@ +`crdb` -- CockroachDB support +============================= + +.. module:: psycopg.crdb + +.. versionadded:: 3.1 + +CockroachDB_ is a distributed database using the same fronted-backend protocol +of PostgreSQL. As such, Psycopg can be used to write Python programs +interacting with CockroachDB. + +.. _CockroachDB: https://www.cockroachlabs.com/ + +Opening a connection to a CRDB database using `psycopg.connect()` provides a +largely working object. However, using the `psycopg.crdb.connect()` function +instead, Psycopg will create more specialised objects and provide a types +mapping tweaked on the CockroachDB data model. + + +.. _crdb-differences: + +Main differences from PostgreSQL +-------------------------------- + +CockroachDB behaviour is `different from PostgreSQL`__: please refer to the +database documentation for details. These are some of the main differences +affecting Psycopg behaviour: + +.. __: https://www.cockroachlabs.com/docs/stable/postgresql-compatibility.html + +- `~psycopg.Connection.cancel()` doesn't work before CockroachDB 22.1. On + older versions, you can use `CANCEL QUERY`_ instead (but from a different + connection). + +- :ref:`server-side-cursors` are well supported only from CockroachDB 22.1.3. + +- `~psycopg.ConnectionInfo.backend_pid` is only populated from CockroachDB + 22.1. Note however that you cannot use the PID to terminate the session; use + `SHOW session_id`_ to find the id of a session, which you may terminate with + `CANCEL SESSION`_ in lieu of PostgreSQL's :sql:`pg_terminate_backend()`. + +- Several data types are missing or slightly different from PostgreSQL (see + `adapters` for an overview of the differences). + +- The :ref:`two-phase commit protocol <two-phase-commit>` is not supported. + +- :sql:`LISTEN` and :sql:`NOTIFY` are not supported. However the `CHANGEFEED`_ + command, in conjunction with `~psycopg.Cursor.stream()`, can provide push + notifications. + +.. _CANCEL QUERY: https://www.cockroachlabs.com/docs/stable/cancel-query.html +.. _SHOW session_id: https://www.cockroachlabs.com/docs/stable/show-vars.html +.. _CANCEL SESSION: https://www.cockroachlabs.com/docs/stable/cancel-session.html +.. _CHANGEFEED: https://www.cockroachlabs.com/docs/stable/changefeed-for.html + + +.. _crdb-objects: + +CockroachDB-specific objects +---------------------------- + +.. autofunction:: connect + + This is an alias of the class method `CrdbConnection.connect`. + + If you need an asynchronous connection use the `AsyncCrdbConnection.connect()` + method instead. + + +.. autoclass:: CrdbConnection + + `psycopg.Connection` subclass. + + .. automethod:: is_crdb + + :param conn: the connection to check + :type conn: `~psycopg.Connection`, `~psycopg.AsyncConnection`, `~psycopg.pq.PGconn` + + +.. autoclass:: AsyncCrdbConnection + + `psycopg.AsyncConnection` subclass. + + +.. autoclass:: CrdbConnectionInfo + + The object is returned by the `~psycopg.Connection.info` attribute of + `CrdbConnection` and `AsyncCrdbConnection`. 
+ + The object behaves like `!ConnectionInfo`, with the following differences: + + .. autoattribute:: vendor + + The `CockroachDB` string. + + .. autoattribute:: server_version + + +.. data:: adapters + + The default adapters map establishing how Python and CockroachDB types are + converted into each other. + + The map is used as a template when new connections are created, using + `psycopg.crdb.connect()` (similarly to the way `psycopg.adapters` is used + as template for new PostgreSQL connections). + + This registry contains only the types and adapters supported by + CockroachDB. Several PostgreSQL types and adapters are missing or + different from PostgreSQL, among which: + + - Composite types + - :sql:`range`, :sql:`multirange` types + - The :sql:`hstore` type + - Geometric types + - Nested arrays + - Arrays of :sql:`jsonb` + - The :sql:`cidr` data type + - The :sql:`json` type is an alias for :sql:`jsonb` + - The :sql:`int` type is an alias for :sql:`int8`, not `int4`. diff --git a/docs/api/cursors.rst b/docs/api/cursors.rst new file mode 100644 index 0000000..9c5b478 --- /dev/null +++ b/docs/api/cursors.rst @@ -0,0 +1,517 @@ +.. currentmodule:: psycopg + +Cursor classes +============== + +The `Cursor` and `AsyncCursor` classes are the main objects to send commands +to a PostgreSQL database session. They are normally created by the +connection's `~Connection.cursor()` method. + +Using the `!name` parameter on `!cursor()` will create a `ServerCursor` or +`AsyncServerCursor`, which can be used to retrieve partial results from a +database. + +A `Connection` can create several cursors, but only one at time can perform +operations, so they are not the best way to achieve parallelism (you may want +to operate with several connections instead). All the cursors on the same +connection have a view of the same session, so they can see each other's +uncommitted data. + + +The `!Cursor` class +------------------- + +.. autoclass:: Cursor + + This class implements a `DBAPI-compliant interface`__. It is what the + classic `Connection.cursor()` method returns. `AsyncConnection.cursor()` + will create instead `AsyncCursor` objects, which have the same set of + method but expose an `asyncio` interface and require `!async` and + `!await` keywords to operate. + + .. __: dbapi-cursor_ + .. _dbapi-cursor: https://www.python.org/dev/peps/pep-0249/#cursor-objects + + + Cursors behave as context managers: on block exit they are closed and + further operation will not be possible. Closing a cursor will not + terminate a transaction or a session though. + + .. attribute:: connection + :type: Connection + + The connection this cursor is using. + + .. automethod:: close + + .. note:: + + You can use:: + + with conn.cursor() as cur: + ... + + to close the cursor automatically when the block is exited. See + :ref:`usage`. + + .. autoattribute:: closed + + .. rubric:: Methods to send commands + + .. automethod:: execute + + :param query: The query to execute. + :type query: `!str`, `!bytes`, `sql.SQL`, or `sql.Composed` + :param params: The parameters to pass to the query, if any. + :type params: Sequence or Mapping + :param prepare: Force (`!True`) or disallow (`!False`) preparation of + the query. By default (`!None`) prepare automatically. See + :ref:`prepared-statements`. + :param binary: Specify whether the server should return data in binary + format (`!True`) or in text format (`!False`). By default + (`!None`) return data as requested by the cursor's `~Cursor.format`. 
+ + Return the cursor itself, so that it will be possible to chain a fetch + operation after the call. + + See :ref:`query-parameters` for all the details about executing + queries. + + .. versionchanged:: 3.1 + + The `query` argument must be a `~typing.StringLiteral`. If you + need to compose a query dynamically, please use `sql.SQL` and + related objects. + + See :pep:`675` for details. + + .. automethod:: executemany + + :param query: The query to execute + :type query: `!str`, `!bytes`, `sql.SQL`, or `sql.Composed` + :param params_seq: The parameters to pass to the query + :type params_seq: Sequence of Sequences or Mappings + :param returning: If `!True`, fetch the results of the queries executed + :type returning: `!bool` + + This is more efficient than performing separate queries, but in case of + several :sql:`INSERT` (and with some SQL creativity for massive + :sql:`UPDATE` too) you may consider using `copy()`. + + If the queries return data you want to read (e.g. when executing an + :sql:`INSERT ... RETURNING` or a :sql:`SELECT` with a side-effect), + you can specify `!returning=True`; the results will be available in + the cursor's state and can be read using `fetchone()` and similar + methods. Each input parameter will produce a separate result set: use + `nextset()` to read the results of the queries after the first one. + + See :ref:`query-parameters` for all the details about executing + queries. + + .. versionchanged:: 3.1 + + - Added `!returning` parameter to receive query results. + - Performance optimised by making use of the pipeline mode, when + using libpq 14 or newer. + + .. automethod:: copy + + :param statement: The copy operation to execute + :type statement: `!str`, `!bytes`, `sql.SQL`, or `sql.Composed` + :param params: The parameters to pass to the statement, if any. + :type params: Sequence or Mapping + + .. note:: + + The method must be called with:: + + with cursor.copy() as copy: + ... + + See :ref:`copy` for information about :sql:`COPY`. + + .. versionchanged:: 3.1 + Added parameters support. + + .. automethod:: stream + + This command is similar to execute + iter; however it supports endless + data streams. The feature is not available in PostgreSQL, but some + implementations exist: Materialize `TAIL`__ and CockroachDB + `CHANGEFEED`__ for instance. + + The feature, and the API supporting it, are still experimental. + Beware... 👀 + + .. __: https://materialize.com/docs/sql/tail/#main + .. __: https://www.cockroachlabs.com/docs/stable/changefeed-for.html + + The parameters are the same of `execute()`. + + .. warning:: + + Failing to consume the iterator entirely will result in a + connection left in `~psycopg.ConnectionInfo.transaction_status` + `~pq.TransactionStatus.ACTIVE` state: this connection will refuse + to receive further commands (with a message such as *another + command is already in progress*). + + If there is a chance that the generator is not consumed entirely, + in order to restore the connection to a working state you can call + `~generator.close` on the generator object returned by `!stream()`. The + `contextlib.closing` function might be particularly useful to make + sure that `!close()` is called: + + .. code:: + + with closing(cur.stream("select generate_series(1, 10000)")) as gen: + for rec in gen: + something(rec) # might fail + + Without calling `!close()`, in case of error, the connection will + be `!ACTIVE` and unusable. 
If `!close()` is called, the connection + might be `!INTRANS` or `!INERROR`, depending on whether the server + managed to send the entire resultset to the client. An autocommit + connection will be `!IDLE` instead. + + + .. attribute:: format + + The format of the data returned by the queries. It can be selected + initially e.g. specifying `Connection.cursor`\ `!(binary=True)` and + changed during the cursor's lifetime. It is also possible to override + the value for single queries, e.g. specifying `execute`\ + `!(binary=True)`. + + :type: `pq.Format` + :default: `~pq.Format.TEXT` + + .. seealso:: :ref:`binary-data` + + + .. rubric:: Methods to retrieve results + + Fetch methods are only available if the last operation produced results, + e.g. a :sql:`SELECT` or a command with :sql:`RETURNING`. They will raise + an exception if used with operations that don't return result, such as an + :sql:`INSERT` with no :sql:`RETURNING` or an :sql:`ALTER TABLE`. + + .. note:: + + Cursors are iterable objects, so just using the:: + + for record in cursor: + ... + + syntax will iterate on the records in the current recordset. + + .. autoattribute:: row_factory + + The property affects the objects returned by the `fetchone()`, + `fetchmany()`, `fetchall()` methods. The default + (`~psycopg.rows.tuple_row`) returns a tuple for each record fetched. + + See :ref:`row-factories` for details. + + .. automethod:: fetchone + .. automethod:: fetchmany + .. automethod:: fetchall + .. automethod:: nextset + .. automethod:: scroll + + .. attribute:: pgresult + :type: Optional[psycopg.pq.PGresult] + + The result returned by the last query and currently exposed by the + cursor, if available, else `!None`. + + It can be used to obtain low level info about the last query result + and to access to features not currently wrapped by Psycopg. + + + .. rubric:: Information about the data + + .. autoattribute:: description + + .. autoattribute:: statusmessage + + This is the status tag you typically see in :program:`psql` after + a successful command, such as ``CREATE TABLE`` or ``UPDATE 42``. + + .. autoattribute:: rowcount + .. autoattribute:: rownumber + + .. attribute:: _query + + An helper object used to convert queries and parameters before sending + them to PostgreSQL. + + .. note:: + This attribute is exposed because it might be helpful to debug + problems when the communication between Python and PostgreSQL + doesn't work as expected. For this reason, the attribute is + available when a query fails too. + + .. warning:: + You shouldn't consider it part of the public interface of the + object: it might change without warnings. + + Except this warning, I guess. + + If you would like to build reliable features using this object, + please get in touch so we can try and design an useful interface + for it. + + Among the properties currently exposed by this object: + + - `!query` (`!bytes`): the query effectively sent to PostgreSQL. It + will have Python placeholders (``%s``\-style) replaced with + PostgreSQL ones (``$1``, ``$2``\-style). + + - `!params` (sequence of `!bytes`): the parameters passed to + PostgreSQL, adapted to the database format. + + - `!types` (sequence of `!int`): the OID of the parameters passed to + PostgreSQL. + + - `!formats` (sequence of `pq.Format`): whether the parameter format + is text or binary. + + +The `!ClientCursor` class +------------------------- + +.. seealso:: See :ref:`client-side-binding-cursors` for details. + +.. 
autoclass:: ClientCursor + + This `Cursor` subclass has exactly the same interface of its parent class, + but, instead of sending query and parameters separately to the server, it + merges them on the client and sends them as a non-parametric query on the + server. This allows, for instance, to execute parametrized data definition + statements and other :ref:`problematic queries <server-side-binding>`. + + .. versionadded:: 3.1 + + .. automethod:: mogrify + + :param query: The query to execute. + :type query: `!str`, `!bytes`, `sql.SQL`, or `sql.Composed` + :param params: The parameters to pass to the query, if any. + :type params: Sequence or Mapping + + +The `!ServerCursor` class +-------------------------- + +.. seealso:: See :ref:`server-side-cursors` for details. + +.. autoclass:: ServerCursor + + This class also implements a `DBAPI-compliant interface`__. It is created + by `Connection.cursor()` specifying the `!name` parameter. Using this + object results in the creation of an equivalent PostgreSQL cursor in the + server. DBAPI-extension methods (such as `~Cursor.copy()` or + `~Cursor.stream()`) are not implemented on this object: use a normal + `Cursor` instead. + + .. __: dbapi-cursor_ + + Most attribute and methods behave exactly like in `Cursor`, here are + documented the differences: + + .. autoattribute:: name + .. autoattribute:: scrollable + + .. seealso:: The PostgreSQL DECLARE_ statement documentation + for the description of :sql:`[NO] SCROLL`. + + .. autoattribute:: withhold + + .. seealso:: The PostgreSQL DECLARE_ statement documentation + for the description of :sql:`{WITH|WITHOUT} HOLD`. + + .. _DECLARE: https://www.postgresql.org/docs/current/sql-declare.html + + + .. automethod:: close + + .. warning:: Closing a server-side cursor is more important than + closing a client-side one because it also releases the resources + on the server, which otherwise might remain allocated until the + end of the session (memory, locks). Using the pattern:: + + with conn.cursor(): + ... + + is especially useful so that the cursor is closed at the end of + the block. + + .. automethod:: execute + + :param query: The query to execute. + :type query: `!str`, `!bytes`, `sql.SQL`, or `sql.Composed` + :param params: The parameters to pass to the query, if any. + :type params: Sequence or Mapping + :param binary: Specify whether the server should return data in binary + format (`!True`) or in text format (`!False`). By default + (`!None`) return data as requested by the cursor's `~Cursor.format`. + + Create a server cursor with given `!name` and the `!query` in argument. + + If using :sql:`DECLARE` is not appropriate (for instance because the + cursor is returned by calling a stored procedure) you can avoid to use + `!execute()`, crete the cursor in other ways, and use directly the + `!fetch*()` methods instead. See :ref:`cursor-steal` for an example. + + Using `!execute()` more than once will close the previous cursor and + open a new one with the same name. + + .. automethod:: executemany + .. automethod:: fetchone + .. automethod:: fetchmany + .. automethod:: fetchall + + These methods use the FETCH_ SQL statement to retrieve some of the + records from the cursor's current position. + + .. _FETCH: https://www.postgresql.org/docs/current/sql-fetch.html + + .. note:: + + You can also iterate on the cursor to read its result one at + time with:: + + for record in cur: + ... 
+ + In this case, the records are not fetched one at time from the + server but they are retrieved in batches of `itersize` to reduce + the number of server roundtrips. + + .. autoattribute:: itersize + + Number of records to fetch at time when iterating on the cursor. The + default is 100. + + .. automethod:: scroll + + This method uses the MOVE_ SQL statement to move the current position + in the server-side cursor, which will affect following `!fetch*()` + operations. If you need to scroll backwards you should probably + call `~Connection.cursor()` using `scrollable=True`. + + Note that PostgreSQL doesn't provide a reliable way to report when a + cursor moves out of bound, so the method might not raise `!IndexError` + when it happens, but it might rather stop at the cursor boundary. + + .. _MOVE: https://www.postgresql.org/docs/current/sql-fetch.html + + +The `!AsyncCursor` class +------------------------ + +.. autoclass:: AsyncCursor + + This class implements a DBAPI-inspired interface, with all the blocking + methods implemented as coroutines. Unless specified otherwise, + non-blocking methods are shared with the `Cursor` class. + + The following methods have the same behaviour of the matching `!Cursor` + methods, but should be called using the `await` keyword. + + .. attribute:: connection + :type: AsyncConnection + + .. automethod:: close + + .. note:: + + You can use:: + + async with conn.cursor(): + ... + + to close the cursor automatically when the block is exited. + + .. automethod:: execute + .. automethod:: executemany + .. automethod:: copy + + .. note:: + + The method must be called with:: + + async with cursor.copy() as copy: + ... + + .. automethod:: stream + + .. note:: + + The method must be called with:: + + async for record in cursor.stream(query): + ... + + .. automethod:: fetchone + .. automethod:: fetchmany + .. automethod:: fetchall + .. automethod:: scroll + + .. note:: + + You can also use:: + + async for record in cursor: + ... + + to iterate on the async cursor results. + + +The `!AsyncClientCursor` class +------------------------------ + +.. autoclass:: AsyncClientCursor + + This class is the `!async` equivalent of the `ClientCursor`. The + difference are the same shown in `AsyncCursor`. + + .. versionadded:: 3.1 + + + +The `!AsyncServerCursor` class +------------------------------ + +.. autoclass:: AsyncServerCursor + + This class implements a DBAPI-inspired interface as the `AsyncCursor` + does, but wraps a server-side cursor like the `ServerCursor` class. It is + created by `AsyncConnection.cursor()` specifying the `!name` parameter. + + The following are the methods exposing a different (async) interface from + the `ServerCursor` counterpart, but sharing the same semantics. + + .. automethod:: close + + .. note:: + You can close the cursor automatically using:: + + async with conn.cursor("name") as cursor: + ... + + .. automethod:: execute + .. automethod:: executemany + .. automethod:: fetchone + .. automethod:: fetchmany + .. automethod:: fetchall + + .. note:: + + You can also iterate on the cursor using:: + + async for record in cur: + ... + + .. automethod:: scroll diff --git a/docs/api/dns.rst b/docs/api/dns.rst new file mode 100644 index 0000000..186bde3 --- /dev/null +++ b/docs/api/dns.rst @@ -0,0 +1,145 @@ +`_dns` -- DNS resolution utilities +================================== + +.. module:: psycopg._dns + +This module contains a few experimental utilities to interact with the DNS +server before performing a connection. + +.. 
warning:: + This module is experimental and its interface could change in the future, + without warning or respect for the version scheme. It is provided here to + allow experimentation before making it more stable. + +.. warning:: + This module depends on the `dnspython`_ package. The package is currently + not installed automatically as a Psycopg dependency and must be installed + manually: + + .. code:: sh + + $ pip install "dnspython >= 2.1" + + .. _dnspython: https://dnspython.readthedocs.io/ + + +.. function:: resolve_srv(params) + + Apply SRV DNS lookup as defined in :RFC:`2782`. + + :param params: The input parameters, for instance as returned by + `~psycopg.conninfo.conninfo_to_dict()`. + :type params: `!dict` + :return: An updated list of connection parameters. + + For every host defined in the ``params["host"]`` list (comma-separated), + perform SRV lookup if the host is in the form ``_Service._Proto.Target``. + If lookup is successful, return a params dict with hosts and ports replaced + with the looked-up entries. + + Raise `~psycopg.OperationalError` if no lookup is successful and no host + (looked up or unchanged) could be returned. + + In addition to the rules defined by RFC 2782 about the host name pattern, + perform SRV lookup also if the the port is the string ``SRV`` (case + insensitive). + + .. warning:: + This is an experimental functionality. + + .. note:: + One possible way to use this function automatically is to subclass + `~psycopg.Connection`, extending the + `~psycopg.Connection._get_connection_params()` method:: + + import psycopg._dns # not imported automatically + + class SrvCognizantConnection(psycopg.Connection): + @classmethod + def _get_connection_params(cls, conninfo, **kwargs): + params = super()._get_connection_params(conninfo, **kwargs) + params = psycopg._dns.resolve_srv(params) + return params + + # The name will be resolved to db1.example.com + cnn = SrvCognizantConnection.connect("host=_postgres._tcp.db.psycopg.org") + + +.. function:: resolve_srv_async(params) + :async: + + Async equivalent of `resolve_srv()`. + + +.. automethod:: psycopg.Connection._get_connection_params + + .. warning:: + This is an experimental method. + + This method is a subclass hook allowing to manipulate the connection + parameters before performing the connection. Make sure to call the + `!super()` implementation before further manipulation of the arguments:: + + @classmethod + def _get_connection_params(cls, conninfo, **kwargs): + params = super()._get_connection_params(conninfo, **kwargs) + # do something with the params + return params + + +.. automethod:: psycopg.AsyncConnection._get_connection_params + + .. warning:: + This is an experimental method. + + +.. function:: resolve_hostaddr_async(params) + :async: + + Perform async DNS lookup of the hosts and return a new params dict. + + .. deprecated:: 3.1 + The use of this function is not necessary anymore, because + `psycopg.AsyncConnection.connect()` performs non-blocking name + resolution automatically. + + :param params: The input parameters, for instance as returned by + `~psycopg.conninfo.conninfo_to_dict()`. + :type params: `!dict` + + If a ``host`` param is present but not ``hostname``, resolve the host + addresses dynamically. + + The function may change the input ``host``, ``hostname``, ``port`` to allow + connecting without further DNS lookups, eventually removing hosts that are + not resolved, keeping the lists of hosts and ports consistent. 
+ + Raise `~psycopg.OperationalError` if connection is not possible (e.g. no + host resolve, inconsistent lists length). + + See `the PostgreSQL docs`__ for explanation of how these params are used, + and how they support multiple entries. + + .. __: https://www.postgresql.org/docs/current/libpq-connect.html + #LIBPQ-PARAMKEYWORDS + + .. warning:: + Before psycopg 3.1, this function doesn't handle the ``/etc/hosts`` file. + + .. note:: + Starting from psycopg 3.1, a similar operation is performed + automatically by `!AsyncConnection._get_connection_params()`, so this + function is unneeded. + + In psycopg 3.0, one possible way to use this function automatically is + to subclass `~psycopg.AsyncConnection`, extending the + `~psycopg.AsyncConnection._get_connection_params()` method:: + + import psycopg._dns # not imported automatically + + class AsyncDnsConnection(psycopg.AsyncConnection): + @classmethod + async def _get_connection_params(cls, conninfo, **kwargs): + params = await super()._get_connection_params(conninfo, **kwargs) + params = await psycopg._dns.resolve_hostaddr_async(params) + return params diff --git a/docs/api/errors.rst b/docs/api/errors.rst new file mode 100644 index 0000000..2fca7c6 --- /dev/null +++ b/docs/api/errors.rst @@ -0,0 +1,540 @@ +`errors` -- Package exceptions +============================== + +.. module:: psycopg.errors + +.. index:: + single: Error; Class + +This module exposes objects to represent and examine database errors. + + +.. currentmodule:: psycopg + +.. index:: + single: Exceptions; DB-API + +.. _dbapi-exceptions: + +DB-API exceptions +----------------- + +In compliance with the DB-API, all the exceptions raised by Psycopg +derive from the following classes: + +.. parsed-literal:: + + `!Exception` + \|__ `Warning` + \|__ `Error` + \|__ `InterfaceError` + \|__ `DatabaseError` + \|__ `DataError` + \|__ `OperationalError` + \|__ `IntegrityError` + \|__ `InternalError` + \|__ `ProgrammingError` + \|__ `NotSupportedError` + +These classes are exposed both by this module and the root `psycopg` module. + +.. autoexception:: Error() + + .. autoattribute:: diag + .. autoattribute:: sqlstate + + The code of the error, if received from the server. + + This attribute is also available as class attribute on the + :ref:`sqlstate-exceptions` classes. + + .. autoattribute:: pgconn + + Most likely it will be in `~psycopg.pq.ConnStatus.BAD` state; + however it might be useful to verify precisely what went wrong, for + instance checking the `~psycopg.pq.PGconn.needs_password` and + `~psycopg.pq.PGconn.used_password` attributes. + + .. versionadded:: 3.1 + + .. autoattribute:: pgresult + + .. versionadded:: 3.1 + + +.. autoexception:: Warning() +.. autoexception:: InterfaceError() +.. autoexception:: DatabaseError() +.. autoexception:: DataError() +.. autoexception:: OperationalError() +.. autoexception:: IntegrityError() +.. autoexception:: InternalError() +.. autoexception:: ProgrammingError() +.. autoexception:: NotSupportedError() + + +Other Psycopg errors +^^^^^^^^^^^^^^^^^^^^ + +.. currentmodule:: psycopg.errors + + +In addition to the standard DB-API errors, Psycopg defines a few more specific +ones. + +.. autoexception:: ConnectionTimeout() +.. autoexception:: PipelineAborted() + + + +.. index:: + single: Exceptions; PostgreSQL + +Error diagnostics +----------------- + +.. 
autoclass:: Diagnostic() + + The object is available as the `~psycopg.Error`.\ `~psycopg.Error.diag` + attribute and is passed to the callback functions registered with + `~psycopg.Connection.add_notice_handler()`. + + All the information available from the :pq:`PQresultErrorField()` function + are exposed as attributes by the object. For instance the `!severity` + attribute returns the `!PG_DIAG_SEVERITY` code. Please refer to the + PostgreSQL documentation for the meaning of all the attributes. + + The attributes available are: + + .. attribute:: + column_name + constraint_name + context + datatype_name + internal_position + internal_query + message_detail + message_hint + message_primary + schema_name + severity + severity_nonlocalized + source_file + source_function + source_line + sqlstate + statement_position + table_name + + A string with the error field if available; `!None` if not available. + The attribute value is available only for errors sent by the server: + not all the fields are available for all the errors and for all the + server versions. + + +.. _sqlstate-exceptions: + +SQLSTATE exceptions +------------------- + +Errors coming from a database server (as opposite as ones generated +client-side, such as connection failed) usually have a 5-letters error code +called SQLSTATE (available in the `~Diagnostic.sqlstate` attribute of the +error's `~psycopg.Error.diag` attribute). + +Psycopg exposes a different class for each SQLSTATE value, allowing to +write idiomatic error handling code according to specific conditions happening +in the database: + +.. code-block:: python + + try: + cur.execute("LOCK TABLE mytable IN ACCESS EXCLUSIVE MODE NOWAIT") + except psycopg.errors.LockNotAvailable: + locked = True + +The exception names are generated from the PostgreSQL source code and includes +classes for every error defined by PostgreSQL in versions between 9.6 and 15. +Every class in the module is named after what referred as "condition name" `in +the documentation`__, converted to CamelCase: e.g. the error 22012, +``division_by_zero`` is exposed by this module as the class `!DivisionByZero`. +There is a handful of... exceptions to this rule, required for disambiguate +name clashes: please refer to the :ref:`table below <exceptions-list>` for all +the classes defined. + +.. __: https://www.postgresql.org/docs/current/errcodes-appendix.html#ERRCODES-TABLE + +Every exception class is a subclass of one of the :ref:`standard DB-API +exception <dbapi-exceptions>`, thus exposing the `~psycopg.Error` interface. + +.. versionchanged:: 3.1.4 + Added exceptions introduced in PostgreSQL 15. + +.. autofunction:: lookup + + Example: if you have code using constant names or sql codes you can use + them to look up the exception class. + + .. code-block:: python + + try: + cur.execute("LOCK TABLE mytable IN ACCESS EXCLUSIVE MODE NOWAIT") + except psycopg.errors.lookup("UNDEFINED_TABLE"): + missing = True + except psycopg.errors.lookup("55P03"): + locked = True + + +.. _exceptions-list: + +List of known exceptions +^^^^^^^^^^^^^^^^^^^^^^^^ + +The following are all the SQLSTATE-related error classed defined by this +module, together with the base DBAPI exception they derive from. + +.. 
autogenerated: start + +========= ================================================== ==================== +SQLSTATE Exception Base exception +========= ================================================== ==================== +**Class 02** - No Data (this is also a warning class per the SQL standard) +--------------------------------------------------------------------------------- +``02000`` `!NoData` `!DatabaseError` +``02001`` `!NoAdditionalDynamicResultSetsReturned` `!DatabaseError` +**Class 03** - SQL Statement Not Yet Complete +--------------------------------------------------------------------------------- +``03000`` `!SqlStatementNotYetComplete` `!DatabaseError` +**Class 08** - Connection Exception +--------------------------------------------------------------------------------- +``08000`` `!ConnectionException` `!OperationalError` +``08001`` `!SqlclientUnableToEstablishSqlconnection` `!OperationalError` +``08003`` `!ConnectionDoesNotExist` `!OperationalError` +``08004`` `!SqlserverRejectedEstablishmentOfSqlconnection` `!OperationalError` +``08006`` `!ConnectionFailure` `!OperationalError` +``08007`` `!TransactionResolutionUnknown` `!OperationalError` +``08P01`` `!ProtocolViolation` `!OperationalError` +**Class 09** - Triggered Action Exception +--------------------------------------------------------------------------------- +``09000`` `!TriggeredActionException` `!DatabaseError` +**Class 0A** - Feature Not Supported +--------------------------------------------------------------------------------- +``0A000`` `!FeatureNotSupported` `!NotSupportedError` +**Class 0B** - Invalid Transaction Initiation +--------------------------------------------------------------------------------- +``0B000`` `!InvalidTransactionInitiation` `!DatabaseError` +**Class 0F** - Locator Exception +--------------------------------------------------------------------------------- +``0F000`` `!LocatorException` `!DatabaseError` +``0F001`` `!InvalidLocatorSpecification` `!DatabaseError` +**Class 0L** - Invalid Grantor +--------------------------------------------------------------------------------- +``0L000`` `!InvalidGrantor` `!DatabaseError` +``0LP01`` `!InvalidGrantOperation` `!DatabaseError` +**Class 0P** - Invalid Role Specification +--------------------------------------------------------------------------------- +``0P000`` `!InvalidRoleSpecification` `!DatabaseError` +**Class 0Z** - Diagnostics Exception +--------------------------------------------------------------------------------- +``0Z000`` `!DiagnosticsException` `!DatabaseError` +``0Z002`` `!StackedDiagnosticsAccessedWithoutActiveHandler` `!DatabaseError` +**Class 20** - Case Not Found +--------------------------------------------------------------------------------- +``20000`` `!CaseNotFound` `!ProgrammingError` +**Class 21** - Cardinality Violation +--------------------------------------------------------------------------------- +``21000`` `!CardinalityViolation` `!ProgrammingError` +**Class 22** - Data Exception +--------------------------------------------------------------------------------- +``22000`` `!DataException` `!DataError` +``22001`` `!StringDataRightTruncation` `!DataError` +``22002`` `!NullValueNoIndicatorParameter` `!DataError` +``22003`` `!NumericValueOutOfRange` `!DataError` +``22004`` `!NullValueNotAllowed` `!DataError` +``22005`` `!ErrorInAssignment` `!DataError` +``22007`` `!InvalidDatetimeFormat` `!DataError` +``22008`` `!DatetimeFieldOverflow` `!DataError` +``22009`` `!InvalidTimeZoneDisplacementValue` `!DataError` 
+``2200B`` `!EscapeCharacterConflict` `!DataError` +``2200C`` `!InvalidUseOfEscapeCharacter` `!DataError` +``2200D`` `!InvalidEscapeOctet` `!DataError` +``2200F`` `!ZeroLengthCharacterString` `!DataError` +``2200G`` `!MostSpecificTypeMismatch` `!DataError` +``2200H`` `!SequenceGeneratorLimitExceeded` `!DataError` +``2200L`` `!NotAnXmlDocument` `!DataError` +``2200M`` `!InvalidXmlDocument` `!DataError` +``2200N`` `!InvalidXmlContent` `!DataError` +``2200S`` `!InvalidXmlComment` `!DataError` +``2200T`` `!InvalidXmlProcessingInstruction` `!DataError` +``22010`` `!InvalidIndicatorParameterValue` `!DataError` +``22011`` `!SubstringError` `!DataError` +``22012`` `!DivisionByZero` `!DataError` +``22013`` `!InvalidPrecedingOrFollowingSize` `!DataError` +``22014`` `!InvalidArgumentForNtileFunction` `!DataError` +``22015`` `!IntervalFieldOverflow` `!DataError` +``22016`` `!InvalidArgumentForNthValueFunction` `!DataError` +``22018`` `!InvalidCharacterValueForCast` `!DataError` +``22019`` `!InvalidEscapeCharacter` `!DataError` +``2201B`` `!InvalidRegularExpression` `!DataError` +``2201E`` `!InvalidArgumentForLogarithm` `!DataError` +``2201F`` `!InvalidArgumentForPowerFunction` `!DataError` +``2201G`` `!InvalidArgumentForWidthBucketFunction` `!DataError` +``2201W`` `!InvalidRowCountInLimitClause` `!DataError` +``2201X`` `!InvalidRowCountInResultOffsetClause` `!DataError` +``22021`` `!CharacterNotInRepertoire` `!DataError` +``22022`` `!IndicatorOverflow` `!DataError` +``22023`` `!InvalidParameterValue` `!DataError` +``22024`` `!UnterminatedCString` `!DataError` +``22025`` `!InvalidEscapeSequence` `!DataError` +``22026`` `!StringDataLengthMismatch` `!DataError` +``22027`` `!TrimError` `!DataError` +``2202E`` `!ArraySubscriptError` `!DataError` +``2202G`` `!InvalidTablesampleRepeat` `!DataError` +``2202H`` `!InvalidTablesampleArgument` `!DataError` +``22030`` `!DuplicateJsonObjectKeyValue` `!DataError` +``22031`` `!InvalidArgumentForSqlJsonDatetimeFunction` `!DataError` +``22032`` `!InvalidJsonText` `!DataError` +``22033`` `!InvalidSqlJsonSubscript` `!DataError` +``22034`` `!MoreThanOneSqlJsonItem` `!DataError` +``22035`` `!NoSqlJsonItem` `!DataError` +``22036`` `!NonNumericSqlJsonItem` `!DataError` +``22037`` `!NonUniqueKeysInAJsonObject` `!DataError` +``22038`` `!SingletonSqlJsonItemRequired` `!DataError` +``22039`` `!SqlJsonArrayNotFound` `!DataError` +``2203A`` `!SqlJsonMemberNotFound` `!DataError` +``2203B`` `!SqlJsonNumberNotFound` `!DataError` +``2203C`` `!SqlJsonObjectNotFound` `!DataError` +``2203D`` `!TooManyJsonArrayElements` `!DataError` +``2203E`` `!TooManyJsonObjectMembers` `!DataError` +``2203F`` `!SqlJsonScalarRequired` `!DataError` +``2203G`` `!SqlJsonItemCannotBeCastToTargetType` `!DataError` +``22P01`` `!FloatingPointException` `!DataError` +``22P02`` `!InvalidTextRepresentation` `!DataError` +``22P03`` `!InvalidBinaryRepresentation` `!DataError` +``22P04`` `!BadCopyFileFormat` `!DataError` +``22P05`` `!UntranslatableCharacter` `!DataError` +``22P06`` `!NonstandardUseOfEscapeCharacter` `!DataError` +**Class 23** - Integrity Constraint Violation +--------------------------------------------------------------------------------- +``23000`` `!IntegrityConstraintViolation` `!IntegrityError` +``23001`` `!RestrictViolation` `!IntegrityError` +``23502`` `!NotNullViolation` `!IntegrityError` +``23503`` `!ForeignKeyViolation` `!IntegrityError` +``23505`` `!UniqueViolation` `!IntegrityError` +``23514`` `!CheckViolation` `!IntegrityError` +``23P01`` `!ExclusionViolation` `!IntegrityError` +**Class 
24** - Invalid Cursor State +--------------------------------------------------------------------------------- +``24000`` `!InvalidCursorState` `!InternalError` +**Class 25** - Invalid Transaction State +--------------------------------------------------------------------------------- +``25000`` `!InvalidTransactionState` `!InternalError` +``25001`` `!ActiveSqlTransaction` `!InternalError` +``25002`` `!BranchTransactionAlreadyActive` `!InternalError` +``25003`` `!InappropriateAccessModeForBranchTransaction` `!InternalError` +``25004`` `!InappropriateIsolationLevelForBranchTransaction` `!InternalError` +``25005`` `!NoActiveSqlTransactionForBranchTransaction` `!InternalError` +``25006`` `!ReadOnlySqlTransaction` `!InternalError` +``25007`` `!SchemaAndDataStatementMixingNotSupported` `!InternalError` +``25008`` `!HeldCursorRequiresSameIsolationLevel` `!InternalError` +``25P01`` `!NoActiveSqlTransaction` `!InternalError` +``25P02`` `!InFailedSqlTransaction` `!InternalError` +``25P03`` `!IdleInTransactionSessionTimeout` `!InternalError` +**Class 26** - Invalid SQL Statement Name +--------------------------------------------------------------------------------- +``26000`` `!InvalidSqlStatementName` `!ProgrammingError` +**Class 27** - Triggered Data Change Violation +--------------------------------------------------------------------------------- +``27000`` `!TriggeredDataChangeViolation` `!OperationalError` +**Class 28** - Invalid Authorization Specification +--------------------------------------------------------------------------------- +``28000`` `!InvalidAuthorizationSpecification` `!OperationalError` +``28P01`` `!InvalidPassword` `!OperationalError` +**Class 2B** - Dependent Privilege Descriptors Still Exist +--------------------------------------------------------------------------------- +``2B000`` `!DependentPrivilegeDescriptorsStillExist` `!InternalError` +``2BP01`` `!DependentObjectsStillExist` `!InternalError` +**Class 2D** - Invalid Transaction Termination +--------------------------------------------------------------------------------- +``2D000`` `!InvalidTransactionTermination` `!InternalError` +**Class 2F** - SQL Routine Exception +--------------------------------------------------------------------------------- +``2F000`` `!SqlRoutineException` `!OperationalError` +``2F002`` `!ModifyingSqlDataNotPermitted` `!OperationalError` +``2F003`` `!ProhibitedSqlStatementAttempted` `!OperationalError` +``2F004`` `!ReadingSqlDataNotPermitted` `!OperationalError` +``2F005`` `!FunctionExecutedNoReturnStatement` `!OperationalError` +**Class 34** - Invalid Cursor Name +--------------------------------------------------------------------------------- +``34000`` `!InvalidCursorName` `!ProgrammingError` +**Class 38** - External Routine Exception +--------------------------------------------------------------------------------- +``38000`` `!ExternalRoutineException` `!OperationalError` +``38001`` `!ContainingSqlNotPermitted` `!OperationalError` +``38002`` `!ModifyingSqlDataNotPermittedExt` `!OperationalError` +``38003`` `!ProhibitedSqlStatementAttemptedExt` `!OperationalError` +``38004`` `!ReadingSqlDataNotPermittedExt` `!OperationalError` +**Class 39** - External Routine Invocation Exception +--------------------------------------------------------------------------------- +``39000`` `!ExternalRoutineInvocationException` `!OperationalError` +``39001`` `!InvalidSqlstateReturned` `!OperationalError` +``39004`` `!NullValueNotAllowedExt` `!OperationalError` +``39P01`` `!TriggerProtocolViolated` 
`!OperationalError` +``39P02`` `!SrfProtocolViolated` `!OperationalError` +``39P03`` `!EventTriggerProtocolViolated` `!OperationalError` +**Class 3B** - Savepoint Exception +--------------------------------------------------------------------------------- +``3B000`` `!SavepointException` `!OperationalError` +``3B001`` `!InvalidSavepointSpecification` `!OperationalError` +**Class 3D** - Invalid Catalog Name +--------------------------------------------------------------------------------- +``3D000`` `!InvalidCatalogName` `!ProgrammingError` +**Class 3F** - Invalid Schema Name +--------------------------------------------------------------------------------- +``3F000`` `!InvalidSchemaName` `!ProgrammingError` +**Class 40** - Transaction Rollback +--------------------------------------------------------------------------------- +``40000`` `!TransactionRollback` `!OperationalError` +``40001`` `!SerializationFailure` `!OperationalError` +``40002`` `!TransactionIntegrityConstraintViolation` `!OperationalError` +``40003`` `!StatementCompletionUnknown` `!OperationalError` +``40P01`` `!DeadlockDetected` `!OperationalError` +**Class 42** - Syntax Error or Access Rule Violation +--------------------------------------------------------------------------------- +``42000`` `!SyntaxErrorOrAccessRuleViolation` `!ProgrammingError` +``42501`` `!InsufficientPrivilege` `!ProgrammingError` +``42601`` `!SyntaxError` `!ProgrammingError` +``42602`` `!InvalidName` `!ProgrammingError` +``42611`` `!InvalidColumnDefinition` `!ProgrammingError` +``42622`` `!NameTooLong` `!ProgrammingError` +``42701`` `!DuplicateColumn` `!ProgrammingError` +``42702`` `!AmbiguousColumn` `!ProgrammingError` +``42703`` `!UndefinedColumn` `!ProgrammingError` +``42704`` `!UndefinedObject` `!ProgrammingError` +``42710`` `!DuplicateObject` `!ProgrammingError` +``42712`` `!DuplicateAlias` `!ProgrammingError` +``42723`` `!DuplicateFunction` `!ProgrammingError` +``42725`` `!AmbiguousFunction` `!ProgrammingError` +``42803`` `!GroupingError` `!ProgrammingError` +``42804`` `!DatatypeMismatch` `!ProgrammingError` +``42809`` `!WrongObjectType` `!ProgrammingError` +``42830`` `!InvalidForeignKey` `!ProgrammingError` +``42846`` `!CannotCoerce` `!ProgrammingError` +``42883`` `!UndefinedFunction` `!ProgrammingError` +``428C9`` `!GeneratedAlways` `!ProgrammingError` +``42939`` `!ReservedName` `!ProgrammingError` +``42P01`` `!UndefinedTable` `!ProgrammingError` +``42P02`` `!UndefinedParameter` `!ProgrammingError` +``42P03`` `!DuplicateCursor` `!ProgrammingError` +``42P04`` `!DuplicateDatabase` `!ProgrammingError` +``42P05`` `!DuplicatePreparedStatement` `!ProgrammingError` +``42P06`` `!DuplicateSchema` `!ProgrammingError` +``42P07`` `!DuplicateTable` `!ProgrammingError` +``42P08`` `!AmbiguousParameter` `!ProgrammingError` +``42P09`` `!AmbiguousAlias` `!ProgrammingError` +``42P10`` `!InvalidColumnReference` `!ProgrammingError` +``42P11`` `!InvalidCursorDefinition` `!ProgrammingError` +``42P12`` `!InvalidDatabaseDefinition` `!ProgrammingError` +``42P13`` `!InvalidFunctionDefinition` `!ProgrammingError` +``42P14`` `!InvalidPreparedStatementDefinition` `!ProgrammingError` +``42P15`` `!InvalidSchemaDefinition` `!ProgrammingError` +``42P16`` `!InvalidTableDefinition` `!ProgrammingError` +``42P17`` `!InvalidObjectDefinition` `!ProgrammingError` +``42P18`` `!IndeterminateDatatype` `!ProgrammingError` +``42P19`` `!InvalidRecursion` `!ProgrammingError` +``42P20`` `!WindowingError` `!ProgrammingError` +``42P21`` `!CollationMismatch` `!ProgrammingError` +``42P22`` 
`!IndeterminateCollation` `!ProgrammingError` +**Class 44** - WITH CHECK OPTION Violation +--------------------------------------------------------------------------------- +``44000`` `!WithCheckOptionViolation` `!ProgrammingError` +**Class 53** - Insufficient Resources +--------------------------------------------------------------------------------- +``53000`` `!InsufficientResources` `!OperationalError` +``53100`` `!DiskFull` `!OperationalError` +``53200`` `!OutOfMemory` `!OperationalError` +``53300`` `!TooManyConnections` `!OperationalError` +``53400`` `!ConfigurationLimitExceeded` `!OperationalError` +**Class 54** - Program Limit Exceeded +--------------------------------------------------------------------------------- +``54000`` `!ProgramLimitExceeded` `!OperationalError` +``54001`` `!StatementTooComplex` `!OperationalError` +``54011`` `!TooManyColumns` `!OperationalError` +``54023`` `!TooManyArguments` `!OperationalError` +**Class 55** - Object Not In Prerequisite State +--------------------------------------------------------------------------------- +``55000`` `!ObjectNotInPrerequisiteState` `!OperationalError` +``55006`` `!ObjectInUse` `!OperationalError` +``55P02`` `!CantChangeRuntimeParam` `!OperationalError` +``55P03`` `!LockNotAvailable` `!OperationalError` +``55P04`` `!UnsafeNewEnumValueUsage` `!OperationalError` +**Class 57** - Operator Intervention +--------------------------------------------------------------------------------- +``57000`` `!OperatorIntervention` `!OperationalError` +``57014`` `!QueryCanceled` `!OperationalError` +``57P01`` `!AdminShutdown` `!OperationalError` +``57P02`` `!CrashShutdown` `!OperationalError` +``57P03`` `!CannotConnectNow` `!OperationalError` +``57P04`` `!DatabaseDropped` `!OperationalError` +``57P05`` `!IdleSessionTimeout` `!OperationalError` +**Class 58** - System Error (errors external to PostgreSQL itself) +--------------------------------------------------------------------------------- +``58000`` `!SystemError` `!OperationalError` +``58030`` `!IoError` `!OperationalError` +``58P01`` `!UndefinedFile` `!OperationalError` +``58P02`` `!DuplicateFile` `!OperationalError` +**Class 72** - Snapshot Failure +--------------------------------------------------------------------------------- +``72000`` `!SnapshotTooOld` `!DatabaseError` +**Class F0** - Configuration File Error +--------------------------------------------------------------------------------- +``F0000`` `!ConfigFileError` `!OperationalError` +``F0001`` `!LockFileExists` `!OperationalError` +**Class HV** - Foreign Data Wrapper Error (SQL/MED) +--------------------------------------------------------------------------------- +``HV000`` `!FdwError` `!OperationalError` +``HV001`` `!FdwOutOfMemory` `!OperationalError` +``HV002`` `!FdwDynamicParameterValueNeeded` `!OperationalError` +``HV004`` `!FdwInvalidDataType` `!OperationalError` +``HV005`` `!FdwColumnNameNotFound` `!OperationalError` +``HV006`` `!FdwInvalidDataTypeDescriptors` `!OperationalError` +``HV007`` `!FdwInvalidColumnName` `!OperationalError` +``HV008`` `!FdwInvalidColumnNumber` `!OperationalError` +``HV009`` `!FdwInvalidUseOfNullPointer` `!OperationalError` +``HV00A`` `!FdwInvalidStringFormat` `!OperationalError` +``HV00B`` `!FdwInvalidHandle` `!OperationalError` +``HV00C`` `!FdwInvalidOptionIndex` `!OperationalError` +``HV00D`` `!FdwInvalidOptionName` `!OperationalError` +``HV00J`` `!FdwOptionNameNotFound` `!OperationalError` +``HV00K`` `!FdwReplyHandle` `!OperationalError` +``HV00L`` `!FdwUnableToCreateExecution` 
`!OperationalError` +``HV00M`` `!FdwUnableToCreateReply` `!OperationalError` +``HV00N`` `!FdwUnableToEstablishConnection` `!OperationalError` +``HV00P`` `!FdwNoSchemas` `!OperationalError` +``HV00Q`` `!FdwSchemaNotFound` `!OperationalError` +``HV00R`` `!FdwTableNotFound` `!OperationalError` +``HV010`` `!FdwFunctionSequenceError` `!OperationalError` +``HV014`` `!FdwTooManyHandles` `!OperationalError` +``HV021`` `!FdwInconsistentDescriptorInformation` `!OperationalError` +``HV024`` `!FdwInvalidAttributeValue` `!OperationalError` +``HV090`` `!FdwInvalidStringLengthOrBufferLength` `!OperationalError` +``HV091`` `!FdwInvalidDescriptorFieldIdentifier` `!OperationalError` +**Class P0** - PL/pgSQL Error +--------------------------------------------------------------------------------- +``P0000`` `!PlpgsqlError` `!ProgrammingError` +``P0001`` `!RaiseException` `!ProgrammingError` +``P0002`` `!NoDataFound` `!ProgrammingError` +``P0003`` `!TooManyRows` `!ProgrammingError` +``P0004`` `!AssertFailure` `!ProgrammingError` +**Class XX** - Internal Error +--------------------------------------------------------------------------------- +``XX000`` `!InternalError_` `!InternalError` +``XX001`` `!DataCorrupted` `!InternalError` +``XX002`` `!IndexCorrupted` `!InternalError` +========= ================================================== ==================== + +.. autogenerated: end + +.. versionadded:: 3.1.4 + Exception `!SqlJsonItemCannotBeCastToTargetType`, introduced in PostgreSQL + 15. diff --git a/docs/api/index.rst b/docs/api/index.rst new file mode 100644 index 0000000..b99550d --- /dev/null +++ b/docs/api/index.rst @@ -0,0 +1,29 @@ +Psycopg 3 API +============= + +.. _api: + +This sections is a reference for all the public objects exposed by the +`psycopg` module. For a more conceptual description you can take a look at +:ref:`basic` and :ref:`advanced`. + +.. toctree:: + :maxdepth: 2 + :caption: Contents: + + module + connections + cursors + copy + objects + sql + rows + errors + pool + conninfo + adapt + types + abc + pq + crdb + dns diff --git a/docs/api/module.rst b/docs/api/module.rst new file mode 100644 index 0000000..3c3d3c4 --- /dev/null +++ b/docs/api/module.rst @@ -0,0 +1,59 @@ +The `!psycopg` module +===================== + +Psycopg implements the `Python Database DB API 2.0 specification`__. As such +it also exposes the `module-level objects`__ required by the specifications. + +.. __: https://www.python.org/dev/peps/pep-0249/ +.. __: https://www.python.org/dev/peps/pep-0249/#module-interface + +.. module:: psycopg + +.. autofunction:: connect + + This is an alias of the class method `Connection.connect`: see its + documentation for details. + + If you need an asynchronous connection use `AsyncConnection.connect` + instead. + + +.. rubric:: Exceptions + +The standard `DBAPI exceptions`__ are exposed both by the `!psycopg` module +and by the `psycopg.errors` module. The latter also exposes more specific +exceptions, mapping to the database error states (see +:ref:`sqlstate-exceptions`). + +.. __: https://www.python.org/dev/peps/pep-0249/#exceptions + +.. parsed-literal:: + + `!Exception` + \|__ `Warning` + \|__ `Error` + \|__ `InterfaceError` + \|__ `DatabaseError` + \|__ `DataError` + \|__ `OperationalError` + \|__ `IntegrityError` + \|__ `InternalError` + \|__ `ProgrammingError` + \|__ `NotSupportedError` + + +.. data:: adapters + + The default adapters map establishing how Python and PostgreSQL types are + converted into each other. 
+ + This map is used as a template when new connections are created, using + `psycopg.connect()`. Its `~psycopg.adapt.AdaptersMap.types` attribute is a + `~psycopg.types.TypesRegistry` containing information about every + PostgreSQL builtin type, useful for adaptation customisation (see + :ref:`adaptation`):: + + >>> psycopg.adapters.types["int4"] + <TypeInfo: int4 (oid: 23, array oid: 1007)> + + :type: `~psycopg.adapt.AdaptersMap` diff --git a/docs/api/objects.rst b/docs/api/objects.rst new file mode 100644 index 0000000..f085ed9 --- /dev/null +++ b/docs/api/objects.rst @@ -0,0 +1,256 @@ +.. currentmodule:: psycopg + +Other top-level objects +======================= + +Connection information +---------------------- + +.. autoclass:: ConnectionInfo() + + The object is usually returned by `Connection.info`. + + .. autoattribute:: dsn + + .. note:: The `get_parameters()` method returns the same information + as a dict. + + .. autoattribute:: status + + The status can be one of a number of values. However, only two of + these are seen outside of an asynchronous connection procedure: + `~pq.ConnStatus.OK` and `~pq.ConnStatus.BAD`. A good connection to the + database has the status `!OK`. Ordinarily, an `!OK` status will remain + so until `Connection.close()`, but a communications failure might + result in the status changing to `!BAD` prematurely. + + .. autoattribute:: transaction_status + + The status can be `~pq.TransactionStatus.IDLE` (currently idle), + `~pq.TransactionStatus.ACTIVE` (a command is in progress), + `~pq.TransactionStatus.INTRANS` (idle, in a valid transaction block), + or `~pq.TransactionStatus.INERROR` (idle, in a failed transaction + block). `~pq.TransactionStatus.UNKNOWN` is reported if the connection + is bad. `!ACTIVE` is reported only when a query has been sent to the + server and not yet completed. + + .. autoattribute:: pipeline_status + + .. autoattribute:: backend_pid + .. autoattribute:: vendor + + Normally it is `PostgreSQL`; it may be different if connected to + a different database. + + .. versionadded:: 3.1 + + .. autoattribute:: server_version + + The number is formed by converting the major, minor, and revision + numbers into two-decimal-digit numbers and appending them together. + Starting from PostgreSQL 10 the minor version was dropped, so the + second group of digits is always 00. For example, version 9.3.5 is + returned as 90305, version 10.2 as 100002. + + .. autoattribute:: error_message + + .. automethod:: get_parameters + + .. note:: The `dsn` attribute returns the same information in the form + as a string. + + .. autoattribute:: timezone + + .. code:: pycon + + >>> conn.info.timezone + zoneinfo.ZoneInfo(key='Europe/Rome') + + .. autoattribute:: host + + This can be a host name, an IP address, or a directory path if the + connection is via Unix socket. (The path case can be distinguished + because it will always be an absolute path, beginning with ``/``.) + + .. autoattribute:: hostaddr + + Only available if the libpq used is at least from PostgreSQL 12. + Raise `~psycopg.NotSupportedError` otherwise. + + .. autoattribute:: port + .. autoattribute:: dbname + .. autoattribute:: user + .. autoattribute:: password + .. autoattribute:: options + .. automethod:: parameter_status + + Example of parameters are ``server_version``, + ``standard_conforming_strings``... See :pq:`PQparameterStatus()` for + all the available parameters. + + .. 
autoattribute:: encoding + + The value returned is always normalized to the Python codec + `~codecs.CodecInfo.name`:: + + conn.execute("SET client_encoding TO LATIN9") + conn.info.encoding + 'iso8859-15' + + A few PostgreSQL encodings are not available in Python and cannot be + selected (currently ``EUC_TW``, ``MULE_INTERNAL``). The PostgreSQL + ``SQL_ASCII`` encoding has the special meaning of "no encoding": see + :ref:`adapt-string` for details. + + .. seealso:: + + The `PostgreSQL supported encodings`__. + + .. __: https://www.postgresql.org/docs/current/multibyte.html + + +The description `Column` object +------------------------------- + +.. autoclass:: Column() + + An object describing a column of data from a database result, `as described + by the DBAPI`__, so it can also be unpacked as a 7-items tuple. + + The object is returned by `Cursor.description`. + + .. __: https://www.python.org/dev/peps/pep-0249/#description + + .. autoattribute:: name + .. autoattribute:: type_code + .. autoattribute:: display_size + .. autoattribute:: internal_size + .. autoattribute:: precision + .. autoattribute:: scale + + +Notifications +------------- + +.. autoclass:: Notify() + + The object is usually returned by `Connection.notifies()`. + + .. attribute:: channel + :type: str + + The name of the channel on which the notification was received. + + .. attribute:: payload + :type: str + + The message attached to the notification. + + .. attribute:: pid + :type: int + + The PID of the backend process which sent the notification. + + +Pipeline-related objects +------------------------ + +See :ref:`pipeline-mode` for details. + +.. autoclass:: Pipeline + + This objects is returned by `Connection.pipeline()`. + + .. automethod:: sync + .. automethod:: is_supported + + +.. autoclass:: AsyncPipeline + + This objects is returned by `AsyncConnection.pipeline()`. + + .. automethod:: sync + + +Transaction-related objects +--------------------------- + +See :ref:`transactions` for details about these objects. + +.. autoclass:: IsolationLevel + :members: + + The value is usually used with the `Connection.isolation_level` property. + + Check the PostgreSQL documentation for a description of the effects of the + different `levels of transaction isolation`__. + + .. __: https://www.postgresql.org/docs/current/transaction-iso.html + + +.. autoclass:: Transaction() + + .. autoattribute:: savepoint_name + .. autoattribute:: connection + + +.. autoclass:: AsyncTransaction() + + .. autoattribute:: connection + + +.. autoexception:: Rollback + + It can be used as: + + - ``raise Rollback``: roll back the operation that happened in the current + transaction block and continue the program after the block. + + - ``raise Rollback()``: same effect as above + + - :samp:`raise Rollback({tx})`: roll back any operation that happened in + the `Transaction` `!tx` (returned by a statement such as :samp:`with + conn.transaction() as {tx}:` and all the blocks nested within. The + program will continue after the `!tx` block. + + +Two-Phase Commit related objects +-------------------------------- + +.. autoclass:: Xid() + + See :ref:`two-phase-commit` for details. + + .. autoattribute:: format_id + + Format Identifier of the two-phase transaction. + + .. autoattribute:: gtrid + + Global Transaction Identifier of the two-phase transaction. + + If the Xid doesn't follow the XA standard, it will be the PostgreSQL + ID of the transaction (in which case `format_id` and `bqual` will be + `!None`). + + .. 
autoattribute:: bqual
+
+      Branch Qualifier of the two-phase transaction.
+
+   .. autoattribute:: prepared
+
+      Timestamp at which the transaction was prepared for commit.
+
+      Only available on transactions recovered by `~Connection.tpc_recover()`.
+
+   .. autoattribute:: owner
+
+      Name of the user that executed the transaction.
+
+      Only available on recovered transactions.
+
+   .. autoattribute:: database
+
+      Name of the database in which the transaction was executed.
+
+      Only available on recovered transactions.
diff --git a/docs/api/pool.rst b/docs/api/pool.rst
new file mode 100644
index 0000000..76ccc74
--- /dev/null
+++ b/docs/api/pool.rst
@@ -0,0 +1,331 @@
+`!psycopg_pool` -- Connection pool implementations
+==================================================
+
+.. index::
+   double: Connection; Pool
+
+.. module:: psycopg_pool
+
+A connection pool is an object used to create and maintain a limited amount of
+PostgreSQL connections, reducing the time required by the program to obtain a
+working connection and allowing an arbitrarily large number of concurrent
+threads or tasks to use a controlled amount of resources on the server. See
+:ref:`connection-pools` for more details and usage patterns.
+
+This package exposes a few connection pool classes:
+
+- `ConnectionPool` is a synchronous connection pool yielding
+  `~psycopg.Connection` objects and can be used by multithreaded applications.
+
+- `AsyncConnectionPool` has an interface similar to `!ConnectionPool`, but
+  with `asyncio` functions replacing blocking functions, and yields
+  `~psycopg.AsyncConnection` instances.
+
+- `NullConnectionPool` is a `!ConnectionPool` subclass exposing the same
+  interface as its parent, but not keeping any unused connection in its state.
+  See :ref:`null-pool` for details about related use cases.
+
+- `AsyncNullConnectionPool` has the same behaviour as the
+  `!NullConnectionPool`, but with the same async interface as the
+  `!AsyncConnectionPool`.
+
+.. note:: The `!psycopg_pool` package is distributed separately from the main
+   `psycopg` package: use ``pip install "psycopg[pool]"``, or ``pip install
+   psycopg_pool``, to make it available. See :ref:`pool-installation`.
+
+   The version numbers indicated on this page refer to the `!psycopg_pool`
+   package, not to `psycopg`.
+
+
+The `!ConnectionPool` class
+---------------------------
+
+.. autoclass:: ConnectionPool
+
+   This class implements a connection pool serving `~psycopg.Connection`
+   instances (or subclasses). The constructor has *a lot* of arguments, but
+   only `!conninfo` and `!min_size` are fundamental; all the other arguments
+   have meaningful defaults and can probably be tweaked later, if required.
+
+   :param conninfo: The connection string. See
+       `~psycopg.Connection.connect()` for details.
+   :type conninfo: `!str`
+
+   :param min_size: The minimum number of connections the pool will hold. The
+       pool will actively try to create new connections if some are lost
+       (closed, broken) and will try to never go below `!min_size`.
+   :type min_size: `!int`, default: 4
+
+   :param max_size: The maximum number of connections the pool will hold. If
+       `!None`, or equal to `!min_size`, the pool will not grow or shrink. If
+       larger than `!min_size`, the pool can grow if more than `!min_size`
+       connections are requested at the same time and will shrink back after
+       the extra connections have been unused for more than `!max_idle`
+       seconds.
+   :type max_size: `!int`, default: `!None`
+
+   :param kwargs: Extra arguments to pass to `!connect()`. Note that this is
+       *one dict argument* of the pool constructor, which is expanded into
+       `connect()` keyword parameters.
+
+   :type kwargs: `!dict`
+
+   :param connection_class: The class of the connections to serve. It should
+       be a `!Connection` subclass.
+   :type connection_class: `!type`, default: `~psycopg.Connection`
+
+   :param open: If `!True`, open the pool, creating the required connections,
+       on init. If `!False`, open the pool when `!open()` is called or when
+       the pool context is entered. See the `open()` method documentation for
+       more details.
+   :type open: `!bool`, default: `!True`
+
+   :param configure: A callback to configure a connection after creation.
+       Useful, for instance, to configure its adapters. If the connection is
+       used to run internal queries (to inspect the database) make sure to
+       close any open transaction before leaving the function.
+   :type configure: `Callable[[Connection], None]`
+
+   :param reset: A callback to reset a connection after it has been returned
+       to the pool. The connection is guaranteed to be passed to the
+       `!reset()` function in "idle" state (no transaction). When leaving the
+       `!reset()` function the connection must be left in *idle* state,
+       otherwise it is discarded.
+   :type reset: `Callable[[Connection], None]`
+
+   :param name: An optional name to give to the pool, useful, for instance, to
+       identify it in the logs if more than one pool is used. If not
+       specified, a sequential name such as ``pool-1``, ``pool-2``, etc. is
+       picked.
+   :type name: `!str`
+
+   :param timeout: The default maximum time in seconds that a client can wait
+       to receive a connection from the pool (using `connection()` or
+       `getconn()`). Note that these methods allow overriding the `!timeout`
+       default.
+   :type timeout: `!float`, default: 30 seconds
+
+   :param max_waiting: Maximum number of requests that can be queued to the
+       pool, after which new requests will fail, raising `TooManyRequests`.
+       0 means no queue limit.
+   :type max_waiting: `!int`, default: 0
+
+   :param max_lifetime: The maximum lifetime of a connection in the pool, in
+       seconds. Connections used for longer get closed and replaced by a new
+       one. The amount is reduced by a random 10% to avoid mass eviction.
+   :type max_lifetime: `!float`, default: 1 hour
+
+   :param max_idle: Maximum time, in seconds, that a connection can stay
+       unused in the pool before being closed, and the pool shrunk. This only
+       happens to connections in excess of `!min_size`, if `!max_size` allowed
+       the pool to grow.
+   :type max_idle: `!float`, default: 10 minutes
+
+   :param reconnect_timeout: Maximum time, in seconds, the pool will try to
+       create a connection. If a connection attempt fails, the pool will try
+       to reconnect a few times, using an exponential backoff and some random
+       factor to avoid mass attempts. If repeated attempts fail, after
+       `!reconnect_timeout` seconds the connection attempt is aborted and the
+       `!reconnect_failed()` callback is invoked.
+   :type reconnect_timeout: `!float`, default: 5 minutes
+
+   :param reconnect_failed: Callback invoked if an attempt to create a new
+       connection fails for more than `!reconnect_timeout` seconds. The user
+       may decide, for instance, to terminate the program (executing
+       `sys.exit()`). By default nothing is done: a new connection attempt is
+       started (if the number of connections fell below `!min_size`).
+   :type reconnect_failed: ``Callable[[ConnectionPool], None]``
+
+   :param num_workers: Number of background worker threads used to maintain
+       the pool state.
Background workers are used for example to + create new connections and to clean up connections when + they are returned to the pool. + :type num_workers: `!int`, default: 3 + + .. versionchanged:: 3.1 + + added `!open` parameter to init method. + + .. note:: In a future version, the default value for the `!open` parameter + might be changed to `!False`. If you rely on this behaviour (e.g. if + you don't use the pool as a context manager) you might want to specify + this parameter explicitly. + + .. automethod:: connection + + .. code:: python + + with my_pool.connection() as conn: + conn.execute(...) + + # the connection is now back in the pool + + .. automethod:: open + + .. versionadded:: 3.1 + + + .. automethod:: close + + .. note:: + + The pool can be also used as a context manager, in which case it will + be opened (if necessary) on entering the block and closed on exiting it: + + .. code:: python + + with ConnectionPool(...) as pool: + # code using the pool + + .. automethod:: wait + + .. attribute:: name + :type: str + + The name of the pool set on creation, or automatically generated if not + set. + + .. autoattribute:: min_size + .. autoattribute:: max_size + + The current minimum and maximum size of the pool. Use `resize()` to + change them at runtime. + + .. automethod:: resize + .. automethod:: check + .. automethod:: get_stats + .. automethod:: pop_stats + + See :ref:`pool-stats` for the metrics returned. + + .. rubric:: Functionalities you may not need + + .. automethod:: getconn + .. automethod:: putconn + + +Pool exceptions +--------------- + +.. autoclass:: PoolTimeout() + + Subclass of `~psycopg.OperationalError` + +.. autoclass:: PoolClosed() + + Subclass of `~psycopg.OperationalError` + +.. autoclass:: TooManyRequests() + + Subclass of `~psycopg.OperationalError` + + +The `!AsyncConnectionPool` class +-------------------------------- + +`!AsyncConnectionPool` has a very similar interface to the `ConnectionPool` +class but its blocking methods are implemented as `!async` coroutines. It +returns instances of `~psycopg.AsyncConnection`, or of its subclass if +specified so in the `!connection_class` parameter. + +Only the functions with different signature from `!ConnectionPool` are +listed here. + +.. autoclass:: AsyncConnectionPool + + :param connection_class: The class of the connections to serve. It should + be an `!AsyncConnection` subclass. + :type connection_class: `!type`, default: `~psycopg.AsyncConnection` + + :param configure: A callback to configure a connection after creation. + :type configure: `async Callable[[AsyncConnection], None]` + + :param reset: A callback to reset a function after it has been returned to + the pool. + :type reset: `async Callable[[AsyncConnection], None]` + + .. automethod:: connection + + .. code:: python + + async with my_pool.connection() as conn: + await conn.execute(...) + + # the connection is now back in the pool + + .. automethod:: open + .. automethod:: close + + .. note:: + + The pool can be also used as an async context manager, in which case it + will be opened (if necessary) on entering the block and closed on + exiting it: + + .. code:: python + + async with AsyncConnectionPool(...) as pool: + # code using the pool + + All the other constructor parameters are the same of `!ConnectionPool`. + + .. automethod:: wait + .. automethod:: resize + .. automethod:: check + .. automethod:: getconn + .. automethod:: putconn + + +Null connection pools +--------------------- + +.. 
versionadded:: 3.1 + +The `NullConnectionPool` is a `ConnectionPool` subclass which doesn't create +connections preemptively and doesn't keep unused connections in its state. See +:ref:`null-pool` for further details. + +The interface of the object is entirely compatible with its parent class. Its +behaviour is similar, with the following differences: + +.. autoclass:: NullConnectionPool + + All the other constructor parameters are the same as in `ConnectionPool`. + + :param min_size: Always 0, cannot be changed. + :type min_size: `!int`, default: 0 + + :param max_size: If None or 0, create a new connection at every request, + without a maximum. If greater than 0, don't create more + than `!max_size` connections and queue the waiting clients. + :type max_size: `!int`, default: None + + :param reset: It is only called when there are waiting clients in the + queue, before giving them a connection already open. If no + client is waiting, the connection is closed and discarded + without a fuss. + :type reset: `Callable[[Connection], None]` + + :param max_idle: Ignored, as null pools don't leave idle connections + sitting around. + + .. automethod:: wait + .. automethod:: resize + .. automethod:: check + + +The `AsyncNullConnectionPool` is, similarly, an `AsyncConnectionPool` subclass +with the same behaviour of the `NullConnectionPool`. + +.. autoclass:: AsyncNullConnectionPool + + The interface is the same of its parent class `AsyncConnectionPool`. The + behaviour is different in the same way described for `NullConnectionPool`. diff --git a/docs/api/pq.rst b/docs/api/pq.rst new file mode 100644 index 0000000..3d9c033 --- /dev/null +++ b/docs/api/pq.rst @@ -0,0 +1,218 @@ +.. _psycopg.pq: + +`pq` -- libpq wrapper module +============================ + +.. index:: + single: libpq + +.. module:: psycopg.pq + +Psycopg is built around the libpq_, the PostgreSQL client library, which +performs most of the network communications and returns query results in C +structures. + +.. _libpq: https://www.postgresql.org/docs/current/libpq.html + +The low-level functions of the library are exposed by the objects in the +`!psycopg.pq` module. + + +.. _pq-impl: + +``pq`` module implementations +----------------------------- + +There are actually several implementations of the module, all offering the +same interface. Current implementations are: + +- ``python``: a pure-python implementation, implemented using the `ctypes` + module. It is less performing than the others, but it doesn't need a C + compiler to install. It requires the libpq installed in the system. + +- ``c``: a C implementation of the libpq wrapper (more precisely, implemented + in Cython_). It is much better performing than the ``python`` + implementation, however it requires development packages installed on the + client machine. It can be installed using the ``c`` extra, i.e. running + ``pip install "psycopg[c]"``. + +- ``binary``: a pre-compiled C implementation, bundled with all the required + libraries. It is the easiest option to deal with, fast to install and it + should require no development tool or client library, however it may be not + available for every platform. You can install it using the ``binary`` extra, + i.e. running ``pip install "psycopg[binary]"``. + +.. _Cython: https://cython.org/ + +The implementation currently used is available in the `~psycopg.pq.__impl__` +module constant. + +At import time, Psycopg 3 will try to use the best implementation available +and will fail if none is usable. 
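+
+For instance, a quick interactive check of which implementation was loaded and
+which libpq version it wraps (the values shown below are only an example and
+depend on your installation)::
+
+    >>> import psycopg
+    >>> psycopg.pq.__impl__
+    'binary'
+    >>> psycopg.pq.version()
+    150002
+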
You can force the use of a specific +implementation by exporting the env var :envvar:`PSYCOPG_IMPL`: importing the +library will fail if the requested implementation is not available:: + + $ PSYCOPG_IMPL=c python -c "import psycopg" + Traceback (most recent call last): + ... + ImportError: couldn't import requested psycopg 'c' implementation: No module named 'psycopg_c' + + +Module content +-------------- + +.. autodata:: __impl__ + + The choice of implementation is automatic but can be forced setting the + :envvar:`PSYCOPG_IMPL` env var. + + +.. autofunction:: version + + .. seealso:: the :pq:`PQlibVersion()` function + + +.. autodata:: __build_version__ + +.. autofunction:: error_message + + +Objects wrapping libpq structures and functions +----------------------------------------------- + +.. admonition:: TODO + + finish documentation + +.. autoclass:: PGconn() + + .. autoattribute:: pgconn_ptr + .. automethod:: get_cancel + .. autoattribute:: needs_password + .. autoattribute:: used_password + + .. automethod:: encrypt_password + + .. code:: python + + >>> enc = conn.info.encoding + >>> encrypted = conn.pgconn.encrypt_password(password.encode(enc), rolename.encode(enc)) + b'SCRAM-SHA-256$4096:... + + .. automethod:: trace + .. automethod:: set_trace_flags + .. automethod:: untrace + + .. code:: python + + >>> conn.pgconn.trace(sys.stderr.fileno()) + >>> conn.pgconn.set_trace_flags(pq.Trace.SUPPRESS_TIMESTAMPS | pq.Trace.REGRESS_MODE) + >>> conn.execute("select now()") + F 13 Parse "" "BEGIN" 0 + F 14 Bind "" "" 0 0 1 0 + F 6 Describe P "" + F 9 Execute "" 0 + F 4 Sync + B 4 ParseComplete + B 4 BindComplete + B 4 NoData + B 10 CommandComplete "BEGIN" + B 5 ReadyForQuery T + F 17 Query "select now()" + B 28 RowDescription 1 "now" NNNN 0 NNNN 8 -1 0 + B 39 DataRow 1 29 '2022-09-14 14:12:16.648035+02' + B 13 CommandComplete "SELECT 1" + B 5 ReadyForQuery T + <psycopg.Cursor [TUPLES_OK] [INTRANS] (database=postgres) at 0x7f18a18ba040> + >>> conn.pgconn.untrace() + + +.. autoclass:: PGresult() + + .. autoattribute:: pgresult_ptr + + +.. autoclass:: Conninfo +.. autoclass:: Escaping + +.. autoclass:: PGcancel() + :members: + + +Enumerations +------------ + +.. autoclass:: ConnStatus + :members: + + There are other values in this enum, but only `OK` and `BAD` are seen + after a connection has been established. Other statuses might only be seen + during the connection phase and are considered internal. + + .. seealso:: :pq:`PQstatus()` returns this value. + + +.. autoclass:: PollingStatus + :members: + + .. seealso:: :pq:`PQconnectPoll` for a description of these states. + + +.. autoclass:: TransactionStatus + :members: + + .. seealso:: :pq:`PQtransactionStatus` for a description of these states. + + +.. autoclass:: ExecStatus + :members: + + .. seealso:: :pq:`PQresultStatus` for a description of these states. + + +.. autoclass:: PipelineStatus + :members: + + .. seealso:: :pq:`PQpipelineStatus` for a description of these states. + + +.. autoclass:: Format + :members: + + +.. autoclass:: DiagnosticField + + Available attributes: + + .. attribute:: + SEVERITY + SEVERITY_NONLOCALIZED + SQLSTATE + MESSAGE_PRIMARY + MESSAGE_DETAIL + MESSAGE_HINT + STATEMENT_POSITION + INTERNAL_POSITION + INTERNAL_QUERY + CONTEXT + SCHEMA_NAME + TABLE_NAME + COLUMN_NAME + DATATYPE_NAME + CONSTRAINT_NAME + SOURCE_FILE + SOURCE_LINE + SOURCE_FUNCTION + + .. seealso:: :pq:`PQresultErrorField` for a description of these values. + + +.. autoclass:: Ping + :members: + + .. 
seealso:: :pq:`PQpingParams` for a description of these values. + +.. autoclass:: Trace + :members: + + .. seealso:: :pq:`PQsetTraceFlags` for a description of these values. diff --git a/docs/api/rows.rst b/docs/api/rows.rst new file mode 100644 index 0000000..204f1ea --- /dev/null +++ b/docs/api/rows.rst @@ -0,0 +1,74 @@ +.. _psycopg.rows: + +`rows` -- row factory implementations +===================================== + +.. module:: psycopg.rows + +The module exposes a few generic `~psycopg.RowFactory` implementation, which +can be used to retrieve data from the database in more complex structures than +the basic tuples. + +Check out :ref:`row-factories` for information about how to use these objects. + +.. autofunction:: tuple_row +.. autofunction:: dict_row +.. autofunction:: namedtuple_row +.. autofunction:: class_row + + This is not a row factory, but rather a factory of row factories. + Specifying `!row_factory=class_row(MyClass)` will create connections and + cursors returning `!MyClass` objects on fetch. + + Example:: + + from dataclasses import dataclass + import psycopg + from psycopg.rows import class_row + + @dataclass + class Person: + first_name: str + last_name: str + age: int = None + + conn = psycopg.connect() + cur = conn.cursor(row_factory=class_row(Person)) + + cur.execute("select 'John' as first_name, 'Smith' as last_name").fetchone() + # Person(first_name='John', last_name='Smith', age=None) + +.. autofunction:: args_row +.. autofunction:: kwargs_row + + +Formal rows protocols +--------------------- + +These objects can be used to describe your own rows adapter for static typing +checks, such as mypy_. + +.. _mypy: https://mypy.readthedocs.io/ + + +.. autoclass:: psycopg.rows.RowMaker() + + .. method:: __call__(values: Sequence[Any]) -> Row + + Convert a sequence of values from the database to a finished object. + + +.. autoclass:: psycopg.rows.RowFactory() + + .. method:: __call__(cursor: Cursor[Row]) -> RowMaker[Row] + + Inspect the result on a cursor and return a `RowMaker` to convert rows. + +.. autoclass:: psycopg.rows.AsyncRowFactory() + +.. autoclass:: psycopg.rows.BaseRowFactory() + +Note that it's easy to implement an object implementing both `!RowFactory` and +`!AsyncRowFactory`: usually, everything you need to implement a row factory is +to access the cursor's `~psycopg.Cursor.description`, which is provided by +both the cursor flavours. diff --git a/docs/api/sql.rst b/docs/api/sql.rst new file mode 100644 index 0000000..6959fee --- /dev/null +++ b/docs/api/sql.rst @@ -0,0 +1,151 @@ +`sql` -- SQL string composition +=============================== + +.. index:: + double: Binding; Client-Side + +.. module:: psycopg.sql + +The module contains objects and functions useful to generate SQL dynamically, +in a convenient and safe way. SQL identifiers (e.g. names of tables and +fields) cannot be passed to the `~psycopg.Cursor.execute()` method like query +arguments:: + + # This will not work + table_name = 'my_table' + cur.execute("INSERT INTO %s VALUES (%s, %s)", [table_name, 10, 20]) + +The SQL query should be composed before the arguments are merged, for +instance:: + + # This works, but it is not optimal + table_name = 'my_table' + cur.execute( + "INSERT INTO %s VALUES (%%s, %%s)" % table_name, + [10, 20]) + +This sort of works, but it is an accident waiting to happen: the table name +may be an invalid SQL literal and need quoting; even more serious is the +security problem in case the table name comes from an untrusted source. 
The +name should be escaped using `~psycopg.pq.Escaping.escape_identifier()`:: + + from psycopg.pq import Escaping + + # This works, but it is not optimal + table_name = 'my_table' + cur.execute( + "INSERT INTO %s VALUES (%%s, %%s)" % Escaping.escape_identifier(table_name), + [10, 20]) + +This is now safe, but it somewhat ad-hoc. In case, for some reason, it is +necessary to include a value in the query string (as opposite as in a value) +the merging rule is still different. It is also still relatively dangerous: if +`!escape_identifier()` is forgotten somewhere, the program will usually work, +but will eventually crash in the presence of a table or field name with +containing characters to escape, or will present a potentially exploitable +weakness. + +The objects exposed by the `!psycopg.sql` module allow generating SQL +statements on the fly, separating clearly the variable parts of the statement +from the query parameters:: + + from psycopg import sql + + cur.execute( + sql.SQL("INSERT INTO {} VALUES (%s, %s)") + .format(sql.Identifier('my_table')), + [10, 20]) + + +Module usage +------------ + +Usually you should express the template of your query as an `SQL` instance +with ``{}``\-style placeholders and use `~SQL.format()` to merge the variable +parts into them, all of which must be `Composable` subclasses. You can still +have ``%s``\-style placeholders in your query and pass values to +`~psycopg.Cursor.execute()`: such value placeholders will be untouched by +`!format()`:: + + query = sql.SQL("SELECT {field} FROM {table} WHERE {pkey} = %s").format( + field=sql.Identifier('my_name'), + table=sql.Identifier('some_table'), + pkey=sql.Identifier('id')) + +The resulting object is meant to be passed directly to cursor methods such as +`~psycopg.Cursor.execute()`, `~psycopg.Cursor.executemany()`, +`~psycopg.Cursor.copy()`, but can also be used to compose a query as a Python +string, using the `~Composable.as_string()` method:: + + cur.execute(query, (42,)) + full_query = query.as_string(cur) + +If part of your query is a variable sequence of arguments, such as a +comma-separated list of field names, you can use the `SQL.join()` method to +pass them to the query:: + + query = sql.SQL("SELECT {fields} FROM {table}").format( + fields=sql.SQL(',').join([ + sql.Identifier('field1'), + sql.Identifier('field2'), + sql.Identifier('field3'), + ]), + table=sql.Identifier('some_table')) + + +`!sql` objects +-------------- + +The `!sql` objects are in the following inheritance hierarchy: + +| `Composable`: the base class exposing the common interface +| ``|__`` `SQL`: a literal snippet of an SQL query +| ``|__`` `Identifier`: a PostgreSQL identifier or dot-separated sequence of identifiers +| ``|__`` `Literal`: a value hardcoded into a query +| ``|__`` `Placeholder`: a `%s`\ -style placeholder whose value will be added later e.g. by `~psycopg.Cursor.execute()` +| ``|__`` `Composed`: a sequence of `!Composable` instances. + + +.. autoclass:: Composable() + + .. automethod:: as_bytes + .. automethod:: as_string + + +.. autoclass:: SQL + + .. versionchanged:: 3.1 + + The input object should be a `~typing.LiteralString`. See :pep:`675` + for details. + + .. automethod:: format + + .. automethod:: join + + +.. autoclass:: Identifier + +.. autoclass:: Literal + + .. versionchanged:: 3.1 + Add a type cast to the representation if useful in ambiguous context + (e.g. ``'2000-01-01'::date``) + +.. autoclass:: Placeholder + +.. autoclass:: Composed + + .. 
automethod:: join + + +Utility functions +----------------- + +.. autofunction:: quote + +.. data:: + NULL + DEFAULT + + `sql.SQL` objects often useful in queries. diff --git a/docs/api/types.rst b/docs/api/types.rst new file mode 100644 index 0000000..f04659e --- /dev/null +++ b/docs/api/types.rst @@ -0,0 +1,168 @@ +.. currentmodule:: psycopg.types + +.. _psycopg.types: + +`!types` -- Types information and adapters +========================================== + +.. module:: psycopg.types + +The `!psycopg.types` package exposes: + +- objects to describe PostgreSQL types, such as `TypeInfo`, `TypesRegistry`, + to help or :ref:`customise the types conversion <adaptation>`; + +- concrete implementations of `~psycopg.abc.Loader` and `~psycopg.abc.Dumper` + protocols to :ref:`handle builtin data types <types-adaptation>`; + +- helper objects to represent PostgreSQL data types which :ref:`don't have a + straightforward Python representation <extra-adaptation>`, such as + `~range.Range`. + + +Types information +----------------- + +The `TypeInfo` object describes simple information about a PostgreSQL data +type, such as its name, oid and array oid. `!TypeInfo` subclasses may hold more +information, for instance the components of a composite type. + +You can use `TypeInfo.fetch()` to query information from a database catalog, +which is then used by helper functions, such as +`~psycopg.types.hstore.register_hstore()`, to register adapters on types whose +OID is not known upfront or to create more specialised adapters. + +The `!TypeInfo` object doesn't instruct Psycopg to convert a PostgreSQL type +into a Python type: this is the role of a `~psycopg.abc.Loader`. However it +can extend the behaviour of other adapters: if you create a loader for +`!MyType`, using the `TypeInfo` information, Psycopg will be able to manage +seamlessly arrays of `!MyType` or ranges and composite types using `!MyType` +as a subtype. + +.. seealso:: :ref:`adaptation` describes how to convert from Python objects to + PostgreSQL types and back. + +.. code:: python + + from psycopg.adapt import Loader + from psycopg.types import TypeInfo + + t = TypeInfo.fetch(conn, "mytype") + t.register(conn) + + for record in conn.execute("SELECT mytypearray FROM mytable"): + # records will return lists of "mytype" as string + + class MyTypeLoader(Loader): + def load(self, data): + # parse the data and return a MyType instance + + conn.adapters.register_loader("mytype", MyTypeLoader) + + for record in conn.execute("SELECT mytypearray FROM mytable"): + # records will return lists of MyType instances + + +.. autoclass:: TypeInfo + + .. method:: fetch(conn, name) + :classmethod: + + .. method:: fetch(aconn, name) + :classmethod: + :async: + :noindex: + + Query a system catalog to read information about a type. + + :param conn: the connection to query + :type conn: ~psycopg.Connection or ~psycopg.AsyncConnection + :param name: the name of the type to query. It can include a schema + name. + :type name: `!str` or `~psycopg.sql.Identifier` + :return: a `!TypeInfo` object (or subclass) populated with the type + information, `!None` if not found. + + If the connection is async, `!fetch()` will behave as a coroutine and + the caller will need to `!await` on it to get the result:: + + t = await TypeInfo.fetch(aconn, "mytype") + + .. automethod:: register + + :param context: the context where the type is registered, for instance + a `~psycopg.Connection` or `~psycopg.Cursor`. `!None` registers + the `!TypeInfo` globally. 
+ :type context: Optional[~psycopg.abc.AdaptContext] + + Registering the `TypeInfo` in a context allows the adapters of that + context to look up type information: for instance it allows to + recognise automatically arrays of that type and load them from the + database as a list of the base type. + + +In order to get information about dynamic PostgreSQL types, Psycopg offers a +few `!TypeInfo` subclasses, whose `!fetch()` method can extract more complete +information about the type, such as `~psycopg.types.composite.CompositeInfo`, +`~psycopg.types.range.RangeInfo`, `~psycopg.types.multirange.MultirangeInfo`, +`~psycopg.types.enum.EnumInfo`. + +`!TypeInfo` objects are collected in `TypesRegistry` instances, which help type +information lookup. Every `~psycopg.adapt.AdaptersMap` exposes its type map on +its `~psycopg.adapt.AdaptersMap.types` attribute. + +.. autoclass:: TypesRegistry + + `!TypeRegistry` instances are typically exposed by + `~psycopg.adapt.AdaptersMap` objects in adapt contexts such as + `~psycopg.Connection` or `~psycopg.Cursor` (e.g. `!conn.adapters.types`). + + The global registry, from which the others inherit from, is available as + `psycopg.adapters`\ `!.types`. + + .. automethod:: __getitem__ + + .. code:: python + + >>> import psycopg + + >>> psycopg.adapters.types["text"] + <TypeInfo: text (oid: 25, array oid: 1009)> + + >>> psycopg.adapters.types[23] + <TypeInfo: int4 (oid: 23, array oid: 1007)> + + .. automethod:: get + + .. automethod:: get_oid + + .. code:: python + + >>> psycopg.adapters.types.get_oid("text[]") + 1009 + + .. automethod:: get_by_subtype + + +.. _json-adapters: + +JSON adapters +------------- + +See :ref:`adapt-json` for details. + +.. currentmodule:: psycopg.types.json + +.. autoclass:: Json +.. autoclass:: Jsonb + +Wrappers to signal to convert `!obj` to a json or jsonb PostgreSQL value. + +Any object supported by the underlying `!dumps()` function can be wrapped. + +If a `!dumps` function is passed to the wrapper, use it to dump the wrapped +object. Otherwise use the function specified by `set_json_dumps()`. + + +.. autofunction:: set_json_dumps +.. autofunction:: set_json_loads |
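+
+As a quick illustration, the wrappers can be passed directly as query
+parameters. This is only a minimal sketch: the ``mytable`` table and its
+``id``/``data`` columns are hypothetical, and the connection parameters are
+assumed to be available in the environment.
+
+.. code:: python
+
+    import json
+    import psycopg
+    from psycopg.types.json import Jsonb, set_json_dumps
+
+    # Use a customised dumps() for every wrapper (compact separators here).
+    set_json_dumps(lambda obj: json.dumps(obj, separators=(",", ":")))
+
+    with psycopg.connect() as conn:
+        # Wrap a Python dict to send it as a jsonb (or json) parameter.
+        conn.execute(
+            "INSERT INTO mytable (id, data) VALUES (%s, %s)",
+            [1, Jsonb({"tags": ["a", "b"], "count": 2})],
+        )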