Localization
This chapter describes the available localization features from the
point of view of the administrator.
PostgreSQL supports two localization
facilities:
Using the locale features of the operating system to provide
locale-specific collation order, number formatting, translated
messages, and other aspects.
This is covered in the Locale Support and Collation Support
sections below.
Providing a number of different character sets to support storing text
in all kinds of languages, and providing character set translation
between client and server.
This is covered in the Character Set Support section below.
Locale Support
Locale support refers to an application respecting
cultural preferences regarding alphabets, sorting, number
formatting, etc. PostgreSQL uses the standard ISO
C and POSIX locale facilities provided by the server operating
system. For additional information refer to the documentation of your
system.
Overview
Locale support is automatically initialized when a database
cluster is created using initdb.
initdb will initialize the database cluster
with the locale setting of its execution environment by default,
so if your system is already set to use the locale that you want
in your database cluster then there is nothing else you need to
do. If you want to use a different locale (or you are not sure
which locale your system is set to), you can instruct
initdb exactly which locale to use by
specifying the --locale option. For example:
initdb --locale=sv_SE
This example for Unix systems sets the locale to Swedish
(sv) as spoken
in Sweden (SE). Other possibilities might include
en_US (U.S. English) and fr_CA (French
Canadian). If more than one character set can be used for a
locale then the specifications can take the form
language_territory.codeset. For example,
fr_BE.UTF-8 represents the French language (fr) as
spoken in Belgium (BE), with a UTF-8 character set
encoding.
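The language_territory.codeset form can be taken apart mechanically. The following Python sketch is illustrative only (PostgreSQL itself relies on the operating system to interpret these names):

```python
import re

def parse_locale(name: str):
    """Split a locale name of the form language_territory.codeset
    into its parts; territory and codeset are optional."""
    m = re.fullmatch(r"([a-z]+)(?:_([A-Z0-9]+))?(?:\.(.+))?", name)
    if not m:
        raise ValueError(f"unrecognized locale name: {name}")
    language, territory, codeset = m.groups()
    return {"language": language, "territory": territory, "codeset": codeset}

print(parse_locale("fr_BE.UTF-8"))
# {'language': 'fr', 'territory': 'BE', 'codeset': 'UTF-8'}
print(parse_locale("sv_SE"))
# {'language': 'sv', 'territory': 'SE', 'codeset': None}
```

Special names such as C or POSIX do not follow this pattern and are handled separately.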
What locales are available on your
system under what names depends on what was provided by the operating
system vendor and what was installed. On most Unix systems, the command
locale -a will provide a list of available locales.
Windows uses more verbose locale names, such as German_Germany
or Swedish_Sweden.1252, but the principles are the same.
Occasionally it is useful to mix rules from several locales, e.g.,
use English collation rules but Spanish messages. To support that, a
set of locale subcategories exist that control only certain
aspects of the localization rules:
LC_COLLATE    String sort order
LC_CTYPE      Character classification (What is a letter? Its upper-case equivalent?)
LC_MESSAGES   Language of messages
LC_MONETARY   Formatting of currency amounts
LC_NUMERIC    Formatting of numbers
LC_TIME       Formatting of dates and times
The category names translate into names of
initdb options to override the locale choice
for a specific category. For instance, to set the locale to
French Canadian, but use U.S. rules for formatting currency, use
initdb --locale=fr_CA --lc-monetary=en_US.
If you want the system to behave as if it had no locale support,
use the special locale name C, or equivalently
POSIX.
Some locale categories must have their values
fixed when the database is created. You can use different settings
for different databases, but once a database is created, you cannot
change them for that database anymore. LC_COLLATE
and LC_CTYPE are these categories. They affect
the sort order of indexes, so they must be kept fixed, or indexes on
text columns would become corrupt.
(But you can alleviate this restriction using collations, as discussed
in the Collation Support section below.)
The default values for these
categories are determined when initdb is run, and
those values are used when new databases are created, unless
specified otherwise in the CREATE DATABASE command.
The other locale categories can be changed whenever desired
by setting the server configuration parameters
that have the same name as the locale categories (see the server
configuration documentation for details). The values
that are chosen by initdb are actually only written
into the configuration file postgresql.conf to
serve as defaults when the server is started. If you remove these
assignments from postgresql.conf then the
server will inherit the settings from its execution environment.
Note that the locale behavior of the server is determined by the
environment variables seen by the server, not by the environment
of any client. Therefore, be careful to configure the correct locale settings
before starting the server. A consequence of this is that if
client and server are set up in different locales, messages might
appear in different languages depending on where they originated.
When we speak of inheriting the locale from the execution
environment, this means the following on most operating systems:
For a given locale category, say the collation, the following
environment variables are consulted in this order until one is
found to be set: LC_ALL, LC_COLLATE
(or the variable corresponding to the respective category),
LANG. If none of these environment variables are
set then the locale defaults to C.
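The lookup order just described can be sketched in a few lines of Python (illustrative only; the actual resolution is performed by the C library):

```python
def effective_locale(category: str, env: dict) -> str:
    """Resolve a locale category the way most systems do:
    LC_ALL first, then the category-specific variable, then LANG,
    falling back to "C" if none is set."""
    for var in ("LC_ALL", category, "LANG"):
        value = env.get(var)
        if value:
            return value
    return "C"

# LC_ALL overrides everything:
print(effective_locale("LC_COLLATE", {"LC_ALL": "sv_SE", "LANG": "en_US"}))  # sv_SE
# The category-specific variable beats LANG:
print(effective_locale("LC_COLLATE", {"LC_COLLATE": "de_DE", "LANG": "en_US"}))  # de_DE
# Nothing set: default to C
print(effective_locale("LC_MESSAGES", {}))  # C
```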
Some message localization libraries also look at the environment
variable LANGUAGE which overrides all other locale
settings for the purpose of setting the language of messages. If
in doubt, please refer to the documentation of your operating
system, in particular the documentation about
gettext.
To enable messages to be translated to the user's preferred language,
NLS must have been selected at build time
(configure --enable-nls). All other locale support is
built in automatically.
Behavior
The locale settings influence the following SQL features:
Sort order in queries using ORDER BY or the standard
comparison operators on textual data
The upper, lower, and initcap
functions
Pattern matching operators (LIKE, SIMILAR TO,
and POSIX-style regular expressions); locales affect both case
insensitive matching and the classification of characters by
character-class regular expressions
The to_char family of functions
The ability to use indexes with LIKE clauses
The drawback of using locales other than C or
POSIX in PostgreSQL is the performance
impact. It slows character handling and prevents ordinary indexes
from being used by LIKE. For this reason use locales
only if you actually need them.
As a workaround to allow PostgreSQL to use indexes
with LIKE clauses under a non-C locale, several custom
operator classes exist. These allow the creation of an index that
performs a strict character-by-character comparison, ignoring
locale comparison rules. Refer to the documentation on operator classes
for more information. Another approach is to create indexes using
the C collation, as discussed in the Collation Support section below.
Problems
If locale support doesn't work according to the explanation above,
check that the locale support in your operating system is
correctly configured. To check what locales are installed on your
system, you can use the command locale -a if
your operating system provides it.
Check that PostgreSQL is actually using the locale
that you think it is. The LC_COLLATE and LC_CTYPE
settings are determined when a database is created, and cannot be
changed except by creating a new database. Other locale
settings including LC_MESSAGES and LC_MONETARY
are initially determined by the environment the server is started
in, but can be changed on-the-fly. You can check the active locale
settings using the SHOW command.
The directory src/test/locale in the source
distribution contains a test suite for
PostgreSQL's locale support.
Client applications that handle server-side errors by parsing the
text of the error message will obviously have problems when the
server's messages are in a different language. Authors of such
applications are advised to make use of the error code scheme
instead.
Maintaining catalogs of message translations requires the on-going
efforts of many volunteers who want to see
PostgreSQL speak their preferred language well.
If messages in your language are currently not available or not fully
translated, your assistance would be appreciated. If you want to
help, refer to the documentation on native language support or write
to the developers'
mailing list.
Collation Support
The collation feature allows specifying the sort order and character
classification behavior of data per-column, or even per-operation.
This alleviates the restriction that the
LC_COLLATE and LC_CTYPE settings
of a database cannot be changed after its creation.
Concepts
Conceptually, every expression of a collatable data type has a
collation. (The built-in collatable data types are
text, varchar, and char.
User-defined base types can also be marked collatable, and of course
a domain over a collatable data type is collatable.) If the
expression is a column reference, the collation of the expression is the
defined collation of the column. If the expression is a constant, the
collation is the default collation of the data type of the
constant. The collation of a more complex expression is derived
from the collations of its inputs, as described below.
The collation of an expression can be the default
collation, which means the locale settings defined for the
database. It is also possible for an expression's collation to be
indeterminate. In such cases, ordering operations and other
operations that need to know the collation will fail.
When the database system has to perform an ordering or a character
classification, it uses the collation of the input expression. This
happens, for example, with ORDER BY clauses
and function or operator calls such as <.
The collation to apply for an ORDER BY clause
is simply the collation of the sort key. The collation to apply for a
function or operator call is derived from the arguments, as described
below. In addition to comparison operators, collations are taken into
account by functions that convert between lower and upper case
letters, such as lower, upper, and
initcap; by pattern matching operators; and by
to_char and related functions.
For a function or operator call, the collation that is derived by
examining the argument collations is used at run time for performing
the specified operation. If the result of the function or operator
call is of a collatable data type, the collation is also used at parse
time as the defined collation of the function or operator expression,
in case there is a surrounding expression that requires knowledge of
its collation.
The collation derivation of an expression can be
implicit or explicit. This distinction affects how collations are
combined when multiple different collations appear in an
expression. An explicit collation derivation occurs when a
COLLATE clause is used; all other collation
derivations are implicit. When multiple collations need to be
combined, for example in a function call, the following rules are
used:
If any input expression has an explicit collation derivation, then
all explicitly derived collations among the input expressions must be
the same, otherwise an error is raised. If any explicitly
derived collation is present, that is the result of the
collation combination.
Otherwise, all input expressions must have the same implicit
collation derivation or the default collation. If any non-default
collation is present, that is the result of the collation combination.
Otherwise, the result is the default collation.
If there are conflicting non-default implicit collations among the
input expressions, then the combination is deemed to have indeterminate
collation. This is not an error condition unless the particular
function being invoked requires knowledge of the collation it should
apply. If it does, an error will be raised at run-time.
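The combination rules above can be modeled with a short Python sketch (illustrative only; names and error behavior are simplified, not PostgreSQL internals):

```python
def combine_collations(inputs):
    """Combine (collation, derivation) pairs per the rules above.
    derivation is 'explicit' or 'implicit'; 'default' stands for the
    database default collation.  Returns the combined collation, or
    'indeterminate' when implicit collations conflict."""
    explicit = {c for c, d in inputs if d == "explicit"}
    if explicit:
        if len(explicit) > 1:
            raise ValueError("conflicting explicit collations")
        return explicit.pop()
    implicit = {c for c, d in inputs if d == "implicit" and c != "default"}
    if len(implicit) > 1:
        return "indeterminate"  # an error only if the operation needs a collation
    return implicit.pop() if implicit else "default"

# a < 'foo': the column's implicit de_DE wins over the constant's default
print(combine_collations([("de_DE", "implicit"), ("default", "implicit")]))  # de_DE
# a < b: two different implicit collations -> indeterminate
print(combine_collations([("de_DE", "implicit"), ("es_ES", "implicit")]))    # indeterminate
# an explicit COLLATE clause overrides the implicit derivation
print(combine_collations([("de_DE", "implicit"), ("fr_FR", "explicit")]))    # fr_FR
```

These three calls correspond directly to the SELECT examples that follow.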
For example, consider this table definition:
CREATE TABLE test1 (
a text COLLATE "de_DE",
b text COLLATE "es_ES",
...
);
Then in
SELECT a < 'foo' FROM test1;
the < comparison is performed according to
de_DE rules, because the expression combines an
implicitly derived collation with the default collation. But in
SELECT a < ('foo' COLLATE "fr_FR") FROM test1;
the comparison is performed using fr_FR rules,
because the explicit collation derivation overrides the implicit one.
Furthermore, given
SELECT a < b FROM test1;
the parser cannot determine which collation to apply, since the
a and b columns have conflicting
implicit collations. Since the < operator
does need to know which collation to use, this will result in an
error. The error can be resolved by attaching an explicit collation
specifier to either input expression, thus:
SELECT a < b COLLATE "de_DE" FROM test1;
or equivalently
SELECT a COLLATE "de_DE" < b FROM test1;
On the other hand, the structurally similar case
SELECT a || b FROM test1;
does not result in an error, because the || operator
does not care about collations: its result is the same regardless
of the collation.
The collation assigned to a function or operator's combined input
expressions is also considered to apply to the function or operator's
result, if the function or operator delivers a result of a collatable
data type. So, in
SELECT * FROM test1 ORDER BY a || 'foo';
the ordering will be done according to de_DE rules.
But this query:
SELECT * FROM test1 ORDER BY a || b;
results in an error, because even though the || operator
doesn't need to know a collation, the ORDER BY clause does.
As before, the conflict can be resolved with an explicit collation
specifier:
SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR";
Managing Collations
A collation is an SQL schema object that maps an SQL name to locales
provided by libraries installed in the operating system. A collation
definition has a provider that specifies which
library supplies the locale data. One standard provider name
is libc, which uses the locales provided by the
operating system C library. These are the locales that most tools
provided by the operating system use. Another provider
is icu, which uses the external
ICUICU library. ICU locales can only be
used if support for ICU was configured when PostgreSQL was built.
A collation object provided by libc maps to a
combination of LC_COLLATE and LC_CTYPE
settings, as accepted by the setlocale() system library call. (As
the name would suggest, the main purpose of a collation is to set
LC_COLLATE, which controls the sort order. But
it is rarely necessary in practice to have an
LC_CTYPE setting that is different from
LC_COLLATE, so it is more convenient to collect
these under one concept than to create another infrastructure for
setting LC_CTYPE per expression.) Also,
a libc collation
is tied to a character set encoding (see the Character Set Support
section below).
The same collation name may exist for different encodings.
A collation object provided by icu maps to a named
collator provided by the ICU library. ICU does not support
separate collate and ctype settings, so
they are always the same. Also, ICU collations are independent of the
encoding, so there is always only one ICU collation of a given name in
a database.
Standard Collations
On all platforms, the collations named default,
C, and POSIX are available. Additional
collations may be available depending on operating system support.
The default collation selects the LC_COLLATE
and LC_CTYPE values specified at database creation time.
The C and POSIX collations both specify
traditional C behavior, in which only the ASCII letters
A through Z
are treated as letters, and sorting is done strictly by character
code byte values.
Additionally, the SQL standard collation name ucs_basic
is available for encoding UTF8. It is equivalent
to C and sorts by Unicode code point.
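Code-point order differs visibly from a language-aware sort. Python string comparison is itself by code point, so it can stand in for the C/ucs_basic behavior (illustrative only):

```python
# In the C collation, strings sort strictly by character code values,
# so all ASCII upper-case letters precede all lower-case ones.
words = ["banana", "Apple", "cherry", "Banana"]
print(sorted(words))                    # code-point order, like the C collation
print(sorted(words, key=str.casefold))  # roughly what a case-aware collation gives
```

The first sort puts "Banana" before "banana"; the second interleaves them.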
Predefined Collations
If the operating system provides support for using multiple locales
within a single program (newlocale and related functions),
or if support for ICU is configured,
then when a database cluster is initialized, initdb
populates the system catalog pg_collation with
collations based on all the locales it finds in the operating
system at the time.
To inspect the currently available locales, use the query SELECT
* FROM pg_collation, or the command \dOS+
in psql.
libc Collations
For example, the operating system might
provide a locale named de_DE.utf8.
initdb would then create a collation named
de_DE.utf8 for encoding UTF8
that has both LC_COLLATE and
LC_CTYPE set to de_DE.utf8.
It will also create a collation with the .utf8
tag stripped off the name. So you could also use the collation
under the name de_DE, which is less cumbersome
to write and makes the name less encoding-dependent. Note that,
nevertheless, the initial set of collation names is
platform-dependent.
The default set of collations provided by libc maps
directly to the locales installed in the operating system, which can be
listed using the command locale -a. In case
a libc collation is needed that has different values
for LC_COLLATE and LC_CTYPE, or if new
locales are installed in the operating system after the database system
was initialized, then a new collation may be created using
the CREATE COLLATION command.
New operating system locales can also be imported en masse using
the pg_import_system_collations() function.
Within any particular database, only collations that use that
database's encoding are of interest. Other entries in
pg_collation are ignored. Thus, a stripped collation
name such as de_DE can be considered unique
within a given database even though it would not be unique globally.
Use of the stripped collation names is recommended, since it will
make one fewer thing you need to change if you decide to change to
another database encoding. Note however that the default,
C, and POSIX collations can be used regardless of
the database encoding.
PostgreSQL considers distinct collation
objects to be incompatible even when they have identical properties.
Thus for example,
SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1;
will draw an error even though the C and POSIX
collations have identical behaviors. Mixing stripped and non-stripped
collation names is therefore not recommended.
ICU Collations
With ICU, it is not sensible to enumerate all possible locale names. ICU
uses a particular naming system for locales, but there are many more ways
to name a locale than there are actually distinct locales.
initdb uses the ICU APIs to extract a set of distinct
locales to populate the initial set of collations. Collations provided by
ICU are created in the SQL environment with names in BCP 47 language tag
format, with a private use
extension -x-icu appended, to distinguish them from
libc locales.
Here are some example collations that might be created:
de-x-icu
    German collation, default variant
de-AT-x-icu
    German collation for Austria, default variant
(There are also, say, de-DE-x-icu
or de-CH-x-icu, but as of this writing, they are
equivalent to de-x-icu.)
und-x-icu (for undefined)
ICU root collation. Use this to get a reasonable
language-agnostic sort order.
Some (less frequently used) encodings are not supported by ICU. When the
database encoding is one of these, ICU collation entries
in pg_collation are ignored. Attempting to use one
will draw an error along the lines of collation "de-x-icu" for
encoding "WIN874" does not exist.
Creating New Collation Objects
If the standard and predefined collations are not sufficient, users can
create their own collation objects using the SQL
command CREATE COLLATION.
The standard and predefined collations are in the
schema pg_catalog, like all predefined objects.
User-defined collations should be created in user schemas. This also
ensures that they are saved by pg_dump.
libc Collations
New libc collations can be created like this:
CREATE COLLATION german (provider = libc, locale = 'de_DE');
The exact values that are acceptable for the locale
clause in this command depend on the operating system. On Unix-like
systems, the command locale -a will show a list.
Since the predefined libc collations already include all collations
defined in the operating system when the database instance is
initialized, it is not often necessary to manually create new ones.
Reasons might be if a different naming system is desired (in which case
see also the Copying Collations section below) or if the operating system has
been upgraded to provide new locale definitions (in which case see
also pg_import_system_collations()).
ICU Collations
ICU allows collations to be customized beyond the basic language+country
set that is preloaded by initdb. Users are encouraged
to define their own collation objects that make use of these facilities to
suit the sorting behavior to their requirements.
See Unicode Technical Standard #35
and BCP 47 for
information on ICU locale naming. The set of acceptable names and
attributes depends on the particular ICU version.
Here are some examples:
CREATE COLLATION "de-u-co-phonebk-x-icu" (provider = icu, locale = 'de-u-co-phonebk');
CREATE COLLATION "de-u-co-phonebk-x-icu" (provider = icu, locale = 'de@collation=phonebook');
    German collation with phone book collation type
The first example selects the ICU locale using a language
tag per BCP 47. The second example uses the traditional
ICU-specific locale syntax. The first style is preferred going
forward, but it is not supported by older ICU versions.
Note that you can name the collation objects in the SQL environment
anything you want. In this example, we follow the naming style that
the predefined collations use, which in turn also follow BCP 47, but
that is not required for user-defined collations.
CREATE COLLATION "und-u-co-emoji-x-icu" (provider = icu, locale = 'und-u-co-emoji');
CREATE COLLATION "und-u-co-emoji-x-icu" (provider = icu, locale = '@collation=emoji');
    Root collation with Emoji collation type, per Unicode Technical Standard #51
Observe how in the traditional ICU locale naming system, the root
locale is selected by an empty string.
CREATE COLLATION latinlast (provider = icu, locale = 'en-u-kr-grek-latn');
CREATE COLLATION latinlast (provider = icu, locale = 'en@colReorder=grek-latn');
    Sort Greek letters before Latin ones. (The default is Latin before Greek.)
CREATE COLLATION upperfirst (provider = icu, locale = 'en-u-kf-upper');
CREATE COLLATION upperfirst (provider = icu, locale = 'en@colCaseFirst=upper');
    Sort upper-case letters before lower-case letters. (The default is
    lower-case letters first.)
CREATE COLLATION special (provider = icu, locale = 'en-u-kf-upper-kr-grek-latn');
CREATE COLLATION special (provider = icu, locale = 'en@colCaseFirst=upper;colReorder=grek-latn');
    Combines both of the above options.
CREATE COLLATION numeric (provider = icu, locale = 'en-u-kn-true');
CREATE COLLATION numeric (provider = icu, locale = 'en@colNumeric=yes');
    Numeric ordering: sorts sequences of digits by their numeric value,
    for example A-21 < A-123 (also known as natural sort).
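Numeric (natural) ordering can be approximated outside the database with a small Python sort key (illustrative only; ICU's colNumeric option is the real mechanism):

```python
import re

def natural_key(s: str):
    """Split a string into text and digit runs so that digit runs
    compare by numeric value -- a rough analogue of ICU's
    colNumeric (kn-true) setting."""
    return [int(part) if part.isdigit() else part
            for part in re.split(r"(\d+)", s)]

labels = ["A-123", "A-21", "A-3"]
print(sorted(labels))                   # plain code-point order: A-123 first
print(sorted(labels, key=natural_key))  # numeric ordering: A-3 < A-21 < A-123
```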
See Unicode
Technical Standard #35
and BCP 47 for
details. The list of possible collation types (co
subtag) can be found in
the CLDR
repository.
Note that while this system allows creating collations that ignore
case or ignore accents or similar (using the
ks key), in order for such collations to act in a
truly case- or accent-insensitive manner, they also need to be declared as not
deterministic in CREATE COLLATION;
see the Nondeterministic Collations section below.
Otherwise, any strings that compare equal according to the collation but
are not byte-wise equal will be sorted according to their byte values.
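Why the deterministic flag matters can be shown with a Python sketch (illustrative only; a simplified model of the tie-breaking behavior):

```python
def deterministic_ci_key(s: str):
    # A case-insensitive sort key that still breaks ties bytewise,
    # which is what a *deterministic* collation effectively does:
    # strings that compare equal case-insensitively are nonetheless
    # ordered by their byte values.
    return (s.casefold(), s.encode("utf-8"))

def nondeterministic_ci_equal(a: str, b: str) -> bool:
    # A nondeterministic case-insensitive comparison: no byte tie-break,
    # so differently-cased strings can compare truly equal.
    return a.casefold() == b.casefold()

print(deterministic_ci_key("Foo") == deterministic_ci_key("foo"))  # False
print(nondeterministic_ci_equal("Foo", "foo"))                     # True
```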
By design, ICU will accept almost any string as a locale name and match
it to the closest locale it can provide, using the fallback procedure
described in its documentation. Thus, there will be no direct feedback
if a collation specification is composed using features that the given
ICU installation does not actually support. It is therefore recommended
to create application-level test cases to check that the collation
definitions satisfy one's requirements.
Copying Collations
The CREATE COLLATION command can also be used to
create a new collation from an existing collation. This can be useful
for using operating-system-independent collation names in
applications, creating compatibility names, or using an ICU-provided
collation under a more readable name. For example:
CREATE COLLATION german FROM "de_DE";
CREATE COLLATION french FROM "fr-x-icu";
Nondeterministic Collations
A collation is either deterministic or
nondeterministic. A deterministic collation uses
deterministic comparisons, which means that it considers strings to be
equal only if they consist of the same byte sequence. Nondeterministic
comparison may determine strings to be equal even if they consist of
different bytes. Typical situations include case-insensitive comparison,
accent-insensitive comparison, as well as comparison of strings in
different Unicode normal forms. It is up to the collation provider to
actually implement such insensitive comparisons; the deterministic flag
only determines whether ties are to be broken using bytewise comparison.
See also Unicode Technical
Standard 10 for more information on the terminology.
To create a nondeterministic collation, specify the property
deterministic = false to CREATE
COLLATION, for example:
CREATE COLLATION ndcoll (provider = icu, locale = 'und', deterministic = false);
This example would use the standard Unicode collation in a
nondeterministic way. In particular, this would allow strings in
different normal forms to be compared correctly. More interesting
examples make use of the ICU customization facilities explained above.
For example:
CREATE COLLATION case_insensitive (provider = icu, locale = 'und-u-ks-level2', deterministic = false);
CREATE COLLATION ignore_accents (provider = icu, locale = 'und-u-ks-level1-kc-true', deterministic = false);
All standard and predefined collations are deterministic, and all
user-defined collations are deterministic by default. While
nondeterministic collations give a more correct behavior,
especially when considering the full power of Unicode and its many
special cases, they also have some drawbacks. Foremost, their use leads
to a performance penalty. Note, in particular, that B-tree indexes that
use a nondeterministic collation cannot use deduplication. Also,
certain operations are not possible with nondeterministic collations,
such as pattern matching operations. Therefore, they should be used
only in cases where they are specifically wanted.
To deal with text in different Unicode normalization forms, it is also
an option to use the functions/expressions
normalize and is normalized to
preprocess or check the strings, instead of using nondeterministic
collations. There are different trade-offs for each approach.
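The normalization issue mentioned above is easy to demonstrate: the same visible character can be stored as different code point sequences. The sketch below uses Python's unicodedata module, which implements the same Unicode normal forms as the SQL normalize function:

```python
import unicodedata

# "é" can be stored composed (U+00E9) or decomposed (e + U+0301).
composed = "\u00e9"
decomposed = "e\u0301"

print(composed == decomposed)   # False: different code point sequences
print(unicodedata.normalize("NFC", composed) ==
      unicodedata.normalize("NFC", decomposed))  # True after normalization
```

Preprocessing all stored text this way avoids the need for a nondeterministic collation in this particular case.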
Character Set Support
The character set support in PostgreSQL
allows you to store text in a variety of character sets (also called
encodings), including
single-byte character sets such as the ISO 8859 series and
multiple-byte character sets such as EUC (Extended Unix
Code), UTF-8, and Mule internal code. All supported character sets
can be used transparently by clients, but a few are not supported
for use within the server (that is, as a server-side encoding).
The default character set is selected while
initializing your PostgreSQL database
cluster using initdb. It can be overridden when you
create a database, so you can have multiple
databases each with a different character set.
An important restriction, however, is that each database's character set
must be compatible with the database's LC_CTYPE (character
classification) and LC_COLLATE (string sort order) locale
settings. For C or
POSIX locale, any character set is allowed, but for other
libc-provided locales there is only one character set that will work
correctly.
(On Windows, however, UTF-8 encoding can be used with any locale.)
If you have ICU support configured, ICU-provided locales can be used
with most but not all server-side encodings.
The table below shows the character sets available
for use in PostgreSQL.
PostgreSQL Character Sets

Name           | Description                       | Language                       | Server? | ICU? | Bytes/Char | Aliases
---------------+-----------------------------------+--------------------------------+---------+------+------------+---------
BIG5           | Big Five                          | Traditional Chinese            | No      | No   | 1–2        | WIN950, Windows950
EUC_CN         | Extended UNIX Code-CN             | Simplified Chinese             | Yes     | Yes  | 1–3        |
EUC_JP         | Extended UNIX Code-JP             | Japanese                       | Yes     | Yes  | 1–3        |
EUC_JIS_2004   | Extended UNIX Code-JP, JIS X 0213 | Japanese                       | Yes     | No   | 1–3        |
EUC_KR         | Extended UNIX Code-KR             | Korean                         | Yes     | Yes  | 1–3        |
EUC_TW         | Extended UNIX Code-TW             | Traditional Chinese, Taiwanese | Yes     | Yes  | 1–3        |
GB18030        | National Standard                 | Chinese                        | No      | No   | 1–4        |
GBK            | Extended National Standard        | Simplified Chinese             | No      | No   | 1–2        | WIN936, Windows936
ISO_8859_5     | ISO 8859-5, ECMA 113              | Latin/Cyrillic                 | Yes     | Yes  | 1          |
ISO_8859_6     | ISO 8859-6, ECMA 114              | Latin/Arabic                   | Yes     | Yes  | 1          |
ISO_8859_7     | ISO 8859-7, ECMA 118              | Latin/Greek                    | Yes     | Yes  | 1          |
ISO_8859_8     | ISO 8859-8, ECMA 121              | Latin/Hebrew                   | Yes     | Yes  | 1          |
JOHAB          | JOHAB                             | Korean (Hangul)                | No      | No   | 1–3        |
KOI8R          | KOI8-R                            | Cyrillic (Russian)             | Yes     | Yes  | 1          | KOI8
KOI8U          | KOI8-U                            | Cyrillic (Ukrainian)           | Yes     | Yes  | 1          |
LATIN1         | ISO 8859-1, ECMA 94               | Western European               | Yes     | Yes  | 1          | ISO88591
LATIN2         | ISO 8859-2, ECMA 94               | Central European               | Yes     | Yes  | 1          | ISO88592
LATIN3         | ISO 8859-3, ECMA 94               | South European                 | Yes     | Yes  | 1          | ISO88593
LATIN4         | ISO 8859-4, ECMA 94               | North European                 | Yes     | Yes  | 1          | ISO88594
LATIN5         | ISO 8859-9, ECMA 128              | Turkish                        | Yes     | Yes  | 1          | ISO88599
LATIN6         | ISO 8859-10, ECMA 144             | Nordic                         | Yes     | Yes  | 1          | ISO885910
LATIN7         | ISO 8859-13                       | Baltic                         | Yes     | Yes  | 1          | ISO885913
LATIN8         | ISO 8859-14                       | Celtic                         | Yes     | Yes  | 1          | ISO885914
LATIN9         | ISO 8859-15                       | LATIN1 with Euro and accents   | Yes     | Yes  | 1          | ISO885915
LATIN10        | ISO 8859-16, ASRO SR 14111        | Romanian                       | Yes     | No   | 1          | ISO885916
MULE_INTERNAL  | Mule internal code                | Multilingual Emacs             | Yes     | No   | 1–4        |
SJIS           | Shift JIS                         | Japanese                       | No      | No   | 1–2        | Mskanji, ShiftJIS, WIN932, Windows932
SHIFT_JIS_2004 | Shift JIS, JIS X 0213             | Japanese                       | No      | No   | 1–2        |
SQL_ASCII      | unspecified (see text)            | any                            | Yes     | No   | 1          |
UHC            | Unified Hangul Code               | Korean                         | No      | No   | 1–2        | WIN949, Windows949
UTF8           | Unicode, 8-bit                    | all                            | Yes     | Yes  | 1–4        | Unicode
WIN866         | Windows CP866                     | Cyrillic                       | Yes     | Yes  | 1          | ALT
WIN874         | Windows CP874                     | Thai                           | Yes     | No   | 1          |
WIN1250        | Windows CP1250                    | Central European               | Yes     | Yes  | 1          |
WIN1251        | Windows CP1251                    | Cyrillic                       | Yes     | Yes  | 1          | WIN
WIN1252        | Windows CP1252                    | Western European               | Yes     | Yes  | 1          |
WIN1253        | Windows CP1253                    | Greek                          | Yes     | Yes  | 1          |
WIN1254        | Windows CP1254                    | Turkish                        | Yes     | Yes  | 1          |
WIN1255        | Windows CP1255                    | Hebrew                         | Yes     | Yes  | 1          |
WIN1256        | Windows CP1256                    | Arabic                         | Yes     | Yes  | 1          |
WIN1257        | Windows CP1257                    | Baltic                         | Yes     | Yes  | 1          |
WIN1258        | Windows CP1258                    | Vietnamese                     | Yes     | Yes  | 1          | ABC, TCVN, TCVN5712, VSCII
Not all client APIs support all the listed character sets. For example, the
PostgreSQL
JDBC driver does not support MULE_INTERNAL, LATIN6,
LATIN8, and LATIN10.
The SQL_ASCII setting behaves considerably differently
from the other settings. When the server character set is
SQL_ASCII, the server interprets byte values 0–127
according to the ASCII standard, while byte values 128–255 are taken
as uninterpreted characters. No encoding conversion will be done when
the setting is SQL_ASCII. Thus, this setting is not so
much a declaration that a specific encoding is in use, as a declaration
of ignorance about the encoding. In most cases, if you are
working with any non-ASCII data, it is unwise to use the
SQL_ASCII setting because
PostgreSQL will be unable to help you by
converting or validating non-ASCII characters.
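The difference between a validating server encoding and SQL_ASCII's pass-through behavior can be sketched as follows (illustrative only; a simplified model, not PostgreSQL's actual input path):

```python
def store_utf8(data: bytes) -> bytes:
    # A validating server encoding rejects byte sequences that are
    # not well-formed in the declared encoding.
    data.decode("utf-8")  # raises UnicodeDecodeError if invalid
    return data

def store_sql_ascii(data: bytes) -> bytes:
    # SQL_ASCII: byte values 128-255 are passed through uninterpreted;
    # no validation or conversion is performed.
    return data

print(store_sql_ascii(b"caf\xe9"))  # accepted as-is (here, Latin-1 bytes)
try:
    store_utf8(b"caf\xe9")          # not well-formed UTF-8
except UnicodeDecodeError:
    print("rejected by UTF8 validation")
```

This is exactly why SQL_ASCII offers no protection: invalid or mixed-encoding bytes are stored without complaint.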
Setting the Character Set
initdb defines the default character set (encoding)
for a PostgreSQL cluster. For example,
initdb -E EUC_JP
sets the default character set to
EUC_JP (Extended Unix Code for Japanese). You
can use --encoding instead of -E
if you prefer longer option strings.
If no -E or --encoding option is
given, initdb attempts to determine the appropriate
encoding to use based on the specified or default locale.
You can specify a non-default encoding at database creation time,
provided that the encoding is compatible with the selected locale:
createdb -E EUC_KR -T template0 --lc-collate=ko_KR.euckr --lc-ctype=ko_KR.euckr korean
This will create a database named korean that
uses the character set EUC_KR and the locale ko_KR.euckr.
Another way to accomplish this is to use this SQL command:
CREATE DATABASE korean WITH ENCODING 'EUC_KR' LC_COLLATE='ko_KR.euckr' LC_CTYPE='ko_KR.euckr' TEMPLATE=template0;
Notice that the above commands specify copying the template0
database. When copying any other database, the encoding and locale
settings cannot be changed from those of the source database, because
that might result in corrupt data. For more information see
.
The encoding for a database is stored in the system catalog
pg_database. You can see it by using the
psql -l option or the
\l command.
$ psql -l
List of databases
Name | Owner | Encoding | Collation | Ctype | Access Privileges
-----------+----------+-----------+-------------+-------------+-------------------------------------
clocaledb | hlinnaka | SQL_ASCII | C | C |
englishdb | hlinnaka | UTF8 | en_GB.UTF8 | en_GB.UTF8 |
japanese | hlinnaka | UTF8 | ja_JP.UTF8 | ja_JP.UTF8 |
korean | hlinnaka | EUC_KR | ko_KR.euckr | ko_KR.euckr |
postgres | hlinnaka | UTF8 | fi_FI.UTF8 | fi_FI.UTF8 |
template0 | hlinnaka | UTF8 | fi_FI.UTF8 | fi_FI.UTF8 | {=c/hlinnaka,hlinnaka=CTc/hlinnaka}
template1 | hlinnaka | UTF8 | fi_FI.UTF8 | fi_FI.UTF8 | {=c/hlinnaka,hlinnaka=CTc/hlinnaka}
(7 rows)
On most modern operating systems, PostgreSQL
can determine which character set is implied by the LC_CTYPE
setting, and it will enforce that only the matching database encoding is
used. On older systems it is your responsibility to ensure that you use
the encoding expected by the locale you have selected. A mistake in
this area is likely to lead to strange behavior of locale-dependent
operations such as sorting.
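For illustration, Python's standard locale module exposes the same POSIX nl_langinfo(CODESET) facility that an operating system provides for mapping an LC_CTYPE setting to a character set; this sketch assumes a POSIX platform and does not show PostgreSQL's actual internal code:

```python
import locale

# Adopt the LC_CTYPE locale from the environment, then ask which
# character set (codeset) that locale implies. This is the POSIX
# nl_langinfo(CODESET) facility for matching locale and encoding.
locale.setlocale(locale.LC_CTYPE, "")
codeset = locale.nl_langinfo(locale.CODESET)
print(codeset)  # e.g. 'UTF-8' in an en_US.UTF-8 environment
```

On a system where this mapping is available, an encoding that disagrees with the reported codeset is exactly the kind of mismatch described above.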
PostgreSQL will allow superusers to create
databases with SQL_ASCII encoding even when
LC_CTYPE is not C or POSIX. As noted
above, SQL_ASCII does not enforce that the data stored in
the database has any particular encoding, and so this choice poses risks
of locale-dependent misbehavior. Using this combination of settings is
deprecated and may someday be forbidden altogether.
Automatic Character Set Conversion Between Server and Client

PostgreSQL supports automatic character
set conversion between server and client for many combinations of
character sets (
shows which ones).
To enable automatic character set conversion, you have to
tell PostgreSQL the character set
(encoding) you would like to use in the client. There are several
ways to accomplish this:
Using the \encoding command in
psql.
\encoding allows you to change client
encoding on the fly. For
example, to change the encoding to SJIS, type:
\encoding SJIS
libpq () has functions to control the client encoding.
Using SET client_encoding TO.
Setting the client encoding can be done with this SQL command:
SET CLIENT_ENCODING TO 'value';
You can also use the standard SQL syntax SET NAMES
for this purpose:
SET NAMES 'value';
To query the current client encoding:
SHOW client_encoding;
To return to the default encoding:
RESET client_encoding;
Using PGCLIENTENCODING. If the environment variable
PGCLIENTENCODING is defined in the client's
environment, that client encoding is automatically selected
when a connection to the server is made. (This can
subsequently be overridden using any of the other methods
mentioned above.)
Using the configuration variable . If the
client_encoding variable is set, that client
encoding is automatically selected when a connection to the
server is made. (This can subsequently be overridden using any
of the other methods mentioned above.)
If the conversion of a particular character is not possible
— suppose you chose EUC_JP for the
server and LATIN1 for the client, and some
Japanese characters are returned that do not have a representation in
LATIN1 — an error is reported.
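The situation is the same one an ordinary codec hits: characters valid in EUC_JP may simply have no LATIN1 code point. A Python sketch of the failure, using the standard codecs rather than a server connection:

```python
text = "日本語"               # "Japanese"; representable in EUC_JP
euc = text.encode("euc_jp")   # fine: EUC_JP covers these characters

try:
    text.encode("latin-1")    # no LATIN1 representation exists
except UnicodeEncodeError as err:
    print("conversion failed:", err.reason)
```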
If the client character set is defined as SQL_ASCII,
encoding conversion is disabled, regardless of the server's character
set. (However, if the server's character set is
not SQL_ASCII, the server will still check that
incoming data is valid for that encoding; so the net effect is as
though the client character set were the same as the server's.)
Just as for the server, use of SQL_ASCII is unwise
unless you are working with all-ASCII data.
Available Character Set Conversions

PostgreSQL allows conversion between any
two character sets for which a conversion function is listed in the
pg_conversion
system catalog. PostgreSQL comes with
some predefined conversions, as summarized in
and shown in more
detail in . You can
create a new conversion using the SQL command
. (To be used for automatic
client/server conversions, a conversion must be marked
as default for its character set pair.)
Built-in Client/Server Character Set Conversions

Server Character Set | Available Client Character Sets
---------------------+--------------------------------------------------
BIG5                 | not supported as a server encoding
EUC_CN               | EUC_CN, MULE_INTERNAL, UTF8
EUC_JP               | EUC_JP, MULE_INTERNAL, SJIS, UTF8
EUC_JIS_2004         | EUC_JIS_2004, SHIFT_JIS_2004, UTF8
EUC_KR               | EUC_KR, MULE_INTERNAL, UTF8
EUC_TW               | EUC_TW, BIG5, MULE_INTERNAL, UTF8
GB18030              | not supported as a server encoding
GBK                  | not supported as a server encoding
ISO_8859_5           | ISO_8859_5, KOI8R, MULE_INTERNAL, UTF8, WIN866, WIN1251
ISO_8859_6           | ISO_8859_6, UTF8
ISO_8859_7           | ISO_8859_7, UTF8
ISO_8859_8           | ISO_8859_8, UTF8
JOHAB                | not supported as a server encoding
KOI8R                | KOI8R, ISO_8859_5, MULE_INTERNAL, UTF8, WIN866, WIN1251
KOI8U                | KOI8U, UTF8
LATIN1               | LATIN1, MULE_INTERNAL, UTF8
LATIN2               | LATIN2, MULE_INTERNAL, UTF8, WIN1250
LATIN3               | LATIN3, MULE_INTERNAL, UTF8
LATIN4               | LATIN4, MULE_INTERNAL, UTF8
LATIN5               | LATIN5, UTF8
LATIN6               | LATIN6, UTF8
LATIN7               | LATIN7, UTF8
LATIN8               | LATIN8, UTF8
LATIN9               | LATIN9, UTF8
LATIN10              | LATIN10, UTF8
MULE_INTERNAL        | MULE_INTERNAL, BIG5, EUC_CN, EUC_JP, EUC_KR, EUC_TW, ISO_8859_5, KOI8R, LATIN1 to LATIN4, SJIS, WIN866, WIN1250, WIN1251
SJIS                 | not supported as a server encoding
SHIFT_JIS_2004       | not supported as a server encoding
SQL_ASCII            | any (no conversion will be performed)
UHC                  | not supported as a server encoding
UTF8                 | all supported encodings
WIN866               | WIN866, ISO_8859_5, KOI8R, MULE_INTERNAL, UTF8, WIN1251
WIN874               | WIN874, UTF8
WIN1250              | WIN1250, LATIN2, MULE_INTERNAL, UTF8
WIN1251              | WIN1251, ISO_8859_5, KOI8R, MULE_INTERNAL, UTF8, WIN866
WIN1252              | WIN1252, UTF8
WIN1253              | WIN1253, UTF8
WIN1254              | WIN1254, UTF8
WIN1255              | WIN1255, UTF8
WIN1256              | WIN1256, UTF8
WIN1257              | WIN1257, UTF8
WIN1258              | WIN1258, UTF8
All Built-in Character Set Conversions

The conversion names follow a standard naming scheme: the
official name of the source encoding with all
non-alphanumeric characters replaced by underscores, followed
by _to_, followed by the similarly processed
destination encoding name. Therefore, these names sometimes
deviate from the customary encoding names shown in
.

Conversion Name                 | Source Encoding | Destination Encoding
--------------------------------+-----------------+---------------------
big5_to_euc_tw                  | BIG5            | EUC_TW
big5_to_mic                     | BIG5            | MULE_INTERNAL
big5_to_utf8                    | BIG5            | UTF8
euc_cn_to_mic                   | EUC_CN          | MULE_INTERNAL
euc_cn_to_utf8                  | EUC_CN          | UTF8
euc_jp_to_mic                   | EUC_JP          | MULE_INTERNAL
euc_jp_to_sjis                  | EUC_JP          | SJIS
euc_jp_to_utf8                  | EUC_JP          | UTF8
euc_kr_to_mic                   | EUC_KR          | MULE_INTERNAL
euc_kr_to_utf8                  | EUC_KR          | UTF8
euc_tw_to_big5                  | EUC_TW          | BIG5
euc_tw_to_mic                   | EUC_TW          | MULE_INTERNAL
euc_tw_to_utf8                  | EUC_TW          | UTF8
gb18030_to_utf8                 | GB18030         | UTF8
gbk_to_utf8                     | GBK             | UTF8
iso_8859_10_to_utf8             | LATIN6          | UTF8
iso_8859_13_to_utf8             | LATIN7          | UTF8
iso_8859_14_to_utf8             | LATIN8          | UTF8
iso_8859_15_to_utf8             | LATIN9          | UTF8
iso_8859_16_to_utf8             | LATIN10         | UTF8
iso_8859_1_to_mic               | LATIN1          | MULE_INTERNAL
iso_8859_1_to_utf8              | LATIN1          | UTF8
iso_8859_2_to_mic               | LATIN2          | MULE_INTERNAL
iso_8859_2_to_utf8              | LATIN2          | UTF8
iso_8859_2_to_windows_1250      | LATIN2          | WIN1250
iso_8859_3_to_mic               | LATIN3          | MULE_INTERNAL
iso_8859_3_to_utf8              | LATIN3          | UTF8
iso_8859_4_to_mic               | LATIN4          | MULE_INTERNAL
iso_8859_4_to_utf8              | LATIN4          | UTF8
iso_8859_5_to_koi8_r            | ISO_8859_5      | KOI8R
iso_8859_5_to_mic               | ISO_8859_5      | MULE_INTERNAL
iso_8859_5_to_utf8              | ISO_8859_5      | UTF8
iso_8859_5_to_windows_1251      | ISO_8859_5      | WIN1251
iso_8859_5_to_windows_866       | ISO_8859_5      | WIN866
iso_8859_6_to_utf8              | ISO_8859_6      | UTF8
iso_8859_7_to_utf8              | ISO_8859_7      | UTF8
iso_8859_8_to_utf8              | ISO_8859_8      | UTF8
iso_8859_9_to_utf8              | LATIN5          | UTF8
johab_to_utf8                   | JOHAB           | UTF8
koi8_r_to_iso_8859_5            | KOI8R           | ISO_8859_5
koi8_r_to_mic                   | KOI8R           | MULE_INTERNAL
koi8_r_to_utf8                  | KOI8R           | UTF8
koi8_r_to_windows_1251          | KOI8R           | WIN1251
koi8_r_to_windows_866           | KOI8R           | WIN866
koi8_u_to_utf8                  | KOI8U           | UTF8
mic_to_big5                     | MULE_INTERNAL   | BIG5
mic_to_euc_cn                   | MULE_INTERNAL   | EUC_CN
mic_to_euc_jp                   | MULE_INTERNAL   | EUC_JP
mic_to_euc_kr                   | MULE_INTERNAL   | EUC_KR
mic_to_euc_tw                   | MULE_INTERNAL   | EUC_TW
mic_to_iso_8859_1               | MULE_INTERNAL   | LATIN1
mic_to_iso_8859_2               | MULE_INTERNAL   | LATIN2
mic_to_iso_8859_3               | MULE_INTERNAL   | LATIN3
mic_to_iso_8859_4               | MULE_INTERNAL   | LATIN4
mic_to_iso_8859_5               | MULE_INTERNAL   | ISO_8859_5
mic_to_koi8_r                   | MULE_INTERNAL   | KOI8R
mic_to_sjis                     | MULE_INTERNAL   | SJIS
mic_to_windows_1250             | MULE_INTERNAL   | WIN1250
mic_to_windows_1251             | MULE_INTERNAL   | WIN1251
mic_to_windows_866              | MULE_INTERNAL   | WIN866
sjis_to_euc_jp                  | SJIS            | EUC_JP
sjis_to_mic                     | SJIS            | MULE_INTERNAL
sjis_to_utf8                    | SJIS            | UTF8
windows_1258_to_utf8            | WIN1258         | UTF8
uhc_to_utf8                     | UHC             | UTF8
utf8_to_big5                    | UTF8            | BIG5
utf8_to_euc_cn                  | UTF8            | EUC_CN
utf8_to_euc_jp                  | UTF8            | EUC_JP
utf8_to_euc_kr                  | UTF8            | EUC_KR
utf8_to_euc_tw                  | UTF8            | EUC_TW
utf8_to_gb18030                 | UTF8            | GB18030
utf8_to_gbk                     | UTF8            | GBK
utf8_to_iso_8859_1              | UTF8            | LATIN1
utf8_to_iso_8859_10             | UTF8            | LATIN6
utf8_to_iso_8859_13             | UTF8            | LATIN7
utf8_to_iso_8859_14             | UTF8            | LATIN8
utf8_to_iso_8859_15             | UTF8            | LATIN9
utf8_to_iso_8859_16             | UTF8            | LATIN10
utf8_to_iso_8859_2              | UTF8            | LATIN2
utf8_to_iso_8859_3              | UTF8            | LATIN3
utf8_to_iso_8859_4              | UTF8            | LATIN4
utf8_to_iso_8859_5              | UTF8            | ISO_8859_5
utf8_to_iso_8859_6              | UTF8            | ISO_8859_6
utf8_to_iso_8859_7              | UTF8            | ISO_8859_7
utf8_to_iso_8859_8              | UTF8            | ISO_8859_8
utf8_to_iso_8859_9              | UTF8            | LATIN5
utf8_to_johab                   | UTF8            | JOHAB
utf8_to_koi8_r                  | UTF8            | KOI8R
utf8_to_koi8_u                  | UTF8            | KOI8U
utf8_to_sjis                    | UTF8            | SJIS
utf8_to_windows_1258            | UTF8            | WIN1258
utf8_to_uhc                     | UTF8            | UHC
utf8_to_windows_1250            | UTF8            | WIN1250
utf8_to_windows_1251            | UTF8            | WIN1251
utf8_to_windows_1252            | UTF8            | WIN1252
utf8_to_windows_1253            | UTF8            | WIN1253
utf8_to_windows_1254            | UTF8            | WIN1254
utf8_to_windows_1255            | UTF8            | WIN1255
utf8_to_windows_1256            | UTF8            | WIN1256
utf8_to_windows_1257            | UTF8            | WIN1257
utf8_to_windows_866             | UTF8            | WIN866
utf8_to_windows_874             | UTF8            | WIN874
windows_1250_to_iso_8859_2      | WIN1250         | LATIN2
windows_1250_to_mic             | WIN1250         | MULE_INTERNAL
windows_1250_to_utf8            | WIN1250         | UTF8
windows_1251_to_iso_8859_5      | WIN1251         | ISO_8859_5
windows_1251_to_koi8_r          | WIN1251         | KOI8R
windows_1251_to_mic             | WIN1251         | MULE_INTERNAL
windows_1251_to_utf8            | WIN1251         | UTF8
windows_1251_to_windows_866     | WIN1251         | WIN866
windows_1252_to_utf8            | WIN1252         | UTF8
windows_1256_to_utf8            | WIN1256         | UTF8
windows_866_to_iso_8859_5       | WIN866          | ISO_8859_5
windows_866_to_koi8_r           | WIN866          | KOI8R
windows_866_to_mic              | WIN866          | MULE_INTERNAL
windows_866_to_utf8             | WIN866          | UTF8
windows_866_to_windows_1251     | WIN866          | WIN1251
windows_874_to_utf8             | WIN874          | UTF8
euc_jis_2004_to_utf8            | EUC_JIS_2004    | UTF8
utf8_to_euc_jis_2004            | UTF8            | EUC_JIS_2004
shift_jis_2004_to_utf8          | SHIFT_JIS_2004  | UTF8
utf8_to_shift_jis_2004          | UTF8            | SHIFT_JIS_2004
euc_jis_2004_to_shift_jis_2004  | EUC_JIS_2004    | SHIFT_JIS_2004
shift_jis_2004_to_euc_jis_2004  | SHIFT_JIS_2004  | EUC_JIS_2004
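The naming rule for conversions can be sketched as a small helper. The function below is a hypothetical illustration of the convention only, not part of any PostgreSQL API:

```python
import re

def conversion_name(source: str, destination: str) -> str:
    """Build a conversion name from two official encoding names:
    lowercase, non-alphanumeric characters replaced by underscores,
    joined with '_to_'."""
    def norm(name: str) -> str:
        return re.sub(r"[^A-Za-z0-9]", "_", name).lower()
    return f"{norm(source)}_to_{norm(destination)}"

print(conversion_name("ISO 8859-5", "KOI8-R"))  # iso_8859_5_to_koi8_r
```

For example, the official name pair (ISO 8859-5, KOI8-R) yields iso_8859_5_to_koi8_r, matching the catalog entry listed above.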
Further Reading
These are good sources to start learning about various kinds of encoding
systems.
CJKV Information Processing: Chinese, Japanese, Korean & Vietnamese Computing
Contains detailed explanations of EUC_JP,
EUC_CN, EUC_KR,
EUC_TW.
The web site of the Unicode Consortium.
RFC 3629
UTF-8 (8-bit UCS/Unicode Transformation
Format) is defined here.