SQLGlot is a no-dependency SQL parser, transpiler, optimizer, and engine. It can be used to format SQL or translate between 21 different dialects like DuckDB, Presto / Trino, Spark / Databricks, Snowflake, and BigQuery. It aims to read a wide variety of SQL inputs and output syntactically and semantically correct SQL in the targeted dialects.
It is a very comprehensive generic SQL parser with a robust test suite. It is also quite performant, while being written purely in Python.
You can easily customize the parser, analyze queries, traverse expression trees, and programmatically build SQL.
Syntax errors are highlighted and dialect incompatibilities can warn or raise depending on configurations. However, SQLGlot does not aim to be a SQL validator, so it may fail to detect certain syntax errors.
Learn more about SQLGlot in the API documentation and the expression tree primer.
Contributions are very welcome in SQLGlot; read the contribution guide to get started!
Table of Contents
- Install
- Versioning
- Get in Touch
- FAQ
- Examples
- Used By
- Documentation
- Run Tests and Lint
- Benchmarks
- Optional Dependencies
Install
From PyPI:
pip3 install "sqlglot[rs]"
# Without Rust tokenizer (slower):
# pip3 install sqlglot
Or with a local checkout:
make install
Requirements for development (optional):
make install-dev
Versioning
Given a version number MAJOR.MINOR.PATCH, SQLGlot uses the following versioning strategy:
- The PATCH version is incremented when there are backwards-compatible fixes or feature additions.
- The MINOR version is incremented when there are backwards-incompatible fixes or feature additions.
- The MAJOR version is incremented when there are significant backwards-incompatible fixes or feature additions.
Get in Touch
We'd love to hear from you. Join our community Slack channel!
FAQ
I tried to parse SQL that should be valid but it failed, why did that happen?
- Most of the time, issues like this occur because the "source" dialect is omitted during parsing. For example, this is how to correctly parse a SQL query written in Spark SQL: parse_one(sql, dialect="spark") (alternatively: read="spark"). If no dialect is specified, parse_one will attempt to parse the query according to the "SQLGlot dialect", which is designed to be a superset of all supported dialects. If you tried specifying the dialect and it still doesn't work, please file an issue.
I tried to output SQL but it's not in the correct dialect!
- Like parsing, generating SQL also requires the target dialect to be specified, otherwise the SQLGlot dialect will be used by default. For example, to transpile a query from Spark SQL to DuckDB, do parse_one(sql, dialect="spark").sql(dialect="duckdb") (alternatively: transpile(sql, read="spark", write="duckdb")).
I tried to parse invalid SQL and it worked, even though it should raise an error! Why didn't it validate my SQL?
- SQLGlot does not aim to be a SQL validator - it is designed to be very forgiving. This makes the codebase more comprehensive and also gives more flexibility to its users, e.g. by allowing them to include trailing commas in their projection lists.
What happened to sqlglot.dataframe?
- The PySpark DataFrame API was moved to a standalone library called sqlframe in v24. It allows running queries as opposed to just generating SQL.
Examples
Formatting and Transpiling
Easily translate from one dialect to another. For example, date/time functions vary between dialects and can be hard to deal with:
import sqlglot
sqlglot.transpile("SELECT EPOCH_MS(1618088028295)", read="duckdb", write="hive")[0]
'SELECT FROM_UNIXTIME(1618088028295 / POW(10, 3))'
SQLGlot can even translate custom time formats:
import sqlglot
sqlglot.transpile("SELECT STRFTIME(x, '%y-%-m-%S')", read="duckdb", write="hive")[0]
"SELECT DATE_FORMAT(x, 'yy-M-ss')"
Identifier delimiters and data types can be translated as well:
import sqlglot
# Spark SQL requires backticks (`) for delimited identifiers and uses `FLOAT` over `REAL`
sql = """WITH baz AS (SELECT a, c FROM foo WHERE a = 1) SELECT f.a, b.b, baz.c, CAST("b"."a" AS REAL) d FROM foo f JOIN bar b ON f.a = b.a LEFT JOIN baz ON f.a = baz.a"""
# Translates the query into Spark SQL, formats it, and delimits all of its identifiers
print(sqlglot.transpile(sql, write="spark", identify=True, pretty=True)[0])
WITH `baz` AS (
  SELECT
    `a`,
    `c`
  FROM `foo`
  WHERE
    `a` = 1
)
SELECT
  `f`.`a`,
  `b`.`b`,
  `baz`.`c`,
  CAST(`b`.`a` AS FLOAT) AS `d`
FROM `foo` AS `f`
JOIN `bar` AS `b`
  ON `f`.`a` = `b`.`a`
LEFT JOIN `baz`
  ON `f`.`a` = `baz`.`a`
Comments are also preserved on a best-effort basis:
sql = """
/* multi
line
comment
*/
SELECT
tbl.cola /* comment 1 */ + tbl.colb /* comment 2 */,
CAST(x AS SIGNED), # comment 3
y -- comment 4
FROM
bar /* comment 5 */,
tbl # comment 6
"""
# Note: MySQL-specific comments (`#`) are converted into standard syntax
print(sqlglot.transpile(sql, read='mysql', pretty=True)[0])
/* multi
line
comment
*/
SELECT
  tbl.cola /* comment 1 */ + tbl.colb /* comment 2 */,
  CAST(x AS INT), /* comment 3 */
  y /* comment 4 */
FROM bar /* comment 5 */, tbl /* comment 6 */
Metadata
You can explore SQL with expression helpers to do things like find columns and tables in a query:
from sqlglot import parse_one, exp
# print all column references (a and b)
for column in parse_one("SELECT a, b + 1 AS c FROM d").find_all(exp.Column):
    print(column.alias_or_name)

# find all projections in select statements (a and c)
for select in parse_one("SELECT a, b + 1 AS c FROM d").find_all(exp.Select):
    for projection in select.expressions:
        print(projection.alias_or_name)

# find all tables (x, y, z)
for table in parse_one("SELECT * FROM x JOIN y JOIN z").find_all(exp.Table):
    print(table.name)
Read the AST primer to learn more about SQLGlot's internals.
Parser Errors
When the parser detects an error in the syntax, it raises a ParseError:
import sqlglot
sqlglot.transpile("SELECT foo FROM (SELECT baz FROM t")
sqlglot.errors.ParseError: Expecting ). Line 1, Col: 34.
SELECT foo FROM (SELECT baz FROM t
                                 ~
Structured syntax errors are accessible for programmatic use:
import sqlglot

try:
    sqlglot.transpile("SELECT foo FROM (SELECT baz FROM t")
except sqlglot.errors.ParseError as e:
    print(e.errors)
[{
  'description': 'Expecting )',
  'line': 1,
  'col': 34,
  'start_context': 'SELECT foo FROM (SELECT baz FROM ',
  'highlight': 't',
  'end_context': '',
  'into_expression': None
}]
Unsupported Errors
It may not be possible to translate some queries between certain dialects. For these cases, SQLGlot may emit a warning and will proceed to do a best-effort translation by default:
import sqlglot
sqlglot.transpile("SELECT APPROX_DISTINCT(a, 0.1) FROM foo", read="presto", write="hive")
APPROX_COUNT_DISTINCT does not support accuracy
'SELECT APPROX_COUNT_DISTINCT(a) FROM foo'
This behavior can be changed by setting the unsupported_level attribute. For example, we can set it to either RAISE or IMMEDIATE to ensure an exception is raised instead:
import sqlglot
sqlglot.transpile("SELECT APPROX_DISTINCT(a, 0.1) FROM foo", read="presto", write="hive", unsupported_level=sqlglot.ErrorLevel.RAISE)
sqlglot.errors.UnsupportedError: APPROX_COUNT_DISTINCT does not support accuracy
There are queries that require additional information to be accurately transpiled, such as the schemas of the tables referenced in them. This is because certain transformations are type-sensitive, meaning that type inference is needed in order to understand their semantics. Even though the qualify and annotate_types optimizer rules can help with this, they are not used by default because they add significant overhead and complexity.
Transpilation is generally a hard problem, so SQLGlot employs an "incremental" approach to solving it. This means that there may be dialect pairs that currently lack support for some inputs, but this is expected to improve over time. We highly appreciate well-documented and tested issues or PRs, so feel free to reach out if you need guidance!
Build and Modify SQL
SQLGlot supports incrementally building SQL expressions:
from sqlglot import select, condition
where = condition("x=1").and_("y=1")
select("*").from_("y").where(where).sql()
'SELECT * FROM y WHERE x = 1 AND y = 1'
It's possible to modify a parsed tree:
from sqlglot import parse_one
parse_one("SELECT x FROM y").from_("z").sql()
'SELECT x FROM z'
Parsed expressions can also be transformed recursively by applying a mapping function to each node in the tree:
from sqlglot import exp, parse_one
expression_tree = parse_one("SELECT a FROM x")
def transformer(node):
    if isinstance(node, exp.Column) and node.name == "a":
        return parse_one("FUN(a)")
    return node
transformed_tree = expression_tree.transform(transformer)
transformed_tree.sql()
'SELECT FUN(a) FROM x'
SQL Optimizer
SQLGlot can rewrite queries into an "optimized" form. It performs a variety of techniques to create a new canonical AST. This AST can be used to standardize queries or provide the foundations for implementing an actual engine. For example:
from sqlglot import parse_one
from sqlglot.optimizer import optimize

print(
    optimize(
        parse_one("""
            SELECT A OR (B OR (C AND D))
            FROM x
            WHERE Z = date '2021-01-01' + INTERVAL '1' month OR 1 = 0
        """),
        schema={"x": {"A": "INT", "B": "INT", "C": "INT", "D": "INT", "Z": "STRING"}}
    ).sql(pretty=True)
)
SELECT
  (
    "x"."a" <> 0 OR "x"."b" <> 0 OR "x"."c" <> 0
  )
  AND (
    "x"."a" <> 0 OR "x"."b" <> 0 OR "x"."d" <> 0
  ) AS "_col_0"
FROM "x" AS "x"
WHERE
  CAST("x"."z" AS DATE) = CAST('2021-02-01' AS DATE)
AST Introspection
You can see the AST version of the parsed SQL by calling repr:
from sqlglot import parse_one
print(repr(parse_one("SELECT a + 1 AS z")))
Select(
  expressions=[
    Alias(
      this=Add(
        this=Column(
          this=Identifier(this=a, quoted=False)),
        expression=Literal(this=1, is_string=False)),
      alias=Identifier(this=z, quoted=False))])
AST Diff
SQLGlot can calculate the semantic difference between two expressions and output changes in a form of a sequence of actions needed to transform a source expression into a target one:
from sqlglot import diff, parse_one
diff(parse_one("SELECT a + b, c, d"), parse_one("SELECT c, a - b, d"))
[
  Remove(expression=Add(
    this=Column(
      this=Identifier(this=a, quoted=False)),
    expression=Column(
      this=Identifier(this=b, quoted=False)))),
  Insert(expression=Sub(
    this=Column(
      this=Identifier(this=a, quoted=False)),
    expression=Column(
      this=Identifier(this=b, quoted=False)))),
  Keep(
    source=Column(this=Identifier(this=a, quoted=False)),
    target=Column(this=Identifier(this=a, quoted=False))),
  ...
]
See also: Semantic Diff for SQL.
Custom Dialects
Dialects can be added by subclassing Dialect:
from sqlglot import exp
from sqlglot.dialects.dialect import Dialect
from sqlglot.generator import Generator
from sqlglot.tokens import Tokenizer, TokenType
class Custom(Dialect):
    class Tokenizer(Tokenizer):
        QUOTES = ["'", '"']
        IDENTIFIERS = ["`"]
        KEYWORDS = {
            **Tokenizer.KEYWORDS,
            "INT64": TokenType.BIGINT,
            "FLOAT64": TokenType.DOUBLE,
        }

    class Generator(Generator):
        TRANSFORMS = {exp.Array: lambda self, e: f"[{self.expressions(e)}]"}
        TYPE_MAPPING = {
            exp.DataType.Type.TINYINT: "INT64",
            exp.DataType.Type.SMALLINT: "INT64",
            exp.DataType.Type.INT: "INT64",
            exp.DataType.Type.BIGINT: "INT64",
            exp.DataType.Type.DECIMAL: "NUMERIC",
            exp.DataType.Type.FLOAT: "FLOAT64",
            exp.DataType.Type.DOUBLE: "FLOAT64",
            exp.DataType.Type.BOOLEAN: "BOOL",
            exp.DataType.Type.TEXT: "STRING",
        }
print(Dialect["custom"])
<class '__main__.Custom'>
SQL Execution
SQLGlot is able to interpret SQL queries, where the tables are represented as Python dictionaries. The engine is not supposed to be fast, but it can be useful for unit testing and running SQL natively across Python objects. Additionally, the foundation can be easily integrated with fast compute kernels, such as Arrow and Pandas.
The example below showcases the execution of a query that involves aggregations and joins:
from sqlglot.executor import execute
tables = {
    "sushi": [
        {"id": 1, "price": 1.0},
        {"id": 2, "price": 2.0},
        {"id": 3, "price": 3.0},
    ],
    "order_items": [
        {"sushi_id": 1, "order_id": 1},
        {"sushi_id": 1, "order_id": 1},
        {"sushi_id": 2, "order_id": 1},
        {"sushi_id": 3, "order_id": 2},
    ],
    "orders": [
        {"id": 1, "user_id": 1},
        {"id": 2, "user_id": 2},
    ],
}

execute(
    """
    SELECT
      o.user_id,
      SUM(s.price) AS price
    FROM orders o
    JOIN order_items i
      ON o.id = i.order_id
    JOIN sushi s
      ON i.sushi_id = s.id
    GROUP BY o.user_id
    """,
    tables=tables
)
user_id price
1 4.0
2 3.0
See also: Writing a Python SQL engine from scratch.
Used By
Documentation
SQLGlot uses pdoc to serve its API documentation.
A hosted version is on the SQLGlot website, or you can build locally with:
make docs-serve
Run Tests and Lint
make style # Only linter checks
make unit # Only unit tests (or unit-rs, to use the Rust tokenizer)
make test # Unit and integration tests (or test-rs, to use the Rust tokenizer)
make check # Full test suite & linter checks
Benchmarks
Benchmarks were run on Python 3.10.12; times are in seconds.

| Query | sqlglot | sqlglotrs | sqlfluff | sqltree | sqlparse | moz_sql_parser | sqloxide |
|-------|---------|-----------|----------|---------|----------|----------------|----------|
| tpch  | 0.00944 (1.0) | 0.00590 (0.625) | 0.32116 (33.98) | 0.00693 (0.734) | 0.02858 (3.025) | 0.03337 (3.532) | 0.00073 (0.077) |
| short | 0.00065 (1.0) | 0.00044 (0.687) | 0.03511 (53.82) | 0.00049 (0.759) | 0.00163 (2.506) | 0.00234 (3.601) | 0.00005 (0.073) |
| long  | 0.00889 (1.0) | 0.00572 (0.643) | 0.36982 (41.56) | 0.00614 (0.690) | 0.02530 (2.844) | 0.02931 (3.294) | 0.00059 (0.066) |
| crazy | 0.02918 (1.0) | 0.01991 (0.682) | 1.88695 (64.66) | 0.02003 (0.686) | 7.46894 (255.9) | 0.64994 (22.27) | 0.00327 (0.112) |
Optional Dependencies
SQLGlot uses dateutil to simplify literal timedelta expressions. The optimizer will not simplify expressions like the following if the module cannot be found:
x + interval '1' month
For reference, this public API is exposed by the top-level sqlglot module (sqlglot/__init__.py):

```python
# ruff: noqa: F401
"""
.. include:: ../README.md

----
"""

from __future__ import annotations

import logging
import typing as t

from sqlglot import expressions as exp
from sqlglot.dialects.dialect import Dialect as Dialect, Dialects as Dialects
from sqlglot.diff import diff as diff
from sqlglot.errors import (
    ErrorLevel as ErrorLevel,
    ParseError as ParseError,
    TokenError as TokenError,
    UnsupportedError as UnsupportedError,
)
from sqlglot.expressions import (
    Expression as Expression,
    alias_ as alias,
    and_ as and_,
    case as case,
    cast as cast,
    column as column,
    condition as condition,
    except_ as except_,
    from_ as from_,
    func as func,
    intersect as intersect,
    maybe_parse as maybe_parse,
    not_ as not_,
    or_ as or_,
    select as select,
    subquery as subquery,
    table_ as table,
    to_column as to_column,
    to_identifier as to_identifier,
    to_table as to_table,
    union as union,
)
from sqlglot.generator import Generator as Generator
from sqlglot.parser import Parser as Parser
from sqlglot.schema import MappingSchema as MappingSchema, Schema as Schema
from sqlglot.tokens import Token as Token, Tokenizer as Tokenizer, TokenType as TokenType

if t.TYPE_CHECKING:
    from sqlglot._typing import E
    from sqlglot.dialects.dialect import DialectType as DialectType

logger = logging.getLogger("sqlglot")


try:
    from sqlglot._version import __version__, __version_tuple__
except ImportError:
    logger.error(
        "Unable to set __version__, run `pip install -e .` or `python setup.py develop` first."
    )


pretty = False
"""Whether to format generated SQL by default."""


def tokenize(sql: str, read: DialectType = None, dialect: DialectType = None) -> t.List[Token]:
    """
    Tokenizes the given SQL string.

    Args:
        sql: the SQL code string to tokenize.
        read: the SQL dialect to apply during tokenizing (eg. "spark", "hive", "presto", "mysql").
        dialect: the SQL dialect (alias for read).

    Returns:
        The resulting list of tokens.
    """
    return Dialect.get_or_raise(read or dialect).tokenize(sql)


def parse(
    sql: str, read: DialectType = None, dialect: DialectType = None, **opts
) -> t.List[t.Optional[Expression]]:
    """
    Parses the given SQL string into a collection of syntax trees, one per parsed SQL statement.

    Args:
        sql: the SQL code string to parse.
        read: the SQL dialect to apply during parsing (eg. "spark", "hive", "presto", "mysql").
        dialect: the SQL dialect (alias for read).
        **opts: other `sqlglot.parser.Parser` options.

    Returns:
        The resulting syntax tree collection.
    """
    return Dialect.get_or_raise(read or dialect).parse(sql, **opts)


@t.overload
def parse_one(sql: str, *, into: t.Type[E], **opts) -> E: ...


@t.overload
def parse_one(sql: str, **opts) -> Expression: ...


def parse_one(
    sql: str,
    read: DialectType = None,
    dialect: DialectType = None,
    into: t.Optional[exp.IntoType] = None,
    **opts,
) -> Expression:
    """
    Parses the given SQL string and returns a syntax tree for the first parsed SQL statement.

    Args:
        sql: the SQL code string to parse.
        read: the SQL dialect to apply during parsing (eg. "spark", "hive", "presto", "mysql").
        dialect: the SQL dialect (alias for read).
        into: the SQLGlot Expression to parse into.
        **opts: other `sqlglot.parser.Parser` options.

    Returns:
        The syntax tree for the first parsed statement.
    """

    dialect = Dialect.get_or_raise(read or dialect)

    if into:
        result = dialect.parse_into(into, sql, **opts)
    else:
        result = dialect.parse(sql, **opts)

    for expression in result:
        if not expression:
            raise ParseError(f"No expression was parsed from '{sql}'")
        return expression
    else:
        raise ParseError(f"No expression was parsed from '{sql}'")


def transpile(
    sql: str,
    read: DialectType = None,
    write: DialectType = None,
    identity: bool = True,
    error_level: t.Optional[ErrorLevel] = None,
    **opts,
) -> t.List[str]:
    """
    Parses the given SQL string in accordance with the source dialect and returns a list of SQL strings
    transformed to conform to the target dialect. Each string in the returned list represents a single
    transformed SQL statement.

    Args:
        sql: the SQL code string to transpile.
        read: the source dialect used to parse the input string (eg. "spark", "hive", "presto", "mysql").
        write: the target dialect into which the input should be transformed (eg. "spark", "hive", "presto", "mysql").
        identity: if set to `True` and if the target dialect is not specified the source dialect will be used as both:
            the source and the target dialect.
        error_level: the desired error level of the parser.
        **opts: other `sqlglot.generator.Generator` options.

    Returns:
        The list of transpiled SQL statements.
    """
    write = (read if write is None else write) if identity else write
    write = Dialect.get_or_raise(write)
    return [
        write.generate(expression, copy=False, **opts) if expression else ""
        for expression in parse(sql, read, error_level=error_level)
    ]
```