
feat(explore): Postgres datatype conversion #13294

Merged
merged 36 commits into from
Mar 12, 2021

Conversation

nikolagigic
Contributor

@nikolagigic nikolagigic commented Feb 23, 2021

SUMMARY

In an effort to make db_engine_spec more stable, we are moving all type conversion logic to one place, BaseEngineSpec, and reusing it per engine as needed. This PR provides a PostgreSQL POC, to learn from and adapt for the rest of the engines.
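As a rough, dependency-free illustration of the pattern being proposed (class and method names are simplified and hypothetical; the actual implementation lives in BaseEngineSpec and the per-engine specs), the base spec can expose an empty mapping that each engine overrides:

```python
import re
from enum import Enum
from typing import Optional, Pattern, Tuple


class GenericDataType(Enum):
    NUMERIC = 0
    STRING = 1
    TEMPORAL = 2
    BOOLEAN = 3


class BaseEngineSpec:
    # The base spec ships no mapping; each engine overrides as needed.
    column_type_mappings: Tuple[Tuple[Pattern[str], GenericDataType], ...] = ()

    @classmethod
    def get_generic_type(cls, native_type: Optional[str]) -> Optional[GenericDataType]:
        """Return the generic type of the first mapping whose regex matches."""
        if not native_type:
            return None
        for pattern, generic_type in cls.column_type_mappings:
            if pattern.match(native_type):
                return generic_type
        return None


class PostgresEngineSpec(BaseEngineSpec):
    column_type_mappings = (
        (re.compile(r"^(smallint|integer|bigint|numeric|real)", re.IGNORECASE),
         GenericDataType.NUMERIC),
        (re.compile(r"^(varchar|char|text)", re.IGNORECASE),
         GenericDataType.STRING),
        (re.compile(r"^(timestamp|date|time|interval)", re.IGNORECASE),
         GenericDataType.TEMPORAL),
        (re.compile(r"^boolean", re.IGNORECASE), GenericDataType.BOOLEAN),
    )
```

The point of the structure is that a generic fallback lives in one place while each engine stays free to express its own native type names.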

BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF

TEST PLAN

ADDITIONAL INFORMATION

  • Has associated issue:
  • Changes UI
  • Requires DB Migration
  • Confirm DB Migration upgrade and downgrade tested
  • Introduces new feature or API
  • Removes existing feature or API

@codecov-io

codecov-io commented Feb 23, 2021

Codecov Report

Merging #13294 (71a6671) into master (1d1a1cd) will decrease coverage by 6.07%.
The diff coverage is 66.05%.


@@            Coverage Diff             @@
##           master   #13294      +/-   ##
==========================================
- Coverage   77.31%   71.24%   -6.08%     
==========================================
  Files         903      826      -77     
  Lines       45926    41326    -4600     
  Branches     5624     4265    -1359     
==========================================
- Hits        35508    29442    -6066     
- Misses      10282    11884    +1602     
+ Partials      136        0     -136     
Flag Coverage Δ
cypress 57.09% <37.50%> (+0.64%) ⬆️
javascript ?
mysql 80.37% <75.00%> (-0.09%) ⬇️
postgres 80.42% <76.44%> (-0.08%) ⬇️
presto 80.10% <76.81%> (-0.07%) ⬇️
python 80.66% <76.81%> (-0.08%) ⬇️
sqlite 80.03% <75.00%> (-0.09%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
...end/src/SqlLab/components/TemplateParamsEditor.jsx 23.80% <ø> (ø)
superset-frontend/src/common/components/index.tsx 100.00% <ø> (ø)
superset-frontend/src/components/Badge/index.tsx 25.00% <ø> (ø)
superset-frontend/src/components/Button/index.tsx 100.00% <ø> (ø)
...et-frontend/src/dashboard/actions/nativeFilters.ts 62.50% <0.00%> (-5.07%) ⬇️
...tiveFilters/FilterBar/FilterSets/FilterSetUnit.tsx 18.75% <ø> (-22.16%) ⬇️
.../nativeFilters/FilterBar/FilterSets/FilterSets.tsx 6.45% <0.00%> (-11.74%) ⬇️
...tiveFilters/FilterBar/FilterSets/FiltersHeader.tsx 20.00% <0.00%> (-20.91%) ⬇️
...perset-frontend/src/dashboard/containers/Chart.jsx 100.00% <ø> (ø)
...t-frontend/src/explore/reducers/getInitialState.ts 100.00% <ø> (ø)
... and 514 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 1d1a1cd...4336ae5.

@junlincc
Member

junlincc commented Feb 23, 2021

@nikolagigic Hey Nikola, thanks for the PR. I understand there are no UI changes involved, but could you add more details and context to the description? 🙏

@junlincc junlincc requested review from villebro and ktmud February 23, 2021 05:02
@junlincc junlincc added the explore:dataset Related to the dataset of Explore label Feb 23, 2021
Member

@villebro villebro left a comment

A few first pass comments

Comment on lines 1126 to 1148
postgres_types_map: Dict[utils.GenericDataType, List[str]] = {
utils.GenericDataType.NUMERIC: [
"smallint",
"integer",
"bigint",
"decimal",
"numeric",
"real",
"double precision",
"smallserial",
"serial",
"bigserial",
],
utils.GenericDataType.STRING: ["varchar", "char", "text",],
utils.GenericDataType.TEMPORAL: [
"DATE",
"TIME",
"TIMESTAMP",
"TIMESTAMPTZ",
"INTERVAL",
],
utils.GenericDataType.BOOLEAN: ["boolean",],
}
Member

I would break this out so that the base engine only provides a default mapping (can return None now), and then each engine would implement the method as they want. An example of similar logic: convert_dttm: BaseEngineSpec returns None (

@classmethod
def convert_dttm(cls, target_type: str, dttm: datetime) -> Optional[str]:
"""
Convert Python datetime object to a SQL expression
:param target_type: The target type of expression
:param dttm: The datetime object
:return: The SQL expression
"""
return None
), but e.g. BigQueryEngineSpec implements it (
@classmethod
def convert_dttm(cls, target_type: str, dttm: datetime) -> Optional[str]:
tt = target_type.upper()
if tt == utils.TemporalType.DATE:
return f"CAST('{dttm.date().isoformat()}' AS DATE)"
if tt == utils.TemporalType.DATETIME:
return f"""CAST('{dttm.isoformat(timespec="microseconds")}' AS DATETIME)"""
if tt == utils.TemporalType.TIME:
return f"""CAST('{dttm.strftime("%H:%M:%S.%f")}' AS TIME)"""
if tt == utils.TemporalType.TIMESTAMP:
return f"""CAST('{dttm.isoformat(timespec="microseconds")}' AS TIMESTAMP)"""
return None
).

For matching the native type to specific types I would probably use a sequence of regexps to find the first match. Something similar has been done in Presto to map the database type to SQLAlchemy types (this new functionality would replace the old logic):

column_type_mappings = (
(re.compile(r"^boolean.*", re.IGNORECASE), types.Boolean()),
(re.compile(r"^tinyint.*", re.IGNORECASE), TinyInteger()),
(re.compile(r"^smallint.*", re.IGNORECASE), types.SmallInteger()),
(re.compile(r"^integer.*", re.IGNORECASE), types.Integer()),
(re.compile(r"^bigint.*", re.IGNORECASE), types.BigInteger()),
(re.compile(r"^real.*", re.IGNORECASE), types.Float()),
(re.compile(r"^double.*", re.IGNORECASE), types.Float()),
(re.compile(r"^decimal.*", re.IGNORECASE), types.DECIMAL()),
(
re.compile(r"^varchar(\((\d+)\))*$", re.IGNORECASE),
lambda match: types.VARCHAR(int(match[2])) if match[2] else types.String(),
),
(
re.compile(r"^char(\((\d+)\))*$", re.IGNORECASE),
lambda match: types.CHAR(int(match[2])) if match[2] else types.CHAR(),
),
(re.compile(r"^varbinary.*", re.IGNORECASE), types.VARBINARY()),
(re.compile(r"^json.*", re.IGNORECASE), types.JSON()),
(re.compile(r"^date.*", re.IGNORECASE), types.DATE()),
(re.compile(r"^timestamp.*", re.IGNORECASE), types.TIMESTAMP()),
(re.compile(r"^time.*", re.IGNORECASE), types.Time()),
(re.compile(r"^interval.*", re.IGNORECASE), Interval()),
(re.compile(r"^array.*", re.IGNORECASE), Array()),
(re.compile(r"^map.*", re.IGNORECASE), Map()),
(re.compile(r"^row.*", re.IGNORECASE), Row()),
)
Some engines like Druid will also have fixed types for certain column names (e.g. __time is actually TIMESTAMP despite being returned as STRING in the cursor description), so these should probably be caught before matching on the database type.
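A minimal, dependency-free sketch of the first-match lookup described above, with type names as plain strings in place of SQLAlchemy objects (the resolver helper is hypothetical); note how an entry can be either a fixed value or a callable that parameterizes the type from the regex match:

```python
import re
from typing import Optional

# Ordered (pattern, resolver) pairs; first match wins.
column_type_mappings = (
    (
        re.compile(r"^varchar(\((\d+)\))*$", re.IGNORECASE),
        # Callable entry: use the captured length if one was given.
        lambda m: f"VARCHAR({m[2]})" if m[2] else "STRING",
    ),
    (re.compile(r"^boolean", re.IGNORECASE), "BOOLEAN"),
)


def resolve_type(native_type: str) -> Optional[str]:
    """Return the mapped type for the first matching pattern, else None."""
    for pattern, resolver in column_type_mappings:
        match = pattern.match(native_type)
        if match:
            return resolver(match) if callable(resolver) else resolver
    return None
```

Because iteration order is the priority order, engine specs can control which mapping wins simply by placing it earlier in the tuple.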

@nikolagigic nikolagigic reopened this Feb 24, 2021
Member

@ktmud ktmud left a comment

I like the new ColumnSpec class. It would be nice if you could describe in the PR description in more detail how you plan to use it.

@@ -41,7 +41,8 @@
import sqlparse
from flask import g
from flask_babel import lazy_gettext as _
from sqlalchemy import column, DateTime, select
from sqlalchemy import column, DateTime, select, types
from sqlalchemy.dialects.postgresql import DOUBLE_PRECISION
Member

Seems like an extraneous import

@@ -45,6 +48,28 @@ class PostgresBaseEngineSpec(BaseEngineSpec):
engine = ""
engine_name = "PostgreSQL"

column_type_mappings = (
Member

Would it make sense to consolidate the structure of db_column_types and column_type_mappings as they are basically doing very similar things (one to convert db column type to generic types; one to convert to SQLA types)?

Member

Yes - this is the ultimate objective. However, we want to leave the existing structures untouched until we're certain this doesn't break existing functionality. We'll be including other types in this mapping that are missing from db_column_types, like DbColumnType, PyArrow types, etc., to make this a one-stop shop for all necessary types.

Member

@villebro villebro left a comment

A few comments

Comment on lines 51 to 138
column_type_mappings = (
(re.compile(r"^smallint", re.IGNORECASE), types.SMALLINT),
(re.compile(r"^integer", re.IGNORECASE), types.INTEGER),
(re.compile(r"^bigint", re.IGNORECASE), types.BIGINT),
(re.compile(r"^decimal", re.IGNORECASE), types.DECIMAL),
(re.compile(r"^numeric", re.IGNORECASE), types.NUMERIC),
(re.compile(r"^real", re.IGNORECASE), types.REAL),
(re.compile(r"^double precision", re.IGNORECASE), DOUBLE_PRECISION),
(re.compile(r"^smallserial", re.IGNORECASE), types.SMALLINT),
(re.compile(r"^serial", re.IGNORECASE), types.INTEGER),
(re.compile(r"^bigserial", re.IGNORECASE), types.BIGINT),
(re.compile(r"^varchar", re.IGNORECASE), types.VARCHAR),
(re.compile(r"^char", re.IGNORECASE), types.CHAR),
(re.compile(r"^text", re.IGNORECASE), types.TEXT),
(re.compile(r"^date", re.IGNORECASE), types.DATE),
(re.compile(r"^time", re.IGNORECASE), types.TIME),
(re.compile(r"^timestamp", re.IGNORECASE), types.TIMESTAMP),
(re.compile(r"^timestamptz", re.IGNORECASE), types.TIMESTAMP(timezone=True)),
(re.compile(r"^interval", re.IGNORECASE), types.Interval),
(re.compile(r"^boolean", re.IGNORECASE), types.BOOLEAN),
)
Member

We should add at least DbColumnType in this mapping, too (should be added to ColumnSpec), as we want to pass that when returning chart query data to the frontend. This will make is_dttm redundant, as that info is already covered by DbColumnType.TEMPORAL.
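A sketch of what that could look like, assuming a simplified ColumnSpec (field names hypothetical, SQLA type reduced to a string): once the generic type is carried on the spec, is_dttm becomes a derived property rather than separately tracked state.

```python
from dataclasses import dataclass
from enum import Enum


class GenericDataType(Enum):
    NUMERIC = 0
    STRING = 1
    TEMPORAL = 2
    BOOLEAN = 3


@dataclass
class ColumnSpec:
    sqla_type: str  # SQLA type name, simplified to a string for this sketch
    generic_type: GenericDataType

    @property
    def is_dttm(self) -> bool:
        # Derived from the generic type, so it can never drift out of sync.
        return self.generic_type is GenericDataType.TEMPORAL
```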

Comment on lines 67 to 68
(re.compile(r"^timestamp", re.IGNORECASE), types.TIMESTAMP),
(re.compile(r"^timestamptz", re.IGNORECASE), types.TIMESTAMP(timezone=True)),
Member

@villebro villebro Feb 25, 2021

Were you able to get these from inspector.get_columns() or cursor.description? When I check a table with timestamps it seems to return the unabbreviated format TIMESTAMP WITHOUT TIME ZONE, not TIMESTAMP:
[screenshot]

I'm assuming the abbreviations are usually relevant when creating tables, not fetching table metadata, but I may be wrong (this may have changed over the versions/years):

[screenshot]

@nikolagigic nikolagigic force-pushed the postgres_type_conversion branch from fc0a7a1 to 488a840 Compare February 26, 2021 13:41
@pull-request-size pull-request-size bot added size/L and removed size/M labels Feb 26, 2021
@nikolagigic nikolagigic force-pushed the postgres_type_conversion branch from 641a16a to c894b90 Compare February 26, 2021 14:58
Comment on lines 188 to 193
dttm_types = [
types.TIME,
types.TIMESTAMP,
types.TIMESTAMP(timezone=True),
types.Interval,
]
Member

I don't believe this is needed anymore.

def get_sqla_column_type(cls, type_: Optional[str]) -> Optional[TypeEngine]:
def get_sqla_column_type(
cls, column_type: Optional[str]
) -> Tuple[Union[TypeEngine, utils.GenericDataType, None]]:
Member

I would consider just leaving the signature of this method unchanged for now; otherwise I'd just remove this one and start using the new get_column_spec method and implement that wherever get_sqla_column_type is being used, as that's the end state we want to aim for.

return None
return sqla_type(match), generic_type
return sqla_type, generic_type
return None, None
Member

For cases where we want to return an empty value (=no match was made), I'd perhaps prefer returning a pure None instead of a Tuple with Nones.

Comment on lines 1128 to 1131
if (
source == utils.ColumnTypeSource.CURSOR_DESCRIPION
): # Further logic to be implemented
pass
Member

For now we can assume GET_TABLE and CURSOR_DESCRIPTION is handled with the same matching logic - this can later be refined for engines where this makes a difference. Also, we might consider keeping the base implementation as simple as possible, and leaving the more complex logic to the individual db engine specs.

Comment on lines 151 to 154
ARRAY = 4
JSON = 5
MAP = 6
ROW = 7
Member

We need to make sure this logic is in sync with superset-ui/core - for now it might be a good idea to just map all complex types to STRING as has previously been done, and later introduce more complex generic types once we add proper support for them.

@junlincc junlincc requested a review from zhaoyongjie March 1, 2021 14:56
(
re.compile(r"^smallint", re.IGNORECASE),
types.SMALLINT,
utils.GenericDataType.NUMERIC,
Member

Nit: can we import GenericDataType directly and get rid of the utils. prefix? Just want the code to look cleaner.

Member

Is adding GenericDataType here really necessary, though? I'd imagine all SQLA types could be definitively mapped to a GenericDataType, wouldn't they? Maybe ColumnSpec could just have an instance @property that maps all known SQLA types to GenericDataType?

Contributor Author

@ktmud Yes, it could have been mapped as you proposed. The reason behind the current approach is that if we get multiple matches, we can prioritise those at the start of the list without having to sort the results.

Member

I understand the need to use a list to have priority for RegExp matching, but will there be a case where two SQLA types may be matched to different GenericDataType? If not, isn't GenericDataType already inferred by SQLA types?

),
(
re.compile(r"^smallint.*", re.IGNORECASE),
types.SmallInteger(),
Member

Are types.SMALLINT and types.SmallInteger() interchangeable? If yes, can we stick to just one of them?

Contributor Author

Changed it to keep it consistent

@pull-request-size pull-request-size bot added size/XL and removed size/L labels Mar 9, 2021
@nikolagigic nikolagigic closed this Mar 9, 2021
@nikolagigic nikolagigic reopened this Mar 9, 2021
Comment on lines +1046 to +1053
column_type_mappings: Tuple[
Tuple[
Pattern[str],
Union[TypeEngine, Callable[[Match[str]], TypeEngine]],
GenericDataType,
],
...,
] = column_type_mappings,
Member

Instead of passing the mapping to the method, we can probably just call the cls.column_type_mappings property in the method call.

Comment on lines 81 to 90
(re.compile(r"^N((VAR)?CHAR|TEXT)", re.IGNORECASE), UnicodeText()),
(re.compile(r"^((VAR)?CHAR|TEXT|STRING)", re.IGNORECASE), String()),
(
re.compile(r"^N((VAR)?CHAR|TEXT)", re.IGNORECASE),
UnicodeText(),
utils.GenericDataType.STRING,
),
(
re.compile(r"^((VAR)?CHAR|TEXT|STRING)", re.IGNORECASE),
String(),
utils.GenericDataType.STRING,
),
Member

I would probably design this so that an engine spec can extend the base mapping. In the case of MSSQL, I believe the base mapping is a good fallback. Also, we might consider incorporating these types into the base spec, as I assume fairly many engines support N-prefixed character types, and some of those engines might also benefit from the UnicodeText SQLA type over the regular String one.
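One way an engine spec could extend the base mapping as suggested, sketched with plain strings in place of SQLA types (names hypothetical): prepending engine-specific entries via tuple concatenation keeps first-match priority while leaving the base entries as a fallback.

```python
import re
from typing import Optional

# Base mapping shared by all engines (sketch).
BASE_MAPPINGS = (
    (re.compile(r"^((VAR)?CHAR|TEXT|STRING)", re.IGNORECASE), "String"),
)

# MSSQL prepends its N-prefixed Unicode types, then falls back to the base.
MSSQL_MAPPINGS = (
    (re.compile(r"^N((VAR)?CHAR|TEXT)", re.IGNORECASE), "UnicodeText"),
) + BASE_MAPPINGS


def first_match(mappings, native_type: str) -> Optional[str]:
    for pattern, result in mappings:
        if pattern.match(native_type):
            return result
    return None
```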

Comment on lines 82 to 83
# "NVARCHAR": GenericDataType.STRING, # MSSQL types; commented out for now and will address in another PR
# "STRING": GenericDataType.STRING,
Member

Were these causing problems in tests? This test might need some refactoring, as it will potentially give different results on different engines. We could potentially simplify this a bit by only checking types that are common for all databases supported by CI, and later potentially adding a few db specific tests.

@nikolagigic nikolagigic reopened this Mar 10, 2021
@nikolagigic nikolagigic force-pushed the postgres_type_conversion branch from 51b7a82 to bfdc994 Compare March 10, 2021 12:44
Comment on lines 208 to 217
(
re.compile(r"^timestamp", re.IGNORECASE),
types.TIMESTAMP(),
GenericDataType.TEMPORAL,
),
(
re.compile(r"^timestamptz", re.IGNORECASE),
types.TIMESTAMP(timezone=True),
GenericDataType.TEMPORAL,
),
Member

^timestamptz would never be caught as it's already matching ^timestamp above. I suggest removing timestamptz support for now (timezones aren't really properly supported yet).
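The shadowing can be demonstrated directly; putting the more specific pattern first (or anchoring it, e.g. with a word boundary) fixes the lookup. A small sketch:

```python
import re
from typing import Optional

# As written in the PR: ^timestamp also matches "timestamptz",
# so the second entry is unreachable.
shadowed = (
    (re.compile(r"^timestamp", re.IGNORECASE), "TIMESTAMP"),
    (re.compile(r"^timestamptz", re.IGNORECASE), "TIMESTAMPTZ"),
)

# More specific pattern first: both entries are reachable.
reordered = (
    (re.compile(r"^timestamptz", re.IGNORECASE), "TIMESTAMPTZ"),
    (re.compile(r"^timestamp", re.IGNORECASE), "TIMESTAMP"),
)


def first_match(mappings, native_type: str) -> Optional[str]:
    for pattern, result in mappings:
        if pattern.match(native_type):
            return result
    return None
```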

Comment on lines 165 to 246
db_column_types: Dict[utils.GenericDataType, Tuple[Pattern[str], ...]] = {
utils.GenericDataType.NUMERIC: (
db_column_types: Dict[GenericDataType, Tuple[Pattern[str], ...]] = {
GenericDataType.NUMERIC: (
Member

Do we need this map anymore? I believe this can be achieved with get_column_spec

Comment on lines 213 to 289
cls, db_column_type: Optional[str], target_column_type: utils.GenericDataType
cls, db_column_type: Optional[str], target_column_type: GenericDataType
Member

I believe we can remove this method, as I don't see it being used anymore

Comment on lines 262 to 263
self.maxDiff = None
self.assertEquals(metadata, expected_metadata)
Member

Let's not change this

Suggested change
self.maxDiff = None
self.assertEquals(metadata, expected_metadata)
assert metadata == expected_metadata

Comment on lines 39 to 50
assert_type("INT", None)
assert_type("STRING", String)
assert_type("CHAR(10)", String)
assert_type("VARCHAR(10)", String)
assert_type("TEXT", String)
assert_type("NCHAR(10)", UnicodeText)
assert_type("NVARCHAR(10)", UnicodeText)
assert_type("NTEXT", UnicodeText)
# assert_type("STRING", String, GenericDataType.STRING)
# assert_type("CHAR(10)", String, GenericDataType.STRING)
# assert_type("VARCHAR(10)", String, GenericDataType.STRING)
# assert_type("TEXT", String, GenericDataType.STRING)
# assert_type("NCHAR(10)", UnicodeText, GenericDataType.STRING)
# assert_type("NVARCHAR(10)", UnicodeText, GenericDataType.STRING)
# assert_type("NTEXT", UnicodeText, GenericDataType.STRING)
Member

Why were these removed? Couldn't we assert this using get_column_spec?

Comment on lines 48 to 75
def test_where_clause_n_prefix(self):
dialect = mssql.dialect()
spec = MssqlEngineSpec
str_col = column("col", type_=spec.get_sqla_column_type("VARCHAR(10)"))
unicode_col = column("unicode_col", type_=spec.get_sqla_column_type("NTEXT"))
tbl = table("tbl")
sel = (
select([str_col, unicode_col])
.select_from(tbl)
.where(str_col == "abc")
.where(unicode_col == "abc")
)
# def test_where_clause_n_prefix(self):
# dialect = mssql.dialect()
# spec = MssqlEngineSpec
# type_, _ = spec.get_sqla_column_type("VARCHAR(10)")
# str_col = column("col", type_=type_)
# type_, _ = spec.get_sqla_column_type("NTEXT")
# unicode_col = column("unicode_col", type_=type_)
# tbl = table("tbl")
# sel = (
# select([str_col, unicode_col])
# .select_from(tbl)
# .where(str_col == "abc")
# .where(unicode_col == "abc")
# )

query = str(
sel.compile(dialect=dialect, compile_kwargs={"literal_binds": True})
)
query_expected = (
"SELECT col, unicode_col \n"
"FROM tbl \n"
"WHERE col = 'abc' AND unicode_col = N'abc'"
)
self.assertEqual(query, query_expected)
# query = str(
# sel.compile(dialect=dialect, compile_kwargs={"literal_binds": True})
# )
# query_expected = (
# "SELECT col, unicode_col \n"
# "FROM tbl \n"
# "WHERE col = 'abc' AND unicode_col = N'abc'"
# )
# self.assertEqual(query, query_expected)
Member

This is an important test - if it doesn't work we need to make sure it does

Comment on lines 137 to 145
(NVARCHAR(length=128), "NVARCHAR(128)"),
# (NVARCHAR(length=128), "NVARCHAR(128)"),
(TEXT(), "TEXT"),
(NTEXT(collation="utf8_general_ci"), "NTEXT"),
# (NTEXT(collation="utf8_general_ci"), "NTEXT"),
Member

This functionality and test can be removed later, but I believe we need a db migration to resize the type column in the metadata (not sure if that has been done already).

("INT", GenericDataType.NUMERIC),
("INTEGER", GenericDataType.NUMERIC),
Member

Isn't INT an official datatype in MySQL? https://dev.mysql.com/doc/refman/8.0/en/integer-types.html I think we need to keep this here.

type_str, GenericDataType.TEMPORAL
) is (col_type == GenericDataType.TEMPORAL)
for type_str, col_type in type_expectations:
print(">>> ", type_str)
Member

assuming a leftover:

Suggested change
print(">>> ", type_str)
print(">>> ", type_str)

GenericDataType.NUMERIC,
),
(
re.compile(r"^integer", re.IGNORECASE),
Member

To make sure we match Mysql INT type, we could just change this to ^INT to match both INT and INTEGER, unless there are any known incompatible types that could cause a collision.

Contributor Author

@nikolagigic nikolagigic Mar 12, 2021

There is an INTEGER type in mysql dialect.
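As a quick check of the ^INT suggestion above (hypothetical snippet): it does match both INT and INTEGER, though note it would also match INTERVAL, which is exactly the kind of collision the comment hedges about.

```python
import re

# Proposed pattern from the review discussion.
int_pattern = re.compile(r"^INT", re.IGNORECASE)


def matches(native_type: str) -> bool:
    return bool(int_pattern.match(native_type))
```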

@nikolagigic nikolagigic reopened this Mar 12, 2021
@villebro villebro merged commit 609c359 into apache:master Mar 12, 2021
allanco91 pushed a commit to allanco91/superset that referenced this pull request May 21, 2021
* test

* unnecessary import

* fix lint

* changes

* fix lint

* changes

* changes

* changes

* changes

* answering comments & changes

* answering comments

* answering comments

* changes

* changes

* changes

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests

* fix tests
@mistercrunch mistercrunch added 🏷️ bot A label used by `supersetbot` to keep track of which PR where auto-tagged with release labels 🚢 1.2.0 labels Mar 12, 2024