What is the bug?
When trying to work with tables that have escaped column names, this escaping is thrown away during index creation.
How can one reproduce the bug?
Steps to reproduce the behavior:
Create a table with column names that require escaping:
CREATE TABLE mys3.default.sample_table_dot_cols (`fields.name` string, `fields.count` int)
USING PARQUET LOCATION 's3://sample-location/sample-dotcol';
Then attempt to create an index on that table:
CREATE INDEX sample_index ON mys3.default.sample_table_dot_cols (`fields.name`, `fields.count`)
WITH (auto_refresh = true);
Wait for the query to fail and check the logs; you'll see this error:
23/10/19 22:30:42 ERROR FlintJob: Fail to verify existing mapping or write result
org.apache.spark.sql.catalyst.parser.ParseException:
Syntax error at or near '.': extra input '.'(line 1, pos 6)
== SQL ==
fields.name string not null,fields.count int not null
------^^^
This indicates that at some point in internal processing (maybe in FlintSparkIndex.scala), the column names are being unescaped.
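For reference, the same ParseException can be reproduced outside of Flint by parsing the unescaped schema string directly. This is only an illustration of the failure mode, not the actual Flint code path; the standalone StructType.fromDDL repro below is my own sketch:

import org.apache.spark.sql.types.StructType

object DotColumnRepro extends App {
  // Unquoted dotted names fail just like the log above:
  //   Syntax error at or near '.': extra input '.'
  // StructType.fromDDL("fields.name string not null,fields.count int not null")

  // Keeping the backticks lets the same schema parse cleanly.
  val schema = StructType.fromDDL("`fields.name` string not null, `fields.count` int not null")
  println(schema.treeString)
}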
What is the expected behavior?
If the column name is escaped in the index creation query, it should be correctly handled. Alternatively, if escaping column names isn't supported, there should be an error on table creation.
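As a rough idea of what a fix might look like (purely a sketch on my part, not existing Flint code), identifiers that aren't plain words could be re-quoted with backticks before the schema DDL string is generated:

// Hypothetical helper: re-quote identifiers that need it so dotted column
// names survive round-tripping through a generated DDL string.
def quoteIfNeeded(name: String): String =
  if (name.matches("[a-zA-Z_][a-zA-Z0-9_]*")) name
  else "`" + name.replace("`", "``") + "`"

// quoteIfNeeded("fields.name") == "`fields.name`"
// quoteIfNeeded("count")       == "count"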
What is your host/environment?
plugin:observabilityDashboards@2.11.0, plugin:queryWorkbenchDashboards@2.11.0
Do you have any screenshots?
N/A
Do you have any additional context?
N/A
@Swiddis I can confirm the Flint extension does unquote identifiers automatically. Your error was actually thrown from the Spark application code. Do you still have the error stack or more info? For now I haven't figured out which code is the root cause. cc: @kaituo