From 5293f5521d795d9f97a7470b1b9bd97091a190f4 Mon Sep 17 00:00:00 2001
From: Rob Moore
Date: Thu, 19 Oct 2023 14:38:13 +0100
Subject: [PATCH 01/38] fix(sqllab): reinstate "Force trino client async
 execution" (#25680)

---
 .../databases/installing-database-drivers.mdx | 81 ++++++++++---------
 docs/docs/frequently-asked-questions.mdx      |  2 +-
 .../installation/configuring-superset.mdx     |  4 +-
 superset/config.py                            |  5 +-
 superset/db_engine_specs/base.py              | 18 +++++
 superset/db_engine_specs/trino.py             | 66 +++++++++++++--
 superset/sql_lab.py                           |  7 +-
 .../unit_tests/db_engine_specs/test_trino.py  | 31 ++++++-
 tests/unit_tests/sql_lab_test.py              | 10 +--
 9 files changed, 163 insertions(+), 61 deletions(-)

diff --git a/docs/docs/databases/installing-database-drivers.mdx b/docs/docs/databases/installing-database-drivers.mdx
index e4e972f0648b2..57652db4b8cb7 100644
--- a/docs/docs/databases/installing-database-drivers.mdx
+++ b/docs/docs/databases/installing-database-drivers.mdx
@@ -22,46 +22,47 @@ as well as the packages needed to connect to the databases you want to access through Superset.
 
 Some of the recommended packages are shown below. Please refer to [setup.py](https://github.com/apache/superset/blob/master/setup.py) for the versions that are compatible with Superset.
 
-| Database | PyPI package | Connection String | -| --------------------------------------------------------- | ---------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- | -| [Amazon Athena](/docs/databases/athena) | `pip install pyathena[pandas]` , `pip install PyAthenaJDBC` | `awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{ ` | -| [Amazon DynamoDB](/docs/databases/dynamodb) | `pip install pydynamodb` | `dynamodb://{access_key_id}:{secret_access_key}@dynamodb.{region_name}.amazonaws.com?connector=superset` | -| [Amazon Redshift](/docs/databases/redshift) | `pip install sqlalchemy-redshift` | ` redshift+psycopg2://:@:5439/` | -| [Apache Drill](/docs/databases/drill) | `pip install sqlalchemy-drill` | `drill+sadrill:// For JDBC drill+jdbc://` | -| [Apache Druid](/docs/databases/druid) | `pip install pydruid` | `druid://:@:/druid/v2/sql` | -| [Apache Hive](/docs/databases/hive) | `pip install pyhive` | `hive://hive@{hostname}:{port}/{database}` | -| [Apache Impala](/docs/databases/impala) | `pip install impyla` | `impala://{hostname}:{port}/{database}` | -| [Apache Kylin](/docs/databases/kylin) | `pip install kylinpy` | `kylin://:@:/?=&=` | -| [Apache Pinot](/docs/databases/pinot) | `pip install pinotdb` | `pinot://BROKER:5436/query?server=http://CONTROLLER:5983/` | -| [Apache Solr](/docs/databases/solr) | `pip install sqlalchemy-solr` | `solr://{username}:{password}@{hostname}:{port}/{server_path}/{collection}` | -| [Apache Spark SQL](/docs/databases/spark-sql) | `pip install pyhive` | `hive://hive@{hostname}:{port}/{database}` | -| [Ascend.io](/docs/databases/ascend) | `pip install impyla` | `ascend://{username}:{password}@{hostname}:{port}/{database}?auth_mechanism=PLAIN;use_ssl=true` | -| [Azure MS SQL](/docs/databases/sql-server) | `pip install pymssql` | 
`mssql+pymssql://UserName@presetSQL:TestPassword@presetSQL.database.windows.net:1433/TestSchema` | -| [Big Query](/docs/databases/bigquery) | `pip install sqlalchemy-bigquery` | `bigquery://{project_id}` | -| [ClickHouse](/docs/databases/clickhouse) | `pip install clickhouse-connect` | `clickhousedb://{username}:{password}@{hostname}:{port}/{database}` | -| [CockroachDB](/docs/databases/cockroachdb) | `pip install cockroachdb` | `cockroachdb://root@{hostname}:{port}/{database}?sslmode=disable` | -| [Dremio](/docs/databases/dremio) | `pip install sqlalchemy_dremio` | `dremio://user:pwd@host:31010/` | -| [Elasticsearch](/docs/databases/elasticsearch) | `pip install elasticsearch-dbapi` | `elasticsearch+http://{user}:{password}@{host}:9200/` | -| [Exasol](/docs/databases/exasol) | `pip install sqlalchemy-exasol` | `exa+pyodbc://{username}:{password}@{hostname}:{port}/my_schema?CONNECTIONLCALL=en_US.UTF-8&driver=EXAODBC` | -| [Google Sheets](/docs/databases/google-sheets) | `pip install shillelagh[gsheetsapi]` | `gsheets://` | -| [Firebolt](/docs/databases/firebolt) | `pip install firebolt-sqlalchemy` | `firebolt://{username}:{password}@{database} or firebolt://{username}:{password}@{database}/{engine_name}` | -| [Hologres](/docs/databases/hologres) | `pip install psycopg2` | `postgresql+psycopg2://:@/` | -| [IBM Db2](/docs/databases/ibm-db2) | `pip install ibm_db_sa` | `db2+ibm_db://` | -| [IBM Netezza Performance Server](/docs/databases/netezza) | `pip install nzalchemy` | `netezza+nzpy://:@/` | -| [MySQL](/docs/databases/mysql) | `pip install mysqlclient` | `mysql://:@/` | -| [Oracle](/docs/databases/oracle) | `pip install cx_Oracle` | `oracle://` | -| [PostgreSQL](/docs/databases/postgres) | `pip install psycopg2` | `postgresql://:@/` | -| [Trino](/docs/databases/trino) | `pip install trino` | `trino://{username}:{password}@{hostname}:{port}/{catalog}` | -| [Presto](/docs/databases/presto) | `pip install pyhive` | `presto://` | -| [SAP Hana](/docs/databases/hana) | 
`pip install hdbcli sqlalchemy-hana or pip install apache-superset[hana]` | `hana://{username}:{password}@{host}:{port}` | -| [StarRocks](/docs/databases/starrocks) | `pip install starrocks` | `starrocks://:@:/.` | -| [Snowflake](/docs/databases/snowflake) | `pip install snowflake-sqlalchemy` | `snowflake://{user}:{password}@{account}.{region}/{database}?role={role}&warehouse={warehouse}` | -| SQLite | No additional library needed | `sqlite://` | -| [SQL Server](/docs/databases/sql-server) | `pip install pymssql` | `mssql+pymssql://` | -| [Teradata](/docs/databases/teradata) | `pip install teradatasqlalchemy` | `teradatasql://{user}:{password}@{host}` | -| [TimescaleDB](/docs/databases/timescaledb) | `pip install psycopg2` | `postgresql://:@:/` | -| [Vertica](/docs/databases/vertica) | `pip install sqlalchemy-vertica-python` | `vertica+vertica_python://:@/` | -| [YugabyteDB](/docs/databases/yugabytedb) | `pip install psycopg2` | `postgresql://:@/` | +| Database | PyPI package | Connection String | +| --------------------------------------------------------- | ------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- | +| [Amazon Athena](/docs/databases/athena) | `pip install pyathena[pandas]` , `pip install PyAthenaJDBC` | `awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{ ` | +| [Amazon DynamoDB](/docs/databases/dynamodb) | `pip install pydynamodb` | `dynamodb://{access_key_id}:{secret_access_key}@dynamodb.{region_name}.amazonaws.com?connector=superset` | +| [Amazon Redshift](/docs/databases/redshift) | `pip install sqlalchemy-redshift` | ` redshift+psycopg2://:@:5439/` | +| [Apache Drill](/docs/databases/drill) | `pip install sqlalchemy-drill` | `drill+sadrill:// For JDBC drill+jdbc://` | +| [Apache Druid](/docs/databases/druid) | `pip install pydruid` | `druid://:@:/druid/v2/sql` | +| 
[Apache Hive](/docs/databases/hive) | `pip install pyhive` | `hive://hive@{hostname}:{port}/{database}` | +| [Apache Impala](/docs/databases/impala) | `pip install impyla` | `impala://{hostname}:{port}/{database}` | +| [Apache Kylin](/docs/databases/kylin) | `pip install kylinpy` | `kylin://:@:/?=&=` | +| [Apache Pinot](/docs/databases/pinot) | `pip install pinotdb` | `pinot://BROKER:5436/query?server=http://CONTROLLER:5983/` | +| [Apache Solr](/docs/databases/solr) | `pip install sqlalchemy-solr` | `solr://{username}:{password}@{hostname}:{port}/{server_path}/{collection}` | +| [Apache Spark SQL](/docs/databases/spark-sql) | `pip install pyhive` | `hive://hive@{hostname}:{port}/{database}` | +| [Ascend.io](/docs/databases/ascend) | `pip install impyla` | `ascend://{username}:{password}@{hostname}:{port}/{database}?auth_mechanism=PLAIN;use_ssl=true` | +| [Azure MS SQL](/docs/databases/sql-server) | `pip install pymssql` | `mssql+pymssql://UserName@presetSQL:TestPassword@presetSQL.database.windows.net:1433/TestSchema` | +| [Big Query](/docs/databases/bigquery) | `pip install sqlalchemy-bigquery` | `bigquery://{project_id}` | +| [ClickHouse](/docs/databases/clickhouse) | `pip install clickhouse-connect` | `clickhousedb://{username}:{password}@{hostname}:{port}/{database}` | +| [CockroachDB](/docs/databases/cockroachdb) | `pip install cockroachdb` | `cockroachdb://root@{hostname}:{port}/{database}?sslmode=disable` | +| [Dremio](/docs/databases/dremio) | `pip install sqlalchemy_dremio` | `dremio://user:pwd@host:31010/` | +| [Elasticsearch](/docs/databases/elasticsearch) | `pip install elasticsearch-dbapi` | `elasticsearch+http://{user}:{password}@{host}:9200/` | +| [Exasol](/docs/databases/exasol) | `pip install sqlalchemy-exasol` | `exa+pyodbc://{username}:{password}@{hostname}:{port}/my_schema?CONNECTIONLCALL=en_US.UTF-8&driver=EXAODBC` | +| [Google Sheets](/docs/databases/google-sheets) | `pip install shillelagh[gsheetsapi]` | `gsheets://` | +| 
[Firebolt](/docs/databases/firebolt) | `pip install firebolt-sqlalchemy` | `firebolt://{username}:{password}@{database} or firebolt://{username}:{password}@{database}/{engine_name}` | +| [Hologres](/docs/databases/hologres) | `pip install psycopg2` | `postgresql+psycopg2://:@/` | +| [IBM Db2](/docs/databases/ibm-db2) | `pip install ibm_db_sa` | `db2+ibm_db://` | +| [IBM Netezza Performance Server](/docs/databases/netezza) | `pip install nzalchemy` | `netezza+nzpy://:@/` | +| [MySQL](/docs/databases/mysql) | `pip install mysqlclient` | `mysql://:@/` | +| [Oracle](/docs/databases/oracle) | `pip install cx_Oracle` | `oracle://` | +| [PostgreSQL](/docs/databases/postgres) | `pip install psycopg2` | `postgresql://:@/` | +| [Trino](/docs/databases/trino) | `pip install trino` | `trino://{username}:{password}@{hostname}:{port}/{catalog}` | +| [Presto](/docs/databases/presto) | `pip install pyhive` | `presto://` | +| [SAP Hana](/docs/databases/hana) | `pip install hdbcli sqlalchemy-hana or pip install apache-superset[hana]` | `hana://{username}:{password}@{host}:{port}` | +| [StarRocks](/docs/databases/starrocks) | `pip install starrocks` | `starrocks://:@:/.` | +| [Snowflake](/docs/databases/snowflake) | `pip install snowflake-sqlalchemy` | `snowflake://{user}:{password}@{account}.{region}/{database}?role={role}&warehouse={warehouse}` | +| SQLite | No additional library needed | `sqlite://path/to/file.db?check_same_thread=false` | +| [SQL Server](/docs/databases/sql-server) | `pip install pymssql` | `mssql+pymssql://` | +| [Teradata](/docs/databases/teradata) | `pip install teradatasqlalchemy` | `teradatasql://{user}:{password}@{host}` | +| [TimescaleDB](/docs/databases/timescaledb) | `pip install psycopg2` | `postgresql://:@:/` | +| [Vertica](/docs/databases/vertica) | `pip install sqlalchemy-vertica-python` | `vertica+vertica_python://:@/` | +| [YugabyteDB](/docs/databases/yugabytedb) | `pip install psycopg2` | `postgresql://:@/` | + --- Note that many other databases 
 are supported, the main criteria being the existence of a functional
diff --git a/docs/docs/frequently-asked-questions.mdx b/docs/docs/frequently-asked-questions.mdx
index bbb94d617b986..79a0863b088dc 100644
--- a/docs/docs/frequently-asked-questions.mdx
+++ b/docs/docs/frequently-asked-questions.mdx
@@ -168,7 +168,7 @@ Another workaround is to change where superset stores the sqlite database by adding
 `superset_config.py`:
 
 ```
-SQLALCHEMY_DATABASE_URI = 'sqlite:////new/location/superset.db'
+SQLALCHEMY_DATABASE_URI = 'sqlite:////new/location/superset.db?check_same_thread=false'
 ```
 
 You can read more about customizing Superset using the configuration file
diff --git a/docs/docs/installation/configuring-superset.mdx b/docs/docs/installation/configuring-superset.mdx
index 9cb3aaefacc71..c6108d6f59c8f 100644
--- a/docs/docs/installation/configuring-superset.mdx
+++ b/docs/docs/installation/configuring-superset.mdx
@@ -32,7 +32,9 @@ SECRET_KEY = 'YOUR_OWN_RANDOM_GENERATED_SECRET_KEY'
 # superset metadata (slices, connections, tables, dashboards, ...).
 # Note that the connection information to connect to the datasources
 # you want to explore are managed directly in the web UI
-SQLALCHEMY_DATABASE_URI = 'sqlite:////path/to/superset.db'
+# The check_same_thread=false property ensures the sqlite client does not attempt
+# to enforce single-threaded access, which may be problematic in some edge cases
+SQLALCHEMY_DATABASE_URI = 'sqlite:////path/to/superset.db?check_same_thread=false'
 
 # Flask-WTF flag for CSRF
 WTF_CSRF_ENABLED = True
diff --git a/superset/config.py b/superset/config.py
index 27f78832d1e3b..73553fcc6c303 100644
--- a/superset/config.py
+++ b/superset/config.py
@@ -186,7 +186,10 @@ def _try_json_readsha(filepath: str, length: int) -> str | None:
 SECRET_KEY = os.environ.get("SUPERSET_SECRET_KEY") or CHANGE_ME_SECRET_KEY
 
 # The SQLAlchemy connection string.
-SQLALCHEMY_DATABASE_URI = "sqlite:///" + os.path.join(DATA_DIR, "superset.db")
+SQLALCHEMY_DATABASE_URI = (
+    f"""sqlite:///{os.path.join(DATA_DIR, "superset.db")}?check_same_thread=false"""
+)
+
 # SQLALCHEMY_DATABASE_URI = 'mysql://myapp@localhost/myapp'
 # SQLALCHEMY_DATABASE_URI = 'postgresql://root:password@localhost/myapp'
 
diff --git a/superset/db_engine_specs/base.py b/superset/db_engine_specs/base.py
index 5836e6163f8d9..6be3ab24b0c13 100644
--- a/superset/db_engine_specs/base.py
+++ b/superset/db_engine_specs/base.py
@@ -1053,6 +1053,24 @@ def handle_cursor(cls, cursor: Any, query: Query, session: Session) -> None:
         query object"""
         # TODO: Fix circular import error caused by importing sql_lab.Query
 
+    @classmethod
+    def execute_with_cursor(
+        cls, cursor: Any, sql: str, query: Query, session: Session
+    ) -> None:
+        """
+        Trigger execution of a query and handle the resulting cursor.
+
+        For most implementations this just makes calls to `execute` and
+        `handle_cursor` consecutively, but in some engines (e.g. Trino) we may
+        need to handle client limitations such as lack of async support and
+        perform a more complicated operation to get information from the cursor
+        in a timely manner and facilitate operations such as query stop
+        """
+        logger.debug("Query %d: Running query: %s", query.id, sql)
+        cls.execute(cursor, sql, async_=True)
+        logger.debug("Query %d: Handling cursor", query.id)
+        cls.handle_cursor(cursor, query, session)
+
     @classmethod
     def extract_error_message(cls, ex: Exception) -> str:
         return f"{cls.engine} error: {cls._extract_error_message(ex)}"
diff --git a/superset/db_engine_specs/trino.py b/superset/db_engine_specs/trino.py
index eff78c4fa4eb5..f758f1fadd1aa 100644
--- a/superset/db_engine_specs/trino.py
+++ b/superset/db_engine_specs/trino.py
@@ -17,6 +17,8 @@
 from __future__ import annotations
 
 import logging
+import threading
+import time
 from typing import Any, TYPE_CHECKING
 
 import simplejson as json
@@ -154,14 +156,21 @@ def get_tracking_url(cls, cursor: Cursor) -> str | None:
 
     @classmethod
     def handle_cursor(cls, cursor: Cursor, query: Query, session: Session) -> None:
-        if tracking_url := cls.get_tracking_url(cursor):
-            query.tracking_url = tracking_url
+        """
+        Handle a trino client cursor.
+
+        WARNING: if you execute a query, it will block until complete and you
+        will not be able to handle the cursor until complete. Use
+        `execute_with_cursor` instead, to handle this asynchronously.
+        """
 
         # Adds the executed query id to the extra payload so the query can be cancelled
-        query.set_extra_json_key(
-            key=QUERY_CANCEL_KEY,
-            value=(cancel_query_id := cursor.stats["queryId"]),
-        )
+        cancel_query_id = cursor.query_id
+        logger.debug("Query %d: queryId %s found in cursor", query.id, cancel_query_id)
+        query.set_extra_json_key(key=QUERY_CANCEL_KEY, value=cancel_query_id)
+
+        if tracking_url := cls.get_tracking_url(cursor):
+            query.tracking_url = tracking_url
 
         session.commit()
 
@@ -176,6 +185,51 @@ def handle_cursor(cls, cursor: Cursor, query: Query, session: Session) -> None:
 
         super().handle_cursor(cursor=cursor, query=query, session=session)
 
+    @classmethod
+    def execute_with_cursor(
+        cls, cursor: Any, sql: str, query: Query, session: Session
+    ) -> None:
+        """
+        Trigger execution of a query and handle the resulting cursor.
+
+        Trino's client blocks until the query is complete, so we need to run it
+        in another thread and invoke `handle_cursor` to poll for the query ID
+        to appear on the cursor in parallel.
+        """
+        execute_result: dict[str, Any] = {}
+
+        def _execute(results: dict[str, Any]) -> None:
+            logger.debug("Query %d: Running query: %s", query.id, sql)
+
+            # Pass result / exception information back to the parent thread
+            try:
+                cls.execute(cursor, sql)
+                results["complete"] = True
+            except Exception as ex:  # pylint: disable=broad-except
+                results["complete"] = True
+                results["error"] = ex
+
+        execute_thread = threading.Thread(target=_execute, args=(execute_result,))
+        execute_thread.start()
+
+        # Wait for a query ID to be available before handling the cursor, as
+        # it's required by that method; it may never become available on error.
+        while not cursor.query_id and not execute_result.get("complete"):
+            time.sleep(0.1)
+
+        logger.debug("Query %d: Handling cursor", query.id)
+        cls.handle_cursor(cursor, query, session)
+
+        # Block until the query completes; same behaviour as the client itself
+        logger.debug("Query %d: Waiting for query to complete", query.id)
+        while not execute_result.get("complete"):
+            time.sleep(0.5)
+
+        # Unfortunately we'll mangle the stack trace due to the thread, but
+        # throwing the original exception allows mapping database errors as normal
+        if err := execute_result.get("error"):
+            raise err
+
     @classmethod
     def prepare_cancel_query(cls, query: Query, session: Session) -> None:
         if QUERY_CANCEL_KEY not in query.extra:
diff --git a/superset/sql_lab.py b/superset/sql_lab.py
index afc682b10fbcf..ca157b324085d 100644
--- a/superset/sql_lab.py
+++ b/superset/sql_lab.py
@@ -191,7 +191,7 @@ def get_sql_results(  # pylint: disable=too-many-arguments
         return handle_query_error(ex, query, session)
 
 
-def execute_sql_statement(  # pylint: disable=too-many-arguments,too-many-statements
+def execute_sql_statement(  # pylint: disable=too-many-arguments
     sql_statement: str,
     query: Query,
     session: Session,
@@ -271,10 +271,7 @@ def execute_sql_statement(  # pylint: disable=too-many-arguments,too-many-statem
         )
         session.commit()
         with stats_timing("sqllab.query.time_executing_query", stats_logger):
-            logger.debug("Query %d: Running query: %s", query.id, sql)
-            db_engine_spec.execute(cursor, sql, async_=True)
-            logger.debug("Query %d: Handling cursor", query.id)
-            db_engine_spec.handle_cursor(cursor, query, session)
+            db_engine_spec.execute_with_cursor(cursor, sql, query, session)
 
         with stats_timing("sqllab.query.time_fetching_results", stats_logger):
             logger.debug(
diff --git a/tests/unit_tests/db_engine_specs/test_trino.py b/tests/unit_tests/db_engine_specs/test_trino.py
index 963953d18b48e..1b50a683a0841 100644
--- a/tests/unit_tests/db_engine_specs/test_trino.py
+++ b/tests/unit_tests/db_engine_specs/test_trino.py
@@ -352,7 +352,7 @@ def test_handle_cursor_early_cancel(
     query_id = "myQueryId"
 
     cursor_mock = engine_mock.return_value.__enter__.return_value
-    cursor_mock.stats = {"queryId": query_id}
+    cursor_mock.query_id = query_id
     session_mock = mocker.MagicMock()
 
     query = Query()
@@ -366,3 +366,32 @@
         assert cancel_query_mock.call_args[1]["cancel_query_id"] == query_id
     else:
         assert cancel_query_mock.call_args is None
+
+
+def test_execute_with_cursor_in_parallel(mocker: MockerFixture):
+    """Test that `execute_with_cursor` fetches query ID from the cursor"""
+    from superset.db_engine_specs.trino import TrinoEngineSpec
+
+    query_id = "myQueryId"
+
+    mock_cursor = mocker.MagicMock()
+    mock_cursor.query_id = None
+
+    mock_query = mocker.MagicMock()
+    mock_session = mocker.MagicMock()
+
+    def _mock_execute(*args, **kwargs):
+        mock_cursor.query_id = query_id
+
+    mock_cursor.execute.side_effect = _mock_execute
+
+    TrinoEngineSpec.execute_with_cursor(
+        cursor=mock_cursor,
+        sql="SELECT 1 FROM foo",
+        query=mock_query,
+        session=mock_session,
+    )
+
+    mock_query.set_extra_json_key.assert_called_once_with(
+        key=QUERY_CANCEL_KEY, value=query_id
+    )
diff --git a/tests/unit_tests/sql_lab_test.py b/tests/unit_tests/sql_lab_test.py
index 29f45eab682a0..edc1fd2ec4a5d 100644
--- a/tests/unit_tests/sql_lab_test.py
+++ b/tests/unit_tests/sql_lab_test.py
@@ -55,8 +55,8 @@ def test_execute_sql_statement(mocker: MockerFixture, app: None) -> None:
     )
 
     database.apply_limit_to_sql.assert_called_with("SELECT 42 AS answer", 2, force=True)
-    db_engine_spec.execute.assert_called_with(
-        cursor, "SELECT 42 AS answer LIMIT 2", async_=True
+    db_engine_spec.execute_with_cursor.assert_called_with(
+        cursor, "SELECT 42 AS answer LIMIT 2", query, session
     )
     SupersetResultSet.assert_called_with([(42,)], cursor.description, db_engine_spec)
 
@@ -106,10 +106,8 @@ def test_execute_sql_statement_with_rls(
         101,
         force=True,
     )
-    db_engine_spec.execute.assert_called_with(
-        cursor,
-        "SELECT * FROM sales WHERE organization_id=42 LIMIT 101",
-        async_=True,
+    db_engine_spec.execute_with_cursor.assert_called_with(
+        cursor, "SELECT * FROM sales WHERE organization_id=42 LIMIT 101", query, session
     )
     SupersetResultSet.assert_called_with([(42,)], cursor.description, db_engine_spec)

From 315e75811f5dbbc33ac1220bd95277cfc89465ed Mon Sep 17 00:00:00 2001
From: Igor Khrol
Date: Thu, 19 Oct 2023 21:03:44 +0300
Subject: [PATCH 02/38] fix: remove unnecessary redirect (#25679)

(cherry picked from commit da42bf2dbb82a40d5ffcc9bfdc46584cb36af616)
---
 .../src/SqlLab/components/ResultSet/ResultSet.test.tsx    | 2 +-
 .../components/SaveDatasetModal/SaveDatasetModal.test.tsx | 2 +-
 .../src/SqlLab/components/SaveDatasetModal/index.tsx      | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/superset-frontend/src/SqlLab/components/ResultSet/ResultSet.test.tsx b/superset-frontend/src/SqlLab/components/ResultSet/ResultSet.test.tsx
index 5e2a0455b5507..d823c586f799e 100644
--- a/superset-frontend/src/SqlLab/components/ResultSet/ResultSet.test.tsx
+++ b/superset-frontend/src/SqlLab/components/ResultSet/ResultSet.test.tsx
@@ -95,7 +95,7 @@ const asyncRefetchResultsTableProps = {
     resultsKey: 'async results key',
   },
 };
-fetchMock.get('glob:*/api/v1/dataset?*', { result: [] });
+fetchMock.get('glob:*/api/v1/dataset/?*', { result: [] });
 
 const middlewares = [thunk];
 const mockStore = configureStore(middlewares);
diff --git a/superset-frontend/src/SqlLab/components/SaveDatasetModal/SaveDatasetModal.test.tsx b/superset-frontend/src/SqlLab/components/SaveDatasetModal/SaveDatasetModal.test.tsx
index 4cac5c6204640..8568bf20809e5 100644
--- a/superset-frontend/src/SqlLab/components/SaveDatasetModal/SaveDatasetModal.test.tsx
+++ b/superset-frontend/src/SqlLab/components/SaveDatasetModal/SaveDatasetModal.test.tsx
@@ -39,7 +39,7 @@ const mockedProps = {
   datasource: testQuery,
 };
 
-fetchMock.get('glob:*/api/v1/dataset?*', {
+fetchMock.get('glob:*/api/v1/dataset/?*', { result: mockdatasets, dataset_count: 3, }); diff --git a/superset-frontend/src/SqlLab/components/SaveDatasetModal/index.tsx b/superset-frontend/src/SqlLab/components/SaveDatasetModal/index.tsx index eba873c83b4b8..1932798138abf 100644 --- a/superset-frontend/src/SqlLab/components/SaveDatasetModal/index.tsx +++ b/superset-frontend/src/SqlLab/components/SaveDatasetModal/index.tsx @@ -257,7 +257,7 @@ export const SaveDatasetModal = ({ }); return SupersetClient.get({ - endpoint: `/api/v1/dataset?q=${queryParams}`, + endpoint: `/api/v1/dataset/?q=${queryParams}`, }).then(response => ({ data: response.json.result.map( (r: { table_name: string; id: number; owners: [DatasetOwner] }) => ({ From 8483ab6c42f90ae7c3a62661c3a2a84858af238d Mon Sep 17 00:00:00 2001 From: Stepan <66589759+Always-prog@users.noreply.github.com> Date: Fri, 20 Oct 2023 10:32:14 +0300 Subject: [PATCH 03/38] fix(chore): dashboard requests to database equal the number of slices it has (#24709) (cherry picked from commit 75a74313799b70b636c88cf421fd4d1118cc8a61) --- superset/daos/dashboard.py | 2 -- 1 file changed, 2 deletions(-) diff --git a/superset/daos/dashboard.py b/superset/daos/dashboard.py index f9544aa53d5da..2c03711f25c5f 100644 --- a/superset/daos/dashboard.py +++ b/superset/daos/dashboard.py @@ -68,8 +68,6 @@ def get_by_id_or_slug(cls, id_or_slug: int | str) -> Dashboard: query = ( db.session.query(Dashboard) .filter(id_or_slug_filter(id_or_slug)) - .outerjoin(Slice, Dashboard.slices) - .outerjoin(Slice.table) .outerjoin(Dashboard.owners) .outerjoin(Dashboard.roles) ) From 8da27eda4059202438cbfaea511f71dea828afdf Mon Sep 17 00:00:00 2001 From: Daniel Vaz Gaspar Date: Fri, 20 Oct 2023 11:33:40 +0100 Subject: [PATCH 04/38] fix: bump to FAB 4.3.9 remove CSP exception (#25712) (cherry picked from commit 8fb0c8da56f572c086126cc5ca16676ce74e7a3c) --- requirements/base.txt | 2 +- setup.py | 2 +- superset/config.py | 2 -- 3 files changed, 2 insertions(+), 4 
deletions(-) diff --git a/requirements/base.txt b/requirements/base.txt index d6ee2e6a6b9ef..95e691227221e 100644 --- a/requirements/base.txt +++ b/requirements/base.txt @@ -88,7 +88,7 @@ flask==2.2.5 # flask-migrate # flask-sqlalchemy # flask-wtf -flask-appbuilder==4.3.7 +flask-appbuilder==4.3.9 # via apache-superset flask-babel==1.0.0 # via flask-appbuilder diff --git a/setup.py b/setup.py index 3cb0c144b2f58..87a721d21b1ab 100644 --- a/setup.py +++ b/setup.py @@ -80,7 +80,7 @@ def get_git_sha() -> str: "cryptography>=39.0.1, <40", "deprecation>=2.1.0, <2.2.0", "flask>=2.2.5, <3.0.0", - "flask-appbuilder>=4.3.7, <5.0.0", + "flask-appbuilder>=4.3.9, <5.0.0", "flask-caching>=1.11.1, <2.0", "flask-compress>=1.13, <2.0", "flask-talisman>=1.0.0, <2.0", diff --git a/superset/config.py b/superset/config.py index 73553fcc6c303..e15c7bf990428 100644 --- a/superset/config.py +++ b/superset/config.py @@ -1421,7 +1421,6 @@ def EMAIL_HEADER_MUTATOR( # pylint: disable=invalid-name,unused-argument "style-src": [ "'self'", "'unsafe-inline'", - "https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui.css", ], "script-src": ["'self'", "'strict-dynamic'"], }, @@ -1443,7 +1442,6 @@ def EMAIL_HEADER_MUTATOR( # pylint: disable=invalid-name,unused-argument "style-src": [ "'self'", "'unsafe-inline'", - "https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui.css", ], "script-src": ["'self'", "'unsafe-inline'", "'unsafe-eval'"], }, From 9b31d97ac337bc02f4bfe5c3e5f091da685567a1 Mon Sep 17 00:00:00 2001 From: Ross Mabbett <92495987+rtexelm@users.noreply.github.com> Date: Mon, 23 Oct 2023 13:51:48 -0300 Subject: [PATCH 05/38] fix(horizontal filter label): show full tooltip with ellipsis (#25732) (cherry picked from commit e4173d90c8ccef58a87ec7ac00b57c1ec9317c11) --- .../FilterBar/FilterControls/FilterControl.tsx | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/superset-frontend/src/dashboard/components/nativeFilters/FilterBar/FilterControls/FilterControl.tsx 
b/superset-frontend/src/dashboard/components/nativeFilters/FilterBar/FilterControls/FilterControl.tsx index 515fed1907bd0..37739e5370686 100644 --- a/superset-frontend/src/dashboard/components/nativeFilters/FilterBar/FilterControls/FilterControl.tsx +++ b/superset-frontend/src/dashboard/components/nativeFilters/FilterBar/FilterControls/FilterControl.tsx @@ -112,6 +112,7 @@ const HorizontalOverflowFilterControlContainer = styled( const VerticalFormItem = styled(StyledFormItem)` .ant-form-item-label { + overflow: visible; label.ant-form-item-required:not(.ant-form-item-required-mark-optional) { &::after { display: none; @@ -127,6 +128,7 @@ const HorizontalFormItem = styled(StyledFormItem)` } .ant-form-item-label { + overflow: visible; padding-bottom: 0; margin-right: ${({ theme }) => theme.gridUnit * 2}px; label.ant-form-item-required:not(.ant-form-item-required-mark-optional) { @@ -200,10 +202,11 @@ const DescriptionToolTip = ({ description }: { description: string }) => ( placement="right" overlayInnerStyle={{ display: '-webkit-box', - overflow: 'hidden', - WebkitLineClamp: 20, + WebkitLineClamp: 10, WebkitBoxOrient: 'vertical', + overflow: 'hidden', textOverflow: 'ellipsis', + whiteSpace: 'normal', }} getPopupContainer={trigger => trigger.parentElement as HTMLElement} > From fd2c2725d4da7261083f88362d2b1084241678fe Mon Sep 17 00:00:00 2001 From: Geido <60598000+geido@users.noreply.github.com> Date: Wed, 25 Oct 2023 15:39:49 +0300 Subject: [PATCH 06/38] fix: Revert "fix(Charts): Set max row limit + removed the option to use an empty row limit value" (#25753) (cherry picked from commit e2fe96778887d203a852cf09def151ff024cfaf7) --- .../src/shared-controls/sharedControls.tsx | 9 +---- .../superset-ui-core/src/validator/index.ts | 1 - .../src/validator/validateMaxValue.ts | 8 ---- .../test/validator/validateMaxValue.test.ts | 38 ------------------- 4 files changed, 1 insertion(+), 55 deletions(-) delete mode 100644 
superset-frontend/packages/superset-ui-core/src/validator/validateMaxValue.ts delete mode 100644 superset-frontend/packages/superset-ui-core/test/validator/validateMaxValue.test.ts diff --git a/superset-frontend/packages/superset-ui-chart-controls/src/shared-controls/sharedControls.tsx b/superset-frontend/packages/superset-ui-chart-controls/src/shared-controls/sharedControls.tsx index 69fa8a6864909..abf5153bb0d51 100644 --- a/superset-frontend/packages/superset-ui-chart-controls/src/shared-controls/sharedControls.tsx +++ b/superset-frontend/packages/superset-ui-chart-controls/src/shared-controls/sharedControls.tsx @@ -47,8 +47,6 @@ import { isDefined, hasGenericChartAxes, NO_TIME_RANGE, - validateNonEmpty, - validateMaxValue, } from '@superset-ui/core'; import { @@ -247,12 +245,7 @@ const row_limit: SharedControlConfig<'SelectControl'> = { type: 'SelectControl', freeForm: true, label: t('Row limit'), - clearable: false, - validators: [ - validateNonEmpty, - legacyValidateInteger, - v => validateMaxValue(v, 100000), - ], + validators: [legacyValidateInteger], default: 10000, choices: formatSelectOptions(ROW_LIMIT_OPTIONS), description: t('Limits the number of rows that get displayed.'), diff --git a/superset-frontend/packages/superset-ui-core/src/validator/index.ts b/superset-frontend/packages/superset-ui-core/src/validator/index.ts index fb37328c02290..532efcc959116 100644 --- a/superset-frontend/packages/superset-ui-core/src/validator/index.ts +++ b/superset-frontend/packages/superset-ui-core/src/validator/index.ts @@ -22,4 +22,3 @@ export { default as legacyValidateNumber } from './legacyValidateNumber'; export { default as validateInteger } from './validateInteger'; export { default as validateNumber } from './validateNumber'; export { default as validateNonEmpty } from './validateNonEmpty'; -export { default as validateMaxValue } from './validateMaxValue'; diff --git a/superset-frontend/packages/superset-ui-core/src/validator/validateMaxValue.ts 
b/superset-frontend/packages/superset-ui-core/src/validator/validateMaxValue.ts deleted file mode 100644 index 24c1da1c79dde..0000000000000 --- a/superset-frontend/packages/superset-ui-core/src/validator/validateMaxValue.ts +++ /dev/null @@ -1,8 +0,0 @@ -import { t } from '../translation'; - -export default function validateMaxValue(v: unknown, max: Number) { - if (Number(v) > +max) { - return t('Value cannot exceed %s', max); - } - return false; -} diff --git a/superset-frontend/packages/superset-ui-core/test/validator/validateMaxValue.test.ts b/superset-frontend/packages/superset-ui-core/test/validator/validateMaxValue.test.ts deleted file mode 100644 index 70f3d332c52e3..0000000000000 --- a/superset-frontend/packages/superset-ui-core/test/validator/validateMaxValue.test.ts +++ /dev/null @@ -1,38 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -import { validateMaxValue } from '@superset-ui/core'; -import './setup'; - -describe('validateInteger()', () => { - it('returns the warning message if invalid', () => { - expect(validateMaxValue(10.1, 10)).toBeTruthy(); - expect(validateMaxValue(1, 0)).toBeTruthy(); - expect(validateMaxValue('2', 1)).toBeTruthy(); - }); - it('returns false if the input is valid', () => { - expect(validateMaxValue(0, 1)).toBeFalsy(); - expect(validateMaxValue(10, 10)).toBeFalsy(); - expect(validateMaxValue(undefined, 1)).toBeFalsy(); - expect(validateMaxValue(NaN, NaN)).toBeFalsy(); - expect(validateMaxValue(null, 1)).toBeFalsy(); - expect(validateMaxValue('1', 1)).toBeFalsy(); - expect(validateMaxValue('a', 1)).toBeFalsy(); - }); -}); From 01d3ac20c7204007d66a240e3311fa19ea8455fd Mon Sep 17 00:00:00 2001 From: Beto Dealmeida Date: Wed, 25 Oct 2023 16:49:32 -0400 Subject: [PATCH 07/38] fix: dataset update uniqueness (#25756) (cherry picked from commit c7f8d11a7eca33b7eed187f4e757fd7b9f45f9be) --- superset/daos/dataset.py | 6 +- superset/datasets/commands/update.py | 5 +- tests/unit_tests/dao/dataset_test.py | 83 ++++++++++++++++++++++++++++ 3 files changed, 92 insertions(+), 2 deletions(-) create mode 100644 tests/unit_tests/dao/dataset_test.py diff --git a/superset/daos/dataset.py b/superset/daos/dataset.py index 716fcd9a057a6..0b6c4f6271712 100644 --- a/superset/daos/dataset.py +++ b/superset/daos/dataset.py @@ -100,11 +100,15 @@ def validate_uniqueness( @staticmethod def validate_update_uniqueness( - database_id: int, dataset_id: int, name: str + database_id: int, + schema: str | None, + dataset_id: int, + name: str, ) -> bool: dataset_query = db.session.query(SqlaTable).filter( SqlaTable.table_name == name, SqlaTable.database_id == database_id, + SqlaTable.schema == schema, SqlaTable.id != dataset_id, ) return not db.session.query(dataset_query.exists()).scalar() diff --git a/superset/datasets/commands/update.py b/superset/datasets/commands/update.py index 
a38439fb7f235..dfa3a3dcf85c7 100644 --- a/superset/datasets/commands/update.py +++ b/superset/datasets/commands/update.py @@ -89,7 +89,10 @@ def validate(self) -> None: table_name = self._properties.get("table_name", None) # Validate uniqueness if not DatasetDAO.validate_update_uniqueness( - self._model.database_id, self._model_id, table_name + self._model.database_id, + self._model.schema, + self._model_id, + table_name, ): exceptions.append(DatasetExistsValidationError(table_name)) # Validate/Populate database not allowed to change diff --git a/tests/unit_tests/dao/dataset_test.py b/tests/unit_tests/dao/dataset_test.py new file mode 100644 index 0000000000000..288f68cae026f --- /dev/null +++ b/tests/unit_tests/dao/dataset_test.py @@ -0,0 +1,83 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +from sqlalchemy.orm.session import Session + +from superset.daos.dataset import DatasetDAO + + +def test_validate_update_uniqueness(session: Session) -> None: + """ + Test the `validate_update_uniqueness` static method. 
+ + In particular, allow datasets with the same name in the same database as long as they + are in different schemas + """ + from superset.connectors.sqla.models import SqlaTable + from superset.models.core import Database + + SqlaTable.metadata.create_all(session.get_bind()) + + database = Database( + database_name="my_db", + sqlalchemy_uri="sqlite://", + ) + dataset1 = SqlaTable( + table_name="my_dataset", + schema="main", + database=database, + ) + dataset2 = SqlaTable( + table_name="my_dataset", + schema="dev", + database=database, + ) + session.add_all([database, dataset1, dataset2]) + session.flush() + + # same table name, different schema + assert ( + DatasetDAO.validate_update_uniqueness( + database_id=database.id, + schema=dataset1.schema, + dataset_id=dataset1.id, + name=dataset1.table_name, + ) + is True + ) + + # duplicate schema and table name + assert ( + DatasetDAO.validate_update_uniqueness( + database_id=database.id, + schema=dataset2.schema, + dataset_id=dataset1.id, + name=dataset1.table_name, + ) + is False + ) + + # no schema + assert ( + DatasetDAO.validate_update_uniqueness( + database_id=database.id, + schema=None, + dataset_id=dataset1.id, + name=dataset1.table_name, + ) + is True + ) From fbe7e6265dec11da8a653e89e1ac9c2f9416a553 Mon Sep 17 00:00:00 2001 From: "JUST.in DO IT" Date: Thu, 26 Oct 2023 12:44:41 -0700 Subject: [PATCH 08/38] fix(sqllab): slow pop datasource query (#25741) (cherry picked from commit 2a2bc82a8bbf900c825ba44e8b0f3f320b5962e0) --- superset-frontend/src/SqlLab/actions/sqlLab.js | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/superset-frontend/src/SqlLab/actions/sqlLab.js b/superset-frontend/src/SqlLab/actions/sqlLab.js index fbfba6783e8e4..d25689e9468f2 100644 --- a/superset-frontend/src/SqlLab/actions/sqlLab.js +++ b/superset-frontend/src/SqlLab/actions/sqlLab.js @@ -1384,8 +1384,14 @@ export function popDatasourceQuery(datasourceKey, sql) { return function (dispatch) { const QUERY_TEXT = 
t('Query'); const datasetId = datasourceKey.split('__')[0]; + + const queryParams = rison.encode({ + keys: ['none'], + columns: ['name', 'schema', 'database.id', 'select_star'], + }); + return SupersetClient.get({ - endpoint: `/api/v1/dataset/${datasetId}?q=(keys:!(none))`, + endpoint: `/api/v1/dataset/${datasetId}?q=${queryParams}`, }) .then(({ json }) => dispatch( From 2f468900c87c6d384d928815cf85e572215b81c5 Mon Sep 17 00:00:00 2001 From: Elizabeth Thompson Date: Fri, 27 Oct 2023 11:04:33 -0700 Subject: [PATCH 09/38] fix: allow for backward compatible errors (#25640) --- .../Datasource/DatasourceEditor.jsx | 2 +- .../Datasource/DatasourceModal.test.jsx | 156 ++++++++++-------- .../components/Datasource/DatasourceModal.tsx | 26 ++- .../src/components/ErrorMessage/types.ts | 2 +- superset-frontend/src/utils/errorMessages.ts | 1 + 5 files changed, 115 insertions(+), 72 deletions(-) diff --git a/superset-frontend/src/components/Datasource/DatasourceEditor.jsx b/superset-frontend/src/components/Datasource/DatasourceEditor.jsx index ffbba4c7db1f8..195545a2f6c39 100644 --- a/superset-frontend/src/components/Datasource/DatasourceEditor.jsx +++ b/superset-frontend/src/components/Datasource/DatasourceEditor.jsx @@ -1386,7 +1386,7 @@ class DatasourceEditor extends React.PureComponent { const { theme } = this.props; return ( - + {this.renderErrors()} ({ marginBottom: theme.gridUnit * 4 })} diff --git a/superset-frontend/src/components/Datasource/DatasourceModal.test.jsx b/superset-frontend/src/components/Datasource/DatasourceModal.test.jsx index 5bcb705b683d4..6d991f24a092e 100644 --- a/superset-frontend/src/components/Datasource/DatasourceModal.test.jsx +++ b/superset-frontend/src/components/Datasource/DatasourceModal.test.jsx @@ -18,30 +18,35 @@ */ import React from 'react'; import { act } from 'react-dom/test-utils'; -import { mount } from 'enzyme'; -import { Provider } from 'react-redux'; +import { + render, + screen, + waitFor, + fireEvent, + cleanup, +} from 
'@testing-library/react'; import fetchMock from 'fetch-mock'; +import { Provider } from 'react-redux'; import sinon from 'sinon'; -import { supersetTheme, ThemeProvider } from '@superset-ui/core'; - -import waitForComponentToPaint from 'spec/helpers/waitForComponentToPaint'; +import { + supersetTheme, + ThemeProvider, + SupersetClient, +} from '@superset-ui/core'; import { defaultStore as store } from 'spec/helpers/testing-library'; -import Modal from 'src/components/Modal'; import { DatasourceModal } from 'src/components/Datasource'; -import DatasourceEditor from 'src/components/Datasource/DatasourceEditor'; import * as uiCore from '@superset-ui/core'; import mockDatasource from 'spec/fixtures/mockDatasource'; -import { api } from 'src/hooks/apiResources/queryApi'; - -const datasource = mockDatasource['7__table']; +// Define your constants here const SAVE_ENDPOINT = 'glob:*/api/v1/dataset/7'; const SAVE_PAYLOAD = { new: 'data' }; const SAVE_DATASOURCE_ENDPOINT = 'glob:*/api/v1/dataset/7'; const GET_DATASOURCE_ENDPOINT = SAVE_DATASOURCE_ENDPOINT; +const GET_DATABASE_ENDPOINT = 'glob:*/api/v1/database/?q=*'; const mockedProps = { - datasource, + datasource: mockDatasource['7__table'], addSuccessToast: () => {}, addDangerToast: () => {}, onChange: () => {}, @@ -50,80 +55,101 @@ const mockedProps = { onDatasourceSave: sinon.spy(), }; -async function mountAndWait(props = mockedProps) { - const mounted = mount( +let container; +let isFeatureEnabledMock; + +async function renderAndWait(props = mockedProps) { + const { container: renderedContainer } = render( - + + + , - { - wrappingComponent: ThemeProvider, - wrappingComponentProps: { theme: supersetTheme }, - }, ); - await waitForComponentToPaint(mounted); - return mounted; + container = renderedContainer; } +beforeEach(() => { + fetchMock.reset(); + cleanup(); + isFeatureEnabledMock = jest.spyOn(uiCore, 'isFeatureEnabled'); + renderAndWait(); + fetchMock.post(SAVE_ENDPOINT, SAVE_PAYLOAD); + 
fetchMock.put(SAVE_DATASOURCE_ENDPOINT, {}); + fetchMock.get(GET_DATASOURCE_ENDPOINT, { result: {} }); + fetchMock.get(GET_DATABASE_ENDPOINT, { result: [] }); +}); + +afterEach(() => { + isFeatureEnabledMock.mockRestore(); +}); + describe('DatasourceModal', () => { - let wrapper; - let isFeatureEnabledMock; - beforeEach(async () => { - isFeatureEnabledMock = jest.spyOn(uiCore, 'isFeatureEnabled'); - fetchMock.reset(); - wrapper = await mountAndWait(); + it('renders', async () => { + expect(container).toBeDefined(); }); - afterAll(() => { - isFeatureEnabledMock.restore(); - act(() => { - store.dispatch(api.util.resetApiState()); - }); + it('renders the component', () => { + expect(screen.getByText('Edit Dataset')).toBeInTheDocument(); }); - it('renders', () => { - expect(wrapper.find(DatasourceModal)).toExist(); + it('renders a Modal', async () => { + expect(screen.getByRole('dialog')).toBeInTheDocument(); }); - it('renders a Modal', () => { - expect(wrapper.find(Modal)).toExist(); + it('renders a DatasourceEditor', async () => { + expect(screen.getByTestId('datasource-editor')).toBeInTheDocument(); }); - it('renders a DatasourceEditor', () => { - expect(wrapper.find(DatasourceEditor)).toExist(); + it('renders a legacy data source btn', () => { + const button = screen.getByTestId('datasource-modal-legacy-edit'); + expect(button).toBeInTheDocument(); }); - it('saves on confirm', async () => { - const callsP = fetchMock.post(SAVE_ENDPOINT, SAVE_PAYLOAD); - fetchMock.put(SAVE_DATASOURCE_ENDPOINT, {}); - fetchMock.get(GET_DATASOURCE_ENDPOINT, {}); - act(() => { - wrapper - .find('button[data-test="datasource-modal-save"]') - .props() - .onClick(); + it('disables the save button when the datasource is managed externally', () => { + // the render is currently in a before operation, so it needs to be cleaned up + // we could alternatively move all the renders back into the tests or find a better + // way to automatically render but still allow to pass in props with the 
tests + cleanup(); + + renderAndWait({ + ...mockedProps, + datasource: { ...mockedProps.datasource, is_managed_externally: true }, }); - await waitForComponentToPaint(wrapper); - act(() => { - const okButton = wrapper.find( - '.ant-modal-confirm .ant-modal-confirm-btns .ant-btn-primary', - ); - okButton.simulate('click'); + const saveButton = screen.getByTestId('datasource-modal-save'); + expect(saveButton).toBeDisabled(); + }); + + it('calls the onDatasourceSave function when the save button is clicked', async () => { + cleanup(); + const onDatasourceSave = jest.fn(); + + renderAndWait({ ...mockedProps, onDatasourceSave }); + const saveButton = screen.getByTestId('datasource-modal-save'); + await act(async () => { + fireEvent.click(saveButton); + const okButton = await screen.findByRole('button', { name: 'OK' }); + okButton.click(); + }); + await waitFor(() => { + expect(onDatasourceSave).toHaveBeenCalled(); }); - await waitForComponentToPaint(wrapper); - // one call to PUT, then one to GET - const expected = [ - 'http://localhost/api/v1/dataset/7', - 'http://localhost/api/v1/dataset/7', - ]; - expect(callsP._calls.map(call => call[0])).toEqual( - expected, - ); /* eslint no-underscore-dangle: 0 */ }); - it('renders a legacy data source btn', () => { - expect( - wrapper.find('button[data-test="datasource-modal-legacy-edit"]'), - ).toExist(); + it.only('should render error dialog', async () => { + jest + .spyOn(SupersetClient, 'put') + .mockRejectedValue(new Error('Something went wrong')); + await act(async () => { + const saveButton = screen.getByTestId('datasource-modal-save'); + fireEvent.click(saveButton); + const okButton = await screen.findByRole('button', { name: 'OK' }); + okButton.click(); + }); + await act(async () => { + const errorTitle = await screen.findByText('Error saving dataset'); + expect(errorTitle).toBeInTheDocument(); + }); }); }); diff --git a/superset-frontend/src/components/Datasource/DatasourceModal.tsx 
b/superset-frontend/src/components/Datasource/DatasourceModal.tsx index f9c40c47ba02e..031609e09a480 100644 --- a/superset-frontend/src/components/Datasource/DatasourceModal.tsx +++ b/superset-frontend/src/components/Datasource/DatasourceModal.tsx @@ -28,12 +28,13 @@ import { SupersetClient, t, } from '@superset-ui/core'; - import Modal from 'src/components/Modal'; import AsyncEsmComponent from 'src/components/AsyncEsmComponent'; -import { getClientErrorObject } from 'src/utils/getClientErrorObject'; +import { SupersetError } from 'src/components/ErrorMessage/types'; +import ErrorMessageWithStackTrace from 'src/components/ErrorMessage/ErrorMessageWithStackTrace'; import withToasts from 'src/components/MessageToasts/withToasts'; import { useSelector } from 'react-redux'; +import { getClientErrorObject } from 'src/utils/getClientErrorObject'; const DatasourceEditor = AsyncEsmComponent(() => import('./DatasourceEditor')); @@ -202,11 +203,26 @@ const DatasourceModal: FunctionComponent = ({ }) .catch(response => { setIsSaving(false); - getClientErrorObject(response).then(({ error }) => { + getClientErrorObject(response).then(error => { + let errorResponse: SupersetError | undefined; + let errorText: string | undefined; + // sip-40 error response + if (error?.errors?.length) { + errorResponse = error.errors[0]; + } else if (typeof error.error === 'string') { + // backward compatible with old error messages + errorText = error.error; + } modal.error({ - title: t('Error'), - content: error || t('An error has occurred'), + title: t('Error saving dataset'), okButtonProps: { danger: true, className: 'btn-danger' }, + content: ( + + ), }); }); }); diff --git a/superset-frontend/src/components/ErrorMessage/types.ts b/superset-frontend/src/components/ErrorMessage/types.ts index d3fe5bfdf7aff..4375a9dec1cfc 100644 --- a/superset-frontend/src/components/ErrorMessage/types.ts +++ b/superset-frontend/src/components/ErrorMessage/types.ts @@ -88,7 +88,7 @@ export type ErrorType = 
ValueOf; // Keep in sync with superset/views/errors.py export type ErrorLevel = 'info' | 'warning' | 'error'; -export type ErrorSource = 'dashboard' | 'explore' | 'sqllab'; +export type ErrorSource = 'dashboard' | 'explore' | 'sqllab' | 'crud'; export type SupersetError | null> = { error_type: ErrorType; diff --git a/superset-frontend/src/utils/errorMessages.ts b/superset-frontend/src/utils/errorMessages.ts index 16a04105c4c81..d5bfbdc17b80f 100644 --- a/superset-frontend/src/utils/errorMessages.ts +++ b/superset-frontend/src/utils/errorMessages.ts @@ -16,6 +16,7 @@ * specific language governing permissions and limitations * under the License. */ + // Error messages used in many places across applications const COMMON_ERR_MESSAGES = { SESSION_TIMED_OUT: From 1d403dab9822a8cee6108669c53e53fad881c751 Mon Sep 17 00:00:00 2001 From: Beto Dealmeida Date: Mon, 30 Oct 2023 09:50:44 -0400 Subject: [PATCH 10/38] fix: DB-specific quoting in Jinja macro (#25779) (cherry picked from commit 5659c87ed2da1ebafe3578cac9c3c52aeb256c5d) --- superset/jinja_context.py | 45 ++++++++++++++++++-------- tests/unit_tests/jinja_context_test.py | 9 ++++-- 2 files changed, 38 insertions(+), 16 deletions(-) diff --git a/superset/jinja_context.py b/superset/jinja_context.py index 4bb0b91a4e3db..a736b9278ec18 100644 --- a/superset/jinja_context.py +++ b/superset/jinja_context.py @@ -25,6 +25,7 @@ from jinja2 import DebugUndefined from jinja2.sandbox import SandboxedEnvironment from sqlalchemy.engine.interfaces import Dialect +from sqlalchemy.sql.expression import bindparam from sqlalchemy.types import String from typing_extensions import TypedDict @@ -397,23 +398,39 @@ def validate_template_context( return validate_context_types(context) -def where_in(values: list[Any], mark: str = "'") -> str: - """ - Given a list of values, build a parenthesis list suitable for an IN expression. 
+class WhereInMacro: # pylint: disable=too-few-public-methods + def __init__(self, dialect: Dialect): + self.dialect = dialect - >>> where_in([1, "b", 3]) - (1, 'b', 3) + def __call__(self, values: list[Any], mark: Optional[str] = None) -> str: + """ + Given a list of values, build a parenthesis list suitable for an IN expression. - """ + >>> from sqlalchemy.dialects import mysql + >>> where_in = WhereInMacro(dialect=mysql.dialect()) + >>> where_in([1, "Joe's", 3]) + (1, 'Joe''s', 3) - def quote(value: Any) -> str: - if isinstance(value, str): - value = value.replace(mark, mark * 2) - return f"{mark}{value}{mark}" - return str(value) + """ + binds = [bindparam(f"value_{i}", value) for i, value in enumerate(values)] + string_representations = [ + str( + bind.compile( + dialect=self.dialect, compile_kwargs={"literal_binds": True} + ) + ) + for bind in binds + ] + joined_values = ", ".join(string_representations) + result = f"({joined_values})" + + if mark: + result += ( + "\n-- WARNING: the `mark` parameter was removed from the `where_in` " + "macro for security reasons\n" + ) - joined_values = ", ".join(quote(value) for value in values) - return f"({joined_values})" + return result class BaseTemplateProcessor: @@ -449,7 +466,7 @@ def __init__( self.set_context(**kwargs) # custom filters - self._env.filters["where_in"] = where_in + self._env.filters["where_in"] = WhereInMacro(database.get_dialect()) def set_context(self, **kwargs: Any) -> None: self._context.update(kwargs) diff --git a/tests/unit_tests/jinja_context_test.py b/tests/unit_tests/jinja_context_test.py index fe4b144d2fd7a..114f046300169 100644 --- a/tests/unit_tests/jinja_context_test.py +++ b/tests/unit_tests/jinja_context_test.py @@ -20,17 +20,22 @@ import pytest from pytest_mock import MockFixture +from sqlalchemy.dialects import mysql from superset.datasets.commands.exceptions import DatasetNotFoundError -from superset.jinja_context import dataset_macro, where_in +from superset.jinja_context import 
dataset_macro, WhereInMacro def test_where_in() -> None: """ Test the ``where_in`` Jinja2 filter. """ + where_in = WhereInMacro(mysql.dialect()) assert where_in([1, "b", 3]) == "(1, 'b', 3)" - assert where_in([1, "b", 3], '"') == '(1, "b", 3)' + assert where_in([1, "b", 3], '"') == ( + "(1, 'b', 3)\n-- WARNING: the `mark` parameter was removed from the " + "`where_in` macro for security reasons\n" + ) assert where_in(["O'Malley's"]) == "('O''Malley''s')" From c216b3efdfa8a7a8ac3f9723e9d62394469a3835 Mon Sep 17 00:00:00 2001 From: John Bodley <4567245+john-bodley@users.noreply.github.com> Date: Tue, 31 Oct 2023 06:21:47 -0700 Subject: [PATCH 11/38] fix: Revert "fix: Apply normalization to all dttm columns (#25147)" (#25801) --- .../Datasource/DatasourceModal.test.jsx | 2 +- superset/common/query_context_factory.py | 1 - superset/common/query_context_processor.py | 5 +- superset/common/query_object_factory.py | 67 +------------- .../integration_tests/query_context_tests.py | 8 +- .../common/test_query_object_factory.py | 90 +------------------ 6 files changed, 11 insertions(+), 162 deletions(-) diff --git a/superset-frontend/src/components/Datasource/DatasourceModal.test.jsx b/superset-frontend/src/components/Datasource/DatasourceModal.test.jsx index 6d991f24a092e..bb6fce7577bd7 100644 --- a/superset-frontend/src/components/Datasource/DatasourceModal.test.jsx +++ b/superset-frontend/src/components/Datasource/DatasourceModal.test.jsx @@ -137,7 +137,7 @@ describe('DatasourceModal', () => { }); }); - it.only('should render error dialog', async () => { + it('should render error dialog', async () => { jest .spyOn(SupersetClient, 'put') .mockRejectedValue(new Error('Something went wrong')); diff --git a/superset/common/query_context_factory.py b/superset/common/query_context_factory.py index 62e8b79893556..e4680ed5eda82 100644 --- a/superset/common/query_context_factory.py +++ b/superset/common/query_context_factory.py @@ -186,7 +186,6 @@ def _apply_granularity( filter for 
filter in query_object.filter if filter["col"] != filter_to_remove - or filter["op"] != "TEMPORAL_RANGE" ] def _apply_filters(self, query_object: QueryObject) -> None: diff --git a/superset/common/query_context_processor.py b/superset/common/query_context_processor.py index 754c9ae91a854..f6152b232a938 100644 --- a/superset/common/query_context_processor.py +++ b/superset/common/query_context_processor.py @@ -285,11 +285,10 @@ def _get_timestamp_format( datasource = self._qc_datasource labels = tuple( label - for label in { + for label in [ *get_base_axis_labels(query_object.columns), - *[col for col in query_object.columns or [] if isinstance(col, str)], query_object.granularity, - } + ] if datasource # Query datasource didn't support `get_column` and hasattr(datasource, "get_column") diff --git a/superset/common/query_object_factory.py b/superset/common/query_object_factory.py index a76431122e38c..ae85912cdfe78 100644 --- a/superset/common/query_object_factory.py +++ b/superset/common/query_object_factory.py @@ -16,24 +16,17 @@ # under the License. 
from __future__ import annotations -from datetime import datetime from typing import Any, TYPE_CHECKING from superset.common.chart_data import ChartDataResultType from superset.common.query_object import QueryObject from superset.common.utils.time_range_utils import get_since_until_from_time_range -from superset.utils.core import ( - apply_max_row_limit, - DatasourceDict, - DatasourceType, - FilterOperator, - QueryObjectFilterClause, -) +from superset.utils.core import apply_max_row_limit, DatasourceDict, DatasourceType if TYPE_CHECKING: from sqlalchemy.orm import sessionmaker - from superset.connectors.base.models import BaseColumn, BaseDatasource + from superset.connectors.base.models import BaseDatasource from superset.daos.datasource import DatasourceDAO @@ -73,10 +66,6 @@ def create( # pylint: disable=too-many-arguments ) kwargs["from_dttm"] = from_dttm kwargs["to_dttm"] = to_dttm - if datasource_model_instance and kwargs.get("filters", []): - kwargs["filters"] = self._process_filters( - datasource_model_instance, kwargs["filters"] - ) return QueryObject( datasource=datasource_model_instance, extras=extras, @@ -113,55 +102,3 @@ def _process_row_limit( # light version of the view.utils.core # import view.utils require application context # Todo: move it and the view.utils.core to utils package - - # pylint: disable=no-self-use - def _process_filters( - self, datasource: BaseDatasource, query_filters: list[QueryObjectFilterClause] - ) -> list[QueryObjectFilterClause]: - def get_dttm_filter_value( - value: Any, col: BaseColumn, date_format: str - ) -> int | str: - if not isinstance(value, int): - return value - if date_format in {"epoch_ms", "epoch_s"}: - if date_format == "epoch_s": - value = str(value) - else: - value = str(value * 1000) - else: - dttm = datetime.utcfromtimestamp(value / 1000) - value = dttm.strftime(date_format) - - if col.type in col.num_types: - value = int(value) - return value - - for query_filter in query_filters: - if 
query_filter.get("op") == FilterOperator.TEMPORAL_RANGE: - continue - filter_col = query_filter.get("col") - if not isinstance(filter_col, str): - continue - column = datasource.get_column(filter_col) - if not column: - continue - filter_value = query_filter.get("val") - - date_format = column.python_date_format - if not date_format and datasource.db_extra: - date_format = datasource.db_extra.get( - "python_date_format_by_column_name", {} - ).get(column.column_name) - - if column.is_dttm and date_format: - if isinstance(filter_value, list): - query_filter["val"] = [ - get_dttm_filter_value(value, column, date_format) - for value in filter_value - ] - else: - query_filter["val"] = get_dttm_filter_value( - filter_value, column, date_format - ) - - return query_filters diff --git a/tests/integration_tests/query_context_tests.py b/tests/integration_tests/query_context_tests.py index 00a98b2c21d93..8c2082d1c4b12 100644 --- a/tests/integration_tests/query_context_tests.py +++ b/tests/integration_tests/query_context_tests.py @@ -836,9 +836,11 @@ def test_special_chars_in_column_name(app_context, physical_dataset): query_object = qc.queries[0] df = qc.get_df_payload(query_object)["df"] - - # sqlite doesn't have timestamp columns - if query_object.datasource.database.backend != "sqlite": + if query_object.datasource.database.backend == "sqlite": + # sqlite returns string as timestamp column + assert df["time column with spaces"][0] == "2002-01-03 00:00:00" + assert df["I_AM_A_TRUNC_COLUMN"][0] == "2002-01-01 00:00:00" + else: assert df["time column with spaces"][0].strftime("%Y-%m-%d") == "2002-01-03" assert df["I_AM_A_TRUNC_COLUMN"][0].strftime("%Y-%m-%d") == "2002-01-01" diff --git a/tests/unit_tests/common/test_query_object_factory.py b/tests/unit_tests/common/test_query_object_factory.py index 4e8fadfe3e993..02304828dca82 100644 --- a/tests/unit_tests/common/test_query_object_factory.py +++ b/tests/unit_tests/common/test_query_object_factory.py @@ -43,45 +43,9 @@ def 
session_factory() -> Mock: return Mock() -class SimpleDatasetColumn: - def __init__(self, col_params: dict[str, Any]): - self.__dict__.update(col_params) - - -TEMPORAL_COLUMN_NAMES = ["temporal_column", "temporal_column_with_python_date_format"] -TEMPORAL_COLUMNS = { - TEMPORAL_COLUMN_NAMES[0]: SimpleDatasetColumn( - { - "column_name": TEMPORAL_COLUMN_NAMES[0], - "is_dttm": True, - "python_date_format": None, - "type": "string", - "num_types": ["BIGINT"], - } - ), - TEMPORAL_COLUMN_NAMES[1]: SimpleDatasetColumn( - { - "column_name": TEMPORAL_COLUMN_NAMES[1], - "type": "BIGINT", - "is_dttm": True, - "python_date_format": "%Y", - "num_types": ["BIGINT"], - } - ), -} - - @fixture def connector_registry() -> Mock: - datasource_dao_mock = Mock(spec=["get_datasource"]) - datasource_dao_mock.get_datasource.return_value = Mock() - datasource_dao_mock.get_datasource().get_column = Mock( - side_effect=lambda col_name: TEMPORAL_COLUMNS[col_name] - if col_name in TEMPORAL_COLUMN_NAMES - else Mock() - ) - datasource_dao_mock.get_datasource().db_extra = None - return datasource_dao_mock + return Mock(spec=["get_datasource"]) def apply_max_row_limit(limit: int, max_limit: Optional[int] = None) -> int: @@ -148,55 +112,3 @@ def test_query_context_null_post_processing_op( raw_query_context["result_type"], **raw_query_object ) assert query_object.post_processing == [] - - def test_query_context_no_python_date_format_filters( - self, - query_object_factory: QueryObjectFactory, - raw_query_context: dict[str, Any], - ): - raw_query_object = raw_query_context["queries"][0] - raw_query_object["filters"].append( - {"col": TEMPORAL_COLUMN_NAMES[0], "op": "==", "val": 315532800000} - ) - query_object = query_object_factory.create( - raw_query_context["result_type"], - raw_query_context["datasource"], - **raw_query_object - ) - assert query_object.filter[3]["val"] == 315532800000 - - def test_query_context_python_date_format_filters( - self, - query_object_factory: QueryObjectFactory, - 
raw_query_context: dict[str, Any], - ): - raw_query_object = raw_query_context["queries"][0] - raw_query_object["filters"].append( - {"col": TEMPORAL_COLUMN_NAMES[1], "op": "==", "val": 315532800000} - ) - query_object = query_object_factory.create( - raw_query_context["result_type"], - raw_query_context["datasource"], - **raw_query_object - ) - assert query_object.filter[3]["val"] == 1980 - - def test_query_context_python_date_format_filters_list_of_values( - self, - query_object_factory: QueryObjectFactory, - raw_query_context: dict[str, Any], - ): - raw_query_object = raw_query_context["queries"][0] - raw_query_object["filters"].append( - { - "col": TEMPORAL_COLUMN_NAMES[1], - "op": "==", - "val": [315532800000, 631152000000], - } - ) - query_object = query_object_factory.create( - raw_query_context["result_type"], - raw_query_context["datasource"], - **raw_query_object - ) - assert query_object.filter[3]["val"] == [1980, 1990] From 04c11b477b6a470c546fcd8308f6d84a232158f9 Mon Sep 17 00:00:00 2001 From: John Bodley <4567245+john-bodley@users.noreply.github.com> Date: Tue, 31 Oct 2023 08:24:41 -0700 Subject: [PATCH 12/38] fix: Resolve issue #24195 (#25804) (cherry picked from commit 8737a8a54669037473a89688b9029bc9f3b4ad09) --- superset-frontend/src/constants.ts | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/superset-frontend/src/constants.ts b/superset-frontend/src/constants.ts index bc2b0be843d3e..b707a48e04fa3 100644 --- a/superset-frontend/src/constants.ts +++ b/superset-frontend/src/constants.ts @@ -51,6 +51,10 @@ export const URL_PARAMS = { name: 'filter_set', type: 'string', }, + showFilters: { + name: 'show_filters', + type: 'boolean', + }, expandFilters: { name: 'expand_filters', type: 'boolean', From 2d574963f07b75b95c816ccb0f5ee9c56bc710cc Mon Sep 17 00:00:00 2001 From: Ross Mabbett <92495987+rtexelm@users.noreply.github.com> Date: Tue, 31 Oct 2023 13:23:44 -0300 Subject: [PATCH 13/38] fix(SQL field in edit dataset modal): display full sql query 
(#25768) (cherry picked from commit 1eba7121aa1c40fdaa55d1a55024c55c63901b4c) --- .../src/components/Datasource/DatasourceEditor.jsx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/superset-frontend/src/components/Datasource/DatasourceEditor.jsx b/superset-frontend/src/components/Datasource/DatasourceEditor.jsx index 195545a2f6c39..991a354d90704 100644 --- a/superset-frontend/src/components/Datasource/DatasourceEditor.jsx +++ b/superset-frontend/src/components/Datasource/DatasourceEditor.jsx @@ -1131,7 +1131,7 @@ class DatasourceEditor extends React.PureComponent { language="sql" offerEditInModal={false} minLines={20} - maxLines={20} + maxLines={Infinity} readOnly={!this.state.isEditMode} resize="both" /> From 28272527fc7868cceae14e0e923c5936ba90c285 Mon Sep 17 00:00:00 2001 From: "JUST.in DO IT" Date: Wed, 1 Nov 2023 05:03:03 -0700 Subject: [PATCH 14/38] fix(sqllab): infinite fetching status after results are landed (#25814) (cherry picked from commit 3f28eebb2061b53c0a15c24588261b6a71fbb799) --- .../src/SqlLab/reducers/sqlLab.js | 31 ++++++++++------ .../src/SqlLab/reducers/sqlLab.test.js | 35 +++++++++++++++++++ 2 files changed, 56 insertions(+), 10 deletions(-) diff --git a/superset-frontend/src/SqlLab/reducers/sqlLab.js b/superset-frontend/src/SqlLab/reducers/sqlLab.js index e6e0a54ed949e..6e2db85a8bc44 100644 --- a/superset-frontend/src/SqlLab/reducers/sqlLab.js +++ b/superset-frontend/src/SqlLab/reducers/sqlLab.js @@ -664,16 +664,27 @@ export default function sqlLabReducer(state = {}, action) { [actions.CLEAR_INACTIVE_QUERIES]() { const { queries } = state; const cleanedQueries = Object.fromEntries( - Object.entries(queries).filter(([, query]) => { - if ( - ['running', 'pending'].includes(query.state) && - Date.now() - query.startDttm > action.interval && - query.progress === 0 - ) { - return false; - } - return true; - }), + Object.entries(queries) + .filter(([, query]) => { + if ( + ['running', 'pending'].includes(query.state) && + 
Date.now() - query.startDttm > action.interval && + query.progress === 0 + ) { + return false; + } + return true; + }) + .map(([id, query]) => [ + id, + { + ...query, + state: + query.resultsKey && query.results?.status + ? query.results.status + : query.state, + }, + ]), ); return { ...state, queries: cleanedQueries }; }, diff --git a/superset-frontend/src/SqlLab/reducers/sqlLab.test.js b/superset-frontend/src/SqlLab/reducers/sqlLab.test.js index 89ddc61f8c8a6..e1a234734bca6 100644 --- a/superset-frontend/src/SqlLab/reducers/sqlLab.test.js +++ b/superset-frontend/src/SqlLab/reducers/sqlLab.test.js @@ -16,6 +16,7 @@ * specific language governing permissions and limitations * under the License. */ +import { QueryState } from '@superset-ui/core'; import sqlLabReducer from 'src/SqlLab/reducers/sqlLab'; import * as actions from 'src/SqlLab/actions/sqlLab'; import { table, initialState as mockState } from '../fixtures'; @@ -388,4 +389,38 @@ describe('sqlLabReducer', () => { newState = sqlLabReducer(newState, actions.refreshQueries({})); }); }); + describe('CLEAR_INACTIVE_QUERIES', () => { + let newState; + let query; + beforeEach(() => { + query = { + id: 'abcd', + changed_on: Date.now(), + startDttm: Date.now(), + state: QueryState.FETCHING, + progress: 100, + resultsKey: 'fa3dccc4-c549-4fbf-93c8-b4fb5a6fb8b7', + cached: false, + }; + }); + it('updates queries that have already been completed', () => { + newState = sqlLabReducer( + { + ...newState, + queries: { + abcd: { + ...query, + results: { + query_id: 1234, + status: QueryState.SUCCESS, + data: [], + }, + }, + }, + }, + actions.clearInactiveQueries(Date.now()), + ); + expect(newState.queries.abcd.state).toBe(QueryState.SUCCESS); + }); + }); }); From 925c63d4a6c74b77aa11a924b411b3c8156ef3da Mon Sep 17 00:00:00 2001 From: "Michael S. 
Molina" <70410625+michael-s-molina@users.noreply.github.com> Date: Fri, 3 Nov 2023 10:35:43 -0300 Subject: [PATCH 15/38] fix: Fires onChange when clearing all values of single select (#25853) (cherry picked from commit 8061d5cce982b0b828f5de69647a1f5b75f41a46) --- .../components/Select/AsyncSelect.test.tsx | 28 +++++++++++++++++++ .../src/components/Select/Select.test.tsx | 28 +++++++++++++++++++ .../src/components/Select/Select.tsx | 2 +- 3 files changed, 57 insertions(+), 1 deletion(-) diff --git a/superset-frontend/src/components/Select/AsyncSelect.test.tsx b/superset-frontend/src/components/Select/AsyncSelect.test.tsx index e49f00be537aa..c1442a6b70a1c 100644 --- a/superset-frontend/src/components/Select/AsyncSelect.test.tsx +++ b/superset-frontend/src/components/Select/AsyncSelect.test.tsx @@ -840,6 +840,34 @@ test('does not fire onChange when searching but no selection', async () => { expect(onChange).toHaveBeenCalledTimes(1); }); +test('fires onChange when clearing the selection in single mode', async () => { + const onChange = jest.fn(); + render( + , + ); + clearAll(); + expect(onChange).toHaveBeenCalledTimes(1); +}); + +test('fires onChange when clearing the selection in multiple mode', async () => { + const onChange = jest.fn(); + render( + , + ); + clearAll(); + expect(onChange).toHaveBeenCalledTimes(1); +}); + test('does not duplicate options when using numeric values', async () => { render( { expect(onChange).toHaveBeenCalledTimes(1); }); +test('fires onChange when clearing the selection in single mode', async () => { + const onChange = jest.fn(); + render( + , + ); + clearAll(); + expect(onChange).toHaveBeenCalledTimes(1); +}); + test('does not duplicate options when using numeric values', async () => { render(
{t( - 'Specify the database version. This should be used with ' + - 'Presto in order to enable query cost estimation.', + 'Specify the database version. This is used with Presto for query cost ' + + 'estimation, and Dremio for syntax changes, among others.', )}
From d265bd2ffcaed2bb5f2ba7dd139afeb9c5654645 Mon Sep 17 00:00:00 2001 From: Beto Dealmeida Date: Wed, 8 Nov 2023 07:38:38 -0500 Subject: [PATCH 22/38] fix: trino cursor (#25897) (cherry picked from commit cdb18e04ffa7d50120a26af990d1ce35b2bd8b5e) --- superset/db_engine_specs/trino.py | 32 ++++++++++++++++++------------- 1 file changed, 19 insertions(+), 13 deletions(-) diff --git a/superset/db_engine_specs/trino.py b/superset/db_engine_specs/trino.py index f758f1fadd1aa..19c11939c440c 100644 --- a/superset/db_engine_specs/trino.py +++ b/superset/db_engine_specs/trino.py @@ -187,7 +187,7 @@ def handle_cursor(cls, cursor: Cursor, query: Query, session: Session) -> None: @classmethod def execute_with_cursor( - cls, cursor: Any, sql: str, query: Query, session: Session + cls, cursor: Cursor, sql: str, query: Query, session: Session ) -> None: """ Trigger execution of a query and handle the resulting cursor. @@ -196,34 +196,40 @@ def execute_with_cursor( in another thread and invoke `handle_cursor` to poll for the query ID to appear on the cursor in parallel. """ + # Fetch the query ID beforehand, since it might fail inside the thread due to + # how the SQLAlchemy session is handled. 
+ query_id = query.id + execute_result: dict[str, Any] = {} + execute_event = threading.Event() - def _execute(results: dict[str, Any]) -> None: - logger.debug("Query %d: Running query: %s", query.id, sql) + def _execute(results: dict[str, Any], event: threading.Event) -> None: + logger.debug("Query %d: Running query: %s", query_id, sql) - # Pass result / exception information back to the parent thread try: cls.execute(cursor, sql) - results["complete"] = True except Exception as ex: # pylint: disable=broad-except - results["complete"] = True results["error"] = ex + finally: + event.set() - execute_thread = threading.Thread(target=_execute, args=(execute_result,)) + execute_thread = threading.Thread( + target=_execute, + args=(execute_result, execute_event), + ) execute_thread.start() # Wait for a query ID to be available before handling the cursor, as # it's required by that method; it may never become available on error. - while not cursor.query_id and not execute_result.get("complete"): + while not cursor.query_id and not execute_event.is_set(): time.sleep(0.1) - logger.debug("Query %d: Handling cursor", query.id) + logger.debug("Query %d: Handling cursor", query_id) cls.handle_cursor(cursor, query, session) # Block until the query completes; same behaviour as the client itself - logger.debug("Query %d: Waiting for query to complete", query.id) - while not execute_result.get("complete"): - time.sleep(0.5) + logger.debug("Query %d: Waiting for query to complete", query_id) + execute_event.wait() # Unfortunately we'll mangle the stack trace due to the thread, but # throwing the original exception allows mapping database errors as normal @@ -237,7 +243,7 @@ def prepare_cancel_query(cls, query: Query, session: Session) -> None: session.commit() @classmethod - def cancel_query(cls, cursor: Any, query: Query, cancel_query_id: str) -> bool: + def cancel_query(cls, cursor: Cursor, query: Query, cancel_query_id: str) -> bool: """ Cancel query in the underlying database. 
From c655a3e7dd1778994ad40cade19415f1950caa34 Mon Sep 17 00:00:00 2001 From: "Michael S. Molina" Date: Mon, 13 Nov 2023 16:44:20 -0300 Subject: [PATCH 23/38] chore: Updates CHANGELOG.md for 3.0.2 --- CHANGELOG.md | 37 ++++++++++++++++++++++++++++++++++ helm/superset/Chart.yaml | 4 ++-- helm/superset/README.md | 2 +- superset-frontend/package.json | 2 +- 4 files changed, 41 insertions(+), 4 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 3e150d476b76e..e3b8ed27ce62c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -19,6 +19,7 @@ under the License. ## Change Log +- [3.0.2](#302-wed-nov-8-073838-2023--0500) - [3.0.1](#301-tue-oct-13-103221-2023--0700) - [3.0.0](#300-thu-aug-24-133627-2023--0600) - [2.1.1](#211-sun-apr-23-154421-2023-0100) @@ -32,6 +33,42 @@ under the License. - [1.4.2](#142-sat-mar-19-000806-2022-0200) - [1.4.1](#141) +### 3.0.2 (Wed Nov 8 07:38:38 2023 -0500) + +**Fixes** + +- [#25897](https://github.com/apache/superset/pull/25897) fix: trino cursor (@betodealmeida) +- [#25898](https://github.com/apache/superset/pull/25898) fix: database version field (@betodealmeida) +- [#25877](https://github.com/apache/superset/pull/25877) fix: Saving Mixed Chart with dashboard filter applied breaks adhoc_filter_b (@kgabryje) +- [#25842](https://github.com/apache/superset/pull/25842) fix(charts): Time grain is None when dataset uses Jinja (@Antonio-RiveroMartnez) +- [#25843](https://github.com/apache/superset/pull/25843) fix: remove `update_charts_owners` (@betodealmeida) +- [#25707](https://github.com/apache/superset/pull/25707) fix(table chart): Show Cell Bars correctly #25625 (@SA-Ark) +- [#25429](https://github.com/apache/superset/pull/25429) fix: the temporal x-axis results in a none time_range. 
(@mapledan) +- [#25853](https://github.com/apache/superset/pull/25853) fix: Fires onChange when clearing all values of single select (@michael-s-molina) +- [#25814](https://github.com/apache/superset/pull/25814) fix(sqllab): infinite fetching status after results are landed (@justinpark) +- [#25768](https://github.com/apache/superset/pull/25768) fix(SQL field in edit dataset modal): display full sql query (@rtexelm) +- [#25804](https://github.com/apache/superset/pull/25804) fix: Resolve issue #24195 (@john-bodley) +- [#25801](https://github.com/apache/superset/pull/25801) fix: Revert "fix: Apply normalization to all dttm columns (#25147)" (@john-bodley) +- [#25779](https://github.com/apache/superset/pull/25779) fix: DB-specific quoting in Jinja macro (@betodealmeida) +- [#25640](https://github.com/apache/superset/pull/25640) fix: allow for backward compatible errors (@eschutho) +- [#25741](https://github.com/apache/superset/pull/25741) fix(sqllab): slow pop datasource query (@justinpark) +- [#25756](https://github.com/apache/superset/pull/25756) fix: dataset update uniqueness (@betodealmeida) +- [#25753](https://github.com/apache/superset/pull/25753) fix: Revert "fix(Charts): Set max row limit + removed the option to use an empty row limit value" (@geido) +- [#25732](https://github.com/apache/superset/pull/25732) fix(horizontal filter label): show full tooltip with ellipsis (@rtexelm) +- [#25712](https://github.com/apache/superset/pull/25712) fix: bump to FAB 4.3.9 remove CSP exception (@dpgaspar) +- [#24709](https://github.com/apache/superset/pull/24709) fix(chore): dashboard requests to database equal the number of slices it has (@Always-prog) +- [#25679](https://github.com/apache/superset/pull/25679) fix: remove unnecessary redirect (@Khrol) +- [#25680](https://github.com/apache/superset/pull/25680) fix(sqllab): reinstate "Force trino client async execution" (@giftig) +- [#25657](https://github.com/apache/superset/pull/25657) fix(dremio): Fixes issue with Dremio 
SQL generation for Charts with Series Limit (@OskarNS) +- [#23638](https://github.com/apache/superset/pull/23638) fix: warning of nth-child (@justinpark) +- [#25658](https://github.com/apache/superset/pull/25658) fix: improve upload ZIP file validation (@dpgaspar) +- [#25495](https://github.com/apache/superset/pull/25495) fix(header navlinks): link navlinks to path prefix (@fisjac) +- [#25112](https://github.com/apache/superset/pull/25112) fix: permalink save/overwrites in explore (@hughhhh) +- [#25493](https://github.com/apache/superset/pull/25493) fix(import): Make sure query context is overwritten for overwriting imports (@jfrag1) +- [#25553](https://github.com/apache/superset/pull/25553) fix: avoid 500 errors with SQLLAB_BACKEND_PERSISTENCE (@Khrol) +- [#25626](https://github.com/apache/superset/pull/25626) fix(sqllab): template validation error within comments (@justinpark) +- [#25523](https://github.com/apache/superset/pull/25523) fix(sqllab): Mistitled for new tab after rename (@justinpark) + ### 3.0.1 (Tue Oct 13 10:32:21 2023 -0700) **Database Migrations** diff --git a/helm/superset/Chart.yaml b/helm/superset/Chart.yaml index 2aa2bc49a3178..bd5c6a7909059 100644 --- a/helm/superset/Chart.yaml +++ b/helm/superset/Chart.yaml @@ -15,7 +15,7 @@ # limitations under the License. 
# apiVersion: v2 -appVersion: "3.0.0" +appVersion: "3.0.2" description: Apache Superset is a modern, enterprise-ready business intelligence web application name: superset icon: https://artifacthub.io/image/68c1d717-0e97-491f-b046-754e46f46922@2x @@ -29,7 +29,7 @@ maintainers: - name: craig-rueda email: craig@craigrueda.com url: https://github.com/craig-rueda -version: 0.10.9 +version: 0.10.10 dependencies: - name: postgresql version: 12.1.6 diff --git a/helm/superset/README.md b/helm/superset/README.md index 38c69c38b524d..63578b1c787c0 100644 --- a/helm/superset/README.md +++ b/helm/superset/README.md @@ -23,7 +23,7 @@ NOTE: This file is generated by helm-docs: https://github.com/norwoodj/helm-docs # superset -![Version: 0.10.9](https://img.shields.io/badge/Version-0.10.9-informational?style=flat-square) +![Version: 0.10.10](https://img.shields.io/badge/Version-0.10.10-informational?style=flat-square) Apache Superset is a modern, enterprise-ready business intelligence web application diff --git a/superset-frontend/package.json b/superset-frontend/package.json index 680a8cbde10ac..1e6d28b467f43 100644 --- a/superset-frontend/package.json +++ b/superset-frontend/package.json @@ -1,6 +1,6 @@ { "name": "superset", - "version": "3.0.1", + "version": "3.0.2", "description": "Superset is a data exploration platform designed to be visual, intuitive, and interactive.", "keywords": [ "big", From 078b78f30b587ee0ec58af8eb89c34c4b0650040 Mon Sep 17 00:00:00 2001 From: FGrobelny <150029280+FGrobelny@users.noreply.github.com> Date: Wed, 8 Nov 2023 21:00:18 +0100 Subject: [PATCH 24/38] fix(trino): allow impersonate_user flag to be imported (#25872) Co-authored-by: John Bodley <4567245+john-bodley@users.noreply.github.com> (cherry picked from commit 458be8c848c9e3d2a798c9371cb2cd65c206e85c) --- superset/models/core.py | 1 + 1 file changed, 1 insertion(+) diff --git a/superset/models/core.py b/superset/models/core.py index 0581756b818ea..51d5f8a162f28 100755 --- 
a/superset/models/core.py +++ b/superset/models/core.py @@ -185,6 +185,7 @@ class Database( "is_managed_externally", "external_url", "encrypted_extra", + "impersonate_user", ] export_children = ["tables"] From eea6a8ed4fb9a5ad9103ce4a6df7a6ed313eabe1 Mon Sep 17 00:00:00 2001 From: John Bodley <4567245+john-bodley@users.noreply.github.com> Date: Wed, 8 Nov 2023 12:22:00 -0800 Subject: [PATCH 25/38] fix(table): Double percenting ad-hoc percentage metrics (#25857) (cherry picked from commit 784a478268fd89e6e58077e99bb2010987d6b07c) --- .../plugins/plugin-chart-table/src/transformProps.ts | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/superset-frontend/plugins/plugin-chart-table/src/transformProps.ts b/superset-frontend/plugins/plugin-chart-table/src/transformProps.ts index 3c9c00d3c165f..0c8970737d86d 100644 --- a/superset-frontend/plugins/plugin-chart-table/src/transformProps.ts +++ b/superset-frontend/plugins/plugin-chart-table/src/transformProps.ts @@ -118,9 +118,10 @@ const processColumns = memoizeOne(function processColumns( // because users can also add things like `MAX(str_col)` as a metric. const isMetric = metricsSet.has(key) && isNumeric(key, records); const isPercentMetric = percentMetricsSet.has(key); - const label = isPercentMetric - ? `%${verboseMap?.[key.replace('%', '')] || key}` - : verboseMap?.[key] || key; + const label = + isPercentMetric && verboseMap?.hasOwnProperty(key.replace('%', '')) + ? 
`%${verboseMap[key.replace('%', '')]}` + : verboseMap?.[key] || key; const isTime = dataType === GenericDataType.TEMPORAL; const isNumber = dataType === GenericDataType.NUMERIC; const savedFormat = columnFormats?.[key]; From 6da18f84515d6608f8a20f716574e6ec84acf1ac Mon Sep 17 00:00:00 2001 From: "JUST.in DO IT" Date: Thu, 9 Nov 2023 09:26:21 -0800 Subject: [PATCH 26/38] fix(sqllab): invalid sanitization on comparison symbol (#25903) (cherry picked from commit 581d3c710867120f85ddfc097713e5f2880722c1) --- .../packages/superset-ui-core/src/utils/html.test.tsx | 3 +++ .../packages/superset-ui-core/src/utils/html.tsx | 4 +++- 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/superset-frontend/packages/superset-ui-core/src/utils/html.test.tsx b/superset-frontend/packages/superset-ui-core/src/utils/html.test.tsx index 8fd06cb6f8e7a..9b950e4246e92 100644 --- a/superset-frontend/packages/superset-ui-core/src/utils/html.test.tsx +++ b/superset-frontend/packages/superset-ui-core/src/utils/html.test.tsx @@ -44,6 +44,9 @@ describe('isProbablyHTML', () => { const plainText = 'Just a plain text'; const isHTML = isProbablyHTML(plainText); expect(isHTML).toBe(false); + + const trickyText = 'a <= 10 and b > 10'; + expect(isProbablyHTML(trickyText)).toBe(false); }); }); diff --git a/superset-frontend/packages/superset-ui-core/src/utils/html.tsx b/superset-frontend/packages/superset-ui-core/src/utils/html.tsx index 3215eb9b9de5b..fffd43bda8f6e 100644 --- a/superset-frontend/packages/superset-ui-core/src/utils/html.tsx +++ b/superset-frontend/packages/superset-ui-core/src/utils/html.tsx @@ -28,7 +28,9 @@ export function sanitizeHtml(htmlString: string) { } export function isProbablyHTML(text: string) { - return /<[^>]+>/.test(text); + return Array.from( + new DOMParser().parseFromString(text, 'text/html').body.childNodes, + ).some(({ nodeType }) => nodeType === 1); } export function sanitizeHtmlIfNeeded(htmlString: string) { From a7fbdd607a7f1443146f8fe57404fe84f42c64f3 Mon 
Sep 17 00:00:00 2001 From: Giacomo Barone <46573388+ggbaro@users.noreply.github.com> Date: Sat, 11 Nov 2023 05:13:50 +0100 Subject: [PATCH 27/38] fix: update flask-caching to avoid breaking redis cache, solves #25339 (#25947) Co-authored-by: Ville Brofeldt <33317356+villebro@users.noreply.github.com> --- requirements/base.txt | 7 ++++--- setup.py | 2 +- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/requirements/base.txt b/requirements/base.txt index 95e691227221e..18cf4ccf63225 100644 --- a/requirements/base.txt +++ b/requirements/base.txt @@ -27,8 +27,9 @@ billiard==3.6.4.0 # via celery brotli==1.0.9 # via flask-compress -cachelib==0.6.0 - # via flask-caching +cachelib==0.9.0 + # via + # flask-caching celery==5.2.2 # via apache-superset cffi==1.15.1 @@ -92,7 +93,7 @@ flask-appbuilder==4.3.9 # via apache-superset flask-babel==1.0.0 # via flask-appbuilder -flask-caching==1.11.1 +flask-caching==2.1.0 # via apache-superset flask-compress==1.13 # via apache-superset diff --git a/setup.py b/setup.py index 87a721d21b1ab..20796cf13348f 100644 --- a/setup.py +++ b/setup.py @@ -81,7 +81,7 @@ def get_git_sha() -> str: "deprecation>=2.1.0, <2.2.0", "flask>=2.2.5, <3.0.0", "flask-appbuilder>=4.3.9, <5.0.0", - "flask-caching>=1.11.1, <2.0", + "flask-caching>=2.1.0, <3", "flask-compress>=1.13, <2.0", "flask-talisman>=1.0.0, <2.0", "flask-login>=0.6.0, < 1.0", From e07eed10a2f1af92aa531203fd87c6fb5b1c99a1 Mon Sep 17 00:00:00 2001 From: "Hugh A. 
Miles II" Date: Mon, 13 Nov 2023 13:18:28 -0500 Subject: [PATCH 28/38] fix: always denorm column value before querying values (#25919) --- superset/connectors/base/models.py | 7 ---- superset/connectors/sqla/models.py | 29 ---------------- superset/datasource/api.py | 4 +++ superset/models/helpers.py | 56 ++++++++++++++---------------- 4 files changed, 31 insertions(+), 65 deletions(-) diff --git a/superset/connectors/base/models.py b/superset/connectors/base/models.py index 706c82635c1bc..f560b4d86b7d7 100644 --- a/superset/connectors/base/models.py +++ b/superset/connectors/base/models.py @@ -496,13 +496,6 @@ def query(self, query_obj: QueryObjectDict) -> QueryResult: """ raise NotImplementedError() - def values_for_column(self, column_name: str, limit: int = 10000) -> list[Any]: - """Given a column, returns an iterable of distinct values - - This is used to populate the dropdown showing a list of - values in filters in the explore view""" - raise NotImplementedError() - @staticmethod def default_query(qry: Query) -> Query: return qry diff --git a/superset/connectors/sqla/models.py b/superset/connectors/sqla/models.py index 79203256f1e6b..5edc724b23e83 100644 --- a/superset/connectors/sqla/models.py +++ b/superset/connectors/sqla/models.py @@ -46,7 +46,6 @@ inspect, Integer, or_, - select, String, Table, Text, @@ -789,34 +788,6 @@ def get_fetch_values_predicate( ) ) from ex - def values_for_column(self, column_name: str, limit: int = 10000) -> list[Any]: - """Runs query against sqla to retrieve some - sample values for the given column. 
- """ - cols = {col.column_name: col for col in self.columns} - target_col = cols[column_name] - tp = self.get_template_processor() - tbl, cte = self.get_from_clause(tp) - - qry = ( - select([target_col.get_sqla_col(template_processor=tp)]) - .select_from(tbl) - .distinct() - ) - if limit: - qry = qry.limit(limit) - - if self.fetch_values_predicate: - qry = qry.where(self.get_fetch_values_predicate(template_processor=tp)) - - with self.database.get_sqla_engine_with_context() as engine: - sql = qry.compile(engine, compile_kwargs={"literal_binds": True}) - sql = self._apply_cte(sql, cte) - sql = self.mutate_query_from_config(sql) - - df = pd.read_sql_query(sql=sql, con=engine) - return df[column_name].to_list() - def mutate_query_from_config(self, sql: str) -> str: """Apply config's SQL_QUERY_MUTATOR diff --git a/superset/datasource/api.py b/superset/datasource/api.py index 6399d197e0049..213298d30b4b8 100644 --- a/superset/datasource/api.py +++ b/superset/datasource/api.py @@ -120,6 +120,10 @@ def get_column_values( column_name=column_name, limit=row_limit ) return self.response(200, result=payload) + except KeyError: + return self.response( + 400, message=f"Column name {column_name} does not exist" + ) except NotImplementedError: return self.response( 400, diff --git a/superset/models/helpers.py b/superset/models/helpers.py index 38f29eb2234d9..6163c3022cd35 100644 --- a/superset/models/helpers.py +++ b/superset/models/helpers.py @@ -700,10 +700,7 @@ class ExploreMixin: # pylint: disable=too-many-public-methods "MIN": sa.func.MIN, "MAX": sa.func.MAX, } - - @property - def fetch_value_predicate(self) -> str: - return "fix this!" 
+ fetch_values_predicate = None @property def type(self) -> str: @@ -776,17 +773,20 @@ def sql(self) -> str: def columns(self) -> list[Any]: raise NotImplementedError() - def get_fetch_values_predicate( - self, template_processor: Optional[BaseTemplateProcessor] = None - ) -> TextClause: - raise NotImplementedError() - def get_extra_cache_keys(self, query_obj: dict[str, Any]) -> list[Hashable]: raise NotImplementedError() def get_template_processor(self, **kwargs: Any) -> BaseTemplateProcessor: raise NotImplementedError() + def get_fetch_values_predicate( + self, + template_processor: Optional[ # pylint: disable=unused-argument + BaseTemplateProcessor + ] = None, # pylint: disable=unused-argument + ) -> TextClause: + return self.fetch_values_predicate + def get_sqla_row_level_filters( self, template_processor: BaseTemplateProcessor, @@ -1334,36 +1334,34 @@ def get_time_filter( # pylint: disable=too-many-arguments return and_(*l) def values_for_column(self, column_name: str, limit: int = 10000) -> list[Any]: - """Runs query against sqla to retrieve some - sample values for the given column. 
- """ - cols = {} - for col in self.columns: - if isinstance(col, dict): - cols[col.get("column_name")] = col - else: - cols[col.column_name] = col - - target_col = cols[column_name] - tp = None # todo(hughhhh): add back self.get_template_processor() + # always denormalize column name before querying for values + db_dialect = self.database.get_dialect() + denomalized_col_name = self.database.db_engine_spec.denormalize_name( + db_dialect, column_name + ) + cols = {col.column_name: col for col in self.columns} + target_col = cols[denomalized_col_name] + tp = self.get_template_processor() tbl, cte = self.get_from_clause(tp) - if isinstance(target_col, dict): - sql_column = sa.column(target_col.get("name")) - else: - sql_column = target_col - - qry = sa.select([sql_column]).select_from(tbl).distinct() + qry = ( + sa.select([target_col.get_sqla_col(template_processor=tp)]) + .select_from(tbl) + .distinct() + ) if limit: qry = qry.limit(limit) + if self.fetch_values_predicate: + qry = qry.where(self.get_fetch_values_predicate(template_processor=tp)) + with self.database.get_sqla_engine_with_context() as engine: # type: ignore sql = qry.compile(engine, compile_kwargs={"literal_binds": True}) sql = self._apply_cte(sql, cte) sql = self.mutate_query_from_config(sql) df = pd.read_sql_query(sql=sql, con=engine) - return df[column_name].to_list() + return df[denomalized_col_name].to_list() def get_timestamp_expression( self, @@ -1935,7 +1933,7 @@ def get_sqla_query( # pylint: disable=too-many-arguments,too-many-locals,too-ma ) having_clause_and += [self.text(having)] - if apply_fetch_values_predicate and self.fetch_values_predicate: # type: ignore + if apply_fetch_values_predicate and self.fetch_values_predicate: qry = qry.where( self.get_fetch_values_predicate(template_processor=template_processor) ) From fb1919a483fff7df8df9365eee8ebcf35dc31a0d Mon Sep 17 00:00:00 2001 From: John Bodley <4567245+john-bodley@users.noreply.github.com> Date: Mon, 13 Nov 2023 11:25:14 -0800 
Subject: [PATCH 29/38] chore(colors): Updating Airbnb brand colors (#23619) (cherry picked from commit 6d8424c104f196bde54d1ff3d02269e4c71059b4) --- .../cypress/e2e/dashboard/editmode.test.ts | 6 ++-- .../explore/visualizations/dist_bar.test.js | 2 +- .../e2e/explore/visualizations/line.test.ts | 2 +- .../color/colorSchemes/categorical/airbnb.ts | 34 +++++++------------ .../legacy-plugin-chart-map-box/Stories.tsx | 2 +- 5 files changed, 19 insertions(+), 27 deletions(-) diff --git a/superset-frontend/cypress-base/cypress/e2e/dashboard/editmode.test.ts b/superset-frontend/cypress-base/cypress/e2e/dashboard/editmode.test.ts index b35105a7b5911..e4d645bd2e083 100644 --- a/superset-frontend/cypress-base/cypress/e2e/dashboard/editmode.test.ts +++ b/superset-frontend/cypress-base/cypress/e2e/dashboard/editmode.test.ts @@ -515,7 +515,7 @@ describe('Dashboard edit', () => { // label Anthony cy.get('[data-test-chart-name="Trends"] .line .nv-legend-symbol') .eq(2) - .should('have.css', 'fill', 'rgb(0, 122, 135)'); + .should('have.css', 'fill', 'rgb(244, 176, 42)'); // open main tab and nested tab openTab(0, 0); @@ -526,7 +526,7 @@ describe('Dashboard edit', () => { '[data-test-chart-name="Top 10 California Names Timeseries"] .line .nv-legend-symbol', ) .first() - .should('have.css', 'fill', 'rgb(0, 122, 135)'); + .should('have.css', 'fill', 'rgb(244, 176, 42)'); }); it('should apply the color scheme across main tabs', () => { @@ -557,7 +557,7 @@ describe('Dashboard edit', () => { cy.get('[data-test-chart-name="Trends"] .line .nv-legend-symbol') .first() - .should('have.css', 'fill', 'rgb(204, 0, 134)'); + .should('have.css', 'fill', 'rgb(156, 52, 152)'); // change scheme now that charts are rendered across the main tabs editDashboard(); diff --git a/superset-frontend/cypress-base/cypress/e2e/explore/visualizations/dist_bar.test.js b/superset-frontend/cypress-base/cypress/e2e/explore/visualizations/dist_bar.test.js index 770e1e1c04d38..591ba31776935 100644 --- 
a/superset-frontend/cypress-base/cypress/e2e/explore/visualizations/dist_bar.test.js +++ b/superset-frontend/cypress-base/cypress/e2e/explore/visualizations/dist_bar.test.js @@ -89,6 +89,6 @@ describe('Visualization > Distribution bar chart', () => { ).should('exist'); cy.get('.dist_bar .nv-legend .nv-legend-symbol') .first() - .should('have.css', 'fill', 'rgb(255, 90, 95)'); + .should('have.css', 'fill', 'rgb(41, 105, 107)'); }); }); diff --git a/superset-frontend/cypress-base/cypress/e2e/explore/visualizations/line.test.ts b/superset-frontend/cypress-base/cypress/e2e/explore/visualizations/line.test.ts index 5cc398c7f3ef7..8499db5946818 100644 --- a/superset-frontend/cypress-base/cypress/e2e/explore/visualizations/line.test.ts +++ b/superset-frontend/cypress-base/cypress/e2e/explore/visualizations/line.test.ts @@ -85,7 +85,7 @@ describe('Visualization > Line', () => { ).should('exist'); cy.get('.line .nv-legend .nv-legend-symbol') .first() - .should('have.css', 'fill', 'rgb(255, 90, 95)'); + .should('have.css', 'fill', 'rgb(41, 105, 107)'); }); it('should work with adhoc metric', () => { diff --git a/superset-frontend/packages/superset-ui-core/src/color/colorSchemes/categorical/airbnb.ts b/superset-frontend/packages/superset-ui-core/src/color/colorSchemes/categorical/airbnb.ts index 462065b84f2b9..a126f502a9c3d 100644 --- a/superset-frontend/packages/superset-ui-core/src/color/colorSchemes/categorical/airbnb.ts +++ b/superset-frontend/packages/superset-ui-core/src/color/colorSchemes/categorical/airbnb.ts @@ -24,27 +24,19 @@ const schemes = [ id: 'bnbColors', label: 'Airbnb Colors', colors: [ - '#ff5a5f', // rausch - '#7b0051', // hackb - '#007A87', // kazan - '#00d1c1', // babu - '#8ce071', // lima - '#ffb400', // beach - '#b4a76c', // barol - '#ff8083', - '#cc0086', - '#00a1b3', - '#00ffeb', - '#bbedab', - '#ffd266', - '#cbc29a', - '#ff3339', - '#ff1ab1', - '#005c66', - '#00b3a5', - '#55d12e', - '#b37e00', - '#988b4e', + '#29696B', + '#5BCACE', + '#F4B02A', + 
'#F1826A', + '#792EB2', + '#C96EC6', + '#921E50', + '#B27700', + '#9C3498', + '#9C3498', + '#E4679D', + '#C32F0E', + '#9D63CA', ], }, ].map(s => new CategoricalScheme(s)); diff --git a/superset-frontend/packages/superset-ui-demo/storybook/stories/plugins/legacy-plugin-chart-map-box/Stories.tsx b/superset-frontend/packages/superset-ui-demo/storybook/stories/plugins/legacy-plugin-chart-map-box/Stories.tsx index 6cdca623a1c82..dd95ffada5b04 100644 --- a/superset-frontend/packages/superset-ui-demo/storybook/stories/plugins/legacy-plugin-chart-map-box/Stories.tsx +++ b/superset-frontend/packages/superset-ui-demo/storybook/stories/plugins/legacy-plugin-chart-map-box/Stories.tsx @@ -42,7 +42,7 @@ export const Basic = () => { allColumnsY: 'LAT', clusteringRadius: '60', globalOpacity: 1, - mapboxColor: 'rgb(0, 122, 135)', + mapboxColor: 'rgb(244, 176, 42)', mapboxLabel: [], mapboxStyle: 'mapbox://styles/mapbox/light-v9', pandasAggfunc: 'sum', From 1c287dfc74e98ebe2d5f655dad58c7f91755440e Mon Sep 17 00:00:00 2001 From: "Hugh A. 
Miles II" Date: Mon, 13 Nov 2023 18:46:09 -0500 Subject: [PATCH 30/38] fix: naming denomalized to denormalized in helpers.py (#25973) (cherry picked from commit 5def416f632ae7d7f90ae615a8600e8110797aec) --- superset/models/helpers.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/superset/models/helpers.py b/superset/models/helpers.py index 6163c3022cd35..037e4d8c6e868 100644 --- a/superset/models/helpers.py +++ b/superset/models/helpers.py @@ -1336,11 +1336,11 @@ def get_time_filter( # pylint: disable=too-many-arguments def values_for_column(self, column_name: str, limit: int = 10000) -> list[Any]: # always denormalize column name before querying for values db_dialect = self.database.get_dialect() - denomalized_col_name = self.database.db_engine_spec.denormalize_name( + denormalized_col_name = self.database.db_engine_spec.denormalize_name( db_dialect, column_name ) cols = {col.column_name: col for col in self.columns} - target_col = cols[denomalized_col_name] + target_col = cols[denormalized_col_name] tp = self.get_template_processor() tbl, cte = self.get_from_clause(tp) @@ -1361,7 +1361,7 @@ def values_for_column(self, column_name: str, limit: int = 10000) -> list[Any]: sql = self.mutate_query_from_config(sql) df = pd.read_sql_query(sql=sql, con=engine) - return df[denomalized_col_name].to_list() + return df[denormalized_col_name].to_list() def get_timestamp_expression( self, From 1d2a564d4eb1b02a2f80446450e8ae8edb911dc5 Mon Sep 17 00:00:00 2001 From: josedev-union <70741025+josedev-union@users.noreply.github.com> Date: Wed, 15 Nov 2023 20:38:54 +0100 Subject: [PATCH 31/38] fix(helm): Restart all related deployments when bootstrap script changed (#25703) --- helm/superset/Chart.yaml | 2 +- helm/superset/README.md | 476 +++++++++--------- helm/superset/templates/deployment-beat.yaml | 1 + .../superset/templates/deployment-worker.yaml | 1 + 4 files changed, 241 insertions(+), 239 deletions(-) diff --git a/helm/superset/Chart.yaml 
b/helm/superset/Chart.yaml index bd5c6a7909059..83f4119791abd 100644 --- a/helm/superset/Chart.yaml +++ b/helm/superset/Chart.yaml @@ -29,7 +29,7 @@ maintainers: - name: craig-rueda email: craig@craigrueda.com url: https://github.com/craig-rueda -version: 0.10.10 +version: 0.10.15 dependencies: - name: postgresql version: 12.1.6 diff --git a/helm/superset/README.md b/helm/superset/README.md index 63578b1c787c0..0a06e817e7a78 100644 --- a/helm/superset/README.md +++ b/helm/superset/README.md @@ -23,7 +23,7 @@ NOTE: This file is generated by helm-docs: https://github.com/norwoodj/helm-docs # superset -![Version: 0.10.10](https://img.shields.io/badge/Version-0.10.10-informational?style=flat-square) +![Version: 0.10.15](https://img.shields.io/badge/Version-0.10.15-informational?style=flat-square) Apache Superset is a modern, enterprise-ready business intelligence web application @@ -31,7 +31,7 @@ Apache Superset is a modern, enterprise-ready business intelligence web applicat ## Source Code -* +- ## TL;DR @@ -42,242 +42,242 @@ helm install my-superset superset/superset ## Requirements -| Repository | Name | Version | -|------------|------|---------| -| https://charts.bitnami.com/bitnami | postgresql | 12.1.6 | -| https://charts.bitnami.com/bitnami | redis | 17.9.4 | +| Repository | Name | Version | +| ---------------------------------- | ---------- | ------- | +| https://charts.bitnami.com/bitnami | postgresql | 12.1.6 | +| https://charts.bitnami.com/bitnami | redis | 17.9.4 | ## Values -| Key | Type | Default | Description | -|-----|------|---------|-------------| -| affinity | object | `{}` | | -| bootstrapScript | string | see `values.yaml` | Install additional packages and do any other bootstrap configuration in this script For production clusters it's recommended to build own image with this step done in CI | -| configFromSecret | string | `"{{ template \"superset.fullname\" . 
}}-config"` | The name of the secret which we will use to generate a superset_config.py file Note: this secret must have the key superset_config.py in it and can include other files as well | -| configMountPath | string | `"/app/pythonpath"` | | -| configOverrides | object | `{}` | A dictionary of overrides to append at the end of superset_config.py - the name does not matter WARNING: the order is not guaranteed Files can be passed as helm --set-file configOverrides.my-override=my-file.py | -| configOverridesFiles | object | `{}` | Same as above but the values are files | -| envFromSecret | string | `"{{ template \"superset.fullname\" . }}-env"` | The name of the secret which we will use to populate env vars in deployed pods This can be useful for secret keys, etc. | -| envFromSecrets | list | `[]` | This can be a list of templated strings | -| extraConfigMountPath | string | `"/app/configs"` | | -| extraConfigs | object | `{}` | Extra files to mount on `/app/pythonpath` | -| extraEnv | object | `{}` | Extra environment variables that will be passed into pods | -| extraEnvRaw | list | `[]` | Extra environment variables in RAW format that will be passed into pods | -| extraSecretEnv | object | `{}` | Extra environment variables to pass as secrets | -| extraSecrets | object | `{}` | Extra files to mount on `/app/pythonpath` as secrets | -| extraVolumeMounts | list | `[]` | | -| extraVolumes | list | `[]` | | -| fullnameOverride | string | `nil` | Provide a name to override the full names of resources | -| hostAliases | list | `[]` | Custom hostAliases for all superset pods # https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/ | -| image.pullPolicy | string | `"IfNotPresent"` | | -| image.repository | string | `"apachesuperset.docker.scarf.sh/apache/superset"` | | -| image.tag | string | `""` | | -| imagePullSecrets | list | `[]` | | -| ingress.annotations | object | `{}` | | -| ingress.enabled | bool | `false` | | -| ingress.extraHostsRaw | list | 
`[]` | | -| ingress.hosts[0] | string | `"chart-example.local"` | | -| ingress.ingressClassName | string | `nil` | | -| ingress.path | string | `"/"` | | -| ingress.pathType | string | `"ImplementationSpecific"` | | -| ingress.tls | list | `[]` | | -| init.adminUser.email | string | `"admin@superset.com"` | | -| init.adminUser.firstname | string | `"Superset"` | | -| init.adminUser.lastname | string | `"Admin"` | | -| init.adminUser.password | string | `"admin"` | | -| init.adminUser.username | string | `"admin"` | | -| init.affinity | object | `{}` | | -| init.command | list | a `superset_init.sh` command | Command | -| init.containerSecurityContext | object | `{}` | | -| init.createAdmin | bool | `true` | | -| init.enabled | bool | `true` | | -| init.initContainers | list | a container waiting for postgres | List of initContainers | -| init.initscript | string | a script to create admin user and initailize roles | A Superset init script | -| init.jobAnnotations."helm.sh/hook" | string | `"post-install,post-upgrade"` | | -| init.jobAnnotations."helm.sh/hook-delete-policy" | string | `"before-hook-creation"` | | -| init.loadExamples | bool | `false` | | -| init.podAnnotations | object | `{}` | | -| init.podSecurityContext | object | `{}` | | -| init.resources | object | `{}` | | -| init.tolerations | list | `[]` | | -| init.topologySpreadConstraints | list | `[]` | TopologySpreadConstrains to be added to init job | -| initImage.pullPolicy | string | `"IfNotPresent"` | | -| initImage.repository | string | `"apache/superset"` | | -| initImage.tag | string | `"dockerize"` | | -| nameOverride | string | `nil` | Provide a name to override the name of the chart | -| nodeSelector | object | `{}` | | -| postgresql | object | see `values.yaml` | Configuration values for the postgresql dependency. ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md | -| redis | object | see `values.yaml` | Configuration values for the Redis dependency. 
ref: https://github.com/bitnami/charts/blob/master/bitnami/redis More documentation can be found here: https://artifacthub.io/packages/helm/bitnami/redis | -| resources | object | `{}` | | -| runAsUser | int | `0` | User ID directive. This user must have enough permissions to run the bootstrap script Running containers as root is not recommended in production. Change this to another UID - e.g. 1000 to be more secure | -| service.annotations | object | `{}` | | -| service.loadBalancerIP | string | `nil` | | -| service.nodePort.http | int | `"nil"` | | -| service.port | int | `8088` | | -| service.type | string | `"ClusterIP"` | | -| serviceAccount.annotations | object | `{}` | | -| serviceAccount.create | bool | `false` | Create custom service account for Superset. If create: true and serviceAccountName is not provided, `superset.fullname` will be used. | -| serviceAccountName | string | `nil` | Specify service account name to be used | -| supersetCeleryBeat.affinity | object | `{}` | Affinity to be added to supersetCeleryBeat deployment | -| supersetCeleryBeat.command | list | a `celery beat` command | Command | -| supersetCeleryBeat.containerSecurityContext | object | `{}` | | -| supersetCeleryBeat.deploymentAnnotations | object | `{}` | Annotations to be added to supersetCeleryBeat deployment | -| supersetCeleryBeat.enabled | bool | `false` | This is only required if you intend to use alerts and reports | -| supersetCeleryBeat.forceReload | bool | `false` | If true, forces deployment to reload on each upgrade | -| supersetCeleryBeat.initContainers | list | a container waiting for postgres | List of init containers | -| supersetCeleryBeat.podAnnotations | object | `{}` | Annotations to be added to supersetCeleryBeat pods | -| supersetCeleryBeat.podLabels | object | `{}` | Labels to be added to supersetCeleryBeat pods | -| supersetCeleryBeat.podSecurityContext | object | `{}` | | -| supersetCeleryBeat.resources | object | `{}` | Resource settings for the CeleryBeat 
pods - these settings overwrite might existing values from the global resources object defined above. | -| supersetCeleryBeat.topologySpreadConstraints | list | `[]` | TopologySpreadConstrains to be added to supersetCeleryBeat deployments | -| supersetCeleryFlower.affinity | object | `{}` | Affinity to be added to supersetCeleryFlower deployment | -| supersetCeleryFlower.command | list | a `celery flower` command | Command | -| supersetCeleryFlower.containerSecurityContext | object | `{}` | | -| supersetCeleryFlower.deploymentAnnotations | object | `{}` | Annotations to be added to supersetCeleryFlower deployment | -| supersetCeleryFlower.enabled | bool | `false` | Enables a Celery flower deployment (management UI to monitor celery jobs) WARNING: on superset 1.x, this requires a Superset image that has `flower<1.0.0` installed (which is NOT the case of the default images) flower>=1.0.0 requires Celery 5+ which Superset 1.5 does not support | -| supersetCeleryFlower.initContainers | list | a container waiting for postgres and redis | List of init containers | -| supersetCeleryFlower.livenessProbe.failureThreshold | int | `3` | | -| supersetCeleryFlower.livenessProbe.httpGet.path | string | `"/api/workers"` | | -| supersetCeleryFlower.livenessProbe.httpGet.port | string | `"flower"` | | -| supersetCeleryFlower.livenessProbe.initialDelaySeconds | int | `5` | | -| supersetCeleryFlower.livenessProbe.periodSeconds | int | `5` | | -| supersetCeleryFlower.livenessProbe.successThreshold | int | `1` | | -| supersetCeleryFlower.livenessProbe.timeoutSeconds | int | `1` | | -| supersetCeleryFlower.podAnnotations | object | `{}` | Annotations to be added to supersetCeleryFlower pods | -| supersetCeleryFlower.podLabels | object | `{}` | Labels to be added to supersetCeleryFlower pods | -| supersetCeleryFlower.podSecurityContext | object | `{}` | | -| supersetCeleryFlower.readinessProbe.failureThreshold | int | `3` | | -| supersetCeleryFlower.readinessProbe.httpGet.path | string | 
`"/api/workers"` | | -| supersetCeleryFlower.readinessProbe.httpGet.port | string | `"flower"` | | -| supersetCeleryFlower.readinessProbe.initialDelaySeconds | int | `5` | | -| supersetCeleryFlower.readinessProbe.periodSeconds | int | `5` | | -| supersetCeleryFlower.readinessProbe.successThreshold | int | `1` | | -| supersetCeleryFlower.readinessProbe.timeoutSeconds | int | `1` | | -| supersetCeleryFlower.replicaCount | int | `1` | | -| supersetCeleryFlower.resources | object | `{}` | Resource settings for the CeleryBeat pods - these settings overwrite might existing values from the global resources object defined above. | -| supersetCeleryFlower.service.annotations | object | `{}` | | -| supersetCeleryFlower.service.loadBalancerIP | string | `nil` | | -| supersetCeleryFlower.service.nodePort.http | int | `"nil"` | | -| supersetCeleryFlower.service.port | int | `5555` | | -| supersetCeleryFlower.service.type | string | `"ClusterIP"` | | -| supersetCeleryFlower.startupProbe.failureThreshold | int | `60` | | -| supersetCeleryFlower.startupProbe.httpGet.path | string | `"/api/workers"` | | -| supersetCeleryFlower.startupProbe.httpGet.port | string | `"flower"` | | -| supersetCeleryFlower.startupProbe.initialDelaySeconds | int | `5` | | -| supersetCeleryFlower.startupProbe.periodSeconds | int | `5` | | -| supersetCeleryFlower.startupProbe.successThreshold | int | `1` | | -| supersetCeleryFlower.startupProbe.timeoutSeconds | int | `1` | | -| supersetCeleryFlower.topologySpreadConstraints | list | `[]` | TopologySpreadConstrains to be added to supersetCeleryFlower deployments | -| supersetNode.affinity | object | `{}` | Affinity to be added to supersetNode deployment | -| supersetNode.command | list | See `values.yaml` | Startup command | -| supersetNode.connections.db_host | string | `"{{ .Release.Name }}-postgresql"` | | -| supersetNode.connections.db_name | string | `"superset"` | | -| supersetNode.connections.db_pass | string | `"superset"` | | -| 
supersetNode.connections.db_port | string | `"5432"` | | -| supersetNode.connections.db_user | string | `"superset"` | | -| supersetNode.connections.redis_host | string | `"{{ .Release.Name }}-redis-headless"` | Change in case of bringing your own redis and then also set redis.enabled:false | -| supersetNode.connections.redis_port | string | `"6379"` | | -| supersetNode.containerSecurityContext | object | `{}` | | -| supersetNode.deploymentAnnotations | object | `{}` | Annotations to be added to supersetNode deployment | -| supersetNode.deploymentLabels | object | `{}` | Labels to be added to supersetNode deployment | -| supersetNode.env | object | `{}` | | -| supersetNode.extraContainers | list | `[]` | Launch additional containers into supersetNode pod | -| supersetNode.forceReload | bool | `false` | If true, forces deployment to reload on each upgrade | -| supersetNode.initContainers | list | a container waiting for postgres | Init containers | -| supersetNode.livenessProbe.failureThreshold | int | `3` | | -| supersetNode.livenessProbe.httpGet.path | string | `"/health"` | | -| supersetNode.livenessProbe.httpGet.port | string | `"http"` | | -| supersetNode.livenessProbe.initialDelaySeconds | int | `15` | | -| supersetNode.livenessProbe.periodSeconds | int | `15` | | -| supersetNode.livenessProbe.successThreshold | int | `1` | | -| supersetNode.livenessProbe.timeoutSeconds | int | `1` | | -| supersetNode.podAnnotations | object | `{}` | Annotations to be added to supersetNode pods | -| supersetNode.podLabels | object | `{}` | Labels to be added to supersetNode pods | -| supersetNode.podSecurityContext | object | `{}` | | -| supersetNode.readinessProbe.failureThreshold | int | `3` | | -| supersetNode.readinessProbe.httpGet.path | string | `"/health"` | | -| supersetNode.readinessProbe.httpGet.port | string | `"http"` | | -| supersetNode.readinessProbe.initialDelaySeconds | int | `15` | | -| supersetNode.readinessProbe.periodSeconds | int | `15` | | -| 
supersetNode.readinessProbe.successThreshold | int | `1` | | -| supersetNode.readinessProbe.timeoutSeconds | int | `1` | | -| supersetNode.replicaCount | int | `1` | | -| supersetNode.resources | object | `{}` | Resource settings for the supersetNode pods - these settings overwrite might existing values from the global resources object defined above. | -| supersetNode.startupProbe.failureThreshold | int | `60` | | -| supersetNode.startupProbe.httpGet.path | string | `"/health"` | | -| supersetNode.startupProbe.httpGet.port | string | `"http"` | | -| supersetNode.startupProbe.initialDelaySeconds | int | `15` | | -| supersetNode.startupProbe.periodSeconds | int | `5` | | -| supersetNode.startupProbe.successThreshold | int | `1` | | -| supersetNode.startupProbe.timeoutSeconds | int | `1` | | -| supersetNode.strategy | object | `{}` | | -| supersetNode.topologySpreadConstraints | list | `[]` | TopologySpreadConstrains to be added to supersetNode deployments | -| supersetWebsockets.affinity | object | `{}` | Affinity to be added to supersetWebsockets deployment | -| supersetWebsockets.command | list | `[]` | | -| supersetWebsockets.config | object | see `values.yaml` | The config.json to pass to the server, see https://github.com/apache/superset/tree/master/superset-websocket Note that the configuration can also read from environment variables (which will have priority), see https://github.com/apache/superset/blob/master/superset-websocket/src/config.ts for a list of supported variables | -| supersetWebsockets.containerSecurityContext | object | `{}` | | -| supersetWebsockets.deploymentAnnotations | object | `{}` | | -| supersetWebsockets.enabled | bool | `false` | This is only required if you intend to use `GLOBAL_ASYNC_QUERIES` in `ws` mode see https://github.com/apache/superset/blob/master/CONTRIBUTING.md#async-chart-queries | -| supersetWebsockets.image.pullPolicy | string | `"IfNotPresent"` | | -| supersetWebsockets.image.repository | string | 
`"oneacrefund/superset-websocket"` | There is no official image (yet), this one is community-supported | -| supersetWebsockets.image.tag | string | `"latest"` | | -| supersetWebsockets.ingress.path | string | `"/ws"` | | -| supersetWebsockets.ingress.pathType | string | `"Prefix"` | | -| supersetWebsockets.livenessProbe.failureThreshold | int | `3` | | -| supersetWebsockets.livenessProbe.httpGet.path | string | `"/health"` | | -| supersetWebsockets.livenessProbe.httpGet.port | string | `"ws"` | | -| supersetWebsockets.livenessProbe.initialDelaySeconds | int | `5` | | -| supersetWebsockets.livenessProbe.periodSeconds | int | `5` | | -| supersetWebsockets.livenessProbe.successThreshold | int | `1` | | -| supersetWebsockets.livenessProbe.timeoutSeconds | int | `1` | | -| supersetWebsockets.podAnnotations | object | `{}` | | -| supersetWebsockets.podLabels | object | `{}` | | -| supersetWebsockets.podSecurityContext | object | `{}` | | -| supersetWebsockets.readinessProbe.failureThreshold | int | `3` | | -| supersetWebsockets.readinessProbe.httpGet.path | string | `"/health"` | | -| supersetWebsockets.readinessProbe.httpGet.port | string | `"ws"` | | -| supersetWebsockets.readinessProbe.initialDelaySeconds | int | `5` | | -| supersetWebsockets.readinessProbe.periodSeconds | int | `5` | | -| supersetWebsockets.readinessProbe.successThreshold | int | `1` | | -| supersetWebsockets.readinessProbe.timeoutSeconds | int | `1` | | -| supersetWebsockets.replicaCount | int | `1` | | -| supersetWebsockets.resources | object | `{}` | | -| supersetWebsockets.service.annotations | object | `{}` | | -| supersetWebsockets.service.loadBalancerIP | string | `nil` | | -| supersetWebsockets.service.nodePort.http | int | `"nil"` | | -| supersetWebsockets.service.port | int | `8080` | | -| supersetWebsockets.service.type | string | `"ClusterIP"` | | -| supersetWebsockets.startupProbe.failureThreshold | int | `60` | | -| supersetWebsockets.startupProbe.httpGet.path | string | `"/health"` | | 
-| supersetWebsockets.startupProbe.httpGet.port | string | `"ws"` | | -| supersetWebsockets.startupProbe.initialDelaySeconds | int | `5` | | -| supersetWebsockets.startupProbe.periodSeconds | int | `5` | | -| supersetWebsockets.startupProbe.successThreshold | int | `1` | | -| supersetWebsockets.startupProbe.timeoutSeconds | int | `1` | | -| supersetWebsockets.strategy | object | `{}` | | -| supersetWebsockets.topologySpreadConstraints | list | `[]` | TopologySpreadConstrains to be added to supersetWebsockets deployments | -| supersetWorker.affinity | object | `{}` | Affinity to be added to supersetWorker deployment | -| supersetWorker.command | list | a `celery worker` command | Worker startup command | -| supersetWorker.containerSecurityContext | object | `{}` | | -| supersetWorker.deploymentAnnotations | object | `{}` | Annotations to be added to supersetWorker deployment | -| supersetWorker.deploymentLabels | object | `{}` | Labels to be added to supersetWorker deployment | -| supersetWorker.extraContainers | list | `[]` | Launch additional containers into supersetWorker pod | -| supersetWorker.forceReload | bool | `false` | If true, forces deployment to reload on each upgrade | -| supersetWorker.initContainers | list | a container waiting for postgres and redis | Init container | -| supersetWorker.livenessProbe.exec.command | list | a `celery inspect ping` command | Liveness probe command | -| supersetWorker.livenessProbe.failureThreshold | int | `3` | | -| supersetWorker.livenessProbe.initialDelaySeconds | int | `120` | | -| supersetWorker.livenessProbe.periodSeconds | int | `60` | | -| supersetWorker.livenessProbe.successThreshold | int | `1` | | -| supersetWorker.livenessProbe.timeoutSeconds | int | `60` | | -| supersetWorker.podAnnotations | object | `{}` | Annotations to be added to supersetWorker pods | -| supersetWorker.podLabels | object | `{}` | Labels to be added to supersetWorker pods | -| supersetWorker.podSecurityContext | object | `{}` | | -| 
supersetWorker.readinessProbe | object | `{}` | No startup/readiness probes by default since we don't really care about its startup time (it doesn't serve traffic) | -| supersetWorker.replicaCount | int | `1` | | -| supersetWorker.resources | object | `{}` | Resource settings for the supersetWorker pods - these settings overwrite might existing values from the global resources object defined above. | -| supersetWorker.startupProbe | object | `{}` | No startup/readiness probes by default since we don't really care about its startup time (it doesn't serve traffic) | -| supersetWorker.strategy | object | `{}` | | -| supersetWorker.topologySpreadConstraints | list | `[]` | TopologySpreadConstrains to be added to supersetWorker deployments | -| tolerations | list | `[]` | | -| topologySpreadConstraints | list | `[]` | TopologySpreadConstrains to be added to all deployments | +| Key | Type | Default | Description | +| ------------------------------------------------------- | ------ | -------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| affinity | object | `{}` | | +| bootstrapScript | string | see `values.yaml` | Install additional packages and do any other bootstrap configuration in this script For production clusters it's recommended to build own image with this step done in CI | +| configFromSecret | string | `"{{ template \"superset.fullname\" . 
}}-config"` | The name of the secret which we will use to generate a superset_config.py file Note: this secret must have the key superset_config.py in it and can include other files as well | +| configMountPath | string | `"/app/pythonpath"` | | +| configOverrides | object | `{}` | A dictionary of overrides to append at the end of superset_config.py - the name does not matter WARNING: the order is not guaranteed Files can be passed as helm --set-file configOverrides.my-override=my-file.py | +| configOverridesFiles | object | `{}` | Same as above but the values are files | +| envFromSecret | string | `"{{ template \"superset.fullname\" . }}-env"` | The name of the secret which we will use to populate env vars in deployed pods This can be useful for secret keys, etc. | +| envFromSecrets | list | `[]` | This can be a list of templated strings | +| extraConfigMountPath | string | `"/app/configs"` | | +| extraConfigs | object | `{}` | Extra files to mount on `/app/pythonpath` | +| extraEnv | object | `{}` | Extra environment variables that will be passed into pods | +| extraEnvRaw | list | `[]` | Extra environment variables in RAW format that will be passed into pods | +| extraSecretEnv | object | `{}` | Extra environment variables to pass as secrets | +| extraSecrets | object | `{}` | Extra files to mount on `/app/pythonpath` as secrets | +| extraVolumeMounts | list | `[]` | | +| extraVolumes | list | `[]` | | +| fullnameOverride | string | `nil` | Provide a name to override the full names of resources | +| hostAliases | list | `[]` | Custom hostAliases for all superset pods # https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/ | +| image.pullPolicy | string | `"IfNotPresent"` | | +| image.repository | string | `"apachesuperset.docker.scarf.sh/apache/superset"` | | +| image.tag | string | `""` | | +| imagePullSecrets | list | `[]` | | +| ingress.annotations | object | `{}` | | +| ingress.enabled | bool | `false` | | +| ingress.extraHostsRaw | list | 
`[]` | | +| ingress.hosts[0] | string | `"chart-example.local"` | | +| ingress.ingressClassName | string | `nil` | | +| ingress.path | string | `"/"` | | +| ingress.pathType | string | `"ImplementationSpecific"` | | +| ingress.tls | list | `[]` | | +| init.adminUser.email | string | `"admin@superset.com"` | | +| init.adminUser.firstname | string | `"Superset"` | | +| init.adminUser.lastname | string | `"Admin"` | | +| init.adminUser.password | string | `"admin"` | | +| init.adminUser.username | string | `"admin"` | | +| init.affinity | object | `{}` | | +| init.command | list | a `superset_init.sh` command | Command | +| init.containerSecurityContext | object | `{}` | | +| init.createAdmin | bool | `true` | | +| init.enabled | bool | `true` | | +| init.initContainers | list | a container waiting for postgres | List of initContainers | +| init.initscript | string | a script to create admin user and initailize roles | A Superset init script | +| init.jobAnnotations."helm.sh/hook" | string | `"post-install,post-upgrade"` | | +| init.jobAnnotations."helm.sh/hook-delete-policy" | string | `"before-hook-creation"` | | +| init.loadExamples | bool | `false` | | +| init.podAnnotations | object | `{}` | | +| init.podSecurityContext | object | `{}` | | +| init.resources | object | `{}` | | +| init.tolerations | list | `[]` | | +| init.topologySpreadConstraints | list | `[]` | TopologySpreadConstrains to be added to init job | +| initImage.pullPolicy | string | `"IfNotPresent"` | | +| initImage.repository | string | `"apache/superset"` | | +| initImage.tag | string | `"dockerize"` | | +| nameOverride | string | `nil` | Provide a name to override the name of the chart | +| nodeSelector | object | `{}` | | +| postgresql | object | see `values.yaml` | Configuration values for the postgresql dependency. ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md | +| redis | object | see `values.yaml` | Configuration values for the Redis dependency. 
ref: https://github.com/bitnami/charts/blob/master/bitnami/redis More documentation can be found here: https://artifacthub.io/packages/helm/bitnami/redis | +| resources | object | `{}` | | +| runAsUser | int | `0` | User ID directive. This user must have enough permissions to run the bootstrap script Running containers as root is not recommended in production. Change this to another UID - e.g. 1000 to be more secure | +| service.annotations | object | `{}` | | +| service.loadBalancerIP | string | `nil` | | +| service.nodePort.http | int | `"nil"` | | +| service.port | int | `8088` | | +| service.type | string | `"ClusterIP"` | | +| serviceAccount.annotations | object | `{}` | | +| serviceAccount.create | bool | `false` | Create custom service account for Superset. If create: true and serviceAccountName is not provided, `superset.fullname` will be used. | +| serviceAccountName | string | `nil` | Specify service account name to be used | +| supersetCeleryBeat.affinity | object | `{}` | Affinity to be added to supersetCeleryBeat deployment | +| supersetCeleryBeat.command | list | a `celery beat` command | Command | +| supersetCeleryBeat.containerSecurityContext | object | `{}` | | +| supersetCeleryBeat.deploymentAnnotations | object | `{}` | Annotations to be added to supersetCeleryBeat deployment | +| supersetCeleryBeat.enabled | bool | `false` | This is only required if you intend to use alerts and reports | +| supersetCeleryBeat.forceReload | bool | `false` | If true, forces deployment to reload on each upgrade | +| supersetCeleryBeat.initContainers | list | a container waiting for postgres | List of init containers | +| supersetCeleryBeat.podAnnotations | object | `{}` | Annotations to be added to supersetCeleryBeat pods | +| supersetCeleryBeat.podLabels | object | `{}` | Labels to be added to supersetCeleryBeat pods | +| supersetCeleryBeat.podSecurityContext | object | `{}` | | +| supersetCeleryBeat.resources | object | `{}` | Resource settings for the CeleryBeat 
pods - these settings overwrite might existing values from the global resources object defined above. | +| supersetCeleryBeat.topologySpreadConstraints | list | `[]` | TopologySpreadConstrains to be added to supersetCeleryBeat deployments | +| supersetCeleryFlower.affinity | object | `{}` | Affinity to be added to supersetCeleryFlower deployment | +| supersetCeleryFlower.command | list | a `celery flower` command | Command | +| supersetCeleryFlower.containerSecurityContext | object | `{}` | | +| supersetCeleryFlower.deploymentAnnotations | object | `{}` | Annotations to be added to supersetCeleryFlower deployment | +| supersetCeleryFlower.enabled | bool | `false` | Enables a Celery flower deployment (management UI to monitor celery jobs) WARNING: on superset 1.x, this requires a Superset image that has `flower<1.0.0` installed (which is NOT the case of the default images) flower>=1.0.0 requires Celery 5+ which Superset 1.5 does not support | +| supersetCeleryFlower.initContainers | list | a container waiting for postgres and redis | List of init containers | +| supersetCeleryFlower.livenessProbe.failureThreshold | int | `3` | | +| supersetCeleryFlower.livenessProbe.httpGet.path | string | `"/api/workers"` | | +| supersetCeleryFlower.livenessProbe.httpGet.port | string | `"flower"` | | +| supersetCeleryFlower.livenessProbe.initialDelaySeconds | int | `5` | | +| supersetCeleryFlower.livenessProbe.periodSeconds | int | `5` | | +| supersetCeleryFlower.livenessProbe.successThreshold | int | `1` | | +| supersetCeleryFlower.livenessProbe.timeoutSeconds | int | `1` | | +| supersetCeleryFlower.podAnnotations | object | `{}` | Annotations to be added to supersetCeleryFlower pods | +| supersetCeleryFlower.podLabels | object | `{}` | Labels to be added to supersetCeleryFlower pods | +| supersetCeleryFlower.podSecurityContext | object | `{}` | | +| supersetCeleryFlower.readinessProbe.failureThreshold | int | `3` | | +| supersetCeleryFlower.readinessProbe.httpGet.path | string | 
`"/api/workers"` | | +| supersetCeleryFlower.readinessProbe.httpGet.port | string | `"flower"` | | +| supersetCeleryFlower.readinessProbe.initialDelaySeconds | int | `5` | | +| supersetCeleryFlower.readinessProbe.periodSeconds | int | `5` | | +| supersetCeleryFlower.readinessProbe.successThreshold | int | `1` | | +| supersetCeleryFlower.readinessProbe.timeoutSeconds | int | `1` | | +| supersetCeleryFlower.replicaCount | int | `1` | | +| supersetCeleryFlower.resources | object | `{}` | Resource settings for the CeleryBeat pods - these settings overwrite might existing values from the global resources object defined above. | +| supersetCeleryFlower.service.annotations | object | `{}` | | +| supersetCeleryFlower.service.loadBalancerIP | string | `nil` | | +| supersetCeleryFlower.service.nodePort.http | int | `"nil"` | | +| supersetCeleryFlower.service.port | int | `5555` | | +| supersetCeleryFlower.service.type | string | `"ClusterIP"` | | +| supersetCeleryFlower.startupProbe.failureThreshold | int | `60` | | +| supersetCeleryFlower.startupProbe.httpGet.path | string | `"/api/workers"` | | +| supersetCeleryFlower.startupProbe.httpGet.port | string | `"flower"` | | +| supersetCeleryFlower.startupProbe.initialDelaySeconds | int | `5` | | +| supersetCeleryFlower.startupProbe.periodSeconds | int | `5` | | +| supersetCeleryFlower.startupProbe.successThreshold | int | `1` | | +| supersetCeleryFlower.startupProbe.timeoutSeconds | int | `1` | | +| supersetCeleryFlower.topologySpreadConstraints | list | `[]` | TopologySpreadConstrains to be added to supersetCeleryFlower deployments | +| supersetNode.affinity | object | `{}` | Affinity to be added to supersetNode deployment | +| supersetNode.command | list | See `values.yaml` | Startup command | +| supersetNode.connections.db_host | string | `"{{ .Release.Name }}-postgresql"` | | +| supersetNode.connections.db_name | string | `"superset"` | | +| supersetNode.connections.db_pass | string | `"superset"` | | +| 
supersetNode.connections.db_port | string | `"5432"` | | +| supersetNode.connections.db_user | string | `"superset"` | | +| supersetNode.connections.redis_host | string | `"{{ .Release.Name }}-redis-headless"` | Change in case of bringing your own redis and then also set redis.enabled:false | +| supersetNode.connections.redis_port | string | `"6379"` | | +| supersetNode.containerSecurityContext | object | `{}` | | +| supersetNode.deploymentAnnotations | object | `{}` | Annotations to be added to supersetNode deployment | +| supersetNode.deploymentLabels | object | `{}` | Labels to be added to supersetNode deployment | +| supersetNode.env | object | `{}` | | +| supersetNode.extraContainers | list | `[]` | Launch additional containers into supersetNode pod | +| supersetNode.forceReload | bool | `false` | If true, forces deployment to reload on each upgrade | +| supersetNode.initContainers | list | a container waiting for postgres | Init containers | +| supersetNode.livenessProbe.failureThreshold | int | `3` | | +| supersetNode.livenessProbe.httpGet.path | string | `"/health"` | | +| supersetNode.livenessProbe.httpGet.port | string | `"http"` | | +| supersetNode.livenessProbe.initialDelaySeconds | int | `15` | | +| supersetNode.livenessProbe.periodSeconds | int | `15` | | +| supersetNode.livenessProbe.successThreshold | int | `1` | | +| supersetNode.livenessProbe.timeoutSeconds | int | `1` | | +| supersetNode.podAnnotations | object | `{}` | Annotations to be added to supersetNode pods | +| supersetNode.podLabels | object | `{}` | Labels to be added to supersetNode pods | +| supersetNode.podSecurityContext | object | `{}` | | +| supersetNode.readinessProbe.failureThreshold | int | `3` | | +| supersetNode.readinessProbe.httpGet.path | string | `"/health"` | | +| supersetNode.readinessProbe.httpGet.port | string | `"http"` | | +| supersetNode.readinessProbe.initialDelaySeconds | int | `15` | | +| supersetNode.readinessProbe.periodSeconds | int | `15` | | +| 
supersetNode.readinessProbe.successThreshold | int | `1` | | +| supersetNode.readinessProbe.timeoutSeconds | int | `1` | | +| supersetNode.replicaCount | int | `1` | | +| supersetNode.resources | object | `{}` | Resource settings for the supersetNode pods - these settings might overwrite existing values from the global resources object defined above. | +| supersetNode.startupProbe.failureThreshold | int | `60` | | +| supersetNode.startupProbe.httpGet.path | string | `"/health"` | | +| supersetNode.startupProbe.httpGet.port | string | `"http"` | | +| supersetNode.startupProbe.initialDelaySeconds | int | `15` | | +| supersetNode.startupProbe.periodSeconds | int | `5` | | +| supersetNode.startupProbe.successThreshold | int | `1` | | +| supersetNode.startupProbe.timeoutSeconds | int | `1` | | +| supersetNode.strategy | object | `{}` | | +| supersetNode.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to supersetNode deployments | +| supersetWebsockets.affinity | object | `{}` | Affinity to be added to supersetWebsockets deployment | +| supersetWebsockets.command | list | `[]` | | +| supersetWebsockets.config | object | see `values.yaml` | The config.json to pass to the server, see https://github.com/apache/superset/tree/master/superset-websocket. Note that the configuration can also be read from environment variables (which will have priority), see https://github.com/apache/superset/blob/master/superset-websocket/src/config.ts for a list of supported variables | +| supersetWebsockets.containerSecurityContext | object | `{}` | | +| supersetWebsockets.deploymentAnnotations | object | `{}` | | +| supersetWebsockets.enabled | bool | `false` | This is only required if you intend to use `GLOBAL_ASYNC_QUERIES` in `ws` mode, see https://github.com/apache/superset/blob/master/CONTRIBUTING.md#async-chart-queries | +| supersetWebsockets.image.pullPolicy | string | `"IfNotPresent"` | | +| supersetWebsockets.image.repository | string | 
`"oneacrefund/superset-websocket"` | There is no official image (yet), this one is community-supported | +| supersetWebsockets.image.tag | string | `"latest"` | | +| supersetWebsockets.ingress.path | string | `"/ws"` | | +| supersetWebsockets.ingress.pathType | string | `"Prefix"` | | +| supersetWebsockets.livenessProbe.failureThreshold | int | `3` | | +| supersetWebsockets.livenessProbe.httpGet.path | string | `"/health"` | | +| supersetWebsockets.livenessProbe.httpGet.port | string | `"ws"` | | +| supersetWebsockets.livenessProbe.initialDelaySeconds | int | `5` | | +| supersetWebsockets.livenessProbe.periodSeconds | int | `5` | | +| supersetWebsockets.livenessProbe.successThreshold | int | `1` | | +| supersetWebsockets.livenessProbe.timeoutSeconds | int | `1` | | +| supersetWebsockets.podAnnotations | object | `{}` | | +| supersetWebsockets.podLabels | object | `{}` | | +| supersetWebsockets.podSecurityContext | object | `{}` | | +| supersetWebsockets.readinessProbe.failureThreshold | int | `3` | | +| supersetWebsockets.readinessProbe.httpGet.path | string | `"/health"` | | +| supersetWebsockets.readinessProbe.httpGet.port | string | `"ws"` | | +| supersetWebsockets.readinessProbe.initialDelaySeconds | int | `5` | | +| supersetWebsockets.readinessProbe.periodSeconds | int | `5` | | +| supersetWebsockets.readinessProbe.successThreshold | int | `1` | | +| supersetWebsockets.readinessProbe.timeoutSeconds | int | `1` | | +| supersetWebsockets.replicaCount | int | `1` | | +| supersetWebsockets.resources | object | `{}` | | +| supersetWebsockets.service.annotations | object | `{}` | | +| supersetWebsockets.service.loadBalancerIP | string | `nil` | | +| supersetWebsockets.service.nodePort.http | int | `"nil"` | | +| supersetWebsockets.service.port | int | `8080` | | +| supersetWebsockets.service.type | string | `"ClusterIP"` | | +| supersetWebsockets.startupProbe.failureThreshold | int | `60` | | +| supersetWebsockets.startupProbe.httpGet.path | string | `"/health"` | | 
+| supersetWebsockets.startupProbe.httpGet.port | string | `"ws"` | | +| supersetWebsockets.startupProbe.initialDelaySeconds | int | `5` | | +| supersetWebsockets.startupProbe.periodSeconds | int | `5` | | +| supersetWebsockets.startupProbe.successThreshold | int | `1` | | +| supersetWebsockets.startupProbe.timeoutSeconds | int | `1` | | +| supersetWebsockets.strategy | object | `{}` | | +| supersetWebsockets.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to supersetWebsockets deployments | +| supersetWorker.affinity | object | `{}` | Affinity to be added to supersetWorker deployment | +| supersetWorker.command | list | a `celery worker` command | Worker startup command | +| supersetWorker.containerSecurityContext | object | `{}` | | +| supersetWorker.deploymentAnnotations | object | `{}` | Annotations to be added to supersetWorker deployment | +| supersetWorker.deploymentLabels | object | `{}` | Labels to be added to supersetWorker deployment | +| supersetWorker.extraContainers | list | `[]` | Launch additional containers into supersetWorker pod | +| supersetWorker.forceReload | bool | `false` | If true, forces deployment to reload on each upgrade | +| supersetWorker.initContainers | list | a container waiting for postgres and redis | Init containers | +| supersetWorker.livenessProbe.exec.command | list | a `celery inspect ping` command | Liveness probe command | +| supersetWorker.livenessProbe.failureThreshold | int | `3` | | +| supersetWorker.livenessProbe.initialDelaySeconds | int | `120` | | +| supersetWorker.livenessProbe.periodSeconds | int | `60` | | +| supersetWorker.livenessProbe.successThreshold | int | `1` | | +| supersetWorker.livenessProbe.timeoutSeconds | int | `60` | | +| supersetWorker.podAnnotations | object | `{}` | Annotations to be added to supersetWorker pods | +| supersetWorker.podLabels | object | `{}` | Labels to be added to supersetWorker pods | +| supersetWorker.podSecurityContext | object | `{}` | | +| 
supersetWorker.readinessProbe | object | `{}` | No startup/readiness probes by default since we don't really care about its startup time (it doesn't serve traffic) | +| supersetWorker.replicaCount | int | `1` | | +| supersetWorker.resources | object | `{}` | Resource settings for the supersetWorker pods - these settings might overwrite existing values from the global resources object defined above. | +| supersetWorker.startupProbe | object | `{}` | No startup/readiness probes by default since we don't really care about its startup time (it doesn't serve traffic) | +| supersetWorker.strategy | object | `{}` | | +| supersetWorker.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to supersetWorker deployments | +| tolerations | list | `[]` | | +| topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to all deployments | diff --git a/helm/superset/templates/deployment-beat.yaml b/helm/superset/templates/deployment-beat.yaml index 43754efb06147..eab9a6f3eb4f5 100644 --- a/helm/superset/templates/deployment-beat.yaml +++ b/helm/superset/templates/deployment-beat.yaml @@ -42,6 +42,7 @@ spec: metadata: annotations: checksum/superset_config.py: {{ include "superset-config" . | sha256sum }} + checksum/superset_bootstrap.sh: {{ tpl .Values.bootstrapScript . | sha256sum }} checksum/connections: {{ .Values.supersetNode.connections | toYaml | sha256sum }} checksum/extraConfigs: {{ .Values.extraConfigs | toYaml | sha256sum }} checksum/extraSecrets: {{ .Values.extraSecrets | toYaml | sha256sum }} diff --git a/helm/superset/templates/deployment-worker.yaml b/helm/superset/templates/deployment-worker.yaml index 7f2bcf8df3cd5..be710b723a04b 100644 --- a/helm/superset/templates/deployment-worker.yaml +++ b/helm/superset/templates/deployment-worker.yaml @@ -46,6 +46,7 @@ spec: metadata: annotations: checksum/superset_config.py: {{ include "superset-config" . 
| sha256sum }} + checksum/superset_bootstrap.sh: {{ tpl .Values.bootstrapScript . | sha256sum }} checksum/connections: {{ .Values.supersetNode.connections | toYaml | sha256sum }} checksum/extraConfigs: {{ .Values.extraConfigs | toYaml | sha256sum }} checksum/extraSecrets: {{ .Values.extraSecrets | toYaml | sha256sum }} From 8d873e6da6c00d9ed853f86f4f9dfd06c5a19aac Mon Sep 17 00:00:00 2001 From: yousoph Date: Thu, 16 Nov 2023 09:48:54 -0800 Subject: [PATCH 32/38] fix(rls): Update text from tables to datasets in RLS modal (#25997) (cherry picked from commit 210f1f8f95531365da2c5a5897e801c4cb7edacd) --- superset-frontend/src/features/rls/RowLevelSecurityModal.tsx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/superset-frontend/src/features/rls/RowLevelSecurityModal.tsx b/superset-frontend/src/features/rls/RowLevelSecurityModal.tsx index dac4858e4adb9..20197ecf5862e 100644 --- a/superset-frontend/src/features/rls/RowLevelSecurityModal.tsx +++ b/superset-frontend/src/features/rls/RowLevelSecurityModal.tsx @@ -385,10 +385,10 @@ function RowLevelSecurityModal(props: RowLevelSecurityModalProps) {
- {t('Tables')} * + {t('Datasets')} *
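The `checksum/superset_bootstrap.sh` annotation added to the beat and worker deployment templates above follows the standard Helm pattern for rolling pods when configuration changes: hashing the rendered content into a pod-template annotation makes the pod spec differ on upgrade, which triggers a new rollout. A minimal sketch of the same pattern (the value and annotation names here are illustrative, not taken verbatim from this chart):

```yaml
# deployment.yaml (template fragment): any change to .Values.bootstrapScript
# yields a different sha256 hash, so the pod template changes on
# `helm upgrade` and Kubernetes recreates the pods with the new script.
spec:
  template:
    metadata:
      annotations:
        checksum/bootstrap.sh: {{ tpl .Values.bootstrapScript . | sha256sum }}
```

Without such an annotation, editing the script would only update the rendered ConfigMap or Secret, and already-running pods would keep the stale copy until they happened to be recreated.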
From ee1ba7e17223b3108ae1ec54de7ccff3a6d461ec Mon Sep 17 00:00:00 2001 From: Jack Fragassi Date: Thu, 16 Nov 2023 12:06:05 -0800 Subject: [PATCH 33/38] fix: Make Select component fire onChange listener when a selection is pasted in (#25993) (cherry picked from commit 5fccf67cdc4a84edb067a3cde48efacc76dbe33a) --- helm/superset/README.md | 474 +++++++++--------- .../components/Select/AsyncSelect.test.tsx | 14 + .../src/components/Select/AsyncSelect.tsx | 1 + .../src/components/Select/Select.test.tsx | 14 + .../src/components/Select/Select.tsx | 1 + superset/models/helpers.py | 16 +- 6 files changed, 274 insertions(+), 246 deletions(-) diff --git a/helm/superset/README.md b/helm/superset/README.md index 0a06e817e7a78..b8d4385008950 100644 --- a/helm/superset/README.md +++ b/helm/superset/README.md @@ -31,7 +31,7 @@ Apache Superset is a modern, enterprise-ready business intelligence web applicat ## Source Code -- +* ## TL;DR @@ -42,242 +42,242 @@ helm install my-superset superset/superset ## Requirements -| Repository | Name | Version | -| ---------------------------------- | ---------- | ------- | -| https://charts.bitnami.com/bitnami | postgresql | 12.1.6 | -| https://charts.bitnami.com/bitnami | redis | 17.9.4 | +| Repository | Name | Version | +|------------|------|---------| +| https://charts.bitnami.com/bitnami | postgresql | 12.1.6 | +| https://charts.bitnami.com/bitnami | redis | 17.9.4 | ## Values -| Key | Type | Default | Description | -| ------------------------------------------------------- | ------ | -------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| affinity | object | `{}` | | -| bootstrapScript | string | see `values.yaml` | 
Install additional packages and do any other bootstrap configuration in this script. For production clusters it's recommended to build your own image with this step done in CI | +| configFromSecret | string | `"{{ template \"superset.fullname\" . }}-config"` | The name of the secret which we will use to generate a superset_config.py file. Note: this secret must have the key superset_config.py in it and can include other files as well | +| configMountPath | string | `"/app/pythonpath"` | | +| configOverrides | object | `{}` | A dictionary of overrides to append at the end of superset_config.py - the name does not matter. WARNING: the order is not guaranteed. Files can be passed as helm --set-file configOverrides.my-override=my-file.py | +| configOverridesFiles | object | `{}` | Same as above but the values are files | +| envFromSecret | string | `"{{ template \"superset.fullname\" . }}-env"` | The name of the secret which we will use to populate env vars in deployed pods. This can be useful for secret keys, etc. 
| -| envFromSecrets | list | `[]` | This can be a list of templated strings | -| extraConfigMountPath | string | `"/app/configs"` | | -| extraConfigs | object | `{}` | Extra files to mount on `/app/pythonpath` | -| extraEnv | object | `{}` | Extra environment variables that will be passed into pods | -| extraEnvRaw | list | `[]` | Extra environment variables in RAW format that will be passed into pods | -| extraSecretEnv | object | `{}` | Extra environment variables to pass as secrets | -| extraSecrets | object | `{}` | Extra files to mount on `/app/pythonpath` as secrets | -| extraVolumeMounts | list | `[]` | | -| extraVolumes | list | `[]` | | -| fullnameOverride | string | `nil` | Provide a name to override the full names of resources | -| hostAliases | list | `[]` | Custom hostAliases for all superset pods # https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/ | -| image.pullPolicy | string | `"IfNotPresent"` | | -| image.repository | string | `"apachesuperset.docker.scarf.sh/apache/superset"` | | -| image.tag | string | `""` | | -| imagePullSecrets | list | `[]` | | -| ingress.annotations | object | `{}` | | -| ingress.enabled | bool | `false` | | -| ingress.extraHostsRaw | list | `[]` | | -| ingress.hosts[0] | string | `"chart-example.local"` | | -| ingress.ingressClassName | string | `nil` | | -| ingress.path | string | `"/"` | | -| ingress.pathType | string | `"ImplementationSpecific"` | | -| ingress.tls | list | `[]` | | -| init.adminUser.email | string | `"admin@superset.com"` | | -| init.adminUser.firstname | string | `"Superset"` | | -| init.adminUser.lastname | string | `"Admin"` | | -| init.adminUser.password | string | `"admin"` | | -| init.adminUser.username | string | `"admin"` | | -| init.affinity | object | `{}` | | -| init.command | list | a `superset_init.sh` command | Command | -| init.containerSecurityContext | object | `{}` | | -| init.createAdmin | bool | `true` | | -| init.enabled | bool | `true` | | -| 
init.initContainers | list | a container waiting for postgres | List of initContainers | +| init.initscript | string | a script to create an admin user and initialize roles | A Superset init script | +| init.jobAnnotations."helm.sh/hook" | string | `"post-install,post-upgrade"` | | +| init.jobAnnotations."helm.sh/hook-delete-policy" | string | `"before-hook-creation"` | | +| init.loadExamples | bool | `false` | | +| init.podAnnotations | object | `{}` | | +| init.podSecurityContext | object | `{}` | | +| init.resources | object | `{}` | | +| init.tolerations | list | `[]` | | +| init.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to init job | +| initImage.pullPolicy | string | `"IfNotPresent"` | | +| initImage.repository | string | `"apache/superset"` | | +| initImage.tag | string | `"dockerize"` | | +| nameOverride | string | `nil` | Provide a name to override the name of the chart | +| nodeSelector | object | `{}` | | +| postgresql | object | see `values.yaml` | Configuration values for the postgresql dependency. ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md | +| redis | object | see `values.yaml` | Configuration values for the Redis dependency. ref: https://github.com/bitnami/charts/blob/master/bitnami/redis More documentation can be found here: https://artifacthub.io/packages/helm/bitnami/redis | +| resources | object | `{}` | | +| runAsUser | int | `0` | User ID directive. This user must have enough permissions to run the bootstrap script. Running containers as root is not recommended in production. Change this to another UID - e.g. 1000 to be more secure | +| service.annotations | object | `{}` | | +| service.loadBalancerIP | string | `nil` | | +| service.nodePort.http | int | `"nil"` | | +| service.port | int | `8088` | | +| service.type | string | `"ClusterIP"` | | +| serviceAccount.annotations | object | `{}` | | +| serviceAccount.create | bool | `false` | Create custom service account for Superset. If create: true and serviceAccountName is not provided, `superset.fullname` will be used. | +| serviceAccountName | string | `nil` | Specify service account name to be used | +| supersetCeleryBeat.affinity | object | `{}` | Affinity to be added to supersetCeleryBeat deployment | +| supersetCeleryBeat.command | list | a `celery beat` command | Command | +| supersetCeleryBeat.containerSecurityContext | object | `{}` | | +| supersetCeleryBeat.deploymentAnnotations | object | `{}` | Annotations to be added to supersetCeleryBeat deployment | +| supersetCeleryBeat.enabled | bool | `false` | This is only required if you intend to use alerts and reports | +| supersetCeleryBeat.forceReload | bool | `false` | If true, forces deployment to reload on each upgrade | +| supersetCeleryBeat.initContainers | list | a container waiting for postgres | List of init containers | +| supersetCeleryBeat.podAnnotations | object | `{}` | Annotations to be added to supersetCeleryBeat pods | +| supersetCeleryBeat.podLabels | object | `{}` | Labels to be added to supersetCeleryBeat pods | +| supersetCeleryBeat.podSecurityContext | object | `{}` | | +| supersetCeleryBeat.resources | object | `{}` | Resource settings for the CeleryBeat pods - these settings might overwrite existing values from the global resources object defined above. 
| supersetCeleryBeat.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to supersetCeleryBeat deployments | +| supersetCeleryFlower.affinity | object | `{}` | Affinity to be added to supersetCeleryFlower deployment | +| supersetCeleryFlower.command | list | a `celery flower` command | Command | +| supersetCeleryFlower.containerSecurityContext | object | `{}` | | +| supersetCeleryFlower.deploymentAnnotations | object | `{}` | Annotations to be added to supersetCeleryFlower deployment | +| supersetCeleryFlower.enabled | bool | `false` | Enables a Celery flower deployment (management UI to monitor celery jobs). WARNING: on superset 1.x, this requires a Superset image that has `flower<1.0.0` installed (which is NOT the case of the default images). flower>=1.0.0 requires Celery 5+, which Superset 1.5 does not support | +| supersetCeleryFlower.initContainers | list | a container waiting for postgres and redis | List of init containers | +| supersetCeleryFlower.livenessProbe.failureThreshold | int | `3` | | +| supersetCeleryFlower.livenessProbe.httpGet.path | string | `"/api/workers"` | | +| supersetCeleryFlower.livenessProbe.httpGet.port | string | `"flower"` | | +| supersetCeleryFlower.livenessProbe.initialDelaySeconds | int | `5` | | +| supersetCeleryFlower.livenessProbe.periodSeconds | int | `5` | | +| supersetCeleryFlower.livenessProbe.successThreshold | int | `1` | | +| supersetCeleryFlower.livenessProbe.timeoutSeconds | int | `1` | | +| supersetCeleryFlower.podAnnotations | object | `{}` | Annotations to be added to supersetCeleryFlower pods | +| supersetCeleryFlower.podLabels | object | `{}` | Labels to be added to supersetCeleryFlower pods | +| supersetCeleryFlower.podSecurityContext | object | `{}` | | +| supersetCeleryFlower.readinessProbe.failureThreshold | int | `3` | | +| supersetCeleryFlower.readinessProbe.httpGet.path | string | `"/api/workers"` | | +| supersetCeleryFlower.readinessProbe.httpGet.port | string | `"flower"` | | +| supersetCeleryFlower.readinessProbe.initialDelaySeconds | int | `5` | | +| supersetCeleryFlower.readinessProbe.periodSeconds | int | `5` | | +| supersetCeleryFlower.readinessProbe.successThreshold | int | `1` | | +| supersetCeleryFlower.readinessProbe.timeoutSeconds | int | `1` | | +| supersetCeleryFlower.replicaCount | int | `1` | | +| supersetCeleryFlower.resources | object | `{}` | Resource settings for the supersetCeleryFlower pods - these settings might overwrite existing values from the global resources object defined above. | +| supersetCeleryFlower.service.annotations | object | `{}` | | +| supersetCeleryFlower.service.loadBalancerIP | string | `nil` | | +| supersetCeleryFlower.service.nodePort.http | int | `"nil"` | | +| supersetCeleryFlower.service.port | int | `5555` | | +| supersetCeleryFlower.service.type | string | `"ClusterIP"` | | +| supersetCeleryFlower.startupProbe.failureThreshold | int | `60` | | +| supersetCeleryFlower.startupProbe.httpGet.path | string | `"/api/workers"` | | +| supersetCeleryFlower.startupProbe.httpGet.port | string | `"flower"` | | +| supersetCeleryFlower.startupProbe.initialDelaySeconds | int | `5` | | +| supersetCeleryFlower.startupProbe.periodSeconds | int | `5` | | +| supersetCeleryFlower.startupProbe.successThreshold | int | `1` | | +| supersetCeleryFlower.startupProbe.timeoutSeconds | int | `1` | | +| supersetCeleryFlower.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to supersetCeleryFlower deployments | +| supersetNode.affinity | object | `{}` | Affinity to be added to supersetNode deployment | +| supersetNode.command | list | See `values.yaml` | Startup command | +| supersetNode.connections.db_host | string | `"{{ .Release.Name }}-postgresql"` | | +| supersetNode.connections.db_name | string | `"superset"` | | +| supersetNode.connections.db_pass | string | `"superset"` | | +| supersetNode.connections.db_port | string | `"5432"` | | +| supersetNode.connections.db_user | string | `"superset"` | | 
-| supersetNode.connections.redis_host | string | `"{{ .Release.Name }}-redis-headless"` | Change in case of bringing your own redis and then also set redis.enabled:false | -| supersetNode.connections.redis_port | string | `"6379"` | | -| supersetNode.containerSecurityContext | object | `{}` | | -| supersetNode.deploymentAnnotations | object | `{}` | Annotations to be added to supersetNode deployment | -| supersetNode.deploymentLabels | object | `{}` | Labels to be added to supersetNode deployment | -| supersetNode.env | object | `{}` | | -| supersetNode.extraContainers | list | `[]` | Launch additional containers into supersetNode pod | -| supersetNode.forceReload | bool | `false` | If true, forces deployment to reload on each upgrade | -| supersetNode.initContainers | list | a container waiting for postgres | Init containers | -| supersetNode.livenessProbe.failureThreshold | int | `3` | | -| supersetNode.livenessProbe.httpGet.path | string | `"/health"` | | -| supersetNode.livenessProbe.httpGet.port | string | `"http"` | | -| supersetNode.livenessProbe.initialDelaySeconds | int | `15` | | -| supersetNode.livenessProbe.periodSeconds | int | `15` | | -| supersetNode.livenessProbe.successThreshold | int | `1` | | -| supersetNode.livenessProbe.timeoutSeconds | int | `1` | | -| supersetNode.podAnnotations | object | `{}` | Annotations to be added to supersetNode pods | -| supersetNode.podLabels | object | `{}` | Labels to be added to supersetNode pods | -| supersetNode.podSecurityContext | object | `{}` | | -| supersetNode.readinessProbe.failureThreshold | int | `3` | | -| supersetNode.readinessProbe.httpGet.path | string | `"/health"` | | -| supersetNode.readinessProbe.httpGet.port | string | `"http"` | | -| supersetNode.readinessProbe.initialDelaySeconds | int | `15` | | -| supersetNode.readinessProbe.periodSeconds | int | `15` | | -| supersetNode.readinessProbe.successThreshold | int | `1` | | -| supersetNode.readinessProbe.timeoutSeconds | int | `1` | | -| 
supersetNode.replicaCount | int | `1` | | +| supersetNode.resources | object | `{}` | Resource settings for the supersetNode pods - these settings might overwrite existing values from the global resources object defined above. | +| supersetNode.startupProbe.failureThreshold | int | `60` | | +| supersetNode.startupProbe.httpGet.path | string | `"/health"` | | +| supersetNode.startupProbe.httpGet.port | string | `"http"` | | +| supersetNode.startupProbe.initialDelaySeconds | int | `15` | | +| supersetNode.startupProbe.periodSeconds | int | `5` | | +| supersetNode.startupProbe.successThreshold | int | `1` | | +| supersetNode.startupProbe.timeoutSeconds | int | `1` | | +| supersetNode.strategy | object | `{}` | | +| supersetNode.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to supersetNode deployments | +| supersetWebsockets.affinity | object | `{}` | Affinity to be added to supersetWebsockets deployment | +| supersetWebsockets.command | list | `[]` | | +| supersetWebsockets.config | object | see `values.yaml` | The config.json to pass to the server, see https://github.com/apache/superset/tree/master/superset-websocket. Note that the configuration can also be read from environment variables (which will have priority), see https://github.com/apache/superset/blob/master/superset-websocket/src/config.ts for a list of supported variables | +| supersetWebsockets.containerSecurityContext | object | `{}` | | +| supersetWebsockets.deploymentAnnotations | object | `{}` | | +| supersetWebsockets.enabled | bool | `false` | This is only required if you intend to use `GLOBAL_ASYNC_QUERIES` in `ws` mode, see https://github.com/apache/superset/blob/master/CONTRIBUTING.md#async-chart-queries | +| supersetWebsockets.image.pullPolicy | string | `"IfNotPresent"` | | +| supersetWebsockets.image.repository | string | 
`"latest"` | | -| supersetWebsockets.ingress.path | string | `"/ws"` | | -| supersetWebsockets.ingress.pathType | string | `"Prefix"` | | -| supersetWebsockets.livenessProbe.failureThreshold | int | `3` | | -| supersetWebsockets.livenessProbe.httpGet.path | string | `"/health"` | | -| supersetWebsockets.livenessProbe.httpGet.port | string | `"ws"` | | -| supersetWebsockets.livenessProbe.initialDelaySeconds | int | `5` | | -| supersetWebsockets.livenessProbe.periodSeconds | int | `5` | | -| supersetWebsockets.livenessProbe.successThreshold | int | `1` | | -| supersetWebsockets.livenessProbe.timeoutSeconds | int | `1` | | -| supersetWebsockets.podAnnotations | object | `{}` | | -| supersetWebsockets.podLabels | object | `{}` | | -| supersetWebsockets.podSecurityContext | object | `{}` | | -| supersetWebsockets.readinessProbe.failureThreshold | int | `3` | | -| supersetWebsockets.readinessProbe.httpGet.path | string | `"/health"` | | -| supersetWebsockets.readinessProbe.httpGet.port | string | `"ws"` | | -| supersetWebsockets.readinessProbe.initialDelaySeconds | int | `5` | | -| supersetWebsockets.readinessProbe.periodSeconds | int | `5` | | -| supersetWebsockets.readinessProbe.successThreshold | int | `1` | | -| supersetWebsockets.readinessProbe.timeoutSeconds | int | `1` | | -| supersetWebsockets.replicaCount | int | `1` | | -| supersetWebsockets.resources | object | `{}` | | -| supersetWebsockets.service.annotations | object | `{}` | | -| supersetWebsockets.service.loadBalancerIP | string | `nil` | | -| supersetWebsockets.service.nodePort.http | int | `"nil"` | | -| supersetWebsockets.service.port | int | `8080` | | -| supersetWebsockets.service.type | string | `"ClusterIP"` | | -| supersetWebsockets.startupProbe.failureThreshold | int | `60` | | -| supersetWebsockets.startupProbe.httpGet.path | string | `"/health"` | | -| supersetWebsockets.startupProbe.httpGet.port | string | `"ws"` | | -| supersetWebsockets.startupProbe.initialDelaySeconds | int | `5` | | -| 
supersetWebsockets.startupProbe.periodSeconds | int | `5` | | +| supersetWebsockets.startupProbe.successThreshold | int | `1` | | +| supersetWebsockets.startupProbe.timeoutSeconds | int | `1` | | +| supersetWebsockets.strategy | object | `{}` | | +| supersetWebsockets.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to supersetWebsockets deployments | +| supersetWorker.affinity | object | `{}` | Affinity to be added to supersetWorker deployment | +| supersetWorker.command | list | a `celery worker` command | Worker startup command | +| supersetWorker.containerSecurityContext | object | `{}` | | +| supersetWorker.deploymentAnnotations | object | `{}` | Annotations to be added to supersetWorker deployment | +| supersetWorker.deploymentLabels | object | `{}` | Labels to be added to supersetWorker deployment | +| supersetWorker.extraContainers | list | `[]` | Launch additional containers into supersetWorker pod | +| supersetWorker.forceReload | bool | `false` | If true, forces deployment to reload on each upgrade | +| supersetWorker.initContainers | list | a container waiting for postgres and redis | Init containers | +| supersetWorker.livenessProbe.exec.command | list | a `celery inspect ping` command | Liveness probe command | +| supersetWorker.livenessProbe.failureThreshold | int | `3` | | +| supersetWorker.livenessProbe.initialDelaySeconds | int | `120` | | +| supersetWorker.livenessProbe.periodSeconds | int | `60` | | +| supersetWorker.livenessProbe.successThreshold | int | `1` | | +| supersetWorker.livenessProbe.timeoutSeconds | int | `60` | | +| supersetWorker.podAnnotations | object | `{}` | Annotations to be added to supersetWorker pods | +| supersetWorker.podLabels | object | `{}` | Labels to be added to supersetWorker pods | +| supersetWorker.podSecurityContext | object | `{}` | | +| supersetWorker.readinessProbe | object | `{}` | No startup/readiness probes by default since we don't really care about its startup time (it doesn't 
serve traffic) |
-| supersetWorker.replicaCount | int | `1` |  |
-| supersetWorker.resources | object | `{}` | Resource settings for the supersetWorker pods - these settings overwrite might existing values from the global resources object defined above. |
-| supersetWorker.startupProbe | object | `{}` | No startup/readiness probes by default since we don't really care about its startup time (it doesn't serve traffic) |
-| supersetWorker.strategy | object | `{}` |  |
-| supersetWorker.topologySpreadConstraints | list | `[]` | TopologySpreadConstrains to be added to supersetWorker deployments |
-| tolerations | list | `[]` |  |
-| topologySpreadConstraints | list | `[]` | TopologySpreadConstrains to be added to all deployments |
+| Key | Type | Default | Description |
+|-----|------|---------|-------------|
+| affinity | object | `{}` |  |
+| bootstrapScript | string | see `values.yaml` | Install additional packages and do any other bootstrap configuration in this script. For production clusters it's recommended to build your own image with this step done in CI |
+| configFromSecret | string | `"{{ template \"superset.fullname\" . }}-config"` | The name of the secret which we will use to generate a superset_config.py file. Note: this secret must have the key superset_config.py in it and can include other files as well |
+| configMountPath | string | `"/app/pythonpath"` |  |
+| configOverrides | object | `{}` | A dictionary of overrides to append at the end of superset_config.py - the name does not matter. WARNING: the order is not guaranteed. Files can be passed as `helm --set-file configOverrides.my-override=my-file.py` |
+| configOverridesFiles | object | `{}` | Same as above but the values are files |
+| envFromSecret | string | `"{{ template \"superset.fullname\" . }}-env"` | The name of the secret which we will use to populate env vars in deployed pods. This can be useful for secret keys, etc. |
+| envFromSecrets | list | `[]` | This can be a list of templated strings |
+| extraConfigMountPath | string | `"/app/configs"` |  |
+| extraConfigs | object | `{}` | Extra files to mount on `/app/pythonpath` |
+| extraEnv | object | `{}` | Extra environment variables that will be passed into pods |
+| extraEnvRaw | list | `[]` | Extra environment variables in RAW format that will be passed into pods |
+| extraSecretEnv | object | `{}` | Extra environment variables to pass as secrets |
+| extraSecrets | object | `{}` | Extra files to mount on `/app/pythonpath` as secrets |
+| extraVolumeMounts | list | `[]` |  |
+| extraVolumes | list | `[]` |  |
+| fullnameOverride | string | `nil` | Provide a name to override the full names of resources |
+| hostAliases | list | `[]` | Custom hostAliases for all superset pods. See https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/ |
+| image.pullPolicy | string | `"IfNotPresent"` |  |
+| image.repository | string | `"apachesuperset.docker.scarf.sh/apache/superset"` |  |
+| image.tag | string | `""` |  |
+| imagePullSecrets | list | `[]` |  |
+| ingress.annotations | object | `{}` |  |
+| ingress.enabled | bool | `false` |  |
+| ingress.extraHostsRaw | list | `[]` |  |
+| ingress.hosts[0] | string | `"chart-example.local"` |  |
+| ingress.ingressClassName | string | `nil` |  |
+| ingress.path | string | `"/"` |  |
+| ingress.pathType | string | `"ImplementationSpecific"` |  |
+| ingress.tls | list | `[]` |  |
+| init.adminUser.email | string | `"admin@superset.com"` |  |
+| init.adminUser.firstname | string | `"Superset"` |  |
+| init.adminUser.lastname | string | `"Admin"` |  |
+| init.adminUser.password | string | `"admin"` |  |
+| init.adminUser.username | string | `"admin"` |  |
+| init.affinity | object | `{}` |  |
+| init.command | list | a `superset_init.sh` command | Command |
+| init.containerSecurityContext | object | `{}` |  |
+| init.createAdmin | bool | `true` |  |
+| init.enabled | bool | `true` |  |
+| init.initContainers | list | a container waiting for postgres | List of initContainers |
+| init.initscript | string | a script to create the admin user and initialize roles | A Superset init script |
+| init.jobAnnotations."helm.sh/hook" | string | `"post-install,post-upgrade"` |  |
+| init.jobAnnotations."helm.sh/hook-delete-policy" | string | `"before-hook-creation"` |  |
+| init.loadExamples | bool | `false` |  |
+| init.podAnnotations | object | `{}` |  |
+| init.podSecurityContext | object | `{}` |  |
+| init.resources | object | `{}` |  |
+| init.tolerations | list | `[]` |  |
+| init.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to init job |
+| initImage.pullPolicy | string | `"IfNotPresent"` |  |
+| initImage.repository | string | `"apache/superset"` |  |
+| initImage.tag | string | `"dockerize"` |  |
+| nameOverride | string | `nil` | Provide a name to override the name of the chart |
+| nodeSelector | object | `{}` |  |
+| postgresql | object | see `values.yaml` | Configuration values for the postgresql dependency. ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md |
+| redis | object | see `values.yaml` | Configuration values for the Redis dependency. ref: https://github.com/bitnami/charts/blob/master/bitnami/redis. More documentation can be found here: https://artifacthub.io/packages/helm/bitnami/redis |
+| resources | object | `{}` |  |
+| runAsUser | int | `0` | User ID directive. This user must have enough permissions to run the bootstrap script. Running containers as root is not recommended in production. Change this to another UID - e.g. 1000 to be more secure |
+| service.annotations | object | `{}` |  |
+| service.loadBalancerIP | string | `nil` |  |
+| service.nodePort.http | int | `"nil"` |  |
+| service.port | int | `8088` |  |
+| service.type | string | `"ClusterIP"` |  |
+| serviceAccount.annotations | object | `{}` |  |
+| serviceAccount.create | bool | `false` | Create custom service account for Superset. If create: true and serviceAccountName is not provided, `superset.fullname` will be used. |
+| serviceAccountName | string | `nil` | Specify service account name to be used |
+| supersetCeleryBeat.affinity | object | `{}` | Affinity to be added to supersetCeleryBeat deployment |
+| supersetCeleryBeat.command | list | a `celery beat` command | Command |
+| supersetCeleryBeat.containerSecurityContext | object | `{}` |  |
+| supersetCeleryBeat.deploymentAnnotations | object | `{}` | Annotations to be added to supersetCeleryBeat deployment |
+| supersetCeleryBeat.enabled | bool | `false` | This is only required if you intend to use alerts and reports |
+| supersetCeleryBeat.forceReload | bool | `false` | If true, forces deployment to reload on each upgrade |
+| supersetCeleryBeat.initContainers | list | a container waiting for postgres | List of init containers |
+| supersetCeleryBeat.podAnnotations | object | `{}` | Annotations to be added to supersetCeleryBeat pods |
+| supersetCeleryBeat.podLabels | object | `{}` | Labels to be added to supersetCeleryBeat pods |
+| supersetCeleryBeat.podSecurityContext | object | `{}` |  |
+| supersetCeleryBeat.resources | object | `{}` | Resource settings for the CeleryBeat pods - these settings might overwrite existing values from the global resources object defined above. |
+| supersetCeleryBeat.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to supersetCeleryBeat deployments |
+| supersetCeleryFlower.affinity | object | `{}` | Affinity to be added to supersetCeleryFlower deployment |
+| supersetCeleryFlower.command | list | a `celery flower` command | Command |
+| supersetCeleryFlower.containerSecurityContext | object | `{}` |  |
+| supersetCeleryFlower.deploymentAnnotations | object | `{}` | Annotations to be added to supersetCeleryFlower deployment |
+| supersetCeleryFlower.enabled | bool | `false` | Enables a Celery flower deployment (management UI to monitor celery jobs). WARNING: on Superset 1.x this requires a Superset image that has `flower<1.0.0` installed (which is NOT the case of the default images); flower>=1.0.0 requires Celery 5+, which Superset 1.5 does not support |
+| supersetCeleryFlower.initContainers | list | a container waiting for postgres and redis | List of init containers |
+| supersetCeleryFlower.livenessProbe.failureThreshold | int | `3` |  |
+| supersetCeleryFlower.livenessProbe.httpGet.path | string | `"/api/workers"` |  |
+| supersetCeleryFlower.livenessProbe.httpGet.port | string | `"flower"` |  |
+| supersetCeleryFlower.livenessProbe.initialDelaySeconds | int | `5` |  |
+| supersetCeleryFlower.livenessProbe.periodSeconds | int | `5` |  |
+| supersetCeleryFlower.livenessProbe.successThreshold | int | `1` |  |
+| supersetCeleryFlower.livenessProbe.timeoutSeconds | int | `1` |  |
+| supersetCeleryFlower.podAnnotations | object | `{}` | Annotations to be added to supersetCeleryFlower pods |
+| supersetCeleryFlower.podLabels | object | `{}` | Labels to be added to supersetCeleryFlower pods |
+| supersetCeleryFlower.podSecurityContext | object | `{}` |  |
+| supersetCeleryFlower.readinessProbe.failureThreshold | int | `3` |  |
+| supersetCeleryFlower.readinessProbe.httpGet.path | string | `"/api/workers"` |  |
+| supersetCeleryFlower.readinessProbe.httpGet.port | string | `"flower"` |  |
+| supersetCeleryFlower.readinessProbe.initialDelaySeconds | int | `5` |  |
+| supersetCeleryFlower.readinessProbe.periodSeconds | int | `5` |  |
+| supersetCeleryFlower.readinessProbe.successThreshold | int | `1` |  |
+| supersetCeleryFlower.readinessProbe.timeoutSeconds | int | `1` |  |
+| supersetCeleryFlower.replicaCount | int | `1` |  |
+| supersetCeleryFlower.resources | object | `{}` | Resource settings for the CeleryFlower pods - these settings might overwrite existing values from the global resources object defined above. |
+| supersetCeleryFlower.service.annotations | object | `{}` |  |
+| supersetCeleryFlower.service.loadBalancerIP | string | `nil` |  |
+| supersetCeleryFlower.service.nodePort.http | int | `"nil"` |  |
+| supersetCeleryFlower.service.port | int | `5555` |  |
+| supersetCeleryFlower.service.type | string | `"ClusterIP"` |  |
+| supersetCeleryFlower.startupProbe.failureThreshold | int | `60` |  |
+| supersetCeleryFlower.startupProbe.httpGet.path | string | `"/api/workers"` |  |
+| supersetCeleryFlower.startupProbe.httpGet.port | string | `"flower"` |  |
+| supersetCeleryFlower.startupProbe.initialDelaySeconds | int | `5` |  |
+| supersetCeleryFlower.startupProbe.periodSeconds | int | `5` |  |
+| supersetCeleryFlower.startupProbe.successThreshold | int | `1` |  |
+| supersetCeleryFlower.startupProbe.timeoutSeconds | int | `1` |  |
+| supersetCeleryFlower.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to supersetCeleryFlower deployments |
+| supersetNode.affinity | object | `{}` | Affinity to be added to supersetNode deployment |
+| supersetNode.command | list | See `values.yaml` | Startup command |
+| supersetNode.connections.db_host | string | `"{{ .Release.Name }}-postgresql"` |  |
+| supersetNode.connections.db_name | string | `"superset"` |  |
+| supersetNode.connections.db_pass | string | `"superset"` |  |
+| supersetNode.connections.db_port | string | `"5432"` |  |
+| supersetNode.connections.db_user | string | `"superset"` |  |
+| supersetNode.connections.redis_host | string | `"{{ .Release.Name }}-redis-headless"` | Change this in case of bringing your own Redis, and then also set `redis.enabled: false` |
+| supersetNode.connections.redis_port | string | `"6379"` |  |
+| supersetNode.containerSecurityContext | object | `{}` |  |
+| supersetNode.deploymentAnnotations | object | `{}` | Annotations to be added to supersetNode deployment |
+| supersetNode.deploymentLabels | object | `{}` | Labels to be added to supersetNode deployment |
+| supersetNode.env | object | `{}` |  |
+| supersetNode.extraContainers | list | `[]` | Launch additional containers into supersetNode pod |
+| supersetNode.forceReload | bool | `false` | If true, forces deployment to reload on each upgrade |
+| supersetNode.initContainers | list | a container waiting for postgres | Init containers |
+| supersetNode.livenessProbe.failureThreshold | int | `3` |  |
+| supersetNode.livenessProbe.httpGet.path | string | `"/health"` |  |
+| supersetNode.livenessProbe.httpGet.port | string | `"http"` |  |
+| supersetNode.livenessProbe.initialDelaySeconds | int | `15` |  |
+| supersetNode.livenessProbe.periodSeconds | int | `15` |  |
+| supersetNode.livenessProbe.successThreshold | int | `1` |  |
+| supersetNode.livenessProbe.timeoutSeconds | int | `1` |  |
+| supersetNode.podAnnotations | object | `{}` | Annotations to be added to supersetNode pods |
+| supersetNode.podLabels | object | `{}` | Labels to be added to supersetNode pods |
+| supersetNode.podSecurityContext | object | `{}` |  |
+| supersetNode.readinessProbe.failureThreshold | int | `3` |  |
+| supersetNode.readinessProbe.httpGet.path | string | `"/health"` |  |
+| supersetNode.readinessProbe.httpGet.port | string | `"http"` |  |
+| supersetNode.readinessProbe.initialDelaySeconds | int | `15` |  |
+| supersetNode.readinessProbe.periodSeconds | int | `15` |  |
+| supersetNode.readinessProbe.successThreshold | int | `1` |  |
+| supersetNode.readinessProbe.timeoutSeconds | int | `1` |  |
+| supersetNode.replicaCount | int | `1` |  |
+| supersetNode.resources | object | `{}` | Resource settings for the supersetNode pods - these settings might overwrite existing values from the global resources object defined above. |
+| supersetNode.startupProbe.failureThreshold | int | `60` |  |
+| supersetNode.startupProbe.httpGet.path | string | `"/health"` |  |
+| supersetNode.startupProbe.httpGet.port | string | `"http"` |  |
+| supersetNode.startupProbe.initialDelaySeconds | int | `15` |  |
+| supersetNode.startupProbe.periodSeconds | int | `5` |  |
+| supersetNode.startupProbe.successThreshold | int | `1` |  |
+| supersetNode.startupProbe.timeoutSeconds | int | `1` |  |
+| supersetNode.strategy | object | `{}` |  |
+| supersetNode.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to supersetNode deployments |
+| supersetWebsockets.affinity | object | `{}` | Affinity to be added to supersetWebsockets deployment |
+| supersetWebsockets.command | list | `[]` |  |
+| supersetWebsockets.config | object | see `values.yaml` | The config.json to pass to the server, see https://github.com/apache/superset/tree/master/superset-websocket. Note that the configuration can also be read from environment variables (which will have priority); see https://github.com/apache/superset/blob/master/superset-websocket/src/config.ts for a list of supported variables |
+| supersetWebsockets.containerSecurityContext | object | `{}` |  |
+| supersetWebsockets.deploymentAnnotations | object | `{}` |  |
+| supersetWebsockets.enabled | bool | `false` | This is only required if you intend to use `GLOBAL_ASYNC_QUERIES` in `ws` mode; see https://github.com/apache/superset/blob/master/CONTRIBUTING.md#async-chart-queries |
+| supersetWebsockets.image.pullPolicy | string | `"IfNotPresent"` |  |
+| supersetWebsockets.image.repository | string | `"oneacrefund/superset-websocket"` | There is no official image (yet); this one is community-supported |
+| supersetWebsockets.image.tag | string | `"latest"` |  |
+| supersetWebsockets.ingress.path | string | `"/ws"` |  |
+| supersetWebsockets.ingress.pathType | string | `"Prefix"` |  |
+| supersetWebsockets.livenessProbe.failureThreshold | int | `3` |  |
+| supersetWebsockets.livenessProbe.httpGet.path | string | `"/health"` |  |
+| supersetWebsockets.livenessProbe.httpGet.port | string | `"ws"` |  |
+| supersetWebsockets.livenessProbe.initialDelaySeconds | int | `5` |  |
+| supersetWebsockets.livenessProbe.periodSeconds | int | `5` |  |
+| supersetWebsockets.livenessProbe.successThreshold | int | `1` |  |
+| supersetWebsockets.livenessProbe.timeoutSeconds | int | `1` |  |
+| supersetWebsockets.podAnnotations | object | `{}` |  |
+| supersetWebsockets.podLabels | object | `{}` |  |
+| supersetWebsockets.podSecurityContext | object | `{}` |  |
+| supersetWebsockets.readinessProbe.failureThreshold | int | `3` |  |
+| supersetWebsockets.readinessProbe.httpGet.path | string | `"/health"` |  |
+| supersetWebsockets.readinessProbe.httpGet.port | string | `"ws"` |  |
+| supersetWebsockets.readinessProbe.initialDelaySeconds | int | `5` |  |
+| supersetWebsockets.readinessProbe.periodSeconds | int | `5` |  |
+| supersetWebsockets.readinessProbe.successThreshold | int | `1` |  |
+| supersetWebsockets.readinessProbe.timeoutSeconds | int | `1` |  |
+| supersetWebsockets.replicaCount | int | `1` |  |
+| supersetWebsockets.resources | object | `{}` |  |
+| supersetWebsockets.service.annotations | object | `{}` |  |
+| supersetWebsockets.service.loadBalancerIP | string | `nil` |  |
+| supersetWebsockets.service.nodePort.http | int | `"nil"` |  |
+| supersetWebsockets.service.port | int | `8080` |  |
+| supersetWebsockets.service.type | string | `"ClusterIP"` |  |
+| supersetWebsockets.startupProbe.failureThreshold | int | `60` |  |
+| supersetWebsockets.startupProbe.httpGet.path | string | `"/health"` |  |
+| supersetWebsockets.startupProbe.httpGet.port | string | `"ws"` |  |
+| supersetWebsockets.startupProbe.initialDelaySeconds | int | `5` |  |
+| supersetWebsockets.startupProbe.periodSeconds | int | `5` |  |
+| supersetWebsockets.startupProbe.successThreshold | int | `1` |  |
+| supersetWebsockets.startupProbe.timeoutSeconds | int | `1` |  |
+| supersetWebsockets.strategy | object | `{}` |  |
+| supersetWebsockets.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to supersetWebsockets deployments |
+| supersetWorker.affinity | object | `{}` | Affinity to be added to supersetWorker deployment |
+| supersetWorker.command | list | a `celery worker` command | Worker startup command |
+| supersetWorker.containerSecurityContext | object | `{}` |  |
+| supersetWorker.deploymentAnnotations | object | `{}` | Annotations to be added to supersetWorker deployment |
+| supersetWorker.deploymentLabels | object | `{}` | Labels to be added to supersetWorker deployment |
+| supersetWorker.extraContainers | list | `[]` | Launch additional containers into supersetWorker pod |
+| supersetWorker.forceReload | bool | `false` | If true, forces deployment to reload on each upgrade |
+| supersetWorker.initContainers | list | a container waiting for postgres and redis | List of init containers |
+| supersetWorker.livenessProbe.exec.command | list | a `celery inspect ping` command | Liveness probe command |
+| supersetWorker.livenessProbe.failureThreshold | int | `3` |  |
+| supersetWorker.livenessProbe.initialDelaySeconds | int | `120` |  |
+| supersetWorker.livenessProbe.periodSeconds | int | `60` |  |
+| supersetWorker.livenessProbe.successThreshold | int | `1` |  |
+| supersetWorker.livenessProbe.timeoutSeconds | int | `60` |  |
+| supersetWorker.podAnnotations | object | `{}` | Annotations to be added to supersetWorker pods |
+| supersetWorker.podLabels | object | `{}` | Labels to be added to supersetWorker pods |
+| supersetWorker.podSecurityContext | object | `{}` |  |
+| supersetWorker.readinessProbe | object | `{}` | No startup/readiness probes by default since we don't really care about its startup time (it doesn't serve traffic) |
+| supersetWorker.replicaCount | int | `1` |  |
+| supersetWorker.resources | object | `{}` | Resource settings for the supersetWorker pods - these settings might overwrite existing values from the global resources object defined above. |
+| supersetWorker.startupProbe | object | `{}` | No startup/readiness probes by default since we don't really care about its startup time (it doesn't serve traffic) |
+| supersetWorker.strategy | object | `{}` |  |
+| supersetWorker.topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to supersetWorker deployments |
+| tolerations | list | `[]` |  |
+| topologySpreadConstraints | list | `[]` | TopologySpreadConstraints to be added to all deployments |
diff --git a/superset-frontend/src/components/Select/AsyncSelect.test.tsx b/superset-frontend/src/components/Select/AsyncSelect.test.tsx
index c1442a6b70a1c..0bb24b474a0cc 100644
--- a/superset-frontend/src/components/Select/AsyncSelect.test.tsx
+++ b/superset-frontend/src/components/Select/AsyncSelect.test.tsx
@@ -868,6 +868,20 @@ test('fires onChange when clearing the selection in multiple mode', async () =>
   expect(onChange).toHaveBeenCalledTimes(1);
 });

+test('fires onChange when pasting a selection', async () => {
+  const onChange = jest.fn();
+  render(<AsyncSelect {...defaultProps} onChange={onChange} />);
+  await open();
+  const input = getElementByClassName('.ant-select-selection-search-input');
+  const paste = createEvent.paste(input, {
+    clipboardData: {
+      getData: () => OPTIONS[0].label,
+    },
+  });
+  fireEvent(input, paste);
+  expect(onChange).toHaveBeenCalledTimes(1);
+});
+
 test('does not duplicate options when using numeric values', async () => {
   render(
   expect(onChange).toHaveBeenCalledTimes(1);
 });

+test('fires onChange when pasting a selection', async () => {
+  const onChange = jest.fn();
+  render(
 builtins.type["BaseEngineSpec"]:
         raise NotImplementedError()

     @property
-    def database(self) -> builtins.type["Database"]:
+    def database(self) -> "Database":
         raise NotImplementedError()
     @property
@@ -783,7 +783,7 @@ def get_fetch_values_predicate(
         self,
         template_processor: Optional[  # pylint: disable=unused-argument
             BaseTemplateProcessor
-        ] = None,  # pylint: disable=unused-argument
+        ] = None,
     ) -> TextClause:
         return self.fetch_values_predicate
@@ -792,7 +792,7 @@ def get_sqla_row_level_filters(
         template_processor: BaseTemplateProcessor,
     ) -> list[TextClause]:
         """
-        Return the appropriate row level security filters for this table and the
+        Returns the appropriate row level security filters for this table and the
         current user. A custom username can be passed when the user is not present
         in the Flask global namespace.
@@ -896,7 +896,7 @@ def get_query_str_extended(
         self, query_obj: QueryObjectDict, mutate: bool = True
     ) -> QueryStringExtended:
         sqlaq = self.get_sqla_query(**query_obj)
-        sql = self.database.compile_sqla_query(sqlaq.sqla_query)  # type: ignore
+        sql = self.database.compile_sqla_query(sqlaq.sqla_query)
         sql = self._apply_cte(sql, sqlaq.cte)
         sql = sqlparse.format(sql, reindent=True)
         if mutate:
@@ -935,7 +935,7 @@ def _normalize_prequery_result_type(
             value = value.item()

         column_ = columns_by_name[dimension]
-        db_extra: dict[str, Any] = self.database.get_extra()  # type: ignore
+        db_extra: dict[str, Any] = self.database.get_extra()

         if isinstance(column_, dict):
             if (
@@ -1020,9 +1020,7 @@ def assign_column_label(df: pd.DataFrame) -> Optional[pd.DataFrame]:
             return df

         try:
-            df = self.database.get_df(
-                sql, self.schema, mutator=assign_column_label  # type: ignore
-            )
+            df = self.database.get_df(sql, self.schema, mutator=assign_column_label)
         except Exception as ex:  # pylint: disable=broad-except
             df = pd.DataFrame()
             status = QueryStatus.FAILED
@@ -1355,7 +1353,7 @@ def values_for_column(self, column_name: str, limit: int = 10000) -> list[Any]:
         if self.fetch_values_predicate:
             qry = qry.where(self.get_fetch_values_predicate(template_processor=tp))

-        with self.database.get_sqla_engine_with_context() as engine:  # type: ignore
+        with self.database.get_sqla_engine_with_context() as engine:
             sql = qry.compile(engine, compile_kwargs={"literal_binds": True})
             sql = self._apply_cte(sql, cte)
             sql = self.mutate_query_from_config(sql)

From e3fbb01bb8c5511628401f4f739ba7ead156b4bc Mon Sep 17 00:00:00 2001
From: "JUST.in DO IT"
Date: Thu, 16 Nov 2023 12:58:06 -0800
Subject: [PATCH 34/38] fix(explore): redundant force param (#25985)

(cherry picked from commit e7a187680713867f22b082f3bb0a57296d2a331c)
---
 superset-frontend/src/components/Chart/Chart.jsx            | 2 +-
 superset-frontend/src/components/Chart/chartAction.js       | 2 +-
 superset-frontend/src/components/Chart/chartActions.test.js | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/superset-frontend/src/components/Chart/Chart.jsx b/superset-frontend/src/components/Chart/Chart.jsx
index af90ae6b0a089..da9a81516f5e8 100644
--- a/superset-frontend/src/components/Chart/Chart.jsx
+++ b/superset-frontend/src/components/Chart/Chart.jsx
@@ -169,7 +169,7 @@ class Chart extends React.PureComponent {
       // Create chart with POST request
       this.props.actions.postChartFormData(
         this.props.formData,
-        this.props.force || getUrlParam(URL_PARAMS.force), // allow override via url params force=true
+        Boolean(this.props.force || getUrlParam(URL_PARAMS.force)), // allow override via url params force=true
         this.props.timeout,
         this.props.chartId,
         this.props.dashboardId,
diff --git a/superset-frontend/src/components/Chart/chartAction.js b/superset-frontend/src/components/Chart/chartAction.js
index d08070fe40561..6db969ebb94e7 100644
--- a/superset-frontend/src/components/Chart/chartAction.js
+++ b/superset-frontend/src/components/Chart/chartAction.js
@@ -185,7 +185,7 @@ const v1ChartDataRequest = async (
   const qs = {};
   if (sliceId !== undefined) qs.form_data = `{"slice_id":${sliceId}}`;
   if (dashboardId !== undefined) qs.dashboard_id = dashboardId;
-  if (force !== false) qs.force = force;
+  if (force) qs.force = force;

   const allowDomainSharding =
     // eslint-disable-next-line camelcase
diff --git a/superset-frontend/src/components/Chart/chartActions.test.js b/superset-frontend/src/components/Chart/chartActions.test.js
index 65b008de62f52..b44ca7c8d791a 100644
--- a/superset-frontend/src/components/Chart/chartActions.test.js
+++ b/superset-frontend/src/components/Chart/chartActions.test.js
@@ -51,7 +51,7 @@ describe('chart actions', () => {
       .callsFake(() => MOCK_URL);
     getChartDataUriStub = sinon
       .stub(exploreUtils, 'getChartDataUri')
-      .callsFake(() => URI(MOCK_URL));
+      .callsFake(({ qs }) => URI(MOCK_URL).query(qs));
     fakeMetadata = { useLegacyApi: true };
     metadataRegistryStub = sinon
       .stub(chartlib, 'getChartMetadataRegistry')
@@ -81,7 +81,7 @@
   });

   it('should query with the built query', async () => {
-    const actionThunk = actions.postChartFormData({});
+    const actionThunk = actions.postChartFormData({}, null);
     await actionThunk(dispatch);
     expect(fetchMock.calls(MOCK_URL)).toHaveLength(1);

From da06206ea65ab4aad2e502a8902a2254bbdaa994 Mon Sep 17 00:00:00 2001
From: John Bodley <4567245+john-bodley@users.noreply.github.com>
Date: Thu, 16 Nov 2023 15:42:39 -0800
Subject: [PATCH 35/38] chore: Optimize fetching samples logic (#25995)

(cherry picked from commit 326ac4a6c49c49d60ac92b9722a2fd5379817c76)
---
 superset/views/datasource/utils.py | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/superset/views/datasource/utils.py b/superset/views/datasource/utils.py
index 65b19c34938f3..9baabdcc54163 100644
--- a/superset/views/datasource/utils.py
+++ b/superset/views/datasource/utils.py
@@ -104,21 +104,18 @@ def get_samples(  # pylint: disable=too-many-arguments,too-many-locals
         result_type=ChartDataResultType.FULL,
         force=force,
     )
-    samples_results = samples_instance.get_payload()
-    count_star_results = count_star_instance.get_payload()

     try:
-        sample_data = samples_results["queries"][0]
-        count_star_data = count_star_results["queries"][0]
-        failed_status = (
-            sample_data.get("status") == QueryStatus.FAILED
-            or count_star_data.get("status") == QueryStatus.FAILED
-        )
-        error_msg = sample_data.get("error") or count_star_data.get("error")
-        if failed_status and error_msg:
-            cache_key = sample_data.get("cache_key")
-            QueryCacheManager.delete(cache_key, region=CacheRegion.DATA)
-            raise DatasetSamplesFailedError(error_msg)
+        count_star_data = count_star_instance.get_payload()["queries"][0]
+
+        if count_star_data.get("status") == QueryStatus.FAILED:
+            raise DatasetSamplesFailedError(count_star_data.get("error"))
+
+        sample_data = samples_instance.get_payload()["queries"][0]
+
+        if sample_data.get("status") == QueryStatus.FAILED:
+            QueryCacheManager.delete(sample_data.get("cache_key"), CacheRegion.DATA)
+            raise DatasetSamplesFailedError(sample_data.get("error"))

         sample_data["page"] = page
         sample_data["per_page"] = per_page

From 49661bcc5969f4736e0d62defe039a81ef30c325 Mon Sep 17 00:00:00 2001
From: "JUST.in DO IT"
Date: Mon, 20 Nov 2023 10:01:56 -0800
Subject: [PATCH 36/38] fix(native filters): rendering performance improvement by reducing overrendering (#25901)

(cherry picked from commit e1d73d5420867b0310d4c2608686d5ccca94920f)
---
 .../superset-ui-core/src/chart/types/Base.ts  |   1 -
 .../src/dashboard/components/Dashboard.jsx    |  15 +--
 .../dashboard/components/Dashboard.test.jsx   |  13 ++-
 .../SyncDashboardState.test.tsx               |  34 ++++++
 .../components/SyncDashboardState/index.tsx   | 103 ++++++++++++++++++
 .../FilterBar/FilterControls/FilterValue.tsx  |   3 +-
 .../src/dashboard/containers/Dashboard.ts     |   2 -
 .../dashboard/containers/DashboardPage.tsx    |  94 +++------------
 superset-frontend/src/dataMask/reducer.ts     |   1 -
 .../Select/SelectFilterPlugin.test.tsx        |  24 ----
 .../components/Select/SelectFilterPlugin.tsx  |  41 ++++---
 .../src/filters/components/common.ts          |   4 +-
 12 files changed, 191 insertions(+), 144 deletions(-)
 create mode 100644
superset-frontend/src/dashboard/components/SyncDashboardState/SyncDashboardState.test.tsx create mode 100644 superset-frontend/src/dashboard/components/SyncDashboardState/index.tsx diff --git a/superset-frontend/packages/superset-ui-core/src/chart/types/Base.ts b/superset-frontend/packages/superset-ui-core/src/chart/types/Base.ts index 1c4d278f6cc46..b3884a8488013 100644 --- a/superset-frontend/packages/superset-ui-core/src/chart/types/Base.ts +++ b/superset-frontend/packages/superset-ui-core/src/chart/types/Base.ts @@ -58,7 +58,6 @@ export enum AppSection { export type FilterState = { value?: any; [key: string]: any }; export type DataMask = { - __cache?: FilterState; extraFormData?: ExtraFormData; filterState?: FilterState; ownState?: JsonObject; diff --git a/superset-frontend/src/dashboard/components/Dashboard.jsx b/superset-frontend/src/dashboard/components/Dashboard.jsx index 827f0f455d3d6..6e909f3b1527f 100644 --- a/superset-frontend/src/dashboard/components/Dashboard.jsx +++ b/superset-frontend/src/dashboard/components/Dashboard.jsx @@ -25,9 +25,8 @@ import Loading from 'src/components/Loading'; import getBootstrapData from 'src/utils/getBootstrapData'; import getChartIdsFromLayout from '../util/getChartIdsFromLayout'; import getLayoutComponentFromChartId from '../util/getLayoutComponentFromChartId'; -import DashboardBuilder from './DashboardBuilder/DashboardBuilder'; + import { - chartPropShape, slicePropShape, dashboardInfoPropShape, dashboardStatePropShape, @@ -53,7 +52,6 @@ const propTypes = { }).isRequired, dashboardInfo: dashboardInfoPropShape.isRequired, dashboardState: dashboardStatePropShape.isRequired, - charts: PropTypes.objectOf(chartPropShape).isRequired, slices: PropTypes.objectOf(slicePropShape).isRequired, activeFilters: PropTypes.object.isRequired, chartConfiguration: PropTypes.object, @@ -213,11 +211,6 @@ class Dashboard extends React.PureComponent { } } - // return charts in array - getAllCharts() { - return 
Object.values(this.props.charts); - } - applyFilters() { const { appliedFilters } = this; const { activeFilters, ownDataCharts } = this.props; @@ -288,11 +281,7 @@ class Dashboard extends React.PureComponent { if (this.context.loading) { return ; } - return ( - <> - - - ); + return this.props.children; } } diff --git a/superset-frontend/src/dashboard/components/Dashboard.test.jsx b/superset-frontend/src/dashboard/components/Dashboard.test.jsx index 56a696f913140..a66eab37e37d7 100644 --- a/superset-frontend/src/dashboard/components/Dashboard.test.jsx +++ b/superset-frontend/src/dashboard/components/Dashboard.test.jsx @@ -21,7 +21,6 @@ import { shallow } from 'enzyme'; import sinon from 'sinon'; import Dashboard from 'src/dashboard/components/Dashboard'; -import DashboardBuilder from 'src/dashboard/components/DashboardBuilder/DashboardBuilder'; import { CHART_TYPE } from 'src/dashboard/util/componentTypes'; import newComponentFactory from 'src/dashboard/util/newComponentFactory'; @@ -63,8 +62,14 @@ describe('Dashboard', () => { loadStats: {}, }; + const ChildrenComponent = () =>
<div>Test</div>
; + function setup(overrideProps) { - const wrapper = shallow(); + const wrapper = shallow( + + + , + ); return wrapper; } @@ -76,9 +81,9 @@ describe('Dashboard', () => { '3_country_name': { values: ['USA'], scope: [] }, }; - it('should render a DashboardBuilder', () => { + it('should render the children component', () => { const wrapper = setup(); - expect(wrapper.find(DashboardBuilder)).toExist(); + expect(wrapper.find(ChildrenComponent)).toExist(); }); describe('UNSAFE_componentWillReceiveProps', () => { diff --git a/superset-frontend/src/dashboard/components/SyncDashboardState/SyncDashboardState.test.tsx b/superset-frontend/src/dashboard/components/SyncDashboardState/SyncDashboardState.test.tsx new file mode 100644 index 0000000000000..1565a43e19657 --- /dev/null +++ b/superset-frontend/src/dashboard/components/SyncDashboardState/SyncDashboardState.test.tsx @@ -0,0 +1,34 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */
+import React from 'react';
+import { render } from 'spec/helpers/testing-library';
+import { getItem, LocalStorageKeys } from 'src/utils/localStorageHelpers';
+import SyncDashboardState from '.';
+
+test('stores the dashboard info with local storages', () => {
+  const testDashboardPageId = 'dashboardPageId';
+  render(<SyncDashboardState dashboardPageId={testDashboardPageId} />, {
+    useRedux: true,
+  });
+  expect(getItem(LocalStorageKeys.dashboard__explore_context, {})).toEqual({
+    [testDashboardPageId]: expect.objectContaining({
+      dashboardPageId: testDashboardPageId,
+    }),
+  });
+});
diff --git a/superset-frontend/src/dashboard/components/SyncDashboardState/index.tsx b/superset-frontend/src/dashboard/components/SyncDashboardState/index.tsx
new file mode 100644
index 0000000000000..b25d243292254
--- /dev/null
+++ b/superset-frontend/src/dashboard/components/SyncDashboardState/index.tsx
@@ -0,0 +1,103 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+import React, { useEffect } from 'react';
+import pick from 'lodash/pick';
+import { shallowEqual, useSelector } from 'react-redux';
+import { DashboardContextForExplore } from 'src/types/DashboardContextForExplore';
+import {
+  getItem,
+  LocalStorageKeys,
+  setItem,
+} from 'src/utils/localStorageHelpers';
+import { RootState } from 'src/dashboard/types';
+import { getActiveFilters } from 'src/dashboard/util/activeDashboardFilters';
+
+type Props = { dashboardPageId: string };
+
+const EMPTY_OBJECT = {};
+
+export const getDashboardContextLocalStorage = () => {
+  const dashboardsContexts = getItem(
+    LocalStorageKeys.dashboard__explore_context,
+    {},
+  );
+  // A new dashboard tab id is generated on each dashboard page opening.
+  // We mark ids as redundant when user leaves the dashboard, because they won't be reused.
+  // Then we remove redundant dashboard contexts from local storage in order not to clutter it
+  return Object.fromEntries(
+    Object.entries(dashboardsContexts).filter(
+      ([, value]) => !value.isRedundant,
+    ),
+  );
+};
+
+const updateDashboardTabLocalStorage = (
+  dashboardPageId: string,
+  dashboardContext: DashboardContextForExplore,
+) => {
+  const dashboardsContexts = getDashboardContextLocalStorage();
+  setItem(LocalStorageKeys.dashboard__explore_context, {
+    ...dashboardsContexts,
+    [dashboardPageId]: dashboardContext,
+  });
+};
+
+const SyncDashboardState: React.FC<Props> = ({ dashboardPageId }) => {
+  const dashboardContextForExplore = useSelector<
+    RootState,
+    DashboardContextForExplore
+  >(
+    ({ dashboardInfo, dashboardState, nativeFilters, dataMask }) => ({
+      labelColors: dashboardInfo.metadata?.label_colors || EMPTY_OBJECT,
+      sharedLabelColors:
+        dashboardInfo.metadata?.shared_label_colors || EMPTY_OBJECT,
+      colorScheme: dashboardState?.colorScheme,
+      chartConfiguration:
+        dashboardInfo.metadata?.chart_configuration || EMPTY_OBJECT,
+      nativeFilters: Object.entries(nativeFilters.filters).reduce(
+        (acc, [key, filterValue]) => ({
+          ...acc,
+          [key]: pick(filterValue, ['chartsInScope']),
+        }),
+        {},
+      ),
+      dataMask,
+      dashboardId: dashboardInfo.id,
+      filterBoxFilters: getActiveFilters(),
+      dashboardPageId,
+    }),
+    shallowEqual,
+  );
+
+  useEffect(() => {
+    updateDashboardTabLocalStorage(dashboardPageId, dashboardContextForExplore);
+    return () => {
+      // mark tab id as redundant when dashboard unmounts - case when user opens
+      // Explore in the same tab
+      updateDashboardTabLocalStorage(dashboardPageId, {
+        ...dashboardContextForExplore,
+        isRedundant: true,
+      });
+    };
+  }, [dashboardContextForExplore, dashboardPageId]);
+
+  return null;
+};
+
+export default SyncDashboardState;
diff --git a/superset-frontend/src/dashboard/components/nativeFilters/FilterBar/FilterControls/FilterValue.tsx b/superset-frontend/src/dashboard/components/nativeFilters/FilterBar/FilterControls/FilterValue.tsx
index 5235edcdc353d..f44a1a1df6878 100644
--- a/superset-frontend/src/dashboard/components/nativeFilters/FilterBar/FilterControls/FilterValue.tsx
+++ b/superset-frontend/src/dashboard/components/nativeFilters/FilterBar/FilterControls/FilterValue.tsx
@@ -52,6 +52,7 @@ import {
   onFiltersRefreshSuccess,
   setDirectPathToChild,
 } from 'src/dashboard/actions/dashboardState';
+import { RESPONSIVE_WIDTH } from 'src/filters/components/common';
 import { FAST_DEBOUNCE } from 'src/constants';
 import { dispatchHoverAction, dispatchFocusAction } from './utils';
 import { FilterControlProps } from './types';
@@ -322,7 +323,7 @@ const FilterValue: React.FC<FilterControlProps> = ({
   ) : (
     import(
       /* webpackChunkName: "DashboardContainer" */
       /* webpackPreload: true */
-      'src/dashboard/containers/Dashboard'
+      'src/dashboard/components/DashboardBuilder/DashboardBuilder'
     ),
 );
 
@@ -83,74 +81,15 @@
 type PageProps = {
   idOrSlug: string;
 };
 
-const getDashboardContextLocalStorage = () => {
-  const dashboardsContexts = getItem(
-    LocalStorageKeys.dashboard__explore_context,
-    {},
-  );
-  // A new dashboard tab id is generated on each dashboard page opening.
-  // We mark ids as redundant when user leaves the dashboard, because they won't be reused.
-  // Then we remove redundant dashboard contexts from local storage in order not to clutter it
-  return Object.fromEntries(
-    Object.entries(dashboardsContexts).filter(
-      ([, value]) => !value.isRedundant,
-    ),
-  );
-};
-
-const updateDashboardTabLocalStorage = (
-  dashboardPageId: string,
-  dashboardContext: DashboardContextForExplore,
-) => {
-  const dashboardsContexts = getDashboardContextLocalStorage();
-  setItem(LocalStorageKeys.dashboard__explore_context, {
-    ...dashboardsContexts,
-    [dashboardPageId]: dashboardContext,
-  });
-};
-
-const useSyncDashboardStateWithLocalStorage = () => {
-  const dashboardPageId = useMemo(() => shortid.generate(), []);
-  const dashboardContextForExplore = useSelector<
-    RootState,
-    DashboardContextForExplore
-  >(({ dashboardInfo, dashboardState, nativeFilters, dataMask }) => ({
-    labelColors: dashboardInfo.metadata?.label_colors || {},
-    sharedLabelColors: dashboardInfo.metadata?.shared_label_colors || {},
-    colorScheme: dashboardState?.colorScheme,
-    chartConfiguration: dashboardInfo.metadata?.chart_configuration || {},
-    nativeFilters: Object.entries(nativeFilters.filters).reduce(
-      (acc, [key, filterValue]) => ({
-        ...acc,
-        [key]: pick(filterValue, ['chartsInScope']),
-      }),
-      {},
-    ),
-    dataMask,
-    dashboardId: dashboardInfo.id,
-    filterBoxFilters: getActiveFilters(),
-    dashboardPageId,
-  }));
-
-  useEffect(() => {
-    updateDashboardTabLocalStorage(dashboardPageId, dashboardContextForExplore);
-    return () => {
-      // mark tab id as redundant when dashboard unmounts - case when user opens
-      // Explore in the same tab
-      updateDashboardTabLocalStorage(dashboardPageId, {
-        ...dashboardContextForExplore,
-        isRedundant: true,
-      });
-    };
-  }, [dashboardContextForExplore, dashboardPageId]);
-  return dashboardPageId;
-};
-
 export const DashboardPage: FC<PageProps> = ({ idOrSlug }: PageProps) => {
   const theme = useTheme();
   const dispatch = useDispatch();
   const history = useHistory();
-  const dashboardPageId = useSyncDashboardStateWithLocalStorage();
+  const dashboardPageId = useMemo(() => shortid.generate(), []);
+  const hasDashboardInfoInitiated = useSelector(
+    ({ dashboardInfo }) =>
+      dashboardInfo && Object.keys(dashboardInfo).length > 0,
+  );
   const { addDangerToast } = useToasts();
   const { result: dashboard, error: dashboardApiError } =
     useDashboard(idOrSlug);
@@ -284,7 +223,7 @@ export const DashboardPage: FC<PageProps> = ({ idOrSlug }: PageProps) => {
   }, [addDangerToast, datasets, datasetsApiError, dispatch]);
 
   if (error) throw error; // caught in error boundary
-  if (!readyToRender || !isDashboardHydrated.current) return <Loading />;
+  if (!readyToRender || !hasDashboardInfoInitiated) return <Loading />;
 
   return (
     <>
@@ -295,8 +234,11 @@ export const DashboardPage: FC<PageProps> = ({ idOrSlug }: PageProps) => {
         chartContextMenuStyles(theme),
       ]}
     />
+
-
+
+
+
   );
diff --git a/superset-frontend/src/dataMask/reducer.ts b/superset-frontend/src/dataMask/reducer.ts
index 6e9a5fae5404a..f2163a54a44a0 100644
--- a/superset-frontend/src/dataMask/reducer.ts
+++ b/superset-frontend/src/dataMask/reducer.ts
@@ -56,7 +56,6 @@ export function getInitialDataMask(
   }
   return {
     ...otherProps,
-    __cache: {},
     extraFormData: {},
     filterState: {},
     ownState: {},
diff --git a/superset-frontend/src/filters/components/Select/SelectFilterPlugin.test.tsx b/superset-frontend/src/filters/components/Select/SelectFilterPlugin.test.tsx
index c035f81c01b89..99e6259871430 100644
--- a/superset-frontend/src/filters/components/Select/SelectFilterPlugin.test.tsx
+++ b/superset-frontend/src/filters/components/Select/SelectFilterPlugin.test.tsx
@@ -91,15 +91,6 @@ describe('SelectFilterPlugin', () => {
   test('Add multiple values with first render', async () => {
     getWrapper();
     expect(setDataMask).toHaveBeenCalledWith({
-      extraFormData: {},
-      filterState: {
-        value: ['boy'],
-      },
-    });
-    expect(setDataMask).toHaveBeenCalledWith({
-      __cache: {
-        value: ['boy'],
-      },
       extraFormData: {
         filters: [
           {
@@ -118,9 +109,6 @@ describe('SelectFilterPlugin', () => {
     userEvent.click(screen.getByTitle('girl'));
     expect(await screen.findByTitle(/girl/i)).toBeInTheDocument();
     expect(setDataMask).toHaveBeenCalledWith({
-      __cache: {
-        value: ['boy'],
-      },
       extraFormData: {
         filters: [
           {
@@ -146,9 +134,6 @@ describe('SelectFilterPlugin', () => {
       }),
     );
     expect(setDataMask).toHaveBeenCalledWith({
-      __cache: {
-        value: ['boy'],
-      },
       extraFormData: {
         adhoc_filters: [
           {
@@ -174,9 +159,6 @@ describe('SelectFilterPlugin', () => {
       }),
     );
     expect(setDataMask).toHaveBeenCalledWith({
-      __cache: {
-        value: ['boy'],
-      },
       extraFormData: {},
       filterState: {
         label: undefined,
@@ -191,9 +173,6 @@ describe('SelectFilterPlugin', () => {
     expect(await screen.findByTitle('girl')).toBeInTheDocument();
     userEvent.click(screen.getByTitle('girl'));
     expect(setDataMask).toHaveBeenCalledWith({
-      __cache: {
-        value: ['boy'],
-      },
       extraFormData: {
         filters: [
           {
@@ -216,9 +195,6 @@ describe('SelectFilterPlugin', () => {
     expect(await screen.findByRole('combobox')).toBeInTheDocument();
     userEvent.click(screen.getByTitle(NULL_STRING));
     expect(setDataMask).toHaveBeenLastCalledWith({
-      __cache: {
-        value: ['boy'],
-      },
       extraFormData: {
         filters: [
           {
diff --git a/superset-frontend/src/filters/components/Select/SelectFilterPlugin.tsx b/superset-frontend/src/filters/components/Select/SelectFilterPlugin.tsx
index 7d8ab55fb5571..a4b9f5b05efaf 100644
--- a/superset-frontend/src/filters/components/Select/SelectFilterPlugin.tsx
+++ b/superset-frontend/src/filters/components/Select/SelectFilterPlugin.tsx
@@ -37,7 +37,6 @@ import { Select } from 'src/components';
 import { SLOW_DEBOUNCE } from 'src/constants';
 import { hasOption, propertyComparator } from 'src/components/Select/utils';
 import { FilterBarOrientation } from 'src/dashboard/types';
-import { uniqWith, isEqual } from 'lodash';
 import { PluginFilterSelectProps, SelectValue } from './types';
 import { FilterPluginStyle, StatusMessage, StyledFormItem } from '../common';
 import
 { getDataRecordFormatter, getSelectExtraFormData } from '../../utils';
@@ -46,15 +45,11 @@ type DataMaskAction =
   | { type: 'ownState'; ownState: JsonObject }
   | {
       type: 'filterState';
-      __cache: JsonObject;
       extraFormData: ExtraFormData;
       filterState: { value: SelectValue; label?: string };
     };
 
-function reducer(
-  draft: DataMask & { __cache?: JsonObject },
-  action: DataMaskAction,
-) {
+function reducer(draft: DataMask, action: DataMaskAction) {
   switch (action.type) {
     case 'ownState':
       draft.ownState = {
@@ -63,10 +58,18 @@
       };
       return draft;
     case 'filterState':
-      draft.extraFormData = action.extraFormData;
-      // eslint-disable-next-line no-underscore-dangle
-      draft.__cache = action.__cache;
-      draft.filterState = { ...draft.filterState, ...action.filterState };
+      if (
+        JSON.stringify(draft.extraFormData) !==
+        JSON.stringify(action.extraFormData)
+      ) {
+        draft.extraFormData = action.extraFormData;
+      }
+      if (
+        JSON.stringify(draft.filterState) !== JSON.stringify(action.filterState)
+      ) {
+        draft.filterState = { ...draft.filterState, ...action.filterState };
+      }
+
       return draft;
     default:
       return draft;
@@ -130,7 +133,6 @@ export default function PluginFilterSelect(props: PluginFilterSelectProps) {
     const suffix = inverseSelection && values?.length ? t(' (excluded)') : '';
     dispatchDataMask({
       type: 'filterState',
-      __cache: filterState,
       extraFormData: getSelectExtraFormData(
         col,
         values,
@@ -219,16 +221,13 @@
   }, [filterState.validateMessage, filterState.validateStatus]);
 
   const uniqueOptions = useMemo(() => {
-    const allOptions = [...data];
-    return uniqWith(allOptions, isEqual).map(row => {
-      const [value] = groupby.map(col => row[col]);
-      return {
-        label: labelFormatter(value, datatype),
-        value,
-        isNewOption: false,
-      };
-    });
-  }, [data, datatype, groupby, labelFormatter]);
+    const allOptions = new Set([...data.map(el => el[col])]);
+    return [...allOptions].map((value: string) => ({
+      label: labelFormatter(value, datatype),
+      value,
+      isNewOption: false,
+    }));
+  }, [data, datatype, col, labelFormatter]);
 
   const options = useMemo(() => {
     if (search && !multiSelect && !hasOption(search, uniqueOptions, true)) {
diff --git a/superset-frontend/src/filters/components/common.ts b/superset-frontend/src/filters/components/common.ts
index af1fe9c791761..cb6d7f22f14be 100644
--- a/superset-frontend/src/filters/components/common.ts
+++ b/superset-frontend/src/filters/components/common.ts
@@ -20,9 +20,11 @@ import { styled } from '@superset-ui/core';
 import { PluginFilterStylesProps } from './types';
 import FormItem from '../../components/Form/FormItem';
 
+export const RESPONSIVE_WIDTH = 0;
+
 export const FilterPluginStyle = styled.div<PluginFilterStylesProps>`
   min-height: ${({ height }) => height}px;
-  width: ${({ width }) => width}px;
+  width: ${({ width }) => (width === RESPONSIVE_WIDTH ?
'100%' : `${width}px`)};
 `;
 
 export const StyledFormItem = styled(FormItem)`

From 08604cc6868e03916a801b15f8d46c6b111fc032 Mon Sep 17 00:00:00 2001
From: Daniel Vaz Gaspar
Date: Mon, 20 Nov 2023 19:02:30 +0000
Subject: [PATCH 37/38] fix: update FAB to 4.3.10, Azure user info fix (#26037)

(cherry picked from commit 628cd345f2b5a9128fcbfaaefa02b24c77d06155)
---
 requirements/base.txt              | 2 +-
 setup.py                           | 2 +-
 superset/views/datasource/utils.py | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/requirements/base.txt b/requirements/base.txt
index 18cf4ccf63225..1d2d568efcd98 100644
--- a/requirements/base.txt
+++ b/requirements/base.txt
@@ -89,7 +89,7 @@ flask==2.2.5
 #   flask-migrate
 #   flask-sqlalchemy
 #   flask-wtf
-flask-appbuilder==4.3.9
+flask-appbuilder==4.3.10
 #   via apache-superset
 flask-babel==1.0.0
 #   via flask-appbuilder
diff --git a/setup.py b/setup.py
index 20796cf13348f..1bd979c10740a 100644
--- a/setup.py
+++ b/setup.py
@@ -80,7 +80,7 @@ def get_git_sha() -> str:
         "cryptography>=39.0.1, <40",
         "deprecation>=2.1.0, <2.2.0",
         "flask>=2.2.5, <3.0.0",
-        "flask-appbuilder>=4.3.9, <5.0.0",
+        "flask-appbuilder>=4.3.10, <5.0.0",
         "flask-caching>=2.1.0, <3",
         "flask-compress>=1.13, <2.0",
         "flask-talisman>=1.0.0, <2.0",
diff --git a/superset/views/datasource/utils.py b/superset/views/datasource/utils.py
index 9baabdcc54163..e5294278982c6 100644
--- a/superset/views/datasource/utils.py
+++ b/superset/views/datasource/utils.py
@@ -43,7 +43,7 @@ def get_limit_clause(page: Optional[int], per_page: Optional[int]) -> dict[str,
     return {"row_offset": offset, "row_limit": limit}
 
-def get_samples(  # pylint: disable=too-many-arguments,too-many-locals
+def get_samples(  # pylint: disable=too-many-arguments
     datasource_type: str,
     datasource_id: int,
     force: bool = False,

From 961eba6d9707c8f1b1588fe3e2bf602ef2d1a4da Mon Sep 17 00:00:00 2001
From: "Michael S. Molina"
Date: Mon, 20 Nov 2023 16:40:41 -0300
Subject: [PATCH 38/38] chore: Updates CHANGELOG.md for 3.0.2 (rc2)

---
 CHANGELOG.md | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index e3b8ed27ce62c..d80537b69c455 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -19,7 +19,7 @@ under the License.
 
 ## Change Log
 
-- [3.0.2](#302-wed-nov-8-073838-2023--0500)
+- [3.0.2](#302-mon-nov-20-073838-2023--0500)
 - [3.0.1](#301-tue-oct-13-103221-2023--0700)
 - [3.0.0](#300-thu-aug-24-133627-2023--0600)
 - [2.1.1](#211-sun-apr-23-154421-2023-0100)
@@ -33,10 +33,22 @@ under the License.
 - [1.4.2](#142-sat-mar-19-000806-2022-0200)
 - [1.4.1](#141)
 
-### 3.0.2 (Wed Nov 8 07:38:38 2023 -0500)
+### 3.0.2 (Mon Nov 20 07:38:38 2023 -0500)
 
 **Fixes**
 
+- [#26037](https://github.com/apache/superset/pull/26037) fix: update FAB to 4.3.10, Azure user info fix (@dpgaspar)
+- [#25901](https://github.com/apache/superset/pull/25901) fix(native filters): rendering performance improvement by reduce overrendering (@justinpark)
+- [#25985](https://github.com/apache/superset/pull/25985) fix(explore): redandant force param (@justinpark)
+- [#25993](https://github.com/apache/superset/pull/25993) fix: Make Select component fire onChange listener when a selection is pasted in (@jfrag1)
+- [#25997](https://github.com/apache/superset/pull/25997) fix(rls): Update text from tables to datasets in RLS modal (@yousoph)
+- [#25703](https://github.com/apache/superset/pull/25703) fix(helm): Restart all related deployments when bootstrap script changed (@josedev-union)
+- [#25973](https://github.com/apache/superset/pull/25973) fix: naming denomalized to denormalized in helpers.py (@hughhhh)
+- [#25919](https://github.com/apache/superset/pull/25919) fix: always denorm column value before querying values (@hughhhh)
+- [#25947](https://github.com/apache/superset/pull/25947) fix: update flask-caching to avoid breaking redis cache, solves #25339 (@ggbaro)
+- [#25903](https://github.com/apache/superset/pull/25903) fix(sqllab): invalid sanitization on comparison symbol (@justinpark)
+- [#25857](https://github.com/apache/superset/pull/25857) fix(table): Double percenting ad-hoc percentage metrics (@john-bodley)
+- [#25872](https://github.com/apache/superset/pull/25872) fix(trino): allow impersonate_user flag to be imported (@FGrobelny)
 - [#25897](https://github.com/apache/superset/pull/25897) fix: trino cursor (@betodealmeida)
 - [#25898](https://github.com/apache/superset/pull/25898) fix: database version field (@betodealmeida)
 - [#25877](https://github.com/apache/superset/pull/25877) fix: Saving Mixed Chart with dashboard filter applied breaks adhoc_filter_b (@kgabryje)
@@ -69,6 +81,11 @@ under the License.
 - [#25626](https://github.com/apache/superset/pull/25626) fix(sqllab): template validation error within comments (@justinpark)
 - [#25523](https://github.com/apache/superset/pull/25523) fix(sqllab): Mistitled for new tab after rename (@justinpark)
 
+**Others**
+
+- [#25995](https://github.com/apache/superset/pull/25995) chore: Optimize fetching samples logic (@john-bodley)
+- [#23619](https://github.com/apache/superset/pull/23619) chore(colors): Updating Airbnb brand colors (@john-bodley)
+
 ### 3.0.1 (Tue Oct 13 10:32:21 2023 -0700)
 
 **Database Migrations**
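Editor's note: the reducer hunk in `SelectFilterPlugin.tsx` above (part of the over-rendering fix in #25901) replaces unconditional writes to the data mask with deep-equality guards, so that an action carrying an unchanged payload leaves the existing object references intact and Redux subscribers using reference equality skip re-rendering. A minimal, self-contained sketch of that pattern follows; the type and variable names here are simplified placeholders, not Superset's actual module:

```typescript
// Sketch of the "write only on deep change" reducer pattern: compare the
// incoming slice with the current one via JSON.stringify and skip the
// assignment when they are deeply equal, so the existing object reference
// survives and reference-equality checks downstream see "no change".
type DataMask = {
  extraFormData: Record<string, unknown>;
  filterState: Record<string, unknown>;
};

type Action = { type: 'filterState' } & DataMask;

function reducer(draft: DataMask, action: Action): DataMask {
  if (
    JSON.stringify(draft.extraFormData) !== JSON.stringify(action.extraFormData)
  ) {
    draft.extraFormData = action.extraFormData;
  }
  if (
    JSON.stringify(draft.filterState) !== JSON.stringify(action.filterState)
  ) {
    draft.filterState = { ...draft.filterState, ...action.filterState };
  }
  return draft;
}

// With a deeply-equal payload, the object reference is left untouched.
const mask: DataMask = {
  extraFormData: { filters: [] },
  filterState: { value: ['boy'] },
};
const before = mask.extraFormData;
reducer(mask, {
  type: 'filterState',
  extraFormData: { filters: [] },
  filterState: { value: ['boy'] },
});
console.log(mask.extraFormData === before); // true: same reference, no re-render
```

One caveat of this approach: `JSON.stringify` comparison is cheap for small filter payloads but is sensitive to key order and ignores `undefined`-valued keys, which is why it suits flat, serializable state like a filter's data mask rather than arbitrary objects.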