Commit
Merge branch 'master' of https://github.com/preset-io/superset into chart-power-query
hughhhh authored Jun 13, 2022
2 parents 613d78a + c6b1523 commit e076f13
Showing 100 changed files with 2,143 additions and 885 deletions.
59 changes: 42 additions & 17 deletions CHANGELOG.md

Large diffs are not rendered by default.

40 changes: 37 additions & 3 deletions RELEASING/README.md
@@ -422,13 +422,47 @@ with the changes on `CHANGELOG.md` and `UPDATING.md`.

### Publishing a Convenience Release to PyPI

Using the final release tarball, unpack it and run `./pypi_push.sh`.
This script will build the JavaScript bundle and echo the twine command
allowing you to publish to PyPI. You may need to ask a fellow committer to grant
Extract the release to the `/tmp` folder to build the PyPI release. Files in the `/tmp` folder will be automatically deleted by the OS.

```bash
mkdir -p /tmp/superset && cd /tmp/superset
tar xfvz ~/svn/superset/${SUPERSET_VERSION}/${SUPERSET_RELEASE_TARBALL}
```
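
Optionally, you can verify the tarball against its published checksum first. This is a sketch that assumes a `.sha512` checksum file was uploaded alongside the tarball, as is conventional for Apache releases:

```bash
# Hypothetical verification; assumes a .sha512 file sits next to the tarball
(cd ~/svn/superset/${SUPERSET_VERSION} && shasum -a 512 -c ${SUPERSET_RELEASE_TARBALL}.sha512)
```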

Create a virtual environment and install the dependencies

```bash
cd ${SUPERSET_RELEASE_RC}
python3 -m venv venv
source venv/bin/activate
pip install -r requirements/base.txt
pip install twine
```

Create the distribution

```bash
cd superset-frontend/
npm ci && npm run build
cd ../
flask fab babel-compile --target superset/translations
python setup.py sdist
```
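
Before publishing, it can be worth running twine's built-in validation over the artifact; this is an optional sanity check, not part of the official release steps:

```bash
# Optional: validate the sdist metadata before uploading
twine check dist/*
```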

Publish to PyPI

You may need to ask a fellow committer to grant
you access to the PyPI project if you don't have access already. Make sure to create
an account first if you don't have one, and reference your username
while requesting access to push packages.

```bash
twine upload dist/apache-superset-${SUPERSET_VERSION}.tar.gz

# Set your username to __token__
# Set your password to the token value, including the pypi- prefix
```
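
If you prefer a non-interactive upload, twine also reads credentials from the `TWINE_USERNAME` and `TWINE_PASSWORD` environment variables; a minimal sketch, with the token value as a placeholder:

```bash
# Non-interactive upload; replace the placeholder with your real PyPI API token
export TWINE_USERNAME=__token__
export TWINE_PASSWORD=pypi-XXXXXXXXXXXXXXXX
twine upload dist/apache-superset-${SUPERSET_VERSION}.tar.gz
```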

### Announcing

Once it's all done, an [ANNOUNCE] thread announcing the release to the dev@ mailing list is the final step.
4 changes: 2 additions & 2 deletions UPDATING.md
@@ -31,10 +31,10 @@ assists people when migrating to a new version.

### Breaking Changes

- [19770](https://github.com/apache/superset/pull/19770): As per SIPs 11 and 68, the native NoSQL Druid connector is deprecated and has been removed. Druid is still supported through SQLAlchemy via pydruid. The config keys `DRUID_IS_ACTIVE` and `DRUID_METADATA_LINKS_ENABLED` have also been removed.
- [19981](https://github.com/apache/superset/pull/19981): Per [SIP-81](https://github.com/apache/superset/issues/19953), the `/explore/form_data` API now requires a `datasource_type` in addition to a `datasource_id` for POST and PUT requests.
- [19770](https://github.com/apache/superset/pull/19770): Per [SIP-11](https://github.com/apache/superset/issues/6032) and [SIP-68](https://github.com/apache/superset/issues/14909), the native NoSQL Druid connector is deprecated and has been removed. Druid is still supported through SQLAlchemy via pydruid. The config keys `DRUID_IS_ACTIVE` and `DRUID_METADATA_LINKS_ENABLED` have also been removed.
- [19274](https://github.com/apache/superset/pull/19274): The `PUBLIC_ROLE_LIKE_GAMMA` config key has been removed; set `PUBLIC_ROLE_LIKE = "Gamma"` to get the same functionality.
- [19273](https://github.com/apache/superset/pull/19273): The `SUPERSET_CELERY_WORKERS` and `SUPERSET_WORKERS` config keys have been removed. Configure Celery directly using `CELERY_CONFIG` on Superset.
- [19262](https://github.com/apache/superset/pull/19262): Per [SIP-11](https://github.com/apache/superset/issues/6032) and [SIP-68](https://github.com/apache/superset/issues/14909) the native NoSQL Druid connector is deprecated and will no longer be supported. Druid SQL is still [supported](https://superset.apache.org/docs/databases/druid).
- [19231](https://github.com/apache/superset/pull/19231): The `ENABLE_REACT_CRUD_VIEWS` feature flag has been removed (permanently enabled). Any deployments which had set this flag to false will need to verify that the React views support their use case.
- [19230](https://github.com/apache/superset/pull/19230): The `ROW_LEVEL_SECURITY` feature flag has been removed (permanently enabled). Any deployments which had set this flag to false will need to verify that the presence of the Row Level Security feature does not interfere with their use case.
- [19168](https://github.com/apache/superset/pull/19168): The Celery upgrade to 5.X resulted in breaking changes to its command line invocation. Please follow [these](https://docs.celeryq.dev/en/stable/whatsnew-5.2.html#step-1-adjust-your-command-line-invocation) instructions for adjustments. Also consider migrating your Celery config per [here](https://docs.celeryq.dev/en/stable/userguide/configuration.html#conf-old-settings-map).
1 change: 1 addition & 0 deletions docker/run-server.sh
@@ -27,6 +27,7 @@ gunicorn \
--worker-class ${SERVER_WORKER_CLASS:-gthread} \
--threads ${SERVER_THREADS_AMOUNT:-20} \
--timeout ${GUNICORN_TIMEOUT:-60} \
--keep-alive ${GUNICORN_KEEPALIVE:-2} \
--limit-request-line ${SERVER_LIMIT_REQUEST_LINE:-0} \
--limit-request-field_size ${SERVER_LIMIT_REQUEST_FIELD_SIZE:-0} \
"${FLASK_APP}"
49 changes: 37 additions & 12 deletions docs/docs/databases/databricks.mdx
@@ -7,16 +7,12 @@ version: 1

## Databricks

To connect to Databricks, first install [databricks-dbapi](https://pypi.org/project/databricks-dbapi/) with the optional SQLAlchemy dependencies:
Databricks now offers a native DB API 2.0 driver, `databricks-sql-connector`, that can be used with the `sqlalchemy-databricks` dialect. You can install both with:

```bash
pip install databricks-dbapi[sqlalchemy]
pip install "superset[databricks]"
```

There are two ways to connect to Databricks: using a Hive connector or an ODBC connector. Both ways work similarly, but only ODBC can be used to connect to [SQL endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html).

### Hive

To use the Hive connector you need the following information from your cluster:

- Server hostname
@@ -27,31 +23,60 @@ These can be found under "Configuration" -> "Advanced Options" -> "JDBC/ODBC".

You also need an access token from "Settings" -> "User Settings" -> "Access Tokens".

Once you have all this information, add a database of type "Databricks (Hive)" in Superset, and use the following SQLAlchemy URI:
Once you have all this information, add a database of type "Databricks Native Connector" and use the following SQLAlchemy URI:

```
databricks+pyhive://token:{access token}@{server hostname}:{port}/{database name}
databricks+connector://token:{access_token}@{server_hostname}:{port}/{database_name}
```

You also need to add the following configuration to "Other" -> "Engine Parameters", with your HTTP path:

```json
{
"connect_args": {"http_path": "sql/protocolv1/o/****"},
"http_headers": [["User-Agent", "Apache Superset"]]
}
```

The `User-Agent` header is optional, but helps Databricks identify traffic from Superset. If you need to use a different header, please reach out to Databricks and let them know.

## Older driver

Originally, Superset used `databricks-dbapi` to connect to Databricks. You might want to try it if you're having problems with the official Databricks connector:

```bash
pip install "databricks-dbapi[sqlalchemy]"
```

There are two ways to connect to Databricks when using `databricks-dbapi`: using a Hive connector or an ODBC connector. Both ways work similarly, but only ODBC can be used to connect to [SQL endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html).

### Hive

To connect to a Hive cluster add a database of type "Databricks Interactive Cluster" in Superset, and use the following SQLAlchemy URI:

```
databricks+pyhive://token:{access_token}@{server_hostname}:{port}/{database_name}
```

You also need to add the following configuration to "Other" -> "Engine Parameters", with your HTTP path:

```json
{"connect_args": {"http_path": "sql/protocolv1/o/****"}}
```

### ODBC

For ODBC you first need to install the [ODBC drivers for your platform](https://databricks.com/spark/odbc-drivers-download).

For a regular connection use this as the SQLAlchemy URI:
For a regular connection use this as the SQLAlchemy URI after selecting either "Databricks Interactive Cluster" or "Databricks SQL Endpoint" for the database, depending on your use case:

```
databricks+pyodbc://token:{access token}@{server hostname}:{port}/{database name}
databricks+pyodbc://token:{access_token}@{server_hostname}:{port}/{database_name}
```

And for the connection arguments:

```
```json
{"connect_args": {"http_path": "sql/protocolv1/o/****", "driver_path": "/path/to/odbc/driver"}}
```

@@ -62,6 +87,6 @@ The driver path should be:

For a connection to a SQL endpoint you need to use the HTTP path from the endpoint:

```
```json
{"connect_args": {"http_path": "/sql/1.0/endpoints/****", "driver_path": "/path/to/odbc/driver"}}
```
20 changes: 20 additions & 0 deletions docs/docs/installation/sql-templating.mdx
@@ -33,6 +33,26 @@ For example, to add a time range to a virtual dataset, you can write the following:
SELECT * from tbl where dttm_col > '{{ from_dttm }}' and dttm_col < '{{ to_dttm }}'
```

You can also use [Jinja's logic](https://jinja.palletsprojects.com/en/2.11.x/templates/#tests)
to make your query robust to clearing the time range filter:

```sql
SELECT *
FROM tbl
WHERE (
{% if from_dttm is not none %}
dttm_col > '{{ from_dttm }}' AND
{% endif %}
{% if to_dttm is not none %}
dttm_col < '{{ to_dttm }}' AND
{% endif %}
true
)
```

Note how the Jinja parameters are referenced within double curly braces in the query, and without
them in the logic blocks.

To add custom functionality to the Jinja context, you need to overload the default Jinja
context in your environment by defining the `JINJA_CONTEXT_ADDONS` in your superset configuration
(`superset_config.py`). Objects referenced in this dictionary are made available for users to use
3 changes: 3 additions & 0 deletions docs/static/resources/openapi.json
@@ -4399,6 +4399,9 @@
"nullable": true,
"type": "boolean"
},
"kind": {
"readOnly": true
},
"main_dttm_col": {
"maxLength": 250,
"nullable": true,
2 changes: 1 addition & 1 deletion helm/superset/Chart.yaml
@@ -22,7 +22,7 @@ maintainers:
- name: craig-rueda
email: craig@craigrueda.com
url: https://github.com/craig-rueda
version: 0.6.2
version: 0.6.3
dependencies:
- name: postgresql
version: 11.1.22
3 changes: 3 additions & 0 deletions helm/superset/templates/deployment-beat.yaml
@@ -59,6 +59,9 @@ spec:
{{ toYaml .Values.supersetCeleryBeat.podLabels | nindent 8 }}
{{- end }}
spec:
{{- if or (.Values.serviceAccount.create) (.Values.serviceAccountName) }}
serviceAccountName: {{ template "superset.serviceAccountName" . }}
{{- end }}
securityContext:
runAsUser: {{ .Values.runAsUser }}
{{- if .Values.supersetCeleryBeat.initContainers }}
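
With this change, the Celery beat pod runs under the chart-managed service account whenever one is enabled. A sketch of enabling it at install time (the release name and chart path are illustrative):

```bash
# Hypothetical install with the chart-managed service account enabled
helm upgrade --install superset ./helm/superset --set serviceAccount.create=true
```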
33 changes: 0 additions & 33 deletions scripts/pypi_push.sh

This file was deleted.

5 changes: 4 additions & 1 deletion setup.py
@@ -129,7 +129,10 @@ def get_git_sha() -> str:
"cockroachdb": ["cockroachdb>=0.3.5, <0.4"],
"cors": ["flask-cors>=2.0.0"],
"crate": ["crate[sqlalchemy]>=0.26.0, <0.27"],
"databricks": ["databricks-dbapi[sqlalchemy]>=0.5.0, <0.6"],
"databricks": [
"databricks-sql-connector>=2.0.2, <3",
"sqlalchemy-databricks>=0.2.0",
],
"db2": ["ibm-db-sa>=0.3.5, <0.4"],
"dremio": ["sqlalchemy-dremio>=1.1.5, <1.3"],
"drill": ["sqlalchemy-drill==0.1.dev"],
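
The reshaped `databricks` extra can be exercised from a source checkout in the usual way; a sketch, run from the repository root:

```bash
# Hypothetical editable install pulling in the new databricks extra
pip install -e ".[databricks]"
```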
@@ -45,11 +45,6 @@ describe('Dashboard edit mode', () => {
.should('not.exist');
});

cy.get('[data-test="dashboard-builder-component-pane-tabs-navigation"]')
.find('.ant-tabs-tab')
.last()
.click();

// find box plot is available from list
cy.get('[data-test="dashboard-charts-filter-search-input"]').type(
'Box plot',
@@ -29,6 +29,11 @@ describe('Dashboard edit markdown', () => {
.find('[aria-label="Edit dashboard"]')
.click();

cy.get('[data-test="dashboard-builder-component-pane-tabs-navigation"]')
.find('.ant-tabs-tab')
.last()
.click();

// lazy load - need to open dropdown for the scripts to load
cy.get('.header-with-actions').find('[aria-label="more-horiz"]').click();
cy.get('[data-test="grid-row-background--transparent"]')
2 changes: 1 addition & 1 deletion superset-frontend/jest.config.js
@@ -47,7 +47,7 @@ module.exports = {
'tmp/',
'dist/',
],
coverageReporters: ['lcov', 'json-summary', 'html'],
coverageReporters: ['lcov', 'json-summary', 'html', 'text'],
moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json'],
snapshotSerializers: ['@emotion/jest/enzyme-serializer'],
globals: {
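
With `text` added to `coverageReporters`, a coverage run now prints a summary table to the terminal in addition to the lcov/html reports; for example (a sketch, run from `superset-frontend/`):

```bash
# Hypothetical coverage run; the text reporter prints a summary to stdout
npx jest --coverage
```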
@@ -31,7 +31,7 @@ export interface RadioButtonControlProps {
description?: string;
options: RadioButtonOption[];
hovered?: boolean;
value?: string;
value?: JsonValue;
onChange: (opt: RadioButtonOption[0]) => void;
}

22 changes: 19 additions & 3 deletions superset-frontend/packages/superset-ui-chart-controls/src/types.ts
@@ -17,17 +17,17 @@
* specific language governing permissions and limitations
* under the License.
*/
import React, { ReactNode, ReactText, ReactElement } from 'react';
import React, { ReactElement, ReactNode, ReactText } from 'react';
import type {
AdhocColumn,
Column,
DatasourceType,
JsonValue,
Metric,
QueryFormColumn,
QueryFormData,
QueryResponse,
QueryFormMetric,
QueryFormColumn,
QueryResponse,
} from '@superset-ui/core';
import { sharedControls } from './shared-controls';
import sharedControlComponents from './shared-controls/components';
@@ -437,3 +437,19 @@ export function isControlPanelSectionConfig(
): section is ControlPanelSectionConfig {
return section !== null;
}

export function isDataset(
datasource: Dataset | QueryResponse | null | undefined,
): datasource is Dataset {
return !!datasource && 'columns' in datasource;
}

export function isQueryResponse(
datasource: Dataset | QueryResponse | null | undefined,
): datasource is QueryResponse {
return (
!!datasource &&
('results' in datasource ||
datasource?.type === ('query' as DatasourceType.Query))
);
}