feat(object storage): add unused object storage #9846

Merged · 24 commits · May 20, 2022
d0565fa
feat(object_storage): add unused object storage with health checks
pauldambra May 18, 2022
bf246d4
only prompt debug users if object storage not available at preflight
pauldambra May 18, 2022
78b6cb8
safe plugin server health check for unused object storage
pauldambra May 18, 2022
a9b6041
merge from master, resolving conflict, and regenerating requirements.txt
pauldambra May 18, 2022
49e984c
explicit object storage settings
pauldambra May 18, 2022
7612702
explicit object storage settings
pauldambra May 18, 2022
2ba951d
explicit object storage settings
pauldambra May 18, 2022
1833121
downgrade pip tools
pauldambra May 18, 2022
9fce55b
without spaces?
pauldambra May 19, 2022
0b429d0
like this?
pauldambra May 19, 2022
bd18138
without updating pip?
pauldambra May 19, 2022
bf4cf36
remove object_storage from dev volumes
pauldambra May 19, 2022
37b4b8f
named volume on hobby
pauldambra May 19, 2022
3ad75aa
lazily init object storage
pauldambra May 19, 2022
1a57e81
simplify conditional check
pauldambra May 19, 2022
38b02a4
reproduced error locally
pauldambra May 19, 2022
fce58b3
reproduced error locally
pauldambra May 19, 2022
0ee2019
object_storage_endpoint not host and port
pauldambra May 19, 2022
b269fc1
log more when checking kafka and clickhouse
pauldambra May 19, 2022
2dfa565
merge from master
pauldambra May 19, 2022
9039451
don't filter docker output
pauldambra May 19, 2022
24f8452
Merge branch 'master' into add_unused_object_storage
pauldambra May 19, 2022
9013f46
add kafka to hosts before starting stack?
pauldambra May 19, 2022
8fb2a51
silly cloud tests (not my brain)
pauldambra May 19, 2022
2 changes: 1 addition & 1 deletion .github/actions/run-backend-tests/action.yml
@@ -34,7 +34,7 @@ runs:
run: |
export CLICKHOUSE_SERVER_IMAGE=${{ inputs.clickhouse-server-image }}
docker-compose -f docker-compose.dev.yml down
docker-compose -f docker-compose.dev.yml up -d db clickhouse zookeeper kafka redis &
docker-compose -f docker-compose.dev.yml up -d db clickhouse zookeeper kafka redis object_storage &

- name: Set up Python
uses: actions/setup-python@v2
7 changes: 6 additions & 1 deletion .github/workflows/ci-backend.yml
@@ -24,6 +24,11 @@ env:
CLICKHOUSE_VERIFY: 'False'
TEST: 1
CLICKHOUSE_SERVER_IMAGE_VERSION: ${{ github.event.inputs.clickhouseServerVersion || '' }}
OBJECT_STORAGE_ENABLED: 'True'
OBJECT_STORAGE_HOST: 'localhost'
OBJECT_STORAGE_PORT: '19000'
OBJECT_STORAGE_ACCESS_KEY_ID: 'object_storage_root_user'
OBJECT_STORAGE_SECRET_ACCESS_KEY: 'object_storage_root_password'

jobs:
# Job to decide if we should run backend ci
@@ -264,7 +269,7 @@ jobs:
- name: Start stack with Docker Compose
run: |
docker-compose -f deploy/docker-compose.dev.yml down
docker-compose -f deploy/docker-compose.dev.yml up -d db clickhouse zookeeper kafka redis &
docker-compose -f deploy/docker-compose.dev.yml up -d db clickhouse zookeeper kafka redis object_storage &

- name: Set up Python 3.8.12
uses: actions/setup-python@v2
25 changes: 17 additions & 8 deletions .github/workflows/ci-plugin-server.yml
@@ -11,6 +11,15 @@ on:
- 'docker*.yml'
- '*Dockerfile'

env:
OBJECT_STORAGE_ENABLED: true
OBJECT_STORAGE_HOST: 'localhost'
OBJECT_STORAGE_PORT: '19000'
OBJECT_STORAGE_ACCESS_KEY_ID: 'object_storage_root_user'
OBJECT_STORAGE_SECRET_ACCESS_KEY: 'object_storage_root_password'
OBJECT_STORAGE_SESSION_RECORDING_FOLDER: 'session_recordings'
OBJECT_STORAGE_BUCKET: 'posthog'

jobs:
code-quality:
name: Code quality
@@ -73,8 +82,8 @@ jobs:
sudo bash -c 'echo "127.0.0.1 kafka zookeeper" >> /etc/hosts'
ping -c 1 kafka
ping -c 1 zookeeper
- name: Start Kafka, ClickHouse, Zookeeper
run: docker-compose -f docker-compose.dev.yml up -d zookeeper kafka clickhouse
- name: Start Kafka, ClickHouse, Zookeeper, Object Storage
run: docker-compose -f docker-compose.dev.yml up -d zookeeper kafka clickhouse object_storage

- name: Set up Python 3.8.12
uses: actions/setup-python@v2
@@ -154,8 +163,8 @@ jobs:
sudo bash -c 'echo "127.0.0.1 kafka zookeeper" >> /etc/hosts'
ping -c 1 kafka
ping -c 1 zookeeper
- name: Start Kafka, ClickHouse, Zookeeper
run: docker-compose -f docker-compose.dev.yml up -d zookeeper kafka clickhouse
- name: Start Kafka, ClickHouse, Zookeeper, Object Storage
run: docker-compose -f docker-compose.dev.yml up -d zookeeper kafka clickhouse object_storage

- name: Set up Python 3.8.12
uses: actions/setup-python@v2
@@ -236,8 +245,8 @@ jobs:
sudo bash -c 'echo "127.0.0.1 kafka zookeeper" >> /etc/hosts'
ping -c 1 kafka
ping -c 1 zookeeper
- name: Start Kafka, ClickHouse, Zookeeper
run: docker-compose -f docker-compose.dev.yml up -d zookeeper kafka clickhouse
- name: Start Kafka, ClickHouse, Zookeeper, Object Storage
run: docker-compose -f docker-compose.dev.yml up -d zookeeper kafka clickhouse object_storage

- name: Set up Python 3.8.12
uses: actions/setup-python@v2
@@ -318,8 +327,8 @@ jobs:
sudo bash -c 'echo "127.0.0.1 kafka zookeeper" >> /etc/hosts'
ping -c 1 kafka
ping -c 1 zookeeper
- name: Start Kafka, ClickHouse, Zookeeper
run: docker-compose -f docker-compose.dev.yml up -d zookeeper kafka clickhouse
- name: Start Kafka, ClickHouse, Zookeeper, Object Storage
run: docker-compose -f docker-compose.dev.yml up -d zookeeper kafka clickhouse object_storage

- name: Set up Python 3.8.12
uses: actions/setup-python@v2
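The plugin-server CI jobs above export MinIO-style settings (OBJECT_STORAGE_HOST, OBJECT_STORAGE_PORT, access key, secret, bucket and session-recording folder) and start the object_storage container alongside Zookeeper, Kafka and ClickHouse. The commit list mentions a "safe plugin server health check for unused object storage"; the following is a minimal sketch of the kind of check those variables enable, assuming an aws-sdk S3 client pointed at the MinIO container. The function name and env handling are illustrative, not the PR's actual implementation.

import { S3 } from 'aws-sdk'

const storage = new S3({
    // e.g. http://localhost:19000 with the CI values above
    endpoint: `http://${process.env.OBJECT_STORAGE_HOST}:${process.env.OBJECT_STORAGE_PORT}`,
    accessKeyId: process.env.OBJECT_STORAGE_ACCESS_KEY_ID,
    secretAccessKey: process.env.OBJECT_STORAGE_SECRET_ACCESS_KEY,
    s3ForcePathStyle: true, // MinIO addresses buckets on the path, not on a subdomain
    signatureVersion: 'v4',
})

export async function objectStorageHealthy(): Promise<boolean> {
    // Storage is optional for now, so an instance without the flag still counts as healthy.
    if (!process.env.OBJECT_STORAGE_ENABLED) {
        return true
    }
    try {
        await storage.headBucket({ Bucket: process.env.OBJECT_STORAGE_BUCKET || 'posthog' }).promise()
        return true
    } catch {
        return false // "safe": the health check never throws because of storage
    }
}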
11 changes: 7 additions & 4 deletions .github/workflows/e2e.yml
@@ -18,6 +18,11 @@ env:
SITE_URL: 'test.posthog.net' # used to test password resets
NO_RESTART_LOOP: 1
CLICKHOUSE_SECURE: 0
OBJECT_STORAGE_ENABLED: 1
OBJECT_STORAGE_HOST: 'localhost'
OBJECT_STORAGE_PORT: '19000'
OBJECT_STORAGE_ACCESS_KEY_ID: 'object_storage_root_user'
OBJECT_STORAGE_SECRET_ACCESS_KEY: 'object_storage_root_password'

jobs:
# Job that lists and chunks spec file names and caches node modules
@@ -69,7 +74,7 @@ jobs:
- name: Start stack with Docker Compose
run: |
docker-compose -f docker-compose.dev.yml down
docker-compose -f docker-compose.dev.yml up -d db clickhouse zookeeper kafka redis &
docker-compose -f docker-compose.dev.yml up -d db clickhouse zookeeper kafka redis object_storage &
- name: Add kafka host to /etc/hosts for kafka connectivity
run: sudo echo "127.0.0.1 kafka" | sudo tee -a /etc/hosts

@@ -91,9 +96,7 @@ jobs:
- name: Install python dependencies
if: steps.cache-virtualenv.outputs.cache-hit != 'true'
run: |
python -m pip install --upgrade pip
python -m pip install $(grep -ivE "psycopg2" requirements.txt | cut -d'#' -f1) --no-cache-dir --compile
python -m pip install psycopg2-binary --no-cache-dir --compile
python -m pip install -r requirements.txt --no-cache-dir --compile
- uses: actions/setup-node@v1
with:
node-version: 16
7 changes: 4 additions & 3 deletions .run/Plugin Server.run.xml
@@ -7,12 +7,13 @@
</scripts>
<node-interpreter value="project" />
<envs>
<env name="WORKER_CONCURRENCY" value="2" />
<env name="CLICKHOUSE_SECURE" value="False" />
<env name="DATABASE_URL" value="postgres://posthog:posthog@localhost:5432/posthog" />
<env name="KAFKA_ENABLED" value="true" />
<env name="CLICKHOUSE_SECURE" value="False" />
<env name="KAFKA_HOSTS" value="localhost:9092" />
<env name="WORKER_CONCURRENCY" value="2" />
<env name="OBJECT_STORAGE_ENABLED" value="True" />
</envs>
<method v="2" />
</configuration>
</component>
</component>
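The IDE run configuration only adds OBJECT_STORAGE_ENABLED, leaving host, port and credentials to their defaults, and the commit list mentions lazily initialising object storage so that instances which never enable the flag do not build a client at startup. A sketch of that pattern follows, with illustrative names and the same assumed aws-sdk client as in the health-check sketch above; the flag is written variously as 'True', true and 1 across the files in this PR, so the sketch simply treats any non-empty value as enabled.

import { S3 } from 'aws-sdk'

let client: S3 | null = null

export function objectStorage(): S3 | null {
    if (!process.env.OBJECT_STORAGE_ENABLED) {
        return null // storage stays untouched unless explicitly enabled
    }
    if (!client) {
        // Constructed on first use only ("lazily init object storage")
        client = new S3({
            endpoint: `http://${process.env.OBJECT_STORAGE_HOST || 'localhost'}:${process.env.OBJECT_STORAGE_PORT || '19000'}`,
            accessKeyId: process.env.OBJECT_STORAGE_ACCESS_KEY_ID,
            secretAccessKey: process.env.OBJECT_STORAGE_SECRET_ACCESS_KEY,
            s3ForcePathStyle: true,
        })
    }
    return client
}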
15 changes: 15 additions & 0 deletions docker-compose.arm64.yml
@@ -78,6 +78,7 @@ services:
- redis
- clickhouse
- kafka
- object_storage
web:
<<: *worker
command: '${CH_WEB_SCRIPT:-./ee/bin/docker-ch-dev-web}'
@@ -103,3 +104,17 @@ services:
- redis
- clickhouse
- kafka
- object_storage

object_storage:
image: minio/minio
ports:
- '19000:19000'
- '19001:19001'
volumes:
- ./object_storage:/data
environment:
MINIO_ROOT_USER: object_storage_root_user
MINIO_ROOT_PASSWORD: object_storage_root_password
entrypoint: sh
command: -c 'mkdir -p /data/posthog && minio server --address ":19000" --console-address ":19001" /data' # create the 'posthog' bucket before starting the service
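The entrypoint override is what creates the bucket: MinIO treats top-level directories under its data path as buckets, so mkdir -p /data/posthog pre-creates the posthog bucket before the server starts on :19000 (console on :19001). Once the stack is up, a hypothetical smoke test like the one below, assuming the aws-sdk client and an illustrative object key, confirms the bucket is usable from the host; the key prefix mirrors OBJECT_STORAGE_SESSION_RECORDING_FOLDER from the CI env.

import { S3 } from 'aws-sdk'

const s3 = new S3({
    endpoint: 'http://localhost:19000',
    accessKeyId: 'object_storage_root_user',
    secretAccessKey: 'object_storage_root_password',
    s3ForcePathStyle: true,
})

async function smokeTest(): Promise<void> {
    const key = 'session_recordings/smoke-test.txt' // illustrative key
    await s3.putObject({ Bucket: 'posthog', Key: key, Body: 'hello from MinIO' }).promise()
    const fetched = await s3.getObject({ Bucket: 'posthog', Key: key }).promise()
    console.log(fetched.Body?.toString()) // prints "hello from MinIO"
}

smokeTest().catch((error) => {
    console.error('object storage is not reachable on :19000', error)
    process.exit(1)
})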
13 changes: 13 additions & 0 deletions docker-compose.dev.yml
@@ -75,6 +75,7 @@ services:
- redis
- clickhouse
- kafka
- object_storage
web:
<<: *worker
command: '${CH_WEB_SCRIPT:-./ee/bin/docker-ch-dev-web}'
@@ -100,3 +101,15 @@ services:
- redis
- clickhouse
- kafka
- object_storage

object_storage:
image: minio/minio
ports:
- '19000:19000'
- '19001:19001'
environment:
MINIO_ROOT_USER: object_storage_root_user
MINIO_ROOT_PASSWORD: object_storage_root_password
entrypoint: sh
command: -c 'mkdir -p /data/posthog && chmod u+rxw /data/posthog && minio server --address ":19000" --console-address ":19001" /data' # create the 'posthog' bucket before starting the service
17 changes: 17 additions & 0 deletions docker-compose.hobby.yml
@@ -84,6 +84,7 @@ services:
- redis
- clickhouse
- kafka
- object_storage
web:
<<: *worker
command: /compose/start
@@ -117,6 +118,21 @@
- redis
- clickhouse
- kafka
- object_storage

object_storage:
image: minio/minio
ports:
- '19000:19000'
- '19001:19001'
volumes:
Contributor (review comment): For hobby, I'd add a named volume as with zookeeper and reference this rather than ./object_storage, to be in line with the other services.

- object_storage:/data
environment:
MINIO_ROOT_USER: object_storage_root_user
MINIO_ROOT_PASSWORD: object_storage_root_password
entrypoint: sh
command: -c 'mkdir -p /data/posthog && minio server --address ":19000" --console-address ":19001" /data' # create the 'posthog' bucket before starting the service

asyncmigrationscheck:
<<: *worker
command: python manage.py run_async_migrations --check
@@ -127,3 +143,4 @@ volumes:
zookeeper-data:
zookeeper-datalog:
zookeeper-logs:
object_storage:
1 change: 1 addition & 0 deletions docker-compose.test.yml
@@ -23,3 +23,4 @@ services:
- redis
- clickhouse
- kafka
- object_storage
15 changes: 15 additions & 0 deletions docker-compose.yml
@@ -54,6 +54,7 @@ services:
- redis
- clickhouse
- kafka
- object_storage
environment:
DATABASE_URL: postgres://posthog:posthog@db:5432/posthog
REDIS_URL: redis://redis:6379/
@@ -70,6 +71,20 @@
ports:
- 8000:8000
- 80:8000

object_storage:
image: minio/minio
ports:
- '19000:19000'
- '19001:19001'
volumes:
- ./object_storage:/data
environment:
MINIO_ROOT_USER: object_storage_root_user
MINIO_ROOT_PASSWORD: object_storage_root_password
entrypoint: sh
command: -c 'mkdir -p /data/posthog && minio server --address ":19000" --console-address ":19001" /data' # create the 'posthog' bucket before starting the service

volumes:
postgres-data:
version: '3'
1 change: 1 addition & 0 deletions frontend/src/mocks/fixtures/_preflight.json
@@ -9,6 +9,7 @@
"initiated": true,
"cloud": false,
"demo": false,
"object_storage": true,
"realm": "hosted-clickhouse",
"available_social_auth_providers": {
"github": false,
@@ -28,6 +28,7 @@
"clickhouse": false,
"kafka": false,
"realm": "hosted",
"object_storage": true,
"available_social_auth_providers": {
"github": false,
"gitlab": false,
14 changes: 12 additions & 2 deletions frontend/src/scenes/PreflightCheck/preflightLogic.test.ts
@@ -86,6 +86,11 @@ describe('preflightLogic', () => {
status: 'warning',
caption: 'Set up before ingesting real user data',
},
{
id: 'object_storage',
name: 'Object Storage',
status: 'validated',
},
],
})
})
@@ -144,6 +149,11 @@
status: 'optional',
caption: 'Not required for experimentation mode',
},
{
id: 'object_storage',
name: 'Object Storage',
status: 'validated',
},
],
})
})
@@ -156,7 +166,7 @@
.toDispatchActions(['loadPreflightSuccess'])
.toMatchValues({
checksSummary: {
summaryString: '6 successful, 1 warning, 2 errors',
summaryString: '7 successful, 1 warning, 2 errors',
summaryStatus: 'error',
},
})
@@ -169,7 +179,7 @@
.toDispatchActions(['loadPreflightSuccess'])
.toMatchValues({
checksSummary: {
summaryString: '6 successful, 1 warning, 1 error, 1 optional',
summaryString: '7 successful, 1 warning, 1 error, 1 optional',
summaryStatus: 'error',
},
})
18 changes: 16 additions & 2 deletions frontend/src/scenes/PreflightCheck/preflightLogic.tsx
@@ -76,7 +76,7 @@ export const preflightLogic = kea<
checks: [
(s) => [s.preflight, s.preflightMode],
(preflight, preflightMode) => {
return [
const preflightItems = [
{
id: 'database',
name: 'Application database · Postgres',
@@ -139,7 +139,21 @@
? 'Not required for experimentation mode'
: 'Set up before ingesting real user data',
},
] as PreflightItemInterface[]
]

if (preflight?.object_storage || preflight?.is_debug) {
/** __for now__, only prompt debug users if object storage is unhealthy **/
preflightItems.push({
id: 'object_storage',
name: 'Object Storage',
status: preflight?.object_storage ? 'validated' : 'warning',
caption: preflight?.object_storage
? undefined
: 'Some features will not work without object storage',
})
}

return preflightItems as PreflightItemInterface[]
},
],
checksSummary: [
1 change: 1 addition & 0 deletions frontend/src/types.ts
@@ -1390,6 +1390,7 @@ export interface PreflightStatus {
licensed_users_available?: number | null
site_url?: string
instance_preferences?: InstancePreferencesInterface
object_storage: boolean
}

export enum ItemMode { // todo: consolidate this and dashboardmode
4 changes: 2 additions & 2 deletions package.json
@@ -52,9 +52,9 @@
"arm64:ch-dev:start": "concurrently -n DOCKER,ESBUILD,TYPEGEN -c red,blue,green \"docker-compose -f docker-compose.arm64.yml pull && CH_WEB_SCRIPT=./ee/bin/docker-ch-dev-backend docker-compose -f docker-compose.arm64.yml up\" \"yarn run start-http --host 0.0.0.0\" \"yarn run typegen:watch\"",
"arm64:ch-dev:clear": "docker compose -f docker-compose.arm64.yml stop && docker compose -f docker-compose.arm64.yml rm -v && docker compose -f docker-compose.arm64.yml down",
"arm64:services": "yarn arm64:services:stop && yarn arm64:services:clean && yarn arm64:services:start",
"arm64:services:start": "docker-compose -f docker-compose.arm64.yml up zookeeper kafka clickhouse",
"arm64:services:start": "docker-compose -f docker-compose.arm64.yml up zookeeper kafka clickhouse object_storage",
"arm64:services:stop": "docker-compose -f docker-compose.arm64.yml down",
"arm64:services:clean": "docker-compose -f docker-compose.arm64.yml rm -v zookeeper kafka clickhouse",
"arm64:services:clean": "docker-compose -f docker-compose.arm64.yml rm -v zookeeper kafka clickhouse object_storage",
"dev:migrate:postgres": "export DEBUG=1 && source env/bin/activate && python manage.py migrate",
"dev:migrate:clickhouse": "export DEBUG=1 && source env/bin/activate && python manage.py migrate_clickhouse",
"prepare": "husky install"
4 changes: 2 additions & 2 deletions plugin-server/package.json
@@ -33,9 +33,9 @@
"prepublishOnly": "yarn build",
"setup:dev:clickhouse": "cd .. && DEBUG=1 python manage.py migrate_clickhouse",
"setup:test": "cd .. && TEST=1 python manage.py setup_test_environment",
"services:start": "cd .. && docker-compose -f docker-compose.dev.yml up zookeeper kafka clickhouse",
"services:start": "cd .. && docker-compose -f docker-compose.dev.yml up zookeeper kafka clickhouse object_storage",
"services:stop": "cd .. && docker-compose -f docker-compose.dev.yml down",
"services:clean": "cd .. && docker-compose -f docker-compose.dev.yml rm -v zookeeper kafka clickhouse",
"services:clean": "cd .. && docker-compose -f docker-compose.dev.yml rm -v zookeeper kafka clickhouse object_storage",
"services": "yarn services:stop && yarn services:clean && yarn services:start"
},
"bin": {