feat(object storage): add unused object storage #9846
Conversation
docker-compose.dev.yml
Outdated
ports:
  - '19000:19000'
  - '19001:19001'
volumes:
For dev compose files, could we remove these volumes? I believe the Dockerfile already specifies that /data is a volume and should create an anonymous volume for us? Otherwise we might need to add ./object_storage to gitignore.
object_storage is in .gitignore already... but there's no point keeping unnecessary entries in the docker-compose file anyway 👍
ports:
  - '19000:19000'
  - '19001:19001'
volumes:
For hobby, I'd add a named volume as with zookeeper and reference this rather than ./object_storage, to be in line with the other services.
Looks good. The only real thing is the hobby deployment comment.
posthog/storage/object_storage.py
Outdated
if not s3_client:
    s3_client = client(
        "s3",
        endpoint_url=f"http://{OBJECT_STORAGE_HOST}:{OBJECT_STORAGE_PORT}",
Nit, but I think we should simply provide the whole endpoint_url as {OBJECT_STORAGE_ENDPOINT}. Hardcoding http is already a warning flag to me.
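For illustration, a minimal sketch of that shape, assuming Django settings named OBJECT_STORAGE_ENDPOINT, OBJECT_STORAGE_ACCESS_KEY_ID and OBJECT_STORAGE_SECRET_ACCESS_KEY (illustrative names, not necessarily what this PR ends up with):

from boto3 import client
from django.conf import settings

_s3_client = None


def storage_client():
    """Lazily create and cache the S3 client for the configured endpoint."""
    global _s3_client
    if _s3_client is None:
        _s3_client = client(
            "s3",
            # A single setting carries scheme, host and port, e.g. "http://localhost:19000"
            # in dev and an https:// URL elsewhere, so no scheme is hardcoded here.
            endpoint_url=settings.OBJECT_STORAGE_ENDPOINT,
            aws_access_key_id=settings.OBJECT_STORAGE_ACCESS_KEY_ID,
            aws_secret_access_key=settings.OBJECT_STORAGE_SECRET_ACCESS_KEY,
        )
    return _s3_client

Keeping the full URL in one setting also means switching to TLS is a configuration change rather than a code change.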
Minor comment, overall LGTM
* master:
  * chore: start stack once in cloud tests (#9879)
  * feat(apps): frontend apps (#9831)
  * chore: Fix snapshots on master (#9885)
  * chore(apps): rename plugins to apps (#9755)
  * refactor: Remove constance library dependency, use json-encoded model (#9852)
  * chore(clickhouse): avoid creating kafka_events, events_mv (#9863)
  * fix(insights): Fix timezone date issues (#9678)
  * refactor(plugin-server): refactor the event pipeline (#9829)
  * feat(object storage): add unused object storage (#9846)
  * fix: make kafka health check timeout test reliable (#9857)
  * fix: query elements from start of day (#9827)
* feat(object_storage): add unused object storage with health checks
* only prompt debug users if object storage not available at preflight
* safe plugin server health check for unused object storage
* explicit object storage settings
* explicit object storage settings
* explicit object storage settings
* downgrade pip tools
* without spaces?
* like this?
* without updating pip?
* remove object_storage from dev volumes
* named volume on hobby
* lazily init object storage
* simplify conditional check
* reproduced error locally
* reproduced error locally
* object_storage_endpoint not host and port
* log more when checking kafka and clickhouse
* don't filter docker output
* add kafka to hosts before starting stack?
* silly cloud tests (not my brain)
Problem
See #9294: for several reasons we want to add object storage.
Changes
This adds object storage (minio) to the docker-compose files and wires it into health checks, the preflight check, and instance status, without breaking anything for installations that don't run it and without using it for anything yet.
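To illustrate the "without breaking" part, here is a hedged sketch of an availability check that never raises and only reports a boolean, so health checks and instance status can show object storage as unavailable without taking anything else down (setting and function names are illustrative, not the PR's actual API):

import logging

from boto3 import client
from django.conf import settings

logger = logging.getLogger(__name__)


def is_object_storage_available() -> bool:
    """Return True only when object storage is enabled and reachable; never raise."""
    if not getattr(settings, "OBJECT_STORAGE_ENABLED", False):
        return False  # disabled counts as "unavailable", not as an error
    try:
        s3 = client(
            "s3",
            endpoint_url=settings.OBJECT_STORAGE_ENDPOINT,
            aws_access_key_id=settings.OBJECT_STORAGE_ACCESS_KEY_ID,
            aws_secret_access_key=settings.OBJECT_STORAGE_SECRET_ACCESS_KEY,
        )
        # head_bucket fails fast when minio isn't running or the bucket doesn't exist
        s3.head_bucket(Bucket=settings.OBJECT_STORAGE_BUCKET)
        return True
    except Exception:
        logger.warning("object storage health check failed", exc_info=True)
        return False

Callers (health endpoint, preflight, instance status) only consume the boolean, which is what keeps installations that never configured object storage unaffected.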
Preflight
* debug set to true, object store not running: the user sees a preflight warning (the gating is sketched below this list)
* debug set to false, object store not running: no warning for the user
* object store running: no warning
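A sketch of that gating, assuming preflight is extended with a couple of extra fields and the warning is only ever surfaced for DEBUG instances (key names and the availability flag's source are hypothetical):

from django.conf import settings


def object_storage_preflight_fields(storage_available: bool) -> dict:
    """Extra preflight fields; the UI would warn only when the warning flag is True."""
    return {
        "object_storage": storage_available,
        # Only development (DEBUG) instances with an unreachable object store get a
        # warning; non-debug installs that don't run object storage see nothing new.
        "object_storage_warning": settings.DEBUG and not storage_available,
    }

With that shape, all three scenarios above reduce to a single boolean expression.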
Plugin Server
* on startup, with object storage enabled and running
* on startup, with object storage enabled but not running
* on startup, with object storage disabled (regardless of whether minio is running)
Django
Instance status
How did you test this code?
Running it locally with object storage enabled, with the minio container both running and not running.