Make installation available on Unraid OS #81
Comments
I understand how convenient it would be for hoarder to be available in the Unraid store. Unfortunately, as far as I know, the Unraid store doesn't support multi-container deployments; the services would need to be set up separately, which won't be that easy. It might be easier to use the "Docker compose manager" plugin (https://forums.unraid.net/topic/114415-plugin-docker-compose-manager/) to set up hoarder instead. |
It really isn't hard to get Hoarder running under Unraid. Install Docker Compose Manager, copy in compose file, set a couple of variables in .env per the install docs. Took me less than 5 min to have it running. Dockerman in Unraid is very limiting - it's worth the effort to learn how to do Docker things, and especially Compose things, outside of it. |
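For anyone following that route, here is a minimal sketch of the Compose Manager workflow. The .env variable names below are assumptions based on the install docs at the time (NEXTAUTH_URL, NEXTAUTH_SECRET and MEILI_MASTER_KEY are not confirmed in this thread), and the values are placeholders:

# .env placed next to the compose file in the Compose Manager stack
HOARDER_VERSION=release
NEXTAUTH_URL=http://YOUR_UNRAID_IP:3000      # assumed variable name, per install docs
NEXTAUTH_SECRET=some_long_random_string      # assumed variable name; generate e.g. with: openssl rand -base64 36
MEILI_MASTER_KEY=another_long_random_string  # assumed variable name; generate the same way

# Compose Manager's "up" action is equivalent to running:
docker compose up -d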
I would jump on board if I saw an Unraid Docker image!! |
@bob912 |
@TheSylus Typically you shouldn't change the DATA_DIR var; it should stay '/data'. If you want to make your data persistent, you change what /data gets mapped to. So the line that says - data:/data should become something like - /mnt/user/appdata/hoarder/data:/data, i.e. a real path on your host.
Change that for all the containers. |
@TheSylus Change the volume definitions of each container to include a real path as mentioned by @MohamedBassem and comment out the internal volume definitions at the bottom.

version: "3.8"
services:
  web:
    image: ghcr.io/mohamedbassem/hoarder-web:${HOARDER_VERSION:-release}
    restart: unless-stopped
    volumes:
      # - data:/data
      - /mnt/user/appdata/hoarder/data:/data:rw
    ports:
      - 3000:3000
    env_file:
      - .env
    environment:
      REDIS_HOST: redis
      MEILI_ADDR: http://meilisearch:7700
      DATA_DIR: /data
  redis:
    image: redis:7.2-alpine
    restart: unless-stopped
    volumes:
      # - redis:/data
      - /mnt/user/appdata/hoarder/redis:/data:rw
  chrome:
    image: gcr.io/zenika-hub/alpine-chrome:100
    restart: unless-stopped
    command:
      - --no-sandbox
      - --disable-gpu
      - --remote-debugging-address=0.0.0.0
      - --remote-debugging-port=9222
  meilisearch:
    image: getmeili/meilisearch:v1.6
    restart: unless-stopped
    env_file:
      - .env
    volumes:
      # - meilisearch:/meili_data
      - /mnt/user/appdata/hoarder/meili_data:/meili_data:rw
  workers:
    image: ghcr.io/mohamedbassem/hoarder-workers:${HOARDER_VERSION:-release}
    restart: unless-stopped
    volumes:
      # - data:/data
      - /mnt/user/appdata/hoarder/data:/data:rw
    env_file:
      - .env
    environment:
      REDIS_HOST: redis
      MEILI_ADDR: http://meilisearch:7700
      BROWSER_WEB_URL: http://chrome:9222
      DATA_DIR: /data
      # OPENAI_API_KEY: ...
    depends_on:
      web:
        condition: service_started

#volumes:
#  redis:
#  meilisearch:
#  data: |
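Once the stack is up, a quick sanity check from the Unraid terminal might look like the following; the project path and IP are placeholders, not taken from this thread:

cd /path/to/your/hoarder/compose/stack   # wherever Compose Manager keeps the project
docker compose ps                        # all five services should report as running
docker compose logs --tail 20 web        # recent logs from the web service
curl -I http://YOUR_UNRAID_IP:3000       # the web UI listens on port 3000 per the mapping above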
A native template for Unraid would be nice. I also have the Docker Compose setup running now after some fiddling, but native would be nicer. |
I have uploaded Hoarder and Hoarder-workers templates to CA and they are now available in Unraid. You will need to install a few other required containers as well if you don't already have them; they are listed in the template.
Support forum: https://forums.unraid.net/topic/165108-support-collectathon-hoarder/ |
Thanks a lot @Collectathon. Added the installation instructions to the docs! |
Hello Collectathon,

Firstly, thank you for the work you've done on the Hoarder project. I wanted to ask if it would be possible to create an all-in-one Docker image for absolute beginners like myself. An all-in-one image would simplify the installation process significantly and make it more accessible for users who may not be familiar with setting up multiple containers or Docker Compose.

Thank you for considering this request!

Best regards |
Hello,
Having a Docker app ready for Unraid that shows up in their app store would make it much easier for Unraid users to install.
Cheers!