[Feature] object storage #4445

Open
uhthomas opened this issue Oct 11, 2023 · 31 comments
@uhthomas
Member

Object storage support has been widely requested (#1683) and is something we're keen to support. The limitations imposed by object storage happen to be beneficial for data resilience and consistency, as they make features like the storage template infeasible. Issues like orphaned assets (#2877) or asset availability (#4442) would be resolved completely.

As discussed on the orphaned assets issue (#2877), I'd like to propose a new storage layout designed for object storage, with scalability and resilience as priorities.

.
└── <asset id>/
    ├── <original asset filename>
    ├── sidecar.xml
    └── thumbnails/
        ├── small.jpg
        └── large.webp

Where:

  • <asset id> is a unique ID for an asset, ideally a random UUID. UUIDv7 may be beneficial due to its natural ordering; if not, UUIDv4 should be sufficient.
  • <original asset filename> is the original filename of the asset, as it was uploaded.

The above structure should scale efficiently while remaining resilient and flexible. The unique 'directory' for an asset can contain additional files like edits, colour profiles, thumbnails or anything else.
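A minimal sketch of how object keys under this layout might be constructed (the helper name and shape are illustrative, not part of the proposal):

```typescript
import { randomUUID } from 'node:crypto';

// Hypothetical helper: derives the object keys for one asset under the
// proposed layout. Names are illustrative, not Immich's actual API.
function assetKeys(originalFilename: string, assetId: string = randomUUID()) {
  return {
    original: `${assetId}/${originalFilename}`,
    sidecar: `${assetId}/sidecar.xml`,
    thumbnail: (name: string) => `${assetId}/thumbnails/${name}`,
  };
}

const keys = assetKeys('IMG_1234.HEIC');
// keys.original               -> '<asset id>/IMG_1234.HEIC'
// keys.sidecar                -> '<asset id>/sidecar.xml'
// keys.thumbnail('small.jpg') -> '<asset id>/thumbnails/small.jpg'
```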

The original file and filename are preserved in case of an unlikely full database loss, where it should be possible to restore most information. This property is also good for humans, or if a full export is required; a directory of vague filenames without extensions would be quite unhelpful. I feel this strikes a good balance between legibility and resiliency.

I have also considered content-addressable storage (CAS), as it would save space in the event of duplicate uploads, but consider it impractical due to complexity and the previous concern about legibility. I believe deduplication should instead be deferred to the underlying storage provider, which can make much better decisions about how to store opaque binary blobs.

Part of this effort will require some changes to the storage interfaces (#1011), and the actual object storage implementation should use the AWS S3 SDK (docs). Most, if not all, object storage systems have an S3-compatible API.
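As a rough illustration of what the upload path could look like with the AWS SDK for JavaScript v3 (the bucket name, endpoint variable and function shape are assumptions for the sketch, not settled design):

```typescript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

// Works against AWS or any S3-compatible endpoint (MinIO, Backblaze B2, ...).
const s3 = new S3Client({
  endpoint: process.env.S3_ENDPOINT, // assumption: endpoint comes from config
  forcePathStyle: true, // many non-AWS providers require path-style addressing
});

// Hypothetical upload helper following the proposed <asset id>/<filename> layout.
async function uploadOriginal(assetId: string, filename: string, body: Buffer) {
  await s3.send(
    new PutObjectCommand({
      Bucket: 'immich-assets', // assumption: bucket name would be configurable
      Key: `${assetId}/${filename}`,
      Body: body,
    }),
  );
}
```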

@tonya11en

The advantages aren't clear to me after reading; can you elaborate? It seems like the pitch is that it fixes orphaned assets and availability during storage migrations, but I'm failing to see how object storage fixes this as opposed to anything else (storing photos in the DB, a flat filesystem indexed by the DB, etc.).

@uhthomas
Member Author

uhthomas commented Nov 8, 2023

@tonya11en This is possible with a regular file system, but there has been a lot of pushback against implementing the proposal for it. The current model is fundamentally incompatible with object storage, so the proposed safe and efficient structure is required.

It may be possible to introduce a configuration option to completely disable storage migration and use this proposal for block storage too, but I am not sure it's worth the confusion at present. I'd much rather implement object storage first and gather feedback.

I have started work on this, so hopefully I can show it soon.

@jrasm91
Contributor

jrasm91 commented Nov 9, 2023

I think the tl;dr is that if you never move the file after it is uploaded, you get a simpler system.

It has been discussed several times before, and we have no immediate plans to drop support for the storage template feature.

@pinpox

pinpox commented Dec 26, 2023

Apart from the technical benefits, object storage can be rented much more cheaply from providers like Backblaze, scales without re-partitioning drives, and has become pretty standard for these applications, as it makes deployment in clouds a lot easier and cleaner.

I'm eagerly awaiting S3 support in Immich to be able to migrate all my photos. I'm currently running a self-hosted Nextcloud instance on a small VPS with external S3 storage via Backblaze.

So, TL;DR: please strongly consider adding native support 🙂

@uhthomas
Member Author

#5917 will help, as it allows storage migration to be disabled.

@janbuchar

janbuchar commented Dec 31, 2023

To sum up discussion from Discord:

  • @zackpollard mentioned that having many nested folders would make listing all files in the library slow on HDDs
    • however, it is desirable to have the same directory layout for both object storage and local storage
    • not just for simplicity's sake, I can see myself wanting to move my library from rclone-mounted S3 to native S3
    • listing all files is done quite often, for example on the Repair page in the administration
  • renaming, however, is slow (and potentially expensive) in cloud storage - it's always copy+delete
    • the current directory layout needs to move every single uploaded file though
  • disabling storage template migration for cloud storage seems like a reasonable thing to do
  • wouldn't a middle ground approach where we store uploaded files in the local storage and upload them to cloud storage once metadata are extracted be sufficient? (I believe this was not answered by the team)

It is evident that there is pushback from the team against radical changes to the directory layout that may hinder performance. What would an MVP for cloud storage support look like?

@uhthomas
Member Author

uhthomas commented Jan 2, 2024

listing all files is done quite often

I would argue this is not the case; listing files is not a normal part of Immich's operation. It is only used for the repair page, and should run infrequently (if ever). There was also discussion of backups and how they may take a while, but I would argue those should also be infrequent operations. Regardless, it seems important to some users, so we should try to optimise for this case. @bo0tzz proposed we move forward with an object storage implementation and answer some of these questions later, so as to make progress, which I agree with.

wouldn't a middle ground approach where we store uploaded files in the local storage and upload them to cloud storage once metadata are extracted be sufficient? (I believe this was not answered by the team)

I don't think this would be sensible. The whole point of object storage support is to be fast and reliable. We should try to understand how to read directly from object storage rather than add additional complexity (i.e. persisting things in multiple places).

@janbuchar

listing all files is done quite often

I would argue this is not the case; listing files is not a normal part of Immich's operation.

I can't be the judge of that, but it looks like there is no consensus about it among the developers, so a conservative approach seems correct.

wouldn't a middle ground approach where we store uploaded files in the local storage and upload them to cloud storage once metadata are extracted be sufficient? (I believe this was not answered by the team)

I don't think this would be sensible. The whole point of object storage support is to be fast and reliable. We should try to understand how to read directly from object storage rather than add additional complexity (i.e. persisting things in multiple places).

I believe that this complexity is inherent to the problem though. Object storage can be the long-term destination for the assets, and assets can be delivered directly from there. However, operations such as metadata extraction and thumbnail generation work with the local filesystem and it would be difficult to change that.

@bo0tzz
Member

bo0tzz commented Jan 2, 2024

However, operations such as metadata extraction and thumbnail generation work with the local filesystem and it would be difficult to change that.

This is true, but I think the best approach there would be for the microservices instances to keep a cache folder that they download files into, rather than having files go to local storage *first* before being uploaded to S3.
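A sketch of what that cache-first read path could look like (the cache location, names and bucket are assumptions; this is not Immich's actual code):

```typescript
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { createWriteStream } from 'node:fs';
import { access, mkdir } from 'node:fs/promises';
import { pipeline } from 'node:stream/promises';
import type { Readable } from 'node:stream';
import * as path from 'node:path';

const s3 = new S3Client({});
const CACHE_DIR = '/cache'; // hypothetical per-instance cache folder

// Ensure the asset is on local disk, downloading from S3 on a cache miss,
// then return the local path for metadata extraction, thumbnailing, etc.
async function ensureLocal(bucket: string, key: string): Promise<string> {
  const local = path.join(CACHE_DIR, key);
  try {
    await access(local); // cache hit: file already downloaded
  } catch {
    await mkdir(path.dirname(local), { recursive: true });
    const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
    await pipeline(Body as Readable, createWriteStream(local));
  }
  return local;
}
```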

@janbuchar

However, operations such as metadata extraction and thumbnail generation work with the local filesystem and it would be difficult to change that.

This is true, but I think the best approach there would be for the microservices instances to keep a cache folder that they download files into, rather than having files go to local storage *first* before being uploaded to S3.

If I understand correctly, the two proposed ways of operation for the microservices are very similar - check if the target asset is present in the local filesystem (it doesn't matter if we call it a cache), if not, fetch it from the object storage. Then proceed with whatever the microservice does.

If the uploads folder is on the local filesystem, we 1) save ourselves one roundtrip to the object storage and 2) won't need to rename the uploaded file after we extract the metadata. What are the advantages of the object-storage-first approach?

@bo0tzz
Member

bo0tzz commented Jan 3, 2024

The advantage is consistency, knowing for a fact that if an asset is in Immich, it is absolutely also in the object storage. It also means that the server and microservices instances can be decoupled further, no longer requiring a shared filesystem.

@janbuchar

The advantage is consistency, knowing for a fact that if an asset is in Immich, it is absolutely also in the object storage. It also means that the server and microservices instances can be decoupled further, no longer requiring a shared filesystem.

Fair enough. What would be the way forward with object storage support though?

  • use the proposed storage for both object storage and local filesystem, ignoring the performance concerns?
  • have a different storage layout for object storage and local filesystem?
  • something entirely different?

@bo0tzz
Member

bo0tzz commented Jan 4, 2024

The past few days have seen significant discussion of the object storage topic amongst the maintainer team. There's no full consensus yet, but one thing that seems clear is that significant refactoring around how we store and handle files will be needed before object storage can be approached directly. That means things such as abstracting the current (filesystem) storage backend behind a common interface and using file streams throughout the code base rather than directly accessing paths. (I'll let @jrasm91 chime in on what other refactors might be needed.)
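For illustration, the kind of common interface being described might look something like this (a minimal sketch; the method set and names are assumptions, not the agreed design):

```typescript
import type { Readable } from 'node:stream';

// Hypothetical abstraction over storage backends. A filesystem implementation
// and an S3 implementation would both satisfy it, and callers would work with
// streams and keys instead of absolute paths.
interface StorageRepository {
  createReadStream(key: string): Promise<Readable>;
  write(key: string, data: Readable): Promise<void>;
  delete(key: string): Promise<void>;
  list(prefix: string): AsyncIterable<string>;
}
```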

@aries1980

As a workaround, maybe https://github.com/efrecon/docker-s3fs-client could help? Has anyone tried mounting S3 with FUSE?

@janbuchar

As a workaround, maybe https://github.com/efrecon/docker-s3fs-client could help? Has anyone tried mounting S3 with FUSE?

I currently run Immich with the rclone Docker volume driver and it is perfectly usable.

@LawyZheng

Is it possible to support S3 storage as an external library?
Something went wrong when I tried to use s3fs-client to share the volume between my host and container.
So maybe embed rclone/s3fs in the Docker image?
Use rclone to mount the S3 bucket as a local folder, and the rest would work the same.

@Underknowledge

As a workaround, maybe https://github.com/efrecon/docker-s3fs-client could help? Has anyone tried mounting S3 with FUSE?

I guess it could work, but I'd say the real benefit of using S3 is having the files on remote S3 storage.
You could use, for example, presigned URLs to avoid piping the data through the Immich instance (where I host the instance I have only 8 Mbit upload).
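For reference, generating such a presigned URL with the AWS SDK v3 looks roughly like this (bucket and key are placeholders; note that SigV4 caps validity at 7 days):

```typescript
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({});

// The client can fetch the object directly from S3 with this URL, so the
// bytes never pass through the Immich server.
async function presign(bucket: string, key: string): Promise<string> {
  return getSignedUrl(s3, new GetObjectCommand({ Bucket: bucket, Key: key }), {
    expiresIn: 7 * 24 * 3600, // SigV4 maximum: 7 days
  });
}
```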

@xangelix

xangelix commented May 12, 2024

For all using FUSE mount options, please consider https://github.com/yandex-cloud/geesefs.
It should be dramatically faster and far more POSIX-compatible.

Hoping for official support though! FUSE is always less than ideal.

@mdafer

mdafer commented May 12, 2024

For all using FUSE mount options, please consider https://github.com/yandex-cloud/geesefs. It should be dramatically faster and far more POSIX-compatible.

Hoping for official support though! FUSE is always less than ideal.

Thanks for the suggestion. I configured the rclone volume plugin yesterday and it was not usable at all: most thumbnails were missing and many original files were either missing or corrupted...

I'm gonna try this one today based on your suggestion.

Really looking forward to having native S3-compatible storage support!

Thank you Immich team for this amazing software :)

@dislazy

dislazy commented May 13, 2024

Immich is indeed amazing software and the experience is very good. In the cloud era we always want to have more backups, so being able to use S3 or even S3-compatible object storage feels like a great approach, and it can also effectively prevent data loss.

@pinpox

pinpox commented May 13, 2024

Immich team: I would be willing to contribute time or money to this feature, since S3 support is something I need personally. Is there a roadmap for this? Could it be broken up into tasks I can tackle as a contributor?
If this is something you as a team would rather implement internally, would it be possible to set up a bounty or similar specifically for this?

I would love to help out with this, let me know how to make it possible!

@createchange

createchange commented Jun 19, 2024

Another reason I would like this is to avoid egress bandwidth costs from cloud providers. If I could store in Backblaze, my cloud provider egress costs would evaporate.

@MattyMay

Any movement on this? Lack of support for S3 storage is the only thing keeping me from using Immich at the moment. I'm happy to contribute in any way I can if help is wanted.

@Underknowledge

I think this comment still gives the best overview.

The past few days have seen significant discussion of the object storage topic amongst the maintainer team. There's no full consensus yet, but one thing that seems clear is that significant refactoring around how we store and handle files will be needed before object storage can be approached directly. That means things such as abstracting the current (filesystem) storage backend behind a common interface and using file streams throughout the code base rather than directly accessing paths. (I'll let @jrasm91 chime in on what other refactors might be needed.)

There is even a bit of confusion about how this S3 support could work, though.
Let's take the use case of an Android phone (I think most people use Immich this way).
How could this be handled?

Example workflow
The app uploads a newly taken picture to the Immich server.
Immich takes this photo, extracts the metadata into the DB and resizes it as set in the options.
After all this, the server side uploads the new picture to S3 instead of the DB blob and... provides a link to the object?
Afterwards we can delete the original picture from the server again.
Does this mean you will lose access to the pictures after 7 days max when offline? (sounds actually like a good feature)
Including the secret and access key in the app seems like a terrible idea.
Or all this heavy work could be done on the client, but that also doesn't sound like a sane idea.

But yeah, scary stuff first: refactoring around how we store and handle files.

@pinpox

pinpox commented Jun 28, 2024

Does this mean you will lose access to the pictures after 7 days max when offline? (sounds actually like a good feature)
Including the secret and access key in the app seems like a terrible idea.

Why would that be needed?
The workflow would be the same as the Nextcloud app and server, for example.

The server is the only one interacting with the S3 remote, acting similarly to a proxy for the client. The client (app) only queries the Immich server for a photo, so it does not need any credentials.
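A sketch of that proxy pattern (an Express-style handler; the bucket and names are assumptions for illustration), where only the server ever holds S3 credentials:

```typescript
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import type { Request, Response } from 'express';
import type { Readable } from 'node:stream';

const s3 = new S3Client({});

// The server fetches the object and streams it to the client, so the app
// never needs S3 credentials or direct access to the bucket.
async function serveAsset(req: Request, res: Response) {
  const { Body, ContentType, ContentLength } = await s3.send(
    new GetObjectCommand({ Bucket: 'immich-assets', Key: req.params.key }),
  );
  if (ContentType) res.setHeader('Content-Type', ContentType);
  if (ContentLength) res.setHeader('Content-Length', String(ContentLength));
  (Body as Readable).pipe(res);
}
```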

@Underknowledge

Seeing it that way, we can already do S3:
just use rclone to mount S3 storage and then use it as a volume.

Again, just my opinion:
the real benefit would be to offload the traffic from the server and query the objects directly off the S3 storage,
avoiding two roundtrips (download from S3 > pushing it to the client).
Generally you would do something like this with pre-signed URLs; these have a maximum validity of 7 days.

I just chimed in here because my internet at home (the place where I host my photos) is rather slow, and when the two grandparents scroll the media I uploaded, I can't do any work.

@mdafer

mdafer commented Jun 28, 2024

Seeing it that way, we can already do S3: just use rclone to mount S3 storage and then use it as a volume.

Again, just my opinion: the real benefit would be to offload the traffic from the server and query the objects directly off the S3 storage, avoiding two roundtrips (download from S3 > pushing it to the client). Generally you would do something like this with pre-signed URLs; these have a maximum validity of 7 days.

I just chimed in here because my internet at home (the place where I host my photos) is rather slow, and when the two grandparents scroll the media I uploaded, I can't do any work.

Using rclone with a big library is not really an option. Many files end up corrupted or having issues due to several factors, including that rclone doesn't support softlinks. A POSIX-compliant tool, for example, is a better alternative.

However, even with such tools, the overhead is so big that scrolling through a large library is very bothersome. Not to mention the possible extra egress fees due to the overhead.

@grapemix

However, operations such as metadata extraction and thumbnail generation work with the local filesystem and it would be difficult to change that.

This is true, but I think the best approach there would be for the microservices instances to keep a cache folder that they download files into, rather than having files go to local storage *first* before being uploaded to S3.

If I understand correctly, the two proposed ways of operation for the microservices are very similar - check if the target asset is present in the local filesystem (it doesn't matter if we call it a cache), if not, fetch it from the object storage. Then proceed with whatever the microservice does.

If the uploads folder is on the local filesystem, we 1) save ourselves one roundtrip to the object storage and 2) won't need to rename the uploaded file after we extract the metadata. What are the advantages of the object-storage-first approach?

I would like to list some additional benefits of switching to object storage, in case someone asks in the future.

  • Built-in object tagging
  • Built-in quota support
  • Built-in event support
  • Built-in object-level versioning
  • Third-party integrated backup solutions
  • Built-in ACLs
  • Third-party IAM-like permission systems (though complicated to set up)
  • Redundancy (not just at the disk level, but also at the instance level)
  • Easy horizontal scalability (not just at the disk level, but also at the instance level)
  • Easy size expansion (we don't have to copy files to expand the storage)
  • Ability to share a file for a limited time (e.g. via a pre-signed URL)
  • Readily available client-side upload libraries (e.g. via pre-signed URLs)
  • Designed for parallel IO: Ceph splits files across multiple drives, so workloads are also spread across multiple drives instead of one

If we have a true object storage layer, the Docker container can become truly stateless, which means we can have multiple k8s pods and spin them up and down as needed across different instances. It is painful to share a persistent volume between different pods/projects... Yes, we can hack it, but it's painful to watch ;)

Is it overkill? Probably, but we could also simply use Midnight Commander for our galleries if we were minimalists, right? ;)

Supporting object storage sounds like an investment to me. Yes, we have to spend some resources on this feature, but it can save us lots of time in the future because of the benefits shown above.

@tonya11en

I don't think anyone needs to enumerate the benefits at this point. The storage system needs to be refactored before anyone can work on adding object storage support, so the conversation needs to shift towards how to close #1011.

Until that issue is closed, I don't think there's any point in continuing to discuss object storage here.

@halfa

halfa commented Jul 22, 2024

I agree with @tonya11en. For people for whom object storage is a requirement, Ente is similar to Immich and supports object storage.

@C-Otto C-Otto changed the title object storage [Feature] object storage Oct 9, 2024