migrate from local volume (nextcloud / data) to s3 after adding s3 to config.php #25781

Closed
gorsing opened this issue Feb 24, 2021 · 14 comments
Labels: 0. Needs triage (Pending check for reproducibility or if it fits our roadmap), enhancement (Nice to have)

Comments

@gorsing

gorsing commented Feb 24, 2021

There is currently no functionality to migrate from the local data volume (nextcloud/data) to S3 via occ after adding S3 to config.php.

With both storage backends added to config.php, occ could:

  1. Check whether there is enough free space for the migration
  2. Migrate the data
  3. Switch to S3 and use it as the main storage
@gorsing gorsing added the 0. Needs triage and enhancement labels Feb 24, 2021
@blieb

blieb commented Feb 24, 2021

I did this manually, following this guide: https://pedal.me.uk/migrating-a-nextcloud-instance-to-amazon-s3/

@gorsing
Author

gorsing commented Feb 24, 2021

That's great, I will try this option. But it would also be very nice if there were a ready-made solution in occ.

@szaimen
Contributor

szaimen commented Jun 23, 2021

While this sounds like a nice feature, the demand for it is quite low and there are currently no plans to implement it, so I will close this ticket for now. This does not mean we don't want this feature; it is simply not on our roadmap for the near future. If somebody wants to implement this feature nevertheless, we are happy to assist and help out. If you wish to have this feature implemented by Nextcloud GmbH, there is the option of consulting work on top of your Nextcloud Enterprise subscription to get your features implemented.

@wioxjk

wioxjk commented Dec 21, 2021

I did this manually, following this guide: https://pedal.me.uk/migrating-a-nextcloud-instance-to-amazon-s3/

Sadly, the site is offline; the domain name does not even resolve.

@bardolf69

This was copied from https://pedal.me.uk/migrating-a-nextcloud-instance-to-amazon-s3/ in case the domain ever goes offline again. All credit belongs to the original author; the text is reproduced here to ensure a lasting copy exists.

*** Steps To Migrate ***

1 – Kick off your users
Several steps here assume nothing has changed while you were working, so DO NOT try to do this while users are on Nextcloud.

2 – Backup your instance
I take no responsibility for any damage done by following my instructions. Back up your instance. You have been warned!

3 – Rearrange your data directory
Don’t move anything, just create symbolic links. That way if this goes wrong you’ve done no damage.

My data directory was /srv/nextcloud and my Nextcloud database is called nextcloud_db; you will need to change these to match your own setup. When you are done you should end up with a directory full of files named something like urn:oid:1234. At the command line I typed the following:

Get the user files

mysql -p -B --disable-column-names -D nextcloud_db << EOF > user_file_list
    select concat('urn:oid:', fileid, ' ', '/srv/nextcloud/',
                  substring(id from 7), '/', path)
      from oc_filecache
      join oc_storages
        on storage = numeric_id
     where id like 'home::%'
     order by id;
EOF

Get the meta files

mysql -p -B --disable-column-names -D nextcloud_db << EOF > meta_file_list
    select concat('urn:oid:', fileid, ' ', substring(id from 8), path)
      from oc_filecache
      join oc_storages
        on storage = numeric_id
     where id like 'local::%'
     order by id;
EOF
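Before creating any symlinks it is worth eyeballing the generated lists; a quick check (the sample line below is purely illustrative, your IDs and paths will differ):

head -3 user_file_list
# each line should pair an object name with a local path, roughly:
# urn:oid:1234 /srv/nextcloud/alice/files/Documents/report.pdf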

mkdir s3_files
cd s3_files
while read target source ; do
    if [ -f "$source" ] ; then
        ln -s "$source" "$target"
    fi
done < ../user_file_list

while read target source ; do
    if [ -f "$source" ] ; then
        ln -s "$source" "$target"
    fi
done < ../meta_file_list
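An optional check before copying anything: make sure none of the links point at files that no longer exist (GNU find, run inside s3_files):

find . -xtype l
# no output means every symlink resolves to a real file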

4 – Copy your files to S3
If you have not already done so, you will need to install the AWS CLI tools. On Ubuntu this was simple enough:

apt-get install awscli
aws configure

Actually copying can take some time, particularly if you have > 10GB.

aws s3 sync . s3://my-nextcloud-bucket-name
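A rough way to confirm the upload is complete is to compare the number of local links with the number of objects in the bucket (bucket name as above):

find . -type l | wc -l
aws s3 ls s3://my-nextcloud-bucket-name --recursive | wc -l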

5 – Update Nextcloud to use S3
STOP!

This step can damage your Nextcloud instance!

Previous steps did not damage or change your Nextcloud instance because you were working on a copy of everything. Now you are about to change your existing Nextcloud instance.

Please don’t follow these steps blindly. Take the time to understand what you are doing first.

Again on the command line type:

mysql -p -D nextcloud_db << EOF
   update oc_storages
      set id = concat('object::user:', substring(id from 7))
    where id like 'home::%';
   update oc_storages
      set id = 'object::store:amazon::my-nextcloud-bucket'
    where id like 'local::%';
EOF
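A quick follow-up check (same database as above): every former home:: or local:: entry should now start with object::.

mysql -p -D nextcloud_db -e "select numeric_id, id from oc_storages order by id;"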

Then add the extra configuration item to config/config.php:

'objectstore' => array(
        'class' => 'OC\\Files\\ObjectStore\\S3',
        'arguments' => array(
                'bucket' => 'nextcloud', // your bucket name
                'autocreate' => true,
                'key'    => 'EJ39ITYZEUH5BGWDRUFY', // your key
                'secret' => 'M5MrXTRjkyMaxXPe2FRXMTfTfbKEnZCu+7uRTVSj', // your secret
                'use_ssl' => true,
                'region' => 'eu-west-2', // your region
        ),
),
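A precaution worth considering that is not in the original guide: since this step rewrites the database and config of a live instance, it can be wrapped in maintenance mode. The occ path and web-server user below are assumptions; adjust them to your setup.

sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on
# ... run the database update and edit config/config.php ...
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off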

@SNThrailkill

Has anyone tried these steps with any success? So far in my research this seems like the only documentation of any kind of migration path.

Would the other option be to start from scratch on a new server and import settings? Is that even possible?

@thelfensdrfer

thelfensdrfer commented Nov 24, 2022

@SNThrailkill The export process worked well, although there are a few hidden characters in the markdown code. Otherwise, the migration worked great. And because symlinks take nearly no space, this is also an option for instances where the HDD is almost full.

@SNThrailkill

@thelfensdrfer Thank you for the confirmation! Highly encouraging to hear. I'll try this out soon; hopefully any issues will be small.

@bardolf69

Yeah, I have used essentially the same process two or three times now without any major issues.

@hylobates-agilis

For those interested who might bump into this when researching the matter: you also need to update the oc_mounts table and change mount_provider_class to OC\Files\Mount\ObjectHomeMountProvider (a sketch follows below).

If you don't, shares will be messed up and you will end up with a lot of errors when editing files, displaying images, etc.
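For reference, a sketch of that change in SQL, using the table and column named above (back up oc_mounts first; backslash escaping depends on your client):

update oc_mounts
   set mount_provider_class = 'OC\\Files\\Mount\\ObjectHomeMountProvider'
 where mount_provider_class like '%LocalHomeMountProvider%';
-- in MySQL/MariaDB the doubled backslashes store the literal value OC\Files\Mount\ObjectHomeMountProvider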

@mrAceT

mrAceT commented Jan 30, 2023

I built a script that "does it all": migrate, check, test, update, "pre-migrate" (minimal downtime!), and even perform a "sanity check" (like occ files:cleanup does for local storage but not for S3; my script does ;) )

https://github.com/mrAceT/nextcloud-S3-local-S3-migration

@Exagone313

Both of your solutions assume MySQL is used, but my Nextcloud instance uses PostgreSQL.

@gymnae

gymnae commented Sep 22, 2023

Yeah, I'm also looking for a solution for PostgreSQL. Did you find one?
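The export queries earlier in the thread are standard SQL and should run on PostgreSQL as well; an untested sketch using psql (database name and data directory are the placeholders from the guide above):

psql -d nextcloud_db -At -c "
  select concat('urn:oid:', fileid, ' ', '/srv/nextcloud/', substring(id from 7), '/', path)
    from oc_filecache
    join oc_storages on storage = numeric_id
   where id like 'home::%'
   order by id;" > user_file_list
# the oc_storages / oc_mounts UPDATE statements should also work unchanged, except that
# PostgreSQL stores 'OC\Files\Mount\ObjectHomeMountProvider' with single backslashes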

@larwood

larwood commented Mar 3, 2024

Hi All,

Thanks for the good info, everyone. I just migrated from local (/data) to S3 storage and initial testing shows it was a success; see my notes below. My server is currently running CentOS 7 and this is the first step before migrating to a Fedora 39 server.

I do have one question regarding updating oc_mounts and the mount_provider_class value. I only updated the rows with mount_provider_class = 'OC\Files\Mount\LocalHomeMountProvider'. Should I update them all? The other values are 'OCA\Files_Sharing\MountProvider' and NULL.

I hit one error when updating the oc_storages table for 'local::' entries, as I had two; one was an old entry that no longer existed. I removed the redundant row and reran successfully:
ERROR 1062 (23000): Duplicate entry 'object::store:amazon::nextcloud-s3-example-com/' for key 'storages_id_index'
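To spot leftovers like that before running the updates, listing the matching rows first should reveal any duplicates (a sketch, same database name as in the commands below):

mariadb -p -D nextclouddb -e "select numeric_id, id from oc_storages where id like 'local::%' or id like 'home::%';"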

I used rclone to sync the files with the goal of preserving the timestamps, and this appears to have been successful. If I download a file from the S3 bucket using 'rclone copy', the timestamp matches the original.

su apache -s /bin/bash -c '/opt/remi/php82/root/bin/php /domains/cloud.example.com/occ files:cleanup'
  0 orphaned file cache entries deleted
  0 orphaned mount entries deleted

su apache -s /bin/bash -c '/opt/remi/php82/root/bin/php /domains/cloud.example.com/occ files:scan --all'
  Starting scan for user 1 out of 7 (xxx)
  Starting scan for user 2 out of 7 (xxx)
  Starting scan for user 3 out of 7 (xxx)
  Starting scan for user 4 out of 7 (xxx)
  Starting scan for user 5 out of 7 (xxx)
  Starting scan for user 6 out of 7 (xxx)
  Starting scan for user 7 out of 7 (xxx)
  +---------+-------+--------+--------------+
  | Folders | Files | Errors | Elapsed time |
  +---------+-------+--------+--------------+
  | 2226    | 19284 | 0      | 00:00:13     |
  +---------+-------+--------+--------------+

su apache -s /bin/bash -c '/opt/remi/php82/root/bin/php /domains/cloud.example.com/occ maintenance:mode --on'

mariadb -p -B --disable-column-names -D nextclouddb -e "select concat('urn:oid:', fileid, ' ', '/domains/cloud.example.com/data/', substring(id from 7), '/', path) from oc_filecache join oc_storages on storage = numeric_id where id like 'home::%' order by id;" > user_file_list

mariadb -p -B --disable-column-names -D nextclouddb -e "select concat('urn:oid:', fileid, ' ', substring(id from 8), path) from oc_filecache join oc_storages on storage = numeric_id where id like 'local::%' order by id;" > meta_file_list

mkdir s3_files
cd s3_files

while read target source; do if [ -f "$source" ]; then ln -s "$source" "$target"; fi; done < ../user_file_list

while read target source; do if [ -f "$source" ]; then ln -s "$source" "$target"; fi; done < ../meta_file_list

rclone sync --copy-links --stats-log-level NOTICE --progress . s3:/nextcloud-s3-example-com/

mariadb -p -D nextclouddb -e "update oc_storages set id = concat('object::user:', substring(id from 7)) where id like 'home::%';"
mariadb -p -D nextclouddb -e "update oc_storages set id = 'object::store:amazon::nextcloud-s3-example-com/' where id like 'local::%';"
mariadb -p -D nextclouddb -e "update oc_mounts set mount_provider_class = 'OC\\\Files\\\Mount\\\ObjectHomeMountProvider' where mount_provider_class like '%LocalHomeMountProvider%';"

update config.php
diff config.php config.php_backup-2024-03-03 
46,54d45
<   'objectstore' => [
<           'class' => '\\OC\\Files\\ObjectStore\\S3',
<           'arguments' => [
<                   'bucket' => 'nextcloud-s3-example-com',
<                   'region' => 'ap-southeast-2',
<                   'key' => 's3accesskey',
<                   'secret' => 's3secretaccesskey',
<                   'storageClass' => 'INTELLIGENT_TIERING',
<           ],
<   ],

su apache -s /bin/bash -c '/opt/remi/php82/root/bin/php /domains/cloud.example.com/occ maintenance:mode --off'
