Update howto-import-big-databases-between-environments.md #2181

Merged 3 commits on Sep 15, 2023
@@ -5,6 +5,10 @@ last_updated: April 5, 2023
template: howto-guide-template
---

{% info_block warningBox "S3 bucket permissions" %}
Make sure that the S3 bucket you use to import and export databases is not public and is only accessible by users who should have access to the database.
{% endinfo_block %}
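
As a minimal sketch, assuming the AWS CLI is configured and `my-db-dumps` is a placeholder for your bucket name, you can block public access on the bucket like this:

```bash
# Block every form of public access on the bucket that stores the dumps.
# "my-db-dumps" is a placeholder; replace it with your bucket name.
aws s3api put-public-access-block \
  --bucket my-db-dumps \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Verify the resulting configuration.
aws s3api get-public-access-block --bucket my-db-dumps
```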

Suppose you have two testing environments and need to migrate a large amount of data from one environment to the other to run different tests on the same data. If the amount of data is small, you can export it by running the `mysqldump` command on your local machine. However, for large amounts of data, this method can be slow because of long wait times and VPN connection issues. In this case, to import the data between the environments faster, you can run the `mysqldump` command on the Jenkins instance and upload the dump file to AWS S3; a sketch of the kind of commands involved follows the steps below. Here's how to do it:

1. Go to the Jenkins instance of the environment from which you want to import the data.
@@ -29,4 +33,4 @@ If you run the command from step 2 multiple times, this creates multiple dump files

{% endinfo_block %}
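
For illustration, the following is a minimal sketch of the export side of this workflow, run on the Jenkins instance of the source environment. The `<db-host>`, `<db-user>`, `<db-password>`, `<db-name>`, and `my-db-dumps` values are placeholders, and the exact flags may differ from the commands used in the steps above:

```bash
# Dump the database and compress it on the fly to keep the file small.
# --single-transaction avoids locking InnoDB tables for the duration of the dump.
mysqldump --single-transaction -h <db-host> -u <db-user> -p<db-password> <db-name> \
  | gzip > dump.sql.gz

# Upload the compressed dump to the non-public S3 bucket.
aws s3 cp dump.sql.gz s3://my-db-dumps/dump.sql.gz
```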

With this approach, you can efficiently import large databases between environments because you download the dump file not to your local machine but to a machine in the same network as the database. Additionally, compressing the dump file speeds up the upload, reducing the overall time it takes to import the data.
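
To complete the picture, here is a similarly hedged sketch of the import side, run on the Jenkins instance of the target environment; the bucket, host, and credential values are again placeholders:

```bash
# Download the compressed dump from S3 onto the target Jenkins instance.
aws s3 cp s3://my-db-dumps/dump.sql.gz .

# Decompress the dump and stream it straight into the target database.
gunzip -c dump.sql.gz | mysql -h <db-host> -u <db-user> -p<db-password> <db-name>
```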