feat: refactor the export process #3119
Conversation
Nice work 👍
See my change proposal:
Yes, exactly. I also agree that we should allow other formats, but in that case I'm not sure we should try to create a single service that does both exports, as each one will be very specific. The drawback of having two services, though, is that we have to duplicate some of the work.
I'm totally in favor of two different services. Let's actually make it three: one for JSON, one for CSV, and the one you have for SQL. It will give me a nice intro to the project, and it really does make sense to separate concerns in that regard.
Why JSON though?
Why not XML? :) CSV is as universal as they come, and JSON feels natural to me personally, but I don't have a concrete reason.
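The "one service per format" idea discussed above could look roughly like the following sketch. This is Python rather than the project's PHP, and every class and method name here is hypothetical, chosen only to illustrate the separation of concerns; it is not Monica's actual API.

```python
import csv
import io
import json
from abc import ABC, abstractmethod

class Exporter(ABC):
    """Hypothetical base class: one concrete exporter per output format."""

    @abstractmethod
    def export(self, rows: list[dict]) -> str:
        """Serialize a list of row dicts into the target format."""

class JsonExporter(Exporter):
    def export(self, rows):
        return json.dumps(rows, indent=2)

class CsvExporter(Exporter):
    def export(self, rows):
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()

class SqlExporter(Exporter):
    def export(self, rows, table="contacts"):
        # naive INSERT generation, just to show the shape of the service
        lines = []
        for row in rows:
            cols = ", ".join(row)
            vals = ", ".join(repr(v) for v in row.values())
            lines.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
        return "\n".join(lines)

rows = [{"id": 1, "name": "Ada"}]
print(JsonExporter().export(rows))
print(CsvExporter().export(rows))
print(SqlExporter().export(rows))
```

Each format lives in its own class, so format-specific quirks (CSV escaping, SQL quoting) never leak into the others, at the cost of some duplicated row-fetching logic.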
@asbiin now the service writes to a temp file directly as the script runs. Locally it's really fast; I wonder how it will run in production. Can you check the PR again?
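Writing to a temp file as rows are read, rather than building the whole export in memory, is what sidesteps the suspected memory limit. A minimal Python sketch of that pattern, where `fetch_rows` is a stand-in for whatever yields rows from the database:

```python
import tempfile

def fetch_rows():
    # stand-in generator for rows streamed from the database
    for i in range(3):
        yield f"INSERT INTO contacts (id) VALUES ({i});"

def export_to_tempfile(row_source):
    # delete=False so the caller can later hand the path to a download
    # response; each line goes to disk immediately instead of piling up
    # in a giant in-memory string
    with tempfile.NamedTemporaryFile(
        "w", suffix=".sql", delete=False
    ) as fh:
        for line in row_source:
            fh.write(line + "\n")
        return fh.name

path = export_to_tempfile(fetch_rows())
print(path)
```

Peak memory stays at roughly one row regardless of export size, which is why it should behave the same in production as it does locally.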
I've updated some of your code to optimize some parts. I think only exportContactPhoto remains to be fixed.
This PR tries to refactor the export process, which currently doesn't work in production for an unknown reason (I believe it's due to some sort of memory limitation).
This new way of exporting is quite different from what we used to do. Before, we simply looped over all the tables (except a few defined in an array at the top of the file). Now we explicitly specify which tables we want to export.
The file is pretty huge, around 1,300 lines, but it lets us control exactly what we want to export and what we want to avoid exporting, something we didn't have in the past.
Locally it works fast. I don't know if it will work in production; we'll have to merge it to find out.
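The blocklist-vs-allowlist difference described above can be sketched in a few lines. The table names here are made up for illustration, not Monica's real schema:

```python
# old approach: export every table except an excluded few
ALL_TABLES = ["contacts", "activities", "notes", "migrations", "sessions"]
EXCLUDED = {"migrations", "sessions"}

def old_way():
    # a newly added table is exported by default, even if it holds
    # data that should never leave the server
    return [t for t in ALL_TABLES if t not in EXCLUDED]

# new approach: name each exported table explicitly
EXPORTED = ["contacts", "activities", "notes"]

def new_way():
    # a newly added table is skipped until someone deliberately adds
    # it here, which is the fine-grained control the PR is after
    return list(EXPORTED)

print(old_way())
print(new_way())
```

The allowlist is more verbose (hence the ~1,300-line file) but fails closed: forgetting a table means exporting too little, not too much.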