feat: storage backup #417
```diff
@@ -0,0 +1,9 @@
+/**
+ * https://github.com/sinedied/smoke#javascript-mocks
+ */
+module.exports = () => {
+  return {
+    statusCode: 200,
+    headers: { 'Content-Type': 'application/json' }
+  }
+}
```
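The file above is a smoke JavaScript mock: the exported function returns a response descriptor. A minimal sketch of how such a mock produces its response — the harness call here is a hypothetical stand-in for what smoke does internally, not smoke's actual API:

```javascript
// Same mock shape as the file added in this PR: a function that
// builds a mock HTTP response descriptor.
const mock = () => {
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' }
  }
}

// A test harness could invoke the mock and inspect the response shape:
const res = mock()
console.log(res.statusCode) // 200
console.log(res.headers['Content-Type']) // application/json
```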
```diff
@@ -50,6 +50,7 @@ export default {
     }
   },
   optimization: {
-    minimize: true
+    minimize: true,
+    usedExports: true
   }
 }
```
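The webpack change above enables `usedExports` alongside minification: webpack marks exports that nothing imports, and the minifier can then drop them as dead code (tree shaking). A sketch of just the relevant fragment, with the rest of the config assumed:

```javascript
// Sketch of the optimization settings from the diff above; the rest of
// the webpack config is assumed.
export default {
  optimization: {
    minimize: true,    // run the minifier on the output bundle
    usedExports: true  // flag never-imported exports so the minifier can drop them
  }
}
```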
```diff
@@ -103,6 +103,19 @@ const body = Query(
       )
     )
   ),
+  Foreach(
+    Select('backupUrls', Var('data')),
+    Lambda(
+      ['url'],
+      Create('Backup', {
+        data: {
+          upload: Select('ref', Var('upload')),
+          url: Var('url'),
+          created: Now()
+        }
+      })
+    )
+  ),
   Var('upload')
 )
)
```

Review comment: If a user uploads the same file twice, won't we get multiple Backup objects for the same CAR pointing to the same bucket + key?

Reply: We will get a single S3 object stored, since it is stored in a key-value fashion, but multiple entries in Fauna for the same file. I thought about adding an index at first, but it would run on every single upload to check whether the entry already exists. So, given that we should just iterate over the list of objects in S3 for backup, I think the best solution is to simply keep a record of everything as is. With the record, we can easily access specific data to prioritize backups as needed. What do you think?

Reply: Regarding this: if the same upload is created, we will not create the Upload entry in Fauna, which means we will not create the backups for it in Fauna either.

Reply: The above is only true because there is a bug 🤦🏼 We are not adding the following chunks to Fauna... Working on a PR.
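The tradeoff discussed in the thread above — one S3 object per key, but one Fauna `Backup` document per upload unless an existence check runs first — can be modeled in plain JavaScript. The store shapes and names below are illustrative assumptions, not the PR's actual data model:

```javascript
// Simplified model: S3 behaves like a key-value store (same key -> one
// object), while the Fauna 'Backup' collection appends a document per
// upload unless we pay for a duplicate check on every write.
const s3 = new Map()  // bucket+key -> object; a second put with the same key overwrites
const backups = []    // stand-in for the Fauna 'Backup' collection

function recordBackup (uploadRef, url, { dedupe = false } = {}) {
  s3.set(url, { uploadRef }) // idempotent: re-uploading yields a single object
  if (dedupe && backups.some(b => b.upload === uploadRef && b.url === url)) {
    return // the index-check alternative discussed (and rejected) above
  }
  backups.push({ upload: uploadRef, url, created: Date.now() })
}

// Same file uploaded twice, no dedupe (the PR's chosen behavior):
recordBackup('ref1', 'bucket/key1')
recordBackup('ref1', 'bucket/key1')
console.log(s3.size)        // 1 (single S3 object)
console.log(backups.length) // 2 (two Fauna entries)
```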
```diff
@@ -163,6 +176,19 @@ const body = Query(
       )
     )
   ),
+  Foreach(
+    Select('backupUrls', Var('data')),
+    Lambda(
+      ['url'],
+      Create('Backup', {
+        data: {
+          upload: Select('ref', Var('upload')),
+          url: Var('url'),
+          created: Now()
+        }
+      })
+    )
+  ),
   Var('upload')
 )
)
```
Review comment: https://developers.cloudflare.com/workers/platform/limits#script-size — so let's just test this, as the limits are defined by Cloudflare.

Reply: I'm getting an error when launching the API in dev mode now, about 50% of the time, so we may need to find a lighter S3 client or figure out why Cloudflare is less happy.

Reply: It is not related to S3, but to wrangler itself. I will share the details with you.