Complete multipart upload fails with timeout #843
Yeah, I believe this error was fixed in version 0.28.0. According to the logs, an old version was used.
It's highly recommended to update to the latest 0.29.0. With the latest s3 version the test is green.
Now there is an error during the delete operation:
It looks like it's the same problem we have in the neofs-testcases repo with multipart: too many parts. I tried replacing the 30 MB object with a 6 MB object in the test settings; the test took ~35 s (vs. 90 s with 30 MB) and 3/3 runs were green.
But in a real-world case a user may hit the same issue anyway. Do we need to react to this somehow? For example, we could return something like ErrBusy. Will the node continue the delete operation if the gateway's context is cancelled? The user may come back later, try the delete again, and get an "object not found" error because the node has already done its work.
Another option to consider is a 202 status code, but the operation would have to be async in that case (or in the timeout case): if we hit a timeout, we could return 202 and, on the next request, check whether the object was deleted.
Found another failing test: test_versioning_bucket_multipart_upload_return_version_id.
The latest dev-env from master was used.
IIUC the problem is object reslicing during finalization, which can take an arbitrary amount of time. In that sense:
We need to slice objects in the gateway in a way that doesn't require reslicing (a part in -> a set of additional objects out). This will change somewhat after nspcc-dev/neofs-node#2729, but those changes will be minor.
Closes #843. Signed-off-by: Evgenii Baidakov <evgenii@nspcc.io>
Closes #843. Before, on multipartComplete, all parts of the Big object were downloaded into memory and a new object was put to NeoFS. With these changes, we slice the object during the part-upload routine. Signed-off-by: Evgenii Baidakov <evgenii@nspcc.io>
Closes #843. Before, multipartComplete read all parts of the Big object into memory, combined them, and generated the final Big object. These steps consume time and memory; eventually any system will fail to load all parts into memory or time out during the process. Now, object slicing starts from the first uploaded part: each part's hash and the whole object's hash are calculated throughout the process, and the object hash state is stored in each part's metadata in the tree service. Signed-off-by: Evgenii Baidakov <evgenii@nspcc.io>
test_multipart_upload at s3tests_boto3/functional/test_s3_neofs.py
We create a multipart upload, upload parts, then try to complete it and fail on timeout:
Regression
Not sure if it ever worked, but it definitely should.
Full logs: containers_logs.tar.gz