
File corruption #2425

Closed
chandon opened this issue Oct 29, 2014 · 107 comments
Assignees
Labels
p2-high Escalation, on top of current planning, release blocker type:bug

Comments

@chandon

chandon commented Oct 29, 2014

Expected behaviour

When I create a file in a folder synced with ownCloud, it always uploads a correct (uncorrupted) file.

Actual behaviour

When I create a file in a folder synced with ownCloud, it sometimes uploads a corrupted file. The file has the same size as the original, but is corrupted. If it helps, I've uploaded two corruption examples here:
https://www.wetransfer.com/downloads/499390103af226851e2ea96237d5487620141029092303/efdb625623a454be881d013a02c10e3e20141029092303/b2bcb7

The original files were created in the ownCloud folder on purpose to test the bug. The corrupted files were downloaded directly from the server folder /var/www/owncloud/data/test/files.

Possibly the same bug as #1969 ?

Steps to reproduce

It's a random bug...
I pasted about 50 files into my ownCloud folder in one go, then ran md5sum on the server to find the corrupted files among them.
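The server-side checksum sweep described above can be scripted. Here is a minimal sketch (the directory paths are placeholders, it only checks a flat directory, and it assumes the local originals are available for comparison):

```python
import hashlib
import os

def md5(path):
    """Whole-file MD5, read in chunks to keep memory use flat."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def find_corrupted(local_dir, server_dir):
    """Return names whose local and server copies have different checksums."""
    bad = []
    for name in sorted(os.listdir(local_dir)):
        local = os.path.join(local_dir, name)
        remote = os.path.join(server_dir, name)
        if os.path.isfile(local) and os.path.isfile(remote) and md5(local) != md5(remote):
            bad.append(name)
    return bad
```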

Server configuration

Operating system: Ubuntu 14.04 LTS server
Web server: Apache 2.4.7-1
Database: MySQL 5.5.37-0ubuntu0.14.04.1
PHP version: 5.5.9-1ubuntu4.3
ownCloud version: 7.0.2(stable)

List of activated apps:
Default apps (I don't use them)

Client configuration

Client version: 1.6.4 build 1195

Operating system: Mac OS X Mavericks (two of us have the same problem)

@LukasReschke
Member

@PVince81 I think that might be of interest to you?

@PVince81
Contributor

@chandon did you enable encryption on the server?

@chandon
Author

chandon commented Oct 29, 2014

No, as you can see in my description, the example files were downloaded directly from the server (so they are not encrypted).

A binary compare shows that the first 0x4000 bytes are corrupted, as if a bad "part" were included at the beginning of the corrupted files. The same part can be found later in the file.
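The block-level comparison described here can be reproduced with a short script (a sketch, not the tool actually used in this thread): hash both files in 0x4000-byte blocks and, for each differing block, report where in the original the corrupted block's content also appears.

```python
import hashlib

BLOCK = 0x4000  # 16 KiB, the corruption granularity observed above

def block_md5s(path, block=BLOCK):
    """MD5 of each fixed-size block of a file, in order."""
    hashes = []
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(block), b""):
            hashes.append(hashlib.md5(chunk).hexdigest())
    return hashes

def diff_blocks(original, corrupted):
    """Return (block_index, matching_original_blocks) for each differing block."""
    a, b = block_md5s(original), block_md5s(corrupted)
    report = []
    for i, (ha, hb) in enumerate(zip(a, b)):
        if ha != hb:
            # where else in the original does the corrupted block's content occur?
            report.append((i, [j for j, h in enumerate(a) if h == hb]))
    return report
```

For the pattern in this issue, block 0 of the corrupted file would be reported as matching a later block of the original.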

@PVince81
Contributor

Verrry weird. This guy also had the exact same issue but with encryption enabled: see owncloud/core#10975 (comment)

CC @VincentvgNn

I'm suspecting a concurrency / race condition issue... You could try installing and enabling the "files_locking" app.

@guruz
Contributor

guruz commented Oct 29, 2014

Do you have the corrupted file open in any application?

@chandon
Author

chandon commented Oct 29, 2014

No, it's just a copy/paste from my Mac OS X desktop to the ownCloud folder, that's all. Neither the desktop files nor the destination files are open in any application (I don't have an antivirus tool or apps like that; only ownCloud accesses the files in the background).

@ckamm
Contributor

ckamm commented Oct 30, 2014

I think this could possibly be the client's fault. In 1.7 we avoid uploading files that were changed too recently, but in 1.6 the following could happen:

  • Copy many files into a sync folder on the client side
  • The file system watcher triggers a new sync while copying is ongoing
  • Then we want to upload a file and call read() on it several times while its copy operation is ongoing.

On the other hand, I think it's odd that the file ends up having the right size and that we consistently see the first 16 KB being replaced by a later 16 KB chunk of the same file. I don't see a reason why the first read() would go wrong in this particular way, but it might happen. It could also be a server bug.

I'll try to reproduce this locally with the 1.6 branch. @chandon maybe you could try whether you still see the issue with the files_locking ownCloud server plugin installed? And if you can, attempting to reproduce the issue with a 1.7 client would also be appreciated - if it's reproducible there it's much more likely to be a server issue.
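The "changed too recently" guard mentioned above can be sketched like this (an illustrative sketch only; the threshold value and the real client's logic are assumptions, not the actual implementation):

```python
import os
import time

MIN_AGE_SECONDS = 2  # assumed threshold; the real client's value may differ

def safe_to_upload(path, min_age=MIN_AGE_SECONDS):
    """Skip files modified too recently: they may still be being written,
    so reading them for upload risks sending a half-copied file."""
    age = time.time() - os.path.getmtime(path)
    return age >= min_age
```

A sync pass would then defer any file for which safe_to_upload() returns False and pick it up on the next run.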

@chandon
Author

chandon commented Oct 30, 2014

I've had the "files_locking" app installed for 2 days now. I've run a test and didn't get any corrupted file (which doesn't mean the problem is solved...). If we get another corrupted file with the plugin enabled, I'll let you know in a later comment.

@ckamm
Contributor

ckamm commented Oct 30, 2014

@chandon Thank you!

I tried it locally with 1.6 (on ext4, with an SSD) using 50 copies of your test_ORIGINAL.pptx, but couldn't get a corrupt file in 10 attempts. I tried again under various levels of I/O and HDD load, but could not produce corrupt data. I also tried with 500 files, and with a 100 ms sleep between each copy.

@ckamm
Contributor

ckamm commented Nov 5, 2014

@chandon Did you observe any more corruption?

@chandon
Author

chandon commented Nov 5, 2014

@ckamm Yes, I did observe a corruption again, always with the Mac OS X client sync, and with files_locking enabled. As always, the first 0x4000 bytes are corrupted (corresponding to another block later in the file...).
So the "files_locking" app doesn't help...

@ckamm ckamm removed the Needs info label Nov 5, 2014
@ckamm
Contributor

ckamm commented Nov 5, 2014

@chandon Thanks for your efforts! Have you tried with a 1.7 client? Do you have any up- or download rate limits configured?

@chandon
Author

chandon commented Nov 5, 2014

@ckamm I've got the 1.6.4 client. We will install the 1.7 client. We don't have up/download limits configured, nor a proxy.

@ckamm
Contributor

ckamm commented Nov 7, 2014

@chandon Thanks. If this issue is due to the client reading files while they are being written to the sync folder, it should be solved with 1.7. If you can still reproduce it, it's more likely to be a communication or server issue.

@chandon
Author

chandon commented Nov 7, 2014

@ckamm Thanks, I will give 1.7 a try. If I still get any error, I'll report it.

@moscicki
Contributor

I have recently seen a very similar problem using owncloudcmd in our testing framework (https://github.com/cernbox/smashbox/blob/master/lib/test_nplusone.py). The symptoms were very similar: the first 16 KB block of the file was corrupted (corresponding to the 10th block of that file). However, this is NOT related to files being updated while syncing, because all the files are updated BEFORE owncloudcmd is run.

And now I am positively sure that I saw this on 1.6.4.

@moscicki
Contributor

The question is: was there some subtle change from 1.6.3 to 1.6.4 that could trigger this? Nobody has seen it before. Is 1.7 exposed to this too? Could someone do a comparison? Especially between the 1.6.3 and 1.6.4 sources, the changes should not be that many...

@chandon
Author

chandon commented Nov 13, 2014

I already had the problem with 1.6.3... I updated to 1.6.4 to check whether the problem was solved (because of the changelog entry "avoid data corruption due to wrong error handling, bug #2280"), but the problem was still there. So far I don't have any corrupted file with 1.7 (using it for 6 days now).

@moscicki
Contributor

That #2280 was a different corruption (one that I discovered via server errors).

On 1.6.3 did you also see the 16KB block issue? I have been running many tests on previous versions and never saw it.

@danimo, @dragotin: Could it be that the Qt HTTP stack is not fully multithreaded, or that some buffers are reused? I believe the Qt-based propagator was added in 1.6.


@dragotin
Contributor

@chandon thanks for letting us know your 1.7.0 experience so far. Even though that is not yet a proof, it's already good news :-)

@moscicki
Contributor

@chandon: on 1.6.3 did you also see the same 16KB block issue?

I have been running many tests on previous versions and never saw it. Then last Friday I saw it quite a few times. I thought it might have been related to the 1.6.4 update.

Here is the point: I do continuous testing and put/get literally thousands of files, and this issue is not easily reproducible. So I would really like to understand the root cause.

@danimo, @dragotin: Could it be that the Qt HTTP stack is not fully multithreaded, or that some buffers are reused? I believe the Qt-based propagator was added in 1.6.

@guruz
Contributor

guruz commented Nov 13, 2014

@moscicki You've seen the corruption for the uploads?
Or download?

@moscicki
Contributor

Always uploads. Given our setup, I would rule out an ownCloud server problem. Maybe a proxy, or a timing issue on the client.


@guruz guruz added p2-high Escalation, on top of current planning, release blocker and removed Needs info Server Involved labels Apr 10, 2015
@guruz guruz added this to the 1.8.1 - Bugfix milestone Apr 10, 2015
@guruz guruz assigned guruz and unassigned ckamm Apr 10, 2015
@brozkeff

brozkeff commented Apr 10, 2015 via email

@jturcotte
Member

@moscicki The two test branches would get me that crash once the server served 3-10 requests.

I don't know all the different ways of triggering the issue (guruz knows more), but that one case can't be reproduced without modifying the client, unless we patch and rebuild nginx to close the connection even though it responds with a "Connection: Keep-Alive" header.

@guruz
Contributor

guruz commented Apr 14, 2015

@brozkeff

I just experienced 3 corrupted PDFs out of 3 uploaded PDFs on Mac OS X 10.10, with ownCloud client 1.7.1 (Qt 5.3.2), in our production environment. I have all three files in corrupted and correct form if you need them for analysis.

I immediately upgraded the client to 1.8.0 (Qt 5.4.0), hoping the problem will disappear.

I was shocked that just saving 3 PDFs from a Thunderbird email to the ownCloud folder caused all three to become corrupted during upload to the server. Now there are 5 GB of data on that Mac that are perhaps OK, but much of this data may be randomly or systematically corrupted on the server. How can I force ALL files on the Mac to be re-uploaded to the server correctly?

@brozkeff

It is even more bizarre that only the PDFs were corrupted. The next email saved to the same folder, with 6 DOC and 3 DOCX files, had no problems; all of them uploaded correctly.

@moscicki
Contributor

As far as I understand, the fix comes in 1.8.1. To be confirmed with the developers.

What you can do is the following:

  • capture the log file of the client
  • check your corrupted files with md5blocks --check FILE
  • if you have the original and corrupted file, you can also run: md5blocks FILE1 FILE2

https://github.com/cernbox/smashbox/blob/master/corruption_test/md5blocks

To trigger the re-upload: touch FILE

@brozkeff

I was too quick to reinstall the client, so I am not sure there is any log file left from the already-uninstalled version.

I have original and corrupted files so:

first one:
./md5blocks ./correct/BL2015_MAGIC\ TOUCH.pdf ./corrupted/BL2015_MAGIC\ TOUCH.pdf
File block analyzer
BLOCK_LIMIT 0
BLOCK_OFFSET 0
BLOCK_SIZE 16384
check False
file1 ./correct/BL2015_MAGIC TOUCH.pdf
file2 ./corrupted/BL2015_MAGIC TOUCH.pdf

offset   i file1                             file2                            mod j
     0   0 10f3201ac30b80b98216f6b27a099910 9ad77e69a955c3f12006349e9e8decad *** 2
 16384   1 61f92bdc1edf158867d91f1e6dc302da 61f92bdc1edf158867d91f1e6dc302da
 32768   2 9ad77e69a955c3f12006349e9e8decad 9ad77e69a955c3f12006349e9e8decad
 49152   3 a1b61c7a2dd30d8f01d5661d6c35283b a1b61c7a2dd30d8f01d5661d6c35283b
 65536   4 cf31f5c0a4fe361994bee8e74fd72cac cf31f5c0a4fe361994bee8e74fd72cac
 81920   5 d6ff183f7fd8dc9d88ae6dc1636644f5 d6ff183f7fd8dc9d88ae6dc1636644f5
 98304   6 a906a26bb91710f1d7faf95a9e04abbf a906a26bb91710f1d7faf95a9e04abbf
114688   7 ef78916e2a567723839d6bc0258fb383 ef78916e2a567723839d6bc0258fb383
131072   8 371efb41f3da797bf90e06d46dfb419f 371efb41f3da797bf90e06d46dfb419f
147456   9 2ecdd0255df100138174baf2c430d6d8 2ecdd0255df100138174baf2c430d6d8
163840  10 2d0926a4e0b93bdc3efe4a47a65a5fa6 2d0926a4e0b93bdc3efe4a47a65a5fa6
180224  11 4b89c097d3aaa225aaf9b4ba97a00e55 4b89c097d3aaa225aaf9b4ba97a00e55
196608  12 c9f187561bbffa1880bee936ccc9334c c9f187561bbffa1880bee936ccc9334c
212992  13 cc693a9957b1e613fccb9ba07ab4d86a cc693a9957b1e613fccb9ba07ab4d86a
229376  14 faa25218497b0c3c06d5f00b6e5b4c32 faa25218497b0c3c06d5f00b6e5b4c32
245760  15 ca192077683a0b33988fa14c641c3c3d ca192077683a0b33988fa14c641c3c3d
262144  16 4bcef5b5fb6f992acbb599f18633d156 4bcef5b5fb6f992acbb599f18633d156
278528  17 8d7d8e8980a69bf361486232fd89fc50 8d7d8e8980a69bf361486232fd89fc50
294912  18 81f1ffb5f6555d5f3eb3e2c76046716d 81f1ffb5f6555d5f3eb3e2c76046716d
311296  19 4ef4dd924217d0b450315ee1fe92f5df 4ef4dd924217d0b450315ee1fe92f5df
327680  20 31d8c0e0a79c582849e1f32cf6c7ddd1 31d8c0e0a79c582849e1f32cf6c7ddd1
344064  21 eb6dfce5e1312b9319b39b5a1c13b473 eb6dfce5e1312b9319b39b5a1c13b473
360448  22 0cc2bcb175d19f88b2df4dc74331ac58 0cc2bcb175d19f88b2df4dc74331ac58
376832  23 e34d4f732b00cec837e7e40f3d1d06a3 e34d4f732b00cec837e7e40f3d1d06a3
393216  24 05ad3ae819d299d37d8f9847cf6ab007 05ad3ae819d299d37d8f9847cf6ab007
409600  25 9aa980e7864f7bfa8cdc925c8b3dd939 9aa980e7864f7bfa8cdc925c8b3dd939
425984  26 c68a600a53516873c318401c51272e6b c68a600a53516873c318401c51272e6b
442368  27 b5fba0a1284c692da47316854a84188f b5fba0a1284c692da47316854a84188f
offset   i file1                             file2                            mod j

total --- cacf4da14a76f62654368f76d39becdb 3ed3e9846b56c09d31934a1d27cb0a8d ***

@moscicki
Contributor

Yes, the first 16K block is lost and the others are untouched (which is different from what we saw with our test data). But close enough.

@brozkeff

The second file shows the same pattern:

./md5blocks ./correct/BL2015_Clean\ Touch\ (Myci\ prostredek\ na\ lahve).pdf ./corrupted/BL2015_Clean\ Touch.pdf
File block analyzer
BLOCK_LIMIT 0
BLOCK_OFFSET 0
BLOCK_SIZE 16384
check False
file1 ./correct/BL2015_Clean Touch (Myci prostredek na lahve).pdf
file2 ./corrupted/BL2015_Clean Touch.pdf

offset   i file1                             file2                            mod j
     0   0 d077d92f66385ba26c676b1e7939a4d8 dbf16f5db9f1caa76a99b7b37553d1aa *** 9
 16384   1 260d867d7b8c5270b02a85abb02b21ae 260d867d7b8c5270b02a85abb02b21ae
 32768   2 b57ff9d1a9e9d371884c6fe4aac63ab6 b57ff9d1a9e9d371884c6fe4aac63ab6
 49152   3 3ca7364d0246fba07efbc4c0d361984b 3ca7364d0246fba07efbc4c0d361984b
 65536   4 76db8379393eba0126eee05daa4797a0 76db8379393eba0126eee05daa4797a0
 81920   5 cc0ea2cecc86895e5bc5bb874e3bddd0 cc0ea2cecc86895e5bc5bb874e3bddd0
 98304   6 842cbee78472decc4503473f44f35032 842cbee78472decc4503473f44f35032
114688   7 880cc9800adc06c34d79ce0933c7db35 880cc9800adc06c34d79ce0933c7db35
131072   8 268e6b0ee84b7cd41dd5e5561ac5f9a1 268e6b0ee84b7cd41dd5e5561ac5f9a1
147456   9 dbf16f5db9f1caa76a99b7b37553d1aa dbf16f5db9f1caa76a99b7b37553d1aa
163840  10 37c70d625c0067046af915463b5782b8 37c70d625c0067046af915463b5782b8
180224  11 5e5df22eac983c50d612c220eb7c1d3f 5e5df22eac983c50d612c220eb7c1d3f
196608  12 c8af5e33447a32f9332a29d0660686ce c8af5e33447a32f9332a29d0660686ce
212992  13 b75cc7f57b14c59eb3cd32ac514b5f85 b75cc7f57b14c59eb3cd32ac514b5f85
229376  14 96f46b19414841efe5b64cb8f1ef718f 96f46b19414841efe5b64cb8f1ef718f
245760  15 05a5d8ee73685d818ff1559a436823d2 05a5d8ee73685d818ff1559a436823d2
262144  16 e629895d499c2e19dea735474ba8792d e629895d499c2e19dea735474ba8792d
278528  17 88f75acc12786fde5b0d8525c441649e 88f75acc12786fde5b0d8525c441649e
294912  18 f1820c452373ec785844bad2607cc7e0 f1820c452373ec785844bad2607cc7e0
311296  19 b0a7d4a81f0a216d86de73ea4ab41003 b0a7d4a81f0a216d86de73ea4ab41003
327680  20 b4cc8fa260e829ffe44202b4e1b8069d b4cc8fa260e829ffe44202b4e1b8069d
344064  21 80b550a6db597edf72b52bb2609e553a 80b550a6db597edf72b52bb2609e553a
360448  22 3b2cbf9e8f9d88d31eb1100512dbe82d 3b2cbf9e8f9d88d31eb1100512dbe82d
376832  23 3af0651cff1006595c65dbf9aa4ecdae 3af0651cff1006595c65dbf9aa4ecdae
393216  24 5bd2eba49334db413140d981f2ebdb67 5bd2eba49334db413140d981f2ebdb67
409600  25 4407c4f019f16496122c3dacc1996419 4407c4f019f16496122c3dacc1996419
425984  26 844ccf4d3bd0d039ba0a3090610390da 844ccf4d3bd0d039ba0a3090610390da
442368  27 7d8a4cf6f26a0f1d74b77b3905c83d86 7d8a4cf6f26a0f1d74b77b3905c83d86
offset   i file1                             file2                            mod j

total --- 8f89939eca0acb976e50295b55ffe32e 671960d1c109566846b0a2e9c8b2226a ***

@brozkeff

And the third one shows the very same thing:
./md5blocks ./correct/BL2015_MINA\ -\ Praci\ prostredek\ na\ textil\ pro\ zvirata.pdf ./corrupted/BL2015_MINA.pdf
File block analyzer
BLOCK_LIMIT 0
BLOCK_OFFSET 0
BLOCK_SIZE 16384
check False
file1 ./correct/BL2015_MINA - Praci prostredek na textil pro zvirata.pdf
file2 ./corrupted/BL2015_MINA.pdf

offset   i file1                             file2                            mod j
     0   0 9d95b8578f22c603ceeb7e32cd92b130 28e19df438747cd825219c881d965362 *** 3
 16384   1 493c3128563e9207a3624f70cd515ab4 493c3128563e9207a3624f70cd515ab4
 32768   2 41ec0547be440d819159f64eb274a77c 41ec0547be440d819159f64eb274a77c
 49152   3 28e19df438747cd825219c881d965362 28e19df438747cd825219c881d965362
 65536   4 f8b236bc2bfed0e55939742e68f822bb f8b236bc2bfed0e55939742e68f822bb
 81920   5 d27cf7f6b9a6743a017ce9a0c79afd4a d27cf7f6b9a6743a017ce9a0c79afd4a
 98304   6 ee690d82e0c61379aeb2a7747f9fa4d6 ee690d82e0c61379aeb2a7747f9fa4d6
114688   7 59713f5d42d9513de3da26fe17c29e2e 59713f5d42d9513de3da26fe17c29e2e
131072   8 53db93432e14161523fbd5fba1581569 53db93432e14161523fbd5fba1581569
147456   9 50910c378e4a59bec755d14608a036c2 50910c378e4a59bec755d14608a036c2
163840  10 4b2e92e8ef3db99b2c5210ef5629c791 4b2e92e8ef3db99b2c5210ef5629c791
180224  11 143f7457931ff1ad569b94e366c51f7e 143f7457931ff1ad569b94e366c51f7e
196608  12 554c69c24ad944dc8ee83e821b541344 554c69c24ad944dc8ee83e821b541344
212992  13 8acb70efc37734ef58653256515545dd 8acb70efc37734ef58653256515545dd
229376  14 3d60a6deee136e8742676ef970d7104c 3d60a6deee136e8742676ef970d7104c
245760  15 dad3023405eda388051c914d2ee85ba0 dad3023405eda388051c914d2ee85ba0
262144  16 11c4a8487daf922e3d89ea7d9522537a 11c4a8487daf922e3d89ea7d9522537a
278528  17 409072d3fa4a2c2fafb6d1922fe38808 409072d3fa4a2c2fafb6d1922fe38808
294912  18 58786f5fd93d1f48ed0030c702d04ced 58786f5fd93d1f48ed0030c702d04ced
311296  19 944b2e950df417f70a066ff6539382d8 944b2e950df417f70a066ff6539382d8
327680  20 ae2868861d8be1419e3065c301351157 ae2868861d8be1419e3065c301351157
344064  21 d91ac4b7f49c638b0feba5d3dd57d7a8 d91ac4b7f49c638b0feba5d3dd57d7a8
360448  22 8e964f95b1f5bb30d800e84aa7af511f 8e964f95b1f5bb30d800e84aa7af511f
376832  23 9e7019b0534018f935486deb670195b2 9e7019b0534018f935486deb670195b2
393216  24 36f4ba2a44f3b2b59139f4eb93bd59cf 36f4ba2a44f3b2b59139f4eb93bd59cf
409600  25 2032d78308285733232c806b21ecba6f 2032d78308285733232c806b21ecba6f
425984  26 4aa8f25ba5a9b77f0486fb8bc69c4042 4aa8f25ba5a9b77f0486fb8bc69c4042
442368  27 5d54f14a8ea3eaaf596c170483b28992 5d54f14a8ea3eaaf596c170483b28992
offset   i file1                             file2                            mod j

total --- ce7717511ddd205c62c7d4d15f3f96b9 0a2136e4270432388e85440b8604f0ce ***

@moscicki
Contributor

I do not want to speak for the developers, but the issue has apparently been fixed (and the bug was not in the ownCloud client but in the Qt toolkit the client uses). It is simply amazing that Qt, which is so widely used worldwide, has such bugs.

@dragotin
Contributor

Yes, @moscicki says it: We consider that fixed. The next upcoming version 1.8.1 will have the fix.

Thanks for verifying that!

@dragotin dragotin added the ReadyToTest QA, please validate the fix/enhancement label Apr 23, 2015
@brozkeff

Does this mean that the 1.8.0 client freshly downloaded from the website, using Qt 5.4.0, is still affected by the same bug?

If it is, is there some workaround until 1.8.1 is released?

Regarding triggering the re-upload with touch: could that be safely performed on Mac OS X with a command such as find . -exec touch {} + ? The user has several GBs of their own data there, plus GBs of shared data, and some directories are read-only on ownCloud. Won't changing the timestamps force the OC client to re-upload files the user has no rights to upload (so the ownCloud client will show an error for each such file indefinitely, as happens now when the user writes to a read-only ownCloud folder)?

Is there some script that could detect which files on the server, no matter which user owns them, are possibly corrupted, so we could find them and, if possible, locate the original, uncorrupted file on the owner's PC? Is there some specific data always at some position, or is the corrupted part just random garbage?

And if the 1.8.1 client finally solves this critical issue on all platforms, is there finally some setting on the server side to force all users to upgrade their clients to the fixed version by a certain date? We have several external collaborators with their own devices, and I cannot manage their laptops remotely or in any other way.
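A safer alternative to a blanket find . -exec touch {} + is to bump mtimes only on files the user can actually write, leaving read-only shares alone (a hypothetical helper, not an official tool; the sync folder path below is a placeholder):

```python
import os

def touch_writable(sync_root):
    """Set mtime to 'now' on every writable file under sync_root so the
    sync client re-uploads it; skip read-only files to avoid the
    permanent upload errors described above."""
    touched = []
    for dirpath, _dirnames, filenames in os.walk(sync_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.access(path, os.W_OK):  # writable for this user?
                os.utime(path, None)      # bump mtime to the current time
                touched.append(path)
    return touched

# touch_writable("/Users/alice/ownCloud")  # placeholder path
```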

@LukasReschke
Member

And if the 1.8.1 client finally solves this critical issue on all platforms, is there finally some setting on the server side to force all users to upgrade their clients to the fixed version by a certain date? We have several external collaborators with their own devices, and I cannot manage their laptops remotely or in any other way.

This will be possible with ownCloud 8.1 - ref owncloud/core#15683

@moscicki
Contributor

You may try the md5blocks script with the --check option and a single file argument. This does not cover chunked file uploads where the corruption happened in any chunk other than the first one; I intend to extend this option to cover the chunked-upload cases. You can estimate how much this currently affects you by calculating the ratio of files below 10 MB in your entire population of files.

This is not 100% watertight, but it should give you some idea (you will also probably need to exclude some false positives, a file full of zeros being the simplest example).
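The single-file --check heuristic can be sketched as: flag a file whose first 16 KB block is a byte-for-byte duplicate of a later block (a sketch of the idea, not the md5blocks implementation; as noted above, zero-filled files are false positives):

```python
import hashlib

BLOCK_SIZE = 16384  # the 16 KB granularity seen in this issue

def looks_corrupted(path, block=BLOCK_SIZE):
    """Heuristic: the corruption pattern in this thread duplicates a
    later block into block 0, so a repeated first block is suspicious."""
    hashes = []
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(block), b""):
            hashes.append(hashlib.md5(chunk).hexdigest())
    return len(hashes) > 1 and hashes[0] in hashes[1:]
```

Such a scan can run server-side over all users' data, but every hit still needs to be verified against the owner's original copy.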

@Dianafg76

This bug is covered by a QNetworkReply autotest.

@Dianafg76 Dianafg76 removed the ReadyToTest QA, please validate the fix/enhancement label May 4, 2015
jturcotte added a commit that referenced this issue May 6, 2015
Since QNonContiguousByteDeviceThreadForwardImpl::reset will
call UploadDevice::reset with a BlockingQueuedConnection, this
allows us to reset the HTTP channel along with its buffers
before they get the chance to be reused with a subsequent request.