Large file upload failing with "The Content-Range header length does not match the provided number of bytes." #295
Upvoting this issue. I'm running into the same issue using similar code:
Result:
@rkodev Any chance you can take a look at this? As stated, the slices should be sent in order, and the use of goroutines here will not give that assurance...
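(To illustrate the ordering concern in isolation: a tiny standalone sketch, not the SDK's actual code, showing that a goroutine per slice gives no guarantee about the order the requests go out in, while a plain loop does.)

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	slices := []string{"bytes 0-163839", "bytes 163840-327679", "bytes 327680-491519"}

	// Concurrent: the goroutines may start and finish in any order.
	var wg sync.WaitGroup
	for _, s := range slices {
		wg.Add(1)
		go func(rangeHeader string) {
			defer wg.Done()
			fmt.Println("uploading", rangeHeader) // may print in any order
		}(s)
	}
	wg.Wait()

	// Sequential: each slice is only sent after the previous one completes,
	// which is what the upload session requires.
	for _, s := range slices {
		fmt.Println("uploading", s)
	}
}
```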
@rkodev @andrueastman Since you are having problems with the release, I copied your fixed fileuploader code into my code base. The error still occurs with the above code, and also when uploading only one slice.
Re-opening for now.
@buechele Any chance you can confirm the file size of the file being uploaded? Does this mean that the task fails if the file is less than the size of a single chunk?
@andrueastman It fails both when the file size is below the maxSlice size and when it is above.
Does this mean that this happens for all files you try to upload?
@andrueastman Yes, it fails for all files.
@andrueastman Short update:
@andrueastman It's even worse: the originally uncompressed file, which now uploads without an error (after manipulating the Content-Range header), appears gzip-compressed in the destination folder on the Microsoft side.
Thanks for looking into this @buechele and getting to the root cause here. A similar issue has been raised at microsoftgraph/msgraph-sdk-go#747. The correct behavior is to send data to the server uncompressed (I don't believe the Graph API supports receiving compressed content yet), while the client may still decompress responses from the server, since it sends the Accept-Encoding header. @buechele Are you able to make this work by initializing the graph client with the CompressionHandler removed? @rkodev Any chance you can look into updating the default middleware to avoid using the default compression handler?
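For anyone who wants to try this, a rough sketch of building a Graph client whose middleware chain omits the compression handler follows. The constructor and package names here are assumptions based on my reading of kiota-http-go and msgraph-sdk-go and may differ in your version:

```go
package clientsetup

import (
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	azauth "github.com/microsoft/kiota-authentication-azure-go"
	khttp "github.com/microsoft/kiota-http-go"
	msgraphsdk "github.com/microsoftgraph/msgraph-sdk-go"
)

// newClientWithoutCompression builds a GraphServiceClient whose HTTP pipeline
// contains all default middleware except the CompressionHandler.
func newClientWithoutCompression(scopes []string) (*msgraphsdk.GraphServiceClient, error) {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		return nil, err
	}
	auth, err := azauth.NewAzureIdentityAuthenticationProviderWithScopes(cred, scopes)
	if err != nil {
		return nil, err
	}

	// Start from the default middleware chain and drop the CompressionHandler.
	var middleware []khttp.Middleware
	for _, m := range khttp.GetDefaultMiddlewares() {
		if _, isCompression := m.(*khttp.CompressionHandler); isCompression {
			continue
		}
		middleware = append(middleware, m)
	}

	httpClient := khttp.GetDefaultClient(middleware...)
	adapter, err := msgraphsdk.NewGraphRequestAdapterWithParseNodeFactoryAndSerializationWriterFactoryAndHttpClient(
		auth, nil, nil, httpClient)
	if err != nil {
		return nil, err
	}
	return msgraphsdk.NewGraphServiceClient(adapter), nil
}
```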
I can confirm that I uploaded a "Microsoft Word 2007+" file, then downloaded it from SharePoint and checked the file type: it was "gzip compressed data, original size modulo 2^32 11954". I appended the .gz extension to the filename and ran "gzip -d test.docx.gz", which produced a valid docx file that I was able to open.
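(The same check can be scripted; here is a minimal sketch using Go's standard compress/gzip package, with file names taken from this comment:)

```go
package main

import (
	"compress/gzip"
	"io"
	"os"
)

func main() {
	in, err := os.Open("test.docx") // the file downloaded back from SharePoint
	if err != nil {
		panic(err)
	}
	defer in.Close()

	zr, err := gzip.NewReader(in) // only succeeds if the content is actually gzip data
	if err != nil {
		panic(err)
	}
	defer zr.Close()

	out, err := os.Create("test-decompressed.docx")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// Write the decompressed bytes, which should be the original docx.
	if _, err := io.Copy(out, zr); err != nil {
		panic(err)
	}
}
```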
@andrueastman I was not able to create a graph client without compression. gzip was still on. And now I'm getting:
@andrueastman I was now able to create a graph client without compression.
Hi everyone,
@baywet Maybe I'm misunderstanding something here, but I removed the entire CompressionHandler from the pipeline, as @andrueastman suggested, and the result is that the Content-Length header is now missing.
It's strange that you're not seeing a Content-Length header without the compression middleware, as it's being set here
@baywet Yes, I ran the application in debug mode and it is set correctly in the code, but it is missing from the actual HTTP request.
Thanks for confirming. It might be the case that, since this header has a dedicated field on the request object, it's not being read from the headers collection.
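That matches the documented behavior of Go's net/http for outgoing client requests: a Content-Length value placed in the Header map is ignored, and the transport uses the dedicated Request.ContentLength field instead. A minimal illustration (the URL is just a placeholder):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	body := []byte("0123456789")

	req, err := http.NewRequest(http.MethodPut, "https://example.com/upload", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}

	// Ignored for outgoing client requests: net/http does not send
	// Content-Length values set through the Header map.
	req.Header.Set("Content-Length", "10")

	// This field is what actually controls the Content-Length on the wire.
	// (http.NewRequest already fills it in when the body is a *bytes.Reader.)
	req.ContentLength = int64(len(body))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```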
Now that we have all the pieces in place, would you like to submit a pull request?
Fixed by microsoft/kiota-http-go#174
Here's my code, I'm running this on Ubuntu with Go 1.21.5.

`largeFile` is `/home/jasonjoh/vacation.gif`, and `itemPath` is `Documents/vacation.gif`. I've attached vacation.gif in case the file is important. Using a debugging middleware, I see the headers look right, but the slices are sent in a random order (i.e. the first one isn't the 0-163839 range), which causes the failure. Per the docs, slices must be sent sequentially or you will get an error.
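For reference, the contract described in the docs can be sketched with plain net/http (this is not the SDK's internal implementation; the upload session URL and slice size below are placeholders): each slice is PUT in order, and the Content-Range header must describe exactly the bytes carried in that request's body, otherwise the error in the title is returned.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

// uploadSequentially sends the file one slice at a time, in order, with a
// Content-Range header that matches the bytes in each request body.
func uploadSequentially(uploadURL string, data []byte, sliceSize int64) error {
	total := int64(len(data))
	for start := int64(0); start < total; start += sliceSize {
		end := start + sliceSize
		if end > total {
			end = total
		}
		slice := data[start:end]

		req, err := http.NewRequest(http.MethodPut, uploadURL, bytes.NewReader(slice))
		if err != nil {
			return err
		}
		req.Header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", start, end-1, total))
		req.ContentLength = int64(len(slice))

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			return err
		}
		resp.Body.Close()
		if resp.StatusCode >= 400 {
			return fmt.Errorf("slice %d-%d failed: %s", start, end-1, resp.Status)
		}
	}
	return nil
}

func main() {
	data, err := os.ReadFile("/home/jasonjoh/vacation.gif")
	if err != nil {
		panic(err)
	}
	// The real URL comes from the createUploadSession response.
	if err := uploadSequentially("https://example.com/uploadSession", data, 320*1024); err != nil {
		panic(err)
	}
}
```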