store: Prevent store from panic after range object storage #3322

Closed
wants to merge 2 commits
16 changes: 12 additions & 4 deletions pkg/store/bucket.go
@@ -1900,6 +1900,7 @@ func (r *bucketIndexReader) loadSeries(ctx context.Context, ids []uint64, refetc
 		return errors.Wrap(err, "read series range")
 	}
 
+	indexLength := len(b)
 	r.mtx.Lock()
 	r.stats.seriesFetchCount++
 	r.stats.seriesFetched += len(ids)
@@ -1908,7 +1909,11 @@ func (r *bucketIndexReader) loadSeries(ctx context.Context, ids []uint64, refetc
 	r.mtx.Unlock()
 
 	for i, id := range ids {
-		c := b[id-start:]
+		seriesOffset := id - start
+		if int(seriesOffset) > indexLength {
Contributor

This just checks the offset, but the following code could overread anyway. Am I missing anything?
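The reviewer's concern can be shown with a small, self-contained sketch (`safeSeriesAt` is a hypothetical helper, not Thanos code): even when the start offset is in range, the varint length prefix decoded right after slicing can still describe more bytes than the buffer holds, so a complete guard has to check both.

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// safeSeriesAt illustrates the point above: checking only the start offset
// is not enough, because the varint length prefix decoded after slicing can
// still describe more bytes than the buffer actually holds.
func safeSeriesAt(b []byte, off int) ([]byte, error) {
	if off < 0 || off >= len(b) {
		return nil, fmt.Errorf("series offset out of range, off: %d, length: %d", off, len(b))
	}
	c := b[off:]
	l, n := binary.Uvarint(c) // entry length, followed by that many payload bytes
	if n < 1 {
		return nil, errors.New("reading series length failed")
	}
	// The decoded length must also fit in the remaining bytes.
	if uint64(n)+l > uint64(len(c)) {
		return nil, fmt.Errorf("series entry truncated: need %d bytes, have %d", uint64(n)+l, len(c))
	}
	return c[uint64(n) : uint64(n)+l], nil
}

func main() {
	buf := []byte{0x03, 0xaa, 0xbb, 0xcc, 0x10} // one 3-byte entry, then a bogus length byte
	entry, err := safeSeriesAt(buf, 0)
	fmt.Println(entry, err)
	_, err = safeSeriesAt(buf, 4) // varint says 16 bytes follow, but none do
	fmt.Println(err)
}
```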

Author

Hi, we hit the same problem as the following issue:
#3057

When I debugged it, I found that the data returned by readIndexRange and readChunkRange is not necessarily as long as the requested range, but the callers index into the buffer using offsets derived from that range, so out-of-range slicing can happen.

I use the S3 interface with Ceph as the underlying storage. I am still looking for the root cause of the incorrect data length, but have made no progress so far. In the meantime, I hope the store will not panic in this case.

I tried to modify the readChunkRange and readIndexRange methods themselves, but the other upstream callers do not validate the returned data length either, so I only changed the place where the panic occurs : )
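The short-read behaviour described above could also be caught right at the ranged read, before any caller slices the buffer. A minimal sketch (`checkRangeLength` is a hypothetical name, not part of the Thanos API), assuming half-open `[start, end)` ranges as in the diff:

```go
package main

import "fmt"

// checkRangeLength is a hypothetical guard that could run right after a
// ranged object-storage read such as readIndexRange or readChunkRange:
// verify the provider actually returned the number of bytes the
// [start, end) range asked for.
func checkRangeLength(b []byte, start, end uint64) error {
	want, got := end-start, uint64(len(b))
	if got < want {
		return fmt.Errorf("short range read: want %d bytes for [%d, %d), got %d", want, start, end, got)
	}
	return nil
}

func main() {
	full := make([]byte, 100)
	fmt.Println(checkRangeLength(full, 0, 100)) // full range returned: no error
	short := make([]byte, 90) // a misbehaving gateway returned fewer bytes
	fmt.Println(checkRangeLength(short, 0, 100))
}
```

The author notes that upstream callers do not require a particular returned length, which is why a guard like this was not added in this PR.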

Member


@MegaEle do you use radosgw, perchance? I have seen the same thing a few times but never got around to investigating it. It's probably some kind of bug in that API, in that it doesn't follow the spec. Have you ever found a way to reproduce it consistently?

Contributor

Not sure if the issue I just encountered is also relevant: #3339

I use radosgw/Ceph, which confirms what @GiedriusS says.

+			return errors.Errorf("series offset out of range, off: %d, length: %d", seriesOffset, indexLength)
+		}
+		c := b[seriesOffset:]
 
 		l, n := binary.Uvarint(c)
 		if n < 1 {
@@ -2079,7 +2084,7 @@ func (r *bucketChunkReader) loadChunks(ctx context.Context, offs []uint32, seq i
 	if err != nil {
 		return errors.Wrapf(err, "read range for %d", seq)
 	}
-
+	chunkLength := len(*b)
 	locked := true
 	r.mtx.Lock()
 
@@ -2096,8 +2101,11 @@ func (r *bucketChunkReader) loadChunks(ctx context.Context, offs []uint32, seq i
 	r.stats.chunksFetchedSizeSum += int(end - start)
 
 	for _, o := range offs {
-		cb := (*b)[o-start:]
-
+		chunkOffset := o - start
+		if int(chunkOffset) > chunkLength {
+			return errors.Errorf("chunk offset out of range, off: %d, length: %d", chunkOffset, chunkLength)
+		}
+		cb := (*b)[chunkOffset:]
 		l, n := binary.Uvarint(cb)
 		if n < 1 {
 			return errors.New("reading chunk length failed")
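The failure mode the diff guards against can be reproduced in isolation: the object store returns fewer bytes than the requested range, and slicing at an offset derived from index metadata panics. In this sketch, `sliceAt` is a demonstration-only helper (not Thanos code) that turns the panic into an error:

```go
package main

import "fmt"

// sliceAt slices b at off and converts the out-of-range panic that Go
// raises for off > len(b) into an error, purely for demonstration.
func sliceAt(b []byte, off uint64) (c []byte, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	return b[off:], nil
}

func main() {
	b := make([]byte, 10)    // pretend the gateway returned 10 of 16 requested bytes
	_, err := sliceAt(b, 12) // offset 12 was computed from the index, not from the reply
	fmt.Println(err)
}
```

Without the bounds checks added in this PR, the equivalent slice expressions in loadSeries and loadChunks would panic the whole store process instead of returning an error.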