drastically reduce allocations in ring buffer implementation #64
Changes from 1 commit
@@ -68,15 +68,16 @@ type segmentedBuffer struct {
 	cap     uint32
 	len     uint32
 	bm      sync.Mutex
-	// read position in b[0].
+	// read position in b[bPos].
 	// We must not reslice any of the buffers in b, as we need to put them back into the pool.
 	readPos int
+	bPos    int
 	b       [][]byte
 }

 // NewSegmentedBuffer allocates a ring buffer.
 func newSegmentedBuffer(initialCapacity uint32) segmentedBuffer {
-	return segmentedBuffer{cap: initialCapacity, b: make([][]byte, 0)}
+	return segmentedBuffer{cap: initialCapacity, b: make([][]byte, 0, 16)}
 }

 // Len is the amount of data in the receive buffer.
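One of the smaller changes above is the constructor pre-sizing the chunk slice with make([][]byte, 0, 16). With zero initial capacity, every time append fills the backing array Go allocates a larger one and copies the old contents over; pre-sizing removes those early reallocations. A quick illustrative sketch of that effect (standalone code written for this explanation, not part of the PR; the count of 16 just mirrors the value chosen in the diff):

package main

import "fmt"

func main() {
	// Starting with zero capacity, append must repeatedly allocate a
	// larger backing array and copy the old elements into it.
	grown := make([][]byte, 0)
	allocs := 0
	for i := 0; i < 16; i++ {
		before := cap(grown)
		grown = append(grown, make([]byte, 1))
		if cap(grown) != before {
			allocs++ // backing array was (re)allocated
		}
	}

	// Pre-sizing the capacity, as the patched constructor does, means the
	// first 16 appends reuse the same backing array.
	sized := make([][]byte, 0, 16)
	for i := 0; i < 16; i++ {
		sized = append(sized, make([]byte, 1))
	}

	fmt.Println("backing-array allocations without pre-sizing:", allocs)
	fmt.Println("capacity with pre-sizing:", cap(sized)) // stays 16
}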
@@ -109,15 +110,15 @@ func (s *segmentedBuffer) GrowTo(max uint32, force bool) (bool, uint32) {
 func (s *segmentedBuffer) Read(b []byte) (int, error) {
 	s.bm.Lock()
 	defer s.bm.Unlock()
-	if len(s.b) == 0 {
+	if s.bPos == len(s.b) {
 		return 0, io.EOF
 	}
-	data := s.b[0][s.readPos:]
+	data := s.b[s.bPos][s.readPos:]
 	n := copy(b, data)
 	if n == len(data) {
-		pool.Put(s.b[0])
-		s.b[0] = nil
-		s.b = s.b[1:]
+		pool.Put(s.b[s.bPos])
+		s.b[s.bPos] = nil
+		s.bPos++
 		s.readPos = 0
 	} else {
 		s.readPos += n
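The Read change above is the heart of the PR: instead of advancing the outer slice with s.b = s.b[1:], which throws away capacity at the front and forces append to keep allocating new backing arrays, it keeps the slice intact, nils out the consumed chunk, and advances an index (bPos). Below is a simplified, self-contained sketch of that read-cursor pattern (illustrative only: it drops the mutex and the buffer pool, and the chunkBuffer type and its fields are invented for this example):

package main

import (
	"fmt"
	"io"
)

// chunkBuffer is a stripped-down illustration of the read-cursor idea:
// chunks are appended to b, and reads advance pos/readPos instead of
// reslicing b, so the outer slice keeps its capacity and each chunk
// stays intact for later reuse.
type chunkBuffer struct {
	pos     int // index of the chunk currently being read
	readPos int // offset into b[pos]
	b       [][]byte
}

func (c *chunkBuffer) Append(p []byte) {
	c.b = append(c.b, p)
}

func (c *chunkBuffer) Read(p []byte) (int, error) {
	if c.pos == len(c.b) {
		return 0, io.EOF
	}
	data := c.b[c.pos][c.readPos:]
	n := copy(p, data)
	if n == len(data) {
		// Chunk fully consumed: drop the reference and move the cursor.
		// In the real implementation this is where the chunk would be
		// returned to the pool.
		c.b[c.pos] = nil
		c.pos++
		c.readPos = 0
	} else {
		c.readPos += n
	}
	return n, nil
}

func main() {
	var cb chunkBuffer
	cb.Append([]byte("hello "))
	cb.Append([]byte("world"))

	out := make([]byte, 4)
	for {
		n, err := cb.Read(out)
		if err == io.EOF {
			break
		}
		fmt.Printf("%q\n", out[:n])
	}
}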
@@ -152,6 +153,22 @@ func (s *segmentedBuffer) Append(input io.Reader, length uint32) error {
 	if n > 0 {
 		s.len += uint32(n)
 		s.cap -= uint32(n)
+		// we are out of cap
+		if len(s.b) == cap(s.b) && s.bPos > 0 {
+			if s.bPos == len(s.b) {
+				// have no unread chunks, just move pos
+				s.bPos = 0
+				s.b = s.b[:0]
+			} else {
Review comment on the line above:
I'd change this to check whether at least half of the slice is free. That'll slightly increase allocations until we hit a steady state, but should avoid the degenerate case where we slide by one every single time.

Reply:
This is definitely a good improvement, but I am not sure about "at least half of the slice"; I think it should be a bit lower than that. How about 0.25 of the capacity? Also, then it will be equal to
That got me thinking: should we limit the maximum capacity of the buffer (recreate the slice with the default capacity when it reaches a certain maximum and the buffer is empty)? What is the average Stream lifespan? I tried to test this on the package's benchmark, but it does not grow at all because of the in-memory network.
+				// have unread chunks, but also have space at the start of slice, so shift it to the left
+				copied := copy(s.b, s.b[s.bPos:])
+				for i := copied; i < len(s.b); i++ {
+					s.b[i] = nil
+				}
+				s.b = s.b[:copied]
+				s.bPos = 0
+			}
+		}
 		s.b = append(s.b, dst[0:n])
 	}
 	return err
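The compaction added above only happens when the chunk slice is completely full (len(s.b) == cap(s.b)), and the first review comment points at the degenerate case: if s.bPos is small (say 1) every time that check fires, the unread chunks are shifted by one element on each append. The sketch below shows the kind of threshold the reviewers are debating; the fraction (a quarter here, one of the two values mentioned), the helper name compactIfNeeded, and its signature are all invented for illustration and are not code from this PR:

// compactIfNeeded is a hypothetical helper showing the reviewers' idea:
// only slide the unread chunks to the front of the slice when a sizeable
// fraction of it (here a quarter) is already consumed, so we do not pay
// the copy on every single append.
func compactIfNeeded(b [][]byte, bPos int) ([][]byte, int) {
	if len(b) != cap(b) {
		return b, bPos // still room to append, nothing to do
	}
	if bPos == len(b) {
		return b[:0], 0 // nothing unread, just reset
	}
	if bPos < cap(b)/4 {
		// Not enough free space at the front to be worth a shift; the
		// caller's append will grow the slice instead, which is the
		// "slightly more allocations until steady state" trade-off.
		return b, bPos
	}
	copied := copy(b, b[bPos:])
	for i := copied; i < len(b); i++ {
		b[i] = nil // drop references so the chunks can be pooled or collected
	}
	return b[:copied], 0
}

The reply's follow-up question, capping the slice and recreating it at the default capacity once the buffer is fully drained, would be another branch at the same decision point.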
Review comment:
Suggest you also add b.ReportAllocs() here and in BenchmarkSendRecvLarge.

Reply:
I am not sure this is necessary; you can add -benchmem to go test to achieve the same result.

Reply:
True, but I'm suggesting it because allocs seem to be of ongoing concern. Up to you.
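For context on that exchange: b.ReportAllocs() (from the standard testing package) makes one specific benchmark always report allocs/op and B/op, while passing -benchmem to go test turns those columns on for every benchmark in the run, so both comments are correct and the choice is a matter of convenience. A minimal hedged sketch (the benchmark name and body are invented for illustration; BenchmarkSendRecvLarge is the real benchmark the comment refers to, but its contents are not shown here):

package yamux_test

import "testing"

// BenchmarkAppendRead is an illustrative benchmark, not one from this repo.
// b.ReportAllocs() forces the allocation columns to be reported for this
// benchmark even when the run is not invoked with -benchmem.
func BenchmarkAppendRead(b *testing.B) {
	b.ReportAllocs()
	buf := make([]byte, 4096)
	for i := 0; i < b.N; i++ {
		// Work under measurement would go here; this placeholder just
		// touches the buffer so the loop is not optimized away entirely.
		buf[i%len(buf)]++
	}
}

The command-line equivalent mentioned in the reply would be along the lines of: go test -bench=. -benchmem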