Direct writing mode #292
Comments
Regarding the CPU usage, #290 may drastically improve it.
Thank you for the pointers @soywod. Actually, this is a component in an email app, exposing its custom storage. The linked PR looks very relevant; I will test it shortly.
Hi, thanks for giving us feedback!
Don't worry, we are feeling the same.
Features like batching are definitely out of scope for imap-next.
I have no experience with this. Is 6s really slow for processing 200k items on a single thread? Anyway, I would love to see a flamegraph for your case.
Using async for …

We intentionally tried to implement as few features in imap-next as possible. I have the impression that the API you have in mind would be rather opinionated, and we wanted to keep imap-next unopinionated.

To be honest, I'm not sure how to continue. We tried out different APIs, and the current API is the least bad one regarding maintainability and usability, so I don't expect big changes in the near future. Unless someone has a brilliant proposal :p
Hey there,
I absolutely love imap-codec; it has made it possible to pull together an IMAP server in record time, even as a Rust newcomer. I'm a little less in love with imap-next, though: its interface is causing some headaches, and I was wondering if you could offer suggestions (or consider mine).
For example, when producing a large FETCH response in the simplest use of the API, it is necessary to periodically stop calling enqueue_...() and pump next() / stream.flush(), otherwise the entire response gets buffered in memory. Deciding when to pump next()/flush() creates a headache of its own: calling them for each response item causes large CPU overhead, probably as a result of heavy syscall use writing many small messages. Finding a balance is hard because there is little information available for estimating the current size of the output buffer. At present the FETCH response handler hard-wires the rule "if more than 10 responses have been sent with no corresponding ResponseSent, loop flushing until the count is back at 10 or below" (see the sketch after this paragraph), which is not ideal. It also makes no distinction between tiny responses (e.g. "(UID)") and large responses fetching a whole message body and headers.
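A minimal sketch of that workaround, for concreteness. enqueue_data, stream.next(&mut server), and the ResponseSent event are the names referenced in this issue; build_fetch_data and handle_other_event are hypothetical helpers, and the exact imap-next signatures may differ:

```rust
// Hedged sketch of the hard-wired backpressure loop described above.
// Names follow this issue's text, not necessarily the real imap-next API;
// `build_fetch_data` and `handle_other_event` are hypothetical.
const MAX_IN_FLIGHT: usize = 10; // the hard-wired threshold

let mut in_flight: usize = 0;
for item in fetch_items {
    server.enqueue_data(build_fetch_data(&item)); // queue one FETCH response
    in_flight += 1;

    // Once too many responses are queued, pump the connection until enough
    // ResponseSent events arrive to drain the queue back under the limit.
    while in_flight > MAX_IN_FLIGHT {
        match stream.next(&mut server).await? {
            server::Event::ResponseSent { .. } => in_flight -= 1,
            other => handle_other_event(other), // e.g. new client commands
        }
    }
}
```

Even with a counter like this, choosing MAX_IN_FLIGHT is guesswork, because the queue is measured in responses rather than bytes.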
A dance like the above is necessary not just for memory's sake but also for response latency. Fetching my largest folder of 200k items costs 6 seconds of raw CPU just to build the response (quite a reasonable overhead, I think), but without the loop above that becomes a 6-second delay before the client sees the first byte of the response.
Finally, it is necessary to continuously call next()/flush() during large writes to detect client state: there is no point burning CPU producing a large response for a low-bandwidth client, or continuing to generate a response for a client that has hung or disconnected. (A sketch of interleaving the two follows.)
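One way to interleave the two concerns, again only as a sketch under the same assumed names, plus a hypothetical async produce_next_chunk(); whether stream.next() is cancellation-safe inside select! would need checking against the real API:

```rust
// Hedged sketch: keep pumping the connection while generating the response,
// so a hung or disconnected client is noticed before the whole FETCH result
// has been built. `produce_next_chunk` is hypothetical.
loop {
    tokio::select! {
        event = stream.next(&mut server) => {
            // An Err here means the connection is gone: stop burning CPU.
            match event? {
                server::Event::ResponseSent { .. } => { /* update counters */ }
                other => handle_other_event(other),
            }
        }
        chunk = produce_next_chunk() => {
            match chunk {
                Some(data) => server.enqueue_data(data),
                None => break, // response fully generated
            }
        }
    }
}
```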
I like how imap-next abstracts away all the details of the protocol, but what I really wish for is an interface like:
server.write_data(&data).await
where all the internal buffering, and the parallel world running alongside the underlying network state, is avoided. Another benefit of blocking the calling function is that it enables sharing large message bodies rather than copying them just to enter a queue they will almost immediately leave. This would ideally disconnect resident memory usage entirely from the actual size of the messages being sent. Is that something that might be possible?

Thanks
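For comparison, here is roughly how the FETCH handler could look under such an interface. write_data does not exist in imap-next today; this is purely the imagined shape, with build_fetch_data again hypothetical:

```rust
// Purely hypothetical: the FETCH handler if imap-next offered a direct,
// awaitable write. Nothing like `write_data` exists in the real library.
for item in fetch_items {
    let data = build_fetch_data(&item); // borrow the stored body, no copy
    // Suspends until the bytes reach the socket, so backpressure, latency,
    // and disconnection handling all fall out of ordinary await semantics,
    // and resident memory stays bounded by a single response.
    server.write_data(&data).await?;
}
```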