
async fn send(&self, mut message: Message) -> Result<Disposition> makes it hard to get impressive benchmark throughput #36

Open
jiridanek opened this issue Dec 7, 2021 · 2 comments

@jiridanek

Ideally, I'd like to receive a Flow frame with, say, 1000 credits, fire 1000 messages, process the Disposition frames as they arrive, and fire another batch of messages when the next Flow frame grants more credit. That way I can get the best possible message throughput for a benchmark.

(This is how qpid-proton-c works, https://github.com/ssorj/quiver/blob/072ae54a99cbfc395aa301c89ac82fd64c5a30e0/impls/quiver-arrow-qpid-proton-c.c#L320)

What I am forced to do instead is wait for the delivery of each message before I can send the next one. This means every send costs at least one full round trip between sender and receiver.

I tried collecting all the futures from send() into a

let mut deliveries: Vec<Pin<Box<dyn Future<Output=Result<Disposition, AmqpError>>>>> = Vec::new();

then waiting for them with

let _ = futures::future::join_all(deliveries).await;

but this is even slower than awaiting each individually.

Proton-c is able to send ~250k messages/second on loopback. Dove, for me, tops out at a bit over 10k messages/s.

What I figure I could do instead is handle the frame traffic myself, as in examples/send_framing.rs. That way I should be able to do everything proton-c does, but then I am no longer using the client in a realistic way.

@lulf
Owner

lulf commented Dec 8, 2021

Thank you @jiridanek, this is valuable input. There might be some API changes needed, but the internals also need profiling for bottlenecks. I haven't done any performance benchmarking yet, so I didn't expect it to perform well at this point.

@jiridanek
Author

I like the API; I think it is a practical one for productive uses of the library. I did not try to handle redeliveries/reconnects in Dove, but with this API it looks reasonable to do.

It's just that I could not figure out how to make it perform well in the "do nothing useful, just fire messages" scenario.
