We have a use case that requires fetching 100K-1M filtered records directly from Pinot servers with minimal performance impact. Each record has between 5 and 10 columns. We noticed that fetching 500K records through the default path (Pinot servers -> Pinot broker -> client) is a challenge for the brokers.
One reason is that the Pinot dbapi client uses HTTP/JSON communication, which is inefficient for large result sets. The Pinot connectors for Presto and Spark fetch large result sets directly from Pinot servers using a more efficient communication method: gRPC + streaming. This approach puts less load on the Pinot servers and allows large result sets to be fetched quickly.
Can you add gRPC + streaming support to the Pinot Python client?
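For context, a direct server fetch could look roughly like the sketch below. This is a hedged illustration, not a working implementation: it assumes Python stubs generated with grpcio-tools from Pinot's server.proto (`server_pb2` / `server_pb2_grpc`), and the exact message/field names, metadata keys, and default gRPC port should be verified against the Pinot release in use. A real client would also need to obtain the segment-to-server routing from the broker and decode the streamed DataTable payloads.

```python
# Hypothetical sketch of a direct server fetch over gRPC + streaming.
# Assumes stubs generated from Pinot's server.proto with grpcio-tools
# (server_pb2 / server_pb2_grpc); message and field names, the metadata
# key, and the port should all be checked against the Pinot release in use.
import grpc
import server_pb2
import server_pb2_grpc


def stream_rows(server_host, sql, segments):
    # 8090 is Pinot's default server gRPC port (pinot.server.grpc.port).
    channel = grpc.insecure_channel(f"{server_host}:8090")
    stub = server_pb2_grpc.PinotQueryServerStub(channel)
    request = server_pb2.ServerRequest(
        metadata={"enableStreaming": "true"},
        sql=sql,
        segments=segments,  # segment list for this server, from broker routing
    )
    # Submit is a server-streaming RPC: each response carries one block of
    # serialized rows, so neither side buffers the full result set in memory.
    for response in stub.Submit(request):
        yield response.payload  # raw DataTable bytes; decoding is left out here
```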
[More details]
We noticed high CPU utilization on the Pinot brokers. The following chart shows that the brokers spend most of their time on the Reduce operation. Note that the queries in question are simple SELECT + WHERE queries (no aggregations, no GROUP BY, and no joins).
Reduce operation: time spent by the broker combining query results from multiple servers.
Broker Avg. P99 reduce operation: [chart not shown]
To summarize the above chart, the broker spends:

- between 1s and 3.5s combining responses for ApplicationStage queries;
- between 1s and 4.5s combining responses for ApplicationMilestone queries;
- up to 1s combining responses for ATSApplicant queries.
💡 The chart explains where 1s to 3-4s of each ApplicationStage and ApplicationMilestone query goes: the broker combining responses and serializing them into JSON before responding to the Reports Pinot client.
One thing I'm planning to work on is support for different JSON libraries, like ujson and orjson (the latter being the fastest available for Python). This should allow much faster deserialization while staying easy and quick to implement. Would this be OK as a short-term improvement towards what you need?
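For illustration, the pluggable-JSON idea could be as simple as the fallback import below. orjson and ujson are real third-party libraries; how this gets wired into pinotdb is left open here.

```python
# Sketch of a pluggable JSON loader: prefer orjson (fastest), then ujson,
# then the stdlib json module. The integration point into pinotdb is
# hypothetical; all three libraries expose a compatible loads().
try:
    import orjson as _json  # orjson.loads accepts bytes or str
except ImportError:
    try:
        import ujson as _json
    except ImportError:
        import json as _json


def loads(payload):
    """Deserialize a JSON response body with the fastest available library."""
    return _json.loads(payload)
```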