Collect usage and performance metrics #46
A lighter-weight version of this request would be logs that detail the processing status of each thread. For example, we just debugged an issue with our R&D cluster where 2 nodes were acting flaky, and I'm pretty sure that the Phoenix client was hanging specifically on those two nodes. Through trial and error we figured it out, but it would be nice to have logs something along the lines of: `[timestamp][request-id] Starting query "select count(*) from myTable"`. What this would have uncovered is that we had N region servers, and only N-2 requests were completing. Icing on the cake would be that if a query times out, it tells me which region servers it's still waiting on.
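A minimal sketch of what that per-request logging could look like on the client side, assuming SLF4J is on the classpath; the `QueryLogger` class and the `request-id` MDC key are hypothetical illustrations, not existing Phoenix code:

```java
import java.util.UUID;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

// Hypothetical helper: tags every log line for one query with a request id so
// per-region-server progress (dispatched / completed / still waiting) can be grepped.
public class QueryLogger {
  private static final Logger LOG = LoggerFactory.getLogger(QueryLogger.class);

  public static String start(String sql) {
    String requestId = UUID.randomUUID().toString();
    MDC.put("request-id", requestId);   // surfaced via %X{request-id} in the log pattern
    LOG.info("Starting query \"{}\"", sql);
    return requestId;
  }

  public static void scanDispatched(String regionServer) {
    LOG.info("Dispatched scan to region server {}", regionServer);
  }

  public static void scanCompleted(String regionServer, long millis) {
    LOG.info("Scan on {} completed in {} ms", regionServer, millis);
  }

  public static void finish() {
    LOG.info("Query complete");
    MDC.remove("request-id");
  }
}
```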
We should look at the Zipkin-based monitoring that Elliot Clark is doing for HBase here. It needs to aggregate/roll up the costs, but if it did that, it would be a sweet way to monitor perf.
I was thinking we could use Hadoop metrics2 to manage the metrics for a given request (both scans and inserts). What you really want in metrics tooling is:
metrics2 gives us all of that. Also, there are good reference implementations, for instance, the Hadoop code itself (here and here) as well as the new HBase metrics system. We can then use this to keep stats on Phoenix in Phoenix tables. By instrumenting the Phoenix methods correctly we can gather things like the number of bytes sent, method times, region/region server response times, etc. You would then publish these metrics to a Phoenix sink which, again, writes back to a Phoenix table (and possibly updates a local stats cache too). The only interesting bits are then:
The latter is just good engineering. The former can be solved by tagging each method call with a UUID (similar to how Zipkin would track the same request). Stats about the whole call would then eventually end up in the same Phoenix stats table, which is then queryable. The intelligent bit becomes updating the stats table with metrics in a way that lets you do a rollup later to reconstruct history. Since you know the query id, you can correlate it between the clients and servers. This also gives you perfect timing, since you know the operation order (and you could get smarter when you parallelize things by having "sub" parts that get their own UUID but correlate back to the original request, e.g. UUID 1234 splits into 1234-1, 1234-2, 1234-3). I started working through some simple, toy examples of using metrics2 for logging (simple and with dynamic method calls). It's nothing fancy and shouldn't be used directly for Phoenix, but it might be helpful to someone trying to figure out how the metrics2 stuff all works.
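As a rough illustration of the metrics2 side of this, here is a minimal sketch of an annotated metrics source for query timings; the class and metric names are hypothetical, and real Phoenix instrumentation would also need to carry the per-request UUID described above as a tag on the emitted records:

```java
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Hypothetical metrics2 source for Phoenix query stats. metrics2 handles the
// snapshotting and publishing; the configured sinks decide where records end up.
@Metrics(about = "Phoenix query metrics", context = "phoenix")
public class PhoenixQueryMetrics {
  @Metric("Time spent executing queries") MutableRate queryTime;
  @Metric("Bytes sent to region servers") MutableCounterLong bytesSent;

  public static PhoenixQueryMetrics create() {
    MetricsSystem ms = DefaultMetricsSystem.initialize("phoenix");
    // register() processes the @Metric annotations and starts publishing to configured sinks
    return ms.register("PhoenixQueryMetrics", "Phoenix query metrics", new PhoenixQueryMetrics());
  }

  // Instrumented call sites would invoke these around each request.
  public void addQuery(long elapsedMillis) { queryTime.add(elapsedMillis); }
  public void addBytesSent(long bytes)     { bytesSent.incr(bytes); }
}
```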
Simple prototype is up at github.com/jyates/phoenix/tree/tracing. It just traces mutations from the client to the server through the indexing path, writes them to the sink, which writes them to a Phoenix table, and then has a simple reader to rebuild the traces. See the end-to-end test for a full example.
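For context, a metrics2 sink that writes back into a Phoenix table might look roughly like the sketch below; the table name, schema, and JDBC URL are assumptions for illustration, not the prototype's actual code:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.apache.commons.configuration.SubsetConfiguration;
import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;

// Hypothetical sink: each metric in a record becomes one row, keyed by record
// name and timestamp, in an assumed SYSTEM.METRICS Phoenix table.
public class PhoenixMetricsSink implements MetricsSink {
  private Connection conn;

  @Override
  public void init(SubsetConfiguration conf) {
    try {
      // "jdbc:phoenix:localhost" is a placeholder; a real sink would read this from conf
      conn = DriverManager.getConnection(conf.getString("jdbcUrl", "jdbc:phoenix:localhost"));
    } catch (SQLException e) {
      throw new RuntimeException("Failed to connect to Phoenix", e);
    }
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    String sql = "UPSERT INTO SYSTEM.METRICS (RECORD, TS, METRIC, VALUE) VALUES (?, ?, ?, ?)";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      for (AbstractMetric metric : record.metrics()) {
        ps.setString(1, record.name());
        ps.setLong(2, record.timestamp());
        ps.setString(3, metric.name());
        ps.setDouble(4, metric.value().doubleValue());
        ps.execute();
      }
    } catch (SQLException e) {
      throw new RuntimeException("Failed to write metrics", e);
    }
  }

  @Override
  public void flush() {
    try {
      conn.commit(); // Phoenix buffers upserts until commit
    } catch (SQLException e) {
      throw new RuntimeException("Failed to flush metrics", e);
    }
  }
}
```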
I'd like to know how much CPU, physical I/O, logical I/O, wait time, blocking time, and transmission time was spent for each thread of execution across the HBase cluster, within coprocessors, and within the client's Phoenix thread pools for each query.
Here are some of the problems I want to solve: