This repository has been archived by the owner on Feb 12, 2022. It is now read-only.

Collect usage and performance metrics #46

Open
ryang-sfdc opened this issue Feb 13, 2013 · 4 comments

Comments

@ryang-sfdc
Contributor

I'd like to know how much CPU, physical I/O, logical I/O, wait time, blocking time, and transmission time is spent for each thread of execution across the HBase cluster, within coprocessors, and within the client's Phoenix thread pools for each query.

Here are some of the problems I want to solve:

  1. every component has one or more configurable threadpools, and I have no idea how to gather data to make any decisions.
  2. queries that I think should be fast turn out to be dog slow, e.g., select foo from bar where foo like 'abc%' group by foo. Without attaching a profiler to hbase, which most people won't bother with, it's not clear why they're slow.
@doug-explorys

A lighter-weight version of this request is logging that details the processing status of each thread. For example, we just debugged an issue on our R&D cluster where 2 nodes were acting flaky, and I'm pretty sure the Phoenix client was hanging specifically on those two nodes. Through trial and error we figured it out, but it would be nice to have logs along the lines of this:

[timestamp][request-id] Starting query "select count(*) from myTable"
[timestamp][request-id][thread-id] starting against RS-X (for each thread)
[timestamp][request-id][thread-id] ending in XXX (ms)
[timestamp][request-id] Finished in YYY (ms)

What this would have uncovered is that we had N region servers, and N-2 requests were completing.

Icing on the cake is that if a query times out, tell me which RS's it's waiting on.
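
For what it's worth, here's a minimal sketch of how that kind of per-request, per-thread logging could be emitted on the client, assuming SLF4J with MDC (the class, MDC keys, and log pattern are purely illustrative, not existing Phoenix code):

```java
import java.util.UUID;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

// Hypothetical helper: stamps every log line in the current thread with a
// request id and thread id so per-region-server progress can be followed.
public class QueryLogContext implements AutoCloseable {
  private static final Logger LOG = LoggerFactory.getLogger(QueryLogContext.class);

  private final long start = System.currentTimeMillis();

  public QueryLogContext(String query) {
    MDC.put("request-id", UUID.randomUUID().toString());
    MDC.put("thread-id", Thread.currentThread().getName());
    LOG.info("Starting query \"{}\"", query);
  }

  public void startingAgainst(String regionServer) {
    LOG.info("starting against {}", regionServer);
  }

  public void endingIn(long millis) {
    LOG.info("ending in {} (ms)", millis);
  }

  @Override
  public void close() {
    LOG.info("Finished in {} (ms)", System.currentTimeMillis() - start);
    MDC.remove("request-id");
    MDC.remove("thread-id");
  }
}
```

With a log pattern like [%d][%X{request-id}][%X{thread-id}] %m%n the output lines up with the format above, and the threads that never log their "ending in" line tell you which RS's the query is still waiting on.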

@jtaylor-sfdc
Contributor

We should look at the Zipkin-based monitoring that Elliot Clark is doing for HBase here. It would need to aggregate/roll up the costs, but if it did that, it would be a sweet way to monitor perf.

@ghost assigned samarthjain Oct 30, 2013
@jyates
Contributor

jyates commented Oct 31, 2013

I was thinking we could use Hadoop metrics2 to manage the metrics for a given request (both scans and inserts). What you really want in metrics tooling is:

  1. Async collection
  2. Non-blocking updates
  3. Flexible writers
  4. Dropping extra metrics if the queue becomes too full

metrics2 gives us all of that. Also, there are good reference implementations, for instance, the hadoop code itself (here and here) as well as the new HBase metrics system.
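
To make that concrete, here's a rough sketch of what a metrics2 source for per-request stats could look like (the class and metric names are made up for illustration; this isn't Phoenix code):

```java
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Hypothetical metrics2 source: the metrics system fills in the annotated
// mutable metrics when the source is registered. Assumes
// DefaultMetricsSystem.initialize("phoenix") was called at startup.
@Metrics(about = "Phoenix per-request metrics", context = "phoenix")
public class PhoenixRequestMetrics {
  @Metric("bytes sent to the server") MutableCounterLong bytesSent;
  @Metric("time spent scanning") MutableRate scanTime;

  public static PhoenixRequestMetrics register() {
    return DefaultMetricsSystem.instance().register(
        "PhoenixRequestMetrics", "Phoenix per-request metrics",
        new PhoenixRequestMetrics());
  }

  // Called from the instrumented client code paths; updates are cheap and
  // snapshotting/publishing happens asynchronously in the metrics system.
  public void updateScan(long bytes, long millis) {
    bytesSent.incr(bytes);
    scanTime.add(millis);
  }
}
```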

We can then use this to keep stats on phoenix in phoenix tables. By instrumenting the phoenix methods correctly we can gather things like number of bytes sent, method times, region/regionserver response times, etc. You would then publish these metrics to a phoenix sink that, again, writes back to a phoenix table (and possibly updates a local stats cache too).
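
The sink side could then be a plain metrics2 MetricsSink that upserts through the Phoenix JDBC driver; the table name (REQUEST_STATS) and JDBC URL below are placeholders, not anything that exists yet:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.apache.commons.configuration.SubsetConfiguration;
import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;

// Hypothetical sink that writes each metric value back into a Phoenix table
// keyed by (record name, metric name, timestamp).
public class PhoenixTableSink implements MetricsSink {
  private Connection conn;

  @Override
  public void init(SubsetConfiguration conf) {
    try {
      conn = DriverManager.getConnection(
          conf.getString("jdbcUrl", "jdbc:phoenix:localhost"));
    } catch (Exception e) {
      throw new RuntimeException("Failed to connect to Phoenix", e);
    }
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    String sql = "UPSERT INTO REQUEST_STATS (RECORD, METRIC, TS, VAL) VALUES (?, ?, ?, ?)";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      for (AbstractMetric metric : record.metrics()) {
        ps.setString(1, record.name());
        ps.setString(2, metric.name());
        ps.setLong(3, record.timestamp());
        ps.setDouble(4, metric.value().doubleValue());
        ps.execute();
      }
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  @Override
  public void flush() {
    try {
      conn.commit(); // Phoenix buffers upserts until commit
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
}
```

Wiring it up would presumably go through the usual hadoop-metrics2.properties mechanism (something like phoenix.sink.table.class=..., assuming a "phoenix" prefix).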

The only interesting bits are then:

  1. Tracking method calls from the client to the server
  2. Creating a clean abstraction around dynamic variables

The latter is just good engineering. The former can be solved by tagging each method call with a UUID (similar to how Zipkin would track the same request). Stats about the whole call would then all eventually end up in the same phoenix stats table, which is then queryable.

The intelligent bit then becomes updating the stats table in a way that lets you do a rollup later to reconstruct history. Since you know the query id, you can correlate it between the clients and servers. This also gives you perfect timing, since you know the operation order (and you could get smarter when you parallelize things by having "sub" parts that get their own UUID but still correlate to the original request, e.g. UUID 1234 splits into 1234-1, 1234-2, 1234-3).
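
A toy sketch of that id scheme, where parallel sub-parts get derived ids that still roll back up to the original request (the class is purely illustrative):

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical request id that can be split into correlated sub-ids,
// e.g. 1234 -> 1234-1, 1234-2, ... for parallelized chunks of a query.
public class RequestId {
  private final String root; // UUID of the original request
  private final String id;   // root plus "-n" suffixes for sub-parts
  private final AtomicInteger children = new AtomicInteger();

  private RequestId(String root, String id) {
    this.root = root;
    this.id = id;
  }

  public static RequestId newRequest() {
    String uuid = UUID.randomUUID().toString();
    return new RequestId(uuid, uuid);
  }

  /** Derived id for a parallel chunk of this request. */
  public RequestId child() {
    return new RequestId(root, id + "-" + children.incrementAndGet());
  }

  /** Shared root, used to correlate client- and server-side stats rows. */
  public String root() {
    return root;
  }

  @Override
  public String toString() {
    return id;
  }
}
```

On the wire the id could ride along as a scan/mutation attribute (e.g. Scan.setAttribute) so the server side can stamp its stats rows with the same root.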

I started working through some simple, toy examples of using metrics2 for logging (simple and with dynamic method calls). It's nothing fancy and shouldn't be used directly in phoenix, but it might be helpful to someone trying to figure out how the metrics2 stuff all works.

@ghost assigned jyates Nov 1, 2013
@jyates
Contributor

jyates commented Nov 13, 2013

Simple prototype is up at github.com/jyates/phoenix/tree/tracing. It just traces mutations from the client to the server through the indexing path, writes them to the sink (which writes them to a phoenix table), and has a simple reader to rebuild the traces.

See the end-to-end test for a full example.
