Queue Name | Worker Available | Jobs Waiting | Jobs Running
-----------|------------------|--------------|-------------
foo-bar-1  | 10               | 0            | 0
foo-bar-2  | 10               | 0            | 0
foo        | 0                | 10           | 0
Python is not my forte, but I suspect Line 405 of libgearman-server/plugins/queue/redis/queue.cc to be the culprit:
int fmt_str_length= snprintf(fmt_str, sizeof(fmt_str), "%%%d[^-]-%%%d[^-]-%%%ds",
                             int(GEARMAND_QUEUE_GEARMAND_DEFAULT_PREFIX_SIZE),
                             int(GEARMAN_FUNCTION_MAX_SIZE),
                             int(GEARMAN_MAX_UNIQUE_SIZE));
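The scanf-style format built here stops each field at the first hyphen, so a function name that itself contains a hyphen gets cut off. A rough Python equivalent of that parse (the `gear` prefix and the key values are made up for illustration, and the regex ignores the width limits the C format imposes):

```python
import re

# "%N[^-]-%N[^-]-%Ns" splits a persisted key at its first two hyphens:
# prefix, function, unique.
KEY_RE = re.compile(r"^([^-]+)-([^-]+)-(.+)$")

def parse_key(key):
    """Parse a persisted queue key the way queue.cc's sscanf does."""
    m = KEY_RE.match(key)
    return m.groups() if m else None

# A job for function "foo-bar-1" with unique id "0001", stored under a
# hypothetical prefix "gear":
prefix, function, unique = parse_key("gear-foo-bar-1-0001")
print(function)  # truncated at the function's first hyphen: "foo"
print(unique)    # the remainder is absorbed into the unique id: "bar-1-0001"
```

Which is exactly the symptom in the table above: jobs for `foo-bar-1` and `foo-bar-2` come back queued under `foo`.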
It might make sense, now that gearmand utilizes Redis hash keys, to instead add a hash field for function, since we aren't utilizing any sort of key-name pattern matching at the moment.
If key-name pattern matching did become a necessity, I believe it would be a bit cleaner to create a set keyed by Gearman function (with a unique prefix) containing pointers to the hash keys.
To illustrate using the above example (it would be best to use a pipeline if possible):
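One way to picture the proposal is as the list of commands a pipeline would issue per job: the function name lives in a hash field (so restart never has to parse it out of the key), and a per-function set indexes the hash keys. This is only a sketch; the `gear` prefix and the `gear-fn-` index prefix are invented names, not gearmand's actual key scheme:

```python
def enqueue_commands(prefix, function, unique, payload):
    """Return the Redis commands a pipeline would issue for one job."""
    hash_key = f"{prefix}-{function}-{unique}"
    index_key = f"{prefix}-fn-{function}"  # set of hash keys for this function
    return [
        ("HSET", hash_key, "function", function),  # restart reads this field
        ("HSET", hash_key, "payload", payload),
        ("SADD", index_key, hash_key),
    ]

for cmd in enqueue_commands("gear", "foo-bar-1", "0001", "data"):
    print(cmd)
```

On restart, gearmand would HGET the `function` field instead of splitting the key on hyphens, and SMEMBERS on the index set would replace any key-pattern scan.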
Hahah, Python is definitely not involved. Burning the candle at both ends, bringing up a bunch of new Gearman workers for text and image classification (Python) to handle a large traffic spike. Which is, of course, exactly when the gearman job servers needed a kick and came back up with all jobs in truncated buckets. But hey, at least the jobs persisted in an easily accessible format, so the payloads could be recreated 👍
Just as a general rule, I'd suggest having your workers store the data in Redis rather than relying on gearmand to do it. Using background jobs and a queue plugin is basically the least scalable way to use gearmand.
I understand that the queue plugins are there, so they seem attractive, but you're far better off having workers store and recover in-flight jobs in a place that scales with the workers.
I wrote an example worker in python to do just that:
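That example isn't captured in this thread; as a rough, hypothetical sketch of the pattern it describes (the worker checkpoints each in-flight payload to its own store and replays it after a crash, with a plain dict standing in for Redis):

```python
import json
import uuid

class DurableWorker:
    """Hypothetical sketch: the worker, not gearmand, owns job persistence.
    `store` is any dict-like mapping; in production it could be a thin
    wrapper over redis-py hset/hget/hdel calls."""

    def __init__(self, store):
        self.store = store

    def start(self, function, payload):
        # Checkpoint the payload *before* doing the work.
        job_id = str(uuid.uuid4())
        self.store[job_id] = json.dumps({"function": function, "payload": payload})
        return job_id

    def finish(self, job_id):
        # Drop the checkpoint only once the work has succeeded.
        del self.store[job_id]

    def recover(self):
        # After a restart, anything still in the store was in flight:
        # resubmit it with the full function name intact.
        return [json.loads(v) for v in self.store.values()]
```

Because the worker records the full function name itself, a job server restart can never hand the job back under a truncated queue name.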
When gearmand is restarted and needs to read the persistent queue back into memory, it drops everything after the first hyphen in the function name.

Debian 9
gearmand 1.1.18 (from source)
redis-server 3.2.6
hiredis 0.13.3

For example, jobs submitted to foo-bar-1 and foo-bar-2 show up in gearman_top queued under the truncated name foo, as in the table above.