If MapReduce on a bucket is cancelled (by a timeout or a dropped client connection), the listkeys operation feeding objects from the bucket is not cancelled. Instead, it generates a large number of error messages:
Pipe worker startup failed:fitting was gone before startup
Additionally, these still-running listkeys jobs continue to tie up resources that could otherwise run other MapReduce queries. If enough listkeys jobs end up in this state, subsequent MapReduce queries are unable to complete and will time out. In the worst case, every MapReduce query submitted times out until the running listkeys jobs eventually terminate and the cluster recovers.
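For illustration, here is a minimal sketch of how this situation can be triggered with the riak-erlang-client, assuming the standard riakc_pb_socket API; the host, port, bucket name, and timeout value are placeholder assumptions, not taken from the original report:

    %% Connect with the riak-erlang-client (host/port are assumptions).
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    %% Count every object in the bucket with a single reduce phase.
    Query = [{reduce, {modfun, riak_kv_mapreduce, reduce_count_inputs}, none, true}],
    %% 100 ms is far too short for a full-bucket listkeys on a large bucket,
    %% so this call fails with a timeout error -- but the listkeys job on
    %% the cluster keeps running and the "Pipe worker startup failed"
    %% messages start filling the logs.
    Result = riakc_pb_socket:mapred(Pid, <<"things">>, Query, 100).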
How can I prevent the local Erlang client from dropping the connection on a long-running pipe? In riak_mongo, when running the count test (which leads to a mapred over all keys in a bucket), I only get as many keys counted as made it through before the pipe died. I fully understand that asking for all keys in the bucket is a bad idea for many reasons, but that's how the corresponding MongoDB JS test counts the items:
db.things.find().toArray().length
which translates to a count(*) over the KV store. I can't use the count approximation, because the corresponding test simply compares the in and out counts in an assertion. So I need to mapred over, and thus count, all the keys, which is a potential long-runner.
Any idea how to increase the local client timeout?
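As a possible workaround, here is a minimal sketch of passing an explicit, larger timeout to the local client's mapred call. It assumes the Riak 1.x parameterized-module client returned by riak:local_client/0 and a mapred/3 that takes the timeout in milliseconds as its last argument (check the riak_client module in your Riak release); the module and function names here are hypothetical:

    -module(riak_mongo_count_sketch).  %% hypothetical module name
    -export([count_bucket/1]).

    %% Count all keys in Bucket with an explicit mapred timeout instead of
    %% relying on the default.
    count_bucket(Bucket) ->
        {ok, Client} = riak:local_client(),
        Query = [{reduce, {modfun, riak_kv_mapreduce, reduce_count_inputs}, none, true}],
        TimeoutMs = 600000,  %% ten minutes for a potentially long listkeys
        %% Riak 1.x parameterized-module call style; adjust for your version.
        Client:mapred(Bucket, Query, TimeoutMs).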
This should be fixed as of #408. A few of those messages may still print as in-flight messages are consumed from mailboxes, but it should be a small, fixed number (#vnodes or #vnodes/N-value) instead of the endless stream seen before.