Hello everyone, happy holidays! I am trying to add test coverage for the std::move operator overloads for async workers (https://github.com/nodejs/node-addon-api/blob/main/napi-inl.h#L4811), and ran into some problems.
#include <napi.h>
#include <cassert>

using namespace Napi;

class AsyncTestMoveWorker : public AsyncWorker {
 public:
  static void DoWork(const CallbackInfo& info) {
    // workerA: suppress destruct, so its Destroy() method should not be called.
    AsyncTestMoveWorker* workerA = new AsyncTestMoveWorker(info[0].As<Function>(), 1);
    workerA->SuppressDestruct();
    AsyncTestMoveWorker* workerB = new AsyncTestMoveWorker(info[1].As<Function>(), 2);
    *workerB = std::move(*workerA);
    // Expects that after the move operation, workerB will invoke workerA's callback,
    // and the Destroy() method should also not be called.
    workerB->Queue();
  }

  AsyncTestMoveWorker(const Function& cb, int unique_id)
      : AsyncWorker(cb), _id(unique_id) {}

 protected:
  void Execute() override {}
  void Destroy() override {
    // Destroy() must never run in this test.
    assert(false && "Destroy() should not be called");
  }

 private:
  int _id;
};
A crash was seen on this line (https://github.com/nodejs/node-addon-api/blob/main/napi-inl.h#L4905), where the error was napi_invalid_arg because _env was nullptr. It appears that OnWorkComplete was invoked with workerA as the "this" pointer, but workerA's _env had already been cleared by the move assignment operator overload.
From what I understand, we create an AsyncWorker via a call to napi_create_async_work, where one of the parameters is a user-provided data context that gets "passed back into the execute and complete functions" per n-api.md. We also set this parameter to "this" in the AsyncWorker ctors. My hypothesis is that even though we have moved _env and _work from workerA to workerB, when we queue up the _work it still passes workerA's address to the execute and oncomplete callbacks. However, by that point workerA's _env has already been set to nullptr, causing the crash.
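To make that concrete, here is a simplified paraphrase of how I believe the constructor and the completion trampoline interact inside napi-inl.h. This is a sketch from memory, not the library's exact code (member names, signatures, and the resource handling are abbreviated); the point it illustrates is that `this` is baked into the napi_async_work as the data context at construction time, so the trampoline always comes back with the original object's address, regardless of any later move:

// Simplified sketch (paraphrased, not a verbatim copy of napi-inl.h).
// The constructor registers `this` as the data pointer for the async work.
AsyncWorker::AsyncWorker(const Function& cb, const char* resource_name)
    : _env(cb.Env()), _suppress_destruct(false) {
  _callback.Reset(cb, 1);

  napi_value resource_id;
  napi_create_string_latin1(_env, resource_name, NAPI_AUTO_LENGTH, &resource_id);

  // `this` (workerA in the test) is captured as the napi_async_work data
  // context. Nothing updates it when the worker object is later moved.
  napi_create_async_work(_env, nullptr, resource_id,
                         OnAsyncWorkExecute, OnAsyncWorkComplete,
                         this, &_work);
}

// The static trampoline casts the data context back to an AsyncWorker*.
// After the move, `data` is still workerA, whose _env is now nullptr, so the
// napi_* calls made during completion fail with napi_invalid_arg.
void AsyncWorker::OnAsyncWorkComplete(napi_env env, napi_status status, void* data) {
  AsyncWorker* self = static_cast<AsyncWorker*>(data);
  self->OnWorkComplete(env, status);
}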
Am I using the move operators incorrectly with AsyncWorkers, or could this be a bug instead? Thank you!
Thanks for looking into this! I did some testing locally to reproduce it and yes, I see the crash too. Your explanation does make sense: we call napi_create_async_work when constructing workerA and pass this as the data context, and workerA's instance members are no longer valid after being moved into workerB.
Looking at the move operator: I'm not too sure about the proper fix. Do we destroy the work on other and re-create it on this? We would need to hold on to the resource and resource_name in order to accomplish this re-creation. Even with that approach, I'm not sure what would happen if you tried to move an AsyncWorker whose work has already started executing. There are a lot of caveats here... it almost makes me think we shouldn't have implemented the move in the first place 🤷
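For discussion's sake, here is a rough sketch of what the "destroy on other, re-create on this" idea might look like. It is hypothetical, not a proposed patch: it assumes the class keeps the resource name around (it doesn't today, as noted above, so the hard-coded "generic" below is an assumption), and it is only plausible before the work has been queued, since deleting in-flight work is not allowed:

// Hypothetical sketch only. Assumes the worker has not been queued yet and
// that a resource name (hard-coded here as "generic") is available.
AsyncWorker& AsyncWorker::operator=(AsyncWorker&& other) {
  // Tear down the napi_async_work that is bound to `other`'s address.
  napi_delete_async_work(other._env, other._work);
  other._work = nullptr;

  _env = other._env;
  other._env = nullptr;
  _receiver = std::move(other._receiver);
  _callback = std::move(other._callback);
  _suppress_destruct = other._suppress_destruct;

  // Re-create the work so the data context points at `this` instead of `other`.
  napi_value resource_id;
  napi_create_string_latin1(_env, "generic", NAPI_AUTO_LENGTH, &resource_id);
  napi_create_async_work(_env, nullptr, resource_id,
                         OnAsyncWorkExecute, OnAsyncWorkComplete,
                         this, &_work);
  return *this;
}

Even then it only covers the not-yet-queued case; moving a worker after Queue() would still race with the thread pool, which is probably the bigger caveat mentioned above.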
Let's see what the others say next week. Enjoy your NYE! 🍾🎇🎆🎆🎇🍾