Error: Failed to fetch | TypeError: Cannot read properties of undefined (reading 'status') #26
Comments
Hello! Could you give more details about what happened? Code, error path, etc.?
I believe it's due to attempting to send multiple messages while one is already being generated. I resolved the issue by waiting until one message completes before sending the next one. Will let you know if I find more issues. Thanks for making this!
I decided to use multiple instances of node_characterai and run them separately, so that if one instance is already generating a message, another instance can handle the next request. (It seems to have worked, and the bot can now generate multiple responses at the same time.) I have only hit the issue once since then, and I'm still trying to find more info about it. The error also seems to occur in the chat.js file, on line 65.
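The multiple-instances approach above can be sketched as a small client pool. This is a minimal sketch, not the node_characterai API: `sendMessage` and the pool itself are placeholder names, and each real client would be a separately authenticated node_characterai instance with its own Chromium process.

```javascript
// Hypothetical pool of clients, each assumed to expose an async
// sendMessage(). We track a `busy` flag ourselves so a new request
// is routed to an idle client instead of one mid-generation.
class ClientPool {
  constructor(clients) {
    this.clients = clients.map((client) => ({ client, busy: false }));
  }

  // Pick the first idle client; fall back to the first one if all are busy.
  acquire() {
    return this.clients.find((entry) => !entry.busy) || this.clients[0];
  }

  async send(message) {
    const entry = this.acquire();
    entry.busy = true;
    try {
      return await entry.client.sendMessage(message);
    } finally {
      entry.busy = false;
    }
  }
}

module.exports = { ClientPool };
```

The trade-off, as discussed below, is that each underlying client opens its own Chromium instance, so the pool size is bounded by available memory.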
If `request` is undefined, it means something went wrong with the puppeteer evaluation: it should return an object, and `status` is a procedurally generated function when `/streaming/` is called.
I can confirm the same thing happening on my end. I have not tried the similar fix of running multiple instances of node_characterai, but I can confirm this happens when multiple fetches run at the same time. Another solution would be to queue each request and wait for each one to finish. Edit: now that I think about it, it might also affect other functions, such as fetching and searching characters.
I did try queueing each request and waiting for it to finish, and it worked. However, with multiple users conversing at the same time, the chat ended up falling really far behind (a response takes anywhere from about 4 to 10 seconds to generate) and could not respond to new messages quickly enough.
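The queueing fix mentioned above can be implemented as a promise chain that serializes tasks, so only one fetch is in flight at a time. This is a generic sketch; the task you enqueue would be whatever node_characterai call triggers the puppeteer fetch.

```javascript
// Serialize async tasks: each enqueued task starts only after every
// previously enqueued task has settled.
class RequestQueue {
  constructor() {
    this.tail = Promise.resolve();
  }

  // Returns a promise for this task's own result, while internally
  // chaining the task after everything queued before it.
  enqueue(task) {
    const result = this.tail.then(() => task());
    // Swallow rejections on the chain so one failure doesn't block the queue.
    this.tail = result.catch(() => {});
    return result;
  }
}

module.exports = { RequestQueue };
```

As noted above, the downside is latency: with 4-10 seconds per generation, a busy chat falls behind quickly, since every message waits for all messages queued before it.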
Yeah, queueing will probably take a long time depending on how many users are using it; it's just an alternative to creating a separate client instance for each request. The problem with multiple puppeteer instances is that each one opens its own Chromium process, which may not be feasible depending on how you are hosting this, so there are trade-offs to each solution. I'm looking into it to see if I can find a better fix. I really think they should just remove the Cloudflare blocking.
Another solution I can think of at the moment is spinning up multiple puppeteer instances per request call without forcing the developer to create a new client. This still opens a Chromium process and multiple puppeteer instances across requests, but it's probably better for the end developer.
I don't know exactly how the current setup works, but one idea: add some sort of "request_id" header to requests. I use this to make sure that whatever I request is what I intercept. It basically fixes issues with eval-fetching and ensures we only listen to the requests we want. Of course, I don't know what might break with this change; it could be set up the current way for a very specific reason. This would let you use one single style of requesting that should hopefully fix whatever this issue is.
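The request_id idea above boils down to correlating each response with the request that produced it. This is a minimal sketch of just that correlation logic, with hypothetical names throughout; the puppeteer wiring (attaching the id as a header and hooking response interception) is omitted.

```javascript
// Tag each outgoing request with a unique id and remember its pending
// promise; when a response arrives, resolve only the matching entry
// and ignore responses we did not initiate.
let nextId = 0;
const pending = new Map();

// sendFn is a placeholder for whatever actually dispatches the request;
// the real code would attach the id as a header, e.g. { "x-request-id": id }.
function sendTagged(sendFn, payload) {
  const id = String(nextId++);
  return new Promise((resolve, reject) => {
    pending.set(id, { resolve, reject });
    sendFn({ ...payload, requestId: id });
  });
}

// Called from the interception hook (e.g. puppeteer's page.on("response", ...)).
function onResponse(response) {
  const entry = pending.get(response.requestId);
  if (!entry) return; // Not one of ours; ignore it.
  pending.delete(response.requestId);
  entry.resolve(response.body);
}

module.exports = { sendTagged, onResponse };
```

With this in place, two concurrent generations can no longer pick up each other's responses, which is the failure mode behind the undefined `status` error.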
Hello!
You can close this issue now. It was definitely related to a caching problem and has been resolved as of v1.0.8.