
multi models and multi agent support for rasa in single deployment #400

Closed
pvagare opened this issue May 29, 2017 · 16 comments

Comments

@pvagare

pvagare commented May 29, 2017

As we know, in api.ai there is a separate agent and a separate training module for each agent.
Is there a similar mechanism available in Rasa?

Please let me know as early as possible. Thanks!

@pvagare pvagare changed the title multi module and multi agent support for rasa in single deployment multi models and multi agent support for rasa in single deployment May 29, 2017
@pvagare
Author

pvagare commented May 29, 2017

Sorry for this issue. I hadn't gone through all of the documentation.
I found the solution: https://rasa-nlu.readthedocs.io/en/stable/http.html#section-http-config
I am closing the issue.
Thank you.

@pvagare pvagare closed this as completed May 29, 2017
@tmbo
Member

tmbo commented May 29, 2017

No worries 👍

@devashishmamgain

@tmbo what's the latest docs link for achieving multiple-agent support on the same machine?

@wrathagom
Contributor

Assuming by latest you mean what is on Master: https://nlu.rasa.com/master/http.html#serving-multiple-apps

@devashishmamgain

Thanks @wrathagom. Is there anything equivalent for Rasa Core?

@wrathagom
Contributor

Not to my knowledge, but I have a lot less experience with Core than I do NLU. Might be worth asking on the Rasa Core Gitter.

@devashishmamgain

I tried Gitter some time back but didn't get any response.
Is there any solution for the following case?

  • an FAQ bot for multiple customers
  • each customer has their own set of question-answer pairs
  • we can achieve this by running multiple servers, but we are looking for a way to run a single server on one port that answers each customer's requests from that customer's data only

@tmbo
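The per-customer routing idea described above can be sketched as a simple lookup from customer id to model archive. This is only an illustrative sketch; the customer ids and model paths below are hypothetical, not from any real deployment:

```python
# Hypothetical sketch: route each customer's request to that customer's
# trained model. Ids and paths are made up for illustration.
CUSTOMER_MODELS = {
    "customer_a": "./models/customer_a.tar.gz",
    "customer_b": "./models/customer_b.tar.gz",
}


def model_for(customer_id):
    """Return the model path for a customer, failing loudly on unknown ids."""
    try:
        return CUSTOMER_MODELS[customer_id]
    except KeyError:
        raise ValueError(f"unknown customer: {customer_id}")
```

The server would then load (or look up) the model returned by `model_for` before handling the message, so one port serves every customer while each answer comes from that customer's data only.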

@vinayver198

@devashishmamgain
Hi,

Have you achieved the above task?
If yes, can you please explain how you achieved this scenario?

Thanks,

@streamride

@vinayver198
Hi, did you figure out how to do it?

@gambhirpulkit

@devashishmamgain Hey Devashish. Did you find a solution for this?

@devashishmamgain

@vinayver198 @gambhirpulkit no, couldn't find a reliable and scalable solution.

@avinash9008

@pvagare ---> multi models and multi agent support for rasa in single deployment

Sorry for this issue. I haven't gone through all documentation.
I got the solution : https://rasa-nlu.readthedocs.io/en/stable/http.html#section-http-config
I am closing issue.

Hi @pvagare, I am also facing the same issue. Can you please share the solution? I am not able to open the link you mentioned above.

@vkrntgtm

vkrntgtm commented Jan 5, 2021

Does anyone know whether running multiple agents or multiple projects on a single Rasa server instance is supported in Rasa v2.1.2+ or v2.2.x?

@ftarlaci

I have the same question as @vkrntgtm above.

@taimoorkhokhar

taimoorkhokhar commented Jan 22, 2021

If anybody is still stuck on getting this to work: I found a solution that seems to work for serving multiple bots from a single server using a Flask application.

```python
import asyncio
import json

from flask import Flask, abort, request
from flask_cors import CORS, cross_origin
from rasa.core.agent import Agent
from rasa.utils.endpoints import EndpointConfig

app = Flask(__name__)
cors = CORS(app, resources={r"/*": {"origins": "*"}})

action_endpoint = "http://localhost:5055/webhook"

# One trained model archive per agent/organisation
agents_org = {
    "agent_one": "./models/20200728-131120.tar.gz",
    "agent_two": "./models/20200728-140043.tar.gz",
    "agent_three": "./models/20200728-140043.tar.gz",
    "agent_four": "./models/20200728-140043.tar.gz",
}


def agent_get(orgid):
    # Load the model that belongs to the requested organisation
    return Agent.load(agents_org[orgid],
                      action_endpoint=EndpointConfig(action_endpoint))


async def process(agent, msg, sender):
    output = await agent.handle_text(msg, sender_id=sender)
    print(output)
    return output


@app.route("/message", methods=["POST"])
@cross_origin(origin="*")
def new_message():
    if not request.json:
        abort(400)
    orgid = request.args.get("orgId")
    current_agent = agent_get(orgid)
    user = request.json["sender"]
    message = request.json["message"]
    response = asyncio.run(process(current_agent, message, user))
    return json.dumps(response)


if __name__ == "__main__":
    app.run(host="localhost", port=5000, debug=True, use_reloader=False)
```

payload:

```json
{
  "message": "Hi",
  "sender": "user",
  "agent": "agent_one"
}
```

response:

```json
[{
  "recipient_id": "default",
  "text": "Hi, I am your virtual assistant\n\n* How can I help you?"
}]
```

cheers!
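One caveat with the snippet above is that `Agent.load` runs on every incoming request, which is slow and discards any in-memory state. A common workaround is to cache one loaded agent per organisation and reuse it across requests. This is only a sketch, not tested against any specific Rasa version; `loader` below is a stand-in for a call like `Agent.load(model_path, ...)`:

```python
# Sketch: cache one loaded agent per organisation so the slow model load
# runs once, not on every request. `loader` is a stand-in for Agent.load.
_agent_cache = {}


def get_agent(orgid, loader):
    """Return the cached agent for orgid, loading it on first use."""
    if orgid not in _agent_cache:
        _agent_cache[orgid] = loader(orgid)
    return _agent_cache[orgid]
```

With this pattern, `agent_get(orgid)` in the Flask app would delegate to `get_agent`, so each organisation's agent (and whatever tracker state it keeps in memory) survives between requests instead of being rebuilt per message.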

@Mushahid2521

@taimoorkhokhar Thanks for the solution. I am wondering: can it retain the state for each bot? As far as I can see, the agent is initialized every time a message comes in. So can it be used for form filling when multiple users are chatting at the same time?
