Troubleshooting Connection Issues When Setting Up an OpenSearch Cluster Across Multiple Ubuntu Virtual Machines
Describe the bug
Troubleshooting Connection Issues When Setting Up an OpenSearch Cluster Across Multiple Ubuntu Virtual Machines
Related component
Cluster Manager
To Reproduce
I am trying to set up a two-node OpenSearch cluster on two Ubuntu 22.04 virtual machine instances with the following IPs:
os01: 10.34.132.37
os02: 10.34.132.230
I have pinged and confirmed that they can communicate with each other.
On each instance, I install OpenSearch using Docker (specifically Docker Compose) as follows:
Instance IP 10.34.132.37:
```yaml
version: '3.8'
services:
  os01:
    restart: always
    image: opensearchproject/opensearch:2.15.0
    environment:
      - cluster.name=opensearch-cluster
      - node.name=os01
      - discovery.seed_hosts=10.34.132.37,10.34.132.230
      - cluster.initial_master_nodes=10.34.132.37,10.34.132.230
      - "OPENSEARCH_JAVA_OPTS=-Xms2g -Xmx2g"
      - bootstrap.memory_lock=true
      - network.host=0.0.0.0
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=a!Str0ng686#@!
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      # - ./custom-opensearch.yml:/usr/share/opensearch/config/opensearch.yml
      - os-data1:/usr/share/opensearch/data
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "1"
    ports:
      - 9200:9200
      - 9600:9600
      - 9300:9300
    networks:
      - opensearch-network
  kibana:
    restart: always
    image: opensearchproject/opensearch-dashboards:2.15.0
    environment:
      OPENSEARCH_HOSTS: '["https://10.34.132.37:9200", "https://10.34.132.230:9200"]'
      OPENSEARCH_USERNAME: "admin"
      OPENSEARCH_PASSWORD: "a!Str0ng686#@!"
      server.host: "0.0.0.0"
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "1"
    depends_on:
      - os01
    ports:
      - 5601:5601
    networks:
      - opensearch-network
volumes:
  os-data1:
networks:
  opensearch-network:
    driver: bridge
```
Instance IP 10.34.132.230:
```yaml
version: '3.8'
services:
  os02:
    restart: always
    image: opensearchproject/opensearch:2.15.0
    environment:
      - cluster.name=opensearch-cluster
      - node.name=os02
      - discovery.seed_hosts=10.34.132.37,10.34.132.230
      - cluster.initial_master_nodes=10.34.132.37,10.34.132.230
      - "OPENSEARCH_JAVA_OPTS=-Xms2g -Xmx2g"
      - bootstrap.memory_lock=true
      - network.host=0.0.0.0
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=a!Str0ng686#@!
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      # - ./custom-opensearch.yml:/usr/share/opensearch/config/opensearch.yml
      - os-data1:/usr/share/opensearch/data
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "1"
    ports:
      - 9200:9200
      - 9600:9600
      - 9300:9300
    networks:
      - opensearch-network
volumes:
  os-data1:
networks:
  opensearch-network:
    driver: bridge
```
However, when I run it, it shows an error like this:
I have also tried customizing the settings in opensearch.yml like this:
```yaml
# opensearch.yml on the first node
cluster.name: test-cluster
node.name: node-1
node.roles: [master, data, ingest]
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts:
cluster.initial_master_nodes:
bootstrap.memory_lock: true
plugins.security.disabled: true

# opensearch.yml on the second node
cluster.name: test-cluster
node.name: node-2
node.roles: [master, data, ingest]
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts:
cluster.initial_master_nodes:
bootstrap.memory_lock: true
plugins.security.disabled: true
```
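To be explicit about what I think the two empty lists above should contain, this is my guess, mirroring the IPs and node names from the compose files; I have not confirmed that these exact values are correct:

```yaml
# Assumed values only -- mirroring the compose files above, not verified
cluster.name: test-cluster
node.name: node-1                       # node-2 on the other VM
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts:
  - 10.34.132.37:9300
  - 10.34.132.230:9300
cluster.initial_master_nodes:           # node names rather than IPs, since node.name is set
  - node-1
  - node-2
bootstrap.memory_lock: true
plugins.security.disabled: true
```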
Can you help me with this?
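One thing I suspect, but have not verified: because each container runs on its own per-VM Docker bridge network, the nodes may be advertising their container-internal IPs on the transport port (9300), so they cannot reach each other even though the VMs themselves can. If that is the case, I assume each node would also need to publish the VM's address, roughly like this for os01 (os02 would use 10.34.132.230):

```yaml
# Extra environment entries for the os01 service -- my assumption, not verified
environment:
  - network.publish_host=10.34.132.37        # advertise the VM IP instead of the container IP
  - cluster.initial_master_nodes=os01,os02   # node names matching node.name, rather than IPs
```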
Expected behavior
I expect the two nodes to discover each other and form a single OpenSearch cluster.
Additional Details
No response