hostname_verification not using SAN DNS entry #2054
Comments
@kkensy Thank you for providing an example configuration. I noticed that the configuration works when all instances of …
If I remove all the …
@kkensy Thanks for filing this issue, this looks like a defect in our support for hostname verification.
The problem persists in version 2.4.
Hi!
The secret definition:
And part of the statefulset:
Here are the envs to mount each cert/key pair to a specific pod and announce all the discovery node hostnames:
As you can see, I set each node's hostname specifically by providing its name, like this:
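For illustration, a minimal sketch of that kind of per-pod wiring (the pod and service names below are placeholders, not the ones from this setup, and it assumes the OpenSearch image maps dotted environment variables to settings, as it does for `discovery.type`):

```yml
# Sketch only: hardcode, per pod, the hostname the node announces so that it
# matches a DNS SAN entry in that pod's transport certificate.
env:
  - name: node.name
    value: "opensearch-data-0"
  - name: network.publish_host
    value: "opensearch-data-0.opensearch-headless.default.svc.cluster.local"
  - name: discovery.seed_hosts
    value: "opensearch-data-0.opensearch-headless.default.svc.cluster.local,opensearch-data-1.opensearch-headless.default.svc.cluster.local"
```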
If I set …
Hello, looking at security/src/main/java/org/opensearch/security/ssl/transport/SecuritySSLNettyTransport.java, lines 174 to 201 (commit 6ace852):
Maybe this is the expected behaviour and there must be a reverse DNS set on this address.
@yann-soubeyrand I'd recommend enabling debug logging and seeing what is being recorded in the log statement. Here is a snippet from another issue showing how this kind of logging looks, or just enable debug logging at the root logger level.
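For reference, a minimal sketch of turning that logging up via opensearch.yml (the logger name below is an assumption about which package emits the statement; logger levels can also be changed at runtime through the cluster settings API):

```yml
# Sketch only: raise the log level for the security plugin's SSL transport code.
logger.org.opensearch.security.ssl.transport: debug
```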
From #1689:
@peternied there’s a mix of IP addresses and hostnames for the same hosts (I modified the real domains and IP addresses):
@yann-soubeyrand That's odd, it looks like sometimes it's resolving and other times it is not. This is the class from OpenJDK14 where the name resolution is being performed, if I am tracking the stack correctly: https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/net/InetAddress.java#L815-L860 Could there be some …
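One hypothetical source of that kind of mixed resolution in Kubernetes is static host entries injected into the pods, for example via hostAliases in the pod template (illustrative only, not something confirmed in this thread):

```yml
# Illustrative only: entries like these land in the pod's /etc/hosts, so peers
# listed here resolve to a name while any peer missing from the list is only
# ever seen by its raw IP address.
spec:
  hostAliases:
    - ip: "10.0.0.11"
      hostnames:
        - "opensearch-data-0.example.com"
```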
Hello @cwperks, here are the logs.
I am having the same issue with the version 2.7 helm chart deployment. I have created all the needed certificates with SANs, but the issue still appears. At first it was asking me to add the headless service name; I added it along with the normal service name, but then it started checking the pod IPs where it should not. I added the pod names of the statefulset and it is still reporting an issue on the pod names, with certificate unknown. Any updates on this issue?
nodes_dn:
- 'CN=test.com'
I also tried to revert hostname_verification to false, but I am still getting the following error: …
Thank you.
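For reference, the transport-layer checks involved here are controlled by these opensearch.yml settings, shown with the values one would use to switch the checks off while debugging (a sketch, not a recommendation):

```yml
plugins.security.ssl.transport.enforce_hostname_verification: false
# resolve_hostname is only consulted when hostname verification is enabled; it
# controls whether the peer's address is resolved via DNS before matching it
# against the certificate.
plugins.security.ssl.transport.resolve_hostname: false
```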
I also tried to add the self-signed certificates with an init container using an emptyDir volume, mounting the emptyDir at /certs inside the OpenSearch container, and added the following inside the helm chart values.yaml, but the issue still exists and Java is still showing the error …
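For comparison, a hedged sketch of wiring custom certificates through the chart's values.yaml (key names follow the upstream opensearch Helm chart; the secret name, paths, and file names are placeholders):

```yml
# Sketch only: mount a secret with the node certificates and point the
# security plugin at the mounted files.
secretMounts:
  - name: transport-certs
    secretName: opensearch-transport-certs
    path: /usr/share/opensearch/config/certs
config:
  opensearch.yml: |
    plugins.security.ssl.transport.pemcert_filepath: certs/node.pem
    plugins.security.ssl.transport.pemkey_filepath: certs/node-key.pem
    plugins.security.ssl.transport.pemtrustedcas_filepath: certs/root-ca.pem
    plugins.security.ssl.transport.enforce_hostname_verification: true
```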
The problem still happens in version v2.7, even with hostname verification disabled. Do you have any updates?
We have the same problem. I'm fairly new to helm, but figured if it's possible for the …
What is the bug?
After successfully replacing the demo certificates with my own, I am having a problem enabling plugins.security.ssl.transport.enforce_hostname_verification. I'm following the documentation:
https://opensearch.org/docs/latest/security-plugin/configuration/generate-certificates/
https://opensearch.org/docs/latest/opensearch/install/docker-security/
The X509v3 Subject Alternative Name entry and the plugins.security.nodes_dn property are set accordingly. After setting plugins.security.ssl.transport.enforce_hostname_verification: true, the following errors occur: …
I have to insist on the DNS SAN record and cannot add an IP address SAN record to my certificates.
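For context, a minimal sketch of the intended configuration, with placeholder hostnames and DN values (nothing below is taken from this issue's actual certificates):

```yml
# opensearch.yml sketch — placeholder values throughout.
plugins.security.ssl.transport.enforce_hostname_verification: true
plugins.security.nodes_dn:
  - 'CN=node-1.example.com,OU=Ops,O=Example Com,C=DE'
  - 'CN=node-2.example.com,OU=Ops,O=Example Com,C=DE'
# Each node certificate must carry a DNS SAN matching the name other nodes use
# to reach it, e.g.:
#   X509v3 Subject Alternative Name: DNS:node-1.example.com
```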
How can one reproduce the bug?
I have prepared an example with docker. Steps to reproduce the behavior:
What is the expected behavior?
plugins.security.ssl.transport.enforce_hostname_verification enabled, cluster up with all nodes, no related errors in the logs.