
Impossible to watch a kubernetes resource #496

Closed
ghost opened this issue Aug 18, 2016 · 9 comments
@ghost

ghost commented Aug 18, 2016

Hello,

I encounter an issue when I try to watch a Kubernetes resource:

2016-08-18 14:11:02.270 [OkHttp http://127.0.0.1:8001/...] ERROR i.f.k.c.d.i.WatchConnectionManager$1 - Exec Failure: HTTP:200. Message:
java.net.ProtocolException: Expected HTTP 101 response but was '200 OK'
        at okhttp3.ws.WebSocketCall.createWebSocket(WebSocketCall.java:122) ~[com.squareup.okhttp3.okhttp-ws-3.4.1.jar:na]
        at okhttp3.ws.WebSocketCall.access$000(WebSocketCall.java:41) ~[com.squareup.okhttp3.okhttp-ws-3.4.1.jar:na]
        at okhttp3.ws.WebSocketCall$1.onResponse(WebSocketCall.java:97) ~[com.squareup.okhttp3.okhttp-ws-3.4.1.jar:na]
        at okhttp3.RealCall$AsyncCall.execute(RealCall.java:126) [com.squareup.okhttp3.okhttp-3.4.1.jar:na]
        at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) [com.squareup.okhttp3.okhttp-3.4.1.jar:na]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_102]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_102]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]

When I send a request with curl, the following HTTP headers are returned:

curl https://x/api/v1/pods?watch=true -v  --insecure --cert /exxt --key /xxy
* About to connect() to xxxx port 8888 (#0)
*   Trying xxx...
* Connected to xx (xxx) port 8888 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* NSS: client certificate from file
*       subject: CN=kubecfg-proxy
*       start date: Aug 10 13:16:14 2016 GMT
*       expire date: Aug 08 13:16:14 2026 GMT
*       common name: kubecfg-proxy
*       issuer: CN=xxxxxx@1470834973
* SSL connection using TLS_RSA_WITH_AES_128_CBC_SHA
* Server certificate:
*       subject: CN=server
*       start date: Aug 10 13:16:15 2016 GMT
*       expire date: Aug 08 13:16:15 2026 GMT
*       common name: server
*       issuer: CN=xxxxxx@1470834973
> GET /api/v1/pods?watch=true HTTP/1.1
> User-Agent: curl/7.29.0
> Host: xxxxxxx:8888
> Accept: */*
>
< HTTP/1.1 200 OK
< Transfer-Encoding: chunked
< Date: Thu, 18 Aug 2016 14:23:29 GMT
< Content-Type: text/plain; charset=utf-8
< Transfer-Encoding: chunked

It seems the API server actually returns an HTTP 200 status code. Do you know a solution for this issue?

kubernetes-client version: 1.4.4
kubernetes version: v1.2.0
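As the curl output shows, over plain HTTP the watch endpoint streams newline-delimited JSON events in a chunked 200 response; the stack trace shows the fabric8 client instead attempts a WebSocket upgrade, which is where the 101-vs-200 mismatch arises. A sketch of parsing that newline-delimited event stream (hypothetical sample data, plain JDK, not fabric8 code):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class WatchStreamParser {

  // Extracts the "type" field from each newline-delimited watch event.
  // A real client would JSON-decode each line instead of using a regex.
  static List<String> eventTypes(String stream) throws IOException {
    List<String> types = new ArrayList<>();
    try (BufferedReader reader = new BufferedReader(new StringReader(stream))) {
      String line;
      while ((line = reader.readLine()) != null) {
        types.add(line.replaceAll(".*\"type\":\"([A-Z]+)\".*", "$1"));
      }
    }
    return types;
  }

  public static void main(String[] args) throws IOException {
    // Hypothetical sample of what the API server streams for ?watch=true
    // over plain chunked HTTP (the shape curl receives above).
    String stream =
        "{\"type\":\"ADDED\",\"object\":{\"kind\":\"Pod\"}}\n"
      + "{\"type\":\"MODIFIED\",\"object\":{\"kind\":\"Pod\"}}\n";
    System.out.println(eventTypes(stream)); // [ADDED, MODIFIED]
  }
}
```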

@akroston

akroston commented Nov 1, 2016

I see the same thing. Any word on this?

@jimmidyson
Contributor

Are you using the same version of kubernetes?

@akroston

akroston commented Nov 1, 2016

No, I'm using the latest on GCE, which as of this date is 1.4, and the latest client version (compile 'io.fabric8:kubernetes-client:1.4.17').

@jimmidyson
Contributor

I've just run the following example against Kubernetes 1.4.1 without problems. Can you try it and see if it works for you?

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientException;
import io.fabric8.kubernetes.client.Watch;
import io.fabric8.kubernetes.client.Watcher;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class WatchExample {

  private static final Logger logger = LoggerFactory.getLogger(WatchExample.class);

  public static void main(String[] args) throws InterruptedException {
    final CountDownLatch closeLatch = new CountDownLatch(1);
    Config config = new ConfigBuilder().build();
    KubernetesClient client = new DefaultKubernetesClient(config);
    try {
      try (Watch watch = client.pods().inAnyNamespace().watch(new Watcher<Pod>() {
        @Override
        public void eventReceived(Action action, Pod resource) {
          logger.info("Received {}: {}", action, resource.getMetadata().getResourceVersion());
        }

        @Override
        public void onClose(KubernetesClientException e) {
          logger.debug("Watcher onClose");
          if (e != null) {
            logger.error(e.getMessage(), e);
            closeLatch.countDown();
          }
        }
      })) {
        closeLatch.await(2, TimeUnit.SECONDS);
      } catch (KubernetesClientException | InterruptedException e) {
        logger.error("Could not watch resources", e);
      }
    } catch (Exception e) {
      e.printStackTrace();
      logger.error(e.getMessage(), e);

      Throwable[] suppressed = e.getSuppressed();
      if (suppressed != null) {
        for (Throwable t : suppressed) {
          logger.error(t.getMessage(), t);
        }
      }
    } finally {
      client.close();
    }
  }

}

@akroston

akroston commented Nov 1, 2016

Same error. The only change I made was

Config config = new ConfigBuilder().withMasterUrl("http://127.0.0.1:8001").build();

because I am using kubectl proxy to interact with GCE.

Here are the stack traces:


16:03:37.927 [OkHttp http://127.0.0.1:8001/...] ERROR io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1 - Exec Failure: HTTP:200. Message:
java.net.ProtocolException: Expected HTTP 101 response but was '200 OK'
    at okhttp3.ws.WebSocketCall.createWebSocket(WebSocketCall.java:122)
    at okhttp3.ws.WebSocketCall.access$000(WebSocketCall.java:41)
    at okhttp3.ws.WebSocketCall$1.onResponse(WebSocketCall.java:97)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:126)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
16:03:37.928 [main] ERROR com.weblife.TestKub2 - Could not watch resources
io.fabric8.kubernetes.client.KubernetesClientException: 
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1.onFailure(WatchConnectionManager.java:165)
    at okhttp3.ws.WebSocketCall$1.onResponse(WebSocketCall.java:99)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:126)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Exception in thread "OkHttp Dispatcher" java.lang.NullPointerException
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1.onClose(WatchConnectionManager.java:250)
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1.onFailure(WatchConnectionManager.java:193)
    at okhttp3.ws.WebSocketCall$1.onResponse(WebSocketCall.java:99)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:126)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

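This failure mode (a plain 200 where the client expects a 101 upgrade) can be reproduced without a cluster. A minimal sketch, assuming only the JDK: a hypothetical stand-in for kubectl proxy answers the watch path with ordinary HTTP 200, and a plain HTTP client observes the status a WebSocket handshake would reject:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProxySimulation {

  // Starts a stand-in "proxy" that answers the watch path with plain HTTP 200
  // (no WebSocket upgrade), issues the request, and returns the status code.
  static int watchStatusThroughProxy() throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
    server.createContext("/api/v1/pods", exchange -> {
      byte[] body = "{}".getBytes();
      exchange.getResponseHeaders().add("Content-Type", "text/plain; charset=utf-8");
      exchange.sendResponseHeaders(200, body.length);
      try (OutputStream os = exchange.getResponseBody()) {
        os.write(body);
      }
    });
    server.start();
    try {
      HttpRequest request = HttpRequest.newBuilder()
          .uri(URI.create("http://127.0.0.1:" + server.getAddress().getPort()
              + "/api/v1/pods?watch=true"))
          .build();
      HttpResponse<Void> response = HttpClient.newHttpClient()
          .send(request, HttpResponse.BodyHandlers.discarding());
      return response.statusCode();
    } finally {
      server.stop(0);
    }
  }

  public static void main(String[] args) throws Exception {
    // A WebSocket handshake would reject this response: it is 200, not 101.
    System.out.println(watchStatusThroughProxy()); // 200
  }
}
```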
@jimmidyson
Contributor

Aha, perhaps this is a kubectl proxy bug: it doesn't handle WebSocket upgrades properly.
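For reference, the WebSocket opening handshake (RFC 6455) requires the server to answer 101 with `Upgrade: websocket` and `Connection: Upgrade` headers; anything else, such as the proxy's plain 200, must be rejected, which is what surfaces as the ProtocolException above. A rough sketch of that validation (illustrative only, not okhttp's actual code):

```java
import java.util.Map;

public class HandshakeCheck {

  // Returns null if the handshake response is acceptable to a WebSocket
  // client, otherwise a description of the failure.
  static String validate(int status, Map<String, String> headers) {
    if (status != 101) {
      return "Expected HTTP 101 response but was '" + status + "'";
    }
    if (!"websocket".equalsIgnoreCase(headers.get("Upgrade"))) {
      return "Expected 'Upgrade' header value 'websocket'";
    }
    if (!"Upgrade".equalsIgnoreCase(headers.get("Connection"))) {
      return "Expected 'Connection' header value 'Upgrade'";
    }
    return null; // handshake accepted
  }

  public static void main(String[] args) {
    // What kubectl proxy sends back for the watch request:
    System.out.println(validate(200, Map.of("Content-Type", "text/plain")));
    // What a server that supports the upgrade sends back:
    System.out.println(validate(101,
        Map.of("Upgrade", "websocket", "Connection", "Upgrade"))); // null
  }
}
```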

@jimmidyson
Contributor

@akroston

akroston commented Nov 1, 2016

Thanks for tracking that down!

@stale

stale bot commented Aug 6, 2019

This issue has been automatically marked as stale because it has not had any activity in the last 90 days. It will be closed if no further activity occurs within 7 days. Thank you for your contributions!

@stale stale bot added the status/stale label Aug 6, 2019
@stale stale bot closed this as completed Aug 13, 2019