Create a directory where you would like to work and clone this repository into it.
Note that this is set up to work with the Visual Studio Code Remote Development Extension Pack.
It is assumed that you have the following (a quick way to check the first two is sketched after this list):

- command line access to docker (pointing into the OpenShift cluster), for instance [1]:

  export DOCKER_TLS_VERIFY="1"
  # set to Docker Host on cluster
  export DOCKER_HOST="tcp://192.168.99.100:2376"
  export DOCKER_CERT_PATH="/Users/marc.hildenbrand/.minishift/profiles/oc/certs"

- command line access to the openshift-cli that points into an accessible OpenShift cluster
- an OpenShift project you have 'edit' access to (called remote-debug throughout this example)
- a public route to whichever service you expose [2]
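Something like the following confirms that docker and oc are pointing where you expect (these are standard commands, not part of the repo):

$ docker version            # the Server section should describe the cluster's Docker daemon
$ oc whoami                 # confirms you are logged in
$ oc whoami --show-server   # confirms which cluster the openshift-cli points at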
You can skip this section if you already have a destination project that you’d like to work in; just bear in mind the prerequisites above.
$ oc new-project remote-debug
Now using project "remote-debug" on server "https://192.168.99.100:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby.
We’re going to build a simple Spring Boot application that will be packaged into a Docker container. This container will be associated with a deployment.
Start at the root of the cloned git repo and build the application locally
$ mvn clean package
...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:15 min
[INFO] Finished at: 2019-08-20T12:20:48+00:00
[INFO] Final Memory: 30M/174M
[INFO] ------------------------------------------------------------------------
Test that the application runs locally
$ java -jar target/remote-debug-demo-0.0.1.jar
$ curl localhost:8080
Now that the target is built, we want to build it into a Docker image using our Dockerfile. Notice that the only exposed port is the port that Spring Boot wants to listen on, 8080
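For reference, the Dockerfile at the root of the repo looks roughly like this (reconstructed from the build steps shown below; defer to the file in the repo):

# Run the fat jar produced by the Maven build on a stock JDK image
FROM openjdk:8u151
ENV JAVA_APP_JAR remote-debug-demo-0.0.1.jar
WORKDIR /app/
COPY target/$JAVA_APP_JAR .
# 8080 is the only exposed port, the one Spring Boot listens on
EXPOSE 8080
CMD java -XX:+PrintFlagsFinal -XX:+PrintGCDetails -jar $JAVA_APP_JAR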
$ docker build -t hatmarch/myremotedebug:v1 .
Sending build context to Docker daemon  14.59MB
Step 1/6 : FROM openjdk:8u151
 ---> a30a1e547e6d
Step 2/6 : ENV JAVA_APP_JAR remote-debug-demo-0.0.1.jar
 ---> Running in a3ec4310bf55
 ---> 979fe7924887
Removing intermediate container a3ec4310bf55
Step 3/6 : WORKDIR /app/
 ---> 086d05926279
Removing intermediate container 4d6ae7dafc3a
Step 4/6 : COPY target/$JAVA_APP_JAR .
 ---> 0c2e4b36fdce
Removing intermediate container 83a08807f89e
Step 5/6 : EXPOSE 8080
 ---> Running in 08eea37846a8
 ---> c578def62b1d
Removing intermediate container 08eea37846a8
Step 6/6 : CMD java -XX:+PrintFlagsFinal -XX:+PrintGCDetails -jar $JAVA_APP_JAR
 ---> Running in 2d497de28246
 ---> 38d2afa981db
Removing intermediate container 2d497de28246
Successfully built 38d2afa981db
Check that the image is now available on the cluster's Docker daemon
$ docker images
REPOSITORY               TAG   IMAGE ID       CREATED      SIZE
hatmarch/myremotedebug   v1    38d2afa981db   3 days ago   752MB
Now create a deployment that uses that image
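kubefiles/myremotedebug-deployment.yml is roughly of the following shape; the apiVersion, labels, and replica count here are assumptions, so defer to the file in the repo:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myremotedebug
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myremotedebug
  template:
    metadata:
      labels:
        app: myremotedebug
    spec:
      containers:
      - name: myremotedebug
        image: hatmarch/myremotedebug:v1    # the image built above
        ports:
        - containerPort: 8080               # the Spring Boot port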
$ oc create -f kubefiles/myremotedebug-deployment.yml
deployment.extensions/myremotedebug created
Then create the service to reference this deployment
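Similarly, kubefiles/myremotedebug-service.yml might look something like this (the NodePort type is suggested by the note below; the selector is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: myremotedebug
spec:
  type: NodePort           # reachable via a node port even without a route
  selector:
    app: myremotedebug
  ports:
  - port: 8080
    targetPort: 8080       # the port the container exposes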
$ oc create -f kubefiles/myremotedebug-service.yml
service/myremotedebug created
Note: If you do not have a route to the service then you can connect to the NodePort of the service. If you have admin access then you can:

$ oc create -f kubefiles/myremotedebug-route.yml
route.route.openshift.io/remote-debug created

This will create a route through the Dynamic DNS service nip.io.
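For reference, kubefiles/myremotedebug-route.yml is probably little more than a Route pointing at the service; the fields below are assumptions, so check the file itself:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: remote-debug
spec:
  to:
    kind: Service
    name: myremotedebug
  port:
    targetPort: 8080       # on minishift the host defaults to a nip.io address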
You should now see your app and deployment under the remote-debug project
And be able to access it
$ curl http://myremotedebug-remote-demo.192.168.99.102.nip.io/
Aloha from Spring Boot! 1 on 1563536cbdf2
Let’s pretend you call an endpoint and it causes a crash
$ curl http://myremotedebug-remote-demo.192.168.99.102.nip.io/crash
{"timestamp":1565993738150,"status":500,"error":"Internal Server Error","exception":"java.lang.IllegalAccessError","message":"No message available","path":"/crash"}
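For context, the handler behind /crash (the method we will set a breakpoint in later) is a deliberately failing Spring MVC endpoint along these lines; the class and method names come from the debugger session below, while the exact annotations are assumptions:

package com.hatmarch;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MyRESTController {

    // Deliberately broken endpoint so there is something to break on in the debugger
    @RequestMapping("/crash")
    public String doCrash() {
        throw new IllegalAccessError();
    }
}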
We will want to open a debug port on 5000. Not only is there no publicly accessible route to this port on the container/node, there is no internally accessible port set up for it. Yet.
We don’t always want the debugger to be running in the container, but we also want to keep the container as immutable as possible. What we’ll do instead is pass some environment variables through to the entrypoint of the container.
Take a look at the Dockerfile-Debug file in the root of the repo and notice the following change: the JAVA_OPTIONS environment variable allows us to control whether the java entry point is run with JDWP support.
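The relevant change (also visible in Step 6/6 of the build output below) is the entrypoint, which splices $JAVA_OPTIONS into the java command:

# Dockerfile-Debug: identical to the original Dockerfile apart from the entrypoint.
# When JAVA_OPTIONS is empty the container behaves as before; when it holds a JDWP
# flag, the JVM starts with remote debugging enabled.
CMD java -XX:+PrintFlagsFinal -XX:+PrintGCDetails $JAVA_OPTIONS -jar $JAVA_APP_JAR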
Let’s create a new image based on the Dockerfile-Debug file
$ docker build -t hatmarch/myremotedebug:v2 -f Dockerfile-Debug .
Sending build context to Docker daemon  15.14MB
Step 1/6 : FROM openjdk:8u151
 ---> a30a1e547e6d
Step 2/6 : ENV JAVA_APP_JAR remote-debug-demo-0.0.1.jar
 ---> Using cache
 ---> 979fe7924887
Step 3/6 : WORKDIR /app/
 ---> Using cache
 ---> 086d05926279
Step 4/6 : COPY target/$JAVA_APP_JAR .
 ---> Using cache
 ---> 0c2e4b36fdce
Step 5/6 : EXPOSE 8080
 ---> Using cache
 ---> c578def62b1d
Step 6/6 : CMD java -XX:+PrintFlagsFinal -XX:+PrintGCDetails $JAVA_OPTIONS -jar $JAVA_APP_JAR
 ---> Using cache
 ---> 38d2afa981db
Successfully built 38d2afa981db
Now let’s update our deployment to point to the new image
$ oc set image deployment/myremotedebug myremotedebug=hatmarch/myremotedebug:v2
deployment.extensions/myremotedebug image updated
If you’d like, go back to the "Simulate a Crash" section and prove that the debug port is still not open.
Next, we want to update the environment variables in our deployment to activate remote debugging services. For this, take a look at the contents of the Java_Debug.txt file.
It causes the JVM to listen for debugger connections on port 5000. It is also set up NOT to suspend execution until a debugger is attached. You can change that behaviour if you’d like.
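The file holds a standard JDWP agent option; the exact string is in Java_Debug.txt in the repo, but it is likely along these lines:

-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5000

Here server=y makes the JVM listen for a debugger, suspend=n keeps the application running until one attaches (switch it to y if you want startup to pause), and address=5000 is the port the agent listens on.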
$ oc set env deployment/myremotedebug JAVA_OPTIONS="$(cat Java_Debug.txt)"
deployment.extensions/myremotedebug updated
This should change the deployment and trigger the creation of a new pod. You can check this in the console.
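If you prefer the terminal to the web console, the rollout can also be watched with a standard oc command, for example:

$ oc rollout status deployment/myremotedebug

This blocks until the replacement pod is up and ready.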
Now all that’s left is being able to connect to the pod. For this, we will use port forwarding
Port forwarding works by routing a port on our localhost to a port on a specific pod. First, find the specific pod you want
$ oc get pods
NAME                             READY   STATUS    RESTARTS   AGE
myremotedebug-5679bf775c-gwzpx   1/1     Running   0          6m
Next set up port forwarding to port 5000 (the port the debugger should be listening on) on that pod. Do this from a terminal on your local machine
$ oc port-forward myremotedebug-5679bf775c-gwzpx 32000:5000
Forwarding from 127.0.0.1:32000 -> 5000
Forwarding from [::1]:32000 -> 5000
Now our local port 32000 should be forwarded to port 5000 on pod myremotedebug-5679bf775c-gwzpx. Note that you can also use 5000 as the local port if you like. See info here.
Now, with the port forwarding still active, open a new terminal and start the Java command line debugger.[3] Note that the debugger attaches to our local machine (the machine from which we issued the oc port-forward command)
$ jdb -attach localhost:32000 -sourcepath src/main/java
Set uncaught java.lang.Throwable
Set deferred uncaught java.lang.Throwable
Initializing jdb ...
> stop in com.hatmarch.MyRESTController.doCrash()
Set breakpoint com.hatmarch.MyRESTController.doCrash()
Then hit the service as usual [4]
$ curl http://myremotedebug-remote-demo.192.168.99.102.nip.io/crash
And then, in the debugger terminal, you should see:
Breakpoint hit: "thread=http-nio-8080-exec-3", com.hatmarch.MyRESTController.doCrash(), line=30 bci=0
30            throw new IllegalAccessError();

http-nio-8080-exec-3[1] list
26        }
27
28        @RequestMapping("/crash")
29        public String doCrash() {
30 =>         throw new IllegalAccessError();
31        }
32
33
34    }
Happy Debugging!