logsize and lag are missing from secure (SSL) kafka 10.2 #410
Comments
I added the truststore in the consumer config on KM, but even after that I don't see any offset or consumer information.
I thought (correct me if I am wrong) that KM did not yet support consumer information for SSL-enabled clusters. How did you manage to see those results? Would you mind sharing how you configured your consumer.properties for an SSL-enabled cluster?
Use the latest version; you can now configure the security protocol to use per cluster.
I'd like to get KM to work with my SSL-secured cluster. I'm using the latest 1.3.3.11 version and now see the new Security Protocol dropdown. I'm still not seeing Kafka consumer groups, even after changing conf/consumer.properties. Since I'm also receiving the 'key not found' log error below, perhaps I'm missing a step?
@patelh Nice work, but my issue remains the same. I reviewed your code changes, merged on Aug 1. It seems this tool has always needed a PLAINTEXT port to communicate with Kafka; all its test cases look that way.
@bigdata4u do you know offhand if the new consumer allows us to get log size?
It would be awesome if we could use this with an SSL-secured cluster (without a plaintext port open) :( Edit: I should have read the files before asking. I just added the certs/keys to consumer.properties, and it's working fine so far for me.
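For anyone else landing here: the "certs/keys in consumer.properties" mentioned above typically map to Kafka's standard SSL client settings. A minimal sketch of conf/consumer.properties, assuming SSL without client authentication (paths and passwords below are hypothetical placeholders, not taken from this thread):

```properties
# Minimal SSL client config sketch; adjust paths/passwords for your environment.
security.protocol=SSL
ssl.truststore.location=/etc/kafka/ssl/client.truststore.jks
ssl.truststore.password=changeit

# Only needed if the brokers require client authentication (ssl.client.auth=required):
ssl.keystore.location=/etc/kafka/ssl/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```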
@johnjang I was able to follow your approach and run kafka-manager with the cert information provided in the consumer.properties file.
There is an issue open for this error message: https://github.com/yahoo/kafka-manager/issues/471
My Kafka cluster is SSL-enabled (no SASL or ACLs). I am getting the error below in the kafka-manager log:
```
[info] k.m.a.KafkaManagerActor - Updating internal state...
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:99)
    at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
    at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
    at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:99)
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83)
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149)
    at kafka.manager.actor.cluster.OffsetCache$$anonfun$19$$anonfun$21$$anonfun$22.apply(KafkaStateActor.scala:415)
    at kafka.manager.actor.cluster.OffsetCache$$anonfun$19$$anonfun$21$$anonfun$22.apply(KafkaStateActor.scala:412)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[error] k.m.a.c.OffsetCachePassive - [topic=test_7] An error has occurred while getting topic offsets from broker
    List((BrokerIdentity(1001,localhost,9092,-1,false),8), (BrokerIdentity(1001,localhost,9092,-1,false),4),
         (BrokerIdentity(1001,localhost,9092,-1,false),9), (BrokerIdentity(1001,localhost,9092,-1,false),5),
         (BrokerIdentity(1001,localhost,9092,-1,false),6), (BrokerIdentity(1001,localhost,9092,-1,false),1),
         (BrokerIdentity(1001,localhost,9092,-1,false),0), (BrokerIdentity(1001,localhost,9092,-1,false),2),
         (BrokerIdentity(1001,localhost,9092,-1,false),7), (BrokerIdentity(1001,localhost,9092,-1,false),3))
java.io.EOFException: null
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:99) ~[org.apache.kafka.kafka-clients-0.10.0.1.jar:na]
    at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
    at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
    at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:99) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
    at kafka.manager.actor.cluster.OffsetCache$$anonfun$19$$anonfun$21$$anonfun$22.apply(KafkaStateActor.scala:415) ~[kafka-manager.kafka-manager-1.3.3.8-sans-externalized.jar:na]
    at kafka.manager.actor.cluster.OffsetCache$$anonfun$19$$anonfun$21$$anonfun$22.apply(KafkaStateActor.scala:412) ~[kafka-manager.kafka-manager-1.3.3.8-sans-externalized.jar:na]
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24) ~[org.scala-lang.scala-library-2.11.8.jar:na]
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24) ~[org.scala-lang.scala-library-2.11.8.jar:na]
[info] k.m.a.c.BrokerViewCacheActor - Updating broker view...
```
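One plausible reading of the EOFException above, consistent with the earlier comment that the tool expects a PLAINTEXT port: a plaintext offset request from SimpleConsumer hits a listener that only speaks TLS, and the broker drops the connection mid-read. For reference, a broker exposing only an SSL listener might look like the sketch below (hostnames, paths, and passwords are hypothetical, not from this thread):

```properties
# Hypothetical broker server.properties fragment: SSL listener only, no PLAINTEXT port.
listeners=SSL://broker1.example.com:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/etc/kafka/ssl/broker1.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/etc/kafka/ssl/broker1.truststore.jks
ssl.truststore.password=changeit
```

Against such a broker, any client (including KM's internal offset fetcher) must be configured with `security.protocol=SSL`, or reads will fail with errors like the one logged above.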