【Kafka】Troubleshooting

Posted by 西维蜀黍 on 2021-10-10, Last Modified on 2022-02-19

could not be established. Broker may not be available

Error

$ kafka-topics --bootstrap-server 192.168.18.134:9092 --topic demo_topic --describe
[2021-10-10 22:39:38,319] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (/192.168.18.134:29092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

Solution

# with a listener bound to a specific IP like this, clients have to connect using that address, e.g. "kafka-topics --bootstrap-server 192.168.18.134:9092 --topic demo_topic --describe", not localhost:9092
$ sudo vim /home/sw/kafka/config/server.properties
...
listeners=PLAINTEXT://192.168.18.129:9092
...

# If you change it to listeners=PLAINTEXT://:9092, the broker will bind to all interfaces

From the Kafka documentation for listeners: leave hostname empty to bind to the default interface. Examples of legal listener lists:

  • PLAINTEXT://myhost:9092,TRACE://:9091
  • PLAINTEXT://0.0.0.0:9092, TRACE://localhost:9093
# On the server itself, both of the following commands can connect
$ ./bin/kafka-topics.sh --bootstrap-server 192.168.18.129:9092 --list
demo_topic
$ ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
demo_topic

# But from a remote host, the connection fails: with no host in the listener, the broker advertises its own hostname (Truenasubuntusw), which the remote client cannot resolve
[2021-10-26 16:56:19,059] WARN [AdminClient clientId=adminclient-1] Error connecting to node Truenasubuntusw:9092 (id: 0 rack: null) (org.apache.kafka.clients.NetworkClient)
java.net.UnknownHostException: Truenasubuntusw
	at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:801)
	at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)
	at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1367)
	at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1301)
	at org.apache.kafka.clients.DefaultHostResolver.resolve(DefaultHostResolver.java:27)
	at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:109)
	at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:508)
	at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:465)
	at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:170)
	at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:975)
	at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:301)
	at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:1117)
	at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1377)
	at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1320)
	at java.base/java.lang.Thread.run(Thread.java:833)
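
A hedged fix for the remote case, if you want the broker to keep binding to all interfaces, is to advertise a routable address explicitly via advertised.listeners and then restart the broker. A minimal sketch, assuming the LAN IP 192.168.18.129 and the install path /home/sw/kafka used above:

$ sudo vim /home/sw/kafka/config/server.properties
...
# bind on all interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# but tell clients to connect via a routable IP instead of the unresolvable hostname
advertised.listeners=PLAINTEXT://192.168.18.129:9092
...

# restart the broker so the listener change takes effect
$ cd /home/sw/kafka
$ ./bin/kafka-server-stop.sh
$ ./bin/kafka-server-start.sh -daemon config/server.properties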

Remote Connection

Config

$ sudo vim /home/sw/kafka/config/server.properties
...
listeners=PLAINTEXT://192.168.18.129:9092
...

Error

$ /usr/local/opt/kafka/bin/kafka-console-consumer --bootstrap-server 192.168.18.129:9092 --topic demo_topic --from-beginning
[2021-10-10 23:18:58,503] WARN [Consumer clientId=consumer-console-consumer-83033-1, groupId=console-consumer-83033] Bootstrap broker 192.168.18.129:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)

Solution

$ sudo iptables -A INPUT -s 192.168.18.0/24 -p tcp -m state --state NEW -m tcp --dport 9092 -j ACCEPT
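
Before (or after) adding the firewall rule, it can help to confirm that the broker is listening on the server and that the port is reachable from the client. A quick check, assuming the same host and port as above (the iptables-persistent part assumes the server runs Debian/Ubuntu):

# on the Kafka server: is anything listening on 9092?
$ sudo ss -tlnp | grep 9092

# from the remote client: is the port reachable at all?
$ nc -vz 192.168.18.129 9092

# optionally, keep the iptables rule across reboots
$ sudo apt install iptables-persistent
$ sudo netfilter-persistent save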

kafka.common.InconsistentClusterIdException: The Cluster ID … doesn’t match stored clusterId Some(…) in meta.properties

[2021-10-26 16:38:52,418] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID ZXkYZa0uShOVyhWWZ5nuRQ doesn't match stored clusterId Some(m6Tm3E5wQTi6Y6hznRmZeA) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
	at kafka.server.KafkaServer.startup(KafkaServer.scala:235)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
	at kafka.Kafka$.main(Kafka.scala:82)
	at kafka.Kafka.main(Kafka.scala)

I managed to solve this issue with the following steps:

  1. Delete all the log/data files created (or generated) by ZooKeeper and Kafka.
  2. Run ZooKeeper
  3. Run Kafka
$ cd /home/sw/kafka/logs
$ sudo rm -rf *
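
Note that the stored cluster ID lives in meta.properties under whatever log.dirs points to in server.properties, so that is the directory that has to be cleared. A hedged double-check, assuming the paths used above (log.dirs is assumed here to be /home/sw/kafka/logs, matching the rm above):

# find the data directory the broker actually uses
$ grep log.dirs /home/sw/kafka/config/server.properties

# the cluster ID is stored here; removing only this file (instead of all data) lets the
# broker re-create it with the current ZooKeeper cluster ID on the next start
$ cat /home/sw/kafka/logs/meta.properties
$ sudo rm /home/sw/kafka/logs/meta.properties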
