Spark Streaming: error receiving Kafka data with the receiver-based approach

June 16, 2020

Error message

ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0: 
Error starting receiver 0 - 
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 10000 
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:880) 
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98) 
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84) 
at kafka.consumer.ZookeeperConsumerConnector.connectZk(ZookeeperConsumerConnector.scala:171) 
at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:126) 
at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:143) 
at kafka.consumer.Consumer$.create(ConsumerConnector.scala:94) 
at org.apache.spark.streaming.kafka.KafkaReceiver.onStart(KafkaInputDStream.scala:100) 
at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:149) 
at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:131)

Error analysis

  The stack trace shows org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 10000 — a connection timeout. A timeout like this can come from a problem in the ZooKeeper cluster itself, a firewall, a wrong IP address, a wrong port, and so on.
  After investigation, the ZooKeeper cluster, the firewall, and the IP addresses were all ruled out, which left the port (or something else) as the likely cause.
  A close look at the code revealed the problem: val ZK_QUORUM = zkHostIp.map(_+":2888").mkString(",")
  Through my own oversight, the connection string used port 2888, which ZooKeeper reserves for internal peer-to-peer (follower-to-leader) traffic within the ensemble, not for client connections, so the receiver's connection attempt conflicted with the ZooKeeper service.
  Changing the port, rebuilding, and redeploying the package resolved the problem.
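The fix can be sketched as follows. The host list below is hypothetical (the original post does not show how zkHostIp is populated); the key point is that clients must connect on ZooKeeper's client port, which is 2181 by default, while 2888 (and 3888) are used only for communication between ensemble members:

```scala
// Hypothetical ZooKeeper host list, for illustration only
val zkHostIp = Seq("192.168.1.10", "192.168.1.11", "192.168.1.12")

// Wrong: 2888 is ZooKeeper's internal quorum/peer port; clients cannot connect to it
// val ZK_QUORUM = zkHostIp.map(_ + ":2888").mkString(",")

// Right: connect on the client port (2181 by default; note mkString, capital S)
val ZK_QUORUM = zkHostIp.map(_ + ":2181").mkString(",")

println(ZK_QUORUM)
// → 192.168.1.10:2181,192.168.1.11:2181,192.168.1.12:2181
```

The resulting comma-separated string is the zkQuorum argument that the receiver-based KafkaUtils.createStream API expects.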

阿布

Self-redemption from the depths of the soul.
