Integrating a Flume HTTP source (listening on a port) with Kafka, and the errors hit along the way
The http.conf used previously:
# save incoming HTTP content and forward it to Kafka
agent2.sources = httpSrc
agent2.channels = channel
agent2.sinks = sink

agent2.sources.httpSrc.type = http
agent2.sources.httpSrc.bind = 172.16.90.61
agent2.sources.httpSrc.port = 55555
agent2.sources.httpSrc.channels = channel

agent2.channels.channel.type = memory
agent2.channels.channel.capacity = 100000
agent2.channels.channel.transactionCapacity = 100000
# keep-alive delay (seconds); tune as needed
agent2.channels.channel.keep-alive = 30

agent2.sinks.sink.type = org.apache.flume.sink.kafka.KafkaSink
agent2.sinks.sink.brokerList = app1:6667,app2:6667
agent2.sinks.sink.topic = kafkatest
agent2.sinks.sink.serializer.class = kafka.serializer.StringEncoder
agent2.sinks.sink.channel = channel
Test:
A colleague sent data to the port.
Exception (a JSON syntax error from the listening port):
Property | Default | Description
handler | org.apache.flume.source.http.JSONHandler | Valid values are org.apache.flume.source.http.JSONHandler and org.apache.flume.sink.solr.morphline.BlobHandler
The default is JSONHandler, and it is used even when the handler property is not set, so posting a plain string, a number, or any other non-JSON body causes a JSON syntax exception. The request body must be valid JSON, for example:
curl -X POST -d '[{ "headers" :{"a" : "a1","b" : "b1"},"body" : "idoall.org_body"}]' 172.16.90.61:55555
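A minimal Python sketch of why the format matters: the JSONHandler expects the request body to be a JSON array of event objects, each with an optional "headers" map and a "body" string. Parsing the payload from the curl command above, versus the plain text sent earlier, shows the difference (this uses only the standard json module, not Flume code):

```python
import json

# The exact payload from the curl command above: a JSON array of events,
# each with a "headers" map and a "body" string.
payload = '[{ "headers" :{"a" : "a1","b" : "b1"},"body" : "idoall.org_body"}]'

events = json.loads(payload)          # parses cleanly
assert events[0]["headers"]["b"] == "b1"
assert events[0]["body"] == "idoall.org_body"

# A plain text body is not valid JSON at all and fails to parse,
# which is what surfaced as the JSON syntax exception in the Flume logs.
try:
    json.loads("idoall.org_body")     # bare string, not JSON
    ok = False
except json.JSONDecodeError:
    ok = True
assert ok
```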
Result:
This was unclear at first, but the cause has now been found: the data sent earlier was not well-formed JSON; the body must follow the JSON event syntax shown above. Two days spent on this, and it was not my mistake 0.0 — the colleague also discovered that the JSON produced by his earlier Java code was malformed. Solved.
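For reference, the same request the curl command makes can be issued from Python. This is a sketch, assuming the agent from http.conf is up and reachable at 172.16.90.61:55555; the function name and URL constant are mine, not part of Flume:

```python
import json
import urllib.request

# HTTP source address from http.conf (assumed reachable)
FLUME_URL = "http://172.16.90.61:55555"

def build_flume_request(url, headers, body):
    """Build a POST request whose body is a JSON array of one Flume event."""
    payload = json.dumps([{"headers": headers, "body": body}]).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

if __name__ == "__main__":
    req = build_flume_request(FLUME_URL, {"a": "a1", "b": "b1"}, "idoall.org_body")
    # The HTTP source answers 200 OK when the events are accepted.
    with urllib.request.urlopen(req) as resp:
        print(resp.status)
```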