项目统一框架 (Unified Project Framework) / amos-boot-biz / Commits

Commit ede3e0f8, authored Nov 10, 2022 by litengwei
Commit message: 配置文件提交 (configuration file commit)
Parent: 8c2a780d

Showing 3 changed files, with 33 additions and 30 deletions:
application-dev.properties   ...ils-message/src/main/resources/application-dev.properties   +3 -2
application.properties       ...t-utils-message/src/main/resources/application.properties   +28 -28
pom.xml                      amos-boot-utils/pom.xml                                         +2 -0
amos-boot-utils/amos-boot-utils-message/src/main/resources/application-dev.properties

@@ -23,7 +23,7 @@ spring.redis.password=1234560
# Kafka message topics to listen on; configure the topics according to whether this is the central node or a station node
kafka.topics=null.topic
-kafka.init.topics=akka.iot.created,akka.patrol.created,akka.sign.created,akka.bussSign.created,akka.user.created
+kafka.init.topics=emq.iot.created,
# EMQ message topics to listen on; configure the topics according to whether this is the central node or a station node
emq.topic=emq.iot.created,emq.patrol.created,emq.sign.created,emq.bussSign.created,emq.user.created
\ No newline at end of file
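The comma-separated topic lists above are plain Spring properties, so a consuming bean has to split them itself. Below is a minimal sketch of one way the module might read kafka.topics and emq.topic; the MessageTopicProperties class, its package, and its field names are hypothetical and not part of this commit:

package com.example.message.config; // hypothetical package, assumed for illustration

import java.util.List;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Hypothetical holder for the topic properties defined in application-dev.properties.
@Component
public class MessageTopicProperties {

    // Single default topic, e.g. "null.topic"
    @Value("${kafka.topics}")
    private String kafkaTopic;

    // Comma-separated EMQ topics, split into a list via SpEL
    @Value("#{'${emq.topic}'.split(',')}")
    private List<String> emqTopics;

    public String getKafkaTopic() {
        return kafkaTopic;
    }

    public List<String> getEmqTopics() {
        return emqTopics;
    }
}

A trailing comma in a topic list, as in kafka.init.topics above, is tolerated by String.split (trailing empty strings are dropped), but trimming each entry before subscribing is still a sensible precaution.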
amos-boot-utils/amos-boot-utils-message/src/main/resources/application.properties

@@ -5,59 +5,59 @@ spring.profiles.active=dev
spring.jackson.time-zone=GMT+8
spring.jackson.date-format=yyyy-MM-dd HH:mm:ss
spring.jackson.serialization.write-dates-as-timestamps=true
# Kafka cluster info
spring.kafka.bootstrap-servers=172.16.3.100:9092
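One subtlety in the Jackson settings above: with write-dates-as-timestamps=true, java.util.Date fields are serialized as epoch milliseconds, and the configured date-format and GMT+8 time zone only take effect once that flag is turned off. A small standalone sketch that mirrors the three properties on a plain ObjectMapper (the Event class is illustrative, not from this commit):

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;

public class JacksonDemo {

    // Illustrative payload type, not from this commit.
    static class Event {
        public Date createdAt = new Date();
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Mirrors spring.jackson.date-format and spring.jackson.time-zone
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        format.setTimeZone(TimeZone.getTimeZone("GMT+8"));
        mapper.setDateFormat(format);
        mapper.setTimeZone(TimeZone.getTimeZone("GMT+8"));

        // Mirrors spring.jackson.serialization.write-dates-as-timestamps=true:
        // dates come out as epoch milliseconds
        mapper.enable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
        System.out.println(mapper.writeValueAsString(new Event()));

        // With the flag disabled, the date-format and time zone apply instead
        mapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
        System.out.println(mapper.writeValueAsString(new Event()));
    }
}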
# Producer configuration
# If set to a value greater than 0, the client will resend records that failed to send  # retry count
-spring.kafka.producer.retries=3
+spring.kafka.producer.retries=0
#16K
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432
# Acknowledgement level
# acks=0: the send is considered successful as soon as the message has been handed to Kafka
# acks=1: the send is considered successful once the message has reached the Kafka leader partition and been written to disk
# acks=all: the send is considered successful once the message has reached the leader partition and the follower replicas have synchronized it
spring.kafka.producer.acks=1
# Serializers for the message key and the message body
# # Maximum batch size, in bytes
# batch-size: 4096
# # Send delay: the producer hands accumulated messages to Kafka once they reach batch-size or linger.ms has elapsed since a message was received
# buffer-memory: 33554432
# # Client ID
# client-id: hello-kafka
# # Message compression: none, lz4, gzip or snappy; the default is none.
# compression-type: gzip
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
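Given the producer settings above (StringSerializer for both key and value, acks=1), sending from this module would typically go through Spring's KafkaTemplate. A minimal sketch follows; the MessageSender class and its logging are illustrative, not part of this commit:

import java.util.concurrent.TimeUnit;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;

// Hypothetical sender built on the producer properties above; not part of this commit.
@Component
public class MessageSender {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // Key and payload are Strings because both serializers are StringSerializer.
    public void send(String topic, String key, String payload) throws Exception {
        SendResult<String, String> result = kafkaTemplate.send(topic, key, payload)
                .get(10, TimeUnit.SECONDS); // block briefly so send failures surface here
        System.out.println("Sent to partition " + result.getRecordMetadata().partition()
                + " at offset " + result.getRecordMetadata().offset());
    }
}

With acks=1 and retries=0, a record acknowledged by the leader but lost before replication is not resent automatically, so callers that need stronger delivery guarantees would raise acks or retries.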
# Consumer group
# When Kafka has no initial offset, or the offset is out of range, the offset is reset automatically
# earliest: reset to the smallest offset in the partition
# latest: reset to the latest offset in the partition (consume only data newly produced in the partition)
# none: throw an exception as soon as any partition has no committed offset
spring.kafka.consumer.group-id=zhTestGroup
spring.kafka.consumer.enable-auto-commit=false
# When a partition has a committed offset, consume from that offset; when there is no committed offset, consume from the beginning
# # Auto-commit interval, in ms
# auto-commit-interval: 1000
# # Maximum number of records per batch poll
# max-poll-records: 100
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
# Commit after each record has been processed by the listener (ListenerConsumer)
# RECORD
# Commit after each batch returned by poll() has been processed by the listener (ListenerConsumer)
# BATCH
# Commit after each poll() batch has been processed, once more than TIME has elapsed since the last commit
# TIME
# Commit after each poll() batch has been processed, once the number of processed records reaches COUNT
# COUNT
# Commit when either the TIME or the COUNT condition is met
# COUNT_TIME
# Commit after each poll() batch has been processed and Acknowledgment.acknowledge() has been called manually
# MANUAL
# Commit immediately after Acknowledgment.acknowledge() is called manually; this is the mode generally used
# MANUAL_IMMEDIATE
spring.kafka.listener.ack-mode=manual_immediate
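Because enable-auto-commit=false and ack-mode=manual_immediate, offsets are committed only when the listener calls Acknowledgment.acknowledge(). A minimal sketch of a matching listener; the class name, topic expression, and handler logic are illustrative, not taken from this commit:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

// Hypothetical consumer matching the settings above; not part of this commit.
@Component
public class MessageListener {

    // groupId matches spring.kafka.consumer.group-id; topics are read from the emq.topic property
    @KafkaListener(topics = "#{'${emq.topic}'.split(',')}", groupId = "zhTestGroup")
    public void onMessage(ConsumerRecord<String, String> record, Acknowledgment ack) {
        try {
            System.out.println("Received " + record.topic() + "/" + record.partition()
                    + "@" + record.offset() + ": " + record.value());
            // Commit the offset only after the record has been handled successfully
            ack.acknowledge();
        } catch (Exception e) {
            // Without acknowledge(), the offset is not committed and the record
            // will be redelivered after a rebalance or restart.
            e.printStackTrace();
        }
    }
}

MANUAL_IMMEDIATE commits on the spot inside the listener call, whereas MANUAL queues the acknowledgment and commits it together with the next poll cycle.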
amos-boot-utils/pom.xml

@@ -20,5 +20,6 @@
<module>amos-boot-utils-jpush</module>
<module>amos-boot-utils-video</module>
<module>amos-boot-utils-speech</module>
+<module>amos-boot-utils-message</module>
</modules>
</project>
\ No newline at end of file