Project: 项目统一框架 (Unified Project Framework) / amos-boot-biz
Commit 9db7581f, authored Nov 16, 2022 by litengwei
任务 11791 (Task 11791)
parent 24d3d23a
Showing 3 changed files with 165 additions and 55 deletions (+165 / -55)

amos-boot-utils/amos-boot-utils-message/src/main/resources/application-dev.properties   +58 / -0
amos-boot-utils/amos-boot-utils-message/src/main/resources/application-roma.properties  +107 / -0
amos-boot-utils/amos-boot-utils-message/src/main/resources/application.properties       +0 / -55
amos-boot-utils/amos-boot-utils-message/src/main/resources/application-dev.properties
@@ -9,6 +9,64 @@ eureka.instance.status-page-url-path=/actuator/info
eureka.instance.metadata-map.management.api-docs=http://localhost:${server.port}${server.servlet.context-path}/swagger-ui.html
# Kafka cluster address
spring.kafka.bootstrap-servers=172.16.3.100:9092
# Producer configuration
# A value greater than 0 makes the client resend records whose delivery failed # retry count
spring.kafka.producer.retries=1
# 16K
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432
# Acknowledgement level
# acks=0: the send counts as successful as soon as the message has been handed to Kafka
# acks=1: the send counts as successful once the message has been written to the leader partition's disk
# acks=all: the send counts as successful only after the leader partition's follower replicas have replicated the message
spring.kafka.producer.acks=1
# Serializers for the message key and value
# # maximum batch size, in bytes
# batch-size: 4096
# # send delay: the producer hands accumulated messages to Kafka once batch-size is reached or linger.ms has elapsed
# buffer-memory: 33554432
# # client ID
# client-id: hello-kafka
# # message compression: none, lz4, gzip or snappy; the default is none
# compression-type: gzip
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
# Consumer group
# When Kafka has no initial offset, or the offset is out of range, the offset is reset automatically:
# earliest: reset to the smallest offset in the partition
# latest: reset to the newest offset in the partition (only data produced after the reset is consumed)
# none: throw an exception if any partition has no committed offset
spring.kafka.consumer.group-id=zhTestGroup
spring.kafka.consumer.enable-auto-commit=false
# If a partition has a committed offset, consume from it; otherwise consume from the beginning
# # auto-commit interval, in ms
# auto-commit-interval: 1000
# # maximum number of records per poll
# max-poll-records: 100
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
# RECORD: commit after each record has been processed by the listener (ListenerConsumer)
# BATCH: commit after each batch returned by poll() has been processed by the listener
# TIME: like BATCH, but only once more than TIME has elapsed since the last commit
# COUNT: like BATCH, but only once at least COUNT records have been processed
# COUNT_TIME: commit when either the TIME or the COUNT condition is met
# MANUAL: commit after the batch has been processed and Acknowledgment.acknowledge() has been called
# MANUAL_IMMEDIATE: commit immediately when Acknowledgment.acknowledge() is called; this is the mode usually used
spring.kafka.listener.ack-mode=manual_immediate
management.health.redis.enabled=false
## emqx
...
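The consumer settings above disable auto-commit and set spring.kafka.listener.ack-mode=manual_immediate, so an offset is committed only when the listener calls Acknowledgment.acknowledge(). A minimal spring-kafka listener sketch of that pattern; the topic demo.topic and the class name are illustrative placeholders, not taken from this commit:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class DemoMessageListener {

    // Group id matches spring.kafka.consumer.group-id above; the topic is a placeholder.
    @KafkaListener(topics = "demo.topic", groupId = "zhTestGroup")
    public void onMessage(String message, Acknowledgment ack) {
        // Process the record first ...
        System.out.println("received: " + message);
        // ... then commit the offset explicitly; with manual_immediate the commit happens right away.
        ack.acknowledge();
    }
}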
amos-boot-utils/amos-boot-utils-message/src/main/resources/application-roma.properties
0 → 100644 (new file)
# Registry (Eureka) address
eureka.client.service-url.defaultZone=http://172.16.11.201:10001/eureka/
eureka.instance.prefer-ip-address=true
management.endpoint.health.show-details=always
management.endpoints.web.exposure.include=*
eureka.instance.health-check-url-path=/actuator/health
eureka.instance.metadata-map.management.context-path=${server.servlet.context-path}/actuator
eureka.instance.status-page-url-path=/actuator/info
eureka.instance.metadata-map.management.api-docs=http://localhost:${server.port}${server.servlet.context-path}/swagger-ui.html
# Kafka cluster address
spring.kafka.bootstrap-servers=172.16.3.100:9092
# Producer configuration
# A value greater than 0 makes the client resend records whose delivery failed # retry count
spring.kafka.producer.retries=1
# 16K
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432
# Acknowledgement level
# acks=0: the send counts as successful as soon as the message has been handed to Kafka
# acks=1: the send counts as successful once the message has been written to the leader partition's disk
# acks=all: the send counts as successful only after the leader partition's follower replicas have replicated the message
spring.kafka.producer.acks=1
# Serializers for the message key and value
# # maximum batch size, in bytes
# batch-size: 4096
# # send delay: the producer hands accumulated messages to Kafka once batch-size is reached or linger.ms has elapsed
# buffer-memory: 33554432
# # client ID
# client-id: hello-kafka
# # message compression: none, lz4, gzip or snappy; the default is none
# compression-type: gzip
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
# Producer SSL
spring.kafka.producer.properties.security.protocol=SASL_SSL
spring.kafka.producer.properties.sasl.mechanism=PLAIN
spring.kafka.producer.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="io.cs" \
  password="@$-5mocGV2n62K67GiMv9#sqvpaOu.0=%4y6vZ#cS#.@n2n-x5$n/35xfvSAKD12";
spring.kafka.producer.properties.ssl.truststore.location=D:/client.truststore.jks
spring.kafka.producer.properties.ssl.truststore.password=dms@kafka
spring.kafka.producer.properties.ssl.endpoint.identification.algorithm=
# Consumer group
# When Kafka has no initial offset, or the offset is out of range, the offset is reset automatically:
# earliest: reset to the smallest offset in the partition
# latest: reset to the newest offset in the partition (only data produced after the reset is consumed)
# none: throw an exception if any partition has no committed offset
spring.kafka.consumer.group-id=zhTestGroup
spring.kafka.consumer.enable-auto-commit=false
# Consumer SSL
spring.kafka.consumer.properties.security.protocol=SASL_SSL
spring.kafka.consumer.properties.sasl.mechanism=PLAIN
spring.kafka.consumer.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="io.cs" \
  password="@$-5mocGV2n62K67GiMv9#sqvpaOu.0=%4y6vZ#cS#.@n2n-x5$n/35xfvSAKD12";
spring.kafka.consumer.properties.ssl.truststore.location=D:/client.truststore.jks
spring.kafka.consumer.properties.ssl.truststore.password=dms@kafka
spring.kafka.consumer.properties.ssl.endpoint.identification.algorithm=
# If a partition has a committed offset, consume from it; otherwise consume from the beginning
# # auto-commit interval, in ms
# auto-commit-interval: 1000
# # maximum number of records per poll
# max-poll-records: 100
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
# RECORD: commit after each record has been processed by the listener (ListenerConsumer)
# BATCH: commit after each batch returned by poll() has been processed by the listener
# TIME: like BATCH, but only once more than TIME has elapsed since the last commit
# COUNT: like BATCH, but only once at least COUNT records have been processed
# COUNT_TIME: commit when either the TIME or the COUNT condition is met
# MANUAL: commit after the batch has been processed and Acknowledgment.acknowledge() has been called
# MANUAL_IMMEDIATE: commit immediately when Acknowledgment.acknowledge() is called; this is the mode usually used
spring.kafka.listener.ack-mode=manual_immediate
management.health.redis.enabled=false
## emqx
emqx.clean-session=true
emqx.client-id=${spring.application.name}-${random.int[1024,65536]}
emqx.broker=tcp://172.16.11.201:1883
emqx.client-user-name=admin
emqx.client-password=public
emqx.max-inflight=1000
# Kafka message topics to listen to; pick the topics to subscribe to depending on whether this is the central node or a station node
kafka.topics=null.topic
kafka.init.topics=
# emq message topics to listen to; pick the topics to subscribe to depending on whether this is the central node or a station node, e.g. emq.iot.created,
emq.topic=emq.iot.created,emq.patrol.created,emq.sign.created,emq.bussSign.created,emq.user.created
\ No newline at end of file
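The roma profile layers SASL_SSL with the PLAIN mechanism and a JKS truststore onto the same producer and consumer. As a rough sketch of what the spring.kafka.*.properties.* keys above amount to on a plain (non-Spring) Kafka producer; the credentials, truststore password and topic below are placeholders, not the values from this file:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SaslSslProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "172.16.3.100:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "1");
        // SASL_SSL with the PLAIN mechanism, as in the roma profile
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"<user>\" password=\"<password>\";");
        props.put("ssl.truststore.location", "D:/client.truststore.jks");
        props.put("ssl.truststore.password", "<truststore-password>");
        // An empty value disables hostname verification, matching the properties file
        props.put("ssl.endpoint.identification.algorithm", "");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo.topic", "key", "hello"));
        }
    }
}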
amos-boot-utils/amos-boot-utils-message/src/main/resources/application.properties
@@ -5,59 +5,4 @@ spring.profiles.active=dev
spring.jackson.time-zone=GMT+8
spring.jackson.date-format=yyyy-MM-dd HH:mm:ss
spring.jackson.serialization.write-dates-as-timestamps=true
# Kafka cluster address
spring.kafka.bootstrap-servers=172.16.3.100:9092
# Producer configuration
# A value greater than 0 makes the client resend records whose delivery failed # retry count
spring.kafka.producer.retries=0
# 16K
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432
# Acknowledgement level
# acks=0: the send counts as successful as soon as the message has been handed to Kafka
# acks=1: the send counts as successful once the message has been written to the leader partition's disk
# acks=all: the send counts as successful only after the leader partition's follower replicas have replicated the message
spring.kafka.producer.acks=1
# Serializers for the message key and value
# # maximum batch size, in bytes
# batch-size: 4096
# # send delay: the producer hands accumulated messages to Kafka once batch-size is reached or linger.ms has elapsed
# buffer-memory: 33554432
# # client ID
# client-id: hello-kafka
# # message compression: none, lz4, gzip or snappy; the default is none
# compression-type: gzip
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
# Consumer group
# When Kafka has no initial offset, or the offset is out of range, the offset is reset automatically:
# earliest: reset to the smallest offset in the partition
# latest: reset to the newest offset in the partition (only data produced after the reset is consumed)
# none: throw an exception if any partition has no committed offset
spring.kafka.consumer.group-id=zhTestGroup
spring.kafka.consumer.enable-auto-commit=false
# If a partition has a committed offset, consume from it; otherwise consume from the beginning
# # auto-commit interval, in ms
# auto-commit-interval: 1000
# # maximum number of records per poll
# max-poll-records: 100
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
# RECORD: commit after each record has been processed by the listener (ListenerConsumer)
# BATCH: commit after each batch returned by poll() has been processed by the listener
# TIME: like BATCH, but only once more than TIME has elapsed since the last commit
# COUNT: like BATCH, but only once at least COUNT records have been processed
# COUNT_TIME: commit when either the TIME or the COUNT condition is met
# MANUAL: commit after the batch has been processed and Acknowledgment.acknowledge() has been called
# MANUAL_IMMEDIATE: commit immediately when Acknowledgment.acknowledge() is called; this is the mode usually used
spring.kafka.listener.ack-mode=manual_immediate
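On the sending side, the producer settings in these files (StringSerializer for key and value, acks=1, a small retry count) are what a send through Spring's KafkaTemplate relies on. A minimal illustrative sketch, assuming spring-kafka auto-configures the template from these properties; the service and topic names are placeholders:

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class DemoMessageSender {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public DemoMessageSender(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void send(String key, String payload) {
        // With acks=1 the broker acknowledges once the leader has written the record;
        // spring.kafka.producer.retries controls how often a failed send is retried.
        kafkaTemplate.send("demo.topic", key, payload);
    }
}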
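The roma profile also configures an emqx (MQTT) connection (emqx.broker, emqx.client-user-name, emqx.client-password, emqx.max-inflight) and the emq.topic subscription list. The project presumably wires these into its own MQTT client wrapper; as an assumption-laden sketch, the same settings applied directly to an Eclipse Paho client would look roughly like this:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class EmqxSubscribeSketch {
    public static void main(String[] args) throws MqttException {
        // Broker and client id mirror emqx.broker and emqx.client-id; the literal id is a placeholder.
        MqttClient client = new MqttClient("tcp://172.16.11.201:1883",
                "amos-boot-utils-message-12345", new MemoryPersistence());
        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);               // emqx.clean-session=true
        options.setUserName("admin");                // emqx.client-user-name
        options.setPassword("public".toCharArray()); // emqx.client-password
        options.setMaxInflight(1000);                // emqx.max-inflight
        client.connect(options);
        // Subscribe to one of the topics listed in emq.topic.
        client.subscribe("emq.iot.created");
    }
}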