RocketMQ源码分析

RocketMQ源码阅读

环境

  • JDK:1.8+、Maven、IntelliJ IDEA

  • 仓库 https://github.com/apache/rocketmq

    • broker:broker 模块(Broker 启动进程)

    • client :消息客户端,包含消息生产者、消息消费者相关类

    • common :公共包

    • dev :开发者信息(非源代码)

    • distribution :部署实例文件夹(非源代码)

    • example:RocketMQ 范例代码

    • filter:消息过滤相关基础类

    • filtersrv:消息过滤服务器实现相关类(Filter启动进程)

    • logappender:日志实现相关类

    • namesrv:NameServer实现相关类(NameServer启动进程)

    • openmessaging:消息开放标准

    • remoting:远程通信模块

    • srvutil:服务工具类

    • store:消息存储实现相关类

    • style:checkstyle相关实现

    • test:测试相关类

    • tools:工具类,监控命令相关实现类

  • 安装:mvn clean install -Dmaven.test.skip=true

  • 启动NameServer

    • 创建conf配置文件夹,从distribution拷贝broker.conf、logback_broker.xml、logback_namesrv.xml
    • namesrv模块中run NamesrvStartup.java(需要配置ROCKETMQ_HOME环境变量)
  • 启动Broker

    • broker.conf:

      brokerClusterName = DefaultCluster
      brokerName = broker-a
      brokerId = 0
      # namesrvAddr地址
      namesrvAddr=127.0.0.1:9876
      deleteWhen = 04
      fileReservedTime = 48
      brokerRole = ASYNC_MASTER
      flushDiskType = ASYNC_FLUSH
      autoCreateTopicEnable=true

      # 存储路径
      storePathRootDir=E:\\RocketMQ\\data\\rocketmq\\dataDir
      # commitLog路径
      storePathCommitLog=E:\\RocketMQ\\data\\rocketmq\\dataDir\\commitlog
      # 消息队列存储路径
      storePathConsumeQueue=E:\\RocketMQ\\data\\rocketmq\\dataDir\\consumequeue
      # 消息索引存储路径
      storePathIndex=E:\\RocketMQ\\data\\rocketmq\\dataDir\\index
      # checkpoint文件路径
      storeCheckpoint=E:\\RocketMQ\\data\\rocketmq\\dataDir\\checkpoint
      # abort文件存储路径
      abortFile=E:\\RocketMQ\\data\\rocketmq\\dataDir\\abort
    • 创建数据文件夹dataDir

    • 启动BrokerStartup,配置broker.conf路径与ROCKETMQ_HOME环境变量

  • 发送消息

    • 进入example模块的org.apache.rocketmq.example.quickstart

    • 指定Nameserver地址,运行main

      DefaultMQProducer producer = new DefaultMQProducer("please_rename_unique_group_name");
      producer.setNamesrvAddr("127.0.0.1:9876");
  • 消费消息

    • 进入example模块的org.apache.rocketmq.example.quickstart

    • 指定Nameserver地址,运行main

      DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("please_rename_unique_group_name_4");
      consumer.setNamesrvAddr("127.0.0.1:9876");

NameServer

架构设计

  • Producer将某一主题的消息发送到消息服务器,消息服务器将消息持久化存储;Consumer订阅感兴趣的主题,消息服务器根据订阅信息(路由信息)将消息推送给消费者(Push模式),或者由消费者主动向消息服务器拉取(Pull模式);同时部署多台消息服务器共同承担消息的存储
  • Producer如何知道消息要发送到哪台消息服务器?一个Broker宕机后,Producer如何在不重启服务的情况下感知变化并把消息发送到其他Broker?——NameServer
    • Broker启动时向所有NameServer注册,Producer发送消息前从NameServer获取Broker地址列表,根据负载均衡算法从列表选择一台Broker发送
    • NameServer与每台Broker保持长连接,每隔10s检测Broker是否存活,如果Broker宕机则从路由注册表中删除(路由变化不会马上通知Producer——降低NameServer实现的复杂度,由消息发送端提供容错机制保证消息发送的可用性)
    • 通过部署多台NameServer实现高可用,但彼此之间不通讯——某一时刻各NameServer的数据并不完全相同,但并不影响消息发送

启动流程

  • 启动类:org.apache.rocketmq.namesrv.NamesrvStartup

步骤一

  • 解析配置文件,填充NamesrvConfig、NettyServerConfig属性值,创建NamesrvController

代码:NamesrvStartup#createNamesrvController

//创建NamesrvConfig
final NamesrvConfig namesrvConfig = new NamesrvConfig();
//创建NettyServerConfig
final NettyServerConfig nettyServerConfig = new NettyServerConfig();
//设置启动端口号
nettyServerConfig.setListenPort(9876);
//解析启动-c参数
if (commandLine.hasOption('c')) {
String file = commandLine.getOptionValue('c');
if (file != null) {
InputStream in = new BufferedInputStream(new FileInputStream(file));
properties = new Properties();
properties.load(in);
MixAll.properties2Object(properties, namesrvConfig);
MixAll.properties2Object(properties, nettyServerConfig);

namesrvConfig.setConfigStorePath(file);

System.out.printf("load config properties file OK, %s%n", file);
in.close();
}
}
//解析启动-p参数
if (commandLine.hasOption('p')) {
InternalLogger console = InternalLoggerFactory.getLogger(LoggerName.NAMESRV_CONSOLE_NAME);
MixAll.printObjectProperties(console, namesrvConfig);
MixAll.printObjectProperties(console, nettyServerConfig);
System.exit(0);
}
//将启动参数填充到namesrvConfig,nettyServerConfig
MixAll.properties2Object(ServerUtil.commandLine2Properties(commandLine), namesrvConfig);

//创建NameServerController
final NamesrvController controller = new NamesrvController(namesrvConfig, nettyServerConfig);

NamesrvConfig属性

private String rocketmqHome = System.getProperty(MixAll.ROCKETMQ_HOME_PROPERTY, System.getenv(MixAll.ROCKETMQ_HOME_ENV));  // rocketmq主目录
private String kvConfigPath = System.getProperty("user.home") + File.separator + "namesrv" + File.separator + "kvConfig.json"; // NameServer默认配置文件路径
private String configStorePath = System.getProperty("user.home") + File.separator + "namesrv" + File.separator + "namesrv.properties"; // NameServer存储KV配置属性的持久化路径
private String productEnvName = "center";
private boolean clusterTest = false;
private boolean orderMessageEnable = false; // 是否支持顺序消息

NettyServerConfig属性

private int listenPort = 8888;  // NameServer监听端口,该值默认会被初始化为9876
private int serverWorkerThreads = 8; // Netty业务线程池线程个数
private int serverCallbackExecutorThreads = 0; // Netty public任务线程池线程个数,Netty网络设计,根据业务类型会创建不同的线程池,比如处理消息发送、消息消费、心跳检测等。如果该业务类型未注册线程池,则由public线程池执行
private int serverSelectorThreads = 3; // IO线程池个数,主要是NameServer、Broker端解析请求、返回相应的线程个数,这类线程主要是处理网路请求的,解析请求包,然后转发到各个业务线程池完成具体的操作,然后将结果返回给调用方
private int serverOnewaySemaphoreValue = 256; // send oneway消息请求并发读(Broker端参数)
private int serverAsyncSemaphoreValue = 64; // 异步消息发送最大并发度
private int serverChannelMaxIdleTimeSeconds = 120; // 网络连接最大的空闲时间,默认120s
private int serverSocketSndBufSize = NettySystemConfig.socketSndbufSize; // 网络socket发送缓冲区大小
private int serverSocketRcvBufSize = NettySystemConfig.socketRcvbufSize; // 网络接收端缓存区大小
private boolean serverPooledByteBufAllocatorEnable = true; // ByteBuffer是否开启缓存
private boolean useEpollNativeSelector = false; // 是否启用Epoll IO模型

步骤二

  • 根据启动属性创建NamesrvController实例,并初始化该实例(NameServer核心控制器)

代码:NamesrvController#initialize

public boolean initialize() {
//加载KV配置
this.kvConfigManager.load();
//创建NettyServer网络处理对象
this.remotingServer = new NettyRemotingServer(this.nettyServerConfig, this.brokerHousekeepingService);
//创建网络请求处理线程池
this.remotingExecutor =
Executors.newFixedThreadPool(nettyServerConfig.getServerWorkerThreads(), new ThreadFactoryImpl("RemotingExecutorThread_"));
//注册请求处理器
this.registerProcessor();
//开启定时任务:每隔10s扫描一次Broker,移除不活跃的Broker
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
NamesrvController.this.routeInfoManager.scanNotActiveBroker();
}
}, 5, 10, TimeUnit.SECONDS);
//开启定时任务:每隔10min打印一次KV配置
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
NamesrvController.this.kvConfigManager.printAllPeriodically();
}
}, 1, 10, TimeUnit.MINUTES);

return true;
}

步骤三

  • JVM进程关闭之前,先将线程池关闭,释放资源

代码:NamesrvStartup#start

//注册JVM钩子函数代码
Runtime.getRuntime().addShutdownHook(new ShutdownHookThread(log, new Callable<Void>() {
@Override
public Void call() throws Exception {
//释放资源
controller.shutdown();
return null;
}
}));

路由管理

  • NameServer为Producer、Consumer提供关于Topic的路由信息,需要存储路由的基础信息并还要管理Broker节点——路由注册、路由删除等
  • 路由注册通过Broker与NameServer的心跳功能实现:Broker启动时向集群中所有NameServer发送心跳包,之后每隔30s发送一次
  • NameServer收到心跳包后更新brokerLiveTable中BrokerLiveInfo的lastUpdateTimestamp
  • NameServer每隔10s扫描brokerLiveTable,连续120s没有收到心跳包则移除该Broker的路由信息,同时关闭Socket连接
  • 心跳包包含BrokerId、Broker地址、Broker名称、Broker所属集群名称、Broker关联的FilterServer列表

路由元信息

代码:RouteInfoManager

private final HashMap<String/* topic */, List<QueueData>> topicQueueTable;  // Topic消息队列路由信息,消息发送时根据路由表进行负载均衡
private final HashMap<String/* brokerName */, BrokerData> brokerAddrTable; // Broker基础信息,包括brokerName、所属集群名称、主备Broker地址
private final HashMap<String/* clusterName */, Set<String/* brokerName */>> clusterAddrTable; // Broker集群信息,存储集群中所有Broker名称
private final HashMap<String/* brokerAddr */, BrokerLiveInfo> brokerLiveTable; // Broker状态信息,NameServer每次收到心跳包时会替换该信息
private final HashMap<String/* brokerAddr */, List<String>/* Filter Server */> filterServerTable; // Broker上的FilterServer列表,用于类模式消息过滤
  • 一个Topic拥有多个消息队列,一个Broker为每一个Topic创建4个读队列和4个写队列
  • 多个Broker组成一个集群,BrokerName相同的多台Broker构成Master-Slave架构,brokerId为0代表Master,大于0代表Slave
  • BrokerLiveInfo中的lastUpdateTimestamp存储上次收到Broker心跳包的时间
  • 实例数据如下:
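    下面给出一组假设的示例数据,仅用于说明各路由表的结构(取值为演示用,非真实数据):

    topicQueueTable:   {"TopicTest": [{"brokerName": "broker-a", "readQueueNums": 4, "writeQueueNums": 4, "perm": 6}]}
    brokerAddrTable:   {"broker-a": {"cluster": "DefaultCluster", "brokerName": "broker-a", "brokerAddrs": {0: "192.168.0.2:10911"}}}
    clusterAddrTable:  {"DefaultCluster": ["broker-a"]}
    brokerLiveTable:   {"192.168.0.2:10911": {"lastUpdateTimestamp": 1577700000000, "haServerAddr": "192.168.0.2:10912"}}
    filterServerTable: {}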

路由注册

1)发送心跳包

代码:BrokerController#start

//注册Broker信息
this.registerBrokerAll(true, false, true);
//每隔30s上报Broker信息到NameServer
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {

@Override
public void run() {
try {
BrokerController.this.registerBrokerAll(true, false, brokerConfig.isForceRegister());
} catch (Throwable e) {
log.error("registerBrokerAll Exception", e);
}
}
}, 1000 * 10, Math.max(10000, Math.min(brokerConfig.getRegisterNameServerPeriod(), 60000)),
TimeUnit.MILLISECONDS);

代码:BrokerOuterAPI#registerBrokerAll

//获得nameServer地址信息
List<String> nameServerAddressList = this.remotingClient.getNameServerAddressList();
//遍历所有nameserver列表
if (nameServerAddressList != null && nameServerAddressList.size() > 0) {

//封装请求头
final RegisterBrokerRequestHeader requestHeader = new RegisterBrokerRequestHeader();
requestHeader.setBrokerAddr(brokerAddr);
requestHeader.setBrokerId(brokerId);
requestHeader.setBrokerName(brokerName);
requestHeader.setClusterName(clusterName);
requestHeader.setHaServerAddr(haServerAddr);
requestHeader.setCompressed(compressed);
//封装请求体
RegisterBrokerBody requestBody = new RegisterBrokerBody();
requestBody.setTopicConfigSerializeWrapper(topicConfigWrapper);
requestBody.setFilterServerList(filterServerList);
final byte[] body = requestBody.encode(compressed);
final int bodyCrc32 = UtilAll.crc32(body);
requestHeader.setBodyCrc32(bodyCrc32);
final CountDownLatch countDownLatch = new CountDownLatch(nameServerAddressList.size());
for (final String namesrvAddr : nameServerAddressList) {
brokerOuterExecutor.execute(new Runnable() {
@Override
public void run() {
try {
//分别向NameServer注册
RegisterBrokerResult result = registerBroker(namesrvAddr,oneway, timeoutMills,requestHeader,body);
if (result != null) {
registerBrokerResultList.add(result);
}

log.info("register broker[{}]to name server {} OK", brokerId, namesrvAddr);
} catch (Exception e) {
log.warn("registerBroker Exception, {}", namesrvAddr, e);
} finally {
countDownLatch.countDown();
}
}
});
}

try {
countDownLatch.await(timeoutMills, TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
}
}

代码:BrokerOuterAPI#registerBroker

if (oneway) {
try {
this.remotingClient.invokeOneway(namesrvAddr, request, timeoutMills);
} catch (RemotingTooMuchRequestException e) {
// Ignore
}
return null;
}
RemotingCommand response = this.remotingClient.invokeSync(namesrvAddr, request, timeoutMills);
2)处理心跳包

  • org.apache.rocketmq.namesrv.processor.DefaultRequestProcessor处理类解析请求类型,如果请求类型为**REGISTER_BROKER**,则将请求转发到RouteInfoManager#registerBroker

代码:DefaultRequestProcessor#processRequest

//判断是注册Broker信息
case RequestCode.REGISTER_BROKER:
Version brokerVersion = MQVersion.value2Version(request.getVersion());
if (brokerVersion.ordinal() >= MQVersion.Version.V3_0_11.ordinal()) {
return this.registerBrokerWithFilterServer(ctx, request);
} else {
//注册Broker信息
return this.registerBroker(ctx, request);
}

代码:DefaultRequestProcessor#registerBroker

RegisterBrokerResult result = this.namesrvController.getRouteInfoManager().registerBroker(
requestHeader.getClusterName(),
requestHeader.getBrokerAddr(),
requestHeader.getBrokerName(),
requestHeader.getBrokerId(),
requestHeader.getHaServerAddr(),
topicConfigWrapper,
null,
ctx.channel()
);

代码:RouteInfoManager#registerBroker

维护路由信息

//加锁
this.lock.writeLock().lockInterruptibly();
//维护clusterAddrTable
Set<String> brokerNames = this.clusterAddrTable.get(clusterName);
if (null == brokerNames) {
brokerNames = new HashSet<String>();
this.clusterAddrTable.put(clusterName, brokerNames);
}
brokerNames.add(brokerName);
//维护brokerAddrTable
BrokerData brokerData = this.brokerAddrTable.get(brokerName);
//第一次注册,则创建brokerData
if (null == brokerData) {
registerFirst = true;
brokerData = new BrokerData(clusterName, brokerName, new HashMap<Long, String>());
this.brokerAddrTable.put(brokerName, brokerData);
}
//非第一次注册,更新Broker
Map<Long, String> brokerAddrsMap = brokerData.getBrokerAddrs();
Iterator<Entry<Long, String>> it = brokerAddrsMap.entrySet().iterator();
while (it.hasNext()) {
Entry<Long, String> item = it.next();
if (null != brokerAddr && brokerAddr.equals(item.getValue()) && brokerId != item.getKey()) {
it.remove();
}
}
String oldAddr = brokerData.getBrokerAddrs().put(brokerId, brokerAddr);
registerFirst = registerFirst || (null == oldAddr);
//维护topicQueueTable
if (null != topicConfigWrapper && MixAll.MASTER_ID == brokerId) {
if (this.isBrokerTopicConfigChanged(brokerAddr, topicConfigWrapper.getDataVersion()) ||
registerFirst) {
ConcurrentMap<String, TopicConfig> tcTable = topicConfigWrapper.getTopicConfigTable();
if (tcTable != null) {
for (Map.Entry<String, TopicConfig> entry : tcTable.entrySet()) {
this.createAndUpdateQueueData(brokerName, entry.getValue());
}
}
}
}


代码:RouteInfoManager#createAndUpdateQueueData

private void createAndUpdateQueueData(final String brokerName, final TopicConfig topicConfig) {
//创建QueueData
QueueData queueData = new QueueData();
queueData.setBrokerName(brokerName);
queueData.setWriteQueueNums(topicConfig.getWriteQueueNums());
queueData.setReadQueueNums(topicConfig.getReadQueueNums());
queueData.setPerm(topicConfig.getPerm());
queueData.setTopicSynFlag(topicConfig.getTopicSysFlag());
//获得topicQueueTable中队列集合
List<QueueData> queueDataList = this.topicQueueTable.get(topicConfig.getTopicName());
//topicQueueTable为空,则直接添加queueData到队列集合
if (null == queueDataList) {
queueDataList = new LinkedList<QueueData>();
queueDataList.add(queueData);
this.topicQueueTable.put(topicConfig.getTopicName(), queueDataList);
log.info("new topic registered, {} {}", topicConfig.getTopicName(), queueData);
} else {
//判断是否是新的队列
boolean addNewOne = true;
Iterator<QueueData> it = queueDataList.iterator();
while (it.hasNext()) {
QueueData qd = it.next();
//如果brokerName相同,代表不是新的队列
if (qd.getBrokerName().equals(brokerName)) {
if (qd.equals(queueData)) {
addNewOne = false;
} else {
log.info("topic changed, {} OLD: {} NEW: {}", topicConfig.getTopicName(), qd,
queueData);
it.remove();
}
}
}
//如果是新的队列,则添加队列到queueDataList
if (addNewOne) {
queueDataList.add(queueData);
}
}
}
//维护brokerLiveTable
BrokerLiveInfo prevBrokerLiveInfo = this.brokerLiveTable.put(brokerAddr,new BrokerLiveInfo(
System.currentTimeMillis(),
topicConfigWrapper.getDataVersion(),
channel,
haServerAddr));
//维护filterServerList
if (filterServerList != null) {
if (filterServerList.isEmpty()) {
this.filterServerTable.remove(brokerAddr);
} else {
this.filterServerTable.put(brokerAddr, filterServerList);
}
}

if (MixAll.MASTER_ID != brokerId) {
String masterAddr = brokerData.getBrokerAddrs().get(MixAll.MASTER_ID);
if (masterAddr != null) {
BrokerLiveInfo brokerLiveInfo = this.brokerLiveTable.get(masterAddr);
if (brokerLiveInfo != null) {
result.setHaServerAddr(brokerLiveInfo.getHaServerAddr());
result.setMasterAddr(masterAddr);
}
}
}

路由删除

  • 删除的场景:
    • NameServer定期扫描brokerLiveTable,检测上次心跳包与当前系统的时间差,如果时间超过120s,则移除broker
    • Broker在正常关闭的情况下,会执行unregisterBroker指令
  • 二者都是从相关路由表中删除与该broker相关的信息

代码:NamesrvController#initialize

//每隔10s扫描一次不活跃的Broker
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {

@Override
public void run() {
NamesrvController.this.routeInfoManager.scanNotActiveBroker();
}
}, 5, 10, TimeUnit.SECONDS);

代码:RouteInfoManager#scanNotActiveBroker

public void scanNotActiveBroker() {
//获得brokerLiveTable
Iterator<Entry<String, BrokerLiveInfo>> it = this.brokerLiveTable.entrySet().iterator();
//遍历brokerLiveTable
while (it.hasNext()) {
Entry<String, BrokerLiveInfo> next = it.next();
long last = next.getValue().getLastUpdateTimestamp();
//如果上次收到心跳包的时间距当前时间超过120s
if ((last + BROKER_CHANNEL_EXPIRED_TIME) < System.currentTimeMillis()) {
//关闭连接
RemotingUtil.closeChannel(next.getValue().getChannel());
//移除broker
it.remove();
//维护路由表
this.onChannelDestroy(next.getKey(), next.getValue().getChannel());
}
}
}

代码:RouteInfoManager#onChannelDestroy

//申请写锁,brokerAddress从brokerLiveTable和filterServerTable移除
this.lock.writeLock().lockInterruptibly();
this.brokerLiveTable.remove(brokerAddrFound);
this.filterServerTable.remove(brokerAddrFound);
//维护brokerAddrTable
String brokerNameFound = null;
boolean removeBrokerName = false;
Iterator<Entry<String, BrokerData>> itBrokerAddrTable =this.brokerAddrTable.entrySet().iterator();
//遍历brokerAddrTable
while (itBrokerAddrTable.hasNext() && (null == brokerNameFound)) {
BrokerData brokerData = itBrokerAddrTable.next().getValue();
//遍历broker地址
Iterator<Entry<Long, String>> it = brokerData.getBrokerAddrs().entrySet().iterator();
while (it.hasNext()) {
Entry<Long, String> entry = it.next();
Long brokerId = entry.getKey();
String brokerAddr = entry.getValue();
//根据broker地址移除brokerAddr
if (brokerAddr.equals(brokerAddrFound)) {
brokerNameFound = brokerData.getBrokerName();
it.remove();
log.info("remove brokerAddr[{}, {}] from brokerAddrTable, because channel destroyed",
brokerId, brokerAddr);
break;
}
}
//如果该brokerName下已无任何Broker地址,则从brokerAddrTable中移除该brokerName
if (brokerData.getBrokerAddrs().isEmpty()) {
removeBrokerName = true;
itBrokerAddrTable.remove();
log.info("remove brokerName[{}] from brokerAddrTable, because channel destroyed",
brokerData.getBrokerName());
}
}
//维护clusterAddrTable
if (brokerNameFound != null && removeBrokerName) {
Iterator<Entry<String, Set<String>>> it = this.clusterAddrTable.entrySet().iterator();
//遍历clusterAddrTable
while (it.hasNext()) {
Entry<String, Set<String>> entry = it.next();
//获得集群名称
String clusterName = entry.getKey();
//获得集群中brokerName集合
Set<String> brokerNames = entry.getValue();
//从brokerNames中移除brokerNameFound
boolean removed = brokerNames.remove(brokerNameFound);
if (removed) {
log.info("remove brokerName[{}], clusterName[{}] from clusterAddrTable, because channel destroyed",
brokerNameFound, clusterName);

if (brokerNames.isEmpty()) {
log.info("remove the clusterName[{}] from clusterAddrTable, because channel destroyed and no broker in this cluster",
clusterName);
//如果集群中不包含任何broker,则移除该集群
it.remove();
}

break;
}
}
}
//维护topicQueueTable队列
if (removeBrokerName) {
//遍历topicQueueTable
Iterator<Entry<String, List<QueueData>>> itTopicQueueTable =
this.topicQueueTable.entrySet().iterator();
while (itTopicQueueTable.hasNext()) {
Entry<String, List<QueueData>> entry = itTopicQueueTable.next();
//主题名称
String topic = entry.getKey();
//队列集合
List<QueueData> queueDataList = entry.getValue();
//遍历该主题队列
Iterator<QueueData> itQueueData = queueDataList.iterator();
while (itQueueData.hasNext()) {
//从队列中移除不活跃broker的信息
QueueData queueData = itQueueData.next();
if (queueData.getBrokerName().equals(brokerNameFound)) {
itQueueData.remove();
log.info("remove topic[{} {}], from topicQueueTable, because channel destroyed",
topic, queueData);
}
}
//如果该topic的队列为空,则移除该topic
if (queueDataList.isEmpty()) {
itTopicQueueTable.remove();
log.info("remove topic[{}] all queue, from topicQueueTable, because channel destroyed",
topic);
}
}
}
//释放写锁
finally {
this.lock.writeLock().unlock();
}

路由发现

  • 路由发现是非实时的,Topic路由变化后NameServer不会主动推送给客户端(Producer),由客户端定时拉取Topic最新的路由
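客户端默认每隔30s从NameServer拉取一次订阅Topic的最新路由(由ClientConfig的pollNameServerInterval控制,默认30000ms)。下面是一个简单示意(仅演示该参数的含义,取值为演示用):

import org.apache.rocketmq.client.producer.DefaultMQProducer;

public class RoutePollDemo {
    public static void main(String[] args) {
        DefaultMQProducer producer = new DefaultMQProducer("demo_group");
        //客户端定时任务按该周期从NameServer更新Topic路由,默认30000ms
        producer.setPollNameServerInterval(30 * 1000);
    }
}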

代码:DefaultRequestProcessor#getRouteInfoByTopic

public RemotingCommand getRouteInfoByTopic(ChannelHandlerContext ctx,
RemotingCommand request) throws RemotingCommandException {
final RemotingCommand response = RemotingCommand.createResponseCommand(null);
final GetRouteInfoRequestHeader requestHeader =
(GetRouteInfoRequestHeader) request.decodeCommandCustomHeader(GetRouteInfoRequestHeader.class);
//调用RouteInfoManager的方法,从路由表topicQueueTable、brokerAddrTable、filterServerTable中分别填充TopicRouteData的List<QueueData>、List<BrokerData>、filterServer
TopicRouteData topicRouteData = this.namesrvController.getRouteInfoManager().pickupTopicRouteData(requestHeader.getTopic());
//如果找到主题对应的路由信息并且该主题为顺序消息,则从NameServer KVConfig中获取关于顺序消息相关的配置填充路由信息
if (topicRouteData != null) {
if (this.namesrvController.getNamesrvConfig().isOrderMessageEnable()) {
String orderTopicConf =
this.namesrvController.getKvConfigManager().getKVConfig(NamesrvUtil.NAMESPACE_ORDER_TOPIC_CONFIG,
requestHeader.getTopic());
topicRouteData.setOrderTopicConf(orderTopicConf);
}

byte[] content = topicRouteData.encode();
response.setBody(content);
response.setCode(ResponseCode.SUCCESS);
response.setRemark(null);
return response;
}

response.setCode(ResponseCode.TOPIC_NOT_EXIST);
response.setRemark("No topic route info in name server for the topic: " + requestHeader.getTopic()
+ FAQUrl.suggestTodo(FAQUrl.APPLY_TOPIC_URL));
return response;
}

Producer

  • Producer的代码都在client模块中

方法和属性

接口方法

  • MQAdmin

    //创建主题
    void createTopic(final String key, final String newTopic, final int queueNum) throws MQClientException;
    //根据时间戳从队列中查找消息偏移量
    long searchOffset(final MessageQueue mq, final long timestamp)
    //查找消息队列中最大的偏移量
    long maxOffset(final MessageQueue mq) throws MQClientException;
    //查找消息队列中最小的偏移量
    long minOffset(final MessageQueue mq)
    //根据偏移量查找消息
    MessageExt viewMessage(final String offsetMsgId) throws RemotingException, MQBrokerException,
    InterruptedException, MQClientException;
    //根据条件查找消息
    QueryResult queryMessage(final String topic, final String key, final int maxNum, final long begin,
    final long end) throws MQClientException, InterruptedException;
    //根据消息ID和主题查找消息
    MessageExt viewMessage(String topic,String msgId) throws RemotingException, MQBrokerException, InterruptedException, MQClientException;
  • MQProducer

    //启动
    void start() throws MQClientException;
    //关闭
    void shutdown();
    //查找该主题下所有消息
    List<MessageQueue> fetchPublishMessageQueues(final String topic) throws MQClientException;
    //同步发送消息
    SendResult send(final Message msg) throws MQClientException, RemotingException, MQBrokerException,
    InterruptedException;
    //同步超时发送消息
    SendResult send(final Message msg, final long timeout) throws MQClientException,
    RemotingException, MQBrokerException, InterruptedException;
    //异步发送消息
    void send(final Message msg, final SendCallback sendCallback) throws MQClientException,
    RemotingException, InterruptedException;
    //异步超时发送消息
    void send(final Message msg, final SendCallback sendCallback, final long timeout)
    throws MQClientException, RemotingException, InterruptedException;
    //发送单向消息
    void sendOneway(final Message msg) throws MQClientException, RemotingException,
    InterruptedException;
    //选择指定队列同步发送消息
    SendResult send(final Message msg, final MessageQueue mq) throws MQClientException,
    RemotingException, MQBrokerException, InterruptedException;
    //选择指定队列异步发送消息
    void send(final Message msg, final MessageQueue mq, final SendCallback sendCallback)
    throws MQClientException, RemotingException, InterruptedException;
    //选择指定队列单项发送消息
    void sendOneway(final Message msg, final MessageQueue mq) throws MQClientException,
    RemotingException, InterruptedException;
    //批量发送消息
    SendResult send(final Collection<Message> msgs) throws MQClientException, RemotingException, MQBrokerException,InterruptedException;

类属性

producerGroup:生产者所属组
createTopicKey:默认Topic
defaultTopicQueueNums:默认主题在每一个Broker队列数量
sendMsgTimeout:发送消息默认超时时间,默认3s
compressMsgBodyOverHowmuch:消息体超过该值则启用压缩,默认4k
retryTimesWhenSendFailed:同步方式发送消息重试次数,默认为2,总共执行3次
retryTimesWhenSendAsyncFailed:异步方式发送消息重试次数,默认为2
retryAnotherBrokerWhenNotStoreOK:当Broker存储结果不为OK时,是否重试发送到另外一个Broker,默认为false
maxMessageSize:允许发送的最大消息长度,默认为4M
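下面给出一个基于上述属性的配置示例(示意代码,取值仅为演示,以DefaultMQProducer实际API为准):

import org.apache.rocketmq.client.producer.DefaultMQProducer;

public class ProducerConfigDemo {
    public static void main(String[] args) throws Exception {
        DefaultMQProducer producer = new DefaultMQProducer("demo_producer_group");
        producer.setNamesrvAddr("127.0.0.1:9876");        //NameServer地址
        producer.setSendMsgTimeout(3000);                 //发送超时,默认3s
        producer.setRetryTimesWhenSendFailed(2);          //同步发送失败重试次数,默认2
        producer.setRetryTimesWhenSendAsyncFailed(2);     //异步发送失败重试次数,默认2
        producer.setCompressMsgBodyOverHowmuch(4 * 1024); //消息体超过4K启用压缩
        producer.setMaxMessageSize(4 * 1024 * 1024);      //最大消息长度4M
        producer.start();
        producer.shutdown();
    }
}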

启动流程

  • JVM中只存在一个MQClientManager实例,维护一个MQClientInstance缓存表

    ConcurrentMap<String/* clientId */, MQClientInstance> factoryTable = new ConcurrentHashMap<String,MQClientInstance>();

  • 一个clientId只会创建一个MQClientInstance,其中封装RocketMQ网络处理API,是Producer、Consumer、NameServer、Broker之间的网络通道
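    clientId由ClientConfig#buildMQClientId构建,大致由「客户端IP@instanceName」拼接而成(生产者启动时instanceName通常会被替换为进程ID,见下文changeInstanceNameToPID),因此同一进程内相同配置的客户端会复用同一个MQClientInstance。此处为便于理解的概括描述,细节以源码为准。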

代码:DefaultMQProducerImpl#start

//检查生产者组是否满足要求
this.checkConfig();
//更改当前instanceName为进程ID
if (!this.defaultMQProducer.getProducerGroup().equals(MixAll.CLIENT_INNER_PRODUCER_GROUP)) {
this.defaultMQProducer.changeInstanceNameToPID();
}
//获得MQ客户端实例
this.mQClientFactory = MQClientManager.getInstance().getAndCreateMQClientInstance(this.defaultMQProducer, rpcHook);

代码:MQClientManager#getAndCreateMQClientInstance

public MQClientInstance getAndCreateMQClientInstance(final ClientConfig clientConfig, 
RPCHook rpcHook) {
//构建客户端ID
String clientId = clientConfig.buildMQClientId();
//根据客户端ID获取客户端实例
MQClientInstance instance = this.factoryTable.get(clientId);
//实例如果为空就创建新的实例,并添加到实例表中
if (null == instance) {
instance =
new MQClientInstance(clientConfig.cloneClientConfig(),
this.factoryIndexGenerator.getAndIncrement(), clientId, rpcHook);
MQClientInstance prev = this.factoryTable.putIfAbsent(clientId, instance);
if (prev != null) {
instance = prev;
log.warn("Returned Previous MQClientInstance for clientId:[{}]", clientId);
} else {
log.info("Created new MQClientInstance for clientId:[{}]", clientId);
}
}

return instance;
}

代码:DefaultMQProducerImpl#start

//注册当前生产者到MQClientInstance管理中,方便后续调用网络请求
boolean registerOK = mQClientFactory.registerProducer(this.defaultMQProducer.getProducerGroup(), this);
if (!registerOK) {
this.serviceState = ServiceState.CREATE_JUST;
throw new MQClientException("The producer group[" + this.defaultMQProducer.getProducerGroup()
+ "] has been created before, specify another name please." + FAQUrl.suggestTodo(FAQUrl.GROUP_NAME_DUPLICATE_URL),
null);
}
//启动生产者
if (startFactory) {
mQClientFactory.start();
}

消息发送

代码:DefaultMQProducerImpl#send(Message msg)

//发送消息
public SendResult send(Message msg) {
return send(msg, this.defaultMQProducer.getSendMsgTimeout());
}

代码:DefaultMQProducerImpl#send(Message msg,long timeout)

//发送消息,默认超时时间为3s
public SendResult send(Message msg,long timeout){
return this.sendDefaultImpl(msg, CommunicationMode.SYNC, null, timeout);
}

代码:DefaultMQProducerImpl#sendDefaultImpl

//校验消息
Validators.checkMessage(msg, this.defaultMQProducer);

1)校验消息

代码:Validators#checkMessage

public static void checkMessage(Message msg, DefaultMQProducer defaultMQProducer)
throws MQClientException {
//判断是否为空
if (null == msg) {
throw new MQClientException(ResponseCode.MESSAGE_ILLEGAL, "the message is null");
}
// 校验主题
Validators.checkTopic(msg.getTopic());

// 校验消息体
if (null == msg.getBody()) {
throw new MQClientException(ResponseCode.MESSAGE_ILLEGAL, "the message body is null");
}

if (0 == msg.getBody().length) {
throw new MQClientException(ResponseCode.MESSAGE_ILLEGAL, "the message body length is zero");
}

if (msg.getBody().length > defaultMQProducer.getMaxMessageSize()) {
throw new MQClientException(ResponseCode.MESSAGE_ILLEGAL,
"the message body size over max value, MAX: " + defaultMQProducer.getMaxMessageSize());
}
}

2)查找路由

代码:DefaultMQProducerImpl#tryToFindTopicPublishInfo

private TopicPublishInfo tryToFindTopicPublishInfo(final String topic) {
//从缓存中获得主题的路由信息
TopicPublishInfo topicPublishInfo = this.topicPublishInfoTable.get(topic);
//路由信息为空,则从NameServer获取路由
if (null == topicPublishInfo || !topicPublishInfo.ok()) {
this.topicPublishInfoTable.putIfAbsent(topic, new TopicPublishInfo());
this.mQClientFactory.updateTopicRouteInfoFromNameServer(topic);
topicPublishInfo = this.topicPublishInfoTable.get(topic);
}

if (topicPublishInfo.isHaveTopicRouterInfo() || topicPublishInfo.ok()) {
return topicPublishInfo;
} else {
//如果未找到当前主题的路由信息,则用默认主题继续查找
this.mQClientFactory.updateTopicRouteInfoFromNameServer(topic, true, this.defaultMQProducer);
topicPublishInfo = this.topicPublishInfoTable.get(topic);
return topicPublishInfo;
}
}

代码:TopicPublishInfo

public class TopicPublishInfo {
private boolean orderTopic = false; //是否是顺序消息
private boolean haveTopicRouterInfo = false;
private List<MessageQueue> messageQueueList = new ArrayList<MessageQueue>(); //该主题消息队列
private volatile ThreadLocalIndex sendWhichQueue = new ThreadLocalIndex();//每选择一次消息队列,该值+1
private TopicRouteData topicRouteData;//关联Topic路由元信息
}

代码:MQClientInstance#updateTopicRouteInfoFromNameServer

TopicRouteData topicRouteData;
//使用默认主题从NameServer获取路由信息
if (isDefault && defaultMQProducer != null) {
topicRouteData = this.mQClientAPIImpl.getDefaultTopicRouteInfoFromNameServer(defaultMQProducer.getCreateTopicKey(),
1000 * 3);
if (topicRouteData != null) {
for (QueueData data : topicRouteData.getQueueDatas()) {
int queueNums = Math.min(defaultMQProducer.getDefaultTopicQueueNums(), data.getReadQueueNums());
data.setReadQueueNums(queueNums);
data.setWriteQueueNums(queueNums);
}
}
} else {
//使用指定主题从NameServer获取路由信息
topicRouteData = this.mQClientAPIImpl.getTopicRouteInfoFromNameServer(topic, 1000 * 3);
}

代码:MQClientInstance#updateTopicRouteInfoFromNameServer

//判断路由是否需要更改
TopicRouteData old = this.topicRouteTable.get(topic);
boolean changed = topicRouteDataIsChange(old, topicRouteData);
if (!changed) {
changed = this.isNeedUpdateTopicRouteInfo(topic);
} else {
log.info("the topic[{}] route info changed, old[{}] ,new[{}]", topic, old, topicRouteData);
}

代码:MQClientInstance#updateTopicRouteInfoFromNameServer

if (changed) {
//将topicRouteData转换为发布队列
TopicPublishInfo publishInfo = topicRouteData2TopicPublishInfo(topic, topicRouteData);
publishInfo.setHaveTopicRouterInfo(true);
//遍历生产者
Iterator<Entry<String, MQProducerInner>> it = this.producerTable.entrySet().iterator();
while (it.hasNext()) {
Entry<String, MQProducerInner> entry = it.next();
MQProducerInner impl = entry.getValue();
if (impl != null) {
//生产者不为空时,更新publishInfo信息
impl.updateTopicPublishInfo(topic, publishInfo);
}
}
}

代码:MQClientInstance#topicRouteData2TopicPublishInfo

public static TopicPublishInfo topicRouteData2TopicPublishInfo(final String topic, final TopicRouteData route) {
//创建TopicPublishInfo对象
TopicPublishInfo info = new TopicPublishInfo();
//关联topicRoute
info.setTopicRouteData(route);
//顺序消息,更新TopicPublishInfo
if (route.getOrderTopicConf() != null && route.getOrderTopicConf().length() > 0) {
String[] brokers = route.getOrderTopicConf().split(";");
for (String broker : brokers) {
String[] item = broker.split(":");
int nums = Integer.parseInt(item[1]);
for (int i = 0; i < nums; i++) {
MessageQueue mq = new MessageQueue(topic, item[0], i);
info.getMessageQueueList().add(mq);
}
}

info.setOrderTopic(true);
} else {
//非顺序消息更新TopicPublishInfo
List<QueueData> qds = route.getQueueDatas();
Collections.sort(qds);
//遍历topic队列信息
for (QueueData qd : qds) {
//是否是写队列
if (PermName.isWriteable(qd.getPerm())) {
BrokerData brokerData = null;
//遍历Broker列表
for (BrokerData bd : route.getBrokerDatas()) {
//根据brokerName找到该队列所属的Broker
if (bd.getBrokerName().equals(qd.getBrokerName())) {
brokerData = bd;
break;
}
}

if (null == brokerData) {
continue;
}

if (!brokerData.getBrokerAddrs().containsKey(MixAll.MASTER_ID)) {
continue;
}
//封装TopicPublishInfo写队列
for (int i = 0; i < qd.getWriteQueueNums(); i++) {
MessageQueue mq = new MessageQueue(topic, qd.getBrokerName(), i);
info.getMessageQueueList().add(mq);
}
}
}

info.setOrderTopic(false);
}
//返回TopicPublishInfo对象
return info;
}

3)选择队列

  • 默认不启用Broker故障延迟机制的队列选择过程

    代码:TopicPublishInfo#selectOneMessageQueue(lastBrokerName)

    public MessageQueue selectOneMessageQueue(final String lastBrokerName) {
    //第一次选择队列
    if (lastBrokerName == null) {
    return selectOneMessageQueue();
    } else {
    //sendWhichQueue自增
    int index = this.sendWhichQueue.getAndIncrement();
    //遍历消息队列集合
    for (int i = 0; i < this.messageQueueList.size(); i++) {
    //sendWhichQueue自增后取模
    int pos = Math.abs(index++) % this.messageQueueList.size();
    if (pos < 0)
    pos = 0;
    //规避上次Broker队列
    MessageQueue mq = this.messageQueueList.get(pos);
    if (!mq.getBrokerName().equals(lastBrokerName)) {
    return mq;
    }
    }
    //如果以上情况都不满足,返回sendWhichQueue取模后的队列
    return selectOneMessageQueue();
    }
    }

    代码:TopicPublishInfo#selectOneMessageQueue()

    //第一次选择队列
    public MessageQueue selectOneMessageQueue() {
    //sendWhichQueue自增
    int index = this.sendWhichQueue.getAndIncrement();
    //对队列大小取模
    int pos = Math.abs(index) % this.messageQueueList.size();
    if (pos < 0)
    pos = 0;
    //返回对应的队列
    return this.messageQueueList.get(pos);
    }
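    为直观理解上述“sendWhichQueue自增后对队列数取模”的轮询效果,下面给出一个独立的小示例(示意代码,与RocketMQ源码无直接关系,队列名称为假设值):

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    public class RoundRobinDemo {
        public static void main(String[] args) {
            List<String> queues = Arrays.asList("broker-a:0", "broker-a:1", "broker-b:0", "broker-b:1");
            AtomicInteger sendWhichQueue = new AtomicInteger(0);
            //连续发送8条消息,输出按0,1,2,3,0,1,2,3的顺序轮询选择队列
            for (int i = 0; i < 8; i++) {
                int index = sendWhichQueue.getAndIncrement();
                int pos = Math.abs(index) % queues.size();
                System.out.println("msg-" + i + " -> " + queues.get(pos));
            }
        }
    }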
  • 启用Broker故障延迟机制的队列选择过程

    public MessageQueue selectOneMessageQueue(final TopicPublishInfo tpInfo, final String lastBrokerName) {
    //Broker故障延迟机制
    if (this.sendLatencyFaultEnable) {
    try {
    //对sendWhichQueue自增
    int index = tpInfo.getSendWhichQueue().getAndIncrement();
    //对消息队列轮询获取一个队列
    for (int i = 0; i < tpInfo.getMessageQueueList().size(); i++) {
    int pos = Math.abs(index++) % tpInfo.getMessageQueueList().size();
    if (pos < 0)
    pos = 0;
    MessageQueue mq = tpInfo.getMessageQueueList().get(pos);
    //验证该队列是否可用
    if (latencyFaultTolerance.isAvailable(mq.getBrokerName())) {
    //可用
    if (null == lastBrokerName || mq.getBrokerName().equals(lastBrokerName))
    return mq;
    }
    }
    //从规避的Broker中选择一个可用的Broker
    final String notBestBroker = latencyFaultTolerance.pickOneAtLeast();
    //获得该Broker的可写队列数量
    int writeQueueNums = tpInfo.getQueueIdByBroker(notBestBroker);
    if (writeQueueNums > 0) {
    //获得一个队列,指定broker和队列ID并返回
    final MessageQueue mq = tpInfo.selectOneMessageQueue();
    if (notBestBroker != null) {
    mq.setBrokerName(notBestBroker);
    mq.setQueueId(tpInfo.getSendWhichQueue().getAndIncrement() % writeQueueNums);
    }
    return mq;
    } else {
    latencyFaultTolerance.remove(notBestBroker);
    }
    } catch (Exception e) {
    log.error("Error occurred when selecting message queue", e);
    }

    return tpInfo.selectOneMessageQueue();
    }

    return tpInfo.selectOneMessageQueue(lastBrokerName);
    }
  • 延迟机制接口规范

    public interface LatencyFaultTolerance<T> {
    //更新失败条目
    void updateFaultItem(final T name, final long currentLatency, final long notAvailableDuration);
    //判断Broker是否可用
    boolean isAvailable(final T name);
    //移除Fault条目
    void remove(final T name);
    //尝试从规避的Broker中选择一个可用的Broker
    T pickOneAtLeast();
    }
    • FaultItem:失败条目

      class FaultItem implements Comparable<FaultItem> {
      //条目唯一键,这里为brokerName
      private final String name;
      //本次消息发送延迟
      private volatile long currentLatency;
      //故障规避的结束时间点,在该时间之前Broker处于规避状态
      private volatile long startTimestamp;
      }
    • 消息失败策略

      public class MQFaultStrategy {
      //根据currentLatency(本次消息发送延迟),从latencyMax尾部向前找到第一个不大于currentLatency的值的索引,如果没有找到则返回0
      private long[] latencyMax = {50L, 100L, 550L, 1000L, 2000L, 3000L, 15000L};
      //根据这个索引从notAvailableDuration取出对应的时间,在该时长内,Broker设置为不可用
      private long[] notAvailableDuration = {0L, 0L, 30000L, 60000L, 120000L, 180000L, 600000L};
      }

      原理分析

    • 代码:DefaultMQProducerImpl#sendDefaultImpl

      sendResult = this.sendKernelImpl(msg, 
      mq,
      communicationMode,
      sendCallback,
      topicPublishInfo,
      timeout - costTime);
      endTimestamp = System.currentTimeMillis();
      this.updateFaultItem(mq.getBrokerName(), endTimestamp - beginTimestampPrev, false);

      如果上述发送过程出现异常,则调用DefaultMQProducerImpl#updateFaultItem

      public void updateFaultItem(final String brokerName, final long currentLatency, boolean isolation) {
      //参数一:broker名称
      //参数二:本次消息发送延迟时间
      //参数三:是否隔离
      this.mqFaultStrategy.updateFaultItem(brokerName, currentLatency, isolation);
      }
    • 代码:MQFaultStrategy#updateFaultItem

      public void updateFaultItem(final String brokerName, final long currentLatency, boolean isolation) {
      if (this.sendLatencyFaultEnable) {
      //计算broker规避的时长
      long duration = computeNotAvailableDuration(isolation ? 30000 : currentLatency);
      //更新该FaultItem规避时长
      this.latencyFaultTolerance.updateFaultItem(brokerName, currentLatency, duration);
      }
      }
    • 代码:MQFaultStrategy#computeNotAvailableDuration

      private long computeNotAvailableDuration(final long currentLatency) {
      //遍历latencyMax
      for (int i = latencyMax.length - 1; i >= 0; i--) {
      //从尾部向前找到第一个不大于currentLatency的latencyMax值
      if (currentLatency >= latencyMax[i])
      return this.notAvailableDuration[i];
      }
      //没有找到则返回0
      return 0;
      }
    • 代码:LatencyFaultToleranceImpl#updateFaultItem

      public void updateFaultItem(final String name, final long currentLatency, final long notAvailableDuration) {
      //获得原FaultItem
      FaultItem old = this.faultItemTable.get(name);
      //为空新建faultItem对象,设置规避时长和开始时间
      if (null == old) {
      final FaultItem faultItem = new FaultItem(name);
      faultItem.setCurrentLatency(currentLatency);
      faultItem.setStartTimestamp(System.currentTimeMillis() + notAvailableDuration);

      old = this.faultItemTable.putIfAbsent(name, faultItem);
      if (old != null) {
      old.setCurrentLatency(currentLatency);
      old.setStartTimestamp(System.currentTimeMillis() + notAvailableDuration);
      }
      } else {
      //更新规避时长和开始时间
      old.setCurrentLatency(currentLatency);
      old.setStartTimestamp(System.currentTimeMillis() + notAvailableDuration);
      }
      }
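      按上述两个数组推算一个例子:若某次发送耗时currentLatency=600ms,从latencyMax尾部向前找到第一个不大于600的值为550(下标2),对应notAvailableDuration[2]=30000ms,即该Broker将被规避30s;若isolation为true,则按30000ms计算,命中15000(下标6),规避时长为600000ms(10分钟)。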

4)发送消息

  • 消息发送API核心入口:**DefaultMQProducerImpl#sendKernelImpl**
private SendResult sendKernelImpl(
final Message msg, //待发送消息
final MessageQueue mq, //消息发送队列
final CommunicationMode communicationMode, //消息发送模式
final SendCallback sendCallback, //异步消息回调函数
final TopicPublishInfo topicPublishInfo, //主题路由信息
final long timeout //超时时间
)

代码:DefaultMQProducerImpl#sendKernelImpl

//获得broker网络地址信息
String brokerAddr = this.mQClientFactory.findBrokerAddressInPublish(mq.getBrokerName());
if (null == brokerAddr) {
//没有找到从NameServer更新broker网络地址信息
tryToFindTopicPublishInfo(mq.getTopic());
brokerAddr = this.mQClientFactory.findBrokerAddressInPublish(mq.getBrokerName());
}
//为消息分配全局唯一ID
if (!(msg instanceof MessageBatch)) {
MessageClientIDSetter.setUniqID(msg);
}

boolean topicWithNamespace = false;
if (null != this.mQClientFactory.getClientConfig().getNamespace()) {
msg.setInstanceId(this.mQClientFactory.getClientConfig().getNamespace());
topicWithNamespace = true;
}
//消息大小超过4K,启用消息压缩
int sysFlag = 0;
boolean msgBodyCompressed = false;
if (this.tryToCompressMessage(msg)) {
sysFlag |= MessageSysFlag.COMPRESSED_FLAG;
msgBodyCompressed = true;
}
//如果是事务消息,设置消息标记MessageSysFlag.TRANSACTION_PREPARED_TYPE
final String tranMsg = msg.getProperty(MessageConst.PROPERTY_TRANSACTION_PREPARED);
if (tranMsg != null && Boolean.parseBoolean(tranMsg)) {
sysFlag |= MessageSysFlag.TRANSACTION_PREPARED_TYPE;
}
//如果注册了消息发送钩子函数,则执行消息发送前的增强逻辑
if (this.hasSendMessageHook()) {
context = new SendMessageContext();
context.setProducer(this);
context.setProducerGroup(this.defaultMQProducer.getProducerGroup());
context.setCommunicationMode(communicationMode);
context.setBornHost(this.defaultMQProducer.getClientIP());
context.setBrokerAddr(brokerAddr);
context.setMessage(msg);
context.setMq(mq);
context.setNamespace(this.defaultMQProducer.getNamespace());
String isTrans = msg.getProperty(MessageConst.PROPERTY_TRANSACTION_PREPARED);
if (isTrans != null && isTrans.equals("true")) {
context.setMsgType(MessageType.Trans_Msg_Half);
}

if (msg.getProperty("__STARTDELIVERTIME") != null || msg.getProperty(MessageConst.PROPERTY_DELAY_TIME_LEVEL) != null) {
context.setMsgType(MessageType.Delay_Msg);
}
this.executeSendMessageHookBefore(context);
}

代码:SendMessageHook

public interface SendMessageHook {
String hookName();

void sendMessageBefore(final SendMessageContext context);

void sendMessageAfter(final SendMessageContext context);
}

代码:DefaultMQProducerImpl#sendKernelImpl

//构建消息发送请求包
SendMessageRequestHeader requestHeader = new SendMessageRequestHeader();
//生产者组
requestHeader.setProducerGroup(this.defaultMQProducer.getProducerGroup());
//主题
requestHeader.setTopic(msg.getTopic());
//默认创建主题Key
requestHeader.setDefaultTopic(this.defaultMQProducer.getCreateTopicKey());
//该主题在单个Broker上的默认队列数
requestHeader.setDefaultTopicQueueNums(this.defaultMQProducer.getDefaultTopicQueueNums());
//队列ID
requestHeader.setQueueId(mq.getQueueId());
//消息系统标记
requestHeader.setSysFlag(sysFlag);
//消息发送时间
requestHeader.setBornTimestamp(System.currentTimeMillis());
//消息标记
requestHeader.setFlag(msg.getFlag());
//消息扩展信息
requestHeader.setProperties(MessageDecoder.messageProperties2String(msg.getProperties()));
//消息重试次数
requestHeader.setReconsumeTimes(0);
requestHeader.setUnitMode(this.isUnitMode());
//是否是批量消息等
requestHeader.setBatch(msg instanceof MessageBatch);
if (requestHeader.getTopic().startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX)) {
String reconsumeTimes = MessageAccessor.getReconsumeTime(msg);
if (reconsumeTimes != null) {
requestHeader.setReconsumeTimes(Integer.valueOf(reconsumeTimes));
MessageAccessor.clearProperty(msg, MessageConst.PROPERTY_RECONSUME_TIME);
}

String maxReconsumeTimes = MessageAccessor.getMaxReconsumeTimes(msg);
if (maxReconsumeTimes != null) {
requestHeader.setMaxReconsumeTimes(Integer.valueOf(maxReconsumeTimes));
MessageAccessor.clearProperty(msg, MessageConst.PROPERTY_MAX_RECONSUME_TIMES);
}
}
case ASYNC:		//异步发送
Message tmpMessage = msg;
boolean messageCloned = false;
if (msgBodyCompressed) {
//If msg body was compressed, msgbody should be reset using prevBody.
//Clone new message using commpressed message body and recover origin massage.
//Fix bug:https://github.com/apache/rocketmq-externals/issues/66
tmpMessage = MessageAccessor.cloneMessage(msg);
messageCloned = true;
msg.setBody(prevBody);
}

if (topicWithNamespace) {
if (!messageCloned) {
tmpMessage = MessageAccessor.cloneMessage(msg);
messageCloned = true;
}
msg.setTopic(NamespaceUtil.withoutNamespace(msg.getTopic(),
this.defaultMQProducer.getNamespace()));
}

long costTimeAsync = System.currentTimeMillis() - beginStartTime;
if (timeout < costTimeAsync) {
throw new RemotingTooMuchRequestException("sendKernelImpl call timeout");
}
sendResult = this.mQClientFactory.getMQClientAPIImpl().sendMessage(
brokerAddr,
mq.getBrokerName(),
tmpMessage,
requestHeader,
timeout - costTimeAsync,
communicationMode,
sendCallback,
topicPublishInfo,
this.mQClientFactory,
this.defaultMQProducer.getRetryTimesWhenSendAsyncFailed(),
context,
this);
break;
case ONEWAY:
case SYNC: //同步发送
long costTimeSync = System.currentTimeMillis() - beginStartTime;
if (timeout < costTimeSync) {
throw new RemotingTooMuchRequestException("sendKernelImpl call timeout");
}
sendResult = this.mQClientFactory.getMQClientAPIImpl().sendMessage(
brokerAddr,
mq.getBrokerName(),
msg,
requestHeader,
timeout - costTimeSync,
communicationMode,
context,
this);
break;
default:
assert false;
break;
}
//如果注册了钩子函数,则发送完毕后执行钩子函数
if (this.hasSendMessageHook()) {
context.setSendResult(sendResult);
this.executeSendMessageHookAfter(context);
}

批量消息发送

  • 同一个主题的多条消息一起打包发送到消息服务端
  • 如果单条消息内容比较长,则打包多条消息发送会影响其他线程发送消息的响应时间
  • 单批次消息总长度不能超过DefaultMQProducer#maxMessageSize
  • 如何将这些消息编码以便服务端能够正确解码出每条消息的消息内容?
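一个简单的批量发送使用示例如下(示意代码,Topic名称与消息内容均为假设值):

import java.util.ArrayList;
import java.util.List;
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;

public class BatchProducerDemo {
    public static void main(String[] args) throws Exception {
        DefaultMQProducer producer = new DefaultMQProducer("batch_producer_group");
        producer.setNamesrvAddr("127.0.0.1:9876");
        producer.start();

        //同一主题的多条消息打包成一个批次发送
        List<Message> messages = new ArrayList<>();
        messages.add(new Message("TopicTest", "TagA", "hello 0".getBytes()));
        messages.add(new Message("TopicTest", "TagA", "hello 1".getBytes()));
        messages.add(new Message("TopicTest", "TagA", "hello 2".getBytes()));

        SendResult result = producer.send(messages);
        System.out.println(result);
        producer.shutdown();
    }
}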

代码:DefaultMQProducer#send

public SendResult send(Collection<Message> msgs) 
throws MQClientException, RemotingException, MQBrokerException, InterruptedException {
//压缩消息集合成一条消息,然后发送出去
return this.defaultMQProducerImpl.send(batch(msgs));
}

代码:DefaultMQProducer#batch

private MessageBatch batch(Collection<Message> msgs) throws MQClientException {
MessageBatch msgBatch;
try {
//将集合消息封装到MessageBatch
msgBatch = MessageBatch.generateFromList(msgs);
//遍历消息集合,检查消息合法性,设置消息ID,设置Topic
for (Message message : msgBatch) {
Validators.checkMessage(message, this);
MessageClientIDSetter.setUniqID(message);
message.setTopic(withNamespace(message.getTopic()));
}
//压缩消息,设置消息body
msgBatch.setBody(msgBatch.encode());
} catch (Exception e) {
throw new MQClientException("Failed to initiate the MessageBatch", e);
}
//设置msgBatch的topic
msgBatch.setTopic(withNamespace(msgBatch.getTopic()));
return msgBatch;
}

消息存储

消息存储核心类

private final MessageStoreConfig messageStoreConfig;	//消息配置属性
private final CommitLog commitLog; //CommitLog文件存储的实现类
private final ConcurrentMap<String/* topic */, ConcurrentMap<Integer/* queueId */, ConsumeQueue>> consumeQueueTable; //消息队列存储缓存表,按照消息主题分组
private final FlushConsumeQueueService flushConsumeQueueService; //消息队列文件刷盘线程
private final CleanCommitLogService cleanCommitLogService; //清除CommitLog文件服务
private final CleanConsumeQueueService cleanConsumeQueueService; //清除ConsumerQueue队列文件服务
private final IndexService indexService; //索引实现类
private final AllocateMappedFileService allocateMappedFileService; //MappedFile分配服务
private final ReputMessageService reputMessageService;//CommitLog消息分发,根据CommitLog文件构建ConsumerQueue、IndexFile文件
private final HAService haService; //存储HA机制
private final ScheduleMessageService scheduleMessageService; //消息服务调度线程
private final StoreStatsService storeStatsService; //消息存储服务
private final TransientStorePool transientStorePool; //消息堆外内存缓存
private final BrokerStatsManager brokerStatsManager; //Broker状态管理器
private final MessageArrivingListener messageArrivingListener; //消息拉取长轮询模式消息达到监听器
private final BrokerConfig brokerConfig; //Broker配置类
private StoreCheckpoint storeCheckpoint; //文件刷盘监测点
private final LinkedList<CommitLogDispatcher> dispatcherList; //CommitLog文件转发请求

消息存储流程

消息存储入口:DefaultMessageStore#putMessage

//如果Broker角色是SLAVE,则不允许写入
if (BrokerRole.SLAVE == this.messageStoreConfig.getBrokerRole()) {
long value = this.printTimes.getAndIncrement();
if ((value % 50000) == 0) {
log.warn("message store is slave mode, so putMessage is forbidden ");
}

return new PutMessageResult(PutMessageStatus.SERVICE_NOT_AVAILABLE, null);
}

//判断当前Broker是否处于可写状态,不可写则直接返回
if (!this.runningFlags.isWriteable()) {
long value = this.printTimes.getAndIncrement();
return new PutMessageResult(PutMessageStatus.SERVICE_NOT_AVAILABLE, null);
} else {
this.printTimes.set(0);
}
//判断消息主题长度是否超过最大限制
if (msg.getTopic().length() > Byte.MAX_VALUE) {
log.warn("putMessage message topic length too long " + msg.getTopic().length());
return new PutMessageResult(PutMessageStatus.MESSAGE_ILLEGAL, null);
}
//判断消息属性长度是否超过限制
if (msg.getPropertiesString() != null && msg.getPropertiesString().length() > Short.MAX_VALUE) {
log.warn("putMessage message properties length too long " + msg.getPropertiesString().length());
return new PutMessageResult(PutMessageStatus.PROPERTIES_SIZE_EXCEEDED, null);
}
//判断操作系统PageCache是否繁忙
if (this.isOSPageCacheBusy()) {
return new PutMessageResult(PutMessageStatus.OS_PAGECACHE_BUSY, null);
}

//将消息写入CommitLog文件
PutMessageResult result = this.commitLog.putMessage(msg);

代码:CommitLog#putMessage

//记录消息存储时间
msg.setStoreTimestamp(beginLockTimestamp);

//如果mappedFile为空或者已满,则创建新的mappedFile文件
if (null == mappedFile || mappedFile.isFull()) {
mappedFile = this.mappedFileQueue.getLastMappedFile(0);
}
//如果创建失败,直接返回
if (null == mappedFile) {
log.error("create mapped file1 error, topic: " + msg.getTopic() + " clientAddr: " + msg.getBornHostString());
beginTimeInLock = 0;
return new PutMessageResult(PutMessageStatus.CREATE_MAPEDFILE_FAILED, null);
}

//写入消息到mappedFile中
result = mappedFile.appendMessage(msg, this.appendMessageCallback);

代码:MappedFile#appendMessagesInner

//获得文件的写入指针
int currentPos = this.wrotePosition.get();

//写指针小于文件大小时才执行写入,否则说明文件已写满
if (currentPos < this.fileSize) {
//通过writeBuffer.slice()创建一个与MappedFile共享的内存区,并设置position为当前指针
ByteBuffer byteBuffer = writeBuffer != null ? writeBuffer.slice() : this.mappedByteBuffer.slice();
byteBuffer.position(currentPos);
AppendMessageResult result = null;
if (messageExt instanceof MessageExtBrokerInner) {
//通过回调方法写入
result = cb.doAppend(this.getFileFromOffset(), byteBuffer, this.fileSize - currentPos, (MessageExtBrokerInner) messageExt);
} else if (messageExt instanceof MessageExtBatch) {
result = cb.doAppend(this.getFileFromOffset(), byteBuffer, this.fileSize - currentPos, (MessageExtBatch) messageExt);
} else {
return new AppendMessageResult(AppendMessageStatus.UNKNOWN_ERROR);
}
this.wrotePosition.addAndGet(result.getWroteBytes());
this.storeTimestamp = result.getStoreTimestamp();
return result;
}

代码:CommitLog#doAppend

//文件写入位置
long wroteOffset = fileFromOffset + byteBuffer.position();
//设置消息ID
this.resetByteBuffer(hostHolder, 8);
String msgId = MessageDecoder.createMessageId(this.msgIdMemory, msgInner.getStoreHostBytes(hostHolder), wroteOffset);

//获得该消息在消息队列中的偏移量
keyBuilder.setLength(0);
keyBuilder.append(msgInner.getTopic());
keyBuilder.append('-');
keyBuilder.append(msgInner.getQueueId());
String key = keyBuilder.toString();
Long queueOffset = CommitLog.this.topicQueueTable.get(key);
if (null == queueOffset) {
queueOffset = 0L;
CommitLog.this.topicQueueTable.put(key, queueOffset);
}

//获得消息属性长度
final byte[] propertiesData =msgInner.getPropertiesString() == null ? null : msgInner.getPropertiesString().getBytes(MessageDecoder.CHARSET_UTF8);

final int propertiesLength = propertiesData == null ? 0 : propertiesData.length;

if (propertiesLength > Short.MAX_VALUE) {
log.warn("putMessage message properties length too long. length={}", propertiesData.length);
return new AppendMessageResult(AppendMessageStatus.PROPERTIES_SIZE_EXCEEDED);
}

//获得消息主题大小
final byte[] topicData = msgInner.getTopic().getBytes(MessageDecoder.CHARSET_UTF8);
final int topicLength = topicData.length;

//获得消息体大小
final int bodyLength = msgInner.getBody() == null ? 0 : msgInner.getBody().length;
//计算消息总长度
final int msgLen = calMsgLength(bodyLength, topicLength, propertiesLength);

代码:CommitLog#calMsgLength

protected static int calMsgLength(int bodyLength, int topicLength, int propertiesLength) {
final int msgLen = 4 //TOTALSIZE
+ 4 //MAGICCODE
+ 4 //BODYCRC
+ 4 //QUEUEID
+ 4 //FLAG
+ 8 //QUEUEOFFSET
+ 8 //PHYSICALOFFSET
+ 4 //SYSFLAG
+ 8 //BORNTIMESTAMP
+ 8 //BORNHOST
+ 8 //STORETIMESTAMP
+ 8 //STOREHOSTADDRESS
+ 4 //RECONSUMETIMES
+ 8 //Prepared Transaction Offset
+ 4 + (bodyLength > 0 ? bodyLength : 0) //BODY
+ 1 + topicLength //TOPIC
+ 2 + (propertiesLength > 0 ? propertiesLength : 0) //propertiesLength
+ 0;
return msgLen;
}
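按上述公式推算一个例子:定长部分共4+4+4+4+4+8+8+4+8+8+8+8+4+8=84字节;若消息体100字节、主题"TopicTest"占9字节、属性串20字节,则msgLen = 84 + (4 + 100) + (1 + 9) + (2 + 20) = 220字节。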

代码:CommitLog#doAppend

//消息长度不能超过4M
if (msgLen > this.maxMessageSize) {
CommitLog.log.warn("message size exceeded, msg total size: " + msgLen + ", msg body size: " + bodyLength
+ ", maxMessageSize: " + this.maxMessageSize);
return new AppendMessageResult(AppendMessageStatus.MESSAGE_SIZE_EXCEEDED);
}

//如果当前文件剩余空间不足以写入该消息,则写入文件结尾标记并返回END_OF_FILE,由上层创建新的CommitLog文件
if ((msgLen + END_FILE_MIN_BLANK_LENGTH) > maxBlank) {
this.resetByteBuffer(this.msgStoreItemMemory, maxBlank);
// 1 TOTALSIZE
this.msgStoreItemMemory.putInt(maxBlank);
// 2 MAGICCODE
this.msgStoreItemMemory.putInt(CommitLog.BLANK_MAGIC_CODE);
// 3 The remaining space may be any value
// Here the length of the specially set maxBlank
final long beginTimeMills = CommitLog.this.defaultMessageStore.now();
byteBuffer.put(this.msgStoreItemMemory.array(), 0, maxBlank);
return new AppendMessageResult(AppendMessageStatus.END_OF_FILE, wroteOffset, maxBlank, msgId, msgInner.getStoreTimestamp(),
queueOffset, CommitLog.this.defaultMessageStore.now() - beginTimeMills);
}

//将消息存储到ByteBuffer中,返回AppendMessageResult
final long beginTimeMills = CommitLog.this.defaultMessageStore.now();
// Write messages to the queue buffer
byteBuffer.put(this.msgStoreItemMemory.array(), 0, msgLen);
AppendMessageResult result = new AppendMessageResult(AppendMessageStatus.PUT_OK, wroteOffset,
msgLen, msgId,msgInner.getStoreTimestamp(),
queueOffset,
CommitLog.this.defaultMessageStore.now()
-beginTimeMills);
switch (tranType) {
case MessageSysFlag.TRANSACTION_PREPARED_TYPE:
case MessageSysFlag.TRANSACTION_ROLLBACK_TYPE:
break;
case MessageSysFlag.TRANSACTION_NOT_TYPE:
case MessageSysFlag.TRANSACTION_COMMIT_TYPE:
//更新消息队列偏移量
CommitLog.this.topicQueueTable.put(key, ++queueOffset);
break;
default:
break;
}

代码:CommitLog#putMessage

//释放锁
putMessageLock.unlock();
//刷盘
handleDiskFlush(result, putMessageResult, msg);
//执行HA主从同步
handleHA(result, putMessageResult, msg);

存储文件

  • commitLog:消息存储目录
  • config:运行期间一些配置信息
  • consumequeue:消息消费队列存储目录
  • index:消息索引文件存储目录
  • abort:如果存在,表明该Broker非正常关闭
  • checkpoint:存储CommitLog文件最后一次刷盘时间戳、consumequeue最后一次刷盘时间戳、index索引文件最后一次刷盘时间戳

存储文件内存映射

  • RocketMQ通过内存映射文件提高IO性能,CommitLog、ConsumeQueue、IndexFile的单个文件都为固定长度,一个文件写满后再创建新文件,文件名为该文件第一条消息对应的全局物理偏移量
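以CommitLog为例(默认单个文件大小为1G,即1073741824字节):第一个文件名为00000000000000000000,写满后第二个文件名为00000000001073741824,即用20位数字左补零表示文件的起始物理偏移量。下面是一个示意(与UtilAll#offset2FileName的思路一致,仅为演示):

public class FileNameDemo {
    public static void main(String[] args) {
        long startOffset = 1073741824L; //第二个CommitLog文件的起始偏移量(默认文件大小1G)
        System.out.println(String.format("%020d", startOffset)); //输出:00000000001073741824
    }
}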

1)MappedFileQueue

String storePath;	//存储目录
int mappedFileSize; // 单个文件大小
CopyOnWriteArrayList<MappedFile> mappedFiles; //MappedFile文件集合
AllocateMappedFileService allocateMappedFileService; //创建MapFile服务类
long flushedWhere = 0; //当前刷盘指针
long committedWhere = 0; //当前数据提交指针,内存中ByteBuffer当前的写指针,该值大于等于flushedWhere
  • 根据存储时间查询MappedFile

    public MappedFile getMappedFileByTime(final long timestamp) {
    Object[] mfs = this.copyMappedFiles(0);

    if (null == mfs)
    return null;
    //遍历MappedFile文件数组
    for (int i = 0; i < mfs.length; i++) {
    MappedFile mappedFile = (MappedFile) mfs[i];
    //MappedFile文件的最后修改时间大于指定时间戳则返回该文件
    if (mappedFile.getLastModifiedTimestamp() >= timestamp) {
    return mappedFile;
    }
    }

    return (MappedFile) mfs[mfs.length - 1];
    }
  • 根据消息偏移量offset查找MappedFile

    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    11
    12
    13
    14
    15
    16
    17
    18
    19
    20
    21
    22
    23
    24
    25
    26
    27
    28
    29
    30
    31
    32
    33
    34
    35
    36
    37
    38
    39
    40
    41
    42
    43
    44
    public MappedFile findMappedFileByOffset(final long offset, final boolean returnFirstOnNotFound) {
    try {
    //获得第一个MappedFile文件
    MappedFile firstMappedFile = this.getFirstMappedFile();
    //获得最后一个MappedFile文件
    MappedFile lastMappedFile = this.getLastMappedFile();
    //第一个文件和最后一个文件均不为空,则进行处理
    if (firstMappedFile != null && lastMappedFile != null) {
    if (offset < firstMappedFile.getFileFromOffset() ||
    offset >= lastMappedFile.getFileFromOffset() + this.mappedFileSize) {
    } else {
    //获得文件索引
    int index = (int) ((offset / this.mappedFileSize)
    - (firstMappedFile.getFileFromOffset() / this.mappedFileSize));
    MappedFile targetFile = null;
    try {
    //根据索引返回目标文件
    targetFile = this.mappedFiles.get(index);
    } catch (Exception ignored) {
    }

    if (targetFile != null && offset >= targetFile.getFileFromOffset()
    && offset < targetFile.getFileFromOffset() + this.mappedFileSize) {
    return targetFile;
    }

    for (MappedFile tmpMappedFile : this.mappedFiles) {
    if (offset >= tmpMappedFile.getFileFromOffset()
    && offset < tmpMappedFile.getFileFromOffset() + this.mappedFileSize) {
    return tmpMappedFile;
    }
    }
    }

    if (returnFirstOnNotFound) {
    return firstMappedFile;
    }
    }
    } catch (Exception e) {
    log.error("findMappedFileByOffset Exception", e);
    }

    return null;
    }
  • 获取存储文件最小偏移量

    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    11
    12
    13
    public long getMinOffset() {

    if (!this.mappedFiles.isEmpty()) {
    try {
    return this.mappedFiles.get(0).getFileFromOffset();
    } catch (IndexOutOfBoundsException e) {
    //continue;
    } catch (Exception e) {
    log.error("getMinOffset has exception.", e);
    }
    }
    return -1;
    }
  • 获取存储文件最大偏移量

    1
    2
    3
    4
    5
    6
    7
    public long getMaxOffset() {
    MappedFile mappedFile = getLastMappedFile();
    if (mappedFile != null) {
    return mappedFile.getFileFromOffset() + mappedFile.getReadPosition();
    }
    return 0;
    }
  • 返回存储文件当前写指针

    1
    2
    3
    4
    5
    6
    7
    public long getMaxWrotePosition() {
    MappedFile mappedFile = getLastMappedFile();
    if (mappedFile != null) {
    return mappedFile.getFileFromOffset() + mappedFile.getWrotePosition();
    }
    return 0;
    }

2)MappedFile

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
int OS_PAGE_SIZE = 1024 * 4;		//操作系统每页大小,默认4K
AtomicLong TOTAL_MAPPED_VIRTUAL_MEMORY = new AtomicLong(0); //当前JVM实例中MappedFile虚拟内存
AtomicInteger TOTAL_MAPPED_FILES = new AtomicInteger(0); //当前JVM实例中MappedFile对象个数
AtomicInteger wrotePosition = new AtomicInteger(0); //当前文件的写指针
AtomicInteger committedPosition = new AtomicInteger(0); //当前文件的提交指针
AtomicInteger flushedPosition = new AtomicInteger(0); //刷写到磁盘指针
int fileSize; //文件大小
FileChannel fileChannel; //文件通道
ByteBuffer writeBuffer = null; //堆外内存ByteBuffer
TransientStorePool transientStorePool = null; //堆外内存池
String fileName; //文件名称
long fileFromOffset; //该文件的起始偏移量(即文件名)
File file; //物理文件
MappedByteBuffer mappedByteBuffer; //物理文件对应的内存映射Buffer
volatile long storeTimestamp = 0; //文件最后一次内容写入时间
boolean firstCreateInQueue = false; //是否是MappedFileQueue队列中第一个文件
  • MappedFile初始化

    • 未开启transientStorePoolEnable

      1
      2
      3
      4
      5
      6
      7
      8
      9
      10
      11
      12
      13
      14
      15
      16
      17
      18
      19
      20
      21
      22
      23
      24
      25
      26
      27
      private void init(final String fileName, final int fileSize) throws IOException {
      this.fileName = fileName;
      this.fileSize = fileSize;
      this.file = new File(fileName);
      this.fileFromOffset = Long.parseLong(this.file.getName());
      boolean ok = false;

      ensureDirOK(this.file.getParent());

      try {
      this.fileChannel = new RandomAccessFile(this.file, "rw").getChannel();
      this.mappedByteBuffer = this.fileChannel.map(MapMode.READ_WRITE, 0, fileSize);
      TOTAL_MAPPED_VIRTUAL_MEMORY.addAndGet(fileSize);
      TOTAL_MAPPED_FILES.incrementAndGet();
      ok = true;
      } catch (FileNotFoundException e) {
      log.error("create file channel " + this.fileName + " Failed. ", e);
      throw e;
      } catch (IOException e) {
      log.error("map file " + this.fileName + " Failed. ", e);
      throw e;
      } finally {
      if (!ok && this.fileChannel != null) {
      this.fileChannel.close();
      }
      }
      }
    • 开启transientStorePoolEnabletransientStorePoolEnable=true表示数据先存储到堆外内存,然后通过Commit线程将数据提交到内存映射Buffer,再通过Flush线程将内存映射Buffer中数据持久化磁盘

      1
      2
      3
      4
      5
      6
      public void init(final String fileName, final int fileSize,
      final TransientStorePool transientStorePool) throws IOException {
      init(fileName, fileSize);
      this.writeBuffer = transientStorePool.borrowBuffer(); //初始化writeBuffer
      this.transientStorePool = transientStorePool;
      }
  • MappedFile提交

    • 提交数据到FileChannel。commitLeastPages为本次提交最小的页数,如果数据不满commitLeastPages,则不执行本次提交操作

    • 如果writeBuffer为空,直接返回wrotePosition指针,无需执行commit操作,因为commit操作的主体是writeBuffer

      1
      2
      3
      4
      5
      6
      7
      8
      9
      10
      11
      12
      13
      14
      15
      16
      17
      18
      19
      20
      21
      22
      23
      public int commit(final int commitLeastPages) {
      if (writeBuffer == null) {
      //no need to commit data to file channel, so just regard wrotePosition as committedPosition.
      return this.wrotePosition.get();
      }
      //判断是否满足提交条件
      if (this.isAbleToCommit(commitLeastPages)) {
      if (this.hold()) {
      commit0(commitLeastPages);
      this.release();
      } else {
      log.warn("in commit, hold failed, commit offset = " + this.committedPosition.get());
      }
      }

      // 所有数据提交后,清空缓冲区
      if (writeBuffer != null && this.transientStorePool != null && this.fileSize == this.committedPosition.get()) {
      this.transientStorePool.returnBuffer(writeBuffer);
      this.writeBuffer = null;
      }

      return this.committedPosition.get();
      }

      MappedFile#isAbleToCommit

      • 判断是否执行commit操作
  • 如果文件已满返回true

    • 如果commitLeastPages大于0,比较wrotePosition与上一次提交指针committedPosition的差值,除以OS_PAGE_SIZE得到当前脏页的数量,如果大于等于commitLeastPages则返回true

    • 如果commitLeastPages小于等于0,表示只要存在脏页就提交

      1
      2
      3
      4
      5
      6
      7
      8
      9
      10
      11
      12
      13
      14
      15
      16
      17
      protected boolean isAbleToCommit(final int commitLeastPages) {
      //上次提交指针
      int flush = this.committedPosition.get();
      //当前文件写指针
      int write = this.wrotePosition.get();
      //文件已写满,直接返回true
      if (this.isFull()) {
      return true;
      }

      if (commitLeastPages > 0) {
      //脏页数量达到commitLeastPages,则允许提交
      return ((write / OS_PAGE_SIZE) - (flush / OS_PAGE_SIZE)) >= commitLeastPages;
      }

      return write > flush;
      }

      MappedFile#commit0

      * 创建writeBuffer的共享缓存区,将新缓存区的position回退到上一次提交的位置(committedPosition),设置limit为wrotePosition(当前最大有效数据指针)
      * 把committedPosition到wrotePosition之间的数据写入到FileChannel中,更新committedPosition指针为wrotePosition
      * commit的作用是将MappedFile的writeBuffer中数据提交到文件通道FileChannel

      protected void commit0(final int commitLeastPages) {
      //写指针
      int writePos = this.wrotePosition.get();
      //上次提交指针
      int lastCommittedPosition = this.committedPosition.get();

      if (writePos - this.committedPosition.get() > 0) {
      try {
      //复制共享内存区域
      ByteBuffer byteBuffer = writeBuffer.slice();
      //设置提交位置是上次提交位置
      byteBuffer.position(lastCommittedPosition);
      //最大提交数量
      byteBuffer.limit(writePos);
      //设置fileChannel位置为上次提交位置
      this.fileChannel.position(lastCommittedPosition);
      //将lastCommittedPosition到writePos的数据复制到FileChannel中
      this.fileChannel.write(byteBuffer);
      //重置提交位置
      this.committedPosition.set(writePos);
      } catch (Throwable e) {
      log.error("Error occurred when commit data to FileChannel.", e);
      }
      }
      }

    MappedFile#flush

    • 刷写磁盘

    • 直接调用MappedByteBuffer或fileChannel的force方法将内存中的数据持久化到磁盘,flushedPosition应该等于MappedByteBuffer中的写指针

    • 如果writeBuffer不为空,则flushPosition应该等于上一次的commit指针

    • 如果writeBuffer为空,直接进入到MappedByteBuffer,wrotePosition代表的是MappedByteBuffer中的指针,故设置flushPosition为wrotePosition

      1
      2
      3
      4
      5
      6
      7
      8
      9
      10
      11
      12
      13
      14
      15
      16
      17
      18
      19
      20
      21
      22
      23
      24
      25
      26
      27
      28
      public int flush(final int flushLeastPages) {
      //数据达到刷盘条件
      if (this.isAbleToFlush(flushLeastPages)) {
      //加锁,同步刷盘
      if (this.hold()) {
      //获得读指针
      int value = getReadPosition();
      try {
      //数据从writeBuffer提交数据到fileChannel再刷新到磁盘
      if (writeBuffer != null || this.fileChannel.position() != 0) {
      this.fileChannel.force(false);
      } else {
      //从mmap刷新数据到磁盘
      this.mappedByteBuffer.force();
      }
      } catch (Throwable e) {
      log.error("Error occurred when force data to disk.", e);
      }
      //更新刷盘位置
      this.flushedPosition.set(value);
      this.release();
      } else {
      log.warn("in flush, hold failed, flush offset = " + this.flushedPosition.get());
      this.flushedPosition.set(getReadPosition());
      }
      }
      return this.getFlushedPosition();
      }

      MappedFile#getReadPosition

  • 获取当前文件最大可读指针

    • 如果writeBuffer为空,则直接返回当前的写指针
  • 如果writeBuffer不为空,则返回上一次提交的指针

    • 在MappedFile设置中,只有提交了的数据(写入到MappedByteBuffer或FileChannel中的数据)才是安全的数据

      1
      2
      3
      4
      public int getReadPosition() {
      //如果writeBuffer为空,直接返回当前写指针wrotePosition;否则返回上一次提交指针committedPosition
      return this.writeBuffer == null ? this.wrotePosition.get() : this.committedPosition.get();
      }

      MappedFile#selectMappedBuffer

    • 查找pos到当前最大可读之间的数据

    • 在整个写入期间都未更改MappedByteBuffer的指针,mappedByteBuffer.slice()方法返回的共享缓存区空间为整个MappedFile

    • 通过设置ByteBuffer的position为待查找的值,读取的字节长度为当前最大可读长度,最终返回的ByteBuffer的limit为size

    • 整个共享缓存区的容量为(MappedFile#fileSize-pos)

    • 在操作SelectMappedBufferResult时,不能对其中包含的ByteBuffer调用flip方法

      1
      2
      3
      4
      5
      6
      7
      8
      9
      10
      11
      12
      13
      14
      15
      16
      17
      18
      19
      20
      21
      public SelectMappedBufferResult selectMappedBuffer(int pos) {
      //获得最大可读指针
      int readPosition = getReadPosition();
      //pos小于最大可读指针,并且大于0
      if (pos < readPosition && pos >= 0) {
      if (this.hold()) {
      //复制mappedByteBuffer读共享区
      ByteBuffer byteBuffer = this.mappedByteBuffer.slice();
      //设置读指针位置
      byteBuffer.position(pos);
      //获得可读范围
      int size = readPosition - pos;
      //设置最大可读范围
      ByteBuffer byteBufferNew = byteBuffer.slice();
      byteBufferNew.limit(size);
      return new SelectMappedBufferResult(this.fileFromOffset + pos, byteBufferNew, size, this);
      }
      }

      return null;
      }

      MappedFile#shutdown

    • MappedFile文件销毁的实现方法为public boolean destroy(long intervalForcibly),intervalForcibly表示拒绝被销毁的最大存活时间

      1
      2
      3
      4
      5
      6
      7
      8
      9
      10
      11
      12
      13
      14
      15
      public void shutdown(final long intervalForcibly) {
      if (this.available) {
      //关闭MapedFile
      this.available = false;
      //设置当前关闭时间戳
      this.firstShutdownTimestamp = System.currentTimeMillis();
      //释放资源
      this.release();
      } else if (this.getRefCount() > 0) {
      if ((System.currentTimeMillis() - this.firstShutdownTimestamp) >= intervalForcibly) {
      this.refCount.set(-1000 - this.getRefCount());
      this.release();
      }
      }
      }

3)TransientStorePool

  • 短暂的存储池
  • RocketMQ单独创建一个MappedByteBuffer内存缓存池,用来临时存储数据
  • 数据先写入该堆外内存中,然后由commit线程定时将数据从该内存复制到与目标物理文件对应的内存映射中。RocketMQ引入该机制的同时提供了内存锁定能力,将这批堆外内存一直锁定在物理内存中,避免被操作系统交换(swap)到磁盘
1
2
3
private final int poolSize;		//availableBuffers个数
private final int fileSize; //每个ByteBuffer大小
private final Deque<ByteBuffer> availableBuffers; //ByteBuffer容器。双端队列

初始化

1
2
3
4
5
6
7
8
9
10
11
12
public void init() {
//创建poolSize个堆外内存
for (int i = 0; i < poolSize; i++) {
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(fileSize);
final long address = ((DirectBuffer) byteBuffer).address();
Pointer pointer = new Pointer(address);
//使用com.sun.jna.Library类库将该批内存锁定,避免被置换到交换区,提高存储性能
LibC.INSTANCE.mlock(pointer, new NativeLong(fileSize));

availableBuffers.offer(byteBuffer);
}
}

实时更新消息消费队列与索引文件

  • 消息消费队列文件、消息属性索引文件是基于CommitLog文件构建的,Producer提交的消息存储在CommitLog后,ConsumerQueue、IndexFile需要及时更新,否则消息无法及时被消费,根据消息属性查找消息也会出现较大延迟
  • RocketMQ通过开启一个线程ReputMessageService来准实时转发CommitLog文件更新事件,相应的任务处理器根据转发的消息更新ConsumerQueue、IndexFile文件

代码:DefaultMessageStore#start

1
2
3
4
//设置CommitLog内存中最大偏移量
this.reputMessageService.setReputFromOffset(maxPhysicalPosInLogicQueue);
//启动
this.reputMessageService.start();

代码:DefaultMessageStore$ReputMessageService#run

1
2
3
4
5
6
7
8
9
10
11
12
13
14
public void run() {
DefaultMessageStore.log.info(this.getServiceName() + " service started");
//每隔1毫秒就继续尝试推送消息到消息消费队列和索引文件
while (!this.isStopped()) {
try {
Thread.sleep(1);
this.doReput();
} catch (Exception e) {
DefaultMessageStore.log.warn(this.getServiceName() + " service has exception. ", e);
}
}

DefaultMessageStore.log.info(this.getServiceName() + " service end");
}

代码:DefaultMessageStore$ReputMessageService#doReput

1
2
3
4
5
6
7
8
9
10
11
//从result中循环遍历消息,一次读一条,创建DispatchRequest对象
for (int readSize = 0; readSize < result.getSize() && doNext; ) {
DispatchRequest dispatchRequest = DefaultMessageStore.this.commitLog.checkMessageAndReturnSize(result.getByteBuffer(), false, false);
int size = dispatchRequest.getBufferSize() == -1 ? dispatchRequest.getMsgSize() : dispatchRequest.getBufferSize();

if (dispatchRequest.isSuccess()) {
if (size > 0) {
DefaultMessageStore.this.doDispatch(dispatchRequest);
}
}
}

DispatchRequest

1
2
3
4
5
6
7
8
9
10
11
12
13
14
String topic; //消息主题名称
int queueId; //消息队列ID
long commitLogOffset; //消息物理偏移量
int msgSize; //消息长度
long tagsCode; //消息过滤tag hashCode
long storeTimestamp; //消息存储时间戳
long consumeQueueOffset; //消息队列偏移量
String keys; //消息索引key
boolean success; //是否成功解析到完整的消息
String uniqKey; //消息唯一键
int sysFlag; //消息系统标记
long preparedTransactionOffset; //消息预处理事务偏移量
Map<String, String> propertiesMap; //消息属性
byte[] bitMap; //位图

1)转发到ConsumerQueue

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
class CommitLogDispatcherBuildConsumeQueue implements CommitLogDispatcher {
@Override
public void dispatch(DispatchRequest request) {
final int tranType = MessageSysFlag.getTransactionValue(request.getSysFlag());
switch (tranType) {
case MessageSysFlag.TRANSACTION_NOT_TYPE:
case MessageSysFlag.TRANSACTION_COMMIT_TYPE:
//消息分发
DefaultMessageStore.this.putMessagePositionInfo(request);
break;
case MessageSysFlag.TRANSACTION_PREPARED_TYPE:
case MessageSysFlag.TRANSACTION_ROLLBACK_TYPE:
break;
}
}
}

代码:DefaultMessageStore#putMessagePositionInfo

1
2
3
4
5
6
public void putMessagePositionInfo(DispatchRequest dispatchRequest) {
//获得消费队列
ConsumeQueue cq = this.findConsumeQueue(dispatchRequest.getTopic(), dispatchRequest.getQueueId());
//消费队列分发消息
cq.putMessagePositionInfoWrapper(dispatchRequest);
}

代码:ConsumeQueue#putMessagePositionInfo

1
2
3
4
5
6
7
8
9
10
11
12
//依次将消息偏移量、消息长度、tag写入到ByteBuffer中
this.byteBufferIndex.flip();
this.byteBufferIndex.limit(CQ_STORE_UNIT_SIZE);
this.byteBufferIndex.putLong(offset);
this.byteBufferIndex.putInt(size);
this.byteBufferIndex.putLong(tagsCode);
//获得内存映射文件
MappedFile mappedFile = this.mappedFileQueue.getLastMappedFile(expectLogicOffset);
if (mappedFile != null) {
//将消息追加到内存映射文件,异步刷盘
return mappedFile.appendMessage(this.byteBufferIndex.array());
}

2)转发到Index

1
2
3
4
5
6
7
8
9
class CommitLogDispatcherBuildIndex implements CommitLogDispatcher {

@Override
public void dispatch(DispatchRequest request) {
if (DefaultMessageStore.this.messageStoreConfig.isMessageIndexEnable()) {
DefaultMessageStore.this.indexService.buildIndex(request);
}
}
}

代码:IndexService#buildIndex

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
public void buildIndex(DispatchRequest req) {
//获得索引文件
IndexFile indexFile = retryGetAndCreateIndexFile();
if (indexFile != null) {
//获得文件最大物理偏移量
long endPhyOffset = indexFile.getEndPhyOffset();
DispatchRequest msg = req;
String topic = msg.getTopic();
String keys = msg.getKeys();
//如果该消息的物理偏移量小于索引文件中的最大物理偏移量,则说明是重复数据,忽略本次索引构建
if (msg.getCommitLogOffset() < endPhyOffset) {
return;
}

final int tranType = MessageSysFlag.getTransactionValue(msg.getSysFlag());
switch (tranType) {
case MessageSysFlag.TRANSACTION_NOT_TYPE:
case MessageSysFlag.TRANSACTION_PREPARED_TYPE:
case MessageSysFlag.TRANSACTION_COMMIT_TYPE:
break;
case MessageSysFlag.TRANSACTION_ROLLBACK_TYPE:
return;
}

//如果消息ID不为空,则添加到Hash索引中
if (req.getUniqKey() != null) {
indexFile = putKey(indexFile, msg, buildKey(topic, req.getUniqKey()));
if (indexFile == null) {
return;
}
}
//构建索引key,RocketMQ支持为同一个消息建立多个索引,多个索引键空格隔开.
if (keys != null && keys.length() > 0) {
String[] keyset = keys.split(MessageConst.KEY_SEPARATOR);
for (int i = 0; i < keyset.length; i++) {
String key = keyset[i];
if (key.length() > 0) {
indexFile = putKey(indexFile, msg, buildKey(topic, key));
if (indexFile == null) {

return;
}
}
}
}
} else {
log.error("build index error, stop building index");
}
}

消息队列和索引文件恢复

  • RocketMQ首先将消息全量存储在CommitLog文件,然后异步生成转发任务更新ConsumerQueue和Index文件
  • 如果消息成功存储到CommitLog文件,但转发任务未成功执行,且Broker由于某种原因宕机,会导致CommitLog、ConsumerQueue、IndexFile文件数据不一致

1)存储文件加载

代码:DefaultMessageStore#load

  • 判断上一次是否异常退出
  • Broker在启动时创建abort文件,在退出时通过JVM钩子函数删除abort文件。如果下次启动时存在abort文件,说明Broker异常退出,CommitLog与ConsumerQueue数据有可能不一致,需要修复
1
2
3
4
5
6
7
8
9
//判断临时文件是否存在
boolean lastExitOK = !this.isTempFileExist();
//根据临时文件判断当前Broker是否异常退出
private boolean isTempFileExist() {
String fileName = StorePathConfigHelper
.getAbortFile(this.messageStoreConfig.getStorePathRootDir());
File file = new File(fileName);
return file.exists();
}

代码:DefaultMessageStore#load

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
//加载延时队列
if (null != scheduleMessageService) {
result = result && this.scheduleMessageService.load();
}

// 加载CommitLog文件
result = result && this.commitLog.load();

// 加载消费队列文件
result = result && this.loadConsumeQueue();

if (result) {
//加载存储监测点,监测点主要记录CommitLog文件、ConsumerQueue文件、Index索引文件的刷盘点
this.storeCheckpoint =new StoreCheckpoint(StorePathConfigHelper.getStoreCheckpoint(this.messageStoreConfig.getStorePathRootDir()));
//加载index文件
this.indexService.load(lastExitOK);
//根据Broker是否异常退出,执行不同的恢复策略
this.recover(lastExitOK);
}

代码:MappedFileQueue#load(加载CommitLog到映射文件)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
//指向CommitLog文件目录
File dir = new File(this.storePath);
//获得文件数组
File[] files = dir.listFiles();
if (files != null) {
// 文件排序
Arrays.sort(files);
//遍历文件
for (File file : files) {
//如果文件大小和配置文件不一致,退出
if (file.length() != this.mappedFileSize) {

return false;
}

try {
//创建映射文件
MappedFile mappedFile = new MappedFile(file.getPath(), mappedFileSize);
mappedFile.setWrotePosition(this.mappedFileSize);
mappedFile.setFlushedPosition(this.mappedFileSize);
mappedFile.setCommittedPosition(this.mappedFileSize);
//将映射文件添加到队列
this.mappedFiles.add(mappedFile);
log.info("load " + file.getPath() + " OK");
} catch (IOException e) {
log.error("load file " + file + " error", e);
return false;
}
}
}

return true;

代码:DefaultMessageStore#loadConsumeQueue(加载消息消费队列)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
//指向消费队列目录
File dirLogic = new File(StorePathConfigHelper.getStorePathConsumeQueue(this.messageStoreConfig.getStorePathRootDir()));
//遍历消费队列目录
File[] fileTopicList = dirLogic.listFiles();
if (fileTopicList != null) {

for (File fileTopic : fileTopicList) {
//获得子目录名称,即topic名称
String topic = fileTopic.getName();
//遍历子目录下的消费队列文件
File[] fileQueueIdList = fileTopic.listFiles();
if (fileQueueIdList != null) {
//遍历文件
for (File fileQueueId : fileQueueIdList) {
//文件名称即队列ID
int queueId;
try {
queueId = Integer.parseInt(fileQueueId.getName());
} catch (NumberFormatException e) {
continue;
}
//创建消费队列并加载到内存
ConsumeQueue logic = new ConsumeQueue(
topic,
queueId,
StorePathConfigHelper.getStorePathConsumeQueue(this.messageStoreConfig.getStorePathRootDir()),
this.getMessageStoreConfig().getMapedFileSizeConsumeQueue(),
this);
this.putConsumeQueue(topic, queueId, logic);
if (!logic.load()) {
return false;
}
}
}
}
}

log.info("load logics queue all over, OK");

return true;

代码:IndexService#load(加载索引文件)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
public boolean load(final boolean lastExitOK) {
//索引文件目录
File dir = new File(this.storePath);
//遍历索引文件
File[] files = dir.listFiles();
if (files != null) {
//文件排序
Arrays.sort(files);
//遍历文件
for (File file : files) {
try {
//加载索引文件
IndexFile f = new IndexFile(file.getPath(), this.hashSlotNum, this.indexNum, 0, 0);
f.load();

if (!lastExitOK) {
//索引文件上次的刷盘时间小于该索引文件的消息时间戳,该文件将立即删除
if (f.getEndTimestamp() > this.defaultMessageStore.getStoreCheckpoint()
.getIndexMsgTimestamp()) {
f.destroy(0);
continue;
}
}
//将索引文件添加到队列
log.info("load index file OK, " + f.getFileName());
this.indexFileList.add(f);
} catch (IOException e) {
log.error("load file {} error", file, e);
return false;
} catch (NumberFormatException e) {
log.error("load file {} error", file, e);
}
}
}

return true;
}

代码:DefaultMessageStore#recover(文件恢复,根据Broker是否正常退出执行不同的恢复策略)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
private void recover(final boolean lastExitOK) {
//获得消费队列中记录的最大物理偏移量
long maxPhyOffsetOfConsumeQueue = this.recoverConsumeQueue();

if (lastExitOK) {
//正常恢复
this.commitLog.recoverNormally(maxPhyOffsetOfConsumeQueue);
} else {
//异常恢复
this.commitLog.recoverAbnormally(maxPhyOffsetOfConsumeQueue);
}
//在CommitLog中保存每个消息消费队列当前的存储逻辑偏移量
this.recoverTopicQueueTable();
}

代码:DefaultMessageStore#recoverTopicQueueTable(恢复ConsumerQueue后,在CommitLog实例中保存每个消息队列当前的存储逻辑偏移量)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
public void recoverTopicQueueTable() {
HashMap<String/* topic-queueid */, Long/* offset */> table = new HashMap<String, Long>(1024);
//CommitLog最小偏移量
long minPhyOffset = this.commitLog.getMinOffset();
//遍历消费队列,将消费队列保存在CommitLog中
for (ConcurrentMap<Integer, ConsumeQueue> maps : this.consumeQueueTable.values()) {
for (ConsumeQueue logic : maps.values()) {
String key = logic.getTopic() + "-" + logic.getQueueId();
table.put(key, logic.getMaxOffsetInQueue());
logic.correctMinOffset(minPhyOffset);
}
}
this.commitLog.setTopicQueueTable(table);
}

2)正常恢复

代码:CommitLog#recoverNormally

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
public void recoverNormally(long maxPhyOffsetOfConsumeQueue) {

final List<MappedFile> mappedFiles = this.mappedFileQueue.getMappedFiles();
if (!mappedFiles.isEmpty()) {
//Broker正常停止再重启时,从倒数第三个开始恢复,如果不足3个文件,则从第一个文件开始恢复。
int index = mappedFiles.size() - 3;
if (index < 0)
index = 0;
MappedFile mappedFile = mappedFiles.get(index);
ByteBuffer byteBuffer = mappedFile.sliceByteBuffer();
long processOffset = mappedFile.getFileFromOffset();
//代表当前已校验通过的offset
long mappedFileOffset = 0;
while (true) {
//查找消息
DispatchRequest dispatchRequest = this.checkMessageAndReturnSize(byteBuffer, checkCRCOnRecover);
//消息长度
int size = dispatchRequest.getMsgSize();
//查找结果为true,并且消息长度大于0,表示消息正确.mappedFileOffset向前移动本消息长度
if (dispatchRequest.isSuccess() && size > 0) {
mappedFileOffset += size;
}
//如果查找结果为true且消息长度等于0,表示已到该文件末尾;如果还有下一个文件,则重置processOffset和mappedFileOffset,继续查找下一个文件,否则跳出循环
else if (dispatchRequest.isSuccess() && size == 0) {
index++;
if (index >= mappedFiles.size()) {
// Current branch can not happen
break;
} else {
//取出每个文件
mappedFile = mappedFiles.get(index);
byteBuffer = mappedFile.sliceByteBuffer();
processOffset = mappedFile.getFileFromOffset();
mappedFileOffset = 0;

}
}
// 查找结果为false,表明该文件未填满所有消息,跳出循环,结束恢复
else if (!dispatchRequest.isSuccess()) {
log.info("recover physics file end, " + mappedFile.getFileName());
break;
}
}
//更新MappedFileQueue的flushedWhere和committedWhere指针
processOffset += mappedFileOffset;
this.mappedFileQueue.setFlushedWhere(processOffset);
this.mappedFileQueue.setCommittedWhere(processOffset);
//删除offset之后的所有文件
this.mappedFileQueue.truncateDirtyFiles(processOffset);


if (maxPhyOffsetOfConsumeQueue >= processOffset) {
this.defaultMessageStore.truncateDirtyLogicFiles(processOffset);
}
} else {
this.mappedFileQueue.setFlushedWhere(0);
this.mappedFileQueue.setCommittedWhere(0);
this.defaultMessageStore.destroyLogics();
}
}

代码:MappedFileQueue#truncateDirtyFiles

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
public void truncateDirtyFiles(long offset) {
List<MappedFile> willRemoveFiles = new ArrayList<MappedFile>();
//遍历目录下文件
for (MappedFile file : this.mappedFiles) {
//文件尾部的偏移量
long fileTailOffset = file.getFileFromOffset() + this.mappedFileSize;
//文件尾部的偏移量大于offset
if (fileTailOffset > offset) {
//offset大于文件的起始偏移量
if (offset >= file.getFileFromOffset()) {
//更新wrotePosition、committedPosition、flushedPosistion
file.setWrotePosition((int) (offset % this.mappedFileSize));
file.setCommittedPosition((int) (offset % this.mappedFileSize));
file.setFlushedPosition((int) (offset % this.mappedFileSize));
} else {
//offset小于文件的起始偏移量,说明该文件是有效文件后面创建的,释放mappedFile占用内存,删除文件
file.destroy(1000);
willRemoveFiles.add(file);
}
}
}

this.deleteExpiredFile(willRemoveFiles);
}

3)异常恢复

  • Broker异常停止文件恢复的实现为CommitLog#recoverAbnormally
  • 异常文件恢复步骤与正常停止文件恢复流程基本相同,差异在于:
    • 正常停止默认从倒数第三个文件开始进行恢复,异常停止需要从最后一个文件往前找第一个消息存储正常的文件
    • 如果CommitLog目录没有消息文件,而消息消费队列目录下存在文件,则需要销毁

代码:CommitLog#recoverAbnormally

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
if (!mappedFiles.isEmpty()) {
// Looking beginning to recover from which file
int index = mappedFiles.size() - 1;
MappedFile mappedFile = null;
for (; index >= 0; index--) {
mappedFile = mappedFiles.get(index);
//判断消息文件是否是一个正确的文件
if (this.isMappedFileMatchedRecover(mappedFile)) {
log.info("recover from this mapped file " + mappedFile.getFileName());
break;
}
}
//根据索引取出mappedFile文件
if (index < 0) {
index = 0;
mappedFile = mappedFiles.get(index);
}
//...验证消息的合法性,并将消息转发到消息消费队列和索引文件

}else{
//未找到mappedFile,重置flushWhere、committedWhere都为0,销毁消息队列文件
this.mappedFileQueue.setFlushedWhere(0);
this.mappedFileQueue.setCommittedWhere(0);
this.defaultMessageStore.destroyLogics();
}

刷盘机制

  • RocketMQ的存储基于JDK NIO的内存映射机制(MappedByteBuffer),消息存储时首先将消息追加到内存,再根据配置的刷盘策略在不同时间刷写磁盘

同步刷盘

  • 消息追加到内存后,立即刷盘
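在阅读源码之前,可以先通过下面这段最小化的示意代码(非RocketMQ源码,类名与字段均为演示假设)理解同步刷盘中发送线程与刷盘线程基于CountDownLatch的协作方式:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// 最小化示意(非RocketMQ源码):同步刷盘"提交请求-阻塞等待-刷盘线程唤醒"的协作方式
public class GroupCommitDemo {
    static class FlushRequest {
        final CountDownLatch latch = new CountDownLatch(1);
        volatile boolean flushOK = false;

        void wakeup(boolean ok) { flushOK = ok; latch.countDown(); }           // 刷盘线程完成后唤醒
        boolean waitForFlush(long timeoutMillis) throws InterruptedException { // 发送线程阻塞等待
            return latch.await(timeoutMillis, TimeUnit.MILLISECONDS) && flushOK;
        }
    }

    public static void main(String[] args) throws Exception {
        FlushRequest request = new FlushRequest();
        new Thread(() -> request.wakeup(true)).start();               // 模拟GroupCommitService完成刷盘
        System.out.println("flushOK=" + request.waitForFlush(5000));  // 最多等待5s,类似syncFlushTimeout默认值
    }
}
```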

代码:CommitLog#handleDiskFlush

1
2
3
4
5
6
7
8
9
10
11
12
//刷盘服务
final GroupCommitService service = (GroupCommitService) this.flushCommitLogService;
if (messageExt.isWaitStoreMsgOK()) {
//封装刷盘请求
GroupCommitRequest request = new GroupCommitRequest(result.getWroteOffset() + result.getWroteBytes());
//提交刷盘请求
service.putRequest(request);
//线程阻塞5秒,等待刷盘结束
boolean flushOK = request.waitForFlush(this.defaultMessageStore.getMessageStoreConfig().getSyncFlushTimeout());
if (!flushOK) {
putMessageResult.setPutMessageStatus(PutMessageStatus.FLUSH_DISK_TIMEOUT);
}

GroupCommitRequest

1
2
3
long nextOffset;	//刷盘点偏移量
CountDownLatch countDownLatch = new CountDownLatch(1); //倒计数锁存器
volatile boolean flushOK = false; //刷盘结果;默认为false

代码:GroupCommitService#run

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
public void run() {
CommitLog.log.info(this.getServiceName() + " service started");

while (!this.isStopped()) {
try {
//线程等待10ms
this.waitForRunning(10);
//执行提交
this.doCommit();
} catch (Exception e) {
CommitLog.log.warn(this.getServiceName() + " service has exception. ", e);
}
}
...
}

代码:GroupCommitService#doCommit

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
private void doCommit() {
//加锁
synchronized (this.requestsRead) {
if (!this.requestsRead.isEmpty()) {
//遍历requestsRead
for (GroupCommitRequest req : this.requestsRead) {
// There may be a message in the next file, so a maximum of
// two times the flush
boolean flushOK = false;
for (int i = 0; i < 2 && !flushOK; i++) {
flushOK = CommitLog.this.mappedFileQueue.getFlushedWhere() >= req.getNextOffset();
//刷盘
if (!flushOK) {
CommitLog.this.mappedFileQueue.flush(0);
}
}
//唤醒发送消息客户端
req.wakeupCustomer(flushOK);
}

//更新刷盘监测点
long storeTimestamp = CommitLog.this.mappedFileQueue.getStoreTimestamp();
if (storeTimestamp > 0) { CommitLog.this.defaultMessageStore.getStoreCheckpoint().setPhysicMsgTimestamp(storeTimestamp);
}

this.requestsRead.clear();
} else {
// Because of individual messages is set to not sync flush, it
// will come to this process
CommitLog.this.mappedFileQueue.flush(0);
}
}
}

异步刷盘

  • 消息追加到内存后,立即响应
  • 如果开启transientStorePoolEnable,RocketMQ单独申请一个与目标物理文件(commitLog)同样大小的堆外内存,使用内存锁定确保其不会被置换到虚拟内存。消息追加到堆外内存,然后提交到物理文件的内存映射中,刷写磁盘
    • 将消息直接追加到ByteBuffer(堆外内存)
    • CommitRealTimeService线程每隔200ms将ByteBuffer新追加内容提交到MappedByteBuffer
    • MappedByteBuffer在内存中追加提交的内容,wrotePosition指针向后移动
    • commit操作成功后,将committedPosition指针前移到已提交的位置
    • FlushRealTimeService线程默认每500ms将MappedByteBuffer中新追加的内存刷写到磁盘
  • 如果未开启transientStorePoolEnable,消息直接追加到物理文件的内存映射(MappedByteBuffer)中,再刷写到磁盘(开启时的commit与flush两阶段流程可参考下面的示意代码)
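下面是一段最小化的示意代码(非RocketMQ源码,文件名为演示假设),用独立的堆外ByteBuffer与FileChannel演示上述commit与flush两个阶段的含义:

```java
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

// 最小化示意(非RocketMQ源码):先写堆外内存 -> commit到FileChannel -> flush落盘
public class TwoPhaseFlushDemo {
    public static void main(String[] args) throws Exception {
        ByteBuffer writeBuffer = ByteBuffer.allocateDirect(1024); // 模拟TransientStorePool借出的堆外writeBuffer
        writeBuffer.put("hello rocketmq".getBytes(StandardCharsets.UTF_8)); // 1.消息先追加到堆外内存,写指针前移

        try (FileChannel fileChannel = new RandomAccessFile("demo_commitlog", "rw").getChannel()) {
            // 2.commit:把[0, 写指针)之间的数据写入FileChannel(对应CommitRealTimeService的工作)
            ByteBuffer view = writeBuffer.duplicate();
            view.flip(); // position=0,limit=已写入的位置
            fileChannel.position(0);
            fileChannel.write(view);

            // 3.flush:强制将FileChannel中的数据刷写到磁盘(对应FlushRealTimeService的工作)
            fileChannel.force(false);
        }
    }
}
```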

代码:CommitLog$CommitRealTimeService#run(提交线程工作机制)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
//间隔时间,默认200ms
int interval = CommitLog.this.defaultMessageStore.getMessageStoreConfig().getCommitIntervalCommitLog();

//一次提交的至少页数
int commitDataLeastPages = CommitLog.this.defaultMessageStore.getMessageStoreConfig().getCommitCommitLogLeastPages();

//两次真实提交的最大间隔,默认200ms
int commitDataThoroughInterval =
CommitLog.this.defaultMessageStore.getMessageStoreConfig().getCommitCommitLogThoroughInterval();

//上次提交间隔超过commitDataThoroughInterval,则忽略提交commitDataThoroughInterval参数,直接提交
long begin = System.currentTimeMillis();
if (begin >= (this.lastCommitTimestamp + commitDataThoroughInterval)) {
this.lastCommitTimestamp = begin;
commitDataLeastPages = 0;
}

//执行提交操作,将待提交数据提交到物理文件的内存映射区
boolean result = CommitLog.this.mappedFileQueue.commit(commitDataLeastPages);
long end = System.currentTimeMillis();
if (!result) {
this.lastCommitTimestamp = end; // result = false means some data committed.
//now wake up flush thread.
//唤醒刷盘线程
flushCommitLogService.wakeup();
}

if (end - begin > 500) {
log.info("Commit data to file costs {} ms", end - begin);
}
this.waitForRunning(interval);

代码:CommitLog$FlushRealTimeService#run(刷盘线程工作机制)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
//表示await方法等待,默认false
boolean flushCommitLogTimed = CommitLog.this.defaultMessageStore.getMessageStoreConfig().isFlushCommitLogTimed();
//线程执行时间间隔
int interval = CommitLog.this.defaultMessageStore.getMessageStoreConfig().getFlushIntervalCommitLog();
//一次刷写任务至少包含页数
int flushPhysicQueueLeastPages = CommitLog.this.defaultMessageStore.getMessageStoreConfig().getFlushCommitLogLeastPages();
//两次真实刷写任务最大间隔
int flushPhysicQueueThoroughInterval =
CommitLog.this.defaultMessageStore.getMessageStoreConfig().getFlushCommitLogThoroughInterval();
...
//距离上次提交间隔超过flushPhysicQueueThoroughInterval,则本次刷盘任务将忽略flushPhysicQueueLeastPages,直接提交
long currentTimeMillis = System.currentTimeMillis();
if (currentTimeMillis >= (this.lastFlushTimestamp + flushPhysicQueueThoroughInterval)) {
this.lastFlushTimestamp = currentTimeMillis;
flushPhysicQueueLeastPages = 0;
printFlushProgress = (printTimes++ % 10) == 0;
}
...
//执行一次刷盘前,先等待指定时间间隔
if (flushCommitLogTimed) {
Thread.sleep(interval);
} else {
this.waitForRunning(interval);
}
...
long begin = System.currentTimeMillis();
//刷写磁盘
CommitLog.this.mappedFileQueue.flush(flushPhysicQueueLeastPages);
long storeTimestamp = CommitLog.this.mappedFileQueue.getStoreTimestamp();
if (storeTimestamp > 0) {
//更新存储监测点文件的时间戳
CommitLog.this.defaultMessageStore.getStoreCheckpoint().setPhysicMsgTimestamp(storeTimestamp);

过期文件删除机制

  • RocketMQ操作CommitLog、ConsumerQueue文件基于内存映射机制,并在启动的时候加载CommitLog、ConsumerQueue目录下的所有文件
  • 为避免内存与磁盘的浪费,引入一种机制删除已过期的文件
  • RocketMQ顺序写CommitLog、ConsumerQueue文件,写操作全部落在最后一个CommitLog或者ConsumerQueue文件,之前的文件在下一个文件创建后将不会再被更新
  • 如果当前文件在一定时间间隔内没有再次被消费,则认为是过期文件,可以被删除
  • RocketMQ不会关注这个文件上的消息是否全部被消费。默认每个文件的过期时间为72小时(通过在Broker配置文件中设置fileReservedTime,单位为小时)
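下面用一段最小化的示意代码(非RocketMQ源码,文件路径为演示假设)说明过期判断的计算方式,与下文MappedFileQueue#deleteExpiredFileByTime中liveMaxTimestamp的算法一致:

```java
import java.io.File;

// 最小化示意(非RocketMQ源码):按"最后修改时间 + fileReservedTime"判断存储文件是否过期
public class ExpiredFileDemo {
    static boolean isExpired(File file, long fileReservedTimeHours) {
        long expiredTime = fileReservedTimeHours * 60 * 60 * 1000L;   // 小时转换为毫秒
        long liveMaxTimestamp = file.lastModified() + expiredTime;    // 文件最晚存活时间
        return System.currentTimeMillis() >= liveMaxTimestamp;
    }

    public static void main(String[] args) {
        // fileReservedTime默认72小时,文件路径仅为示例
        System.out.println(isExpired(new File("store/commitlog/00000000000000000000"), 72));
    }
}
```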

代码:DefaultMessageStore#addScheduleTask

1
2
3
4
5
6
7
8
9
10
private void addScheduleTask() {
//每隔10s调度一次清除文件
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
DefaultMessageStore.this.cleanFilesPeriodically();
}
}, 1000 * 60, this.messageStoreConfig.getCleanResourceInterval(), TimeUnit.MILLISECONDS);
...
}

代码:DefaultMessageStore#cleanFilesPeriodically

1
2
3
4
5
6
private void cleanFilesPeriodically() {
//清除存储文件
this.cleanCommitLogService.run();
//清除消息消费队列文件
this.cleanConsumeQueueService.run();
}

代码:DefaultMessageStore#deleteExpiredFiles

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
private void deleteExpiredFiles() {
//删除的数量
int deleteCount = 0;
//文件保留的时间
long fileReservedTime = DefaultMessageStore.this.getMessageStoreConfig().getFileReservedTime();
//删除物理文件的间隔
int deletePhysicFilesInterval = DefaultMessageStore.this.getMessageStoreConfig().getDeleteCommitLogFilesInterval();
//线程被占用,第一次拒绝删除后能保留的最大时间,超过该时间,文件将被强制删除
int destroyMapedFileIntervalForcibly = DefaultMessageStore.this.getMessageStoreConfig().getDestroyMapedFileIntervalForcibly();

boolean timeup = this.isTimeToDelete();
boolean spacefull = this.isSpaceToDelete();
boolean manualDelete = this.manualDeleteFileSeveralTimes > 0;
// 指定删除文件的时间点,RocketMQ通过deleteWhen设置一天的固定时间执行一次删除过期文件操作,默认4点
// 磁盘空间如果不充足,删除过期文件
// 预留,手工触发
if (timeup || spacefull || manualDelete) {
...执行删除逻辑
}else{
...无作为
}

代码:CleanCommitLogService#isSpaceToDelete(当磁盘空间不足时执行删除过期文件)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
private boolean isSpaceToDelete() {
//磁盘分区的最大使用量
double ratio = DefaultMessageStore.this.getMessageStoreConfig().getDiskMaxUsedSpaceRatio() / 100.0;
//是否需要立即执行删除过期文件操作
cleanImmediately = false;

{
String storePathPhysic = DefaultMessageStore.this.getMessageStoreConfig().getStorePathCommitLog();
//当前CommitLog目录所在的磁盘分区的磁盘使用率
double physicRatio = UtilAll.getDiskPartitionSpaceUsedPercent(storePathPhysic);
//diskSpaceWarningLevelRatio:磁盘使用率警告阈值,默认0.90
if (physicRatio > diskSpaceWarningLevelRatio) {
boolean diskok = DefaultMessageStore.this.runningFlags.getAndMakeDiskFull();
if (diskok) {
DefaultMessageStore.log.error("physic disk maybe full soon " + physicRatio + ", so mark disk full");
}
cleanImmediately = true;
//diskSpaceCleanForciblyRatio:强制清除阈值,默认0.85
} else if (physicRatio > diskSpaceCleanForciblyRatio) {
cleanImmediately = true;
} else {
boolean diskok = DefaultMessageStore.this.runningFlags.getAndMakeDiskOK();
if (!diskok) {
DefaultMessageStore.log.info("physic disk space OK " + physicRatio + ", so mark disk ok");
}
}

if (physicRatio < 0 || physicRatio > ratio) {
DefaultMessageStore.log.info("physic disk maybe full soon, so reclaim space, " + physicRatio);
return true;
}
}

代码:MappedFileQueue#deleteExpiredFileByTime(执行文件销毁和删除)

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
for (int i = 0; i < mfsLength; i++) {
//遍历每个文件
MappedFile mappedFile = (MappedFile) mfs[i];
//计算文件存活时间
long liveMaxTimestamp = mappedFile.getLastModifiedTimestamp() + expiredTime;
//如果文件超过保留时间(默认72小时)或需要立即清理,执行文件删除
if (System.currentTimeMillis() >= liveMaxTimestamp || cleanImmediately) {
if (mappedFile.destroy(intervalForcibly)) {
files.add(mappedFile);
deleteCount++;

if (files.size() >= DELETE_FILES_BATCH_MAX) {
break;
}

if (deleteFilesInterval > 0 && (i + 1) < mfsLength) {
try {
Thread.sleep(deleteFilesInterval);
} catch (InterruptedException e) {
}
}
} else {
break;
}
} else {
//avoid deleting files in the middle
break;
}
}

小结

  • RocketMQ的存储文件包括消息文件(Commitlog)、消息消费队列文件(ConsumerQueue)、Hash索引文件(IndexFile)、监测点文件(checkPoint)、abort(关闭异常文件)

  • 单个消息存储文件、消息消费队列文件、Hash索引文件长度固定,以便使用内存映射机制进行文件的读写操作

  • RocketMQ以文件的起始偏移量来命名文件

  • RocketMQ基于内存映射文件机制提供同步刷盘和异步刷盘,后者先追加消息到内存映射文件,然后启动专门的刷盘线程定时将内存中的文件数据刷写到磁盘

  • RocketMQ为了保证消息发送的高吞吐量,采用单一文件存储所有主题消息(Commitlog),保证消息存储是完全的顺序写,但这样给文件读取带来了不便,为了方便消费,构建了消息消费队列文件,基于主题与队列进行组织。同时RocketMQ为消息实现了Hash索引,可以为消息设置索引键,快速从CommitLog文件中检索消息

  • 当消息达到CommitLog后,通过ReputMessageService线程接近实时地将消息转发给消息消费队列文件与索引文件

  • 为了安全,RocketMQ引入abort文件,记录Broker的停机是否正常。重启Broker时为了保证CommitLog文件、ConsumerQueue文件与Hash索引文件的正确性,分别采用不同策略来恢复文件

  • 删除策略:RocketMQ启用文件过期机制,在磁盘空间不足或默认每天凌晨4点删除过期文件;文件默认保留72小时,删除时并不会判断该文件上的消息是否已被消费

Consumer

消息消费概述

  • 消费以组的模式开展

    • 一个消费组内包含多个消费者,一个消费者组可订阅多个主题
    • 消费模式分为集群模式和广播模式(两种模式的设置示例见本节末尾的示意代码)
      • 集群模式:一个主题下的同一条消息只允许被其中一个消费者消费
      • 广播模式:一个主题下的同一条消息,被集群内的所有消费者消费一次
    • Broker和Consumer之间的消息传递有两种模式:推模式、拉模式
      • pull:Consumer主动发起拉消息请求
      • push:消息到Broker后,推送给Consumer——基于pull模式实现,一个拉取任务完成后开始下一个拉取任务
  • 消息队列负载机制:一个消息队列同一个时间只允许被一个消费者消费,一个消费者可以消费多个消息队列

  • RocketMQ支持局部顺序消息消费——保证同一个消息队列上的消息顺序消费;如果要实现某一个主题的全局顺序消费,将该主题的队列数设置为1,牺牲高可用性
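下面是一段最小化的消费者示意代码(主题、组名均为演示假设),说明集群模式与广播模式只需通过setMessageModel切换:

```java
import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;

// 最小化示意:集群模式/广播模式的切换只需设置MessageModel
public class ModeDemoConsumer {
    public static void main(String[] args) throws Exception {
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("demo_consumer_group");
        consumer.setNamesrvAddr("127.0.0.1:9876");
        consumer.setMessageModel(MessageModel.CLUSTERING); // 集群模式;改为MessageModel.BROADCASTING即广播模式
        consumer.subscribe("TopicTest", "*");
        consumer.registerMessageListener((MessageListenerConcurrently) (msgs, context) -> {
            msgs.forEach(msg -> System.out.println(new String(msg.getBody())));
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
        });
        consumer.start();
    }
}
```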

消息推送模式

消息消费重要方法

1
2
3
4
5
6
7
8
void sendMessageBack(final MessageExt msg, final int delayLevel, final String brokerName):发送消息确认
Set<MessageQueue> fetchSubscribeMessageQueues(final String topic):获取消费者对主题分配了哪些消息队列
void registerMessageListener(final MessageListenerConcurrently messageListener):注册并发事件监听器
void registerMessageListener(final MessageListenerOrderly messageListener):注册顺序消息事件监听器
void subscribe(final String topic, final String subExpression):基于主题订阅消息,消息过滤使用表达式
void subscribe(final String topic, final String fullClassName,final String filterClassSource):基于主题订阅消息,消息过滤使用类模式
void subscribe(final String topic, final MessageSelector selector) :订阅消息,并指定队列选择器
void unsubscribe(final String topic):取消消息订阅

DefaultMQPushConsumer

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
//消费者组
private String consumerGroup;
//消息消费模式
private MessageModel messageModel = MessageModel.CLUSTERING;
//指定从何处开始消费(最大偏移量、最小偏移量、启动时间戳)
private ConsumeFromWhere consumeFromWhere = ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET;
//集群模式下的消息队列负载策略
private AllocateMessageQueueStrategy allocateMessageQueueStrategy;
//订阅信息
private Map<String /* topic */, String /* sub expression */> subscription = new HashMap<String, String>();
//消息业务监听器
private MessageListener messageListener;
//消息消费进度存储器
private OffsetStore offsetStore;
//消费者最小线程数量
private int consumeThreadMin = 20;
//消费者最大线程数量
private int consumeThreadMax = 20;
//并发消息消费时处理队列最大跨度
private int consumeConcurrentlyMaxSpan = 2000;
//队列级别的流量控制阈值,每个队列默认最多缓存1000条消息(每触发1000次流控才打印一次流控日志)
private int pullThresholdForQueue = 1000;
//推模式下任务间隔时间
private long pullInterval = 0;
//推模式下任务拉取的条数,默认32条
private int pullBatchSize = 32;
//每次传入MessageListener#consumerMessage中消息的数量
private int consumeMessageBatchMaxSize = 1;
//是否每次拉取消息都订阅消息
private boolean postSubscriptionWhenPull = false;
//消息重试次数,-1代表16次
private int maxReconsumeTimes = -1;
//消息消费超时时间
private long consumeTimeout = 15;

消费者启动流程

代码:DefaultMQPushConsumerImpl#start

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
public synchronized void start() throws MQClientException {
switch (this.serviceState) {
case CREATE_JUST:

log.info("the consumer [{}] start beginning. messageModel={}, isUnitMode={}", this.defaultMQPushConsumer.getConsumerGroup(),
this.defaultMQPushConsumer.getMessageModel(), this.defaultMQPushConsumer.isUnitMode());
this.serviceState = ServiceState.START_FAILED;
//检查消息者是否合法
this.checkConfig();
//构建主题订阅信息
this.copySubscription();
//设置消费者客户端实例名称为进程ID
if (this.defaultMQPushConsumer.getMessageModel() == MessageModel.CLUSTERING) {
this.defaultMQPushConsumer.changeInstanceNameToPID();
}
//创建MQClient实例
this.mQClientFactory = MQClientManager.getInstance().getAndCreateMQClientInstance(this.defaultMQPushConsumer, this.rpcHook);
//构建rebalanceImpl
this.rebalanceImpl.setConsumerGroup(this.defaultMQPushConsumer.getConsumerGroup());
this.rebalanceImpl.setMessageModel(this.defaultMQPushConsumer.getMessageModel());
this.rebalanceImpl.setAllocateMessageQueueStrategy(this.defaultMQPushConsumer.getAllocateMessageQueueStrategy());
this.rebalanceImpl.setmQClientFactory(this.mQClientFactory);
this.pullAPIWrapper = new PullAPIWrapper(
mQClientFactory,
this.defaultMQPushConsumer.getConsumerGroup(), isUnitMode());
this.pullAPIWrapper.registerFilterMessageHook(filterMessageHookList);
if (this.defaultMQPushConsumer.getOffsetStore() != null) {
this.offsetStore = this.defaultMQPushConsumer.getOffsetStore();
} else {
switch (this.defaultMQPushConsumer.getMessageModel()) {

case BROADCASTING: //消息消费广播模式,将消费进度保存在本地
this.offsetStore = new LocalFileOffsetStore(this.mQClientFactory, this.defaultMQPushConsumer.getConsumerGroup());
break;
case CLUSTERING: //消息消费集群模式,将消费进度保存在远端Broker
this.offsetStore = new RemoteBrokerOffsetStore(this.mQClientFactory, this.defaultMQPushConsumer.getConsumerGroup());
break;
default:
break;
}
this.defaultMQPushConsumer.setOffsetStore(this.offsetStore);
}
this.offsetStore.load();
//创建顺序消息消费服务
if (this.getMessageListenerInner() instanceof MessageListenerOrderly) {
this.consumeOrderly = true;
this.consumeMessageService =
new ConsumeMessageOrderlyService(this, (MessageListenerOrderly) this.getMessageListenerInner());
//创建并发消息消费服务
} else if (this.getMessageListenerInner() instanceof MessageListenerConcurrently) {
this.consumeOrderly = false;
this.consumeMessageService =
new ConsumeMessageConcurrentlyService(this, (MessageListenerConcurrently) this.getMessageListenerInner());
}
//消息消费服务启动
this.consumeMessageService.start();
//注册消费者实例
boolean registerOK = mQClientFactory.registerConsumer(this.defaultMQPushConsumer.getConsumerGroup(), this);

if (!registerOK) {
this.serviceState = ServiceState.CREATE_JUST;
this.consumeMessageService.shutdown();
throw new MQClientException("The consumer group[" + this.defaultMQPushConsumer.getConsumerGroup()
+ "] has been created before, specify another name please." + FAQUrl.suggestTodo(FAQUrl.GROUP_NAME_DUPLICATE_URL),
null);
}
//启动消费者客户端
mQClientFactory.start();
log.info("the consumer [{}] start OK.", this.defaultMQPushConsumer.getConsumerGroup());
this.serviceState = ServiceState.RUNNING;
break;
case RUNNING:
case START_FAILED:
case SHUTDOWN_ALREADY:
throw new MQClientException("The PushConsumer service state not OK, maybe started once, "
+ this.serviceState
+ FAQUrl.suggestTodo(FAQUrl.CLIENT_SERVICE_NOT_OK),
null);
default:
break;
}

this.updateTopicSubscribeInfoWhenSubscriptionChanged();
this.mQClientFactory.checkClientInBroker();
this.mQClientFactory.sendHeartbeatToAllBrokerWithLock();
this.mQClientFactory.rebalanceImmediately();
}

消息拉取

1)PullMessageService实现机制

  • RocketMQ使用一个单独的线程PullMessageService,负责消息的拉取

代码:PullMessageService#run

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
public void run() {
log.info(this.getServiceName() + " service started");
//循环拉取消息
while (!this.isStopped()) {
try {
//从请求队列中获取拉取消息请求
PullRequest pullRequest = this.pullRequestQueue.take();
//拉取消息
this.pullMessage(pullRequest);
} catch (InterruptedException ignored) {
} catch (Exception e) {
log.error("Pull Message Service Run Method exception", e);
}
}

log.info(this.getServiceName() + " service end");
}

PullRequest

1
2
3
4
5
private String consumerGroup;	//消费者组
private MessageQueue messageQueue; //待拉取消息队列
private ProcessQueue processQueue; //消息处理队列
private long nextOffset; //待拉取的MessageQueue偏移量
private boolean lockedFirst = false; //是否被锁定

代码:PullMessageService#pullMessage

1
2
3
4
5
6
7
8
9
10
11
12
private void pullMessage(final PullRequest pullRequest) {
//获得消费者实例
final MQConsumerInner consumer = this.mQClientFactory.selectConsumer(pullRequest.getConsumerGroup());
if (consumer != null) {
//强转为推送模式消费者
DefaultMQPushConsumerImpl impl = (DefaultMQPushConsumerImpl) consumer;
//推送消息
impl.pullMessage(pullRequest);
} else {
log.warn("No matched consumer for the PullRequest {}, drop it", pullRequest);
}
}

2)ProcessQueue实现机制

  • ProcessQueue是MessageQueue在消费端的重现、快照
  • 从Broker默认每次拉取32条消息,按照消息的队列偏移量顺序存放在ProcessQueue,然后将消息提交到Consumer消费线程池,消息成功消费后从ProcessQueue中移除

属性

1
2
3
4
5
6
7
8
9
10
11
12
13
14
//消息容器
private final TreeMap<Long, MessageExt> msgTreeMap = new TreeMap<Long, MessageExt>();
//读写锁
private final ReadWriteLock lockTreeMap = new ReentrantReadWriteLock();
//ProcessQueue中消息总数
private final AtomicLong msgCount = new AtomicLong();
//ProcessQueue队列最大偏移量
private volatile long queueOffsetMax = 0L;
//当前ProcessQueue是否被丢弃
private volatile boolean dropped = false;
//上一次拉取时间戳
private volatile long lastPullTimestamp = System.currentTimeMillis();
//上一次消费时间戳
private volatile long lastConsumeTimestamp = System.currentTimeMillis();

方法

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
//移除消费超时消息
public void cleanExpiredMsg(DefaultMQPushConsumer pushConsumer)
//添加消息
public boolean putMessage(final List<MessageExt> msgs)
//获取消息最大间隔
public long getMaxSpan()
//移除消息
public long removeMessage(final List<MessageExt> msgs)
//将consumingMsgOrderlyTreeMap中消息重新放在msgTreeMap,并清空consumingMsgOrderlyTreeMap
public void rollback()
//将consumingMsgOrderlyTreeMap消息清除,表示成功处理该批消息
public long commit()
//重新处理该批消息
public void makeMessageToCosumeAgain(List<MessageExt> msgs)
//从processQueue中取出batchSize条消息
public List<MessageExt> takeMessags(final int batchSize)

3)消息拉取基本流程

1.客户端发起拉取请求

代码:DefaultMQPushConsumerImpl#pullMessage

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
public void pullMessage(final PullRequest pullRequest) {
//从pullRequest获得ProcessQueue
final ProcessQueue processQueue = pullRequest.getProcessQueue();
//如果处理队列被丢弃,直接返回
if (processQueue.isDropped()) {
log.info("the pull request[{}] is dropped.", pullRequest.toString());
return;
}
//如果处理队列未被丢弃,更新时间戳
pullRequest.getProcessQueue().setLastPullTimestamp(System.currentTimeMillis());

try {
this.makeSureStateOK();
} catch (MQClientException e) {
log.warn("pullMessage exception, consumer state not ok", e);
this.executePullRequestLater(pullRequest, PULL_TIME_DELAY_MILLS_WHEN_EXCEPTION);
return;
}
//如果消费者被挂起,延迟1s后再执行
if (this.isPause()) {
log.warn("consumer was paused, execute pull request later. instanceName={}, group={}", this.defaultMQPushConsumer.getInstanceName(), this.defaultMQPushConsumer.getConsumerGroup());
this.executePullRequestLater(pullRequest, PULL_TIME_DELAY_MILLS_WHEN_SUSPEND);
return;
}
//获得最大待处理消息数量
long cachedMessageCount = processQueue.getMsgCount().get();
//获得最大待处理消息大小
long cachedMessageSizeInMiB = processQueue.getMsgSize().get() / (1024 * 1024);
//从数量进行流控
if (cachedMessageCount > this.defaultMQPushConsumer.getPullThresholdForQueue()) {
this.executePullRequestLater(pullRequest, PULL_TIME_DELAY_MILLS_WHEN_FLOW_CONTROL);
if ((queueFlowControlTimes++ % 1000) == 0) {
log.warn(
"the cached message count exceeds the threshold {}, so do flow control, minOffset={}, maxOffset={}, count={}, size={} MiB, pullRequest={}, flowControlTimes={}",
this.defaultMQPushConsumer.getPullThresholdForQueue(), processQueue.getMsgTreeMap().firstKey(), processQueue.getMsgTreeMap().lastKey(), cachedMessageCount, cachedMessageSizeInMiB, pullRequest, queueFlowControlTimes);
}
return;
}
//从消息大小进行流控
if (cachedMessageSizeInMiB > this.defaultMQPushConsumer.getPullThresholdSizeForQueue()) {
this.executePullRequestLater(pullRequest, PULL_TIME_DELAY_MILLS_WHEN_FLOW_CONTROL);
if ((queueFlowControlTimes++ % 1000) == 0) {
log.warn(
"the cached message size exceeds the threshold {} MiB, so do flow control, minOffset={}, maxOffset={}, count={}, size={} MiB, pullRequest={}, flowControlTimes={}",
this.defaultMQPushConsumer.getPullThresholdSizeForQueue(), processQueue.getMsgTreeMap().firstKey(), processQueue.getMsgTreeMap().lastKey(), cachedMessageCount, cachedMessageSizeInMiB, pullRequest, queueFlowControlTimes);
}
return;
}
//获得订阅信息
final SubscriptionData subscriptionData = this.rebalanceImpl.getSubscriptionInner().get(pullRequest.getMessageQueue().getTopic());
if (null == subscriptionData) {
this.executePullRequestLater(pullRequest, PULL_TIME_DELAY_MILLS_WHEN_EXCEPTION);
log.warn("find the consumer's subscription failed, {}", pullRequest);
return;
}
...
//与服务端交互,获取消息
this.pullAPIWrapper.pullKernelImpl(
pullRequest.getMessageQueue(),
subExpression,
subscriptionData.getExpressionType(),
subscriptionData.getSubVersion(),
pullRequest.getNextOffset(),
this.defaultMQPushConsumer.getPullBatchSize(),
sysFlag,
commitOffsetValue,
BROKER_SUSPEND_MAX_TIME_MILLIS,
CONSUMER_TIMEOUT_MILLIS_WHEN_SUSPEND,
CommunicationMode.ASYNC,
pullCallback
);

}
2.消息服务端Broker组装消息

代码:PullMessageProcessor#processRequest

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
//构建消息过滤器
MessageFilter messageFilter;
if (this.brokerController.getBrokerConfig().isFilterSupportRetry()) {
messageFilter = new ExpressionForRetryMessageFilter(subscriptionData, consumerFilterData,
this.brokerController.getConsumerFilterManager());
} else {
messageFilter = new ExpressionMessageFilter(subscriptionData, consumerFilterData,
this.brokerController.getConsumerFilterManager());
}
//调用MessageStore.getMessage查找消息
final GetMessageResult getMessageResult =
this.brokerController.getMessageStore().getMessage(
requestHeader.getConsumerGroup(), //消费组名称
requestHeader.getTopic(), //主题名称
requestHeader.getQueueId(), //队列ID
requestHeader.getQueueOffset(), //待拉取偏移量
requestHeader.getMaxMsgNums(), //最大拉取消息条数
messageFilter //消息过滤器
);

代码:DefaultMessageStore#getMessage

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
GetMessageStatus status = GetMessageStatus.NO_MESSAGE_IN_QUEUE;
long nextBeginOffset = offset; //查找下一次队列偏移量
long minOffset = 0; //当前消息队列最小偏移量
long maxOffset = 0; //当前消息队列最大偏移量
GetMessageResult getResult = new GetMessageResult();
final long maxOffsetPy = this.commitLog.getMaxOffset(); //当前commitLog最大偏移量
//根据主题名称和队列编号获取消息消费队列
ConsumeQueue consumeQueue = findConsumeQueue(topic, queueId);

...
minOffset = consumeQueue.getMinOffsetInQueue();
maxOffset = consumeQueue.getMaxOffsetInQueue();
//消息偏移量异常情况校对下一次拉取偏移量
if (maxOffset == 0) { //表示当前消息队列中没有消息
status = GetMessageStatus.NO_MESSAGE_IN_QUEUE;
nextBeginOffset = nextOffsetCorrection(offset, 0);
} else if (offset < minOffset) { //待拉取消息的偏移量小于队列的起始偏移量
status = GetMessageStatus.OFFSET_TOO_SMALL;
nextBeginOffset = nextOffsetCorrection(offset, minOffset);
} else if (offset == maxOffset) { //待拉取偏移量为队列最大偏移量
status = GetMessageStatus.OFFSET_OVERFLOW_ONE;
nextBeginOffset = nextOffsetCorrection(offset, offset);
} else if (offset > maxOffset) { //偏移量越界
status = GetMessageStatus.OFFSET_OVERFLOW_BADLY;
if (0 == minOffset) {
nextBeginOffset = nextOffsetCorrection(offset, minOffset);
} else {
nextBeginOffset = nextOffsetCorrection(offset, maxOffset);
}
}
...
//根据消息的物理偏移量和消息长度从CommitLog中读取消息
SelectMappedBufferResult selectResult = this.commitLog.getMessage(offsetPy, sizePy);

代码:PullMessageProcessor#processRequest

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
//根据拉取结果填充responseHeader
response.setRemark(getMessageResult.getStatus().name());
responseHeader.setNextBeginOffset(getMessageResult.getNextBeginOffset());
responseHeader.setMinOffset(getMessageResult.getMinOffset());
responseHeader.setMaxOffset(getMessageResult.getMaxOffset());

//如果当前Broker为从节点且未开启slaveReadEnable,则建议消费者下次从主节点拉取
switch (this.brokerController.getMessageStoreConfig().getBrokerRole()) {
case ASYNC_MASTER:
case SYNC_MASTER:
break;
case SLAVE:
if (!this.brokerController.getBrokerConfig().isSlaveReadEnable()) {
response.setCode(ResponseCode.PULL_RETRY_IMMEDIATELY);
responseHeader.setSuggestWhichBrokerId(MixAll.MASTER_ID);
}
break;
}
...
//GetMessageResult与Response的Code转换
switch (getMessageResult.getStatus()) {
case FOUND: //成功
response.setCode(ResponseCode.SUCCESS);
break;
case MESSAGE_WAS_REMOVING: //消息存放在下一个commitLog中
response.setCode(ResponseCode.PULL_RETRY_IMMEDIATELY); //消息重试
break;
case NO_MATCHED_LOGIC_QUEUE: //未找到队列
case NO_MESSAGE_IN_QUEUE: //队列中未包含消息
if (0 != requestHeader.getQueueOffset()) {
response.setCode(ResponseCode.PULL_OFFSET_MOVED);
//此处省略一条修正拉取偏移量的日志输出,参数包含queueOffset、nextBeginOffset、topic、queueId、consumerGroup
} else {
response.setCode(ResponseCode.PULL_NOT_FOUND);
}
break;
case NO_MATCHED_MESSAGE: //未找到消息
response.setCode(ResponseCode.PULL_RETRY_IMMEDIATELY);
break;
case OFFSET_FOUND_NULL: //消息物理偏移量为空
response.setCode(ResponseCode.PULL_NOT_FOUND);
break;
case OFFSET_OVERFLOW_BADLY: //offset越界
response.setCode(ResponseCode.PULL_OFFSET_MOVED);
// XXX: warn and notify me
log.info("the request offset: {} over flow badly, broker max offset: {}, consumer: {}",
requestHeader.getQueueOffset(), getMessageResult.getMaxOffset(), channel.remoteAddress());
break;
case OFFSET_OVERFLOW_ONE: //待拉取偏移量等于队列最大偏移量
response.setCode(ResponseCode.PULL_NOT_FOUND);
break;
case OFFSET_TOO_SMALL: //待拉取偏移量小于队列起始偏移量
response.setCode(ResponseCode.PULL_OFFSET_MOVED);
//此处省略一条偏移量过小的日志输出,参数包含consumerGroup、topic、queueOffset、minOffset、channel地址
break;
default:
assert false;
break;
}
...
//如果CommitLog标记可用,并且当前Broker为主节点,则更新消息消费进度
boolean storeOffsetEnable = brokerAllowSuspend;
storeOffsetEnable = storeOffsetEnable && hasCommitOffsetFlag;
storeOffsetEnable = storeOffsetEnable
&& this.brokerController.getMessageStoreConfig().getBrokerRole() != BrokerRole.SLAVE;
if (storeOffsetEnable) {
this.brokerController.getConsumerOffsetManager().commitOffset(RemotingHelper.parseChannelRemoteAddr(channel),
requestHeader.getConsumerGroup(), requestHeader.getTopic(), requestHeader.getQueueId(), requestHeader.getCommitOffset());
}
3.消息拉取客户端处理消息

代码:MQClientAPIImpl#processPullResponse

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
private PullResult processPullResponse(
final RemotingCommand response) throws MQBrokerException, RemotingCommandException {
PullStatus pullStatus = PullStatus.NO_NEW_MSG;
//判断响应结果
switch (response.getCode()) {
case ResponseCode.SUCCESS:
pullStatus = PullStatus.FOUND;
break;
case ResponseCode.PULL_NOT_FOUND:
pullStatus = PullStatus.NO_NEW_MSG;
break;
case ResponseCode.PULL_RETRY_IMMEDIATELY:
pullStatus = PullStatus.NO_MATCHED_MSG;
break;
case ResponseCode.PULL_OFFSET_MOVED:
pullStatus = PullStatus.OFFSET_ILLEGAL;
break;

default:
throw new MQBrokerException(response.getCode(), response.getRemark());
}
//解码响应头
PullMessageResponseHeader responseHeader =
(PullMessageResponseHeader) response.decodeCommandCustomHeader(PullMessageResponseHeader.class);
//封装PullResultExt返回
return new PullResultExt(pullStatus, responseHeader.getNextBeginOffset(), responseHeader.getMinOffset(),
responseHeader.getMaxOffset(), null, responseHeader.getSuggestWhichBrokerId(), response.getBody());
}

PullResult类

1
2
3
4
5
private final PullStatus pullStatus;	//拉取结果
private final long nextBeginOffset; //下次拉取偏移量
private final long minOffset; //消息队列最小偏移量
private final long maxOffset; //消息队列最大偏移量
private List<MessageExt> msgFoundList; //拉取的消息列表

代码:DefaultMQPushConsumerImpl$PullCallback#OnSuccess

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
//将拉取到的消息存入processQueue
boolean dispatchToConsume = processQueue.putMessage(pullResult.getMsgFoundList());
//将processQueue提交到consumeMessageService中供消费者消费
DefaultMQPushConsumerImpl.this.consumeMessageService.submitConsumeRequest(
pullResult.getMsgFoundList(),
processQueue,
pullRequest.getMessageQueue(),
dispatchToConsume);
//如果pullInterval大于0,则等待pullInterval毫秒后将pullRequest对象放入到PullMessageService中的pullRequestQueue队列中
if (DefaultMQPushConsumerImpl.this.defaultMQPushConsumer.getPullInterval() > 0) {
DefaultMQPushConsumerImpl.this.executePullRequestLater(pullRequest,
DefaultMQPushConsumerImpl.this.defaultMQPushConsumer.getPullInterval());
} else {
DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
}
4.消息拉取总结

4)消息拉取长轮询机制分析

  • RocketMQ的Push模式是循环向消息服务端发起消息拉取请求
  • 如果不启用长轮询机制,拉取的消息未到达消费队列时,会挂起等待shortPollingTimeMills时间后再去判断消息是否已经到达指定消息队列,如果消息仍未到达,则向Consumer返回PULL_NOT_FOUND(消息不存在)
  • 如果开启长轮询模式,Broker会每隔5s轮询检查一次消息是否可达,有消息达到后立马通知挂起线程再次验证消息是否是自己感兴趣的消息
    • 是则从CommitLog文件提取消息返回给Consumer
    • 否则挂起直到超时,才给Consumer响应。超时时间由Consumer封装在请求参数中,PUSH模式为15s,PULL模式通过DefaultMQPullConsumer#setBrokerSuspendMaxTimeMillis设置
  • RocketMQ通过在Broker客户端配置longPollingEnable为true开启长轮询模式
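以下broker.conf片段仅作示意,longPollingEnable与shortPollingTimeMills均为BrokerConfig中已有的配置项,取值为示例:

# 是否开启长轮询,默认即为true
longPollingEnable=true
# 未开启长轮询时,拉取请求挂起后的短轮询等待时间(毫秒)
shortPollingTimeMills=1000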

代码:PullMessageProcessor#processRequest

//当没有拉取到消息时,通过长轮询方式继续拉取消息
case ResponseCode.PULL_NOT_FOUND:
if (brokerAllowSuspend && hasSuspendFlag) {
long pollingTimeMills = suspendTimeoutMillisLong;
if (!this.brokerController.getBrokerConfig().isLongPollingEnable()) {
pollingTimeMills = this.brokerController.getBrokerConfig().getShortPollingTimeMills();
}

String topic = requestHeader.getTopic();
long offset = requestHeader.getQueueOffset();
int queueId = requestHeader.getQueueId();
//构建拉取请求对象
PullRequest pullRequest = new PullRequest(request, channel, pollingTimeMills,
this.brokerController.getMessageStore().now(), offset, subscriptionData, messageFilter);
//处理拉取请求
this.brokerController.getPullRequestHoldService().suspendPullRequest(topic, queueId, pullRequest);
response = null;
break;
}

PullRequestHoldService方式实现长轮询

代码:PullRequestHoldService#suspendPullRequest

//将拉取消息请求,放置在ManyPullRequest集合中
public void suspendPullRequest(final String topic, final int queueId, final PullRequest pullRequest) {
String key = this.buildKey(topic, queueId);
ManyPullRequest mpr = this.pullRequestTable.get(key);
if (null == mpr) {
mpr = new ManyPullRequest();
ManyPullRequest prev = this.pullRequestTable.putIfAbsent(key, mpr);
if (prev != null) {
mpr = prev;
}
}

mpr.addPullRequest(pullRequest);
}

代码:PullRequestHoldService#run

public void run() {
log.info("{} service started", this.getServiceName());
while (!this.isStopped()) {
try {
//如果开启长轮询每隔5秒判断消息是否到达
if (this.brokerController.getBrokerConfig().isLongPollingEnable()) {
this.waitForRunning(5 * 1000);
} else {
//没有开启长轮询,每隔1s再次尝试
this.waitForRunning(this.brokerController.getBrokerConfig().getShortPollingTimeMills());
}

long beginLockTimestamp = this.systemClock.now();
this.checkHoldRequest();
long costTime = this.systemClock.now() - beginLockTimestamp;
if (costTime > 5 * 1000) {
log.info("[NOTIFYME] check hold request cost {} ms.", costTime);
}
} catch (Throwable e) {
log.warn(this.getServiceName() + " service has exception. ", e);
}
}

log.info("{} service end", this.getServiceName());
}

代码:PullRequestHoldService#checkHoldRequest

//遍历拉取任务
private void checkHoldRequest() {
for (String key : this.pullRequestTable.keySet()) {
String[] kArray = key.split(TOPIC_QUEUEID_SEPARATOR);
if (2 == kArray.length) {
String topic = kArray[0];
int queueId = Integer.parseInt(kArray[1]);
//获得消息偏移量
final long offset = this.brokerController.getMessageStore().getMaxOffsetInQueue(topic, queueId);
try {
//通知有消息到达
this.notifyMessageArriving(topic, queueId, offset);
} catch (Throwable e) {
log.error("check hold request failed. topic={}, queueId={}", topic, queueId, e);
}
}
}
}

代码:PullRequestHoldService#notifyMessageArriving

//如果队列最新偏移量大于待拉取偏移量,且消息匹配过滤条件,则调用executeRequestWhenWakeup唤醒并处理该拉取请求
if (newestOffset > request.getPullFromThisOffset()) {
boolean match = request.getMessageFilter().isMatchedByConsumeQueue(tagsCode,
new ConsumeQueueExt.CqExtUnit(tagsCode, msgStoreTime, filterBitMap));
// match by bit map, need eval again when properties is not null.
if (match && properties != null) {
match = request.getMessageFilter().isMatchedByCommitLog(null, properties);
}

if (match) {
try {
this.brokerController.getPullMessageProcessor().executeRequestWhenWakeup(request.getClientChannel(),
request.getRequestCommand());
} catch (Throwable e) {
log.error("execute request when wakeup failed.", e);
}
continue;
}
}
//如果挂起已超时,则不再继续等待,直接唤醒请求,向客户端返回消息未找到
if (System.currentTimeMillis() >= (request.getSuspendTimestamp() + request.getTimeoutMillis())) {
try {
this.brokerController.getPullMessageProcessor().executeRequestWhenWakeup(request.getClientChannel(),
request.getRequestCommand());
} catch (Throwable e) {
log.error("execute request when wakeup failed.", e);
}
continue;
}

**DefaultMessageStore$ReputMessageService机制**(新消息到达后立即通知挂起的拉取请求,而不必等待PullRequestHoldService的定时检查)

代码:DefaultMessageStore#start

//启动消息重放服务ReputMessageService,消息到达的实时通知由此发起
this.reputMessageService.setReputFromOffset(maxPhysicalPosInLogicQueue);
this.reputMessageService.start();

代码:DefaultMessageStore$ReputMessageService#run

public void run() {
DefaultMessageStore.log.info(this.getServiceName() + " service started");

while (!this.isStopped()) {
try {
Thread.sleep(1);
//核心逻辑入口:将CommitLog中的新消息转发到ConsumeQueue/IndexFile,并触发消息到达通知
this.doReput();
} catch (Exception e) {
DefaultMessageStore.log.warn(this.getServiceName() + " service has exception. ", e);
}
}

DefaultMessageStore.log.info(this.getServiceName() + " service end");
}

代码:DefaultMessageStore$ReputMessageService#doReput

//当新消息到达时,通知消息到达监听器进行处理(仅当前Broker为主节点且开启长轮询时)
if (BrokerRole.SLAVE != DefaultMessageStore.this.getMessageStoreConfig().getBrokerRole()
&& DefaultMessageStore.this.brokerConfig.isLongPollingEnable()) {
DefaultMessageStore.this.messageArrivingListener.arriving(dispatchRequest.getTopic(),
dispatchRequest.getQueueId(), dispatchRequest.getConsumeQueueOffset() + 1,
dispatchRequest.getTagsCode(), dispatchRequest.getStoreTimestamp(),
dispatchRequest.getBitMap(), dispatchRequest.getPropertiesMap());
}

代码:NotifyMessageArrivingListener#arriving

public void arriving(String topic, int queueId, long logicOffset, long tagsCode,
long msgStoreTime, byte[] filterBitMap, Map<String, String> properties) {
this.pullRequestHoldService.notifyMessageArriving(topic, queueId, logicOffset, tagsCode,
msgStoreTime, filterBitMap, properties);
}

消息队列负载与重新分布机制

  • RocketMQ消息队列重新分配由RebalanceService线程实现
  • 一个MQClientInstance持有一个RebalanceService,随着MQClientInstance的启动而启动

代码:RebalanceService#run

public void run() {
log.info(this.getServiceName() + " service started");
//RebalanceService线程默认每隔20s执行一次mqClientFactory.doRebalance方法
while (!this.isStopped()) {
this.waitForRunning(waitInterval);
this.mqClientFactory.doRebalance();
}

log.info(this.getServiceName() + " service end");
}

代码:MQClientInstance#doRebalance

public void doRebalance() {
//MQClientInstance遍历已注册的消费者,对每个消费者执行doRebalance()方法
for (Map.Entry<String, MQConsumerInner> entry : this.consumerTable.entrySet()) {
MQConsumerInner impl = entry.getValue();
if (impl != null) {
try {
impl.doRebalance();
} catch (Throwable e) {
log.error("doRebalance exception", e);
}
}
}
}

代码:RebalanceImpl#doRebalance

//遍历订阅信息,对每个主题的消息队列进行重新负载
public void doRebalance(final boolean isOrder) {
Map<String, SubscriptionData> subTable = this.getSubscriptionInner();
if (subTable != null) {
for (final Map.Entry<String, SubscriptionData> entry : subTable.entrySet()) {
final String topic = entry.getKey();
try {
this.rebalanceByTopic(topic, isOrder);
} catch (Throwable e) {
if (!topic.startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX)) {
log.warn("rebalanceByTopic Exception", e);
}
}
}
}

this.truncateMessageQueueNotMyTopic();
}

代码:RebalanceImpl#rebalanceByTopic

//从主题订阅消息缓存表中获取主题的队列信息
Set<MessageQueue> mqSet = this.topicSubscribeInfoTable.get(topic);
//查找该主题订阅组所有的消费者ID
List<String> cidAll = this.mQClientFactory.findConsumerIdList(topic, consumerGroup);

//给消费者重新分配队列
if (mqSet != null && cidAll != null) {
List<MessageQueue> mqAll = new ArrayList<MessageQueue>();
mqAll.addAll(mqSet);

Collections.sort(mqAll);
Collections.sort(cidAll);

AllocateMessageQueueStrategy strategy = this.allocateMessageQueueStrategy;

List<MessageQueue> allocateResult = null;
try {
allocateResult = strategy.allocate(
this.consumerGroup,
this.mQClientFactory.getClientId(),
mqAll,
cidAll);
} catch (Throwable e) {
log.error("AllocateMessageQueueStrategy.allocate Exception. allocateMessageQueueStrategyName={}", strategy.getName(),
e);
return;
}
  • RocketMQ默认提供5种队列负载均衡分配算法。需要注意:一个Consumer可以分配到多个队列,但同一个消息队列同一时刻只会分配给一个Consumer;如果Consumer个数大于消息队列数量,则部分Consumer分配不到队列,无法消费消息。下面列出其中两种算法的分配示例,示例之后附平均分配算法的简化实现示意
AllocateMessageQueueAveragely:平均分配
举例:8个队列q1,q2,q3,q4,q5,q6,q7,q8,消费者3个:c1,c2,c3
分配如下:
c1:q1,q2,q3
c2:q4,q5,q6
c3:q7,q8
AllocateMessageQueueAveragelyByCircle:平均轮询分配
举例:8个队列q1,q2,q3,q4,q5,q6,q7,q8,消费者3个:c1,c2,c3
分配如下:
c1:q1,q4,q7
c2:q2,q5,q8
c3:q3,q6
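下面给出平均分配算法(AllocateMessageQueueAveragely)核心思路的简化示意代码(非源码原文,类名AverageAllocateDemo仅为演示),可以验证上面示例中的分配结果:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AverageAllocateDemo {
    //mqAll、cidAll均已排序;按当前消费者在cidAll中的下标对mqAll做连续切分
    static List<String> allocate(String currentCid, List<String> mqAll, List<String> cidAll) {
        List<String> result = new ArrayList<String>();
        int index = cidAll.indexOf(currentCid);
        int mod = mqAll.size() % cidAll.size(); //余数:排在前面的mod个消费者各多分一个队列
        int averageSize = mqAll.size() <= cidAll.size() ? 1
                : (mod > 0 && index < mod ? mqAll.size() / cidAll.size() + 1 : mqAll.size() / cidAll.size());
        int startIndex = (mod > 0 && index < mod) ? index * averageSize : index * averageSize + mod;
        int range = Math.min(averageSize, mqAll.size() - startIndex);
        for (int i = 0; i < range; i++) {
            result.add(mqAll.get(startIndex + i));
        }
        return result; //消费者多于队列时,多出的消费者分配不到队列,返回空列表
    }

    public static void main(String[] args) {
        List<String> queues = Arrays.asList("q1", "q2", "q3", "q4", "q5", "q6", "q7", "q8");
        List<String> cids = Arrays.asList("c1", "c2", "c3");
        for (String cid : cids) {
            //输出:c1 -> [q1, q2, q3]、c2 -> [q4, q5, q6]、c3 -> [q7, q8]
            System.out.println(cid + " -> " + allocate(cid, queues, cids));
        }
    }
}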

消息消费过程

  • PullMessageService负责消息拉取,从远端Broker拉取到消息后将其存入ProcessQueue(消息处理队列)
  • 然后调用ConsumeMessageService#submitConsumeRequest进行消息消费,使用线程池消费消息,确保消息拉取与消息消费解耦
  • ConsumeMessageService支持并发消费和顺序消费,对应实现类分别为ConsumeMessageConcurrentlyService与ConsumeMessageOrderlyService

并发消息消费

代码:ConsumeMessageConcurrentlyService#submitConsumeRequest

//单次消费的消息批量大小(consumeMessageBatchMaxSize,默认为1)
final int consumeBatchSize = this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
//msgs.size()默认最多为32条(由pullBatchSize决定)
//如果msgs.size()小于等于consumeBatchSize,则直接将拉取到的消息放入consumeRequest,然后将consumeRequest提交到消费者线程池中
if (msgs.size() <= consumeBatchSize) {
ConsumeRequest consumeRequest = new ConsumeRequest(msgs, processQueue, messageQueue);
try {
this.consumeExecutor.submit(consumeRequest);
} catch (RejectedExecutionException e) {
this.submitConsumeRequestLater(consumeRequest);
}
} else { //如果拉取的消息条数大于consumeBatchSize,则按consumeBatchSize对拉取到的消息分页,逐页提交
for (int total = 0; total < msgs.size(); ) {
List<MessageExt> msgThis = new ArrayList<MessageExt>(consumeBatchSize);
for (int i = 0; i < consumeBatchSize; i++, total++) {
if (total < msgs.size()) {
msgThis.add(msgs.get(total));
} else {
break;
}
}

ConsumeRequest consumeRequest = new ConsumeRequest(msgThis, processQueue, messageQueue);
try {
this.consumeExecutor.submit(consumeRequest);
} catch (RejectedExecutionException e) {
//提交被拒绝时,将剩余消息并入本批,稍后重新提交
for (; total < msgs.size(); total++) {
msgThis.add(msgs.get(total));
}

this.submitConsumeRequestLater(consumeRequest);
}
}
}
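上述拉取批量(默认32条,由pullBatchSize控制)与单次消费批量consumeMessageBatchMaxSize(默认1条)都可以在消费者端调整,下面是一个配置示意(组名仅为示例):

DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("demo_consumer_group"); //示例组名
consumer.setNamesrvAddr("127.0.0.1:9876");
//单次从Broker拉取的最大消息条数,对应上文的msgs.size()
consumer.setPullBatchSize(32);
//单次提交给消息监听器的最大消息条数,对应上文的consumeBatchSize
consumer.setConsumeMessageBatchMaxSize(1);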

代码:ConsumeMessageConcurrentlyService$ConsumeRequest#run

//检查processQueue的dropped,如果为true,则停止该队列消费。
if (this.processQueue.isDropped()) {
log.info("the message queue not be able to consume, because it's dropped. group={} {}", ConsumeMessageConcurrentlyService.this.consumerGroup, this.messageQueue);
return;
}

...
//执行消息处理的钩子函数
if (ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
consumeMessageContext = new ConsumeMessageContext();
consumeMessageContext.setNamespace(defaultMQPushConsumer.getNamespace());
consumeMessageContext.setConsumerGroup(defaultMQPushConsumer.getConsumerGroup());
consumeMessageContext.setProps(new HashMap<String, String>());
consumeMessageContext.setMq(messageQueue);
consumeMessageContext.setMsgList(msgs);
consumeMessageContext.setSuccess(false);
ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.executeHookBefore(consumeMessageContext);
}
...
//调用应用程序消息监听器的consumeMessage方法,进入到具体的消息消费业务处理逻辑
status = listener.consumeMessage(Collections.unmodifiableList(msgs), context);

//执行消息处理后的钩子函数
if (ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
consumeMessageContext.setStatus(status.toString());
consumeMessageContext.setSuccess(ConsumeConcurrentlyStatus.CONSUME_SUCCESS == status);
ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.executeHookAfter(consumeMessageContext);
}
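上面的listener.consumeMessage(...)回调的是应用程序注册的MessageListenerConcurrently,最小使用示意如下(组名、Topic均为示例):

DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("demo_consumer_group");
consumer.setNamesrvAddr("127.0.0.1:9876");
consumer.subscribe("TopicTest", "*");
consumer.registerMessageListener(new MessageListenerConcurrently() {
    @Override
    public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs, ConsumeConcurrentlyContext context) {
        //业务处理逻辑;返回CONSUME_SUCCESS表示消费成功,返回RECONSUME_LATER则稍后重试
        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
    }
});
consumer.start();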

定时消息机制

  • 消息发送到Broker后,不立即被Consumer消费,等到特定的时间后才能被消费
  • RocketMQ不支持任意的时间精度——要支持任意时间精度定时调度,需要在Broker层做消息排序,再加上持久化,将带来巨大的性能消耗
  • 消息延迟级别在Broker端通过messageDelayLevel配置,默认为“1s 5s 10s 30s 1m 2m 3m 4m 5m 6m 7m 8m 9m 10m 20m 30m 1h 2h”,delayLevel=1表示延迟1s,delayLevel=2表示延迟5s,依此类推(发送端设置延迟级别的示例见下)
  • 定时消息实现类为ScheduleMessageService,该类在DefaultMessageStore中创建
  • 通过在DefaultMessageStore中调用load方法加载该类并调用start方法启动
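延迟消息由发送端通过Message#setDelayTimeLevel指定延迟级别,示意如下(Topic仅为示例,producer为已启动的DefaultMQProducer):

Message msg = new Message("TopicTest", "Hello delayed message".getBytes());
//delayLevel=3对应默认配置中的10s
msg.setDelayTimeLevel(3);
producer.send(msg);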

代码:ScheduleMessageService#load

//加载延迟消息的消费进度并构造delayLevelTable。延迟消息的消费进度默认存储路径为/store/config/delayOffset.json
public boolean load() {
boolean result = super.load();
result = result && this.parseDelayLevel();
return result;
}

代码:ScheduleMessageService#start

//遍历延迟级别创建定时任务:根据延迟级别level从offsetTable中获取对应消息消费队列的消费进度,如果不存在则使用0
for (Map.Entry<Integer, Long> entry : this.delayLevelTable.entrySet()) {
Integer level = entry.getKey();
Long timeDelay = entry.getValue();
Long offset = this.offsetTable.get(level);
if (null == offset) {
offset = 0L;
}

if (timeDelay != null) {
this.timer.schedule(new DeliverDelayedMessageTimerTask(level, offset), FIRST_DELAY_TIME);
}
}

//每隔10s持久化一次延迟队列的消息消费进度
this.timer.scheduleAtFixedRate(new TimerTask() {

@Override
public void run() {
try {
if (started.get()) ScheduleMessageService.this.persist();
} catch (Throwable e) {
log.error("scheduleAtFixedRate flush exception", e);
}
}
}, 10000, this.defaultMessageStore.getMessageStoreConfig().getFlushDelayOffsetInterval());

调度机制

  • ScheduleMessageService的start方法启动后,为每一个延迟级别创建一个调度任务,每一个延迟级别对应SCHEDULE_TOPIC_XXXX主题下的一个消息消费队列
  • 定时调度任务的实现类为DeliverDelayedMessageTimerTask,核心实现方法为executeOnTimeup

代码:ScheduleMessageService$DeliverDelayedMessageTimerTask#executeOnTimeup

//根据队列ID与延迟主题查找消息消费队列
ConsumeQueue cq =
ScheduleMessageService.this.defaultMessageStore.findConsumeQueue(SCHEDULE_TOPIC,
delayLevel2QueueId(delayLevel));
...
//根据偏移量从消息消费队列中获取当前队列中所有有效的消息
SelectMappedBufferResult bufferCQ = cq.getIndexBuffer(this.offset);

...
//遍历ConsumeQueue,解析消息队列中消息
for (; i < bufferCQ.getSize(); i += ConsumeQueue.CQ_STORE_UNIT_SIZE) {
long offsetPy = bufferCQ.getByteBuffer().getLong();
int sizePy = bufferCQ.getByteBuffer().getInt();
long tagsCode = bufferCQ.getByteBuffer().getLong();

if (cq.isExtAddr(tagsCode)) {
if (cq.getExt(tagsCode, cqExtUnit)) {
tagsCode = cqExtUnit.getTagsCode();
} else {
//can't find ext content.So re compute tags code.
log.error("[BUG] can't find consume queue extend file content!addr={}, offsetPy={}, sizePy={}",
tagsCode, offsetPy, sizePy);
long msgStoreTime = defaultMessageStore.getCommitLog().pickupStoreTimestamp(offsetPy, sizePy);
tagsCode = computeDeliverTimestamp(delayLevel, msgStoreTime);
}
}

long now = System.currentTimeMillis();
long deliverTimestamp = this.correctDeliverTimestamp(now, tagsCode);

...
//根据消息偏移量与消息大小,从CommitLog中查找消息.
MessageExt msgExt =
ScheduleMessageService.this.defaultMessageStore.lookMessageByOffset(
offsetPy, sizePy);
}

顺序消息

  • 实现类是org.apache.rocketmq.client.impl.consumer.ConsumeMessageOrderlyService

代码:ConsumeMessageOrderlyService#start

public void start() {
//如果消息模式为集群模式,启动定时任务,默认每隔20s执行一次锁定分配给自己的消息消费队列
if (MessageModel.CLUSTERING.equals(ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.messageModel())) {
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
ConsumeMessageOrderlyService.this.lockMQPeriodically();
}
}, 1000 * 1, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
}
}

代码:ConsumeMessageOrderlyService#submitConsumeRequest

//构建消费任务ConsumeRequest,并提交到消费线程池中
public void submitConsumeRequest(
final List<MessageExt> msgs,
final ProcessQueue processQueue,
final MessageQueue messageQueue,
final boolean dispathToConsume) {
if (dispathToConsume) {
ConsumeRequest consumeRequest = new ConsumeRequest(processQueue, messageQueue);
this.consumeExecutor.submit(consumeRequest);
}
}

代码:ConsumeMessageOrderlyService$ConsumeRequest#run

//如果消息处理队列已被丢弃,则停止本次消费任务
if (this.processQueue.isDropped()) {
log.warn("run, the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
return;
}
//根据消息队列获取其对应的锁对象objLock,消费消息前先独占该锁;顺序消费时,一个消息消费队列同一时刻只会被一个消费线程处理
final Object objLock = messageQueueLock.fetchLockObject(this.messageQueue);
synchronized (objLock) {
...
}
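顺序消费在应用侧注册的是MessageListenerOrderly,最小使用示意如下(组名、Topic均为示例):

DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("demo_orderly_group");
consumer.setNamesrvAddr("127.0.0.1:9876");
consumer.subscribe("TopicTest", "*");
consumer.registerMessageListener(new MessageListenerOrderly() {
    @Override
    public ConsumeOrderlyStatus consumeMessage(List<MessageExt> msgs, ConsumeOrderlyContext context) {
        //同一消息队列中的消息会被串行回调,从而保证顺序
        return ConsumeOrderlyStatus.SUCCESS;
    }
});
consumer.start();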

小结

  • RocketMQ消息消费模式分为集群模式和广播模式(消费模式的设置示例见本节末尾)

  • 消息队列负载:由RebalanceService线程默认每隔20s进行一次消息队列负载,根据当前消费者组内Consumer个数与Topic队列数量,按照某一种负载算法进行队列分配

  • 消息拉取:由PullMessageService线程根据RebalanceService线程创建的拉取任务进行拉取,默认每次拉取32条消息,提交给Consumer消费线程后继续下一次消息拉取。如果消息消费过慢产生消息堆积会触发消息消费拉取流控

  • 并发消息消费:消费线程池中的线程可以并发对同一个消息队列的消息进行消费,消费成功后,取出消息队列中最小的消息偏移量作为消费进度偏移量存储于消费进度存储文件中

    • 集群模式:消息消费进度存储在Broker(消息服务器)
    • 广播模式:消息消费进度存储在Consumer
  • RocketMQ不支持任意精度的定时调度消息,只支持自定义的消息延迟级别

  • 顺序消息一般使用集群模式,Consumer内的线程池中的线程对消息消费队列只能串行消费,和并发消息消费最本质的区别是消息消费时必须锁定消息消费队列,Broker会存储消息消费队列的锁占用情况
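消费模式在消费者端通过setMessageModel切换,示意如下(consumer为DefaultMQPushConsumer实例):

//集群模式(默认):同组消费者分摊队列,消费进度存储在Broker端
consumer.setMessageModel(MessageModel.CLUSTERING);
//广播模式:每个消费者消费全部队列,消费进度存储在消费者本地
//consumer.setMessageModel(MessageModel.BROADCASTING);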