Prerequisites: Nacos and a MySQL database must already be installed.
Step 1: Download the Seata server package, upload it to your Linux machine, and extract it
Any version from 1.2 to 1.4 works.
Releases page: https://github.com/seata/seata/releases
```
[root@iZuf6f6me43woqf6q431tqZ mysoft]# ls
docker-compose  jdk1.8.0_212  maven-3.6.3  mysql-8.0.23  nginx-1.20.1  node  redis-6.2.1  seata  seata-server-1.2.0.tar.gz  tomcat-9.0.45
```
Step 2: Go into the extracted seata/conf directory and edit registry.conf as shown below: set type to nacos and fill in your own Nacos connection details. This registers the Seata server with Nacos.
Note: I recommend using the same values for application, cluster, and group as shown here; otherwise the clients may fail to find the service when they start.
type = "nacos" nacos { application = "serverAddr" serverAddr = "127.0.0.1:8848" namespace = "aa94072f-3269-409c-a065-ed0910a03f45" cluster = "default" group = "DEFAULT_GROUP" }
Step 3: Start Seata from the seata/bin directory
Start it in the background, specifying the IP and port to register with:

```
nohup ./seata-server.sh -h 127.0.0.1 -p 8091 > log.out 2>&1 &
```

Notes: (1) You should replace 127.0.0.1 with your machine's public IP here; otherwise the clients may fail to find the service when they start.
(2) If startup fails with an out-of-memory error, edit seata-server.sh and change the 2048 (the JVM heap size, in MB) to 256.
If it starts without errors, open the Nacos console and check that the Seata server has been registered.
Step 4: Create a microservice project and write a call chain that needs a distributed transaction. For example, I have three microservices here: order, account, and storage.
When a user places an order, the order is created first, then the account balance is deducted, and finally the stock is deducted. We throw an exception during the balance deduction and watch whether the new row in the order table is rolled back; if it is, the distributed transaction is working.
Calling code in the order service:
```java
@Override
public void createOrder(OrderEntity orderEntity) {
    LOGGER.info("-------> order creation started");
    // create the order
    this.save(orderEntity);
    // deduct the account balance (remote call to the account service)
    accountServer.decreaseAccount(orderEntity.getUserId(), orderEntity.getMoney());
    // deduct the stock (remote call to the storage service)
    storageServer.decreaseStorage(orderEntity.getProductId(), orderEntity.getCount());
    LOGGER.info("-------> order creation finished");
}
```
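For completeness, here is a minimal sketch of what the Feign client behind accountServer might look like. The service name (account), context path (/accountServer/account), and GET mapping are inferred from the error log in step 9; the real interface in the project may differ.

```java
import java.math.BigDecimal;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;

// Hypothetical reconstruction: names inferred from the Feign error log in step 9.
@FeignClient(name = "account", path = "/accountServer/account")
public interface AccountServer {

    // Deducts `money` from the balance of user `userId` in the account service.
    @GetMapping("/decreaseAccount")
    void decreaseAccount(@RequestParam("userId") Long userId,
                         @RequestParam("money") BigDecimal money);
}
```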
Code in the account service:
```java
@Override
public void decreaseAccount(Long userId, BigDecimal money) {
    LOGGER.info("-------> account-service: balance deduction started");
    AccountEntity accountEntity = this.baseMapper
            .selectList(Wrappers.<AccountEntity>lambdaQuery().eq(AccountEntity::getUserId, userId))
            .get(0);
    // insufficient balance: throw, so the global transaction rolls back
    if (accountEntity.getResidue().compareTo(money) < 0) {
        throw new RuntimeException("余额不足"); // "insufficient balance"
    }
    accountEntity.setResidue(accountEntity.getResidue().subtract(money));
    accountEntity.setUsed(accountEntity.getUsed().add(money));
    this.updateById(accountEntity);
    LOGGER.info("-------> account-service: balance deduction finished");
}
```
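The storage service is not shown in the original. As a rough sketch, under the assumption that it mirrors the account service (a StorageEntity with productId, residue, and used columns, also built on mybatis-plus), decreaseStorage might look like this; entity and field names are illustrative:

```java
@Override
public void decreaseStorage(Long productId, Integer count) {
    LOGGER.info("-------> storage-service: stock deduction started");
    // Assumed schema: StorageEntity(productId, residue, used), mirroring the account table.
    StorageEntity storageEntity = this.baseMapper
            .selectList(Wrappers.<StorageEntity>lambdaQuery().eq(StorageEntity::getProductId, productId))
            .get(0);
    if (storageEntity.getResidue() < count) {
        throw new RuntimeException("insufficient stock");
    }
    storageEntity.setResidue(storageEntity.getResidue() - count);
    storageEntity.setUsed(storageEntity.getUsed() + count);
    this.updateById(storageEntity);
    LOGGER.info("-------> storage-service: stock deduction finished");
}
```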
Step 5: Add the Seata client dependency to every microservice. Since the services are built on Spring Boot, use seata-spring-boot-starter; I picked the latest client version, 1.3.0 (the registration logs in step 8 show this 1.3.0 client working against the 1.2.0 server).
```xml
<!-- Seata distributed transactions -->
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>1.3.0</version>
</dependency>
```
Step 6: Add the following Seata configuration to each microservice's configuration file
Note: set enable-auto-data-source-proxy to false (this turns off automatic data-source proxying; we proxy the data source by hand in step 7). The registry block must match the server's registry.conf from step 2, and service.vgroup-mapping must map the transaction group (my_tx_group) to the cluster name the server registered under (default).
```yaml
# Seata configuration
seata:
  enabled: true
  enable-auto-data-source-proxy: false
  application-id: ${spring.application.name}
  tx-service-group: my_tx_group
  service:
    vgroup-mapping:
      my_tx_group: default
  registry:
    type: nacos
    nacos:
      application: serverAddr
      server-addr: 127.0.0.1:8848
      namespace: aa94072f-3269-409c-a065-ed0910a03f45
      group: DEFAULT_GROUP
      cluster: default
  config:
    file:
      name: file.conf
```
Step 7: Add the following configuration class to each microservice, and exclude the default data-source auto-configuration on each startup class
```java
import javax.sql.DataSource;

import com.zaxxer.hikari.HikariDataSource;
import io.seata.rm.datasource.DataSourceProxy;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class DataSourceProxyConfig {

    @Autowired
    private DataSourceConfig dataSourceConfig;

    // Build the real (Hikari) data source by hand from the spring.datasource.* properties.
    @Bean("dataSource")
    public DataSource hikariDataSource() {
        HikariDataSource hikariDataSource = new HikariDataSource();
        hikariDataSource.setUsername(dataSourceConfig.getUsername());
        hikariDataSource.setPassword(dataSourceConfig.getPassword());
        hikariDataSource.setJdbcUrl(dataSourceConfig.getUrl());
        hikariDataSource.setDriverClassName(dataSourceConfig.getDriveClassName());
        return hikariDataSource;
    }

    // Wrap it in Seata's DataSourceProxy and make the proxy the primary bean,
    // so all repositories go through Seata's AT-mode interception.
    @Bean
    @Primary
    public DataSourceProxy dataSourceProxy(DataSource dataSource) {
        return new DataSourceProxy(dataSource);
    }
}
```
The DataSourceConfig it references:
```java
@Data
@Configuration
public class DataSourceConfig {

    @Value("${spring.datasource.url}")
    private String url;

    @Value("${spring.datasource.username}")
    private String username;

    @Value("${spring.datasource.password}")
    private String password;

    @Value("${spring.datasource.driver-class-name}")
    private String driveClassName;
}
```
Exclude the default data source: annotate each microservice's startup class with @SpringBootApplication(exclude = DataSourceAutoConfiguration.class), for example as shown below.
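A minimal sketch of a startup class with the exclusion in place. The class name is illustrative, and @EnableFeignClients is my assumption, based on the Feign calls the services make to each other:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;
import org.springframework.cloud.openfeign.EnableFeignClients;

// Class name is illustrative; @EnableFeignClients assumed because the
// services call each other through Feign clients.
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
@EnableFeignClients
public class OrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }
}
```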
Note: if you have also pulled in mybatis-plus, proxy the data source exactly as shown above; doing it another way may break some mybatis-plus features. Also remember that the AT mode enabled by DataSourceProxy requires an undo_log table in each business database; the DDL is in the Seata documentation.
Step 8: Finally, annotate the method that starts the cross-service call chain with @GlobalTransactional(rollbackFor = Exception.class), as in the sketch below.
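Applied to the createOrder method from step 4, this looks as follows (body elided):

```java
// @GlobalTransactional comes from io.seata.spring.annotation.
// createOrder is the entry point of the whole call chain, so the global
// transaction is opened here; Seata propagates the XID to the account and
// storage services so their branch transactions roll back together with it.
@GlobalTransactional(rollbackFor = Exception.class)
@Override
public void createOrder(OrderEntity orderEntity) {
    // ... body unchanged from step 4 ...
}
```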
Start every microservice. If output like the following appears, each Seata client has registered with the Seata server:
```
2021-11-06 15:04:28.042  INFO 14524 --- [eoutChecker_1_1] i.s.c.r.netty.NettyClientChannelManager : will connect to 127.0.0.1:8091
2021-11-06 15:04:28.043  INFO 14524 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory : NettyPool create channel to transactionRole:TMROLE,address:127.0.0.1:8091,msg:< RegisterTMRequest{applicationId='storage', transactionServiceGroup='my_tx_group'} >
2021-11-06 15:04:28.132  INFO 14524 --- [eoutChecker_1_1] i.s.c.rpc.netty.TmNettyRemotingClient   : register TM success. client version:1.3.0, server version:1.2.0,channel:[id: 0x878eb8e6, L:/192.168.0.101:54031 - R:/127.0.0.1:8091]
2021-11-06 15:04:28.132  INFO 14524 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 34 ms, version:1.2.0,role:TMROLE,channel:[id: 0x878eb8e6, L:/192.168.0.101:54031 - R:/47.100.205.138:8091]
```
Step 9: Call the order-creation endpoint of the order service and check whether the transaction rolls back. If the order service prints output like the following, with rollback status: Rollbacked after the account service threw its "余额不足" (insufficient balance) exception, the global transaction took effect:
```
2021-11-06 15:18:07.776  INFO 24452 --- [h_RMROLE_1_1_16] i.s.r.d.undo.AbstractUndoLogManager     : xid 127.0.0.1:8091:2026427667 branch 2026427668, undo_log deleted with GlobalFinished
2021-11-06 15:18:07.793  INFO 24452 --- [h_RMROLE_1_1_16] io.seata.rm.AbstractRMHandler           : Branch Rollbacked result: PhaseTwo_Rollbacked
2021-11-06 15:18:07.817  INFO 24452 --- [nio-8088-exec-7] i.seata.tm.api.DefaultGlobalTransaction : [127.0.0.1:8091:2026427667] rollback status: Rollbacked
2021-11-06 15:18:07.831 ERROR 24452 --- [nio-8088-exec-7] o.a.c.c.C.[.[.[.[dispatcherServlet]     : Servlet.service() for servlet [dispatcherServlet] in context with path [/orderServer] threw exception [Request processing failed; nested exception is feign.FeignException$InternalServerError: [500] during [GET] to [http://account/accountServer/account/decreaseAccount?userId=1&money=1000] [AccountServer#decreaseAccount(Long,BigDecimal)]: [{"timestamp":"2021-11-06 15:18:07","status":500,"error":"Internal Server Error","message":"余额不足","path":"/accountServer/account/decreaseAccount"}]] with root cause
```
Step 10: Closing remarks. Seata integration looks simple once it finally works, but the process itself is genuinely hard and time-consuming. All kinds of errors crop up along the way: the Seata server failing to start, clients unable to find the service, global transactions not rolling back, mybatis-plus features breaking, the data-source proxy not taking effect, and so on. I ran into practically every Seata integration error reported online; the integration really is unfriendly, and it takes a great deal of patience to see it through. The steps above are only a summary of what finally worked for me and may not fit everyone. When you hit an error, analyze your own specific situation.