I recently studied the JDBC component of ShardingSphere. This post is a set of study notes to make it easier to review later.

Middleware official site: Apache ShardingSphere

Preparation

Quick Start

Table Sharding

Create the database and tables:

CREATE DATABASE  `db_device_0` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

CREATE TABLE `tb_device_0` ( `device_id` BIGINT NOT NULL AUTO_INCREMENT, `device_type` INT DEFAULT NULL, PRIMARY KEY ( `device_id` ) ) AUTO_INCREMENT = 9 DEFAULT CHARSET = utf8mb4;
CREATE TABLE `tb_device_1` ( `device_id` BIGINT NOT NULL AUTO_INCREMENT, `device_type` INT DEFAULT NULL, PRIMARY KEY ( `device_id` ) ) AUTO_INCREMENT = 10 DEFAULT CHARSET = utf8mb4;

Add dependencies:

Since this is a demo project, I went straight to JDK 17.


pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.1</version>
    </parent>

    <groupId>fun.psgame</groupId>
    <artifactId>sharding-demo</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <name>sharding-demo</name>
    <description>sharding-demo</description>

    <properties>
        <java.version>17</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.projectlombok/lombok -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.24</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.baomidou/mybatis-plus-boot-starter -->
        <dependency>
            <groupId>com.baomidou</groupId>
            <artifactId>mybatis-plus-boot-starter</artifactId>
            <version>3.5.2</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.shardingsphere/sharding-jdbc-spring-boot-starter -->
        <dependency>
            <groupId>org.apache.shardingsphere</groupId>
            <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
            <version>4.1.1</version>
        </dependency>

    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <excludes>
                        <exclude>
                            <groupId>org.projectlombok</groupId>
                            <artifactId>lombok</artifactId>
                        </exclude>
                    </excludes>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

Add the configuration file:

spring:
  shardingsphere:
    props:
      sql:
        # Show the actual SQL that is executed
        show: true
    # Configure the real data sources
    datasource:
      names: ds1
      ds1:
        # The IDE may flag this key in red, but it works fine
        jdbc-url: jdbc:mysql://localhost:3306/db_device_0?serverTimezone=Asia/Shanghai
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        username: root
        password: root
    sharding:
      tables:
        tb_device:
        # Map the logical table to its physical tables (Groovy expression): tb_device_0, tb_device_1
          actual-data-nodes: ds1.tb_device_$->{0..1}
          table-strategy:
            inline:
              # Table sharding strategy: device_id is the sharding column
              sharding-column: device_id
              # Sharding expression
              algorithm-expression: tb_device_$->{device_id%2}

Create the entity class:

@Getter
@Setter
public class TbDevice {
    private Long deviceId;
    private int deviceType;
}
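
MyBatis-Plus maps the TbDevice class to the logical table name tb_device through its default camel-case-to-underscore naming convention, which is why no explicit mapping is needed here. If you prefer to pin the mapping explicitly, a minimal sketch (the annotation is optional in this setup):

@Getter
@Setter
// com.baomidou.mybatisplus.annotation.TableName: state the logical table name explicitly;
// ShardingSphere then rewrites it to the physical tables tb_device_0 / tb_device_1
@TableName("tb_device")
public class TbDevice {
    private Long deviceId;
    private int deviceType;
}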

Create the mapper:

public interface DeviceMapper extends BaseMapper<TbDevice> {
}

Configure the Spring Boot application class:

@SpringBootApplication
@MapperScan("fun.psgame.demo.mapper")
public class ShardingDemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(ShardingDemoApplication.class, args);
    }
}

Add a test class for an initial test:


@SpringBootTest
class ShardingDemoApplicationTests {
    @Autowired
    private DeviceMapper deviceMapper;

    @Test
    void initData() {
        for (int i = 0; i < 10; i++) {
            TbDevice device = new TbDevice();
            device.setDeviceId((long) i);
            device.setDeviceType(i);
            deviceMapper.insert(device);
        }
    }
}

Running the test fails with an error:


Error message

org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException: 
### Error updating database.  Cause: java.lang.reflect.InaccessibleObjectException: Unable to make field private static final long java.lang.Number.serialVersionUID accessible: module java.base does not "opens java.lang" to unnamed module @1324409e
### The error may exist in fun/psgame/demo/mapper/DeviceMapper.java (best guess)
### The error may involve fun.psgame.demo.mapper.DeviceMapper.insert-Inline
### The error occurred while setting parameters
### SQL: INSERT INTO tb_device  ( device_id, device_type )  VALUES  ( ?, ? )
### Cause: java.lang.reflect.InaccessibleObjectException: Unable to make field private static final long java.lang.Number.serialVersionUID accessible: module java.base does not "opens java.lang" to unnamed module @1324409e

    at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:96)
    at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:441)
    at jdk.proxy2/jdk.proxy2.$Proxy73.insert(Unknown Source)
    at org.mybatis.spring.SqlSessionTemplate.insert(SqlSessionTemplate.java:272)
    at com.baomidou.mybatisplus.core.override.MybatisMapperMethod.execute(MybatisMapperMethod.java:59)
    at com.baomidou.mybatisplus.core.override.MybatisMapperProxy$PlainMethodInvoker.invoke(MybatisMapperProxy.java:148)
    at com.baomidou.mybatisplus.core.override.MybatisMapperProxy.invoke(MybatisMapperProxy.java:89)
    at jdk.proxy2/jdk.proxy2.$Proxy76.insert(Unknown Source)
    at fun.psgame.demo.ShardingDemoApplicationTests.initData(ShardingDemoApplicationTests.java:20)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
    at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
    at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
    at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
    at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
    at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
    at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
    at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
    at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
    at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
    at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
    at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
    at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
    at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
    at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
    at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
    at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
    at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
    at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
    at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:71)
    at com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38)
    at com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11)
    at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35)
    at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235)
    at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)

This is caused by the newer JDK's module system. One fix is to simply switch back to JDK 1.8, but since the point is to learn and tinker, it is enough to add --add-opens=java.base/java.lang=ALL-UNNAMED to the JVM startup arguments.
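
If the tests are run through Maven rather than the IDE, one way to pass the flag is via the maven-surefire-plugin's argLine (a sketch; an IDE run configuration can set the same VM option directly):

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <configuration>
                    <!-- Open java.lang to unnamed modules so MyBatis reflection works on JDK 17 -->
                    <argLine>--add-opens=java.base/java.lang=ALL-UNNAMED</argLine>
                </configuration>
            </plugin>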

After running it, the 10 rows with an even device_id all end up in tb_device_0 and the odd ones in tb_device_1.
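
A quick way to confirm this is to query the physical tables directly, for example:

-- Expect even device_id values here
SELECT device_id, device_type FROM db_device_0.tb_device_0;
-- Expect odd device_id values here
SELECT device_id, device_type FROM db_device_0.tb_device_1;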

Database Sharding

Next, create the second database:

CREATE DATABASE  `db_device_1` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

CREATE TABLE `tb_device_0` ( `device_id` BIGINT NOT NULL AUTO_INCREMENT, `device_type` INT DEFAULT NULL, PRIMARY KEY ( `device_id` ) ) AUTO_INCREMENT = 9 DEFAULT CHARSET = utf8mb4;
CREATE TABLE `tb_device_1` ( `device_id` BIGINT NOT NULL AUTO_INCREMENT, `device_type` INT DEFAULT NULL, PRIMARY KEY ( `device_id` ) ) AUTO_INCREMENT = 10 DEFAULT CHARSET = utf8mb4;

Then modify the configuration file:

spring:
  shardingsphere:
    props:
      sql:
        # Show the actual SQL that is executed
        show: true
    # Configure the real data sources
    datasource:
      names: ds0,ds1
      ds0:
        jdbcUrl: jdbc:mysql://localhost:3306/db_device_0?serverTimezone=Asia/Shanghai
        type: com.zaxxer.hikari.HikariDataSource
        driverClassName: com.mysql.cj.jdbc.Driver
        username: root
        password: root
      ds1:
        jdbcUrl: jdbc:mysql://localhost:3306/db_device_1?serverTimezone=Asia/Shanghai
        type: com.zaxxer.hikari.HikariDataSource
        driverClassName: com.mysql.cj.jdbc.Driver
        username: root
        password: root
    sharding:
      # Database sharding strategy
      default-database-strategy:
        inline:
          sharding-column: device_id
          algorithm-expression: ds$->{device_id%2}
      tables:
        tb_device:
        # Map the logical table to its physical tables (Groovy expression): tb_device_0, tb_device_1
          actual-data-nodes: ds$->{0..1}.tb_device_$->{0..1}
          table-strategy:
            inline:
              # Table sharding strategy: device_id is the sharding column
              sharding-column: device_id
              # Sharding expression
              algorithm-expression: tb_device_$->{device_id%2}

Compared with the previous configuration, the main changes are the new data source and the database sharding strategy.
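For example, a row with device_id = 3 is routed to ds1 (3 % 2 = 1) and then to tb_device_1 inside it, while device_id = 4 lands in ds0.tb_device_0.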

Testing

Write two tests:

@Test
void queryDeviceByID() {
    QueryWrapper<TbDevice> queryWrapper = new QueryWrapper<>();
    queryWrapper.eq("device_id", 1L);
    List<TbDevice> deviceList = deviceMapper.selectList(queryWrapper);
    System.out.println(deviceList);
}

@Test
void queryDeviceByRange(){
    QueryWrapper<TbDevice> queryWrapper = new QueryWrapper<>();
    queryWrapper.between("device_id",1,10);
    List<TbDevice> deviceList =
            deviceMapper.selectList(queryWrapper);
    System.out.println(deviceList);
}

The first query works, but the second one throws an error stating that the inline sharding strategy does not support range queries.

Core Knowledge of Database and Table Sharding

Core concepts

Before looking at sharding strategies, a few key concepts need to be understood first: logical table, actual table, data node, binding table, and broadcast table.

  • Logical table

The collective name for a group of tables with identical logic and schema that result from horizontally splitting a database (table).

For example, if order data is split by the last digit of the primary key into 10 tables named t_order_0 through t_order_9, their logical table name is t_order.

  • Actual table

The physical tables that actually exist in the sharded databases, i.e. t_order_0 through t_order_9 in the example above.

  • Data node

The smallest unit of data sharding, consisting of a data source name and a table name, e.g. ds_0.t_order_0.

  • Binding table

Primary and child tables whose sharding rules are identical. For example, if t_order and t_order_item are both sharded by order_id, the two tables are binding tables of each other. Multi-table joins between binding tables do not degenerate into Cartesian-product joins, so join performance improves considerably. For example, given the SQL:

SELECT i.* FROM t_order o JOIN t_order_item i ON o.order_id=i.order_id
WHERE o.order_id in (10, 11);

Without the binding relationship configured, and assuming the sharding key order_id routes the value 10 to shard 0 and the value 11 to shard 1, the routed SQL consists of 4 statements forming a Cartesian product:

SELECT i.* FROM t_order_0 o JOIN t_order_item_0 i ON
o.order_id=i.order_id WHERE o.order_id in (10, 11);
SELECT i.* FROM t_order_0 o JOIN t_order_item_1 i ON
o.order_id=i.order_id WHERE o.order_id in (10, 11);
SELECT i.* FROM t_order_1 o JOIN t_order_item_0 i ON
o.order_id=i.order_id WHERE o.order_id in (10, 11);
SELECT i.* FROM t_order_1 o JOIN t_order_item_1 i ON
o.order_id=i.order_id WHERE o.order_id in (10, 11);

With the binding relationship configured, only 2 statements are routed:

SELECT i.* FROM t_order_0 o JOIN t_order_item_0 i ON
o.order_id=i.order_id WHERE o.order_id in (10, 11);
SELECT i.* FROM t_order_1 o JOIN t_order_item_1 i ON
o.order_id=i.order_id WHERE o.order_id in (10, 11);

In other words, the rows that t_order_0 joins against are all expected to be in t_order_item_0.

Since t_order is the leftmost table in the FROM clause, ShardingSphere takes it as the primary table of the binding group. All routing is computed with the primary table's strategy only, so the sharding of t_order_item is derived from t_order's conditions. For this reason, binding tables must use exactly the same sharding key.

  • Broadcast table

A table that exists in every sharded data source, with identical structure and identical data in each database. Suitable for small tables that need to be joined with huge sharded tables, such as dictionary tables.

Sharding and Sharding Strategies

Sharding key

The database column used for sharding; it is the key field by which a database (or table) is split horizontally. ShardingSphere supports both single and multiple sharding columns.

Sharding algorithm

Data is split by a sharding algorithm, which supports the SQL operators =, >=, <=, >, <, BETWEEN and IN. The concrete algorithms have to be implemented by the application developer.

There are currently four kinds of sharding algorithms. None of them ships with a concrete implementation; you extend and implement them yourself as needed:

  • Precise sharding algorithm

Corresponds to PreciseShardingAlgorithm. Handles sharding by = and IN with a single column as the sharding key. Must be used together with StandardShardingStrategy.

  • Range sharding algorithm

Corresponds to RangeShardingAlgorithm. Handles sharding by BETWEEN AND, >, <, >= and <= with a single column as the sharding key. Must be used together with StandardShardingStrategy.

  • Complex sharding algorithm

Corresponds to ComplexKeysShardingAlgorithm. Handles sharding with multiple sharding columns; the logic involving several keys can be complicated, and the application developer has to deal with that complexity. Must be used together with ComplexShardingStrategy.

  • Hint sharding algorithm

Corresponds to HintShardingAlgorithm. Handles scenarios where sharding is driven by a hint. Must be used together with HintShardingStrategy.

Sharding strategies

A sharding strategy combines a sharding key with a sharding algorithm. Five strategies are currently provided:

  • Standard sharding strategy

Corresponds to StandardShardingStrategy. Supports sharding on =, >, <, >=, <=, IN and BETWEEN AND in SQL. StandardShardingStrategy supports only a single sharding column and uses two algorithms: PreciseShardingAlgorithm and RangeShardingAlgorithm. PreciseShardingAlgorithm is mandatory and handles = and IN; RangeShardingAlgorithm is optional and handles BETWEEN AND, >, <, >= and <=. If RangeShardingAlgorithm is not configured, BETWEEN AND conditions are routed to all databases.

  • Complex sharding strategy

Corresponds to ComplexShardingStrategy. Supports sharding on =, >, <, >=, <=, IN and BETWEEN AND in SQL. ComplexShardingStrategy supports multiple sharding columns; because the relationships between multiple columns can be complicated, it does not try to wrap them but passes the sharding values and operators straight through to the sharding algorithm, leaving the implementation entirely to the application developer, which gives maximum flexibility.

  • Inline expression sharding strategy

Corresponds to InlineShardingStrategy. Uses Groovy expressions and supports = and IN in SQL with a single sharding column only. Simple sharding algorithms can be expressed through configuration alone, avoiding boilerplate Java code. For example, t_user_$->{u_id % 8} means the t_user table is split by u_id modulo 8 into 8 tables named t_user_0 through t_user_7.

  • Hint sharding strategy

Corresponds to HintShardingStrategy. The sharding value is supplied through a hint rather than extracted from the SQL statement.

  • No sharding strategy

Corresponds to NoneShardingStrategy: no sharding at all.

Implementing the Sharding Strategies

Standard strategy: precise sharding

  • Configure precise sharding for the databases

Make the following changes under spring.shardingsphere.sharding:

    sharding:
      # Database sharding strategy
      default-database-strategy:
#        inline:
#          sharding-column: device_id
#          algorithm-expression: ds$->{device_id%2}
        standard:
          sharding-column: device_id
          # Precise sharding algorithm
          precise-algorithm-class-name: fun.psgame.demo.algorithm.database.MyDatabaseStandardPreciseAlgorithm

In other words, the inline database sharding strategy is replaced by a custom precise sharding algorithm; an equivalent of the mod-2 expression above looks like this:

package fun.psgame.demo.algorithm.database;

import org.apache.shardingsphere.api.sharding.standard.PreciseShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.PreciseShardingValue;

import java.util.Collection;

/**
 * Precise database sharding algorithm
 */
public class MyDatabaseStandardPreciseAlgorithm implements PreciseShardingAlgorithm<Long> {

    /**
     * Precise sharding
     *
     * @param availableTargetNames available physical database names
     * @param shardingValue        sharding value
     * @return the database the value is routed to
     */
    @Override
    public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Long> shardingValue) {
        // Logical table name
        String logicTableName = shardingValue.getLogicTableName();
        // Sharding column name
        String columnName = shardingValue.getColumnName();
        // Sharding value
        Long value = shardingValue.getValue();

        String databaseName = "ds" + (value % 2);

        if (!availableTargetNames.contains(databaseName)) {
            throw new UnsupportedOperationException("数据源" + databaseName + "不存在");
        }

        return databaseName;
    }
}

So the target database can be chosen however you need.

The table sharding strategy works the same way:

      tables:
        tb_device:
        # Map the logical table to its physical tables (Groovy expression): tb_device_0, tb_device_1
          actual-data-nodes: ds$->{0..1}.tb_device_$->{0..1}
          table-strategy:
#            inline:
#              # Table sharding strategy: device_id is the sharding column
#              sharding-column: device_id
#              # Sharding expression
#              algorithm-expression: tb_device_$->{device_id%2}
            standard:
              # Sharding column
              sharding-column: device_id
              # Custom precise sharding algorithm
              precise-algorithm-class-name: fun.psgame.demo.algorithm.table.MyTableStandardPreciseAlgorithm

The corresponding class is similar to the database strategy class:

package fun.psgame.demo.algorithm.table;

import org.apache.shardingsphere.api.sharding.standard.PreciseShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.PreciseShardingValue;

import java.util.Collection;

public class MyTableStandardPreciseAlgorithm implements PreciseShardingAlgorithm<Long> {
    @Override
    public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Long> shardingValue) {
        Long value = shardingValue.getValue();
        String tableName = shardingValue.getLogicTableName() + "_" + (value % 2);

        if (!availableTargetNames.contains(tableName)) {
            throw new UnsupportedOperationException("表" + tableName + "不存在");
        }

        return tableName;
    }
}

Standard strategy: range sharding

Modify the configuration:

    sharding:
      # Database sharding strategy
      default-database-strategy:
#        inline:
#          sharding-column: device_id
#          algorithm-expression: ds$->{device_id%2}
        standard:
          sharding-column: device_id
          # Precise sharding algorithm
          precise-algorithm-class-name: fun.psgame.demo.algorithm.database.MyDatabaseStandardPreciseAlgorithm
          # Range sharding algorithm
          range-algorithm-class-name: fun.psgame.demo.algorithm.database.MyDatabaseStandardRangeAlgorithm
      tables:
        tb_device:
        # Map the logical table to its physical tables (Groovy expression): tb_device_0, tb_device_1
          actual-data-nodes: ds$->{0..1}.tb_device_$->{0..1}
          table-strategy:
#            inline:
#              # Table sharding strategy: device_id is the sharding column
#              sharding-column: device_id
#              # Sharding expression
#              algorithm-expression: tb_device_$->{device_id%2}
            standard:
              # Sharding column
              sharding-column: device_id
              # Custom precise sharding algorithm
              precise-algorithm-class-name: fun.psgame.demo.algorithm.table.MyTableStandardPreciseAlgorithm
              range-algorithm-class-name: fun.psgame.demo.algorithm.table.MyTableStandardRangeAlgorithm

The corresponding database and table range sharding classes:

package fun.psgame.demo.algorithm.database;

import com.google.common.collect.Range;
import org.apache.shardingsphere.api.sharding.standard.RangeShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.RangeShardingValue;

import java.util.Collection;

public class MyDatabaseStandardRangeAlgorithm implements RangeShardingAlgorithm<Long> {
    /**
     *
     * @param availableTargetNames available data sources or tables's names
     * @param shardingValue sharding value
     * @return
     */
    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, RangeShardingValue<Long> shardingValue) {
        // The requested value range
        Range<Long> range = shardingValue.getValueRange();

        // Simply return every data source
        return availableTargetNames;
    }
}

package fun.psgame.demo.algorithm.table;

import org.apache.shardingsphere.api.sharding.standard.RangeShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.RangeShardingValue;

import java.util.Collection;

public class MyTableStandardRangeAlgorithm implements RangeShardingAlgorithm<Long> {
    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, RangeShardingValue<Long> shardingValue) {
        return availableTargetNames;
    }
}

Now BETWEEN AND queries are supported.
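
Returning every target is the safe default here, because a modulo shard generally cannot be narrowed from a range. As a rough sketch (my own variation, not something the strategy requires), a small closed range could be narrowed by enumerating its values:

package fun.psgame.demo.algorithm.table;

import com.google.common.collect.Range;
import org.apache.shardingsphere.api.sharding.standard.RangeShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.RangeShardingValue;

import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

public class MyTableStandardNarrowRangeAlgorithm implements RangeShardingAlgorithm<Long> {
    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, RangeShardingValue<Long> shardingValue) {
        Range<Long> range = shardingValue.getValueRange();
        // Unbounded ranges cannot be enumerated: fall back to querying every shard
        if (!range.hasLowerBound() || !range.hasUpperBound()) {
            return availableTargetNames;
        }
        long lower = range.lowerEndpoint();
        long upper = range.upperEndpoint();
        // A range at least as wide as the shard count touches every shard anyway
        if (upper - lower + 1 >= availableTargetNames.size()) {
            return availableTargetNames;
        }
        // Enumerate the values in the range (endpoint exclusivity is ignored,
        // which at worst selects one extra shard) and keep the matching suffixes
        Set<String> result = new HashSet<>();
        for (long value = lower; value <= upper; value++) {
            String suffix = String.valueOf(value % 2);
            for (String target : availableTargetNames) {
                if (target.endsWith(suffix)) {
                    result.add(target);
                }
            }
        }
        return result;
    }
}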

Complex sharding strategy

Problem: when more than one column participates in sharding, the complex strategy can be used.

@Test
void queryDeviceByRangeAndDeviceType() {
    QueryWrapper<TbDevice> queryWrapper = new QueryWrapper<>();
    queryWrapper.between("device_id", 1, 10);
    queryWrapper.eq("device_type", 5);
    List<TbDevice> deviceList = deviceMapper.selectList(queryWrapper);
    System.out.println(deviceList);
}

In the example above, device_type also determines which database and table a row lives in, and its logic is the same mod-2 rule as device_id. Without the complex strategy the query result is still correct, but every database and every table is queried (because the device_id condition is a BETWEEN AND), even though rows with device_type = 5 exist in only one table, so part of the work is wasted:

INFO 5879 --- [ main] ShardingSphere-SQL
: Actual SQL: ds0 ::: SELECT device_id,device_type FROM tb_device_0
WHERE device_id BETWEEN ? AND ? AND device_type = ? ::: [1, 10, 5]
INFO 5879 --- [ main] ShardingSphere-SQL
: Actual SQL: ds0 ::: SELECT device_id,device_type FROM tb_device_1
WHERE device_id BETWEEN ? AND ? AND device_type = ? ::: [1, 10, 5]
INFO 5879 --- [ main] ShardingSphere-SQL
: Actual SQL: ds1 ::: SELECT device_id,device_type FROM tb_device_0
WHERE device_id BETWEEN ? AND ? AND device_type = ? ::: [1, 10, 5]
INFO 5879 --- [ main] ShardingSphere-SQL
: Actual SQL: ds1 ::: SELECT device_id,device_type FROM tb_device_1
WHERE device_id BETWEEN ? AND ? AND device_type = ? ::: [1, 10, 5]

The complex sharding strategy can be used to avoid this.

YAML configuration:

    sharding:
      # Database sharding strategy
      default-database-strategy:
#        inline:
#          sharding-column: device_id
#          algorithm-expression: ds$->{device_id%2}
#        standard:
#          sharding-column: device_id
#          # Precise sharding algorithm
#          precise-algorithm-class-name: fun.psgame.demo.algorithm.database.MyDatabaseStandardPreciseAlgorithm
#          # Range sharding algorithm
#          range-algorithm-class-name: fun.psgame.demo.algorithm.database.MyDatabaseStandardRangeAlgorithm
        complex:
          sharding-columns: device_id,device_type
          algorithm-class-name: fun.psgame.demo.algorithm.database.MyDatabaseComplexAlgorithm
      tables:
        tb_device:
        # Map the logical table to its physical tables (Groovy expression): tb_device_0, tb_device_1
          actual-data-nodes: ds$->{0..1}.tb_device_$->{0..1}
          table-strategy:
#            inline:
#              # Table sharding strategy: device_id is the sharding column
#              sharding-column: device_id
#              # Sharding expression
#              algorithm-expression: tb_device_$->{device_id%2}
#            standard:
#              # Sharding column
#              sharding-column: device_id
#              # Custom precise sharding algorithm
#              precise-algorithm-class-name: fun.psgame.demo.algorithm.table.MyTableStandardPreciseAlgorithm
#              range-algorithm-class-name: fun.psgame.demo.algorithm.table.MyTableStandardRangeAlgorithm
            complex:
              sharding-columns: device_id,device_type
              algorithm-class-name: fun.psgame.demo.algorithm.table.MyTableComplexAlgorithm

The corresponding database sharding class:

package fun.psgame.demo.algorithm.database;

import com.google.common.collect.Range;
import org.apache.shardingsphere.api.sharding.complex.ComplexKeysShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.complex.ComplexKeysShardingValue;

import java.util.*;

public class MyDatabaseComplexAlgorithm implements
        // Both Long (device_id) and Integer (device_type) participate in sharding, so the type parameter must be their common supertype Comparable<?>
        ComplexKeysShardingAlgorithm<Comparable<?>> {

    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, ComplexKeysShardingValue<Comparable<?>> shardingValue) {
        Map<String, Range<Comparable<?>>> columnNameAndRangeValuesMap = shardingValue.getColumnNameAndRangeValuesMap();
        Map<String, Collection<Comparable<?>>> columnNameAndShardingValuesMap = shardingValue.getColumnNameAndShardingValuesMap();

        Collection<Comparable<?>> deviceTypes = columnNameAndShardingValuesMap.get("device_type");

        List<Integer> deviceTypeList = deviceTypes.stream()
                .filter(s -> s instanceof Integer)
                .map(s -> (Integer) s)
                .toList();

        Set<String> resultSet = new HashSet<>();

        for (Integer deviceType : deviceTypeList) {
            String lastStr = String.valueOf(deviceType % 2);

            for (String availableTargetName : availableTargetNames) {
                if (availableTargetName.endsWith(lastStr)) {
                    resultSet.add(availableTargetName);
                }
            }
        }

        return resultSet;
    }
}

The corresponding table sharding class:

package fun.psgame.demo.algorithm.table;

import com.google.common.collect.Range;
import org.apache.shardingsphere.api.sharding.complex.ComplexKeysShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.complex.ComplexKeysShardingValue;

import java.util.*;

public class MyTableComplexAlgorithm implements ComplexKeysShardingAlgorithm<Comparable<?>> {
    /**
     *
     * @param availableTargetNames available data sources or tables's names
     * @param shardingValue sharding value
     * @return
     */
    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames,
                                         ComplexKeysShardingValue<Comparable<?>> shardingValue) {
        Map<String, Range<Comparable<?>>> columnNameAndRangeValuesMap = shardingValue.getColumnNameAndRangeValuesMap();
        Map<String, Collection<Comparable<?>>> columnNameAndShardingValuesMap = shardingValue.getColumnNameAndShardingValuesMap();

        Collection<Comparable<?>> deviceTypes = columnNameAndShardingValuesMap.get("device_type");

        List<Integer> deviceTypeList = deviceTypes.stream()
                .filter(s -> s instanceof Integer)
                .map(s -> (Integer) s)
                .toList();

        Set<String> resultSet = new HashSet<>();

        for (Integer deviceType : deviceTypeList) {
            String lastStr = String.valueOf(deviceType % 2);

            for (String availableTargetName : availableTargetNames) {
                if (availableTargetName.endsWith(lastStr)) {
                    resultSet.add(availableTargetName);
                }
            }
        }

        return resultSet;
    }
}
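
With both complex algorithms in place, the earlier test query (device_type = 5, and 5 % 2 = 1) should be routed to ds1.tb_device_1 only, instead of fanning out to all four physical tables.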

Hint forced-routing strategy

A hint can force routing to a particular table in a particular database, regardless of what the SQL statement contains.

YAML configuration:

      tables:
        tb_device:
        # Map the logical table to its physical tables (Groovy expression): tb_device_0, tb_device_1
          actual-data-nodes: ds$->{0..1}.tb_device_$->{0..1}
          table-strategy:
#            inline:
#              # Table sharding strategy: device_id is the sharding column
#              sharding-column: device_id
#              # Sharding expression
#              algorithm-expression: tb_device_$->{device_id%2}
#            standard:
#              # Sharding column
#              sharding-column: device_id
#              # Custom precise sharding algorithm
#              precise-algorithm-class-name: fun.psgame.demo.algorithm.table.MyTableStandardPreciseAlgorithm
#              range-algorithm-class-name: fun.psgame.demo.algorithm.table.MyTableStandardRangeAlgorithm
#            complex:
#              sharding-columns: device_id,device_type
#              algorithm-class-name: fun.psgame.demo.algorithm.table.MyTableComplexAlgorithm
            hint:
              algorithm-class-name: fun.psgame.demo.algorithm.table.MyTableHintAlgorithm

For hint sharding, only the algorithm class needs to be configured; there is no sharding column.

The corresponding algorithm class:

package fun.psgame.demo.algorithm.table;

import org.apache.shardingsphere.api.sharding.hint.HintShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.hint.HintShardingValue;

import java.util.Collection;
import java.util.List;

public class MyTableHintAlgorithm implements HintShardingAlgorithm<Long> {
    /**
     *
     * @param availableTargetNames available data sources or tables's names
     * @param shardingValue sharding value
     * @return
     */
    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, HintShardingValue<Long> shardingValue) {
        Collection<Long> values = shardingValue.getValues();
        String logicTableName = shardingValue.getLogicTableName();

        String tableName = logicTableName + "_" + values.toArray()[0];

        if (!availableTargetNames.contains(tableName)) {
            throw new UnsupportedOperationException("表" + tableName + "不存在");
        }

        return List.of(tableName);
    }
}

This way a specific table can be targeted explicitly.
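
The hint value itself has to be supplied programmatically through HintManager before the query runs; a minimal sketch (the chosen suffix 0 is arbitrary, and the import is org.apache.shardingsphere.api.hint.HintManager):

@Test
void testHintQuery() {
    // HintManager holds the hint in a ThreadLocal; try-with-resources clears it afterwards
    try (HintManager hintManager = HintManager.getInstance()) {
        // This value arrives as shardingValue.getValues() in MyTableHintAlgorithm,
        // so the query below is forced onto tb_device_0
        hintManager.addTableShardingValue("tb_device", 0L);
        List<TbDevice> deviceList = deviceMapper.selectList(null);
        System.out.println(deviceList);
    }
}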

Binding Tables

Binding tables solve the following problem: when sharded tables are joined, unnecessary queries are issued and the result is duplicated because of the Cartesian product.

Example:

  1. Create the tb_device_info tables

Run the following in both databases:

The primary key must not be auto-increment, because every row needs a globally unique ID. No key generation strategy is configured here, so MyBatis-Plus falls back to its own generated IDs, which is fine for testing; in production a snowflake algorithm could generate the unique IDs.

-- Clear the tb_device tables
truncate table tb_device_0;
truncate table tb_device_1;

-- Create the tb_device_info tables
DROP TABLE if EXISTS tb_device_info_0;
CREATE TABLE `tb_device_info_0`
(
    `id`           BIGINT PRIMARY KEY,
    `device_id`    BIGINT                                  DEFAULT NULL,
    `device_intro` VARCHAR(255) COLLATE utf8mb4_general_ci DEFAULT NULL
);
DROP TABLE if EXISTS tb_device_info_1;
CREATE TABLE `tb_device_info_1`
(
    `id`           BIGINT primary key,
    `device_id`    BIGINT                                  DEFAULT NULL,
    `device_intro` VARCHAR(255) COLLATE utf8mb4_general_ci DEFAULT NULL
);

Then create the corresponding entity class and mapper (not repeated here).
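
The sharding configuration for tb_device_info is not shown here either. Roughly, it should mirror tb_device and use the same sharding key, which binding tables require; a hedged sketch, with an optional snowflake key generator for the non-auto-increment id column mentioned above:

spring:
  shardingsphere:
    sharding:
      tables:
        tb_device_info:
          actual-data-nodes: ds$->{0..1}.tb_device_info_$->{0..1}
          table-strategy:
            inline:
              # Same sharding key as tb_device, as binding tables require
              sharding-column: device_id
              algorithm-expression: tb_device_info_$->{device_id%2}
          # Optional: have ShardingSphere fill the id column with a snowflake value
          key-generator:
            column: id
            type: SNOWFLAKE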

Next, insert some data:

@Test
void testInsertType() {
    for (int i = 0; i < 10; i++) {
        TbDevice device = new TbDevice();
        device.setDeviceId((long) i);
        device.setDeviceType(i);
        deviceMapper.insert(device);
        TbDeviceInfo deviceInfo = new TbDeviceInfo();
        deviceInfo.setDeviceId((long) i);
        deviceInfo.setDeviceIntro("" + i);
        deviceInfoMapper.insert(deviceInfo);
    }
}

Then add a join query method:

PS: text blocks are a Java 15 feature; on Java 8, concatenate the string yourself, as in the sketch after this mapper.

public interface DeviceInfoMapper extends BaseMapper<TbDeviceInfo> {
    @Select("""
            select a.id,a.device_id,a.device_intro,b.device_type from
            tb_device_info a left join tb_device b on a.device_id = b.device_id
            """)
    List<TbDeviceInfo> queryDeviceInfo();
}
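
On Java 8 the same mapper can simply concatenate the SQL string:

public interface DeviceInfoMapper extends BaseMapper<TbDeviceInfo> {
    @Select("select a.id, a.device_id, a.device_intro, b.device_type from " +
            "tb_device_info a left join tb_device b on a.device_id = b.device_id")
    List<TbDeviceInfo> queryDeviceInfo();
}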

Then add a test method:

@Test
void testQueryDeviceInfo() {
    List<TbDeviceInfo> deviceInfos = deviceInfoMapper.queryDeviceInfo();
    deviceInfos.forEach(System.out::println);
}

Running the query shows a number of unnecessary join combinations:

2022-08-25 16:59:34.566  INFO 20800 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds0 ::: select a.id,a.device_id,a.device_intro,b.device_type from
tb_device_info_1 a left join tb_device_0 b on a.device_id = b.device_id
2022-08-25 16:59:34.566  INFO 20800 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds0 ::: select a.id,a.device_id,a.device_intro,b.device_type from
tb_device_info_1 a left join tb_device_1 b on a.device_id = b.device_id
2022-08-25 16:59:34.566  INFO 20800 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds0 ::: select a.id,a.device_id,a.device_intro,b.device_type from
tb_device_info_0 a left join tb_device_0 b on a.device_id = b.device_id
2022-08-25 16:59:34.566  INFO 20800 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds0 ::: select a.id,a.device_id,a.device_intro,b.device_type from
tb_device_info_0 a left join tb_device_1 b on a.device_id = b.device_id
2022-08-25 16:59:34.566  INFO 20800 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds1 ::: select a.id,a.device_id,a.device_intro,b.device_type from
tb_device_info_1 a left join tb_device_0 b on a.device_id = b.device_id
2022-08-25 16:59:34.566  INFO 20800 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds1 ::: select a.id,a.device_id,a.device_intro,b.device_type from
tb_device_info_1 a left join tb_device_1 b on a.device_id = b.device_id
2022-08-25 16:59:34.566  INFO 20800 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds1 ::: select a.id,a.device_id,a.device_intro,b.device_type from
tb_device_info_0 a left join tb_device_0 b on a.device_id = b.device_id
2022-08-25 16:59:34.566  INFO 20800 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds1 ::: select a.id,a.device_id,a.device_intro,b.device_type from
tb_device_info_0 a left join tb_device_1 b on a.device_id = b.device_id
TbDeviceInfo(id=1562725321956294657, deviceId=0, deviceIntro=0)
TbDeviceInfo(id=1562725323428495361, deviceId=2, deviceIntro=2)
TbDeviceInfo(id=1562725325781499905, deviceId=4, deviceIntro=4)
TbDeviceInfo(id=1562725328981753857, deviceId=6, deviceIntro=6)
TbDeviceInfo(id=1562725330357485569, deviceId=8, deviceIntro=8)
TbDeviceInfo(id=1562725321956294657, deviceId=0, deviceIntro=0)
TbDeviceInfo(id=1562725323428495361, deviceId=2, deviceIntro=2)
TbDeviceInfo(id=1562725325781499905, deviceId=4, deviceIntro=4)
TbDeviceInfo(id=1562725328981753857, deviceId=6, deviceIntro=6)
TbDeviceInfo(id=1562725330357485569, deviceId=8, deviceIntro=8)
TbDeviceInfo(id=1562725322774183937, deviceId=1, deviceIntro=1)
TbDeviceInfo(id=1562725324024086530, deviceId=3, deviceIntro=3)
TbDeviceInfo(id=1562725327215951874, deviceId=5, deviceIntro=5)
TbDeviceInfo(id=1562725329636065281, deviceId=7, deviceIntro=7)
TbDeviceInfo(id=1562725331007602689, deviceId=9, deviceIntro=9)
TbDeviceInfo(id=1562725322774183937, deviceId=1, deviceIntro=1)
TbDeviceInfo(id=1562725324024086530, deviceId=3, deviceIntro=3)
TbDeviceInfo(id=1562725327215951874, deviceId=5, deviceIntro=5)
TbDeviceInfo(id=1562725329636065281, deviceId=7, deviceIntro=7)
TbDeviceInfo(id=1562725331007602689, deviceId=9, deviceIntro=9)

Because of the Cartesian product in the join, the result is inaccurate (rows are duplicated); binding tables solve exactly this problem.

Configure the binding tables:

spring:
  shardingsphere:
    sharding:
      # Configure binding tables
      binding-tables:
        - tb_device,tb_device_info

Running the test again, the query result is now correct:

2022-08-25 21:00:12.760  INFO 7316 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds0 ::: select a.id,a.device_id,a.device_intro,b.device_type from
tb_device_info_0 a left join tb_device_0 b on a.device_id = b.device_id
2022-08-25 21:00:12.760  INFO 7316 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds0 ::: select a.id,a.device_id,a.device_intro,b.device_type from
tb_device_info_1 a left join tb_device_1 b on a.device_id = b.device_id
2022-08-25 21:00:12.760  INFO 7316 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds1 ::: select a.id,a.device_id,a.device_intro,b.device_type from
tb_device_info_0 a left join tb_device_0 b on a.device_id = b.device_id
2022-08-25 21:00:12.760  INFO 7316 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds1 ::: select a.id,a.device_id,a.device_intro,b.device_type from
tb_device_info_1 a left join tb_device_1 b on a.device_id = b.device_id
TbDeviceInfo(id=1562725321956294657, deviceId=0, deviceIntro=0)
TbDeviceInfo(id=1562725323428495361, deviceId=2, deviceIntro=2)
TbDeviceInfo(id=1562725325781499905, deviceId=4, deviceIntro=4)
TbDeviceInfo(id=1562725328981753857, deviceId=6, deviceIntro=6)
TbDeviceInfo(id=1562725330357485569, deviceId=8, deviceIntro=8)
TbDeviceInfo(id=1562725322774183937, deviceId=1, deviceIntro=1)
TbDeviceInfo(id=1562725324024086530, deviceId=3, deviceIntro=3)
TbDeviceInfo(id=1562725327215951874, deviceId=5, deviceIntro=5)
TbDeviceInfo(id=1562725329636065281, deviceId=7, deviceIntro=7)
TbDeviceInfo(id=1562725331007602689, deviceId=9, deviceIntro=9)

Broadcast Tables

Now consider this scenario: the tb_device_type table referenced by the device_type column should not be sharded; both databases should hold the full data of this table. Such a table is a broadcast table.

Table structure:

drop table if exists tb_device_type;
CREATE TABLE `tb_device_type`
(
    `type_id`   int NOT NULL AUTO_INCREMENT,
    `type_name` varchar(255) COLLATE utf8mb4_general_ci DEFAULT NULL,
    PRIMARY KEY (`type_id`)
);

Configure the broadcast table:

spring:
  shardingsphere:
    sharding:
      tables:
        # Primary key generation strategy for the broadcast table
        tb_device_type:
          key-generator:
            column: type_id
            type: SNOWFLAKE
      # Configure broadcast tables
      broadcast-tables:
        - tb_device_type          
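
The insert itself is a plain MyBatis-Plus call; a sketch of a test that would produce the log below, assuming a TbDeviceType entity and DeviceTypeMapper analogous to the earlier ones (the type_id values are set explicitly, matching the log):

@Test
void testInsertDeviceType() {
    TbDeviceType type1 = new TbDeviceType();
    type1.setTypeId(1);
    type1.setTypeName("人脸考勤");
    // For a broadcast table, one insert is rewritten into an INSERT per data source
    deviceTypeMapper.insert(type1);

    TbDeviceType type2 = new TbDeviceType();
    type2.setTypeId(2);
    type2.setTypeName("人脸通道");
    deviceTypeMapper.insert(type2);
}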

The execution log shows that each insert was performed in both data sources:

2022-08-25 21:21:06.816  INFO 20652 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds0 ::: INSERT INTO tb_device_type  ( type_id,
type_name )  VALUES  (?, ?) ::: [1, 人脸考勤]
2022-08-25 21:21:06.816  INFO 20652 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds1 ::: INSERT INTO tb_device_type  ( type_id,
type_name )  VALUES  (?, ?) ::: [1, 人脸考勤]
...
2022-08-25 21:21:06.913  INFO 20652 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds0 ::: INSERT INTO tb_device_type  ( type_id,
type_name )  VALUES  (?, ?) ::: [2, 人脸通道]
2022-08-25 21:21:06.913  INFO 20652 --- [           main] ShardingSphere-SQL                       : Actual SQL: ds1 ::: INSERT INTO tb_device_type  ( type_id,
type_name )  VALUES  (?, ?) ::: [2, 人脸通道]

Read/Write Splitting

I have not fully tested this part; only the configuration file is given here.

# Configure the real data sources
spring.shardingsphere.datasource.names=m0,s0
# Master data source
spring.shardingsphere.datasource.m0.type=com.alibaba.druid.pool.DruidDataSource
spring.shardingsphere.datasource.m0.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.m0.url=jdbc:mysql://172.16.253.73:3306/db_device?serverTimezone=Asia/Shanghai
spring.shardingsphere.datasource.m0.username=root
spring.shardingsphere.datasource.m0.password=123456
# Slave data source
spring.shardingsphere.datasource.s0.type=com.alibaba.druid.pool.DruidDataSource
spring.shardingsphere.datasource.s0.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.s0.url=jdbc:mysql://172.16.253.74:3306/db_device?serverTimezone=Asia/Shanghai
spring.shardingsphere.datasource.s0.username=root
spring.shardingsphere.datasource.s0.password=123456
# Master-slave (read/write) rule
spring.shardingsphere.sharding.master-slave-rules.ds0.master-data-source-name=m0
spring.shardingsphere.sharding.master-slave-rules.ds0.slave-data-source-names[0]=s0
# Actual data nodes
spring.shardingsphere.sharding.tables.tb_user.actual-data-nodes=ds0.tb_user
# Primary key generation strategy
spring.shardingsphere.sharding.tables.t_dict.key-generator.column=id
spring.shardingsphere.sharding.tables.t_dict.key-generator.type=SNOWFLAKE
# Show the actual SQL
spring.shardingsphere.props.sql.show=true

  • Test writing data

@Test
void testInsertUser(){
    for (int i = 0; i < 10; i++) {
        TbUser user = new TbUser();
        user.setName(""+i);
        userMapper.insert(user);
    }
}

  • Test reading data

@Test
void testQueryUser(){
    List<TbUser> tbUsers = userMapper.selectList(null);
    tbUsers.forEach( tbUser -> System.out.println(tbUser));
}

You can see that writes go to the master database and are replicated to the slave, while reads only hit the slave.

Connection Modes and Internals

To Be Continued

Tags: Java, Database, Middleware
