Why We Build the Clustered Index on an Identity or Transaction-Time Column

2019-08-23   Author: 数据库   |   Views: 174

 

I recently found a SQL statement in a paging stored procedure whose performance varied by dozens of times depending on the sort direction of the clustered index column, which genuinely surprised me.
This led me to look at how SQL Server chooses a FORWARD or BACKWARD scan direction when it does a Clustered Index Scan over a large table in a paging query,
and to some thoughts on choosing the sort order of the index column when creating a clustered index.
Enough talk; on to the code.
First, create a test table with a clustered index on Col1 and insert 1,000,000 rows.

 

 


create table ClusteredIndexScanDirection
(
    Col1 int identity(1,1),
    Col2 varchar(50),
    Col3 varchar(50),
    Col4 datetime
)

create unique clustered index idx_Col1 on ClusteredIndexScanDirection(Col1 ASC)

-- Insert 1,000,000 rows; Col4 values are one minute apart, starting 200 days ago
DECLARE @date datetime, @i int = 0
SET @date = GETDATE()
WHILE @i < 1000000
BEGIN
    INSERT INTO ClusteredIndexScanDirection VALUES (NEWID(), NEWID(), DATEADD(MI, @i, GETDATE() - 200))
    SET @i = @i + 1
END

In a typical transaction system we use an identity column or the transaction-time column as the clustered index key, because such systems are generally write-heavy and read-light.

 

 


Transaction data is inserted into the database continuously every day, but reads are far less frequent than inserts.


 


Because these systems are write-heavy and read-light, we choose to build the clustered index on the identity column or the transaction-time column.

 

First, let's look directly at the FORWARD and BACKWARD directions of a clustered index scan.


 


Test

 

 BACKWARD

Test environment: SQL Server 2012 SP1, Windows 7 64-bit


  Run a paging query that filters Col4 between 2017-7-18 and 2017-7-23 and sorts Col1 in descending order.
  The execution plan shows the Clustered Index Scan with Scan Direction = BACKWARD.
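The query text itself survives only in the screenshots; a paging query of the shape the prose describes might look like the following sketch (the TOP size, date literals, and exact paging pattern are assumptions, not the author's original query):

```sql
-- Hypothetical reconstruction of the paging query described above.
-- Filter on Col4, order by the clustered key Col1 descending.
SELECT TOP 20 Col1, Col2, Col3, Col4
FROM ClusteredIndexScanDirection
WHERE Col4 >= '2017-07-18' AND Col4 < '2017-07-23'
ORDER BY Col1 DESC   -- DESC against the ASC clustered index drives a BACKWARD scan
```

Flipping the ORDER BY to ASC yields the FORWARD plan shown later. One commonly cited cost difference is that a backward ordered scan cannot run in parallel in SQL Server, which alone can account for a large gap between the two directions.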

Now let's run a test; the test script is as follows:


  Figure 1

-- Test script: insert performance
USE [test]
GO
-- Table with transtime (transaction time) as the clustered index column
CREATE TABLE transtable(tranid INT, transtime DATETIME)
GO
CREATE CLUSTERED INDEX CIX_transtable ON [dbo].[transtable]([transtime])
GO

-- Table with tranid (transaction id) as the clustered index column
CREATE TABLE transtable2(tranid INT, transtime DATETIME)
GO
CREATE CLUSTERED INDEX CIX_transtable2 ON [dbo].[transtable2]([tranid])
GO

----------------------------------------------------------
-- Insert the test data first; every inserted tranid is odd
DECLARE @i INT
SET @i = 1
WHILE @i <= 1000000
    BEGIN
        INSERT INTO [dbo].[transtable]
                SELECT @i, GETDATE()
        SET @i = @i + 2
    END
--------------------------------------
DECLARE @i INT
SET @i = 1
WHILE @i <= 1000000
    BEGIN
        INSERT INTO [dbo].[transtable2]
                SELECT @i, GETDATE()
        SET @i = @i + 2
    END
-------------------------------------------

 

Figure 2

We build the clustered index on transtime (transaction time) in transtable, and on tranid (transaction id) in transtable2.


 

We insert 500,000 rows into each table; the notable point is that every inserted tranid is odd.
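Why does inserting only odd tranid values matter? The stored procedures created later insert the even values, which must land *between* existing rows in the tranid-ordered table, while the ever-increasing transtime values simply append at the end. A toy Python model (a deliberate simplification with a tiny page size and a naive split policy, not SQL Server's actual storage engine) shows the difference:

```python
# Toy model of a key-ordered (clustered) index built from fixed-capacity pages.
# Monotonically increasing keys only ever overflow the rightmost page, which can
# be handled by allocating a fresh page; mid-range inserts split pages and move rows.

PAGE_CAPACITY = 4

def insert(pages, key, stats):
    # Locate the page whose key range covers the new key.
    target = len(pages) - 1
    for i in range(len(pages) - 1):
        if pages[i + 1] and key < pages[i + 1][0]:
            target = i
            break
    page = pages[target]
    page.append(key)
    page.sort()
    if len(page) > PAGE_CAPACITY:
        if page[-1] == key and target == len(pages) - 1:
            # Monotonic append: start a fresh page, nothing is relocated.
            pages.append([page.pop()])
        else:
            # Mid-range insert: split the page and relocate half of its rows.
            mid = len(page) // 2
            stats["rows_moved"] += len(page) - mid
            stats["splits"] += 1
            pages[target:target + 1] = [page[:mid], page[mid:]]

# Pattern 1: identity-style monotonic inserts (1, 2, 3, ...).
pages, stats = [[]], {"splits": 0, "rows_moved": 0}
for k in range(1, 33):
    insert(pages, k, stats)
print("monotonic:", stats)       # prints {'splits': 0, 'rows_moved': 0}

# Pattern 2: odd keys first, then the even keys squeezed in between.
pages2, stats2 = [[]], {"splits": 0, "rows_moved": 0}
for k in range(1, 33, 2):
    insert(pages2, k, stats2)
for k in range(2, 33, 2):
    insert(pages2, k, stats2)
print("interleaved:", stats2)    # nonzero splits and rows moved
```

With a monotonic key every overflow happens at the rightmost page and costs only a new page allocation; a mid-range insert has to split an existing page and physically relocate half of its rows, which is exactly the cost this experiment is designed to expose.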


 FORWARD

SELECT COUNT(*) FROM [dbo].[transtable]
SELECT COUNT(*) FROM [dbo].[transtable2]

SELECT TOP 10 * FROM [dbo].[transtable] ORDER BY [tranid]
SELECT TOP 10 * FROM [dbo].[transtable2] ORDER BY [tranid]

 

  Run the same paging query with Col4 filtered between 2017-7-18 and 2017-7-23 but with Col1 sorted in ascending order.
  The execution plan now shows the Clustered Index Scan with Scan Direction = FORWARD.

Figure 3


Figure 4

We create two stored procedures that insert data into the two tables:


Figure 5

--------------------------------------------
-- Create the two stored procedures; each inserts only the even values
CREATE PROC INSERTTranstable
AS
    DECLARE @i INT
    SET @i = 1
    WHILE @i <= 1000
        BEGIN
            IF ( @i % 2 = 0 )
                BEGIN
                    INSERT INTO [dbo].[transtable]
                            SELECT @i, GETDATE()
                    SET @i = @i + 1
                END
            ELSE
                BEGIN
                    SET @i = @i + 1
                    CONTINUE
                END
        END
------------------------------------------
CREATE PROC INSERTTranstable2
AS
    DECLARE @i INT
    SET @i = 1
    WHILE @i <= 1000
        BEGIN
            IF ( @i % 2 = 0 )
                BEGIN
                    INSERT INTO [dbo].[transtable2]
                            SELECT @i, GETDATE()
                    SET @i = @i + 1
                END
            ELSE
                BEGIN
                    SET @i = @i + 1
                    CONTINUE
                END
        END
----------------------------
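The article breaks off before the measurement step is shown; a minimal way to compare the two procedures and check for page splits might be the following sketch (the timing wrapper and the DMV query are assumptions, not part of the original script):

```sql
-- Hypothetical measurement harness (not the author's original script)
SET STATISTICS TIME ON
EXEC INSERTTranstable    -- transtime keeps increasing: inserts append at the end
EXEC INSERTTranstable2   -- even tranids land between existing odd rows: page splits
SET STATISTICS TIME OFF

-- Leaf-page allocations (which include page splits) can be inspected per index:
SELECT index_id, leaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID(N'dbo.transtable2'), NULL, NULL)
```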


Published by 小鱼儿玄机30码 under 数据库; please credit the source when reposting.
