How to optimize very deep pagination in MySQL
1) At the database level, which is where we mainly focus (even though the gains are limited). A query like
select * from `table` where age > 20 limit 1000000,10
still leaves room for optimization. This statement has to scan past the first 1,000,000 matching rows and throw them all away, returning only 10, so of course it is slow. We can rewrite it as
select * from `table` where id in (select id from (select id from `table` where age > 20 limit 1000000,10) as t)
(MySQL rejects LIMIT directly inside an IN subquery, hence the extra derived-table wrapper.) This still walks through a million entries, but thanks to a covering index (every column the inner query needs lives in the index) it is much faster; see the index sketch at the end of this point. And if the ids are contiguous (gap-free auto-increment), we can simply use
select * from `table` where id > 1000000 limit 10
which is also efficient. There are many possible variations, but the core idea is always the same: reduce the amount of data that has to be loaded.
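For the covering-index trick in 1) to apply, the filter column must be indexed. A minimal sketch, assuming an InnoDB table shaped like the examples above (the index name idx_age is illustrative):

-- Hypothetical secondary index on the filter column.
-- InnoDB stores the primary key in every secondary index entry, so the
-- inner "select id ... where age > 20" can be answered from the index
-- alone; full rows are fetched only for the final 10 ids.
ALTER TABLE `table` ADD INDEX idx_age (age);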
2) Reduce such requests at the requirements level. Mainly, do not build this kind of feature in the first place: no jumping straight to some specific page millions of rows deep; only allow viewing page by page, or along a predefined path, so access stays predictable and cacheable (a page-by-page query is sketched below). Also avoid exposing contiguous ids, which invite malicious deep-page crawling.
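One way to implement the "page forward only" flow is keyset (seek) pagination, where the client passes back the last id it saw instead of an offset. A minimal sketch, assuming the same `table` as in 1) and an application-bound :last_id placeholder (hypothetical):

-- :last_id = id of the last row on the previous page (hypothetical
-- parameter bound by the application). The offset never grows, so
-- every page costs roughly the same regardless of depth.
select * from `table`
where age > 20
  and id > :last_id
order by id
limit 10;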
3) Optimizing pagination with a join back to the primary key (a deferred join; note the example below uses an inner join)
select * from sicimike where name like 'c6%' order by id limit 30000, 5;
can be rewritten as
select a.* from sicimike a inner join (select id from sicimike where name like 'c6%' order by id limit 30000, 5) b on a.id = b.id;
The effect:
mysql> select a.* from sicimike a inner join (select id from sicimike where name like 'c6%' order by id limit 30000, 5) b on a.id = b.id;
+---------+------------+-----+---------------------+
| id      | name       | age | add_time            |
+---------+------------+-----+---------------------+
| 7466563 | c6db537243 |  59 | 2020-02-14 13:34:01 |
| 7466920 | c62dec7921 |  79 | 2020-02-14 13:34:01 |
| 7467162 | c610b89b31 |  71 | 2020-02-14 13:34:01 |
| 7467590 | c67bbd4bfd |  10 | 2020-02-14 13:34:01 |
| 7467825 | c6db24865b |  51 | 2020-02-14 13:34:01 |
+---------+------------+-----+---------------------+
5 rows in set (0.05 sec)

mysql> select * from sicimike where name like 'c6%' order by id limit 30000, 5;
+---------+------------+-----+---------------------+
| id      | name       | age | add_time            |
+---------+------------+-----+---------------------+
| 7466563 | c6db537243 |  59 | 2020-02-14 13:34:01 |
| 7466920 | c62dec7921 |  79 | 2020-02-14 13:34:01 |
| 7467162 | c610b89b31 |  71 | 2020-02-14 13:34:01 |
| 7467590 | c67bbd4bfd |  10 | 2020-02-14 13:34:01 |
| 7467825 | c6db24865b |  51 | 2020-02-14 13:34:01 |
+---------+------------+-----+---------------------+
5 rows in set (2.26 sec)
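The gap (0.05 sec vs 2.26 sec) comes from the derived table b skipping the first 30,000 matches inside a narrow index rather than over full rows; complete rows are then fetched for only the 5 surviving ids. This presumes a secondary index on name. A sketch, assuming sicimike is an InnoDB table and idx_name is an illustrative name:

-- InnoDB appends the primary key (id) to every secondary index entry,
-- so the inner "select id from sicimike where name like 'c6%' ..." is
-- a covering scan that never touches the full rows.
ALTER TABLE sicimike ADD INDEX idx_name (name);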