Big keys
$ redis-cli --bigkeys
# Scanning the entire keyspace to find biggest keys as well as
# average sizes per key type. You can use -i 0.1 to sleep 0.1 sec
# per 100 SCAN commands (not usually needed).
[00.00%] Biggest string found so far 'key316' with 3 bytes
[00.00%] Biggest string found so far 'key7806' with 4 bytes
[12.79%] Biggest zset found so far 'salary' with 1 members
[13.19%] Biggest string found so far 'counter:__rand_int__' with 6 bytes
[13.50%] Biggest hash found so far 'websit' with 2 fields
[14.37%] Biggest set found so far 'bbs' with 3 members
[14.67%] Biggest hash found so far 'website' with 3 fields
[30.41%] Biggest list found so far 'mylist' with 100000 items
[95.53%] Biggest zset found so far 'page_rank' with 3 members
-------- summary -------
Sampled 10019 keys in the keyspace!
Total key length in bytes is 68990 (avg len 6.89)
Biggest string found 'counter:__rand_int__' has 6 bytes
Biggest list found 'mylist' has 100000 items
Biggest set found 'bbs' has 3 members
Biggest hash found 'website' has 3 fields
Biggest zset found 'page_rank' has 3 members
10011 strings with 38919 bytes (99.92% of keys, avg size 3.89)
3 lists with 100003 items (00.03% of keys, avg size 33334.33)
1 sets with 3 members (00.01% of keys, avg size 3.00)
2 hashs with 5 fields (00.02% of keys, avg size 2.50)
2 zsets with 4 members (00.02% of keys, avg size 2.00)
If you are worried that this command will drive up Redis ops sharply and trigger alerts in production, you can add a sleep parameter:
$ redis-cli --bigkeys -i 0.1
The command above sleeps 0.1 s after every 100 SCAN commands, so ops will not spike sharply, but the scan takes correspondingly longer.
Note, however, that the "biggest" keys reported by --bigkeys are not necessarily the keys using the most memory.
Before explaining why, it helps to know how --bigkeys works. The principle is very simple: it walks the keyspace with SCAN and, depending on each key's data structure, measures its size with a different command:
- for string keys, strlen;
- for list keys, llen;
- for hash keys, hlen;
- for set keys, scard;
- for sorted set keys, zcard.
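These are all ordinary commands you can run by hand; for example, against the keys found by the scan above:
$ redis-cli strlen counter:__rand_int__
(integer) 6
$ redis-cli llen mylist
(integer) 100000
$ redis-cli scard bbs
(integer) 3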
Because of this way of measuring, strings are ranked correctly: strlen is the byte length of the value, so the biggest string reported really is the one occupying the most cache. Lists are another story. Suppose there are two list keys, numberlist ([0, 1, 2]) and stringlist (["123456789123456789"]). Judged by llen, numberlist counts as bigger than stringlist, yet stringlist actually occupies more memory. The other three structures, hash, set, and sorted set, all have the same problem. Keep this in mind when using --bigkeys.
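To compare the real footprint rather than the element count, MEMORY USAGE (available since Redis 4.0) reports the bytes a key and its value occupy. A minimal reproduction of the example above:
$ redis-cli rpush numberlist 0 1 2
(integer) 3
$ redis-cli rpush stringlist "123456789123456789"
(integer) 1
$ redis-cli llen numberlist
(integer) 3
$ redis-cli llen stringlist
(integer) 1
$ redis-cli memory usage numberlist
$ redis-cli memory usage stringlist
Comparing the two MEMORY USAGE results gives the actual byte footprint of each key, independent of how llen ranks them.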
slowlog
Like any other storage system that can surface slow queries, such as MySQL or MongoDB, Redis can too, via the slowlog command. Its usage is as follows:
SLOWLOG subcommand [argument]
The main subcommands are:
- get, usage: slowlog get [argument], fetch the number of slow log entries specified by argument.
- len, usage: slowlog len, return the total number of slow log entries.
- reset, usage: slowlog reset, clear the slow log.
$ redis-cli slowlog get 5
1) 1) (integer) 2
2) (integer) 1537786953
3) (integer) 17980
4) 1) "scan"
2) "0"
3) "match"
4) "key99*"
5) "count"
6) "1000"
5) "127.0.0.1:50129"
6) ""
2) 1) (integer) 1
2) (integer) 1537785886
3) (integer) 39537
4) 1) "keys"
2) "*"
5) "127.0.0.1:49701"
6) ""
3) 1) (integer) 0
2) (integer) 1537681701
3) (integer) 18276
4) 1) "ZADD"
2) "page_rank"
3) "10"
4) "google.com"
5) "127.0.0.1:52334"
6) ""
How slow a command must be before it is recorded in the slow log is configurable at runtime, no Redis restart required, with config set slowlog-log-slower-than 2000.
Note: the unit is microseconds, so 2000 microseconds is 2 milliseconds.
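For example, setting the threshold and reading it back (slowlog-max-len, the companion setting that caps how many entries are retained, can be tuned the same way):
$ redis-cli config set slowlog-log-slower-than 2000
OK
$ redis-cli config get slowlog-log-slower-than
1) "slowlog-log-slower-than"
2) "2000"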