Diffstat (limited to 'iredis/data/commands')
-rw-r--r--iredis/data/commands/__init__.py0
-rw-r--r--iredis/data/commands/acl-cat.md84
-rw-r--r--iredis/data/commands/acl-deluser.md17
-rw-r--r--iredis/data/commands/acl-genpass.md43
-rw-r--r--iredis/data/commands/acl-getuser.md28
-rw-r--r--iredis/data/commands/acl-help.md6
-rw-r--r--iredis/data/commands/acl-list.md17
-rw-r--r--iredis/data/commands/acl-load.md27
-rw-r--r--iredis/data/commands/acl-log.md41
-rw-r--r--iredis/data/commands/acl-save.md20
-rw-r--r--iredis/data/commands/acl-setuser.md115
-rw-r--r--iredis/data/commands/acl-users.md15
-rw-r--r--iredis/data/commands/acl-whoami.md14
-rw-r--r--iredis/data/commands/append.md55
-rw-r--r--iredis/data/commands/auth.md42
-rw-r--r--iredis/data/commands/bgrewriteaof.md37
-rw-r--r--iredis/data/commands/bgsave.md28
-rw-r--r--iredis/data/commands/bitcount.md66
-rw-r--r--iredis/data/commands/bitfield.md155
-rw-r--r--iredis/data/commands/bitop.md61
-rw-r--r--iredis/data/commands/bitpos.md60
-rw-r--r--iredis/data/commands/blpop.md182
-rw-r--r--iredis/data/commands/brpop.md31
-rw-r--r--iredis/data/commands/brpoplpush.md21
-rw-r--r--iredis/data/commands/bzpopmax.md37
-rw-r--r--iredis/data/commands/bzpopmin.md37
-rw-r--r--iredis/data/commands/client-caching.md19
-rw-r--r--iredis/data/commands/client-getname.md7
-rw-r--r--iredis/data/commands/client-getredir.md13
-rw-r--r--iredis/data/commands/client-id.md25
-rw-r--r--iredis/data/commands/client-kill.md73
-rw-r--r--iredis/data/commands/client-list.md70
-rw-r--r--iredis/data/commands/client-pause.md38
-rw-r--r--iredis/data/commands/client-reply.md21
-rw-r--r--iredis/data/commands/client-setname.md28
-rw-r--r--iredis/data/commands/client-tracking.md54
-rw-r--r--iredis/data/commands/client-unblock.md63
-rw-r--r--iredis/data/commands/cluster-addslots.md55
-rw-r--r--iredis/data/commands/cluster-bumpepoch.md15
-rw-r--r--iredis/data/commands/cluster-count-failure-reports.md34
-rw-r--r--iredis/data/commands/cluster-countkeysinslot.md13
-rw-r--r--iredis/data/commands/cluster-delslots.md47
-rw-r--r--iredis/data/commands/cluster-failover.md81
-rw-r--r--iredis/data/commands/cluster-flushslots.md8
-rw-r--r--iredis/data/commands/cluster-forget.md59
-rw-r--r--iredis/data/commands/cluster-getkeysinslot.md20
-rw-r--r--iredis/data/commands/cluster-info.md56
-rw-r--r--iredis/data/commands/cluster-keyslot.md32
-rw-r--r--iredis/data/commands/cluster-meet.md55
-rw-r--r--iredis/data/commands/cluster-myid.md8
-rw-r--r--iredis/data/commands/cluster-nodes.md147
-rw-r--r--iredis/data/commands/cluster-replicas.md16
-rw-r--r--iredis/data/commands/cluster-replicate.md29
-rw-r--r--iredis/data/commands/cluster-reset.md29
-rw-r--r--iredis/data/commands/cluster-saveconfig.md15
-rw-r--r--iredis/data/commands/cluster-set-config-epoch.md25
-rw-r--r--iredis/data/commands/cluster-setslot.md132
-rw-r--r--iredis/data/commands/cluster-slaves.md22
-rw-r--r--iredis/data/commands/cluster-slots.md102
-rw-r--r--iredis/data/commands/command-count.md11
-rw-r--r--iredis/data/commands/command-getkeys.md21
-rw-r--r--iredis/data/commands/command-info.md18
-rw-r--r--iredis/data/commands/command.md179
-rw-r--r--iredis/data/commands/config-get.md52
-rw-r--r--iredis/data/commands/config-resetstat.md16
-rw-r--r--iredis/data/commands/config-rewrite.md37
-rw-r--r--iredis/data/commands/config-set.md56
-rw-r--r--iredis/data/commands/dbsize.md5
-rw-r--r--iredis/data/commands/debug-object.md6
-rw-r--r--iredis/data/commands/debug-segfault.md6
-rw-r--r--iredis/data/commands/decr.md19
-rw-r--r--iredis/data/commands/decrby.md17
-rw-r--r--iredis/data/commands/del.md13
-rw-r--r--iredis/data/commands/discard.md10
-rw-r--r--iredis/data/commands/dump.md30
-rw-r--r--iredis/data/commands/echo.md11
-rw-r--r--iredis/data/commands/eval.md892
-rw-r--r--iredis/data/commands/evalsha.md3
-rw-r--r--iredis/data/commands/exec.md16
-rw-r--r--iredis/data/commands/exists.md33
-rw-r--r--iredis/data/commands/expire.md166
-rw-r--r--iredis/data/commands/expireat.md31
-rw-r--r--iredis/data/commands/flushall.md19
-rw-r--r--iredis/data/commands/flushdb.md12
-rw-r--r--iredis/data/commands/geoadd.md56
-rw-r--r--iredis/data/commands/geodecode.md34
-rw-r--r--iredis/data/commands/geodist.md35
-rw-r--r--iredis/data/commands/geoencode.md57
-rw-r--r--iredis/data/commands/geohash.md39
-rw-r--r--iredis/data/commands/geopos.md28
-rw-r--r--iredis/data/commands/georadius.md103
-rw-r--r--iredis/data/commands/georadiusbymember.md21
-rw-r--r--iredis/data/commands/get.md15
-rw-r--r--iredis/data/commands/getbit.md19
-rw-r--r--iredis/data/commands/getrange.md24
-rw-r--r--iredis/data/commands/getset.md28
-rw-r--r--iredis/data/commands/hdel.md24
-rw-r--r--iredis/data/commands/hello.md46
-rw-r--r--iredis/data/commands/hexists.md16
-rw-r--r--iredis/data/commands/hget.md14
-rw-r--r--iredis/data/commands/hgetall.md16
-rw-r--r--iredis/data/commands/hincrby.md22
-rw-r--r--iredis/data/commands/hincrbyfloat.md33
-rw-r--r--iredis/data/commands/hkeys.md14
-rw-r--r--iredis/data/commands/hlen.md13
-rw-r--r--iredis/data/commands/hmget.md17
-rw-r--r--iredis/data/commands/hmset.md18
-rw-r--r--iredis/data/commands/hscan.md1
-rw-r--r--iredis/data/commands/hset.md17
-rw-r--r--iredis/data/commands/hsetnx.md18
-rw-r--r--iredis/data/commands/hstrlen.md16
-rw-r--r--iredis/data/commands/hvals.md14
-rw-r--r--iredis/data/commands/incr.md156
-rw-r--r--iredis/data/commands/incrby.md17
-rw-r--r--iredis/data/commands/incrbyfloat.md42
-rw-r--r--iredis/data/commands/info.md335
-rw-r--r--iredis/data/commands/keys.md37
-rw-r--r--iredis/data/commands/lastsave.md8
-rw-r--r--iredis/data/commands/latency-doctor.md47
-rw-r--r--iredis/data/commands/latency-graph.md68
-rw-r--r--iredis/data/commands/latency-help.md10
-rw-r--r--iredis/data/commands/latency-history.md47
-rw-r--r--iredis/data/commands/latency-latest.md38
-rw-r--r--iredis/data/commands/latency-reset.md36
-rw-r--r--iredis/data/commands/lindex.md22
-rw-r--r--iredis/data/commands/linsert.md21
-rw-r--r--iredis/data/commands/llen.md15
-rw-r--r--iredis/data/commands/lolwut.md36
-rw-r--r--iredis/data/commands/lpop.md16
-rw-r--r--iredis/data/commands/lpos.md95
-rw-r--r--iredis/data/commands/lpush.md27
-rw-r--r--iredis/data/commands/lpushx.md22
-rw-r--r--iredis/data/commands/lrange.md37
-rw-r--r--iredis/data/commands/lrem.md28
-rw-r--r--iredis/data/commands/lset.md19
-rw-r--r--iredis/data/commands/ltrim.md42
-rw-r--r--iredis/data/commands/memory-doctor.md6
-rw-r--r--iredis/data/commands/memory-help.md6
-rw-r--r--iredis/data/commands/memory-malloc-stats.md9
-rw-r--r--iredis/data/commands/memory-purge.md9
-rw-r--r--iredis/data/commands/memory-stats.md50
-rw-r--r--iredis/data/commands/memory-usage.md40
-rw-r--r--iredis/data/commands/mget.md15
-rw-r--r--iredis/data/commands/migrate.md78
-rw-r--r--iredis/data/commands/module-list.md10
-rw-r--r--iredis/data/commands/module-load.md13
-rw-r--r--iredis/data/commands/module-unload.md13
-rw-r--r--iredis/data/commands/monitor.md93
-rw-r--r--iredis/data/commands/move.md11
-rw-r--r--iredis/data/commands/mset.md18
-rw-r--r--iredis/data/commands/msetnx.md24
-rw-r--r--iredis/data/commands/multi.md8
-rw-r--r--iredis/data/commands/object.md80
-rw-r--r--iredis/data/commands/persist.md20
-rw-r--r--iredis/data/commands/pexpire.md18
-rw-r--r--iredis/data/commands/pexpireat.md18
-rw-r--r--iredis/data/commands/pfadd.md33
-rw-r--r--iredis/data/commands/pfcount.md93
-rw-r--r--iredis/data/commands/pfmerge.md22
-rw-r--r--iredis/data/commands/ping.md20
-rw-r--r--iredis/data/commands/psetex.md10
-rw-r--r--iredis/data/commands/psubscribe.md9
-rw-r--r--iredis/data/commands/psync.md14
-rw-r--r--iredis/data/commands/pttl.md24
-rw-r--r--iredis/data/commands/publish.md5
-rw-r--r--iredis/data/commands/pubsub.md44
-rw-r--r--iredis/data/commands/punsubscribe.md6
-rw-r--r--iredis/data/commands/quit.md6
-rw-r--r--iredis/data/commands/randomkey.md5
-rw-r--r--iredis/data/commands/readonly.md21
-rw-r--r--iredis/data/commands/readwrite.md10
-rw-r--r--iredis/data/commands/rename.md26
-rw-r--r--iredis/data/commands/renamenx.md27
-rw-r--r--iredis/data/commands/replicaof.md21
-rw-r--r--iredis/data/commands/restore.md41
-rw-r--r--iredis/data/commands/role.md105
-rw-r--r--iredis/data/commands/rpop.md16
-rw-r--r--iredis/data/commands/rpoplpush.md71
-rw-r--r--iredis/data/commands/rpush.md27
-rw-r--r--iredis/data/commands/rpushx.md22
-rw-r--r--iredis/data/commands/sadd.md24
-rw-r--r--iredis/data/commands/save.md17
-rw-r--r--iredis/data/commands/scan.md341
-rw-r--r--iredis/data/commands/scard.md14
-rw-r--r--iredis/data/commands/script-debug.md26
-rw-r--r--iredis/data/commands/script-exists.md18
-rw-r--r--iredis/data/commands/script-flush.md8
-rw-r--r--iredis/data/commands/script-kill.md19
-rw-r--r--iredis/data/commands/script-load.md18
-rw-r--r--iredis/data/commands/sdiff.md29
-rw-r--r--iredis/data/commands/sdiffstore.md21
-rw-r--r--iredis/data/commands/select.md27
-rw-r--r--iredis/data/commands/set.md73
-rw-r--r--iredis/data/commands/setbit.md158
-rw-r--r--iredis/data/commands/setex.md27
-rw-r--r--iredis/data/commands/setnx.md102
-rw-r--r--iredis/data/commands/setrange.md47
-rw-r--r--iredis/data/commands/shutdown.md62
-rw-r--r--iredis/data/commands/sinter.md31
-rw-r--r--iredis/data/commands/sinterstore.md21
-rw-r--r--iredis/data/commands/sismember.md16
-rw-r--r--iredis/data/commands/slaveof.md25
-rw-r--r--iredis/data/commands/slowlog.md84
-rw-r--r--iredis/data/commands/smembers.md15
-rw-r--r--iredis/data/commands/smove.md28
-rw-r--r--iredis/data/commands/sort.md136
-rw-r--r--iredis/data/commands/spop.md41
-rw-r--r--iredis/data/commands/srandmember.md63
-rw-r--r--iredis/data/commands/srem.md26
-rw-r--r--iredis/data/commands/sscan.md1
-rw-r--r--iredis/data/commands/stralgo.md121
-rw-r--r--iredis/data/commands/strlen.md15
-rw-r--r--iredis/data/commands/subscribe.md5
-rw-r--r--iredis/data/commands/sunion.md28
-rw-r--r--iredis/data/commands/sunionstore.md21
-rw-r--r--iredis/data/commands/swapdb.md19
-rw-r--r--iredis/data/commands/sync.md15
-rw-r--r--iredis/data/commands/time.md20
-rw-r--r--iredis/data/commands/touch.md13
-rw-r--r--iredis/data/commands/ttl.md27
-rw-r--r--iredis/data/commands/type.md18
-rw-r--r--iredis/data/commands/unlink.md18
-rw-r--r--iredis/data/commands/unsubscribe.md6
-rw-r--r--iredis/data/commands/unwatch.md9
-rw-r--r--iredis/data/commands/wait.md75
-rw-r--r--iredis/data/commands/watch.md8
-rw-r--r--iredis/data/commands/xack.md25
-rw-r--r--iredis/data/commands/xadd.md87
-rw-r--r--iredis/data/commands/xclaim.md83
-rw-r--r--iredis/data/commands/xdel.md51
-rw-r--r--iredis/data/commands/xgroup.md64
-rw-r--r--iredis/data/commands/xinfo.md182
-rw-r--r--iredis/data/commands/xlen.md21
-rw-r--r--iredis/data/commands/xpending.md110
-rw-r--r--iredis/data/commands/xrange.md183
-rw-r--r--iredis/data/commands/xread.md210
-rw-r--r--iredis/data/commands/xreadgroup.md131
-rw-r--r--iredis/data/commands/xrevrange.md85
-rw-r--r--iredis/data/commands/xtrim.md37
-rw-r--r--iredis/data/commands/zadd.md98
-rw-r--r--iredis/data/commands/zcard.md15
-rw-r--r--iredis/data/commands/zcount.md23
-rw-r--r--iredis/data/commands/zincrby.md25
-rw-r--r--iredis/data/commands/zinterstore.md30
-rw-r--r--iredis/data/commands/zlexcount.md23
-rw-r--r--iredis/data/commands/zpopmax.md20
-rw-r--r--iredis/data/commands/zpopmin.md20
-rw-r--r--iredis/data/commands/zrange.md49
-rw-r--r--iredis/data/commands/zrangebylex.md66
-rw-r--r--iredis/data/commands/zrangebyscore.md100
-rw-r--r--iredis/data/commands/zrank.md22
-rw-r--r--iredis/data/commands/zrem.md26
-rw-r--r--iredis/data/commands/zremrangebylex.md22
-rw-r--r--iredis/data/commands/zremrangebyrank.md20
-rw-r--r--iredis/data/commands/zremrangebyscore.md19
-rw-r--r--iredis/data/commands/zrevrange.md21
-rw-r--r--iredis/data/commands/zrevrangebylex.md18
-rw-r--r--iredis/data/commands/zrevrangebyscore.md27
-rw-r--r--iredis/data/commands/zrevrank.md22
-rw-r--r--iredis/data/commands/zscan.md1
-rw-r--r--iredis/data/commands/zscore.md16
-rw-r--r--iredis/data/commands/zunionstore.md38
262 files changed, 11451 insertions, 0 deletions
diff --git a/iredis/data/commands/__init__.py b/iredis/data/commands/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/iredis/data/commands/__init__.py
diff --git a/iredis/data/commands/acl-cat.md b/iredis/data/commands/acl-cat.md
new file mode 100644
index 0000000..eedf692
--- /dev/null
+++ b/iredis/data/commands/acl-cat.md
@@ -0,0 +1,84 @@
+The command shows the available ACL categories if called without arguments. If a
+category name is given, the command shows all the Redis commands in the
+specified category.
+
+ACL categories are very useful in order to create ACL rules that include or
+exclude a large set of commands at once, without specifying every single
+command. For instance, the following rule will let the user `karin` perform
+everything but the most dangerous operations that may affect the server
+stability:
+
+ ACL SETUSER karin on +@all -@dangerous
+
+We first add all the commands to the set of commands that `karin` is able to
+execute, but then we remove all the dangerous commands.
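+
+For illustration, the same rule can be applied from a client library. A minimal
+Python sketch using redis-py (assumed here; the connection parameters are the
+defaults):
+
+```
+import redis  # redis-py, assumed available
+
+r = redis.Redis()
+
+# Everything except the commands in the "dangerous" category.
+r.execute_command("ACL", "SETUSER", "karin", "on", "+@all", "-@dangerous")
+
+print(r.execute_command("ACL", "CAT"))               # all categories
+print(r.execute_command("ACL", "CAT", "dangerous"))  # commands in one category
+```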
+
+Checking for all the available categories is as simple as:
+
+```
+> ACL CAT
+ 1) "keyspace"
+ 2) "read"
+ 3) "write"
+ 4) "set"
+ 5) "sortedset"
+ 6) "list"
+ 7) "hash"
+ 8) "string"
+ 9) "bitmap"
+10) "hyperloglog"
+11) "geo"
+12) "stream"
+13) "pubsub"
+14) "admin"
+15) "fast"
+16) "slow"
+17) "blocking"
+18) "dangerous"
+19) "connection"
+20) "transaction"
+21) "scripting"
+```
+
+Then we may want to know what commands are part of a given category:
+
+```
+> ACL CAT dangerous
+ 1) "flushdb"
+ 2) "acl"
+ 3) "slowlog"
+ 4) "debug"
+ 5) "role"
+ 6) "keys"
+ 7) "pfselftest"
+ 8) "client"
+ 9) "bgrewriteaof"
+10) "replicaof"
+11) "monitor"
+12) "restore-asking"
+13) "latency"
+14) "replconf"
+15) "pfdebug"
+16) "bgsave"
+17) "sync"
+18) "config"
+19) "flushall"
+20) "cluster"
+21) "info"
+22) "lastsave"
+23) "slaveof"
+24) "swapdb"
+25) "module"
+26) "restore"
+27) "migrate"
+28) "save"
+29) "shutdown"
+30) "psync"
+31) "sort"
+```
+
+@return
+
+@array-reply: a list of ACL categories or a list of commands inside a given
+category. The command may return an error if an invalid category name is given
+as argument.
diff --git a/iredis/data/commands/acl-deluser.md b/iredis/data/commands/acl-deluser.md
new file mode 100644
index 0000000..88359fe
--- /dev/null
+++ b/iredis/data/commands/acl-deluser.md
@@ -0,0 +1,17 @@
+Delete all the specified ACL users and terminate all the connections that are
+authenticated with such users. Note: the special `default` user cannot be
+removed from the system; this is the default user that every new connection is
+authenticated with. The list of users may include usernames that do not exist;
+in that case, no operation is performed for the non-existing users.
+
+@return
+
+@integer-reply: The number of users that were deleted. This number will not
+always match the number of arguments since certain users may not exist.
+
+@examples
+
+```
+> ACL DELUSER antirez
+1
+```
diff --git a/iredis/data/commands/acl-genpass.md b/iredis/data/commands/acl-genpass.md
new file mode 100644
index 0000000..46043cc
--- /dev/null
+++ b/iredis/data/commands/acl-genpass.md
@@ -0,0 +1,43 @@
+ACL users need a solid password in order to authenticate to the server without
+security risks. Such a password does not need to be remembered by humans, but only
+by computers, so it can be very long and strong (unguessable by an external
+attacker). The `ACL GENPASS` command generates a password starting from
+/dev/urandom if available, otherwise (in systems without /dev/urandom) it uses a
+weaker system that is likely still better than picking a weak password by hand.
+
+By default (if /dev/urandom is available) the password is strong and can be used
+for other uses in the context of a Redis application, for instance in order to
+create unique session identifiers or other kind of unguessable and not colliding
+IDs. The password generation is also very cheap because we don't really ask
+/dev/urandom for bits at every execution. At startup Redis creates a seed using
+/dev/urandom, then it will use SHA256 in counter mode, with
+HMAC-SHA256(seed,counter) as primitive, in order to create more random bytes as
+needed. This means that the application developer should feel free to abuse
+`ACL GENPASS` to create as many secure pseudorandom strings as needed.
+
+The command output is a hexadecimal representation of a binary string. By
+default it emits 256 bits (so 64 hex characters). The user can provide an
+argument in the form of a number of bits to emit, from 1 to 1024, to change the
+output length. Note that the number of bits provided is always rounded up to the
+next multiple of 4. So for instance asking for just a 1-bit password will result
+in 4 bits being emitted, in the form of a single hex character.
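+
+A quick illustration of the length rule from a client, as a minimal redis-py
+sketch (redis-py is assumed, not part of this documentation):
+
+```
+import redis  # redis-py, assumed available
+
+r = redis.Redis()
+
+token = r.execute_command("ACL", "GENPASS")     # 256 bits -> 64 hex characters
+assert len(token) == 64
+
+short = r.execute_command("ACL", "GENPASS", 5)  # 5 bits rounded up to 8 -> 2 hex characters
+assert len(short) == 2
+```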
+
+@return
+
+@bulk-string-reply: by default a 64-character string representing 256 bits of
+pseudorandom data. Otherwise, if an argument is given, the output string length
+is the number of specified bits (rounded up to the next multiple of 4) divided
+by 4.
+
+@examples
+
+```
+> ACL GENPASS
+"dd721260bfe1b3d9601e7fbab36de6d04e2e67b0ef1c53de59d45950db0dd3cc"
+
+> ACL GENPASS 32
+"355ef3dd"
+
+> ACL GENPASS 5
+"90"
+```
diff --git a/iredis/data/commands/acl-getuser.md b/iredis/data/commands/acl-getuser.md
new file mode 100644
index 0000000..a3bebe3
--- /dev/null
+++ b/iredis/data/commands/acl-getuser.md
@@ -0,0 +1,28 @@
+The command returns all the rules defined for an existing ACL user.
+
+Specifically, it lists the user's ACL flags, password hashes and key name
+patterns. Note that command rules are returned as a string in the same format
+used with the `ACL SETUSER` command. This description of command rules reflects
+the user's effective permissions, so while it may not be identical to the set of
+rules used to configure the user, it is still functionally identical.
+
+@array-reply: a list of ACL rule definitions for the user.
+
+@examples
+
+Here's the default configuration for the default user:
+
+```
+> ACL GETUSER default
+1) "flags"
+2) 1) "on"
+ 2) "allkeys"
+ 3) "allcommands"
+ 4) "nopass"
+3) "passwords"
+4) (empty array)
+5) "commands"
+6) "+@all"
+7) "keys"
+8) 1) "*"
+```
diff --git a/iredis/data/commands/acl-help.md b/iredis/data/commands/acl-help.md
new file mode 100644
index 0000000..3ec1ffb
--- /dev/null
+++ b/iredis/data/commands/acl-help.md
@@ -0,0 +1,6 @@
+The `ACL HELP` command returns a helpful text describing the different
+subcommands.
+
+@return
+
+@array-reply: a list of subcommands and their descriptions
diff --git a/iredis/data/commands/acl-list.md b/iredis/data/commands/acl-list.md
new file mode 100644
index 0000000..ebfa36b
--- /dev/null
+++ b/iredis/data/commands/acl-list.md
@@ -0,0 +1,17 @@
+The command shows the currently active ACL rules in the Redis server. Each line
+in the returned array defines a different user, and the format is the same as
+the one used in the redis.conf file or the external ACL file, so you can cut and
+paste what is returned by the `ACL LIST` command directly into a configuration
+file if you
+wish (but make sure to check `ACL SAVE`).
+
+@return
+
+An array of strings.
+
+@examples
+
+```
+> ACL LIST
+1) "user antirez on #9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 ~objects:* +@all -@admin -@dangerous"
+2) "user default on nopass ~* +@all"
+```
diff --git a/iredis/data/commands/acl-load.md b/iredis/data/commands/acl-load.md
new file mode 100644
index 0000000..3892bb4
--- /dev/null
+++ b/iredis/data/commands/acl-load.md
@@ -0,0 +1,27 @@
+When Redis is configured to use an ACL file (with the `aclfile` configuration
+option), this command will reload the ACLs from the file, replacing all the
+current ACL rules with the ones defined in the file. The command makes sure to
+have an _all or nothing_ behavior, that is:
+
+- If every line in the file is valid, all the ACLs are loaded.
+- If one or more line in the file is not valid, nothing is loaded, and the old
+ ACL rules defined in the server memory continue to be used.
+
+@return
+
+@simple-string-reply: `OK` on success.
+
+The command may fail with an error for several reasons: if the file is not
+readable, or if there is an error inside the file, in which case the error will
+be reported back to the user. Finally, the command will fail if the server is
+not configured to use an external ACL file.
+
+@examples
+
+```
+> ACL LOAD
++OK
+
+> ACL LOAD
+-ERR /tmp/foo:1: Unknown command or category name in ACL...
+```
diff --git a/iredis/data/commands/acl-log.md b/iredis/data/commands/acl-log.md
new file mode 100644
index 0000000..6dbdb85
--- /dev/null
+++ b/iredis/data/commands/acl-log.md
@@ -0,0 +1,41 @@
+The command shows a list of recent ACL security events:
+
+1. Failures to authenticate a connection with `AUTH` or `HELLO`.
+2. Commands denied because they are against the current ACL rules.
+3. Commands denied because they access keys not allowed by the current ACL rules.
+
+The optional argument specifies how many entries to show. By default up to ten
+failures are returned. The special `RESET` argument clears the log. Entries are
+displayed starting from the most recent.
+
+@return
+
+When called to show security events:
+
+@array-reply: a list of ACL security events.
+
+When called with `RESET`:
+
+@simple-string-reply: `OK` if the security log was cleared.
+
+@examples
+
+```
+> AUTH someuser wrongpassword
+(error) WRONGPASS invalid username-password pair
+> ACL LOG 1
+1) 1) "count"
+ 2) (integer) 1
+ 3) "reason"
+ 4) "auth"
+ 5) "context"
+ 6) "toplevel"
+ 7) "object"
+ 8) "AUTH"
+ 9) "username"
+ 10) "someuser"
+ 11) "age-seconds"
+ 12) "4.0960000000000001"
+ 13) "client-info"
+ 14) "id=6 addr=127.0.0.1:63026 fd=8 name= age=9 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=48 qbuf-free=32720 obl=0 oll=0 omem=0 events=r cmd=auth user=default"
+```
diff --git a/iredis/data/commands/acl-save.md b/iredis/data/commands/acl-save.md
new file mode 100644
index 0000000..17580c6
--- /dev/null
+++ b/iredis/data/commands/acl-save.md
@@ -0,0 +1,20 @@
+When Redis is configured to use an ACL file (with the `aclfile` configuration
+option), this command will save the currently defined ACLs from the server
+memory to the ACL file.
+
+@return
+
+@simple-string-reply: `OK` on success.
+
+The command may fail with an error for several reasons: if the file cannot be
+written or if the server is not configured to use an external ACL file.
+
+@examples
+
+```
+> ACL SAVE
++OK
+
+> ACL SAVE
+-ERR There was an error trying to save the ACLs. Please check the server logs for more information
+```
diff --git a/iredis/data/commands/acl-setuser.md b/iredis/data/commands/acl-setuser.md
new file mode 100644
index 0000000..476d178
--- /dev/null
+++ b/iredis/data/commands/acl-setuser.md
@@ -0,0 +1,115 @@
+Create an ACL user with the specified rules or modify the rules of an existing
+user. This is the main interface in order to manipulate Redis ACL users
+interactively: if the username does not exist, the command creates the username
+without any privilege, then reads from left to right all the rules provided as
+successive arguments, setting the user ACL rules as specified.
+
+If the user already exists, the provided ACL rules are simply applied _in
+addition_ to the rules already set. For example:
+
+ ACL SETUSER virginia on allkeys +set
+
+The above command will create a user called `virginia` that is active (the on
+rule), can access any key (allkeys rule), and can call the set command (+set
+rule). Then another SETUSER call can modify the user rules:
+
+ ACL SETUSER virginia +get
+
+The above call applies the new rule to the user `virginia` in addition to the
+existing ones, so besides `SET`, the user `virginia` will now also be able to
+use the `GET` command.
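+
+The same sequence can be issued from a client; a minimal redis-py sketch
+(redis-py is assumed here):
+
+```
+import redis  # redis-py, assumed available
+
+r = redis.Redis()
+
+r.execute_command("ACL", "SETUSER", "virginia", "on", "allkeys", "+set")
+# A second call adds rules on top of the existing ones.
+r.execute_command("ACL", "SETUSER", "virginia", "+get")
+
+print(r.execute_command("ACL", "GETUSER", "virginia"))
+```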
+
+When we want to be sure to define a user from scratch, regardless of any rules
+previously associated with it, we can use the special rule `reset` as the first
+rule, in order to flush all the other existing rules:
+
+ ACL SETUSER antirez reset [... other rules ...]
+
+After resetting a user, it returns to the status it had when it was just
+created: not active (off rule), unable to execute any command, unable to access
+any key:
+
+ > ACL SETUSER antirez reset
+ +OK
+ > ACL LIST
+ 1) "user antirez off -@all"
+
+ACL rules are either words like "on", "off", "reset", "allkeys", or are special
+rules that start with a special character, and are followed by another string
+(without any space in between), like "+SET".
+
+The following documentation is a reference manual about the capabilities of this
+command, however our [ACL tutorial](/topics/acl) may be a more gentle
+introduction to how the ACL system works in general.
+
+## List of rules
+
+This is a list of all the supported Redis ACL rules:
+
+- `on`: set the user as active, it will be possible to authenticate as this user
+ using `AUTH <username> <password>`.
+- `off`: set the user as not active; it will be impossible to authenticate as
+  this user. Please note that if a user gets disabled (set to off) while there
+  are connections already authenticated with that user, the connections will
+  continue to work as expected. To also kill the old connections you can use
+  `CLIENT KILL` with the user option. An alternative is to delete the user with
+  `ACL DELUSER`, which will result in all the connections authenticated as the
+  deleted user being disconnected.
+- `~<pattern>`: add the specified key pattern (glob style pattern, like in the
+  `KEYS` command) to the list of key patterns accessible by the user. You can
+  add as many key patterns as you want to the same user. Example: `~objects:*`
+- `allkeys`: alias for `~*`, it allows the user to access all the keys.
+- `resetkey`: removes all the key patterns from the list of key patterns the
+ user can access.
+- `+<command>`: add this command to the list of the commands the user can call.
+ Example: `+zadd`.
+- `+@<category>`: add all the commands in the specified category to the list of
+ commands the user is able to execute. Example: `+@string` (adds all the string
+ commands). For a list of categories check the `ACL CAT` command.
+- `+<command>|<subcommand>`: add the specified command to the list of the
+  commands the user can execute, but only for the specified subcommand. Example:
+  `+config|get`. Generates an error if the specified command is already allowed
+  in its full version for the specified user. Note: there is no symmetrical rule
+  to remove subcommands; you need to remove the whole command and re-add only
+  the subcommands you want to allow. This is much safer than removing
+  subcommands, because Redis may add new dangerous subcommands in the future, so
+  configuring by subtraction is not a good idea.
+- `allcommands`: alias of `+@all`. Adds all the commands present in the server,
+  including _future commands_ loaded via modules, to the set of commands this
+  user is able to execute.
+- `-<command>`: Like `+<command>` but removes the command instead of adding it.
+- `-@<category>`: Like `+@<category>` but removes all the commands in the
+  category instead of adding them.
+- `nocommands`: alias for `-@all`. Removes all the commands; the user will no
+  longer be able to execute anything.
+- `nopass`: the user is set as a "no password" user. This means that it will be
+  possible to authenticate as this user with any password. By default, the
+  `default` special user is set as "nopass". The `nopass` rule will also reset
+  all the configured passwords for the user.
+- `>password`: Add the specified clear text password as a hashed password in
+  the list of the user's passwords. Every user can have many active passwords,
+  so that password rotation is simpler. The specified password is not stored in
+  clear text inside the server. Example: `>mypassword`.
+- `#<hashedpassword>`: Add the specified hashed password to the list of user
+ passwords. A Redis hashed password is hashed with SHA256 and translated into a
+ hexadecimal string. Example:
+ `#c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2`.
+- `<password`: Like `>password` but removes the password instead of adding it.
+- `!<hashedpassword>`: Like `#<hashedpassword>` but removes the password instead
+ of adding it.
+- `reset`: Remove any capability from the user. The user is set to off, has no
+  passwords, and is unable to execute any command or access any key.
+
+@return
+
+@simple-string-reply: `OK` on success.
+
+If the rules contain errors, the error is returned.
+
+@examples
+
+```
+> ACL SETUSER alan allkeys +@string +@set -SADD >alanpassword
++OK
+
+> ACL SETUSER antirez heeyyyy
+(error) ERR Error in ACL SETUSER modifier 'heeyyyy': Syntax error
+```
diff --git a/iredis/data/commands/acl-users.md b/iredis/data/commands/acl-users.md
new file mode 100644
index 0000000..c4d4d8c
--- /dev/null
+++ b/iredis/data/commands/acl-users.md
@@ -0,0 +1,15 @@
+The command shows a list of all the usernames of the currently configured users
+in the Redis ACL system.
+
+@return
+
+An array of strings.
+
+@examples
+
+```
+> ACL USERS
+1) "anna"
+2) "antirez"
+3) "default"
+```
diff --git a/iredis/data/commands/acl-whoami.md b/iredis/data/commands/acl-whoami.md
new file mode 100644
index 0000000..3007760
--- /dev/null
+++ b/iredis/data/commands/acl-whoami.md
@@ -0,0 +1,14 @@
+Return the username the current connection is authenticated with. New
+connections are authenticated with the "default" user. They can change user
+using `AUTH`.
+
+@return
+
+@bulk-string-reply: the username of the current connection.
+
+@examples
+
+```
+> ACL WHOAMI
+"default"
+```
diff --git a/iredis/data/commands/append.md b/iredis/data/commands/append.md
new file mode 100644
index 0000000..c354122
--- /dev/null
+++ b/iredis/data/commands/append.md
@@ -0,0 +1,55 @@
+If `key` already exists and is a string, this command appends the `value` at the
+end of the string. If `key` does not exist it is created and set as an empty
+string, so `APPEND` will be similar to `SET` in this special case.
+
+@return
+
+@integer-reply: the length of the string after the append operation.
+
+@examples
+
+```cli
+EXISTS mykey
+APPEND mykey "Hello"
+APPEND mykey " World"
+GET mykey
+```
+
+## Pattern: Time series
+
+The `APPEND` command can be used to create a very compact representation of a
+list of fixed-size samples, usually referred to as _time series_. Every time a new
+sample arrives we can store it using the command
+
+```
+APPEND timeseries "fixed-size sample"
+```
+
+Accessing individual elements in the time series is not hard:
+
+- `STRLEN` can be used in order to obtain the number of samples.
+- `GETRANGE` allows for random access of elements. If our time series has
+  associated time information we can easily implement a binary search to get a
+  range, combining `GETRANGE` with the Lua scripting engine available in Redis
+ 2.6.
+- `SETRANGE` can be used to overwrite an existing time series.
+
+The limitation of this pattern is that we are forced into an append-only mode
+of operation: there is no way to easily cut the time series to a given size
+because
+Redis currently lacks a command able to trim string objects. However the space
+efficiency of time series stored in this way is remarkable.
+
+Hint: it is possible to switch to a different key based on the current Unix
+time, in this way it is possible to have just a relatively small amount of
+samples per key, to avoid dealing with very big keys, and to make this pattern
+more friendly to be distributed across many Redis instances.
+
+An example sampling the temperature of a sensor using fixed-size strings (using
+a binary format is better in real implementations).
+
+```cli
+APPEND ts "0043"
+APPEND ts "0035"
+GETRANGE ts 0 3
+GETRANGE ts 4 7
+```
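+
+The key-switching hint above can be sketched in Python with redis-py (the
+one-day bucket, key prefix and 4-byte sample size are illustrative choices, not
+part of the command):
+
+```
+import time
+
+import redis  # redis-py, assumed available
+
+r = redis.Redis()
+SAMPLE_SIZE = 4  # fixed-size samples, as in the example above
+
+def add_sample(sample: bytes):
+    # One key per day keeps individual keys small and easy to distribute.
+    key = "ts:%d" % (int(time.time()) // 86400)
+    r.append(key, sample)
+
+def get_sample(key: str, index: int) -> bytes:
+    start = index * SAMPLE_SIZE
+    return r.getrange(key, start, start + SAMPLE_SIZE - 1)
+```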
diff --git a/iredis/data/commands/auth.md b/iredis/data/commands/auth.md
new file mode 100644
index 0000000..4b75171
--- /dev/null
+++ b/iredis/data/commands/auth.md
@@ -0,0 +1,42 @@
+The AUTH command authenticates the current connection in two cases:
+
+1. If the Redis server is password protected via the `requirepass` option.
+2. If a Redis 6.0 instance, or greater, is using the
+ [Redis ACL system](/topics/acl).
+
+Redis versions prior to Redis 6 were only able to understand the one-argument
+version of the command:
+
+ AUTH <password>
+
+This form just authenticates against the password set with `requirepass`. In
+this configuration Redis will deny any command executed by the just connected
+clients, unless the connection gets authenticated via `AUTH`.
+
+If the password provided via AUTH matches the password in the configuration
+file, the server replies with the `OK` status code and starts accepting
+commands. Otherwise, an error is returned and the client needs to try a new
+password.
+
+When Redis ACLs are used, the command should be given in an extended way:
+
+ AUTH <username> <password>
+
+This form authenticates the current connection with one of the users defined in
+the ACL list (see `ACL SETUSER` and the official [ACL guide](/topics/acl) for
+more information).
+
+When ACLs are used, the single argument form of the command, where only the
+password is specified, assumes that the implicit username is "default".
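+
+For example, with the redis-py client (assumed here; it accepts a `username`
+parameter since version 3.4) the two forms map to connection parameters, and the
+credentials below are hypothetical:
+
+```
+import redis  # redis-py, assumed available
+
+# requirepass-style authentication (implicit user "default").
+r = redis.Redis(password="mysecret")
+
+# ACL-style authentication with an explicit username.
+r = redis.Redis(username="alice", password="alicepassword")
+
+r.ping()
+```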
+
+## Security notice
+
+Because of the high performance nature of Redis, it is possible to try a lot of
+passwords in parallel in very short time, so make sure to generate a strong and
+very long password so that this attack is infeasible. A good way to generate
+strong passwords is via the `ACL GENPASS` command.
+
+@return
+
+@simple-string-reply or an error if the password, or username/password pair, is
+invalid.
diff --git a/iredis/data/commands/bgrewriteaof.md b/iredis/data/commands/bgrewriteaof.md
new file mode 100644
index 0000000..2ebaa89
--- /dev/null
+++ b/iredis/data/commands/bgrewriteaof.md
@@ -0,0 +1,37 @@
+Instruct Redis to start an [Append Only File][tpaof] rewrite process. The
+rewrite will create a small optimized version of the current Append Only File.
+
+[tpaof]: /topics/persistence#append-only-file
+
+If `BGREWRITEAOF` fails, no data gets lost as the old AOF will be untouched.
+
+The rewrite will be only triggered by Redis if there is not already a background
+process doing persistence.
+
+Specifically:
+
+- If a Redis child is creating a snapshot on disk, the AOF rewrite is
+  _scheduled_ but not started until the saving child producing the RDB file
+  terminates. In this case the `BGREWRITEAOF` will still return a positive
+  status reply, but with an appropriate message. You can check if an AOF rewrite
+  is scheduled by looking at the `INFO` command output, as of Redis 2.6 or
+  later.
+- If an AOF rewrite is already in progress the command returns an error and no
+ AOF rewrite will be scheduled for a later time.
+- If the AOF rewrite could start, but the attempt at starting it fails (for
+ instance because of an error in creating the child process), an error is
+ returned to the caller.
+
+Since Redis 2.4 the AOF rewrite is automatically triggered by Redis, however the
+`BGREWRITEAOF` command can be used to trigger a rewrite at any time.
+
+Please refer to the [persistence documentation][tp] for detailed information.
+
+[tp]: /topics/persistence
+
+@return
+
+@simple-string-reply: A simple string reply indicating that the rewriting
+started or is about to start ASAP, when the call is executed with success.
+
+The command may reply with an error in certain cases, as documented above.
diff --git a/iredis/data/commands/bgsave.md b/iredis/data/commands/bgsave.md
new file mode 100644
index 0000000..f04d71b
--- /dev/null
+++ b/iredis/data/commands/bgsave.md
@@ -0,0 +1,28 @@
+Save the DB in background.
+
+Normally the OK code is immediately returned. Redis forks, the parent continues
+to serve the clients, the child saves the DB on disk then exits.
+
+An error is returned if there is already a background save running or if there
+is another non-background-save process running, specifically an in-progress AOF
+rewrite.
+
+If `BGSAVE SCHEDULE` is used, the command will immediately return `OK` when an
+AOF rewrite is in progress and schedule the background save to run at the next
+opportunity.
+
+A client may be able to check if the operation succeeded using the `LASTSAVE`
+command.
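+
+A minimal sketch of that check with redis-py (assumed client), polling
+`LASTSAVE` until it changes:
+
+```
+import time
+
+import redis  # redis-py, assumed available
+
+r = redis.Redis()
+
+before = r.lastsave()  # time of the last successful save
+r.bgsave()             # errors if a background save is already running
+while r.lastsave() == before:
+    time.sleep(0.1)    # poll until the background save completes
+```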
+
+Please refer to the [persistence documentation][tp] for detailed information.
+
+[tp]: /topics/persistence
+
+@return
+
+@simple-string-reply: `Background saving started` if `BGSAVE` started correctly
+or `Background saving scheduled` when used with the `SCHEDULE` subcommand.
+
+@history
+
+- `>= 3.2.2`: Added the `SCHEDULE` option.
diff --git a/iredis/data/commands/bitcount.md b/iredis/data/commands/bitcount.md
new file mode 100644
index 0000000..680aae1
--- /dev/null
+++ b/iredis/data/commands/bitcount.md
@@ -0,0 +1,66 @@
+Count the number of set bits (population counting) in a string.
+
+By default all the bytes contained in the string are examined. It is possible to
+specify the counting operation only in an interval passing the additional
+arguments _start_ and _end_.
+
+Like for the `GETRANGE` command start and end can contain negative values in
+order to index bytes starting from the end of the string, where -1 is the last
+byte, -2 is the penultimate, and so forth.
+
+Non-existent keys are treated as empty strings, so the command will return zero.
+
+@return
+
+@integer-reply
+
+The number of bits set to 1.
+
+@examples
+
+```cli
+SET mykey "foobar"
+BITCOUNT mykey
+BITCOUNT mykey 0 0
+BITCOUNT mykey 1 1
+```
+
+## Pattern: real-time metrics using bitmaps
+
+Bitmaps are a very space-efficient representation of certain kinds of
+information. One example is a Web application that needs the history of user
+visits, so that for instance it is possible to determine what users are good
+targets of beta features.
+
+Using the `SETBIT` command this is trivial to accomplish, identifying every day
+with a small progressive integer. For instance day 0 is the first day the
+application was put online, day 1 the next day, and so forth.
+
+Every time a user performs a page view, the application can register that in the
+current day the user visited the web site using the `SETBIT` command setting the
+bit corresponding to the current day.
+
+Later it will be trivial to know how many distinct days the user visited the
+web site, simply by calling the `BITCOUNT` command against the bitmap.
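+
+A minimal redis-py sketch of the pattern (key name and day indexes are
+illustrative):
+
+```
+import redis  # redis-py, assumed available
+
+r = redis.Redis()
+
+def record_visit(user_id, day):
+    # Day 0 is the day the application went online, day 1 the next day, ...
+    r.setbit("visits:%d" % user_id, day, 1)
+
+record_visit(4000, 0)
+record_visit(4000, 7)
+record_visit(4000, 7)  # the same day twice still counts once
+
+print(r.bitcount("visits:4000"))  # -> 2 distinct days visited
+```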
+
+A similar pattern where user IDs are used instead of days is described in the
+article called "[Fast easy realtime metrics using Redis
+bitmaps][hbgc212fermurb]".
+
+[hbgc212fermurb]:
+ http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps
+
+## Performance considerations
+
+In the above example of counting days, even after the application has been
+online for 10 years, we still have just `365*10` bits of data per user, that is
+just 456 bytes per user. With this amount of data `BITCOUNT` is still as fast as
+any other O(1)
+Redis command like `GET` or `INCR`.
+
+When the bitmap is big, there are two alternatives:
+
+- Keeping a separate key that is incremented every time the bitmap is modified.
+  This can be made very efficient and atomic using a small Redis Lua script.
+- Running the bit count incrementally, using the `BITCOUNT` _start_ and _end_
+  optional parameters, accumulating the results client-side, and optionally
+  caching the result into a key (see the sketch below).
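+
+A minimal sketch of the incremental approach with redis-py (the 1 KiB chunk size
+is an arbitrary choice):
+
+```
+import redis  # redis-py, assumed available
+
+r = redis.Redis()
+
+key, chunk = "visits:4000", 1024
+total, size = 0, r.strlen(key)
+for start in range(0, size, chunk):
+    # BITCOUNT start/end are byte offsets, inclusive on both ends.
+    total += r.bitcount(key, start, min(start + chunk, size) - 1)
+print(total)
+```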
diff --git a/iredis/data/commands/bitfield.md b/iredis/data/commands/bitfield.md
new file mode 100644
index 0000000..80563fa
--- /dev/null
+++ b/iredis/data/commands/bitfield.md
@@ -0,0 +1,155 @@
+The command treats a Redis string as an array of bits, and is capable of
+addressing specific integer fields of varying bit widths at arbitrary, not
+necessarily aligned, offsets. In practical terms, using this command you can
+set, for example, a signed 5-bit integer at bit offset 1234 to a specific value,
+or retrieve a 31-bit unsigned integer from offset 4567. Similarly the command
+handles increments and decrements of the specified integers, providing
+guaranteed and well specified overflow and underflow behavior that the user can
+configure.
+
+`BITFIELD` is able to operate with multiple bit fields in the same command call.
+It takes a list of operations to perform, and returns an array of replies, where
+each array matches the corresponding operation in the list of arguments.
+
+For example the following command increments a 5-bit signed integer at bit
+offset 100, and gets the value of the 4-bit unsigned integer at bit offset 0:
+
+ > BITFIELD mykey INCRBY i5 100 1 GET u4 0
+ 1) (integer) 1
+ 2) (integer) 0
+
+Note that:
+
+1. Addressing with `GET` bits outside the current string length (including the
+   case the key does not exist at all) results in the operation being performed
+   as if the missing part consists of bits set to 0.
+2. Addressing with `SET` or `INCRBY` bits outside the current string length will
+   enlarge the string, zero-padding it as needed, to the minimal length required
+   by the farthest bit touched.
+
+## Supported subcommands and integer types
+
+The following is the list of supported commands.
+
+- **GET** `<type>` `<offset>` -- Returns the specified bit field.
+- **SET** `<type>` `<offset>` `<value>` -- Set the specified bit field and
+ returns its old value.
+- **INCRBY** `<type>` `<offset>` `<increment>` -- Increments or decrements (if a
+ negative increment is given) the specified bit field and returns the new
+ value.
+
+There is another subcommand that only changes the behavior of successive
+`INCRBY` subcommand calls by setting the overflow behavior:
+
+- **OVERFLOW** `[WRAP|SAT|FAIL]`
+
+Where an integer type is expected, it can be composed by prefixing with `i` for
+signed integers and `u` for unsigned integers with the number of bits of our
+integer type. So for example `u8` is an unsigned integer of 8 bits and `i16` is
+a signed integer of 16 bits.
+
+The supported types are up to 64 bits for signed integers, and up to 63 bits for
+unsigned integers. This limitation with unsigned integers is due to the fact
+that currently the Redis protocol is unable to return 64 bit unsigned integers
+as replies.
+
+## Bits and positional offsets
+
+There are two ways in order to specify offsets in the bitfield command. If a
+number without any prefix is specified, it is used just as a zero based bit
+offset inside the string.
+
+However if the offset is prefixed with a `#` character, the specified offset is
+multiplied by the integer type width, so for example:
+
+ BITFIELD mystring SET i8 #0 100 SET i8 #1 200
+
+Will set the first i8 integer at offset 0 and the second at offset 8. This way
+you don't have to do the math yourself inside your client if what you want is a
+plain array of integers of a given size.
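+
+A minimal redis-py sketch of the same call (redis-py is assumed; the generic
+`execute_command` helper is used here, and the values are illustrative, kept
+within the i8 range):
+
+```
+import redis  # redis-py, assumed available
+
+r = redis.Redis()
+
+# Set the first two i8 slots ("#0" and "#1"), then read them back.
+print(r.execute_command("BITFIELD", "mystring",
+                        "SET", "i8", "#0", 100,
+                        "SET", "i8", "#1", -100))
+# e.g. [0, 0] on a fresh key: SET returns the old values
+
+print(r.execute_command("BITFIELD", "mystring",
+                        "GET", "i8", "#0",
+                        "GET", "i8", "#1"))
+# [100, -100]
+```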
+
+## Overflow control
+
+Using the `OVERFLOW` command the user is able to fine-tune the behavior of the
+increment or decrement overflow (or underflow) by specifying one of the
+following behaviors:
+
+- **WRAP**: wrap around, both with signed and unsigned integers. In the case of
+ unsigned integers, wrapping is like performing the operation modulo the
+ maximum value the integer can contain (the C standard behavior). With signed
+ integers instead wrapping means that overflows restart towards the most
+ negative value and underflows towards the most positive ones, so for example
+ if an `i8` integer is set to the value 127, incrementing it by 1 will yield
+ `-128`.
+- **SAT**: uses saturation arithmetic, that is, on underflows the value is set
+  to the minimum integer value, and on overflows to the maximum integer value.
+  For example, incrementing an `i8` integer starting from value 120 with an
+  increment of 10 will result in the value 127, and further increments will
+  always keep the value at 127. The same happens on underflows, but in the other
+  direction: the value is blocked at the most negative value.
+- **FAIL**: in this mode no operation is performed on overflows or underflows
+ detected. The corresponding return value is set to NULL to signal the
+ condition to the caller.
+
+Note that each `OVERFLOW` statement only affects the `INCRBY` commands that
+follow it in the list of subcommands, up to the next `OVERFLOW` statement.
+
+By default, **WRAP** is used if not otherwise specified.
+
+ > BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1
+ 1) (integer) 1
+ 2) (integer) 1
+ > BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1
+ 1) (integer) 2
+ 2) (integer) 2
+ > BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1
+ 1) (integer) 3
+ 2) (integer) 3
+ > BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1
+ 1) (integer) 0
+ 2) (integer) 3
+
+## Return value
+
+The command returns an array with each entry being the corresponding result of
+the sub command given at the same position. `OVERFLOW` subcommands don't count
+as generating a reply.
+
+The following is an example of `OVERFLOW FAIL` returning NULL.
+
+ > BITFIELD mykey OVERFLOW FAIL incrby u2 102 1
+ 1) (nil)
+
+## Motivations
+
+The motivation for this command is that the ability to store many small integers
+as a single large bitmap (or segmented over a few keys to avoid having huge
+keys) is extremely memory efficient, and opens new use cases for Redis to be
+applied, especially in the field of real-time analytics. These use cases are
+supported by the ability to specify the overflow behavior in a controlled way.
+
+Fun fact: Reddit's 2017 April fools' project
+[r/place](https://reddit.com/r/place) was
+[built using the Redis BITFIELD command](https://redditblog.com/2017/04/13/how-we-built-rplace/)
+in order to take an in-memory representation of the collaborative canvas.
+
+## Performance considerations
+
+Usually `BITFIELD` is a fast command, however note that addressing far bits of
+currently short strings will trigger an allocation that may be more costly than
+executing the command on bits already existing.
+
+## Orders of bits
+
+The representation used by `BITFIELD` considers the bitmap as having the bit
+number 0 to be the most significant bit of the first byte, and so forth, so for
+example setting a 5-bit unsigned integer to value 23 at offset 7 into a bitmap
+previously set to all zeroes, will produce the following representation:
+
+ +--------+--------+
+ |00000001|01110000|
+ +--------+--------+
+
+When offsets and integer sizes are aligned to byte boundaries, this is the same
+as big endian; however, when such alignment does not exist, it's important to
+also understand how the bits inside a byte are ordered.
diff --git a/iredis/data/commands/bitop.md b/iredis/data/commands/bitop.md
new file mode 100644
index 0000000..656befa
--- /dev/null
+++ b/iredis/data/commands/bitop.md
@@ -0,0 +1,61 @@
+Perform a bitwise operation between multiple keys (containing string values) and
+store the result in the destination key.
+
+The `BITOP` command supports four bitwise operations: **AND**, **OR**, **XOR**
+and **NOT**, thus the valid forms to call the command are:
+
+- `BITOP AND destkey srckey1 srckey2 srckey3 ... srckeyN`
+- `BITOP OR destkey srckey1 srckey2 srckey3 ... srckeyN`
+- `BITOP XOR destkey srckey1 srckey2 srckey3 ... srckeyN`
+- `BITOP NOT destkey srckey`
+
+As you can see **NOT** is special as it only takes a single input key, because
+it performs bit inversion, so it only makes sense as a unary operator.
+
+The result of the operation is always stored at `destkey`.
+
+## Handling of strings with different lengths
+
+When an operation is performed between strings having different lengths, all the
+strings shorter than the longest string in the set are treated as if they were
+zero-padded up to the length of the longest string.
+
+The same holds true for non-existent keys, that are considered as a stream of
+zero bytes up to the length of the longest string.
+
+@return
+
+@integer-reply
+
+The size of the string stored in the destination key, that is equal to the size
+of the longest input string.
+
+@examples
+
+```cli
+SET key1 "foobar"
+SET key2 "abcdef"
+BITOP AND dest key1 key2
+GET dest
+```
+
+## Pattern: real time metrics using bitmaps
+
+`BITOP` is a good complement to the pattern documented in the `BITCOUNT` command
+documentation. Different bitmaps can be combined in order to obtain a target
+bitmap where the population counting operation is performed.
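+
+For instance, combining the per-day visit bitmaps from the `BITCOUNT` pattern; a
+minimal redis-py sketch (key names are illustrative):
+
+```
+import redis  # redis-py, assumed available
+
+r = redis.Redis()
+
+# Users who visited on *both* day 0 and day 1: AND the two daily bitmaps,
+# then population-count the result.
+r.bitop("AND", "visits:both", "visits:day:0", "visits:day:1")
+print(r.bitcount("visits:both"))
+```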
+
+See the article called "[Fast easy realtime metrics using Redis
+bitmaps][hbgc212fermurb]" for interesting use cases.
+
+[hbgc212fermurb]:
+ http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps
+
+## Performance considerations
+
+`BITOP` is a potentially slow command as it runs in O(N) time. Care should be
+taken when running it against long input strings.
+
+For real-time metrics and statistics involving large inputs a good approach is
+to use a replica (with read-only option disabled) where the bit-wise operations
+are performed to avoid blocking the master instance.
diff --git a/iredis/data/commands/bitpos.md b/iredis/data/commands/bitpos.md
new file mode 100644
index 0000000..56dc588
--- /dev/null
+++ b/iredis/data/commands/bitpos.md
@@ -0,0 +1,60 @@
+Return the position of the first bit set to 1 or 0 in a string.
+
+The position is returned, thinking of the string as an array of bits from left
+to right, where the first byte's most significant bit is at position 0, the
+second byte's most significant bit is at position 8, and so forth.
+
+The same bit position convention is followed by `GETBIT` and `SETBIT`.
+
+By default, all the bytes contained in the string are examined. It is possible
+to look for bits only in a specified interval by passing the additional
+arguments _start_ and _end_ (it is possible to just pass _start_; the operation
+will assume that the end is the last byte of the string. However, there are
+semantic differences, as explained later).
+and not a range of bits, so `start=0` and `end=2` means to look at the first
+three bytes.
+
+Note that bit positions are returned always as absolute values starting from bit
+zero even when _start_ and _end_ are used to specify a range.
+
+Like for the `GETRANGE` command start and end can contain negative values in
+order to index bytes starting from the end of the string, where -1 is the last
+byte, -2 is the penultimate, and so forth.
+
+Non-existent keys are treated as empty strings.
+
+@return
+
+@integer-reply
+
+The command returns the position of the first bit set to 1 or 0 according to the
+request.
+
+If we look for set bits (the bit argument is 1) and the string is empty or
+composed of just zero bytes, -1 is returned.
+
+If we look for clear bits (the bit argument is 0) and the string only contains
+bits set to 1, the function returns the first bit not part of the string on the
+right. So if the string is three bytes set to the value `0xff` the command
+`BITPOS key 0` will return 24, since up to bit 23 all the bits are 1.
+
+Basically, the function considers the right of the string as padded with zeros
+if you look for clear bits and specify no range or the _start_ argument
+**only**.
+
+However, this behavior changes if you are looking for clear bits and specify a
+range with both **start** and **end**. If no clear bit is found in the specified
+range, the function returns -1 as the user specified a clear range and there are
+no 0 bits in that range.
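+
+A minimal redis-py sketch of the two behaviors described above (redis-py is
+assumed):
+
+```
+import redis  # redis-py, assumed available
+
+r = redis.Redis()
+
+r.set("mykey", b"\xff\xff\xff")
+print(r.bitpos("mykey", 0))         # 24: first clear bit just past the string
+print(r.bitpos("mykey", 0, 0, -1))  # -1: the explicit range contains no 0 bit
+```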
+
+@examples
+
+```cli
+SET mykey "\xff\xf0\x00"
+BITPOS mykey 0
+SET mykey "\x00\xff\xf0"
+BITPOS mykey 1 0
+BITPOS mykey 1 2
+SET mykey "\x00\x00\x00"
+BITPOS mykey 1
+```
diff --git a/iredis/data/commands/blpop.md b/iredis/data/commands/blpop.md
new file mode 100644
index 0000000..b48ace7
--- /dev/null
+++ b/iredis/data/commands/blpop.md
@@ -0,0 +1,182 @@
+`BLPOP` is a blocking list pop primitive. It is the blocking version of `LPOP`
+because it blocks the connection when there are no elements to pop from any of
+the given lists. An element is popped from the head of the first list that is
+non-empty, with the given keys being checked in the order that they are given.
+
+## Non-blocking behavior
+
+When `BLPOP` is called, if at least one of the specified keys contains a
+non-empty list, an element is popped from the head of the list and returned to
+the caller together with the `key` it was popped from.
+
+Keys are checked in the order that they are given. Let's say that the key
+`list1` doesn't exist and `list2` and `list3` hold non-empty lists. Consider the
+following command:
+
+```
+BLPOP list1 list2 list3 0
+```
+
+`BLPOP` guarantees to return an element from the list stored at `list2` (since
+it is the first non empty list when checking `list1`, `list2` and `list3` in
+that order).
+
+## Blocking behavior
+
+If none of the specified keys exist, `BLPOP` blocks the connection until another
+client performs an `LPUSH` or `RPUSH` operation against one of the keys.
+
+Once new data is present on one of the lists, the client returns with the name
+of the key unblocking it and the popped value.
+
+When `BLPOP` causes a client to block and a non-zero timeout is specified, the
+client will unblock returning a `nil` multi-bulk value when the specified
+timeout has expired without a push operation against at least one of the
+specified keys.
+
+**The timeout argument is interpreted as an integer value specifying the maximum
+number of seconds to block**. A timeout of zero can be used to block
+indefinitely.
+
+## What key is served first? What client? What element? Priority ordering details.
+
+- If the client tries to block for multiple keys, but at least one key contains
+ elements, the returned key / element pair is the first key from left to right
+ that has one or more elements. In this case the client is not blocked. So for
+ instance `BLPOP key1 key2 key3 key4 0`, assuming that both `key2` and `key4`
+ are non-empty, will always return an element from `key2`.
+- If multiple clients are blocked for the same key, the first client to be
+ served is the one that was waiting for more time (the first that blocked for
+ the key). Once a client is unblocked it does not retain any priority, when it
+ blocks again with the next call to `BLPOP` it will be served accordingly to
+ the number of clients already blocked for the same key, that will all be
+ served before it (from the first to the last that blocked).
+- When a client is blocking for multiple keys at the same time, and elements are
+ available at the same time in multiple keys (because of a transaction or a Lua
+ script added elements to multiple lists), the client will be unblocked using
+ the first key that received a push operation (assuming it has enough elements
+ to serve our client, as there may be other clients as well waiting for this
+ key). Basically after the execution of every command Redis will run a list of
+ all the keys that received data AND that have at least a client blocked. The
+ list is ordered by new element arrival time, from the first key that received
+ data to the last. For every key processed, Redis will serve all the clients
+ waiting for that key in a FIFO fashion, as long as there are elements in this
+ key. When the key is empty or there are no longer clients waiting for this
+ key, the next key that received new data in the previous command / transaction
+ / script is processed, and so forth.
+
+## Behavior of `!BLPOP` when multiple elements are pushed inside a list.
+
+There are times when a list can receive multiple elements in the context of the
+same conceptual command:
+
+- Variadic push operations such as `LPUSH mylist a b c`.
+- After an `EXEC` of a `MULTI` block with multiple push operations against the
+ same list.
+- Executing a Lua Script with Redis 2.6 or newer.
+
+When multiple elements are pushed inside a list where there are clients
+blocking, the behavior is different for Redis 2.4 and Redis 2.6 or newer.
+
+For Redis 2.6 what happens is that the command performing multiple pushes is
+executed, and _only after_ the execution of the command the blocked clients are
+served. Consider this sequence of commands.
+
+ Client A: BLPOP foo 0
+ Client B: LPUSH foo a b c
+
+If the above condition happens using a Redis 2.6 server or greater, Client **A**
+will be served with the `c` element, because after the `LPUSH` command the list
+contains `c,b,a`, so taking an element from the left means to return `c`.
+
+Instead Redis 2.4 works in a different way: clients are served _in the context_
+of the push operation, so as soon as `LPUSH foo a b c` starts pushing the first
+element to the list, it will be delivered to Client **A**, which will receive
+`a` (the first element pushed).
+
+The behavior of Redis 2.4 creates a lot of problems when replicating or
+persisting data into the AOF file, so the much more generic and semantically
+simpler behavior was introduced into Redis 2.6 to prevent problems.
+
+Note that for the same reason a Lua script or a `MULTI/EXEC` block may push
+elements into a list and afterward **delete the list**. In this case the blocked
+clients will not be served at all and will continue to be blocked as long as no
+data is present on the list after the execution of a single command,
+transaction, or script.
+
+## `!BLPOP` inside a `!MULTI` / `!EXEC` transaction
+
+`BLPOP` can be used with pipelining (sending multiple commands and reading the
+replies in batch), however this setup makes sense almost solely when it is the
+last command of the pipeline.
+
+Using `BLPOP` inside a `MULTI` / `EXEC` block does not make a lot of sense as it
+would require blocking the entire server in order to execute the block
+atomically, which in turn does not allow other clients to perform a push
+operation. For this reason the behavior of `BLPOP` inside `MULTI` / `EXEC` when
+the list is empty is to return a `nil` multi-bulk reply, which is the same thing
+that happens when the timeout is reached.
+
+If you like science fiction, think of time flowing at infinite speed inside a
+`MULTI` / `EXEC` block...
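+
+As a minimal sketch of this behavior (the key name is arbitrary and assumed to
+be an empty or missing list):
+
+```
+redis> MULTI
+OK
+redis> BLPOP emptylist 30
+QUEUED
+redis> EXEC
+1) (nil)
+```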
+
+@return
+
+@array-reply: specifically:
+
+- A `nil` multi-bulk when no element could be popped and the timeout expired.
+- A two-element multi-bulk with the first element being the name of the key
+ where an element was popped and the second element being the value of the
+ popped element.
+
+@examples
+
+```
+redis> DEL list1 list2
+(integer) 0
+redis> RPUSH list1 a b c
+(integer) 3
+redis> BLPOP list1 list2 0
+1) "list1"
+2) "a"
+```
+
+## Reliable queues
+
+When `BLPOP` returns an element to the client, it also removes the element from
+the list. This means that the element only exists in the context of the client:
+if the client crashes while processing the returned element, it is lost forever.
+
+This can be a problem with some applications where we want a more reliable
+messaging system. When this is the case, please check the `BRPOPLPUSH` command,
+which is a variant of `BLPOP` that adds the returned element to a target list
+before returning it to the client.
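+
+A minimal sketch of the idea, using arbitrary key names (`myqueue` holds the
+pending items and `processing` holds the items currently being worked on):
+
+```
+redis> RPUSH myqueue task1
+(integer) 1
+redis> BRPOPLPUSH myqueue processing 0
+"task1"
+redis> LRANGE processing 0 -1
+1) "task1"
+```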
+
+## Pattern: Event notification
+
+Using blocking list operations it is possible to build different blocking
+primitives. For instance in some applications you may need to block waiting for
+elements in a Redis Set, so that as soon as a new element is added to the Set,
+it is possible to retrieve it without resorting to polling. This would require
+a blocking version of `SPOP` that is not available, but using blocking list
+operations we can easily accomplish this task.
+
+The consumer will do:
+
+```
+LOOP forever
+ WHILE SPOP(key) returns elements
+ ... process elements ...
+ END
+ BRPOP helper_key
+END
+```
+
+While on the producer side we'll simply use:
+
+```
+MULTI
+SADD key element
+LPUSH helper_key x
+EXEC
+```
diff --git a/iredis/data/commands/brpop.md b/iredis/data/commands/brpop.md
new file mode 100644
index 0000000..e0bb650
--- /dev/null
+++ b/iredis/data/commands/brpop.md
@@ -0,0 +1,31 @@
+`BRPOP` is a blocking list pop primitive. It is the blocking version of `RPOP`
+because it blocks the connection when there are no elements to pop from any of
+the given lists. An element is popped from the tail of the first list that is
+non-empty, with the given keys being checked in the order that they are given.
+
+See the [BLPOP documentation][cb] for the exact semantics, since `BRPOP` is
+identical to `BLPOP` with the only difference being that it pops elements from
+the tail of a list instead of popping from the head.
+
+[cb]: /commands/blpop
+
+@return
+
+@array-reply: specifically:
+
+- A `nil` multi-bulk when no element could be popped and the timeout expired.
+- A two-element multi-bulk with the first element being the name of the key
+ where an element was popped and the second element being the value of the
+ popped element.
+
+@examples
+
+```
+redis> DEL list1 list2
+(integer) 0
+redis> RPUSH list1 a b c
+(integer) 3
+redis> BRPOP list1 list2 0
+1) "list1"
+2) "c"
+```
diff --git a/iredis/data/commands/brpoplpush.md b/iredis/data/commands/brpoplpush.md
new file mode 100644
index 0000000..1c3a9b3
--- /dev/null
+++ b/iredis/data/commands/brpoplpush.md
@@ -0,0 +1,21 @@
+`BRPOPLPUSH` is the blocking variant of `RPOPLPUSH`. When `source` contains
+elements, this command behaves exactly like `RPOPLPUSH`. When used inside a
+`MULTI`/`EXEC` block, this command behaves exactly like `RPOPLPUSH`. When
+`source` is empty, Redis will block the connection until another client pushes
+to it or until `timeout` is reached. A `timeout` of zero can be used to block
+indefinitely.
+
+See `RPOPLPUSH` for more information.
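+
+As a minimal illustration (key names and values are arbitrary):
+
+```
+redis> RPUSH mylist one two three
+(integer) 3
+redis> BRPOPLPUSH mylist target 0
+"three"
+```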
+
+@return
+
+@bulk-string-reply: the element being popped from `source` and pushed to
+`destination`. If `timeout` is reached, a @nil-reply is returned.
+
+## Pattern: Reliable queue
+
+Please see the pattern description in the `RPOPLPUSH` documentation.
+
+## Pattern: Circular list
+
+Please see the pattern description in the `RPOPLPUSH` documentation.
diff --git a/iredis/data/commands/bzpopmax.md b/iredis/data/commands/bzpopmax.md
new file mode 100644
index 0000000..a061b0e
--- /dev/null
+++ b/iredis/data/commands/bzpopmax.md
@@ -0,0 +1,37 @@
+`BZPOPMAX` is the blocking variant of the sorted set `ZPOPMAX` primitive.
+
+It is the blocking version because it blocks the connection when there are no
+members to pop from any of the given sorted sets. A member with the highest
+score is popped from the first sorted set that is non-empty, with the given keys
+being checked in the order that they are given.
+
+The `timeout` argument is interpreted as an integer value specifying the maximum
+number of seconds to block. A timeout of zero can be used to block indefinitely.
+
+See the [BZPOPMIN documentation][cb] for the exact semantics, since `BZPOPMAX`
+is identical to `BZPOPMIN` with the only difference being that it pops members
+with the highest scores instead of popping the ones with the lowest scores.
+
+[cb]: /commands/bzpopmin
+
+@return
+
+@array-reply: specifically:
+
+- A `nil` multi-bulk when no element could be popped and the timeout expired.
+- A three-element multi-bulk with the first element being the name of the key
+ where a member was popped, the second element is the popped member itself, and
+ the third element is the score of the popped element.
+
+@examples
+
+```
+redis> DEL zset1 zset2
+(integer) 0
+redis> ZADD zset1 0 a 1 b 2 c
+(integer) 3
+redis> BZPOPMAX zset1 zset2 0
+1) "zset1"
+2) "c"
+3) "2"
+```
diff --git a/iredis/data/commands/bzpopmin.md b/iredis/data/commands/bzpopmin.md
new file mode 100644
index 0000000..118a821
--- /dev/null
+++ b/iredis/data/commands/bzpopmin.md
@@ -0,0 +1,37 @@
+`BZPOPMIN` is the blocking variant of the sorted set `ZPOPMIN` primitive.
+
+It is the blocking version because it blocks the connection when there are no
+members to pop from any of the given sorted sets. A member with the lowest score
+is popped from the first sorted set that is non-empty, with the given keys being
+checked in the order that they are given.
+
+The `timeout` argument is interpreted as an integer value specifying the maximum
+number of seconds to block. A timeout of zero can be used to block indefinitely.
+
+See the [BLPOP documentation][cl] for the exact semantics, since `BZPOPMIN` is
+identical to `BLPOP` with the only difference being the data structure being
+popped from.
+
+[cl]: /commands/blpop
+
+@return
+
+@array-reply: specifically:
+
+- A `nil` multi-bulk when no element could be popped and the timeout expired.
+- A three-element multi-bulk with the first element being the name of the key
+ where a member was popped, the second element is the popped member itself, and
+ the third element is the score of the popped element.
+
+@examples
+
+```
+redis> DEL zset1 zset2
+(integer) 0
+redis> ZADD zset1 0 a 1 b 2 c
+(integer) 3
+redis> BZPOPMIN zset1 zset2 0
+1) "zset1"
+2) "a"
+3) "0"
+```
diff --git a/iredis/data/commands/client-caching.md b/iredis/data/commands/client-caching.md
new file mode 100644
index 0000000..7bbb439
--- /dev/null
+++ b/iredis/data/commands/client-caching.md
@@ -0,0 +1,19 @@
+This command controls the tracking of the keys in the next command executed by
+the connection, when tracking is enabled in `OPTIN` or `OPTOUT` mode. Please
+check the [client side caching documentation](/topics/client-side-caching) for
+background information.
+
+When tracking is enabled in Redis using the `CLIENT TRACKING` command, it is
+possible to specify the `OPTIN` or `OPTOUT` options, so that keys in read only
+commands are not automatically remembered by the server to be invalidated later.
+When we are in `OPTIN` mode, we can enable the tracking of the keys in the next
+command by calling `CLIENT CACHING yes` immediately before it. Similarly, when
+we are in `OPTOUT` mode, and keys are normally tracked, we can avoid having the
+keys in the next command tracked by using `CLIENT CACHING no`.
+
+Basically the command sets a state in the connection, valid only for the next
+command execution, that modifies the behavior of client tracking.
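+
+A minimal sketch, assuming the connection uses RESP3 and that the key and value
+are arbitrary (in `OPTIN` mode only the command issued immediately after
+`CLIENT CACHING yes` has its keys tracked):
+
+```
+> CLIENT TRACKING on OPTIN
+OK
+> CLIENT CACHING yes
+OK
+> GET somekey
+"somevalue"
+```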
+
+@return
+
+@simple-string-reply: `OK` or an error if the argument is not `yes` or `no`.
diff --git a/iredis/data/commands/client-getname.md b/iredis/data/commands/client-getname.md
new file mode 100644
index 0000000..e91d8ae
--- /dev/null
+++ b/iredis/data/commands/client-getname.md
@@ -0,0 +1,7 @@
+The `CLIENT GETNAME` command returns the name of the current connection as set
+by `CLIENT SETNAME`. Since every new connection starts without an associated
+name, if no name was assigned a null bulk reply is returned.
+
+@return
+
+@bulk-string-reply: The connection name, or a null bulk reply if no name is set.
diff --git a/iredis/data/commands/client-getredir.md b/iredis/data/commands/client-getredir.md
new file mode 100644
index 0000000..ddaa1b7
--- /dev/null
+++ b/iredis/data/commands/client-getredir.md
@@ -0,0 +1,13 @@
+This command returns the client ID we are redirecting our
+[tracking](/topics/client-side-caching) notifications to. The redirection
+target is set when using `CLIENT TRACKING` to enable tracking. However, in
+order to avoid forcing client library implementations to remember the ID that
+notifications are redirected to, this command exists to improve introspection
+and allow clients to check later whether redirection is active and towards
+which client ID.
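+
+An illustrative session; the client ID `42` is hypothetical and must refer to
+an existing connection (its ID can be obtained, for example, with `CLIENT ID`):
+
+```
+> CLIENT GETREDIR
+(integer) -1
+> CLIENT TRACKING on REDIRECT 42
+OK
+> CLIENT GETREDIR
+(integer) 42
+```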
+
+@return
+
+@integer-reply: the ID of the client we are redirecting the notifications to.
+The command returns `-1` if client tracking is not enabled, or `0` if client
+tracking is enabled but we are not redirecting the notifications to any client.
diff --git a/iredis/data/commands/client-id.md b/iredis/data/commands/client-id.md
new file mode 100644
index 0000000..d242d6f
--- /dev/null
+++ b/iredis/data/commands/client-id.md
@@ -0,0 +1,25 @@
+The command just returns the ID of the current connection. Every connection ID
+has certain guarantees:
+
+1. It is never repeated, so if `CLIENT ID` returns the same number, the caller
+ can be sure that the underlying client did not disconnect and reconnect the
+ connection, but it is still the same connection.
+2. The ID is monotonically increasing. If the ID of a connection is greater
+ than the ID of another connection, it is guaranteed that the second
+ connection was established with the server at a later time.
+
+This command is especially useful together with `CLIENT UNBLOCK` which was
+introduced also in Redis 5 together with `CLIENT ID`. Check the `CLIENT UNBLOCK`
+command page for a pattern involving the two commands.
+
+@examples
+
+```cli
+CLIENT ID
+```
+
+@return
+
+@integer-reply
+
+The id of the client.
diff --git a/iredis/data/commands/client-kill.md b/iredis/data/commands/client-kill.md
new file mode 100644
index 0000000..9cdb054
--- /dev/null
+++ b/iredis/data/commands/client-kill.md
@@ -0,0 +1,73 @@
+The `CLIENT KILL` command closes a given client connection. Up to Redis 2.8.11
+it was possible to close a connection only by client address, using the
+following form:
+
+ CLIENT KILL addr:port
+
+The `ip:port` should match a line returned by the `CLIENT LIST` command (`addr`
+field).
+
+However starting with Redis 2.8.12 or greater, the command accepts the following
+form:
+
+ CLIENT KILL <filter> <value> ... ... <filter> <value>
+
+With the new form it is possible to kill clients by different attributes instead
+of killing just by address. The following filters are available:
+
+- `CLIENT KILL ADDR ip:port`. This is exactly the same as the old
+ three-arguments behavior.
+- `CLIENT KILL ID client-id`. Allows killing a client by its unique `ID` field,
+ which was introduced in the `CLIENT LIST` command starting from Redis 2.8.12.
+- `CLIENT KILL TYPE type`, where _type_ is one of `normal`, `master`, `slave`
+ and `pubsub` (the `master` type is available from v3.2). This closes the
+ connections of **all the clients** in the specified class. Note that clients
+ blocked into the `MONITOR` command are considered to belong to the `normal`
+ class.
+- `CLIENT KILL USER username`. Closes all the connections that are authenticated
+ with the specified [ACL](/topics/acl) username, however it returns an error if
+ the username does not map to an existing ACL user.
+- `CLIENT KILL SKIPME yes/no`. By default this option is set to `yes`, that is,
+ the client calling the command will not get killed, however setting this
+ option to `no` will have the effect of also killing the client calling the
+ command.
+
+**Note: starting with Redis 5 the project is no longer using the slave word. You
+can use `TYPE replica` instead, however the old form is still supported for
+backward compatibility.**
+
+It is possible to provide multiple filters at the same time. The command will
+handle multiple filters via logical AND. For example:
+
+ CLIENT KILL addr 127.0.0.1:12345 type pubsub
+
+is valid and will kill only a pubsub client with the specified address. This
+format containing multiple filters is rarely useful currently.
+
+When the new form is used the command no longer returns `OK` or an error, but
+instead the number of killed clients, that may be zero.
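+
+For illustration (the client ID below is hypothetical, and the reply counts
+depend on which clients are actually connected):
+
+    CLIENT KILL ID 4123
+    (integer) 1
+    CLIENT KILL TYPE pubsub SKIPME yes
+    (integer) 0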
+
+## CLIENT KILL and Redis Sentinel
+
+Recent versions of Redis Sentinel (Redis 2.8.12 or greater) use CLIENT KILL to
+kill clients when an instance is reconfigured, in order to force clients to
+perform the handshake with one Sentinel again and update their configuration.
+
+## Notes
+
+Due to the single-threaded nature of Redis, it is not possible to kill a client
+connection while it is executing a command. From the client point of view, the
+connection can never be closed in the middle of the execution of a command.
+However, the client will notice the connection has been closed only when the
+next command is sent (and results in network error).
+
+@return
+
+When called with the three arguments format:
+
+@simple-string-reply: `OK` if the connection exists and has been closed
+
+When called with the filter / value format:
+
+@integer-reply: the number of clients killed.
diff --git a/iredis/data/commands/client-list.md b/iredis/data/commands/client-list.md
new file mode 100644
index 0000000..ea0b775
--- /dev/null
+++ b/iredis/data/commands/client-list.md
@@ -0,0 +1,70 @@
+The `CLIENT LIST` command returns information and statistics about the client
+connections to the server in a mostly human readable format.
+
+As of v5.0, the optional `TYPE type` subcommand can be used to filter the list
+by clients' type, where _type_ is one of `normal`, `master`, `replica` and
+`pubsub`. Note that clients blocked into the `MONITOR` command are considered to
+belong to the `normal` class.
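+
+An illustrative (abridged) reply may look like the following; the exact set of
+fields depends on the Redis version:
+
+```
+id=3 addr=127.0.0.1:52555 fd=8 name= age=855 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=32742 obl=0 oll=0 omem=0 events=r cmd=client
+id=4 addr=127.0.0.1:52787 fd=9 name=worker-1 age=6 idle=5 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=ping
+```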
+
+@return
+
+@bulk-string-reply: a unique string, formatted as follows:
+
+- One client connection per line (separated by LF)
+- Each line is composed of a succession of `property=value` fields separated by
+ a space character.
+
+Here is the meaning of the fields:
+
+- `id`: a unique 64-bit client ID (introduced in Redis 2.8.12).
+- `name`: the name set by the client with `CLIENT SETNAME`
+- `addr`: address/port of the client
+- `fd`: file descriptor corresponding to the socket
+- `age`: total duration of the connection in seconds
+- `idle`: idle time of the connection in seconds
+- `flags`: client flags (see below)
+- `db`: current database ID
+- `sub`: number of channel subscriptions
+- `psub`: number of pattern matching subscriptions
+- `multi`: number of commands in a MULTI/EXEC context
+- `qbuf`: query buffer length (0 means no query pending)
+- `qbuf-free`: free space of the query buffer (0 means the buffer is full)
+- `obl`: output buffer length
+- `oll`: output list length (replies are queued in this list when the buffer is
+ full)
+- `omem`: output buffer memory usage
+- `events`: file descriptor events (see below)
+- `cmd`: last command played
+
+The client flags can be a combination of:
+
+```
+A: connection to be closed ASAP
+b: the client is waiting in a blocking operation
+c: connection to be closed after writing entire reply
+d: a watched key has been modified - EXEC will fail
+i: the client is waiting for a VM I/O (deprecated)
+M: the client is a master
+N: no specific flag set
+O: the client is a client in MONITOR mode
+P: the client is a Pub/Sub subscriber
+r: the client is in readonly mode against a cluster node
+S: the client is a replica node connection to this instance
+u: the client is unblocked
+U: the client is connected via a Unix domain socket
+x: the client is in a MULTI/EXEC context
+```
+
+The file descriptor events can be:
+
+```
+r: the client socket is readable (event loop)
+w: the client socket is writable (event loop)
+```
+
+## Notes
+
+New fields are regularly added for debugging purposes. Some could be removed in
+the future. A version safe Redis client using this command should parse the
+output accordingly (i.e. gracefully handling missing fields and skipping unknown
+fields).
diff --git a/iredis/data/commands/client-pause.md b/iredis/data/commands/client-pause.md
new file mode 100644
index 0000000..bdbdda0
--- /dev/null
+++ b/iredis/data/commands/client-pause.md
@@ -0,0 +1,38 @@
+`CLIENT PAUSE` is a connection control command able to suspend all the Redis
+clients for the specified amount of time (in milliseconds).
+
+The command performs the following actions:
+
+- It stops processing all the pending commands from normal and pub/sub clients.
+  However interactions with replicas will continue normally.
+- It returns OK to the caller ASAP, so the `CLIENT PAUSE` command execution is
+  not itself paused.
+- When the specified amount of time has elapsed, all the clients are unblocked:
+  this will trigger the processing of all the commands accumulated in the query
+  buffer of every client during the pause.
+
+This command is useful as it makes it possible to switch clients from one Redis
+instance to another in a controlled way. For example during an instance upgrade
+the system administrator could do the following:
+
+- Pause the clients using `CLIENT PAUSE`
+- Wait a few seconds to make sure the replicas processed the latest replication
+ stream from the master.
+- Turn one of the replicas into a master.
+- Reconfigure clients to connect with the new master.
+
+It is possible to send `CLIENT PAUSE` in a MULTI/EXEC block together with the
+`INFO replication` command in order to get the current master offset at the time
+the clients are blocked. This way it is possible to wait for a specific offset
+on the replica side in order to make sure all the replication stream was
+processed, as sketched below.
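+
+A minimal sketch of that pattern (the pause time and the replication offset
+shown are arbitrary, and the `INFO` output is truncated):
+
+```
+> MULTI
+OK
+> CLIENT PAUSE 30000
+QUEUED
+> INFO replication
+QUEUED
+> EXEC
+1) OK
+2) "# Replication\r\nrole:master\r\nmaster_repl_offset:26339\r\n..."
+```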
+
+Since Redis 3.2.10 / 4.0.0, this command also prevents keys from being evicted
+or expired during the time clients are paused. This way the dataset is
+guaranteed to be static not just from the point of view of clients not being
+able to write, but also from the point of view of internal operations.
+
+@return
+
+@simple-string-reply: The command returns OK or an error if the timeout is
+invalid.
diff --git a/iredis/data/commands/client-reply.md b/iredis/data/commands/client-reply.md
new file mode 100644
index 0000000..fe8ed94
--- /dev/null
+++ b/iredis/data/commands/client-reply.md
@@ -0,0 +1,21 @@
+Sometimes it can be useful for clients to completely disable replies from the
+Redis server. For example when the client sends fire and forget commands or
+performs a mass loading of data, or in caching contexts where new data is
+streamed constantly. In such contexts, using server time and bandwidth to send
+back replies to clients, which are going to be ignored, is considered wasteful.
+
+The `CLIENT REPLY` command controls whether the server will reply to the
+client's commands. The following modes are available (an illustration follows
+the list):
+
+- `ON`. This is the default mode in which the server returns a reply to every
+  command.
+- `OFF`. In this mode the server will not reply to client commands.
+- `SKIP`. This mode skips the reply of the command immediately following it.
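+
+A minimal sketch of the effect from the protocol point of view (key names and
+values are arbitrary; note that an interactive client such as `redis-cli` waits
+for replies, so this is best exercised from a pipelined client):
+
+```
+> CLIENT REPLY OFF
+> SET key1 1
+> SET key2 2
+> CLIENT REPLY ON
+OK
+```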
+
+@return
+
+When called with either `OFF` or `SKIP` subcommands, no reply is made. When
+called with `ON`:
+
+@simple-string-reply: `OK`.
diff --git a/iredis/data/commands/client-setname.md b/iredis/data/commands/client-setname.md
new file mode 100644
index 0000000..0155a42
--- /dev/null
+++ b/iredis/data/commands/client-setname.md
@@ -0,0 +1,28 @@
+The `CLIENT SETNAME` command assigns a name to the current connection.
+
+The assigned name is displayed in the output of `CLIENT LIST` so that it is
+possible to identify the client that performed a given connection.
+
+For instance when Redis is used in order to implement a queue, producers and
+consumers of messages may want to set the name of the connection according to
+their role.
+
+There is no limit to the length of the name that can be assigned other than the
+usual limits of the Redis string type (512 MB). However it is not possible to
+use spaces in the connection name as this would violate the format of the
+`CLIENT LIST` reply.
+
+It is possible to entirely remove the connection name by setting it to the
+empty string, which is not a valid connection name since it serves this
+specific purpose.
+
+The connection name can be inspected using `CLIENT GETNAME`.
+
+Every new connection starts without an assigned name.
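+
+For illustration (the name used here is arbitrary):
+
+```
+> CLIENT SETNAME connection-a
+OK
+> CLIENT GETNAME
+"connection-a"
+```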
+
+Tip: setting names to connections is a good way to debug connection leaks due to
+bugs in the application using Redis.
+
+@return
+
+@simple-string-reply: `OK` if the connection name was successfully set.
diff --git a/iredis/data/commands/client-tracking.md b/iredis/data/commands/client-tracking.md
new file mode 100644
index 0000000..197c0aa
--- /dev/null
+++ b/iredis/data/commands/client-tracking.md
@@ -0,0 +1,54 @@
+This command enables the tracking feature of the Redis server, that is used for
+[server assisted client side caching](/topics/client-side-caching).
+
+When tracking is enabled Redis remembers the keys that the connection requested,
+in order to send later invalidation messages when such keys are modified.
+Invalidation messages are sent in the same connection (only available when the
+RESP3 protocol is used) or redirected in a different connection (available also
+with RESP2 and Pub/Sub). A special _broadcasting_ mode is available where
+clients participating in this protocol receive every notification by just
+subscribing to given key prefixes, regardless of the keys that they requested.
+Given the complexity of the arguments, please refer to
+[the main client side caching documentation](/topics/client-side-caching) for
+the details. This manual page is only a reference for the options of this
+subcommand.
+
+In order to enable tracking, use:
+
+ CLIENT TRACKING on ... options ...
+
+The feature will remain active in the current connection for all its life,
+unless tracking is turned off with `CLIENT TRACKING off` at some point.
+
+The following is the list of options that modify the behavior of the command
+when enabling tracking (an illustrative invocation follows the list):
+
+- `REDIRECT <id>`: send redirection messages to the connection with the
+  specified ID. The connection must exist; you can get the ID of such a
+  connection using `CLIENT ID`. If the connection we are redirecting to is
+  terminated, when in RESP3 mode the connection with tracking enabled will
+  receive `tracking-redir-broken` push messages in order to signal the
+  condition.
+- `BCAST`: enable tracking in broadcasting mode. In this mode invalidation
+ messages are reported for all the prefixes specified, regardless of the keys
+ requested by the connection. Instead when the broadcasting mode is not
+ enabled, Redis will track which keys are fetched using read-only commands, and
+ will report invalidation messages only for such keys.
+- `PREFIX <prefix>`: for broadcasting, register a given key prefix, so that
+ notifications will be provided only for keys starting with this string. This
+ option can be given multiple times to register multiple prefixes. If
+ broadcasting is enabled without this option, Redis will send notifications for
+ every key.
+- `OPTIN`: when broadcasting is NOT active, normally don't track keys in read
+ only commands, unless they are called immediately after a `CLIENT CACHING yes`
+ command.
+- `OPTOUT`: when broadcasting is NOT active, normally track keys in read only
+ commands, unless they are called immediately after a `CLIENT CACHING no`
+ command.
+- `NOLOOP`: don't send notifications about keys modified by this connection
+ itself.
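+
+An illustrative invocation, assuming the connection speaks RESP3 (otherwise a
+`REDIRECT` target is required); the prefixes shown are arbitrary:
+
+```
+> CLIENT TRACKING on
+OK
+> CLIENT TRACKING off
+OK
+> CLIENT TRACKING on BCAST PREFIX user: PREFIX object:
+OK
+```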
+
+@return
+
+@simple-string-reply: `OK` if the connection was successfully put in tracking
+mode or if the tracking mode was successfully disabled. Otherwise an error is
+returned.
diff --git a/iredis/data/commands/client-unblock.md b/iredis/data/commands/client-unblock.md
new file mode 100644
index 0000000..dae5fb3
--- /dev/null
+++ b/iredis/data/commands/client-unblock.md
@@ -0,0 +1,63 @@
+This command can unblock, from a different connection, a client blocked in a
+blocking operation, such as for instance `BRPOP` or `XREAD` or `WAIT`.
+
+By default the client is unblocked as if the timeout of the command was reached,
+however if an additional (and optional) argument is passed, it is possible to
+specify the unblocking behavior, that can be **TIMEOUT** (the default) or
+**ERROR**. If **ERROR** is specified, the behavior is to unblock the client
+returning as error the fact that the client was force-unblocked. Specifically
+the client will receive the following error:
+
+ -UNBLOCKED client unblocked via CLIENT UNBLOCK
+
+Note: of course, as usual, it is not guaranteed that the error text remains the
+same, however the error code will remain `-UNBLOCKED`.
+
+This command is useful especially when we are monitoring many keys with a
+limited number of connections. For instance we may want to monitor multiple
+streams with `XREAD` without using more than N connections. However at some
+point the consumer process is informed that there is one more stream key to
+monitor. In order to avoid using more connections, the best behavior would be to
+stop the blocking command from one of the connections in the pool, add the new
+key, and issue the blocking command again.
+
+To obtain this behavior the following pattern is used. The process uses an
+additional _control connection_ in order to send the `CLIENT UNBLOCK` command if
+needed. In the meantime, before running the blocking operation on the other
+connections, the process runs `CLIENT ID` in order to get the ID associated with
+that connection. When a new key should be added, or when a key should no longer
+be monitored, the blocking command on the relevant connection is aborted by
+sending `CLIENT UNBLOCK` in the control connection. The blocking command will
+return and can finally be reissued.
+
+This example shows the application in the context of Redis streams, however the
+pattern is a general one and can be applied to other cases.
+
+@examples
+
+```
+Connection A (blocking connection):
+> CLIENT ID
+2934
+> BRPOP key1 key2 key3 0
+(client is blocked)
+
+... Now we want to add a new key ...
+
+Connection B (control connection):
+> CLIENT UNBLOCK 2934
+1
+
+Connection A (blocking connection):
+... BRPOP reply with timeout ...
+NULL
+> BRPOP key1 key2 key3 key4 0
+(client is blocked again)
+```
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if the client was unblocked successfully.
+- `0` if the client wasn't unblocked.
diff --git a/iredis/data/commands/cluster-addslots.md b/iredis/data/commands/cluster-addslots.md
new file mode 100644
index 0000000..f5668e5
--- /dev/null
+++ b/iredis/data/commands/cluster-addslots.md
@@ -0,0 +1,55 @@
+This command is useful in order to modify a node's view of the cluster
+configuration. Specifically it assigns a set of hash slots to the node receiving
+the command. If the command is successful, the node will map the specified hash
+slots to itself, and will start broadcasting the new configuration.
+
+However note that:
+
+1. The command only works if all the specified slots are, from the point of view
+ of the node receiving the command, currently not assigned. A node will refuse
+   to take ownership of slots that already belong to some other node (including
+ itself).
+2. The command fails if the same slot is specified multiple times.
+3. As a side effect of the command execution, if a slot among the ones specified
+ as argument is set as `importing`, this state gets cleared once the node
+ assigns the (previously unbound) slot to itself.
+
+## Example
+
+For example the following command assigns slots 1 2 3 to the node receiving the
+command:
+
+ > CLUSTER ADDSLOTS 1 2 3
+ OK
+
+However trying to execute it again results in an error since the slots are
+already assigned:
+
+ > CLUSTER ADDSLOTS 1 2 3
+ ERR Slot 1 is already busy
+
+## Usage in Redis Cluster
+
+This command only works in cluster mode and is useful in the following Redis
+Cluster operations:
+
+1. To create a new cluster, ADDSLOTS is used in order to initially set up master
+   nodes, splitting the available hash slots among them.
+2. In order to fix a broken cluster where certain slots are unassigned.
+
+## Information about slots propagation and warnings
+
+Note that once a node assigns a set of slots to itself, it will start
+propagating this information in heartbeat packet headers. However the other
+nodes will accept the information only if they do not already have the slot
+bound to another node, or if the configuration epoch of the node advertising
+the new hash slot is greater than that of the node currently listed in the
+table.
+
+This means that this command should be used with care, and only by applications
+orchestrating Redis Cluster, like `redis-trib`; if used out of the right
+context, the command can leave the cluster in a wrong state or cause data loss.
+
+@return
+
+@simple-string-reply: `OK` if the command was successful. Otherwise an error is
+returned.
diff --git a/iredis/data/commands/cluster-bumpepoch.md b/iredis/data/commands/cluster-bumpepoch.md
new file mode 100644
index 0000000..16a94a4
--- /dev/null
+++ b/iredis/data/commands/cluster-bumpepoch.md
@@ -0,0 +1,15 @@
+Advances the cluster config epoch.
+
+The `CLUSTER BUMPEPOCH` command triggers an increment to the cluster's config
+epoch from the connected node. The epoch will be incremented if the node's
+config epoch is zero, or if it is less than the cluster's greatest epoch.
+
+**Note:** config epoch management is performed internally by the cluster, and
+relies on obtaining a consensus of nodes. The `CLUSTER BUMPEPOCH` attempts to
+increment the config epoch **WITHOUT** getting the consensus, so using it may
+violate the "last failover wins" rule. Use it with caution.
+
+@return
+
+@simple-string-reply: `BUMPED` if the epoch was incremented, or `STILL` if the
+node already has the greatest config epoch in the cluster.
diff --git a/iredis/data/commands/cluster-count-failure-reports.md b/iredis/data/commands/cluster-count-failure-reports.md
new file mode 100644
index 0000000..bb3c937
--- /dev/null
+++ b/iredis/data/commands/cluster-count-failure-reports.md
@@ -0,0 +1,34 @@
+The command returns the number of _failure reports_ for the specified node.
+Failure reports are the mechanism Redis Cluster uses in order to promote a
+`PFAIL` state, which means a node is not reachable, to a `FAIL` state, which
+means that the majority of masters in the cluster agreed within a window of
+time that the node is not reachable.
+
+A few more details:
+
+- A node flags another node with `PFAIL` when the node is not reachable for a
+ time greater than the configured _node timeout_, which is a fundamental
+ configuration parameter of a Redis Cluster.
+- Nodes in `PFAIL` state are provided in gossip sections of heartbeat packets.
+- Every time a node processes gossip packets from other nodes, it creates (and
+ refreshes the TTL if needed) **failure reports**, remembering that a given
+ node said another given node is in `PFAIL` condition.
+- Each failure report has a time to live of two times the _node timeout_ time.
+- If at a given time a node has another node flagged with `PFAIL`, and at the
+ same time collected the majority of other master nodes _failure reports_ about
+ this node (including itself if it is a master), then it elevates the failure
+ state of the node from `PFAIL` to `FAIL`, and broadcasts a message forcing all
+ the nodes that can be reached to flag the node as `FAIL`.
+
+This command returns the number of failure reports for the current node which
+are currently not expired (so received within two times the _node timeout_
+time). The count does not include what the node we are asking believes about
+the node ID we pass as an argument; the count _only_ includes the failure
+reports the node received from other nodes.
+
+This command is mainly useful for debugging, when the failure detector of Redis
+Cluster is not operating as we believe it should.
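+
+For illustration (the 40 character node ID shown is arbitrary, and the reply
+depends on the current cluster state):
+
+```
+> CLUSTER COUNT-FAILURE-REPORTS 07c37dfeb235213a872192d90877d0cd55635b91
+(integer) 0
+```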
+
+@return
+
+@integer-reply: the number of active failure reports for the node.
diff --git a/iredis/data/commands/cluster-countkeysinslot.md b/iredis/data/commands/cluster-countkeysinslot.md
new file mode 100644
index 0000000..92ec7c6
--- /dev/null
+++ b/iredis/data/commands/cluster-countkeysinslot.md
@@ -0,0 +1,13 @@
+Returns the number of keys in the specified Redis Cluster hash slot. The command
+only queries the local data set, so contacting a node that is not serving the
+specified hash slot will always result in a count of zero being returned.
+
+```
+> CLUSTER COUNTKEYSINSLOT 7000
+(integer) 50341
+```
+
+@return
+
+@integer-reply: The number of keys in the specified hash slot, or an error if
+the hash slot is invalid.
diff --git a/iredis/data/commands/cluster-delslots.md b/iredis/data/commands/cluster-delslots.md
new file mode 100644
index 0000000..1924158
--- /dev/null
+++ b/iredis/data/commands/cluster-delslots.md
@@ -0,0 +1,47 @@
+In Redis Cluster, each node keeps track of which master is serving a particular
+hash slot.
+
+The `DELSLOTS` command asks a particular Redis Cluster node to forget which
+master is serving the hash slots specified as arguments.
+
+In the context of a node that has received a `DELSLOTS` command and has
+consequently removed the associations for the passed hash slots, we say those
+hash slots are _unbound_. Note that the existence of unbound hash slots occurs
+naturally when a node has not been configured to handle them (something that can
+be done with the `ADDSLOTS` command) and if it has not received any information
+about who owns those hash slots (something that it can learn from heartbeat or
+update messages).
+
+If a node with unbound hash slots receives a heartbeat packet from another node
+that claims to be the owner of some of those hash slots, the association is
+established instantly. Moreover, if a heartbeat or update message is received
+with a configuration epoch greater than the node's own, the association is
+re-established.
+
+However, note that:
+
+1. The command only works if all the specified slots are already associated with
+ some node.
+2. The command fails if the same slot is specified multiple times.
+3. As a side effect of the command execution, the node may go into _down_ state
+ because not all hash slots are covered.
+
+## Example
+
+The following command removes the association for slots 5000 and 5001 from the
+node receiving the command:
+
+ > CLUSTER DELSLOTS 5000 5001
+ OK
+
+## Usage in Redis Cluster
+
+This command only works in cluster mode and may be useful for debugging and in
+order to manually orchestrate a cluster configuration when a new cluster is
+created. It is currently not used by `redis-trib`, and mainly exists for API
+completeness.
+
+@return
+
+@simple-string-reply: `OK` if the command was successful. Otherwise an error is
+returned.
diff --git a/iredis/data/commands/cluster-failover.md b/iredis/data/commands/cluster-failover.md
new file mode 100644
index 0000000..c811c04
--- /dev/null
+++ b/iredis/data/commands/cluster-failover.md
@@ -0,0 +1,81 @@
+This command, that can only be sent to a Redis Cluster replica node, forces the
+replica to start a manual failover of its master instance.
+
+A manual failover is a special kind of failover that is usually executed when
+there are no actual failures, but we wish to swap the current master with one of
+its replicas (which is the node we send the command to), in a safe way, without
+any window for data loss. It works in the following way:
+
+1. The replica tells the master to stop processing queries from clients.
+2. The master replies to the replica with the current _replication offset_.
+3. The replica waits for the replication offset to match on its side, to make
+ sure it processed all the data from the master before it continues.
+4. The replica starts a failover, obtains a new configuration epoch from the
+ majority of the masters, and broadcasts the new configuration.
+5. The old master receives the configuration update: unblocks its clients and
+ starts replying with redirection messages so that they'll continue the chat
+ with the new master.
+
+This way clients are moved away from the old master to the new master atomically
+and only when the replica that is turning into the new master has processed all
+of the replication stream from the old master.
+
+## FORCE option: manual failover when the master is down
+
+The command behavior can be modified by two options: **FORCE** and **TAKEOVER**.
+
+If the **FORCE** option is given, the replica does not perform any handshake
+with the master, that may be not reachable, but instead just starts a failover
+ASAP starting from point 4. This is useful when we want to start a manual
+failover while the master is no longer reachable.
+
+However using **FORCE** we still need the majority of masters to be available in
+order to authorize the failover and generate a new configuration epoch for the
+replica that is going to become master.
+
+## TAKEOVER option: manual failover without cluster consensus
+
+There are situations where this is not enough, and we want a replica to failover
+without any agreement with the rest of the cluster. A real world use case for
+this is to mass promote replicas in a different data center to masters in order
+to perform a data center switch, while all the masters are down or partitioned
+away.
+
+The **TAKEOVER** option implies everything **FORCE** implies, but also does not
+use any cluster authorization in order to failover. A replica receiving
+`CLUSTER FAILOVER TAKEOVER` will instead:
+
+1. Generate a new `configEpoch` unilaterally, just taking the current greatest
+ epoch available and incrementing it if its local configuration epoch is not
+ already the greatest.
+2. Assign itself all the hash slots of its master, and propagate the new
+ configuration to every node which is reachable ASAP, and eventually to every
+ other node.
+
+Note that **TAKEOVER violates the last-failover-wins principle** of Redis
+Cluster, since the configuration epoch generated by the replica violates the
+normal generation of configuration epochs in several ways:
+
+1. There is no guarantee that it is actually the highest configuration epoch,
+   since, for example, we can use the **TAKEOVER** option within a minority, and
+   no message exchange is performed to generate the new configuration epoch.
+2. If we generate a configuration epoch which happens to collide with another
+ instance, eventually our configuration epoch, or the one of another instance
+ with our same epoch, will be moved away using the _configuration epoch
+ collision resolution algorithm_.
+
+Because of this the **TAKEOVER** option should be used with care.
+
+## Implementation details and notes
+
+`CLUSTER FAILOVER`, unless the **TAKEOVER** option is specified, does not
+execute a failover synchronously. It only _schedules_ a manual failover,
+bypassing the failure detection stage, so to check whether the failover
+actually happened, `CLUSTER NODES` or other means should be used in order to
+verify that the state of the cluster changes some time after the command was
+sent.
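+
+For illustration, sent to a replica node (the reply is the same with or without
+the options, provided the command is accepted):
+
+```
+> CLUSTER FAILOVER
+OK
+```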
+
+@return
+
+@simple-string-reply: `OK` if the command was accepted and a manual failover is
+going to be attempted. An error if the operation cannot be executed, for example
+if we are talking with a node which is already a master.
diff --git a/iredis/data/commands/cluster-flushslots.md b/iredis/data/commands/cluster-flushslots.md
new file mode 100644
index 0000000..74204d1
--- /dev/null
+++ b/iredis/data/commands/cluster-flushslots.md
@@ -0,0 +1,8 @@
+Deletes all slots from a node.
+
+The `CLUSTER FLUSHSLOTS` command deletes all information about slots from the
+connected node. It can only be called when the database is empty.
+
+@return
+
+@simple-string-reply: `OK`
diff --git a/iredis/data/commands/cluster-forget.md b/iredis/data/commands/cluster-forget.md
new file mode 100644
index 0000000..7926091
--- /dev/null
+++ b/iredis/data/commands/cluster-forget.md
@@ -0,0 +1,59 @@
+The command is used in order to remove a node, specified via its node ID, from
+the set of _known nodes_ of the Redis Cluster node receiving the command. In
+other words the specified node is removed from the _nodes table_ of the node
+receiving the command.
+
+Because when a given node is part of the cluster, all the other nodes
+participating in the cluster know about it, in order for a node to be
+completely removed from a cluster, the `CLUSTER FORGET` command must be sent to
+all the remaining nodes, regardless of whether they are masters or replicas.
+
+However the command cannot simply drop the node from the internal node table of
+the node receiving the command, it also implements a ban-list, not allowing the
+same node to be added again as a side effect of processing the _gossip section_
+of the heartbeat packets received from other nodes.
+
+## Details on why the ban-list is needed
+
+In the following example we'll show why the command must not just remove a given
+node from the nodes table, but also prevent it from being re-inserted again for
+some time.
+
+Let's assume we have four nodes, A, B, C and D. In order to end up with just a
+three-node cluster A, B, C we may follow these steps:
+
+1. Reshard all the hash slots from D to nodes A, B, C.
+2. D is now empty, but still listed in the nodes table of A, B and C.
+3. We contact A, and send `CLUSTER FORGET D`.
+4. B sends node A a heartbeat packet, where node D is listed.
+5. A no longer knows node D (see step 3), so it starts a handshake with D.
+6. D ends up re-added to the nodes table of A.
+
+As you can see, removing a node this way is fragile: we need to send
+`CLUSTER FORGET` commands to all the nodes ASAP, hoping no gossip sections are
+processed in the meantime. Because of this problem the command implements a
+ban-list with an expire time for each entry.
+
+So what the command really does is:
+
+1. The specified node gets removed from the nodes table.
+2. The node ID of the removed node gets added to the ban-list, for 1 minute.
+3. The node will skip all the node IDs listed in the ban-list when processing
+ gossip sections received in heartbeat packets from other nodes.
+
+This way we have a 60 second window to inform all the nodes in the cluster that
+we want to remove a node.
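+
+For illustration (the 40 character node ID shown is arbitrary):
+
+```
+> CLUSTER FORGET 07c37dfeb235213a872192d90877d0cd55635b91
+OK
+```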
+
+## Special conditions not allowing the command execution
+
+The command does not succeed and returns an error in the following cases:
+
+1. The specified node ID is not found in the nodes table.
+2. The node receiving the command is a replica, and the specified node ID
+ identifies its current master.
+3. The node ID identifies the same node we are sending the command to.
+
+@return
+
+@simple-string-reply: `OK` if the command was executed successfully, otherwise
+an error is returned.
diff --git a/iredis/data/commands/cluster-getkeysinslot.md b/iredis/data/commands/cluster-getkeysinslot.md
new file mode 100644
index 0000000..9faa62d
--- /dev/null
+++ b/iredis/data/commands/cluster-getkeysinslot.md
@@ -0,0 +1,20 @@
+The command returns an array of key names stored in the contacted node and
+hashing to the specified hash slot. The maximum number of keys to return is
+specified via the `count` argument, so that it is possible for the user of this
+API to batch-process keys.
+
+The main usage of this command is during rehashing of cluster slots from one
+node to another. The way the rehashing is performed is exposed in the Redis
+Cluster specification, or in a simpler to digest form, as an appendix of the
+`CLUSTER SETSLOT` command documentation.
+
+```
+> CLUSTER GETKEYSINSLOT 7000 3
+"47344|273766|70329104160040|key_39015"
+"47344|273766|70329104160040|key_89793"
+"47344|273766|70329104160040|key_92937"
+```
+
+@return
+
+@array-reply: From 0 to _count_ key names in a Redis array reply.
diff --git a/iredis/data/commands/cluster-info.md b/iredis/data/commands/cluster-info.md
new file mode 100644
index 0000000..550dd0c
--- /dev/null
+++ b/iredis/data/commands/cluster-info.md
@@ -0,0 +1,56 @@
+`CLUSTER INFO` provides `INFO` style information about Redis Cluster vital
+parameters. The following is a sample output, followed by the description of
+each field reported.
+
+```
+cluster_state:ok
+cluster_slots_assigned:16384
+cluster_slots_ok:16384
+cluster_slots_pfail:0
+cluster_slots_fail:0
+cluster_known_nodes:6
+cluster_size:3
+cluster_current_epoch:6
+cluster_my_epoch:2
+cluster_stats_messages_sent:1483972
+cluster_stats_messages_received:1483968
+```
+
+- `cluster_state`: State is `ok` if the node is able to receive queries. `fail`
+ if there is at least one hash slot which is unbound (no node associated), in
+ error state (node serving it is flagged with FAIL flag), or if the majority of
+ masters can't be reached by this node.
+- `cluster_slots_assigned`: Number of slots which are associated to some node
+ (not unbound). This number should be 16384 for the node to work properly,
+ which means that each hash slot should be mapped to a node.
+- `cluster_slots_ok`: Number of hash slots mapping to a node not in `FAIL` or
+ `PFAIL` state.
+- `cluster_slots_pfail`: Number of hash slots mapping to a node in `PFAIL`
+ state. Note that those hash slots still work correctly, as long as the `PFAIL`
+ state is not promoted to `FAIL` by the failure detection algorithm. `PFAIL`
+ only means that we are currently not able to talk with the node, but may be
+ just a transient error.
+- `cluster_slots_fail`: Number of hash slots mapping to a node in `FAIL` state.
+ If this number is not zero the node is not able to serve queries unless
+ `cluster-require-full-coverage` is set to `no` in the configuration.
+- `cluster_known_nodes`: The total number of known nodes in the cluster,
+ including nodes in `HANDSHAKE` state that may not currently be proper members
+ of the cluster.
+- `cluster_size`: The number of master nodes serving at least one hash slot in
+ the cluster.
+- `cluster_current_epoch`: The local `Current Epoch` variable. This is used in
+ order to create unique increasing version numbers during fail overs.
+- `cluster_my_epoch`: The `Config Epoch` of the node we are talking with. This
+ is the current configuration version assigned to this node.
+- `cluster_stats_messages_sent`: Number of messages sent via the cluster
+ node-to-node binary bus.
+- `cluster_stats_messages_received`: Number of messages received via the cluster
+ node-to-node binary bus.
+
+More information about the Current Epoch and Config Epoch variables is
+available in the Redis Cluster specification document.
+
+@return
+
+@bulk-string-reply: A map between named fields and values in the form of
+`<field>:<value>` lines separated by newlines composed of the two bytes `CRLF`.
diff --git a/iredis/data/commands/cluster-keyslot.md b/iredis/data/commands/cluster-keyslot.md
new file mode 100644
index 0000000..5f08e79
--- /dev/null
+++ b/iredis/data/commands/cluster-keyslot.md
@@ -0,0 +1,32 @@
+Returns an integer identifying the hash slot the specified key hashes to. This
+command is mainly useful for debugging and testing, since it exposes via an API
+the underlying Redis implementation of the hashing algorithm. Example use cases
+for this command:
+
+1. Client libraries may use Redis in order to test their own hashing algorithm,
+   generating random keys and hashing them both with their local implementation
+   and with the Redis `CLUSTER KEYSLOT` command, then checking if the results
+   are the same.
+2. Humans may use this command in order to check which hash slot, and therefore
+   which associated Redis Cluster node, is responsible for a given key.
+
+## Example
+
+```
+> CLUSTER KEYSLOT somekey
+(integer) 11058
+> CLUSTER KEYSLOT foo{hash_tag}
+(integer) 2515
+> CLUSTER KEYSLOT bar{hash_tag}
+(integer) 2515
+```
+
+Note that the command implements the full hashing algorithm, including support
+for **hash tags**, that is, the special property of the Redis Cluster key
+hashing algorithm of hashing just what is between `{` and `}` if such a pattern
+is found inside the key name, in order to force multiple keys to be handled by
+the same node.
+
+@return
+
+@integer-reply: The hash slot number.
diff --git a/iredis/data/commands/cluster-meet.md b/iredis/data/commands/cluster-meet.md
new file mode 100644
index 0000000..3402faa
--- /dev/null
+++ b/iredis/data/commands/cluster-meet.md
@@ -0,0 +1,55 @@
+`CLUSTER MEET` is used in order to connect different Redis nodes with cluster
+support enabled, into a working cluster.
+
+The basic idea is that nodes by default don't trust each other, and are
+considered unknown, so that it is unlikely that different cluster nodes will mix
+into a single one because of system administration errors or network address
+modifications.
+
+So in order for a given node to accept another one into the list of nodes
+composing a Redis Cluster, there are only two ways:
+
+1. The system administrator sends a `CLUSTER MEET` command to force a node to
+ meet another one.
+2. An already known node sends a list of nodes in the gossip section that we are
+ not aware of. If the receiving node trusts the sending node as a known node,
+   it will process the gossip section and send a handshake to the nodes that
+ are still not known.
+
+Note that Redis Cluster needs to form a full mesh (each node is connected with
+each other node), but in order to create a cluster, there is no need to send all
+the `CLUSTER MEET` commands needed to form the full mesh. What matters is to send
+enough `CLUSTER MEET` messages so that each node can reach each other node
+through a _chain of known nodes_. Thanks to the exchange of gossip information
+in heartbeat packets, the missing links will be created.
+
+So, if we link node A with node B via `CLUSTER MEET`, and B with C, A and C will
+find their way to handshake and create a link.
+
+Another example: if we imagine a cluster formed of the following four nodes
+called A, B, C and D, we may send just the following set of commands to A:
+
+1. `CLUSTER MEET B-ip B-port`
+2. `CLUSTER MEET C-ip C-port`
+3. `CLUSTER MEET D-ip D-port`
+
+As a side effect of `A` knowing and being known by all the other nodes, it will
+send gossip sections in the heartbeat packets that will allow each other node to
+create a link with each other one, forming a full mesh in a matter of seconds,
+even if the cluster is large.
+
+Moreover `CLUSTER MEET` does not need to be reciprocal. If I send the command to
+A in order to join B, I don't need to also send it to B in order to join A.
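+
+For illustration (the IP address and port are arbitrary placeholders for a
+reachable cluster node):
+
+```
+> CLUSTER MEET 192.168.1.30 6379
+OK
+```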
+
+## Implementation details: MEET and PING packets
+
+When a given node receives a `CLUSTER MEET` message, the node specified in the
+command still does not know the node we sent the command to. So in order for the
+node to force the receiver to accept it as a trusted node, it sends a `MEET`
+packet instead of a `PING` packet. The two packets have exactly the same format,
+but the former forces the receiver to acknowledge the node as trusted.
+
+@return
+
+@simple-string-reply: `OK` if the command was successful. If the address or port
+specified are invalid an error is returned.
diff --git a/iredis/data/commands/cluster-myid.md b/iredis/data/commands/cluster-myid.md
new file mode 100644
index 0000000..1ff5c0f
--- /dev/null
+++ b/iredis/data/commands/cluster-myid.md
@@ -0,0 +1,8 @@
+Returns the node's id.
+
+The `CLUSTER MYID` command returns the unique, auto-generated identifier that is
+associated with the connected cluster node.
+
+@return
+
+@bulk-string-reply: The node id.
diff --git a/iredis/data/commands/cluster-nodes.md b/iredis/data/commands/cluster-nodes.md
new file mode 100644
index 0000000..435ce2b
--- /dev/null
+++ b/iredis/data/commands/cluster-nodes.md
@@ -0,0 +1,147 @@
+Each node in a Redis Cluster has its view of the current cluster configuration,
+given by the set of known nodes, the state of the connection we have with such
+nodes, their flags, properties and assigned slots, and so forth.
+
+`CLUSTER NODES` provides all this information, that is, the current cluster
+configuration of the node we are contacting, in a serialization format which
+happens to be exactly the same as the one used by Redis Cluster itself in order
+to store on disk the cluster state (however the on disk cluster state has a few
+additional pieces of information appended at the end).
+
+Note that normally clients willing to fetch the map between Cluster hash slots
+and node addresses should use `CLUSTER SLOTS` instead. `CLUSTER NODES`, that
+provides more information, should be used for administrative tasks, debugging,
+and configuration inspections. It is also used by `redis-trib` in order to
+manage a cluster.
+
+## Serialization format
+
+The output of the command is just a space-separated CSV string, where each line
+represents a node in the cluster. The following is an example of output:
+
+```
+07c37dfeb235213a872192d90877d0cd55635b91 127.0.0.1:30004@31004 slave e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca 0 1426238317239 4 connected
+67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1 127.0.0.1:30002@31002 master - 0 1426238316232 2 connected 5461-10922
+292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f 127.0.0.1:30003@31003 master - 0 1426238318243 3 connected 10923-16383
+6ec23923021cf3ffec47632106199cb7f496ce01 127.0.0.1:30005@31005 slave 67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1 0 1426238316232 5 connected
+824fe116063bc5fcf9f4ffd895bc17aee7731ac3 127.0.0.1:30006@31006 slave 292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f 0 1426238317741 6 connected
+e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca 127.0.0.1:30001@31001 myself,master - 0 0 1 connected 0-5460
+```
+
+Each line is composed of the following fields:
+
+```
+<id> <ip:port@cport> <flags> <master> <ping-sent> <pong-recv> <config-epoch> <link-state> <slot> <slot> ... <slot>
+```
+
+The meaning of each field is the following:
+
+1. `id`: The node ID, a 40 characters random string generated when a node is
+ created and never changed again (unless `CLUSTER RESET HARD` is used).
+2. `ip:port@cport`: The node address where clients should contact the node to
+ run queries.
+3. `flags`: A list of comma separated flags: `myself`, `master`, `slave`,
+ `fail?`, `fail`, `handshake`, `noaddr`, `noflags`. Flags are explained in
+ detail in the next section.
+4. `master`: If the node is a replica, and the master is known, the master node
+ ID, otherwise the "-" character.
+5. `ping-sent`: Milliseconds unix time at which the currently active ping was
+ sent, or zero if there are no pending pings.
+6. `pong-recv`: Milliseconds unix time the last pong was received.
+7. `config-epoch`: The configuration epoch (or version) of the current node (or
+ of the current master if the node is a replica). Each time there is a
+ failover, a new, unique, monotonically increasing configuration epoch is
+ created. If multiple nodes claim to serve the same hash slots, the one with
+ higher configuration epoch wins.
+8. `link-state`: The state of the link used for the node-to-node cluster bus. We
+ use this link to communicate with the node. Can be `connected` or
+ `disconnected`.
+9. `slot`: A hash slot number or range. Starting from argument number 9, there
+   may be up to 16384 entries in total (a limit never reached in practice).
+   This is the list of hash slots served by this node. If the entry is just a
+   number, it is parsed as such. If it is a range, it is in the form
+   `start-end`, and means that the node is responsible for all the hash slots
+   from `start` to `end`, including the start and end values.
+
+Meaning of the flags (field number 3):
+
+- `myself`: The node you are contacting.
+- `master`: Node is a master.
+- `slave`: Node is a replica.
+- `fail?`: Node is in `PFAIL` state. Not reachable for the node you are
+ contacting, but still logically reachable (not in `FAIL` state).
+- `fail`: Node is in `FAIL` state. It was not reachable for multiple nodes that
+ promoted the `PFAIL` state to `FAIL`.
+- `handshake`: Untrusted node, we are handshaking.
+- `noaddr`: No address known for this node.
+- `noflags`: No flags at all.
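+
+As an illustration only, the following is a minimal Python sketch (it is not
+part of Redis or of any official client) showing how the fields and flags
+described above map onto a single line of output. Slot entries are kept as raw
+strings, since they may also contain the special importing/migrating entries
+discussed later in this page.
+
+```
+def parse_cluster_nodes_line(line):
+    """Split one CLUSTER NODES line into the fields described above."""
+    parts = line.split()
+    return {
+        "id": parts[0],
+        "addr": parts[1],              # ip:port@cport
+        "flags": parts[2].split(","),
+        "master": None if parts[3] == "-" else parts[3],
+        "ping_sent": int(parts[4]),
+        "pong_recv": int(parts[5]),
+        "config_epoch": int(parts[6]),
+        "link_state": parts[7],
+        "slots": parts[8:],            # numbers, ranges, special entries
+    }
+```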
+
+## Notes on published config epochs
+
+Replicas broadcast their master's config epochs (in order to get an `UPDATE`
+message if they are found to be stale), so the real config epoch of a replica
+(which is more or less meaningless, since replicas don't serve hash slots) can
+only be obtained by checking the node flagged as `myself`, which is the entry
+of the node we are asking to generate the `CLUSTER NODES` output. The epochs of
+the other replicas reflect what they publish in heartbeat packets, that is, the
+configuration epoch of the masters they are currently replicating.
+
+## Special slot entries
+
+Normally hash slots associated with a given node are in one of the following
+formats, as already explained above:
+
+1. Single number: 3894
+2. Range: 3900-4000
+
+However node hash slots can be in a special state, used in order to communicate
+errors after a node restart (mismatch between the keys in the AOF/RDB file and
+the node hash slots configuration), or when there is a resharding operation in
+progress. These two states are **importing** and **migrating**.
+
+The meaning of the two states is explained in the Redis Specification, however
+the gist of the two states is the following:
+
+- **Importing** slots are not yet part of the node's hash slots; a migration is
+  in progress. The node will accept queries about these slots only if the
+  `ASKING` command is used.
+- **Migrating** slots are assigned to the node, but are being migrated to some
+  other node. The node will accept queries if all the keys in the command exist
+  already, otherwise it will emit what is called an **ASK redirection**, to
+  force the creation of new keys directly in the importing node.
+
+Importing and migrating slots are emitted in the `CLUSTER NODES` output as
+follows:
+
+- **Importing slot:** `[slot_number-<-importing_from_node_id]`
+- **Migrating slot:** `[slot_number->-migrating_to_node_id]`
+
+The following are a few examples of importing and migrating slots:
+
+- `[93-<-292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f]`
+- `[1002-<-67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1]`
+- `[77->-e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca]`
+- `[16311->-292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f]`
+
+Note that the special entry format does not contain any space, so the
+`CLUSTER NODES` output format remains plain CSV with space as separator even
+when these special slot entries are emitted. However, a complete parser for the
+format should be able to handle them.
+
+Note that:
+
+1. Migrating and importing slots are only added to the node flagged as
+   `myself`. This information is local to a node, for its own slots.
+2. Importing and migrating slots are provided as **additional info**. If the
+   node has a given hash slot assigned, it will also be listed as a plain
+   number in the list of hash slots, so clients that don't know about hash slot
+   migrations can simply skip these special fields.
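+
+For illustration, here is a small Python sketch (not an official parser) that
+classifies a single slot entry from a `CLUSTER NODES` line as a plain slot, a
+range, or one of the special importing/migrating entries described above:
+
+```
+import re
+
+# Matches entries such as [93-<-292f8b36...] and [77->-e7d1eecc...]
+SPECIAL_SLOT = re.compile(r"^\[(\d+)-([<>])-([0-9a-f]{40})\]$")
+
+def classify_slot_entry(entry):
+    m = SPECIAL_SLOT.match(entry)
+    if m:
+        slot, direction, node_id = int(m.group(1)), m.group(2), m.group(3)
+        state = "importing" if direction == "<" else "migrating"
+        return (state, slot, node_id)
+    if "-" in entry:
+        start, end = entry.split("-")
+        return ("range", int(start), int(end))
+    return ("slot", int(entry))
+```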
+
+@return
+
+@bulk-string-reply: The serialized cluster configuration.
+
+**A note about the word slave used in this man page and command name**: starting
+with Redis 5, if not for backward compatibility, the Redis project no longer
+uses the word slave. Unfortunately in this command the word slave is part of
+the protocol, so we'll be able to remove such occurrences only when this API is
+naturally deprecated.
diff --git a/iredis/data/commands/cluster-replicas.md b/iredis/data/commands/cluster-replicas.md
new file mode 100644
index 0000000..678638a
--- /dev/null
+++ b/iredis/data/commands/cluster-replicas.md
@@ -0,0 +1,16 @@
+The command provides a list of replica nodes replicating from the specified
+master node. The list is provided in the same format used by `CLUSTER NODES`
+(please refer to its documentation for the specification of the format).
+
+The command will fail if the specified node is not known or if it is not a
+master according to the node table of the node receiving the command.
+
+Note that if a replica is added, moved, or removed from a given master node,
+and we send `CLUSTER REPLICAS` to a node that has not yet received the
+configuration update, it may show stale information. However, eventually (in a
+matter of seconds if there are no network partitions) all the nodes will agree
+about the set of nodes associated with a given master.
+
+@return
+
+The command returns data in the same format as `CLUSTER NODES`.
diff --git a/iredis/data/commands/cluster-replicate.md b/iredis/data/commands/cluster-replicate.md
new file mode 100644
index 0000000..ab5cf5c
--- /dev/null
+++ b/iredis/data/commands/cluster-replicate.md
@@ -0,0 +1,29 @@
+The command reconfigures a node as a replica of the specified master. If the
+node receiving the command is an _empty master_, as a side effect of the
+command, the node role is changed from master to replica.
+
+Once a node is turned into the replica of another master node, there is no need
+to inform the other cluster nodes about the change: heartbeat packets exchanged
+between nodes will propagate the new configuration automatically.
+
+A replica will always accept the command, assuming that:
+
+1. The specified node ID exists in its nodes table.
+2. The specified node ID does not identify the instance we are sending the
+ command to.
+3. The specified node ID is a master.
+
+If the node receiving the command is not already a replica, but is a master,
+the command will succeed, and the node will be converted into a replica, only
+if the following additional conditions are met:
+
+1. The node is not serving any hash slots.
+2. The node is empty, no keys are stored at all in the key space.
+
+If the command succeeds the new replica will immediately try to contact its
+master in order to replicate from it.
+
+@return
+
+@simple-string-reply: `OK` if the command was executed successfully, otherwise
+an error is returned.
diff --git a/iredis/data/commands/cluster-reset.md b/iredis/data/commands/cluster-reset.md
new file mode 100644
index 0000000..4186725
--- /dev/null
+++ b/iredis/data/commands/cluster-reset.md
@@ -0,0 +1,29 @@
+Reset a Redis Cluster node, in a more or less drastic way depending on the
+reset type, which can be **hard** or **soft**. Note that this command **does
+not work for masters if they hold one or more keys**; in that case, to
+completely reset a master node, the keys must be removed first, e.g. by using
+`FLUSHALL`, and then calling `CLUSTER RESET`.
+
+Effects on the node:
+
+1. All the other nodes in the cluster are forgotten.
+2. All the assigned / open slots are reset, so the slots-to-nodes mapping is
+ totally cleared.
+3. If the node is a replica it is turned into an (empty) master. Its dataset is
+ flushed, so at the end the node will be an empty master.
+4. **Hard reset only**: a new Node ID is generated.
+5. **Hard reset only**: `currentEpoch` and `configEpoch` vars are set to 0.
+6. The new configuration is persisted on disk in the node cluster configuration
+ file.
+
+This command is mainly useful to re-provision a Redis Cluster node in order to
+be used in the context of a new, different cluster. The command is also
+extensively used by the Redis Cluster testing framework in order to reset the
+state of the cluster every time a new test unit is executed.
+
+If no reset type is specified, the default is **soft**.
+
+@return
+
+@simple-string-reply: `OK` if the command was successful. Otherwise an error is
+returned.
diff --git a/iredis/data/commands/cluster-saveconfig.md b/iredis/data/commands/cluster-saveconfig.md
new file mode 100644
index 0000000..115d88a
--- /dev/null
+++ b/iredis/data/commands/cluster-saveconfig.md
@@ -0,0 +1,15 @@
+Forces a node to save the `nodes.conf` configuration on disk. Before returning,
+the command calls `fsync(2)` in order to make sure the configuration is flushed
+to the computer disk.
+
+This command is mainly used in the event a `nodes.conf` node state file gets
+lost or deleted for some reason, and we want to generate it again from scratch.
+It can also be useful after mundane alterations of a node's cluster
+configuration via the `CLUSTER` command, in order to ensure the new
+configuration is persisted on disk. However, all such commands should normally
+schedule persisting the configuration on disk automatically whenever it matters
+for the correctness of the system in the event of a restart.
+
+@return
+
+@simple-string-reply: `OK` or an error if the operation fails.
diff --git a/iredis/data/commands/cluster-set-config-epoch.md b/iredis/data/commands/cluster-set-config-epoch.md
new file mode 100644
index 0000000..b64fffa
--- /dev/null
+++ b/iredis/data/commands/cluster-set-config-epoch.md
@@ -0,0 +1,25 @@
+This command sets a specific _config epoch_ in a fresh node. It only works when:
+
+1. The nodes table of the node is empty.
+2. The node current _config epoch_ is zero.
+
+These prerequisites are needed because manually altering the configuration
+epoch of a node is normally unsafe: we want to be sure that the node with the
+higher configuration epoch value (that is, the last one that failed over) wins
+over other nodes in claiming hash slot ownership.
+
+However there is an exception to this rule, and it is when a new cluster is
+created from scratch. The Redis Cluster _config epoch collision resolution_
+algorithm can deal with new nodes that all start with the same configuration
+epoch, but this process is slow and should remain the exception; it only
+ensures that, whatever happens, two or more nodes eventually always move away
+from the state of having the same configuration epoch.
+
+So, using `CLUSTER SET-CONFIG-EPOCH`, when a new cluster is created, we can
+assign a different progressive configuration epoch to each node before joining
+the cluster together.
+
+@return
+
+@simple-string-reply: `OK` if the command was executed successfully, otherwise
+an error is returned.
diff --git a/iredis/data/commands/cluster-setslot.md b/iredis/data/commands/cluster-setslot.md
new file mode 100644
index 0000000..6e5ecf3
--- /dev/null
+++ b/iredis/data/commands/cluster-setslot.md
@@ -0,0 +1,132 @@
+`CLUSTER SETSLOT` is responsible for changing the state of a hash slot in the
+receiving node in different ways. It can, depending on the subcommand used:
+
+1. `MIGRATING` subcommand: Set a hash slot in _migrating_ state.
+2. `IMPORTING` subcommand: Set a hash slot in _importing_ state.
+3. `STABLE` subcommand: Clear any importing / migrating state from hash slot.
+4. `NODE` subcommand: Bind the hash slot to a different node.
+
+The command with its set of subcommands is useful in order to start and end
+cluster live resharding operations, which are accomplished by setting a hash
+slot in migrating state in the source node, and importing state in the
+destination node.
+
+Each subcommand is documented below. At the end you'll find a description of how
+live resharding is performed using this command and other related commands.
+
+## CLUSTER SETSLOT `<slot>` MIGRATING `<destination-node-id>`
+
+This subcommand sets a slot to _migrating_ state. In order to set a slot in this
+state, the node receiving the command must be the hash slot owner, otherwise an
+error is returned.
+
+When a slot is set in migrating state, the node changes behavior in the
+following way:
+
+1. If a command is received about an existing key, the command is processed as
+   usual.
+2. If a command is received about a key that does not exist, an `ASK`
+   redirection is emitted by the node, asking the client to retry only that
+   specific query on `destination-node`. In this case the client should not
+   update its hash slot to node mapping.
+3. If the command contains multiple keys, the behavior is the same as point 2
+   if none of them exist, and the same as point 1 if all of them exist.
+   However, if only some of the keys exist, the command emits a `TRYAGAIN`
+   error so that the keys involved can finish being migrated to the target
+   node, after which the multi-key command can be executed.
+
+## CLUSTER SETSLOT `<slot>` IMPORTING `<source-node-id>`
+
+This subcommand is the reverse of `MIGRATING`, and prepares the destination node
+to import keys from the specified source node. The command only works if the
+node is not already owner of the specified hash slot.
+
+When a slot is set in importing state, the node changes behavior in the
+following way:
+
+1. Commands about this hash slot are refused and a `MOVED` redirection is
+   generated as usual, but if the command is preceded by an `ASKING` command,
+   it is executed.
+
+In this way, when a node in migrating state generates an `ASK` redirection, the
+client contacts the target node, sends `ASKING`, and immediately after sends
+the command. Commands about non-existing keys in the old node, or keys already
+migrated to the target node, are therefore executed in the target node, so
+that:
+
+1. New keys are always created in the target node. During a hash slot migration
+ we'll have to move only old keys, not new ones.
+2. Commands about keys already migrated are correctly processed in the context
+ of the node which is the target of the migration, the new hash slot owner, in
+ order to guarantee consistency.
+3. Without `ASKING` the behavior is the same as usual. This guarantees that
+   clients with a stale hash slot mapping will not write to the target node by
+   mistake, creating a new version of a key that has yet to be migrated.
+
+## CLUSTER SETSLOT `<slot>` STABLE
+
+This subcommand just clears migrating / importing state from the slot. It is
+mainly used to fix a cluster stuck in a wrong state by `redis-trib fix`.
+Normally the two states are cleared automatically at the end of the migration
+using the `SETSLOT ... NODE ...` subcommand as explained in the next section.
+
+## CLUSTER SETSLOT `<slot>` NODE `<node-id>`
+
+The `NODE` subcommand is the one with the most complex semantics. It associates
+the hash slot with the specified node, however the command works only in
+specific situations and has different side effects depending on the slot state.
+The following is the set of pre-conditions and side effects of the command:
+
+1. If the current hash slot owner is the node receiving the command, but as an
+   effect of the command the slot would be assigned to a different node, the
+   command will return an error if there are still keys for that hash slot in
+   the node receiving the command.
+2. If the slot is in _migrating_ state, the state gets cleared when the slot is
+ assigned to another node.
+3. If the slot was in _importing_ state in the node receiving the command, and
+ the command assigns the slot to this node (which happens in the target node
+ at the end of the resharding of a hash slot from one node to another), the
+ command has the following side effects: A) the _importing_ state is cleared.
+ B) If the node config epoch is not already the greatest of the cluster, it
+ generates a new one and assigns the new config epoch to itself. This way its
+ new hash slot ownership will win over any past configuration created by
+ previous failovers or slot migrations.
+
+It is important to note that step 3 is the only time when a Redis Cluster node
+will create a new config epoch without agreement from other nodes. This only
+happens when a manual reconfiguration is performed. However it is impossible
+that this creates a non-transient setup where two nodes have the same config
+epoch, since Redis Cluster uses a config epoch collision resolution algorithm.
+
+@return
+
+@simple-string-reply: All the subcommands return `OK` if the command was
+successful. Otherwise an error is returned.
+
+## Redis Cluster live resharding explained
+
+The `CLUSTER SETSLOT` command is an important piece used by Redis Cluster in
+order to migrate all the keys contained in one hash slot from one node to
+another. This is how the migration is orchestrated, with the help of other
+commands as well. We'll call the node that has the current ownership of the hash
+slot the `source` node, and the node where we want to migrate the `destination`
+node.
+
+1. Set the destination node slot to _importing_ state using
+ `CLUSTER SETSLOT <slot> IMPORTING <source-node-id>`.
+2. Set the source node slot to _migrating_ state using
+ `CLUSTER SETSLOT <slot> MIGRATING <destination-node-id>`.
+3. Get keys from the source node with `CLUSTER GETKEYSINSLOT` command and move
+ them into the destination node using the `MIGRATE` command.
+4. Use `CLUSTER SETSLOT <slot> NODE <destination-node-id>` in the source or
+   destination node.
+
+Notes:
+
+- The order of steps 1 and 2 is important. We want the destination node to be
+  ready to accept `ASK` redirections when the source node is configured to
+  redirect.
+- Step 4 does not technically need to use `SETSLOT` in the nodes not involved
+  in the resharding, since the configuration will eventually propagate itself;
+  however it is a good idea to do so in order to stop nodes from pointing to
+  the wrong node for the moved hash slot as soon as possible, resulting in
+  fewer redirections before finding the right node. A sketch of the whole
+  procedure follows.
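+
+For illustration only, the following is a minimal Python sketch of the four
+steps above, assuming two redis-py style client objects (`src` and `dst`)
+connected to the source and destination masters. The node IDs, address
+arguments, batch size, and timeout are placeholders rather than part of any
+Redis API; error handling and large-scale batching are intentionally omitted.
+
+```
+def reshard_slot(src, dst, slot, src_id, dst_id, dst_host, dst_port):
+    # 1. Destination first, so it can serve ASK redirections.
+    dst.execute_command("CLUSTER", "SETSLOT", slot, "IMPORTING", src_id)
+    # 2. Then the source starts redirecting missing keys with ASK.
+    src.execute_command("CLUSTER", "SETSLOT", slot, "MIGRATING", dst_id)
+    # 3. Move the keys in small batches using MIGRATE.
+    while True:
+        keys = src.execute_command("CLUSTER", "GETKEYSINSLOT", slot, 100)
+        if not keys:
+            break
+        for key in keys:
+            src.execute_command("MIGRATE", dst_host, dst_port, key, 0, 5000)
+    # 4. Finally bind the slot to the destination node.
+    for node in (dst, src):
+        node.execute_command("CLUSTER", "SETSLOT", slot, "NODE", dst_id)
+```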
diff --git a/iredis/data/commands/cluster-slaves.md b/iredis/data/commands/cluster-slaves.md
new file mode 100644
index 0000000..0b2aa49
--- /dev/null
+++ b/iredis/data/commands/cluster-slaves.md
@@ -0,0 +1,22 @@
+**A note about the word slave used in this man page and command name**: starting
+with Redis 5, if not for backward compatibility, the Redis project no longer
+uses the word slave. Please use the new command `CLUSTER REPLICAS`. The command
+`CLUSTER SLAVES` will continue to work for backward compatibility.
+
+The command provides a list of replica nodes replicating from the specified
+master node. The list is provided in the same format used by `CLUSTER NODES`
+(please refer to its documentation for the specification of the format).
+
+The command will fail if the specified node is not known or if it is not a
+master according to the node table of the node receiving the command.
+
+Note that if a replica is added, moved, or removed from a given master node,
+and we send `CLUSTER SLAVES` to a node that has not yet received the
+configuration update, it may show stale information. However, eventually (in a
+matter of seconds if there are no network partitions) all the nodes will agree
+about the set of nodes associated with a given master.
+
+@return
+
+The command returns data in the same format as `CLUSTER NODES`.
diff --git a/iredis/data/commands/cluster-slots.md b/iredis/data/commands/cluster-slots.md
new file mode 100644
index 0000000..693e6b3
--- /dev/null
+++ b/iredis/data/commands/cluster-slots.md
@@ -0,0 +1,102 @@
+`CLUSTER SLOTS` returns details about which cluster slots map to which Redis
+instances. The command is suitable to be used by Redis Cluster client library
+implementations in order to retrieve (or update when a redirection is received)
+the map associating cluster _hash slots_ with actual node network coordinates
+(composed of an IP address and a TCP port), so that when a command is received,
+it can be sent to what is likely the right instance for the keys specified in
+the command.
+
+## Nested Result Array
+
+Each nested result is:
+
+- Start slot range
+- End slot range
+- Master for slot range represented as nested IP/Port array
+- First replica of master for slot range
+- Second replica
+- ...continues until all replicas for this master are returned.
+
+Each result includes all active replicas of the master instance for the listed
+slot range. Failed replicas are not returned.
+
+The third nested reply is guaranteed to be the IP/Port pair of the master
+instance for the slot range. All IP/Port pairs after the third nested reply are
+replicas of the master.
+
+If a cluster instance has non-contiguous slots (e.g. 1-400,900,1800-6000) then
+master and replica IP/Port results will be duplicated for each top-level slot
+range reply.
+
+**Warning:** Newer versions of Redis Cluster will output, for each Redis
+instance, not just the IP and port, but also the node ID as the third element
+of the array. In future versions there could be more elements describing the
+node better. In general a client implementation should just rely on the fact
+that certain parameters are at fixed positions as specified, while any
+additional parameters that follow should be ignored. Similarly, a client
+library should, if possible, cope with the fact that older versions may provide
+only the IP and port parameters.
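+
+As a sketch only (not an official client implementation), the following Python
+function shows how a parsed `CLUSTER SLOTS` reply can be turned into a simple
+slot-to-address routing table, ignoring any elements after the IP and port as
+recommended above:
+
+```
+def build_slot_map(slots_reply):
+    """slots_reply: list of [start, end, [ip, port, ...], ...] entries."""
+    slot_map = {}
+    for entry in slots_reply:
+        start, end, master = entry[0], entry[1], entry[2]
+        ip, port = master[0], master[1]   # extra elements (node ID) ignored
+        for slot in range(start, end + 1):
+            slot_map[slot] = (ip, port)
+    return slot_map
+```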
+
+@return
+
+@array-reply: nested list of slot ranges with IP/Port mappings.
+
+### Sample Output (old version)
+
+```
+127.0.0.1:7001> cluster slots
+1) 1) (integer) 0
+ 2) (integer) 4095
+ 3) 1) "127.0.0.1"
+ 2) (integer) 7000
+ 4) 1) "127.0.0.1"
+ 2) (integer) 7004
+2) 1) (integer) 12288
+ 2) (integer) 16383
+ 3) 1) "127.0.0.1"
+ 2) (integer) 7003
+ 4) 1) "127.0.0.1"
+ 2) (integer) 7007
+3) 1) (integer) 4096
+ 2) (integer) 8191
+ 3) 1) "127.0.0.1"
+ 2) (integer) 7001
+ 4) 1) "127.0.0.1"
+ 2) (integer) 7005
+4) 1) (integer) 8192
+ 2) (integer) 12287
+ 3) 1) "127.0.0.1"
+ 2) (integer) 7002
+ 4) 1) "127.0.0.1"
+ 2) (integer) 7006
+```
+
+### Sample Output (new version, includes IDs)
+
+```
+127.0.0.1:30001> cluster slots
+1) 1) (integer) 0
+ 2) (integer) 5460
+ 3) 1) "127.0.0.1"
+ 2) (integer) 30001
+ 3) "09dbe9720cda62f7865eabc5fd8857c5d2678366"
+ 4) 1) "127.0.0.1"
+ 2) (integer) 30004
+ 3) "821d8ca00d7ccf931ed3ffc7e3db0599d2271abf"
+2) 1) (integer) 5461
+ 2) (integer) 10922
+ 3) 1) "127.0.0.1"
+ 2) (integer) 30002
+ 3) "c9d93d9f2c0c524ff34cc11838c2003d8c29e013"
+ 4) 1) "127.0.0.1"
+ 2) (integer) 30005
+ 3) "faadb3eb99009de4ab72ad6b6ed87634c7ee410f"
+3) 1) (integer) 10923
+ 2) (integer) 16383
+ 3) 1) "127.0.0.1"
+ 2) (integer) 30003
+ 3) "044ec91f325b7595e76dbcb18cc688b6a5b434a1"
+ 4) 1) "127.0.0.1"
+ 2) (integer) 30006
+ 3) "58e6e48d41228013e5d9c1c37c5060693925e97e"
+```
diff --git a/iredis/data/commands/command-count.md b/iredis/data/commands/command-count.md
new file mode 100644
index 0000000..a198dd3
--- /dev/null
+++ b/iredis/data/commands/command-count.md
@@ -0,0 +1,11 @@
+Returns @integer-reply of the total number of commands in this Redis server.
+
+@return
+
+@integer-reply: number of commands returned by `COMMAND`
+
+@examples
+
+```cli
+COMMAND COUNT
+```
diff --git a/iredis/data/commands/command-getkeys.md b/iredis/data/commands/command-getkeys.md
new file mode 100644
index 0000000..1c591f1
--- /dev/null
+++ b/iredis/data/commands/command-getkeys.md
@@ -0,0 +1,21 @@
+Returns @array-reply of keys from a full Redis command.
+
+`COMMAND GETKEYS` is a helper command to let you find the keys from a full Redis
+command.
+
+`COMMAND` shows some commands as having _movablekeys_, meaning the entire
+command must be parsed to discover storage or retrieval keys. You can use
+`COMMAND GETKEYS` to discover key positions directly from how Redis parses the
+commands.
+
+@return
+
+@array-reply: list of keys from your command.
+
+@examples
+
+```cli
+COMMAND GETKEYS MSET a b c d e f
+COMMAND GETKEYS EVAL "not consulted" 3 key1 key2 key3 arg1 arg2 arg3 argN
+COMMAND GETKEYS SORT mylist ALPHA STORE outlist
+```
diff --git a/iredis/data/commands/command-info.md b/iredis/data/commands/command-info.md
new file mode 100644
index 0000000..92836e4
--- /dev/null
+++ b/iredis/data/commands/command-info.md
@@ -0,0 +1,18 @@
+Returns @array-reply of details about multiple Redis commands.
+
+Same result format as `COMMAND` except you can specify which commands get
+returned.
+
+If you request details about non-existing commands, their return position will
+be nil.
+
+@return
+
+@array-reply: nested list of command details.
+
+@examples
+
+```cli
+COMMAND INFO get set eval
+COMMAND INFO foo evalsha config bar
+```
diff --git a/iredis/data/commands/command.md b/iredis/data/commands/command.md
new file mode 100644
index 0000000..5028e70
--- /dev/null
+++ b/iredis/data/commands/command.md
@@ -0,0 +1,179 @@
+Returns @array-reply of details about all Redis commands.
+
+Cluster clients must be aware of key positions in commands so commands can go to
+matching instances, but Redis commands vary between accepting one key, multiple
+keys, or even multiple keys separated by other data.
+
+You can use `COMMAND` to cache a mapping between commands and key positions for
+each command to enable exact routing of commands to cluster instances.
+
+## Nested Result Array
+
+Each top-level result contains six nested results. Each nested result is:
+
+- command name
+- command arity specification
+- nested @array-reply of command flags
+- position of first key in argument list
+- position of last key in argument list
+- step count for locating repeating keys
+
+### Command Name
+
+Command name is the command returned as a lowercase string.
+
+### Command Arity
+
+<table style="width:50%">
+<tr><td>
+<pre>
+<code>1) 1) "get"
+ 2) (integer) 2
+ 3) 1) readonly
+ 4) (integer) 1
+ 5) (integer) 1
+ 6) (integer) 1
+</code>
+</pre>
+</td>
+<td>
+<pre>
+<code>1) 1) "mget"
+ 2) (integer) -2
+ 3) 1) readonly
+ 4) (integer) 1
+ 5) (integer) -1
+ 6) (integer) 1
+</code>
+</pre>
+</td></tr>
+</table>
+
+Command arity follows a simple pattern:
+
+- positive if command has fixed number of required arguments.
+- negative if command has minimum number of required arguments, but may have
+ more.
+
+Command arity _includes_ counting the command name itself.
+
+Examples:
+
+- `GET` arity is 2 since the command only accepts one argument and always has
+ the format `GET _key_`.
+- `MGET` arity is -2 since the command accepts at a minimum one argument, but up
+ to an unlimited number: `MGET _key1_ [key2] [key3] ...`.
+
+Also note with `MGET`, the -1 value for "last key position" means the list of
+keys may have unlimited length.
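+
+As a small illustrative sketch (not part of Redis), this is how a client could
+validate an argument count against the reported arity, where `argc` counts the
+command name itself, just like arity does:
+
+```
+def arity_ok(arity, argc):
+    # Positive arity: exact count; negative arity: minimum count.
+    return argc == arity if arity >= 0 else argc >= -arity
+
+# arity_ok(2, 2)  -> True   (GET key)
+# arity_ok(-2, 5) -> True   (MGET key1 key2 key3 key4)
+```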
+
+### Flags
+
+Command flags is @array-reply containing one or more status replies:
+
+- _write_ - command may result in modifications
+- _readonly_ - command will never modify keys
+- _denyoom_ - reject command if currently OOM
+- _admin_ - server admin command
+- _pubsub_ - pubsub-related command
+- _noscript_ - deny this command from scripts
+- _random_ - command has random results, dangerous for scripts
+- _sort_for_script_ - if called from script, sort output
+- _loading_ - allow command while database is loading
+- _stale_ - allow command while replica has stale data
+- _skip_monitor_ - do not show this command in MONITOR
+- _asking_ - cluster related - accept even if importing
+- _fast_ - command operates in constant or log(N) time. Used for latency
+ monitoring.
+- _movablekeys_ - keys have no pre-determined position. You must discover keys
+ yourself.
+
+### Movable Keys
+
+```
+1) 1) "sort"
+ 2) (integer) -2
+ 3) 1) write
+ 2) denyoom
+ 3) movablekeys
+ 4) (integer) 1
+ 5) (integer) 1
+ 6) (integer) 1
+```
+
+Some Redis commands have no predetermined key locations. For those commands,
+flag `movablekeys` is added to the command flags @array-reply. Your Redis
+Cluster client needs to parse commands marked `movablekeys` to locate all
+relevant key positions.
+
+Complete list of commands currently requiring key location parsing:
+
+- `SORT` - optional `STORE` key, optional `BY` weights, optional `GET` keys
+- `ZUNIONSTORE` - keys stop when `WEIGHTS` or `AGGREGATE` starts
+- `ZINTERSTORE` - keys stop when `WEIGHTS` or `AGGREGATE` starts
+- `EVAL` - keys stop after `numkeys` count arguments
+- `EVALSHA` - keys stop after `numkeys` count arguments
+
+Also see `COMMAND GETKEYS` to have your Redis server tell you where the keys
+are in any given command.
+
+### First Key in Argument List
+
+For most commands the first key is position 1. Position 0 is always the command
+name itself.
+
+### Last Key in Argument List
+
+Redis commands usually accept one key, two keys, or an unlimited number of keys.
+
+If a command accepts one key, the first key and last key positions are both 1.
+
+If a command accepts two keys (e.g. `BRPOPLPUSH`, `SMOVE`, `RENAME`, ...) then
+the last key position is the location of the last key in the argument list.
+
+If a command accepts an unlimited number of keys, the last key position is -1.
+
+### Step Count
+
+<table style="width:50%">
+<tr><td>
+<pre>
+<code>1) 1) "mset"
+ 2) (integer) -3
+ 3) 1) write
+ 2) denyoom
+ 4) (integer) 1
+ 5) (integer) -1
+ 6) (integer) 2
+</code>
+</pre>
+</td>
+<td>
+<pre>
+<code>1) 1) "mget"
+ 2) (integer) -2
+ 3) 1) readonly
+ 4) (integer) 1
+ 5) (integer) -1
+ 6) (integer) 1
+</code>
+</pre>
+</td></tr>
+</table>
+
+Key step count allows us to find key positions in commands like `MSET` where the
+format is `MSET _key1_ _val1_ [key2] [val2] [key3] [val3]...`.
+
+In the case of `MSET`, keys are every other position so the step value is 2.
+Compare with `MGET` above where the step value is just 1.
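+
+As an illustrative sketch (not part of Redis), the three numeric fields can be
+combined to extract key positions from a concrete command line. Here `args` is
+the full command including its name at position 0, and `first`, `last` and
+`step` come from the `COMMAND` reply:
+
+```
+def key_positions(args, first, last, step):
+    if first == 0:       # the command takes no keys at all
+        return []
+    if last < 0:         # negative last counts from the end of the arguments
+        last = len(args) + last
+    return list(range(first, last + 1, step))
+
+# key_positions(["MSET", "k1", "v1", "k2", "v2"], 1, -1, 2) -> [1, 3]
+# key_positions(["MGET", "k1", "k2"], 1, -1, 1)             -> [1, 2]
+```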
+
+@return
+
+@array-reply: nested list of command details. Commands are returned in random
+order.
+
+@examples
+
+```cli
+COMMAND
+```
diff --git a/iredis/data/commands/config-get.md b/iredis/data/commands/config-get.md
new file mode 100644
index 0000000..f4a4b34
--- /dev/null
+++ b/iredis/data/commands/config-get.md
@@ -0,0 +1,52 @@
+The `CONFIG GET` command is used to read the configuration parameters of a
+running Redis server. Not all the configuration parameters are supported in
+Redis 2.4, while Redis 2.6 can read the whole configuration of a server using
+this command.
+
+The symmetric command used to alter the configuration at run time is
+`CONFIG SET`.
+
+`CONFIG GET` takes a single argument, which is a glob-style pattern. All the
+configuration parameters matching this pattern are reported as a list of
+key-value pairs. Example:
+
+```
+redis> config get *max-*-entries*
+1) "hash-max-zipmap-entries"
+2) "512"
+3) "list-max-ziplist-entries"
+4) "512"
+5) "set-max-intset-entries"
+6) "512"
+```
+
+You can obtain a list of all the supported configuration parameters by typing
+`CONFIG GET *` in an open `redis-cli` prompt.
+
+All the supported parameters have the same meaning as the equivalent
+configuration parameter used in the [redis.conf][hgcarr22rc] file, with the
+following important differences:
+
+[hgcarr22rc]: http://github.com/redis/redis/raw/2.8/redis.conf
+
+- Where bytes or other quantities are specified, it is not possible to use the
+ `redis.conf` abbreviated form (`10k`, `2gb` ... and so forth), everything
+ should be specified as a well-formed 64-bit integer, in the base unit of the
+ configuration directive.
+- The save parameter is a single string of space-separated integers. Every pair
+  of integers represents a seconds/modifications threshold.
+
+For instance what in `redis.conf` looks like:
+
+```
+save 900 1
+save 300 10
+```
+
+that is, save after 900 seconds if there is at least 1 change to the dataset,
+and after 300 seconds if there are at least 10 changes to the dataset, will be
+reported by `CONFIG GET` as "900 1 300 10".
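+
+As a small sketch (not part of Redis), the reported save string can be split
+into (seconds, changes) pairs like this:
+
+```
+def parse_save(value):
+    """'900 1 300 10' -> [(900, 1), (300, 10)]"""
+    numbers = [int(n) for n in value.split()]
+    return list(zip(numbers[0::2], numbers[1::2]))
+```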
+
+@return
+
+The return type of the command is a @array-reply.
diff --git a/iredis/data/commands/config-resetstat.md b/iredis/data/commands/config-resetstat.md
new file mode 100644
index 0000000..d25a25b
--- /dev/null
+++ b/iredis/data/commands/config-resetstat.md
@@ -0,0 +1,16 @@
+Resets the statistics reported by Redis using the `INFO` command.
+
+These are the counters that are reset:
+
+- Keyspace hits
+- Keyspace misses
+- Number of commands processed
+- Number of connections received
+- Number of expired keys
+- Number of rejected connections
+- Latest fork(2) time
+- The `aof_delayed_fsync` counter
+
+@return
+
+@simple-string-reply: always `OK`.
diff --git a/iredis/data/commands/config-rewrite.md b/iredis/data/commands/config-rewrite.md
new file mode 100644
index 0000000..54509e5
--- /dev/null
+++ b/iredis/data/commands/config-rewrite.md
@@ -0,0 +1,37 @@
+The `CONFIG REWRITE` command rewrites the `redis.conf` file the server was
+started with, applying the minimal changes needed to make it reflect the
+configuration currently used by the server, which may be different compared to
+the original one because of the use of the `CONFIG SET` command.
+
+The rewrite is performed in a very conservative way:
+
+- Comments and the overall structure of the original redis.conf are preserved as
+ much as possible.
+- If an option already exists in the old redis.conf file, it will be rewritten
+ at the same position (line number).
+- If an option was not already present, but it is set to its default value, it
+ is not added by the rewrite process.
+- If an option was not already present, but it is set to a non-default value, it
+ is appended at the end of the file.
+- Unused lines are blanked. For instance if you used to have multiple `save`
+  directives, but the current configuration has fewer or none because you
+  disabled RDB persistence, all those lines will be blanked.
+
+`CONFIG REWRITE` is also able to rewrite the configuration file from scratch if
+the original one no longer exists for some reason. However, if the server was
+started without a configuration file at all, `CONFIG REWRITE` will just return
+an error.
+
+## Atomic rewrite process
+
+In order to make sure the redis.conf file is always consistent, that is, on
+errors or crashes you always end with the old file, or the new one, the rewrite
+is performed with a single `write(2)` call that has enough content to be at
+least as big as the old file. Sometimes additional padding in the form of
+comments is added in order to make sure the resulting file is big enough, and
+later the file gets truncated to remove the padding at the end.
+
+@return
+
+@simple-string-reply: `OK` when the configuration was rewritten properly.
+Otherwise an error is returned.
diff --git a/iredis/data/commands/config-set.md b/iredis/data/commands/config-set.md
new file mode 100644
index 0000000..24c4194
--- /dev/null
+++ b/iredis/data/commands/config-set.md
@@ -0,0 +1,56 @@
+The `CONFIG SET` command is used in order to reconfigure the server at run time
+without the need to restart Redis. You can change both trivial parameters and
+switch from one persistence option to another using this command.
+
+The list of configuration parameters supported by `CONFIG SET` can be obtained
+by issuing a `CONFIG GET *` command, which is the symmetrical command used to
+obtain information about the configuration of a running Redis instance.
+
+All the configuration parameters set using `CONFIG SET` are immediately loaded
+by Redis and will take effect starting with the next command executed.
+
+All the supported parameters have the same meaning as the equivalent
+configuration parameter used in the [redis.conf][hgcarr22rc] file, with the
+following important differences:
+
+[hgcarr22rc]: http://github.com/redis/redis/raw/2.8/redis.conf
+
+- In options where bytes or other quantities are specified, it is not possible
+  to use the `redis.conf` abbreviated form (`10k`, `2gb` ... and so forth),
+  everything should be specified as a well-formed 64-bit integer, in the base
+  unit of the configuration directive. However, since Redis 3.0 it is possible
+  to use `CONFIG SET` with memory units for `maxmemory`, client output buffers,
+  and replication backlog size.
+- The save parameter is a single string of space-separated integers. Every pair
+  of integers represents a seconds/modifications threshold.
+
+For instance what in `redis.conf` looks like:
+
+```
+save 900 1
+save 300 10
+```
+
+that is, save after 900 seconds if there is at least 1 change to the dataset,
+and after 300 seconds if there are at least 10 changes to the dataset, should
+be set using `CONFIG SET SAVE "900 1 300 10"`.
+
+It is possible to switch persistence from RDB snapshotting to append-only file
+(and the other way around) using the `CONFIG SET` command. For more information
+about how to do that please check the [persistence page][tp].
+
+[tp]: /topics/persistence
+
+In general what you should know is that setting the `appendonly` parameter to
+`yes` will start a background process to save the initial append-only file
+(obtained from the in-memory data set), and will append all subsequent commands
+to the append-only file, thus obtaining exactly the same effect as a Redis
+server that started with AOF turned on from the start.
+
+You can have both AOF and RDB snapshotting enabled if you want; the two options
+are not mutually exclusive.
+
+@return
+
+@simple-string-reply: `OK` when the configuration was set properly. Otherwise an
+error is returned.
diff --git a/iredis/data/commands/dbsize.md b/iredis/data/commands/dbsize.md
new file mode 100644
index 0000000..fe82aa7
--- /dev/null
+++ b/iredis/data/commands/dbsize.md
@@ -0,0 +1,5 @@
+Return the number of keys in the currently-selected database.
+
+@return
+
+@integer-reply
diff --git a/iredis/data/commands/debug-object.md b/iredis/data/commands/debug-object.md
new file mode 100644
index 0000000..15a4780
--- /dev/null
+++ b/iredis/data/commands/debug-object.md
@@ -0,0 +1,6 @@
+`DEBUG OBJECT` is a debugging command that should not be used by clients. Check
+the `OBJECT` command instead.
+
+@return
+
+@simple-string-reply
diff --git a/iredis/data/commands/debug-segfault.md b/iredis/data/commands/debug-segfault.md
new file mode 100644
index 0000000..4e21b9c
--- /dev/null
+++ b/iredis/data/commands/debug-segfault.md
@@ -0,0 +1,6 @@
+`DEBUG SEGFAULT` performs an invalid memory access that crashes Redis. It is
+used to simulate bugs during development.
+
+@return
+
+@simple-string-reply
diff --git a/iredis/data/commands/decr.md b/iredis/data/commands/decr.md
new file mode 100644
index 0000000..78d2a2c
--- /dev/null
+++ b/iredis/data/commands/decr.md
@@ -0,0 +1,19 @@
+Decrements the number stored at `key` by one. If the key does not exist, it is
+set to `0` before performing the operation. An error is returned if the key
+contains a value of the wrong type or contains a string that cannot be
+represented as an integer. This operation is limited to **64 bit signed
+integers**.
+
+See `INCR` for extra information on increment/decrement operations.
+
+@return
+
+@integer-reply: the value of `key` after the decrement
+
+@examples
+
+```cli
+SET mykey "10"
+DECR mykey
+SET mykey "234293482390480948029348230948"
+DECR mykey
+```
diff --git a/iredis/data/commands/decrby.md b/iredis/data/commands/decrby.md
new file mode 100644
index 0000000..3d000a0
--- /dev/null
+++ b/iredis/data/commands/decrby.md
@@ -0,0 +1,17 @@
+Decrements the number stored at `key` by `decrement`. If the key does not
+exist, it is set to `0` before performing the operation. An error is returned
+if the key contains a value of the wrong type or contains a string that cannot
+be represented as an integer. This operation is limited to 64 bit signed
+integers.
+
+See `INCR` for extra information on increment/decrement operations.
+
+@return
+
+@integer-reply: the value of `key` after the decrement
+
+@examples
+
+```cli
+SET mykey "10"
+DECRBY mykey 3
+```
diff --git a/iredis/data/commands/del.md b/iredis/data/commands/del.md
new file mode 100644
index 0000000..fbb05ec
--- /dev/null
+++ b/iredis/data/commands/del.md
@@ -0,0 +1,13 @@
+Removes the specified keys. A key is ignored if it does not exist.
+
+@return
+
+@integer-reply: The number of keys that were removed.
+
+@examples
+
+```cli
+SET key1 "Hello"
+SET key2 "World"
+DEL key1 key2 key3
+```
diff --git a/iredis/data/commands/discard.md b/iredis/data/commands/discard.md
new file mode 100644
index 0000000..d84b503
--- /dev/null
+++ b/iredis/data/commands/discard.md
@@ -0,0 +1,10 @@
+Flushes all previously queued commands in a [transaction][tt] and restores the
+connection state to normal.
+
+[tt]: /topics/transactions
+
+If `WATCH` was used, `DISCARD` unwatches all keys watched by the connection.
+
+@return
+
+@simple-string-reply: always `OK`.
diff --git a/iredis/data/commands/dump.md b/iredis/data/commands/dump.md
new file mode 100644
index 0000000..b81d7af
--- /dev/null
+++ b/iredis/data/commands/dump.md
@@ -0,0 +1,30 @@
+Serialize the value stored at key in a Redis-specific format and return it to
+the user. The returned value can be synthesized back into a Redis key using the
+`RESTORE` command.
+
+The serialization format is opaque and non-standard, however it has a few
+semantic characteristics:
+
+- It contains a 64-bit checksum that is used to make sure errors will be
+ detected. The `RESTORE` command makes sure to check the checksum before
+ synthesizing a key using the serialized value.
+- Values are encoded in the same format used by RDB.
+- An RDB version is encoded inside the serialized value, so that different Redis
+ versions with incompatible RDB formats will refuse to process the serialized
+ value.
+
+The serialized value does NOT contain expire information. In order to capture
+the time to live of the current value the `PTTL` command should be used.
+
+If `key` does not exist a nil bulk reply is returned.
+
+@return
+
+@bulk-string-reply: the serialized value.
+
+@examples
+
+```cli
+SET mykey 10
+DUMP mykey
+```
diff --git a/iredis/data/commands/echo.md b/iredis/data/commands/echo.md
new file mode 100644
index 0000000..642d0f3
--- /dev/null
+++ b/iredis/data/commands/echo.md
@@ -0,0 +1,11 @@
+Returns `message`.
+
+@return
+
+@bulk-string-reply
+
+@examples
+
+```cli
+ECHO "Hello World!"
+```
diff --git a/iredis/data/commands/eval.md b/iredis/data/commands/eval.md
new file mode 100644
index 0000000..d1d0346
--- /dev/null
+++ b/iredis/data/commands/eval.md
@@ -0,0 +1,892 @@
+## Introduction to EVAL
+
+`EVAL` and `EVALSHA` are used to evaluate scripts using the Lua interpreter
+built into Redis starting from version 2.6.0.
+
+The first argument of `EVAL` is a Lua 5.1 script. The script does not need to
+define a Lua function (and should not). It is just a Lua program that will run
+in the context of the Redis server.
+
+The second argument of `EVAL` is the number of arguments that follow the script
+(starting from the third argument) and represent Redis key names. These
+arguments can be accessed by Lua using the `!KEYS` global variable in the form
+of a one-based array (so `KEYS[1]`, `KEYS[2]`, ...).
+
+All the additional arguments should not represent key names and can be accessed
+by Lua using the `ARGV` global variable, very similarly to what happens with
+keys (so `ARGV[1]`, `ARGV[2]`, ...).
+
+The following example should clarify what is stated above:
+
+```
+> eval "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}" 2 key1 key2 first second
+1) "key1"
+2) "key2"
+3) "first"
+4) "second"
+```
+
+Note: as you can see Lua arrays are returned as Redis multi bulk replies, that
+is a Redis return type that your client library will likely convert into an
+Array type in your programming language.
+
+It is possible to call Redis commands from a Lua script using two different Lua
+functions:
+
+- `redis.call()`
+- `redis.pcall()`
+
+`redis.call()` is similar to `redis.pcall()`; the only difference is that if a
+Redis command call results in an error, `redis.call()` will raise a Lua error
+that in turn will force `EVAL` to return an error to the command caller, while
+`redis.pcall` will trap the error and return a Lua table representing the
+error.
+
+The arguments of the `redis.call()` and `redis.pcall()` functions are all the
+arguments of a well formed Redis command:
+
+```
+> eval "return redis.call('set','foo','bar')" 0
+OK
+```
+
+The above script sets the key `foo` to the string `bar`. However it violates the
+`EVAL` command semantics as all the keys that the script uses should be passed
+using the `!KEYS` array:
+
+```
+> eval "return redis.call('set',KEYS[1],'bar')" 1 foo
+OK
+```
+
+All Redis commands must be analyzed before execution to determine which keys the
+command will operate on. In order for this to be true for `EVAL`, keys must be
+passed explicitly. This is useful in many ways, but especially to make sure
+Redis Cluster can forward your request to the appropriate cluster node.
+
+Note this rule is not enforced in order to provide the user with opportunities
+to abuse the Redis single instance configuration, at the cost of writing scripts
+not compatible with Redis Cluster.
+
+Lua scripts can return a value that is converted from the Lua type to the Redis
+protocol using a set of conversion rules.
+
+## Conversion between Lua and Redis data types
+
+Redis return values are converted into Lua data types when Lua calls a Redis
+command using `call()` or `pcall()`. Similarly, Lua data types are converted
+into the Redis protocol when calling a Redis command and when a Lua script
+returns a value, so that scripts can control what `EVAL` will return to the
+client.
+
+This conversion between data types is designed in a way that if a Redis type is
+converted into a Lua type, and then the result is converted back into a Redis
+type, the result is the same as the initial value.
+
+In other words there is a one-to-one conversion between Lua and Redis types. The
+following table shows you all the conversions rules:
+
+**Redis to Lua** conversion table.
+
+- Redis integer reply -> Lua number
+- Redis bulk reply -> Lua string
+- Redis multi bulk reply -> Lua table (may have other Redis data types nested)
+- Redis status reply -> Lua table with a single `ok` field containing the status
+- Redis error reply -> Lua table with a single `err` field containing the error
+- Redis Nil bulk reply and Nil multi bulk reply -> Lua false boolean type
+
+**Lua to Redis** conversion table.
+
+- Lua number -> Redis integer reply (the number is converted into an integer)
+- Lua string -> Redis bulk reply
+- Lua table (array) -> Redis multi bulk reply (truncated to the first nil inside
+ the Lua array if any)
+- Lua table with a single `ok` field -> Redis status reply
+- Lua table with a single `err` field -> Redis error reply
+- Lua boolean false -> Redis Nil bulk reply.
+
+There is an additional Lua-to-Redis conversion rule that has no corresponding
+Redis to Lua conversion rule:
+
+- Lua boolean true -> Redis integer reply with value of 1.
+
+Lastly, there are three important rules to note:
+
+- Lua has a single numerical type, Lua numbers. There is no distinction between
+ integers and floats. So we always convert Lua numbers into integer replies,
+ removing the decimal part of the number if any. **If you want to return a
+ float from Lua you should return it as a string**, exactly like Redis itself
+ does (see for instance the `ZSCORE` command).
+- There is
+ [no simple way to have nils inside Lua arrays](http://www.lua.org/pil/19.1.html),
+ this is a result of Lua table semantics, so when Redis converts a Lua array
+ into Redis protocol the conversion is stopped if a nil is encountered.
+- When a Lua table contains keys (and their values), the converted Redis reply
+ will **not** include them.
+
+**RESP3 mode conversion rules**: note that the Lua engine can work in RESP3 mode
+using the new Redis 6 protocol. In this case there are additional conversion
+rules, and certain conversions are also modified compared to the RESP2 mode.
+Please refer to the RESP3 section of this document for more information.
+
+Here are a few conversion examples:
+
+```
+> eval "return 10" 0
+(integer) 10
+
+> eval "return {1,2,{3,'Hello World!'}}" 0
+1) (integer) 1
+2) (integer) 2
+3) 1) (integer) 3
+ 2) "Hello World!"
+
+> eval "return redis.call('get','foo')" 0
+"bar"
+```
+
+The last example shows how it is possible to receive the exact return value of
+`redis.call()` or `redis.pcall()` from Lua that would be returned if the command
+was called directly.
+
+In the following example we can see how floats and arrays containing nils and
+keys are handled:
+
+```
+> eval "return {1,2,3.3333,somekey='somevalue','foo',nil,'bar'}" 0
+1) (integer) 1
+2) (integer) 2
+3) (integer) 3
+4) "foo"
+```
+
+As you can see 3.3333 is converted into 3, _somekey_ is excluded, and the _bar_
+string is never returned as there is a nil before it.
+
+## Helper functions to return Redis types
+
+There are two helper functions to return Redis types from Lua.
+
+- `redis.error_reply(error_string)` returns an error reply. This function simply
+ returns a single field table with the `err` field set to the specified string
+ for you.
+- `redis.status_reply(status_string)` returns a status reply. This function
+ simply returns a single field table with the `ok` field set to the specified
+ string for you.
+
+There is no difference between using the helper functions or directly returning
+the table with the specified format, so the following two forms are equivalent:
+
+ return {err="My Error"}
+ return redis.error_reply("My Error")
+
+## Atomicity of scripts
+
+Redis uses the same Lua interpreter to run all the commands. Also Redis
+guarantees that a script is executed in an atomic way: no other script or Redis
+command will be executed while a script is being executed. This semantic is
+similar to the one of `MULTI` / `EXEC`. From the point of view of all the other
+clients the effects of a script are either still not visible or already
+completed.
+
+However this also means that executing slow scripts is not a good idea. It is
+not hard to create fast scripts, as the script overhead is very low, but if you
+are going to use slow scripts you should be aware that while the script is
+running no other client can execute commands.
+
+## Error handling
+
+As already stated, calls to `redis.call()` resulting in a Redis command error
+will stop the execution of the script and return an error, in a way that makes
+it obvious that the error was generated by a script:
+
+```
+> del foo
+(integer) 1
+> lpush foo a
+(integer) 1
+> eval "return redis.call('get','foo')" 0
+(error) ERR Error running script (call to f_6b1bf486c81ceb7edf3c093f4c48582e38c0e791): ERR Operation against a key holding the wrong kind of value
+```
+
+Using `redis.pcall()` no error is raised, but an error object is returned in the
+format specified above (as a Lua table with an `err` field). The script can pass
+the exact error to the user by returning the error object returned by
+`redis.pcall()`.
+
+## Bandwidth and EVALSHA
+
+The `EVAL` command forces you to send the script body again and again. Redis
+does not need to recompile the script every time as it uses an internal caching
+mechanism, however paying the cost of the additional bandwidth may not be
+optimal in many contexts.
+
+On the other hand, defining commands using a special command or via `redis.conf`
+would be a problem for a few reasons:
+
+- Different instances may have different implementations of a command.
+
+- Deployment is hard if we have to make sure all instances contain a given
+ command, especially in a distributed environment.
+
+- Reading application code, the complete semantics might not be clear since the
+ application calls commands defined server side.
+
+In order to avoid these problems while avoiding the bandwidth penalty, Redis
+implements the `EVALSHA` command.
+
+`EVALSHA` works exactly like `EVAL`, but instead of having a script as the first
+argument it has the SHA1 digest of a script. The behavior is the following:
+
+- If the server still remembers a script with a matching SHA1 digest, the script
+ is executed.
+
+- If the server does not remember a script with this SHA1 digest, a special
+ error is returned telling the client to use `EVAL` instead.
+
+Example:
+
+```
+> set foo bar
+OK
+> eval "return redis.call('get','foo')" 0
+"bar"
+> evalsha 6b1bf486c81ceb7edf3c093f4c48582e38c0e791 0
+"bar"
+> evalsha ffffffffffffffffffffffffffffffffffffffff 0
+(error) `NOSCRIPT` No matching script. Please use `EVAL`.
+```
+
+The client library implementation can always optimistically send `EVALSHA` under
+the hood even when the client actually calls `EVAL`, in the hope the script was
+already seen by the server. If the `NOSCRIPT` error is returned `EVAL` will be
+used instead.
+
+Passing keys and arguments as additional `EVAL` arguments is also very useful in
+this context as the script string remains constant and can be efficiently cached
+by Redis.
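+
+As an illustration only, here is a minimal Python sketch of this optimistic
+pattern, assuming a redis-py style client object; the SHA1 digest is computed
+locally, and `EVAL` is used only when the server answers with a `NOSCRIPT`
+error:
+
+```
+import hashlib
+
+def cached_eval(client, script, numkeys, *keys_and_args):
+    sha1 = hashlib.sha1(script.encode()).hexdigest()
+    try:
+        return client.evalsha(sha1, numkeys, *keys_and_args)
+    except Exception as err:
+        if "NOSCRIPT" not in str(err):
+            raise
+        return client.eval(script, numkeys, *keys_and_args)
+```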
+
+## Script cache semantics
+
+Executed scripts are guaranteed to be in the script cache of a given execution
+of a Redis instance forever. This means that if an `EVAL` is performed against a
+Redis instance all the subsequent `EVALSHA` calls will succeed.
+
+The reason why scripts can be cached for a long time is that it is unlikely for
+a well written application to have enough different scripts to cause memory
+problems. Every script is conceptually like the implementation of a new
+command, and even a large application will likely have just a few hundred of
+them. Even if the application is modified many times and scripts will change,
+the memory used is negligible.
+
+The only way to flush the script cache is by explicitly calling the
+`SCRIPT FLUSH` command, which will _completely flush_ the scripts cache removing
+all the scripts executed so far.
+
+This is usually needed only when the instance is going to be instantiated for
+another customer or application in a cloud environment.
+
+Also, as already mentioned, restarting a Redis instance flushes the script
+cache, which is not persistent. However from the point of view of the client
+there are only two ways to make sure a Redis instance was not restarted between
+two different commands.
+
+- The connection we have with the server is persistent and was never closed so
+ far.
+- The client explicitly checks the `runid` field in the `INFO` command in order
+ to make sure the server was not restarted and is still the same process.
+
+Practically speaking, for the client it is much better to simply assume that in
+the context of a given connection, cached scripts are guaranteed to be there
+unless an administrator explicitly called the `SCRIPT FLUSH` command.
+
+The fact that the user can count on Redis not removing scripts is semantically
+useful in the context of pipelining.
+
+For instance an application with a persistent connection to Redis can be sure
+that if a script was sent once it is still in memory, so EVALSHA can be used
+against those scripts in a pipeline without the chance of an error being
+generated due to an unknown script (we'll see this problem in detail later).
+
+A common pattern is to call `SCRIPT LOAD` to load all the scripts that will
+appear in a pipeline, then use `EVALSHA` directly inside the pipeline without
+any need to check for errors resulting from the script hash not being
+recognized.
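+
+A minimal Python sketch of this pattern, again assuming a redis-py style
+client, could look like the following (the function name and the batching of
+keys are illustrative, not part of any API):
+
+```
+def pipelined_calls(client, script, key_batches):
+    sha1 = client.script_load(script)    # SCRIPT LOAD once, up front
+    pipe = client.pipeline()
+    for keys in key_batches:
+        pipe.evalsha(sha1, len(keys), *keys)
+    return pipe.execute()
+```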
+
+## The SCRIPT command
+
+Redis offers a SCRIPT command that can be used in order to control the scripting
+subsystem. SCRIPT currently accepts three different commands:
+
+- `SCRIPT FLUSH`
+
+ This command is the only way to force Redis to flush the scripts cache. It is
+ most useful in a cloud environment where the same instance can be reassigned
+ to a different user. It is also useful for testing client libraries'
+ implementations of the scripting feature.
+
+- `SCRIPT EXISTS sha1 sha2 ... shaN`
+
+ Given a list of SHA1 digests as arguments this command returns an array of 1
+ or 0, where 1 means the specific SHA1 is recognized as a script already
+ present in the scripting cache, while 0 means that a script with this SHA1 was
+ never seen before (or at least never seen after the latest SCRIPT FLUSH
+ command).
+
+- `SCRIPT LOAD script`
+
+ This command registers the specified script in the Redis script cache. The
+ command is useful in all the contexts where we want to make sure that
+ `EVALSHA` will not fail (for instance during a pipeline or MULTI/EXEC
+ operation), without the need to actually execute the script.
+
+- `SCRIPT KILL`
+
+ This command is the only way to interrupt a long-running script that reaches
+ the configured maximum execution time for scripts. The SCRIPT KILL command can
+ only be used with scripts that did not modify the dataset during their
+ execution (since stopping a read-only script does not violate the scripting
+ engine's guaranteed atomicity). See the next sections for more information
+ about long running scripts.
+
+## Scripts as pure functions
+
+_Note: starting with Redis 5, scripts are always replicated as effects rather
+than by sending the script verbatim. So the following section is mostly
+applicable to Redis version 4 or older._
+
+A very important part of scripting is writing scripts that are pure functions.
+Scripts executed in a Redis instance are, by default, propagated to replicas and
+to the AOF file by sending the script itself -- not the resulting commands.
+
+The reason is that sending a script to another Redis instance is often much
+faster than sending the multiple commands the script generates, so if the client
+is sending many scripts to the master, converting the scripts into individual
+commands for the replica / AOF would result in too much bandwidth for the
+replication link or the Append Only File (and also too much CPU since
+dispatching a command received via network is a lot more work for Redis compared
+to dispatching a command invoked by Lua scripts).
+
+Normally replicating scripts instead of the effects of the scripts makes sense,
+however not in all cases. So starting with Redis 3.2, the scripting engine
+is able to, alternatively, replicate the sequence of write commands resulting
+from the script execution, instead of replicating the script itself. See the
+next section for more information. In this section we'll assume that scripts are
+replicated by sending the whole script. Let's call this replication mode **whole
+scripts replication**.
+
+The main drawback with the _whole scripts replication_ approach is that scripts
+are required to have the following property:
+
+- The script must always evaluate the same Redis _write_ commands with the same
+ arguments given the same input data set. Operations performed by the script
+ cannot depend on any hidden (non-explicit) information or state that may
+ change as script execution proceeds or between different executions of the
+ script, nor can it depend on any external input from I/O devices.
+
+Things like using the system time, calling Redis random commands like
+`RANDOMKEY`, or using the Lua random number generator, could result in scripts
+that will not always evaluate in the same way.
+
+In order to enforce this behavior in scripts Redis does the following:
+
+- Lua does not export commands to access the system time or other external
+ state.
+- Redis will block the script with an error if a script calls a Redis command
+ able to alter the data set **after** a Redis _random_ command like
+ `RANDOMKEY`, `SRANDMEMBER`, `TIME`. This means that if a script is read-only
+ and does not modify the data set it is free to call those commands. Note that
+ a _random command_ does not necessarily mean a command that uses random
+ numbers: any non-deterministic command is considered a random command (the
+ best example in this regard is the `TIME` command).
+- In Redis version 4, commands that may return elements in random order, like
+ `SMEMBERS` (because Redis Sets are _unordered_) have a different behavior when
+ called from Lua, and undergo a silent lexicographical sorting filter before
+ returning data to Lua scripts. So `redis.call("smembers",KEYS[1])` will always
+ return the Set elements in the same order, while the same command invoked from
+ normal clients may return different results even if the key contains exactly
+ the same elements. However starting with Redis 5 there is no longer such
+ ordering step, because Redis 5 replicates scripts in a way that no longer
+ needs non-deterministic commands to be converted into deterministic ones. In
+ general, even when developing for Redis 4, never assume that certain commands
+ in Lua will be ordered, but instead rely on the documentation of the original
+ command you call to see the properties it provides.
+- Lua pseudo random number generation functions `math.random` and
+ `math.randomseed` are modified in order to always have the same seed every
+ time a new script is executed. This means that calling `math.random` will
+ always generate the same sequence of numbers every time a script is executed
+ if `math.randomseed` is not used.
+
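+For example, since every execution starts from the same seed, repeating the same
+`EVAL` call returns the same pseudo-random value (the exact float shown here is
+only illustrative):
+
+```
+127.0.0.1:6379> eval 'return tostring(math.random())' 0
+"0.17082803611217"
+127.0.0.1:6379> eval 'return tostring(math.random())' 0
+"0.17082803611217"
+```
+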
+However the user is still able to write scripts with random behavior using the
+following simple trick. Imagine I want to write a Redis script that will
+populate a list with N random numbers.
+
+I can start with this small Ruby program:
+
+```
+require 'rubygems'
+require 'redis'
+
+r = Redis.new
+
+RandomPushScript = <<EOF
+ local i = tonumber(ARGV[1])
+ local res
+ while (i > 0) do
+ res = redis.call('lpush',KEYS[1],math.random())
+ i = i-1
+ end
+ return res
+EOF
+
+r.del(:mylist)
+puts r.eval(RandomPushScript,[:mylist],[10,rand(2**32)])
+```
+
+Every time this script is executed the resulting list will have exactly the
+following elements:
+
+```
+> lrange mylist 0 -1
+ 1) "0.74509509873814"
+ 2) "0.87390407681181"
+ 3) "0.36876626981831"
+ 4) "0.6921941534114"
+ 5) "0.7857992587545"
+ 6) "0.57730350670279"
+ 7) "0.87046522734243"
+ 8) "0.09637165539729"
+ 9) "0.74990198051087"
+10) "0.17082803611217"
+```
+
+In order to make it a pure function, but still be sure that every invocation of
+the script will result in different random elements, we can simply add an
+additional argument to the script that will be used in order to seed the Lua
+pseudo-random number generator. The new script is as follows:
+
+```
+RandomPushScript = <<EOF
+ local i = tonumber(ARGV[1])
+ local res
+ math.randomseed(tonumber(ARGV[2]))
+ while (i > 0) do
+ res = redis.call('lpush',KEYS[1],math.random())
+ i = i-1
+ end
+ return res
+EOF
+
+r.del(:mylist)
+puts r.eval(RandomPushScript,[:mylist],[10,rand(2**32)])
+```
+
+What we are doing here is sending the seed of the PRNG as one of the arguments.
+This way the script output will be the same given the same arguments, but we are
+changing one of the arguments in every invocation, generating the random seed
+client-side. The seed will be propagated as one of the arguments both in the
+replication link and in the Append Only File, guaranteeing that the same changes
+will be generated when the AOF is reloaded or when the replica processes the
+script.
+
+Note: an important part of this behavior is that the PRNG that Redis implements
+as `math.random` and `math.randomseed` is guaranteed to have the same output
+regardless of the architecture of the system running Redis. 32-bit, 64-bit,
+big-endian and little-endian systems will all produce the same output.
+
+## Replicating commands instead of scripts
+
+_Note: starting with Redis 5, the replication method described in this section
+(scripts effects replication) is the default and does not need to be explicitly
+enabled._
+
+Starting with Redis 3.2, it is possible to select an alternative replication
+method. Instead of replicating whole scripts, we can just replicate single write
+commands generated by the script. We call this **script effects replication**.
+
+In this replication mode, while Lua scripts are executed, Redis collects all the
+commands executed by the Lua scripting engine that actually modify the dataset.
+When the script execution finishes, the sequence of commands that the script
+generated is wrapped into a MULTI / EXEC transaction and sent to the replicas
+and the AOF.
+
+This is useful in several ways depending on the use case:
+
+- When the script is slow to compute, but the effects can be summarized by a few
+ write commands, it is wasteful to re-compute the script on the replicas or when
+ reloading the AOF. In this case it is much better to replicate just the effects
+ of the script.
+- When script effects replication is enabled, the checks on non-deterministic
+ commands are disabled. You can, for example, freely use the `TIME` or
+ `SRANDMEMBER` commands inside your scripts, at any place.
+- The Lua PRNG in this mode is seeded randomly at every call.
+
+In order to enable script effects replication, you need to issue the following
+Lua command before the script performs any write operation:
+
+ redis.replicate_commands()
+
+The function returns true if script effects replication was enabled. If the
+function is called after the script has already called some write command, it
+returns false, and normal whole script replication is used.
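+
+A minimal sketch (the key name `last_event` and the returned timestamp are just
+illustrative): with effects replication enabled, the script may freely write
+after calling the otherwise restricted `TIME` command:
+
+```
+127.0.0.1:6379> eval "redis.replicate_commands() local t = redis.call('time') redis.call('set', KEYS[1], t[1]) return t[1]" 1 last_event
+"1623427845"
+```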
+
+## Selective replication of commands
+
+When script effects replication is selected (see the previous section), it is
+possible to have more control in the way commands are replicated to replicas and
+AOF. This is a very advanced feature since **a misuse can do damage** by
+breaking the contract that the master, replicas, and AOF, all must contain the
+same logical content.
+
+However this is a useful feature since, sometimes, we need to execute certain
+commands only in the master in order to create, for example, intermediate
+values.
+
+Think of a Lua script that computes the intersection between two sets, picks
+five random elements from the result, and creates a new set with these five
+random elements. Finally, the script deletes the temporary key representing the
+intersection between the two original sets. What we want to replicate is only
+the creation of the new set with the five elements. It's not useful to also
+replicate the commands creating the temporary key.
+
+For this reason, Redis 3.2 introduces a new command that only works when script
+effects replication is enabled, and is able to control the scripting replication
+engine. The command is called `redis.set_repl()` and fails raising an error if
+called when script effects replication is disabled.
+
+The command can be called with four different arguments:
+
+ redis.set_repl(redis.REPL_ALL) -- Replicate to AOF and replicas.
+ redis.set_repl(redis.REPL_AOF) -- Replicate only to AOF.
+ redis.set_repl(redis.REPL_REPLICA) -- Replicate only to replicas (Redis >= 5)
+ redis.set_repl(redis.REPL_SLAVE) -- Used for backward compatibility, the same as REPL_REPLICA.
+ redis.set_repl(redis.REPL_NONE) -- Don't replicate at all.
+
+By default the scripting engine is set to `REPL_ALL`. By calling this function
+the user can switch AOF and/or replica propagation on and off, and turn them
+back on later as needed.
+
+A simple example follows:
+
+ redis.replicate_commands() -- Enable effects replication.
+ redis.call('set','A','1')
+ redis.set_repl(redis.REPL_NONE)
+ redis.call('set','B','2')
+ redis.set_repl(redis.REPL_ALL)
+ redis.call('set','C','3')
+
+After running the above script, the result is that only keys A and C will be
+created on replicas and AOF.
+
+## Global variables protection
+
+Redis scripts are not allowed to create global variables, in order to avoid
+leaking data into the Lua state. If a script needs to maintain state between
+calls (a pretty uncommon need) it should use Redis keys instead.
+
+When global variable access is attempted the script is terminated and EVAL
+returns with an error:
+
+```
+redis 127.0.0.1:6379> eval 'a=10' 0
+(error) ERR Error running script (call to f_933044db579a2f8fd45d8065f04a8d0249383e57): user_script:1: Script attempted to create global variable 'a'
+```
+
+Accessing a _non existing_ global variable generates a similar error.
+
+Circumventing globals protection by using Lua debugging functionality, or other
+approaches such as altering the meta table used to implement it, is not hard.
+However it is difficult to do accidentally. If the user messes with the Lua
+global state, the consistency of AOF and replication is not guaranteed: don't do
+it.
+
+Note for Lua newbies: in order to avoid using global variables in your scripts,
+simply declare every variable you are going to use with the _local_ keyword.
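+
+For example, the same assignment that failed above works fine once the variable
+is declared as local:
+
+```
+redis 127.0.0.1:6379> eval 'local a = 10 return a' 0
+(integer) 10
+```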
+
+## Using SELECT inside scripts
+
+It is possible to call `SELECT` inside Lua scripts just like with normal
+clients. However, one subtle aspect of the behavior changed between Redis 2.8.11
+and Redis 2.8.12. Before the 2.8.12 release the database selected by the Lua
+script was _transferred_ to the calling client as the current database. Starting
+from Redis
+2.8.12 the database selected by the Lua script only affects the execution of the
+script itself, but does not modify the database selected by the client calling
+the script.
+
+The semantic change between patch level releases was needed since the old
+behavior was inherently incompatible with the Redis replication layer and was
+the cause of bugs.
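+
+A brief sketch of the current behavior (the key name `tmpkey` is illustrative):
+the script writes to database 1, while the calling client remains on database 0,
+where the key does not exist.
+
+```
+127.0.0.1:6379> select 0
+OK
+127.0.0.1:6379> eval "redis.call('select', 1) redis.call('set', KEYS[1], 'x') return redis.call('get', KEYS[1])" 1 tmpkey
+"x"
+127.0.0.1:6379> exists tmpkey
+(integer) 0
+```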
+
+## Using Lua scripting in RESP3 mode
+
+Starting with Redis version 6, the server supports two different protocols. One
+is called RESP2, and is the old protocol: all the new connections to the server
+start in this mode. However clients are able to negotiate the new protocol using
+the `HELLO` command: this way the connection is put in RESP3 mode. In this mode
+certain commands, like for instance `HGETALL`, reply with a new data type (the
+Map data type in this specific case). The RESP3 protocol is semantically more
+powerful, however most scripts are ok with using just RESP2.
+
+The Lua engine always assumes it is running in RESP2 mode when talking with
+Redis, so regardless of whether the connection invoking the `EVAL` or `EVALSHA`
+command is in RESP2 or RESP3 mode, Lua scripts will, by default, still see the
+same kind of
+replies they used to see in the past from Redis, when calling commands using the
+`redis.call()` built-in function.
+
+However Lua scripts running in Redis 6 or greater are able to switch to RESP3
+mode, and get the replies using the newly available types. Similarly Lua scripts
+are able to reply to clients using the new types. Please make sure to understand
+[the capabilities for RESP3](https://github.com/antirez/resp3) before continuing
+reading this section.
+
+In order to switch to RESP3 a script should call this function:
+
+ redis.setresp(3)
+
+Note that a script can switch back and forth between RESP3 and RESP2 by calling
+the function with the argument '3' or '2'.
+
+At this point the new conversions are available, specifically:
+
+**Redis to Lua** conversion table specific to RESP3:
+
+- Redis map reply -> Lua table with a single `map` field containing a Lua table
+ representing the fields and values of the map.
+- Redis set reply -> Lua table with a single `set` field containing a Lua table
+ representing the elements of the set as fields, having as value just `true`.
+- Redis new RESP3 single null value -> Lua nil.
+- Redis true reply -> Lua true boolean value.
+- Redis false reply -> Lua false boolean value.
+- Redis double reply -> Lua table with a single `score` field containing a Lua
+ number representing the double value.
+- All the RESP2 old conversions still apply.
+
+**Lua to Redis** conversion table specific to RESP3:
+
+- Lua boolean -> Redis boolean true or false. **Note that this is a change
+ compared to the RESP2 mode**, where returning true from Lua returned the
+ number 1 to the Redis client, and returning false used to return NULL.
+- Lua table with a single `map` field set to a field-value Lua table -> Redis
+ map reply.
+- Lua table with a single `set` field set to a field-value Lua table -> Redis
+ set reply, the values are discarded and can be anything.
+- Lua table with a single `double` field set to a Lua number -> Redis double
+ reply.
+- Lua null -> Redis RESP3 new null reply (protocol `"_\r\n"`).
+- All the RESP2 old conversions still apply unless specified above.
+
+There is one key thing to understand: in case Lua replies with RESP3 types, but
+the connection calling Lua is in RESP2 mode, Redis will automatically convert
+the RESP3 protocol to RESP2 compatible protocol, as it happens for normal
+commands. For instance returning a map type to a connection in RESP2 mode will
+have the effect of returning a flat array of fields and values.
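+
+A brief sketch, assuming a hash `myhash` created as shown: after calling
+`redis.setresp(3)`, the `HGETALL` reply becomes a Lua table with a `map` field
+whose entries can be accessed by name.
+
+```
+127.0.0.1:6379> hset myhash field1 "value1"
+(integer) 1
+127.0.0.1:6379> eval "redis.setresp(3) local r = redis.call('hgetall', KEYS[1]) return r.map.field1" 1 myhash
+"value1"
+```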
+
+## Available libraries
+
+The Redis Lua interpreter loads the following Lua libraries:
+
+- `base` lib.
+- `table` lib.
+- `string` lib.
+- `math` lib.
+- `struct` lib.
+- `cjson` lib.
+- `cmsgpack` lib.
+- `bitop` lib.
+- `redis.sha1hex` function.
+- `redis.breakpoint` and `redis.debug` functions in the context of the
+ [Redis Lua debugger](/topics/ldb).
+
+Every Redis instance is _guaranteed_ to have all the above libraries so you can
+be sure that the environment for your Redis scripts is always the same.
+
+struct, CJSON and cmsgpack are external libraries; all the other libraries are
+standard Lua libraries.
+
+### struct
+
+struct is a library for packing/unpacking structures within Lua.
+
+```
+Valid formats:
+> - big endian
+< - little endian
+![num] - alignment
+x - padding
+b/B - signed/unsigned byte
+h/H - signed/unsigned short
+l/L - signed/unsigned long
+T - size_t
+i/In - signed/unsigned integer with size `n' (default is size of int)
+cn - sequence of `n' chars (from/to a string); when packing, n==0 means
+ the whole string; when unpacking, n==0 means use the previous
+ read number as the string length
+s - zero-terminated string
+f - float
+d - double
+' ' - ignored
+```
+
+Example:
+
+```
+127.0.0.1:6379> eval 'return struct.pack("HH", 1, 2)' 0
+"\x01\x00\x02\x00"
+127.0.0.1:6379> eval 'return {struct.unpack("HH", ARGV[1])}' 0 "\x01\x00\x02\x00"
+1) (integer) 1
+2) (integer) 2
+3) (integer) 5
+127.0.0.1:6379> eval 'return struct.size("HH")' 0
+(integer) 4
+```
+
+### CJSON
+
+The CJSON library provides extremely fast JSON manipulation within Lua.
+
+Example:
+
+```
+redis 127.0.0.1:6379> eval 'return cjson.encode({["foo"]= "bar"})' 0
+"{\"foo\":\"bar\"}"
+redis 127.0.0.1:6379> eval 'return cjson.decode(ARGV[1])["foo"]' 0 "{\"foo\":\"bar\"}"
+"bar"
+```
+
+### cmsgpack
+
+The cmsgpack library provides simple and fast MessagePack manipulation within
+Lua.
+
+Example:
+
+```
+127.0.0.1:6379> eval 'return cmsgpack.pack({"foo", "bar", "baz"})' 0
+"\x93\xa3foo\xa3bar\xa3baz"
+127.0.0.1:6379> eval 'return cmsgpack.unpack(ARGV[1])' 0 "\x93\xa3foo\xa3bar\xa3baz"
+1) "foo"
+2) "bar"
+3) "baz"
+```
+
+### bitop
+
+The Lua Bit Operations Module adds bitwise operations on numbers. It is
+available for scripting in Redis since version 2.8.18.
+
+Example:
+
+```
+127.0.0.1:6379> eval 'return bit.tobit(1)' 0
+(integer) 1
+127.0.0.1:6379> eval 'return bit.bor(1,2,4,8,16,32,64,128)' 0
+(integer) 255
+127.0.0.1:6379> eval 'return bit.tohex(422342)' 0
+"000671c6"
+```
+
+It supports several other functions: `bit.tobit`, `bit.tohex`, `bit.bnot`,
+`bit.band`, `bit.bor`, `bit.bxor`, `bit.lshift`, `bit.rshift`, `bit.arshift`,
+`bit.rol`, `bit.ror`, `bit.bswap`. All available functions are documented in the
+[Lua BitOp documentation](http://bitop.luajit.org/api.html).
+
+### `redis.sha1hex`
+
+Returns the SHA1 hex digest of the input string.
+
+Example:
+
+```
+127.0.0.1:6379> eval 'return redis.sha1hex(ARGV[1])' 0 "foo"
+"0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33"
+```
+
+## Emitting Redis logs from scripts
+
+It is possible to write to the Redis log file from Lua scripts using the
+`redis.log` function.
+
+```
+redis.log(loglevel,message)
+```
+
+`loglevel` is one of:
+
+- `redis.LOG_DEBUG`
+- `redis.LOG_VERBOSE`
+- `redis.LOG_NOTICE`
+- `redis.LOG_WARNING`
+
+They correspond directly to the normal Redis log levels. Only logs emitted by
+scripting using a log level that is equal to or greater than the currently
+configured Redis instance log level will be emitted.
+
+The `message` argument is simply a string. Example:
+
+```
+redis.log(redis.LOG_WARNING,"Something is wrong with this script.")
+```
+
+Will generate the following:
+
+```
+[32343] 22 Mar 15:21:39 # Something is wrong with this script.
+```
+
+## Sandbox and maximum execution time
+
+Scripts should never try to access the external system, like the file system, or
+perform any other system call. A script should only operate on Redis data and its
+passed arguments.
+
+Scripts are also subject to a maximum execution time (five seconds by default).
+This default timeout is huge since a script should usually run in under a
+millisecond. The limit is mostly to handle accidental infinite loops created
+during development.
+
+It is possible to modify the maximum time a script can be executed with
+millisecond precision, either via `redis.conf` or using the CONFIG GET / CONFIG
+SET command. The configuration parameter affecting max execution time is called
+`lua-time-limit`.
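+
+For example, the limit can be inspected and changed at runtime (the value is
+expressed in milliseconds; 5000 is the default):
+
+```
+127.0.0.1:6379> config get lua-time-limit
+1) "lua-time-limit"
+2) "5000"
+127.0.0.1:6379> config set lua-time-limit 10000
+OK
+```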
+
+When a script reaches the timeout it is not automatically terminated by Redis
+since this violates the contract Redis has with the scripting engine to ensure
+that scripts are atomic. Interrupting a script means potentially leaving the
+dataset with half-written data. For this reason, when a script executes for more
+than the specified time, the following happens:
+
+- Redis logs that a script is running too long.
+- It starts accepting commands again from other clients, but will reply with a
+ BUSY error to all the clients sending normal commands. The only allowed
+ commands in this status are `SCRIPT KILL` and `SHUTDOWN NOSAVE`.
+- It is possible to terminate a script that executes only read-only commands
+ using the `SCRIPT KILL` command. This does not violate the scripting semantic
+ as no data was yet written to the dataset by the script.
+- If the script already called write commands the only allowed command becomes
+ `SHUTDOWN NOSAVE` that stops the server without saving the current data set on
+ disk (basically the server is aborted).
+
+## EVALSHA in the context of pipelining
+
+Care should be taken when executing `EVALSHA` in the context of a pipelined
+request, since even in a pipeline the order of execution of commands must be
+guaranteed. If `EVALSHA` returns a `NOSCRIPT` error, the command cannot simply
+be reissued later, otherwise the order of execution would be violated.
+
+The client library implementation should take one of the following approaches:
+
+- Always use plain `EVAL` when in the context of a pipeline.
+
+- Accumulate all the commands to send into the pipeline, then check for `EVAL`
+ commands and use the `SCRIPT EXISTS` command to check if all the scripts are
+ already defined. If not, add `SCRIPT LOAD` commands on top of the pipeline as
+ required, and use `EVALSHA` for all the `EVAL` calls.
+
+## Debugging Lua scripts
+
+Starting with Redis 3.2, Redis has support for native Lua debugging. The Redis
+Lua debugger is a remote debugger consisting of a server, which is Redis itself,
+and a client, which is by default `redis-cli`.
+
+The Lua debugger is described in the [Lua scripts debugging](/topics/ldb)
+section of the Redis documentation.
diff --git a/iredis/data/commands/evalsha.md b/iredis/data/commands/evalsha.md
new file mode 100644
index 0000000..b3bb515
--- /dev/null
+++ b/iredis/data/commands/evalsha.md
@@ -0,0 +1,3 @@
+Evaluates a script cached on the server side by its SHA1 digest. Scripts are
+cached on the server side using the `SCRIPT LOAD` command. The command is
+otherwise identical to `EVAL`.
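+
+An illustrative example (the digest shown is a placeholder; use the value
+actually returned by `SCRIPT LOAD`):
+
+```
+127.0.0.1:6379> SCRIPT LOAD "return 'hello'"
+"1b936e3fe509bcbc9cd0664897bbe8fd0cac101b"
+127.0.0.1:6379> EVALSHA 1b936e3fe509bcbc9cd0664897bbe8fd0cac101b 0
+"hello"
+```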
diff --git a/iredis/data/commands/exec.md b/iredis/data/commands/exec.md
new file mode 100644
index 0000000..b35d04a
--- /dev/null
+++ b/iredis/data/commands/exec.md
@@ -0,0 +1,16 @@
+Executes all previously queued commands in a [transaction][tt] and restores the
+connection state to normal.
+
+[tt]: /topics/transactions
+
+When using `WATCH`, `EXEC` will execute commands only if the watched keys were
+not modified, allowing for a [check-and-set mechanism][ttc].
+
+[ttc]: /topics/transactions#cas
+
+@return
+
+@array-reply: each element being the reply to each of the commands in the atomic
+transaction.
+
+When using `WATCH`, `EXEC` can return a @nil-reply if the execution was aborted.
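+
+An illustrative transcript (the keys `name` and `counter` are arbitrary
+examples; `counter` is assumed not to exist yet):
+
+```
+127.0.0.1:6379> MULTI
+OK
+127.0.0.1:6379> SET name "Sicily"
+QUEUED
+127.0.0.1:6379> INCR counter
+QUEUED
+127.0.0.1:6379> EXEC
+1) OK
+2) (integer) 1
+```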
diff --git a/iredis/data/commands/exists.md b/iredis/data/commands/exists.md
new file mode 100644
index 0000000..83c2043
--- /dev/null
+++ b/iredis/data/commands/exists.md
@@ -0,0 +1,33 @@
+Returns if `key` exists.
+
+Since Redis 3.0.3 it is possible to specify multiple keys instead of a single
+one. In such a case, it returns the total number of keys existing. Note that
+returning 1 or 0 for a single key is just a special case of the variadic usage,
+so the command is completely backward compatible.
+
+The user should be aware that if the same existing key is mentioned in the
+arguments multiple times, it will be counted multiple times. So if `somekey`
+exists, `EXISTS somekey somekey` will return 2.
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if the key exists.
+- `0` if the key does not exist.
+
+Since Redis 3.0.3 the command accepts a variable number of keys and the return
+value is generalized:
+
+- The number of keys existing among the ones specified as arguments. Keys
+ mentioned multiple times and existing are counted multiple times.
+
+@examples
+
+```cli
+SET key1 "Hello"
+EXISTS key1
+EXISTS nosuchkey
+SET key2 "World"
+EXISTS key1 key2 nosuchkey
+```
diff --git a/iredis/data/commands/expire.md b/iredis/data/commands/expire.md
new file mode 100644
index 0000000..e5ca954
--- /dev/null
+++ b/iredis/data/commands/expire.md
@@ -0,0 +1,166 @@
+Set a timeout on `key`. After the timeout has expired, the key will
+automatically be deleted. A key with an associated timeout is often said to be
+_volatile_ in Redis terminology.
+
+The timeout will only be cleared by commands that delete or overwrite the
+contents of the key, including `DEL`, `SET`, `GETSET` and all the `*STORE`
+commands. This means that all the operations that conceptually _alter_ the value
+stored at the key without replacing it with a new one will leave the timeout
+untouched. For instance, incrementing the value of a key with `INCR`, pushing a
+new value into a list with `LPUSH`, or altering the field value of a hash with
+`HSET` are all operations that will leave the timeout untouched.
+
+The timeout can also be cleared, turning the key back into a persistent key,
+using the `PERSIST` command.
+
+If a key is renamed with `RENAME`, the associated time to live is transferred to
+the new key name.
+
+If a key is overwritten by `RENAME`, like in the case of an existing key `Key_A`
+that is overwritten by a call like `RENAME Key_B Key_A`, it does not matter if
+the original `Key_A` had a timeout associated or not, the new key `Key_A` will
+inherit all the characteristics of `Key_B`.
+
+Note that calling `EXPIRE`/`PEXPIRE` with a non-positive timeout or
+`EXPIREAT`/`PEXPIREAT` with a time in the past will result in the key being
+[deleted][del] rather than expired (accordingly, the emitted [key event][ntf]
+will be `del`, not `expired`).
+
+[del]: /commands/del
+[ntf]: /topics/notifications
+
+## Refreshing expires
+
+It is possible to call `EXPIRE` using as argument a key that already has an
+existing expire set. In this case the time to live of a key is _updated_ to the
+new value. There are many useful applications for this, an example is documented
+in the _Navigation session_ pattern section below.
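+
+For example, calling `EXPIRE` again simply replaces the old timeout with the new
+one:
+
+```
+127.0.0.1:6379> SET mykey "Hello"
+OK
+127.0.0.1:6379> EXPIRE mykey 10
+(integer) 1
+127.0.0.1:6379> EXPIRE mykey 100
+(integer) 1
+127.0.0.1:6379> TTL mykey
+(integer) 100
+```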
+
+## Differences in Redis versions prior to 2.1.3
+
+In Redis versions prior to **2.1.3**, altering a key with an expire set using a
+command that alters its value had the effect of removing the key entirely. These
+semantics were needed because of limitations in the replication layer that are
+now fixed.
+
+`EXPIRE` would return 0 and not alter the timeout for a key with a timeout set.
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if the timeout was set.
+- `0` if `key` does not exist.
+
+@examples
+
+```cli
+SET mykey "Hello"
+EXPIRE mykey 10
+TTL mykey
+SET mykey "Hello World"
+TTL mykey
+```
+
+## Pattern: Navigation session
+
+Imagine you have a web service and you are interested in the latest N pages
+_recently_ visited by your users, such that each adjacent page view was not
+performed more than 60 seconds after the previous. Conceptually you may consider
+this set of page views as a _Navigation session_ of your user, which may contain
+interesting information about what kind of products they are currently looking
+for, so that you can recommend related products.
+
+You can easily model this pattern in Redis using the following strategy: every
+time the user does a page view you call the following commands:
+
+```
+MULTI
+RPUSH pageviews.user:<userid> http://.....
+EXPIRE pageviews.user:<userid> 60
+EXEC
+```
+
+If the user is idle for more than 60 seconds, the key will be deleted, and only
+subsequent page views performed less than 60 seconds apart will be
+recorded.
+
+This pattern is easily modified to use counters using `INCR` instead of lists
+using `RPUSH`.
+
+# Appendix: Redis expires
+
+## Keys with an expire
+
+Normally Redis keys are created without an associated time to live. The key will
+simply live forever, unless it is removed by the user in an explicit way, for
+instance using the `DEL` command.
+
+The `EXPIRE` family of commands is able to associate an expire to a given key,
+at the cost of some additional memory used by the key. When a key has an expire
+set, Redis will make sure to remove the key when the specified amount of time
+has elapsed.
+
+The key time to live can be updated or entirely removed using the `EXPIRE` and
+`PERSIST` command (or other strictly related commands).
+
+## Expire accuracy
+
+In Redis 2.4 the expire might not be pin-point accurate, and it could be between
+zero and one second out.
+
+Since Redis 2.6 the expire error is from 0 to 1 milliseconds.
+
+## Expires and persistence
+
+The expire information for a key is stored as an absolute Unix timestamp (in
+milliseconds in case of Redis version 2.6 or greater). This means that the time
+is flowing
+even when the Redis instance is not active.
+
+For expires to work well, the computer time must be stable. If you move an RDB
+file between two computers with a big desync in their clocks, funny things may
+happen (like all the keys being expired at loading time).
+
+Even running instances will always check the computer clock, so for instance if
+you set a key with a time to live of 1000 seconds, and then set your computer
+time 2000 seconds in the future, the key will be expired immediately, instead of
+lasting for 1000 seconds.
+
+## How Redis expires keys
+
+Redis keys are expired in two ways: a passive way, and an active way.
+
+A key is passively expired simply when some client tries to access it, and the
+key is found to be timed out.
+
+Of course this is not enough as there are expired keys that will never be
+accessed again. These keys should be expired anyway, so periodically Redis tests
+a few keys at random among keys with an expire set. All the keys that are
+already expired are deleted from the keyspace.
+
+Specifically this is what Redis does 10 times per second:
+
+1. Test 20 random keys from the set of keys with an associated expire.
+2. Delete all the keys found expired.
+3. If more than 25% of keys were expired, start again from step 1.
+
+This is a trivial probabilistic algorithm: basically the assumption is that our
+sample is representative of the whole key space, and we continue to expire until
+the percentage of keys that are likely to be expired is under 25%.
+
+This means that at any given moment the maximum number of already expired keys
+still using memory is at most equal to the number of write operations per second
+divided by 4.
+
+## How expires are handled in the replication link and AOF file
+
+In order to obtain a correct behavior without sacrificing consistency, when a
+key expires, a `DEL` operation is synthesized in the AOF file and propagated to
+all the attached replica nodes. This way the expiration process is centralized in
+the master instance, and there is no chance of consistency errors.
+
+However while the replicas connected to a master will not expire keys
+independently (but will wait for the `DEL` coming from the master), they'll
+still take the full state of the expires existing in the dataset, so when a
+replica is elected to master it will be able to expire the keys independently,
+fully acting as a master.
diff --git a/iredis/data/commands/expireat.md b/iredis/data/commands/expireat.md
new file mode 100644
index 0000000..92c4e9a
--- /dev/null
+++ b/iredis/data/commands/expireat.md
@@ -0,0 +1,31 @@
+`EXPIREAT` has the same effect and semantic as `EXPIRE`, but instead of
+specifying the number of seconds representing the TTL (time to live), it takes
+an absolute [Unix timestamp][hewowu] (seconds since January 1, 1970). A
+timestamp in the past will delete the key immediately.
+
+[hewowu]: http://en.wikipedia.org/wiki/Unix_time
+
+Please refer to the documentation of `EXPIRE` for the specific semantics of the
+command.
+
+## Background
+
+`EXPIREAT` was introduced in order to convert relative timeouts to absolute
+timeouts for the AOF persistence mode. Of course, it can be used directly to
+specify that a given key should expire at a given time in the future.
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if the timeout was set.
+- `0` if `key` does not exist.
+
+@examples
+
+```cli
+SET mykey "Hello"
+EXISTS mykey
+EXPIREAT mykey 1293840000
+EXISTS mykey
+```
diff --git a/iredis/data/commands/flushall.md b/iredis/data/commands/flushall.md
new file mode 100644
index 0000000..8cc7560
--- /dev/null
+++ b/iredis/data/commands/flushall.md
@@ -0,0 +1,19 @@
+Delete all the keys of all the existing databases, not just the currently
+selected one. This command never fails.
+
+The time-complexity for this operation is O(N), N being the number of keys in
+all existing databases.
+
+## `FLUSHALL ASYNC` (Redis 4.0.0 or greater)
+
+Redis is now able to delete keys in the background in a different thread without
+blocking the server. An `ASYNC` option was added to `FLUSHALL` and `FLUSHDB` in
+order to let the entire dataset or a single database be freed asynchronously.
+
+Asynchronous `FLUSHALL` and `FLUSHDB` commands only delete keys that were
+present at the time the command was invoked. Keys created during an asynchronous
+flush will be unaffected.
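+
+A brief sketch of both forms:
+
+```
+127.0.0.1:6379> FLUSHALL
+OK
+127.0.0.1:6379> FLUSHALL ASYNC
+OK
+```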
+
+@return
+
+@simple-string-reply
diff --git a/iredis/data/commands/flushdb.md b/iredis/data/commands/flushdb.md
new file mode 100644
index 0000000..8c76001
--- /dev/null
+++ b/iredis/data/commands/flushdb.md
@@ -0,0 +1,12 @@
+Delete all the keys of the currently selected DB. This command never fails.
+
+The time-complexity for this operation is O(N), N being the number of keys in
+the database.
+
+## `FLUSHDB ASYNC` (Redis 4.0.0 or greater)
+
+See `FLUSHALL` for documentation.
+
+@return
+
+@simple-string-reply
diff --git a/iredis/data/commands/geoadd.md b/iredis/data/commands/geoadd.md
new file mode 100644
index 0000000..32751db
--- /dev/null
+++ b/iredis/data/commands/geoadd.md
@@ -0,0 +1,56 @@
+Adds the specified geospatial items (longitude, latitude, name) to the specified
+key. Data is stored into the key as a sorted set, in a way that makes it
+possible to later retrieve items using a query by radius with the `GEORADIUS` or
+`GEORADIUSBYMEMBER` commands.
+
+The command takes arguments in the standard format x,y so the longitude must be
+specified before the latitude. There are limits to the coordinates that can be
+indexed: areas very near to the poles are not indexable. The exact limits, as
+specified by EPSG:900913 / EPSG:3785 / OSGEO:41001 are the following:
+
+- Valid longitudes are from -180 to 180 degrees.
+- Valid latitudes are from -85.05112878 to 85.05112878 degrees.
+
+The command will report an error when the user attempts to index coordinates
+outside the specified ranges.
+
+**Note:** there is no **GEODEL** command because you can use `ZREM` in order to
+remove elements. The Geo index structure is just a sorted set.
+
+## How does it work?
+
+The way the sorted set is populated is using a technique called
+[Geohash](https://en.wikipedia.org/wiki/Geohash). Latitude and Longitude bits
+are interleaved in order to form a unique 52 bit integer. We know that a sorted
+set double score can represent a 52 bit integer without losing precision.
+
+This format allows for radius querying by checking the 1+8 areas needed to cover
+the whole radius, and discarding elements outside the radius. The areas are
+checked by calculating the range of the box covered by removing enough bits from
+the less significant part of the sorted set score, and computing the score range
+to query in the sorted set for each area.
+
+## What Earth model does it use?
+
+It just assumes that the Earth is a sphere, since the used distance formula is
+the Haversine formula. This formula is only an approximation when applied to the
+Earth, which is not a perfect sphere. The introduced errors are not an issue
+when used in the context of social network sites that need to query by radius
+and most other applications. However in the worst case the error may be up to
+0.5%, so you may want to consider other systems for error-critical applications.
+
+@return
+
+@integer-reply, specifically:
+
+- The number of elements added to the sorted set, not including elements already
+ existing for which the score was updated.
+
+@examples
+
+```cli
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+GEODIST Sicily Palermo Catania
+GEORADIUS Sicily 15 37 100 km
+GEORADIUS Sicily 15 37 200 km
+```
diff --git a/iredis/data/commands/geodecode.md b/iredis/data/commands/geodecode.md
new file mode 100644
index 0000000..f083fd9
--- /dev/null
+++ b/iredis/data/commands/geodecode.md
@@ -0,0 +1,34 @@
+Geospatial Redis commands encode positions of objects in a single 52 bit
+integer, using a technique called geohash. Those 52 bit integers are:
+
+1. Returned by `GEOENCODE` as its return value.
+2. Used by `GEOADD` as sorted set scores of members.
+
+The `GEODECODE` command is able to translate the 52 bit integers back into a
+position expressed as longitude and latitude. The command also returns the
+corners of the box that the 52 bit integer identifies on the earth surface,
+since each 52 bit integer actually represents not a single point, but a small
+area.
+
+This command's usefulness is limited to the rare situations where you want to
+fetch raw data from the sorted set, for example with `ZRANGE`, and later need to
+decode the scores into positions. The other obvious use is debugging.
+
+@return
+
+@array-reply, specifically:
+
+The command returns an array of three elements. Each element of the main array
+is an array of two elements, specifying a longitude and a latitude. So the
+returned value is in the following form:
+
+- center-longitude, center-latitude
+- min-longitude, min-latitude
+- max-longitude, max-latitude
+
+@examples
+
+```cli
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+ZSCORE Sicily "Palermo"
+GEODECODE 3479099956230698
+```
diff --git a/iredis/data/commands/geodist.md b/iredis/data/commands/geodist.md
new file mode 100644
index 0000000..3f32a89
--- /dev/null
+++ b/iredis/data/commands/geodist.md
@@ -0,0 +1,35 @@
+Return the distance between two members in the geospatial index represented by
+the sorted set.
+
+Given a sorted set representing a geospatial index, populated using the `GEOADD`
+command, the command returns the distance between the two specified members in
+the specified unit.
+
+If one or both the members are missing, the command returns NULL.
+
+The unit must be one of the following, and defaults to meters:
+
+- **m** for meters.
+- **km** for kilometers.
+- **mi** for miles.
+- **ft** for feet.
+
+The distance is computed assuming that the Earth is a perfect sphere, so errors
+up to 0.5% are possible in edge cases.
+
+@return
+
+@bulk-string-reply, specifically:
+
+The command returns the distance as a double (represented as a string) in the
+specified unit, or NULL if one or both the elements are missing.
+
+@examples
+
+```cli
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+GEODIST Sicily Palermo Catania
+GEODIST Sicily Palermo Catania km
+GEODIST Sicily Palermo Catania mi
+GEODIST Sicily Foo Bar
+```
diff --git a/iredis/data/commands/geoencode.md b/iredis/data/commands/geoencode.md
new file mode 100644
index 0000000..5e10af6
--- /dev/null
+++ b/iredis/data/commands/geoencode.md
@@ -0,0 +1,57 @@
+Geospatial Redis commands encode positions of objects in a single 52 bit
+integer, using a technique called geohash. The encoding is further explained in
+the `GEODECODE` and `GEOADD` documentation. The `GEOENCODE` command, documented
+in this page, is able to convert a longitude and latitude pair into such 52 bit
+integer, which is used as the _score_ for the sorted set members representing
+geopositional information.
+
+Normally you don't need to use this command, unless you plan to implement low
+level code on the client side interacting with the Redis geo commands. This
+command may also be useful for debugging purposes.
+
+`GEOENCODE` takes as input:
+
+1. The longitude and latitude of a point on the Earth surface.
+2. Optionally, a radius represented by an integer and a unit.
+
+And returns a set of information, including the representation of the position
+as a 52 bit integer, the min and max corners of the bounding box represented by
+the geo hash, the center point in the area covered by the geohash integer, and
+finally the two sorted set scores to query in order to retrieve all the elements
+included in the geohash area.
+
+The radius optionally provided to the command is used in order to compute the
+two scores returned by the command for range query purposes. Moreover the
+returned geohash integer will only have the most significant bits set, according
+to the number of bits needed to approximate the specified radius.
+
+## Use case
+
+As already specified, this command is mostly only needed for debugging. However
+there are actual use cases: when the same areas must be queried multiple times,
+or with a different granularity or area shape compared to what Redis `GEORADIUS`
+is able to provide, the client may use this command to implement part of the
+logic on the client side. Score ranges
+representing given areas can be cached client side and used to retrieve elements
+directly using `ZRANGEBYSCORE`.
+
+@return
+
+@array-reply, specifically:
+
+The command returns an array of five elements in the following order:
+
+- The 52 bit geohash
+- min-longitude, min-latitude of the area identified
+- max-longitude, max-latitude of the area identified
+- center-longitude, center-latitude
+- min-score and max-score of the sorted set to retrieve the members inside the
+ area
+
+@examples
+
+```cli
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+ZSCORE Sicily "Palermo"
+GEOENCODE 13.361389 38.115556 100 km
+```
diff --git a/iredis/data/commands/geohash.md b/iredis/data/commands/geohash.md
new file mode 100644
index 0000000..f091912
--- /dev/null
+++ b/iredis/data/commands/geohash.md
@@ -0,0 +1,39 @@
+Return valid [Geohash](https://en.wikipedia.org/wiki/Geohash) strings
+representing the position of one or more elements in a sorted set value
+representing a geospatial index (where elements were added using `GEOADD`).
+
+Normally Redis represents positions of elements using a variation of the Geohash
+technique where positions are encoded using 52 bit integers. The encoding is
+also different compared to the standard because the initial min and max
+coordinates used during the encoding and decoding process are different. This
+command however **returns a standard Geohash** in the form of a string as
+described in the [Wikipedia article](https://en.wikipedia.org/wiki/Geohash) and
+compatible with the [geohash.org](http://geohash.org) web site.
+
+## Geohash string properties
+
+The command returns 11-character Geohash strings, so no precision is lost
+compared to the Redis internal 52 bit representation. The returned Geohashes
+have the following properties:
+
+1. They can be shortened by removing characters from the right. This loses
+ precision but still points to the same area.
+2. It is possible to use them in `geohash.org` URLs such as
+ `http://geohash.org/<geohash-string>`. This is an
+ [example of such URL](http://geohash.org/sqdtr74hyu0).
+3. Strings with a similar prefix are nearby, but the contrary is not true: it is
+ possible that strings with different prefixes are nearby too.
+
+@return
+
+@array-reply, specifically:
+
+The command returns an array where each element is the Geohash corresponding to
+each member name passed as argument to the command.
+
+@examples
+
+```cli
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+GEOHASH Sicily Palermo Catania
+```
diff --git a/iredis/data/commands/geopos.md b/iredis/data/commands/geopos.md
new file mode 100644
index 0000000..14b941c
--- /dev/null
+++ b/iredis/data/commands/geopos.md
@@ -0,0 +1,28 @@
+Return the positions (longitude,latitude) of all the specified members of the
+geospatial index represented by the sorted set at _key_.
+
+Given a sorted set representing a geospatial index, populated using the `GEOADD`
+command, it is often useful to obtain back the coordinates of specified members.
+When the geospatial index is populated via `GEOADD` the coordinates are
+converted into a 52 bit geohash, so the coordinates returned may not be exactly
+the ones used in order to add the elements; small errors may be introduced.
+
+The command can accept a variable number of arguments so it always returns an
+array of positions even when a single element is specified.
+
+@return
+
+@array-reply, specifically:
+
+The command returns an array where each element is a two elements array
+representing longitude and latitude (x,y) of each member name passed as argument
+to the command.
+
+Non existing elements are reported as NULL elements of the array.
+
+@examples
+
+```cli
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+GEOPOS Sicily Palermo Catania NonExisting
+```
diff --git a/iredis/data/commands/georadius.md b/iredis/data/commands/georadius.md
new file mode 100644
index 0000000..a1dd20d
--- /dev/null
+++ b/iredis/data/commands/georadius.md
@@ -0,0 +1,103 @@
+Return the members of a sorted set populated with geospatial information using
+`GEOADD`, which are within the borders of the area specified with the center
+location and the maximum distance from the center (the radius).
+
+This manual page also covers the `GEORADIUS_RO` and `GEORADIUSBYMEMBER_RO`
+variants (see the section below for more information).
+
+The common use case for this command is to retrieve geospatial items near a
+specified point not farther than a given amount of meters (or other units). This
+allows, for example, suggesting nearby places to mobile users of an application.
+
+The radius is specified in one of the following units:
+
+- **m** for meters.
+- **km** for kilometers.
+- **mi** for miles.
+- **ft** for feet.
+
+The command optionally returns additional information using the following
+options:
+
+- `WITHDIST`: Also return the distance of the returned items from the specified
+ center. The distance is returned in the same unit as the unit specified as the
+ radius argument of the command.
+- `WITHCOORD`: Also return the longitude,latitude coordinates of the matching
+ items.
+- `WITHHASH`: Also return the raw geohash-encoded sorted set score of the item,
+ in the form of a 52 bit unsigned integer. This is only useful for low level
+ hacks or debugging and is otherwise of little interest for the general user.
+
+The command default is to return unsorted items. Two different sorting methods
+can be invoked using the following two options:
+
+- `ASC`: Sort returned items from the nearest to the farthest, relative to the
+ center.
+- `DESC`: Sort returned items from the farthest to the nearest, relative to the
+ center.
+
+By default all the matching items are returned. It is possible to limit the
+results to the first N matching items by using the **COUNT `<count>`** option.
+However note that internally the command needs to perform an effort proportional
+to the number of items matching the specified area, so querying very large areas
+with a very small `COUNT` option may be slow even if just a few results are
+returned. On the other hand `COUNT` can be a very effective way to reduce
+bandwidth usage if normally just the first results are used.
+
+By default the command returns the items to the client. It is possible to store
+the results with one of these options:
+
+- `STORE`: Store the items in a sorted set populated with their geospatial
+ information.
+- `STOREDIST`: Store the items in a sorted set populated with their distance
+ from the center as a floating point number, in the same unit specified in the
+ radius.
+
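+For instance, a sketch (assuming a `Sicily` key populated as in the examples
+below, and a destination key named `nearby`): when `STORE` is used the command
+returns the number of stored items instead of the items themselves.
+
+```
+127.0.0.1:6379> GEORADIUS Sicily 15 37 200 km STORE nearby
+(integer) 2
+127.0.0.1:6379> ZRANGE nearby 0 -1
+1) "Palermo"
+2) "Catania"
+```
+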
+@return
+
+@array-reply, specifically:
+
+- Without any `WITH` option specified, the command just returns a linear array
+ like ["New York","Milan","Paris"].
+- If `WITHCOORD`, `WITHDIST` or `WITHHASH` options are specified, the command
+ returns an array of arrays, where each sub-array represents a single item.
+
+When additional information is returned as an array of arrays for each item, the
+first item in the sub-array is always the name of the returned item. The other
+information is returned in the following order as successive elements of the
+sub-array.
+
+1. The distance from the center as a floating point number, in the same unit
+ specified in the radius.
+2. The geohash integer.
+3. The coordinates as a two items x,y array (longitude,latitude).
+
+So for example the command `GEORADIUS Sicily 15 37 200 km WITHCOORD WITHDIST`
+will return each item in the following way:
+
+ ["Palermo","190.4424",["13.361389338970184","38.115556395496299"]]
+
+## Read only variants
+
+Since `GEORADIUS` and `GEORADIUSBYMEMBER` have a `STORE` and `STOREDIST` option
+they are technically flagged as writing commands in the Redis command table. For
+this reason read-only replicas will flag them, and Redis Cluster replicas will
+redirect them to the master instance even if the connection is in read only mode
+(See the `READONLY` command of Redis Cluster).
+
+Breaking the compatibility with the past was considered but rejected, at least
+for Redis 4.0, so instead two read only variants of the commands were added.
+They are exactly like the original commands but refuse the `STORE` and
+`STOREDIST` options. The two variants are called `GEORADIUS_RO` and
+`GEORADIUSBYMEMBER_RO`, and can safely be used in replicas.
+
+Both commands were introduced in Redis 3.2.10 and Redis 4.0.0 respectively.
+
+@examples
+
+```cli
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+GEORADIUS Sicily 15 37 200 km WITHDIST
+GEORADIUS Sicily 15 37 200 km WITHCOORD
+GEORADIUS Sicily 15 37 200 km WITHDIST WITHCOORD
+```
diff --git a/iredis/data/commands/georadiusbymember.md b/iredis/data/commands/georadiusbymember.md
new file mode 100644
index 0000000..88c1be4
--- /dev/null
+++ b/iredis/data/commands/georadiusbymember.md
@@ -0,0 +1,21 @@
+This command is exactly like `GEORADIUS` with the sole difference that instead
+of taking, as the center of the area to query, a longitude and latitude value,
+it takes the name of a member already existing inside the geospatial index
+represented by the sorted set.
+
+The position of the specified member is used as the center of the query.
+
+Please check the example below and the `GEORADIUS` documentation for more
+information about the command and its options.
+
+Note that `GEORADIUSBYMEMBER_RO` is also available since Redis 3.2.10 and Redis
+4.0.0 in order to provide a read-only command that can be used in replicas. See
+the `GEORADIUS` page for more information.
+
+@examples
+
+```cli
+GEOADD Sicily 13.583333 37.316667 "Agrigento"
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+GEORADIUSBYMEMBER Sicily Agrigento 100 km
+```
diff --git a/iredis/data/commands/get.md b/iredis/data/commands/get.md
new file mode 100644
index 0000000..423aab3
--- /dev/null
+++ b/iredis/data/commands/get.md
@@ -0,0 +1,15 @@
+Get the value of `key`. If the key does not exist the special value `nil` is
+returned. An error is returned if the value stored at `key` is not a string,
+because `GET` only handles string values.
+
+@return
+
+@bulk-string-reply: the value of `key`, or `nil` when `key` does not exist.
+
+@examples
+
+```cli
+GET nonexisting
+SET mykey "Hello"
+GET mykey
+```
diff --git a/iredis/data/commands/getbit.md b/iredis/data/commands/getbit.md
new file mode 100644
index 0000000..2d05d12
--- /dev/null
+++ b/iredis/data/commands/getbit.md
@@ -0,0 +1,19 @@
+Returns the bit value at _offset_ in the string value stored at _key_.
+
+When _offset_ is beyond the string length, the string is assumed to be a
+contiguous space with 0 bits. When _key_ does not exist it is assumed to be an
+empty string, so _offset_ is always out of range and the value is also assumed
+to be a contiguous space with 0 bits.
+
+@return
+
+@integer-reply: the bit value stored at _offset_.
+
+@examples
+
+```cli
+SETBIT mykey 7 1
+GETBIT mykey 0
+GETBIT mykey 7
+GETBIT mykey 100
+```
diff --git a/iredis/data/commands/getrange.md b/iredis/data/commands/getrange.md
new file mode 100644
index 0000000..d25ac7a
--- /dev/null
+++ b/iredis/data/commands/getrange.md
@@ -0,0 +1,24 @@
+**Warning**: this command was renamed to `GETRANGE`; it is called `SUBSTR` in
+Redis versions `<= 2.0`.
+
+Returns the substring of the string value stored at `key`, determined by the
+offsets `start` and `end` (both are inclusive). Negative offsets can be used in
+order to provide an offset starting from the end of the string. So -1 means the
+last character, -2 the penultimate and so forth.
+
+The function handles out of range requests by limiting the resulting range to
+the actual length of the string.
+
+@return
+
+@bulk-string-reply
+
+@examples
+
+```cli
+SET mykey "This is a string"
+GETRANGE mykey 0 3
+GETRANGE mykey -3 -1
+GETRANGE mykey 0 -1
+GETRANGE mykey 10 100
+```
diff --git a/iredis/data/commands/getset.md b/iredis/data/commands/getset.md
new file mode 100644
index 0000000..20b86a2
--- /dev/null
+++ b/iredis/data/commands/getset.md
@@ -0,0 +1,28 @@
+Atomically sets `key` to `value` and returns the old value stored at `key`.
+Returns an error when `key` exists but does not hold a string value.
+
+## Design pattern
+
+`GETSET` can be used together with `INCR` for counting with atomic reset. For
+example: a process may call `INCR` against the key `mycounter` every time some
+event occurs, but from time to time we need to get the value of the counter and
+reset it to zero atomically. This can be done using `GETSET mycounter "0"`:
+
+```cli
+INCR mycounter
+GETSET mycounter "0"
+GET mycounter
+```
+
+@return
+
+@bulk-string-reply: the old value stored at `key`, or `nil` when `key` did not
+exist.
+
+@examples
+
+```cli
+SET mykey "Hello"
+GETSET mykey "World"
+GET mykey
+```
diff --git a/iredis/data/commands/hdel.md b/iredis/data/commands/hdel.md
new file mode 100644
index 0000000..ce1d184
--- /dev/null
+++ b/iredis/data/commands/hdel.md
@@ -0,0 +1,24 @@
+Removes the specified fields from the hash stored at `key`. Specified fields
+that do not exist within this hash are ignored. If `key` does not exist, it is
+treated as an empty hash and this command returns `0`.
+
+@return
+
+@integer-reply: the number of fields that were removed from the hash, not
+including specified but non existing fields.
+
+@history
+
+- `>= 2.4`: Accepts multiple `field` arguments. Redis versions older than 2.4
+ can only remove a field per call.
+
+ To remove multiple fields from a hash in an atomic fashion in earlier
+ versions, use a `MULTI` / `EXEC` block.
+
+@examples
+
+```cli
+HSET myhash field1 "foo"
+HDEL myhash field1
+HDEL myhash field2
+```
diff --git a/iredis/data/commands/hello.md b/iredis/data/commands/hello.md
new file mode 100644
index 0000000..b7a9dc4
--- /dev/null
+++ b/iredis/data/commands/hello.md
@@ -0,0 +1,46 @@
+Switch the connection to a different protocol. Redis versions 6 or greater are
+able to support two protocols, the old protocol, RESP2, and a new one introduced
+with Redis 6, RESP3. RESP3 has certain advantages since when the connection is
+in this mode, Redis is able to reply with more semantical replies: for instance
+`HGETALL` will return a _map type_, so a client library implementation no longer
+needs to know in advance that it must translate the array into a hash before
+returning it to the caller. For a full coverage of RESP3 please
+[check this repository](https://github.com/antirez/resp3).
+
+Redis 6 connections start in RESP2 mode, so clients implementing RESP2 do not
+need to change (nor are there short term plans to drop support for RESP2).
+Clients that want to handshake into RESP3 mode need to call the `HELLO` command,
+using "3" as its first argument.
+
+ > HELLO 3
+ 1# "server" => "redis"
+ 2# "version" => "6.0.0"
+ 3# "proto" => (integer) 3
+ 4# "id" => (integer) 10
+ 5# "mode" => "standalone"
+ 6# "role" => "master"
+ 7# "modules" => (empty array)
+
+The `HELLO` command has a useful reply that states a number of facts about the
+server: the exact version, the set of modules loaded, the client ID, the
+replication role and so forth. Because of that, and given that the `HELLO`
+command also works with "2" as an argument, either to downgrade the protocol
+back to version 2 or just to get the reply from the server without switching
+the protocol, client library authors may consider using this command instead of
+the canonical `PING` when setting up the connection.
+
+This command accepts two optional arguments:
+
+- `AUTH <username> <password>`: directly authenticate the connection in addition
+  to switching to the specified protocol. In this way there is no need to call
+  `AUTH` before `HELLO` when setting up new connections. Note that the username
+  can be set to "default" in order to authenticate against a server that does
+  not use ACLs, but the simpler `requirepass` mechanism of Redis before
+  version 6.
+- `SETNAME <clientname>`: this is equivalent to also calling `CLIENT SETNAME`.
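+
+For example, a client may switch to RESP3, authenticate and set its connection
+name in a single call (the password and client name below are just
+placeholders):
+
+    > HELLO 3 AUTH default mypassword SETNAME myclient
+
+The reply is the same map of server properties shown above.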
+
+@return
+
+@array-reply: a list of server properties. The reply is a map instead of an
+array when RESP3 is selected. The command returns an error if the protocol
+requested does not exist.
diff --git a/iredis/data/commands/hexists.md b/iredis/data/commands/hexists.md
new file mode 100644
index 0000000..4581b63
--- /dev/null
+++ b/iredis/data/commands/hexists.md
@@ -0,0 +1,16 @@
+Returns whether `field` is an existing field in the hash stored at `key`.
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if the hash contains `field`.
+- `0` if the hash does not contain `field`, or `key` does not exist.
+
+@examples
+
+```cli
+HSET myhash field1 "foo"
+HEXISTS myhash field1
+HEXISTS myhash field2
+```
diff --git a/iredis/data/commands/hget.md b/iredis/data/commands/hget.md
new file mode 100644
index 0000000..12556a1
--- /dev/null
+++ b/iredis/data/commands/hget.md
@@ -0,0 +1,14 @@
+Returns the value associated with `field` in the hash stored at `key`.
+
+@return
+
+@bulk-string-reply: the value associated with `field`, or `nil` when `field` is
+not present in the hash or `key` does not exist.
+
+@examples
+
+```cli
+HSET myhash field1 "foo"
+HGET myhash field1
+HGET myhash field2
+```
diff --git a/iredis/data/commands/hgetall.md b/iredis/data/commands/hgetall.md
new file mode 100644
index 0000000..42e95de
--- /dev/null
+++ b/iredis/data/commands/hgetall.md
@@ -0,0 +1,16 @@
+Returns all fields and values of the hash stored at `key`. In the returned
+value, every field name is followed by its value, so the length of the reply is
+twice the size of the hash.
+
+@return
+
+@array-reply: list of fields and their values stored in the hash, or an empty
+list when `key` does not exist.
+
+@examples
+
+```cli
+HSET myhash field1 "Hello"
+HSET myhash field2 "World"
+HGETALL myhash
+```
diff --git a/iredis/data/commands/hincrby.md b/iredis/data/commands/hincrby.md
new file mode 100644
index 0000000..71f53cb
--- /dev/null
+++ b/iredis/data/commands/hincrby.md
@@ -0,0 +1,22 @@
+Increments the number stored at `field` in the hash stored at `key` by
+`increment`. If `key` does not exist, a new key holding a hash is created. If
+`field` does not exist the value is set to `0` before the operation is
+performed.
+
+The range of values supported by `HINCRBY` is limited to 64 bit signed integers.
+
+@return
+
+@integer-reply: the value at `field` after the increment operation.
+
+@examples
+
+Since the `increment` argument is signed, both increment and decrement
+operations can be performed:
+
+```cli
+HSET myhash field 5
+HINCRBY myhash field 1
+HINCRBY myhash field -1
+HINCRBY myhash field -10
+```
diff --git a/iredis/data/commands/hincrbyfloat.md b/iredis/data/commands/hincrbyfloat.md
new file mode 100644
index 0000000..fe58beb
--- /dev/null
+++ b/iredis/data/commands/hincrbyfloat.md
@@ -0,0 +1,33 @@
+Increment the specified `field` of a hash stored at `key`, representing a
+floating point number, by the specified `increment`. If the increment value is
+negative, the result is to have the hash field value **decremented** instead of
+incremented. If the field does not exist, it is set to `0` before performing the
+operation. An error is returned if one of the following conditions occurs:
+
+- The field contains a value of the wrong type (not a string).
+- The current field content or the specified increment are not parsable as a
+ double precision floating point number.
+
+The exact behavior of this command is identical to the one of the `INCRBYFLOAT`
+command; please refer to the documentation of `INCRBYFLOAT` for further
+information.
+
+@return
+
+@bulk-string-reply: the value of `field` after the increment.
+
+@examples
+
+```cli
+HSET mykey field 10.50
+HINCRBYFLOAT mykey field 0.1
+HINCRBYFLOAT mykey field -5
+HSET mykey field 5.0e3
+HINCRBYFLOAT mykey field 2.0e2
+```
+
+## Implementation details
+
+The command is always propagated in the replication link and the Append Only
+File as a `HSET` operation, so that differences in the underlying floating point
+math implementation will not be sources of inconsistency.
diff --git a/iredis/data/commands/hkeys.md b/iredis/data/commands/hkeys.md
new file mode 100644
index 0000000..42fd82c
--- /dev/null
+++ b/iredis/data/commands/hkeys.md
@@ -0,0 +1,14 @@
+Returns all field names in the hash stored at `key`.
+
+@return
+
+@array-reply: list of fields in the hash, or an empty list when `key` does not
+exist.
+
+@examples
+
+```cli
+HSET myhash field1 "Hello"
+HSET myhash field2 "World"
+HKEYS myhash
+```
diff --git a/iredis/data/commands/hlen.md b/iredis/data/commands/hlen.md
new file mode 100644
index 0000000..2c18193
--- /dev/null
+++ b/iredis/data/commands/hlen.md
@@ -0,0 +1,13 @@
+Returns the number of fields contained in the hash stored at `key`.
+
+@return
+
+@integer-reply: number of fields in the hash, or `0` when `key` does not exist.
+
+@examples
+
+```cli
+HSET myhash field1 "Hello"
+HSET myhash field2 "World"
+HLEN myhash
+```
diff --git a/iredis/data/commands/hmget.md b/iredis/data/commands/hmget.md
new file mode 100644
index 0000000..14b8733
--- /dev/null
+++ b/iredis/data/commands/hmget.md
@@ -0,0 +1,17 @@
+Returns the values associated with the specified `fields` in the hash stored at
+`key`.
+
+For every `field` that does not exist in the hash, a `nil` value is returned.
+Because non-existing keys are treated as empty hashes, running `HMGET` against a
+non-existing `key` will return a list of `nil` values.
+
+@return
+
+@array-reply: list of values associated with the given fields, in the same order
+as they are requested.
+
+@examples
+
+```cli
+HSET myhash field1 "Hello"
+HSET myhash field2 "World"
+HMGET myhash field1 field2 nofield
+```
diff --git a/iredis/data/commands/hmset.md b/iredis/data/commands/hmset.md
new file mode 100644
index 0000000..f66a364
--- /dev/null
+++ b/iredis/data/commands/hmset.md
@@ -0,0 +1,18 @@
+Sets the specified fields to their respective values in the hash stored at
+`key`. This command overwrites any specified fields already existing in the
+hash. If `key` does not exist, a new key holding a hash is created.
+
+As of Redis 4.0.0, `HMSET` is considered deprecated. Please use `HSET` in new
+code.
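+
+Since `HSET` is variadic as of the same release, the `HMSET` call shown in the
+example below can be written equivalently in new code as:
+
+```
+HSET myhash field1 "Hello" field2 "World"
+```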
+
+@return
+
+@simple-string-reply
+
+@examples
+
+```cli
+HMSET myhash field1 "Hello" field2 "World"
+HGET myhash field1
+HGET myhash field2
+```
diff --git a/iredis/data/commands/hscan.md b/iredis/data/commands/hscan.md
new file mode 100644
index 0000000..9ab2616
--- /dev/null
+++ b/iredis/data/commands/hscan.md
@@ -0,0 +1 @@
+See `SCAN` for `HSCAN` documentation.
diff --git a/iredis/data/commands/hset.md b/iredis/data/commands/hset.md
new file mode 100644
index 0000000..a975947
--- /dev/null
+++ b/iredis/data/commands/hset.md
@@ -0,0 +1,17 @@
+Sets `field` in the hash stored at `key` to `value`. If `key` does not exist, a
+new key holding a hash is created. If `field` already exists in the hash, it is
+overwritten.
+
+As of Redis 4.0.0, `HSET` is variadic and allows for multiple `field`/`value`
+pairs.
+
+@return
+
+@integer-reply: The number of fields that were added.
+
+@examples
+
+```cli
+HSET myhash field1 "Hello"
+HGET myhash field1
+```
diff --git a/iredis/data/commands/hsetnx.md b/iredis/data/commands/hsetnx.md
new file mode 100644
index 0000000..1926178
--- /dev/null
+++ b/iredis/data/commands/hsetnx.md
@@ -0,0 +1,18 @@
+Sets `field` in the hash stored at `key` to `value`, only if `field` does not
+yet exist. If `key` does not exist, a new key holding a hash is created. If
+`field` already exists, this operation has no effect.
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if `field` is a new field in the hash and `value` was set.
+- `0` if `field` already exists in the hash and no operation was performed.
+
+@examples
+
+```cli
+HSETNX myhash field "Hello"
+HSETNX myhash field "World"
+HGET myhash field
+```
diff --git a/iredis/data/commands/hstrlen.md b/iredis/data/commands/hstrlen.md
new file mode 100644
index 0000000..9a96337
--- /dev/null
+++ b/iredis/data/commands/hstrlen.md
@@ -0,0 +1,16 @@
+Returns the string length of the value associated with `field` in the hash
+stored at `key`. If the `key` or the `field` do not exist, 0 is returned.
+
+@return
+
+@integer-reply: the string length of the value associated with `field`, or zero
+when `field` is not present in the hash or `key` does not exist at all.
+
+@examples
+
+```cli
+HMSET myhash f1 HelloWorld f2 99 f3 -256
+HSTRLEN myhash f1
+HSTRLEN myhash f2
+HSTRLEN myhash f3
+```
diff --git a/iredis/data/commands/hvals.md b/iredis/data/commands/hvals.md
new file mode 100644
index 0000000..1dbe950
--- /dev/null
+++ b/iredis/data/commands/hvals.md
@@ -0,0 +1,14 @@
+Returns all values in the hash stored at `key`.
+
+@return
+
+@array-reply: list of values in the hash, or an empty list when `key` does not
+exist.
+
+@examples
+
+```cli
+HSET myhash field1 "Hello"
+HSET myhash field2 "World"
+HVALS myhash
+```
diff --git a/iredis/data/commands/incr.md b/iredis/data/commands/incr.md
new file mode 100644
index 0000000..8d8b012
--- /dev/null
+++ b/iredis/data/commands/incr.md
@@ -0,0 +1,156 @@
+Increments the number stored at `key` by one. If the key does not exist, it is
+set to `0` before performing the operation. An error is returned if the key
+contains a value of the wrong type or contains a string that can not be
+represented as integer. This operation is limited to 64 bit signed integers.
+
+**Note**: this is a string operation because Redis does not have a dedicated
+integer type. The string stored at the key is interpreted as a base-10 **64 bit
+signed integer** to execute the operation.
+
+Redis stores integers in their integer representation, so for string values that
+actually hold an integer, there is no overhead for storing the string
+representation of the integer.
+
+@return
+
+@integer-reply: the value of `key` after the increment
+
+@examples
+
+```cli
+SET mykey "10"
+INCR mykey
+GET mykey
+```
+
+## Pattern: Counter
+
+The counter pattern is the most obvious thing you can do with Redis atomic
+increment operations. The idea is to simply send an `INCR` command to Redis
+every time an operation occurs. For instance, in a web application we may want
+to know how many page views a given user performed on every day of the year.
+
+To do so, the web application may simply increment a key every time the user
+performs a page view, creating the key name by concatenating the user ID and a
+string representing the current date.
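+
+A sketch of the resulting command, where the user ID and date in the key name
+are of course just placeholders:
+
+```
+INCR views:user:1234:2016-01-01
+```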
+
+This simple pattern can be extended in many ways:
+
+- It is possible to use `INCR` and `EXPIRE` together at every page view to have
+ a counter counting only the latest N page views separated by less than the
+ specified amount of seconds.
+- A client may use `GETSET` in order to atomically get the current counter
+  value and reset it to zero.
+- Using other atomic increment/decrement commands like `DECR` or `INCRBY` it is
+ possible to handle values that may get bigger or smaller depending on the
+ operations performed by the user. Imagine for instance the score of different
+ users in an online game.
+
+## Pattern: Rate limiter
+
+The rate limiter pattern is a special counter that is used to limit the rate at
+which an operation can be performed. The classical materialization of this
+pattern involves limiting the number of requests that can be performed against a
+public API.
+
+We provide two implementations of this pattern using `INCR`, where we assume
+that the problem to solve is limiting the number of API calls to a maximum of
+_ten requests per second per IP address_.
+
+## Pattern: Rate limiter 1
+
+The simpler and more direct implementation of this pattern is the following:
+
+```
+FUNCTION LIMIT_API_CALL(ip)
+ts = CURRENT_UNIX_TIME()
+keyname = ip+":"+ts
+current = GET(keyname)
+IF current != NULL AND current > 10 THEN
+ ERROR "too many requests per second"
+ELSE
+ MULTI
+ INCR(keyname,1)
+ EXPIRE(keyname,10)
+ EXEC
+ PERFORM_API_CALL()
+END
+```
+
+Basically we have a counter for every IP, for every different second. But these
+counters are always incremented with an expire of 10 seconds set, so that
+they'll be removed by Redis automatically when the current second is a
+different one.
+
+Note the use of `MULTI` and `EXEC` in order to make sure that we'll both
+increment and set the expire at every API call.
+
+## Pattern: Rate limiter 2
+
+An alternative implementation uses a single counter, but it is a bit more
+complex to get right without race conditions. We'll examine different variants.
+
+```
+FUNCTION LIMIT_API_CALL(ip):
+current = GET(ip)
+IF current != NULL AND current > 10 THEN
+ ERROR "too many requests per second"
+ELSE
+ value = INCR(ip)
+ IF value == 1 THEN
+ EXPIRE(ip,1)
+ END
+ PERFORM_API_CALL()
+END
+```
+
+The counter is created in a way that it only will survive one second, starting
+from the first request performed in the current second. If there are more than
+10 requests in the same second the counter will reach a value greater than 10,
+otherwise it will expire and start again from 0.
+
+**In the above code there is a race condition**. If for some reason the client
+performs the `INCR` command but does not perform the `EXPIRE`, the key will be
+leaked until we see the same IP address again.
+
+This can be fixed easily by turning the `INCR` with optional `EXPIRE` into a Lua
+script that is sent using the `EVAL` command (only available since Redis version
+2.6).
+
+```
+local current
+current = redis.call("incr",KEYS[1])
+if tonumber(current) == 1 then
+ redis.call("expire",KEYS[1],1)
+end
+```
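+
+A sketch of how the script can be invoked with `EVAL`, passing the IP address
+as the only key (the IP below is just a placeholder):
+
+```
+EVAL "local current = redis.call('incr',KEYS[1]) if tonumber(current) == 1 then redis.call('expire',KEYS[1],1) end" 1 127.0.0.1
+```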
+
+There is a different way to fix this issue without using scripting, by using
+Redis lists instead of counters. The implementation is more complex and uses
+more advanced features, but has the advantage of remembering the IP addresses of
+the clients currently performing an API call, which may or may not be useful
+depending on the application.
+
+```
+FUNCTION LIMIT_API_CALL(ip)
+current = LLEN(ip)
+IF current > 10 THEN
+ ERROR "too many requests per second"
+ELSE
+ IF EXISTS(ip) == FALSE
+ MULTI
+ RPUSH(ip,ip)
+ EXPIRE(ip,1)
+ EXEC
+ ELSE
+ RPUSHX(ip,ip)
+ END
+ PERFORM_API_CALL()
+END
+```
+
+The `RPUSHX` command only pushes the element if the key already exists.
+
+Note that we have a race here, but it is not a problem: `EXISTS` may return
+false but the key may be created by another client before we create it inside
+the `MULTI` / `EXEC` block. However this race will just miss an API call under
+rare conditions, so the rate limiting will still work correctly.
diff --git a/iredis/data/commands/incrby.md b/iredis/data/commands/incrby.md
new file mode 100644
index 0000000..103dbbf
--- /dev/null
+++ b/iredis/data/commands/incrby.md
@@ -0,0 +1,17 @@
+Increments the number stored at `key` by `increment`. If the key does not exist,
+it is set to `0` before performing the operation. An error is returned if the
+key contains a value of the wrong type or contains a string that can not be
+represented as integer. This operation is limited to 64 bit signed integers.
+
+See `INCR` for extra information on increment/decrement operations.
+
+@return
+
+@integer-reply: the value of `key` after the increment
+
+@examples
+
+```cli
+SET mykey "10"
+INCRBY mykey 5
+```
diff --git a/iredis/data/commands/incrbyfloat.md b/iredis/data/commands/incrbyfloat.md
new file mode 100644
index 0000000..4998f5f
--- /dev/null
+++ b/iredis/data/commands/incrbyfloat.md
@@ -0,0 +1,42 @@
+Increment the string representing a floating point number stored at `key` by the
+specified `increment`. By using a negative `increment` value, the result is that
+the value stored at the key is decremented (by the obvious properties of
+addition). If the key does not exist, it is set to `0` before performing the
+operation. An error is returned if one of the following conditions occurs:
+
+- The key contains a value of the wrong type (not a string).
+- The current key content or the specified increment are not parsable as a
+ double precision floating point number.
+
+If the command is successful the new incremented value is stored as the new
+value of the key (replacing the old one), and returned to the caller as a
+string.
+
+Both the value already contained in the string key and the increment argument
+can be optionally provided in exponential notation; however, the value computed
+after the increment is stored consistently in the same format, that is, an
+integer number followed (if needed) by a dot, and a variable number of digits
+representing the decimal part of the number. Trailing zeroes are always removed.
+
+The precision of the output is fixed at 17 digits after the decimal point
+regardless of the actual internal precision of the computation.
+
+@return
+
+@bulk-string-reply: the value of `key` after the increment.
+
+@examples
+
+```cli
+SET mykey 10.50
+INCRBYFLOAT mykey 0.1
+INCRBYFLOAT mykey -5
+SET mykey 5.0e3
+INCRBYFLOAT mykey 2.0e2
+```
+
+## Implementation details
+
+The command is always propagated in the replication link and the Append Only
+File as a `SET` operation, so that differences in the underlying floating point
+math implementation will not be sources of inconsistency.
diff --git a/iredis/data/commands/info.md b/iredis/data/commands/info.md
new file mode 100644
index 0000000..8dcebc1
--- /dev/null
+++ b/iredis/data/commands/info.md
@@ -0,0 +1,335 @@
+The `INFO` command returns information and statistics about the server in a
+format that is simple to parse by computers and easy to read by humans.
+
+The optional parameter can be used to select a specific section of information:
+
+- `server`: General information about the Redis server
+- `clients`: Client connections section
+- `memory`: Memory consumption related information
+- `persistence`: RDB and AOF related information
+- `stats`: General statistics
+- `replication`: Master/replica replication information
+- `cpu`: CPU consumption statistics
+- `commandstats`: Redis command statistics
+- `cluster`: Redis Cluster section
+- `modules`: Modules section
+- `keyspace`: Database related statistics
+
+It can also take the following values:
+
+- `all`: Return all sections (excluding module generated ones)
+- `default`: Return only the default set of sections
+- `everything`: Includes `all` and `modules`
+
+When no parameter is provided, the `default` option is assumed.
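+
+For example, a single section can be requested by passing its name as the
+argument:
+
+```
+> INFO replication
+```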
+
+@return
+
+@bulk-string-reply: as a collection of text lines.
+
+Lines can contain a section name (starting with a # character) or a property.
+All the properties are in the form of `field:value` terminated by `\r\n`.
+
+```cli
+INFO
+```
+
+## Notes
+
+Please note that depending on the version of Redis some of the fields have been
+added or removed. A robust client application should therefore parse the result
+of this command by skipping unknown properties, and gracefully handle missing
+fields.
+
+Here is the description of fields for Redis >= 2.4.
+
+Here is the meaning of all fields in the **server** section:
+
+- `redis_version`: Version of the Redis server
+- `redis_git_sha1`: Git SHA1
+- `redis_git_dirty`: Git dirty flag
+- `redis_build_id`: The build id
+- `redis_mode`: The server's mode ("standalone", "sentinel" or "cluster")
+- `os`: Operating system hosting the Redis server
+- `arch_bits`: Architecture (32 or 64 bits)
+- `multiplexing_api`: Event loop mechanism used by Redis
+- `atomicvar_api`: Atomicvar API used by Redis
+- `gcc_version`: Version of the GCC compiler used to compile the Redis server
+- `process_id`: PID of the server process
+- `run_id`: Random value identifying the Redis server (to be used by Sentinel
+ and Cluster)
+- `tcp_port`: TCP/IP listen port
+- `uptime_in_seconds`: Number of seconds since Redis server start
+- `uptime_in_days`: Same value expressed in days
+- `hz`: The server's current frequency setting
+- `configured_hz`: The server's configured frequency setting
+- `lru_clock`: Clock incrementing every minute, for LRU management
+- `executable`: The path to the server's executable
+- `config_file`: The path to the config file
+
+Here is the meaning of all fields in the **clients** section:
+
+- `connected_clients`: Number of client connections (excluding connections from
+ replicas)
+- `client_longest_output_list`: Longest output list among current client
+ connections
+- `client_biggest_input_buf`: Biggest input buffer among current client
+ connections
+- `blocked_clients`: Number of clients pending on a blocking call (`BLPOP`,
+ `BRPOP`, `BRPOPLPUSH`, `BZPOPMIN`, `BZPOPMAX`)
+- `tracking_clients`: Number of clients being tracked (`CLIENT TRACKING`)
+- `clients_in_timeout_table`: Number of clients in the clients timeout table
+
+Here is the meaning of all fields in the **memory** section:
+
+- `used_memory`: Total number of bytes allocated by Redis using its allocator
+ (either standard **libc**, **jemalloc**, or an alternative allocator such as
+ [**tcmalloc**][hcgcpgp])
+- `used_memory_human`: Human readable representation of previous value
+- `used_memory_rss`: Number of bytes that Redis allocated as seen by the
+ operating system (a.k.a resident set size). This is the number reported by
+ tools such as `top(1)` and `ps(1)`
+- `used_memory_rss_human`: Human readable representation of previous value
+- `used_memory_peak`: Peak memory consumed by Redis (in bytes)
+- `used_memory_peak_human`: Human readable representation of previous value
+- `used_memory_peak_perc`: The percentage of `used_memory_peak` out of
+ `used_memory`
+- `used_memory_overhead`: The sum in bytes of all overheads that the server
+ allocated for managing its internal data structures
+- `used_memory_startup`: Initial amount of memory consumed by Redis at startup
+ in bytes
+- `used_memory_dataset`: The size in bytes of the dataset
+ (`used_memory_overhead` subtracted from `used_memory`)
+- `used_memory_dataset_perc`: The percentage of `used_memory_dataset` out of the
+ net memory usage (`used_memory` minus `used_memory_startup`)
+- `total_system_memory`: The total amount of memory that the Redis host has
+- `total_system_memory_human`: Human readable representation of previous value
+- `used_memory_lua`: Number of bytes used by the Lua engine
+- `used_memory_lua_human`: Human readable representation of previous value
+- `used_memory_scripts`: Number of bytes used by cached Lua scripts
+- `used_memory_scripts_human`: Human readable representation of previous value
+- `maxmemory`: The value of the `maxmemory` configuration directive
+- `maxmemory_human`: Human readable representation of previous value
+- `maxmemory_policy`: The value of the `maxmemory-policy` configuration
+ directive
+- `mem_fragmentation_ratio`: Ratio between `used_memory_rss` and `used_memory`
+- `mem_allocator`: Memory allocator, chosen at compile time
+- `active_defrag_running`: Flag indicating if active defragmentation is active
+- `lazyfree_pending_objects`: The number of objects waiting to be freed (as a
+ result of calling `UNLINK`, or `FLUSHDB` and `FLUSHALL` with the **ASYNC**
+ option)
+
+Ideally, the `used_memory_rss` value should be only slightly higher than
+`used_memory`. When rss >> used, a large difference means there is memory
+fragmentation (internal or external), which can be evaluated by checking
+`mem_fragmentation_ratio`. When used >> rss, it means part of Redis memory has
+been swapped off by the operating system: expect some significant latencies.
+
+Because Redis does not have control over how its allocations are mapped to
+memory pages, high `used_memory_rss` is often the result of a spike in memory
+usage.
+
+When Redis frees memory, the memory is given back to the allocator, and the
+allocator may or may not give the memory back to the system. There may be a
+discrepancy between the `used_memory` value and memory consumption as reported
+by the operating system. It may be due to the fact that memory has been used and
+released by Redis, but not given back to the system. The `used_memory_peak`
+value is generally useful to check this point.
+
+Additional introspective information about the server's memory can be obtained
+by referring to the `MEMORY STATS` command and the `MEMORY DOCTOR`.
+
+Here is the meaning of all fields in the **persistence** section:
+
+- `loading`: Flag indicating if the load of a dump file is on-going
+- `rdb_changes_since_last_save`: Number of changes since the last dump
+- `rdb_bgsave_in_progress`: Flag indicating a RDB save is on-going
+- `rdb_last_save_time`: Epoch-based timestamp of last successful RDB save
+- `rdb_last_bgsave_status`: Status of the last RDB save operation
+- `rdb_last_bgsave_time_sec`: Duration of the last RDB save operation in seconds
+- `rdb_current_bgsave_time_sec`: Duration of the on-going RDB save operation if
+ any
+- `rdb_last_cow_size`: The size in bytes of copy-on-write allocations during the
+ last RDB save operation
+- `aof_enabled`: Flag indicating AOF logging is activated
+- `aof_rewrite_in_progress`: Flag indicating a AOF rewrite operation is on-going
+- `aof_rewrite_scheduled`: Flag indicating an AOF rewrite operation will be
+ scheduled once the on-going RDB save is complete.
+- `aof_last_rewrite_time_sec`: Duration of the last AOF rewrite operation in
+ seconds
+- `aof_current_rewrite_time_sec`: Duration of the on-going AOF rewrite operation
+ if any
+- `aof_last_bgrewrite_status`: Status of the last AOF rewrite operation
+- `aof_last_write_status`: Status of the last write operation to the AOF
+- `aof_last_cow_size`: The size in bytes of copy-on-write allocations during the
+ last AOF rewrite operation
+- `module_fork_in_progress`: Flag indicating a module fork is on-going
+- `module_fork_last_cow_size`: The size in bytes of copy-on-write allocations
+ during the last module fork operation
+
+`rdb_changes_since_last_save` refers to the number of operations that produced
+some kind of changes in the dataset since the last time either `SAVE` or
+`BGSAVE` was called.
+
+If AOF is activated, these additional fields will be added:
+
+- `aof_current_size`: AOF current file size
+- `aof_base_size`: AOF file size on latest startup or rewrite
+- `aof_pending_rewrite`: Flag indicating an AOF rewrite operation will be
+ scheduled once the on-going RDB save is complete.
+- `aof_buffer_length`: Size of the AOF buffer
+- `aof_rewrite_buffer_length`: Size of the AOF rewrite buffer
+- `aof_pending_bio_fsync`: Number of fsync pending jobs in background I/O queue
+- `aof_delayed_fsync`: Delayed fsync counter
+
+If a load operation is on-going, these additional fields will be added:
+
+- `loading_start_time`: Epoch-based timestamp of the start of the load operation
+- `loading_total_bytes`: Total file size
+- `loading_loaded_bytes`: Number of bytes already loaded
+- `loading_loaded_perc`: Same value expressed as a percentage
+- `loading_eta_seconds`: ETA in seconds for the load to be complete
+
+Here is the meaning of all fields in the **stats** section:
+
+- `total_connections_received`: Total number of connections accepted by the
+ server
+- `total_commands_processed`: Total number of commands processed by the server
+- `instantaneous_ops_per_sec`: Number of commands processed per second
+- `total_net_input_bytes`: The total number of bytes read from the network
+- `total_net_output_bytes`: The total number of bytes written to the network
+- `instantaneous_input_kbps`: The network's read rate per second in KB/sec
+- `instantaneous_output_kbps`: The network's write rate per second in KB/sec
+- `rejected_connections`: Number of connections rejected because of `maxclients`
+ limit
+- `sync_full`: The number of full resyncs with replicas
+- `sync_partial_ok`: The number of accepted partial resync requests
+- `sync_partial_err`: The number of denied partial resync requests
+- `expired_keys`: Total number of key expiration events
+- `expired_stale_perc`: The percentage of keys probably expired
+- `expired_time_cap_reached_count`: The count of times that active expiry cycles
+ have stopped early
+- `expire_cycle_cpu_milliseconds`: The cumulative amount of time spent on active
+  expiry cycles
+- `evicted_keys`: Number of evicted keys due to `maxmemory` limit
+- `keyspace_hits`: Number of successful lookups of keys in the main dictionary
+- `keyspace_misses`: Number of failed lookups of keys in the main dictionary
+- `pubsub_channels`: Global number of pub/sub channels with client subscriptions
+- `pubsub_patterns`: Global number of pub/sub patterns with client subscriptions
+- `latest_fork_usec`: Duration of the latest fork operation in microseconds
+- `migrate_cached_sockets`: The number of sockets open for `MIGRATE` purposes
+- `slave_expires_tracked_keys`: The number of keys tracked for expiry purposes
+ (applicable only to writable replicas)
+- `active_defrag_hits`: Number of value reallocations performed by the active
+  defragmentation process
+- `active_defrag_misses`: Number of aborted value reallocations started by the
+ active defragmentation process
+- `active_defrag_key_hits`: Number of keys that were actively defragmented
+- `active_defrag_key_misses`: Number of keys that were skipped by the active
+ defragmentation process
+- `tracking_total_keys`: Number of keys being tracked by the server
+- `tracking_total_items`: Number of items, that is the sum of the number of
+  clients for each key, that are being tracked
+- `tracking_total_prefixes`: Number of tracked prefixes in server's prefix table
+ (only applicable for broadcast mode)
+- `unexpected_error_replies`: Number of unexpected error replies, that is,
+  errors originating from an AOF load or from replication
+
+Here is the meaning of all fields in the **replication** section:
+
+- `role`: Value is "master" if the instance is a replica of no one, or "slave"
+  if the instance is a replica of some master instance. Note that a replica can
+  be the master of another replica (chained replication).
+- `master_replid`: The replication ID of the Redis server.
+- `master_replid2`: The secondary replication ID, used for PSYNC after a
+ failover.
+- `master_repl_offset`: The server's current replication offset
+- `second_repl_offset`: The offset up to which replication IDs are accepted
+- `repl_backlog_active`: Flag indicating replication backlog is active
+- `repl_backlog_size`: Total size in bytes of the replication backlog buffer
+- `repl_backlog_first_byte_offset`: The master offset of the replication backlog
+ buffer
+- `repl_backlog_histlen`: Size in bytes of the data in the replication backlog
+ buffer
+
+If the instance is a replica, these additional fields are provided:
+
+- `master_host`: Host or IP address of the master
+- `master_port`: Master listening TCP port
+- `master_link_status`: Status of the link (up/down)
+- `master_last_io_seconds_ago`: Number of seconds since the last interaction
+ with master
+- `master_sync_in_progress`: Indicates that the master is syncing to the replica
+- `slave_repl_offset`: The replication offset of the replica instance
+- `slave_priority`: The priority of the instance as a candidate for failover
+- `slave_read_only`: Flag indicating if the replica is read-only
+
+If a SYNC operation is on-going, these additional fields are provided:
+
+- `master_sync_left_bytes`: Number of bytes left before syncing is complete
+- `master_sync_last_io_seconds_ago`: Number of seconds since last transfer I/O
+ during a SYNC operation
+
+If the link between master and replica is down, an additional field is provided:
+
+- `master_link_down_since_seconds`: Number of seconds since the link is down
+
+The following field is always provided:
+
+- `connected_slaves`: Number of connected replicas
+
+If the server is configured with the `min-slaves-to-write` (or starting with
+Redis 5 with the `min-replicas-to-write`) directive, an additional field is
+provided:
+
+- `min_slaves_good_slaves`: Number of replicas currently considered good
+
+For each replica, the following line is added:
+
+- `slaveXXX`: id, IP address, port, state, offset, lag
+
+Here is the meaning of all fields in the **cpu** section:
+
+- `used_cpu_sys`: System CPU consumed by the Redis server
+- `used_cpu_user`: User CPU consumed by the Redis server
+- `used_cpu_sys_children`: System CPU consumed by the background processes
+- `used_cpu_user_children`: User CPU consumed by the background processes
+
+The **commandstats** section provides statistics based on the command type,
+including the number of calls, the total CPU time consumed by these commands,
+and the average CPU consumed per command execution.
+
+For each command type, the following line is added:
+
+- `cmdstat_XXX`: `calls=XXX,usec=XXX,usec_per_call=XXX`
+
+The **cluster** section currently only contains a unique field:
+
+- `cluster_enabled`: Indicates whether Redis Cluster is enabled
+
+The **modules** section contains additional information about loaded modules if
+the modules provide it. The field part of properties lines in this section is
+always prefixed with the module's name.
+
+The **keyspace** section provides statistics on the main dictionary of each
+database. The statistics are the number of keys, and the number of keys with an
+expiration.
+
+For each database, the following line is added:
+
+- `dbXXX`: `keys=XXX,expires=XXX`
+
+[hcgcpgp]: http://code.google.com/p/google-perftools/
+
+**A note about the word slave used in this man page**: Starting with Redis 5, if
+not for backward compatibility, the Redis project no longer uses the word slave.
+Unfortunately in this command the word slave is part of the protocol, so we'll
+be able to remove such occurrences only when this API is naturally deprecated.
+
+**Modules generated sections**: Starting with Redis 6, modules can inject their
+info into the `INFO` command; these are excluded by default even when the `all`
+argument is provided (it will include a list of loaded modules but not their
+generated info fields). To get these you must use either the `modules` argument
+or `everything`.
diff --git a/iredis/data/commands/keys.md b/iredis/data/commands/keys.md
new file mode 100644
index 0000000..8991beb
--- /dev/null
+++ b/iredis/data/commands/keys.md
@@ -0,0 +1,37 @@
+Returns all keys matching `pattern`.
+
+While the time complexity for this operation is O(N), the constant times are
+fairly low. For example, Redis running on an entry level laptop can scan a 1
+million key database in 40 milliseconds.
+
+**Warning**: consider `KEYS` as a command that should only be used in production
+environments with extreme care. It may ruin performance when it is executed
+against large databases. This command is intended for debugging and special
+operations, such as changing your keyspace layout. Don't use `KEYS` in your
+regular application code. If you're looking for a way to find keys in a subset
+of your keyspace, consider using `SCAN` or [sets][tdts].
+
+[tdts]: /topics/data-types#sets
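+
+For example, an incremental alternative to `KEYS *name*` that does not block the
+server for long is to iterate with `SCAN` and its `MATCH` option (the cursor
+starts at 0 and the iteration is complete when the returned cursor is 0 again):
+
+```
+> SCAN 0 MATCH *name* COUNT 100
+```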
+
+Supported glob-style patterns:
+
+- `h?llo` matches `hello`, `hallo` and `hxllo`
+- `h*llo` matches `hllo` and `heeeello`
+- `h[ae]llo` matches `hello` and `hallo`, but not `hillo`
+- `h[^e]llo` matches `hallo`, `hbllo`, ... but not `hello`
+- `h[a-b]llo` matches `hallo` and `hbllo`
+
+Use `\` to escape special characters if you want to match them verbatim.
+
+@return
+
+@array-reply: list of keys matching `pattern`.
+
+@examples
+
+```cli
+MSET firstname Jack lastname Stuntman age 35
+KEYS *name*
+KEYS a??
+KEYS *
+```
diff --git a/iredis/data/commands/lastsave.md b/iredis/data/commands/lastsave.md
new file mode 100644
index 0000000..8a4dea1
--- /dev/null
+++ b/iredis/data/commands/lastsave.md
@@ -0,0 +1,8 @@
+Return the UNIX TIME of the last DB save executed with success. A client may
+check if a `BGSAVE` command succeeded by reading the `LASTSAVE` value, then
+issuing a `BGSAVE` command and checking at regular intervals every N seconds
+whether `LASTSAVE` changed.
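+
+A sketch of that pattern, with placeholder timestamps; once the value returned
+by `LASTSAVE` differs from the one read before issuing `BGSAVE`, the background
+save has completed:
+
+```
+> LASTSAVE
+(integer) 1609459200
+> BGSAVE
+Background saving started
+> LASTSAVE
+(integer) 1609459260
+```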
+
+@return
+
+@integer-reply: a UNIX timestamp.
diff --git a/iredis/data/commands/latency-doctor.md b/iredis/data/commands/latency-doctor.md
new file mode 100644
index 0000000..e089abb
--- /dev/null
+++ b/iredis/data/commands/latency-doctor.md
@@ -0,0 +1,47 @@
+The `LATENCY DOCTOR` command reports about different latency-related issues and
+advises about possible remedies.
+
+This command is the most powerful analysis tool in the latency monitoring
+framework, and is able to provide additional statistical data like the average
+period between latency spikes, the median deviation, and a human-readable
+analysis of the event. For certain events, like `fork`, additional information
+is provided, like the rate at which the system forks processes.
+
+This is the output you should post to the Redis mailing list if you are looking
+for help about latency-related issues.
+
+@example
+
+```
+127.0.0.1:6379> latency doctor
+
+Dave, I have observed latency spikes in this Redis instance.
+You don't mind talking about it, do you Dave?
+
+1. command: 5 latency spikes (average 300ms, mean deviation 120ms,
+ period 73.40 sec). Worst all time event 500ms.
+
+I have a few advices for you:
+
+- Your current Slow Log configuration only logs events that are
+ slower than your configured latency monitor threshold. Please
+ use 'CONFIG SET slowlog-log-slower-than 1000'.
+- Check your Slow Log to understand what are the commands you are
+ running which are too slow to execute. Please check
+ http://redis.io/commands/slowlog for more information.
+- Deleting, expiring or evicting (because of maxmemory policy)
+ large objects is a blocking operation. If you have very large
+ objects that are often deleted, expired, or evicted, try to
+ fragment those objects into multiple smaller objects.
+```
+
+**Note:** the doctor has erratic psychological behaviors, so we recommend
+interacting with it carefully.
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
+
+@return
+
+@bulk-string-reply
diff --git a/iredis/data/commands/latency-graph.md b/iredis/data/commands/latency-graph.md
new file mode 100644
index 0000000..ee3d93f
--- /dev/null
+++ b/iredis/data/commands/latency-graph.md
@@ -0,0 +1,68 @@
+Produces an ASCII-art style graph for the specified event.
+
+`LATENCY GRAPH` lets you intuitively understand the latency trend of an `event`
+via state-of-the-art visualization. It can be used for quickly grasping the
+situation before resorting to means such as parsing the raw data from
+`LATENCY HISTORY` or external tooling.
+
+Valid values for `event` are:
+
+- `active-defrag-cycle`
+- `aof-fsync-always`
+- `aof-stat`
+- `aof-rewrite-diff-write`
+- `aof-rename`
+- `aof-write`
+- `aof-write-active-child`
+- `aof-write-alone`
+- `aof-write-pending-fsync`
+- `command`
+- `expire-cycle`
+- `eviction-cycle`
+- `eviction-del`
+- `fast-command`
+- `fork`
+- `rdb-unlink-temp-file`
+
+@example
+
+```
+127.0.0.1:6379> latency reset command
+(integer) 0
+127.0.0.1:6379> debug sleep .1
+OK
+127.0.0.1:6379> debug sleep .2
+OK
+127.0.0.1:6379> debug sleep .3
+OK
+127.0.0.1:6379> debug sleep .5
+OK
+127.0.0.1:6379> debug sleep .4
+OK
+127.0.0.1:6379> latency graph command
+command - high 500 ms, low 101 ms (all time high 500 ms)
+--------------------------------------------------------------------------------
+ #_
+ _||
+ _|||
+_||||
+
+11186
+542ss
+sss
+```
+
+The vertical labels under each graph column represent the amount of seconds,
+minutes, hours or days ago the event happened. For example "15s" means that the
+first graphed event happened 15 seconds ago.
+
+The graph is normalized in the min-max scale so that the zero (the underscore in
+the lower row) is the minimum, and a # in the higher row is the maximum.
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
+
+@return
+
+@bulk-string-reply
diff --git a/iredis/data/commands/latency-help.md b/iredis/data/commands/latency-help.md
new file mode 100644
index 0000000..8077bf0
--- /dev/null
+++ b/iredis/data/commands/latency-help.md
@@ -0,0 +1,10 @@
+The `LATENCY HELP` command returns a helpful text describing the different
+subcommands.
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
+
+@return
+
+@array-reply: a list of subcommands and their descriptions
diff --git a/iredis/data/commands/latency-history.md b/iredis/data/commands/latency-history.md
new file mode 100644
index 0000000..de48ecd
--- /dev/null
+++ b/iredis/data/commands/latency-history.md
@@ -0,0 +1,47 @@
+The `LATENCY HISTORY` command returns the raw data of the `event`'s latency
+spikes time series.
+
+This is useful to an application that wants to fetch raw data in order to
+perform monitoring, display graphs, and so forth.
+
+The command will return up to 160 timestamp-latency pairs for the `event`.
+
+Valid values for `event` are:
+
+- `active-defrag-cycle`
+- `aof-fsync-always`
+- `aof-stat`
+- `aof-rewrite-diff-write`
+- `aof-rename`
+- `aof-write`
+- `aof-write-active-child`
+- `aof-write-alone`
+- `aof-write-pending-fsync`
+- `command`
+- `expire-cycle`
+- `eviction-cycle`
+- `eviction-del`
+- `fast-command`
+- `fork`
+- `rdb-unlink-temp-file`
+
+@example
+
+```
+127.0.0.1:6379> latency history command
+1) 1) (integer) 1405067822
+ 2) (integer) 251
+2) 1) (integer) 1405067941
+ 2) (integer) 1001
+```
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
+
+@return
+
+@array-reply: specifically:
+
+The command returns an array where each element is a two-element array
+representing the timestamp and the latency of the event.
diff --git a/iredis/data/commands/latency-latest.md b/iredis/data/commands/latency-latest.md
new file mode 100644
index 0000000..883f0a5
--- /dev/null
+++ b/iredis/data/commands/latency-latest.md
@@ -0,0 +1,38 @@
+The `LATENCY LATEST` command reports the latest latency events logged.
+
+Each reported event has the following fields:
+
+- Event name.
+- Unix timestamp of the latest latency spike for the event.
+- Latest event latency in milliseconds.
+- All-time maximum latency for this event.
+
+"All-time" means the maximum latency since the Redis instance was started, or
+the time that the events were reset with `LATENCY RESET`.
+
+@example
+
+```
+127.0.0.1:6379> debug sleep 1
+OK
+(1.00s)
+127.0.0.1:6379> debug sleep .25
+OK
+127.0.0.1:6379> latency latest
+1) 1) "command"
+ 2) (integer) 1405067976
+ 3) (integer) 251
+ 4) (integer) 1001
+```
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
+
+@return
+
+@array-reply: specifically:
+
+The command returns an array where each element is a four-element array
+representing the event's name, timestamp, latest and all-time latency
+measurements.
diff --git a/iredis/data/commands/latency-reset.md b/iredis/data/commands/latency-reset.md
new file mode 100644
index 0000000..cec6f06
--- /dev/null
+++ b/iredis/data/commands/latency-reset.md
@@ -0,0 +1,36 @@
+The `LATENCY RESET` command resets the latency spikes time series of all, or
+only some, events.
+
+When the command is called without arguments, it resets all the events,
+discarding the currently logged latency spike events, and resetting the maximum
+event time register.
+
+It is possible to reset only specific events by providing the `event` names as
+arguments.
+
+Valid values for `event` are:
+
+- `active-defrag-cycle`
+- `aof-fsync-always`
+- `aof-stat`
+- `aof-rewrite-diff-write`
+- `aof-rename`
+- `aof-write`
+- `aof-write-active-child`
+- `aof-write-alone`
+- `aof-write-pending-fsync`
+- `command`
+- `expire-cycle`
+- `eviction-cycle`
+- `eviction-del`
+- `fast-command`
+- `fork`
+- `rdb-unlink-temp-file`
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
+
+@return
+
+@integer-reply: the number of event time series that were reset.
diff --git a/iredis/data/commands/lindex.md b/iredis/data/commands/lindex.md
new file mode 100644
index 0000000..c0f6ac3
--- /dev/null
+++ b/iredis/data/commands/lindex.md
@@ -0,0 +1,22 @@
+Returns the element at index `index` in the list stored at `key`. The index is
+zero-based, so `0` means the first element, `1` the second element and so on.
+Negative indices can be used to designate elements starting at the tail of the
+list. Here, `-1` means the last element, `-2` means the penultimate and so
+forth.
+
+When the value at `key` is not a list, an error is returned.
+
+@return
+
+@bulk-string-reply: the requested element, or `nil` when `index` is out of
+range.
+
+@examples
+
+```cli
+LPUSH mylist "World"
+LPUSH mylist "Hello"
+LINDEX mylist 0
+LINDEX mylist -1
+LINDEX mylist 3
+```
diff --git a/iredis/data/commands/linsert.md b/iredis/data/commands/linsert.md
new file mode 100644
index 0000000..6ff8060
--- /dev/null
+++ b/iredis/data/commands/linsert.md
@@ -0,0 +1,21 @@
+Inserts `element` in the list stored at `key` either before or after the
+reference value `pivot`.
+
+When `key` does not exist, it is considered an empty list and no operation is
+performed.
+
+An error is returned when `key` exists but does not hold a list value.
+
+@return
+
+@integer-reply: the length of the list after the insert operation, or `-1` when
+the value `pivot` was not found.
+
+@examples
+
+```cli
+RPUSH mylist "Hello"
+RPUSH mylist "World"
+LINSERT mylist BEFORE "World" "There"
+LRANGE mylist 0 -1
+```
diff --git a/iredis/data/commands/llen.md b/iredis/data/commands/llen.md
new file mode 100644
index 0000000..17ac581
--- /dev/null
+++ b/iredis/data/commands/llen.md
@@ -0,0 +1,15 @@
+Returns the length of the list stored at `key`. If `key` does not exist, it is
+interpreted as an empty list and `0` is returned. An error is returned when the
+value stored at `key` is not a list.
+
+@return
+
+@integer-reply: the length of the list at `key`.
+
+@examples
+
+```cli
+LPUSH mylist "World"
+LPUSH mylist "Hello"
+LLEN mylist
+```
diff --git a/iredis/data/commands/lolwut.md b/iredis/data/commands/lolwut.md
new file mode 100644
index 0000000..c19567c
--- /dev/null
+++ b/iredis/data/commands/lolwut.md
@@ -0,0 +1,36 @@
+The LOLWUT command displays the Redis version: however as a side effect of doing
+so, it also creates a piece of generative computer art that is different with
+each version of Redis. The command was introduced in Redis 5 and announced with
+this [blog post](http://antirez.com/news/123).
+
+By default the `LOLWUT` command will display the piece corresponding to the
+current Redis version, however it is possible to display a specific version
+using the following form:
+
+ LOLWUT VERSION 5 ... other optional arguments ...
+
+Of course the "5" above is an example. Each LOLWUT version takes a different set
+of arguments in order to change the output. The user is encouraged to play with
+it to discover how the output changes when adding more numerical arguments.
+
+LOLWUT wants to be a reminder that there is more in programming than just
+putting some code together in order to create something useful. Every LOLWUT
+version should have the following properties:
+
+1. It should display some computer art. There are no limits as long as the
+   output works well in a normal terminal display. However the output should
+   not be limited to graphics (as LOLWUT 5 and 6 actually are), but can be
+   generative poetry and other non-graphical things.
+2. LOLWUT output should be completely useless. Displaying some useful Redis
+ internal metrics does not count as a valid LOLWUT.
+3. LOLWUT output should be fast to generate so that the command can be called in
+ production instances without issues. It should remain fast even when the user
+ experiments with odd parameters.
+4. LOLWUT implementations should be safe and carefully checked for security, and
+   resist untrusted inputs if they take arguments.
+5. LOLWUT must always display the Redis version at the end.
+
+@return
+
+@bulk-string-reply (or verbatim reply when using the RESP3 protocol): the string
+containing the generative computer art, and a text with the Redis version.
diff --git a/iredis/data/commands/lpop.md b/iredis/data/commands/lpop.md
new file mode 100644
index 0000000..6049176
--- /dev/null
+++ b/iredis/data/commands/lpop.md
@@ -0,0 +1,16 @@
+Removes and returns the first element of the list stored at `key`.
+
+@return
+
+@bulk-string-reply: the value of the first element, or `nil` when `key` does not
+exist.
+
+@examples
+
+```cli
+RPUSH mylist "one"
+RPUSH mylist "two"
+RPUSH mylist "three"
+LPOP mylist
+LRANGE mylist 0 -1
+```
diff --git a/iredis/data/commands/lpos.md b/iredis/data/commands/lpos.md
new file mode 100644
index 0000000..c256e0e
--- /dev/null
+++ b/iredis/data/commands/lpos.md
@@ -0,0 +1,95 @@
+The command returns the index of matching elements inside a Redis list. By
+default, when no options are given, it will scan the list from head to tail,
+looking for the first match of "element". If the element is found, its index
+(the zero-based position in the list) is returned. Otherwise, if no match is
+found, NULL is returned.
+
+```
+> RPUSH mylist a b c 1 2 3 c c
+> LPOS mylist c
+2
+```
+
+The optional arguments and options can modify the command's behavior. The `RANK`
+option specifies the "rank" of the first element to return, in case there are
+multiple matches. A rank of 1 means to return the first match, 2 to return the
+second match, and so forth.
+
+For instance, in the above example the element "c" is present multiple times. If
+we want the index of the second match, we can write:
+
+```
+> LPOS mylist c RANK 2
+6
+```
+
+That is, the second occurrence of "c" is at position 6. A negative "rank" as the
+`RANK` argument tells `LPOS` to invert the search direction, starting from the
+tail to the head.
+
+So, if we want to say "give me the first match starting from the tail of the
+list", we can write:
+
+```
+> LPOS mylist c RANK -1
+7
+```
+
+Note that the indexes are still reported in the "natural" way, that is,
+considering the first element starting from the head of the list at index 0, the
+next element at index 1, and so forth. This basically means that the returned
+indexes are stable whether the rank is positive or negative.
+
+Sometimes we want to return not just the Nth matching element, but the position
+of all the first N matching elements. This can be achieved using the `COUNT`
+option.
+
+```
+> LPOS mylist c COUNT 2
+[2,6]
+```
+
+We can combine `COUNT` and `RANK`, so that `COUNT` will try to return up to the
+specified number of matches, but starting from the Nth match, as specified by
+the `RANK` option.
+
+```
+> LPOS mylist c RANK -1 COUNT 2
+[7,6]
+```
+
+When `COUNT` is used, it is possible to specify 0 as the number of matches, as a
+way to tell the command we want all the matches found returned as an array of
+indexes. This is better than giving a very large `COUNT` option because it is
+more general.
+
+```
+> LPOS mylist c COUNT 0
+[2,6,7]
+```
+
+When `COUNT` is used and no match is found, an empty array is returned. However
+when `COUNT` is not used and there are no matches, the command returns NULL.
+
+Finally, the `MAXLEN` option tells the command to compare the provided element
+only with a given maximum number of list items. So for instance specifying
+`MAXLEN 1000` will make sure that the command performs only 1000 comparisons,
+effectively running the algorithm on a subset of the list (the first part or the
+last part depending on whether we use a positive or negative rank). This is
+useful to limit the maximum complexity of the command. It is also useful when we
+expect the match to be found very early, but want to be sure that in case this
+is not true, the command does not take too much time to run.
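+
+For instance, to look for the first occurrence of "c" while comparing at most
+the first 1000 items of the list used in the examples above:
+
+```
+> LPOS mylist c MAXLEN 1000
+2
+```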
+
+@return
+
+The command returns the integer representing the index of the matching element,
+or null if there is no match. However, if the `COUNT` option is given the
+command returns an array of indexes (empty if there are no matches).
+
+@examples
+
+```cli
+RPUSH mylist a b c d 1 2 3 4 3 3 3
+LPOS mylist 3
+LPOS mylist 3 COUNT 0 RANK 2
+```
diff --git a/iredis/data/commands/lpush.md b/iredis/data/commands/lpush.md
new file mode 100644
index 0000000..2140857
--- /dev/null
+++ b/iredis/data/commands/lpush.md
@@ -0,0 +1,27 @@
+Insert all the specified values at the head of the list stored at `key`. If
+`key` does not exist, it is created as empty list before performing the push
+operations. When `key` holds a value that is not a list, an error is returned.
+
+It is possible to push multiple elements using a single command call just
+specifying multiple arguments at the end of the command. Elements are inserted
+one after the other to the head of the list, from the leftmost element to the
+rightmost element. So for instance the command `LPUSH mylist a b c` will result
+into a list containing `c` as first element, `b` as second element and `a` as
+third element.
+
+@return
+
+@integer-reply: the length of the list after the push operations.
+
+@history
+
+- `>= 2.4`: Accepts multiple `element` arguments. In Redis versions older than
+ 2.4 it was possible to push a single value per command.
+
+@examples
+
+```cli
+LPUSH mylist "world"
+LPUSH mylist "hello"
+LRANGE mylist 0 -1
+```
diff --git a/iredis/data/commands/lpushx.md b/iredis/data/commands/lpushx.md
new file mode 100644
index 0000000..4cc505f
--- /dev/null
+++ b/iredis/data/commands/lpushx.md
@@ -0,0 +1,22 @@
+Inserts the specified values at the head of the list stored at `key`, only if
+`key` already exists and holds a list. In contrast to `LPUSH`, no operation will
+be performed when `key` does not yet exist.
+
+@return
+
+@integer-reply: the length of the list after the push operation.
+
+@history
+
+- `>= 4.0`: Accepts multiple `element` arguments. In Redis versions older than
+ 4.0 it was possible to push a single value per command.
+
+@examples
+
+```cli
+LPUSH mylist "World"
+LPUSHX mylist "Hello"
+LPUSHX myotherlist "Hello"
+LRANGE mylist 0 -1
+LRANGE myotherlist 0 -1
+```
diff --git a/iredis/data/commands/lrange.md b/iredis/data/commands/lrange.md
new file mode 100644
index 0000000..923b542
--- /dev/null
+++ b/iredis/data/commands/lrange.md
@@ -0,0 +1,37 @@
+Returns the specified elements of the list stored at `key`. The offsets `start`
+and `stop` are zero-based indexes, with `0` being the first element of the list
+(the head of the list), `1` being the next element and so on.
+
+These offsets can also be negative numbers indicating offsets starting at the
+end of the list. For example, `-1` is the last element of the list, `-2` the
+penultimate, and so on.
+
+## Consistency with range functions in various programming languages
+
+Note that if you have a list of numbers from 0 to 100, `LRANGE list 0 10` will
+return 11 elements, that is, the rightmost item is included. This **may or may
+not** be consistent with behavior of range-related functions in your programming
+language of choice (think Ruby's `Range.new`, `Array#slice` or Python's
+`range()` function).
+
+## Out-of-range indexes
+
+Out of range indexes will not produce an error. If `start` is larger than the
+end of the list, an empty list is returned. If `stop` is larger than the actual
+end of the list, Redis will treat it like the last element of the list.
+
+@return
+
+@array-reply: list of elements in the specified range.
+
+@examples
+
+```cli
+RPUSH mylist "one"
+RPUSH mylist "two"
+RPUSH mylist "three"
+LRANGE mylist 0 0
+LRANGE mylist -3 2
+LRANGE mylist -100 100
+LRANGE mylist 5 10
+```
diff --git a/iredis/data/commands/lrem.md b/iredis/data/commands/lrem.md
new file mode 100644
index 0000000..c06dda7
--- /dev/null
+++ b/iredis/data/commands/lrem.md
@@ -0,0 +1,28 @@
+Removes the first `count` occurrences of elements equal to `element` from the
+list stored at `key`. The `count` argument influences the operation in the
+following ways:
+
+- `count > 0`: Remove elements equal to `element` moving from head to tail.
+- `count < 0`: Remove elements equal to `element` moving from tail to head.
+- `count = 0`: Remove all elements equal to `element`.
+
+For example, `LREM list -2 "hello"` will remove the last two occurrences of
+`"hello"` in the list stored at `list`.
+
+Note that non-existing keys are treated like empty lists, so when `key` does not
+exist, the command will always return `0`.
+
+@return
+
+@integer-reply: the number of removed elements.
+
+@examples
+
+```cli
+RPUSH mylist "hello"
+RPUSH mylist "hello"
+RPUSH mylist "foo"
+RPUSH mylist "hello"
+LREM mylist -2 "hello"
+LRANGE mylist 0 -1
+```
diff --git a/iredis/data/commands/lset.md b/iredis/data/commands/lset.md
new file mode 100644
index 0000000..fad99ce
--- /dev/null
+++ b/iredis/data/commands/lset.md
@@ -0,0 +1,19 @@
+Sets the list element at `index` to `element`. For more information on the
+`index` argument, see `LINDEX`.
+
+An error is returned for out of range indexes.
+
+@return
+
+@simple-string-reply
+
+@examples
+
+```cli
+RPUSH mylist "one"
+RPUSH mylist "two"
+RPUSH mylist "three"
+LSET mylist 0 "four"
+LSET mylist -2 "five"
+LRANGE mylist 0 -1
+```
diff --git a/iredis/data/commands/ltrim.md b/iredis/data/commands/ltrim.md
new file mode 100644
index 0000000..fd7fca5
--- /dev/null
+++ b/iredis/data/commands/ltrim.md
@@ -0,0 +1,42 @@
+Trim an existing list so that it will contain only the specified range of
+elements. Both `start` and `stop` are zero-based indexes, where `0` is
+the first element of the list (the head), `1` the next element and so on.
+
+For example: `LTRIM foobar 0 2` will modify the list stored at `foobar` so that
+only the first three elements of the list will remain.
+
+`start` and `stop` can also be negative numbers indicating offsets from the end
+of the list, where `-1` is the last element of the list, `-2` the penultimate
+element and so on.
+
+Out of range indexes will not produce an error: if `start` is larger than the
+end of the list, or `start > stop`, the result will be an empty list (which
+causes `key` to be removed). If `stop` is larger than the end of the list, Redis
+will treat it like the last element of the list.
+
+A common use of `LTRIM` is together with `LPUSH` / `RPUSH`. For example:
+
+```
+LPUSH mylist someelement
+LTRIM mylist 0 99
+```
+
+This pair of commands will push a new element on the list, while making sure
+that the list will not grow larger than 100 elements. This is very useful when
+using Redis to store logs for example. It is important to note that when used in
+this way `LTRIM` is an O(1) operation because in the average case just one
+element is removed from the tail of the list.
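+
+As an illustration only (not part of the original page), the same capped-list
+pattern could look like the following sketch with the `redis-py` client; the
+key name and the cap of 100 elements are arbitrary assumptions:
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)  # assumes a Redis server on localhost
+
+
+def log_event(line: str, key: str = "recent:logs", cap: int = 100) -> None:
+    """Push a log line and keep only the most recent `cap` entries."""
+    r.lpush(key, line)        # the newest entry goes to the head of the list
+    r.ltrim(key, 0, cap - 1)  # drop everything beyond the first `cap` elements
+
+
+log_event("user 42 logged in")
+print(r.lrange("recent:logs", 0, 4))  # up to the five most recent lines
+```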
+
+@return
+
+@simple-string-reply
+
+@examples
+
+```cli
+RPUSH mylist "one"
+RPUSH mylist "two"
+RPUSH mylist "three"
+LTRIM mylist 1 -1
+LRANGE mylist 0 -1
+```
diff --git a/iredis/data/commands/memory-doctor.md b/iredis/data/commands/memory-doctor.md
new file mode 100644
index 0000000..0c9a172
--- /dev/null
+++ b/iredis/data/commands/memory-doctor.md
@@ -0,0 +1,6 @@
+The `MEMORY DOCTOR` command reports about different memory-related issues that
+the Redis server experiences, and advises about possible remedies.
+
+@return
+
+@bulk-string-reply
diff --git a/iredis/data/commands/memory-help.md b/iredis/data/commands/memory-help.md
new file mode 100644
index 0000000..c0f4086
--- /dev/null
+++ b/iredis/data/commands/memory-help.md
@@ -0,0 +1,6 @@
+The `MEMORY HELP` command returns a helpful text describing the different
+subcommands.
+
+@return
+
+@array-reply: a list of subcommands and their descriptions
diff --git a/iredis/data/commands/memory-malloc-stats.md b/iredis/data/commands/memory-malloc-stats.md
new file mode 100644
index 0000000..8da8e72
--- /dev/null
+++ b/iredis/data/commands/memory-malloc-stats.md
@@ -0,0 +1,9 @@
+The `MEMORY MALLOC-STATS` command provides an internal statistics report from
+the memory allocator.
+
+This command is currently implemented only when using **jemalloc** as an
+allocator, and evaluates to a benign NOOP for all others.
+
+@return
+
+@bulk-string-reply: the memory allocator's internal statistics report
diff --git a/iredis/data/commands/memory-purge.md b/iredis/data/commands/memory-purge.md
new file mode 100644
index 0000000..5ebe433
--- /dev/null
+++ b/iredis/data/commands/memory-purge.md
@@ -0,0 +1,9 @@
+The `MEMORY PURGE` command attempts to purge dirty pages so these can be
+reclaimed by the allocator.
+
+This command is currently implemented only when using **jemalloc** as an
+allocator, and evaluates to a benign NOOP for all others.
+
+@return
+
+@simple-string-reply
diff --git a/iredis/data/commands/memory-stats.md b/iredis/data/commands/memory-stats.md
new file mode 100644
index 0000000..4820c6a
--- /dev/null
+++ b/iredis/data/commands/memory-stats.md
@@ -0,0 +1,50 @@
+The `MEMORY STATS` command returns an @array-reply about the memory usage of the
+server.
+
+The information about memory usage is provided as metrics and their respective
+values. The following metrics are reported:
+
+- `peak.allocated`: Peak memory consumed by Redis in bytes (see `INFO`'s
+ `used_memory_peak`)
+- `total.allocated`: Total number of bytes allocated by Redis using its
+ allocator (see `INFO`'s `used_memory`)
+- `startup.allocated`: Initial amount of memory consumed by Redis at startup in
+ bytes (see `INFO`'s `used_memory_startup`)
+- `replication.backlog`: Size in bytes of the replication backlog (see `INFO`'s
+ `repl_backlog_active`)
+- `clients.slaves`: The total size in bytes of all replicas overheads (output
+ and query buffers, connection contexts)
+- `clients.normal`: The total size in bytes of all clients overheads (output and
+ query buffers, connection contexts)
+- `aof.buffer`: The summed size in bytes of the current and rewrite AOF buffers
+ (see `INFO`'s `aof_buffer_length` and `aof_rewrite_buffer_length`,
+ respectively)
+- `lua.caches`: the summed size in bytes of the overheads of the Lua scripts'
+ caches
+- `dbXXX`: For each of the server's databases, the overheads of the main and
+ expiry dictionaries (`overhead.hashtable.main` and
+ `overhead.hashtable.expires`, respectively) are reported in bytes
+- `overhead.total`: The sum of all overheads, i.e. `startup.allocated`,
+ `replication.backlog`, `clients.slaves`, `clients.normal`, `aof.buffer` and
+ those of the internal data structures that are used in managing the Redis
+ keyspace (see `INFO`'s `used_memory_overhead`)
+- `keys.count`: The total number of keys stored across all databases in the
+ server
+- `keys.bytes-per-key`: The ratio between **net memory usage**
+ (`total.allocated` minus `startup.allocated`) and `keys.count`
+- `dataset.bytes`: The size in bytes of the dataset, i.e. `overhead.total`
+ subtracted from `total.allocated` (see `INFO`'s `used_memory_dataset`)
+- `dataset.percentage`: The percentage of `dataset.bytes` out of the net memory
+ usage
+- `peak.percentage`: The percentage of `peak.allocated` out of `total.allocated`
+- `fragmentation`: See `INFO`'s `mem_fragmentation_ratio`
+
+@return
+
+@array-reply: nested list of memory usage metrics and their values
+
+**A note about the word slave used in this man page**: Starting with Redis 5, if
+not for backward compatibility, the Redis project no longer uses the word slave.
+Unfortunately in this command the word slave is part of the protocol, so we'll
+be able to remove such occurrences only when this API is naturally deprecated.
diff --git a/iredis/data/commands/memory-usage.md b/iredis/data/commands/memory-usage.md
new file mode 100644
index 0000000..73e26d7
--- /dev/null
+++ b/iredis/data/commands/memory-usage.md
@@ -0,0 +1,40 @@
+The `MEMORY USAGE` command reports the number of bytes that a key and its value
+require to be stored in RAM.
+
+The reported usage is the total of memory allocations for data and
+administrative overheads that a key and its value require.
+
+For nested data types, the optional `SAMPLES` option can be provided, where
+`count` is the number of sampled nested values. By default, this option is set
+to `5`. To sample all of the nested values, use `SAMPLES 0`.
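+
+As a hypothetical illustration (the key name below is an assumption, and the
+`redis-py` client is used only as a convenient way to send the raw command),
+`SAMPLES` is passed like any other argument:
+
+```python
+import redis
+
+r = redis.Redis()  # assumes a Redis server on localhost
+
+r.rpush("mylist", *range(1000))  # a nested data type with many elements
+
+# Default sampling (5 nested values) versus exhaustive sampling (SAMPLES 0).
+approx = r.execute_command("MEMORY", "USAGE", "mylist")
+exact = r.execute_command("MEMORY", "USAGE", "mylist", "SAMPLES", 0)
+print(approx, exact)  # both are byte counts; SAMPLES 0 is costlier to compute
+```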
+
+@examples
+
+With Redis v4.0.1 64-bit and **jemalloc**, the empty string measures as follows:
+
+```
+> SET "" ""
+OK
+> MEMORY USAGE ""
+(integer) 51
+```
+
+These bytes are pure overhead at the moment as no actual data is stored, and are
+used for maintaining the internal data structures of the server. Longer keys and
+values show asymptotically linear usage.
+
+```
+> SET foo bar
+OK
+> MEMORY USAGE foo
+(integer) 54
+> SET cento 0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
+OK
+> MEMORY USAGE cento
+(integer) 153
+```
+
+@return
+
+@integer-reply: the memory usage in bytes
diff --git a/iredis/data/commands/mget.md b/iredis/data/commands/mget.md
new file mode 100644
index 0000000..130f935
--- /dev/null
+++ b/iredis/data/commands/mget.md
@@ -0,0 +1,15 @@
+Returns the values of all specified keys. For every key that does not hold a
+string value or does not exist, the special value `nil` is returned. Because of
+this, the operation never fails.
+
+@return
+
+@array-reply: list of values at the specified keys.
+
+@examples
+
+```cli
+SET key1 "Hello"
+SET key2 "World"
+MGET key1 key2 nonexisting
+```
diff --git a/iredis/data/commands/migrate.md b/iredis/data/commands/migrate.md
new file mode 100644
index 0000000..cc4561c
--- /dev/null
+++ b/iredis/data/commands/migrate.md
@@ -0,0 +1,78 @@
+Atomically transfer a key from a source Redis instance to a destination Redis
+instance. On success the key is deleted from the original instance and is
+guaranteed to exist in the target instance.
+
+The command is atomic and blocks the two instances for the time required to
+transfer the key. At any given time the key will appear to exist in either the
+source or the destination instance, unless a timeout error occurs. In 3.2 and
+above, multiple keys can be pipelined in a single call to `MIGRATE` by passing
+the empty string ("") as key and adding the `KEYS` clause.
+
+The command internally uses `DUMP` to generate the serialized version of the key
+value, and `RESTORE` in order to synthesize the key in the target instance. The
+source instance acts as a client for the target instance. If the target instance
+returns OK to the `RESTORE` command, the source instance deletes the key using
+`DEL`.
+
+The timeout specifies the maximum idle time in any moment of the communication
+with the destination instance in milliseconds. This means that the operation
+does not need to be completed within the specified amount of milliseconds, but
+that the transfer should make progress without blocking for more than the
+specified amount of milliseconds.
+
+`MIGRATE` needs to perform I/O operations and to honor the specified timeout.
+When there is an I/O error during the transfer or if the timeout is reached the
+operation is aborted and the special error `IOERR` is returned. When this happens
+the following two cases are possible:
+
+- The key may be on both the instances.
+- The key may be only in the source instance.
+
+It is not possible for the key to get lost in the event of a timeout, but the
+client calling `MIGRATE`, in the event of a timeout error, should check if the
+key is _also_ present in the target instance and act accordingly.
+
+When any other error is returned (starting with `ERR`) `MIGRATE` guarantees that
+the key is still only present in the originating instance (unless a key with the
+same name was also _already_ present on the target instance).
+
+If there are no keys to migrate in the source instance `NOKEY` is returned.
+Because missing keys are possible in normal conditions, from expiry for example,
+`NOKEY` isn't an error.
+
+## Migrating multiple keys with a single command call
+
+Starting with Redis 3.0.6 `MIGRATE` supports a new bulk-migration mode that uses
+pipelining in order to migrate multiple keys between instances without
+incurring the round trip time latency and other overheads involved in moving
+each key with a single `MIGRATE` call.
+
+In order to enable this form, the `KEYS` option is used, and the normal _key_
+argument is set to an empty string. The actual key names will be provided after
+the `KEYS` argument itself, like in the following example:
+
+ MIGRATE 192.168.1.34 6379 "" 0 5000 KEYS key1 key2 key3
+
+When this form is used the `NOKEY` status code is only returned when none of the
+keys is present in the instance, otherwise the command is executed, even if just
+a single key exists.
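+
+For illustration only, the same bulk form could be sent from a client as a raw
+command; the addresses, timeout and key names below are placeholders, not
+values taken from a real deployment:
+
+```python
+import redis
+
+src = redis.Redis(host="127.0.0.1", port=6379)  # assumed source instance
+
+# Empty string as the key argument, destination DB 0, 5000 ms timeout,
+# then the actual key names after the KEYS option.
+reply = src.execute_command(
+    "MIGRATE", "192.168.1.34", 6379, "", 0, 5000,
+    "KEYS", "key1", "key2", "key3",
+)
+print(reply)  # OK if at least one key was migrated, NOKEY if none existed
+```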
+
+## Options
+
+- `COPY` -- Do not remove the key from the local instance.
+- `REPLACE` -- Replace existing key on the remote instance.
+- `KEYS` -- If the key argument is an empty string, the command will instead
+ migrate all the keys that follow the `KEYS` option (see the above section for
+ more info).
+- `AUTH` -- Authenticate with the given password to the remote instance.
+- `AUTH2` -- Authenticate with the given username and password pair (Redis 6 or
+ greater ACL auth style).
+
+`COPY` and `REPLACE` are available only in 3.0 and above. `KEYS` is available
+starting with Redis 3.0.6. `AUTH` is available starting with Redis 4.0.7.
+`AUTH2` is available starting with Redis 6.0.0.
+
+@return
+
+@simple-string-reply: The command returns OK on success, or `NOKEY` if no keys
+were found in the source instance.
diff --git a/iredis/data/commands/module-list.md b/iredis/data/commands/module-list.md
new file mode 100644
index 0000000..d951f23
--- /dev/null
+++ b/iredis/data/commands/module-list.md
@@ -0,0 +1,10 @@
+Returns information about the modules loaded to the server.
+
+@return
+
+@array-reply: list of loaded modules. Each element in the list represents a
+module, and is in itself a list of property names and their values. The
+following properties are reported for each loaded module:
+
+- `name`: Name of the module
+- `ver`: Version of the module
diff --git a/iredis/data/commands/module-load.md b/iredis/data/commands/module-load.md
new file mode 100644
index 0000000..c5919c0
--- /dev/null
+++ b/iredis/data/commands/module-load.md
@@ -0,0 +1,13 @@
+Loads a module from a dynamic library at runtime.
+
+This command loads and initializes the Redis module from the dynamic library
+specified by the `path` argument. The `path` should be the absolute path of the
+library, including the full filename. Any additional arguments are passed
+unmodified to the module.
+
+**Note**: modules can also be loaded at server startup with the `loadmodule`
+configuration directive in `redis.conf`.
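+
+As a sketch only (the module path and arguments below are hypothetical), the
+command can be issued from a client with the generic command interface:
+
+```python
+import redis
+
+r = redis.Redis()  # assumes a Redis server on localhost
+
+# Absolute path to the shared library, followed by any module arguments.
+r.execute_command("MODULE", "LOAD", "/path/to/mymodule.so", "arg1", "arg2")
+print(r.execute_command("MODULE", "LIST"))  # the new module should be listed
+```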
+
+@return
+
+@simple-string-reply: `OK` if module was loaded.
diff --git a/iredis/data/commands/module-unload.md b/iredis/data/commands/module-unload.md
new file mode 100644
index 0000000..c5ce38e
--- /dev/null
+++ b/iredis/data/commands/module-unload.md
@@ -0,0 +1,13 @@
+Unloads a module.
+
+This command unloads the module specified by `name`. Note that the module's name
+is reported by the `MODULE LIST` command, and may differ from the dynamic
+library's filename.
+
+Known limitations:
+
+- Modules that register custom data types can not be unloaded.
+
+@return
+
+@simple-string-reply: `OK` if module was unloaded.
diff --git a/iredis/data/commands/monitor.md b/iredis/data/commands/monitor.md
new file mode 100644
index 0000000..7900787
--- /dev/null
+++ b/iredis/data/commands/monitor.md
@@ -0,0 +1,93 @@
+`MONITOR` is a debugging command that streams back every command processed by
+the Redis server. It can help in understanding what is happening to the
+database. This command can both be used via `redis-cli` and via `telnet`.
+
+The ability to see all the requests processed by the server is useful in order
+to spot bugs in an application both when using Redis as a database and as a
+distributed caching system.
+
+```
+$ redis-cli monitor
+1339518083.107412 [0 127.0.0.1:60866] "keys" "*"
+1339518087.877697 [0 127.0.0.1:60866] "dbsize"
+1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6"
+1339518096.506257 [0 127.0.0.1:60866] "get" "x"
+1339518099.363765 [0 127.0.0.1:60866] "del" "x"
+1339518100.544926 [0 127.0.0.1:60866] "get" "x"
+```
+
+Use `SIGINT` (Ctrl-C) to stop a `MONITOR` stream running via `redis-cli`.
+
+```
+$ telnet localhost 6379
+Trying 127.0.0.1...
+Connected to localhost.
+Escape character is '^]'.
+MONITOR
++OK
++1339518083.107412 [0 127.0.0.1:60866] "keys" "*"
++1339518087.877697 [0 127.0.0.1:60866] "dbsize"
++1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6"
++1339518096.506257 [0 127.0.0.1:60866] "get" "x"
++1339518099.363765 [0 127.0.0.1:60866] "del" "x"
++1339518100.544926 [0 127.0.0.1:60866] "get" "x"
+QUIT
++OK
+Connection closed by foreign host.
+```
+
+Manually issue the `QUIT` command to stop a `MONITOR` stream running via
+`telnet`.
+
+## Commands not logged by MONITOR
+
+For security reasons, administrative commands are not logged in `MONITOR`'s
+output.
+
+Furthermore, the following commands are also not logged:
+
+- `AUTH`
+- `EXEC`
+- `HELLO`
+- `QUIT`
+
+## Cost of running MONITOR
+
+Because `MONITOR` streams back **all** commands, its use comes at a cost. The
+following (totally unscientific) benchmark numbers illustrate what the cost of
+running `MONITOR` can be.
+
+Benchmark result **without** `MONITOR` running:
+
+```
+$ src/redis-benchmark -c 10 -n 100000 -q
+PING_INLINE: 101936.80 requests per second
+PING_BULK: 102880.66 requests per second
+SET: 95419.85 requests per second
+GET: 104275.29 requests per second
+INCR: 93283.58 requests per second
+```
+
+Benchmark result **with** `MONITOR` running (`redis-cli monitor > /dev/null`):
+
+```
+$ src/redis-benchmark -c 10 -n 100000 -q
+PING_INLINE: 58479.53 requests per second
+PING_BULK: 59136.61 requests per second
+SET: 41823.50 requests per second
+GET: 45330.91 requests per second
+INCR: 41771.09 requests per second
+```
+
+In this particular case, running a single `MONITOR` client can reduce the
+throughput by more than 50%. Running more `MONITOR` clients will reduce
+throughput even more.
+
+@return
+
+**Non standard return value**, just dumps the received commands in an infinite
+flow.
+
+@history
+
+- `>=6.0`: `AUTH` excluded from the command's output.
diff --git a/iredis/data/commands/move.md b/iredis/data/commands/move.md
new file mode 100644
index 0000000..e007a18
--- /dev/null
+++ b/iredis/data/commands/move.md
@@ -0,0 +1,11 @@
+Move `key` from the currently selected database (see `SELECT`) to the specified
+destination database. When `key` already exists in the destination database, or
+it does not exist in the source database, it does nothing. It is possible to use
+`MOVE` as a locking primitive because of this.
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if `key` was moved.
+- `0` if `key` was not moved.
diff --git a/iredis/data/commands/mset.md b/iredis/data/commands/mset.md
new file mode 100644
index 0000000..9c17f86
--- /dev/null
+++ b/iredis/data/commands/mset.md
@@ -0,0 +1,18 @@
+Sets the given keys to their respective values. `MSET` replaces existing values
+with new values, just as regular `SET`. See `MSETNX` if you don't want to
+overwrite existing values.
+
+`MSET` is atomic, so all given keys are set at once. It is not possible for
+clients to see that some of the keys were updated while others are unchanged.
+
+@return
+
+@simple-string-reply: always `OK` since `MSET` can't fail.
+
+@examples
+
+```cli
+MSET key1 "Hello" key2 "World"
+GET key1
+GET key2
+```
diff --git a/iredis/data/commands/msetnx.md b/iredis/data/commands/msetnx.md
new file mode 100644
index 0000000..e332223
--- /dev/null
+++ b/iredis/data/commands/msetnx.md
@@ -0,0 +1,24 @@
+Sets the given keys to their respective values. `MSETNX` will not perform any
+operation at all even if just a single key already exists.
+
+Because of this semantic `MSETNX` can be used in order to set different keys
+representing different fields of a unique logical object in a way that ensures
+that either all the fields or none at all are set.
+
+`MSETNX` is atomic, so all given keys are set at once. It is not possible for
+clients to see that some of the keys were updated while others are unchanged.
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if all the keys were set.
+- `0` if no key was set (at least one key already existed).
+
+@examples
+
+```cli
+MSETNX key1 "Hello" key2 "there"
+MSETNX key2 "new" key3 "world"
+MGET key1 key2 key3
+```
diff --git a/iredis/data/commands/multi.md b/iredis/data/commands/multi.md
new file mode 100644
index 0000000..9ed46b6
--- /dev/null
+++ b/iredis/data/commands/multi.md
@@ -0,0 +1,8 @@
+Marks the start of a [transaction][tt] block. Subsequent commands will be queued
+for atomic execution using `EXEC`.
+
+[tt]: /topics/transactions
+
+@return
+
+@simple-string-reply: always `OK`.
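+
+For illustration, a minimal sketch of queuing commands between `MULTI` and
+`EXEC` using the `redis-py` client, whose pipeline wraps queued commands in a
+transaction by default (the key name is an assumption):
+
+```python
+import redis
+
+r = redis.Redis()  # assumes a Redis server on localhost
+
+# transaction=True (the default) sends MULTI, the queued commands, then EXEC.
+pipe = r.pipeline(transaction=True)
+pipe.set("counter", 0)
+pipe.incr("counter")
+pipe.incr("counter")
+print(pipe.execute())  # one reply per queued command, e.g. [True, 1, 2]
+```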
diff --git a/iredis/data/commands/object.md b/iredis/data/commands/object.md
new file mode 100644
index 0000000..e5321f7
--- /dev/null
+++ b/iredis/data/commands/object.md
@@ -0,0 +1,80 @@
+The `OBJECT` command allows you to inspect the internals of Redis Objects associated
+with keys. It is useful for debugging or to understand if your keys are using
+the specially encoded data types to save space. Your application may also use
+the information reported by the `OBJECT` command to implement application level
+key eviction policies when using Redis as a Cache.
+
+The `OBJECT` command supports multiple sub commands:
+
+- `OBJECT REFCOUNT <key>` returns the number of references of the value
+ associated with the specified key. This command is mainly useful for
+ debugging.
+- `OBJECT ENCODING <key>` returns the kind of internal representation used in
+ order to store the value associated with a key.
+- `OBJECT IDLETIME <key>` returns the number of seconds that the object stored
+  at the specified key has been idle (not requested by read or write operations).
+ While the value is returned in seconds the actual resolution of this timer is
+ 10 seconds, but may vary in future implementations. This subcommand is
+ available when `maxmemory-policy` is set to an LRU policy or `noeviction` and
+ `maxmemory` is set.
+- `OBJECT FREQ <key>` returns the logarithmic access frequency counter of the
+ object stored at the specified key. This subcommand is available when
+ `maxmemory-policy` is set to an LFU policy.
+- `OBJECT HELP` returns a succinct help text.
+
+Objects can be encoded in different ways:
+
+- Strings can be encoded as `raw` (normal string encoding) or `int` (strings
+ representing integers in a 64 bit signed interval are encoded in this way in
+ order to save space).
+- Lists can be encoded as `ziplist` or `linkedlist`. The `ziplist` is the
+ special representation that is used to save space for small lists.
+- Sets can be encoded as `intset` or `hashtable`. The `intset` is a special
+ encoding used for small sets composed solely of integers.
+- Hashes can be encoded as `ziplist` or `hashtable`. The `ziplist` is a special
+ encoding used for small hashes.
+- Sorted Sets can be encoded as `ziplist` or `skiplist` format. As with the List
+  type, small sorted sets can be specially encoded using `ziplist`, while the
+  `skiplist` encoding is the one that works with sorted sets of any size.
+
+All the specially encoded types are automatically converted to the general type
+once you perform an operation that makes it impossible for Redis to retain the
+space saving encoding.
+
+@return
+
+Different return values are used for different subcommands.
+
+- Subcommands `refcount` and `idletime` return integers.
+- Subcommand `encoding` returns a bulk reply.
+
+If the object you try to inspect is missing, a null bulk reply is returned.
+
+@examples
+
+```
+redis> lpush mylist "Hello World"
+(integer) 4
+redis> object refcount mylist
+(integer) 1
+redis> object encoding mylist
+"ziplist"
+redis> object idletime mylist
+(integer) 10
+```
+
+In the following example you can see how the encoding changes once Redis is no
+longer able to use the space saving encoding.
+
+```
+redis> set foo 1000
+OK
+redis> object encoding foo
+"int"
+redis> append foo bar
+(integer) 7
+redis> get foo
+"1000bar"
+redis> object encoding foo
+"raw"
+```
diff --git a/iredis/data/commands/persist.md b/iredis/data/commands/persist.md
new file mode 100644
index 0000000..4819230
--- /dev/null
+++ b/iredis/data/commands/persist.md
@@ -0,0 +1,20 @@
+Remove the existing timeout on `key`, turning the key from _volatile_ (a key
+with an expire set) to _persistent_ (a key that will never expire as no timeout
+is associated).
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if the timeout was removed.
+- `0` if `key` does not exist or does not have an associated timeout.
+
+@examples
+
+```cli
+SET mykey "Hello"
+EXPIRE mykey 10
+TTL mykey
+PERSIST mykey
+TTL mykey
+```
diff --git a/iredis/data/commands/pexpire.md b/iredis/data/commands/pexpire.md
new file mode 100644
index 0000000..ae5f775
--- /dev/null
+++ b/iredis/data/commands/pexpire.md
@@ -0,0 +1,18 @@
+This command works exactly like `EXPIRE` but the time to live of the key is
+specified in milliseconds instead of seconds.
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if the timeout was set.
+- `0` if `key` does not exist.
+
+@examples
+
+```cli
+SET mykey "Hello"
+PEXPIRE mykey 1500
+TTL mykey
+PTTL mykey
+```
diff --git a/iredis/data/commands/pexpireat.md b/iredis/data/commands/pexpireat.md
new file mode 100644
index 0000000..4b3ebb7
--- /dev/null
+++ b/iredis/data/commands/pexpireat.md
@@ -0,0 +1,18 @@
+`PEXPIREAT` has the same effect and semantic as `EXPIREAT`, but the Unix time at
+which the key will expire is specified in milliseconds instead of seconds.
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if the timeout was set.
+- `0` if `key` does not exist.
+
+@examples
+
+```cli
+SET mykey "Hello"
+PEXPIREAT mykey 1555555555005
+TTL mykey
+PTTL mykey
+```
diff --git a/iredis/data/commands/pfadd.md b/iredis/data/commands/pfadd.md
new file mode 100644
index 0000000..e8e3f03
--- /dev/null
+++ b/iredis/data/commands/pfadd.md
@@ -0,0 +1,33 @@
+Adds all the element arguments to the HyperLogLog data structure stored at the
+variable name specified as first argument.
+
+As a side effect of this command the HyperLogLog internals may be updated to
+reflect a different estimation of the number of unique items added so far (the
+cardinality of the set).
+
+If the approximated cardinality estimated by the HyperLogLog changed after
+executing the command, `PFADD` returns 1, otherwise 0 is returned. The command
+automatically creates an empty HyperLogLog structure (that is, a Redis String of
+a specified length and with a given encoding) if the specified key does not
+exist.
+
+Calling the command without elements, passing just the variable name, is valid:
+it performs no operation if the variable already exists, or just creates the
+data structure if the key does not exist (in the latter case 1 is returned).
+
+For an introduction to HyperLogLog data structure check the `PFCOUNT` command
+page.
+
+@return
+
+@integer-reply, specifically:
+
+- 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise.
+
+@examples
+
+```cli
+PFADD hll a b c d e f g
+PFCOUNT hll
+```
diff --git a/iredis/data/commands/pfcount.md b/iredis/data/commands/pfcount.md
new file mode 100644
index 0000000..e39b19e
--- /dev/null
+++ b/iredis/data/commands/pfcount.md
@@ -0,0 +1,93 @@
+When called with a single key, returns the approximated cardinality computed by
+the HyperLogLog data structure stored at the specified variable, which is 0 if
+the variable does not exist.
+
+When called with multiple keys, returns the approximated cardinality of the
+union of the HyperLogLogs passed, by internally merging the HyperLogLogs stored
+at the provided keys into a temporary HyperLogLog.
+
+The HyperLogLog data structure can be used in order to count **unique** elements
+in a set using just a small constant amount of memory, specifically 12k bytes
+for every HyperLogLog (plus a few bytes for the key itself).
+
+The returned cardinality of the observed set is not exact, but approximated with
+a standard error of 0.81%.
+
+For example in order to take the count of all the unique search queries
+performed in a day, a program needs to call `PFADD` every time a query is
+processed. The estimated number of unique queries can be retrieved with
+`PFCOUNT` at any time.
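+
+A minimal sketch of that pattern with the `redis-py` client (the key name and
+the sample queries are assumptions):
+
+```python
+import redis
+
+r = redis.Redis()  # assumes a Redis server on localhost
+
+
+def record_query(query: str) -> None:
+    # Every processed query is added to the day's HyperLogLog.
+    r.pfadd("queries:2020-05-01", query)
+
+
+for q in ["redis", "hyperloglog", "redis", "pfcount"]:
+    record_query(q)
+
+print(r.pfcount("queries:2020-05-01"))  # approximately 3 unique queries
+```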
+
+Note: as a side effect of calling this function, it is possible that the
+HyperLogLog is modified, since the last 8 bytes encode the latest computed
+cardinality for caching purposes. So `PFCOUNT` is technically a write command.
+
+@return
+
+@integer-reply, specifically:
+
+- The approximated number of unique elements observed via `PFADD`.
+
+@examples
+
+```cli
+PFADD hll foo bar zap
+PFADD hll zap zap zap
+PFADD hll foo bar
+PFCOUNT hll
+PFADD some-other-hll 1 2 3
+PFCOUNT hll some-other-hll
+```
+
+## Performance
+
+When `PFCOUNT` is called with a single key, performance is excellent even if in
+theory the constant time needed to process a dense HyperLogLog is high. This is
+possible because `PFCOUNT` uses caching in order to remember the previously
+computed cardinality, which rarely changes because most `PFADD` operations will
+not update any register. Hundreds of operations per second are possible.
+
+When `PFCOUNT` is called with multiple keys, an on-the-fly merge of the
+HyperLogLogs is performed, which is slow; moreover the cardinality of the union
+can't be cached, so when used with multiple keys `PFCOUNT` may take a time on
+the order of magnitude of a millisecond, and should not be abused.
+
+The user should keep in mind that single-key and multiple-key executions of
+this command are semantically different and have different performance
+characteristics.
+
+## HyperLogLog representation
+
+Redis HyperLogLogs are represented using a double representation: the _sparse_
+representation suitable for HLLs counting a small number of elements (resulting
+in a small number of registers set to non-zero value), and a _dense_
+representation suitable for higher cardinalities. Redis automatically switches
+from the sparse to the dense representation when needed.
+
+The sparse representation uses a run-length encoding optimized to store
+efficiently a big number of registers set to zero. The dense representation is a
+Redis string of 12288 bytes in order to store 16384 6-bit counters. The need for
+the double representation comes from the fact that using 12k (which is the dense
+representation memory requirement) to encode just a few registers for smaller
+cardinalities is extremely suboptimal.
+
+Both representations are prefixed with a 16-byte header that includes a magic,
+an encoding / version field, and the cached cardinality estimation, stored in
+little endian format (the most significant bit is 1 if the estimation is
+invalid because the HyperLogLog was updated since the cardinality was last
+computed).
+
+The HyperLogLog, being a Redis string, can be retrieved with `GET` and restored
+with `SET`. Calling `PFADD`, `PFCOUNT` or `PFMERGE` commands with a corrupted
+HyperLogLog is never a problem: it may return random values but does not affect
+the stability of the server. Most of the time, when a sparse representation is
+corrupted, the server recognizes the corruption and returns an error.
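+
+As an illustration of this property only (key names are assumptions), a
+HyperLogLog value can be copied around as an opaque string; note that the raw
+value is binary, so replies should be kept as bytes:
+
+```python
+import redis
+
+r = redis.Redis()  # assumes a Redis server on localhost; replies stay as bytes
+
+r.pfadd("hll:source", "a", "b", "c")
+
+raw = r.get("hll:source")  # the HyperLogLog is just a Redis string
+r.set("hll:backup", raw)   # restoring it elsewhere is a plain SET
+
+print(r.pfcount("hll:backup"))  # the copy reports the same approximate count
+```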
+
+The representation is neutral from the point of view of the processor word size
+and endianness, so the same representation is used by 32 bit and 64 bit
+processors, big endian or little endian.
+
+More details about the Redis HyperLogLog implementation can be found in
+[this blog post](http://antirez.com/news/75). The source code of the
+implementation in the `hyperloglog.c` file is also easy to read and understand,
+and includes a full specification for the exact encoding used for the sparse and
+dense representations.
diff --git a/iredis/data/commands/pfmerge.md b/iredis/data/commands/pfmerge.md
new file mode 100644
index 0000000..90e1dc9
--- /dev/null
+++ b/iredis/data/commands/pfmerge.md
@@ -0,0 +1,22 @@
+Merge multiple HyperLogLog values into a unique value that will approximate the
+cardinality of the union of the observed Sets of the source HyperLogLog
+structures.
+
+The computed merged HyperLogLog is set to the destination variable, which is
+created if it does not exist (defaulting to an empty HyperLogLog).
+
+If the destination variable exists, it is treated as one of the source sets and
+its cardinality will be included in the cardinality of the computed HyperLogLog.
+
+@return
+
+@simple-string-reply: The command just returns `OK`.
+
+@examples
+
+```cli
+PFADD hll1 foo bar zap a
+PFADD hll2 a b c foo
+PFMERGE hll3 hll1 hll2
+PFCOUNT hll3
+```
diff --git a/iredis/data/commands/ping.md b/iredis/data/commands/ping.md
new file mode 100644
index 0000000..251cc07
--- /dev/null
+++ b/iredis/data/commands/ping.md
@@ -0,0 +1,20 @@
+Returns `PONG` if no argument is provided, otherwise returns a copy of the
+argument as a bulk. This command is often used to test if a connection is still
+alive, or to measure latency.
+
+If the client is subscribed to a channel or a pattern, it will instead return a
+multi-bulk with a "pong" in the first position and an empty bulk in the second
+position, unless an argument is provided in which case it returns a copy of the
+argument.
+
+@return
+
+@simple-string-reply
+
+@examples
+
+```cli
+PING
+
+PING "hello world"
+```
diff --git a/iredis/data/commands/psetex.md b/iredis/data/commands/psetex.md
new file mode 100644
index 0000000..3e9988e
--- /dev/null
+++ b/iredis/data/commands/psetex.md
@@ -0,0 +1,10 @@
+`PSETEX` works exactly like `SETEX` with the sole difference that the expire
+time is specified in milliseconds instead of seconds.
+
+@examples
+
+```cli
+PSETEX mykey 1000 "Hello"
+PTTL mykey
+GET mykey
+```
diff --git a/iredis/data/commands/psubscribe.md b/iredis/data/commands/psubscribe.md
new file mode 100644
index 0000000..fb14ca1
--- /dev/null
+++ b/iredis/data/commands/psubscribe.md
@@ -0,0 +1,9 @@
+Subscribes the client to the given patterns.
+
+Supported glob-style patterns:
+
+- `h?llo` subscribes to `hello`, `hallo` and `hxllo`
+- `h*llo` subscribes to `hllo` and `heeeello`
+- `h[ae]llo` subscribes to `hello` and `hallo`, but not `hillo`
+
+Use `\` to escape special characters if you want to match them verbatim.
diff --git a/iredis/data/commands/psync.md b/iredis/data/commands/psync.md
new file mode 100644
index 0000000..756a3b6
--- /dev/null
+++ b/iredis/data/commands/psync.md
@@ -0,0 +1,14 @@
+Initiates a replication stream from the master.
+
+The `PSYNC` command is called by Redis replicas for initiating a replication
+stream from the master.
+
+For more information about replication in Redis please check the [replication
+page][tr].
+
+[tr]: /topics/replication
+
+@return
+
+**Non standard return value**, a bulk transfer of the data followed by `PING`
+and write requests from the master.
diff --git a/iredis/data/commands/pttl.md b/iredis/data/commands/pttl.md
new file mode 100644
index 0000000..49eea99
--- /dev/null
+++ b/iredis/data/commands/pttl.md
@@ -0,0 +1,24 @@
+Like `TTL` this command returns the remaining time to live of a key that has an
+expire set, with the sole difference that `TTL` returns the amount of remaining
+time in seconds while `PTTL` returns it in milliseconds.
+
+In Redis 2.6 or older the command returns `-1` if the key does not exist or if
+the key exists but has no associated expire.
+
+Starting with Redis 2.8 the return value in case of error changed:
+
+- The command returns `-2` if the key does not exist.
+- The command returns `-1` if the key exists but has no associated expire.
+
+@return
+
+@integer-reply: TTL in milliseconds, or a negative value in order to signal an
+error (see the description above).
+
+@examples
+
+```cli
+SET mykey "Hello"
+EXPIRE mykey 1
+PTTL mykey
+```
diff --git a/iredis/data/commands/publish.md b/iredis/data/commands/publish.md
new file mode 100644
index 0000000..e4b338a
--- /dev/null
+++ b/iredis/data/commands/publish.md
@@ -0,0 +1,5 @@
+Posts a message to the given channel.
+
+@return
+
+@integer-reply: the number of clients that received the message.
diff --git a/iredis/data/commands/pubsub.md b/iredis/data/commands/pubsub.md
new file mode 100644
index 0000000..0a8c0a2
--- /dev/null
+++ b/iredis/data/commands/pubsub.md
@@ -0,0 +1,44 @@
+The `PUBSUB` command is an introspection command that allows you to inspect
+the state of the Pub/Sub subsystem. It is composed of subcommands that are
+documented separately. The general form is:
+
+ PUBSUB <subcommand> ... args ...
+
+# `PUBSUB CHANNELS [pattern]`
+
+Lists the currently _active channels_. An active channel is a Pub/Sub channel
+with one or more subscribers (not including clients subscribed to patterns).
+
+If no `pattern` is specified, all the channels are listed; otherwise only the
+channels matching the specified glob-style pattern are listed.
+
+@return
+
+@array-reply: a list of active channels, optionally matching the specified
+pattern.
+
+# `PUBSUB NUMSUB [channel-1 ... channel-N]`
+
+Returns the number of subscribers (not counting clients subscribed to patterns)
+for the specified channels.
+
+@return
+
+@array-reply: a list of channels and number of subscribers for every channel.
+The format is channel, count, channel, count, ..., so the list is flat. The
+order in which the channels are listed is the same as the order of the channels
+specified in the command call.
+
+Note that it is valid to call this command without channels. In this case it
+will just return an empty list.
+
+# `PUBSUB NUMPAT`
+
+Returns the number of subscriptions to patterns (that are performed using the
+`PSUBSCRIBE` command). Note that this is not just the count of clients
+subscribed to patterns but the total number of patterns all the clients are
+subscribed to.
+
+@return
+
+@integer-reply: the number of patterns all the clients are subscribed to.
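+
+Purely as an illustration (channel names are assumptions), the three
+subcommands are exposed by client libraries such as `redis-py` roughly as
+follows:
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)  # assumes a Redis server on localhost
+
+sub = r.pubsub()
+sub.subscribe("news.tech")  # one active channel
+sub.psubscribe("news.*")    # one pattern subscription
+
+print(r.pubsub_channels())           # PUBSUB CHANNELS, e.g. ['news.tech']
+print(r.pubsub_numsub("news.tech"))  # PUBSUB NUMSUB, e.g. [('news.tech', 1)]
+print(r.pubsub_numpat())             # PUBSUB NUMPAT, e.g. 1
+```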
diff --git a/iredis/data/commands/punsubscribe.md b/iredis/data/commands/punsubscribe.md
new file mode 100644
index 0000000..03ed279
--- /dev/null
+++ b/iredis/data/commands/punsubscribe.md
@@ -0,0 +1,6 @@
+Unsubscribes the client from the given patterns, or from all of them if none is
+given.
+
+When no patterns are specified, the client is unsubscribed from all the
+previously subscribed patterns. In this case, a message for every unsubscribed
+pattern will be sent to the client.
diff --git a/iredis/data/commands/quit.md b/iredis/data/commands/quit.md
new file mode 100644
index 0000000..b6ce3bf
--- /dev/null
+++ b/iredis/data/commands/quit.md
@@ -0,0 +1,6 @@
+Ask the server to close the connection. The connection is closed as soon as all
+pending replies have been written to the client.
+
+@return
+
+@simple-string-reply: always OK.
diff --git a/iredis/data/commands/randomkey.md b/iredis/data/commands/randomkey.md
new file mode 100644
index 0000000..d823322
--- /dev/null
+++ b/iredis/data/commands/randomkey.md
@@ -0,0 +1,5 @@
+Return a random key from the currently selected database.
+
+@return
+
+@bulk-string-reply: the random key, or `nil` when the database is empty.
diff --git a/iredis/data/commands/readonly.md b/iredis/data/commands/readonly.md
new file mode 100644
index 0000000..00e8aad
--- /dev/null
+++ b/iredis/data/commands/readonly.md
@@ -0,0 +1,21 @@
+Enables read queries for a connection to a Redis Cluster replica node.
+
+Normally replica nodes will redirect clients to the authoritative master for
+the hash slot involved in a given command; however, clients can use replicas in
+order to scale reads using the `READONLY` command.
+
+`READONLY` tells a Redis Cluster replica node that the client is willing to read
+possibly stale data and is not interested in running write queries.
+
+When the connection is in readonly mode, the cluster will send a redirection to
+the client only if the operation involves keys not served by the replica's
+master node. This may happen because:
+
+1. The client sent a command about hash slots never served by the master of this
+ replica.
+2. The cluster was reconfigured (for example resharded) and the replica is no
+ longer able to serve commands for a given hash slot.
+
+@return
+
+@simple-string-reply
diff --git a/iredis/data/commands/readwrite.md b/iredis/data/commands/readwrite.md
new file mode 100644
index 0000000..f31a1f9
--- /dev/null
+++ b/iredis/data/commands/readwrite.md
@@ -0,0 +1,10 @@
+Disables read queries for a connection to a Redis Cluster slave node.
+
+Read queries against a Redis Cluster slave node are disabled by default, but you
+can use the `READONLY` command to change this behavior on a per-connection
+basis. The `READWRITE` command resets the readonly mode flag of a connection
+back to readwrite.
+
+@return
+
+@simple-string-reply
diff --git a/iredis/data/commands/rename.md b/iredis/data/commands/rename.md
new file mode 100644
index 0000000..5a38f63
--- /dev/null
+++ b/iredis/data/commands/rename.md
@@ -0,0 +1,26 @@
+Renames `key` to `newkey`. It returns an error when `key` does not exist. If
+`newkey` already exists it is overwritten. When this happens, `RENAME` executes
+an implicit `DEL` operation, so if the deleted key contains a very big value it
+may cause high latency even though `RENAME` itself is usually a constant-time
+operation.
+
+In Cluster mode, both `key` and `newkey` must be in the same **hash slot**,
+meaning that in practice only keys that have the same hash tag can be reliably
+renamed in cluster.
+
+@history
+
+- `<= 3.2.0`: Before Redis 3.2.0, an error is returned if source and destination
+ names are the same.
+
+@return
+
+@simple-string-reply
+
+@examples
+
+```cli
+SET mykey "Hello"
+RENAME mykey myotherkey
+GET myotherkey
+```
diff --git a/iredis/data/commands/renamenx.md b/iredis/data/commands/renamenx.md
new file mode 100644
index 0000000..8f98ec5
--- /dev/null
+++ b/iredis/data/commands/renamenx.md
@@ -0,0 +1,27 @@
+Renames `key` to `newkey` if `newkey` does not yet exist. It returns an error
+when `key` does not exist.
+
+In Cluster mode, both `key` and `newkey` must be in the same **hash slot**,
+meaning that in practice only keys that have the same hash tag can be reliably
+renamed in cluster.
+
+@history
+
+- `<= 3.2.0`: Before Redis 3.2.0, an error is returned if source and destination
+ names are the same.
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if `key` was renamed to `newkey`.
+- `0` if `newkey` already exists.
+
+@examples
+
+```cli
+SET mykey "Hello"
+SET myotherkey "World"
+RENAMENX mykey myotherkey
+GET myotherkey
+```
diff --git a/iredis/data/commands/replicaof.md b/iredis/data/commands/replicaof.md
new file mode 100644
index 0000000..c8c839c
--- /dev/null
+++ b/iredis/data/commands/replicaof.md
@@ -0,0 +1,21 @@
+The `REPLICAOF` command can change the replication settings of a replica on the
+fly.
+
+If a Redis server is already acting as replica, the command `REPLICAOF NO ONE`
+will turn off the replication, turning the Redis server into a MASTER. In the
+other form, `REPLICAOF hostname port` will make the server a replica of another
+server listening at the specified hostname and port.
+
+If a server is already a replica of some master, `REPLICAOF hostname port` will
+stop the replication against the old server and start the synchronization
+against the new one, discarding the old dataset.
+
+The form `REPLICAOF NO ONE` will stop replication, turning the server into a
+MASTER, but will not discard the dataset that was already replicated. So, if
+the old master stops working, it is possible to turn the replica into a master
+and point the application to this new master for reads and writes. Later, when
+the other Redis server is fixed, it can be reconfigured to work as a replica.
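+
+For illustration only (the host and port below are placeholders), the two forms
+could be issued from a client as raw commands:
+
+```python
+import redis
+
+r = redis.Redis()  # assumes a local Redis server to reconfigure
+
+# Make this server a replica of another instance (placeholder address).
+r.execute_command("REPLICAOF", "192.0.2.10", 6379)
+
+# Promote it back to a master, keeping the dataset it already replicated.
+r.execute_command("REPLICAOF", "NO", "ONE")
+```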
+
+@return
+
+@simple-string-reply
diff --git a/iredis/data/commands/restore.md b/iredis/data/commands/restore.md
new file mode 100644
index 0000000..50632e4
--- /dev/null
+++ b/iredis/data/commands/restore.md
@@ -0,0 +1,41 @@
+Create a key associated with a value that is obtained by deserializing the
+provided serialized value (obtained via `DUMP`).
+
+If `ttl` is 0 the key is created without any expire, otherwise the specified
+expire time (in milliseconds) is set.
+
+If the `ABSTTL` modifier was used, `ttl` should represent an absolute [Unix
+timestamp][hewowu] (in milliseconds) in which the key will expire. (Redis 5.0 or
+greater).
+
+[hewowu]: http://en.wikipedia.org/wiki/Unix_time
+
+For eviction purposes, you may use the `IDLETIME` or `FREQ` modifiers. See
+`OBJECT` for more information (Redis 5.0 or greater).
+
+`RESTORE` will return a "Target key name is busy" error when `key` already
+exists unless you use the `REPLACE` modifier (Redis 3.0 or greater).
+
+`RESTORE` checks the RDB version and data checksum. If they don't match an error
+is returned.
+
+@return
+
+@simple-string-reply: The command returns OK on success.
+
+@examples
+
+```
+redis> DEL mykey
+0
+redis> RESTORE mykey 0 "\n\x17\x17\x00\x00\x00\x12\x00\x00\x00\x03\x00\
+ x00\xc0\x01\x00\x04\xc0\x02\x00\x04\xc0\x03\x00\
+ xff\x04\x00u#<\xc0;.\xe9\xdd"
+OK
+redis> TYPE mykey
+list
+redis> LRANGE mykey 0 -1
+1) "1"
+2) "2"
+3) "3"
+```
diff --git a/iredis/data/commands/role.md b/iredis/data/commands/role.md
new file mode 100644
index 0000000..c42fa74
--- /dev/null
+++ b/iredis/data/commands/role.md
@@ -0,0 +1,105 @@
+Provide information on the role of a Redis instance in the context of
+replication, by returning whether the instance is currently a `master`, `slave`, or
+`sentinel`. The command also returns additional information about the state of
+the replication (if the role is master or slave) or the list of monitored master
+names (if the role is sentinel).
+
+## Output format
+
+The command returns an array of elements. The first element is the role of the
+instance, as one of the following three strings:
+
+- "master"
+- "slave"
+- "sentinel"
+
+The additional elements of the array depend on the role.
+
+## Master output
+
+An example of output when `ROLE` is called in a master instance:
+
+```
+1) "master"
+2) (integer) 3129659
+3) 1) 1) "127.0.0.1"
+ 2) "9001"
+ 3) "3129242"
+ 2) 1) "127.0.0.1"
+ 2) "9002"
+ 3) "3129543"
+```
+
+The master output is composed of the following parts:
+
+1. The string `master`.
+2. The current master replication offset, which is an offset that masters and
+   replicas share to understand, in partial resynchronizations, the part of the
+   replication stream the replicas need to fetch to continue.
+3. An array of three-element arrays representing the connected replicas. Every
+   sub-array contains the replica IP, port, and the last acknowledged
+   replication offset.
+
+## Output of the command on replicas
+
+An example of output when `ROLE` is called in a replica instance:
+
+```
+1) "slave"
+2) "127.0.0.1"
+3) (integer) 9000
+4) "connected"
+5) (integer) 3167038
+```
+
+The replica output is composed of the following parts:
+
+1. The string `slave`, because of backward compatibility (see note at the end of
+ this page).
+2. The IP of the master.
+3. The port number of the master.
+4. The state of the replication from the point of view of the master, that can
+ be `connect` (the instance needs to connect to its master), `connecting` (the
+ master-replica connection is in progress), `sync` (the master and replica are
+ trying to perform the synchronization), `connected` (the replica is online).
+5. The amount of data received from the replica so far in terms of master
+ replication offset.
+
+## Sentinel output
+
+An example of Sentinel output:
+
+```
+1) "sentinel"
+2) 1) "resque-master"
+ 2) "html-fragments-master"
+ 3) "stats-master"
+ 4) "metadata-master"
+```
+
+The sentinel output is composed of the following parts:
+
+1. The string `sentinel`.
+2. An array of master names monitored by this Sentinel instance.
+
+@return
+
+@array-reply: where the first element is one of `master`, `slave`, `sentinel`
+and the additional elements are role-specific as illustrated above.
+
+@history
+
+- This command was introduced in the middle of a Redis stable release,
+ specifically with Redis 2.8.12.
+
+@examples
+
+```cli
+ROLE
+```
+
+**A note about the word slave used in this man page**: Starting with Redis 5, if
+not for backward compatibility, the Redis project no longer uses the word slave.
+Unfortunately in this command the word slave is part of the protocol, so we'll
+be able to remove such occurrences only when this API is naturally deprecated.
diff --git a/iredis/data/commands/rpop.md b/iredis/data/commands/rpop.md
new file mode 100644
index 0000000..9c03902
--- /dev/null
+++ b/iredis/data/commands/rpop.md
@@ -0,0 +1,16 @@
+Removes and returns the last element of the list stored at `key`.
+
+@return
+
+@bulk-string-reply: the value of the last element, or `nil` when `key` does not
+exist.
+
+@examples
+
+```cli
+RPUSH mylist "one"
+RPUSH mylist "two"
+RPUSH mylist "three"
+RPOP mylist
+LRANGE mylist 0 -1
+```
diff --git a/iredis/data/commands/rpoplpush.md b/iredis/data/commands/rpoplpush.md
new file mode 100644
index 0000000..7a5b662
--- /dev/null
+++ b/iredis/data/commands/rpoplpush.md
@@ -0,0 +1,71 @@
+Atomically returns and removes the last element (tail) of the list stored at
+`source`, and pushes the element to the head (first element) of the list stored
+at `destination`.
+
+For example: consider `source` holding the list `a,b,c`, and `destination`
+holding the list `x,y,z`. Executing `RPOPLPUSH` results in `source` holding
+`a,b` and `destination` holding `c,x,y,z`.
+
+If `source` does not exist, the value `nil` is returned and no operation is
+performed. If `source` and `destination` are the same, the operation is
+equivalent to removing the last element from the list and pushing it as first
+element of the list, so it can be considered as a list rotation command.
+
+@return
+
+@bulk-string-reply: the element being popped and pushed.
+
+@examples
+
+```cli
+RPUSH mylist "one"
+RPUSH mylist "two"
+RPUSH mylist "three"
+RPOPLPUSH mylist myotherlist
+LRANGE mylist 0 -1
+LRANGE myotherlist 0 -1
+```
+
+## Pattern: Reliable queue
+
+Redis is often used as a messaging server to implement processing of background
+jobs or other kinds of messaging tasks. A simple form of queue is often
+obtained by pushing values into a list on the producer side, and waiting for
+these values on the consumer side using `RPOP` (with polling), or `BRPOP` if
+the client is better served by a blocking operation.
+
+However in this context the obtained queue is not _reliable_ as messages can be
+lost, for example when there is a network problem or if the consumer crashes
+just after the message is received but before it is processed.
+
+`RPOPLPUSH` (or `BRPOPLPUSH` for the blocking variant) offers a way to avoid
+this problem: the consumer fetches the message and at the same time pushes it
+into a _processing_ list. It will use the `LREM` command in order to remove the
+message from the _processing_ list once the message has been processed.
+
+An additional client may monitor the _processing_ list for items that remain
+there for too long, and push those timed-out items into the queue again if
+needed.
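+
+A minimal sketch of the reliable-queue pattern with the `redis-py` client (the
+queue and processing key names, and the `handle` function, are assumptions):
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)  # assumes a Redis server on localhost
+
+
+def handle(item: str) -> None:
+    print("processing", item)  # stand-in for the real work
+
+
+def work_once() -> None:
+    # Atomically move one item from the queue to the processing list.
+    item = r.rpoplpush("queue", "queue:processing")
+    if item is None:
+        return  # the queue is empty
+    handle(item)
+    # Acknowledge (remove from the processing list) only after the work is done.
+    r.lrem("queue:processing", 1, item)
+
+
+r.lpush("queue", "job-1", "job-2")
+work_once()
+```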
+
+## Pattern: Circular list
+
+Using `RPOPLPUSH` with the same source and destination key, a client can visit
+all the elements of an N-element list, one after the other, in O(N) without
+transferring the full list from the server to the client using a single `LRANGE`
+operation.
+
+The above pattern works even under the following two conditions:
+
+- There are multiple clients rotating the list: they'll fetch different
+  elements, until all the elements of the list are visited, and the process
+  restarts.
+- Other clients are actively pushing new items at the end of the list.
+
+The above makes it very simple to implement a system where a set of items must
+be processed by N workers continuously as fast as possible. An example is a
+monitoring system that must check that a set of web sites are reachable, with
+the smallest delay possible, using a number of parallel workers.
+
+Note that this implementation of workers is trivially scalable and reliable,
+because even if a message is lost the item is still in the queue and will be
+processed at the next iteration.
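+
+A sketch of the rotation itself, again with `redis-py` (the key name and the
+per-item check are assumptions):
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)  # assumes a Redis server on localhost
+
+r.rpush("sites", "a.example", "b.example", "c.example")
+
+# Rotate the list: pop from the tail and push the same element back to the
+# head. Multiple workers can run this loop concurrently and visit all items.
+for _ in range(r.llen("sites")):
+    site = r.rpoplpush("sites", "sites")
+    print("checking", site)  # stand-in for the real reachability check
+```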
diff --git a/iredis/data/commands/rpush.md b/iredis/data/commands/rpush.md
new file mode 100644
index 0000000..14a796b
--- /dev/null
+++ b/iredis/data/commands/rpush.md
@@ -0,0 +1,27 @@
+Insert all the specified values at the tail of the list stored at `key`. If
+`key` does not exist, it is created as an empty list before performing the push
+operation. When `key` holds a value that is not a list, an error is returned.
+
+It is possible to push multiple elements using a single command call just
+specifying multiple arguments at the end of the command. Elements are inserted
+one after the other to the tail of the list, from the leftmost element to the
+rightmost element. So for instance the command `RPUSH mylist a b c` will result
+in a list containing `a` as first element, `b` as second element and `c` as
+third element.
+
+@return
+
+@integer-reply: the length of the list after the push operation.
+
+@history
+
+- `>= 2.4`: Accepts multiple `element` arguments. In Redis versions older than
+ 2.4 it was possible to push a single value per command.
+
+@examples
+
+```cli
+RPUSH mylist "hello"
+RPUSH mylist "world"
+LRANGE mylist 0 -1
+```
diff --git a/iredis/data/commands/rpushx.md b/iredis/data/commands/rpushx.md
new file mode 100644
index 0000000..0345367
--- /dev/null
+++ b/iredis/data/commands/rpushx.md
@@ -0,0 +1,22 @@
+Inserts specified values at the tail of the list stored at `key`, only if `key`
+already exists and holds a list. In contrary to `RPUSH`, no operation will be
+performed when `key` does not yet exist.
+
+@return
+
+@integer-reply: the length of the list after the push operation.
+
+@history
+
+- `>= 4.0`: Accepts multiple `element` arguments. In Redis versions older than
+ 4.0 it was possible to push a single value per command.
+
+@examples
+
+```cli
+RPUSH mylist "Hello"
+RPUSHX mylist "World"
+RPUSHX myotherlist "World"
+LRANGE mylist 0 -1
+LRANGE myotherlist 0 -1
+```
diff --git a/iredis/data/commands/sadd.md b/iredis/data/commands/sadd.md
new file mode 100644
index 0000000..a8b280e
--- /dev/null
+++ b/iredis/data/commands/sadd.md
@@ -0,0 +1,24 @@
+Add the specified members to the set stored at `key`. Specified members that are
+already a member of this set are ignored. If `key` does not exist, a new set is
+created before adding the specified members.
+
+An error is returned when the value stored at `key` is not a set.
+
+@return
+
+@integer-reply: the number of elements that were added to the set, not including
+all the elements already present in the set.
+
+@history
+
+- `>= 2.4`: Accepts multiple `member` arguments. Redis versions before 2.4 are
+ only able to add a single member per call.
+
+@examples
+
+```cli
+SADD myset "Hello"
+SADD myset "World"
+SADD myset "World"
+SMEMBERS myset
+```
diff --git a/iredis/data/commands/save.md b/iredis/data/commands/save.md
new file mode 100644
index 0000000..540dc7a
--- /dev/null
+++ b/iredis/data/commands/save.md
@@ -0,0 +1,17 @@
+The `SAVE` command performs a **synchronous** save of the dataset producing a
+_point in time_ snapshot of all the data inside the Redis instance, in the form
+of an RDB file.
+
+You almost never want to call `SAVE` in production environments where it will
+block all the other clients. Instead usually `BGSAVE` is used. However in case
+of issues preventing Redis from creating the background saving child (for instance
+errors in the fork(2) system call), the `SAVE` command can be a good last resort
+to perform the dump of the latest dataset.
+
+Please refer to the [persistence documentation][tp] for detailed information.
+
+[tp]: /topics/persistence
+
+@return
+
+@simple-string-reply: The command returns OK on success.
diff --git a/iredis/data/commands/scan.md b/iredis/data/commands/scan.md
new file mode 100644
index 0000000..f1674fd
--- /dev/null
+++ b/iredis/data/commands/scan.md
@@ -0,0 +1,341 @@
+The `SCAN` command and the closely related commands `SSCAN`, `HSCAN` and `ZSCAN`
+are used in order to incrementally iterate over a collection of elements.
+
+- `SCAN` iterates the set of keys in the currently selected Redis database.
+- `SSCAN` iterates elements of Sets types.
+- `HSCAN` iterates fields of Hash types and their associated values.
+- `ZSCAN` iterates elements of Sorted Set types and their associated scores.
+
+Since these commands allow for incremental iteration, returning only a small
+number of elements per call, they can be used in production without the downside
+of commands like `KEYS` or `SMEMBERS` that may block the server for a long time
+(even several seconds) when called against big collections of keys or elements.
+
+However while blocking commands like `SMEMBERS` are able to provide all the
+elements that are part of a Set in a given moment, the `SCAN` family of
+commands only offers limited guarantees about the returned elements since the
+collection that we incrementally iterate can change during the iteration
+process.
+
+Note that `SCAN`, `SSCAN`, `HSCAN` and `ZSCAN` all work very similarly, so this
+documentation covers all four commands. However, an obvious difference is
+that in the case of `SSCAN`, `HSCAN` and `ZSCAN` the first argument is the name
+of the key holding the Set, Hash or Sorted Set value. The `SCAN` command does
+not need any key name argument as it iterates keys in the current database, so
+the iterated object is the database itself.
+
+## SCAN basic usage
+
+SCAN is a cursor based iterator. This means that at every call of the command,
+the server returns an updated cursor that the user needs to use as the cursor
+argument in the next call.
+
+An iteration starts when the cursor is set to 0, and terminates when the cursor
+returned by the server is 0. The following is an example of SCAN iteration:
+
+```
+redis 127.0.0.1:6379> scan 0
+1) "17"
+2) 1) "key:12"
+ 2) "key:8"
+ 3) "key:4"
+ 4) "key:14"
+ 5) "key:16"
+ 6) "key:17"
+ 7) "key:15"
+ 8) "key:10"
+ 9) "key:3"
+ 10) "key:7"
+ 11) "key:1"
+redis 127.0.0.1:6379> scan 17
+1) "0"
+2) 1) "key:5"
+ 2) "key:18"
+ 3) "key:0"
+ 4) "key:2"
+ 5) "key:19"
+ 6) "key:13"
+ 7) "key:6"
+ 8) "key:9"
+ 9) "key:11"
+```
+
+In the example above, the first call uses zero as a cursor to start the
+iteration. The second call uses as its cursor the value returned as the first
+element of the previous reply, that is, 17.
+
+As you can see the **SCAN return value** is an array of two values: the first
+value is the new cursor to use in the next call, the second value is an array of
+elements.
+
+Since in the second call the returned cursor is 0, the server signaled to the
+caller that the iteration finished, and the collection was completely explored.
+Starting an iteration with a cursor value of 0, and calling `SCAN` until the
+returned cursor is 0 again is called a **full iteration**.
+
+## Scan guarantees
+
+The `SCAN` command, and the other commands in the `SCAN` family, are able to
+provide to the user a set of guarantees associated with full iterations.
+
+- A full iteration always retrieves all the elements that were present in the
+ collection from the start to the end of a full iteration. This means that if a
+ given element is inside the collection when an iteration is started, and is
+ still there when an iteration terminates, then at some point `SCAN` returned
+ it to the user.
+- A full iteration never returns any element that was NOT present in the
+ collection from the start to the end of a full iteration. So if an element was
+ removed before the start of an iteration, and is never added back to the
+ collection for all the time an iteration lasts, `SCAN` ensures that this
+ element will never be returned.
+
+However, because `SCAN` has very little associated state (just the cursor), it has
+the following drawbacks:
+
+- A given element may be returned multiple times. It is up to the application to
+ handle the case of duplicated elements, for example only using the returned
+ elements in order to perform operations that are safe when re-applied multiple
+ times.
+- Elements that were not constantly present in the collection during a full
+ iteration, may be returned or not: it is undefined.
+
+## Number of elements returned at every SCAN call
+
+`SCAN` family functions do not guarantee that the number of elements returned
+per call is in a given range. The commands are also allowed to return zero
+elements, and the client should not consider the iteration complete as long as
+the returned cursor is not zero.
+
+However the number of returned elements is reasonable, that is, in practical
+terms SCAN may return a maximum number of elements in the order of a few tens of
+elements when iterating a large collection, or may return all the elements of
+the collection in a single call when the iterated collection is small enough to
+be internally represented as an encoded data structure (this happens for small
+sets, hashes and sorted sets).
+
+However there is a way for the user to tune the order of magnitude of the number
+of returned elements per call using the **COUNT** option.
+
+## The COUNT option
+
+While `SCAN` does not provide guarantees about the number of elements returned
+at every iteration, it is possible to empirically adjust the behavior of `SCAN`
+using the **COUNT** option. Basically, with COUNT the user specifies the _amount
+of work that should be done at every call in order to retrieve elements from the
+collection_. This is **just a hint** for the implementation, however generally
+speaking this is what you can expect from the implementation most of the time.
+
+- The default COUNT value is 10.
+- When iterating the key space, or a Set, Hash or Sorted Set that is big enough
+ to be represented by a hash table, assuming no **MATCH** option is used, the
+ server will usually return _count_ or a bit more than _count_ elements per
+ call. Please check the _why SCAN may return all the elements at once_ section
+ later in this document.
+- When iterating Sets encoded as intsets (small sets composed of just integers),
+ or Hashes and Sorted Sets encoded as ziplists (small hashes and sets composed
+ of small individual values), usually all the elements are returned in the
+ first `SCAN` call regardless of the COUNT value.
+
+Important: **there is no need to use the same COUNT value** for every iteration.
+The caller is free to change the count from one iteration to the other as
+required, as long as the cursor passed in the next call is the one obtained in
+the previous call to the command.
+
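+For example, an iteration can start with a small COUNT and switch to a larger
+one later on (the cursor values and key names below are purely illustrative):
+
+```
+redis 127.0.0.1:6379> scan 0 COUNT 3
+1) "28"
+2) 1) "key:12"
+ 2) "key:8"
+ 3) "key:4"
+redis 127.0.0.1:6379> scan 28 COUNT 100
+1) "0"
+2) 1) "key:17"
+ 2) "key:15"
+ 3) "key:10"
+```
+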
+## The MATCH option
+
+It is possible to only iterate elements matching a given glob-style pattern,
+similarly to the behavior of the `KEYS` command that takes a pattern as only
+argument.
+
+To do so, just append the `MATCH <pattern>` arguments at the end of the `SCAN`
+command (it works with all the SCAN family commands).
+
+This is an example of iteration using **MATCH**:
+
+```
+redis 127.0.0.1:6379> sadd myset 1 2 3 foo foobar feelsgood
+(integer) 6
+redis 127.0.0.1:6379> sscan myset 0 match f*
+1) "0"
+2) 1) "foo"
+ 2) "feelsgood"
+ 3) "foobar"
+redis 127.0.0.1:6379>
+```
+
+It is important to note that the **MATCH** filter is applied after elements are
+retrieved from the collection, just before returning data to the client. This
+means that if the pattern matches very few elements inside the collection,
+`SCAN` will likely return no elements in most iterations. An example is shown
+below:
+
+```
+redis 127.0.0.1:6379> scan 0 MATCH *11*
+1) "288"
+2) 1) "key:911"
+redis 127.0.0.1:6379> scan 288 MATCH *11*
+1) "224"
+2) (empty list or set)
+redis 127.0.0.1:6379> scan 224 MATCH *11*
+1) "80"
+2) (empty list or set)
+redis 127.0.0.1:6379> scan 80 MATCH *11*
+1) "176"
+2) (empty list or set)
+redis 127.0.0.1:6379> scan 176 MATCH *11* COUNT 1000
+1) "0"
+2) 1) "key:611"
+ 2) "key:711"
+ 3) "key:118"
+ 4) "key:117"
+ 5) "key:311"
+ 6) "key:112"
+ 7) "key:111"
+ 8) "key:110"
+ 9) "key:113"
+ 10) "key:211"
+ 11) "key:411"
+ 12) "key:115"
+ 13) "key:116"
+ 14) "key:114"
+ 15) "key:119"
+ 16) "key:811"
+ 17) "key:511"
+ 18) "key:11"
+redis 127.0.0.1:6379>
+```
+
+As you can see, most of the calls returned zero elements, but in the last call a
+COUNT of 1000 was used in order to force the command to do more scanning for
+that iteration.
+
+## The TYPE option
+
+As of version 6.0 you can use this option to ask `SCAN` to only return objects
+that match a given `type`, allowing you to iterate through the database looking
+for keys of a specific type. The **TYPE** option is only available on the
+whole-database `SCAN`, not `HSCAN` or `ZSCAN` etc.
+
+The `type` argument is the same string name that the `TYPE` command returns.
+Note a quirk where some Redis types, such as GeoHashes, HyperLogLogs, Bitmaps,
+and Bitfields, may internally be implemented using other Redis types, such as a
+string or zset, so can't be distinguished from other keys of that same type by
+`SCAN`. For example, a ZSET and GEOHASH:
+
+```
+redis 127.0.0.1:6379> GEOADD geokey 0 0 value
+(integer) 1
+redis 127.0.0.1:6379> ZADD zkey 1000 value
+(integer) 1
+redis 127.0.0.1:6379> TYPE geokey
+zset
+redis 127.0.0.1:6379> TYPE zkey
+zset
+redis 127.0.0.1:6379> SCAN 0 TYPE zset
+1) "0"
+2) 1) "geokey"
+ 2) "zkey"
+```
+
+It is important to note that the **TYPE** filter is also applied after elements
+are retrieved from the database, so the option does not reduce the amount of
+work the server has to do to complete a full iteration, and for rare types you
+may receive no elements in many iterations.
+
+## Multiple parallel iterations
+
+It is possible for an infinite number of clients to iterate the same collection
+at the same time, as the full state of the iterator is in the cursor, which is
+obtained and returned to the client at every call. No state is kept on the
+server side at all.
+
+## Terminating iterations in the middle
+
+Since there is no state server side, but the full state is captured by the
+cursor, the caller is free to terminate an iteration half-way without signaling
+this to the server in any way. An infinite number of iterations can be started
+and never terminated without any issue.
+
+## Calling SCAN with a corrupted cursor
+
+Calling `SCAN` with a broken, negative, out of range, or otherwise invalid
+cursor will result in undefined behavior, but never in a crash. What will be
+undefined is that the guarantees about the returned elements can no longer be
+ensured by the `SCAN` implementation.
+
+The only valid cursors to use are:
+
+- The cursor value of 0 when starting an iteration.
+- The cursor returned by the previous call to SCAN in order to continue the
+ iteration.
+
+## Guarantee of termination
+
+The `SCAN` algorithm is guaranteed to terminate only if the size of the iterated
+collection remains bounded to a given maximum size; otherwise, iterating a
+collection that always grows may result in `SCAN` never terminating a full
+iteration.
+
+This is easy to see intuitively: if the collection grows there is more and more
+work to do in order to visit all the possible elements, and the ability to
+terminate the iteration depends on the number of calls to `SCAN` and its COUNT
+option value compared with the rate at which the collection grows.
+
+## Why SCAN may return all the items of an aggregate data type in a single call
+
+In the `COUNT` option documentation, we state that sometimes this family of
+commands may return all the elements of a Set, Hash or Sorted Set at once in a
+single call, regardless of the `COUNT` option value. The reason why this happens
+is that the cursor-based iterator can be implemented, and is useful, only when
+the aggregate data type that we are scanning is represented as a hash table.
+However Redis uses a [memory optimization](/topics/memory-optimization) where
+small aggregate data types, until they reach a given amount of items or a given
+max size of single elements, are represented using a compact single-allocation
+packed encoding. When this is the case, `SCAN` has no meaningful cursor to
+return, and must iterate the whole data structure at once, so the only sane
+behavior it has is to return everything in a call.
+
+However once the data structures are bigger and are promoted to use real hash
+tables, the `SCAN` family of commands will resort to the normal behavior. Note
+that since this special behavior of returning all the elements is true only for
+small aggregates, it has no effects on the command complexity or latency.
+However the exact limits to get converted into real hash tables are
+[user configurable](/topics/memory-optimization), so the maximum number of
+elements you can see returned in a single call depends on how big an aggregate
+data type could be and still use the packed representation.
+
+Also note that this behavior is specific of `SSCAN`, `HSCAN` and `ZSCAN`. `SCAN`
+itself never shows this behavior because the key space is always represented by
+hash tables.
+
+## Return value
+
+`SCAN`, `SSCAN`, `HSCAN` and `ZSCAN` return a two-element multi-bulk reply,
+where the first element is a string representing an unsigned 64 bit number (the
+cursor), and the second element is a multi-bulk with an array of elements.
+
+- `SCAN` array of elements is a list of keys.
+- `SSCAN` array of elements is a list of Set members.
+- `HSCAN` array of elements contains two elements, a field and a value, for
+  every returned element of the Hash.
+- `ZSCAN` array of elements contains two elements, a member and its associated
+  score, for every returned element of the sorted set.
+
+@history
+
+- `>= 6.0`: Supports the `TYPE` subcommand.
+
+## Additional examples
+
+Iteration of a Hash value.
+
+```
+redis 127.0.0.1:6379> hmset hash name Jack age 33
+OK
+redis 127.0.0.1:6379> hscan hash 0
+1) "0"
+2) 1) "name"
+ 2) "Jack"
+ 3) "age"
+ 4) "33"
+```
diff --git a/iredis/data/commands/scard.md b/iredis/data/commands/scard.md
new file mode 100644
index 0000000..85d3c01
--- /dev/null
+++ b/iredis/data/commands/scard.md
@@ -0,0 +1,14 @@
+Returns the set cardinality (number of elements) of the set stored at `key`.
+
+@return
+
+@integer-reply: the cardinality (number of elements) of the set, or `0` if `key`
+does not exist.
+
+@examples
+
+```cli
+SADD myset "Hello"
+SADD myset "World"
+SCARD myset
+```
diff --git a/iredis/data/commands/script-debug.md b/iredis/data/commands/script-debug.md
new file mode 100644
index 0000000..67502b2
--- /dev/null
+++ b/iredis/data/commands/script-debug.md
@@ -0,0 +1,26 @@
+Set the debug mode for subsequent scripts executed with `EVAL`. Redis includes a
+complete Lua debugger, codename LDB, that can be used to make the task of
+writing complex scripts much simpler. In debug mode Redis acts as a remote
+debugging server and a client, such as `redis-cli`, can execute scripts step by
+step, set breakpoints, inspect variables and more - for additional information
+about LDB refer to the [Redis Lua debugger](/topics/ldb) page.
+
+**Important note:** avoid debugging Lua scripts using your Redis production
+server. Use a development server instead.
+
+LDB can be enabled in one of two modes: asynchronous or synchronous. In
+asynchronous mode the server creates a forked debugging session that does not
+block and all changes to the data are **rolled back** after the session
+finishes, so debugging can be restarted using the same initial state. The
+alternative synchronous debug mode blocks the server while the debugging session
+is active and retains all changes to the data set once it ends.
+
+- `YES`. Enable non-blocking asynchronous debugging of Lua scripts (changes are
+ discarded).
+- `SYNC`. Enable blocking synchronous debugging of Lua scripts (saves changes to
+ data).
+- `NO`. Disable script debug mode.
+
+@return
+
+@simple-string-reply: `OK`.
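+
+For example, switching the debug mode on and off from `redis-cli` (the
+interactive debugging session itself is typically driven via
+`redis-cli --ldb --eval`, as described in the LDB documentation):
+
+```
+redis 127.0.0.1:6379> SCRIPT DEBUG YES
+OK
+redis 127.0.0.1:6379> SCRIPT DEBUG SYNC
+OK
+redis 127.0.0.1:6379> SCRIPT DEBUG NO
+OK
+```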
diff --git a/iredis/data/commands/script-exists.md b/iredis/data/commands/script-exists.md
new file mode 100644
index 0000000..d27d771
--- /dev/null
+++ b/iredis/data/commands/script-exists.md
@@ -0,0 +1,18 @@
+Returns information about the existence of the scripts in the script cache.
+
+This command accepts one or more SHA1 digests and returns a list of ones or
+zeros to signal if the scripts are already defined or not inside the script
+cache. This can be useful before a pipelining operation to ensure that scripts
+are loaded (and if not, to load them using `SCRIPT LOAD`) so that the pipelining
+operation can be performed solely using `EVALSHA` instead of `EVAL` to save
+bandwidth.
+
+Please refer to the `EVAL` documentation for detailed information about Redis
+Lua scripting.
+
+@return
+
+@array-reply The command returns an array of integers that correspond to the
+specified SHA1 digest arguments. For every corresponding SHA1 digest of a script
+that actually exists in the script cache, a 1 is returned, otherwise 0 is
+returned.
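+
+A minimal sketch of that pattern (the SHA1 digest shown is a placeholder; use
+the value actually returned by `SCRIPT LOAD`):
+
+```
+redis 127.0.0.1:6379> SCRIPT LOAD "return 1"
+"<sha1 digest of the script>"
+redis 127.0.0.1:6379> SCRIPT EXISTS <sha1 digest of the script> 0000000000000000000000000000000000000000
+1) (integer) 1
+2) (integer) 0
+```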
diff --git a/iredis/data/commands/script-flush.md b/iredis/data/commands/script-flush.md
new file mode 100644
index 0000000..833732d
--- /dev/null
+++ b/iredis/data/commands/script-flush.md
@@ -0,0 +1,8 @@
+Flush the Lua scripts cache.
+
+Please refer to the `EVAL` documentation for detailed information about Redis
+Lua scripting.
+
+@return
+
+@simple-string-reply
diff --git a/iredis/data/commands/script-kill.md b/iredis/data/commands/script-kill.md
new file mode 100644
index 0000000..225798b
--- /dev/null
+++ b/iredis/data/commands/script-kill.md
@@ -0,0 +1,19 @@
+Kills the currently executing Lua script, assuming no write operation was yet
+performed by the script.
+
+This command is mainly useful to kill a script that is running for too much
+time (for instance because it entered an infinite loop because of a bug). The
+script will be killed and the client currently blocked in `EVAL` will see the
+command returning with an error.
+
+If the script has already performed write operations, it can not be killed in
+this way because it would violate the Lua script atomicity contract. In such a
+case only `SHUTDOWN NOSAVE` is able to kill the script, killing the Redis
+process in a hard way and preventing it from persisting half-written
+information.
+
+Please refer to the `EVAL` documentation for detailed information about Redis
+Lua scripting.
+
+@return
+
+@simple-string-reply
diff --git a/iredis/data/commands/script-load.md b/iredis/data/commands/script-load.md
new file mode 100644
index 0000000..839b247
--- /dev/null
+++ b/iredis/data/commands/script-load.md
@@ -0,0 +1,18 @@
+Load a script into the scripts cache, without executing it. After the specified
+script is loaded into the script cache it will be callable using `EVALSHA` with
+the correct SHA1 digest of the script, exactly like after the first successful
+invocation of `EVAL`.
+
+The script is guaranteed to stay in the script cache forever (unless
+`SCRIPT FLUSH` is called).
+
+The command works in the same way even if the script was already present in the
+script cache.
+
+Please refer to the `EVAL` documentation for detailed information about Redis
+Lua scripting.
+
+@return
+
+@bulk-string-reply This command returns the SHA1 digest of the script added into
+the script cache.
diff --git a/iredis/data/commands/sdiff.md b/iredis/data/commands/sdiff.md
new file mode 100644
index 0000000..5d458ec
--- /dev/null
+++ b/iredis/data/commands/sdiff.md
@@ -0,0 +1,29 @@
+Returns the members of the set resulting from the difference between the first
+set and all the successive sets.
+
+For example:
+
+```
+key1 = {a,b,c,d}
+key2 = {c}
+key3 = {a,c,e}
+SDIFF key1 key2 key3 = {b,d}
+```
+
+Keys that do not exist are considered to be empty sets.
+
+@return
+
+@array-reply: list with members of the resulting set.
+
+@examples
+
+```cli
+SADD key1 "a"
+SADD key1 "b"
+SADD key1 "c"
+SADD key2 "c"
+SADD key2 "d"
+SADD key2 "e"
+SDIFF key1 key2
+```
diff --git a/iredis/data/commands/sdiffstore.md b/iredis/data/commands/sdiffstore.md
new file mode 100644
index 0000000..e941016
--- /dev/null
+++ b/iredis/data/commands/sdiffstore.md
@@ -0,0 +1,21 @@
+This command is equal to `SDIFF`, but instead of returning the resulting set, it
+is stored in `destination`.
+
+If `destination` already exists, it is overwritten.
+
+@return
+
+@integer-reply: the number of elements in the resulting set.
+
+@examples
+
+```cli
+SADD key1 "a"
+SADD key1 "b"
+SADD key1 "c"
+SADD key2 "c"
+SADD key2 "d"
+SADD key2 "e"
+SDIFFSTORE key key1 key2
+SMEMBERS key
+```
diff --git a/iredis/data/commands/select.md b/iredis/data/commands/select.md
new file mode 100644
index 0000000..ff366c6
--- /dev/null
+++ b/iredis/data/commands/select.md
@@ -0,0 +1,27 @@
+Select the Redis logical database having the specified zero-based numeric index.
+New connections always use database 0.
+
+Selectable Redis databases are a form of namespacing: all databases are still
+persisted in the same RDB / AOF file. However different databases can have keys
+with the same name, and commands like `FLUSHDB`, `SWAPDB` or `RANDOMKEY` work on
+specific databases.
+
+In practical terms, Redis databases should be used to separate different keys
+belonging to the same application (if needed), not to partition a single Redis
+instance among multiple unrelated applications.
+
+When using Redis Cluster, the `SELECT` command cannot be used, since Redis
+Cluster only supports database zero. In the case of a Redis Cluster, having
+multiple databases would be useless and an unnecessary source of complexity.
+Commands operating atomically on a single database would not be possible with
+the Redis Cluster design and goals.
+
+Since the currently selected database is a property of the connection, clients
+should track the currently selected database and re-select it on reconnection.
+While there is no command in order to query the selected database in the current
+connection, the `CLIENT LIST` output shows, for each client, the currently
+selected database.
+
+@return
+
+@simple-string-reply
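+
+As an illustration of the namespacing behavior described above (assuming
+`mykey` does not already exist in database 0):
+
+```
+redis 127.0.0.1:6379> SELECT 1
+OK
+redis 127.0.0.1:6379> SET mykey "value in database 1"
+OK
+redis 127.0.0.1:6379> SELECT 0
+OK
+redis 127.0.0.1:6379> EXISTS mykey
+(integer) 0
+```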
diff --git a/iredis/data/commands/set.md b/iredis/data/commands/set.md
new file mode 100644
index 0000000..2cf4afa
--- /dev/null
+++ b/iredis/data/commands/set.md
@@ -0,0 +1,73 @@
+Set `key` to hold the string `value`. If `key` already holds a value, it is
+overwritten, regardless of its type. Any previous time to live associated with
+the key is discarded on successful `SET` operation.
+
+## Options
+
+The `SET` command supports a set of options that modify its behavior:
+
+- `EX` _seconds_ -- Set the specified expire time, in seconds.
+- `PX` _milliseconds_ -- Set the specified expire time, in milliseconds.
+- `NX` -- Only set the key if it does not already exist.
+- `XX` -- Only set the key if it already exists.
+- `KEEPTTL` -- Retain the time to live associated with the key.
+
+Note: Since the `SET` command options can replace `SETNX`, `SETEX`, `PSETEX`, it
+is possible that in future versions of Redis these three commands will be
+deprecated and finally removed.
+
+@return
+
+@simple-string-reply: `OK` if `SET` was executed correctly. @nil-reply: a Null
+Bulk Reply is returned if the `SET` operation was not performed because the user
+specified the `NX` or `XX` option but the condition was not met.
+
+@history
+
+- `>= 2.6.12`: Added the `EX`, `PX`, `NX` and `XX` options.
+- `>= 6.0`: Added the `KEEPTTL` option.
+
+@examples
+
+```cli
+SET mykey "Hello"
+GET mykey
+
+SET anotherkey "will expire in a minute" EX 60
+```
+
+## Patterns
+
+**Note:** The following pattern is discouraged in favor of
+[the Redlock algorithm](http://redis.io/topics/distlock) which is only a bit
+more complex to implement, but offers better guarantees and is fault tolerant.
+
+The command `SET resource-name anystring NX EX max-lock-time` is a simple way to
+implement a locking system with Redis.
+
+A client can acquire the lock if the above command returns `OK` (or retry after
+some time if the command returns Nil), and remove the lock just using `DEL`.
+
+The lock will be auto-released after the expire time is reached.
+
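+A minimal sketch of acquiring the lock (key and token names are illustrative):
+
+```
+redis 127.0.0.1:6379> SET resource-name my-random-token NX EX 30
+OK
+redis 127.0.0.1:6379> SET resource-name another-token NX EX 30
+(nil)
+```
+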
+It is possible to make this system more robust modifying the unlock schema as
+follows:
+
+- Instead of setting a fixed string, set a non-guessable large random string,
+ called token.
+- Instead of releasing the lock with `DEL`, send a script that only removes the
+ key if the value matches.
+
+This avoids the case where a client tries to release the lock after the expire
+time, deleting a key that was created by another client that acquired the lock
+later.
+
+An example of unlock script would be similar to the following:
+
+ if redis.call("get",KEYS[1]) == ARGV[1]
+ then
+ return redis.call("del",KEYS[1])
+ else
+ return 0
+ end
+
+The script should be called with `EVAL ...script... 1 resource-name token-value`.
diff --git a/iredis/data/commands/setbit.md b/iredis/data/commands/setbit.md
new file mode 100644
index 0000000..a6b64f2
--- /dev/null
+++ b/iredis/data/commands/setbit.md
@@ -0,0 +1,158 @@
+Sets or clears the bit at _offset_ in the string value stored at _key_.
+
+The bit is either set or cleared depending on _value_, which can be either 0
+or 1.
+
+When _key_ does not exist, a new string value is created. The string is grown to
+make sure it can hold a bit at _offset_. The _offset_ argument is required to be
+greater than or equal to 0, and smaller than 2^32 (this limits bitmaps to
+512MB). When the string at _key_ is grown, added bits are set to 0.
+
+**Warning**: When setting the last possible bit (_offset_ equal to 2^32 -1) and
+_key_ does not yet hold a string value, or holds a
+small string value, Redis needs to allocate all intermediate memory which can
+block the server for some time. On a 2010 MacBook Pro, setting bit number 2^32
+-1 (512MB allocation) takes ~300ms, setting bit number 2^30 -1 (128MB
+allocation) takes ~80ms, setting bit number 2^28 -1 (32MB allocation) takes
+~30ms and setting bit number 2^26 -1 (8MB allocation) takes ~8ms. Note that once
+this first allocation is done, subsequent calls to `SETBIT` for the same _key_
+will not have the allocation overhead.
+
+@return
+
+@integer-reply: the original bit value stored at _offset_.
+
+@examples
+
+```cli
+SETBIT mykey 7 1
+SETBIT mykey 7 0
+GET mykey
+```
+
+## Pattern: accessing the entire bitmap
+
+There are cases when you need to set all the bits of a single bitmap at once,
+for example when initializing it to a default non-zero value. It is possible to
+do this with multiple calls to the `SETBIT` command, one for each bit that needs
+to be set. However, as an optimization you can use a single `SET` command to set
+the entire bitmap instead.
+
+Bitmaps are not an actual data type, but a set of bit-oriented operations
+defined on the String type (for more information refer to the [Bitmaps section
+of the Data Types Introduction page][ti]). This means that bitmaps can be used
+with string commands, and most importantly with `SET` and `GET`.
+
+Because Redis' strings are binary-safe, a bitmap is trivially encoded as a byte
+stream. The first byte of the string corresponds to offsets 0..7 of the bitmap,
+the second byte to the 8..15 range, and so forth.
+
+For example, after setting a few bits, getting the string value of the bitmap
+would look like this:
+
+```
+> SETBIT bitmapsarestrings 2 1
+> SETBIT bitmapsarestrings 3 1
+> SETBIT bitmapsarestrings 5 1
+> SETBIT bitmapsarestrings 10 1
+> SETBIT bitmapsarestrings 11 1
+> SETBIT bitmapsarestrings 14 1
+> GET bitmapsarestrings
+"42"
+```
+
+By getting the string representation of a bitmap, the client can then parse the
+response's bytes by extracting the bit values using native bit operations in its
+native programming language. Symmetrically, it is also possible to set an entire
+bitmap by performing the bits-to-bytes encoding in the client and calling `SET`
+with the resultant string.
+
+[ti]: /topics/data-types-intro#bitmaps
+
+## Pattern: setting multiple bits
+
+`SETBIT` excels at setting single bits, and can be called several times when
+multiple bits need to be set. To optimize this operation you can replace
+multiple `SETBIT` calls with a single call to the variadic `BITFIELD` command
+and the use of fields of type `u1`.
+
+For example, the example above could be replaced by:
+
+```
+> BITFIELD bitsinabitmap SET u1 2 1 SET u1 3 1 SET u1 5 1 SET u1 10 1 SET u1 11 1 SET u1 14 1
+```
+
+## Advanced Pattern: accessing bitmap ranges
+
+It is also possible to use the `GETRANGE` and `SETRANGE` string commands to
+efficiently access a range of bit offsets in a bitmap. Below is a sample
+implementation in idiomatic Redis Lua scripting that can be run with the `EVAL`
+command:
+
+```
+--[[
+Sets a bitmap range
+
+Bitmaps are stored as Strings in Redis. A range spans one or more bytes,
+so we can call `SETRANGE` when entire bytes need to be set instead of flipping
+individual bits. Also, to avoid multiple internal memory allocations in
+Redis, we traverse in reverse.
+Expected input:
+ KEYS[1] - bitfield key
+ ARGV[1] - start offset (0-based, inclusive)
+ ARGV[2] - end offset (same, should be bigger than start, no error checking)
+ ARGV[3] - value (should be 0 or 1, no error checking)
+]]--
+
+-- A helper function to stringify a binary string to semi-binary format
+local function tobits(str)
+ local r = ''
+ for i = 1, string.len(str) do
+ local c = string.byte(str, i)
+ local b = ' '
+ for j = 0, 7 do
+ b = tostring(bit.band(c, 1)) .. b
+ c = bit.rshift(c, 1)
+ end
+ r = r .. b
+ end
+ return r
+end
+
+-- Main
+local k = KEYS[1]
+local s, e, v = tonumber(ARGV[1]), tonumber(ARGV[2]), tonumber(ARGV[3])
+
+-- First treat the dangling bits in the last byte
+local ms, me = s % 8, (e + 1) % 8
+if me > 0 then
+ local t = math.max(e - me + 1, s)
+ for i = e, t, -1 do
+ redis.call('SETBIT', k, i, v)
+ end
+ e = t
+end
+
+-- Then the danglings in the first byte
+if ms > 0 then
+ local t = math.min(s - ms + 7, e)
+ for i = s, t, 1 do
+ redis.call('SETBIT', k, i, v)
+ end
+ s = t + 1
+end
+
+-- Set a range accordingly, if at all
+local rs, re = s / 8, (e + 1) / 8
+local rl = re - rs
+if rl > 0 then
+ local b = '\255'
+ if 0 == v then
+ b = '\0'
+ end
+ redis.call('SETRANGE', k, rs, string.rep(b, rl))
+end
+```
+
+**Note:** the implementation for getting a range of bit offsets from a bitmap is
+left as an exercise to the reader.
diff --git a/iredis/data/commands/setex.md b/iredis/data/commands/setex.md
new file mode 100644
index 0000000..6181b73
--- /dev/null
+++ b/iredis/data/commands/setex.md
@@ -0,0 +1,27 @@
+Set `key` to hold the string `value` and set `key` to timeout after a given
+number of seconds. This command is equivalent to executing the following
+commands:
+
+```
+SET mykey value
+EXPIRE mykey seconds
+```
+
+`SETEX` is atomic, and can be reproduced by using the previous two commands
+inside a `MULTI` / `EXEC` block. It is provided as a faster alternative to the
+given sequence of operations, because this operation is very common when Redis
+is used as a cache.
+
+An error is returned when `seconds` is invalid.
+
+@return
+
+@simple-string-reply
+
+@examples
+
+```cli
+SETEX mykey 10 "Hello"
+TTL mykey
+GET mykey
+```
diff --git a/iredis/data/commands/setnx.md b/iredis/data/commands/setnx.md
new file mode 100644
index 0000000..889f10c
--- /dev/null
+++ b/iredis/data/commands/setnx.md
@@ -0,0 +1,102 @@
+Set `key` to hold string `value` if `key` does not exist. In that case, it is
+equal to `SET`. When `key` already holds a value, no operation is performed.
+`SETNX` is short for "**SET** if **N**ot e**X**ists".
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if the key was set
+- `0` if the key was not set
+
+@examples
+
+```cli
+SETNX mykey "Hello"
+SETNX mykey "World"
+GET mykey
+```
+
+## Design pattern: Locking with `!SETNX`
+
+**Please note that:**
+
+1. The following pattern is discouraged in favor of
+ [the Redlock algorithm](http://redis.io/topics/distlock) which is only a bit
+ more complex to implement, but offers better guarantees and is fault
+ tolerant.
+2. We document the old pattern anyway because certain existing implementations
+ link to this page as a reference. Moreover it is an interesting example of
+ how Redis commands can be used in order to mount programming primitives.
+3. Anyway even assuming a single-instance locking primitive, starting with
+ 2.6.12 it is possible to create a much simpler locking primitive, equivalent
+ to the one discussed here, using the `SET` command to acquire the lock, and a
+ simple Lua script to release the lock. The pattern is documented in the `SET`
+ command page.
+
+That said, `SETNX` can be used, and was historically used, as a locking
+primitive. For example, to acquire the lock of the key `foo`, the client could
+try the following:
+
+```
+SETNX lock.foo <current Unix time + lock timeout + 1>
+```
+
+If `SETNX` returns `1` the client acquired the lock, setting the `lock.foo` key
+to the Unix time at which the lock should no longer be considered valid. The
+client will later use `DEL lock.foo` in order to release the lock.
+
+If `SETNX` returns `0` the key is already locked by some other client. We can
+either return to the caller if it's a non blocking lock, or enter a loop
+retrying to hold the lock until we succeed or some kind of timeout expires.
+
+### Handling deadlocks
+
+In the above locking algorithm there is a problem: what happens if a client
+fails, crashes, or is otherwise not able to release the lock? It's possible to
+detect this condition because the lock key contains a UNIX timestamp. If such a
+timestamp is equal to the current Unix time the lock is no longer valid.
+
+When this happens we can't just call `DEL` against the key to remove the lock
+and then try to issue a `SETNX`, as there is a race condition here, when
+multiple clients detected an expired lock and are trying to release it.
+
+- C1 and C2 read `lock.foo` to check the timestamp, because they both received
+ `0` after executing `SETNX`, as the lock is still held by C3 that crashed
+ after holding the lock.
+- C1 sends `DEL lock.foo`
+- C1 sends `SETNX lock.foo` and it succeeds
+- C2 sends `DEL lock.foo`
+- C2 sends `SETNX lock.foo` and it succeeds
+- **ERROR**: both C1 and C2 acquired the lock because of the race condition.
+
+Fortunately, it's possible to avoid this issue using the following algorithm.
+Let's see how C4, our sane client, uses the good algorithm:
+
+- C4 sends `SETNX lock.foo` in order to acquire the lock
+
+- The crashed client C3 still holds it, so Redis will reply with `0` to C4.
+
+- C4 sends `GET lock.foo` to check if the lock expired. If it has not, it will
+ sleep for some time and retry from the start.
+
+- Instead, if the lock is expired because the Unix time at `lock.foo` is older
+ than the current Unix time, C4 tries to perform:
+
+ ```
+ GETSET lock.foo <current Unix timestamp + lock timeout + 1>
+ ```
+
+- Because of the `GETSET` semantic, C4 can check if the old value stored at
+ `key` is still an expired timestamp. If it is, the lock was acquired.
+
+- If another client, for instance C5, was faster than C4 and acquired the lock
+ with the `GETSET` operation, the C4 `GETSET` operation will return a non
+ expired timestamp. C4 will simply restart from the first step. Note that even
+  if C4 set the key a few seconds in the future this is not a problem.
+
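+Putting the steps above together, a minimal sketch of C4's session might look
+like this (the Unix timestamps are illustrative); since `GETSET` returns the
+still expired old value, C4 knows it has acquired the lock:
+
+```
+redis 127.0.0.1:6379> SETNX lock.foo 1309448231
+(integer) 0
+redis 127.0.0.1:6379> GET lock.foo
+"1309448101"
+redis 127.0.0.1:6379> GETSET lock.foo 1309448231
+"1309448101"
+```
+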
+In order to make this locking algorithm more robust, a client holding a lock
+should always check that the timeout didn't expire before unlocking the key with
+`DEL`, because client failures can be complex: a client may not just crash, but
+also block for a long time against some operation and only try to issue `DEL`
+much later, when the lock is already held by another client.
diff --git a/iredis/data/commands/setrange.md b/iredis/data/commands/setrange.md
new file mode 100644
index 0000000..078fb34
--- /dev/null
+++ b/iredis/data/commands/setrange.md
@@ -0,0 +1,47 @@
+Overwrites part of the string stored at _key_, starting at the specified offset,
+for the entire length of _value_. If the offset is larger than the current
+length of the string at _key_, the string is padded with zero-bytes to make
+_offset_ fit. Non-existing keys are considered as empty strings, so this command
+will make sure it holds a string large enough to be able to set _value_ at
+_offset_.
+
+Note that the maximum offset that you can set is 2^29 -1 (536870911), as Redis
+Strings are limited to 512 megabytes. If you need to grow beyond this size, you
+can use multiple keys.
+
+**Warning**: When setting the last possible byte and _key_ does not yet hold a
+string value, or holds a small string value, Redis
+needs to allocate all intermediate memory which can block the server for some
+time. On a 2010 MacBook Pro, setting byte number 536870911 (512MB allocation)
+takes ~300ms, setting byte number 134217728 (128MB allocation) takes ~80ms,
+setting byte number 33554432 (32MB allocation) takes ~30ms and setting byte
+number 8388608 (8MB allocation) takes ~8ms. Note that once this first allocation is
+done, subsequent calls to `SETRANGE` for the same _key_ will not have the
+allocation overhead.
+
+## Patterns
+
+Thanks to `SETRANGE` and the analogous `GETRANGE` commands, you can use Redis
+strings as a linear array with O(1) random access. This is a very fast and
+efficient storage in many real world use cases.
+
+@return
+
+@integer-reply: the length of the string after it was modified by the command.
+
+@examples
+
+Basic usage:
+
+```cli
+SET key1 "Hello World"
+SETRANGE key1 6 "Redis"
+GET key1
+```
+
+Example of zero padding:
+
+```cli
+SETRANGE key2 6 "Redis"
+GET key2
+```
diff --git a/iredis/data/commands/shutdown.md b/iredis/data/commands/shutdown.md
new file mode 100644
index 0000000..cd48260
--- /dev/null
+++ b/iredis/data/commands/shutdown.md
@@ -0,0 +1,62 @@
+The command behavior is the following:
+
+- Stop all the clients.
+- Perform a blocking SAVE if at least one **save point** is configured.
+- Flush the Append Only File if AOF is enabled.
+- Quit the server.
+
+If persistence is enabled, this command makes sure that Redis is switched off
+without losing any data. This is not guaranteed if the client simply uses
+`SAVE` and then `QUIT`, because other clients may alter the DB data between the
+two commands.
+
+Note: A Redis instance that is configured for not persisting on disk (no AOF
+configured, nor "save" directive) will not dump the RDB file on `SHUTDOWN`, as
+usually you don't want Redis instances used only for caching to block when
+shutting down.
+
+## SAVE and NOSAVE modifiers
+
+It is possible to specify an optional modifier to alter the behavior of the
+command. Specifically:
+
+- **SHUTDOWN SAVE** will force a DB saving operation even if no save points are
+ configured.
+- **SHUTDOWN NOSAVE** will prevent a DB saving operation even if one or more
+  save points are configured. (You can think of this variant as a hypothetical
+ **ABORT** command that just stops the server).
+
+## Conditions where a SHUTDOWN fails
+
+When the Append Only File is enabled the shutdown may fail because the system is
+in a state that does not allow it to safely persist on disk immediately.
+
+Normally if there is an AOF child process performing an AOF rewrite, Redis will
+simply kill it and exit. However there are two conditions where it is unsafe to
+do so, and the **SHUTDOWN** command will be refused with an error instead. This
+happens when:
+
+- The user just turned on AOF, and the server triggered the first AOF rewrite in
+ order to create the initial AOF file. In this context, stopping will result in
+  losing the dataset entirely: once restarted, the server will potentially have
+ AOF enabled without having any AOF file at all.
+- A replica with AOF enabled, reconnected with its master, performed a full
+ resynchronization, and restarted the AOF file, triggering the initial AOF
+ creation process. In this case not completing the AOF rewrite is dangerous
+ because the latest dataset received from the master would be lost. The new
+ master can actually be even a different instance (if the **REPLICAOF** or
+ **SLAVEOF** command was used in order to reconfigure the replica), so it is
+ important to finish the AOF rewrite and start with the correct data set
+ representing the data set in memory when the server was terminated.
+
+There are conditions when we just want to terminate a Redis instance ASAP,
+regardless of what its content is. In such a case, the right combination of
+commands is to send a **CONFIG SET appendonly no** followed by a **SHUTDOWN
+NOSAVE**. The first command will turn off the AOF if needed, and will terminate
+the AOF rewriting child if there is one active. The second command will not have
+any problem executing since the AOF is no longer enabled.
+
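+A sketch of that sequence from `redis-cli` (no reply follows `SHUTDOWN NOSAVE`,
+since the server exits and the connection is closed):
+
+```
+redis 127.0.0.1:6379> CONFIG SET appendonly no
+OK
+redis 127.0.0.1:6379> SHUTDOWN NOSAVE
+```
+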
+@return
+
+@simple-string-reply on error. On success nothing is returned since the server
+quits and the connection is closed.
diff --git a/iredis/data/commands/sinter.md b/iredis/data/commands/sinter.md
new file mode 100644
index 0000000..e4ab023
--- /dev/null
+++ b/iredis/data/commands/sinter.md
@@ -0,0 +1,31 @@
+Returns the members of the set resulting from the intersection of all the given
+sets.
+
+For example:
+
+```
+key1 = {a,b,c,d}
+key2 = {c}
+key3 = {a,c,e}
+SINTER key1 key2 key3 = {c}
+```
+
+Keys that do not exist are considered to be empty sets. With one of the keys
+being an empty set, the resulting set is also empty (since set intersection with
+an empty set always results in an empty set).
+
+@return
+
+@array-reply: list with members of the resulting set.
+
+@examples
+
+```cli
+SADD key1 "a"
+SADD key1 "b"
+SADD key1 "c"
+SADD key2 "c"
+SADD key2 "d"
+SADD key2 "e"
+SINTER key1 key2
+```
diff --git a/iredis/data/commands/sinterstore.md b/iredis/data/commands/sinterstore.md
new file mode 100644
index 0000000..17dd0bf
--- /dev/null
+++ b/iredis/data/commands/sinterstore.md
@@ -0,0 +1,21 @@
+This command is equal to `SINTER`, but instead of returning the resulting set,
+it is stored in `destination`.
+
+If `destination` already exists, it is overwritten.
+
+@return
+
+@integer-reply: the number of elements in the resulting set.
+
+@examples
+
+```cli
+SADD key1 "a"
+SADD key1 "b"
+SADD key1 "c"
+SADD key2 "c"
+SADD key2 "d"
+SADD key2 "e"
+SINTERSTORE key key1 key2
+SMEMBERS key
+```
diff --git a/iredis/data/commands/sismember.md b/iredis/data/commands/sismember.md
new file mode 100644
index 0000000..051f87d
--- /dev/null
+++ b/iredis/data/commands/sismember.md
@@ -0,0 +1,16 @@
+Returns if `member` is a member of the set stored at `key`.
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if the element is a member of the set.
+- `0` if the element is not a member of the set, or if `key` does not exist.
+
+@examples
+
+```cli
+SADD myset "one"
+SISMEMBER myset "one"
+SISMEMBER myset "two"
+```
diff --git a/iredis/data/commands/slaveof.md b/iredis/data/commands/slaveof.md
new file mode 100644
index 0000000..1250d43
--- /dev/null
+++ b/iredis/data/commands/slaveof.md
@@ -0,0 +1,25 @@
+**A note about the word slave used in this man page and command name**: starting
+with Redis version 5, if not for backward compatibility, the Redis project no
+longer uses the word slave. Please use the new command `REPLICAOF`. The command
+`SLAVEOF` will continue to work for backward compatibility.
+
+The `SLAVEOF` command can change the replication settings of a replica on the
+fly. If a Redis server is already acting as a replica, the command `SLAVEOF NO
+ONE` will turn off the replication, turning the Redis server into a MASTER. In
+the proper form, `SLAVEOF hostname port` will make the server a replica of
+another server listening at the specified hostname and port.
+
+If a server is already a replica of some master, `SLAVEOF hostname port` will
+stop the replication against the old server and start the synchronization
+against the new one, discarding the old dataset.
+
+The form `SLAVEOF NO ONE` will stop replication, turning the server into a
+MASTER, but will not discard the already replicated dataset. So, if the old
+master stops working, it is possible to turn the replica into a master and set
+the application to use this new master in read/write. Later, when the other
+Redis server is fixed, it can be reconfigured to work as a replica.
+
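+For example (hostname and port are illustrative):
+
+```
+redis 127.0.0.1:6379> SLAVEOF 192.168.1.100 6379
+OK
+redis 127.0.0.1:6379> SLAVEOF NO ONE
+OK
+```
+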
+@return
+
+@simple-string-reply
diff --git a/iredis/data/commands/slowlog.md b/iredis/data/commands/slowlog.md
new file mode 100644
index 0000000..267a6bb
--- /dev/null
+++ b/iredis/data/commands/slowlog.md
@@ -0,0 +1,84 @@
+This command is used in order to read and reset the Redis slow queries log.
+
+## Redis slow log overview
+
+The Redis Slow Log is a system to log queries that exceeded a specified
+execution time. The execution time does not include I/O operations like talking
+with the client, sending the reply and so forth, but just the time needed to
+actually execute the command (this is the only stage of command execution where
+the thread is blocked and can not serve other requests in the meantime).
+
+You can configure the slow log with two parameters: _slowlog-log-slower-than_
+tells Redis what is the execution time, in microseconds, to exceed in order for
+the command to get logged. Note that a negative number disables the slow log,
+while a value of zero forces the logging of every command. _slowlog-max-len_ is
+the length of the slow log. The minimum value is zero. When a new command is
+logged and the slow log is already at its maximum length, the oldest one is
+removed from the queue of logged commands in order to make space.
+
+The configuration can be done by editing `redis.conf` or while the server is
+running using the `CONFIG GET` and `CONFIG SET` commands.
+
+## Reading the slow log
+
+The slow log is accumulated in memory, so no file is written with information
+about the slow command executions. This makes the slow log remarkably fast, to
+the point that you can enable the logging of all the commands (setting the
+_slowlog-log-slower-than_ config parameter to zero) with a minor performance
+hit.
+
+To read the slow log the **SLOWLOG GET** command is used, which returns every
+entry in the slow log. It is possible to return only the N most recent entries
+passing an additional argument to the command (for instance **SLOWLOG GET 10**).
+
+Note that you need a recent version of redis-cli in order to read the slow log
+output, since it uses some features of the protocol that were not formerly
+implemented in redis-cli (deeply nested multi bulk replies).
+
+## Output format
+
+```
+redis 127.0.0.1:6379> slowlog get 2
+1) 1) (integer) 14
+ 2) (integer) 1309448221
+ 3) (integer) 15
+ 4) 1) "ping"
+2) 1) (integer) 13
+ 2) (integer) 1309448128
+ 3) (integer) 30
+ 4) 1) "slowlog"
+ 2) "get"
+ 3) "100"
+```
+
+There are also optional fields emitted only by Redis 4.0 or greater:
+
+```
+5) "127.0.0.1:58217"
+6) "worker-123"
+```
+
+Every entry is composed of four (or six starting with Redis 4.0) fields:
+
+- A unique progressive identifier for every slow log entry.
+- The unix timestamp at which the logged command was processed.
+- The amount of time needed for its execution, in microseconds.
+- The array composing the arguments of the command.
+- Client IP address and port (4.0 or newer only).
+- Client name if set via the `CLIENT SETNAME` command (4.0 or newer only).
+
+The entry's unique ID can be used in order to avoid processing slow log entries
+multiple times (for instance you may have a script sending you an email alert
+for every new slow log entry).
+
+The ID is never reset in the course of the Redis server execution, only a server
+restart will reset it.
+
+## Obtaining the current length of the slow log
+
+It is possible to get just the length of the slow log using the command
+**SLOWLOG LEN**.
+
+## Resetting the slow log
+
+You can reset the slow log using the **SLOWLOG RESET** command. Once deleted the
+information is lost forever.
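+
+For example (the entry count shown is illustrative):
+
+```
+redis 127.0.0.1:6379> slowlog len
+(integer) 14
+redis 127.0.0.1:6379> slowlog reset
+OK
+redis 127.0.0.1:6379> slowlog len
+(integer) 0
+```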
diff --git a/iredis/data/commands/smembers.md b/iredis/data/commands/smembers.md
new file mode 100644
index 0000000..2272859
--- /dev/null
+++ b/iredis/data/commands/smembers.md
@@ -0,0 +1,15 @@
+Returns all the members of the set value stored at `key`.
+
+This has the same effect as running `SINTER` with one argument `key`.
+
+@return
+
+@array-reply: all elements of the set.
+
+@examples
+
+```cli
+SADD myset "Hello"
+SADD myset "World"
+SMEMBERS myset
+```
diff --git a/iredis/data/commands/smove.md b/iredis/data/commands/smove.md
new file mode 100644
index 0000000..d8c12fa
--- /dev/null
+++ b/iredis/data/commands/smove.md
@@ -0,0 +1,28 @@
+Move `member` from the set at `source` to the set at `destination`. This
+operation is atomic. At any given moment, the element will appear to be a
+member of `source` **or** `destination` to other clients.
+
+If the source set does not exist or does not contain the specified element, no
+operation is performed and `0` is returned. Otherwise, the element is removed
+from the source set and added to the destination set. When the specified element
+already exists in the destination set, it is only removed from the source set.
+
+An error is returned if `source` or `destination` does not hold a set value.
+
+@return
+
+@integer-reply, specifically:
+
+- `1` if the element is moved.
+- `0` if the element is not a member of `source` and no operation was performed.
+
+@examples
+
+```cli
+SADD myset "one"
+SADD myset "two"
+SADD myotherset "three"
+SMOVE myset myotherset "two"
+SMEMBERS myset
+SMEMBERS myotherset
+```
diff --git a/iredis/data/commands/sort.md b/iredis/data/commands/sort.md
new file mode 100644
index 0000000..0703b5e
--- /dev/null
+++ b/iredis/data/commands/sort.md
@@ -0,0 +1,136 @@
+Returns or stores the elements contained in the [list][tdtl], [set][tdts] or
+[sorted set][tdtss] at `key`. By default, sorting is numeric and elements are
+compared by their value interpreted as double precision floating point number.
+This is `SORT` in its simplest form:
+
+[tdtl]: /topics/data-types#lists
+[tdts]: /topics/data-types#set
+[tdtss]: /topics/data-types#sorted-sets
+
+```
+SORT mylist
+```
+
+Assuming `mylist` is a list of numbers, this command will return the same list
+with the elements sorted from small to large. In order to sort the numbers from
+large to small, use the `!DESC` modifier:
+
+```
+SORT mylist DESC
+```
+
+When `mylist` contains string values and you want to sort them
+lexicographically, use the `!ALPHA` modifier:
+
+```
+SORT mylist ALPHA
+```
+
+Redis is UTF-8 aware, assuming you correctly set the `!LC_COLLATE` environment
+variable.
+
+The number of returned elements can be limited using the `!LIMIT` modifier. This
+modifier takes the `offset` argument, specifying the number of elements to skip,
+and the `count` argument, specifying the number of elements to return starting
+at `offset`. The following example will return 10 elements of the
+sorted version of `mylist`, starting at element 0 (`offset` is zero-based):
+
+```
+SORT mylist LIMIT 0 10
+```
+
+Almost all modifiers can be used together. The following example will return the
+first 5 elements, lexicographically sorted in descending order:
+
+```
+SORT mylist LIMIT 0 5 ALPHA DESC
+```
+
+## Sorting by external keys
+
+Sometimes you want to sort elements using external keys as weights to compare
+instead of comparing the actual elements in the list, set or sorted set. Let's
+say the list `mylist` contains the elements `1`, `2` and `3` representing unique
+IDs of objects stored in `object_1`, `object_2` and `object_3`. When these
+objects have associated weights stored in `weight_1`, `weight_2` and `weight_3`,
+`SORT` can be instructed to use these weights to sort `mylist` with the
+following statement:
+
+```
+SORT mylist BY weight_*
+```
+
+The `BY` option takes a pattern (equal to `weight_*` in this example) that is
+used to generate the keys that are used for sorting. These key names are
+obtained substituting the first occurrence of `*` with the actual value of the
+element in the list (`1`, `2` and `3` in this example).
+
+## Skip sorting the elements
+
+The `!BY` option can also take a non-existent key, which causes `SORT` to skip
+the sorting operation. This is useful if you want to retrieve external keys (see
+the `!GET` option below) without the overhead of sorting.
+
+```
+SORT mylist BY nosort
+```
+
+## Retrieving external keys
+
+Our previous example returns just the sorted IDs. In some cases, it is more
+useful to get the actual objects instead of their IDs (`object_1`, `object_2`
+and `object_3`). Retrieving external keys based on the elements in a list, set
+or sorted set can be done with the following command:
+
+```
+SORT mylist BY weight_* GET object_*
+```
+
+The `!GET` option can be used multiple times in order to get more keys for every
+element of the original list, set or sorted set.
+
+It is also possible to `!GET` the element itself using the special pattern `#`:
+
+```
+SORT mylist BY weight_* GET object_* GET #
+```
+
+## Storing the result of a SORT operation
+
+By default, `SORT` returns the sorted elements to the client. With the `!STORE`
+option, the result will be stored as a list at the specified key instead of
+being returned to the client.
+
+```
+SORT mylist BY weight_* STORE resultkey
+```
+
+An interesting pattern using `SORT ... STORE` consists in associating an
+`EXPIRE` timeout with the resulting key, so that in applications where the
+result of a `SORT` operation can be cached for some time, other clients will use
+the cached list instead of calling `SORT` for every request. When the key times
+out, an updated version of the cache can be created by calling
+`SORT ... STORE` again.
+
+Note that for correctly implementing this pattern it is important to avoid
+multiple clients rebuilding the cache at the same time. Some kind of locking is
+needed here (for instance using `SETNX`).
+
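+A sketch of the caching pattern described above (assuming `mylist` holds three
+elements and that the corresponding `weight_*` keys exist):
+
+```
+redis 127.0.0.1:6379> SORT mylist BY weight_* STORE resultkey
+(integer) 3
+redis 127.0.0.1:6379> EXPIRE resultkey 60
+(integer) 1
+```
+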
+## Using hashes in `!BY` and `!GET`
+
+It is possible to use `!BY` and `!GET` options against hash fields with the
+following syntax:
+
+```
+SORT mylist BY weight_*->fieldname GET object_*->fieldname
+```
+
+The string `->` is used to separate the key name from the hash field name. The
+key is substituted as documented above, and the hash stored at the resulting key
+is accessed to retrieve the specified hash field.
+
+@return
+
+@array-reply: without passing the `store` option the command returns a list of
+sorted elements. @integer-reply: when the `store` option is specified the
+command returns the number of sorted elements in the destination list.
diff --git a/iredis/data/commands/spop.md b/iredis/data/commands/spop.md
new file mode 100644
index 0000000..76b1c89
--- /dev/null
+++ b/iredis/data/commands/spop.md
@@ -0,0 +1,41 @@
+Removes and returns one or more random elements from the set value stored at
+`key`.
+
+This operation is similar to `SRANDMEMBER`, which returns one or more random
+elements from a set but does not remove them.
+
+The `count` argument is available since version 3.2.
+
+@return
+
+@bulk-string-reply: the removed element, or `nil` when `key` does not exist.
+
+@examples
+
+```cli
+SADD myset "one"
+SADD myset "two"
+SADD myset "three"
+SPOP myset
+SMEMBERS myset
+SADD myset "four"
+SADD myset "five"
+SPOP myset 3
+SMEMBERS myset
+```
+
+## Specification of the behavior when count is passed
+
+If count is bigger than the number of elements inside the Set, the command will
+only return the whole set without additional elements.
+
+## Distribution of returned elements
+
+Note that this command is not suitable when you need a guaranteed uniform
+distribution of the returned elements. For more information about the algorithms
+used for SPOP, look up both the Knuth sampling and Floyd sampling algorithms.
+
+## Count argument extension
+
+Redis 3.2 introduced an optional `count` argument that can be passed to `SPOP`
+in order to retrieve multiple elements in a single call.
diff --git a/iredis/data/commands/srandmember.md b/iredis/data/commands/srandmember.md
new file mode 100644
index 0000000..99f9b7e
--- /dev/null
+++ b/iredis/data/commands/srandmember.md
@@ -0,0 +1,63 @@
+When called with just the `key` argument, return a random element from the set
+value stored at `key`.
+
+Starting from Redis version 2.6, when called with the additional `count`
+argument, return an array of `count` **distinct elements** if `count` is
+positive. If called with a negative `count` the behavior changes and the command
+is allowed to return the **same element multiple times**. In this case the
+number of returned elements is the absolute value of the specified `count`.
+
+When called with just the key argument, the operation is similar to `SPOP`,
+however while `SPOP` also removes the randomly selected element from the set,
+`SRANDMEMBER` will just return a random element without altering the original
+set in any way.
+
+@return
+
+@bulk-string-reply: without the additional `count` argument the command returns
+a Bulk Reply with the randomly selected element, or `nil` when `key` does not
+exist. @array-reply: when the additional `count` argument is passed the command
+returns an array of elements, or an empty array when `key` does not exist.
+
+@examples
+
+```cli
+SADD myset one two three
+SRANDMEMBER myset
+SRANDMEMBER myset 2
+SRANDMEMBER myset -5
+```
+
+## Specification of the behavior when count is passed
+
+When a count argument is passed and is positive, the elements are returned as if
+every selected element is removed from the set (like the extraction of numbers
+in the game of Bingo). However elements are **not removed** from the Set. So
+basically:
+
+- No repeated elements are returned.
+- If count is bigger than the number of elements inside the Set, the command
+ will only return the whole set without additional elements.
+
+When instead the count is negative, the behavior changes and the extraction
+happens as if the extracted element were put back inside the bag after every
+extraction: repeated elements are possible, and the requested number of
+elements is always returned, since the same elements can be picked again and
+again. The only exception is an empty Set (a non-existing key), which always
+produces an empty array as a result.
+
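+A short illustrative session (the specific elements returned are random, so the
+replies shown here are just an example):
+
+```
+> SADD letters "a" "b" "c"
+(integer) 3
+> SRANDMEMBER letters 10
+1) "a"
+2) "b"
+3) "c"
+> SRANDMEMBER letters -5
+1) "b"
+2) "a"
+3) "a"
+4) "c"
+5) "a"
+```
+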
+## Distribution of returned elements
+
+The distribution of the returned elements is far from perfect when the number of
+elements in the set is small: this is due to the fact that we use an
+approximated random element function that does not really guarantee a good
+distribution.
+
+The algorithm used, which is implemented inside dict.c, samples the hash table
+buckets to find a non-empty one. Once a non-empty bucket is found, since we use
+chaining in our hash table implementation, the number of elements inside the
+bucket is checked and a random element is selected.
+
+This means that if you have two non-empty buckets in the entire hash table, and
+one has three elements while one has just one, the element that is alone in its
+bucket will be returned with much higher probability.
diff --git a/iredis/data/commands/srem.md b/iredis/data/commands/srem.md
new file mode 100644
index 0000000..6ead535
--- /dev/null
+++ b/iredis/data/commands/srem.md
@@ -0,0 +1,26 @@
+Remove the specified members from the set stored at `key`. Specified members
+that are not a member of this set are ignored. If `key` does not exist, it is
+treated as an empty set and this command returns `0`.
+
+An error is returned when the value stored at `key` is not a set.
+
+@return
+
+@integer-reply: the number of members that were removed from the set, not
+including non existing members.
+
+@history
+
+- `>= 2.4`: Accepts multiple `member` arguments. Redis versions older than 2.4
+ can only remove a set member per call.
+
+@examples
+
+```cli
+SADD myset "one"
+SADD myset "two"
+SADD myset "three"
+SREM myset "one"
+SREM myset "four"
+SMEMBERS myset
+```
diff --git a/iredis/data/commands/sscan.md b/iredis/data/commands/sscan.md
new file mode 100644
index 0000000..c19f3b1
--- /dev/null
+++ b/iredis/data/commands/sscan.md
@@ -0,0 +1 @@
+See `SCAN` for `SSCAN` documentation.
diff --git a/iredis/data/commands/stralgo.md b/iredis/data/commands/stralgo.md
new file mode 100644
index 0000000..ccd72fe
--- /dev/null
+++ b/iredis/data/commands/stralgo.md
@@ -0,0 +1,121 @@
+The STRALGO command implements complex algorithms that operate on strings. Right
+now the only algorithm implemented is the LCS algorithm (longest common
+subsequence).
+However new algorithms could be implemented in the future. The goal of this
+command is to provide to Redis users algorithms that need fast implementations
+and are normally not provided in the standard library of most programming
+languages.
+
+The first argument of the command selects the algorithm to use, right now the
+argument must be "LCS", since this is the only implemented one.
+
+## LCS algorithm
+
+```
+STRALGO LCS [KEYS ...] [STRINGS ...] [LEN] [IDX] [MINMATCHLEN <len>] [WITHMATCHLEN]
+```
+
+The LCS subcommand implements the longest common subsequence algorithm. Note
+that this is different from the longest common substring algorithm, since
+matching characters in the two strings do not need to be contiguous.
+
+For instance the LCS between "foo" and "fao" is "fo", since scanning the two
+strings from left to right, the longest common set of characters is composed of
+the first "f" and then the "o".
+
+LCS is very useful in order to evaluate how similar two strings are. Strings can
+represent many things. For instance if two strings are DNA sequences, the LCS
+will provide a measure of similarity between the two DNA sequences. If the
+strings represent some text edited by some user, the LCS could represent how
+different the new text is compared to the old one, and so forth.
+
+Note that this algorithm runs in `O(N*M)` time, where N is the length of the
+first string and M is the length of the second string. So either spin up a
+different Redis instance in order to run this algorithm, or make sure to run it
+against very small strings.
+
+The basic usage is the following:
+
+```
+> STRALGO LCS STRINGS ohmytext mynewtext
+"mytext"
+```
+
+It is also possible to compute the LCS between the content of two keys:
+
+```
+> MSET key1 ohmytext key2 mynewtext
+OK
+> STRALGO LCS KEYS key1 key2
+"mytext"
+```
+
+Sometimes we need just the length of the match:
+
+```
+> STRALGO LCS STRINGS ohmytext mynewtext LEN
+6
+```
+
+However, what is often very useful is to know the match position in each
+string:
+
+```
+> STRALGO LCS KEYS key1 key2 IDX
+1) "matches"
+2) 1) 1) 1) (integer) 4
+ 2) (integer) 7
+ 2) 1) (integer) 5
+ 2) (integer) 8
+ 2) 1) 1) (integer) 2
+ 2) (integer) 3
+ 2) 1) (integer) 0
+ 2) (integer) 1
+3) "len"
+4) (integer) 6
+```
+
+Matches are produced from the last one to the first one, since this is how the
+algorithm works, and it is more efficient to emit things in the same order. The
+above array means that the first match (second element of the array) is between
+positions 2-3 of the first string and 0-1 of the second. Then there is another
+match between 4-7 and 5-8.
+
+To restrict the list of matches to the ones of a given minimal length:
+
+```
+> STRALGO LCS KEYS key1 key2 IDX MINMATCHLEN 4
+1) "matches"
+2) 1) 1) 1) (integer) 4
+ 2) (integer) 7
+ 2) 1) (integer) 5
+ 2) (integer) 8
+3) "len"
+4) (integer) 6
+```
+
+Finally to also have the match len:
+
+```
+> STRALGO LCS KEYS key1 key2 IDX MINMATCHLEN 4 WITHMATCHLEN
+1) "matches"
+2) 1) 1) 1) (integer) 4
+ 2) (integer) 7
+ 2) 1) (integer) 5
+ 2) (integer) 8
+ 3) (integer) 4
+3) "len"
+4) (integer) 6
+```
+
+@return
+
+For the LCS algorithm:
+
+- Without modifiers the string representing the longest common subsequence is
+  returned.
+- When LEN is given the command returns the length of the longest common
+  subsequence.
+- When IDX is given the command returns an array with the LCS length and all the
+ ranges in both the strings, start and end offset for each string, where there
+ are matches. When WITHMATCHLEN is given each array representing a match will
+ also have the length of the match (see examples).
diff --git a/iredis/data/commands/strlen.md b/iredis/data/commands/strlen.md
new file mode 100644
index 0000000..99a9c55
--- /dev/null
+++ b/iredis/data/commands/strlen.md
@@ -0,0 +1,15 @@
+Returns the length of the string value stored at `key`. An error is returned
+when `key` holds a non-string value.
+
+@return
+
+@integer-reply: the length of the string at `key`, or `0` when `key` does not
+exist.
+
+@examples
+
+```cli
+SET mykey "Hello world"
+STRLEN mykey
+STRLEN nonexisting
+```
diff --git a/iredis/data/commands/subscribe.md b/iredis/data/commands/subscribe.md
new file mode 100644
index 0000000..997670c
--- /dev/null
+++ b/iredis/data/commands/subscribe.md
@@ -0,0 +1,5 @@
+Subscribes the client to the specified channels.
+
+Once the client enters the subscribed state it is not supposed to issue any
+other commands, except for additional `SUBSCRIBE`, `PSUBSCRIBE`, `UNSUBSCRIBE`,
+`PUNSUBSCRIBE`, `PING` and `QUIT` commands.
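+
+For example, a sketch of subscribing to two channels (the channel names are
+illustrative; the final element of each reply is the number of channels the
+client is currently subscribed to):
+
+```
+> SUBSCRIBE news.tech news.sport
+1) "subscribe"
+2) "news.tech"
+3) (integer) 1
+1) "subscribe"
+2) "news.sport"
+3) (integer) 2
+```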
diff --git a/iredis/data/commands/sunion.md b/iredis/data/commands/sunion.md
new file mode 100644
index 0000000..2056468
--- /dev/null
+++ b/iredis/data/commands/sunion.md
@@ -0,0 +1,28 @@
+Returns the members of the set resulting from the union of all the given sets.
+
+For example:
+
+```
+key1 = {a,b,c,d}
+key2 = {c}
+key3 = {a,c,e}
+SUNION key1 key2 key3 = {a,b,c,d,e}
+```
+
+Keys that do not exist are considered to be empty sets.
+
+@return
+
+@array-reply: list with members of the resulting set.
+
+@examples
+
+```cli
+SADD key1 "a"
+SADD key1 "b"
+SADD key1 "c"
+SADD key2 "c"
+SADD key2 "d"
+SADD key2 "e"
+SUNION key1 key2
+```
diff --git a/iredis/data/commands/sunionstore.md b/iredis/data/commands/sunionstore.md
new file mode 100644
index 0000000..716caf1
--- /dev/null
+++ b/iredis/data/commands/sunionstore.md
@@ -0,0 +1,21 @@
+This command is equal to `SUNION`, but instead of returning the resulting set,
+it is stored in `destination`.
+
+If `destination` already exists, it is overwritten.
+
+@return
+
+@integer-reply: the number of elements in the resulting set.
+
+@examples
+
+```cli
+SADD key1 "a"
+SADD key1 "b"
+SADD key1 "c"
+SADD key2 "c"
+SADD key2 "d"
+SADD key2 "e"
+SUNIONSTORE key key1 key2
+SMEMBERS key
+```
diff --git a/iredis/data/commands/swapdb.md b/iredis/data/commands/swapdb.md
new file mode 100644
index 0000000..708096a
--- /dev/null
+++ b/iredis/data/commands/swapdb.md
@@ -0,0 +1,19 @@
+This command swaps two Redis databases, so that immediately all the clients
+connected to a given database will see the data of the other database, and the
+other way around. Example:
+
+ SWAPDB 0 1
+
+This will swap database 0 with database 1. All the clients connected with
+database 0 will immediately see the new data, exactly like all the clients
+connected with database 1 will see the data that was formerly of database 0.
+
+@return
+
+@simple-string-reply: `OK` if `SWAPDB` was executed correctly.
+
+@examples
+
+```
+SWAPDB 0 1
+```
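+
+A slightly longer sketch showing the effect from the point of view of a single
+connection, assuming the connection starts on database 0 and database 1 is
+initially empty (key and value are only illustrative):
+
+```
+> SET mykey "in-db-0"
+OK
+> SWAPDB 0 1
+OK
+> GET mykey
+(nil)
+> SELECT 1
+OK
+> GET mykey
+"in-db-0"
+```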
diff --git a/iredis/data/commands/sync.md b/iredis/data/commands/sync.md
new file mode 100644
index 0000000..48250b4
--- /dev/null
+++ b/iredis/data/commands/sync.md
@@ -0,0 +1,15 @@
+Initiates a replication stream from the master.
+
+The `SYNC` command is called by Redis replicas for initiating a replication
+stream from the master. It has been replaced in newer versions of Redis by
+`PSYNC`.
+
+For more information about replication in Redis please check the [replication
+page][tr].
+
+[tr]: /topics/replication
+
+@return
+
+**Non standard return value**, a bulk transfer of the data followed by `PING`
+and write requests from the master.
diff --git a/iredis/data/commands/time.md b/iredis/data/commands/time.md
new file mode 100644
index 0000000..441376e
--- /dev/null
+++ b/iredis/data/commands/time.md
@@ -0,0 +1,20 @@
+The `TIME` command returns the current server time as a two-item list: a Unix
+timestamp and the number of microseconds already elapsed in the current second.
+Basically the interface is very similar to the one of the `gettimeofday` system
+call.
+
+@return
+
+@array-reply, specifically:
+
+A multi bulk reply containing two elements:
+
+- unix time in seconds.
+- microseconds.
+
+@examples
+
+```cli
+TIME
+TIME
+```
diff --git a/iredis/data/commands/touch.md b/iredis/data/commands/touch.md
new file mode 100644
index 0000000..eee3365
--- /dev/null
+++ b/iredis/data/commands/touch.md
@@ -0,0 +1,13 @@
+Alters the last access time of one or more keys. A key is ignored if it does not exist.
+
+@return
+
+@integer-reply: The number of keys that were touched.
+
+@examples
+
+```cli
+SET key1 "Hello"
+SET key2 "World"
+TOUCH key1 key2
+```
diff --git a/iredis/data/commands/ttl.md b/iredis/data/commands/ttl.md
new file mode 100644
index 0000000..c36557c
--- /dev/null
+++ b/iredis/data/commands/ttl.md
@@ -0,0 +1,27 @@
+Returns the remaining time to live of a key that has a timeout. This
+introspection capability allows a Redis client to check how many seconds a given
+key will continue to be part of the dataset.
+
+In Redis 2.6 or older the command returns `-1` if the key does not exist or if
+the key exists but has no associated expire.
+
+Starting with Redis 2.8 the return value in case of error changed:
+
+- The command returns `-2` if the key does not exist.
+- The command returns `-1` if the key exists but has no associated expire.
+
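+For example, on Redis 2.8 or greater (illustrative session):
+
+```
+> TTL nosuchkey
+(integer) -2
+> SET mykey "Hello"
+OK
+> TTL mykey
+(integer) -1
+```
+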
+See also the `PTTL` command that returns the same information with milliseconds
+resolution (Only available in Redis 2.6 or greater).
+
+@return
+
+@integer-reply: TTL in seconds, or a negative value in order to signal an error
+(see the description above).
+
+@examples
+
+```cli
+SET mykey "Hello"
+EXPIRE mykey 10
+TTL mykey
+```
diff --git a/iredis/data/commands/type.md b/iredis/data/commands/type.md
new file mode 100644
index 0000000..d27a7e8
--- /dev/null
+++ b/iredis/data/commands/type.md
@@ -0,0 +1,18 @@
+Returns the string representation of the type of the value stored at `key`. The
+different types that can be returned are: `string`, `list`, `set`, `zset`,
+`hash` and `stream`.
+
+@return
+
+@simple-string-reply: type of `key`, or `none` when `key` does not exist.
+
+@examples
+
+```cli
+SET key1 "value"
+LPUSH key2 "value"
+SADD key3 "value"
+TYPE key1
+TYPE key2
+TYPE key3
+```
diff --git a/iredis/data/commands/unlink.md b/iredis/data/commands/unlink.md
new file mode 100644
index 0000000..e305440
--- /dev/null
+++ b/iredis/data/commands/unlink.md
@@ -0,0 +1,18 @@
+This command is very similar to `DEL`: it removes the specified keys. Just like
+`DEL` a key is ignored if it does not exist. However the command performs the
+actual memory reclaiming in a different thread, so it is not blocking, while
+`DEL` is. This is where the command name comes from: the command just
+**unlinks** the keys from the keyspace. The actual removal will happen later
+asynchronously.
+
+@return
+
+@integer-reply: The number of keys that were unlinked.
+
+@examples
+
+```cli
+SET key1 "Hello"
+SET key2 "World"
+UNLINK key1 key2 key3
+```
diff --git a/iredis/data/commands/unsubscribe.md b/iredis/data/commands/unsubscribe.md
new file mode 100644
index 0000000..78c4d0c
--- /dev/null
+++ b/iredis/data/commands/unsubscribe.md
@@ -0,0 +1,6 @@
+Unsubscribes the client from the given channels, or from all of them if none is
+given.
+
+When no channels are specified, the client is unsubscribed from all the
+previously subscribed channels. In this case, a message for every unsubscribed
+channel will be sent to the client.
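+
+For example, a client subscribed only to `news.tech` that sends a bare
+`UNSUBSCRIBE` would receive something like the following (illustrative; the
+final element is the number of remaining subscriptions):
+
+```
+> UNSUBSCRIBE
+1) "unsubscribe"
+2) "news.tech"
+3) (integer) 0
+```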
diff --git a/iredis/data/commands/unwatch.md b/iredis/data/commands/unwatch.md
new file mode 100644
index 0000000..b60bcb8
--- /dev/null
+++ b/iredis/data/commands/unwatch.md
@@ -0,0 +1,9 @@
+Flushes all the previously watched keys for a [transaction][tt].
+
+[tt]: /topics/transactions
+
+If you call `EXEC` or `DISCARD`, there's no need to manually call `UNWATCH`.
+
+@return
+
+@simple-string-reply: always `OK`.
diff --git a/iredis/data/commands/wait.md b/iredis/data/commands/wait.md
new file mode 100644
index 0000000..e5a179c
--- /dev/null
+++ b/iredis/data/commands/wait.md
@@ -0,0 +1,75 @@
+This command blocks the current client until all the previous write commands are
+successfully transferred and acknowledged by at least the specified number of
+replicas. If the timeout, specified in milliseconds, is reached, the command
+returns even if the specified number of replicas were not yet reached.
+
+The command **will always return** the number of replicas that acknowledged the
+write commands sent before the `WAIT` command, both in the case where the
+specified number of replicas is reached, and when the timeout is reached.
+
+A few remarks:
+
+1. When `WAIT` returns, all the previous write commands sent in the context of
+ the current connection are guaranteed to be received by the number of
+ replicas returned by `WAIT`.
+2. If the command is sent as part of a `MULTI` transaction, the command does not
+   block but instead just returns ASAP the number of replicas that acknowledged
+   the previous write commands.
+3. A timeout of 0 means to block forever.
+4. Since `WAIT` returns the number of replicas reached both in case of failure
+   and success, the client should check that the returned value is equal to or
+   greater than the replication level it demanded.
+
+## Consistency and WAIT
+
+Note that `WAIT` does not make Redis a strongly consistent store: while
+synchronous replication is part of a replicated state machine, it is not the
+only thing needed. However in the context of Sentinel or Redis Cluster failover,
+`WAIT` improves the real world data safety.
+
+Specifically if a given write is transferred to one or more replicas, it is more
+likely (but not guaranteed) that if the master fails, we'll be able to promote,
+during a failover, a replica that received the write: both Sentinel and Redis
+Cluster will do a best-effort attempt to promote the best replica among the set
+of available replicas.
+
+However this is just a best-effort attempt so it is possible to still lose a
+write synchronously replicated to multiple replicas.
+
+## Implementation details
+
+Since the introduction of partial resynchronization with replicas (PSYNC
+feature) Redis replicas asynchronously ping their master with the offset they
+already processed in the replication stream. This is used in multiple ways:
+
+1. Detect timed out replicas.
+2. Perform a partial resynchronization after a disconnection.
+3. Implement `WAIT`.
+
+In the specific case of the implementation of `WAIT`, Redis remembers, for each
+client, the replication offset of the produced replication stream when a given
+write command was executed in the context of a given client. When `WAIT` is
+called Redis checks if the specified number of replicas already acknowledged
+this offset or a greater one.
+
+@return
+
+@integer-reply: The command returns the number of replicas reached by all the
+writes performed in the context of the current connection.
+
+@examples
+
+```
+> SET foo bar
+OK
+> WAIT 1 0
+(integer) 1
+> WAIT 2 1000
+(integer) 1
+```
+
+In the above example the first call to `WAIT` does not use a timeout and
+asks for the write to reach 1 replica. It returns with success. In the second
+attempt instead we put a timeout, and ask for the replication of the write to
+two replicas. Since there is a single replica available, after one second `WAIT`
+unblocks and returns 1, the number of replicas reached.
diff --git a/iredis/data/commands/watch.md b/iredis/data/commands/watch.md
new file mode 100644
index 0000000..08f823f
--- /dev/null
+++ b/iredis/data/commands/watch.md
@@ -0,0 +1,8 @@
+Marks the given keys to be watched for conditional execution of a
+[transaction][tt].
+
+[tt]: /topics/transactions
+
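+A minimal sketch of the typical optimistic locking pattern (the replies assume
+no other client modified `mykey` between `WATCH` and `EXEC`; otherwise `EXEC`
+would return a null reply and the transaction would have to be retried):
+
+```
+> WATCH mykey
+OK
+> MULTI
+OK
+> SET mykey "newval"
+QUEUED
+> EXEC
+1) OK
+```
+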
+@return
+
+@simple-string-reply: always `OK`.
diff --git a/iredis/data/commands/xack.md b/iredis/data/commands/xack.md
new file mode 100644
index 0000000..76b7c13
--- /dev/null
+++ b/iredis/data/commands/xack.md
@@ -0,0 +1,25 @@
+The `XACK` command removes one or multiple messages from the _pending entries
+list_ (PEL) of a stream consumer group. A message is pending, and as such stored
+inside the PEL, when it was delivered to some consumer, normally as a side
+effect of calling `XREADGROUP`, or when a consumer took ownership of a message
+calling `XCLAIM`. The pending message was delivered to some consumer but the
+server is not yet sure it was processed at least once. So new calls to
+`XREADGROUP` to grab the message history for a consumer (for instance using an
+ID of 0) will return such a message. Similarly the pending message will be
+listed by the `XPENDING` command, which inspects the PEL.
+
+Once a consumer _successfully_ processes a message, it should call `XACK` so
+that such message does not get processed again, and as a side effect, the PEL
+entry about this message is also purged, releasing memory from the Redis server.
+
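+A minimal sketch of this flow (the stream, group, consumer, ID and field-value
+pair are illustrative): the entry is read via `XREADGROUP` and then acknowledged
+with `XACK`.
+
+```
+> XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream >
+1) 1) "mystream"
+   2) 1) 1) "1526569495631-0"
+         2) 1) "message"
+            2) "apple"
+> XACK mystream mygroup 1526569495631-0
+(integer) 1
+```
+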
+@return
+
+@integer-reply, specifically:
+
+The command returns the number of messages successfully acknowledged. Certain
+message IDs may no longer be part of the PEL (for example because they have
+already been acknowledged), and `XACK` will not count them as successfully
+acknowledged.
+
+@examples
+
+```cli
+XACK mystream mygroup 1526569495631-0
+```
diff --git a/iredis/data/commands/xadd.md b/iredis/data/commands/xadd.md
new file mode 100644
index 0000000..d60b571
--- /dev/null
+++ b/iredis/data/commands/xadd.md
@@ -0,0 +1,87 @@
+Appends the specified stream entry to the stream at the specified key. If the
+key does not exist, as a side effect of running this command the key is created
+with a stream value.
+
+An entry is composed of a set of field-value pairs; it is basically a small
+dictionary. The field-value pairs are stored in the same order they are given by
+the user, and commands to read the stream such as `XRANGE` or `XREAD` are
+guaranteed to return the fields and values exactly in the same order they were
+added by `XADD`.
+
+`XADD` is the _only Redis command_ that can add data to a stream, but there are
+other commands, such as `XDEL` and `XTRIM`, that are able to remove data from a
+stream.
+
+## Specifying a Stream ID as an argument
+
+A stream entry ID identifies a given entry inside a stream. The `XADD` command
+will auto-generate a unique ID for you if the ID argument specified is the `*`
+character (asterisk ASCII character). However, while useful only in very rare
+cases, it is possible to specify a well-formed ID, so that the new entry will be
+added exactly with the specified ID.
+
+IDs are specified by two numbers separated by a `-` character:
+
+ 1526919030474-55
+
+Both quantities are 64-bit numbers. When an ID is auto-generated, the first part
+is the Unix time in milliseconds of the Redis instance generating the ID. The
+second part is just a sequence number and is used in order to distinguish IDs
+generated in the same millisecond.
+
+IDs are guaranteed to be always incremental: If you compare the ID of the entry
+just inserted it will be greater than any other past ID, so entries are totally
+ordered inside a stream. In order to guarantee this property, if the current top
+ID in the stream has a time greater than the current local time of the instance,
+the top entry time will be used instead, and the sequence part of the ID
+incremented. This may happen when, for instance, the local clock jumps backward,
+or if after a failover the new master has a different absolute time.
+
+When a user specifies an explicit ID to `XADD`, the minimum valid ID is `0-1`,
+and the user _must_ specify an ID which is greater than any other ID currently
+inside the stream, otherwise the command will fail. Usually resorting to
+specific IDs is useful only if you have another system generating unique IDs
+(for instance an SQL table) and you really want the Redis stream IDs to match
+the ones of this other system.
+
+## Capped streams
+
+It is possible to limit the size of the stream to a maximum number of elements
+using the **MAXLEN** option.
+
+Trimming with **MAXLEN** can be expensive compared to just adding entries with
+`XADD`: streams are represented by macro nodes inside a radix tree, in order to be
+very memory efficient. Altering the single macro node, consisting of a few tens
+of elements, is not optimal. So it is possible to give the command in the
+following special form:
+
+ XADD mystream MAXLEN ~ 1000 * ... entry fields here ...
+
+The `~` argument between the **MAXLEN** option and the actual count means that
+the user is not really requesting that the stream length is exactly 1000 items,
+but instead it could be a few tens of entries more, but never less than 1000
+items. When this option modifier is used, the trimming is performed only when
+Redis is able to remove a whole macro node. This makes it much more efficient,
+and it is usually what you want.
+
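+As a sketch, exact trimming simply omits the `~` modifier (the reply shown is
+illustrative, and the final length assumes the stream already contained at
+least 1000 entries):
+
+```
+> XADD mystream MAXLEN 1000 * event login user-id 42
+"1526919030474-55"
+> XLEN mystream
+(integer) 1000
+```
+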
+## Additional information about streams
+
+For further information about Redis streams please check our
+[introduction to Redis Streams document](/topics/streams-intro).
+
+@return
+
+@bulk-string-reply, specifically:
+
+The command returns the ID of the added entry. The ID is the one auto-generated
+if `*` is passed as ID argument, otherwise the command just returns the same ID
+specified by the user during insertion.
+
+@examples
+
+```cli
+XADD mystream * name Sara surname OConnor
+XADD mystream * field1 value1 field2 value2 field3 value3
+XLEN mystream
+XRANGE mystream - +
+```
diff --git a/iredis/data/commands/xclaim.md b/iredis/data/commands/xclaim.md
new file mode 100644
index 0000000..e6ee8c9
--- /dev/null
+++ b/iredis/data/commands/xclaim.md
@@ -0,0 +1,83 @@
+In the context of a stream consumer group, this command changes the ownership of
+a pending message, so that the new owner is the consumer specified as the
+command argument. Normally this is what happens:
+
+1. There is a stream with an associated consumer group.
+2. Some consumer A reads a message via `XREADGROUP` from a stream, in the
+ context of that consumer group.
+3. As a side effect a pending message entry is created in the pending entries
+ list (PEL) of the consumer group: it means the message was delivered to a
+ given consumer, but it was not yet acknowledged via `XACK`.
+4. Then suddenly that consumer fails forever.
+5. Other consumers may inspect the list of pending messages, that are stale for
+ quite some time, using the `XPENDING` command. In order to continue
+ processing such messages, they use `XCLAIM` to acquire the ownership of the
+ message and continue.
+
+This dynamic is clearly explained in the
+[Stream intro documentation](/topics/streams-intro).
+
+Note that the message is claimed only if its idle time is greater than the
+minimum idle time we specify when calling `XCLAIM`. Because as a side effect
+`XCLAIM` will also reset the idle time (since this is a new attempt at
+processing the message), two consumers trying to claim a message at the same
+time will never both succeed: only one will successfully claim the message. This
+avoids trivially processing a given message multiple times (yet multiple
+processing is possible and unavoidable in the general case).
+
+Moreover, as a side effect, `XCLAIM` will increment the count of attempted
+deliveries of the message unless the `JUSTID` option has been specified (which
+only delivers the message ID, not the message itself). In this way messages that
+cannot be processed for some reason, for instance because the consumers crash
+attempting to process them, will start to have a larger counter and can be
+detected inside the system.
+
+## Command options
+
+The command has multiple options, however most are mainly for internal use in
+order to transfer the effects of `XCLAIM` or other commands to the AOF file and
+to propagate the same effects to the slaves, and are unlikely to be useful to
+normal users:
+
+1. `IDLE <ms>`: Set the idle time (last time it was delivered) of the message.
+ If IDLE is not specified, an IDLE of 0 is assumed, that is, the time count is
+ reset because the message has now a new owner trying to process it.
+2. `TIME <ms-unix-time>`: This is the same as IDLE but instead of a relative
+ amount of milliseconds, it sets the idle time to a specific Unix time (in
+ milliseconds). This is useful in order to rewrite the AOF file generating
+ `XCLAIM` commands.
+3. `RETRYCOUNT <count>`: Set the retry counter to the specified value. This
+ counter is incremented every time a message is delivered again. Normally
+ `XCLAIM` does not alter this counter, which is just served to clients when
+ the XPENDING command is called: this way clients can detect anomalies, like
+ messages that are never processed for some reason after a big number of
+ delivery attempts.
+4. `FORCE`: Creates the pending message entry in the PEL even if certain
+ specified IDs are not already in the PEL assigned to a different client.
+   However the message must exist in the stream, otherwise the IDs of
+   non-existing messages are ignored.
+5. `JUSTID`: Return just an array of IDs of messages successfully claimed,
+ without returning the actual message. Using this option means the retry
+ counter is not incremented.
+
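+For instance, a sketch of claiming with `JUSTID` (the IDs are illustrative):
+only the IDs of the successfully claimed messages are returned, and the retry
+counter is left untouched.
+
+```
+> XCLAIM mystream mygroup Alice 3600000 1526569498055-0 JUSTID
+1) 1526569498055-0
+```
+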
+@return
+
+@array-reply, specifically:
+
+The command returns all the messages successfully claimed, in the same format as
+`XRANGE`. However if the `JUSTID` option was specified, only the message IDs are
+reported, without including the actual message.
+
+Example:
+
+```
+> XCLAIM mystream mygroup Alice 3600000 1526569498055-0
+1) 1) 1526569498055-0
+ 2) 1) "message"
+ 2) "orange"
+```
+
+In the above example we claim the message with ID `1526569498055-0`, only if the
+message has been idle for at least one hour without the original consumer or
+some other consumer making progress (acknowledging or claiming it), and we
+assign its ownership to the consumer `Alice`.
diff --git a/iredis/data/commands/xdel.md b/iredis/data/commands/xdel.md
new file mode 100644
index 0000000..3f507a8
--- /dev/null
+++ b/iredis/data/commands/xdel.md
@@ -0,0 +1,51 @@
+Removes the specified entries from a stream, and returns the number of entries
+deleted, which may be different from the number of IDs passed to the command in
+case certain IDs do not exist.
+
+Normally you may think of a Redis stream as an append-only data structure;
+however Redis streams are represented in memory, so we are able to also delete
+entries. This may be useful, for instance, in order to comply with certain
+privacy policies.
+
+## Understanding the low level details of entries deletion
+
+Redis streams are represented in a way that makes them memory efficient: a radix
+tree is used in order to index macro-nodes that pack linearly tens of stream
+entries. Normally what happens when you delete an entry from a stream is that
+the entry is not _really_ evicted, it just gets marked as deleted.
+
+Eventually if all the entries in a macro-node are marked as deleted, the whole
+node is destroyed and the memory reclaimed. This means that if you delete a
+large amount of entries from a stream, for instance more than 50% of the entries
+appended to the stream, the memory usage per entry may increase, since what
+happens is that the stream will start to be fragmented. However the stream
+performance will remain the same.
+
+In future versions of Redis it is possible that we'll trigger a node garbage
+collection in case a given macro-node reaches a given amount of deleted entries.
+Currently with the usage we anticipate for this data structure, it is not a good
+idea to add such complexity.
+
+@return
+
+@integer-reply: the number of entries actually deleted.
+
+@examples
+
+```
+> XADD mystream * a 1
+1538561698944-0
+> XADD mystream * b 2
+1538561700640-0
+> XADD mystream * c 3
+1538561701744-0
+> XDEL mystream 1538561700640-0
+(integer) 1
+127.0.0.1:6379> XRANGE mystream - +
+1) 1) 1538561698944-0
+ 2) 1) "a"
+ 2) "1"
+2) 1) 1538561701744-0
+ 2) 1) "c"
+ 2) "3"
+```
diff --git a/iredis/data/commands/xgroup.md b/iredis/data/commands/xgroup.md
new file mode 100644
index 0000000..b690d87
--- /dev/null
+++ b/iredis/data/commands/xgroup.md
@@ -0,0 +1,64 @@
+This command is used in order to manage the consumer groups associated with a
+stream data structure. Using `XGROUP` you can:
+
+- Create a new consumer group associated with a stream.
+- Destroy a consumer group.
+- Remove a specific consumer from a consumer group.
+- Set the consumer group _last delivered ID_ to something else.
+
+To create a new consumer group, use the following form:
+
+ XGROUP CREATE mystream consumer-group-name $
+
+The last argument is the ID of the last item in the stream to consider already
+delivered. In the above case we used the special ID '\$' (that means: the ID of
+the last item in the stream). In this case the consumers fetching data from that
+consumer group will only see new elements arriving in the stream.
+
+If instead you want consumers to fetch the whole stream history, use zero as the
+starting ID for the consumer group:
+
+ XGROUP CREATE mystream consumer-group-name 0
+
+Of course it is also possible to use any other valid ID. If the specified
+consumer group already exists, the command returns a `-BUSYGROUP` error.
+Otherwise the operation is performed and OK is returned. There are no hard
+limits to the number of consumer groups you can associate to a given stream.
+
+If the specified stream doesn't exist when creating a group, an error will be
+returned. You can use the optional `MKSTREAM` subcommand as the last argument
+after the `ID` to automatically create the stream, if it doesn't exist. Note
+that if the stream is created in this way it will have a length of 0:
+
+ XGROUP CREATE mystream consumer-group-name $ MKSTREAM
+
+A consumer group can be destroyed completely by using the following form:
+
+ XGROUP DESTROY mystream consumer-group-name
+
+The consumer group will be destroyed even if there are active consumers and
+pending messages, so make sure to call this command only when really needed.
+
+To just remove a given consumer from a consumer group, the following form is
+used:
+
+ XGROUP DELCONSUMER mystream consumer-group-name myconsumer123
+
+Consumers in a consumer group are auto-created every time a new consumer name is
+mentioned by some command. However sometimes it may be useful to remove old
+consumers since they are no longer used. This form returns the number of pending
+messages that the consumer had before it was deleted.
+
+Finally it is possible to set the next message to deliver using the `SETID`
+subcommand. Normally the next ID is set when the consumer group is created, as
+the last argument of `XGROUP CREATE`. However using this form the next ID can be
+modified later without deleting and creating the consumer group again. For
+instance if you want the consumers in a consumer group to re-process all the
+messages in a stream, you may want to set its next ID to 0:
+
+ XGROUP SETID mystream consumer-group-name 0
+
+Finally to get some help if you don't remember the syntax, use the HELP
+subcommand:
+
+ XGROUP HELP
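+
+A short illustrative session tying these forms together (the generated stream
+ID is just an example):
+
+```
+> XADD mystream * field value
+"1526919030474-55"
+> XGROUP CREATE mystream mygroup $
+OK
+> XGROUP CREATE mystream mygroup $
+(error) BUSYGROUP Consumer Group name already exists
+> XGROUP DESTROY mystream mygroup
+(integer) 1
+```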
diff --git a/iredis/data/commands/xinfo.md b/iredis/data/commands/xinfo.md
new file mode 100644
index 0000000..b9c228d
--- /dev/null
+++ b/iredis/data/commands/xinfo.md
@@ -0,0 +1,182 @@
+This is an introspection command used in order to retrieve different information
+about the streams and associated consumer groups. Three forms are possible:
+
+- `XINFO STREAM <key>`
+
+In this form the command returns general information about the stream stored at
+the specified key.
+
+```
+> XINFO STREAM mystream
+ 1) length
+ 2) (integer) 2
+ 3) radix-tree-keys
+ 4) (integer) 1
+ 5) radix-tree-nodes
+ 6) (integer) 2
+ 7) groups
+ 8) (integer) 2
+ 9) last-generated-id
+10) 1538385846314-0
+11) first-entry
+12) 1) 1538385820729-0
+ 2) 1) "foo"
+ 2) "bar"
+13) last-entry
+14) 1) 1538385846314-0
+ 2) 1) "field"
+ 2) "value"
+```
+
+In the above example you can see that the reported information includes the
+number of elements of the stream, details about the radix tree representing the
+stream (mostly useful for optimization and debugging tasks), the number of
+consumer groups associated with the stream, and the last generated ID, which may
+not be the same as the last entry ID in case some entry was deleted. Finally the
+full first and last entries in the stream are shown, in order to give some sense
+of what the stream content is.
+
+- `XINFO STREAM <key> FULL [COUNT <count>]`
+
+In this form the command returns the entire state of the stream, including
+entries, groups, consumers and PELs. This form is available since Redis 6.0.
+
+```
+> XADD mystream * foo bar
+"1588152471065-0"
+> XADD mystream * foo bar2
+"1588152473531-0"
+> XGROUP CREATE mystream mygroup 0-0
+OK
+> XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream >
+1) 1) "mystream"
+ 2) 1) 1) "1588152471065-0"
+ 2) 1) "foo"
+ 2) "bar"
+> XINFO STREAM mystream FULL
+ 1) "length"
+ 2) (integer) 2
+ 3) "radix-tree-keys"
+ 4) (integer) 1
+ 5) "radix-tree-nodes"
+ 6) (integer) 2
+ 7) "last-generated-id"
+ 8) "1588152473531-0"
+ 9) "entries"
+10) 1) 1) "1588152471065-0"
+ 2) 1) "foo"
+ 2) "bar"
+ 2) 1) "1588152473531-0"
+ 2) 1) "foo"
+ 2) "bar2"
+11) "groups"
+12) 1) 1) "name"
+ 2) "mygroup"
+ 3) "last-delivered-id"
+ 4) "1588152471065-0"
+ 5) "pel-count"
+ 6) (integer) 1
+ 7) "pending"
+ 8) 1) 1) "1588152471065-0"
+ 2) "Alice"
+ 3) (integer) 1588152520299
+ 4) (integer) 1
+ 9) "consumers"
+ 10) 1) 1) "name"
+ 2) "Alice"
+ 3) "seen-time"
+ 4) (integer) 1588152520299
+ 5) "pel-count"
+ 6) (integer) 1
+ 7) "pending"
+ 8) 1) 1) "1588152471065-0"
+ 2) (integer) 1588152520299
+ 3) (integer) 1
+```
+
+The reported information contains all of the fields reported by the simple form
+of `XINFO STREAM`, with some additional information:
+
+1. Stream entries are returned, including fields and values.
+2. Groups, consumers and PELs are returned.
+
+The `COUNT` option is used to limit the amount of stream/PEL entries that are
+returned (the first `<count>` entries are returned). The default `COUNT` is 10,
+and a `COUNT` of 0 means that all entries will be returned (execution time may
+be long if the stream has a lot of entries).
+
+- `XINFO GROUPS <key>`
+
+In this form we just get as output all the consumer groups associated with the
+stream:
+
+```
+> XINFO GROUPS mystream
+1) 1) name
+ 2) "mygroup"
+ 3) consumers
+ 4) (integer) 2
+ 5) pending
+ 6) (integer) 2
+ 7) last-delivered-id
+ 8) "1588152489012-0"
+2) 1) name
+ 2) "some-other-group"
+ 3) consumers
+ 4) (integer) 1
+ 5) pending
+ 6) (integer) 0
+ 7) last-delivered-id
+ 8) "1588152498034-0"
+```
+
+For each consumer group listed the command also shows the number of consumers
+known in that group and the pending messages (delivered but not yet
+acknowledged) in that group.
+
+- `XINFO CONSUMERS <key> <group>`
+
+Finally it is possible to get the list of every consumer in a specific consumer
+group:
+
+```
+> XINFO CONSUMERS mystream mygroup
+1) 1) name
+ 2) "Alice"
+ 3) pending
+ 4) (integer) 1
+ 5) idle
+ 6) (integer) 9104628
+2) 1) name
+ 2) "Bob"
+ 3) pending
+ 4) (integer) 1
+ 5) idle
+ 6) (integer) 83841983
+```
+
+We can see the idle time in milliseconds (last field) together with the consumer
+name and the number of pending messages for this specific consumer.
+
+**Note that you should not rely on the exact position of the fields**, nor on
+the number of fields: new fields may be added in the future. So a well behaved
+client should fetch the whole list, and report it to the user, for example, as a
+dictionary data structure. Low level clients such as C clients, where the items
+will likely be reported back in a linear array, should document that the order
+is undefined.
+
+Finally it is possible to get help from the command, in case the user can't
+remember the exact syntax, by using the `HELP` subcommand:
+
+```
+> XINFO HELP
+1) XINFO <subcommand> arg arg ... arg. Subcommands are:
+2) CONSUMERS <key> <groupname> -- Show consumer groups of group <groupname>.
+3) GROUPS <key> -- Show the stream consumer groups.
+4) STREAM <key> -- Show information about the stream.
+5) HELP
+```
+
+@history
+
+- `>= 6.0.0`: Added the `FULL` option to `XINFO STREAM`.
diff --git a/iredis/data/commands/xlen.md b/iredis/data/commands/xlen.md
new file mode 100644
index 0000000..5506449
--- /dev/null
+++ b/iredis/data/commands/xlen.md
@@ -0,0 +1,21 @@
+Returns the number of entries inside a stream. If the specified key does not
+exist the command returns zero, as if the stream was empty. However note that
+unlike other Redis types, zero-length streams are possible, so you should call
+`TYPE` or `EXISTS` in order to check if a key exists or not.
+
+Streams are not auto-deleted once they have no entries inside (for instance
+after an `XDEL` call), because the stream may have consumer groups associated
+with it.
+
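+For example (the generated ID is illustrative), a stream still exists with
+length zero after its only entry is deleted:
+
+```
+> XADD mystream * item 1
+"1526919030474-55"
+> XDEL mystream 1526919030474-55
+(integer) 1
+> XLEN mystream
+(integer) 0
+> EXISTS mystream
+(integer) 1
+```
+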
+@return
+
+@integer-reply: the number of entries of the stream at `key`.
+
+@examples
+
+```cli
+XADD mystream * item 1
+XADD mystream * item 2
+XADD mystream * item 3
+XLEN mystream
+```
diff --git a/iredis/data/commands/xpending.md b/iredis/data/commands/xpending.md
new file mode 100644
index 0000000..53e3f33
--- /dev/null
+++ b/iredis/data/commands/xpending.md
@@ -0,0 +1,110 @@
+Fetching data from a stream via a consumer group, and not acknowledging such
+data, has the effect of creating _pending entries_. This is well explained in
+the `XREADGROUP` command, and even better in our
+[introduction to Redis Streams](/topics/streams-intro). The `XACK` command will
+immediately remove the pending entry from the Pending Entry List (PEL) since
+once a message is successfully processed, there is no longer need for the
+consumer group to track it and to remember the current owner of the message.
+
+The `XPENDING` command is the interface to inspect the list of pending messages,
+and is thus a very important command in order to observe and understand what is
+happening with a stream's consumer groups: what clients are active, what
+messages are pending to be consumed, or to see if there are idle messages.
+Moreover this command, together with `XCLAIM`, is used in order to implement
+recovery of consumers that are failing for a long time, and as a result
+certain messages are not processed: a different consumer can claim the message
+and continue. This is better explained in the
+[streams intro](/topics/streams-intro) and in the `XCLAIM` command page, and is
+not covered here.
+
+## Summary form of XPENDING
+
+When `XPENDING` is called with just a key name and a consumer group name, it
+just outputs a summary about the pending messages in a given consumer group. In
+the following example, we create a consumer group and immediately create a
+pending message by reading from the group with `XREADGROUP`.
+
+```
+> XGROUP CREATE mystream group55 0-0
+OK
+
+> XREADGROUP GROUP group55 consumer-123 COUNT 1 STREAMS mystream >
+1) 1) "mystream"
+ 2) 1) 1) 1526984818136-0
+ 2) 1) "duration"
+ 2) "1532"
+ 3) "event-id"
+ 4) "5"
+ 5) "user-id"
+ 6) "7782813"
+```
+
+We expect the pending entries list for the consumer group `group55` to have a
+message right now: consumer named `consumer-123` fetched the message without
+acknowledging its processing. The simple `XPENDING` form will give us this
+information:
+
+```
+> XPENDING mystream group55
+1) (integer) 1
+2) 1526984818136-0
+3) 1526984818136-0
+4) 1) 1) "consumer-123"
+ 2) "1"
+```
+
+In this form, the command outputs the total number of pending messages for this
+consumer group, which is one, followed by the smallest and greatest ID among the
+pending messages, and then lists every consumer in the consumer group with at
+least one pending message, and the number of pending messages it has.
+
+This is a good overview, but sometimes we are interested in the details. In
+order to see all the pending messages with more associated information we need
+to also pass a range of IDs, in a similar way as we do with `XRANGE`, and a
+non-optional _count_ argument, to limit the number of messages returned per call:
+
+```
+> XPENDING mystream group55 - + 10
+1) 1) 1526984818136-0
+ 2) "consumer-123"
+ 3) (integer) 196415
+ 4) (integer) 1
+```
+
+In the extended form we no longer see the summary information; instead there is
+detailed information for each message in the pending entries list. For each
+message four attributes are returned:
+
+1. The ID of the message.
+2. The name of the consumer that fetched the message and has still to
+ acknowledge it. We call it the current _owner_ of the message.
+3. The number of milliseconds that elapsed since the last time this message was
+ delivered to this consumer.
+4. The number of times this message was delivered.
+
+The deliveries counter, that is the fourth element in the array, is incremented
+when some other consumer _claims_ the message with `XCLAIM`, or when the message
+is delivered again via `XREADGROUP`, when accessing the history of a consumer in
+a consumer group (see the `XREADGROUP` page for more info).
+
+Finally it is possible to pass an additional argument to the command, in order
+to see the messages having a specific owner:
+
+```
+> XPENDING mystream group55 - + 10 consumer-123
+```
+
+But in the above case the output would be the same, since we have pending
+messages only for a single consumer. However what is important to keep in mind
+is that this operation, filtering by a specific consumer, is not inefficient
+even when there are many pending messages from many consumers: we have a pending
+entries list data structure both globally, and for every consumer, so we can
+very efficiently show just messages pending for a single consumer.
+
+@return
+
+@array-reply, specifically:
+
+The command returns data in different formats depending on the way it is called,
+as previously explained in this page. However the reply is always an array of
+items.
diff --git a/iredis/data/commands/xrange.md b/iredis/data/commands/xrange.md
new file mode 100644
index 0000000..b9d5ab0
--- /dev/null
+++ b/iredis/data/commands/xrange.md
@@ -0,0 +1,183 @@
+The command returns the stream entries matching a given range of IDs. The range
+is specified by a minimum and maximum ID. All the entries having an ID between
+the two specified or exactly one of the two IDs specified (closed interval) are
+returned.
+
+The `XRANGE` command has a number of applications:
+
+- Returning items in a specific time range. This is possible because Stream IDs
+ are [related to time](/topics/streams-intro).
+- Iterating a stream incrementally, returning just a few items at every
+ iteration. However it is semantically much more robust than the `SCAN` family
+ of functions.
+- Fetching a single entry from a stream, providing the ID of the entry to fetch
+ two times: as start and end of the query interval.
+
+The command also has a reciprocal command returning items in the reverse order,
+called `XREVRANGE`, which is otherwise identical.
+
+## `-` and `+` special IDs
+
+The `-` and `+` special IDs mean respectively the minimum ID possible and the
+maximum ID possible inside a stream, so the following command will just return
+every entry in the stream:
+
+```
+> XRANGE somestream - +
+1) 1) 1526985054069-0
+ 2) 1) "duration"
+ 2) "72"
+ 3) "event-id"
+ 4) "9"
+ 5) "user-id"
+ 6) "839248"
+2) 1) 1526985069902-0
+ 2) 1) "duration"
+ 2) "415"
+ 3) "event-id"
+ 4) "2"
+ 5) "user-id"
+ 6) "772213"
+... other entries here ...
+```
+
+The `-` ID is effectively just the same as specifying `0-0`, while `+` is
+equivalent to `18446744073709551615-18446744073709551615`, however they are
+nicer to type.
+
+## Incomplete IDs
+
+Stream IDs are composed of two parts, a Unix millisecond time stamp and a
+sequence number for entries inserted in the same millisecond. It is possible to
+use `XRANGE` specifying just the first part of the ID, the millisecond time,
+like in the following example:
+
+```
+> XRANGE somestream 1526985054069 1526985055069
+```
+
+In this case, `XRANGE` will auto-complete the start interval with `-0` and end
+interval with `-18446744073709551615`, in order to return all the entries that
+were generated between a given millisecond and the end of the other specified
+millisecond. This also means that repeating the same millisecond two times, we
+get all the entries within such millisecond, because the sequence number range
+will be from zero to the maximum.
+
+Used in this way `XRANGE` works as a range query command to obtain entries in a
+specified time. This is very handy in order to access the history of past events
+in a stream.
+
+## Returning a maximum number of entries
+
+Using the **COUNT** option it is possible to reduce the number of entries
+reported. This is a very important feature even if it may look marginal, because
+it allows, for instance, to model operations such as _give me the entry greater
+or equal to the following_:
+
+```
+> XRANGE somestream 1526985054069-0 + COUNT 1
+1) 1) 1526985054069-0
+ 2) 1) "duration"
+ 2) "72"
+ 3) "event-id"
+ 4) "9"
+ 5) "user-id"
+ 6) "839248"
+```
+
+In the above case the entry `1526985054069-0` exists, otherwise the server would
+have sent us the next one. Using `COUNT` is also the base in order to use
+`XRANGE` as an iterator.
+
+## Iterating a stream
+
+In order to iterate a stream, we can proceed as follows. Let's assume that we
+want two elements per iteration. We start fetching the first two elements, which
+is trivial:
+
+```
+> XRANGE writers - + COUNT 2
+1) 1) 1526985676425-0
+ 2) 1) "name"
+ 2) "Virginia"
+ 3) "surname"
+ 4) "Woolf"
+2) 1) 1526985685298-0
+ 2) 1) "name"
+ 2) "Jane"
+ 3) "surname"
+ 4) "Austen"
+```
+
+Then instead of starting the iteration again from `-`, as the start of the range
+we use the entry ID of the _last_ entry returned by the previous `XRANGE` call,
+adding the sequence part of the ID by one.
+
+The ID of the last entry is `1526985685298-0`, so we just add 1 to the sequence
+to obtain `1526985685298-1`, and continue our iteration:
+
+```
+> XRANGE writers 1526985685298-1 + COUNT 2
+1) 1) 1526985691746-0
+ 2) 1) "name"
+ 2) "Toni"
+ 3) "surname"
+ 4) "Morrison"
+2) 1) 1526985712947-0
+ 2) 1) "name"
+ 2) "Agatha"
+ 3) "surname"
+ 4) "Christie"
+```
+
+And so forth. Eventually this will allow us to visit all the entries in the stream.
+Obviously, we can start the iteration from any ID, or even from a specific time,
+by providing a given incomplete start ID. Moreover, we can limit the iteration
+to a given ID or time, by providing an end ID or incomplete ID instead of `+`.
+
+The command `XREAD` is also able to iterate the stream. The command `XREVRANGE`
+can iterate the stream in reverse, from higher IDs (or times) to lower IDs (or
+times).
+
+## Fetching single items
+
+If you look for an `XGET` command you'll be disappointed because `XRANGE` is
+effectively the way to go in order to fetch a single entry from a stream. All
+you have to do is to specify the ID two times in the arguments of XRANGE:
+
+```
+> XRANGE mystream 1526984818136-0 1526984818136-0
+1) 1) 1526984818136-0
+ 2) 1) "duration"
+ 2) "1532"
+ 3) "event-id"
+ 4) "5"
+ 5) "user-id"
+ 6) "7782813"
+```
+
+## Additional information about streams
+
+For further information about Redis streams please check our
+[introduction to Redis Streams document](/topics/streams-intro).
+
+@return
+
+@array-reply, specifically:
+
+The command returns the entries with IDs matching the specified range. The
+returned entries are complete, which means that the ID and all the fields they
+are composed of are returned. Moreover, the entries are returned with their
+fields and values in the exact same order as `XADD` added them.
+
+@examples
+
+```cli
+XADD writers * name Virginia surname Woolf
+XADD writers * name Jane surname Austen
+XADD writers * name Toni surname Morrison
+XADD writers * name Agatha surname Christie
+XADD writers * name Ngozi surname Adichie
+XLEN writers
+XRANGE writers - + COUNT 2
+```
diff --git a/iredis/data/commands/xread.md b/iredis/data/commands/xread.md
new file mode 100644
index 0000000..6a45fb4
--- /dev/null
+++ b/iredis/data/commands/xread.md
@@ -0,0 +1,210 @@
+Read data from one or multiple streams, only returning entries with an ID
+greater than the last received ID reported by the caller. This command has an
+option to block if items are not available, in a similar fashion to `BRPOP` or
+`BZPOPMIN` and others.
+
+Please note that before reading this page, if you are new to streams, we
+recommend to read [our introduction to Redis Streams](/topics/streams-intro).
+
+## Non-blocking usage
+
+If the **BLOCK** option is not used, the command is synchronous, and can be
+considered somewhat related to `XRANGE`: it will return a range of items inside
+streams, however it has two fundamental differences compared to `XRANGE` even if
+we just consider the synchronous usage:
+
+- This command can be called with multiple streams if we want to read at the
+ same time from a number of keys. This is a key feature of `XREAD` because
+ especially when blocking with **BLOCK**, to be able to listen with a single
+ connection to multiple keys is a vital feature.
+- While `XRANGE` returns items in a range of IDs, `XREAD` is more suited in
+ order to consume the stream starting from the first entry which is greater
+ than any other entry we saw so far. So what we pass to `XREAD` is, for each
+ stream, the ID of the last element that we received from that stream.
+
+For example, if I have two streams `mystream` and `writers`, and I want to read
+data from both the streams starting from the first element they contain, I could
+call `XREAD` like in the following example.
+
+Note: we use the **COUNT** option in the example, so that for each stream the
+call will return at maximum two elements per stream.
+
+```
+> XREAD COUNT 2 STREAMS mystream writers 0-0 0-0
+1) 1) "mystream"
+ 2) 1) 1) 1526984818136-0
+ 2) 1) "duration"
+ 2) "1532"
+ 3) "event-id"
+ 4) "5"
+ 5) "user-id"
+ 6) "7782813"
+ 2) 1) 1526999352406-0
+ 2) 1) "duration"
+ 2) "812"
+ 3) "event-id"
+ 4) "9"
+ 5) "user-id"
+ 6) "388234"
+2) 1) "writers"
+ 2) 1) 1) 1526985676425-0
+ 2) 1) "name"
+ 2) "Virginia"
+ 3) "surname"
+ 4) "Woolf"
+ 2) 1) 1526985685298-0
+ 2) 1) "name"
+ 2) "Jane"
+ 3) "surname"
+ 4) "Austen"
+```
+
+The **STREAMS** option is mandatory and MUST be the final option because such
+option accepts a variable number of arguments in the following format:
+
+ STREAMS key_1 key_2 key_3 ... key_N ID_1 ID_2 ID_3 ... ID_N
+
+So we start with a list of keys, and later continue with all the associated IDs,
+representing _the last ID we received for that stream_, so that the call will
+serve us only greater IDs from the same stream.
+
+For instance in the above example, the last item that we received for the
+stream `mystream` has ID `1526999352406-0`, while for the stream `writers` it is
+`1526985685298-0`.
+
+To continue iterating the two streams I'll call:
+
+```
+> XREAD COUNT 2 STREAMS mystream writers 1526999352406-0 1526985685298-0
+1) 1) "mystream"
+ 2) 1) 1) 1526999626221-0
+ 2) 1) "duration"
+ 2) "911"
+ 3) "event-id"
+ 4) "7"
+ 5) "user-id"
+ 6) "9488232"
+2) 1) "writers"
+ 2) 1) 1) 1526985691746-0
+ 2) 1) "name"
+ 2) "Toni"
+ 3) "surname"
+ 4) "Morrison"
+ 2) 1) 1526985712947-0
+ 2) 1) "name"
+ 2) "Agatha"
+ 3) "surname"
+ 4) "Christie"
+```
+
+And so forth. Eventually, the call will not return any item, but just an empty
+array, then we know that there is nothing more to fetch from our stream (and we
+would have to retry the operation, hence this command also supports a blocking
+mode).
+
+## Incomplete IDs
+
+Using incomplete IDs is valid, as it is for `XRANGE`. However here the sequence
+part of the ID, if missing, is always interpreted as zero, so the command:
+
+```
+> XREAD COUNT 2 STREAMS mystream writers 0 0
+```
+
+is exactly equivalent to
+
+```
+> XREAD COUNT 2 STREAMS mystream writers 0-0 0-0
+```
+
+## Blocking for data
+
+In its synchronous form, the command can get new data as long as there are more
+items available. However, at some point, we'll have to wait for producers of
+data to use `XADD` to push new entries inside the streams we are consuming. In
+order to avoid polling at a fixed or adaptive interval the command is able to
+block if it could not return any data, according to the specified streams and
+IDs, and automatically unblock once one of the requested keys accepts data.
+
+It is important to understand that this command _fans out_ to all the clients
+that are waiting for the same range of IDs, so every consumer will get a copy of
+the data, unlike what happens when blocking list pop operations are used.
+
+In order to block, the **BLOCK** option is used, together with the number of
+milliseconds we want to block before timing out. Normally Redis blocking
+commands take timeouts in seconds, however this command takes a millisecond
+timeout, even if normally the server will have a timeout resolution near to 0.1
+seconds. This way it is possible to block for a shorter time in certain use
+cases, and if the server internals improve over time, it is possible that the
+resolution of timeouts will improve.
+
+When the **BLOCK** option is passed, but there is data to return in at least
+one of the streams passed, the command is executed synchronously _exactly as if
+the BLOCK option were missing_.
+
+This is an example of blocking invocation, where the command later returns a
+null reply because the timeout has elapsed without new data arriving:
+
+```
+> XREAD BLOCK 1000 STREAMS mystream 1526999626221-0
+(nil)
+```
+
+## The special `$` ID
+
+When blocking sometimes we want to receive just entries that are added to the
+stream via `XADD` starting from the moment we block. In such a case we are not
+interested in the history of already added entries. For this use case, we would
+have to check the stream top element ID, and use such ID in the `XREAD` command
+line. This is not clean and requires to call other commands, so instead it is
+possible to use the special `$` ID to signal the stream that we want only the
+new things.
+
+It is **very important** to understand that you should use the `$` ID only for
+the first call to `XREAD`. Later the ID should be the one of the last reported
+item in the stream, otherwise you could miss all the entries that are added in
+between.
+
+This is how a typical `XREAD` call looks in the first iteration of a consumer
+willing to consume only new entries:
+
+```
+> XREAD BLOCK 5000 COUNT 100 STREAMS mystream $
+```
+
+Once we get some replies, the next call will be something like:
+
+```
+> XREAD BLOCK 5000 COUNT 100 STREAMS mystream 1526999644174-3
+```
+
+And so forth.
+
+## How multiple clients blocked on a single stream are served
+
+Blocking operations on lists or sorted sets have a _pop_ behavior.
+Basically, the element is removed from the list or sorted set in order to be
+returned to the client. In this scenario you want the items to be consumed in a
+fair way, depending on the moment clients blocked on a given key arrived.
+Normally Redis uses FIFO semantics in these use cases.
+
+However note that with streams this is not a problem: stream entries are not
+removed from the stream when clients are served, so every client waiting will be
+served as soon as an `XADD` command provides data to the stream.
+
+@return
+
+@array-reply, specifically:
+
+The command returns an array of results: each element of the returned array is
+a two-element array containing the key name and the entries reported for that
+key. The entries reported are full stream entries, having IDs and the list of
+all the fields and values. Fields and values are guaranteed to be reported in
+the same order they were added by `XADD`.
+
+When **BLOCK** is used, on timeout a null reply is returned.
+
+Reading the [Redis Streams introduction](/topics/streams-intro) is highly
+suggested in order to understand more about the streams overall behavior and
+semantics.
diff --git a/iredis/data/commands/xreadgroup.md b/iredis/data/commands/xreadgroup.md
new file mode 100644
index 0000000..fb0b21c
--- /dev/null
+++ b/iredis/data/commands/xreadgroup.md
@@ -0,0 +1,131 @@
+The `XREADGROUP` command is a special version of the `XREAD` command with
+support for consumer groups. You will probably need to understand the `XREAD`
+command before this page makes sense.
+
+Moreover, if you are new to streams, we recommend reading our
+[introduction to Redis Streams](/topics/streams-intro). Make sure to understand
+the concept of a consumer group in the introduction, so that following how this
+command works will be simpler.
+
+## Consumer groups in 30 seconds
+
+The difference between this command and the vanilla `XREAD` is that this one
+supports consumer groups.
+
+Without consumer groups, just using `XREAD`, all the clients are served with all
+the entries arriving in a stream. Instead using consumer groups with
+`XREADGROUP`, it is possible to create groups of clients that consume different
+parts of the messages arriving in a given stream. If, for instance, the stream
+gets the new entries A, B, and C and there are two consumers reading via a
+consumer group, one client will get, for instance, the messages A and C, and the
+other the message B, and so forth.
+
+Within a consumer group, a given consumer (that is, just a client consuming
+messages from the stream) has to identify itself with a unique _consumer name_,
+which is just a string.
+
+One of the guarantees of consumer groups is that a given consumer can only see
+the history of messages that were delivered to it, so a message has just a
+single owner. However there is a special feature called _message claiming_ that
+allows other consumers to claim messages in case there is a non-recoverable
+failure of some consumer. In order to implement such semantics, consumer groups
+require explicit acknowledgement of the messages successfully processed by the
+consumer, via the `XACK` command. This is needed because the stream will track,
+for each consumer group, who is processing what message.
+
+This is how to understand if you want to use a consumer group or not:
+
+1. If you have a stream and multiple clients, and you want all the clients to
+ get all the messages, you do not need a consumer group.
+2. If you have a stream and multiple clients, and you want the stream to be
+ _partitioned_ or _sharded_ across your clients, so that each client will get
+ a subset of the messages arriving in a stream, you need a consumer group.
+
+## Differences between XREAD and XREADGROUP
+
+From the point of view of the syntax, the commands are almost the same, however
+`XREADGROUP` _requires_ a special and mandatory option:
+
+ GROUP <group-name> <consumer-name>
+
+The group name is just the name of a consumer group associated with the stream.
+The group is created using the `XGROUP` command. The consumer name is the string
+that is used by the client to identify itself inside the group. The consumer is
+auto-created inside the consumer group the first time it is seen. Different
+clients should select a different consumer name.
+
+When you read with `XREADGROUP`, the server will _remember_ that a given message
+was delivered to you: the message will be stored inside the consumer group in
+what is called a Pending Entries List (PEL), that is a list of message IDs
+delivered but not yet acknowledged.
+
+The client will have to acknowledge the message processing using `XACK` in order
+for the pending entry to be removed from the PEL. The PEL can be inspected using
+the `XPENDING` command.
+
+The `NOACK` option can be used to avoid adding the message to the PEL in
+cases where reliability is not a requirement and the occasional message loss is
+acceptable. This is equivalent to acknowledging the message when it is read.
+
+The ID to specify in the **STREAMS** option when using `XREADGROUP` can be one
+of the following two:
+
+- The special `>` ID, which means that the consumer wants to receive only
+ messages that were _never delivered to any other consumer_. It just means,
+ give me new messages.
+- Any other ID, that is, 0 or any other valid ID or incomplete ID (just the
+ millisecond time part), will have the effect of returning entries that are
+ pending for the consumer sending the command with IDs greater than the one
+ provided. So basically if the ID is not `>`, then the command will just let
+ the client access its pending entries: messages delivered to it, but not yet
+ acknowledged. Note that in this case, both `BLOCK` and `NOACK` are ignored.
+
+Like `XREAD` the `XREADGROUP` command can be used in a blocking way. There are
+no differences in this regard.
+
+## What happens when a message is delivered to a consumer?
+
+Two things:
+
+1. If the message was never delivered to anyone, that is, if we are talking
+ about a new message, then a PEL (Pending Entries List) is created.
+2. If instead the message was already delivered to this consumer, and it is just
+ re-fetching the same message again, then the _last delivery counter_ is
+ updated to the current time, and the _number of deliveries_ is incremented by
+ one. You can access those message properties using the `XPENDING` command.
+
+## Usage example
+
+Normally you use the command like this in order to get new messages and process
+them. In pseudo-code:
+
+```
+WHILE true
+    entries = XREADGROUP GROUP $GroupName $ConsumerName BLOCK 2000 COUNT 10 STREAMS mystream >
+    IF entries == nil
+        puts "Timeout... try again"
+        CONTINUE
+    END
+
+    FOREACH entries AS stream_entries
+        FOREACH stream_entries AS message
+            process_message(message.id, message.fields)
+
+            # ACK the message as processed
+            XACK mystream $GroupName message.id
+        END
+    END
+END
+```
+
+In this way the example consumer code will fetch only new messages, process
+them, and acknowledge them via `XACK`. However the example code above is not
+complete, because it does not handle recovering after a crash. If we crash in
+the middle of processing messages, our messages will remain in the pending
+entries list, so we can access our history by initially giving `XREADGROUP` an
+ID of 0 and performing the same loop. Once the reply to the call with ID 0 is
+an empty set of messages, we know that we have processed and acknowledged all
+the pending messages: we can start to use `>` as the ID, in order to get the
+new messages and rejoin the consumers that are processing new things.
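+
+A runnable sketch of the complete pattern, including the recovery pass on
+startup, is shown below. It assumes the redis-py client and that the group was
+already created with `XGROUP CREATE`; the stream, group and consumer names are
+illustrative.
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+STREAM, GROUP, CONSUMER = "mystream", "mygroup", "consumer-1"
+
+def process_message(entry_id, fields):
+    print("processing", entry_id, fields)
+
+last_id = "0"  # start by re-reading our own pending (unacknowledged) entries
+while True:
+    resp = r.xreadgroup(GROUP, CONSUMER, {STREAM: last_id}, count=10, block=2000)
+    if not resp:
+        continue  # BLOCK timed out while waiting for new messages
+    _stream, entries = resp[0]
+    if not entries:
+        # Empty reply for ID 0: history fully acknowledged, switch to new messages.
+        last_id = ">"
+        continue
+    for entry_id, fields in entries:
+        process_message(entry_id, fields)
+        r.xack(STREAM, GROUP, entry_id)
+```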
+
+To see how the command actually replies, please check the `XREAD` command page.
diff --git a/iredis/data/commands/xrevrange.md b/iredis/data/commands/xrevrange.md
new file mode 100644
index 0000000..35d7438
--- /dev/null
+++ b/iredis/data/commands/xrevrange.md
@@ -0,0 +1,85 @@
+This command is exactly like `XRANGE`, but with the notable difference of
+returning the entries in reverse order, and also taking the start-end range in
+reverse order: in `XREVRANGE` you need to state the _end_ ID first and the
+_start_ ID later, and the command will produce all the elements between the two
+IDs (or exactly matching them), starting from the _end_ side.
+
+So for instance, to get all the elements from the higher ID to the lower ID one
+could use:
+
+ XREVRANGE somestream + -
+
+Similarly to get just the last element added into the stream it is enough to
+send:
+
+ XREVRANGE somestream + - COUNT 1
+
+## Iterating with XREVRANGE
+
+Like `XRANGE`, this command can be used in order to iterate the whole stream
+content, however note that in this case the next command calls should use the
+ID of the last entry, with the sequence number decremented by one. However, if
+the sequence number is already 0, the time part of the ID should be decremented
+by 1 and the sequence part should be set to the maximum possible sequence
+number, that is, 18446744073709551615, or it can be omitted entirely, and the
+command will automatically assume it to be such a number (see `XRANGE` for more
+info about incomplete IDs).
+
+Example:
+
+```
+> XREVRANGE writers + - COUNT 2
+1) 1) 1526985723355-0
+ 2) 1) "name"
+ 2) "Ngozi"
+ 3) "surname"
+ 4) "Adichie"
+2) 1) 1526985712947-0
+ 2) 1) "name"
+ 2) "Agatha"
+ 3) "surname"
+ 4) "Christie"
+```
+
+The last ID returned is `1526985712947-0`. Since its sequence number is already
+zero, the next ID to use instead of the `+` special ID will be
+`1526985712946-18446744073709551615`, or just `1526985712946`:
+
+```
+> XREVRANGE writers 1526985712946-18446744073709551615 - COUNT 2
+1) 1) 1526985691746-0
+ 2) 1) "name"
+ 2) "Toni"
+ 3) "surname"
+ 4) "Morrison"
+2) 1) 1526985685298-0
+ 2) 1) "name"
+ 2) "Jane"
+ 3) "surname"
+ 4) "Austen"
+```
+
+And so forth, until the iteration is complete and no result is returned. See
+the `XRANGE` page about iterating for more information.
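+
+The ID arithmetic described above is easy to get wrong by hand; this is a small
+sketch of a full reverse iteration, assuming the redis-py client:
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+MAX_SEQ = 18446744073709551615
+
+end = "+"
+while True:
+    entries = r.xrevrange("writers", max=end, min="-", count=2)
+    if not entries:
+        break  # iteration complete
+    for entry_id, fields in entries:
+        print(entry_id, fields)
+    ms, seq = map(int, entries[-1][0].split("-"))
+    if ms == 0 and seq == 0:
+        break  # 0-0 is the smallest possible ID, nothing can precede it
+    # The next call must start just before the last returned ID.
+    end = f"{ms}-{seq - 1}" if seq > 0 else f"{ms - 1}-{MAX_SEQ}"
+```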
+
+@return
+
+@array-reply, specifically:
+
+The command returns the entries with IDs matching the specified range, from the
+higher ID to the lower ID matching. The returned entries are complete, which
+means that the ID and all the fields they are composed of are returned. Moreover
+the entries are returned with their fields and values in the exact same order as
+`XADD` added them.
+
+@examples
+
+```cli
+XADD writers * name Virginia surname Woolf
+XADD writers * name Jane surname Austen
+XADD writers * name Toni surname Morrison
+XADD writers * name Agatha surname Christie
+XADD writers * name Ngozi surname Adichie
+XLEN writers
+XREVRANGE writers + - COUNT 1
+```
diff --git a/iredis/data/commands/xtrim.md b/iredis/data/commands/xtrim.md
new file mode 100644
index 0000000..090650b
--- /dev/null
+++ b/iredis/data/commands/xtrim.md
@@ -0,0 +1,37 @@
+`XTRIM` trims the stream to a given number of items, evicting older items (items
+with lower IDs) if needed. The command is conceived to accept multiple trimming
+strategies, however currently only a single one is implemented, which is
+`MAXLEN`, and it works exactly like the `MAXLEN` option of `XADD`.
+
+For example the following command will trim the stream to exactly the latest
+1000 items:
+
+```
+XTRIM mystream MAXLEN 1000
+```
+
+It is possible to give the command in the following special form in order to
+make it more efficient:
+
+```
+XTRIM mystream MAXLEN ~ 1000
+```
+
+The `~` argument between the **MAXLEN** option and the actual count means that
+the user is not really requesting that the stream length be exactly 1000 items:
+it could instead be a few tens of entries more, but never less than 1000 items.
+When this option modifier is used, the trimming is performed only when Redis is
+able to remove a whole macro node. This makes it much more efficient, and it is
+usually what you want.
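+
+For reference, the same two calls through a client library, assuming the
+redis-py client, where the `approximate` flag maps to the `~` modifier:
+
+```python
+import redis
+
+r = redis.Redis()
+
+r.xtrim("mystream", maxlen=1000, approximate=False)  # trim to exactly 1000 entries
+r.xtrim("mystream", maxlen=1000, approximate=True)   # at least 1000, whole macro nodes only
+```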
+
+@return
+
+@integer-reply, specifically:
+
+The command returns the number of entries deleted from the stream.
+
+```cli
+XADD mystream * field1 A field2 B field3 C field4 D
+XTRIM mystream MAXLEN 2
+XRANGE mystream - +
+```
diff --git a/iredis/data/commands/zadd.md b/iredis/data/commands/zadd.md
new file mode 100644
index 0000000..589eaf3
--- /dev/null
+++ b/iredis/data/commands/zadd.md
@@ -0,0 +1,98 @@
+Adds all the specified members with the specified scores to the sorted set
+stored at `key`. It is possible to specify multiple score / member pairs. If a
+specified member is already a member of the sorted set, the score is updated and
+the element reinserted at the right position to ensure the correct ordering.
+
+If `key` does not exist, a new sorted set with the specified members as sole
+members is created, as if the sorted set was empty. If the key exists but does
+not hold a sorted set, an error is returned.
+
+The score values should be the string representation of a double precision
+floating point number. `+inf` and `-inf` values are valid values as well.
+
+## ZADD options (Redis 3.0.2 or greater)
+
+ZADD supports a list of options, specified after the name of the key and before
+the first score argument. Options are:
+
+- **XX**: Only update elements that already exist. Never add elements.
+- **NX**: Don't update already existing elements. Always add new elements.
+- **CH**: Modify the return value from the number of new elements added, to the
+ total number of elements changed (CH is an abbreviation of _changed_). Changed
+ elements are **new elements added** and elements already existing for which
+ **the score was updated**. So elements specified in the command line having
+ the same score as they had in the past are not counted. Note: normally the
+ return value of `ZADD` only counts the number of new elements added.
+- **INCR**: When this option is specified `ZADD` acts like `ZINCRBY`. Only one
+ score-element pair can be specified in this mode.
+
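+As a quick sketch of how these options are typically exposed by a client
+library (the redis-py client is assumed here, where each option is a keyword
+argument):
+
+```python
+import redis
+
+r = redis.Redis()
+
+r.zadd("myzset", {"a": 1, "b": 2})            # plain add / update
+r.zadd("myzset", {"a": 5}, xx=True, ch=True)  # only update existing, count changed elements
+r.zadd("myzset", {"c": 3}, nx=True)           # only add new elements, never update
+r.zadd("myzset", {"a": 1}, incr=True)         # act like ZINCRBY, returns the new score
+```
+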
+## Range of integer scores that can be expressed precisely
+
+Redis sorted sets use a _double 64-bit floating point number_ to represent the
+score. In all the architectures we support, this is represented as an **IEEE 754
+floating point number**, that is able to represent precisely integer numbers
+between `-(2^53)` and `+(2^53)` included. In more practical terms, all the
+integers between -9007199254740992 and 9007199254740992 are perfectly
+representable. Larger integers, or fractions, are internally represented in
+exponential form, so it is possible that you get only an approximation of the
+decimal number, or of the very big integer, that you set as score.
+
+## Sorted sets 101
+
+Sorted sets are sorted by their score in an ascending way. The same element only
+exists a single time; no repeated elements are permitted. The score can be
+modified both by `ZADD`, which will update the element score (and, as a side
+effect, its position in the sorted set), and by `ZINCRBY`, which can be used in
+order to update the score relative to its previous value.
+
+The current score of an element can be retrieved using the `ZSCORE` command,
+that can also be used to verify if an element already exists or not.
+
+For an introduction to sorted sets, see the data types page on [sorted
+sets][tdtss].
+
+[tdtss]: /topics/data-types#sorted-sets
+
+## Elements with the same score
+
+While the same element can't be repeated in a sorted set since every element is
+unique, it is possible to add multiple different elements _having the same
+score_. When multiple elements have the same score, they are _ordered
+lexicographically_ (they are still ordered by score as a first key, however,
+locally, all the elements with the same score are relatively ordered
+lexicographically).
+
+The lexicographic ordering used is binary: it compares strings as arrays of
+bytes.
+
+If the user inserts all the elements in a sorted set with the same score (for
+example 0), all the elements of the sorted set are sorted lexicographically, and
+range queries on elements are possible using the command `ZRANGEBYLEX` (Note: it
+is also possible to query sorted sets by range of scores using `ZRANGEBYSCORE`).
+
+@return
+
+@integer-reply, specifically:
+
+- The number of elements added to the sorted set, not including elements already
+ existing for which the score was updated.
+
+If the `INCR` option is specified, the return value will be @bulk-string-reply:
+
+- The new score of `member` (a double precision floating point number)
+ represented as string, or `nil` if the operation was aborted (when called with
+ either the `XX` or the `NX` option).
+
+@history
+
+- `>= 2.4`: Accepts multiple elements. In Redis versions older than 2.4 it was
+ possible to add or update a single member per call.
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 1 "uno"
+ZADD myzset 2 "two" 3 "three"
+ZRANGE myzset 0 -1 WITHSCORES
+```
diff --git a/iredis/data/commands/zcard.md b/iredis/data/commands/zcard.md
new file mode 100644
index 0000000..5ad5043
--- /dev/null
+++ b/iredis/data/commands/zcard.md
@@ -0,0 +1,15 @@
+Returns the sorted set cardinality (number of elements) of the sorted set stored
+at `key`.
+
+@return
+
+@integer-reply: the cardinality (number of elements) of the sorted set, or `0`
+if `key` does not exist.
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZCARD myzset
+```
diff --git a/iredis/data/commands/zcount.md b/iredis/data/commands/zcount.md
new file mode 100644
index 0000000..49e6dd6
--- /dev/null
+++ b/iredis/data/commands/zcount.md
@@ -0,0 +1,23 @@
+Returns the number of elements in the sorted set at `key` with a score between
+`min` and `max`.
+
+The `min` and `max` arguments have the same semantic as described for
+`ZRANGEBYSCORE`.
+
+Note: the command has a complexity of just O(log(N)) because it uses element
+ranks (see `ZRANK`) to get an idea of the range. Because of this there is no
+need to do work proportional to the size of the range.
+
+@return
+
+@integer-reply: the number of elements in the specified score range.
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZCOUNT myzset -inf +inf
+ZCOUNT myzset (1 3
+```
diff --git a/iredis/data/commands/zincrby.md b/iredis/data/commands/zincrby.md
new file mode 100644
index 0000000..0ac8a89
--- /dev/null
+++ b/iredis/data/commands/zincrby.md
@@ -0,0 +1,25 @@
+Increments the score of `member` in the sorted set stored at `key` by
+`increment`. If `member` does not exist in the sorted set, it is added with
+`increment` as its score (as if its previous score was `0.0`). If `key` does not
+exist, a new sorted set with the specified `member` as its sole member is
+created.
+
+An error is returned when `key` exists but does not hold a sorted set.
+
+The `increment` value should be the string representation of a numeric value,
+and accepts double precision floating point numbers. It is possible to provide
+a negative value to decrement the score.
+
+@return
+
+@bulk-string-reply: the new score of `member` (a double precision floating point
+number), represented as string.
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZINCRBY myzset 2 "one"
+ZRANGE myzset 0 -1 WITHSCORES
+```
diff --git a/iredis/data/commands/zinterstore.md b/iredis/data/commands/zinterstore.md
new file mode 100644
index 0000000..e7e71f0
--- /dev/null
+++ b/iredis/data/commands/zinterstore.md
@@ -0,0 +1,30 @@
+Computes the intersection of `numkeys` sorted sets given by the specified keys,
+and stores the result in `destination`. It is mandatory to provide the number of
+input keys (`numkeys`) before passing the input keys and the other (optional)
+arguments.
+
+By default, the resulting score of an element is the sum of its scores in the
+sorted sets where it exists. Because intersection requires an element to be a
+member of every given sorted set, the resulting score of each element is the
+sum of its scores across all `numkeys` input sorted sets.
+
+For a description of the `WEIGHTS` and `AGGREGATE` options, see `ZUNIONSTORE`.
+
+If `destination` already exists, it is overwritten.
+
+@return
+
+@integer-reply: the number of elements in the resulting sorted set at
+`destination`.
+
+@examples
+
+```cli
+ZADD zset1 1 "one"
+ZADD zset1 2 "two"
+ZADD zset2 1 "one"
+ZADD zset2 2 "two"
+ZADD zset2 3 "three"
+ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3
+ZRANGE out 0 -1 WITHSCORES
+```
diff --git a/iredis/data/commands/zlexcount.md b/iredis/data/commands/zlexcount.md
new file mode 100644
index 0000000..9aa7092
--- /dev/null
+++ b/iredis/data/commands/zlexcount.md
@@ -0,0 +1,23 @@
+When all the elements in a sorted set are inserted with the same score, in order
+to force lexicographical ordering, this command returns the number of elements
+in the sorted set at `key` with a value between `min` and `max`.
+
+The `min` and `max` arguments have the same meaning as described for
+`ZRANGEBYLEX`.
+
+Note: the command has a complexity of just O(log(N)) because it uses element
+ranks (see `ZRANK`) to get an idea of the range. Because of this there is no
+need to do work proportional to the size of the range.
+
+@return
+
+@integer-reply: the number of elements in the specified range.
+
+@examples
+
+```cli
+ZADD myzset 0 a 0 b 0 c 0 d 0 e
+ZADD myzset 0 f 0 g
+ZLEXCOUNT myzset - +
+ZLEXCOUNT myzset [b [f
+```
diff --git a/iredis/data/commands/zpopmax.md b/iredis/data/commands/zpopmax.md
new file mode 100644
index 0000000..dea7a16
--- /dev/null
+++ b/iredis/data/commands/zpopmax.md
@@ -0,0 +1,20 @@
+Removes and returns up to `count` members with the highest scores in the sorted
+set stored at `key`.
+
+When left unspecified, the default value for `count` is 1. Specifying a `count`
+value that is higher than the sorted set's cardinality will not produce an
+error. When returning multiple elements, the one with the highest score will be
+the first, followed by the elements with lower scores.
+
+@return
+
+@array-reply: list of popped elements and scores.
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZPOPMAX myzset
+```
diff --git a/iredis/data/commands/zpopmin.md b/iredis/data/commands/zpopmin.md
new file mode 100644
index 0000000..789e30c
--- /dev/null
+++ b/iredis/data/commands/zpopmin.md
@@ -0,0 +1,20 @@
+Removes and returns up to `count` members with the lowest scores in the sorted
+set stored at `key`.
+
+When left unspecified, the default value for `count` is 1. Specifying a `count`
+value that is higher than the sorted set's cardinality will not produce an
+error. When returning multiple elements, the one with the lowest score will be
+the first, followed by the elements with greater scores.
+
+@return
+
+@array-reply: list of popped elements and scores.
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZPOPMIN myzset
+```
diff --git a/iredis/data/commands/zrange.md b/iredis/data/commands/zrange.md
new file mode 100644
index 0000000..e2b1c5b
--- /dev/null
+++ b/iredis/data/commands/zrange.md
@@ -0,0 +1,49 @@
+Returns the specified range of elements in the sorted set stored at `key`. The
+elements are considered to be ordered from the lowest to the highest score.
+Lexicographical order is used for elements with equal score.
+
+See `ZREVRANGE` when you need the elements ordered from highest to lowest score
+(and descending lexicographical order for elements with equal score).
+
+Both `start` and `stop` are zero-based indexes, where `0` is the first element,
+`1` is the next element and so on. They can also be negative numbers indicating
+offsets from the end of the sorted set, with `-1` being the last element of the
+sorted set, `-2` the penultimate element and so on.
+
+`start` and `stop` are **inclusive ranges**, so for example `ZRANGE myzset 0 1`
+will return both the first and the second element of the sorted set.
+
+Out of range indexes will not produce an error. If `start` is larger than the
+largest index in the sorted set, or `start > stop`, an empty list is returned.
+If `stop` is larger than the end of the sorted set Redis will treat it like it
+is the last element of the sorted set.
+
+It is possible to pass the `WITHSCORES` option in order to return the scores of
+the elements together with the elements. The returned list will contain
+`value1,score1,...,valueN,scoreN` instead of `value1,...,valueN`. Client
+libraries are free to return a more appropriate data type (suggestion: an array
+with (value, score) arrays/tuples).
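+
+For instance, the redis-py client (assumed here) follows that suggestion and
+returns a list of `(member, score)` tuples when `withscores` is requested:
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+r.zadd("myzset", {"one": 1, "two": 2, "three": 3})
+
+print(r.zrange("myzset", 0, -1))                   # ['one', 'two', 'three']
+print(r.zrange("myzset", 0, -1, withscores=True))  # [('one', 1.0), ('two', 2.0), ('three', 3.0)]
+```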
+
+@return
+
+@array-reply: list of elements in the specified range (optionally with their
+scores, in case the `WITHSCORES` option is given).
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZRANGE myzset 0 -1
+ZRANGE myzset 2 3
+ZRANGE myzset -2 -1
+```
+
+The following example using `WITHSCORES` shows how the command always returns
+an array, but this time populated with _element_1_, _score_1_, _element_2_,
+_score_2_, ..., _element_N_, _score_N_.
+
+```cli
+ZRANGE myzset 0 1 WITHSCORES
+```
diff --git a/iredis/data/commands/zrangebylex.md b/iredis/data/commands/zrangebylex.md
new file mode 100644
index 0000000..ab387bd
--- /dev/null
+++ b/iredis/data/commands/zrangebylex.md
@@ -0,0 +1,66 @@
+When all the elements in a sorted set are inserted with the same score, in order
+to force lexicographical ordering, this command returns all the elements in the
+sorted set at `key` with a value between `min` and `max`.
+
+If the elements in the sorted set have different scores, the returned elements
+are unspecified.
+
+The elements are considered to be ordered from lower to higher strings as
+compared byte-by-byte using the `memcmp()` C function. Longer strings are
+considered greater than shorter strings if the common part is identical.
+
+The optional `LIMIT` argument can be used to only get a range of the matching
+elements (similar to _SELECT LIMIT offset, count_ in SQL). A negative `count`
+returns all elements from the `offset`. Keep in mind that if `offset` is large,
+the sorted set needs to be traversed for `offset` elements before getting to the
+elements to return, which can add up to O(N) time complexity.
+
+## How to specify intervals
+
+Valid _start_ and _stop_ must start with `(` or `[`, in order to specify whether
+the range item is exclusive or inclusive, respectively. The special values `+`
+and `-` for _start_ and _stop_ have the special meaning of positively infinite
+and negatively infinite strings, so for instance the command **ZRANGEBYLEX
+myzset - +** is guaranteed to return all the elements in the sorted set, if all
+the elements have the same score.
+
+## Details on strings comparison
+
+Strings are compared as a binary array of bytes. Because of how the ASCII
+character set is specified, this means that usually this also has the effect of
+comparing normal ASCII characters in an obvious dictionary way. However this is
+not true if non-plain-ASCII strings are used (for example UTF-8 strings).
+
+However the user can apply a transformation to the encoded string so that the
+first part of the element inserted in the sorted set will compare as the user
+requires for the specific application. For example if I want to add strings that
+will be compared in a case-insensitive way, but I still want to retrieve the
+real case when querying, I can add strings in the following way:
+
+ ZADD autocomplete 0 foo:Foo 0 bar:BAR 0 zap:zap
+
+Because of the first _normalized_ part in every element (before the colon
+character), we are forcing a given comparison; however, after the range is
+queried using `ZRANGEBYLEX`, the application can display to the user the second
+part of the string, after the colon.
+
+The binary nature of the comparison makes it possible to use sorted sets as a
+general purpose index, for example the first part of the element can be a 64-bit
+big endian number: since big endian numbers have the most significant bytes in
+the initial positions, the binary comparison will match the numerical comparison
+of the numbers. This can be used in order to implement range queries on 64-bit
+values. As in the example below, after the first 8 bytes we can store the value
+of the element we are actually indexing.
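+
+As a concrete sketch of that idea (the redis-py client and Python's `struct`
+module are assumed; key and payload names are illustrative): pack the number as
+8 big-endian bytes, prepend it to the payload, and query by lexicographical
+range.
+
+```python
+import struct
+
+import redis
+
+r = redis.Redis()  # members are binary, so responses are left as raw bytes
+
+def index_add(value, payload):
+    # 8-byte big-endian prefix: binary order matches numeric order.
+    r.zadd("myindex", {struct.pack(">Q", value) + b":" + payload: 0})
+
+def index_range(lo, hi):
+    # Inclusive range on the 64-bit prefix; the payload follows the colon.
+    return r.zrangebylex("myindex", b"[" + struct.pack(">Q", lo),
+                         b"[" + struct.pack(">Q", hi) + b"\xff")
+
+index_add(42, b"answer")
+index_add(100, b"hundred")
+print(index_range(0, 99))  # only the member indexed under 42 is returned
+```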
+
+@return
+
+@array-reply: list of elements in the specified range.
+
+@examples
+
+```cli
+ZADD myzset 0 a 0 b 0 c 0 d 0 e 0 f 0 g
+ZRANGEBYLEX myzset - [c
+ZRANGEBYLEX myzset - (c
+ZRANGEBYLEX myzset [aaa (g
+```
diff --git a/iredis/data/commands/zrangebyscore.md b/iredis/data/commands/zrangebyscore.md
new file mode 100644
index 0000000..f440e0e
--- /dev/null
+++ b/iredis/data/commands/zrangebyscore.md
@@ -0,0 +1,100 @@
+Returns all the elements in the sorted set at `key` with a score between `min`
+and `max` (including elements with score equal to `min` or `max`). The elements
+are considered to be ordered from low to high scores.
+
+The elements having the same score are returned in lexicographical order (this
+follows from a property of the sorted set implementation in Redis and does not
+involve further computation).
+
+The optional `LIMIT` argument can be used to only get a range of the matching
+elements (similar to _SELECT LIMIT offset, count_ in SQL). A negative `count`
+returns all elements from the `offset`. Keep in mind that if `offset` is large,
+the sorted set needs to be traversed for `offset` elements before getting to the
+elements to return, which can add up to O(N) time complexity.
+
+The optional `WITHSCORES` argument makes the command return both the element and
+its score, instead of the element alone. This option is available since Redis
+2.0.
+
+## Exclusive intervals and infinity
+
+`min` and `max` can be `-inf` and `+inf`, so that you are not required to know
+the highest or lowest score in the sorted set to get all elements from or up to
+a certain score.
+
+By default, the interval specified by `min` and `max` is closed (inclusive). It
+is possible to specify an open interval (exclusive) by prefixing the score with
+the character `(`. For example:
+
+```
+ZRANGEBYSCORE zset (1 5
+```
+
+Will return all elements with `1 < score <= 5` while:
+
+```
+ZRANGEBYSCORE zset (5 (10
+```
+
+Will return all the elements with `5 < score < 10` (5 and 10 excluded).
+
+@return
+
+@array-reply: list of elements in the specified score range (optionally with
+their scores).
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZRANGEBYSCORE myzset -inf +inf
+ZRANGEBYSCORE myzset 1 2
+ZRANGEBYSCORE myzset (1 2
+ZRANGEBYSCORE myzset (1 (2
+```
+
+## Pattern: weighted random selection of an element
+
+Normally `ZRANGEBYSCORE` is simply used in order to get a range of items where
+the score is the indexed integer key, however it is possible to do less obvious
+things with the command.
+
+For example a common problem when implementing Markov chains and other
+algorithms is to select an element at random from a set, but different elements
+may have different weights that change how likely it is they are picked.
+
+This is how we can use this command in order to implement such an algorithm:
+
+Imagine you have elements A, B and C with weights 1, 2 and 3. You compute the
+sum of the weights, which is 1+2+3 = 6.
+
+At this point you add all the elements into a sorted set using this algorithm:
+
+```
+SUM = ELEMENTS.TOTAL_WEIGHT // 6 in this case.
+SCORE = 0
+FOREACH ELE in ELEMENTS
+ SCORE += ELE.weight / SUM
+ ZADD KEY SCORE ELE
+END
+```
+
+This means that you set:
+
+```
+A to score 0.16
+B to score 0.5
+C to score 1
+```
+
+Since this involves approximations, in order to avoid C being set to something
+like 0.998 instead of 1, we just modify the above algorithm to make sure the
+last score is exactly 1 (left as an exercise for the reader...).
+
+At this point, each time you want to get a weighted random element, just compute
+a random number between 0 and 1 (which is like calling `rand()` in most
+languages), so you can just do:
+
+ RANDOM_ELE = ZRANGEBYSCORE key RAND() +inf LIMIT 0 1
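+
+Putting the whole pattern together, a runnable sketch (the redis-py client is
+assumed, and the last cumulative score is pinned to exactly 1 to absorb
+rounding) could look like this:
+
+```python
+import random
+
+import redis
+
+r = redis.Redis(decode_responses=True)
+
+weights = {"A": 1, "B": 2, "C": 3}
+total = sum(weights.values())
+
+# Build the cumulative distribution; force the last score to be exactly 1.
+score, items = 0.0, list(weights.items())
+for i, (ele, weight) in enumerate(items):
+    score = 1.0 if i == len(items) - 1 else score + weight / total
+    r.zadd("weighted", {ele: score})
+
+def weighted_random():
+    # The first element with score >= rand() is picked proportionally to its weight.
+    return r.zrangebyscore("weighted", random.random(), "+inf", start=0, num=1)[0]
+
+print(weighted_random())
+```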
diff --git a/iredis/data/commands/zrank.md b/iredis/data/commands/zrank.md
new file mode 100644
index 0000000..62520d9
--- /dev/null
+++ b/iredis/data/commands/zrank.md
@@ -0,0 +1,22 @@
+Returns the rank of `member` in the sorted set stored at `key`, with the scores
+ordered from low to high. The rank (or index) is 0-based, which means that the
+member with the lowest score has rank `0`.
+
+Use `ZREVRANK` to get the rank of an element with the scores ordered from high
+to low.
+
+@return
+
+- If `member` exists in the sorted set, @integer-reply: the rank of `member`.
+- If `member` does not exist in the sorted set or `key` does not exist,
+ @bulk-string-reply: `nil`.
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZRANK myzset "three"
+ZRANK myzset "four"
+```
diff --git a/iredis/data/commands/zrem.md b/iredis/data/commands/zrem.md
new file mode 100644
index 0000000..0ce5061
--- /dev/null
+++ b/iredis/data/commands/zrem.md
@@ -0,0 +1,26 @@
+Removes the specified members from the sorted set stored at `key`. Non existing
+members are ignored.
+
+An error is returned when `key` exists and does not hold a sorted set.
+
+@return
+
+@integer-reply, specifically:
+
+- The number of members removed from the sorted set, not including non existing
+ members.
+
+@history
+
+- `>= 2.4`: Accepts multiple elements. In Redis versions older than 2.4 it was
+ possible to remove a single member per call.
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZREM myzset "two"
+ZRANGE myzset 0 -1 WITHSCORES
+```
diff --git a/iredis/data/commands/zremrangebylex.md b/iredis/data/commands/zremrangebylex.md
new file mode 100644
index 0000000..ceaf69f
--- /dev/null
+++ b/iredis/data/commands/zremrangebylex.md
@@ -0,0 +1,22 @@
+When all the elements in a sorted set are inserted with the same score, in order
+to force lexicographical ordering, this command removes all elements in the
+sorted set stored at `key` between the lexicographical range specified by `min`
+and `max`.
+
+The meanings of `min` and `max` are the same as in the `ZRANGEBYLEX` command.
+Similarly, this command actually removes the same elements that `ZRANGEBYLEX`
+would return if called with the same `min` and `max` arguments.
+
+@return
+
+@integer-reply: the number of elements removed.
+
+@examples
+
+```cli
+ZADD myzset 0 aaaa 0 b 0 c 0 d 0 e
+ZADD myzset 0 foo 0 zap 0 zip 0 ALPHA 0 alpha
+ZRANGE myzset 0 -1
+ZREMRANGEBYLEX myzset [alpha [omega
+ZRANGE myzset 0 -1
+```
diff --git a/iredis/data/commands/zremrangebyrank.md b/iredis/data/commands/zremrangebyrank.md
new file mode 100644
index 0000000..4de25f9
--- /dev/null
+++ b/iredis/data/commands/zremrangebyrank.md
@@ -0,0 +1,20 @@
+Removes all elements in the sorted set stored at `key` with rank between `start`
+and `stop`. Both `start` and `stop` are `0`-based indexes with `0` being the
+element with the lowest score. These indexes can be negative numbers, where they
+indicate offsets starting at the element with the highest score. For example:
+`-1` is the element with the highest score, `-2` the element with the second
+highest score and so forth.
+
+@return
+
+@integer-reply: the number of elements removed.
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZREMRANGEBYRANK myzset 0 1
+ZRANGE myzset 0 -1 WITHSCORES
+```
diff --git a/iredis/data/commands/zremrangebyscore.md b/iredis/data/commands/zremrangebyscore.md
new file mode 100644
index 0000000..3665bd0
--- /dev/null
+++ b/iredis/data/commands/zremrangebyscore.md
@@ -0,0 +1,19 @@
+Removes all elements in the sorted set stored at `key` with a score between
+`min` and `max` (inclusive).
+
+Since version 2.1.6, `min` and `max` can be exclusive, following the syntax of
+`ZRANGEBYSCORE`.
+
+@return
+
+@integer-reply: the number of elements removed.
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZREMRANGEBYSCORE myzset -inf (2
+ZRANGE myzset 0 -1 WITHSCORES
+```
diff --git a/iredis/data/commands/zrevrange.md b/iredis/data/commands/zrevrange.md
new file mode 100644
index 0000000..c9f6c4d
--- /dev/null
+++ b/iredis/data/commands/zrevrange.md
@@ -0,0 +1,21 @@
+Returns the specified range of elements in the sorted set stored at `key`. The
+elements are considered to be ordered from the highest to the lowest score.
+Descending lexicographical order is used for elements with equal score.
+
+Apart from the reversed ordering, `ZREVRANGE` is similar to `ZRANGE`.
+
+@return
+
+@array-reply: list of elements in the specified range (optionally with their
+scores).
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZREVRANGE myzset 0 -1
+ZREVRANGE myzset 2 3
+ZREVRANGE myzset -2 -1
+```
diff --git a/iredis/data/commands/zrevrangebylex.md b/iredis/data/commands/zrevrangebylex.md
new file mode 100644
index 0000000..831e5cd
--- /dev/null
+++ b/iredis/data/commands/zrevrangebylex.md
@@ -0,0 +1,18 @@
+When all the elements in a sorted set are inserted with the same score, in order
+to force lexicographical ordering, this command returns all the elements in the
+sorted set at `key` with a value between `max` and `min`.
+
+Apart from the reversed ordering, `ZREVRANGEBYLEX` is similar to `ZRANGEBYLEX`.
+
+@return
+
+@array-reply: list of elements in the specified range.
+
+@examples
+
+```cli
+ZADD myzset 0 a 0 b 0 c 0 d 0 e 0 f 0 g
+ZREVRANGEBYLEX myzset [c -
+ZREVRANGEBYLEX myzset (c -
+ZREVRANGEBYLEX myzset (g [aaa
+```
diff --git a/iredis/data/commands/zrevrangebyscore.md b/iredis/data/commands/zrevrangebyscore.md
new file mode 100644
index 0000000..c16d8b4
--- /dev/null
+++ b/iredis/data/commands/zrevrangebyscore.md
@@ -0,0 +1,27 @@
+Returns all the elements in the sorted set at `key` with a score between `max`
+and `min` (including elements with score equal to `max` or `min`). Contrary to
+the default ordering of sorted sets, for this command the elements are
+considered to be ordered from high to low scores.
+
+The elements having the same score are returned in reverse lexicographical
+order.
+
+Apart from the reversed ordering, `ZREVRANGEBYSCORE` is similar to
+`ZRANGEBYSCORE`.
+
+@return
+
+@array-reply: list of elements in the specified score range (optionally with
+their scores).
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZREVRANGEBYSCORE myzset +inf -inf
+ZREVRANGEBYSCORE myzset 2 1
+ZREVRANGEBYSCORE myzset 2 (1
+ZREVRANGEBYSCORE myzset (2 (1
+```
diff --git a/iredis/data/commands/zrevrank.md b/iredis/data/commands/zrevrank.md
new file mode 100644
index 0000000..e85c80c
--- /dev/null
+++ b/iredis/data/commands/zrevrank.md
@@ -0,0 +1,22 @@
+Returns the rank of `member` in the sorted set stored at `key`, with the scores
+ordered from high to low. The rank (or index) is 0-based, which means that the
+member with the highest score has rank `0`.
+
+Use `ZRANK` to get the rank of an element with the scores ordered from low to
+high.
+
+@return
+
+- If `member` exists in the sorted set, @integer-reply: the rank of `member`.
+- If `member` does not exist in the sorted set or `key` does not exist,
+ @bulk-string-reply: `nil`.
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZREVRANK myzset "one"
+ZREVRANK myzset "four"
+```
diff --git a/iredis/data/commands/zscan.md b/iredis/data/commands/zscan.md
new file mode 100644
index 0000000..3926307
--- /dev/null
+++ b/iredis/data/commands/zscan.md
@@ -0,0 +1 @@
+See `SCAN` for `ZSCAN` documentation.
diff --git a/iredis/data/commands/zscore.md b/iredis/data/commands/zscore.md
new file mode 100644
index 0000000..f3204d0
--- /dev/null
+++ b/iredis/data/commands/zscore.md
@@ -0,0 +1,16 @@
+Returns the score of `member` in the sorted set at `key`.
+
+If `member` does not exist in the sorted set, or `key` does not exist, `nil` is
+returned.
+
+@return
+
+@bulk-string-reply: the score of `member` (a double precision floating point
+number), represented as string.
+
+@examples
+
+```cli
+ZADD myzset 1 "one"
+ZSCORE myzset "one"
+```
diff --git a/iredis/data/commands/zunionstore.md b/iredis/data/commands/zunionstore.md
new file mode 100644
index 0000000..45b4b3b
--- /dev/null
+++ b/iredis/data/commands/zunionstore.md
@@ -0,0 +1,38 @@
+Computes the union of `numkeys` sorted sets given by the specified keys, and
+stores the result in `destination`. It is mandatory to provide the number of
+input keys (`numkeys`) before passing the input keys and the other (optional)
+arguments.
+
+By default, the resulting score of an element is the sum of its scores in the
+sorted sets where it exists.
+
+Using the `WEIGHTS` option, it is possible to specify a multiplication factor
+for each input sorted set. This means that the score of every element in every
+input sorted set is multiplied by this factor before being passed to the
+aggregation function. When `WEIGHTS` is not given, the multiplication factors
+default to `1`.
+
+With the `AGGREGATE` option, it is possible to specify how the results of the
+union are aggregated. This option defaults to `SUM`, where the score of an
+element is summed across the inputs where it exists. When this option is set to
+either `MIN` or `MAX`, the resulting set will contain the minimum or maximum
+score of an element across the inputs where it exists.
+
+If `destination` already exists, it is overwritten.
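+
+Through a client library such as redis-py (assumed here), the weights are
+usually passed as a key-to-weight mapping and the aggregation mode as a keyword
+argument:
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+r.zadd("zset1", {"one": 1, "two": 2})
+r.zadd("zset2", {"one": 1, "two": 2, "three": 3})
+
+r.zunionstore("out", {"zset1": 2, "zset2": 3})                       # WEIGHTS 2 3, default SUM
+r.zunionstore("out_max", {"zset1": 2, "zset2": 3}, aggregate="MAX")  # keep the max weighted score
+
+print(r.zrange("out", 0, -1, withscores=True))
+```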
+
+@return
+
+@integer-reply: the number of elements in the resulting sorted set at
+`destination`.
+
+@examples
+
+```cli
+ZADD zset1 1 "one"
+ZADD zset1 2 "two"
+ZADD zset2 1 "one"
+ZADD zset2 2 "two"
+ZADD zset2 3 "three"
+ZUNIONSTORE out 2 zset1 zset2 WEIGHTS 2 3
+ZRANGE out 0 -1 WITHSCORES
+```