Commit Graph

12327 Commits

Author SHA1 Message Date
debing.sun
58edea7385 Add help for DEBUG SCRIPT command (#13385)
1. Add help for `DEBUG SCRIPT` command.
2. Remove a duplicate `getLuaScripts()` which is the same as
`evalScriptsDict()`.
2024-07-04 14:44:42 +08:00
Filipe Oliveira (Redis)
26a2dcb936 Reduce getNodeByQuery overhead (#13221)
This PR makes the following changes based on CPU profile info. The
`getNodeByQuery` function accounts for 8.2% out of the 12.3% overhead
measured when comparing a single-shard cluster with standalone.
Proposed changes:
- inlining keyHashSlot to reduce the overhead of that function call
- reduce duplicate calls to getCommandFlags within getNodeByQuery

The above changes represent an improvement of approximately 5% on the achievable ops/sec.
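
A hedged sketch of the second change, hoisting the invariant flags lookup out of the per-key logic (the loop body is illustrative, not the actual getNodeByQuery code):

```c
/* Illustrative sketch: compute the command flags once instead of
 * re-calling getCommandFlags() for every key inspected. */
uint64_t cmd_flags = getCommandFlags(c);      /* was called repeatedly */
for (int j = 0; j < numkeys; j++) {
    sds key = c->argv[keyindex[j]]->ptr;
    int slot = keyHashSlot(key, sdslen(key)); /* keyHashSlot now inlined */
    /* ... slot/flag checks reuse cmd_flags instead of recomputing ... */
}
```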

Co-authored-by: filipecosta90 <filipecosta.90@gmail.com>
2024-07-03 18:23:14 +08:00
Moti Cohen
a84cc20aef HFE - Fix statistic to count also lazy expired and rename INFO params (#13372)
* INFO command : rename `hashes_with_expiry_fields` to `subexpiry`
* INFO command : rename `expired_hash_fields` to `expired_subkeys`
* Fix the `expired_subkeys` statistic to also count lazily expired fields
* Remove leftover TODO comments in TCL
* Fix a potentially flaky test of RDB load of hash-field-expiration
2024-07-02 18:22:10 +03:00
debing.sun
76d179c64f Update CentOS7 repo URL for EOL and macOS11 GitHub Action for deprecation (#13365)
1. Update the macOS version in GitHub Actions due to the deprecation of
macOS 11: the macOS 11 runner image will be removed by 06/28/2024.

2. CentOS 7 has reached EOL; we need to update the yum repo to the new URL.
2024-07-02 21:53:32 +08:00
David Dougherty
c6ceea9a23 DOC-3932: remove frontmatter attributes from module API spec generator (#13377)
This PR removes the Hugo frontmatter from this utility.
2024-07-02 08:25:42 +03:00
Oran Agra
799c5e5f51 Solve a race between BGSAVE and FLUSHALL messing up the dirty counter (#13361)
If we run FLUSHALL when the 'save' config is set, and there's a fork
child doing BGSAVE, there's a chance the child is already finished, and
the parent process is unaware of it. In that case the child will not get
the kill signal and will finish successfully, but the parent process
thinks it killed it and will reset the dirty counter to 0; then the
backgroundSaveDoneHandlerDisk method can set the dirty counter to a
negative value.
2024-07-01 09:04:52 +03:00
Oran Agra
69b7137d32 Fix possible crash due to OOM panic on invalid command (#13380)
getKeysUsingKeySpecs had the range check AFTER the allocation of the
keys buffer, which meant invalid arguments could overflow the allocation
size and trigger an OOM panic.
The allocated memory is only used after the range check, so there's no
risk of buffer overrun.
The OOM panic can happen on 32-bit builds, or 64-bit builds running on
systems with less than 4GB of RAM, and is reachable via COMMAND
GETKEYSANDFLAGS and ACL key-name validation.
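
A minimal self-contained sketch of the pattern behind the fix, with illustrative names (the real code lives in the key-extraction path mentioned above):

```c
/* Sketch of the fix's pattern: validate the key count *before* it is
 * used to size the allocation, so bogus arguments can no longer
 * overflow the buffer size and trigger an OOM panic.
 * allocKeysChecked() and maxAllowed are illustrative names. */
#include <stdlib.h>

int *allocKeysChecked(long long count, long long maxAllowed) {
    if (count < 0 || count > maxAllowed) return NULL; /* range check first */
    return malloc(sizeof(int) * (size_t)count);       /* allocate after */
}
```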
2024-07-01 09:04:20 +03:00
Moti Cohen
a9267137ee HFE - count in command must match actual number of fields (#13369)
There was a wrong preliminary assumption that we can optionally provide
a vector of arguments larger than count.
This error-prone approach led to an actual error in that case.
This PR enforces that the vector of arguments matches count.

Also fixed flaky HRANDFIELD test.
2024-06-26 14:12:06 +03:00
debing.sun
52e12d8bac Don't keep global replication buffer reference for replicas marked CLIENT_CLOSE_ASAP (#13363)
In certain situations, we might generate a large number of propagates
(e.g., multi/exec, Lua script, or a single command generating tons of
propagations) within an event loop.
During the process of propagating to a replica, if the replica is
disconnected (marked as CLIENT_CLOSE_ASAP) due to exceeding the output
buffer limit, we should remove its reference to the global replication
buffer, so that the global replication buffer can still be properly
trimmed.

---------

Co-authored-by: oranagra <oran@redislabs.com>
2024-06-26 08:26:23 +08:00
Moti Cohen
5eac99c312 Fix H(P)EXPIREAT command to propagate HDEL as well (#13364)
H(P)EXPIREAT command might delete fields in case the absolute time is in the 
past. Those HDELs need to be propagated as well.
 
In general, as we need to propagate H(P)EXPIRE(AT) command to the replica, each 
field that is mentioned in the command should be categorized into one of the four
options:
1. Managed to update field’s expiration time - propagate it to replica as part 
   of the HPEXPIREAT command.
2. Deleted the field because the time is in the past - propagate also HDEL command
   to delete the field and remove the field from the propagated HPEXPIREAT.
3. Condition not met for the field - Remove the field from the propagated
   HPEXPIREAT command.
4. Field does not exist - Remove the field from the propagated HPEXPIREAT command.

If none of the provided fields match option number 1, then avoid
propagating the HPEXPIREAT command to the replica altogether.

This approach is aligned with the EXPIRE command:
If a given key has already expired, then DEL is propagated instead of
the EXPIRE command. If the condition is not met, the command is rejected.
Otherwise, the EXPIRE command is propagated for the given key.
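
A hedged sketch of the per-field classification described above (the enum and its names are hypothetical, not the actual implementation):

```c
/* Hypothetical classification of each field named in H(P)EXPIRE(AT). */
typedef enum {
    FIELD_EXPIRE_UPDATED,   /* 1: keep field in the rewritten HPEXPIREAT   */
    FIELD_EXPIRE_DELETED,   /* 2: time in the past -> also propagate HDEL  */
    FIELD_EXPIRE_COND_FAIL, /* 3: NX/XX/GT/LT not met -> drop from command */
    FIELD_EXPIRE_NO_FIELD,  /* 4: missing field -> drop from command       */
} fieldExpireStatus;
/* If no field is FIELD_EXPIRE_UPDATED, the HPEXPIREAT itself is not
 * propagated; only the HDELs (if any) reach the replica. */
```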
2024-06-25 13:15:09 +03:00
Moti Cohen
e26ea35cd4 Adapt HRANDFIELD to HFE feature (#13348)
Considerations for the selected implementation of HRANDFIELD with the HFE feature:

HRANDFIELD might access any of the fields in the hash, as some of them
might be expired. So the implementation of HRANDFIELD along with HFEs
could take one of two options:
1. Expire hash fields before diving into handling HRANDFIELD.
2. Refine the HRANDFIELD cases to deal with expired fields.

Regarding the first option, as a reference, the RANDOMKEY command also
declares O(1) complexity, yet might be stuck in a very long (but not infinite)
loop trying to find non-expired keys. Furthermore, RANDOMKEY also evicts expired
keys along the way even though it is categorized as a read-only command. Note
that the case of HRANDFIELD is more lightweight than RANDOMKEY's, since
HFEs have much more effective and aggressive active expiration for fields behind the scenes.

The second option introduces additional implementation complexity to HRANDFIELD.
We could further refine the HRANDFIELD cases to differentiate between scenarios
with many expired fields versus few expired fields, and adjust based on the
percentage of expired fields. However, this approach could still lead to long
loops or necessitate expiring fields before selecting them. For the “lightweight”
cases, a lightweight expiration is also expected.

Considering the pros and cons, and the fact that HRANDFIELD is an infrequent
command (particularly with HFEs) and that we have effective active expiration
for hash fields behind the scenes, it is better to keep it simple and choose option number 1.

Other changes:
* Don't mark the command dirty by internal hashTypeExpire(). It caused the
  read-only command HRANDFIELD to be accidentally propagated (this flag
  should be indicated at a higher level, by the command functions).
* Align `hashTypeExpireIfNeeded()` and `hashTypeGetValue()` to be more
  consistent with the `expireIfNeeded()` logic of the keyspace.
2024-06-24 18:11:53 +03:00
AcherTT
811c5d7aeb Add debug script command (#13289)
Add two new debug subcommands for outputting scripts.
1. `DEBUG SCRIPT LIST`
   Output all scripts.
2. `DEBUG SCRIPT <sha1>`
    Output a specific script.
    
Close #3846
2024-06-21 18:21:25 +08:00
debing.sun
a03b6e29a9 Sort out messy documentation for RM_Replicate and RM_ReplicateVerbatim (#13323)
Related to #12647

1. Make clear that `RM_Replicate` and `RM_ReplicateVerbatim` are
not thread-safe.
2. Make clear that `RM_Replicate` and `RM_ReplicateVerbatim` are always
wrapped into MULTI in any case.
2024-06-21 17:59:49 +08:00
Moti Cohen
e18a173a81 Fix rdbLoadObject() empty hash (#13347)
As part of the HFE feature, the logic of rdbLoadObject() was wrongly
modified to treat an empty hash loaded from RDB as a hash whose fields
have all expired. Rolled back to the `emptykey` logic. This function
should blindly load all fields, expired or not. Manually verified.

A few more minor fixes:
- Remove the double check of emptyKey for hash
- Fix from `sds` to `hfield` in rdbLoadObject() (not really a bug; both
are of type char*)
- Revert rdbLoadObject() code to get dbid instead of db
2024-06-20 13:53:47 +03:00
Filipe Oliveira (Redis)
24c85cc368 reduce getNodeByQuery CPU time by using fewer cache lines (from a 2064-byte struct to 64 bytes): reduces LLC misses and memory loads (#13296)
This PR takes the getKeysResult struct from 33 cachelines (by
default it has a 256-entry static buffer)

```
root@hpe10:~/redis# pahole -p   ./src/server.o -C getKeysResult
typedef struct {
	keyReference               keysbuf[256];         /*     0  2048 */
	/* --- cacheline 32 boundary (2048 bytes) --- */
	/* typedef keyReference */ struct {
		int                pos;
		int                flags;
	} *keys; /*  2048     8 */
	int                        numkeys;              /*  2056     4 */
	int                        size;                 /*  2060     4 */

	/* size: 2064, cachelines: 33, members: 4 */
	/* last cacheline: 16 bytes */
} getKeysResult;
```


to 1 cacheline (with a static buffer of 6 keys per command):
```
root@hpe10:~/redis# pahole -p   ./src/server.o -C getKeysResult
typedef struct {
	int                        numkeys;              /*     0     4 */
	int                        size;                 /*     4     4 */
	keyReference               keysbuf[6];           /*     8    48 */
	/* typedef keyReference */ struct {
		int                pos;
		int                flags;
	} *keys; /*    56     8 */

	/* size: 64, cachelines: 1, members: 4 */
} getKeysResult; 
```

We get around 1.5% higher ops/sec, and confirmation of around 15% fewer
LLC loads on getNodeByQuery and 37% fewer stores.

Function / Call Stack | CPU Time: Difference | CPU Time: 9462436fa4 | CPU Time: this PR | Loads: Difference | Loads: 9462436fa4 | Loads: this PR | Stores: Difference | Stores: 9462436fa4 | Stores: this PR
-- | -- | -- | -- | -- | -- | -- | -- | -- | --
getNodeByQuery | 0.753767 | 1.57118 | 0.817416 | 144297829 (15% less loads) | 920575969 | 776278140 | 367607824 (37% less stores) | 991642384 | 624034560

## results on client side

### baseline 
```
taskset -c 2,3 memtier_benchmark -s 192.168.1.200 --port 6379 --authenticate perf --cluster-mode --pipeline 10 --data-size 100 --ratio 1:0 --key-pattern P:P --key-minimum=1 --key-maximum 1000000 --test-time 180 -c 25 -t 2 --hide-histogram 
Writing results to stdout
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1 100%, 180 secs]  0 threads:   110333450 ops,  604992 (avg:  612942) ops/sec, 84.75MB/sec (avg: 85.86MB/sec),  0.82 (avg:  0.81) msec latency

2         Threads
25        Connections per thread
180       Seconds


ALL STATS
======================================================================================================================================================
Type         Ops/sec     Hits/sec   Misses/sec    MOVED/sec      ASK/sec    Avg. Latency     p50 Latency     p99 Latency   p99.9 Latency       KB/sec 
------------------------------------------------------------------------------------------------------------------------------------------------------
Sets       612942.14          ---          ---         0.00         0.00         0.81332         0.80700         1.26300         2.92700     87924.12 
Gets            0.00         0.00         0.00         0.00         0.00             ---             ---             ---             ---         0.00 
Waits           0.00          ---          ---          ---          ---             ---             ---             ---             ---          --- 
Totals     612942.14         0.00         0.00         0.00         0.00         0.81332         0.80700         1.26300         2.92700     87924.12 
```

### comparison 
```
taskset -c 2,3 memtier_benchmark -s 192.168.1.200 --port 6379 --authenticate perf --cluster-mode --pipeline 10 --data-size 100 --ratio 1:0 --key-pattern P:P --key-minimum=1 --key-maximum 1000000 --test-time 180 -c 25 -t 2 --hide-histogram 
Writing results to stdout
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1 100%, 180 secs]  0 threads:   111731310 ops,  610195 (avg:  620707) ops/sec, 85.48MB/sec (avg: 86.95MB/sec),  0.82 (avg:  0.80) msec latency

2         Threads
25        Connections per thread
180       Seconds


ALL STATS
======================================================================================================================================================
Type         Ops/sec     Hits/sec   Misses/sec    MOVED/sec      ASK/sec    Avg. Latency     p50 Latency     p99 Latency   p99.9 Latency       KB/sec 
------------------------------------------------------------------------------------------------------------------------------------------------------
Sets       620707.72          ---          ---         0.00         0.00         0.80312         0.79900         1.23900         2.87900     89037.78 
Gets            0.00         0.00         0.00         0.00         0.00             ---             ---             ---             ---         0.00 
Waits           0.00          ---          ---          ---          ---             ---             ---             ---             ---          --- 
Totals     620707.72         0.00         0.00         0.00         0.00         0.80312         0.79900         1.23900         2.87900     89037.78
```

Co-authored-by: filipecosta90 <filipecosta.90@gmail.com>
2024-06-18 18:00:47 +08:00
Ozan Tezcan
4aa25d042c Reply with array of return codes if the key does not exist for HFE commands (#13343)
Currently, HFE commands reply with an empty array if the key does not
exist. Though, a non-existing key and an empty key are the same thing:
the fields given in the command do not exist in the empty key.
So, replying with an array of 'no field' error codes (-2) suits Redis
logic better. Similarly, `hmget` returns an array of nulls if the
key does not exist.

After this PR:
```
127.0.0.1:6379> hpersist missingkey fields 2 a b
1) (integer) -2
2) (integer) -2
```
2024-06-14 09:35:05 +03:00
Jo
871c985919 Update FIELDS argument to block type for HFE commands schema (#13339)
I reviewed the `XREAD` command syntax:
```
XREAD [COUNT count] [BLOCK milliseconds] STREAMS key [key ...] id [id ...]
```

Here’s the structure for `XREAD`:
```json
"arguments": [
            {
                "token": "COUNT",
                "name": "count",
                "type": "integer",
                "optional": true
            },
            {
                "token": "BLOCK",
                "name": "milliseconds",
                "type": "integer",
                "optional": true
            },
            {
                "name": "streams",
                "token": "STREAMS",
                "type": "block",
                "arguments": [
                    {
                        "name": "key",
                        "type": "key",
                        "key_spec_index": 0,
                        "multiple": true
                    },
                    {
                        "name": "ID",
                        "type": "string",
                        "multiple": true
                    }
                ]
            }
]
```

Now, consider the `HEXPIRE` syntax:
```
HEXPIRE key seconds [NX | XX | GT | LT] FIELDS numfields field [field ...]
```

Since the `FIELDS` token functions similarly to `STREAMS`, and given that `STREAMS` is defined as a block, I believe the `FIELDS` in `HEXPIRE` should also be defined as a block.
2024-06-14 13:51:49 +08:00
debing.sun
ed10f737b8 Add new hexpired notification for HFE (#13329)
When a hash field expires, we send a new `hexpired` notification.
It mainly includes the following three cases:
1. When a field expires by active expiration.
2. When a field expires by lazy expiration.
3. When the user uses the `h(p)expire(at)` command, the user will also
get a `hexpired` notification if the field expires during the command.

## Improvement
1. Now if more than one field expires during an hmget command, we send
only a single `hexpired` notification.
2. When a field with a TTL is deleted by commands like hdel without
updating the global DS, active expire will not send a notification.

---------

Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Moti Cohen <moti.cohen@redis.com>
2024-06-11 21:42:34 +08:00
Moti Cohen
f01fdc3960 Reserve 2 bits out of EB_EXPIRE_TIME_MAX for possible future use (#13331)
Reserve 2 bits out of hash-field expiration time (`EB_EXPIRE_TIME_MAX`)
for possible future lightweight indexing/categorizing of fields. It can
be achieved by hacking HFE as follows:
```
HPEXPIREAT key [ 2^47 + USER_INDEX ] FIELDS numfields field [field …]
```

Redis will also need to expose some kind of `HEXPIRESCAN` and `HEXPIRECOUNT`
for this idea. Yet to be better defined.

`HFE_MAX_ABS_TIME_MSEC` constraint must be enforced only at API level.
Internally, the expiration time can be up to `EB_EXPIRE_TIME_MAX` for
future readiness.
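
A minimal self-contained sketch of the encoding idea (the constants mirror the commit text; names and exact bit positions are illustrative):

```c
#include <stdint.h>

#define EB_EXPIRE_TIME_MAX    ((1ULL << 48) - 1)        /* internal capacity */
#define HFE_MAX_ABS_TIME_MSEC (EB_EXPIRE_TIME_MAX >> 2) /* API limit: top 2 bits reserved */

/* e.g. HPEXPIREAT key (2^47 + USER_INDEX) FIELDS numfields field ... */
static inline int hfeIsUserIndex(uint64_t expireAt) {
    return expireAt > HFE_MAX_ABS_TIME_MSEC; /* above the API-level limit */
}
```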
2024-06-10 16:57:26 +03:00
Moti Cohen
ce121b9261 HFE - Avoid lazy expire if called by modules + cleanup (#13326)
We need to be careful if called by modules, since the modules API allows
opening and closing a key handle. We don't want to invalidate the handle
underneath.

* hashTypeExists(), hashTypeGetValueObject() - will return the logical
state of the field. A flag will indicate noExpire.
* RM_HashGet() - Will get NULL if the field expired. Fields won’t be
deleted.
* RM_ScanKey() - might return 0 items if all fields got expired. Fields
won’t be deleted.
* RM_HashSet() - If set, then override expired field. If delete, we can
either delete or leave it to active-expiration. XX/NX - logically
correct (Verify with tests).

Nice to have (not implemented):
* RedisModule_CloseKey() - We can locally active-expire up to 100 items.

Note:
The length reported to modules will be wrong, just like in Redis itself
(it counts expired fields).
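
For instance, a module reading a field through the public API might see the following (hedged sketch; error handling elided):

```c
/* Sketch: with HFE, an expired field is reported as absent to the
 * module, but it is NOT deleted under the open key handle. */
RedisModuleString *val = NULL;
RedisModule_HashGet(key, REDISMODULE_HASH_NONE, field, &val, NULL);
if (val == NULL) {
    /* Field is logically expired or missing; deletion is left to
     * active expiration so the open key handle stays valid. */
}
```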
2024-06-10 11:24:26 +03:00
debing.sun
690ef36330 Fix HFE notification test timing caused by field expiration in hexpire command (#13325)
In the old test, we gave `hexpire` a very short expire time, which
caused the field to be deleted by the time the `hpersist` command was
executed. As a result, the `hpersist` command couldn't emit a
`hpersist` notification, leaving the test stuck.

Failed CI:
https://github.com/redis/redis/actions/runs/9342175887/job/25709886471
2024-06-05 14:49:45 +08:00
debing.sun
9a2c6ba4e7 Prevent negative expire parameter in HEXPIRE/HEXPIREAT/HPEXPIRE/HPEXPIREAT commands (#13310)
1. Don't allow the expire parameter of the
HEXPIRE/HEXPIREAT/HPEXPIRE/HPEXPIREAT commands to be negative.

2. Remove dead code reported by Coverity:
when `unit` is not `UNIT_SECONDS`, the second `if (expire > (long long)
EB_EXPIRE_TIME_MAX)` check is dead code.
```c
# t_hash.c
2988    /* Check expire overflow */
      	cond_at_most: Condition expire > 281474976710655LL, taking false branch. Now the value of expire is at most 281474976710655.
2989    if (expire > (long long) EB_EXPIRE_TIME_MAX) {
2990        addReplyErrorExpireTime(c);
2991        return;
2992    }

2994    if (unit == UNIT_SECONDS) {
2995        if (expire > (long long) EB_EXPIRE_TIME_MAX / 1000) {
2996            addReplyErrorExpireTime(c);
2997            return;
2998        }
2999        expire *= 1000;
3000    } else {
      	at_most: At condition expire > 281474976710655LL, the value of expire must be at most 281474976710655.
      	dead_error_condition: The condition expire > 281474976710655LL cannot be true.
3001        if (expire > (long long) EB_EXPIRE_TIME_MAX) {
      	
CID 494223: (#1 of 1): Logically dead code (DEADCODE)
dead_error_begin: Execution cannot reach this statement: addReplyErrorExpireTime(c);.
3002            addReplyErrorExpireTime(c);
3003            return;
3004        }
3005    }
```
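
A hedged sketch of the resulting validation flow (based on the snippet above; the negative-value error message is illustrative):

```c
/* Reject a negative expire up front, then keep one reachable overflow
 * check per unit (the duplicated else-branch check was dead code). */
if (expire < 0) {
    addReplyError(c, "invalid expire time, must be >= 0");
    return;
}
if (unit == UNIT_SECONDS) {
    if (expire > (long long)EB_EXPIRE_TIME_MAX / 1000) {
        addReplyErrorExpireTime(c);
        return;
    }
    expire *= 1000;
} else if (expire > (long long)EB_EXPIRE_TIME_MAX) {
    addReplyErrorExpireTime(c);
    return;
}
```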

---------

Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
2024-06-04 20:35:26 +08:00
gms
f36b5a8586 Fix crash due to unblock client during slot migration (#13311)
In #13224, we found a crash during cluster slot migration but didn't know
why. So I checked all the `return C_OK` paths in processCommand to see if
we were missing a duration reset, and found this.

This fix is like #12247: when we reject the command, we should reset the
duration. I tested it and verified it fixes #13224.

So the reason may be that we were using a stream block, and then during the
slot migration it got a redirect, which then crashed the server.

---------

Co-authored-by: debing.sun <debing.sun@redis.com>
2024-06-04 20:16:36 +08:00
Ozan Tezcan
293a68afdf Use lookupKeyWrite() for hpersist command (#13321)
As hpersist is a write command, we should use lookupKeyWrite() instead
of lookupKeyRead() to fetch the key.
2024-06-04 12:04:37 +03:00
Ozan Tezcan
44352beefa Fix crash in RM_ScanKey() when used with hexpire (#13320)
RM_ScanKey() was overlooked while introducing hash field expiration. 
An assert is triggered when it is called on a hash key with
OBJ_ENCODING_LISTPACK_EX encoding.

I've changed the code to handle listpackex encoding properly.
2024-06-04 11:42:02 +03:00
Moti Cohen
56169112e3 Fix returned value nextExpireTime by ebExpire() (#13313)
In the `ebuckets` structure, on `ebExpire()`, if the callback indicated to update
the item's expiration time and return it back to ebuckets (`ACT_UPDATE_EXP_ITEM`),
then the returned value `nextExpireTime` should be updated, if needed.
The invalid value of `nextExpireTime` was changed from 0 to `EB_EXPIRE_TIME_INVALID`.
2024-06-03 13:54:44 +03:00
Valentino Geron
50569a906c Free current client asynchronously after user permissions changes (#13274)
The crash happens when the user that triggers the permission changes
should be affected (and should be disconnected eventually).

To handle such a scenario, we should use the
`CLIENT_CLOSE_AFTER_COMMAND` flag.

This commit encapsulates all the places that should be handled in the
same way in `deauthenticateAndCloseClient`

Also:
* bugfix: during the ACL LOAD we ignore clients that are marked as
`CLIENT MASTER`
2024-05-30 22:09:30 +08:00
jonghoonpark
5a3534f9b5 dynamically list test files (#13220)
**Related issue**
https://github.com/redis/redis/issues/13219

**Motivation**
Currently we have to manually update the all_tests variable when
introducing new test files.

**Modification**
I have modified it to list test files dynamically. Instead of adding
all test files, it only adds test files from the following 4 paths

- unit
- unit/type
- unit/cluster
- integration

so that it doesn't deviate too much from what we already do

**Result**
- dynamically list test files to all_tests variable
- close issue https://github.com/redis/redis/issues/13219

**Additional information**
- removed the `list-common.tcl` file and added a
`generate_largevalue_test_array` proc in `util.tcl`, because
`list-common.tcl` is not a test file
- There is an order dependency, so I added code to the "Is a ziplist
encoded Hash promoted on big payload?" test that resets
hash-max-listpack-value to the default (64).

---------

Signed-off-by: jonghoonpark <dev@jonghoonpark.com>
Co-authored-by: debing.sun <debing.sun@redis.com>
2024-05-30 19:09:26 +08:00
debing.sun
7b9e960690 Hash Field Expiration (#13303)
## Background

This PR introduces support for field-level expiration in Redis hashes. Previously, Redis supported expiration only at the key level, but this enhancement allows setting expiration times for individual fields within a hash.

## New commands
* HEXPIRE
* HEXPIREAT
* HEXPIRETIME
* HPERSIST
* HPEXPIRE
* HPEXPIREAT
* HPEXPIRETIME
* HPTTL
* HTTL

## Short example
from @moticless
```sh
127.0.0.1:6379>  hset myhash f1 v1 f2 v2 f3 v3                                                   
(integer) 3
127.0.0.1:6379>  hpexpire myhash 10000 NX fields 2 f2 f3                                         
1) (integer) 1
2) (integer) 1
127.0.0.1:6379>  hpttl myhash fields 3 f1 f2 f3                                                                                                                                                                         
1) (integer) -1
2) (integer) 9997
3) (integer) 9997
127.0.0.1:6379>  hgetall myhash  
1) "f3"
2) "v3"
3) "f2"
4) "v2"
5) "f1"
6) "v1"

... after 10 seconds ...

127.0.0.1:6379>  hgetall myhash  
1) "f1"
2) "v1"
127.0.0.1:6379>
```

## Expiration strategy
1. Active expiration
    Redis periodically performs active expiration and deletion of hash keys that contain expired fields, with a maximum attempt limit.
2. Lazy expiration
    When a client touches fields within a hash, Redis checks if the fields are expired. If a field is expired, it will be deleted. However, we do not delete expired fields during a traversal; we implicitly skip over them.

## RDB changes
Add two new RDB types: `RDB_TYPE_HASH_METADATA` and `RDB_TYPE_HASH_LISTPACK_EX`.

## Notification
1. Add `hpersist` notification for the `HPERSIST` command.
2. Add `hexpire` notification for the `HEXPIRE`, `HEXPIREAT`, `HPEXPIRE` and `HPEXPIREAT` commands.

## Internal
1. Add new data structure `ebuckets`, which is used to store TTL and keys, enabling quick retrieval of keys based on TTL.
2. Add new data structure `mstr` like sds, which is used to store a string with TTL.

This work was done by @moticless, @tezc, @ronen-kalish, @sundb, I just release it.
2024-05-30 15:26:19 +08:00
Ozan Tezcan
f0389f2823 Fix position of numfields in H(P)EXPIRE json files (#13301)
Fix position of numfields in H(P)EXPIRE json files
2024-05-29 16:35:47 +03:00
Moti Cohen
33fc0fbfae HFE to support AOF and replicas (#13285)
* For the replica's sake, rewrite the commands `H*EXPIRE*`, `HSETF`, `HGETF` to
carry absolute Unix time in msec.
* On active expiration of a field, propagate HDEL to the replica
(`propagateHashFieldDeletion()`)
* On lazy expiration, propagate HDEL to the replica (`hashTypeGetValue()`
now calls `hashTypeDelete()`. It also takes care to call
`propagateHashFieldDeletion()`).
* Fix the `H*EXPIRE*` command such that if it gets flag `LT` and the field
doesn't have any expiration, it is considered a valid
condition.

Note, replicas don't perform any active expiration, and should avoid lazy
expiration. On replicas, `hashTypeGetValue()` doesn't check expiration (as long
as the master didn't request to delete the field, it is valid)

TODO: 
* Attach `dbid` to HASH metadata. See
[here](https://github.com/redis/redis/pull/13209#discussion_r1593385850)

---------

Co-authored-by: debing.sun <debing.sun@redis.com>
2024-05-29 19:47:48 +08:00
Ozan Tezcan
6a11d458be Fix hscan return value (#13297)
In the last step of hscan, while replying to the client, we assume all
items in the result list are keys, which are mstr instances. Though,
there might be values, which are sds instances.

Added a check to avoid calling mstrlen() for value objects.

To reproduce:
```
127.0.0.1:6379> hset myhash1 a 11111111111111111111111111111111111111111111111111111111111111111
(integer) 0
127.0.0.1:6379> hscan myhash1 0
1) "0"
2) 1) "a"
   2) "11111111111111111111111111111111111111111111111111111111111111111\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
```
2024-05-28 11:59:57 +03:00
Ozan Tezcan
e2918705c8 Fix hfe reply schemas (#13295)
In https://github.com/redis/redis/pull/13291, we changed HFE
commands to return an empty array if the key does not exist, but
forgot to update the JSON schemas.
2024-05-27 16:07:01 +03:00
debing.sun
2d1bb42cba Update version references from 8.0 to 7.4 for upcoming release (#13294) 2024-05-27 16:47:23 +08:00
Ozan Tezcan
2f34f6f0b9 Delete hsetf and hgetf (#13291)
Changes:
- Delete hsetf and hgetf commands
- Hfe commands will return empty array instead of nil.
---------

Co-authored-by: Moti Cohen <moticless@gmail.com>
2024-05-26 13:30:45 +03:00
Moti Cohen
60e1582ddb Fix statistics test (#13293) 2024-05-26 03:34:15 +03:00
Yves LeBras
6801a3cec8 redis-cli --keystats and --keystats-samples with --top and --cursor (#12826)
Added `--keystats` and `--keystats-samples <n>` commands.

The new commands combine memkeys and bigkeys with additional
distribution data.
We often run memkeys and bigkeys one after the other; it is
convenient to just have one command.
Distribution and top 10 key sizes are useful when we have multiple keys
taking a lot of memory.

Like for memkeys and bigkeys, we can use `-i <n>` and Ctrl-C to
interrupt the scan loop. It will still show the statistics on the keys
sampled so far.
We can use two new optional parameters:
- `--cursor <n>` to continue from the last cursor after an interruption
of the scan.
- `--top <n>` to change the number of top key sizes (10 by default).

Implemented a fix for the `--memkeys-samples 0` issue in order to use
`--keystats-samples 0`.

Used hdr_histogram for the key size distribution.

For key length, hdr_histogram seemed overkill and preset ranges were
used.

The memory used by keystats with hdr_histogram is around 7MB (compared
to 3MB for memkeys or bigkeys).

Execution time is roughly equivalent to running memkeys and bigkeys
back to back. Each scan loop issues more commands (key type, key size, key
length/cardinality).

We can redirect the output to a file. In that case, no color or text
refresh will happen.

Limitation:
- As the information printed during the loop is refreshed (moving the cursor
up), stderr output and information not fitting in the terminal window (width or
height) might cause some refresh issues.

Comments:
- config.top_sizes_limit could be used globally like config.count, but
it is passed as parameter to be consistent with config.memkeys_samples.
- Not sure if we should move some utility functions to cli-common.c.

Got some tips and help from @ofirluzon.

---------

Co-authored-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: debing.sun <debing.sun@redis.com>
2024-05-25 15:59:27 +08:00
Moti Cohen
f34f2ade85 Add Statistics hashes_with_expiry_fields to INFO (#13275)
Added hashes_with_expiry_fields.
Optimally, it would be better to have a statistic that counts all fields
with expiry, but that requires careful logic and computation, following
and diving deep into listpacks and hashes. This statistic is trivial to
achieve and is reflected by the global HFE DS, which has a built-in
enumeration of all the hashes that are registered in it.
2024-05-23 17:29:45 +03:00
Moti Cohen
ae6df30eb1 Fix ebuckets stop indication during ebExpire() (#13287)
On active expiration of buckets, in `ebExpire()` ->
`ebSegExpire()`, if the callback `onExpireItem()` indicated to stop, the
iterator (iter) had already been wrongly updated to point to the next item.
In turn, the segment would be updated without the current item.
2024-05-23 10:46:50 +03:00
Ozan Tezcan
a25b15392c Improve performance of hfe listpack (#13279)
This PR contains a few optimizations for hfe listpack.
- Hfe fields are ordered by TTL in the listpack. There are two cases
that we want to search listpack according to TTLs:
- As part of active-expiry, we need to find the fields that are expired.
e.g. find fields that have smaller TTLs than given timestamp.
- When we want to add a new field, we need to find the correct position
to maintain the order by TTL. e.g. find the field that has a higher TTL
than the one we want to insert.
  
Iterating with lpNext() to compare TTLs has a performance cost, as
lpNext() calls lpValidateIntegrity() for each entry. Instead, this PR
adds `lpFindCb()` to listpack, which accepts a comparator callback (see
the sketch after this list). It preserves the same validation logic as
lpFind(), which is faster than searching with lpNext().
  
- We have a field name, value, and TTL for a single hfe field. Inserting these
items one by one into the listpack is costly. Especially since we place fields
according to TTL, most additions end up in the middle of the
listpack. Each insert causes a realloc + memmove. This PR introduces
`lpBatchInsert()` to add multiple items in one go.

- For hsetf, if we are going to update the value and TTL at the same time,
currently we update the value first and later update the TTL (two
distinct listpack operations). This PR improves it by doing both with a
single update operation.
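
As an illustration of the comparator-callback idea (the actual `lpFindCb()` signature in listpack.h may differ from this sketch):

```c
/* Illustrative only: a comparator callback lets one validated listpack
 * scan locate the first entry whose TTL exceeds a target, replacing
 * repeated lpNext() + lpValidateIntegrity() calls. The callback
 * signature here is hypothetical. */
typedef struct { long long targetTtl; } ttlFindCtx;

static int ttlCompare(void *user, long long entryTtl) {
    ttlFindCtx *ctx = user;
    /* Nonzero = stop: this entry is the insert position that keeps
     * the listpack ordered by TTL. */
    return entryTtl > ctx->targetTtl;
}
```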

---------

Co-authored-by: debing.sun <debing.sun@redis.com>
2024-05-22 23:05:32 +03:00
debing.sun
95cbe879e5 sanitize dump payload for HFE (#13278)
Add the following validations:
1. Get TTL using the lpGetIntegerValue() method instead of lpGetValue(),
Ref https://github.com/redis/redis/pull/13209#discussion_r1602569422
2. The TTL of listpackex is a number in the valid range
(0~EB_EXPIRE_TIME_MAX).
3. The TTL fields of listpackex are ordered.
4. The TTL of hashtable is within the valid range
(0~EB_EXPIRE_TIME_MAX).

Other:
Fix the missing handling of OBJ_ENCODING_LISTPACK_EX in
dismissHashObject().

---------

Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
2024-05-22 10:53:30 +08:00
Ted Lyngmo
e92363e248 Log the real reason for why posix_fadvise failed (#13246)
`reclaimFilePageCache` did not set `errno`, but `rdbSaveInternal`, which
logs the error, assumed it did. This makes sure `errno` is set.
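
A hedged sketch of the fix: `posix_fadvise()` returns the error code instead of setting `errno`, so the wrapper assigns it (the real function's signature may differ):

```c
#include <errno.h>
#include <fcntl.h>

static int reclaimFilePageCache(int fd, off_t offset, off_t length) {
    int ret = posix_fadvise(fd, offset, length, POSIX_FADV_DONTNEED);
    if (ret) {
        errno = ret; /* surface the real reason to the logging caller */
        return -1;
    }
    return 0;
}
```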

Fixes #13245

Signed-off-by: Ted Lyngmo <ted@lyncon.se>
2024-05-21 10:33:14 +08:00
debing.sun
9ffc35c98e Have consistent behavior of SPUBLISH within multi/exec like regular command (#13276)
This PR is based on the commits from PR #12944.

Allow SPUBLISH command within multi/exec on replica

Behavior on unstable:

```
127.0.0.1:6380> CLUSTER NODES
39ce8aa20f1f0d91f1a88d976ee1926dfefcdf1a 127.0.0.1:6380@16380 myself,slave 8b0feb120b68aac489d6a5af9c77dc40d71bc792 0 0 0 connected
8b0feb120b68aac489d6a5af9c77dc40d71bc792 127.0.0.1:6379@16379 master - 0 1705091681202 0 connected 0-16383
127.0.0.1:6380> SPUBLISH hello world
(integer) 0
127.0.0.1:6380> MULTI
OK
127.0.0.1:6380(TX)> SPUBLISH hello world
QUEUED
127.0.0.1:6380(TX)> EXEC
(error) MOVED 866 127.0.0.1:6379
```

With this change:

```
127.0.0.1:6380> SPUBLISH hello world
(integer) 0
127.0.0.1:6380> MULTI
OK
127.0.0.1:6380(TX)> SPUBLISH hello world
QUEUED
127.0.0.1:6380(TX)> EXEC
1) (integer) 0
```

---------

Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: oranagra <oran@redislabs.com>
2024-05-21 09:25:13 +08:00
Ozan Tezcan
36c3cec6d1 Fix hfe RDB tests by adding FIELDS keyword to hexpire commands (#13277)
FIELDS keyword was added as part of
[#13270](https://github.com/redis/redis/pull/13270). 

It was missing in
[#13243](https://github.com/redis/redis/pull/13243)
2024-05-18 11:34:13 +03:00
Ronen Kalish
323be4d699 Hfe serialization listpack (#13243)
Add RDB de/serialization for HFE

This PR adds two new RDB types: `RDB_TYPE_HASH_METADATA` and
`RDB_TYPE_HASH_LISTPACK_TTL` to save HFE data.
When the hash RAM encoding is dict, it will be saved in the former, and
when it is listpack it will be saved in the latter.
Both formats just add the TTL value for each field after the data that
was previously saved, i.e. HASH_METADATA saves the number of entries
and, for each entry, key, value and TTL, whereas the listpack is saved
as a blob.
On read, the usual dict <--> listpack conversion takes place if
required.
In addition, when reading a hash that was saved as a dict, fields are
actively expired if expiry is due. Currently this also holds for the
listpack encoding, but it is supposed to be removed.
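
A sketch of the resulting on-disk layouts as described above (illustrative notation, not the exact byte format):

```
RDB_TYPE_HASH_METADATA:     <num-entries> { <field> <value> <ttl-msec> } ...
RDB_TYPE_HASH_LISTPACK_TTL: <listpack blob, fields/values/TTLs included>
```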

TODO:
Remove active expiry on load when loading from listpack format (unless
we'll decide to keep it)
2024-05-17 18:27:02 +08:00
Moti Cohen
71676513dd Fix commands H*EXPIRE* and H*TTL to include FIELDS constant (#13270)
The same goes for: HPEXPIRE, HEXPIREAT, HPEXPIREAT, HEXPIRETIME,
HPEXPIRETIME, HPTTL, HTTL, HPERSIST
2024-05-16 19:35:58 +03:00
debing.sun
f1b0212917 Revert "Change mmap rand bits as a temporary mitigation to resolve asan bug (#13150)" (#13266)
The kernel config `vm.mmap_rnd_bits` change had been reverted in
https://github.com/actions/runner-images/issues/9491, so we can revert
the changes from #13150.

CI only with ASAN passed:
https://github.com/sundb/redis/actions/runs/9058263634
2024-05-15 14:12:22 +08:00
debing.sun
ffbdf2f6f3 Fix test failure due to differing reply format of XREADGROUP under RESP3 in MULTI (#13255)
This test was introduced by #13251.
Normally we auto-transform the reply format of XREADGROUP to an array under
RESP3 (see trasformer_funcs).
But when we execute the XREADGROUP command inside MULTI this doesn't work,
which caused the new test to fail.
The solution is to verify the reply of XREADGROUP in advance rather than
inside MULTI.

Failed validate schema CI:
https://github.com/redis/redis/actions/runs/9025128323/job/24800285684

---------

Co-authored-by: guybe7 <guy.benoish@redislabs.com>
2024-05-14 20:08:32 +08:00
debing.sun
80be2cc291 Add defragment support for HFE (#13229)
## Background
1. All hash objects that contain HFE are referenced by db->hexpires.
2. All fields in a dict hash object with HFE are referenced by an
ebucket.

So when we defrag the hash object or the field in a dict with HFE, we
also need to update the references in them.

## Interface
1. Add a new interface, `ebDefragItem`, which accepts a defrag
callback to defrag items in ebuckets while simultaneously updating their
references in the ebucket.
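
A hedged sketch of the new interface's shape (the actual declaration in ebuckets.h may differ):

```c
/* Illustrative only: the caller supplies a defrag allocator callback;
 * ebuckets reallocates the item through it and fixes up the internal
 * reference to point at the new address. */
typedef void *(ebDefragFunction)(void *item);
int ebDefragItem(ebuckets *eb, EbucketsType *type, void **item,
                 ebDefragFunction *defragfn);
```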

## Mainly changes
1. The key type of a hash object's dict is no longer sds, so add a new
`activeDefragHfieldDict()` to defrag the dict instead of
`activeDefragSdsDict()`.
2. When we defrag the dict of a hash object using `dictScanDefrag()`,
we always set the defrag callback `defragKey` of `dictDefragFunctions`
to NULL, because we can't reallocate a field without updating its
reference in ebuckets.
Instead, we defrag the field of the dict and update its reference
in the `dictScanDefrag` callback of dictScanFunction().
3. When we defrag a hash robj with HFE, we use `ebDefragItem` to
defrag the robj and update the reference in db->hexpires.

## TODO:
Defrag the ebucket structure incrementally; this will be handled in a
future PR.

---------

Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Moti Cohen <moti.cohen@redis.com>
2024-05-14 17:32:33 +08:00
Ozan Tezcan
5066e6e9cd Fix hgetf/hsetf reply type by returning string (#13263)
If the encoding is listpack, the hgetf and hsetf commands reply with the
field value type as integer.
This PR fixes it by returning a string.

Problematic cases:
```
127.0.0.1:6379> hset hash one 1
(integer) 1
127.0.0.1:6379> hgetf hash fields 1 one
1) (integer) 1
127.0.0.1:6379> hsetf hash GETOLD fvs 1 one 2
1) (integer) 1
127.0.0.1:6379> hsetf hash DOF GETNEW fvs 1 one 2
1) (integer) 2
```

Additional fixes:
- hgetf/hsetf command description text

Fixes #13261, #13262
2024-05-13 11:09:49 +03:00