mirror of
https://github.com/redis/redis.git
synced 2026-04-21 03:01:35 -04:00
Refactor of ActiveDefrag to reduce latencies (#13814)
This PR is based on: https://github.com/valkey-io/valkey/pull/1462

## Issue/Problems

**Duty Cycle:** Active Defrag has configuration values which determine the intended percentage of CPU to be used, based on a gradient of the fragmentation percentage. However, Active Defrag performs its work on the 100ms serverCron timer. It computes a duty cycle and performs a single long cycle. For example, if the intended CPU is computed to be 10%, Active Defrag will perform 10ms of work on this 100ms timer cron.

* This type of cycle introduces large latencies on the client (up to 25ms with default configurations).
* This mechanism is subject to starvation when slow commands delay the serverCron.

**Maintainability:** The current Active Defrag code is difficult to read and maintain. Refactoring the high-level control mechanisms and functions will allow us to more seamlessly adapt to new defragmentation needs. Specific examples include:

* A single function (activeDefragCycle) includes the logic to start/stop/modify the defragmentation as well as performing one "step" of the defragmentation. These responsibilities should be separated, so that the actual defrag activity can be performed on an independent timer (see duty cycle above).
* The code is focused on kvstores, with other actions just thrown in at the end (defragOtherGlobals). There's no mechanism to break this up to reduce latencies.
* For the main dictionary (only), there is a mechanism to set aside large keys to be processed in a later step. However, this code creates a separate list in each kvstore (main dict or not), bleeding/exposing internal defrag logic. We only need one list, inside defrag. This logic should be more contained for the main key store.
* The structure is not well suited to other non-main-dictionary items. For example, pub-sub and pub-sub-shard were added, but in such a way that in CMD mode, with multiple DBs, we will defrag pub-sub repeatedly after each DB.
## Description of the feature

Primarily, this feature splits activeDefragCycle into two functions:

1. One function is called from serverCron to determine if a defrag cycle (a complete scan) needs to be started. It also determines if the CPU expenditure needs to be adjusted.
2. The second function is a timer proc dedicated to performing defrag. It is invoked independently from serverCron.

Once the functions are split, there is more control over the latency created by the defrag process. A new configuration determines the running time for the defrag timer proc. The default will be 500us (one-half of the current minimum time). The timer is then adjusted to achieve the desired CPU. As an example, 5% of CPU will run the defrag process for 500us every 10ms. This is much better than running for 5ms every 100ms.

The timer function also adjusts to compensate for starvation. If a slow command delays the timer, the process runs proportionately longer to ensure that the configured CPU is achieved. Given the presence of slow commands, the proportional extra time is insignificant to latency. This also addresses the overload case: at 100% CPU, if the event loop slows, defrag runs proportionately longer to achieve the configured CPU utilization.

Optionally, in low-CPU situations, there would be little impact in utilizing more than the configured CPU. We could optionally allow the timer to pop more often (even with a 0ms delay) and the (tail) latency impact would not change.

We also add a time limit for the defrag duty cycle to prevent excessive latency. When latency is already high (indicated by a long time between calls), we don't want to make it worse by running defrag for too long.

Addressing maintainability:

* The basic code structure can more clearly be organized around a "cycle".
* Have clear begin/end functions and a set of "stages" to be executed.
* Rather than stages being limited to "kvstore" type data, a cycle should be more flexible, incorporating the ability to incrementally perform arbitrary work. This will likely be necessary in the future for certain module types. It can be used today to address oddballs like defragOtherGlobals.
* We reduced some of the globals and some of the coupling. defrag_later should be removed from serverDb.
* Each stage should begin on a fresh cycle, so that non-time-bounded operations like kvstoreDictLUTDefrag are less likely to introduce additional latency.

Signed-off-by: Jim Brunner <brunnerj@amazon.com>
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: ShooterIT <wangyuancode@163.com>
src/defrag.c: 963 lines changed (file diff suppressed because it is too large)
@@ -12,9 +12,15 @@
  * Copyright (c) 2011-Present, Redis Ltd. and contributors.
  * All rights reserved.
  *
+ * Copyright (c) 2024-present, Valkey contributors.
+ * All rights reserved.
+ *
  * Licensed under your choice of the Redis Source Available License 2.0
  * (RSALv2) or the Server Side Public License v1 (SSPLv1).
+ *
+ * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.
  */

 #include "fmacros.h"

 #include <string.h>
@@ -802,10 +808,14 @@ unsigned long kvstoreDictScanDefrag(kvstore *kvs, int didx, unsigned long v, dic
  * within dict, it only reallocates the memory used by the dict structure itself using
  * the provided allocation function. This feature was added for the active defrag feature.
  *
- * The 'defragfn' callback is called with a reference to the dict
- * that callback can reallocate. */
-void kvstoreDictLUTDefrag(kvstore *kvs, kvstoreDictLUTDefragFunction *defragfn) {
-    for (int didx = 0; didx < kvs->num_dicts; didx++) {
+ * With 16k dictionaries for cluster mode with 1 shard, this operation may require substantial time
+ * to execute. A "cursor" is used to perform the operation iteratively. When first called, a
+ * cursor value of 0 should be provided. The return value is an updated cursor which should be
+ * provided on the next iteration. The operation is complete when 0 is returned.
+ *
+ * The 'defragfn' callback is called with a reference to the dict that callback can reallocate. */
+unsigned long kvstoreDictLUTDefrag(kvstore *kvs, unsigned long cursor, kvstoreDictLUTDefragFunction *defragfn) {
+    for (int didx = cursor; didx < kvs->num_dicts; didx++) {
         dict **d = kvstoreGetDictRef(kvs, didx), *newd;
         if (!*d)
             continue;
@@ -818,7 +828,9 @@ void kvstoreDictLUTDefrag(kvstore *kvs, kvstoreDictLUTDefragFunction *defragfn)
             if (metadata->rehashing_node)
                 metadata->rehashing_node->value = *d;
         }
+        return (didx + 1);
     }
+    return 0;
 }

 uint64_t kvstoreGetHash(kvstore *kvs, const void *key)
@@ -1059,13 +1071,14 @@ int kvstoreTest(int argc, char **argv, int flags) {
     }

     TEST("Verify that a rehashing dict's node in the rehashing list is correctly updated after defragmentation") {
+        unsigned long cursor = 0;
         kvstore *kvs = kvstoreCreate(&KvstoreDictTestType, 0, KVSTORE_ALLOCATE_DICTS_ON_DEMAND);
         for (i = 0; i < 256; i++) {
             de = kvstoreDictAddRaw(kvs, 0, stringFromInt(i), NULL);
             if (listLength(kvs->rehashing)) break;
         }
         assert(listLength(kvs->rehashing));
-        kvstoreDictLUTDefrag(kvs, defragLUTTestCallback);
+        while ((cursor = kvstoreDictLUTDefrag(kvs, cursor, defragLUTTestCallback)) != 0) {}
         while (kvstoreIncrementallyRehash(kvs, 1000)) {}
         kvstoreRelease(kvs);
     }
@@ -1,3 +1,16 @@
+/*
+ * Copyright (c) 2009-Present, Redis Ltd.
+ * All rights reserved.
+ *
+ * Copyright (c) 2024-present, Valkey contributors.
+ * All rights reserved.
+ *
+ * Licensed under your choice of the Redis Source Available License 2.0
+ * (RSALv2) or the Server Side Public License v1 (SSPLv1).
+ *
+ * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.
+ */
+
 #ifndef DICTARRAY_H_
 #define DICTARRAY_H_

@@ -78,7 +91,7 @@ unsigned int kvstoreDictGetSomeKeys(kvstore *kvs, int didx, dictEntry **des, uns
 int kvstoreDictExpand(kvstore *kvs, int didx, unsigned long size);
 unsigned long kvstoreDictScanDefrag(kvstore *kvs, int didx, unsigned long v, dictScanFunction *fn, dictDefragFunctions *defragfns, void *privdata);
 typedef dict *(kvstoreDictLUTDefragFunction)(dict *d);
-void kvstoreDictLUTDefrag(kvstore *kvs, kvstoreDictLUTDefragFunction *defragfn);
+unsigned long kvstoreDictLUTDefrag(kvstore *kvs, unsigned long cursor, kvstoreDictLUTDefragFunction *defragfn);
 void *kvstoreDictFetchValue(kvstore *kvs, int didx, const void *key);
 dictEntry *kvstoreDictFind(kvstore *kvs, int didx, void *key);
 dictEntry *kvstoreDictAddRaw(kvstore *kvs, int didx, void *key, dictEntry **existing);
src/module.c: 11 lines changed
@@ -2,8 +2,13 @@
  * Copyright (c) 2016-Present, Redis Ltd.
  * All rights reserved.
  *
+ * Copyright (c) 2024-present, Valkey contributors.
+ * All rights reserved.
+ *
  * Licensed under your choice of the Redis Source Available License 2.0
  * (RSALv2) or the Server Side Public License v1 (SSPLv1).
+ *
+ * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.
  */

 /* --------------------------------------------------------------------------
@@ -13782,7 +13787,7 @@ const char *RM_GetCurrentCommandName(RedisModuleCtx *ctx) {
  * defrag callback.
  */
 struct RedisModuleDefragCtx {
-    long long int endtime;
+    monotime endtime;
     unsigned long *cursor;
     struct redisObject *key; /* Optional name of key processed, NULL when unknown. */
     int dbid;                /* The dbid of the key being processed, -1 when unknown. */
@@ -13821,7 +13826,7 @@ int RM_RegisterDefragCallbacks(RedisModuleCtx *ctx, RedisModuleDefragFunc start,
  * so it generally makes sense to do small batches of work in between calls.
  */
 int RM_DefragShouldStop(RedisModuleDefragCtx *ctx) {
-    return (ctx->endtime != 0 && ctx->endtime < ustime());
+    return (ctx->endtime != 0 && ctx->endtime <= getMonotonicUs());
 }

 /* Store an arbitrary cursor value for future re-use.
@@ -13929,7 +13934,7 @@ RedisModuleString *RM_DefragRedisModuleString(RedisModuleDefragCtx *ctx, RedisMo
  * Returns a zero value (and initializes the cursor) if no more needs to be done,
  * or a non-zero value otherwise.
  */
-int moduleLateDefrag(robj *key, robj *value, unsigned long *cursor, long long endtime, int dbid) {
+int moduleLateDefrag(robj *key, robj *value, unsigned long *cursor, monotime endtime, int dbid) {
     moduleValue *mv = value->ptr;
     moduleType *mt = mv->type;
src/server.c: 22 lines changed
@@ -1637,25 +1637,7 @@ void whileBlockedCron(void) {
     mstime_t latency;
     latencyStartMonitor(latency);

-    /* In some cases we may be called with big intervals, so we may need to do
-     * extra work here. This is because some of the functions in serverCron rely
-     * on the fact that it is performed every 10 ms or so. For instance, if
-     * activeDefragCycle needs to utilize 25% cpu, it will utilize 2.5ms, so we
-     * need to call it multiple times. */
-    long hz_ms = 1000/server.hz;
-    while (server.blocked_last_cron < server.mstime) {
-
-        /* Defrag keys gradually. */
-        activeDefragCycle();
-
-        server.blocked_last_cron += hz_ms;
-
-        /* Increment cronloop so that run_with_period works. */
-        server.cronloops++;
-    }
-
-    /* Other cron jobs do not need to be done in a loop. No need to check
-     * server.blocked_last_cron since we have an early exit at the top. */
+    defragWhileBlocked();

     /* Update memory stats during loading (excluding blocked scripts) */
     if (server.loading) cronUpdateMemoryStats();
@@ -2758,8 +2740,6 @@ void initServer(void) {
         server.db[j].watched_keys = dictCreate(&keylistDictType);
         server.db[j].id = j;
         server.db[j].avg_ttl = 0;
-        server.db[j].defrag_later = listCreate();
-        listSetFreeMethod(server.db[j].defrag_later, sdsfreegeneric);
     }
     evictionPoolAlloc(); /* Initialize the LRU keys pool. */
     /* Note that server.pubsub_channels was chosen to be a kvstore (with only one dict, which
@@ -1051,7 +1051,6 @@ typedef struct redisDb {
     int id;                       /* Database ID */
     long long avg_ttl;            /* Average TTL, just for stats */
     unsigned long expires_cursor; /* Cursor of the active expire cycle. */
-    list *defrag_later;           /* List of key names to attempt to defrag one by one, gradually. */
 } redisDb;

 /* forward declaration for functions ctx */
@@ -2675,7 +2674,7 @@ size_t moduleGetFreeEffort(robj *key, robj *val, int dbid);
 size_t moduleGetMemUsage(robj *key, robj *val, size_t sample_size, int dbid);
 robj *moduleTypeDupOrReply(client *c, robj *fromkey, robj *tokey, int todb, robj *value);
 int moduleDefragValue(robj *key, robj *obj, int dbid);
-int moduleLateDefrag(robj *key, robj *value, unsigned long *cursor, long long endtime, int dbid);
+int moduleLateDefrag(robj *key, robj *value, unsigned long *cursor, monotime endtime, int dbid);
 void moduleDefragGlobals(void);
 void moduleDefragStart(void);
 void moduleDefragEnd(void);
@@ -3269,6 +3268,7 @@ void enterExecutionUnit(int update_cached_time, long long us);
 void exitExecutionUnit(void);
 void resetServerStats(void);
 void activeDefragCycle(void);
+void defragWhileBlocked(void);
 unsigned int getLRUClock(void);
 unsigned int LRU_CLOCK(void);
 const char *evictPolicyToString(void);
@@ -1,3 +1,16 @@
+#
+# Copyright (c) 2009-Present, Redis Ltd.
+# All rights reserved.
+#
+# Copyright (c) 2024-present, Valkey contributors.
+# All rights reserved.
+#
+# Licensed under your choice of the Redis Source Available License 2.0
+# (RSALv2) or the Server Side Public License v1 (SSPLv1).
+#
+# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.
+#
+
 proc test_memory_efficiency {range} {
     r flushall
     set rd [redis_deferring_client]
@@ -37,15 +50,19 @@ start_server {tags {"memefficiency external:skip"}} {
 }

 run_solo {defrag} {
-    proc wait_for_defrag_stop {maxtries delay} {
+    proc wait_for_defrag_stop {maxtries delay {expect_frag 0}} {
         wait_for_condition $maxtries $delay {
-            [s active_defrag_running] eq 0
+            [s active_defrag_running] eq 0 && ($expect_frag == 0 || [s allocator_frag_ratio] <= $expect_frag)
         } else {
             after 120 ;# serverCron only updates the info once in 100ms
             puts [r info memory]
             puts [r info stats]
             puts [r memory malloc-stats]
-            fail "defrag didn't stop."
+            if {$expect_frag != 0} {
+                fail "defrag didn't stop or failed to achieve expected frag ratio ([s allocator_frag_ratio] > $expect_frag)"
+            } else {
+                fail "defrag didn't stop."
+            }
         }
     }
@@ -102,7 +119,7 @@ run_solo {defrag} {
     r config set active-defrag-cycle-max 75

     # Wait for the active defrag to stop working.
-    wait_for_defrag_stop 2000 100
+    wait_for_defrag_stop 2000 100 1.1

     # Test the fragmentation is lower.
     after 120 ;# serverCron only updates the info once in 100ms
@@ -124,7 +141,6 @@ run_solo {defrag} {
         puts [r latency latest]
         puts [r latency history active-defrag-cycle]
     }
-    assert {$frag < 1.1}
     # due to high fragmentation, 100hz, and active-defrag-cycle-max set to 75,
     # we expect max latency to be not much higher than 7.5ms but due to rare slowness threshold is set higher
     if {!$::no_latency} {
@@ -142,6 +158,11 @@ run_solo {defrag} {
     # reset stats and load the AOF file
     r config resetstat
     r config set key-load-delay -25 ;# sleep on average 1/25 usec
+    # Note: This test is checking if defrag is working DURING AOF loading (while
+    # timers are not active). So we don't give any extra time, and we deactivate
+    # defrag immediately after the AOF loading is complete. During loading,
+    # defrag will get invoked less often, causing starvation prevention. We
+    # should expect longer latency measurements.
     r debug loadaof
     r config set activedefrag no
     # measure hits and misses right after aof loading
@@ -246,7 +267,7 @@ run_solo {defrag} {
     }

     # wait for the active defrag to stop working
-    wait_for_defrag_stop 500 100
+    wait_for_defrag_stop 500 100 1.05

     # test the fragmentation is lower
     after 120 ;# serverCron only updates the info once in 100ms
@@ -256,7 +277,6 @@ run_solo {defrag} {
         puts "frag [s allocator_frag_ratio]"
         puts "frag_bytes [s allocator_frag_bytes]"
     }
-    assert_lessthan_equal [s allocator_frag_ratio] 1.05
     }
     # Flush all script to make sure we don't crash after defragging them
     r script flush sync
@@ -362,7 +382,7 @@ run_solo {defrag} {
     }

     # wait for the active defrag to stop working
-    wait_for_defrag_stop 500 100
+    wait_for_defrag_stop 500 100 1.1

     # test the fragmentation is lower
     after 120 ;# serverCron only updates the info once in 100ms
@@ -384,7 +404,6 @@ run_solo {defrag} {
         puts [r latency latest]
         puts [r latency history active-defrag-cycle]
     }
-    assert {$frag < 1.1}
     # due to high fragmentation, 100hz, and active-defrag-cycle-max set to 75,
     # we expect max latency to be not much higher than 7.5ms but due to rare slowness threshold is set higher
     if {!$::no_latency} {
@@ -464,7 +483,7 @@ run_solo {defrag} {
     }

     # wait for the active defrag to stop working
-    wait_for_defrag_stop 500 100
+    wait_for_defrag_stop 500 100 1.05

     # test the fragmentation is lower
     after 120 ;# serverCron only updates the info once in 100ms
@@ -474,7 +493,6 @@ run_solo {defrag} {
         puts "frag [s allocator_frag_ratio]"
         puts "frag_bytes [s allocator_frag_bytes]"
     }
-    assert_lessthan_equal [s allocator_frag_ratio] 1.05
     }

     # Publishes some message to all the pubsub clients to make sure that
@@ -572,7 +590,7 @@ run_solo {defrag} {
     }

     # wait for the active defrag to stop working
-    wait_for_defrag_stop 500 100
+    wait_for_defrag_stop 500 100 1.5

     # test the fragmentation is lower
     after 120 ;# serverCron only updates the info once in 100ms
@@ -582,7 +600,6 @@ run_solo {defrag} {
         puts "frag [s allocator_frag_ratio]"
         puts "frag_bytes [s allocator_frag_bytes]"
     }
-    assert_lessthan_equal [s allocator_frag_ratio] 1.5
     }
 }
@@ -682,7 +699,13 @@ run_solo {defrag} {
     }

     # wait for the active defrag to stop working
-    wait_for_defrag_stop 500 100
+    if {$io_threads == 1} {
+        wait_for_defrag_stop 500 100 1.05
+    } else {
+        # TODO: When multithreading is enabled, argv may be created in the io thread
+        # and kept in the main thread, which can cause fragmentation to become worse.
+        wait_for_defrag_stop 500 100 1.1
+    }

     # test the fragmentation is lower
     after 120 ;# serverCron only updates the info once in 100ms
@@ -692,14 +715,6 @@ run_solo {defrag} {
         puts "frag [s allocator_frag_ratio]"
         puts "frag_bytes [s allocator_frag_bytes]"
     }
-
-    if {$io_threads == 1} {
-        assert_lessthan_equal [s allocator_frag_ratio] 1.05
-    } else {
-        # TODO: When multithreading is enabled, argv may be created in the io thread
-        # and kept in the main thread, which can cause fragmentation to become worse.
-        assert_lessthan_equal [s allocator_frag_ratio] 1.1
-    }
     }
 }
@@ -763,7 +778,7 @@ run_solo {defrag} {
     }

     # wait for the active defrag to stop working
-    wait_for_defrag_stop 500 100
+    wait_for_defrag_stop 500 100 1.1

     # test the fragmentation is lower
     after 120 ;# serverCron only updates the info once in 100ms
@@ -789,7 +804,6 @@ run_solo {defrag} {
         puts [r latency history active-defrag-cycle]
         puts [r memory malloc-stats]
     }
-    assert {$frag < 1.1}
     # due to high fragmentation, 100hz, and active-defrag-cycle-max set to 75,
     # we expect max latency to be not much higher than 7.5ms but due to rare slowness threshold is set higher
     if {!$::no_latency} {
@@ -884,7 +898,7 @@ run_solo {defrag} {
     }

     # wait for the active defrag to stop working
-    wait_for_defrag_stop 500 100
+    wait_for_defrag_stop 500 100 1.1

     # test the fragmentation is lower
     after 120 ;# serverCron only updates the info once in 100ms
@@ -896,7 +910,6 @@ run_solo {defrag} {
         puts "hits: $hits"
         puts "misses: $misses"
     }
-    assert {$frag < 1.1}
     assert {$misses < 10000000} ;# when defrag doesn't stop, we have some 30m misses, when it does, we have 2m misses
 }