Replace "redis" with "valkey" test code #287

Merged 5 commits on Apr 18, 2024.
Changes from 4 commits
6 changes: 3 additions & 3 deletions src/cli_common.c
@@ -318,8 +318,8 @@ void parseRedisUri(const char *uri, const char* tool_name, cliConnInfo *connInfo
     UNUSED(tls_flag);
 #endif

-    const char *scheme = "redis://";
-    const char *tlsscheme = "rediss://";
+    const char *scheme = "valkey://";
+    const char *tlsscheme = "valkeys://";
     const char *curr = uri;
Contributor:

This is not test code.

This is a breaking change. It is #198. See the discussions in #199.

We're not doing that. Besides, it is interfering with #199.

We have already had several PRs making the same changes, and we're reviewing them over and over.

Contributor Author:

I get your point, but the tests "benchmark: tls connecting using URI with authentication set,get" and "benchmark: connecting using URI with authentication set,get" were failing; that is why I changed it. No problem, we can wait until #199 (#198) gets merged.

Contributor:

I'm not sure #198 will get merged at all.

Either way, I think we should test the redis:// protocol to make sure we support it. We want valkey-cli 8.0 to be compatible with valkey-server 7.2.x and with redis oss.

Contributor Author:

Ohh.. do we have any macro or check defined for backward compatibility? I can either update the commits in this PR, or first commit them in a different PR (since they are going to be breaking changes) and we can focus on this PR's changes later. WDYT?

Contributor:

I think just keep redis:// and rediss:// in this PR. Only update internal names in the test.

We are sure we want to support it and still test it.

We are not yet sure if we want to add another scheme. If we do, we want to test both.

Contributor Author:

Thank you for the details. Sure, I will change the internal names in the tests and commit again!

Contributor Author:

Completed the changes and updated the tests so that they pass now. When a new scheme is added, we can change those tests together. Thanks!

     const char *end = uri + strlen(uri);
     const char *userinfo, *username, *port, *host, *path;
@@ -330,7 +330,7 @@ void parseRedisUri(const char *uri, const char* tool_name, cliConnInfo *connInfo
         *tls_flag = 1;
         curr += strlen(tlsscheme);
 #else
-        fprintf(stderr,"rediss:// is only supported when %s is compiled with OpenSSL\n", tool_name);
+        fprintf(stderr,"valkeys:// is only supported when %s is compiled with OpenSSL\n", tool_name);
         exit(1);
 #endif
     } else if (!strncasecmp(scheme, curr, strlen(scheme))) {
12 changes: 6 additions & 6 deletions tests/cluster/cluster.tcl
@@ -86,8 +86,8 @@ proc cluster_allocate_slots {n} {

 # Check that cluster nodes agree about "state", or raise an error.
 proc assert_cluster_state {state} {
-    foreach_redis_id id {
-        if {[instance_is_killed redis $id]} continue
+    foreach_valkey_id id {
+        if {[instance_is_killed valkey $id]} continue
         wait_for_condition 1000 50 {
             [CI $id cluster_state] eq $state
         } else {
@@ -99,9 +99,9 @@ proc assert_cluster_state {state} {
 # Search the first node starting from ID $first that is not
 # already configured as a slave.
 proc cluster_find_available_slave {first} {
-    foreach_redis_id id {
+    foreach_valkey_id id {
         if {$id < $first} continue
-        if {[instance_is_killed redis $id]} continue
+        if {[instance_is_killed valkey $id]} continue
         set me [get_myself $id]
         if {[dict get $me slaveof] eq {-}} {return $id}
     }
@@ -161,7 +161,7 @@ proc cluster_create_with_continuous_slots {masters slaves} {

 # Set the cluster node-timeout to all the reachalbe nodes.
 proc set_cluster_node_timeout {to} {
-    foreach_redis_id id {
+    foreach_valkey_id id {
         catch {R $id CONFIG SET cluster-node-timeout $to}
     }
 }
@@ -170,7 +170,7 @@ proc set_cluster_node_timeout {to} {
 # as a starting point to talk with the cluster.
 proc cluster_write_test {id} {
     set prefix [randstring 20 20 alpha]
-    set port [get_instance_attrib redis $id port]
+    set port [get_instance_attrib valkey $id port]
     set cluster [redis_cluster 127.0.0.1:$port]
     for {set j 0} {$j < 100} {incr j} {
         $cluster set key.$j $prefix.$j
2 changes: 1 addition & 1 deletion tests/cluster/run.tcl
@@ -12,7 +12,7 @@ set ::tlsdir "../../tls"

 proc main {} {
     parse_options
-    spawn_instance redis $::redis_base_port $::instances_count {
+    spawn_instance valkey $::valkey_base_port $::instances_count {
         "cluster-enabled yes"
         "appendonly yes"
         "enable-protected-configs yes"
4 changes: 2 additions & 2 deletions tests/cluster/tests/00-base.tcl
@@ -11,7 +11,7 @@ if {$::simulate_error} {
 test "Different nodes have different IDs" {
     set ids {}
     set numnodes 0
-    foreach_redis_id id {
+    foreach_valkey_id id {
         incr numnodes
         # Every node should just know itself.
         set nodeid [dict get [get_myself $id] id]
@@ -31,7 +31,7 @@ test "After the join, every node gets a different config epoch" {
     while {[incr trynum -1] != 0} {
         # We check that this condition is true for *all* the nodes.
         set ok 1 ; # Will be set to 0 every time a node is not ok.
-        foreach_redis_id id {
+        foreach_valkey_id id {
             set epochs {}
             foreach n [get_cluster_nodes $id] {
                 lappend epochs [dict get $n config_epoch]
8 changes: 4 additions & 4 deletions tests/cluster/tests/01-faildet.tcl
@@ -11,16 +11,16 @@ test "Cluster should start ok" {
 }

 test "Killing two slave nodes" {
-    kill_instance redis 5
-    kill_instance redis 6
+    kill_instance valkey 5
+    kill_instance valkey 6
 }

 test "Cluster should be still up" {
     assert_cluster_state ok
 }

 test "Killing one master node" {
-    kill_instance redis 0
+    kill_instance valkey 0
 }

 # Note: the only slave of instance 0 is already down so no
@@ -30,7 +30,7 @@ test "Cluster should be down now" {
 }

 test "Restarting master node" {
-    restart_instance redis 0
+    restart_instance valkey 0
 }

 test "Cluster should be up again" {
4 changes: 2 additions & 2 deletions tests/cluster/tests/02-failover.tcl
@@ -29,7 +29,7 @@ test "Instance #5 synced with the master" {
 set current_epoch [CI 1 cluster_current_epoch]

 test "Killing one master node" {
-    kill_instance redis 0
+    kill_instance valkey 0
 }

 test "Wait for failover" {
@@ -53,7 +53,7 @@ test "Instance #5 is now a master" {
 }

 test "Restarting the previously killed master node" {
-    restart_instance redis 0
+    restart_instance valkey 0
 }

 test "Instance #0 gets converted into a slave" {
10 changes: 5 additions & 5 deletions tests/cluster/tests/03-failover-loop.tcl
@@ -14,7 +14,7 @@ test "Cluster is up" {
 }

 set iterations 20
-set cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]
+set cluster [redis_cluster 127.0.0.1:[get_instance_attrib valkey 0 port]]

 while {[incr iterations -1]} {
     set tokill [randomInt 10]
@@ -25,7 +25,7 @@ while {[incr iterations -1]} {
     if {$role eq {master}} {
         set slave {}
         set myid [dict get [get_myself $tokill] id]
-        foreach_redis_id id {
+        foreach_valkey_id id {
             if {$id == $tokill} continue
             if {[dict get [get_myself $id] slaveof] eq $myid} {
                 set slave $id
@@ -64,7 +64,7 @@ while {[incr iterations -1]} {
     test "Terminating node #$tokill" {
         # Stop AOF so that an initial AOFRW won't prevent the instance from terminating
         R $tokill config set appendonly no
-        kill_instance redis $tokill
+        kill_instance valkey $tokill
     }

     if {$role eq {master}} {
@@ -89,7 +89,7 @@ while {[incr iterations -1]} {
     }

     test "Restarting node #$tokill" {
-        restart_instance redis $tokill
+        restart_instance valkey $tokill
     }

     test "Instance #$tokill is now a slave" {
@@ -111,7 +111,7 @@ while {[incr iterations -1]} {
 }

 test "Post condition: current_epoch >= my_epoch everywhere" {
-    foreach_redis_id id {
+    foreach_valkey_id id {
         assert {[CI $id cluster_current_epoch] >= [CI $id cluster_my_epoch]}
     }
 }
26 changes: 13 additions & 13 deletions tests/cluster/tests/04-resharding.tcl
@@ -15,14 +15,14 @@ test "Cluster is up" {
 }

 test "Enable AOF in all the instances" {
-    foreach_redis_id id {
+    foreach_valkey_id id {
         R $id config set appendonly yes
         # We use "appendfsync no" because it's fast but also guarantees that
         # write(2) is performed before replying to client.
         R $id config set appendfsync no
     }

-    foreach_redis_id id {
+    foreach_valkey_id id {
         wait_for_condition 1000 500 {
             [RI $id aof_rewrite_in_progress] == 0 &&
             [RI $id aof_enabled] == 1
@@ -54,11 +54,11 @@ proc process_is_running {pid} {

 set numkeys 50000
 set numops 200000
-set start_node_port [get_instance_attrib redis 0 port]
+set start_node_port [get_instance_attrib valkey 0 port]
 set cluster [redis_cluster 127.0.0.1:$start_node_port]
 if {$::tls} {
     # setup a non-TLS cluster client to the TLS cluster
-    set plaintext_port [get_instance_attrib redis 0 plaintext-port]
+    set plaintext_port [get_instance_attrib valkey 0 plaintext-port]
     set cluster_plaintext [redis_cluster 127.0.0.1:$plaintext_port 0]
     puts "Testing TLS cluster on start node 127.0.0.1:$start_node_port, plaintext port $plaintext_port"
 } else {
@@ -85,7 +85,7 @@ test "Cluster consistency during live resharding" {
     set target [dict get [get_myself [randomInt 5]] id]
     set tribpid [lindex [exec \
         ../../../src/valkey-cli --cluster reshard \
-        127.0.0.1:[get_instance_attrib redis 0 port] \
+        127.0.0.1:[get_instance_attrib valkey 0 port] \
         --cluster-from all \
         --cluster-to $target \
         --cluster-slots 100 \
@@ -138,11 +138,11 @@ test "Verify $numkeys keys for consistency with logical content" {
 }

 test "Terminate and restart all the instances" {
-    foreach_redis_id id {
+    foreach_valkey_id id {
         # Stop AOF so that an initial AOFRW won't prevent the instance from terminating
         R $id config set appendonly no
-        kill_instance redis $id
-        restart_instance redis $id
+        kill_instance valkey $id
+        restart_instance valkey $id
     }
 }

@@ -160,20 +160,20 @@ test "Verify $numkeys keys after the restart" {
 }

 test "Disable AOF in all the instances" {
-    foreach_redis_id id {
+    foreach_valkey_id id {
         R $id config set appendonly no
     }
 }

 test "Verify slaves consistency" {
     set verified_masters 0
-    foreach_redis_id id {
+    foreach_valkey_id id {
         set role [R $id role]
         lassign $role myrole myoffset slaves
         if {$myrole eq {slave}} continue
-        set masterport [get_instance_attrib redis $id port]
+        set masterport [get_instance_attrib valkey $id port]
         set masterdigest [R $id debug digest]
-        foreach_redis_id sid {
+        foreach_valkey_id sid {
             set srole [R $sid role]
             if {[lindex $srole 0] eq {master}} continue
             if {[lindex $srole 2] != $masterport} continue
@@ -190,7 +190,7 @@ test "Verify slaves consistency" {

 test "Dump sanitization was skipped for migrations" {
     set verified_masters 0
-    foreach_redis_id id {
+    foreach_valkey_id id {
         assert {[RI $id dump_payload_sanitizations] == 0}
     }
 }
12 changes: 6 additions & 6 deletions tests/cluster/tests/05-slave-selection.tcl
@@ -34,7 +34,7 @@ test "CLUSTER SLAVES and CLUSTER REPLICAS output is consistent" {
 }

 test {Slaves of #0 are instance #5 and #10 as expected} {
-    set port0 [get_instance_attrib redis 0 port]
+    set port0 [get_instance_attrib valkey 0 port]
     assert {[lindex [R 5 role] 2] == $port0}
     assert {[lindex [R 10 role] 2] == $port0}
 }
@@ -48,7 +48,7 @@ test "Instance #5 and #10 synced with the master" {
     }
 }

-set cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]
+set cluster [redis_cluster 127.0.0.1:[get_instance_attrib valkey 0 port]]

 test "Slaves are both able to receive and acknowledge writes" {
     for {set j 0} {$j < 100} {incr j} {
@@ -80,7 +80,7 @@ test "Write data while slave #10 is paused and can't receive it" {
     assert {[R 10 read] eq {OK OK}}

     # Kill the master so that a reconnection will not be possible.
-    kill_instance redis 0
+    kill_instance valkey 0
 }

 test "Wait for instance #5 (and not #10) to turn into a master" {
@@ -100,7 +100,7 @@ test "Cluster should eventually be up again" {
 }

 test "Node #10 should eventually replicate node #5" {
-    set port5 [get_instance_attrib redis 5 port]
+    set port5 [get_instance_attrib valkey 5 port]
     wait_for_condition 1000 50 {
         ([lindex [R 10 role] 2] == $port5) &&
         ([lindex [R 10 role] 3] eq {connected})
@@ -130,7 +130,7 @@ test "The first master has actually 5 slaves" {
 }

 test {Slaves of #0 are instance #3, #6, #9, #12 and #15 as expected} {
-    set port0 [get_instance_attrib redis 0 port]
+    set port0 [get_instance_attrib valkey 0 port]
     assert {[lindex [R 3 role] 2] == $port0}
     assert {[lindex [R 6 role] 2] == $port0}
     assert {[lindex [R 9 role] 2] == $port0}
@@ -179,7 +179,7 @@ test "New Master down consecutively" {

     set instances [dict remove $instances $master_id]

-    kill_instance redis $master_id
+    kill_instance valkey $master_id
     wait_for_condition 1000 50 {
         [master_detected $instances]
     } else {
4 changes: 2 additions & 2 deletions tests/cluster/tests/06-slave-stop-cond.tcl
@@ -22,7 +22,7 @@ test "The first master has actually one slave" {
 }

 test {Slaves of #0 is instance #5 as expected} {
-    set port0 [get_instance_attrib redis 0 port]
+    set port0 [get_instance_attrib valkey 0 port]
     assert {[lindex [R 5 role] 2] == $port0}
 }

@@ -60,7 +60,7 @@ test "Break master-slave link and prevent further reconnections" {
     assert {[R 5 read] eq {OK OK}}

     # Kill the master so that a reconnection will not be possible.
-    kill_instance redis 0
+    kill_instance valkey 0
 }

 test "Slave #5 is reachable and alive" {
16 changes: 8 additions & 8 deletions tests/cluster/tests/07-replica-migration.tcl
@@ -15,7 +15,7 @@ test "Cluster is up" {
 }

 test "Each master should have two replicas attached" {
-    foreach_redis_id id {
+    foreach_valkey_id id {
         if {$id < 5} {
             wait_for_condition 1000 50 {
                 [llength [lindex [R $id role] 2]] == 2
@@ -27,14 +27,14 @@ test "Each master should have two replicas attached" {
 }

 test "Killing all the slaves of master #0 and #1" {
-    kill_instance redis 5
-    kill_instance redis 10
-    kill_instance redis 6
-    kill_instance redis 11
+    kill_instance valkey 5
+    kill_instance valkey 10
+    kill_instance valkey 6
+    kill_instance valkey 11
     after 4000
 }

-foreach_redis_id id {
+foreach_valkey_id id {
     if {$id < 5} {
         test "Master #$id should have at least one replica" {
             wait_for_condition 1000 50 {
@@ -62,13 +62,13 @@ test "Cluster is up" {
 }

 test "Kill slave #7 of master #2. Only slave left is #12 now" {
-    kill_instance redis 7
+    kill_instance valkey 7
 }

 set current_epoch [CI 1 cluster_current_epoch]

 test "Killing master node #2, #12 should failover" {
-    kill_instance redis 2
+    kill_instance valkey 2
 }

 test "Wait for failover" {