Is the 'Redlock' mechanism supported in this library? #24
Yes, there are plans for implementing some patterns / recipes for Redis, and Redlock is one of them. However, I'm busy with other stuff these days, and there's no firm date for the release of these patterns. The implementation of Redlock doesn't seem very complicated, so I might try it ASAP. If I have any progress, I'll let you know :) |
Hi @wingunder Thanks a lot for your great work! In fact, I've already tried to implement RedLock this week. However, by now, I reviewed your code, and I think the The I created a new branch, i.e.
Also you're welcome to create another pull request to the Thanks again for your great work! Regards |
Hi @sewenew, Thanks a lot for looking at my PR #15, as well as your positive feedback. In the meantime, I fixed the bug that you found and added a test case for it. While testing it I found a little quirk in the tests, which I also fixed. I also decided that your idea of unlocking everything in the destructor is a good idea, and implemented it, together with a test case. I haven't made
I had a quick browse over your redlock implementation. I saw that you are using the Redis

BTW, I stumbled across the node-redlock implementation. They also use some kind of lock extension, so someone had the idea before I did :) The node-redlock implementation also does some funky timing stuff, so it might be an implementation to look at.

As for the cluster stuff: the way I understand it, a key (the one that you're using to lock with) gets sharded to a specific node in the cluster, so you'll always be directed to that node if you operate on the key. If that is so, you won't need to lock all nodes of the cluster; you simply view the cluster as a single instance in this case. Or am I wrong? I'll have a closer look at your implementation tomorrow and give you more feedback.

Finally, I think it would be more efficient if we could find some common ground between our implementations. How about the following:
How does this sound? |
@wingunder wrote:
From my benchmarks, the times are as follows:
So, at the moment I'm not sure if making |
Hi @wingunder Thanks for your detailed replies, and your review 👍 I've replied to your comments on my implementation, please take a look. I'll try to answer your question in as much detail as I can.
When I say RedLock with Redis Cluster, I don't mean locking on a single node of a Redis Cluster. Instead, I was trying to implement the distributed version of the RedLock algorithm. If you just want to lock on a single master of a Redis Cluster, you don't need another class, e.g.
So you don't need to make your When writing this comment, I read the doc of the distributed version of the RedLock algorithm again, and I think I misunderstood the algorithm. It's NOT an algorithm working with Redis Cluster. Instead, it's an algorithm working with N independent Redis masters. How stupid of me!!! I'll try this tomorrow. I'm sorry, but I'm too busy with my job...
Yes, I didn't use a script; instead, I used a Redis transaction to do the work, because I thought it might not be a good idea to load a script from the client side. Instead, Lua scripts should be loaded by the Redis administrator. Some administrators might disable the EVAL command, because an evil client could load a slow script into Redis and block it for a long time. However, as you mentioned, since most implementations use a script as the solution, it might not be a bad idea to load it from the client side. I'll reconsider it, and maybe give the user an option to choose how to do the lock, so that s/he can still use RedLock on a Redis server whose EVAL command has been disabled.
I think you misunderstand the meaning of the return value. It's NOT the time used for acquiring the lock. Instead, it's the time still left for the acquired lock, i.e. the validity time. For instance, a user wants to lock for 100 milliseconds with This might cause problems, especially in Cluster mode. Imagine there are 5 master nodes, and we need to lock at least 3 nodes for 100 milliseconds. If each lock operation costs 20 milliseconds, there will be only 100 - 20 * 3 = 40 milliseconds left. Please check step 4 of the RedLock algorithm for details.
Thanks for your suggestion! I'll take a closer look.
It's a good idea! And I'd like to merge your test code :) But before that, we need to nail down the interface. By now, I still prefer my version of the RedLock interface. It has an interface similar to STL's Regards |
It's a little late. I'll review your changes tomorrow. Sorry for that... |
Hi @wingunder I committed the distributed version of RedLock, and refactored the single node version. So that
I also updated CMakeLists.txt, and you can fetch the latest code of the recipes branch to have a try. It seems that @antirez's Ruby version does some retry and drift stuff. Maybe we can add that to the C++ version. Regards |
Hi @sewenew, Thanks for your work on the I gave up on my original PR #32, as it just seemed better to branch from your I was almost finished with the I took the liberty to create the The test cases are updated, and cover close to everything in
The results are printed on All of your suggestions in your review flowed into #33, except for raw string literal, I would appreciate it if you could help me. I looked at your Regards |
Hi @wingunder Thanks for your great work, and the great comments, of course! I'm so sorry for the late reply... I've replied to all your questions, and to the new issue you created. I'll review your changes for PR #33 tomorrow ASAP. And I think you can close PR #32 now. Sorry again for the late reply... Regards |
Hi @sewenew, |
Hi @wingunder I created a new commit: 10ca2b2, which includes the following changes:
Since these changes also modify your code, I'd like you to do a review, and confirm these changes. Thanks in advance! Regards |
Hi @sewenew, |
Hi @wingunder There's no hurry :) Also this commit adds a Regards |
Hi @wingunder Thanks for your comments! I created a new commit to address them. You can take a look.
Yes, of course. I'll add some tests for it. Regards |
Hi @sewenew,
This looks OK to me. One thing that I did notice however, is that throughout the redis++ lib, use is being made of eg.
This should probably be something for a new PR, if relevant. |
Hi @wingunder I'm glad you noticed the use of two kinds of clocks :)
However, there are some Redis commands that take a UNIX timestamp (a system time point) as a parameter, e.g. EXPIREAT and PEXPIREAT. So I think the corresponding In the case of Hope this can answer your question :) Regards |
Hi @sewenew, Thanks for the explanation. Here is some stuff that I picked up while reviewing your last 2 commits (10ca2b2 and 10c333d): From: 10ca2b2#r35934763
If I understand it correctly, a However, A more serious problem is in redis-plus-plus/src/sw/redis++/recipes/redlock.h, lines 283 to 290 (as of 10c333d):
It should actually look as follows:

```cpp
std::chrono::milliseconds extend_lock(const std::string &random_string,
                                      const std::chrono::milliseconds &ttl)
{
    const RedLockMutexVessel::LockInfo lock_info =
        {true, std::chrono::steady_clock::now(), ttl, _resource, random_string};
    const auto result = _redlock_mutex.extend_lock(lock_info, ttl);
    if (!result.locked) {
        return std::chrono::milliseconds(-1);
    }
    else {
        return result.time_remaining;
    }
}
```

The patch will look as follows:

```diff
diff --git a/src/sw/redis++/recipes/redlock.h b/src/sw/redis++/recipes/redlock.h
index 53a88f5..c2ccc35 100644
--- a/src/sw/redis++/recipes/redlock.h
+++ b/src/sw/redis++/recipes/redlock.h
@@ -286,7 +286,12 @@ public:
         const RedLockMutexVessel::LockInfo lock_info =
             {true, std::chrono::steady_clock::now(), ttl, _resource, random_string};
         const auto result = _redlock_mutex.extend_lock(lock_info, ttl);
-        return result.time_remaining;
+        if (!result.locked) {
+            return std::chrono::milliseconds(0);
+        }
+        else {
+            return result.time_remaining;
+        }
     }

     std::chrono::milliseconds extend_lock(const std::string &random_string,
```

If you want, I can make a PR for this. Regards |
Hi @wingunder Feel free to create a PR to fix it :) Please create the PR based on recipes-dev branch. When it's stable, I'll merge it into recipes branch.
I think either In fact, the best solution might be throwing an exception. Because this failure is different from the run-out-of-time error. How about keeping it returning Regards |
Hi @sewenew, |
Hi @wingunder I'm so sorry for the late reply. Too busy these days... I've merged your commit. Thanks again for your great work! Regards |
Hi @wingunder I agree with you. Let's keep it open :) Regards |
Hi @sewenew, I would like to send you some PRs for the recipes branch, but I saw it's a bit behind master. Would it be possible for you to merge master into recipes?
Thanks & regards |
Hi @wingunder No problem! I've merged master into the recipes branch. Thanks for your reminder! I'll do the merge whenever the master branch has any changes in the future. Regards |
hi, is there any progress? |
@inaryart I still haven't received any feedback on the API of this Redlock implementation. However, I'll merge the master branch into the recipes branch from time to time. If you run into any problem with it or have any feedback on it, feel free to let me know. Regards |
Hi @sewenew,
You probably meant that you'll merge the At the moment the state of the
Regards |
@wingunder Yes, I'll merge the master branch into recipes branch. I might do a merge this weekend. Regards |
Hi all, the master branch has been merged into the recipes branch. If you have any feedback on the Redlock API, feel free to let me know:) Regards |
Hello, will this be merged into master soon? |
@sewenew by the way, can we use Redlock with a RedisCluster ? |
@RobinLanglois Still haven't received any feedback. However, since there are some use cases for RedLock, I'll try to clean up the code and merge it into master in May.
No. The RedLock algorithm works with standalone Redis deployments, NOT Redis Cluster. Regards |
@sewenew oh really ? do you have any ideas of alternatives ? Thank you, regards |
Well, I don't understand, because according to the Redis website, Redlock is designed for Redis clusters (without replication). |
@RobinLanglois No. You might have some misunderstanding. From the doc you mentioned:
These nodes are independent masters, which have nothing to do with Redis Cluster. Regards |
Oh okay, thanks for explanation |
Hi all, the Redlock implementation has been merged into the master branch. I also added a high-level API to make it work like a So sorry for the really long delay... Special thanks go to @wingunder! Thanks a lot! Regards |
I am aware that a transaction mechanism is available and that it works with both clustered and non-clustered Redis instances. However, as far as I understand it, the transaction mechanism can't be used to synchronize over Redis instances, as it would need to have a DLM (Distributed Lock Manager) for doing this. 'Redlock' is a specification for such a DLM.
According to the Distributed locks with Redis page, a C++ implementation of 'Redlock' exists here. However, it seems to lack cluster support, as it uses hiredis, as back-end.
A 'Redlock' implementation would be very handy within, as well as a nice supplement to, this library.
Are there any plans for implementing this?