Initial Key Value Store implementation #132
Conversation
* Store create
* put/get

# Conflicts:
#   tests/NATS.Client.Core.Tests/RequestReplyTest.cs
In some cases in tests we send an unnecessary pull request with batch=0. The only place this could happen is in CheckPending(), so we add an additional check for pending messages there.
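A minimal sketch of that extra check, using hypothetical field and method names rather than the library's actual internals: a pull request is only issued when the computed batch is non-zero.

```csharp
// Illustrative sketch only: names and structure are assumptions,
// not the client's real implementation.
internal sealed class PullStateSketch
{
    private readonly object _pendingGate = new();
    private readonly long _maxMsgs;
    private readonly long _thresholdMsgs;
    private long _pendingMsgs;

    public PullStateSketch(long maxMsgs, long thresholdMsgs)
    {
        _maxMsgs = maxMsgs;
        _thresholdMsgs = thresholdMsgs;
    }

    public void CheckPending()
    {
        lock (_pendingGate)
        {
            var batch = _maxMsgs - _pendingMsgs;

            // Additional check: only issue a pull request when there is
            // actually something to request, never with batch = 0.
            if (batch > 0 && _pendingMsgs <= _thresholdMsgs)
            {
                Pull(batch);     // send PULL with a non-zero batch
                ResetPending();  // pending counter goes back to full
            }
        }
    }

    private void Pull(long batch) { /* issue the pull request over the wire */ }

    private void ResetPending() => _pendingMsgs = _maxMsgs;
}
```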
There is an extraneous pull request sent at the very beginning of the consume.
In rare cases we were getting two pull requests issued back-to-back: one for the initialization, then right after that an incoming message triggering another request, because the pending numbers were not reset soon enough after the initial request.

 initialization  | message loop
 ------------------------------------------
 pull()          |
                 | incoming_message()
                 | check_pending() (*)-->
 reset_pending() |
                 | pull()
                 | reset_pending()

(*) Too late. Check thinks we don't have enough pending messages, so it issues another pull request.

Setting the pending counters to their full values in the constructor, assuming a pull request will be issued right after subscription, solves the race condition where reset_pending() might happen just after the first message.

 initialization  | message loop
 ------------------------------------------
 reset_pending() |
 pull()          |
                 | incoming_message()
                 | check_pending() (*)-->
                 |

(*) Now check thinks we have enough pending messages, so it doesn't issue a pull request.
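A hedged sketch of the fix, with illustrative names only: the pending counters start at their full values in the constructor, so the very first incoming message can no longer make check_pending() think the counters are low before the initial reset happens.

```csharp
// Illustrative sketch of the race-condition fix (names are assumptions):
// pending counters are initialized to their full values, on the assumption
// that a pull request is issued immediately after subscribing.
internal sealed class ConsumeStateSketch
{
    private long _pendingMsgs;
    private long _pendingBytes;

    public ConsumeStateSketch(long maxMsgs, long maxBytes)
    {
        // Start "full": when the first incoming message triggers
        // check_pending(), it already sees enough pending messages and
        // bytes, so no second, back-to-back pull request is issued.
        _pendingMsgs = maxMsgs;
        _pendingBytes = maxBytes;
    }

    public long PendingMsgs => _pendingMsgs;
    public long PendingBytes => _pendingBytes;
}
```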
# Conflicts:
#   .github/workflows/test.yml
sandbox/Example.KeyValueStore.Watcher/Example.KeyValueStore.Watcher.csproj
…tcher.csproj

Co-authored-by: Caleb Lloyd <[email protected]>
IdleHeartbeat = _idleHbNanos,
AckWait = _ackWaitNanos,
MaxDeliver = 1,
MemStorage = true,
MemStorage may be an issue - some server configurations don't have Memory Store enabled, which will make this fail. Would be best to leave this out and default to the Stream's backing store.
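A hedged sketch of the suggestion, using an illustrative configuration type (not the client's real one) with the property names visible in the diff above: leave MemStorage unset so the consumer falls back to the stream's backing store.

```csharp
// Illustrative types only, to show the shape of the suggestion; the real
// consumer configuration type lives in the JetStream client.
internal sealed record ConsumerConfigSketch
{
    public long IdleHeartbeat { get; init; }
    public long AckWait { get; init; }
    public int MaxDeliver { get; init; }
    public bool? MemStorage { get; init; } // nullable: unset by default
}

internal static class WatcherConsumerConfigSketch
{
    // Suggested shape: MemStorage left unset so the consumer inherits the
    // stream's backing store and still works on servers where the memory
    // store is disabled (e.g. max_memory_store: 0).
    public static ConsumerConfigSketch Create(long idleHbNanos, long ackWaitNanos) =>
        new()
        {
            IdleHeartbeat = idleHbNanos,
            AckWait = ackWaitNanos,
            MaxDeliver = 1,
            // MemStorage intentionally not set.
        };
}
```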
AFAIK this is explicitly set by Go (also Rust I think) clients and in Ordered Consumer ADR I believe. I don't know the reason. @piotrpio?
Maybe MemStorage still works for consumers even if max_memory_store is set to 0. Otherwise this would always fail on NGS, and I am assuming if Go is doing it then someone is successfully using this with NGS 😄
hehe.. just tested as well with a server set to max_memory_store: 0 and the watcher still works fine.
LGTM
Initial KV implementation and internal tidy-up and fixes.
KV implementation:
Missing KV features for another PR:
Internal tidy-up and fixes:
IMemoryOwner<byte> support
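As a rough illustration of what IMemoryOwner<byte> support means for callers (the method below is hypothetical, not part of this PR's API): the receiver owns the rented buffer and is responsible for disposing it to return it to the pool.

```csharp
using System;
using System.Buffers;
using System.Text;

// Hypothetical consumer-side sketch: when a payload arrives as an
// IMemoryOwner<byte>, the caller owns the rented buffer and must dispose it.
internal static class PayloadSketch
{
    public static void Consume(IMemoryOwner<byte> payload)
    {
        using (payload) // dispose returns the buffer to its pool
        {
            // Real code would also track the payload length, since the
            // rented buffer may be larger than the data written into it.
            var text = Encoding.UTF8.GetString(payload.Memory.Span);
            Console.WriteLine(text);
        }
    }
}
```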