
DotNetify with Multiple Web Servers #181

Closed
kinetiq opened this issue Feb 18, 2019 · 9 comments

Comments

@kinetiq

kinetiq commented Feb 18, 2019

I noticed in the getting started docs that you use .AddMemoryCache, which is a non-distributed memory cache. That pinged me a little.

How does this work when you have multiple web servers? Is there an option for using IDistributedCache, so that I can use my redis cache, etc?
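For reference, the registration in question looks roughly like this (a sketch of a typical ASP.NET Core `Startup`, not the exact getting-started docs):

```csharp
// Sketch of a typical dotNetify server setup (ASP.NET Core Startup.cs).
// AddMemoryCache registers IMemoryCache, which is strictly per-process:
// each web server keeps its own, unshared copy of the cached entries.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMemoryCache(); // non-distributed, in-process cache
    services.AddSignalR();
    services.AddDotNetify();
}
```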

Thanks, this looks like a great project.

@dsuryd
Owner

dsuryd commented Feb 19, 2019

There's no option to use IDistributedCache, unless you inject a custom IMemoryCache implementation that uses Redis. However, the cache is used to hold instances of connected view models; it won't be a problem with websocket connections. As with non-websocket connections, sticky sessions will be required.
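A compile-ready skeleton of that injection idea (the class name and Redis wiring are assumptions, not an existing dotNetify feature):

```csharp
// Hypothetical skeleton: replace the default IMemoryCache with a Redis-backed
// implementation. Note that IMemoryCache hands out live object references,
// so a faithful Redis version would still have to serialize/deserialize,
// which is exactly what makes this awkward for live view models.
public class RedisBackedMemoryCache : IMemoryCache
{
    public bool TryGetValue(object key, out object value)
        => throw new NotImplementedException(); // read + deserialize from Redis

    public ICacheEntry CreateEntry(object key)
        => throw new NotImplementedException(); // write-through entry to Redis

    public void Remove(object key)
        => throw new NotImplementedException(); // delete the Redis key

    public void Dispose() { }
}

// Register it before dotNetify resolves the default:
// services.AddSingleton<IMemoryCache, RedisBackedMemoryCache>();
```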

@kinetiq
Author

kinetiq commented Feb 19, 2019

Thanks for the reply. Ok, good to know... So to be clear, if the web server crashes for a few seconds, all users are likely to lose their models, right?

Assuming I am correct about that, is there any way to harden or recover from that? For instance, using some sort of cloud architecture to hold the websocket connection and (I suppose you already answered this) a more resilient type of cache would also be required for keeping the view models alive.

Or a recovery strategy would work.

@tw-bert

tw-bert commented Feb 19, 2019

Perhaps Envoy could come to the rescue. It does support WebSocket H/A. Not affiliated with the product, but we are testing it with other traffic and are considering using Envoy for websockets as well.

@kinetiq
Author

kinetiq commented Feb 19, 2019

Envoy does look interesting, @tw-bert!

One thing that I think I may have wrong in my understanding of dotnetify...What is this memory cache actually doing? These view models are not simply serialized and cached until needed, because in one of the examples, the view model is running a timer.

Thank you for indulging me here guys. I suspect this conversation will be useful to people however, since many enterprise devs will wonder about this right off the bat like I did.

@dsuryd
Owner

dsuryd commented Feb 19, 2019

First of all, thank you for wanting to do a deeper dive on the architecture. It's always good to have the design assumptions tested so further improvements can be identified and planned.

What is this memory cache actually doing?

The cache keeps active view model objects, i.e. when a user connects, a VM controller for that connection is created and kept in the cache. The controller is responsible for creating and updating view model objects (which it keeps in a dictionary) while the views are active on the user's browser.

Speaking of resiliency: if the application takes care to keep VM objects stateless, recovery from a lost connection can just use a fresh instance. The problem is more on the client side (how to do graceful recovery), and that is the subject of the still-open issue #77.
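A small illustration of what "stateless VM" means here: all durable state lives in an injected service, so a fresh instance after reconnection serves the same data (`ICounterRepository` is an assumed app-level abstraction, not part of dotNetify):

```csharp
// Hypothetical stateless view model: the VM object itself holds no state,
// so losing the cached instance only costs re-creating it on reconnect.
public class CounterVM : BaseVM
{
    private readonly ICounterRepository _repo;

    public CounterVM(ICounterRepository repo) => _repo = repo;

    public int Count
    {
        get => _repo.GetCount();
        set
        {
            _repo.SetCount(value);
            Changed(nameof(Count)); // dotNetify change notification
        }
    }
}
```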

@kinetiq
Author

kinetiq commented Feb 19, 2019

@dsuryd Okay, this is great. Maybe you can confirm something for me: I mistakenly thought items going into IMemoryCache are serialized and therefore out of play, when in fact MemoryCache keeps cached items referenced and fully alive, which explains why the timers in the demo can work.

So the reason IDistributedCache cannot work here is because you want or need this alive behavior, and IDistributedCache really does serialize everything down to byte[], so that would break things.

Do I have that right?
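For context, the byte-oriented contract is visible in the interface itself; these are the core synchronous members of `Microsoft.Extensions.Caching.Distributed.IDistributedCache` (async variants omitted):

```csharp
// Everything goes in and out as byte[], so cached objects cannot stay
// "alive" (running timers, holding references) the way IMemoryCache
// entries do.
public interface IDistributedCache
{
    byte[] Get(string key);
    void Set(string key, byte[] value, DistributedCacheEntryOptions options);
    void Refresh(string key);
    void Remove(string key);
}
```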

On the subject of resiliency, I will pick that up over in #77, although I do have one thought: I obviously don't understand your architecture well enough to grok all the decisions you've made, but if stateless VMs are important for connection recovery and might also allow IDistributedCache (which would mean that using dotNetify has little or even zero impact on architectural/networking decisions), it makes me wonder if stateless VMs should just be required. I'm sure you have many reasons not to do that, and that it would be a major change (I suspect some of your interesting multiplexing features depend on it).

Thanks again for taking the time. I did check out the code for VMControllerFactory and VMController. Nice clean code.

@dsuryd
Owner

dsuryd commented Feb 20, 2019

So the reason IDistributedCache cannot work here is because you want or need this alive behavior, and IDistributedCache really does serialize everything down to byte[], so that would break things.

Do I have that right?

Yes, if it's used directly. One option is to have a custom implementation that uses IDistributedCache to store the current state as bytes, and deserialize it into a new object on demand. Expensive, but resilient.
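A rough sketch of that serialize-on-write, deserialize-on-read idea (the wrapper class here is an assumption; dotNetify has no such built-in):

```csharp
// Hypothetical store that persists VM state as JSON bytes in IDistributedCache
// and rehydrates a fresh instance on demand. Expensive per request, but the
// state survives a server restart and is visible to every web server.
public class DistributedVMStore<T> where T : class, new()
{
    private readonly IDistributedCache _cache;

    public DistributedVMStore(IDistributedCache cache) => _cache = cache;

    public Task SaveAsync(string key, T vmState) =>
        _cache.SetAsync(key, JsonSerializer.SerializeToUtf8Bytes(vmState));

    public async Task<T> LoadAsync(string key)
    {
        var bytes = await _cache.GetAsync(key);
        return bytes == null ? new T() : JsonSerializer.Deserialize<T>(bytes);
    }
}
```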

...it does make me wonder if stateless VMs should just be required.

I feel it is not the framework's place to impose such a restriction, as it's not an architectural constraint. There's value in giving app developers the flexibility to decide.

@kinetiq
Author

kinetiq commented Apr 4, 2019

I came across DotNetifyClient here: http://dotnetify.net/core/api/dotnetclient

Using this, would it be possible to create a single specialized server that manages all your dotnetify models? Then all other servers could refer to it rather than storing in their own memory.

That could solve the problem of requiring sticky sessions. However, it would probably create some serious architectural headaches inside dotNetify.

@kinetiq
Author

kinetiq commented Apr 5, 2019

Having worked a little more with dotNetify, I can see that none of these ideas would really work. Closing this.
