
Deprecate Lock and ModuleScope's public type cache API #391

Merged · 11 commits · Jun 24, 2018

Conversation

@stakx (Member) commented Jun 21, 2018

This is in response to #376 (comment):

[We] should definitely consider changing the code to use [ReaderWriterLockSlim] directly as it'll also remove all the allocations of the "disposable holder" objects.

This PR deprecates Lock and directly related types. This is achieved by replacing internal uses either with ReaderWriterLockSlim (upon which it is based), or with a new but simpler abstraction, SynchronizedDictionary<,>.

This PR also deprecates ModuleScope's public API for accessing the proxy type cache. This is necessary because that API consists of a Lock (which we're deprecating) along with a few thread-unsafe methods.

Publicly visible members marked for deprecation:

  • Castle.Core.Internal.Lock (class)
  • Castle.Core.Internal.ILockHolder (interface)
  • Castle.Core.Internal.IUpgradeableLockHolder (interface)
  • Castle.DynamicProxy.Generators.BaseProxyGenerator.AddToCache (method)
  • Castle.DynamicProxy.Generators.BaseProxyGenerator.GetFromCache (method)
  • Castle.DynamicProxy.Generators.CacheKey (class)
  • Castle.DynamicProxy.ModuleScope.Lock (property)
  • Castle.DynamicProxy.ModuleScope.GetFromCache (method)
  • Castle.DynamicProxy.ModuleScope.RegisterInCache (method)
  • Castle.DynamicProxy.Serialization.CacheMappingsAttribute.ApplyTo (method)
  • Castle.DynamicProxy.Serialization.CacheMappingsAttribute.GetDeserializedMappings (method)

Instead of Castle's own `Lock` abstraction, just use the underlying
`ReaderWriterLockSlim` directly.
Unfortunately, `ModuleScope` exposes a `Lock` for its proxy type cache
publicly. This means we cannot just remove its `Lock` completely, but
we can start replacing it with a `ReaderWriterLockSlim` internally.

A new `Lock.CreateFor(ReaderWriterLockSlim)` method is needed so that
a `Lock` can be exposed that is mapped to the `ReaderWriterLockSlim`
that is used internally.
@ghost commented Jun 21, 2018

We need a formal test harness that batters the absolute hell out of this. I think something that can exercise memory usage and call site hotspots.

I would like to run some metrics on this for before and after changes.

@stakx (Member Author) commented Jun 21, 2018

@fir3pho3nixx - One small note regarding resource usage: I noticed that the Lock-related APIs never Dispose of anything. See e.g. SingletonDispenser's use of wait handles: they are allocated but apparently never released. That doesn't seem right to me. (The fact that the ReaderWriterLockSlim instances in DynamicProxy are never disposed is fine, since they are static, i.e. they live as long as the process / AppDomain.)

@ghost commented Jun 21, 2018

See e.g. SingletonDispenser's use of wait handles, they are allocated but apparently never released

OK, this is really weird.

https://github.com/stakx/Castle.Core/blob/lock/src/Castle.Core/Components.DictionaryAdapter/Xml/Internal/Utilities/SingletonDispenser.cs#L53

TryGetExistingItem allocates a manual reset event by default

https://github.com/stakx/Castle.Core/blob/lock/src/Castle.Core/Components.DictionaryAdapter/Xml/Internal/Utilities/SingletonDispenser.cs#L87

WaitForCreate will WaitOne

https://github.com/stakx/Castle.Core/blob/lock/src/Castle.Core/Components.DictionaryAdapter/Xml/Internal/Utilities/SingletonDispenser.cs#L104

Create does the work of Set.

This is bonkers: the thing dishing out the reset events is neither disposable nor ever disposes them.

Anything we can learn from here? I have not looked at this in infinite detail.

@ghost commented Jun 21, 2018

Anything we can learn from here? I have not looked at this in infinite detail.

I am a fan of atomic locking (Interlocked), though I'm not sure how this would apply to the calling code. ManualResetEvents are slow.

@stakx (Member Author) commented Jun 21, 2018

@fir3pho3nixx - I think the dangling wait handle issue could be resolved with Lazy<TItem>(factory, LazyThreadSafetyMode.PublicationOnly). (I believe that thread-safety mode uses Interlocked instead of Monitor; the downside is that factory might be invoked more than once.) Unfortunately we don't have that available for .NET 3.5. We could implement a trimmed-down Lazy<> shim class. CoreFX could provide a starting point, if licensing allows us to reuse their implementation.

Another option I've thought about is doing what we do in other places: have a fast path that enters only a read lock; otherwise, invoke the factory (possibly outside of any lock), then enter a write lock to update the dictionary if it hasn't been updated in the meantime. Downsides here: the factory might be invoked more than once, and recursion from the factory / reentrancy would be broken.
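
A minimal sketch of that approach, assuming a static Dictionary guarded by a ReaderWriterLockSlim; the class, field, and method names here are illustrative stand-ins, not DynamicProxy's actual members:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

internal static class ReadThenWriteCacheSketch
{
	private static readonly Dictionary<string, Type> cache = new Dictionary<string, Type>();
	private static readonly ReaderWriterLockSlim cacheLock =
		new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);

	public static Type GetOrAdd(string key, Func<string, Type> factory)
	{
		Type value;

		// Fast path: most lookups succeed under a read lock only.
		cacheLock.EnterReadLock();
		try
		{
			if (cache.TryGetValue(key, out value))
			{
				return value;
			}
		}
		finally
		{
			cacheLock.ExitReadLock();
		}

		// Slow path: the factory runs outside any lock, so racing threads
		// may invoke it more than once for the same key.
		value = factory(key);

		cacheLock.EnterWriteLock();
		try
		{
			// Re-check: another thread may have published a value in the meantime.
			Type existing;
			if (cache.TryGetValue(key, out existing))
			{
				return existing;
			}

			cache[key] = value;
			return value;
		}
		finally
		{
			cacheLock.ExitWriteLock();
		}
	}
}
```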

That said, I'd like to keep this PR focused solely on obsolescing Lock (and the baggage that comes with this). Do you think it fair enough if we improved SingletonDispenser in a separate PR?

@@ -18,12 +18,11 @@ namespace Castle.Components.DictionaryAdapter.Xml
using System;
using System.Collections.Generic;
using System.Threading;
using Castle.Core.Internal;

public class SingletonDispenser<TKey, TItem>

@TimLovellSmith (Contributor) commented Jun 21, 2018

If SingletonDispenser is an internal utility, why is it public? Should it become [Obsolete] too?

Or it should be internal, like as in the csharp modifier we all know and love :)

Member:

I'd keep this pull request focused on deprecating Lock.

@ghost commented Jun 21, 2018

@stakx

I think the dangling wait handle issue could be resolved with Lazy(factory, LazyThreadSafetyMode.PublicationOnly). (I believe that thread-safety mode uses Interlocked instead of Monitor,

Correct. https://github.com/dotnet/corefx/blob/master/src/Common/src/CoreLib/System/Lazy.cs#L356

the downside is that factory might be invoked more than once

Not sure about this. Please see: https://docs.microsoft.com/en-us/dotnet/framework/performance/lazy-initialization#thread-safe-initialization

Specifying LazyThreadSafetyMode.PublicationOnly allows multiple threads to attempt to initialize the Lazy<T> instance. Only one thread can win this race, and all the other threads receive the value that was initialized by the successful thread

We could implement a trimmed down Lazy<> shim class. CoreFX could provide a starting point, if licensing allows us to reuse their implementation

Not sure we can do this but nothing is stopping us from using the framework :)
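
Both statements can be reconciled: with PublicationOnly the factory may run on several threads at once, but only one result is ever published and the losing results are simply discarded (and never disposed). A small self-contained illustration of that behaviour, not tied to Castle's code:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

internal static class PublicationOnlyDemo
{
	public static void Main()
	{
		int factoryRuns = 0;

		var lazy = new Lazy<Guid>(
			() =>
			{
				Interlocked.Increment(ref factoryRuns);
				Thread.Sleep(10); // widen the race window
				return Guid.NewGuid();
			},
			LazyThreadSafetyMode.PublicationOnly);

		var observed = new Guid[4];
		Parallel.For(0, observed.Length, i => observed[i] = lazy.Value);

		// The factory may have run more than once...
		Console.WriteLine($"factory runs: {factoryRuns}");
		// ...but every caller sees the same published value.
		Console.WriteLine($"all equal: {Array.TrueForAll(observed, v => v == observed[0])}");
	}
}
```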

That said, I'd like to keep this PR focused solely on obsolescing Lock (and the baggage that comes with this). Do you think it fair enough if we improved SingletonDispenser in a separate PR?

Yep, no worries! Just trying to validate the concerns you raised here: #391 (comment)

this.typeCacheLock.EnterWriteLock();
try
{
if (this.typeCache.TryGetValue(key, out value))

Member:

Can the guts here just call GetFromOrRegisterInCache?

Debug.Assert(key != null);
Debug.Assert(valueFactory != null);

if (this.typeCache.TryGetValue(key, out value))

Member:

No need to prefix all field access with `this.`.

@stakx (Member Author) commented Jun 22, 2018

Sorry, that's a (bad?) habit I picked up from a recent coworker. 😄 Will remove. Removed.

/// If the specified key cannot be found, the factory function is invoked while a write lock is held.
/// The function must not access this scope's type cache.
/// </remarks>
public Type GetFromOrRegisterInCacheSynchronized(CacheKey key, Func<CacheKey, Type> valueFactory)

Member:

I would have thought this method would be named GetFromOrRegisterInCache as the go-to one (like how AddCacheMappings has built-in synchronisation), and the previous one would be suffixed to indicate it is the special one, e.g. GetFromOrRegisterInCacheHoldingLock or something.

On a separate note how do users use the first one without synchronisation without using the deprecated Lock property?

Maybe GetOrAddToCache would be a simpler name.

@stakx (Member Author) commented Jun 22, 2018

@jonorossi:

On a separate note how do users use the first one without synchronisation without using the deprecated Lock property?

The question becomes moot if we decide to un-expose the proxy type cache. Assuming we're not doing that:

If they need locking, they use the other, synchronized method. If they don't need locking, then they don't need Lock in the first place.

The question here is probably, who are the users you're thinking of? Why should anyone even try to update the proxy type cache? Like you asked yourself, why is the proxy type cache, along with CacheKey, even exposed publicly?

Maybe GetOrAddToCache would be a simpler name.

I tried to stay close to what was there before the changes, but I agree.

/// Only use this method when you know for sure that a method further up in the call stack already holds
/// a write lock. The function must not access this scope's type cache.
/// </remarks>
public Type GetFromOrRegisterInCache(CacheKey key, Func<CacheKey, Type> valueFactory)

Member:

Since methods like RegisterInCache are being deprecated, are you making this public for a good reason?

I don't see why we'd want to allow users to add their own types to the cache. We've already got LoadAssemblyIntoCache for loading a DP generated assembly which is a special case.

Member Author:

I don't see why we'd want to allow users to add their own types to the cache.

Neither do I.

Since methods like RegisterInCache are being deprecated, are you making this public for a good reason?

If we agree that the proxy type cache shouldn't be exposed, I'd be more than happy to make this internal. I just didn't know the full story behind why the cache is exposed, so I didn't want to just reverse it without further discussion.

Member:

If we agree that the proxy type cache shouldn't be exposed, I'd be more than happy to make this internal. I just didn't know the full story behind why the cache is exposed, so I didn't want to just reverse it without further discussion.

The old methods will be gone in the next major version, so we'll hear from someone if they think they have a valid use case (doubt it), but let's not make new stuff public unless we actually want to.

This commit removes the last actual usage of a `Lock` inside the
library.
Before `Lock` can be obsoleted, the APIs connected with it need some
updating. This commit will replace most uses of `ModuleScope`'s
`GetFromCache` and `RegisterInCache`.

This commit introduces an unsynchronized version of the previously
added `GetOrAddToCache`. (These methods combine `GetFromCache` and
`RegisterInCache` into a single operation. Their signatures are
inspired by `ConcurrentDictionary<,>.GetOrAdd`.)

It can be used in many code locations where both the existing methods
were previously called separately. This leads to simpler code and to
the possibility of making these type cache queries/updates truly
atomic by changing a single method, if that need should arise.
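
Roughly, a call site changes shape as sketched below; the IScopeCache interface and BuildProxyType helper are illustrative stand-ins, not the PR's exact code:

```csharp
using System;

internal static class CallSiteSketch
{
	// A stand-in for the relevant slice of ModuleScope's surface.
	internal interface IScopeCache
	{
		Type GetFromCache(string key);
		void RegisterInCache(string key, Type type);
		Type GetOrAddToCache(string key, Func<string, Type> factory);
	}

	// Before: query and update were two separate steps.
	internal static Type Before(IScopeCache scope, string key)
	{
		Type type = scope.GetFromCache(key);
		if (type == null)
		{
			type = BuildProxyType(key);
			scope.RegisterInCache(key, type);
		}
		return type;
	}

	// After: a single combined call, shaped like ConcurrentDictionary<,>.GetOrAdd.
	internal static Type After(IScopeCache scope, string key)
	{
		return scope.GetOrAddToCache(key, k => BuildProxyType(k));
	}

	private static Type BuildProxyType(string key)
	{
		return typeof(object); // placeholder for the actual proxy type generation
	}
}
```
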
}

type = new DelegateTypeGenerator(method, targetType)
return scope.GetOrAddToCacheWithoutTakingLock(key, _ =>

Contributor:

I see all these calls to WithoutTakingLock and I wonder - how are you reasoning about whether you hold the lock already or not?

Member Author:

I don't. Someone else did that before me; I only refactored pre-existing code.

The general flow of execution appears to be from IProxyGenerator into IProxyBuilder into "generator" classes which, via their base class BaseProxyGenerator, take hold of a lock; then they use contributors to build a model of the proxy type; then they use "emitters" to actually generate code. Most of the calls you're seeing are in the contributors, so if all goes well, the lock should be taken at that point.

Contributor:

Maybe we should have 'WithoutTakingLock' also do some kind of assertion that the lock is really held by the current thread then? (in debug mode)

Also a different idea - if you wanted the compiler to help analyze whether these assumptions are really correct, you could do an experiment: change 'WithoutTakingLock' to require an extra parameter 'ProofLockHeld', and see what build errors you get. Then fix them (by making callers pass in the parameter, which they must get from their own caller), and repeat, until you find yourself at an ultimate caller where you are taking the lock and acquiring 'ProofLockHeld' by doing so. Might be a painfully slow experiment though!

And of course it would break all the APIs, so you can't really ship it this way, or keep it around to prove that nothing is ever broken, so it doesn't have much lasting value ;(
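
For what it's worth, the suggested debug-time assertion could look roughly like this; the class, field, and method names are hypothetical, and ReaderWriterLockSlim.IsWriteLockHeld performs the actual check:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

internal sealed class TypeCacheSketch
{
	private readonly Dictionary<string, Type> typeCache = new Dictionary<string, Type>();
	private readonly ReaderWriterLockSlim typeCacheLock =
		new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);

	public Type GetOrAddWithoutTakingLock(string key, Func<string, Type> factory)
	{
		// In debug builds, verify the caller's claim that it already holds the write lock.
		Debug.Assert(
			typeCacheLock.IsWriteLockHeld,
			"GetOrAddWithoutTakingLock requires the caller to hold the write lock.");

		Type value;
		if (!typeCache.TryGetValue(key, out value))
		{
			value = factory(key);
			typeCache[key] = value;
		}

		return value;
	}
}
```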

Member Author:

Maybe we should have 'WithoutTakingLock' also do some kind of assertion that the lock is really held by the current thread then?

Perhaps, but I see the present PR purely as a refactoring to get rid of some no-longer-needed types and abstractions. I'd like to keep it focused on that.

Changes helping us resolve #193 don't belong here, in my opinion.

Also a different idea [...]

It's a good idea as a one-time manual analysis, I might give this a try in a spare moment & separate branch! Thanks. 👍

CHANGELOG.md Outdated
- `Castle.DynamicProxy.ModuleScope.RegisterInCache` (method)
- `Castle.DynamicProxy.Generators.BaseProxyGenerator.AddToCache` (method)
- `Castle.DynamicProxy.Generators.BaseProxyGenerator.GetFromCache` (method)

Contributor:

Let's deprecate CacheKey too, since you're not supposed to use the cache directly!

@stakx (Member Author) commented Jun 22, 2018

I'm already on it. Done.

@stakx force-pushed the lock branch 2 times, most recently from 06af7e2 to 48dbd11, on June 22, 2018 at 20:58
@@ -2,6 +2,7 @@

<PropertyGroup>
<NoWarn>$(NoWarn);CS1591;CS3014;CS3003;CS3001;CS3021</NoWarn>
<NoWarn>$(NoWarn);CS0612;CS0618</NoWarn> <!-- TODO: Remove this line once `[Obsolete]` members have been dealt with. -->

Member Author:

I am not a fan of disabling [Obsolete]-related warnings CS0612 and CS0618 solution-wide, but this still seems better than sprinkling #pragma warning disables and #pragma warning restores across the whole code base. [assembly: SuppressMessage] would have been nice, but I couldn't get it to work with these warnings.

We should remove this again right before releasing Castle Core 5.0.0.
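
For comparison, the per-use-site alternative would mean wrapping every call to an [Obsolete] member like this (shown with a stand-in member, not Castle's Lock):

```csharp
using System;

internal static class ObsoleteCallSite
{
	[Obsolete("Use System.Threading.ReaderWriterLockSlim instead.")]
	public static object CreateLegacyLock()
	{
		return new object();
	}

	public static object StillNeedsTheOldApi()
	{
#pragma warning disable CS0618 // the callee is [Obsolete], but this caller knowingly still uses it
		return CreateLegacyLock();
#pragma warning restore CS0618
	}
}
```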


[Obsolete("Consider using `System.Threading.ReaderWriterLockSlim` instead of `Lock` and related types.")] // TODO: Remove this type.
[EditorBrowsable(EditorBrowsableState.Never)]

Member Author:

I've put // TODO comments next to most of the [Obsolete] attributes. They contain instructions on how to deal with each obsolete member, so that the clean-up will be as simple as possible.

using Castle.DynamicProxy.Generators;

public static class InvocationHelper
{
private static readonly Dictionary<CacheKey, MethodInfo> cache =
new Dictionary<CacheKey, MethodInfo>();

private static readonly Lock @lock = Lock.Create();
private static readonly ReaderWriterLockSlim cacheLock =
new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);

Member Author:

You often see a Dictionary<,> together with a ReaderWriterLockSlim. These could be wrapped up as an internal helper type similar to the BCL's ConcurrentDictionary<>, except we might only need a GetOrAdd method. Having such a type would mean we can replace all of the double-checked locking that's currently being used in several places with a single GetOrAdd call. (This should work in ProxyUtil, InvocationHelper, and DictionaryAdapterFactory, but probably not in ModuleScope or SingletonDispenser, due to the added synchronization requirements these two types have.)
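
One possible shape for such a helper; this is only a sketch (under the made-up name RwlsDictionary, not the SynchronizedDictionary<,> this PR adds). Unlike the earlier read-then-write sketch, it takes an upgradeable read lock, so the factory runs at most once per key:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

internal sealed class RwlsDictionary<TKey, TValue>
{
	private readonly Dictionary<TKey, TValue> items = new Dictionary<TKey, TValue>();
	private readonly ReaderWriterLockSlim itemsLock =
		new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);

	public TValue GetOrAdd(TKey key, Func<TKey, TValue> factory)
	{
		TValue value;

		// Fast path: a plain read lock, shared by any number of readers.
		itemsLock.EnterReadLock();
		try
		{
			if (items.TryGetValue(key, out value))
			{
				return value;
			}
		}
		finally
		{
			itemsLock.ExitReadLock();
		}

		// Slow path: only one thread at a time may hold the upgradeable read lock,
		// so the factory is invoked at most once per missing key. The factory must
		// not call back into this dictionary (NoRecursion).
		itemsLock.EnterUpgradeableReadLock();
		try
		{
			if (items.TryGetValue(key, out value))
			{
				return value;
			}

			value = factory(key);

			itemsLock.EnterWriteLock();
			try
			{
				items[key] = value;
			}
			finally
			{
				itemsLock.ExitWriteLock();
			}

			return value;
		}
		finally
		{
			itemsLock.ExitUpgradeableReadLock();
		}
	}
}
```
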

@@ -565,7 +657,7 @@ public void LoadAssemblyIntoCache(Assembly assembly)

if (loadedType != null)
{
RegisterInCache(mapping.Key, loadedType);
typeCache[mapping.Key] = loadedType;

Member Author:

Note the absence of any locking here. Proper locking should be added; I didn't do this yet because I wanted this PR to be mostly a refactoring.

@stakx mentioned this pull request on Jun 22, 2018
}

using (var heldLock = interfaceToMetaLock.ForReadingUpgradeable())
interfaceToMetaLock.EnterWriteLock();

@ghost commented Jun 22, 2018

@stakx Just to confirm, are we happy with a performance hit here?

@stakx (Member Author) commented Jun 22, 2018

@fir3pho3nixx - I cannot say for sure at this time. We'd have to benchmark this. This should only affect performance in the case where at least two threads are "racing" to put the same thing in the cache. (How likely is this?) In all other situations, this might even improve performance since you're only performing one lock operation instead of two, and without the small overhead of the disposable lock holders. We'd have to measure some real-world cases to see which way the scales tip.

@stakx (Member Author) commented Jun 23, 2018

@fir3pho3nixx - You know what, I think you are right. There is no real need to make this functional change now. Why "optimize" something when (a) I haven't established that this really is an optimization, and (b) when this is supposed to be a refactoring-only PR anyway?

I've reinstated the upgradeable read locks. After this PR, it should be easy to change things because the locking pattern that DynamicProxy uses will be present in only two locations (SingletonDispenser and SynchronizedDictionary<,>).

@stakx changed the title from "Deprecate Lock (Work in progress)" to "Deprecate Lock and ModuleScope's public type cache API (Work in progress)" on Jun 23, 2018
@stakx changed the title from "Deprecate Lock and ModuleScope's public type cache API (Work in progress)" to "Deprecate Lock and ModuleScope's public type cache API" on Jun 23, 2018
`ReaderWriterLockSlim` is used mostly in conjunction with regular
dictionaries. The double-checked lock pattern is used in many places
to query and update those dictionaries.

Instead of repeating this same pattern over and over again, abstract
it away with a dictionary-like type `SynchronizedDictionary<,>` that
performs the required locking pattern.

`ModuleScope`'s internal methods for accessing the type cache can now
be replaced with a simple internal property `TypeCache`.
Based on a review comment, this reinstates the upgradeable read locks
that were originally taken before entering write locks.

The intent of this PR is to deprecate `Lock`, not to make functional
changes. While forgoing the upgradeable read locks in favor of
entering write locks right away might *possibly* be an optimization,
this is a different concern and would need to be properly established
using careful benchmarking. Let's do this another time.

@stakx (Member Author) left a comment

This is now ready for review. I've tried to keep this as close to a pure refactoring as possible, so I've reverted some functional changes (e.g. removing the upgradeable read locks) that had previously sneaked in.

I'd appreciate everyone's input.

@ghost left a comment

LGTM :shipit:

Nothing is jumping out at me. Good work :)

@jonorossi (Member) left a comment

Looks great, I'm happy with the changes.
