.NET Core Http Support #16650

Closed
clrjunkie opened this issue Mar 9, 2016 · 25 comments

@clrjunkie

So I wrote some test apps in C against the LibUV and Libcurl APIs to get a feel for what’s powering Kestrel and HttpClient, and while I can say that both libraries provide a very elegant and straightforward callback API, I can’t help but raise the question:

Why do we need these in .NET Core?

Sure, LibUV makes perfect sense for NodeJS as a cross-platform abstraction layer for sockets, files, timers, thread pools, mutexes... but in .NET Core all these services are already inherently offered cross-platform by CoreFX. Simply put, what do uv_accept / uv_read_start provide that Socket.AcceptAsync / Socket.ReceiveAsync lack?

As for Libcurl, again great library for Native developers but what’s in it for Managed developers?
Furthermore, from what I see we are not using the high-performance event-driven curl_multi_socket_action interface with its timeout callback, and in any case why not just port the full .NET HttpWebRequest to use the .NET Core cross-platform async socket interface and have a consistent implementation across both Windows and *nix, down to the socket level?

There is also a potential problem if someone wants to use a third-party native library (P/Invoke) that depends on some version-specific build of one of these; I mean, how can you load multiple versions of LibUV or Libcurl in the same .NET process, and why does the FW need to take part in any potential conflict that might occur?

Lastly, I think it's very important to include an equivalent of HttpListener for both Windows and *nix in CoreFX

Because:

  1. There are probably existing applications that use the HttpListener interface that now can’t be ported to .NET Core and run on both Windows and *nix as is.
  2. Experience shows that web application frameworks (including ASP.NET) are highly opinionated. Some people need/like the “Application model”; some just want access to an HTTP interface. In my opinion it would be a mistake not to include a low-level HTTP server in CoreFX as part of the standard library, as is the case in Go and NodeJS. Kestrel is great, but it’s an ASP.NET server subsystem, not a .NET CoreFX library.
@stephentoub
Member

we are not using the high performance event driven curl_multi_socket_action

We are using the event-driven curl_multi interface, with many requests multiplexed onto the same event-handling thread. We've not seen evidence yet that switching to the socket_action APIs would yield any significant benefits, and if it did, we could. We did previously switch from just using the easy interface to instead using the multi interface, and that afforded significant benefits as we expected it would.

what’s in it for Managed developers?

It provides everything that's needed, it's available everywhere we need it, it's well-implemented and well-tuned, it's constantly being maintained and improved, it's best-of-breed, etc. Why should a ton of engineering effort be spent reimplementing something when it's already available? For example, once we had CurlHandler implemented, having it light up to support HTTP/2 was an hour or two's work... that would not have been the case if we had to implement HTTP/2 support from scratch on top of sockets. Are you proposing that on Windows we not take advantage of WinHttp and instead re-implement everything on top of sockets in managed code? For what value? And how is that different?

@davidfowl
Member

I actually wholeheartedly agree with @clrjunkie on this one. Though I would say, you can get the best of both worlds if there were 2 implementations: a managed version for maximum portability across platforms, and a native implementation that lights up where specific features are more optimal. There are other reasons to have purely managed implementations of things though.

Take the websockets example. The native implementation on Windows called into an OS component that was only available on Windows >= 8, for both the client and the server! To this day there isn't a websocket client that works on Windows 7 available on the .NET stack (there are other open source options though). Also, look at Windows Store/UWP applications: they have a completely different abstraction for websockets because it was made into a "platform component".

People porting .NET to new platforms should only have to port a few native components tied to the platform:

  • VM
  • GC
  • JIT
  • Console
  • Threading
  • File IO
  • Networking
  • (I'm sure I'm missing some)

The rest of the stack can sit on top of those native intrinsics.

It's very unfortunate that libcurl has to be ported as well, but it did save us time implementing an HTTP client, so I understand.

@stephentoub
Member

you can get the best of both worlds if there were 2 implementations, a managed version for maximum portability across platforms and a native implementation that light up if there were specific features that were more optimal

I'm not arguing against having a managed version built on top of sockets in addition to having one built on top of the platform's support, so that it can support cases where there is no platform support. But until we have a true need to invest in such a thing, I'm not sure why we would. It is a huge amount of work, it'll likely have a very long stabilization tail, it'll likely be missing features that are trivial to bring up on top of something like libcurl, it'll likely have worse performance, etc.

It's very unfortunate that libcurl has to be ported

Which platform are you concerned about supporting today where libcurl doesn't already exist for it or isn't trivially ported to it?

Networking

The other component @clrjunkie calls out is libuv, which corefx doesn't use at all, but ASP.NET does. @davidfowl, given your argument, when is Kestrel moving to be completely managed built on top of System.Net.Sockets? ;)

@leemgs
Contributor

leemgs commented Mar 9, 2016

People porting .NET to new platforms should only have to port a few native components tied to the platform:

@davidfowl, and the PAL (Platform Adaptation Layer), Unwinder, Stack Walking, ...

@davidfowl
Member

It is a huge amount of work, it'll likely have a very long stabilization tail, it'll likely be missing features that are trivial to bring up on top of something like libcurl, it'll likely have worse performance, etc.

Yep! I can understand and agree with that. Still, I would have opted for writing the managed implementation first, as it has fewer overall downsides. The one in .NET Framework is already managed, though ServicePointManager is an abomination 😄 . I see the appeal in using something that already works, plus looking at the time constraints we have it probably made sense.

Which platform are you concerned about supporting today where libcurl doesn't already exist for it or isn't trivially ported to it?

ARM devices? Android/iOS/Windows Phone?

That was just one example; another might be platforms that don't have the full capability of Windows (like missing WinHttp, etc.)

The other component @clrjunkie calls out is libuv, which corefx doesn't use at all, but ASP.NET does. @davidfowl, given your argument, when is Kestrel moving to be completely managed built on top of System.Net.Sockets? ;)

It would have been that way if it had been around when we did it. We actually refactored some of Kestrel to sit on top of any networking stack (libuv is just a byte pump). Though the existing networking APIs are not optimal 😄 and libuv is actually more portable than System.Net.

@stephentoub
Member

libuv is actually more portable than System.Net.

As is libcurl.

ARM devices? Android/iOS/Windows Phone?

It's available on ARM. It's available on Android. It's available on iOS. You don't need it for Windows, that's what the Windows-optimized platform-based implementations are for, but nevertheless... it's available on Windows.

The one in .NET Framework is already managed, though service point manager is an abomination

And lacks various features we'd need to build, e.g. HTTP/2 support. Plus, additional features we'd like to enable that are trivial to support on top of what libcurl already has but would again require a complete custom implementation if it were built manually, such as using Unix domain sockets as a proxy. And as you point out in the side-mention of service point manager, it wouldn't have just been "copy the code and it works"... there's a non-trivial porting effort there as well, nevermind that the stuff it builds on wasn't implemented yet on Unix.

Again, I'm not arguing against eventually having an implementation of HttpClientHandler that sits on top of System.Net.Sockets (and System.Net.Security, System.Security.Cryptography.X509Certificates, etc.), but at the moment there's zero need for that, and until we encounter a platform that demands it, I don't see value in spending effort on it.

@clrjunkie
Author

We've not seen evidence yet that switching to the socket_action APIs would yield any significant benefits, and if it did, we could

How did you measure? Did you compare with HttpWebRequest?

it's well-implemented and well-tuned, it's constantly being maintained and improved, it's best-of-breed, etc.

Not only do I strongly agree, but I would add that the way Daniel Stenberg supports the project is something to envy!

Nevertheless, in my opinion things are not as simple as you describe:

  1. It’s available everywhere, but I see no guarantee that updates or fixes will be delivered in a timely fashion to all O/S distributions. If you want the latest version NOW, go compile it yourself.
    How many .NET developers will do that? Are you prepared to support different versions? Furthermore, since you depend on the SHARED version you are assuming that people are reluctant to pull non-security updates (or even non-essential security ones) through apt-get for servers, which isn't necessarily true.
  2. Did you try to study the code? Not only is it well-implemented and well-tuned, it’s a WORK OF ART in C (no sarcasm whatsoever!). Now, having coded a complete working sample around socket_action with timeouts paired with LibUV, I can tell you firsthand that it’s ONE thing to use the library and ANOTHER thing to debug it!

Obviously one with much better C skills than I have can easily challenge this, but that’s not the issue; It’s my understanding that one of the goals of opening .NET is to allow the implementation to be more accessible to .NET developers so we can have more control over the API and the risks.

Do we now have to be also SUPER C PROGRAMMERS?? Because in my opinion Libcurl is written by SUPER C PROGRAMMERS.

Why should a ton of engineering effort be spent reimplementing something when it's already available?

  1. Why did you implement HttpWebRequest over sockets in the full .NET Framework?
  2. Why isn’t HttpWebRequest over sockets “compatible” for porting over to .NET Core as is?
  3. Since you have access to the source code; How hard can it be to port WinHttp Http2 implementation into .NET Core?
  4. Why is the Java 9 HTTP/2 client built on top of existing Java networking classes? "The prototype was built on NIO SocketChannels with asynchronous behavior implemented with Selectors and externally provided ExecutorServices." http://openjdk.java.net/jeps/110
  5. Why did Go implement its Http1/2 Client AND Server in Go?

Are you proposing that on Windows we not take advantage of WinHttp and instead re-implement everything on top of sockets in managed code? For what value? And how is that different?

Yes I am.

Putting aside how this might reflect upon the CLR as a platform for implementing high performance network communication:

  1. Once you start taking dependencies on different implementations you will always be restricted to the lowest common denominator. Different implementations tend to have different features. That’s a fact.
  2. Again, having the implementation written in managed code makes it MUCH more accessible to scrutiny and contributions from .NET developers than if it’s written as a low-level native implementation.
  3. Debugging: it’s one thing to debug relevant O/S syscalls, which have specific semantics, and quite another to debug a complete sub-system.
  4. Any benchmark would obviously measure two different implementations.

@clrjunkie
Author

..and let's not forget: we are talking about "APPLICATION LAYER" protocols.

@clrjunkie
Author

Unix domain sockets as a proxy.

Who needs this? What's the common scenario that it falls into? Does Java support it?

@stephentoub
Member

How did you measure? Did you compare with HttpWebRequest?

You asserted that our current implementation, which is event-driven and uses curl_multi_perform, is deficient as compared to an implementation that instead used curl_multi_socket_action. When we originally switched from just the curl_easy APIs to using the curl_multi APIs, we prototyped with both curl_multi_perform and curl_multi_socket_action. The latter had non-trivially more complicated code, and showed no significant improvement in either throughput or scale based on the typical usage patterns employed by HttpClient (the primary benefit would come at hundreds or thousands of concurrent downloads, as we'd be able to use epoll/kqueue instead of the poll libcurl uses in curl_multi_perform... but it would also come with some additional per-operation costs). If you can demonstrate otherwise, would like to submit a PR for the switch, and can provide detailed performance data highlighting the impact, we would be very happy to review. Otherwise, we can always change this in the future if it proves to be problematic, paying the complexity costs then.

You want the latest version NOW go compile it yourself.

This is the way of the unix world. It's also no different than if we make a fix in C# code and you want that fix before it's available in a package.

Are you prepared to support different versions?

Yes

Furthermore, since you depend on the SHARED version you are assuming that people are reluctant to pull non security updates

I don't understand this. If an app wants to use its own private copy of libcurl.so, it can.

Do we now have to be also SUPER C PROGRAMMERS?

I don't understand what you're suggesting here. Are you saying that we should only use code in the implementation of .NET that every developer can easily understand? Code to the least-common denominator of ability?

Anyway, it seems we simply fundamentally disagree. That's fine. We do not have the time nor resources nor inclination right now to go and implement a new HTTP stack that would simply get us to a state no better than we're in now. You're absolutely welcome to implement your own HttpClientHandler on top of the other System.Net.* libraries and put it out as a NuGet package for anyone to consume. Might even make sense to add one to corefxlab if you were so inclined and if you had the time and dedication to keep it moving forward.

@SidharthNabar

I completely agree with @stephentoub. A few points I would add specifically on the HttpClient topic:

  1. The API design of HttpClient with HttpMessageHandler abstract class was meant exactly for developers to have the freedom to either chain their custom handlers above the platform handler OR to completely replace the platform handler with their own. If someone wants to write, support, maintain and evolve their own managed Http Handler, by all means - Go for it! That is something we would totally love and encourage. Making this available as a NuGet pkg would give .NET Core developers an option to choose the native platform handler OR a (presumably) cross-platform and debugging friendly managed handler.
  2. The choice of how the default HttpClientHandler is implemented on a given platform was made by our team after much thinking and design discussions. The HttpWebRequest stack uses a lot of legacy coding patterns and has been shown to have very high memory usage when used at large scale. In contrast, WinHTTP has been in Windows Server for 10+ years and been tuned for high scale performance. Each implementation option had its pros and cons, we chose the one that allowed us to leverage existing code, get high server scale performance and provide HttpClient APIs in .NET Core within the target timeline.
  3. I completely understand that by implementing HttpClientHandler over a native handler, we lose some of the knobs/controls/debugging ability at deeper layers (TCP/Sockets), but in return, we get to spend more time on improving and optimizing the HttpClient API surface and leverage others' effort for underlying platform improvements, implementing HTTP/2, etc.

@clrjunkie
Author

If you can demonstrate otherwise, would like to submit a PR for the switch, and can provide detailed performance data highlighting the impact, we would be very happy to review.

I don’t want to invest any time in this, because I don’t believe in the approach to begin with. The whole reason I experimented with the libcurl C API was to see whether this is something I could ramp up on quickly, so I could self-support the implementation and not be back at square one with a wall in front of the communication-layer implementation (HttpWebRequest), now just from a technical perspective. However, if you are willing to work TOGETHER on porting HttpWebRequest, then that’s a completely different story.

This is the way of the unix world. It's also no different than if we make a fix in C# code and you want that fix before it's available in a package.

Big difference. As a developer I have much more control over what gets into the build than over what gets into the O/S (shared library).

I don't understand this. If an app wants to use its own private copy of libcurl.so, it can.

And if the app depends on a native library that depends on a private copy of libcurl.so, and the app also uses HttpClient, can you load the library twice?? Why should the FW be part of this party?

I don't understand what you're suggesting here. Are you saying that we should only use code in the implementation of .NET that every developer can easily understand? Code to the least-common denominator of ability?

For the BCL, always prefer the least-common denominator of domain expertise (C#/.NET) when possible.

Here it's definitely possible.

Furthermore, considering the “complexity” in implementing curl_multi_socket_action, my concern here is about a much higher bar in complexity.

Might even make sense to add one to corefxlab if you were so inclined and if you had the time and dedication to keep it moving forward.

Give me an editor debugging experience on *nix first!! This was supposed to be the TOP priority before anything. I refuse to debug epoll event issues with Console.WriteLine. You should know your .NET people.

@clrjunkie
Author

@SidharthNabar

The choice of how the default HttpClientHandler is implemented on a given platform was made by our team after much thinking and design discussions. The HttpWebRequest stack uses a lot of legacy coding patterns and has been shown to have very high memory usage when used at large scale. In contrast,

Solution: Refactor.

I completely understand that by implementing HttpClientHandler over a native handler, we lose some of the knobs/controls/debugging ability at deeper layers (TCP/Sockets), but in return, we get to spend more time on improving and optimizing the HttpClient API surface and leverage others' effort for underlying platform improvements, implementing HTTP/2, etc.

Until you hit a problem at that layer... and then you're stuck! (praying for a fix)

@stephentoub
Member

You should know your .NET people.

I stop participating in threads when they turn to insults. I'm done with this one. Thank you for the discussion.

@clrjunkie
Author

What insults?? The fact that we have been working for the past 15 years with VS, where editor debugging is an inherent part of the workflow and mindset; you call this an insult?

@clrjunkie
Author

#16401

@clrjunkie
Author

Re: Is curl suitable for iOS/Android when http proxy is used

https://curl.haxx.se/mail/lib-2016-01/0086.html

@xied75
Contributor

xied75 commented May 12, 2016

What insults??

I guess there was a misunderstanding between you two. I feel @clrjunkie meant "You should know your people", but @stephentoub read it as "You should know your .NET".

Probably better for the .NET community that everyone checks their words and removes anything typed not in a good mood, so that we still have a chance to send Java to the museum.

@clrjunkie
Author

@xied75 You understood me correctly. It didn't even cross my mind that such wording could be interpreted differently, and I apologize for that. I didn't mean to say "You should know your .NET".
I believe in fierce scrutiny backed by sound reasoning, but never for the sake of degrading anyone or hurling insults, as not only do I think it's unprofessional, such an approach doesn't help move anything forward.

Having said that, after 20y of working with MS tech (15y with .NET), and also having spent time working at a Microsoft product group, I'm too often baffled by how things in .NET Core are prioritized, to the point where it's challenging to stay calm. In my opinion, "Networking APIs" (e.g. Http(s), Sockets, Sql Reference Client*) and "Developer Tooling" should be among the top three priorities (I'll let others speculate on the third. I certainly have my opinion, but I prefer to stay on topic). This is mainly because most other areas do not require nearly comparable testing efforts and can be fixed along the way by users themselves, if mature tooling is made available cross-platform. Every time I see an issue mentioning SIMD, some pointer abstraction, or some memory micro-optimization that will most probably only be appreciated in an artificial benchmark, I go "ooofff", as all these grab developer attention that would otherwise help improve what I believe are the top priorities.

If I were to write the .NET Core mission statement it would not be about performance improvements, "cloud-ready", "diet" or any other superlative.

It would simply be summarized in one word - Accessibility

Because:

  1. The full .NET FW is inaccessible to developer bug fixes (You can't compile it)
  2. The full .NET FW is inaccessible to non-FW developers attempting to understand the
    motivations behind the implementation (Many areas turned into spaghetti over time)
  3. The full .NET FW is inaccessible to iOS and Android developers.
  4. The full .NET FW is inaccessible to Mac / Linux users.

Relying on 3rd-party native HTTP APIs that are written in C makes a core area of .NET Core inaccessible to managed developers.

*Sql Reference Client - Since virtually all ADO.NET DB drivers share 80% of the high-level requirements (API surface, connection pooling, retry logic, etc.), it would be wise to invest not only in coding a robust SQL client for SQL Server but also to document and record a code review session so other ADO.NET driver developers can build upon the existing implementation and focus mainly on the protocol parsing logic.

@aL3891

aL3891 commented May 13, 2016

To each their own, but I must say I disagree. In my mind the vast majority of developers who use .NET will never dive into the actual .NET Core code, much less such low-level APIs as the libuv stuff. Most people will just want to get their app to market.

Imo, .NET Core is not a sample for people to use as reference when they attempt to learn how to write a network stack (or what have you), it's a tool for writing apps. As such I think it should do all it can to make those apps run well/fast. If anything I'd like to see more native platform stuff, if that would increase performance.

Besides, it's not like everything in .NET Core is C# anyway, there is plenty of C/C++ and even asm to go around :)

-edit-
I can see your point about native parts not being accessible to pure C# developers, so they are unable to make fixes, but I'd still argue that anyone who'll dive that deep would not be all that troubled by C code.

@clrjunkie
Author

@aL3891

In my mind the vast majority of developers who use .NET will never dive into the actual code, much less such low-level APIs as the libuv stuff. Most people will just want to get their app to market.

I agree. The majority of .NET developers (as well as others) will probably understand .NET Core as a version of Microsoft .NET for Linux/Mac, no more, no less. However, the majority of .NET developers do not participate in this project and do not make strategic decisions on the framework of choice.
Furthermore, contrary to previous "Full .NET Framework" releases, .NET Core is targeted at everyone, not only .NET developers, which means its success is tied to how it is adopted in non-Microsoft shops. For that to happen on any serious scale, it needs high-profile customers first, who DO care about these details; there is plenty of evidence for that when you look into who is engaged in open source HTTP projects on GitHub.

Do you really think that companies like Dropbox, Pinterest, or Netflix care that LibCurl/LibUV saved Microsoft time? Do you really think they just buy into abstractions without looking under the hood to assess the risk? Have you seen how Google implemented their C# gRPC client? (hint: P/Invoke I/O Completion Port) Do you really think these companies aren't asking themselves how all this fits together?

Imo, .net core is not a sample for people to use as reference when they attempt to learn how to write a network stack (or what have you),

I agree. The only thing I suggested was to help other DB client developers leverage the investment made in SQL Client, because if Microsoft wants to have MySql and PostgreSQL users run on Azure AND use other MS/.NET technologies, it is in their best interest that DB drivers are in place. It's a very important factor when choosing a FW. BTW, I think they mentioned in a past developer standup that they are actually working on this with a 3rd party.

its a tool for writing apps. as such I think it should to all it can to make those apps run well/fast.

I should have made myself more clear. This issue is primarily about .NET Core HTTP components as they apply to server side scenarios and service-to-service communication. I agree this has much lower impact on the majority of client App developers.

Besides, it's not like everything in the CLR is C# anyway, there is plenty of C/C++ and even asm to go around :)

Of course, all I/O is ultimately implemented in native languages, and I don't have anything against C/C++ or asm. I occasionally use 3rd-party libraries written in C/C++ via P/Invoke, but as I mentioned in one of my previous posts above, from my experience it's one thing to debug and reason about syscalls that have very narrow semantics which rarely change, and a completely different effort (and risk) to deal with application-level protocol issues that are entirely implemented in native languages.

but i'd still argue that anyone who'll dive that deep would not be all that troubled by c code

Ohh, I disagree... C code is not the problem; it's how you deal with a complex C codebase that's over a decade old that's the problem. Heck, .NET Core itself is a solution to legacy problems in the "Full .NET Framework", just on a "Managed Level".

@aL3891

aL3891 commented May 14, 2016

I honestly doubt that level of technical audit is very common unless you're building a moon lander or a nuclear power plant, but even so, suppose that they are: wouldn't the goal of such an audit be to determine reliability and security first and foremost? All respect to the amazing skill of the .NET teams, but writing a new managed network stack from scratch will not have the same battle cred as something that has existed and been used for years. Besides, when .NET was closed, no one could check this kind of stuff anyway, and high-profile, non-Microsoft shops still used .NET. Also, companies like the ones you mention do care about time to market, and if Microsoft can deliver .NET Core faster, that will translate to faster time to market for customers as well.

When I say "app" I do actually mean any kind of solution, server side or client. I firmly believe that most customers see .NET as a tool to solve their business problems, and trust Microsoft to make the right calls about the internals.

I think that if there is some specific concern about libuv from a performance, security or usability perspective that affects users of the framework, then that's a reason for looking at alternatives such as a pure managed solution. But otherwise, I'd rather they focus on other stuff.

@clrjunkie
Author

I honestly doubt that level of technical audit is very common unless you're building a moon lander or a nuclear power plant

I recommend you visit some engineering blogs and the issue trackers of leading open source HTTP projects to get a more accurate picture.

Heck, go no further than Stack Overflow to see that they implemented their own websocket server (e.g. NetGain) instead of using HttpListener's WebSocket support (yes, they are running Windows 2012R2).

http://nickcraver.com/blog/2016/02/17/stack-overflow-the-architecture-2016-edition/

but even so, suppose that they are, wouldn't the goal of such an audit be to determine reliability and security first and foremost?

Reliability is extremely difficult to assess until you go into production and it's virtually impossible if not politically risky to grade security. The more relevant concern I see coming from high traffic services is whether they can debug at this layer without taking additional dependencies on 3rd party components or waiting for a Microsoft patch.

Putting it simply, the question is who are we marrying here .NET Core / .NET Core + LibUV / .NET Core + LibUV + Microsoft?

All respect to the amazing skill of the .net teams, but writing a new managed network stack from scratch will not have the same battle cred as something that has existed and been used for years.

You are using the term "Network Stack" too loosely. I'm not talking about reimplementing the O/S TCP/IP stack. The concern here is about two .NET networking libraries that Microsoft has implemented, and as far as the .NET HTTP client goes, it is implemented entirely in managed code in the "Full .NET FW" that you have been using for years. The concern here is about the new implementation and the lack of a managed socket-based HttpListener in .NET CoreFX.

Besides, when .net was closed, no one could check this kind of stuff anyway and high profile, non-Microsoft shops still used .net. Also, companies as the ones you mentions do care about time to market, and if Microsoft can deliver .Net core faster, that will translate to faster time to market for customers as well

Times have changed, the level of interactivity has changed. People outside the .NET eco-system have been looking into these areas for several years now.

When I say "app" I do actually mean any kind of solution, server side or client. I firmly believe that most customers sees .net as a tool to solve their business problems, and trusts Microsoft to make the right calls about the internals.

That's not the social contract in the open source community.

@clrjunkie
Author

dotnet/corefx#10947

@karelz
Member

karelz commented Dec 14, 2016

Triage: This is a long discussion about direction of the stack. It's not active anymore and it doesn't track any meaningful action item. Closing.

If there are clear action items which are non-controversial, please let me know.
I don't think it is useful to keep open-ended discussion issues open once they have gone quiet, but I am willing to change my mind if there is strong push back from multiple people.

@karelz karelz closed this as completed Dec 14, 2016
@msftgits msftgits transferred this issue from dotnet/corefx Jan 31, 2020
@msftgits msftgits added this to the 2.0.0 milestone Jan 31, 2020
@ghost ghost locked as resolved and limited conversation to collaborators Jan 2, 2021