proposal: crypto: add crypto engines #33281
Comments
A use case: business reasons (FIPS, etc.) cause us to have to use one of a couple of crypto libraries, depending on the situation. We wrote a library that copies the stdlib's crypto API and uses different "backends" based on build tags. Another pain point is when libraries don't implement everything we need. In that case, we have to fall back to the Go stdlib, which results in sloppy and error-prone backend code. Additionally, we have no real way to switch backends at runtime depending on, e.g., whether FIPS mode is enabled on the host machine. This also means that for each backend we want to test, we have to re-compile the binary instead of switching engines.
CC @FiloSottile @agl
I understand this desire and it does make sense; there's no need to get stuck on implementation details while still pondering whether we even need such a thing. Having said that, I would prefer to see a proposal at an early stage for the interfaces where there's an implementation of at least two separate engines. Once too often I've seen someone gobble up "a single generic interface to rule them all", which turns out to be basically a copy of their own implementation, and not be compatible with any alternatives. If we set out to create a single interface to rule them all (as I see this proposal), we should at least know that that is realistic and that it fulfils all requirements of other (potential) engines as well. If that turns out to be hard to do (for example because no other implementations exist yet), perhaps a small case study of other similar initiatives (e.g. in the C ecosystem) could be done to show how these evolved and where they lack, if at all. Finally, it's not mentioned explicitly in the proposal why this should be part of the standard library, rather than be a third party package that simply encapsulates the existing crypto stuff. Am I right in assuming it's necessary because you also want other things in the standard library to use this engine? All in all, +1 from me :)
@Freeaqingme I don't want to speak for @derekparker, but I know that my +1 would change to a -1 if this didn't affect the stdlib's crypto. My previous comment sort of implies why, but since the default crypto for Go is the stdlib's crypto, there'd still be lots of pain points if this was implemented as a 3rd party library.
I understand that and can work to provide a working example if necessary once discussion on this proposal has progressed. I just wanted to get the conversation started here.
Sorry, I thought I had clarified that. There are a few reasons for this being part of the standard library as opposed to simply an external package. One, there are already many external crypto packages for Go and yet this proposal still exists. The issue isn't with the ability to use other crypto with the Go programming language itself; the issue is being able to swap crypto backends without changing import paths. This is an important detail for a few reasons:
The dev.boringcrypto branch (and other forks based on it) are created so that Go programs built with that version of the toolchain simply call into other crypto libs without user code having to do much, if anything. That is the real goal here: to distill and simplify this need of having to use crypto other than pure-Go standard library crypto, and being able to do it without using an "unofficial" branch or fork of the language. Hope that all makes sense! I'm excited by the support and discussion around this.
What happens when a new algorithm gets added to the stdlib? Do all existing engines become obsolete? |
Yes, that potential is there. However, looking back at the past few releases: Go 1.12 - no new crypto functions added. These changes only come in minor version updates, which happen on a 6-month cadence. I feel that if a person needs to go through the trouble of using and maintaining a cryptographic library, the maintenance burden of adding ~4 new functions to said library over the span of ~2 years (from 1.9 to 1.12) doesn't seem insurmountable.
Alternatively, would it be possible to have a single interface that returns which algorithms are supported, and then have a single interface per algorithm?
Could you sketch out what you mean by that? |
Sorta related to having a “single interface per algorithm,” check out how bazil.org/fuse/fs handles multiple optional methods.
ACK. That seems like an interesting addition in the interface approach. That could solve the problem of making older engines obsolete. EDIT: For those who are following along, this is the series of interfaces I found: https://github.com/bazil/fuse/blob/master/fs/serve.go
It would definitely be nice to have some way of transparently adding FIPS or distro managed crypto support to Go. I have some concerns with doing this at runtime though:
Personally I think it would be nicer to have a clean way to swap out standard library packages at compile time using modules or a compiler option. That way you could statically check that the binary doesn't contain any Go crypto code (unless the replacement you are using links to it as a fallback - not sure what sort of mechanism that would use). Also this approach would have no impact on people who just want to use the standard Go crypto library as-is.
@mundaym I'm not sure I agree that it should be required at compile time. But I think the underlying concern that it should be verifiable which engine is being used is a valid one. I suppose that's something that can also be done at runtime. E.g. a call to crypto.ActiveEngine() that returns a struct with a field to identify the engine in use. That could be printed while starting the application, shown through an HTTP handler, etc.
@Freeaqingme I guess it depends on your level of paranoia (or the size of the fines...) :). Does anyone know what sort of guarantees are required for FIPS-compliant programs? Do they just need to avoid calling non-FIPS code or should they also avoid linking against it?
Well, if you don't use the stdlib crypto anywhere, then those symbols would be unused? They could then be stripped from the binary if so desired, right? However, if the application were compiled to remove all debug symbols, it may be hard to analyze whether std crypto is still used, so I think it's at least required to be able to determine it at runtime. Removing the symbols at compile time would be a nice bonus, but I think they could be stripped after compiling as well.
I think that this issue is trying to solve two distinct problems:
1. wholesale replacement of the crypto stack for regulatory reasons.
2. replacement of low-level cryptographic primitives with
platform-specific versions.
Both are useful, but they involve different tradeoffs. In case (1), you
want to make sure that the stdlib code is never called, even if that means
failing in cases that are not covered by the replacement. In case (2),
you want to gracefully fall back to stdlib whenever necessary, but you
care about performance very much (e.g. you don't want every hardware-
accelerated SHA-1 to cause a piece of data to escape to the heap).
Should this issue be restricted to just case (1)?
(In which case I fear that compile-time replacement of the stdlib crypto
code is the only workable solution.)
There has been talk of potentially splitting parts of the standard library into modules. Additionally, if your primary goal is to use a different TLS stack, it might be easier to make just the TLS package swappable.
This is a great point, and one that is addressed in forks such as the boringcrypto branch by panicking if a non-boring code path is executed while boring mode is activated. I agree this proposal is a bit lax on this, but to be honest I mostly wanted to get the conversation started. I think we should have a way of explicitly failing instead of falling back to the stdlib.
They must fail when calling non-FIPS code.
That would be interesting, but would have to wait until Go 2.
Maybe this is a distraction, but as a thought experiment, what if this proposal were instead "garbage collection engines" or "channel engines" or "reflect engines" or "encoding/json engines?" Would any of those make sense to try to support? Why or why not? Obviously some other implementations of Go do substitute different garbage collection and channel and reflect implementations, though not as far as I know encoding/json. The existence of those alternate Go implementations that carry pieces more appropriate to their use case has not caused us to try to make the garbage collector or runtime or reflection systems "pluggable". I wonder if crypto is like those or more like encoding/json or somewhere in the middle. For dev.boringcrypto I of course had to answer that question, and I answered it by saying that crypto was like other key pieces like GC and runtime and that if you changed it you simply had a different Go. Is that the wrong answer? If so, why? Is crypto different solely because of FIPS? Or something more general?
I appreciate the thought experiment and the discussion; however, I would argue yes. My main reasoning is that nobody will be prevented from running Go binaries, or charged large fines, because of the garbage collector, runtime, channel, reflect, or encoding implementations. However, certain companies simply cannot run Go binaries in situations requiring certain standards of cryptography due to, as you mentioned, government mandates such as FIPS compliance. That's the fundamental issue here: by not allowing a standard upstream way for users to switch crypto implementations based on actual business needs and requirements, we limit what can be used in the Go ecosystem, or at least make it much more difficult. The argument against that would be "just use an external package". Now, along with that I think there are things that should be updated in this proposal to reflect the discussion. Namely, I think there should be some way to prevent falling back to stdlib crypto if an engine is loaded, to prevent situations where non-FIPS code gets called where it would be inappropriate. However, I think those details can be hashed out later in this thread, certainly before any kind of implementation. I just want to address the basic need for something like this and stress that cryptography is fundamentally different from a runtime or garbage collector.
That's a legit question I suppose. I'm not 100% informed when it comes to the alternate GC or runtime implementations, so I might make some (incorrect) assumptions here... I assume the alternate GC or runtime stuff is implemented because of technical considerations. It's done by people who have a different vision on certain technical aspects of these components, and therefore want to do it themselves, simply because they can do it "better" (mind the quotes! :)). However, this proposal is not about companies that want to swap the crypto implementation for technical reasons. If they feel they can do better, they may temporarily fork it, but only to contribute those changes back upstream later (like Cloudflare did with the TLS asm implementation). The companies that are looking to use a FIPS implementation (and rule out usage of stdlib crypto) do so because of business considerations rather than technical considerations. They're not looking to maintain a fork because they can do it better; they simply have a business requirement to substitute a single component (crypto) with another. That's why I think that allowing for alternate crypto implementations is different from allowing alternate GC or channel implementations. Of course we can say that those concerns are not the concerns of the Golang community and therefore a fork is the way to go. Had this been proposed 10 years ago, that might very well have been the answer, because the language was less mature and resources were better spent on other things. Now that Go is gaining more and more traction, I think it could be a good moment to consider it. In the early days of Android, vendors also wanted functionality that was not present in the stock Linux kernel. This in turn led to many forks, often barely updated and hardly ever kept in sync with upstream. This reflected badly on the Linux community and led to a worse user experience for the end user.
In the end, much effort (by Greg Kroah-Hartman among others) was invested into getting it all back into sync. This in turn has led to an increased amount of available developer hours on a single 'implementation' of Linux, rather than having them split between the two. I realize that Go != Linux and that a programming language != kernel, but it could be an argument for why having forks could be a bad thing in the long term.
Let me start by saying that as the current maintainer of dev.boringcrypto, I am sympathetic to the pain of maintaining a fork, and I understand where this is coming from. I think you make a clear argument that there is a business case for swapping crypto backends. I'm not convinced it's qualitatively different from the business case for swapping other parts of the language though: for example, the GC and runtime make Go incompatible with embedded devices, which also "limit[s] what can be used in the Go ecosystem". There are however ways in which swapping cryptography backends is peculiar technically, and they all suggest leaning against this proposal.
Overall, it's a tradeoff. Not making cryptography engines pluggable puts the cost of maintaining a FIPS fork on the organizations that need it (Google included), instead of adding complexity to the standard library used by most. Also considering that it's mostly well-staffed organizations that need FIPS compliance, I think that's the right call, and I don't think we should implement this. Maintaining one of the simplest cryptography libraries in the whole ecosystem is an exercise in resisting complexity, and it sometimes comes at the cost of this kind of flexibility. (This focuses on the FIPS case of switching the whole engine, because for providing hardware implementations of specific values there are already dedicated interfaces.)
Thanks for your thoughtful reply @FiloSottile.
I was definitely awaiting your reply and thoughts on this as we have talked before about the dev.boringcrypto branch in private mail and I knew we were likely experiencing the same pain points.
Thank you. One counter I would provide here is that Go was never intended as a language suitable for embedded devices. As a language geared towards writing performant server software, I think that differentiates the crypto packages: they are a necessity these days, since running a server without TLS and proper security in any kind of production environment is irresponsible at this point.
I appreciate this feedback and the time you took to provide specific examples of where this might be a burden in the future if implemented as described.
I completely understand and agree that I wouldn't want to do anything to unnecessarily complicate the language, as the simplicity of it is one of the main factors that drove me towards being a Go programmer many years back now. I guess my last question would be: are we doomed to maintain forks for the rest of time, or is there still room for brainstorming a solution which has an acceptably small complexity vs. flexibility tradeoff? As I mentioned before, if this proposal is not "the one" I'm not offended, my goal was to provide a jumping off point and open the conversation.
With respect to comparing alternate garbage collectors and alternate cryptography implementations, there's one pertinent difference: the business reasons for cryptography backends usually necessitate multiple backends. I'm unaware of any business reasons that would require, say, three different garbage collectors or runtimes. On Windows, the correct option for a FIPS-compliant Go package is to use the CNG API (in particular, BCrypt.dll or NCrypt.dll). On Linux, one might choose OpenSSL. In addition to OS-specific packages, however, those who need FIPS compliance often have to deal with client-specific libraries or hardware. This greatly complicates forks, because now instead of just one BoringSSL fork you're required to maintain N forks. Or, try to combine the forks into one. But that's not entirely possible because you can't always include client A's cryptography library (or your own code that accesses client A's hardware) in your fork if that fork is provided to client B. And often you have to provide your fork to client B because the type of clients that require FIPS compliance quite often need to build the software themselves, or at least have a trusted intermediary build it for them. For something like FIPS-compliant cloud storage this probably isn't a big deal. You can pick and choose your desired VM and only maintain a fork that runs on that VM. But the lack of pluggability makes writing software that the clients run on their own machines incredibly difficult.
I very strongly disagree. Large companies like Google, Amazon, Cloudflare, etc. are not the only companies that require FIPS compliance.
This is a great point as well. I'm coming from the point of view of maintaining a fork calling into OpenSSL (as opposed to BoringSSL) for RHEL, however the idea of potentially needing multiple forks within the same organization is a major pain point for consumers.
This proposal has been added to the active column of the proposals project |
Added to active to try to move discussion toward a decision. I asked in #33281 (comment) about the motivation, and it sounds like the answer is "for FIPS" not "because crypto should be pluggable generally". We still don't support uses of BoringCrypto at all, but it's worth noting that dev.boringcrypto has been merged into the main tree. So replacing crypto should be a matter of replacing just the one crypto/internal/boring package. Does that make it easier for people to maintain forks? Or does that provide a useful chokepoint for other replacements?
Ignoring FIPS compliance, there is still a good argument for "pluggable crypto" specifically to take advantage of crypto hardware. I know I cannot currently take advantage of a crypto accelerator in one of my embedded targets. While crypto accelerators will likely be niche and covered by a fork, I think things like YubiKey and TPM integration are becoming more normal. I have not developed for the YubiKey yet, but I believe it is capable of performing all the needed PKI actions for TLS 1.3 internally, so the private key would not have to leave its boundary. The only way I have seen Go leverage a YubiKey currently is to extract the key from it.
Hardware modules are supported through the existing interfaces.
@FiloSottile I don't see any examples of it, but yes. As for the engine vs interface debate, I think interfaces could be part of the solution.
I think you’re correct in that being the main motivation, however I also feel that FIPS in and of itself is enough motivation to provide support for this feature. FIPS is a real-world requirement with regard to cryptography that companies must deal with. Go, having a cryptography package, should also have facilities to enable this sort of compliance out of the box. Go is very much a “batteries included” language with a robust and comprehensive standard library, so forcing users who must care about FIPS due to regulations into maintaining or using a fork of the standard library seems like the wrong approach. I would very much be interested in ways to make Go cryptography FIPS certifiable by somehow being able to have a Go crypto plugin (read: shared library of sorts) that can be certified, alleviating the need to use another crypto implementation, but that also implies the standard library being able to load a cryptographic implementation and use it, which is essentially what we’re already talking about here.
This change does indeed make things somewhat simpler from a fork maintainer perspective, however I think it’s still not ideal. Carrying patches to weld things together can be brittle and time consuming as rebasing can get messy over time, and in the end we’re still not shipping or using the “real” upstream Go toolchain, which carries with it its own set of risks and complications. A simple solution could be to allow Go module replace directives to replace stdlib packages so that one could simply swap that package out with another in a more supported way, however that proposal has already been declined previously[1].
It’s true that interfaces allow for swapping out certain implementations, but that only affects code the user is writing directly. Other packages, and even other stdlib packages such as crypto/tls, may still be using the regular stdlib implementation. What this proposal allows for that interfaces don’t is the ability to swap out the entire crypto package implementation for all packages, even ones that are only included as a dependency that the developer did not write themselves.
Speaking as a maintainer of the Microsoft Go fork, which implements CNG and OpenSSL crypto backends: yes, merging dev.boringcrypto back into the main tree has made our life much easier. This is a small experience report:
Going back to the Crypto Engines proposal, my gut feeling is that it would make our work as engine (aka backend) maintainers more difficult:
Summarizing, I agree with @FiloSottile. On the other hand, @derekparker made a good point with this proposal: rolling your own crypto should be easier.
Thanks for your detailed input @qmuntal!
I think that can be part of the overall discussion. If unloading an engine proves problematic maybe it's not something we should allow. E.g. loading an engine would be a one-time, irreversible step.
The original proposal allows for this in a way. The engine itself is represented by an interface, with different algorithm types as methods. Taking a look back at the code example:
You could imagine a scenario where an engine implementation only implements part of the interface, returning nil for those not implemented, falling back to stdlib implementation.
What about the case for wanting to ensure that every package in your program is using the crypto implementation you choose? Even with this interface approach the code you write can use whatever crypto you want, but packages you import may still use the standard library implementation directly. I think that if these interfaces were sufficient for accomplishing what we all would like to accomplish, this proposal would not have been necessary.
I agree that we should exhaustively explore all options, crypto engines being just one of them and a way to get this conversation started with at least one concrete direction. Allowing replace directives to work against stdlib packages, or making the crypto package loadable via plugins (which could then mean the stdlib crypto library could be FIPS certified), or allowing for an engine approach similar to what's described in this proposal are all ideas. To be clear I'm not staunchly in favor of the approach I proposed, I just wanted to get the conversation started and provide at least one path forward. The ultimate goal is to be able to supply user-provided crypto without the use of patch files, forks, etc... |
I have started to write this comment half a dozen times and it's probably still a bit incoherent. Sorry about that. One of the factors here I am not sure of is how much of a goal it is that a FIPS certified system contains no non-certified crypto code at all (certainly it is easier to prove that code that is not present is not being executed). If it is a goal, then it is certainly easier for a compile-time approach (like GOEXPERIMENT=boringcrypto). It would be convenient for us (Ubuntu/Canonical) if we didn't have to recompile and separately distribute each Go binary that might be used on a certified system, but that seems incompatible with the above unless we build all Go applications to use openssl for crypto (which is probably OK for us as a distro maintainer, but it's quite a divergence from upstream), or this proposal is adopted to such an extent that all Go programs use some plugin-based interface to find their crypto code from a system path (which just doesn't seem like how Go works). I'm curious how other distros are handling this issue? |
Our experience with BoringCrypto has been that each backend ends up imposing new constraints on the standard library. To the extent that the existence of BoringCrypto in the tree makes other backends easier to drop in, that's great. But I don't think we want to take on the extra work of supporting a general backend interface, for the reasons given earlier in the discussion. |
I'm also a maintainer of the Microsoft Go fork, and our fork is starting to be used by the CBL-Mariner Linux distro for FIPS compliance. Yep: we expect this decision means CBL-Mariner will need to build all Go apps that are part of the distro to always use OpenSSL, so the distro's FIPS mode works with the Go apps in the distro. However, we haven't explicitly confirmed this plan with the CBL-Mariner team. (Our FIPS focus is currently on our non-Linux-distro binary archive builds.) Red Hat has historically been strict about FIPS requirements at runtime (no fallbacks, extra FIPS-based limitations on which OpenSSL APIs you can call) but plans to move to a GOEXPERIMENT compile-time decision as well, as of microsoft/go#641 (comment). Using OpenSSL for non-FIPS scenarios will need some refactoring to loosen those runtime safeguards. I'm not a FIPS expert, but my understanding is that this doesn't have any major effect on FIPS compliance or certification. Limiting the available APIs might make some initial FIPS compliance exploration easier (run tests under FIPS mode -> fix errors), but it's always possible to use permitted APIs in a FIPS-noncompliant way, so it seems to me the code still needs review regardless.
The Zen of Python says: "There should be one-- and preferably only one --obvious way to do it." This is a good principle for Go too. So should Go allow us to choose between different crypto engines? If the Go stdlib is updated to allow me to choose which crypto engine to use, there is no longer one obvious way to do it. Some people have argued that making crypto pluggable is necessary in order to comply with government rules.
Based on the discussion above, this proposal seems like a likely decline. |
I completely understand the arguments towards keeping the language simple and continuing to provide the "batteries included" approach to the standard library, and especially the mindset of having one and only one correct way to do things. These are all aspects that drew me to this language and community in the first place, and I wouldn't want to sacrifice them. That being said, I believe we have already added complexity with the merging of the boringcrypto codebase to the standard library. Would this proposal have any merit / chance of acceptance if we continue the direction of hiding the complexity of its implementation behind a GOEXPERIMENT flag? That way anybody who does not need the complexity of swapping crypto backends and dealing with FIPS doesn't have to worry about it, and we can isolate these changes a bit more. (cc @rsc)
No change in consensus, so declined. |
Any thoughts on my last comment? #33281 (comment) |
Crypto Engines
The Go standard library cryptography implementation is very comprehensive and
widely used. The problem, however, comes when a user wants to swap out the
implementation for another, while using the same interface. This has already
happened upstream with the dev.boringcrypto [0] branch. Use cases for using an
alternate implementation include FIPS compliance (where a user may need to call
into a FIPS validated module), and interfacing with external crypto devices
(e.g. accelerator cards, specialty hardware).
It would be easy enough to maintain a package that does this already. However,
the problem comes when trying to swap the crypto implementation in a project you
do not own. A standard mechanism would also ensure consistency, so that there is
a formal way to declare that all or part of the crypto operations provided by
the standard library should be implemented by some outside package.
This proposal outlines a way for an alternate implementation of the crypto
functionality to be loaded / unloaded via function calls, and allow the
alternate crypto implementation to live in a separate package maintained outside
of the Go source tree. It also allows all this without the user having to modify
import paths, allowing a user to load an engine once and have that change
propagated everywhere automatically.
Engine implementation
Users should be able to load a single global engine which if present will be
used instead of the standard library implementation. The design allows engines
to replace all or part of the standard library implementations. There is no
concept of having multiple engines. Loading an engine makes it the default for
any crypto implemented by that engine. This means that a single engine could
replace all or part of the crypto functionality present in the crypto
package in the standard library.
Engine interface
All engines must conform to a specific interface in order to be registered
internally.
The interfaces will be focused on specific areas of cryptography following the
package boundaries in the standard library.
The goal would be that all crypto operations could be handled by an engine
implementation, including TLS specific functionality (PRF, et al).
Loading
The loading of an engine should be simple and straightforward. Only a single
engine can be loaded at any given time. This means one must unload an engine
before attempting to load another one.
The user is presented with 3 options:
1. Calling LoadEngine with the struct.
2. Setting the environment variable GO_CRYPTO_ENGINE=<path>. This would cause
the program to load the engine from <path> at runtime. <path> should point to
a valid Go plugin.
The basic outline for these functions would look like:
It would also be very easy to unload an Engine, if need be, via:
It is the responsibility of the engine to maintain correctness. In other words,
the engine must be responsible for maintaining locks, reference counting, or any
other method to ensure that when EngineCleanup is called the resources used by
that engine will not be left in an invalid state.
Usage
The standard library code will branch based on the engine and the specific
implementation being non-nil.
For example:
Real world use cases
The upstream Go repository currently has a branch [0] that is maintained
separately from the main master branch and adds support for calling into
BoringCrypto [2]. Instead of maintaining an entirely separate branch, the
functionality could be rewritten as an engine and included in any project that
may need it.
Additionally, if a Go user has a requirement that their code meet FIPS
requirements and would not like to use (or cannot use) the dev.boringcrypto
branch, this new functionality would allow the user to pick their crypto library
of choice (OpenSSL, LibreSSL, et al).
Finally, if there are users who wish to take advantage of certain hardware for
crypto, that too could be accomplished via engines.
Implementation details
Originally when drafting this proposal I wanted to get deep into specifics of
how this functionality would be implemented as far as how the engine is
represented internally (after loading / assignment). However after a few drafts
and further thinking I figured I'd leave it a bit more abstract at first lest
this devolve into a code review of sorts. I'd rather the discussion be around
the idea of engines as a feature. If necessary I can draft a further design doc
on some of the proposed specifics. Otherwise, we can leave that to a later date
in a proper code review setting.
Summary
Engines would provide an easy way to swap all or part of the standard library
crypto functionality with one provided by the user. There are already real world
use cases and examples of folks using other crypto libraries either by creating
their own package [3][4][5], or even by upstream maintaining a separate
branch to support BoringSSL.
This proposal suggests a way to remove the maintenance burden and make the Go
standard library crypto implementation more extensible.