Ca functionality #94
Ready for review; all tests have been updated and are passing locally 👍
What is this `api/generated/*`? What is making this and why do we need it?
Tests are failing on GitLab too. Something to do with the agent simulation.
The generated folder contains the API server and client code generated from `swagger-codegen`. I've kept the generated code outside the src folder to follow the precedent.
I will fix these and re-commit 👍
I'm wondering if we can just write our API directly without generating code. Like the way we do it in Ouroboros. It seems like a lot of code that is generated.
On 23 September 2020 12:22:17 pm AEST, Robbie Cronin ***@***.***> wrote:

> What is this `api/generated/*`? What is making this and why do we need it?

The generated folder contains the API server and client code generated from `swagger-codegen`. I've kept the generated code outside the src folder to follow the precedent.
We are already kind of doing this; I will get rid of the stuff we don't need from the generated folder. Also, I guess we don't really need to generate the client, since we're only using it for testing. To be honest, we could get away with just using the generated yaml file and writing all the logic ourselves in the src directory.
Here are the changes made in this PR:
Why does npm testing keep failing?
Is there an issue for the OpenAPI/Swagger/HTTP API integration? It seems like this PR brings that in as well.
Can you review the workflow of the netboot system, and see if Polykey solves netboot's requirements?
We need to review 2 things: whether this solves the iPXE netboot problem, and the mapping/relationship between this and the smallstep commands. Does pk now act as both the CA and the CA client, where smallstep separates this into 2 commands? For the iPXE netboot problem we need to have a meeting, but have a read of this: https://ipxe.org/crypto. The primary use case of the CA functionality here is to facilitate mTLS for third-party applications. Please address the requirements here and describe how the CA functionality (which GRPC calls/CLI commands) addresses them. How does this CA functionality interact with DNS? I need more detailed diagrams/documentation about this CA PR.
What are all the files in
Also, why not remove webpack entirely at this point?
Ever since we removed
We still have dist, but I should probably get rid of it in favour of our new approach, and in light of it still being uploaded to npm if it's in .gitignore (overridden with .npm_ignore).
The client folder in tests has been removed; we are now just using
I am just making some ASCII diagrams for understanding, so I will post them here as I go and put them into the readme once finished.
The following is how the CA functionality is exposed over HTTP for external services:
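The ASCII diagram itself was not captured here, so as a rough stand-in, here is a hedged sketch of how a CA signing endpoint could be exposed over HTTP for external services. The routes, status codes, and the `route` helper are illustrative assumptions, not the actual Polykey API.

```typescript
import { createServer } from 'http';

// Pure routing logic, kept separate from the server so it can be tested
// without opening a socket. Paths are hypothetical.
function route(method: string, path: string): number {
  if (method === 'POST' && path === '/ca/sign') return 200; // accept a CSR, return a signed cert
  if (method === 'GET' && path === '/ca/root') return 200;  // serve the CA root certificate
  return 404;
}

const server = createServer((req, res) => {
  const status = route(req.method ?? '', req.url ?? '');
  res.writeHead(status).end();
});
// In a real keynode this would be exposed with server.listen(...); omitted here.

if (route('POST', '/ca/sign') !== 200) throw new Error('sign route missing');
if (route('DELETE', '/ca/sign') !== 404) throw new Error('unexpected route');
```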
Add in some design notes regarding code signing: basically, it means x509 certificates can be used for lots of things, and it is up to the CA to grant these capabilities.
Another thing we need to understand is that certificates have "capabilities". These are encoded as Key Usage constraints and Extended Key Usage constraints. Examples include "Code Signing", which enables the certificate to be used for signing digital artifacts. There's a whole bunch of other capabilities as well.
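A minimal sketch of how these capabilities might be modelled and checked. The usage names mirror standard x509 Extended Key Usage values (RFC 5280), but `CertCapabilities` and `canUse` are hypothetical helpers for illustration, not Polykey APIs.

```typescript
// Standard-ish Extended Key Usage names (a subset, for illustration):
type ExtendedKeyUsage =
  | 'serverAuth'       // TLS server authentication
  | 'clientAuth'       // TLS client authentication
  | 'codeSigning'      // signing digital artifacts
  | 'emailProtection'; // S/MIME

interface CertCapabilities {
  keyUsage: string[];                  // e.g. ['digitalSignature', 'keyCertSign']
  extendedKeyUsage: ExtendedKeyUsage[];
}

// The CA decides which capabilities to grant when it signs the certificate;
// verifiers then check the capability before accepting the cert for a task.
function canUse(cert: CertCapabilities, usage: ExtendedKeyUsage): boolean {
  return cert.extendedKeyUsage.includes(usage);
}

const leafCert: CertCapabilities = {
  keyUsage: ['digitalSignature'],
  extendedKeyUsage: ['serverAuth', 'codeSigning'],
};

if (!canUse(leafCert, 'codeSigning')) throw new Error('expected codeSigning');
if (canUse(leafCert, 'emailProtection')) throw new Error('unexpected emailProtection');
```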
What is the role of the CSR for Polykey? The CA does not generate certificates. It is the "leaf" nodes that generate certificates, and then ask the CA to sign them. Whether the CA signs them or not depends on a CSR process. Let's use an example: Let's Encrypt just needs to know whether the requester/provisioner "owns" the domain it's asking to get a certificate for. This is why they have a bunch of different domain verification techniques, like DNS checks and HTTP/HTML checks. Smallstep instead has a token-based provisioner system. We can do something similar with our OAuth mechanism, but let's really investigate what this means.
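To make the smallstep-style idea concrete, here is a hedged sketch of a token-based provisioner check: the CA shares a secret with a provisioner, and a signing request must carry an HMAC token bound to the CSR's subject before the CA will sign it. `issueToken` and `verifyToken` are hypothetical names, and binding the token to the subject alone (no expiry, no nonce) is a simplification.

```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// Provisioner side: derive a token bound to the subject it is allowed to request.
function issueToken(secret: string, csrSubject: string): string {
  return createHmac('sha256', secret).update(csrSubject).digest('hex');
}

// CA side: recompute and compare in constant time before signing the CSR.
function verifyToken(secret: string, csrSubject: string, token: string): boolean {
  const expected = issueToken(secret, csrSubject);
  if (expected.length !== token.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(token));
}

const secret = 'shared-provisioner-secret'; // distributed out-of-band
const token = issueToken(secret, 'CN=keynode-1');
if (!verifyToken(secret, 'CN=keynode-1', token)) throw new Error('should verify');
if (verifyToken(secret, 'CN=attacker', token)) throw new Error('should reject');
```

An OAuth-based provisioner would follow the same shape, with the bearer token validated against the identity provider instead of an HMAC.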
There are a few options here:
Ok so the problem is that the current root keypair is a pair of PGP keys. I'm not sure which of the 4 options are viable, but it appears that x509 certificates CAN be used as the root keypair, because PGP (the algorithm/protocol?) supports these keys. One thing though: we should be using the ECDSA algorithm rather than RSA... but this is something we might configure later. The reason we should do this is that keynodes now have a keypair to represent their identity, BUT by UNIFYING PGP and x509 together, keynodes can be identified via the PGP protocol OR the x509 protocol. In addition, it also means every keynode can now be a CA.
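As a sketch of the ECDSA-over-RSA point, generating an ECDSA keypair with Node's built-in crypto looks like this. The curve choice (P-256) and PEM encodings are illustrative assumptions; the actual curve and key storage would be decided in the keypair design.

```typescript
import { generateKeyPairSync } from 'crypto';

// Generate an ECDSA keypair on the P-256 curve and export both halves as PEM.
const { publicKey, privateKey } = generateKeyPairSync('ec', {
  namedCurve: 'prime256v1', // P-256; curve choice is an assumption here
  publicKeyEncoding: { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
});

if (!publicKey.startsWith('-----BEGIN PUBLIC KEY-----')) throw new Error('bad public key PEM');
if (!privateKey.startsWith('-----BEGIN PRIVATE KEY-----')) throw new Error('bad private key PEM');
```

ECDSA keys and signatures are much smaller than RSA at comparable security levels, which matters for the "short, shareable" identity keys discussed below.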
Once the keynodes have a root certificate, instead of presenting the root public key around, they present the root certificate to other keynodes. However, for social sharing we still want to preserve the ability of using ECDSA (ed25519) keys, since they are very short: they can be easily read out, written down, and published in text form wherever. During the "keynode discovery phase", however, the keynodes are actually exchanging the public certificate. This then has more information and capabilities encoded within it. The extra information can be used to identify/associate other pieces of information with keynodes (this was one of the original goals of x509: to associate physical identity with digital identity). And the ability to sign certificates essentially gives us transitive trust. Although PGP allows this too, I'm not too familiar with PGP certificates and how they expose that transitive trust. Once every keynode has a root certificate, that's when every keynode is capable of being a CA!
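The transitive-trust idea above can be sketched as reachability in a "signed-by" graph: a keynode trusts a peer if following issuer links from the peer's certificate reaches a root it already trusts. `TrustGraph` and `isTrusted` are hypothetical illustrations of the concept, not Polykey APIs, and real chain validation would also verify the signatures themselves.

```typescript
// edges.get(child) = the issuer that signed child's certificate
type TrustGraph = Map<string, string>;

function isTrusted(graph: TrustGraph, roots: Set<string>, subject: string): boolean {
  const seen = new Set<string>(); // guard against cycles in the graph
  let current: string | undefined = subject;
  while (current !== undefined && !seen.has(current)) {
    if (roots.has(current)) return true; // reached a trusted root
    seen.add(current);
    current = graph.get(current); // walk up to the issuer
  }
  return false;
}

// keynodeC was signed by keynodeB, which was signed by the trusted root keynodeA:
const graph: TrustGraph = new Map([
  ['keynodeC', 'keynodeB'],
  ['keynodeB', 'keynodeA'],
]);
const roots = new Set(['keynodeA']);
if (!isTrusted(graph, roots, 'keynodeC')) throw new Error('C should be trusted transitively');
if (isTrusted(graph, roots, 'keynodeX')) throw new Error('X should not be trusted');
```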
Once a keynode is a CA, we need to expose several features:
One benefit of doing this is that as soon as the keynodes have root public certificates, it is possible to use them for mTLS between Polykey keynodes for their GRPC connection. This unifies peer-to-peer and agent-to-peer transit security with the identity system of keynodes, which simplifies A LOT of things.
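In Node's TLS terms, the mTLS setup reduces to each keynode presenting its own certificate while requiring and verifying the peer's against the trusted root certificates. `buildMtlsOptions` is a hypothetical helper; the PEM parameters would come from the keynode's keystore.

```typescript
import type { TlsOptions } from 'tls';

function buildMtlsOptions(certPem: string, keyPem: string, peerRootPem: string): TlsOptions {
  return {
    cert: certPem,            // this keynode's certificate
    key: keyPem,              // this keynode's private key
    ca: [peerRootPem],        // trust anchors: peer keynode root certificates
    requestCert: true,        // demand a certificate from the peer (the "m" in mTLS)
    rejectUnauthorized: true, // refuse peers not signed by a trusted root
  };
}

// These options would be passed to tls.createServer / a GRPC server's credentials:
const opts = buildMtlsOptions('CERT_PEM', 'KEY_PEM', 'PEER_ROOT_PEM');
if (!opts.requestCert || !opts.rejectUnauthorized) throw new Error('mTLS not enforced');
```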
This PR adds CA functionality to Polykey, which includes the ability to act as a CA for peers, but also the ability to respond to certificate requests from other services through a standard API.
Fixes #48
Fixes #105