Discussion: Advanced functionality #50
Yes, I thought about extensibility briefly already. Good that you bring it up.
Not so fast :D It is my understanding that all that `cargo` does is look for executables named `cargo-<subcommand>` in the `PATH`. So if that is the plugin mechanism you are referring to, we should not discard it. In fact, I would say that this is both pretty easy to implement and fairly flexible. It can also be neatly combined with 1. and 2. Not only that, it also allows pretty much everybody to write their own extension that can be easily integrated with the program, without having to modify either. The only downside I see is that we cannot share source code. That is, you can't effectively write an extension in Rust that pokes into some of `nitrocli`'s internals. Does that make sense to you?
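As a side note, that kind of `git`/`cargo`-style lookup fits in a few lines of shell. This is only an illustration: the `nitrocli-<subcommand>` naming scheme and the toy extension below are assumptions for the demo, not the actual implementation.

```shell
# Sketch of PATH-based subcommand discovery, as done by git and cargo.
# The "nitrocli-<subcommand>" naming scheme is an assumption for this demo.
dir="$(mktemp -d)"

# Drop a toy extension executable onto the search path.
cat > "$dir/nitrocli-hello" <<'EOF'
#!/bin/sh
echo "hello from extension: $*"
EOF
chmod +x "$dir/nitrocli-hello"

# The main program would resolve an unknown subcommand to an executable...
subcommand="hello"
ext="$(PATH="$dir:$PATH" command -v "nitrocli-$subcommand")"

# ...and invoke it, forwarding the remaining arguments.
"$ext" --slot 0   # prints: hello from extension: --slot 0
```

Conceptually this matches git's `git-<name>` and cargo's `cargo-<name>` executable lookup: the main program needs no knowledge of any particular extension.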
I didn’t think that much about it after realizing some of the implications (e.g. elegantly passing the execution context, manually handling command parsing). But if you think it can be implemented with reasonable effort, it would be the best solution.
I don't think the implementation is complex, but let's prototype it and see. Now, that is on the […]. I suppose you were somehow thinking about sharing functionality at the source code level? It is not quite clear to me whether you already had a possible implementation in mind, because neither 1. nor 2. really say anything about how they integrate with `nitrocli`. I am definitely open to other ideas; the above were just my thoughts on the subject.
No. I think 1–3 could be realized using small standalone bash scripts, but that’s everything I thought about. As you said, we should write a prototype and then evaluate it. Maybe I’ll post an example script for one of the tasks that we can use for evaluation.
I've gotten a working prototype of dynamic subcommand discovery here: https://github.com/d-e-s-o/nitrocli/tree/topic/extensions.
I think it can be integrated nicely into the existing infrastructure, with a few changes beforehand. Right now it's a bit ugly because it's not nicely separated out. The logic does not require much code, if you ask me.
Looking good! Did you think about how to pass the execution context to the subcommand? I think the only viable way is to rebuild the arguments and pass them to the extension. Of course we also have to pass the arguments that we did not handle. Maybe we can separate them from the execution context with […].
Some subcommands will have to spawn a new `nitrocli` process.
I basically concluded that right now I don't care :) But yes, translating them back into arguments seems to be a reasonable way. Environment variables are an alternative, and they would have the advantage that they can more easily be ignored and that newly introduced options to the […].

The most relevant question we should clarify is whether such a scheme (external commands) really does meet our needs. I think it can capture most use cases with close to the least effort on our side, but Jan's proposal (#52), for example, I could not immediately map to this design. Perhaps if we allow for clever replacing of built-in subcommands, or at least recursive invocation as you suggested, it may be possible. Will try to sketch this out tomorrow or so.
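The environment-variable alternative can be illustrated with a small shell sketch. All variable names (`NITROCLI_MODEL`, `NITROCLI_VERBOSITY`, and so on) are invented for the example; the point is only that an extension reads the variables it knows about and silently ignores newly introduced ones.

```shell
# Sketch of passing execution context via environment variables.
# The NITROCLI_* names are hypothetical, chosen only for illustration.
dir="$(mktemp -d)"

cat > "$dir/nitrocli-ext" <<'EOF'
#!/bin/sh
# The extension picks out the variables it knows about; any variable
# introduced by a newer main program is simply ignored.
echo "model=${NITROCLI_MODEL:-unknown} verbosity=${NITROCLI_VERBOSITY:-0}"
EOF
chmod +x "$dir/nitrocli-ext"

# The main program would set the context before spawning the extension;
# the unknown NITROCLI_NEW_OPTION below does not disturb the extension.
NITROCLI_MODEL=pro NITROCLI_VERBOSITY=2 NITROCLI_NEW_OPTION=whatever \
  "$dir/nitrocli-ext"   # prints: model=pro verbosity=2
```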
I have a prototype of an extension for exporting an OTP secret as CSV (note: the format is not correct, it's just an arbitrary CSV dump to show that it can be done): https://github.com/d-e-s-o/nitrocli/commits/topic/extensions

```
$ PATH="${PATH}:${PWD}/ext" cargo run -- otp-export 0 name 123456
name,123456
$ echo $?
0
$ PATH="${PATH}:${PWD}/ext" cargo run -- otp-export 0 name abcdefg
Could not write OTP slot: InvalidHexString
$ echo $?
1
$ PATH="${PATH}:${PWD}/ext" cargo run -- otp-export 0 name 123456 --export-format=pskc
Only csv format is currently supported.
$ echo $?
1
```

It works, but the implementation is questionable: we basically duplicate all argument definitions for the […]
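To make the duplication concern concrete, here is a toy stand-in for such an extension (not the actual prototype): it has to re-implement its own argument handling, including flags like `--export-format`, mirroring definitions the main program already carries.

```shell
# Toy "otp-export" extension: note how it re-declares its own argument
# handling instead of sharing the main program's definitions.
otp_export() {
  format="csv"
  for arg in "$@"; do
    case "$arg" in
      --export-format=*) format="${arg#--export-format=}" ;;
    esac
  done
  if [ "$format" != "csv" ]; then
    echo "Only csv format is currently supported." >&2
    return 1
  fi
  # A real extension would talk to the device here; we just emit a dummy row.
  echo "name,123456"
}

otp_export 0 name 123456   # prints: name,123456
```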
Suggestion for […]. Advantages: you don’t have to duplicate the code for interpreting the secret format, and you don’t have to parse the […].

I’ve been working on the subcommand for OTP import from a QR code. I’ll post a link soon.
Thanks for the suggestion, Robin! Yeah, that may be a bit nicer, but ultimately the […]
Awesome! I would guess that will fit much nicer into the proposed extension scheme. Looking forward to checking it out (I've never used any QR logic in a desktop context). Out of curiosity, are you writing it in Rust? |
Yes, the extension is Rust code, but it’s not doing anything by itself. It calls `import` to make a screenshot (if no file is given) and `zbarimg` to decode the QR code. There is a `zbars` binding crate in Rust, but it is too complex for this simple task.
Here it is: […]
Just checked it out: very nice! 👍
As part of #14 you mentioned:

> I am not a big fan of parsing random unstable output not designed to be consumed programmatically.

However, some time back I thought about a […]
I agree that we should define a fixed output format, but I don’t think we should add JSON output. That would be unnecessarily complicated. I think it is sufficient to have simple tab-separated output as we have for […].
After thinking a bit more about it, I think that’s the way to go – having […]
JSON is pretty much ubiquitous from my point of view. It has the advantage that there are reasonably simple parsers out there that are guaranteed to do the job. It's also self-documenting for our case because you can simply name the fields. Lastly, it is (for trivial cases such as ours) easy to ensure backwards compatibility: if we have a map and name the fields, we are done as long as the names stay as they are. Order does not matter. New fields are ignored unless explicitly referenced.

Check this out: https://docs.rs/json/0.11.13/json/macro.object.html

```rust
let data = object!{
    "foo" => 42,
    "bar" => false
};
assert_eq!(data.dump(), r#"{"foo":42,"bar":false}"#);
```

I don't think we'll be able to produce something much more to the point with the advantages outlined.
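For what it's worth, the compatibility properties claimed here (field order does not matter, new fields are ignored) hold for any off-the-shelf JSON parser. The snippet below uses Python's standard-library parser purely as a convenient stand-in; the field names are just examples.

```shell
# Any real JSON parser extracts a field regardless of field order,
# extra fields, or whitespace/formatting.
parse() {
  python3 -c 'import json, sys; print(json.load(sys.stdin)["model"])'
}

echo '{"model":"Pro"}' | parse                                # Pro
echo '{"serial":"0x123","model":"Pro"}' | parse               # Pro
echo '{"model": "Pro", "serial": "0x123"}' | parse            # Pro
printf '{\n  "model":"Pro",\n  "serial":"0x123"\n}\n' | parse # Pro
```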
Can you elaborate a bit more?
I would like to clearly differentiate between user-facing output (which is meant to be appealing to the human eye) and machine-readable output, and explicitly not mix the two. The former is meant to be changeable pretty much arbitrarily. The latter comes with backwards-compatibility guarantees.
I think we should decide what we want: a quick way for a user to hack something up, or an extension point. I am not opposed to […]. Correct me if I am wrong, but it seems you are more concerned with scenarios of the sort "let's write this 10-line extension in bash in 2 minutes and be done". That is not really what I care about here. While by far not everything needs to be as complex/comprehensive as […]
Again, I agree that we should have an option that enables fixed output. I just don’t think that output has to be JSON. For example, I think it is perfectly legitimate to do something like […] to extract a field (assuming tab-separated output). That is both robust and easy. If you want to add additional JSON output, I can live with that. But I think we should have an output format that can be interpreted without a parser framework – especially as we don’t really have complex output.
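A sketch of the kind of pipeline meant here, assuming hypothetical tab-separated `status` output of the form `<field><TAB><value>` (the field names are made up for the example):

```shell
# Stand-in for tab-separated status output; one "<field>\t<value>" per line.
status_output() {
  printf 'model\tPro\nserial\t0x123\n'
}

# Extracting a single field is a one-liner with standard tools.
status_output | grep '^model' | cut -f 2   # prints: Pro
```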
Right, but just having JSON output does not exactly prevent any of that:

```
$ nitrocli status --json | sed 's/.*"model":"\(.*\)",.*/\1/g'
storage
```

;-)
Sure, but I really don't think we should support two different machine-readable formats just because we can't agree on one. The goal should not be to cater to everybody in the most convenient fashion, but to provide one way to extend the core of the program that can be supported moving forward, in my opinion.
Now that’s what I would call “sed and awk magic” and “giv[ing] me headaches”. ;)

```sh
function parse {
  sed 's/.*"model":"\(.*\)",.*/\1/g'
}

echo '{"model":"Pro"}' | parse
echo '{"model":"Pro","serial":"0x123"}' | parse
echo '{"serial":"0x123","model":"Pro"}' | parse
echo '{"model": "Pro", "serial": "0x123"}' | parse
echo '{
"model":"Pro",
"serial":"0x123"
}' | parse
echo '{"otp-code":"123456"}' | parse
```

It works for one of the six test cases, and these aren’t even mean!
Hey! It's a proof-of-concept! I am no […]
My point was not that this particular expression does not work, but that it is hard to build a JSON parser from standard tools without making assumptions about the formatting.
Let's take a step back here. While I agree that JSON is not optimal for command-line based parsing, I still question why we would need that to begin with. Again, the use case of interest is extensions. So while somebody at some point may write something along the lines of […] to quickly check something, this is not what this format is ultimately intended for, is it? For a quick check of this sort, why not simply parse […]?

A custom format like you are proposing tries to combine the best of two worlds -- the ability to be easily usable from the command line with the safety of being properly usable for data exchange -- and it is my belief that it ultimately fails at least at achieving the latter (and if it succeeds at the former, I still don't fathom the relevance here). Now you'd have to write your own parser, which is more work and more error prone than using a well-tested existing one. Is one parser enough? While it may be able to capture the […]

So let me ask again, because I have not seen an answer so far: what exactly would be complicated? The client code? Why?
Consider my first example, […]
Thanks for the explanation. So it appears that the point of contention is the usage of shell script as a potential implementation language.
But that is pretty much what it comes down to, isn't it? There is no proper support for parsing JSON with built-in or commonly available shell tools, and that is why you are pushing for a more minimalistic format. At least that is the only argument I see.

I look at it differently, in that first and foremost I want a proper data exchange format. And proper in this definition already includes support in various languages. Now if that support does not extend to shell script then... I am sorry for the language, but it just hasn't kept up. I have absolutely no problem dropping shell as an implementation language: in my opinion nothing above five lines of code "deserves" to be implemented in it (yes, that is a more or less arbitrary number, and yes, I've broken that "rule"). Software will almost inevitably grow, and shell script is not properly equipped to handle that growth, in my opinion and experience. I don't think that JSON support is a problem whatsoever in more recent languages. Yes, it may be an additional dependency, but for an extension I don't care. My main focus is on ease of use and correctness.
I've brought up a good number of reasons in this thread already. They are sufficient for me :-) Anecdotally, I was reading this article touching on, among other things, the history of operating systems. Now say what you want about the article, but one section reminded me a lot of this discussion. One problem identified for software development in general was the excessive use of domain-specific languages, which are (and I quote): […]

Now that is mostly about programming languages (although configuration is explicitly mentioned as well). But I think similar arguments can be made for data formats (and I have basically done so already throughout the discussion). Don't get me wrong, Unix wouldn't be Unix without pipe-and-filter style programs and I am not here to say otherwise. What I am saying, though, is that this pattern does not apply everywhere. I don't think it is necessary for our use case, and as such I would not want to go with a format whose only advantage (at least from what I can see and from what I understand you are arguing) is that it lends itself to easy command-line based filtering.
Would you be terribly mad at me and run from the project if I made a call here, @robinkrahl? I thought several times about giving in and going with your suggestion, but I never could convince myself of the benefits. In fact, the more I thought about it, the more I believed that to be the wrong call. Now, I seemingly failed to convince you of my point of view, but I think we may have to accept that part of this decision is subjective and move on. I would really like to wrap this entire extension topic up next, and there are a few more design decisions to make here, so I'd rather get the discussion at hand concluded and focus on things ahead. Perhaps ultimately we even end up going without an export as we discussed, but it is what I plan to prototype next (we obviously don't need the decision to get the prototype rolled out, but this discussion has been dragging on for a long time and it appears to have reached a dead end).
Of course not, there has to be a decision and the issue is not that important. But let me make one final suggestion: most programs that want to access the Nitrokey should use libnitrokey or nitrokey-rs in the first place. So we basically have two use cases that would want a nitrocli interface: integration with other applications and scripting (which is my main concern), and extensions (which is your main concern). The reason why I don’t care that much about extensions is that I think they should be using an API: nitrocli should also be a library, and the command functions in the […]

This would eliminate the need for any parsing of the output (if the commands returned a displayable structure instead of directly printing to stdout). It would require people to use Rust, but for the limited field of extensions that need to interact with the nitrocli output, I’d be willing to accept that. Also, it would increase the size of the extension binaries as they would have to link against nitrocli, but I think that’s okay. (By the way, that’s also the way cargo does it.)

We don’t have to discuss this in another 26 comments, and probably you already considered that option, but I wanted to bring it up anyway before a final decision is made.
Thanks for bringing that up. I did consider the library approach but discarded it quickly because I didn't really want to get into the business of providing a stable API at that level. But I guess it won't be much worse than any other library crate. It also seemed over-engineered to me. The command-driven path appeared to offer more of a reasonable middle ground. Interesting that Cargo provides a precedent here; I wasn't aware of that.
I guess my point of view (and perhaps that is short-sighted) is that we can and should cover both use cases with the extension design.
Closing this issue as extension support made it into […]
How should we deal with advanced functionality that goes beyond just providing access to the Nitrokey’s features? Let me illustrate this with some examples:

1. Similar to `passmenu`, I’d like to have something like `otpmenu` that lets the user select an OTP slot via `dmenu`, generates an OTP and copies it to the clipboard.
2. `otp status` is rather slow, so I’d like to have something like `otpcache` that caches the output from `otp status` to speed up queries for 1.
3. A tool that makes a screenshot (`import`), tries to identify and read a QR code in this region (`zbar`), and then configures a slot on the Nitrokey based on the data from the QR code.

Stuff like 2. could be easily implemented as a `nitrocli` subcommand (though this would pollute the global command namespace). For stuff like 1. and 3., that’s harder as it would pull in many dependencies only for a niche task.

Possible solutions:

1. Add a `contrib` folder with scripts like these (similar to `pass`).
2. […]
3. A plugin mechanism (similar to `cargo`).

While 3. might be interesting to implement, it is totally not worth the effort. I just listed it for the sake of completeness. I tend to prefer option 2 as it would make it easier to distribute the tools and I think they could be useful for many `nitrocli` users, but option 1 would be fine too. What do you think?