
Namespace Registry #8

Open
tshemsedinov opened this issue May 1, 2017 · 2 comments

@tshemsedinov (Member) commented May 1, 2017

Here is the repo for the Metarhia package manager, NamespaceRegistry:
https://github.com/metarhia/NamespaceRegistry
Expected features:

  • No functionality duplication
    • Before somebody starts feature development, they need admin approval
    • Admins will review functionality from time to time to find what can be further unified
  • Single version for the whole Registry
    • No package versions; any change increments the Registry version
  • Complex testing together
    • Each package will be tested in correlation with the others
    • If all tests pass, that particular version will be fixed forever
  • No naming conflicts
    • All packages are mapped to a common namespace system api.*
    • A namespace endpoint is not a particular implementation but a fixed interface
      • Different implementations will be loaded depending on the environment (node, browser, mobile), see the sketch after this list
        @metarhia/angeli @metarhia/amici
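
As an illustration of the last point, here is a minimal sketch of how a fixed namespace endpoint could be resolved to different implementations at load time; the api.fs name, the environment detection, and the implementation module paths are all hypothetical, not an agreed design:

```js
// Hypothetical resolver: maps a fixed namespace endpoint (e.g. api.fs)
// to one of several interchangeable implementations, chosen by runtime
// environment. Module paths and names are illustrative only.
'use strict';

const implementations = {
  node:    { 'api.fs': './impl/fs-node.js' },
  browser: { 'api.fs': './impl/fs-indexeddb.js' },
  mobile:  { 'api.fs': './impl/fs-mobile.js' },
};

const detectEnvironment = () => {
  if (typeof window !== 'undefined') return 'browser';
  if (typeof process !== 'undefined' && process.versions && process.versions.node) {
    return 'node';
  }
  return 'mobile';
};

// Every module returned here must satisfy the same fixed interface,
// so callers depend on the namespace name, never on a concrete package.
const resolve = (namespace) => {
  const env = detectEnvironment();
  const path = implementations[env][namespace];
  if (!path) throw new Error(`No implementation of ${namespace} for ${env}`);
  return require(path);
};

// Usage (assuming the implementation modules exist):
// const fs = resolve('api.fs');
```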
@tshemsedinov (Member, Author)

See also #5

@aqrln (Member) commented May 2, 2017

Before somebody starts feature development, they need admin approval

That obviously doesn't scale well... like, almost at all. So I'm immediately following up with a question: what will the exact purpose of this package manager be? As in, "which ecosystem will it serve?" Will it be some ecosystem of open addons for Metarhia applications to be fetched from GlobalStorage, or will this be used just as the special package manager for Metarhia itself, or...?

Single version for the whole Registry
No package versions; any change increments the Registry version

That actually sounds like a cool idea worth researching, but we must really understand all the ins and outs of this approach and what its implications will be. For one, how is that supposed to work with multiple release lines of some packages? Is that simply not supported (so that if you are incompatible with a major upgrade of package A, you cannot upgrade to newer versions of packages B, C, D, E, F, G, H, I, J and K that you are perfectly compatible with), or will it just result in multiple release lines of the whole repo (eventually leading to exponential growth in the number of repository versions, although that might be a non-issue if we come up with some clever way of storing the repo)?
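
For concreteness, a single registry-wide version could be pictured as a snapshot that pins every package revision that was tested together; the structure below is only a hypothetical sketch of that idea, not a proposed format:

```js
// Hypothetical registry snapshot: one monotonically increasing version
// pins the exact revision of every package tested together, so upgrading
// means moving the whole application from one snapshot to the next.
const registrySnapshot = {
  version: 1027, // single version for the whole Registry
  packages: {
    'api.fs':     { revision: 'a1b2c3d', spec: 'fs-1' },
    'api.crypto': { revision: 'd4e5f6a', spec: 'crypto-2' },
  },
};

// Any change to any package produces snapshot 1028. There are no per-package
// semver ranges to resolve, but there is also no way to stay on an old
// revision of one package while upgrading the rest -- which is exactly the
// multiple-release-lines question raised above.
```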

Each package will be tested in correlation with the others

How exactly? Will it be a huge monolithic integration test or...?

Different implementations will be loaded depending on the environment (node, browser, mobile)

Not only the environment, but also the user's needs. Having a single package for every problem is a nice idealistic idea that I like and appreciate theoretically, but would be horrified to encounter in practice, because the cliffs of real-world applications are quite steep for such a tender thing as idealism. Often (in the vast majority of cases, really) there are plenty of ways to achieve the same result, each of them more appropriate in one situation or another (think of two algorithms A and B implemented as functions with the same signature, but with A being faster on small datasets and B being more efficient on large ones).
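
A small sketch of the algorithms A and B point: both satisfy the same fixed interface sort(array) -> array, yet neither can simply replace the other, so a selector (or the user) still has to choose; the implementations and the cut-off value are made up for illustration:

```js
// Two hypothetical sorts behind one fixed interface: sort(array) -> array.
// A (insertion sort) tends to win on small inputs, B (merge sort) on large
// ones, so a single "blessed" implementation per task would fit neither case.
const sortA = (arr) => {
  const a = arr.slice();
  for (let i = 1; i < a.length; i++) {
    const x = a[i];
    let j = i - 1;
    while (j >= 0 && a[j] > x) {
      a[j + 1] = a[j];
      j--;
    }
    a[j + 1] = x;
  }
  return a;
};

const sortB = (arr) => {
  if (arr.length <= 1) return arr.slice();
  const mid = arr.length >> 1;
  const left = sortB(arr.slice(0, mid));
  const right = sortB(arr.slice(mid));
  const out = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return out.concat(left.slice(i), right.slice(j));
};

// Same signature, different trade-offs; the threshold here is arbitrary.
const sort = (arr) => (arr.length < 64 ? sortA(arr) : sortB(arr));
```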

A namespace endpoint is not a particular implementation but a fixed interface

Yes, please. Fixed interfaces. That is the key awesome idea about the whole thing. So I'd like to go ahead and lay out some thoughts on this topic that I've had for a while, since I've played with this idea in my mind too.

I don't think there should be any need to limit the number of implementations (and I don't believe that's going to work, for the reasons I elaborated on above), but there is much sense in standardizing APIs and data contracts before someone writes a package to solve a particular task, and in forcing all the other implementations to obey the very same interface. Diversity of packages is actually a great thing, and I find it perplexing to see it framed as a problem that has to be solved.

An interesting approach may be:

  1. Write a proposal for a new API. The proposal consists of a formal specification and a set of tests which ensure that any given implementation conforms to this specification.
  2. Receive feedback, pass review, make amendments.
  3. The specs are adopted as a standard for what you call a namespace in this issue. Changing these specs results in a major bump of all the packages that implement them.
  4. Every package has a metadata field that associates the package with a specification. It might also have some other metadata that allows the concrete implementation to be chosen automatically by the runtime based on programmatic conditions, if we go for the namespaces approach (a sketch follows this list).
  5. Passing the tests from the specification is mandatory for a package to be published successfully, but any non-trivial package will also have its own tests, since it may implement the API at a different level of granularity and thus allow the code to be tested in a specific scenario. We might also want to enforce a minimum level of code coverage.
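
To make items 1, 4 and 5 more tangible, here is a hedged sketch of a specification identifier with conformance tests and a package metadata field linking an implementation to that spec; all names, fields, and the hash-1 spec itself are hypothetical:

```js
// Hypothetical conformance suite for a specification "hash-1".
// Any package claiming to implement hash-1 must pass these tests on publish.
'use strict';
const assert = require('assert');

const conformanceTests = {
  spec: 'hash-1',
  run(impl) {
    // The spec fixes the interface: hash(string) -> lowercase hex string.
    assert.strictEqual(typeof impl.hash, 'function');
    const digest = impl.hash('abc');
    assert.ok(/^[0-9a-f]+$/.test(digest), 'digest must be lowercase hex');
    assert.strictEqual(impl.hash('abc'), digest, 'hash must be deterministic');
  },
};

// Hypothetical package metadata (item 4): associates the package with the
// spec and gives the runtime hints for automatic implementation selection.
const packageMeta = {
  name: '@metarhia/example-hash',
  implements: 'hash-1',
  environments: ['node', 'browser'],
  coverage: { minimum: 0.9 }, // publishing gate from item 5
};
```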

Whichever approach is chosen, making a human admin supervise all the packages seems like overkill. If we do that, there are no actual problems left to be solved, tbh. The only thing an admin would do is check that all the packages adhere to semver strictly, and that's all. Nothing will ever be broken because of incompatibility; that's exactly the problem that semver and npm have successfully solved, assuming the package authors are sane. Something can still be broken because of a bug, though, and a single repo version does not prevent that. If there's a test for that code path, it makes no difference where the test itself resides or whether the bug is caught while testing the package alone or in ensemble with the other ones; and if there isn't such a test, the bug slips through either way.

But the thing is, doing so is definitely not the "ultimate automation" that I believe the motto has been. It would be really great to come up with a solution that excludes the human link from this chain and pushes all the guarantees into fully automated checks (and I'd say a type system, but we are doing some JavaScript here).
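
As a purely illustrative example of such an automated check, a publish gate could verify that a new version is a strict semver increment over the previous one and that the spec's conformance tests pass, with no human in the loop; nothing below is an existing Metarhia tool:

```js
// Purely illustrative publish gate: no human admin, only automated checks.
'use strict';

const parseSemver = (v) => {
  const m = /^(\d+)\.(\d+)\.(\d+)$/.exec(v);
  if (!m) throw new Error(`Invalid semver: ${v}`);
  return m.slice(1).map(Number);
};

// Accept only a bump of exactly one component, with lower components reset.
const isValidBump = (prev, next) => {
  const [pM, pm, pp] = parseSemver(prev);
  const [nM, nm, np] = parseSemver(next);
  return (
    (nM === pM + 1 && nm === 0 && np === 0) ||  // major
    (nM === pM && nm === pm + 1 && np === 0) || // minor
    (nM === pM && nm === pm && np === pp + 1)   // patch
  );
};

const canPublish = (prevVersion, nextVersion, impl, conformanceTests) => {
  if (!isValidBump(prevVersion, nextVersion)) return false;
  try {
    conformanceTests.run(impl); // the spec tests from the previous sketch
    return true;
  } catch (err) {
    return false;
  }
};
```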

Hope to have my comments and questions addressed, and I'm looking forward to hearing feedback on my thoughts too. Thanks for reading!
