Marketplace #1750
This also implies that there is a way to manage libraries, or at the very least to create a new library. This is also relevant for the marketplace: the information we want to show in the marketplace needs to be configured when creating or editing a library.
Should this be possible for any node from a different library, or only for locally defined ones? If we restrict it to the latter, should there be some way to show "read-only" nodes, for example from the standard library?
Name idea: instead of
Note: this should be a proxy for an "Enso account" and could be expanded to other authentication services. Thought: do we have account management for the cloud? This maybe should be the same account from there (at some point).
Needs a design mock-up.
The preview for libraries without an image could be an abstract graph representation of the nodes. Or some fancy cool graphic derived from it. That makes it unique and recognizable even without a custom image.
How would this be disambiguated later on if there are multiple possible repositories for libraries and there are name clashes? Are we going to ensure unique "Enso" usernames? What happens on name change?
This should probably not be clear text names but hashes or something along those lines.
Is this going to be Enso's "cargo"? Or is there another tool that will be used to invoke, e.g., tests?
Is
Should this be based on a version? It might be beneficial to unpublish specific buggy versions of a library, but not the whole library. See Cargo's
See above: these should not be clear text names.
How are new libraries created? What does a user need to do when creating a new library?
Given that we currently support circular dependencies between modules (it's very useful), this may not be an actual problem. While we can't deal with circular re-exports, a module
What do you mean by this? Adding and removing imports as necessary based on the extraction to the library?
I'm not sure what you mean by "logical path". Do you mean the chain of call-sites that got the user to this point? Along similar lines, what should happen when a user tries to edit a library that they do not own? We probably want some concept of a "protected" library that users have to accept a warning to edit, and we should track the edit status of these libraries in our logging. We don't want users to accidentally modify the standard library and then report bugs, for example. By the same token, though, we do want it to be easy for users to make modifications to functionality in other libraries as part of their own libraries. We probably want some ability to "extract from $foo into $bar" to enable this.
What does this mean? A dropdown containing the various
We should definitely print a warning about this.
We need to have an exceedingly clear terms of service for this kind of thing. It's very important that users agree to it before getting access to any of this functionality.
Pretty layouts are all well and good, but this sounds like a nightmare for discovery. We really need to think about how this will enable users to find the library that they want.
Cumulative progress is all well and good for an overview, but I think it important that we allow users to hover over it and see detailed progress for each download without having to actually re-enter the marketplace.
This is a bad idea due to the simple fact that a GitHub username is mutable. People can change their usernames which means that the import path of the library would either change or become vulnerable to a supply-chain attack by someone taking over the old username. If we want to go with user-based prefixes we need to maintain the global uniqueness and immutability of these prefixes ourselves, not rely on GitHub.
I don't know what you want in

Thinking more on this, we're still open to the same legal grey areas, as users can presumably upload arbitrary contents of the
This is all well and good but seems like a bit of a usability problem to me if we don't do it well:
What is the distinction between
Please make sure that you understand the potentially astronomical costs of this. We are expecting users to be fetching LFS data.

Furthermore, if we use LFS we need to pre-populate an LFS configuration and reject pushes that don't use it. Otherwise we will have users making the mistake of storing binary files outside of LFS, which will severely impact repository performance. It's also a consideration that such a repository will be subject to a heavy amount of churn, which pollutes the repository history with lots of small commits. This is likely to run up against multiple performance edge-cases in git, as GitHub sees with large repos.
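To make the "pre-populate an LFS configuration" idea concrete, the repository could ship a `.gitattributes` that routes binary artifacts through LFS, with a server-side hook rejecting pushes of large blobs outside it. The exact patterns below are assumptions, not a decided list:

```gitattributes
# Route common binary artifacts through Git LFS (illustrative patterns only)
*.png  filter=lfs diff=lfs merge=lfs -text
*.jpg  filter=lfs diff=lfs merge=lfs -text
*.so   filter=lfs diff=lfs merge=lfs -text
*.dll  filter=lfs diff=lfs merge=lfs -text
*.jar  filter=lfs diff=lfs merge=lfs -text
*.zip  filter=lfs diff=lfs merge=lfs -text
```

A pre-receive hook would then scan incoming objects and reject any non-LFS blob above some size threshold, so users cannot accidentally commit large binaries into plain git history.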
We need to be particularly careful with this as git is fairly unfriendly and it's very easy for users to accidentally upload PII or protected classes of information with their library. Git is a poor tool for retracting such things, which makes me think that we probably don't want to use it. We need to provide users the means to remove such mistakes, even if that removal is incomplete due to having been public (as is always the case). I don't think that carefully warning the user before upload is sufficient when the mistakes become effectively permanent (as we can't force-push the repo due to lots of people depending on it).
We already have support for this, but
This merging needs to be specified very carefully as we need to have a deterministic and clear mechanism for resolving conflicts.
This size limit helps somewhat with egress costs, but it's worth keeping in mind that the number of downloads will usually be far greater than the number of uploads, so we still get stung with large egress costs.
Yes, this means if a node is extracted from the current file, the imports should be adjusted, so everything still works.
My understanding was that Nightly is supposed to be the most up-to-date daily version, while unstable is "what becomes stable in the next release". The idea is similar to what Rust does with its train release model.
I think the idea is that the user does not really interact with the repo itself, but this is only there to back our library management. So, user side configuration should not be an issue, since this is all abstracted away and only the commands from our tool / the IDE are invoked.
Makes sense, though my concerns about LFS and bandwidth still stand.
Shouldn't it be possible to enter it, but for example in some read-only mode? (If that cannot be done now, maybe we could at least note it for some time in the future?) I think it may be quite useful to see the implementation of non-editable nodes, as sometimes this may be very helpful for debugging or just understanding some implementation issues.
Yes, that is definitely something that we want to tackle at some point. But to limit the scope of the initial implementation we went with the simplest solution.
As discussed, I'm comparing GitHub (with LFS) vs S3 as backends for storage.

Git
S3
If our current approach is that we keep the directory structure as described above and download only parts of it (for example skipping the

Pricing comparison

S3 - Storage: $0.023/GB/month; Transfer: $0.09/GB/month (first 1 GB free); requests: $0.0004/1k requests (negligible)
Git LFS - first 1 GB of transfer/storage free, then $5 per 50 GB of storage and transfer per month

For the sake of an example, let's assume that we have 1k users each downloading 1 GB of libraries per month and that we store 5 GB of data for library versions:
So for such amounts the difference in pricing is indeed not significant. S3 is priced in a regressive manner, so for significantly bigger transfer capacities the pricing difference will be larger (S3 will be more affordable).
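For reference, the example scenario above works out roughly as follows. The interpretation of LFS "data packs" (one $5 pack covering 50 GB of both storage and bandwidth, sized by the larger of the two) is my reading of the published pricing, so treat the LFS figure as approximate:

```python
import math

# Assumed scenario from the discussion: 1k users, 1 GB download each per
# month, 5 GB stored.
users = 1000
transfer_gb = users * 1   # 1000 GB/month of downloads
storage_gb = 5

# S3: storage $0.023/GB/month, transfer $0.09/GB/month, first 1 GB free.
s3_cost = storage_gb * 0.023 + max(transfer_gb - 1, 0) * 0.09

# Git LFS: first 1 GB free, then $5 per 50 GB pack covering both storage
# and bandwidth, so packs are sized by the larger of the two.
billable_gb = max(storage_gb, transfer_gb) - 1
lfs_cost = math.ceil(billable_gb / 50) * 5

print(f"S3:  ${s3_cost:.2f}/month")   # about $90/month
print(f"LFS: ${lfs_cost}/month")      # about $100/month
```

Both land around $90-100/month for this workload, which matches the "not significant" conclusion above.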
While analysing possible designs I have encountered a quite important question whose answer may affect which designs are possible: do we plan to have a mechanism for overriding a dependency version over what is defined in an edition? If yes, how should that work?

Motivation: let's say we have library A-1.0.0 that depends on B-1.0.0, both included in the latest edition 2021.4. Now library B has been upgraded to B-2.0.0 (assume there are breaking changes) and it is scheduled to be bumped in edition 2021.5. If library A does not update its dependencies, it can no longer be included in edition 2021.5, because it would cause a dependency conflict (B cannot co-exist in a single edition in two versions at once). To fix that, the maintainers of A want to perform an upgrade, which will involve some code modifications (as there were the mentioned API changes). To do so, they need to be able to use B-2.0.0, but it is not yet part of any edition. They cannot wait for 2021.5 to be released because that will be too late - A will need to be pulled out of this edition along with any other libraries that depend on A. Which shows a need for some overriding mechanism.
Just a note that it probably makes sense to look at Azure Blob Storage as we have lots of credits there and are slowly moving our infra to Azure.
As a summary - I agree that git is a great abstraction of file storage, but it is from its inception tailored for use-cases where we download the whole repository. There are mechanisms that were developed as an answer to monorepos which now allow it to avoid downloading all the data, but they are still mostly experimental and not mature yet - they may work ok, but we do not really have any performance guarantees.
The main issue, besides the fact that the features we want to use are experimental, is that we are not really using many features that git provides, at the cost of having to 'fight' with git to get what we need. For example, we are not using the versioning history and incremental updates for the library files, because we are storing library versions in separate directories, so a specific library version is actually immutable and does not have 'history' (apart from not existing and then existing). We could try to leverage git's versioning, but that would require overwriting the library files for new versions. However, such a solution would then conflict with using different editions across projects - we would need to somehow check out different versions at the same time - so we would need to actually move stuff out of the repository to be used (to allow using different versions of the library at the same time) - but then we start treating git just as a content delivery system, which it is not. The only upside is that once the user has installed library A-1.0.0 from edition X and they create a new project with edition Y which requires library A-1.1.0, the incremental update may avoid re-downloading some artifacts. But this would require modifying the design of the directory structure and poses other issues.
Suggested designs

Blob storages

Loose blob storage

We can use the already suggested storage layout, just storing the library data in separate directories, each file as a separate blob.
Uploading would be done through the service, but conceptually it just consists of putting the library files in its directory. Very easy to implement for custom repositories.

Vendor-lock or adding manifests

As the above approach needs to know the structure of the directories to download, we need some way to convey this information. An alternative solution is to generate a manifest file that would reside at the root of the directory for each library and would contain a list of all files in all directories that are part of this library. This is a completely platform-agnostic approach and it would allow us to easily swap backends for the package repository. In particular, any kind of hosting service would work. Moreover, it can be a bit faster, because we can download a single manifest instead of having to recursively traverse all subdirectories (each subdirectory incurring an additional request). The only downside is that the manifests need to be generated somehow - but that is not really a problem, as our tool can generate them when the library is being uploaded; the logic for generating the manifests is also extremely simple, but it affords us portability.

Blob storage with subarchives

A completely alternative solution is to not store the files in a 'loose' manner, but instead create packages for logical components of the library. We can have a separate package for each logical component - for example for the sources, for the tests, for the binary data. Some simple files that are always present, like configuration files / metadata, or the license file, could still be stored loosely for simplicity. The proposed directory structure would be:
Now, as each logical component is packaged separately, we can easily download only the sources, ignoring for example the test files. We gain better storage and transfer efficiency, as the data is stored compressed and the download operation just downloads a constant number of packages for each library. The package manager can then unpack the packages locally, which is a simple operation.

Storing edition metadata

Each separate edition can be stored as a separate text file, for example:
Alternatively, to avoid having too many files next to each other in the most populated directory - nightly - we could have subdirectories for each year.

Git-based

As originally suggested
The original idea was to store the above structure in a git repository. Then we could use

However, with this approach we don't really get much from git itself - we are not using any tagging or version control capability, because we put new library versions in separate directories anyway - so in terms of git history they are not connected in any way. So we do not have any incremental updates or other niceties of git.

There are important performance concerns - after reading through the documentation, to my best understanding, even if we do the most restricted kind of clone (shallow+sparse+partial clone), we still download the whole tree - that is, we have references to every object that is in the current snapshot of the repository. That means that we have a reference to every file of every library, even ones not downloaded. Of course this does not take a lot of space, because we fetch the blobs lazily, so we have refs to files but no contents of those files. But still, if we want to support huge amounts of big libraries this may be a significant bottleneck, because the initial download of the repository must still download the refs to all available libraries (and the more files a library has, the more downloading there is to be done). I'm unfortunately not 100% sure that this is the case, but based on multiple sources that seems to be the most likely scenario. To verify it with absolute certainty I would need to read very deeply into git's source code or perform some tests - this will take more time (I'd guess around half to one day to verify this thoroughly), so I didn't want to do it unless absolutely necessary. Based on current data, see for example this blogpost, I'd say I have 85% certainty that this is the case.

There is also an issue that any operation like checking out a library has to check all libraries against the sparse checkout mask, so our performance is linear in the number of libraries available in the repository.
One way to alleviate this would be to store libraries in a directory structure like us/er/na/me/li/br/ar/yn/am/e - thus making the tree deeper but less wide. There are also analyses suggesting that a subsequent shallow fetch after a shallow clone may have less-than-ideal performance - but it is hard to say if this would translate to our use-case exactly; it would require performing our own benchmarks, as the analyses had slightly different circumstances.

Actually using the version control capabilities

As mentioned above, the original design requires a lot of hassle to use git, but it does not actually use its version control abilities at all, and we cannot easily extend that approach to use them - since we keep the library versions in separate directories, they are completely separate entities to git and so there is no history-connection. We can modify the design to take advantage of git's history and the incremental updates. That would require keeping each library version in the same directory, so different versions would not be in different directories but under different commits. The edition files could reference a library version by the commit when it was added. Thus newer versions will be connected through git's history, and when the user downloads updates for a newer edition, they could download only the deltas.
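The deep-but-narrow layout mentioned above is just fixed-size chunking of the username+library name. A quick sketch (chunk size 2 as in the example; the exact scheme is an assumption):

```python
def shard_path(username, libname, chunk=2):
    """Split the concatenated username and library name into fixed-size
    chunks, producing a deep but narrow directory path, so each git tree
    object stays small, e.g. 'username' + 'libraryname'
    -> us/er/na/me/li/br/ar/yn/am/e."""
    flat = username + libname
    parts = [flat[i:i + chunk] for i in range(0, len(flat), chunk)]
    return "/".join(parts)

print(shard_path("username", "libraryname"))
# -> us/er/na/me/li/br/ar/yn/am/e
```

This trades wide directories (one entry per library) for path depth, at the cost of less human-readable paths.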
However, this gives rise to other complications - we can have different projects that use different editions that reference different versions of the same library. But we can have only one 'point of history' of git checked out at the same time. And we don't want to limit ourselves to having only one project at a given point in time. So we actually need to copy each library to some separate directory so that multiple library versions can coexist. But then git turns into just a download manager. There are some advantages to that (we have the incremental updates), but the complexity is quite high, and also all the performance disadvantages listed in the previous section still stand (e.g. we still may need to download the whole directory tree (albeit without contents), and the issues with operations being linear in the number of libraries). I don't think that the feature of incremental updates (I think this is the main selling point here, or am I missing something?) is worth this additional complexity. Additionally, for some libraries it is quite possible that a significant part of the library's size will be some native components that will be stored on LFS and not processed incrementally anyway. Or put another way - a library that is mostly text files can be downloaded in full very quickly (regardless of being incremental or not), and a library is usually big in size due to big binary files - in both cases the incremental updates do not give too much of a benefit.
A short summary after this long analysis: it seems possible to implement the repository using git, but we don't get many advantages at the cost of quite high complexity (also good to remember that, as noted earlier, we need to use the git CLI instead of bindings) and having to rely on experimental features whose behaviour may change over time and whose performance is not really well understood nor predictable. But we have a simple abstraction for file storage - any kind of storage system that exposes access to files via HTTP(S) - and I think using that will be easier to implement, more stable and more predictable in terms of performance (for example, we don't have to be afraid of scaling S3 storage (or any other good alternative) with a growing number of libraries). By using a standardized format of archives (as described in Blob storage with subarchives) or manifest files (which can be easily generated) we can have a standardized format for storing the libraries repository which can be used on any kind of storage system (be it S3, Azure or even FTP access to an HTTP server).
@MichaelMauderer I believe that allowing people to see how libs are implemented under the hood in read-only mode is crucial for learning the application, so I believe this solution may be a little bit too limited even for the first shot. If we allowed people to browse libs in read-only mode, as @radeusgd suggested, that would let them debug / understand / learn much faster and better. I feel that displaying errors instead is a little too limiting here. Also, I think the stable editions should be named

Regarding the rest of the things, we will have a call tomorrow.
One more thing that came up while I was refining the tasks is that we will need some mechanism for creating new editions. Shall we also have a task for a tool that will create a new edition semi-automatically? Depending on what we choose, I guess there should be a task to create a tool for generating new (and probably also nightly?) editions, or at least documentation explaining how to do this.
For dependency resolution you can consider using Z3.
@iamrecursion that would be a total overkill. We want it to be delivered in 8-12 weeks. The dependency mechanism described here (editions without per-package dependencies) would not need Z3 at all.
Meeting summary

We will use a static-file based approach, mostly similar to what was described in the Blob storage with subarchives section. For downloading, we will rely only on the HTTP protocol (if possible, see the possible exception below) so that various storages may be used under the hood. Each library will have a path determined by its name and version:

For now we will download all of these components except for the

(Not directly discussed, but seems like a logical conclusion) Another exception would be the

Edition files will be stored as plain files, each edition being uniquely identified (nightly with its date or stable with its version number).

We need to create the following tools:
The identification service connected with Enso Cloud will need to provide:
When a new library is created (mainly in the IDE, for example when extracting a piece of code) the user should already be logged into their Enso Cloud account. If they are not, this is the moment where the log-in should take place. That is because the newly created library will contain the username as part of its import path, so we need to know it in advance to avoid renaming.

For now the edition files will be created manually.

We want to be secure against at least basic DoS attacks (like a malevolent actor repeatedly downloading lots of libraries to use up our bandwidth). One way to do that would be to require being logged in to download libraries, keep a log of how much bandwidth a single user has used, and set a daily limit to some reasonable value. TODO: this requires research as to how we could integrate such a solution (authentication + logging used bandwidth) with the S3 backend.

What I have slightly missed, and we may need to further discuss at some point, is how the static storage will be integrated with the marketplace website. The marketplace can definitely load the edition files to know which packages are available in a given edition, but how should the searchability work? Should it somehow index the metadata from each package's
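The per-user daily bandwidth quota described above could be sketched like this. Everything here is hypothetical (class name, quota, persistence): a real deployment would persist the counters and hook into the S3 access path rather than keep them in memory:

```python
import datetime
from collections import defaultdict

class BandwidthTracker:
    """Track per-user download volume per calendar day and reject
    downloads once a daily quota is exceeded (illustrative sketch)."""

    def __init__(self, daily_limit_bytes):
        self.daily_limit = daily_limit_bytes
        self.usage = defaultdict(int)  # (user, date) -> bytes served

    def try_download(self, user, size_bytes, today=None):
        today = today if today is not None else datetime.date.today()
        key = (user, today)
        if self.usage[key] + size_bytes > self.daily_limit:
            return False  # over quota: reject (or throttle) this download
        self.usage[key] += size_bytes
        return True

tracker = BandwidthTracker(daily_limit_bytes=1_000_000_000)  # e.g. 1 GB/day
assert tracker.try_download("alice", 600_000_000)
assert not tracker.try_download("alice", 600_000_000)  # would exceed the quota
```

The counter naturally resets each day because the key includes the date; old entries could be pruned periodically.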
Has there been any thought given to using TLS on the connection? We don't want to allow people to MITM the users of the marketplace. |
@iamrecursion no thoughts on that. But if this is a real threat, we should consider it. |
Just a thought that I don't know of a single package repository for another language that requires users to be authenticated to download packages. Most don't rate-limit at all, but those that do seem to do it based on IP. Uploading packages is a different matter and does, of course, require users to be authenticated.
DoS Protection

S3 Authentication

This SO answer explains the landscape for S3 access control quite well. S3 access can be granted to IAM Roles/Users, but that is not suitable for our use-case, as IAM is intended for developers/staff, not for end users. There is also another mechanism - pre-signed URLs - an application that has permissions to access the bucket can generate a one-time URL, valid for a limited time, which can then be used by a user to access S3. However, for this to work we would need to implement some kind of proxy service which will check the auth status, update the used bandwidth and, if allowed, redirect to these pre-signed URLs. This application would need to be a Lambda or some service running on EC2 (but the latter may be worse in terms of scalability). There may be some issues with tracking the real bandwidth usage, because once we give the user the pre-signed URL, they can use it multiple times within the expiry time (so using more bandwidth than we expect), or the download may be interrupted (so the user used less than expected, possibly leading to false positives if the maximum bandwidth threshold is not high enough). It may be possible to do this better by inspecting S3 logs, but that increases the complexity of the solution significantly.

Other approaches

As noted by @iamrecursion, it is uncommon for public package repositories to require authentication to download packages, which may make our ecosystem look more closed than it really is. Most systems seem to tackle this issue by using CDNs (which we should likely use anyway for better performance) and some kind of rate-limiting protection. AWS has an Advanced Shield service which also includes a Web Application Firewall that allows for some rate limiting. It will however not protect against someone performing a slightly slower attack distributed over a longer timeframe.
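For intuition, the pre-signed URL mechanism boils down to attaching an expiry timestamp plus a signature that only the signing service can produce, so the storage frontend can verify requests statelessly. This stdlib sketch mirrors the idea only; S3's actual Signature Version 4 scheme is considerably more involved, and all names here are assumptions:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # held only by the signing service

def presign(path, expires_in=300, now=None):
    """Return a URL carrying an expiry timestamp and an HMAC over
    path + expiry; anyone holding SECRET can verify it later."""
    base = int(now if now is not None else time.time())
    expiry = base + expires_in
    msg = f"{path}:{expiry}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expiry}&sig={sig}"

def verify(url, now=None):
    """Check that the URL has not expired and that its signature matches."""
    path, _, query = url.partition("?")
    params = dict(p.split("=", 1) for p in query.split("&"))
    expiry = int(params["expires"])
    if (now if now is not None else time.time()) > expiry:
        return False  # link expired
    msg = f"{path}:{expiry}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["sig"])

url = presign("/libraries/user/Lib/1.0.0/sources.tar.gz")
assert verify(url)
```

Note the limitation discussed above applies here too: within the expiry window, nothing stops the holder from reusing the link multiple times.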
I think I don't have enough expertise in this area to evaluate this properly, so I think it would be good to discuss that with someone more knowledgeable about cloud deployments.
After a discussion with @wdanilo we decided that it will actually make sense to not have the package manager as a separate binary. Instead, it will be a Scala library which can then be used by the launcher (which will handle its CLI interface) and the project-manager/language-server. This will greatly simplify the integration, as instead of having to wrap a CLI command and parse some JSON responses, we will be able to just use a native API. The motivations for changing that decision were the following:
I would also like to clarify the design decision to not include an explicit list of dependencies, but instead rely on imports only. From the UX perspective it seems simpler, but it can complicate situations where the user wants to learn the dependencies of a project - they cannot just check a single file but need to go through the imports. We can alleviate this by providing a CLI helper command that will list the dependencies; however, it may be slow for big projects, as it needs to parse every file.

Another issue, albeit less problematic, is that when installing a library A we also need to install any transitive dependencies. Without any metadata this involves the following process: download and extract library A, parse all of its files to learn that it depends on B and C, download B and C, parse their files to learn their dependencies, etc. This will be slower (we probably don't want to optimize this now, but in the future we may want to parallelize downloading of dependencies, and not knowing about all the dependencies up-front will hinder the parallelization - further downloads need to wait while a dependency is being extracted and parsed, because before that we do not know them; the internet connection sits unused while the dependency is being extracted) and it will be harder to estimate download progress - if we do not know how many dependencies there are to download, we cannot display a good progress bar.

This however can be fixed without changing the design - on the user-level we can keep inferring the dependencies from imports alone, but when creating the package for upload (which will be done by the package-manager component), we could gather all the dependencies and list them inside of the metadata/manifest file. This way, when downloading, we could rely on this data to provide better progress estimations and, in the future, optimize download speed.
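With dependencies recorded in the manifest at upload time, the installer can walk the whole dependency graph up front instead of discovering it one extraction at a time. A sketch with hard-coded manifests standing in for a hypothetical manifest-fetching step:

```python
from collections import deque

# Stand-in for fetched manifests: each lists a library's direct dependencies
# (as the package-manager component could record them at upload time).
MANIFESTS = {
    "A": {"dependencies": ["B", "C"]},
    "B": {"dependencies": ["C"]},
    "C": {"dependencies": []},
}

def transitive_dependencies(root):
    """Breadth-first walk over manifest-declared dependencies. Because
    every download target is known before any archive is extracted and
    parsed, downloads can run in parallel and the progress bar can show
    an accurate total."""
    seen = {root}
    queue = deque([root])
    order = []
    while queue:
        lib = queue.popleft()
        order.append(lib)
        for dep in MANIFESTS[lib]["dependencies"]:
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return order

print(transitive_dependencies("A"))  # -> ['A', 'B', 'C']
```

Contrast this with the import-only scheme, where C is only discovered after A has been downloaded, extracted and parsed.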
To make sure that our designs are complete, I'm posting a few example workflows that show how all parts of the system may be interacting. Extracting nodes to a library
Signing-in
Adding a library from the marketplace to the project
Adding a local library to the project

If I understand correctly, this is currently done by manually adding the import. Or do we want to have some kind of a dialog box that can display available local libraries? This would not require additional backing from the backend, as we will already provide an endpoint to list local libraries, and adding imports can use the same logic as adding the import for remote libraries.

Publishing a library
I would like to ask @wdanilo and @MichaelMauderer to have a look at the example workflows described above, to see if they match the expectations and are clearly described.

Also, while describing them I've noticed an issue with local library versioning that we need to clarify. Essentially, when a local library is created it will get some initial version, probably

Afterwards the user can work on it, use it in some of their projects and publish it. Later, they may want to improve it or fix bugs and will want to publish an update. So we need to support 'bumping' the version number. One part of that is some kind of UI which will allow selecting a library and changing its version number before publishing (as publishing with the same version number will simply fail). However, this opens another question: what should happen when the library version is changed? Should this change the version of the currently opened library, or create a copy with a bumped version? If we were to choose the latter, we would need to allow the local libraries repository to support multiple versions of the same library (note that I'm not speaking about the repository containing locally downloaded copies of published libraries, but the location where locally created, editable libraries are gathered). This complicates the situation, because it would not be enough to have the 'override local libraries' switch that we wanted; we would need a way to select versions of the local libraries. In my opinion this is an unnecessary complication, so I would suggest that this just edits the library version in-place. If any projects depended on the older version of the library, they can just switch to the older, already published version via a usual override.
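The 'bumping' operation itself is mechanical if we assume semver-style three-component versions (an assumption; the discussion has not fixed a version scheme). A sketch of what the publish UI might do behind the scenes:

```python
def bump_version(version, part="patch"):
    """Return the next semver-style version, given which component to
    bump. Publishing the same version simply fails, so the UI would
    call this before re-publishing."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

assert bump_version("1.0.0") == "1.0.1"
assert bump_version("1.0.1", "minor") == "1.1.0"
assert bump_version("1.1.0", "major") == "2.0.0"
```

Under the in-place proposal above, the bumped number would simply overwrite the version field of the local library.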
One more feature I'd like to describe in more detail is library resolution settings. The main setting the user can change is the base edition that the project is based on. Moreover, we have the toggle 'use local libraries over published ones'. Finally, we have a list of version overrides, which can be added or modified, each consisting of a library name plus either (a version override and a repository override) or just the 'use local version' override (as the local libraries repository can contain only a single version of a given library, there is no sense in having a version there).

I also suggest that the project should not specify its edition at all within the

An alternative is to have a

The latter solution however leads to some edge cases - for example, what if

Because of these edge cases, I think just requiring the
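The resolution order implied by these settings can be sketched as a small lookup function. All names and the data shapes are hypothetical; the point is only the precedence: explicit per-library override first, then the 'use local libraries' toggle, then the base edition:

```python
def resolve(lib, edition, overrides, local_libs, prefer_local):
    """Pick the version/source for a library under the settings
    described above (sketch; precedence order is the only claim)."""
    if lib in overrides:
        return overrides[lib]            # explicit version/repository override
    if prefer_local and lib in local_libs:
        return ("local", local_libs[lib])
    return ("edition", edition[lib])     # fall back to the base edition

# E.g. testing B-2.0.0 before it lands in edition 2021.5:
edition = {"B": "1.0.0"}
overrides = {"B": ("some-repo", "2.0.0")}
assert resolve("B", edition, overrides, {}, False) == ("some-repo", "2.0.0")
assert resolve("B", edition, {}, {}, False) == ("edition", "1.0.0")
```

This also covers the edition-override motivation discussed earlier: the maintainers of A can pin B-2.0.0 via an override while 2021.5 is still unreleased.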
I would say it should be "move". If you want to create a function you can collapse the nodes first and then move the resulting node.
Do we need a new name or can we take 1:1 the source here?
This is in the "for later" category. Right now, we will not do this. Later on this will probably need refactoring support from the language server.
Agree, this is for later.
You can also do a "section" in the edition file:

    edition: 2021.4

could easily become:

    edition:
      version: 2021.4
      extra-deps:
        ...
I think taking the source name by default sounds sensible, but it can happen that a function with that name is already present in the target library, so we need some way to resolve conflicting names, to avoid adding code that causes a re-definition (which IIRC is a compile error).
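One simple policy for resolving such name conflicts is to keep the source name when it is free and otherwise append a numeric suffix. This is just one option (the UI could instead prompt the user); the function name is hypothetical:

```python
def fresh_name(base, existing):
    """Pick a non-conflicting name for an extracted function: keep the
    source name if it is unused in the target library, otherwise append
    the first free numeric suffix, avoiding a redefinition error."""
    if base not in existing:
        return base
    i = 1
    while f"{base}_{i}" in existing:
        i += 1
    return f"{base}_{i}"

assert fresh_name("parse", {"render"}) == "parse"
assert fresh_name("parse", {"parse", "parse_1"}) == "parse_2"
```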
I think this sounds good, although then I'd suggest always making it a section, i.e. it should always have a format like
Otherwise we would need extra logic to handle the edge case to 'collapse' the section if the version is the only setting which in my opinion adds unnecessary complexity to that logic. Or do you think it is worth it because the shorter version is significantly more readable/understandable? |
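The conflict resolution mentioned above could follow a simple suffixing strategy; this sketch is my own illustration of one possible approach, not the IDE's planned behaviour:

```python
# Illustrative conflict resolution for a collapsed function's name:
# suffix with a counter until the name no longer clashes with a
# definition already present in the target library.

def unique_name(proposed: str, existing: set[str]) -> str:
    if proposed not in existing:
        return proposed
    i = 1
    while f"{proposed}_{i}" in existing:
        i += 1
    return f"{proposed}_{i}"

print(unique_name("draw_shape", {"draw_shape", "draw_shape_1"}))  # -> draw_shape_2
```

A real implementation would more likely prompt the user, but either way the re-definition error has to be avoided before the code is added.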
Understandable, as that is rather complex. But then we need to remember that we probably shouldn't remove the 'original' function after the extraction. If we were able to do the proper refactoring, it would make sense to delete the original function (as it was moved, not copied, to the library). But since we are not doing the refactoring, removing it would break existing code, and we probably don't want that. If the user wants to change the references manually, they can also manually delete the old function.
## Specification

- Package Manager:
- Engine:
- Cloud:
- IDE:

## Requirements
### GUI related description

#### Managing Libraries
There needs to be a way to create and delete libraries via the GUI.
#### Assigning nodes to libraries
There should be an option in the node's context menu (RMB) to assign this node to a library. After doing so:
#### Navigating the nodes
If a user double-clicks a node that was refactored to a library, they should properly enter its definition in the graph editor. Breadcrumbs should reflect the user's logical path, not the path on the disk (as currently). If they then open the text editor, it should show the code of that component.
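As a tiny illustration of "logical path, not disk path": the breadcrumbs could be derived from the node's qualified name rather than from file locations. The qualified-name shape here is an assumption based on the `wdanilo.Shapes` naming scheme used later in this issue:

```python
# Illustrative only: breadcrumbs reflecting the logical (import) path of a
# node rather than its on-disk location.

def breadcrumbs(qualified_name: str) -> list[str]:
    """'wdanilo.Shapes.Circle.draw' -> ['wdanilo.Shapes', 'Circle', 'draw']"""
    user, library, *path = qualified_name.split(".")
    return [f"{user}.{library}", *path]

print(breadcrumbs("wdanilo.Shapes.Circle.draw"))
```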
- The user should be able to select the `Edition` for the project from the list of available `Editions`.
- The `Edition` field in the project's configuration file should be updated and should be properly handled by the engine.

#### User accounts
#### Marketplace panel

- Library information shown in the marketplace panel should be based on the `Main.enso` file doc.

### Engine related description
#### Libraries on disk
All user-defined libraries should be placed next to downloaded libraries in `~/enso/libraries`, next to `~/enso/projects`. The idea is that when a user defines a library (a set of nodes), it should be accessible out of the box in all of their projects. The engine should also use the `ENSO_LIBRARY_PATH` env variable to discover other folders where libraries can be placed. The above locations should be used by default. There should also be a parameter in the `project.yaml` and a way to pass command-line parameter-based overrides for this.

#### Library naming
All user-defined libraries should have a name starting with their author's Enso account username. For example, a shapes library could be named (and imported) as `import wdanilo.Shapes`. The only exception is the `Standard` library, which is provided by the core team.

#### Library metadata
- Each library should contain a folder `meta`, which should contain `icon.png` and `preview.png` (of predefined sizes). These images will be used by the library searcher. In case of missing files, a default will be used.
- Each library should contain a `LICENSE.md` in its top folder. By default, the `LICENSE.md` should be populated with the MIT license. In our terms of use, there should also be a sentence that in case of a missing `LICENSE.md` file, the license defaults to MIT.

#### Library discovery
There should be no special configuration file to describe library dependencies used by the project. All libraries should be auto-discovered based on imports.
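Auto-discovery based on imports could work roughly like the sketch below. The regex and function are illustrative, assuming imports of the form `import <user>.<Library>` as shown in the "Library naming" section; they are not the engine's actual implementation:

```python
import re

# Sketch of import-based dependency discovery. Assumes imports of the
# form `import user.Library`; illustrative only.
IMPORT_RE = re.compile(r"^\s*import\s+([A-Za-z_]\w*\.[A-Za-z_]\w*)", re.MULTILINE)

def discover_dependencies(source: str) -> set[str]:
    """Collect every `user.Library` mentioned in import statements."""
    return set(IMPORT_RE.findall(source))

code = """
import wdanilo.Shapes
import Standard.Table
main = here
"""
print(sorted(discover_dependencies(code)))  # -> ['Standard.Table', 'wdanilo.Shapes']
```

The benefit of this design is that there is no dependency manifest to keep in sync with the code: the imports are the manifest.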
#### Library versioning
The library versioning should be based on a simple `resolver` configuration. A resolver can be one of `nightly-<VERSION>`, `unstable-<VERSION>`, or `lts-<VERSION>`, where `VERSION` uses the semantic versioning schema. A resolver is just a file containing all package versions available in the marketplace.

### Package-manager related description
#### Storage

#### Management
The package manager should support the following commands:

- `update`, which should reset the local git repo and pull the `libraries-version` folder only.
- `install [--libraries-version=...] <NAME>`, which should look up the name in the appropriate libraries-version file, pull it, and repeat that for all of its dependencies. All pulled source code should be located in `~/enso/libraries/<libname>/<version>`. It is important to note here that only the needed things should be pulled: the sources of the library, without the `test` folder.
- `push <PATH>`, which should upload the library at the provided path, or the library located in the closest parent of the CWD if the path is missing. This should use SSH-based authentication on GitHub. Of course, users would not have access to it directly; see the "Publishing libraries" section below to learn more.
- `search`, which should search package names by a part of the name.
- `info`, which should provide the name, version, and synopsis of a given package.
- `publish`, which should publish the package just like the `push` command, but should not require SSH authentication to GitHub. Instead, it should utilize the server app described in the "Publishing libraries" section.
- `unpublish <LIBRARY> <VERSION>`, which should unpublish the provided version of the library. Unpublished libraries can still be downloaded and used, but are not visible to people who never used them before.
- There should be an `enso-install` alias for `enso-marketplace install`. This way, the `enso` command should be able to allow users to write `enso install` (searching for `enso-install` in the env, just like Git does).
- When multiple repositories are used, their `library-versions` files should be merged. For example, if we are using repository `A` and repository `B` with `library-version=stable-2.0`, then the files `A/libraries-version/stable` and `B/libraries-version/stable` should be considered and the lists of libraries for `2.0` should be merged.

#### Publishing libraries
- Publishing should happen via the `publish` command of the "marketplace" app.
- There should be a `ban.list` file.
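The merging of `library-versions` files from multiple repositories described under "Management" could be sketched as follows. The data shapes are assumptions on my part: each repository's file is modelled as a `{library_name: version}` mapping for one resolver line:

```python
# Hedged sketch of merging library-version files from several repositories.
# The dict-per-repository model and the "later repository wins" conflict
# policy are illustrative assumptions, not the specified behaviour.

def merge_library_versions(*repo_files: dict[str, str]) -> dict[str, str]:
    """Merge per-repository version lists; later repositories win on conflicts."""
    merged: dict[str, str] = {}
    for versions in repo_files:
        merged.update(versions)
    return merged

a = {"wdanilo.Shapes": "1.2.0", "alice.Charts": "0.3.0"}
b = {"alice.Charts": "0.4.0", "bob.Stats": "2.0.0"}
print(merge_library_versions(a, b))
```

The spec does not say how version conflicts between repositories should be resolved, so the precedence rule above is only one possible policy.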