Support other databases for prod #8
This is an interesting project. What do you think about a mode where no edits in the tool were allowed, and the tool was driven off of a static config file or something similar? I'm asking because I'm interested in exploring the use of a descriptor file in the deployment repo that describes many of the fields you are displaying, and ideally a few more. The idea is to put the data next to the IaC so that it can easily be updated by the teams who own the deployment.
Hi @schmidtw, thanks for the idea. Yes, read-only mode based on roles is absolutely necessary, and there is even a ticket for this: #92. Initially I wanted to store all service descriptors inside a git repo and use Spring Cloud Config (https://cloud.spring.io/spring-cloud-config/reference/html/) to retrieve those descriptors. Then, for the sake of speed, I decided to use a database instead. You can grant permissions on such a repo to particular people or teams so they can add / modify service descriptors, and git gives us history (e.g. who changed something, when, and why).
Makes sense. I missed #92 as well. You got to a pretty cool demo state pretty fast, so I can appreciate adding things like this later. On the descriptor-per-service topic, yes, that was what I was thinking. Instead of allowing any editing of the service descriptions etc. via this tool, those would be based solely on what is in a set of git repos. I have lots of teams managing several different services, and we're generally finding that if we can put all the information about a service in the one repo that covers its deployment, then it's easier for the humans involved to keep track of what's going on. It also enables the potential for a CI plugin that could do preflight checking/analysis (prevent circular dependencies, for example). I think it is also reasonable to simplify and just have a single file and format, as sketched below.
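To make the single-file idea concrete, here is a rough sketch of what such a descriptor might look like. The file name, field names, and values are all hypothetical illustrations for this discussion, not a format the project has defined:

```json
{
  "name": "payment-service",
  "description": "Handles payment processing",
  "team": "payments-team",
  "gitUrl": "https://github.com/example-org/payment-service",
  "dependencies": ["user-service", "ledger-service"],
  "endpoints": {
    "health": "/actuator/health",
    "docs": "/swagger-ui.html"
  }
}
```

A `dependencies` list like the one above is what would let a CI plugin do the preflight checks mentioned here, such as detecting circular dependencies across repos.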
Great ideas @schmidtw, makes absolute sense. I'm now thinking about what the best method would be for downloading these descriptors from git repos without prior knowledge of those repos. I currently have a gitUrl field for each microservice, but it is something a user / admin must provide. Let's say you have 10 services in 10 git repos. If a service is not available / preconfigured in the catalog, then I don't have any clue where I should get / download its descriptor from. Do you have any ideas? P.S. In #93 I want to implement import / export functionality using JSON, so you can call the REST API of microservice-catalog and import service(s) programmatically.
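A minimal sketch of what programmatic import could look like once #93 lands. The endpoint path `/api/services` is an assumption for illustration; the actual path and payload are not yet defined by the project:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class CatalogImport {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical import endpoint; the real path will be defined by #93.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/services"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("service-descriptor.json")))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Import status: " + response.statusCode());
    }
}
```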
Good question. I put a new issue in (#124) describing what I've been thinking about as a possibility. I put it in a different issue so it might be easier to throw away in the event that you want to go a different route. On that path, we'd configure the tool to basically not store or accept input from the user; it would be a presentation and analysis engine that serves to connect documents and the like together. To me it seems very similar to a DB import that just doesn't change and is periodically refreshed.

On how to solve the aggregation problem: I think the answer will depend on how an organization operates. For a GitHub-based organization, there will likely be a scraper that walks either a list of known locations, or all locations, and finds these files, possibly as a GitHub app that is added to orgs. I imagine GitLab would be very similar. If you are a Gerrit shop, the answer may be a bit different. I think the key to my proposal working is that these scrapers need to be pretty simple to lift/copy/produce, since they will live in the "internal company magic" space and will often not be shared back out.

Another reason I think it's worth trying to avoid building the scraper into microservice-catalog is as a security safeguard. If it can scrape all this data and mistakes happen internally, this tool could become a super powerful way to spider into other systems. Without the ability to access the other systems' repos, this kind of concern goes way down.
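As an illustration of the "external scraper" split, here is a rough Java sketch that walks a GitHub org's repositories via the public GitHub REST API, looks for a descriptor file at a fixed path, and would hand each hit to the catalog's import API. The org name, descriptor path, and handoff step are all assumptions for the sketch, and JSON parsing is left as a placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class DescriptorScraper {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // Assumed org name and descriptor location; adjust per installation.
        String org = "example-org";
        String descriptorPath = "catalog-descriptor.json";

        // GitHub REST API: list the org's repositories (first page only, for brevity).
        HttpResponse<String> repos = get("https://api.github.com/orgs/" + org + "/repos");

        for (String repo : extractRepoNames(repos.body())) {
            // GitHub REST API: check whether the descriptor exists in this repo.
            HttpResponse<String> file = get("https://api.github.com/repos/" + org + "/"
                    + repo + "/contents/" + descriptorPath);
            if (file.statusCode() == 200) {
                // Here a real scraper would push the descriptor to
                // microservice-catalog's (hypothetical) import endpoint.
                System.out.println("Found descriptor in " + repo);
            }
        }
    }

    private static HttpResponse<String> get(String url) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .header("Accept", "application/vnd.github+json")
                .build();
        return CLIENT.send(req, HttpResponse.BodyHandlers.ofString());
    }

    private static List<String> extractRepoNames(String reposJson) {
        // Placeholder: a real implementation would use a JSON library (e.g. Jackson)
        // to pull each repository's "name" field out of the response array.
        return List.of();
    }
}
```

Keeping the scraper this small is the point made above: it is easy to lift, copy, and adapt inside a company's private tooling, and the catalog itself never needs credentials for the source repos.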
Agreed. Let the scraper do the scraping work and push the data into microservice-catalog.
It should be possible to use different databases for production, possibly MongoDB as well.
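For a Spring Boot application, much of this is a configuration concern. A minimal sketch, assuming the matching driver or starter is on the classpath; the property keys are standard Spring Boot, while the hosts, names, and credentials are placeholder examples:

```properties
# PostgreSQL via JDBC (requires the org.postgresql:postgresql driver)
spring.datasource.url=jdbc:postgresql://localhost:5432/catalog
spring.datasource.username=catalog
spring.datasource.password=secret

# Or MongoDB (requires spring-boot-starter-data-mongodb)
# spring.data.mongodb.uri=mongodb://localhost:27017/catalog
```

Note that moving from a relational store to MongoDB would also mean changing the persistence layer (JPA repositories vs. Spring Data MongoDB repositories), not just the configuration.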