This project provides a convenience CLI for creating, testing and working with ACS clusters. It's a work in progress and we welcome contributions via the project page.
Assuming you have Docker installed, the application will run "out of the box" with the following command:

```sh
docker run -it rgardler/acs
```
You may also choose to use the latest development version (which is the default branch on GitHub). This version will likely have features and fixes that are still being tested. Simply use the `:dev` tag:

```sh
docker run -it rgardler/acs:dev
```
Although not required, it is preferable to mount your SSH and ACS configuration files into the running container. This way you can use the same files across multiple versions of the CLI container. To do this use the following mounts:

```sh
docker run -it -v ~/.ssh:/root/.ssh -v ~/.acs:/root/.acs rgardler/acs
```
NOTE 1: the first time you run this you may need to create the `~/.ssh` and `~/.acs` directories.
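For example, on the Docker host:

```sh
# Create the directories on the Docker host before mounting them
mkdir -p ~/.ssh ~/.acs
```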
NOTE 2: we provide a convenience script for doing this (and more, see below).
NOTE 3: If you want to work with the latest development version (may have more features, but may also have more bugs) replace `rgardler/acs` with `rgardler/acs:dev` in the above commands.
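For example, the mounted variant above becomes:

```sh
docker run -it -v ~/.ssh:/root/.ssh -v ~/.acs:/root/.acs rgardler/acs:dev
```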
At this point you are ready to start using the CLI.
NOTE: the run-docker.sh script mentioned above will always attempt to restart a previously run container. This has the advantage of maintaining your Azure login credentials.
The CLI includes some basic help guidance:

```sh
acs --help
```
To get help for a specific command use `acs COMMAND --help`, for example:

```sh
acs service --help
```
For more information see the documentation, sources for which are located in the `docs/source` folder of our GitHub project.
Since the first thing most users will want to do is create a cluster, we are documenting this command here. Please see `acs --help` for a list of all available commands.

```sh
acs service create
```
Unless otherwise specified the cluster configuration is defined in `~/.acs/default.ini` within the container. If you mapped a local directory to this location then it will be persisted on your client. If this file does not exist you will be asked to answer some questions about your desired configuration and the file will be created for you.
You can override the location of the configuration file with the `--config-file=PATH_TO_FILE` option. If the file exists it will be used; if not, you will be asked the same questions and the file will be created.
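For example, to create a cluster from a configuration file kept elsewhere (the path here is illustrative):

```sh
# The file name is hypothetical; any path inside the container works
acs service create --config-file=/root/.acs/test-cluster.ini
```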
It is necessary to open an SSH tunnel to your cluster. By default the acs CLI will use the keys in `~/.ssh/id_rsa` (this can be configured in the ini file in `~/.acs/`). If the identified keys don't exist when you run the `acs service create` command they will be created for you.
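If you would rather create the keys yourself before running the command, a standard `ssh-keygen` invocation does the job (a sketch; the CLI will otherwise generate the keys for you):

```sh
# Generate an RSA key pair at the default location the CLI looks for
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
```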
To easily create a tunnel to your cluster run:

```sh
acs service connect
```
Take a note of the pid file this command outputs as you may want to kill this tunnel at a later time with:

```sh
kill $PID
```
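For example, if the reported pid file were `/tmp/acs-tunnel.pid` (a hypothetical path; use the one `acs service connect` actually prints):

```sh
kill "$(cat /tmp/acs-tunnel.pid)"  # path is illustrative
```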
If your cluster is using DC/OS as the orchestrator and you created it using this set of tools then the DC/OS CLI will have been installed for you when you created the cluster. If, however, you deployed the cluster using a different method you can install the DC/OS CLI with the following commands:

```sh
acs service install_dcos_cli
. /src/bin/env-setup
```
Once installed you can run DC/OS commands directly with `dcos COMMAND` (you must first have connected to the cluster with `acs service connect`).
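For example, to list the applications Marathon is managing (a standard DC/OS CLI command):

```sh
acs service connect     # open the tunnel first
dcos marathon app list  # then query the cluster
```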
Contributions (bug reports, feature requests, docs, patches etc.) are welcome via the project page.
The easiest way to get started is to develop using the Docker container; however, the application is a Python 3 application and can be run anywhere you can find Python 3. First you need to clone the dev branch of the code:

```sh
git clone -b dev https://github.com/rgardler/acs-cli.git acs-cli
```
Once you have the source code, you can build a development container with:

```sh
./scripts/build-docker.sh
```
To run your container with local files mapped into it:

```sh
./scripts/dev-docker.sh
```
Now you can edit the files using your favorite editor and test the application from within the container. Note that when you have made changes to your source files you should run the following in your container:

```sh
python setup.py install
```
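A typical edit-test loop inside the container therefore looks like this (the `acs --help` check is just one way to verify your change):

```sh
python setup.py install  # reinstall after each source change
acs --help               # sanity-check the freshly installed CLI
```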
If you would prefer to work outside of a container then consult the Dockerfile in the project root for details of how to set up your development environment.
Run tests using py.test and coverage:

```sh
sudo pip install -e .[test]
python setup.py test
```
Note, by default this does not run the slow tests (like creating the cluster and installing features). You must therefore first have run the full suite of tests at least once. You can do this with:

```sh
py.test --runslow
```
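If you also want a coverage report, the usual pytest-cov flags should work, assuming the `[test]` extras install `pytest-cov` (check `setup.py` to confirm):

```sh
py.test --runslow --cov=acs  # --cov requires the pytest-cov plugin
```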
To add a top level command representing a new feature follow these steps (in this example the new command is called `foo`; a minimal sketch of the resulting module follows this list):

- Add the command `foo` and its description to the "Commands" section of the docstring for `acs/cli.py`
- Copy `acs/commands/command.tmpl` to `acs/commands/foo.py`
- Add the subcommands and options to the docstring of the `foo.py` file
- Implement each command in a method using the same name as the command
- Add a `foo.py` import to `acs/commands/__init__.py`
- Add instantiation of `foo.py` to `tests/conftest.py`
- Copy `tests/command/test_command.tmpl` to `tests/command/test_foo.py`
- Implement the tests
- Run the tests with `python setup.py test` and iterate as necessary
- Install the package with `python setup.py install`
- Add the command to the documentation in `docs/*`
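As a rough sketch, the resulting command module might look like the following. This is illustrative only: the docopt-style docstring layout and the `Base` import are assumptions, so copy `acs/commands/command.tmpl` for the real structure.

```python
"""Manage the foo feature.

Usage:
  foo bar [options]

Commands:
  bar    An example subcommand.

Options:
  -h --help   Show this help.
"""

# The Base import is an assumption; use whatever command.tmpl actually imports.
from .base import Base


class Foo(Base):
    """Top level 'foo' command, listed in the docstring of acs/cli.py."""

    def bar(self):
        # Each subcommand is implemented as a method with the same name.
        print("foo bar executed")
```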
Subcommands are applied to commands. To add a subcommand do the following:

- Add the subcommand to the docstring of the relevant command class (e.g. `foo.bar`)
- Add a method with the same name as the subcommand
- Add a test
- Run the tests with `python setup.py test` and iterate as necessary
- Install the package with `python setup.py install`
Ensure all tests pass (see above).
To cut a release and publish to the Python Package Index, install [twine](http://pypi.python.org/pypi/twine) and then run:

```sh
python3.5 setup.py sdist bdist_wheel
twine upload dist/*
```
This will build both a source tarball and a wheel build, which will run on all platforms.
Now create a tag in git:

```sh
git tag x.y.z
git push --tags
```
Finally update the version number in `acs/__init__.py`:

```python
__version__ = 'x.y.z'
```
To build and publish the documentation you need Sphinx installed:

```sh
sudo pip install -U Sphinx
```

Then you can build and deploy the docs with:

```sh
cd docs
make gh-pages
cd ..
```