diff --git a/basics/dockerhub.md b/basics/dockerhub.md
index 7577deb7..24101a3b 100644
--- a/basics/dockerhub.md
+++ b/basics/dockerhub.md
@@ -7,7 +7,7 @@ For using these container images, ensure you have a recent version of `docker` i
To try `grimoirelab/full`, just type:

```bash
-$ docker run -p 127.0.0.1:5601:5601 \
+docker run -p 127.0.0.1:5601:5601 \
    -v $(pwd)/credentials.cfg:/override.cfg \
    -t grimoirelab/full
```

@@ -28,13 +28,13 @@ The resulting dashboard will be available from Kibiter, and you can see it by po
What is even more interesting: you can get a shell in the container (after launching it), and run arbitrary GrimoireLab commands (`container_id` is the identifier of the running container, which you can find out with `docker ps`, or by looking at the first line when running the container):

```bash
-$ docker exec -it container_id env TERM=xterm /bin/bash
+docker exec -it container_id env TERM=xterm /bin/bash
```

If you're running the container on Windows through Docker Quickstart Terminal and Oracle VirtualBox, additional steps need to be taken to access the dashboard from your Windows machine. First, go into the settings for the virtual machine through the VirtualBox manager: click on your VM, and click on 'Settings'. Now go to the 'Network' tab, where you should see two adapters; we want to reconfigure adapter 1. Change the 'Attached to' setting to 'Bridged Adapter', open up the Advanced dropdown, and change the Promiscuous Mode to 'Allow VMs'. Now change 'Attached to' back to 'NAT' and click on Port Forwarding. Here you need to set up a rule that allows port 5601 on the VM to talk to localhost:5601 on your host machine. To do this, click the green diamond with a '+' inside it to add a new rule. Name this rule whatever you wish, set its protocol to TCP, Host IP to 127.0.0.1, Host Port to 5601, leave Guest IP blank, set Guest Port to 5601, and click OK. While you're in settings, I recommend upping the memory for your VM to 2048MB, and setting display memory to 10MB. After you've done this you can exit all the way out of settings. You now need to stop the docker container if it's currently running, shut down your VM, and restart your VM. Once the VM is restarted, you'll need to run ``ifconfig | grep inet`` on the virtual machine the container is running on to find its local IP address; most likely it will be along the lines of 10.0.2.x. Now rerun

```bash
-$ docker run -p x.x.x.x:5601:5601 \
+docker run -p x.x.x.x:5601:5601 \
    -v $(pwd)/credentials.cfg:/override.cfg \
    -t grimoirelab/full
```

@@ -48,7 +48,7 @@ Have a look at the section
If you happen to run the container in Windows, remember that you should use backslash instead of slash for the paths related to Windows. That means that paths internal to the container will still include slashes, but those that refer to files or directories in the host machine will include backslashes, and maybe disk unit identifiers. For example:

-```
-$ docker run -p 127.0.0.1:5601:5601 -v D:\test\credentials.cfg:/override.cfg \
+```bash
+docker run -p 127.0.0.1:5601:5601 -v D:\test\credentials.cfg:/override.cfg \
    -v D:\test\projects.json:/projects.json -t grimoirelab/full
```

diff --git a/basics/install.md b/basics/install.md
index 5e33e383..1dbe01bf 100644
--- a/basics/install.md
+++ b/basics/install.md
@@ -26,13 +26,13 @@ as detailed in the
First, let's create our new environment.
I like my Python virtual environments under the `venvs` subdirectory in my home directory, and in this case I will call it `gl` \(see how original I am!\):

```bash
-$ python3 -m venv ~/venvs/gl
+python3 -m venv ~/venvs/gl
```

Once the virtual environment is created, you can activate it:

```bash
-$ source ~/venvs/gl/bin/activate
+source ~/venvs/gl/bin/activate
 (gl) $
```

@@ -104,7 +104,7 @@ This should produce a banner with information about command line arguments, and
Assuming everything was fine, next thing is getting information about a specific backend. Let's start with the git backend, which will be a good starter for testing:

-```
+```bash
 (gl) $ perceval git --help
```

@@ -208,9 +208,9 @@ Some of GrimoireLab dependencies need non-Python packages as pre-requisites to b
* For `dulwich` to be installed, you need to have some Python libraries present. In Debian-derived systems (such as Ubuntu), that can be done by installing the `python3-dev` package:

-```
-$ sudo apt-get install python3-dev
-$ sudo apt-get install build-essential
+```bash
+sudo apt-get install python3-dev
+sudo apt-get install build-essential
```

Usually, you know you need this when you have a problem installing `dulwich`. For example, you check the output of `pip install` and you find:
@@ -230,7 +230,7 @@ Previous instructions are for installing the Python packages corresponding to th
It is designed to work standalone, with just a few dependencies. It is easy to produce a Python virtual environment with all GrimoireLab tools (and dependencies) installed, corresponding to the latest version in the master branch of each of the development repositories. Just download the utility, and run:

```bash
-$ python3 build_grimoirelab --install --install_venv /tmp/ivenv
+python3 build_grimoirelab --install --install_venv /tmp/ivenv
```

This will create a virtual environment in `/tmp/ivenv`, which can be activated as follows:
@@ -248,19 +248,19 @@ $ source /tmp/ivenv/bin/activate
[releases directory](https://github.com/chaoss/grimoirelab/tree/master/releases)
and run (assuming you downloaded the release file `elasticgirl.21` to the current directory):

```bash
-$ python3 build_grimoirelab --install --install_venv /tmp/ivenv --relfile elasticgirl.21
+python3 build_grimoirelab --install --install_venv /tmp/ivenv --relfile elasticgirl.21
```

If you want, you can also produce the Python packages (wheels and dists) for any release, or the latest versions in development repositories. For example, for building packages for the latest versions in directory `/tmp/dists`:

```bash
-$ python3 build_grimoirelab --build --distdir /tmp/ivenv
+python3 build_grimoirelab --build --distdir /tmp/dists
```

You can get a listing of all the options of `build_grimoirelab` by using its `--help` flag:

```bash
-$ python3 build_grimoirelab --help
+python3 build_grimoirelab --help
```

There is some explanation about some of them in the
diff --git a/basics/quick.md b/basics/quick.md
index 95c885ed..e0cc1b44 100644
--- a/basics/quick.md
+++ b/basics/quick.md
@@ -54,7 +54,7 @@ standard Debian distro, so you can run those directly.
To run that image, just type:

```bash
-% docker run -p 127.0.0.1:9200:9200 \
+docker run -p 127.0.0.1:9200:9200 \
    -p 127.0.0.1:5601:5601 \
    -p 127.0.0.1:3306:3306 \
    -e RUN_MORDRED=NO \
@@ -74,7 +74,7 @@ Once the container is running,
you can connect to it, and launch any GrimoireLab command or program in it:

```bash
-$ docker exec -it container_id env TERM=xterm /bin/bash
+docker exec -it container_id env TERM=xterm /bin/bash
```

That container can be used also, as such,
diff --git a/basics/supporting.md b/basics/supporting.md
index 6d4e7fb7..12c39259 100644
--- a/basics/supporting.md
+++ b/basics/supporting.md
@@ -33,20 +33,20 @@ provided the right version of Python is available. In other platforms, your mile
Python3 is a standard package in Debian, so it is easy to install:

```bash
-$ sudo apt-get install python3
+sudo apt-get install python3
```

Once installed, you can check the installed version:

```bash
-$ python3 --version
+python3 --version
```

For installing some other Python modules, including GrimoireLab modules, you will need `pip` for Python3. For using `venv` virtual environments, you will also need `ensurepip`. Both are available in Debian and derivatives as packages `python3-pip` and `python3-venv`:

```bash
-$ sudo apt-get install python3-pip
-$ sudo apt-get install python3-venv
+sudo apt-get install python3-pip
+sudo apt-get install python3-venv
```

More information about installing Python3 in other platforms is available in [Properly installing Python](http://docs.python-guide.org/en/latest/starting/installation/). In addition, you can also check information on [how to install pip](https://pip.pypa.io/en/stable/installing/).

@@ -56,7 +56,7 @@ More information about installing Python3 in other platforms is available in [Pr
If you are retrieving data from git repositories, you will need git installed. Pretty simple:

```bash
-$ sudo apt-get install git-all
+sudo apt-get install git-all
```

More information about installing git in other platforms is available in
@@ -71,7 +71,7 @@ For installing Elasticsearch you can follow its [installation instructions](http
Assuming the installed ElasticSearch directory is `elasticsearch`, to launch it you will just run the appropriate command \(no need to run this from the virtual environment\):

```bash
-$ elasticsearch/bin/elasticsearch
+elasticsearch/bin/elasticsearch
```

This will launch Elasticsearch that will listen via its HTTP REST API at `http://localhost:9200`. You can check that everything went well by pointing your web browser to that url, and watching the ElasticSearch welcome message.

@@ -91,7 +91,7 @@ You can install Kibana instead of Kibiter. Maybe you will lose some functionalit
Assuming the installed Kibana directory is `kibana`, to launch it, again just run the appropriate command:

```bash
-$ kibana/bin/kibana
+kibana/bin/kibana
```

This should serve a Kibana instance in `http://localhost:5601`. Point your web browser to that url, and you'll see the Kibana welcome page.

@@ -105,7 +105,7 @@ Now, we're ready to go.
Instead of following the installation instructions mentioned above, you can also install ElasticSearch and Kibana as a Docker container, by using pre-composed images. For example:

```bash
-$ docker run -d -p 9200:9200 -p 5601:5601 nshou/elasticsearch-kibana
+docker run -d -p 9200:9200 -p 5601:5601 nshou/elasticsearch-kibana
```

Then you can connect to Elasticsearch by localhost:9200 and its Kibana front-end by localhost:5601.
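Before moving on, it can be handy to confirm that both services answer. A minimal check, assuming the default ports used above:

```bash
# ElasticSearch answers with a JSON banner including cluster and version info
curl http://localhost:9200

# Kibana answers on port 5601 once it has finished starting up
curl -I http://localhost:5601
```

If the first request is refused, give the container a few more seconds: ElasticSearch usually takes longer to come up than Kibana.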
See [details about these Docker images in DockerHub](https://hub.docker.com/r/nshou/elasticsearch-kibana/)

@@ -116,7 +116,7 @@ Then you can connect to Elasticsearch by localhost:9200 and its Kibana front-end
If you are going to use SortingHat, you will need a database. Currently, MySQL-like databases are supported. In our case, we will use MariaDB. Installing it in Debian is easy:

```bash
-$ sudo apt-get install mariadb-server
+sudo apt-get install mariadb-server
```

That's it, that's all.

diff --git a/cases-chaoss/intro.md b/cases-chaoss/intro.md
index 20050ea9..5fd131af 100644
--- a/cases-chaoss/intro.md
+++ b/cases-chaoss/intro.md
@@ -15,8 +15,8 @@ The process will include the installation of the GrimoireLab tools needed, and w
Let's start by installing GrimoireLab components:

```bash
-$ python3 -m venv gl
-$ source gl/bin/activate
+python3 -m venv gl
+source gl/bin/activate
 (gl) $ pip install grimoire-elk grimoire-kidash
```

diff --git a/docs/data-sources/add-configurations.md b/docs/data-sources/add-configurations.md
index 242aa364..45352d29 100644
--- a/docs/data-sources/add-configurations.md
+++ b/docs/data-sources/add-configurations.md
@@ -66,8 +66,8 @@ out_index = git_study_forecast
Once you have made these changes, run your containers with docker-compose

-```console
-$ docker-compose up -d
+```bash
+docker-compose up -d
```

Give it some time to gather the data and after a while your dashboard and data

diff --git a/docs/getting-started/dev-setup.md b/docs/getting-started/dev-setup.md
index 482ef4b1..1a3ff819 100644
--- a/docs/getting-started/dev-setup.md
+++ b/docs/getting-started/dev-setup.md
@@ -130,8 +130,8 @@ mariadb:
```

Save the above into a docker-compose.yml file and run

-```console
-$ docker-compose up -d
+```bash
+docker-compose up -d
```

to get Elasticsearch, Kibiter and MariaDB running on your system.

@@ -159,17 +159,17 @@ Each local repo should have two `remotes`:
`origin` points to the forked repo, while `upstream` points to the original CHAOSS repo. An example is provided below.

-```console
-$ git remote -v
-origin https://github.com/valeriocos/perceval (fetch)
-origin https://github.com/valeriocos/perceval (push)
-upstream https://github.com/chaoss/grimoirelab-perceval (fetch)
-upstream https://github.com/chaoss/grimoirelab-perceval (push)
+```bash
+git remote -v
+# origin https://github.com/valeriocos/perceval (fetch)
+# origin https://github.com/valeriocos/perceval (push)
+# upstream https://github.com/chaoss/grimoirelab-perceval (fetch)
+# upstream https://github.com/chaoss/grimoirelab-perceval (push)
```

In order to add a remote to a Git repository, you can use the following command:

-```console
-$ git remote add upstream https://github.com/chaoss/grimoirelab-perceval
+```bash
+git remote add upstream https://github.com/chaoss/grimoirelab-perceval
```

#### ProTip

You can use this [script](https://gist.github.com/vchrombie/4403193198cd79e7ee0079259311f6e8) to automate this whole process.
-```console
-$ python3 glab-dev-env-setup.py --create --token xxxx --source sources
+```bash
+python3 glab-dev-env-setup.py --create --token xxxx --source sources
```

### Setting up PyCharm

@@ -255,7 +255,7 @@ url = http://localhost:9200
Run the following commands, which will collect and enrich the data coming from the git sections and upload the corresponding panels to Kibiter:

-```console
+```bash
 micro.py --raw --enrich --cfg ./setup.cfg --backends git cocom
 micro.py --panels --cfg ./setup.cfg
```

diff --git a/docs/getting-started/setup.md b/docs/getting-started/setup.md
index 331503b7..8e5fbc2f 100644
--- a/docs/getting-started/setup.md
+++ b/docs/getting-started/setup.md
@@ -24,24 +24,28 @@ through the following means.

### Software

-```console
-$ git --version
-git version 2.32.0
-$ docker --version
-Docker version 20.10.7, build f0df35096d
-$ docker-compose --version
-docker-compose version 1.28.5, build c4eb3a1f
+```bash
+git --version
+# git version 2.32.0
+
+docker --version
+# Docker version 20.10.7, build f0df35096d
+
+docker-compose --version
+# docker-compose version 1.28.5, build c4eb3a1f
```

### Hardware

-```console
-$ cat /proc/cpuinfo | grep processor | wc -l #View number of processors
-4
-$ grep MemTotal /proc/meminfo #View amount of RAM available
-MemTotal: 8029848 kB
-$ sudo sysctl -w vm.max_map_count=262144 #Set virtual memory
-vm.max_map_count = 262144
+```bash
+cat /proc/cpuinfo | grep processor | wc -l #View number of processors
+# 4
+
+grep MemTotal /proc/meminfo #View amount of RAM available
+# MemTotal: 8029848 kB
+
+sudo sysctl -w vm.max_map_count=262144 #Set virtual memory
+# vm.max_map_count = 262144
```

The reason for allocating `262144` for memory is the check that ElasticSearch

@@ -52,15 +56,15 @@ mapped areas.

- Clone the GrimoireLab repo:

-```console
-$ git clone https://github.com/chaoss/grimoirelab
+```bash
+git clone https://github.com/chaoss/grimoirelab
```

- Go to the `docker-compose` folder and run the following command:

-```console
-$ cd grimoirelab/docker-compose
-grimoirelab/docker-compose$ sudo docker-compose up -d
+```bash
+cd grimoirelab/docker-compose
+sudo docker-compose up -d
```

Your dashboard will be ready after a while at `http://localhost:5601`. Usually,

diff --git a/docs/getting-started/troubleshooting.md b/docs/getting-started/troubleshooting.md
index 72a3cc76..35647d96 100644
--- a/docs/getting-started/troubleshooting.md
+++ b/docs/getting-started/troubleshooting.md
@@ -18,8 +18,9 @@ parent: Getting Started
> docker-compose command without the `-d` or `--detach` flag. That will allow
> you to see all the logs while starting/(re)creating/building/attaching
> containers for a service.
-  ```console
-  grimoirelab/docker-compose$ docker-compose up
+  ```bash
+  cd grimoirelab/docker-compose
+  docker-compose up
   ```

---

@@ -42,16 +43,16 @@ WARNING: Host is already in use by another container
In order to fix it, you need to see which container is using that port and kill that container.
-```console
-$ docker container ls # View all running containers
-CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
-01f0767adb47 grimoirelab/hatstall:latest "/bin/sh -c ${DEPLOY…" 2 minutes ago Up 2 minutes 0.0.0.0:8000->80/tcp, :::8000->80/tcp docker-compose_hatstall_1
-9587614c7c4e bitergia/mordred:latest "/bin/sh -c ${DEPLOY…" 2 minutes ago Up 2 minutes (unhealthy) docker-compose_mordred_1
-c3f3f118bead bitergia/kibiter:community-v6.8.6-3 "/docker_entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp docker-compose_kibiter_1
-d3c691acaf7b mariadb:10.0 "docker-entrypoint.s…" 2 minutes ago Up 2 minutes 3306/tcp docker-compose_mariadb_1
-f5f406146ee9 docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6 "/usr/local/bin/dock…" 2 minutes ago Up 2 minutes 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp docker-compose_elasticsearch_1
-
-$ docker rm -f c3f3f118bead #c3f3f118bead is the container that is using port 5601.
+```bash
+docker container ls # View all running containers
+# CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+# 01f0767adb47 grimoirelab/hatstall:latest "/bin/sh -c ${DEPLOY…" 2 minutes ago Up 2 minutes 0.0.0.0:8000->80/tcp, :::8000->80/tcp docker-compose_hatstall_1
+# 9587614c7c4e bitergia/mordred:latest "/bin/sh -c ${DEPLOY…" 2 minutes ago Up 2 minutes (unhealthy) docker-compose_mordred_1
+# c3f3f118bead bitergia/kibiter:community-v6.8.6-3 "/docker_entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp docker-compose_kibiter_1
+# d3c691acaf7b mariadb:10.0 "docker-entrypoint.s…" 2 minutes ago Up 2 minutes 3306/tcp docker-compose_mariadb_1
+# f5f406146ee9 docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6 "/usr/local/bin/dock…" 2 minutes ago Up 2 minutes 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp docker-compose_elasticsearch_1
+
+docker rm -f c3f3f118bead #c3f3f118bead is the container that is using port 5601.
```

### Empty dashboard or visualization

@@ -83,16 +84,16 @@ elasticsearch_1 | [1]: max virtual memory areas vm.max_map_count [65530] is too
Solution
Increase the kernel `vm.max_map_count` parameter using the following command.

-```console
-$ sudo sysctl -w vm.max_map_count=262144
+```bash
+sudo sysctl -w vm.max_map_count=262144
```

Now stop the container services and re-run `docker-compose up`. Note that this is valid only for the current session. To set this value permanently, update the `vm.max_map_count` setting in the `/etc/sysctl.conf` file. To verify after rebooting, run the below command.

-```console
-$ sysctl vm.max_map_count
+```bash
+sysctl vm.max_map_count
```

### Processes have conflicts with SearchGuard

@@ -100,9 +101,9 @@ Indication
Cannot open `localhost:9200` in browser, shows `Secure connection Failed`

-```console
-$ curl -XGET localhost:9200 -k
-curl: (52) Empty reply from server
+```bash
+curl -XGET localhost:9200 -k
+# curl: (52) Empty reply from server
```

Diagnosis
@@ -114,19 +115,19 @@ elasticsearch_1 | [2020-03-12T13:05:34,959][WARN ][c.f.s.h.SearchGuardHttpServe
Check for conflicting processes by running the below command (assuming 5888 is the port number)

-```console
-$ sudo lsof -i:5888
+```bash
+sudo lsof -i:5888
```

Solution
1. Try to close the conflicting processes.
You can do this easily with `fuser`, which on Debian and Ubuntu is provided by the `psmisc` package:

-```console
-$ sudo apt-get install fuser
+```bash
+sudo apt-get install psmisc  # provides the fuser command
```

Run the below command (assuming 5888 is the port number)

-```console
-$ fuser -k 58888/tcp
+```bash
+fuser -k 5888/tcp
```

Re-run `docker-compose up` and check if `localhost:9200` shows up.

@@ -215,9 +216,9 @@ again and extract all commits.
Indication
Cannot open `localhost:9200` in browser, shows `Secure connection Failed`

-```console
-$ curl -XGET localhost:9200 -k
-curl: (7) Failed to connect to localhost port 9200: Connection refused
+```bash
+curl -XGET localhost:9200 -k
+# curl: (7) Failed to connect to localhost port 9200: Connection refused
```

Diagnosis
@@ -232,13 +233,13 @@
Solution
1. Increase the maximum File Descriptors (FD) enforced. You can do this by running the below command.

-```console
-$ sysctl -w fs.file-max=65536
+```bash
+sudo sysctl -w fs.file-max=65536
```

To set this value permanently, add the lines below to `/etc/security/limits.conf`. To verify after rebooting, run

-```console
-$ sysctl fs.file-max
+```bash
+sysctl fs.file-max
```

```
elasticsearch soft nofile 65536
```

@@ -326,8 +327,8 @@ Execute the following command
to install `cloc` (more details are available in
the [Graal](https://github.com/chaoss/grimoirelab-graal#how-to-installcreate-the-executables)
repo).

-```console
-$ sudo apt-get install cloc
+```bash
+sudo apt-get install cloc
```

### Incomplete data

diff --git a/gelk/simple.md b/gelk/simple.md
index 66172ec0..aa0015fc 100644
--- a/gelk/simple.md
+++ b/gelk/simple.md
@@ -59,8 +59,8 @@ In this section you have learned to produce a simple dashboard, using Perceval,
In case you want to try a dashboard for some other repositories, once you're done with this one, you can delete the indexes \(both `git` and `git_raw`\), and produce new indexes with `p2o.py`. For doing this, you can use `curl` and the ElasticSearch REST HTTP API:

```bash
-$ curl -XDELETE http://localhost:9200/git
-$ curl -XDELETE http://localhost:9200/git_raw
+curl -XDELETE http://localhost:9200/git
+curl -XDELETE http://localhost:9200/git_raw
```

Using the Kibiter/Kibana interface it is simple to modify the dashboard, its visualizations, and produce new dashboards and visualizations. If you are interested, have a look at the [Kibana User Guide](https://www.elastic.co/guide/en/kibana/current/).

@@ -68,7 +68,7 @@ Using the Kibiter/Kibana interface it is simple to modify the dashboard, its vis
`p2o.py` can be used to produce indexes for many other data sources. For example, for GitHub issues and pull requests, the magic line is like this \(of course, substitute XXX for your GitHub token\):

```bash
-$ (grimoireelk) p2o.py --enrich --index github_raw --index-enrich github \
+(grimoireelk) $ p2o.py --enrich --index github_raw --index-enrich github \
    -e http://localhost:9200 --no_inc --debug \
    github grimoirelab perceval \
    -t XXX --sleep-for-rate
```

diff --git a/gelk/sortinghat.md b/gelk/sortinghat.md
index 905945ce..389331b4 100644
--- a/gelk/sortinghat.md
+++ b/gelk/sortinghat.md
@@ -27,7 +27,7 @@ In this case, `user` is a user of the MySQL instance with permissions to create
If the command didn't throw any error message, you're done: a new `shdb` database was created. If you want, you can check it with a simple `mysql` command:

```bash
-$ mysql -u user -pXXX -e 'SHOW DATABASES;'
+mysql -u user -pXXX -e 'SHOW DATABASES;'
```

You should see `shdb` in the list of databases.
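You can inspect the new database the same way. A quick sketch, assuming the same `user`/`XXX` credentials used above (the listing will be empty until SortingHat creates its schema):

```bash
# List the tables (if any) present in the new shdb database
mysql -u user -pXXX -e 'SHOW TABLES;' shdb
```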
If for any reason you want to delete the database at some point, just run the appropriate mysql command:

```bash
-$ mysql -u user -pXXX -e 'DROP DATABASE shdb;'
+mysql -u user -pXXX -e 'DROP DATABASE shdb;'
```

Now, with our shiny new database ready, you can create indexes with SortingHat support.

@@ -61,7 +61,7 @@ For example, for producing the index for the git repository for Perceval, run:
That means we have new `git_raw` and `git` indexes, but we also have a populated `shdb` database (assuming we have MySQL running in `localhost`, that is, the machine where the script is run). If you want to check what's in it, you can again use `mysql`:

```bash
-$ mysql -u user -pXXX -e 'SELECT * FROM identities;' shdb
+mysql -u user -pXXX -e 'SELECT * FROM identities;' shdb
```

This will show all the identities found in the Perceval git repository.

diff --git a/python/es.md b/python/es.md
index 2f0da76a..7fc2a0a6 100644
--- a/python/es.md
+++ b/python/es.md
@@ -57,34 +57,34 @@ When running it, you'll see the objects with the hashes being printed in the scr
Once you run the script, the `commits` index is created in ElasticSearch. You can check its characteristics using `curl`. The `pretty` option is to obtain a human-readable JSON document as response. Notice that we don't need to run `curl` from the virtual environment:

-```
-$ curl -XGET http://localhost:9200/commits?pretty
-{
-  "commits" : {
-    "aliases" : { },
-    "mappings" : {
-      "summary" : {
-        "properties" : {
-          "hash" : {
-            "type" : "string"
-          }
-        }
-      }
-    },
-    "settings" : {
-      "index" : {
-        "creation_date" : "1476470820231",
-        "number_of_shards" : "5",
-        "number_of_replicas" : "1",
-        "uuid" : "7DSlRG8ZSTuE1pMboG07yg",
-        "version" : {
-          "created" : "2020099"
-        }
-      }
-    },
-    "warmers" : { }
-  }
-}
+```bash
+curl -XGET http://localhost:9200/commits?pretty
+# {
+#   "commits" : {
+#     "aliases" : { },
+#     "mappings" : {
+#       "summary" : {
+#         "properties" : {
+#           "hash" : {
+#             "type" : "string"
+#           }
+#         }
+#       }
+#     },
+#     "settings" : {
+#       "index" : {
+#         "creation_date" : "1476470820231",
+#         "number_of_shards" : "5",
+#         "number_of_replicas" : "1",
+#         "uuid" : "7DSlRG8ZSTuE1pMboG07yg",
+#         "version" : {
+#           "created" : "2020099"
+#         }
+#       }
+#     },
+#     "warmers" : { }
+#   }
+# }
```

## Deleting is important as well

@@ -92,8 +92,8 @@ If you want to delete the index (for example, to run the script once again) you can just run `DELETE` on its url. For example, with `curl`:

```bash
-$ curl -XDELETE http://localhost:9200/commits
-{"acknowledged":true}
+curl -XDELETE http://localhost:9200/commits
+# {"acknowledged":true}
```

If you don't do this, before running the previous script once again, you'll see an exception such as:
@@ -175,34 +175,34 @@ print('\nCreated new index with commits.')
After running it (deleting any previous `commits` index if needed), we have a new index with the intended information for all commits.
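A quick way of checking how many documents were uploaded is the `_count` endpoint of the ElasticSearch REST API — a minimal sketch, assuming the same local instance:

```bash
# Should report one document per commit in the repository
curl -XGET "http://localhost:9200/commits/_count?pretty"
```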
We can see one of them by querying the index directly, using the ElasticSearch REST API with `curl`:

```bash
-$ curl -XGET "http://localhost:9200/commits/_search/?size=1&pretty"
-{
-  "took" : 2,
-  "timed_out" : false,
-  "_shards" : {
-    "total" : 5,
-    "successful" : 5,
-    "failed" : 0
-  },
-  "hits" : {
-    "total" : 407,
-    "max_score" : 1.0,
-    "hits" : [ {
-      "_index" : "commits",
-      "_type" : "summary",
-      "_id" : "AVfPp9Po5xUyv5saVPKU",
-      "_score" : 1.0,
-      "_source" : {
-        "hash" : "d1253dd9876bb76e938a861acaceaae95241b46d",
-        "commit" : "Santiago Dueñas ",
-        "author" : "Santiago Dueñas ",
-        "author_date" : "Wed Nov 18 10:59:52 2015 +0100",
-        "files_no" : 3,
-        "commit_date" : "Wed Nov 18 14:41:21 2015 +0100"
-      }
-    } ]
-  }
-}
+curl -XGET "http://localhost:9200/commits/_search/?size=1&pretty"
+# {
+#   "took" : 2,
+#   "timed_out" : false,
+#   "_shards" : {
+#     "total" : 5,
+#     "successful" : 5,
+#     "failed" : 0
+#   },
+#   "hits" : {
+#     "total" : 407,
+#     "max_score" : 1.0,
+#     "hits" : [ {
+#       "_index" : "commits",
+#       "_type" : "summary",
+#       "_id" : "AVfPp9Po5xUyv5saVPKU",
+#       "_score" : 1.0,
+#       "_source" : {
+#         "hash" : "d1253dd9876bb76e938a861acaceaae95241b46d",
+#         "commit" : "Santiago Dueñas ",
+#         "author" : "Santiago Dueñas ",
+#         "author_date" : "Wed Nov 18 10:59:52 2015 +0100",
+#         "files_no" : 3,
+#         "commit_date" : "Wed Nov 18 14:41:21 2015 +0100"
+#       }
+#     } ]
+#   }
+# }
```

Since we specified in the query we only wanted one document (`size=1`), we get a list of `hits` with a single document. But we can also see that there are a total of 407 documents (field `total` within field `hits`). For each document, we can see the information we have stored, which is the content of `_source`.

@@ -261,38 +261,38 @@ import datetime
Instead of using the character strings that we get from Perceval as values for those two fields, we first convert them to `datetime` objects. This is enough for the `elasticsearch` module to recognize them as dates, and upload them as such. You can check the resulting mapping after running this new script:

-```
-$ curl -XGET "http://localhost:9200/commits/_mapping?pretty"
-{
-  "commits" : {
-    "mappings" : {
-      "summary" : {
-        "properties" : {
-          "author" : {
-            "type" : "string"
-          },
-          "author_date" : {
-            "type" : "date",
-            "format" : "strict_date_optional_time||epoch_millis"
-          },
-          "commit" : {
-            "type" : "string"
-          },
-          "commit_date" : {
-            "type" : "date",
-            "format" : "strict_date_optional_time||epoch_millis"
-          },
-          "files_no" : {
-            "type" : "long"
-          },
-          "hash" : {
-            "type" : "string"
-          }
-        }
-      }
-    }
-  }
-}
+```bash
+curl -XGET "http://localhost:9200/commits/_mapping?pretty"
+# {
+#   "commits" : {
+#     "mappings" : {
+#       "summary" : {
+#         "properties" : {
+#           "author" : {
+#             "type" : "string"
+#           },
+#           "author_date" : {
+#             "type" : "date",
+#             "format" : "strict_date_optional_time||epoch_millis"
+#           },
+#           "commit" : {
+#             "type" : "string"
+#           },
+#           "commit_date" : {
+#             "type" : "date",
+#             "format" : "strict_date_optional_time||epoch_millis"
+#           },
+#           "files_no" : {
+#             "type" : "long"
+#           },
+#           "hash" : {
+#             "type" : "string"
+#           }
+#         }
+#       }
+#     }
+#   }
+# }
```

So, now we have a more complete index for commits, and each of the fields in it has reasonable types in the ElasticSearch mapping.
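Since `author_date` and `commit_date` are now real `date` fields, ElasticSearch can filter and aggregate on them. A small illustration, assuming the `commits` index built above:

```bash
# Count commits authored since the beginning of 2016, using a range query on the mapped date field
curl -XGET "http://localhost:9200/commits/_search?size=0&pretty" \
    -H 'Content-Type: application/json' -d '
{
  "query": {
    "range": {
      "author_date": { "gte": "2016-01-01" }
    }
  }
}'
```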
diff --git a/sirmordred/container.md b/sirmordred/container.md
index 35cdca69..605c1585 100644
--- a/sirmordred/container.md
+++ b/sirmordred/container.md
@@ -17,7 +17,7 @@ For using these container images, ensure you have a recent version of `docker` i
To try this container image, just run it as follows:

```bash
-$ docker run -p 127.0.0.1:5601:5601 \
+docker run -p 127.0.0.1:5601:5601 \
    -v $(pwd)/credentials.cfg:/override.cfg \
    -t grimoirelab/full
```

@@ -42,7 +42,7 @@ There are three configuration files read in before `/override.cfg`. The first on
A slightly different command line is as follows:

```bash
-$ docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \
+docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \
    -v $(pwd)/logs:/logs \
    -v $(pwd)/credentials.cfg:/override.cfg \
    -t grimoirelab/full
```

@@ -53,7 +53,7 @@ This one will expose also port `9200`, which corresponds to Elasticsearch. This
By default, Elasticsearch will store indexes within the container image, which means they are not persistent if the image shuts down. But you can mount a local directory for Elasticsearch to write the indexes in it. This way they will be available from one run of the image to the next one. For example, to let Elasticsearch use directory `es-data` to write the indexes:

```bash
-$ docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \
+docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \
    -v $(pwd)/logs:/logs \
    -v $(pwd)/credentials.cfg:/override.cfg \
    -v $(pwd)/es-data:/var/lib/elasticsearch \
    -t grimoirelab/full
```

@@ -64,7 +64,7 @@ The `grimoirelab/full` container, by default, produces a dashboard showing an an
The file to override is `/projects.json` in the container, so the command to run it could be (assuming the file was created as `projects.json` in the current directory):

```bash
-$ docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \
+docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \
    -v $(pwd)/logs:/logs \
    -v $(pwd)/credentials.cfg:/override.cfg \
    -v $(pwd)/projects.json:/projects.json \
    -t grimoirelab/full
```

@@ -74,7 +74,7 @@ You can also get a shell in the running container, and run arbitrary GrimoireLab
commands (`container_id` is the identifier of the running container, which you can find out with `docker ps`, or by looking at the first line when running the container):

```bash
-$ docker exec -it container_id env TERM=xterm /bin/bash
+docker exec -it container_id env TERM=xterm /bin/bash
```

In the shell prompt, write any GrimoireLab command. And if you have mounted external files for the SirMordred configuration, you can modify them, and run SirMordred again, to change its behavior.
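To give a flavor of the former, a couple of harmless commands you could type at that shell prompt — just a sketch:

```bash
# Check that Perceval is installed and on the PATH inside the container
perceval --help

# Retrieve the commits of a small example repository, as a smoke test
perceval git https://github.com/chaoss/grimoirelab-toolkit.git
```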
@@ -82,7 +82,7 @@ In the shell prompt, write any GrimoireLab command. And if you have mounted exte
If you want to connect to the dashboard to issue your own commands, but don't want it to run SirMordred by itself, run the container setting `RUN_MORDRED` to `NO`:

```bash
-$ docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \
+docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \
    -v $(pwd)/logs:/logs \
    -v $(pwd)/credentials.cfg:/override.cfg \
    -v $(pwd)/es-data:/var/lib/elasticsearch \
@@ -130,7 +130,7 @@ The last two lines specify your GitHub user token, which is needed to access the
Now, just run the container as:

```bash
-$ docker run --net="host" \
+docker run --net="host" \
    -v $(pwd)/credentials.cfg:/override.cfg \
    grimoirelab/installed
```

diff --git a/sirmordred/micro-mordred.md b/sirmordred/micro-mordred.md
index 4f0ab9f1..37aae4df 100644
--- a/sirmordred/micro-mordred.md
+++ b/sirmordred/micro-mordred.md
@@ -60,8 +60,8 @@ mariadb:
You can now run the following command in order to start the execution of individual instances.

-```
-$ docker-compose -f docker-config.yml up
+```bash
+docker-compose -f docker-config.yml up
```

Once you see something similar to the `log` below on your console, it means that you've successfully instantiated the containers corresponding to the required components.
@@ -100,8 +100,8 @@ kibiter_1 | {"type":"log","@timestamp":"2019-05-30T09:38:25Z","tags":["st
3. As you can see on the `Kibiter Instance` above, it says `Couldn't find any Elasticsearch data. You'll need to index some data into Elasticsearch before you can create an index pattern`. Hence, in order to index some data, we'll now execute micro-mordred using the following command, which will call the `Raw` and `Enrich` tasks for the Git config section from the provided `setup.cfg` file.

-```
-$ python3 micro.py --raw --enrich --cfg setup.cfg --backends git
+```bash
+python3 micro.py --raw --enrich --cfg setup.cfg --backends git
```

The above command requires two files:
@@ -114,14 +114,14 @@ We'll (for the purpose of this tutorial) use the files provided in the `/utils`
- **Note**: In case the process fails to index the data to ElasticSearch, check the `.perceval` folder in the home directory, which in this case may contain the same repositories as mentioned in the `projects.json` file. We can proceed after removing the repositories using the following command.

-```
-$ rm -rf .perceval/repositories/...
+```bash
+rm -rf .perceval/repositories/...
```

4. Now, we can create the index pattern and, after its successful creation, analyze the data by its fields. Then, we execute the `panels` task to load the corresponding `sigils panels` to the Kibiter instance using the following command.

-```
-$ python3 micro.py --panels --cfg setup.cfg
+```bash
+python3 micro.py --panels --cfg setup.cfg
```

On successful execution of the above command, we can produce a dashboard similar to the one shown below.

diff --git a/sortinghat/basic.md b/sortinghat/basic.md
index 506dab1e..e0dae5bd 100644
--- a/sortinghat/basic.md
+++ b/sortinghat/basic.md
@@ -36,7 +36,7 @@ For example, let's merge repo identity `4fcec5a` (dpose, dpose@sega.bitergia.net
Notice that we had to use the complete hashes (in the table above, and in the listing in the previous section, we shortened them just for readability).
What we have done is to merge `4fcec5a` on `5b358fc`, and the result is:

```bash
-$ mysql -u user -pXXX -e 'SELECT * FROM identities WHERE uuid LIKE "5b358fc%";' shdb
+mysql -u user -pXXX -e 'SELECT * FROM identities WHERE uuid LIKE "5b358fc%";' shdb
 | id | name | email | username | source | uuid |
 | 4fcec5a | dpose | dpose@sega.bitergia.net | NULL | git | 5b358fc |
 | 5b358fc | dpose | dpose@bitergia.com | NULL | git | 5b358fc |

diff --git a/sortinghat/data.md b/sortinghat/data.md
index 51de6b78..cb550bdc 100644
--- a/sortinghat/data.md
+++ b/sortinghat/data.md
@@ -49,7 +49,7 @@ let's visit the data structure of the database it maintains.
See [A dashboard with SortingHat](../gelk/sortinghat.md), and the introduction to this chapter, for details on how the database was produced; `user` and `XXX` are the credentials to access the `shdb` database. For finding out about its tables, just query MySQL.

```bash
-$ mysql -u user -pXXX -e 'SHOW TABLES;' shdb
+mysql -u user -pXXX -e 'SHOW TABLES;' shdb
 ...
 | countries |
 | domains_organizations |

@@ -177,8 +177,8 @@ When we unify repo identities (merging several into a single unique identity), w
Up to now we have not used SortingHat to assign organizations to persons (unique identities). Therefore, `enrollments` and `organizations` tables are empty. But we can check their structure.

-```
-$ mysql -u user -pXXX -e 'DESCRIBE organizations;' shdb
+```bash
+mysql -u user -pXXX -e 'DESCRIBE organizations;' shdb
 +-------+--------------+------+-----+---------+----------------+
 | Field | Type         | Null | Key | Default | Extra          |
 +-------+--------------+------+-----+---------+----------------+

@@ -192,8 +192,8 @@ In this format, each row corresponds to the description of a field in the `organ
`enrollments` table is a bit more complex:

-```
-$ mysql -u user -pXXX -e 'DESCRIBE enrollments;' shdb
+```bash
+mysql -u user -pXXX -e 'DESCRIBE enrollments;' shdb
 +-----------------+--------------+------+-----+---------+----------------+
 | Field           | Type         | Null | Key | Default | Extra          |
 +-----------------+--------------+------+-----+---------+----------------+

diff --git a/tools-and-tips/csv-from-jenkins-enriched-index.md b/tools-and-tips/csv-from-jenkins-enriched-index.md
index 5dd5c4f2..12d1b71c 100644
--- a/tools-and-tips/csv-from-jenkins-enriched-index.md
+++ b/tools-and-tips/csv-from-jenkins-enriched-index.md
@@ -7,8 +7,8 @@ To illustrate how to get data from an enriched index (produced using `grimoire_e
To use it, we can create a new virtual environment for Python, and install the needed modules (including the script) in it.

```bash
-$ pyvenv ~/venv
-$ source ~/venv/bin/activate
+python3 -m venv ~/venv
+source ~/venv/bin/activate
 (venv) $ pip install elasticsearch
 (venv) $ pip install elasticsearch-dsl
 (venv) $ wget https://raw.githubusercontent.com/jgbarah/GrimoireLab-training/master/tools-and-tips/scripts/enriched_elasticsearch_jenkins.py

diff --git a/tools-and-tips/elasticsearch.md b/tools-and-tips/elasticsearch.md
index 7bd5986b..5c1b9815 100644
--- a/tools-and-tips/elasticsearch.md
+++ b/tools-and-tips/elasticsearch.md
@@ -12,8 +12,8 @@ https://user:passwd@host:port/resource
To list all indexes stored by Elasticsearch:

-```
-$ curl -XGET 'https://elasticurl/_cat/indices?v'
+```bash
+curl -XGET 'https://elasticurl/_cat/indices?v'
```

This returns, for each index, its name, status (`open` comes to mean 'usable'), number of documents, deleted documents, and storage size used.
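You can narrow the listing down with an index pattern in the same request — a small variation, assuming indexes named after the `git` backend:

```bash
# List only the indexes whose names start with 'git'
curl -XGET 'https://elasticurl/_cat/indices/git*?v'
```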
@@ -24,8 +24,8 @@ Elasticsearch index aliases allow to work with a collection of indexes as if it
To list the base indexes corresponding to an index alias (assume the index alias is `alias_index`):

-```
-$ curl -XGET 'https://elastic_url/alias_index/_alias/*'
+```bash
+curl -XGET 'https://elastic_url/alias_index/_alias/*'
```

The result will be similar to the following (where `base_index` is the base index for the alias, and `alias_index` and `alias_index2` are two aliases for that base index):

@@ -43,7 +43,7 @@ The result will be similar to (being `base_index` the base index for the alias,
To remove aliases, and create new ones, in an atomic operation:

-```
+```bash
 curl -XPOST 'https://elastic_url/_aliases' -d '
 {
   "actions" : [

diff --git a/tools-and-tips/html5-app-latest-activity.md b/tools-and-tips/html5-app-latest-activity.md
index 72589067..16b6c70c 100644
--- a/tools-and-tips/html5-app-latest-activity.md
+++ b/tools-and-tips/html5-app-latest-activity.md
@@ -14,22 +14,22 @@ For demoing the application, you can first install the files for the HTML applic
For deploying the HTML5 app, just copy `index.html`, `events.js`, and `events.css`, all in the [`scripts`](https://github.com/jgbarah/GrimoireLab-training/blob/master/tools-and-tips/scripts/) directory, to your directory of choice. Then, ensure that some web server is serving that directory. For example, you can launch a simple Python server from it:

```bash
-$ python3 -m http.server
-Serving HTTP on 0.0.0.0 port 8000 ...
+python3 -m http.server
+# Serving HTTP on 0.0.0.0 port 8000 ...
```

Now, let's produce a JSON file with the events that the app will show. For that, we will install [`elastic_last.py`](https://github.com/jgbarah/GrimoireLab-training/blob/master/tools-and-tips/scripts/elastic_last.py) in a Python3 virtual environment with all the needed dependencies (in this case, it is enough to install, via `pip`, the `elasticsearch-dsl` module), and run it:

-```
-$ python3 elastic_last.py --loop 10 --total 10 http://localhost:9200/git
+```bash
+python3 elastic_last.py --loop 10 --total 10 http://localhost:9200/git
```

(assuming ElasticSearch is running on the same host, on port 9200, as it runs by default, and that it has an index named `git`, with the standard git index, as produced by GrimoireELK)

If we're using a `git` index in an ElasticSearch instance accessible at `https://grimoirelab.biterg.io/data`, using user `user` and password `XXX`:

-```
-$ python3 elastic_last.py --no_verify_certs --loop 10 --total 10 \
+```bash
+python3 elastic_last.py --no_verify_certs --loop 10 --total 10 \
    https://user:XXX@grimoirelab.biterg.io/data/git
```

diff --git a/tools-and-tips/perceval.md b/tools-and-tips/perceval.md
index 683b8f99..2e08f799 100644
--- a/tools-and-tips/perceval.md
+++ b/tools-and-tips/perceval.md
@@ -7,14 +7,14 @@ This section shows some scripts using Perceval.
[perceval_git_counter](https://github.com/jgbarah/GrimoireLab-training/blob/master/tools-and-tips/scripts/perceval_git_counter.py) is a simple utility to count commits in a git repository. Just run it with the url of the repository to count, and a directory to clone, and you're done:

```bash
-$ python perceval_git_counter.py https://github.com/grimoirelab/perceval.git /tmp/ppp
-Number of commmits: 579.
+python perceval_git_counter.py https://github.com/grimoirelab/perceval.git /tmp/ppp
+# Number of commits: 579.
```

You can get a help banner, including options, by running:

-```
-$ python perceval_git_counter.py --help
+```bash
+python perceval_git_counter.py --help
```

There is an option to print commit hashes for all commits in the repository: `--print`.
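For instance, to list the hashes while counting — a sketch, assuming `--print` is combined with the same positional arguments as above:

```bash
python perceval_git_counter.py --print \
    https://github.com/grimoirelab/perceval.git /tmp/ppp
```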