diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 14042d5f..16bdac05 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -18,10 +18,10 @@ or suggest something. Any feedback is appreciated!
If you are willing to set up the tutorial locally
```bash
-$ git clone https://github.com/chaoss/grimoirelab-tutorial
-$ cd grimoirelab-tutorial
-$ bundle
-$ bundle exec jekyll serve
+git clone https://github.com/chaoss/grimoirelab-tutorial
+cd grimoirelab-tutorial
+bundle
+bundle exec jekyll serve
```
**Note:** Make sure you have git and ruby (version 2.7.x) installed.
@@ -43,17 +43,17 @@ which is a fork (copy) of the GrimoireLab Tutorial.
2. Clone the forked git repository, and create a local branch for your contribution.
-```
-$ git clone https://github.com/username/grimoirelab-tutorial/
-$ cd grimoirelab-tutorial/
-$ git checkout -b new-branch-name
+```bash
+git clone https://github.com/username/grimoirelab-tutorial/
+cd grimoirelab-tutorial/
+git checkout -b new-branch-name
```
3. In this repository, set up a remote for the upstream (original grimoirelab-tutorial) git repository.
-```
-$ git remote add upstream https://github.com/chaoss/grimoirelab-tutorial/
+```bash
+git remote add upstream https://github.com/chaoss/grimoirelab-tutorial/
```
4. Now you can change the documentation and then commit it. Unless the contribution really needs it, use a single commit, and explain in detail in the corresponding commit message what it is intended to do. If it fixes some bug, reference it (with the text "_Fixes #23_", for example, for issue number 23).
-```
-$ git add -A
-$ git commit -s
+```bash
+git add -A
+git commit -s
```
5. Once your contribution is ready, rebase your local branch with `upstream/master`, so that it merges cleanly with that branch, and push your local branch to a remote branch in your GitHub repository.
-```
-$ git fetch upstream
-$ git rebase upstream/master
-$ git push origin new-branch-name
+```bash
+git fetch upstream
+git rebase upstream/master
+git push origin new-branch-name
```
6. In the GitHub interface, produce a pull request from your branch (you will
@@ -97,7 +97,7 @@ For ensuring it, a bot checks all incoming commits.
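The sign-off itself is just a trailer line at the end of the commit message, stating who certifies the contribution; the name and email below are placeholders:
```
Signed-off-by: Jane Doe <jane@example.com>
```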
For users of the git command line interface, a sign-off is accomplished with the `-s` flag as part of the commit command:
-```
+```bash
git commit -s -m 'This is a commit message'
```
diff --git a/basics/dockerhub.md b/basics/dockerhub.md
index 7577deb7..476eabfd 100644
--- a/basics/dockerhub.md
+++ b/basics/dockerhub.md
@@ -7,14 +7,14 @@ For using these container images, ensure you have a recent version of `docker` i
To try `grimoirelab/full`, just type:
```bash
-$ docker run -p 127.0.0.1:5601:5601 \
- -v $(pwd)/credentials.cfg:/override.cfg \
- -t grimoirelab/full
+docker run -p 127.0.0.1:5601:5601 \
+ -v $(pwd)/credentials.cfg:/override.cfg \
+ -t grimoirelab/full
```
`credentials.cfg` should have a GitHub API token, in `mordred.cfg` format:
-```
+```cfg
[github]
api-token = XXX
```
@@ -28,15 +28,15 @@ The resulting dashboard will be available from Kibiter, and you can see it by po
What is even more interesting: you can get a shell in the container (after launching it), and run arbitrary GrimoireLab commands (`container_id` is the identifier of the running container, that you can find out with `docker ps`, or by looking at the first line when running the container):
```bash
-$ docker exec -it container_id env TERM=xterm /bin/bash
+docker exec -it container_id env TERM=xterm /bin/bash
```
If you're running the container on Windows through Docker Quickstart Terminal and Oracle VirtualBox, additional steps need to be taken to access the dashboard from your Windows machine:

1. Open the settings for the virtual machine through the VirtualBox manager: click on your VM, and click on 'Settings'.
2. Go to the 'Network' tab; you should see two adapters, and we want to reconfigure adapter 1. Change the 'Attached to' setting to 'Bridged Adapter', open up the Advanced dropdown, and change the Promiscuous Mode to 'Allow VMs'.
3. Change 'Attached to' back to NAT and click on Port Forwarding. Set up a rule that allows port 5601 on the VM to talk to localhost:5601 on your host machine: click the green diamond with a '+' inside it to add a new rule; name it whatever you wish, set its protocol to TCP, Host IP to 127.0.0.1, Host Port to 5601, leave Guest IP blank, set Guest Port to 5601, and click OK.
4. While you're in settings, I recommend upping the memory for your VM to 2048MB, and setting display memory to 10MB. After you've done this, exit all the way out of settings.
5. Stop the docker container if it's currently running, shut down your VM, and restart your VM.
6. Once the VM is restarted, run ``ifconfig | grep inet`` on the virtual machine the container is running on to find its local IP address; most likely it will be along the lines of 10.0.2.x.

Now rerun
```bash
-$ docker run -p x.x.x.x:5601:5601 \
- -v $(pwd)/credentials.cfg:/override.cfg \
- -t grimoirelab/full
+docker run -p x.x.x.x:5601:5601 \
+ -v $(pwd)/credentials.cfg:/override.cfg \
+ -t grimoirelab/full
```
but replace the x'ed out IP address with the IP address of your VM that you got from `ifconfig`. If all goes well, once you see the docker command line print out "Elasticsearch Aliased: Created!", you should be able to go to 127.0.0.1:5601 on your host machine web browser and be able to access the GrimoireLab dashboard.
@@ -48,7 +48,7 @@ Have a look at the section
If you happen to run the container in Windows, remember that you should use backslash instead of slash for the paths related to Windows.
That means that paths internal to the container will still include slashes, but those that refer to files or directories in the host machine will include backslashes, and maybe disk unit identifiers. For example:
-```
-$ docker run -p 127.0.0.1:5601:5601 -v D:\test\credentials.cfg:/override.cfg \
+```bash
+docker run -p 127.0.0.1:5601:5601 -v D:\test\credentials.cfg:/override.cfg \
 -v D:\test\projects.json:/projects.json -t grimoirelab/full
```
diff --git a/basics/install.md b/basics/install.md
index 5e33e383..a80d6a55 100644
--- a/basics/install.md
+++ b/basics/install.md
@@ -104,7 +104,7 @@ This should produce a banner with information about command line arguments, and
Assuming everything was fine, the next thing is getting information about a specific backend. Let's start with the git backend, which will be a good starter for testing:
-```
+```bash
(gl) $ perceval git --help
```
@@ -208,14 +208,14 @@ Some of GrimoireLab dependencies need non-Python packages as pre-requisites to b
* For `dulwich` to be installed, you need to have some Python libraries present. In Debian-derived systems (such as Ubuntu), that can be done by installing the `python3-dev` package:
-```
-$ sudo apt-get install python3-dev
-$ sudo apt-get install build-essential
+```bash
+sudo apt-get install python3-dev
+sudo apt-get install build-essential
```
Usually, you know you need this when you have a problem installing `dulwich`. For example, you check the output of `pip install` and you find:
-```
+```bash
dulwich/_objects.c:21:10: fatal error: Python.h: No such file or Directory
```
@@ -230,7 +230,7 @@ Previous instructions are for installing the Python packages corresponding to th
It is designed to work standalone, with just a few dependencies. It is easy to produce a Python virtual environment with all GrimoireLab tools (and dependencies) installed, corresponding to the latest version in the master branch of each of the development repositories. Just download the utility, and run:
```bash
-$ python3 build_grimoirelab --install --install_venv /tmp/ivenv
+python3 build_grimoirelab --install --install_venv /tmp/ivenv
```
This will create a virtual environment in `/tmp/ivenv`, which can be activated as follows
@@ -248,19 +248,19 @@ $ source /tmp/ivenv/bin/activate
[releases directory](https://github.com/chaoss/grimoirelab/tree/master/releases) and run (assuming you downloaded the release file `elasticgirl.21` to the current directory):
```bash
-$ python3 build_grimoirelab --install --install_venv /tmp/ivenv --relfile elasticgirl.21
+python3 build_grimoirelab --install --install_venv /tmp/ivenv --relfile elasticgirl.21
```
If you want, you can also produce the Python packages (wheels and dists) for any release, or the latest versions in development repositories. For example, for building packages for the latest versions in directory `/tmp/dists`:
```bash
-$ python3 build_grimoirelab --build --distdir /tmp/ivenv
+python3 build_grimoirelab --build --distdir /tmp/dists
```
You can get a listing of all the options of `build_grimoirelab` by using its `--help` flag:
```bash
-$ python3 build_grimoirelab --help
+python3 build_grimoirelab --help
```
There is some explanation about some of them in the
diff --git a/basics/quick.md b/basics/quick.md
index 95c885ed..a9179198 100644
--- a/basics/quick.md
+++ b/basics/quick.md
@@ -29,13 +29,13 @@ Please check the
[section on installing non-Python packages](install.html#non-python-pkgs)
if you have any trouble.
```bash
-(gl) % pip3 install grimoirelab
+(gl) $ pip3 install grimoirelab
```
If everything went well, you can just check the version that you installed:
```bash
-(gl) % grimoirelab -v
+(gl) $ grimoirelab -v
```
And that's it. You can now skip the rest of this chapter
@@ -54,7 +54,7 @@ standard Debian distro, so you can run those directly.
To run that image, just type:
```bash
-% docker run -p 127.0.0.1:9200:9200 \
+docker run -p 127.0.0.1:9200:9200 \
 -p 127.0.0.1:5601:5601 \
 -p 127.0.0.1:3306:3306 \
 -e RUN_MORDRED=NO \
@@ -74,7 +74,7 @@ Once the container is running, you can connect to it,
and launch any GrimoireLab command or program in it:
```bash
-$ docker exec -it container_id env TERM=xterm /bin/bash
+docker exec -it container_id env TERM=xterm /bin/bash
```
That container can be used also, as such,
diff --git a/basics/supporting.md b/basics/supporting.md
index 6d4e7fb7..12c39259 100644
--- a/basics/supporting.md
+++ b/basics/supporting.md
@@ -33,20 +33,20 @@ provided the right version of Python is available. In other platforms, your mile
Python3 is a standard package in Debian, so it is easy to install:
```bash
-$ sudo apt-get install python3
+sudo apt-get install python3
```
Once installed, you can check the installed version:
```bash
-$ python3 --version
+python3 --version
```
For installing some other Python modules, including GrimoireLab modules, you will need `pip` for Python3. For using `venv` virtual environments, you will also need `ensurepip`. Both are available in Debian and derivatives as packages `python3-pip` and `python3-venv`:
```bash
-$ sudo apt-get install python3-pip
-$ sudo apt-get install python3-venv
+sudo apt-get install python3-pip
+sudo apt-get install python3-venv
```
More information about installing Python3 in other platforms is available in [Properly installing Python](http://docs.python-guide.org/en/latest/starting/installation/). In addition, you can also check information on [how to install pip](https://pip.pypa.io/en/stable/installing/).
@@ -56,7 +56,7 @@ More information about installing Python3 in other platforms is available in [Pr
If you are retrieving data from git repositories, you will need git installed. Pretty simple:
```bash
-$ sudo apt-get install git-all
+sudo apt-get install git-all
```
More information about installing git in other platforms is available in
@@ -71,7 +71,7 @@ For installing Elasticsearch you can follow its [installation instructions](http
Assuming the installed ElasticSearch directory is `elasticsearch`, to launch it you will just run the appropriate command \(no need to run this from the virtual environment\):
```bash
-$ elasticsearch/bin/elasticsearch
+elasticsearch/bin/elasticsearch
```
This will launch Elasticsearch that will listen via its HTTP REST API at `http://localhost:9200`. You can check that everything went well by pointing your web browser to that url, and watching the ElasticSearch welcome message.
@@ -91,7 +91,7 @@ You can install Kibana instead of Kibiter. Maybe you will lose some functionalit
Assuming the installed Kibana directory is `kibana`, to launch it, again just run the appropriate command:
```bash
-$ kibana/bin/kibana
+kibana/bin/kibana
```
This should serve a Kibana instance in `http://localhost:5601`. Point your web browser to that url, and you'll see the Kibana welcome page.
@@ -105,7 +105,7 @@ Now, we're ready to go.
Instead of following the installation instructions mentioned above, you can also install ElasticSearch and Kibana as a Docker container, by using pre-composed images.
For example:
```bash
-$ docker run -d -p 9200:9200 -p 5601:5601 nshou/elasticsearch-kibana
+docker run -d -p 9200:9200 -p 5601:5601 nshou/elasticsearch-kibana
```
Then you can connect to Elasticsearch by localhost:9200 and its Kibana front-end by localhost:5601. See [details about these Docker images in DockerHub](https://hub.docker.com/r/nshou/elasticsearch-kibana/)
@@ -116,7 +116,7 @@ Then you can connect to Elasticsearch by localhost:9200 and its Kibana front-end
If you are going to use SortingHat, you will need a database. Currently, MySQL-like databases are supported. In our case, we will use MariaDB. Installing it in Debian is easy:
```bash
-$ sudo apt-get install mariadb-server
+sudo apt-get install mariadb-server
```
That's it, that's all.
diff --git a/docs/data-sources/add-configurations.md b/docs/data-sources/add-configurations.md
index 242aa364..45352d29 100644
--- a/docs/data-sources/add-configurations.md
+++ b/docs/data-sources/add-configurations.md
@@ -66,8 +66,8 @@ out_index = git_study_forecast
Once you have made these changes, run your containers with docker-compose
-```console
-$ docker-compose up -d
+```bash
+docker-compose up -d
```
Give it some time to gather the data and after a while your dashboard and data
diff --git a/docs/getting-started/dev-setup.md b/docs/getting-started/dev-setup.md
index 482ef4b1..8bc12cca 100644
--- a/docs/getting-started/dev-setup.md
+++ b/docs/getting-started/dev-setup.md
@@ -130,8 +130,8 @@ mariadb:
```
Save the above into a docker-compose.yml file and run
-```console
-$ docker-compose up -d
+```bash
+docker-compose up -d
```
to get Elasticsearch, Kibiter and MariaDB running on your system.
@@ -159,8 +159,8 @@ Each local repo should have two `remotes`: `origin` points to the forked repo,
while `upstream` points to the original CHAOSS repo. An example is provided below.
-```console
-$ git remote -v
+```bash
+git remote -v
origin https://github.com/valeriocos/perceval (fetch)
origin https://github.com/valeriocos/perceval (push)
upstream https://github.com/chaoss/grimoirelab-perceval (fetch)
@@ -168,8 +168,8 @@ upstream https://github.com/chaoss/grimoirelab-perceval (push)
```
In order to add a remote to a Git repository, you can use the following command:
-```console
-$ git remote add upstream https://github.com/chaoss/grimoirelab-perceval
+```bash
+git remote add upstream https://github.com/chaoss/grimoirelab-perceval
```
#### ProTip
You can use this [script](https://gist.github.com/vchrombie/4403193198cd79e7ee0079259311f6e8)
to automate this whole process.
-```console
-$ python3 glab-dev-env-setup.py --create --token xxxx --source sources
+```bash
+python3 glab-dev-env-setup.py --create --token xxxx --source sources
```
### Setting up PyCharm
@@ -255,7 +255,7 @@ url = http://localhost:9200
Run the following commands, which will collect and enrich the data coming from the git sections and upload the corresponding panels to Kibiter:
-```console
+```bash
micro.py --raw --enrich --cfg ./setup.cfg --backends git cocom
micro.py --panels --cfg ./setup.cfg
```
diff --git a/docs/getting-started/setup.md b/docs/getting-started/setup.md
index 331503b7..21742b73 100644
--- a/docs/getting-started/setup.md
+++ b/docs/getting-started/setup.md
@@ -24,23 +24,31 @@ through the following means.
### Software -```console -$ git --version +```bash +git --version git version 2.32.0 -$ docker --version +``` +```bash +docker --version Docker version 20.10.7, build f0df35096d -$ docker-compose --version +``` +```bash +docker-compose --version docker-compose version 1.28.5, build c4eb3a1f ``` ### Hardware -```console -$ cat /proc/cpuinfo | grep processor | wc -l #View number of processors +```bash +cat /proc/cpuinfo | grep processor | wc -l #View number of processors 4 -$ grep MemTotal /proc/meminfo #View amount of RAM available +``` +```bash +grep MemTotal /proc/meminfo #View amount of RAM available MemTotal: 8029848 kB -$ sudo sysctl -w vm.max_map_count=262144 #Set virtual memory +``` +```bash +sudo sysctl -w vm.max_map_count=262144 #Set virtual memory vm.max_map_count = 262144 ``` @@ -52,15 +60,15 @@ mapped areas. - Clone the GrimoireLab repo: -```console -$ git clone https://github.com/chaoss/grimoirelab +```bash +git clone https://github.com/chaoss/grimoirelab ``` - Go to `docker-compose` folder and run the following command: -```console -$ cd grimoirelab/docker-compose -grimoirelab/docker-compose$ sudo docker-compose up -d +```bash +cd grimoirelab/docker-compose +sudo docker-compose up -d ``` Your dashboard will be ready after a while at `http://localhost:5601`. Usually, diff --git a/docs/getting-started/troubleshooting.md b/docs/getting-started/troubleshooting.md index 72a3cc76..6329cd6f 100644 --- a/docs/getting-started/troubleshooting.md +++ b/docs/getting-started/troubleshooting.md @@ -18,8 +18,9 @@ parent: Getting Started > docker-compose command without the `-d` or `--detach` flag. That will allow > you to see all the logs while starting/(re)creating/building/attaching > containers for a service. - ```console - grimoirelab/docker-compose$ docker-compose up + ```bash + cd grimoirelab/docker-compose + docker-compose up ``` --- @@ -35,23 +36,24 @@ parent: Getting Started It may also happen that the port, 5601, is already allocated to some other container. So running docker-compose will lead to the following error -```console +``` WARNING: Host is already in use by another container ``` In order to fix it, you need to see which container is using that port and kill that container. -```console -$ docker container ls # View all running containers +```bash +docker container ls # View all running containers CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 01f0767adb47 grimoirelab/hatstall:latest "/bin/sh -c ${DEPLOY…" 2 minutes ago Up 2 minutes 0.0.0.0:8000->80/tcp, :::8000->80/tcp docker-compose_hatstall_1 9587614c7c4e bitergia/mordred:latest "/bin/sh -c ${DEPLOY…" 2 minutes ago Up 2 minutes (unhealthy) docker-compose_mordred_1 c3f3f118bead bitergia/kibiter:community-v6.8.6-3 "/docker_entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp docker-compose_kibiter_1 d3c691acaf7b mariadb:10.0 "docker-entrypoint.s…" 2 minutes ago Up 2 minutes 3306/tcp docker-compose_mariadb_1 f5f406146ee9 docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6 "/usr/local/bin/dock…" 2 minutes ago Up 2 minutes 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp docker-compose_elasticsearch_1 - -$ docker rm -f c3f3f118bead #c3f3f118bead is the container that is using port 5601. +``` +```bash +docker rm -f c3f3f118bead #c3f3f118bead is the container that is using port 5601. ``` ### Empty dashboard or visualization @@ -75,7 +77,7 @@ localhost:9200` messages. 
Diagnosis
Check for the following log in the output of `docker-compose up`
-```console
+```bash
elasticsearch_1 | ERROR: [1] bootstrap checks failed
elasticsearch_1 | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
```
Solution
Increase the kernel `max_map_count` parameter of vm using the following command.
-```console
-$ sudo sysctl -w vm.max_map_count=262144
+```bash
+sudo sysctl -w vm.max_map_count=262144
```
Now stop the container services and re-run `docker-compose up`. Note that this is valid only for the current session. To set this value permanently, update the `vm.max_map_count` setting in the `/etc/sysctl.conf` file. To verify after rebooting, run the below command.
-```console
-$ sysctl vm.max_map_count
+```bash
+sysctl vm.max_map_count
```
### Processes have conflicts with SearchGuard
@@ -100,33 +102,33 @@ $ sysctl vm.max_map_count
Indication
Cannot open `localhost:9200` in browser, shows `Secure connection Failed`
-```console
-$ curl -XGET localhost:9200 -k
+```bash
+curl -XGET localhost:9200 -k
curl: (52) Empty reply from server
```
Diagnosis
Check for the following log in the output of `docker-compose up`
-```console
+```bash
elasticsearch_1 | [2020-03-12T13:05:34,959][WARN ][c.f.s.h.SearchGuardHttpServerTransport] [Xrb6LcS] Someone (/172.18.0.1:59838) speaks http plaintext instead of ssl, will close the channel
```
Check for conflicting processes by running the below command (assuming 5888 is the port number)
-```console
-$ sudo lsof -i:5888
+```bash
+sudo lsof -i:5888
```
Solution
1. Try to close the conflicting processes. You can do this easily with fuser, which is available in the `psmisc` package
-```console
-$ sudo apt-get install fuser
+```bash
+sudo apt-get install psmisc
```
Run the below command (assuming 5888 is the port number)
-```console
-$ fuser -k 58888/tcp
+```bash
+fuser -k 5888/tcp
```
Re-run `docker-compose up` and check if `localhost:9200` shows up.
@@ -145,7 +147,7 @@ Can't create indices in Kibana. Nothing happens after clicking create index.
Diagnosis Check for the following log in the output of `docker-compose up` -```console +```bash elasticsearch_1 |[INFO ][c.f.s.c.PrivilegesEvaluator] No index-level perm match for User [name=readall, roles=[readall], requestedTenant=null] [IndexType [index=.kibana, type=doc]] [Action [[indices:data/write/index]]] [RolesChecked [sg_own_index, sg_readall]] elasticsearch_1 | [c.f.s.c.PrivilegesEvaluator] No permissions for {sg_own_index=[IndexType [index=.kibana, type=doc]], sg_readall=[IndexType [index=.kibana, type=doc]]} kibiter_1 | {"type":"response","@timestamp":CURRENT_TIME,"tags":[],"pid":1,"method":"post","statusCode":403,"req":{"url":"/api/saved_objects/index-pattern?overwrite=false","method":"post","headers":{"host":"localhost:5601","user-agent":YOUR_USER_AGENT,"accept":"application/json, text/plain, /","accept-language":"en-US,en;q=0.5","accept-encoding":"gzip, deflate","referer":"http://localhost:5601/app/kibana","content-type":"application/json;charset=utf-8","kbn-version":"6.1.4-1","content-length":"59","connection":"keep-alive"},"remoteAddress":YOUR_IP,"userAgent":YOUR_IP,"referer":"http://localhost:5601/app/kibana"},"res":{"statusCode":403,"responseTime":25,"contentLength":9},"message":"POST /api/saved_objects/index-pattern?overwrite=false 403 25ms - 9.0B"} @@ -166,11 +168,11 @@ Indication and Diagnosis Check for the following error after executing [Micro Mordred](https://github.com/chaoss/grimoirelab-sirmordred/tree/master/sirmordred/utils/micro.py) using the below command (assuming `git` is the backend) -```console +```bash micro.py --raw --enrich --panels --cfg ./setup.cfg --backends git ``` -```console +```bash [git] Problem executing study enrich_areas_of_code:git, RequestError(400, 'search_phase_execution_exception', 'No mapping found for [metadata__timestamp] in order to sort on') ``` @@ -215,15 +217,15 @@ again and extract all commits. Indication Cannot open `localhost:9200` in browser, shows `Secure connection Failed` -```console -$ curl -XGET localhost:9200 -k +```bash +curl -XGET localhost:9200 -k curl: (7) Failed to connect to localhost port 9200: Connection refused ``` Diagnosis Check for the following log in the output of `docker-compose up` -```console +```bash elasticsearch_1 | ERROR: [1] bootstrap checks failed elasticsearch_1 | [1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536] ``` @@ -232,13 +234,13 @@ Solution 1. Increase the maximum File Descriptors (FD) enforced. You can do this by running the below command. -```console -$ sysctl -w fs.file-max=65536 +```bash +sysctl -w fs.file-max=65536 ``` To set this value permanently, update `/etc/security/limits.conf` content to below. To verify after rebooting, run -```console -$ sysctl fs.file-max +```bash +sysctl fs.file-max ``` ``` elasticsearch soft nofile 65536 @@ -294,7 +296,7 @@ Indication Diagnosis -```console +```bash Retrying (Retry(total=10,connected=21,read=0,redirect=5,status=None)) after connection broken by 'SSLError(SSLError{1,'[SSL: WRONG_VERSION_NUMBER] wrong version number {_ssl.c:852}'},)': / ``` @@ -316,7 +318,7 @@ url = http://localhost:9200 Diagnosis -```console +```bash : [Errno 2]No such file or directory : 'cloc': 'cloc' ``` @@ -326,8 +328,8 @@ Execute the following command to install `cloc` (more details are available in the [Graal](https://github.com/chaoss/grimoirelab-graal#how-to-installcreate-the-executables) repo). 
-```console -$ sudo apt-get install cloc +```bash +sudo apt-get install cloc ``` ### Incomplete data diff --git a/gelk/kidash.md b/gelk/kidash.md index 70cfe7e4..927ae3b0 100644 --- a/gelk/kidash.md +++ b/gelk/kidash.md @@ -14,13 +14,13 @@ You can save a dashboard, with all its components, to a file, either for backup ```bash -(grimoireelk) kidash -e http://localhost:9200 --dashboard "Git" --export /tmp/dashboard-git.json +kidash -e http://localhost:9200 --dashboard "Git" --export /tmp/dashboard-git.json ``` You can learn the name of the dashboard by looking at its top left corner, or by noting the name you use when opening it in Kibana. If the name includes spaces, use "-" instead. For example, for a dashboard named "Git History", use the line: ```bash -(grimoireelk) kidash -e http://localhost:9200 --dashboard "Git-History" \ +kidash -e http://localhost:9200 --dashboard "Git-History" \ --export /tmp/dashboard-git.json ``` @@ -42,7 +42,7 @@ We already restored a dashboard in the We can restore from any file created with kidash. Assuming we have that file as `/tmp/dashboard-git.json`, we need to know the link to the ElasticSearch REST interface (same as for backing up). The format is, for example, as follows: ```bash -(grimoireelk) $ kidash --elastic_url http://localhost:9200 \ +kidash --elastic_url http://localhost:9200 \ --import /tmp/dashboard-git.json ``` @@ -53,5 +53,5 @@ This will restore all elements in the file, overwriting, if needed, elements wit Kidash has some more options. For a complete listing, use the `--help` argument: ```bash -(grimoireelk) $ kidash --help +kidash --help ``` diff --git a/gelk/meetup.md b/gelk/meetup.md index ab6ff984..92039df4 100644 --- a/gelk/meetup.md +++ b/gelk/meetup.md @@ -27,8 +27,8 @@ Note: If your site redirects on page load, you may not see the code in the final For each of the group names, you only need to run the following command, assuming the group name is `group_name` and the Meetup API key is `meetup_key`: ```bash -(gl) $ p2o.py --enrich --index meetup_raw --index-enrich meetup \ --e http://localhost:9200 --no_inc --debug meetup group_name -t meetup_key --tag group_name +p2o.py --enrich --index meetup_raw --index-enrich meetup \ + -e http://localhost:9200 --no_inc --debug meetup group_name -t meetup_key --tag group_name ``` If the group has a sizable activity, the command will be retrieving data for a while, and uploading it to ElasticSearch, producing: diff --git a/gelk/simple.md b/gelk/simple.md index 66172ec0..666ed25a 100644 --- a/gelk/simple.md +++ b/gelk/simple.md @@ -16,11 +16,13 @@ Let's run `p2o.py` to create the indexes in ElasticSearch. We will create both t As an example, we produce indexes for two git repositories: those of Perceval and GrimoireELK. We will use `git_raw` as the name for the raw index, and `git` for the enriched one. We will store indexes in our ElasticSearch instance listening at `http://localhost:9200`. Each of the following commands will retrieve and enrich data for one of the git repositories: ```bash -(gl) $ p2o.py --enrich --index git_raw --index-enrich git \ +p2o.py --enrich --index git_raw --index-enrich git \ -e http://localhost:9200 --no_inc --debug \ git https://github.com/grimoirelab/perceval.git ... -(gl) $ p2o.py --enrich --index git_raw --index-enrich git \ +``` +```bash +p2o.py --enrich --index git_raw --index-enrich git \ -e http://localhost:9200 --no_inc --debug \ git https://github.com/grimoirelab/GrimoireELK.git ... 
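Once both runs finish, a quick sanity check is to ask ElasticSearch how many documents the enriched index holds, using its standard `_count` endpoint (assuming the same local instance and the `git` index produced above):
```bash
curl -XGET 'http://localhost:9200/git/_count?pretty'
```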
@@ -44,7 +46,7 @@ Download it to your `/tmp` directory (Note: Please use 'Save Link as' option
for downloading), and run the command:
```bash
-(grimoireelk) $ kidash --elastic_url http://localhost:9200 \
+kidash --elastic_url http://localhost:9200 \
 --import /tmp/git-dashboard.json
```
@@ -59,8 +61,8 @@ In this section you have learned to produce a simple dashboard, using Perceval,
In case you want to try a dashboard for some other repositories, once you're done with this one, you can delete the indexes \(both `git` and `git_raw`\), and produce new indexes with `p2o.py`. For doing this, you can use `curl` and the ElasticSearch REST HTTP API:
```bash
-$ curl -XDELETE http://localhost:9200/git
-$ curl -XDELETE http://localhost:9200/git_raw
+curl -XDELETE http://localhost:9200/git
+curl -XDELETE http://localhost:9200/git_raw
```
Using the Kibiter/Kibana interface it is simple to modify the dashboard, its visualizations, and produce new dashboards and visualizations. If you are interested, have a look at the [Kibana User Guide](https://www.elastic.co/guide/en/kibana/current/).
@@ -68,7 +70,7 @@ Using the Kibiter/Kibana interface it is simple to modify the dashboard, its vis
`p2o.py` can be used to produce indexes for many other data sources. For example, for GitHub issues and pull requests, the magic line is like this \(of course, substitute XXX for your GitHub token\):
```bash
-$ (grimoireelk) p2o.py --enrich --index github_raw --index-enrich github \
+p2o.py --enrich --index github_raw --index-enrich github \
 -e http://localhost:9200 --no_inc --debug \
 github grimoirelab perceval \
 -t XXX --sleep-for-rate
```
In this case, you can use the
Download it to your `/tmp` directory (Note: Please use 'Save Link as' option for downloading), and run the command:
```bash
-(grimoireelk) $ kidash --elastic_url http://localhost:9200 \
+kidash --elastic_url http://localhost:9200 \
 --import /tmp/github-dashboard.json
```
diff --git a/gelk/sortinghat.md b/gelk/sortinghat.md
index 905945ce..a26fafa3 100644
--- a/gelk/sortinghat.md
+++ b/gelk/sortinghat.md
@@ -19,7 +19,7 @@ you need to initialize a database for it. Usually, each dashboard will have its
own SortingHat database, although several dashboards can share the same. Initializing the database means creating the SQL schema for it, initializing its tables, and not much more. But you don't need to know about the details: SortingHat will take care of that for you. Just run `sortinghat init` with the appropriate options:
```bash
-(gl) $ sortinghat -u user -p XXX init shdb
+sortinghat -u user -p XXX init shdb
```
In this case, `user` is a user of the MySQL instance with permissions to create a new MySQL schema (database), `XXX` is the password for that user, and `shdb` is the name of the database to be created.
@@ -27,7 +27,7 @@ In this case, `user` is a user of the MySQL instance with permissions to create
If the command didn't throw any error message, you're done: a new `shdb` database was created. If you want, you can check it with a simple `mysql` command:
```bash
-$ mysql -u user -pXXX -e 'SHOW DATABASES;'
+mysql -u user -pXXX -e 'SHOW DATABASES;'
```
You should see `shdb` in the list of databases.
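You can also peek at the (still empty) tables that SortingHat just created, with the same kind of `mysql` one-liner used above:
```bash
mysql -u user -pXXX -e 'SHOW TABLES;' shdb
```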
If for any reason you want to delete the database at some point, just run the appropriate mysql command: ```bash -$ mysql -u user -pXXX -e 'DROP DATABASE shdb;' +mysql -u user -pXXX -e 'DROP DATABASE shdb;' ``` Now, with our shiny new database ready, you can create indexes with SortingHat support. @@ -47,7 +47,7 @@ For creating the indexes, we run `p2o.py` the same way we have done before, but For example, for producing the index for the git repository for Perceval, run: ```bash -(gl) $ p2o.py --enrich --index git_raw --index-enrich git \ +p2o.py --enrich --index git_raw --index-enrich git \ -e http://localhost:9200 --no_inc --debug \ --db-host localhost --db-sortinghat shdb --db-user user --db-password XXX \ git https://github.com/grimoirelab/perceval.git @@ -61,7 +61,7 @@ For example, for producing the index for the git repository for Perceval, run: That means we have new `git_raw` and `git` indexes, but we also have a populated `shdb` database (assuming we have MySQL running in `localhost`, that is the machine where the script is run). If you want to check what's in it, you can again use `mysql`: ```bash -$ mysql -u user -pXXX -e 'SELECT * FROM identities;' shdb +mysql -u user -pXXX -e 'SELECT * FROM identities;' shdb ``` This will show all the identities found in the Perceval git repository. @@ -71,14 +71,14 @@ This will show all the identities found in the Perceval git repository. Let's produce now a Kibana dashboard for our enriched index (`git` in our ElasticSearch instance). I will start by installing `kidash`, to upload a JSON description of the dashboard, its visualizations, and everything needed: ```bash -(sh) $ pip install kidash +pip install kidash ``` Then, I use the JSON description of a dashboard for Git that includes visualizations for some fields generated from the SortingHat database: [git-sortinghat.json](dashboards/git-sortinghat.json). ```bash -(sh) $ kidash --elastic_url http://localhost:9200 \ +kidash --elastic_url http://localhost:9200 \ --import /tmp/git-sortinghat.json ``` diff --git a/graal/cocom.md b/graal/cocom.md index 6b92bf70..b548d45e 100644 --- a/graal/cocom.md +++ b/graal/cocom.md @@ -12,7 +12,7 @@ Once you've successfully installed Graal, you can get started real quick with the command line interface as easy as - ```sh -(graal) $ graal cocom --help +graal cocom --help ``` **Note:** You can invoke other available backends in a similar way. @@ -22,7 +22,7 @@ Once you've successfully installed Graal, you can get started real quick with th - Let's start our analysis with the host repository itself. As you can see the positional parameter is added with the repository url and `git-path` flag is used to define the path where the git repository will be cloned. ```sh -(graal) $ graal cocom https://github.com/chaoss/grimoirelab-graal --git-path /tmp/graal-cocom +graal cocom https://github.com/chaoss/grimoirelab-graal --git-path /tmp/graal-cocom [2019-03-27 21:32:03,719] - Starting the quest for the Graal. [2019-03-27 21:32:11,663] - Git worktree /tmp/worktrees/graal-cocom created! [2019-03-27 21:32:11,663] - Fetching commits: 'https://github.com/chaoss/grimoirelab-graal' git repository from 1970-01-01 00:00:00+00:00 to 2100-01-01 00:00:00+00:00; all branches diff --git a/manuscripts/first.md b/manuscripts/first.md index c6a56954..d21e315b 100644 --- a/manuscripts/first.md +++ b/manuscripts/first.md @@ -11,7 +11,7 @@ Reporting with GrimoireLab Manuscripts is easy. 
You need to have enriched ElasticSearch indexes.
For example, to produce a report about Git data in the standard GrimoireLab enriched index in my local ElasticSearch (accessible in the standard [http://localhost:9200](http://localhost:9200) location), you only need to run:
```bash
-(gl) $ manuscripts -d /tmp/reports -u http://localhost:9200 \
+manuscripts -d /tmp/reports -u http://localhost:9200 \
 -n GrimoireLab --data-sources git
```
diff --git a/python/es-dsl.md b/python/es-dsl.md
index 9c9cef3c..79f9af48 100644
--- a/python/es-dsl.md
+++ b/python/es-dsl.md
@@ -5,7 +5,7 @@ The `elasticsearch` Python module may seem good enough to query ElasticSearch vi
To install it, just use pip:
```bash
-(perceval) $ pip install elasticsearch_dsl
+pip install elasticsearch_dsl
```
It needs the `elasticsearch` Python module to work, but you'll have it already installed, or it will be pulled in via dependencies, so don't worry about it.
@@ -66,4 +66,4 @@ for commit in response:
 print(commit.hash, commit.author_date, commit.author)
```
-Now, instead of `scan()`, we use `execute()` which allows for slicing (note the line where we slice `request`), and preserves order.
\ No newline at end of file
+Now, instead of `scan()`, we use `execute()` which allows for slicing (note the line where we slice `request`), and preserves order.
diff --git a/python/es.md b/python/es.md
index 2f0da76a..b2e325b7 100644
--- a/python/es.md
+++ b/python/es.md
@@ -13,7 +13,7 @@ Instead of that, we will move one abstraction layer up, and will use the [elasti
So, let's start with the basics of using the `elasticsearch` module. To begin with, we will add the module to our virtual environment, using pip:
```bash
-(perceval) $ pip install elasticsearch
+pip install elasticsearch
```
Now we can write some Python code to test it.
@@ -48,7 +48,7 @@ This little script assumes that we're running a local instance of ElasticSearch,
When running it, you'll see the objects with the hashes being printed on the screen, right before they are uploaded to ElasticSearch:
```bash
-(perceval) $ python perceval_elasticsearch_1.py
+python perceval_elasticsearch_1.py
{'hash': 'dc78c254e464ff334892e0448a23e4cfbfc637a3'}
{'hash': '57bc204822832a6c23ac7883e5392f4da6f4ca37'}
{'hash': '2355d18310d8e15c8e5d44f688d757df33b0e4be'}
...
```
Once you run the script, the `commits` index is created in ElasticSearch. You can check its characteristics using `curl`. The `pretty` option is to obtain a human-readable JSON document as response. Notice that we don't need to run `curl` from the virtual environment:
-```
-$ curl -XGET http://localhost:9200/commits?pretty
+```bash
+curl -XGET http://localhost:9200/commits?pretty
{
 "commits" : {
 "aliases" : { },
@@ -92,7 +92,7 @@ $ curl -XGET http://localhost:9200/commits?pretty
If you want to delete the index (for example, to run the script once again) you can just run `DELETE` on its url. For example, with `curl`:
```bash
-$ curl -XDELETE http://localhost:9200/commits
+curl -XDELETE http://localhost:9200/commits
{"acknowledged":true}
```
@@ -175,7 +175,7 @@ print('\nCreated new index with commits.')
After running it (deleting any previous `commits` index if needed), we have a new index with the intended information for all commits.
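For a quick look at how many documents made it into the index, you can also ask the standard ElasticSearch `_count` endpoint (assuming the same local instance and the `commits` index used above):
```bash
curl -XGET 'http://localhost:9200/commits/_count?pretty'
```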
We can see one of them by querying the index using the ElasticSearch REST API directly with `curl`:
```bash
-$ curl -XGET "http://localhost:9200/commits/_search/?size=1&pretty"
+curl -XGET "http://localhost:9200/commits/_search/?size=1&pretty"
{
 "took" : 2,
 "timed_out" : false,
@@ -212,7 +212,7 @@ Since we specified in the query we only wanted one document (`size=1`), we get a
Every index in ElasticSearch has a 'mapping'. Mappings specify how the index is organized, for example in terms of data types. If we don't specify a mapping before uploading data to an index, ElasticSearch will infer the mapping from the data. Therefore, even though we created no mapping for it, we can have a look at the mapping for the recently created index:
```bash
-(perceval) $ curl -XGET "http://localhost:9200/commits/_mapping?pretty"
+curl -XGET "http://localhost:9200/commits/_mapping?pretty"
{
 "commits" : {
 "mappings" : {
@@ -261,8 +261,8 @@ import datetime
Instead of using the character strings that we get from Perceval as values for those two fields, we first convert them to `datetime` objects. This is enough for the `elasticsearch` module to recognize as dates, and upload them as such. You can check the resulting mapping after running this new script:
-```
-$ curl -XGET "http://localhost:9200/commits/_mapping?pretty"
+```bash
+curl -XGET "http://localhost:9200/commits/_mapping?pretty"
{
 "commits" : {
 "mappings" : {
diff --git a/sirmordred/container.md b/sirmordred/container.md
index 35cdca69..8e30cb89 100644
--- a/sirmordred/container.md
+++ b/sirmordred/container.md
@@ -17,7 +17,7 @@ For using these container images, ensure you have a recent version of `docker` i
To try this container image, just run it as follows:
```bash
-$ docker run -p 127.0.0.1:5601:5601 \
+docker run -p 127.0.0.1:5601:5601 \
 -v $(pwd)/credentials.cfg:/override.cfg \
 -t grimoirelab/full
```
`credentials.cfg` should have a GitHub API token (see [Personal GitHub API tokens](https://github.com/blog/1509-personal-api-tokens)), in a `mordred.cfg` format:
-```
+```cfg
[github]
api-token = XXX
```
@@ -42,7 +42,7 @@ There are three configuration files read in before `/override.cfg`. The first on
A slightly different command line is as follows:
```bash
-$ docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \
+docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \
 -v $(pwd)/logs:/logs \
 -v $(pwd)/credentials.cfg:/override.cfg \
 -t grimoirelab/full
```
@@ -53,7 +53,7 @@ This one will expose also port `9200`, which corresponds to Elasticsearch. This
By default, Elasticsearch will store indexes within the container image, which means they are not persistent if the image shuts down. But you can mount a local directory for Elasticsearch to write the indexes in. This way, they will be available from one run of the image to the next one.
For example, to let Elasticsearch use directory `es-data` to write the indexes: ```bash -$ docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \ +docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \ -v $(pwd)/logs:/logs \ -v $(pwd)/credentials.cfg:/override.cfg \ -v $(pwd)/es-data:/var/lib/elasticsearch \ @@ -64,7 +64,7 @@ The `grimoirelab/full` container, by default, produces a dashboard showing an an The file to override is `/projects.json` in the container, so the command to run it could be (assuming the file was created as `projects.json` in the current directory): ```bash -$ docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \ +docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \ -v $(pwd)/logs:/logs \ -v $(pwd)/credentials.cfg:/override.cfg \ -v $(pwd)/projects.json:/projects.json \ @@ -74,7 +74,7 @@ $ docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \ You can also get a shell in the running container, and run arbitrary GrimoireLab commands (`container_id` is the identifier of the running container, that you can find out with `docker ps`, or by looking at the first line when running the container): ```bash -$ docker exec -it container_id env TERM=xterm /bin/bash +docker exec -it container_id env TERM=xterm /bin/bash ``` In the shell prompt, write any GrimoireLab command. And if you have mounted external files for the SirMordred configuration, you can modify them, and run SirMordred again, to change its behavior. @@ -82,7 +82,7 @@ In the shell prompt, write any GrimoireLab command. And if you have mounted exte If you want to connect to the dashboard to issue your own commands, but don't want it to run SirMordred by itself, run the container setting `RUN_MORDRED` to `NO`: ```bash -$ docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \ +docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:5601:5601 \ -v $(pwd)/logs:/logs \ -v $(pwd)/credentials.cfg:/override.cfg \ -v $(pwd)/es-data:/var/lib/elasticsearch \ @@ -100,7 +100,7 @@ This will make the container launch all services, but not running `sirmordred`: For running the `grimoirelab/installed` docker image, first set up the supporting systems in your host, as detailed in the [Supporting systems](../basics/supporting.md) section. Finally, compose a SirMordred configuration file with credentials and references the supporting system. For example: -``` +```cfg [es_collection] url = http://localhost:9200 user = @@ -130,7 +130,7 @@ The last two lines specify your GitHub user token, which is needed to access the Now, just run the container as: ```bash -$ docker run --net="host" \ +docker run --net="host" \ -v $(pwd)/credentials.cfg:/override.cfg \ grimoirelab/installed ``` diff --git a/sirmordred/micro-mordred.md b/sirmordred/micro-mordred.md index 4f0ab9f1..1570d520 100644 --- a/sirmordred/micro-mordred.md +++ b/sirmordred/micro-mordred.md @@ -16,7 +16,7 @@ 1. We'll use the following docker-compose configuration to instantiate the required components i.e ElasticSearch, Kibiter and MariaDB. Note that we can omit the `mariadb` section in case you have MySQL/MariaDB already installed in our system. We'll name the following configuration as `docker-config.yml`. -``` +```yml elasticsearch: restart: on-failure:5 image: bitergia/elasticsearch:6.1.0-secured @@ -60,8 +60,8 @@ mariadb: You can now run the following command in order to start the execution of individual instances. 
-``` -$ docker-compose -f docker-config.yml up +```bash +docker-compose -f docker-config.yml up ``` Once you see something similar to the below `log` on your console, it means that you've successfully instantiated the containers corresponding to the required components. @@ -100,8 +100,8 @@ kibiter_1 | {"type":"log","@timestamp":"2019-05-30T09:38:25Z","tags":["st 3. As you can see on the `Kibiter Instance` above, it says `Couldn't find any Elasticsearch data. You'll need to index some data into Elasticsearch before you can create an index pattern`. Hence, in order to index some data, we'll now execute micro-mordred using the following command, which will call the `Raw` and `Enrich` tasks for the Git config section from the provided `setup.cfg` file. -``` -$ python3 micro.py --raw --enrich --cfg setup.cfg --backends git +```bash +python3 micro.py --raw --enrich --cfg setup.cfg --backends git ``` The above command requires two files: @@ -114,14 +114,14 @@ We'll (for the purpose of this tutorial) use the files provided in the `/utils` - **Note**: In case the process fails to index the data to the ElasticSearch, check the `.perceval` folder in the home directory; which in this case may contain the same repositories as mentioned in the `projects.json` file. We can proceed after removing the repositories using the following command. -``` -$ rm -rf .perceval/repositories/... +```bash +rm -rf .perceval/repositories/... ``` 4. Now, we can create the index pattern and after its successful creation we can analyze the data as per fields. Then, we execute the `panels` task to load the corresponding `sigils panels` to Kibiter instance using the following command. -``` -$ python3 micro.py --panels --cfg setup.cfg +```bash +python3 micro.py --panels --cfg setup.cfg ``` On successful execution of the above command, we can manage to produce some dashboard similar to the one shown below. diff --git a/sortinghat/basic.md b/sortinghat/basic.md index 506dab1e..a8507e4c 100644 --- a/sortinghat/basic.md +++ b/sortinghat/basic.md @@ -28,15 +28,15 @@ It is obvious that there are some repo identities in it that correspond to the s For example, let's merge repo identity `4fcec5a` (dpose, dpose@sega.bitergia.net) with `5b358fc` (dpose, dpose@bitergia.com), which I know correspond to the same person: ```bash - (gl) $ sortinghat -u user -p XXX -d shdb merge \ - 4fcec5a968246d8342e4acfceb9174531c8545c1 5b358fc11019cf2c03ea4c162009e89715e590dd - Unique identity 4fcec5a968246d8342e4acfceb9174531c8545c1 merged on 5b358fc11019cf2c03ea4c162009e89715e590dd +sortinghat -u user -p XXX -d shdb merge \ + 4fcec5a968246d8342e4acfceb9174531c8545c1 5b358fc11019cf2c03ea4c162009e89715e590dd +Unique identity 4fcec5a968246d8342e4acfceb9174531c8545c1 merged on 5b358fc11019cf2c03ea4c162009e89715e590dd ``` Notice that we had to use the complete hashes (in the table above, and in the listing in the previous section, we shortened them just for readability). 
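If you only have one of those shortened hashes at hand, you can recover the complete one with a query on the `identities` table described earlier (a sketch, assuming the same `user` / `XXX` / `shdb` credentials):
```bash
mysql -u user -pXXX -e 'SELECT id, uuid FROM identities WHERE id LIKE "4fcec5a%";' shdb
```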
What we have done is to merge `4fcec5a` on `5b358fc`, and the result is:
```bash
-$ mysql -u user -pXXX -e 'SELECT * FROM identities WHERE uuid LIKE "5b358fc%";' shdb
+mysql -u user -pXXX -e 'SELECT * FROM identities WHERE uuid LIKE "5b358fc%";' shdb
| id | name | email | username | source | uuid |
| 4fcec5a | dpose | dpose@sega.bitergia.net | NULL | git | 5b358fc |
| 5b358fc | dpose | dpose@bitergia.com | NULL | git | 5b358fc |
@@ -47,13 +47,17 @@ The query looked for all rows in the `identities` table whose `uuid` field start
We can follow this procedure for other identities that correspond to the same person: (Quan Zhou, quan@bitergia.com) and (quan, zhquan7@gmail.com); (Alberto Martín, alberto.martin@bitergia.com) and (Alberto Martín, albertinisg@users.noreply.github.com); and (Alvaro del Castillo, acs@thelma.cloud) and (Alvaro del Castillo, acs@bitergia.com):
```bash
-(gl) $ sortinghat -u user -p XXX -d shdb merge \
+sortinghat -u user -p XXX -d shdb merge \
 0cac4ef12631d5b0ef2fa27ef09729b45d7a68c1 11cc0348b60711cdee515286e394c961388230ab
Unique identity 0cac4ef12631d5b0ef2fa27ef09729b45d7a68c1 merged on 11cc0348b60711cdee515286e394c961388230ab
-(gl) $ sortinghat -u user -p XXX -d shdb merge \
+```
+```bash
+sortinghat -u user -p XXX -d shdb merge \
 35c0421704928bcbe3a0d9a4de1d79f9590ccaa9 37a8187909592a7b78559399105f6b5404af9e4e
Unique identity 35c0421704928bcbe3a0d9a4de1d79f9590ccaa9 merged on 37a8187909592a7b78559399105f6b5404af9e4e
-(gl) $ sortinghat -u user -p XXX -d shdb merge \
+```
+```bash
+sortinghat -u user -p XXX -d shdb merge \
 7ad0031fa2db40a5149f54dfc2ec2a355e9443cd 9aed245d9df109f8d00ca0e656121c3bdde46a2a
Unique identity 7ad0031fa2db40a5149f54dfc2ec2a355e9443cd merged on 9aed245d9df10
@@ -63,7 +67,7 @@ Unique identity 7ad0031fa2db40a5149f54dfc2ec2a355e9443cd merged on 9aed245d9df10
Now, we can check how SortingHat is storing information about these merged identities, but instead of querying the database directly, we can just use `sortinghat`:
```bash
-(gl) $ sortinghat -u user -p XXX -d shdb show \
+sortinghat -u user -p XXX -d shdb show \
 11cc0348b60711cdee515286e394c961388230ab
unique identity 11cc0348b60711cdee515286e394c961388230ab
@@ -87,7 +91,7 @@ We merged the repo identity (Quan Zhou, quan@bitergia.com) on the unique identit
Unfortunately, we cannot redo the merge in the most convenient order:
```bash
-(gl) $ sortinghat -u user -p XXX -d shdb merge \
+sortinghat -u user -p XXX -d shdb merge \
 11cc0348b60711cdee515286e394c961388230ab 0cac4ef12631d5b0ef2fa27ef09729b45d7a68c1
Error: 0cac4ef12631d5b0ef2fa27ef09729b45d7a68c1 not found in the registry
```
@@ -101,7 +105,7 @@ Later on we will revisit this case, since there are things that can be done: brea
We can just modify the profile for the unique identity, thus changing the profile for a person:
```bash
-(gl) $ sortinghat -u user -p XXX -d shdb profile \
+sortinghat -u user -p XXX -d shdb profile \
 --name "Quan Zhou" --email "quan@bitergia.com" \
 11cc0348b60711cdee515286e394c961388230ab
unique identity 11cc0348b60711cdee515286e394c961388230ab
@@ -122,7 +126,7 @@ When we interact with SortingHat, it only changes the contents of the database i
To make changes appear in the dashboard, we need to create new enriched indexes (re-enrich the indexes). We can do that by removing raw and enriched indexes from ElasticSearch, and then running the same `p2o.py` commands shown to produce new raw and enriched indexes.
But in our case, this is clearly overkill: we don't need to retrieve new raw indexes from the repositories, since they are fine. We only need to produce new enriched indexes. For that, we can run `p2o.py` as follows:
```
-(gl) $ p2o.py --only-enrich --index git_raw --index-enrich git \
+p2o.py --only-enrich --index git_raw --index-enrich git \
 -e http://localhost:9200 --no_inc --debug \
 --db-host localhost --db-sortinghat shdb --db-user user --db-password XXX \
 git https://github.com/grimoirelab/GrimoireELK.git
@@ -137,7 +141,7 @@ In this case, the command will create a new `git` index (by modifying the curren
The above method, even though it will work, is still overkill. We really don't need to modify the whole enriched indexes, by updating all the fields in their items. We just need to update the fields related to identities, which are the only ones that we need to change. For that, we have a specific option to `p2o.py`:
```
-(gl) $ p2o.py --only-enrich --refresh-identities --index git_raw --index-enrich git \
+p2o.py --only-enrich --refresh-identities --index git_raw --index-enrich git \
 -e http://localhost:9200 --no_inc --debug \
 --db-host localhost --db-sortinghat shdb --db-user user --db-password XXX \
 git https://github.com/grimoirelab/GrimoireELK.git
@@ -152,7 +156,7 @@ In most cases, when the SortingHat database is modified, only a handful of ident
In this case, the command to run is:
```
-(gl) $ p2o.py --only-enrich --refresh-identities --index git_raw --index-enrich git \
+p2o.py --only-enrich --refresh-identities --index git_raw --index-enrich git \
 --author_uuid 11cc0348b60711cdee515286e394c961388230ab \
 0cac4ef12631d5b0ef2fa27ef09729b45d7a68c1 \
 -e http://localhost:9200 --no_inc --debug \
diff --git a/sortinghat/data.md b/sortinghat/data.md
index 51de6b78..b64d6d8b 100644
--- a/sortinghat/data.md
+++ b/sortinghat/data.md
@@ -16,23 +16,31 @@ In this chapter we will learn how to use SortingHat in combination to other Grim
We will start by adding some more repositories to the index, to have some more complete data. Then we will use it to explore the capabilities of SortingHat for merging identities, for adding affiliations and for adapting profiles.
```bash -(gl) $ p2o.py --enrich --index git_raw --index-enrich git \ +p2o.py --enrich --index git_raw --index-enrich git \ -e http://localhost:9200 --no_inc --debug \ --db-host localhost --db-sortinghat shdb --db-user user --db-password XXX \ git https://github.com/grimoirelab/GrimoireELK.git -(gl) $ p2o.py --enrich --index git_raw --index-enrich git \ +``` +```bash +p2o.py --enrich --index git_raw --index-enrich git \ -e http://localhost:9200 --no_inc --debug \ --db-host localhost --db-sortinghat shdb --db-user user --db-password XXX \ git https://github.com/grimoirelab/panels.git -(gl) $ p2o.py --enrich --index git_raw --index-enrich git \ +``` +```bash +p2o.py --enrich --index git_raw --index-enrich git \ -e http://localhost:9200 --no_inc --debug \ --db-host localhost --db-sortinghat shdb --db-user user --db-password XXX \ git https://github.com/grimoirelab/mordred.git -(gl) $ p2o.py --enrich --index git_raw --index-enrich git \ +``` +```bash +p2o.py --enrich --index git_raw --index-enrich git \ -e http://localhost:9200 --no_inc --debug \ --db-host localhost --db-sortinghat shdb --db-user user --db-password XXX \ git https://github.com/grimoirelab/arthur.git -(gl) $ p2o.py --enrich --index git_raw --index-enrich git \ +``` +```bash +p2o.py --enrich --index git_raw --index-enrich git \ -e http://localhost:9200 --no_inc --debug \ --db-host localhost --db-sortinghat shdb --db-user user --db-password XXX \ git https://github.com/grimoirelab/training.git @@ -49,7 +57,7 @@ let's visit the data structure of the database it maintains. See [A dashboard with SortingHat](../gelk/sortinghat.md), and the introduction to this chapter, for details on how the database was produced; `user` and `XXX` are the credentials to access the `shdb` database. For finding out about its tables, just query MySQL. ```bash -$ mysql -u user -pXXX -e 'SHOW TABLES;' shdb +mysql -u user -pXXX -e 'SHOW TABLES;' shdb ... | countries | | domains_organizations | @@ -177,8 +185,8 @@ When we unify repo identities (merging several into a single unique identity), w Up to now we have not used SortingHat to assign organizations to persons (unique identities). Therefore, `enrollments` and `organizations` tables are empty. But we can check their structure. 
-```
-$ mysql -u user -pXXX -e 'DESCRIBE organizations;' shdb
+```bash
+mysql -u user -pXXX -e 'DESCRIBE organizations;' shdb
+-------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+----------------+
@@ -192,8 +200,8 @@ In this format, each row corresponds to the description of a field in the `organ
`enrollments` table is a bit more complex:
-```
-$ mysql -u user -pXXX -e 'DESCRIBE enrollments;' shdb
+```bash
+mysql -u user -pXXX -e 'DESCRIBE enrollments;' shdb
+-----------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------------+--------------+------+-----+---------+----------------+
diff --git a/tools-and-tips/elasticsearch.md b/tools-and-tips/elasticsearch.md
index 7bd5986b..5c1b9815 100644
--- a/tools-and-tips/elasticsearch.md
+++ b/tools-and-tips/elasticsearch.md
@@ -12,8 +12,8 @@ https://user:passwd@host:port/resource
To list all indexes stored by Elasticsearch:
-```
-$ curl -XGET 'https://elasticurl/_cat/indices?v'
+```bash
+curl -XGET 'https://elasticurl/_cat/indices?v'
```
This returns for each index, its name, status (`open` comes to mean 'usable'), number of documents, deleted documents, and storage size used.
@@ -24,8 +24,8 @@ Elasticsearch index aliases allow working with a collection of indexes as if it
To list the base indexes corresponding to an index alias (assume the index alias is `alias_index`):
-```
-$ curl -XGET 'https://elastic_url/alias_index/_alias/*'
+```bash
+curl -XGET 'https://elastic_url/alias_index/_alias/*'
```
The result will be similar to (where `base_index` is the base index for the alias, and `alias_index`, `alias_index2` are two aliases for that base index):
@@ -43,7 +43,7 @@ The result will be similar to (where `base_index` is the base index for the alias,
To remove aliases, and create new ones, in an atomic operation:
-```
+```bash
curl -XPOST 'https://elastic_url/_aliases' -d '
{
 "actions" : [
diff --git a/tools-and-tips/html5-app-latest-activity.md b/tools-and-tips/html5-app-latest-activity.md
index 72589067..b556bb8a 100644
--- a/tools-and-tips/html5-app-latest-activity.md
+++ b/tools-and-tips/html5-app-latest-activity.md
@@ -14,23 +14,23 @@ For demoing the application, you can first install the files for the HTML applic
For deploying the HTML5 app, just copy `index.html`, `events.js`, and `events.css`, all in the [`scripts`](https://github.com/jgbarah/GrimoireLab-training/blob/master/tools-and-tips/scripts/) directory, to your directory of choice. Then, ensure that some web server is serving that directory. For example, you can launch a simple Python server from it:
```bash
-$ python3 -m http.server
+python3 -m http.server
Serving HTTP on 0.0.0.0 port 8000 ...
```
Now, let's produce a JSON file with the events that the app will show.
For that, we will install [`elastic_last.py`](https://github.com/jgbarah/GrimoireLab-training/blob/master/tools-and-tips/scripts/elastic_last.py) in a Python3 virtual environment with all the needed dependencies (in this case, it is enough to install, via `pip`, the `elasticsearch-dsl` module), and run it:
-```
-$ python3 elastic_last.py --loop 10 --total 10 http://localhost:9200/git
+```bash
+python3 elastic_last.py --loop 10 --total 10 http://localhost:9200/git
```
(assuming ElasticSearch is running on the same host, on port 9200, as it runs by default, and that it has an index named `git` with the standard git index, as produced by GrimoireELK)
If we're using a `git` index in an ElasticSearch instance accessible at `https://grimoirelab.biterg.io/data`, using user `user` and password `XXX`:
-```
-$ python3 elastic_last.py --no_verify_certs --loop 10 --total 10 \
-https://user:XXX@grimoirelab.biterg.io/data/git
+```bash
+python3 elastic_last.py --no_verify_certs --loop 10 --total 10 \
+ https://user:XXX@grimoirelab.biterg.io/data/git
```
In both cases `--loop 10` will cause the script to retrieve the index every 10 seconds, and produce a file `events.json` with the latest 10 events in the index (commits in this case), because of the option `--total 10`. If you want, instead of just one url, you can include as many as you may want, one after the other, to retrieve data from several indexes every 10 seconds. The option `--no_verify_certs` is needed only if your Python installation has trouble checking the validity of the SSL certificates (needed because the url is using HTTPS).
diff --git a/tools-and-tips/perceval.md b/tools-and-tips/perceval.md
index 683b8f99..ba31a035 100644
--- a/tools-and-tips/perceval.md
+++ b/tools-and-tips/perceval.md
@@ -7,14 +7,14 @@ This section shows some scripts using Perceval.
[perceval_git_counter](https://github.com/jgbarah/GrimoireLab-training/blob/master/tools-and-tips/scripts/perceval_git_counter.py) is a simple utility to count commits in a git repository. Just run it with the url of the repository to count, and a directory to clone, and you're done:
```bash
-$ python perceval_git_counter.py https://github.com/grimoirelab/perceval.git /tmp/ppp
+python perceval_git_counter.py https://github.com/grimoirelab/perceval.git /tmp/ppp
Number of commmits: 579.
```
You can get a help banner, including options, by running
-```
-$ python perceval_git_counter.py --help
+```bash
+python perceval_git_counter.py --help
```
There is an option to print commit hashes for all commits in the repository: `--print`.
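As a rough cross-check of the number it reports, you can count the commits reachable from all branches with plain git (a sketch, assuming the same repository; counts may differ slightly depending on repository state at clone time):
```bash
git clone --bare https://github.com/grimoirelab/perceval.git /tmp/perceval-bare
git -C /tmp/perceval-bare rev-list --branches --count
```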