# Improved docs #268

Merged 3 commits on Nov 1, 2022
## README.md (35 additions, 26 deletions)
NavigaTUM is the official tool developed by students for students, that aims t…

Features:

- Interactive or RoomFinder-like maps to look up the position of rooms or buildings
- Fast and typo-tolerant search
- Support for different room code formats as well as generic names

Note: The API is still under development, and we are open to Issues, Feature Req…

## Getting started

### Overview

NavigaTUM consists of three main parts + deployment resources.

Depending on what you want to work on, you **do not need to set up all of them**.
For an overview of how the components work, have a look at the
[deployment documentation](deployment/README.md).

- `data/` contains the code to obtain and process the data
Expand All @@ -47,10 +49,9 @@ For an overview how the components work, have a look at the
- `deployment/` contains deployment related configuration
- `map/` contains information about our own map, how to style it and how to run it

The following steps assume you have just cloned the repository and are in its root directory.

### Data Processing

In case you do not want to work on the data processing, you can instead
download the latest compiled files:
```bash
wget -P data/output https://nav.tum.de/cdn/search_data.json
wget -P data/output https://nav.tum.de/cdn/search_synonyms.json
```
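After downloading, a quick sanity check can catch truncated or corrupted files. The snippet below is a minimal sketch demonstrated on a scratch file (the `/tmp/sample.json` path and its contents are invented for illustration); in the repo you would point it at `data/output/search_data.json` instead:

```shell
# Scratch file standing in for a downloaded dataset (contents are illustrative).
echo '{"hits": []}' > /tmp/sample.json
# json.tool parses the whole file and exits non-zero on invalid or truncated JSON.
python3 -m json.tool /tmp/sample.json > /dev/null && echo "valid JSON"
```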

Otherwise, you can follow the steps in the [data documentation](data/README.md).

### Server

If you only want to work on the webclient (and not the server or data), you don't need to set up the server. You can either use the public API (see the [webclient documentation](webclient/README.md#testing)) or use our ready-made Docker images to run the server locally:

```bash
docker network create navigatum-net
docker run -it --rm -p 7700:7700 --name search --network navigatum-net ghcr.io/tum-dev/navigatum-mieli-search:main
docker run -it --rm -p 8080:8080 --network navigatum-net -e MIELI_SEARCH_ADDR=search ghcr.io/tum-dev/navigatum-server:main
```

Otherwise, you can follow the steps in the [server documentation](server/README.md).

### Webclient

Follow the steps in the [webclient documentation](webclient/README.md).
If you only want to run the webclient locally, you can skip the "Data" and "Server" steps above and either use Docker (as shown above) or [edit the webclient configuration](webclient/README.md#testing) to point to production.

### Formatting

We have multiple programming languages in this repository, and we use different tools to format them.

Since we use [pre-commit](https://pre-commit.com/) to format our code, you can install it in a virtual environment with:

```bash
python3 -m venv venv
source venv/bin/activate
pip install -r data/requirements.txt -r server/test/requirements.txt -r requirements_dev.txt # mypy needs the server and data requirements
```
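If you are unsure whether the virtual environment is actually active, Python itself can tell you. This one-liner is a small sketch relying only on the standard `sys` module:

```shell
# Inside an active venv, sys.prefix differs from sys.base_prefix.
python3 -c 'import sys; print("venv active" if sys.prefix != sys.base_prefix else "no venv active")'
```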


To format all files, run the following command:

```bash
pre-commit run --all-files
```

You can also automatically **format files on every commit** by running the following command:

```bash
pre-commit install
```

## License

## data/README.md (22 additions, 13 deletions)
Also, new external data might break the scripts from time to time, as either roo…

## Getting started


### Prerequisites

For getting started, there are some system dependencies which you will need.
Please follow the [system dependencies docs](/resources/documentation/Dependencys.md) before trying to run this part of our project.

### Dependencies

Since the data processing needs some Python dependencies, you will need to install them first.
We recommend doing this in a virtual environment.

From the root of the project, run:
```bash
python3 -m venv venv
source venv/bin/activate
pip install -r data/requirements.txt -r requirements_dev.txt
```

## Getting external data

External data (and the scrapers) are stored in the `external/` subdirectory.

The latest scraped data is already included in this directory, so you do not need to run the scraping yourself and can skip to the next step.

However, if you want to update the scraped data, open `external/main.py` and comment out all
steps depending on what specific data you want to scrape (note that some steps depend on previous ones; in that case, the downloader will automatically run them as well).

Then, start scraping with:
```bash
cd external
export PYTHONPATH=$PYTHONPATH:..
python3 main.py
```

The data will be stored in the `cache` subdirectory as JSON files. To force a redownload, delete them.
```bash
cd ..
```
### Compiling the data

```bash
python3 compile.py
```

The exported datasets will be stored in `output/` as JSON files.
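To get a feel for these exports, you can inspect one with Python's standard `json` module. This is a hedged sketch on a scratch file (the two entries are invented for illustration); in the repo you would open e.g. `output/api_data.json` instead:

```shell
# Tiny stand-in dataset (entries invented for illustration).
printf '[{"id": "demo-room-1"}, {"id": "demo-room-2"}]' > /tmp/api_data_demo.json
# Count the top-level entries, as you might for the real output file.
python3 -c 'import json; print(len(json.load(open("/tmp/api_data_demo.json"))))'  # prints 2
```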
The data compilation is made of individual processing steps, where each step adds…
#### Step 00 Areatree

The starting point is the data defined in the "areatree" (in `sources/00_areatree`).
It (currently) has a custom data format to be human & machine-readable while taking only minimal space.
Details about the formatting are given at the head of the file.

## License

The source data (i.e. all files located in `sources/` that are not images) is made available under the Open Database License: <https://opendatacommons.org/licenses/odbl/1.0/>.
Any rights in individual contents of the database are licensed under the Database Contents License: <http://opendatacommons.org/licenses/dbcl/1.0/>.

The images in `sources/img/` are subject to their own licensing terms, which are stated in the file `sources/img/img-sources.yaml`.

## map/README.md (2 additions, 2 deletions)
```bash
docker run -it -e JAVA_TOOL_OPTIONS="-Xmx10g" -v "$(pwd)/map":/data ghcr.io/onth…
```

For `planet`, you might want to increase the `-Xmx` parameter to 20GB. For 128GB of RAM or more, you will want to use `--storage=ram` instead of `--storage=mmap`.

### Serve the tileset

After generating `output.mbtiles` you can serve it with a tileserver.
We use [tileserver-gl](https://github.com/maptiler/tileserver-gl) for this, but there are other ones out there.
From the root of the repository, run:
```bash
docker run --rm -it -v $(pwd)/map:/data -p 7770:80 maptiler/tileserver-gl
```
```

### Edit the style

For editing the style we use [Maputnik](https://github.com/maputnik/editor).
It is a web-based editor for Mapbox styles.
## resources/documentation/Dependencys.md (new file, 94 additions)
# Dependencies

Our project has a few system-level dependencies. Some are generally useful, and a few are only used for specific parts of the project.
If you get stuck or have any questions, feel free to contact us. We are happy to help.

## General

### OS

We recommend using a Linux-based OS, as we have not tested the project on Windows or macOS.
("There be dragons", but we will try to improve this part if you show us where we fail.)
If you are using Windows, use [WSL](https://docs.microsoft.com/en-us/windows/wsl/install-win10) to run Linux on Windows.

Please make sure that your OS is up-to-date before we start (trust us, outdated systems have tripped up multiple people before).
On Ubuntu this is as easy as running `sudo apt update && sudo apt upgrade`.

### Git

You probably already have it, but if not, install it using your package manager.

### Docker

We deploy our project using Docker containers.
This means that, if you have Docker installed, you can:

- Run a part of the project like the `server`, our `tileserver` or the search engine `meilisearch` locally
- Test deployment-linked changes locally

To get started with docker, you can follow the [official tutorial](https://docs.docker.com/get-started/).

## Specific (most of these are only needed for development of the respective part)

### Data Processing

#### Python3

The data processing scripts are written in Python, and they implicitly depend on a recent version of Python (~3.10).
If you don't meet this requirement, head over to the [Python website](https://www.python.org/downloads/) and download
the latest version.

### Server

#### Python3

The server has some scripts written in Python, which implicitly depend on a recent version of Python (~3.10).
If you don't meet this requirement, head over to the [Python website](https://www.python.org/downloads/) and download
the latest version.
We also assume that `python --version` outputs something like `Python 3.1X.Y`.
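One way to check this assumption from the shell is to inspect the version string directly. This is a small sketch, assuming `python3` is on your `PATH`:

```shell
ver=$(python3 --version)            # e.g. "Python 3.10.12"
case "$ver" in
  Python\ 3.*) echo "found $ver" ;;
  *)           echo "unexpected interpreter: $ver" ;;
esac
```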

#### Rust

Our server is written in [Rust](https://youtu.be/Q3AhzHq8ogs).
To get started with Rust, you can follow the [official tutorial](https://www.rust-lang.org/learn/get-started).
To install Rust, you can use [rustup](https://rustup.rs/).

#### OpenSSL

The server uses OpenSSL to verify TLS certificates.

```bash
sudo apt install build-essential pkg-config openssl libssl-dev
```

#### SQLite

The server uses SQLite.

```bash
sudo apt install libsqlite3-dev
```

### Webclient

#### NodeJS

We use NodeJS for the webclient.
Setting NodeJS up is a bit more complicated than setting up python/rust, but it is still pretty easy.

- On Linux, you can get it through your favorite package manager.
  You normally need to install `nodejs` and `npm`.
- On WSL, use [this guide](https://learn.microsoft.com/en-us/windows/dev-environment/javascript/nodejs-on-wsl)

#### Gulp

We currently use Gulp to build the webclient. Gulp is a task runner, which is used to automate tasks.
Gulp needs to be installed globally, so that it can be used from the command line.

Installing _Gulp_ with npm:

```bash
sudo npm install -g gulp
```
## server/README.md (28 additions, 8 deletions)
This folder contains the backend server for NavigaTUM.

## Getting started

### Prerequisites

For getting started, there are some system dependencies which you will need.
Please follow the [system dependencies docs](/resources/documentation/Dependencys.md) before trying to run this part of our project.


### Get the data

The data is provided to the server with just a simple JSON file.
You can create a `data` subdirectory and copy the `api_data.json`
so that you don't need to copy on every update:
```bash
ln -s ../data/output data
```
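The symlink approach can be illustrated in a scratch directory; everything below (`/tmp/nav-demo`, the `{}` placeholder file) is invented for demonstration:

```shell
mkdir -p /tmp/nav-demo/output && cd /tmp/nav-demo
echo '{}' > output/api_data.json
rm -f data && ln -s output data     # "data" now always reflects the contents of output/
cat data/api_data.json              # prints {}
```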

### Starting the server

Run

```bash
cargo run --release
```
The server should now be available on `localhost:8080`.

### Setup MeiliSearch (optional)

The server uses [MeiliSearch](https://github.com/meilisearch/MeiliSearch) as a backend for search.
For a local test environment you can skip this step if you don't want to test or work on search.
```bash
curl -X DELETE 'http://localhost:7700/indexes/entries'
```

MeiliSearch provides an interactive interface at [http://localhost:7700](http://localhost:7700).

### API-Changes

If you have made changes to the API, you need to update the API documentation.

There are two editors for the API documentation (both are imperfect):

- [Swagger Editor](https://editor.swagger.io/?url=https://raw.githubusercontent.com/TUM-Dev/navigatum/main/openapi.yaml)
- [Stoplight](https://stoplight.io)

Of course, documentation is only one part of the process. If the changes are substantial, you should also run an API fuzz test:
to make sure that this specification is up-to-date and without holes, we run [schemathesis](https://github.com/schemathesis/schemathesis) using the following command against the API server provided by the "Starting the server" step:

```bash
python -m venv venv
source venv/bin/activate
pip install schemathesis
st run --workers=auto --base-url=http://localhost:8080 --checks=all ../openapi.yaml
```

Some fuzzing goals may not be available for you locally, as they require prefix-routing (e.g. `/cdn` to the CDN), and some fuzzing goals are automatically tested in our CI.
You can exchange `--base-url=http://localhost:8080` for `--base-url=https://nav.tum.sexy` to target the full public API, or restrict your scope using an option like `--endpoint=/api/search`.

## License

This program is free software: you can redistribute it and/or modify