Search engine for Material Safety Data Sheets based on Solr
| Backend | Frontend | Deployment |
| --- | --- | --- |
The report and the intermediate presentations can be found in docs/.
NOTE: Read the following section carefully!

- You will need the dataset, which is NOT in this repository because of licensing issues.
- The dataset is linked as a git submodule and can be installed by running: `git submodule update --init --recursive`
- All of the following setups expect the dataset to be in `<repo-root>/hazard-ds/hazard-dataset`. If your dataset is in a different location, please override the path in the docker-compose files (the dataset is just mounted into the container) or override the application property `dataSetPath`.
- If you are not running the backend in a docker container, there are 2 additional files needed. Their default paths are `<repo-root>/hazard-ds/fscMap.txt` and `<repo-root>/hazard-ds/fsgMap.txt`. These paths can be adjusted through the environment variables `fscMapPath` and `fsgMapPath`.
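The path override could look roughly like this in a compose override file. This is a hypothetical sketch: the service name `mss-server` is taken from the log commands further below, and the container-side mount point is an assumption; check the actual docker-compose files before relying on it.

```yaml
# docker-compose.override.yml (hypothetical sketch)
version: "3"
services:
  mss-server:
    volumes:
      # mount your local dataset copy over the expected location (read-only)
      - /data/my-hazard-dataset:/hazard-ds/hazard-dataset:ro
    environment:
      # or point the backend somewhere else entirely via the dataSetPath property
      - dataSetPath=/hazard-ds/hazard-dataset
```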
NOTE: Installation instructions for required software to run MSDS Search:
NOTE: Installation instructions for required software to develop for MSDS Search:
- docker and docker-compose (see above)
- Java JDK 10 (manual)
- Java JDK 10 (via PPA)
- yarn
NOTE: If you have questions about the following docker commands, you can find their documentation here. These are the basic commands which you will probably need:
- `docker-compose up -d`: Create and start containers, volumes and networks, then detach your console.
- `docker-compose logs -f`: View the logs and attach your console to them.
- `docker-compose down`: Stop and delete all containers.
- `docker-compose down -v`: Stop and delete all containers and remove all data volumes.
- The `-f` flag is used to specify a certain compose file. There are different compose files with different purposes in this repository.
- clone the repository:
git clone https://github.com/leoek/material-safety-search.git
cd material-safety-search
- download submodules:
git submodule update --init --recursive
- With access to https://hub.docker.com/r/materialsafetysearch/private/
- login to dockerhub:
docker login
- Use our prebuilt docker images:
sudo docker-compose -f docker-compose.staging.yml up -d
- [Optional] View logs:
sudo docker-compose -f docker-compose.staging.yml logs -f mss-server
- Without access to the prebuilt docker images
- Build and run the images on your local machine (which might take a while):
sudo docker-compose -f docker-compose.local.yml up -d
- [Optional] View logs:
sudo docker-compose -f docker-compose.local.yml logs -f mss-server
- Wait for the dataset to be indexed; the search engine is already usable while only a part of it is indexed. (You can check the progress in the logs.)
ATTENTION: Please note that the Solr container does not require any login information. Therefore it should not be exposed to the public. Our docker-compose.local.yml configuration will expose all ports for debugging purposes. You should never use that configuration in any production environment.
- We recommend running all containers behind some kind of reverse proxy. You can find an example of how to do that in our docker-compose.staging.yml configuration.
- Our backend and frontend containers do not have any SSL support. We recommend setting up SSL within the reverse proxy.
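The reverse-proxy setup can be sketched with jwilder/nginx-proxy, which routes requests to containers based on their `VIRTUAL_HOST` environment variable. This is an illustrative sketch only: the hostname and the frontend image name are placeholders, not taken from docker-compose.staging.yml.

```yaml
# Hypothetical compose sketch of a jwilder/nginx-proxy front end.
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # the proxy watches the docker socket to discover containers to route to
      - /var/run/docker.sock:/tmp/docker.sock:ro
  mss-frontend:
    image: mss-frontend:latest   # placeholder image name
    environment:
      - VIRTUAL_HOST=mss.example.com   # placeholder hostname
```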
- You can find an example Ansible setup inside `/ansible`, which is used by Jenkins to deploy the search engine to the demo server. The Ansible setup will work with most Debian-based systems.
  - `ansible-role-common`: Basic server setup; you probably want to replace / adjust this.
  - `ansible-role-docker`: Install / update docker and docker-compose and set them up to work with the Ansible `docker_service` module (which has some flaws).
  - `ansible-role-docker-nginx-proxy`: Set up jwilder as a reverse proxy, with the letsencrypt-companion for SSL certificates.
  - `docker-mss-deployment-local`: This is the role which will deploy the search engine.
  - `docker-mss-deployment-local-reset-volumes`: This role can be used to clear the docker volumes. Use this if you want to re-index the dataset and clear all logs.
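A playbook wiring these roles together might look as follows. This is a hypothetical sketch: the inventory group name is made up, and `/ansible` contains the real setup.

```yaml
# site.yml (hypothetical sketch combining the roles listed above)
- hosts: mss_demo        # placeholder inventory group
  become: true
  roles:
    - ansible-role-common
    - ansible-role-docker
    - ansible-role-docker-nginx-proxy
    - docker-mss-deployment-local
```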
The API is defined by an [OpenAPI specification](api.yml).
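For orientation, such a specification typically has the following shape. This excerpt is purely illustrative: the path, parameter and response names are invented; the authoritative definition is api.yml.

```yaml
# Illustrative OpenAPI shape only -- see api.yml for the real definition.
openapi: "3.0.0"
info:
  title: MSDS Search API
  version: "0.1.0"
paths:
  /search:                # hypothetical endpoint
    get:
      parameters:
        - name: q         # hypothetical query parameter
          in: query
          schema:
            type: string
      responses:
        "200":
          description: Matching datasheets
```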
Check for these files to be present:
- The dataset
- fscMap.txt
- fsgMap.txt
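The presence checks above can be scripted. A minimal sketch, assuming the default paths from the dataset notes; the `REPO_ROOT` variable is introduced here for convenience and is not part of the project:

```shell
#!/bin/sh
# Check that the dataset and the FSC/FSG map files are present.
# REPO_ROOT defaults to the current directory.
REPO_ROOT="${REPO_ROOT:-.}"
ok=1
for f in "$REPO_ROOT/hazard-ds/hazard-dataset" \
         "$REPO_ROOT/hazard-ds/fscMap.txt" \
         "$REPO_ROOT/hazard-ds/fsgMap.txt"; do
  if [ ! -e "$f" ]; then
    echo "missing: $f"
    ok=0
  fi
done
if [ "$ok" -eq 1 ]; then
  echo "all files present"
else
  echo "run 'git submodule update --init --recursive' first"
fi
```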
We are using git flow.
- The `master` branch only contains released versions.
- Every release has its own tag; we are using semantic versioning.
- The `develop` branch contains the current state of development. The code within `develop` should always be in such a shape that we could make a release without any patches needed.
- Actual development happens in `feature` branches. `feature` branches should be committed to the remote repository and deleted once they are merged into `develop`.
- Try to avoid merge commits by using `git pull --rebase`.
- Every commit to develop will be built and published to docker hub by Jenkins. Builds from develop are tagged with `next`, builds from master with `stable` and their `<full version>-<buildnumber>`. Until the first release, builds from develop will be pushed with their version to develop as well.
- Every build from develop is deployed to mss.leoek.tech and api.mss.leoek.tech. The frontend can develop against that staging api.
Requirements:
- Docker-ce 15+
- docker-compose 1.16+
Procedure:
sudo docker-compose -f docker-compose.local.yml up -d
- Solr will be available at http://localhost:8983
- The Backend will be available at http://localhost:8080
- The Frontend will be available at http://localhost:80
Requirements:
- Java JDK 10
- Docker-ce
- docker-compose 1.16+
Procedure:
- You will need a Solr instance for development. The easiest way to get one up and running which is configured for local development is by running `docker-compose up -d`.
- Our backend is a Spring Boot application, which uses Gradle for package management. You can build and run it with `./gradlew bootRun` (theoretically it should work with the .bat file on Windows; however, until now nobody had the nerves to mess with Windows).
- Solr will be available at http://localhost:8983
- The Backend will be available at http://localhost:8080
The frontend can be found within the frontend/ folder. Check out the frontend Readme.
Requirements:
- Yarn 1.8+
- Node 8+
Procedure:
- Run `yarn` to obtain the dependencies.
- Run `yarn start` to start the development server.
- The Frontend will be available at http://localhost:3000
NOTES:
- The Backend will check whether documents are already imported into Solr. If the Solr core is not empty, it will skip the import of the documents.
- Solr data is in a named docker volume, which will prevent losing the Solr core with the indexed documents in between container restarts.
- The Solr data volume can be deleted with the `-v` flag: `docker-compose down -v`
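The named volume mentioned above is declared in the compose file roughly as follows. This is an illustrative excerpt, not the project's actual configuration: the volume name and the Solr data path inside the container are assumptions.

```yaml
# Illustrative excerpt: a named volume keeps the indexed core across restarts.
services:
  solr:
    image: solr
    volumes:
      - solr-data:/opt/solr/server/solr   # container path is an assumption
volumes:
  solr-data:
```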
Postman can be used to talk to the backend directly. Import the supplied collection and environments and you should be good to go.