
How to Contribute

To contribute to the project you will need to sign the CLA before your Pull Request is accepted. When you open your PR, check the Pull Request page; it will have a link to sign the CLA.

Keep in mind that this standardized setup is a work in progress. If you run into any trouble, please open an issue on the issue tracker and give as much info as possible.

First of all, you need Docker and Docker Compose installed.

Environment

Bountysource uses environment variables for a lot of things. There's a suggested .env.dev file in the root directory that is used by docker-compose.yml. Copy .env.dev to .env, since Docker reads the .env file. You must also update the Cloudinary variables CLOUDINARY_URL and CLOUDINARY_BASE_URL before setting up the database.
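
For illustration, the Cloudinary entries in .env look roughly like this. The values below are placeholders, not real credentials: use the API key, secret, and cloud name from your own Cloudinary account, and note that the base URL shown assumes Cloudinary's default delivery host.

# Placeholder Cloudinary credentials (illustrative only)
CLOUDINARY_URL=cloudinary://your_api_key:your_api_secret@your_cloud_name
CLOUDINARY_BASE_URL=https://res.cloudinary.com/your_cloud_name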

It's recommended that you take a good look at the entire .env.dev file and, if necessary, change it to suit your needs, but at the very least you should pay attention to the following variables:

  1. URL for Bountysource API: BOUNTYSOURCE_API_URL
  2. URL for Bountysource Website (Front-End): BOUNTYSOURCE_WWW_URL
  3. URL for the Database: DATABASE_URL
  4. Environment for Rails and Rake: RAILS_ENV and RAKE_ENV, respectively.
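
As a point of reference, for local development the Rails and Rake environments are usually left at development; confirm the exact values against .env.dev rather than taking these as given.

RAILS_ENV=development
RAKE_ENV=development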

About the DATABASE_URL

This is a URL in the usual form protocol://user:password@host/path, just without the password, since the postgres image doesn't use one. The path part is the database name and you can change it without any issues, but before changing anything else keep in mind that the host must match the service name in docker-compose.yml, and the user must match the default user of the postgres Docker image.

DATABASE_URL=postgres://postgres@pgsql/bountysource_dev
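
For example, if you only want a different database name, change just the path portion and keep the user (postgres, the image default) and the host (pgsql, the service name); the name below is purely illustrative.

DATABASE_URL=postgres://postgres@pgsql/my_bountysource_db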

The URLs

Remember to keep the API and WWW URLs accessible from outside Docker, either by changing your hosts file or by leaving them as 0.0.0.0:

BOUNTYSOURCE_API_URL=http://0.0.0.0:3000/
BOUNTYSOURCE_WWW_URL=http://0.0.0.0:3000/
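
If you'd rather use a hostname than 0.0.0.0, one option is a hosts-file entry pointing at your loopback address; the hostname below is purely illustrative, not something the project expects.

# /etc/hosts (illustrative entry)
127.0.0.1   bountysource.local

# .env
BOUNTYSOURCE_API_URL=http://bountysource.local:3000/
BOUNTYSOURCE_WWW_URL=http://bountysource.local:3000/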

What is the Standardized Setup

There is some setup and configuration you need to do before the project will run; this section covers it.

The setup is abstracted using Docker Compose; if you're interested, take a look at the file tasks.yml.

As a simple overview of what is done in the background, the basic setup does the following:

  1. Build the container (build command)
  2. Create and seed the database (setup command)
  3. Create the Sphinx configuration file (setup command)
  4. Symlink it to its expected location (setup command)
  5. Run the Search Daemon (reindex_sphinx command)
    note: the Search Daemon can't be run before the configuration file is generated and symlinked
  6. Generate search indexes (reindex_sphinx command)
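
For orientation only, here is a rough mapping from those steps to the underlying Rake tasks mentioned elsewhere on this page; the actual commands are wired up in tasks.yml, so treat this as a sketch rather than something you need to run by hand. Step 5 (running the search daemon) is handled by the reindex_sphinx service and is not shown here.

rake db:setup       # step 2: create and seed the database
rake ts:configure   # steps 3 and 4: generate the Sphinx configuration (the setup service also symlinks it)
rake ts:index       # step 6: index data using the files in app/indices
rake ts:generate    # step 6: index data using the Sphinx definitions on the models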

The Setup

  1. First we copy .env.dev into .env.
    cp .env.dev .env  
  2. Second we update CLOUDINARY_URL and CLOUDINARY_BASE_URL in .env.
  3. Third, we build the container.
    docker-compose build  
  4. Then we do the basic setup. This creates and seeds the database and generates the Sphinx configuration file.
    docker-compose -f docker-compose.yml -f tasks.yml run setup  
  5. Lastly, we index the data using Sphinx, the search engine.
    docker-compose -f docker-compose.yml -f tasks.yml run reindex_sphinx  

Now the Setup is done and everything should be working.
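
A quick way to verify the setup: bring the stack up and open the WWW URL from your .env in a browser (http://0.0.0.0:3000/ if you kept the defaults).

docker-compose up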

How to use the containers - Cheatsheet

The Basics

  • To run the Rails server and make it available on port 3000, a simple up command will suffice:

    docker-compose up
  • To run the console you need to pass the rails c command to the container, and there are two ways to do it:

    1. If the container is not running, you have to use run
      docker-compose run bountysource rails c  
    2. If the container is running, you can use exec, which connects to the running container.
      docker-compose exec bountysource rails c  
  • You can also simply open bash and do pretty much anything on the container, using either run or exec.

docker-compose run bountysource bash
docker-compose exec bountysource bash

Some things you want to keep in mind

  • Both the database and the search index use volumes, so your data persists across runs unless you remove the containers; the one-time setup really means one-time.
  • If the bountysource container exits immediately after printing "A server is already running. Check /app/tmp/pids/server.pid.", just run:
docker-compose run bountysource rm /app/tmp/pids/server.pid
  • Sphinx needs a config file to run. This file is generated by running rake ts:configure on the main container; the docker-compose -f docker-compose.yml -f tasks.yml run setup command does this for you, together with the db:setup task.

  • To keep things simple while also avoiding bloating the sphinx configuration file, the sphinx and bountysource containers share a volume containing the /app folder, which means the Sphinx logs (as well as many other files really) are accessible from the main container.

  • If you change data and you're not seeing the changes in the Search, you may need to run the Sphinx rake tasks:

    1. ts:index indexes data using the files in app/indices, while
    2. ts:generate indexes data using the Sphinx definitions on the models.
    docker-compose -f docker-compose.yml -f tasks.yml run reindex_sphinx 
  • If you need to, you can send arbitrary commands to any container (not only the main container, but also the database and Sphinx containers) by using docker-compose exec [container] [command] or docker-compose run [container] [command].
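
For instance, to open a psql shell on the database container (the service name pgsql, user postgres, and database bountysource_dev come from the DATABASE_URL shown earlier; adjust them if you changed it):

docker-compose exec pgsql psql -U postgres bountysource_dev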

Before sending a Pull Request

Before sending a Pull Request you should make sure all tests are passing:

docker-compose -f docker-compose.yml -f tasks.yml run test