
cic-internal-integration

Backend Requirements

  • Docker
  • Docker Compose

Backend local development

  • Start the stack with Docker Compose:
docker-compose up -d
  • Now you can open your browser and interact with these URLs:

Frontend (CICADA), built with Docker, with routes handled based on the path: http://localhost

PGAdmin, PostgreSQL web administration: http://localhost:5050

Flower, administration of Celery tasks: http://localhost:5555

Traefik UI, to see how the routes are being handled by the proxy: http://localhost:8090

Note: The first time you start your stack, it might take a minute for it to be ready while the backend waits for the database and configures everything. You can check the logs to monitor it.
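
To see at a glance which services are up and their current state, you can also list them:

docker-compose ps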

To check the logs, run:

docker-compose logs

To check the logs of a specific service, add the name of the service, e.g.:

docker-compose logs backend
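
To follow the logs continuously, you can add the -f flag, e.g.:

docker-compose logs -f backend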

If your Docker is not running on localhost (in which case the URLs above won't work), check the sections below on Development with Docker Toolbox and Development with a custom IP.

Deploy the stack locally

If you want to run the Docker stack locally on a swarm:

docker-compose -f docker-compose.yml -f docker-compose.override.yml config > docker-stack.yml
docker node update z1ehkrw1mvqlxc2udwt4xpype --label-add cic-net.app-db-data=true
docker stack deploy -c docker-stack.yml cic-net
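
The node ID above (z1ehkrw1mvqlxc2udwt4xpype) belongs to a specific machine; on your own machine you would substitute the ID reported by docker node ls. A rough sketch for checking the node and the deployed stack:

# list the nodes in the swarm and note your node ID
docker node ls

# after deploying, check that the services of the cic-net stack are running
docker stack services cic-net

# inspect individual tasks if a service does not come up
docker stack ps cic-net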

Backend local development, additional details

fill me in

Docker Compose Override

During development, you can change Docker Compose settings that will only affect the local development environment, in the file docker-compose.override.yml.

The changes to that file only affect the local development environment, not the production environment. So, you can add "temporary" changes that help the development workflow.

For example, the directory with the backend code is mounted as a Docker "host volume", mapping the code you change live to the directory inside the container. That allows you to test your changes right away, without having to build the Docker image again. This should only be done during development; for production, you should build the Docker image with a recent version of the backend code. But during development, it allows you to iterate very fast.
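
If you want to confirm what the override changes (for example, that the backend code directory is mounted as a volume), you can render the merged configuration; docker-compose picks up docker-compose.override.yml automatically:

docker-compose config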

There is also a command override that runs /start-reload.sh (included in the base image) instead of the default /start.sh (also included in the base image). It starts a single server process (instead of multiple, as would be used for production) and reloads the process whenever the code changes. Keep in mind that if you save a Python file with a syntax error, the process will break and exit, and the container will stop. After that, you can restart the container by fixing the error and running again:

$ docker-compose up -d

There is also a commented-out command override; you can uncomment it and comment out the default one. It makes the backend container run a process that does "nothing" but keeps the container alive. That allows you to get inside your running container and execute commands there, for example a Python interpreter to test installed dependencies, or start the development server that reloads when it detects changes, or start a Jupyter Notebook session.

To get inside the container with a bash session you can start the stack with:

$ docker-compose up -d

and then exec inside the running container:

$ docker-compose exec backend bash

You should see output like:

root@7f2607af31c3:/app#
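
From that session you can, for example, start a Python interpreter to check the environment (this assumes the backend image puts python on the PATH, as is typical for such images):

python -c "import sys; print(sys.version)"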

Test the running stack

If your stack is already up and you just want to run the tests, you can use:

docker-compose exec data-seeding /script/run_ussd_user_imports.sh
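
If the stack is not up yet, a minimal sequence (using the same service and script names as above) would be:

# start the stack in the background
docker-compose up -d

# run the USSD user imports inside the data-seeding service
docker-compose exec data-seeding /script/run_ussd_user_imports.sh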