A collection of docker images, middleware, schedulers, jobs and UIs to set up a self-managed analytical cluster.
The stack follows industry standards and some modern design principles, as in the example data pipeline below:

*Example of a data pipeline*
The whole stack is based on docker swarm, where all the containers communicate through the same network. The ports are not exposed (apart from 80 and 443) and requests are redirected via the webserver (nginx) to the appropriate container. The main databases are postgres and kafka. Jobs are scheduled by airflow and multiple UIs show the health of the system. The main BI tool is metabase, which is directly connected to postgres and connected to kafka via presto.
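As a sketch of what this setup implies (the network and service names here are illustrative assumptions, not the repo's actual ones): one shared overlay network, with only the webserver publishing ports.

```sh
# One attachable overlay network shared by all stacks on the swarm
docker network create --driver overlay --attachable sawmill_net

# Only the webserver publishes 80/443; every other service stays internal
docker service create --name nginx --network sawmill_net \
  --publish 80:80 --publish 443:443 nginx:latest
```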
The core of the application is under `sawmill`.
The middleware images should be created beforehand by running the build scripts `live_py/docker_build.sh` and `go_ingest/docker_build.sh`.
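For example, from the repository root:

```sh
# Build the two middleware images before bringing up the stack
./live_py/docker_build.sh
./go_ingest/docker_build.sh
```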
Environment variables should be exported by sourcing `~/credenza/database.env`. Alternatively, variables can be defined inside gitlab or using docker secrets.
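For example:

```sh
# Export the credentials into the current shell
source ~/credenza/database.env
```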
Once the images are created, cd into `docker/` and run `docker-compose up -d`. Before starting the services, the credentials have to be created and exported so that the containers start with the correctly defined variables, as in the env file above.
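With the variables exported, the core services can be started:

```sh
source ~/credenza/database.env   # export the credentials first
cd docker/
docker-compose up -d             # start the core services in the background
```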
After logging into the db container, run the sawmill script, supplying the corresponding user passwords. This script creates the necessary databases and user permissions for the services of the data platform.
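A sketch of this step; the container and script names below are assumptions, so use the ones from your setup:

```sh
# Open a shell inside the running db container (container name assumed)
docker exec -it postgres bash
# Inside the container: run the sawmill setup script (name assumed),
# supplying the corresponding user passwords when prompted
./sawmill_setup.sh
```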
The webserver lives in a separate folder, `webserver/`, where all the configuration for nginx, certbot and php is stored. There is a script to initiate all the certificates for the domains defined in `database.env`.
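A script like this typically wraps certbot; a minimal sketch of such a call (the service name, webroot path and variable names are assumptions):

```sh
# Request a certificate for one of the domains defined in database.env
docker-compose run --rm certbot certonly --webroot \
  -w /var/www/certbot -d "$DOMAIN" --email "$EMAIL" --agree-tos
```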
After having created the certificates, run the script `env_nginxConf.sh` to create the default confs for nginx, then start the services with `docker-compose up -d`.
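For example:

```sh
cd webserver/
./env_nginxConf.sh    # render the default nginx confs from the env variables
docker-compose up -d  # start nginx, certbot and php
```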
`traefik/` contains the old, deprecated configuration of the reverse proxy.
The messaging system is inside the `kafka/` folder. Run `docker-compose up -d` to start the services.
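For example, with an optional sanity check (the broker container name is an assumption):

```sh
cd kafka/
docker-compose up -d
# List the topics to verify the broker is reachable
docker exec -it kafka kafka-topics.sh --bootstrap-server localhost:9092 --list
```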
Additionally, to start presto (which communicates between kafka and metabase), cd into the `presto/` folder and start the containers.
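Assuming the folder follows the same docker-compose pattern as the other services:

```sh
cd presto/
docker-compose up -d   # presto bridges the kafka topics to metabase
```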
An infrastructure should be designed to be secure without sacrificing performance and operability. Different access and restriction levels are applied to the different services.
The gitlab runner is in a separate folder, in case the instance should also work as a runner.
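A sketch of bringing it up and registering it (the folder and service names are assumptions):

```sh
cd gitlab-runner/        # folder name assumed
docker-compose up -d
# Interactively register the runner against your GitLab instance
docker exec -it gitlab-runner gitlab-runner register
```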
To activate the scheduler, cd into the `airflow/` folder and run `docker-compose up -d`. The scheduler will be available under `schedule.yourdomain.com`.
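For example:

```sh
cd airflow/
docker-compose up -d
# The UI is then served at https://schedule.yourdomain.com via nginx
```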
There are different UIs to turn on/off depending on the state of the cluster maintenance. The main folders are:

- `airflow/`: configuration files for airflow
- `dags/`: list of jobs to run periodically
- `db_connect/`: golang backend for db communication
- `parse_sources/`: python ETL jobs
- `docker/`: docker compose file
- `docker/postgres/`: configurations and environment for postgres
- `docker/db-data/`: all the db data, kept outside the container for backup
- `docker/logs/`: all the docker logs data
- `docker/traefik/`: configuration and routes
- `terraform/`: terraform configuration, currently on digital ocean

Use the built-in continuous integration in GitLab.
Open an issue in the tracker and collaborate on the project.
To push an existing repository:

```sh
cd existing_repo
git remote add origin https://github.com/sabeiro/sawmill.git
git branch -M main
git push -uf origin main
```