data platform

A collection of Docker images, middleware, schedulers, jobs and UIs to set up a self-managed analytical cluster.

Example of a data pipeline.

main features

The whole stack is based on Docker Swarm, where all containers communicate over the same network. No ports are exposed apart from 80 and 443; requests are redirected via the webserver (nginx) to the appropriate container. The main databases are Postgres and Kafka, jobs are scheduled by Airflow, and multiple UIs show the health of the system. The main BI tool is Metabase, which is directly connected to Postgres and reaches Kafka through Presto.
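A minimal sketch of the networking model: the services share an attachable overlay network while only nginx publishes ports 80 and 443. The network name below is illustrative, not taken from the repo:

```
# initialise swarm mode and create a shared, attachable overlay network
# (the network name is illustrative)
docker swarm init
docker network create --driver overlay --attachable data_platform_net
# only nginx publishes 80/443; every other container joins the
# network without exposing ports
```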

core

The core of the application is under sawmill.

The middleware images should be built beforehand by running the build scripts live_py/docker_build.sh and go_ingest/docker_build.sh.
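Assuming the scripts are run from the repository root, the build step looks like:

```
# build the middleware images before bringing up the stack
./live_py/docker_build.sh
./go_ingest/docker_build.sh
```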

Environment variables should be exported by sourcing ~/credenza/database.env. Alternatively, variables can be defined inside GitLab or provided via Docker secrets.
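Sourcing the file exports the variables into the current shell; as an alternative, a single credential can be stored as a Docker secret. The variable name below is hypothetical:

```
# load the credentials into the current shell
source ~/credenza/database.env

# or store a single credential as a docker secret
# (POSTGRES_PASSWORD is a hypothetical variable name)
printf '%s' "$POSTGRES_PASSWORD" | docker secret create postgres_password -
```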

Once the images are created, cd into docker/ and run docker-compose up -d.
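Putting it together, the core services can be brought up and checked with:

```
cd docker/
docker-compose up -d
docker-compose ps   # verify that all containers are running
```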

environment

Before starting the services, the credentials have to be created and exported so that the containers start with the correctly defined variables, as in database.env.
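An illustrative excerpt of such an env file; the variable names are assumptions, the actual ones live in ~/credenza/database.env:

```
# illustrative excerpt; actual names are defined in database.env
export POSTGRES_USER=sawmill
export POSTGRES_PASSWORD=changeme
export DOMAIN=yourdomain.com
```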

storage

After logging into the db container, run the sawmill script, filling in the corresponding user passwords. This script creates the necessary databases and user permissions for the services of the data platform.
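A minimal sketch of what such a provisioning step does, assuming a container named db; the database, user and password names are illustrative, not the ones the sawmill script uses:

```
# sketch only: container, database and user names are assumptions
docker exec -i db psql -U postgres <<'SQL'
CREATE USER airflow WITH PASSWORD 'changeme';
CREATE DATABASE airflow OWNER airflow;
GRANT ALL PRIVILEGES ON DATABASE airflow TO airflow;
SQL
```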

webserver

The webserver has been moved to a separate folder, webserver/, where all the configuration for nginx, certbot and php is stored. There is a script to initialise all the certificates for the domains defined in database.env.

After having created the certificates, run the script env_nginxConf.sh to create the default confs for nginx, then start the services with docker-compose up -d.
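Assuming the commands are run from within webserver/:

```
cd webserver/
./env_nginxConf.sh    # generate the default nginx confs
docker-compose up -d  # start nginx, certbot and php
```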

traefik/ contains the old, deprecated configuration of the reverse proxy.

messaging

The messaging system is inside the kafka/ folder. Run docker-compose up -d to start the services.
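Once the broker is up, it can be smoke-tested from inside the container; the container name, script path and port are assumptions that depend on the Kafka image in use:

```
cd kafka/
docker-compose up -d
# list topics to verify the broker answers
# (container name and port are assumptions)
docker exec -it kafka kafka-topics.sh --list --bootstrap-server localhost:9092
```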

Additionally, to start Presto (which mediates between Kafka and Metabase), cd into the presto/ folder and start the containers.
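For instance; the smoke test at the end assumes a container named presto with the Presto CLI on its path, which may differ per image:

```
cd presto/
docker-compose up -d
# optional smoke test through the Presto CLI
# (container name and CLI path are assumptions)
docker exec -it presto presto-cli --catalog kafka --execute 'SHOW SCHEMAS;'
```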

security and access

An infrastructure should be designed to be secure without sacrificing performance and operability. Different access and restriction levels are applied to the different services.

gitlab-runner

The GitLab runner is in a separate folder, in case the instance should also work as a runner.
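A sketch of bringing up and registering the runner; the folder and container names are assumptions:

```
cd gitlab-runner/
docker-compose up -d
# register against your GitLab instance (interactive: asks for URL and token)
docker exec -it gitlab-runner gitlab-runner register
```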

scheduler

To activate the scheduler, cd into the airflow/ folder and run docker-compose up -d.

The scheduler will be available under schedule.yourdomain.com.
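Once nginx routes the subdomain, the scheduler can be checked end to end; the /health endpoint assumes a recent Airflow version:

```
cd airflow/
docker-compose up -d
# check the Airflow webserver once the subdomain resolves
# (/health is available on Airflow 2.x)
curl -s https://schedule.yourdomain.com/health
```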

UI

There are different UIs to turn on or off depending on the state of cluster maintenance. The main ones are:

Project structure

Test and Deploy

Use the built-in continuous integration in GitLab.

Support

Open an issue in the issue tracker.

Roadmap

Contributing

Collaborate on the project. To push an existing repository:

cd existing_repo
git remote add origin https://github.com/sabeiro/sawmill.git
git branch -M main
git push -uf origin main

Authors and acknowledgment

License

CC BY-NC-SA