Principles and usage of databases for data infrastructure purposes; a docker-compose file is provided for each database.
It is always important to define external volumes connected with the database: those volumes can easily be backed up or synced to a cloud file system. (A named volume such as db-data below must also be declared at the top level of the compose file.)
Relational databases are used across different projects.
PostgreSQL is a robust and stable database, used in many deployments as the main database; it connects well with BI tools and supports powerful geospatial queries.
The files needed to create the container are under postgres/. First, define the credentials for the root user in postgres/database.env:
POSTGRES_USER=${DB_USER}
POSTGRES_PASSWORD=${DB_PASS}
POSTGRES_DB=api_ingest
The corresponding docker-compose file is:
services:
  db:
    image: postgres
    restart: unless-stopped
    env_file:
      - postgres/database.env
    ports:
      - '127.0.0.1:5432:5432'
    volumes:
      - db-data:/var/lib/postgresql/data
      - ./postgres/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
      - ./postgres/postgres.conf:/etc/postgresql/postgresql.conf
    #command: postgres -c config_file=/etc/postgresql/postgresql.conf
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "api_ingest"]
      interval: 5s
      retries: 5
    networks:
      - traefik-net
Connections to the database can be made locally or through an orchestrator (Swarm, Kubernetes).
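A minimal sketch of a local connection from Python, assuming psycopg2 is installed and the credentials come from the same environment variables used in postgres/database.env:

import os
import psycopg2

# connect to the published port with the credentials from postgres/database.env
conn = psycopg2.connect(
    host="127.0.0.1",
    port=5432,
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASS"],
    dbname="api_ingest",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()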
MariaDB is a good option for websites and lightweight applications; it is more performant and stable than the MySQL environment. The credentials are defined in mysql/mariadb.env:
MARIADB_ROOT_PASSWORD=${DB_PASS_ROOT}
MARIADB_DATABASE=${DB_NAME}
MARIADB_USER=${DB_USER}
MARIADB_PASSWORD=${DB_PASS}
The corresponding docker-compose file is:
services:
  mysql:
    #image: mysql
    image: mariadb
    restart: unless-stopped
    env_file:
      - mysql/mariadb.env
      #- mysql/database.env
    volumes:
      - ${HOME}/mysql-data:/var/lib/mysql
    networks:
      - webserver-net
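As a sketch, a connection from Python with pymysql (an assumption; any MySQL-compatible client works), reading the same variables used in mysql/mariadb.env. Note that the service publishes no ports, so this runs from a container on the same network (or after adding a ports mapping):

import os
import pymysql

# connect with the credentials from mysql/mariadb.env; port 3306 must be reachable
conn = pymysql.connect(
    host="mysql",
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASS"],
    database=os.environ["DB_NAME"],
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())
conn.close()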
On the PostgreSQL side, each access user can be defined with queries such as the following (here a read-only user):
CREATE USER dash_ro WITH PASSWORD '';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO dash_ro;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO dash_ro;
Presto is a very handy interface for querying non-database sources like Kafka. The coordinator service is defined as:
coordinator:
  image: ahanaio/prestodb-sandbox
  # ports:
  #   - "8090:8090"
  container_name: "coordinator"
  volumes:
    - ./config.properties:/opt/presto-server/etc/config.properties
    - ./local_postgres.properties:/opt/presto-server/etc/catalog/postgres.properties
    - ./kafka.properties:/opt/presto-server/etc/catalog/kafka.properties
    - ./pass_auth.properties:/opt/presto-server/etc/password-authenticator.properties
    - ./password.db:/opt/presto-server/etc/password.db
    - ./presto_keystore.jks:/opt/presto-server/etc/presto_keystore.jks
and the corresponding properties files. The coordinator configuration (config.properties):
coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8090
discovery-server.enabled=true
discovery.uri=http://localhost:8090
The Kafka catalog configuration (kafka.properties):
connector.name=kafka
kafka.nodes=kafka:9093
kafka.table-names=test_topic
kafka.hide-internal-columns=false
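A sketch of querying the Kafka catalog from Python, assuming the presto-python-client package is installed and the coordinator is reachable on port 8090:

import prestodb

# connect to the coordinator configured above; the user is arbitrary for the sandbox
conn = prestodb.dbapi.connect(
    host="localhost",
    port=8090,
    user="presto",
    catalog="kafka",
    schema="default",
)
cur = conn.cursor()
cur.execute("SELECT * FROM test_topic LIMIT 10")
for row in cur.fetchall():
    print(row)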
For geospatial queries on MongoDB we use pymongo together with the usual geo stack:

import os, sys, gzip, random, csv, json, datetime, re
import numpy as np
import pandas as pd
import scipy as sp
import matplotlib.pyplot as plt
import geopandas as gpd
from scipy.spatial import cKDTree
from numpy import inf  # scipy no longer re-exports inf
import shapely as sh
import pymongo

baseDir = os.path.expanduser("~/raw/")  # expand '~' so the paths below resolve
We initialize the client:
with open(baseDir + '/credenza/geomadi.json') as f:
    cred = json.load(f)
with open(baseDir + '/raw/metrics.json') as f:
    metr = json.load(f)['metrics']

client = pymongo.MongoClient(cred['mongo']['address'], cred['mongo']['port'])
coll = client["index_name"]["collection_name"]
Return all points within a given distance:
neiDist = 200.
nodeL = []
# poi: DataFrame of points of interest with columns x, y, id_poi
for i, poii in poi.iterrows():
    poii = poi.loc[i]
    poi_coord = [x for x in poii.loc[['x', 'y']]]  # .loc replaces the removed .ix
    neiN = coll.find({'loc': {'$nearSphere': {'$geometry': {'type': "Point", 'coordinates': poi_coord},
                                              '$minDistance': 0, '$maxDistance': neiDist}}})
    nodeId = []
    for neii in neiN:
        nodeL.append({"x_poi": poii['x'], "y_poi": poii['y'], 'id_poi': poii['id_poi'],
                      'src': neii['src'], 'trg': neii['trg'],
                      "maxspeed": neii['maxspeed'], 'street': neii['highway']})
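Note that $nearSphere only works if the collection has a 2dsphere index on the queried field; if it is missing, a minimal sketch to create it with pymongo:

# geospatial operators need a 2dsphere index on 'loc'
coll.create_index([("loc", pymongo.GEOSPHERE)])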
Take all locations inside the polygons:
motG = gpd.GeoDataFrame.from_file(baseDir + "gis/geo/motorway_area.shp")
cellL = []
for g in motG['geometry'][0].geoms:  # iterate the parts of the multipolygon (shapely 2 syntax)
    c = g.exterior.coords.xy
    c1 = [[x, y] for x, y in zip(c[0], c[1])]
    neiN = coll.find({'geom': {'$geoIntersects': {'$geometry': {'type': "Polygon", 'coordinates': [c1]}}}})
    for neii in neiN:
        cellL.append({"cilac": str(neii['cell_ci']) + '-' + str(neii['cell_lac'])})
cellL = pd.DataFrame(cellL)
Neo4j is the best-known graph database, and it comes with a handy graphical interface:
Neo4j UI
from neo4j import GraphDatabase, basic_auth  # modern driver package (formerly neo4j.v1)

driver = GraphDatabase.driver("bolt://localhost:7687", auth=basic_auth("neo4j", "neo4j"))
session = driver.session()

session.run("CREATE (a:Person {name: $name, title: $title})",
            {"name": "Arthur", "title": "King"})

result = session.run("MATCH (a:Person) WHERE a.name = $name "
                     "RETURN a.name AS name, a.title AS title",
                     {"name": "Arthur"})
for record in result:
    print("%s %s" % (record["title"], record["name"]))
session.close()
from py2neo import Graph, Path  # py2neo 2.x API

graph = Graph()

tx = graph.cypher.begin()
for name in ["Alice", "Bob", "Carol"]:
    tx.append("CREATE (person:Person {name:{name}}) RETURN person", name=name)
alice, bob, carol = [result.one for result in tx.commit()]

friends = Path(alice, "KNOWS", bob, "KNOWS", carol)
graph.create(friends)
from neomodel import StructuredNode, StringProperty, RelationshipTo, RelationshipFrom, config

config.DATABASE_URL = 'bolt://neo4j:test@localhost:7687'

class Book(StructuredNode):
    title = StringProperty(unique_index=True)
    author = RelationshipTo('Author', 'AUTHOR')

class Author(StructuredNode):
    name = StringProperty(unique_index=True)
    books = RelationshipFrom('Book', 'AUTHOR')

harry_potter = Book(title='Harry potter and the..').save()
rowling = Author(name='J. K. Rowling').save()
harry_potter.author.connect(rowling)
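Once the nodes are saved, the relationship can be traversed from either side, for instance:

# fetch the book by its indexed title and list the connected authors
book = Book.nodes.get(title='Harry potter and the..')
for author in book.author.all():
    print(author.name)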
Vector databases help to deploy language model applications.
Vector embeddings are the distilled representations of the training data, produced as an output of the training stage of the machine-learning process.
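As a minimal sketch of the lookup a vector database performs, here is a cosine-similarity nearest-neighbour search with numpy; the random vectors are stand-ins for the output of a real embedding model:

import numpy as np

docs = ["postgres tutorial", "kafka streaming", "graph databases"]  # hypothetical corpus
emb = np.random.rand(len(docs), 384)               # stand-in for real embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # normalise so dot product = cosine

query = np.random.rand(384)                        # stand-in for the embedded query
query /= np.linalg.norm(query)

scores = emb @ query                               # cosine similarity per document
best = int(np.argmax(scores))
print(docs[best], float(scores[best]))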