Database Setup
With the server routing in place, we need a PostgreSQL database to back it. We'll run Postgres locally using Docker Compose and keep the connection credentials in a .env file.
Two new files at the project root:
```sh
doable/
├── .env         # database credentials
└── compose.yml  # runs Postgres in Docker
```

Install Docker
Docker Desktop bundles the Docker engine and the Compose plugin, and is the simplest option on any platform. The standalone Docker Engine plus the Compose plugin is a lighter alternative on Linux.
```sh
# macOS
brew install --cask docker
```

```sh
# Linux
# Docker Engine — follow https://docs.docker.com/engine/install/
# Compose plugin — follow https://docs.docker.com/compose/install/linux/
```

```sh
# Windows
# Download and run Docker Desktop from
# https://docs.docker.com/desktop/install/windows-install/
```

The official install guide covers more options. Verify both the engine and the Compose plugin are available:
```sh
docker --version
docker compose version
```

Environment Variables
Create a .env file at the project root to hold the database credentials:
```sh
# .env
# Database
PGHOST=db
PGPORT=5432
PGDATABASE=doable-dev
PGUSER=doable-user-dev
PGPASSWORD=doable-dev-p@ssw0rd
```

The variable names follow the standard libpq environment variables, which Postgres clients pick up automatically — including Squirrel, the Gleam package we'll use later to query the database.
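Because psql is itself a libpq client, you can export these variables once and drop the connection flags entirely. A small sketch, not from the original — note that from the host machine you need `localhost` rather than the `db` hostname, which only resolves inside the Compose network:

```sh
# Export libpq variables once; psql and other libpq-based clients pick
# them up automatically, so no -h/-p/-U/-d flags are needed afterwards.
# From the host we use localhost, since "db" only resolves inside Docker.
export PGHOST=localhost PGPORT=5432 PGDATABASE=doable-dev
export PGUSER=doable-user-dev PGPASSWORD='doable-dev-p@ssw0rd'

# Equivalent to: psql -h localhost -p 5432 -U doable-user-dev -d doable-dev
# psql -c 'SELECT 1;'
```

From an interactive shell you can get the same effect straight from the file with `set -a; source .env; set +a`, which auto-exports every variable it defines.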
DANGER
Never commit .env to version control. For this guide I committed it intentionally: the credentials are for a local dev database only, and keeping the file in the repo makes following along easier.
Docker Compose
Create compose.yml at the project root[1]:
```yaml
# compose.yml
name: doable-dev
services:
  db:
    image: postgres:18-alpine
    restart: unless-stopped
    shm_size: 128mb
    environment:
      POSTGRES_PORT: ${PGPORT}
      POSTGRES_USER: ${PGUSER}
      POSTGRES_PASSWORD: ${PGPASSWORD}
      POSTGRES_DB: ${PGDATABASE}
    ports:
      - ${PGPORT}:${PGPORT}
    volumes:
      - data:/var/lib/postgresql
volumes:
  data:
    name: doable-dev-data
networks:
  default:
    name: doable-dev-network
```

A few things worth noting:
- `image: postgres:18-alpine` — the Alpine variant keeps the image small.
- `shm_size: 128mb` — Postgres uses shared memory for internal buffers; Docker's default is 64 MB, so I followed the recommended setting in the official Postgres image docs.
- `restart: unless-stopped` — the container restarts automatically after a machine reboot, unless you explicitly stop it.
- `${PGPORT}:${PGPORT}` — Docker Compose reads `.env` automatically, so the port mapping uses the same variable as the app.
- Named volume `doable-dev-data` — data persists across container restarts and rebuilds.
- Named network `doable-dev-network` — an explicit name makes it easier to connect other services later.
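The file above has no healthcheck, but one is a common addition once other services (such as the migrate service coming up next) need to wait for Postgres to be ready. A sketch using pg_isready, which ships in the Postgres image; the interval and retry values are arbitrary choices, not from the original file:

```yaml
# Hypothetical addition under the db service in compose.yml;
# pg_isready exits 0 once the server accepts connections.
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${PGUSER} -d ${PGDATABASE}"]
      interval: 5s
      timeout: 3s
      retries: 5
```

Another service can then declare `depends_on` with `condition: service_healthy` on `db` to block its startup until the check passes.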
Starting the Database
```sh
docker compose up -d
```

The `-d` flag runs the containers in the background. On first run Docker will pull the Postgres image, which may take a moment.
To verify the container is healthy:
```sh
docker compose ps
```

You should see `db` listed with status `running`.
```sh
docker ps -a
# CONTAINER ID   IMAGE                COMMAND                  CREATED          STATUS          PORTS                                         NAMES
# 52bcc6e06f8a   postgres:18-alpine   "docker-entrypoint.s…"   11 seconds ago   Up 11 seconds   0.0.0.0:5432->5432/tcp, [::]:5432->5432/tcp   doable-dev-db-1
```

Verifying the Database
Connect to the running container with psql to confirm the database was created and is accepting connections:
```sh
docker compose exec db psql -U doable-user-dev -d doable-dev
```

If everything is working you'll land in the psql prompt:
```
psql (18.x)
Type "help" for help.

doable-dev=#
```

Once inside, run `\l` to list databases and confirm `doable-dev` is present. Then run a quick query to confirm the database is accepting connections:
```sql
SELECT 1;
```

You should see:
```
 ?column?
----------
        1
(1 row)
```

Type `\q` to exit.
TIP
Since the port is mapped to your host, you can also connect directly from your machine without going through Docker:
```sh
psql -h localhost -p 5432 -U doable-user-dev -d doable-dev
```

The same credentials work in any GUI client — DBeaver, DataGrip, or similar — using localhost:5432.
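psql and most GUI clients also accept a single connection URL instead of separate flags. One gotcha worth showing: the `@` in our password must be percent-encoded as `%40`, or the URL parser reads it as the user/host separator. A sketch, not from the original (the `sed` step is just one simple way to encode this particular password):

```sh
# Build a connection URL; the "@" in the password must become %40,
# otherwise everything after it is parsed as the hostname.
PGPASS_ENC=$(printf '%s' 'doable-dev-p@ssw0rd' | sed 's/@/%40/g')
DB_URL="postgres://doable-user-dev:${PGPASS_ENC}@localhost:5432/doable-dev"
echo "$DB_URL"

# Then connect with:
# psql "$DB_URL"
```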
What's Next
Postgres is running but empty — no tables, no columns. Next, we'll define the tasks schema as a reversible SQL migration and let a migrate service run it automatically whenever the stack starts.