Databases
Self-managed Database Services
You can provision a database with greyhound by defining a database service in your `config.yaml`. This approach is best if:
- you want to have more flexibility over your database
- you want a non-Postgres database
- you don't need all the availability features of Aurora and want a simpler database setup
- you want to get started independently without requiring setup help from the platform tools team
See the Database examples for ready-to-use snippets.
EFS-backed databases provisioned by greyhound currently require the `efs-uid999-sc` storage class and user/group ID 999. See the examples listed above.
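As an illustrative sketch only, a self-managed Postgres service might look like the following. The `image` field and the Postgres environment variables are assumptions for this example; see the Database examples above for working snippets, including the exact EFS volume configuration.

```yaml
# Illustrative sketch -- field names other than services/name/env are
# assumptions; consult the Database examples for ready-to-use snippets.
services:
  - name: db
    image: postgres:16                 # assumed: a plain container image
    env:
      - name: POSTGRES_PASSWORD
        value: example-password        # placeholder; use a real secret
    # EFS-backed storage must use the efs-uid999-sc storage class and
    # user/group ID 999; the exact volume stanza is shown in the examples.
```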
Managed Aurora Databases
Greyhound can also provision Aurora database clusters for your environments using pre-configured pools. This is ideal when:
- you want to take advantage of Aurora's high availability features
- you are working with ephemeral environments and don't want to manually recreate and re-seed databases each time
How Database Pools Work
Database pools are collections of pre-provisioned Aurora clusters managed by your platform team. When your environment starts, greyhound checks out a cluster from the pool — this is significantly faster than creating a new cluster from scratch.
When the environment is deleted, the cluster is returned to the pool (or cleaned up, depending on pool configuration).
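The checkout/return lifecycle described above can be modeled with a short sketch. This is purely illustrative; greyhound's actual pool implementation is internal to the platform, and the class and method names here are invented for the example.

```python
# Illustrative model of the pool lifecycle described above -- not
# greyhound's actual implementation.
class DatabasePool:
    def __init__(self, clusters):
        self.available = list(clusters)  # pre-provisioned Aurora clusters
        self.checked_out = {}            # environment name -> cluster

    def checkout(self, env_name):
        # Handing out a pre-provisioned cluster is much faster than
        # creating a new Aurora cluster from scratch.
        if not self.available:
            raise RuntimeError("pool exhausted: no pre-provisioned clusters left")
        cluster = self.available.pop()
        self.checked_out[env_name] = cluster
        return cluster

    def release(self, env_name):
        # When the environment is deleted, the cluster returns to the pool
        # (real pools may instead clean it up, depending on configuration).
        cluster = self.checked_out.pop(env_name)
        self.available.append(cluster)
```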
To provision Aurora database pools for your application, reach out to the platform tools team via Slack in the #proj-ephemeral-environments channel.
Connecting to Your Database
Once an environment has a database, greyhound exposes connection details through interpolation variables:
```yaml
services:
  - name: api
    image_from_build: api-build
    env:
      - name: DB_HOST
        value: ${database.my-pool.writer_endpoint}
      - name: DB_PORT
        value: ${database.my-pool.port}
```
Available database variables:
| Variable | Description |
|---|---|
| `${database.<pool>.writer_endpoint}` | Writer (primary) endpoint |
| `${database.<pool>.reader_endpoint}` | Reader (replica) endpoint |
| `${database.<pool>.host}` | Alias for the writer endpoint |
| `${database.<pool>.port}` | Database port |
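For example, a read-heavy service can be pointed at the reader endpoint instead of the writer. The service name and the `DB_READ_HOST` variable name below are hypothetical; only the interpolation variables come from the table above.

```yaml
# Hypothetical read-only consumer: routes queries to Aurora's reader
# (replica) endpoint via the interpolation variables above.
services:
  - name: reporting
    image_from_build: reporting-build
    env:
      - name: DB_READ_HOST            # hypothetical variable name
        value: ${database.my-pool.reader_endpoint}
      - name: DB_PORT
        value: ${database.my-pool.port}
```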
Running Migrations
Use a job to run database migrations before your services start:
```yaml
jobs:
  - name: db-migrate
    image_from_build: api-build
    command:
      - yarn
      - migrate
```
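Inside the job or service container, the injected `DB_HOST` and `DB_PORT` variables can be read from the environment to build a connection string. A minimal sketch, assuming a Postgres-style URI; the `database_url` helper and the default user, password, and database name are placeholders invented for this example.

```python
import os

# Hypothetical helper: assembles a Postgres connection URI from the
# DB_HOST / DB_PORT variables greyhound injects (names per the example
# config above). Credentials here are placeholders, not real defaults.
def database_url(user="app", password="secret", dbname="app"):
    host = os.environ["DB_HOST"]
    port = os.environ["DB_PORT"]
    return f"postgresql://{user}:{password}@{host}:{port}/{dbname}"
```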