# Real-World Example: API
This walkthrough dissects the greyhound configuration for the api repository — one of the most complex real-world setups at Underdog. It demonstrates builds with private dependencies, multiple services from a single image, secret providers, CSI volumes, configmaps, database migrations, ingress routing, sidecar services, and multi-app orchestration.
Don't try to absorb every field at once. Use the sections below as a reference when you need to configure a similar pattern in your own application.
## Repository Layout
The api repo uses both greyhound configuration files:
```
.greyhound/
├── config.yaml          # Application configuration (services, builds, secrets, etc.)
└── applications.yaml    # Multi-app orchestration (child apps, database pools, variable overrides)
```
## Build Configuration
The api defines a single build that produces the container image used by every service and job. Because the app depends on private Ruby gems, the build injects credentials as build-time secrets and environment variables:
```yaml
builds:
  - name: api
    target: api
    service_account_name: fantasy-sa
    dockerfile: Dockerfile.greyhound
    secrets:
      - secretName: fantasy-sidekiq-ent-key
      - secretName: fantasy-karafka-pro-key
    env:
      - name: BUNDLE_ENTERPRISE__CONTRIBSYS__COM
        valueFrom:
          secretKeyRef:
            key: key
            name: fantasy-sidekiq-ent-key
      - name: BUNDLE_GEMS__KARAFKA__IO
        valueFrom:
          secretKeyRef:
            key: key
            name: fantasy-karafka-pro-key
      - name: BUNDLE_GITHUB__COM
        value: $(GIT_USERNAME):$(GIT_PASSWORD)
    resources:
      limits:
        cpu: 32
        memory: 32Gi
      requests:
        cpu: 32
        memory: 32Gi
```
Key patterns:

- `secrets` — attaches Kubernetes secrets to the build pod so that `valueFrom.secretKeyRef` references resolve correctly.
- `env` with `valueFrom` — injects private gem credentials without hardcoding them. Bundler reads these environment variables during `bundle install`.
- `$(GIT_USERNAME):$(GIT_PASSWORD)` — uses Kubernetes-level variable substitution (the `$(...)` syntax) to compose a GitHub token at runtime. This is distinct from greyhound's `${...}` interpolation variables.
- High build resources — the api build compiles native extensions and pulls many dependencies, so it requests 32 CPU / 32 GiB to keep build times reasonable.
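To make the two substitution layers concrete, here is a hypothetical fragment showing them side by side (the variable names are illustrative, not taken from the api config):

```yaml
env:
  # Kubernetes-level substitution: $(...) is expanded by Kubernetes from
  # env vars defined earlier on the same container.
  - name: COMPOSED_TOKEN
    value: $(GIT_USERNAME):$(GIT_PASSWORD)
  # greyhound interpolation: ${...} is resolved at deploy time, before
  # the manifest ever reaches the cluster.
  - name: PUBLIC_HOST
    value: api-${env.name}.${cluster.dnsDomain}
```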
## Secret Providers and Shared Volumes
Before any service can read secrets, the config declares a secret provider and a shared CSI volume that projects those secrets as files into each pod:
```yaml
service_accounts:
  - name: fantasy-sa
    cloud_role: arn:aws:iam::211125383386:role/develop-fantasy-role-sa-default

secretproviders:
  - name: fantasy-secrets
    secretObjects:
      - secretName: fantasy-pg-application-password
      - secretName: fantasy-rails-master-key
      - secretName: fantasy-sidekiq-ent-key
      - secretName: pe-develop-msk-sasl-fantasy-consumer
        keys:
          - path: username
          - path: password
      - secretName: fantasy-karafka-pro-key
      - secretName: gcs-writer-credentials

shared_volumes:
  - name: fantasy-secrets-volume
    type: csi
    secret_provider: fantasy-secrets
    read_only: true
```
Key patterns:

- `service_accounts` — binds an IAM role to a Kubernetes service account, giving pods the AWS permissions they need (database access, S3, etc.).
- `secretproviders` — declares which secrets from AWS Secrets Manager should be synced into Kubernetes. Some secrets expose specific `keys` (like the MSK SASL credentials with `username` and `password` paths).
- `shared_volumes` — creates a single CSI volume that any service can mount. By declaring it once at the top level, you avoid duplicating the provider configuration across every service.
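Inside a pod, the CSI driver projects each secret as a plain file under the mount path. A minimal sketch of how application code might read one (the helper name and default path are assumptions for illustration, not part of greyhound):

```python
from pathlib import Path

def read_mounted_secret(name: str, base: str = "/mnt/secrets-store") -> str:
    """Read a secret that the CSI driver projected as a file.

    Each secret appears as a file under the mount path, e.g.
    /mnt/secrets-store/username for the MSK SASL username key.
    """
    return Path(base, name).read_text().strip()
```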
See the Secrets and Parameters guide for more on classifying and delivering secrets.
## Multiple Services from One Build
The api repo runs several services, all sharing the same container image but differentiating behavior through the `APP_RUNNING_MODE` environment variable:
```yaml
services:
  - name: api
    image_from_build: api
    replicas: 1
    service_account_name: fantasy-sa
    resources:
      limits:
        memory: 8Gi
      requests:
        memory: 8Gi
    volumes:
      - name: fantasy-secrets-volume
        claim: fantasy-secrets-volume
        mount_path: /mnt/secrets-store
        type: csi
    envFrom:
      - configMapRef:
          name: api-env-vars
    env:
      - name: APP_RUNNING_MODE
        value: api
      # ... secret refs
    ports:
      - containerPort: 3000

  - name: api-admin
    image_from_build: api
    replicas: 1
    # ... same volumes, envFrom, secrets
    env:
      - name: APP_RUNNING_MODE
        value: admin
    ports:
      - containerPort: 3000

  - name: api-internal
    image_from_build: api
    # APP_RUNNING_MODE: internal

  - name: api-stats
    image_from_build: api
    # APP_RUNNING_MODE: stats

  - name: api-sidekiq
    image_from_build: api
    replicas: 1
    resources:
      limits:
        memory: 10Gi
      requests:
        memory: 10Gi
    env:
      - name: APP_RUNNING_MODE
        value: api
      - name: DB_STATEMENT_TIMEOUT
        value: '15000'
    # no ports — background worker

  - name: api-sidekiq-notif
    image_from_build: api
    replicas: 1
    env:
      - name: APP_RUNNING_MODE
        value: api
      - name: DISABLE_SIDEKIQ_CRON
        value: 'true'
      - name: DB_STATEMENT_TIMEOUT
        value: '15000'
    # no ports — background worker
```
Key patterns:

- `image_from_build: api` — every service uses the same build artifact. Only the runtime config differs.
- `APP_RUNNING_MODE` — a single env var controls which role the process assumes (api, admin, internal, stats, worker).
- `envFrom` with `configMapRef` — loads a large block of shared env vars from a configmap, keeping each service definition concise.
- Worker services have no `ports` — Sidekiq workers don't serve HTTP, so they omit port declarations entirely.
- Per-service resource tuning — the main API and sidekiq workers get extra memory (`8Gi` and `10Gi` respectively) while other services inherit the top-level default of `4Gi`.
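The single-image, many-roles pattern usually comes down to a small dispatch on the mode variable at boot. A hedged Python sketch of the idea (the mode names mirror the config; the command mapping is invented for illustration and is not the api's actual entrypoint):

```python
# Hypothetical mapping from APP_RUNNING_MODE to the process each role boots.
ROLE_COMMANDS = {
    "api": ["bundle", "exec", "puma"],
    "admin": ["bundle", "exec", "puma", "-C", "config/puma_admin.rb"],
    "internal": ["bundle", "exec", "puma", "-C", "config/puma_internal.rb"],
    "stats": ["bundle", "exec", "puma", "-C", "config/puma_stats.rb"],
    "worker": ["bundle", "exec", "sidekiq"],
}

def command_for_mode(env: dict) -> list:
    """Pick the boot command from APP_RUNNING_MODE, defaulting to the API role."""
    mode = env.get("APP_RUNNING_MODE", "api")
    if mode not in ROLE_COMMANDS:
        raise ValueError(f"unknown APP_RUNNING_MODE: {mode!r}")
    return ROLE_COMMANDS[mode]
```

An entrypoint script would call this with `os.environ` and `exec` the result; everything else in the image stays identical across roles.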
## Sidecar Services
Not every service needs a custom build. The api config runs a Valkey (Redis-compatible) instance using a public image:
```yaml
services:
  - name: api-redis
    image: valkey/valkey:8.1-alpine
    service_account_name: fantasy-sa
    replicas: 1
    resources:
      requests:
        cpu: 1
        memory: 4Gi
      limits:
        cpu: 1
        memory: 4Gi
    ports:
      - containerPort: 6379
```
Other services reference this via `REDIS_CACHE_URL: redis://api-redis:6379` in the configmap. Because all services share a namespace, they can reach the Redis pod by its service name.
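Application code typically consumes the sidecar through that URL rather than a hardcoded host. A small standard-library sketch of splitting the URL into connection parameters (the helper name is an assumption):

```python
from urllib.parse import urlsplit

def redis_target(url: str):
    """Extract (host, port) from a redis:// URL like the configmap provides.

    Inside the namespace, 'api-redis' resolves via Kubernetes service DNS.
    """
    parts = urlsplit(url)
    if parts.scheme != "redis":
        raise ValueError(f"expected a redis:// URL, got {url!r}")
    return parts.hostname, parts.port or 6379
```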
## Ingress Rules with Interpolation
The config defines ingress rules for each HTTP-serving service, using interpolation variables to generate environment-specific hostnames:
```yaml
rules:
  - service: api
    hostnames:
      - api-${env.name}.${cluster.dnsDomain}
    visibility: nginx-internal-fantasy-services
  - service: api-admin
    hostnames:
      - api-admin-${env.name}.${cluster.dnsDomain}
    visibility: nginx-internal-fantasy-services
  - service: api-internal
    hostnames:
      - api-internal-${env.name}.${cluster.dnsDomain}
    visibility: nginx-internal-fantasy-services
  - service: api-stats
    hostnames:
      - api-stats-${env.name}.${cluster.dnsDomain}
    visibility: nginx-internal-fantasy-services
```
Key patterns:

- `${env.name}` and `${cluster.dnsDomain}` — greyhound resolves these at deploy time, producing hostnames like `api-pr-123.develop.example.com`.
- Consistent naming convention — each service gets a predictable hostname derived from its service name.
- `visibility` — controls which ingress controller handles the rule. Internal services use the internal nginx class.
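The `${...}` resolution can be pictured as a plain template substitution over a deploy-time context. This is an illustrative model, not greyhound's actual implementation, and the context values are made up:

```python
import re

def interpolate(template: str, ctx: dict) -> str:
    """Replace each ${key} with its value from the deploy-time context."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: ctx[m.group(1)], template)

# Example context for a pull-request environment (values invented):
ctx = {"env.name": "pr-123", "cluster.dnsDomain": "develop.example.com"}
hostname = interpolate("api-${env.name}.${cluster.dnsDomain}", ctx)
# hostname is now "api-pr-123.develop.example.com"
```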
## Top-Level Resource Defaults
Rather than repeating the same limits on every service, the config sets a top-level default:
```yaml
resources:
  limits:
    cpu: 3
    memory: 4Gi
  requests:
    cpu: 3
    memory: 4Gi
```
Individual services can override these defaults when they need more (or less) — like `api-sidekiq` requesting `10Gi` of memory. Services that don't declare their own resources inherit this block.
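Conceptually, the effective resources for a service are the top-level block with any service-level block layered on top. A sketch of that merge rule (illustrative only; greyhound's real precedence logic may differ):

```python
def effective_resources(default: dict, override) -> dict:
    """Layer a service's resource block over the top-level default.

    Keys the service sets (e.g. memory) win; keys it omits fall through
    to the top-level values.
    """
    if not override:
        return default
    merged = {}
    for section in ("limits", "requests"):
        merged[section] = {**default.get(section, {}), **override.get(section, {})}
    return merged

default = {"limits": {"cpu": 3, "memory": "4Gi"},
           "requests": {"cpu": 3, "memory": "4Gi"}}
sidekiq = {"limits": {"memory": "10Gi"}, "requests": {"memory": "10Gi"}}
```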
## Database Migration Job
Jobs run to completion before services start. The api uses one to apply Rails database migrations:
```yaml
jobs:
  - name: api-db-migration
    image_from_build: api
    service_account_name: fantasy-sa
    volumes:
      - name: fantasy-secrets-volume
        claim: fantasy-secrets-volume
        mount_path: /mnt/secrets-store
        type: csi
    command:
      - bundle
      - exec
      - rails
      - db:create:quickfix
      - db:migrate:quickfix
      - db:migrate
    envFrom:
      - configMapRef:
          name: api-env-vars
    env:
      - name: DB_STATEMENT_TIMEOUT
        value: '300000'
      # ... same secret refs as services
```
Key patterns:

- `command` — overrides the container's default entrypoint to run migration commands.
- Higher `DB_STATEMENT_TIMEOUT` — migrations can be long-running, so the timeout is set to 5 minutes (`300000` ms) instead of the `15000` ms used by the Sidekiq workers.
- Same secrets and configmap — the job shares credentials and config with the services it prepares the database for.
## ConfigMaps for Environment Variables
The api uses a large configmap to centralize environment variables shared across all services. This keeps individual service definitions focused on only the variables that differ between roles:
```yaml
configmaps:
  - name: api-env-vars
    data:
      # Static configuration
      RAILS_ENV: staging
      TZ: America/New_York
      JSON_LOGGING_ENABLED: true

      # Interpolated values
      DD_ENV: ${env.name}
      DEPLOYMENT_ENV: ${env.name}
      API_DOMAIN: api-${env.name}.${cluster.dnsDomain}
      ADMIN_DOMAIN: api-admin-${env.name}.${cluster.dnsDomain}
      INTERNAL_DOMAIN: api-internal-${env.name}.${cluster.dnsDomain}
      STATS_DOMAIN: api-stats-${env.name}.${cluster.dnsDomain}

      # Database endpoints from pool
      PGHOST: ${database.0.writer_endpoint}
      REPLICA_HOST: ${database.0.reader_endpoint}
      QUICKFIX_PG_HOST: ${database.0.writer_endpoint}
      QUICKFIX_REPLICA_PGHOST: ${database.0.reader_endpoint}

      # Redis pointing to sidecar
      REDIS_CACHE_URL: redis://api-redis:6379
      REDIS_SIDEKIQ_URL: redis://api-redis:6379
      REDIS_SCORING_URL: redis://api-redis:6379

      # ... many more application-specific settings
```
Key patterns:

- Interpolation in configmaps — `${env.name}`, `${cluster.dnsDomain}`, and `${database.0.writer_endpoint}` are all resolved by greyhound before the configmap is applied.
- `${database.0.writer_endpoint}` / `${database.0.reader_endpoint}` — references the first database pool attached to the environment. The `0` is a zero-based index into the pool list.
- Service DNS references — `redis://api-redis:6379` uses Kubernetes service discovery, since all services share a namespace.
- Separation of concerns — shared config lives in the configmap; only role-specific variables (like `APP_RUNNING_MODE`) are set per service.
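The dotted, zero-indexed form of the database variables can be modeled as a path walk through the environment's attached pools. Again a sketch of the idea, not greyhound internals, with an invented context:

```python
def lookup(path: str, ctx):
    """Walk a dotted path like 'database.0.writer_endpoint' through the context.

    Numeric segments index into lists (the attached pools); other
    segments are dictionary keys.
    """
    node = ctx
    for part in path.split("."):
        node = node[int(part)] if part.isdigit() else node[part]
    return node

# Hypothetical deploy-time context with one checked-out database:
ctx = {"database": [{"writer_endpoint": "writer.cluster.local",
                     "reader_endpoint": "reader.cluster.local"}]}
```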
## Multi-App Orchestration
The `applications.yaml` brings everything together, composing the api with its dependent services:
```yaml
applications:
  - name: api
    repository: Underdog-Inc/api
    default_cluster: develop-use2
    default_database_pools:
      - api-staging-snapshot
    additional_applications:
      - name: finorc
        import_type: Required
        repository: Underdog-Inc/financial_orchestration
        branch: main
      - name: web-app
        import_type: Required
        repository: Underdog-Inc/web-app
        branch: main
    variable_overrides:
      - variable_name: API_ENDPOINT
        service_name: web-app
        variable_value: https://api-${env.name}.${cluster.dnsDomain}
      - variable_name: STAT_ENDPOINT
        service_name: web-app
        variable_value: https://api-stats-${env.name}.${cluster.dnsDomain}
```
Key patterns:

- `default_cluster` — sets the EKS cluster for environments provisioned from this application.
- `default_database_pools` — attaches an Aurora database snapshot pool. greyhound checks out a database from this pool for each environment, making `${database.0.*}` variables available.
- `additional_applications` — pulls in `finorc` and `web-app` as required child services. Both are deployed alongside the api in the same namespace.
- `variable_overrides` — injects the api's interpolated URL into the `web-app` service's env vars. This is how the frontend discovers its backend endpoint — greyhound resolves the `${env.name}` and `${cluster.dnsDomain}` variables and passes the final URL to the child application.
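The override mechanism amounts to targeted patches on a child service's env map after interpolation has run. A sketch under the same illustrative model (the data shapes are assumptions):

```python
def apply_overrides(envs: dict, overrides: list) -> None:
    """Set each override's variable on the named child service, in place."""
    for o in overrides:
        envs.setdefault(o["service_name"], {})[o["variable_name"]] = o["variable_value"]

envs = {"web-app": {"NODE_ENV": "staging"}}
apply_overrides(envs, [
    # variable_value shown already interpolated, as greyhound would pass it
    {"variable_name": "API_ENDPOINT", "service_name": "web-app",
     "variable_value": "https://api-pr-123.develop.example.com"},
])
```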
## Summary
This configuration demonstrates several patterns you can apply to your own greyhound setup:
| Pattern | Where Used |
|---|---|
| Private dependency credentials in builds | `builds[].secrets` + `env[].valueFrom` |
| Multiple services from one image | `image_from_build` + `APP_RUNNING_MODE` |
| Centralized secrets via CSI volumes | `secretproviders` + `shared_volumes` |
| Large shared config via configmaps | `configmaps` + `envFrom` |
| Environment-aware hostnames | `${env.name}` + `${cluster.dnsDomain}` in rules and configmaps |
| Database pool endpoints | `${database.0.writer_endpoint}` in configmaps |
| Pre-deploy database migrations | `jobs` with custom `command` |
| Sidecar services (Redis/Valkey) | `image` with public container image |
| Multi-app with child dependencies | `additional_applications` in `applications.yaml` |
| Cross-app variable injection | `variable_overrides` targeting child service env vars |