Jobs#
The `jobs` command group starts an HTTP job server and provides CLI clients for submitting workloads and streaming their progress.
Subcommands#
| Command | Description |
|---|---|
| `edg jobs serve` | Start the HTTP server on port 3000 |
| `edg jobs submit` | Submit a workload config to the server |
| `edg jobs status [id]` | Show status of one or all jobs |
| `edg jobs stream <id>` | Stream live progress from a running job |
| `edg jobs health` | Check job server health |
Server#
Start the job server:
```shell
edg jobs serve
```

The server exposes five endpoints:
| Method | Path | Description |
|---|---|---|
| GET | `/healthz` | Health check (returns `ok`) |
| POST | `/jobs` | Submit a YAML workload config |
| GET | `/jobs` | List all jobs and their statuses |
| GET | `/jobs/{id}` | Check job status |
| GET | `/jobs/{id}/stream` | Stream live progress via SSE |
Submit#
Submit a workload config from the CLI:
```shell
edg jobs submit \
  --url "postgres://root@localhost:26257?sslmode=disable" \
  --driver pgx \
  --config workload.yaml \
  --duration 30s \
  --workers 4
```

The response contains the job ID:

```json
{"id":"a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d"}
```

Submit flags#
| Flag | Short | Default | Description |
|---|---|---|---|
| `--server` | `-s` | `http://localhost:3000` | Job server address |
| `--duration` | `-d` | `1m` | Run duration |
| `--workers` | `-w` | `1` | Number of concurrent workers |
| `--stream` | | `false` | Stream live logs after submitting |
Global flags like `--url`, `--driver`, `--config`, `--errors`, `--retries`, `--pool-size`, and `--no-atomic-tx` are forwarded to the server. See CLI Reference for details.
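If you script against the server directly instead of using the CLI, the submit response is a single JSON object. A minimal parsing sketch (the function name is illustrative, not part of edg):

```python
import json

def parse_submit_response(body: str) -> str:
    """Extract the job id from a POST /jobs response body."""
    return json.loads(body)["id"]

# The id can then be used with `edg jobs stream` or GET /jobs/{id}.
job_id = parse_submit_response('{"id":"a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d"}')
```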
Stream#
Stream live progress logs from a running job:
```shell
edg jobs stream a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d
```

The connection stays open until the job completes. Progress stats are printed to stdout as they arrive.
Alternatively, pass `--stream` to `submit` to stream automatically without needing the job ID:
```shell
edg jobs submit \
  --url "postgres://root@localhost:26257?sslmode=disable" \
  --config workload.yaml \
  --stream
```

Stream flags#
| Flag | Short | Default | Description |
|---|---|---|---|
| `--server` | `-s` | `http://localhost:3000` | Job server address |
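The stream endpoint uses standard Server-Sent Events, so each progress update arrives on a `data:` line. A hedged sketch of pulling payloads out of raw SSE text; the payload format itself (shown here as JSON with an `elapsed` field) is an assumption about what the server emits:

```python
def sse_data_lines(raw: str) -> list[str]:
    """Collect payloads from `data:` lines of an SSE stream.

    Per the SSE spec, a single space after the colon is not part of the data.
    """
    return [
        line[len("data:"):].removeprefix(" ")
        for line in raw.splitlines()
        if line.startswith("data:")
    ]
```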
Status#
Query job status via CLI or HTTP.
All jobs#
```shell
edg jobs status
```

```shell
curl http://localhost:3000/jobs
```

```json
[
  {
    "id": "a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d",
    "status": "running",
    "started_at": "2025-04-23T14:32:07Z"
  },
  {
    "id": "f6e5d4c3-b2a1-4f6e-8d7c-9a0b1c2d3e4f",
    "status": "completed",
    "started_at": "2025-04-23T14:30:00Z",
    "completed_at": "2025-04-23T14:30:30Z"
  }
]
```

Single job#
```shell
edg jobs status a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d
```

```shell
curl http://localhost:3000/jobs/a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d
```

While running:

```json
{
  "id": "a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d",
  "status": "running",
  "started_at": "2025-04-23T14:32:07Z"
}
```

After completion:

```json
{
  "id": "a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d",
  "status": "completed",
  "started_at": "2025-04-23T14:32:07Z",
  "completed_at": "2025-04-23T14:32:37Z"
}
```

If the workload fails:

```json
{
  "id": "a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d",
  "status": "failed",
  "error": "connecting to database: ...",
  "started_at": "2025-04-23T14:32:07Z",
  "completed_at": "2025-04-23T14:32:07Z"
}
```

Status flags#
| Flag | Short | Default | Description |
|---|---|---|---|
| `--server` | `-s` | `http://localhost:3000` | Job server address |
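When polling GET /jobs programmatically, jobs can be filtered on the `status` field shown in the responses above. A small sketch; the function name is illustrative, but the response shape matches this page:

```python
import json

def running_job_ids(body: str) -> list[str]:
    """Return ids of jobs whose status is `running` from a GET /jobs body."""
    return [job["id"] for job in json.loads(body) if job["status"] == "running"]
```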
Health Check#
```shell
edg jobs health
```

```shell
curl http://localhost:3000/healthz
```

```
ok
```

Health flags#
| Flag | Short | Default | Description |
|---|---|---|---|
| `--server` | `-s` | `http://localhost:3000` | Job server address |
Lifecycle#
The server automatically runs up, seed, deseed, and down lifecycle sections if defined in the config. This means a single config file can create tables, seed data, run the workload, and clean up.
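As a rough illustration, a single config covering the full lifecycle might be shaped like this. Only the section names (up, seed, deseed, down) come from this page; the statement syntax and everything else shown are assumptions about the config format:

```yaml
# Hypothetical workload.yaml layout: only the lifecycle section names
# (up, seed, deseed, down) are documented here; the rest is illustrative.
up:
  - CREATE TABLE IF NOT EXISTS accounts (id UUID PRIMARY KEY, balance INT)
seed:
  - INSERT INTO accounts (id, balance) VALUES (gen_random_uuid(), 100)
# ... the workload itself, as defined elsewhere in the config format ...
deseed:
  - TRUNCATE accounts
down:
  - DROP TABLE IF EXISTS accounts
```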
Jobs are stored in memory. Restarting the server clears all job history.