Why Pi uses async

Operations like brand extraction, image generation, and campaign rendering are compute-heavy and can take seconds to minutes. Rather than holding a connection open, Pi returns immediately with 202 Accepted and a job_id. Your code then polls for the result — or uses long-polling to let the server wait on your behalf. This pattern lets you:
  • Run many jobs in parallel without blocking threads
  • Integrate into queues, serverless functions, and automation platforms
  • Retry failed jobs without re-triggering the original POST

Job lifecycle

Every job moves through these states in order:
queued → processing → completed | failed
State         Meaning
queued        Job accepted and waiting for a worker
processing    Worker is actively executing the job
completed     Job finished successfully; result is available in job_result
failed        Job encountered an unrecoverable error; details are in error_log
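
In a shell client, the terminal/non-terminal distinction can be encoded as a small predicate. This helper is a sketch, not part of the API; the state names come from the table above:

```shell
# Sketch: a tiny predicate over the job states above. Only completed and
# failed are terminal; anything else means the job is still in flight.
is_terminal() {
  case "$1" in
    completed|failed) return 0 ;;   # stop polling; result or error_log is ready
    *)                return 1 ;;   # queued, processing: keep polling
  esac
}

is_terminal "completed" && echo "stop polling"
is_terminal "processing" || echo "keep polling"
```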

Polling: GET /api/v1/jobs/:id

Retrieve a job’s current state at any time:
GET /api/v1/jobs/:id

Query parameters

wait_for_completion (boolean)
  When true, the server holds the connection open and returns only when the job reaches a terminal state (completed or failed). Use this instead of a manual polling loop.

timeout_seconds (number, default: 20)
  Maximum seconds to wait when wait_for_completion=true. If the job is still running when the timeout expires, the server returns the current (non-terminal) state so you can re-poll.

expand (string)
  Pass expand=brand to include the resolved brand object inline in data.expanded.brand when the job result references a brand in your organization. Reduces the need for a follow-up GET.

Long-polling

Long-polling is the recommended pattern for server-side integrations. Pass wait_for_completion=true and a generous timeout_seconds, then re-poll if the response comes back without a terminal status:
curl -sS "https://api.example.com/api/v1/jobs/JOB_ID?wait_for_completion=true&timeout_seconds=20&expand=brand" \
  -H "Authorization: Bearer pi_live_***"
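
The re-poll loop looks like this in outline. Here get_job is a stand-in for the long-poll request above (in real code it would run the curl call and read data.status, e.g. with jq -r '.data.status'); the stub returns canned statuses so the sketch runs without a network:

```shell
# Sketch of the re-poll loop. get_job stands in for the long-poll request;
# in real code it would run curl and read the returned status, e.g.
#   STATUS="$(curl -sS "$BASE/api/v1/jobs/$JOB_ID?wait_for_completion=true&timeout_seconds=20" \
#     -H "Authorization: Bearer $API_KEY" | jq -r '.data.status')"
# The stub below simulates two timeouts followed by completion.
ATTEMPT=0
get_job() {
  ATTEMPT=$((ATTEMPT + 1))
  if [ "$ATTEMPT" -lt 3 ]; then
    STATUS="processing"   # timeout elapsed before the job finished
  else
    STATUS="completed"
  fi
}

STATUS="queued"
# Keep asking until the job reaches a terminal state. No sleep is needed:
# each real request already waits up to timeout_seconds server-side.
while [ "$STATUS" != "completed" ] && [ "$STATUS" != "failed" ]; do
  get_job
done
echo "final status: $STATUS"   # → final status: completed
```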

Full example: extract and poll

1. Create the job

POST to an async endpoint. The response body contains data.job_id.
export BASE="https://api.example.com"
export API_KEY="pi_live_***"

curl -sS -X POST "$BASE/api/v1/brands/extract" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com"}'
# → {"object":"job","status":"queued","data":{"job_id":"<uuid>"}}
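
To feed the next step, extract data.job_id from the creation response. jq -r '.data.job_id' is the usual tool; the sketch below uses only shell parameter expansion so it has no dependencies. RESPONSE is a canned copy of the payload above, with job_abc123 as a made-up placeholder ID:

```shell
# Sketch: pull data.job_id out of the creation response. In real code
# RESPONSE would be the output of the curl command above.
RESPONSE='{"object":"job","status":"queued","data":{"job_id":"job_abc123"}}'
JOB_ID=${RESPONSE#*'"job_id":"'}   # drop everything up to the value
JOB_ID=${JOB_ID%%'"'*}             # drop the closing quote onward
echo "$JOB_ID"   # → job_abc123
```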
2. Poll until terminal

Use wait_for_completion=true to let the server wait. Re-poll if the timeout elapses before completion.
curl -sS "$BASE/api/v1/jobs/JOB_ID_HERE?wait_for_completion=true&timeout_seconds=20&expand=brand" \
  -H "Authorization: Bearer $API_KEY"

Error handling

When a job reaches failed, inspect error_log in the response for a description of what went wrong:
{
  "data": {
    "id": "<job_id>",
    "status": "failed",
    "error_log": "Firecrawl could not reach the provided URL.",
    "job_result": null
  }
}
Re-submission pattern: Fix the root cause (e.g. correct the URL, supply valid credentials) and re-POST to the original creation endpoint. Do not retry a failed job by polling — create a new job.
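
A minimal sketch of the failure branch, using a canned copy of the failed payload above (in real code RESPONSE would be the body returned by the poll, and job_abc123 is a made-up placeholder ID):

```shell
# Sketch: detect a failed job, surface error_log, then re-create rather
# than retry. RESPONSE is the canned failed-job payload from above.
RESPONSE='{"data":{"id":"job_abc123","status":"failed","error_log":"Firecrawl could not reach the provided URL.","job_result":null}}'
STATUS=${RESPONSE#*'"status":"'};   STATUS=${STATUS%%'"'*}
if [ "$STATUS" = "failed" ]; then
  ERROR=${RESPONSE#*'"error_log":"'}; ERROR=${ERROR%%'"'*}
  echo "job failed: $ERROR" >&2
  # Fix the root cause, then POST a NEW job to the original creation
  # endpoint (e.g. /api/v1/brands/extract); polling the failed job_id
  # again will never restart it.
fi
```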
Shopify app backends should prefer wait_for_completion=true on server-side workers, then persist brand_id in your shop config. ManyChat or Zapier-style connectors should poll with short timeouts and emit brand_id as the stable downstream key.