Your agent can now run studies and evaluations on Prolific
Trusted by leading engineering teams
Built for pipelines
"I want to remove any barrier between my agent and the results."
How teams do more with the CLI
Four programmatic workflows across one human intelligence network.
Do more from where you already operate
Start where it matches your stack.
Fast-moving AI teams using Prolific
Trusted by AI/ML developers, researchers, and leading organizations across industries.
Start building automated workflows
Questions from engineers integrating Prolific
Is the participant pool the same across the UI, CLI, and API?
Yes. Prolific exposes a REST API and a CLI over the same 200,000+ identity-verified participant pool. UI, CLI, and API address the same 300+ filter attributes and the same audit trail. Anonymised participant response data is accessible via any of the three.
Can any automated system integrate with Prolific?
Yes. Any system that can make authenticated HTTPS requests — agent scaffolding, orchestrators, training pipelines, CI runners — can launch studies, subscribe to webhooks, and export responses. Idempotency keys on study creation make retries safe for automated callers.
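A minimal sketch of retry-safe study creation. The endpoint URL, payload fields, and header names here are illustrative assumptions, not the documented schema — the point is that reusing the same Idempotency-Key across retries lets the server deduplicate the creation:

```python
import json
import uuid

# Hypothetical endpoint and payload shape -- consult the API reference
# for the real study-creation schema.
API_URL = "https://api.prolific.com/api/v1/studies/"

def build_create_request(study_spec, idempotency_key=None):
    """Build an idempotent study-creation request.

    Sending the same Idempotency-Key on every retry of this call means
    a timed-out request can be re-sent without creating a duplicate study.
    """
    key = idempotency_key or str(uuid.uuid4())
    headers = {
        "Authorization": "Token <WORKSPACE_API_KEY>",
        "Content-Type": "application/json",
        "Idempotency-Key": key,  # reuse this exact key on retries
    }
    return API_URL, headers, json.dumps(study_spec)

url, headers, body = build_create_request({"name": "Pairwise preference eval"})
# A retry of the same logical call reuses the key:
_, retry_headers, _ = build_create_request(
    {"name": "Pairwise preference eval"},
    idempotency_key=headers["Idempotency-Key"],
)
```

Any HTTP client can then POST `body` to `url` with those headers; the retry carries an identical key, so the server treats it as the same creation.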
Can responses feed directly into model training?
Yes. Preference-collection studies launched via API return JSONL responses with stable cohort hashes and filter provenance in every record. The format is designed to feed directly into reward-model, DPO, and RLHF training code.
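A sketch of that ingest path, assuming an illustrative record shape (the field names `prompt`, `chosen`, `rejected`, `cohort_hash`, and `filters` are stand-ins, not the real schema): each JSONL line becomes one (prompt, chosen, rejected) triple of the kind common DPO training loops consume.

```python
import json

# Illustrative JSONL export: one record per participant response.
SAMPLE_JSONL = """\
{"prompt": "Summarise this email", "chosen": "Short summary.", "rejected": "A rambling reply.", "cohort_hash": "c9f1", "filters": ["profession:engineer"]}
{"prompt": "Explain recursion", "chosen": "A function that calls itself.", "rejected": "Loops.", "cohort_hash": "c9f1", "filters": ["profession:engineer"]}
"""

def to_dpo_pairs(jsonl_text):
    """Convert response records into (prompt, chosen, rejected) triples."""
    pairs = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue  # tolerate blank lines in the export
        rec = json.loads(line)
        pairs.append((rec["prompt"], rec["chosen"], rec["rejected"]))
    return pairs

pairs = to_dpo_pairs(SAMPLE_JSONL)
```

The cohort hash and filter provenance ride along in each record, so the same parser can also partition pairs by cohort for per-segment reward models.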
How fast do results come back?
Self-serve studies typically launch in minutes and return first responses in hours. Completion time depends on cohort size, filter strictness, and reward. Webhooks on response.submitted allow incremental ingest rather than blocking on full completion.
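A minimal sketch of incremental ingest, assuming an illustrative webhook payload shape: each response.submitted delivery is appended as it arrives, so aggregation can start before the study completes instead of polling for a final export.

```python
import json

# Responses accumulated as webhook deliveries arrive.
ingested = []

def on_webhook(raw_body):
    """Handle one webhook delivery; payload fields are illustrative."""
    event = json.loads(raw_body)
    if event.get("event") == "response.submitted":
        ingested.append(event["response"])
    # Other event types (e.g. study.completed) are routed elsewhere.

on_webhook(b'{"event": "response.submitted", "response": {"id": "r1", "choice": "A"}}')
on_webhook(b'{"event": "study.completed", "study_id": "s1"}')
```

After these two deliveries, one response has been ingested; downstream aggregation can run on `ingested` at any point mid-study.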
Can I target the same cohorts programmatically as in the UI?
Yes. The API accepts the same 300+ filter attributes available in the UI — demographic, professional, behavioural, and credential. A filter set resolves to a stable cohort hash that can be reused across runs for longitudinal comparison between model versions.
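The actual resolution happens server-side, but a sketch shows why such a hash is reproducible: canonicalise the filter set (sorted keys, fixed separators) before digesting it, so the same specification always yields the same hash regardless of how the filters were ordered when written.

```python
import hashlib
import json

def cohort_hash(filters):
    """Illustrative stable hash over a filter specification.

    Canonical JSON removes key-order and whitespace variance, so two
    equivalent filter sets digest to the same value across runs.
    """
    canonical = json.dumps(filters, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

a = cohort_hash({"age": {"min": 25, "max": 34}, "country": ["GB", "US"]})
b = cohort_hash({"country": ["GB", "US"], "age": {"max": 34, "min": 25}})
# a == b: key order does not change the hash
```

Pinning that hash in an evaluation config is what makes "same cohort, new model version" comparisons meaningful across releases.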
Are webhooks supported?
Yes. Subscribe to response.submitted, study.completed, and submission-review events. Payloads are signed and include filter provenance; delivery retries with exponential backoff on non-2xx responses.
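Signed payloads should be verified before ingest. The signing scheme below (HMAC-SHA256 over the raw body) is an assumption for illustration — check the webhook documentation for the exact header name and format — but the constant-time comparison is the part worth copying:

```python
import hashlib
import hmac

def verify_signature(secret, payload, signature):
    """Recompute the HMAC over the raw request body and compare in
    constant time, rejecting tampered or forged deliveries."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

secret = "whsec_example"  # hypothetical per-endpoint signing secret
payload = b'{"event": "response.submitted", "cohort_hash": "c9f1"}'
good = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
```

`verify_signature(secret, payload, good)` passes; any altered body or forged signature fails. Verify against the raw bytes before JSON parsing, since re-serialising can change the digest.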
Can a CI pipeline gate releases on study results?
Yes. A CI job can launch a study, wait on completion via the CLI or a webhook, aggregate responses against a threshold, and exit pass or fail. Reproducible filter hashes make the same cohort specification repeatable across releases.
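The gating step can be sketched as a small script the CI runner executes after export. The threshold, minimum-response count, and stubbed scores are assumptions for illustration:

```python
def gate(scores, threshold=0.7, min_responses=50):
    """CI gate: fail if too few responses arrived, or if the share of
    participants preferring the new model falls below the threshold."""
    if len(scores) < min_responses:
        return False  # not enough data to make a call
    return sum(scores) / len(scores) >= threshold

# In a real pipeline the CLI export supplies these; stubbed here:
# 1 = participant preferred the new model, 0 = preferred the baseline.
scores = [1] * 40 + [0] * 15
passed = gate(scores, threshold=0.7, min_responses=50)
# sys.exit(0 if passed else 1)  # exit code consumed by the CI runner
```

With 40 of 55 participants preferring the new model (about 0.73), this gate passes; the commented `sys.exit` line is where the boolean becomes a build result.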
How is API access authenticated?
API-key authentication over HTTPS. Keys are scoped to a workspace and can be rotated without downtime. Keys are never embedded in participant-facing URLs or exposed to study respondents.