Human data powers every step of modern AI—from curation and labeling to evaluation and alignment. Yet no shared baseline exists for how it’s sourced, governed, and measured. Help us change that.
Why your voice matters
AI systems are only as trustworthy as the human data beneath them. Today, decisions about data sources, consent, quality, incentives, safety, and evaluation are fragmented and opaque.
This study establishes a shared baseline for how the field actually works: what's standard, what's emerging, and what needs to improve. Your input will guide the practices, incentives, and tools that shape the next wave of AI.
Who should take this
All leaders and practitioners who influence AI data, evaluation, or decision-making.
If you work primarily in AI or shape how human data is sourced, labeled, or evaluated, we want to hear from you.
More information
- Early access to the published report and a summary deck you can share internally
- Benchmark insights to compare your practices with peers
- Time: ~5 minutes
- Anonymity: Responses are aggregated; no personally identifiable information will be published.
- Use of data: Research only. If you choose to leave contact information, it is stored separately from your responses.

About Prolific
Prolific is the human data platform trusted by researchers and AI teams to collect high-quality, consented data for evaluation, alignment, and decision-making. We’re leading this study to set a clear, actionable baseline for the field.