What’s New: Verified domain experts, the first human-centric leaderboard, and more.

Welcome to our latest product updates! 🎉
The future of AI depends on rich human insight, and we're making it easier than ever to access. Our latest platform enhancements connect you directly with verified domain experts, introduce a new human-centered approach to AI evaluation, and streamline multi-annotator workflows.
Read on to discover what's new at Prolific and how we can power your research efforts.
Verified Domain Experts: Specialized knowledge on demand
In the race to build specialized AI, domain expertise isn't optional—it's essential. Expert human feedback provides the nuance and professional judgment that transforms good models into groundbreaking ones.
To that end, we’ve assembled a pool of 2,300+ verified specialists across critical disciplines:
- 🏥 Healthcare professionals from both UK and US medical systems, including GPs, neurologists, psychiatrists, and dermatologists
- 👩‍🔬 STEM experts with deep knowledge in biology, chemistry, mathematics, physics, and computer science
- 🧑‍💻 Programming specialists proficient in Python, JavaScript, Java, and SQL
- 🌎 Language experts fluent in Portuguese, Spanish, Italian, Arabic, Mandarin, French, and more
This growing network of specialists gives you immediate access to the targeted expertise your AI development demands—no recruitment delays, no quality concerns. You can access them in your Prolific account, or via our API.
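If you prefer to work programmatically, recruiting from the expert pool is simply a study created through the API with an expertise screener applied. The Python sketch below shows the rough shape of that request; the filter id, field values, and URLs are illustrative assumptions, so check the API documentation for the exact identifiers available to your workspace.

```python
# Minimal sketch (not a drop-in script): creating a draft study that targets
# verified healthcare experts via Prolific's REST API. The filter id and
# selected values below are illustrative placeholders - look up the real
# screener identifiers in the API docs before running anything.
import os
import requests

API_BASE = "https://api.prolific.com/api/v1"
HEADERS = {"Authorization": f"Token {os.environ['PROLIFIC_API_TOKEN']}"}

draft_study = {
    "name": "Dermatology image review (verified experts only)",
    "description": "Review skin-lesion images for model evaluation.",
    "external_study_url": "https://example.com/annotation-task",
    "estimated_completion_time": 20,   # minutes
    "reward": 900,                     # smallest currency unit (e.g. pence)
    "total_available_places": 50,
    # Hypothetical screener restricting recruitment to healthcare experts.
    "filters": [
        {"filter_id": "verified-domain-expertise", "selected_values": ["healthcare"]},
    ],
}

response = requests.post(f"{API_BASE}/studies/", json=draft_study, headers=HEADERS)
response.raise_for_status()
print("Created draft study:", response.json()["id"])
```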
Learn more about Domain Experts

The human experience leaderboard: Evaluating AI that matters
Technical benchmarks tell only part of the story. Our new human experience leaderboard measures what truly counts: how people actually experience AI models.
This first-of-its-kind evaluation framework captures dimensions traditional leaderboards miss:
- Multi-dimensional feedback beyond simple preferences
- Verified, representative participant samples, weighted with multilevel regression with poststratification (MRP; sketched below)
- Controlled experimental conditions for reliable results
- Experiential metrics that predict real-world adoption
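For the statistically curious: MRP pairs a multilevel regression, which estimates scores for each demographic cell, with poststratification, which reweights those cell estimates by their share of the target population. The toy Python sketch below illustrates only the reweighting step, with invented cells and numbers; it is a conceptual illustration, not our production pipeline.

```python
# Toy illustration of the poststratification step (the "P" in MRP).
# In practice a multilevel regression produces the per-cell estimates;
# here they are simply made-up numbers. Each cell's estimate is weighted
# by that cell's share of the target population, so the headline score
# reflects the population rather than whoever happened to take part.

# (age band, gender) -> (estimated helpfulness score, population share)
cells = {
    ("18-34", "female"): (4.1, 0.14),
    ("18-34", "male"):   (3.8, 0.15),
    ("35-54", "female"): (4.3, 0.18),
    ("35-54", "male"):   (4.0, 0.18),
    ("55+",   "female"): (4.6, 0.18),
    ("55+",   "male"):   (4.4, 0.17),
}

poststratified_score = sum(score * share for score, share in cells.values())
print(f"Population-weighted score: {poststratified_score:.2f}")
```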
The leaderboard evaluates models across critical human factors like helpfulness, communication quality, trustworthiness, adaptiveness, and cultural representation—giving you insights that predict not just performance, but user satisfaction and engagement.
Discover the leaderboard
AI Task Builder: Quality without complexity
Collecting multiple independent annotations per task just got easier. In AI Task Builder you can now specify how many participants should evaluate each data point, giving you the data you need to identify consensus and outliers (sketched below), which is critical for training and evaluating AI applications.
This highly requested feature eliminates manual workarounds:
- Set your annotator count once during setup
- Distribute tasks automatically to the right number of participants
- Get multiple annotations to improve data quality and reduce bias
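To make that concrete, here is a toy Python sketch of one common downstream step once you have several annotations per item: taking a majority-vote consensus and flagging low-agreement items for review. The field names and threshold are illustrative, not a prescribed Task Builder workflow.

```python
# Toy example: majority-vote consensus over multiple annotations per item,
# flagging items with low agreement for manual review. The data layout and
# the 2/3 agreement threshold are illustrative assumptions.
from collections import Counter

annotations = {
    "item-001": ["safe", "safe", "safe"],
    "item-002": ["safe", "unsafe", "safe"],
    "item-003": ["unsafe", "safe", "borderline"],
}

for item_id, labels in annotations.items():
    (label, votes), = Counter(labels).most_common(1)
    agreement = votes / len(labels)
    status = "needs review" if agreement < 2 / 3 else "ok"
    print(f"{item_id}: consensus={label}, agreement={agreement:.2f} ({status})")
```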
We’ve also shipped a number of user experience improvements that make setting up tasks in AI Task Builder much simpler. Together, they save you time while strengthening the quality signals in your training and evaluation data.
Learn more about AI Task Builder
Connect with the human data community
May 15: Get your AI model to market faster - Webinar
Join us to discover how subject matter experts can accelerate your specialized AI development:
- Access verified specialists matched to your unique needs
- Collect high-quality training data without the traditional delays
- Scale seamlessly as your requirements evolve
Join us for a live Q&A session with our team—bring your questions and we'll help you get started right away. All registrants will receive a recording of the webinar.
May 29: In the Loop - London dinner for human data leaders
Join Prolific and Encord for In the Loop, an exclusive dinner on Thursday, May 29, 2025, dedicated to the people behind effective AI. Connect with peers who understand your challenges and are shaping the future of human-in-the-loop processes.
Over the course of the evening, you’ll:
- Hear from peers: Discover fresh insights on how to build robust human‑in‑the‑loop workflows.
- Engage at the round table: Dive into facilitated conversations on best practices, common challenges, and emerging opportunities in human data.
- Build your community: Connect with fellow human data leaders and help shape the future of a UK‑based community for ongoing idea exchange.
Whether you oversee data annotation teams, design evaluation pipelines, or champion data quality at scale, you’ll leave inspired, with new connections and strategies to elevate your human‑in‑the‑loop programs.