Comparing data quality in online research platforms: Why Prolific outperforms the competition

Challenge
Online research platforms have revolutionized academic studies, but researchers face ongoing concerns about data quality, from inattentive participants to automated bots. Low-quality data can significantly distort research outcomes and undermine trust in the findings. Researchers need clear evidence about which online platforms provide reliable, high-quality data at a reasonable cost.
Solution
A team of independent researchers – Benjamin D. Douglas, Patrick J. Ewell, and Markus Brauer – set out to evaluate five of the most commonly used platforms for online research: Prolific, CloudResearch, MTurk, Qualtrics, and an undergraduate participant pool (SONA). Their aim was to identify the platform that consistently delivers high-quality data and does so cost-effectively.
Execution
The researchers collected responses from over 2,700 participants across the five platforms. They used multiple robust methods to compare data quality accurately. These included attention checks to verify that participants were engaging carefully, and measures of memory recall accuracy to confirm meaningful participation.
Additionally, the reliability of personality scales was evaluated to spot random or inconsistent answers. The researchers also analysed participant uniqueness through IP addresses and geolocation. Finally, they calculated the cost per high-quality participant to determine overall value.
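The cost metric in that final step is straightforward: total spend divided by the number of respondents who pass every quality check. A minimal sketch of the idea (the function name and figures below are illustrative, not taken from the study):

```python
# Hypothetical illustration of the cost-per-high-quality-participant metric.
# Figures are made up for demonstration; they are not the study's data.

def cost_per_high_quality(total_cost: float, n_recruited: int, pass_rate: float) -> float:
    """Effective cost of each respondent who passes all quality checks."""
    n_high_quality = round(n_recruited * pass_rate)
    if n_high_quality == 0:
        raise ValueError("No participants passed the quality checks.")
    return total_cost / n_high_quality

# Example: recruit 500 participants for $750 total; 80% pass every check.
print(round(cost_per_high_quality(750.0, 500, 0.80), 2))  # 1.88
```

The point of the metric is that a platform with cheap recruitment but a low pass rate can end up more expensive per usable response than a platform that charges more upfront.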
Results
The study clearly found that Prolific and CloudResearch consistently produced superior data quality:
- Participants from Prolific passed attention checks significantly more often than those from MTurk, Qualtrics, or the undergraduate sample.
- Prolific respondents demonstrated excellent recall accuracy at 83.47%, markedly higher than MTurk’s 52.20%.
- When factoring in the cost, Prolific emerged as highly cost-effective, at just $1.90 per high-quality respondent, compared with MTurk ($4.36) and Qualtrics ($8.17).
- Overall, Prolific provided the highest proportion of engaged, attentive, and reliable respondents across all metrics evaluated.
Conclusion
Researchers need platforms they can trust to collect reliable data without breaking budgets. Independent findings show Prolific stands out by delivering consistent data quality at a lower cost than alternatives like MTurk and Qualtrics. For academics looking for dependable results from their online studies, Prolific has become a practical choice backed by clear evidence.
Citation
Douglas, B.D., Ewell, P.J., & Brauer, M. (2023). Data quality in online human-subjects research: Comparisons between MTurk, Prolific, CloudResearch, Qualtrics, and SONA. PLOS ONE, 18(3), e0279720. https://doi.org/10.1371/journal.pone.0279720