Introducing authenticity checks (beta): Ensure genuine human responses in the age of AI

As AI becomes increasingly prevalent in our digital lives, the research world faces a critical inflection point. Pure, authentic human insights have never been more valuable—or more threatened.
That's why we've built authenticity checks—a powerful new tool that automatically detects AI-generated free-text responses with 98.7% accuracy.
The challenge of AI in primary data collection
Large language models like ChatGPT have become increasingly accessible and sophisticated, which means researchers face a growing challenge: how can we distinguish between authentic human responses and AI-generated answers?
We’ve directly observed how AI threatens research validity across the industry, with a growing share of participant responses being generated by AI.
We’re committed to preserving authentic primary data collection. That’s why we've developed a new behavioral detection system that goes beyond surface-level content analysis.
Introducing new authenticity checks
Authenticity checks use advanced behavioral analysis to detect when participants may be using AI tools (like ChatGPT) or third-party websites instead of providing genuine responses to free-text questions.
It’s difficult to tell if a response is authentic just by reading it. These checks look for suspicious behavioral patterns rather than analyzing the text itself, adding another layer of defense on top of manually reading responses.
They’re available now at no extra cost, as part of our commitment to high data quality standards.
How it works
- Choose which free-text questions you want to monitor, then insert the provided JavaScript for your platform.
- As each participant answers, our model looks for real-time behaviors that suggest the response isn’t authentic. It runs over 15 different checks and uses machine learning to weigh up the likelihood of an inauthentic response. When the model has high confidence, it’ll flag the response.
- When participants submit responses, Prolific will show you which ones have been flagged in the authenticity check column:
- Green bar: All free-text responses appear authentic
- Red bar: Suspicious patterns detected in all free-text responses
- Mixed bar: Some questions flagged (e.g., if 2 out of 4 questions raise suspicion, you'll see a 50% green, 50% red bar)
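To make the mixed-bar example above concrete, here is a minimal sketch of how the bar proportion could be derived from per-question flags. This is purely illustrative — the function name and data shape are assumptions, not Prolific's actual API.

```javascript
// Hypothetical sketch: derive the green/red bar split for one submission
// from per-question authenticity results. Illustrative only.
function authenticityBar(questions) {
  // questions: array of { id, flagged } results, one per free-text question
  const flagged = questions.filter((q) => q.flagged).length;
  const redShare = questions.length ? flagged / questions.length : 0;
  return {
    red: Math.round(redShare * 100),        // % of the bar shown red
    green: Math.round((1 - redShare) * 100) // % of the bar shown green
  };
}
```

With 2 of 4 questions flagged, this yields a 50% green, 50% red bar, matching the example above.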
Easily see and sort through flagged submissions, saving countless hours of manual checks. You can reject flagged responses, as long as the task didn’t require the use of an external source. Read our full guidelines and instructions.

Easily sort responses with highly accurate checks
Our extensive internal testing shows that authenticity checks are highly precise:
- When our model flags answers as AI-generated, it's correct 98.7% of the time, offering a reliable way to sort, double-check and remove responses
- With a low false positive rate (0.6%), there is minimal risk of incorrectly flagging authentic participants
We strive to maximize effective detection while remaining fair to participants. As you use these checks, your feedback will be crucial in helping us improve them.
Which platforms support authenticity checks?
During this beta phase, authenticity checks are only compatible with projects designed in:
We're actively working to expand compatibility with additional platforms based on your feedback. Read our full instructions.
Best practices for implementation
Authenticity checks work for free-text questions that require participants’ own knowledge without using external sources.
Whilst we already tell participants not to use AI (unless a task instructs them to do so), we’ve found it’s best to remind them in the question itself. Our internal tests found that adding clear reminders reduced AI usage by 61%, making this a simple but effective first line of defense.
Learn more about best practices around authenticity checks.
Your feedback shapes the future
As you use this feature, we’d love to hear your feedback on:
- How the feature performs in your specific research contexts
- Suggestions for improvements to how it works
- Additional platforms you'd like to see supported
AI is evolving rapidly, so your experiences will directly influence the development of authenticity checks and future features, ensuring they meet the diverse needs of our research community.
Get started with authenticity checks today.
Want to learn more about Prolific’s approach to data quality?
Join our VP of Product, Sara Saab, next week for an online discussion about keeping research real in the age of AI.
You’ll learn about:
- Current challenges in maintaining data integrity in this new AI era
- Prolific's suite of features that verify and monitor our participant quality
- Practical strategies to optimise study design for high quality results
- A demo of new authenticity checks
Wednesday 21st May at 9am PDT / 5pm BST