Articles

Why you should stop using MTurk

George Denison
November 10, 2025

Amazon Mechanical Turk (MTurk) is a crowdsourcing platform used by researchers to collect data at scale. Perhaps you’re one of them. If you are, stop using MTurk today. 

Independent research has shown that the data quality you get on MTurk is much lower than that of competitors. The cost per quality response makes it one of the worst value propositions in online research. And it has become known for its highly unethical treatment and compensation of participants. 

Here's why you should stop using MTurk - and what you should use instead.

Why do you get poor data quality on MTurk?

For many years now, researchers have raised concerns about the low-quality data delivered by MTurk participants.

In 2018, reports surfaced of high numbers of bots flooding the platform with unreliable data. A 2020 review in the SMA management journal highlighted several challenges associated with using MTurk, including inattentive participants, self-misrepresentation, and attrition rates as high as 30%. A 2021 study found that data quality on MTurk was alarmingly low compared to other platforms, even with data quality filters in place.  

One independent study from 2023 compared five major online research platforms and found that MTurk ranked last across almost every quality measure tested. Researchers at the University of Wisconsin–Madison evaluated MTurk, Prolific, CloudResearch, Qualtrics, and a university student sample using multiple quality indicators.

MTurk participants performed worse than every other platform on:

  • Attention checks
  • Memory tests
  • Response consistency
  • Survey engagement 

Let’s take a look at why the quality of responses on MTurk is worse than on other platforms. 

MTurk participants aren't reading questions carefully

In the University of Wisconsin–Madison study, when researchers looked at personality scale reliability, MTurk showed the weakest internal consistency among all platforms tested. MTurk participants were contradicting themselves within the same survey, which suggests they weren't reading questions carefully.

When researchers asked the same question twice, 21 questions apart, MTurk participants were more likely to give different answers than participants on any other platform. Also, more than half of MTurk respondents couldn't correctly recall a video they'd watched minutes earlier during the survey.

The professional survey-taker problem 

In the independent study, MTurk participants reported taking more surveys than participants on competing platforms. This has created a pool of professional survey-takers who have learned to game the system. They know how to pass attention checks while rushing through surveys and providing low-quality responses.

Low compensation leads to unethical, poor-quality data 

The minimum fee per task on MTurk is $0.01, plus a 20% commission paid to Amazon on top of participant rewards. It’s then up to researchers how much they pay participants for taking studies. While there’s little published data on average earnings on MTurk, a 2021 study suggests the average participant on MTurk earns as little as $2 per hour.

These low compensation rates raise ethical concerns about fair pay. Low reward rates also impact data quality; when participants feel undervalued, they're less motivated to provide thoughtful and careful responses.  

MTurk costs more than you think

Even if you were willing to overlook the data quality issues, MTurk simply doesn't offer good value.

The real cost per quality response is shockingly high

The Wisconsin study calculated the cost per high-quality respondent across platforms. MTurk's performance was poor. 

After filtering out low-quality responses, MTurk costs $4.36 per quality respondent. Prolific, by contrast, costs $1.90 per quality response - less than half of what you’d pay on MTurk. 

You're paying for low-quality data you can't use

The seemingly low cost per completed survey on MTurk becomes meaningless when a large portion of those surveys contains unusable data. You might save money by paying less to recruit participants, but if participants rush through your survey, fail attention checks, and provide contradictory answers, you've wasted that money.

When quality is defined as passing attention checks, having unique identifiers, spending adequate time on the survey, and providing meaningful responses, MTurk has the lowest percentage of quality respondents among major platforms. You're paying to collect data you'll need to throw away.
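The arithmetic behind this is straightforward: the cost that matters isn't the price per completed survey, it's that price divided by the share of responses that survive quality filtering. Here's a minimal sketch of that calculation, using hypothetical placeholder numbers rather than the study's actual inputs:

```python
def cost_per_quality_response(cost_per_complete: float, quality_rate: float) -> float:
    """Effective cost of one usable response.

    cost_per_complete: nominal cost paid per completed survey.
    quality_rate: fraction of completed surveys that pass quality checks (0 < rate <= 1).
    """
    if not 0 < quality_rate <= 1:
        raise ValueError("quality_rate must be in (0, 1]")
    return cost_per_complete / quality_rate

# Hypothetical example: a platform that looks cheaper per completion
# can cost more per usable response once low-quality data is discarded.
cheap_but_noisy = cost_per_quality_response(cost_per_complete=1.50, quality_rate=0.40)
pricier_but_clean = cost_per_quality_response(cost_per_complete=1.60, quality_rate=0.85)

print(f"Cheap but noisy: ${cheap_but_noisy:.2f} per quality response")     # $3.75
print(f"Pricier but clean: ${pricier_but_clean:.2f} per quality response") # $1.88
```

In this made-up scenario, the platform charging less per completion ends up roughly twice as expensive per response you can actually keep, which is the same dynamic the Wisconsin study's $4.36 vs. $1.90 figures describe.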

What you should use as an alternative to MTurk

The solution to MTurk's problems doesn't require compromise. You can get high-quality, ethically sourced data at a much better value by switching to Prolific.

Prolific delivers the highest quality data for the lowest cost

In the Wisconsin study, Prolific and CloudResearch tied for the highest data quality across all measures tested. Compared to MTurk, Prolific participants:

  • Correctly recalled survey content at much higher rates
  • Showed stronger internal consistency on personality measures
  • Provided more reliable test-retest responses

Prolific also delivered the best value among paid platforms at $1.90 per quality respondent compared to MTurk's $4.36. You get far better data for less than half the cost.

A 2025 study that evaluated Prolific alongside other data collection platforms like MTurk found that “Prolific is consistent in delivering the most attentive and reliable responses”. It also found Prolific to have “remarkably low dropout rates”.

Prolific is designed to ensure authentic human responses

Protocol, Prolific’s five-layer quality system, ensures the data you get is genuinely human and keeps out the bots and low-quality participants you’ll find on MTurk.  

Here’s how it works:

  1. Quality over quantity: Where MTurk prioritizes large volumes of participants, Prolific is highly selective. We invite just 13% of applicants from our waitlist. After 50+ quality and identity checks, 55% pass to take studies and tasks.
  2. Continuous fraud protection: Ad-hoc identity checks ensure no bots or fraud. Machine learning algorithms detect suspicious patterns in the pool, and a full-time quality team takes action to remove any bad actors. 
  3. In-study quality assurance: Prolific’s authenticity checks detect AI-generated responses with 98.7% precision. If submissions are exceptionally fast, they’re automatically flagged and replaced.
  4. Performance-based controls: The best-performing participants get priority access to studies, while poor performers are excluded. Researchers can filter for high-performers using quality and skills-based targeting.
  5. Motivated, engaged participants: Prolific’s industry-leading reward structure ensures participants get paid at least $8.00 per hour. This has fostered a loyal participant community that delivers rich, thoughtful responses. 

Prolific is as fast as MTurk without the drawbacks

While MTurk is fast, Prolific matches that speed while delivering better quality. Prolific participants deliver complete datasets within 2 hours on average. You don't have to choose between fast data collection and quality data.

Make the switch to Prolific

MTurk might be one of the most well-known platforms for data collection. But there are alternatives offering higher-quality data, more ethical practices, and better value for your money. 

Prolific is one of the leading competitors to MTurk. Don’t let bots, bad actors, and low-quality responses hold back your research. Make the switch today and reach your next breakthrough with authentic and reliable human insight.

Find out more about how Prolific sets the gold standard for data quality in online platforms. Or experience it for yourself by getting started now.