Professional respondents in nonprobability online panels

DS Hillygus, N Jackson… - Online Panel Research …, 2014 - Wiley Online Library
It is well documented that there exists a pool of frequent survey takers who participate in
many different online nonprobability panels in order to earn cash or other incentives, so …

Common concerns with MTurk as a participant pool: Evidence and solutions

D Hauser, G Paolacci, J Chandler - Handbook of research …, 2019 - taylorfrancis.com
This chapter discusses common concerns that researchers have with Mechanical Turk
(MTurk), reviewing the evidence that bears upon each concern. It suggests that readers are …

Attentive Turkers: MTurk participants perform better on online attention checks than do subject pool participants

DJ Hauser, N Schwarz - Behavior research methods, 2016 - Springer
Participant attentiveness is a concern for many researchers using Amazon's Mechanical
Turk (MTurk). Although studies comparing the attentiveness of participants on MTurk versus …

The duality of empowerment and marginalization in microtask crowdsourcing

X Deng, KD Joshi, RD Galliers - MIS quarterly, 2016 - JSTOR
Crowdsourcing (CS) of micro tasks is a relatively new, open source work form enabled by
information and communication technologies. While anecdotal evidence of its benefits …

The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering.

K Gray, C Schein, AF Ward - Journal of Experimental Psychology …, 2014 - psycnet.apa.org
When something is wrong, someone is harmed. This hypothesis derives from the theory of
dyadic morality, which suggests a moral cognitive template of wrongdoing agent and …

A checklist to combat cognitive biases in crowdsourcing

T Draws, A Rieger, O Inel, U Gadiraju… - Proceedings of the AAAI …, 2021 - ojs.aaai.org
Recent research has demonstrated that cognitive biases such as the confirmation bias or the
anchoring effect can negatively affect the quality of crowdsourced data. In practice, however …

Crowd-sourced text analysis: Reproducible and agile production of political data

K Benoit, D Conway, BE Lauderdale… - American Political …, 2016 - cambridge.org
Empirical social science often relies on data that are not observed in the field, but are
transformed into quantitative variables by expert researchers who analyze and interpret …

Measuring the prevalence of problematic respondent behaviors among MTurk, campus, and community participants

EA Necka, S Cacioppo, GJ Norman, JT Cacioppo - PloS one, 2016 - journals.plos.org
The reliance on small samples and underpowered studies may undermine the replicability
of scientific findings. Large sample sizes may be necessary to achieve adequate statistical …

Using MTurk to distribute a survey or experiment: Methodological considerations

NC Hunt, AM Scheetz - Journal of Information Systems, 2019 - publications.aaahq.org
Amazon Mechanical Turk (MTurk) is a powerful tool that is more commonly
being used to recruit behavioral research participants for accounting research. This …

Designing incentives for inexpert human raters

AD Shaw, JJ Horton, DL Chen - … of the ACM 2011 conference on …, 2011 - dl.acm.org
The emergence of online labor markets makes it far easier to use individual human raters to
evaluate materials for data collection and analysis in the social sciences. In this paper, we …