Evaluating meta-analytic methods to detect selective reporting in the presence of dependent effect sizes.

MA Rodgers, JE Pustejovsky - Psychological methods, 2021 - psycnet.apa.org
Abstract
Selective reporting of results based on their statistical significance threatens the validity of meta-analytic findings. A variety of techniques for detecting selective reporting, publication bias, or small-study effects are available and are routinely used in research syntheses. Most such techniques are univariate, in that they assume that each study contributes a single, independent effect size estimate to the meta-analysis. In practice, however, studies often contribute multiple, statistically dependent effect size estimates, such as for multiple measures of a common outcome construct. Many methods are available for meta-analyzing dependent effect sizes, but methods for investigating selective reporting while also handling effect size dependencies require further investigation. Using Monte Carlo simulations, we evaluate three available univariate tests for small-study effects or selective reporting, including the trim and fill test, Egger’s regression test, and a likelihood ratio test from a three-parameter selection model (3PSM), when dependence is ignored or handled using ad hoc techniques. We also examine two variants of Egger’s regression test that incorporate robust variance estimation (RVE) or multilevel meta-analysis (MLMA) to handle dependence. Simulation results demonstrate that ignoring dependence inflates Type I error rates for all univariate tests. Variants of Egger’s regression maintain Type I error rates when dependent effect sizes are sampled or handled using RVE or MLMA. The 3PSM likelihood ratio test does not fully control Type I error rates. With the exception of the 3PSM, all methods have limited power to detect selection bias except under strong selection for statistically significant effects.
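To give a concrete sense of the regression-based approach evaluated in the paper, the sketch below implements a basic Egger-type test in Python using statsmodels: effect size estimates are regressed on their standard errors with inverse-variance weights, and the slope is tested against zero. This is an illustrative sketch, not the authors' simulation code; the data are hypothetical, and the cluster-robust option only loosely approximates the RVE variant described in the abstract (which typically relies on small-sample CR2/Satterthwaite corrections). The 3PSM and MLMA approaches are not shown.

```python
import numpy as np
import statsmodels.api as sm


def egger_test(yi, sei, cluster=None):
    """Egger-type regression test for small-study effects.

    Regresses effect size estimates (yi) on their standard errors (sei)
    via weighted least squares with weights 1 / sei**2 and tests whether
    the slope on the standard error differs from zero. When `cluster`
    (study identifiers) is supplied, cluster-robust standard errors are
    requested so that multiple, dependent effects from the same study
    are not treated as independent.
    """
    yi = np.asarray(yi, dtype=float)
    sei = np.asarray(sei, dtype=float)
    X = sm.add_constant(sei)                     # column of 1s + standard errors
    model = sm.WLS(yi, X, weights=1.0 / sei**2)  # inverse-variance weighting
    if cluster is None:
        fit = model.fit()
    else:
        fit = model.fit(cov_type="cluster",
                        cov_kwds={"groups": np.asarray(cluster)})
    return {
        "slope": fit.params[1],     # small-study (asymmetry) coefficient
        "se": fit.bse[1],
        "p_value": fit.pvalues[1],  # two-sided test of slope = 0
    }


# Hypothetical example: three studies, each contributing two dependent effects.
yi = [0.42, 0.35, 0.18, 0.22, 0.55, 0.48]
sei = [0.10, 0.12, 0.15, 0.14, 0.09, 0.11]
study = [1, 1, 2, 2, 3, 3]

print(egger_test(yi, sei))                  # naive test, ignoring dependence
print(egger_test(yi, sei, cluster=study))   # cluster-robust variant
```

The slope-on-standard-error formulation used here is equivalent to Egger's original regression of the standardized effect on precision; the cluster argument is the simplest way to acknowledge that effects nested within the same study share sampling error.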
American Psychological Association