Browsing by Author "Baskin E"
Now showing 1 - 2 of 2
- Item: On the trajectory of discrimination: A meta-analysis and forecasting survey capturing 44 years of field experiments on gender and hiring decisions (Elsevier Inc, 2023-11)
  Authors: Schaerer M; Plessis CD; Nguyen MHB; van Aert RCM; Tiokhin L; Lakens D; Clemente EG; Pfeiffer T; Dreber A; Johannesson M; Clark CJ; Uhlmann EL; Abraham AT; Adamus M; Akinci C; Alberti F; Alsharawy AM; Alzahawi S; Anseel F; Arndt F; Balkan B; Baskin E; Bearden CE; Benotsch EG; Bernritter S; Black SR; Bleidorn W; Boysen AP; Brienza JP; Brown M; Brown SEV; Brown JW; Buckley J; Buttliere B; Byrd N; Cígler H; Capitan T; Cherubini P; Chong SY; Ciftci EE; Conrad CD; Conway P; Costa E; Cox JA; Cox DJ; Cruz F; Dawson IGJ; Demiral EE; Derrick JL; Doshi S; Dunleavy DJ; Durham JD; Elbaek CT; Ellis DA; Ert E; Espinoza MP; Füllbrunn SC; Fath S; Furrer R; Fiala L; Fillon AA; Forsgren M; Fytraki AT; Galarza FB; Gandhi L; Garrison SM; Geraldes D; Ghasemi O; Gjoneska B; Gothilander J; Grühn D; Grieder M; Hafenbrädl S; Halkias G; Hancock R; Hantula DA; Harton HC; Hoffmann CP; Holzmeister F; Hoŕak F; Hosch A-K; Imada H; Ioannidis K; Jaeger B; Janas M; Janik B; Pratap KC R; Keel PK; Keeley JW; Keller L; Kenrick DT; Kiely KM; Knutsson M; Kovacheva A; Kovera MB; Krivoshchekov V; Krumrei-Mancuso EJ; Kulibert D; Lacko D; Lemay EP
  Abstract: A preregistered meta-analysis, including 244 effect sizes from 85 field audits and 361,645 individual job applications, tested for gender bias in hiring practices in female-stereotypical, gender-balanced, and male-stereotypical jobs from 1976 to 2020. A "red team" of independent experts was recruited to increase the rigor and robustness of our meta-analytic approach. A forecasting survey further examined whether laypeople (n = 499 nationally representative adults) and scientists (n = 312) could predict the results. Forecasters correctly anticipated reductions in discrimination against female candidates over time. However, both scientists and laypeople overestimated the continuation of bias against female candidates. Instead, selection bias in favor of male over female candidates was eliminated and, if anything, slightly reversed in sign starting in 2009 for mixed-gender and male-stereotypical jobs in our sample. Forecasters further failed to anticipate that discrimination against male candidates for stereotypically female jobs would remain stable across the decades. (A minimal code sketch of the pooling step behind this kind of meta-analysis appears after this listing.)
- Item: Predicting the replicability of social and behavioural science claims in COVID-19 preprints (Springer Nature Limited, 2024-12-20)
  Authors: Marcoci A; Wilkinson DP; Vercammen A; Wintle BC; Abatayo AL; Baskin E; Berkman H; Buchanan EM; Capitán S; Capitán T; Chan G; Cheng KJG; Coupé T; Dryhurst S; Duan J; Edlund JE; Errington TM; Fedor A; Fidler F; Field JG; Fox N; Fraser H; Freeman ALJ; Hanea A; Holzmeister F; Hong S; Huggins R; Huntington-Klein N; Johannesson M; Jones AM; Kapoor H; Kerr J; Kline Struhl M; Kołczyńska M; Liu Y; Loomas Z; Luis B; Méndez E; Miske O; Mody F; Nast C; Nosek BA; Simon Parsons E; Pfeiffer T; Reed WR; Roozenbeek J; Schlyfestone AR; Schneider CR; Soh A; Song Z; Tagat A; Tutor M; Tyner AH; Urbanska K; van der Linden
  Abstract: Replications are important for assessing the reliability of published findings. However, they are costly, and it is infeasible to replicate everything. Accurate, fast, lower-cost alternatives such as eliciting predictions could accelerate assessment for rapid policy implementation in a crisis and help guide a more efficient allocation of scarce replication resources. We elicited judgements from participants on 100 claims from preprints about an emerging area of research (the COVID-19 pandemic) using an interactive structured elicitation protocol, and we conducted 29 new high-powered replications. After interacting with their peers, participant groups with lower task expertise ('beginners') updated their estimates and confidence in their judgements significantly more than groups with greater task expertise ('experienced'). For experienced individuals, the average accuracy was 0.57 (95% CI: [0.53, 0.61]) after interaction, and they correctly classified 61% of claims; beginners' average accuracy was 0.58 (95% CI: [0.54, 0.62]), correctly classifying 69% of claims. The difference in accuracy between groups was not statistically significant, and their judgements on the full set of claims were correlated (r(98) = 0.48, P < 0.001). These results suggest that both beginners and more-experienced participants using a structured process have some ability to make better-than-chance predictions about the reliability of 'fast science' under conditions of high uncertainty. However, given the importance of such assessments for making evidence-based critical decisions in a crisis, more research is required to understand who the right experts in forecasting replicability are and how their judgements ought to be elicited. (A quick numerical check of the reported correlation appears after this listing.)
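The first item pools 244 effect sizes across field audits. As a minimal sketch of the standard random-effects pooling step that such meta-analyses typically rely on (the abstract does not specify the paper's exact model, and the effect sizes below are invented toy numbers, not data from the study), a DerSimonian-Laird estimator in Python looks like this:

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling.
# Toy inputs only; not the data or model from the paper above.
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study-level effect sizes under a random-effects model."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w_fixed = 1.0 / variances                           # fixed-effect weights
    theta_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    q = np.sum(w_fixed * (effects - theta_fixed) ** 2)  # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_rand = 1.0 / (variances + tau2)                   # random-effects weights
    theta = np.sum(w_rand * effects) / np.sum(w_rand)
    se = np.sqrt(1.0 / np.sum(w_rand))
    return theta, se, tau2

# Three hypothetical audit-study effect sizes (e.g., log odds ratios).
theta, se, tau2 = dersimonian_laird([0.10, -0.05, 0.02], [0.01, 0.02, 0.015])
print(f"pooled effect = {theta:.3f} +/- {1.96 * se:.3f} (tau^2 = {tau2:.4f})")
```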
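For the second item, the published summary statistics alone are enough to sanity-check the reported correlation: with judgements on 100 claims and r = 0.48, the usual t-test for a Pearson correlation reproduces "P < 0.001". This sketch uses only the reported numbers, not the underlying data:

```python
# Recompute the significance of r(98) = 0.48 from the reported summary.
import math
from scipy import stats

n, r = 100, 0.48
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)  # t statistic, n - 2 = 98 df
p = 2 * stats.t.sf(abs(t), df=n - 2)              # two-sided p-value
print(f"t(98) = {t:.2f}, p = {p:.2e}")            # t ~ 5.42, so p << 0.001
```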