Journal Articles
Permanent URI for this collection: https://mro.massey.ac.nz/handle/10179/7915
5 results
Item: The fallacy of placing confidence in confidence intervals – A commentary (Open Science Framework (OSF), 2 May 2017). Perezgonzalez JD.
‘The fallacy of placing confidence in confidence intervals’ (Morey et al., 2016, Psychonomic Bulletin & Review, doi: 10.3758/s13423-015-0947-8) offered a much-needed technical and philosophical examination of the differences between typical (mis)interpretations of frequentist confidence intervals and the typical correct interpretation of Bayesian credible intervals. My contribution here partly strengthens the authors’ argument, partly closes some gaps they left open, and concludes with a note of caution: there may be distinctions without real practical differences in the ultimate use of estimation by intervals, namely when assuming a common ground of uninformative priors and treating intervals as ranges of values rather than as posterior distributions per se.

Item: Commentary: Psychological Science’s Aversion to the Null (Frontiers Media SA, 9 June 2020). Perezgonzalez JD; Frias-Navarro D; Pascual-Llobell J; Dettweiler U; Hanfstingl B; Schroter H.
Heene and Ferguson (2017) contributed important epistemological, ethical, and didactical ideas to the debate on null hypothesis significance testing, chief among them ideas about falsificationism, statistical power, dubious statistical practices, and publication bias. Important as those contributions are, the authors do not fully resolve four confusions, which we would like to clarify.

Item: Failings in COPE's guidelines to editors, and recommendations for improvement (Figshare, 23 November 2016). Perezgonzalez JD.
Letter highlighting failings in COPE's Guidelines to editors and proposing recommendations for improvement. The main recommendation is to create appropriate guidelines for dealing with fully disclosed (potential) conflicts of interest. COPE deemed the topic relevant and included a session on it as part of COPE's Forum (3 February 2017; http://publicationethics.org/forum-discussion-topic-comments-please-7).

Item: Statistical Sensitiveness for the Behavioral Sciences (Open Science Framework (OSF), 14 February 2017). Perezgonzalez JD.
Research often requires samples, yet obtaining large enough samples is not always possible. When it is, the researcher may use one of two methods for deciding on the required sample size: rules of thumb, quick yet uncertain, and power estimations, mathematically precise yet prone to overestimating or underestimating sample sizes when effect sizes are unknown. Misestimated sample sizes have negative repercussions in the form of increased costs, abandoned projects, or abandoned publication of non-significant results. Here I describe a procedure for estimating sample sizes adequate for the testing approach most common in the behavioural, social, and biomedical sciences: Fisher’s tests of significance. The procedure focuses on a desired minimum effect size for the research at hand and finds the minimum sample size required for capturing that effect size as a statistically significant result. As with power analyses, sensitiveness analyses can also be extended to finding the minimum effect for a given sample size a priori, as well as to calculating sensitiveness a posteriori. The article provides a full tutorial for carrying out a sensitiveness analysis, as well as empirical support via simulation; a sketch of the core idea follows below.
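The core of the procedure lends itself to a short illustration. The sketch below is mine rather than the article's: it assumes a two-sample t-test with Cohen's d as the effect-size metric and a two-tailed alpha of .05, and the function names are hypothetical.

```python
# A minimal sketch of a sensitiveness analysis, assuming a two-sample
# t-test, Cohen's d, and a two-tailed alpha of .05. Illustrative only;
# the function names are not taken from the article.
from scipy import stats

def min_n_per_group(d, alpha=0.05):
    """Smallest per-group n at which an observed effect of size d
    would come out statistically significant."""
    n = 2
    while True:
        df = 2 * n - 2
        t_obs = d * (n / 2) ** 0.5            # t-value implied by effect size d
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        if t_obs >= t_crit:
            return n
        n += 1

def min_detectable_d(n, alpha=0.05):
    """The inverse question: smallest effect size that a given
    per-group n could capture as significant (a priori sensitiveness)."""
    df = 2 * n - 2
    return stats.t.ppf(1 - alpha / 2, df) / (n / 2) ** 0.5

print(min_n_per_group(0.5))    # n per group needed to capture d = 0.5
print(min_detectable_d(30))    # minimum effect a study with n = 30 can capture
```

Unlike a power analysis, no target power enters the calculation: the procedure simply finds the boundary at which the desired effect crosses the significance criterion.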
Item: Retract p < 0.005 and propose using JASP, instead (F1000Research, 12 December 2017). Perezgonzalez JD; Frías-Navarro MD.
Seeking to address the lack of research reproducibility in science, including psychology and the life sciences, a pragmatic solution has recently been proposed: to use a stricter p < 0.005 standard for statistical significance when claiming evidence of new discoveries. Notwithstanding its potential impact, the proposal has motivated a large number of authors to dispute it from different philosophical and methodological angles. This article reflects on the original argument and the consequent counterarguments, and concludes with a simpler and better-suited alternative that the authors of the proposal knew about and, perhaps, should have made from their Jeffreysian perspective: to run a Bayes factor analysis in parallel (e.g., via JASP), so as to learn about both frequentist error statistics and Bayesian prior and posterior beliefs without having to mix inconsistent research philosophies.
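JASP is point-and-click software, but the parallel analysis the article recommends can also be sketched in code. The following illustration is mine, not the article's: it simulates two groups and reports the frequentist t-test alongside a default JZS Bayes factor, with pingouin standing in for JASP's Bayesian t-test.

```python
# A minimal sketch of running frequentist and Bayesian analyses in
# parallel, as the article recommends doing via JASP. The data are
# simulated, and pingouin's JZS Bayes factor stands in for JASP's
# default Bayesian t-test.
import numpy as np
from scipy import stats
import pingouin as pg

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, 40)      # control group
treatment = rng.normal(0.4, 1.0, 40)    # treatment group, true d = 0.4

t, p = stats.ttest_ind(control, treatment)                           # error statistics
bf10 = pg.bayesfactor_ttest(t, nx=len(control), ny=len(treatment))   # Bayesian evidence

print(f"t = {t:.2f}, p = {p:.3f}, BF10 = {bf10:.2f}")
```

The point of the pairing is the one the article makes: the p-value and the Bayes factor answer different questions, and reporting both avoids mixing inconsistent research philosophies within a single statistic.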

