
Re: [AUDITORY] stats (mis)use in psychology and hearing science

Well said. Also, why is the use of nonparametric statistics so rare? They do not require meeting the strict assumptions that tests such as ANOVA do.

Pardon the typos, it's Apple's fault.
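[Editorially speaking, the trade-off is easy to see in code. A minimal sketch, with toy data invented purely for illustration and SciPy assumed available, runs a one-way ANOVA and its rank-based counterpart, the Kruskal-Wallis test, on the same three groups:]

```python
# Sketch only: the group values below are invented for illustration.
from scipy import stats

group_a = [12.1, 13.4, 11.9, 14.2, 12.8]
group_b = [15.0, 16.3, 14.8, 15.9, 16.1]
group_c = [12.5, 13.0, 12.2, 13.8, 12.9]

# Parametric test: assumes within-group normality and equal variances.
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)

# Nonparametric counterpart: rank-based, so no normality assumption.
h_stat, p_kw = stats.kruskal(group_a, group_b, group_c)

print(f"ANOVA:          F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
```

[When the parametric assumptions hold, ANOVA is the more powerful test; when they do not, the rank-based test's p-value is the one that remains trustworthy.]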

On Jun 24, 2013, at 0:04, Brian Gygi <bgygi@xxxxxxxxx> wrote:

The problem is not just with higher education. As long as one needs a significant effect to get published, people are going to continue using the easiest and most powerful stats tests out there. Reviewers are complicit in this as well - I have rarely seen a paper turned down because of inappropriate statistical tests (even though many publications specifically ask about this). We as scientists could start by cleaning up our own shop a bit.

Brian Gygi, Ph.D.
-----Original Message-----
From: Iftikhar Nizami [mailto:nizamii2@xxxxxxx]
Sent: Sunday, June 23, 2013 11:21 AM
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: Re: stats use in psychology and hearing science

Dear List - My thanks to Holger Mitterer for pointing out the paper by Simmons et al. in Psych Sci, which promises to be an interesting read. It is just one of a long string of papers in recent years that point out just how little of value can arise through statistical testing of experimental results (see also the numerous papers of John Ioannidis at Stanford on this topic in medicine).
Unfortunately, this problem of designing experiments for the data analysis - and the wider problem of inappropriate experimental design and inappropriate data analysis - is only going to get worse, especially in departments of education, psychology, and hearing research. There, the older generation of researchers, who might have had at least an undergraduate freshman calculus course, has been replaced by a new generation of workers who have no math beyond the 10th grade of high school and who barely passed their mandatory undergraduate course in practical stats.

Too many people now seem to think of stats testing (ANOVA in particular) as an act of magic that tells them what's "significant". It is exceedingly rare, for example, to find any mention of whether the assumptions underlying the statistical tests are actually obeyed, as no one seems to realize that statistical tests are derived from mathematical models that involve assumptions.

There is a solution to this problem: stricter math requirements at the undergraduate and graduate levels, including introductory theoretical statistics, not just basic stats testing. If we're going to use stats, let's do it properly. - Lance Nizami PhD, Palo Alto, Cal.
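[Editorially speaking, the assumption-checking that the paragraph above finds so rare takes only a few lines. A minimal sketch, with toy threshold data invented purely for illustration and SciPy assumed available, tests the two standard one-way ANOVA assumptions before running the test itself:]

```python
# Sketch only: the group values below are invented for illustration.
from scipy import stats

groups = [
    [21.5, 23.1, 22.4, 24.0, 21.9, 23.3],
    [25.2, 26.0, 24.8, 27.1, 25.5, 26.3],
    [22.0, 22.8, 21.6, 23.5, 22.3, 23.0],
]

# Assumption 1: normality within each group (Shapiro-Wilk; note that
# small samples give this check little power).
for i, g in enumerate(groups):
    _, p = stats.shapiro(g)
    print(f"group {i}: Shapiro-Wilk p = {p:.3f}")

# Assumption 2: homogeneity of variance across groups (Levene's test).
_, p_levene = stats.levene(*groups)
print(f"Levene p = {p_levene:.3f}")

# Only if both assumptions look tenable does the ANOVA p-value mean
# what its mathematical derivation says it means.
_, p_anova = stats.f_oneway(*groups)
print(f"ANOVA p = {p_anova:.4f}")
```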

From: Holger Mitterer <holgermitterer@xxxxxxxxxxx>
To: AUDITORY@xxxxxxxxxxxxxxx
Sent: Saturday, June 22, 2013 7:40 AM
Subject: [AUDITORY] Reminder: Speech Sound Finding Experiment

Probably many readers caught this, but just to make sure:
The reminder for the speech sound finding experiment contained a somewhat questionable phrase:

> We are missing a few participants to reach statistical significance so
> please consider giving it a try:

Stopping data collection when an effect becomes significant is a very problematic research strategy;
see the paper by Simmons et al. in Psychological Science (http://pss.sagepub.com/content/22/11/1359.abstract).