# Blog

I blog about statistics and research design, with researchers in bilingualism, multilingualism, and applied linguistics in mind.

## Latest blog posts

### Confidence intervals for standardised mean differences

22 February 2017

Standardised effect sizes express patterns found in the data in
terms of the variability found in the data. For instance, a mean difference
in body height could be expressed in the metric in which the data were
measured (e.g., a difference of 4 centimetres) or relative to the
variation in the data (e.g., a difference of 0.9 standard deviations).
The latter is a standardised effect size known as Cohen’s *d*.
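To make that definition concrete, here is a minimal `R` sketch (with made-up height data, not data from the post) of how Cohen's *d* for two independent groups is computed: the mean difference divided by the pooled standard deviation.

```r
# Cohen's d for two independent samples: the mean difference
# divided by the pooled standard deviation.
cohens_d <- function(x, y) {
  nx <- length(x)
  ny <- length(y)
  pooled_sd <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  (mean(x) - mean(y)) / pooled_sd
}

# Made-up body heights (in cm) for two groups
heights_a <- c(178, 183, 171, 180, 175)
heights_b <- c(172, 176, 169, 174, 170)
cohens_d(heights_a, heights_b)
```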

As I’ve written before, I don’t particularly like standardised effect sizes.
Nonetheless, I wondered how confidence intervals around standardised
effect sizes (more specifically: standardised mean differences)
are constructed. Until recently, I hadn’t really thought about it
and sort of assumed you would compute them the same way as
confidence intervals around
raw effect sizes. But unlike raw (unstandardised) mean differences,
standardised mean differences are a combination of *two* estimates
subject to sampling error: the mean difference itself
and the sample standard deviation.
Moreover, the sample standard deviation is a biased estimate of
the population standard deviation (it tends to be too low),
which causes Cohen’s *d* to be an upwardly biased estimate of the
population standardised mean difference.
Surely both of these factors must affect how
the confidence intervals around standardised effect sizes are constructed?

It turns out that indeed they do. The confidence intervals I computed around a standardised effect size using a naive approach, one that assumed the standard deviation was neither subject to sampling error nor biased, differed from those produced by specialised R functions.

But these R functions all produced different results, too.

Obviously, there may well be more than one way to skin a cat, but this
made me wonder whether the different procedures for computing confidence
intervals all cover the true population parameter with the nominal
probability (e.g., in 95% of cases for a 95% confidence interval).
I ran a simulation to find out, which I’ll report in the remainder of this post.
**If you spot any mistakes, please let me know.**
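The logic of such a coverage simulation can be sketched in a few lines of `R`. This is not one of the specialised methods compared in the post, just the naive interval it describes: generate many samples from a population with a known standardised mean difference, build a 95% interval around each sample's *d* as if the standard deviation were known and unbiased, and count how often the interval captures the true value.

```r
set.seed(123)

delta <- 0.5     # true population standardised mean difference
n <- 20          # observations per group
n_sims <- 2000   # number of simulated experiments

covered <- replicate(n_sims, {
  x <- rnorm(n, mean = delta, sd = 1)
  y <- rnorm(n, mean = 0, sd = 1)
  sp <- sqrt(((n - 1) * var(x) + (n - 1) * var(y)) / (2 * n - 2))
  d <- (mean(x) - mean(y)) / sp
  # Naive 95% CI: treat the pooled SD as if it were known and unbiased
  se_naive <- sqrt(2 / n)
  ci <- d + c(-1.96, 1.96) * se_naive
  ci[1] <= delta && delta <= ci[2]
})

mean(covered)  # empirical coverage of the naive interval
```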

### Which predictor is most important? Predictive utility vs. construct importance

15 February 2017

Every so often, I’m asked for my two cents on a correlational study in which the researcher wants to find out which of a set of predictor variables is the most important one. For instance, they may have the results of an intelligence test, a working memory task, and a questionnaire probing their participants’ motivation for learning French, and they want to find out which of these three is the most important factor in acquiring a nativelike French accent, as measured using a pronunciation task. As I will explain below, research questions such as these can be interpreted in two ways, and whether they can be answered sensibly depends on the interpretation intended.

### Automatise repetitive tasks

31 January 2017

Research often involves many repetitive tasks. For an ongoing project, for instance, we needed to replace all stylised apostrophes (’) with straight apostrophes (') in some 3,000 text files when preparing the texts for the next step. As another example, you may need to split up a bunch of files into different directories depending on, say, the character in the file name just before the extension. When done by hand, such tasks are as mind-numbing and time-consuming as they sound – perhaps you would do them on a Friday afternoon while listening to music or outsource them to a student assistant. My advice, though, is this: Try to automatise repetitive tasks.

Doing repetitive tasks is what computers are for, so rather than spending several hours learning nothing, I suggest you spend that time writing a script or putting together a command line call that does the task for you. If you have little experience doing this, it will take time at first. In fact, I reckon I often spend roughly the same amount of time trying to automatise menial tasks as it would have cost me to do them by hand. But in the not-so-long run, automatisation is a time-saver: Once you have a working script, you can tweak and reuse it. Additionally, while you’re figuring out how to automatise a menial chore, you’re actually learning something useful. The chores become more of a challenge and less mind-numbing. I’m going to present an example or two of what I mean, and I’ll conclude by giving some general pointers.
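Both examples above can be handled from the command line. The sketch below (bash with GNU `sed`; the file names are made up for the demo) creates a couple of sample files, fixes the apostrophes, and then sorts the files into directories by the character just before the extension.

```shell
# Demo setup: a scratch directory with two sample text files
mkdir -p texts
printf "It’s done\n" > texts/item3a.txt
printf "John’s book\n" > texts/item7b.txt

# Replace stylised apostrophes with straight ones in every .txt file
# (GNU sed; -i edits in place, so keep a backup of the originals)
sed -i "s/’/'/g" texts/*.txt

# Move each file into a directory named after the character just
# before the extension (e.g. item3a.txt -> texts/a/item3a.txt)
for f in texts/*.txt; do
  base=${f##*/}        # file name without the directory
  stem=${base%.txt}    # file name without the extension
  last=${stem: -1}     # character just before the extension
  mkdir -p "texts/$last"
  mv "$f" "texts/$last/"
done
```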

### Some illustrations of bootstrapping

20 December 2016

This post illustrates a statistical technique
that comes in handy when you want to calculate the sampling variation of some custom statistic
and that becomes particularly useful once you start to dabble in mixed-effects models.
This technique is called **bootstrapping**
and I will first illustrate its use in constructing confidence intervals around
a custom summary statistic.
Then I’ll illustrate three bootstrapping approaches when constructing
confidence intervals around a regression coefficient,
and finally, I will show how bootstrapping can be used to compute *p*-values.

The goal of this post is *not* to argue that bootstrapping is superior to the traditional alternatives—in the examples
discussed, they are pretty much on par—but merely to illustrate how it works.
The main advantage of bootstrapping, as I understand it,
is that it can be applied in situations where the traditional alternatives
are not available,
where you don’t understand how to use them
or where their assumptions are questionable,
but I think it’s instructive to see how its results compare to those of traditional approaches where both can readily be applied.
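The first of those illustrations, a bootstrap confidence interval around a custom summary statistic, can be sketched in a few lines of `R` (this is a minimal nonparametric bootstrap with a percentile interval, not necessarily the exact approach the post takes; the 20% trimmed mean stands in for the custom statistic):

```r
set.seed(42)
x <- rexp(50, rate = 1)   # some skewed sample data

# Nonparametric bootstrap: resample the data with replacement and
# recompute the statistic of interest each time.
boot_stat <- replicate(5000, mean(sample(x, replace = TRUE), trim = 0.2))

# Percentile 95% confidence interval for the 20% trimmed mean
quantile(boot_stat, c(0.025, 0.975))
```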

### What data patterns can lie behind a correlation coefficient?

21 November 2016

In this post, I want to, first, help you to improve your intuition of what data patterns correlation coefficients can represent and, second, hammer home the point that to sensibly interpret a correlation coefficient, you need the corresponding scatterplot.
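`R`'s built-in `anscombe` data make the point in miniature: four datasets whose correlation coefficients are virtually identical but whose scatterplots look nothing alike.

```r
# Anscombe's quartet (built into R): four x/y pairs with
# near-identical correlation coefficients...
rs <- sapply(1:4, function(i) {
  cor(anscombe[[paste0("x", i)]], anscombe[[paste0("y", i)]])
})
round(rs, 3)  # all four are about 0.816

# ...but with wildly different scatterplots:
par(mfrow = c(2, 2))
for (i in 1:4) {
  plot(anscombe[[paste0("x", i)]], anscombe[[paste0("y", i)]],
       xlab = paste0("x", i), ylab = paste0("y", i))
}
```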

### Common-language effect sizes

16 November 2016

The goal of this blog post is to share with you a simple `R` function
that may help you to better communicate the extent to which two groups differ and overlap
by computing *common-language effect sizes*.
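The idea behind a common-language effect size is the probability that a randomly sampled member of one group scores higher than a randomly sampled member of the other. A bare-bones version (the post's own function may differ in its details):

```r
# Common-language effect size: the probability that a random
# observation from x exceeds a random observation from y,
# with ties counting as half.
cles <- function(x, y) {
  comparisons <- outer(x, y, ">") + 0.5 * outer(x, y, "==")
  mean(comparisons)
}

# Made-up scores for two groups
a <- c(4, 5, 6, 7)
b <- c(3, 4, 5, 6)
cles(a, b)
```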

### The Center for Open Science's Preregistration Challenge: Why it's relevant and some recommended background reading

31 October 2016

*This blog post is an edited version of a mail I sent round to my colleagues at the various language and linguistics departments in Fribourg. Nothing in this post is new per se, but I haven’t seen much discussion of these issues among linguists, applied linguists and bilingualism researchers.*

I’d like to point you to an initiative of the Center for Open Science: the $1,000,000 Preregistration Challenge. The basic idea is to foster research transparency by offering a monetary reward to researchers who’ve outlined their study design and planned analyses in advance and report the results of these analyses in the report.

I’m not affiliated with this organisation, but I do think both it and its initiative are important developments. For those interested in knowing why I think so, I’ve written a brief text below that includes links to more detailed articles or examples; if you prefer reference lists, there’s one of those down below. Most of the articles were written by and for psychologists, but I reckon pretty much all of it applies equally to research in linguistics and language learning.

### Tutorial: Drawing a dot plot

30 August 2016

In the fourth tutorial on drawing useful plots with `ggplot2`, we’re taking a closer look at **dot plots** – a useful and more flexible alternative to bar and pie charts.

### R tip: Ordering factor levels more easily

18 August 2016

By default, `R` sorts the levels of a factor alphabetically.
When drawing graphs, this results in ‘Alabama First’ graphs,
and it’s usually better to sort the elements of a graph by more meaningful principles than alphabetical order.
This post illustrates three convenience functions you can use to sort factor levels in `R` according to another covariate, their frequency of occurrence, or manually.
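For a taste of the three orderings, here is a base-`R` sketch (the post's own convenience functions may wrap these up more neatly):

```r
# Default: alphabetical level order
f <- factor(c("small", "large", "medium", "small", "large"))
levels(f)  # "large" "medium" "small"

# 1. Manual ordering
f_manual <- factor(f, levels = c("small", "medium", "large"))

# 2. Ordering by frequency of occurrence (most frequent first)
f_freq <- factor(f, levels = names(sort(table(f), decreasing = TRUE)))

# 3. Ordering by another covariate: reorder() sorts the levels
#    by the mean of x within each level
x <- c(1, 9, 5, 2, 8)
f_cov <- reorder(f, x)
levels(f_cov)  # "small" "medium" "large"
```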

### Classifying second-language learners as native- or non-nativelike: Don't neglect classification error rates

5 July 2016

I’d promised to write another installment on drawing graphs,
but instead I’m going to write about something that
I had to exclude, for reasons of space,
from a recently published book chapter
on age effects in second language (L2) acquisition:
**classifying** observations (e.g., L2 learners) and **estimating error rates**.

I’m going to illustrate the usefulness of classification algorithms for addressing some problems in L2 acquisition research, but my broader aim is to show that there’s more to statistics than running significance tests and to encourage you to explore—even if superficially—what else is out there.
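To hint at why error rates matter, here is a toy `R` sketch (entirely made-up data and a deliberately simple cutoff classifier, not the chapter's method): the error rate computed on the data used to build the classifier is optimistic, whereas leave-one-out cross-validation gives a more honest estimate for new observations.

```r
set.seed(1)

# Fake data: a 'nativelikeness' score for 50 native speakers (class 1)
# and 50 L2 learners (class 0); the two groups overlap.
score <- c(rnorm(50, mean = 1), rnorm(50, mean = 0))
native <- rep(c(1, 0), each = 50)

# Simple classifier: predict 'native' when the score exceeds a cutoff
# placed midway between the two group means.
classify <- function(s, cutoff) as.numeric(s > cutoff)

# Training error: cutoff chosen and evaluated on the same data
cutoff_full <- mean(tapply(score, native, mean))
train_error <- mean(classify(score, cutoff_full) != native)

# Leave-one-out cross-validation: refit the cutoff without the
# held-out observation, then try to classify that observation.
loo_error <- mean(sapply(seq_along(score), function(i) {
  cutoff_i <- mean(tapply(score[-i], native[-i], mean))
  classify(score[i], cutoff_i) != native[i]
}))

c(train = train_error, loo = loo_error)
```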