Baby steps in Bayes: Recoding predictors and homing in on specific comparisons

Bayesian statistics
brms
R
graphs
mixed-effects models
contrast coding
Author

Jan Vanhove

Published

December 20, 2018

Interpreting models that take into account a host of possible interactions between predictor variables can be a pain, especially when some of the predictors contain more than two levels. In this post, I show how I went about fitting and then making sense of a multilevel model containing a three-way interaction between its categorical fixed-effect predictors. To this end, I used the brms package, which makes it relatively easy to fit Bayesian models using a notation that hardly differs from the one used in the popular lme4 package. I won’t discuss the Bayesian bit much here (I don’t think it’s too important), and I will instead cover the following points:

  1. How to fit a multilevel model with brms using R’s default way of handling categorical predictors (treatment coding).
  2. How to interpret this model’s fixed parameter estimates.
  3. How to visualise the modelled effects.
  4. How to recode predictors to obtain more useful parameter estimates.
  5. How to extract information from the model to home in on specific comparisons.

The data

For a longitudinal project, 328 children wrote narrative and argumentative texts in Portuguese at three points in time. About a third of the children hailed from Portugal, about a third were children of Portuguese heritage living in the French-speaking part of Switzerland, and about a third were children of Portuguese heritage living in the German-speaking part of Switzerland. Not all children wrote both kinds of texts at all three points in time, and 1,040 texts were retained for the analysis. For each text, we computed the Guiraud index, which is a function of the number of words (tokens) and the number of different words (types) in the texts. Higher values are assumed to reflect greater diversity in vocabulary use.

If you want to know more about this project, check out Bonvin et al. (2018), Lambelet et al. (2017a,b) and Vanhove et al. (2019); you’ll find the references at the bottom of this page.

Update (2023-08-06): I ran all of the R code again with newer software versions when converting the format of this blog.

Read in the data:

# Load tidyverse suite
library(tidyverse)

# Read in data from my webspace
d <- read_csv("http://homeweb.unifr.ch/VanhoveJ/Pub/Data/portuguese_guiraud.csv")

# Need to code factors explicitly
d$Group    <- factor(d$Group)
d$TextType <- factor(d$TextType)
d$Time     <- factor(d$Time)

str(d)
spc_tbl_ [1,040 × 6] (S3: spec_tbl_df/tbl_df/tbl/data.frame)
 $ Group   : Factor w/ 3 levels "monolingual Portuguese",..: 1 1 1 1 1 1 1 1 1 1 ...
 $ Class   : chr [1:1040] "monolingual Portuguese_AI" "monolingual Portuguese_AI" "monolingual Portuguese_AI" "monolingual Portuguese_AI" ...
 $ Child   : chr [1:1040] "monolingual Portuguese_AI_1" "monolingual Portuguese_AI_1" "monolingual Portuguese_AI_1" "monolingual Portuguese_AI_10" ...
 $ Time    : Factor w/ 3 levels "T1","T2","T3": 1 1 2 1 1 3 3 1 3 1 ...
 $ TextType: Factor w/ 2 levels "argumentative",..: 1 2 2 1 2 1 2 2 2 1 ...
 $ Guiraud : num [1:1040] 4.73 5.83 3.9 4.22 4.57 ...
 - attr(*, "spec")=
  .. cols(
  ..   Group = col_character(),
  ..   Class = col_character(),
  ..   Child = col_character(),
  ..   Time = col_character(),
  ..   TextType = col_character(),
  ..   Guiraud = col_double()
  .. )
 - attr(*, "problems")=<externalptr> 
summary(d)
                    Group        Class              Child           Time    
 monolingual Portuguese:360   Length:1040        Length:1040        T1:320  
 Portuguese-French     :360   Class :character   Class :character   T2:340  
 Portuguese-German     :320   Mode  :character   Mode  :character   T3:380  
                                                                            
                                                                            
                                                                            
          TextType      Guiraud    
 argumentative:560   Min.   :2.32  
 narrative    :480   1st Qu.:3.93  
                     Median :4.64  
                     Mean   :4.75  
                     3rd Qu.:5.48  
                     Max.   :8.43  

Let’s also plot the data. Incidentally, and contrary to popular belief, I don’t write ggplot code such as this from scratch. What you see is the result of drawing and redrawing (see comments).

# Plot Guiraud scores
ggplot(d,
       aes(x = Time,
           y = Guiraud,
           # reorder: sort the Groups by their median Guiraud value
           fill = reorder(Group, Guiraud, median))) +
  # I prefer empty (shape = 1) to filled circles (shape = 16).
  geom_boxplot(outlier.shape = 1) +
  facet_grid(. ~ TextType) +
  # The legend name ("Group") seems superfluous, so suppress it;
  # the default colours contain red and green, which can be hard to
  #  distinguish for some people.
  scale_fill_brewer(name = element_blank(), type = "qual") +
  # I prefer the black and white look to the default grey one.
  theme_bw() +
  # Put the legend at the bottom rather than on the right
  theme(legend.position = "bottom")

Figure 1. The texts’ Guiraud values by time of data collection, text type, and language background.

A multilevel model with treatment coding

Our data are nested: each child wrote up to six texts, and the data were collected in classes, with each child belonging to one class. It's advisable to take such nesting into account, since you may otherwise end up overestimating your degree of certainty about the results. I mostly use lme4's lmer() and glmer() functions to handle such data, but as will become clearer in a minute, brms's brm() function offers some distinct advantages. So let's load that package:

library(brms)

Fitting the model

We’ll fit a model with a three-way fixed-effect interaction between Time, TextType and Group as well as with by-Child and by-Class random intercepts. In order to take into account the possibility that children vary in the development of their lexical diversity, we add a random slope of Time by Child, and in order to take into account the possibility that their lexical diversity varies by text type, we do the same for TextType. Similarly, we add by-Class random slopes for Time and TextType.

m_default <- brm(Guiraud ~ Time*TextType*Group +
                   (1 + TextType + Time|Class) +
                   (1 + TextType + Time|Child),
                 cores = 4, iter = 4000,
                 silent = 2,
                 control = list(adapt_delta = 0.95),
                 data = d)

Interpreting the parameter estimates

summary(m_default)
 Family: gaussian 
  Links: mu = identity 
Formula: Guiraud ~ Time * TextType * Group + (1 + TextType + Time | Class) + (1 + TextType + Time | Child) 
   Data: d (Number of observations: 1040) 
  Draws: 4 chains, each with iter = 4000; warmup = 2000; thin = 1;
         total post-warmup draws = 8000

Multilevel Hyperparameters:
~Child (Number of levels: 328) 
                                 Estimate Est.Error l-95% CI u-95% CI Rhat
sd(Intercept)                        0.42      0.05     0.33     0.52 1.00
sd(TextTypenarrative)                0.26      0.09     0.05     0.43 1.01
sd(TimeT2)                           0.09      0.07     0.00     0.24 1.00
sd(TimeT3)                           0.39      0.09     0.21     0.55 1.01
cor(Intercept,TextTypenarrative)     0.24      0.28    -0.27     0.80 1.00
cor(Intercept,TimeT2)               -0.09      0.41    -0.81     0.74 1.00
cor(TextTypenarrative,TimeT2)        0.05      0.43    -0.78     0.81 1.00
cor(Intercept,TimeT3)                0.15      0.23    -0.25     0.67 1.01
cor(TextTypenarrative,TimeT3)       -0.04      0.30    -0.60     0.61 1.00
cor(TimeT2,TimeT3)                   0.09      0.44    -0.75     0.85 1.02
                                 Bulk_ESS Tail_ESS
sd(Intercept)                        2812     4473
sd(TextTypenarrative)                 488      756
sd(TimeT2)                           1147     1896
sd(TimeT3)                            566     1120
cor(Intercept,TextTypenarrative)      736     2131
cor(Intercept,TimeT2)                6991     4697
cor(TextTypenarrative,TimeT2)        3272     5945
cor(Intercept,TimeT3)                 716     1250
cor(TextTypenarrative,TimeT3)         530     1180
cor(TimeT2,TimeT3)                    289      962

~Class (Number of levels: 25) 
                                 Estimate Est.Error l-95% CI u-95% CI Rhat
sd(Intercept)                        0.16      0.08     0.02     0.33 1.00
sd(TextTypenarrative)                0.25      0.08     0.10     0.42 1.00
sd(TimeT2)                           0.11      0.07     0.01     0.28 1.00
sd(TimeT3)                           0.10      0.07     0.00     0.27 1.00
cor(Intercept,TextTypenarrative)    -0.15      0.37    -0.77     0.62 1.00
cor(Intercept,TimeT2)               -0.09      0.43    -0.82     0.75 1.00
cor(TextTypenarrative,TimeT2)        0.08      0.41    -0.71     0.80 1.00
cor(Intercept,TimeT3)                0.13      0.43    -0.74     0.85 1.00
cor(TextTypenarrative,TimeT3)       -0.15      0.41    -0.84     0.68 1.00
cor(TimeT2,TimeT3)                   0.10      0.44    -0.75     0.84 1.00
                                 Bulk_ESS Tail_ESS
sd(Intercept)                        1287     1728
sd(TextTypenarrative)                2038     1642
sd(TimeT2)                           2457     3574
sd(TimeT3)                           2513     3009
cor(Intercept,TextTypenarrative)     2373     3458
cor(Intercept,TimeT2)                5754     5433
cor(TextTypenarrative,TimeT2)        6168     6098
cor(Intercept,TimeT3)                6103     5865
cor(TextTypenarrative,TimeT3)        6407     5928
cor(TimeT2,TimeT3)                   5602     6387

Regression Coefficients:
                                                Estimate Est.Error l-95% CI
Intercept                                           5.25      0.13     4.99
TimeT2                                              0.57      0.13     0.32
TimeT3                                              0.92      0.14     0.64
TextTypenarrative                                  -0.26      0.18    -0.61
GroupPortugueseMFrench                             -1.06      0.17    -1.40
GroupPortugueseMGerman                             -1.30      0.17    -1.63
TimeT2:TextTypenarrative                           -0.24      0.16    -0.57
TimeT3:TextTypenarrative                           -0.06      0.17    -0.39
TimeT2:GroupPortugueseMFrench                      -0.55      0.18    -0.91
TimeT3:GroupPortugueseMFrench                      -0.38      0.19    -0.74
TimeT2:GroupPortugueseMGerman                      -0.64      0.18    -1.00
TimeT3:GroupPortugueseMGerman                      -0.23      0.20    -0.62
TextTypenarrative:GroupPortugueseMFrench            0.14      0.24    -0.33
TextTypenarrative:GroupPortugueseMGerman            0.18      0.24    -0.30
TimeT2:TextTypenarrative:GroupPortugueseMFrench     0.37      0.24    -0.11
TimeT3:TextTypenarrative:GroupPortugueseMFrench     0.25      0.24    -0.22
TimeT2:TextTypenarrative:GroupPortugueseMGerman     0.55      0.25     0.06
TimeT3:TextTypenarrative:GroupPortugueseMGerman     0.27      0.25    -0.23
                                                u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept                                           5.51 1.00     3025     4012
TimeT2                                              0.84 1.00     3131     4204
TimeT3                                              1.19 1.00     3148     4685
TextTypenarrative                                   0.09 1.00     2486     4100
GroupPortugueseMFrench                             -0.70 1.00     3201     4467
GroupPortugueseMGerman                             -0.96 1.00     3052     4142
TimeT2:TextTypenarrative                            0.08 1.00     3290     5287
TimeT3:TextTypenarrative                            0.27 1.00     3221     4880
TimeT2:GroupPortugueseMFrench                      -0.20 1.00     3517     4684
TimeT3:GroupPortugueseMFrench                      -0.01 1.00     3485     5470
TimeT2:GroupPortugueseMGerman                      -0.29 1.00     3477     4481
TimeT3:GroupPortugueseMGerman                       0.15 1.00     3416     5390
TextTypenarrative:GroupPortugueseMFrench            0.62 1.00     3078     4651
TextTypenarrative:GroupPortugueseMGerman            0.65 1.00     2626     3876
TimeT2:TextTypenarrative:GroupPortugueseMFrench     0.84 1.00     3863     5728
TimeT3:TextTypenarrative:GroupPortugueseMFrench     0.72 1.00     3538     5159
TimeT2:TextTypenarrative:GroupPortugueseMGerman     1.05 1.00     3820     5109
TimeT3:TextTypenarrative:GroupPortugueseMGerman     0.76 1.00     3447     5446

Further Distributional Parameters:
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     0.60      0.02     0.56     0.65 1.01      783     2117

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).

The output looks pretty similar to what we’d obtain when using lmer(), but let’s review what these estimates actually refer to. By default, R uses treatment coding. This entails that the Intercept refers to a specific combination of factors: the combination of all reference levels. Again by default, the reference levels are chosen alphabetically:

  • Time consists of three levels (T1, T2, T3); for alphabetical reasons, T1 is chosen as the default reference level.
  • Group also consists of three levels (monolingual Portuguese, Portuguese-French, Portuguese-German); monolingual Portuguese is chosen as the default level.
  • TextType consists of two levels (argumentative, narrative); argumentative is the default reference level.

The Intercept, then, shows the modelled mean Guiraud value of argumentative texts written by monolingual Portuguese children at T1: 5.25.

If you’re unsure which factor level was used as the reference level, you can use the contrasts() function. The reference level is the one whose row contains only zeroes.

contrasts(d$Group)
                       Portuguese-French Portuguese-German
monolingual Portuguese                 0                 0
Portuguese-French                      1                 0
Portuguese-German                      0                 1

Crucially, all other estimated effects are computed with respect to this intercept. That is, TimeT2 (0.57) shows the difference between T1 and T2 for monolingual Portuguese children writing argumentative texts. Similarly, TimeT3 (0.92) shows the difference between T1 and T3 for monolingual Portuguese children writing argumentative texts, and TextTypenarrative (-0.26) shows the difference between the mean Guiraud values of argumentative and narrative texts written by monolingual Portuguese children at T1. The texts written by the Portuguese-German and Portuguese-French bilinguals don’t enter into these estimates.
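To make the treatment-coding logic concrete, here’s a minimal toy example with made-up numbers (not the project data): in a treatment-coded model, the intercept is the reference cell’s mean, and every other coefficient is a difference from that reference cell.

```r
# Toy data, NOT the project data: two time points, three texts each
toy <- data.frame(
  Time    = factor(rep(c("T1", "T2"), each = 3)),
  Outcome = c(4, 5, 6, 7, 8, 9)
)
# Default treatment coding: T1 is the reference level
coef(lm(Outcome ~ Time, data = toy))
# (Intercept) is the T1 mean (5); TimeT2 is the T2 minus T1 difference (3)
```

The same reading applies to the brms output above, just with more cells: each coefficient moves you away from the all-reference-levels cell.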

Now, it’s possible to piece together the mean values associated with each combination of predictor values, but questions such as the following remain difficult to answer with just these estimates at hand:

  • What’s the overall difference between T2 and T3 and its uncertainty?
  • What’s the overall difference between the Guiraud values of texts written by Portuguese-French and Portuguese-German children and its uncertainty?

We’ll tackle these questions in a minute; for now, the point is merely that the estimated parameters above all refer to highly specific comparisons that may not be the most relevant.

Plotting the fitted values and the uncertainty about them

When working with brms, it’s relatively easy to obtain the modelled average outcome value for each combination of the predictor variables as well as a measure of the uncertainty associated with them.

First construct a small data frame containing the unique combinations of predictor variables in our dataset:

d_pred <- d |> 
  select(Group, Time, TextType) |> 
  distinct() |> 
  arrange(Group, Time, TextType)
d_pred
# A tibble: 18 × 3
   Group                  Time  TextType     
   <fct>                  <fct> <fct>        
 1 monolingual Portuguese T1    argumentative
 2 monolingual Portuguese T1    narrative    
 3 monolingual Portuguese T2    argumentative
 4 monolingual Portuguese T2    narrative    
 5 monolingual Portuguese T3    argumentative
 6 monolingual Portuguese T3    narrative    
 7 Portuguese-French      T1    argumentative
 8 Portuguese-French      T1    narrative    
 9 Portuguese-French      T2    argumentative
10 Portuguese-French      T2    narrative    
11 Portuguese-French      T3    argumentative
12 Portuguese-French      T3    narrative    
13 Portuguese-German      T1    argumentative
14 Portuguese-German      T1    narrative    
15 Portuguese-German      T2    argumentative
16 Portuguese-German      T2    narrative    
17 Portuguese-German      T3    argumentative
18 Portuguese-German      T3    narrative    

If you feed the model (here: m_default) and the data frame we’ve just created (d_pred) to the fitted() function, it outputs the modelled mean estimate for each combination of predictor values (Estimate), the estimated error of this mean estimate (Est.Error), and a 95% uncertainty interval about the estimate (Q2.5 and Q97.5). One more thing: the re_formula = NA argument specifies that we do not want the variability associated with the by-Class and by-Child random effects to affect the estimates and their uncertainty. This is what I typically want.

cbind(
  d_pred, 
  fitted(m_default, 
         newdata = d_pred, 
         re_formula = NA)
  )
                    Group Time      TextType Estimate Est.Error Q2.5 Q97.5
1  monolingual Portuguese   T1 argumentative     5.25     0.131 4.99  5.51
2  monolingual Portuguese   T1     narrative     4.99     0.181 4.63  5.35
3  monolingual Portuguese   T2 argumentative     5.82     0.140 5.55  6.11
4  monolingual Portuguese   T2     narrative     5.32     0.191 4.93  5.69
5  monolingual Portuguese   T3 argumentative     6.17     0.159 5.84  6.48
6  monolingual Portuguese   T3     narrative     5.84     0.193 5.45  6.24
7       Portuguese-French   T1 argumentative     4.19     0.117 3.97  4.42
8       Portuguese-French   T1     narrative     4.07     0.162 3.75  4.40
9       Portuguese-French   T2 argumentative     4.21     0.124 3.97  4.46
10      Portuguese-French   T2     narrative     4.22     0.158 3.91  4.52
11      Portuguese-French   T3 argumentative     4.73     0.131 4.48  4.99
12      Portuguese-French   T3     narrative     4.80     0.157 4.49  5.11
13      Portuguese-German   T1 argumentative     3.95     0.109 3.74  4.17
14      Portuguese-German   T1     narrative     3.87     0.151 3.57  4.16
15      Portuguese-German   T2 argumentative     3.88     0.116 3.65  4.11
16      Portuguese-German   T2     narrative     4.10     0.159 3.79  4.41
17      Portuguese-German   T3 argumentative     4.64     0.129 4.38  4.89
18      Portuguese-German   T3     narrative     4.76     0.148 4.46  5.05

So where do these estimates and uncertainty intervals come from? In the Bayesian approach, every model parameter hasn’t got just one estimate but an entire distribution of estimates. Moreover, everything that depends on model parameters also has an entire distribution of estimates associated with it. The mean modelled outcome values per cell depend on the model parameters, so they, too, have entire distributions associated with them. The fitted() function summarises these distributions for us: it returns their means as Estimate, their standard deviations as Est.Error and their 2.5th and 97.5th percentiles as Q2.5 and Q97.5. If so inclined, you can generate these distributions yourself using the posterior_linpred() function:

posterior_fit <- posterior_linpred(m_default, newdata = d_pred, re_formula = NA)
dim(posterior_fit)
[1] 8000   18

This returns a matrix of 8,000 rows and 18 columns. 8,000 is the number of ‘post-warmup draws’ (see the output of summary(m_default)); 18 is the number of combinations of predictor values in d_pred.

The first column of posterior_fit contains the distribution associated with the first row in d_pred. If you compute its mean, standard deviation and 2.5th and 97.5th percentiles, you end up with the same numbers as above:

mean(posterior_fit[, 1])
[1] 5.25
sd(posterior_fit[, 1])
[1] 0.131
quantile(posterior_fit[, 1], probs = c(0.025, 0.975))
 2.5% 97.5% 
 4.99  5.51 

Or similarly for the 10th row of d_pred (Portuguese-French, T2, narrative):

mean(posterior_fit[, 10])
[1] 4.22
sd(posterior_fit[, 10])
[1] 0.158
quantile(posterior_fit[, 10], probs = c(0.025, 0.975))
 2.5% 97.5% 
 3.91  4.52 

At the moment, using posterior_linpred() has no added value, but it’s good to know where these numbers come from.

Let’s draw a graph showing these modelled averages and the uncertainty about them. 95% uncertainty intervals are typically used, but they may instill dichotomous thinking. To emphasise that such an interval singles out but two points on a continuum, I’m tempted to add 80% intervals as well:

# Obtain fitted values + uncertainty
fitted_values <- fitted(m_default, newdata = d_pred, re_formula = NA, 
                        # 95% interval: between 2.5th and 97.5th percentile
                        # 80% interval: between 10th and 90th percentile
                        probs = c(0.025, 0.10, 0.90, 0.975))
# Combine fitted values with predictor values
fitted_values <- cbind(d_pred, fitted_values)
fitted_values
                    Group Time      TextType Estimate Est.Error Q2.5  Q10  Q90
1  monolingual Portuguese   T1 argumentative     5.25     0.131 4.99 5.09 5.42
2  monolingual Portuguese   T1     narrative     4.99     0.181 4.63 4.77 5.22
3  monolingual Portuguese   T2 argumentative     5.82     0.140 5.55 5.65 6.00
4  monolingual Portuguese   T2     narrative     5.32     0.191 4.93 5.08 5.56
5  monolingual Portuguese   T3 argumentative     6.17     0.159 5.84 5.97 6.37
6  monolingual Portuguese   T3     narrative     5.84     0.193 5.45 5.60 6.09
7       Portuguese-French   T1 argumentative     4.19     0.117 3.97 4.04 4.34
8       Portuguese-French   T1     narrative     4.07     0.162 3.75 3.87 4.28
9       Portuguese-French   T2 argumentative     4.21     0.124 3.97 4.06 4.37
10      Portuguese-French   T2     narrative     4.22     0.158 3.91 4.02 4.42
11      Portuguese-French   T3 argumentative     4.73     0.131 4.48 4.56 4.89
12      Portuguese-French   T3     narrative     4.80     0.157 4.49 4.60 5.00
13      Portuguese-German   T1 argumentative     3.95     0.109 3.74 3.81 4.09
14      Portuguese-German   T1     narrative     3.87     0.151 3.57 3.67 4.06
15      Portuguese-German   T2 argumentative     3.88     0.116 3.65 3.73 4.02
16      Portuguese-German   T2     narrative     4.10     0.159 3.79 3.90 4.30
17      Portuguese-German   T3 argumentative     4.64     0.129 4.38 4.47 4.80
18      Portuguese-German   T3     narrative     4.76     0.148 4.46 4.57 4.95
   Q97.5
1   5.51
2   5.35
3   6.11
4   5.69
5   6.48
6   6.24
7   4.42
8   4.40
9   4.46
10  4.52
11  4.99
12  5.11
13  4.17
14  4.16
15  4.11
16  4.41
17  4.89
18  5.05

And now for the graph:

# Move all points apart horizontally to reduce overlap
position_adjustment <- position_dodge(width = 0.3)

ggplot(fitted_values,
       aes(x = Time,
           y = Estimate,
           # Sort Groups from low to high
           colour = reorder(Group, Estimate),
           group = Group)) +
  # Move point apart:
  geom_point(position = position_adjustment) +
  # Move lines apart:
  geom_path(position = position_adjustment) +
  # Add 95% intervals; move them apart, too
  geom_linerange(aes(ymin = Q2.5, ymax = Q97.5), linewidth = 0.4,
                 position = position_adjustment) +
  # Add 80% intervals; move them apart, too
  geom_linerange(aes(ymin = Q10, ymax = Q90), linewidth = 0.9,
                 position = position_adjustment) +
  facet_wrap(~ TextType) +
  # Override default colour
  scale_colour_brewer(name = element_blank(), type = "qual") +
  ylab("Modelled mean Guiraud") +
  theme_bw() +
  theme(legend.position = "bottom")

Figure 2. The modelled mean Guiraud values and their uncertainty (thick vertical lines: 80% interval; thin vertical lines: 95% interval).

A model with more sensible coding

Tailoring the coding of categorical predictors to the research questions

The summary() output for m_default was difficult to interpret because treatment coding was used. However, we can override this default behaviour to end up with estimates that are more readily and more usefully interpretable.

The first thing we can do is to override the default reference level. Figure 1 showed that the Guiraud values at T2 tend to lie somewhere midway between those at T1 and T3, so we can make the intercept estimate more representative of the dataset as a whole by making T2 the reference level of Time rather than T1. A benefit of doing so is that we will now have two parameters, TimeT1 and TimeT3, that specify the T1-T2 and T2-T3 differences, respectively. In other words, the estimated parameters will directly reflect the progression from data collection to data collection. (Before, the parameter estimates specified the T1-T2 and T1-T3 differences, so a direct estimate for T2-T3 was lacking.)

# Set T2 as default time; retain treatment coding
d$Time <- relevel(d$Time, "T2")
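A quick way to see what relevel() does is to inspect the contrast matrix of a toy three-level factor (hypothetical levels, not the project data): the all-zero row, i.e. the reference level, moves to the newly chosen level.

```r
# Toy factor, NOT the project data: after relevel(), T2 is the
# reference level, i.e. the level whose row contains only zeroes
f <- factor(c("T1", "T2", "T3"))
contrasts(relevel(f, "T2"))
#    T1 T3
# T2  0  0
# T1  1  0
# T3  0  1
```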

Second, there’s no reason for preferring argumentative or narrative texts as the reference level. If we sum-code this predictor, the intercept reflects the grand mean of the argumentative and narrative texts (at T2), and the estimated parameter then specifies how far the mean Guiraud value of each text type is removed from this mean:

# Sum (or deviation) coding for TextType (2 levels)
contrasts(d$TextType) <- contr.sum(2)
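A toy illustration with made-up numbers (not the project data) of what sum coding buys us: in a balanced one-factor model, the intercept becomes the mean of the two cell means, and the single slope is each level’s deviation from it.

```r
# Toy data, NOT the project data: two text types, two texts each
toy <- data.frame(
  TextType = factor(rep(c("argumentative", "narrative"), each = 2)),
  Outcome  = c(5, 6, 3, 4)
)
# Sum-code the two-level factor
contrasts(toy$TextType) <- contr.sum(2)
coef(lm(Outcome ~ TextType, data = toy))
# (Intercept) = (5.5 + 3.5)/2 = 4.5; TextType1 = 5.5 - 4.5 = 1
```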

Similarly, there are a couple of reasonable ways to choose the reference level for Group when using treatment coding. But you can also sum-code this predictor so that the intercept reflects the grand mean of the Guiraud values of texts written by monolingual Portuguese and bilingual Portuguese-French and Portuguese-German kids (at T2).

# Sum (or deviation) coding for Group (3 levels)
contrasts(d$Group) <- contr.sum(3)
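For a three-level factor, the two sum-coded coefficients are the first two levels’ deviations from the grand mean; the third level’s deviation isn’t estimated directly, but since the deviations sum to zero, it can be recovered as minus the sum of the other two. A toy check with made-up numbers (not the project data):

```r
# Toy data, NOT the project data: three groups, two observations each
toy <- data.frame(
  Group   = factor(rep(c("A", "B", "C"), each = 2)),
  Outcome = c(6, 6, 4, 4, 2, 2)
)
contrasts(toy$Group) <- contr.sum(3)
co <- coef(lm(Outcome ~ Group, data = toy))
co[["(Intercept)"]]                     # grand mean: 4
co[["Group1"]]                          # A's deviation: 2
co[["Group2"]]                          # B's deviation: 0
-(co[["Group1"]] + co[["Group2"]])      # C's deviation: -2
```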

Refitting the model

The model formula itself doesn’t change; only the coding of the predictors does.

m_recoded <- brm(Guiraud ~ Time*TextType*Group +
                   (1 + TextType + Time|Class) +
                   (1 + TextType + Time|Child),
                 cores = 4, iter = 4000,
                 silent = 2,
                 control = list(adapt_delta = 0.95),
                 data = d)

Interpreting the parameter estimates

summary(m_recoded)
 Family: gaussian 
  Links: mu = identity 
Formula: Guiraud ~ Time * TextType * Group + (1 + TextType + Time | Class) + (1 + TextType + Time | Child) 
   Data: d (Number of observations: 1040) 
  Draws: 4 chains, each with iter = 4000; warmup = 2000; thin = 1;
         total post-warmup draws = 8000

Multilevel Hyperparameters:
~Child (Number of levels: 328) 
                         Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
sd(Intercept)                0.46      0.04     0.38     0.55 1.00     2275
sd(TextType1)                0.13      0.05     0.03     0.22 1.00      706
sd(TimeT1)                   0.10      0.08     0.00     0.28 1.00     1022
sd(TimeT3)                   0.38      0.09     0.18     0.54 1.01      590
cor(Intercept,TextType1)    -0.47      0.21    -0.86    -0.02 1.00     2148
cor(Intercept,TimeT1)       -0.04      0.39    -0.77     0.73 1.00     5998
cor(TextType1,TimeT1)        0.05      0.43    -0.78     0.80 1.00     3249
cor(Intercept,TimeT3)        0.14      0.23    -0.24     0.66 1.01      926
cor(TextType1,TimeT3)        0.03      0.32    -0.63     0.62 1.01      510
cor(TimeT1,TimeT3)          -0.02      0.42    -0.81     0.77 1.02      268
                         Tail_ESS
sd(Intercept)                4601
sd(TextType1)                1278
sd(TimeT1)                   1980
sd(TimeT3)                    859
cor(Intercept,TextType1)     3347
cor(Intercept,TimeT1)        4574
cor(TextType1,TimeT1)        5031
cor(Intercept,TimeT3)        1331
cor(TextType1,TimeT3)        1297
cor(TimeT1,TimeT3)            892

~Class (Number of levels: 25) 
                         Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
sd(Intercept)                0.18      0.07     0.04     0.34 1.00     1298
sd(TextType1)                0.12      0.04     0.05     0.21 1.00     2542
sd(TimeT1)                   0.10      0.07     0.01     0.27 1.00     2235
sd(TimeT3)                   0.10      0.07     0.00     0.27 1.00     2161
cor(Intercept,TextType1)    -0.20      0.35    -0.79     0.56 1.00     2368
cor(Intercept,TimeT1)       -0.14      0.42    -0.84     0.74 1.00     5527
cor(TextType1,TimeT1)       -0.03      0.41    -0.78     0.76 1.00     6213
cor(Intercept,TimeT3)        0.04      0.43    -0.76     0.80 1.00     5339
cor(TextType1,TimeT3)        0.16      0.42    -0.69     0.85 1.00     5890
cor(TimeT1,TimeT3)           0.11      0.44    -0.78     0.85 1.00     4306
                         Tail_ESS
sd(Intercept)                1281
sd(TextType1)                2768
sd(TimeT1)                   3747
sd(TimeT3)                   3171
cor(Intercept,TextType1)     3745
cor(Intercept,TimeT1)        5022
cor(TextType1,TimeT1)        6110
cor(Intercept,TimeT3)        5475
cor(TextType1,TimeT3)        5570
cor(TimeT1,TimeT3)           5635

Regression Coefficients:
                        Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
Intercept                   4.59      0.06     4.47     4.72 1.00     4328
TimeT1                     -0.21      0.06    -0.33    -0.08 1.00     6617
TimeT3                      0.56      0.06     0.44     0.69 1.00     6613
TextType1                   0.05      0.05    -0.05     0.14 1.00     4556
Group1                      0.98      0.10     0.78     1.17 1.00     3423
Group2                     -0.38      0.09    -0.56    -0.19 1.00     3473
TimeT1:TextType1            0.03      0.05    -0.07     0.13 1.00     6741
TimeT3:TextType1           -0.02      0.05    -0.12     0.08 1.00     6181
TimeT1:Group1              -0.24      0.09    -0.41    -0.08 1.00     4832
TimeT3:Group1              -0.13      0.09    -0.30     0.04 1.00     5460
TimeT1:Group2               0.12      0.09    -0.05     0.30 1.00     4994
TimeT3:Group2              -0.01      0.09    -0.18     0.16 1.00     5093
TextType1:Group1            0.20      0.07     0.06     0.34 1.00     3541
TextType1:Group2           -0.04      0.07    -0.17     0.09 1.00     3527
TimeT1:TextType1:Group1    -0.15      0.07    -0.28    -0.01 1.00     6125
TimeT3:TextType1:Group1    -0.07      0.07    -0.20     0.07 1.00     6193
TimeT1:TextType1:Group2     0.03      0.07    -0.11     0.17 1.00     5731
TimeT3:TextType1:Group2    -0.01      0.07    -0.14     0.12 1.00     5675
                        Tail_ESS
Intercept                   4832
TimeT1                      5591
TimeT3                      6012
TextType1                   5530
Group1                      4024
Group2                      4055
TimeT1:TextType1            6272
TimeT3:TextType1            5424
TimeT1:Group1               5080
TimeT3:Group1               5557
TimeT1:Group2               5074
TimeT3:Group2               4939
TextType1:Group1            4792
TextType1:Group2            4530
TimeT1:TextType1:Group1     5989
TimeT3:TextType1:Group1     6461
TimeT1:TextType1:Group2     6285
TimeT3:TextType1:Group2     6247

Further Distributional Parameters:
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     0.60      0.02     0.55     0.65 1.00      909     1428

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).

Now the Intercept reflects the grand mean of the Guiraud values for both argumentative and narrative texts, for all three groups, written at T2. The TimeT1 estimate (-0.21) shows the difference between T1 and T2 averaged over all text types and all groups (0.21 points lower at T1); the TimeT3 estimate (0.56) shows the difference between T2 and T3 averaged over all text types and all groups (0.56 points higher at T3).

TextType1 (0.05) shows that the mean Guiraud value of one text type (still written at T2!), averaged over all groups, is 0.05 points higher than the grand mean; by implication, the mean Guiraud value of the other text type lies 0.05 points below the grand mean. To find out which text type is which, use contrasts():

contrasts(d$TextType)
              [,1]
argumentative    1
narrative       -1

Since argumentative is coded as 1, it’s the argumentative texts that have the higher Guiraud values at T2.
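With this coding, the modelled T2 mean for each text type can be reconstructed by hand: it's the intercept plus the TextType1 estimate times the text type's code. A minimal sketch using the rounded estimates from the summary (intercept 4.59):

```r
# Reconstruct the T2 text-type means from the sum-coded estimates
# (rounded values copied from the model summary)
intercept <- 4.59
texttype1 <- 0.05
c(argumentative = intercept + 1 * texttype1,   # coded  1
  narrative     = intercept + -1 * texttype1)  # coded -1
# argumentative: 4.64, narrative: 4.54
```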

Similarly, Group1 (0.98) shows that one group's mean Guiraud value, averaged across text types at T2, lies 0.98 points above the grand mean, whereas Group2 (-0.38) shows that another group's mean lies 0.38 points below it. By implication, the third group's mean Guiraud value lies 0.60 points below the grand mean, since the three deviations sum to zero (0.98 − 0.38 − 0.60 = 0). To see which group is which, use contrasts():

contrasts(d$Group)
                       [,1] [,2]
monolingual Portuguese    1    0
Portuguese-French         0    1
Portuguese-German        -1   -1

monolingual Portuguese is ‘1’ for the purposes of Group1, Portuguese-French is 1 for the purposes of Group2, and Portuguese-German is the third group.
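The same arithmetic recovers the modelled group means at T2, averaged over text types; because the three deviations sum to zero, the Portuguese-German deviation is minus the sum of the other two:

```r
# Group deviations from the grand mean (sum coding; rounded estimates
# copied from the model summary)
intercept <- 4.59
group1 <- 0.98                # monolingual Portuguese (coded 1, 0)
group2 <- -0.38               # Portuguese-French      (coded 0, 1)
group3 <- -(group1 + group2)  # Portuguese-German: deviations sum to zero
c(intercept + group1, intercept + group2, intercept + group3)
# roughly 5.57, 4.21, 3.99
```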

We can double-check these numbers by generating the modelled mean values for each predictor value combination:

double_check <- cbind(
  d_pred, 
  fitted(m_recoded, 
         newdata = d_pred, 
         re_formula = NA)
  )
double_check
                    Group Time      TextType Estimate Est.Error Q2.5 Q97.5
1  monolingual Portuguese   T1 argumentative     5.25     0.151 4.95  5.55
2  monolingual Portuguese   T1     narrative     4.99     0.171 4.65  5.33
3  monolingual Portuguese   T2 argumentative     5.82     0.142 5.54  6.10
4  monolingual Portuguese   T2     narrative     5.32     0.166 4.98  5.65
5  monolingual Portuguese   T3 argumentative     6.16     0.170 5.82  6.50
6  monolingual Portuguese   T3     narrative     5.84     0.179 5.48  6.19
7       Portuguese-French   T1 argumentative     4.19     0.131 3.94  4.46
8       Portuguese-French   T1     narrative     4.06     0.160 3.75  4.37
9       Portuguese-French   T2 argumentative     4.21     0.124 3.97  4.46
10      Portuguese-French   T2     narrative     4.21     0.141 3.94  4.50
11      Portuguese-French   T3 argumentative     4.73     0.138 4.46  5.01
12      Portuguese-French   T3     narrative     4.79     0.152 4.50  5.11
13      Portuguese-German   T1 argumentative     3.95     0.116 3.72  4.18
14      Portuguese-German   T1     narrative     3.88     0.148 3.58  4.16
15      Portuguese-German   T2 argumentative     3.88     0.115 3.65  4.10
16      Portuguese-German   T2     narrative     4.11     0.145 3.83  4.40
17      Portuguese-German   T3 argumentative     4.64     0.135 4.36  4.90
18      Portuguese-German   T3     narrative     4.76     0.142 4.48  5.04

Some sanity checks:

  1. Intercept = 4.59 = grand mean at T2:
double_check |> 
  filter(Time == "T2") |> 
  summarise(mean_est = mean(Estimate))
  mean_est
1     4.59
  2. TimeT3 = 0.56 = T2/T3 difference across text types and groups:
double_check |> 
  group_by(Time) |> 
  summarise(mean_est = mean(Estimate)) |> 
  spread(Time, mean_est) |> 
  summarise(diff_T2T3 = T3 - T2)
# A tibble: 1 × 1
  diff_T2T3
      <dbl>
1     0.562
  3. Portuguese-German lies 0.60 below average at T2 across text types:
double_check |> 
  filter(Time == "T2") |> 
  group_by(Group) |> 
  summarise(mean_est = mean(Estimate)) |> 
  mutate(diff_mean = mean_est - mean(mean_est))
# A tibble: 3 × 3
  Group                  mean_est diff_mean
  <fct>                     <dbl>     <dbl>
1 monolingual Portuguese     5.57     0.977
2 Portuguese-French          4.21    -0.381
3 Portuguese-German          4.00    -0.596

I won’t plot the modelled averages and their uncertainty, because the result will be the same as before: Recoding the predictors in this way doesn’t affect the modelled averages per cell; it just makes the summary output easier to parse.

Homing in on specific comparisons

Finally, let’s see how we can target some specific comparisons without having to refit the model several times. A specific comparison you might be interested in could be “How large is the difference in Guiraud scores for narrative texts written by Portuguese-French bilinguals between T1 and T2?” Or a more complicated one: “How large is the difference in the progression from T1 to T3 for argumentative texts between Portuguese-French and Portuguese-German children?”

To answer such questions, we need to generate the distribution of the modelled averages per predictor value combination:

posterior_fit <- posterior_linpred(m_recoded, newdata = d_pred, re_formula = NA)
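In posterior_fit, each row is a posterior draw and each column corresponds to a row of d_pred, so it's worth checking the dimensions and labelling the columns before indexing into the matrix:

```r
# One column per row of d_pred (18 cells), one row per posterior draw
dim(posterior_fit)
# Label the columns to make indexing mistakes easier to spot
colnames(posterior_fit) <- paste(d_pred$Group, d_pred$Time, d_pred$TextType)
```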

Question 1: Progression T1-T2 for narrative texts, Portuguese-French bilinguals?

This question requires us to compare the modelled average for narrative texts written by Portuguese-French bilinguals at T2 to that of the narrative texts written by Portuguese-French bilinguals at T1. The first combination of predictor values can be found in row 10 in d_pred, so the corresponding estimates are in column 10 in posterior_fit. The second combination of predictor values can be found in row 8 in d_pred, so the corresponding estimates are in column 8 in posterior_fit.

t2 <- posterior_fit[, 10]
t1 <- posterior_fit[, 8]
df <- data.frame(t2, t1)

Now compute and plot the pairwise differences:

df <- df |> 
  mutate(progression = t2 - t1)
ggplot(df,
       aes(x = progression)) +
  geom_histogram(bins = 50, fill = "lightgrey", colour = "black") +
  theme_bw()

Figure 3. Estimate of the progression in Guiraud values for narrative texts by Portuguese-French bilinguals from T1 to T2.

The mean progression is easily calculated:

mean(df$progression)
[1] 0.147

The estimation error for this estimate (the standard deviation of the posterior differences) is:

sd(df$progression)
[1] 0.148

And its 95% uncertainty interval is:

quantile(df$progression, probs = c(0.025, 0.975))
  2.5%  97.5% 
-0.144  0.436 

According to the model, there's about an 84% chance that there is indeed some progression from T1 to T2.

mean(df$progression > 0)
[1] 0.839
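The separate calls above can also be rolled into a single dplyr summary, which is handy when you have several comparisons to report:

```r
df |> 
  summarise(mean_prog  = mean(progression),
            est_error  = sd(progression),
            lower_95   = quantile(progression, 0.025),
            upper_95   = quantile(progression, 0.975),
            p_positive = mean(progression > 0))
```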

Question 2: T1-T3 progression for argumentative texts, Portuguese-French vs. Portuguese-German?

This question requires us to take into consideration the modelled average for argumentative texts written by Portuguese-French bilinguals at T1, that for argumentative texts written by Portuguese-French bilinguals at T3, and the same for the texts written by Portuguese-German bilinguals. We need the following columns in posterior_fit:

  • 7 (Portuguese-French, T1, argumentative)
  • 11 (Portuguese-French, T3, argumentative)
  • 13 (Portuguese-German, T1, argumentative)
  • 17 (Portuguese-German, T3, argumentative)
fr_t1 <- posterior_fit[, 7]
fr_t3 <- posterior_fit[, 11]
gm_t1 <- posterior_fit[, 13]
gm_t3 <- posterior_fit[, 17]
df <- data.frame(fr_t1, fr_t3, gm_t1, gm_t3)

We compute the progression for the Portuguese-French bilinguals and that for the Portuguese-German bilinguals. Then we compute the difference between these progressions:

df <- df |> 
  mutate(prog_fr = fr_t3 - fr_t1,
         prog_gm = gm_t3 - gm_t1,
         diff_prog = prog_gm - prog_fr)

The mean progression for the Portuguese-French bilinguals was 0.54, compared to 0.69 for the Portuguese-German bilinguals:

mean(df$prog_fr)
[1] 0.54
mean(df$prog_gm)
[1] 0.687

The mean difference between these progressions, then, is about 0.15 in favour of the Portuguese-German bilinguals:

mean(df$diff_prog)
[1] 0.146

However, there is considerable uncertainty about this difference:

ggplot(df,
       aes(x = diff_prog)) +
  geom_histogram(bins = 50, fill = "lightgrey", colour = "black") +
  theme_bw()

The probability that the Portuguese-German bilinguals make more progress than the Portuguese-French bilinguals is about 78%, and according to the model, there's a 95% chance that the difference lies somewhere between -0.25 and 0.53 points.

mean(df$diff_prog > 0)
[1] 0.777
quantile(df$diff_prog, probs = c(0.025, 0.975))
  2.5%  97.5% 
-0.248  0.532 
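Since you have the full distribution of the difference, you're not limited to its sign: you can just as easily estimate the probability that the difference exceeds some smallest effect size you care about (the 0.1 below is an arbitrary illustration, not a recommended threshold):

```r
# Probability that the Portuguese-German advantage exceeds 0.1 points
mean(df$diff_prog > 0.1)
```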

Summary

By investing some time in recoding your predictors, you can make the parameter estimates more relevant to your questions. Any specific comparisons you may be interested in can additionally be addressed by making use of the entire distribution of estimates. You can also use these estimate distributions to draw effect plots.

References

Audrey Bonvin, Jan Vanhove, Raphael Berthele and Amelia Lambelet. 2018. Die Entwicklung von produktiven lexikalischen Kompetenzen bei Schüler(innen) mit portugiesischem Migrationshintergrund in der Schweiz. Zeitschrift für Interkulturellen Fremdsprachenunterricht 23(1). 135-148. Data and R code available from figshare.

Amelia Lambelet, Raphael Berthele, Magalie Desgrippes, Carlos Pestana and Jan Vanhove. 2017a. Chapter 2: Testing interdependence in Portuguese Heritage speakers in Switzerland: the HELASCOT project. In Raphael Berthele and Amelia Lambelet (eds.), Heritage and school language literacy development in migrant children: Interdependence or independence?, pp. 26-33. Multilingual Matters.

Amelia Lambelet, Magalie Desgrippes and Jan Vanhove. 2017b. Chapter 5: The development of argumentative and narrative writing skills in Portuguese heritage speakers in Switzerland (HELASCOT project). In Raphael Berthele and Amelia Lambelet (eds.), Heritage and school language literacy development in migrant children: Interdependence or independence?, pp. 83-96. Multilingual Matters.

Jan Vanhove, Audrey Bonvin, Amelia Lambelet and Raphael Berthele. 2019. Predicting perceptions of the lexical richness of short French, German, and Portuguese texts. Journal of Writing Research. Technical report, data (including texts), elicitation materials, and R code available from the Open Science Framework.

Session info

devtools::session_info()
─ Session info ───────────────────────────────────────────────────────────────
 setting  value
 version  R version 4.5.1 (2025-06-13)
 os       Ubuntu 22.04.5 LTS
 system   x86_64, linux-gnu
 ui       X11
 language en_US
 collate  en_US.UTF-8
 ctype    en_US.UTF-8
 tz       Europe/Zurich
 date     2025-09-24
 pandoc   2.12 @ /home/jan/miniconda3/bin/ (via rmarkdown)

─ Packages ───────────────────────────────────────────────────────────────────
 package        * version  date (UTC) lib source
 abind            1.4-8    2024-09-12 [1] CRAN (R 4.5.1)
 backports        1.5.0    2024-05-23 [1] CRAN (R 4.5.1)
 bayesplot        1.14.0   2025-08-31 [1] CRAN (R 4.5.1)
 bit              4.6.0    2025-03-06 [1] CRAN (R 4.5.1)
 bit64            4.6.0-1  2025-01-16 [1] CRAN (R 4.5.1)
 bridgesampling   1.1-2    2021-04-16 [1] CRAN (R 4.5.1)
 brms           * 2.23.0   2025-09-09 [1] CRAN (R 4.5.1)
 Brobdingnag      1.2-9    2022-10-19 [1] CRAN (R 4.5.1)
 cachem           1.1.0    2024-05-16 [1] CRAN (R 4.5.1)
 callr            3.7.6    2024-03-25 [1] CRAN (R 4.5.1)
 checkmate        2.3.3    2025-08-18 [1] CRAN (R 4.5.1)
 cli              3.6.5    2025-04-23 [1] CRAN (R 4.5.1)
 coda             0.19-4.1 2024-01-31 [1] CRAN (R 4.5.1)
 codetools        0.2-19   2023-02-01 [4] CRAN (R 4.2.2)
 crayon           1.5.1    2022-03-26 [2] CRAN (R 4.2.0)
 curl             6.4.0    2025-06-22 [1] CRAN (R 4.5.1)
 devtools         2.4.5    2022-10-11 [1] CRAN (R 4.5.1)
 digest           0.6.37   2024-08-19 [1] CRAN (R 4.5.1)
 distributional   0.5.0    2024-09-17 [1] CRAN (R 4.5.1)
 dplyr          * 1.1.4    2023-11-17 [1] CRAN (R 4.5.1)
 ellipsis         0.3.2    2021-04-29 [2] CRAN (R 4.2.0)
 evaluate         1.0.4    2025-06-18 [1] CRAN (R 4.5.1)
 farver           2.1.2    2024-05-13 [1] CRAN (R 4.5.1)
 fastmap          1.2.0    2024-05-15 [1] CRAN (R 4.5.1)
 forcats        * 1.0.0    2023-01-29 [1] CRAN (R 4.5.1)
 fs               1.5.2    2021-12-08 [2] CRAN (R 4.2.0)
 generics         0.1.4    2025-05-09 [1] CRAN (R 4.5.1)
 ggplot2        * 3.5.2    2025-04-09 [1] CRAN (R 4.5.1)
 glue             1.6.2    2022-02-24 [2] CRAN (R 4.2.0)
 gridExtra        2.3      2017-09-09 [1] CRAN (R 4.5.1)
 gtable           0.3.6    2024-10-25 [1] CRAN (R 4.5.1)
 hms              1.1.3    2023-03-21 [1] CRAN (R 4.5.1)
 htmltools        0.5.8.1  2024-04-04 [1] CRAN (R 4.5.1)
 htmlwidgets      1.6.4    2023-12-06 [1] CRAN (R 4.5.1)
 httpuv           1.6.16   2025-04-16 [1] CRAN (R 4.5.1)
 inline           0.3.21   2025-01-09 [1] CRAN (R 4.5.1)
 jsonlite         2.0.0    2025-03-27 [1] CRAN (R 4.5.1)
 knitr            1.50     2025-03-16 [1] CRAN (R 4.5.1)
 labeling         0.4.3    2023-08-29 [1] CRAN (R 4.5.1)
 later            1.4.2    2025-04-08 [1] CRAN (R 4.5.1)
 lattice          0.22-5   2023-10-24 [4] CRAN (R 4.3.1)
 lifecycle        1.0.4    2023-11-07 [1] CRAN (R 4.5.1)
 loo              2.8.0    2024-07-03 [1] CRAN (R 4.5.1)
 lubridate      * 1.9.4    2024-12-08 [1] CRAN (R 4.5.1)
 magrittr         2.0.3    2022-03-30 [1] CRAN (R 4.5.1)
 Matrix           1.7-3    2025-03-11 [4] CRAN (R 4.4.3)
 matrixStats      1.5.0    2025-01-07 [1] CRAN (R 4.5.1)
 memoise          2.0.1    2021-11-26 [2] CRAN (R 4.2.0)
 mime             0.10     2021-02-13 [2] CRAN (R 4.0.2)
 miniUI           0.1.2    2025-04-17 [1] CRAN (R 4.5.1)
 mvtnorm          1.3-3    2025-01-10 [1] CRAN (R 4.5.1)
 nlme             3.1-168  2025-03-31 [4] CRAN (R 4.4.3)
 pillar           1.10.2   2025-04-05 [1] CRAN (R 4.5.1)
 pkgbuild         1.3.1    2021-12-20 [2] CRAN (R 4.2.0)
 pkgconfig        2.0.3    2019-09-22 [2] CRAN (R 4.2.0)
 pkgload          1.4.0    2024-06-28 [1] CRAN (R 4.5.1)
 plyr             1.8.9    2023-10-02 [1] CRAN (R 4.5.1)
 posterior        1.6.1    2025-02-27 [1] CRAN (R 4.5.1)
 prettyunits      1.1.1    2020-01-24 [2] CRAN (R 4.2.0)
 processx         3.8.6    2025-02-21 [1] CRAN (R 4.5.1)
 profvis          0.4.0    2024-09-20 [1] CRAN (R 4.5.1)
 promises         1.3.3    2025-05-29 [1] CRAN (R 4.5.1)
 ps               1.9.1    2025-04-12 [1] CRAN (R 4.5.1)
 purrr          * 1.0.4    2025-02-05 [1] CRAN (R 4.5.1)
 QuickJSR         1.8.1    2025-09-20 [1] CRAN (R 4.5.1)
 R6               2.5.1    2021-08-19 [2] CRAN (R 4.2.0)
 RColorBrewer     1.1-3    2022-04-03 [1] CRAN (R 4.5.1)
 Rcpp           * 1.0.14   2025-01-12 [1] CRAN (R 4.5.1)
 RcppParallel     5.1.11-1 2025-08-27 [1] CRAN (R 4.5.1)
 readr          * 2.1.5    2024-01-10 [1] CRAN (R 4.5.1)
 remotes          2.4.2    2021-11-30 [2] CRAN (R 4.2.0)
 reshape2         1.4.4    2020-04-09 [1] CRAN (R 4.5.1)
 rlang            1.1.6    2025-04-11 [1] CRAN (R 4.5.1)
 rmarkdown        2.29     2024-11-04 [1] CRAN (R 4.5.1)
 rstan            2.32.7   2025-03-10 [1] CRAN (R 4.5.1)
 rstantools       2.5.0    2025-09-01 [1] CRAN (R 4.5.1)
 rstudioapi       0.17.1   2024-10-22 [1] CRAN (R 4.5.1)
 scales           1.4.0    2025-04-24 [1] CRAN (R 4.5.1)
 sessioninfo      1.2.2    2021-12-06 [2] CRAN (R 4.2.0)
 shiny            1.10.0   2024-12-14 [1] CRAN (R 4.5.1)
 StanHeaders      2.32.10  2024-07-15 [1] CRAN (R 4.5.1)
 stringi          1.7.6    2021-11-29 [2] CRAN (R 4.2.0)
 stringr        * 1.5.1    2023-11-14 [1] CRAN (R 4.5.1)
 tensorA          0.36.2.1 2023-12-13 [1] CRAN (R 4.5.1)
 tibble         * 3.3.0    2025-06-08 [1] CRAN (R 4.5.1)
 tidyr          * 1.3.1    2024-01-24 [1] CRAN (R 4.5.1)
 tidyselect       1.2.1    2024-03-11 [1] CRAN (R 4.5.1)
 tidyverse      * 2.0.0    2023-02-22 [1] CRAN (R 4.5.1)
 timechange       0.3.0    2024-01-18 [1] CRAN (R 4.5.1)
 tzdb             0.5.0    2025-03-15 [1] CRAN (R 4.5.1)
 urlchecker       1.0.1    2021-11-30 [1] CRAN (R 4.5.1)
 usethis          3.1.0    2024-11-26 [1] CRAN (R 4.5.1)
 utf8             1.2.2    2021-07-24 [2] CRAN (R 4.2.0)
 V8               6.0.4    2025-06-04 [1] CRAN (R 4.5.1)
 vctrs            0.6.5    2023-12-01 [1] CRAN (R 4.5.1)
 vroom            1.6.5    2023-12-05 [1] CRAN (R 4.5.1)
 withr            3.0.2    2024-10-28 [1] CRAN (R 4.5.1)
 xfun             0.52     2025-04-02 [1] CRAN (R 4.5.1)
 xtable           1.8-4    2019-04-21 [1] CRAN (R 4.5.1)
 yaml             2.3.5    2022-02-21 [2] CRAN (R 4.2.0)

 [1] /home/jan/R/x86_64-pc-linux-gnu-library/4.5
 [2] /usr/local/lib/R/site-library
 [3] /usr/lib/R/site-library
 [4] /usr/lib/R/library

──────────────────────────────────────────────────────────────────────────────