The data is the observation from your werewolf detector that the mailman is a werewolf. That is information you went out and collected, presumably to test the hypothesis that the mailman is a werewolf.
The likelihood is the probability of the data given the model. In this case, it’s the probability of observing our data point (the werewolf detector flagging the mailman as a werewolf) given that the model is true (the model being the hypothesis that the mailman really is a werewolf). Because the detector correctly identifies werewolves 99% of the time, the likelihood is 0.99.
The prior is the information from your field guide that 1 in every 1000 people is a werewolf. This is knowledge that you have before you set out to test whether anyone is a werewolf.
The posterior is the final result: if we observe a positive werewolf test on the mailman, then given our prior knowledge that 1/1000 people are werewolves and the fact that our test is 99% accurate for both positive and negative results, the posterior probability that he is a werewolf is about 9%.
Incidentally, this seems like a very low probability, but it follows directly from Bayes’ Theorem. See the worked example on the Wikipedia page on Bayes’ Theorem, which uses drug testing rather than werewolf testing, but the math is the same.
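To verify the 9% figure yourself, here is a quick sketch of the calculation in R, using only the numbers given above (99% accuracy for both positive and negative results, and a 1-in-1000 prior):

sens <- 0.99      # P(positive result | werewolf)
spec <- 0.99      # P(negative result | not a werewolf)
prior <- 1 / 1000 # P(werewolf), from the field guide

# Bayes' Theorem: P(werewolf | positive) =
#   P(positive | werewolf) * P(werewolf) / P(positive)
(sens * prior) / (sens * prior + (1 - spec) * (1 - prior))

## [1] 0.09016393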
Trace plot A resembles a “hairy caterpillar,” representing good model convergence. All Markov chains have arrived at the correct solution and are moving back and forth around it. The amount they go back and forth is proportional to the uncertainty we have about the value of the parameter.
In trace plot B, some of the Markov chains trend in one direction or another, which indicates poor model convergence. The chains are still wandering in search of the correct solution, and they are not mixing well, meaning they are not moving back and forth around the same stationary distribution.
In the first case, we can tell the researcher they can use the posterior distribution to make some inferences. They can do things like calculate the median and quantiles of the distribution, and make predictions.
In the second case, we would recommend that the researcher either run the MCMC for a greater number of iterations, or change the prior distributions to constrain the chains to explore a narrower region of parameter space, making them more likely to converge on the correct solution.
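For example, if the poorly converged model were a brms fit called fit_poor (a hypothetical name, standing in for whatever model showed trending chains), one way to rerun it with more iterations is brms’s update() method, which refits without recompiling when only the sampler settings change:

# Hypothetical sketch: refit 'fit_poor' with twice as many iterations,
# keeping the first half of each chain as warmup.
fit_longer <- update(fit_poor, iter = 4000, warmup = 2000)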
Here is how you would proceed if you wanted to add two-way interaction terms to the model. The following code adds the interaction terms to the last model we fit, which has random slopes for all main effects.
fit_withinteractions <- brm(yield ~ 1 + N + P + K + N:P + N:K + P:K + (1 + N + P + K | site) + (1 | block:site),
                            data = stvincent,
                            prior = c(prior(normal(0, 10), class = b)),
                            chains = 4,
                            iter = 2000,
                            warmup = 1000,
                            seed = 5,
                            file = 'fits/fit_2wayinteractions')
Then examine the model diagnostics by looking at the trace plots and the posterior predictive check plot (output not shown here). We should see “hairy caterpillar” trace plots and a posterior predictive check plot in which the fitted values drawn from the posterior have a distribution similar to that of the data.
plot(fit_withinteractions)
pp_check(fit_withinteractions)
Next, look at the model summary. Check that the R-hat statistics are all close to 1, which further confirms that the model converged and means we can make inferences based on the parameter estimates.
summary(fit_withinteractions)
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: yield ~ 1 + N + P + K + N:P + N:K + P:K + (1 + N + P + K | site) + (1 | block:site)
## Data: stvincent (Number of observations: 324)
## Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup draws = 4000
##
## Group-Level Effects:
## ~block:site (Number of levels: 36)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 1.43 0.31 0.88 2.09 1.00 1648 2692
##
## ~site (Number of levels: 9)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 4.55 1.47 2.45 8.04 1.00 2565 2767
## sd(N1N) 0.86 0.69 0.04 2.55 1.00 3215 2509
## sd(N2N) 3.41 1.05 1.87 5.91 1.00 2190 2709
## sd(N3N) 3.53 1.32 1.52 6.62 1.00 2796 2877
## sd(P1P) 1.17 0.85 0.05 3.12 1.00 2304 2196
## sd(P2P) 1.69 0.92 0.19 3.85 1.00 1824 1824
## sd(P3P) 1.03 0.88 0.04 3.31 1.00 2556 2340
## sd(K1K) 1.45 0.94 0.07 3.55 1.00 1735 1631
## sd(K2K) 1.48 0.85 0.15 3.49 1.00 2271 1809
## sd(K3K) 2.10 1.23 0.15 4.92 1.00 1870 1902
## cor(Intercept,N1N) -0.00 0.30 -0.57 0.56 1.00 7548 2878
## cor(Intercept,N2N) 0.03 0.24 -0.45 0.49 1.00 4399 3199
## cor(N1N,N2N) 0.07 0.30 -0.52 0.64 1.00 1799 2951
## cor(Intercept,N3N) -0.03 0.26 -0.53 0.46 1.00 4811 3514
## cor(N1N,N3N) 0.10 0.31 -0.51 0.65 1.00 2230 3004
## cor(N2N,N3N) 0.39 0.24 -0.13 0.79 1.00 4091 3537
## cor(Intercept,P1P) -0.03 0.29 -0.58 0.53 1.00 8010 2964
## cor(N1N,P1P) -0.03 0.30 -0.59 0.54 1.00 4453 2937
## cor(N2N,P1P) 0.20 0.30 -0.40 0.72 1.00 4753 3062
## cor(N3N,P1P) 0.10 0.29 -0.47 0.62 1.00 4575 3241
## cor(Intercept,P2P) 0.16 0.27 -0.38 0.66 1.00 6315 3349
## cor(N1N,P2P) 0.05 0.31 -0.55 0.61 1.00 3784 3260
## cor(N2N,P2P) -0.07 0.27 -0.57 0.47 1.00 4956 3422
## cor(N3N,P2P) -0.01 0.28 -0.55 0.53 1.00 4303 3124
## cor(P1P,P2P) 0.07 0.30 -0.51 0.62 1.00 3507 3343
## cor(Intercept,P3P) 0.02 0.29 -0.53 0.55 1.00 8439 2721
## cor(N1N,P3P) -0.03 0.30 -0.60 0.56 1.00 6471 3272
## cor(N2N,P3P) 0.01 0.29 -0.54 0.57 1.00 6904 3853
## cor(N3N,P3P) 0.01 0.30 -0.56 0.58 1.00 5793 3342
## cor(P1P,P3P) 0.04 0.30 -0.55 0.62 1.00 3126 2974
## cor(P2P,P3P) -0.01 0.30 -0.60 0.56 1.00 3964 3293
## cor(Intercept,K1K) 0.13 0.29 -0.45 0.65 1.00 5660 2756
## cor(N1N,K1K) -0.06 0.31 -0.63 0.54 1.00 3984 2904
## cor(N2N,K1K) 0.15 0.29 -0.44 0.66 1.00 4942 3356
## cor(N3N,K1K) 0.09 0.28 -0.47 0.61 1.00 4556 3312
## cor(P1P,K1K) -0.01 0.30 -0.58 0.57 1.00 3580 3341
## cor(P2P,K1K) 0.16 0.29 -0.45 0.68 1.00 2989 3384
## cor(P3P,K1K) -0.05 0.30 -0.61 0.54 1.00 3103 3458
## cor(Intercept,K2K) -0.06 0.27 -0.56 0.47 1.00 7185 3246
## cor(N1N,K2K) -0.01 0.31 -0.59 0.56 1.00 3761 3284
## cor(N2N,K2K) -0.24 0.27 -0.70 0.32 1.00 4686 3007
## cor(N3N,K2K) -0.16 0.28 -0.68 0.42 1.00 4682 3506
## cor(P1P,K2K) -0.08 0.30 -0.62 0.50 1.00 3081 3306
## cor(P2P,K2K) -0.09 0.29 -0.63 0.48 1.00 3633 3676
## cor(P3P,K2K) 0.02 0.30 -0.55 0.59 1.00 3121 3238
## cor(K1K,K2K) -0.05 0.29 -0.60 0.51 1.00 3324 3559
## cor(Intercept,K3K) -0.20 0.28 -0.70 0.38 1.00 6203 3154
## cor(N1N,K3K) -0.04 0.29 -0.60 0.53 1.00 4231 2887
## cor(N2N,K3K) 0.15 0.28 -0.41 0.66 1.00 4894 3056
## cor(N3N,K3K) 0.06 0.28 -0.50 0.60 1.00 5151 3604
## cor(P1P,K3K) -0.03 0.30 -0.58 0.55 1.00 3441 3372
## cor(P2P,K3K) -0.12 0.29 -0.65 0.44 1.00 3287 3471
## cor(P3P,K3K) 0.00 0.30 -0.59 0.57 1.00 3277 3502
## cor(K1K,K3K) 0.00 0.30 -0.56 0.56 1.00 3055 3332
## cor(K2K,K3K) -0.03 0.29 -0.59 0.52 1.00 3231 3645
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 13.06 1.65 9.77 16.22 1.00 1722 2446
## N1N 0.68 8.38 -15.61 16.81 1.00 6896 2569
## N2N 4.43 1.46 1.58 7.34 1.00 2859 2783
## N3N 0.78 8.85 -16.36 18.57 1.00 7255 2892
## P1P 1.29 8.07 -14.96 16.73 1.00 7439 2722
## P2P -0.11 1.09 -2.23 2.02 1.00 3988 2618
## P3P 0.34 8.90 -17.55 17.93 1.00 7989 2298
## K1K 0.89 7.99 -14.47 16.55 1.00 7448 2659
## K2K -0.86 1.06 -2.98 1.23 1.00 4117 2991
## K3K 0.54 8.46 -16.11 16.47 1.00 9987 2372
## N1N:P1P 0.38 8.40 -16.69 16.63 1.00 7645 2671
## N2N:P1P -0.01 10.23 -20.36 20.53 1.00 10692 3067
## N3N:P1P 0.96 8.84 -16.27 18.06 1.00 11485 2604
## N1N:P2P -0.05 10.24 -19.77 19.95 1.00 11189 2450
## N2N:P2P -0.20 1.05 -2.21 1.87 1.00 5960 2645
## N3N:P2P -0.11 9.78 -19.35 19.07 1.00 11261 2742
## N1N:P3P 0.22 8.92 -17.34 18.00 1.00 8277 2459
## N2N:P3P 0.00 10.24 -20.02 20.69 1.00 10253 2524
## N3N:P3P -0.10 9.86 -19.46 19.28 1.00 9312 2817
## N1N:K1K -0.05 8.54 -16.36 17.21 1.00 8500 2954
## N2N:K1K -0.18 10.29 -20.50 20.08 1.00 11373 2550
## N3N:K1K 0.61 8.90 -16.64 18.18 1.00 8808 2515
## N1N:K2K -0.09 10.04 -20.00 19.64 1.00 10335 2545
## N2N:K2K -0.92 1.02 -2.99 1.03 1.00 8082 3090
## N3N:K2K 0.12 10.29 -20.14 20.52 1.00 11469 2928
## N1N:K3K 0.48 8.61 -16.56 17.29 1.00 8659 2829
## N2N:K3K -0.07 10.16 -20.43 19.16 1.00 9578 2650
## N3N:K3K 0.00 9.74 -19.16 18.55 1.00 11632 2929
## P1P:K1K 0.40 8.14 -15.63 16.36 1.00 7982 2683
## P2P:K1K 0.05 10.12 -19.29 19.83 1.00 10626 2303
## P3P:K1K 0.26 8.91 -17.02 17.68 1.00 10223 2380
## P1P:K2K 0.05 10.42 -20.51 20.71 1.00 10861 2255
## P2P:K2K 1.93 1.02 -0.05 3.91 1.00 6542 2922
## P3P:K2K 0.29 10.14 -19.10 20.00 1.00 8588 2653
## P1P:K3K 0.64 8.80 -16.89 18.08 1.00 9355 2938
## P2P:K3K -0.05 9.77 -18.76 19.18 1.00 10862 2634
## P3P:K3K -0.04 10.36 -19.71 20.01 1.00 9846 2524
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 3.05 0.14 2.79 3.34 1.00 3992 2792
##
## Draws were sampled using sample(hmc). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
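All the Rhat values in this summary are 1.00. If you would rather check this programmatically than scan the table, here is a minimal sketch using the rhat() method that brms provides for fit objects (a common rule of thumb is that all values should be below about 1.01):

# Extract R-hat for every parameter and confirm the largest is close to 1.
max(rhat(fit_withinteractions), na.rm = TRUE)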
We can see here that the point estimates of the interaction slopes are mostly close to zero with wide credible intervals, indicating weak to nonexistent interactions.
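If you want to look at just the interaction coefficients without scrolling through the full summary, one option is a sketch like the following; the fixef() accessor is part of brms, while the grepl() filter on ‘:’ in the row names is just a convenience I am assuming here:

# Matrix of population-level estimates with 95% credible intervals;
# rows whose names contain ':' are the two-way interaction terms.
fe <- fixef(fit_withinteractions)
fe[grepl(':', rownames(fe), fixed = TRUE), ]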
Finally, we can use LOO cross-validation to compare the model to others. In this case I will just compare the model with the two-way interaction terms to the random-slope model with only the three main effects and no interaction terms that we fit during the main part of the lesson.
fit_withinteractions <- add_criterion(fit_withinteractions, 'loo')
loo_compare(fit_withinteractions, fit_randomNPKslopes)
## elpd_diff se_diff
## fit_randomNPKslopes 0.0 0.0
## fit_withinteractions -0.6 2.3
The difference in expected log predictive density (elpd_diff = -0.6) is small relative to its standard error (se_diff = 2.3), so the fit that includes the interaction terms performs no better in LOO cross-validation than the fit that ignores them. This is further evidence that interactions between fertilization treatments are not very strong in this particular dataset.