The data is the observation from your werewolf detector that the mailman is a werewolf. That is information you went out and collected, presumably to test the hypothesis that the mailman is a werewolf.
The likelihood is the probability of the data given the model. In this case, it’s the probability that the data point (the werewolf detector observing that the mailman is a werewolf) would be observed, given that the model is true (the model being the hypothesis that the mailman is a werewolf). Because the detector correctly identifies werewolves 99% of the time, the likelihood is 0.99.
The prior is the information from your field guide that 1 in every 1000 people is a werewolf. This is knowledge that you have before you set out to test whether anyone is a werewolf.
The posterior is the final result: given a positive werewolf test on an individual, our prior knowledge that 1 in every 1000 people is a werewolf, and the fact that our test is 99% accurate for both positive and negative results, the posterior probability that the mailman is a werewolf is about 9%.
Incidentally, this may seem like a surprisingly low probability, but it follows directly from Bayes’ Theorem. See the worked example on the Wikipedia page on Bayes’ Theorem; it uses drug testing as an example, which is not as fun as werewolf testing, but the math is the same.
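To make the arithmetic concrete, here is a quick sketch of that calculation in R (the variable names are just for illustration):

# P(positive test | werewolf) and P(positive test | not a werewolf),
# both implied by a test that is 99% accurate in each direction
p_pos_given_ww <- 0.99
p_pos_given_notww <- 0.01

# Prior: 1 in every 1000 people is a werewolf
prior_ww <- 1/1000

# Bayes' Theorem: P(werewolf | positive test)
(p_pos_given_ww * prior_ww) /
  (p_pos_given_ww * prior_ww + p_pos_given_notww * (1 - prior_ww))

## [1] 0.09016393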
Trace plot A resembles a “hairy caterpillar,” representing good model convergence. All Markov chains have arrived at the correct solution and are moving back and forth around it. The amount by which they move back and forth is proportional to the uncertainty we have about the value of the parameter.
In trace plot B, some of the Markov chains show trends in one direction or another, which represents poor model convergence. The chains are still moving around trying to find the correct solution. They are not mixing well, meaning they are not all moving back and forth within the same stationary distribution.
In the first case, we can tell the researcher they can use the posterior distribution to make some inferences. They can do things like calculate the median and quantiles of the distribution, and make predictions.
In the second case, we would recommend that the researcher either run the MCMC for a greater number of iterations, or change the prior distributions to constrain the chains to explore parameter space more narrowly, so that they are more likely to converge on the correct solution.
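For example, here is a sketch of how you might rerun a brms model with more iterations, assuming the original fitted model object is named fit (the object name and iteration counts are just for illustration):

# Refit the same model with twice as many iterations per chain,
# keeping the first half of each chain as warmup
fit_longer <- update(fit, chains = 4, iter = 4000, warmup = 2000)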
Here is how you would proceed if you wanted to add two-way interaction terms to the model. The following code adds all three two-way interactions to the last model we fit, the one with random slopes for all main effects.
fit_withinteractions <- brm(yield ~ 1 + N + P + K + N:P + N:K + P:K + (1 + N + P + K | site) + (1 | block:site),
data = stvincent,
prior = c(prior(normal(0, 10), class = b)),
chains = 4,
iter = 2000,
warmup = 1000,
seed = 5,
file = 'fits/fit_2wayinteractions')
Then, to examine the model diagnostics, look at the trace plots and the posterior predictive check plot (output not shown here). We should see “hairy caterpillar” trace plots, and a posterior predictive check plot in which the fitted values drawn from the posterior have a distribution similar to that of the observed data.
plot(fit_withinteractions)
pp_check(fit_withinteractions)
Next, look at the model summary. Check that the R-hat statistics are all close to 1, which further confirms that the model converged and means we can make inferences based on the parameter estimates.
summary(fit_withinteractions)
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: yield ~ 1 + N + P + K + N:P + N:K + P:K + (1 + N + P + K | site) + (1 | block:site)
## Data: stvincent (Number of observations: 324)
## Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup draws = 4000
##
## Multilevel Hyperparameters:
## ~block:site (Number of levels: 36)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 1.44 0.32 0.85 2.12 1.00 1483 2415
##
## ~site (Number of levels: 9)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 4.54 1.49 2.49 8.30 1.00 2013 2466
## sd(N1N) 0.86 0.70 0.04 2.57 1.00 2812 2711
## sd(N2N) 3.40 1.08 1.85 6.00 1.00 2136 2517
## sd(N3N) 3.54 1.37 1.39 6.69 1.00 2155 2030
## sd(P1P) 1.16 0.83 0.05 3.12 1.00 1710 1612
## sd(P2P) 1.70 0.93 0.18 3.82 1.00 1697 1480
## sd(P3P) 1.04 0.85 0.03 3.17 1.00 2702 2434
## sd(K1K) 1.45 0.94 0.08 3.66 1.00 1726 1881
## sd(K2K) 1.43 0.87 0.08 3.40 1.00 1700 1314
## sd(K3K) 2.10 1.19 0.20 4.74 1.00 1903 1835
## cor(Intercept,N1N) -0.00 0.30 -0.56 0.56 1.00 6853 2640
## cor(Intercept,N2N) 0.02 0.24 -0.45 0.49 1.00 3881 3146
## cor(N1N,N2N) 0.07 0.31 -0.55 0.63 1.00 1637 2496
## cor(Intercept,N3N) -0.03 0.26 -0.51 0.46 1.00 4316 2821
## cor(N1N,N3N) 0.08 0.30 -0.53 0.63 1.00 1972 2591
## cor(N2N,N3N) 0.39 0.25 -0.14 0.80 1.00 4326 3286
## cor(Intercept,P1P) -0.03 0.29 -0.58 0.54 1.00 6276 2980
## cor(N1N,P1P) -0.03 0.30 -0.61 0.56 1.00 4301 2479
## cor(N2N,P1P) 0.21 0.30 -0.42 0.72 1.00 4328 2939
## cor(N3N,P1P) 0.11 0.29 -0.46 0.64 1.00 4241 3521
## cor(Intercept,P2P) 0.17 0.27 -0.37 0.66 1.00 6054 3049
## cor(N1N,P2P) 0.05 0.30 -0.53 0.62 1.00 3982 3502
## cor(N2N,P2P) -0.07 0.28 -0.60 0.49 1.00 4852 3130
## cor(N3N,P2P) -0.01 0.27 -0.53 0.51 1.00 4749 3571
## cor(P1P,P2P) 0.06 0.31 -0.53 0.62 1.00 3189 3373
## cor(Intercept,P3P) 0.01 0.30 -0.55 0.57 1.00 7568 2707
## cor(N1N,P3P) -0.02 0.30 -0.60 0.56 1.00 5860 2965
## cor(N2N,P3P) 0.01 0.30 -0.56 0.57 1.00 5930 3131
## cor(N3N,P3P) 0.01 0.30 -0.55 0.58 1.00 4798 3303
## cor(P1P,P3P) 0.04 0.30 -0.55 0.59 1.00 2979 3358
## cor(P2P,P3P) -0.01 0.30 -0.57 0.58 1.00 3342 3385
## cor(Intercept,K1K) 0.13 0.29 -0.45 0.65 1.00 6553 3000
## cor(N1N,K1K) -0.05 0.31 -0.63 0.56 1.00 4459 3123
## cor(N2N,K1K) 0.16 0.28 -0.43 0.67 1.00 4289 3113
## cor(N3N,K1K) 0.09 0.29 -0.49 0.63 1.00 4442 3239
## cor(P1P,K1K) 0.01 0.30 -0.55 0.56 1.00 3517 3471
## cor(P2P,K1K) 0.17 0.29 -0.42 0.68 1.00 3226 3506
## cor(P3P,K1K) -0.05 0.30 -0.61 0.53 1.00 3205 3429
## cor(Intercept,K2K) -0.07 0.28 -0.60 0.50 1.00 5495 2856
## cor(N1N,K2K) -0.01 0.30 -0.58 0.58 1.00 3176 2896
## cor(N2N,K2K) -0.24 0.28 -0.72 0.35 1.00 4894 3187
## cor(N3N,K2K) -0.17 0.28 -0.68 0.41 1.00 4653 3504
## cor(P1P,K2K) -0.09 0.29 -0.62 0.51 1.00 3092 3712
## cor(P2P,K2K) -0.08 0.29 -0.63 0.48 1.00 3800 3359
## cor(P3P,K2K) 0.02 0.30 -0.56 0.58 1.00 2699 3376
## cor(K1K,K2K) -0.04 0.29 -0.59 0.54 1.00 3190 3517
## cor(Intercept,K3K) -0.20 0.28 -0.69 0.38 1.00 5982 3278
## cor(N1N,K3K) -0.04 0.30 -0.59 0.52 1.00 4378 3265
## cor(N2N,K3K) 0.16 0.28 -0.42 0.68 1.00 4685 3579
## cor(N3N,K3K) 0.07 0.29 -0.50 0.61 1.00 4265 3180
## cor(P1P,K3K) -0.04 0.30 -0.60 0.55 1.00 3347 3551
## cor(P2P,K3K) -0.12 0.29 -0.65 0.46 1.00 3432 3364
## cor(P3P,K3K) -0.01 0.30 -0.56 0.57 1.00 3185 3761
## cor(K1K,K3K) 0.02 0.30 -0.55 0.58 1.00 3057 3481
## cor(K2K,K3K) -0.03 0.29 -0.58 0.53 1.00 2540 2813
##
## Regression Coefficients:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 13.20 1.72 9.69 16.62 1.00 1647 2178
## N1N 0.68 8.23 -15.20 16.77 1.00 6866 2840
## N2N 4.42 1.47 1.46 7.23 1.00 2304 2550
## N3N 0.75 8.96 -17.54 18.70 1.00 7675 2837
## P1P 0.99 8.23 -14.98 16.81 1.00 6737 2947
## P2P -0.12 1.06 -2.20 1.94 1.00 3892 2903
## P3P 0.49 8.84 -16.76 17.48 1.00 6943 2960
## K1K 0.89 8.29 -15.46 16.96 1.00 7562 2948
## K2K -0.90 1.07 -3.00 1.17 1.00 3500 2839
## K3K 0.45 8.75 -17.67 17.89 1.00 7807 2767
## N1N:P1P 0.30 8.42 -16.74 16.99 1.00 6895 2760
## N2N:P1P 0.03 10.07 -20.01 20.20 1.00 10965 2585
## N3N:P1P 0.81 8.70 -16.15 17.76 1.00 8078 2762
## N1N:P2P 0.01 10.02 -19.33 19.70 1.00 9609 2926
## N2N:P2P -0.22 0.99 -2.17 1.73 1.00 8406 3409
## N3N:P2P 0.06 9.97 -18.89 19.19 1.00 9132 2690
## N1N:P3P 0.04 8.72 -17.08 17.30 1.00 8235 2933
## N2N:P3P 0.07 9.78 -19.04 19.22 1.00 9633 3066
## N3N:P3P -0.05 9.87 -19.49 19.42 1.00 9002 2780
## N1N:K1K -0.06 8.00 -15.58 15.59 1.00 7124 3126
## N2N:K1K 0.03 10.06 -19.39 19.37 1.00 9321 2624
## N3N:K1K 0.69 8.97 -16.91 17.79 1.00 8192 2748
## N1N:K2K 0.09 9.82 -18.67 19.35 1.00 9720 2822
## N2N:K2K -0.90 1.01 -2.85 1.12 1.00 5398 3158
## N3N:K2K -0.07 9.95 -19.33 19.13 1.00 9941 2978
## N1N:K3K 0.79 8.77 -16.24 17.87 1.00 8141 2744
## N2N:K3K 0.07 9.87 -19.89 20.02 1.00 10128 3010
## N3N:K3K -0.03 9.97 -19.22 19.63 1.00 10602 2679
## P1P:K1K 0.75 8.21 -15.01 16.52 1.00 7409 2983
## P2P:K1K 0.08 9.86 -19.50 19.95 1.00 8854 2920
## P3P:K1K 0.32 8.96 -17.44 17.62 1.00 7869 2776
## P1P:K2K -0.02 10.14 -20.08 20.13 1.00 9891 2734
## P2P:K2K 1.96 1.03 -0.02 3.92 1.00 6497 3149
## P3P:K2K 0.12 9.74 -19.04 19.08 1.00 9593 2704
## P1P:K3K 0.76 8.73 -16.56 17.77 1.00 7699 2602
## P2P:K3K 0.12 9.97 -19.69 19.76 1.00 10342 2816
## P3P:K3K 0.04 9.98 -19.89 19.10 1.01 9257 2824
##
## Further Distributional Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 3.05 0.14 2.79 3.34 1.00 3016 2902
##
## Draws were sampled using sample(hmc). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
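With this many parameters, scanning all the R-hat values by eye is tedious. As a quick programmatic check (a minimal sketch using the rhat() method that brms provides), you could confirm that the largest R-hat value is close to 1:

# All R-hat values from the fitted model; at convergence each should
# be very close to 1 (a common threshold is < 1.01)
max(rhat(fit_withinteractions))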
Looking at the summary, we can see that the point estimates of the interaction slopes are mostly close to zero, with wide credible intervals, indicating weak to nonexistent interactions.
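If you would rather visualize one of the interactions than read it off the table, brms can plot modeled means for a pair of factors with conditional_effects(); for example, a sketch for the N-by-P interaction:

# Plot modeled yield across the levels of N, with separate lines
# for each level of P
conditional_effects(fit_withinteractions, effects = 'N:P')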
Finally, we can use LOO cross-validation to compare the model to others. In this case I will just compare the model with the two-way interaction terms to the random-slope model with only the three main effects and no interactions, which we fit during the main part of the lesson.
fit_withinteractions <- add_criterion(fit_withinteractions, 'loo')
loo_compare(fit_withinteractions, fit_randomNPKslopes)
## elpd_diff se_diff
## fit_randomNPKslopes 0.0 0.0
## fit_withinteractions -1.3 2.3
This shows us that the fit including the interaction terms does not perform any better in LOO cross-validation than the fit that ignores interactions: the difference in expected log predictive density (-1.3) is smaller than its standard error (2.3). This is further evidence that interactions between fertilization treatments are not very strong in this particular dataset.
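A common rule of thumb is that an elpd difference smaller than about twice its standard error is negligible. Here is a sketch of how you could check that against the comparison table, rather than a formal test:

# loo_compare() returns a matrix; row 2 is the lower-ranked model
comparison <- loo_compare(fit_withinteractions, fit_randomNPKslopes)
abs(comparison[2, 'elpd_diff']) < 2 * comparison[2, 'se_diff']

## [1] TRUE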