Ratings provided on visual analog scales (VAS), or slider scales, are unlikely to be normally distributed. Nevertheless, researchers typically analyze analog scale ratings with methods that assume normality, such as ANOVAs, t-tests, and correlations. A potentially better model for analog ratings, which are typically skewed and bounded by lower and upper limits, is the so-called zero-one-inflated beta model. In this post, I explain this model, illustrate its use with simulated data, and compare its performance to t-tests when comparing two groups' slider ratings.
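To make the model concrete, here is a minimal sketch of one plausible zero-one-inflated beta data-generating process: with some probability a rating lands exactly on an endpoint (0 or 1), and otherwise it is drawn from a beta distribution. The function name `rzoib` and the parameter values are illustrative assumptions, not taken from the post itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def rzoib(n, alpha=0.1, gamma=0.5, mu=0.3, phi=5.0):
    """Simulate n zero-one-inflated beta ratings (illustrative parameterization).

    alpha: probability a rating is exactly 0 or 1 (the "inflation" part)
    gamma: probability an extreme rating is 1 rather than 0
    mu, phi: mean and precision of the beta component for non-extreme ratings
    """
    extreme = rng.random(n) < alpha          # which ratings hit an endpoint
    ones = rng.random(n) < gamma             # which endpoints are 1 vs 0
    y = rng.beta(mu * phi, (1 - mu) * phi, size=n)  # continuous component
    y[extreme] = ones[extreme].astype(float)        # overwrite with 0s and 1s
    return y

y = rzoib(1000)
```

Simulated data like this are skewed, bounded, and have point masses at the endpoints, exactly the features of slider ratings that a normal-theory t-test ignores.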
Don't set R's working directory from an R script.
Assessing the correlations between psychological variables, such as abilities and improvements, is one essential goal of psychological science. However, psychological variables are usually only available to the researcher as estimated parameters in mathematical and statistical models. The parameters are often estimated from small samples of observations for each research participant, which results in uncertainty (aka sampling error) about the participant-specific parameters. Ignoring this uncertainty can lead to suboptimal inferences, such as asserting findings with too much confidence. Hierarchical models alleviate this problem by accounting for each parameter's uncertainty at both the person level and the average level. However, common maximum likelihood estimation methods can have difficulty converging and finding appropriate values for the parameters that describe the person-level parameters' spread and correlation. In this post, I discuss how Bayesian hierarchical models solve this problem, and advocate their use in estimating psychological variables and their correlations.
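One well-known consequence of ignoring per-person estimation uncertainty is attenuation: correlations computed from noisy person-level estimates are biased toward zero relative to the correlation of the underlying true parameters. A small simulation sketch, with sample sizes and noise levels chosen purely for illustration, makes the point:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000  # number of participants (assumed value for illustration)

# "True" person-level parameters for two variables, correlated at r = 0.7
true = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=n)

# In practice we only observe noisy per-person estimates (few trials per
# person); model that here as true values plus independent estimation error
estimates = true + rng.normal(0.0, 1.0, size=(n, 2))

r_true = np.corrcoef(true.T)[0, 1]       # close to 0.7
r_est = np.corrcoef(estimates.T)[0, 1]   # attenuated toward zero
```

A hierarchical model avoids this by estimating the person-level spread and correlation jointly with the person-specific parameters, rather than correlating noisy point estimates after the fact.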
It appears that there is an imbalance in what many beginning Bayesian data analysts think about Bayesian data analysis (BDA). From casual observation and discussions, I've noticed a tendency for people to equate Bayesian methods with computing Bayes factors; that is, testing (usually null) hypotheses using Bayesian model comparison.