Statistical power is defined as the probability, before a study is performed, that a particular comparison will achieve statistical significance at some predetermined level (typically a p-value below 0.05), given some assumed true effect size
If a certain effect of interest exists (e.g. a difference between two groups), power is the chance that we actually find the effect in a given study
A power analysis is performed by first hypothesizing an effect size, then making some assumptions about the variation in the data and the sample size of the study to be conducted, and finally using probability calculations to determine the chance of the p-value being below the threshold
The conventional view is that you should avoid low-power studies because they are unlikely to succeed
There are several problems with this view, but a power analysis is often required by research funding bodies
The uses of simulation methods
Simulating data means generating random data sets with known properties using code (or some other method). This can be useful in various contexts.
To better understand our models. Probability models mimic variation in the world, and the tools of simulation can help us better understand this variation. Patterns of randomness often run contrary to human intuition, and simulation helps train our intuitions about averages and variation
To run statistical analyses (e.g., simulating a null distribution against which to compare a sample)
To approximate the sampling distribution of data and propagate this to the sampling distribution of statistical estimates and procedures
sample(): sampling elements from an R object with or without replacement
replicate(): often plays a role in conjunction with sampling functions; it is used to evaluate an expression repeatedly, N times (see the short sketch after this list)
From non-base packages:
MASS::mvrnorm(): multivariate normal; sampling multiple variables with a known correlation structure (i.e., we can tell R how variables should be correlated with one another) and normally distributed errors
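For example, a minimal sketch of the two base functions together (the values are illustrative):
# replicate() evaluates an expression repeatedly; here we draw 3 values
# from 1:6 with replacement, and repeat that 5 times
replicate(n = 5, expr = sample(x = 1:6, size = 3, replace = TRUE))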
Sampling from a uniform distribution
The runif function returns some number (n) of random numbers from a uniform distribution with a range from \(a\) (min) to \(b\) (max) such that \(X\sim\mathcal U(a,b)\) (verbally, \(X\) is sampled from a uniform distribution with the parameters \(a\) and \(b\)), where \(-\infty < a < b < \infty\) (verbally, \(a\) is greater than negative infinity but less than \(b\), and \(b\) is finite). The default is to draw from a standard uniform distribution (i.e., \(a = 0\) and \(b = 1\)):
# Sample a vector of 10,000 numbers and store the results in the object `rand_unifs`
# Note that the numbers will be different each time we re-run the `runif` function.
# If we want to recreate the same sample, we should set a `seed` number first
rand_unifs <- runif(n = 10000, min = 0, max = 1)
To visualise the entire sample, we can plot it on a histogram:
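# A sketch of the plotting call implied above, using the sample generated earlier
hist(rand_unifs)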
Sampling from a normal distribution
The rnorm function returns some number (n) of randomly generated values given a set mean (\(\mu\); mean) and standard deviation (\(\sigma\); sd), such that \(X\sim\mathcal N(\mu,\sigma^2)\). The default is to draw from a standard normal (a.k.a., Gaussian) distribution (i.e., \(\mu = 0\) and \(\sigma = 1\)):
rand_norms_10000 <- rnorm(n = 10000, mean = 0, sd = 1)
print(rand_norms_10000[1:20])
Histograms allow us to check how samples from the same distribution might vary.
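For example, a minimal sketch comparing histograms of two independent samples from the same distribution (the object names are illustrative):
# two independent samples from the same standard normal distribution
rand_norms_a <- rnorm(n = 10000, mean = 0, sd = 1)
rand_norms_b <- rnorm(n = 10000, mean = 0, sd = 1)
hist(rand_norms_a)
hist(rand_norms_b)  # similar, but not identical, shape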
Exercise: Compare the above distribution with a normal distribution that has a standard deviation of 2 instead of 1.
Sample 10,000 new values in rnorm with sd = 2 instead of sd = 1 and create a new histogram with hist.
To see what the distribution of sampled data might look like given a low sample size (e.g., 10), repeat the process of sampling from rnorm(n = 10, mean = 0, sd = 1) multiple times and look at the shape of the resulting histogram.
Sampling from a Poisson distribution
A Poisson process describes events happening with some given probability over an interval of time or space, such that \(X\sim Poisson(\lambda)\), where the rate parameter \(\lambda\) is both the mean and variance of the Poisson distribution (note that by definition \(\lambda > 0\), and although \(\lambda\) can be any positive real number, the sampled values are always non-negative integers, as with count data).
Sampling from a Poisson distribution can be done in R with rpois, which takes only two arguments specifying the number of values to be returned (n) and the rate parameter (lambda). There are no default values for rpois.
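For example, a minimal sketch (the value of lambda is illustrative):
# sample 10,000 values from a Poisson distribution with rate parameter 1.5
rand_poissons <- rpois(n = 10000, lambda = 1.5)
hist(rand_poissons)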
Sampling from a binomial distribution
A binomial distribution describes the number of successes for some number of independent trials (\(\Pr(success) = p\)).
The rbinom function returns the number of successes after size trials, in which the probability of success in each trial is prob.
Sampling from a binomial distribution in R with rbinom is a bit more complex than using runif, rnorm, or rpois.
Like those previous functions, the rbinom function returns some number (n) of random numbers, but the arguments and output can be slightly confusing at first.
Sampling from a binomial distribution
For example, suppose we want to simulate the flipping of a fair coin 1000 times, and we want to know how many times that coin comes up heads (success). We can do this with the following code:
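# A sketch of the call described in the text; without the original random seed,
# the exact number of heads will differ between runs
rbinom(n = 1, size = 1000, prob = 0.5)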
The above result shows that the coin came up heads 491 times. But note the (required) argument n. This allows us to set the number of sequences to run.
If we instead set n = 2, then this could simulate the flipping of a fair coin 1000 times once to see how many times heads comes up, then repeating the whole process a second time to see how many times heads comes up again (or, if it is more intuitive, the flipping of two separate fair coins 1000 times at the same time).
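A minimal sketch of that call (the exact counts reported below depend on the random seed):
rbinom(n = 2, size = 1000, prob = 0.5)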
A coin was flipped 1000 times and returned 496 heads, and then another fair coin was flipped 1000 times and returned 506 heads.
As with the rnorm and runif functions, we can check to see what the distribution of the binomial function looks like if we repeat this process.
Sampling from a binomial distribution
Suppose that we want to see the distribution of the number of times heads comes up after 1000 flips. We can simulate the process of flipping 1000 times in a row with 10000 different coins:
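# A sketch of the simulation described above; the object name is illustrative
coin_flips <- rbinom(n = 10000, size = 1000, prob = 0.5)
hist(coin_flips)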
Sometimes it is useful to sample a set of values from a vector or list. The R function sample is very flexible for sampling a subset of numbers or elements from some structure (x) in R according to some set probabilities (prob).
Elements can be sampled from x some number of times (size) with or without replacement (replace), though an error will be returned if the size of the sample is larger than the number of elements in x while replace = FALSE (the default).
Suppose we want to ask R to sample ten values from the numbers one to ten, each with equal probability:
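# A sketch of the call implied by the text; the object name rand_number_10
# is taken from the description below
rand_number_10 <- sample(x = 1:10, size = 10)
print(rand_number_10)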
Note that all numbers from 1 to 10 have been sampled, but in a random order. This is because the default is to sample without replacement, meaning that once a number has been sampled for the first element in rand_number_10, it is no longer available to be sampled again.
We can change this and allow for sampling with replacement:
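# The same call, now with replacement; which values repeat depends on the random seed
rand_number_10 <- sample(x = 1:10, size = 10, replace = TRUE)
print(rand_number_10)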
Note that the numbers {4, 9, 10} are now repeated in the set of randomly sampled values above.
Random sampling using sample
So far, because we have not specified a probability vector prob, the function assumes that every element in 1:10 is sampled with equal probability
Here’s an example in which the numbers 1-5 are sampled with a probability of 0.05, while the numbers 6-10 are sampled with a probability of 0.15, thereby biasing sampling toward larger numbers; note that these probabilities need to sum to 1.
prob_vec <- c(rep(x = 0.05, times = 5), rep(x = 0.15, times = 5))
rand_num_bias <- sample(x = 1:10, size = 10, replace = TRUE, prob = prob_vec)
print(rand_num_bias)
[1] 10 8 1 10 7 8 4 10 10 5
Sampling random characters from a list
We can also sample characters from a list of elements; it is no different from sampling numbers
For example, if we want to create a simulated data set that includes three different species of some plant or animal, we could create a vector of species identities from which to sample:
species <- c("species_A", "species_B", "species_C")
We can then sample from these three possible categories. For example:
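# A sketch; the sample size of 12 is illustrative
sample(x = species, size = 12, replace = TRUE)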
We can generate variables \(X_{1}\) and \(X_{2}\) that have a known correlation \(\rho\) with one another.
For example: two standard normal random variables with a sample size of 10000 and a correlation of 0.3 between them:
N <- 10000
rho <- 0.3
x1 <- rnorm(n = N, mean = 0, sd = 1)
x2 <- (rho * x1) + sqrt(1 - rho * rho) * rnorm(n = N, mean = 0, sd = 1)
These variables are generated by first simulating the sample \(x_{1}\) (x1 above) from a standard normal distribution. Then, \(x_{2}\) (x2 above) is calculated as \(x_{2} = \rho x_{1} + \sqrt{1 - \rho^{2}}\,x_{rand}\), where \(x_{rand}\) is a sample from a normal distribution with the same variance as \(x_{1}\).
Simulating data with known correlations
Does the correlation equal rho (with some sampling error)?
cor(x1, x2)
[1] 0.2967485
Simulating data with known correlations
There is a more efficient way to generate any number of variables with different variances and correlations to one another.
We need to use the MASS library, which can be installed and loaded as below:
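install.packages("MASS")  # only needs to be run once
library(MASS)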
In the MASS library, the function mvrnorm can be used to generate any number of variables for a pre-specified covariance structure.
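For example, a minimal sketch (the means and the covariance matrix values are illustrative):
library(MASS)
mu <- c(0, 0, 0)                    # means of three variables
Sigma <- matrix(c(1.0, 0.3, 0.5,
                  0.3, 1.0, 0.2,
                  0.5, 0.2, 1.0),
                nrow = 3)           # covariance matrix (variances of 1, so these are also correlations)
sim_dat <- mvrnorm(n = 10000, mu = mu, Sigma = Sigma)
cor(sim_dat)                        # should be close to Sigma, give or take sampling error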
Simulating a regression design
We can use simulation to test rather complex study designs
Imagine you are interested in students' attitudes towards smoking and how they depend on the medium of the message and the focus of the message
We want to know whether people's attitude is different after seeing a visual anti-smoking message (e.g. the warning pictures on cigarette packages) vs. a text message (the text belonging to that picture)
We are also interested in whether the attitude that people report is different after seeing a message about the consequences for other people (e.g. smoking can harm your loved ones) as compared to for themselves (e.g. smoking can cause cancer)
Study design:
DV: attitude towards smoking (0-100)
IV1: medium (text vs. visual)
IV2: focus (internal vs. external)
That is, there are 4 groups:
group_TI will receive text-messages that are internal
group_TE will receive text-messages that are external
group_VI will receive visual messages that are internal
group_VE will receive visual messages that are external
Example: simulating a regression design
Assume we expect people's attitudes to be more negative after seeing a visual rather than a text message when the focus is internal (i.e. the message is about oneself), because it might be difficult to imagine oneself getting cancer after reading a text, whereas seeing a picture might cause fear regardless
For the external focus, on the other hand, we expect a more negative attitude after reading a text as compared to seeing a picture, as imagining a loved one getting hurt might have more impact on attitudes than seeing a stranger in a picture suffering the consequences of second-hand smoking
On average, we expect the internal-focus messages to be associated with lower attitude scores than the external-focus messages, but we expect no main effect of visual vs. text messages
Example: simulating a regression design
Let's visualize some rough means that show the pattern we described in words earlier and see where we are going
We could make the overall mean of the internal-focus groups (group_TI and group_VI) 20 and the mean of the external-focus groups (group_TE and group_VE) 50 (this would already reflect the main effect, but also a belief that smoking attitudes are on average quite negative, as we assume both means to be on the low end of the scale)
Assume that the mean of group_TI is 30 while the mean of group_VI is 10, and that the mean of group_TE is 40 and the mean of group_VE is 60
focus <- rep(c("internal", "external"), each = 2)
media <- rep(c("text", "visual"), times = 2)
mean_TI <- 30
mean_VI <- 10
mean_TE <- 40
mean_VE <- 60
pd <- data.frame(score = c(mean_TI, mean_VI, mean_TE, mean_VE),
                 focus = focus, media = media)
interaction.plot(pd$focus, pd$media, pd$score, ylim = c(0, 100))
These group differences are quite large; let's also plot a more modest, arguably more realistic set of means:
focus <- rep(c("internal", "external"), each = 2)
media <- rep(c("text", "visual"), times = 2)
mean_TI <- 43
mean_VI <- 40
mean_TE <- 45
mean_VE <- 47
pd <- data.frame(score = c(mean_TI, mean_VI, mean_TE, mean_VE),
                 focus = focus, media = media)
interaction.plot(pd$focus, pd$media, pd$score, ylim = c(0, 100))
Example: simulating a regression design
In the new example there is still a difference between the two media groups on average, but it is only 0.50 points, so arguably it is small enough to represent the assumption of no effect, as in real life an effect that is exactly 0 is rather rare
We also need to come up with a reasonable standard deviation; if the mean is around 50 and we want most people to score below 80, we can put the 2-SD bound at 80, giving a standard deviation of (80 - 50)/2 = 15
Let's assume that each of our groups has a standard deviation of 15 points.
n <- 1e5
group_TI <- rnorm(n, 43, 15)
group_VI <- rnorm(n, 40, 15)
group_TE <- rnorm(n, 45, 15)
group_VE <- rnorm(n, 47, 15)
participant <- c(1:(n*4))
focus <- rep(c("internal", "external"), each = n*2)
media <- rep(c("text", "visual"), each = n, times = 2)
data <- data.frame(participant = participant, focus = focus, media = media,
                   score = c(group_TI, group_VI, group_TE, group_VE))
summary(data)
participant focus media score
Min. :1e+00 Length:400000 Length:400000 Min. :-26.82
1st Qu.:1e+05 Class :character Class :character 1st Qu.: 33.41
Median :2e+05 Mode :character Mode :character Median : 43.69
Mean :2e+05 Mean : 43.72
3rd Qu.:3e+05 3rd Qu.: 54.00
Max. :4e+05 Max. :115.45
Ready for power-analysis
Some additional assumptions: suppose we have enough funding for a sizeable data collection and the aim is to ensure that we do not draw unwarranted conclusions from the research
We should then set the alpha-level at a more conservative value (\(\alpha = .001\)); with this, if there is truly no interaction effect, we expect to wrongly conclude that there is one in only about 1 in every 1,000 experiments
We also want to be sure that we detect an existing effect, so we keep our power high at 95%; with this, if there is an interaction effect of the assumed size, we expect to detect it in 19 out of 20 cases (and miss it in only 1 out of 20, or 5%)
Running the power simulation can be computationally demanding, and the code can take a long time to complete; it is advisable to start with a coarse grid of sample sizes (e.g. n = 10, n = 100, n = 200, etc.) to get a rough idea of where the loop will end. The search can then be narrowed to identify a more precise sample size.
Ready for power-analysis
set.seed(1)
n_sims <- 1000          # we want 1000 simulations
p_vals <- c()
power_at_n <- c(0)      # this vector will contain the power for each sample-size (it needs the initial 0 for the while-loop to work)
n <- 100                # sample-size; start at 100 as we can be pretty sure this will not suffice for such a small effect
n_increase <- 100       # step size by which n should be increased
i <- 2
power_crit <- .95
alpha <- .001
while(power_at_n[i-1] < power_crit){
  for(sim in 1:n_sims){
    group_TI <- rnorm(n, 43, 15)
    group_VI <- rnorm(n, 40, 15)
    group_TE <- rnorm(n, 45, 15)
    group_VE <- rnorm(n, 47, 15)
    participant <- c(1:(n*4))
    focus <- rep(c("internal", "external"), each = n*2)
    media <- rep(c("text", "visual"), each = n, times = 2)
    data <- data.frame(participant = participant, focus = focus, media = media,
                       score = c(group_TI, group_VI, group_TE, group_VE))
    data$media_sum_num <- ifelse(data$media == "text", 1, -1)     # apply sum-to-zero coding
    data$focus_sum_num <- ifelse(data$focus == "external", 1, -1)
    lm_int  <- lm(score ~ 1 + focus_sum_num + media_sum_num + focus_sum_num:media_sum_num, data = data)  # fit the model with the interaction
    lm_null <- lm(score ~ 1 + focus_sum_num + media_sum_num, data = data)                                # fit the model without the interaction
    p_vals[sim] <- anova(lm_int, lm_null)$`Pr(>F)`[2]             # store the p-value of the model comparison
  }
  print(n)
  power_at_n[i] <- mean(p_vals < alpha)  # power = proportion of p-values smaller than the alpha-level of .001
  names(power_at_n)[i] <- n
  n <- n + n_increase                    # increase sample-size by 100 for low-resolution testing first
  i <- i + 1                             # increase index of the while-loop by 1 to store the next power value
}