How to Perform an F-Test in Python

In statistics, many tests are used to compare different samples or groups and draw conclusions about populations. These techniques are commonly known as statistical tests or hypothesis tests. They focus on the likelihood or probability of obtaining the observed data under the assumption that it is random or follows a specific hypothesis. These tests provide a framework for weighing evidence for or against a certain hypothesis.

A statistical test begins with the formation of a null hypothesis (H0) and an alternative hypothesis (Ha). The null hypothesis represents the default or no-effect assumption, while the alternative hypothesis suggests a specific relationship or effect.

Different statistical test methods are available to calculate the probability, typically measured as a p-value, of obtaining the observed data. The p-value indicates the likelihood of observing the data or more extreme results assuming the null hypothesis is true. Based on the calculated p-value and a predetermined significance level, researchers make a decision to either accept or reject the null hypothesis. 
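
As a minimal sketch of this decision rule (the p-value and alpha below are made-up numbers for illustration):

```python
# Hypothetical p-value obtained from some statistical test
p_value = 0.03

# Predetermined significance level (commonly 0.05)
alpha = 0.05

if p_value < alpha:
    print("Reject the null hypothesis")       # evidence against H0
else:
    print("Fail to reject the null hypothesis")
```

Here 0.03 < 0.05, so the null hypothesis would be rejected at the 5% significance level.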

There are various statistical tests, such as the T-test, Chi-squared test, ANOVA, Z-test, and F-test, that are used to compute the p-value. In this article, we will learn about the F-test.

F-test

The F-test is a statistical test used in hypothesis testing to compare the variances of two or more samples or populations and determine whether they are significantly different.

The F-statistic is the test statistic it relies on: a ratio of variances. When comparing two samples, it is calculated by dividing one sample variance by the other; in ANOVA, it is the ratio of the variance between groups to the variance within groups.

F=\frac{\text{Variance Between Groups}}{\text{Variance Within Groups}}

By performing the F-test, we compare the calculated F statistic to a critical value or a specified significance level. If the results of the F-test are statistically significant, meaning that the calculated F statistic exceeds the critical value, we can reject the null hypothesis, which assumes equal variances. On the other hand, if the results are not statistically significant, we fail to reject the null hypothesis, indicating that there is not enough evidence to conclude that the variances are significantly different.

The F-test is used in statistics and machine learning for comparing variances or testing the overall significance of a statistical model, such as in the analysis of variance (ANOVA) or regression analysis.

In this article, we will look at how to perform an F-test in the Python programming language. SciPy's stats.f object, given the required parameters, provides the F-distribution needed to carry out the F-test on the given data.

scipy.stats.f(): An F continuous random variable, defined with a standard format and some shape parameters to complete its specification.

Syntax: scipy.stats.f()

Parameters:

  • x : quantiles
  • q : lower or upper tail probability
  • dfn, dfd : shape parameters (numerator and denominator degrees of freedom)
  • loc : location parameter
  • scale : scale parameter (default=1)
  • size : shape of random variates
  • moments : composed of letters ['mvsk'] specifying which moments to compute
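
For example, the distribution object can be queried directly. The degrees of freedom below (dfn=5, dfd=10) are arbitrary values chosen for illustration:

```python
import scipy.stats as stats

dfn, dfd = 5, 10  # arbitrary numerator/denominator degrees of freedom

# Density and cumulative probability at the quantile x = 1.5
print(stats.f.pdf(1.5, dfn, dfd))
print(stats.f.cdf(1.5, dfn, dfd))

# Percent-point function (inverse CDF): the 95th-percentile critical value
print(stats.f.ppf(0.95, dfn, dfd))

# First moments of the distribution ('mv' = mean and variance)
mean, var = stats.f.stats(dfn, dfd, moments='mv')
print(mean, var)
```

Note that ppf is the inverse of cdf, which is how critical values for a chosen significance level are obtained later in this article.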

In this example, we will generate two random samples, compute their sample variances, and use scipy.stats.f.cdf() to obtain the F-test p-value in the Python programming language.

Python3

import numpy as np
import scipy.stats as stats
 
# Create the data for two groups
group1 = np.random.rand(25)
group2 = np.random.rand(20)
 
# Calculate the sample variances
variance1 = np.var(group1, ddof=1)
variance2 = np.var(group2, ddof=1)
 
# Calculate the F-statistic
f_value = variance1 / variance2
 
# Calculate the degrees of freedom
df1 = len(group1) - 1
df2 = len(group2) - 1
 
# Calculate the p-value (stats.f.cdf gives the lower-tail probability;
# to test variance1 > variance2, the upper tail 1 - cdf would be used)
p_value = stats.f.cdf(f_value, df1, df2)
 
# Print the results
print('Degree of freedom 1:',df1)
print('Degree of freedom 2:',df2)
print("F-statistic:", f_value)
print("p-value:", p_value)


Output (the values below will vary between runs, since no random seed is set):

Degree of freedom 1: 24
Degree of freedom 2: 19
F-statistic: 1.0173394178487225
p-value: 0.5088429167047133

F-distribution: 

The F-statistic follows an F-distribution, which is a probability distribution that depends on the degrees of freedom of the numerator and denominator.

The F-distribution is the distribution that arises when the ratio of two independent chi-square variables is divided by their respective degrees of freedom. 

\text{f-value} = \frac{\chi_1^2/df_1}{\chi_2^2/df_2}

  • \chi_1^2 & \chi_2^2 are independent random variables, each following a chi-square distribution
  • df_1 & df_2 are the degrees of freedom of the corresponding chi-square variables

The degrees of freedom represent the number of observations used to calculate the chi-square variables that form the ratio. The shape of the F-distribution is determined by its degrees of freedom. It is a right-skewed distribution, meaning it has a longer tail on the right side. As the degrees of freedom increase, the F-distribution becomes more symmetric and approaches a bell shape.

Python3

import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
 
# Set the degrees of freedom
df1 = 7
df2 = 13
 
# Generate random samples from chi-square distributions
sample1 = np.random.chisquare(df1, size=1000)
sample2 = np.random.chisquare(df2, size=1000)
 
# Calculate the F-statistic
f_value = (sample1 / df1) / (sample2 / df2)
 
# Sort the f-statistic for better distribution plot
f_value = np.sort(f_value)
# Calculate the PDF of the F-distribution
pdf = stats.f.pdf(f_value, df1, df2)
 
# Calculate the CDF of the F-distribution
cdf = stats.f.cdf(f_value, df1, df2)
 
# Calculate the corresponding p-value
p_value = 1 - cdf
 
 
# Plot the PDF and CDF of the F-distribution
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(f_value, pdf)
plt.title('F-distribution PDF')
plt.xlabel('x')
plt.ylabel('Probability Density')
plt.grid(True)

plt.subplot(1, 2, 2)
plt.plot(f_value, cdf)
plt.title('F-distribution CDF')
plt.xlabel('x')
plt.ylabel('Cumulative Probability')
plt.grid(True)
plt.tight_layout()
plt.show()


Output:

F-Distribution

Key points in F-Test

The key points of the F-test are as follows:

  • Degrees of freedom: The degrees of freedom in an F-test represent the number of independent observations available for calculating the variances. The numerator degrees of freedom represent the number of groups being compared minus one, and the denominator degrees of freedom represent the total number of observations minus the number of groups.
  • Null hypothesis: The null hypothesis in an F-test states that the variances of the groups or populations being compared are equal.
    \sigma_1^2 = \sigma_2^2
  • Alternative hypothesis: The alternative hypothesis in an F-test states that the variances of the groups or populations differ.
    One-tailed test: \sigma_1^2 > \sigma_2^2 \text{ or } \sigma_1^2 < \sigma_2^2
    Two-tailed test: \sigma_1^2 \neq \sigma_2^2
  • P-value: The p-value is the probability of obtaining an F-statistic as extreme as the observed value, assuming the null hypothesis is true. A small p-value (typically less than a predefined significance level) indicates strong evidence against the null hypothesis.
  • Decision: Based on the calculated F-statistic and p-value, a decision is made whether to reject or fail to reject the null hypothesis. If the p-value is less than the significance level, the null hypothesis is rejected, suggesting significant differences in variances.
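
The key points above can be consolidated into a small helper. This is only a sketch: the function name f_test_variances is made up for illustration, and the one-tailed formulation (H1: the first sample has the larger variance) is an assumption:

```python
import numpy as np
import scipy.stats as stats

def f_test_variances(sample1, sample2, alpha=0.05):
    """One-tailed F-test sketch for H1: var(sample1) > var(sample2)."""
    # F-statistic: ratio of the sample variances
    f = np.var(sample1, ddof=1) / np.var(sample2, ddof=1)
    # Degrees of freedom: sample sizes minus one
    df1, df2 = len(sample1) - 1, len(sample2) - 1
    # p-value: upper-tail probability of the F-distribution
    p = stats.f.sf(f, df1, df2)
    # Decision: reject H0 if p is below the significance level
    return f, p, p < alpha

rng = np.random.default_rng(0)
wide = rng.normal(0, 2.0, 100)    # larger spread
narrow = rng.normal(0, 1.0, 100)  # smaller spread
f, p, reject = f_test_variances(wide, narrow)
print(f, p, reject)
```

With a true standard-deviation ratio of 2, the variance ratio is around 4, so the test rejects the null hypothesis of equal variances here.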

Application of F-Test

The F-test is commonly used in various statistical analyses, including comparing the variances of two groups, assessing the adequacy of regression models, and conducting analysis of variance (ANOVA) tests. It helps determine if the differences in variances observed in the samples are likely due to chance or if they are statistically significant.

In a one-tailed F-test, the hypotheses are formulated to test for a specific directional difference or relationship between the groups or populations. For example, you might hypothesize that one group has a higher variance than the other, or that one group’s mean is significantly greater than the other. The one-tailed F-test will determine if the observed F-value is statistically significant in the desired direction.

On the other hand, in a two-tailed F-test, the hypotheses are formulated to test for any significant difference or relationship between the groups or populations, regardless of the direction. It is used to check if the variances are significantly different or if there is any significant difference in means. The two-tailed F-test considers both tails of the F-distribution and assesses if the observed F-value is statistically significant in either direction.
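
As a sketch of the difference, both p-values can be computed from the same F-statistic; the sample data below is made up for illustration:

```python
import numpy as np
import scipy.stats as stats

rng = np.random.default_rng(42)
a = rng.normal(0, 1.0, 30)
b = rng.normal(0, 1.0, 25)

# F-statistic and degrees of freedom
f = np.var(a, ddof=1) / np.var(b, ddof=1)
df1, df2 = len(a) - 1, len(b) - 1

# One-tailed: is var(a) significantly greater than var(b)?
p_one = stats.f.sf(f, df1, df2)

# Two-tailed: are the variances different in either direction?
# Double the smaller of the two tail probabilities.
p_two = 2 * min(stats.f.cdf(f, df1, df2), stats.f.sf(f, df1, df2))

print(f, p_one, p_two)
```

The two-tailed p-value accounts for both directions of departure from equal variances, so it never understates the evidence needed relative to a correctly chosen one-tailed test.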

In the context of ANOVA, the F-value (F-statistic) obtained from the F-test is used to perform the one-way ANOVA test. It helps assess the significance of the differences between the means of multiple groups. The F-value is compared to a critical value or p-value to determine whether there are statistically significant differences among the group means.

Example

Perform a one-way analysis of variance (ANOVA) to compare the means of multiple groups using the F-test. The objective is to determine if there are any significant differences in the means of the groups based on a set of generated data.

Implementation (Method 1):

Python3

import numpy as np
import scipy.stats as stats
 
# Set a seed so the output below is reproducible (same seed as Method 2)
np.random.seed(23)

# Generate 25 samples
x = np.random.rand(25)

# Randomly assign the data to 5 groups
num_groups = 5
group_labels = np.random.randint(0, num_groups, size=len(x))
 
# Calculate the group means
group_means = []
for i in range(num_groups):
    group_means.append(np.mean(x[group_labels == i]))
 
# Calculate the overall mean
overall_mean = np.mean(x)
 
# Calculate the sum of squares between groups
SSB = np.sum([len(x[group_labels == i]) * (group_means[i] - overall_mean)**2 for i in range(num_groups)])
 
# Calculate the degrees of freedom between groups
df_between = num_groups - 1
# Calculate the degrees of freedom within groups
df_within = len(x)-num_groups
 
# Calculate the mean square between groups
MSB = SSB / df_between
 
# Calculate the sum of squares within groups
SSW = 0
for i in range(num_groups):
    group_samples = x[group_labels == i]
    SSW += np.sum((group_samples - group_means[i])**2)
 
MSW = SSW / df_within
 
# Calculate the F-value
F_value = MSB / MSW
 
# Degree of Freedom
print('Degree of Freedom between groups',df_between)
print('Degree of Freedom within groups',df_within)
 
# Print the F-value
print("F-value:", F_value)
 
# Set the significance level
alpha = 0.05
 
# Calculate the F-value using Percent point function (inverse of cdf)
f_critical = stats.f.ppf(1 - alpha, df_between, df_within)
 
# Print the F-critical
print("F-critical:", f_critical)
 
# Check the hypothesis
if F_value > f_critical:
    print("Reject the null hypothesis")
else:
    print("Fail to reject the null hypothesis")


Output:

Degree of Freedom between groups 4
Degree of Freedom within groups 20
F-value: 2.3903576949121788
F-critical: 2.8660814020156584
Fail to reject the null hypothesis

Method 2:

Python3

import numpy as np
import scipy.stats as stats
 
np.random.seed(23)
 
# Generate 25 samples
x = np.random.rand(25)
 
# Randomly assign the data to 5 groups
num_groups = 5
group_labels = np.random.randint(0, num_groups, size=len(x))
 
# Calculate the group means
group_means = []
for i in range(num_groups):
    group_means.append(np.mean(x[group_labels == i]))
 
 
# Perform one-way ANOVA using the F-test
f_value, p_value = stats.f_oneway(*[x[group_labels == i] for i in range(num_groups)])
 
# Print the results
print("F-value:", f_value)
print("p-value:", p_value)
 
# Set the significance level
alpha = 0.05
 
# Check the hypothesis
if p_value < alpha:
    print("Reject the null hypothesis")
else:
    print("Fail to reject the null hypothesis")


Output:

F-value: 2.390357694912179
p-value: 0.08508248527342153
Fail to reject the null hypothesis

Interpretation from the test:

The F-statistic is about 2.39 and the p-value is about 0.085. Since the p-value is greater than the significance level of 0.05, we fail to reject the null hypothesis: simply by looking at the p-value, we cannot conclude that the group means are significantly different.



Last Updated : 25 Sep, 2023