Why, when dealing with two independent samples where you cannot assume the population variances are equal, should the degrees of freedom be adjusted?

Answer 1: When the two populations may have different variances, we cannot pool the sample variances into a single estimate, so the data must be left unpooled. The resulting test statistic does not follow an exact t distribution, so its degrees of freedom are approximated with the Welch–Satterthwaite formula. This adjusted value is smaller than the pooled n1 + n2 − 2, which gives a larger critical value and a wider confidence interval, making the test appropriately conservative. The adjustment matters most when the sample sizes or variances differ; when both samples have the same size and similar variances, the adjusted value comes out close to the pooled one.
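The adjustment described above can be sketched numerically. This is a minimal example of the Welch–Satterthwaite formula; the sample variances and sizes below are hypothetical, chosen only to show that the adjusted degrees of freedom fall well below the pooled value n1 + n2 − 2.

```python
# Welch–Satterthwaite degrees of freedom for two independent samples
# whose population variances are not assumed equal.

def welch_df(s1_sq, n1, s2_sq, n2):
    """Approximate df when the sample variances are not pooled."""
    a = s1_sq / n1          # variance contribution of sample 1
    b = s2_sq / n2          # variance contribution of sample 2
    return (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))

# Hypothetical samples: very different variances and sizes.
df = welch_df(s1_sq=25.0, n1=10, s2_sq=4.0, n2=30)
pooled_df = 10 + 30 - 2    # 38
print(round(df, 2), pooled_df)   # df ≈ 9.98, far below 38
```

The smaller degrees of freedom mean a larger t critical value, which is exactly the wider interval the answer describes.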

Answer 2: In the real world, few things are universal, and population variance is a prime example in statistics. To test a hypothesis at a specified significance level, the method has to correspond to the data at hand, which means accounting for the sample sizes and variances actually observed. This is where the degrees of freedom become a factor: they are adjusted downward, which raises the critical value and widens the confidence interval. On the occasions where the population variances are similar or exactly the same, a constant value for the degrees of freedom, n1 + n2 − 2, may be used instead.
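The last point can be checked directly: when the variances (and sizes) happen to match, the Welch–Satterthwaite approximation recovers the constant pooled value. The numbers below are illustrative, not from any real data set.

```python
# With equal sample variances and equal sample sizes, the
# Welch–Satterthwaite df reduces to the pooled n1 + n2 - 2.

n1, n2 = 12, 12
s1_sq, s2_sq = 9.0, 9.0   # equal variances (illustrative values)

a, b = s1_sq / n1, s2_sq / n2
df = (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))
print(df)   # 22.0, i.e. exactly n1 + n2 - 2
```

This is why the pooled two-sample t-test and Welch's test agree in that special case, and why the adjustment only bites when the variances or sizes differ.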