Saturday, 28 June 2014

Independent t-Test for Two Samples

Introduction
The independent t-test, also called the two-sample t-test or Student's t-test, is an inferential statistical test that determines whether there is a statistically significant difference between the means of two unrelated groups.

The independent-samples t-test (or independent t-test, for short) compares the means between two unrelated groups on the same continuous, dependent variable. For example, you could use an independent t-test to understand whether first year graduate salaries differed based on gender (i.e., your dependent variable would be "first year graduate salaries" and your independent variable would be "gender", which has two groups: "male" and "female"). Alternately, you could use an independent t-test to understand whether there is a difference in test anxiety based on educational level (i.e., your dependent variable would be "test anxiety" and your independent variable would be "educational level", which has two groups: "undergraduates" and "postgraduates").

Hypothesis for the Independent t-Test
The null hypothesis for the independent t-test is that the population means from the two unrelated groups are equal:
H0: µ1 = µ2 (the means of the two groups are equal)
In most cases, we are looking to see whether we can reject the null hypothesis in favour of the alternative hypothesis, which is that the population means are not equal:
HA: µ1 ≠ µ2 (the means of the two groups are not equal)
To do this, we need to set a significance level (alpha) against which we decide whether or not to reject the null hypothesis. Most commonly, this value is set at 0.05.

What do we need to run an independent t-test?
In order to run an independent t-test, you need the following:
  • One independent, categorical variable that has two levels (An independent variable, sometimes called an experimental or predictor variable, is a variable that is being manipulated in an experiment in order to observe the effect on a dependent variable, sometimes called an outcome variable)
  • One dependent variable.
Unrelated Groups
Unrelated groups, also called unpaired groups or independent groups, are groups in which the cases in each group are different. Often we are investigating differences in individuals, which means that when comparing two groups, an individual in one group cannot also be a member of the other group and vice versa. An example would be gender - an individual would have to be classified as either male or female - not both.

Assumptions
When you choose to analyze your data using an independent t-test, part of the process involves checking to make sure that the data you want to analyze can actually be analyzed using an independent t-test. You need to do this because it is only appropriate to use an independent t-test if your data "passes" six assumptions that are required for an independent t-test to give you a valid result.
Do not be surprised if, when analyzing your own data, one or more of these assumptions is violated (i.e., is not met). This is not uncommon when working with real-world data rather than textbook examples, which often only show you how to carry out an independent t-test when everything goes well! However, don't worry. Even when your data fails certain assumptions, there is often a solution to overcome this. First, let's take a look at these six assumptions:
  • Assumption #1: Your dependent variable should be measured on a continuous scale (i.e., at the interval or ratio level). Examples of variables that meet this criterion include revision time (measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so forth.
  • Assumption #2: Your independent variable should consist of two categorical, independent groups. Example independent variables that meet this criterion include gender (2 groups: male or female), employment status (2 groups: employed or unemployed), smoker (2 groups: yes or no), and so forth.
  • Assumption #3: You should have independence of observations, which means that there is no relationship between the observations in each group or between the groups themselves. For example, there must be different participants in each group with no participant being in more than one group. This is more of a study design issue than something you can test for, but it is an important assumption of the independent t-test. If your study fails this assumption, you will need to use another statistical test instead of the independent t-test (e.g., a paired-samples t-test).  
  • Assumption #4: There should be no significant outliers. Outliers are simply single data points within your data that do not follow the usual pattern (e.g., in a study of 100 students' IQ scores, where the mean score was 108 with only a small variation between students, one student had a score of 156, which is very unusual, and may even put her in the top 1% of IQ scores globally). The problem with outliers is that they can have a negative effect on the independent t-test, reducing the validity of your results. 
  • Assumption #5: Your dependent variable should be approximately normally distributed within each group of the independent variable. This can be checked with a formal test such as the Shapiro-Wilk test or a graphical method such as a Q-Q plot. We talk about the independent t-test only requiring approximately normal data because it is quite "robust" to violations of normality, meaning that this assumption can be violated to some extent and the test will still provide valid results.
  • Assumption #6: There needs to be homogeneity of variances: the independent t-test assumes that the variances of the two groups are equal. If the variances are unequal, this can affect the Type I error rate. This assumption can be checked using Levene's Test for Equality of Variances (a SAS sketch for checking assumptions #5 and #6 follows this list). If you have run Levene's test, you will get a result similar to that below:
    Levene's Test for Equality of Variances in the Independent T-Test Procedure within SPSS
    This test for homogeneity of variance provides an F statistic and a significance value (p-value). We are primarily concerned with the significance value: if it is greater than 0.05, the group variances can be treated as equal. However, if p < 0.05, we have unequal variances and have violated the assumption of homogeneity of variance.
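If you are working in SAS, the checks for assumptions #5 and #6 can be sketched as follows. This is a minimal sketch, assuming your data are in a data set called MYDATA with a grouping variable GROUP and a dependent variable SCORE (these names are placeholders, not from any example in this post); PROC UNIVARIATE gives the Shapiro-Wilk test and a Q-Q plot within each group, and PROC GLM with HOVTEST=LEVENE gives Levene's test.

PROC UNIVARIATE DATA=MYDATA NORMAL;   * NORMAL requests the Shapiro-Wilk and related tests;
  CLASS GROUP;                        * run the checks separately within each group;
  VAR SCORE;
  QQPLOT SCORE;                       * Q-Q plot of SCORE for each group;
RUN;

PROC GLM DATA=MYDATA;
  CLASS GROUP;
  MODEL SCORE = GROUP;
  MEANS GROUP / HOVTEST=LEVENE;       * Levene's test for homogeneity of variances;
RUN;
QUIT;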
What to do when you violate the Normality Assumption
If you find that the data in one or both of your groups is not approximately normally distributed, and the group sizes differ greatly, you have two options:
(1) transform your data so that it becomes normally distributed, or
(2) run the Mann-Whitney U test, a non-parametric test that does not require the assumption of normality (a SAS sketch is shown below).
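In SAS, the Mann-Whitney U test is obtained as the Wilcoxon rank-sum test from PROC NPAR1WAY (the two tests are equivalent). A minimal sketch, again assuming a placeholder data set MYDATA with grouping variable GROUP and dependent variable SCORE:

PROC NPAR1WAY DATA=MYDATA WILCOXON;   * WILCOXON requests the rank-sum (Mann-Whitney U) test;
  CLASS GROUP;
  VAR SCORE;
RUN;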

Overcoming the violation of Homogeneity of Variance
If Levene's Test for Equality of Variances is statistically significant, and therefore indicates unequal variances, we can correct for this violation by not using the pooled estimate of the error term for the t-statistic and by adjusting the degrees of freedom using the Welch-Satterthwaite method. The evidence of these adjustments can be seen below:

Differences in the t-statistic and the degrees of freedom when homogeneity of variance is not assumed
 
From the result of Levene's Test for Equality of Variances, we can reject the null hypothesis that there is no difference in the variances between the groups and accept the alternative hypothesis that the variances differ significantly. The effect of not being able to assume equal variances is evident in the final column of the above figure, where we see a reduction in the value of the t-statistic and a large reduction in the degrees of freedom (df). This has the effect of increasing the p-value above the critical significance level of 0.05. In this case, we therefore fail to reject the null hypothesis and conclude that there is no statistically significant difference between the means. This would not have been our conclusion had we not tested for homogeneity of variances.

How to use an Independent T-Test
This section assumes that you will use a software package to perform the t-test; the calculation by hand is covered under "Calculating t" below. The result of using a t-test is that you know how likely it is that the difference between your sample means is due to sampling error. This is presented as a probability, called a p-value. The p-value tells you the probability of seeing the difference you found (or a larger one) in two random samples if there is really no difference in the population. Generally, if this p-value is below 0.05 (5%), you can reject the null hypothesis and conclude that there is a statistically significant difference between the two population means. If you want to be particularly strict, you can decide that the p-value should be below 0.01 (1%). The level of p that you choose is called the significance level of the test. The p-value is calculated by first using the t-test formula to produce a t-value, which is then converted to a probability either by software or by looking it up in a t-table. The next topic covers this part of the process.

Calculating t
Now we will look at the formula for calculating the t-value for an independent t-test. There are two versions of the formula: one for use when the two samples you wish to compare are of equal size, and one for differently sized samples. We will look at the equal-sample-size version here because it is the simpler formula and because it applies when your samples are of equal size; the version for unequal sample sizes uses a pooled estimate of the variance and is not covered here.
The formula is the ratio of the difference between the two sample means to the variation within the two samples. The ratio has the following consequences:
  • When the difference between the sample means is large and the variation within each sample is small, t is large;
  • When the difference between the sample means is small and the variation within each sample is large, t is small.
In other words, if there is a lot of variation within each sample, a difference between the two sample means could easily have arisen by chance, and the t-value will be low.

Here is the formula to calculate the t-value for equal-sized independent samples:

t = (x̄1 - x̄2) / √((s1² + s2²) / n)

It reads: sample mean 1 minus sample mean 2, divided by the square root of (sample variance 1 plus sample variance 2, over n), where:
  • x̄1 (read "x-one-bar") is the mean of the first sample;
  • x̄2 (read "x-two-bar") is the mean of the second sample;
  • s² is the sample variance (the standard deviation squared);
  • the subscripts 1 and 2 refer to sample 1 and sample 2;
  • n is the size of each sample.
A small SAS sketch of this calculation follows this list.
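As a rough illustration of the formula, the calculation can be carried out in a SAS DATA step. The summary numbers below are made up purely for illustration (they do not come from the examples in this post); PROBT is the cumulative t-distribution function used here to turn the t-value into a two-tailed p-value.

DATA T_EQUAL_N;
  XBAR1 = 25;  XBAR2 = 30;              * the two sample means (illustrative values);
  S1 = 4;      S2 = 5;                  * the two sample standard deviations;
  N = 10;                               * size of EACH sample (equal-sized samples);
  T  = (XBAR1 - XBAR2) / SQRT( (S1**2 + S2**2) / N );
  DF = 2*N - 2;                         * degrees of freedom for two equal samples;
  P  = 2 * (1 - PROBT(ABS(T), DF));     * two-tailed p-value;
RUN;

PROC PRINT DATA=T_EQUAL_N;
RUN;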
Reporting the result of an Independent t-Test
When reporting the result of an independent t-test, you need to include the t-statistic value, the degrees of freedom (df) and the significance value of the test (p-value). The format of the test result is: t(df) = t-statistic, p = significance value. Therefore, for the example above, you could report the result as t(7.001) = 2.233, p = 0.061.

For example
Inspection of Q-Q plots revealed that cholesterol concentration was normally distributed for both groups and that there was homogeneity of variance as assessed by Levene's Test for Equality of Variances. Therefore, an independent t-test was run on the data, with 95% confidence intervals (CI) for the mean difference. It was found that after the two interventions, cholesterol concentrations were significantly higher in the dietary group than in the exercise group (5.80 ± 0.38 mmol/L), t(38) = 2.470, p = 0.018, with a mean difference of 0.35 (95% CI, 0.06 to 0.64) mmol/L.

Here is an example of an independent t-test and how to perform it with SAS
When data are collected on subjects where subjects are (hopefully randomly) divided into two groups, this is called an independent or parallel study.  That is, the subjects in one group (treatment, etc) are different from the subjects in the other group.
 
The SAS PROC TTEST procedure
This procedure is used to test for the equality of means in a two-sample (independent-group) t-test. For example, you might want to compare males and females regarding their reaction to a certain drug, or one group that receives a treatment with another that receives a placebo. The purpose of the two-sample t-test is to determine whether your data provide enough evidence to conclude that there is a difference in mean reaction levels between the two groups. In general, for a two-sample t-test you obtain independent random samples of sizes N1 and N2 from the two populations of interest, and then you compare the observed sample means.

The typical hypotheses for a two-sample t-test are
H0: µ1 = µ2           (means of the two groups are equal)
HA: µ1 ≠ µ2           (means are not equal)

Key assumptions underlying the two-sample t-test are that the random samples are independent and that the populations are normally distributed with equal variances. (If the data are non-normal, then nonparametric tests such as the Mann-Whitney U are appropriate.)

SAS Syntax:
Simplified syntax for the TTEST procedure is

PROC TTEST <options>;
  CLASS variable;
  <other statements>;
RUN;
Common OPTIONS:
  • DATA = datasetname: Specifies what data set to use.
  • COCHRAN: Specifies that the Cochran and Cox probability approximation is to be used for unequal variances.
Common STATEMENTS:
  • CLASS statement: The CLASS statement is required, and it specifies the grouping variable for the analysis.
The data for this grouping variable must contain two and only two values. An example PROC TTEST command is

PROC TTEST DATA=MYDATA;
  CLASS GROUP;
  VAR SCORE;
RUN;

In this example, GROUP contains two values, say 1 or 2. A t-test will be performed on the variable SCORE.
  • VAR variable list;: Specifies which variables will be used in the analysis.
  • BY variable list;: (optional) Causes t-tests to be run separately for groups specified by the BY statement.
EXAMPLE:
A biologist experimenting with plant growth designs an experiment in which 15 seeds are randomly assigned to one of two fertilizers and the height of the resulting plant is measured after two weeks. She wants to know if one of the fertilizers provides more vertical growth than the other.
STEP 1. Define the data to be used. (If your data is already in a data set, you don't have to do this step). For this example, here is the SAS code to create the data set for this experiment:
DATA FERT;
INPUT BRAND $ HEIGHT;
DATALINES;
A 20.00
A 23.00
A 32.00
A 24.00
A 25.00
A 28.00
A 27.50
B 25.00
B 46.00
B 56.00
B 45.00
B 46.00
B 51.00
B 34.00
B 47.50
;
RUN;
STEP 2. The following SAS code specifies a two-sample t-test using the FERT dataset. (NOTE: For SAS 9.3, you don't need the ODS statements.)

ODS HTML;
ODS GRAPHICS ON;
PROC TTEST DATA=FERT;
  CLASS BRAND;
  VAR HEIGHT;
  TITLE 'Independent Group t-Test Example';
RUN;
ODS GRAPHICS OFF;
ODS HTML CLOSE;
QUIT;

 
The CLASS statement specifies the grouping variable, which has 2 categories (in this case A and B).
The VAR statement specifies which variable is the observed (outcome or dependent) variable.
Interpret the output. The (abbreviated) output contains the following information
BRAND        N    Mean       Std Dev   Std Err   Minimum   Maximum
A            7    25.6429    3.9021    1.4748    20.0000   32.0000
B            8    43.8125    9.8196    3.4717    25.0000   56.0000
Diff (1-2)        -18.1696   7.6778    3.9736


This first table provides the N, mean, standard deviation, standard error, minimum and maximum for each group, as well as the mean difference.
BRAND        Method          Mean       95% CL Mean            Std Dev   95% CL Std Dev
A                            25.6429    22.0340    29.2517     3.9021    2.5145    8.5926
B                            43.8125    35.6031    52.0219     9.8196    6.4925   19.9855
Diff (1-2)   Pooled          -18.1696   -26.7541   -9.5852     7.6778    5.5660   12.3692
Diff (1-2)   Satterthwaite   -18.1696   -26.6479   -9.6914
    
The next table provides 95% Confidence Limits on both the means and Standard Deviations, and the mean difference using both the pooled (assume variances are equal) and Satterthwaite (assume variances are not equal) methods.
Before deciding which version of the t-test is appropriate, look at this table
Equality of Variances
Method      Num DF   Den DF   F Value   Pr > F
Folded F    7        6        6.33      0.0388

This table helps you determine if the variances for the two groups are equal. If the p-value (Pr>F) is less than 0.05, you should assume UNEQUAL VARIANCES. In this case, the variances appear to be unequal.

   
Therefore, in the t-test table below, choose the Satterthwaite t-test for this case and report a p-value of p = 0.0008. If the variances could be assumed equal, you would report the pooled-variances t-test instead.
Method          Variances   DF       t Value   Pr > |t|
Pooled          Equal       13       -4.57     0.0005
Satterthwaite   Unequal     9.3974   -4.82     0.0008

Since p<.05, you can conclude that the mean for group B (43.8125) is statistically larger than the mean for group A (25.6429). More formally, you reject the null hypothesis that the means are equal and show evidence that they are different.
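As a cross-check, the Satterthwaite (Welch) t value and degrees of freedom reported above can be reproduced by hand from the group summary statistics in the first table. This is just a sketch of the arithmetic, not part of the original tutorial output:

DATA WELCH_CHECK;
  XBAR1 = 25.6429;  S1 = 3.9021;  N1 = 7;   * group A summary statistics;
  XBAR2 = 43.8125;  S2 = 9.8196;  N2 = 8;   * group B summary statistics;
  V1 = S1**2 / N1;
  V2 = S2**2 / N2;
  T_WELCH  = (XBAR1 - XBAR2) / SQRT(V1 + V2);                   * about -4.82;
  DF_WELCH = (V1 + V2)**2 / ( V1**2/(N1-1) + V2**2/(N2-1) );    * about 9.40;
RUN;

PROC PRINT DATA=WELCH_CHECK;
RUN;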
 
When you use the ODS GRAPHICS ON option, you get the following graph:
Two Sample T-Test Graph

This graph provides a visual comparison of the means (and distributions) of the two groups. In this graph you can see that the mean of group B is larger than the mean of group A. You can also see why the test for equality of variances found that they were not the same (the variance for group A is smaller.)

A complete report of our result
In order to provide enough information for readers to fully understand the results when you have run an independent t-test, you should include the result of normality tests, Levene's Equality of Variances test, the two group means and standard deviations, the actual t-test result and the direction of the difference (if any). In addition, you might also wish to include the difference between the groups along with the 95% confidence intervals.

References

http://www.stattutorials.com/SAS/TUTORIAL-PROC-TTEST-2.htm

http://www.stattutorials.com/SAS/TUTORIAL-PROC-TTEST-INDEPENDENT.htm

Dependent t-Test for Paired Samples

What does this test do?

The dependent t-test (also called the paired t-test or paired-samples t-test) compares the means of two related groups to detect whether there is a statistically significant difference between these means.

What variables do we need for Dependent t-Test?
We need one dependent variable that is measured in an interval or ratio scale. We also need one categorical variable that has only two related groups.

Dependent Samples: When data are collected twice on the same subjects (or matched subjects) the proper analysis is a paired t-test (also called a dependent samples t-test). In this case, subjects may be measured in a before – after fashion, or in a design where a treatment is administered for a time, there is a washout period, and another treatment is administered (in random order for each subject). Or, data might be measured on the same individual in two areas such as one treatment in one eye and another treatment for another eye (or leg, or arm, etc). In these cases the measurement of interest is the difference between the first and second measure. Thus, the null hypothesis (two-sided) is:
H0: µdifference = 0         (the average difference is 0)
HA: µdifference ≠ 0         (the average difference is not 0)

What is meant by "related groups"?
A dependent t-test is an example of a "within-subjects" or "repeated-measures" statistical test. This indicates that the same subjects are tested more than once. Thus, in the dependent t-test, "related groups" indicates that the same subjects are present in both groups. The reason that it is possible to have the same subjects in each group is because each subject has been measured on two occasions on the same dependent variable.
For example, you might have measured 10 individuals' (subjects') performance in a spelling test (the dependent variable) before and after they underwent a new form of computerized teaching method to improve spelling. You would like to know if the computer training improved their spelling performance. Here, we can use a dependent t-test because we have two related groups. The first related group consists of the subjects at the beginning of (prior to) the computerized spelling training and the second related group consists of the same subjects, but now at the end of the computerized training.

Does the dependent t-test test for "Changes" or "Differences" between related groups?
The dependent t-test can be used to test either a "change" or a "difference" in means between two related groups, but not both at the same time. Whether you are measuring a "change" or "difference" between the means of the two related groups depends on your study design. The two types of study design are indicated in the following diagrams.

How do you detect differences between experimental conditions using the dependent t-test?
The dependent t-test can look for "differences" between means when subjects are measured on the same dependent variable under two different conditions. For example, you might have tested subjects' eyesight (dependent variable) when wearing two different types of spectacles (independent variable). See the diagram below for a general schematic of this design approach.



How do you detect changes in time using the dependent t-test?
The dependent t-test can also look for "changes" between means when the subjects are measured on the same dependent variable, but at two time points. A common use of this is in a pre-post study design. In this type of experiment, we measure subjects at the beginning and at the end of some intervention (e.g., an exercise-training programme or business-skills course). A general schematic is provided below

Dependent T-Test - Design 2




Can the dependent t-test be used with more complex study designs?
You can also use the dependent t-test with more complex study designs, although it is not normally recommended. The most common, more complex study design where you might use the dependent t-test is a crossover design with two different interventions that are both performed by the same subjects. One example of this design is where one of the interventions acts as a control. For example, you might want to investigate whether a course of diet counselling can help people lose weight. To study this, you could simply measure subjects' weight before and after the diet counselling course for any changes in weight using a dependent t-test. However, to improve the study design, you might also want to include a control trial. During this control trial, the subjects could either receive "normal" counselling, do nothing at all, or something else you deem appropriate. In order to assess this study using a dependent t-test, you would use the same subjects for the control trial as for the diet counselling trial. You then measure the differences between the interventions at the end, and only at the end, of the two interventions. Remember, however, that this is unlikely to be the preferred statistical analysis for this study design.

What are the assumptions of the dependent t-test?
  • The types of variables: one continuous dependent variable and one categorical variable consisting of two related groups, as described above.
  • The distribution of the differences between the scores of the two related groups needs to be approximately normally distributed. We check this by simply subtracting each individual's score in one group from their score in the other related group and then testing the differences for normality in the usual way (see the SAS sketch after this list). It is important to note that the two related groups do not need to be normally distributed themselves - just the differences between them.
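A minimal SAS sketch of this check, assuming a hypothetical data set PAIRED_DATA with one row per subject and variables BEFORE and AFTER (these names are illustrative only):

DATA DIFFS;
  SET PAIRED_DATA;          * hypothetical data set: one row per subject;
  D = AFTER - BEFORE;       * difference score for each subject;
RUN;

PROC UNIVARIATE DATA=DIFFS NORMAL;   * Shapiro-Wilk test on the differences;
  VAR D;
  QQPLOT D;                          * Q-Q plot of the differences;
RUN;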

What hypothesis is being tested?
The dependent t-test is testing the null hypothesis that there are no differences between the means of the two related groups. If we get a significant result, we can reject the null hypothesis that there are no significant differences between the means and accept the alternative hypothesis that there are statistically significant differences between the means. We can express this as follows:
H0: µ1 = µ2
HA: µ1 ≠ µ2

What is the advantage of dependent t-test over independent t-test?
Before we answer this question, we need to point out that you cannot choose one test over the other unless your study design allows it. What we are discussing here is whether it is advantageous to design a study that uses one set of subjects who are measured twice, or two separate groups of subjects measured once each. The major advantage of choosing a repeated-measures design (and therefore running a dependent t-test) is that you eliminate the individual differences that occur between subjects - the concept that no two people are the same - and this increases the power of the test. This means that you are more likely to detect any significant differences, if they do exist, using the dependent t-test rather than the independent t-test.

Can the dependent t-test be used to compare different subjects?
Yes, but this does not happen very often. You can use the dependent t-test instead of using the usual independent t-test when each subject in one of the independent groups is closely related to another subject in the other group on many individual characteristics. This approach is called a "matched-pairs" design. The reason we might want to do this is that the major advantage of running a within-subject (repeated-measures) design is that you get to eliminate between-groups variation from the equation (each individual is unique and will react slightly differently than someone else), thereby increasing the power of the test. Hence, the reason why we use the same subjects - we expect them to react in the same way as they are, after all, the same person. The most obvious case of when a "matched-pairs" design might be implemented is when using identical twins. Effectively you are choosing parameters to match your subjects on which you believe will result in each pair of subjects reacting in a similar way.

Here is the formula for a paired t-test.

t = ∑d / √((n∑d² - (∑d)²) / (n - 1))

The top of the formula is the sum of the differences (i.e., the sum of d). The bottom of the formula reads as:
the square root of the following: n times the sum of the squared differences, minus the sum of the differences squared, all over n - 1 (where n is the number of pairs).
  • The sum of the squared differences: ∑d² means take each difference in turn, square it, and add up all those squared numbers.
  • The sum of the differences squared: (∑d)² means add up all the differences and square the result.
Brackets around something in a formula mean "do this first", so (∑d)² means add up all the differences first, then square the result.

How do I report the result of a dependent t-test?
You need to report the test as follows:
t(df) = t-statistic, p = significance value
where df = N - 1 and N = the number of subjects (pairs)


Here is an example of a paired t-test

Consider the data from a paper by Raskin and Unger (1978) in which four diabetic patients were used to compare the effects of two insulin infusion regimens. One treatment was insulin and somatostatin (IS) and the other treatment was insulin, somatostatin and glucagon (ISG). Each subject was given each treatment with a washout period between treatments. The data follow:

Patient Number   IS      ISG     Difference
1                14      17       3
2                 6       8       2
3                 7      11       4
4                 6       9       3
Mean              8.25   11.25    3.0
S.E.M.            1.9     2       0.40
 
A paper by Thomas Louis (1984) looked at these data using both types of t-test (dependent and independent). The correct version of the t-test for this data set is the paired t-test, since each patient is observed twice.
 
Paired t-test analysis: The calculations for this test can be performed using the following SAS code:
 
data diabetic;
  input IS ISG;
  datalines;
14  17
6   8
7   11
6   9
;
run;

ODS HTML;
PROC TTEST DATA=diabetic;
  PAIRED IS*ISG;
RUN;
ODS HTML CLOSE;
 
The (partial) output is as follows. Note that the analysis is performed on the differences (IS - ISG): the mean difference is -3 and the standard error of the difference is 0.41 (0.4082).
 
Difference   N   Lower CL Mean   Mean   Upper CL Mean   Lower CL Std Dev   Std Dev   Upper CL Std Dev   Std Err
IS - ISG     4   -4.299          -3     -1.701          0.4625             0.8165    3.0443             0.4082
 
The paired t-test yields p = 0.0052, which is statistically significant.
T-Tests
Difference   DF   t Value   Pr > |t|
IS - ISG     3    -7.35     0.0052
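As a cross-check, the t value in this table can be reproduced with the paired formula given earlier. The differences (ISG - IS) are 3, 2, 4 and 3, so ∑d = 12 and ∑d² = 38; taking IS - ISG instead simply flips the sign. A minimal sketch of the arithmetic:

DATA PAIRED_CHECK;
  N = 4;                                   * number of pairs;
  SUM_D  = 3 + 2 + 4 + 3;                  * sum of the differences = 12;
  SUM_D2 = 3**2 + 2**2 + 4**2 + 3**2;      * sum of the squared differences = 38;
  T = SUM_D / SQRT( (N*SUM_D2 - SUM_D**2) / (N - 1) );   * about 7.35;
  P = 2 * (1 - PROBT(ABS(T), N - 1));                    * about 0.0052;
RUN;

PROC PRINT DATA=PAIRED_CHECK;
RUN;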
 
The reason that the paired t-test found significance when the independent t-test on the same data did not is that the paired analysis is the more appropriate analysis and is therefore able to make use of a much smaller standard error (the standard error of the mean difference rather than the pooled standard error).
In his paper, Louis explains that to achieve the power of this paired t-test, an independent-group t-test (parallel design) would require 14 times as many subjects. Thus, when the model is appropriate, the paired t-test can be a more powerful way to analyze your data. On the other hand, if you use a paired analysis on independent-group data you will get incorrect and misleading results. Therefore, carefully consider how your experiment is designed before you select which t-test to perform.
 
References:
 
https://statistics.laerd.com/statistical-guides/dependent-t-test-statistical-guide.php

The t-Test Procedure

THE TTEST PROCEDURE

t-tests are commonly used when the population standard deviation is unknown; in that case the test statistic follows the t-distribution rather than the normal distribution.

The TTEST procedure performs t-tests for one sample, two samples, and paired observations.

  • The single-sample t-test compares the mean of the sample to a given number (which you supply). 
  • The dependent-sample t-test compares the difference in the means from the two variables to a given number (usually 0), while taking into account the fact that the scores are not independent. 
  • The independent samples t-test compares the difference in the means from the two groups to a given value (usually 0). In other words, it tests whether the difference in the means is 0. (A minimal sketch of each of these three invocations is shown after this list.)
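A minimal sketch of the three invocations, assuming a data set D with an outcome variable X, a two-level grouping variable GROUP, and paired measurements BEFORE and AFTER; all of these names, and the hypothesized value 50, are placeholders.

* single-sample t-test: compare the mean of X to 50 (H0= supplies the hypothesized value);
PROC TTEST DATA=D H0=50;
  VAR X;
RUN;

* paired (dependent-sample) t-test: tests whether the mean of BEFORE - AFTER is 0;
PROC TTEST DATA=D;
  PAIRED BEFORE*AFTER;
RUN;

* independent-samples t-test: compares the mean of X between the two levels of GROUP;
PROC TTEST DATA=D;
  CLASS GROUP;
  VAR X;
RUN;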

One Sample t-Test

This procedure gives a single-sample Student t-test with a confidence interval for the mean difference.

The single sample t method tests a null hypothesis that the population mean is equal to a specified value. If this value is zero (or not entered) then the confidence interval for the sample mean is given (Altman, 1991; Armitage and Berry, 1994).

The test statistic is calculated as:

t = (x̄ - µ) / √(s²/n)

where x̄ is the sample mean, s² is the sample variance, n is the sample size, µ is the specified population mean, and t is referred to a Student t distribution with n - 1 degrees of freedom.

Power is calculated as the power achieved with the given sample size and variance for detecting the observed mean difference with a two-sided type I error probability of (100-CI%)% (Dupont, 1990).

Here is a Question: In a population the average IQ is 100. A team of scientists wants to test a new medication to see if it has either a positive or negative effect on intelligence or no effect at all. A sample of 30 participants who have taken the medication has a mean of 140 with a standard deviation of 20. Did the medication affect intelligence? Alpha = 0.05

We are going to answer these steps with the Alpha = 0.05

1. Define Null and Alternative Hypotheses
2. State the Alpha
3. Calculate the Degrees of Freedom
4. State Decision Rule
5. Calculate Test Statistic
6. State the Result
7. State the Conclusion

Note that we do not know the population standard deviation; we know only that the population mean is 100. The sample standard deviation is 20.

1. Define Null and Alternative Hypotheses:
In this case the null hypothesis is that the mean is equal to 100, because that is the population mean. The alternative hypothesis is that the mean is not equal to 100, because we are testing whether there is any difference from the expected value of 100.

2. State the Alpha:
The alpha level is .05

3. Calculate the Degrees of Freedom:
The degrees of freedom are the sample size minus one: n - 1 = 30 - 1 = 29.

4. State the Decision Rule
With an alpha of .05 in a two-tailed test, we want to find the range within which the middle 95% of sample means would fall if the null hypothesis were true. If our test statistic falls outside that range, we conclude that our sample is unlikely to have come from the population we are comparing it to (the expected population).

Remember that our alpha level is .05. This is a two-tailed test with 29 df, so the critical value is 2.0452. We would expect most t-values to fall between -2.0452 and +2.0452; if t falls outside that range, we conclude that our sample is different from the expected population. So the decision rule is: if t is less than -2.0452 or greater than +2.0452, reject the null hypothesis.

5. Calculate the Test Statistic:
Well now lets calculate the Test Statistic.
The Sample mean = 140 (x bar)
The Population Mean = 100 (Mu)
Sample Standard deviation = 20 (s)
Sample Size = 30 (n)

t = (x̄ - µ) / (s / √n) = (140 - 100) / (20 / √30) = 10.96

6. State Results
Decision Rule: If t is less than -2.0452 or greater than +2.0452, reject the null hypothesis. Here t = 10.96, which is definitely greater than 2.0452, so we reject the null hypothesis.
Result: Reject Null Hypothesis.

7. State the Conclusion
The medication significantly affected intelligence, t(29) = 10.96, p < .05.
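The same calculation can be sketched in SAS. The first step reproduces the test statistic from the summary figures above; the second shows how PROC TTEST would run the test directly, assuming a hypothetical data set IQDATA containing the 30 raw IQ scores in a variable IQ.

DATA IQ_TTEST;
  XBAR = 140;  MU0 = 100;  S = 20;  N = 30;    * summary statistics from the example;
  T  = (XBAR - MU0) / (S / SQRT(N));           * about 10.96;
  DF = N - 1;                                  * 29 degrees of freedom;
  P  = 2 * (1 - PROBT(ABS(T), DF));            * two-tailed p-value, far below .05;
RUN;

* with the raw scores, PROC TTEST tests H0: mean = 100 directly;
PROC TTEST DATA=IQDATA H0=100;
  VAR IQ;
RUN;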


Another Example: