Standardized Scores

Suppose that you have just completed your midterm for this class and you were told that you had a score of 55. How would you feel? The first question you might ask is how many points were possible. If you were told that 85 points were possible, you might not feel too good, since you correctly answered only about 65% of the questions.

Now let's assume that the test was very difficult and the instructor didn't expect anyone to have mastered all of the content on it. What other information might you wish to know?  Perhaps you would like to know the average (mean) grade on the test. If you were to discover that it was 50, you might feel better about your performance because you were above average.

You might be interested to know how the scores were spread above and below the average (mean). In particular, you are probably wondering how far above the mean you were compared to others in the class. Were most of the grades close to the mean, or were students' grades far above or below the mean? One way to measure the dispersion or spread of scores is with the range (subtract the low score from the high score). Suppose the range were 30 points, with the high being 75 and the low being 45. You might not feel too good about your grade even though it was above the average. The problem with the range is that one extreme score can influence it greatly. In this case, maybe only one person earned 75 and the next highest score was 56, with everyone else falling between 46 and 56, a range of 10.

Instead of using the range, we use the standard deviation when we talk about the spread of scores. In the midterm example, suppose you were told that 68% of the people who took the test had a score from 48 to 52. In other words, 68% of the people fell within 2 points above or 2 points below the mean. In that case, we would say that the test scores had a standard deviation of 2. Assuming that the scores fell into a normal distribution, we would also know that 95% of the students would have scores within two standard deviations above or below the mean. In our case that would be 4 points above (54) or 4 points below (46) the mean (50). You would feel rather good about your score of 55. Knowing the mean and standard deviation makes it possible to interpret raw scores and compare different individuals' performances with each other, or an individual's performance on one test with his or her performance on another test.
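The 68% and 95% figures above follow from the normal distribution itself. As an illustration (not part of the original text; the function name is my own), this short Python sketch uses the normal cumulative distribution function to check both figures for a test with a mean of 50 and a standard deviation of 2:

```python
import math

def normal_cdf(x, mean, sd):
    """Proportion of a normal distribution at or below x."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

mean, sd = 50, 2  # the midterm example from the text

# Proportion of scores within one standard deviation (48 to 52)
within_one = normal_cdf(mean + sd, mean, sd) - normal_cdf(mean - sd, mean, sd)
# Proportion of scores within two standard deviations (46 to 54)
within_two = normal_cdf(mean + 2 * sd, mean, sd) - normal_cdf(mean - 2 * sd, mean, sd)

print(round(within_one, 2))  # 0.68
print(round(within_two, 2))  # 0.95
```

The exact proportions are about 68.3% and 95.4%, which is why they are usually rounded to 68% and 95%.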

Without standardized scores, it is difficult to make comparisons. A raw score of 30 on one test and a raw score of 125 on another test don't have much meaning until we know where each score is in relation to the mean. In order for a score to be meaningful, we need to know whether the score is above or below the mean and how far above or below the mean. Unfortunately, knowing that the first score is 2 points above the mean and the second score is 10 points above the mean still doesn't help. Each test is different, so being 2 points above the mean on one test may be better than being 10 points above the mean on another test. It is only when we know how many standard deviations each score is above or below the mean that we can compare the two performances. Standard scores allow us to make comparisons of raw scores that come from very different sources.

A common way to make comparisons is to calculate z-scores. A z-score tells how many standard deviations someone is above or below the mean. A z-score of -1.4 indicates that someone is 1.4 standard deviations below the mean. Someone in that position would have done as well as or better than about 8% of the students who took the test.
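The 8% figure comes from the area under the normal curve at or below a z-score of -1.4. As a sketch (not from the original text; the function name is my own), this can be computed from the normal cumulative distribution function:

```python
import math

def percentile_from_z(z):
    """Proportion of a normal distribution at or below a given z-score."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(percentile_from_z(-1.4) * 100))  # 8
```

So a z-score of -1.4 places someone at roughly the 8th percentile, as the text states.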

To calculate a z-score, subtract the mean from the raw score and divide that answer by the standard deviation (e.g., raw score = 15, mean = 10, standard deviation = 4: 15 minus 10 equals 5, and 5 divided by 4 equals 1.25, so the z-score is 1.25).
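The calculation above can be written as a one-line Python function (a sketch to illustrate the formula; the function name is my own):

```python
def z_score(raw, mean, sd):
    """Number of standard deviations a raw score lies above (+) or below (-) the mean."""
    return (raw - mean) / sd

print(z_score(15, 10, 4))  # 1.25  (the worked example from the text)
```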

Lay people are sometimes uncomfortable with z-scores for a couple of reasons. First, they don't like negative numbers, and anyone who has a raw score less than the mean has a negative z-score. Second, they are uncomfortable with a z-score of 0 being average. Explaining to a parent that her child performed at the average on an achievement test and has a z-score of 0 can be difficult. For this reason, z-scores are often converted to a scale where negative values are not possible. IQ scores, SAT scores, and T scores are examples of z-scores that have been converted.

To convert a z-score to a T-score, multiply the z-score by 10 and add 50 to your answer (e.g., z-score = .5: .5 times 10 equals 5, and 5 plus 50 equals 55; therefore, a z-score of .5 converts to a T-score of 55).
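As a sketch of that conversion (the function name is my own), a T-score rescales a z-score to a scale with mean 50 and standard deviation 10:

```python
def t_score(z):
    """Convert a z-score to a T-score (mean 50, standard deviation 10)."""
    return 10 * z + 50

print(t_score(0.5))   # 55.0  (the worked example from the text)
print(t_score(-1.4))  # 36.0  (a below-average score, but no negative number)
```

Note that even a z-score of -1.4 becomes a positive T-score, which is the whole point of the conversion.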

As a point of reference, IQs are calculated by multiplying the z-score by 15 (16 on some tests) and then adding 100. SAT scores are calculated by multiplying the z-score by 100 and then adding 500.

To convert a z-score on an IQ measure to an IQ score, multiply the z-score by 15 and add 100 to your answer (e.g., z-score = .5: .5 times 15 equals 7.5, and 7.5 plus 100 equals 107.5; therefore, a z-score of .5 on an IQ measure converts to an IQ score of 107.5).
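Both conversions follow the same pattern: multiply the z-score by the scale's standard deviation and add the scale's mean. As a sketch (function names are my own), in Python:

```python
def iq_score(z, sd=15):
    """Convert a z-score to an IQ score (mean 100; sd 15, or 16 on some tests)."""
    return sd * z + 100

def sat_score(z):
    """Convert a z-score to an SAT score (mean 500, standard deviation 100)."""
    return 100 * z + 500

print(iq_score(0.5))   # 107.5  (the worked example from the text)
print(sat_score(0.5))  # 550.0
```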


Del Siegle, Ph.D.
Neag School of Education - University of Connecticut
del.siegle@uconn.edu

www.delsiegle.com