# Value Added or Value Diminished?

It all starts with research. One paper, one idea. In 1984, University of Tennessee professors Dr. Robert McLean and Dr. William Sanders set out to describe a statistical method for assessing teacher effectiveness. Yes, 1984. For those of you who were schooled in the '80s, remember those California Achievement Tests? McLean and Sanders took results from grades 2 through 5 in one school system, three years' worth of data, and showed statistically that teachers could be assessed for their effectiveness by running a mixed-model regression analysis. The study was repeated for two other school systems, and the results proved similar (not the same). So, let's take a second to let that sink in.

Here is what we just found out:

- Our teacher effectiveness (or value added) model is based on scores of second through fifth grade students.
- These scores are based on a single testing instrument implemented once in a school year.
- The California Achievement Test, a test that was not aligned to Tennessee curricula in the 1980s (or since), was the instrument deployed for assessing teacher quality.
- The California Achievement Test, like standardized tests generally, also aims for items that are answered correctly by 40 to 60 percent of the population. Items deemed more difficult and challenging are expunged from the test. In other words, mediocrity is the aim. If you get all of the mediocre questions correct, you are smart. Or better yet, if you get more of the mediocre questions correct in grade 4 than in grade 3, your teacher is awesome.
- Let me make it really real: If your daughter got 35 out of 50 middle-of-the-road questions correct in 3rd grade, and then got 37 (yes, ONLY two more) middle-of-the-road questions correct in 4th grade, YOUR DAUGHTER'S 4TH GRADE TEACHER IS AWESOME.

For those of you who have read *The Tipping Point*, you would also thoroughly enjoy reading the history of how Tennessee wrote value added into state law in 1991. Long story short: Tennessee was in the bottom five percent in test scores. Accountability and testing must be the answer, but we need "something" to implement that can help us "judge" teachers. Hey, those two guys from the University of Tennessee are pretty smart. Perfect. They have research on the California Achievement Test, and they tell us we can distinguish the good teachers from the bad ones. Write it into the law, let's vote, approved.

Before I go on, let me also note that Tennessee is STILL in the bottom five percent in testing across the country… and it is the year 2012. Ironically, the lawmakers in 1991 wrote a document titled "Tennessee Challenge 2000." Yeah, the HOPE was that Tennessee would be out of the cellar by the year 2000. Twenty-one years in… still in the cellar… and…

The model is even more convoluted now. Instead of focusing on 2nd through 5th grade students and teachers, we have a value-added system that impacts every teacher. So we "predict" the score of a 9th grade Algebra student based on scores from elementary and middle school. The "statistics" behind this process are deemed too difficult for anyone to understand, so they are never shared with the public or with teachers. Yet the math is really not all that complicated. Let me explain how simple regression works. I know I am simplifying the statistics, but the real math is not much different, just longer.

- Suppose your daughter went to elementary school X and averaged a score of 85. Then she goes to middle school XX and averages 82. When she enters 9th grade at high school XXX, she is "predicted" to average 79 (i.e., down three points at each school level).
- Now suppose your son went to elementary school Y and averaged 85. Then he goes to middle school YY and averages 87. When he enters 9th grade at high school YYY, he is "predicted" to average 89 (i.e., up two points at each level).
- Now, let's suppose that both your son and daughter score an 84 on the Algebra test. High school XXX (where your daughter goes) will be promoted as one of the best high schools in the state of Tennessee: she beat her predicted score by five points. High school YYY (where your son goes) will be publicly destroyed as a horrible place for students. While your daughter beat her predicted score by five, your son fell five below his… AND YET they both had the EXACT SAME SCORE in elementary school and the EXACT SAME SCORE in high school.
- Finally, the other dirty secret about the testing is that the difference between scoring a 79 and an 84 (for example) may be answering two more questions correctly. Yes, it makes complete sense for a teacher to spend an ENTIRE YEAR so that a student answers two more questions correctly. Please note my sarcasm here.
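To make the arithmetic in the bullets above concrete, here is a minimal sketch of the prediction logic as I've described it. The real model is a far longer mixed-model regression; this hypothetical illustration only extrapolates each student's per-level change, which is the simplified version walked through above. The function names are my own inventions, not anything from the actual system.

```python
def predicted_score(elementary_avg, middle_avg):
    """Extrapolate the change between school levels to 'predict' 9th grade.

    A deliberately simplified stand-in for the real regression model.
    """
    change = middle_avg - elementary_avg
    return middle_avg + change

def value_added(actual, predicted):
    """Positive = the school 'beat' its prediction; negative = fell short."""
    return actual - predicted

# Daughter: 85 in elementary, 82 in middle school -> predicted 79.
daughter_pred = predicted_score(85, 82)

# Son: 85 in elementary, 87 in middle school -> predicted 89.
son_pred = predicted_score(85, 87)

# Both actually score 84 on the 9th grade Algebra test.
print(value_added(84, daughter_pred))  # +5: high school XXX is celebrated
print(value_added(84, son_pred))       # -5: high school YYY is condemned
```

Same starting scores, same final scores, opposite verdicts: the "value added" is entirely an artifact of the middle-school trend.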

The three high schools that earned recognition in Tennessee this year as phenomenal schools are high schools like high school XXX above. And they earned those distinctions because of scores in 9th grade Algebra alone.

What value added does to our schools:

- It creates a climate where teachers must focus instruction on mediocrity (it's all about getting the most mediocre questions correct).
- It creates a reward system that is flawed: it not only unfairly punishes one school but absurdly rewards another.
- It makes school about correctly answering questions that can be googled in five seconds, and it perpetuates the mindset that kids will be better prepared for college, work, and life if they answer two more multiple-choice questions correctly.

**We want to engage students in innovation, critical thinking, and collaboration**. Value added promotes none of these. The longer we reward excellence as two more questions correct on a test, the longer we diminish the value of our educational system.