It always starts in Cleveland. – K. Griffin

A while back I wrote a piece called “Teachers are Guilty.” The post was basically about how it was easier for us teachers to close our doors and focus on our students and classrooms than to become involved with the ugliness of educational politics. For fear of offending some colleagues, I never published the piece.

This week the Cleveland Plain Dealer and StateImpact Ohio pulled a little PR stunt by publishing teachers’ names and “value-added” scores. They made an amateurish attempt to mask this unethical report by also pointing out some of the flaws of using the data to evaluate teachers. Then, after reporting that the data was incomplete and should not be used on its own to evaluate teachers, they published the teachers’ names and “value-added” scores anyway. I guess competent reporting takes a back seat to tabloid-like, website-hit-generating drama.

The Plain Dealer and StateImpact Ohio focused on Cleveland. Teachers across the state should pay attention, because all educational ugliness begins in Cleveland.

The first Charter School Scam legislation was specific to Cleveland. While public education activists across the state tried to get it squashed, the message from the charter supporters was, “Don’t worry. It’s only an experiment. It’s specific to Cleveland.” And now we have failing charters all across the state.

Last year “The Cleveland Plan” was passed. While those paying attention opposed it, its supporters had the same message as with the charters: “It’s specific to Cleveland.” One year later the Ohio House and Senate are trying to impose a similar plan on Columbus Schools, and the Cleveland merit-pay system has been touted as a “statewide model” by Governor Kasich.

This week the flawed “value-added” scores were published just for Cleveland. How much longer until it’s a statewide shaming of our teachers?


Letter to Senator Hughes re: Value-Added and Evaluations. – K. Griffin

Senator Hughes,

This afternoon, February 2, I received a letter from you in response to an email I sent you in early December. The email concerned HB 555 and a then-rumored amendment mandating that value-added count for 50% of the evaluation for certain teachers.

Your two-page letter addressed the sections of HB 555 relating to the new school report cards, the PARCC assessments, and the implementation of the Common Core curriculum, but it made no reference to teacher evaluations and value-added, which was the only concern raised in my email.

The rumored amendment was added to the bill at the 11th hour, with no hearings or time for public input. The amendment goes against recommendations from the Ohio Department of Education, Battelle for Kids, and the Ohio Education Association, and even against a recent Bill and Melinda Gates Foundation study on student data and teacher evaluation.

I’d like to again point out the glaring contradiction in telling teachers to be creative and innovative and then basing 50% of their evaluation on a single standardized test score.

I hope, in the best interest of students and teachers, that this ill-conceived amendment is reversed quickly.

I thank you for your time and service to the people of Ohio.

Sincerely,

Kevin Griffin
Dublin, OH

Value-Added Fails to Identify “Good” and “Bad” Teachers. – K. Griffin

Below is a chart created by Gary Rubinstein, a Teach for America graduate. I just wanted to explain it in my own words. The correlation between the two sets of VA scores for these 665 teachers is only about 0.24; in other words, the relationship is close to random.

The chart plots the value-added scores of teachers who taught the same subject to two different grade levels in the same school year (e.g., Ms. Smith teaches 7th grade math and 8th grade math; Mr. Richards teaches 4th grade reading and 5th grade reading). The X-axis represents a teacher’s VA score for one grade level, and the Y-axis represents the same teacher’s VA score for the other grade level.

If the theory behind evaluating teachers based on value-added is valid, then a “great” 7th grade math teacher should also be a “great” 8th grade math teacher (upper right corner) and a “bad” 7th grade math teacher should also be a “bad” 8th grade math teacher (lower left corner). In theory, the points should form a straight line (or at least come close), showing a direct correlation between 7th grade VA scores and 8th grade VA scores, since those students, despite being a grade apart, have the same teacher.

There is a huge contradiction in telling teachers “DON’T teach to the test” and then basing 50% of their evaluation on one test score, a score that, as this chart shows, barely agrees with the same teacher’s score in another grade (r ≈ 0.24).

[Chart (va_in_nyc): the 665 teachers’ value-added scores, one grade level per axis]
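
To make the correlation computation concrete, here is a minimal sketch (in Python) of how a chart like this is scored: each teacher contributes one pair of numbers, the VA score from one grade and the VA score from the other, and the Pearson correlation is taken over all 665 pairs. The data below are simulated under an assumed noise model; the variances are my own illustrative choices, not Rubinstein’s data.

import numpy as np

rng = np.random.default_rng(0)
n_teachers = 665  # same count as the chart

# Hypothetical model: a small, stable "teacher effect" buried in a lot of
# classroom-level noise. These variances are illustrative assumptions only.
teacher_effect = rng.normal(0, 1, n_teachers)
score_grade_a = teacher_effect + rng.normal(0, 2, n_teachers)  # e.g., 7th grade
score_grade_b = teacher_effect + rng.normal(0, 2, n_teachers)  # e.g., 8th grade

r = np.corrcoef(score_grade_a, score_grade_b)[0, 1]
print(f"Pearson r between the two grades' VA scores: {r:.2f}")
# With noise twice the size of the teacher effect, r lands near 0.2,
# the same order as the 0.24 reported for the real data.

The point of the simulation is only that a correlation this low is exactly what appears when the stable, teacher-driven part of the score is small relative to the noise.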

Linda Darling-Hammond says Value Added Adds Little Value to Teacher Evaluations. – I. Lieszkovszky

Lieszkovszky, Ida.  “Linda Darling-Hammond says Value Added Adds Little Value to Teacher Evaluations.”  StateImpact Ohio, Eye on Education.  January 28, 2013.  Retrieved from:  http://stateimpact.npr.org/ohio/2013/01/28/linda-darling-hammond-says-value-added-adds-little-value-to-teacher-evaluations/

In a Cleveland radio interview, the Stanford University professor and leading education expert discusses how she was once very enthusiastic about value-added, but as facts and evidence accumulated she came to no longer consider it reliable.

“I’m a researcher who was very interested and enthusiastic about value added a few years ago, who has, among many other researchers, found that it has a lot more difficulty and problems than we realized,” Darling-Hammond said. “So the National Research Council has recently come out to say value added should not be used, because it’s very unstable. It’s unreliable. It turns out that it’s biased.”

Gates Foundation Wastes More Money Pushing VAM. – G. Glass

Glass, Gene V.  “Gates Foundation Wastes More Money Pushing VAM.”  Blog: Education in Two Worlds.  January 14, 2013.  Retrieved from: http://ed2worlds.blogspot.com/2013/01/gates-foundation-wastes-more-money.html

Professor Glass, an education researcher in the School of Education and the National Education Policy Center at the University of Colorado Boulder, and Emeritus Regents’ Professor at Arizona State University, questions the obvious data manipulation used in the latest value-added study funded by the Bill and Melinda Gates Foundation.

At the center of the brief’s claims are a couple of figures (“scatter diagrams” in statistical lingo) that show remarkable agreement in VAM scores for teachers in Language Arts and Math for two consecutive years. The dots form virtual straight lines. A teacher with a high VAM score one year can be relied on to have an equally high VAM score the next, so Figure 2 seems to say.

Not so. The scatter diagrams are not dots of teachers’ VAM scores but of averages of groups of VAM scores. For some unexplained reason, the statisticians who analyzed the data for the MET Project report divided the 3,000 teachers into 20 groups of about 150 teachers each and plotted the average VAM scores for each group. Why?
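
Glass’s objection is easy to demonstrate numerically. The sketch below uses made-up numbers, not the MET data, and assumes the 20 groups were formed by ranking teachers on their first-year score, as binned scatter plots typically are: even when individual teachers’ year-to-year scores correlate weakly, the 20 group averages line up almost perfectly.

import numpy as np

rng = np.random.default_rng(1)
n = 3000  # roughly the number of teachers in the report

stable = rng.normal(0, 1, n)          # persistent teacher component
year1 = stable + rng.normal(0, 2, n)  # noisy individual VAM scores
year2 = stable + rng.normal(0, 2, n)

r_individual = np.corrcoef(year1, year2)[0, 1]

# Average in 20 groups of 150, assuming groups are formed by rank on year 1.
order = np.argsort(year1)
group1 = year1[order].reshape(20, 150).mean(axis=1)
group2 = year2[order].reshape(20, 150).mean(axis=1)
r_grouped = np.corrcoef(group1, group2)[0, 1]

print(f"individual-teacher r: {r_individual:.2f}")  # about 0.2
print(f"20-group-average r:   {r_grouped:.2f}")     # about 0.9

Averaging throws away the within-group scatter, which is exactly where the instability of individual VAM scores lives, so the plotted dots look far more stable than the underlying teachers actually are.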

A Better Way to Grade Teachers – Darling-Hammond; Haertel

Darling-Hammond, Linda and Haertel, Edward.  “A Better Way to Grade Teachers.”  Op-Ed.  Los Angeles Times.  November 5, 2012.  Retrieved from: http://www.latimes.com/news/opinion/commentary/la-oe-darling-teacher-evaluations-20121105,0,650639.story

The authors write about the continuing stream of reports on the problems with using test scores to evaluate teachers, and about what a better evaluation system should look like.

…value-added ratings cannot disentangle the many home, school and student factors that influence learning gains. These matter more than the individual teacher in explaining changes in scores.

The Weighting Game. – M. Di Carlo

Di Carlo, Matthew. “The Weighting Game.”  The Shanker Blog. May 9, 2012. Retrieved from: http://shankerblog.org/?p=5764

The author uses Florida as an example of how weighting can be misleading when judging school districts, and he applies the same logic to teacher evaluation systems built on value-added, such as ours in Ohio.

Now, back to the original point: All of these issues also apply to teacher evaluations. You can say that value-added scores count for only 40 or 50 percent, but the effective weight might be totally different, depending both on how you incorporate those scores into the final evaluation score, as well as on how much variation there is in the other components. If, for example, a district can choose their own measures for 20 percent of a total evaluation score, and that district chooses a measure or measures that don’t vary much, then the effective weight of the other components will actually be higher than it is “on paper.” And the effective weight is the one that really matters.

All the public attention to weights, specifically those assigned to value-added, seem to ignore the fact that, in most systems, those weights will almost certainly be different – perhaps rather different – in practice. Moreover, the relative role – the effective weight – of value-added (and any other component) will vary not only between districts (which will have different systems), but also, quite possibly, between years (if the components vary differently each year). This has important implications for both the validity of these systems as well as the incentives they represent.
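
Di Carlo’s distinction between nominal and effective weight can be shown with a few lines of arithmetic. The sketch below invents a district’s numbers (the spreads are illustrative assumptions, not Ohio data) and measures each component’s effective weight as its share of the variance in the final evaluation score.

import numpy as np

rng = np.random.default_rng(2)
n = 1000  # hypothetical teachers in one district

# Illustrative spreads: value-added varies a lot, while the locally chosen
# measure barely varies at all (nearly everyone gets the same rating).
components = {
    "value-added":   (0.50, rng.normal(50, 15, n)),
    "observations":  (0.30, rng.normal(80, 10, n)),
    "local measure": (0.20, rng.normal(90, 1, n)),
}

# Effective weight: each weighted component's share of the variance in the
# final score (the components are independent here, so variances add).
var_parts = {name: np.var(w * scores) for name, (w, scores) in components.items()}
total = sum(var_parts.values())
for name, (w, _) in components.items():
    print(f"{name}: {w:.0%} on paper, about {var_parts[name] / total:.0%} in effect")

Because the local measure hardly varies, it contributes almost nothing to who ends up ranked where; in this made-up district, value-added’s 50% “on paper” drives roughly 85% of the differences in final scores.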