during listening tours and in educator lunchrooms
throughout the state.
Murphy said that the STA fully supports the
efforts of the parent and teacher group and the
advisory committee. “My kids go to the Boston
Public Schools,” she said. “The kids are tested out.
There is a lot of anxiety and burnout.
“They are spending millions of dollars on the
test, but meanwhile my son’s school budget is being
cut,” she added. “In Boston they don’t have a lot of
the extras that Sharon has. And now it looks like his
class size is going to get bigger.”
Kathleen Turner, a Sharon High School French
teacher who was named the 2013 Massachusetts
Teacher of the Year, spoke for colleagues in tested
grades and subjects when she noted, “They tell
us to differentiate instruction, but then measure
performance by making kids take the same test.”
Teacher Dorothy Macoritto questioned how
useful the test results are in measuring growth.
Speaking about her own children, who attend the
nearby Canton Public Schools, she said, “It seems
like my kids are taking tests constantly, but how
are evaluators going to use the results to measure
growth? If a kid gets 100 on the pretest and 100 at
the end of the year, you won’t show growth. Does
that mean the teacher is bad? What if she gets a 50
on the pretest and a 60 at the end of the year?”
Snow said that his research has helped him
understand why the testing regimen is so hard to
change. “There is a lot of money going into private
pockets,” he said.
He believes that educators can make changes
in their own practice now, even while continuing to
push for new laws.
“In a place like Sharon, where the students do
very well, we have more flexibility than we might
think,” he said. “The amount of time students spend
actually taking state tests is small — maybe 2 to 3
percent of the time. It’s the high stakes attached to
the results that lead to so much time being spent
preparing for these tests instead of fostering a love
of learning. But we do have some control over the
other 98 percent of our time. We need to exercise it.”
Snow has a strong personal interest in the
outcome of the debate, since his second child will
soon be in the public schools.
“Some people suggested that I start saving up
money to send our sons to a private school,” Snow
said. “That didn’t sit well with me. I’m a proud
public school teacher. I’m not going to do that until I
really give this my best effort — to see what we can
do about it.”
For more information and updates, please visit
Starting next fall, some districts will be
required to issue Student Impact Ratings
for teachers and administrators based
on District-Determined Measures and state
standardized test scores. And in the fall of 2017,
more districts will fall under this mandate, which
is part of the educator evaluation system.
There is one big problem: No one has figured
out how to use student test score data to fairly,
accurately and reliably determine a given teacher’s
contribution to a particular student’s performance.
Bill Parsons, president of the Westborough
Education Association, put it this way in describing
a test he had given to his engineering students:
“The scores ranged from 95 to 27. They all got the
same lesson and the same materials from the same
teacher. Some students obviously studied harder
than others.”
Parsons said his members opposed the
requirement from the start.
“We felt it would create competition among
teachers,” he said, “and we felt it would reduce
incentives to take the kids who needed the most
work. It would break down the good collaboration
we have among our colleagues and wouldn’t give
administrators useful information about teachers.”
Local associations and district administrators
across the state are grappling with ways to
implement — or quietly downplay — the Student
Impact Rating requirement.
“Teachers develop and administer tests all
the time to find out what our students know and
to improve instruction,” said MTA President
Barbara Madeloni. “But making up new student
tests simply to evaluate teachers is a fool’s errand.
The results will be useless and the time spent on
this bureaucratic mandate will be time lost to real
teaching.”
“We encourage members who oppose this
mandate to let local and state education officials
know what they think about it and to organize to
change it,” she added.
Under the new federal education law,
the Every Student Succeeds Act, the federal
government no longer requires tying student test
scores to educator evaluations. However, the
requirement is now embedded in state regulations.
To get rid of it, either the Board of Elementary
and Secondary Education must vote to amend the
regulations or the state Legislature must pass a law.
The rating is
supposed to provide an objective measure of
whether a teacher’s impact on student performance
is high, moderate or low. There must be at least
two measures for every educator. For those who
teach a subject tested by MCAS or PARCC, one
rating must be based on the Student Growth
Percentiles assigned to students in that teacher’s
classes.
The impact rating is to be used to inform the
length and content of the Educator Plan.
An added complexity is that this mandate
remains even as the state is transitioning from
MCAS and PARCC to MCAS 2.0.
At the time the mandate was passed,
Massachusetts was considered to be among
the more enlightened states, since the test
scores didn’t count for that much. But now that
implementation has begun, many are questioning
the time and energy it is taking away from the
classroom, along with the legitimacy of the results.
To get enough data to look at, many districts
and teachers have had to create new growth
measures, such as a pre-test at the beginning of the
year and a post-test at the end.
Parsons described the system bluntly. “DDMs,
honestly, are crazy,” he said.
Tom Scott, executive director of the
Massachusetts Association of School
Superintendents, told MTA Today that his
association “is a strong supporter of common
assessments.” But he added, “We have a problem
with the rating system. How do you create a
rating system with any validity, reliability and
fairness?”
Westborough Assistant Superintendent Daniel
Mayer also voiced concerns.
“While we all agree it would be invaluable to
determine each teacher’s exact impact on student
learning, few believe local districts have the
capacity to create a DDM system that will do this
in a meaningful and fair way,” he said.
In Westborough, the WEA and the School
Committee bargained language that gives teachers
a lot of say over choosing their DDMs and
determining how much student performance has
grown. Other districts have put off bargaining over
the mandate as long as possible or are exploring
other ways to reduce its impact.
Educators who question the accuracy of
measures like DDMs are in good company.
The American Educational Research Association