Well, this is my first blog post ever. I am used to just mulling things over in my head, grumbling to my pals, or telling students what to read. Today, though, my topic is a few overlooked gems I would like to assign as reading to aides to members of Congress regarding decisions they will soon have to make about our nation’s testing policies.
My expectation is that Congress will, at least some time in 2013, resume debating whether to keep mandated state testing in grades 3 through 8 under No Child Left Behind, and what kind of stakes to require states to attach. So once again, staffers have to figure out how to design a policy that works for all students. Collectively, the readings below point to some clear flaws with current policy, and suggest possible alternatives.
One is from five years back, a 2007 Education Week Commentary entitled “No Child Gets Ahead,” by Anthony P. Carnevale. Colleen Donovan, David Figlio, and Mark Rush of the National Bureau of Economic Research used data from the federal Early Childhood Longitudinal Study to analyze the educational attainment of low- to middle-income, high-achieving students. Specifically, they identified "more than a million grade school students from families making less than $85,000 a year who start out in the top half of their class but fall off the college track on the way to high school." Part of the story was that these achievers were being harmed by NCLB’s focus on the lowest-performing students in the schools they attended. They found that teaching to the test “dulls creativity and generally ignores the students who can meet the standards.”
As Carnevale writes: "With lower standards on offer, many high-performing students from working families rush down to meet them. They give in to lower standards because their college and career expectations are fragile and they get less support at home and at school than students born into affluent families." The way forward, he says, "is to move beyond uniform standards altogether, toward individualized standards." Hmm, how does that fit with the onslaught of Common Core assessments?
The second is the 2011 report of the National Research Council, Incentives and Test-Based Accountability in Education, which interrogates what the behavioral and social sciences (in particular, economics and experimental psychology) tell us about the use of testing and incentives to improve performance. Based on 10 years of empirical work, this group of psychologists, economists, and testing experts concluded that "the available evidence does not give strong support for the use of test-based incentives to improve education and provides only minimal guidance about which incentive designs may be effective" (p. 91). The report explains the trade-offs in different kinds of accountability systems and reviews the various considerations about incentives: target, performance measures, consequences, and support (p. 33). NCLB, obviously, has provided "many ways for schools to fail" (p. 49); wouldn't it be better to have test scores instead serve as a trigger for a deeper examination of instructional and organizational norms inside the schools?
Last but not least is a new piece by Andrew McEachin and Morgan Polikoff in the October 2012 Educational Researcher, "We are the 5%: Which schools would be held accountable under a proposed revision of the ESEA?" The authors apply the bill’s proposed accountability criteria, which seek to identify the lowest-performing schools, the largest within-school achievement gaps, and the lowest-performing subgroups, to schools in California, asking how stable the various classifications are and whether they identify the schools they were designed to identify. Based on their findings, they offer numerous important policy recommendations, including “considering alternatives to the proposed Lowest Subgroup Achieving Schools [LSAS] criteria, which, as written, target schools serving significant numbers of students with disabilities,” for instance by stratifying the LSAS criteria by subgroup (Hispanic, special education, and so on) (p. 250). They also note the importance of administering accountability separately by school level (elementary, middle, and high): say, 15% of each type if the policy goal is to hold 15% of all schools accountable per year. McEachin and Polikoff highlight the importance of state policymakers using three-year averages of combined proficiency and growth measures to give the clearest picture of persistently low-achieving and low-growing schools. The authors recommend that Congress commission similar analyses from all states to look at possible implications.
Now some may argue that the Common Core assessments will, in time, solve some of the problems with low-level state standardized tests driving instruction down. But does that tell aides to members of Congress what kind of testing and accountability system to enact next year, in 2013? What are the likeliest measures to build state capacity for intervention while not harming instruction? The 1994 Improving America's Schools Act, with its mandate of testing just once within each of three grade spans between the early grades and high school, was too loose for many in the civil rights community, who pushed for the tighter, subgroup-based enforcement model. How do you tend to the lowest-performing students without dragging down Carnevale's "low-hanging fruit" of high-performing, middle-income students (many of whom, he points out, are likely to become teachers and public servants themselves)?
If the answer is obvious, it has eluded me. One thing I do know is that there is no substitute for good congressional deliberation, and that just might involve bringing some of these researchers in to testify, run more models, answer questions, and even debate one another, as well as interact with the state officials who have to run these programs. Aides, happy mid-air reading after you go flying over the cliff.
By Elizabeth DeBray