To no one’s surprise, the market-oriented EducationNext, “a journal of opinion and research,”
does not like the findings outlined in the recent book, The Public School Advantage:
Why Public Schools Outperform Private Schools,
which I wrote with Prof. Sarah Theule Lubienski. After all, the two large,
nationally representative datasets we analyzed do not lend support to that
publication’s agenda of school privatization. While I wouldn’t normally respond
to reviews, the misconceptions and errors advanced by EdNext and subsequent blog postings deserve some scrutiny. In
particular, as I show below, bloggers at the University
of Arkansas and the conservative National
Review Online have used the EdNext
review as an opportunity to make claims that are simply wrong.
Writing in EdNext,
Patrick Wolf, the 21st Century Endowed Chair in School Choice at
Arkansas, and principal investigator of the School Choice Demonstration
Project, offers a serious but flawed
reading of our book. Wolf questions — but is unable to
disprove — our findings. He notes that “researchers have routinely found that
similar students do at least as well and, at times, better academically in
private schools than in public schools,” which is not exactly
true,
and depends on a misrepresentation of the research literature to which he himself
has contributed. But even if it were true, past assumptions are certainly not a
sufficient reason to preempt further research; for instance, researchers also once
routinely believed that DNA was an unimportant molecule and that saturated
fat was harmful, but new findings have caused us to question such received wisdom.
Indeed, strangely enough, Wolf goes on to cite other
independent research that, in fact, undercuts what he
claims.
Perhaps most importantly, in his zeal to disprove that
public schools could have any advantage over private schools, Wolf ignores a
main focus of the book: why instruction
in public schools now seems to give students an unexpected edge when it comes
to mathematics performance, and what these findings say about the benefits and
dangers of school autonomy.
Overall, Wolf points to “problems” that he actually does not
show to be problems, offers no evidence that the other approaches he suggests
would change the findings at all, and finds no errors in the book. While he
raises several issues that — although it might not be apparent from his review
— we have already discussed extensively in the book, I respond here to a few of
Wolf’s statements, either because his claims require scrutiny, or because they
have been taken up by bloggers associated with Wolf, who misrepresent the data,
methods, and findings in their advocacy for school vouchers.
Are voucher studies relevant?
The main shortcoming of Wolf’s and the bloggers’ efforts to
refute our findings is that, for evidence, they simply point to evaluations of
voucher programs, usually ones that they have conducted. As we have noted before,
even if we accept the validity of their studies, these evaluations of local voucher programs simply do not address the
issue at hand — the larger question of achievement in different types of
schools. They purport to measure the impact of vouchers in what are
actually small, non-representative samples of public and private schools; we
draw from large, nationally representative datasets. Their studies are,
despite what they claim, program evaluations in local contexts, and do not
speak to the larger question of the relative effectiveness of U.S. public and
private schools. Wolf
studies the effects of vouchers on students who are attempting to leave a
specific public school for a private school that appears more desirable on some
measure, whether it be peer demographics, instructional quality, or the use of
uniforms. One cannot generalize to all public and private schools from such
studies. Thus, it is simply
silly to claim — as does a blog post from the libertarian Cato Institute — that “private schools beat public schools” based on those studies, especially when
they provide absolutely no evidence
that this is generally true.
Why focus on achievement tests?
Another concern, or irony, is the claim that we have a
“narrow definition of school performance.” This is interesting because my
co-author and I have research interests that are not really focused on testing outcomes,
unlike many market-oriented school reform advocates such as, say, Wolf
and his colleagues, who have made great efforts to
prove that one type of school outscores another, or to disprove research that
questions that agenda (see this summary).
As we have said before, we embrace many other goals for schools. Our book’s
treatment of student achievement data simply reflects the fact that such measures have been elevated by these
market-oriented reformers, but actually do not support their claims. Also,
it is important to note that, after years of trying to prove that voucher
programs and private schools are leading to higher test scores, and getting
less than compelling results on those measures, market-oriented school
reformers have recently been emphasizing other outcomes, such as graduation and
college attendance rates. While I am happy to see them now embrace other goals
for schools, their effort to move the goalposts represents Plan B for market
advocates.
What about reading?
Wolf finds it “curious and frustrating” that we would focus
on mathematics and not reading achievement (and the bloggers suggest that we
didn’t report reading results because they didn’t match our “story”). But there
is actually widespread agreement that
math achievement is the best measure of school performance, as Wolf’s mentor
has noted: “Math
tests are thought to be especially good indicators of school effectiveness,
because math, unlike reading and language skills, is learned mainly in school.”
Thus, although some other research has indeed found smaller public school advantages
in reading than in math, that tells us less about school impacts, since it reflects
the fact that children in private schools tend to get greater advantages in
reading from home — being read to,
being exposed to a larger vocabulary, etc. — while home advantages are less
pronounced in math, making that subject a better measure of school effects.
The other reason that we did not analyze reading achievement
is that we did not set out to study or prove whether one type of school is
better, but were initially focused on analyses of math teaching and learning
when these public/private results first emerged. Neither of us has any special
interest or training in reading issues, and — perhaps unlike some other researchers — we do not need to weigh in on
topics for which we have no particular expertise.
Test over-alignment?
Where Wolf really misses the mark is in his concern that we
use “tests that align more closely with public school than with private school
curricula.” This claim almost comes across as a suggestion of some kind of
conspiracy on our part to use only measures that arrive at a particular
finding. Yet these NAEP and ECLS-K tests have been used by Wolf
and his colleagues
with no such prior complaint. Indeed, these are federal datasets
whose construction and administration are overseen by bipartisan panels of
experts, including professionals in testing and assessment, curriculum and
learning, U.S. governors, state superintendents, teachers, businesspeople, and parents,
as well as representation from the
private school sector. One school choice advocate (at that time) called NAEP
the “gold
standard” because “the federal program tries to align its
performance standards with international education standards.” These tests are
constructed by experts to measure the most important content for students in
today’s world, not to align with curriculum in one type of school or another.
Nevertheless, Wolf’s complaint that public schools do
a better job in the areas measured by these tests misses the fact that this is
not a weakness of our study but instead one of our major findings: that private schools are less willing to
adopt current curricular standards and more likely to employ unqualified
teachers who use dated instructional practices. And, as the data show,
students in private schools are more likely to sit in rows, do worksheets, and
see math as memorization. These factors point
to the dangers of deregulation and autonomy that people like Wolf champion.
As we discuss in the book, this reflects the fact that public schools have more
often embraced reform models based on professional understandings of how
children most effectively learn mathematics rather than the market models in
many private schools that endorse many parents’ preferences for back-to-basics
and outdated methods of teaching mathematics. Wolf’s criticism is akin to
saying that scores are not a good measure of football teams because, now that the
sport has evolved away from the traditional 3-yards-and-a-cloud-of-dust-based ground
game, teams that have a good passing game score more often.
Bias?
Wolf claims that our demographic controls are inappropriate
and potentially biased against private schools, echoing an earlier
critique made by his mentor — a critique which we have already
discussed at great length in the book and elsewhere,
and have shown to be lacking. In fact, we were very aware of
issues of potential bias and were extremely
cautious in our approach, taking widely accepted measures to deal with
those issues. In particular, Wolf suggests that reporting of students with
special needs and student eligibility for subsidized lunch are inappropriate
indicators, even though they have also been used by Wolf.
Yet, as we explain in the book (and Wolf declines to mention), our estimates,
if biased at all, are most likely biased in
favor of private and independent (charter) schools, since the available
data do not account for the fact that families in those schools have
demonstrated particular interest in their children’s education. That is, although
we use the available demographic variables to compare students and families
across the two sectors, we cannot account for the unmeasured factors that cause
one family to invest in private or independent schooling, as compared with a
demographically identical family that sends their child to the local public
school. The fact that private school parents invest time and money in their
children’s schooling suggests that private school students tend to have hidden
advantages at home that cannot be observed through typical demographic data. This
is a significant bias that we are unable to account for — one in favor of private schools — which is
why recent findings in favor of public schools are so surprising and why
policymakers and researchers should want to know why these findings have emerged, as opposed to trying to explain
them away.
Sectors and vouchers (revisited)?
Finally, Wolf criticizes our focus on the academic gains of
students who stay within public or private sector schools. That was a conscious
decision on our part to discern the impacts of those different types of
schools. Again, Wolf provides no evidence that an alternative approach would
change the results. In fact, as we note in the book, other independent
researchers (by “independent,” I mean those who don’t describe themselves as
Jedi warriors for school privatization, or point to the advocacy “work”
they still need to do in promoting school choice) analyzing these same data in
different ways have reached conclusions similar to ours. Moreover, Wolf ends
his review with a defense of vouchers, saying that our book “has nothing to say
empirically about private school voucher programs,” and thus that our findings
do not “undermine the case for private school vouchers.” This is a logical fallacy. Voucher programs are based on the
presumption that private schools are more effective. When two large-scale
nationally representative studies undercut that claim and illuminate drawbacks
of private school instruction, there are serious implications for the agendas
of voucher advocates.
Wolf is a respectable researcher, although he has, to my
knowledge, little if any experience with the overall question of public and
private school achievement (but instead continues to conflate evaluations of
local voucher programs with this larger question of public and private school
effects), not to mention the data and methods involved. Overall, Wolf raises
concerns with what is really a pretty standard treatment of two different and
respected datasets, but shows no errors in our federally funded analysis, and
provides no evidence that alternative approaches he suggests would change the
outcomes at all.
Jay Greene’s blog
Despite the shortcomings of Wolf’s assessment, he at least takes
a serious approach to the question. However, his review was then taken up by
bloggers like Wolf’s
department head at the University of Arkansas. Just
for context, the Department of Education Reform at Arkansas, led by Jay Greene,
was seeded by a multi-million
dollar gift from the Walton Family Foundation, the
philanthropic arm of the Wal-Mart heirs, which persistently promotes private
and market models for public education. This is an extremely unusual “gift”
to a university, and one can guess what the Waltons expected for their
investment. The Waltons also fund
the organization that runs EdNext, which is managed by a voucher proponent approvingly
described by Senator Lamar Alexander as “the leading
advocate of school choice,”
and who sees himself as part of “a
small band of Jedi attackers” on this issue, fighting “the unified might of
Death Star forces led by Darth Vader.” We discuss the questionable
credibility of this group in the book (which Wolf declines to disclose in his
review) in the context of how such foundations and their sponsored academics
advocate for particular education policies in ways that are similar to the faux
research apparatus being used to deny the science around climate change.
Greene basically
summarizes Wolf’s measured but faulty review of our book, but then goes on to
embellish in ways that are misleading and simply inaccurate. For instance, Wolf
correctly notes that we use students’ eligibility for free or reduced-price lunch to help
account for SES, but also acknowledges that we include all available measures
of students’ home resources, and use those to supplement lunch-eligibility data
(as we described in detail). However, in Greene’s telling, we simply used
eligibility for subsidized lunch alone, which would indeed create problems
regarding missing data and could bias the results against private schools. Greene misrepresents both our book and his
colleague.
Greene also
recklessly accuses us of burying data: “reading, graduation rates, college
attendance, incomes, etc… don’t fit their story so they ignore those measures.” Actually, if he had read the book on
which he was commenting, Greene would have seen that (1) income was used; (2) we spell out our reasons for focusing on math and not reading as noted
above; and (3) the two national datasets contain no measures
of “graduation rates, college attendance
rates” — simply opening the book would show any reader that the datasets covered
earlier grades and did not include measures pertaining to high school graduation
and college. One might think that the
Waltons would want some accuracy for their money, unless, of course, the sole
purpose of an effort such as Greene’s is simply to muddy the waters, not unlike
the efforts of some deep-pocketed climate change deniers.
Greene then writes,
“But the Lubienski’s (sic) don’t like randomized
experiments.” This, again, is simply false. Randomized trials, as in medical
trials, can be a wonderful tool for helping researchers understand the impacts
of interventions. However, as we have noted before,
there can also be serious limitations to randomization in understanding social
phenomena, including attrition and, in the work of Greene et al., a typical failure
or refusal to account for school-level demographic effects. And Greene points to local voucher programs to explain away larger public-private
patterns just like climate-change deniers point to local weather to overlook
larger climate patterns. Just as many market advocates have a fundamentalist-like
faith in choice and competition in schooling (again, as we discuss in the
book), Greene et al. appear to have a similar faith in randomization — that it
can explain all, and that anyone raising questions about its potential
shortcomings represents a challenge that must be attacked, rather than
understood.
Persistent errors, claims of a “problem” where none are
demonstrated, loyal reference to an irrelevant set of studies as if they were
revealed scriptures — all this suggests a fundamentalist faith in a belief
system more than an interest in finding what works and why, as well as some
desperation when that faith is challenged by evidence. Despite all this
commotion, it may be useful to know that, in some ways, I don’t really care
whether one type of school “beats”
another. As noted in our book, our own children attend public schools due
to convenience, not any principle on our part, for the same reasons they have
attended Catholic and Christian schools (state-funded in Ireland) in the past. But,
regardless of anyone’s preexisting prejudice (or lack thereof) on this issue,
we need to be wary of organizations that clamor for legitimacy while they work
to arrange evidence to align with the agendas of their special interest
funders. Still, we apparently struck quite a nerve, at least as indicated by
all these futile attacks on our findings.