Resisting the National Council on Teacher Quality’s Propaganda

The National Council on Teacher Quality (NCTQ) has published its recent review of teacher preparation.  The NCTQ is well financed (Gates, Walton, Broad, New Ventures Fund, and many more) and is a creation of the Fordham Foundation.  Together, their goal is to destroy teacher prep by convincing the nation that teacher preparation in the nation’s public and private colleges is failing.  And to prove it, they’ve developed a set of standards that Dr. Tom Slekar, Dean of the School of Education at Edgewood College (Madison, WI), says are so bad that “if our teacher education programs were evaluated ‘highly’ by NCTQ we would be violating our mission/values and all the research on child development and teaching and learning” (interview published on Living in Dialog by Anthony Cody, May 27, 2014).

The NCTQ’s effort is an assault on teacher education, and its propaganda needs to be resisted.  In this blog post, I’ve rounded up a few articles that call the NCTQ out and show how its methods threaten the nation’s teacher education infrastructure.


NCTQ’s Assault on Teacher Education.  According to the head of the NCTQ, ed schools don’t give teachers the tools they need.  Whose tools?  What tools?  The NCTQ is stuck in a 19th century version of teaching and a 21st century push to quantify learning through student achievement tests.  To the NCTQ, if teacher preparation is not focused on academic achievement, then it is not giving teacher candidates the tools the NCTQ thinks they need.


Those Nonsense Annual NCTQ Ratings Are Coming on June 17.  In this piece, the author reminds readers that the NCTQ ratings are coming (they are here now).  Dr. Schneider has written several articles on the NCTQ, which you can reach here.  Schneider, M.K. Deutsch29 Blog, June 16, 2014.

Why the NCTQ Teacher Prep Ratings are Nonsense.  Dr. Darling-Hammond explains that “NCTQ’s methodology is a paper review of published course requirements and course syllabi against a check list that does not consider the real quality of instruction that the programs offer, evidence of what their students learn, or whether graduates can actually teach.”  As she points out, states whose students score high on NAEP had teacher prep programs with the lowest ratings, while states like Alabama, which scored low on NAEP, had high NCTQ ratings.  She also says that the NCTQ is out of sync with current teacher education programs, most of which are at the graduate level.  Darling-Hammond, L. National Education Policy Center, June 19, 2013.

Response to the New NCTQ Teacher Prep Review by Peter Smagorinsky, The University of Georgia.  Dr. Smagorinsky briefly responded to some of the claims the NCTQ makes, which rely on rhetorical characterizations of “success” and “achievement” that spuriously elevate the belief that standardized tests reflect the whole of learning, a claim that few teachers or teacher educators endorse.  In contrast, most teachers and teacher educators believe that the NCTQ’s narrow focus on standardized “achievement” tests undermines an authentic education that prepares students for work and life.  Smagorinsky, P. The Becoming Radical Blog, June 17, 2014.

Market Forces

How Will Market Forces Transform Teacher Preparation?  This article by Anthony Cody provides the context within which the NCTQ has appointed itself the purveyor of truth about teacher preparation.  As Anthony points out, teacher preparation is being challenged by corporate reformers who have backed a group of non-educators called the NCTQ.  Financed by the same groups that are pushing test-based accountability and charter schools, the NCTQ has started the ball rolling to crush teacher preparation as we know it.  Anthony has written many articles about teacher preparation and the NCTQ, and you can reach them here.  Cody, A. Living in Dialog, May 29, 2014.

Shaky Methods, Shaky Motives: A Critique of the National Council of Teacher Quality’s Review of Teacher Preparation Programs by Edward J. Fuller.  In this peer-reviewed article, Dr. Fuller states that the NCTQ’s review of university-based teacher preparation programs concluded that the majority of such programs were inadequately preparing the nation’s teachers.  The study, however, has serious flaws, including a narrow focus on inputs, a weak research base, missing standards, omitted research, incorrect application of research findings, poor methods, exclusion of alternative certification programs, failure to conduct member checks, and failure to use existing evidence to confirm the report’s rankings.  All of these issues render the NCTQ report less than useful in efforts to understand and improve teacher preparation programs in the United States.  The article also suggests alternative pathways NCTQ could have undertaken to work with programs to actually improve teacher preparation.  It concludes by noting that the shaky methods used by NCTQ suggest shaky motives, such that the true motives of NCTQ for producing the report must be questioned.  Fuller, E.J. Journal of Teacher Education, 2014, Vol. 65(1), 63–77. © 2013 American Association of Colleges for Teacher Education.

Feeble and Incompetent

The NCTQ Review of Teacher Prep in the University System of Georgia is Feeble & Incompetent.  An analysis of the NCTQ Review in the context of teacher preparation at Georgia’s 21 state universities that offer teacher education programs.  The NCTQ claims to have a handle on the state of teacher preparation in the nation, but the results of this investigation show that it reviewed only a very small percentage of the teacher prep programs offered in America’s colleges and universities.  Hassard, J. The Art of Teaching Science Blog, June 22, 2014.

National Council for Teacher Quality Review: A Stacked Deck?  In this study, we analyzed the makeup of the NCTQ review panels and discovered that they represent a “stacked deck.”  Only 2.5% of the participants in the review were teacher educators (active professors out there doing teacher education).  The panels were stacked with corporate executives, foundation executives, and employees of NCTQ, and were far from representative of the field of teacher education.  Hassard, J. The Art of Teaching Science Blog, June 20, 2014.

Figure 3. The Stacked Deck of Who Performed the NCTQ Teacher Prep Review

Results Are In: NCTQ Report on Teacher Prep Rated with Four Cautions.  In this article, the author analyzes the 2013 NCTQ Review of Teacher Prep in the US and, using a junk science model developed by M.S. Carolan, concludes that the NCTQ study scored high on the junk science index and therefore warrants four cautions, the highest rating possible in the model.  Readers should be extremely cautious about using the results of the NCTQ review of teacher prep.  Hassard, J. The Art of Teaching Science Blog, July 1, 2013.

NCTQ Report on Teacher Prep: the Devil is in the Detail.  In this article we dig deep into the so-called methods used to evaluate university teacher prep programs.  The “methods” rely on sources such as syllabi (when they can get them), textbooks, catalogs, handbooks, and evaluation forms.  We show that the NCTQ report on teacher preparation is junk science.  The method employed in the study avoided data from the very sources that could help uncover the nature of teacher preparation: faculty, administrators, students, and cooperating school districts and educators.  Without observing teacher preparation programs directly, and without establishing a cooperative relationship with these institutions, the NCTQ condemns itself to false claims and outright opinions that have little bearing on the nature of teacher preparation.  Hassard, J. The Art of Teaching Science Blog, June 23, 2013.




The NCTQ Review of Teacher Prep in the University System of Georgia is Feeble & Incompetent


In this post I’m going to explain why I conclude that the NCTQ review of teacher prep at the University System of Georgia colleges and universities that offer teacher education is feeble and incompetent.

Figure 1. Map of Teacher Preparation Institutions in Georgia.  The NCTQ Review of Teacher Preparation Did Not Even Come Close to Reviewing the Status of Teacher Prep in the Peach State.

I am Professor Emeritus of Science Education at Georgia State University. I was a professor at GSU for 33 years, from 1969 to 2003. For the past 11 years I’ve blogged at The Art of Teaching Science, as well as written two editions of The Art of Teaching Science (Library Copy) and the second edition of Science as Inquiry (Library Copy).

While at GSU, I collaborated with colleagues across the university and with school districts in the greater Atlanta area to design several teacher education programs. The first program, the Phase Program for Secondary Science, was a one-year, full-time certification program designed for students with degrees in science.

Starting in 1988, I worked with other professors in the College of Education to develop the first alternative teacher prep program in Georgia. The alternative certification program (ACP) was funded by the Professional Standards Commission from 1988 to 1993. Based on our experiences in the ACP, we designed a mathematics and science teacher preparation program for students with degrees in math, science, or engineering.

In 1993, the Teacher Education Environments in Mathematics and Science (TEEMS) program was created as a four-semester program beginning with an intensive summer institute, followed by an internship in a middle school and then one in a high school.  Students were assigned to schools in clusters of 10 to 20, and each worked with mentor teachers, professors, and graduate interns for two semesters.  Students completed their graduate work in the second summer and began their teaching careers in the fall.  Within a few years, English education and social studies education became part of the TEEMS teacher prep model.  It is the primary program for preparing secondary teachers at Georgia State University.

I was also involved in designing graduate hybrid courses at GSU for teachers that combined online learning with face-to-face instruction.  In our earliest work, in the early 1990s, we were using Macintosh SE 20 computers and a very crude telecommunications system.

In the late 1980s, we began a collaboration with educators, researchers and teachers in the Soviet Union, and out of a long series of collaborations, developed the Global Thinking Project, one of the first telecommunications projects linking schools in the U.S. with schools in the Soviet Union.  We designed teacher preparation institutes and brought teachers from the U.S., Russia, Spain, Australia, and the Czech Republic to Atlanta for hands-on and face-to-face experiences using the Global Thinking curriculum materials, and the technology that was essential to global collaboration.

That said, here is my first analysis of the NCTQ Review of Teacher Prep.

NCTQ’s Numbers Don’t Add Up

The National Council on Teacher Quality released its review of teacher prep programs in the United States.  It has reported on teacher prep programs in every state in the country.  Its review was based on examining course catalogs and course syllabi, when it could get them.

The NCTQ claims that its method reveals the quality of teacher preparation programs around the country.  Although I don’t think their review has much to do with quality, let’s forgo quality and take a look at quantity.  Do they present a valid picture of teacher education options as they exist today in our nation’s colleges and universities?  I don’t think they do.

To find an answer to this question, I compared their report on teacher education in Georgia to what I could find in documents that are available to anyone simply by navigating to the University System of Georgia’s (USG) website and, from there, following links to each state university that offers teacher preparation programs.

Table 1 includes data that I extracted from the USG website and the 21 universities that support teacher education in the state of Georgia.  Georgia state universities graduate about 4,500 teachers each year.  I’ve listed the top ten producing universities in sequence in the table, and then grouped the remaining 11 into “Other Teacher Education Programs.”

If you are not familiar with teacher education in Georgia, the results might surprise you: Kennesaw State University (KSU) is the leader in graduating teachers each year.  It offers at least 19 teacher preparation programs at the undergraduate and graduate levels.  It is the third largest public university in Georgia, after the University of Georgia (UGA) and Georgia State University (GSU).  Yet the NCTQ reviewed only one of KSU’s programs, four of UGA’s 18 programs, and two of GSU’s 17 teacher prep programs.

If you re-examine the data in Table 1, very few of the State’s teacher prep programs were reviewed.


Table 1. Teacher Prep in Georgia in 2012 – 2013 in 19 University System of Georgia Universities

In fact, if we examine the teacher preparation programs in the University System of Georgia as a whole, there are 151 undergraduate programs, and 118 graduate programs.  This is a total of 269 programs, and the NCTQ reviewed only 39 of them (Figure 2).

What About the Other 6,000 Teacher Prep Programs?

If we extend this thinking to the nation as a whole, we find that the NCTQ has reviewed a minority of the programs available to people to become teachers. The American Association of Colleges for Teacher Education (AACTE) lists more than 800 member institutions (public and private) that offer teacher preparation programs.

If we do a little math based on data from the University System of Georgia, we estimate that there are more than 8,900 teacher education programs offered in the U.S. in early childhood education, elementary education, secondary education, special education, art, music, physical education, and gifted education.

The NCTQ reviewed 2,400 teacher prep programs.  They claim that their review answers the question: How are the nation’s institutions that train tomorrow’s teachers doing?  Their report fails to mention that more than 6,000 programs were never reviewed.
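The estimate above can be sketched as a back-of-the-envelope calculation.  The Georgia figures (269 programs across 21 institutions) come from Table 1; the assumption that roughly 700 institutions nationwide offer a comparable number of programs is mine, chosen conservatively against AACTE’s 800-plus member institutions.

```python
# Georgia (USG) figures from Table 1: 151 undergraduate + 118 graduate
# programs across 21 institutions, of which the NCTQ reviewed 39.
undergrad, grad, usg_institutions = 151, 118, 21
usg_programs = undergrad + grad          # 269 programs in Georgia alone
reviewed_in_georgia = 39

programs_per_institution = usg_programs / usg_institutions  # about 12.8

# Assumption (mine): Georgia's ratio is roughly typical nationwide, and a
# conservative 700 institutions offer comparable program counts.
national_institutions = 700
national_estimate = programs_per_institution * national_institutions

reviewed_nationally = 2400
print(round(national_estimate))                              # 8967
print(round(reviewed_nationally / national_estimate * 100))  # 27 (percent)
```

On these assumptions, the national total lands just above the “more than 8,900” figure cited above, with the NCTQ’s 2,400 reviewed programs covering barely a quarter of it.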

Figure 2. Number of Teacher Programs Reviewed by NCTQ Compared to the Number of Existing Programs in Georgia and the USA

The results that the NCTQ is touting as a report on the status of the nation’s teacher preparation programs are a sham.  The NCTQ method is described here.  According to NCTQ, it posted rankings of 1,612 teacher prep programs in 1,127 public and private institutions of higher education.  If my research, as reported above, is correct, there are even more teacher prep programs in the nation than I had reported.  Using the NCTQ figures, a fair estimate would be that there are nearly 10,000 teacher prep programs around the country.  The NCTQ is very specific about what it defines as a teacher prep program.  Table 2 is a list of 11 programs that were reviewed by NCTQ.  Notice the specificity of each program.  For example, the first one is a Master of Arts in Teaching program in English education at Clayton State.  There are also separate MA in Teaching programs in mathematics, science, and social studies.

The NCTQ does not even come close to reviewing the state of teacher preparation in Georgia.  My assumption, based on the data collected in Georgia, is that this low rate of teacher prep review holds in every other state in the nation.

Table 2. Examples of Teacher Prep programs in Georgia reviewed by the NCTQ

NCTQ Review is Junk Science, Not an Honest Review of Teacher Prep

Figure 3. Pie Chart Showing How Little of Teacher Prep in Georgia was reviewed by the NCTQ.

Although I focused only on teacher preparation in Georgia, I have shown that the NCTQ Review is not representative of the range and depth of teacher preparation.  The NCTQ method is a sorry example of “research.”  I evaluated the method of the 2013 NCTQ Review and found that it was an example of junk science, based on M.S. Carolan’s research on junk science.  The NCTQ assumes that teacher prep is failing and looks for support for this view.  The data are suspect.  The method involves using legal maneuvers to get data from universities rather than seeking data in a collaborative way.  Its references are meager, and none of its work is based on the large body of research on teacher preparation.  The review is not peer reviewed.  And only a few teacher educators were involved in the NCTQ.

The NCTQ review is not a valid review of teacher preparation.  It is anemic and incompetent, as we see in Figure 3.

What are your views on the status of teacher prep in Georgia or in other states?

National Council for Teacher Quality Teacher Prep Review: A Stacked Deck?

In this post I am going to show that the makeup of the NCTQ teacher prep review panels represents a “stacked deck.”  Instead of working with teacher educators directly, the NCTQ uses deceptive and inadequate methods to investigate teacher prep.

Who are these people who have signed on to participate in this type of review?  What is in it for them?

If you were to design a study of the medical profession, do you think it would be a good idea to involve, directly and in significant proportions, professionals who are out there practicing medicine: physicians, physician assistants, nurse practitioners, nurses, technicians, and the many other professionals who make up the medical field?

Would you include visits to clinics, doctors’ offices, hospitals, and labs, or would you rely on websites and documents as the data for your investigation?  And who would you ask to check your report before it was published?

The NCTQ has appointed itself as the evaluator-in-chief of teacher preparation in the nation’s public and private colleges and universities, and of the few alternative teacher prep programs.  The group, which was created as a spin-off of the Thomas Fordham Foundation, has used spurious methods to acquire the information it used to write a review of the nation’s teacher preparation infrastructure.

In general, teacher prep is carried out in hundreds of the nation’s public and private colleges and universities.  The NCTQ has created a partnership with many private foundations and corporations that support it (including the Gates Foundation, Broad Foundation, & Walton Foundation) and with U.S. News and World Report.  It did not visit the universities to observe the clinical approaches that permeate teacher prep.  Instead, it used university catalogs and course syllabi (when it could get them) as its data source.

Rather than conducting a scientific inquiry into the nature of teacher prep, the NCTQ has launched an assault on the teacher education profession, much like the assault being made on American classroom teachers.

I wondered at first who was involved in the review, other than the big gun at NCTQ, Kate Walsh.  I was interested to find out who their analysts were and what positions they held.  Did they include an adequate representation of teacher prep?  Did the NCTQ hire analysts who had a direct connection to teacher prep in the nation’s colleges and universities?  Or did they stack the deck with those who agree with NCTQ’s view that teacher education is inadequate and failing the nation’s public schools?

To find out, I went to the Who We Are page on the NCTQ website.  According to this page, the review team comprised ten in-house analysts and 75 more general and expert analysts.  My analysis differed somewhat, as I found that more than 20 NCTQ staff worked on the report.

The NCTQ’s Who We Are page organizes the analysts into:

  • Technical Panel
  • Audit Panel
  • Advisory Groups
  • General Analysts

According to my count, 79 people were involved in the NCTQ review.  The first question I asked was: how many of these people were teacher educators?  Figure 1 summarizes the membership of the study teams.  More than 27% of the study team members were employed by the NCTQ, while teacher educators represented only 2.5%.  Even when we include administrators (some of whom were education deans), professors of education, and adjuncts, only 17% of the analysts work in the field of teacher prep.

Figure 1. NCTQ Study Teams for the 2014 Teacher Prep Review
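The percentages follow directly from the head counts.  A minimal sketch, in which the NCTQ staff count of 22 is taken from later in this post and the count of two teacher educators is inferred from the 2.5% figure:

```python
# Head counts: 79 analysts in total and 22 NCTQ employees are reported in
# the post; the teacher-educator count of 2 is inferred from the 2.5% share.
total_analysts = 79
nctq_staff = 22
teacher_educators = 2

nctq_share = nctq_staff / total_analysts * 100             # share employed by NCTQ
educator_share = teacher_educators / total_analysts * 100  # teacher educators

print(round(nctq_share, 1))      # 27.8
print(round(educator_share, 1))  # 2.5
```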

Teacher educators are like physicians, nurses, or physician assistants: they see and work with students or clients (patients) in the real world.  A teacher educator’s role is to teach, advise, and clinically involve undergraduate and graduate students in either initial teacher prep or the continuous professional development of teachers through courses, institutes, and degree programs.  Teacher educators, by and large, are also researchers inquiring into the nature of teaching and learning.

Research scientists, mathematics or economics professors, corporate executives, philanthropists, consultants who own private educational companies, and administrators are not teacher educators, any more than pharmaceutical drug reps are medical practitioners.

Figure 2 is a bar graph comparing the number of teacher educators who were directly involved in the NCTQ review to the number of non-teacher educators.  The comparison shows that the teacher prep profession is greatly underrepresented in the NCTQ review.

Figure 2. Comparison of Types of Analysts Comprising the NCTQ Study Teams

Are the conclusions that are made by the NCTQ valid based on this comparison?

Figure 3 is another bar graph showing the distribution of “professions” and organizations of the members of the panels identified by the NCTQ (Technical Panel, Audit Panel, Advisory Groups, General Analysts).  The chart clearly shows that corporate and foundation executives and employees of the NCTQ make up the largest share of participants in the review.

Figure 3. The Stacked Deck of Who Performed the NCTQ Teacher Prep Review

It’s not surprising that 22 of the 79 people listed were employees of the NCTQ.  Many of these people were trained to read catalogs and syllabi and to rate teacher prep against the NCTQ’s own standards, standards that are not grounded in teacher education research.

A number of professors were hired by NCTQ to lend credibility.  Why were most of them either professors of mathematics or economics, or research associates from a single school (the University of Oregon)?

The membership rolls also include eight corporate executives and 12 foundation executives.  When I looked more carefully at their bios, it was clear that they shared the core values of NCTQ, and a number of them sat on each other’s boards.

The NCTQ website also lists the names of people and organizations that endorse the 2014 report.  A few superintendents are listed, as well as a number of organizations that support the corporate reform model that NCTQ and its sister organization, the Thomas Fordham Foundation, work to push onto schools, and now onto teacher prep institutions.  You can see the list of endorsers here.

In the week ahead, be on the lookout for reports written by other bloggers and educators on the NCTQ.

What do you think about the makeup of the NCTQ panels?

Warning: If You Believe the Fordham Foundation on Their View of Science or NCTQ’s View on Teacher Education, You Should Check Your Eyesight. Really.


On this blog, I have reviewed earlier reports put out by these two oxymoronic organizations, the Thomas Fordham Institute: Advancing Education Excellence (Fordham) and the National Council on Teacher Quality (NCTQ).  You need to know that these are ultra-conservative organizations, and that the National Council on Teacher Quality was formed by the Thomas Fordham Institute.

In this blog post I want to argue that the reports issued by these organizations on the science standards and on teacher preparation are nothing short of conservative propaganda put out by organizations with ties to each other.

Fordham Foundation Report on Next Generation Science Standards.

Here we go again.  The Fordham Foundation’s gang of seven has released its “Final Evaluation of the Next Generation Science Standards.”  The same group evaluated the NGSS when they were first published in June 2012.  The gang of seven does not seem to have 20/20 vision when it comes to research.  Instead, they have an unchanging fealty to a conservative agenda and a canonical view of science education that restricts and confines them to an old-school view of science teaching.  Science education has rocketed past the views in the two earlier reports Fordham issued about science education standards, as well as the NGSS.  You can read my earlier reviews of Fordham’s lack of knowledge about science education here and here.

For Fordham to claim that it is promoting an honest discussion of science education is a sham.  According to this final report, the gang of seven used the same criteria it used to evaluate the science standards in the states.  They graded the states using A–F rankings, and according to their criteria, most states earned a D or F.

You need to understand that they, like many of the other conservative think tanks, believe that American science education “needs a radical upgrade.”  The gang of seven has consistently kept to this mantra, and in this final report of the NGSS, they find that we are in the same state, and that the NGSS gets a grade of C+.

First of all, you need to realize that Fordham has their own set of science content standards (General expectations for learning).  Follow this link and then scroll down through the document to page 55, and you will find their standards listed on pages 55 – 61.  When I first reviewed Fordham’s evaluation of the state science standards and the NGSS, I was shocked when I read the criteria that they used to analyze science education.

I found that the Fordham standards are low level, mediocre at best, and do not include affective or psycho-motor goals.  I analyzed each Fordham statement using the Bloom categories in the cognitive, affective, and psycho-motor domains.

Ninety percent of all the Fordham science criteria fall into the lowest levels of Bloom’s Taxonomy in the cognitive domain.  Indeed, 52% of the statements are at the lowest level (Knowledge), which consists primarily of the recall of data or information.  Twenty-eight percent of the Fordham science statements were written at the Comprehension level, and only 10% at the Application level.  What this means is that the authors wrote their own science standards at a very low level.  In fact, of the 100 statements, only 10% were at the higher levels.  No statements were identified at the Synthesis level, which in science is awful.  Only one science standard was found at the highest level, Evaluation.  Cognitively, the Fordham standards are not much to write home about.  And it is amazing, given the low level of the Fordham standards, that any state would score lower than their own standards.
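Tallied as counts of the 100 statements, the distribution looks like this (the Analysis count of 9 is my inference, filled in so the reported figures add up):

```python
# Bloom's-taxonomy tally of the 100 Fordham science statements, using the
# percentages reported above.  The Analysis count (9) is inferred so the
# totals match: 90 statements at the three lowest levels, 10 higher.
bloom_counts = {
    "Knowledge": 52,       # recall of data or information
    "Comprehension": 28,
    "Application": 10,
    "Analysis": 9,         # inferred, not stated explicitly in the post
    "Synthesis": 0,
    "Evaluation": 1,
}

total = sum(bloom_counts.values())
lowest = sum(bloom_counts[k] for k in ("Knowledge", "Comprehension", "Application"))
higher = total - lowest

print(total, lowest, higher)  # 100 90 10
```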

Then they used the same criteria to check the final version of the NGSS.

In my analysis I gave the Fordham science standards a grade of D. For them to use these criteria to judge the NGSS is absurd.

Yet, they keep saying that science education is inferior, and after a while, people begin to believe them.  For me, the gang of seven is not qualified to evaluate science education.  Yes, they have credentials in science and engineering, but they are woefully inadequate in their understanding of science curriculum development, or the current research on science teaching.

Many of the creative ideas that emerged in science teaching in the past thirty years represent interdisciplinary thinking, the learning sciences, deep understanding of how students learn science, and yes, constructivism.  The Fordham group appears to have had their eyes closed during this period.  Don’t believe their report.

NCTQ Report on Teacher Prep

The National Council on Teacher Quality report on teacher prep is more an assault on teacher education than an honest and ethical evaluation of teacher education programs.  Like the Fordham Foundation, they are research challenged, and they cherry-pick statements out of context from educational research.  Their research methods are not only weak; they avoid the most important aspect of research in any field: peer review.  The only peers who review their reports are in-house employees.

In this report on teacher preparation, the NCTQ offers an “exhaustive and unprecedented” overall rating of 608 institutions.  Don’t be fooled by the extensive use of graphs and tables.  The methodology used to generate them is fundamentally flawed.  Its standards are lumped into four buckets (their term): Selection, Content Preparation, Professional Skills, and Outcomes.

But here’s a big problem.  Instead of working with its subjects of study, the universities that have teacher education programs, the NCTQ relied only on a paper trail discovered online or in catalogues.  It did not visit these campuses to find out about teacher education on the ground.  In fact, many of the schools simply did not want to cooperate with the NCTQ.  As a result, the NCTQ had to use open records laws to get much of its information.  And as the report indicates, most institutions did not supply the “necessary syllabi” to do an adequate job of assessing the institutions.  They also had trouble getting the institutions to give information on student teaching and student teaching policies.

The entire NCTQ report is based on “document requests.”  They even resorted to legal action to get forms from colleges and universities.  Can you imagine social science researchers taking legal action against students because they wouldn’t answer their interview questions?

The NCTQ has taken the liberty of evaluating the nation’s teacher preparation institutions without making site visits or interviewing professors, students, and administrators.

Yet, the NCTQ claims to have done an independent review of teacher education in America.  Nonsense.  The report overwhelms in terms of charts and diagrams.  The problem is that the research method is limited in terms of making valid and honest evaluations of teacher education.

What do you think about these two conservative think tank reports?  Do you accept the grade of C for the NGSS, and do you think that most of teacher education in America is anemic?


NCTQ Study of Assessment in Teacher Preparation Courses Flunks

In May 2012, the National Council on Teacher Quality (NCTQ) issued a report entitled What Teacher Education Programs Teach About K–12 Assessment.  Anthony Cody mentioned this study in a recent post entitled Payola Policy: NCTQ Prepares its Hit on Schools of Education.

The title intrigued me, so I went over to the NCTQ website and read and studied the report, which is about what education courses teach about assessment.  This post is my review of the NCTQ study, and I hope that after you finish reading it you will realize how bogus reports like these are, especially given the quality of research that professors of education have been doing for decades.  The study reviewed here would never have been published in a reputable journal of educational research, not only in the U.S. but in any other country in the world.  I’ll make it clear in this post why I make this claim.


The National Council on Teacher Quality is a conservative think tank that publishes reports on education that the council claims are research studies in the field.  The subhead for the group on its website is: “A research and policy group working to ensure that every child has an effective teacher.”  The NCTQ has a staff of 18, an advisory group of 36 people, and a 13-member board of directors.  The individuals on these various committees come from the corporate, educational, and consulting worlds.  Some of the organizations represented include Pearson Publishing, Teach Plus, KIPP Schools, the Hoover Foundation, the American Enterprise Institute, Core Knowledge, the Piton Foundation, the Bill and Melinda Gates Foundation, the Thomas Fordham Foundation, the N.F.L. Players Association, B & D Consulting, Students First, the Abell Foundation, Teach for America, the New Schools Venture Fund, and others, including a few universities and two public schools.

Many of these groups have worked very hard to denigrate teachers, to insist that the Common Core State Standards be adopted by all states, and to promote the beliefs that teaching and learning should be data-driven and that student achievement data from high-stakes tests should be used to make decisions about student, teacher, and principal effectiveness, and school success.

The NCTQ publishes reports with titles such as Building Better Teachers, Student Teaching in the Nation, and the most recent one What Prep Programs Teach About Assessment.

According to Anthony Cody’s post, the NCTQ was founded by the Thomas Fordham Institute, a conservative think-tank that publishes non-peer reviewed reports on education, and has an appalling opinion of teacher education institutions. And of course, the Thomas Fordham Foundation has membership on the NCTQ Board of Directors.

I’ve reviewed two reports previously published by the Thomas Fordham Institute.  You can read my reviews of these reports here:

In each report I found the methodology weak and the results misleading, and both reports were published as non-peer-reviewed research.  The NCTQ study on assessment in teacher education uses the same methodology as the Fordham studies.  Even with such poorly designed studies and unreliable data, think tanks get away with publishing their work in this fashion, and because of their financial resources and the identities of their funding agencies, they carry a good deal of clout.  The Fordham Foundation and the NCTQ are two such organizations.

Is teacher education going to take a hit?  Probably so.  The NCTQ has the resources and the connections to make trouble for university teacher education programs.  There is a movement to hold teacher education institutions accountable for the achievement test scores and gains that their graduates produce in their students once they begin teaching.  As absurd as this sounds, the U.S. Secretary of Education supports the idea.  Organizations such as the NCTQ are on the accountability bandwagon and carry weight at the policy level in education.

What Teacher Preparation Programs Teach About K-12 Assessment

This report was released in May 2012, and according to its preface, it provides information “on the preparation provided to teacher candidates from teacher training programs so that they can fully use assessment data to improve classroom instruction.”  The results reported in the final document were based on reading and analyzing 450 syllabi received from 98 institutions of higher education representing 180 teacher preparation programs.

Why This Study? The First Disagreement

The purpose of the study was to find out what professors in teacher education are teaching their students about assessment so that when they begin teaching in the classroom they will be able to use assessment data to improve classroom instruction.

To rationalize their study, the NCTQ authors, Julie Greenberg and Kate Walsh, impress upon the reader the importance of assessment in today’s schools and the need for prospective teachers to know how to use assessment in their future classrooms.  The authors say,

Effective instruction requires that teachers have a well grounded sense of student proficiency in order to make a daunting number of instructional decisions, such as making snap judgments in the midst of interactions with students, and planning lessons, be they for the next day, the next unit or the entire school year.

The purpose and rationale for this study were not based on previous research or a review of the literature.  The authors allotted less than one page to “previous research,” citing only three references.  One of the references is research done by Black and Wiliam, two of the leading assessment researchers in the field of education.  The authors of the NCTQ study rejected the Black and Wiliam research, which is extensive, published in peer-reviewed journals, and highly cited, BECAUSE the NCTQ researchers said that the research was old (1998) and that back then education research had weaker designs, and THEREFORE those studies are suspect.

The researchers fail to tell the reader that Black and Wiliam are leading research proponents of formative assessment as a way to improve instruction and learning, and that they have been publishing research for decades, even now.  And if the NCTQ researchers were concerned that the studies were old (1998 and earlier), all they had to do was a Google search, or visit Dr. Black’s or Dr. Wiliam’s site for their research on assessment.

Greenberg and Walsh claim that education studies prior to 1998 used weaker designs.  I did my Ph.D. work in the late 1960’s in science education at The Ohio State University, and let me tell you, the research designs and methodologies that my colleagues in graduate school and the researchers in the literature used were quite robust, not weak.  The research in education is compelling, and it’s a testament to the incompetence or bias of Greenberg and Walsh that they couldn’t cite more than three studies.

The rationale of the NCTQ study is rooted in political and ideological beliefs about schooling rather than in previous research.  For example, they make this claim:

The evidence for the connection between using data to drive instruction and student performance is emerging, just as the practice of using data is emerging.

There is no previous research cited in their report that helps establish this claim or helps us see how their work is connected to other scholars.  Instead, they cherry-picked any research that would support their view, and downplayed or dismissed research that might have questioned their intentions.

Biased Questions?

The researchers were bent on showing that teacher educators weren’t doing the job of teaching their students about assessment.  They undertook this task with the clarion call that there is a new focus on “data-driven instruction,” and they cite examples of schools as proof that using data to drive instruction will reduce the achievement gap between low-income and high-income students.  And sure enough, they cite two Broad Prize winners, the Charlotte-Mecklenburg Schools, NC, and the Aldine Independent School District, TX, as examples.  Teachers in these schools, according to Greenberg and Walsh, were trained in using data to drive instruction, and that is what led to such positive test results.  And by the way, the Broad Foundation is a major funding source for the NCTQ.

But here is the problem.  Instead of trying to document or uncover what is being taught about assessment in teacher preparation programs, the researchers decided what they thought was important and then compared what teacher preparation programs are doing to their own ideas.  The researchers started with three categories of assessment that they thought ought to be included in teacher prep programs, and these categories turned into their research questions:

  • How adequately does coursework address Assessment Literacy?
  • How adequately does teacher preparation program coursework address Analytic Skills?
  • How adequately does teacher preparation program coursework address Instructional Decision Making?

You might think this is legitimate.  But it does not really help the inquiry.  If the researchers were really interested in making a contribution to the field, they would have approached the problem inductively.  That is, they would have worked their way up from the syllabi to generalizations based on their observations of the syllabi.

The inductive method is a scientific method that educators have used for decades to make generalizations about educational phenomena, such as the types of questions that teachers ask during class.  In this case, data analysis would be driven by multiple readings of the syllabi and interpretations of the raw data.  Because the researchers would be looking for evidence of assessment in the syllabi, they would identify specific parts of the syllabi and label these parts to create categories (e.g., diagnostic methods, formative techniques, using computers to analyze test data).

The point is that instead of starting with the three categories that the researchers at NCTQ thought should be taught in teacher preparation programs, they could have uncovered what the syllabi reveal about the teaching of assessment, and report that data.  There is more to say about this kind of research, such as teaching the researchers how to code, the use of computer programs to make the task easier, assessing the trustworthiness of the data, and reporting the findings.  We’ll leave that for another day.
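To make the contrast concrete, here is a minimal sketch of the tallying step of such an inductive analysis: once categories have emerged from repeated readings, each syllabus is coded against them and the codes are counted across documents.  The category names, indicator phrases, and syllabus excerpts below are hypothetical illustrations, not data from the NCTQ study.

```python
from collections import Counter

# Hypothetical categories that emerged from repeated readings of the syllabi,
# each with the indicator phrases that signaled it during coding.
emergent_categories = {
    "diagnostic methods": ["diagnostic", "pre-assessment", "readiness"],
    "formative techniques": ["formative", "feedback", "exit ticket"],
    "data analysis tools": ["item analysis", "test data", "spreadsheet"],
}

def code_syllabus(text):
    """Return the set of emergent categories evidenced in one syllabus."""
    text = text.lower()
    return {
        category
        for category, indicators in emergent_categories.items()
        if any(indicator in text for indicator in indicators)
    }

def tally(syllabi):
    """Count how many syllabi show evidence of each category."""
    counts = Counter()
    for text in syllabi:
        counts.update(code_syllabus(text))
    return counts

# Invented excerpts standing in for two course syllabi.
syllabi = [
    "Students design formative feedback cycles and exit tickets.",
    "Unit on diagnostic pre-assessment and item analysis of test data.",
]
print(tally(syllabi))
```

The point of the sketch is the direction of inference: the categories come from the documents first, and the counting comes second, which is the reverse of imposing three predetermined rubric questions on the data.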

According to the authors, and the opinions of their experts in the field, teacher education institutions have not figured out what knowledge a new teacher needs in order to enter a classroom able to use data to improve classroom instruction.  Their review of the literature certainly didn’t lead them to this opinion.  They have no basis for saying this, other than that they can, and that it supports the premise of their study.

The purpose of their study was to show that teacher preparation program coursework does not adequately prepare students to use assessment methods with K-12 students.  Their study does not shed new light on the teaching of assessment in teacher prep, but it does shed light on how research can be biased from the start by asking questions based on your beliefs and ideologies, rather than research in the field.

The Study Sample: Found Wanting

According to the report, the NCTQ obtained course syllabi from 180 teacher education programs in 98 institutions of higher education in 30 states.  Using open records requests, the researchers took the course syllabi from the colleges that responded first.  The “researchers” don’t tell us whether they actually contacted any of these institutions, tried to talk with any of the professors, or visited a few institutions so that they could interview not only professors but also students and the cooperating teachers with whom these institutions worked.  None of this was done, or at least it wasn’t stated in their report.  They got their data by requiring the institutions to hand over their course syllabi.

All of the data is embedded in the course syllabi they received.  I don’t know about you, but course syllabi vary from one course to another.  Some professors create very detailed syllabi, have well-developed websites, and use course software such as Blackboard, textbooks, and online databases.  All of these sources should have been examined if the NCTQ researchers wanted to get a full picture of these courses.  This was not done.

They only looked at the paper they received.  On the basis of this alone, the data that the researchers used for this report is incomplete.  Syllabi are no doubt inconsistent in design and scope from one institution to the next.  And relying solely on a paper syllabus does the research study an injustice, and makes the analysis and conclusions invalid.

The syllabi they selected had to have the word “assessment” in the course title, or had to be for a methods course, e.g., science methods.  Other syllabi were thrown out and not analyzed.  Somehow, the researchers perused the course syllabi looking for “evidence,” or the lack of it, of assessment by reading the objectives, lectures (if they were included in the syllabi), assignments, textbooks, and readings.  Whether the researchers actually looked at the texts is unknown; they said they looked at the publishers’ descriptions of the content of the required texts.  They then looked for “capstone projects,” such as work samples or portfolios.

The sample that the researchers report in their study does NOT represent teacher preparation institutions in the U.S.  It represents only the 98 institutions that responded to the NCTQ’s open records requests.  Their “findings” cannot be generalized beyond the sample they studied.  I don’t trust the sample on which they base their findings.  For one thing, there didn’t seem to be an open two-way exchange between the NCTQ and the universities cited in the report.  How do we know whether the syllabi the researchers received are a true record of the course syllabi at these teacher prep institutions?

It’s possible that the NCTQ is making decisions about some universities based on one syllabus, and about others using multiple syllabi.  We have no idea, however, because the researchers did not say in their report.  The universities in the study have been shortchanged and, even worse, have been lumped together in a report that paints a negative picture of teacher preparation programs.

If you look online at examples of teacher education programs, you’ll find that in graduate-level teacher preparation programs leading to a master’s degree and certification, there are at least 10 courses that should be examined to evaluate the coursework.  At the undergraduate level, there are as many as 19.  The NCTQ researchers failed to give a real picture of a university’s teacher prep program if they reviewed only a few courses.


The researchers overlaid three rubrics on the course syllabi to find out to what extent professors were teaching (1) assessment literacy, (2) analytic skills, and (3) instructional decision making.  Assessment literacy meant searching the syllabi for key words including diagnostic, formative, and summative.  Analytic skills meant looking for key words such as dissect, describe, or display data from assessment.  Instructional decision making meant looking for evidence that teacher educators helped their students use assessment data to drive instruction.

The rubrics were very simple, using a Likert scale from 0 to 4.  A 0 meant there was no evidence, while a 4 meant the criteria were met to a high degree.  For example, to evaluate the syllabi for assessment literacy, the scale used was as follows (you can view all of the rubrics here):

0–There is no or almost no instruction or practice on the various types of assessment (inadequate)

1–Instruction on the various types of assessment is very limited and there is no or almost no practice (slightly adequate)

2–Case 1: The scope of instruction on the various types of assessment is not comprehensive and practice is very limited to adequate.  OR Case 2: The scope of instruction on the various types of assessment is comprehensive, but practice is very limited or limited.

3–The scope of instruction on the various types of assessment is comprehensive and there is adequate practice.

4–The scope of instruction on the various types of assessment is comprehensive, including concepts such as “validity” and “reliability,” and there is adequate practice (adequate)

The researchers rated each syllabus on three criteria and judged each criterion from inadequate (0) to adequate (4) on the 0–4 point scale.  They were then able to average the scores on the syllabi from each teacher education program.  Presumably either the two researchers did the actual rating, or they hired raters.  Either way, the researchers failed to provide data on inter-rater reliability.  We have to question the trustworthiness of the data.
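For context, inter-rater reliability on a 0–4 rubric like this is straightforward to compute and report; a common statistic for two raters is Cohen’s kappa, which corrects raw agreement for agreement expected by chance.  The sketch below uses invented ratings for illustration, not NCTQ data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance, computed from each
    rater's marginal distribution of scores.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the two raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the marginal rating frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[score] * freq_b[score] for score in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 0-4 rubric scores from two raters on eight syllabi.
rater_a = [0, 1, 2, 2, 3, 4, 1, 0]
rater_b = [0, 1, 2, 3, 3, 4, 2, 0]
print(round(cohens_kappa(rater_a, rater_b), 2))  # → 0.69
```

Reporting a figure like this (values above roughly 0.6 are conventionally read as substantial agreement) is the minimum one would expect from a study whose findings rest entirely on subjective ratings, and it is exactly what the NCTQ report omits.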

As mentioned above, NCTQ started with a biased set of questions, and used these questions to analyze the syllabi of the teacher prep coursework.  On face value, the findings only reflect their own biases and way of assuming how and what teacher prep courses should include about assessment.

In this study, 455 courses were evaluated, anywhere from one to six courses per institution.  The only average mentioned was that 2.5 courses per program referenced assessment.  This statistic is difficult to believe given our knowledge of teacher education courses.  If they looked at methods courses, the chances are very high that assessment was included in those courses.  I don’t know whether these researchers examined course syllabi for internships or student teaching, but all of these experiences would have included assessment strategies.  So you have to wonder about the validity of their data.

Results: Did the Teacher Education Programs Reach the Bar?

The results of this study have to be examined cautiously and with reluctance.  In my opinion, the data collected in this study are inadequate to answer the questions posed.  First, the institutions did not directly participate in the study.  There is no evidence of any attempt to contact the deans or department heads of these colleges to ask them to provide additional documentation on their teacher education courses.  Nor is there evidence that the researchers made any attempt to seek out course websites that would have included more details on the content of the courses.

It seems to me that the researchers wanted to limit the data, yet make sweeping statements about teacher education programs, and make recommendations based on such an inadequate study.

According to the researchers, “the bar to earn a passing rating in this study was set low.”  They said they did this to give institutions the benefit of the doubt.  Actually, it was a way out for the researchers: they were dealing with very limited data, a few course syllabi from major institutions of higher education, and they were going to use this meager data to make decisions about how assessment is being taught to future teachers.

According to this study, only 3% of teacher preparation programs adequately teach the content of assessment in their courses.  But in fact all they can say is that, in their opinion, only 3% of the syllabi they received reflected this value.  And given my critiques, this statistic has no meaning in the reality of teacher prep.

The sample they used was biased from the start.  Why did these universities respond to the open records request?  Why did other universities refuse?  Did the researchers treat the universities with any respect, and try to open a dialog on teacher preparation content?

One More Thing

There are quality teacher education programs in the United States.  Linda Darling-Hammond, in her book Powerful Teacher Education: Lessons from Exemplary Programs, documents seven highly successful teacher education programs, and discusses the way in which teacher education has changed to create more clinically based teacher education programs.

The researchers of the NCTQ study are stuck in a 19th-century model of teaching, and simply want to hold teacher education institutions accountable to principles and practices that teacher education moved beyond years ago.

But at the same time, the NCTQ study cleverly uses percentages and numbers in a way calculated to convince some that teacher education programs are inadequate and need to be regulated in ways that satisfy the researchers’ interests.  If you look at the organization’s sources of funding and the names of the individuals who sit on its boards, you will see the conservative agenda in action.

My advice is to take them to task for this study.  Tell them that their study in no way sheds light on how assessment is taught in teacher education programs.  The only light shed is on their own deficiencies as a research organization.

What do you think about the NCTQ study?  Do you think their study should be taken as a valuable contribution to the literature of teacher education?