Atlanta Public Schools’ Equity Audit Finds Differences! by Ed Johnson

Guest Letter by Mr. Ed Johnson, Advocate for Education, Atlanta, GA

Creative Commons "I Come In Peace" by JDevaun is licensed under CC BY-ND 2.0
Creative Commons “I Come In Peace” by JDevaun is licensed under CC BY-ND 2.0

Ed Johnson wrote a letter in response to the Atlanta Public Schools Equity Audit, which was prepared by researchers at Georgia State University to examine differences in characteristics across schools in the APS district.  As you will see in his letter, Ed Johnson looks at APS as a whole, not as separate schools, and applies the work of W. Edwards Deming, Russell Ackoff, and Peter Bernard to investigate equity in the context of systems thinking.

This is an important letter written by a person who for years has explored how to improve education in the Atlanta Public Schools.  It is hoped that the new Atlanta Public Schools superintendent will seek his advice, and in so doing challenge the “turn around” and “urban” mentality that dominates educational reform.

June 26, 2014

Well, of course, Atlanta Public Schools’ equity audit would find differences. Differences always exist. No two of anything are exactly the same. So the discerning question always is, what do differences mean?

In its upfront Executive Summary, the APS Equity Audit Report proclaims:

Equity audits are a relatively new tool for school systems and there are large variations in their thresholds for determining whether or not characteristics are substantially different across schools. Simple percentage difference cutoffs or using standard error calculations to generate confidence intervals of means both avoid complex questions of whether or not differences across schools are practically meaningful. This report finds substantial variations across schools on numerous characteristics, but leaves questions of whether and how to address these differences to the broad group of stakeholders concerned with educational outcomes for the students of APS.

On the one hand, the APS Equity Audit Report responsibly cautions against using “[s]imple percentage difference cutoffs or using standard error calculations to generate confidence intervals of means both [of which] avoid complex questions of whether or not differences across schools are practically meaningful,” and that is fortunate. Such figures are usually presented in business-style financial reports that often prompt reacting to and holding people “accountable” for past performance while typically providing no rational basis for predicting performance and learning into the future.

Equity from the Standpoint of Random Variation v Non-Random Variation

On the other hand, without question, although it “finds substantial variations across schools on numerous characteristics,” the APS Equity Audit Report clearly forgoes addressing “whether or not differences across schools are practically meaningful,” and that is unfortunate.

In other words, the APS Equity Audit Report does not address the very important question of what differences mean. Do differences with respect to a particular characteristic mean something or mean nothing? To answer the question requires detecting and distinguishing differences that arise from random variation from those that arise from non-random variation.

Random Variation Means…

Detection of differences that arise from random variation would indicate differences that mean nothing, that are not “practically meaningful.” Such differences would be due to common, ever-present systemic causes, any or all of which may be known, knowable, and unknowable.

Non-Random Variation Means…

On the other hand, detection of differences that arise from non-random variation would indicate differences that mean something, that are “practically meaningful.” In this latter case, for better or worse, such differences would be due to special causes powerful enough to dominate and stand apart from all differences due to common causes. Special causes may occur continually, irregularly, or temporarily and are generally known or knowable.

So, there are differences due to common causes that may be referred to simply as “common cause variation.” And there are differences due to special causes that may be referred to simply as “special cause variation.” Hence, there exist two kinds of variation.

Signals and Noise in the Data

Now, considering any characteristic’s data in the APS Equity Audit Report, can something be done with those data to detect and distinguish the two kinds of variation the data may contain? Asked differently, is there a way to filter the data to separate “signals” the data may contain from the “noise” the data do contain?

To do so is important so as to:

  1. avoid responding to a signal as if it were noise and
  2. avoid responding to noise as if it were a signal

To fail at either 1) or 2) is to drive up costs and generate excessive waste, unnecessarily.

Is There a Way to Distinguish Signals from Noise?

Indeed there is a way to detect and distinguish common cause variation and special cause variation. And it is a way even some elementary school children have learned to use in the process of continually improving their own learning. The way is to make a “process behavior chart” from the data. (The process behavior chart is much like an EKG (electrocardiogram) made to tell a story about the behavior of a patient’s heart.)
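For readers curious about the mechanics, here is a minimal sketch of how the limits of an individuals (XmR) process behavior chart are conventionally computed, using the standard 2.66 moving-range constant. The data values are illustrative percentages only, not figures from the APS report:

```python
# Minimal XmR (individuals) process behavior chart calculation.
# The values below are illustrative percentages, NOT data from the report.
values = [22.0, 31.5, 18.0, 27.3, 35.1, 24.8, 29.0, 21.6]

center = sum(values) / len(values)  # center line: the mean

# Average moving range between consecutive points
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Conventional XmR limits: mean +/- 2.66 * average moving range,
# with the lower limit clamped at 0 because these are percentages.
upper = center + 2.66 * avg_mr
lower = max(0.0, center - 2.66 * avg_mr)

# Points outside the limits signal special cause variation;
# everything inside the limits is noise (common cause variation).
signals = [v for v in values if v > upper or v < lower]
print(f"center={center:.2f}, limits=({lower:.2f}, {upper:.2f}), signals={signals}")
```

With these illustrative data, no point falls outside the limits, so the chart reports only noise.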

Now, having actually made process behavior charts for a fair number of characteristics the APS Equity Audit Report covers, I found that none revealed any special cause variation, save a few where Forest Hill Academy was detected as a special cause matter, which is to be expected of APS’ alternative school.

The Inexperienced Teacher Category

For example, the process behavior chart in Figure 1 below takes a district-level look at the characteristic “Inexperienced Teacher (Less than 3 years),” in the category “Teacher Experience by Academically Disadvantaged Students” (APS Equity Audit Report, pages 179-183). The APS Equity Audit Report explains that this characteristic means the proportion of students’ time spent with a teacher who has less than three years’ experience, and that the proportion can be expressed as a percent by multiplying by 100, which the process behavior chart in Figure 1 does.

Figure 1: District-level Teacher Experience by Academically Disadvantaged Students, Inexperienced Teacher (Less than 3 years) ©Ed Johnson

 

The process behavior chart in Figure 1 detects only differences due to common causes, or common cause variation, or noise. All the variation ranges around the center-line average of 26 percent (26.26%) and between the lower control limit, at zero percent (0.00%), and the upper control limit, at 55 percent (54.55%). No variation exceeds the upper control limit. This means that academically disadvantaged students having a teacher with less than three years’ experience is a systemic matter among all APS elementary schools and not a matter for any individual school. It would be top administration’s mistake, and an abdication of their leadership responsibility, to single out Centennial Place, or Hutchinson, or M. Agnes Jones so as to hold any people there “accountable” as a special matter.
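The decision rule such a chart encodes can be sketched as follows; the control limits are the ones reported for Figure 1, but the school names and values are hypothetical, purely for illustration:

```python
# Decision rule encoded by a process behavior chart: a value is a
# special-cause signal only if it falls outside the control limits.
# Limits are those reported for Figure 1; school values are hypothetical.
LOWER, UPPER = 0.00, 54.55  # percent

schools = {"School A": 41.2, "School B": 12.5, "School C": 33.0}

classification = {
    name: ("signal" if not (LOWER <= pct <= UPPER) else "noise")
    for name, pct in schools.items()
}

for name, kind in classification.items():
    print(f"{name}: {kind}")
```

Every hypothetical value here lies inside the limits, so all differences among these schools classify as noise; singling out the highest school for “accountability” would be responding to noise as if it were a signal.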

Again, Figure 1 is district-level, with all APS elementary schools taken as a system. But what about APS Region-level, with each Region taken as a system? What might process behavior charts say about how the North, East, South, and West regions of APS compare on the example characteristic being considered here?

Consider Figure 2, below. The figure comprises four process behavior charts, one for the East Region, North Region, South Region, and West Region of APS. Figure 2 makes it easy to compare the APS Regions holistically and rather straightforwardly and much at a glance. Like the district-level process behavior chart in Figure 1, each Region-level process behavior chart in Figure 2 detects no evidence of special cause variation; all differences are due to common cause variation, to noise. No differences are “practically meaningful.”

It is also quite easy to see in Figure 2 that common cause variation appears the least “spread out” around the North Region center-line average compared to the spread of variation around the other Regions’ center-line averages. Even so, if extended to the right, the North Region lower and upper control limits would cover all West Region schools as well as all South Region schools. And if extended to the left, North Region lower and upper control limits would cover all East Region schools, save Centennial Place. Thus Figure 2, like Figure 1, says differences among all APS elementary schools with respect to the example characteristic are systemic, and equitable.

Moreover, it is also quite easy to see from Figure 2 that each APS Region’s center-line average compares favorably to the district-level center-line average in Figure 1. In Figure 1, the district-level center-line average is 26.26%; in Figure 2, the four Regions’ center-lines average to 26.37%. The difference is a mere 0.11%, or roughly one-tenth of one percent.

The observations made from Figure 2 support and now extend the observation made from Figure 1. Now it can be said that academically disadvantaged students having a teacher with less than three years’ experience is a systemic matter for all APS elementary schools, and is not a matter for any individual school or APS Region.

Figure 2: Region-level Teacher Experience by Academically Disadvantaged Students, Inexperienced Teacher (Less than 3 years) ©Ed Johnson

Implication for Administrators, Especially Those at the Top

In addition, and much like already concluded, it would be APS top administration’s mistake, and abdication of their leadership responsibility, to single out any school or Region so as to hold any people there “accountable” as a special matter. Leadership from the top, from both the school board and the superintendency, is required. Only they can be held “accountable” in any rational way. And no manner of “accountability” pushed down from the top can substitute for the requisite leadership needed to foster collaboration with and among affected stakeholders, as a system.

Now, let’s be clear on this point: Both Figure 1 and Figure 2 present process behavior charts that evidence only equity; neither evidences inequity.

Where is the Inequity?

So, if inequity exists, then where does it exist?

Well, actually, knowing where the inequity exists comes through the story the process behavior charts in Figures 1 and 2 tell. The charts tell the story that the teacher characteristic “Inexperienced Teacher (Less than 3 years)” has been optimized among APS elementary schools with respect to that singular teacher characteristic alone. It is a story with telltale signs of strictly systematic analytical thinking operating to the exclusion of systemic synthetical thinking. It is a story with telltale signs of believing that the whole is the sum of its parts, and that the whole can do its best only if each part does its individual best, that each part “executes with fidelity.” It is a story where teachers who have less than three years’ experience have been assigned quite equitably throughout APS elementary schools and to academically disadvantaged students.

And that is the rub, the genesis of the inequity, though it may seem counterintuitive.

Standardized test results have for more than a decade shown APS to be, in effect, “two systems in one,” White-Black, with Black greatly lagging. More recently, standardized test results have begun to show APS’ devolution into becoming “three systems in one,” White-Hispanic-Black, with Black still lagging.

Therefore, the inequity comes not from placing less experienced and unremarkable teachers with especially “Black” students in the APS West Region and South Region. Again, the process behavior charts in Figure 1 and Figure 2 say equity exists among all APS elementary schools with respect to the teacher characteristic “Inexperienced Teacher (Less than 3 years).” Rather, the inequity comes from “Black” students being without greatly experienced and remarkable teachers! For example, Teach for America personnel with five weeks of training placed with “White” students would constitute equity. However, the same five-weeks-trained personnel placed with “Black” students would constitute inequity. Why? Simply because none can possibly be a greatly experienced and remarkable teacher.

Why the Inequity?

Now, why might this inequity exist? What might be its root?

Consider that the Atlanta Board of Education Policy Manual offers understanding. Specifically, Policy Number BBBC, titled “Board Member Development Opportunities,” states, in part:

The Atlanta Board of Education places a high priority on the importance of a planned and continuing program of professional development of its members. … The board considers participation in the following activities consistent with the professional development of its members: Conferences, workshops, conventions, and training and information sessions held by the state and national school boards associations and other conferences sponsored by local, state, and national educational organizations. … The list shall include, but need not be limited to, the following organizations:

  • National School Boards Association
  • Georgia School Boards Association
  • Council of Great City Schools
  • National Alliance of Black School Educators

This policy has inequity built in. How? First, it restricts “professional development” (PD), which goes policy-wise undefined, to school board procedural matters vis-à-vis the school board associations listed. Then it more narrowly restricts PD to thinking of and treating APS as an “urban” school district in need of “urban school reform” or “transformation” vis-à-vis Council of Great City Schools and other similar organizations. Then, more narrowly still, the policy restricts PD to a “racialist ideology” (Frederick Douglass) vis-à-vis National Alliance of Black School Educators.

Regressive Policy

The policy is regressive, and acts much like a funnel to direct APS into associations with persons and organizations committed to disrupting public education as a common good or who have not the wisdom to understand and value public education as a common good. The aim is the transformation of public education in especially “urban” school districts into a profit-making, free-market commodity all the while opportunistically and unashamedly co-opting Civil Rights struggles. This inequity built into school board policy and steeped in urbanism effectively keeps APS stuck in stasis and incapable of learning to continually improve, unlike the global community that is continually learning to improve.

A consequence of such inequity rooted in Atlanta Board of Education policy is the thinking that “it takes a black educator to educate a black child” made a prominent operational aspect of APS culture, and with it APS never going beyond urbanism’s boundary to seek greatly experienced and remarkable teachers to place with especially “Black” students! The inequity is such a deep, self-imposed operational aspect of APS culture that it goes virtually unspoken and unchallenged among stakeholders until it becomes convenient to use to insinuate, excoriate, or defend against allegations of maltreatment or oppression, or to conduct an equity audit.

Fortunately or unfortunately – take your pick – wisdom teaches that the problem is in here, with us, not out there, with them. But then, wisdom comes from learning, not from achievement and certainly not from merely performing.

So, isn’t it time for Atlanta Public Schools to leapfrog the City of Atlanta’s modern-day “Atlanta Compromise” and turn to embracing humanness more so than “race”? Isn’t it clear by now that especially “Black” children’s quality of education depends on doing so?

Kind regards,

Ed Johnson
Advocate for Quality in Public Education
O: (404) 691-9656 | C: (404) 505-8176 | edwjohnson@aol.com

“The foundation of every state is the education of its youth.”
Diogenes of Sinope (c. 412 – c. 323 BCE)

NCTQ Review on Teacher Prep Replete with Significant Data Gaps

According to the NCTQ, teacher preparation in the U.S. is failing, and again according to them, there is a significant data gap on what’s working.

Their stated goal is to fill this gap by helping those who want to be teachers become “strategic” consumers, providing them with a ranking of the teacher prep programs in the country.

NCTQ states that its strategy is based on the review of medical education in the U.S. sponsored by the Carnegie Foundation for the Advancement of Teaching and completed by Abraham Flexner in 1910, titled Medical Education in the United States and Canada. However, to suggest that in the year 2014 the preparation of teachers is in the same state as was medical education in 1910 is to be misinformed.  The preparation of teachers developed and changed in ways similar to the preparation of physicians over the past 100 years.

In this post, I provide data to show that the NCTQ review of teacher preparation is a failed effort, and does not come close to helping anyone understand teacher education, unless you work for them, or the Fordham Foundation.

What is the Flexner Report and What Does NCTQ Fail to Tell About it?

In 1910, there were 155 medical schools in North America.  Flexner visited all 155 medical schools.  As Flexner points out in his study of medical education, many of the medical schools then were “trade” schools owned by one or a few doctors.  At the time, medical training was unregulated, and his report called on American medical schools to enact higher admission standards and graduation standards.

In 2014, NCTQ identified 1,127 institutions that supported teacher preparation.  The NCTQ did not visit any of these schools.  Table 1 shows the comparison of schools visited by Flexner and the NCTQ in their respective studies of medicine, and teacher prep.  There is something wrong with the approach taken by NCTQ.  In 1910, teacher preparation in America already had 70 years of experience, and many major universities were sites for the preparation of teachers.

Table 1. Comparison of Schools Visited in the Flexner and NCTQ Reviews of Medicine and Teacher Prep

Flexner’s report was a thorough study of medical education in North America, and it’s unfortunate the NCTQ identifies its review of teacher prep as in the same league as the Flexner report.  It’s not.  The Flexner report was a scientific study of medical education in North America.  It includes a detailed review of the literature and history of medical education in North America.  Flexner examined early and historical essays on medical training going back as far as 1750, with the establishment of America’s first medical school at the College of Philadelphia in 1765.

The NCTQ report is not a scientific study of teacher preparation.  The NCTQ ignores the history of teacher prep in North America, and has never published a review of the literature on the long history of teacher prep in the United States.  And, instead of learning from teacher educators about teacher preparation, they refused to visit any institutions and used strong-arm tactics to get documents such as course syllabi.  They had to do this because many teacher preparation programs didn’t return NCTQ’s call!

Teacher preparation, like medical education, has a rich history.

When I decided to become a teacher, I applied and was accepted at Bridgewater State Teachers College, in Bridgewater, Massachusetts. It was founded in 1840 by Horace Mann as the second teacher prep institution in America, as the Bridgewater Normal School. (The first normal school was founded in 1839 in Lexington, MA, where I taught high school back in the day.)  Bridgewater is the oldest institution of public higher education in Massachusetts, and is regarded as the “home of teacher education” in America.  It took its present name, Bridgewater State University, in 2010.

The teacher preparation that I received at Bridgewater was based on the laboratory school model, in which the university supported a laboratory school that provided clinical teaching experiences for its students.  As prospective teachers, we taught in the lab school as interns during our 3rd year, and then did a full internship (student teaching) in a Massachusetts public school.  The laboratory school (Burnell Campus Laboratory School) at Bridgewater began in 1840 and, except for a few years in the mid-1880s, remained open until 2010.  The laboratory school, which was promoted by John Dewey as an environment for teacher development and curriculum reform, substantiated the importance of teaching students as the main focus of teacher prep. My experience at Bridgewater would influence my approach to teacher preparation at Georgia State University in the years ahead.  Clinical and experiential learning would be focal points for my work in teacher prep at Georgia State University.  Combined with theoretical application and integration with the works of John Dewey, Maria Montessori, Abraham Maslow, Margaret Mead, Carl Rogers, Lev Vygotsky, Jean Piaget, and the rich body of research in teacher education and science education, we developed humanistic and progressive models of teacher preparation.

The history of teacher preparation in the United States rivals the precarious history of medicine, law and theology in American universities.

NCTQ failed to explain this.

The Flexner report provided recommendations for the improvement of medical education in North America when there was a real need to do so.  The establishment of “professional” schools in American universities had just begun, and there was resistance by some academics as to the viability of trade professions like medicine and law.

But Flexner was a research scholar at the Carnegie Foundation for the Advancement of Teaching. Flexner’s view was that medical education should follow the same path as the kind of thinking in the natural sciences, and suggested that medical training be as intellectual for doctors as it was for physicists.  Flexner criticized deeply the profit motive that dominated medical training in the U.S. at the time.  Flexner believed that the university hospital setting was ideal for medical prep because research would arise out of patient care and these teacher-medical-educators would thus teach their students.  He even had a motto, “Think much; publish little.”

Within the same time frame as the Flexner Report, élite universities moved into the business of teacher preparation.  The University of Iowa, The University of Michigan, Columbia Teachers College, University of Chicago, Stanford, Ohio State, Harvard, and Berkeley entered the field, in that order.  Although Normal Schools primarily prepared elementary teachers, these universities focused on the preparation of secondary teachers and school administrators, and the production of educational research.  Just as Flexner believed that medical education should focus on patient care, normal schools championed the same belief by focusing on practical preparation methods.  Teacher preparation at élite universities, however, took a different path, one that focused on research.

By the time I entered the field of teacher preparation, I had already studied science and education at Bridgewater State, Boston University, Illinois Institute of Technology, and The Ohio State University.  I became a faculty member in the first year of the College of Education’s existence at Georgia State University (GSU).  It was 1969, and by this time, normal schools had evolved into regional state universities with their own colleges or departments of education, and larger and élite universities had formed colleges of education on the same par as colleges of arts and sciences.  GSU was breaking ground with its first college of education in an urban environment and in a public school environment that had just begun to integrate its K-12 schools.

During the period of 1970 – 2010, American universities incorporated teacher education into their structures, and for the most part, no professional schools (medicine, law, education) existed outside the university as stand-alone institutions.

For at least 100 years, teacher preparation has experimented with different models to prepare teachers.  Colleges of education have provided universities with many students, most of whom take courses in other colleges across the university campus.  For example, nearly all the teacher education candidates that I worked with for over 30 years arrived at GSU with degrees in science, mathematics or engineering.  Their course work was based on the content domains in colleges of arts and sciences, or engineering.  In fact, a number of our students came from Georgia Tech, which is just a few miles away from GSU.

NCTQ Did NOT Review Most Teacher Preparation Programs

Figure 1. Percentage of teacher education programs NOT reviewed at University System of Georgia teacher prep schools.

Across the country, teacher education has done a balancing act between academic research and clinical teaching.  Powerful teacher education programs are rooted in clinical experience for teacher candidates and are based on high standards in the context of a strong curriculum.  In 2006, Dr. Linda Darling-Hammond released a report that rivals that of the Flexner report.

The NCTQ would have you believe that it has identified high-caliber teacher prep programs, along with rating others to form a rank-ordered system.  The truth is that they have little to no idea about which programs are of high quality because they never visited any, and they failed to investigate as many as 80% of the programs that are offered in higher education institutions.

Figure 1 shows the percentage of teacher education programs that were not reviewed by the NCTQ.  The chart shows the top 10 producing state universities in Georgia, as well as the remaining 11 universities clustered as a group.  The NCTQ review on teacher preparation is replete with significant data gaps.  The fact they reviewed very few programs in Georgia is a testament to their anemic review.

Their review is nothing compared to the report issued by Abraham Flexner more than 100 years ago.  Shame on them for thinking that they can associate with Mr. Flexner (who, by the way, with Louis Bamberger founded the Institute for Advanced Study in Princeton).

What do you think about the NCTQ review?

Resisting the National Council on Teacher Quality’s Propaganda

The National Council on Teacher Quality (NCTQ) has published its recent review of teacher preparation.  The NCTQ is well-financed (Gates, Walton, Broad, New Ventures Fund, and many more) and is the Fordham Foundation’s creation.  Together, their goal is to destroy teacher prep by convincing the nation that teacher preparation in the nation’s public and private colleges is failing.  And to prove it, they’ve developed a set of standards that Dr. Tom Slekar, Dean of the School of Education at Edgewood College (Madison, WI), says are so bad that “if our teacher education programs were evaluated ‘highly’ by NCTQ we would be violating our mission/values and all the research on child development and teaching and learning.” (Interview published on Living in Dialogue by Anthony Cody, May 27, 2014).

The NCTQ’s effort is an assault on teacher education, and there is a need for a resistance to their propaganda.  In this blog post, I’ve rounded up a few articles that call the NCTQ out, and show how their method is nothing short of an assault on the nation’s teacher education infrastructure.


NCTQ’s Assault on Teacher Education.  According to the head of the NCTQ, Ed schools don’t give teachers the tools they need.  Whose tools?  What tools?  The NCTQ is stuck in a 19th century version of teaching, and a 21st century push to quantify learning through student achievement tests.  To the NCTQ, if teacher preparation is not focused on academic achievement, then it is not giving teacher candidates the tools that the NCTQ thinks they need.

Nonsense

Those Nonsense Annual NCTQ Ratings Are Coming on June 17. In this piece, the author reminds readers that the NCTQ ratings are coming (they are here now).  Dr. Schneider has written several articles on the NCTQ which you can reach here.  Schneider, M.K. Deutsch29 Blog, June 16, 2014.

Why the NCTQ Teacher Prep Ratings are Nonsense.  Dr. Darling-Hammond explains that “NCTQ’s methodology is a paper review of published course requirements and course syllabi against a check list that does not consider the real quality of instruction that the programs offer, evidence of what their students learn, or whether graduates can actually teach.”  As she pointed out in her article, those states whose students score high on NAEP had teacher prep programs with the lowest ratings, while states like Alabama, that scored low on NAEP, had high NCTQ ratings.  She also says that the NCTQ is out of sync with current teacher education programs, most of which are graduate level. Darling-Hammond, L. National Education Policy Center, June 19, 2013.

Response to the New NCTQ Teacher Prep Review by Peter Smagorinsky, The University of Georgia.  Dr. Smagorinsky briefly responded to some of the claims that the NCTQ makes, which rely on rhetorical characterizations about “success” and “achievement” that spuriously elevate their belief that standardized tests reflect the whole of learning, a claim that few teachers or teacher educators endorse. In contrast, most teachers and teacher educators believe that the NCTQ’s narrow focus on standardized “achievement” tests undermines an authentic education that prepares students for work or life.  Smagorinsky, P. The Becoming Radical Blog, June 17, 2014.

Market Forces

How Will Market Forces Transform Teacher Preparation?  This article by Anthony Cody gives meaning to the context within which the NCTQ has appointed itself the purveyor of truth about teacher preparation.  As Anthony points out, teacher preparation is being challenged by corporate reformers who have backed a group of non-educators called the NCTQ.  Financed by the same groups that are pushing test-based accountability and charter schools, the NCTQ has started the ball rolling to crush teacher preparation as we know it.  Anthony has written many articles about teacher preparation and NCTQ and you can reach them here.  Cody, A. Living in Dialogue, May 29, 2014.

Shaky Methods, Shaky Motives: A Critique of the National Council of Teacher Quality’s Review of Teacher Preparation Programs by Edward J. Fuller.  In this peer-reviewed article, Dr. Fuller states that the NCTQ’s review of university-based teacher preparation programs concluded the majority of such programs were inadequately preparing the nation’s teachers. The study, however, has some serious flaws, including a narrow focus on inputs, lack of a strong research base, missing standards, omitted research, incorrect application of research findings, poor method, exclusion of alternative certification programs, failure to conduct member checks, and failure to use existing evidence to confirm the report’s rankings. All of these issues render the NCTQ report less than useful in efforts to understand and improve teacher preparation programs in the United States. The article also suggests alternative pathways NCTQ could have undertaken to work with programs to actually improve teacher preparation. The article concludes by noting that the shaky methods used by NCTQ suggest shaky motives, such that the true motives of NCTQ for producing the report must be questioned.  Fuller, E.J. Journal of Teacher Education 2014, Vol 65(1) 63–77 © 2013 American Association of Colleges for Teacher Education.

Feeble and Incompetent

The NCTQ Review of Teacher Prep in the University System of Georgia is Feeble & Incompetent.  An analysis of the NCTQ Review in the context of teacher preparation in Georgia’s 21 state universities that offer teacher education programs.  The NCTQ claims to have a handle on the state of teacher preparation in the nation, but the results of this investigation show that they have reviewed a very small percentage of teacher prep programs offered in America’s colleges and universities. Hassard, J. The Art of Teaching Science Blog, June 22, 2014.

National Council for Teacher Quality Review: A Stacked Deck?  In this study, we analyzed the make-up of the NCTQ review panels and discovered that they represent a “stacked deck.”  Only 2.5% of the participants in the review were teacher educators–active professors out there doing teacher education.  The NCTQ panels were stacked with corporate executives, foundation executives, and employees of the NCTQ.  They were far from representative of the field of teacher education.  Hassard, J. The Art of Teaching Science Blog, June 20, 2014.

Figure 3. The Stacked Deck of Who Performed the NCTQ of Teacher Prep Review

Results Are In: NCTQ Report on Teacher Prep Rated with Four Cautions.  In this article, the author analyzes the 2013 NCTQ Review of Teacher Prep in the US and, using a junk science model developed by M.S. Carolan, concludes that the NCTQ study scored high on the junk science index and therefore warrants 4 cautions–the highest rating possible in the model.  Readers should be extremely cautious about using the results of the NCTQ review of teacher prep. Hassard, J. The Art of Teaching Science Blog, July 1, 2013.

NCTQ Report on Teacher Prep: the Devil is in the Detail.  In this article we dig deep into the so-called methods used to evaluate university teacher prep programs.  The “methods” relied on sources such as syllabi (when they could get them), textbooks, catalogs, handbooks, and evaluation forms. We show that the NCTQ report on teacher preparation is junk science.  The method employed in the study avoided data from the very sources that could help uncover the nature of teacher preparation: faculty, administrators, students, and cooperating school districts and educators.  Without interviewing and observing teacher preparation programs directly, and without establishing a cooperative relationship with these institutions, the NCTQ condemns itself to false claims and outright opinions that have little bearing on the nature of teacher preparation.  Hassard, J. The Art of Teaching Science Blog, June 23, 2013


The NCTQ Review of Teacher Prep in the University System of Georgia is Feeble & Incompetent

In this post I’m going to explain why I conclude that the NCTQ review of teacher prep at the University System of Georgia colleges and universities that offer teacher education is feeble & incompetent.

Figure 1. Map of Teacher Preparation Institutions in Georgia.  The NCTQ Review of Teacher Preparation Did Not Even Come Close to Reviewing the Status of Teacher Prep in the Peach State.

I am Professor Emeritus of Science Education at Georgia State University. I was a professor at GSU for 33 years, from 1969 to 2003. For the past 11 years I’ve blogged at The Art of Teaching Science, as well as written two editions of The Art of Teaching Science (Library Copy), and the second edition of Science as Inquiry (Library Copy).

While at GSU, I collaborated with colleagues across the university and with school districts in the greater Atlanta area to design several teacher education programs. The first program, the Phase Program for Secondary Science, was a one-year, full-time certification program designed for students with degrees in science.

Starting in 1988, I worked with other professors in the College of Education to develop the first alternative teacher prep program in Georgia. The alternative certification program (ACP) was funded by the Professional Standards Commission from 1988 – 1993. Based on our experiences in the ACP, we designed a mathematics and science teacher preparation program for students with degrees in math, science or engineering.

In 1993, the Teacher Education Environments in Mathematics and Science (TEEMS) program was created as a four-semester program beginning with an intensive summer institute, followed by an internship in a middle school and then one in a high school.  Students were assigned to schools in clusters of 10 – 20, and each worked with mentor teachers, professors, and graduate interns for two semesters.  Students completed their graduate work in the second summer and began their teaching careers in the fall.  Within a few years, English education and social studies education became part of the TEEMS teacher prep model.  It is the primary program for preparing secondary teachers at Georgia State University.

I also was involved in designing graduate hybrid courses at GSU for teachers that combined online learning with face-to-face instruction.  In our earliest work, in the early 1990s, we were using Macintosh SE 20 computers and a very crude telecommunications system.

In the late 1980s, we began a collaboration with educators, researchers and teachers in the Soviet Union, and out of a long series of collaborations, developed the Global Thinking Project, one of the first telecommunications projects linking schools in the U.S. with schools in the Soviet Union.  We designed teacher preparation institutes and brought teachers from the U.S., Russia, Spain, Australia, and the Czech Republic to Atlanta for hands-on and face-to-face experiences using the Global Thinking curriculum materials, and the technology that was essential to global collaboration.

That said, here is my first analysis of the NCTQ Review of Teacher Prep.

NCTQ’s Numbers Don’t Add Up

The National Council of Teacher Quality released its review of teacher prep programs in the United States.  It has reported on teacher prep programs in every state in the country.  Its review was based on examining course catalogs and course syllabi, when it could get them.

NCTQ claims that its method will show the quality of teacher preparation programs around the country.  Although I don’t think their review has much to do with quality, let’s forgo quality and take a look at quantity.  Do they present a valid display of teacher education options as they exist today in our nation’s colleges and universities?  I don’t think they do.

To find an answer to this question, I compared their report on teacher education in Georgia to documents that are available to anyone simply by going to the University System of Georgia’s (USG) website and from there following links to each state university that offers teacher preparation programs.

Table 1 includes data that I extracted from the USG website and the 21 universities that support teacher education in the state of Georgia.  Georgia’s state universities graduate about 4,500 teachers each year.  I’ve listed the top ten producing universities in sequence in the table, and then grouped the remaining 11 as “Other Teacher Education Programs.”

If you are not familiar with teacher education in Georgia, the results might surprise you: Kennesaw State University (KSU) is the leader in graduating teachers each year.  It offers at least 19 teacher preparation programs at the undergraduate and graduate levels, and it is the third-largest public university in Georgia, after the University of Georgia (UGA) and Georgia State University (GSU).  Yet the NCTQ reviewed only one of KSU’s 13 programs, four of UGA’s 18 programs, and two of GSU’s 17 teacher prep programs.

If you examine the data in Table 1, you will see that very few of the state’s teacher prep programs were reviewed.


Table 1. Teacher Prep in Georgia in 2012 – 2013 in 19 University of System of Georgia Universities

In fact, if we examine the teacher preparation programs in the University System of Georgia as a whole, there are 151 undergraduate programs, and 118 graduate programs.  This is a total of 269 programs, and the NCTQ reviewed only 39 of them (Figure 2).

What About the Other 6,000 Teacher Prep Programs?

If we extend this thinking to the nation as a whole, we find that the NCTQ has reviewed a minority of the programs available to people who want to become teachers. The American Association of Colleges for Teacher Education (AACTE) lists more than 800 member institutions (public and private) that offer teacher preparation programs.

If we do a little math based on data from the University System of Georgia, we estimate that there are more than 8,900 teacher education programs offered in the U.S. in early childhood education, elementary education, secondary education, special education, art, music, physical education, and gifted education.

The NCTQ reviewed 2,400 teacher prep programs.  They claim that their review answers the question: How are the nation’s institutions that train tomorrow’s teachers doing?  Their report fails to mention that more than 6,000 programs were never reviewed.

Figure 2. Number of Teacher Programs Reviewed by NCTQ Compared to the Number of Existing Programs in Georgia and the USA
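The extrapolation above can be sketched as a quick back-of-the-envelope calculation. The input figures (269 USG programs across 21 institutions, 800+ AACTE member institutions, 2,400 programs reviewed) all come from this post; the assumption that Georgia’s per-institution average is roughly representative of the nation is mine, not a figure the NCTQ publishes:

```python
# Back-of-the-envelope extrapolation using the figures cited in this post.
# Assumption: Georgia's programs-per-institution ratio is roughly typical
# of teacher prep institutions nationwide.

georgia_programs = 151 + 118      # USG undergraduate + graduate programs = 269
georgia_institutions = 21         # USG universities offering teacher prep
aacte_institutions = 800          # AACTE member institutions (a lower bound)
nctq_reviewed = 2400              # programs the NCTQ reviewed, per its report

per_institution = georgia_programs / georgia_institutions    # ~12.8
national_estimate = per_institution * aacte_institutions     # ~10,250

print(f"Estimated programs nationwide: {national_estimate:,.0f}")
print(f"Share reviewed by NCTQ: {nctq_reviewed / national_estimate:.0%}")
```

Scaling 800 institutions by Georgia’s average of roughly 12.8 programs apiece gives a bit over 10,000 programs, consistent with the “nearly 10,000” estimate below, and implies the NCTQ examined well under a quarter of them.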

The results that the NCTQ is touting as a report on the status of the nation’s teacher preparation programs are a sham.  The NCTQ method is described here. According to the NCTQ, it posted rankings of 1,612 teacher prep programs in 1,127 public and private institutions of higher education.  If my research, as reported above, is correct, there are even more teacher prep programs in the nation than I had reported.  Using the NCTQ figures, a fair estimate would be that there are nearly 10,000 teacher prep programs around the country.  The NCTQ is very specific about what it defines as a teacher prep program.  Table 2 is a list of 11 programs that were reviewed by the NCTQ.  Notice the specificity of each program.  For example, the first one is a Master of Arts in Teaching program in English education at Clayton State.  There are also separate MA in Teaching programs in mathematics, science, and social studies.

The NCTQ does not even come close to reviewing the state of teacher preparation in Georgia.  My assumption, based on the data collected in Georgia, is that a similarly low rate of review exists in every other state in the nation.

Table 2. Examples of Teacher Prep programs in Georgia reviewed by the NCTQ

NCTQ Review is Junk Science, Not an Honest Review of Teacher Prep

Figure 3. Pie Chart Showing How Little of Teacher Prep in Georgia was reviewed by the NCTQ.

Although I focused only on teacher preparation in Georgia, I have shown that the NCTQ Review is not representative of the range and depth of teacher preparation. The NCTQ method is a sorry example of “research.” I evaluated the method of the 2013 NCTQ Review and found that it was an example of junk science, based on M.S. Carolan’s research on junk science.  The NCTQ assumes that teacher prep is failing and looks for evidence to support this view.  The data are suspect.  The method involves using legal maneuvers to get data from universities rather than seeking data in a collaborative way.  Its references are meager, and none of its work is based on the large body of research on teacher preparation.  The review was not peer reviewed.  And only a few teacher educators were involved with the NCTQ.

The NCTQ review is not a valid review of teacher preparation.  It is anemic and incompetent, as we see in Figure 3.

What are your views on the status of teacher prep in Georgia or in other states?

National Council for Teacher Quality Teacher Prep Review: A Stacked Deck?

National Council for Teacher Quality Review: A Stacked Deck?  In this post I am going to show that the make-up of the NCTQ teacher prep review panels represents a “stacked deck.”  Instead of working with teacher educators directly, the NCTQ uses deceptive and inadequate methods to investigate teacher prep.

Who are these people who have signed on to participate in this type of review?  What is in it for them?

If you were to design a study of the medical profession, do you think it would be a good idea to involve, directly and in significant proportions, professionals who are out there practicing medicine, whether physicians, physician assistants, nurse practitioners, nurses, technicians, or the many other professionals who make up the medical field?

Would you include visits to clinics, doctors’ offices, hospitals, and labs, or would you rely on websites and documents as the data for your investigation? Whom would you ask to check your report before it was published?

The NCTQ has appointed itself as the evaluator-in-chief of teacher preparation in the nation’s public and private colleges and universities, and in the few alternative teacher prep programs.  The group, which was created as a spin-off of the Thomas Fordham Foundation, has used spurious methods to acquire the information it used to write a review of the nation’s teacher preparation infrastructure.

In general, teacher prep is carried out in hundreds of the nation’s public and private colleges and universities.  The NCTQ has created a partnership with many private foundations and corporations that support it (including the Gates Foundation, Broad Foundation, & Walton Foundation) and U.S. News and World Report.  They did not visit the universities to observe the clinical approaches that permeate teacher education. Instead, they used university catalogs and course syllabi (when they could get them) as their data source.

Rather than conducting a scientific inquiry into the nature of teacher prep, the NCTQ has launched an assault on the teacher education profession, much like the assault being made on American classroom teachers.

I wondered at first who was involved in the review, other than the big gun at NCTQ, Kate Walsh.  I was interested to find out who their analysts were and what positions they held.  Did they include an adequate representation of teacher prep? Did the NCTQ hire analysts who had a direct connection to teacher prep in the nation’s colleges and universities? Or did they stack the deck with those who agree with the NCTQ’s view that teacher education is inadequate and failing the nation’s public schools?

To find out, I went to the Who We Are page on the NCTQ website.  According to this page, the review team comprised ten in-house analysts and 75 more general and expert analysts.  My analysis differed somewhat: I found that more than 20 NCTQ staff worked on the report.

The NCTQ Who We Are page groups the analysts into:

  • Technical Panel
  • Audit Panel
  • Advisory Groups
  • General Analysts

According to my count, 79 people were involved in the NCTQ review. The first question I asked was how many of these people were teacher educators.  Figure 1 summarizes the membership of the study teams.  More than 27% of the study team members were employed by the NCTQ, while teacher educators represented only 2.5%.  Even when we include administrators (some of whom were education deans), professors of education, and adjuncts, only 17% of the analysts work in the field of teacher prep.

Figure 1. NCTQ Study Teams for the 2014 Teacher Prep Review
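The shares just cited can be recomputed from the counts given in this post. The total of 79 analysts and the 22 NCTQ employees appear in the text; the count of two teacher educators is inferred from the stated 2.5% share and is my assumption:

```python
# Recomputing the panel-composition percentages from counts in this post.

total_analysts = 79      # people involved in the NCTQ review, per my count
nctq_staff = 22          # NCTQ employees among them
teacher_educators = 2    # inferred from the stated 2.5% share; an assumption

print(f"NCTQ staff share: {nctq_staff / total_analysts:.1%}")              # 27.8%
print(f"Teacher educator share: {teacher_educators / total_analysts:.1%}") # 2.5%
```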

Teacher educators are like physicians, nurses, or physician assistants. They see and work with students or clients (patients) in the real world. A teacher educator’s role is to teach, advise, and clinically involve undergraduate and graduate students in either initial teacher prep or the continuous professional development of teachers through courses, institutes, and degree programs. Teacher educators, by and large, are also researchers inquiring into the nature of teaching and learning.

Research scientists, mathematics or economics professors, corporate executives, philanthropists, consultants who own private educational companies, or administrators are not teacher educators, any more than pharmaceutical drug reps are medical practitioners.

Figure 2 is a bar graph comparing the number of teacher educators who were directly involved in the NCTQ review to the number of non-teacher educators. The comparison shows us that the teacher prep profession is greatly underrepresented in the NCTQ review.

Figure 2. Comparison of Types of Analysts Comprising the NCTQ Study Teams

Are the conclusions the NCTQ draws valid, given this comparison?

Figure 3 is another bar graph showing the distribution of “professions” and organizations of the members of the panels identified by the NCTQ: the Technical Panel, Audit Panel, Advisory Groups, and General Analysts. The chart clearly shows that corporate and foundation executives and employees of the NCTQ make up the largest numbers of participants in the review.

Figure 3. The Stacked Deck of Who Performed the NCTQ of Teacher Prep Review

It’s not surprising that 22 of the 79 people listed were employees of the NCTQ. Many of these people were trained to read catalogs and syllabi and to rate teacher prep against the NCTQ’s own standards, standards that are not grounded in teacher education research.

A number of professors were hired by the NCTQ to lend the review credibility and expertise. Why were most of them either professors of mathematics or economics, or research associates from one school (the University of Oregon)?

The membership rolls also include eight corporate executives and 12 foundation executives. When I looked more carefully at their bios, it was clear that they share the same core values as the NCTQ, and a number of them sit on each other’s boards.

The NCTQ website also lists the names of people and organizations that endorse the 2014 report.  A few superintendents are listed, as well as a number of organizations that support the corporate reform model that the NCTQ and its sister organization, the Thomas Fordham Foundation, work to push onto schools, and now onto teacher prep institutions.  You can see the list of endorsers here.

In the week ahead, be on the lookout for reports written by other bloggers and educators on the NCTQ.

What do you think about the make-up of the NCTQ panels?