A Systems Thinker Reviews The Atlanta Public Schools’ Performance in Reading & Math


People are asking for better schools, but with no clear idea of how to improve education, or even how to define improvement of education (except to increase performance on high-stakes tests).

Most people are in favor of improving education.  But when asked how they would improve education, their suggestions are insufficient, and in some cases even negative (see W. Edwards Deming, The New Economics for Industry, Government, Education, p. 8, Kindle Edition).

Instead of reporting the details of how people and organizations such as corporate chiefs, philanthropists, and the U.S. Department of Education want to improve education, I want to report on the work of Mr. Ed Johnson, an advocate for quality education who has for more than a decade devoted himself to writing and talking about improving education in the Atlanta Public Schools.

Systems Thinking

Ed Johnson consults as Quality Information Solutions, Inc., with a commitment to helping human social and cultural systems receive quality information from information systems for the continual improvement of life, work, and play. His commitment extends to advocating the transformation of K-12 public education systems from the prevailing mechanistic paradigms to humanistic ones. Ed is also a former president of the Atlanta Area Deming Study Group.

Ed Johnson is a systems thinker.

In this regard, he believes schools cannot be improved by trying to improve the parts separately.  That is a sure path to failure.  For example, some advocates of educational reform believe that student achievement can be improved by weeding out the bad teachers.  Millions of dollars have been invested in using students’ high-stakes test scores to check teacher performance through a technique called value-added measurement (VAM).  Teachers whose VAM scores are low can be identified, and according to these experts, teachers with low scores must be bad teachers.  But getting rid of “defects” in any system will not improve the system or the part that was identified.  A better investment would be to ask how we can improve the quality of teaching, and what can be done to improve the teaching of all educators.

The above example highlights the current approach to reform: identify a part of the system, and fix it.  Bad teachers?  Get rid of them.  Low achievement scores?  Write “rigorous” standards, raise the bar, and give high-stakes tests.  It’s that simple.  We’ve had rigorous and not-so-rigorous standards in place for more than a decade, and as you will see ahead, changing standards doesn’t have any effect on student performance.

Systems thinking means that all parts of a school system are interdependent and must be taken as a whole.  The Atlanta Public Schools (APS) is a system of interconnected and interdependent parts, and to improve the quality of the APS, it is critical to look at the APS as a whole.  For example, closing schools (removing so-called underperforming schools) does not improve the APS, or indeed save money (as some would tell you).  Fundamental questions about APS need to be asked, but in the context of the APS being a system, not a collection of schools, students, teachers, administrators, parents, curriculum, textbooks, and technology.

Ed Johnson has contributed to my understanding of quality education, and it is my great honor to share his work on this blog, in particular his look at teaching in urban schools, and specifically the Atlanta Public Schools.

Trial Urban District Assessment (TUDA)

The National Assessment of Educational Progress (NAEP) created the Trial Urban District Assessment (TUDA) in 2002 to assess student achievement in the nation’s large urban districts.  Reading results were first reported in 2002 for six districts, and math results were reported in 2003 for 10 districts.

The NAEP provides data from 2002 through 2012 on math and reading that are comparable to NAEP national and state results because the same assessments are used.

Using data from all of the Trial Urban District Assessments (2002 – 2013), available online at The Nation’s Report Card, Ed Johnson analyzed the results and created a presentation that is a series of systemic stories told by the data collected in the urban district studies.

Each story is about a system.  In TUDA (Trial Urban District Assessment), there are particular TUDs, each a system.  So, Reading is a system.  Mathematics is a system.  The 4th grades are a system.  The 8th grades are a system.

Johnson’s research is a longitudinal study of performance in reading and mathematics from 2002 – 2013.  Using scores on reading and mathematics obtained from the National Center for Education Statistics, he investigated the nature of a number of systems derived from the data.  Some of these particular systems include:

  • Reading as a System
  • Mathematics as a System
  • 4th Grades as a System
  • 8th Grades as a System

Since these systems are part of the APS system, we know that each of these systems is interdependent with other systems, not just the ones identified here, but including parents (as a system), teachers (as a system), and so forth.

There were 21 urban school districts in the study.  However, Ed has managed to make our work easier by highlighting with color coding just two systems, Atlanta Public Schools (Red) and District of Columbia (Purple).


Figure 1. Trial Urban District, Bottom Line.  Source: Ed Johnson edwjohnson@aol.com

Ed starts his study by giving us the “bottom line.”  How did these systems (reading, math, 4th grade, 8th grade) do?  Figure 1 is a summary of systemic TUD student performance in reading and math at the 4th and 8th grades from 2002 – 2011, and predictions for 2013 (all the predictions were accurate forecasts of student performance in 2013).

Only 4th grade reading showed some improvement over the period 2002 – 2011, and the improvement was slight and noticed only in Austin, Charlotte, and Hillsborough.  In all other systems, no improvement was observed, meaning that the common causes that influence the system of math, or reading, or 4th or 8th grade inhibited improvement.

Student Improvement in Mathematics and Reading

In the TUDA study, a sample of students in each urban district was tested in reading and mathematics at the 4th and 8th grade level.  To help us understand how to interpret data collected over the past dozen or so years, Mr. Johnson has produced a series of graphs (control charts) showing the natural variation of scores to be expected in each system (reading, math, 4th grade, 8th grade).

Figure 2 shows a control chart for reading, 4th grade.  Figure 3 shows a control chart for mathematics, grade 4.  Upper and lower control limits were calculated for 2002, and then projected forward.  Changes in scores from one test period to the next are shown in Figure 2.  If there were systemic change in reading at the 4th grade level, then scores would fall “outside” the upper or lower control limits.  You’ll notice that all the variation, except for four points (Charlotte, 2009 and 2011; Austin, 2011; and Hillsborough, 2011), was within the variation expected.  In systems-thinking terms, this means the variation was for the most part random, but there is evidence that some special causes were at work in the three districts mentioned here.
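For readers curious how control limits of this kind are calculated, here is a minimal sketch of an individuals (XmR) control chart.  The 2.66 moving-range factor is the standard XmR constant; the scores below are hypothetical, and this is a generic illustration, not Mr. Johnson’s actual computation:

```python
# Sketch of an individuals (XmR) control chart, the kind of analysis
# described above.  Limits are computed from a "baseline" period and
# projected forward; later points falling outside them suggest
# special-cause (non-random) variation.

def control_limits(baseline):
    """Compute (LCL, UCL) from a baseline sample using the XmR method:
    mean +/- 2.66 * (average moving range)."""
    mean = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

def special_causes(scores, lcl, ucl):
    """Return (index, score) pairs outside the limits, i.e.,
    candidates for special-cause variation."""
    return [(i, s) for i, s in enumerate(scores) if s < lcl or s > ucl]

# Hypothetical district reading scores for the baseline period...
baseline_2002 = [205, 212, 198, 207, 203, 210]
lcl, ucl = control_limits(baseline_2002)

# ...and later scores checked against those same projected limits.
later_scores = [206, 209, 204, 231, 208]
print(special_causes(later_scores, lcl, ucl))  # -> [(3, 231)]
```

The point of the projection is the one made in the text: a score inside the limits says nothing special happened, however much it wiggles; only a point outside the limits is evidence of a special cause.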

Figure 2.  NAEP TUDA, Reading, 4th Grade, All students prepared by edwjohnson@aol.com

Mathematics is another story.  As Mr. Johnson puts it in his study, “all districts have been on the same boat continuously since 2003 in mathematics at the 4th grade level.”  What this means is that the variation shown in the graph is random, and not due to any special cause.

Figure 3. NAEP TUDA, Mathematics, 4th Grade, All Students prepared by edwjohnson@aol.com

There is very little student improvement in reading or mathematics at the 4th grade level as shown in Figures 2 and 3.

As long as we continue to ignore the common causes of variation that exist in the system, we can expect very little to no improvement.

But as Mr. Johnson has said in other letters and reports, if fundamental questions about the purpose of schooling are not addressed, and if we cannot agree on these purposes, very little will change in the system.  In the two systems explored here, reading at the 4th grade and math at the 4th grade, we need to ask: What is the purpose of teaching reading in the elementary school?  Why do we teach reading in the elementary school?  What is the goal of teaching mathematics in the elementary school?  Why do we teach mathematics?

As Mr. Johnson has shown, these districts are in the same boat for the teaching of mathematics; why?  How can we use systems theory to look at mathematics teaching as a system and answer questions about how to improve mathematics learning?  How can we help students develop a love affair with mathematics?

Ed Johnson has examined a lot of data from the standpoint of systems thinking, based in part on his work with W. Edwards Deming and other scholars in the field of systems thinking.

I highly recommend that you check out his study, which you can access as a PDF file here: NAEP TUDA 2002-2011 Views through a Deming Lens.

In the days ahead, I’ll revisit Mr. Johnson’s study and report on his analysis of the “performance gap” variation that he has depicted in a series of images like those shown in Figures 2 and 3.  I’ll also explore systems thinking and schooling in more detail.

NAEP Large City Study Sheds Light on the Effects of the Atlanta Public Schools’ Cheating Scandal


The National Assessment of Educational Progress (NAEP) created the Trial Urban District Assessment (TUDA) in 2002 to assess student achievement in the nation’s large urban districts.  Reading results were first reported in 2002 for six districts, and math results were reported in 2003 for 10 districts.

The NAEP provides data from 2002 through 2012 on math and reading that are comparable to NAEP national and state results because the same assessments are used.

In July 2011, the Governor of Georgia released a report of the state’s investigation into the Atlanta cheating scandal, charging 178 educators with being involved.  According to the report, thousands of school children were harmed by widespread cheating in the Atlanta Public Schools (APS).

According to the Governor’s report, “a culture of fear and conspiracy of silence infected this school system, and kept many teachers from speaking freely about misconduct.” Although I’ve never condoned the cheating that occurred in the APS, the report falls short by not pursuing what caused the culture of fear to exist in the system, which apparently led to the cheating.  Who, besides the employees of the APS, was involved in the so-called conspiracy?  What role did the following play in this scandal: the Georgia Department of Education, the Governor’s Office of Student Achievement, the Atlanta School Board, and the partners who contributed millions of dollars to the APS to boost the academic achievement of Atlanta’s students?

According to the report, the cheating took place in 2009.  By 2010, the scandal had been exposed by the Atlanta Journal-Constitution’s reports on the APS, and we can assume that there was very little, if any, cheating on the state’s 2010 – 2013 Criterion-Referenced Competency Tests (CRCT).

During the period leading up to, during, and after the cheating scandal, the NAEP tested students in Atlanta, as part of the Trial Urban District Assessment in mathematics and reading from 2002 to 2012.  Fourth and eighth grade students were tested using the NAEP tests.

Don’t you think that examining the data on the NAEP tests given as part of the Trial Urban District Assessment might be helpful in several areas?

  • What is the trend of academic performance of Atlanta students (grades 4 and 8) in mathematics and reading during 2002 – 2012?
  • Are there significant changes (increases, decreases) or no changes in the Atlanta data during this period?
  • Is there evidence that the academic performance of students in the APS was harmed or diminished during and after the scandal?  Do student scores change appreciably after we can be sure that there was little if any cheating going on?  Were students victimized as a result of the testing scandal?

What is the trend of academic performance of Atlanta students (grades 4 and 8) in mathematics and reading during 2002 – 2012?

Figure 1 summarizes Atlanta eighth grade students’ scores on the NAEP mathematics test given as part of the Trial Urban District Assessment.  I’ve plotted the average scores of students at the 25th, 50th, and 75th percentiles. The trend at each level is up, and there is no evidence here of a decline or slump in scores for 8th grade students.

The Atlanta NAEP scores in 2011 and 2013 did not decline following the 2009 cheating scandal.  This is an important finding in the context of the Atlanta cheating scandal.  If students had been harmed academically, then their scores might have dropped after the episode of cheating. For more details that go beyond the graph that I produced here, consult this page in the TUDA 2013 report.

Figure 1 is a graph showing the average score of students at the 25th, 50th and 75th percentiles. NAEP, TUDA 2013 Report

What’s interesting in the data are the scores in 2011.  The 2011 eighth grade students were in the sixth grade during the year of the cheating scandal.  If the students were academically victimized because of changes in CRCT scores, then we would hypothesize that their scores would decrease in 2011 and 2013.  But they did not.  In fact, there is an increase in the scores at each percentile level.

Are there significant changes (increases, decreases) or no changes in the Atlanta data during this period?

Figure 2. NAEP Math scores for APS 8th grade students and large city districts.

The scores of Atlanta eighth grade students are plotted and compared to the average scores of the other large cities that participated in the NAEP Trial Urban District Assessment.  Although Atlanta’s scores are lower than the average for each year, the overall trend is upward, and the gap is closing.

Again, when we look two and four years past 2009, the year of the test erasure scandal, Atlanta students are not only doing better, but they are closing the gap.  Is there evidence here that students were academically harmed by the scandal?
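The “closing the gap” reading can be made concrete with a small sketch.  The numbers below are hypothetical, not the actual published NAEP averages; the point is only to show how the year-by-year difference between the large-city average and Atlanta is computed and checked for narrowing:

```python
# Hypothetical NAEP-style averages (NOT the actual published values),
# used only to illustrate how a "closing gap" is read off two series:
# the year-by-year difference between the large-city average and
# Atlanta shrinks over time.

large_city = {2003: 262, 2005: 265, 2007: 267, 2009: 271, 2011: 274, 2013: 276}
atlanta    = {2003: 244, 2005: 248, 2007: 254, 2009: 259, 2011: 266, 2013: 270}

# Gap for each assessment year.
gaps = {year: large_city[year] - atlanta[year] for year in large_city}
print(gaps)

# A simple check that the gap narrows from one assessment to the next.
ordered = [gaps[y] for y in sorted(gaps)]
print(all(b <= a for a, b in zip(ordered, ordered[1:])))  # -> True
```

With the real TUDA averages, the same subtraction is what underlies the claim in the text: a rising Atlanta line alone is not enough; the difference between the two lines has to shrink.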

Is there evidence that the academic performance of students in the APS was harmed or diminished after the scandal?  Do student scores change appreciably after we can be sure that there was little if any cheating going on?  Were students victimized as a result of the testing scandal?

The NAEP tests are separate from, and administered differently than, the state CRCT tests.  Indeed, the NAEP tests are low-stakes, and according to many researchers, NAEP scores are more valid and reliable than the high-stakes CRCT if one wants to get an idea of student performance.  The CRCT is a high-stakes assessment used not only to assess students; its results are also used to evaluate teacher performance.

There clearly are reasons to wonder whether students were harmed by the cheating that took place in 2009 in the APS.  Academically, there is little evidence of harm based on the average scores reported in the NAEP study.  However, we have to wonder about the social-emotional consequences caused not only by the cheating itself, but also by the standardization and high-stakes testing reform movement that most likely contributed to the “culture of fear and conspiracy” in the APS.

Stephanie Jones, professor of education at the University of Georgia, has written extensively on the social-emotional consequences of the authoritarian standards and high-stakes testing dilemma.  She asks, “What’s the low morale and crying about in education these days?  Mandatory dehumanization and emotional policy-making–that’s what.”

Policy makers, acting on emotion and little to no data, have dehumanized schooling by implementing authoritarian standards in a one-size-fits-all system of education.  We’ve enabled a layer of the educational system (the U.S. Department of Education and the state departments of education) to carry out the NCLB act and high-stakes tests, and to use data from these tests to decide the fate of school districts, teachers, and students.  One of the outcomes of this policy is the debilitating effect on the mental and physical health of students, teachers, and administrators.

If you don’t believe that, here is a quote from Professor Jones’ article:

I’ve witnessed sobbing children in school, tears streaking cheeks. When children hold it together at school, they often fall apart at home. Yelling, slamming doors, wetting the bed, having bad dreams, begging parents not to send them back to school.

More parents than ever feel pressured to medicate their children so they can make it through school days. Others make the gut-wrenching decision to pull their children from public schools to protect their dignity, sanity and souls. Desperate parents choose routes they had never thought they’d consider: home schooling, co-op schooling, or, when they can afford it, private schooling. But most parents suffer in silence, managing constant family conflict.

Were Atlanta’s Students Harmed by the Test Erasure Scandal?

Based on NAEP data, Atlanta students continued to improve in mathematics, even after cheating was discovered and eliminated from the district.  Although NAEP does not investigate the social-emotional effects of school, there is evidence that the current emphasis on high-stakes testing contributes to and has amplified emotional and behavioral disorders among youth.  How can it be good practice that in one district in Georgia, 70 of 180 days of the school year are devoted to some kind of state or federal testing?  For more data on this, please refer to a discussion of The Paradoxes of High Stakes Testing by Madaus, Russell, and Higgins (2009) in this blog article.

The NAEP large cities study does shed light on the Atlanta Public Schools.  There is evidence that any harm directed toward students was more psychological than academic.

In spite of the national and international attention that the testing scandal generated, teachers in the Atlanta Public Schools positively impacted their students in mathematics and reading.  If you dig deeper into the data, there is evidence of a continued need for more resources and more experienced teachers in schools populated by students living in poverty and students on free or reduced lunch.


Who Benefits When Student PISA Scores Decline?

What’s bad for you might be good for others. In fact, in the world of international tests, American students’ scoring low on the recent PISA test is actually very good for business, profiteers, think tanks, and those who think America’s schools are failing.  Using average scores, the US was ranked 30th in math, 20th in reading, and 23rd in science, a downward trend that plays into the hands of the “doom and gloom” education naysayers.  In this post I want to argue that using PISA tests to evaluate a nation’s educational system is not only unscientific, but that the conclusions reached and “solutions” proposed lack the wisdom needed to support teaching and learning.

Zooming In on the PISA Data

If you look at Figure 1, it seems that student test scores declined from 2009 to 2012.  In math, student scores plunged 6 points, a 1.2% drop in the average score of American students.  In reading, scores dropped 0.4%, and in science, scores slumped 0.9% from 2009 to 2012.  To leaders at Achieve, a company that stands to benefit from “failing schools,” such plummeting scores are just the ticket to further their claim that American schools need to be fixed, and they are ready to do the job.  Achieve wrote the Common Core and the Next Generation Science Standards.  Between the Bill and Melinda Gates Foundation and the Race to the Top, millions of dollars are being pumped into the implementation of these two sets of standards.  When these three entities look at Figure 1, they interpret the results using an ideological framework based on an authoritarian, standards-based, data-driven model.  Although the U.S. has had standards in place in all the states during the period shown in the graph, Achieve, Gates, and Duncan (AGD) claim that an important step toward fixing the American school problem is the adoption of the Common Core.
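As a quick arithmetic check on those percentages (the 2009 baselines of roughly 487 in math and 500 in reading are assumed from the published OECD tables, not given in this post):

```python
# The "plunge" percentages quoted above are just point declines
# divided by the baseline average score.

def percent_drop(baseline, later):
    """Percentage decline from a baseline score to a later score."""
    return 100.0 * (baseline - later) / baseline

print(round(percent_drop(487, 481), 1))  # 6-point math drop    -> 1.2
print(round(percent_drop(500, 498), 1))  # 2-point reading drop -> 0.4
```

Seen this way, the headline "plunge" amounts to roughly a one-percent movement on a 500-point scale, which is worth keeping in mind when reading the reactions that follow.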

The Brown Center Report on American Education (2012) showed that standards (whether good or bad) have had no effect on student achievement scores as measured by NAEP, which many researchers consider a much more powerful measure of student learning in American classrooms.  Author Tom Loveless explains that neither the quality nor the rigor of state standards was correlated with NAEP scores. Loveless suggests that if there were an effect, we would have seen it, since all states have had standards in place since 2003.

Yet AGD would have us believe that implementing a new set of standards in American schools will cause momentous change in student achievement.


Figure 1. PISA Scores for American 15 year-olds, 2000 - 2012

Zooming Out

What happens if we zoom out and look at the data from a different perspective?  Figure 2 compares US PISA scores in math, reading, and science with the OECD average for all nations that participated in the PISA tests from 2000 – 2012.  In this case, I used a scale that also includes the scores of the highest- and lowest-scoring nations for the years that PISA was administered.  When you look at American scores over the past dozen years, they appear as a flat line in the same place as the average scores posted by all nations.  In fact, it is difficult to distinguish one score from another.

The naysayers look at this data and claim that the sky is falling, and that if we don’t fix American education, our students will not be able to compete with students from the “highest scoring” nations.  Not only that, if this trend continues, it will affect the nation’s economy.  Both of these conclusions are simply not supported in the research literature.  They are nothing more than political and authoritarian ideology.

American student scores on international tests are predictable and sustainable.  But reformers such as Achieve, Gates, and Duncan use a market and business strategy that compels schools to increase student achievement EVERY year; if teachers don’t fulfill this ridiculous goal, their jobs are jeopardized.  Schools risk being closed, or taken over by private charter corporations.  If you don’t believe me, please read any one of my posts on Georgia’s Race to the Top to discover how the money is spent, and how the state has developed questionable relationships with charter companies, Teach for America, and the New Teacher Project.

The graph in Figure 2 actually shows how stable American education is.  But the doom and gloom naysayers use the graph to warn Americans that we are losing the global competition war, much like the Cold War.  In fact, much of the reasoning used today to claim American education is “behind” other advanced nations is very similar to the claims made during the Sputnik era, the Cold War, and the race for space.  During that period, according to scientists, our education system was antiquated and lacked the rigor needed for Americans to understand science, mathematics, and technology.  (For a full discussion of this, please see Scientists in the Classroom by John L. Rudolph (Library Copy).)  When the “nation was at risk” during the 1980s, it was the economies of Germany and Japan that posed a threat to America.  The threat today is from those nations that score high on international tests, such as PISA.


Figure 2. US PISA scores in math, reading, and science compared with the OECD average, 2000 – 2012

These tests do not measure what many in America consider important goals of education, including creativity, the social skills of communication and collaboration, interdependence, and being innovative.  In her newest book, Nel Noddings questions those who consider American schools to be failing.  She reminds us that comparing scores from different nations does not take into account the differences among those countries.  In fact, if we compare countries that are alike, we find that their scores are similar.  She suggests that we should not obsess over international scores.  She says:

We are now in the 21st century, and it is time to reduce the emphasis on competition. Cooperation will be a major theme throughout this book. We are living in a global community— that is, we are trying to build such a community— and the keywords now are collaboration, dialogue, interdependence, and creativity. This does not mean that there should be no more competition; some competition is both necessary and healthy, and it often promotes better products and performances. But in the 21st-century world, collaboration is the new watchword. People must work together to preserve the Earth and to promote the welfare of all its inhabitants.  Noddings, Nel (2013-01-25). Education and Democracy in the 21st Century (pp. 1-2). Teachers College Press. Kindle Edition. (Library Copy)

International test scores do not measure interdependence and social skills, nor do they measure innovation.  They measure factual knowledge.  PISA claims to measure students’ ability to apply knowledge, yet the test questions are not localized, and remain outside any kind of context that would be meaningful to students.  Relying on international test scores to measure a nation’s academic abilities is a dead-end.

In Diane Ravitch’s book The Reign of Error (Library Copy), she references research done by Keith Baker, a former researcher for the U.S. Department of Education.  In his research, he raised questions including “Are international tests worth anything?  Do they predict the future of a nation’s economy?”  Ravitch reports that Baker reviewed data going back to 1964 to answer these questions.  This is what she said about his research:

Baker looked at per capita gross domestic product of the nations whose students competed in 1964. He found that “the higher a nation’s test score 40 years ago, the worse its economic performance on this measure of national wealth— the opposite of what the Chicken Littles raising the alarm over the poor test scores of U.S. children claimed would happen.” The rate of economic growth improved, he held, as test scores dropped. There was no relationship between a nation’s productivity and its test scores. Nor did high test scores bear any relationship to quality of life or livability, and the lower-scoring nations in the assessment were more successful at achieving democracy than those with higher scores.  Ravitch, Diane (2013-09-17). Reign of Error: The Hoax of the Privatization Movement and the Danger to America’s Public Schools (Kindle Locations 1500-1505). Knopf Doubleday Publishing Group. Kindle Edition.

We’re Number 1

There is an incessant desire in this country to be number 1, and in education, international tests such as PISA and TIMSS provide the arena for the competitions to take place.  Unfortunately, using student test scores to set up league tables listing nations from the “highest performing” down to the “lowest performing” creates an aura of competition that undermines our wish to help students learn.  Nel Noddings provides a powerful summary of this idea.  She says:

Recently, President Obama advised— to considerable applause— that we (the United States) must out-innovate, out-educate, and out-build the rest of the world. This is an example of 20th-century thinking that many of us believe must be put behind us. From one perspective, we are urged to reclaim the ways that, in the 20th century, made us great. From a second perspective, those ways are thought to be dangerous. Habits of domination, insistence on being “number one,” evangelical zeal to convert the world to our form of democracy, all belong to the days of empire. In the 21st century, without deriding the accomplishments of the 20th century, we must vow not to repeat the horrors of war that accompanied our rise to world power; it is time to recover from the harm done by such thinking and look ahead to an age of cooperation, communication (genuine dialogue), and critical open-mindedness.  Noddings, Nel (2013-01-25). Education and Democracy in the 21st Century (p. 2). Teachers College Press. Kindle Edition.

What is your interpretation of the PISA data as shown in the graphs in Figures 1 and 2?

Teacher Educators are Teachers First by Practicing What They Teach


This is the first of several posts that will be published here about the art of teacher education.  There is a rich body of research on teacher education, and I will make use of recent work showing that teacher education is a vibrant and energetic field being led by a new cadre of educators who are willing to get out there and do it.

Mike Dias, Charles Eick, and Laurie Brantley-Dias are three members of this new cadre of teacher educators, and their work forms the basis for this story: teacher educators are teachers first; they practice what they preach.

For more than 30 years I practiced science teacher education, which meant that not only did I teach courses at the university, but I also taught science in K-12 schools, first as a science teacher in Lexington and Weston, Massachusetts, and then as a professor at Georgia State University.  But there was also something that I found even more powerful, and that was the collaboration I had with practicing teachers and administrators.  As a teacher educator, I felt it was crucial to work in parallel with teachers in the metro-Atlanta area, and when possible to teach science education courses collaboratively with a practicing teacher.  Our doctoral program in science education attracted many local science teachers, and as graduate students, they worked as graduate teaching assistants in many of our courses.

Three of the graduate students who would later go on to complete doctoral programs in education were Mike, Charles, and Laurie. Michael and Charles were former students in our graduate science education program, Charles earning his master’s degree and Michael his Ph.D.; Charles did his Ph.D. at Auburn after completing his work at GSU.  Laurie did her doctoral studies in instructional technology at GSU and was a member of the GSU faculty for several years.  She and Mike (her husband) have professorships at Kennesaw State University (GA), and Charles is a professor of science education at Auburn University.

Practicing What We Preach

Mike, Charles, and Laurie teamed up to organize a unique project in teacher education in which they asked more than a dozen fellow science teacher educators around the country to “practice what they preach.”  On a warm summer Atlanta evening, after attending an Atlanta Braves game, the three of them discussed Charles’ upcoming sabbatical leave.  Charles had made arrangements to spend his sabbatical teaching eighth grade science in Auburn, Alabama, and at that informal gathering Mike and Charles decided to study together Charles’ experience of going back into the classroom as a science teacher.  Working together, they “studied” Charles’ experience using quantitative and qualitative information.  Laurie played the role of the outsider perspective, bringing further meaning and co-construction to the ideas that emerged from Charles’ and Mike’s research.  Together they published papers about their work as teacher educators practicing what they preach.

Science Teacher Educators as K-12 Teachers

Then, Laurie suggested that the idea should be turned into a book.  Through the Association for Science Teacher Education, they put out a call for papers from fellow science teacher educators who would write chapters describing their experiences practicing what they preach.  For more than two years they worked with other teacher educators and produced a book containing 16 unique accounts of science teaching at various grade levels, K-12.  The book they published is entitled Science Teacher Educators as K-12 Teachers: Practicing What We Teach (2014).

I reviewed the book and found it to be a very important and astonishing autobiographical collection of papers written by colleagues who in these pages took the risk not only of going back into the classroom to teach science, but also of being transparent about their experiences, sharing their successes as well as the conflicts they met on their journey. (Disclaimer: I wrote the closing chapter of the book.)

There is richness in these reports, as well as creativity, and above all else there is courage, as shown by these teacher educators’ willingness to leave the safety of university life and immerse themselves in the world of K-12 classrooms.  Many of the authors took this step to find out how it feels to be back in today’s classroom, and how the experience might affect their work as teacher educators.  Trying out inquiry-based reform and constructivist approaches was also a central goal of most of the authors.  They also hoped that thoughtful reflection on their experience, through the writing and critique of their chapters, would give them the confidence to change their views and to influence their university colleagues and students.

The authors of these chapters described their experience through a process of collaboration and/or self-reflection.  Their immersion into the real lives of students and teachers showed the complexity of teaching, and in some cases, the difficulty in being successful in the classroom.  These were experienced teacher educators with strong backgrounds in science and pedagogy, yet they experienced a variety of problems.

The posts to follow on the work of these teacher educators, who chose to practice what they preach, will lead us into the art of teacher education.  Teacher education, like medical education, requires people with strong content backgrounds, and (in my view) they also need a strong understanding of how to communicate with students and how to choose the pedagogies that will help students understand, comprehend, and fall in love with the subjects they teach.

This is no easy matter.  I look forward to telling you more about these teacher educators, and how their work can help us understand the nature of teacher education, and to provide research that outshines any of the critics of teacher education that seem to dominate the dialogue.

Shanghai-China, Canada, Chile and Liechtenstein Head to the PISA Finals


Suppose the PISA nations were organized into leagues and, following the tradition of the sports world, a final competition were held in February 2014 to coincide with the Winter Olympics in Russia.

Why these four countries?  Shanghai is an obvious choice.  Shanghai-China had the highest scores of all nations in maths (613), reading (570) and science (580).  So, if there were going to be a “jeopardy”-type finals based on the PISA 2012 scores, Shanghai-China would be seeded number one.  But why the other three countries?  Why Canada, Chile and Liechtenstein?  None of these countries scored in the top dozen on PISA 2012.  So, what’s going on here?

I organized the nations that participated in the PISA 2012 international test into four conferences.  Borrowing the well-established structure of the National Football League, Major League Baseball, and the National Basketball Association, four PISA conferences emerge as follows:

  • Eastern Conference
  • Western Conference
  • Southern Conference
  • Northern Conference

When the nations are organized into these four conferences, at the top of the leader board in each are the following nations: Shanghai-China (Eastern), Canada (Western), Chile (Southern), and Liechtenstein (Northern).  Disclaimer: I did not include all the nations that participated in PISA 2012, but those I did include are clearly representative of each “conference.”
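The grouping described above can be sketched in a few lines: assign each nation a conference, then pick the leader by mean score across the three subjects. The conference assignments follow the post; the score triplets are the published PISA 2012 figures for the four leaders plus two illustrative comparison nations, so treat the data as a sketch rather than the full standings.

```python
# Sketch of the "conference" grouping: each nation maps to a conference
# plus (maths, reading, science) scores; the leader of each conference
# is the nation with the highest mean score. Data is illustrative.
pisa = {
    # nation: (conference, maths, reading, science)
    "Shanghai-China": ("Eastern", 613, 570, 580),
    "Canada":         ("Western", 518, 523, 525),
    "United States":  ("Western", 481, 498, 497),
    "Chile":          ("Southern", 423, 441, 445),
    "Brazil":         ("Southern", 391, 410, 405),
    "Liechtenstein":  ("Northern", 535, 516, 525),
}

leaders = {}  # conference -> (nation, mean score)
for nation, (conf, *scores) in pisa.items():
    mean_score = sum(scores) / len(scores)
    if conf not in leaders or mean_score > leaders[conf][1]:
        leaders[conf] = (nation, mean_score)

for conf, (nation, mean_score) in sorted(leaders.items()):
    print(f"{conf}: {nation} ({mean_score:.0f})")
```

Run on the full PISA table with your own conference assignments, the same loop reproduces whatever “leader board” the filtering implies.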

Take a look at the standings as organized into these four conferences.  Does this organization tell us anything we didn’t know before filtering the nations into these divisions?

Eastern Conference

No surprises here, except when you dig deeper and ask why these countries do so well on international tests.  All of these countries scored above the OECD average. According to Yong Zhao, we need to keep in mind that Asian education systems have always done well on international tests.  Dr. Zhao is very skeptical of the Asian success on tests such as PISA because “they are very poisonous.”  He reminds us that the Asian formula for success is based on four ideas: competition, standardization, frequent testing, and privatization.  Indeed, these are the “symptoms” of the Global Education Reform Movement (GERM), which, according to Pasi Sahlberg, are basic elements of reform in England and the United States.  And according to Sahlberg, GERM has behaved like a virus, spreading around the world.  We’ll see ahead how GERM is spreading into the Southern Conference of nations.

Western Conference

The Western Conference includes nations from the “Western Hemisphere.” Four of the Western Conference nations scored above the OECD average in maths, reading, and science.  The angst that appears every three years among these nations plays out in the newspapers and in the public remarks of each nation’s education secretary or commissioner. As some researchers have pointed out (Carnoy and Rothstein, 2013), the reasons countries do well or poorly are complicated.  Much of the difference in scores can be attributed to non-school factors such as family income, poverty level, books in the home, health care, and so forth.  The difference between the top-performing and lowest-performing nations in the Western Conference is 37 points (maths), 48 (reading), and 39 (science).

As Carnoy and Rothstein (2013) point out, if countries like the U.S. had social class compositions similar to that of the leading nations on the PISA test, the U.S. would move up in the Western Conference almost to the top.

In fact, if PISA scores were reported by school poverty level (e.g., less than 10%, 10 – 24%, or 25 – 49%), the scores would look very different, as shown in the graphic below.

Figure 1. Graphic from the AFT video, What Does the PISA Report Tell Us About U.S. Education. (http://bcove.me/emqcqfi7)

Southern Conference

If we look at nations in the Southern Conference, Chile leads the rankings.  Although these countries scored below the OECD average, the same issues that face Western Conference nations apply to the Southern Conference.  There is a need to provide equitable education to all students.  An analysis of PISA data shows that GDP per capita correlates with performance in mathematics.  For example, nations whose GDP per capita ranged from $15,000 – $19,000 scored below the OECD average.  Follow this link to an interactive graph showing these results.

Figure 2.  Mathematics score versus GDP per capita, PISA 2012. Source: Sedghi, A., Arnett, G., and Chalabi, M. “Pisa 2012 results: which country does best at reading, maths and science?” The Guardian, December 3, 2013. http://www.theguardian.com/news/datablog/2013/dec/03/pisa-results-country-best-reading-maths-science
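The GDP-versus-maths relationship in that interactive graph amounts to a correlation claim, which is easy to check on any table of country data. The sketch below computes a Pearson correlation; the (GDP, score) pairs are illustrative round numbers chosen to mimic the trend, not the official dataset.

```python
# Minimal sketch: Pearson correlation between GDP per capita and maths
# score. The data points are illustrative, not the official PISA table.
from statistics import mean

pairs = [  # (GDP per capita in USD, maths score) -- illustrative
    (15000, 420), (18000, 445), (25000, 480),
    (35000, 500), (45000, 515), (55000, 520),
]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

gdp, score = zip(*pairs)
r = pearson(gdp, score)
print(f"r = {r:.2f}")
```

A strongly positive r on data like this is consistent with the post’s point, but, as with the poverty-level breakdown above, correlation with national wealth says nothing about what any one school system is doing right or wrong.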


Northern Conference

Northern Europe and Scandinavia include many nations that have traditionally done well on the PISA tests.  However, if you were to read the newspaper headlines in these countries, especially Finland, Norway and Sweden, you would think that the sky is falling.  Finland, which has typically been at the top of the PISA charts, fell a few places in the rankings.  But many educators, such as Johann C. Fuhrmann and Norbert Beckmann-Dierkes, explain how Finland has created an educational system that is NOT based on standards and high-stakes testing, but on centering education on equity, health care, and teacher autonomy.  These are characteristics that are in short supply around the world.  Most countries believe that a central and standardized curriculum is in the best interests of all students.  They also believe that accountability should be visible by means of market and corporate strategies, and that teachers should be evaluated by their students’ learning.

So, there you have it.  A look at the PISA results through four lenses.  Is it possible that we can learn more about education by examining the nature of schooling from different points of view, and through different cultures and situations?  What do you think?