Fordham Institute’s Evaluation of Next Generation Science Standards Rated as Junk Science

In this post I am going to provide evidence that the Fordham Evaluation of the Next Generation Science Standards is junk science and does not meet the basic standards of scientific research.  Figure 1 is the Junk Science Evaluation and Index Form that I designed to assess the Fordham Evaluation.  The ten categories are definitions of junk science that emerged from a study by Michael Carolan (2012).  He assessed ten years (1995 – 2005) of newspaper articles that included the words “junk science” in the title by systematically analyzing and coding the articles according to how the term was used.  I’ve used the ten definitions as the categories shown in Figure 1.

Disclaimer: I have major concerns about using national science standards for every school student, K-12.  I also do not subscribe to the rationale or policy upon which the standards movement is based.  The rationale for science described in the NGSS is not related to the conception or philosophy of a sustainable planet, but is instead science in the service of the economic growth of the nation, job training, and economic competitiveness in a global society. The science standards were designed by scientists and engineers, and so there is a heavy emphasis on scientific process and content instead of thinking about science curriculum that would be in the service of children and adolescents.  I have written extensively about this on this blog.  Nevertheless, I have major concerns about the Thomas Fordham Institute’s biased assessment of science education, and write this blog post in this context.  In no way do I endorse the NGSS.

Each category is an indicator that the study under review might be considered junk science.   When partisan or advocacy organizations issue reports, they are often done outside the normal context of scientific research.  In many cases, the reports are written by in-house organizational employees who indeed may have advanced degrees, but who isolate themselves from the research community at large.  Often the reports are not peer-reviewed.  One of the most obvious defects in these reports is that they tend to use methods that are not reproducible or are so murky that the results are clearly suspicious.

I’ve left the form in Figure 1 blank if you would like to reproduce it.

Strongly Disagree (1) Disagree (2) Neutral (3) Agree (4) Strongly Agree (5)
1. Based upon bad policy
2. Experts with agendas
3. False Data
4. No data or unsubstantiated claims
5. Failure to cite references
6. Uses non-certified experts
7. Poor methodology
8. Too much uncertainty to arrive at conclusions
9. Reveals only data that supports conclusions
10. Non-peer reviewed

Figure 1. Junk Science Evaluation & Index Form
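For readers who want to score the form themselves, the overall index is simply the mean of the ten ratings. Here is a minimal sketch; the sample ratings are invented for illustration and are not the scores from any actual review:

```python
# Compute an overall Junk Science Index as the mean of the ten ratings
# from the form in Figure 1 (1 = Strongly Disagree ... 5 = Strongly Agree).
# The sample ratings below are hypothetical, for illustration only.

def junk_science_index(ratings):
    """Average of ten 5-point Likert ratings."""
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected ten ratings between 1 and 5")
    return sum(ratings) / len(ratings)

sample = [5, 5, 2, 5, 5, 5, 5, 5, 5, 5]  # hypothetical ratings
print(junk_science_index(sample))  # prints 4.7
```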

How Does the Fordham Final Evaluation of the Next Generation Science Standards Stack Up?

The Fordham Institute evaluation of the NGSS is a flawed report based on my assessment of their published document using the Junk Science Evaluation & Index Form.  After reading and reviewing the Fordham report I rated each criterion using a 5-point scale. For each item, I’ve included brief comments explaining my decisions.  As you can see, the overall assessment of the Fordham report was 4.7, which meant that this reviewer strongly agreed with the ten definitions that show that the report is an example of junk science.

 Junk Science Definitions Strongly Disagree (1) Disagree (2) Neutral (3) Agree (4) Strongly Agree (5)
1. Based upon bad policy  X
The policy underlying the Fordham Evaluation of the NGSS is a strict adherence to their traditional view of science content.  Their own set of standards, against which they evaluated the NGSS and the state science standards, is a list of low-level science goals.  In short, the policy of the Fordham Institute and the authors of the report is an unchanging fealty to a conservative agenda and a canonical view of science education.
2. Experts with agendas  X
 The experts of the Fordham Institute seem to have an agenda that dismisses any inclusion of inquiry (practices in the NGSS) and pedagogical advances such as constructivism and inquiry teaching.
3. False Data  X
 There is no attempt to include false data.
4. No data or unsubstantiated claims  X
 Although the authors include written analyses of each content area (physical, earth, and life science), they go out of their way to nitpick standards written by others (NGSS and the states) and fail to realize that the standards they use to judge others’ work are inferior.
5. Failure to cite references  X
The report contains 17 footnotes identifying the references the authors cited in their analysis of a national set of science standards.  None cite refereed journals or books.  Most footnotes are notes about the report or citations of earlier Fordham Institute reports; only four citations come from outside the Fordham Institute, such as the Ohio Department of Education and ACT.
6. Uses non-certified experts  X
 There were no teachers or science education experts among the authors.  Although all the authors hold advanced degrees in science, mathematics, and engineering, they do not seem qualified to rate or judge science education standards, curriculum, or pedagogy.
7. Poor methodology  X
 The authors claimed to check the quality, content, and rigor of the final draft of the NGSS.  They used this method to rate the state science standards two years ago.  The grading metric uses two components: 7 points are possible for content and rigor, and 3 points for clarity and specificity.  Content and rigor is evaluated against their own content standards, which I have assessed using Bloom’s Taxonomy; 72% of Fordham’s science standards were at the lowest levels of Bloom, while only 10% were at the highest levels.  In order to score high on the content and rigor part of the Fordham assessment, the NGSS would have to meet their standards–which I have judged to be mediocre.  The NGSS earned 3.7 (out of 7) on content and rigor and 1.5 (out of 3) for clarity and specificity, for a total of 5.2 (out of 10).  Using these scores, the Fordham Institute drew on their earlier report on the State of the State Science Standards and classified the states as clearly superior, too close to call, or clearly inferior compared to the NGSS. According to Fordham, only 13 states had science standards clearly superior to the NGSS.  The problem, in my view, is that the criteria Fordham uses to judge the NGSS and the state science standards are flawed.
8. Too much uncertainty to arrive at conclusions  X
 The Fordham report was written by people who seem to have an axe to grind against the work of the science education community.  The fact that they failed to involve teachers and science educators in their review shows a disregard for the research community.  And this is surprising, given their credentials as scientists.
9. Reveals only data that supports conclusions  X
 The conclusions that the Fordham group reports boil down to a number that is then translated into a grade.  In this case, the NGSS scored 5.2 out of 10, which converts to a grade of C.  This is what the media pick up on, and the Fordham Institute uses its numbers to create maps classifying states as inferior, superior, or too close to call.
10. Non-peer reviewed  X
 This report is a conservative document that was never shared with the research community.  Its conclusions should be suspect.

Figure 2. Junk Science Evaluation & Index Form of the Fordham Institute’s Final Evaluation of the NGSS
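The arithmetic behind Fordham’s own scores, as described above, is simple to make concrete. Below is a sketch of their two-component metric (7 points for content and rigor, 3 for clarity and specificity). The letter-grade cut points are my assumption, chosen only to be consistent with 5.2 converting to a C; Fordham’s actual conversion table is not reproduced in this post:

```python
# Sketch of the Fordham grading metric: a composite of "content and rigor"
# (out of 7) and "clarity and specificity" (out of 3). The letter-grade
# cut points below are assumed; the post only tells us that 5.2 -> C.

def composite_score(content_rigor, clarity_specificity):
    assert 0 <= content_rigor <= 7 and 0 <= clarity_specificity <= 3
    return content_rigor + clarity_specificity

def letter_grade(score):
    # Hypothetical cut points, consistent with a 5.2 earning a C.
    for cutoff, grade in [(9, "A"), (7, "B"), (5, "C"), (3, "D")]:
        if score >= cutoff:
            return grade
    return "F"

ngss = composite_score(3.7, 1.5)  # the NGSS component scores reported above
print(round(ngss, 1), letter_grade(ngss))  # prints 5.2 C
```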

Even though the Fordham review is junk science, the media, including bloggers on Education Week, have printed stories that largely support the Fordham reports. The National Science Teachers Association, which had a hand in developing the NGSS, wrote a very weak response to Fordham’s criticism of NGSS.

The Thomas Fordham Institute perpetuates untruths about science education primarily to endorse its conservative agenda. It’s time to call foul.  In this writer’s analysis, the Fordham Institute report on the NGSS earns an F.

If you have a chance or the time, please use the form in Figure 1 to rate the Fordham Institute report on the NGSS. What was your rating?

Why We Should Reject The Fordham Institute’s Opinion of the Next Generation Science Standards

In this post I am going to give evidence that the Fordham Institute’s evaluation of the Next Generation Science Standards should be rejected.

The Thomas Fordham Institute is a conservative advocacy think tank which issues opinion reports written by “experts” on science education (and other education issues as well).  I have reviewed earlier reports released by Fordham, and have critiqued them on the basis of their obvious bias against science education, especially against professors of science education who advocate an inquiry approach to teaching science.

Fordham Review of State Science Standards

In 2012, the Fordham Institute published a report, State of the State Standards, which was a rating of the science standards written by the states.  They graded  the state science standards using A – F rankings, and according to their criteria, most states earned a D or F.  You need to understand that they, like many of the other conservative think tanks, believe that American science education “needs a radical upgrade.”  Their review of the state science standards was flawed, yet the media reported the results as if they were factual, and they are not.

When I first reviewed Fordham’s evaluation of the state science standards, I was shocked when I read the criteria that they used to analyze science education. In the Fordham report there is a section on Methods, Criteria and Grading Metric in which the authors report that they devised content-specific criteria against which the science standards in each state were evaluated. The authors divided the science content into learning expectations through grade eight (lists of statements divided into Physical Science, Earth and Space Science, and Life Science), and learning expectations for grades nine through 12 (lists of statements for physics, chemistry, Earth and Space science, and life science).

The Fordham list of science content is a sham, and for states to be held to their standards is not only unprofessional, but a disgrace.

I found that the Fordham standards are low-level, mediocre at best, and do not include affective or psycho-motor goals. I analyzed each Fordham statement using the Bloom categories in the Cognitive, Affective and Psycho-motor Domains.  Ninety percent of all the Fordham science criteria fall into the lowest levels of Bloom’s Taxonomy in the cognitive domain. Indeed, 52% of the statements are at the lowest level (Knowledge), which consists primarily of the recall of data or information. Twenty-eight percent of the Fordham science statements were written at the Comprehension level, and only 10% at the Application level. What this means is that the authors wrote their own science standards at a very low level. In fact, of the 100 statements, only 10% were at the higher levels. No statements were identified at the synthesis level, which in science is awful. Only one science standard was found at the highest level, evaluation.
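The tally behind these percentages can be sketched as follows. The Knowledge, Comprehension, Application, Synthesis, and Evaluation counts follow the figures above; the Analysis count is my inferred remainder, included only so the 100 statements total correctly:

```python
# Percentage distribution of the 100 Fordham statements across Bloom's
# cognitive levels. The Analysis count is inferred to fill the total
# (not stated in the post); the other counts follow the analysis above.

classified = {
    "Knowledge": 52,      # recall of data or information
    "Comprehension": 28,
    "Application": 10,
    "Analysis": 9,        # inferred remainder, an assumption
    "Synthesis": 0,
    "Evaluation": 1,
}

total = sum(classified.values())
for level, count in classified.items():
    print(f"{level}: {100 * count / total:.0f}%")
```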

I also compared the method that Fordham used in their “study” to the standards for educational research established by the American Educational Research Association (AERA).  The Fordham report is a type of evaluation research, but does not meet the standard criteria for a research study.  In fact, they met only two of the eight AERA principles.

When you assess the Fordham evaluation of the state standards, their report barely gets a grade of “D,” and perhaps should be graded “F.”

They’re At It Again: Evaluation of the NGSS

The Fordham Foundation’s science Gang of Seven has released its “Final Evaluation of the Next Generation Science Standards.”  The same gang that evaluated the state science standards is at it again.  This time they have applied their flawed research method to evaluate the Next Generation Science Standards.

The Gang of Seven does not seem to have 20/20 vision when it comes to research.  Instead they have an unchanging fealty to a conservative agenda and a canonical view of science education which restricts and confines them to an old school view of science teaching. Science education has rocketed past the views in two earlier reports issued by Fordham about science education standards, as well as the NGSS.

Cognitively, the Fordham standards are not much to write home about. And it is amazing, given the low-level of the Fordham standards that any state would score lower than their own standards.

You can read my earlier reviews of Fordham’s lack of knowledge about science education here and here.  Fordham’s claim to promote an honest discussion of science education is a sham.  According to this final report, the Gang of Seven used the same criteria used to evaluate the state science standards.

The Gang of Seven has consistently kept to this mantra, and in this final report on the NGSS, they find that science education is in peril.  They give the NGSS a grade of C+.  What this means, according to Fordham, is that many state standards are inferior to the NGSS, and of course to the Fordham science standards.  Using a color-coded map of the U.S., Fordham reports that:

  • 13 States are Clearly Superior
  • 22 States are Too Close to Call
  • 16 States are Clearly Inferior

First of all, you need to realize that Fordham has their own set of science content standards (General expectations for learning).  Follow this link to Fordham’s Final Evaluation of the NGSS, scroll down through the document to page 55, and you will find their standards listed on pages 55 – 61.  They then used the same criteria to check the final version of the NGSS.  In my earlier analysis I gave the Fordham science standards a grade of D. For them to use these criteria to judge the NGSS is absurd.

Yet, they keep saying that science education is inferior, and after a while, people begin to believe them.  For me, the Gang of Seven is not qualified to evaluate science education.  Yes, the Gang of Seven have credentials in science and engineering, but they are woefully inadequate in their understanding of science curriculum development and the current research on science teaching.  Many of the creative ideas that emerged in science teaching in the past thirty years represent interdisciplinary thinking, the learning sciences, deep understanding of how students learn science, and yes, constructivism.

The Fordham group appears to have had their eyes closed during this period.  Anything they have to say about the NGSS should be rejected.

Is the Final Evaluation of the Next Generation Science Standards by the Thomas Fordham Institute junk science?  I’ll offer an answer in the next post on this blog.

In the meantime, what is your opinion of the Fordham methods used to evaluate the Next Generation Science Standards?



Fordham Report on Next Generation Science Standards Lacks Credibility

On January 29, the Thomas Fordham Institute published a report, “Commentary & Feedback on the Next Generation Science Standards” (Commentary).  Nine people wrote the report, none of whom are “experts” in the field of science education.  Yes, most of them have Ph.D.s in science, but they lack the experiential and content knowledge of science education, science curriculum development, and classroom K – 12 science teaching experience.  The lead author of Commentary is Dr. Paul Gross, professor emeritus of life sciences at the University of Virginia.

Amazingly, news and media outlets quote the Fordham report without question, as if it were the last word on the Next Generation Science Standards in particular and science education in general.  It is not.  In my opinion, their answers and comments are flawed.


Erik Robelen wrote an article today in Curriculum Matters entitled In Science Draft, Big Problems ‘Abound,’ Think Tank Says.  The think tank is the Fordham Institute.  Robelen reviewed the report (70 pages), identifying the criticisms that the Fordham reviewers had about the NGSS. The Fordham group claimed that the authors of the NGSS omitted a lot of what they call “essential content.”  They also insist that the practices of science and engineering dominate the NGSS, and claim that basic science knowledge–the goal of science education, according to the Fordham group–becomes secondary.  For Fordham, the goal of science education is a curriculum steeped primarily in science content, with little regard to practices, inquiry, and connections to other disciplines.

Fordham Science Standards: Return to the Past

The Fordham review used a set of science standards (called criteria) created by their science experts. They use this list of content goals to judge the worthiness of the NGSS, and they used it two years ago when they reported on the state of state science standards.

They also use grades to summarize their opinion of science standards.  When they reported on the state science standards, many states failed; that is, they received grades of D and F.  They didn’t grade the NGSS standards, but I am sure they will.

I’ve reviewed their standards, analyzed them using Bloom’s taxonomies, and reported the results here.  In my analysis, only 10% of the Fordham standards were above the application level; 52% were classified at the lowest level in Bloom. There was no mention of the affective or psychomotor domains.

One of the areas completely missing in the lists of science content is standards for science inquiry. What is amusing here is that the Fordham authors criticized the states for “poor integration of scientific inquiry.” If any group showed poor integration of inquiry into the standards, it’s the Fordham group. They do not mention one inquiry science outcome or goal, yet they slam the states for not integrating science inquiry into the content of science.  They need to get their own house in order before they go around the country laying it on the states, and now the NGSS.

Their standards are quite simply a list of content goals with little regard to the process of science and engineering (practices in the NGSS, inquiry in the 1995 NSES) or connections across disciplines.  They are a real embarrassment to science educators in the context of the research and development in science education over the past 20 years.   I gave their standards a grade of D.

Let me explain.  The Fordham authors wrote their “science standards” using the same format that was used in the earlier part of the last century.  For example, here are a few of the Fordham science standards:

  • Know some of the evidence that electricity and magnetism are closely related (physical science)
  • Trace major events in the history of life on earth, and understand that the diversity of life (including human life) results from biological evolution (life science)
  • Recognize Earth as one planet among its solar system neighbors (earth science)
  • Be able to use Lewis dot structures to predict the shapes and polarities of simple molecules (chemistry)
  • Know the basic structures of chromosomes and genes down to the molecular level (biology)

These are simplistic statements that are juvenile compared to the 1995 National Science Education Standards, and the 2013 Next Generation Science Standards.  Here are some example standard statements from the NGSS:

  • Construct an argument using evidence about the relationship between the change in motion and the change in energy of an object.
  • Collect, analyze, and use data to describe patterns of what plants and animals need to survive.
  • Analyze and interpret data from fossils to describe the types of organisms that lived long ago and the environments in which they lived and compare them with organisms and environments today.
  • Use Earth system models to support explanations of how Earth’s internal and surface processes operate concurrently at different spatial and temporal scales to form landscapes and sea floor features.

The Fordham report is an extensive description of their own content specific and narrow view of what science for children and youth should be.  It was written by people who have little experience in science education, and there is some evidence in their reporting that they have little knowledge of science education research.  Their report is not juried, and there has never been an attempt by Fordham to solicit the opinions of science education researchers or curriculum developers.  It is an in-house report, and that is as far as it should go.

One More Thing

I have written several blog posts that are critical of the standards movement, including the Next Generation Science Standards.  You can link to them here, here, here and here. I am not defending the NGSS, but the criteria that Fordham uses to “analyse” the NGSS are not a valid research tool; they lack reliability and validity, two qualities that would make their report believable.  As it stands, I cannot agree with their ideas, nor should the NGSS authors consider them in their next stage.  Fordham has been pulling the wool over the eyes of policy makers and the media.  It’s time to call them out.

There is much to disagree with in their report.  What are your opinions about the Fordham report on the NGSS?

Do Higher Science Standards Lead to Higher Achievement?

In a recent article in Scientific American, it was suggested that the U.S. should adopt higher standards in science, and that all 50 states should adopt them.

When you check the literature on science standards, the main reason for aiming for higher standards (raising the bar) is because in the “Olympics” of international academic test taking, the U.S. never takes home the gold.  In fact, according to the test results reported by the Program for International Student Assessment (PISA), U.S. students never score high enough to even merit a bronze medal.  In the last PISA Science Olympics, Shanghai-China (population 23 million) took home the Gold, Finland (population 5.4 million) the Silver, and Hong Kong-China (population 7 million) the Bronze.  The United States’ (population 314 million) average score positioned it 22nd on the leaderboard of 65 countries that participated in the PISA 2009 testing.

Some would argue that comparing scores across countries that vary so much in population, ethnic groups, poverty, health care, and housing is not a valid enterprise.  We’ll take that into consideration as we explore the relationship of standards to student achievement.

It’s assumed that there is a connection or correlation between the quality of the standards in a particular discipline such as science, and the achievement levels of students as measured by tests.  So the argument is promoted that because U.S. students score near the bottom of the top third of countries that took the PISA test in 2009, the U.S. science education standards need to be ramped up.  If we ramp up the standards, that is to say, make them more rigorous and at a higher level, then we should see a movement upwards for U.S. students on future PISA tests.  It seems like a reasonable assumption, and one that has driven the U.S. education system toward a single set of standards in mathematics and reading/language arts (Common Core State Standards-CCSS), and very soon, there will be a single set of science standards.

There is a real problem here

There is no research to support the contention that higher standards mean higher student achievement.  In fact, there is very little evidence to show that standards make a difference in student achievement.  It could be that standards, per se, act as barriers to learning, not bridges to the world of science.

Barriers to Learning

I’ve reported on this blog on research published in the Journal of Research in Science Teaching by Professor Carolyn Wallace of Indiana State University indicating that the science standards in Georgia actually present barriers to teaching and learning. Wallace analyzed the effects of authoritarian standards language on science classroom teaching.  She argues that curriculum standards based on a content and product model of education are “incongruent” with research in science education, cognitive psychology, language use, and science as inquiry.  The Next Generation Science Standards is based on a content and product model of teaching, and in fact has not deviated from the earlier National Science Education Standards.

Over the past three decades, researchers from around the world have shown that students’ prior knowledge and the context in which science is learned are significant factors in helping students learn science.  Instead of starting with the prior experiences and interests of students, the standards are used to determine what students learn.  Even the standards in the NGSS, or the CCSS, are lists of objectives defining a body of knowledge to be learned by all learners.  As Wallace shows, it’s the individuals in charge of curriculum (read standards) who determine the lists of standards to be learned. Science content to be learned exists without a context, and without any knowledge of the students who are required to master this stuff, or of the teachers who plan and carry out the instruction.

An important point that Wallace highlights is that teachers (and students) are recipients of the standards, rather than having been a part of the process in creating the standards. By and large teachers are nonparticipants in the design and writing of standards. But more importantly, teachers were not part of the decision to use standards to drive school science, in the first place. That was done by élite groups of scientists, consultants, and educators.

The Brown Center Report

According to the 2012 Brown Center Report on American Education, the Common Core State Standards will have little to no effect on student achievement. Author Tom Loveless explains that neither the quality nor the rigor of state standards is related to state NAEP scores. Loveless suggests that if there were an effect, we would have seen it by now, since all states have had standards since 2003.

For example, in the Brown Center study it was reported (in a separate 2009 study by Whitehurst) that there was no correlation of NAEP scores with the quality ratings of state standards. Whitehurst studied scores from 2000 to 2007, and found that NAEP scores did not depend upon the “quality of the standards”; he reported that this was true for both white and black students (The Brown Center Report on American Education, p. 9). The correlation coefficients ranged from -0.06 to 0.08.
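As a sketch of the computation involved (the data below are synthetic stand-ins, not Whitehurst’s actual figures), correlating standards-quality ratings with NAEP averages looks like this:

```python
# Pearson correlation between state standards-quality ratings and NAEP
# scores -- the statistic behind the Whitehurst finding. The data below
# are invented; only the computation itself is the point.

import statistics

def pearson_r(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ratings = [1, 2, 3, 4, 5]            # hypothetical standards-quality ratings
naep    = [240, 238, 241, 239, 240]  # hypothetical state NAEP averages
print(round(pearson_r(ratings, naep), 2))  # a near-zero correlation
```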

The “cut score” that a state establishes for proficiency can be used to define the rigor or expectations of its standards. One would expect that, over time, achievement scores in states with more rigorous and higher expectations would trend upwards. The Brown study reported it this way:

States with higher, more rigorous cut points did not have stronger NAEP scores than states with less rigorous cut points.

The researchers found that whether states raised or lowered the bar made little difference in NAEP scores. The only positive and significant correlations between moving the bar and scores were in 4th grade math and reading. One cannot determine causality using simple correlations, but we can say there is some relationship here.

When the researchers examined the data to find out whether standardization would reduce the variation of scores between states, they found that the between-state variation was relatively small compared to the variation within states. The researchers put it this way (The Brown Center Report on American Education, p. 12): The findings are clear.

Most variation on NAEP occurs within states not between them. The variation within states is four to five times larger than the variation between states.
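The comparison the report describes can be sketched with a toy variance decomposition. The school-level scores here are invented; only the structure of the check matters:

```python
# Compare between-state variance (variance of state means) with the average
# within-state variance. The school-level scores are invented stand-ins;
# the sketch only illustrates the comparison the Brown report describes.

import statistics

scores_by_state = {
    "State A": [210, 250, 230, 270],
    "State B": [215, 255, 235, 265],
    "State C": [220, 260, 240, 272],
}

state_means = [statistics.mean(v) for v in scores_by_state.values()]
between = statistics.pvariance(state_means)
within = statistics.mean(
    statistics.pvariance(v) for v in scores_by_state.values()
)
print(within > 4 * between)  # prints True: within-state variation dominates
```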

According to the Brown Report, the Common Core will have very little impact on national achievement (Brown Report, p. 12).  There is no reason to believe that won’t be true for science.

The researchers concluded that we should not expect much from the Common Core. In an interesting discussion of the implications of their findings, Tom Loveless, the author of the report, cautions us to be careful about being drawn into thinking that standards represent a kind of system of “weights and measures.” Loveless tells us that standards’ reformers use the word benchmarks as a synonym for standards, and he says that they use it too often. In science education, we’ve had a long history of using the word benchmarks, and Loveless reminds us that there are no real, measured benchmarks in any content area. Yet when you read the standards, common core or science, there is the implication that we really know, almost in a measured way, what standards should be met at a particular grade level.

Loveless also makes a strong point when he says the entire system of education is “teeming with variation.” To think that creating a set of common core standards will cut this variation between states or within a state simply will not succeed. As he puts it, the common core (a kind of intended curriculum) sits on top of the implemented and achieved curriculum. The implemented curriculum is what teachers do with their students day-to-day. It is full of variation within a school. Two biology teachers in the same school will get very different results for many different reasons. But as far as the state is concerned, the achieved curriculum is all that matters. The state uses high-stakes tests to decide whether schools met Adequate Yearly Progress (AYP).

Now What?

If standards do not result in improved learning as measured by achievement tests, what should we be doing to improve schools?

Over on Anthony Cody’s blog on Education Week, we might find some answers to this question.  Cody has begun a series of dialogs with the Gates Foundation on educational reform by bringing together discussions between opposing views to uncover some common ground. Cody has already broken new ground because the Gates Foundation is not only participating with him on his website, but Gates is publishing everything on their own site: Impatient Optimists blog. Three of the five dialog posts have been written, and it is the third one written by Anthony Cody that I want to bring in here.

In his post, Can Schools Defeat Poverty by Ignoring it?, Cody reminds us that the U.S. Department of Education (through the Race to the Top and NCLB Flexibility Requests) is unwavering in its promotion of data-driven education, using student test scores to rate and evaluate teachers and administrators.  Cody believes that the Gates Foundation has used its political influence to support this.  There is also an alliance between the ED, and PARCC which is developing assessments to be aligned to the Common Core Standards.  The Gates Foundation is a financial contributor to Achieve, which oversees the Common Core State Standards, the Next Generation Science Standards, and PARCC.

There is a “no excuses” attitude suggesting that students from impoverished backgrounds should do just as well as students from enriched communities.  The idea here is that teachers make the difference in student learning, and if this is true, then it is the “quality” of the teacher that will decide whether students do well on academic tests.

Anthony Cody says this is a huge error.  In his post he writes, and later backs up with research:

In the US, the linchpin for education is not teacher effectiveness or data-driven management systems. It is the effects of poverty and racial isolation on our children.

As he points out, teachers account for only 20% of the variance in student test scores.  More than 60% of score variance on achievement tests correlates to out-of-school factors.  Out-of-school factors vary a great deal.  However, as Cody points out, the impact of violence, health, housing, and child development in poverty far outweighs the effect of the teacher on a test given in the spring to students whose attendance, interest, and acceptance are poor.

In the Scientific American article I referenced at the beginning of this post, the author cites research from the Fordham Foundation that scores most state science standards as poor to mediocre.  We debunked the Fordham “research” here, and showed that its research method was unreliable and invalid.  Unfortunately, various groups, even Scientific American, accept Fordham’s findings and use them in articles and papers as if they were a valid assessment of science education standards.  They are not.

It’s not that we don’t have adequate science standards.  It’s that if we ignore the most important and significant factors that affect the life of students in and out of school, then standards of any quality won’t make a difference.

What is your view on the effect of changing the science standards on student achievement?  Are we heading in the wrong direction?  If so, which way should we go?


Fordham Institute Review of New Science Standards: Fealty to Conservatism & Canonical Science

Fordham Institute has published its review of the draft of the Next Generation Science Standards.  Achieve wrote the new science standards.  Achieve also wrote the math and reading/language arts common core standards.

Unchanging fealty to a conservative agenda and a canonical view of science education restricts and confines Fordham’s review to an old school view of science teaching.  Science education has rocketed past the views in two reports issued by Fordham about science education standards.

The Fordham reviewers use a strict content (canonical) view of science education and dismiss any reference to the scientific practices (science processes) and pedagogical advances such as constructivism and inquiry teaching.  Many of the creative ideas that emerged in science teaching in the past thirty years represent interdisciplinary thinking, the learning sciences, deep understanding of how students learn science, and yes, constructivism.

These creative ideas are not reflected in Fordham’s analysis of science teaching and science curriculum.

I have also studied and reviewed the draft of the Next Generation Science Standards and have written about them here, and here.

The Framework

In 2011, the Carnegie Corporation funded the National Research Council’s project A Framework for K-12 Science Education (Framework).  The Framework was published last year, and it is being used by Achieve as the basis for writing the Next Generation Science Standards (Science Standards).

These two documents, The Framework and the Science Standards, will decide the nature of science teaching for many years to come.

In this post, I’ll focus on how Fordham has responded to these two reports.

In late 2011, the Carnegie Corporation provided financial support to the Fordham Institute to review the NRC Framework.  The Fordham report was a commissioned paper (Review of the National Research Council’s Framework for K-12 Science Education), written by Dr. Paul Gross, Emeritus Professor of Biology. The Gross Report was not a juried review, but was written by one person, who appears to have an ax to grind, especially with the science education research community, as well as with those who advocate science inquiry, STS, or student-centered ideology. In Gross’s view, the only good standard is one that is rigorous and clearly content- and discipline-oriented.

I’ve read and reviewed the Fordham review of the Framework, and published my review here. Here are some excerpts from my review.

Grade: B. In general, Dr. Gross, as well as Chester E. Finn, Jr. (President of the Fordham Foundation), is reluctant to give the Framework a grade of “A,” instead marking the NRC’s thick report a grade of “B.”

Rigor.  Rigor is the measure of depth and level of abstraction to which chosen content is pursued, according to Gross. The Framework gets a good grade for rigor and for limiting the number of science ideas it identifies. The Framework identifies 44 ideas, which according to Gross is a credible core of science for the Framework.  The evaluator makes the claim that this new framework is better on science content than the NSES. How does he know that?

Practices, Crosscutting Concepts & Engineering. The Fordham evaluation has doubts about the Framework’s emphasis on Practices, Crosscutting Concepts, and Engineering/Technology Dimensions. For example, Gross identifies several researchers and their publications by name, and then says:

These were important in a trendy movement of the 1980s and 90s that went by such names as science studies, STS (sci-tech studies), (new) sociology or anthropology of science, cultural studies, cultural constructivism, and postmodern science.

For some reason, Gross thinks that science-related social issues and the radical idea of helping students construct their own ideas are not part of  mainstream science education, when indeed they are. Many of the creative Internet-based projects developed over the past 15 years have involved students in researching issues that have social implications.  The National Science Foundation made huge investments in creative learning projects.

Gross also claims that the NRC Framework authors “wisely demote what has long been held the essential condition of K-12 science: ‘Inquiry-based learning.’” The report does NOT demote inquiry, and in fact devotes much space to discussions of the Practices of science and engineering, which is another way of talking about inquiry. In fact, “inquiry” can be found in 71 instances in the Framework. Gross and the Fordham Foundation make the case that Practices and Crosscutting Concepts are accessories, and that only the Disciplinary Core Ideas of the Framework should be taken seriously. This will result in a set of science standards based on only one-third of the Framework’s recommendations.

Gross cherry-picks his resources, and does not include a single research article from a prominent research journal in science education.  Dr. Gross could have consulted science education journals found here, here, here or here.  If he had, he might have found this article: Inquiry-based science instruction—what is it and does it matter? Results from a research synthesis years 1984 to 2002.  The Journal of Research in Science Teaching (JRST) published this article in 2010. Here is the abstract of the research study:

Various findings across 138 analyzed studies show a clear, positive trend favoring inquiry-based instructional practices, particularly instruction that emphasizes student active thinking and drawing conclusions from data. Teaching strategies that actively engage students in the learning process through scientific investigations are more likely to increase conceptual understanding than are strategies that rely on more passive techniques, which are often necessary in the current standardized-assessment laden educational environment.

The Fordham review of the Framework is not surprising, nor is its review of the first draft of the standards.  Fordham has its own set of science standards that it uses to check other organizations’ standards, such as the state standards.  Fordham used its standards as the “benchmark” to check all of the state science standards, and concluded that only 7 states earned an A.  Most of the states earned an F.

If you download Fordham’s report here, scroll down to page 208 to read their science standards, which they call content-specific criteria.

I analyzed all the Fordham standards against Bloom’s Taxonomy in the Cognitive, Affective and Psychomotor domains.  Using Bloom’s Taxonomy, 52% of the Fordham science standards were rated at the lowest level.  Twenty-eight percent of their standards were at the comprehension level, 10% at application, and only 10% at analysis or above.  No standards were found for the affective or psychomotor domains.

All I am saying here is that Fordham has its own set of science standards, and I found them inferior to most of the state science standards, the National Science Education Standards (published in 1996), as well as the NAEP science framework.  You can read my full report here.  I gave Fordham’s science standards a grade of D.

Fordham Commentary on the New Science Standards

Given this background, we now turn our attention to Fordham’s Commentary & Feedback on Draft I of the NGSS.

The Fordham reviewers, as they did when they reviewed the NRC Framework for Science, felt the standards’ writers “went overboard on scientific and engineering practices.”  From their point of view, crosscutting concepts and scientific and engineering practices create challenges for those who write standards.

Fordham science standards are reminiscent of the way learning goals were written in the 1960s and 1970s.  Writers used one of many behavioral or action verbs, such as define, describe, find, diagram, and classify, to construct behavioral objectives.  The Fordham standards were written using this strategy. Here are three examples from their list of standards:

  • Describe the organization of matter in the universe into stars and galaxies.
  • Identify the sun as the major source of energy for processes on Earth’s surface.
  • Describe the greenhouse effect and how a planet’s atmosphere can affect its climate.

The Fordham experts raised concerns about the way standard statements are written.  As shown in the examples from the draft of the NGSS, the standards integrate content with process and pedagogical components.

I agree with the Fordham reviewers that the Next Generation Science Standards are rather complex.  Shown in Figure 1 is the “system architecture” that Achieve used for all of the standards.  Figure 1 shows just four performance expectations (read standards), and their connection to practices, core ideas, and crosscutting concepts.  Every science standard in the Achieve report is presented in this way.

Figure 1. System Architecture of the NGSS. Source: extracted May 12, 2012

The Fordham reviewers gave careful attention to each standard statement, and indeed in their report they include many examples of how the standards’ writers got the content wrong or stated it in such a way that was unclear.

But the Fordham reviewers take exception to the science education community’s research on constructivism.  In their terms, science educators show fealty to constructivist pedagogical theory.  To dismiss constructivism, or to think that science educators have an unswerving allegiance to this well-established and well-researched theory, is quite telling.  To me it indicates that Fordham holds a traditional view of how students learn.  It tells me that these reviewers have boxed themselves into a vision of science literacy by looking inward at the canon of orthodox natural science.  Content is king.

To many science teachers and science education researchers, an alternative vision gets its meaning from the “character of situations with a scientific component, situations that students are likely to encounter as students.”  Science literacy focuses on science-related situations (see Douglas Roberts’ chapter on science literacy in the Handbook of Research on Science Education).

The Fordham reviewers recommend that every standard be rewritten to cut “practices” where they are not needed.  They also want independent, highly qualified scientists who have not been involved in the standards writing to check every standard.  The National Science Teachers Association, composed of science teachers and scientists, is quite qualified to do this, and indeed the NSTA sent its recommendations to Achieve last week.

I would agree with the Fordham group that the next version of the standards should be presented in a clearer way, and easily searchable.  I spent a good deal of time online with the first draft, and after a while I was able to search the document, but it was a bit overwhelming.

Finally I would add that when you check the Fordham analysis of the new standards, the word “basic” jumps out.  Near the end of their opinion report, they remind us that the science basics in the underlying NRC Framework were sound.  What they are saying is that the NGSS writers need to chisel away anything that is not solid content from the standards.

One More Thing

Organizations such as Achieve and the Fordham Institute believe the U.S. system of science and mathematics education is performing below par, and that if something isn’t done, millions of students will not be prepared to compete in the global economy. Achieve cites achievement data from PISA and NAEP to make its case that American science and mathematics teaching is in horrible shape and needs to be fixed.

The proposed solution to this problem, and the way to make the American dream possible for all citizens, is to write new science (and mathematics) standards.  One could argue that quality science teaching is not based on authoritarian content standards, but on much richer standards of teaching that form the foundation of professional teaching.

Whatever standards are agreed upon, they ought to be based on a set of values that are rooted in democratic thinking, including empathy and responsibility. Professional teachers above all else are empathic in the sense that they have the capacity to connect with their students, to feel what others feel, and to imagine themselves as another, and hence to feel a kinship with others. Professional teachers are responsible in the sense that they act on empathy, and that they are responsible not only for others (their students, parents, colleagues), but for themselves as well.

The dual forces of authoritarian standards and high-stakes testing have taken hold of K-12 education through a top-down, corporate-led enterprise. This is very big business, and it is having the effect of thwarting teaching and learning in American schools. A recent study by the Pioneer Institute estimated that states will spend at least $15 billion over the next few years to replace their current standards with the common core.  What will it cost to implement new science standards?

In research that I have reported here, standards are barriers to teaching and learning.  In this research, the tightly specified nature of successful learning performances precludes classroom teachers from modifying the standards to fit the needs of their students.  And the standards are removed from the thinking and reasoning processes needed to achieve them.  Combine this with high-stakes tests, and you have a recipe for disaster.

According to the 2012 Brown Center Report on American Education, the Common Core State Standards will have little to no effect on student achievement. Author Tom Loveless explains that neither the quality nor the rigor of state standards is related to state NAEP scores. Loveless suggests that if there were an effect, we would have seen it by now, since all states had standards in 2003.

The researchers concluded that we should not expect much from the Common Core. In an interesting discussion of the implications of the findings, Tom Loveless, the author of the report, cautions us not to be drawn into thinking that standards represent a kind of system of “weights and measures.” Loveless tells us that standards’ reformers use the word “benchmarks” as a synonym for standards, and that they use it too often. In science education, we’ve had a long history of using the word benchmarks, and Loveless reminds us that there are no real, measured benchmarks in any content area. Yet when you read the standards, common core or science, there is the implication that we really know, almost in a measured way, what standards should be met at a particular grade level.

Loveless also makes a strong point when he says the entire system of education is “teeming with variation.” Creating a set of common core standards will not reduce this variation between states or within a state.

As the Brown report suggests, we should not expect the common core or the Next Generation Science Standards to have any effect on students’ achievement.

What do you think?  Is Fordham’s view of science education consistent with your ideas about science teaching?