Fordham Institute’s Evaluation of Next Generation Science Standards Rated as Junk Science

In this post I am going to provide evidence that the Fordham Institute's final evaluation of the Next Generation Science Standards (the Fordham Evaluation) is junk science and does not meet the basic standards of scientific research.  Figure 1 is the Junk Science Evaluation and Index Form that I designed to assess the Fordham Evaluation.  The ten categories are definitions of junk science that emerged from a study by Michael Carolan (2012), who assessed ten years (1995–2005) of newspaper articles that included the words "junk science" in the title, systematically analyzing and coding the articles according to how the term was used.  I've used those ten definitions as the categories shown in Figure 1.

Disclaimer: I have major concerns about using national science standards for every school student, K-12.  I also do not subscribe to the rationale or policy upon which the standards movement is based.  The rationale for science described in the NGSS is not related to the conception or philosophy of a sustainable planet; it is instead science in the service of the nation's economic growth, job training, and economic competitiveness in a global society.  The science standards were designed by scientists and engineers, so there is a heavy emphasis on scientific process and content rather than on a science curriculum that would be in the service of children and adolescents.  I have written extensively about this on this blog.  Nevertheless, I have major concerns about the Thomas B. Fordham Institute's biased assessment of science education, and I write this blog post in that context.  In no way do I endorse the NGSS.

Each category is an indicator that the study under review might be considered junk science.  When partisan or advocacy organizations issue reports, the reports are often produced outside the normal context of scientific research.  In many cases they are written by in-house organizational employees who may well have advanced degrees but who isolate themselves from the research community at large.  Often the reports are not peer-reviewed.  One of the most obvious defects of these reports is that they tend to use methods that are not reproducible, or so murky that the results are suspect.

I've left the form in Figure 1 blank in case you would like to reproduce it.

Rating scale: Strongly Disagree (1), Disagree (2), Neutral (3), Agree (4), Strongly Agree (5)
1. Based upon bad policy
2. Experts with agendas
3. False Data
4. No data or unsubstantiated claims
5. Failure to cite references
6. Uses non-certified experts
7. Poor methodology
8. Too much uncertainty to arrive at conclusions
9. Reveals only the data that support conclusions
10. Non-peer reviewed

Figure 1. Junk Science Evaluation & Index Form

How Does the Fordham Final Evaluation of the Next Generation Science Standards Stack Up?

The Fordham Institute's evaluation of the NGSS is a flawed report, based on my assessment of their published document using the Junk Science Evaluation & Index Form.  After reading and reviewing the Fordham report, I rated each criterion on a 5-point scale.  For each item, I've included brief comments explaining my decisions.  As you can see, the overall assessment of the Fordham report was 4.7, meaning that this reviewer strongly agreed that the report fits the ten definitions of junk science.
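For readers who want to tally their own ratings, here is a minimal scoring sketch.  It assumes the overall index is the simple mean of the ten item ratings; the ratings in the example are hypothetical placeholders, not the ones behind my 4.7.

# A minimal scoring sketch for the Figure 1 form. It assumes the overall
# index is the simple mean of the ten item ratings; the example ratings
# below are hypothetical placeholders.

CRITERIA = [
    "Based upon bad policy",
    "Experts with agendas",
    "False data",
    "No data or unsubstantiated claims",
    "Failure to cite references",
    "Uses non-certified experts",
    "Poor methodology",
    "Too much uncertainty to arrive at conclusions",
    "Reveals only the data that support conclusions",
    "Non-peer reviewed",
]

def junk_science_index(ratings):
    """Mean of ten ratings on the 1 (Strongly Disagree) to 5 (Strongly Agree) scale."""
    assert len(ratings) == len(CRITERIA)
    assert all(1 <= r <= 5 for r in ratings)
    return sum(ratings) / len(ratings)

# Hypothetical example: strong agreement on most items, low on one.
print(junk_science_index([5, 5, 2, 5, 5, 5, 5, 5, 5, 4]))  # 4.6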

Junk Science Definitions.  Rating scale: Strongly Disagree (1), Disagree (2), Neutral (3), Agree (4), Strongly Agree (5)
1. Based upon bad policy  X
The policy underlying the Fordham Evaluation of the NGSS is a strict adherence to the Institute's traditional view of science content.  Their own set of standards, against which they evaluated the NGSS and the state science standards, is a list of low-level science goals.  In short, the policy of the Fordham Institute and the authors of the report is an unchanging fealty to a conservative agenda and a canonical view of science education.
2. Experts with agendas  X
The Fordham Institute's experts appear to have an agenda that dismisses any inclusion of inquiry (the practices in the NGSS) and pedagogical advances such as constructivism and inquiry teaching.
3. False Data  X
 There is no attempt to include false data.
4. No data or unsubstantiated claims  X
Although the authors include written analyses of each content area (physical, earth, and life science), they go out of their way to nitpick standards written by others (NGSS and the states) and fail to recognize that the standards they use to judge others' work are inferior.
5. Failure to cite references  X
There were 17 footnotes identifying the references the authors cited in their analysis of a national set of science standards.  None cited a refereed journal or book.  Most footnotes were notes about the report or citations of earlier Fordham Institute reports.  Only four citations came from outside the Fordham Institute, such as those from the Ohio Department of Education and ACT.
6. Uses non-certified experts  X
No teachers or science education experts were involved.  Although all the authors hold advanced degrees in science, mathematics, and engineering, they do not seem qualified to rate or judge science education standards, curriculum, or pedagogy.
7. Poor methodology  X
The authors claimed to check the quality, content, and rigor of the final draft of the NGSS, the same method they used to rate the state science standards two years ago.  The grading metric uses two components: 7 points are possible for content and rigor, and 3 points for clarity and specificity (see the arithmetic sketch after Figure 2).  Content and rigor are evaluated against Fordham's own content standards, which I have assessed using Bloom's Taxonomy: 72% of Fordham's science standards were at the lowest levels of Bloom, while only 10% were at the highest levels.  In order to score high on the content and rigor part of the Fordham assessment, the NGSS would have to meet Fordham's standards, which I have judged to be mediocre.  The NGSS earned 3.7 (out of 7) on content and rigor and 1.5 (out of 3) for clarity and specificity, for a total of 5.2 (out of 10).  Using these scores and its earlier report on the State of the State Science Standards, the Fordham Institute classified the states as clearly superior, too close to call, or clearly inferior compared to the NGSS.  According to Fordham, only 16 states had science standards superior to the NGSS.  The problem, in my view, is that the criteria Fordham uses to judge the NGSS and the state science standards are flawed.
8. Too much uncertainty to arrive at conclusions  X
The Fordham report was written by people who seem to have an axe to grind against the work of the science education community.  The fact that they failed to involve teachers and science educators in their review shows a disregard for the research community, which is surprising given their credentials as scientists.
9. Reveals only the data that support conclusions  X
The conclusions of the Fordham report boil down to a number that is then translated into a grade.  In this case, the NGSS scored 5.2 out of 10, which converts to a grade of C.  This is what the media pick up on, and the Fordham Institute uses its numbers to create maps classifying states as inferior, superior, or too close to call.
10. Non-peer reviewed  X
This report is a conservative document that was never shared with the research community.  Its conclusions should be suspect.

Figure 2. Junk Science Evaluation & Index Form of the Fordham Institute's Final Evaluation of the NGSS
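As a quick check on the arithmetic behind item 7 above, here is a small sketch of Fordham's composite score for the NGSS.  The letter-grade bands in the sketch are hypothetical, for illustration only, since the report's exact conversion table is not reproduced here.

# Composite score Fordham reported for the NGSS (see item 7 above).
content_and_rigor = 3.7    # out of 7 possible points
clarity_specificity = 1.5  # out of 3 possible points

total = content_and_rigor + clarity_specificity
print(total, f"({total / 10:.0%})")  # 5.2 (52%)

# Hypothetical grade bands, for illustration only; Fordham's actual
# cut points are not given in this post.
def letter_grade(score_out_of_10):
    if score_out_of_10 >= 9: return "A"
    if score_out_of_10 >= 7: return "B"
    if score_out_of_10 >= 5: return "C"
    if score_out_of_10 >= 3: return "D"
    return "F"

print(letter_grade(total))  # C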

Even though the Fordham review is junk science, the media, including bloggers at Education Week, have printed stories that largely support the Fordham report.  The National Science Teachers Association, which had a hand in developing the NGSS, wrote a very weak response to Fordham's criticism of the NGSS.

The Thomas B. Fordham Institute perpetuates untruths about science education primarily to endorse its conservative agenda.  It's time to call foul.  In this writer's analysis, the Fordham Institute report on the NGSS earns an F.

If you have the time, please use the form in Figure 1 to rate the Fordham Institute report on the NGSS.  What was your rating?

Fordham Institute Review of New Science Standards: Fealty to Conservatism & Canonical Science

The Fordham Institute has published its review of the draft of the Next Generation Science Standards.  Achieve wrote the new science standards; Achieve also wrote the Common Core standards in mathematics and reading/language arts.

An unchanging fealty to a conservative agenda and a canonical view of science education confines Fordham's review to an old-school view of science teaching.  Science education has rocketed past the views in the two reports Fordham has issued about science education standards.

The Fordham reviewers use a strict content (canonical) view of science education and dismiss any reference to the scientific practices (science processes) and pedagogical advances such as constructivism and inquiry teaching.  Many of the creative ideas that emerged in science teaching in the past thirty years represent interdisciplinary thinking, the learning sciences, deep understanding of how students learn science, and yes, constructivism.

These creative ideas are not reflected in Fordham’s analysis of science teaching and science curriculum.

I have also studied and reviewed the draft of the Next Generation Science Standards and have written about them here, and here.

The Framework

In 2011, the Carnegie Corporation funded the National Research Council's project A Framework for K-12 Science Education (Framework).  The Framework was published last year, and it is being used by Achieve as the basis for writing the Next Generation Science Standards (Science Standards).

These two documents, The Framework and the Science Standards, will decide the nature of science teaching for many years to come.

In this post, I’ll focus on how Fordham has responded to these two reports.

In late 2011, the Carnegie Corporation provided financial support to the Fordham Institute to review the NRC Framework.  The Fordham report was a commissioned paper (Review of the National Research Council's Framework for K-12 Science Education) written by Dr. Paul Gross, Emeritus Professor of Biology.  The Gross report was not a juried review but the work of one person, who appears to have an axe to grind, especially with the science education research community, as well as with those who advocate science inquiry, STS, or student-centered ideology.  In his view, the only good standard is one that is rigorous and clearly content- and discipline-oriented.

I've read and reviewed the Fordham review of the Framework and published my review here.  Here are some excerpts from my review.

Grade: B. In general, Dr. Gross and Chester E. Finn, Jr. (President of the Fordham Foundation) are reluctant to give the Framework a grade of "A"; instead they mark the NRC's thick report with a grade of "B."

Rigor.  Rigor is the measure of depth and level of abstraction to which chosen content is pursued, according to Gross.  The Framework gets a good grade for rigor and for limiting the number of science ideas it identifies: 44 ideas, which according to Gross constitute a credible core of science.  The evaluator claims that this new framework is better on science content than the NSES.  How does he know that?

Practices, Crosscutting Concepts & Engineering. The Fordham evaluation has doubts about the Framework’s emphasis on Practices, Crosscutting Concepts, and Engineering/Technology Dimensions. For example, Gross identifies several researchers and their publications by name, and then says:

These were important in a trendy movement of the 1980s and 90s that went by such names as science studies, STS (sci-tech studies), (new) sociology or anthropology of science, cultural studies, cultural constructivism, and postmodern science.

For some reason, Gross thinks that science-related social issues and the radical idea of helping students construct their own ideas are not part of mainstream science education, when indeed they are.  Many of the creative Internet-based projects developed over the past 15 years have involved students in researching issues with social implications, and the National Science Foundation has made huge investments in such creative learning projects.

Gross also claims that the NRC Framework authors "wisely demote what has long been held the essential condition of K-12 science: 'Inquiry-based learning.'"  The report does NOT demote inquiry; in fact, it devotes much space to the practices of science and engineering, which is another way of talking about inquiry.  Indeed, the word inquiry appears in 71 instances in the Framework.  Gross and the Fordham Foundation make the case that the practices and crosscutting concepts are accessories and that only the disciplinary core ideas of the Framework should be taken seriously.  This will result in a set of science standards based on only one-third of the Framework's recommendations.

Gross cherry-picks his sources and does not include a single research article from a prominent research journal in science education.  Dr. Gross could have consulted the science education journals found here, here, here, or here.  If he had, he might have found this article: Inquiry-based science instruction—what is it and does it matter? Results from a research synthesis years 1984 to 2002, published by the Journal of Research in Science Teaching (JRST) in 2010.  Here is the abstract of the research study:

Various findings across 138 analyzed studies show a clear, positive trend favoring inquiry-based instructional practices, particularly instruction that emphasizes student active thinking and drawing conclusions from data. Teaching strategies that actively engage students in the learning process through scientific investigations are more likely to increase conceptual understanding than are strategies that rely on more passive techniques, which are often necessary in the current standardized-assessment laden educational environment.

The Fordham review of the Framework is not surprising, nor is its review of the first draft of the standards.  Fordham has its own set of science standards that it uses to check other organizations' standards, such as the states'.  It used its standards as the "benchmark" to check all of the state science standards and concluded that only 7 states earned an A.  Most of the states earned an F.

If you download Fordham’s report here, scroll down to page 208 to read their science standards, which they call content-specific criteria.

I analyzed all of the Fordham standards against Bloom's Taxonomy in the cognitive, affective, and psychomotor domains.  Using Bloom's Taxonomy, 52% of the Fordham science standards were rated at the lowest level, 28% at the comprehension level, 10% at application, and only 10% above analysis.  No standards were found for the affective or psychomotor domains.

All I am saying here is that Fordham has its own set of science standards, and I found them inferior to most of the state science standards, to the National Science Education Standards (published in 1996), and to the NAEP science framework.  You can read my full report here.  I gave Fordham's science standards a grade of D.

Fordham Commentary on the New Science Standards

Given this background, we now turn our attention to Fordham’s Commentary & Feedback on Draft I of the NGSS.

The Fordham reviewers, as they did when they reviewed the NRC Framework, felt the standards' writers "went overboard on scientific and engineering practices."  From their point of view, crosscutting concepts and scientific and engineering practices create challenges for those who write standards.

Fordham's science standards are reminiscent of the way learning goals were written in the 1960s and 1970s.  Writers used behavioral or action verbs such as define, describe, find, diagram, and classify to construct behavioral objectives.  The Fordham standards were written using this strategy.  Here are three examples from their list of standards; a short sketch after the examples shows how such verb-led statements map onto Bloom's Taxonomy:

  • Describe the organization of matter in the universe into stars and galaxies.
  • Identify the sun as the major source of energy for processes on Earth’s surface.
  • Describe the greenhouse effect and how a planet’s atmosphere can affect its climate.
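The three examples above show how verb-led statements lend themselves to the kind of Bloom's Taxonomy coding I described earlier.  Here is a minimal sketch of that idea; the verb-to-level mapping is a simplified placeholder, not my actual coding scheme, and the fourth standard is hypothetical.

from collections import Counter

# Simplified, illustrative verb-to-level mapping; the actual analysis coded
# each standard by reading it, not by keyword matching alone.
BLOOM_BY_VERB = {
    "identify": "knowledge",
    "describe": "comprehension",
    "apply": "application",
    "analyze": "analysis",
    "evaluate": "evaluation",
}

def bloom_level(standard):
    """Return the Bloom level suggested by a standard's leading verb."""
    return BLOOM_BY_VERB.get(standard.split()[0].lower(), "unclassified")

standards = [
    "Describe the organization of matter in the universe into stars and galaxies.",
    "Identify the sun as the major source of energy for processes on Earth's surface.",
    "Describe the greenhouse effect and how a planet's atmosphere can affect its climate.",
    "Analyze competing explanations for a pattern in climate data.",  # hypothetical example
]

counts = Counter(bloom_level(s) for s in standards)
for level, n in counts.items():
    print(f"{level}: {n}/{len(standards)} ({n / len(standards):.0%})")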

The Fordham experts raised concerns about the way standard statements are written.  As the examples from the draft of the NGSS show (Figure 1, below), the standards integrate content with process and pedagogical components.

I agree with the Fordham reviewers that the Next Generation Science Standards are rather complex.  Shown in Figure 1 is the "system architecture" that Achieve used for all of the standards.  Figure 1 shows just four performance expectations (read: standards) and their connections to practices, core ideas, and crosscutting concepts.  Every science standard in the Achieve report is presented in this way.

Figure 1. System Architecture of the NGSS. Source: http://www.nextgenscience.org/how-to-read-the-standards, extracted May 12, 2012
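To make this architecture concrete, here is a minimal sketch of a data structure that mirrors it.  The class, field names, and sample entry are my own illustration of that reading of the architecture, not Achieve's actual schema.

from dataclasses import dataclass, field

@dataclass
class PerformanceExpectation:
    """One NGSS performance expectation (a 'standard') and its three dimensions."""
    code: str
    statement: str
    practices: list = field(default_factory=list)                # science & engineering practices
    disciplinary_core_ideas: list = field(default_factory=list)  # core ideas
    crosscutting_concepts: list = field(default_factory=list)    # crosscutting concepts

# Hypothetical entry, loosely in the style of the draft standards.
example = PerformanceExpectation(
    code="MS-ESS1-x",
    statement="Develop a model of the Earth-sun-moon system to explain the phases of the moon.",
    practices=["Developing and using models"],
    disciplinary_core_ideas=["ESS1.A: The Universe and Its Stars"],
    crosscutting_concepts=["Patterns"],
)

print(f"{example.code} links {len(example.practices)} practice(s), "
      f"{len(example.disciplinary_core_ideas)} core idea(s), and "
      f"{len(example.crosscutting_concepts)} crosscutting concept(s).")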

The Fordham reviewers gave careful attention to each standard statement, and indeed in their report they include many examples of how the standards’ writers got the content wrong or stated it in such a way that was unclear.

But the Fordham reviewers take exception to the science education community's research on constructivism.  In their terms, science educators show fealty to constructivist pedagogical theory.  To dismiss constructivism, or to treat science educators' support for this well-established and well-researched theory as mere unswerving allegiance, is quite telling.  To me it indicates that Fordham holds a traditional view of how students learn.  It tells me that these reviewers have boxed themselves into a vision of science literacy by looking inward at the canon of orthodox natural science.  Content is king.

To many science teachers and science education researchers, an alternative vision gets its meaning from the "character of situations with a scientific component, situations that students are likely to encounter as citizens."  Science literacy, in this vision, focuses on science-related situations (see Douglas Roberts' chapter on science literacy in the Handbook of Research on Science Education).

The Fordham reviewers recommend that every standard be rewritten to cut "practices" where they are not needed.  They also want independent, highly qualified scientists who were not involved in writing the standards to check every standard.  The National Science Teachers Association, composed of science teachers and scientists, is quite qualified to do this, and indeed the NSTA sent its recommendations to Achieve last week.

I would agree with the Fordham group that the next version of the standards should be presented in a clearer way and be easily searchable.  I spent a good deal of time online with the first draft, and although I was eventually able to search the document, it was a bit overwhelming.

Finally, I would add that when you check the Fordham analysis of the new standards, the word "basic" jumps out.  Near the end of their opinion report, they remind us that the science basics in the underlying NRC Framework were sound.  What they are saying is that the NGSS writers need to chisel away from the standards anything that is not solid content.

One More Thing

Organizations such as Achieve and the Fordham Institute believe the U.S. system of science and mathematics education is performing below par, and that if something isn't done, millions of students will not be prepared to compete in the global economy.  Achieve cites achievement data from PISA and NAEP to make its case that American science and mathematics teaching is in horrible shape and needs to be fixed.

Their solution to this problem, and to making the American dream possible for all citizens, is to write new science (and mathematics) standards.  One could argue that quality science teaching is based not on authoritarian content standards but on much richer standards of teaching that form the foundation of professional teaching.

Whatever standards are agreed upon, they ought to be based on a set of values rooted in democratic thinking, including empathy and responsibility.  Professional teachers above all else are empathic in the sense that they have the capacity to connect with their students, to feel what others feel, to imagine themselves as another, and hence to feel a kinship with others.  Professional teachers are responsible in the sense that they act on empathy and are responsible not only for others (their students, parents, and colleagues) but for themselves as well.

The dual forces of authoritarian standards and high-stakes testing have taken hold of K-12 education through a top-down, corporate-led enterprise.  This is very big business, and it is having the effect of thwarting teaching and learning in American schools.  A recent study by the Pioneer Institute estimated that states will spend at least $15 billion over the next few years to replace their current standards with the Common Core.  What will it cost to implement new science standards?

In research that I have reported here, standards are barriers to teaching and learning.  In this research, the tightly specified nature of successful learning performances prevents classroom teachers from modifying the standards to fit the needs of their students, and the standards are removed from the thinking and reasoning processes needed to achieve them.  Combine this with high-stakes tests, and you have a recipe for disaster.

According to the 2012 Brown Center Report on American Education, the Common Core State Standards will have little to no effect on student achievement.  Author Tom Loveless explains that neither the quality nor the rigor of state standards is related to state NAEP scores.  Loveless suggests that if there were an effect, we would have seen it by now, since all states had standards in 2003.

The report concludes that we should not expect much from the Common Core.  In an interesting discussion of the implications of these findings, Loveless cautions us against being drawn into thinking that standards represent a kind of system of "weights and measures."  He notes that standards reformers use the word "benchmarks" as a synonym for standards, and that they use it too often.  In science education, we've had a long history of using the word benchmarks, and Loveless reminds us that there are no real, measured benchmarks in any content area.  Yet when you read the standards, Common Core or science, there is the implication that we really know, almost in a measured way, what standards should be met at a particular grade level.

Loveless also makes a strong point when he says the entire system of education is "teeming with variation."  Creating a set of common core standards simply will not reduce this variation between states or within a state.

As the Brown report suggests, we should not expect the Common Core or the Next Generation Science Standards to have much effect on student achievement.

What do you think?  Is Fordham’s view of science education consistent with your ideas about science teaching?

 

You'll Never Use This Again, or What Knowledge Is of Most Worth?

There was a very interesting editorial in today's Atlanta Journal-Constitution entitled "You'll never use this math again."  It was written by Ken Sprague Sr., a high school math teacher.  Mr. Sprague, in his own words, says:

I’m not advocating an end to math, only an end to math for math’s sake.  I am advocating for the option of a high school curriculum of more rigorous “practical math.”

In his article, Sprague claims that only 0.09 percent of workers use the concepts taught in Algebra II.  He questions why we force the same curriculum on all students and suggests that the state education department's claim that the new math standards are central to a "world class education" is more a public relations position than one grounded in his classroom experience.  He suggests that:

The new math standards might prove out as a case of “all dressed up with no place to go.”

Sprague has raised a question that has long challenged educators: what knowledge is of most worth?  The question was posed long ago by Herbert Spencer, who answered that it is the knowledge needed to pursue the leading kinds of activity that constitute human life (see Brian Holmes, 1994, for an excellent paper on Spencer).

Surely Sprague's comments apply to science education as well as mathematics education.  A science curriculum that follows his suggestion would be a humanistic science curriculum, as argued in this weblog and by many science educators, especially Glen Aikenhead in his book Science Education for Everyday Life.

What do you think about the comments made by Ken Sprague?  To what extent do you think they apply to us in science education?