Fordham Institute’s Final Evaluation of Next Generation Science Standards (Fordham Evaluation) Rated as Junk Science.
In this post I provide evidence that the Fordham Evaluation of the Next Generation Science Standards is junk science and does not meet the basic standards of scientific research. Figure 1 is the Junk Science Evaluation and Index Form that I designed to assess the Fordham Evaluation. The ten categories are definitions of junk science that emerged from a study by Michael Carolan (2012). He assessed ten years (1995–2005) of newspaper articles that included the words "junk science" in the title, systematically analyzing and coding the articles according to how the term was used. I've used his ten definitions as the categories shown in Figure 1.
Disclaimer: I have major concerns about using national science standards for every school student, K-12. I also do not subscribe to the rationale or policy upon which the standards movement is based. The rationale for science described in the NGSS is not related to the conception or philosophy of a sustainable planet; it is instead science in the service of the nation's economic growth, job training, and economic competitiveness in a global society. The science standards were designed by scientists and engineers, so there is a heavy emphasis on scientific process and content rather than on a science curriculum that would be in the service of children and adolescents. I have written extensively about this on this blog. Nevertheless, I have major concerns about the Fordham Institute's biased assessment of science education, and I write this blog post in that context. In no way do I endorse the NGSS.
Each category is an indicator that the study under review might be considered junk science. When partisan or advocacy organizations issue reports, they are often done outside the normal context of scientific research. In many cases, the reports are written by in-house organizational employees who indeed may have advanced degrees, but who isolate themselves from the research community at large. Often the reports are not peer-reviewed. One of the most obvious defects in these reports is that they tend to use methods that are not reproducible or are so murky that the results are clearly suspicious.
I’ve left the form in Figure 1 blank in case you would like to reproduce it.
| Junk Science Definitions | Strongly Disagree (1) | Disagree (2) | Neutral (3) | Agree (4) | Strongly Agree (5) |
| --- | --- | --- | --- | --- | --- |
| 1. Based upon bad policy | | | | | |
| 2. Experts with agendas | | | | | |
| 3. False data | | | | | |
| 4. No data or unsubstantiated claims | | | | | |
| 5. Failure to cite references | | | | | |
| 6. Uses non-certified experts | | | | | |
| 7. Poor methodology | | | | | |
| 8. Too much uncertainty to arrive at conclusions | | | | | |
| 9. Reveals only data that supports conclusions | | | | | |
| 10. Non-peer reviewed | | | | | |
Figure 1. Junk Science Evaluation & Index Form
How Does the Fordham Final Evaluation of the Next Generation Science Standards Stack Up?
The Fordham Institute's evaluation of the NGSS is a flawed report, based on my assessment of the published document using the Junk Science Evaluation & Index Form. After reading and reviewing the Fordham report, I rated each criterion on a 5-point scale. For each item, I've included brief comments explaining my decisions. As you can see, the overall rating of the Fordham report was 4.7, meaning that, on average, this reviewer strongly agreed that the report fits the ten definitions of junk science.
|Junk Science Definitions||Strongly Disagree (1)||Disagree (2)||Neutral (3)||Agree (4)||Strongly Agree (5)|
|1. Based upon bad policy||X|
|The policy underlying the Fordham evaluation of the NGSS is a strict adherence to the institute's traditional view of science content. Its own set of standards, against which it evaluated the NGSS and the state science standards, is a list of low-level science goals. In short, the policy of the Fordham Institute and the authors of the report is an unchanging fealty to a conservative agenda and a canonical view of science education.|
|2. Experts with agendas||X|
|The experts of the Fordham Institute seem to have an agenda that dismisses any inclusion of inquiry (called practices in the NGSS) and pedagogical advances such as constructivism and inquiry teaching.|
|3. False Data||X|
|There is no attempt to include false data.|
|4. No data or unsubstantiated claims||X|
|Although the authors include written analyses of each content area (physical, earth, and life science), they go out of their way to nitpick standards written by others (the NGSS and the states) and fail to recognize that the standards they use to judge others' work are themselves inferior.|
|5. Failure to cite references||X|
|There were 17 footnotes identifying the references the authors cited in their analysis of a national set of science standards. None cites a refereed journal or book. Most footnotes were notes about the report itself or citations of earlier Fordham Institute reports. Only four citations came from outside the Fordham Institute, such as the Ohio Department of Education and ACT.|
|6. Uses non-certified experts||X|
|No teachers or science education experts were involved. Although all the authors hold advanced degrees in science, mathematics, and engineering, they do not seem qualified to rate or judge science education standards, curriculum, or pedagogy.|
|7. Poor methodology||X|
|The authors claimed to check the quality, content, and rigor of the final draft of the NGSS, the same method they used to rate the state science standards two years ago. The grading metric has two components: 7 points are possible for content and rigor, and 3 points for clarity and specificity. Content and rigor are evaluated against Fordham's own content standards, which I have assessed using Bloom's Taxonomy: 72% of Fordham's science standards were at the lowest levels of Bloom, while only 10% were at the highest levels. To score high on the content and rigor part of the Fordham assessment, the NGSS would have to meet their standards, which I have judged to be mediocre. The NGSS earned 3.7 (out of 7) on content and rigor and 1.5 (out of 3) on clarity and specificity, for a total of 5.2 (out of 10). Using these scores together with its earlier report on the State of State Science Standards, the Fordham Institute classified each state's standards as clearly superior, too close to call, or clearly inferior compared to the NGSS. According to Fordham, only 16 states had science standards superior to the NGSS. The problem, in my view, is that the criteria Fordham uses to judge the NGSS and the state science standards are flawed.|
|8. Too much uncertainty to arrive at conclusions||X|
|The Fordham report was written by people who seem to have an axe to grind against the work of the science education community. The fact that they failed to involve teachers and science educators in their review shows a disregard for the research community, which is surprising given their credentials as scientists.|
|9. Reveals only data that supports conclusions||X|
|The conclusions of the Fordham group boil down to a number, which is then translated into a grade. In this case, the NGSS scored 5.2 out of 10, which converts to a grade of C. This is what the media pick up on, and the Fordham Institute uses its numbers to create maps classifying states as inferior, superior, or too close to call.|
|10. Non-peer reviewed||X|
|This report is a conservative document that was never shared with the research community. Its conclusions should be suspect.|
Figure 2. Junk Science Evaluation & Index Form applied to the Fordham Institute's Final Evaluation of the NGSS
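The arithmetic behind Fordham's grading metric, described under item 7 above, is simple enough to sketch. The component scores and maxima (3.7 of 7 for content and rigor, 1.5 of 3 for clarity and specificity) come from the Fordham report itself; the short Python sketch below merely combines them. The mapping of 5.2/10 to a grade of C is Fordham's own conversion; their full letter-grade scale is not reproduced here.

```python
# Sketch of Fordham's two-component grading metric, as described in the report.
# Component maxima (7 + 3 = 10 points) are taken from the report itself.

CONTENT_RIGOR_MAX = 7.0   # points possible for content and rigor
CLARITY_MAX = 3.0         # points possible for clarity and specificity

def total_score(content_rigor: float, clarity: float) -> float:
    """Combine the two component scores into a 10-point total."""
    assert 0 <= content_rigor <= CONTENT_RIGOR_MAX
    assert 0 <= clarity <= CLARITY_MAX
    return content_rigor + clarity

# The NGSS scores reported by Fordham:
ngss_total = total_score(content_rigor=3.7, clarity=1.5)
print(ngss_total)  # 5.2 out of 10, which Fordham translates into a grade of C
```

Note that whatever one thinks of the cutoffs, the final grade rests entirely on this sum, so a low bar in the underlying content standards flows straight through to the headline letter grade.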
Even though the Fordham review is junk science, the media, including bloggers at Education Week, have printed stories that largely support the Fordham report. The National Science Teachers Association, which had a hand in developing the NGSS, wrote a very weak response to Fordham's criticism of the NGSS.
The Thomas B. Fordham Institute perpetuates untruths about science education primarily to advance its conservative agenda. It's time to call foul. In this writer's analysis, the Fordham Institute report on the NGSS earns an F.