Fordham Institute’s Evaluation of Next Generation Science Standards Rated as Junk Science

In this post I provide evidence that the Fordham Institute's final evaluation of the Next Generation Science Standards (the Fordham Evaluation) is junk science and does not meet the basic standards of scientific research.  Figure 1 is the Junk Science Evaluation and Index Form that I designed to assess the Fordham Evaluation.  The ten categories are definitions of junk science that emerged from a study by Michael Carolan (2012), who assessed ten years (1995–2005) of newspaper articles that included the words "junk science" in the title, systematically analyzing and coding the articles according to how the term was used.  I've used his ten definitions as the categories shown in Figure 1.

Disclaimer: I have major concerns about using national science standards for every school student, K-12.  I also do not subscribe to the rationale or policy upon which the standards movement is based.  The rationale for science described in the NGSS is not related to a conception or philosophy of a sustainable planet; it is instead science in the service of national economic growth, job training, and competitiveness in a global society. The standards were designed by scientists and engineers, so there is a heavy emphasis on scientific process and content rather than on a science curriculum in the service of children and adolescents.  I have written extensively about this on this blog.  Nevertheless, I also have major concerns about the Thomas B. Fordham Institute's biased assessment of science education, and I write this post in that context.  In no way do I endorse the NGSS.

Each category is an indicator that the study under review might be considered junk science.   When partisan or advocacy organizations issue reports, they often do so outside the normal context of scientific research.  In many cases, the reports are written by in-house employees who may hold advanced degrees but who isolate themselves from the research community at large.  Often the reports are not peer-reviewed.  One of the most obvious defects is that they tend to use methods that are not reproducible, or that are so murky the results are clearly suspect.

I've left the form in Figure 1 blank in case you would like to reproduce it.

| Junk Science Definition | Strongly Disagree (1) | Disagree (2) | Neutral (3) | Agree (4) | Strongly Agree (5) |
| --- | --- | --- | --- | --- | --- |
| 1. Based upon bad policy |  |  |  |  |  |
| 2. Experts with agendas |  |  |  |  |  |
| 3. False Data |  |  |  |  |  |
| 4. No data or unsubstantiated claims |  |  |  |  |  |
| 5. Failure to cite references |  |  |  |  |  |
| 6. Uses non-certified experts |  |  |  |  |  |
| 7. Poor methodology |  |  |  |  |  |
| 8. Too much uncertainty to arrive at conclusions |  |  |  |  |  |
| 9. Reveals only the data that support conclusions |  |  |  |  |  |
| 10. Non-peer reviewed |  |  |  |  |  |

Figure 1. Junk Science Evaluation & Index Form

How Does the Fordham Final Evaluation of the Next Generation Science Standards Stack Up?

The Fordham Institute evaluation of the NGSS is a flawed report, based on my assessment of the published document using the Junk Science Evaluation & Index Form.  After reading and reviewing the Fordham report, I rated each criterion on the 5-point scale, and for each item I've included brief comments explaining my decisions.  As you can see, the overall rating of the Fordham report was 4.7, meaning that this reviewer strongly agreed that the report fits the ten definitions of junk science.

| Junk Science Definition | Rating | Comments |
| --- | --- | --- |
| 1. Based upon bad policy | Strongly Agree (5) | The policy underlying the Fordham Evaluation of the NGSS is a strict adherence to a traditional view of science content. Their own set of standards, against which they evaluated the NGSS and the state science standards, is a list of low-level science goals. In short, the policy of the Fordham Institute and the report's authors is an unchanging fealty to a conservative agenda and a canonical view of science education. |
| 2. Experts with agendas | Strongly Agree (5) | The Fordham Institute's experts appear to have an agenda that dismisses any inclusion of inquiry (the practices in the NGSS) and pedagogical advances such as constructivism and inquiry teaching. |
| 3. False Data | Disagree (2) | There is no attempt to include false data. |
| 4. No data or unsubstantiated claims | Strongly Agree (5) | Although the authors include written analyses of each content area (physical, earth, and life science), they go out of their way to nitpick standards written by others (the NGSS and the states) and fail to recognize that the standards they use to judge others' work are inferior. |
| 5. Failure to cite references | Strongly Agree (5) | There are 17 footnotes identifying the references the authors cited in their analysis of a national set of science standards, but none cites a refereed journal or book. Most footnotes are notes about the report or citations of earlier Fordham Institute reports. Only four citations come from outside the Fordham Institute, such as the Ohio Department of Education and ACT. |
| 6. Uses non-certified experts | Strongly Agree (5) | There were no teachers or science education experts. Although all the authors hold advanced degrees in science, mathematics, and engineering, they do not seem qualified to rate or judge science education standards, curriculum, or pedagogy. |
| 7. Poor methodology | Strongly Agree (5) | The authors claimed to check the quality, content, and rigor of the final draft of the NGSS, using the same method they used to rate the state science standards two years ago. The grading metric has two components: 7 possible points for content and rigor, and 3 for clarity and specificity. Content and rigor are evaluated against Fordham's own content standards, which I have assessed using Bloom's Taxonomy: 72% of Fordham's science standards fall at the lowest levels of Bloom, while only 10% reach the highest levels. To score high on the content-and-rigor component, the NGSS would have to meet their standards, which I have judged to be mediocre. The NGSS earned 3.7 (out of 7) on content and rigor and 1.5 (out of 3) on clarity and specificity, for a total of 5.2 (out of 10). Using these scores and its earlier report on the State of the State Science Standards, the Fordham Institute classified each state's standards as clearly superior, too close to call, or clearly inferior compared to the NGSS. According to Fordham, only 16 states had science standards superior to the NGSS. The problem, in my view, is that the criteria Fordham uses to judge the NGSS and the state science standards are flawed. |
| 8. Too much uncertainty to arrive at conclusions | Strongly Agree (5) | The Fordham report was written by people who seem to have an axe to grind against the work of the science education community. The fact that they failed to involve teachers and science educators in their review shows a disregard for the research community, which is surprising given their credentials as scientists. |
| 9. Reveals only the data that support conclusions | Strongly Agree (5) | The conclusions the Fordham group reports boil down to a number that is then translated into a grade. In this case, the NGSS scored 5.2 out of 10, which converts to a grade of C. This is what the media pick up on, and the Fordham Institute uses its numbers to create maps classifying states as inferior, superior, or too close to call. |
| 10. Non-peer reviewed | Strongly Agree (5) | This report is a conservative document that was never shared with the research community. Its conclusions should be suspect. |

Figure 2. Junk Science Evaluation & Index Form applied to the Fordham Institute's Final Evaluation of the NGSS
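For readers who want to check the arithmetic, here is a minimal sketch (in Python) of how the 4.7 overall rating is computed. The rating assignment, item 3 at Disagree (2) and all other items at Strongly Agree (5), is read from Figure 2 and is the assignment consistent with a 4.7 average.

```python
# Recompute the overall junk-science rating from the Figure 2 ratings.
# Item 3 ("False Data") is rated Disagree (2); all other items are
# rated Strongly Agree (5).
ratings = {
    "Based upon bad policy": 5,
    "Experts with agendas": 5,
    "False data": 2,
    "No data or unsubstantiated claims": 5,
    "Failure to cite references": 5,
    "Uses non-certified experts": 5,
    "Poor methodology": 5,
    "Too much uncertainty to arrive at conclusions": 5,
    "Reveals only the data that support conclusions": 5,
    "Non-peer reviewed": 5,
}

overall = sum(ratings.values()) / len(ratings)
print(f"Overall rating: {overall:.1f}")  # -> Overall rating: 4.7
```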

Even though the Fordham review is junk science, the media, including bloggers on Education Week, have printed stories that largely support the Fordham report's findings. The National Science Teachers Association, which had a hand in developing the NGSS, wrote a very weak response to Fordham's criticism of the NGSS.

The Thomas B. Fordham Institute perpetuates untruths about science education primarily to advance its conservative agenda. It's time to call foul.  In this writer's analysis, the Fordham Institute report on the NGSS earns an F.

If you have the time, please use the form in Figure 1 to rate the Fordham Institute report on the NGSS yourself. What was your rating?

Fordham Institute Review of the State Science Standards: Use with Caution!

The Fordham Institute published The State of State Science Standards 2012, a document that evaluates the science standards developed by each U.S. state, the District of Columbia, and even the NAEP.   The 217-page report includes an evaluation of each state's science standards.  The authors evaluated each state by assigning scores to two attributes of the standards: content and rigor (scored 0–7) and clarity and specificity (scored 0–3).   Adding the scores gave each state a total, which was used to "grade" the state standards (A–F).

According to the report,

a majority of the states’ standards remain mediocre to awful. In fact, the average grade across all states is—once again—a thoroughly undistinguished C.
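To make the report's arithmetic concrete, here is a minimal sketch of the scoring scheme as described: two rubric components summed to a total out of 10, then mapped to a letter grade. The cut points below are my own assumption, chosen only so that published examples (such as the NGSS's 5.2 earning a C, discussed earlier in this post) come out right; Fordham's actual grade bands may differ.

```python
def fordham_grade(content_rigor: float, clarity_specificity: float) -> tuple[float, str]:
    """Combine the two rubric components (0-7 and 0-3) into a total
    out of 10, then map it to a letter grade. The cut points are
    assumed for illustration, not taken from the report."""
    total = content_rigor + clarity_specificity
    for cutoff, grade in [(9, "A"), (7, "B"), (5, "C"), (3, "D")]:
        if total >= cutoff:
            return total, grade
    return total, "F"

# The NGSS scores reported earlier: 3.7 + 1.5 = 5.2, a grade of C.
print(fordham_grade(3.7, 1.5))  # -> (5.2, 'C')
```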

Image by http://www.tagxedo.com

The blogosphere was filled with articles pointing to the Fordham report on the science standards.  Scientific American reported in a post that the "new report paints a grim picture of state science standards".   The National Center for Science Education posted an article on its website focusing on one of the report's conclusions, that evolution is undermined throughout the state standards.  Another blogger commented on how his state fared.

The 2012 report is a follow-up to the Fordham Institute's 2005 review of science standards.  Comparative data are available for each state, enabling you to see how each state's science standards evaluation changed over the past seven years.

Last Fall's Report on the Framework for K-12 Science

Last fall I reviewed the Fordham Institute's evaluation of the Framework for K-12 Science Education.  In that post I said:

The report was not a juried review, but written by one individual, who appears to have an ax to grind, especially with the science education research community, as well as those who advocate science inquiry, STS, or student-centered ideology.  Indeed, the only good standard is one that is rigorous, and clearly content and discipline oriented.

The Fordham Institute's review of the Framework was the personal view of one person.  It is a review that fits the philosophy of the Fordham Institute, which has been in the business of supporting the corporate-style takeover of public education.  It is a conservative think tank, and the report's author did not submit his evaluation for peer review.  Any findings that are reported need to be viewed in the context of the reviewer's personal views of science education, which appear to fly in the face of science education research and curriculum development.  You can read my report here.

New Report on the State of the State Science Standards

Now comes a new publication by the Fordham Institute: an evaluation of all of the state science standards in the United States.   You can go to this website to find state profiles and download the report for free.

The report is written in the context of the Fordham Institute's bias about the state of science education in the nation, especially in terms of achievement results on PISA, TIMSS, and NAEP.  It is very similar to the Broad Foundation's view of American youth, which I wrote about here on my blog.  The Broad Foundation has low expectations for American students, supports that claim with distorted statistics, and uses them to paint negative pictures of American youth.

The Fordham Institute's view is embedded in the "crisis mentality" that began with Sputnik and has carried forward to today.  According to the Fordham report, American youth do not show strong science achievement and produce "woeful" results on international tests.  And yet during the very years that American youth posted such dismal scores on science tests, American science and technology innovation and creative development flourished, and still does.  We once thought our nation was at risk because of the technological advances and global economic growth of Russia (then the USSR), Japan, Germany, and China.  Now we are told to worry about Finland, South Korea, the Czech Republic, Hungary, and Slovenia, whose scores on these tests are higher than ours. They must be doing something different to educate their students in math and science.  The race is on!

The sky is falling

Where the State Science Standards Go Wrong

In the introduction of the report (the part everyone reads, along with the section on their own state's standards), the Fordham Institute identifies four problem areas where, in its opinion, many state standards are mediocre to poor.

  • Problem 1. An Undermining of Evolution.  The report rightly repeats what many know: science in the public schools has a long history of contending with religious groups that want equal time with evolution, inserting creation science and intelligent design into the curriculum or requiring extra scrutiny of topics deemed controversial in science.
  • Problem 2. The Propensity to be Vague.  The Fordham group has some way of determining whether a standard is vague in order to rate it.  I don't know whether they really analyzed every standard in each state document, but they seem to approve only of standards that are "really content oriented."

For example, here is a good one according to the writers:

  • Students know how to design and build simple series and parallel circuits by using components such as wires, batteries, and bulbs.

And here is a vague one:

  • Demonstrate understanding of the interrelationships among fundamental concepts in the physical, life, and Earth systems sciences. (grade 4)

Now, what do you think about this science standard?  Do you think it is a good one, or do you think it is vague?  What are students really expected to know here?

  • Know how to define “gravity.”

The last one, "know how to define gravity," was one of the Fordham Institute's own standards used to "evaluate" the state science standards.  It was taken from the Fordham "content-specific criteria" (that is, standards), which you will find on pages 205–209 of the report.  The reviewers used this list of objectives (another word for standards), divided into grades K-8 and 9-12.  I am not sure of the origin of the content-specific criteria, but I assume the authors wrote these objectives and have used them in the past.  I wonder how they would evaluate their own objectives?

  • Problem 3. Poor Integration of Scientific Inquiry. Please keep in mind that one of the reviewers has a real disdain for the notion of inquiry science teaching.  If a state wrote an objective asking students to make observations in order to raise questions, the reviewers were bent out of shape.  You cannot write a standard unless it is tied to content. Period.  They claim that too many states do this, and they are distressed.

  • Problem 4. Where Did All the Numbers Go?  Math is integral to science.  Who could have guessed?  After reading the Fordham report, it was evident that if a state included a quantitative, equation-based objective like the one the report holds up as an example, it got a good review.

[Example objective shown as an image in the original post; not preserved here.]

If a state wrote a more qualitative objective, like one from Illinois, the reviewers concluded it could not possibly prepare students for college, and the state got a bad review.

[Illinois objective shown as an image in the original post; not preserved here.]

I wonder if they have ever heard of Conceptual Physics?

Where the Report Goes Wrong

First, the report is amazing in its scope.  Imagine: in one report you can find evaluations of every set of state science standards, plus those of the District of Columbia and the standards that the NAEP uses to construct its test items.

Here we have a scorecard on each state's science standards, graded A–F.  At first glance, this is a report that ought to be read and studied by parents, teachers, administrators, and professors of science and science education.   On one map you can see at a glance how the states stack up.  Most of the states are colored purple (grade C), green (grade D), or grey (grade F).

There is, however, a problem.  This report is deeply flawed, and the analysis of any state's science standards should be read with caution.  The language used in the report should raise a red flag.  Here are some comments you will find in it:

  • The results of this rigorous analysis paint a fresh—but still bleak—picture. A majority of the states’ standards remain mediocre to awful.
  • At the high school level, the Georgia standards offer a bewilderingly large number of courses.
  • (Name of State) science standards—unchanged since 1998, in spite of much earlier criticism, ours included—are simply worthless.

Worthless, mediocre, awful, bewildering—not the choice of descriptors that science educators typically use in a scientific report.

Because the report uses numbers to assign each state a grade of A–F, the results acquire an air of believability.  We live in a culture driven by numbers; just look at the effect of high-stakes tests on the well-being of students and teachers.  Consequently, the Fordham report will be cited and admired by many, yet it is another agenda-driven report.  What is the source of the Fordham Institute's preoccupation with standards in the first place?  Are they trying to influence the direction the country takes with regard to a set of common standards?

State Science Standards Grades, 2012. Source: The State of State Science Standards 2012, Thomas B. Fordham Institute

The evaluations reported by the Fordham experts are their opinions.  Five reviewers compiled the "evaluations" of the state science standards, dividing the work as follows:

  • Reviewer #1.  K-12 physical science and high school physics
  • Reviewer #2. K-12 life science including high school biology
  • Reviewer #3. K-12 scientific inquiry and methodology standards
  • Reviewer #4. K-12 earth and space science standards
  • Reviewer #5. K-12 physical science and high school chemistry

A sixth person compiled the reviews into a single document.

The data sources for evaluating the state science standards were the websites of the state departments of education, and the reviewers also "communicated" with the states' science experts.  No mention is made of the purpose of that communication, or whether a standard set of questions was used.

Although detailed statements are made about each state's science standards, there is really no way of knowing how the evaluators reached their conclusions.  There was no systematic analysis of each state's standards; that is, you cannot find any objective data in the report.  The authors do identify "content-specific criteria" (pp. 206–209), lists of objectives the authors used to judge the state standards in scientific inquiry, physical science, physics, chemistry, earth and space science, and life science.   Here are a couple of examples from Earth and Space Science (Fordham Report, p. 206):

  • Describe the organization of matter in the universe into stars and galaxies.
  • Describe the motions of planets in the solar system and recognize our star as one of a multitude in the Milky Way.
  • Recognize Earth as one planet among its solar system neighbors.

The Fordham “Scores” Should Be Questioned

There are more than 100 of these objective-like statements in the report.  They are used to evaluate the "Content and Rigor" of each state's standards on a scale from 0 to 7 points, with 7 being the most rigorous; "Clarity and Specificity" is evaluated on a 0–3 point scale.  Using rubric-style criteria, each evaluator read the assigned section of a state's science standards (earth science, physical science, etc.) and used the "criteria" to assign numerical scores for content and rigor and for clarity and specificity.  The final science score for each state was a composite of the evaluators' scores.

The authors give us no clues about the reliability of the scores that were assigned.  Because none of the content subsections was rated by more than one evaluator, no inter-rater reliability study was possible; had two or more evaluators scored the same sections, their agreement could have been measured and reported.  Since this was not done, it is not possible to judge the reliability of the observations.
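For illustration, here is a minimal sketch of the inter-rater reliability check the report omits, assuming two raters had independently scored the same eight sections on the 0–7 content-and-rigor scale. The scores below are hypothetical, invented for this example.

```python
from collections import Counter

# Hypothetical content-and-rigor scores (0-7) from two raters who
# independently read the same eight state sections.
rater_a = [6, 3, 5, 2, 7, 4, 3, 5]
rater_b = [6, 4, 5, 2, 6, 4, 3, 5]

n = len(rater_a)

# Percent agreement: fraction of sections receiving identical scores.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Cohen's kappa: agreement corrected for agreement expected by chance.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[s] * counts_b.get(s, 0) for s in counts_a) / n**2
kappa = (observed - expected) / (1 - expected)

print(f"percent agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
# -> percent agreement = 0.75, Cohen's kappa = 0.70
```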

There is also no information about the validity of the criteria used to judge the science standards.  The Fordham report includes the Institute's own set of science standards, which the reviewers used to judge the "content and rigor" of the state science standards, yet no information is given about the content validity of those standards.  To establish it, we would need to know whether their standards reflect the knowledge actually required for K-12 science education.  One way to do this would be to ask a panel of experts (scientists, science educators, etc.) to judge the validity of the Fordham science standards.  To my knowledge, this was not done.  Therefore, any scores the evaluators assigned to the "content and rigor" of a state's science standards should be called into question.
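Here is a minimal sketch of what such an expert-panel check could look like, using a simple content validity index (CVI): each panelist rates a standard's relevance on a 1–4 scale, and an item's CVI is the fraction of panelists rating it 3 or 4. The panel size, the ratings, and the 0.78 cut-off (a threshold often attributed to Lynn, 1986) are all illustrative assumptions, not anything the Fordham authors did.

```python
# Hypothetical expert-panel relevance ratings (1 = not relevant,
# 4 = highly relevant) for two standards quoted elsewhere in this post.
panel_ratings = {
    'Know how to define "gravity."': [2, 1, 2, 3, 2],
    "Describe the motions of planets in the solar system.": [4, 4, 3, 4, 4],
}

for standard, ratings in panel_ratings.items():
    # Item-level CVI: share of experts rating the item 3 or 4.
    cvi = sum(r >= 3 for r in ratings) / len(ratings)
    verdict = "acceptable" if cvi >= 0.78 else "questionable"
    print(f"CVI = {cvi:.2f} ({verdict}): {standard}")
```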

What is the basis for the standards that the Fordham Institute used to judge the work of 50 states, D.C., and the NAEP?

A Research Study?

Is the State of the State Science Standards a research study?

The report does not meet the standards of educational research established by the American Educational Research Association (AERA).  The Fordham Institute report is a type of evaluation research, but as shown below, it meets only two of the AERA principles.

| AERA Principle of Research | Does the Fordham Report Meet the Principle? | Comments |
| --- | --- | --- |
| Development of a logical, evidence-based chain of reasoning | Yes | The report itself is written logically and organized for easy consumption. |
| Methods appropriate to questions posed | No | The questions are not directly posed. They are implied, but couched within a lot of rhetoric. |
| Observational or experimental designs and instruments that provide reliable and generalizable findings | No | Although the evaluators identify criteria, the criteria are not used to make observations; instead the authors jump to inferences and opinions. |
| Data and analysis adequate to support findings | No | The authors provide no information about inter-rater reliability, that is, the degree of agreement or disagreement among the raters. Because of this, the data are suspect. |
| Explication of procedures and results clearly and in detail | Yes | The procedures used are clearly stated, and the results are found in the state summaries. However, because the procedures are flawed, the results should be used cautiously. |
| Adherence to professional norms of peer review | No | The report was not reviewed by outside science educators or any other peers. This is a serious issue, and one of the problems with using reports from think tanks, whether on the left or, in this case, the right. |
| Dissemination of findings to contribute to scientific knowledge | No | Although the report is being disseminated, the results are flawed and biased, and should be viewed with suspicion. |
| Access to data for reanalysis, replication and opportunity to build on findings | No | Because the "data" reported are based on opinions, it is difficult to reanalyze the study. Had the authors subjected their criteria to an outside panel and applied the same methods, we might have gotten more valid and reliable results. |

There is no evidence of any data to determine the reliability of the evaluators' scores.  Furthermore, the validity of the science standards used to make the judgments is questionable.  The lists of objectives the evaluators used were not subjected to review by outside experts.  How do we know that these lists reflect the knowledge base of science education?  We do not.

Searching the science education research journals, I want to share a study that was similar in intent to the Fordham study but that would have earned a "yes" in every category above.  In 2002, in the journal Science Education, Gerald Skoog and Kimberly Bilica published a study entitled "The emphasis given to evolution in state science standards: A lever for change in evolution education?"  They analyzed the science frameworks of 49 states to determine the emphasis given to evolution at the middle and secondary levels.  Their methodology involved identifying a select group of evolution concepts (from the NSES, 1996), such as species change over time, speciation, and diversity.  One table in the study shows the detailed analysis of the evolutionary concepts evident in each state's science standards, and the discussion is based on the data reported for each concept.

Unlike the Fordham study, this one was peer reviewed, and it could be replicated because the methodology is clear and unambiguous; a sketch of that style of concept coding appears below.
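The concept list in this sketch echoes the ones named above; the state codings are hypothetical placeholders, not Skoog and Bilica's published results.

```python
# Transparent concept coding: define the evolution concepts up front,
# then record each concept's presence (1) or absence (0) in each
# state's standards. Anyone applying the same list to the same
# documents can check the tallies.
concepts = ["species change over time", "speciation", "diversity"]

coded = {
    "State A": [1, 0, 1],
    "State B": [1, 1, 1],
}

for state, flags in coded.items():
    emphasis = sum(flags) / len(concepts)
    present = [c for c, f in zip(concepts, flags) if f]
    print(f"{state}: {emphasis:.0%} of concepts present ({', '.join(present)})")
```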

Looking Back

I have tried to show that the Fordham Institute report on the State of State Science Standards should be questioned, and that its results should be held up to criticism and treated with caution.  The Fordham report, like most foundation-supported "studies," tailors its findings to the goals of the organization.

Science educators should use caution when reading reports about the study, and should not conclude that the results in the report are valid.

What do you think?

If you have the time, take a look at the report and tell us what you think.  Do you think the results are valid and reliable?  Does the report reflect what is going on in science education today?

Common Corporate Science Standards?

My choice of title for this blog post is not a play on words; it describes the current effort to write the next generation of science standards.  The standards are being developed by Achieve, Inc., a corporate- and foundation-supported organization established in 1996 by governors and corporate leaders, not educators, to support standards-based reform.  According to the Achieve website, governors and corporate leaders:

formed Achieve as an independent, bipartisan, non-profit education reform organization. To this day, Achieve remains the only education reform organization led by a Board of Directors of governors and business leaders.

Where are the educators?

On their website they note that Education Week ranked Achieve in 2006 as one of the most influential organizations in education. Over the past years, Achieve has influenced the standards in nearly every state through its development of the Common Core State Standards in mathematics and English language arts. In some cases the Common Core Standards were effectively imposed on states that applied for Race to the Top funds, a reform program of the U.S. Department of Education that has not yet been proven to work.

Problems with the Common Standards

There are a number of problems associated with the Common Core Standards movement. Achieve is an external, corporate-driven organization without the accountability to which teachers, administrators, and schools are held.  It lacks the oversight that schools and universities face through professional and peer-review panels with the authority to recommend changes and, in extreme cases, revoke accreditation. Achieve is responsible only to its donors and board.  Many of the donor corporations and foundations are involved in educational reform that overlaps with the goals of Achieve. On the surface there appear to be many conflicts of interest in this mix, and one wonders about the transparency of a system with so little involvement of actual educators.

Achieve has also stated that a single set of standards in each content discipline can work for students regardless of where they live. This makes little sense given the diversity of the United States and the country's increasing rate of poverty.  What is the connection between the standards being developed and students from poor families?

Standards are the opinions of a subset of professors, mostly from the academic disciplines, often appearing on boards and planning and writing teams for the first time; in some cases, team participants ought to be replaced with fresh faces. Are there concepts in science, for example, that every human being must know? Probably.  A single set of standards for every student? We really do not have a way to determine what every student should know, and we have to wonder why we are so obsessed with trying.   Why, in a nation of 50 states and 15,000 school districts, do we insist on a single set of standards, all of them discipline based?

Rationale for Corporate Reform

Achieve is operating on the assumption that American education lags behind the rest of the world and needs to be fixed. There is little evidence for this claim. Reformers nearly always claim that the nation is at risk, and that if reforms of their own design are not adopted, students will not be able to compete with their peers, especially at the global level.  Again, the evidence to support this is not there.

What we have now is the corporate reform of public education, with a very small group of foundation and corporate leaders leveraging a buyout of public education. Instead of leading reform, professional teachers and professional organizations are on its periphery. The reform is top down, with charter schools and school choice as its rallying cry.

The development of the new generation of science standards is underway at Achieve, and although 20 states, NSTA, and AAAS are involved, one of the basic tenets of science is not driving the development: the peer review process.  Furthermore, the research that Achieve reports on its website is not research conducted through peer review.  To what extent can we accept their "research" findings?

Peer Review

It would have been in the best interests of public education if a more scholarly framework, one that included peer review, had guided the development of the science education standards.

From the beginning there should have been a Request for Proposals (RFP) from an organization such as the National Science Foundation (NSF).   Proposals could have been accepted from any university, research and development organization, or group such as Achieve.   As it was, Achieve had already been selected before the National Research Council's project to develop a Framework for Science Education.  The Carnegie Corporation of New York funded that project and is providing additional funds for Achieve to carry out the writing of the science standards.

If an RFP had been announced, the process would have engaged the research and development community, giving more groups of researchers and developers an opportunity to participate in creating the new generation of science standards.  The funded organization would have been accountable to the funding agency and to the peer review process, and it would also have engaged in science education research published in peer-reviewed journals.

In a democratic society, we must raise questions when one organization holds a monopoly on an industry, including educational reform.  Education in the United States is best represented by diverse goals, by learning rooted in the lived experiences of students, and by local control of schooling.  It is not represented well by the central command-and-control system that appears to rest with Achieve, Inc.

We have a problem here, and it will take reform to change this.