A Letter to The Fordham Institute

Dear Colleagues at The Fordham Institute,

The American Educational Research Association (AERA) developed a framework for Scientifically Based Research (SBR) in 2008.  The SBR definition described below was supported by the AERA Council as a framework that offers sound guidance to members of Congress seeking to include such language in legislation.  As you know, Congress is inundated by partisan think tanks that offer their research results on issues close to their own agendas.  I believe that you are one of those think tanks.

But here is the thing:

You are part of the problem.  The rise of “think tanks” started with The Tobacco Institute, big tobacco’s industry think tank, which mounted campaigns denying the growing body of scientific knowledge linking smoking to cancer.  To oppose that science, The Tobacco Institute created its own research with bogus claims and launched a campaign of doubt, insisting that the scientific research on smoking and cancer was “junk science.”  The Tobacco Institute was a kind of mentor to you and to other think tanks, especially those on the extreme right (like yourself and your cousin, the National Council on Teacher Quality) or the extreme left.  The Tobacco Institute created a formula, or modus operandi, for advocacy groups: attack real science and create doubt about research on industry issues.  In general, it was important for think tanks to assemble their own experts who were willing to support the party line and, indeed, take part in questionable research practices.  Their goal was to attack scientific studies and lobby Congress and the states, pushing their own narrow agenda.  These organizations promote junk science under the guise of research.  Let me be clear: this happens on the right and the left.

In the last post on this blog, I provided evidence that your research, the Fordham Institute’s final report on the Next Generation Science Standards, was an example of junk science.  This conclusion was based on several investigations I made of reports issued by the Fordham Institute on the science standards, which you can read here and here.

Why is it that you continue to analyze science education without involving experts in the field of science education?  Yes, there were nine people with Ph.D.s in science, mathematics, and engineering on your research team, but the team lacks the experiential and content knowledge of science education, science curriculum development, and K-12 classroom science teaching.

Why would biologists believe the results of research done by a group of people with degrees in history, political science, and communications, especially if they ignore or do not cite earlier studies in the field of biology?  And especially if they do not involve biologists in their research?  I doubt your group would.  So why do you think we should accept the results of your research?  We don’t.

Given your credentials, it is surprising that you did not adhere to the principles of research that are outlined below by the AERA, which are no different from principles of research in physics, biology, geochemistry, mathematics or engineering.

Education research, just like research in the earth sciences or the biological sciences, must be Scientifically Based Research (SBR) for its results to be considered credible and valid.  Accordingly, valid research requires (AERA, 2013):

1. The development of a logical, evidence-based chain of reasoning;

2. Methods appropriate to the questions posed;

3. Observational or experimental designs and instruments that provide reliable and generalizable findings;

4. Data and analysis adequate to support findings;

5. Description of procedures and results clearly and in detail, including specification of the population to which the findings can be generalized;

6. Adherence to professional norms of peer review;

7. Dissemination of findings to contribute to scientific knowledge;

8. Access to data for reanalysis, replication, and the opportunity to build on findings.

In my opinion, your team fell short on these criteria for SBR.  Your research focuses on an examination of science standards: in one study you assessed the content and rigor of the state science standards, and in your latest report you assessed the Next Generation Science Standards.  Yet your reports are not based on the scientific protocols identified here.  The conclusions you draw are based more on ideology than on the results of research.  They are biased and narrow.

However, because of your lobbying efforts and your deep pockets, your reports on science standards have been accepted by the media as the final word on the state of science education standards.  Unfortunately, the reports on the science standards have little credibility and are no different from reports issued by The Tobacco Institute.

Some Steps You Might Take

There are a number of steps that your organization can take if you plan to issue future reports on the state of science education in the United States.  For one, you might consult the National Association for Research in Science Teaching (NARST) or specific members of NARST for advice on how to conduct research in science education.  You might visit their journal website (Journal of Research in Science Teaching).  Another journal you might consider is the journal Science Education.  If you want to expand your horizons, you might consider these international journals of science education: International Journal of Science Education, and Eurasia Journal of Mathematics, Science and Technology Education.  Two additional sources of ideas include the Handbook of Research on Science Education, and The Cambridge Handbook of the Learning Sciences.

In the meantime, we will push back against your aggressive indictments of science education.

Regards,

Jack Hassard


Fordham Institute’s Evaluation of Next Generation Science Standards Rated as Junk Science

In this post I am going to provide evidence that the Fordham Evaluation of the Next Generation Science Standards is junk science and does not meet the basic standards of scientific research.  Figure 1 is the Junk Science Evaluation and Index Form that I designed to assess the Fordham Evaluation.  The ten categories are definitions of junk science that emerged from a study by Michael Carolan (2012).  He assessed ten years (1995–2005) of newspaper articles that included the words “junk science” in the title, systematically analyzing and coding the articles according to how the term was used.  I’ve used the ten definitions as the categories shown in Figure 1.

Disclaimer: I have major concerns about using national science standards for every school student, K-12.  I also do not subscribe to the rationale or policy upon which the standards movement is based.  The rationale for science described in the NGSS is not related to the conception or philosophy of a sustainable planet, but is instead science in the service of the economic growth of the nation, job training, and economic competitiveness in a global society.  The science standards were designed by scientists and engineers, and so there is a heavy emphasis on scientific process and content instead of thinking about a science curriculum that would be in the service of children and adolescents.  I have written extensively about this on this blog.  Nevertheless, I have major concerns about the Thomas B. Fordham Institute’s biased assessment of science education, and I write this blog post in that context.  In no way do I endorse the NGSS.

Each category is an indicator that the study under review might be considered junk science.   When partisan or advocacy organizations issue reports, they are often done outside the normal context of scientific research.  In many cases, the reports are written by in-house organizational employees who indeed may have advanced degrees, but who isolate themselves from the research community at large.  Often the reports are not peer-reviewed.  One of the most obvious defects in these reports is that they tend to use methods that are not reproducible or are so murky that the results are clearly suspicious.

I’ve left the form in Figure 1 blank if you would like to reproduce it.

Rating scale: Strongly Disagree (1), Disagree (2), Neutral (3), Agree (4), Strongly Agree (5)
1. Based upon bad policy
2. Experts with agendas
3. False Data
4. No data or unsubstantiated claims
5. Failure to cite references
6. Uses non-certified experts
7. Poor methodology
8. Too much uncertainty to arrive at conclusions
9. Reveals only the data that supports conclusions
10. Non-peer reviewed

Figure 1. Junk Science Evaluation & Index Form

How Does the Fordham Final Evaluation of the Next Generation Science Standards Stack Up?

The Fordham Institute evaluation of the NGSS is a flawed report, based on my assessment of its published document using the Junk Science Evaluation & Index Form.  After reading and reviewing the Fordham report, I rated each criterion on a 5-point scale.  For each item, I’ve included brief comments explaining my decisions.  As you can see in Figure 2, the overall assessment of the Fordham report was 4.7, which means that this reviewer strongly agreed that the report fits the ten definitions of junk science.
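To make the arithmetic explicit, here is a minimal sketch of how the index can be computed, assuming each of the ten criteria in Figure 1 is scored on the 1–5 scale and the overall index is the simple mean of the ten ratings; the individual ratings in the code are placeholders chosen to show how an average of 4.7 can arise, not a transcript of Figure 2.

```python
# Minimal sketch of the Junk Science Evaluation & Index arithmetic.
# Assumption: each criterion is rated from 1 (Strongly Disagree) to
# 5 (Strongly Agree), and the overall index is the simple mean of the ratings.
# The ratings below are placeholders, not the ratings recorded in Figure 2.

ratings = {
    "Based upon bad policy": 5,
    "Experts with agendas": 5,
    "False data": 2,  # placeholder "Disagree"
    "No data or unsubstantiated claims": 5,
    "Failure to cite references": 5,
    "Uses non-certified experts": 5,
    "Poor methodology": 5,
    "Too much uncertainty to arrive at conclusions": 5,
    "Reveals only the data that supports conclusions": 5,
    "Non-peer reviewed": 5,
}

overall = sum(ratings.values()) / len(ratings)
print(f"Overall junk science index: {overall:.1f} out of 5")  # 4.7 with these placeholders
```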

Junk Science Definitions (rating scale: Strongly Disagree (1), Disagree (2), Neutral (3), Agree (4), Strongly Agree (5))
1. Based upon bad policy  X
The policy underlying the Fordham Evaluation of the NGSS is a strict adherence to a traditional view of science content.  Fordham’s own set of standards, against which it evaluated the NGSS and the state science standards, is a list of low-level science goals.  In short, the policy of the Fordham Institute and the authors of the report is an unchanging fealty to a conservative agenda and a canonical view of science education.
2. Experts with agendas  X
The experts of the Fordham Institute seem to have an agenda that dismisses any inclusion of inquiry (the practices in the NGSS) and of pedagogical advances such as constructivism and inquiry teaching.
3. False Data  X
 There is no attempt to include false data.
4. No data or unsubstantiated claims  X
Although the authors include written analyses of each content area (physical, earth, and life science), they go out of their way to nitpick standards written by others (the NGSS and the states) and fail to realize that their own standards, which they use to judge others’, are inferior.
5. Failure to cite references  X
There were 17 footnotes identifying the references the authors cited in their analysis of a national set of science standards.  There are no citations of refereed journals or books.  Most footnotes were notes about the report or citations of earlier Fordham Institute reports.  Only four citations were to sources outside the Fordham Institute, such as the Ohio Department of Education and ACT.
6. Uses non-certified experts  X
There were no teachers or science education experts on the review team.  Although all the authors hold advanced degrees in science, mathematics, and engineering, they do not seem qualified to rate or judge science education standards, curriculum, or pedagogy.
7. Poor methodology  X
The authors claimed to check the quality, content, and rigor of the final draft of the NGSS, using the same method they used to rate the state science standards two years ago.  The grading metric has two components: 7 points are possible for content and rigor, and 3 points for clarity and specificity.  Content and rigor is evaluated against Fordham’s own content standards, which I have assessed using Bloom’s Taxonomy: 72% of Fordham’s science standards were at the lowest levels of Bloom’s taxonomy, while only 10% were at the highest levels.  In order to score high on the content and rigor part of the Fordham assessment, the NGSS would have to meet Fordham’s standards, which I have judged to be mediocre.  The NGSS earned 3.7 (out of 7) on content and rigor and 1.5 (out of 3) for clarity and specificity, for a total of 5.2 (out of 10).  Using these scores and its earlier report, The State of the State Science Standards, the Fordham Institute classified the states as clearly superior, too close to call, or clearly inferior compared to the NGSS.  According to Fordham, only 16 states had science standards superior to the NGSS.  The problem, in my view, is that the criteria Fordham uses to judge the NGSS and the state science standards are flawed.
8. Too much uncertainty to arrive at conclusions  X
The Fordham report was written by people who seem to have an axe to grind against the work of the science education community.  The fact that they failed to involve teachers and science educators in their review shows a disregard for the research community, which is surprising, given their credentials as scientists.
9. Reveals only the data that supports conclusions  X
The conclusions the Fordham group reports boil down to a number, which is then translated into a grade.  In this case, the NGSS scored 5.2 out of 10, which converts to a grade of C.  This is what the media pick up on, and the Fordham Institute uses its numbers to create maps classifying states as inferior, superior, or too close to call.
10. Non-peer reviewed  X
This report is a conservative document that was never shared with the research community.  Its conclusions should be suspect.

Figure 2. Junk Science Evaluation & Index Form of the Fordham Institute’s Final Evaluation of the NGSS
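For comparison, the Fordham grading metric described in row 7 above reduces to simple addition.  The sketch below sums the two components as reported; the letter-grade cutoffs are hypothetical, included only to illustrate how a 5.2 out of 10 could map to the C grade Fordham reported.

```python
# Sketch of Fordham's 10-point grading metric as described above: content and
# rigor is scored out of 7 points, clarity and specificity out of 3 points.
# The letter-grade cutoffs are hypothetical illustrations; Fordham's exact
# conversion table is not reproduced in this post.

def fordham_total(content_and_rigor: float, clarity_and_specificity: float) -> float:
    assert 0 <= content_and_rigor <= 7
    assert 0 <= clarity_and_specificity <= 3
    return content_and_rigor + clarity_and_specificity

def to_letter_grade(total: float) -> str:
    cutoffs = [(9.0, "A"), (7.0, "B"), (5.0, "C"), (3.0, "D")]  # hypothetical cut points
    for minimum, grade in cutoffs:
        if total >= minimum:
            return grade
    return "F"

ngss_total = fordham_total(3.7, 1.5)  # the NGSS component scores reported by Fordham
print(ngss_total, to_letter_grade(ngss_total))  # 5.2 C
```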

Even though the Fordham review is junk science, the media, including bloggers on Education Week, have printed stories that largely support the Fordham reports. The National Science Teachers Association, which had a hand in developing the NGSS, wrote a very weak response to Fordham’s criticism of NGSS.

The Thomas B. Fordham Institute perpetuates untruths about science education primarily to endorse its conservative agenda.  It’s time to call foul.  In this writer’s analysis, the Fordham Institute report on the NGSS earns an F.

If you have a chance or the time, please use the form in Figure 1 to rate the Fordham Institute report on the NGSS. What was your rating?

Results Are In: NCTQ Report on Teacher Prep Rated with Four Cautions

In the last four posts, I have written articles that call out the National Council on Teacher Quality for its built-in bias against teacher preparation institutions and for the confrontational style of what it calls research.

In the last post I used research by Michael Carolan, who investigated when science becomes junk.  In one study, drawing from the Lexis-Nexis database, ten years (1995–2005) of newspaper articles containing the term “junk science” in the headline were systematically analyzed and coded according to how the term was used.  From this analysis and coding, 11 definitions of junk science were identified.  Michael Carolan’s research is significant here because it offers a method for distinguishing between junk science and scientific reports, especially when a report is published by non-refereed sources.  It’s vital to be able to make that distinction, especially in the present age, when the media give equal time to every idea regardless of the idea’s validity.

To rate the NCTQ study, I used the 11 definitions as criteria to decide whether the study fits the definitions of junk science, or whether it might be considered a valid scientific investigation.  In future posts, I’ll report evaluations of other studies, especially two studies on science standards by the Fordham Institute.

Using the criteria listed in Figure 1, I examined the NCTQ Report on Teacher Prep.  For each criterion, the goal was to find out whether that particular definition could be applied to the NCTQ report.  If it could, the criterion was marked YES.  To support each decision, examples from the NCTQ report are included in the last column of Figure 1.

Junk Science Definitions | Junk Science? | Score | Comments
Based upon bad policy | Yes | 1 | Assumes American education is declining; believes the marketplace is the engine for change
Experts with agendas | Yes | 1 | Assumes that one of the causes of America’s decline is teacher preparation institutions; NCTQ wants teacher education to be a training ground, not an environment of learning
False data | No | 0 | Data reported are ratings, which are dubious; we could rate this “yes”
No data or unsubstantiated claims | Yes | 1 | Data unclear; claims teacher education is “chaotic” with no data
Failure to cite references | Yes | 1 | No review of the literature; cherry-picked three references
Using non-certified experts | Yes | 1 | There is no evidence that experts in the field of teacher education participated
Poor methodology | Yes | 1 | Data were limited to college bulletins and syllabi; very few syllabi were received on which to base decisions
Too much uncertainty to arrive at conclusions | Yes | 1 | Universities did not cooperate, creating great uncertainty in any information gathered by NCTQ
Revealing only that data which supports findings | Yes | 1 | The data are in the form of ratings
Non-peer reviewed claims | Yes | 1 | Not subjected to review by experts in the field of teacher education
Totals | 10 yes; 1 no | 10 | The NCTQ report on Teacher Prep scored 10 on a scale of 0–11; the report is clearly junk science
 
Figure 1.  An analysis of the NCTQ Report on Teacher Prep using established definitions of junk science.  The NCTQ report scored 10 out of 11, indicating that it is a highly rated junk science study.  Definitions from Carolan, M.S., When Does Science Become Junk?

Caution

The NCTQ study scored 10 out of 11 on the Junk Science scale and, as a result, earned four “CAUTIONS,” as indicated in Figure 2.  To earn 3 or 4 cautions is clear evidence, based on the JS Score Method, that the study under consideration is pure junk.  The results should be considered fraudulent, invalid, unreliable, and pure ideology.

Figure 2.  Junk Science Rating System Using JS (Junk Science) Score.  The higher the score, the more caution should be exercised using the results and conclusions.  Hit the delete button or drag the file to the waste basket if the study has a rating of more than 2 Cautions.
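A minimal sketch of that arithmetic follows, assuming each of Carolan’s definitions is marked yes (1) or no (0) and the marks are summed; the caution thresholds in the code are my own hypothetical reading of the rating system, since the exact cut points are shown only in Figure 2.

```python
# Sketch of the JS (Junk Science) Score method described above: each definition
# is marked yes (1) or no (0), the marks are summed, and the total is mapped to
# a number of cautions. The caution thresholds here are hypothetical; the actual
# cut points appear in the Junk Science Rating System figure (Figure 2).

def cautions(js_score: int, max_score: int = 11) -> int:
    """Map a JS score to a caution count (hypothetical quartile-style cut points)."""
    fraction = js_score / max_score
    if fraction > 0.85:
        return 4
    if fraction > 0.65:
        return 3
    if fraction > 0.45:
        return 2
    if fraction > 0.25:
        return 1
    return 0

nctq_js_score = 10  # the NCTQ report was marked yes on 10 of the 11 definitions
print(cautions(nctq_js_score))  # 4 -> more than 2 cautions, i.e. pure junk by the rule above
```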

I invite you to rate the NCTQ report on teacher prep using the JS Score Method, and then compare it to a study done by Linda Darling-Hammond, Powerful Teacher Education: Lessons from Exemplary Programs.  The NCTQ study, compared to the Darling-Hammond study, is a classic example of junk science next to science.

Do you think the NCTQ study is junk science, or is it a credible study of teacher prep?

 

NCTQ Report on Teacher Prep: the Devil is in the Detail


I decided to read the National Council on Teacher Quality (NCTQ) Report on Teacher Prep to try to learn what the NCTQ had to say about teacher prep in the U.S.

Last week, the National Council on Teacher Quality released its report on Teacher Prep.  Since its release, there has been an explosion of articles and blog posts written about the NCTQ report.  If you Google “NCTQ report teacher prep” you’ll get about 51,500 results.  I didn’t look at all the results, but I did survey the first two pages and I found these results:

  • 16 articles were critical of the report
  • 2 articles were supportive
  • 1 article was neutral
  • 2 articles were links to the NCTQ report

Of the first 21 articles, 76% were critical of the NCTQ report.  Authors of these articles, who included bloggers, deans of colleges of education, professors, and professional associations, questioned the methods used.  In fact, Linda Darling-Hammond reported that because of concerns with the methodology, most schools of education refused to take part in the study.  Some writers suggested that the NCTQ study was coercive: it did not ask colleges to take part or become partners in the study, but instead resorted to legal means and the Freedom of Information Act to get its data.  Others spoke out against the partisan nature of the NCTQ, which is funded by conservative groups; according to these writers, the NCTQ is only pursuing an agenda of putting traditional teacher prep out of business and replacing it with alternative certification programs such as Teach For America.
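For what it’s worth, the 76% figure is simple arithmetic over the tally in the list above:

```python
# Tally of the first two pages of Google results, as listed above.
critical, supportive, neutral, links = 16, 2, 1, 2
total = critical + supportive + neutral + links   # 21 articles
print(f"{100 * critical / total:.0f}% critical")  # 76% critical
```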

Disclaimer: I am professor emeritus of science education at Georgia State University (GSU). I was a professor of science teacher education at GSU from 1969 to 2002, coordinator of science education, and co-developer of alternative, undergraduate, and graduate teacher prep programs. I was also a visiting professor in teacher education programs at the University of Vermont and the University of Hawaii at Hilo. I taught science teacher education seminars for more than 20,000 teachers through the Bureau of Education & Research.

The NCTQ report is a consumer-style report on a large number of teacher preparation programs, which NCTQ claims prospective clients can use to choose a teacher prep program to attend.  However, as Linda Darling-Hammond says, the report is nonsense.

It’s a 112-page report that includes colorful graphs, charts, tables, and descriptive statistics. The authors? I’m not sure. There are lots of names and hundreds of corporate and foundation sponsors, but in the end no distinct or verifiable authors. There is also no evidence that the “study” was reviewed by respected scholars in the field of education research.

There is no review of the literature on teacher preparation in the NCTQ report.  There are some references, but you have to search for them in the “Notes” section at the end of the report.  These “studies” were either done by the NCTQ or were cherry-picked from the literature to support its political views.

Investigating the Enemy

When you read the NCTQ report, it seems as if teacher prep institutions are the enemy. For more than thirty years I’ve read and studied educational research articles published in refereed journals such as the Journal of Research in Science Teaching and Science Education, as well as many handbooks of research in science education, teacher education, and the learning sciences. In all that time, I’ve never read a study in which researchers demanded cooperation from the research participants. The NCTQ policy is very clear: if you don’t give us what we want, we’ll use legal means to get it. They also “reached out” to a few students to supply materials that had been requested from the administration.

The so-called NCTQ researchers not only resort to coercive strategies to get data (syllabi, curriculum, etc.), but you also get the feeling that they snoop around universities, trying to find out what texts are used by shopping in campus bookstores.

Measuring Everything Under the Sun

But here is the thing. If you look at Figure 1 (Figure 40 in the NCTQ report), the data sources for the 17 criteria used to evaluate teacher prep institutions are spread out and far-ranging.  For each criterion there are sources within and outside the higher education institutions, but you have no idea what value is attached to each, and it’s kind of murky when you begin looking at each source of data.  Take course syllabi.  In many instances, NCTQ sifters had trouble getting syllabi.  Some universities refused to send them, so the NCTQ turned to its lawyers and, eventually, to the Freedom of Information Act to try to get this source of data.

What we see here is a contentious relationship between the NCTQ and the nation’s teacher preparation institutions.  What kind of results will emerge from the aggressive nature of this “study”?

 

Figure 1. This is Figure 40 from the NCTQ report on teacher prep. I’ve annotated the report to point out some of its limitations. Extracted on 6/21/2013 from http://www.nctq.org/dmsView/Teacher_Prep_Review_2013_Report

NCTQ has put together a mass of data, trying to measure everything under the sun.  Yet, the kind of data that they are collecting really doesn’t tell us much about teacher preparation per se.

Figure 1, taken from the NCTQ report, shows the criteria used by NCTQ to assess teacher prep programs.  All of the data come from paper or online documents.  None involved interviews or discussions with people at the teacher prep institutions.  As hard as this is to believe, it is the pattern the NCTQ has followed since it was formed by the Thomas B. Fordham Institute.

In Figure 1, look at the left-hand column of IHEs (data sources from the institutions).  Listed are six sources of data:

  1. Syllabi
  2. Required textbooks
  3. IHE catalogs
  4. Student teaching handbooks
  5. Student teaching evaluation forms
  6. Capstone project guidelines (including teacher performance assessments)

Trained analysts then check 17 standards using a scoring system, after “a very methodical and systematic process of coding and sorting.  Analysts have been trained to follow a very detailed and systematic standard-specific protocol to make a ‘yes’ or ‘no’ decision about whether each of the standard’s indicators is satisfied.”

But there is so much data for each standard that it’s not believable that any kind of reliable or valid system emerges from this “corporate spray.”  The idea is to throw as much at the wall as possible and look for what sticks.  In this case, not much sticks.
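To make the quoted protocol concrete, here is a hedged sketch of what a standard-specific yes/no indicator check could look like; the indicator names and the all-indicators aggregation rule are my assumptions for illustration, not NCTQ’s published method.

```python
# Hypothetical sketch of the protocol quoted above: an analyst records a yes/no
# decision for each indicator of a standard. The indicator names and the rule
# for rolling indicators up into a standard-level judgment are assumptions made
# for illustration; NCTQ does not spell out its aggregation rule in the report.

def standard_satisfied(indicator_decisions: dict[str, bool]) -> bool:
    """Assume a standard is satisfied only if every indicator is marked yes."""
    return all(indicator_decisions.values())

# Hypothetical indicators for a student teaching standard.
decisions = {
    "handbook describes placement criteria": True,
    "observation instrument is provided": True,
    "cooperating teacher selection is documented": False,
}
print(standard_satisfied(decisions))  # False: one indicator was marked no
```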

Trophies and Stars

NCTQ Gold Trophy for Strong Design

Nevertheless, NCTQ charges ahead and rates institutions by standard.  If an institution meets the standard (according to NCTQ), it is awarded four stars.  Nearly meet the standard: three stars.  Partly meet the standard: two stars.  Meet a teeny tiny part: one star.  No stars if the institution doesn’t meet NCTQ’s standard.

The “gold trophy” is awarded on some criteria to those institutions with a “strong design.”  And they get five stars!
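As a rough sketch, the star scheme described above amounts to a simple lookup from NCTQ’s judgment to a star count; the category labels below paraphrase the narrative, and the five-star entry follows the “strong design” gold trophy just described.

```python
# Sketch of the NCTQ star scheme as described in this post. The category labels
# are paraphrases of the narrative above, not NCTQ's official wording.

STARS = {
    "meets the standard": 4,
    "nearly meets the standard": 3,
    "partly meets the standard": 2,
    "meets a teeny tiny part of the standard": 1,
    "does not meet the standard": 0,
    "strong design (gold trophy)": 5,
}

for judgment, stars in STARS.items():
    print(f"{judgment}: {'*' * stars if stars else 'no stars'}")
```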

Because there are so many sources of data, and because many institutions simply did not want to cooperate with NCTQ, there are serious questions about the results.

For example, in checking how institutions select students for their programs, there is no way of knowing the relative importance of the data collected.  This is true for nearly all of the standards used by NCTQ.

Evaluating Student Teaching: You’ve Got to be Kidding

Mind you, there are 17 standards used to “rate” teacher preparation institutions.  Each standard is scored according to a list of data sources.  To give you an idea, here is how NCTQ scores the Student Teaching standard (#14 of 17).
Evaluation of elementary, secondary, and special education teacher preparation programs on Standard 14: Student Teaching uses the following sources of data:
  1. Institutions of higher education (IHEs) handbooks pertaining to the teacher preparation program and/or student teaching placements specifically
  2. Observation instruments used by university supervisors in student teaching placements
  3. Contracts and/or communications between IHEs and school districts about student teaching placements
  4. Nomination or application forms of prospective cooperating teachers that are completed by school district personnel
  5. Syllabi for student teaching-related seminars or courses
Did our esteemed colleagues at NCTQ interview program heads and professors who actually work out the details of student teaching, internships, and school-based activities?  Did they interview students in teacher education programs and ask them their opinion of various aspects of their teacher preparation?  Did the NCTQ visit and interview cooperating teachers who mentor teacher preparation students?   No.  No.  No.

Junk Thought, Junk Science

Susan Jacoby, in her book The Age of American Unreason, helps us understand the pervasive phenomenon in which anti-rationalism and contempt for countervailing facts and expert opinion manifest themselves as the truth.  Jacoby points out that junk thought can come from the right as well as the left; accusing the other side of irrationality thrives on both.  But, she suggests, junk thinkers see “evidence as a tiresome stumbling block to deeper, instinctive ways of knowing.”  (Jacoby, Susan (2008). The Age of American Unreason, Kindle location 3798. Knopf Doubleday Publishing Group.)

The NCTQ report on teacher preparation is junk science.  The method employed in the study avoided data from the very sources that could help uncover the nature of teacher preparation: faculty, administrators, students, and cooperating school districts and educators.  Without interviewing and observing teacher preparation programs directly, and without establishing a cooperative relationship with these institutions, the NCTQ condemns itself to false claims and outright opinions that have little bearing on the nature of teacher preparation.

The conclusions NCTQ makes have nothing to do with the data it collected.  Its conclusion is a political statement.

The NCTQ is averse to evidence and scientific reasoning. Instead of reviewing the literature on teacher education programs, the nature of these programs, and what makes for effective teacher prep, the NCTQ starts with the premise that teacher preparation in the U.S. is a failure. They cherry-pick a very few studies from the literature that support their distorted picture of teacher prep, then use unscientific and nontransparent methods that are impossible to replicate. Honestly, I can’t figure out how they arrived at their rankings.

That is, until I read the two-page conclusion near the end of the report.

According to NCTQ, teacher prep institutions do not “arm” novice teachers with practical tools to succeed in the classroom. Mind you, the NCTQ study did not collect any data about “tools” that were or were not in the teacher prep curriculum, nor did it survey students in any program or conduct site visits to see teacher educators at work.

Another conclusion NCTQ makes is that teacher prep programs make candidates show their feelings and attitudes about race, class, language, and culture through in-class dialogue and journal writing. Yet NCTQ never visited classrooms to observe these dialogues, nor did it read any student journals.  Again, nowhere in the report is there any data related to this politically charged conclusion.

None of the remarkable conclusions are related to the “data” NCTQ collected.

One more thing

Walsh and her colleagues really seem to have a disdain for teacher education. They believe that it is the job of teacher educators to train candidates for teaching, much like the Teach For America (TFA) program does in its five-week teacher prep program. NCTQ reels when teacher educators suggest that their mission is to prepare candidates, not train them.

The NCTQ report is not only a wonderful example of junk thought; it is the epitome of junk science.

What is your opinion on the NCTQ report on teacher prep?

Sciencepolitica: Science Debate Seeks to Find Out What Politicians Know About Science

Update:  Shawn Otto of Science Debate will be featured on NPR’s Talk of the Nation: Science Friday at 2 ET, for a discussion of science in the elections.

In 2007, a small group of American citizens, led by Shawn Otto, created Science Debate 2008, an organization that called on the 2008 presidential candidates to hold a debate on science and its political implications for society.  A presidential debate on science was never held.  Here, according to Science Debate 2008, is the story:

The candidates refused.  The Science Debate team secured cosponsors in the National Academies, the American Association for the Advancement of Science, and the Council on Competitiveness.  They secured bipartisan congressional co-chairs.  They made a deal with NOVA and NOW on PBS to broadcast the debate, and secured a venue.  But the candidates instead opted to debate their religious faith in two nationally televised “faith forums.”

The American science and engineering community was stunned.  These issues lie at the center of most of the major unresolved policy challenges facing the country, and yet the candidates refused to debate them.  The team went to work with other leading science organizations to cull the submitted questions into “The Top 14 Science Questions Facing America,” and teamed with Research!America to do a national poll to show the candidates that 85% of the American public thought that debating these topics was important.

This time, the candidates responded.  They assembled teams of science advisers to help them answer the questions, which helped inform their strategic thinking.  The inauguration of Barack Obama marked the first time a president has gone into office with a fully formed science policy and a sense of how it fits into his overall strategic agenda.  In a day and age when science affects every aspect of our lives and lies at the center of the causes and solutions of many of our most intractable public policy challenges, this was an important new development.

Science Debate 2012 has called once again for a science presidential debate and, for the first time, has posed a set of eight questions for Congressional discussion.  Whether the Obama or Romney campaigns will sign off on a debate that focuses on science is unknown.  Most likely they will turn the questions over to their science advisors to answer in writing.

Sciencepolitica: Science in the Political Arena

Science Debate has created a forum to explore significant science issues in the presidential campaigns in 2008 and 2012.  Are the candidates qualified to discuss these issues?  As Shawn Otto puts it, Obama and Romney spend a lot of time talking about the economy, yet neither is an economist.  They express opinions on foreign policy, yet neither is a diplomat.  They should be able to discuss science and how it impacts people and society, even though neither is a scientist.

Science has not been in the background.  Extreme earth conditions, from the Greenland ice melt and the calving of huge chunks of glacial ice in Antarctica to drought and forest fires in many American states and monsoon rains in China, are TV stories nearly every night.  Over the past few years, the politicization of science has led to stalemates in many areas that need to be addressed: climate change and global warming, energy sustainability, clean water, and unpolluted oceans.  Science and mathematics education have been the target of many news stories, especially when international test results are released.  In most of these stories, a “sky is falling” scenario is played out, leading politicians and corporate leaders to denigrate the state of science and mathematics education.

When scientific findings conflict with personal and corporate interests, however, mudslinging emerges in the form of personal attacks on individual scientists and educators (Rachel Carson, for example) and junk science claims against the scientific research community.

Politicians love to use the term “junk science.” It is primarily used to cast doubt on and deride scientific findings, even when those findings were published in peer-reviewed journals and supported by the scientific community. Junk science is invoked to counter global warming theories, and especially the reports of the Intergovernmental Panel on Climate Change, which has provided us with a comprehensive picture of the state of global warming. Even though the panel has reviewed thousands of studies, there are politicians, and some in the media, who claim these conclusions are based on “junk science” and that until some “sound science” comes down the road, we should put a halt on any recommendations related to the data.

It’s important for us to know how the two presidential candidates interpret the findings of science.  We have a clue that Obama respects the scientific community, based especially on his science advisory appointments in the current administration.  We’ll know later this year what Romney thinks about science when his team answers the Science Debate questions.

Science Debate has opened the conversation to a wider audience,  but we face gridlock on most of the 14 science issues proposed by Science Debate.  Debating the issues openly in democratic forums will enable the candidates to show how they would deal with these issues that are important to all Americans.

Significantly, Science Debate is partnering with Scientific American and other leading science organizations to promote discussion of science issues in this year’s election.  Science Debate has reached out to the Obama and Romney campaigns to take part in a televised debate on science and science education.  The odds are slim that either will agree to debate.  Instead, they will turn to their science advisers to write answers to the questions.  Their replies will be published on the Science Debate and Scientific American websites, as the 2008 answers were.  Scientific American has indicated that it will grade the answers.  I wonder if they will determine the Value Added Measure for their advisors.

The Questions

The question is: what do Obama and Romney know about science?  To find out, Science Debate and its partners have devised 14 questions.  You will find the 2008 and 2012 questions in Table 1.  The questions include topics on innovation, climate change, energy, biosecurity, food, the Internet, ocean health, water, space, natural resources, health, and education.

The questions are based on our understanding of various problems that society faces.  Each question provides a bit of context, and rationale for being included in the Science Debate questionnaire.  Simple answers do not exist for any of these questions.

Christine Gorman of Scientific American suggests why it is important to ask the presidential candidates about science:

The point is, as informed citizens, we need to know how the presidential candidates expect to address the basic scientific issues that are so vital to our country’s and our planet’s future—and that their policies will be based on sound science. The best showcase for such a discussion would be a live debate between the candidates dedicated entirely to scientific issues.

 

Science Debate 2008 and 2012 Questions

Table 1.  Science Debate 2008 and 2012 Questions for a Presidential Candidate Debate

2008 Q1: Innovation. Science and technology have been responsible for half of the growth of the American economy since WWII. But several recent reports question America’s continued leadership in these vital areas. What policies will you support to ensure that America remains the world leader in innovation?
2012 Q1: Innovation and the Economy. Science and technology have been responsible for over half of the growth of the U.S. economy since WWII, when the federal government first prioritized peacetime science mobilization. But several recent reports question America’s continued leadership in these vital areas. What policies will best ensure that America remains a world leader in innovation?

2008 Q2: Climate Change. The Earth’s climate is changing and there is concern about the potentially adverse effects of these changes on life on the planet. What is your position on the following measures that have been proposed to address global climate change—a cap-and-trade system, a carbon tax, increased fuel-economy standards, or research? Are there other policies you would support?
2012 Q2: Climate Change. The Earth’s climate is changing and there is concern about the potentially adverse effects of these changes on life on the planet. What is your position on cap-and-trade, carbon taxes, and other policies proposed to address global climate change—and what steps can we take to improve our ability to tackle challenges like climate change that cross national boundaries?

2008 Q3: Energy. Many policymakers and scientists say energy security and sustainability are major problems facing the United States this century. What policies would you support to meet demand for energy while ensuring an economically and environmentally sustainable future?
2012 Q3: Energy. Many policymakers and scientists say energy security and sustainability are major problems facing the United States this century. What policies would you support to meet the demand for energy while ensuring an economically and environmentally sustainable future?

2008 Q4: Education. A comparison of 15-year-olds in 30 wealthy nations found that average science scores among U.S. students ranked 17th, while average U.S. math scores ranked 24th. What role do you think the federal government should play in preparing K-12 students for the science and technology driven 21st Century?
2012 Q4: Education. Increasingly, the global economy is driven by science, technology, engineering and math, but a recent comparison of 15-year-olds in 65 countries found that average science scores among U.S. students ranked 23rd, while average U.S. math scores ranked 31st. In your view, why have American students fallen behind over the last three decades, and what role should the federal government play to better prepare students of all ages for the science and technology-driven global economy?

2008 Q5: National Security. Science and technology are at the core of national security like never before. What is your view of how science and technology can best be used to ensure national security and where should we put our focus?
2012 Q5: Research and the Future. Federally funded research has helped to produce America’s major postwar economies and to ensure our national security, but today the UK, Singapore, China, and Korea are making competitive investments in research. Given that the next Congress will face spending constraints, what priority would you give to investment in research in your upcoming budgets?

2008 Q6: Pandemics and Biosecurity. Some estimates suggest that if H5N1 Avian Flu becomes a pandemic it could kill more than 300 million people. In an era of constant and rapid international travel, what steps should the United States take to protect our population from global pandemics or deliberate biological attacks?
2012 Q6: Pandemics and Biosecurity. Recent experiments show how Avian flu may become transmissible among mammals. In an era of constant and rapid international travel, what steps should the United States take to protect our population from emerging diseases, global pandemics and/or deliberate biological attacks?

2008 Q7: Genetics research. The field of genetics has the potential to improve human health and nutrition, but many people are concerned about the effects of genetic modification both in humans and in agriculture. What is the right policy balance between the benefits of genetic advances and their potential risks?
2012 Q7: Food. Thanks to science and technology, the United States has the world’s most productive and diverse agricultural sector, yet many Americans are increasingly concerned about the health and safety of our food. The use of hormones, antibiotics and pesticides, as well as animal diseases and even terrorism pose risks. What steps would you take to ensure the health, safety and productivity of America’s food supply?

2008 Q8: Stem cells. Stem cell research advocates say it may successfully lead to treatments for many chronic diseases and injuries, saving lives, but opponents argue that using embryos as a source for stem cells destroys human life. What is your position on government regulation and funding of stem cell research?
2012 Q8: The Internet. The Internet plays a central role in both our economy and our society. What role, if any, should the federal government play in managing the Internet to ensure its robust social, scientific, and economic role?

2008 Q9: Ocean Health. Scientists estimate that some 75 percent of the world’s fisheries are in serious decline and habitats around the world like coral reefs are seriously threatened. What steps, if any, should the United States take during your presidency to protect ocean health?
2012 Q9: Ocean Health. Scientists estimate that 75 percent of the world’s fisheries are in serious decline, habitats like coral reefs are threatened, and large areas of ocean and coastlines are polluted. What role should the federal government play domestically and through foreign policy to protect the environmental health and economic vitality of the oceans?

2008 Q10: Water. Thirty-nine states expect some level of water shortage over the next decade, and scientific studies suggest that a majority of our water resources are at risk. What policies would you support to meet demand for water resources?
2012 Q10: Fresh Water. Less than one percent of the world’s water is liquid fresh water, and scientific studies suggest that a majority of U.S. and global fresh water is now at risk because of increasing consumption, evaporation and pollution. What steps, if any, should the federal government take to secure clean, abundant fresh water for all Americans?

2008 Q11: Space. The study of Earth from space can yield important information about climate change; focus on the cosmos can advance our understanding of the universe; and manned space travel can help us inspire new generations of youth to go into science. Can we afford all of them? How would you prioritize space in your administration?
2012 Q11: Space. The United States is currently in a major discussion over our national goals in space. What should America’s space exploration and utilization goals be in the 21st century and what steps should the government take to help achieve them?

2008 Q12: Scientific Integrity. Many government scientists report political interference in their job. Is it acceptable for elected officials to hold back or alter scientific reports if they conflict with their own views, and how will you balance scientific information with politics and personal beliefs in your decision-making?
2012 Q12: Science in Public Policy. We live in an era when science and technology affect every aspect of life and society, and so must be included in well-informed public policy decisions. How will you ensure that policy and regulatory decisions are fully informed by the best available scientific and technical information, and that the public is able to evaluate the basis of these policy decisions?

2008 Q13: Research. For many years, Congress has recognized the importance of science and engineering research to realizing our national goals. Given that the next Congress will likely face spending constraints, what priority would you give to investment in basic research in upcoming budgets?
2012 Q13: Critical Natural Resources. Supply shortages of natural resources affect economic growth, quality of life, and national security; for example China currently produces 97% of rare earth elements needed for advanced electronics. What steps should the federal government take to ensure the quality and availability of critical natural resources?

2008 Q14: Health. Americans are increasingly concerned with the cost, quality and availability of health care. How do you see science, research and technology contributing to improved health and quality of life?
2012 Q14: Vaccination and public health. Vaccination campaigns against preventable diseases such as measles, polio and whooping cough depend on widespread participation to be effective, but in some communities vaccination rates have fallen off sharply. What actions would you support to enforce vaccinations in the interest of public health, and in what circumstances should exemptions be allowed?

Which of the Science Debate 2012 questions is most important to you?