In the last post, I referenced a recent special issue of the Journal of Research in Science Teaching which was devoted to an exploration of scientific literacy and context in PISA Science. PISA (the Programme for International Student Assessment) is one of two major international assessments of science (the other is TIMSS) that have captured the attention of educators, politicians, policy makers, and the general public. Since about 60 countries participate in these assessments, there is a general feeling that the results are important and provide us with a glimpse of the nature of science education in these nations.
With that said, it is interesting to note that the media gives special attention to the results of these assessments. As Svein Sjoberg points out:
the main focus in the public reporting is in the form of simple ranking, often in the form of league tables for the participating countries. Here, the mean scores of the national samples in different countries are published. These league tables are nearly the only results that appear in the mass media. Although the PISA researchers take care to explain that many differences (say, between a mean national score of 567 and 572) are not statistically significant, the placement on the list gets most of the public attention. It is somewhat similar to sporting events: The winner takes it all. If you become no 8, no one asks how far you are from the winner, or how far you are from no 24 at any event. Moving up or down some places in this league table from PISA2000 to PISA2003 is awarded great importance in the public debate, although the differences may be non-significant statistically as well as educationally.
Whenever international results are reported, whether they are the results of TIMSS or PISA, the mean score of each country is reported in various charts. I’ve provided a copy of the chart that is a rank-ordered list of the mean scores by country on the 2006 PISA Science Assessment. All of the countries shown in yellow are statistically above the PISA average; those in green are statistically below the average; and those in white are not statistically different from the average. You’ll note that the United States is in the green section, with a mean score of 489. At the top of the ranking is Finland, whose students attained a mean score of 563.
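Sjoberg’s caveat about small gaps being statistically meaningless can be made concrete with a back-of-the-envelope calculation. PISA reports a standard error for each country’s mean score, typically a few score points, because students are sampled in clusters by school. The numbers below are hypothetical values chosen for illustration, not actual PISA output:

```python
import math

def z_for_difference(mean_a, se_a, mean_b, se_b):
    """Two-sample z statistic for a difference of two independent means."""
    se_diff = math.sqrt(se_a**2 + se_b**2)  # SE of the difference
    return (mean_a - mean_b) / se_diff

# A 5-point gap (572 vs 567), assuming each country's mean has a
# reported standard error of about 3 score points (hypothetical).
z = z_for_difference(572, 3.0, 567, 3.0)
print(round(z, 2))  # ≈ 1.18, well below the 1.96 cutoff for p < .05
```

Under these assumptions the 5-point gap is not statistically distinguishable from zero, yet it could still separate two countries by several places in the league table.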
I am not sure what the underlying rationale is for the U.S. Department of Education’s use of the phrase “The Race to the Top,” but one interpretation is that there is a race to get to the top of “Mt.” PISA in the foreseeable future.
Using scores from tests such as PISA to evaluate and assess science education misleads the public into thinking that science learning has actually been assessed. For instance, the United States has more than 15,000 independent school systems, and a mean score on a science test such as PISA, TIMSS, or NAEP does not describe the qualities or inequalities inherent in American schools.
The Race to the Top is an unfortunate choice of words because it implies that moving up the mountainside of the PISA scorecard is the way to improvement and success in science education.
There is a need for the science education community to raise serious questions about treating these scores as valid assessments of science education. I’ll talk more in the coming days about some of the movements under way to further standardize learning and to use high-stakes assessments of student achievement as a means to evaluate teacher effectiveness and school improvement.