NSTA Position Statement
NSTA values a scientifically literate citizenry. Science assessments are necessary tools for managing and evaluating efforts to ensure all students receive the science education necessary to prepare them for participation in our nation's decision-making processes and lifelong learning of science in a technology-rich workplace.
Meaningful science assessment is realized only when stakeholders—students, parents, teachers, school administrators, community members, business leaders, policy makers, and government officials—share the responsibility for science learning and its associated formative and summative assessments. These stakeholders need to provide adequate resources, equal access, leadership, a supportive environment, guidance, enthusiasm, incentives, and motivation for science learning. Quality science assessments should be mechanisms for gathering information about students’
- understanding of science content and of process knowledge and skills
- ability to think critically and to solve problems ranging from simple to complex
- ability to design scientific experiments, analyze data, and draw conclusions
- capacity to recognize and articulate relationships between science topics and real-world issues and concerns
- skill in using mathematics as a tool for science learning
Assessment feedback reflects the learning setting and should be used to adjust course content, teaching techniques, or learning strategies to improve student science learning. Moreover, the assessment data should be used to craft appropriate teacher professional development experiences, identify students who need extra help and/or learning accommodations, and revisit and redesign assessment tools to better reflect the learning goals and instructional setting.
The data and knowledge gained from quality assessment can indicate how well students are meeting science standards and expectations only if the assessment is appropriately aligned with the science curriculum and instruction. Science curriculum goals, instructional topics and strategies, and assessment topics and techniques should be in alignment if tests are to yield useful data. Additionally, it is important that the processes used to collect and interpret evaluation data be consistent with the purpose of the assessment.
With respect to science assessment at the local, state, regional, and national levels, NSTA advocates that
- High expectations for science achievement be set for all
- Quality assessments be designed that reflect excellence in science curriculum and instruction
- Appropriate measures be taken to ensure that all learners receive the academic support and resources they need to succeed academically and to be assessed fairly in science
- Science curriculum, instruction, and assessment be aligned so that formative and summative assessment data are meaningful and useful to those working to increase student science achievement at all levels
- Time be allocated to engage teachers in the science assessment creation/design process
- Teacher professional development opportunities be offered that focus on aligning assessment with standards-based teaching
- Multiple forms of science assessment be used to measure student achievement and understanding (e.g., student-directed experimental designs, authentic/performance assessments, portfolio construction, laboratory practicals, real-world problem-based learning scenarios, writing challenges, and focus-group and individual student interviews)
- Selection of science assessment type and/or form of assessment implementation be adjusted on an individual basis to provide necessary accommodations for students with special needs
- Resource allocations be adjusted to appropriately fund science curriculum-instruction-assessment alignment and subsequent science assessment implementation
- High-stakes science testing decisions regarding students, teachers, and schools be made based on multiple pieces of assessment data, not on single test instruments or single test administrations
—Adopted by the Board of Directors