
From Chalkboards to AI

Using AI to Foster Ethical Conversations in Science Classrooms

By Christine Anne Royce, Ed.D., and Valerie Bennett, Ph.D., Ed.D.

Posted on 2025-11-10

Disclaimer: The views expressed in this blog post are those of the author(s) and do not necessarily reflect the official position of the National Science Teaching Association (NSTA).

One topic that has emerged alongside the use of Artificial Intelligence (AI) is its ethical use. These ethical concerns range from bias and fairness in what AI produces, to privacy and data security for student information, to student agency during use. All of these concerns are valid and warrant in-depth discussion among educators as they consider when and how to use AI tools in their classrooms.

However, another way to utilize AI is to focus on how it can engage students in ethical conversations about science topics. There is no doubt that as science educators we constantly navigate a crowded curriculum while integrating new concepts alongside essential content. With the everyday demands, it often feels like there’s little room to introduce ethics into the mix. Yet, the rise of AI offers a unique and powerful opportunity to incorporate ethical discussions into science education.

AI, more than just a technological tool, serves as a perfect lens for examining longstanding ethical questions in science. This shift moves us from narrowly focused ethical dilemmas, such as "Can I use AI to help write my lab report?", to broader socioscientific concerns, such as "Should scientists rely on AI to discover a new drug?" This kind of ethical reasoning is essential for 21st-century scientists, who must not only conduct experiments but also recognize the ethical implications of their work. The use of socioscientific issues in the classroom provides a “pedagogical framework that uses debatable, socially related scientific issues as the context for science teaching” (Kahn 2019, p. 7).

Before diving into this further, we present a few guardrails that teachers should consider:

  1. Source of information: Certain AI platforms, such as Google NotebookLM, require you to upload the content from which the AI draws its responses. If “what will be shared” is a concern, such a platform is the better choice, since you provide the information it draws from.
  2. Developmental appropriateness: Some topics are clearly better suited for older students, so consider the topic ahead of time, along with the guidance to include in the prompt so that students have the boundaries they need.
  3. District policies: Educators should consider their district’s policies related not only to AI but also to the ethical topics they plan to raise.

Integrating AI into Ethical Conversations with the Socratic Seminar

One effective way to introduce ethical discussions surrounding AI in the science classroom is through the Socratic Seminar. This student-centered method encourages critical thinking and reasoned argumentation, providing an ideal platform for students to explore complex ethical questions (Chowning 2009).

The Socratic Seminar involves structured discussions based on open-ended questions, with the teacher serving as a facilitator rather than an expert. By guiding students to analyze ethical issues, the seminar helps them develop a deeper understanding of the dilemmas they may face in their future scientific careers.

The use of AI does require some adaptation to the traditional Socratic Seminar model. We discuss some recommended modifications below.

The AI-Infused Socratic Seminar: A New Model for the Science Classroom

This model uses the foundational structure of a traditional Socratic Seminar but infuses it with Artificial Intelligence at each stage. AI is no longer just the topic of discussion; it becomes an active tool for text generation, facilitation, and reflection.

1. Selecting the "Text" 

In this step, AI can either assist the teacher in writing the “text” that sets the stage for the dilemma or act as a co-instructor, guided by prompts from the teacher.

  • Traditional Foundation: The seminar is based on an ambiguous, real-world text, scenario, or case study that lacks a single clear answer, prompting authentic inquiry.   
  • The AI-Infused Model:
    • AI as the Author: Instead of finding a pre-written scenario, the teacher (or the students) can use AI to generate the ethical dilemma (see recommendation above about which platform to utilize). For example, you could prompt an AI with: "Create a 'what if...' scenario about a new AI that can predict disease risk but shows signs of racial bias." The resulting scenario becomes the "text."   
    • AI Output as the "Text": The "text" does not have to be a scenario about AI; it can be a direct output from an AI. For example:
      • The "Black Box": Give students a set of inputs (e.g., patient data) and the AI's outputs (e.g., treatment recommendations) and ask them to discuss the ethics of using a decision they cannot explain.   
      • The Flawed Policy: Ask an AI to write a climate adaptation policy for a specific region and have students critique it for "techno-solutionism" or embedded biases.  

For example, the following prompt was provided to OpenAI:

I want to work on having my students consider socioscientific issues in the classroom. To help me do this, generate a "what if" scenario regarding the use of AI to predict medical problems, such as potential cardiac events, when given a patient's medical records. Give me two different scenarios. One that focuses on a positive outcome and one that focuses on a negative outcome, with the use of AI.

It produced the following results, which could be used as the initial discussion point for the topic:

Here are two "what if" scenarios for considering the use of AI to predict medical problems, specifically potential cardiac events, based on a patient's medical records:

Positive Outcome Scenario:

What if an AI system is able to analyze vast amounts of patient medical data, including genetics, lifestyle factors, and historical health records, to accurately predict potential cardiac events months or even years in advance? This early prediction allows doctors to intervene early, providing preventive treatments, lifestyle recommendations, and regular monitoring for at-risk individuals. As a result, the number of severe heart attacks and related complications decreases significantly, improving overall public health and reducing healthcare costs.

Negative Outcome Scenario:

What if an AI system, while highly accurate in predicting cardiac events for some patients, overlooks critical factors or makes false predictions in others due to biases in the training data? For instance, if the system was primarily trained on data from a homogeneous population, it could disproportionately misidentify or fail to predict risks for underrepresented groups, such as certain ethnic minorities. This could lead to either false positives, where patients undergo unnecessary procedures, or false negatives, where at-risk individuals are not identified in time. The consequences could include mistrust in AI healthcare systems, unequal access to care, and potential harm to patients.

These scenarios can help students explore the ethical, social, and technological aspects of AI in healthcare.
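
The exchange above took place in a chat interface, but teachers who prefer to script scenario generation can do so in a few lines of Python. The sketch below is a minimal illustration using OpenAI's Python client; the model name and the exact wording of the prompt are illustrative assumptions, not part of the original exchange.

```python
# Minimal sketch (not the authors' workflow): reproducing the scenario prompt
# above with OpenAI's official Python client. Assumes the OPENAI_API_KEY
# environment variable is set; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Generate a 'what if' scenario regarding the use of AI to predict medical "
    "problems, such as potential cardiac events, when given a patient's "
    "medical records. Give me two different scenarios: one that focuses on a "
    "positive outcome and one that focuses on a negative outcome."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

# The returned text becomes the "text" handed to students for the seminar.
print(response.choices[0].message.content)
```

In practice, the printed scenario can be reviewed and edited by the teacher before it is shared with students as the seminar "text."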

2. Establishing Discussion Norms

  • Traditional Foundation: Clear norms are set for active listening, polite disagreement, citing evidence, and monitoring airtime.  
  • The AI-Infused Model: These norms are adapted to include AI:
    • Old Norm: "Cite evidence from the text."
    • New Norm: "Interrogate the 'text's' origins." Students must go beyond what the text says and ask why it might say it. This includes questioning the data on which the AI was likely trained, who funded its creation, and which populations might be underrepresented.   
    • Old Norm: "Polite disagreement."
    • New Norm: "Distinguish between human and machine reasoning." Students are encouraged to identify and label when an argument is based on a verifiable fact, a human value judgment, or an "opaque" AI-generated claim that lacks explanation.   

3. Facilitating the Seminar

  • Traditional Foundation: The teacher acts as a facilitator, posing the opening question, remaining silent, and using clarifying or probing questions only to refocus or deepen the analysis.   
  • The AI-Infused Model: The facilitator has new, powerful tools:
    • AI as Scribe: The teacher can use an AI transcription or summarization tool to "record the main ideas discussed," freeing them from notetaking to focus entirely on the quality of the discussion.   
    • AI as Devil’s Advocate: If the discussion stalls, the facilitator can feed a summary of the students' arguments into an AI and ask it to "generate a counterargument" or "identify the primary ethical weakness" in their reasoning. This non-human perspective can spark new lines of inquiry (a short scripting sketch combining the scribe and devil's-advocate roles follows this list).
    • New Probing Questions: The facilitator's questions shift from "What does the author mean?" to "What does the algorithm mean?"   
      • "If this AI makes a mistake, who is accountable, the programmer, the doctor who used it, or the AI itself?"   
      • "What are the environmental costs of running this AI to solve this problem?"   
      • "Does this tool promote equity, or does it widen the digital divide?"   

4. Assessment

  • Traditional Foundation: The process is assessed, not the opinions. A rubric grades preparation, analysis, civility, and discussion skills, often concluding with a reflective writing assignment.   
  • The AI-Infused Model: The assessment evolves to measure human-AI interaction.
    • New Rubric Category: A new category can be added to the rubric: "Critical AI Analysis." This assesses whether the student was able to identify underlying issues like data bias, privacy concerns, or the "black box" problem.   
    • The AI-Assisted Reflection: The final "reflective writing" assignment becomes a capstone task. Students are first asked to prompt an AI with the seminar's central ethical question (e.g., "Is it ethical to use a biased AI for predictive health?"). Then, students must write a reflection that critiques the AI's answer, using evidence and ideas from their own Socratic discussion. This forces them to "grapple with their beliefs" and demonstrate that they can apply human-centered ethical reasoning to a machine-generated argument.   

AI-Powered Dilemmas for Ethical Discussion in the Science Classroom

To provide students with a concrete example of how AI intersects with ethical dilemmas, consider using the following real-world scenarios:

  1. The AI Oracle (Biology / Health Science)
    A new AI tool can predict a patient’s risk of heart attack based on genetic data and medical records. However, the AI was trained on data from a single hospital system and performs poorly when tested on other populations. Should this tool be used in a hospital serving a diverse community? Note: Several available tools claim to predict this type of cardiac event; no specific tool is referenced here.
    Key ethical considerations: algorithmic bias, equity, and justice.
    Socratic questions:
    • "What are some inherent issues that exist with this type of tool?"
    • "Who benefits from this tool, and who bears the risk of its errors?"
  2. The Black Box (General Science / Physics / Computer Science)
    A lab creates an AI that identifies new antibiotics, but the AI’s decision-making process is a "black box," meaning even the scientists cannot explain how it works. Can this AI be trusted to make life-saving decisions?
    Key ethical considerations: transparency, accountability, and the balance between reliability and explainability.
    Socratic questions:
    • "What risks does a black box introduce that a transparent system might not have?"
    • "Who is responsible if this AI makes a harmful decision?"
  3. The Price of Prediction (Environmental Science / Earth Science)
    An AI model is used to predict climate risks and allocate resources, but the model is based on data from wealthier nations, leading to inaccurate predictions for developing countries. Should this model be used for decision-making?
    Key ethical considerations: data bias, justice, and environmental ethics.
    Socratic questions:
    • "Which ethical principle is most violated when the AI underestimates the vulnerabilities of marginalized communities?"
    • "Is it ethical to use a tool that contributes to the problem it seeks to solve?"

The AI Divide: A Real Dilemma in Your Classroom

The "AI Divide" presents an ethical dilemma that is already present in many schools. Some students have access to advanced AI tools that personalize learning, while others have limited or no access. This divide exacerbates existing inequalities and raises important questions about fairness and equity in education.
Socratic questions:

  • "Is it just that some students have access to paid AI tools while others rely on free, school-provided tools?"
  • "What is the ethical line between AI as an 'assistive tool' and AI as a 'cheating tool'?"

Conclusion: Fostering Ethical Critical Thinking with AI

As science educators, our role is not to be experts on AI or ethics, but to model curiosity and guide students in their exploration of these complex issues. By engaging in thoughtful ethical discussions, we empower students to think critically about the implications of technology in science and society. Teaching students to recognize ethical dilemmas and formulate reasoned responses is not just a pedagogical strategy; it is essential for developing the thoughtful, responsible scientists of tomorrow.

References

Chowning, J. T. 2009. Socratic seminars in science class: Providing a structured format to promote dialogue and understanding. Science Teacher 76 (7): 36–41. 

Kahn, S. 2019. It’s still debatable: Using socioscientific issues to develop scientific literacy K–5. Arlington, VA: NSTA Press.
 

Christine Anne Royce, Ed.D., is a past president of the National Science Teaching Association and currently serves as a Professor in Teacher Education and the Co-Director for the MAT in STEM Education at Shippensburg University. Her areas of interest and research include utilizing digital technologies and tools within the classroom, global education, and the integration of children's literature into the science classroom. She is an author of more than 140 publications, including the Science and Children Teaching Through Trade Books column.

 

 


Valerie Bennett, Ph.D., Ed.D., is an Assistant Professor in STEM Education at Clark Atlanta University, where she also serves as the Program Director for Graduate Teacher Education and the Director for Educational Technology and Innovation. With more than 25 years of experience and degrees in engineering from Vanderbilt University and Georgia Tech, she focuses on STEM equity for underserved groups. Her research includes AI interventions in STEM education, and she currently co-leads the Noyce NSF grant, works with the AUC Data Science Initiative, and collaborates with Google to address CS workforce diversity and engagement in the Atlanta University Center K–12 community.



This article is part of the blog series From Chalkboards to AI, which focuses on how artificial intelligence can be used in the classroom in support of science as explained and described in A Framework for K–12 Science Education and the Next Generation Science Standards.


