Engineering Essential Attributes of Cooperative Learning and Quality Discourse in a Large Enrollment Course by Leveraging Clicker Devices

Journal of College Science Teaching—March/April 2020 (Volume 49, Issue 4)

By Christopher Bauer


This article describes how clickers (student response systems) may be used to assess and support the development of productive process skills and discourse patterns within student teams during class periods. Clicker questions may poll the class about specific features of the internal workings of teams, such as role rotation, helpful or distracting behaviors, and the richness or evenness of discourse. Displaying the polling results to the class sets up a teachable moment regarding the development of effective team communication. One question per class session provides several dozen opportunities over a course to raise student awareness of effective team communication and potentially to improve those skills. This innovation may be particularly advantageous for large classes, where student team activity cannot be efficiently monitored by a single instructor. This is the first report of the use of clickers to assess and support the learning of team process skills.

The clicker (also known as an “audience,” “classroom,” or “student” response system) has become ubiquitous in STEM classrooms (Beatty & Gerace, 2009; Chen, Zhang, & Yu, 2017; Goldstein & Wallis, 2015). A search of the Web of Science database identified about 800 articles across 80 journals with a title or abstract containing the word “clicker.” Clicker systems poll students about their ideas and project the aggregated data, informing both the instructor and the students about what the class is thinking. Wireless service has improved to the point where clicker hardware and software are straightforward to use, often supported by campus technology, and integrated with course management systems. Clicker devices typically allow multiple-choice responses, although some allow text or graphics. Both free and fee-based polling services are available for cell phones. Despite this wide integration, studies of chemistry higher education faculty suggested that the user pool a decade ago was dominated by early adopters (Emenike & Holme, 2012) and that the utilitarian decisions of individual faculty currently limit implementation to large-enrollment entry-level courses (Gibbons et al., 2017). Nevertheless, it is now rare to find a colleague who does not know what a clicker is.

Something is notably missing in studies of clicker implementation—using clickers to assess and promote effective team interaction and discourse. Nearly all of the literature focuses on content, not process (Freeman & Vanden Heuvel, 2015). This is surprising, since a frequently stated purpose of clickers is to spark student interaction and discussion. Although there is increasing interest in how clickers may create (or impede) opportunities for students to interact and co-construct knowledge (see citations below), there have been no reports of direct use of clickers for assessing team process, for encouraging productive team behaviors, or for supporting effective team discourse. This article describes the strategic design and implementation of clicker questions with the express purpose of revealing and improving these important characteristics of effective team learning (Hodges, 2017, 2018).

An abundant clicker literature has accumulated over the past two decades. Citations here are narrowed to higher education STEM and psychology, but there are many reports in business, health, and other social sciences, and in pre-college settings. Some reports are personal case studies of faculty describing and encouraging implementation (Cotes & Cotua, 2014; Hodges et al., 2017; King, 2011; Koenig, 2010; Milner-Bolotin, Antimirova, & Petrov, 2010; Ribbens, 2007; Sevian & Robinson, 2011; Skinner, 2009). Other articles describe empirical or theoretically grounded studies of implementation strategies, ranging from quasi-experimental through more rigorously controlled designs (Adams, 2014; Brady, Seli, & Rosenthal, 2013; Buil, Catalan, & Martinez, 2016; Fortner-Wood, Armistead, Marchand, & Morris, 2013; Gray & Steer, 2012; Knight, Wise, & Sieke, 2016; Kulesza, Clawson, & Ridgway, 2014; Mayer et al., 2009; Morgan & Wakefield, 2012; Niemeyer & Zewail-Foote, 2018; Oswald, Blake, & Santiago, 2014; Pearson, 2017; Smith, Wood, Krauter, & Knight, 2011; Solomon et al., 2018; Terry et al., 2016; Turpen & Finkelstein, 2009; Van Daele, Frijns, & Lievens, 2017; Wolter, Lundeberg, Kang, & Herreid, 2011). A handful of status reviews and meta-analyses have recently appeared (Castillo-Manzano, Castro-Nuno, Lopez-Valpuesta, Sanz-Diaz, & Yniguez, 2016; Chien, Chang, & Chang, 2016; Hunsu, Adesope, & Bayly, 2016; MacArthur & Jones, 2008; Vickrey, Rosploch, Rahmanian, Pilarz, & Stains, 2015).

Fewer studies have addressed metacognition by probing specifically for judgments of knowing (Brooks & Koretsky, 2011; Egelandsdal & Krumsvik, 2017; Herreid et al., 2014; Murphy, 2012; Nagel & Lindsey, 2018). Several articles, through observation of student teams, have examined the patterns of student discourse and decision-making (Anthis, 2011; James, Barbieri, & Garcia, 2008; James & Willoughby, 2011; Knight, Wise, Rentsch, & Furtak, 2015; Knight, Wise, & Southard, 2013; Lewin, Vinson, Stetzer, & Smith, 2016; MacArthur & Jones, 2013; Perez et al., 2010). These mechanistic studies explore whether students engage with each other, what their conversation involves, and how they make clicker-response decisions. Based on these observations, recommendations for students and faculty have been offered (James & Willoughby, 2011; Knight et al., 2013; MacArthur & Jones, 2013).

This article takes a more direct approach and describes for the first time an explicit strategy for using clickers to assess these team process and communication behaviors, with concomitant reporting of results to the class, in order to encourage improvements in these behaviors.

Improving process skills (Stanford, Ruder, Lantz, Cole, & Reynders, 2017) and understanding discourse patterns (Knight et al., 2013; Kulatunga, Moog, & Lewis, 2014; Moon, Stanford, Cole, & Towns, 2017) are two current interest areas for active learning research and curriculum reform in STEM. Process skills include intra-team dynamics (management, teamwork, communication). Development of these skills has been a key reason for using cooperative learning. Research on cooperative learning establishes the importance of five critical features: face-to-face promotive interaction, positive interdependence, individual accountability, team process assessment, and team skill development (Johnson, Johnson, & Smith, 1991). Clickers may be used to assess student behaviors and provide evidence for how well course structure supports these critical features.

Student discourse (what they say, to whom, and how) is also of interest because particular types of discourse seem to encourage engagement and thinking (Chi, Kang, & Yaghmourian, 2017; Christian & Talanquer, 2012; King, 1990; Michaels, O’Connor, & Resnick, 2008; Moon et al., 2017; Young & Talanquer, 2013). Clicker questions can provide insight regarding discourse patterns and behaviors. This strategy may be particularly fruitful in large classes, where a single instructor would have difficulty monitoring and facilitating improvements in team behavior by direct engagement with every team.

Setting

Student participants were enrolled in a first-year general chemistry course at the University of New Hampshire from 2016 to 2018. The population of 150 to 200 students consisted primarily of first- and second-year students in the biological and health sciences, about 60% female and less than 5% non-English speaking or from underrepresented groups. The class format featured frequent student-centered activities: content explorations (Process-Oriented Guided Inquiry Learning [POGIL]), think-pair-share, and group quizzes. The room was theater style with fixed seats, one projection screen, and front-wall whiteboards.

Clicker questions were used for content review, to gauge progress and outcomes from POGIL explorations, and to complement slide-projected presentations. Clickers contributed 3% toward the course grade. Credit was awarded for each clicker response. Nearly all clicker events were “low stakes”; that is, students were not penalized for choosing “wrong” options. High-stakes questions tend to undercut the purpose of encouraging deeper and equitable discussions (James et al., 2008). Clicker questions regarding team processes were asked within the last two minutes of class. The instructor provided continual verbal support for clicker use and reminders about the purpose of process questions. Process questions were presented as “I’m really curious about …. Please give me an honest response.”

Positive incentives (contributions to the course grade) were used to encourage behaviors that supported a stronger cooperative structure, particularly promotive interaction and interdependence. Team functions were set up to become routine behaviors. Given the large class size, some team management responsibilities and potential record-keeping tasks were shifted from the instructor to the student teams. Students were instructed each day by an initial slide to form teams of three to four, assign team roles, obtain materials, and prepare for clicker use. Team membership was not assigned, and students were welcome to work with whomever they wished, which allowed teams to adjust membership to deal with absences. Regular roles within each team, at minimum, included a Recorder to make a collectable team record and a Spokesperson to report on behalf of the team, either orally or by writing on the board. Each day, a team report form was completed with “things learned,” “questions arising,” or specific answers to questions posed in class; each team listed the names and roles of its members. Individuals could earn final exam bonus points for serving as Recorder or Spokesperson on at least one quarter of the class days, so teams had to rotate roles to give everyone a chance to earn the bonus. A more prescriptive stance can be taken regarding team membership and role assignments (e.g., fixed group membership based on prior knowledge of student characteristics, and role rotation plans to ensure equitable opportunities) (Hodges, 2018; Simonson, 2019), but this requires additional instructor management. After several class periods, the daily reminders were minimized because everyone knew what to do.

Results and discussion

The argument here is that clickers may be used to check the status of, and to promote improvements in, key components of effective cooperative learning. Figure 1 shows a question asked at the end of one of the first class periods. It provided evidence that students joined together in small teams as requested, predominantly in threes. Three-member teams were suggested for this theater-style room because three in a row can see and refer to working materials placed in front of the center person. Teams of two were infrequent, foursomes were not reported, and singletons were rare. This team membership question can be asked multiple times early in the semester to provide formative feedback. It also confirms for the instructor that recommendations on team size were followed, and it shows students that compliance was nearly universal. A parallel question can be asked at the end of the semester as summative assessment. In this class, most students (about 80%) stayed with the same one or two teams all semester.

FIGURE 1
Response to “I worked in a group of how many today”

Two choices in Figure 1 also send a message concerning lone individuals. The choice “I was by myself and was not invited to join a group” suggests to all that inviting individuals to join your group is encouraged. In other words, it is not just the responsibility of lone individuals to reach out; it is everyone’s responsibility in this class to be inclusive. The potential for exclusion has arisen as an issue in team-oriented instructional settings (Eddy, Brownell, Thummaphan, Lan, & Wenderoth, 2015; Hodges, 2018), so this positive messaging and feedback may help mitigate the problem. The choice “I worked by myself and turned down an invitation to join a group” subtly suggests that people be willing to be accepted into a group. Although responses are self-reports and perhaps subject to social desirability bias, the anonymity of the clicker response should mitigate that effect (Paulhus & Reid, 1991). Students report appreciating anonymity (Fallon & Forrest, 2011), which suggests that they are inclined to report honestly.

The language of questions and choices was chosen to describe behaviors without being evaluative or judgmental: “Here is what I see” is the goal, versus “Here is what I judge about the behavior.” This issue was also faced in designing the Classroom Observation Protocol for Undergraduate STEM (COPUS) (Smith, Jones, Gilbert, & Wieman, 2013), in that responses involving judgments of value were more likely to lead to bias or conflict. The tone of the clicker question was intended to encourage appropriate behaviors without being stigmatizing or embarrassing. Lastly, because clicker responses counted toward the class grade, it was important that every student see an option they could choose for each question. Results were displayed to the class, allowing the instructor to reflect on the frequency of productive behaviors or to comment on changes in behavior that would be improvements. In the case of Figure 1, students were thanked for organizing (for the most part) into groups of effective size, and the “singleton” issue was mentioned as something to be aware of in the future (i.e., you should invite people in). Thus, this one slide provided an opportunity to assess team performance and to support the building of teamwork skills, two of the critical features of cooperative learning described earlier.

Figures 2 and 3 show questions regarding role implementation and perception (to support positive interdependence). Figure 2 looks at student use of a new role, the “Stopper.” If the conversation seemed to be leaving anyone behind, the Stopper was to intervene and “stop” the conversation. This question was a fidelity check on the use of that role. Figure 3 asks about the role of Spokesperson, who may have to report verbally or write on the board. It is typically not a favorite role, as Figure 3 shows. The wording was intended to ameliorate Spokesperson anxiety, open a dialogue regarding the value of the role, and convey that the team was responsible for supporting its Spokesperson. It also reminded students of the final exam point benefit.

FIGURE 2
Check for fidelity of implementation of role of “Stopper” in group

FIGURE 3
Response to “When you play role of spokesperson, how anxious are you really?”

Questions can interrogate the patterns of talk occurring within teams. Inequity of contributions within teams has been noted in previous research (James et al., 2008). Figure 4 explores how often each person reported contributing to the conversation. Figure 5 asks a similar question, but from a summative perspective over the semester. Results suggest that most students were participating.

FIGURE 4
Response to “So far today, how many times did you say something within your working group?”

FIGURE 5
Response to “Over the semester, how often during group discussion did you say something (observations, ideas, suggestions...)”

Figure 6 shows information regarding ineffective team behaviors. This question demonstrates that most teams were not distracted, but about 30 students reported team members who were off task or dominating the conversation. Assuming that some responses came from members of the same team, and a team size of three, one can estimate that perhaps 10 teams (out of about 30 in class) had challenges of this nature. The instructor praised team function as a whole and pointed out behaviors to avoid. Note that the anonymity of the response may have encouraged honest reporting of this “bad behavior.”
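To make the arithmetic behind that estimate explicit, a rough bound using only the figures stated above (about 30 reports, three members per team, roughly 30 teams present) is:

$$
\text{teams affected} \approx \frac{30\ \text{reports}}{3\ \text{members per team}} = 10 \quad \text{(out of roughly 30 teams present).}
$$

This is the lower bound implied by assuming that all members of an affected team submitted the same report; if every report came from a different team, the number of affected teams could be as high as the number of reports, so about 10 is the conservative reading of the data.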

FIGURE 6
Response to “One of the following behaviors happened during our group work”

Clicker questions can encourage talk that is more intellectually rich. The question in Figure 7 provides insight into communication moves that are valuable for promoting productive and equitable discussion (rephrasing, inviting, proposing). The question in Figure 8 takes a different perspective, that of an observer reporting on overall discourse structure for the whole team. This question sought insight into whether students were expressing complete thoughts and (hopefully) arguments, as opposed to just identifying answers without much rationale. Single-word utterances would be evidence for lower-level recall or recognition activity. More frequent use of full sentences would be evidence of higher-level thinking, explanation, and argument-building. (Research on student discourse in teams often relies on extended verbal structures to assess whether an argument is being built and justified.) Furthermore, just asking students directly about verbal communication patterns in a clicker question is a reminder to them to engage in thinking, and evidence shows that explicit reminders can lead to deeper reasoning (Knight, Wise, & Southard, 2013). Additionally, because clicker responses count, this accountability may nudge students to engage in better discussions (Knight et al., 2016).

FIGURE 7
Response to “(If you worked with a group), did you do any of these things?”

FIGURE 8
Response to “If someone were listening to your group today, they would have heard mostly...”

What are the relative affordances and costs of adding clicker questions to monitor and encourage team process? Students can read, process, and respond to a single question in about 30 seconds. Instructor comments and recommendations may take another 30 seconds. Questions can be repeated on subsequent days as reminders and to monitor for change. One question a day provides 30 to 40 opportunities in a semester to assess and improve team dynamics. Thus, a small investment of class time, using a technology that is likely already in place, can direct substantial attention to improving student team process skills. At the same time, the instructor will gain substantial insight concerning how students communicate, helping to guide decisions about how to structure discussion tasks.
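As an illustration of that time cost, a back-of-the-envelope estimate (assuming, hypothetically, about 35 class meetings, the midpoint of the 30 to 40 noted above) is:

$$
(30\ \text{s per question} + 30\ \text{s of instructor comment}) \times 35\ \text{class days} \approx 35\ \text{minutes per semester,}
$$

which is less than a single class period in total, spread across the term.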

Conclusions and implications

One approach to implementing cooperative learning in STEM higher education is to alter the physical teaching space, for example, the SCALE-UP model (Foote, Knaub, Henderson, Dancy, & Beichner, 2016). Rooms are designed with seating conducive to discussion, means for visible sharing of work products, and pathways that allow student and instructor movement. The capacity of these rooms tops out at about 100 students (10 to 25 groups, depending on seating). Because many larger institutions have course capacities two to six times larger than this, moving to a SCALE-UP format poses significant challenges: creating or renovating space, assigning more faculty to smaller-enrollment classes, and aligning schedules. A compromise approach implements hierarchical instructional facilitation (e.g., Lewis & Lewis, 2005; Yezierski et al., 2008). Students in class are organized into clusters of small teams, each cluster located near one graduate or undergraduate facilitator who mediates between the teams and the course instructor. This approach may use traditional rooms, class sizes, and schedules, but it requires staffing and training of student facilitators and fortuitous schedule overlap. The logistical and financial demands are nontrivial and perhaps not sustainable.

A third model is possible that works within the constraints of a single instructor and a large-enrollment classroom and, through design, establishes features that emulate the important research-based characteristics of a cooperative learning environment. For a class of 200, where there may be 40 to 60 teams, it is impossible to give more than a few teams the direct attention they need. Too much time may be spent monitoring team membership, addressing absentee issues, or cajoling individuals to get together as a team. In such a setting, it is also difficult to facilitate with personal charm rather than frustration. Students are adults and respond positively to “suggest and expect” rather than “demand and punish.” Consequently, the goal is to enact management actions that are true to the principles of cooperative learning cited previously but that avoid increasing complexity or time demands for the instructor or students. In particular, class processes must:

• Be simple for the instructor to implement and simple for students to understand, so that the processes become routine. Devices or procedures that require more than a few minutes to explain, or that require repeated instructions, end up misinterpreted, which interferes with the intended learning process and wastes time.

• Minimize extra work for the instructor yet leverage important features to support learning outcomes for students. Clicker questions that support assessment of team communication skills fit this criterion.

• Turn some management responsibility over to students (e.g., team formation, role rotation), incentivize via awarding course points, and then check for implementation via clickers.

This article presents a proof of concept supported by a research-based rationale. Data have been presented from a real classroom along with implementation guidance. However, no evidence has been developed yet to indicate whether the suggested questions or approach actually lead to the desired improvements in team communication. Certainly, the clicker questions and choices presented here could be expanded and improved, and research on efficacy pursued.

The literature review conducted for this article suggests that the pedagogic vision for clickers has been constrained by thinking of them only as an electronic extension of multiple-choice content testing (Beatty & Gerace, 2009). This article steps outside of that box by arguing that clickers can be used to monitor and direct improvement in team behaviors. Other authors have also stepped outside the box with innovative applications or approaches. For example, Bunce, Flens, and Neiles (2010) studied student attentiveness patterns during lecture and active-learning class periods. Cleary (2008) replicated classic psychology experiments in perception and recall. Organic chemists devised clever text string responses to allow a broader array of choices for students to describe molecular structures, synthetic sequences, or mechanistic reaction pathways (Flynn, 2011; Morrison, Caughran, & Sauers, 2014). More good ideas are out there.

Acknowledgments

Thanks to Dr. Kathleen Jeffery for comments on the manuscript. 

 

Christopher Bauer (Chris.Bauer@unh.edu) is a professor in the Department of Chemistry at the University of New Hampshire in Durham, New Hampshire. 

References

Adams C. C. (2014). Classroom response systems: Effects on the critical analysis skills of students in introductory science courses. School Science & Mathematics, 114(8), 367–379.

Anthis K. (2011). Is it the clicker, or is it the question? Untangling the effects of student Response System Use. Teaching of Psychology, 38(3), 189–193.

Beatty I. D., & Gerace W. J. (2009). Technology-enhanced formative assessment: A research-based pedagogy for teaching science with classroom response technology. Journal of Science Education and Technology, 18(2), 146–162.

Brady M., Seli H., & Rosenthal J. (2013). “Clickers” and metacognition: A quasi-experimental comparative study about metacognitive self-regulation and use of electronic feedback devices. Computers & Education, 65, 56–63.

Brooks B. J., & Koretsky M. D. (2011). The influence of group discussion on students’ responses and confidence during Peer Instruction. Journal of Chemical Education, 88(11), 1477–1484.

Buil I., Catalan S., & Martinez E. (2016). Do clickers enhance learning? A control-value theory approach. Computers & Education, 103, 170–182.

Bunce D. M., Flens E. A., & Neiles K. Y. (2010). How long can students pay attention in class? A study of student attention decline using clickers. Journal of Chemical Education, 87(12), 1438–1443.

Castillo-Manzano J. I., Castro-Nuno M., Lopez-Valpuesta L., Sanz-Diaz M. T., & Yniguez R. (2016). Measuring the effect of ARS on academic performance: A global meta-analysis. Computers & Education, 96, 109–121.

Chen W. T., Zhang J. Y., & Yu Z. G. (2017). Advantages and disadvantages of clicker use in education. International Journal of Information and Communication Technology Education, 13(1), 61–71.

Chi M. T. H., Kang S., & Yaghmourian D. L. (2017). Why students learn more from dialogue- than monologue-videos: Analyses of peer interactions. Journal of the Learning Sciences, 26(1), 10–50.

Chien Y. T., Chang Y. H., & Chang C. Y. (2016). Do we click in the right way? A meta-analytic review of clicker-integrated instruction. Educational Research Review, 17, 1–18.

Christian K., & Talanquer V. (2012). Content-related interactions in self-initiated study groups. International Journal of Science Education, 34(14), 2231–2255.

Cleary A. M. (2008). Using wireless response systems to replicate behavioral research findings in the classroom. Teaching of Psychology, 35, 42–44.

Cotes S., & Cotua J. (2014). Using audience response systems during interactive lectures to promote active learning and conceptual understanding of stoichiometry. Journal of Chemical Education, 91(5), 673–677.

Eddy S. L., Brownell S. E., Thummaphan P., Lan M. C., & Wenderoth M. P. (2015). Caution, student experience may vary: Social identities impact a student’s experience in peer discussions. CBE-Life Sciences Education, 14(4), ar45.

Egelandsdal K., & Krumsvik R. J. (2017). Clickers and formative feedback at university lectures. Education and Information Technologies, 22(1), 55–74.

Emenike M. E., & Holme T. A. (2012). Classroom response systems have not “crossed the chasm”: Estimating numbers of chemistry faculty who use clickers. Journal of Chemical Education, 89(4), 465–469.

Fallon M., & Forrest S. L. (2011). High-tech versus low-tech instructional strategies: A comparison of clickers and handheld response cards. Teaching of Psychology, 38(3), 194–198.

Flynn A. B. (2011). Developing problem-solving skills through retrosynthetic analysis and clickers in organic chemistry. Journal of Chemical Education, 88(11), 1496–1500.

Foote K., Knaub A., Henderson C., Dancy M., & Beichner R. J. (2016). Enabling and challenging factors in institutional reform: The case of SCALE-UP. Physical Review Physics Education Research, 12, 010103.

Fortner-Wood C., Armistead L., Marchand A., & Morris F. B. (2013). The effects of student response systems on student learning and attitudes in undergraduate psychology courses. Teaching of Psychology, 40(1), 26–30.

Freeman T., & Vanden Heuvel B. (2015). Who’s in the room? Using clickers to assess students’ needs, attitudes and prior knowledge. In Goldstein D.S. & Wallis P.D. (Eds.), Clickers in the classroom: Using classroom response systems to increase student learning (p. 29). Sterling, VA: Stylus Publishing.

Gibbons R. E., Laga E. E., Leon J., Villafane S. M., Stains M., Murphy K., & Raker J. R. (2017). Chasm Crossed? Clicker use in postsecondary chemistry education. Journal of Chemical Education, 94(5), 549–557.

Goldstein D. S., & Wallis P. D. (Eds.). (2015). Clickers in the classroom: Using classroom response systems to increase student learning. Sterling, VA: Stylus Publishing.

Gray K., & Steer D. N. (2012). Personal response systems and learning: It is the pedagogy that matters, not the technology. Journal of College Science Teaching, 41(5), 80–88.

Herreid C. F., Terry D. R., Lemons P., Armstrong N., Brickman P., & Ribbens E. (2014). Emotion, engagement, and case studies. Journal of College Science Teaching, 44(1), 86–95.

Hodges L. C. (2017). Ten research-based steps for effective group work. IDEA Paper #65.

Hodges L. C. (2018). Contemporary issues in group learning in undergraduate science classrooms: A perspective from student engagement. CBE-Life Sciences Education, 17(2), 1–10.

Hodges L. C., Anderson E. C., Carpenter T. S., Cui L., Feeser E. A., & Gierasch T. M. (2017). Using clickers for deliberate practice in five large science courses. Journal of College Science Teaching, 47(2), 22–28.

Hunsu N. J., Adesope O., & Bayly D. J. (2016). A meta-analysis of the effects of audience response systems (clicker-based technologies) on cognition and affect. Computers & Education, 94, 102–119.

James M. C., Barbieri F., & Garcia P. (2008). What are they talking about? Lessons learned from a study of peer instruction. Astronomy Education Review, 7(1), 37–43.

James M. C., & Willoughby S. (2011). Listening to student conversations during clicker questions: What you have not heard might surprise you! American Journal of Physics, 79, 123–132.

Johnson D. W., Johnson R. T., & Smith K. A. (1991). Cooperative learning: Increasing college faculty instructional productivity. ASHE-ERIC Higher Education Report No. 4. Washington, DC: The George Washington University, School of Education and Human Development.

King A. (1990). Enhancing peer interaction and learning in the classroom through reciprocal questioning. American Educational Research Journal, 27(4), 664–687.

King D. B. (2011). Using clickers to identify the muddiest points in large chemistry classes. Journal of Chemical Education, 88(11), 1485–1488.

Knight J. K., Wise S. B., Rentsch J., & Furtak E. M. (2015). Cues matter: Learning assistants influence introductory biology student interactions during clicker-question discussions. CBE-Life Sciences Education, 14(4), ar41.

Knight J. K., Wise S. B., & Sieke S. (2016). Group random call can positively affect student in-class clicker discussions. CBE-Life Sciences Education, 15(4), ar56.

Knight J. K., Wise S. B., & Southard K. M. (2013). Understanding clicker discussions: Student reasoning and the impact of instructional cues. CBE-Life Sciences Education, 12, 645–654.

Koenig K. (2010). Building acceptance for pedagogical reform through wide-scale implementation of clickers. Journal of College Science Teaching, 39(3), 46–50.

Kulatunga U., Moog R. S., & Lewis J. E. (2014). Use of Toulmin’s argumentation scheme for student discourse to gain insight about guided inquiry activities in college chemistry. Journal of College Science Teaching, 43(5), 78–86.

Kulesza A. E., Clawson M. E., & Ridgway J. S. (2014). Student success indicators associated with clicker-administered quizzes in an honors introductory biology course. Journal of College Science Teaching, 43(4), 73–79.

Lewin J. D., Vinson E. L., Stetzer M. R., & Smith M. K. (2016). A campus-wide investigation of clicker implementation: The status of peer discussion in STEM classes. CBE-Life Sciences Education, 15(1), ar6.

Lewis S. E. & Lewis J. E. (2005). Departing from lectures: An evaluation of a peer-led guided inquiry alternative. Journal of Chemical Education, 82(1), 135–139.

MacArthur J. R., & Jones L. L. (2008). A review of literature reports of clickers applicable to college chemistry classrooms. Chemistry Education Research and Practice, 9(3), 187–195.

MacArthur J. R., & Jones L. (2013). Self-assembled student interactions in undergraduate general chemistry clicker classrooms. Journal of Chemical Education, 90(12), 1586–1589.

Mayer R. E., Stull A., DeLeeuw K., Almeroth K., Bimber B., Chun D., Bulger M., Campbell J., Knight A., & Zhang H. J. (2009). Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology, 34(1), 51–57.

Michaels S., O’Connor C., & Resnick L. B. (2008). Deliberative discourse idealized and realized: Accountable talk in the classroom and in civic life. Studies in Philosophy and Education, 27(4), 283–297.

Milner-Bolotin M., Antimirova T., & Petrov A. (2010). Clickers beyond the first-year science classroom. Journal of College Science Teaching, 40(2), 14–18.

Moon A., Stanford C., Cole R., & Towns M. (2017). Decentering: A characteristic of effective student-student discourse in inquiry-oriented physical chemistry classrooms. Journal of Chemical Education, 94(7), 829–836.

Morgan J. T., & Wakefield C. (2012). Who benefits from peer conversation? Examining correlations of clicker question correctness and course performance. Journal of College Science Teaching, 41(5), 51–56.

Morrison R. W., Caughran J. A., & Sauers A. L. (2014). Classroom response systems for implementing Interactive Inquiry in large organic chemistry classes. Journal of Chemical Education, 91, 1838–1844.

Murphy K. (2012). Using a personal response system to map cognitive efficiency and gain insight into a proposed learning progression in preparatory chemistry. Journal of Chemical Education, 89(10), 1229–1235.

Nagel M., & Lindsey B. (2018). The use of classroom clickers to support improved self-assessment in introductory chemistry. Journal of College Science Teaching, 47(5), 72–79.

Niemeyer E. D., & Zewail-Foote M. (2018). Investigating the influence of gender on student perceptions of the clicker in a small undergraduate general chemistry course. Journal of Chemical Education, 95(2), 218–223.

Oswald K. M., Blake A. B., & Santiago D. T. (2014). Enhancing immediate retention with clickers through individual response identification. Applied Cognitive Psychology, 28(3), 438–442.

Paulhus D. L., & Reid D. B. (1991). Enhancement and denial in socially desirable responding. Journal of Personality and Social Psychology, 60(2), 307–317.

Pearson R. J. (2017). Tailoring clicker technology to problem-based learning: What’s the best approach? Journal of Chemical Education, 94(12), 1866–1872.

Perez K. E., Strauss E. A., Downey N., Galbraith A., Jeanne R., & Cooper S. (2010). Does displaying the class results affect student discussion during peer instruction? CBE-Life Sciences Education, 9(2), 133–140.

Ribbens E. (2007). Why I like clicker personal response systems. Journal of College Science Teaching, 37(2), 60–62.

Sevian H., & Robinson W. E. (2011). Clickers promote learning in all kinds of classes—small and large, graduate and undergraduate, lecture and lab. Journal of College Science Teaching, 40(3), 14–18.

Simonson S. R. (Ed.). (2019). POGIL: An introduction to process oriented guided inquiry learning for those who wish to empower learners. Sterling, VA: Stylus Publishing.

Skinner S. (2009). On clickers, questions, and learning. Journal of College Science Teaching, 38(4), 20–23.

Smith M. K., Jones F. H. M., Gilbert S. L., & Wieman C. E. (2013). The classroom observation protocol for undergraduate STEM (COPUS): A new instrument to characterize university STEM classroom practices. CBE-Life Sciences Education, 12(4), 618–627.

Smith M. K., Wood W. B., Krauter K., & Knight J. K. (2011). Combining peer discussion with instructor explanation increases student learning from in-class concept questions. CBE-Life Sciences Education, 10(1), 55–63.

Solomon E. D., Repice M. D., Mutambuki J. M., Leonard D. A., Cohen C. A., Luo J., & Frey R. F. (2018). A mixed-methods investigation of clicker implementation styles in STEM. CBE-Life Sciences Education, 17(2), ar30.

Stanford C., Ruder S., Lantz J., Cole R., & Reynders G. (2017). Enhancing learning by improving process skills in STEM (ELIPSS): Development and implementation of interaction rubrics. 254th American Chemical Society National Meeting, Washington DC.

Terry D. R., Lemons P., Armstrong N., Brickman P., Ribbens E., & Herreid C. F. (2016). Eight is not enough: The level of questioning and its impact on learning in clicker cases. Journal of College Science Teaching, 46(2), 82–92.

Turpen C., & Finkelstein N. D. (2009). Not all interactive engagement is the same: Variations in physics professors’ implementation of Peer Instruction. Physical Review Special Topics - Physics Education Research, 5, 020101.

Van Daele T., Frijns C., & Lievens J. (2017). How do students and lecturers experience the interactive use of handheld technology in large enrollment courses? British Journal of Educational Technology, 48(6), 1318–1329.

Vickrey T., Rosploch K., Rahmanian R., Pilarz M., & Stains M. (2015). Research-based implementation of peer instruction: A literature review. CBE-Life Sciences Education, 14(1), es3.

Wolter B. H. K., Lundeberg M. A., Kang H., & Herreid C. F. (2011). Students’ perceptions of using personal response systems (“clickers”) with cases in science. Journal of College Science Teaching, 40(4), 14–19.

Yezierski E. J., Bauer C. F., Hunnicutt S. S., Hanson D. M., Amaral K. E., & Schneider J. P. (2008). POGIL implementation in large classes: Strategies for planning, teaching, and management. In Moog R. S. & Spencer J. (Eds.), Process-oriented guided inquiry learning (pp. 60–71). Washington, DC: American Chemical Society.

Young K. K., & Talanquer V. (2013). Effect of different types of small-group activities on students’ conversations. Journal of Chemical Education, 90(9), 1123–1129.
