Research and Teaching

A Novel Rubric Format for Providing Feedback on Process Skills to STEM Undergraduate Students

Journal of College Science Teaching—July/August 2021 (Volume 50, Issue 6)

By Doug Czajka, Gil Reynders, Courtney Stanford, Renée Cole, Juliette Lantz, and Suzanne Ruder

To improve student process-skill development, a novel type of rubric was developed that goes beyond a typical analytic rubric by providing detailed feedback to students. Process skills are transferable skills such as information processing, critical thinking, communication, and teamwork; these skills are necessary for success in all STEM courses as well as in the workforce. In addition to the categories and descriptors commonly found in rubrics, these “feedback-style” rubrics also contain observable characteristics and suggestions for improvement. The observable characteristics provide specific criteria to look for when assessing students’ written work or group interactions. The suggestions for improvement are intended to promote a growth mindset in students and help them further progress in their development of each skill. In a large-enrollment class, undergraduate teaching assistants (UTAs) used the feedback rubrics to rate student skills while students also self-assessed their skills. The results of our statistical analysis indicate that after the feedback rubrics were provided to students, their subsequent self-assessments became more similar to the UTA scores. These rubrics can be used in STEM disciplines at multiple course levels to assess and provide feedback to students on their skill development.


Process skills, also known as transferable, professional, or soft skills, are important components of the learning environment in STEM classrooms. This is especially true when using active-learning techniques that often require higher-order thinking (cognitive skills) and peer interaction (interpersonal skills) during class activities. The cognitive skills include information processing, critical thinking, and problem solving, while the interpersonal skills include interpersonal communication, teamwork, and management. These skills also carry value beyond the classroom as they are necessary in preparing students for their future careers and roles as contributing citizens (NRC, 2012). Employers often cite these skills as desirable in new hires, even valuing them over technical knowledge related to the job (NACE, 2020; AAC&U, 2018). While process skills are embedded in course- and program-level learning outcomes within many STEM disciplines at many institutions, they are rarely the explicit focus of classroom learning goals. Students are rarely given direct instruction (NRC, 2012) or assessed on their process skills, and thus they are less likely to receive feedback on the development of these skills as compared to the feedback they receive on content. This lack of alignment between intended learning outcomes and assessment practices (Biggs, 2003) means that while instructors may tell students that process skills are valuable, students may perceive the lack of process skill assessment and feedback as an indicator that skill development is unnecessary.

Feedback is critical in helping students gauge their progress toward achieving intended learning outcomes, and it ranks among the most effective ways of improving student achievement (Hattie, 2008; Kluger & DeNisi, 1996; Schneider & Preckel, 2017). It is important to provide formative feedback during each unit of instruction (related lessons) along with summative feedback at the end of a unit (Nicol & MacFarlane-Dick, 2006). Hattie and Timperley (2007) provide a model of feedback suggesting that feedback provided to students should answer three questions: (1) What are the goals? (2) What progress am I making toward the goals? and (3) What should I do to make better progress? Answering these questions can be achieved by providing students with clear feedback about their performance in relation to the intended learning outcomes and developmental feedback that students can use to identify goals and strategies to improve (Ferguson, 2011; Lizzio & Wilson, 2008; Shute, 2008). Wollenschläger et al. (2016) demonstrated that when rubrics were used to assess students’ ability to design a scientific experiment, including improvement information in addition to a performance evaluation led to better experiment planning and enabled students to more accurately assess their own abilities. Surveys and interviews with students reinforce this idea, showing that they find targeted feedback with guidance for improvement to be most constructive (Fong et al., 2018; Weaver, 2006).

Instructors may think that students do not pay attention to or use feedback, but there is evidence that the opposite is true: Most students value and use feedback extensively (Mulliner & Tucker, 2017; Zimbardi et al., 2017). When Mulliner and Tucker (2017) surveyed undergraduate students, they found that 93% of respondents said that they always use the feedback provided to them. The results of Zimbardi et al. (2017) were similar: “Ninety-two percent of first-year and 85% of second-year students accessed their feedback, with 58% accessing their feedback for over an hour.” With the evidence supporting the efficacy and high student use of feedback, it is clearly a critical component of the learning environment.

While the studies previously outlined have looked at the role of feedback on student performance with respect to content learning, it is likely that feedback can play a similar role in improving student process skills. However, many instructors do not explicitly focus on the development of student process skills in the classroom, assess these skills, and then provide students with appropriate feedback on their skill development; instead, instructors may assume that students are developing process skills without directly measuring them (NRC, 2012). Knowing that students value and use feedback, we assert that the key to improving student process skills lies in the nature of the feedback they receive.

The Enhancing Learning by Improving Process Skills in STEM project

The Enhancing Learning by Improving Process Skills in STEM (ELIPSS) project was started with the goal of developing resources that instructors could use to identify, develop, and assess process skills in undergraduate STEM courses (Cole et al., 2018). The project began with efforts to create process-skill rubrics for a single organic chemistry course (Ruder et al., 2018). However, because these skills transcend any particular STEM discipline or course, the project used a multidisciplinary collaboration team to generate a set of rubrics that can be applied across disciplines and learning environments. The choice of a rubric was supported by evidence that student and instructor perceptions of rubrics in higher education tend to be positive and that rubrics can lead to improvements in academic performance and in course instruction or activities (Bauer & Cole, 2012; Reddy & Andrade, 2010). Analytic rubrics (Dawson, 2017) were created for the following process skills: information processing, critical thinking, problem solving, written and interpersonal communication, teamwork, and management. Figure 1 shows an example of the information processing analytic rubric developed by the ELIPSS project. These rubrics use a traditional grid layout in which each row represents a category (i.e., an evaluative criterion) within the targeted skill and each column is a performance level, with quality descriptors populating the grid for ratings of one, three, and five. The rubrics were designed for assessing process skills by evaluating either students’ written work or their interactions during in-class group work.

Figure 1
Analytic-style rubric for information processing. The definition for information processing is provided at the top of the rubric followed by four aspects of information processing that are each assessed as separate rubric categories. (Used with permission of the ELIPSS project.)

The analytic process-skill rubrics were classroom tested by the authors and a primary collaboration team (PCT) of instructors who were trained and experienced in facilitating and developing process skills through the use of Process Oriented Guided Inquiry Learning (POGIL) (Moog & Spencer, 2008; Simonson, 2019) in their classrooms. The PCT instructors taught in a variety of disciplines, including chemistry, computer science, biology, engineering, and mathematics. Our previous work has described the development process for the analytic rubrics, including a review of existing tools for assessing process skills and how the rubrics were tested for validity, reliability, and their utility for providing feedback (Reynders et al., 2019). The instructors used the rubrics in a variety of settings, including small- and large-enrollment (>150 students) classes as well as laboratory courses, at a range of institutions (see Table 1 in Cole et al., 2018, and in Cole et al., 2019).

Limitations of the analytic rubrics

The ELIPSS project produced analytic rubrics for process skills that can be implemented in multiple STEM learning environments to assess students, but one issue remained: how to provide more detailed feedback that helps students improve their process-skill development. While the ELIPSS analytic rubrics were beneficial in helping instructors become more familiar with and assess process skills in their classrooms, they did not necessarily provide a clear pathway for delivering actionable feedback to students. Classroom use demonstrated that the analytic rubrics can measure students’ process-skill growth throughout a course, but students still struggled to assess their own skills (Reynders et al., 2020), and improvements were not seen consistently across all settings. Traditional analytic rubrics give students a sense of “What are the goals?” by pointing out what proficiency looks like in the descriptor for the highest rating in a category. Students can begin to get a sense of “What progress am I making?” by looking at a rating for their written work or group interactions. However, if these ratings do not help students understand what specific actions or qualities of work earned them, students will get only a glimpse of “What should I do to make better progress?” and the feedback they receive may have diminished impact. While students may identify the criteria for improved performance by looking at the rubric, they may not have a good sense of what actions they need to take to reach that level. Ultimately, analytic rubrics can tell students about their current performance, but the descriptors in these rubrics are primarily evaluative and may not provide easily interpretable guidance for how students can improve. Thus, an opportunity to foster a growth mindset in students may be missed.

Interviews with students revealed that they could use the ELIPSS analytic rubrics to understand their current performance, but they did not know how to change their behavior to improve their scores. When students received scores of less than five, they would say that they “know the area that needs to be improved upon, just not what specifically needs to be improved in the area” and that their instructor or teaching assistant should “try to point out what I should do to get a five.” Some students received written comments on their rubrics and said that these comments would be key to helping them improve. For example, one student said that “without the comments, I’d have a vague idea of what to do, but nothing specific enough that I think I could change it.” Another student said that “I feel like [the analytic rubric is] more formatted to be reflective rather than ‘these are the steps you should take’ so maybe if there was another column...like recommendations.” In addition to the student interviews, feedback from undergraduate teaching assistants (UTAs) also informed changes to the analytic rubrics. After these UTAs assessed group interactions during class, they reported that it was difficult to write down student scores while also trying to write comments on the rubric. Based on feedback from instructors, teaching assistants, and student users of the analytic-style rubrics, the ELIPSS project set out to design a new style of rubric that is better suited for delivering actionable feedback to students.

Development of feedback-style rubrics

Institutional Review Board approval was obtained at Virginia Commonwealth University with the project number HM20005212. The project was classified as exempt, and informed consent was obtained from all participants prior to any data collection. The goal of the feedback-style rubric development was to create a rubric that was more intuitive to use when assessing student process skills and that enabled instructors to give a range of viable suggestions for student improvement. As with the analytic rubrics, these new feedback rubrics were intended to use language that is accessible and applicable to undergraduate students across multiple STEM disciplines. Additionally, we wanted the feedback rubrics to contain suggestions for improvement that resemble the advice an instructor might give a student in a face-to-face interaction. With these principles in mind, both the language and format of the analytic rubrics were adapted into a new format dubbed a feedback rubric (Figure 2). These feedback rubrics continue to be tested through the same iterative process used to develop the analytic rubrics (Cole et al., 2018), in which feedback is gathered from faculty members, teaching assistants, and students. This new rubric style contains three sections for each process-skill category. First, the ratings section contains the same language as the analytic rubrics but in a condensed format to save space for the other parts of the feedback rubric. In this section, a clear definition of the category is provided along with a rating scale from 0 to 5 that includes adverb modifiers relevant to each category. Next to the ratings section is a list of observable characteristics for each category. These are behaviors or indicators that the rater can look for as evidence of the skill during student interactions or in student work products (e.g., exams, group quizzes, projects, laboratory reports, homework). Finally, for each category there is a list of suggestions for improvement that the evaluator can use to extend the feedback to include concrete actions students can take to improve their performance.

Figure 2
Feedback-style rubric for information processing. (Used with permission of the ELIPSS project.)
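To make the three-part structure concrete, the sketch below shows one hypothetical way a feedback-rubric category could be represented for digital use; the class name, field names, and example strings are illustrative assumptions rather than the ELIPSS rubric language itself, which is available at http://elipss.com.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FeedbackRubricCategory:
    """One category of a feedback-style rubric: a definition with a 0-5 rating,
    the observable characteristics a rater looks for, and the suggestions for
    improvement that can be marked as feedback."""
    name: str
    definition: str
    observable_characteristics: List[str] = field(default_factory=list)
    suggestions_for_improvement: List[str] = field(default_factory=list)
    rating: Optional[int] = None   # 0 (low) to 5 (high), assigned by the rater
    comments: str = ""             # free-form comments from the rater

# Hypothetical content for illustration only; see http://elipss.com for the
# actual rubric language.
category = FeedbackRubricCategory(
    name="Evaluating information",
    definition="Determines the relevance and reliability of information.",
    observable_characteristics=[
        "Identifies which parts of the model are needed to answer the question",
        "States why a piece of information is or is not relevant",
    ],
    suggestions_for_improvement=[
        "Before answering, discuss as a group which information is relevant and why",
    ],
)
category.rating = 3
print(category)
```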

Piloting the feedback rubrics

The new feedback versions of the process-skill rubrics were piloted in multiple classrooms at three universities. Here we describe the implementation in a large-enrollment (>150 students), two-semester organic chemistry course at a large, public, research-intensive university. The course was taught using the POGIL methodology (Moog & Spencer, 2008; Simonson, 2019), in which students spent most of the class time working on guided inquiry group activities. The course was supported by 10–13 UTAs per semester who were trained to facilitate large-enrollment, active learning environments (Ruder & Stanford, 2018, 2020). The UTAs attended weekly training meetings to learn how to facilitate course content during class activities and to elicit and assess student process skills during group interactions using the ELIPSS analytic rubrics. Midway through the fall 2018 semester of Organic Chemistry I, the UTAs were trained in the use of the feedback rubrics and transitioned to using them exclusively.

Part of the UTA responsibilities included providing an end-of-semester reflection on their experience as an organic chemistry UTA. The reflections demonstrated an overwhelming preference for the newer feedback rubrics, and some common ideas arose that were representative of most UTA reflections. Many UTAs commented on the usefulness of the observable characteristics section; as one stated, “for a lot of the more abstract concepts like information processing and critical thinking, it helped to contextualize them by assigning certain behaviors to them.” Other UTAs similarly mentioned that the observable characteristics helped them become aware of actions that they would not have considered representative of a certain process skill, and this likely made their ratings more accurate. Additionally, the UTAs found the suggestions for improvement to be a valuable component of the new rubrics. As one UTA stated, “the suggestions for improvement definitely helped me better identify areas that each group was struggling with and formulate feedback that represented the goals of each process skill. With the original [analytic] rubrics, I felt that I was making more general statements that weren’t necessarily aligned with the specific skills being assessed.” From a more practical perspective, the UTAs reported that they could give comments to students more quickly than before because they could simply mark an item in the suggestions for improvement section instead of writing all comments by hand.

Student self-assessment and the efficacy of feedback

During the fall 2018 and spring 2019 semesters, students’ abilities to assess their own process skills were investigated. Additionally, the role that external feedback, in the form of a UTA-completed feedback rubric, could play in improving student self-assessments was explored. Throughout the fall 2018 Organic Chemistry I course, student groups used a form of the feedback rubrics that did not contain the suggestions for improvement section to assess their group’s information processing, teamwork, and critical thinking at various times. Students were asked to check all of the behaviors they engaged in during group activities and to give themselves a rating for each category. On each day that students assessed one of their skills, UTAs assessed the student groups on the same process skill. As Figure 3 shows, a majority of student groups overestimated their performance in demonstrating process skills compared to the rating given by the UTA, a finding that is in line with the idea that students tend to overestimate their performance on content assessments (Hacker et al., 2000; Hawker et al., 2016). This type of overestimation could lead students to think that their skills are already sufficient and thus could inhibit their motivation to improve (Kruger & Dunning, 1999).

Figure 3
Student estimation of process skills without having received external feedback on their skills in the form of a completed UTA rubric. Any dots on the diagonal line represent perfect agreement between the UTA scores and the student self-assessment scores. Dots above the diagonal represent student overestimation, while dots under the diagonal represent student underestimation. IP = Information Processing, CT = Critical Thinking, and TW = Teamwork.

In the spring 2019 Organic Chemistry II course, we tested whether students would have more accurate perceptions of their process skills if they received completed feedback rubrics from the UTAs. The UTAs in spring 2019 were the same cohort who had been trained in the use of the feedback rubrics in fall 2018. During spring 2019, UTAs exclusively used the feedback rubrics to assess student process skills. Additionally, paper rubrics were replaced by digital versions created in Google Sheets that UTAs could access during class via an electronic device such as a phone or tablet. The use of digital rubrics also allowed feedback to be delivered to each individual member of a group, as opposed to the entire group sharing a single paper rubric. Completed digital rubrics were converted to PDFs and uploaded to the online grading software Gradescope (Singh et al., 2017). Gradescope allowed the course instructor to see whether a student had opened a graded assignment and thus whether the feedback had been reviewed.
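The conversion step is not detailed here, but the short sketch below illustrates one possible way to export a completed Google Sheets rubric as a PDF using the spreadsheet export URL; the spreadsheet and tab IDs are hypothetical placeholders, the sheet is assumed to be link-viewable (a private sheet would require an authenticated session), and the resulting PDF would still be uploaded to Gradescope through its usual interface.

```python
import requests

# Hypothetical IDs; replace with the real spreadsheet ID and the gid of the
# tab that holds one group's completed rubric.
SPREADSHEET_ID = "your-spreadsheet-id"
SHEET_GID = "0"

export_url = (
    f"https://docs.google.com/spreadsheets/d/{SPREADSHEET_ID}/export"
    f"?format=pdf&gid={SHEET_GID}"
)

response = requests.get(export_url, timeout=30)
response.raise_for_status()

with open("group_rubric.pdf", "wb") as pdf_file:
    pdf_file.write(response.content)
# The resulting PDF can then be uploaded to Gradescope for students to review.
```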

Students were again asked to self-assess their group skills in spring 2019, but they did so individually. During a class period midway through the semester, the UTAs rated each group using the teamwork feedback rubric. After class, students were asked to individually rate their group’s teamwork skills using a digital version of the rubric through the course management software. As in the fall semester, students in the spring semester overestimated their teamwork skills compared to the UTA ratings (Figure 4a). A few days before students were assessed on their teamwork skills for a second time, the UTA-completed rubric was returned to students via Gradescope so they could see the ratings, suggestions for improvement, and any comments made by the UTAs. This timing was a function of the logistics of providing the feedback and was intended to encourage students to reflect on their prior performance shortly before being assessed again. As with the first assessment, the UTAs assessed students on their teamwork, and after class, students assessed themselves. During this second self-assessment in the spring, students were more accurate in their self-assessments (Figure 4b). This was determined statistically using Lin’s concordance correlation coefficient (rc), which ranges from -1 to 1 and measures how close a line of best fit is to a 45° line through the origin, which in this case represents perfect agreement between the student and UTA ratings. The concordance for the postfeedback self-assessment (rc = 0.403, n = 97) was closer to 1 (perfect agreement) than that for the prefeedback self-assessment (rc = 0.175, n = 137), indicating better alignment between UTA ratings and student self-assessments after students received external feedback, including the suggestions for improvement.
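For instructors who wish to run the same analysis on their own ratings, the sketch below computes Lin’s coefficient from paired ratings using the standard formula rc = 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²); the scores shown are hypothetical values on the 0–5 rubric scale, not data from this study.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for two sets of paired ratings.

    rc = 2 * cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (biased) variances and covariance.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mean_x, mean_y = x.mean(), y.mean()
    covariance = np.mean((x - mean_x) * (y - mean_y))
    return 2 * covariance / (x.var() + y.var() + (mean_x - mean_y) ** 2)

# Hypothetical paired ratings on the 0-5 rubric scale
uta_ratings = [3, 4, 2, 5, 3, 4]
student_ratings = [4, 4, 3, 5, 4, 5]
print(f"Lin's rc = {lins_ccc(uta_ratings, student_ratings):.3f}")
```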

Figure 4
Accuracy of student self-assessments relative to the UTA assessments. Each red dot represents one student’s self-assessment versus the UTA assessment. The solid black line represents perfect agreement (y = x), and the lighter black line is a line of best fit for the red dots.
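A plot in the style of Figures 3 and 4 can be reproduced with a few lines of code; the sketch below uses hypothetical ratings and standard matplotlib calls to draw the scatter of self-ratings against UTA ratings, the y = x line of perfect agreement, and a least-squares line of best fit.

```python
import numpy as np
import matplotlib.pyplot as plt

def agreement_plot(uta, student, title):
    """Scatter student self-ratings against UTA ratings, with the y = x line
    of perfect agreement and a least-squares line of best fit."""
    uta = np.asarray(uta, dtype=float)
    student = np.asarray(student, dtype=float)

    fig, ax = plt.subplots()
    ax.scatter(uta, student, color="red", alpha=0.6, label="Self vs. UTA rating")
    ax.plot([0, 5], [0, 5], color="black", label="Perfect agreement (y = x)")

    slope, intercept = np.polyfit(uta, student, 1)   # ordinary least squares
    xs = np.linspace(0, 5, 50)
    ax.plot(xs, slope * xs + intercept, color="gray", label="Line of best fit")

    ax.set_xlabel("UTA rating")
    ax.set_ylabel("Student self-rating")
    ax.set_title(title)
    ax.legend()
    return fig

# Hypothetical ratings for illustration only
fig = agreement_plot([2, 3, 3, 4, 5, 4], [3, 4, 3, 5, 5, 5], "Teamwork (prefeedback)")
fig.savefig("teamwork_agreement.png")
```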

Conclusions and implications

The feedback-style rubrics developed by the ELIPSS project are a useful new tool to guide students toward better perceptions of what it means to engage in the effective development of process skills such as information processing, critical thinking, communication, and teamwork. While analytic rubrics provide a measure of achievement or performance, they can be seen as largely evaluative; they primarily provide students with feedback on “What are the goals?” and begin to provide feedback on “What progress am I making?” The built-in suggestions for improvement in the feedback rubrics can work to support a growth mindset in students by helping to clarify and specify “What progress am I making?” and answering the “What should I do to make better progress?” question that is essential to effective feedback. Students value this type of developmental feedback, and having an observer use the feedback-style rubrics described here can lead to more accurate student self-assessment, an important component of student growth and improvement.

The new rubrics are readily adoptable and can be employed effectively by instructors in a variety of classroom settings, not just those involving learning assistants or TAs. The observable characteristics give instructors helpful guidance for recognizing indicators of a specific process-skill category in student group interactions or written work. These characteristics may be especially important in the assessment of process skills because both students and instructors may be less familiar with identifying evidence of these skills. Providing instructors and teaching assistants with specific behaviors to look for in each category may also support more accurate assessments and ratings of the targeted skills. These feedback rubrics represent a valuable tool for instructors looking to help students develop process skills that will allow them to become better learners and be successful beyond the classroom.

Acknowledgments

We would like to thank our Primary Collaboration Team and Cohort members for all their valuable input during the development of the feedback rubrics. We would also like to thank all the students and undergraduate teaching assistants who have allowed us to examine their work and provided reflections on using the rubrics and receiving feedback. This work was supported in part by the National Science Foundation under Division of Undergraduate Education grants #1524399, #1524936, and #1524965. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Available resources

More information about the ELIPSS project and access to both the analytic and feedback-style process-skill rubrics can be found at the project website at http://elipss.com.


Doug Czajka is an assistant professor in the Department of Earth Science at Utah Valley University in Orem, Utah. Gil Reynders is a professor of chemistry at Sauk Valley Community College in Dixon, Illinois. Courtney Stanford is an assistant professor of chemistry and chemistry education at Ball State University in Muncie, Indiana. Renée Cole is a professor in the Department of Chemistry at the University of Iowa in Iowa City, Iowa. Juliette Lantz is a professor in the Department of Chemistry at Drew University in Madison, New Jersey. Suzanne Ruder (sruder@vcu.edu) is a professor in the Department of Chemistry at Virginia Commonwealth University in Richmond, Virginia.

References

Association of American Colleges and Universities (AAC&U). (2018). Fulfilling the American dream: Liberal education and the future of work. AAC&U. https://www.aacu.org/sites/default/files/files/LEAP/2018EmployerResearchReport.pdf

Bauer, C. F., & Cole, R. (2012). Validation of an assessment rubric via controlled modification of a classroom activity. Journal of Chemical Education, 89(9), 1104–1108.

Biggs, J. (2003). Aligning teaching and assessing to course objectives. Teaching and Learning in Higher Education: New Trends and Innovations, 2(April), 13–17.

Cole, R., Lantz, J., Ruder, S., Reynders, G., & Stanford, C. (2018). Enhancing learning by assessing more than content knowledge. American Society for Engineering Education Annual Conference & Exposition. https://peer.asee.org/29991.

Cole, R., Reynders, G., Ruder, S., Stanford, C., & Lantz, J. (2019). Constructive alignment beyond content: Assessing professional skills in student group interactions and written work. In M. Schultz, S. Schmid, & G. Lawrie (Eds.), Research and practice in chemistry education: Selected contributions from the 25th IUPAC International Conference on Chemistry Education 2018 (pp. 203–222). Springer Singapore. https://doi.org/10.1007/978-981-13-6998-8

Dawson, P. (2017). Assessment rubrics: towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), 347–360.

Ferguson, P. (2011). Student perceptions of quality feedback in teacher education. Assessment and Evaluation in Higher Education, 36(1), 51–62. https://doi.org/10.1080/02602930903197883

Fong, C. J., Schallert, D. L., Williams, K. M., Williamson, Z. H., Warner, J. R., Lin, S., & Kim, Y. W. (2018). When feedback signals failure but offers hope for improvement: A process model of constructive criticism. Thinking Skills and Creativity, 30, 42–53.

Hacker, D. J., Bol, L., Horgan, D. D., & Rakow, E. A. (2000). Test prediction and performance in a classroom context. Journal of Educational Psychology, 92(1), 160.

Hawker, M. J., Dysleski, L., & Rickey, D. (2016). Investigating general chemistry students’ metacognitive monitoring of their exam performance by measuring postdiction accuracies over time. Journal of Chemical Education, 93(5), 832–840.

Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance. Psychological Bulletin, 119(2), 254–284.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121.

Lizzio, A., & Wilson, K. (2008). Feedback on assessment: Students’ perceptions of quality and effectiveness. Assessment and Evaluation in Higher Education, 33(3), 263–275. https://doi.org/10.1080/02602930701292548

Moog, R. S., & Spencer, J. N. (2008). Process-oriented guided inquiry learning. American Chemical Society.

Mulliner, E., & Tucker, M. (2017). Feedback on feedback practice: perceptions of students and academics. Assessment and Evaluation in Higher Education, 42(2), 266–288. https://doi.org/10.1080/02602938.2015.1103365

National Association of Colleges and Employers (NACE). (2020). Job outlook 2020. https://www.naceweb.org/store/2019/job-outlook-2020

National Research Council (NRC). (2012). Education for life and work: Developing transferable knowledge and skills in the 21st century. The National Academies Press.

Nicol, D., & MacFarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090

Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448.

Reynders, G., Suh, E., Cole, R. S., & Sansom, R. (2019). Developing student process skills in a general chemistry laboratory. Journal of Chemical Education, 96(10), 2109–2119. https://doi.org/10.1021/acs.jchemed.9b00441

Reynders, G., Lantz, J., Ruder, S., Stanford, C., & Cole, R. (2020). Rubrics to assess critical thinking and information processing in undergraduate STEM courses. International Journal of STEM Education, 7(1), 1–15.

Ruder, S. M., & Stanford, C. (2018). Strategies for training undergraduate teaching assistants to facilitate large active learning classrooms. Journal of Chemical Education, 95(12), 2126–2133.

Ruder, S. M., Stanford, C., & Gandhi, A. (2018). Scaffolding STEM classrooms to integrate key workplace skills: Development of resources for active learning environments. Journal of College Science Teaching, 47(5), 29–35.

Ruder, S. M., & Stanford, C. (2020). Training undergraduate teaching assistants to facilitate and assess process skills in large enrollment courses. Journal of Chemical Education, 97(10), 3182–3187.

Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological Bulletin, 143(6), 565–600. https://doi.org/10.1037/bul0000098

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. https://doi.org/10.3102/0034654307313795

Simonson, S. R. (2019). POGIL: An introduction to process oriented guided inquiry learning for those who wish to empower learners. Stylus Publishing.

Singh, A., Karayev, S., Gutowski, K., & Abbeel, P. (2017). Gradescope: A fast, flexible, and fair system for scalable assessment of handwritten work. In Proceedings of the Fourth (2017) Association for Computing Machinery Conference on Learning @ Scale (pp. 81–88). Association for Computing Machinery.

Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education, 31(3), 379–394.

Wollenschläger, M., Hattie, J., Machts, N., Möller, J., & Harms, U. (2016). What makes rubrics effective in teacher-feedback? Transparency of learning goals is not enough. Contemporary Educational Psychology, 44–45, 1–11. https://doi.org/10.1016/j.cedpsych.2015.11.003

Zimbardi, K., Colthorpe, K., Dekker, A., Engstrom, C., Bugarcic, A., Worthy, P., Victor, R., Chunduri, P., Luka, L., & Long, P. (2017). Are they using my feedback? The extent of students’ feedback use has a large impact on subsequent academic performance. Assessment and Evaluation in Higher Education, 42(4), 625–644. https://doi.org/10.1080/02602938.2016.1174187
