
Identifying Differences in Learning Strategies by Demographics and Course Grade in a Community College Context

Journal of College Science Teaching—September/October 2020 (Volume 50, Issue 1)

By Matthew R. Fisher, Deborah Cole, Youngha Oh, and Sheela Vemu


Metacognition and self-regulated learning are skills that contribute to student success, but few studies have examined these topics within a community college context. We addressed this lack of understanding by asking community college biology students to metacognitively reflect on their learning strategies. We took a novel approach in our analysis by investigating how learning strategies potentially differed based on race, age, gender, and final course grade. With some exceptions, we found little evidence to suggest that such differences existed. Notably, we found that learning strategies did not differ between high-achieving and low-achieving students, which contradicts previous studies. We offer several possible explanations for these preliminary results, which include misrepresentation by students in their reflections, external and internal barriers to studying, potential flaws in our survey instrument, and students not effectively using learning strategies. We suggest replication of this study with methodological changes to further investigate any potential differences that may exist among these groups of students. Our research is an example of how classroom action research can provide insight into how students learn, which empowers us to make evidence-based changes in our teaching.

 

As educators, we strive to promote student achievement in the classroom and to impart upon students the skills and knowledge required for future success. Despite our best efforts, however, we might fail to realize these goals for all students. This story is common in STEM education, where success rates are often lower than desired, especially for under-represented groups of students (Estrada et al., 2016). Low success rates and poor retention have not gone unnoticed, as evidenced by multiple efforts to reform STEM education, particularly in biology (e.g., NRC, 1999; NRC, 2009; AAAS, 2011). These efforts seek to pair knowledge of how students learn with the creation of pedagogy that can be validated through classroom-based research (e.g., Handelsman et al., 2006). The result is a student-centered approach that promotes learning for a diverse range of students.

Literature review

Learning is multi-faceted and relies upon many influences that include classroom dynamics, motivation, and metacognition (Saribas & Bayram, 2016). Thus, engaging students in metacognitive thinking and activities is an important pillar of student-centered pedagogy. Metacognition is, in short, the process of actively thinking about one’s thinking. More descriptively, it encompasses awareness of one’s cognition through knowledge of learning strategies and the contextual knowledge of when to employ those strategies (Flavell, 1979).

Metacognition can be subcategorized into two domains: knowledge of cognition and regulation of cognition (Schraw et al., 2006). The former describes what we know about how we learn, such as what factors influence our learning, our knowledge of learning strategies, and knowing when to use those strategies. Meanwhile, regulation of cognition is what is often targeted in educational interventions because it deals with how learners manage their cognitive knowledge to achieve successful learning. Regulation of cognition involves three sub-domains: planning, monitoring, and evaluation (Schraw et al., 2006). Tanner (2012) gives numerous practical examples of prompts that educators can use to stimulate self-reflection among students for each of these three sub-domains.

Numerous studies validate the usefulness of metacognition by linking it with student success, specifically within STEM disciplines (Ellis et al., 2012). For example, Saribas and Bayram (2016) randomly assigned students in a college-level general chemistry laboratory course to either a control or experimental group. The latter received an intervention whereby students engaged in metacognitive activities, including self-reflection and collaboration, while also receiving supportive feedback on assignments. At the end of the intervention, the experimental group outperformed the control group on a conceptual knowledge test for chemistry. Interestingly, differences in scores were most pronounced for questions related to chemical phenomena at the microscopic level, such as those dealing with atoms. This suggests that metacognitive skills might be especially important for abstract concepts that are hard to visualize. Some limitations of the study include the authors not tracking the development of metacognitive skills as a result of the intervention, nor comparing levels of metacognitive skills between the experimental and control groups (Saribas & Bayram, 2016).

When students engage in metacognition, the results are both positive and robust. A review by Ellis et al. (2014) found that metacognition can be used across grade levels and academic disciplines to boost students’ knowledge and course-related skills. The implication is that it might be especially useful for reducing achievement gaps among different groups of students, although we found no specific evidence of this in our review of the literature. Nevertheless, metacognition is so important to academic development that a student’s metacognitive skill can outperform intelligence as a predictor of achievement (van der Stel & Veenman, 2008).

As educators, we can engage students in metacognitive thinking so they can become self-sufficient, lifelong learners. Self-regulated learning is a term that refers to understanding, monitoring, and controlling one’s learning (Schraw et al., 2006). There is significant overlap between this term and the concept of metacognition, as both can pertain to the regulation of cognition. Studies have found that self-regulated learning leads to both achievement and satisfaction among students, and it enables lifelong learning (as reviewed in Schraw et al., 2006).

Self-regulated learners must metacognitively monitor their cognition and use appropriate learning strategies. Students may need instruction, however, on identifying and correctly using effective learning strategies (Dunlosky et al., 2013). In their review, Dunlosky et al. (2013) assessed the utility of 10 commonly used learning strategies, from highlighting text to practice testing. They found that practice testing had the highest utility for a diversity of learning contexts, especially if it was distributed practice, which occurs repeatedly over time (from days to months).

Experimental overview

In the present study, our objectives were to gain insight into how community college students self-regulated their learning, and how this might differ based on race, age, gender, and final course grade. This latter objective may be novel, as we found no other studies that disaggregated results based on student characteristics. To achieve these research objectives, we asked students to identify the learning strategies that contributed to prior academic success and those that might be useful to them in the future.

Like other educators, we seek to promote content knowledge and the mindsets and skills that promote metacognition and self-regulated learning. Thus, we were (and remain) interested in how well students regulated their own learning, and through the very act of asking them about it we engaged them in metacognitive reflection.

Our study is an example of classroom action research, a type of research whereby educators serve as researchers in their own classrooms and not only learn about their students, but also become learners as they reflect upon their own teaching (Parson & Brown, 2002). While action research is useful for any type of classroom environment, it may be especially important for community college classrooms. This is because such classrooms are underrepresented in education research despite serving a large and diverse population that accounts for approximately half of all students that obtain bachelor’s degrees in STEM (Schinske et al., 2017). Our exploratory study, therefore, adds to a relatively small but growing body of scholarship on community college students, and it will hopefully encourage others to study this diverse and sometimes overlooked student population.

Methods

This study was conducted at a community college with a designation as a Hispanic Serving Institution, which is defined as having a student enrollment that is at least 25% Hispanic (U.S. Department of Education, 2019). Data were collected from students enrolled in two sections of a four-credit, sophomore-level Anatomy & Physiology 1 course. This course was taught by the same instructor during a single term in Spring 2019. Anatomy & Physiology 1 is a core requirement for several associate degrees and certificate programs offered at the community college.

Course grades and demographic information, including gender, race, and age, were collected for all students. In this case, the college recorded students' gender as a binary variable (male/female). Additionally, the college treated Hispanic/Latino as a race, although some, such as the U.S. Census Bureau (2019), consider it an ethnicity. Under the U.S. Census Bureau classification, Hispanics can identify as any race, but that was not the situation in this study. We categorized age into three groups: 18–19 years, 20–22 years, and over 22 years. Ethical oversight for the study was provided by the Institutional Review Board at the community college where the research was conducted.

Students were administered a survey during week two of the 16-week semester. The survey consisted of 16 items; two were open-ended questions (Table 3) and 14 were closed-ended questions that were answerable with a yes/no response (Table 2). Of the latter, nine were used for this analysis; five items were later excluded because it was determined that the questions did not provide information useful to our research objectives. Survey questions were designed to elicit students’ self-reported use of study strategies and stimulate metacognitive reflection. Survey questions were inspired by the work of Zhao et al. (2014).

We performed statistical tests on the results from the nine closed-ended questions to search for potential differences in the reported use of learning strategies by gender, race, age, and final course grade. Statistical analysis of our data proceeded in three main steps. First, bivariate correlations among the survey items were calculated using the point-biserial correlation (Tate, 1954) to measure the strength of association between pairs of variables. Second, Cronbach’s alpha (Cronbach, 1951) was calculated to determine how closely related the set of items is as a group, which indicates internal consistency (reliability). Third, to examine the significance of the association between the survey items and the demographic variables (gender, age, race, and course grade), a series of two-tailed Fisher’s exact tests (Fisher, 1922) was performed instead of the chi-square test (χ2) because more than 20% of the cells had expected frequencies less than five. Fisher’s exact test evaluates the null hypothesis of independence by applying the hypergeometric distribution to the observed cell counts. In other words, the tests determined whether responses to the nine closed-ended questions were associated with gender, race, age, or course grade. Data manipulation and all statistical analyses were conducted using SPSS software, version 23 (IBM Corporation, 2016).
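As a concrete illustration of this three-step pipeline, the sketch below shows how the analyses could be reproduced in Python with pandas and SciPy rather than SPSS. The DataFrame name, the column names, the 0/1 coding of the yes/no items, and the use of only a binary grouping variable are assumptions made for illustration; they are not the authors' actual data structure.

```python
# Minimal sketch (not the study's code) of the three analysis steps:
# point-biserial correlations, Cronbach's alpha, and Fisher's exact tests.
# Assumes a DataFrame `df` with yes/no survey items coded 0/1 (columns in
# ITEMS) and a binary demographic column such as "gender".
import itertools

import pandas as pd
from scipy import stats

ITEMS = ["created_study_guide", "wrote_study_questions", "self_tested"]  # ...nine items in the actual survey


def item_correlations(df: pd.DataFrame) -> dict:
    """Step 1: pairwise point-biserial correlations among the survey items."""
    out = {}
    for a, b in itertools.combinations(ITEMS, 2):
        r, p = stats.pointbiserialr(df[a], df[b])
        out[(a, b)] = (r, p)
    return out


def cronbach_alpha(df: pd.DataFrame) -> float:
    """Step 2: internal consistency (reliability) of the item set."""
    items = df[ITEMS]
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)


def fisher_tests(df: pd.DataFrame, group: str = "gender") -> dict:
    """Step 3: two-tailed Fisher's exact test of each item against a
    binary grouping variable (2 x 2 contingency table)."""
    out = {}
    for item in ITEMS:
        table = pd.crosstab(df[group], df[item])  # 2 x 2 counts
        _, p = stats.fisher_exact(table, alternative="two-sided")
        out[item] = p
    return out
```

Note that SciPy's fisher_exact handles 2 x 2 tables; variables with more than two categories (such as the three age groups) would need a different exact-test implementation or pairwise comparisons.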

Results

Survey responses, demographic data, and final course grades were collected from 38 of the 41 community college students enrolled in the two sections of Anatomy and Physiology 1. Students were predominantly female (81%) and white (59%), although 23% were Hispanic or Latino (Table 1). Approximately 62% of students were age 22 or younger.

Table 1. Demographics are shown here for the students who participated in this study (N = 38).

                              N     %
Gender
  Female                      30    79%
  Male                        7     19%
  Missing                     1     2%
Race
  Asian                       2     5%
  Black or African American   3     8%
  Hispanic or Latino          8     21%
  White                       22    58%
  Prefer not to answer        2     5%
  Missing                     1     3%

Note: Gender was collected by the students' college as a binary variable.
 

Two-tailed Fisher’s exact tests indicated that one of the nine closed-ended questions (“I created a study guide”) had a significant (p = .037) gender effect. Thus, there is evidence that responses to that survey item differed between male and female students. Because the significance test does not indicate the magnitude of an effect, we calculated Cramer’s V, which is easier to interpret than an odds ratio or phi (φ). Common benchmarks for Cramer’s V consider 0.1 a small effect, 0.3 medium, and 0.5 large. For this survey item, V = .29, indicating a small effect.
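To make this calculation concrete, the short sketch below computes a two-tailed Fisher's exact p value and Cramer's V from a 2 x 2 table of gender by yes/no responses. The cell counts are invented for illustration only; they are not the study's data.

```python
# Hypothetical 2 x 2 contingency table: rows = gender (female, male),
# columns = response to "I created a study guide" (no, yes).
# Counts are made up for illustration only.
import numpy as np
from scipy import stats

table = np.array([[10, 20],
                  [ 5,  2]])

# Two-tailed Fisher's exact test of independence.
_, p = stats.fisher_exact(table, alternative="two-sided")

# Cramer's V from the uncorrected chi-square statistic:
# V = sqrt(chi2 / (n * (min(rows, cols) - 1))).
chi2, _, _, _ = stats.chi2_contingency(table, correction=False)
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

print(f"Fisher exact p = {p:.3f}, Cramer's V = {v:.2f}")
```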

Because we detected a gender-based difference among students who created study guides, we wondered whether this might translate into a difference in final course grade between males and females, based on the speculation that creating study guides would lead to higher academic achievement. Using two-tailed t tests, we found no significant difference between males and females in final course grade (p = 0.78); furthermore, there was no difference in course grade between students who did and did not create a study guide (p = 0.23).
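A minimal sketch of these group comparisons, assuming two arrays of final course grades (as percentages) for the groups being compared; the grade values are placeholders, not the study's data, and the default two-sided independent-samples t test in SciPy is assumed to correspond to the tests described above.

```python
# Two-tailed independent-samples t test comparing final course grades
# between two groups (e.g., students who did vs. did not create a study
# guide). Grade values below are placeholders, not the study's data.
import numpy as np
from scipy import stats

grades_used_strategy = np.array([72.0, 81.5, 65.0, 90.0, 78.0])
grades_did_not = np.array([88.0, 74.0, 92.5, 69.0, 85.0])

t, p = stats.ttest_ind(grades_used_strategy, grades_did_not)
print(f"t = {t:.2f}, two-tailed p = {p:.2f}")
```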

We also detected a significant age effect (p = .011) for one of the nine closed-ended questions (“I wrote my own study questions”). Thus, there was an association between age and student responses for that survey item, and the effect size was large (V = .54).

Because we detected an age effect for students who created their own study questions, we wondered whether this might translate into a difference in final course grade, based on the speculation that creating study questions would lead to a higher grade. Using two-tailed t tests, we found no significant difference among age groups in final course grade (18–19 years versus 20–22 years: p = 0.07; 18–19 years versus over 22 years: p = 0.66; 20–22 years versus over 22 years: p = 0.11). Additionally, we detected no difference in course grade between students who did and did not create study questions (p = 0.10).

We performed the same analyses to identify potential differences in student responses to the closed-ended questions based on race and final course grade but found no significant differences.

The heat map in Table 2 helped us further visualize responses to the closed-ended survey questions by final course grade. Students who received a D or F were as likely as A-earning students to report that they attended class, completed homework assignments, and related material to the “bigger picture.” Additionally, students who earned a D or F were more likely than A students to report that they wrote their own study questions, created a study guide, self-tested, did the assigned reading, and previewed the textbook before attending class. As noted above, however, these differences were not statistically significant.

Table 2. Students were asked to identify which of the following learning techniques they used during the course.

                                         Final course grade
Survey questions                         A (90–100%)   B (80–89%)   C (70–79%)   D or F (< 70%)
                                         n = 9         n = 8*       n = 15       n = 6**
Attended class                           100%          100%         93%          100%
Previewed the text before class          33%           38%          60%          67%
Read the assigned text                   56%           50%          80%          67%
Highlighted text                         45%           75%          73%          50%
Completed the assigned homework          100%          100%         100%         100%
Created a study guide                    56%           50%          60%          83%
Wrote their own study questions          11%           25%          20%          50%
Self-tested without notes                56%           75%          60%          83%
Related material to the bigger picture   67%           50%          60%          67%

Note: Results from the nine closed-ended survey questions are shown in a heat map that is organized by students' final course grade.

Lastly, responses to the two open-ended questions revealed numerous strategies used by students prior to taking the survey (question one) and strategies the students planned to use in the future (question two) (Table 3). The most frequently cited learning strategy employed prior to the survey was practice testing (including flashcards), indicated by 29% of students, followed by note taking/reviewing notes, mentioned by 26% of students. For question two, in which students identified new learning strategies they planned to use in the future, the most common response was no response at all (32% of students); the use of practice quizzes was the second-most common response (16% of students). For both questions, response rates were similar for high-achieving students (earning a final course grade of A, B, or C) and low-achieving students (earning a D or F).

Table 3. Reported here are the results from two open-ended survey questions administered to students (N = 38).

Q1: Which learning strategies benefitted you?

Response                       Number of    Percent of     Percent of A, B,   Percent of D
                               responses    all students   and C students     and F students
Practice quizzes/flashcards    11           29%            28%                33%
Making or reviewing notes      10           26%            22%                50%
Diagram-making                 6            16%            16%                17%
Completing homework            5            13%            13%                17%
Using textbook                 5            13%            13%                17%
Attending class                4            11%            13%                0%

Q2: What learning strategies did you not use, but plan to use in the future?

Response                       Number of    Percent of     Percent of A, B,   Percent of D
                               responses    all students   and C students     and F students
No response                    12           32%            32%                33%
Practice quizzes               6            16%            16%                16%
Make own study questions       4            11%            9%                 16%
Study more                     4            11%            9%                 16%

Note: The results are reported for all students and are also disaggregated into two categories based on final course grade. Only those responses mentioned by more than 10% of all students are included.

Discussion

Our results indicated that individual students typically use a variety of learning strategies. Eight of the nine learning strategies included in the closed-ended survey questions had very high response rates (50% to 100%), indicating that students partook in numerous study habits (Table 2). These strategies included highlighting the text, self-testing without notes, creating study guides, and completing homework assignments. An open-ended survey question yielded additional learning strategies such as reviewing notes and using flashcards (Table 3).

Along with this diversity of learning strategies comes diversity of utility. Some of the strategies indicated by students, such as highlighting, are not particularly helpful. For example, several studies have found no difference in learning between students in control groups and those in experimental groups that highlighted text; one study even found a negative correlation between the amount of text highlighted and test scores (as reviewed in Dunlosky et al., 2013). At the other end of the utility spectrum is practice testing, an efficacious learning strategy that has been repeatedly validated by studies dating back as far as 100 years (as reviewed in Dunlosky et al., 2013). Within our study, students most commonly identified practice testing (including flashcards) when answering an open-ended question about what learning strategies benefitted them. While it was the leading response, it was mentioned by only 29% of students, meaning that slightly more than two-thirds of all students did not view it as particularly beneficial. Therefore, this provides us an opportunity to engage our students in discussions about the benefits of highly effective learning strategies such as distributed practice testing.

With two exceptions, we found that race, gender, age, and final course grade did not significantly affect the proportion of students that engaged in the nine learning strategies provided in the survey questions. The first exception was a small-sized, gender-based effect for students that created study guides. The second exception was a large-sized, age-based effect for students that created their own study questions. We view these gender-based and age-based effects as preliminary and offer no mechanistic explanations for them, as it is too premature to speculate. Additional research should explore the unknown role that demographic variables may have on preferences for learning strategies.

Meanwhile, we expected to find differences in the learning strategies used by students at different levels of academic achievement (as indicated by final course grade), but this was not the case. For example, we hypothesized that the two learning strategies mentioned above, creating study guides and writing study questions, would positively correlate with academic achievement because we perceived them as requiring higher levels of metacognition and self-regulated learning. Contrary to our predictions, students who reported using these strategies had lower average course grades than those who did not, although the differences were not statistically significant.

From a broader perspective, our statistical analysis revealed that students’ final course grade had no significant effect on response rates for any of the learning strategies included in the closed-ended survey questions. A heat map of the responses by final grade allowed us to examine those data further (Table 2). We were surprised to find that students who received a D or F reported using learning strategies at equal or higher proportions compared with students who earned an A (although the differences were not statistically significant). For example, 100% of both A students and D/F students reported that they completed the assigned homework and attended class. Where the two groups diverged, however, is particularly interesting. Most notably, 83% of D/F students reported self-testing without notes, a form of practice testing (a high-utility strategy), whereas only 56% of A students did. Overall, D/F students either had the highest response rate or were tied for the highest response rate for seven of the nine learning strategies provided in the survey (Table 2).

If students who received a D or F in the course were using numerous learning strategies, some of which are highly effective, then why did they not succeed? Studies affirm the link between study skills and academic performance (as reviewed in Robbins et al., 2004), so other factors must be at play. We propose four potential explanations, none of which are mutually exclusive.

First, there might be external barriers (e.g., work, family, health, finances) and/or internal barriers (e.g., motivation, cognitive abilities, mental health) that interfere with students’ ability to study and succeed. Clement (2016) found that such barriers can be common for community college students, and those who face these barriers are more likely to fail.

Second, students might not be using the learning strategies effectively or frequently enough. Practice testing, for example, is most effective if it involves retrieval of information (as opposed to simply being given the answer) and if it occurs repeatedly over time, which is referred to as distributed practice (Dunlosky et al., 2013). Many students, and educators too, are likely not fully aware of this. Thus, students might need explicit instruction for how to use various learning strategies. Additionally, students might not be using the strategy frequently enough because they inaccurately gauge the time commitment needed for academic success, or because they lack the time needed for studying due to the aforementioned external barriers.

Third, students who earn a D or F may be inaccurately reporting their use of learning strategies. Self-reported data, like those collected from our survey and used by other educational, behavioral, and healthcare researchers, are generally prone to inaccuracies (Rosenman et al., 2011; Rosen et al., 2017). Several studies have found that low-performing students, in particular, are prone to inaccuracies when self-reporting information (e.g., Cassady, 2001; Zimmerman et al., 2002; Cole & Gonyea, 2010). That may be the case in our study, too. Inaccurate statements from students earning a D or F may result from what is called social desirability bias, which occurs when people misreport information to avoid embarrassment or to appear more attractive to others or oneself (West, 2014).

Lastly, we might have failed to detect differences that actually existed because of flaws in our methodology, which is similar in concept to what statisticians call a type II error. Specifically, our survey instrument might not have been sensitive enough to detect differences between students who earned an A and those who earned a D or F. For example, we asked whether students used a learning strategy, but not how frequently; our survey could not distinguish between a student who did 10 hours of practice testing and one who did 10 minutes.

To investigate our research questions in the future, we will use more in-depth instruments such as the Motivated Strategies for Learning Questionnaire (Pintrich et al., 1991), which has been used in other science education research (Miller, 2015; Clement, 2016). Based on our course grade analysis (Table 2), analyzing overall GPA might not add much insight, but other variables may play a significant role in understanding community college students’ study behaviors. Thus, we plan to expand our analysis by evaluating covariables such as students’ reading placement and the number of prior science and math credits completed. We could also improve the resolution of our data by conducting student interviews. Although we could reduce the bias associated with self-reported data by directly observing student study behaviors, doing so would be logistically infeasible because studying occurs outside of class time and often off campus.

Conclusion

Metacognition is required for learners to advance from low-level learning (memorization) to higher levels (analysis, application, synthesis) (Zhao et al., 2014), and thus it supports the deep learning that is advocated in STEM education reform (e.g., AAAS, 2011). The role of metacognition in developing deep learning can be explained within a constructivist framework: The constructivist theory of learning states that learners must actively create new knowledge as they modify and synthesize previously acquired knowledge with new information (Hartle et al., 2012). Metacognition is constructivist because students learn by metacognitively examining their thinking before and after learning (Tanner, 2012). Learners can do this by self-regulating and self-monitoring their cognition, and by employing learning strategies suited to particular tasks (Hogan et al., 2015). As Ellis et al. (2014) state: “When students are aware of their strategies and weakness as learners, they are able to choose a learning strategy (knowledge of self) that is aligned with the task at hand (knowledge of task)” (p. 4017).

Metacognition is typically not a component of traditional instruction, so it must be deliberately added to the classroom curriculum (Leutwyler, 2009; Kistner et al., 2010). Teaching metacognition benefits student achievement more when the instruction is explicit (Kistner et al., 2010). Explicit instruction includes explaining the benefits of particular strategies and giving students opportunities to practice them, whereas implicit teaching might involve modeling a strategy with no explanation (Ellis et al., 2014). Several metacognitive strategies, which often involve various forms of self-reflection, are summarized in Tanner (2012) and Ellis et al. (2014). A review of almost 200 studies found that providing students with prompts is the most common method instructors use to engage students in metacognitive reflection (Zohar & Barzilai, 2013).

In the present study, a simple inquiry about how our students learn morphed into a classroom action research project that benefitted us in several ways. Of course, we gained some insight into our student population, even if it raised more questions than answers. Just as importantly, we immersed ourselves in reflective thinking about our teaching and the science of learning. What we learned we can take forward into future classrooms and action research projects.

Acknowledgments

We thank Dr. Antonio Rodriguez, Mary Edith Butler, and Dr. Diane Nyhammer for their assistance. This project was partially supported by the Community College Biology Instructor Network to Support Inquiry into Teaching and Education Scholarship (CC Bio INSITES), NSF award #1730130. 


Matthew R. Fisher (matthew.fisher@oregoncoast.edu) is an instructor in the Department of Biology at Oregon Coast Community College in Newport, Oregon. Deborah Cole is an academic specialist-programming associate at the STEM Education Innovation and Research Institute at IUPUI in Indianapolis, Indiana. Youngha Oh is a PhD candidate in the Department of Research, Evaluation, Measurement, and Statistics at Texas Tech University in Lubbock, Texas. Sheela Vemu is an assistant professor in the Division of Math and Sciences at Waubonsee Community College in Sugar Grove, Illinois. 

References

American Association for the Advancement of Science (AAAS). (2011). Vision and change in undergraduate biology education: A call to action. AAAS.

Cassady J. C. (2001). Self-reported GPA and SAT: A methodological note. Practical Assessment, Research & Evaluation, 7(12).

Clement L. (2016). External and internal barriers to studying can affect student success and retention in a diverse classroom. Journal of Microbiology & Biology Education, 17(3), 351–359.

Cole J., & Gonyea R. M. (2010). Accuracy of self-reported SAT and ACT test scores: Implications for research. Research in Higher Education, 51(4), 305–319.

Cronbach L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334.

Dunlosky J., Rawson K. A., Marsh E. J., Nathan M. J., & Willingham D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4–58.

Ellis A. K., Bond J. B., & Denton D. W. (2012). An analytical literature review of the effects of metacognitive teaching strategies in primary and secondary student populations. Asia Pacific Journal of Educational Development, 1(1), 9–23.

Ellis A. K., Denton D. W., & Bond J. B. (2014). An analysis of research on metacognitive teaching strategies. Procedia—Social and Behavioral Sciences, 116, 4015–4024.

Estrada M., Burnett M., Campbell A. G., Campbell P. B., Denetclaw W. F., Gutierrez C. G., Hurtado S., John G. H., Matsui J., McGee R., Moses Okpodu C., Robinson T. J., Summers M. F., Werner-Washburne M., & Zavala M. (2016). Improving underrepresented minority student persistence in STEM. CBE—Life Sciences Education, 15, es5.

Fisher R. A. (1922). On the interpretation of χ2 from contingency tables, and the calculation of P. Journal of the Royal Statistical Society, 85(1), 87–94.

Flavell J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906–911.

Handelsman J., Pfund C., & Miller S. (2006). Scientific teaching. W. H. Freeman & Company.

Hartle R. T., Baviskar S., & Smith R. (2012). A field guide to constructivism in the college science classroom: Four essential criteria and a guide to their usage. Bioscene, 38(2), 31–35.

Hogan M. J., Dwyer C. P., Harney O. M., Noone C., & Conway R. J. (2015). Metacognitive skill development and applied systems science: A framework of metacognitive skills, self regulatory functions and real-world applications. In Pena-Ayala A. (Ed.), Metacognition: Fundaments, applications, and trends (pp. 76–106). Springer International Publishing.

IBM Corporation. (2016). IBM SPSS statistics for Windows, Version 23.0. IBM Corporation.

Kistner S., Rakoczy K., Otto B., Dignath-van Ewijk C., Buttner G., & Klieme E. (2010). Promotion of self-regulated learning in classrooms: Investigating frequency, quality, and consequences for student performance. Metacognition and Learning, 5(2), 157–171.

Leutwyler B. (2009). Metacognitive learning strategies: Differential development patterns in high school. Metacognition and Learning, 4(2), 111–123.

Miller D. A. (2015). Learning how students learn: An exploration of self-regulation strategies in a two-year college general chemistry class. Journal of College Science Teaching, 44(3), 11–16.

National Research Council (NRC). (1999). Transforming undergraduate education in science, mathematics, engineering, and technology. National Academies Press.

National Research Council (NRC). (2009). A new biology for the 21st century: Ensuring the United States leads the coming biology revolution. National Academies Press.

Parson M., & Brown K. S. (2002). Educator as reflective practitioner and action researcher. Wadsworth.

Pintrich P. R., Smith D., Garcia T., & McKeachie W. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ). University of Michigan National Center for Research to Improve Postsecondary Teaching and Learning.

Robbins S. B., Lauver K., Le H., Davis D., Langley R., & Carlstrom A. (2004). Do psychosocial and study skill factors predict college outcomes? A meta-analysis. Psychological Bulletin, 130(2), 261–288.

Rosen J. A., Porter S. R., & Rogers J. (2017). Understanding student self-reports of academic performance and course-taking behavior. AERA Open, 3(2).

Rosenman R., Tennekoon V., & Hill L. G. (2011). Measuring bias in self-reported data. International Journal of Behavioral & Healthcare Research, 2(4), 320–332.

Saribas D., & Bayram H. (2016). Investigation of the effects of using metacognitive activities in chemistry laboratory on the development of conceptual understanding. Boğaziçi University Journal of Education, 33(1), 27–49.

Schinske J. N., Balke V. L., Bangera M. G., Bonney K. M., Brownell S. E., Carter R. S., Curran-Everett D., Dolan E. L., Elliot S. L., Fletcher L., Gonzalez B., Gorga J. J., Hewlett J. A., Kiser S. L., McFarland J. L., Misra A., Nenortas A., Ngeve S. M., Pape-Lindstrom P. A., Seidel S. B., Tuthill M. C., Yin Y., & Corwin L. A. (2017). Broadening participation in biology education research: Engaging community college students and faculty. CBE—Life Sciences Education, 16(2), 1–11.

Schraw G., Crippen K., & Hartley K. (2006). Promoting self-regulation in science education: Metacognition as part of a broader perspective on learning. Research in Science Education, 36, 111–139.

Tanner K. D. (2012). Promoting student metacognition. CBE—Life Sciences Education, 11, 113–120.

Tate R. F. (1954). Correlation between a discrete and a continuous variable. Point-biserial correlation. The Annals of Mathematical Statistics, 25(3), 603–607.

U.S. Census Bureau. (2019). Race & ethnicity. U.S. Department of Commerce.

U.S. Department of Education. (2019). White House initiative on educational excellence for Hispanics.

van der Stel M., & Veenman M. V. J. (2008). Relation between intellectual ability and metacognitive skillfulness as predictors of learning performance of young students performing tasks in different domains. Learning and Individual Differences, 18(1), 128–134.

West M. R. (2014). The limitations of self-report measures of non-cognitive skills. Brookings Institution.

Zhao N., Wardeska J. G., McGuire S. Y., & Cook E. (2014). Metacognition: An effective tool to promote success in college science learning. Journal of College Science Teaching, 43(4), 48–54.

Zimmerman M. A., Caldwell C. H., & Bernat D. H. (2002). Discrepancy between self-report and school-record grade point average: Correlates with psychosocial outcomes among African American adolescents. Journal of Applied Social Psychology, 32(1), 86–109.

Zohar A., & Barzilai S. (2013). A review of research on metacognition in science education: Current and future directions. Studies in Science Education, 49(2), 121–169.
