Lesson Design Using the ‘BOPPPS’ Model – Part 3: Pre-Post Assessments & iClickers

Hello SJSU Community!

It’s Dr. Rayna Friendly again. In a previous post, I continued my discussion of a model of lesson design that I learned during my graduate degree, which is taught in the Instructional Skills Workshop (ISW). The ISW has been run in more than 100 academic institutions worldwide (Day, 2004)! To date, the ISW has been found to be an effective way to transform instructors’ teaching in the classroom: ISW participants reduced their teacher-focused thinking in comparison to controls, and increased the number of active learning strategies they used in their classrooms (e.g., Dawson et al., 2014; Macpherson, 2011). ‘BOPPPS’ is actually an acronym, which stands for the 6 basic components that are important to consider including when you are designing a lesson or workshop:

  • Bridge into the lesson
  • Outcomes for the lesson (as in Intended Learning Outcomes)
  • Pre-assessment of learners’ existing knowledge of those outcomes
  • Participatory Activities (as in Active Learning Strategies)
  • Post-assessment of learners’ knowledge of the outcomes
  • Summary of the lesson content

In my previous blog posts, I discussed the first and second components: the Bridge-in, and writing Intended Learning Outcomes.

Today, I would like to delve deeper into the pre- and post-assessment components of the model: the Pre-Assessment and Post-Assessment of learners’ knowledge. Including both a pre- and post-assessment in your lesson or program aligns with the practice of evidence-based teaching (e.g., Brickman, Gormally & Martella, 2016; Gormally et al., 2014; Henderson & Dancy, 2009; Henderson et al., 2014). I hate to break it to you, but teachers are people…and all people make assumptions. As a teacher, I often find myself making assumptions like “my students must find this lesson so boring; they are all on their phones or falling asleep” or “they stayed awake, so they must have found this information meaningful or interesting”. The problem with assumptions is that they are often inaccurate, because they are based on our own personal biases. We cannot know for certain what our students are thinking, or what they have learned, unless we collect EVIDENCE of this learning. Basing our teaching practice on evidence, rather than assumptions, can help ensure that we are using a student-focused, rather than teacher-focused, method of teaching. Let’s go into each assessment type in more detail:

PRE-assessments of learners’ knowledge enable us to find out how much learners know BEFORE we teach the course content. This gives the instructor a baseline measure of knowledge, which they can use to adjust the upcoming content to the level of the learners. For instance, if all students in the class already know the steps of the Scientific Method, it may not make sense to go through these steps with them in detail; you may want to spend that time going through real-life examples instead. Pre-assessments can be formal (i.e., pop quizzes, tests, essays, etc. that are worth points), but I generally prefer more informal methods that are not worth points, or are worth only participation points (e.g., asking students questions about the content, brainstorming, iClicker questions, quick 1-minute papers, entry tickets, and so on). iClickers are quickly becoming one of my favorite assessment methods, and I discuss them more below.

POST-assessments of learners’ knowledge are the basis of school as we traditionally know it. Formal tests, quizzes, essays, lab write-ups, and the like are all ways of assessing students’ knowledge AFTER the course content has been taught. Although these are important assessment methods, they often take place midway through or at the end of the course. So how do we assess what students have learned at the end of each class or module? There are many end-of-class assessment methods. As above, these can be formal or informal, and many of the methods suggested above can be used again at the end of class. The goal is to determine whether students have gained any NEW knowledge since taking the pre-assessment and spending time learning the course content. As in scientific research methods, if you use the pre-assessment as a baseline, then you can more confidently attribute any additional learning to students having taken your class!

Don’t forget to align your post-assessment with your Intended Learning Outcomes (ILOs)! Recall from my previous post on ILOs that it is important to ensure your assessments align with the ILOs for each class or program. In fact, writing your ILOs first helps ensure you then choose assessment methods that correctly assess those outcomes. Consider the following ILO example:

By the end of this class, students will be able to:

  • Differentiate two types of metacognition
  • Describe developmental trends in explicit and implicit metacognition
  • Reflect on how you use metacognition for schoolwork
  • Practice Active Listening, Meditation & “Mindfulness” to enhance your metacognitive abilities

Here, the ILOs help me determine the type of assessment to include. For instance, “Differentiate” suggests that students just need to tell apart the 2 types of metacognition, so I might test them using multiple-choice, matching-terms-to-definitions questions on a test. “Describe” suggests they need to use their own words, so I might test their knowledge of developmental trends using short- or long-answer questions on a test or through an essay. “Reflect” could be assessed through an essay, but also through in-class discussions. “Practice” would be evaluated by creating an activity that lets students try meditation and mindfulness out for themselves!

Consider using iClickers for some of your assessments! I love using iClickers in the classroom! They are offered for free to SJSU students and teachers; they are relatively easy to learn and use; and many of my students who self-identify as introverts have told me how much they appreciate getting to participate in class without having to talk in front of other people. iClickers essentially allow teachers to pose multiple-choice (and some other styles of) questions to students. Students then use an iClicker (downloaded on their device, or loaned from eCampus) to answer the questions anonymously. I use these in class just for participation, but you can also set them up for course credit and link them to Canvas’ Grades functionality. In the simplest example of pre- and post-assessment, you could ask students the same questions at the beginning and end of the class, to see if there has been any change in student learning throughout the class.

Look out for my final blog post this term to learn about the last two components of the BOPPPS Model: Active (Participatory) Learning Strategies and the Summary!


(Note that these BOPPPS posts might be interspersed with content updates from the Teaching Community of Practice (TCoP), which I facilitate.) What is the TCoP, you ask?

  • The Teaching Community of Practice (TCoP) is a group for part- and full-time SJSU faculty (of all levels, across all departments) who are interested in enhancing their respective teaching practices. The TCoP will meet regularly, according to members’ schedules, to exchange strategies, tips, and resources that have led to successful (and sometimes, less-than-successful) teaching experiences. Please fill out this form if you are interested in joining this community and you will be added to the group’s mailing list. For inquiries about the TCoP, please contact me at rayna.friendly@sjsu.edu.

REFERENCES:

Brickman, P., Gormally, C., & Martella, A. M. (2016). Making the grade: Using instructional feedback and evaluation to inspire evidence-based teaching. CBE—Life Sciences Education, 15(4), ar75.

Day, R., & the ISW International Advisory Committee. (2004). Instructional Skills Workshop: From grassroots initiative to international perspectives. Paper presented at the Society for Teaching and Learning in Higher Education. Retrieved from http://iswnetwork.ca/wp-content/uploads/2012/07/Hand5_ICED.pdf

Dawson, D., Borin, P., Meadows, K., Britnell, J., Olsen, K., & McIntyre, G. (2014). The impact of the Instructional Skills Workshop on faculty approaches to teaching. Toronto, ON: Higher Education Quality Council of Ontario.

Gormally, C., Evans, M., & Brickman, P. (2014). Feedback about teaching in higher ed: Neglected opportunities to promote change. CBE Life Sci Educ, 13, 187-199.

Henderson, C., & Dancy, M. H. (2009). Impact of physics education research on the teaching of introductory quantitative physics in the United States. Phys Rev Spec Top Phys Educ Res, 5, 020107.

Henderson, C., Turpen, C., Dancy, M., & Chapman, T. (2014). Assessment of teaching effectiveness: Lack of alignment between instructors, institutions, and research recommendations. Phys Rev Spec Top Phys Educ Res, 10, 010106.

Macpherson, A. (2011). The Instructional Skills Workshop as a transformative learning process. Unpublished doctoral dissertation. Simon Fraser University, Burnaby, BC.