By Tom Many, EdD, and Tesha Ferriby Thomas, EdD

“…Assessment is, indeed, the bridge between teaching and learning.” – Dylan Wiliam,
Embedded Formative Assessment

Here’s a question for you: How many of your university classes focused on the development of student assessments? In most cases, the answer is none, or one at the most. Yet accurately assessing student learning is one of the most important responsibilities of teams in a PLC. When intentionally designed, assessments not only give us an accurate measure of student proficiency but also provide teachers with important information on how to adjust instruction and give students much-needed intervention.

Stiggins and DuFour (2009) make the case that common assessments are an extremely powerful method of promoting student achievement because “teachers can pool their collective wisdom in making sound instructional decisions based on results” (p. 643). However, the results Stiggins and DuFour are referring to involve more than the percentage of students who scored proficient. Through detailed analysis of both correct answers and the incorrect options students chose (known as distractors), teams can learn a tremendous amount about students and their learning.

Distractors allow teachers to “follow up with additional instruction based on the most common sorts of errors made by an individual student or group of students.” – Popham (2000, p. 244)

Of course, before teachers can analyze assessment results, they need to develop a valid and reliable assessment. While teachers typically spend most of their time and energy crafting rigorous questions and identifying correct answers, it can be argued that developing the wrong answers (distractors) is just as important. Reflecting on distractors allows teachers to think deeply about the expectations of the learning targets contained within the standard, the possible misunderstandings students might have about the target, and the typical mistakes students could make when applying the concept. To ensure reliability, the number of learning targets on an assessment should be limited to two or three, and each target should be represented by three to four items. When using selected-response items (such as multiple-choice or matching items), effective teacher teams reflect on each item using the following steps (a brief sketch of the resulting item follows the list):

  1. Identify the specific learning target the item will assess.
  2. Craft a question that aligns with the target at the appropriate level of rigor (depth of knowledge level). This contributes to more valid and reliable assessments.
  3. Compose the correct answer.
  4. List common mistakes students typically make when answering the question.
  5. Brainstorm plausible misunderstandings students might have about the target.
  6. Co-create separate distractors that represent the possible mistakes and misunderstandings students may have about the learning target.
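
To make the product of these six steps concrete, here is a minimal sketch, in Python, of how a team might record an item so that each distractor stays linked to the mistake or misunderstanding it is designed to surface. The field names, labels, and item text are invented for illustration and are not tied to any particular assessment platform.

```python
# A minimal sketch of one selected-response item, with each distractor
# annotated by the mistake or misunderstanding it is designed to reveal.
# All names and text here are illustrative.
from dataclasses import dataclass

@dataclass
class Option:
    label: str       # "A", "B", "C", ...
    text: str        # the answer choice shown to students
    diagnosis: str   # what choosing this option suggests ("" if correct)

@dataclass
class Item:
    target: str      # the learning target the item assesses (step 1)
    stem: str        # the question, written at the target's rigor (step 2)
    options: list    # correct answer (step 3) plus distractors (steps 4-6)

item = Item(
    target="Identify the central idea of a text",
    stem="Which statement best expresses the central idea of the passage?",
    options=[
        Option("A", "<correct central idea>", diagnosis=""),
        Option("B", "<a supporting detail>",
               diagnosis="confuses supporting details with the main idea"),
        Option("C", "<opposite of the central idea>",
               diagnosis="general comprehension difficulty"),
    ],
)
```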

When teams anticipate the mistakes students might make and the misunderstandings they may have, they can design assessment items that help them learn about their students’ learning. “Based on the results from distractor analysis, instructors can identify the content areas that need instructional improvement and provide students with remedial instruction in those content areas” (Gierl et al., 2017, p. 1086). Teacher teams that analyze the results of thoughtfully created assessments are rewarded with powerful information about student learning.

Effective teams access this information by analyzing results by item, paying special attention to how many students chose each distractor. (In PLCs, this level of data analysis is often referred to as ‘kid by kid, skill by skill.’) Teams look for patterns and trends that indicate potential misconceptions or misunderstandings, which sometimes point to problems with curriculum or instructional strategies. In other cases, the patterns can simply reflect miscommunication. Either way, teachers can use the patterns identified through the careful analysis of distractors to make impactful instructional choices for the class as a whole.
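
As a concrete illustration of this item-level analysis, the short sketch below tallies how many students chose each option on each item, surfacing any distractor that attracted a large share of the class. The response records and student names are invented for the example; any gradebook export with a student, an item, and the option chosen would work the same way.

```python
# Tally how many students chose each option on each item, so the team can
# spot distractors that attracted many students. Data invented for example.
from collections import Counter, defaultdict

responses = [
    # (student, item_id, option_chosen)
    ("Ana", 1, "A"), ("Ben", 1, "B"), ("Cal", 1, "B"), ("Dee", 1, "C"),
    ("Ana", 2, "A"), ("Ben", 2, "A"), ("Cal", 2, "C"), ("Dee", 2, "A"),
]

tallies = defaultdict(Counter)
for student, item_id, option in responses:
    tallies[item_id][option] += 1

for item_id, counts in sorted(tallies.items()):
    total = sum(counts.values())
    print(f"Item {item_id}:")
    for option, n in counts.most_common():
        print(f"  option {option}: {n}/{total} students")
```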

In addition to studying overall patterns and trends in the distractors students chose, highly effective collaborative teams analyze data by target, by teacher, and by individual student need. Distractors can be instrumental in diagnosing specific student misunderstandings and creating intervention groups based on those data. “…Distractor analysis can help test developers and instructors understand why students produce errors and thereby guide our diagnostic inferences about test performance” (Gierl et al., 2017, p. 1085). Let’s take a look at a real-world example.

If a team wants to measure whether students can identify the central idea of a text, they might create an item that directs students to read a short piece of text, then choose the answer that best depicts its central idea; we will label the correct answer option A. When creating distractors, the team recognizes students often confuse supporting details with the main idea. To help them identify which students are making this mistake, they create an answer that focuses on specific details rather than the overall central idea – distractor B. The team also realizes some students may need assistance with general comprehension. To help identify those students, the team develops distractor C, which presents a central idea that is the opposite of the correct answer. Of course, one question does not reliably tell us whether a student has mastered a target; teams would ideally create multiple items on the same target (three to four items per target, as noted earlier) to obtain that information accurately.

To carry this example further, the team may then identify all students who chose distractor B and provide additional time and support on the differences between a main idea and supporting details. They may also group together students who chose distractor C so they can spend more time closely reading text, using strategies that help them identify the author’s purpose and uncover the main idea. In both cases, the team analyzes the data by individual student and responds by providing students with additional time and support to meet their specific learning needs. This process is often referred to as analyzing data by name and need. Again, this example depicts the analysis of only one question; teams will ideally analyze multiple items on the same target to determine whether students are routinely confusing supporting details with the main idea or consistently struggling with comprehension.
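
Grouping “by name and need” is a simple inversion of the same response records: collect the names of the students who chose each distractor so the team knows exactly who needs which intervention. The sketch below continues the invented example above, with distractor B signaling main-idea/detail confusion and distractor C signaling a comprehension difficulty.

```python
# Build intervention groups by listing which students chose each distractor.
# Following the article's example: B = detail/main-idea confusion,
# C = comprehension difficulty. All data invented for illustration.
from collections import defaultdict

responses = [
    # (student, item_id, option_chosen)
    ("Ana", 1, "A"), ("Ben", 1, "B"), ("Cal", 1, "B"), ("Dee", 1, "C"),
]

groups = defaultdict(list)
for student, item_id, option in responses:
    if option != "A":                  # option A is the correct answer here
        groups[option].append(student)

needs = {"B": "main idea vs. supporting details",
         "C": "close reading and comprehension support"}
for option, students in sorted(groups.items()):
    print(f"Distractor {option} group ({needs.get(option, 'review')}): "
          f"{', '.join(students)}")
```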

“Clearly, the patterns in a student’s incorrect responses to items can provide powerful, valuable information to guide instruction.” – King et al. (2004, p. 4)

Although creating valid and reliable assessments can be intimidating, teacher teams really need to ‘just do it.’ The first step is to avoid over-focusing on question stems and correct answers and to begin paying close attention to the power of distractors. By designing distractors that reveal information about students’ learning, teachers can provide students with additional time and support on the learning targets where they need it most. Even though university classes may not have adequately trained us to create assessments that provide valuable information about students’ learning, it is never too late to learn!

Dr. Tom Many is an author and consultant. His career in education spans more than 30 years.

Dr. Tesha Ferriby Thomas has spent the past 25 years as a teacher, administrator and consultant. She is co-author of “Amplify Your Impact” and “How Schools Thrive,” and is a Solution Tree associate.

References
Gierl, M. J., Bulut, O., Guo, Q. & Zhang, X. (2017). Developing, Analyzing, and Using Distractors for Multiple-Choice Tests in Education: A Comprehensive Review. Review of Educational Research, 87(6), 1082–1116. https://doi.org/10.3102/0034654317726529.

Stiggins, R. & DuFour, R. (2009). Maximizing the Power of Formative Assessments. Phi Delta Kappan, 90(9), 640–644. https://doi.org/10.1177/003172170909000907.

Popham, W. J. (2000). Educational Measurement (3rd ed.). Boston, MA: Allyn and Bacon.

TEPSA News, May/June 2021, Vol 78, No 3

Copyright © 2021 by the Texas Elementary Principals and Supervisors Association. No part of articles in TEPSA publications or on the website may be reproduced in any medium without the permission of the Texas Elementary Principals and Supervisors Association.

The Texas Elementary Principals and Supervisors Association (TEPSA), whose hallmark is educational leaders learning with and from each other, has served Texas PK-8 school leaders since 1917. Member owned and member governed, TEPSA has more than 6000 members who direct the activities of 3 million PK-8 school children. TEPSA is an affiliate of the National Association of Elementary School Principals.

