The New Art and Science of Teaching. Robert J. Marzano
Source: Adapted from Marzano Research, 2016o.
Many of the strategies in this element represent different ways to assess students. For example, common assessments are those that collaborative teams create around a specific proficiency scale (see Marzano, Heflebower, Hoegh, Warrick, & Grift, 2016). To illustrate, assume that a collaborative team of three teachers is designing a common assessment. The teachers start by creating a proficiency scale like the one in figure 2.1.
Source: Marzano Research, 2016o.
Figure 2.1: Proficiency scale for common assessment.
Creating a proficiency scale is always the first order of business when designing a common assessment. As described in chapter 1, if the district has created proficiency scales for each subject area and grade level, this work is already done for collaborative teams.
The next step is to design an assessment that addresses scores 2.0, 3.0, and 4.0 content from the scale. Such an assessment appears in figure 2.2 (page 24).
The assessment in figure 2.2 includes items and tasks for score 2.0 content in section A, items and tasks for score 3.0 content in section B, and items and tasks for score 4.0 content in section C. Other assessments individual teachers generate might follow this same format. However, assessments might take a variety of other forms. For example, an interview is a type of assessment that involves a teacher-led discussion during which the teacher asks questions addressing score 2.0, score 3.0, and score 4.0 content. Based on students’ oral responses, the teacher assigns an overall score.
Source: Marzano Research, 2016o.
Figure 2.2: Assessment with three sections.
Student-generated assessments are those that individual students propose and execute. This strategy provides maximum flexibility to students in that they can select the assessment format that best fits their personalities and preferences.
Probably the most unusual strategy in element 5—response patterns—involves different ways of scoring assessments. To illustrate this strategy, consider figure 2.3.
Source: Marzano Research, 2016o.
Figure 2.3: The percentage approach to scoring assessments.
Figure 2.3 depicts an individual student’s response pattern on a test that has three sections: (1) one for score 2.0 content, (2) one for score 3.0 content, and (3) one for score 4.0 content. The section for score 2.0 content contains five items worth five points each, for a total of twenty-five points. The student obtained twenty-two of the twenty-five points for a score of 88 percent, indicating that the student knows score 2.0 content. The student earned 50 percent of the points for score 3.0 content and only 15 percent of the points for score 4.0 content. This pattern translates into an overall score of 2.5 on the test, indicating knowledge of score 2.0 content on the proficiency scale and partial knowledge of score 3.0 content.
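The arithmetic behind this response pattern can be sketched in a short script. The section percentages come from the example above, but the cutoff values and the translation rule below are illustrative assumptions only — in practice the teacher assigns the overall scale score using professional judgment about the whole pattern. The thresholds here were chosen simply so the example pattern reproduces the 2.5 described in the text.

```python
# Hedged sketch of the percentage approach to scoring in figure 2.3.
# The mastery/partial cutoffs and the branching rule are hypothetical
# parameters, not Marzano's published procedure.

def overall_scale_score(pct_2, pct_3, pct_4, mastery=80, partial=40):
    """Map section percentages to a 0-4 proficiency-scale score.

    pct_2, pct_3, pct_4: percentage of points earned on the sections
    for score 2.0, 3.0, and 4.0 content, respectively.
    """
    if pct_2 < mastery:
        # Student has not demonstrated score 2.0 content.
        return 1.0 if pct_2 >= partial else 0.0
    if pct_3 >= mastery:
        # Score 3.0 content is mastered; check score 4.0 content.
        return 4.0 if pct_4 >= mastery else (3.5 if pct_4 >= partial else 3.0)
    # Score 2.0 mastered, score 3.0 only partially demonstrated.
    return 2.5 if pct_3 >= partial else 2.0

# Response pattern from figure 2.3: 22 of 25 points on the score 2.0
# section (88 percent), 50 percent on 3.0, 15 percent on 4.0.
pct_2 = 100 * 22 / 25
print(overall_scale_score(pct_2, 50, 15))  # → 2.5
```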
When the strategies in this element produce the desired effects, teachers will observe the following behaviors in students.
• Students can explain what the score they received on an assessment means relative to a specific progression of knowledge.
• Students can explain what their grades mean in terms of their status in specific topics.
• Students propose ways they can demonstrate their level of proficiency on a scale.
Planning
The design question pertaining to using assessments is, How will I design and administer assessments that help students understand how their test scores and grades are related to their status on the progression of knowledge they are expected to master? The two elements that pertain to this design area provide specific guidance regarding this overall design question. Teachers can easily turn these elements into more focused planning questions.
• Element 4: How will I informally assess the whole class?
• Element 5: How will I formally assess individual students?
The teacher can address the planning question for element 4 in an opportunistic manner, simply taking advantage of situations that lend themselves to informal assessments of the whole class. For example, suppose a teacher is conducting a lesson on score 2.0 content. She decides to use electronic voting devices to keep track of how well students are responding to her questions. As the lesson progresses, she notices that more and more students are responding correctly. She uses this information as an opportunity to celebrate the apparent growth in understanding of the class as a whole. While she could have planned this activity, the opportunity simply presented itself, and she acted on it.
The planning question for element 5 generally requires more formal planning of the assessments teachers will administer over the course of a unit or set of related lessons. Typically, teachers like to begin a unit with a pretest that addresses scores 2.0, 3.0, and 4.0 content from the proficiency scale; they must plan for this. It is also advisable to plan a similar post-test covering the same content but using different items and tasks. Although teachers may plan one or more additional tests to administer between the pre- and post-tests, they can also construct and administer assessments as needed. As long as they score all assessments using the 0–4 system from the proficiency scale, teachers can compare all scores, providing a clear view of students’ learning over time.
Implications for Change
The major change this design area implies is a shift from an assessment perspective to a measurement perspective. This is a veritable paradigm shift that has far-reaching implications. Currently, teachers view assessment as a series of independent activities that gather information about students’ performance on a specific topic that has been the focus of instruction. Teachers score most, if not all, of these assessments using a percentage score (or some variation thereof). At some point, teachers combine all students’ individual scores in some way to provide an overall score for the students on each topic. Usually, teachers use a weighted average, with scores on some tests counting more than others. They then translate the overall score to some type of overall percentage or grade.
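As a concrete illustration of the conventional practice just described, the following sketch combines percentage scores with a weighted average. The individual scores and weights are invented for illustration; they stand in for whatever assessments a teacher happens to weight within a grading period.

```python
# Illustrative sketch of the conventional percentage-and-weighted-average
# grading practice described above. Scores and weights are hypothetical.

scores = [78, 85, 92]        # percentage scores on three assessments
weights = [0.2, 0.3, 0.5]    # e.g., the final test counts the most

# Weighted average: multiply each score by its weight and sum.
overall = sum(s * w for s, w in zip(scores, weights))
print(f"Overall: {overall:.1f}%")  # → Overall: 87.1%
```

Note that the resulting 87.1 percent says nothing about *which* content the student knows — which is exactly the limitation of the assessment perspective this passage describes.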
This process tells us very little about what specific content students know and don’t know. In contrast, scores teachers generate from a measurement