Antiracist Writing Assessment Ecologies. Asao B. Inoue

income, education, and test-takers’ race. Similarly, in Great Britain, Steve Strand (2010) found that Black Caribbean British students between ages 7 and 11 made less progress on national tests than their white British peers because of systemic problems in schools and their assessments. These patterns among racial formations do not change at Fresno State, where African-American, Latino/a, and Hmong students are assessed lower on the EPT (see Inoue & Poe, 2012, for historical EPT scores by racial formation) than their white peers and attain lower final portfolio scores in the First Year Writing (FYW) program readings conducted each summer for program assessment purposes (Inoue, 2009a; 2012, p. 88). Race appears to be functioning in each assessment, producing similar racialized consequences, always benefiting a white middle-class racial formation.

      Between 2011 and 2014, I directed the Early Start English and Summer Bridge programs at Fresno State. All students designated as remedial by the English Placement Test (EPT), a state-wide, standardized test with a timed writing component, must take an Early Start or Bridge course in order to begin their studies on any California State University campus. Even a casual look into the classrooms and over the roster of all students in these programs shows a stunning racial picture. These courses are ostensibly organized and filled by a test of language competency; however, each summer it is the same: the classes are filled almost exclusively with students of color. Of all the 2013 Bridge students, only four were designated as white by their school records (that’s 2% of the Bridge population). And the Early Start English program is almost identical. So at least in this one local example of a writing assessment (the EPT), when we talk about linguistic difference, or remediation (these are synonymous in many cases), we are talking about race in conventional ways.7

      The remediation numbers that the EPT produces through blind readings by California State University (CSU) faculty readers also support my claims. In fall 2013, as shown in Table 1, all students of color, no matter which racial formation or ethnic group we choose, are designated by the EPT as remedial at dramatically higher rates than white students. The Asian-American category, which at Fresno State is mostly Hmong students, is the most vulnerable to this test, with 43.9 percentage points more of the Asian-American formation being designated as remedial in English than the white formation.8 How is it that these racially uneven test results are possible, and possible at such consistent rates? How is it that the EPT can draw English remediation lines along racial lines so well?

      Table 1. At Fresno State, students of color are deemed remedial at consistently higher rates than white students by the EPT (California State University Analytic Studies, 2014)

Race               No. of First-Year Students   No. Proficient in English   % Designated as Remedial
African-American   119                          61                          48.7%
Mexican-American   1,298                        593                         54.3%
Asian-American     495                          161                         67.5%
White Non-Latino   601                          459                         23.6%
Total              2,965                        1,548                       47.8%
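
      To make the table’s arithmetic explicit, here is a minimal sketch in Python (mine, not part of the CSU report) that recomputes each remediation percentage from the first two columns, assuming the published rate is simply the share of first-year students not designated proficient. The counts come straight from Table 1, and the last two lines recover the 43.9-percentage-point gap between the Asian-American and white formations cited above.

      # Recompute Table 1's remediation rates from the raw counts
      # (California State University Analytic Studies, 2014).
      # Assumption: % designated as remedial = 1 - (proficient / first-year),
      # which matches every published percentage in the table.
      table = {
          "African-American": (119, 61),
          "Mexican-American": (1298, 593),
          "Asian-American":   (495, 161),
          "White Non-Latino": (601, 459),
          "Total":            (2965, 1548),
      }
      rates = {}
      for formation, (first_year, proficient) in table.items():
          rates[formation] = round(100 * (1 - proficient / first_year), 1)
          print(f"{formation}: {rates[formation]}% designated as remedial")
      # Gap cited in the text: 67.5 - 23.6 = 43.9 percentage points.
      gap = rates["Asian-American"] - rates["White Non-Latino"]
      print(f"Asian-American vs. white gap: {gap:.1f} percentage points")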

      While my main focus in this book is on classroom writing assessment, the way judgments are formed in large-scale ratings of timed essays is not much different from the way a single teacher reads and judges her own students’ writing. In fact, such ratings show how language is connected to the racialized body. The processes, contexts, feedback, and consequences in a classroom may be different in each case, but how race functions in key places in classroom writing assessment, such as the reading and judgment of the teacher, or the writing construct used as a standard by which all performances are measured, is, I argue, very similar. And race is central to this similarity because it is central to our notions of language use and its value.

      To be fair, there is more going on to produce the numbers above. Educational, disciplinary, and economic structures are at work that prepare many students of color in and around Fresno unevenly compared with their white peers. Most Black students in Fresno, for example, are poor and go to poorer schools because schools are supported by local taxes, which are low in those parts of Fresno. The same goes for many Asian-American students. But why would Mexican-American students have remediation rates more than twice those of white students? There is more going on than economics and uneven conditions at local schools.

      Within the test, there are other structures causing certain discourses to be rated lower. Could the languages used by students of color be stigmatized, causing them to be rated lower, even though raters do not know who wrote each essay when they read for the EPT? Consider the guide provided to schools and teachers to help them prepare their high school students to take the EPT. The guide, produced by the CSU Chancellor’s Office, gives the rubric used to judge the written portion of the test. Each written test can receive a score from 1 to 6, with 6 being “superior” quality, 4 being “adequate,” 3 being “marginal,” and 1 being “incompetent” (2009, pp. 14-16). The rubric has six familiar elements:

      a. response to the topic

      b. understanding and use of the passage

      c. quality and clarity of thought

      d. organization, development, and support

      e. syntax and command of language

      f. grammar, usage, and mechanics (CSU Office of the Chancellor, 2009, p. 14)

      At least items e and f correspond to a locally dominant SEAE, while a, b, c, and d correspond to conventions and dispositions that are part of a dominant discourse. The guide offers this description of a “4” essay, which is “adequate,” that is, not remedial:

      a. addresses the topic, but may slight some aspects of the task

      b. demonstrates a generally accurate understanding of the passage in developing a sensible response

      c. may treat the topic simplistically or repetitively

      d. is adequately organized and developed, generally supporting ideas with reasons and examples

      e. demonstrates adequate use of syntax and language

      f. may have some errors, but generally demonstrates control of grammar, usage, and mechanics (CSU Office of the Chancellor, 2009, p. 15)

      I cannot help but recognize this rubric. It’s very familiar. In Chapter 13, “Evaluation,” of his helpful book Teaching Expository Writing, William Irmscher (1979) provides a very similar rubric, one I’ve used in the past in writing classrooms:

      • Content

      • Organization/structure/form

      • Diction/language/style

      • Punctuation/mechanics

      • Grammar/style (1979, pp. 157-159)

      Irmscher’s dimensions are a variation of the five factors that Paul Diederich (1974) and his colleagues, John French and Sydell Carlton, found in their factor analysis of fifty-three judges’ readings of 300 student papers in a 1961 ETS study. The five factors they found most important to academic and professional readers’ judgments of student essays were (in order of importance, from most to least frequently used):

      • Ideas

      • Usage, sentence structure, punctuation, and spelling

      • Organization and analysis

      • Wording and phrasing

      • Style (Diederich, 1974, pp. 7-8)

      Diederich explains that these five factors, which most of his readers used to read and grade essays, accounted for only 43% of all the variance in the grades given to the set of papers in his study. He says, “the remaining 57 percent was unexplained” (1974, p. 10). Most likely, the unexplained variance in grades was due to “unique ideas about grading that are not shared by any other reader, and random variations in judgment, which may be regarded as errors in judgment” (Diederich, 1974, p. 10). In other words, most of what produced evaluations and grades of student writing simply couldn’t be accounted for in the study, and could be unique or idiosyncratic. Each reader has his or her own unique, tacit dimensions that do not easily agree with the tacit dimensions that other readers may have. But what does this have to do with the EPT’s use of a very similar rubric, and how does it help us see race in the
