may need to comprehend the source texts and plan a response on the topic, but does not have to integrate the texts in the product of the assessment. In content-responsible integrated tasks, both the process and the product require skill integration, and, therefore, the rating rubric should include criteria for assessing the test takers' use of more than one skill. The third test type considered includes several test sections that are thematically linked (Esmaeili, 2002). For example, a section assessing reading comprehension would include a text that also serves as the topic for a subsequent writing prompt.
In addition to these three types of integration, other tasks that may be considered integrated are also being used to assess language. Some are new, while others are familiar tasks now viewed in a new light. For example, story-completion writing tasks have been used in language-acquisition research for some time; however, scholars are now looking into their potential to elicit integrated reading-into-writing performances (Wang & Qi, 2013). Similarly, short-answer questions in reading tests can be considered to assess both reading and writing (Weigle, Yang, & Montee, 2013). Although the writing in these tasks is much shorter, they afford a means of assessing integration for lower-proficiency students who may not be able to produce a written essay. The variety of integrated tasks can also be expanded by reversing the direction of the skills in a task. For example, asking writers to free-write on a topic before reading texts that delve into the topic can activate background knowledge to support comprehension (Plakans et al., 2018). There is great potential for continued innovation in, or reframing of, language tasks to elicit skills integration.
Research in the Assessment of Integrated Skills
Researchers have attempted to understand integrated tasks by comparing test takers' performances on them with their performances on tasks requiring only one skill. Research comparing independent and integrated writing task performance has found that overall scores show similarities (Brown, Hilgers, & Marsella, 1991) and are positively correlated (Sawaki et al., 2013; Zhu et al., 2016). Yet closer investigation has revealed differences in discourse features such as grammatical accuracy, development, and rhetorical stance (Cumming et al., 2005). For example, in studying the prototype TOEFL iBT task, Cumming et al. (2005) found that integrated task responses were shorter but used longer words and a greater variety of words than responses to independent writing tasks. The independent writing responses were scored higher on certain rhetorical features, such as the quality of propositions, claims, and warrants.
Studies investigating the test-taking process across task types have found evidence that some test takers follow a similar approach for both independent and integrated tasks, while others treat integrated tasks as requiring synthesis and integration strategies, such as scanning the text for ideas to include in their essay (Plakans, 2009; Barkaoui, 2015). Test-taking processes also differ across integrated task types: Ascención (2005) found that a read-and-respond writing task required more planning and monitoring than a read-and-summarize task.
Researchers have also attempted to reveal effects of proficiency and task familiarity on integrated task performance. Not surprisingly, higher-proficiency writers produce longer responses to integrated writing tasks than lower-proficiency writers (Cumming et al., 2005; Gebril & Plakans, 2013). Expected differences across proficiency levels also appear in grammatical accuracy, organization, and the integration of source texts (Cumming et al., 2005; Gebril & Plakans, 2013; Plakans & Gebril, 2017). Research results further suggest that prior experience with integrated tasks, educational level, first-language writing experience, and interest in writing may affect performance (Ascención, 2005; Wolfersberger, 2013). The role of texts is also important for developing integrated assessment, as well as for understanding scores. In a study of different source texts used in summary writing, Li (2014) found that test takers summarized expository texts better, even though they reported that narrative texts were easier to summarize.
Benefits of Integrated Assessment
The profession's current interest in integrating skills for assessment resides largely in the apparent authenticity of such tasks. Particularly for specific purposes, such as assessing academic language, needs analyses of language use have shown that skills are used in tandem rather than in isolation (e.g., Leki & Carson, 1997). Thus, including this integration in assessment creates test tasks that appear authentic in view of their alignment with real language-use contexts. This connection between the test and the real world is intended to increase test users' confidence in the scores, raise test takers' motivation, and yield scores that are more predictive of future performance. Integrated assessments that provide test takers with content or ideas for their performances may also mitigate nonlanguage factors such as creativity, background knowledge, or prior education, or a combination of these (Read, 1990). Some research has reported that test takers prefer integrated tasks because they understand the task topic better than on single-skill tasks and can generate ideas from the sources given (Plakans, 2009). However, Huang and Hung (2013) found that actual performance and anxiety measures did not support test takers' perception that integrated tasks lower anxiety in comparison with independent speaking tasks.
Another advantage of this kind of assessment is its emphasis on the skills working together rather than on viewing them as individual components of language ability. Integrated assessment may fit well with current language-teaching approaches, such as task-based language teaching (TBLT), which move away from teaching separate skills toward accomplishing tasks using language holistically. Such tests may also have positive washback, or impact, on classrooms that integrate skills, focus on content and language integrated learning (CLIL), or have goals for specific-purposes language use.
Challenges of Integrated Assessment
Although integrating skills in assessment has visible benefits, a number of challenges remain, such as developing high-quality integrated tasks, rating learners' performances appropriately, and justifying the validity of score interpretations and uses.
Development
Developing high-quality integrated prompts can be challenging because these tasks are often complex, involving multiple steps and texts. The development or selection of source texts requires decisions about the length and level of the text as well as about its content. For example, some tests include multiple texts that give different opinions on a topic, while others use one long text that describes a single phenomenon. When test developers aim to produce parallel items, these considerations about texts need to be taken into account. Carefully crafted instructions are another important consideration: test takers need a clear idea of what is expected and, as much as possible, of how to integrate the skills in their process and product. Studies have shown that test takers approach these tasks in a variety of ways, some using both skills to complete the task while others use only one skill and thus do not truly integrate. With more frequent use, test takers' confusion may decrease; however, those unfamiliar with this type of assessment may struggle to understand how to complete the task, which can affect their score regardless of their language ability.
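The text-level decisions described above can be screened roughly in software before human review. The following is a minimal sketch in Python, an illustration rather than a procedure drawn from the literature reviewed here; the matching thresholds and the syllable heuristic behind the Flesch reading-ease estimate are illustrative assumptions.

```python
import re

def estimate_syllables(word):
    """Rough syllable count: vowel groups, with a silent-e adjustment.
    A heuristic for illustration only."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def text_stats(text):
    """Word count and a Flesch reading-ease estimate for a candidate source text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        raise ValueError("text too short to analyze")
    syllables = sum(estimate_syllables(w) for w in words)
    flesch = (206.835
              - 1.015 * (len(words) / len(sentences))
              - 84.6 * (syllables / len(words)))
    return {"words": len(words), "flesch": round(flesch, 1)}

def roughly_parallel(text_a, text_b, max_length_diff=0.10, max_flesch_diff=5.0):
    """Flag whether two candidate source texts are close enough in length and
    estimated level to pair as parallel items.
    The thresholds are illustrative assumptions, not field standards."""
    a, b = text_stats(text_a), text_stats(text_b)
    length_diff = abs(a["words"] - b["words"]) / max(a["words"], b["words"])
    return (length_diff <= max_length_diff
            and abs(a["flesch"] - b["flesch"]) <= max_flesch_diff)
```

Automated indices approximate text level at best, so texts flagged as comparable would still need human review for content, topic familiarity, and balance of opinions before being paired.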
Reliability and Rating
Although several studies have found that assessment of integrated skills tasks can lead to reliable rating (Ascención, 2005; Gebril, 2010), scoring these performance-based tasks remains difficult. The rubric for integrated skills assessment needs to reflect skill integration in some way unless one skill is clearly dominant and of primary concern, as with stimulus tasks or thematically linked tasks that do not require a content-responsible response. Thus, meaningful scoring requires a clear definition of the role of the integrated skills and of what constitutes evidence for them in a performance. A rubric checklist of the kind sketched below is one way to make such a definition concrete for assessing integrated reading and writing skills.
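The following minimal sketch in Python pairs such a checklist with a chance-corrected agreement check (Cohen's kappa) between two raters, of the kind reliability studies report. The checklist items are assumptions drawn from features discussed in this entry (comprehension, content-responsible source use, organization and development, grammatical accuracy), not criteria from any operational test.

```python
# Illustrative checklist; items are assumed from features discussed above,
# not taken from any published rubric.
CHECKLIST = [
    "Shows comprehension of the source text(s)",
    "Integrates source ideas rather than copying them",
    "Uses sources in a content-responsible way (relevant, accurate, attributed)",
    "Organizes and develops the response around the task",
    "Maintains grammatical accuracy that does not impede understanding",
]

def checklist_score(judgments):
    """Total of one rater's yes/no (True/False) judgments for a response."""
    assert len(judgments) == len(CHECKLIST)
    return sum(judgments)

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' binary judgments."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Two raters judge the same response against the checklist.
rater_a = [True, True, False, True, True]
rater_b = [True, True, False, True, False]
print(checklist_score(rater_a), checklist_score(rater_b))
print(round(cohens_kappa(rater_a, rater_b), 2))  # ~0.55 for this toy case
```

In an operational setting, agreement would be computed over many responses and raters, and a weighted index would suit multi-point scales; the point here is only that a checklist makes evidence of integration concrete enough to rate consistently.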