EFL Test-Takers’ Feedback on Integrated Speaking Assessment / Heng-Tsung Danny Huang, Shao-Ting Alan Hung // TESOL Quarterly, Volume 51, Issue 1, March 2017, p. 166–179.

Integrated skills assessment has in the past decade been greeted with renewed research interest in the domain of language testing, or, as Yu (2013) aptly states, it has been “reinstated and revitalized” to command attention “as a field of research inquiry and as a method of assessing language proficiency” (p. 110). Integrated second language (L2) test tasks usually require test-takers to generate oral or written responses by integrating textual and/or aural information provided ahead of time. These tasks have thus far been incorporated into widely recognized international English proficiency tests such as the TOEFL-iBT by the Educational Testing Service. Although these tasks are not without challenges (see Cumming, 2014, for a review of such challenges), researchers have claimed or shown that they can better simulate real-life language use tasks, resulting in higher levels of authenticity and predictive validity (Butler, Eignor, Jones, McNamara, & Suomi, 2000; Wesche, 1987). Further, these tasks treat language as holistic and test multiple L2 skills, promoting better alignment with current L2 teaching approaches (Plakans, 2013) and inducing positive washback (Barkaoui, Brooks, Swain, & Lapkin, 2013). Additionally, they promote test fairness by offering pertinent topical knowledge (Read, 1990; Weigle, 2004), and they meet with favorable test-taker reactions (Huang & Hung, 2010). In light of these benefits, the current researchers targeted integrated speaking test tasks (integrated tasks) and explored EFL test-takers’ feedback on the employment of such tasks in gauging their English oral proficiency.