Keren Taiter, Rinat B. Rosenberg-Kima

Introduction

The rapid advancement and widespread adoption of generative artificial intelligence (GenAI), exemplified by tools like ChatGPT, are fundamentally reshaping the landscape of education. GenAI's ability to produce written reports, design solutions, and simulate reflective thinking raises doubts about the validity of traditional assessments as evidence of learning, and its impact on academic evaluation demands urgent attention (Chaudhry et al., 2023). A recent systematic review highlights growing institutional concerns, particularly in high schools and universities, about academic integrity, authorship, and originality, as educators struggle to distinguish student-written work from AI-generated content (Zhao et al., 2024). Rather than relying solely on detection-based approaches, a more effective strategy may involve rethinking assessment design (Lye & Lim, 2024). Amid this rapid technological shift, many educators are unprepared to adapt their assessment practices to the GenAI era. Many lack sufficient training, resources, or institutional support to design assessments that are both pedagogically sound and resistant to AI shortcuts (Lee et al., 2024). Redesigning tasks that resist AI misuse demands time, expertise, and institutional support (Winerö & Utterberg Modén, 2024; Celik et al., 2022). Although some studies have reported teachers' need to redesign assignments (Hopfenbeck et al., 2023), questions remain regarding the specific challenges educators face in practice. Accordingly, this study aims to identify the challenges educators face in redesigning assessments in the GenAI era. Effective assessment redesign in this context demands the integration of technological understanding with pedagogical strategies and content knowledge. To frame the inquiry, the TPACK framework (Mishra & Koehler, 2006) was selected.
The central research question is: What challenges do educators face in designing assessments in the GenAI era?

Methodology

Participants

The study included 131 educators who took part in a workshop titled "Designing Assessment Tasks in the Age of Artificial Intelligence". Participation was open to lecturers in academia and to teachers of upper secondary school levels. At the end of the workshop, 31 participants completed the questionnaires. Due to the low response rate, the questionnaire was also distributed to lecturers and secondary school teachers, yielding an additional 31 responses for a total of 62 respondents (13 men, 49 women; age M = 46.7, SD = 15.94).

Measurements

Perceived challenges in redesigning assessment in the GenAI era questionnaire. To capture participants' perspectives on the challenges of redesigning assessment in the GenAI era, they responded to an open-ended question: "What challenges do you experience when redesigning assessment tasks in the era of GenAI?"

Workshop transcripts. Four of the six workshops were recorded on video and transcribed for subsequent qualitative analysis.

Procedure

The study was conducted over six months and included six separate workshops, titled "Designing Assessment Tasks in the GenAI Era", each lasting two hours and involving a different group of participants. Five of the workshops were delivered via Zoom, and one was conducted in person.