AI and Authentic Assessments
- Jace Hargis

As we continue to ponder how AI might support our current pedagogy/andragogy, this week I would like to share a recent SoTL article on AI and authentic assessment: "Practical Implications of Generative AI on Assessment: Snapshot of Early Reactions to Assessment Redesign" by Almpanis et al. (2025).
The authors explore the integration of generative AI (GenAI) tools, such as OpenAI ChatGPT and Google Gemini, into assessment practices. The findings highlight the dual role of GenAI: while it poses risks to academic integrity, it also offers opportunities, contrary to common perception, to enhance assessment authenticity and student engagement. Participating educators reported various adaptations, including the integration of GenAI into assessment tasks, increased use of group-based projects, and the implementation of time-limited and context-specific assignments. The study emphasizes the need for continuous evolution in assessment practices to maintain academic integrity and effectively measure student learning outcomes in the GenAI era.
The study engaged 12 faculty members across two UK universities. Using a bespoke online survey, the researchers collected qualitative data on assessment practices before and after the emergence of GenAI, as well as participants’ reflections on the future of assessment design. Responses were analyzed in two phases: first, a summative content analysis to categorize pre- and post-GenAI assessment practices, and second, a thematic analysis to capture broader hopes and doubts about AI’s role in higher education assessment (Braun & Clarke, 2019).
This design provided a structured yet flexible means of exploring faculty perspectives, though the small sample size and reliance on self-report raise important considerations. Confounding variables could include institutional culture (one institution may have stronger policy guidance on AI than the other), disciplinary traditions, and individual faculty digital literacy or attitudes toward technology. These factors may have shaped the willingness of participants to innovate or their perceptions of AI’s risks and benefits, complicating the generalizability of the findings.
The study revealed that two-thirds of participants (eight of the twelve) made substantive changes to their assessments post-GenAI, ranging from incremental adjustments (e.g., requiring students to collect original data or defend work orally) to more radical redesigns (e.g., shifting from essays to lab-based simulations, peer critiques, or time-limited, context-specific tasks).
These moves align with constructivist theory (Bruner, 1966; Vygotsky, 1978), which holds that learners build knowledge through situated, active engagement rather than through rote reproduction. For example, one instructor redesigned a traditional essay into a series of blog posts and peer critiques, requiring students to engage with peers’ ideas in ways that mirror authentic professional practice. This not only deters misuse of GenAI but also fosters social constructivist learning by embedding knowledge creation within a community of practice.
Similarly, requiring students to critique AI-generated outputs directly reflects principles of the information processing model (Atkinson & Shiffrin, 1968). By comparing, evaluating, and revising AI-produced content, students strengthen encoding and retrieval pathways, moving beyond surface-level memorization toward higher-order processing. One instructor, for example, provided students with ChatGPT-generated answers as a baseline, asking them to critique and extend the responses. This exercise not only disrupted reliance on AI but also sharpened critical analysis skills.
Although exploratory, the study underscores a practical message: faculty do not need to ban AI to maintain integrity—instead, they can embed it within carefully structured, authentic assessments. Three takeaways stand out:
Localize and contextualize tasks. Several instructors shifted assignments to focus on students’ local organizations or regional contexts. This reduces the utility of generic AI outputs while reinforcing situated cognition, where knowledge is applied in meaningful contexts.
Design for process and iteration. Breaking large assessments into smaller, scaffolded components allows educators to track student growth and intervene earlier. This approach not only disrupts inappropriate AI use but also aligns with cognitive theories of incremental rehearsal and deep processing.
Integrate AI as an object of critique. Asking students to evaluate or refine AI-generated responses transforms AI from a threat into a tool for metacognitive growth. Such practices mirror authentic assessment principles by simulating the professional reality that graduates will face: working alongside AI systems while exercising judgment, creativity, and ethical responsibility (Advance HE, 2023; McArthur, 2023).
Faculty across disciplines can build on this research by experimenting with context-specific and collaborative assessments that emphasize process, creativity, and reflection. While the small scale of the study calls for longitudinal research across disciplines and institutions, its findings provide a model for how educators might navigate the AI era: not by retreating to surveillance or prohibition but by reimagining assessment in ways that honor both foundational theories of learning and the realities of a rapidly evolving digital landscape.
References
Advance HE. (2023). Authentic assessment in the era of AI. https://www.advance-he.ac.uk/membership/all-member-benefit-projects/Authentic-Assessment-in-the-era-of-AI
Almpanis, T., Conroy, D., & Joseph-Richard, P. (2025). Practical implications of generative AI on assessment: Snapshot of early reactions to assessment redesign in an HRM and a psychology course. Electronic Journal of e-Learning, 23(3), 19–29. https://doi.org/10.34190/ejel.23.3.3971
Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. Spence & J. Spence (Eds.), The psychology of learning and motivation (Vol. 2, pp. 47–89). Academic Press.
Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health, 11(4), 589–597. https://doi.org/10.1080/2159676X.2019.1628806
Bruner, J. (1966). Toward a theory of instruction. Harvard University Press.
McArthur, J. (2023). Rethinking authentic assessment: Work, well-being, and society. Higher Education, 85(1), 85–101. https://doi.org/10.1007/s10734-022-00822-y
Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16(4), 385–407. https://doi.org/10.1007/s10648-004-0006-x
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.