
AI in Year-Long Program, Nov 28

  • Jace Hargis

Because AI is relatively new, the research base is still small, but growing. This week I would like to share a somewhat longitudinal study entitled, “Embedding Generative AI as a Digital Capability into a Year-Long Skills Program” by Smith, Sokoya, Moore, Okonkwo, Boyd, Lacey, and Francis (2025) (https://open-publishing.org/journals/index.php/jutlp/article/view/1299/1031).


In a landmark 2025 study, Smith and colleagues present one of the first empirical models for embedding GenAI into a full-year postgraduate skills curriculum. The research responds to the growing need for pedagogically grounded strategies that move beyond policy discourse toward practical classroom integration of AI literacy, ethics, and competency-based learning.


The study positions GenAI as a digital capability essential to 21st-century learning. Across three semesters of MSc biosciences and chemistry programs, GenAI was systematically incorporated through process-based assessments, experience mapping, and just-in-time teaching methods designed to help students engage critically with AI while developing transferable digital and ethical skills. The study sought to answer two key questions:

  1. How can GenAI be embedded effectively into a postgraduate curriculum to enhance digital competencies?

  2. What strategies can mitigate concerns about academic integrity and data privacy?

Grounded in experiential learning theory (Beard, 2022), the framework shifted the emphasis from final outputs to documented learning processes. Students were required to include GenAI prompts, rationales, and critiques within their submissions, reflecting a growing movement toward “process, not product” assessment (Smith & Francis, 2024). This design aligns with international calls for higher education to integrate AI literacy as a graduate attribute (Moorhouse et al., 2023; Chan, 2023). Importantly, the program supported multilingual and international cohorts, highlighting GenAI’s role in promoting digital equity and language accessibility.


Using a mixed-methods design, the researchers combined surveys, skills audits, and semi-structured interviews across a diverse international cohort representing more than 20 nationalities. Over 85% of students (n = 156) participated. Quantitative data were analyzed using nonparametric statistics (a minimal sketch of such a pre/post analysis follows the semester list below), while qualitative interviews underwent thematic analysis following Braun and Clarke’s (2019) framework. The curriculum incorporated scaffolding over three semesters:

  • Semester 1: Introduction to GenAI fundamentals, ethics, and data protection.

  • Semester 2: Prompt engineering and applied GenAI tasks.

  • Semester 3: Reflective interviews and evaluation of learning trajectories.
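
The article reports nonparametric statistics on the survey data but does not publish its analysis code. Below is a minimal sketch of what such a pre/post comparison could look like, assuming paired Likert-scale confidence ratings and a Wilcoxon signed-rank test; the data values, variable names, and choice of test are illustrative assumptions, not the authors’ actual analysis.

```python
# Minimal sketch of a nonparametric pre/post analysis, assuming paired
# Likert-scale (1-5) confidence ratings for one competency domain.
# The ratings, names, and the Wilcoxon signed-rank test itself are
# illustrative assumptions; the article does not publish its code.
from scipy.stats import wilcoxon

# Hypothetical paired ratings for the same ten students,
# measured in semester 1 (pre) and semester 3 (post).
pre  = [2, 3, 1, 2, 2, 3, 1, 2, 3, 2]
post = [4, 4, 3, 3, 4, 5, 2, 4, 4, 3]

stat, p = wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.4f}")
```

A paired nonparametric test suits this design because Likert responses are ordinal and the same students are measured before and after the intervention.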

Findings

  1. Growth in Confidence. Across all four competency domains, students reported significant gains in self-confidence. For example, confidence in prompt writing rose from 21% to 48%, and ethical-use confidence increased from 42% to 67%.

  2. Process-Based Learning and Reflection. Students valued documenting their GenAI use, noting that it encouraged ethical reflection and intentionality. As one student remarked: “I've enhanced my skills in asking the right questions to AI effectively after the module.”

  3. Digital Equity and Linguistic Empowerment. International students highlighted how GenAI supported language learning and academic writing, helping them paraphrase, translate, and refine scientific texts. This aligns with recent findings that GenAI can act as a linguistic equalizer for multilingual learners (Wu & Yu, 2024).

  4. Persistent Ethical Hesitation. Despite improved technical skill, ethical uncertainty remained. Students expressed distrust toward both GenAI and unclear institutional policies. Some reported reduced confidence after learning about data privacy risks, underscoring the need for transparent university frameworks.


The authors identify three student trust profiles:

  • Those who trusted both GenAI and university guidance.

  • Those who trusted the tool but not university guidance.

  • Those who distrusted both, often avoiding GenAI altogether.

This typology reveals that ethical education must evolve alongside technical training, ensuring alignment between student values, institutional clarity, and pedagogical design. The study demonstrates that embedding GenAI into structured, skills-based modules enhances learning outcomes without compromising academic integrity. By requiring students to critique AI outputs, the curriculum fostered deeper understanding and discouraged passive reliance.


Smith et al. emphasize that institutions should reimagine assessment through:

  • Process documentation: Students log GenAI interactions and reflect on ethical implications (a sketch of such a log entry follows this list).

  • Collaborative critique: Peer reviews of AI outputs to build collective digital judgment.

  • Metacognitive prompts: Tasks like “What would you do differently than the AI?” to develop higher-order thinking.
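
The paper describes the required documentation (prompts, rationales, critiques) but not a concrete format. As a minimal sketch, a log entry might be structured like the hypothetical Python dataclass below; every field name here is an illustrative assumption, not a schema from the article.

```python
# Hypothetical structure for one GenAI process-documentation entry,
# mirroring the prompt / rationale / critique requirement described above.
# Field names are illustrative assumptions; the article prescribes no schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GenAILogEntry:
    tool: str        # which GenAI tool was used
    prompt: str      # the exact prompt submitted
    rationale: str   # why the student chose this prompt
    critique: str    # the student's evaluation of the AI output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

entry = GenAILogEntry(
    tool="generic-chatbot",
    prompt="Summarize the role of primers in PCR amplification.",
    rationale="I wanted a plain-language overview before reading the paper.",
    critique="Accurate overall, but it omitted primer-dimer artifacts.",
)
print(entry)
```

Structured entries like this make the reflective record easy to submit alongside an assignment and easy for instructors to review.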


The authors propose six actionable steps for educators:

  1. Integrate GenAI as a core digital competency in curricula and staff development.

  2. Scaffold learning experiences to build progressively complex AI skills.

  3. Adopt process-based assessments emphasizing reflection and critique.

  4. Provide continuous support: prompt libraries, tutorials, and feedback opportunities.

  5. Establish clear ethical guidelines in syllabi and assessment briefs.

  6. Ensure privacy compliance through institutional oversight and student education.

These recommendations align with emerging frameworks such as the AI Assessment Scale (Perkins et al., 2024), which similarly calls for human-centered AI integration.

Also this week, I would like to share a second AI in Ed SoTL article entitled, “Redesigning Assessments for AI-Enhanced Learning: A Framework for Educators in the Generative AI Era” by Khlaif et al. (2025). Khlaif and colleagues offer a timely and practical rethinking of assessment practices grounded in educational integrity, learner agency, and AI fluency. Their work proposes a multidimensional framework designed to ensure that assessment continues to reflect meaningful learning even when AI is involved at every stage.


The authors argue that generative AI has fundamentally disrupted assessment by:

  • Making traditional recall tasks obsolete

  • Complicating academic integrity enforcement

  • Blurring lines between student work and AI contribution

  • Expanding students’ access to instant feedback and explanations


Rather than focusing on catching misuse, Khlaif et al. advocate for:

  • Authentic, process-driven assessments

  • Metacognitive reflection on tool use

  • Evaluation of student + AI co-production

  • Assessment of higher-order thinking, not output alone

Four Key Dimensions

  1. Pedagogical Dimension. Assessment must align with active learning, inquiry, critical thinking, and student-centered design.

  2. Ethical Dimension. Includes transparency, academic honesty, consent, bias awareness, and AI literacy.

  3. Technological Dimension. Focuses on tool selection, AI capability analysis, and appropriate use boundaries.

  4. Assessment Dimension. Calls for redesigned methods including:

    1. performance-based tasks

    2. iterative submissions

    3. reflective writing

    4. multimodal evidence

    5. collaborative problem-solving

    6. AI-augmented portfolios

Educators are urged to:

  • Require students to document how they used AI

  • Compare drafts with and without AI assistance (see the sketch after this list)

  • Integrate oral defense, peer review, and process documentation

  • Blend human judgment with AI-supported analytics

  • Incentivize learning, not just product creation
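
For the draft-comparison practice in the list above, a minimal sketch using Python’s standard difflib is shown below; the sample sentences and file names are hypothetical placeholders, and the framework itself does not mandate any particular tooling.

```python
# Minimal sketch of comparing an unaided draft with an AI-assisted
# revision using Python's standard difflib. The sentences and file
# names are hypothetical placeholders.
import difflib

draft_without_ai = [
    "Enzymes speed up reactions.",
    "They work best at certain temperatures.",
]
draft_with_ai = [
    "Enzymes accelerate biochemical reactions by lowering activation energy.",
    "Each enzyme has an optimal temperature range.",
]

for line in difflib.unified_diff(
        draft_without_ai, draft_with_ai,
        fromfile="draft_without_ai.txt", tofile="draft_with_ai.txt",
        lineterm=""):
    print(line)
```

Reviewing such a diff with the student makes the AI’s contribution visible and turns the comparison itself into an assessable reflection.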

Rather than equating AI use with cheating, the authors propose a new definition: Integrity means honestly representing the relationship between human and AI contributions. This shift reframes assessment in terms of transparency, reflection, and ethical agency.


Khlaif et al. make a compelling case that assessment, not content, is where AI will make the biggest impact on learning systems. If assessment fails to evolve:

  • learning outcomes become artificial

  • grades become meaningless

  • student agency weakens

  • equity gaps worsen

If redesigned with AI in mind:

  • creativity expands

  • students build meta-AI literacy

  • authentic learning becomes visible

  • assessment becomes more human, not less

Khlaif, Z. N., Alkouk, W. A., Salama, N., & Abu Eideh, B. (2025). Redesigning assessments for AI-enhanced learning: A framework for educators in the generative AI era. Education Sciences, 15(2), 174. https://doi.org/10.3390/educsci15020174



References

Beard, C. (2022). Experiential learning design. Routledge.

Francis, N. J., Jones, S., & Smith, D. P. (2025). Generative AI in higher education: Balancing innovation and integrity. British Journal of Biomedical Science, 81, 14048.

Moorhouse, B. L., Yeo, M., & Wan, Y. (2023). Generative AI tools and assessment: Guidelines of the world’s top-ranking universities. Computers and Education Open, 5, 100151.

Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The AI assessment scale (AIAS): A framework for ethical integration of generative AI in educational assessment. Journal of University Teaching and Learning Practice, 21(6), 49–66.

Smith, D. P., Sokoya, D., Moore, S., Okonkwo, C., Boyd, C., Lacey, M. M., & Francis, N. J. (2025). Embedding generative AI as a digital capability into a year-long skills program. Journal of University Teaching and Learning Practice, 22(4). https://doi.org/10.53761/fh6q4v89

Wu, T., & Yu, Z. (2024). Bibliometric and systematic analysis of AI chatbots’ use for language education. Journal of University Teaching and Learning Practice, 21(6), 174–198.

