
GenAI, Design Thinking, Pedagogy

  • Jace Hargis

This week I would like to share a recent AI SoTL article that combines AI with design thinking and pedagogy. The article is entitled “Generative AI in Design Thinking Pedagogy: Enhancing Creativity, Critical Thinking, and Ethical Reasoning in Higher Education,” by Rana, Verhoeven, and Sharma (2025) (https://open-publishing.org/journals/index.php/jutlp/article/view/1193/1030).


This study explores how GenAI can be pedagogically integrated into Design Thinking education to enhance students’ creativity, critical thinking, and ethical reasoning. Through a mixed-methods analysis of 112 student reflections from a 12-week design thinking course, the researchers reveal a nuanced transformation: students evolve from hesitant users of AI to critical evaluators and creative collaborators.


Using GenAI tools, students engage across the five stages of design thinking: empathize, define, ideate, prototype, and test. The study’s thematic analysis identified four central dimensions of student experience:

  1. Perceived Benefits – Students found that AI augmented their creativity, expanded ideation, and democratized access to design skills. GenAI acted as a cognitive partner that encouraged divergent thinking rather than replacing originality.

  2. Ethical Concerns – Learners expressed significant discomfort with algorithmic bias and authorship ambiguity, recognizing AI’s replication of gender and cultural stereotypes.

  3. Hesitance and Acceptance – Students moved from skepticism about AI’s cheating potential to a more reflective, strategic acceptance, describing AI as a tool, not a crutch.

  4. Critical Validation – Learners developed epistemic vigilance, actively verifying AI-generated outputs, fact-checking data, and even constructing bias checklists.


Sentiment analysis supported these findings: 86% of reflections were positive overall, but ethical concerns accounted for 62% of the negative sentiment, suggesting that ethical awareness deepened precisely through discomfort.
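A percentage breakdown like the one above is straightforward to compute once each reflection has a sentiment label. The sketch below is purely illustrative (the label counts here are invented, not the study’s data):

```python
from collections import Counter

def sentiment_summary(labels):
    """Return the percentage breakdown of sentiment labels."""
    counts = Counter(labels)
    total = len(labels)
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

# Hypothetical set of 100 labeled reflections.
print(sentiment_summary(["positive"] * 86 + ["negative"] * 10 + ["neutral"] * 4))
```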


Grounded in constructivist learning theory (Vygotsky, 1978; Kolb, 1984) and design thinking pedagogy (Brown, 2008; Carlgren et al., 2016), the study positions GenAI as a mediating cognitive partner in the learning process. Rather than viewing AI as a threat to originality, the authors argue that AI can expand the learner’s Zone of Proximal Development, supporting creative ideation and reflective problem-solving when scaffolded by intentional pedagogy.


Importantly, the researchers emphasize that GenAI must be treated as a socio-technical actor, not a neutral tool, whose cultural and epistemological assumptions must be made explicit in classroom dialogue. Students’ recognition of bias in AI systems, for instance, revealed how ethical reasoning can be taught through the act of critique, not simply through compliance-based ethics modules.


Key recommendations include:

  • Reframing GenAI as a co-creator that supports reflection and critical judgment, not automation.

  • Embedding ethical reasoning throughout all stages of design thinking, treating ethics as a continuous reflective process rather than an isolated unit.

  • Cultivating AI literacy that goes beyond technical skill to include epistemic and cultural awareness, helping students question what AI produces, whose knowledge it reflects, and whom it might exclude.

  • Faculty development that prepares instructors to guide students through this new human–AI collaboration with empathy, skepticism, and critical reflection.


----------

I would like to share an additional AI in Education SoTL article today, entitled “Enhancing Peer Assessment with Artificial Intelligence: What the Research Really Shows” by Topping et al. (2025). The authors provide the most comprehensive synthesis to date of how AI is transforming peer assessment in higher education, not just as a grading tool but as a cognitive, social, and pedagogical accelerator. Their contribution spans a theoretical framework, a rapid scoping review of 79 studies, and a detailed case study of the RiPPLE platform. Collectively, their work demonstrates that AI is improving peer assessment processes, but unevenly, with major gaps still limiting its potential.


The authors propose a six-part model showing where AI can enhance peer assessment:

  1. Assigning Peer Assessors

  2. Enhancing Individual Reviews

  3. Deriving Peer Grades & Feedback

  4. Analyzing Student Feedback

  5. Facilitating Instructor Oversight

  6. Peer Assessment Systems


Through an extended Google Scholar scoping review (2013–2023), the authors find:

  • 44% of studies focus on AI for calculating grades or feedback

  • Only 5% focus on assigning reviewers

  • Few addressed calibration, teamwork, or automated feedback

  • Nearly all studies show AI improves peer assessment


But the research is still primarily:

  • Narrow in scope

  • Focused on scoring, not learning

  • Light on large-scale or long-term evidence


AI has been most successful in:

  • consistency of grading

  • identifying outlier scoring

  • analyzing review credibility

  • improving feedback quality
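To make “identifying outlier scoring” concrete: one common approach (a generic illustration, not the specific method of any study reviewed) is to flag peer grades that sit far from the group mean using a z-score rule:

```python
from statistics import mean, stdev

def flag_outlier_scores(scores, threshold=2.0):
    """Flag peer-assigned scores far from the group mean.

    A simple z-score rule: any score more than `threshold`
    standard deviations from the mean is marked for instructor review.
    """
    if len(scores) < 3:
        return []  # too few reviews to judge consistency
    mu, sigma = mean(scores), stdev(scores)
    if sigma == 0:
        return []  # perfect agreement, nothing to flag
    return [s for s in scores if abs(s - mu) / sigma > threshold]

# One submission rated by six peers; the 40 stands out.
print(flag_outlier_scores([78, 82, 80, 79, 40, 81]))
```

In practice, a flagged score would be routed to the instructor rather than discarded automatically, which keeps human judgment in the loop.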


The paper highlights RiPPLE, a system used in 50+ courses by 50,000+ students, as a model for AI-enhanced peer review. RiPPLE uses AI to:

  • Assign reviewers based on trust metrics

  • Support students with AI-generated feedback on feedback

  • Help instructors quickly identify issues and outliers

  • Improve rubric use and review confidence

  • Provide personalized learning recommendations based on peer-rated resources
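The paper does not spell out RiPPLE’s algorithms here, but the general idea behind trust-metric weighting can be sketched hypothetically: each reviewer’s score counts in proportion to a trust value reflecting their past reliability. The function name and trust values below are my own illustration, not RiPPLE’s API:

```python
def trust_weighted_grade(reviews):
    """Combine peer scores into one grade, weighting each
    reviewer by a trust value in [0, 1].

    `reviews` is a list of (score, trust) pairs; reviewers whose
    past judgments proved reliable count more toward the result.
    """
    total_trust = sum(trust for _, trust in reviews)
    if total_trust == 0:
        raise ValueError("no trusted reviews to aggregate")
    return sum(score * trust for score, trust in reviews) / total_trust

# Three reviewers: the low score from the least-trusted
# reviewer pulls the grade down only slightly.
grade = trust_weighted_grade([(85, 0.9), (80, 0.8), (50, 0.2)])
print(round(grade, 1))
```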


Key Takeaways

1. AI doesn’t replace peer assessment — it strengthens it.  Especially in:

  • large classes

  • feedback-intensive courses

  • collaborative learning environments

2. The biggest impacts are not in grading, but in learning.  Students learn to:

  • analyze quality

  • calibrate expectations

  • provide better feedback

  • build metacognitive skill

3. We are still underusing AI’s capabilities. Future research should explore:

  • LLM-supported feedback literacy

  • emotion-aware feedback systems

  • AI for student reviewer training

  • teamwork and co-assessment analytics

  • cross-cultural and equity implications

4. Instructor oversight still matters. Even in an AI-driven system, human judgment is not eliminated; it is optimized.


References

Brown, T. (2008). Design thinking. Harvard Business Review, 86(6), 84–95.

Carlgren, L., Rauth, I., & Elmquist, M. (2016). Framing design thinking: The concept in idea and enactment. Creativity and Innovation Management, 25(2), 38–57. https://doi.org/10.1111/caim.12153

Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice Hall.

Rana, V., Verhoeven, B., & Sharma, M. (2025). Generative AI in design thinking pedagogy: Enhancing creativity, critical thinking, and ethical reasoning in higher education. Journal of University Teaching and Learning Practice, 22(4). https://doi.org/10.53761/tjse2f36

Topping, K. J., Gehringer, E., Khosravi, H., Gudipati, S., Jadhav, K., & Susarla, S. (2025). Enhancing peer assessment with artificial intelligence. International Journal of Educational Technology in Higher Education, 22(3). https://doi.org/10.1186/s41239-024-00501-1

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
