
AI and Authentic Math

  • Jace Hargis
  • 8 hours ago
  • 3 min read

As many of us continue to look for ways to integrate AI meaningfully, I would like to share a recent SoTL article entitled “Enhancing Students’ Authentic Mathematical Problem-Solving Skills and Confidence Through Error Analysis of GPT-4 Solutions” by Lin et al. (2025) (https://rptel.apsce.net/index.php/RPTEL/article/view/2025-20034/2025-20034).


The authors conducted a quasi-experimental study exploring how error analysis of GPT-4-generated math solutions can improve students’ authentic problem-solving abilities and math confidence. The research involved 59 students, divided into an experimental group (error analysis using GPT-4) and a control group (traditional instruction). Drawing from constructivist and metacognitive learning theories, the study framed AI as a cognitive partner that scaffolds reflection, collaboration, and conceptual application.


The study rests on the premise that errors are valuable learning opportunities, a central idea in constructivist theory. By analyzing mistakes, students engage in metacognitive monitoring, identifying misconceptions and reconstructing knowledge through active reflection. Error analysis aligns with how humans learn effectively, through feedback, collaboration, and self-regulated inquiry. GPT-4’s incorrect or incomplete solutions became “teachable moments,” prompting students to question reasoning processes, compare strategies, and validate answers.


From a cognitive apprenticeship perspective, GPT-4 modeled problem-solving steps while students verbalized reasoning, evaluated evidence, and refined strategies. The iterative process mirrored Vygotskian scaffolding, where social discourse and guided questioning transform abstract math into meaningful knowledge.


Findings

  1. Improvement in Authentic Problem-Solving Skills: Students in the GPT-4 error analysis group demonstrated statistically significant gains in problem-solving performance (p < .001). Both high- and low-achieving students improved from pre- to post-test, and the gains were especially notable among low-achieving students, who outperformed their control-group peers (p = .01). This suggests that AI-mediated error analysis reduces learning barriers, offering scaffolds that help struggling students connect abstract mathematical ideas to concrete contexts.

  2. Increased Mathematical Confidence: Post-test results showed significant improvement in mathematical confidence in the experimental group (p = .001). Interviews revealed that students became more patient, persistent, and less anxious when facing complex problems. They reported that GPT-4’s explanations helped them understand question intent, verify reasoning, and reframe mistakes as learning opportunities. Both high- and low-achieving students experienced increased confidence, though high achievers benefited more from opportunities for peer explanation and reasoning articulation, while low achievers relied more on GPT-4 for structured guidance.

  3. Enhanced Higher-Order Cognitive Skills: Qualitative data revealed that students developed key competencies, including collaborative problem-solving, critical thinking, mathematical creativity, and metacognition. Through peer discussion and class presentations, students learned to challenge GPT-4’s logic, justify their reasoning, and refine their explanations, activities that reflect authentic cognitive apprenticeship and socially mediated learning.


The authors emphasize that GPT-4’s educational value lies not in its correctness, but in how its errors activate human cognition. The study demonstrates pedagogical principles relevant to AI-supported learning:

  • Constructivist Scaffolding: GPT-4’s solutions serve as externalized “thinking artifacts” that help students visualize reasoning steps and reflect on misconceptions.

  • Metacognitive Reflection: Analyzing AI’s mistakes encourages students to think about their own thinking, promoting self-regulation and conceptual transfer.

  • Collaborative Knowledge Building: Peer dialogue during error analysis builds social understanding, echoing research on collaborative learning and cognitive apprenticeship.

  • Situated Authentic Learning: By applying math to real-world scenarios, students experience the relevance of knowledge, bridging abstract skills with lived experience.

This pedagogical design reframes GPT-4 as a partner in inquiry, not an oracle. Students learn to interrogate information critically, a foundational literacy in the age of AI.


Btw, in case you are interested in a conversation about AI in education, here is a YouTube video on which I recently collaborated with Janet Hurn: https://www.youtube.com/watch?v=XrkXqMyYIzk&t=848s


References

Lin, Y.-F., Yang, E. F.-Y., Wu, J.-S., Yeh, C. Y. C., Liao, C.-Y., & Chan, T.-W. (2025). Enhancing students’ authentic mathematical problem-solving skills and confidence through error analysis of GPT-4 solutions. Research and Practice in Technology Enhanced Learning, 20(34). https://doi.org/10.58459/rptel.2025.20034 

 
 
 
