Double-Edged: AI in Ed
Welcome back to this SoTL blog and, for many of us, to the beginning of another academic term. I am beginning my ninth year of sharing teaching research, and I hope it continues to be useful to each of you. As always, if you would rather not receive these emails, just send me a note and I will take you off the ever-growing list. Alternatively, you can read the current post and all archived posts at https://jacehargis.substack.com. Also, if you know colleagues who might be interested in these posts, feel free to have them send me an email and I will gladly add them to the weekly list.
With the rapid movement of generative artificial intelligence (GenAI) conversations in higher ed over the summer, I thought I would begin this year's SoTL blog with a commentary article that draws on research papers by Rose Luckin, a senior fellow in the Center for Universal Education. The July 2024 paper is entitled "The double-edged sword of AI in education." Luckin cites three potential risks of AI in Ed, followed by six ways in which academics can address these risks.
Risks:
Overestimating AI’s intelligence
Cognitive development is significantly influenced by a person’s social environment, with language playing a vital role in fostering abstract thinking. The learning process involves a complex interplay between knowledge gained through direct, everyday experiences and more formal, theoretical concepts acquired through structured instruction. While predictive analytics and machine learning have certainly helped transform productivity in areas like personalized learning and adaptive testing, these same technologies have yet to meaningfully grasp and augment the tacit collective processes of communication, critical thinking, and collaboration that remain crucial to human intelligence. Where AI has meaningfully impacted education, such as through automated grading or content recommendation systems, it has arguably driven more atomization, standardization, and exploitation than enhancement of human intelligence.
Cognitive atrophy through overreliance
Human cognition is changing, and this process is hastened by technology. If we fully delegate aspects of our cognitive processing to AI, we could evolve to a point where we are no longer able to complete these cognitive activities ourselves. The processes we use to behave intelligently are highly interconnected. It may therefore be that a process that appears redundant now that we have AI is in fact a vital component of a more advanced cognitive process, one that is not redundant and that we do not want to delegate.
The illusion of effortless wisdom
Large language models (LLMs) are designed to engage in a manner that emphasizes ease and convenience, suggesting that AI can accomplish tasks without significant human effort. This emphasis on effortlessness is fundamentally at odds with the nature of deep learning. Genuine learning requires "strenuous mental efforts" (refer to Piaget's disequilibrium and the research on confusion and frustration). There is a risk that the consumerization of AI could entice people to believe that learning can now be effortless, a belief that could fundamentally alter how students approach learning and problem-solving.
How do we address these risks?
Empower the education sector in AI development and regulation.
Foster flexibility and adaptability in educational systems.
Enhance AI literacy and critical thinking.
Position AI as a ‘teammate’ in education.
Invest in advanced AI models for education.
Address AI governance and data ownership.
References
Luckin, R. (2024). The double-edged sword of AI in education. Brookings. https://web.archive.org/web/20240723045822/https://www.brookings.edu/articles/the-double-edged-sword-of-ai-in-education