Impact and Opportunities of AI
- Jace Hargis
This week I would like to share a recent SoTL article entitled, “Impact and Opportunities of Generative AI in Education: A Study of Academic Perceptions” by Sevilla-Bernardo, Cervera and Martin-Robles (2025). This study explores how professors perceive and adopt generative AI (GAI) within their teaching practices. Conducted in Spain, the research analyzed the attitudes of 71 professors toward the benefits and barriers of integrating GAI into higher education classrooms. The study’s two main hypotheses were:
Professors’ experience with GAI increases their perception of advantages relative to disadvantages.
Continued use of GAI leads to greater willingness to integrate it into practice.
The first hypothesis was supported, but the second was not, indicating that while familiarity with AI improves awareness of its benefits, it does not automatically translate into sustained adoption. Data collection occurred in July 2023 and March 2024 through structured questionnaires. Quantitative measures were analyzed in R and validated with Cronbach’s alpha (α = .91), confirming high internal reliability. Participants were predominantly mid-career professors aged 45–64, with AI familiarity increasing between sessions. Notably, the proportion of professors who had used ChatGPT for more than 80 hours rose significantly, indicating greater exposure to and comfort with the tool.
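For readers curious about the reliability figure, Cronbach’s alpha measures how consistently a set of survey items hangs together. The study computed it in R; the sketch below is a minimal Python version using made-up Likert responses (not the study’s data) purely to show the calculation:

```python
from statistics import variance

def cronbachs_alpha(scores: list[list[int]]) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    k = len(scores[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*scores)]  # per-item sample variance
    total_var = variance([sum(row) for row in scores])   # variance of the summed scale
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 survey items
responses = [
    [4, 5, 4, 4],
    [3, 3, 3, 2],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 5, 4],
    [3, 3, 2, 3],
]
print(round(cronbachs_alpha(responses), 2))  # -> 0.94
```

Values above roughly .9 are conventionally read as high internal consistency, which is how the study interprets its reported α = .91.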
Findings
The advantages most cited:
Creation of activities and cases for teaching (mean = 4.38/5.0).
Translation and cultural adaptation of texts (4.27).
Generation of new content and examples for classroom use (4.27).
These uses highlight AI’s perceived ability to reduce workload and enhance curriculum development. Professors valued the tool’s potential to simulate class discussions, propose evaluation questions, and assist in designing instructional materials.
Conversely, the main disadvantages centered on:
Inaccurate or unreliable responses (mean = 3.31).
Ethical concerns and plagiarism risks (3.19).
Subscription costs (2.96).
While technical issues such as internet failures and adaptation to teaching style persisted, their importance diminished over time. Significantly, the perception that AI diminishes critical thinking decreased by 11% across sessions, suggesting that as professors gained experience, they came to view AI as an enhancer of critical engagement rather than a detractor from it.
The Mann-Whitney U test confirmed a statistically significant increase (+9.9%) in perceived advantages (p < .001), while no significant change occurred in intention to adopt (p = .93). In essence, professors saw more potential benefits but did not necessarily feel ready to rely on GAI in daily teaching. This paradox underscores a psychological adoption gap: while faculty acknowledge AI’s utility, they remain hesitant to embed it structurally into their pedagogy. Factors contributing to this reluctance include lack of training, ethical uncertainty, and limited institutional guidance.
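The Mann-Whitney U test compares two independent samples by rank rather than by mean, which suits ordinal Likert ratings like these. A minimal sketch, using invented ratings rather than the study’s data, shows the mechanics (U statistic plus a one-sided normal approximation, without tie correction):

```python
from math import erfc, sqrt

def mann_whitney_u(x: list[float], y: list[float]) -> float:
    """U = number of (x_i, y_j) pairs with x_i > y_j; ties count as 0.5."""
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)

# Hypothetical 1-5 "perceived advantage" ratings from two survey waves
wave_1 = [3, 3, 4, 2, 3, 4, 3, 2, 4, 3]
wave_2 = [4, 4, 5, 3, 4, 4, 3, 4, 5, 4]

u = mann_whitney_u(wave_2, wave_1)
n1, n2 = len(wave_2), len(wave_1)
mu = n1 * n2 / 2                            # expected U under the null
sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # its standard deviation
z = (u - mu) / sigma
p = 0.5 * erfc(z / sqrt(2))                 # one-sided p-value
print(f"U = {u}, z = {z:.2f}, p = {p:.3f}")
```

With these toy numbers the second wave ranks significantly higher; in the study, the same test found a shift in advantage ratings (p < .001) but no shift in intention to adopt (p = .93).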
Professors appreciated AI’s ability to automate routine tasks, generate examples, and enhance student engagement, aligning with prior research on AI-supported learning personalization (Chiu, 2024; Gimpel et al., 2023). Yet skepticism persists around accuracy, authorship, and student dependency, concerns echoed by earlier studies (Farhi et al., 2023; Stahl & Eke, 2024). The authors argue that effective integration demands pedagogical scaffolding, ensuring AI functions as a partner in critical inquiry rather than a content substitute.
Key implications from this study include:
Training is essential: Faculty development programs should emphasize hands-on, critical engagement with AI tools.
Institutional frameworks are needed to establish ethical use, transparency, and data protection.
Curriculum innovation should leverage GAI for design thinking, formative feedback, and multilingual access.
---
Today, since there are so many AI-in-education research papers coming out, I would like to share an additional summary. The paper is entitled, “What Counts as Evidence in AI & ED: Towards Science-for-Policy 3.0,” by Tuomi (2025). While education systems continue to claim that AI will “revolutionize learning,” this author demonstrates that the empirical foundation for these claims is far weaker and far more complicated than often acknowledged.
Since the 1990s, the evidence-based education movement has relied heavily on randomized controlled trials (RCTs) and meta-analyses to claim “what works.” Inspired largely by medicine, these methods have become dominant despite:
Weak fit with educational complexity
Frequent misapplication
Major issues with contextual validity
The author reviews leading studies and meta-analyses on AI-driven learning tools, especially intelligent tutoring systems. Their results sound promising, but only on the surface. When corrected for bias, poor methods, small samples, and brief interventions, the effect of AI on learning approaches zero. For example, Morrison et al. (2024) report a +0.12 effect size in math, statistically small and educationally modest. Other meta-analyses, once filtered for rigor, reveal almost no consistent impact at all.
These findings contradict a persistent myth rooted in Bloom’s “2-sigma problem”: the belief that personalized tutoring can produce massive learning gains (effect size +2.0). Tuomi calls this contradiction the “Bloomian paradox” of AI in education: the gold standard of RCTs cannot meaningfully test a personalized AI intervention, because personalization violates statistical assumptions.
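To see why +0.12 and Bloom’s +2.0 are worlds apart, an effect size d can be translated (under a standard normal model, a common textbook interpretation rather than anything from Tuomi’s paper) into the percentile of the control-group distribution at which the average treated student would land:

```python
from math import erf, sqrt

def treated_avg_percentile(d: float) -> float:
    """Percentile of the control distribution reached by the average
    treated student, assuming normally distributed scores with effect size d."""
    return 0.5 * (1 + erf(d / sqrt(2)))  # standard normal CDF at d

for label, d in [("Bloom's 2-sigma tutoring claim", 2.0),
                 ("Corrected AI-in-math estimate", 0.12)]:
    print(f"{label}: d = {d:+.2f} -> {treated_avg_percentile(d):.0%}")
```

A d of 2.0 lifts the average student to roughly the 98th percentile of the untutored group; a d of 0.12 lifts them to about the 55th, barely distinguishable from no intervention.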
Tuomi shows that many recent AI-in-education studies:
Use low-quality experimental designs
Treat speculative potential as actual proof
Conflate usability with learning impact
Rely on wildly heterogeneous samples
Equate “AI was used” with “AI worked”
Even rigorous meta-analyses are often built on incoherent foundations—mixing different populations, tools, learning outcomes, and contexts. The result is that policymakers think strong evidence exists where it does not.
Tuomi argues that educators and policymakers often assume that learning = faster mastery of a curriculum. But education also serves:
Qualification (skills/credentials)
Socialization (culture, community, belonging)
Subjectification (agency, identity, autonomy)
AI may help with qualification, but:
May harm social and developmental goals
May accelerate only defensive credentials (Thurow, 1975)
May push education toward efficiency over humanity
Tuomi proposes moving beyond the “evidence-based” paradigm that treats research like a delivery mechanism for policy facts. Instead, we need:
Learning-oriented policy-making
Argument-centered evidence
Theory-driven method pluralism
Collaborative knowledge creation
Evidence should support intelligent decision-making, not dictate it. The key shift is that evidence becomes what helps us reason about complex futures, not what proves universal causal truths.
Core Takeaways
There is no solid experimental base showing AI improves learning at scale.
RCTs are poorly matched to personalized or adaptive AI tools.
We need deeper, theory-driven, context-aware research designs.
Educational goals must expand beyond efficiency and test gains.
Evidence should be a tool for reasoning—not a statistical ritual.
References
Adeshola, I., & Adepoju, A. P. (2023). The opportunities and challenges of ChatGPT in education. Interactive Learning Environments, 32(10), 6159–6172. https://doi.org/10.1080/10494820.2023.2253858
Boscardin, C. K., Gin, B., Golde, P. B., & Hauer, K. E. (2024). ChatGPT and generative AI for medical education: Potential impact and opportunity. Academic Medicine, 99(1), 22–27. https://doi.org/10.1097/ACM.0000000000005439
Chiu, T. K. (2024). The impact of Generative AI on practices, policies and research direction in education. Interactive Learning Environments, 32(10), 6187–6203. https://doi.org/10.1080/10494820.2023.2253861
Farhi, F., Jeljeli, R., Aburezeq, I., Dweikat, F. F., Al-shami, S. A., & Slamene, R. (2023). Analyzing students’ views, concerns, and perceived ethics about ChatGPT usage. Computers and Education: AI, 5, 100180. https://doi.org/10.1016/j.caeai.2023.100180
Sevilla-Bernardo, J., Cervera, L., & Martin-Robles, J. (2025). Impact and opportunities of generative AI in education: A study of academic perceptions. International Journal of Emerging Technologies in Learning, 20(3), 55–71. https://doi.org/10.3991/ijet.v20i03.55809
Stahl, B. C., & Eke, D. (2024). The ethics of ChatGPT: Exploring the ethical issues of an emerging technology. International Journal of Information Management, 74, 102700. https://doi.org/10.1016/j.ijinfomgt.2023.102700
Tuomi, I. (2025). What counts as evidence in AI & ED: Towards science-for-policy 3.0. European Journal of Education Policy & Practice, 1(1), 1–31. https://doi.org/10.5117/EJEP2025.1.001.TUOM