Evaluating How Students Perceive AI
- Jace Hargis

This week, I would like to share another article that might provide insights into how our students are perceiving and using AI. The article is entitled “Prompting minds: Evaluating how students perceive generative AI’s critical thinking dispositions” by Oliveira et al. (2025).
The authors introduce and validate a new instrument designed to measure how students perceive the critical thinking dispositions of generative AI: the Perceived Critical Thinking Disposition of Generative Artificial Intelligence (PCTD-GAI) scale. Drawing on the Marmara Critical Thinking Dispositions Scale (MCTDS), the authors adapted items to shift the object of evaluation from students’ own dispositions to those of AI.
The study involved nearly a thousand students from Portugal (n = 685) and Poland (n = 246), with exploratory and confirmatory factor analyses supporting the reliability and validity of six dimensions: reasoning, reaching judgment, search for evidence, search for truth, open-mindedness, and systematicity.
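For readers curious about what the reliability side of such a validation involves, a common first check is internal consistency per subscale. Below is a minimal illustrative sketch (using made-up Likert responses, not the study’s data) of computing Cronbach’s alpha for a hypothetical four-item subscale:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    k = items.shape[1]                         # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses: 8 students x 4 "systematicity" items,
# built from a shared per-student base so the items correlate.
rng = np.random.default_rng(0)
base = rng.integers(2, 5, size=(8, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(8, 4)), 1, 5)

alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # values above roughly 0.7 are conventionally deemed acceptable
```

In the actual study, this kind of per-dimension reliability check would be paired with the exploratory and confirmatory factor analyses that establish the six-factor structure.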
The major findings included:
- Moderately positive perceptions of AI across most dimensions.
- Systematicity received the highest ratings, reflecting AI’s ability to structure responses.
- Truth-seeking received the lowest ratings, indicating skepticism about AI’s commitment to accuracy.
- Students’ beliefs about AI’s dispositions influence how critically they engage with AI-generated content, raising concerns about over-reliance and reduced independent reasoning.
This research connects directly to critical thinking theory (Facione, 2013) by focusing not just on skills but also dispositions—the willingness to engage in analysis, evaluation, and judgment. The PCTD-GAI framework reveals how students project dispositions onto AI, which may mediate their own willingness to think critically.
Additionally, the findings resonate with metacognitive theory (Flavell, 1979), suggesting that metacognitive training is essential so students learn to monitor and evaluate AI outputs rather than passively accept them. The research also reinforces self-regulated learning frameworks (Pintrich, 2000), since students’ perceptions of AI dispositions shape their engagement, motivation, and monitoring strategies.
Faculty can use the outcomes of this research in immediate, practical ways:
- Embed AI literacy activities: Have students use AI responses as a starting point, then evaluate them against scholarly sources for credibility and truth-seeking.
- Integrate metacognitive reflection prompts: Encourage students to journal about when they trusted or doubted AI’s reasoning, highlighting strengths and blind spots in its “dispositions.”
- Design critical comparison tasks: Require students to critique AI-generated arguments by providing counterarguments, fostering deeper reasoning.
- Scaffold judgment-making: Instructors could create weekly exercises where students verify AI claims with peer-reviewed sources, emphasizing the “search for truth.”
While valuable, the study has notable limitations:
- Self-report bias: Students’ perceptions may reflect attitudes toward technology more than actual AI capabilities, making the measurement indirect rather than direct.
- Sample scope: The data are limited to Portugal and Poland and may not generalize to global higher education contexts.
- Single AI platform: The scale was validated only on ChatGPT, though the authors suggest it could be adapted to other generative AI tools.
- Cross-sectional design: Perceptions were captured at one moment, without tracking how they evolve over time or after prolonged AI use.
A stronger design would combine direct measures (e.g., critical thinking tests, analysis of student work) with perception surveys to triangulate whether attributing dispositions to AI correlates with gains or declines in students’ own critical thinking.
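As a toy illustration of that triangulation idea (all numbers below are hypothetical, not from the study), one could correlate each student’s average PCTD-GAI perception score with an independent critical thinking test score:

```python
import numpy as np

# Hypothetical data: each student's mean perception score on a 1-5 scale
# and an independent critical thinking test score out of 100.
perception = np.array([3.2, 4.1, 2.8, 3.9, 4.5, 3.0, 2.5, 4.2])
ct_test = np.array([72, 65, 80, 60, 55, 75, 84, 58], dtype=float)

# Pearson correlation between the two measures. A negative r would be
# consistent with the over-reliance concern: the more strongly students
# trust AI's dispositions, the lower their independent reasoning scores.
r = np.corrcoef(perception, ct_test)[0, 1]
print(round(r, 2))
```

A real design would of course need longitudinal data and controls, but even this simple pairing of perception and performance measures goes beyond what a cross-sectional perception survey alone can show.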
References
Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction. American Philosophical Association.
Facione, P. A. (2013). Critical thinking: What it is and why it counts. Insight Assessment.
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. American Psychologist, 34(10), 906–911.
Oliveira, L., Tavares, C., Strzelecki, A., & Silva, M. (2025). Prompting minds: Evaluating how students perceive generative AI’s critical thinking dispositions. Electronic Journal of e-Learning, 23(2), 1–18. https://doi.org/10.34190/ejel.23.2.3986
Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 451–502). Academic Press.