
Wide use of generative Artificial Intelligence applications and their easy access often put educators in a challenging situation due to the possibility of students misusing the tools. A skillful user might use AI to complete course assignments without being noticed, acquiring strong AI skills but lacking the competence they are supposed to develop during their studies. The concern is real and familiar to all educators. However, the use of AI for cheating often overshadows another very important issue: the refusal to use AI even in situations where its use is allowed or even required. This article discusses students’ concerns regarding the use of AI in studying. The ideas were collected through discussions and conversations with students in connection with coursework.
Author: Olesia Kullberg
Using AI tools is a future skill for working life
The future of work is increasingly seen as a collaboration between humans and machines, giving rise to the idea of hybrid intelligence, where the combined creativity of machines and humans creates a new form of creative agency (Chai & Yuan 2023). Another perspective is the possibility of using AI and automation to improve employee well-being in future workplaces (Spencer 2022). According to Gallup’s 2024 study, 93% of Fortune 500 Chief Human Resource Officers confirm that their companies have started to benefit from using AI tools (Houter 2024).
It is clear that when used correctly, AI can save time, increase revenue, and free employees from repetitive tasks, creating more time for meaningful work where humans excel. However, correct use of AI for the benefit of the user or a company presupposes that the user is familiar with different techniques and methods of AI use, is able to critically evaluate the results AI provides, and knows about the possibilities and limitations of AI. As the field of generative AI is constantly evolving, these skills need to be practiced and developed. Thus, the use of AI tools in education is paramount.
AI tools in education
The use of generative AI in education is a new skill for many educators as well. Several models have been introduced to assist teachers in creating new learning assignments and instructions for students. The Traffic Light Model suggests four types of AI tool use: prohibited use, allowed with the requirement to report, allowed without the requirement to report, and required with the requirement to report (Arene 2024). The AI Assessment Scale presents a framework with five stages of AI use: no AI, AI used for planning, collaboration with AI, full use of AI for achieving assessment goals, and use of AI for a creative interdisciplinary exploration of a topic (Perkins et al. 2024). These two models clearly emphasize the use of AI in course exercises rather than prohibiting its use.
Being a relatively new phenomenon for both students and teachers, AI requires the formation of new skills, attitudes, and approaches. Despite innovative theories on the use of AI in education, it is often either treated as a tool for cheating or completely disregarded. Developing the critical and innovative skills necessary for working with hybrid intelligence in future workplaces demands hours and years of practice during studies. However, to encourage students to engage in this practice, it is crucial to understand the obstacles that, as they perceive it, keep them from fully exploring the possibilities of AI in studying.
It is now widely accepted that AI detection technology is not reliable enough. For example, many universities’ instructions for teachers state that AI detection tools cannot be reliably used to identify fraudulent use of AI. However, a widely used similarity-checking tool, Turnitin, provides reports on the use of AI in students’ work. This creates a dubious situation: on the one hand, teachers are instructed not to rely on AI detection reports; on the other hand, the reports are generated by an officially adopted tool. Needless to say, there is frustration among teachers about which approach they should take. This frustration has a direct impact on students’ behavior and attitudes. Within the same university, different teachers use different approaches and, for example, interpret the Turnitin AI detection report differently. Even if the assignment instructions clearly state the allowed use of AI, it is usually not clear to the student how the teacher detects AI use and what happens if the teacher suspects misuse. From the student’s perspective, it is then much safer not to use AI at all, even in assignments where its use is allowed or required.
Student concerns regarding the usage of AI tools
Despite existing instructions and rules, each student ultimately makes their own choice about which tools to use and how honestly to report their use. These choices are guided by motivation for studies, time constraints, level of skills and knowledge, family and/or work situation, and many other factors. Fraudulent use of AI might not be detected by the teacher and could even result in an excellent grade. At the same time, honest use of a student’s own skills without AI support might result in a lower grade. Thus, in practice, fraudulent use of AI might lead to a higher GPA for some students. This is a common concern among students, and one they cannot resolve themselves. They see it as an unfair situation where, due to the lack of technological solutions and teachers’ skills, fraudulent behavior goes unidentified.
In addition to GPA, students have other concerns as well. They see the fraudulent use of AI in studying as a threat to the value of university degrees. Since ChatGPT was launched in November 2022, students who began their university studies that year are now graduating with their Bachelor’s degrees. If some students have outsourced their coursework to AI tools, there is a possibility that their skills and knowledge are insufficient for success in the workplace, even though they hold a degree and may have a relatively high GPA. Hypothetically, a situation could arise in the coming years where employers can no longer fully trust educational credentials.
AI – a tool for cheating or for learning?
Students often do not see the use of AI as a method for developing new skills. In most conversations, students, as well as teachers, mainly focus on the use of AI as a way to outsource coursework, minimizing the effort and time spent on studying. This approach raises concerns about a clear loss of skills, which might be true in some cases. However, the acquisition of new skills—where a student learns to use AI correctly for more efficient studying and working—is often overlooked.
A clear shift from viewing AI as a tool for cheating to seeing it as a means of learning more effectively and acquiring new skills needed in future workplaces is necessary. However, before this shift can happen, students need a clear definition of what constitutes fraudulent use of AI: who defines it, how it is defined, and what happens when it is detected. To use AI tools safely, students need to understand the procedures teachers use to determine whether AI was used appropriately. Clear and transparent procedures will allow students to use these tools confidently and experiment with them.
Another need is to redesign assignments and highlight AI-related skills by describing them clearly and guiding students in the correct use of AI to develop new competencies and learn modern methods of studying and working. Students need to understand exactly what they are developing by following the teacher’s instructions and how these skills will be relevant in real-world contexts.
Work life demands the use of new skills, which should be developed during one’s studies. However, many students have concerns about using AI in their studies because they do not clearly understand what is allowed and how unauthorized use is detected. Transparent instructions and new teaching approaches will help build students’ trust, allowing them to fully explore AI’s potential.
Sources
Arene. 2024. Arene’s recommendations on the use of Artificial Intelligence for Universities of Applied Sciences (pdf). Cited 10 Apr 2025. Available at https://arene.fi/wp-content/uploads/PDF/2024/Teko%C3%A4lysuositukset/Arene%E2%80%99s%20recommendations%20on%20the%20use%20of%20artificial%20intelligence%20for%20uni-versities%20of%20applied%20sciences%202024.pdf?_t=1731419903
Chai, H. & Yuan, P.F. 2023. Hybrid intelligence. Architectural Intelligence. Vol. 2 (1), 11. Cited 10 Apr 2025. Available at https://doi.org/10.1007/s44223-023-00029-w
Houter, K. 2024. AI in the Workplace: Answering 3 Big Questions. Gallup. Cited 10 Apr 2025. Available at https://www.gallup.com/workplace/651203/workplace-answering-big-questions.aspx
Perkins, M., Furze, L., Roe, J., & MacVaugh, J. 2024. The AI Assessment Scale revisited: A framework for educational assessment. ArXiv. Cited 10 Apr 2025. Available at https://doi.org/10.48550/arXiv.2412.09029
Spencer, D.A. 2022. Automation and well-being: bridging the gap between economics and business ethics. Journal of Business Ethics. Vol. 187, 271–281. Cited 10 Apr 2025. Available at https://doi.org/10.1007/s10551-022-05258-z
Author
Olesia Kullberg works as a Senior Lecturer at LAB University of Applied Sciences.
Illustration: https://pxhere.com/en/photo/1018640 (CC0)
Reference to this article
Kullberg, O. 2025. Using AI in studying – feeling empowered or feeling guilty? LAB Pro. Cited and date of citation. Available at https://www.labopen.fi/lab-pro/using-ai-in-studying-feeling-empowered-or-feeling-guilty/