Universities are living through a change of era. In just two years, artificial intelligence (AI) has gone from being a technological curiosity to becoming part of everyday life for most students. According to data published by Nature, 86% of university students used AI tools in their studies during 2024. Some institutions already embrace this reality as part of their identity: Tsinghua University in Beijing welcomes new students with an AI assistant that answers questions about campus life, and Ohio State University has introduced compulsory AI training for all students.

The change is profound because AI directly affects what has traditionally been understood as “learning”: analysing information, writing, solving problems, or thinking critically. Tools such as ChatGPT or Claude can perform these tasks in seconds, forcing universities to rethink what they should teach in the digital age.

Early studies point to two possible paths. At Harvard, a controlled trial showed that a well-designed AI tutor helped physics students learn faster and achieve better results than with traditional human-led teaching. However, other research, such as that conducted by MIT researcher Nataliya Kosmyna, warns of the opposite effect: when AI does the work for the student, brain activity and later memory retention decrease. In simple terms, those who let the machine do the thinking learn less.

Between these two extremes, universities are searching for balance. In Sydney, for example, a new system combines in-person exams, which verify authorship, with assignments in which the responsible use of AI is permitted. In China, the approach is more structural: Tsinghua has built its own architecture integrating different AI models and verified knowledge bases, with the aim of providing support without errors or dependence on a single company.

The implications go far beyond technical issues. Educators around the world point out that AI should not be seen as a threat, but rather as an opportunity to rethink higher education: less time spent memorising data or repeating formulas, and more time developing judgement, analysis, creativity, and collaboration. If machines can produce flawless text, human value will lie in understanding, interpreting, and deciding.

However, the Nature article also warns of risks: lack of regulation, dependence on major technology companies, and unequal access. Some universities have signed multi-million-dollar agreements with companies such as OpenAI or Google without demanding ethical guarantees or transparency about data use. Experts argue that education must act as a critical voice, capable of demanding inclusive, sustainable models that respect privacy.

Despite the uncertainty, there are reasons for optimism. History shows that every technological revolution—from the printing press to the internet—has generated concern, but also new forms of knowledge. AI can free up time and energy so that teachers can better support students’ thinking, and so that students can learn to live with technology without losing their intellectual autonomy.

Ultimately, the question is not whether universities should use artificial intelligence, but how they can use it to strengthen human intelligence.


Article translated from Periódico Educación

By Luis Miralles

Primary school teacher. Participant in the Dialogic Pedagogical Gathering of Elda, “On the Shoulders of Giants.”