EDUCATIONAL institutions are grappling with the impact of Large Language Models and AI chatbots the world over.
The primary question is how the advent of this particular technology changes the way people learn, what is expected of young minds in schools and universities, and what the long-term impacts are of accessing and consuming knowledge through LLMs.
The productivity impact of AI and LLMs is a separate debate, and one that is already a source of much contention. Initial evaluation work shows that the technology helps raise productivity in lower-level tasks, but has no discernible impact on higher-level, more complex tasks that involve multiple layers of human interaction, reasoning, and analysis. It thus remains to be seen whether AI will in any way reverse the structural slowdown of growth rates that has plagued advanced capitalist economies since the late 20th century, and which the last big technological invention, the internet, failed to do in any meaningful way.
However, much like the internet, the social and cultural impact of AI is likely to be significant. I will concern myself here with its impact on one social domain: education, and the fundamental ways and methods of learning and knowing.
From one vantage point, the early results are not encouraging. Essential aspects of learning (reading and writing) are likely to be stifled by a deepening reliance on LLMs. AI advocates frequently point out that these tools cut down the time required for reading and writing tasks, by producing the output for you and by summarising content on particular issues. And, even as a sceptic, I have to recognise that they do this moderately well.
The problem, however, is that this approach ignores a fundamental fact about learning. Learning is not merely the quantitative production of output or the memorisation of summaries. Everything we know about it tells us that learning is a processual activity, one that tests and develops the ways we use our brains.
When we read text, we're not just familiarising ourselves with the content (even if that is our primary task). The process of reading any one thing intersects with a range of thoughts, ideas, and memories we already hold. This creates new content and new ways of understanding and thinking. The same process is at work when we sit down to write. It forces us to become clearer in our thoughts, to develop the capacity to reason and articulate, and to confront the contradictions and inconsistencies that may exist in our minds. By outsourcing these tasks to an AI agent or chatbot, we remove the process part, with all of its benefits, leaving behind mere output and a whole host of underutilised capacities.
Just on its own, this should be sufficient cause for scepticism and restraint, especially for children and young adults. If we equate learning with output, we run the risk of never developing essential neural capabilities in a large section of the population. But if one is looking for a more immediate reason for restraint, then the systemic hallucination problem — the creation or referencing of content that does not exist — is an important one as well. For a lot of output, AI agents may be producing passable content; but given the capacity to hallucinate, they cannot be relied on universally.
Arguments of this kind, made in general or abstract terms, are a hard sell for young people. We're dealing with a generation that now consumes content in increasingly shorter forms (from text to video, from video to short videos, and finally to reels that last a few seconds). Merely proclaiming that one should devote more time to the process of learning because of its abstract benefits isn't necessarily a good pitch.
We also don’t have the luxury of waiting for the lost value of the process to make a case for itself. The cost of the world finding out what it lost by outsourcing key aspects of the educational experience to a chatbot is far too high.
This is where institutional mechanisms and safeguards will be required. Universities and schools today are devoted to many pursuits (most of them commercial), but if they retain an interest in being organised sites of learning, they will have to place restrictions on AI use, at least in domains and stages where fundamental reading and writing skills are imparted.
At a more macroeconomic level, societies should also consider the implications of AI-reliance in two domains. The first is the obvious problem of reduced human skill and ability. Deskilled labour is replaceable, interchangeable, and ultimately discardable. It is also a fundamental erosion of the human experience.
The second domain is the prevailing quest to get rid of human labour altogether. An obsession with AI is now the latest form of this quest. To be clear, a society where no one has to work, because of machines or technology, is a utopian ideal that remains worth pursuing. But the caveat is that this society has to be one where the gains and outputs from such machines and technologies are shared among all.
Our societies do not feature such a model. Here, the prevailing model is that any gains are appropriated by the private owners of machines and technologies. When you layer labour-replacing technology on top of this model, you make humans redundant without any claim on the output (a claim they currently, if imperfectly, hold in the form of a wage). The political and social implications of a large mass of redundant workers are hard to fathom at this stage, but one can assume they won't be positive. This alone gives us good reason to exercise restraint, and to rethink some key parameters of the utility and value of this technology in the long run.
The writer teaches politics and sociology at Lums.
X: @umairjav
Published in Dawn, September 29th, 2025