Letters | What young humans can do that AI can’t

Readers discuss how the education system must adapt to the advent of generative AI, and the use of the p-value in determining statistical significance

A student uses ChatGPT on May 30, 2023, at a school in Hong Kong that has adopted the generative AI tool in the classroom. Photo: Edmond So
Feel strongly about these letters, or any other aspects of the news? Share your views by emailing us your Letter to the Editor at [email protected] or filling in this Google form. Submissions should not exceed 400 words, and must include your full name and address, plus a phone number for verification
I refer to your report, “Hong Kong schools don’t have enough teachers trained in AI, sector veterans say” (March 16), which underscores the need for a new workforce to educate students about the use and potential abuse of artificial intelligence. One of the experts cited rightly points out that we must first train the moral and ethical cores of our youngsters before exposing them to a potentially harmful technology.

But ethical training is not enough. A recent study reports that about 40 per cent of secondary school students admit to using AI to cheat. The actual number is almost certainly higher. It is naive to expect that an honour code alone will dissuade students from such rampant misuse of the technology.

Instead, society’s modes of teaching, learning and assessing learning need to change fundamentally. Teachers must study which skills artificial intelligence (AI) makes redundant and then adapt or eliminate their pedagogy in those areas. We must likewise focus attention on the skills that large language models lack.

AI is not omnipotent – far from it. ChatGPT struggles to form value judgments (good and bad, better and worse) that it is willing to defend. When asked about the pros and cons of allowing the use of AI in English classrooms, for example, ChatGPT cautioned me that the issue was “complex” before producing a judicious analysis of costs and benefits.

At the end of the response, the tool offered a “possible middle ground”: teaching students to use technology moderately. I then prompted the bot to evaluate which of its arguments was “the best”. Here was its final answer: “Rather than taking a strict ‘for’ or ‘against’ stance, the strongest position is a balanced approach. Students should be taught to use AI as a tool. Schools should emphasise AI literacy.”

This “balanced approach” is typical of artificial reasoning. ChatGPT cannot make exclusive binary decisions in which, like the choice faced by poet Robert Frost’s speaker in The Road Not Taken, following one path means rejecting another. This failure likely stems from AI’s unwillingness to be wrong, or to accept responsibility for being wrong.
