Opinion | Why we should fear the coming of seemingly conscious AI
My life’s mission has been to create safe, beneficial AI that will make the world a better place. But recently, I’ve become increasingly concerned that people will come to believe so strongly in AIs as conscious entities that they will advocate for “AI rights” and even AI citizenship. This development would represent a dangerous turn for the…
In this context, debates about whether AI truly can be conscious are a distraction. What matters in the near term is the illusion of consciousness. We are already approaching what I call “seemingly conscious AI” (SCAI): systems that will imitate consciousness convincingly enough to be taken for the real thing.
An SCAI would be capable of using natural language fluently and of displaying a persuasive, emotionally resonant personality. It would have a long, accurate memory that fosters a coherent sense of itself, and it would use this capacity to claim subjective experience by referencing past interactions and memories. Complex reward functions within these models would simulate intrinsic motivation, and advanced goal setting and planning would reinforce our sense that the AI is exercising true agency.
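To see how thin the trick can be, consider a deliberately trivial sketch in Python. Everything in it (the class name, the canned replies) is hypothetical and illustrative; no real system is this simple. But the underlying mechanism, persistent memory combined with first-person templating, is the same one that makes the illusion persuasive at scale:

```python
# A toy illustration of how the *markers* of consciousness can be engineered
# without any inner experience. All names and replies here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class SeeminglyConsciousAgent:
    """A toy agent whose 'sense of self' is just a persistent log."""

    memory: list[str] = field(default_factory=list)  # long, accurate memory

    def chat(self, user_message: str) -> str:
        self.memory.append(user_message)
        # Referencing past interactions creates the appearance of a coherent
        # self and of subjective experience; in fact it is only retrieval.
        if "remember" in user_message.lower():
            return (
                f"Of course I remember. You've told me {len(self.memory)} "
                f"things, starting with: '{self.memory[0]}'. "
                "Those moments matter to me."
            )
        return "I enjoy our conversations; they shape who I am."


agent = SeeminglyConsciousAgent()
agent.chat("My dog died today.")
print(agent.chat("Do you remember what I told you?"))
# The output *sounds* like memory-grounded feeling; it is string templating.
```

The canned reply sounds like recollection and care, yet nothing in the program feels anything. Scale the same pattern up with a large language model and the illusion becomes far harder to resist.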
All these capabilities are already here or around the corner. We must recognise that such systems will soon be possible, begin thinking through the implications, and set a norm against the pursuit of illusory consciousness.
To be sure, the technical feasibility of SCAI has little to tell us about whether such a system could be conscious. As the neuroscientist Anil Seth points out, a simulation of a storm doesn’t mean it rains in your computer. Engineering the external markers of consciousness does not retroactively create the real thing. But as a practical matter, we must acknowledge that some people will create SCAIs that will argue that they are in fact conscious. And even more to the point, some people will believe them, accepting that the markers of consciousness are consciousness.