AI could pose serious risks if misused by bad actors, says Nobel laureate Geoffrey Hinton

Geoffrey Hinton, the British-Canadian computer scientist whose work laid the foundations for a revolution in artificial intelligence and earned him the title of 'Godfather of AI', has won this year's Nobel Prize in Physics with John J. Hopfield.
However, the University of Toronto computer scientist, who made significant contributions to developing the computer algorithms known as neural networks, has been vocal about the technology's dangers. In an interview with Malayala Manorama, Hinton pointed to the potential risks of AI, including election rigging, cybercrime, and bioterrorism.
What challenges does AI pose that can be counterproductive for us? Will the coming years witness a human vs AI conflict, especially in the job sector?
AI will be tremendously beneficial in areas like healthcare and personal tutoring, but it also poses many different risks if bad actors misuse it. These include rigging elections using fake videos, extreme surveillance to prevent political dissent, autonomous lethal weapons, and the fine-tuning of open-sourced models to commit very effective cybercrime or bioterrorism.
It is quite likely that AI, or people assisted by AI, will greatly increase productivity, and this may have the effect of eliminating many jobs.
In the distant future, will AI start to dominate human beings? What are the chances of an AI takeover?
Most deep learning experts agree that AI will eventually become more intelligent than people, and many think there is at least a 50 per cent chance that this will happen sometime in the next 20 years.
We have never had to deal with anything like this before, and there is huge uncertainty about what will happen when we create agents who are more intelligent than we are. Some people think it will be easy for people to stay in control. Others think that the agents will attempt to gain more control to be better at achieving whatever goals we give them.
It is also possible that different agents will compete, and evolution will favour the most competitive and self-interested agents.
What will happen then? Will humanity cease to exist due to the extreme competition from AI?
It is quite possible that superintelligent AI will replace people, but there is so much uncertainty that it is very hard to estimate this.
Yann LeCun (the French-American computer scientist known for his work on optical character recognition and computer vision using convolutional neural networks) thinks it is doubtful. But (Eliezer) Yudkowsky (the American artificial intelligence researcher) thinks it is almost certain.
I think both of these views are ridiculous. We are currently like a person who has a very cute tiger cub as a pet. That person should think hard about what happens when the tiger has the physical ability to take over.
How can technological companies and governments keep AI in check?
Some of the threats involving bad actors can be reduced by not open-sourcing the weights of large models. As for the existential threat of AI going rogue and taking over, it is very hard to know what to do.
I think governments should insist that large companies developing AI spend a significant fraction of their resources on experimenting with how these agents might try to take over when they are still not quite as smart as us.
I also think we should not show large models data that exhibit unethical behaviour until they have already learned about ethical behaviour by being trained on carefully curated data. That's what good parents do.