The Big Tech Show: Elon, experts and that open letter — Could AI pose a risk to society and if so, what?
This week Elon Musk joined AI experts in signing an open letter calling for a pause on the development of artificial intelligence technology more powerful than OpenAI's GPT-4. Could future progress in AI pose a threat to society, and are those developing the tech considering the implications of their work? Adrian Weckler discusses.
More than a thousand industry leaders signed a petition calling for all AI labs to halt developing their systems for at least six months to allow for the creation of shared safety protocols.
“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the letter said.
Skype co-founder Jaan Tallinn and Apple co-founder Steve Wozniak also put their names to the letter, which said “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.
The letter comes weeks after OpenAI released the latest version of its AI software, GPT-4.
Are the letter's signatories correct? Should there be a pause on AI progress, and does this technology pose a credible threat to society?
The Big Tech Show is in association with Square.