To address mounting concerns surrounding artificial intelligence, several prominent tech leaders are urging AI labs to pause the training of their most advanced systems for at least six months.
These concerns about the speed of deployment and the risks of AI are largely driven by fears that we will build systems that do not align with human well-being, or that might even pose existential threats.
That sense of urgency has been heightened by the recent release of OpenAI's GPT-4, which is capable of handling a wide range of complex tasks.
What led to this significant call? Read on to find out.
The Call For A Pause
The launch of OpenAI's GPT-4 has showcased impressive capabilities, such as drafting legal documents, passing exams, and creating functional websites from sketches.
The suggested pause would apply to any AI system more advanced than GPT-4. During this time, the focus would be on establishing and enforcing shared safety standards for AI and ensuring these tools are safe beyond reasonable doubt.
The Competitive Landscape
Google is developing Bard, a direct competitor to ChatGPT. Across the wider AI landscape, tech giants and established players are also competing with one another, including:
- OpenAI;
- Microsoft;
- Google;
- IBM;
- Amazon;
- Baidu; and
- Tencent.
Numerous startups are also building AI tools for writing and image creation, so the AI industry is both highly active and expanding rapidly.
Risks And Concerns
On the other hand, experts are increasingly worried about the potential risks associated with advanced AI, including:
- biased responses and misinformation;
- invasion of privacy;
- disruption of professions;
- facilitation of academic dishonesty; and
- alteration of human-technology relationships.
The open letter therefore argues that if a voluntary pause is not implemented soon, governments should enforce a moratorium. Some regions have already started taking action: China, the EU, and Singapore have introduced initial AI governance frameworks.
The call to pause the development of advanced AI signals a growing sense of trepidation not only among the general public but also within Silicon Valley itself. That makes a firm governance framework crucial, both to limit potential future risks and to maximize the positive societal effects of AI.