A group of tech leaders, including Elon Musk, Apple co-founder Steve Wozniak, and 2020 presidential candidate Andrew Yang, has urged AI labs to pause the development of systems that could compete with human-level intelligence. In an open letter published by the Future of Life Institute, the signatories called on AI labs to halt training of models more powerful than GPT-4, the latest version of the large language model developed by U.S. startup OpenAI. The letter warned that contemporary AI systems are becoming human-competitive at general tasks, and asked whether it is wise to let machines flood our information channels with propaganda and untruth, to automate away all jobs, including fulfilling ones, or to develop non-human minds that could eventually outnumber, outsmart, obsolete, and replace humans.
Elon Musk co-founded OpenAI in 2015 with Sam Altman and others, but he left the company’s board in 2018 and no longer holds a stake in it. Musk has recently criticized the organization, saying he believes it is diverging from its original purpose. He and other tech leaders are not alone in their concern about the rapid advancement of AI. Regulators are also racing to get a handle on AI tools: the UK government recently published a white paper on AI that defers to individual regulators to supervise the use of AI tools in their respective sectors by applying existing laws.
It appears that concerns over the potential risks of AI are growing. While AI could revolutionize many aspects of our lives, it also poses significant risks if it is not carefully managed. By calling for a pause on the development of systems that can compete with human-level intelligence, tech leaders like Elon Musk are taking a proactive approach to ensuring that AI is developed responsibly and in a way that benefits society as a whole.