Why Open Source Is The Solution To AI Safety

May 22, 2024

robot

The recent controversy at OpenAI over using a voice resembling Scarlett Johansson’s without her consent has revealed troubling aspects of the company’s practices. This issue, combined with revelations about their disparagement agreements, raises serious questions about the integrity of a company that claims to work towards making AGI beneficial for all humanity. It is crucial not to take their statements at face value without scrutiny. Scarlett Johansson expressed her shock and anger when she discovered that OpenAI’s new GPT-4o chatbot featured a voice eerily similar to hers. Despite declining an offer from Sam Altman, OpenAI’s CEO, to provide her voice for the system, the chatbot’s “Sky” voice closely resembled hers, causing public confusion.

OpenAI announced the removal of the “Sky” voice without explaining why. Johansson shared that she had received an offer from Altman, who believed her voice could help bridge the gap between tech companies and creatives. After much consideration, Johansson declined the offer for personal reasons. However, the new system’s voice still bore a striking resemblance to hers, which her friends, family, and the public noticed.

The GPT-4o system, launched last week, included five voices, one of which was “Sky.” OpenAI insisted that the “Sky” voice was not an imitation of Johansson’s but was recorded by a professional actor. The company chose not to disclose the actors’ names for privacy reasons. Johansson, who voiced an AI in the 2013 movie “Her,” felt that Altman’s reference to the film was intentional, adding to her frustration.

In her statement, Johansson recounted her disbelief and anger upon hearing the demo, noting that even close friends couldn’t distinguish the voice from hers. Altman had contacted her agent two days before the demo, asking her to reconsider, but the system had already been released. Johansson then sought legal counsel, which resulted in OpenAI agreeing to remove the “Sky” voice.

Johansson highlighted the broader issue of deepfakes and the need for clarity and protection of individual identities. She called for transparency and legislation to safeguard personal likenesses. OpenAI’s Altman responded, stating that the voice was not intended to resemble Johansson’s and was selected before reaching out to her. He apologized for the poor communication and confirmed the pause in using “Sky’s” voice.

Voice imitation technology has advanced rapidly, leading to concerns about disinformation and misuse. Fake celebrity voices have been used in scams and misleading communications, raising ethical and legal questions. Johansson previously took legal action against another AI company for using her likeness without permission. Concerns about OpenAI’s development practices have been growing. The company recently disbanded its team focused on long-term AI risks, with key members leaving and criticizing the company’s prioritization of shiny products over safety. This incident with Johansson’s voice adds to the scrutiny OpenAI faces regarding its commitment to ethical AI development.

The actors’ union SAG-AFTRA supported Johansson, emphasizing the importance of clarity and transparency in using voices for AI systems. They expressed satisfaction with OpenAI’s decision to pause the “Sky” voice and looked forward to collaborating on establishing robust protections for performers’ rights in the AI era. In a related development, leaked documents revealed aggressive tactics toward former OpenAI employees. Vox reported that employees who wanted to leave the company faced expansive and highly restrictive exit documents. If they refused to sign quickly, they risked losing their vested equity, forcing them to choose between their earned compensation and their right to criticize the company.

Sam Altman, CEO of OpenAI, posted an apology after the Vox report, stating that the company had never actually clawed back anyone’s vested equity. He admitted the provision should not have been included in their documents and took responsibility for the oversight. However, documents obtained by Vox suggested that Altman and other executives were aware of these provisions, contradicting their claims of ignorance.

The leaked documents and subsequent revelations highlight a troubling pattern within OpenAI. Employees were pressured to sign restrictive agreements with short deadlines and faced significant pushback when asking for more time. Some ex-employees were told they could not participate in future equity sales if they did not sign the agreements, effectively holding their vested equity hostage.

This controversy over employee treatment and the use of Johansson’s voice underscores the need for greater transparency and accountability at OpenAI. The company, which aims to develop AGI for the benefit of humanity, must ensure its practices align with its high-minded mission. OpenAI’s handling of these issues will be crucial in maintaining public trust and demonstrating their commitment to ethical AI development.

The recent controversies surrounding OpenAI illustrate the inherent risks of placing trust in closed-source companies, regardless of their professed values and missions. Despite OpenAI’s stated commitment to developing AGI for the benefit of all humanity, the reality of their actions — ranging from using a voice eerily similar to Scarlett Johansson’s without her consent, to pressuring employees with restrictive exit agreements — reveals a different story. These actions underscore the influence of enormous financial incentives and competitive pressures driving the AI race, ultimately compromising the company’s ethical standards and transparency.

The fundamental problem lies in the nature of closed-source companies. When a company’s operations and decisions are shrouded in secrecy, it becomes difficult, if not impossible, for the public to hold it accountable. The pursuit of AGI, driven by the promise of immense profits and strategic advantages, creates an environment where ethical considerations can easily be sidelined. This centralization of power and control over AI development poses a significant threat to the vision of an AGI that truly benefits all humanity.

Open source, on the other hand, offers a viable solution to these challenges. By making code and foundational models publicly accessible, open-source projects promote transparency, accountability, and collaborative innovation. They enable a diverse community of developers, researchers, and stakeholders to scrutinize, critique, and improve the technology. This collective effort ensures that AI development is not dominated by a single entity, reducing the risk of biased or harmful outcomes.

Code, by its very nature, is opinionated. It represents a specific approach to solving a problem, but it is rarely the only approach. Allowing a broad community to inspect and modify the code ensures that multiple perspectives are considered, fostering innovation and reducing the likelihood of oversights or biases. This principle is equally important for foundational models, which often carry implicit biases based on the data they are trained on. Open-source models allow for continuous improvement and correction by the community, leading to more robust and fair AI systems.

We cannot afford to let any one company dictate the future of AI. The stakes are too high, and the potential for misuse or unintended consequences is too great. Open source is the antidote to centralization and monopoly. It embodies the belief that sharing power and fostering collaboration is always preferable to concentrating power in the hands of a few.

By democratizing access to AI technology, open source ensures that the development and deployment of AI are aligned with the broader public interest. It provides a mechanism for collective oversight and input, helping to safeguard against abuses of power and ensuring that AI advancements benefit all of humanity, not just a privileged few. In an open-source ecosystem, no single entity can impose its will unchallenged, and the direction of AI development is guided by a diverse array of voices and perspectives.

Concerns about developing AI models openly often center on the potential for misuse by bad actors or the geopolitical competition between nations like the U.S. and China. These are valid concerns, but they can be addressed effectively without sacrificing the benefits of open-source development.

Firstly, the fear that bad actors might exploit openly developed AI models is understandable. However, the best way to catch and mitigate issues in AI models is through the transparency and collective oversight that open-source development provides. When code and models are open to scrutiny by a wide range of experts, vulnerabilities and biases are more likely to be identified and addressed quickly. This collaborative approach ensures that any potential misuse can be detected early, and safeguards can be implemented more effectively.

Moreover, while it is true that malicious individuals could potentially misuse open-source AI, this risk exists regardless of whether the AI is developed openly or in secret. Closed-source AI does not eliminate the threat; it merely reduces the number of people who can identify and correct issues. In contrast, an open-source approach leverages the collective intelligence and vigilance of the global community to enhance security and robustness. Regarding the U.S.-China competition in AI, it’s important to recognize that China is already significantly advanced in large language models (LLMs) and other AI technologies. The argument that the U.S. should keep its AI developments secret to maintain a competitive edge often overlooks the broader implications for global innovation and progress. By restricting access to AI advancements, the U.S. might maintain a temporary lead, but it also stifles the collaborative potential that drives technological breakthroughs and societal benefits worldwide. The focus should not be on maintaining a monopoly on innovation but rather on fostering a global ecosystem where advancements in AI can be shared, improved upon, and utilized for the greater good. This collaborative spirit is essential for addressing the complex challenges that AI presents and ensuring that its benefits are distributed equitably.

Furthermore, the real strength of open-source development lies in its ability to continuously improve through community involvement. With many people examining the source code and testing the models, any issues can be identified and patched promptly. This decentralized approach to quality control is far more effective than relying on a single entity, no matter how well-intentioned, to catch and fix every problem. Regulation can play a crucial role in preventing the malicious use of AI without hindering open-source innovation. Thoughtful, balanced regulation can establish guidelines and frameworks that deter misuse while encouraging transparency and collaboration. For example, regulations could mandate ethical standards for AI development, require clear documentation of AI models, and enforce accountability for harmful applications. It’s crucial to implement regulations that protect against misuse while fostering an environment where open-source AI can thrive. This can be achieved by involving a broad range of stakeholders, including developers, policymakers, and civil society, in the regulatory process. By doing so, we can create a regulatory landscape that supports innovation and safeguards against potential risks.

In conclusion, while concerns about open-source AI development are valid, they can be effectively addressed through transparency, collaboration, and balanced regulation. Open source remains the most viable path to democratizing AI, ensuring that no single entity wields too much power, and fostering a global community that works together to harness AI for the benefit of all. By embracing open-source principles and implementing thoughtful regulations, we can mitigate risks and maximize the positive impact of AI on society.