As reported by Politico on June 21, 140 Silicon Valley AI startups, along with venture capital giant Y Combinator, signed a letter criticizing an AI safety bill recently passed by the California Senate. The bill, which would prohibit the development of AI models with capabilities useful for creating weapons of mass destruction, would in the signatories’ view harm the state’s burgeoning technology and AI industries and make it difficult for California to retain AI talent. “The bill could unintentionally threaten the vitality of California’s technology economy and make it less competitive,” the letter said.
What are the letter’s concerns?
The letter raises the following concerns about the bill:
- Unusual liability standards: Developers should not be held responsible for users’ misuse of their LLMs; doing so could stifle innovation, create unfair liability standards, and effectively criminalize a failure to foresee misuse.
- Arbitrary regulatory thresholds: Using 10^26 FLOPs of training compute as a threshold is problematic and may not accurately reflect future AI capabilities; it could also incentivize companies to leave California and create unnecessary regulatory complexity.
- A de facto ban on open-source AI: The kill-switch requirement effectively mandates a backdoor, and developers cannot shut down copies of a model once its weights are openly released. This could effectively ban open-source AI development, which is important for fostering competition and innovation, and hinder collaborative, transparent development.
- Ambiguous language: The bill’s unclear wording could later be stretched or reinterpreted in ways that damage California’s technology industry; for example, it could be read broadly enough to apply to existing software, such as Google’s search algorithm, creating legal uncertainty and unintended consequences.
As an alternative, the signatories proposed mandating that open source licenses remain open in perpetuity and encouraging the open publication of AI research. “Such legislation would not only protect the collaborative and transparent nature of open source development, but would also foster innovation and fair competition by preventing monopolies in open source technologies,” they said.
Earlier this month, a group of startup founders called AGI House also criticized the bill, arguing that it violates U.S. free-speech protections. They cited past court cases classifying computer code as protected speech and argued that the same should hold for the weights of neural networks.
What does the bill provide?
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, passed by the California Senate last month, applies to developers of AI models that cost more than $100 million to train and use more than 10^26 floating-point operations (FLOPs) of computing power. The bill mandates safety measures such as pre-deployment testing, safeguards against misuse, and post-deployment monitoring. Developers must implement rapid shutdown capabilities and promptly report safety incidents. It also prohibits the unauthorized development of models with “dangerous capabilities” that could be useful in creating chemical, biological, radiological, or nuclear weapons.
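For a rough sense of what a 10^26 FLOP threshold means, a common rule of thumb estimates total training compute as roughly 6 × parameters × training tokens. The sketch below applies that approximation to a few hypothetical model configurations; the figures are illustrative assumptions, not claims about any real system or about how regulators would measure compute.

```python
# Illustrative sketch: estimating whether a training run crosses the
# bill's 10^26 FLOP threshold, using the rule-of-thumb approximation
# compute ~ 6 * parameters * training tokens. All model figures below
# are hypothetical examples.

THRESHOLD_FLOPS = 1e26  # compute threshold named in the bill

def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rule-of-thumb estimate of total training compute in FLOPs."""
    return 6 * num_parameters * num_tokens

# Hypothetical configurations: (parameter count, training tokens)
examples = {
    "70B params, 15T tokens": (70e9, 15e12),
    "400B params, 30T tokens": (400e9, 30e12),
    "1.8T params, 100T tokens": (1.8e12, 100e12),
}

for name, (params, tokens) in examples.items():
    flops = estimated_training_flops(params, tokens)
    status = "over" if flops >= THRESHOLD_FLOPS else "under"
    print(f"{name}: ~{flops:.2e} FLOPs ({status} the 10^26 threshold)")
```

Under this approximation, only very large training runs approach the threshold, which is consistent with the bill’s focus on frontier-scale models.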
The provision of AI products for military use is a major concern in the industry. In January of this year, OpenAI updated its usage policy to remove a blanket ban on the use of its models for military or warfare purposes. In May, Microsoft, a major investor in OpenAI, reportedly provided a generative AI model for intelligence analysis to a U.S. intelligence agency. And last week, OpenAI welcomed former National Security Agency (NSA) director Paul Nakasone to its board of directors, where he will serve on the company’s safety and security committee.
Note: The headline and final paragraph were edited for clarity on June 24, 2024, at 6:02 p.m.