Silicon Valley AI startup criticizes California’s AI safety bill

By prosperplanetpulse.com | June 24, 2024

As reported by Politico on June 21, 140 Silicon Valley AI startups, along with startup accelerator Y Combinator, signed a letter criticizing California’s AI safety bill, which recently passed the state Senate. The signatories argue that the bill, which would among other things restrict AI models with capabilities that could aid weapons development, would harm the state’s burgeoning technology and AI industries and make it difficult for California to retain AI talent. “The bill could unintentionally threaten the vitality of California’s technology economy and make it less competitive,” the letter said.

What are the concerns?

The letter raises the following concerns about the bill:

  • Unusual liability and regulation: Developers should not be held responsible for the misuse of LLMs; doing so could stifle innovation, create unfair liability standards, and effectively criminalize a failure to foresee misuse.
  • Arbitrary regulatory thresholds: Using 10^26 FLOPs as a threshold is problematic and may not accurately reflect future AI capabilities; it could incentivize companies to leave California and add unnecessary complexity (see the compute sketch after this list).
  • Kill switch requirement: Requiring a built-in shutdown mechanism (in effect, a backdoor) would likely amount to a ban on open source AI, since openly released model weights cannot be shut down remotely. This could effectively end open source AI development, which is important for fostering competition and innovation, and would hinder collaborative and transparent development.
  • Ambiguous language: The bill’s unclear wording could be stretched or reinterpreted later in ways that damage California’s technology sector. It could be misread to apply broadly to existing software, such as Google’s search algorithm, with unintended consequences for the tech industry and lasting legal uncertainty.
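For a sense of scale, the 10^26 FLOP threshold can be related to model size and training data via the widely used approximation that training compute is roughly 6 × parameters × training tokens. The sketch below is purely illustrative; the model sizes and token counts are hypothetical assumptions, not figures from the bill or the letter.

```python
# Rough illustration of the 10^26 FLOP threshold using the common
# approximation: training compute ~= 6 * parameters * training tokens.
# The model sizes and token counts below are hypothetical examples,
# not figures taken from the bill or the letter.

THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs with the 6*N*D rule of thumb."""
    return 6.0 * params * tokens

examples = [
    ("7B params, 2T tokens", 7e9, 2e12),
    ("70B params, 15T tokens", 70e9, 15e12),
    ("1.8T params, 15T tokens", 1.8e12, 15e12),
]

for name, n_params, n_tokens in examples:
    flops = training_flops(n_params, n_tokens)
    status = "above" if flops >= THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.2e} FLOPs ({status} the 1e26 threshold)")
```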

As an alternative, the authors proposed mandating that open source licenses remain open in perpetuity and encouraging the open publication of AI research. “Such legislation would not only protect the collaborative and transparent nature of open source development, but would also foster innovation and fair competition by preventing monopolies in open source technologies,” they said.

Earlier this month, a group of startup founders called AGI House criticized the bill, arguing that it violates US free speech protections; they cited past court cases that have classified computer code as protected speech and argued that the same should hold for the weights of neural networks.

What does the law provide?

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, passed by the California Senate last month, applies to developers of AI models that cost more than $100 million to train and use 10^26 or more floating-point operations (FLOPs) in training. The law mandates safety measures such as pre-deployment testing, safeguards against misuse, and post-deployment monitoring. Developers must implement rapid shutdown capabilities and promptly report safety incidents. The regulations also prohibit the unauthorized development of models with “dangerous capabilities” that could be used to create chemical, biological, radiological, or nuclear weapons.
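As a rough, non-authoritative illustration of how the coverage test described above combines the two criteria, the check could be expressed as follows; the function and figures are a sketch based on this summary, not on the statutory text.

```python
# Hypothetical sketch of the coverage test as summarized above: a model
# falls under the law if it costs more than $100 million to train and
# uses 1e26 or more floating-point operations. This reflects the
# article's summary, not the statute itself.

COST_THRESHOLD_USD = 100_000_000
COMPUTE_THRESHOLD_FLOPS = 1e26

def is_covered_model(training_cost_usd: float, training_flops: float) -> bool:
    """Return True if a model meets both coverage criteria described above."""
    return (training_cost_usd > COST_THRESHOLD_USD
            and training_flops >= COMPUTE_THRESHOLD_FLOPS)

# Example: a hypothetical $150M, 2e26-FLOP training run would be covered.
print(is_covered_model(150_000_000, 2e26))  # True
print(is_covered_model(80_000_000, 2e26))   # False: under the cost threshold
```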

The provision of AI products for military use has become a broader industry concern. In January of this year, OpenAI updated its usage policy to remove a blanket ban on the use of its AI models for military or warfare purposes. In May, Microsoft, a major investor in OpenAI, reportedly provided a generative AI model for intelligence analysis to a U.S. intelligence agency. And last week, OpenAI welcomed former National Security Agency (NSA) director Paul Nakasone to its board of directors as part of its safety and security committee.

Note: The headline and final paragraph have been edited for clarity based on editorial input as of June 24, 2024 at 6:02 pm.

