S 6953 - Requires Artificial Intelligence Developers to Implement a Safety and Security Protocol Before Deploying a Frontier Model - New York Key Vote

Stage Details

Title: Requires Artificial Intelligence Developers to Implement a Safety and Security Protocol Before Deploying a Frontier Model

See How Your Politicians Voted

Title: Requires Artificial Intelligence Developers to Implement a Safety and Security Protocol Before Deploying a Frontier Model

Vote Smart's Synopsis:

Vote to pass a bill that requires artificial intelligence developers to implement a safety and security protocol before deploying a frontier model in New York.

Highlights:

● Defines key terms, including (Sec. 1420):

○ “Frontier model”: A powerful AI system that uses extremely large computing resources;

○ “Deploy”: Making an AI model available for public use or integration into other products; and

○ “Critical harm”: Serious damage caused by an AI system, such as death or serious injury of 100 or more people, or at least $1 billion in financial or property damage.

● Establishes written safety rules and regulations for frontier models (Sec. 1420).

● Requires companies to create and share safety plans before launching these AI systems (Sec. 1421).

● Prohibits companies from releasing AI that could cause critical harm (Sec. 1421).

● Requires companies to report AI-related safety incidents to the state within 72 hours of the developer forming a reasonable belief that an incident has occurred (Sec. 1421).

● Authorizes the Attorney General to issue fines of up to $10 million for a first violation and up to $30 million for repeated violations (Sec. 1422).
