Here Are the 4 Main Requirements of the New White House Executive Order on AI Safety

The EO calls for the creation of new standards and guidance to ensure the safe use of AI, especially in critical infrastructure sectors.

The Biden-Harris Administration on Monday issued an Executive Order on the safe and secure use of artificial intelligence technologies and systems.

The EO lays out requirements for new AI safety standards and provides high-level guidance on measures to protect Americans against AI-enabled fraud, deception and other potential privacy and security risks. Here are the four biggest components of the EO.

  • Developers of AI systems will need to share safety test results and other critical information about their technologies with the US government. The requirement covers developers of the “most powerful” AI systems and developers of any foundation model that poses a “serious risk” to national security, national economic security or national public health and safety. Under the EO, covered developers must notify the government when training such models and must share all red-team safety test results before making the models publicly available.
  • The EO directs the National Institute of Standards and Technology (NIST) to develop standards, tools and tests for ensuring AI systems are “safe, secure and trustworthy.” As part of this, NIST will set standards for extensive red-team testing of AI systems before public release. The Departments of Homeland Security (DHS) and Energy (DOE) will apply those standards to AI systems used by entities in critical infrastructure sectors, such as the nuclear, chemical and biological sectors.
  • Agencies that fund life-science projects will be required to establish standards for biological synthesis screening, protecting against the risk of AI being used to engineer dangerous biological materials. Going forward, compliance with these standards will be a condition of federal funding for such projects.
  • The Department of Commerce is responsible for establishing standards and best practices for protecting Americans from AI-enabled fraud and deception. This includes developing guidance for content authentication and watermarking so people can distinguish authentic content from deepfakes and other AI-generated material (a rough sketch of what content authentication can look like in practice follows this list).
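
The EO itself prescribes no technical mechanism, so the following is only an illustrative sketch of the general idea behind content authentication: a publisher attaches a cryptographic tag to content, and anyone who later checks the tag can detect tampering. The HMAC scheme, the SECRET_KEY and the sign_content/verify_content helpers are hypothetical simplifications for this example; real provenance standards such as C2PA use public-key signatures and richer metadata.

```python
# Illustrative sketch only: content authentication via a cryptographic tag.
# Uses a shared-secret HMAC for simplicity; real standards use public-key
# signatures so verifiers never need the signing key.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the publisher

def sign_content(content: bytes) -> str:
    """Produce an authentication tag binding the publisher to the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check the tag; any alteration of the content invalidates it."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"Official statement released by the agency."
tag = sign_content(original)

print(verify_content(original, tag))                          # True: authentic
print(verify_content(b"Doctored deepfake transcript.", tag))  # False: tampered
```

Any guidance Commerce issues would likely favor asymmetric signatures over a shared secret like the one above, so that the public can verify provenance without ever holding the key used to sign.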

The EO also calls, without much detail, for the creation of an advanced cybersecurity program to develop AI tools that find and fix vulnerabilities in critical software, as well as for measures to ensure the safe and ethical use of AI by the US military and intelligence community.