Big Tech Agrees to White House AI Safeguards

Several leading technology companies, including Amazon, Google, Meta, and Microsoft, known for their pioneering work in artificial intelligence (AI), have agreed to a set of AI safety measures proposed by President Joe Biden’s administration.

The White House announced on Friday that it has garnered voluntary commitments from seven U.S. firms. These commitments aim to ensure that AI products are safe prior to their launch. They include provisions for third-party examination of commercial AI systems, though specifics regarding the auditors or accountability methods remain unclear.

The surge in commercial investment in generative AI tools, which can produce human-like text and generate new images and other media, has sparked public fascination as well as apprehension about risks such as deception and the spread of false information.

In addition to the tech giants, OpenAI (the creator of ChatGPT) and the startups Anthropic and Inflection have pledged to have security testing conducted in part by independent experts. The testing is designed to guard against significant threats, such as those to biosecurity and cybersecurity, according to a statement from the White House.

The commitments also include reporting vulnerabilities in their systems, using digital watermarking to differentiate between authentic and AI-created images (deepfakes), and publicly disclosing glitches and risks in their technology, including effects on fairness and bias.

These voluntary commitments serve as an immediate way to mitigate risks while the administration continues to press Congress to pass legislation governing the technology.

However, some AI regulation advocates believe more needs to be done to hold these companies and their products accountable. James Steyer, founder and CEO of the nonprofit Common Sense Media, asserted that many tech companies have previously failed to uphold voluntary commitments to act responsibly and to support strong regulations.

Senate Majority Leader Chuck Schumer plans to introduce legislation to regulate AI and has already conducted multiple briefings to inform senators about this bipartisan issue.

Numerous tech executives have endorsed regulation and several met with President Biden, Vice President Kamala Harris, and other officials at the White House in May.

Nonetheless, some experts and emerging competitors worry that such regulation could advantage established, well-funded leaders like OpenAI, Google, and Microsoft, potentially sidelining smaller firms that cannot afford the cost of bringing their AI systems, known as large language models, into regulatory compliance.

The trade association BSA, which counts Microsoft among its members, on Friday welcomed the Biden administration’s efforts to establish rules for high-risk AI systems. The association stated, “Enterprise software companies are eager to collaborate with the administration and Congress to pass legislation that mitigates AI risks and promotes its benefits.”

Internationally, several nations, including the European Union, are exploring AI regulation, with U.N. Secretary-General Antonio Guterres proposing the United Nations as a suitable platform for implementing global standards. He also welcomed calls for a new U.N. body dedicated to global AI governance, echoing models such as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.

The White House confirmed that it has already consulted several nations about these voluntary commitments.