AI Companies Commit to Implementing Safety Measures for AI Technologies

On July 21, the White House announced that seven top US artificial intelligence (AI) companies have voluntarily committed to implementing safeguards in AI technology development. These commitments aim to manage risks while capitalizing on AI's immense potential. The companies pledged to adhere to new safety, security, and trust standards during a meeting with President Biden.

This announcement comes amid a competitive race among these companies to create AI capable of autonomously generating text, images, music, and video. However, concerns have emerged regarding the proliferation of disinformation and privacy breaches as AI technologies advance. These voluntary safeguards represent an initial step as governments worldwide seek to establish legal and regulatory frameworks governing AI development.

In his remarks, President Biden underscored the significance of safety, security, and trust in AI development. He stated that companies must ensure their technology is safe before public release by testing system capabilities, assessing potential risks, and publicly disclosing the assessment results. Additionally, companies should prioritize the protection of their systems against cyber threats, manage national security risks, and share best practices and industry standards. To foster trust, companies must earn the public's confidence and enable users to make informed decisions by labeling AI-generated content, addressing bias and discrimination, strengthening privacy protections, and protecting children from harm. These principles are essential for promoting responsible innovation and guaranteeing that AI technologies benefit society.

The voluntary commitments made by these AI companies will not restrict their plans or impede technological development, nor will they be enforced by government regulators. The companies agreed to security testing, research on bias and privacy issues, sharing information about risks, developing tools to address societal challenges, and implementing transparency measures to identify AI-generated content.

Implications

AI companies are responsible for ensuring the safety and transparency of their products while acknowledging AI's potential risks and opportunities. President Biden's Administration has already taken steps to guide responsible innovation. Last October, it introduced a Blueprint for an AI Bill of Rights; in early 2023, it signed an executive order to protect the public from discriminatory algorithms and established new AI research institutes.

Legislators are grappling with the rise of AI technology and struggling to craft rules and regulations. Members of Congress remain far from agreement, but they are trying to address the risks AI poses to consumers while worrying about falling behind rivals in the race for dominance in the field.

“Often, issues before Congress are fairly easy to understand even if they are controversial. That is not the case with AI. It is complex, difficult to understand and the potential solutions leading to some type of regulation are controversial. There is an urgency for Congress to act, but I think it is going to take time. It will be difficult to find bipartisan solutions that can be enacted by the Congress. There are times however, when Congressional action can surprise all of us. Let’s hope!”

- Former Sen. Byron Dorgan, Government Relations Practice Co-Chair
