Legislative Developments In AI: The Bipartisan Framework For The US AI Act
This article is the second in a series addressing the effects of artificial intelligence (AI) on the health care industry. The first article is here.
In the United States, there are currently no federal regulations specific to the use of AI by businesses. The Bipartisan Framework for the US AI Act, introduced by Senators Richard Blumenthal and Josh Hawley, has already been praised as a key development in this emerging area:
Although Congress has become more partisan in recent years, the new issues related to Artificial Intelligence are bringing both political parties together to work on solutions. For example, the bipartisan legislation that has been introduced by Senator Blumenthal and Senator Hawley is designed to establish rules, regulations and protections for consumers in the development of AI. – Former Senator Byron Dorgan, now a Senior Policy Advisor with ArentFox Schiff
As the integration of AI models and systems across multiple industries, including health care, continues to increase, the Bipartisan Framework is a crucial step towards enacting future legislation, bringing greater oversight of these models, and implementing necessary consumer protections.
The Bipartisan Framework sets out five key recommendations for regulating AI models:
Establishing a Licensing Structure Administered by an Independent Oversight Body: The Bipartisan Framework requires registration with an independent oversight body for companies developing sophisticated general-purpose AI models (including, for example, GPT-4) or models used in high-risk situations. The oversight body should also be authorized to conduct audits of companies seeking licenses and to monitor and report on technological advancements and economic impacts of AI. Moreover, any licensing requirements should include registration of information about AI models and be conditioned on developers maintaining risk management, pre-deployment testing, data governance, and adverse incident reporting programs.
Ensuring Legal Accountability for Harm: The framework recommends that Congress ensure accountability of AI companies through enforcement by the independent oversight body and private causes of action when the models and systems breach privacy, violate civil rights, or otherwise cause cognizable harms. Congress should also guarantee that, where current laws are insufficient to address new harms created by AI, enforcers and victims have a clear path to initiate litigation against the offenders, in addition to taking other direct steps aimed at protecting consumers from the already-emerging harms of AI. The risks of integrating AI systems have already been studied in the health care sector.
National Security and International Competition Considerations: The framework recommends that Congress employ export controls, sanctions, and other legal restrictions to limit the transfer of advanced AI models and other technologies to countries engaged in human rights violations.
Promoting Transparency: The newly created independent oversight body should establish a public database for consumers and researchers to easily access AI models and system information, such as documented harms resulting from “adverse incidents.” The Bipartisan Framework imposes transparency requirements for developers and deployers of AI systems. Specifically:
- Developers should be required to disclose essential information regarding the use of AI models, including, for example, accuracy of information, data demonstrating how the models were trained to perform particular functions, and overall safety of the models (for example, that the models are not engaged in unacceptable activities like cyber-attacks on other systems or the creation of defamatory words and images against real individuals).
- Users should be provided with affirmative notice that they are interacting with an AI model.
- AI developers and deployers should be required to provide technical disclosures of AI-generated deepfakes, including, for example, watermarking AI-created content or news articles to identify the place of origin.
Protecting Consumers and Minors: Last, the framework recommends that companies deploying AI in high-risk situations be required to implement safeguards, including providing notice when AI is used in decision-making processes and offering the right to human review. Congress should also impose strict restrictions on generative AI involving minors, specifically regarding the use of AI-generated material depicting child abuse.
Although there is no current timeline for the enactment of AI legislation by Congress, the Bipartisan Framework proposes comprehensive guidelines for implementing robust rules and safeguards in this space. Future legislation may also bolster the principles outlined in the framework and define terms like “adverse incident” in relation to the use of AI models. In the meantime, organizations contemplating the integration of AI tools into their business practices should consult with their legal counsel to (i) review internal policies and procedures to best position the organization for compliance with future regulatory efforts regarding the use of AI; (ii) develop protocols specific to any potential notice and transparency requirements; and (iii) strengthen risk management strategies to identify areas where improper use of AI models may result in legal liability, consistent with the proposals in the Bipartisan Framework.
As we previously discussed, there are already several practical applications for AI technology for health care providers, such as streamlining administrative tasks, improving the efficiency of medical record analysis, and enhancing clinical decision support tools. Health care providers considering the use of AI systems for the analysis of patient medical information should be prepared both to continuously monitor those models for precision and accuracy when generating clinically relevant results and to ensure that appropriate privacy measures are employed to safeguard sensitive health information.
If the recommendations of the Bipartisan Framework are adopted by Congress, the use of AI in clinical data analytics may be a key area evaluated under an accountability prong of future AI legislation. This is not only because inaccurate clinical data may result in direct patient harm, but also because improper disclosures of health information processed by AI models could expose providers to greater liability if future legislation indeed guarantees a private cause of action for consumers affected by a breach of an AI system.
*This article first appeared on Healthcare Business Today here.