United States - New Technology (2024)

ARTICLE

1 July 2024

Mintz

Mintz is a general practice, full-service Am Law 100 law firm with more than 600 attorneys. We are headquartered in Boston and have additional US offices in Los Angeles, Miami, New York City, San Diego, San Francisco, and Washington, DC, as well as an office in Toronto, Canada.


As the first state law to regulate the results of Artificial Intelligence System (AI System) use, Colorado's SB24-205, “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (the Act), has generated plenty of cross-industry interest, for good reason. In some ways similar to the risk-based approach taken by the European Union (EU) in the EU AI Act, the Act aims to regulate developers and deployers of AI Systems, which are defined by the Act as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

The Act is scheduled to go into effect on February 1, 2026, and its scope will be limited to activities in Colorado, entities doing business in Colorado, or entities whose activities affect Colorado residents. It generally focuses on regulation of “high-risk” AI Systems, which are defined as any AI System that, when deployed, makes, or is a substantial factor in making, a consequential decision. A “consequential decision” means a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, among other services, health care services.

Deployer and Developer Requirements

Both developers and deployers of high-risk AI Systems must use reasonable care to protect consumers from any known or reasonably foreseeable risks of “algorithmic discrimination.”1 The Act also imposes certain obligations upon developers of high-risk AI Systems, including disclosure of information to deployers; publication of summaries of the types of the developer's high-risk AI Systems and how they manage any foreseeable risks; and disclosure to the Colorado Attorney General (AG) of “any known or reasonably foreseeable risks” of algorithmic discrimination arising from the intended uses of the high-risk AI System within 90 days of discovery. Deployers will need to implement risk management policies and programs to govern the deployment of high-risk AI Systems; complete impact assessments for the high-risk AI Systems; send notice to consumers after deploying high-risk AI Systems to make, or be a substantial factor in making, consequential decisions concerning a consumer; and submit notice to the AG within 90 days of discovering that a high-risk AI System has caused algorithmic discrimination.

Health Care Services Scope and Exceptions

The Act defines “health care services” by referring to the Public Health Service Act definition.2 Though this is a broad definition that could encompass a wide range of services, the drafters also accounted for systems that are not high-risk, and for some of the work that has already been done or is in process at the federal level, as there are exceptions applicable to certain health care entities.

HIPAA Covered Entities

The Act will not apply to deployers, developers, or others that are Covered Entities under HIPAA and are providing health care recommendations that: (i) are generated by an AI System; (ii) require a health care provider to take action to implement the recommendations; and (iii) are not considered to be high-risk (as defined by the Act). This exception appears to be geared toward health care providers since it requires the involvement of a health care provider to actually implement the recommendations made by the AI Systems rather than the recommendations being implemented automatically by the systems. However, the scope is not limited to only providers, as Covered Entities can be health care providers, health plans, or health care clearinghouses. There are a range of possible uses of AI Systems by HIPAA Covered Entities, including but not limited to disease diagnoses, treatment planning, clinical outcome predictions, coverage determinations, diagnostics and imaging, clinical research, and population health management. Depending on the circumstances, many of these uses could be considered “high-risk.” Examples of uses of AI Systems that are not “high-risk” in relation to health care services, and could thus potentially meet this exception, include administrative-type tasks such as clinical documentation and note-taking, billing, or appointment scheduling.

FDA-Approved Systems

Deployers, developers, and others that deploy, develop, put into service, or substantially modify high-risk AI Systems that have been approved, authorized, certified, cleared, developed, or granted by a federal agency such as the Food & Drug Administration (FDA) are not required to comply with the Act. Since the FDA has deep experience with AI and machine learning (ML) and, as of May 13, 2024, has authorized 882 AI/ML-enabled medical devices, this is an expected and welcome clarification for those entities who have already developed or are working with FDA-authorized AI/ML-enabled medical devices. Additionally, deployers, developers, or others conducting research to support an application for approval or certification from a federal agency such as the FDA, or research to support an application otherwise subject to review by the agency, are not required to comply with the Act. Use of AI Systems is prevalent in drug development, and to the extent those activities are approved by the FDA, development and deployment of AI Systems under those approvals are not subject to the Act.

Compliance with ONC Standards

Also exempted from the Act's requirements are deployers, developers, or others that deploy, develop, put into service, or intentionally and substantially modify a high-risk AI System that is in compliance with standards established by federal agencies such as the Office of the National Coordinator for Health Information Technology (ONC). This exemption helps to avoid possible regulatory uncertainty for certified health IT developers, and health care providers using certified health IT, in compliance with ONC's HTI-1 Final Rule, which imposes certain information disclosure and risk management obligations onto developers of certified health IT. Not all developers of high-risk AI Systems in health care are developers of certified health IT, but the vast majority are certified, and this is an important carve-out for those developers already in compliance with, or working to comply with, the HTI-1 Final Rule.

Key Takeaways

Using a risk-based approach for review of AI System usage may be a newer practice for developers and deployers directly or indirectly involved with the provision of health care services. Deployers in particular will want to have processes in place to determine whether they are required to comply with the Act and to document the results of any applicable analyses. These analyses will involve determinations of whether their AI System serves as a substantial factor in making consequential decisions (and thus the system is “high-risk”) in relation to the provision of health care services. If they determine that they are using high-risk AI Systems and none of the exceptions above are applicable, they will need to begin activities such as developing the required risk management policies and procedures, conducting impact assessments for these systems, and setting up consumer and AG notification mechanisms. It will likely take some time for some organizations to integrate these new obligations into their respective policies, procedures, and risk management systems, and they will want to make sure they are including the right individuals in those conversations and decisions.
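The screening analysis described above can be sketched as a simple decision procedure. The function, data structure, and field names below are illustrative assumptions, not language from the Act, and the sketch deliberately collapses fact-intensive legal questions (such as what counts as a “substantial factor”) into booleans; an actual determination requires counsel's fact-specific review.

```python
from dataclasses import dataclass


@dataclass
class SystemProfile:
    """Hypothetical facts about one AI System deployment (illustrative only)."""
    is_ai_system: bool  # infers outputs from inputs per the Act's definition
    substantial_factor_in_consequential_decision: bool  # "high-risk" trigger
    fda_authorized_or_supporting_fda_application: bool  # FDA exception facts
    complies_with_onc_standards: bool  # ONC standards exemption facts


def act_obligations_likely_apply(profile: SystemProfile) -> bool:
    """Sketch of the screening order: AI System -> high-risk -> exceptions."""
    if not profile.is_ai_system:
        return False  # outside the Act's "AI System" definition
    if not profile.substantial_factor_in_consequential_decision:
        # Not "high-risk"; note the HIPAA Covered Entity exception likewise
        # turns on the recommendation not being high-risk (condition (iii)).
        return False
    if profile.fda_authorized_or_supporting_fda_application:
        return False  # FDA approval / FDA-reviewed research exception
    if profile.complies_with_onc_standards:
        return False  # ONC standards exemption
    # Risk management program, impact assessments, and consumer/AG
    # notices would likely be required.
    return True
```

Even where the sketch returns False, a deployer would still want to document the reasoning behind each step, since the classification of a system can change as its use evolves.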

Footnotes

1. Algorithmic discrimination is defined by the Act as “any condition in which the use of an AI System results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status” or other classification protected under Colorado law or federal law.

2. The PHSA defines health care services as “any services provided by a health care professional, or by any individual working under the supervision of a health care professional, that relate to—(A) the diagnosis, prevention, or treatment of any human disease or impairment; or (B) the assessment or care of the health of human beings.”

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
