AMARILLO, TX – For the last few years, “AI” has been, most fundamentally, a marketing term used by IT companies to sell products (“technical solutions”) to other companies. Still, behind that marketing there are some pretty exciting developments. Although “AI” is used to capture a great variety of enhancements to old-fashioned computer programs, what AI boils down to is the ability of a digital algorithm (at core, a computer) to mimic certain features of ordinary human intelligence.
Examples of AI include (i) smartphone face recognition; (ii) social media (and other media, e.g., music streaming) algorithms that structure the content that appears on feeds; (iii) smart assistants, like Siri on Apple iPhones, Alexa on the Amazon Echo, and the like; and (iv) of course, large language models like ChatGPT and Grok.
Administrative uses of AI include (i) support of non-clinical functions such as billing and prior authorization automation, and (ii) improved inventory management through better predictive models of patient ordering.
Clinical and care management support includes (i) improved patient therapy adherence, including automated reminders and patient-specific feedback; (ii) improved drug contraindication checks; (iii) improved detection of medication discrepancies; and (iv) robotic dosage preparation, which could virtually eliminate dosing errors.
AI is also becoming an important tool for governmental agencies and third-party payors to investigate alleged fraud and abuse. Examples include (i) flagging high-risk prescription activity, (ii) flagging improper reimbursement claims, and (iii) potentially improving coverage and pricing transparency.
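To make “flagging” concrete, the sketch below shows the kind of simple statistical outlier check such a program might run. It is a hypothetical toy, assuming invented prescriber IDs, claim counts, and a z-score threshold; it is not a description of any payor’s or agency’s actual algorithm.

```python
# Hypothetical illustration only: none of the data, names, or thresholds below
# come from any real payor or agency system; they are invented for this sketch.
from statistics import mean, stdev

# Assumed input: monthly claim counts per prescriber (toy data).
claims_per_prescriber = {
    "prescriber_01": 100, "prescriber_02": 95,  "prescriber_03": 110,
    "prescriber_04": 102, "prescriber_05": 98,  "prescriber_06": 105,
    "prescriber_07": 99,  "prescriber_08": 103, "prescriber_09": 97,
    "prescriber_10": 600,  # unusually high volume
}

def flag_high_risk(counts: dict[str, int], z_threshold: float = 2.5) -> list[str]:
    """Return IDs whose claim counts sit far above the group average (z-score rule)."""
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [pid for pid, n in counts.items() if (n - mu) / sigma > z_threshold]

print(flag_high_risk(claims_per_prescriber))  # ['prescriber_10']
```

A production system would typically combine many such signals and route any flags to human reviewers rather than act on them automatically.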
Federal Regulations Impacting Use of AI
As of today, there is no general federal regulatory framework for AI. The FDA regulates a variety of uses of AI, especially in the medical device field. However, this has limited applicability to the DME space. A number of Executive Orders have been issued and then rescinded.
State Regulations Impacting AI
States have been leading the way. Examples include:
- California: A range of laws enacted in California require disclosure of AI use and prohibit so-called deepfakes. The California AI Transparency Act (effective 1/1/2026) requires notification when content has been generated or modified by AI and establishes a licensing agency to ensure that only compliant AI technology is publicly accessible.
- Utah: The Artificial Intelligence Policy Act is already in effect. It requires people in regulated occupations (including pharmacies and pharmacists) to prominently and proactively disclose their use of AI. Failure to do so can result in state fines of up to $2,500 per violation.
- Texas: The Texas Responsible AI Governance Act was signed on 6/22/25. It is a more limited version of the California AI Act.
The Colorado AI Act
Let us focus on the Colorado AI Act because it is serving as a template for legislation under consideration by other states.
It applies to all developers and deployers of high-risk AI. “High risk” means that “when deployed, [the AI] makes, or is a substantial factor in making a consequential decision” that significantly impacts any of several categories of goods and services, including healthcare. The Act specifically prohibits “algorithmic discrimination,” which in practice is any AI-driven decision process that results in otherwise unlawfully discriminatory outcomes (i.e., discrimination on the basis of race, disability, age, or ethnicity).
The compliance components of the Colorado AI Act include:
- Duty to Avoid Algorithmic Discrimination: Developers and deployers must use reasonable care to protect consumers from foreseeable risks of algorithmic discrimination.
- Risk-Management Policy: They must adopt and implement a risk-management policy and program that identifies, documents, and mitigates risks of such discrimination.
- Impact Assessments: Regular and event-specific impact assessments are required.
Use of AI must be disclosed, and any discovered algorithmic discrimination must be reported to the state Attorney General. As noted, the Colorado AI Act will likely serve as a model for other states; states with similar bills pending include Connecticut, Massachusetts, New Mexico, New York, and Virginia.
Possible Pitfalls of AI Implementation
AI is a tool. Like any tool, it can break, malfunction, or misfire. But because it can automate a large number of tasks, AI poses a wide variety of risks, including the risk that its errors will not be noticed in a timely fashion.
Regulated Processes: Any regulated processes that a business assigns to AI will continue to be regulated.
Responsibility: Although a provider may be able to limit its business liability under contract, the provider should assume that its business will remain broadly responsible for all outcomes of AI-driven processes, including any that result in adverse agency action.
Reliability Verification: The provider’s primary task with AI, then, will be to regularly verify the functionality and regulatory compliance of the processes it automates.
Compliance Updates: State and Federal regulations are likely to increase in this space, so it is crucial that the provider keep up with regulatory changes.
Clinical Judgment: Any intrusion of AI into the area of clinical judgment is likely to trigger adverse agency action and patient litigation.
Lokken v. UnitedHealth provides a vivid example: the case hinges on the allegation that United uses a proprietary AI program to decide when patients are entitled to post-acute care benefits, and that the AI wrongly denied care to a beneficiary, resulting in his death. The case is pending.
FDA Regulation and Avoidance
At present, the primary federal body regulating AI in the healthcare space is the FDA. Most of its regulatory framework is directed at medical devices that utilize AI and at Software as a Medical Device (“SaMD”). The FDA maintains a summary of developments in that space. If a provider deploys, e.g., phone apps that use AI technology, it is important to recognize what will and will not be regulated by the FDA. Any medical device that requires a prescription will be FDA regulated. Many new devices rely on AI; however, compliance with FDA regulation of these devices is typically the responsibility of the manufacturer.
FDA Exceptions
Importantly, three broad categories of AI tools are not generally regulated by the FDA.
Clinical Support and Care Management Tools: Tools that analyze patient data and suggest follow-up steps are typically not FDA-regulated, so long as they are designed to inform rather than direct clinical judgment.
Consumer Wellness Tools: Think, e.g., Fitbits and other wearable health trackers. Unless these products make specific medical claims, they typically fall under the FDA’s wellness exception.
Administrative AI: Because these tools are not intended for diagnosis or treatment, they typically are excepted from FDA regulation.
Jeffrey S. Baird, Esq. is chairman of the Health Care Group at Brown & Fortunato, PC, a law firm based in Texas with a national healthcare practice. He represents pharmacies, infusion companies, HME companies, manufacturers, and other healthcare providers throughout the United States. Mr. Baird is Board Certified in Health Law by the Texas Board of Legal Specialization and can be reached at (806) 345-6320 or [email protected].
Blinn E. Combs, Esq. is a member of the Health Care Group at Brown & Fortunato, PC, a law firm with a national healthcare practice based in Texas. He represents pharmacies, infusion companies, HME companies, manufacturers, and other healthcare providers throughout the United States. Mr. Combs can be reached at (806) 345-6355 or [email protected].