Published on: December 31, 2025
By Distribution Strategy Group

State AI Rules Are Coming Fast. Here’s What Wholesale Distributors Need to Watch

State lawmakers, not Washington, are setting the first real ground rules for artificial intelligence. For wholesale distributors that already use AI in hiring, ecommerce, pricing, credit and customer service, those rules are starting to reach directly into day-to-day operations.

The jurisdictions distributors need to watch most closely right now are:

  • Colorado – first detailed “high-risk AI” law for private-sector developers and deployers.
  • Utah – first AI-centric consumer law aimed at generative AI transparency and liability.
  • Texas – the Texas Responsible Artificial Intelligence Governance Act, a broad AI law with civil penalties effective Jan. 1, 2026.
  • California – employment regulations that bring AI and automated decision systems squarely under the state’s discrimination law, effective Oct. 1, 2025.
  • Illinois – new AI limits in employment decisions effective Jan. 1, 2026, on top of existing rules for AI-analyzed video interviews.
  • New York City – Local Law 144 bias audits and notice requirements for automated hiring tools.
  • Tennessee – the ELVIS Act, a first-in-the-nation law protecting musicians’ voices and likenesses from AI cloning, now a model for synthetic-content regulation.

Behind those high-profile laws is a surge of state activity. The National Conference of State Legislatures reports that in the 2025 session, all 50 states, Puerto Rico, the Virgin Islands and Washington, D.C., introduced AI legislation, and 38 states adopted or enacted about 100 measures. A separate analysis finds more than 1,000 AI-focused bills were introduced in state legislatures in 2025, more than twice the number in 2024.

For distributors that sell and hire across state lines, this adds up to a fast-moving patchwork that will shape how AI is deployed in HR, credit, and customer-facing tools.

The first “comprehensive” AI laws: Colorado, Utah, and Texas

Colorado: High-Risk AI and Duty of Care

Colorado’s SB 24-205, Consumer Protections for Artificial Intelligence, is the country’s first comprehensive AI law aimed at private sector “high-risk” systems.

The act:

  • Defines a “high-risk artificial intelligence system” as one that makes, or is a substantial factor in making, a “consequential decision” about an individual, including decisions about employment, credit, education, housing, insurance, and essential government services.
  • Imposes a duty of reasonable care on both developers and deployers to protect consumers from known or foreseeable risks of algorithmic discrimination.
  • Requires deployers of high-risk systems, starting June 30, 2026, to:
    • Maintain a risk management policy and program for each high-risk system.
    • Perform impact assessments, including before deployment and periodically thereafter.
    • Provide pre-decision and adverse-action notices and publish certain disclosures.

Lawmakers originally set a Feb. 1, 2026, effective date but delayed implementation to June 30, 2026, through SB 25B-004 to allow more time for compliance.

What this means for distributors:

AI used to score customer credit, set account terms, prioritize claims or shape hiring and promotion decisions in Colorado is likely to be treated as “high-risk.” Those systems will need clear documentation, risk controls, and notice procedures, not just vendor assurances.

Utah: generative AI disclosures and accountability

Utah’s Artificial Intelligence Policy Act took effect May 1, 2024, and is widely viewed as the first state law focused specifically on generative AI transparency in consumer-facing interactions.

The law and related amendments:

  • Require entities that cause generative AI to interact with individuals to disclose that the interaction is with AI when asked, and at the outset for certain licensed or certified services.
  • Clarify that synthetic data generated by AI is not “personal data” under Utah’s privacy law.
  • Establish that businesses remain liable under consumer-protection law when they use AI, limiting their ability to blame a system for deceptive or harmful conduct.

Analysis from law firms notes that 2025 amendments adjusted when disclosures are required and created safe harbors, but the core expectations—disclosure and accountability for AI-enabled interactions—remain in place.

For distributors:

If your ecommerce or service channels use chatbots, virtual sales assistants or agent-like tools with Utah customers or suppliers, you should:

  • Build standard AI disclosures into those interfaces.
  • Treat AI-generated representations about price, availability, warranty terms or return eligibility as your statements under Utah law.
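As one illustration, a customer-facing chat layer could attach a standard AI disclosure for Utah users. This is a minimal sketch; the function and variable names are hypothetical illustrations, not drawn from the statute or from any vendor API:

```python
# Minimal sketch: prepend a standard AI disclosure to chatbot replies.
# All names here are hypothetical illustrations, not a legal template.

AI_DISCLOSURE = "You are chatting with an automated AI assistant."

# States where we always disclose up front; Utah's Artificial Intelligence
# Policy Act is the clearest current driver of this practice.
ALWAYS_DISCLOSE_STATES = {"UT"}

def wrap_reply(reply: str, user_state: str, disclose_everywhere: bool = True) -> str:
    """Attach the AI disclosure when required (or, by default, always)."""
    if disclose_everywhere or user_state in ALWAYS_DISCLOSE_STATES:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```

Defaulting to disclosure everywhere, rather than geo-fencing it to Utah, is the simpler posture suggested above: one standard nationwide avoids tracking which state each customer is in.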

Texas: The Responsible Artificial Intelligence Governance Act

Texas joined Colorado and Utah with HB 149, the Texas Responsible Artificial Intelligence Governance Act, signed June 22, 2025.

According to legislative analysis and law firm summaries, once the act takes effect on Jan. 1, 2026, Texas will:

  • Establish baseline duties for AI developers and deployers operating in the state.
  • Authorize civil penalties for violations involving artificial intelligence systems.
  • Create an AI regulatory sandbox and safe harbors tied to risk management frameworks.
  • Preempt local AI ordinances, reserving AI regulation to the state level.

For distributors:

While initial implementation focuses heavily on state agencies, the law’s text reaches private entities that use AI systems in Texas. Companies should expect:

  • A single, statewide standard for AI in Texas rather than city-by-city rules.
  • More scrutiny of AI used in marketing, customer service and decision systems affecting Texans’ rights or access to services.

Employment: California, Illinois, and New York City tighten rules on AI hiring

State and local rules around AI in employment are likely to be the first place many distributors feel direct regulatory pressure.

California: FEHA and automated decision systems

California finalized employment regulations addressing AI and automated-decision systems under the Fair Employment and Housing Act, with an effective date of Oct. 1, 2025.

The regulations:

  • Define an automated decision system broadly to include any computational process, including AI and algorithms, used to aid or replace human decision-making in employment.
  • Confirm that employers can be liable when use of such systems results in discrimination prohibited by FEHA.
  • Emphasize the need for testing, documentation, and vendor oversight when AI tools are used for hiring, promotion, performance evaluation, or termination.

Illinois: notice and limits on AI in employment decisions

Illinois has layered several requirements for employers that use AI in hiring:

  • The Artificial Intelligence Video Interview Act requires employers that use AI to analyze video interviews for Illinois positions to notify applicants, explain the AI’s role, obtain consent, and delete recordings upon request.
  • HB 3733, signed Aug. 9, 2024, amends the Illinois Human Rights Act to cover employer use of AI in employment decisions and takes effect Jan. 1, 2026. It prohibits AI use that has a discriminatory effect and requires employers to give notice when AI is used for recruitment, hiring, promotion, renewal of employment or discharge.
  • Draft notice rules released in December 2025 outline timing and content requirements for those disclosures, including annual notice for employees and notice in job postings for applicants.

A statewide summary of 2026 laws underscores that Illinois will prohibit AI use in certain employment decisions when it would discriminate and will require disclosure to applicants.

New York City: Local Law 144 bias audits

New York City’s Local Law 144 of 2021 requires employers and employment agencies that use automated employment decision tools to meet three main obligations:

  • Conduct an independent bias audit of each covered tool within one year before use.
  • Post a summary of the most recent bias audit on a publicly available website.
  • Provide advance notice to candidates and employees that an automated tool will be used, how it works and what data it uses.

Audits and enforcement updates from city and state offices confirm that these requirements are in force and are being used as a model for other jurisdictions considering similar rules.

Why This Matters for Distributors’ Workforces

Distributors hiring in California, Illinois or New York City often rely on:

  • Applicant tracking systems that score, rank or screen candidates automatically.
  • Video-interview platforms with built-in analytics.
  • Shared recruiting workflows that cover multiple states from a single corporate system.

At the same time, Mobley v. Workday, a lawsuit in federal court in California, alleges that an AI-based applicant recommendation system discriminated against applicants based on race, age, and disability. In May 2025, Judge Rita Lin granted preliminary collective certification for age discrimination claims under the Age Discrimination in Employment Act, allowing the case to proceed as a nationwide collective action.

Legal coverage warns that the case, along with other litigation, is shaping expectations for employers: if AI is part of the hiring process, employers will be expected to audit and monitor it, not simply rely on vendors’ assurances.

Likeness and content: Tennessee’s ELVIS Act and its imitators

Tennessee’s Ensuring Likeness, Voice, and Image Security (ELVIS) Act, signed March 21, 2024, and effective July 1, 2024, expands the state’s right of publicity to address AI voice and image cloning.

The law:

  • Protects a person’s name, photograph, voice, and likeness against unauthorized AI-generated impersonation.
  • Targets those who create or distribute deepfake and “sound-alike” content that could mislead audiences.
  • Has been described by news outlets and lawmakers as the first U.S. statute aimed specifically at protecting musicians and artists from AI-driven impersonation.

Other states are considering similar protections for voice and likeness, particularly where the music and entertainment industries are central to the economy.

Implications for distributors:

  • If you use synthetic voices in training modules, webinars, or customer-service systems, you should confirm they are properly licensed and not trained to mimic specific individuals without consent.
  • As more states adopt ELVIS-style rules, marketing and learning teams will need contract checks and review processes for AI-generated personas, even in B2B campaigns.

The Broader Patchwork: How It Hits Distributor Workflows

Even without a federal AI statute, state laws and local rules are already reshaping how distributors should deploy AI. NCSL data and independent analysis show that 38 states enacted around 100 AI-related measures in 2025, and more than 1,000 AI-focused bills were introduced that year.

A Reuters review notes that state attorneys general are also using existing privacy, consumer-protection, and civil-rights laws to police AI misuse, even in states without stand-alone AI statutes.

For distributors, the impact breaks into five practical areas:

  1. HR and talent technology

Where risk shows up:

  • Resume-screening tools, ranking algorithms and AI-enhanced assessments used in California, Illinois, and New York City.

What to do:

  • Build an inventory of AI and automated systems used in recruiting, hiring, promotion and termination.
  • For California, ensure systems are evaluated and documented under the FEHA regulations and that vendor contracts reflect those obligations.
  • For Illinois, prepare compliant notices and consent flows and align practices with the Illinois Human Rights Act amendments and the Video Interview Act.
  • For New York City, confirm that covered tools have current independent bias audits, that summaries are posted and that candidates receive required notices.
  2. Ecommerce, bots, and agentic tools

Where risk shows up:

  • Customer-facing chatbots and virtual agents in Utah and, increasingly, in Texas and other states that focus on harmful or deceptive AI conduct.

What to do:

  • Standardize AI disclosures in customer-service chats and digital sales assistants, at least for Utah users, and consider using that standard nationwide.
  • Treat AI-generated information about specifications, pricing, availability, and terms as binding communications, and build review and escalation paths for edge cases.
  3. Credit, pricing, and other “consequential” decisions

Where risk shows up:

  • Automated credit approvals and limits, risk-based pricing, and collections workflows in Colorado and in any future high-risk AI regimes modeled on SB 24-205.

What to do:

  • Identify AI systems that influence credit, terms, or access to key programs.
  • Align your internal controls with recognized risk frameworks, such as the National Institute of Standards and Technology’s AI Risk Management Framework, which is explicitly referenced in commentary on state AI laws.
  • Develop and document impact assessments and adverse-action notices where AI plays a significant role in decisions about customers or employees.
  4. Content, training, and brand protection

Where risk shows up:

  • AI-generated videos, synthetic trainers, and voiceovers used in internal training and external marketing, particularly as ELVIS-style rules spread.

What to do:

  • Ask vendors for clear documentation of how synthetic voices and faces are sourced and licensed.
  • Add a step to creative review processes to screen for potential unauthorized impersonation risks.
  5. Enforcement and the missing federal standard

Commentary from policy and legal analysts concludes that states remain the “AI regulatory leader,” given Congress’ failure to pass comprehensive AI legislation and a failed effort to pre-empt state laws for 10 years.

Meanwhile:

  • A Reuters analysis notes that state attorneys general—from California to Texas—are already using existing laws to pursue AI-related cases involving deepfakes, deceptive marketing and algorithmic bias.
  • Another Reuters piece highlights that, in employment, state and local rules like New York City’s Local Law 144 are filling a gap as federal enforcement priorities shift.
  • Attorneys general from 35 states and the District of Columbia recently urged Congress not to block state AI regulation, signaling that state-level action is likely to continue.

For distributors, the practical assumption should be that state-level AI rules will keep expanding, and that regulators and plaintiffs’ lawyers will expect audits, documentation, and vendor transparency as standard practice.

A Practical Playbook for Distributors

Given the speed and fragmentation of state AI policy, distributors do not need a full-time public policy shop, but they do need structure:

  • Inventory AI use
    • Catalogue where AI or automated decision systems are used in HR, credit, ecommerce, forecasting, customer service, and marketing.
    • Tag systems that touch residents of Colorado, Utah, Texas, California, Illinois, New York City and Tennessee.
  • Classify high-risk systems
    • Flag systems that affect employment, credit, pricing, or access to essential services.
    • Use Colorado’s SB 24-205 and Illinois’ 2026 employment amendments as minimum benchmarks for risk management and documentation.
  • Standardize disclosures and notices
    • Develop standard language for hiring notices (Illinois, New York City, California) and AI interaction disclosures (Utah), then localize as needed.
  • Tighten vendor oversight
    • Require vendors to provide bias audit results, testing summaries and documentation for AI used in hiring and decision-making.
    • Include cooperation clauses for responding to regulatory inquiries or litigation.
  • Track outcomes and update governance
    • Monitor metrics such as who is being rejected, denied or downscored and by what tools.
    • Fold AI risk into your enterprise risk management program, alongside safety, cybersecurity, and data privacy.
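The inventory and classification steps above can be sketched as a simple internal register. This is a hypothetical illustration: the field names are invented, and the high-risk test loosely mirrors Colorado’s “consequential decision” categories rather than any official schema:

```python
from dataclasses import dataclass, field

# Decision types Colorado's SB 24-205 treats as "consequential" (simplified).
CONSEQUENTIAL = {"employment", "credit", "education", "housing",
                 "insurance", "essential_services"}

# Jurisdictions this article flags for close tracking.
WATCH_LIST = {"CO", "UT", "TX", "CA", "IL", "NYC", "TN"}

@dataclass
class AISystem:
    name: str
    business_area: str              # e.g. "HR", "credit", "ecommerce"
    decision_types: set = field(default_factory=set)
    jurisdictions: set = field(default_factory=set)

    def is_high_risk(self) -> bool:
        """Flag systems that influence consequential decisions (CO-style test)."""
        return bool(self.decision_types & CONSEQUENTIAL)

    def watch_jurisdictions(self) -> set:
        """Watch-list jurisdictions this system touches."""
        return self.jurisdictions & WATCH_LIST

# Example: an applicant-screening tool used for hires in several states.
screener = AISystem(
    name="resume-ranker",
    business_area="HR",
    decision_types={"employment"},
    jurisdictions={"CA", "IL", "TX"},
)
```

Even a spreadsheet-level register like this gives legal and IT teams a shared starting point: which systems are high-risk, and which state rules apply to each.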

State AI rules are not a distant policy discussion—they are already starting to govern how distributors hire, lend, price, and communicate. Companies that treat AI governance with the same seriousness as safety and compliance will be in a stronger position as more states move from proposals to enforceable rules.

© 2026 Distribution Strategy Group