States across the country are reckoning with how to balance regulating artificial intelligence and fostering innovation. Virginia’s no different.
Nearly 1,000 artificial intelligence-related bills have been introduced in the US in 2025, including 29 in Virginia alone. One proposed consumer protection bill, HB 2094, focused on creating requirements for developing, deploying and using “high-risk artificial intelligence systems.” That classification covers uses of the technology in areas like financial and healthcare services, where legal benchmarks could help protect users from bias and algorithmic discrimination.
Gov. Glenn Youngkin vetoed the legislation this spring, stating that it would inhibit small companies and startups and stifle innovation.
This is the second time Del. Michelle Maldonado (D-District 20) has brought this legislation forward, and she expected the governor’s veto.
The lawmaker saw the mental health ramifications of minimal regulation around social media and wants to be “proactive versus reactive,” she said.
“What I’m suggesting is that we don’t wait to see how it evolves,” Maldonado told Technical.ly. “That we’d be engaged in a very thoughtful conversation from the start.”
Had it passed, Virginia would have been the second state to adopt comprehensive AI rules, although Colorado’s measure still faces delays and revisions. Maldonado plans to introduce the legislation for a third time next year.
“When you have really complex legislation, when it’s a first of its kind, it often takes multiple attempts before it becomes law,” she said. “I think this piece of legislation is no different.”
Avoiding a costly patchwork of laws
Maldonado is part of a multi-state AI policymaker working group alongside about 200 other legislators.
Members of this collective are introducing similar AI bills to Virginia’s across the country. One in nearby Maryland made it past a first reading during the last legislative session.
This interstate collaboration makes sense to Nate Lindfors, policy director at Engine, a policy-focused entrepreneurship nonprofit. But there’s a risk that lawmakers will tweak the bills so they end up similar in spirit yet meaningfully different in detail, making them difficult for founders to navigate beyond state lines.
Lindfors, whose employer supported Youngkin’s veto, sees that issue affecting companies navigating the country’s 19 comprehensive, state-level data privacy laws.
“A patchwork of laws is really costly for startups,” Lindfors told Technical.ly, “really fast.”
Carrying out HB 2094’s required impact assessments and other documentation could cost Virginia’s AI developers and deployers $290 million in total, according to an analysis from the tech industry coalition Chamber of Progress.
“That figure alone just prices out any sort of smaller, minority-owned innovators that are looking to do work in the state,” said Brianna January, the organization’s director of state and local government relations in the northeast. “Which is an ironic unintended consequence — that we’re trying to decrease any sort of potential bias in AI.”
Moreover, robust documentation does not necessarily reduce the risk of harm, per Gillian Hadfield, a computer science professor with an appointment at Johns Hopkins’ School of Government and Policy.

She calls the tendency to conflate recordkeeping with damage mitigation “lawyer’s disease,” and sees it in the EU AI Act as well as in regulatory approaches outside of AI.
“We don’t really have much evidence about how well that works,” Hadfield told Technical.ly, “to actually reduce the risk of the harms we care about.”
Entrepreneurial leaders call for a sector-by-sector approach
Del. Maldonado’s bill would have applied horizontally, covering any industry that uses AI in high-risk activities and consequential decisions. She deemed it a more “narrow” approach than other AI legislation because it applies only to highly protected areas.
She also doesn’t expect lawmakers to dictate exactly how every field should implement regulations.
“I don’t think anybody wants us as legislators telling every single sector how to do the work well,” she explained. “I think the job of the legislature is to establish the framework and the guardrail … We don’t want to step on your ability to be innovative and creative, but we do want to make sure we’re protecting people along with business.”
But because each industry uses AI platforms differently, January from the Chamber of Progress believes AI regulation should be sector-tailored and nuanced.
“It comes down to different uses — different opportunities mean different potential biases that we should focus on, instead of the big omnibus approach,” she said.
Todd O’Boyle, the Chamber of Progress’ senior director of technology policy, also noted that consumer protections already exist for many of these “high-risk” systems, like housing. The Virginia Fair Housing Law makes it illegal to discriminate against a renter based on a protected class, and AI doesn’t change that, he said.
But O’Boyle asserted that he wants to see any potential AI-related loopholes in the law closed.
Taylor Barkley agrees. The director of public policy for the Abundance Institute, a think tank that a 2024 Politico report identified as funded by the Koch brothers to promote light-touch AI regulation, believes AI is too vast a technology to regulate with a single blanket rule across sectors.
“You’d be regulating how people use spreadsheets. People use spreadsheets for all sorts of different applications and use cases,” Barkley told Technical.ly. “If there was a law banning discrimination in spreadsheets, we could say that’s good. But if we unpack that a bit, how would that trip up innovation?”
Barkley instead favors consumer-focused legislation with a narrower lens. For example, a bill just signed in the Abundance Institute’s home state of Utah addresses liability for mental health AI chatbots and requires disclosure that the technology is being used.
Regulating AI in the future
Because AI constantly changes, laws and regulations need to be flexible, per Hadfield from Johns Hopkins University.
“AI is going to change the way we do everything,” she said, “so that includes the way we do law and regulation.”
She’s been advocating for governments to use multi-stakeholder regulatory organizations, and contributed to a bill proposal calling for that in California. It involves a joint public-private approach to determining what the regulatory mechanics would look like, how to get there and the best way to achieve outcomes, including testing AI models.
Under the proposal, participation would not be mandatory, but those involved would have a safe harbor if a model resulted in injury or damage.
“I’d like to see more state legislation that was getting creative about new approaches,” Hadfield said. “Measured approaches, appropriate approaches to regulation, not, ‘Hey, we’ve taken the standard and made you do a bunch of this paperwork to show that you take it seriously too.’”