I will never forget the Seahawks vs. 49ers playoff game in January 2014.
It was brutal. Players were carried off on stretchers. Both teams left it all on the field, with the Seahawks dominating like the machine they had been all season. That game helped me understand something football fans already knew: Winning isn’t about choosing offense or defense. It’s about knowing when to play each.
AI governance is shaping up the same way.
Pennsylvania Rep. Summer Lee, in partnership with her Democratic colleagues, introduced the Artificial Intelligence Civil Rights Act, which, according to Technical.ly, “would prohibit algorithmic discrimination, require independent audits of high-impact systems and give people the right to choose whether a human or an algorithm makes consequential decisions about their lives.”
This latest push for guardrails is worth highlighting because most attempts at regulation, from stopping revenge porn to establishing national oversight similar to the EU’s, are non-starters in our cowboy-style innovation climate.
Some bias in the AI space is intentional, and some is simply sloppy work or a very unfortunate accident. In the rush to be first to market, less-than-thorough work has real-world consequences.
Regulation is often portrayed as an innovation roadblock. But what if it were framed instead as the astroturf that, far from hindering players, is a whole lot softer than being tackled on concrete? That framing matters, because globally, AI regulation is increasingly presented as a false choice: innovation or protection.
Instead, it should be portrayed as the rules of the game. Playing chess or football with a toddler is challenging if you are trying to play by the rules and they would rather eat the pieces or hug the ball. Yet how satisfying is it to play any sport or intellectual game when you and your opponent follow the same set of rules? You understand the parameters and can flex your strategy, stamina and creativity accordingly.
Nationally and globally, AI regulation is haphazard at best, and navigating guidelines and very real legal consequences that differ so drastically can be like walking through a minefield.
Uniform governance may seem like a tough ask, but is the current system really working?
US offense vs. EU defense
The United States has leaned hard into offense. The current administration’s AI Action Plan emphasizes removing regulatory barriers to “solidify our position as the global leader in AI.”
By contrast, the European Union has taken a risk-based approach through the EU AI Act, placing clear obligations on providers of high-risk systems.
As EY’s European policy team summarizes it, “the EU AI Act aims to ensure that AI systems are safe and respect fundamental rights and values, while fostering investment and innovation.”
Both approaches have consequences. Stricter regulations have cost some EU companies business. In fact, nearly 60% of EU and UK developers report launch delays, and more than one in three are forced to strip or downgrade features to comply with the act, according to trade organization ACT.
While the US may not have as many regulations, that in itself is a risk. In March 2025, Clearview AI settled a $50 million class action alleging it had scraped personal data online and sold access to law enforcement without consent.
“From biometric privacy violations to algorithmic discrimination, recent lawsuits are reshaping what ‘AI litigation’ really means,” legal experts wrote in a blog post earlier this year.
It’s not just lawsuits: The Hollywood writers’ strike and a video game actors’ contract dispute recently illustrated that, absent national policy, intellectual property and personal likeness are pressing issues in the US labor market.
Different strategies. Different values. Same underlying problem.
According to the Albanian news publication the Tirana Times, “AI readiness continues to mirror existing power structures … Influence is shaped not only by innovation, but by the ability to govern artificial intelligence responsibly and at scale.”
The danger isn’t regulation itself: It’s imbalance. When policy focuses only on speed, harm is externalized to individuals and communities. When it focuses only on restriction, opportunity migrates elsewhere.
A lesson for local regulators
The real leadership challenge, especially for regions like Pittsburgh, is learning how to do both at once.
John Quigley, writing for the Kleinman Center for Energy Policy, rightly points out that “the hype around Pennsylvania’s emerging AI boom is missing balance. Amid cheers for innovation and investment, key issues — climate, cost and community impact — are being swept aside.”
Plus, Technical.ly’s Alice Crow mused last year, “there’s a growing buzz about Pittsburgh’s potential to become a global leader in tech, but the question remains: What will it take to push the city over the edge and into the spotlight?”
And it’s true: The region has natural resources, venerable institutions like Carnegie Mellon University, unicorns like Abridge AI and Gecko Robotics, visionary leadership and investors who will make it rain when AI is involved.
Pittsburgh and Pennsylvania as a whole have the talent, capital and institutional depth to help define what responsible AI leadership looks like in practice, but only if it resists the false choice between speed and safeguards.
The next phase of AI (both here and globally) won’t be won by those who move fastest alone, but by those who know when to push forward and when to protect the people on the field.