When it comes to regulating AI, the US Congress has mostly debated who gets to write the rules the nascent industry needs to play by.
But the European Union (EU) has already taken significant steps to regulate the industry, potentially impacting any US-based AI companies looking for European users.
Passed in 2024 with a phased implementation through 2030, the EU AI Act goes into effect this week. It’s the first comprehensive regulation in the world to address AI safety.
While many in the US might scoff at the very idea of AI firms having to follow rules imposed by a foreign power, most US-based startups operating in Europe will need to comply by Aug. 2, 2026. (Systems already on the market before that date are grandfathered in and generally have until the same date in 2027 to comply.)
While some startups might be scrambling to address these regulatory requirements, at least one has been approaching compliance proactively from its earliest days. For Philly-based HR platform Phenom, the EU AI Act always looked like an inevitability, and the company is drawing on lessons from its 2018 General Data Protection Regulation (GDPR) compliance effort to inform its approach.
“That date came and went, and it was a non-event because [Phenom clients] were ready,” said Cliff Jurkiewicz, Phenom’s SVP of global strategy and general manager of its customer advisory board. Hopefully, the results will be similar by this time next year, too.
For US-based AI startups looking to gain a foothold in the EU, here’s what you need to know to prep now, so that the enforcement date doesn’t sneak up on you.
What is the EU AI Act?
The first AI regulation of its kind, the EU AI Act requires AI firms to classify their technology into one of four risk categories. Based on that classification, each company has different obligations to meet.
- Unacceptable risk: Systems that use sensitive information to manipulate, control or exploit people, or to otherwise violate human rights. Examples include social scoring, predictive policing and AI that uses biometrics to categorize individuals. These systems are banned outright.
- High risk: AI that is embedded in a regulated product, such as a drone or medical device, or is itself such a product, along with systems that handle sensitive user data in areas like HR, critical infrastructure and law enforcement.
- Limited risk: Systems that pose relatively low risk but still carry transparency obligations, such as customer service chatbots and generative content AI.
- Minimal risk: Systems that pose little to no risk to health, safety or fundamental rights. This includes spam filters, autocomplete or grammar checkers, recommendation engines and productivity tools, among others.
With the exception of minimal-risk systems, AI companies operating in the EU are required to classify their systems and provide the appropriate documentation.
High-risk AI requires the most documentation, including technical documentation, data governance documentation, human oversight protocols, recordkeeping and logging, an internal or third-party compliance audit, plus both risk management and post-deployment monitoring plans.
Limited-risk systems simply require user instructions, disclosure notices and deepfake labeling.
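For teams trying to operationalize those tiers, the obligations boil down to a checklist keyed to the risk category. Here is a minimal sketch in Python of what that mapping might look like; the tier names and document lists are paraphrased from the categories above, not an official schema, and the `compliance_checklist` helper is hypothetical.

```python
# Hypothetical mapping of EU AI Act risk tiers to the documentation
# each tier requires, based on the obligations described above.
RISK_TIER_OBLIGATIONS = {
    "high": [
        "technical documentation",
        "data governance documentation",
        "human oversight protocols",
        "recordkeeping and logging",
        "internal or third-party compliance audit",
        "risk management plan",
        "post-deployment monitoring plan",
    ],
    "limited": [
        "user instructions",
        "disclosure notices",
        "deepfake labeling",
    ],
    "minimal": [],  # no documentation obligations
}


def compliance_checklist(tier: str) -> list[str]:
    """Return the outstanding documentation items for a given risk tier."""
    if tier == "unacceptable":
        raise ValueError("Unacceptable-risk systems are banned outright in the EU.")
    return RISK_TIER_OBLIGATIONS[tier]


print(compliance_checklist("high"))
```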
It’s important to note that not only are AI companies responsible for meeting regulatory compliance, but their customers are as well. In other words, if your business uses any system that is considered high risk, you will also be responsible for meeting the same compliance standards.
How Phenom got ready to comply
Phenom was among the first companies to deploy AI and machine learning in an HR ecosystem, and it knew early on that regulations would eventually catch up to its technology.
As such, the 15-year-old startup built reporting and tracking tools that enable it — and its customers — to easily respond to audits, according to Jurkiewicz.
It turned out to be a prescient move: Phenom now serves clients in approximately 190 countries and must comply with local laws in each of them.
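Phenom hasn't published its internal tooling, but the general pattern behind that kind of audit readiness is recording each AI-assisted decision with enough context to answer an auditor's questions later. The sketch below shows one way to do that; the field names and the `log_ai_decision` helper are hypothetical, not Phenom's actual design.

```python
import json
import uuid
from datetime import datetime, timezone


def log_ai_decision(model_version: str, input_summary: str,
                    output_summary: str, human_reviewer: str | None) -> dict:
    # Hypothetical audit record for an AI-assisted decision. The fields are
    # illustrative: enough context to reconstruct what the system did and who
    # signed off, the kind of recordkeeping the Act expects for high-risk systems.
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,    # what the model was asked to do
        "output_summary": output_summary,  # what it recommended
        "human_reviewer": human_reviewer,  # who signed off, if anyone
    }
    # Append-only log; in practice this would go to durable, queryable storage.
    with open("ai_audit_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record


log_ai_decision("match-model-v3", "ranked 40 candidates for REQ-1123",
                "surfaced top 5 candidates", human_reviewer="recruiter@example.com")
```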
For the EU AI Act, Phenom says it feels ready for when the regulations kick in, in part because New York, Colorado, Illinois and other states already have laws with very similar classification and transparency requirements.
“The EU is more a matter of scale than context,” said Jurkiewicz. “As soon as we knew that the EU AI Act was in the legislative process, we started to wrap resources around examining what was going back and forth.”
While Phenom doesn’t see being compliant as a major challenge, it is working to ensure its customers understand not only the regulations but also how to get the data needed from its platform to prove compliance.
So, what should US-based AI firms be looking to do to prepare for the EU AI Act? Jurkiewicz recommends three things:
- Design your business model with future regulation in mind. Compliance is coming, so build the tools you'll need before you're required to have them.
- Use existing laws to determine risk level. Regulations aren’t built in a vacuum.
- Employ client-facing people who can explain the risks to clients and how your software is handling compliance.
Lastly, it’s important to recognize that the regulatory environment is constantly evolving, both abroad and in the US. While the thought of regulation often elicits groans, Phenom’s example can serve as a blueprint for firms looking to expand into Europe.
For Phenom, non-compliance isn’t an option.
“Anyone in this domain should be taking this seriously because it’s a reflection of our domain,” said Jurkiewicz. “We may agree or disagree with a regulation, but it is our job to comply.”