Donald Trump made artificial intelligence one of his biggest priorities during his presidency’s first days.
He signed his own AI executive order after axing the previous administration’s and is taking part in the major private AI infrastructure project Stargate. It’s early, and plenty of unknowns about the future of regulation still remain.
But experts anticipate that many of Trump’s decisions so far will have major downstream effects on how we use AI.
For example, the recent blows to diversity, equity and inclusion practices could ripple into how healthcare systems implement AI, potentially harming patients. Plus, power needs from AI infrastructure will continue to rise, translating into higher electric bills for consumers.
Scholars and researchers from Johns Hopkins University shared these and other predictions for AI under Trump during a recent media briefing. Here are some of their key takeaways.
Demands from data centers will continue to increase
As AI systems become more embedded in day-to-day operations, so does the need for the resources that fuel them.
Physical infrastructure like data centers is a major part of this need. The massive structures will continue to pop up across the US, per Tinglong Dai, a business analytics professor at Johns Hopkins. There’s already a massive concentration of data centers in Loudoun County, Virginia’s self-named Data Center Alley, and projects like Stargate entail data center development in places like Texas.
All that points to a greater need for electricity: Data centers consume 2% of power generated in the US, and that number is expected to triple by 2030. Data centers also need water to cool their systems, Dai explained, even though some companies claim to use little to no water.
This trend arose long before Trump was elected to a second term, and Dai said the Trump and Biden administrations shared priorities in the push for energy to power data centers.
The trend also boasts a lot of funding.
“When we talk about the data centers, we are really talking about billions of dollars [of] money involved,” Dai told reporters. “It’s extremely capital intensive.”
Dai also highlighted sustainability concerns like the reliance on fossil fuels for power and rising electricity prices for the average consumer.
Anti-DEI stance puts health equity at risk
As researchers pursue clinical trials and tests to implement AI in health systems, the administration’s aggressive step away from DEI could impact patient care, per Suchi Saria, the director of Johns Hopkins’ AI and Healthcare Lab.
“Given this administration’s focus on banning the DEI-related efforts,” Saria told reporters, “how does this play out in the context of healthcare, where there were very concrete initiatives that were put in place to improve health equity?”
This has been an issue for the past few years. Major corporations like UnitedHealthcare implemented AI tools that then hurt patients, she explained. There’s a need to document and track the consequences when these entities implement AI poorly, as well as to define how liability works, she said.
AI adoption in healthcare isn’t going away, and there are many positive use cases: Saria cited the example of using the tech to alleviate workforce demands, which also drives more AI adoption.
Looking forward, the government needs to be united in how it regulates and guides AI use and adoption, she said.
“Multiple conflicting guidelines make it difficult,” Saria said. “There is a need for some kind of best practice. Because I do think the presence of best practices to accelerate AI adoption creates a semblance of sanity.”
There’s very little AI regulation. Building trust is key to its success
The Biden administration’s first AI executive order set up systems to create guidelines and research around the technology, but very few regulations exist on the books.
But regulatory concerns won’t go away, said Gillian Hadfield, a computer science professor with an appointment at Johns Hopkins’ School of Government and Policy. It’s necessary to build trust with people. That’s the case in other industries, like restaurants and school systems, she said.
“We’re all willing to continue to participate in this incredibly complex society we live in where somebody else grows our food, and somebody else sticks needles in us,” Hadfield said. “We are willing to do it on a mass scale because we believe that they have been vetted.”
That expectation is missing in AI right now. There isn’t a check on the system or testing in place.
Plus, despite Trump’s own belief that prior federal oversight “hampered the private sector’s ability to innovate in AI,” Hadfield explained that companies don’t have to compromise innovation for regulation. She’s seen many firms look for regulatory structures so they can better manage risks.
“I think effective regulation of AI,” Hadfield said, “provides us with confidence.”