
FBI leaders say more workers need to be trained in AI

The agency is adding a “Shark Tank”-style contest to spur new ideas — and is already using natural language processing to help maintain its tip line.

Kathleen Noyes, the FBI’s section chief of next-generation technology and lawful access (left), and General Dynamics Information Technology President Amy Gilliland. (Courtesy GDIT)

As the latest artificial intelligence tools become more readily available, leaders at government agencies like the Federal Bureau of Investigation are trying to keep hackers from succeeding with those same tools.

But this is no easy feat. Because AI is so accessible, it is both more dangerous and more complex than the technologies that came before it, government and technology leaders said at an early June event that consulting firm General Dynamics Information Technology (GDIT) hosted in DC.

Anyone with a computer can exploit AI, and it’s making adversaries far more efficient, they explained.

“You pretty much can go into a generative AI tool and say, ‘Hey, help me not write malicious code. Do the reverse,’” said Kathleen Noyes, the FBI’s section chief of next-generation technology and lawful access. “And suddenly: ‘Here are all the things you would want to avoid if you wanted to write malicious code.’”

In addition to malicious code, AI can be used to create deepfakes, develop phishing scams and spread false information, panelists explained. The World Economic Forum, an international NGO, backs the speakers’ concern: it classifies AI as a major emerging global risk, particularly through its use to distort information.

To stay ahead of bad actors at home and abroad, the leaders emphasized, workers need to be well-versed in AI. And when agencies develop these tools themselves, transparency is necessary, too.

AI needs to be key to workforce training

There’s a pressing demand for workers with AI skills across the public and private sectors. Because of that, agencies need to make an effort to “upskill” their workers, Noyes said.

“We have to invest in our workforce,” Noyes said at the event. “We have to get everyone a baseline of knowledge.” 

FBI leadership is currently experimenting with a “Shark Tank”-style program, named after the ABC show, to foster innovation within the agency. Under this program, employees have 90 days to develop and prove a concept. At the end of that period, the agency evaluates its cost, the skills it requires and how it would fit into existing processes, explained the FBI’s interim CTO, David Miller.

“It becomes this huge educational opportunity,” Miller said. “It allows us to have really strategic innovation in doing outcomes.”

The FBI has also been rolling out different AI use cases, including using natural language processing models to help maintain the agency’s tip line. People still answer the phone while AI helps review the calls to see if anything was missed. 

A human listener can overlook key information, and the models are an example of AI “making us so much safer,” said Cynthia Kaiser, the deputy assistant director of the FBI’s cyber division.
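To make the idea concrete, here’s a minimal sketch of a second-pass review, purely hypothetical and not the bureau’s actual system. A real deployment would use natural language processing models rather than the simple keyword matching shown here, and the watch terms and function name are invented for illustration.

# Hypothetical watch terms an analyst might care about.
WATCH_TERMS = {"weapon", "address", "threat", "deadline"}

def flag_missed_details(transcript: str, reviewer_notes: str) -> set:
    """Return watch terms that appear in the call transcript but are
    missing from the human reviewer's notes."""
    transcript_words = set(transcript.lower().split())
    noted_words = set(reviewer_notes.lower().split())
    return (WATCH_TERMS & transcript_words) - noted_words

# The caller mentioned an address and a deadline, but the notes only
# captured the threat itself -- the automated pass surfaces the gap.
transcript = "caller reported a threat and gave an address and a deadline"
notes = "caller reported a threat"
print(flag_missed_details(transcript, notes))  # {'address', 'deadline'}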

A transparent approach is essential

Justin Williams, the deputy assistant director for the agency’s information management division, said that when the FBI develops new AI tools, there needs to be an explanation of why each tool is being used and how it was created.

There’s always the chance that whatever the agency uses, AI or otherwise, will have to be defended in court, Williams said. An innovation may also end up in the news, which requires a clear public explanation.

Williams said he’s used several different generative AI tools on the job, but when he asks the platforms similar questions, he receives slightly different responses. That inconsistency needs to be noted, Williams said.
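That variability is expected behavior rather than a bug: generative models typically sample each next word from a probability distribution instead of always picking the single most likely one. The toy sketch below, with invented scores and standard temperature-scaled sampling rather than any particular vendor’s product, shows how the same prompt can yield different answers on repeated runs.

import math
import random

# Hypothetical next-word scores a model might assign after one prompt.
scores = {"yes": 2.0, "probably": 1.5, "no": 0.5}

def sample_next_word(scores: dict, temperature: float = 1.0) -> str:
    """Sample one word from a softmax over temperature-scaled scores."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores.keys()), weights=weights)[0]

# The same "prompt" run five times can produce different answers.
print([sample_next_word(scores) for _ in range(5)])
# e.g. ['yes', 'probably', 'yes', 'no', 'probably'] -- varies per run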

The FBI established an AI ethics council in 2021 to evaluate AI use cases, Noyes explained, and it has been a key part of the agency’s plan to remain transparent. The bureau assesses risks and prioritizes use cases where it’s clear AI could be useful.

“It can’t be a black box,” Noyes said. “We need some transparency and accountability for knowing when we’re invoking an AI capability.”
