
AI news to watch for in 2024

Will 2024 be the year we start putting artificial intelligence in check?

AI is everywhere. (Technical.ly/Holly Quinn/made with SDXL 1.0)

If 2023 was the year that AI broke out, 2024 may be the year it’s reeled in.

One of the things that has been scary about generative AI since the launch of ChatGPT in November 2022 is the complete lack of regulation. Large language models are trained on our words, including social media posts, blog posts and articles. Image generators are trained on artwork by real people. None of what AI creates is copyrightable, and at this point, it probably shouldn’t be.

All of that has been such a mess that it distracts from the good that AI can potentially do. Regardless, AI is not going anywhere. As we look ahead to 2024, there are signs that generative AI’s Wild West period isn’t going to last.

What can we expect of AI in 2024? Here are some things to watch for:

The rise of the CAIO

In October, President Biden signed an AI executive order that requires, among other things, that all federal agencies appoint a chief AI officer (CAIO) to ensure the government uses AI in a way that aligns with eight guiding principles:

  • Safety and security
  • Innovation and competition
  • Worker support
  • Consideration of AI bias and civil rights
  • Consumer protection
  • Privacy
  • Federal use of AI leading by example
  • International leadership

All agencies are required to develop an AI strategy and an AI risk management framework, as well as guidelines for their use of generative AI.

The executive order only applies to government agencies, not private companies, but the CAIO position is increasingly common across industries and is expected to become a standard fixture of the C-suite.

The AI Literacy Act

On Dec. 15, Rep. Lisa Blunt Rochester (D-Delaware) and Rep. Larry Bucshon (R-Indiana) introduced the Artificial Intelligence (AI) Literacy Act. The bill seeks to amend the Digital Equity Act of 2021 to include AI literacy under the digital literacy umbrella. This step would make AI skills education part of the curriculum in K-12 schools, colleges and workforce development programs.

Making AI education widely accessible is especially important when you consider that AI will take over a lot of low-paying jobs. And while AI will also create more jobs, those jobs won’t be within reach of people who don’t understand the technology. (Yes, this is about racial equity, but it also impacts rural areas with limited access to digital literacy resources.)

“It’s no secret that the use of artificial intelligence has skyrocketed over the past few years, playing a key role in the ways we learn, work, and interact with one another. Like any emerging technology, AI presents us with incredible opportunities along with unique challenges,” Blunt Rochester said in a statement. “That’s why I’m proud to introduce the bipartisan AI Literacy Act with my colleague, Rep. Bucshon. By ensuring that AI literacy is at the heart of our digital literacy program, we’re ensuring that we can not only mitigate the risk of AI, but seize the opportunity it creates to help improve the way we learn and the way we work.”

Fake news will get worse

In the future, AI may be able to eradicate, or at least reliably identify, disinformation on the internet; one team in Zurich has been working on the problem for several years. But with the generative AI explosion, disinformation has been getting worse. As of Dec. 18, NewsGuard had identified 614 AI-generated news sites operating with little to no human oversight, many of which spread false narratives, from celebrity death hoaxes to political lies.

As out of control as it seems, this kind of gaming of technology has happened before. When people figured out in the late ’90s that search engines ranked pages algorithmically, they gamed the system so thoroughly that, by the late aughts, the internet almost looked like it had been rendered useless. Back then it wasn’t AI but human-powered content farms (today’s equivalent would be the food and craft video farms on TikTok). Remember when every single Google search brought up nothing but eHow links and scammer sites while quality content got buried? If your answer is no, that’s because those farms were eventually brought under control, and there’s hope that AI content farms can be controlled, too.

Which leads us to:

The Google algorithm will change

The Google algorithm changing isn’t big news. It happens several times a year, and unless you’re actively trying to swindle people using the search engine, you probably don’t notice it. Back in 2011, Google launched the Panda Update, which took out SEO-manipulating content farms and webspammers and gave high rankings to pages with high-quality content.

Today, with generative AI featuring prominently on Google and Bing, the algorithm has to get better at identifying AI-generated content and ranking it low, just as it does with black hat SEO pages and keyword-stuffed content. Given the rate at which AI-generated disinformation is flooding the internet, something on the scale of Panda is warranted, not just for search engines but for social media, too.

If some online publishers get their way, the relatively new feature of having a chatbot answer your question when you type in a query may change as well. This month, Arkansas-based publisher Helena World Chronicle filed a class action lawsuit against Google and its parent company Alphabet in the US District Court for the District of Columbia, accusing them of diverting clicks by having the chatbot summarize publishers’ content, which reduces the number of people who click through to read the original source.

Copyright lawsuits may change the game

Font designer Matthew Butterick’s billion-dollar lawsuit against several major tech firms behind the AI programming tool Copilot, including OpenAI, Microsoft and GitHub, as well as painter Kelly McKernan’s lawsuit against Stability AI and Midjourney, take aim at copyright. Artists see their work morphed into IP-free interpretations of prompts that sometimes even invoke the artists by name.

If those suits are successful over the next year, there could be big changes to the way generative AI models can be trained. What that means and how it will look remains to be seen, but the intellectual property issue is a huge one, and it needs to be dealt with as AI seeps into most facets of our lives.
