Civic News

What CMU prof Rayid Ghani said while testifying at this Senate hearing on AI

"I'm here today because I believe that AI has enormous potential in helping us tackle critical societal problems that our governments are focused on."

Rayid Ghani testifies during a US Senate hearing on "Governing AI through Acquisition and Procurement." (Screenshot via hsgac.senate.gov video)

Artificial intelligence has been the focus of several federal government hearings this week, both behind closed doors and in the open.

Carnegie Mellon University (CMU) professor Rayid Ghani testified at the latter, a Thursday morning Senate hearing on “Governing AI through Acquisition and Procurement.”

Ghani is a distinguished career professor in CMU’s Machine Learning Department as well as the Heinz College of Information Systems and Public Policy. He also co-leads the university’s Responsible AI initiative.

You can read his submitted testimony here, or his verbal testimony below, which Technical.ly transcribed with the help of — you guessed it — AI, in the form of Otter.ai. Note that the Otter.ai transcription has been lightly edited for punctuation and clarity.

###

“Thank you, Chairman Peters, Ranking Member Paul, and other members of the committee. Thank you for hosting this hearing today and for giving me the opportunity to present this testimony.

As Chairman Peters mentioned, my name is Rayid Ghani. I’m a professor of machine learning and public policy at Carnegie Mellon, and I’m here today because I believe that AI has enormous potential in helping us tackle critical societal problems that our governments are focused on.

Much of the work I’ve done over the last decade has been in this space, working extensively with governments at the federal, state and local level, including helping them use AI systems to tackle problems across health, criminal justice, education, public safety, human services and workforce development, particularly in supporting fair and equitable outcomes. Based on my experience, I believe that AI can benefit every federal, state and local agency.

However, any AI system or any other type of system affecting people’s lives has to be explicitly designed to promote our societal values, such as equity, and not just narrowly optimized for efficiency.

I think it’s critical for US government agencies and policymakers to ensure that these systems are designed in a way that they do result in promoting our values. While the entire lifecycle of AI systems, from scoping to procurement to design, testing and deployment, needs guidelines in place that maximize societal benefits and minimize potential harms, there has been a lack of attention to the earlier phases of this process, particularly the problem scoping and procurement stages.

As Chairman Peters mentioned, many of the AI systems being used in government are not built in house. They are procured through vendors, consultants and researchers, which makes getting the procurement phase right critical. Many problems and harms discovered downstream can be avoided by a more effective procurement process. We need to make sure that government procurement of AI follows a responsible process and, in turn, makes AI vendors accountable for designing systems that themselves promote accountability, transparency and fairness.

Government agencies often go to the market to buy AI without understanding, defining and scoping the problem they want to tackle, without assessing whether AI is even the right tool, and without including the individuals and communities that will be affected. AI systems are not one size fits all. Procuring AI is first and foremost procuring a solution to a problem, and it should be assessed on its ability to better solve the problem at hand.

In that respect, procuring AI is not that different from procuring other technologies. There are a few areas where it is different.

One, AI algorithms are neither inherently biased nor unbiased, nor do they have inherent, fixed values. The design of these systems requires making hundreds and sometimes thousands of choices that determine the behavior of the system. If those choices explicitly focus on the outcomes we care about, and we evaluate the systems against those intended outcomes, the AI system can help us achieve what we want to achieve.

Unfortunately, today those decisions are too often left to the AI system developer, who defines those values implicitly or explicitly. The procurement process needs to define these goals and values very explicitly. AI requires that, society requires that, and the process needs to ensure that vendors address these goals appropriately in the system being procured and provide evidence of that.

Building responsible AI systems requires a structured approach, and the procurement process needs to set expectations and force transparency and accountability from vendors at each of these steps. That includes defining goals, translating them into requirements that vendors should design the system to achieve, and setting up a continuous monitoring and evaluation process, because the system will both change itself and have to live and function in an ever-changing world.

It is critical and urgent for policymakers to act and provide guidelines and regulations for procuring, developing and using AI in order to ensure that these systems are built in a transparent and accountable manner and result in fair and equitable outcomes for our society. As initial steps, here are some of my recommendations.

Number one, focus the AI procurement process on specific use cases, rather than general-purpose, one-size-fits-all AI, both to support the intended outcomes around that use case and to prevent harm through misuse.

Number two, develop common procurement requirements for AI and templates that government agencies can start from, which do not exist today.

Number three, create guidelines that ensure meaningful involvement of the communities that will be impacted by the AI system, right from the inception stage and continuing throughout.

And number four, lastly, create trainings, processes and tools to support the procurement teams within government agencies. As government teams expand their role and start procuring AI-augmented systems more regularly, they will need to be supported by increasing their capacity. To fulfill this role, I recommend creating a set of trainings, processes, collaboration mechanisms and tools to help them achieve that.

The overall goal behind these recommendations is to set some standards around procurement of AI by government agencies, and to support and enable agencies to implement those standards effectively and procure AI systems that can help them achieve their policy and societal goals. Thank you.”
