
As AI ethics are debated, this Google expert says: Consider who’s acting as ‘steward’ of the technology

"The more perspectives, the more communities, the more cultures are brought into this conversation, the better the guardrails," Carolyn Yap, director of AI Practice, said during PACT's AI-focused Phorum conference. Plus, three software leaders discuss how their companies are using the tech.

Tristan Handy of dbt Labs, Arlen Shenkman of Boomi, John Cowan of Next Wave Partners and Carolyn Yap of Google speak at PACT's Phorum conference. (Photo by Paige Gross)
I walked into an artificial intelligence-focused conference Wednesday with a recent report on my mind.

On Monday, The New York Times reported that Geoffrey Hinton, known to many as “the godfather of AI,” had quit his position at Google. He’d worked at the company for more than a decade, after he and two of his graduate students at the University of Toronto created technology in 2012 that became a foundation for major AI systems.

Hinton reportedly resigned from Google so that he could speak out about the risks of AI. He and other leaders in the space have lamented that generative AI has been used for misinformation, could be a risk to jobs and, potentially, a risk to humanity. The technologist’s first concern, the Times reported, is that the internet could be “flooded” with photos, videos and text that an average person couldn’t verify as true.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton said.

Critiques of artificial intelligence have been around nearly as long as the technology itself. But nowadays, AI is woven into almost every facet of life, from ChatGPT to software-based business analytics to medical tools. As Google’s director of AI Practice, Carolyn Yap, said during a Philadelphia Alliance for Capital and Technologies (PACT) panel on Wednesday:

It’s “everything and nothing at the same time.”

The business value — and ethics — of AI

During Yap’s panel, Tristan Handy, founder and CEO of Philly’s dbt Labs; Arlen Shenkman, president and CFO at Berwyn’s Boomi; and Raleigh-based John Cowan, founding partner at Next Wave Partners, discussed how they use AI in their businesses.

Handy described dbt Labs’ offerings as helping companies make sense of all the data they store in “data lakehouses.” He sees AI doing a few things for his company: Large language models serve as useful tools for writing foundational code, and AI is getting close to making some sense of the data.

“Through my entire career, for 20-plus years, the holy grail has been, how do you ask data questions in natural language and get a trusted response back?” he said. “And I think that actually, we may not be that far away from doing that.”

dbt Labs, now with about 400 employees, has instituted an AI policy that distinguishes between the public data that tools like ChatGPT rely on and the private information held in the software applications a business might buy, per Handy. That means you wouldn’t take private information gained from internal sales meetings and load it into ChatGPT to summarize: “The way that we are using public models is with public data,” he said.


Shenkman said current AI models can improve Boomi’s productivity as it optimizes operations. And because the SaaS company manages a low-code cloud integration platform for enterprise businesses, he’s intrigued by what might help the company evolve its product roadmap.

But more broadly, regarding AI and software, he’s wondering: “How do we develop code?” Shenkman said. “How do we develop products? How do we enhance the way we utilize applications today? How do we gather information? How do we gain insights? And I think all of that will have an enormous impact far broader than simply data … as we continue to change and evolve in terms of how we integrate generative AI in our technology.”

Cowan said he doesn’t work with one particular company or industry, but rather with companies that serve the “era of autonomy.” It’s given him and his business partner a front row seat to what startups and large industrial enterprises, as well as the government, are trying to achieve.

“Through that work, what we’ve observed is the greatest impact of AI, broadly speaking, is the marriage between these large language models and AI, distributed machine learning generally speaking, and the machine economy,” Cowan said. “We think the ultimate use case or applicability of AI is when we start to infuse these systems with real-life machines.”

Shenkman raised concerns about data privacy and security, saying he takes a “sandbox approach” to AI: moving slowly and aiming to figure out a technology before applying it widely.

“I think it’s really important that we recognize that, you know, this is an incremental technology change, and it’s certainly impactful, but we have to understand what we’re doing and how we’re making decisions in the longer term,” he said. “And obviously, what we do with that data and the security of that data and the regulations around data and the complexity of that data are going to all drive how we approach the problem.”

Stewards of AI

Toward the end of their conversation, I asked Yap whether Hinton’s departure from Google, and his public messaging about AI’s dangers, had sat in the back of her mind throughout the panel. She told me, and the other attendees, that she was actually heartened by it.

Yap likened AI and its dangers to other things in society, like cars and even guns. It’s the users and the lack of protections in place that make them dangerous, she said.

“Perhaps [Hinton] being out there, we might have that steward and that shepherd in this space, which is very much needed,” she said. “I actually don’t see it as a bad thing. Because it’s the same thing with cars, right? What if you said, you know, do we self-regulate speeds? A car can be used as a weapon. As we all know, we’ve all seen vehicles being used to harm communities and societies. And I hate to say it, it’s the same thing with guns, right?”

“So I think it has to be a stewardship — and it requires all of us, because otherwise there’d be too much bias in the system, if it’s only being stewarded by a very select set of people,” Yap continued. “So the more perspectives, the more communities, the more cultures are brought into this conversation, the better the guardrails, the systems, the safeguards, the policies, and even the responsibility matrixes can be.”
