
Investors want to hear how companies are using AI before handing over cash

Being clear about jargon, understanding ethical challenges and keeping up with fast-changing regulations are considered positive signs.

(L to R) Julia Fish, Zoe Weinberg, Anita Dorett and Marian Macindoe. (Sarah Huffman/Technical.ly)

When hosts took an informal poll of about 30 people at an AI and investing panel, the results were split. About half the audience raised their hands to say they were excited about AI, while the other half were concerned.

At the Total Impact Summit earlier this month, the annual conference hosted by local nonprofit ImpactPHL, investors said that before handing money to a firm, they consider how AI aligns with the company’s values and how the technology is being integrated. Both AI’s potential to improve the world and the harm it could cause factor into that calculation. Getting on the same page about how a business plans to use AI and staying current with regulations make a company more compelling to investors, according to panelists.

“I’m excited about the opportunity that AI has to help us deliver returns to our investors and also the ability to improve the world,” Marian Macindoe, managing director of sustainable investment strategy at asset manager Parnassus Investments, said. “Automating dangerous tasks or repetitive tasks, making us all more productive and creative … and mitigating human errors and biases.”

Defining AI jargon keeps all stakeholders on the same page

With so many buzzwords in the artificial intelligence space, it can be hard to determine a company’s intentions. Knowing the differences in the terminology can help investors better understand the values of companies they’ll potentially work with, Zoe Weinberg, founder and managing partner of early-stage venture fund Ex/Ante, said. That includes understanding what certain terms mean and which niches companies are focused on.

For example, the terms ethical AI and responsible AI both stem from concerns about bias and discrimination. The terms AI safety and AI alignment, by contrast, are more concerned with “existential risk” and the technology’s potential to harm humanity. Another term, trustworthy AI, refers to users’ trust that the technology won’t exploit their data or personal information, Anita Dorett, director of the human rights advocacy nonprofit Investor Alliance for Human Rights, said.

Generally, though, the terms used to describe AI vary in different contexts, causing confusion.

“We want to go to something that is a standard that is accepted across the globe,” she said. “And is also adopted not just by individual users or an investor or a government and corporations, but everybody.”

Until such a standard becomes more universal, investors can start by getting more specific during the due diligence process. Three areas to consider are whether the company’s product could be used maliciously, whether the startup’s business model supports good social values and whether the founding team’s values align with the investors’, Weinberg said.

Legal and government guardrails are constantly changing what companies can do

Because AI is still evolving as a technology, the guidelines around it are shifting, too. As investors choose who to work with, it’s important to consider how companies stay on top of the regulatory, legal, financial and reputational risks of incorporating AI into their work, said Macindoe.

This includes thinking about AI regulation at the local, state and federal levels. In Pennsylvania, for example, Governor Josh Shapiro signed an executive order in the fall of 2023 outlining guidelines for generative AI use.

Long-term thinking can help companies prepare for these changes as they go from theoretical to real. As AI technology develops, companies will be tempted to think in the short term to keep up with trends, Macindoe said, but it’s important for companies to develop and deploy AI tools carefully to avoid risks down the line.

“We want to make sure the companies that we’re invested in are thinking about this ahead of time,” Macindoe said. “We want to see safety, privacy, accuracy, transparency, accountability, and non-discrimination.”

Sarah Huffman is a 2022-2024 corps member for Report for America, an initiative of The GroundTruth Project that pairs young journalists with local newsrooms. This position is supported by the Lenfest Institute for Journalism.
