
Responsible AI development in the shadow of profit is a balancing act, these experts say

To counter the limits of self-regulation, tech equity pros propose the establishment of a professional body of independent algorithmic auditors.


In an era defined by rapid advances in artificial intelligence, two significant tensions emerge: skepticism toward AI developers’ ability to create safe, secure and trustworthy systems, and the conflict between ethical development and the pursuit of profit from private investment in innovation.

This dichotomy raises a crucial question: Can we build a future in which AI and humans live together, and in which safety, security, trust and profitability coexist in harmony for the benefit of all?

A conflict of interest for AI developers

The reliance on self-regulation and self-auditing in AI development, as demonstrated by recent voluntary commitments from leading AI companies to manage risks posed by AI, is fraught with conflicts of interest.

For instance, when AI developers prioritize rapid market deployment over rigorous ethical scrutiny, they may overlook potential biases across the AI development, deployment and monitoring stack. Such biases could perpetuate systemic inequities, as seen when facial recognition technologies demonstrate racial bias, when generative AI chatbots discriminate against holders of housing choice vouchers, or when a software provider uses AI to recommend rents that maximize profits at renters’ expense.

And for communities that are minoritized because of race, ethnicity, religion, gender, sexual orientation, gender identity, immigrant status or disability, the harms from these automated systems are more severe and more frequent. This conflict between the pursuit of profit and the safe, secure and trustworthy development and use of AI undermines public trust in AI systems.

November’s sudden firing and rehiring of CEO Sam Altman at OpenAI — the maker of ChatGPT, one of the most prominent AI advancements of the past two years — tests the assumption that profit-driven development and use of AI systems can coexist with aspirations for safe, secure and ethically sound AI, in line with the principles outlined in the White House AI Bill of Rights and the recent Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The episode highlights the challenges of ensuring that AI development aligns with societal values and regulatory expectations.

The politicization of corporate governance and the pursuit of profit maximization further exacerbate these challenges. Companies might display a superficial commitment to ethical AI principles while primarily focusing on profit margins.

The limits of self-regulation in AI development

While self-regulation can help mitigate some of AI’s harmful societal impacts, it has clear drawbacks. It lacks binding force and leaves industry free to shape society’s present and future with AI without prioritizing the technology’s impact on communities.

This approach casts doubt on the sincerity of voluntary commitments to develop and deploy AI safely and securely. For instance, if a company prioritizes shareholder returns over consumer safety, it may rush an AI product to market without thorough testing, potentially causing irreparable harms at scale.

While AI principles are useful guideposts, we need more transparency and accountability mechanisms from AI companies. Trust in companies to self-regulate is further eroded by the prospect of AI systems evolving into digital agents that influence societal actions.

AI systems and their underlying algorithms do not account for the preexisting inequalities experienced by communities of color, producing outcomes that perpetuate current injustices. Without independent oversight and rigorous enforcement of trustworthy frameworks, these AI agents could exacerbate existing societal inequities and injustices or, in a worst-case scenario, craft rules that benefit the agents’ creators while further alienating the rest of society. The possibility that AI could evolve to view human governance as redundant adds urgency to this issue.

Recent developments at OpenAI reinforce the notion that internal and external security testing, information sharing about vulnerabilities, and protections for proprietary information are insufficient safeguards. The revelation that critical information can be withheld even from a company’s board of directors underscores the need for greater transparency and accountability in AI development, deployment and monitoring.

A proposed solution: independent algorithmic auditors

In response to these challenges, a radical shift is needed. We propose the establishment of a professional body or a non-statutory agency of independent algorithmic auditors.

To have a meaningful, immediate effect in protecting all communities, audits must be independent, public, recurring and backed by penalties for noncompliance. We suggest that these auditors, assigned to AI companies through a double-blind process, have fiduciary duties prioritizing safety from harms, security from threats and consumer trust in AI.

Such a body would ensure that AI development is not left solely to the discretion of the companies that stand to profit from it, but instead seeks to maximize AI’s benefits for all. This approach would introduce a necessary layer of accountability and oversight, ensuring that AI advancements serve the broader interests of society while respecting civil rights, ethical norms and governance principles.

The complex interplay of technology, ethics and governance in AI development necessitates a nuanced, multifaceted approach. The creation of an independent body of algorithmic auditors represents a significant step toward a balance that safeguards the public interest while allowing for innovation and profitability in the AI sector.

This is a guest post by Dr. Michael Akinwumi, the chief responsible AI officer for the National Fair Housing Alliance, and Dr. Dominique Harrison, an independent tech equity expert.
