In March, health technology startup HeHealth debuted Calmara AI, an app proclaiming to be “your intimacy bestie for safer sex.” The app was heavily marketed to women, who were told they could upload a picture of their partner’s penis for Calmara to scan for evidence of a sexually transmitted infection (STI). Users would get an emoji-laden “Clear!” or “Hold!!!” verdict — with a disclaimer saying the penis in question wasn’t necessarily free of all STIs.
When Ella Dawson, a sex and culture critic, first saw Calmara AI’s claim to provide “AI-powered scans [that] give you clear, science-backed answers about your partner’s sexual health status,” her reaction could be easily summed up: “big yikes.” She raised the alarm on social media, voicing her concerns about privacy and accuracy. The attention prompted a deluge of negative press and a Los Angeles Times investigation.
The Federal Trade Commission was also concerned. The agency notified HeHealth, the parent company of Calmara AI, that it was opening an investigation into potentially fraudulent advertising claims and the company’s privacy practices. Within days, HeHealth pulled its apps off the market.
HeHealth CEO Yudara Kularathne emphasized that the FTC found no wrongdoing and said that no penalties were imposed. “The HeHealth consumer app was incurring significant losses, so we decided to close it to focus on profitability as a startup,” he wrote over email, saying that the company is now focused on business-to-business projects with governments and NGOs mostly outside the United States.
More and more AI-powered sexual health apps are cropping up, and there’s no sign of the trend slowing. Some of the new consumer-focused apps are targeted toward women and queer people, who often have difficulty getting culturally sensitive and gender-informed care. Venture capitalists and funders see opportunities in underserved populations — but can prioritize growth over privacy and security.
The 19th spoke with sexual health educators and computer science researchers about how best to evaluate claims about AI, especially when it comes to sensitive health applications. They pointed to three main areas: marketing, medical claims and privacy policies.
Health technology has boomed since the onset of the COVID-19 pandemic, when telehealth laws were relaxed to allow remote care. Some companies used that opportunity to innovate, Dawson said, but others turned out to be profit grabs that don’t prioritize privacy the way real doctors do.
Sexual health is a topic few people are fluent in, and there is a large societal stigma around STIs. People who are looking for STI tests are “a vulnerable, underserved population that is often in an emotional state and not able to make clear decisions,” Dawson said.
Services that substitute machine learning for human interaction are attractive because they allow people to take care of their concerns in the privacy of their own homes, Dawson said. People, especially young people, may not want to talk to someone — even a doctor — due to the shame of a potential STI exposure. “And that is a very dangerous dynamic to exploit for the startups that see an opportunity,” she said.
Dylan Baker, lead research engineer at the Distributed AI Research Institute, approaches claims about “artificial intelligence” — more accurately called machine learning systems in most cases — with skepticism.
In medical settings, doctors and researchers have used machine learning for a long time, Baker said. For example, a 2012 study in the American Journal of Roentgenology demonstrated how automated image detection systems could help radiologists diagnose lung nodules more efficiently. But those applications work because medical staff are trained in all the nuances of the models they are working with and know the limits of computation. That knowledge doesn’t extend to patients.
One of the first things to consider is whether an AI app can actually do what it claims. Be wary of technology that oversimplifies the problem, experts said. For example, many STIs are asymptomatic and can be confirmed only through lab tests.
Sexual health in particular requires specialized knowledge that even many doctors lack, said Emily L. Depasse, a trained sex therapist and sex educator. AI is usually marketed as being smarter than humans, but that’s not necessarily the case, Baker said. Any app promising clear results is something to be suspicious of.
That’s why it’s important to identify what data the machine learning model powering an app has been trained on. Many companies have research papers available on their websites detailing the model they are using. “You want to make sure that the data that your model is trained on matches your use case as effectively as possible,” Baker said. If app developers don’t use real pictures with a variety of lighting conditions, angles and skin tones to train their model, it may produce inaccurate results.
Training data was one of the areas where HeHealth drew criticism. In addition to training their model on actual photographs of penises with STIs, the engineers trained it on fabricated images created by layering pictures of infections over those of healthy penises.
Companies making commercially available AI products should audit their models for potential biases, because “the relationship between training data itself and bias in the outcomes isn’t always completely clear,” Baker said. Ideally, this information would be accessible on a company’s website, or within linked research papers, for potential customers to peruse.
The limits of an application should be communicated clearly in its instructions. Gender inclusivity can be an issue in health applications, said Maggie Delano, professor of engineering at Swarthmore College. For instance, Delano has done research on a weight-tracking app that asks users to choose a binary gender. It may seem like a small choice, but that information is used to calculate body fat percentage.
If an application has multiple inputs, Delano recommends trans folks try different options and go with what works best for their body. “One of the things I would love to see, that I have never seen, is an actual discussion of, ‘Hey, if you’re trans, this is the value you should use when you input into this algorithm,’” they said.
Delano also cited the adage that if you aren’t paying for a product, you typically are the product — in the form of personal data being sold or your attention being used for ads. That’s also why it’s important to read the privacy policy of any application thoroughly, as many AI models train on user input. There should be clear information about how the company stores and uses your personal health information after you interact with its product.
Consumer health applications aren’t necessarily covered by HIPAA, the best-known health privacy law, which generally covers health care entities, but other overlapping regulations could still apply. The FTC recommends reviewing what information an app will gather and how it will be shared.
Try to identify in the application’s terms what recourse you have if the company breaks its privacy promises, Baker said.
Marketing claims can be boastful and, in some cases, against the law. The U.S. Food and Drug Administration regulates medical devices, so it’s important to check if a given application has the agency’s approval. Dawson said this information is usually at the bottom of a webpage, and might include a disclaimer about how the service does not offer diagnosis. If so, it’s important to see if the marketing reflects that. A lot of advertising tries to anthropomorphize AI, Baker said, making it seem like it could replace a trained doctor when that is not the case.
Sexual health is serious, Depasse said, and she is personally wary of any infantilizing language in marketing copy. Using euphemisms or cutesy language — like “seggs” instead of “sex” to get around social media filters — reinforces stigma and can cover up a lack of expertise about the subject.
Tech founders should be collaborating with sexual health professionals, OB-GYNs, sex educators and people in reproductive health, Depasse said. Dawson recommended looking into the background of the founders: Are they fresh out of business school? Are they focused on raising capital? Is their team devoid of medical professionals? If so, those are indicators that the team is more concerned with sales than with creating a secure, accurate health product.
The educators and researchers interviewed by The 19th expressed disappointment and anger about how AI is marketed toward people with limited health care options.
As an AI ethics researcher, Baker also gets a little fired up seeing “predatory” apps on the market.
“Because I know people are struggling,” they said. “Health care can be hard to access and this is trying to fill a very real void.”