If your child is already using ChatGPT or Instagram, here’s the uncomfortable truth about what that means for their privacy. 

Many platforms set a minimum age of 13, and their Terms of Service (ToS) do not allow children under 13 to sign up.

But because the companies do little to verify ages, a child who joins anyway has their information treated just like an adult's. That data can be used for targeted ads and sold to third parties.

There is one federal law in particular that is focused on protecting the privacy of children under 13 online: the Children’s Online Privacy Protection Act (COPPA). 

But it doesn’t protect kids as much as people think. There’s a difference between prohibiting children and protecting them, and most major tech and AI companies only do the former.

Here’s what it is and why it matters.

What is COPPA?

Before COPPA, there was little stopping businesses from treating children as easy targets for data harvesting. The law forces companies to treat children’s data with special care or face real consequences.

The Federal Trade Commission (FTC) enforces COPPA. The law protects children from exploitation, empowers parents to be gatekeepers of their children’s information and creates accountability for the companies that collect it.

The broader philosophy is that children deserve protection from commercial exploitation. 

There is something fundamentally wrong with treating a seven-year-old the same way you’d treat an adult consumer. It’s the digital equivalent of laws restricting advertising to children or requiring parental consent for medical procedures.

Kids can’t meaningfully consent to data collection because they don’t grasp concepts like data monetization, targeted advertising or identity theft. They’re also more susceptible to persuasive design tactics.

COPPA shifts control to parents, who can make informed decisions about what data collection is appropriate for their family.

Why should you care?

From a parent’s perspective, COPPA compliance signals something specific: A company has made deliberate choices to treat your child differently than an adult user.

When a company follows COPPA, it has implemented verifiable parental consent. This means you actually had to do something to authorize your child’s access, not just hope they checked a box honestly during sign-up. 

You have the right to review what data has been collected, delete it and revoke consent at any time. 

The company can’t condition your child’s participation on collecting more data than necessary for the service to function.

What are the Big Tech companies doing?

Many sites impose a minimum age requirement of 13 years old. In many cases, this is explicitly so that they do not have to follow the COPPA provisions. 

Anyone younger than 13 who uses these sites is violating the ToS and isn’t supposed to be there, not that the companies are checking or doing much to prevent underage users from signing up anyway.

As of January 2026, the biggest websites are still relying on this age limit, even though it is rarely enforced in practice. Although the minimum age for creating an account is 13, 38% of tweens (ages 8 to 12) report using social media.

Facebook and Instagram, both owned by Meta, prohibit children under 13 from creating accounts in most countries. Enforcement is largely through a self-reported birthdate and some AI detection. 

The ToS for OpenAI, which runs ChatGPT, explicitly states: “Minimum age. You must be at least 13 years old or the minimum age required in your country to consent to use the services. If you are under 18, you must have your parent or legal guardian’s permission to use the services.”

No major AI chatbot provider has built a version of its service designed specifically for children.

Google’s YouTube is more nuanced. YouTube does not allow children under the age of 13 to create an account. However, Google explicitly created YouTube Kids as a COPPA-compliant alternative. 

The app provides a version of the service oriented solely towards children, with curated selections of content, parental control features and filtering of videos deemed inappropriate for children under 13. Google also offers supervised accounts through Family Link for children under 13.

When a company sets its age floor at 13 with self-declaration (like the above examples), it’s essentially treating child usage as outside its responsibility. It’s legal risk mitigation, not child protection.

The practical implication

If you didn’t have to do anything to prove you’re a parent (like verify your identity, provide payment info or receive a confirmation), then the platform isn’t actually protecting your child. 

It’s checking a legal box.

When evaluating any service for your kid, search the privacy policy for “COPPA” or “under 13.” 

If it just says “not intended for children under 13,” that’s a disclaimer, not a protection. And if your child already has a ChatGPT or Claude account, they very likely lied about their age to create it, which means there are no parental controls, no data protections and no recourse if something goes wrong.

This landscape, however, may shift significantly in the next year.

The good news: The FTC recently updated COPPA for the first time since 2013 and is actively investigating AI chatbots’ impacts on children. Enforcement is ramping up, with recent settlements in the tens of millions of dollars. Concerns have been raised about chatbots encouraging self-harm and having inappropriate conversations with minors, and age verification is a priority. Parents may soon have more tools and transparency.

The bad news: In the meantime, the burden of protection still falls on the parents.