
How Meta, Snap, and TikTok Enable the Buying of Audiences – And What This Means for Children

In today’s digital world, where billions of users interact with social media platforms like Meta (formerly Facebook), Snap, and TikTok, data is the most valuable currency. This data is harvested and analyzed to serve advertisements that are meticulously tailored to users, based on their demographics, behaviors, interests, and online activity. For businesses and marketers, this is a dream—targeted advertising maximizes the efficiency of their ad spend and allows them to reach precisely the audience most likely to engage with their products or services.

However, this system also raises a serious ethical concern: these platforms allow advertisers to buy audiences that include children through the same targeting mechanisms. Children, who are often vulnerable to influence and may not have fully developed critical thinking skills, can be exposed to inappropriate or harmful content. The problem is exacerbated by the fact that the ad system is built for one purpose: to maximize revenue. And unfortunately, children represent a lucrative segment of the market.

In this blog post, we’ll explore how Meta, Snap, and TikTok allow advertisers to buy audiences, how they target children, and the broader implications for parents and policymakers.


How Audience Targeting Works

To understand how children can be targeted, it’s crucial to look at how platforms like Meta, Snap, and TikTok operate in terms of audience buying. These platforms offer a powerful advertising system where marketers can essentially purchase access to specific groups of users based on:

  1. Demographics: This includes basic factors such as age, gender, location, and language. Marketers can filter their audience to include or exclude people based on these factors, allowing them to craft highly specific campaigns.

  2. Interests: These platforms collect extensive data on users’ online activity, tracking the pages they follow, the content they engage with, and the ads they click on. This data builds a profile that represents users’ interests, from fashion to fitness to gaming. Advertisers can target users based on these behavioral profiles.

  3. Behavior: Platforms like Meta and TikTok analyze users’ behaviors to classify them into groups. For instance, TikTok might categorize users based on how much time they spend on the app, what type of content they interact with (e.g., comedy videos, political content), or even what kinds of videos they’ve shared or commented on.

  4. Connections: Platforms can also target users based on their social connections—who they follow or engage with. This is part of the reason why viral challenges or trends gain so much traction on TikTok: the more someone interacts with certain types of content, the more likely it is that similar content will be promoted to them, and by extension, to their friends.

Through these targeting mechanisms, advertisers can purchase highly tailored audiences, including children. This is where the problem arises.
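To make the mechanics concrete, below is a minimal sketch of what an audience definition can look like when it is submitted through an ad platform's marketing API. The field names loosely follow Meta's publicly documented targeting spec (age_min, geo_locations, interests, and so on); other platforms use different names for the same concepts, and the interest and behavior IDs here are placeholders, so treat this as an illustration rather than a working integration.

```python
# Minimal sketch of an audience definition for an ad set. Field names loosely
# follow Meta's documented targeting spec; IDs are placeholders, not real values.

targeting_spec = {
    # Demographics
    "age_min": 13,                                   # 13 is the stated platform minimum in most regions
    "age_max": 17,
    "geo_locations": {"countries": ["US", "GB"]},

    # Interests inferred from pages followed, content engaged with, and ads clicked
    "interests": [
        {"id": "<interest_id_gaming>", "name": "Video games"},
        {"id": "<interest_id_fashion>", "name": "Fashion"},
    ],

    # Behavioral segments and social-graph connections
    "behaviors": [{"id": "<behavior_id>", "name": "Frequent app users"}],
    "connections": ["<brand_page_id>"],
}

def build_ad_set(name: str, daily_budget_cents: int, targeting: dict) -> dict:
    """Assemble an ad-set payload around a targeting spec. Real APIs also
    require campaign IDs, optimization goals, and billing events."""
    if targeting.get("age_min", 0) < 13:
        raise ValueError("Platform policies forbid targeting users under 13")
    return {"name": name, "daily_budget": daily_budget_cents, "targeting": targeting}

ad_set = build_ad_set("teen_sneaker_push", daily_budget_cents=5_000, targeting=targeting_spec)
```

Notice that the only age gate in this sketch is the advertiser-declared age_min; everything else is driven by inferred interest and behavior data, which is exactly why under-13 users who misstate their age end up inside these pools.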


Targeting Children: The Ethical Dilemma

While platforms like Meta, Snap, and TikTok set a minimum age (13 in most regions), they have struggled to enforce it. Children younger than 13 can easily create accounts, often with little to no verification, and many teenagers under 18 use these platforms legitimately, which makes minors a prime target for advertisers. Here’s how the targeting process works in practice for children:

  1. Interest-Based Ads for Kids: Children who are active on platforms like TikTok and Snap are constantly interacting with content—watching, liking, commenting, and sharing. This data is collected to create an interest profile, even for underage users. Despite platforms claiming to restrict advertisers from directly targeting children, the sheer volume of content and engagement from younger users means that children inevitably become part of advertisers’ audience pools.

  2. Lookalike Audiences: Meta, in particular, allows advertisers to use “lookalike audiences” to target people who resemble their existing customers or followers. If an advertiser has a significant following among teenagers, the platform will automatically create a similar group of users who match the interests and behaviors of that original group, effectively widening the net to include more children (see the sketch after this list).

  3. Influencer and Sponsored Content: Another way advertisers can target children is through influencers. On TikTok and Instagram (owned by Meta), for instance, influencers with large child or teen followings are often paid to promote products in their content. This creates a feedback loop: children see influencers they admire using a product, which drives interest and engagement, and the platforms’ algorithms then serve more of these ads or similar influencer content to that child’s feed.

  4. Ads for Inappropriate Content: Though platforms are supposed to ensure that certain types of ads (e.g., alcohol, gambling, or adult products) are restricted based on age, the system is not foolproof. Since targeting is based on behavior and interest data rather than strict age verification, children may be exposed to mature ads that were never intended for them. This is particularly concerning when it comes to sensitive issues such as body image, mental health, and sexual content, which can have a lasting impact on a child’s development.
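The lookalike mechanism described in point 2 above can be pictured as a single API request. In Meta’s Marketing API, a lookalike audience is created from a seed custom audience plus a lookalike_spec that names a country and an expansion ratio; the structure below follows that public documentation in spirit, but exact field names and accepted ratio ranges vary by API version, so read it as a conceptual sketch rather than a drop-in integration.

```python
# Conceptual sketch of a lookalike-audience request. In Meta's Marketing API this
# is roughly a POST to the ad account's custom-audiences edge with subtype
# "LOOKALIKE"; Snap and TikTok expose equivalent "similar audience" features.

def build_lookalike_request(seed_audience_id: str, country: str, ratio: float) -> dict:
    """Expand a seed audience to the top `ratio` share of a country's users who
    most resemble it. If the seed skews teenage, the expansion usually does too."""
    if not 0.01 <= ratio <= 0.10:
        # Ratios are typically expressed as 1-10% of the target country's users.
        raise ValueError("ratio should be between 0.01 and 0.10")
    return {
        "name": f"lookalike_{seed_audience_id}_{country}_{int(ratio * 100)}pct",
        "subtype": "LOOKALIKE",
        "origin_audience_id": seed_audience_id,
        "lookalike_spec": {"country": country, "ratio": ratio},
    }

# Example: widen a brand's existing, teen-heavy follower audience to 5% of US users.
payload = build_lookalike_request("<seed_custom_audience_id>", "US", 0.05)
```

The point is that the expansion is purely statistical: no one ever asks whether the seed audience, or the people who resemble it, are minors.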


The Incentive to Treat Everyone as Adults

Why do these platforms continue to allow this system to operate, knowing full well that children are affected? The answer lies in their business model.

  • Revenue from Ads: The overwhelming majority of revenue for platforms like Meta, Snap, and TikTok comes from advertising; Meta alone reported roughly $131 billion in ad revenue for 2023. The ability to target specific, highly engaged audiences is what drives this revenue, and children are part of that equation. By allowing advertisers to purchase audiences based on interest and behavior rather than strict age controls, these platforms maximize their earnings potential.

  • Audience Engagement: Children and teens are some of the most active users on these platforms. They spend hours scrolling, liking, and sharing content, making them highly valuable to advertisers. To keep them engaged, platforms rely on algorithms that serve increasingly personalized content, which may include ads, further blurring the line between entertainment and advertising.

  • Plausible Deniability: Many platforms adopt a stance of plausible deniability. By claiming that they do not knowingly target children (and requiring users to be at least 13), they avoid taking full responsibility for how their algorithms inevitably serve content to younger audiences. This approach lets platforms remain nominally compliant with regulations like the Children’s Online Privacy Protection Act (COPPA) in the U.S. while continuing to profit from underage users.


Parents, Privacy, and Protection

For parents, this system presents a significant challenge. Even if they want to protect their children from inappropriate ads or content, it is incredibly difficult to monitor what their child is exposed to on platforms like Meta, Snap, and TikTok. The platforms’ use of personalized algorithms, influencer marketing, and interest-based ad targeting creates a complex ecosystem that parents have little control over.

Parents often rely on parental controls, but these tools are usually reactive rather than proactive: they filter explicit content after the fact rather than preventing children from being exposed to more subtle, harmful advertising or influencer content. Moreover, platforms don’t offer fine-grained controls that would let parents block particular categories of ads, leaving children vulnerable to exposure.


The Path Forward: Can Age Assurance Help?

One potential solution to this problem lies in age assurance technology. Platforms like Meta, Snap, and TikTok could implement stricter age verification methods, ensuring that younger users are not exposed to adult content or targeted by advertisers. By working with age assurance APIs that minimize data collection but still verify users’ ages, platforms could create safer environments for children without sacrificing user privacy.
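One way to picture such a gate is a thin check between the ad-delivery pipeline and the user: before a restricted ad is rendered, the platform asks an age assurance service for a coarse signal (an age band, not an identity) and suppresses the ad if the viewer is likely a minor. The client class, response fields, and thresholds below are hypothetical placeholders for whichever privacy-preserving age assurance API a platform might actually integrate.

```python
from dataclasses import dataclass

# Hypothetical sketch of an age-assurance gate in the ad-delivery path. The
# AgeAssuranceClient and its fields are invented for illustration; real services
# (facial age estimation, ID checks, carrier signals) differ in detail.

@dataclass
class AgeSignal:
    likely_minor: bool   # e.g. estimated age under 18
    confidence: float    # 0.0 to 1.0

class AgeAssuranceClient:
    def check(self, user_id: str) -> AgeSignal:
        # A production implementation would call an external, data-minimizing
        # API that returns only an age band, never raw identity documents.
        raise NotImplementedError

RESTRICTED_CATEGORIES = {"alcohol", "gambling", "adult", "weight_loss"}

def should_serve_ad(ad_category: str, uses_interest_targeting: bool, signal: AgeSignal) -> bool:
    """Suppress restricted-category and interest-targeted ads for likely minors."""
    if signal.likely_minor and signal.confidence >= 0.7:
        return ad_category not in RESTRICTED_CATEGORIES and not uses_interest_targeting
    return True
```

The design choice that matters here is data minimization: the gate only needs a yes/no signal with a confidence score, not a birthdate or a scanned ID, which keeps the privacy cost of protecting children low.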

However, this requires a fundamental shift in the way these platforms operate. They would need to prioritize user safety over revenue, and implement systems that are transparent and accountable—something that, so far, they have been hesitant to do.


Protecting Children in a Data-Driven World

As the digital advertising industry continues to grow, the ethical responsibility of platforms like Meta, Snap, and TikTok to protect children becomes ever more important. The current system lets advertisers buy audiences with minimal restrictions, and the cost is paid by children who are exposed to inappropriate or harmful content. For parents, regulators, and society as a whole, the challenge is clear: push these platforms to adopt more responsible advertising practices and ensure that children get the protection they deserve online.