Online Safety

The Internet Isn’t Safe For Kids, But Keeping Them In A Bubble Will Backfire

A tech and parenting expert’s online safety advice might seem counterintuitive — but it will change your parenting for the better.

by Christian Dashiell

The current generation of children has been born into a world with fewer and fewer traces of life before the digital age. Much of their world is online. They complete class assignments, play video games with friends, FaceTime family members, participate in group chats, and stream entertainment as though it’s second nature. Because it largely is. The access that connected devices afford creates innumerable opportunities, but online activity also introduces an array of risks that make internet safety a challenge for parents.

Providing kids with devices that have online access can sometimes feel like an inevitable Faustian bargain, which understandably freaks many parents out. And in their attempt to utilize the convenience technology provides without their kids being swallowed whole by the infinite scroll or burned by exposure to dark and even nefarious content, parents often resort to comprehensive monitoring to keep their kids safe online.

But Devorah Heitner, Ph.D., contends that if parents rely too heavily on online surveillance, they may decrease the likelihood that their kids will view them as a trustworthy resource when they run into trouble online. Heitner is a media expert who counsels parents on technology best practices, consults with schools about digital wellness policies, and advises app developers on designing ethical products.

Going all NSA on our kids with covert monitoring is not a good strategy for building trust and helping them learn to be independent

Her latest book, Growing Up in Public: Coming of Age in a Digital World, is a thoughtful examination of how an always-connected culture affects kids’ boundaries, identity, privacy, and reputation in their digital world. For it, she spoke to hundreds of kids, parents, educators, clinicians, and scholars and assembled practical strategies for working together with kids to address the challenges and dangers technology presents.

“Going all NSA on our kids with covert monitoring is not a good strategy for building trust and helping them learn to be independent,” she says. “We want them to be able to come to us if something goes wrong or they make a poor decision. But if, as parents, we are all over them at all times, we won't be the person they trust if they accidentally cause a big conflict in the group text or have a friend talking about self-harm or substance abuse.”

Instead of instituting an online police state for their kids, Heitner encourages parents to take a mentorship approach as kids gradually expand their online footprint. She also suggests parents adopt a healthy curiosity — and admit when they don’t know something — to allow kids to teach them about various online platforms.

Ceding some control may feel counterintuitive, even scary. Still, Heitner contends it’s a better route to helping kids learn how to protect themselves and have a healthier presence online.

Fatherly spoke to Heitner about the conversations parents should have with their kids about social media, how to keep kids safe online while maintaining trust, and the differences in how kids and parents view online safety.

You conducted hundreds of interviews with kids, parents, and educators for Growing Up in Public. What was the most surprising thing you learned?

One of the biggest “aha moments” is the way kids are disclosing so much about themselves. A lot of parents are nervous about disclosure, and I initially wondered if that was safe and OK for kids. But what I found is that when kids are disclosing aspects of their identity and their experiences online around things like mental health, sexual orientation, gender identity, survivorship, or neurodiversity, it can actually be really, really good for them.

In your opinion, what are some of the upsides of kids discussing themselves frankly in online spaces?

One is that kids can be selective about the online communities they join in ways they can’t in other areas of life. For example, if you’re LGBTQ+, your high school may not be an affirming space. But online, you can filter toward a place that’s more affirming.

There are also some advantages to finding people with shared interests. If you're super into an obscure kind of anime or an activity that isn’t popular where you live, you can probably find other people with similar interests online. That's a tremendous form of support, especially in the last few years with all the isolation we've experienced related to the pandemic. The internet has some dangers and concerns, but there are also some real upsides in terms of being able to find your people there.


Would it be accurate to say that kids disclose more than adults might assume but don’t disclose as broadly as we think they are?

Yeah, I think they're often pretty selective. I talked to a kid for a Washington Post article that I wrote specifically about kids coming out online. And she was like [and I’m paraphrasing here], ‘Absolutely, it's in my Instagram bio. But no way would I put it on TikTok because the TikTok algorithm means a lot more strangers will see you. On Instagram, it will mostly be friends or friends of friends, as opposed to total randos who might be haters.’

She was very clear that she understood the algorithm and that she was thinking clearly about the risks versus rewards of being out in different spaces. And some kids use coded language or symbols on more public platforms. They might be out with a flag in their bio on a site because their grandma is less likely to know what the flag means. So there's a selectivity there that is a little more subtle than many adults would recognize.

You encourage parents to allow empathy to guide their decisions about technology and online activity instead of fear. What does that look like practically?

I think just talking to your kid. It sounds very simple, but sitting down with them and saying, “Hey, I really want to support you in this, and I want to understand how you’re using this app better, so can you show me a little bit about how it works? Because I will be much less nervous, and potentially less controlling and annoying, if I understand it better.”

Then, have them show you how the app in question works, who they are connecting with on there, and what features they like and don’t like.

When kids start getting involved with online activity and social media, what are some indicators that they’re ready for those interactions?

The biggest is their level of impulsivity versus their ability to be accountable for their behavior, slow down, and figure out social interaction. Even adults can be impulsive online, so we can’t set the standard that if you’ve ever been impulsive, you can’t be in an online community, because then I think we’d be taking away everyone’s phones.

But if parents want an indication of how their kids might handle social media, look at their interactions in group texts or how they communicate when they email teachers. That will give you some sense of what social or self-regulation skills they may need to work on before they go to the next thing.

I would start small if you’re worried that your kids will nuke all their relationships. Maybe they get to use your phone to text with one cousin, or they’re allowed to play their Nintendo Switch online with a handful of friends you know, but they’re not on a server-based game where they’d potentially interact with strangers.

You advocate not over-monitoring kids' online activities or being rigid to the point that parent-child trust and communication break down. What’s a good rule of thumb for how much oversight is healthy?

I’m a proponent of taking a mentoring approach to help kids learn how to use technology, and monitoring can be part of mentoring. But sitting down to help your 11-year-old set up a new smartwatch and figuring out who they will be in contact with is different than reading all your 17-year-old’s texts.

At the beginning of any new experience, figure out what the parameters will be. Where in the house can your kids use different devices? Who are they allowed to play with and talk to? What are reasonable time limits?

But, on the other hand, if you just put an app on their device to track them all over town and read their texts, that’s probably too intrusive. And certainly, if you’re doing it covertly, that really crosses the line.


What’s the best mindset for parents to have when utilizing monitoring tools?

The more you’re using monitoring to assess whether your kids need support, the more you’re doing it to teach them something. Then you back off from it, in a kind of training wheels approach.

It would be sitting down with them and saying, ‘Okay, you want to get on the sixth-grade group chats. Can we look at it together and see if you really want to do this? And then maybe we can check back in a week or two and look at it together again.’

It's not that you need to see every single thing your kid is doing. But initially, being there with them and allowing them to walk you through it can be helpful.

Are certain apps or platforms better or worse for adolescents and teens than others?

The apps are as good or bad as who you connect with and what you do. You can see terrible content on Pinterest, but you can also be a crafter and have a great experience there. It really depends more on your kid and what they're searching for.

I will say TikTok was identified to me as a problematic app for some kids because the algorithm is so good that the app really is hard to walk away from. Reddit and Quora can also quickly send us down a rabbit hole of very negative things. But even those apps, I wouldn't say they’re inherently evil. Just be careful when you're on Reddit, and don't go down a white supremacist rabbit hole or get recruited into a hate group.

In terms of problematic content, there's self-harm and substance use content on all these apps. So, avoiding specific apps isn’t a solution. If the algorithm starts sending you toxic stuff, you have likely clicked on something to make that happen.

Are there ways app algorithms can be harmful that aren’t on parents’ radars?

There are definitely online spaces that are more concerning. But kids and adults don’t always perceive those concerns in the same way. Take Instagram and Snapchat, for example. A lot of parents would say Instagram is more wholesome. But many kids were saying that Instagram was more stressful because the grid makes you feel like you have to be perfect, something we eventually found out Meta’s own internal research confirmed. Snapchat was kind of a relief because you could send an ugly selfie and not feel like it had to be perfect.

We’re at an interesting point in history because kids are using apps developed by adults who grew up in the pre-digital age or early digital age. How might the online landscape change as this generation of kids becomes developers?

So, in school workshops I facilitate, I ask kids how they’d design fixes for some of the apps they use. They don’t like that you can screenshot Snapchat, so one kid designed a screenshot protector. I also had them design apps that would use facial recognition to prevent parents from posting about their kids without permission. So if your parent is about to post your face, it’ll give you a chance to say red light, yellow light, or green light. Because consent is something kids often think about in ways parents don’t consider.