Or, How To Not Be Fined Half a Billion Dollars
On December 19, 2022, the FTC announced that Epic Games, the maker of Fortnite, had agreed to pay a settlement of over half a billion dollars in relation to allegations of significant issues around the treatment of children online. A portion of this settlement will be paid in the form of reimbursements to customers, in response to what the FTC deemed “dark patterns” in its in-game purchase flows. But the majority of the costs relate to alleged COPPA violations by Epic, and they reveal an evolution in the way the FTC views COPPA protections and child safety issues - one that every online platform should take note of.
There have already been a number of excellent articles covering the details of this specific case - here’s one from the ESRB. If you’re not already familiar with the basics of what’s going on, we recommend you start there. And given the quality of these existing articles, one might ask - why are we writing another one? What relevant expertise can we offer to this discussion?
The answer to this question lies in the way the Epic case interweaves questions of privacy with questions of safety. Historically, many online platforms have treated privacy and safety as being in tension with each other. And there is a sliver of truth to this - an absolutely private platform would lack the tools to actually protect its users, and no platform could offer absolute safety without watching every single thing each user did in intimate detail. It’s a serious mistake to conclude that this means there’s no synergy to be had between privacy and safety, as we’ll discuss below. But thus far, platforms faced with what they see as a binary choice have recognized that privacy regulation like COPPA and GDPR has real teeth, while safety-focused regulation has, in the industry’s view, lagged behind. As such, online platforms have generally prioritized privacy, while often neglecting safety or relying on excuses like “we can’t be held accountable for what we weren’t aware of.”
In this light, the Epic case takes on a new level of significance. The FTC’s COPPA complaints certainly noted some alleged deficiencies with respect to direct privacy protections. But the meat of the case actually centered on the ways that a lack of privacy can lead to worse safety outcomes.
To briefly summarize, the FTC argued that safety issues such as harassment, threats, and bullying are rampant in online text and voice chat, and that Epic’s default privacy settings didn’t do enough to protect kids from being targeted through these channels. The upshot in this case is that Epic must change its default privacy settings so that children cannot use voice or text chat without parental consent.
Which does technically solve the problem, except for the part where it cripples a key capability of modern online games - their ability to offer a space for kids to socialize, form bonds, explore their own identities, build confidence, mobilize communities, and so much more.
Yes, you could argue that this capability still exists, just so long as the parents consent to it. But not every child is fortunate enough to have tech-savvy, responsive, or even present parents. In addition, parental control and age verification features have a rocky success rate at best, and are well known for being bypassed in a variety of top games. Requiring parental consent is a sensible intervention for the moment; but it’s a patch, and it raises the question of what it would mean to really attack this problem at its root.
Well, attacking the problem at the root is fairly straightforward, at least in concept - it would just mean actually curating safe, respectful online chats, instead of the toxicity that gaming and the wider internet have unfortunately become known for.
Many people believe this is impossible, but it’s worth noting that we do it all the time. If someone shows up to a playground and begins loudly talking about white supremacy, or requesting photos of genitalia (even from adults, rather than kids), we have a number of ways to correct their behavior, ranging from a gentle reprimand up to and including police intervention. This doesn’t make playgrounds 100%, absolutely, unquestionably safe…but it does prevent kids from picking up questionable ideologies or offensive slurs as a matter of course in the way they often do online.
How do we solve this issue on playgrounds? It’s not by shadowing our kids, listening to every word they say and recording every move they make. It’s by having authority figures - parents, schoolteachers, coaches, etc. - close enough that they can notice from afar when trouble seems to be brewing. Raised voices, an unexpected person joining the group, or even things suddenly going too quiet - these can all be clues to those nearby that something has gone wrong, and that the kids need someone to step in for their own sake.
The analogous solution in the online world is known as “proactive moderation.” When done well, proactive moderation enables platforms to act just like the parents keeping an eye on the playground. The platform doesn’t listen to every little detail, doesn’t follow each kid around, and doesn’t even necessarily need to know which kid is which; but it does notice the telltale signs of trouble breaking out, and can then step closer, start actually paying attention to the details, and intervene in whatever way is appropriate.
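To make that escalation pattern concrete, here is a minimal Python sketch of how a proactive moderation loop might work. Everything in it - the signal names, weights, and threshold - is hypothetical and purely illustrative; it is not a description of any particular product’s internals.

```python
from dataclasses import dataclass

# Hypothetical coarse signals a platform might compute without inspecting
# the content of a conversation in detail.
@dataclass
class ChatSignals:
    volume_spike: float      # 0-1: sudden loudness or message-rate change
    sentiment_heat: float    # 0-1: coarse emotional intensity
    unexpected_joiner: bool  # someone outside the usual group joined

ESCALATION_THRESHOLD = 0.7   # illustrative value; a real platform would tune this


def risk_score(signals: ChatSignals) -> float:
    """Combine the coarse, privacy-preserving signals into a single score."""
    score = 0.5 * signals.volume_spike + 0.4 * signals.sentiment_heat
    if signals.unexpected_joiner:
        score += 0.2
    return min(score, 1.0)


def monitor(session_id: str, signals: ChatSignals) -> None:
    """Stage 1: watch from afar. Only escalate when trouble seems likely."""
    if risk_score(signals) >= ESCALATION_THRESHOLD:
        escalate(session_id)


def escalate(session_id: str) -> None:
    """Stage 2: 'step closer' - run detailed analysis and flag for human review.

    Note that the session is referenced only by an opaque ID; no user identity
    or long-term behavioral log is needed to raise the flag."""
    print(f"Session {session_id}: possible harm detected, flagging for moderators")


# Example: a session with raised voices and an unexpected participant.
monitor("session-42", ChatSignals(volume_spike=0.8, sentiment_heat=0.6, unexpected_joiner=True))
```

The key design choice is that only the cheap, coarse signals are computed continuously; detailed analysis happens only once those signals suggest someone may actually be at risk.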
In text chat, there are a number of existing solutions that offer proactive insights to studios, many of which are able to do so while maintaining due respect for the privacy of the children involved. But this problem has historically been more difficult in voice chat, leading many studios, Epic among them, to choose not to moderate voice chat at all (often in the name of preserving player privacy).
Which finally brings us back to the question of our own expertise around this topic. We at Modulate are the creators of ToxMod - the world’s first and only voice-native proactive moderation system. We designed ToxMod from the ground up with privacy front of mind, in alignment with our ‘playground’ analogy. ToxMod doesn’t listen to what’s being said in every conversation; it starts out monitoring from afar, looking for simple things like heated emotions, uncharacteristic behavior, or participants who aren’t supposed to be there. Only after ToxMod recognizes the signs of trouble does it ‘step closer’ and begin listening more closely to better understand the problem…and even then, ToxMod doesn’t know the identity of any of the kids, nor does it maintain long-term logs of their behavior. Just as a good Samaritan might flag to a teacher that “the kid in the orange shirt looks like he needs help”, ToxMod flags to online platforms that “there’s someone over here you should take a look at” - ensuring the platforms can intervene and protect their users, without putting user privacy at risk.
Don’t believe us? That’s okay. Our point isn’t that you need to partner with Modulate to create a safe online platform - though we’re happy to help if you’d like our support! Rather, what we want to get across is that while privacy and safety do have a bit of tension with each other, that doesn’t mean there’s no way to have our cake and eat it too. In the physical world, we constantly walk the line between respecting the privacy of our kids and ensuring their safety, and we can do the same thing online. And, as this recent settlement shows, simply ignoring what’s happening on your platform is no longer an option, even if you do it in the name of privacy. Protecting kids from not just bad platforms, but also bad users, requires that developers strike a middle ground between privacy and safety - and that they do it fast. (Part of the FTC’s focus on Epic came from its frustration that Epic took too long to deploy sufficient safety features.)
At Modulate, we believe in an internet experience that’s safe, private, authentic, and fun. For too long, platforms have assumed we can’t have them all, and left users in the lurch. We hope this article helps to reinforce that there is a way forward - and that consumers and regulators won’t be satisfied until we put in the work and deliver experiences that check all the boxes together.