TikTok in the firing line: The fightback has begun

Dr Rachel O’Connell and Aebha Curtis

In a landmark class action, a claim is being brought against TikTok on behalf of millions of children across the UK and EU. It has been brought by Anne Longfield, the former Children's Commissioner for England, and concerns all children who have used TikTok since 25 May 2018, regardless of whether they hold an account or what their privacy settings are.

Tom Southwell, of Scott & Scott, the law firm representing the claim, has said: “TikTok and ByteDance's advertising revenue is built on the personal information of its users, including children. Profiting from this information without fulfilling its legal obligations, and its moral duty to protect children online, is unacceptable.” In light of the cases being brought against TikTok and, similarly, against YouTube, it is becoming increasingly clear that we have reached a tipping point: safety requirements must be put in place for users wishing to access certain online services, particularly for children. This is what new and emerging regulations, including the Online Harms Bill, aim to institute in digital environments.

The pattern of collecting and extracting data and using it to recommend adult strangers to connect with children, to expose children to content that advocates self-harm or suicide, or to manipulate and persuade users (by increasing the chances of a user responding to advertisements, for instance) is not unique to TikTok. Such tactics have long been in use and must be addressed through age-appropriate protection measures designed to safeguard children.

Companies have made a number of efforts themselves, under the banner of self-regulation, to demonstrate their willingness to protect users without regulatory intervention. However, Baroness Shields, a former Facebook executive, highlights some of the major issues with the safeguarding strategies currently being advanced. For example, companies like Facebook have made public their plans to encrypt all communications on their platforms. While encryption does preserve the right to privacy, it also poses heightened risks to children’s safety: encrypted communications can impede, and have impeded, investigations of paedophile activity online.

Commentators have consistently framed this issue of privacy and safety, as it relates to end-to-end encryption, as an oppositional one, in which one must be sacrificed for the sake of the other. However, platforms do not need to endanger the safety of children to protect all users’ right to privacy.

Consider, for example, airbags in a car. These are safe for adults in the front seat but are not deployed in the back seats where children usually travel, given that an airbag could cause serious injury to a child. In other words, different safety measures are deployed in the same car to protect people of different ages. When platforms know the ages of their users, a similarly granular policy can be enacted with respect to encryption, one that serves the best interests of children. If encryption were applied to the communications of those over 18, it would be possible to preserve their right to privacy while enabling extra protection for younger users, who are more vulnerable and whose communications may be needed to prosecute those who might seek to exploit that vulnerability.
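By way of illustration only, a minimal sketch of such an age-aware policy might look like the following. The names, thresholds and settings here are assumptions made for the sake of the example and do not describe TikTok's, Facebook's or any other platform's actual systems; the `verified_age` field is assumed to come from the kind of privacy-preserving age check discussed below.

```python
from dataclasses import dataclass

# Illustrative sketch only: names and thresholds are assumptions,
# not a description of any real platform's systems.

ADULT_AGE = 18  # assumed threshold for full end-to-end encryption


@dataclass
class User:
    user_id: str
    verified_age: int  # assumed to come from a privacy-preserving age check


def conversation_policy(participants: list[User]) -> dict:
    """Choose safety settings for a conversation based on verified ages."""
    all_adults = all(u.verified_age >= ADULT_AGE for u in participants)
    any_child = any(u.verified_age < ADULT_AGE for u in participants)
    return {
        # Full end-to-end encryption only when every participant is an adult.
        "end_to_end_encryption": all_adults,
        # Extra safeguards apply whenever a child is present.
        "adult_stranger_contact_allowed": not any_child,
        "content_reportable_to_moderators": any_child,
    }


# Example: an adult messaging a 14-year-old does not get end-to-end
# encryption, and the conversation remains visible to safety tooling.
policy = conversation_policy([User("a", 34), User("b", 14)])
print(policy)
```

In this sketch, full end-to-end encryption is switched on only when every participant is a verified adult, mirroring the airbag analogy: the same car, different protections depending on who is in the seat.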

Determining younger users’ ages has long been viewed as a major obstacle to implementing such measures. However, the Department for Digital, Culture, Media and Sport recently commissioned and completed a programme of work, entitled Verification of Children Online (VoCO), which demonstrated that it is possible to conduct age checks and obtain parental consent for a child to access an app in a privacy-preserving manner. The technical trials conducted as part of the project showed that there are technological solutions to the ‘problem’ of verifying younger users’ ages and that their deployment online is feasible, desirable and proportionate.

Today, engineers and developers are creating new apps using algorithms and code that perpetuate the ‘sins of the past’, repurposing existing algorithms from repositories such as GitHub and thereby recreating and compounding the risks that children face online. A new approach is needed, one that is fit for purpose today and considers all users, taking their best interests into account.

Leading the charge are people like Tristan Harris, who is calling for humane technology; that is, technology which “is values-centric and designed with awareness that technology is never neutral, and is inevitably shaped by its surrounding socioeconomic environment” and “is sensitive to human nature and doesn’t exploit our innate physiological vulnerabilities”. MIT Media Lab researcher Joy Buolamwini, too, highlights the biases present in algorithms used to decide things such as employability or creditworthiness, often without the appropriate degree of transparency.

Figures like these, along with Anne Longfield, who is bringing the case against TikTok, are at the forefront of efforts to change the existing mindset around safeguarding users and to bring offline standards to bear on the digital world.
