Using ‘safety by design’ to address online harms
A look at how user choice and transparency provide new ways of addressing content moderation and online safety policy.
This piece originally appeared in Brookings TechStream.
Around the world, policymakers are grappling with how to address the spread of harmful content and abuse online. From misinformation to child sexual abuse material (CSAM) to harassment and the promotion of self-harm, the range of issues on policymakers’ plates is diverse. All of them have real consequences in the lives of their constituents—and lack easy remedies.
Recent rulemaking and legislative initiatives, however, have seen a shift in how policymakers are holding social media companies accountable for the well-being of their users. From the United States to Europe, lawmakers are increasingly embracing the principles of “safety by design,” which aim to place accountability, user empowerment, and transparency at the heart of rules for online life.
Safety by design offers a more proactive approach for policymakers to address ever-evolving online safety issues, even as these principles raise a new set of challenges. By embracing safety by design, policymakers can provide users with greater choice and understanding of how their online experiences are structured, granting users greater autonomy in mitigating online harms. But safety by design approaches also require careful balancing to preserve civil liberties and to ensure that they provide protections for all online users, not just the children whose safety concerns have come to dominate debates about how to regulate online life. Similarly, such rules need to be crafted in a way that provides consistent guidance for industry while offering a framework that is broad enough to be applied to future online social spaces—from live chat and video applications to the metaverse and beyond.
Safety by design and in practice
Safety by design builds on the concept of “choice architecture” coined by the behavioral economists Richard Thaler and Cass Sunstein in their 2008 book Nudge, which describes how the daily choices we make are shaped by how they are presented to us. Social media platforms have notoriously applied this concept to their product design to build applications that keep users engaged regardless of the benefits or harms of their experience—design choices that are often referred to as dark patterns. Policymakers are increasingly engaging with this literature on behavioral science to understand online harms, as when the United Kingdom’s Competition and Markets Authority cited the work of Thaler, Sunstein, and other leading behavioral scientists such as Daniel Kahneman and Amos Tversky in a pair of reports examining online choice architecture earlier this year.
In its attempt to put user safety at the heart of technical systems, safety by design extends foundational concepts in online privacy and security—especially “privacy by design.” Privacy by design was developed by the scholar Ann Cavoukian, who worked to integrate the concept into government regulation while serving as the information and privacy commissioner of Ontario, Canada, from 1997 to 2014. Cavoukian’s seven privacy by design principles emphasize the need for default and embedded privacy protections as online services become more complex and ubiquitous. The principles include transparency requirements for users about the collection of their personal information, as well as independent audits. By giving users easy control over privacy settings, Cavoukian aimed for a “user-centric” approach to design that offers strong privacy by default. In 2010, an international conference of privacy regulators passed a resolution recognizing privacy by design as an essential component of privacy protection, and in 2012 the Federal Trade Commission recommended the principles as a best practice for industry and called for them to be incorporated into law. Privacy by design then became a foundation of the EU’s General Data Protection Regulation (GDPR) and of similar legislation and regulation that followed around the world.
Safety by design also builds upon “security by design,” a set of cybersecurity guidelines for building and maintaining secure systems. This proactive organizational security approach is based on anticipating and guarding against the misuse of data or cyberattacks with principles that include regular monitoring, user verification, and limiting permissions to users who need access to specific systems and data. Safety by design similarly recognizes the potential for misuse and abuse of social tools and the need to proactively address and adapt to protect against that behavior.
Together, privacy by design and security by design offer proactive tools for maintaining data security and privacy. But while these earlier principles focus on digital architecture, such as databases and websites, safety by design addresses human interaction. Applying safety by design principles to social platforms requires that products be designed in the best interest of their users: safety and security defaults are set to the strongest option, and users are given transparency into and control over recommendation and communication features. Under such a scheme, users could adjust what they see, have personal information hidden by default, and have addictive features like autoplay video turned off by default or be nudged to take a break. Users would be prompted to review who can see what they share and who can contact them, and to decide what data can be used for the ads and content that appear in their feeds and notifications—choices that could fundamentally change our online experiences.
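To make “strongest option by default” concrete, the sketch below shows what a hypothetical set of new-account defaults might look like in code. The field names and options are illustrative assumptions, not any platform’s actual configuration; the point is the direction of the defaults, with protective settings on unless a user deliberately relaxes them.

```typescript
// Hypothetical illustration of "safe by default" account settings.
// Every name and value here is an assumption for the sake of example, not a real platform API.

type Audience = "everyone" | "followers" | "no_one";

interface SafetyDefaults {
  profileVisibility: Audience;          // who can view personal information
  directMessagesFrom: Audience;         // who can initiate contact
  personalizedRecommendations: boolean; // use personal data to rank the feed?
  autoplayVideo: boolean;               // engagement-maximizing feature
  screenTimeReminders: boolean;         // periodic "take a break" prompts
  shareAudiencePrompt: boolean;         // ask "who will see this?" before posting
}

// Strongest protections are the starting point; users can relax them by explicit choice.
const NEW_ACCOUNT_DEFAULTS: SafetyDefaults = {
  profileVisibility: "followers",
  directMessagesFrom: "followers",
  personalizedRecommendations: false,
  autoplayVideo: false,
  screenTimeReminders: true,
  shareAudiencePrompt: true,
};
```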
Beginning in 2018, Australia’s eSafety Commissioner has worked with industry and civil society and conducted research with parents and children to develop the first online safety by design standards. The goal is to create technical standards that give developers and engineers at online services clear guidance for raising the baseline of online privacy and security. Particular attention has been paid to the human rights of children, addressing longstanding concerns that children are uniquely exposed to bullying, sexual predators, and other potentially harmful experiences online.
Safety by design in policy
Policymakers around the world are beginning to integrate safety by design concepts into legislation regulating online life. In the United States, for instance, proposals like the Kids Online Safety Act (KOSA), the Platform Accountability and Transparency Act (PATA), and the Digital Services and Online Safety Act (DSOSA) all include transparency and design requirements intended to give consumers and children control over, and understanding of, their privacy and recommendation settings. In the EU, the Digital Services Act (DSA) was recently approved by the European Parliament and will require independent auditing, transparency reporting, data access for researchers, and risk assessments with corresponding design changes to recommendation algorithms and user interfaces to mitigate potential harms. The U.K.’s Online Safety Bill would require features that can filter out harmful content and control who can interact with content, set rules for transparency reporting, and mandate risk assessments of the safety and impact of design features.
Here’s how each of these proposals would apply (or fail to apply) safety by design principles.
Platform responsibility and accountability
Safety by design holds online platforms and services responsible for the safety of users by assessing, addressing, and mitigating potential harms before they occur. Policy measures require platforms to study the impact of their design and algorithmic recommendations and make the findings available for audit, forcing accountability and incentivizing platforms to embed features that are designed to improve the wellbeing of users. Internal risk assessments and independent audits are a common feature of safety by design policy proposals, including the U.S. Senate’s KOSA, the House’s DSOSA, the EU’s DSA, and the U.K. Online Safety Bill. While the U.K. relies on a communications regulator to conduct the audits, the EU and U.S. put audit responsibilities in the hands of third-party organizations—something major consulting firms are already preparing for.
European and American legislative proposals pair internal risk assessment and independent auditing requirements with public transparency reports that would give greater insight into how platforms operate. In the DSA, the largest social platforms are required to conduct risk assessments and submit to independent audits that examine systemic risks and assess overall compliance in a report submitted to regulators. Auditors have broad purview to assess obligations including transparency reporting and data sharing, while also gaining access to review the algorithms used to curate posts and target advertising. Similar provisions covering recommendation systems are included in the U.S. House’s DSOSA proposal, while the Senate’s KOSA proposal makes some auditing and risk mitigation findings public, including outlines of identified risks and of how personal data may be used in recommendation systems and other algorithms. In the U.K., platforms will also have a duty of care for algorithms and design functionalities. The Online Safety Bill’s risk assessments cover the removal of illegal content and broadly assess the distinct harms that a service’s design and functions, and their potential misuse, pose to children and to adults. Proactive safety measures are also a key component of the proposals: most notably, targeted advertising aimed at children would be banned or partially banned under the EU and U.S. proposals, while the U.K. legislation forbids “fraudulent advertising.”
Design to empower users
A recent Gallup and Knight Foundation report found that while users don’t agree on how much responsibility government or social media companies have to moderate content, most want more choice and control over their own experience. While two-thirds of children in the U.K. say they have experienced online harassment or predatory behavior, fewer than 20% report that abuse. And it’s not just children: Nearly two-thirds of adults under age 30 say they have experienced online harassment. Giving users the ability to control who can message them and to limit what content they see online would provide more privacy and protection for all vulnerable groups.
In the United States, KOSA would begin to address the desire for greater user control by mandating design features that prevent contact from strangers, hide personal information by default, limit features that reward more time spent on a platform, and allow users to opt out of algorithmic recommendations based on personal data. However, these design requirements would be limited to children 16 and younger and would mainly be exercised through parental control tools. The U.K. has a similar provision giving users control over who can interact with the content they share. In California, the Social Media Platform Duty to Children Act would prohibit design features known to cause addiction. The California Age-Appropriate Design Code Act also sets design requirements for minors but focuses more on privacy protections “by design and by default,” with broad mandates to prioritize the wellbeing of children when designing online services and features.
The EU and U.K. proposals focus more on content moderation, requiring extensive systems for reporting content users deem harmful, including a provision in the Digital Services Act for “trusted flaggers,” independent organizations with expertise in a specific type of harm whose reports receive priority processing. Both regimes include risk assessment and mitigation components. The DSA also requires the largest social media platforms to offer users a feed option that does not use personal data to recommend content.
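As a rough illustration of what “priority processing” for trusted flaggers could mean in practice, the sketch below orders a hypothetical moderation queue so that reports from vetted organizations are reviewed first, oldest first within each group. The data model is assumed for illustration; the DSA specifies the obligation, not the implementation.

```typescript
// Hypothetical sketch: a moderation queue that reviews trusted-flagger reports first.
// The report shape and field names are assumptions for illustration only.

interface ContentReport {
  reportId: string;
  contentId: string;
  harmCategory: string;        // e.g. "harassment", "self-harm promotion"
  fromTrustedFlagger: boolean; // filed by a vetted organization with relevant expertise
  submittedAt: number;         // Unix timestamp in milliseconds
}

function nextReportsToReview(queue: ContentReport[], batchSize: number): ContentReport[] {
  // Trusted-flagger reports come first; within each group, older reports are reviewed first.
  return [...queue]
    .sort((a, b) => {
      if (a.fromTrustedFlagger !== b.fromTrustedFlagger) {
        return a.fromTrustedFlagger ? -1 : 1;
      }
      return a.submittedAt - b.submittedAt;
    })
    .slice(0, batchSize);
}
```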
A stark divide exists between the United States and Europe in their approaches to design rules aimed at empowering users. While the DSA and Online Safety Bill apply at least some design rules for all users with special protections for minors, the U.S. proposals apply exclusively to children, offering parents greater control over the content their children consume.
Transparency and researcher access
In order to understand the functioning of algorithmic recommendation systems, researchers and policymakers currently rely on the goodwill of platforms to furnish data and on whistleblowers to supply internal company documents. Addressing this gap and providing greater access for researchers to company data is a key component of the DSA and U.S. proposals but is absent in the U.K. Online Safety Bill, which only requires further study of the matter. U.S. proposals, including KOSA and PATA, also include protections for independent researchers who develop their own means for data collection and analysis. The DSA and U.S. legislation would set up government organizations with subject matter experts who vet researchers and mediate data access and sharing with platforms. Other transparency requirements include reports on public content and aggregate engagement information, libraries to search and review advertising, and public reports that outline findings from risk assessments and independent audits.
Setting the foundation
Safety by design proposals hold promise but are not without potential pitfalls. Developing technical protocols and future-proofing rules will be crucial if policy and regulation are to have the intended effect of protecting consumers while fostering innovation and free speech online. Rules that protect only children would be a missed opportunity to empower all consumers to make individual decisions about their wellbeing online. Crucially, a broader approach would also give vulnerable and minority groups tools to protect themselves against harmful content and conduct.
Transparency and researcher access provisions must include protections against the security risks of sharing data, and any public reporting should take precautions to protect user privacy and guard against de-anonymization. These considerations and technical protocols are being worked out with industry and civil society input as the Digital Services Act is implemented, and they could be further refined to fit U.S. privacy expectations and regulatory systems. Proposals already exist for sharing advertising and high-reach content data while protecting user privacy.
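One common precaution in such proposals is to publish only aggregates built from enough users that no individual or small group can be re-identified. The sketch below illustrates that idea with assumed field names and a placeholder threshold; real data-sharing protocols would layer on further safeguards.

```typescript
// Hypothetical sketch of thresholded aggregation for a public transparency report.
// The record shape, grouping keys, and threshold are illustrative assumptions.

interface EngagementRecord {
  contentCategory: string; // e.g. "news", "health"
  country: string;
  views: number;
}

const MINIMUM_GROUP_SIZE = 1000; // suppress cells built from too few records to publish safely

function aggregateForPublicReport(records: EngagementRecord[]): Map<string, number> {
  const cells = new Map<string, { views: number; count: number }>();
  for (const r of records) {
    const key = `${r.contentCategory}|${r.country}`;
    const cell = cells.get(key) ?? { views: 0, count: 0 };
    cell.views += r.views;
    cell.count += 1;
    cells.set(key, cell);
  }
  // Publish only aggregates that cover enough records to limit re-identification risk.
  const published = new Map<string, number>();
  for (const [key, cell] of cells) {
    if (cell.count >= MINIMUM_GROUP_SIZE) published.set(key, cell.views);
  }
  return published;
}
```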
The internet is global, and we should not reinvent the wheel when developing best practices and interoperability for data sharing and security controls. Consistency would allow more countries to adopt rules built on safety by design principles, let companies navigate a more streamlined set of regulations, and promote competition by making it easier for smaller platforms and emerging competitors to comply as they scale their operations. Consistency does not mean copying the same regulation around the world, but rather applying the same technical protocols and best practices across distinct rules, for instance those that preserve First Amendment protections online in the U.S.
Clear but flexible definitions are needed in safety by design rules and guidelines as more companies develop platforms for interacting in virtual spaces and as video games grow increasingly social. Standards should be relevant and forward-looking enough to cover new kinds of services and the safe development of new platforms, such as controls and protections for live interactions in virtual environments—not just rules for text or media content. If annual audits address business model incentives, they can help future-proof these rules and continue to educate policymakers and the public on how platforms operate. There should also be rules for continuing to assess and update the definitions of covered platforms as technologies change.
Special duties to protect children make sense for unique vulnerabilities such as exposure to mature content, sexual grooming, and online bullying and harassment. However, age verification requirements should be minimal, and rules should generally protect and empower all users. Age verification and parental tools, while well-intentioned, risk increasing danger for minors by creating systems that track only young users. Verifying parents and guardians for access to parental controls will also require safety mechanisms that protect children’s privacy and prevent exploitation by bad actors who gain access to such tools. It would be better to extend stronger protections to all users than to offer heightened safety features only to children, who may attempt to sidestep those controls.
By providing users with greater control over their experiences online, safety by design can reduce harms in online spaces while protecting free speech. While getting the details right for a regulatory regime that incorporates safety by design principles will be difficult and important, it’s time for policy solutions that address the features and incentives in digital platforms that promote and spread real-life harms.