Harm Reduction In Social Media

Carnegie UK
41 min read · Jul 17, 2018


Can Society Rein in Social Media?

by William Perrin, trustee of Good Things Foundation, Indigo Trust and 360 Giving and a former senior civil servant in the UK government, and Professor Lorna Woods, University of Essex.

The benefits that social networks bring for individuals and society are plentiful, yet the harm that many people have suffered is deeply troubling, as described in countless reports by individuals, civil society groups and parliamentary inquiries. The turbulent debate about reducing the harm emanating from social media platforms has to date produced few comprehensive proposals for a regulatory system that might reduce harm. The issues are complex: social media companies have a duty to shareholder interests; individuals don’t bear the costs of their actions when they use social networks, leading to strong externalities; rights to freedom of speech are highly valued; the issues cut across different global approaches to regulation, making jurisdiction sometimes unclear; and few governments seem willing to take a strong lead.

This state of affairs is thematically similar to other issues the internet industry has faced over the decades. In many economic sectors, initial digital disruption has led to economic or social harm, as well as gain, once adoption is widespread. Eventually, governments respond to lobbying from the affected parties and pass some regulations; after quite legitimate protest, the internet industry co-operates in enforcing them, establishing a new status quo.

The issue of harm stemming from social media differs from previous waves of disruption and regulation. Harm is protested not by well-funded companies but by individuals and a civil society lobby that are dispersed, represent a myriad of different perspectives, are short on resources and lack access to the levers of power. The interplay between protection from harm and free speech is a substantial complicating factor, as is the role of political parties, who seem reluctant to regulate powerful social media campaigning platforms. The economic interests that have been most negatively affected by the rise of social media platforms are arguably those of the traditional media industry. But they have long stood, as you might expect, against regulation that may be construed as placing restrictions upon freedom of expression. The social media companies, now some of the biggest firms in the world, have learned from previous cycles of disruption and regulation and have built strong lobbying positions. The net result appears to be that the lobby to regulate social media is more vocal, but less effective, than the economic interests that campaigned during previous disruptive waves, while also facing a better-equipped disruptor.

The authors of this blog post, Professor Lorna Woods and William Perrin, each have over 20 years of experience working on regulatory issues in electronic media and have joined forces with Carnegie UK Trust to sketch out a comprehensive proposal for regulation in the UK and Europe to reduce harm emanating from social media platforms. We are committing our time pro bono.

Over the coming weeks, on this blog we shall sketch out what a model regulatory regime for social media that hinges on ‘harm reduction’ might look like. At the heart of our model, most likely, would be a duty of care on social media platforms towards their users. We shall explore what regulatory models could help social media companies and their users catalyse a virtuous and, importantly, transparent cycle of identifying harms, acting to reduce them, identifying what harms remain or arise, and then repeating the process, all while preserving freedom of expression in the European tradition.

New regulatory models would require new law and new structures, and we shall set out some initial proposals for these in a minimal, pragmatic way within the UK/EU framework. We shall draw upon successful existing regulatory regimes as diverse as broadcasting, communications, health and safety and employment to produce a final report setting out a comprehensive, pragmatic regime that balances freedom of expression and the benefits of social networks with protecting people from harm that may emanate from those networks and their users.

Harm Reduction In Social Media — A Proposal

The UK government’s Internet Safety Strategy Green Paper set out some of the harms to individuals and society caused by users of social networks. As we set out in our opening blog post, we shall describe in detail a proposed regulatory approach to reduce these and other harms and preserve free speech. Our work is based in UK and European law and policy, drawing as much as possible upon proven social and economic regulatory approaches from other sectors. Our aim is to describe a functioning, common sense co-regulatory model. The new model would require new legislation in the UK but we believe it to be compatible with European legislation. The approach could be adopted at a European level, but would then require European legislation. Much of the broad thrust of the model could be employed voluntarily by companies without legislation.

Freedom of expression is at the heart of our work, but (for American readers in particular) it is important to note that in the UK and Europe, freedom of expression is a qualified right. Not all speech is equally protected, especially when there is an abuse of rights. And speech may be limited in pursuit of other legitimate interests, including the protection of the rights of others. There is also a positive obligation on the state to regulate or safeguard everyone’s right to freedom of expression and protect diversity of speech. This means that regulation, carefully crafted and proportionate, is not only permissible from a freedom of expression perspective but may be desirable.

Much existing debate about new regulatory models for social media has been framed by the question of the extent to which social media platforms fall within the exception provided for neutral hosts, or whether social media platforms are publishers. This debate goes nowhere because these regulatory models are an ill-fit for current practice — a new approach is needed.

British and European countries have adopted successful regulatory approaches across large swathes of economic and social activity. We judge that a regulatory regime for reducing harm on social media can draw from tried and tested techniques in the regulation of broadcasting, telecommunications, data, health and safety, medicine and employment.

One approach we are considering is the creation of a new regulatory function responsible for harm reduction within social media. In such a model, all providers of a social network service would have to notify the regulator of their work and comply with basic harm reduction standards. The largest operators — as they are likely to be responsible for greater risk — would be required to take more steps to limit harm. Our early thinking is that the best route to harm reduction would be to create a positive statutory duty of care owed by the social network operators to users of their service. This duty of care would relate to the technology design and the operation of the platform by its owners, including the harm-reducing tools available to protect its users and the enforcement by the platform of its terms and conditions. Parliament and the regulator would set out a taxonomy of harms that the duty of care was intended to reduce or prevent. This could contain harms such as the bullying of children by other children or misogynistic abuse, which are harmful but not necessarily illegal.

Oversight would be at a system level, not regulation of specific content. The regulator would have powers to inspect and survey the networks to ensure that the platform operators had adequate, enforced policies in place. The regulator, in consultation with industry, civil society and network users, would set out a model process for identifying and measuring harms in a transparent, consultative way. The regulator would then work with the largest companies to ensure that they had measured harm effectively and published harm reduction strategies.

The duty of care would require the larger platforms to identify the harm (by reference to the taxonomy) to their users (either on an individual or on a platform-wide basis) and then take appropriate measures to counter that harm. New law might identify the framework for understanding harm but not detailed approaches or rules, instead allowing the platforms to determine, refine and improve their responses to reducing harm. These could include technology-based responses or changes to their terms of service. Specifically, and crucially, new law would not require general monitoring of content nor outlaw particular content. The social media platforms would establish a pattern of identifying harm, measuring it, taking action to reduce it, assessing the impact of that action and taking further action. If the quantum of harm does not fall, the regulator would work with the largest companies to improve their strategies. This is similar to the European Commission’s approach to reducing hate speech. If action is not effective after a reasonable interval, the regulator would have penalties at its disposal.

Our view is that this process would be more transparent, consistent and accountable, and less one-sided, than the ex cathedra statements currently used by the platform operators to explain their harm reduction approach.

This approach would be implemented in a proportionate way, applying only to the largest platforms, where there is the most risk: broadly speaking, Facebook, Twitter, YouTube, LinkedIn, Instagram, Snapchat and Twitch. Our general view is that smaller platforms, such as those with fewer than 1 million members, should not be covered by the above but should be required to survey harms on their platform annually and publish a harm reduction plan. People would be able to bring networks that they felt were harmful to the regulator’s attention.

Sadly, there is a broad spectrum of harm and there may be a need for specific harm reduction mechanisms aimed at particular targets (whether vulnerable groups or problematic behaviours). In our work, we are also considering whether a regulator should become the prosecutor or investigating agency for existing crimes in this area that the police have struggled to pursue — stalking and harassment, hate speech etc. In relation to speech that is not obviously criminal, we are also considering whether an ombudsman function would help individuals better resolve disputes — perhaps helping to avoid over-criminalisation of ‘robust’ speech (e.g. in the context of S127 Communications Act). The UK Government has asked the Law Commission to examine this area and we shall submit our work to them.

Harm Reduction In Social Media — What Can We Learn From Other Models Of Regulation?

Assuming that some sort of regulation (or self- or co-regulation) is necessary to reduce harm, what form should it take?

The regulatory environment provides a number of models which could serve as a guide. In adopting any such model we need, at least until Brexit, to be aware of the constraints of the e-Commerce Directive. We also need to be aware of the limitations on governmental action arising from human rights considerations, specifically (though not limited to) freedom of expression. As we have discussed in previous blogs, limitations on rights must meet certain requirements, notably that they be proportionate. Neither of these regimes forecloses regulatory activity entirely.

In this blog we provide a short overview of a range of regulatory models currently being deployed and identify regulatory tools and approaches within them, such as risk assessments and enforcement notices, that may have some relevance or applicability for a harm-based regulation model for social media. We have discussed the regulatory frameworks frequently cited in relation to social media services, namely the electronic communications sector and data protection, as data processing is at the heart of social media services. We argue in forthcoming blog posts that social networks have strong similarities to public spaces in the physical world and have therefore also included some other regimes which relate to the safeguarding of public or semi-public spaces. Harm emanating from a company’s activities has, from a microeconomic external-costs perspective, similarities to pollution, and we also discuss environmental protection.

Telecommunications

Given that social media platforms are not directly content providers but rather channels or platforms through which content is transferred from one user to another, a sensible starting point is to look at the regulatory context for other intermediaries who also connect users to content: those providing the telecommunications infrastructure. The relevant rules are found in the Communications Act 2003. The telecommunications regime is expressly excluded from the e-Commerce Directive provisions (Article 4) prohibiting the prior licensing of “information society service providers” (the immunities could of course apply). There is in fact no prior license required under the Communications Act to provide electronic communications services, but a person providing a relevant “electronic communications network” or “electronic communications service” must under section 33 give prior notification to OFCOM (the independent sector regulator). While any person may be entitled to provide a network or services, that entitlement is subject to conditions with which the provider must comply. The conditions are “general conditions” and “special conditions”. As the name implies, “general conditions” apply to all providers, or all providers of a class set out in the condition. Special conditions apply only to the provider(s) listed in that special condition (see section 46 Communications Act). The conditions are set by the regulator in accordance with the Communications Act.

Section 51 sets down matters to which general conditions may relate. They include

“conditions making such provision as OFCOM consider appropriate for protecting the interests of the end-users of public electronic communications services”

which are elaborated to cover matters including the blocking of phone numbers in the case of fraud or misuse, as well as, in section 52, the requirement to have a complaints mechanism. This latter point is found in General Condition 14 (GC14), which obliges communications providers to have and to comply with procedures that conform to the Ofcom Approved Code of Practice for Complaints Handling when handling complaints made by domestic and small business customers. General conditions also cover public safety (in relation to electro-magnetic devices). The special conditions mainly relate to competition in the telecommunications market, specifically the rights of businesses to have access to networks on fair, reasonable and non-discriminatory terms.

In terms of enforcement, section 96 gives OFCOM the power to impose fines on providers for non-compliance with the conditions. Ultimately, OFCOM has the power to suspend the entitlement to provide the service (see section 100).

Digital Economy Act 2017

The Digital Economy Act 2017 covers a range of topics — we will focus on just one aspect: the provisions in relation to age verification and pornography, which are found in Part 3 of the Act (we have considered the Social Media Code of Practice elsewhere). This part of the Act is not yet fully in force.

The obligation is to ensure that pornographic material is not made available online to people under 18. The operator has freedom in how to attain this goal, but the Age Verification Regulator may check these steps. It may also issue guidance as to how age verification may be carried out. It may issue enforcement notices and/or impose penalties if a person has failed in this duty (or refuses to give the regulator information requested). The Act also empowers the regulator to issue notices to others who are dealing with the non-complying operator (section 21), such as credit card or other payment services. According to the Explanatory Memorandum, the purpose of these provisions is “to enable them to consider whether to withdraw services”. In relation to extreme pornography only, the regulator has the power to request that sites are blocked.

Data Protection

Another possible model in the sphere of information technology is that of data protection and, specifically, the General Data Protection Regulation (GDPR), which replaces (from 25 May 2018) the regime established by the Data Protection Directive, as implemented in the UK by the Data Protection Act 1998. An independent regulatory authority is an essential part of the regime, and the minimum standards of independence are set down in the GDPR.

The current regime requires those processing personal data to register with the Information Commissioner’s Office (ICO), to renew the registration annually, and to pay a fee (which varies depending on the size and nature of the organisation). Failure to do so is a criminal offence. The GDPR removes the annual renewal obligation but data protection fees still apply, by virtue of the Digital Economy Act 2017. Some information (e.g. name, contact details) will have to be submitted with this fee, but the current notification regime which required details about the data processing will cease to exist.

Central to the GDPR is the principle of accountability, which can require a data controller to show how it has complied with the rules. These rules essentially put obligations on controllers to process data in accordance with the ‘data processing principles’ set down in Article 5 GDPR. Another theme is a form of precautionary principle: data controllers must ensure both privacy and security by design. This could be described as a risk-based approach, as can be seen in the requirements regarding data security. For example, data controllers are required to “ensure a level of data security appropriate to the risk” and in general they should implement risk-based measures for ensuring compliance with the GDPR’s general obligations. High-risk processing activities trigger the need for a privacy impact assessment (in the GDPR, a data protection impact assessment) to be carried out (Article 35 GDPR). Article 36 specifies that where the impact assessment suggests that there is a high risk, the controller must consult the supervisory authority before proceeding.

As regards enforcement, these themes feed into the factors that the supervisory authorities take into account when assessing the size of fines to impose on a controller (or processor) in breach of the GDPR, as the authority will have “regard to technical and organisational measures implemented” by the controller or processor. Note that individuals also have a right to bring actions for data protection failings.

Health and Safety

Another model comes from outside the technology sector: health and safety. The Health and Safety at Work Act does not set down specific, detailed rules with regard to what must be done in each workplace but rather sets out some general duties that employers have, both as regards their employees and as regards the general public. So section 2(1) specifies:

It shall be the duty of every employer to ensure, so far as is reasonably practicable, the health, safety and welfare at work of all his employees.

The next sub-section then elaborates on particular routes by which that duty of care might be achieved: e.g. the provision of machinery that is safe, the training of relevant individuals and the maintenance of a safe working environment. The Act also imposes reciprocal duties on employees.

While the Health and Safety at Work Act sets goals, it leaves employers free to determine what measures to take based on risk assessment. Exceptionally, where risks are very great, regulations set down what to do about them (e.g. Control of Major Accident Hazards Regulations 1999). In respect of hazardous industries it may operate a permission regime, in which activities involving significant hazard, risk or public concern require consent; or a licensing regime to permit activities, such as the storing of explosive materials, that would otherwise be illegal.

The area is subject to the oversight of the Health and Safety Executive (HSE), whose functions are set down in the Act. It may carry out investigations into incidents and it has the power to approve codes of practice. It also has enforcement responsibilities and may serve “improvement notices” as well as “prohibition notices”. As a last measure, the HSE may prosecute. There are sentencing guidelines which identify factors that influence the severity of the penalty. Points that tend towards high penalties include flagrant disregard of the law, failing to adopt measures that are recognised standards, failing to respond to concerns or to change/review systems following a prior incident, and serious or systematic failure within the organisation to address risk.

Environmental Protection

Another regime which deals with spaces is the Environmental Protection Act 1990. It imposes a duty of care on anyone who produces, imports, keeps, stores, transports, treats or disposes of waste, and on brokers of or those who control waste (“waste holders”) (section 34), as well as on householders (section 75(5)). Waste holders must register, with the possibility of a fine for non-compliance, and there is a prohibition on unauthorised disposal of waste, backed up with a criminal penalty.

More detail on what the duty of care requires is set down in secondary legislation, and codes of practice give practical guidance. Waste holders are under a duty to take all reasonable steps to:

  1. prevent unauthorised or harmful deposit, treatment or disposal of waste;
  2. prevent a breach (failure) by any other person to meet the requirement to have an environmental permit, or a breach of a permit condition;
  3. prevent the escape of waste from their control;
  4. ensure that any person to whom they transfer the waste has the correct authorisation; and
  5. provide an accurate description of the waste when it is transferred to another person.

The documentation demonstrating compliance with these requirements must be kept for two years. Breach of the duty of care is a crime.

Householders’ duties are more limited: they have a duty to take all reasonable measures to ensure that any household waste produced on their property is only transferred to an authorised person — a householder could be prosecuted for fly-tipping of waste by a contractor (plumber, builder) employed by the householder.

As well as this duty of care, businesses are required under Regulation 12 of the Waste (England and Wales) Regulations 2011 to take all such measures as are reasonable in the circumstances to:

  • prevent waste; and
  • apply the “waste hierarchy” (a five-step strategy for dealing with waste, ranging from prevention through recycling to disposal, which derives from the EU Waste Framework Directive (2008/98/EC)) when they transfer waste.

In doing so, businesses must have regard to any guidance developed on the subject by the appropriate authorities.

The responsible regulators are the Environment Agency/Natural Resources Wales/Scottish Environment Protection Agency and local authorities. They may issue enforcement notices, and fines may be levied. If criminal action is taken, there is a sliding scale based on culpability and harm factors identified in guidance. The culpability assessment deals with the question of whether the organisation has deliberately breached the duty, done so recklessly or negligently — or to the contrary, not been particularly at fault in this regard.

Assessment

These sectors operate under general rules set by Parliament and refined by independent, evidence-based regulators and the courts in a transparent, open and democratic process. Modern, effective regulation of these sectors supports trillions of pounds of economic activity by enforcing the rights of individuals and companies. It also contributes to socially just outcomes, as intended by Parliament, through the internalisation of external costs and benefits. The Government’s Internet Safety Strategy Green Paper detailed extensive harms, with costs to society and individuals, resulting from people’s consumption of social media services. Social media services companies’ early-stage growth models and service design decisions appear to have been predicated on such costs being external to their own production decisions. Effective regulation would internalise these costs for the largest operators and lead to more efficient outcomes for society. There is a good case that the market for social media services is failing: at a basic level, people do not comprehend the price they are paying to use a social media service. Recent research by doteveryone revealed that 70% of people ‘don’t realise free apps make money from data’ and 62% ‘don’t realise social media make money from data’. Without basic awareness of price and value amongst consumers it will be hard for a market to operate efficiently, if at all.

There are many similarities between the regimes. One key element of many of the regulators’ approaches is that changes in policy take place in a transparent manner and after consultation with a range of stakeholders. Further, all have some form of oversight and enforcement — including criminal penalties — and the regulators responsible are independent from both Parliament and industry. Breach of statutory duty may also lead to civil action. These matters of standards and of redress are not left purely to the industry.

There are, however, differences between the regimes. One point to note with regard to the telecommunications regime is that OFCOM may stop the provider from providing the service. While the data protection regime may impose — post GDPR — hefty penalties, it may not stop a controller from being a controller. Again, with regard to the HSE, particular activities may be the subject of a prohibition notice, but this does not disqualify the recipient from being an employer; the notice relates to a particular behaviour. Another key difference is that in the telecommunications regime the standards to be met are specified in some detail by OFCOM. In the other regimes, although general obligations are identified, the responsibility lies with the controller/employer to understand the risks involved and to take appropriate action, though high-risk activities in both regimes are subject to tighter control and even a permissioning regime. While the telecommunications model may seem appropriate given the telecommunications sector’s closeness to social media, it may not be the most appropriate model, for four reasons:

  • the telecommunications regime has the possibility of stopping the service, and not just problematic elements of the service; we question whether this is appropriate in the light of freedom of speech concerns;
  • the telecommunications regime specifies the conditions — we feel that this is too ‘top-down’ for a fast moving sector and allowing operators to make their own assessment of how to tackle risks means that solutions may more easily keep up with change, as well as be appropriate;
  • a risk-based approach could also allow the platforms to differentiate between different types of audience — and perhaps to compete on that basis; and
  • the telecommunications regime is specific to telecommunications, whereas the data and workplace regimes are designed to cover the risks entailed by broader swathes of general activity.

Although the models have points of commonality (particularly the approach of setting high-level goals and then relying on the operators to make their own decisions about how best to achieve them, which allows flexibility and a certain amount of future-proofing), there are aspects of individual regimes that are worth highlighting:

  • the data protection and HSE regimes highlight that there may be differing risks, with two consequences: that measures should be proportionate to those risks, and that in areas of greater risk there may be greater oversight;
  • the telecoms regime emphasises the importance of transparent complaints mechanisms, including complaints against the operator itself (and not just against other users);
  • the environmental regime introduces the ideas of prevention and prior mitigation, as well as the possibility for those under a duty to be liable for the activities of others (e.g. in the case of fly-tipping by a contractor); and
  • the Digital Economy Act has mechanisms in relation to effective sanctions when the operator may lie outside the UK’s jurisdiction.

Reducing Harm In Social Media Through A Duty Of Care

Duty of care

The idea of a “duty of care” is straightforward in principle. A person (including companies) under a duty of care must take care in relation to a particular activity as it affects particular people or things. If that person does not take care and someone comes to harm as a result then there are legal consequences. A duty of care does not require a perfect record — the question is whether sufficient care has been taken. A duty of care can arise in common law (in the courts) or, as our blog on regulatory models shows, in statute (set out in a law). It is this latter statutory duty of care we envisage. For statutory duties of care, as our blog post also set out, while the basic mechanism may be the same, the details in each statutory scheme may differ — for example the level of care to be exhibited, the types of harm to be avoided and the defences available in case of breach of duty.

Social media services are like public spaces

Many commentators have sought an analogy for social media services as a guide to the best route to regulation. A common comparison is that social media services are ‘like a publisher’. In our view the main analogy for social networks lies outside the digital realm. When considering harm reduction, social media networks should be seen as public places — like an office, bar or theme park. Hundreds of millions of people go to social networks owned by companies to do a vast range of different things. In our view, they should be protected from harm when they do so.

The law has proven very good at this type of protection in the physical realm. Workspaces, public spaces, even houses in the UK owned or supplied by companies have to be safe for the people who use them. The law imposes a ‘duty of care’ on the owners of those spaces. The company must take reasonable measures to prevent harm. While the company has freedom to adopt its own approach, the issue of what is ‘reasonable’ is subject to the oversight of a regulator, with recourse to the courts in case of dispute. If harm does happen, the victim may have rights of redress in addition to any enforcement action that a regulator may take against the company. By making companies invest in safety, the market works better, as the company bears the full costs of its actions rather than getting an implicit subsidy when society bears the costs.

A broad, general, almost future-proof approach to safety

Duties of care are expressed in terms of what they want to achieve, a desired outcome (i.e. the prevention of harm), rather than in terms of the steps (the process) of how to get there. This means that duties of care work in circumstances where so many different things happen that you could not write rules for each one. This generality works well in multifunctional places like houses, parks, grounds, pubs, clubs, cafes and offices, and has the added benefit of being to a large extent future-proof. Duties of care set out in law 40 years ago or more still work well — for instance, the duty of care from employers to employees in the Health and Safety at Work Act 1974 still performs well, despite today’s workplaces being profoundly different from 1974’s.

In our view the generality and simplicity of a duty of care work well for the breadth, complexity and rapid development of social media services, where writing detailed rules in law is impossible. By taking an approach similar to that applied to corporately owned public spaces, workplaces, products etc. in the physical world, harm can be reduced in social networks. Making owners and operators of the largest social media services responsible for the costs and actions of harm reduction will also make markets work better.

Key harms to prevent

When Parliament sets out a duty of care, it often sets down in the law a series of prominent harms, or areas that cause harm, that it feels need particular focus, as a subset of the broad duty of care. This approach has the benefit of guiding companies on where to focus and makes sure that Parliament’s priorities are not lost.

We propose setting out the key harms that qualifying companies have to consider under the duty of care, based in part on the UK Government’s Internet Safety Strategy Green Paper. We list here some areas that are already criminal offences; the duty of care aims to prevent an offence happening and so requires social media service providers to take action before activity reaches the level at which it would become an offence.

Harmful threats — statements of an intention to cause pain, injury, damage or other hostile action, such as intimidation; psychological harassment; threats of a sexual nature; threats to kill; and racial or religious threats, known as hate crime, covering hostility or prejudice based on a person’s race, religion, sexual orientation, disability or transgender identity. We would extend hate crime to include misogyny.

Economic harm — financial misconduct, intellectual property abuse.

Harms to national security — violent extremism, terrorism, state-sponsored cyber warfare.

Emotional harm — emotional harm suffered by users that does not reach the criminal threshold of a recognised psychiatric injury, for instance through the aggregated abuse of one person by many others in a way that would not happen in the physical world (see Stannard 2010 on emotional harm below a criminal threshold). This includes harm to vulnerable people, in respect of suicide, anorexia, mental illness etc.

Harm to young people — bullying, aggression, hate, sexual harassment and communications, exposure to harmful or disturbing content, grooming, child abuse (See UKCCIS Literature Review)

Harms to justice and democracy — intimidation of people taking part in the political process beyond robust debate, and interference with the criminal and trial process (see the Attorney General and the Committee on Standards in Public Life).
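
As a purely illustrative aside, such a taxonomy could be represented in a regulator’s measurement template as a simple set of named categories. The Python sketch below paraphrases the headings above; the real taxonomy would be fixed by Parliament and the regulator, not by this kind of shorthand.

```python
# Hypothetical shorthand for the taxonomy of key harms described above.
from enum import Enum


class HarmCategory(Enum):
    HARMFUL_THREATS = "threats, intimidation and hate crime (including misogyny)"
    ECONOMIC_HARM = "financial misconduct and intellectual property abuse"
    NATIONAL_SECURITY = "violent extremism, terrorism, state-sponsored cyber warfare"
    EMOTIONAL_HARM = "aggregated abuse below the criminal threshold"
    HARM_TO_YOUNG_PEOPLE = "bullying, grooming and exposure to harmful content"
    JUSTICE_AND_DEMOCRACY = "intimidation in the political process and prejudice to trials"
```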

We would also require qualifying social media service providers to ensure that their service was designed in such a way as to be safe to use, including at a system design level. This represents a hedge against unforeseen developments, as well as being an aggregate of preventing the above harms. We have borrowed the concept from risk-based regulation in the General Data Protection Regulation and the Health and Safety at Work Act, which both, in different ways, require activity to be safe or low risk by design.

People would have rights to sue eligible social media service providers under the duty of care; for the avoidance of doubt, a successful claim would have to show a systemic failing rather than rest on an isolated instance of content. But, given the huge power of most social media service companies relative to an individual, we would also appoint a regulator. The regulator would ensure that companies have measurable, transparent, effective processes in place to reduce harm, so as to help avoid the need for individuals to take action in the first place. The regulator would have powers of sanction if they did not.

Which Social Media Services Should Be Regulated For Harm Reduction?

In this article, we discuss which social media services would be subject to a statutory duty of care towards their users.

Parliament would set out in law the characteristics of social media services that could be covered by the regime. There are always difficult boundary cases, and to mitigate this we propose that the regulator make a list of qualifying services.

Qualifying social media services

We suggest that the regime apply to social media services used in the UK that have the following characteristics (an illustrative sketch follows the list):

  1. Have a strong two-way or multiway communications component;
  2. Display and organise user-generated content publicly or to a large member/user audience;
  3. Have a significant number of users or a large audience — more than, say, 1,000,000; and
  4. Are not subject to a detailed existing regulatory regime, such as that governing traditional media.
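
To make the qualification test concrete, here is a small, purely illustrative Python sketch of how a regulator’s published criteria might be encoded. The field names, the use of 1,000,000 as a hard cut-off and the example service are our own assumptions for illustration, not a definitive legal test.

```python
# Hypothetical sketch of the qualifying criteria listed above.
from dataclasses import dataclass

USER_THRESHOLD = 1_000_000  # "more than, say, 1,000,000" users or audience


@dataclass
class Service:
    name: str
    two_way_communication: bool         # strong two-way or multiway component
    displays_user_content_widely: bool  # organises UGC for a large audience
    uk_users: int
    already_regulated: bool             # e.g. traditional media


def qualifies(service: Service) -> bool:
    """Return True only if the service meets all four characteristics."""
    return (
        service.two_way_communication
        and service.displays_user_content_widely
        and service.uk_users > USER_THRESHOLD
        and not service.already_regulated
    )


# Invented example: a large network with 2.5m UK users would qualify.
print(qualifies(Service("example-network", True, True, 2_500_000, False)))  # True
```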

A regulator would produce detailed criteria for qualifying social media services based on the above and consult on them publicly. The regulator would be required to maintain a market intelligence function to inform consideration of these criteria. Evidence to inform judgements could come from: individual users, civil society bodies acting on behalf of individuals, whistle-blowers, researchers, journalists, consumer groups, the companies themselves, overseas markets in which the services operate, as well as observation of trends on the platforms.

In order to maintain an up to date list, companies which fall within the definition of a qualifying social media service provider would be required in law to notify the regulator after they have been operating for a given period. Failure to do so would be an offence. Notification would be a mitigating factor should the regulator need to administer sanctions.

The regulator would publish a list based on the notifications and on market intelligence, including the views of the public. The regulator’s decision to include a service on the list could, as for any such decision, be subject to judicial review, as could a decision not to include a service that the public had petitioned for. Services could be added to the list with due process at any time, but the regulator should review the entire list every two years.

Broadly speaking, we would anticipate at least the following social media service providers qualifying; we have asterisked cases for discussion below.

  1. Facebook
  2. Twitter
  3. YouTube
  4. Instagram
  5. Twitch*
  6. Snapchat
  7. Musical.ly*
  8. Reddit
  9. Pinterest*
  10. LinkedIn

Managing boundary cases

Providing a future-proof definition of a qualifying social media service is tricky, and we would welcome views on how to tighten it up. However, we feel that having the regulator draw up the list, rather than writing it into legislation, allows for some future-proofing. It also reduces the risk of political interference — it is quite proper for the government to act to reduce harm, but in our view there would be free speech concerns if the government were to say who was on the list. An alternative would be for the regulator to advise the Secretary of State and for them to seek a negative resolution in Parliament, but in our view this brings in a risk to independence and freedom of speech.

Internet forums have some of the characteristics we set out above. However, hardly any forums would currently have enough members to qualify. The very few forums that do have over one million members have, in our opinion, reached that membership level through responsible moderation and community management. In a risk-based regime (see below) they would be deemed very low risk and barely affected. We do not intend to capture blog publishing services, but it is difficult to define them out. We would welcome views on whether the large-scale interaction about a post that used to occur in blog comments in the heyday of blogging is of a similar magnitude to the two-way conversation on social media. We do not think it is, but it is hard to find data. We would welcome comments on whether this boundary is sufficiently clear and how it could be improved.

Twitch has well-documented abuse problems and arguably more sophisticated banning regimes for bad behaviour than other social networks. Twitch allows gamers to stream content that they have generated (on games sites) with the intention of interacting with an audience about that content. Twitch provides a place for that display and for multiway discussion about it, and provides a form of organisation that allows a user to find the particular content they wish to engage with. We therefore feel that Twitch falls within scope. Other gaming services with a strong social media element should also be considered, particularly those with a strong youth user base.

Note that services do not need to include (much) text or voice: photo sharing services such as Pinterest could fall within the regime too.

Risk based regulation — not treating all qualifying services the same

This regime is risk-based. We are not proposing that a uniform set of rules apply across very different services and user bases. The regulator would prioritise high-risk services and would have only minimal engagement with low-risk services. Differentiation between high- and low-risk services is common in other regulatory regimes, such as the GDPR for data, and is central to health and safety regulation. In those regimes, high-risk activities are subject to closer oversight and tighter rules, as we intend here.

Harmful behaviours and risk have to be seen in the context of the platform. The regulator would examine whether a social media service operator has had particular regard to its audience. For example, a mass-membership, general-purpose service should manage risk by setting a very low tolerance for harmful behaviour, in the same way that some public spaces take into account that they should be a reasonably safe space for all. Specialist audiences/user bases of social media services may have online behavioural norms that on a family-friendly service could cause harm but in the community where they originate are not harmful. Examples might include sports-team fan services or sexuality-based communities. This can be seen particularly well with Reddit: its user base, with diverse interests, self-organises into separate subreddits, each with its own behavioural culture and moderation.

Services targeted at young people are innately higher risk, particularly where they are designed to be used on a mobile device away from immediate adult supervision. For example, the teen-focussed lip-syncing and video-sharing site musical.ly, owned by Chinese group Bytedance, has, according to Channel 4 News, 2.5 million UK members and convincing reports of harmful behaviours. The service is a phone app targeted at young people that also allows them to video cast their lives (through its live.ly service) with, as far as we can make out, few meaningful parental controls. In our opinion, this appears to be a high-risk service.

How Would A Social Media Harm Regulator Work?

Reducing harm in social media — regulation and enforcement

We have set out in a series of blog posts a proposal for reducing harm from social media services in the UK (see end for details about the authors). The harm reduction system will require new legislation and a regulator. In this post we set out our first thoughts on the tasks to be given to a regulator and how the regulator would go about putting them into action.

How a regulator might work

Parliament should only set a framework within which the regulator has flexibility to reduce harm and respond appropriately in a fast-moving environment. Our proposal (see earlier posts) is that the regulator be tasked with ensuring that social media service providers have adequate systems in place to reduce harm while preserving freedom of speech in the European tradition. The regulator would not get involved in individual items of speech. The regulator must not be a censor.

Harm reduction cycle

We envisage an ongoing, evidence-based process of harm reduction. For harm reduction in social media, the regulator could work with the industry to create an ongoing harm reduction cycle that is transparent, proportionate, measurable and risk-based.

A harm reduction cycle begins with the measurement of harms. The regulator would draw up a template for measuring harms, covering scope, quantity and impact. The regulator would use as a minimum the harms set out in statute but, where appropriate, include other harms revealed by research, advocacy from civil society, the qualifying social media service providers etc. The regulator would then consult publicly on this template, specifically including the qualifying social media service providers. Regulators in the UK such as the BBFC, the ASA and OFCOM (and its predecessors) have demonstrated for decades that it is possible to combine quantitative and qualitative analysis of media, free from political influence, for regulatory purposes.

The qualifying social media services would then run a measurement of harm based on that template, making reasonable adjustments to adapt it to the circumstances of each service. The regulator would have powers in law to require the qualifying companies (see enforcement below) to comply. The companies would be required to publish the survey results in a timely manner. This would establish a first baseline of harm.

The companies would then be required to act to reduce these harms. We expect those actions to be in two groups: things companies just do, or stop doing, immediately; and actions that would take more time (for instance new code or changes to terms and conditions). Companies should seek views from users, as the victims of harms, or from NGOs that speak for them. These comments — or more specifically the qualifying social media service providers’ respective responses to them (though it should be emphasised that companies need not adopt every such suggestion made) — would form part of any assessment of whether an operator was taking reasonable steps and satisfying its duty of care. Companies would be required to publish, in a format set out by the regulator:

  • what actions they have taken immediately;
  • actions they plan to take;
  • an estimated timescale for measurable effect; and
  • basic forecasts for the impact on the harms revealed in the baseline survey and any others they have identified.

The regulator would take views on the plan from the public, industry, consumers/users and civil society, and would make comments on the plan to the company, including comments as to whether the plan was sufficient and/or appropriate. The companies would then continue or begin their harm reduction work.

Harms would be measured again after sufficient time has passed for harm reduction measures to have taken effect, repeating the initial process. This establishes the first progress baseline.

The progress baseline will reveal four likely outcomes:

  • harms have risen;
  • harms have stayed the same;
  • harms have fallen; or
  • new harms have occurred.

If harms surveyed in the baseline have risen or stayed the same, the companies concerned will be required to act and plan again, taking due account of the views of victims, NGOs and the regulator. In these instances, the regulator may take the view that the duty of care is not being satisfied and, ultimately, may take enforcement action (see below). If harms have fallen, then companies will reinforce this positive downward trajectory in a new plan. Companies would prepare second harm reduction reports/plans as in the previous round, but including learning from the first wave of actions, successful and unsuccessful. Companies would then implement the plans. The regulator would set an interval before the next wave of evaluation and reporting.

Well-run social media services would quickly settle down to a much lower level of harm and shift to less risky designs. This cycle of harm measurement and reduction would be repeated continually; as in any risk management process, participants would have to maintain constant vigilance.
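
To make the cycle concrete, here is a minimal, purely illustrative Python sketch of the comparison step described above. The harm categories and figures are invented placeholders and the function is our own shorthand; a real exercise would follow the regulator’s published measurement template.

```python
# Sketch of comparing a progress baseline against the first baseline of harm.
from typing import Dict

HarmSurvey = Dict[str, float]  # harm category -> measured level


def compare_surveys(baseline: HarmSurvey, follow_up: HarmSurvey) -> Dict[str, str]:
    """Classify each harm in the progress baseline against the first baseline."""
    outcomes = {}
    for harm, level in follow_up.items():
        previous = baseline.get(harm)
        if previous is None:
            outcomes[harm] = "new harm: add to the next plan"
        elif level > previous:
            outcomes[harm] = "risen: revise the plan; the regulator may intervene"
        elif level == previous:
            outcomes[harm] = "unchanged: revise the plan; the regulator may intervene"
        else:
            outcomes[harm] = "fallen: reinforce in the next plan"
    return outcomes


# Invented example figures only.
first_baseline = {"bullying": 0.31, "harmful threats": 0.12}
progress_baseline = {"bullying": 0.22, "harmful threats": 0.12, "economic harm": 0.05}
print(compare_surveys(first_baseline, progress_baseline))
```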

At this point we need to consider the impact of the e-Commerce Directive. As we discussed, the e-Commerce Directive gives immunity from liability to neutral intermediaries under certain conditions. Although we are not convinced that all qualifying social media companies would be neutral intermediaries, there is a question as to whether some of the measures that might be taken as part of a harm reduction plan could mean that the qualifying company loses its immunity, which would be undesirable. There are three comments that should be made here:

  • Not all measures that could be taken would have this effect;
  • The Commission has suggested that the e-Commerce Directive be interpreted, in the context of taking down hate speech and other similarly harmful content (see Communication of 28 September 2017), as not meaning that those who take proactive steps to prevent such content should be regarded as thereby assuming liability; and
  • After Brexit, there may be some scope for changing the immunity regime — including the chance to include a ‘good Samaritan defence’ expressly.

This harm reduction cycle is similar to the techniques used by the European Commission as it works with the social media service providers to remove violent extremist content.

Other regulatory techniques

Alongside the harm reduction cycle, we would expect the regulator to employ a range of techniques derived from harm reduction practice in other areas of regulation. We draw the following from a wide range of regulatory practice rather than from the narrow set of tools currently employed by the tech industry (take-down, filtering etc). Some of these the regulator would do itself; others it would require the companies to do. For example:

Each qualifying social media service provider could be required to:

  • develop a statement of risks of harm, prominently displayed to all users when the regime is introduced and thereafter to new users; and when launching new services or features;
  • provide its child protection and parental control approach, including age verification, for the regulator’s approval;
  • display a rating of harm agreed with the regulator on the most prominent screen seen by users;
  • work with the regulator and civil society on model standards of care in high risk areas such as suicide, self-harm, anorexia, hate crime etc; and
  • provide adequate complaints handling systems with independently assessed customer satisfaction targets, and also produce a twice-yearly report, to a standard set by the regulator, on the breakdown of complaints (subject, satisfaction, numbers, handled by humans, handled by automated methods etc.).

The regulator would:

  • publish model policies on user sanctions for harmful behaviour, sharing research from the companies and independent research;
  • set standards for and monitor response times to queries (as the European Commission does on extremist content through mystery shopping);
  • co-ordinate with the qualifying companies on training and awareness for the companies’ staff on harms;
  • contact social media service companies that do not qualify for this regime to see if regulated problems move elsewhere and to spread good practice on harm reduction;
  • publish a forward look at non-qualifying social media services brought to the regulator’s attention that might qualify in future;
  • support research into online harms — both funding its own research and co-ordinating the work of others;
  • establish a reference/advisory panel to provide external advice to the regulator — the panel might comprise civil society groups, people who have been victims of harm and free speech groups; and
  • maintain an independent appeals panel.

Consumer redress

We note the many complaints from individuals that social media services companies do not deal well with complaints. The most recent high-profile example is Martin Lewis’s case against Facebook. At the very least, qualifying companies should have internal mechanisms for redress that meet standards, set by an outside body, of simplicity (as few steps as possible), speed, clarity and transparency. We would establish a body or mechanism to improve the handling of individual complaints, or legislate to make the service providers do so. There are a number of routes which require further consideration: one might be an ombudsman service, commonly used with utility companies although not with great citizen satisfaction; another might be a binding arbitration process; or possibly both. We would welcome views to the address below.

Publishing performance data (specifically in relation to complaints handling) to a regulatory standard would reveal how well the services are working. We wish to ensure that the right of an individual to go to court, which makes the duty of care more effective, is not diluted, but we recognise that court action is unaffordable for many. None of the above would remove an individual’s right to go to court, or to go to the police if they felt a crime had been committed.

Sanctions and compliance

Some of the qualifying social media services will be amongst the world’s biggest companies. In our view the companies will want to take part in an effective harm reduction regime and comply with the law. The companies’ duty is to their shareholders — in many ways they require regulation to make serious adjustments to their business for the benefit of wider society. The scale at which these companies operate means that a proportionate sanctions regime is required. We bear in mind the Legal Services Board (2014) paper on Regulatory Sanctions and Appeals processes:

‘if a regulator has insufficient powers and sanctions it is unlikely to incentivise behavioural change in those who are tempted to breach regulators requirements.’

Throughout discussion of sanctions there is a tension with freedom of speech. The companies are substantial vectors for free speech, although by no means exclusive ones. The state and its actors must take great care not to be seen to be penalising free speech unless the action of that speech infringes the rights of others not to be harmed or to speak themselves. The sanctions regime should penalise bad processes that lead to harm.

All processes leading to the imposition of sanctions should be transparent and subject to a civil standard of proof. By targeting the largest companies, all of which are equipped to code and recode their platforms at some speed, we do not feel that a defence of ‘the problem is too big’ is adequate. There may be a case for some statutory defences and we would welcome views as to what they might be.

Sanctions would include:

  • Administrative fines in line with the parameters established through the Data Protection Bill regime: up to €20 million or 4% of annual global turnover, whichever is higher (a simple worked illustration follows this list).
  • Enforcement notices (as used in data protection and health and safety) — in extreme circumstances, a notice to a company to stop it doing something. Breach of an enforcement notice could lead to substantial fines.
  • Enforceable undertakings where the companies agree to do something to reduce harm.
  • Adverse publicity orders — the company is required to display a message detailing its offence on the screen most visible to all users. A study of the impact of reputational damage on financial services companies that commit offences in the UK found it to be nine times the impact of the fine.
  • Forms of restorative justice — where victims sit down with company directors and tell their stories face to face.
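
As a purely arithmetical illustration of the fine cap in the first bullet, the Python sketch below (with invented example turnover figures that are not drawn from the proposal) shows how “whichever is higher” works in practice.

```python
# Illustrative only: the cap is the higher of a fixed amount and a share of
# annual global turnover, mirroring the GDPR-style parameters cited above.
FIXED_CAP_EUR = 20_000_000   # €20 million
TURNOVER_SHARE = 0.04        # 4% of annual global turnover


def maximum_fine(annual_global_turnover_eur: float) -> float:
    """Return the maximum administrative fine available to the regulator."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_global_turnover_eur)


# A firm with €40bn turnover could face up to €1.6bn;
# a firm with €100m turnover would face the €20m fixed cap.
print(maximum_fine(40_000_000_000))  # 1600000000.0
print(maximum_fine(100_000_000))     # 20000000
```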

Sanctions for exceptional harm

The scale at which some of the qualifying social media services operate is such that there is the potential for exceptional harm. Consider a hypothetical example: a social media service is exploited to provoke a riot in which people are severely injured or die and widespread economic damage is caused; the regulator had warned about harmful design features in the service, those flaws had gone uncorrected, and the instigators or spreaders of insurrection exploited those features, deliberately or accidentally. Or sexual harm occurs to hundreds of young people due to the repeated failure of a social media company to provide parental controls or age verification in a teen video service. Are fines enough, or are more severe sanctions required, as seen elsewhere in regulation?

In extreme cases should there be a power to send a social media services company director to prison, or to turn off the service? Regulation of health and safety in the UK allows the regulator, in extreme circumstances which often involve a death or repeated, persistent breaches, to seek a custodial sentence for a director. The Digital Economy Act contains a power (section 23) for the age verification regulator to issue a notice to internet service providers to block a website in the UK. In the USA the new FOSTA-SESTA package apparently provides for criminal penalties (including, we think, arrest) for internet companies that facilitate sex trafficking. This led swiftly to the closure of dating services and to a sex worker forum having its DNS service withdrawn in its entirety.

None of these powers sit well with the protection of free speech on what are generalist platforms: withdrawing a whole service because of harmful behaviour in one corner of it deprives innocent users of their speech on the platform. However, the scale of social media services means that acute, large-scale harm can arise of a kind that would be penalised with gaol elsewhere in society. Further debate is needed.

Who Should Regulate To Reduce Harm In Social Media Services?

At the outset of this work we described a ‘regulatory function’. Our approach was to start with the problem — harm reduction — and work forwards from that, as opposed to starting with a regulator and their existing powers and trying to fit the problem into the shape of their activities. Our detailed post on comparative regulatory regimes gave some insight into our thinking. We now address two linked questions:

  • why a regulator is necessary, as we have already implied it is; and
  • the nature of that regulator.

The Need for a Regulator

The first question is whether a regulator is needed at all if a duty of care is to be created.

Is the possibility of individuals seeking redress in the courts for breach of this overarching duty (by contrast with an action over an individual piece of content) not sufficient? At least two pieces of profound legislation based on duties of care do not have ‘regulators’ as such: the Occupiers’ Liability Act 1957 and the Defective Premises Act 1972. By contrast, the Health and Safety at Work Act 1974 does rely on a regulator, now the Health and Safety Executive (HSE).

A regulator can address asymmetries of power between the victim and the party causing harm. It is conceivable for a home owner to sue a builder, or for an individual to sue over harm from a building or to sue a local authority for harm at a playground. There is, however, a strong power imbalance between an employee and their employer, or even between a trade union and a multinational, and a fully functioning regulator compensates for such asymmetries. In our opinion there are profound asymmetries between a user of a social media service and the company that runs it, even where the user is a business, and so a regulator is required to compensate for the user’s relative weakness.

What Sort of Regulator?

Assuming a regulator is needed, should it be a new regulator from the ground up or an existing regulator upon which the powers and resources are conferred? Need it be a traditional regulator, or would a self or co-regulator suffice? We do not at this stage rule out a co-regulatory model, although our preliminary conclusion is that a regulator is required. As we shall see below, instances of co-regulation in the communications sector have run into problems. Self-regulation works best when the public interest to be served and those of the industry coincide. This is not the case here.

Whichever model is adopted, the important point is that the regulator be independent (and its members comply with the Nolan Principles). The regulator must be independent not only from government but also from industry, so that it can make decisions based on objective evidence (and not under pressure from other interests) and be viewed as a credible regulator by the public. Independence means that it must have sufficient resources, as well as relevant expertise.

A completely new regulator created by statute would take some years to become operational. OFCOM, for instance, was first proposed in the Communications White Paper in December 2000 and created by a paving Act of Parliament in 2002, but did not vest and become operational until 29 December 2003, at a cost of £120m (2018 prices). In our view harm reduction requires more urgent (and less expensive) action.

We therefore propose extending the competence of an existing regulator. This approach has a number of advantages: it spreads the regulator’s overheads further, draws upon existing expertise within the regulator (both in terms of process and substantive knowledge) and allows a faster start. In our view the following (co-)regulators should be considered: the Advertising Standards Authority (ASA), the British Board of Film Classification (BBFC), the Health and Safety Executive (HSE) and the Office of Communications (OFCOM), all of which have long-proven regulatory ability.

The BBFC seems to have its hands full with its role as the age verification regulator under the Digital Economy Act 2017. The launch date has been missed for reasons that are unclear, and in our view this removes the BBFC from consideration. It also raises the question of how well delegated responsibilities work; OFCOM has recently absorbed responsibilities for video on demand rather than continuing to delegate them to ATVOD.

The ASA regulates some content online, including material on social media platforms, but this is limited to advertisements (including sponsorship and the like). Overall the ASA focusses quite tightly on advertising, and adding the substantial task of grappling with harm on social media services more broadly could damage its core functions.

The HSE has a strong track record in running a risk-based system to reduce harm in the workplace, including to some extent emotional harm. It has a substantial scientific and research capability, employing over 800 scientists and analysts. However, our judgement is that harm reduction in social media services requires a regulator with deep experience of, and specialism in, online industries, which is not where the HSE’s strengths lie.

Our recommendation is to vest in OFCOM the powers to reduce harm in social media services. OFCOM has over 15 years’ experience of digital issues, including regulating harm and protecting young people in broadcasting, a strong research capability, proven independence, a consumer panel, and resilience in dealing with multinational companies. OFCOM is of a size (£110–£120m annual income and 790 staff) where, with the correct funding, it could support an additional organisational unit to take on this work without unbalancing the organisation.

The regulator could be funded by a small fraction of the revenue the Treasury plans to raise from taxing the revenues of internet companies; the cost would be a tiny percentage of that tax take. The relative costs of large regulators suggest that the required resource would be in the low tens of millions of pounds.

Simple legislation to pass quickly

Action to reduce harm on social media is urgently needed, and we think there is a relatively quick route to implementation in law. A short bill before Parliament would create a duty of care and appoint, fund and give instructions to a regulator.

We have reviewed the very short Acts that established far more profound duties of care than regulating social media services would entail. The Defective Premises Act 1972 runs to only seven sections and 28 clauses (unusually, this was a private member’s bill drafted by the Law Commission); the Occupiers’ Liability Act 1957 is slightly shorter. The central clauses of the Health and Safety at Work Act 1974, creating a duty of care and a duty to provide safe machines, are brief.

For social media services, a duty of care and the key harms are simple to express in law, requiring fewer than ten clauses, or fewer still if the key harms are set out as sub-clauses. A duty of safe design would require a couple of clauses. Some further clauses amending the Communications Act 2003 would appoint OFCOM as the regulator and fund it for this new work. The greatest number of clauses might be needed for definitions and for the parameters of the list the regulator has to prepare. We speculate that an overall length of six sections totalling thirty clauses might suffice. This would be very small compared with the Communications Act 2003, which has 411 sections, thousands of clauses in the main body of the Act and 19 schedules of further provisions.

This makes for a short and simple bill that could slot into the Parliamentary legislative timetable, even though it is crowded by Brexit legislation. If the government did not bring legislation forward, a Private Peer’s or Private Member’s Bill could be considered.

We are considering drafting such a bill to inform debate and test our estimate.

About this blog post

This blog is from a programme of work on a proposed new regulatory framework to reduce the harm occurring on and facilitated by social media services. The authors William Perrin and Lorna Woods have vast experience in regulation and free speech issues. William has worked on technology policy since the 1990s, was a driving force behind the creation of OFCOM and worked on regulatory regimes in many economic and social sectors while working in the UK government’s Cabinet Office. Lorna is Professor of Internet Law at University of Essex, an EU national expert on regulation in the TMT sector, and was a solicitor in private practice specialising in telecoms, media and technology law. The blog post forms part of a proposal to Carnegie UK Trust and will culminate in a report later in the Spring.
