Whose duty is it anyway? Answering some common questions about a duty of care

Carnegie UK
August 2, 2019
by Professor Lorna Woods (University of Essex, Professor of Internet Law), William Perrin (Trustee, Carnegie UK Trust) and Maeve Walsh (Associate, Carnegie UK Trust)

The consultation on the UK Government’s Online Harms White Paper closed at the start of July, and the official timescale for the Government’s response is “by the end of the year”. In ordinary political circumstances, that would seem reasonable. In the current climate, it may be wildly optimistic, not least because a new Government with new Ministers in key DCMS and Home Office roles may reopen some of the policy foundations on which the White Paper was built. That said, there is significant cross-party support for regulation to address online harms and a broad consensus — spanning Select Committees, peers and MPs — that a “duty of care” is a sensible regulatory framework on which to proceed. We remain hopeful that this consensus holds and that a Parliament consumed by Brexit will welcome the opportunity to focus on something else, not least such an urgent social and political priority.

But whose “duty of care”? And what will it mean in practice? One of the main criticisms of the UK Government’s proposals has been the lack of detail on what its “duty of care” is and how it would operate. In our response to the White Paper consultation, we drew out the differences between the systemic approach that we had developed in our work for Carnegie UK Trust and the Government’s version, which gave disproportionate prominence to prescriptive codes of practice and thus appears more focused on notice and takedown of content than on risk-based prevention of harm.

We have also picked up a number of recurring themes in other consultation responses: assumptions or challenges about the UK Government’s “duty of care” (made in the absence of sufficient explanation within the White Paper itself) which may not apply to our proposal, set out in detail in our full reference paper from April this year, or indeed to the Government’s own intended framework. This blog post addresses some of those themes.

How does the statutory duty of care relate to the duty of care in negligence?

The phrase ‘duty of care’ originates in the common law tort of negligence, and there is a substantial body of case law on it. Neither Carnegie nor the UK Government is proposing a common law duty of care, but rather a regulatory scheme, set out in statute, which uses the concept in a different way. Parliament has on several occasions successfully based statutory obligations on a modified version of the common law idea of a duty of care to achieve policy goals, an approach known as a ‘statutory’ duty of care. An early example was the Occupiers’ Liability Act 1957. Here, the statute operated to amend the common law doctrine in relation to a particular issue: the persons to whom the duty was owed. While the Act dealt with a specific weakness in the existing case law, it also shows that the common law duty of care does not limit what can be done by statute. Given Parliamentary sovereignty, it would be very strange if the common law could limit political choices. Indeed, the courts have been unwilling to extend the application of negligence to completely new fields precisely because they think that lies outside the role of the courts and is properly something Parliament should do.

The Health and Safety at Work Act 1974 demonstrated a further evolution of a duty of care-based regime. The Act consolidated a number of separate work safety regimes into one and delivered a further shift away from private law actions towards a regulatory system. So, while the duty of care is still described as being owed to certain groups of people (employees in s. 2(1), and persons affected by an undertaking in s. 3(1)), general enforcement powers lie elsewhere. The Act is enforced by a regulatory authority, the Health and Safety Executive (HSE); the idea that a statutory duty of care may be enforced by a regulator is therefore not new and has worked successfully for decades.

So, while the duty of care in negligence and that in statute share a common ancestry, they are not the same, and the existence of one does not limit the development of the other.

This is all about notice and takedown of content, not a regulatory system to address harm reduction

We believe that the Government has not satisfactorily explained how a systemic duty of care would work. The White Paper’s focus on draft codes of practice (intended, we think, to explain to certain interest groups how their concerns would be met) obscured the core systemic nature of the proposal. Yet the White Paper sets out (in paragraphs 3.1–3.3) a statement which aligns closely with our proposal:

The Government will establish a new statutory duty of care on relevant companies to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services … This statutory duty of care will require companies to take reasonable steps to keep users safe and prevent other persons coming to harm as a direct consequence of activity on their services.

The Secretary of State confirmed to Parliament shortly after the publication of the White Paper that the overarching duty of care takes precedence over the codes of practice. The White Paper, however, does not go on to set out the basis of platforms’ responsibility in this context, and consequently the sorts of steps that they might be required to take. The design choices made by the companies in constructing these platforms are not neutral; they have an impact on content and how it is shared. Every pixel a user sees on an online service is there as a result of decisions taken by the company that operates it: decisions about the terms of service, the software that operates the service, and the resources put into enforcing the terms of service and maintaining the software. This can best be seen in the differences in content and user behaviour between services — they are different because they are designed and operated to be so. Companies have to own responsibility for reasonably foreseeable matters that arise from the operation of their service.

While the White Paper refers to the idea of safety by design, it is not clear what this means, nor, more generally, what types of steps would be required of companies. It does not mention that companies should perform a thorough risk assessment of their operations, from which their actions to mitigate the identified risks should flow. This is central to a duty of care approach. Companies will not be unfamiliar with this process from, for example, data protection impact assessment requirements. Their risk assessment should be shared with the regulator, who can critique it. From the risk assessment should flow a risk mitigation/reduction action plan; for the highest-risk companies, this would be agreed with the regulator. An important part of a systemic approach is that it is to some extent forward-looking: for instance, companies assessing the impact of software changes on harms, and acting on indicative evidence arising from a risk assessment framework such as the precautionary principle. The company should then take reasonable risk management steps based on evidence as to what works.
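
To make the shape of that cycle concrete, here is a minimal illustrative sketch in Python. Everything in it (the Risk and RiskAssessment names, the likelihood-times-severity scoring, the threshold) is our own hypothetical construction for illustration; neither the White Paper nor our proposal prescribes any particular model.

```python
# Illustrative sketch only: a minimal model of the risk assessment and
# mitigation cycle described above. All names, figures and thresholds are
# hypothetical; nothing here is specified by the White Paper or our proposal.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Risk:
    description: str   # e.g. "recommender surfaces self-harm content to minors"
    likelihood: float  # estimated probability of the harm arising, 0 to 1
    severity: int      # estimated impact if it arises, 1 (minor) to 5 (severe)

    def score(self) -> float:
        # A simple likelihood-times-severity score, in the style of
        # health-and-safety risk matrices.
        return self.likelihood * self.severity


@dataclass
class RiskAssessment:
    service: str
    risks: List[Risk] = field(default_factory=list)

    def mitigation_plan(self, threshold: float = 1.5) -> List[str]:
        # Risks scoring above the threshold require documented mitigation;
        # under the approach described above, the plan for the highest-risk
        # services would be agreed with the regulator.
        return [
            f"Mitigate: {r.description} (score {r.score():.1f})"
            for r in sorted(self.risks, key=Risk.score, reverse=True)
            if r.score() >= threshold
        ]


assessment = RiskAssessment("ExampleSocialService")
assessment.risks.append(Risk("bullying in group chats used by children", 0.6, 4))
assessment.risks.append(Risk("spam links in public comments", 0.9, 1))
for step in assessment.mitigation_plan():
    print(step)  # only the high-scoring risk makes it into the plan
```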

Risk aversion will lead to large-scale banning of content

We propose a risk-managed regime, not a risk-averse one, except for the most extreme harmful and illegal content, such as child sexual exploitation and terrorist material. A systemic approach looks at harms in the round, at an aggregate level, weighing what is reasonably practicable. Such an approach is not about penalising small errors, which might well drive a risk-averse response. Companies and the regulator should act on evidence of harm and on indicative evidence of harm, using for the latter a framework such as the precautionary principle. The precautionary principle provides a framework for companies to proceed with potentially risky activity in a managed way where direct evidence about harm causation is lacking, and protects them from crude banning by politicians or regulators if the company takes adequate risk management steps. Social media companies are great proponents of near-continuous testing of people’s reactions to material and should be well placed to design risk management frameworks. We explore the relevance of the precautionary principle in detail in our April 2019 paper, but this particular section is key (in our view) in setting out how a systemic approach, founded on the precautionary principle, can mitigate the risks relating to platforms’ potential role in “policing” speech that have been justifiably raised in consultation responses from freedom of expression campaigners and others:

Emergent evidence of harm caused by online services poses many questions: whether bullying of children is widespread or whether such behaviour harms the victim; whether rape and death threats to women in public life have any real impact on them, or society; or whether the use of devices with screens in itself causes problems. The precautionary principle provides the basis for policymaking in this field, where evidence of harm may be evident, but not conclusive of causation. Companies should embrace the precautionary principle as it protects them from requirements to ban particular types of content or speakers by politicians who may over-react in the face of moral panic. Parliament should guide the regulator with a non-exclusive list of harms for it to focus upon. Parliament has created regulators before that have had few problems in arbitrating complex social issues; these harms should not be beyond the capacity of a competent and independent regulator. Some companies would welcome the guidance.

The Government will be running the regime, not the regulator

Many consultation responses have focused on the long list of harms in scope of the proposed duty of care, and particularly on the distinction between “harms with a clear definition” (largely illegal) and “harms with a less clear definition”, and on the way that the proposed codes of practice then prescribe the handling of content related to those harms. The White Paper puts forward codes relating to eleven different types of content, each with different specified actions that must be taken into account by the relevant operators. In these codes there is undue emphasis on notice and takedown processes, with the unfortunate consequence that the Government appears to prioritise these over the safety-by-design features inherent in a systemic statutory duty of care. The focus on content also has the side effect that platform operators will need to understand the boundaries between these different types of content in order to apply the appropriate code. In our view, cross-cutting codes which focus on process (such as risk assessment and harm reduction) and on the routes to likely harm would be more appropriate.

We would also, as set out in our proposal, rather see the regulator (whether operating in shadow form or instructed to prepare for a statutory role) lead the process of working with companies, civil society groups and other stakeholders to draft and agree the codes, in response to high-level harms or outcomes identified by Parliament. Such an approach would give the parties a sense of practical and emotional investment in a long-term work programme, as well as supporting the independence of that process. The outcome would be likely to be more workable in practice too.

There will be constitutional limitations on the approach of the regulator (i.e. it is bound by the Human Rights Act and by the general approach to regulation and proportionality)

The Human Rights Act 1998 incorporates the European Convention on Human Rights into UK law. Any regulator under the statutory duty of care would be a public body, and the Human Rights Act imposes obligations on public bodies. It says, at s. 6, that:

(1) It is unlawful for a public authority to act in a way which is incompatible with a Convention right.

The starting point for a regulatory scheme would be that the regulator, where it has a choice, must take into account human rights protected by the Convention; that includes freedom of expression as well as the right to private life and other rights. Of course, it may be preferable to make the point expressly on the face of any statute setting up the statutory duty of care.

There are further principles in existing UK law that require all regulators to act in a proportionate manner. The Legislative and Regulatory Reform Act 2006 specifies that regulators covered by the Act should have regard to a code made under the Act, the Regulators’ Code, when developing policies and operational procedures that guide their regulatory activities. The aim of the code is to ensure that regulators are not heavy-handed in their approach. Para 1.1 of the current version of the code (2014) specifies:

Regulators should avoid imposing unnecessary regulatory burdens through their regulatory activities and should assess whether similar social, environmental and economic outcomes could be achieved by less burdensome means. Regulators should choose proportionate approaches to those they regulate, based on relevant factors including, for example, business size and capacity.

The Legislative and Regulatory Reform Act was amended by the Enterprise Act 2016; when that amendment comes into force, relevant regulators will need to report on the effect of the code. Currently the HSE and the Information Commissioner are covered by the code, and it would be possible to designate the new online harms regulator for these purposes too.

Ofcom is under a similar duty by virtue of the Communications Act 2003, s. 6, which requires Ofcom to keep the carrying out of its functions under review with a view to securing that regulation by Ofcom does not involve:

  • (a) the imposition of burdens which are unnecessary; or
  • (b) the maintenance of burdens which have become unnecessary.

Should Ofcom be designated as the regulator for the statutory duty of care, this obligation would most likely apply to those new functions too.

Compliance burdens for small firms will be disproportionate and will entrench the dominance of major platforms

We have seen White Paper responses during the consultation period warning that a duty of care would penalise start-ups and SMEs, lead to greater domination of the market by the large tech firms and stifle innovation in the UK, making the country uncompetitive and undesirable for further investment. We disagree. A level playing field will only be delivered by a baseline of regulation that requires all companies hosting user-generated content, no matter how big or small, to be responsible for the safety of users on their platforms. Some groups are sufficiently vulnerable (e.g. children) that any business aiming a service at them should take an appropriate level of care, whatever its size or newness to market. Beyond child protection, basic design and resourcing errors in a growth stage have caused substantial problems for larger services, and much of the debate on AI ethics is an attempt to bake in ethical behaviour at the outset. The GDPR emphasis on privacy by design also sets basic design conditions for all services, regardless of size. We are struck that in other areas even the smallest businesses have to take steps to ensure basic safety levels: the smallest sandwich shops have to follow food hygiene rules, and almost all businesses have to follow health and safety measures for their workforce. In both these cases, risks are assessed in advance by the companies concerned within a framework overseen by a regulator.

We do agree with the Government that there should be a proportionate approach to the implementation of the regulation; this will encourage innovation and protect against reinforcing the dominance of existing market players. As with the existing codes we discuss above, good regulators take account of company size, and regulation is applied in proportion to business size and capability. A proportionality assessment does not just take into account size, but also the nature and severity of the harm, as well as the likelihood of it arising. For small start-ups, it would be reasonable to focus on obvious high risks, whereas more established companies with greater resources might be expected not only to do more in relation to those risks but also to tackle a greater range of harms.

The regulator should determine, with industry and civil society, what is a reasonable way for an SME service provider to manage risk. Those deliberations might include the balance between managing foreseeable risk and fostering innovation (where we believe the former need not stymie the latter) and ensuring that new trends or emerging harms identified on one platform are taken account of by other companies in a timely fashion. The regulatory emphasis would be on what is a reasonable response to risk, taken at a general level. Formal risk assessments form part of the harm reduction cycle, and the regulator should measure the appropriateness of responses against that cycle.
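
As a further purely illustrative sketch, proportionality can be pictured as the same risk score attracting different expectations at different scales. The capacity tiers and thresholds below are our own hypothetical choices, continuing the invented scoring from the earlier sketch; they are not drawn from the White Paper or from our proposal.

```python
# Illustrative sketch only: a proportionality assessment weighing company
# capacity against an assessed risk. Tiers and thresholds are hypothetical.

def expected_response(risk_score: float, capacity: str) -> str:
    # risk_score: a likelihood-times-severity figure, as in the earlier sketch.
    # capacity: a crude proxy for business size and resources.
    thresholds = {"start-up": 3.0, "established": 1.5, "major platform": 0.5}
    if risk_score >= thresholds[capacity]:
        return "documented mitigation required"
    return "monitor and reassess"

# The same risk attracts different expectations at different scales:
for capacity in ("start-up", "established", "major platform"):
    print(f"{capacity}: {expected_response(1.0, capacity)}")
# start-up: monitor and reassess
# established: monitor and reassess
# major platform: documented mitigation required
```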

A duty of care doesn’t fit with the e-Commerce Directive

In our full reference paper, we set out how our proposed regulatory regime would fit with the immunity required by the e-Commerce Directive. In summary, the logic of the Directive is not to exclude ISS providers who provide hosting services from all forms of regulation; indeed, they are not immune from all forms of legal action. The immunity relates to liability “for the information stored” and not to other possible exposures to liability. This means there is a difference between rules aimed at the content itself (where, insofar as such rules are acceptable from a human rights perspective, liability would in principle fall on the user, the ISS host being exposed only if it (a) was not neutral as to the content, and/or (b) did not take the content down expeditiously) and rules aimed at the functioning of the platform itself (which might include rules as to how quickly such systems should take content down). Indeed, the e-Commerce Directive recognises that some such rules could be imposed: recital 48 refers to the possibility of Member States imposing duties of care on hosts to “detect and prevent certain types of illegal activities”. The placement of the recital suggests that it is intended to clarify the prohibition in Article 15 on Member States requiring ISS providers to carry out general monitoring; recital 47 likewise clarifies that Article 15 does not concern monitoring obligations in a specific case. The boundary between such specific obligations and general monitoring is, however, not clear.

Further Information

We continue to welcome feedback and to work collaboratively with other organisations seeking to achieve similar outcomes in this area. Contact us via: [email protected]

For further information, please see the resources on the project page.

Originally published at https://www.carnegieuktrust.org.uk on August 2, 2019.
