Last week British Prime Minister David Cameron prompted feverish reactions amongst privacy advocates and technologists with his proposals to revive legislation previously dubbed the Snooper’s Charter. Various media outlets ran headlines claiming that the legislative changes proposed by Cameron would allow the UK government to ban apps such as WhatsApp and Snapchat, which use end-to-end encryption. The purpose of end-to-end encryption is to prevent intermediaries from discovering or tampering with the content of communications in transit, which is vital for secure online transactions, including e-commerce and online banking. Cameron’s concern is that terrorists are using messaging apps that employ encryption to communicate with one another, and he is proposing a raft of measures that would legally enable mass surveillance of UK citizens as well as restrict access to certain forms of online communication.
Existing laws and proportionality
Crucially, the UK does have laws in place, namely Section 49 of the controversial Regulation of Investigatory Powers Act (RIPA), that require suspects to decrypt their personal data, or hand over decryption keys, should the government request access with a court order. Refusal carries a two-year custodial sentence, rising to five years if the request relates to national security. Arguably, used correctly, this legal provision, coupled with others, represents a proportionate counter-terrorism measure. However, Cameron’s proposal involves not only targeting terror suspects but also allowing intelligence services access to every UK citizen’s online activities.
What is the problem? If you have nothing to hide why would you be opposed to these proposals?
The ‘justification’ for the proposed legislative underpinnings of mass surveillance and restriction of access to various means of online communication is national security. In an impassioned speech last week, Cameron posed the question: “In our country, do we want to allow a means of communication between people which we cannot read?” Many UK citizens are not concerned about the State having access to their innocuous emails and texts, particularly if this allows law enforcement agencies to reduce the threat of terrorist attacks.
However, intelligence services collect citizens’ online communications data and run profiling algorithms to detect the characteristics of a potentially high-risk person, passenger or consignment. When law enforcement agents use algorithms that reflect unexamined generalisations about what constitutes a high-risk person, these practices may lead to erroneous conclusions, with negative outcomes that affect not only the lives of individuals but also those of specific sections of society. If, for example, the indicators on which profiling is based relate to religious beliefs, ethnic or national origin, types of websites visited, flights booked to particular destinations, connections to specific groups of people or an individual, political affiliations or occupation, the net is cast very wide and will include law-abiding citizens.
Whilst it is possible to derive profiles that provide intelligence services with valuable insights into suspected terrorists, it is also inevitable that they will arrive at incorrect conclusions about individuals or groups of people. Inaccurate profiling can result in a flag being attached to an individual, with repercussions for a law-abiding citizen that include more in-depth surveillance; arrest and detention for a number of days without charge; deportation; limitations on that individual’s ability to enter another country or to secure certain types of employment; and other forms of discrimination.
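The scale of this false-positive problem can be shown with a simple base-rate calculation. All the figures below are hypothetical, chosen purely for illustration, but the arithmetic holds for any screening system applied to an entire population in which genuine suspects are rare:

```python
# Hypothetical base-rate illustration: every figure here is invented for
# demonstration and does not describe any real system or population.
population = 60_000_000      # rough order of the UK population
true_suspects = 3_000        # assumed number of genuine suspects
true_positive_rate = 0.99    # assume the profiler catches 99% of suspects
false_positive_rate = 0.01   # assume it wrongly flags 1% of innocents

flagged_suspects = true_suspects * true_positive_rate
flagged_innocents = (population - true_suspects) * false_positive_rate

# Probability that a flagged person is actually a suspect (precision)
precision = flagged_suspects / (flagged_suspects + flagged_innocents)

print(f"Innocent people flagged: {flagged_innocents:,.0f}")
print(f"Chance a flagged person is a genuine suspect: {precision:.2%}")
```

Even with these generously accurate assumed rates, hundreds of thousands of law-abiding citizens are flagged, and fewer than one in two hundred flagged individuals is a genuine suspect.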
The potential for algorithms to introduce biases and flawed decision-making into data analytics is a concern not only for the intelligence services but also across the public and private sectors. An algorithm is a computational procedure used to process vast quantities of data. Algorithms are engineered to make decisions, take actions and deliver results speedily and continuously. Data scientists and programmers who write algorithms are effectively translating rules into code and, when those rules impact on citizens’ rights, it can be very difficult to determine whether laws are being adhered to and rights are sufficiently protected. These issues can be compounded further by automation bias: the propensity for humans to assume that automated decision-making systems are infallible and to discount contradictory information from non-automated sources, even when that information is correct.
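To illustrate how easily a generalisation becomes an opaque coded rule, consider a deliberately crude, entirely hypothetical flagging function. The indicators, weights and threshold below are invented for illustration and are not drawn from any real system:

```python
# Hypothetical rule-based profiler: every indicator, weight and threshold
# below is invented for illustration. Note how policy choices (which
# indicators count, and how much each weighs) disappear into constants.
RISK_WEIGHTS = {
    "visited_flagged_site": 2,
    "booked_flight_to_watchlisted_country": 3,
    "contact_with_flagged_individual": 4,
}
THRESHOLD = 5  # an arbitrary cut-off chosen by a programmer, not a court

def risk_score(indicators: set[str]) -> int:
    """Sum the weights of any indicators present in the profile."""
    return sum(RISK_WEIGHTS.get(i, 0) for i in indicators)

def is_flagged(indicators: set[str]) -> bool:
    return risk_score(indicators) >= THRESHOLD

# A journalist researching extremism trips the same rules as a suspect:
journalist = {"visited_flagged_site", "booked_flight_to_watchlisted_country"}
print(is_flagged(journalist))  # prints True: the rule cannot see intent
```

The policy judgements, which indicators count and how heavily each weighs, are buried in constants that no court or oversight body reviews, and the function has no means of distinguishing a journalist researching extremism from a genuine suspect.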
Many of the most complex algorithms are created by a number of different programmers over many years, which can result in alterations to the rules guiding the data analytics. The consequence is that the rules governing the analyses conducted by intelligence services become increasingly opaque over time. If the steps by which a law-abiding citizen is flagged as a potentially high-risk person are both poorly understood and not easily reversed, this not only propagates inaccurate decision-making but also impedes legitimate redress.
A democratic state is built, in part, on the premise of accountability; however, there is a recognised accountability deficit accompanying the delegation of legislative power to code writers, which is rarely alluded to by politicians advocating the expansion of the remit of the intelligence services. The risks of not being duly cognisant of the potential pitfalls associated with big data analytics are multi-faceted and include an inadvertent move from a democratic state to a police state.
In a paper entitled Technological Due Process, Danielle Citron argues that it is extremely challenging for elected politicians, the courts, data scientists and those tasked with oversight to make informed judgements on the extent to which rules that protect law-abiding citizens’ rights have been interpreted correctly within algorithmically driven decision-making processes. Citron outlines how a carefully constructed inquisitorial model of quality control, relying on technical, legal and policy processes to enhance the transparency and accuracy of rules embedded in automated decision-making systems, would help to both identify and mitigate these shortcomings in accountability.
Similarly, in an article entitled Rage Against the Algorithms, Nicholas Diakopoulos suggests an approach based on reverse engineering algorithms, which would enable computational experts to articulate the specifications of a system, determine a model of how that system works, and thereby introduce algorithmic accountability. A key question is how Citron’s and Diakopoulos’s suggestions might be tested and applied in the context of the UK intelligence services.
In the UK Interception of Communications Commissioner’s annual report (2013), the Commissioner, Sir Anthony May, states that ‘the technicalities [of surveillance techniques] are complicated and sophisticated but I believe that I have sufficiently understood their principles at least for present purposes.’ Whilst it is perfectly reasonable to believe that the principles underpinning surveillance are well-intentioned, it is the rules that drive algorithmic decision-making that really need to be understood if there is to be meaningful oversight. The Interception of Communications Commissioner’s office should consider testing the suggestions of Citron, Diakopoulos and others with respect to the algorithms and processes used by the intelligence services. This would serve to improve algorithmic accountability and better protect human rights. It might also instil a degree of trust in the Government’s ability to adopt a proportionate and balanced approach to dealing with terrorism, improve oversight, and help to preserve a free and democratic society.
A cornerstone of a democratic society is freedom of speech, which protects commentators, journalists, experts, academics, civil rights activists and citizens wishing to, for example, challenge measures proposed by politicians, comment on the activities of law enforcement agencies, or champion human rights. However, the State that would emerge under the mass surveillance proposed by Cameron is one in which commentators may choose, or feel forced, to self-censor their opinions due to concerns about the potential negative implications of expressing dissent, not only for themselves but for those with whom they are connected both on and offline. The result would be a silencing of dissenting views that would precipitate a further shift away from a democratic society.
Moreover, mass surveillance that relies on algorithmically based profiling, characterised by a lack of clarity with respect to accountability and due diligence, would have a chilling effect on the nature of the content citizens access online and on whom they connect with. Algorithms are poor at determining context, i.e. at distinguishing between innocent activities and those indicative of criminal intent. Citizens would therefore have to second-guess the implications of accessing certain types of content and connecting with specific individuals or groups of people.
Educators preparing content for a lesson on subjects such as politics, religious education or history may feel it necessary to think carefully about accessing certain sites to explore the roots of, for example, radicalism in Islam. Parents may advise their children not to visit certain sites for fear that algorithms might detect their online activities in a manner that prompts more in-depth surveillance of the family and their friends. One can envisage politicians coming up with proposals to mitigate these concerns in the form of a list of State ‘approved sites’ that present specific views, which would represent another Orwellian step away from a free society.
It is incumbent on us all to be vigilant of these sorts of incremental steps that encroach on our freedoms, foster suspicion, limit freedoms of expression and association and ultimately have the effect of exacerbating tensions between communities, all of which are counter-productive. It is also important to realise that currently we live in a State where there is very limited redress or legal remedy available to those citizens negatively affected by state sanctioned surveillance.
Would legal oversight prevent the devolution of our rights in a surveillance state?
In conducting electronic surveillance, either foreign or domestic, EU Member States are required to maintain a balance between the needs of law enforcement authorities and respect for the fundamental rights to privacy, personal data protection, and private and family life. However, under European Union (EU) treaties, foreign electronic surveillance conducted by national law enforcement authorities of the twenty-eight EU Member States falls under the control of the individual EU Members. In order to safeguard the internal security of the EU Members, the Court of Justice of the EU does not have jurisdiction over cases that involve surveillance conducted by national authorities.
A recent comparative study of laws regulating the collection of intelligence in the European Union, United Kingdom, France, Netherlands, Portugal, Romania and Sweden, entitled Foreign Intelligence Gathering Laws, found that the national laws of all the surveyed countries do provide special instruments to protect personal data. However, gaps in legislation, compounded by weaknesses in national legal systems, culminated in the circumvention of national laws and in citizens’ rights being routinely breached. This raises the question of whether another route exists by which a law-abiding citizen, subjected to negative outcomes as a result of an inaccurate flag generated by an algorithm, can seek redress.
EU citizens’ rights are protected under EU law and international agreements, including the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR), by which EU Members are bound. A law-abiding UK citizen can seek legal redress at the European Court of Human Rights if their rights are infringed upon by the intelligence services. Under settled case law of the European Court of Human Rights, national enforcement authorities conducting electronic surveillance are required to justify such intrusion on the privacy of individuals on the basis of a law that sets forth clearly defined grounds, including national security and public safety, and that adheres to the principles of necessity and proportionality.
A lack of judicial oversight, accountability and transparency with respect to the profiling techniques deployed by intelligence services when analysing telecommunications data is a considerable cause for concern. Oversight is vital to ensure proportionality and balance, and to guarantee that law-abiding citizens who are negatively affected by the outcomes of surveillance techniques have a means of legal remedy. UK citizens should be concerned about these issues and urge the Government to ensure that our rights can be upheld in a court of law and not devolved, in part, to the deliberations of the intelligence services.
In some political circles a narrative has emerged about the European Convention on Human Rights (ECHR) which equates this legal instrument with its misuse by criminals, to the detriment of UK society. Some politicians have called for the ECHR to be substituted with a British bill of rights and responsibilities. It is important to consider carefully what the unintended consequences might be of diluting the protections afforded to law-abiding citizens.
Lessons to be learnt from disproportionate counter-terrorism legislation
Notwithstanding the potential positive outcomes of improving the checks and balances applied to the data analytical methods employed by the intelligence services, it is also worthwhile to consider the lessons learnt from the effects of counter-productive counter-terrorism legislation. The conflict in Northern Ireland was one of Europe’s most protracted and serious violent conflicts. A number of reports highlight the divisive nature of disproportionate counter-terrorism legislation, which can lead to harmful sectarian divisions in society that produce inequalities and feelings of discrimination, ultimately adding fuel to radicalisation. A report entitled “Lessons From Northern Ireland” produced a number of key recommendations:
- The rule of law and human rights standards must be upheld during times of crisis;
- Police forces must receive proper training, and accountability measures must be introduced;
- Criminalising entire communities is counter-productive and discriminatory; and
- Long or indeterminate pre-trial detention is a major criminal justice setback.
Balance, proportionality, due diligence, accountability and dialogue
Edward Snowden’s revelations continue to highlight to the citizens of the world what happens when States grant unfettered freedom to intelligence services – rights of citizens and States are routinely breached. When considering Cameron’s proposals, it is not simply the innocuous content of law-abiding citizens’ emails and texts that is at the crux of the matter. The question is, do we as a society wish to devolve our freedoms and rights to the outcomes of decisions made by algorithms that have access to every aspect of our online lives, are not subject to rigorous due diligence processes and routinely make inaccurate inferences that can have negative consequences for which there is limited legal remedy?
It is clear that decision-making with respect to a surveillance state cannot be entrusted to UK politicians alone, and that it is the responsibility of every UK citizen to advocate for a more balanced and informed dialogue about the risks and opportunities associated with proposals to legislate for the mass surveillance of UK citizens. It is also important to consider the relative effectiveness of past counter-terrorism measures when developing new counter-terrorism frameworks.