Few people have insights into the topic of child pornography as deep, painful and wise as those of Alexander Hanff. In this interview, he explains why the EU’s planned ePrivacy Derogation won’t work, but will do a lot of damage.

Alexander Hanff, himself a survivor of massive abuse, first pursued his own case in vain, later with success. He founded online support groups. As a computer expert, he helped the police bust child porn rings. He is a privacy entrepreneur, researcher and lobbyist, and he advised the EU in the run-up to the GDPR. When it comes to child pornography, he warns strongly against trying to solve the issue technologically – a warning drawn from his very personal experience. I spoke to him in the context of our campaign against #chatcontrol.

PI: I have known you as a person who speaks out with remarkable clarity and courage on the subject of child abuse. Could you tell us a bit about your personal background?

AH: I went to a boarding school in the UK – a government-funded boarding school for boys with high IQs. And as a result, I was subjected to sexual and physical abuse by some of the caregivers within that environment.

I left the school in 1990. In 1991, I was approached by the police in the UK to give evidence in an investigation they were undertaking into abuse at the school, which I did. Unfortunately, the Crown Prosecution Service at the time decided not to take the case to trial, despite substantial evidence relating to about a hundred complaints – presumably because it was a huge embarrassment for the government. These things were still very much not discussed in public back in the early 90s.

So it didn’t go to trial – and that was the last thing I heard about it.

I’ve had significant difficulties personally. As a result of the abuse, I suffered from depression. I tried to take my own life on various occasions. I had a horrific sleep disorder where I’d be awake for four to nine days at a time. It was a big problem for me and was having a real impact on my life. I felt that something had to be done, so that these issues couldn’t just be swept under the carpet – that we needed to investigate and make sure that justice was served, and that we could try and figure out a way to prevent this from happening to others in the future.

In 2012, I contacted the Guardian newspaper.

I told my story in a video interview, which led to a huge investigation and the reopening of the police case in the UK. That in turn led to convictions in relation to the abuse of 28 boys at the school, myself being one of them. So, if I hadn’t come forward and told my story, this never would have happened: those boys would never have received justice, and the perpetrators of the crimes would never have been convicted. It was really important for me to do so – and it wasn’t the only thing I was involved in.

 

What else did you do?

At the time, I was also running probably the first online support group for abuse victims. And I was working with police around the world to track down people trafficking in and distributing child pornography online, through online chat rooms and the like. This was a very early stage of the internet, literally just after the web had become accessible. Prior to that, it was all terminal-based, text-based systems without graphical interfaces. So it was a really new area for the police, who didn’t have a great deal of technical knowledge.

Being the geek that I am, I was able to provide them with the information they needed to conduct their investigations.

I was involved in one particularly big case called “The Orchid Club”, a massive crackdown on a global child pornography ring back in the early 90s. So I’ve been involved in this issue both as a victim or survivor of abuse and as somebody who has taken steps to support victims and survivors of abuse – and to support law enforcement in tackling these issues, such as the trafficking of abuse material online. It was a big part of my life for a long time.

 

One might think that the dismantling of privacy would come in handy for someone trying to chase down perpetrators. But you say that isn’t the case with you?

As you know, I’m also a privacy advocate. As a computer scientist, I became concerned in the mid-2000s with how technology was evolving and being used to control and manipulate people. So I went back to university and studied the impact of technology on society as a sociologist, focusing on things like surveillance and fundamental rights. And it soon became clear to me that we had an issue.

Technology was being used in ways which weren’t good for the rights of citizens or for democracy – it was being used for control, manipulation and surveillance.

I then led a campaign which brought about sweeping changes to EU laws such as the GDPR. I joined Privacy International, one of the biggest and longest-running privacy NGOs in the world, and led their ePrivacy portfolio for three years. Since then, I have been heavily involved in the development of EU law and in working with companies on privacy by design, data ethics and privacy-enhancing technologies, as a privacy technologist and engineer. I also speak on data ethics for Singularity University, an executive academic center in Silicon Valley. So I have a vested interest in both sides: I see the impact of this on individuals, but I also see the impact of surveillance on society.

 

You have also clearly opposed the planned ePrivacy Derogation in the EU. How did that come about?

I first heard about this issue – this derogation proposed by the European Commission – when the European Data Protection Supervisor wrote a report on it. They weren’t particularly pleased with it; they were damning in their criticism, for exactly the same reasons we’re discussing today.

I felt that it wasn’t proportionate and wasn’t fit for purpose. It would lead to the dissolution of fundamental rights under the European Charter of Fundamental Rights, such as the right to privacy and the right to confidentiality of communications.

That was my first interaction with this particular derogation. Then Patrick Breyer got in touch with me, because I had made some comments about it online. So I’ve been involved in a number of discussions at the European Parliament on this issue over the past six to eight months. And from my perspective, both as a survivor of abuse and as a technologist and privacy advocate, one thing we need to take into consideration is this:

People who have been abused have already been stripped of their dignity. They’ve already had many of their fundamental rights ripped away from them in a way they had no control over. And they rely on fundamental rights, such as the right to confidentiality, in order to obtain support – to deal with the trauma, and in many cases the PTSD, caused by the abuse they’ve been subjected to. And also to report these crimes.

For example, in my case, when I came forward and told my story to the Guardian newspaper: had there not been the ability to communicate effectively and in confidence, I wouldn’t have felt comfortable doing that.

And bear in mind: abuse victims already suffer from major trust issues. They find it very difficult to trust, because their trust has been exploited in the past and used against them, right? So the confidentiality of communications is particularly important. And the derogation, which seeks to undermine that confidentiality, subjects abuse victims to even more trauma, to an even greater reduction of their rights, and to dehumanization. Not only that: we’re hampering research, and we’re preventing support services and others from being able to do their jobs effectively.

 

I have often thought of average people, or perhaps decision-makers, lawyers and so on, who might accidentally come under suspicion of being paedophile offenders. That is a huge field of mistrust and possibly blackmail. But researchers?

The tools being used as a result of this derogation are very error-prone; they have a high rate of false positives.

Which means we’re going to end up flagging people doing valuable research, or people who are supporting victims and survivors of abuse, as false positives – and this can ruin their lives as well. It can lead to the loss of their jobs. It can lead to the destruction of families, friendships and relationships – all on the false premise that they have been identified as being involved in some form of sexual exploitation of children, simply because they work in a field which requires them to research these issues, or because they’re supporting victims and survivors of abuse.
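To illustrate why a high false-positive rate matters so much at this scale, here is a minimal back-of-envelope sketch in Python. All of the figures in it – the message volume, the prevalence of actual abuse material, and the scanner’s error rates – are hypothetical assumptions chosen for illustration, not numbers from the interview:

```python
# Back-of-envelope Bayes calculation: of all the messages an automated
# scanner flags, what fraction actually contain illegal material?
# Every number below is a hypothetical assumption for illustration.

messages_scanned = 1_000_000_000   # assumed annual message volume
prevalence = 1e-6                  # assumed share of messages that are actually abusive
true_positive_rate = 0.90          # assumed recall of the scanner
false_positive_rate = 1e-4         # assumed false-alarm rate on innocent messages

actual_positives = messages_scanned * prevalence
true_positives = actual_positives * true_positive_rate
false_positives = (messages_scanned - actual_positives) * false_positive_rate

precision = true_positives / (true_positives + false_positives)
print(f"messages flagged:  {true_positives + false_positives:,.0f}")
print(f"genuinely illegal: {true_positives:,.0f} ({precision:.1%} of all flags)")
# With these assumptions, roughly 100,000 innocent messages are flagged
# alongside ~900 genuine hits: under 1 % of flags point at real material.
```

Even under these fairly generous assumptions, fewer than one in a hundred flagged messages points at genuine material – the rest are innocent people caught as false positives.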

So, there’s a whole host of issues around this derogation which really need to be addressed, and which make it very clear that this isn’t a suitable way to deal with the problem of the distribution of images of sexual abuse, or the abuse of children online.

It’s not appropriate. It’s not fit for purpose, and I would argue it isn’t even permissible under European law – it’s not something which can be done under European law.

We have countless rulings from the European courts which require any form of surveillance – and keep in mind, this is absolutely a form of surveillance – to be targeted and to serve the purpose of law enforcement, not to be blanket surveillance. Blanket surveillance is not considered proportionate under EU law. And that is the very foundation of EU law: there must be proportionality and necessity when we’re dealing with these infringements of our fundamental rights.

 

I have sometimes found it a bit hard to argue with “proportionality” – because child abuse is such an extreme crime and such an emotional matter that no means seems too harsh to fight it.

I mean, the first thing we need to understand is: child abuse is not a technical issue. Child abuse is a societal issue. It’s an issue which has existed for as long as societies have existed; the sexual exploitation of children is as old as history. So trying to address a sociological issue with technology is like comparing apples to oranges – the two are just not compatible. We’re not going to solve the issue of children being abused by hiding the fact that it happens.

Keep in mind that this [surveillance] technology that we’re talking about doesn’t prevent the abuse.

It just allows the detection and reporting of the consequences of that abuse – the images of the children being abused – as they are distributed online. And when we say distributed online, we’re only talking about the platforms where these tools are deployed. There are many other platforms and many other types of environments which can be set up to still allow the dissemination of this material, and which would not be subject to this scanning, because they would be controlled by the abusers themselves. We’re talking about things like mesh networks and onion-routed networks such as Tor and the dark web, for example – environments over which there is no control. There are no tools there to detect this type of material.

And in a situation where these tools are rolled out on social media and common platforms, the criminals will simply take their activities underground.

They’ll go to these other environments where these tools don’t exist. And as I said, we’re not dealing with the abuse itself. The abuse has already occurred; this is post-abuse. The images are the consequences of the activity, right? So we need to consider why the abuse is occurring and prevent it from occurring in the first place.

I studied reoffending rates of sexual offenders at the first university I worked at, where I was studying psychology and computer science. And one of the things which became very obvious to me was that abusers consider their behavior to be normal. This is a very common observation when talking to offenders in sexual-offence cases.

And then we look at some things which occur in society and which we seem to accept every single day – for example, the sexualization of children in advertising and in entertainment, such as movies, the music industry and the fashion industry. We see that it’s commonplace for young adults or teenagers to be dressed and to behave in provocative or sexually promiscuous ways for the purposes of entertainment, advertising and fashion. That only serves to normalize this behavior in the eyes of the abusers, right?

 

Ultra-conservatives often warn of the “sexualization” of kids when it comes to sex education. But yes, used this way, the term makes sense …

The abusers see this as a normalization of their behavior. So I would rather see more money spent on research – and bear in mind, there is a woeful lack of research money being put into these issues. Most of the money is going into “How can we make it look like we’re doing something? How can we throw technology or AI at this issue?” – such as this initiative by the European Commission – as opposed to dealing with the cause of the problem, to prevent children from being abused in the first place.

And as I’ve said to you before, one of the things which will always remain in my mind is a conversation I had with my professor at the time. I was struggling, because as a victim or survivor of abuse – and coming from a family where abuse had occurred, abuse that literally tore my family apart – studying the subject was perhaps not the best medicine for me. But it was about trying to understand it a little and maybe seeing if I could do something about it. And it became difficult for me. So I had a meeting with my professor, and he said to me:

“If you’re walking along the side of a canal and you see a drowning child in the water, what would you do?” I answered, “I’d jump in and try to save the child.” And he said, “Well, if every minute you see another child coming down the river, what do you do?” I said, “Well, I try to save all the children.” So he said, “But there comes a point where it’s not possible – where there are too many children and only one of you. You can’t save everybody. So don’t you think it would be more appropriate to go back upstream and see why the children ended up in the river in the first place?”

And that was a lightbulb moment for me. We’re talking about cause and effect – about dealing with something before the consequence occurs. By determining why the children ended up in the river, in this metaphor, we would perhaps be able to prevent them from ending up in the river in the first place, and so prevent the need to jump in, try to save them from drowning, and be overwhelmed.

And it’s exactly the same thing when you think about this in the context of the sexual exploitation of children. The images of the children being distributed after the fact are, effectively, the children in the river. And the tools being used to remove those images from the internet are, effectively, me jumping into the river to save those children. No matter how many tools you have – there could be 50 of me – you’re never going to be able to keep up with the number of children in the river, or, in the case of these tools, with the number of images being disseminated across all these platforms.

You know, we want to prevent this from happening in the first place and focus our efforts on that. And yes, we need to safeguard against the consequences as well – but that shouldn’t be the priority. The priority should be preventing this from happening in the first place; it should be preventative instead of remedial or reactive. This is an aspect which is sadly lacking when we look at these issues, and not just at the European level but at the global level.

There’s just no focus on trying to deal with how this happens and just a focus on trying to make it look like it isn’t happening.

Trying to erase this from history or sweep it under the rug is never going to help. The children are still going to be abused, and the consequences of that abuse will live with those children for the rest of their lives. And it saddens me, really, as a grown man, to realize that even after all this time we still haven’t understood that as a society we need to do something at that level, instead of just trying to brush it under the rug, shrug, or put a band-aid over it.

 

When you helped the police, how did you experience their approach and their equipment? How did they deal with the matter in their police work?

I mean, we’re going back to the early 90s here, so we’re talking nearly 30 years ago. I would say that there was definitely an effort from the police – a genuine effort to understand the technologies which were being used to distribute this material. But despite that effort, there was a lack of qualifications and experience with digital platforms and digital tools. And even now, unless you’re a particularly large police force, your technical capabilities are fairly limited. These types of cases tend to be reserved for specialist cyber departments within larger police forces, such as the Metropolitan Police in the UK, for example.

So, generally speaking, I would say there’s a lack of skills within law enforcement to police these issues.

Another thing which is important to understand is this: when we talk about AI and automated tools which, even now, are scanning literally billions of communications every single year, there have been very few cases which have relied on this evidence in order to go to court and obtain a conviction. Something in the region of only about 60 cases in Europe have been brought to date in relation to these tools. And even in those 60 cases – and bear in mind, we’re talking about billions of communications being compromised – only a few were directly the result of the scanning of those communications. Many of those prosecutions were still based on an individual actively reporting the crimes to law enforcement, so the scanning data was merely supplementary.

And we’re also talking about proportionality. If it’s okay to read the communications of every single EU citizen – every online communication they send, every minute of every day, to the tune of literally billions of communications – and we’re only securing a handful of convictions as a result, can we consider that proportionate?

The answer is an absolute No! This is not what we would consider proportionate under EU law.
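As a rough sketch of that arithmetic: the figure of about 60 cases is taken from the interview above, while the annual scanning volume is a purely hypothetical assumption – but the orders of magnitude speak for themselves:

```python
# Rough proportionality arithmetic. "About 60 cases" is taken from the
# interview above; the scanned-message volume is a hypothetical assumption.

messages_scanned_per_year = 5_000_000_000  # assumed annual scanning volume
cases_to_date = 60                         # cases relying on this evidence, per the interview

messages_per_case = messages_scanned_per_year / cases_to_date
print(f"communications scanned per resulting case: ~{messages_per_case:,.0f}")
# Under these assumptions, roughly 80 million private communications are
# scanned for every single case in which this evidence has played a role.
```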

In the discussions I’ve had with Patrick Breyer, and in sessions I’ve sat in on, we’ve even had retired European Court judges commenting on this and explaining that it does not comply with the EU-law principles of necessity and proportionality. So it’s not fit for purpose from a proportionality perspective. And the false negatives create other issues. So even if we throw technical tools at the situation to try and help the police – they’re not effective.

All of the big cases I’m aware of – the big crackdowns on large international paedophile rings – have been the result of genuine, everyday police work, not of AI and automated processes detecting messages sent online.

Just genuine, good old-fashioned police work and investigation. And we’ve had some really big cases; it happens on a regular basis. Almost every year we see another big case in which an international child abuse ring has been infiltrated and brought down by the police – through collaboration between police forces around the world and just good old-fashioned investigation and police work. That is what we find to be effective in dealing with the consequences and breaking these rings apart. Not the use of inadequate technology, which is just going to infringe on the rights of innocent people while the criminals simply move to other platforms or environments where this technology isn’t in use.

 

PI: So, I really hope that the MEPs get to hear this position – some 72 % of EU citizens already share it when asked [source]. Thanks a lot for giving us these insights!

Just one more thing:

When we think about digital tools and platforms, we tend not to put them in the context of the rest of the world, right? We just think: oh, it’s just online, it’s just a Facebook or WhatsApp message or whatever; it’s not the same as the other communications we have.

Consider how you would feel if every single letter sent through the postal service were opened by the police, just on the off chance that it might contain illicit images of child sexual exploitation. Would we be okay with that? No, we absolutely wouldn’t be okay with that.

Would we allow governments to install surveillance cameras in every single home in the country, just on the chance that some of those homes will be areas where children are being sexually abused? No, we wouldn’t, because it’s not proportionate.

When we talk about these surveillance tools being deployed online, somehow we think it’s okay – and it’s not okay! The principles are exactly the same: the communications we have online are no different from the communications we send by letter, or the communications we have in our home with our family. That’s why privacy is a fundamental right under European law and under international law. This is something we need to understand. We need to put it into real context, and not just think that it’s an AI tool that isn’t really having any impact on us. It absolutely is.

And when we lose freedom of speech, when we lose the ability to communicate in confidence, we lose other freedoms that go with it: we lose freedom of opinion, freedom of thought, freedom of information and freedom of association.

Our social graphs are constantly being monitored. We become automatons. We lose our identity, our self-awareness and all these other things we take for granted. So the consequences for society, and for the individuals within it, can be absolutely devastating. And it’s not just about the consequences we face now, at the stage where we’re discussing this. What about the consequences for future generations, for whom this has been normalized? What happens to their freedoms? What happens to their ability to be individual people, making individual decisions based on free choice and not on the knowledge that they’re being surveilled?

In the social sciences, in experimental social science, we have a phenomenon called the Hawthorne effect: people who know they’re being observed change their behavior. Their behavior becomes artificial; it’s not natural anymore. So if we know we’re going to be surveilled everywhere – online, in the streets, in our homes and so on – then we lose the ability to act as free individuals. We lose the ability to be natural. Everything we do becomes artificial, and that can have dire consequences for society.

 

I couldn’t agree more.
Thank you, Alexander, and keep up the good work!

You’re very welcome. Thank you for having me.
