Nettime mailing list archives

<nettime> What should GCHQ do?
William Waites on Mon, 25 May 2015 00:01:38 +0200 (CEST)

Edinburgh, May 24 2015

Back in late April, an invitation [1] was circulated around the School
of Informatics which asked academics for ideas about what projects
they should fund in the area of ``Cyber Defense''. Presumably the same
invitation went out to various universities and other organisations. I
was very much conflicted about whether to participate. On the one
hand, engaging with GCHQ at all seemed like a bad idea. On the other,
it was an invitation to tell them directly what I think -- at least
then it could be said that they had been told. As it turns out, the
event was cancelled at the last minute with no explanation.

If the event had gone ahead, what would I have said? The topic was
defense, keeping infrastructure and such safe from attack. This part
of their job is different from the offensive surveillance (or
``signals intelligence'' in the jargon) programmes. So it stands to
reason that projects that would make their SIGINT job harder would
improve our defensive capabilities and make ``UK interests''
safer. After all, GCHQ is not the only organisation with offensive
capabilities, but GCHQ's are reputed to be pretty well developed, so
trying to defend against them seems like a good tactic for improving
everybody's security. If GCHQ were to fund work in that direction,
they would be making a positive contribution to our collective
security. That's the argument in broad strokes.

What, specifically, could this mean? One thing is to figure out how to
get strong encryption used pervasively. The science is well
established, and we have good (technical) quality software that does
encryption, yet an alarming amount of communication still happens in
the clear -- both the content and the meta-data. Why is this?
Originally the answer may have been expense: doing encryption is
computationally more expensive than not doing it. But that is no
longer much of a concern. Computers are fast. Modern computers even
have hardware support for encryption (the trustworthiness of that
hardware support is itself another important thing to look at).
Another answer is that using encryption is difficult. But we know how
to make simple, pleasant and natural user interfaces; surely, if
serious effort were brought to bear, this too could be overcome.
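As a rough illustration of the cost point, the sketch below times a
cryptographic pass over ten megabytes of data using only the Python
standard library. SHA-256 is a cryptographic hash rather than a
cipher, and the buffer size is an arbitrary choice, but it gives a
sense of how cheap cryptographic primitives are on a modern CPU:

```python
import hashlib
import time

# Hypothetical micro-benchmark: hash 10 MB and report throughput.
# SHA-256 is a hash, not a cipher, but its cost is broadly comparable
# to a software encryption pass over the same data.
data = b"\x00" * (10 * 1024 * 1024)  # 10 MB of zeros (arbitrary)

start = time.perf_counter()
digest = hashlib.sha256(data).hexdigest()
elapsed = time.perf_counter() - start

print(f"hashed {len(data) // (1024 * 1024)} MB in {elapsed:.4f} s")
print(f"throughput: {len(data) / elapsed / 1e6:.0f} MB/s")
```

On typical current hardware this completes in tens of milliseconds,
i.e. the cryptographic work is negligible next to the cost of moving
the same data over a network.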

The answers probably lie in psychology, sociology and economics. The
false argument that only criminals need privacy, and that they don't
deserve it, still convinces many people. Worse, the intuition of the
average user about the security properties of their actions does not
match the reality. This leads to people typing their lives into
Facebook under the mistaken impression that this is somehow a private
communication with their friends. How can this impedance mismatch
between intuition and reality be reduced? If it were, we could have an informed
population with an accurate perception of the on-line world, less
susceptible to many of the threats on the Internet. Surely the UK's
population is a ``UK interest''.

Furthermore such research could similarly improve the safety of others
outwith the UK since the Internet does not recognise the borders of
nation-states. The security of the global population is also in the UK
interest since a home computer somewhere in another country with a
virus can be used to attack something that the UK cares about. Better
that the owner of that home computer is educated and aware and follows
good practices by default so it does not become infected in the first
place. Of course this would limit the capabilities of agencies in the
UK to break into that computer (which, shockingly, is now completely
allowed [2]), but that is worth it because it is delusional to think
that any bug or exploit allowing it will not also be used by criminals
or by countries that the UK considers enemies.
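On software following good practices by default: one concrete example,
offered here as an illustration rather than anything from the original
text, is Python's standard ``ssl'' module, whose default TLS context
verifies certificates and checks hostnames without the caller having
to ask. The sketch below just inspects those defaults:

```python
import ssl

# Since Python 3.4, create_default_context() ships with safe defaults:
# certificate verification and hostname checking are both on, so the
# user is protected without taking any deliberate action.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate verification on
print(ctx.check_hostname)                    # hostname checking on
```

Both lines print True: the safe behaviour is what you get by doing
nothing, which is exactly the property one would want end-user
software to have.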

The Internet today is incredibly centralised. In the UK,
infrastructure itself is heavily concentrated in London. A small
number of large companies are responsible for the lion's share of
traffic and activity. This concentration is a risk. It was not how the
Internet was conceived to operate. The risk arises because accidents,
disasters and bad actors have a relatively small number of
targets. The concentration makes mass surveillance easier but it also
makes revenue generation using advertisements (a common business model
among large Internet companies) possible. The value of such a company
is roughly proportional to the number of ``eyeballs'' it can sell to
advertisers, so there is a strong incentive to gather as many as
possible in one place. It's a lot harder to tailor advertisements if
the communication between these eyeballs is encrypted. Automated
analysis of behaviour patterns is more difficult and injecting
``relevant'' ads based on content is impossible.

And so we have arrived at the economic problem. The business model of
advertising has the same basic requirements as mass
surveillance. Thwarting one by decentralisation and ensuring
confidentiality of communications means thwarting the other. Improving
safety and security by encouraging pervasive encryption means finding
a new economic model for the Internet that does not depend on
surveillance, that transcends the Web 2.0 model of capturing users in
silos. Surely this too can be a fruitful direction for research.

[1] http://tardis.ed.ac.uk/~wwaites/2015/05/Invite_to_workshop_v2.pdf
[2] http://www.theguardian.com/uk-news/2015/may/15/intelligence-officers-have-immunity-from-hacking-laws-tribunal-told

#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime {AT} kein.org