We now have the internet, and many of the 'disintermediations' that were driving the 'dotcom' boom of the late 1990s are now more or less reality. If you doubt this, ask your local author, musician or filmmaker (and yes, in New Zealand these sorts of people are 'locals'). And where have all the bookshops gone?
Along the way we have learned that there is a problem with 'security' in that world, and it is unclear what to do about it or how a possible solution might work. To many it seems that the powers that be are forever asking for more control over data and metadata, and running undisclosed 'dragnet' operations on our connections. At the same time, judging only from our mail inboxes, virus and spam writers operate unchecked. Little wonder that many feel these powers keep demanding more control while delivering little.
It is a good question whether a philosophy of all of this is possible. I think it is. Like many good philosophical questions, this one can be phrased as a dilemma between two fairly easily understood extremes. The truth then at first seems to lie somewhere in the middle. But good philosophy usually gives it a little twist.
In the case of security versus surveillance, I think the extremes are these:
- A (Hobbesian) state of nature, in which each internet user is on their own. A lot of innovation happens, along with a lot of good self-organisation and a lot of bad stuff. The closest we have to something like this in real life is probably the deep web, or darknets. Contrary to public opinion, there is life on the deep web beyond weapons and drugs, but it's not a place to go (digitally) unarmed.
- A total surveillance state, in which internet crime is quickly stamped out, along with dissent and free speech. The closest we have to this world is the Chinese firewall. And there seems to be no shortage of politicians in the West who want to take us in this direction too.
A second answer, and one that I've heard given, is that it makes sense to give up some of the freedoms of the Hobbesian internet in order to have some security, and that a Hobbesian 'social contract' with a central power is required to keep the net relatively secure and free. For several reasons that answer - at least in its Hobbesian form - doesn't satisfy me. Hobbes was, for starters, no democrat, so it is hard to see how such a future state of the internet would align with our wider democratic institutions. My hunch is that it probably can't, and that a democratically uncontrolled central power would, little by little, take us toward the total-surveillance horn of the dilemma.
A third answer, and I think the right one, is that we need to rethink the notions of 'security', 'freedom', 'dissent' and so on for a digital world. We sacrifice a lot in the name of security, yet it seems pertinent that we do not even have a robust philosophical candidate definition of what 'security' actually is. As such, the concept is woolly and vague - an ideal frame to attach all kinds of ill-conceived 'security measures' to.
So I think what's needed is a philosophical twist. That twist has to start by asking some pertinent questions and coming up with some new principles to govern our digital lives. That is what the philosophy of cybersecurity is about.