LSE SSIT 7 workshop, 2007-03-19

The LSE’s SSIT 7 workshop “Identity in the Information Society: Security, Privacy, The Future” took place on 2007-03-19 and 20 (Monday and Tuesday). Well, it was certainly a change for me to attend an event where there was no one I had met before, and where my badge, declaring “JISC CETIS Portfolio SIG”, drew curiosity but no recognition. The usual suspects were absent. And it’s refreshing to be reminded that there are a whole lot of people out there interested in identity, coming from different starting points.

One of the starting points, relating to the venue at the LSE, was the political slant. Human rights good, government interference bad; social workers good, information systems – well, if not entirely bad, then certainly highly suspect as tools in the hands of an oppressive government. Confronted with what seemed to me like ancient lefty attitudes, I didn’t know whether to laugh, cry, or just throw a couple of questions into the pot. I tried the last, but to little effect, I think.

What did puzzle and disappoint me was that this workshop seemed to have been put together on the premise that people wanted to get together to criticise anything that could remotely be associated with surveillance and control, identity cards and databases of personal information in particular, but neither to accept any of the positive reasons why these things were put forward in the first place, nor to offer any constructive alternatives. The spectre of Orwell’s 1984 still seems to have the power to deprive people of many of their critical faculties, despite times having moved on. Why do people not at least bother to suggest ways in which the feared technology could be kept under better control?

Bruce Schneier seemed to fit well into this mould. I felt there was an element of scaremongering, and I couldn’t discern much by way of serious analysis. I look forward to having a look at his newsletter, “Crypto-Gram”, to find the constructive and valuable things that I didn’t get from this presentation. A serious point of criticism relates to his view of law: if we want protection, he seemed to be suggesting, we must have laws that enact that protection. But since when have criminals, and particularly organised criminals, respected laws? Laws do “change the dynamic” for law-abiding citizens, but I’d say that law-abiding citizens aren’t the main problem.

Bruce thought that the idea of the “death of privacy” was overstated. But what exactly, I started to wonder, is this “privacy” that people champion? Do people want to interact with the information society by withholding all information about themselves, and being asked every time for permission to access even the smallest piece of it? That would seem pretty short-sighted and time-wasting to me; in any case, the thought wasn’t mentioned.

The next speakers were at least very stimulating and entertaining. Simon Davies and Gus Hosein (of Privacy International, and both visiting fellows at the LSE) seem to have made something of a career out of challenging the government, specifically over the Identity Card proposals. It was nice to see some material reported that could have come out of Private Eye.

The highlight of the day for me came in the afternoon, just as I was about to give up hope of anything solidly interesting. Prof Brian Collins is the Department for Transport’s Chief Scientific Advisor, with a distinguished earlier career. He gave a thoroughly professional presentation on some of the technical pitfalls and challenges associated with Identity Management. He sees no reason why people should not use multiple identities, allied to an assumption of minimum disclosure. I hope his slides become available to prompt more recall.
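
To make the minimum-disclosure point concrete for myself, here is a small sketch in Python (entirely my own illustration, not anything shown in the talk) of one person holding several partial identities, where each context only ever sees the few attributes its persona deliberately carries. All the names (Persona, full_profile, disclose) are hypothetical.

```python
# A hypothetical sketch of "multiple identities with minimum disclosure":
# one person, several partial identities, and each context sees only the
# attributes its persona deliberately carries. All names are invented.

from dataclasses import dataclass, field


@dataclass
class Persona:
    label: str
    attributes: dict = field(default_factory=dict)


full_profile = {
    "name": "A. Person",
    "date_of_birth": "1980-01-01",
    "home_address": "1 Example Street",
    "employer": "Example Ltd",
}

# Each persona carries only what its context strictly needs.
personas = {
    "library": Persona("library", {"name": full_profile["name"]}),
    "age_check": Persona("age_check", {"date_of_birth": full_profile["date_of_birth"]}),
}


def disclose(context: str) -> dict:
    """Return only the attributes carried by the persona for this context."""
    return dict(personas[context].attributes)


print(disclose("age_check"))  # {'date_of_birth': '1980-01-01'}
```

The default here is non-disclosure: a context gets nothing at all unless its persona explicitly carries the attribute.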

The following morning we were back to old themes for a while. Terri Dowty, the Director of Action on Rights for Children, did give a useful catalogue of the different databases on which personal information about children may appear, now or as planned. But this all came in a sinister-toned presentation which, for example, almost portrayed the Connexions service as an agent of repression. How about, I asked, proposing something positive, rather than just criticising the negative aspects of current and planned databases? What would she suggest? More front-line workers like social and youth workers; more money to help families … you probably get the picture, though she didn’t say “tax the fat cats”. What I did rescue with my other question was the point that parents and children are in principle allowed to see their records on “ContactPoint” (formerly called the “information sharing index”), which is where much of this information is brought together. Perhaps that is what we ought to be advising people to do: at least to know what is there, and to correct it if needed.

Terri seemed to have a pretty rosy picture of the world, in which only about one in 500 children needs any urgent intervention. She portrayed a society where constant intrusion into children’s private lives accentuates their dependency and interferes with the development of their sense of self. I couldn’t see it, personally. What is credible is that people won’t use a service if they suspect the information may be passed on to others. I think that lesson has already been taken on board in the e-portfolio community. What it does highlight for me is the need to elaborate ideas on ethical development.

Another positive highlight followed: Ross Anderson of Cambridge University talked about “Identity Privacy and Safety in e-Health” (though e-health is not a term he likes). This was a brilliant and committed exposé of the pitfalls of large government IT projects, and hence of the risks inherent in the NHS IT project. Interestingly for us, he sees the way forward as standards-based interoperability and an open market in IT systems development, which, he said, is how Sweden manages to have a system that works.

For me, privacy needs a model different from the one implicit at this workshop. I’d say it is better and more constructive to stop focusing on what one doesn’t want others to see, and to start focusing on just what one does want different groups to see – or, conversely, on who is allowed to see any particular piece of information. It seems to me that this is another lesson already learned, a while ago, by the e-portfolio community. Perhaps when one’s focus is on systems controlled by others, it is easy to see the dangers rather than the opportunities offered by systems under the control of the individual. The clear lesson: let’s make sure that we continue to advocate clearly the user-centric design and development of user-controlled systems.
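
To illustrate that “who may see what” model (again just my own sketch, with invented items and audiences), each piece of information could carry the set of audiences allowed to see it, under the individual’s own control, and anything not explicitly shared would simply stay hidden.

```python
# A hypothetical sketch of per-item, owner-controlled visibility:
# each item lists the audiences allowed to see it; everything else is hidden.

visibility = {
    "email_address":   {"friends", "employer"},
    "medical_history": {"gp"},
    "cv":              {"employer", "public"},
}


def visible_items(audience: str) -> list:
    """Return the items this audience is allowed to see."""
    return [item for item, allowed in visibility.items() if audience in allowed]


print(visible_items("employer"))  # ['email_address', 'cv']
print(visible_items("stranger"))  # []
```

The design choice is that sharing is something the owner switches on per item and per audience, rather than privacy being a restriction bolted on afterwards.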

There’s one other principle I’d like to float, and it is about the visibility of traces of access to information. My intuition is that if one could reasonably guarantee that every access to personal information was both properly logged and made visible to the “data subject”, then we would be much less unhappy about our information being on the databases in the first place. Is such technology feasible? What would it take?
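
As a thought experiment, here is one way the idea might be prototyped (an illustrative sketch only, with invented names; a real system would need an append-only, tamper-evident log): every read of a personal record appends an entry saying who looked, when and why, and the data subject can list every access to their own record.

```python
# A hypothetical sketch of access traces visible to the data subject:
# every read of a record is logged, and the subject can inspect the log.

from datetime import datetime, timezone

access_log: list = []  # in practice: an append-only, tamper-evident store
records = {"subject-42": {"name": "A. Person", "postcode": "XY1 2AB"}}


def read_record(subject_id: str, accessor: str, purpose: str) -> dict:
    """Return the record, but only after logging who looked, when and why."""
    access_log.append({
        "subject": subject_id,
        "accessor": accessor,
        "purpose": purpose,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return records[subject_id]


def my_access_history(subject_id: str) -> list:
    """What the data subject would see: every access to their own record."""
    return [entry for entry in access_log if entry["subject"] == subject_id]


read_record("subject-42", accessor="case-worker-7", purpose="routine review")
print(my_access_history("subject-42"))
```

None of this answers the harder questions (how the log itself is protected, and who watches the watchers), but it does suggest that the basic mechanism is not exotic.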