How on earth do I add OpenID to my LDAP schema

Okay – this is bugging me.

The scenario is as follows: I have an OpenLDAP directory with several hundred users in it. For the user records I’m using the standard inetOrgPerson schema.

I want to add an openid attribute for my users (in a responsible and proper way) so that I can associate users with multiple arbitrary external OpenID providers.

All I’ve managed to find on the net about this is a blog post at Oracle discussing how this is an issue and how it would be a really good idea for someone to do something about it.

I’m all at sea – how on earth am I supposed to do this? Do I create a new subclass of inetorgperson and migrate everyone on to it? Can I do this without breaking everything? Do I hackily use the “labeledURI” attribute and just shove things in there?
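For what it’s worth, the least disruptive route seems to be an AUXILIARY object class rather than a new subclass of inetOrgPerson – auxiliary classes can be bolted onto existing entries without migrating anyone. A hypothetical sketch (the names and the OID arc are placeholders; you’d allocate OIDs from your own registered arc):

```ldif
# Placeholder OIDs -- substitute an arc you actually own.
attributetype ( 1.3.6.1.4.1.99999.1.1 NAME 'openid'
    DESC 'OpenID identifier URL'
    EQUALITY caseExactMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )

objectclass ( 1.3.6.1.4.1.99999.1.2 NAME 'openidPerson'
    DESC 'Auxiliary class carrying OpenID identifiers'
    SUP top AUXILIARY
    MAY ( openid ) )
```

Because the attribute isn’t declared SINGLE-VALUE it’s multi-valued by default, so one user can hold several OpenIDs from arbitrary providers; existing entries just gain an extra objectClass: openidPerson line via ldapmodify, with nothing else touched.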

Come on lazyweb!

Down and dirty with OpenID

I’ve spent the last few hours (after getting home from a swift pint in the pub, admittedly) having one of those satisfying coding experiences where the dots just start joining up… I took the very nicely written OpenID Enabled PHP library and bolted it onto the authentication routines for PROD.

The technical principles behind OpenID are simple enough: the user tells your application their OpenID URL, the app asks the relevant provider if everything is OK, and the provider comes back and tells the app a whole bunch of stuff saying that the user is kosher (or halal, or whatever it says in their profile).

The latest version of the toolkit made this a breeze – coming as it does with working examples and very well documented code. Most of the work was putting a few new hooks into my authentication script to catch both ends of the transaction, copying and pasting some code from the example scripts to create the consumer object and set it flying, and finally catching the response at the end and telling my application that the user is now logged in.
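The library hides the wire details, but the first leg of the transaction boils down to redirecting the user’s browser to their provider with a handful of query parameters. A rough stdlib-Python sketch of what gets constructed under the hood (all the URLs here are made up for illustration):

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_checkid_url(server_url, identity, return_to, trust_root):
    """Build an OpenID 1.1 checkid_setup redirect URL -- the URL the
    consumer sends the user's browser to so the provider can vouch
    for them."""
    params = {
        "openid.mode": "checkid_setup",
        "openid.identity": identity,
        "openid.return_to": return_to,
        "openid.trust_root": trust_root,
    }
    # Append to any query string the endpoint already carries.
    sep = "&" if "?" in server_url else "?"
    return server_url + sep + urlencode(params)

url = build_checkid_url(
    "https://openid.example.org/server",       # provider endpoint (made up)
    "https://alice.example.net/",              # the claimed identity URL
    "https://prod.example.ac.uk/auth/return",  # where the provider sends the user back
    "https://prod.example.ac.uk/",             # the realm the user approves
)
```

The library’s consumer object does the discovery, nonce handling and signature checking on top of this, which is exactly the part you don’t want to hand-roll.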

As with most quick work there is still quite a bit of tidying up to do – particularly around how I associate existing users in the LDAP directory with their OpenIDs… At the moment I’m just not bothering. Useful error messages would probably be a good idea too! Testing it with a few different providers is also a must.

One gotcha I discovered is that at some point the exact recipe for doing delegation must have changed, and the library is fussier about this than other implementations I’ve seen and used. When I tested against my own domain’s delegation, which I’ve had set up for years, it consistently failed. This is not good news, as there are probably thousands of people who still have theirs set up exactly as I did…
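The likely culprit: OpenID 1.x delegation used link tags with rel="openid.server" and rel="openid.delegate", while OpenID 2.0 expects rel="openid2.provider" and rel="openid2.local_id", and a stricter library may insist on the latter. A quick stdlib check for which flavour a page advertises (the sample page below is invented):

```python
from html.parser import HTMLParser

class OpenIDLinks(HTMLParser):
    """Collect rel -> href pairs from <link> tags in a page."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            a = dict(attrs)
            # rel can hold several space-separated tokens
            for rel in (a.get("rel") or "").split():
                self.links[rel] = a.get("href")

def delegation_info(html):
    """Report whether the page carries OpenID 1.x and/or 2.0
    delegation link tags."""
    p = OpenIDLinks()
    p.feed(html)
    return {
        "v1": ("openid.server" in p.links, "openid.delegate" in p.links),
        "v2": ("openid2.provider" in p.links, "openid2.local_id" in p.links),
    }

# A page set up "the old way" -- v1 tags only, which is what my
# long-standing delegation looked like:
page = """<html><head>
<link rel="openid.server" href="https://openid.example.org/server">
<link rel="openid.delegate" href="https://example.org/alice">
</head><body></body></html>"""
```

Running delegation_info over your own homepage is a cheap way to tell whether you’re one of the thousands still advertising only the old tags.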

Another (Ubuntu-specific) issue was that authentication against Yahoo’s service was failing because I was missing some bits of OpenSSL… This was fixed with a quick sudo apt-get install openssl ca-certificates.

Now, I’ve had a few brushes with OpenID in recent months, mainly around the web provision for the XCRI project – where we got OpenID working across WordPress, MediaWiki and (through some rather cheap hacking) bbPress. It was, however, reliant on plugins for said apps and never really a very satisfactory experience, generating a long string of complaints from users who got very variable results depending on which provider they were using. Upgrading any particular component of the site seemed just to lead to more chaos.

Sadly I think these variable experiences rather detract from the potential OpenID has to help us all better manage our online identities. That, and the insistence of so many “providers” like Yahoo! and WordPress.com that they are just that – providers, not consumers. I’ve already got about six OpenIDs on the go without really realising it – useful for testing, but the exact opposite of the single-authentication goal. Tsk tsk.

Anyway… Now that I’ve actually tackled the problem at a slightly deeper level I’m feeling confident that over time we can not only iron out XCRI’s woes but also introduce OpenID across the JISC CETIS (and IEC) services in a reasonably robust way. The future looks rosy, the sky is blue, thunderclouds? What thunderclouds?

Songs of restriction and compromise

I’m rather perturbed by some recent conversations with friends in university IS departments and, for that matter, by a recent experience at a conferencelet in Keele. It all boils down to questions of security – and the (in my view mistaken) belief that by restricting the network to certain ports administrators can limit users’ exposure to the evil that is the internet. This really bugs me: the internet is not just port 80, and the wealth of potential applications (and therefore educational opportunities) gets squeezed through the single technological bottleneck of the web. The trouble is that the compromises and attacks just get squeezed through the same chink as well, and it’s still hell to manage.

Some songs of restriction…

Here in Bolton we have quite a complex set of restrictions on different parts of the network: the main segment, which is mostly an internal free-for-all but where users have to go through a web proxy to get out; the unrestricted wilds of the res-net, where pretty much anything goes; the wireless, which (once you’ve authenticated) gives you unrestricted but mutually-isolated connectivity; and the DMZ, where the servers live and breathe. From within our own office most of us end up using the wireless and then VPN-ing back in to collect email (or in some cases using external providers). Only a few of us can print without connecting our laptops directly to the printer and reconfiguring them onto a different subnet – which is intensely annoying.

When I visited Keele I saw that they also have a wireless network – which in theory is all well and good. Apart from it being put under considerable strain by the sheer volume of people wanting to use it (nothing new for JISC-orientated conferences), there were two major issues with it. First, access credentials were handed out on pieces of paper, and users were then required to log in by downloading and running a slightly shady and buggy Java application. Second, once on the network it was very heavily restricted: regular web browsing was fine, but anything slightly more exotic, like picking up email over IMAP or (heaven forbid) using a VPN to get back into Bolton, was totally blocked. Strangely, though, the access credentials came with a temporary email account – which I didn’t touch or particularly want to mess with.

Some songs of compromise…

Unsophisticated: The other week we all got this email from “Support Team (University of Bolton)”:

Attn: Staff/Student,

To update your bolton.ac.uk account & webmail, you must reply to this email immediately and enter your password here (*********)

Failure to do this will immediately render your Email Address deactivated from our database as this is part of our security measures to serve you better.

Thank you for being a part of University of Bolton.

Regards,

Support Teams

You can probably guess how many phishies bit the bait on this one. It’s an old and well-tried social engineering technique, and sadly it still works. It’s just a regular email with a bogus “From” address and an external “Reply-To” address – no fancy stuff here. An attacker picking up valid credentials would not only be able to hijack the user’s account but also, in theory, get VPN access and dig their way into the internal network.
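The tell – replies being routed to a different domain than the apparent sender – is easy to flag mechanically, which is exactly the sort of check a mail gateway or a training exercise can use. A minimal sketch with Python’s stdlib email parsing (the addresses are invented):

```python
from email import message_from_string
from email.utils import parseaddr

def replyto_mismatch(raw):
    """Return True when Reply-To routes replies to a different domain
    than the apparent From address -- the classic tell of this kind
    of spoof."""
    msg = message_from_string(raw)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if not from_addr or not reply_addr:
        return False  # nothing to compare
    domain = lambda addr: addr.rsplit("@", 1)[-1].lower()
    return domain(from_addr) != domain(reply_addr)

# The shape of the phish above: plausible From, off-domain Reply-To.
phish = (
    "From: Support Team <support@bolton.ac.uk>\n"
    "Reply-To: helpdesk@example-webmail.com\n"
    "Subject: update your account\n"
    "\n"
    "Reply with your password...\n"
)
```

It’s a heuristic, not a silver bullet – plenty of legitimate mailing-list traffic sets an off-domain Reply-To – but it would have flagged this one.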

Sophisticated: The second example contains an element of personal shame – CETIS run a couple of servers, and a few weeks back (while I was off doing family things) one of them got rooted good and proper. I had neglected to run any security updates for a while and, as far as I can tell, the machine was compromised through vulnerable SSH keys – the SSH port being open to the world. Suspicious port-scanning activity was picked up downstream and we had no choice but to take the machine down until we could rebuild it.

So we can’t win?
You can see why IS departments are worried about providing unrestricted net access for users – and why the heavy-handed approach seems to work for shielding their machines from viral infection and so forth – but there will always be things that slip through, via social engineering or more sophisticated attacks. And there are many, many other scenarios: users working around the restrictions to do whatever it is they want to do, physically unplugging their machines, taking them home and bringing them back, reconfiguring them to do such-and-such.

Yes we can win
Institutions need to get real and run mandatory courses on computer security and behaviour for all staff – and, for the really clueless, some basic courses on computers and what they are. This is what is done in industry, and by all accounts it works pretty well. Gullibility may not be curable, but people should at least know that there are some clear lines of responsibility, and where and why they should not be crossed. On the technical side, institutions need to reconsider their policies to balance protecting users against the freedom to use whatever network services may help them teach, learn, research or administrate. Even if that does mean they can get on Skype, BitTorrent and Second Life!