Artnotes and GIRDS

The other week I applied for some project funding from JISC under the rapid innovation call. The idea behind the call is to promote “small” and/or “risky” and/or “mad” projects which can be done in a short timescale and do innovative things, the one condition being that projects have to fulfil some specific community need in higher education.

I don’t yet know if the bid is going to be successful but obviously I’ve got my fingers crossed, meanwhile I feel it’s worth getting some of my plans and ideas out there for comment.

The project is called Artnotes, the inspiration for it coming from visiting an artist friend of mine who showed me her notebook. It was full of doodles, scribblings, postcards and photographs of artwork she had seen, taped-up pages of things she didn’t want to destroy so much as save for another time. It was a beautiful and very tactile thing full of memories and influences.

It got me thinking: could you do something like that digitally? Could you use mobile devices (such as the iPhone, which I’ve been getting into coding for off my own back) to let artists and others catalogue and document their visual noodlings and found objects in a way that didn’t lose too much of the loveliness of a real book, but enabled all sorts of modern webby things – like being able to search through public image repositories and museum catalogues for images, or to share the book back out to the world? So I did a bit of reading around the subject, worried greatly about rights issues and risks, talked to a few other people around the community, panicked at the last minute and got a bid together.

One of the mockups from the Artnotes bid

If you’re interested, take a look at the BID DOCUMENT (reproduced here sans coversheets and budget) which explains the scope of the project and includes a bunch more mockups and planned features.

The trouble with ideas is that they tend to spawn more ideas – much of the work that went into the bid was in trying to cut it down and keep it limited to the core of what I thought the tool needed to be effective. Hopefully I’ve done this in the bid while not being too conservative.

One of the “cool” ideas that didn’t make it, and is probably a separate project in its own right, was to provide some kind of image recognition service hooking into the catalogues of major galleries – along the lines of being able to walk into a gallery, snap a picture of an exhibit and be delivered a link to the entry in the museum’s (publicly available and machine-readable, obviously) catalogue. I’ve been following the Museum API effort set up by Mike Ellis and contributed to by many others, which seems to be making some inroads into getting the necessary underpinnings in place. Of particular interest are services that already do data aggregation across a number of museums, the exemplary Brooklyn Museum API which lets you dig deep into their collection, and, on the image recognition side, Tineye, which does a very similar reverse image search on the web at large.

The service, which for the sake of convenience I’ve dubbed GIRDS (Gallery Image Recognition and Discovery Service), would at first cut be a web API (and probably a very lightweight browser-based interface) and would work a little like this:

girds diagram
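For what it’s worth, the matching step could be sketched like this – assuming each catalogue image has already been reduced to a 64-bit perceptual hash; the hash values, record URLs and threshold below are all made up for illustration:

```python
# Sketch of the GIRDS lookup step: find the catalogue image whose
# perceptual hash is nearest to the hash of a snapped photo.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def lookup(query_hash: int, catalogue: dict, threshold: int = 10):
    """Return the catalogue record nearest to the query hash,
    or None if nothing falls within the similarity threshold."""
    best_id, best_dist = None, threshold + 1
    for record_id, stored_hash in catalogue.items():
        d = hamming(query_hash, stored_hash)
        if d < best_dist:
            best_id, best_dist = record_id, d
    return best_id

# A toy catalogue mapping (hypothetical) museum record URLs to hashes
catalogue = {
    "http://example.org/collection/1234": 0xF0F0F0F0F0F0F0F0,
    "http://example.org/collection/5678": 0x0F0F0F0F0F0F0F0F,
}

# A phone snap of exhibit 1234, slightly noisy (two bits flipped)
snap = 0xF0F0F0F0F0F0F0F3
print(lookup(snap, catalogue))  # the record URL for exhibit 1234
```

The real work, of course, is in computing robust hashes from gallery-lit phone photos; that is exactly the part Tineye-style services specialise in.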

Better names for it are obviously most welcome!

If anything the GIRDS service would fit more in the category of mad than Artnotes and perhaps would have been a better one to submit for the call. However to my mind getting the nice user interface through the iPhone done first and then adding the image recognition capabilities through GIRDS later seems the right way to go about it. Unless of course anyone else fancies pitching in with either project (they are both going to have to be open source after all).

Irrespective of whether my bid for Artnotes is successful I’m feeling very strongly that this work is taking me back to my roots – dreaming up nice workable tools which can potentially be of some benefit to learners, teachers, researchers or the wider community.

See PROD run! Run PROD run!

As of a couple of days ago PROD has gone live. Not wishing to blow my own trumpet too much, but it went out pretty much on schedule too. The project tracking utility (for that is what it is) has reached the first of several milestones, resplendent with a new look and feel and a fair amount of the back-end plumbing sorted out. For anyone who is interested, the front end is being done in PHP and I’ve put my Ruby books on the shelf for the time being.

Essentially at this stage the structure consists of a list of projects, each of which may have a multiplicity of properties owing their syntax to the DOAP specification and other variables derived from discussions with JISC. The taxonomies for this are flexible and extra possibilities can be added very easily. Looking at the DOAP RDF schema it would not be hard to add multiple language support too – but perhaps we can keep that for some other time! I’d be interested to know if people might want such a thing.
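As a sketch of what pulling DOAP-derived properties looks like – the XML fragment below is a made-up minimal example, and a full RDF parse would want a proper RDF library rather than plain XML handling:

```python
# Minimal sketch: read a couple of project properties out of a
# DOAP-style document with the standard library. A real DOAP file can
# use richer RDF serialisations that this plain XML parse would miss.
import xml.etree.ElementTree as ET

DOAP = "http://usefulinc.com/ns/doap#"

sample = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                     xmlns:doap="http://usefulinc.com/ns/doap#">
  <doap:Project>
    <doap:name>PROD</doap:name>
    <doap:shortdesc>A project tracking utility</doap:shortdesc>
  </doap:Project>
</rdf:RDF>"""

root = ET.fromstring(sample)
project = root.find(f"{{{DOAP}}}Project")
name = project.findtext(f"{{{DOAP}}}name")
shortdesc = project.findtext(f"{{{DOAP}}}shortdesc")
print(name, "-", shortdesc)  # PROD - A project tracking utility
```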

Prod is designed to derive its data from a range of sources and produce a unified, up-to-date view of projects and the activity that is taking place within them. So far there are two data import modules – one for the old e-Learning Framework project database and another for the Excel spreadsheets of projects currently in use at JISC. The former was woefully out-of-date but provided an effective proof that the system was functioning, and the latter brings us a relatively fresh data set extending up to December 2007, as well as details of programme managers, funding, dates (sadly rendered useless by Excel) and themes.
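The Excel date problem at least has a well-known workaround if the export can be coaxed into yielding raw day serials – a sketch, assuming the 1900 date system:

```python
# Sketch of recovering dates from Excel day serials (1900 date system).
# Excel wrongly treats 1900 as a leap year, so the conventional epoch
# of 1899-12-30 compensates for that off-by-one.
from datetime import date, timedelta

EXCEL_EPOCH = date(1899, 12, 30)

def from_excel_serial(serial: int) -> date:
    return EXCEL_EPOCH + timedelta(days=serial)

print(from_excel_serial(39417))  # 2007-12-01
```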

This has thrown up a couple of other increments to work on over the next week or so – tighter validation and sanitisation of data for one, and some way of managing the precedence of properties. For example there is currently no way of saying that the data from one source is more authoritative than another, the most recent addition always wins…
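One way the precedence problem could be handled – purely a sketch, with made-up source names and ranking – is a fixed ordering of sources consulted per property, rather than letting the most recent import win:

```python
# Sketch of resolving a property's value by source precedence.
# Source identifiers and their ranking are illustrative only.
PRECEDENCE = ["jisc-spreadsheet", "elf-database", "manual-entry"]

def resolve(values):
    """values: list of (source, value) pairs for one property.
    Return the value from the most authoritative source present;
    unknown sources rank below everything listed."""
    rank = {src: i for i, src in enumerate(PRECEDENCE)}
    best = min(values, key=lambda sv: rank.get(sv[0], len(PRECEDENCE)))
    return best[1]

values = [
    ("elf-database", "Old description"),
    ("jisc-spreadsheet", "Current description"),
]
print(resolve(values))  # Current description
```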

The e-framework integration work (known internally as “development tables”) is also a major push at the moment. I’ve got this pretty much worked out conceptually and am hoping to have a first cut at functionality by the end of the month.

The next major iteration will also see the logging and display of new activity, starting with property updates and increasing in scope as more input sources are added. This will mean a corresponding change to the front page, transforming the list of “active projects” into an activity list or “mini-feed”. Keeping this well attenuated (i.e. relevant) may take some tweaking, but hopefully it will make a good at-a-glance view of exactly what is going on in project world.

PROD me until I squeak

I’ve spent some time over the past weeks thinking about and writing up a new specification for the Project Directory – now known as PROD. As previously discussed in my post entitled Out of my mind on doap and ohloh (my god that was in March!) it’s all about drawing in project information from a range of sources, twisting it about a bit, analysing it and producing metrics and presenting it in a friendly, socially-enabled way.

The challenge of taking this sea of information (something inherently large and complex) and attenuating it until it is easily digestible chimes rather well with much of what we have been discussing in our newly materialised department, the Institute for Educational Cybernetics, here at Bolton. By taking a modular approach to digesting the information produced by and about a project, I’m envisaging “boiling it all down” to a series of activity indicators showing (for example) that a given project has made a high number of commits to its SVN repository over the last month – relative, of course, to how many commits all the other projects have made. Other metrics would include a buzz-o-meter measuring general web activity referring to the project (sourced from places such as Technorati and Delicious).
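That boiling-down might look something like this – the scoring scheme (each project’s share of the mean) is just one plausible choice, not anything specified:

```python
# Sketch of a relative activity indicator: turn raw monthly commit
# counts into scores relative to all projects, where 1.0 means
# exactly average activity for the month. Project names are made up.
def activity_indicator(commits: dict) -> dict:
    mean = sum(commits.values()) / len(commits)
    return {p: round(c / mean, 2) for p, c in commits.items()}

commits = {"alpha": 40, "beta": 10, "gamma": 10}
print(activity_indicator(commits))  # {'alpha': 2.0, 'beta': 0.5, 'gamma': 0.5}
```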

In terms of the project itself, it’s going to be done in a rapid and sensible kind of way, with regular monthly milestones for new functionality! There is a bit of a discussion going on about platforms (Rails or PHP? I’m desperate to learn Rails and this would be a good opportunity! On the other hand, I code PHP in my sleep…).

PROD itself!

Trac instance – including mockups, milestones, wikiness, tickets and all manner of Trac goodness

Out of my mind on DOAP and OHLOH

One of my main projects at the moment is to devise and ultimately be part of implementing a new all-singing, all-dancing project tracking system. The starting point for this is of course the one I prepared earlier, which consists of a flat-ish database of the JISC-funded projects I’m interested in (not by any means all of them) mashed up with the Magpie RSS parser. So you get the projects, their recent blog posts, and aggregations of them.

The issues with it are around:

  • coverage – only a small subset are currently included
  • maintenance – new projects need adding, old projects need reviewing, there is no admin interface
  • added value – various kinds of information would add to the usefulness of the site
    • comments, ratings and reviews
    • more links and aggregations from blogs, wikis, repositories etc
    • relationships between projects – same developers, similar category
    • relationships with interop standards – it uses FOAF, it uses IMS QTI etc (this should be linking in with the e-framework service expression definitions)
    • indications of code quality and other metrics

So to some research – how might we go about developing this, and what exists out there in the same space?


DOAP, or Description Of A Project, is a useful-looking RDF/XML spec for describing projects. It has elements for descriptions, names and URLs (including those for source repositories), and handles person information by hooking in the FOAF spec.

There are a couple of models by which we could integrate this into a project tracker:

  1. Host the DOAPs: projects and staff fill in a form on the tracker site – the tracker site produces (persistent) DOAP XML.
  2. Aggregate DOAPs: projects host their own DOAP files – instead of filling out the form on the tracker site they simply point it at their hosted file, and the tracker picks up the project details, feeds etc. The files would be periodically spidered for updates, and can be generated by a third-party tool (DOAP-a-matic).

The aggregation approach is rather attractive from the point of view that projects become responsible for their own information. It is unattractive in that projects may not bother to maintain such files properly. There is a further argument to say that if they don’t maintain their DOAP files, they should just be considered worthless and dead – such tough love might be just what they need.
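The tough-love rule could be sketched like so – the project names, timestamps and 90-day window are all illustrative, and real timestamps would come from the spider’s fetch log rather than being supplied directly:

```python
# Sketch: flag projects as dormant when their hosted DOAP file has
# not changed within some window of time.
from datetime import datetime, timedelta

def flag_dormant(last_updated: dict, now: datetime, window_days: int = 90):
    """last_updated maps project -> last time its DOAP file changed.
    Return the set of projects whose file is older than the window."""
    cutoff = now - timedelta(days=window_days)
    return {proj for proj, ts in last_updated.items() if ts < cutoff}

now = datetime(2007, 6, 1)
last_updated = {
    "lively-project": datetime(2007, 5, 20),
    "dead-project": datetime(2006, 11, 1),
}
print(flag_dormant(last_updated, now))  # {'dead-project'}
```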

As I have alluded to in earlier posts I’ve had a couple of discussions with Ross Gardler from OSS Watch who is also engaged in activities around tracking JISC’s projects. He is also interested in using DOAP to achieve this in combination with his beloved Apache Forrest.

DOAP me up: some useful DOAP resources

  • The DOAP home page
  • Doap-a-matic web-form to generate a DOAP
  • There SHOULD be a validator service, however it doesn’t seem to exist these days. I suspect link-rot… which doesn’t exactly inspire confidence in the whole DOAP initiative :(


Then again, there is always the question: why are we bothering at all with our own tracker when there are better solutions out there in the world? One such is Ohloh, which does much of what we need – user comments, reviews, feed aggregation and general project information – and it really does the business when it comes to automated analysis of source code. I ran it over a little open source project I created and was delighted to learn that my efforts would be worth over a hundred thousand dollars if I were being paid, that my code is mainly written in PHP, and that there is a small development team of 2 people.

Samxom in Ohloh

This is just marvellous – and could even be used directly, in combination with instructions to projects to employ a little careful tagging. A tag of “JISC” might perhaps do the job. While it might be very web-2 and very trendy, the con with this is that it is out of our control – and I’m not quite sure of the provenance and policy of Ohloh.

JISC Conference 2007

I’ve been at the JISC Conference in Birmingham. I skipped the opening keynote opting to sit around the CETIS stand talking to colleagues (wilbert/oleg/paul/osswatch etc) including discussing the potential for an improved project tracking system based on DOAP and what to do with the old e-Learning Framework – all of which is completely part of my work-plan for the next six months.

I mooched around the stands – picking up several good things like a small rubber armchair and a neat little 4-port USB hub. Thanks to the exhibitors whoever you are… but I then went and left the bag of goodies on a train! How silly is that. Fortunately it didn’t have anything of real importance inside.

The first session I went to was on the learner’s experience of e-learning. Based on two ‘big’ studies, it examined learners and their use of and attitudes towards learning technologies. The session felt like somewhat of a bedding down into the web-2 mould – acknowledging that learners are mostly streets ahead of institutions in their demand for online services, as illustrated through blogs, MySpace, MSN and Faceparty, and that subverting these to educational ends is simply happening naturally.

One institution which has taken the bull by the horns and provided collaborative eportfolio-blogging services for the student body is Wolverhampton – through their use of Pebblepad. Emma Purnell, one of their recently qualified PGCE students came along to tell us all how she had caught the eportfolio bug and how it changed her learning – watch the video if you dare!

Next up, I went to a session about OpenAthens. In case anyone doesn’t know, Eduserv is a charity which provides the Athens authentication service to many educational institutions and organisations, mainly in the UK. The commercial and open-source worlds are starting to get on their own personal-identity bandwagons with offerings such as OpenID and Windows CardSpace. To deal with all this, Eduserv have cooked up a framework of their own which (for fairly obvious reasons) they have called OpenAthens. It’s a re-working of their existing software and services, only designed to work in a more heterogeneous environment. It includes libraries and plugins for client applications, administrative tools and pluggable back-end services capable of interfacing with all sorts of different federations and federation methods, including Shibboleth, OpenID and all the rest of them. By all accounts it sounds pretty neat. The session was supposed to be a workshop and I thought they might just do a real demo to show how it works… but no, this was another death-by-PowerPoint moment. They did however point to their developer site for us to glean the full gory details.

Finally, the inspirational talk of the day was given by Tom Loosemore from the BBC. He runs their whole online operation by the sound of it, and mercifully sounds like he really has his head screwed on. He outlined the scale of the BBC’s electronic empire (thousands of sites) and took us through the 15 most important things you need to know about the web. It’s always heartening when someone just talks common sense and you can almost hear everyone in the room go “oh my, of course, how sensible”. You can of course read the commentary and see his 15 important things for yourself, or read his blog – which is currently violating rule #8, hopefully to be rectified soon.

Reference model meeting

Earlier in the week I had a good meeting with the reference model projects in Brum – pretty much taking up the task of defining services from the brick-adopters meeting, only with an obvious focus on the e-framework and the definitions papers and templates. There was some quite animated discussion of how the template (for Service Genre) as it stands seems rather heavy, and that a more lightweight version should be produced – skipping any unnecessary or un-knowable elements. I’ve got recordings of it which I mean to chop into bits and post somewhere soon. As a follow-up, the projects have been asked to crystallise their thoughts on what is wrong with the e-framework definitions and template at the moment – and to fill in templates for two services by the end of the month.