TOGAF: fetch me a 27b stroke 6

I’ve been attending a course on The Open Group Architecture Framework, or TOGAF, down in London. The aim of TOGAF is to provide a methodology for effecting change in the IT capabilities of an organisation, taking a consistent (though perhaps rather top-down) approach which structures everything through analysis of the business needs and processes…

The course, run by Architecting the Enterprise, was pretty PowerPoint-heavy, and by the end of the first day we were all getting sleepy. There was plenty of terrible clip art and bullets, bullets, bullets. The second day was slightly better as we were all that little bit more awake, but there was still a general consensus that the balance could tilt more towards workshopping the case study as a means of teaching the method, rather than the endless transmission. They did do the job of giving us an understanding of the methodology – my criticism is simply a question of style.

The first principle of TOGAF is to put in place an architecture process – the Architecture Development Method (ADM) – mapping out the business needs, applications, data and infrastructure which make things work. Simply thinking about the architecture you’re planning to put in place, who the stakeholders are, scoping it out sensibly, getting the right solutions and planning the migrations in a structured, iterative manner, considering risks and so on, should clearly help organisations run a tighter, more efficient ship in terms of aligning IT with the actual business needs. The daisy below shows the model, each petal representing a core element of the process, all feeding the central requirements. In this diagram one petal is expanded to show the sub-process within…

togaf-daisy
TOGAF’s Architecture Development Method (as exploded by developer.com)

The question for us in Education is of course how this gels with the constraints in which we work – how we get buy-in for such an approach from both the top and the bottom of the organisation. Can it be applied in a more lightweight way? How do we deal with the technological shanty towns that exist in academia? Ultimately we figured that going through the initial stages of the methodology would probably serve to expose a lot of cultural issues and barriers to change within the organisation.

By way of context, the other participants of the course are mostly working on JISC Enterprise Architecture projects and actually have responsibility for applying these things in their own organisations.

There is a range of certified modelling tools for TOGAF – but it should be noted that there are other, “un-certified” tools which could conceivably be used to model and manage the TOGAF process. As ever with these kinds of things, they all have their specific uses, affordances, personal fans, strengths, weaknesses and so forth. We were not given a specific push towards one tool within the training course, but we were given some criteria by which to evaluate them. The core questions: does it support the ADM process, its deliverables and models, and how does the tool handle import/export and extensibility? Most significant of all, though, are probably usability and cost of ownership – which vary wildly across the available products, from circa $100 per seat to thousands and thousands.

To be continued…

See PROD run! Run PROD run!

As of a couple of days ago PROD has gone live. Not wishing to blow my own trumpet too much, but it went out pretty much on time. The project tracking utility (for that is what it is) is at the first of several milestones, resplendent with a new look and feel and a fair amount of the back-end plumbing sorted out. For anyone who is interested, the front end is being done in PHP and I’ve put my Ruby books on the shelf for the time being.

Essentially, at this stage the structure consists of a list of projects, each of which may have a multiplicity of properties, their syntax owed to the DOAP specification, plus other variables derived from discussions with JISC. The taxonomies for this are flexible and extra possibilities can be added very easily. Looking at the DOAP RDF schema, it would not be hard to add multiple-language support too – but perhaps we can keep that for some other time! I’d be interested to know if people might want such a thing.

Prod is designed to derive its data from a range of sources and produce a unified, up-to-date view of projects and the activity taking place within them. So far there are two data-import modules – one for the old e-Learning Framework project database and another for the Excel spreadsheets of projects currently in use at JISC. The former was woefully out of date but provided an effective proof that the system was functioning; the latter refreshingly brings us a relatively fresh data set extending up to December 2007, as well as details of programme managers, funding, dates (rendered useless by Excel, sadly) and themes.
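Under the hood, the idea is that every import module conforms to the same simple shape and hands back projects in a common form. A minimal sketch of that idea (all names hypothetical – this is not the actual PROD code):

<?php
// Each data source (ELF database, JISC spreadsheet, ...) implements
// one interface and returns projects as plain associative arrays.
interface ProjectImporter
{
    /** @return array[] a list of projects, each an array of properties */
    public function fetchProjects();
}

class JiscSpreadsheetImporter implements ProjectImporter
{
    private $rows;

    public function __construct(array $rows)
    {
        // $rows would come from parsing the Excel export (e.g. via CSV)
        $this->rows = $rows;
    }

    public function fetchProjects()
    {
        $projects = array();
        foreach ($this->rows as $row) {
            $projects[] = array(
                'name'              => trim($row['Project']),
                'programme-manager' => trim($row['Manager']),
                'funding'           => trim($row['Funding']),
            );
        }
        return $projects;
    }
}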

This has thrown up a couple of other increments to work on over the next week or so – tighter validation and sanitisation of data for one, and some way of managing the precedence of properties. For example, there is currently no way of saying that the data from one source is more authoritative than another; the most recent addition always wins…
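To illustrate what precedence might look like, here is a sketch of one possible scheme – nothing implemented yet, and all names invented – where each recorded value carries its source, and a per-source ranking beats simple recency:

<?php
// Given every recorded value for one property, prefer the most
// authoritative source rather than simply the most recent addition.
function resolveProperty(array $values, array $sourceRank)
{
    $best = null;
    foreach ($values as $v) {
        // Unknown sources rank below everything we have listed.
        $rank = isset($sourceRank[$v['source']]) ? $sourceRank[$v['source']] : PHP_INT_MAX;
        if ($best === null
            || $rank < $best['rank']
            || ($rank === $best['rank'] && $v['added'] > $best['added'])) {
            // ISO dates compare correctly as strings, so > works here.
            $best = array('rank' => $rank, 'added' => $v['added'], 'value' => $v['value']);
        }
    }
    return $best === null ? null : $best['value'];
}

// Lower number = more authoritative.
$sourceRank = array('jisc-spreadsheet' => 1, 'elf-database' => 2);
$values = array(
    array('value' => 'ELF title',  'source' => 'elf-database',     'added' => '2006-01-01'),
    array('value' => 'JISC title', 'source' => 'jisc-spreadsheet', 'added' => '2005-06-01'),
);
echo resolveProperty($values, $sourceRank); // "JISC title", despite being older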

The e-framework integration work (known internally as “development tables”) is also a major push at the moment. I’ve got this pretty much worked out conceptually and am hoping to have a first cut at functionality by the end of the month.

The next major iteration will also see the logging and display of new activity, starting with property updates and increasing in scope as more input sources are added. This will mean a corresponding change to the front page, transforming the list of “active projects” into an activity list or “mini-feed”. Keeping this well attenuated (i.e. relevant) may take some tweaking, but hopefully it will make a good at-a-glance view of exactly what is going on in project world.

PROD me until I squeak

I’ve spent some time over the past weeks thinking about and writing up a new specification for the Project Directory – now known as PROD. As previously discussed in my post entitled Out of my mind on DOAP and OHLOH (my god, that was in March!), it’s all about drawing in project information from a range of sources, twisting it about a bit, analysing it to produce metrics, and presenting it in a friendly, socially enabled way.

The challenge – taking this sea of information (something inherently large and complex) and attenuating it until it is easily digestible – chimes rather well with much of what we have been discussing in our newly materialised department, the Institute for Educational Cybernetics, here at Bolton. By taking a modular approach to digesting the information produced by and about a project, I’m envisaging “boiling it all down” to a series of activity indicators showing (for example) that a given project has made a high number of commits to its SVN repository over the last month, relative of course to how many commits all the other projects have made. Other metrics would include a buzz-o-meter measuring general web activity referring to the project (sourced from places such as Technorati and Delicious).
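As a sketch of the kind of sum involved – scoring each project’s commits relative to the busiest project that month (max-normalisation is just one possible scaling, and all the numbers are invented):

<?php
// Score each project's SVN commits over the last month relative to
// all the other projects: 0.0 = dormant, 1.0 = busiest this month.
function commitActivityScores(array $commitCounts)
{
    $max = max($commitCounts);
    $scores = array();
    foreach ($commitCounts as $project => $count) {
        $scores[$project] = $max > 0 ? $count / $max : 0.0;
    }
    return $scores;
}

$counts = array('prod' => 42, 'sleepy-project' => 3, 'dead-project' => 0);
print_r(commitActivityScores($counts));
// prod => 1.0, sleepy-project => ~0.07, dead-project => 0.0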

In terms of the project itself, it’s going to be done in a rapid and sensible kind of way, with regular monthly milestones for new functionality! There is a bit of a discussion going on about platforms (Rails or PHP? I’m desperate to learn Rails and this would be a good opportunity! On the other hand, I code PHP in my sleep…)

PROD itself! (prod.cetis.org.uk)

Trac instance (trac.cetis.org.uk/trac.cgi/prod)
Including mockups, milestones, wikiness, tickets and all manner of trac goodness

LDAP Disaster

On Monday afternoon I updated various packages on our Fedora Core 5 server using yum. This has in the past caused one or two little tragedies. Really I should know better and do such updates over the weekend but of course I went ahead all gung-ho.

The vital, mission-critical thing that died this time was the OpenLDAP server which runs authentication across all the CETIS sites. No-one could get in to edit the wikis, the blogs or a whole bunch of other services – which is pretty disastrous really.

I racked my brain for all of Tuesday, and even a few hours on Monday night, trying to figure out what had happened. Basically it seemed that all the data in the OpenLDAP database had disappeared. I could connect to the server but it was unable to list the nodes of the directory. I tried a few command-line diagnostic tools: slapcat produced absolutely no output; slapd_db_recover happily recovered something but made no difference whatsoever. Doing an ldapsearch (which should dump the whole dataset) gave the following:

[root@arwen ldap]# ldapsearch -x
# extended LDIF
#
# LDAPv3
# base with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# search result
search: 2
result: 32 No such object

I started off thinking that my config files were knackered – so I pored over ldap.conf and slapd.conf for hours – and nothing changed. I did notice that there was an /etc/ldap.conf as well as an /etc/openldap/ldap.conf. I compared the two and removed the one loose in /etc as it seemed wrong. It didn’t help.

Next I spent a while chasing a big red herring, having noticed messages in the logs when starting slapd:

Jun 13 12:01:32 arwen slapd[18004]: sql_select option missing
Jun 13 12:01:32 arwen slapd[18004]: auxpropfunc error no mechanism available
Jun 13 12:01:32 arwen slapd[18004]: auxpropfunc error invalid parameter supplied

Several sources claimed that this was to do with permission problems and SASL – but it turned out that it was completely unrelated to my actual problem and could be safely ignored. Again I wasted loads of time reading about SASL and chmodding files everywhere. I suppose it might become important were I ever to decide to actually use SASL with the directory.

So where had my data gone? This morning, while on a conference call, I was idly noodling through the database files in /var/lib/ldap and noticed a directory called rpmorig which I hadn’t really been through. I looked, and I saw, and I suddenly realised that there were a lot more .bdb files in there than in the parent directory – and that they were full of data. The penny dropped: yum had kindly backed up all my data into this directory and replaced the working files with fresh, empty ones. I moved the contents of rpmorig back into /var/lib/ldap, restarted slapd and behold: EVERYTHING WORKS AGAIN.

I curse whoever put together that yum package.

CETIS-Redux

A year has passed since I started thinking about the redesign of the CETIS website, and inevitably, now that the whole thing is creaking into some semblance of what Scott and I had originally intended, it has been time to go back to basics and re-examine what we thought we were doing, why we are doing it, whether it is working, and what on earth we are going to do next.

There are a few processes going on: Mark, Sharon and Adam have been conducting a review of the community wiki aspects of the web presence, and the e-learning focus team are considering where to go next with their magazine-style site, with a view to merging it with the JISC-CETIS page. The Communications team has been discussing the whole show from start to finish and back again.

My thoughts

1) You are in a twisty turny maze

There is a tendency for people to get lost in the site. Removing some navigational elements (especially in the wiki) and applying some others (breadcrumbs, menus etc.) in a coherent way will, I hope, make a difference – but the main proposal is as follows:

Merge the www.cetis page, the jisc.cetis page and the elearning focus site into one coherent magaziney all-singing portal.

Front page mockup v3

(old versions: v1 v2)

The mockup (v3) shows the main elements we have identified – a monthly editorial, regularly changing “features”, and constantly changing “news”. It also gives prominence to the SIGs via a bar on the left-hand side…

To start off with, the news and features would actually be drawn from the blogs, as the current aggregation is – only the editorial process would be stepped up, with lead-ins and article filtering and selection done by the focus team. As is done with e-learning focus, articles may also be commissioned from external writers.

V2 has a horizontal-slice approach: Banners | Navigation | Editorial | News etc. (3 streams) | Other stuff (4 streams)

V1 is an earlier attempt – and is more like the aggregation as it stands at the moment.

2) Re-work the SIG entry points

We made a decision quite early on that the SIGs would simply have a protected wiki page each to serve as their main site – giving them total flexibility to do whatever they wanted. Of course this approach led to inconsistency and an extra learning curve for staff. The result was certainly not easy to follow for the outside observer and generally quite unsatisfactory to my mind.

sig-page.pdf
I have in fact started working on it

So the plan would be coherent entry points combining the main details of the SIG, good-quality linkage with the events system, and content drawn in from the main aggregation and wiki (by tag, naturally). Crucially, the co-ordinators need customisable space to do with as they wish, whether that be posting up some PowerPoints or pointing to interesting resources online somewhere. I need another pass over Mark and Sharon’s work, as well as a round of discussion with co-ordinators, to figure out exactly what really needs to be done.

3) The project tracker

As already discussed somewhat in my post about DOAP and Ohloh, I have a chunk of time set aside for re-working the project tracking system so we can reliably keep tabs on what is going on with JISC projects. Again, this needs to be nicely integrated.

Example Project page

One last thing

There is one last thing. I propose removing all traces of rounded corners in favour of the infinitely superior square corners. Trendy design at its best, don’t you think ;)

Out of my mind on DOAP and OHLOH

One of my main projects at the moment is to devise, and ultimately be part of implementing, a new all-singing, all-dancing project tracking system. The starting point for this is of course the one I prepared earlier, which consists of a flat-ish database of the JISC-funded projects I’m interested in (not by any means all of them) mashed up with the Magpie RSS parser. So you get the projects, their recent blog posts, and aggregations of them.
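For flavour, the mashing-up amounts to little more than something like this sketch – the feed URLs are invented, and the loop assumes MagpieRSS’s usual fetch_rss() interface:

<?php
// Fetch each project's feed with MagpieRSS and pool the recent posts.
require_once 'rss_fetch.inc'; // MagpieRSS

$projectFeeds = array(
    'prod'       => 'http://example.org/prod/feed',
    'other-proj' => 'http://example.org/other/feed',
);

$posts = array();
foreach ($projectFeeds as $project => $url) {
    $rss = fetch_rss($url);   // Magpie parses (and caches) the feed
    if (!$rss) {
        continue;             // skip feeds that fail to fetch or parse
    }
    foreach (array_slice($rss->items, 0, 5) as $item) {
        $posts[] = array(
            'project' => $project,
            'title'   => $item['title'],
            'link'    => $item['link'],
        );
    }
}
print_r($posts);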

The issues with it are around:

  • coverage – only a small subset are currently included
  • maintenance – new projects need adding, old projects need reviewing, there is no admin interface
  • added value – various kinds of information would add to the usefulness of the site
    • comments, ratings and reviews
    • more links and aggregations from blogs, wikis, repositories etc
    • relationships between projects – same developers, similar category
    • relationships with interop standards – it uses FOAF, it uses IMS QTI etc (this should be linking in with the e-framework service expression definitions)
    • indications of code quality and other metrics

So to some research – how might we go about developing this, and what exists out there in the same space?

DOAP

DOAP, or Description Of A Project, is a useful-looking RDF-XML spec for describing projects. It has elements for descriptions, names and URLs (including those for source repositories), and hooks in the FOAF spec for person information.

There are a couple of models by which we could integrate this into a project tracker:

  1. Host the DOAPs: projects and staff fill in a form on the tracker site – the tracker site produces (persistent) DOAP XML.
  2. Aggregate DOAPs: projects host their own DOAP files – instead of filling out the form on the tracker site, they simply point it at their hosted file, and the tracker picks up the project details, feeds etc. The files would be periodically spidered for updates, and can be generated with a third-party tool (doap-a-matic). A sketch of this approach follows below.
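To make option 2 concrete, here is a minimal sketch of pulling the name and homepage out of a DOAP file with PHP’s SimpleXML. The DOAP sample is made up, and a real spider would of course fetch the XML over HTTP rather than inline it:

<?php
// A tiny, invented DOAP document standing in for a project's hosted file.
$doc = <<<XML
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:doap="http://usefulinc.com/ns/doap#">
  <doap:Project>
    <doap:name>PROD</doap:name>
    <doap:shortdesc>JISC project tracking</doap:shortdesc>
    <doap:homepage rdf:resource="http://prod.cetis.org.uk/"/>
  </doap:Project>
</rdf:RDF>
XML;

$DOAP = 'http://usefulinc.com/ns/doap#';
$RDF  = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#';

$xml = simplexml_load_string($doc);
foreach ($xml->children($DOAP)->Project as $project) {
    $props = $project->children($DOAP);
    echo (string) $props->name, "\n";       // PROD
    echo (string) $props->shortdesc, "\n";  // JISC project tracking
    // homepage is an rdf:resource attribute rather than element text
    echo (string) $props->homepage->attributes($RDF)->resource, "\n";
}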

The aggregation approach is rather attractive in that projects become responsible for their own information. It is unattractive in that projects may not bother to maintain such files properly. There is a further argument that if they don’t maintain their DOAP files, the projects should just be considered worthless and dead – such tough love might be just what they need.

As I have alluded to in earlier posts, I’ve had a couple of discussions with Ross Gardler from OSS Watch, who is likewise engaged in activities around tracking JISC’s projects. He too is interested in using DOAP to achieve this, in combination with his beloved Apache Forrest.

DOAP me up: some useful DOAP resources

  • The DOAP home page
  • Doap-a-matic web-form to generate a DOAP
  • There SHOULD be a validator service; however, it doesn’t seem to exist these days. I suspect link-rot… which doesn’t exactly inspire confidence in the whole DOAP initiative :(

Ohloh

Then again, there is always the question: why are we bothering with our own tracker at all when there are better solutions out there in the world? One such is Ohloh.net, which does much of what we need – user comments, reviews, feed aggregation and general project information – and it really does the business when it comes to automated analysis of source code. I ran it over a little open-source project I created and was delighted to learn that my efforts would be worth over a hundred thousand dollars if I were being paid, that my code is mainly written in PHP, and that there is a small development team of two people.

Samxom in Ohloh

This is just marvellous – and could even be used directly, in combination with instructions to projects to employ a little careful tagging. The word “JISC” might well do the job. While it might be very web 2.0 and very trendy, the con is that it is out of our control – and I’m not quite sure of the provenance and policy of Ohloh.

JISC Conference 2007

I’ve been at the JISC Conference in Birmingham. I skipped the opening keynote, opting to sit around the CETIS stand talking to colleagues (Wilbert/Oleg/Paul/OSS Watch etc.), including discussing the potential for an improved project tracking system based on DOAP, and what to do with the old e-Learning Framework – all of which is completely part of my work plan for the next six months.

I mooched around the stands, picking up several good things like a small rubber armchair and a neat little 4-port USB hub. Thanks to the exhibitors, whoever you are… but I then went and left the bag of goodies on a train! How silly is that? Fortunately it didn’t have anything of real importance inside.

The first session I went to was on the learners’ experience of e-learning. Based on two ‘big’ studies, it examined learners and their use of, and attitudes towards, learning technologies. The session felt somewhat like a bedding-down into the web 2.0 mould – acknowledging that learners are mostly streets ahead of institutions in their demand for online services, as illustrated through blogs, MySpace, MSN and Faceparty, and that the subverting of these to educational ends is simply happening naturally.

One institution which has taken the bull by the horns and provided collaborative eportfolio-blogging services for the student body is Wolverhampton, through their use of PebblePad. Emma Purnell, one of their recently qualified PGCE students, came along to tell us how she had caught the eportfolio bug and how it changed her learning – watch the video if you dare!

Next up, I went to a session about OpenAthens. In case anyone doesn’t know, Eduserv is the charity which provides the Athens authentication service to many educational institutions and organisations, mainly in the UK. The commercial and open-source worlds are starting to get on their own personal-identity bandwagons with offerings such as OpenID and Windows CardSpace. To deal with all this, Eduserv have cooked up a framework of their own which (for fairly obvious reasons) they have called OpenAthens. It’s a re-working of their existing software and services, only designed to work in a more heterogeneous environment. It includes libraries and plugins for client applications, administrative tools, and pluggable back-end services capable of interfacing with all sorts of different federations and federation methods, including Shibboleth, OpenID and all the rest of them. By all accounts it sounds pretty neat. The session was supposed to be a workshop and I thought they might just do a real demo to show how it works… but no, this was another death-by-PowerPoint moment. They did however point to their developer site http://labs.eduserv.org.uk/aim/ for us to glean the full gory details.

Finally, the inspirational talk of the day was given by Tom Loosemore from the BBC. He runs their whole online operation, by the sound of it, and mercifully sounds like he really has his head screwed on. He outlined the scale of the BBC’s electronic empire (thousands of sites) and took us through the 15 most important things you need to know about the web. It’s always heartening when someone just talks common sense and you can almost hear everyone in the room go “oh my, of course, how sensible”. You can of course read the commentary and see his 15 important things for yourself. Or read his blog, which is currently violating rule #8 – hopefully to be rectified soon.

You have permission

The CETIS blogs had a requirement (from day zero) that all the contributors should be able to see each other’s “private” posts. The idea behind this is that the blogs can be used for internal reporting and chat within the organisation at the same time as being used for public work.

I had scratched my head and tried a whole range of plugins for wordpress-mu to try to achieve this, but nothing quite did what I needed. The closest was the Role Manager plugin; however, it didn’t work with wordpress-mu… and after I had spent a quantity of time trying to get it working, I realised that its roles were assigned to users on a per-blog basis rather than globally. It’s a shame, as a roles-and-capabilities-based solution had sounded reasonably elegant.

Today I woke up with the resolve that I should just take matters into my own hands and make a small plugin of my own design to force wordpress-mu into doing what I wanted. Well, actually I started with a direct hack on the source and then thought better of it and wrote a plugin instead! It manipulates the main WordPress query (as found in query.php), force-bypassing the user capability check.

I also gave the plugin the option of showing private posts in un-authenticated outgoing news feeds – for the sake of sanity this is limited to post titles and URLs. To enable it you append ?showprivate=true to the feed URL.

It even has a rough administrative interface allowing the private feeds and posts to be turned on and off.
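For a flavour of the approach, here is a minimal sketch of the central trick using the posts_where filter rather than the direct query surgery the real plugin does. The function name is made up, the exact WHERE fragment varies between WordPress versions, and the real plugin’s title-and-URL-only feed limiting is omitted:

<?php
/*
Plugin Name: Show Private Posts (sketch -- not the actual CETIS plugin)
*/
// Widen the main query's WHERE clause so that any logged-in user sees
// everyone's private posts, and feeds do too when ?showprivate=true.
function spp_show_private_where($where)
{
    $feed_opt_in = is_feed()
        && isset($_GET['showprivate'])
        && $_GET['showprivate'] === 'true';

    if (is_user_logged_in() || $feed_opt_in) {
        // The stock query admits published posts only (plus private
        // posts the current user owns); admit all private posts too.
        // NB: the exact fragment differs between WordPress versions.
        $where = str_replace(
            "post_status = 'publish'",
            "post_status IN ('publish', 'private')",
            $where
        );
    }
    return $where;
}
add_filter('posts_where', 'spp_show_private_where');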

Download the source code

Mediawiki LDAP headaches part 6

MediaWiki has a useful LDAP plugin. We have been using it on the CETIS wikis for some months now and it has been fine. The time came when we needed to promote various staff members to the rank of administrator so they could do things like import content and protect pages. In all honesty it wasn’t a hard thing to achieve: I created a new groupOfNames entry called sysop in the LDAP directory and populated it with the names of our staff members. Adding the appropriate lines to the MediaWiki configuration scripts, after some trial and error, resulted in the groups being synchronised and the appropriate people becoming appropriately powerful.
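For the record, the LocalSettings.php fragment ends up looking something along these lines – the setting names below are from my memory of the LDAP Authentication extension, so check them against the version you are running, and the server and DN values are invented:

<?php
// Sketch of the LDAP-group wiring in LocalSettings.php.
$wgLDAPDomainNames = array('cetis');
$wgLDAPServerNames = array('cetis' => 'ldap.example.org');

// Pull group membership from LDAP and map it onto wiki groups, so
// members of the groupOfNames entry cn=sysop become MediaWiki sysops.
$wgLDAPUseLDAPGroups      = array('cetis' => true);
$wgLDAPGroupObjectclass   = array('cetis' => 'groupofnames');
$wgLDAPGroupAttribute     = array('cetis' => 'member');
$wgLDAPGroupNameAttribute = array('cetis' => 'cn');
$wgLDAPGroupUseFullDN     = array('cetis' => true);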

This is fine. BUT I’m not totally satisfied. What I really wanted to do was give all members of the staff ou (organisational unit) the sysop privilege – so I don’t have to start assigning more groups to people than they already have. As far as I can see there isn’t an easy way to make a group containing all the children of an ou. Or at least, even if you can define the ou as a member of the group, no self-respecting implementation is going to realise quite what you’re getting at.

I’m left with the prospect of adding everyone to the group (which I’ll automate I suppose) – or hacking the plugin. Or I could re-structure the whole directory putting everyone in a single ou and using groups as the primary means of differentiating people. I don’t really want to do that.

Other ideas are (of course) welcome.

Fixing feeds

Last week I managed to get round to doing several items on the Web Tasklist (private wiki page), including sorting out all the JISC CETIS site news feeds. This covers the main feed from the front page, feeds organised by tag, and feeds from the events system. Needless to say, they are now all validating nicely and easily locatable by all your favourite aggregators.

The real sticking point in producing the feeds turned out to be the precise formatting of dates. The Atom 1.0 spec requires dates to be formatted according to RFC 3339, and the various flavours of RSS require a variation of RFC 822. All very well, I thought: I have the mighty Smarty templating engine running atop PHP, so all I need to do is ask it to format the dates using the built-in date-format conversion support. But no, that would be too easy.

Smarty has a useful modifier plugin called date_format which converts incoming dates (from PHP or MySQL native date formats) into anything you might want. It is essentially a wrapper for PHP’s strftime() function, taking the same format instructions as the C function of the same name. So I started concocting format strings for the two RFCs in question and trying to get the results to validate.
I also tried PHP’s date() function – this takes a completely different syntax, and even provides useful constants for just such standard dates. Not that they were much help either!

Atom (RFC 3339)

  • strftime() with %Y-%m-%dT%H:%M:%SZ gives 2007-02-12T17:01:07Z – the Z is a fudge; the time might not actually be in the UTC timezone
  • strftime() with %Y-%m-%dT%H:%M:%S%Z gives 2007-02-12T17:01:07UTC – no good; “UTC” is not valid here
  • strftime() with %Y-%m-%dT%H:%M:%S%z gives 2007-02-12T17:01:07+0000 – the lowercase %z is better, but the colon is missing from the time zone
  • date() with the DATE_ATOM constant gives 2007-02-12T17:01:07+00:00 – it’s right!

RSS (RFC 822)

  • strftime() with %a, %d %b %Y %H:%M:%S %Z gives Thu, 15 Feb 2007 17:12:23 UTC – produces ‘UTC’ as the time zone, which the RFC does not allow
  • strftime() with %a, %d %b %Y %H:%M:%S %z gives Thu, 15 Feb 2007 17:12:23 +0000 – the undocumented lowercase %z produces the right output!
  • date() with the DATE_RSS constant gives Thu, 08 Feb 2007 13:05:37 UTC – NO NO NO, not UTC! It should be right, for goodness sake!
  • date() with D, d M y H:i:s O gives Thu, 08 Feb 2007 13:05:37 +0000 – it’s right!

I find this state of affairs pretty silly really – bunging some dates in standard formats into a feed should be a trivial nothing, not something that takes hours of faffing to get quite right. I ended up writing a new Smarty wrapper for the date() function, with support for the useful constants and a correction for RSS/RFC 822 dates. The Smarty plugin is attached.
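The attached plugin is the real thing; a stripped-down sketch of the shape of it (parameter handling simplified, names following Smarty’s modifier-plugin conventions) would be:

<?php
// Lives in the Smarty plugins directory as modifier.phpdate_format.php
// and is used in a template as e.g. {$entry.updated|phpdate_format:"rss"}.
function smarty_modifier_phpdate_format($timestamp, $format = 'atom')
{
    // Accept MySQL datetimes and other strings as well as raw timestamps.
    if (!is_numeric($timestamp)) {
        $timestamp = strtotime($timestamp);
    }
    // Named formats that actually validate, sidestepping DATE_RSS's
    // bogus 'UTC' zone name on this build of PHP.
    $formats = array(
        'atom' => DATE_ATOM,           // 2007-02-12T17:01:07+00:00
        'rss'  => 'D, d M Y H:i:s O',  // Thu, 08 Feb 2007 13:05:37 +0000
    );
    if (isset($formats[$format])) {
        $format = $formats[$format];
    }
    return date($format, $timestamp);
}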

Download: Smarty plugin phpdate_format

And finally our feeds validate. Touch wood.