Approaches to building interoperability and their pros and cons

System A needs to talk to System B. Standards are the ideal means of achieving that, but pragmatics often dictate otherwise. Let’s have a look at what approaches there are, and their pros and cons.

When I looked at the general area of interoperability a while ago, I observed that useful technology becomes ubiquitous and predictable enough over time for the interoperability problem to go away. The route to such commodification largely depends on which party – vendors, customers, domain representatives – is most powerful and what their interests are. That describes the process very nicely, but it doesn’t help solve the problem of connecting stuff now.

So I thought I’d try to list what the choices are, and what their main pros and cons are:

A priori, global
Also known as de jure standardisation. Experts, user representatives and possibly vendor representatives get together to codify all or part of a service interface between systems that are emerging or don’t exist yet; the codification can cover the syntax, semantics or transport of data. Intended to facilitate the building of innovative systems. (A minimal sketch of such an interface follows the lists below.)

Pros:

  • Has the potential to save a lot of money and time in systems development
  • Facilitates easy, cheap integration
  • Facilitates structured management of the network over time

Cons:

  • Viability depends on the business model of all relevant vendors
  • Fairly unlikely to fit either the actually available data or the integration needs very well
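
To make that concrete: a de jure standard of this kind essentially fixes an abstract service contract before any concrete system exists, and each vendor then implements it. A minimal sketch in Python, with entirely hypothetical names (an illustration of the idea, not any actual standard):

```python
from abc import ABC, abstractmethod


class LearnerRecordService(ABC):
    """Hypothetical de jure standard: every conforming system must
    expose learner records through this agreed interface."""

    @abstractmethod
    def get_record(self, learner_id: str) -> dict:
        """Return a record using the standard's agreed field names."""

    @abstractmethod
    def put_record(self, learner_id: str, record: dict) -> None:
        """Store a record expressed in the standard vocabulary."""


class VendorASystem(LearnerRecordService):
    """One vendor's implementation; the internals differ per vendor,
    the interface (ideally) does not."""

    def __init__(self) -> None:
        self._store: dict = {}

    def get_record(self, learner_id: str) -> dict:
        return self._store[learner_id]

    def put_record(self, learner_id: str, record: dict) -> None:
        self._store[learner_id] = record
```

The con listed above shows up here too: the field names and operations are fixed years before real data arrives, so they may fit it poorly.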

A priori, local
That is, some type of Service-Oriented Architecture (SOA). Local experts design an architecture that codifies syntax, semantics and operations into services, usually built into agents that connect to each other via an enterprise service bus (ESB). (See the sketch after the lists below.)

Pros:

  • Can be tuned for locally available data and to meet local needs
  • Facilitates structured management of the network over time
  • Speeds up changes in the network (relative to ad hoc, local)

Cons:

  • Requires a major and continuous governance effort
  • Requires upfront investment
  • Integration of a new system still takes time and effort
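
As a rough illustration of the pattern, the sketch below stands in for an ESB with a toy in-memory bus; a real deployment would use an actual message broker and formal service definitions, and all topic and field names here are invented:

```python
from collections import defaultdict
from typing import Callable


class MiniBus:
    """Toy stand-in for an ESB: routes messages between agents by
    locally agreed topic names."""

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)


bus = MiniBus()

# Two local agents react to the same locally designed message type,
# without knowing anything about each other.
bus.subscribe("student.enrolled", lambda m: print("finance: bill", m["student_id"]))
bus.subscribe("student.enrolled", lambda m: print("library: account for", m["student_id"]))

# The student records agent publishes onto the bus.
bus.publish("student.enrolled", {"student_id": "s123", "course": "PHY101"})
```

The governance cost in the cons list is the price of keeping those topic names and message shapes agreed and stable across the whole institution.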

Ad hoc, local
Custom integration of whatever is on an institution’s network, done by the institution’s own experts in order to solve a pressing problem. Usually built on top of existing systems using whichever technology is to hand. (A sketch of what this often looks like follows the lists below.)

Pros:

  • Solves the problem owner’s problem fastest, in the here and now
  • Results accurately reflect the data that is actually there, and the solutions that are really needed

Cons:

  • Non-transferable beyond the local network
  • Needs to be redone every time something changes on the local network (considerable friction and cost for new integrations)
  • Can create hard-to-manage complexity
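
For flavour, ad hoc, local glue often looks something like the throwaway script below, assuming (purely for illustration) that one system exports CSV over HTTP and another accepts JSON; every URL and field name is hypothetical:

```python
import csv
import io
import json
import urllib.request

# Pull today's enrolments from the records system (hypothetical endpoint).
with urllib.request.urlopen("http://records.local/export/enrolments.csv") as resp:
    rows = list(csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8")))

# Push each row into the library system, renaming fields to whatever it expects.
for row in rows:
    payload = json.dumps({
        "patron": row["student_id"],
        "name": row["full_name"],
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://library.local/api/patrons",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Quick to write and exactly fitted to the data at hand; but every field name is baked in, so any change on either side means redoing the work, which is precisely the friction listed above.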

Ad hoc, global
Custom integration between two separate systems, done by one or both vendors. Usually built as a separate feature or piece of software on top of an existing system.

Pros:

  • Fast point-to-point integration
  • Reasonable to expect upgrades for future changes

Cons:

  • Depends on business relations between the vendors
  • Increases vendor lock-in
  • Can create hard-to-manage complexity locally
  • May not meet all needs, particularly cross-system business intelligence (BI)

Post hoc, global
Also known as standardisation, consortium style. Service provider and consumer vendors get together to codify a whole service interface between existing systems: syntax, semantics, transport. The resulting specs usually get built into systems.

Pros:

  • Facilitates easy, cheap integration
  • Facilitates structured management of the network over time

Cons:

  • Takes a long time to start, and is slow to adapt
  • Depends on the business model of all relevant vendors
  • Liable to fit either the available data or the integration needs poorly

Clearly, no approach offers instant nirvana, but it does make me wonder whether there are ways of combining approaches such that we can connect short-term gain with long-term goals. I suspect that if we could closely couple what we learn from ad hoc, local integration solutions to the design of post hoc, global solutions, we could improve both approaches.

Let me know if I missed anything!

If Enterprise Architecture is about the business, where are the business people?

The Open Group Enterprise Architecture conference in Munich last month saw a first meeting of Dutch and British Enterprise Architecture projects in Higher Education.

Probably the most noticeable aspect of the ‘enterprise architecture in higher education’ session was the commonality of theme, not just between the Dutch and British HE institutions, but also between the HE contingent and the enterprise architects of the wider conference. There are various aspects to the theme, but it really boils down to one thing: how does a bunch of architects get a grip on, and then re-fashion, the structure of a whole organisation?

In the early days, the answer was simply to set the scope of an architecture job to the expertise and jurisdiction of the typical enterprise architect team: IT systems. Both in the notional goal of architecture work and in its practice, that focus on IT alone seems too limiting. Even a relatively narrow interpretation of the frequently cited goal of enterprise architecture – to better align systems to the business – presupposes heavy involvement of all departments and the clout to change practices across the organisation.

The HE projects reported a number of strategies to overcome the conundrum. One popular method is to focus on one concrete project development at a time, and evolve an architecture iteratively. Another is to involve everyone by letting them determine and agree on a set of principles that underpin the architecture work before it starts. Yet other organisations tackle the scope and authority issue head-on and sort out governance structures before tackling the structure of the organisation, much like businesses tend to do.

In all of these cases, though, architects remain mostly focussed on IT systems, while remaining wholly reliant on the rest of the organisation both for what the systems actually look like and for clues about what they should do.

Presentations can be seen on the JISC website.

How users can get a grip on technological innovation

Control of a technology may seem a vendor business: Microsoft and Windows, IBM and mainframes. But by understanding how technology moves from prototype to ubiquity, the people who foot the bill can get to play too.

Though each successful technological innovation follows its own route to becoming an established part of the infrastructure, there are a couple of regularities. Simplified somewhat, the process can be conceived as looking like this:
[Figure: the general technology commodification process]

The interventions (open specifications, custom integration and the like) are not mutually exclusive, nor do they necessarily come in a fixed chronological order. Nonetheless, people do often try to establish a specification before the technology is entirely ‘baked’, in order to forestall expensive dead ends (the GSM mobile phone standard, for example). Likewise, a fully fledged open source alternative can take some time to emerge (e.g. Mozilla/Firefox in the web browser space).

The interventions themselves can be done by any stakeholder, be they vendors, buyers, sector representatives such as JISC or anyone else. Who benefits from which intervention is a highly complex and contextualised issue, however. Take, for example, the case of virtual learning environments (VLEs):

[Figure: the technology commodification process as applied to VLEs]

When VLEs were heading from innovation projects to the mainstream, stakeholders of all kinds got together to agree open standards in the IMS consortium. No one controlled the whole space, and no one wanted to develop an expensive system that didn’t work with third-party tools. Predictably, implementations of the agreed standards varied, and didn’t interoperate straight off. That created a niche for tools such as Reload and Course Genie, which allowed people to do some custom integration: glossing over the peculiarities of different content standard implementations. Good for them, but not optimal for buyers or the big VLE vendors.
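
That glossing-over can be pictured as a thin normalisation layer over vendors’ divergent renderings of ‘the same’ standard. A contrived sketch (the XML dialects and field names are invented for illustration, not actual IMS bindings):

```python
import xml.etree.ElementTree as ET

# Two vendors' renderings of 'the same' content metadata standard.
VENDOR_A = '<item><title>Week 1</title><href>wk1.html</href></item>'
VENDOR_B = '<resource name="Week 1" location="wk1.html"/>'

def normalise(xml_text: str) -> dict:
    """Map either vendor's dialect onto one common shape."""
    root = ET.fromstring(xml_text)
    if root.tag == "item":                      # vendor A's habits
        return {"title": root.findtext("title"),
                "href": root.findtext("href")}
    if root.tag == "resource":                  # vendor B's habits
        return {"title": root.get("name"),
                "href": root.get("location")}
    raise ValueError(f"Unrecognised dialect: {root.tag}")

assert normalise(VENDOR_A) == normalise(VENDOR_B)
```

Each new vendor quirk means another branch in normalise(), which is exactly the niche such custom integration tools occupied.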

In the meantime, some VLE vendors smelled an opportunity to control the technology, and to get rich off the (licence) rent. Plenty of individual buyers, as well as buyer consortia, took a conscious and informed decision to go along with such a near monopoly. Predictability and stability (i.e. fast commodification) were weighed against the danger of future vendor lock-in, and stability won.

When the inevitable vendor lock-in started to bite, another intervention became interesting for smaller tool vendors and individual buyers: reverse engineering. This intervention is clearer in other technologies, such as Windows file and print server protocols (the Samba project) and the PC hardware platform (the early ‘IBM clones’). Nonetheless, a tool vendor such as HarvestRoad made a business of freeing content that was caught in proprietary formats in closed VLEs. Good for them, not optimal for the big platform vendors.

Lastly, when the control of a technology by one platform becomes too oppressive, the obvious answer these days is to construct an open source competitor. Like Linux in operating systems, Firefox in browsers and Apache in web servers, Moodle (along with Sakai, ATutor, .LRN and a host of others) has achieved a more even balance of interests in the VLE market.

The same broad pattern can be seen in many other technologies, but often with very particular differences. In blogging software, for example, open source packages have been pretty dominant from the start, and one package even went effectively proprietary later on (Movable Type).

Likewise, the shifting interests of stakeholders can be very interesting in technologies that have not been fully commodified yet. Yahoo, for example, was not an especially strong proponent of open specifications and APIs in the web search domain until it found itself the underdog to Google’s dominance. Google clearly doesn’t feel the need to espouse open specifications there, since it owns the search space.

But Google doesn’t own mobile phone operating systems or social networking, so it is busy throwing its weight behind open specification initiatives there, just as it is busy reverse engineering some of Microsoft’s technologies in domains that have already commodified, such as office file formats.

From the perspective of technology users and sector representatives, it pays to consider each technology in its particular context before choosing the means to commodify it as advantageously, but above all as quickly, as possible. In the end, why spend time fighting over technology that is predictable, boring and ubiquitous when you can build brand new cool stuff on top of it?