CETIS publications, now on WordPress

We have recently changed how we present our publications to the world. Where once we put a file on the web somewhere, anywhere, and entered the details into a home-spun publication database, now we use WordPress. We’re quite pleased with how that has worked out, so we’re sharing the information that might help others use WordPress as a means of presenting publications to the world (a repository, if you like).

Why WordPress?
First, what were we trying to achieve? The overall aims were to make sure that our publications had good exposure online, to have a more coherent approach to managing them (for example, to collect all the files into one place in case we ever need to migrate them), and to move away from the bespoke system we were using to a system that someone else maintains. There were a few other requirements: we wanted something that was easy for us to adapt to fit the look and feel of the rest of our website; something that was easy to maintain (familiarity is an important factor in how easy something is: it's easy to use something if you know how to use it); and something that would present our publications in HTML and RSS, sliced and diced by topic, author, and publication type: a URL for each publication and for each type of publication, and feeds for everything. We're not talking about a huge number of publications, maybe 100 or so, so we didn't want a huge amount of up-front effort.

We thought about Open Journal Systems, but there seemed to be a whole load of workflow stuff that was relevant to journals but not to our publications. Likewise we thought about EPrints and DSpace, but they didn't quite look like what we wanted, and we are far more familiar with WordPress. As a wildly successful open source project, WordPress also fits the requirement of being maintained by other people: not just the core program, but all those lovely plugins and themes. So the basic plan was to represent each publication as a WordPress post and to use a suitable theme and plugins to present them as we wanted.

The choice of theme
Having settled on WordPress, the first decision was which theme to use. In order to get the look and feel similar to the rest of the CETIS website (and, to be honest, to make sure our publications pages didn't look like a blog) we needed a very flexible theme. The most flexible theme I know of is Atahualpa: with over 200 options, including custom CSS snippets, parameters and HTML snippets, it's close to being a template for producing your own custom themes. So, for example, the theme options I have set include a byline of By %meta('By')%. %date('F Y')%, which automatically inserts the additional metadata field 'By' and the date in the format of my choice, all of which can be styled any way I want. I'll come back to the “byline” metadata later.
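(For anyone not using Atahualpa: here's a minimal sketch, in plain WordPress template code, of roughly what that option does. The custom field name 'By' is ours; the code itself is illustrative, not Atahualpa's.)

    <?php
    // Minimal sketch of roughly what the %meta('By')% and %date('F Y')%
    // shorthands expand to: print the 'By' custom field and the post date
    // as a byline. Assumes it runs inside The Loop of a theme template.
    $byline = get_post_meta( get_the_ID(), 'By', true );
    if ( $byline ) {
        echo 'By ' . esc_html( $byline ) . '. ' . get_the_date( 'F Y' );
    }
    ?>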

One observation here: there is clearly a trade-off between this level of customisation and ease of maintenance. On the one hand these are options set within the Atahualpa theme that can be saved between theme upgrades, which is better than would have been the case had we decided to fork the theme or add a few lines of custom code to the theme’s PHP files. On the other hand, it is not always immediately obvious which setting in the several pages of Atahualpa theme options has been used to change some aspect of the site’s appearance.

A post for each publication
As I mentioned above we can represent each publication by creating a WordPress post, but what information do we want to provide about each publication and how does it fit into a WordPress post? Starting with the simple stuff:

  • Title of the publication -> title of WordPress post.
  • Abstract / summary -> body of post.
  • Publication file -> uploaded as attached media.
  • Type of publication -> category.
  • Topic of publication -> tag.

Slightly less simple:

  • The date of the publication is represented as the date of the post. This is possible because WordPress lets you choose when to publish a post. The default is for posts to be published immediately when you press the Publish button, but you can edit this to have them published in the past :)
    [Screenshot: the WordPress publication date option]

  • The author of the publication becomes the author of the post, but there are some complications. It's simple enough when the publication has a single author who works for CETIS: I just added everyone as an “author” user of WordPress, and a WordPress admin user can attribute any given post to the author of the publication it represents. Where there are two or more authors, a nifty little plugin called Co-Authors Plus allows them all to be assigned to the post. But we have some publications that we commissioned from external authors, so I created a user called “Other” for these “external to CETIS” authors. This saves having a great long list of authors to maintain and present, but creates a problem of how to attribute these external authors, a problem that was solved by using WordPress's “additional metadata” feature to enter a “by-line” for all posts. This also provides a nicely formatted by-line for multi-author papers without worrying about how to add PHP to put in commas and “and”s.
  • The only other additional metadata added was an identifier for each publication, e.g. the latest QTI briefing paper is No. 2011:B02.
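To give a flavour of how the pieces above fit together, here's a hedged sketch of creating one such publication post programmatically. The title, author names and category name are made up; the field names and the identifier format are the ones described above:

    <?php
    // Hedged sketch: create a publication post with a back-dated publication
    // date, a publication type (category), a topic (tag), and the two extra
    // metadata fields. Title, names and slugs are hypothetical.
    $post_id = wp_insert_post( array(
        'post_title'    => 'A Briefing on QTI',                   // hypothetical
        'post_content'  => 'Abstract / summary of the paper.',
        'post_status'   => 'publish',
        'post_date'     => '2011-03-01 09:00:00',                 // "published in the past"
        'post_category' => array( get_cat_ID( 'Briefing Paper' ) ),
        'tags_input'    => 'assessment',
    ) );
    add_post_meta( $post_id, 'By', 'Jane Example and John Example' ); // by-line
    add_post_meta( $post_id, 'Identifier', '2011:B02' );              // e.g. No. 2011:B02
    ?>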

Presenting it all
As well as customisation of the look and feel, the Atahualpa theme allows for menus and widgets to be added to the user interface. Atahualpa has an option to insert a menu into the page header, which we used for the links to the other parts of the CETIS website. In the left-hand sidebar we've used the custom menu widget to list the tags and categories, providing access to the publications divided by topic and publication type, as HTML and as a feed (just add /feed to the end of the URL). Also on the left, the List Authors plugin gives us links to publications by author.
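So, for example, if the site lived at publications.example.org with a category slug of briefing-paper and a tag of assessment (all illustrative), the pattern would be:

    http://publications.example.org/category/briefing-paper/       (HTML)
    http://publications.example.org/category/briefing-paper/feed   (RSS)
    http://publications.example.org/tag/assessment/feed            (RSS)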

In order to provide a preview of the publication in the post I used the TGN embed everything plugin. The only problem is that the “preview” is too good: it's readable, but not the highest quality, so it might lead some people to think that we're disseminating only low-quality versions of the papers, whereas we do include links to download high-quality files.

The built-in WordPress search is rubbish. For example, it doesn't include the author field in the search (not that the first thing we tested was vanity searching), and the results returned are sorted by date, not relevance. Happily, the Relevanssi plugin provides all the search we need.

Finally, a few tweaks. We chose URL patterns that avoid unnecessary cruft, and closed comments to avoid spam. We installed the Google Analytics plugin, so we know what you're doing on our site, and the Login Lock plugin for a bit of security. The only customisation we wanted that couldn't be done with a theme option or plugin was providing some context for the multi-post pages. These are pages like the list of all the publications, or of all the briefing papers, and we wanted a heading and some text to explain what that particular cut of our collection was. Some themes do this by default, based on information entered about the tag/category/author on which the cut is made, but not Atahualpa. I put a few lines of PHP into the theme's index.php template to deal with publication types, but we've yet to do it properly for all possible multi-post pages.
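For what it's worth, here's a simplified sketch of the kind of thing I added; ours does a little more than this:

    <?php
    // Simplified sketch of the few lines added to the theme's index.php:
    // give category (publication type) archive pages a heading and some
    // explanatory text taken from the category description.
    if ( is_category() ) {
        echo '<h1>' . esc_html( single_cat_title( '', false ) ) . '</h1>';
        echo category_description(); // text entered on the category admin screen
    }
    ?>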

And in the end…
As I said at the top, we're happy with this approach; if you have any comments on it, do please leave them below.

One last thing. Using a popular platform like WordPress means that there is a lot of support, and I don’t just mean a well supported code base and directory of plugins and themes. One of the most useful sources of support has been the WordPress community, especially the local group of WPUK, at whose meet-ups I get burritos and advice on themes, plugins, security and all things wordpressy.

The hunting of the OER

“As internet resources are being moved, they can no longer be traced,” I read in a press release from Knowledge Exchange. This struck me as important for OERs, since part of their “openness” is the licence to copy them, and I have recently been on something of an OER hunt which highlights the importance of using identifiers correctly and of “curatorial responsibility”.

The OER I was hunting was an “Interactive timeline on Anglo-Dutch relations (50 BC to 1830)” from the UKOER Open Dutch project. It was recommended to me a year or so ago as a great output, one whose utility pretty much anyone could see, which used the MIT SIMILE timeline software to create a really engaging interface. I liked it, but more importantly for what I'm considering now, I used it as an example when investigating whether putting resources into a repository enhanced their visibility on Google (in this case it did).

Well, that was a year or more ago. The other week I wanted to find it again, so I went to Google and searched for “anglo dutch timeline” (without the quotes). Sure enough, I got three results for the one I was looking for on the first page (of course, your results may vary; Google's like that nowadays). These were, from the bottom up:

  1. A link to a record in the NDLR (the Irish National Digital Learning Resources Repository) which gave the link URL as http://open.jorum.ac.uk:80/xmlui/handle/123456789/517 (see below)
  2. A link to a resource page in HumBox, which turned out to be a manifest-only content package (i.e. metadata in a zip file). Looking into it, there’s no resource location given in the metadata, and the pointer to the content (which should be the resource being described) actually points to the Open Dutch home page.
  3. Finally, a link to a resource page in Jorum. This also describes the resource I was looking for, but actually points to the Open Dutch project page. The URL of the Jorum page describing the resource is given as the persistent link. I believe that the NDLR harvests metadata from Jorum, so my guess is that that is why the NDLR lists this as the location of the resource.

Finding descriptions of a resource isn’t really helpful to many people. OK, I now know the full name and the author of the resource, which might help me track down the resource, but at this point I couldn’t. Furthermore, nobody wants to find a description of a resource that links to a description of the resource. I think one lesson concerns the importance of identifiers: “describe the thing you identify; identify the thing you describe.”

This story (and I very much suspect it is not an isolated case) has significance for debates about whether repositories should accept metadata-only “representations” of resources. Whether or not it is a good idea to deposit resources you are releasing as OERs in a third-party repository will depend on what you want to achieve by releasing them; whether or not it is a good idea for a repository to take and store resources from third parties will depend on what the repository’s sponsors wish to facilitate. Either way, someone needs to take some curatorial responsibility for the resource and for the metadata about it. That means on the one hand making sure that the resource stays on the web and on the other hand making sure that the metadata record continues to point to the right resource (automatic link checking for HTTP 404 responses etc. helps but, as this post on link rot notes, it’s not always that simple).
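For what the automation is worth, here's a minimal sketch using PHP's get_headers (the URL is the timeline's current home, mentioned below):

    <?php
    // A minimal sketch of automated link checking: fetch the status line for
    // a URL. A 404 is easy to catch this way; a link that still returns 200
    // but now points at the wrong resource is not, which is why a curator is
    // still needed.
    $url = 'http://www.ucl.ac.uk/alternative-languages/OER/timeline/';
    $headers = @get_headers( $url );
    echo $headers ? $headers[0] : "No response from $url";
    ?>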

By the way, thanks to the incomparable David Kernohan, I now know that the timeline is currently at http://www.ucl.ac.uk/alternative-languages/OER/timeline/.

UKOER Sources

I have been compiling a directory of how people can get at the resources released by the UKOER pilot phase projects: that is, the websites for human users and the “interoperability end points” for machines, i.e. the RSS and Atom feed URLs, SRU targets, OAI-PMH base URLs and API documentation. This wasn't nearly as easy as it should have been: I would have hoped that just listing the main URL for each project would be enough for anyone to get at the resources they wanted or the interoperability end point in a click or two, but that often wasn't the case.

So here are some questions I would like OER providers to answer by way of self assessment, which will hopefully simplify this in the future.

Does your project website have a very prominent link to where the OERs you have released may be found?

The phase 1 technical requirements for delivery platforms said:

Projects are free to use any system or application as long as it is capable of delivering content freely on the open web. … In addition projects should use platforms that are capable of generating RSS/Atom feeds, particularly for collections of resources

So: what RSS feeds do you provide for collections of resources and where do you describe these? Have you thought about how many items you have in each feed and how well described they are?

Are your RSS feed URLs and other interoperability endpoints easy to find?

Do your interoperability end points work? I mean, have you tested them? Have you spoken to people who might use them?
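Testing can start very small: does the endpoint respond, and does the response parse? A minimal sketch, with a placeholder feed URL:

    <?php
    // Minimal self-test for an RSS endpoint: fetch it, parse it, count items.
    // The URL is a placeholder; no error handling beyond the basics.
    $url = 'http://example.org/oer/feed';
    $xml = @simplexml_load_file( $url );
    if ( $xml && isset( $xml->channel->item ) ) {
        echo 'OK: ', count( $xml->channel->item ), " items\n";
    } else {
        echo "Failed to fetch or parse $url\n";
    }
    ?>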

While you're thinking about interoperability end points: have you ever thought of your URI scheme as one? If, for example, you have a coherent scheme that puts all your OERs under a base URI and, better, provides URIs with some easily identifiable pattern for those OERs that form a coherent collection, then building simple applications such as Google Custom Search Engines becomes a whole lot easier. A good example is how MIT OCW is arranged: most of the URIs follow the pattern http://ocw.mit.edu/courses/[department]/[courseName]/[resourceType]/[filename].[ext] (the exceptions are things like video recordings, where the actual media file is held elsewhere).
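To illustrate why this matters, here's a sketch of the kind of one-liner a coherent scheme makes possible, using the MIT OCW pattern above (the URL list is whatever you have harvested):

    <?php
    // Illustrative only: given a list of resource URLs (harvested from
    // sitemaps or feeds), a coherent URI scheme lets you select a collection,
    // here the mathematics courses on MIT OCW, with a single pattern.
    $urls  = array( /* ... harvested URLs ... */ );
    $maths = preg_grep( '#^http://ocw\.mit\.edu/courses/mathematics/#', $urls );
    ?>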

EdReNe: Building successful educational repositories

EdReNe, the EU-funded Educational Repositories Network, have just published what looks like a useful set of recommendations on Building successful educational repositories [pdf]. Many of the recommendations seem motherhood-and-apple-pie stuff: engage users, have policies, etc., though some of them, e.g. “support the needs of existing communities”, have interesting implications when thought through in more depth (in this case: don't base your repository strategy entirely on creating a new community).

Others that caught my eye:

  • Take advantage of generally used, open standards to allow for the broadest range of partnerships, future adaptability and innovation.
    With the comment “Use standards that are ‘as broad as possible’”: a reference to “web-wide” standards rather than those that are specific to the repository world?
  • Acknowledge that integration with a range of tools and services will greatly benefit uptake and use of digital learning resources.
    “What is useful, is the ability to integrate with the tools/service that the user selects.” So you’ll need an API.
  • Support the development of ‘sharing as a culture’ by providing user friendly mechanisms for depositing and repurposing
  • Open up information silos by a strong focus on web services, APIs and other ways of allowing seamless integration across services.
    “Repositories have to interface with many systems.”
  • Make it easy to participate – for all members
    “Barriers to participation are the single biggest problem.”
  • Present clear and easy-to-understand information on usage rights
    and
  • Clearly express usage rights to users when depositing or accessing resources
    The “when accessing” aspect of this is something that has been exercising me recently; it's hard to believe how many OERs don't have any CC licence information displayed on the resource itself. (For extra added irony, this report bears no licence information within it.)
  • Support open licensing to increase impact of funding and maximize possibilities for reuse and re-purposing,
  • Encourage use of CC-BY licenses when publishing own work
    and
  • Encourage institutions to engage in sharing and production of open content (institution management)
    The OER effort is clearly having an impact on repository thinking, though there are comments to these and other recs that reflect that not all resources in repositories will be open.
  • When content standards are encouraged, this should be done with central guidance
    “A top-down strategy for fixing standards does not work anymore.” (well, it only ever worked in some limited scenarios).

The recommendations are the output of EdReNe’s 5th seminar which took place in Copenhagen, 6-7 October 2010. Thanks to LTSO for the info about this.

Sharing service information?

Over the past few weeks the question of how to find service end points keeps coming up in conversation (I know, that says a lot about the sort of conversations I have). For example, we have been asked whether we can provide information about the RSS feed locations for the services/collections created by all the UKOER projects. I would generalise this to service end points, by which I mean things like OAI-PMH base URLs, RSS/Atom feed locations or SRU targets: more generally, the locations of the web API or protocol implementations that provide machine-to-machine interoperability. It seems that these are often harder to find than they should be, and I would like to recommend one approach, and suggest another, to help make them easier to find.

The approach I would like to recommend to those who provide service end points, i.e. those of you who have a web-based service (e.g. a repository or OER collection) that supports machine-to-machine interoperability (e.g. for metadata dissemination, remote search, or remote upload), is the one taken by web 2.0 hosts. Most of these have a reasonably easy-to-find section of their website devoted to documenting their API and providing “how-to” information on what can be done with it, with examples you can follow, and the best of them with simple step-by-step instructions. Here's a quick list by way of examples.

I’ll mention Xpert Labs as well because, while the “labs” or “backstage” approach in general isn’t quite what I mean by simple “how-to” information, it looks like Xpert are heading that way and “labs” sums up the experimental nature of what they provide.

That helps people wanting to interoperate with those services and sites they know about, but it leaves a more fundamental question: how do you find those services in the first place? For example, how do you find all those collections of OERs? Well, some interested third party could build a registry for you, but that's an extra effort for someone who is neither providing nor using the data/service/API. Furthermore, once the information is in the registry it's dead, or at least at risk of death. What I mean is that there is little contact between the service provider and the service registry: the provider doesn't really rely on the service registry for people to use their services, and the service registry doesn't actually use the information that it stores. Thus it's easy for the provider to forget to tell the service registry when the information changes, and if it does change there is little chance of the registry maintainer noticing.

So my suggestion is that those who are building aggregation services based on interoperating with various other sites provide access to information about the endpoints they use. An example of this working is the JournalToCs service, which is an RSS aggregator for research journal tables of contents but which has an API that allows you to find information about the journals it knows about (JOPML showed the way here, taking information from a JISC project that spawned JournalToCs and passing on lists of RSS feeds as OPML). Hopefully this approach, of endpoint users providing information about what they use, would only surface information that actually worked and was useful (at least for them).
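By way of illustration, here's a hedged sketch of consuming such endpoint information, assuming it is published as a flat OPML file of RSS feeds (the URL is a placeholder):

    <?php
    // Hedged sketch: read an OPML list of feed endpoints (the sort of thing
    // JOPML passed on) and fetch each feed in turn. Assumes a flat OPML
    // file; real OPML can nest outlines, and real code needs error handling.
    $opml = simplexml_load_file( 'http://example.org/oer-feeds.opml' );
    foreach ( $opml->body->outline as $outline ) {
        $feedUrl = (string) $outline['xmlUrl'];
        if ( $feedUrl ) {
            $feed = @simplexml_load_file( $feedUrl );
            if ( $feed ) {
                echo (string) $feed->channel->title, ': ', $feedUrl, "\n";
            }
        }
    }
    ?>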

Analysing OCWSearch logs

We have a meeting coming up on the topic of investigating what data we have (or could acquire) to answer the question of what metadata is really required to support the discovery, selection, use and management of educational resources. At the same time as I was writing a blog post about that, over at OCWSearch they were publishing the list of top searches for their collection (I think Pierre Far is the person to thank for that). So, what does this tell us about metadata requirements?

I've been through the terms in the top half of the list (it says that the list is roughly in descending order of popularity, though it would be really good to know more about how popular each search term was) and tried to judge what characteristic or property of the resource the searcher was searching on.

There were just under 170 search terms in total. It doesn’t surprise me that the vast majority (over 95%) of them are subject searches. Both higher-level, broad subject terms (disciplines, e.g. “Mathematics”) and lower-level, finer-grained subject terms (topics, e.g. “Applied Geometric Algebra”) crop up in abundance. I’m not sure you can say much about their relative importance.

What’s left is (to me) more interesting. We have:

  • resource types, specifically: “online text book”, “audio”, “online classes”.
  • People, who seem to be staff at MIT; while it's possible someone is searching for material about them or about their theories, I think it is more likely that people are searching for them as resource creators.
  • level, specifically: 101, Advanced (x2), college-level. These are often used in conjunction with subject terms.
  • Course codes e.g. HSM 260, 15.822, Psy 315. (These also imply a level and a subject.)

I think with more data and more time spent on the analysis we could get some interesting results from this sort of approach.

Jorum and Google ranking

Les Carr has posted an interesting analysis, Visibility of OER Material: the Jorum Learning and Teaching Competition. He searches for six resources on Google and compares the ranking, in the results page, of the copy in Jorum with the copy hosted elsewhere. The results are mixed: sometimes Jorum has the top place, sometimes some other site (institutional or author's site) is top, though it should be said that, with one exception, we're talking about which is first and which is second. In other words, both would be found quite easily.

Les concludes:

Can we draw any general patterns from this small sample? To be honest, I don’t think so! The range of institutions is too diverse. Some of the alternative locations are highly visible, so it is not surprising that Jorum is eclipsed by their ranking (e.g. Cambridge, very newsworthy Gurkhas international organisation). Some 49% of Open Jorum’s records provide links to external sources rather than holding bitstream contents directly. It would be very interesting to see the bigger picture of OER visibility by undertaking a more comprehensive survey.

Yes it would be very interesting to see the bigger picture, and also it would be interesting to see a more thorough investigation of just the Jorum’s role (I don’t think Les will mind the implication that he has no more than scraped the surface).

Some random thoughts that this raises in my mind:

  • Title searches are too easy; the quality of resource description will only be tested by searching for the keywords that are really used by people looking for these resources. Some will know the title of the resource, but not many. Just have a play with using the most important one or two words from the title, rather than the whole title, and see how the results change.
  • To say that Jorum enhances/doesn’t enhance visibility depending on whether it comes above or below the alternative sites is too simplistic. If it links to the other site Jorum will enhance the visibility of that site even if it ranks below it; having the same resource represented twice in the search engine results page enhances its visibility no matter what the ordering; on the other hand, having links from elsewhere pointing to two separate sites probably reduces the visibility of both.
  • Sometimes Jorum hosts a copy of the resource, sometimes it just points to a copy elsewhere; that’s got to have an effect (hasn’t it?).
  • What is the cause of the difference? When I've tried similar (very superficial) comparisons, I've noticed that Jorum gets some of the basics of SEO right, e.g. using the resource's title in the HTML title element (curiously, it doesn't seem to provide a meta description; see the sketch after this list). How does this compare to other hosts? I've noticed some other OER sites that don't get this right, so we could see Jorum as guaranteeing a certain basic quality of resource discovery rather than as necessarily enhancing visibility. (Question: is this really necessary?)
  • What happens over time? Do people link to the copy in Jorum or elsewhere? This will vary a lot, but there may be a trend. I'll note in passing that choosing six resources that had been promoted by Jorum's learning and teaching competition may have skewed the results.
  • Which should be highest ranked anyway? Do we want Jorum to be highly ranked to reflect its role as part of the national infrastructure, a place to showcase what you've produced; or do institutions see releasing OERs as part of a marketing strategy, where the best Jorum can do is quietly improve the ranking of the OERs on the institution's site by linking to them? This surely relates to the choice between having Jorum host the resource or just having it link to the resource on the institution's site (doesn't it?).
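Here's the sketch trailed in the list above: the two on-page basics, as they might appear in a resource page's head. The title is the real one from the Anglo-Dutch timeline; the description text is made up:

    <?php /* Illustrative only: a resource page that gets the basics right
       provides both of these. The description content here is invented. */ ?>
    <title>Interactive timeline on Anglo-Dutch relations (50 BC to 1830)</title>
    <meta name="description" content="An interactive timeline of Anglo-Dutch
      relations from 50 BC to 1830, built with the MIT SIMILE timeline software.">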

Sorry, all questions and no answers!

An open and closed case for educational resources

I gave a pecha kucha presentation (20 slides, 20 seconds per slide) at the Repository Fringe in Edinburgh last week. I've put the slides on slideshare, and there's also a video of the presentation, but since the slides are just pictures, the notes are a bit disjointed, and my delivery was rather rushed, it seems to me that it would be useful to reproduce what I said here. Without the 20-seconds-per-slide constraint.

The main thrust of the argument is that Open Educational Resource (OER) or OpenCourseWare (OCW) release can be a good way of overcoming some of the problems institutions have with managing their learning materials. By OER or OCW release we mean an institution, group or individual disseminating their educational resources under Creative Commons licences that allow anyone to take and use those resources for free. As you probably know, over the last year or so HEFCE have put a lot of money into the UKOER programme.

I first started thinking about this approach in relation to building repositories four or five years ago.

I was on the advisory group for a typical institutional learning object repository project. The approach that they, and many others like them at the time, had chosen was to build a closed, inward-facing repository, providing access and services only within the institution. The project was concerned about interoperability with their library systems and worried a lot about metadata.

The repository was not a success. In the final advisory group meeting I was asked whether I could provide an example of an institution with a successful learning object repository. I gave some rambling, unsatisfactory answer about how there were a few institutions trying the same approach, but it was difficult to know what was happening since they (like the one I was working with) didn't want to broadcast much information about what they were doing.

And two days later it dawned on me that what I should have said was MIT.

MIT OpenCourseWare
At that time MIT's OpenCourseWare initiative was by far the most mature open educational resource initiative; now we have many more examples. But in what way does OER-related activity relate to the sort of internal management of educational materials that concerned projects like the one with which I was involved?

The challenges of managing educational resources
The problems that institutional learning object repositories were trying to solve at that time were typically these:

  • they wanted to account for what educational content they had and where it was;
  • they wanted to promote reuse and sharing within the Institution;
  • they wanted more effective and efficient use of resources that they had paid to develop.

And why, in general, did they fail? I would say that there was a lack of buy-in or commitment all round: a lack of motivation for staff to deposit, and a lack of awareness that the repository even existed. There was also more focus on the repository per se and on systems interoperability than on directly addressing the needs of the stakeholders.

Does an open approach address these challenges?

Well, firstly, by putting your resources on the open web everyone will be able to access them, including the institution's own staff and students. What's more, once these resources are on the open web they can be found using Google, which is how those staff and students search. Helping your staff find and have access to the resources created by other staff helps a lot with promoting reuse and sharing within the institution.

It is also becoming apparent that there are good institution-level benefits from releasing OERs.

For example the OU have traced a direct link from use of their OpenLearn website to course enrolment.

In general terms, open content raises the profile of the institution and its courses on the web, providing an effective shop window for the institution’s teaching, in a way that an inward facing repository cannot. Open content also gives prospective students a better understanding of what is offered by an institution’s courses than a prospectus can, and so helps with recruitment and retention.

There's also a social responsibility angle on OERs. On launching the Open University's OpenLearn initiative, Prof. David Vincent said:

Our mission has always been to be open to people, places, methods and ideas and OpenLearn allows us to extend these values into the 21st century.

While the OU is clearly a special case in UK Higher Education, I don’t think there are many working in Universities who would say that something similar wasn’t at least part of what they were trying to do. Furthermore, there is a growing feeling that material produced with public funds should be available to all members of the public, and that Universities should be of benefit to the wider community not just to those scholars who happen to work within the system.

Another, less positive, harder-edged angle on social responsibility was highlighted in the ruling on a Freedom of Information request where the release of course material was required. The Information Tribunal said

it must be open to those outside the academic community to question what is being taught and to what level in our universities

We would suggest that we are looking at a future where open educational resources should be seen as the default approach, and that a special case should need to be made for resources that a public institution such as a university wants to keep “private”. But for now the point we’re making is that social responsibility is a strong motivator for some individuals, institutions and funders.

Legalities
Releasing educational content openly on the web requires active management of intellectual property rights associated with the content used for teaching at the institution. This is something that institutions should be doing anyway, but they often fudge it. They should address questions such as:

  • Who is responsible for ensuring there is no copyright violation?
  • Who owns the teaching materials, the lecturer who wrote them or the institution?
  • Who is allowed to use materials created by a member of staff who moves on to another institution?

The process of applying open licences helps institutions address these issues, and other legal requirements such as responding to freedom of information requests relating to teaching materials (and they do happen).

Not all doom and gloom
Some things do become simpler when you release learning materials as OERs.

For example access management for the majority of users (those who just want read-only access) is a whole lot simpler if you decide to make a collection open; no need for the authentication or authorization burden that typically comes with making sure that only the right people have access.

On a larger scale, the Open University have found that setting up partnerships for teaching and learning with other institutions becomes easier if you no longer have to negotiate terms and conditions for mutual access to course materials from each institution.

Some aspects of resource description also become easier.

Some (but not all) OER projects present material in the context in which it was originally delivered, i.e. arranged as courses (the MIT OCW course of which I used a screen capture above is one example). This may have some disadvantages, but the advantage is that the resource is self-describing: you don't have to rely solely on metadata to convey information such as educational level and potential educational use. This is especially important because, whereas most universities can describe their courses in ways that make sense, we struggle to agree controlled vocabularies that can be applied across the sector.

Course or resources?
The other advantage of presenting the material as courses rather than disaggregated as individual objects is that the course will be more likely to be useful to learners.

Of course, the presentation of resources in the context of a course should not stop anyone from taking or pointing to a single component resource and using it in another context. That should be made as simple as possible; but it's always going to be very hard to go in the other direction: once a course is disaggregated it's very hard to put it back together (the source of the material could describe how to put it back together, or how it fitted in with other parts of a course, but then we're back to the creation of additional metadata).

Summary and technical
What I've tried to say is that putting clearly licensed stuff onto the open web solves many problems.

What is the best technology genre for this: repository, content management system, VLE, or Web 2.0 service? Within the UKOER programme all four approaches were used successfully. Some of these technologies are primarily designed for local management and presentation of resources rather than open dissemination, and vice versa. There's no consensus, but there is a discernible trend towards using a diversity of approaches and mixing and matching: e.g. some UKOER projects used repositories to hold the material and push it to Web 2.0 services; others pulled material in the other direction.

ps: While I was writing this, Timothy Vollmer over on the CreativeCommons blog was writing “Do Open Educational Resources Increase Efficiency?” making some similar points.
