Some downside to OER?

As part of my non-CETIS work I occasionally go out to evaluate teaching practice in Engineering for the HE Academy Engineering Subject Centre. This involves me going to a university and talking to an Engineering lecturer and some students about an approach they are using for teaching and learning. I especially enjoy this because it brings me close to the point of education and helps keep me in touch with what is really happening in universities around the UK. During a recent evaluation the following observations came up which are incidental to what I was actually evaluating but relevant, I think, to UKOER. They concern a couple of points raised, one by the lecturer and one by the students, that reflect genuine problems people might have with OER release.

Part of the lecturer’s approach involves a sequence of giving students some problems to solve each week, and then providing online and face-to-face support for these problems. The online support is good stuff; it’s video screen captures of worked model solutions with pretty good production values. Something like the Khan Academy but less rough. It would be great if this were released as OER; however, doing so would compromise the pedagogic strategy that the tutor has adopted. I don’t want to go into the specifics of why this lecturer has adopted this strategy, but in general it may be important that the students try the problems before they look at the support, and this is an interesting example of how OER release isn’t pedagogically neutral.

The point raised by the students concerned the reuse of OERs rather than their release. They really liked what their lecturer had done, and part of what they liked about it was that it was personal. This was important to them not just because it meant that there was an exact fit between the resources and their course but because they took it as showing that the lecturer had taken a good deal of time and made a real effort in preparing their course. They were right in that, but they also went on to say that if the lecturer had taken resources from elsewhere, that they themselves could have found, they would have drawn the opposite inference. We may think that the students would be wrong in this, or that their expectations are unrealistic, but what’s important is that they felt that they would have been demotivated had the lecturer reused resources created elsewhere. I think this fits into a wider picture of how reuse of materials affects the relationship between teacher and student.

I’m not claiming that either of these observations is conclusive or in any way a compelling argument against the release of OER, and I will object to anyone claiming that a couple of data points is ever conclusive or compelling, but I did find them interesting.

EdReNe: Building successful educational repositories

EdReNe, the EU funded Educational Repositories Network, have just published what looks like a useful set of recommendations on Building successful educational repositories [pdf]. Many of the recommendations seem motherhood-and-apple-pie stuff: engage users, have policies, etc., though some of these, e.g. “support the needs of existing communities”, have interesting implications when thought through in more depth (in this case, don’t base your repository strategy entirely on creating a new community).

Others that caught my eye:

  • Take advantage of generally used, open standards to allow for the broadest range of partnerships, future adaptability and innovation.
    With the comment “Use standards that are ‘as broad as possible’”–a reference to “web-wide” standards rather than those that are specific to the repository world?
  • Acknowledge that integration with a range of tools and services will greatly benefit uptake and use of digital learning resources.
    “What is useful, is the ability to integrate with the tools/service that the user selects.” So you’ll need an API.
  • Support the development of ‘sharing as a culture’ by providing user friendly mechanisms for depositing and repurposing
  • Open up information silos by a strong focus on web services, APIs and other ways of allowing seamless integration across services.
    “Repositories have to interface with many systems.”
  • Make it easy to participate – for all members
    “Barriers to participation are the single biggest problem.”
  • Present clear and easy-to-understand information on usage rights
    and
  • Clearly express usage rights to users when depositing or accessing resources
    The “when accessing” aspect of this is something that has been exercising me recently; it’s hard to believe how many OERs don’t have any CC licence information displayed on the resource itself. (For added irony, this report bears no licence information within it.)
  • Support open licensing to increase impact of funding and maximize possibilities for reuse and re-purposing,
  • Encourage use of CC-BY licenses when publishing own work
    and
  • Encourage institutions to engage in sharing and production of open content (institution management)
    The OER effort is clearly having an impact on repository thinking, though there are comments on these and other recommendations reflecting that not all resources in repositories will be open.
  • When content standards are encouraged, this should be done with central guidance
    “A top-down strategy for fixing standards does not work anymore.” (well, it only ever worked in some limited scenarios).

The recommendations are the output of EdReNe’s 5th seminar which took place in Copenhagen, 6-7 October 2010. Thanks to LTSO for the info about this.

CETIS “What metadata…?” meeting summary

Yesterday we had a meeting in London with about 25 people thinking about the question “What metadata is really useful?”

My thinking behind having a meeting on this subject was that resource description can be a lot of effort, so we need to be careful that the decisions we make about how it is done are evidence-based. Given the right data we should be able to get evidence about what metadata is really used for, as opposed to what we might speculate that it is useful for (with the caveat that we need to allow for innovation, which sometimes involves supporting speculative usage scenarios). So, what data do we have and what evidence could we get that would help us decide such things as whether providing a description of a characteristic such as the “typical learning time for using a resource” either is or isn’t helpful enough to justify the effort? Pierre Far went to an even more basic level and asked in his presentation, why do we use XML for sharing metadata?–is it the result of a reasoned appraisal of the alternatives, such as JSON, or did it just seem the right thing to do at some point?
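For what it’s worth, here’s a quick sketch (Python, with an invented record and invented field names, not any real schema) of the same minimal resource description serialized both ways; neither is obviously more “right”, which is rather the point of the question:

```python
import json
import xml.etree.ElementTree as ET

# A minimal, hypothetical resource description (invented fields, no real schema).
record = {
    "title": "Worked solutions: beam bending",
    "subject": "Engineering",
    "licence": "CC-BY",
}

# JSON serialization: one call with the standard library.
as_json = json.dumps(record)

# XML serialization: build an element tree by hand.
root = ET.Element("resource")
for key, value in record.items():
    child = ET.SubElement(root, key)
    child.text = value
as_xml = ET.tostring(root, encoding="unicode")

print(as_json)
print(as_xml)
```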

Dan Rehak made the very useful point to me that we need a reason for wanting to answer such questions, i.e. what is it we want to do? what is the challenge? Most of the people in the room were interested in disseminating educational resources (often OERs): some have an interest in disseminating resources that had been provided by their own project or organization, others have an interest in services that help users find resources from a wide range of providers. So I had “help users find resources they needed” as the sort of reason for asking these questions; but I think Dan was after something new, less generic, and (though he would never say this) less vague and unimaginative. What he suggested as a challenge was something like “how do you build a recommender system for learning materials?” Which is a good angle, and I know it’s one that Dan is interested in at the moment; I hope that others can either buy into that challenge or have something equally interesting that they want to do.

I had suggested that user surveys, existing metadata and search logs are potential sources of data reflecting real use and real user behaviour, and no one disagreed, so I structured much of the meeting around discussion of those. We had short overviews of examples of previous work on each of these, and some discussion about that, followed by group discussions in more depth on each. I didn’t want this to be an academic exercise; I wanted the group discussions to turn up ideas that could be taken forward and acted on, and I was happy at the end of the day. Here’s a sampler of the ideas turned up during the day:
* continue to build the collection of background information that I gathered for the meeting.
* promote the use of common survey tools, for example the online tool used by David Davies for the MeDeV subject centre (results here).
* textual analysis of metadata records to show what is being described in what terms.
* sharing search logs in a common format so that they can be analysed by others (echoes here of Dave Pattern’s sharing of library usage data and subsequent work on business intelligence that can be extracted from it).
* analysis of search logs to show which queries yield zero hits, which would identify topics on which there is unmet demand.
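To make that last idea concrete, here’s a rough sketch in Python of what such an analysis might look like, assuming a hypothetical shared log format of one tab-separated query and hit count per line (the format and the sample queries are invented for illustration):

```python
from collections import Counter

# Hypothetical shared log format: one "query<TAB>hits" line per search performed.
log_lines = [
    "thermodynamics\t42",
    "pigs\t0",
    "applied geometric algebra\t3",
    "pigs\t0",
    "welsh-medium chemistry\t0",
]

# Count how often each query came back with zero hits.
zero_hit = Counter()
for line in log_lines:
    query, hits = line.rsplit("\t", 1)
    if int(hits) == 0:
        zero_hit[query] += 1

# Queries that repeatedly return zero hits suggest topics with unmet demand.
for query, count in zero_hit.most_common():
    print(f"{query}: {count} failed searches")
```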

In the coming weeks we shall be working through the ideas generated at the meeting in more depth with the intention of seeing which can actually be brought to fruition. In the meantime keep an eye on the wikipage for the meeting which I shall be turning into a more detailed record of the event.

Analysing OCWSearch logs

We have a meeting coming up on the topic of investigating what data we have (or could acquire) to answer the question of what metadata is really required to support the discovery, selection, use and management of educational resources. At the same time as I was writing a blog post about that, over at OCWSearch they were publishing the list of top searches for their collection (I think Pierre Far is the person to thank for that). So, what does this tell us about metadata requirements?

I’ve been through the terms in the top half of the list (it says that the list is roughly in descending order of popularity, though it would be really good to know more about how popular each search term was) and tried to judge which characteristic or property of the resource the searcher was searching on.

There were just under 170 search terms in total. It doesn’t surprise me that the vast majority (over 95%) of them are subject searches. Both higher-level, broad subject terms (disciplines, e.g. “Mathematics”) and lower-level, finer-grained subject terms (topics, e.g. “Applied Geometric Algebra”) crop up in abundance. I’m not sure you can say much about their relative importance.

What’s left is (to me) more interesting. We have:

  • Resource types, specifically: “online text book”, “audio”, “online classes”.
  • People, who seem to be staff at MIT; while it’s possible someone is searching for material about them or about their theories, I think it is more likely that people are searching for them as resource creators.
  • Level, specifically: 101, Advanced (x2), college-level. These are often used in conjunction with subject terms.
  • Course codes, e.g. HSM 260, 15.822, Psy 315. (These also imply a level and a subject.)
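My categorization was done by hand, but to give a flavour of how it might be automated over a bigger data set, here’s a rough rule-based sketch in Python; the term lists and the course-code pattern are illustrative guesses on my part, not anything OCWSearch actually uses:

```python
import re

# Illustrative heuristics only; the real analysis was done by eye.
RESOURCE_TYPES = {"audio", "online text book", "online classes"}
LEVEL_TERMS = {"101", "advanced", "college-level"}
# Matches codes like "HSM 260", "Psy 315" or MIT-style "15.822".
COURSE_CODE = re.compile(r"^([A-Za-z]{2,4}\s?\d{2,3}|\d{1,2}\.\d{2,3})$")

def classify(term: str) -> str:
    t = term.strip().lower()
    if t in RESOURCE_TYPES:
        return "resource type"
    if t in LEVEL_TERMS:
        return "level"
    if COURSE_CODE.match(term.strip()):
        return "course code"
    return "subject"  # the default, and by far the biggest bucket

for term in ["Mathematics", "HSM 260", "15.822", "audio", "Advanced"]:
    print(term, "->", classify(term))
```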

I think with more data and more time spent on the analysis we could get some interesting results from this sort of approach.

An open and closed case for educational resources

I gave a pecha kucha presentation (20 slides, 20 seconds per slide) at the Repository Fringe in Edinburgh last week. I’ve put the slides on slideshare, and there’s also a video of the presentation but since the slides are just pictures, and the notes are a bit disjointed, and my delivery was rather rushed, it seems to me that it would be useful to reproduce what I said here. Without the 20 second per slide constraint.

The main thrust of the argument is that Open Educational Resource (OER) or OpenCourseWare (OCW) release can be a good way of overcoming some of the problems institutions have regarding the management of their learning materials. By OER or OCW release we mean an institution, group or individual disseminating their educational resources under creative commons licences that allow anyone to take and use those resources for free. As you probably know over the last year or so HEFCE have put a lot of money into the UKOER programme.

I first started thinking about this approach in relation to building repositories four or five years ago.

I was on the advisory group for a typical institutional learning object repository project. The approach that they, and many others like them at the time, had chosen was to build a closed, inward-facing repository, providing access and services only within the institution. The project was concerned about interoperability with their library systems and worried a lot about metadata.

The repository was not a success. In the final advisory group meeting I was asked whether I could provide an example of an institution with a successful learning object repository. I gave some rambling, unsatisfactory answer about how there were a few institutions trying the same approach but it was difficult to know what was happening since they (like the one I was working with) didn’t want to broadcast much information about what they were doing.

And two days later it dawned on me that what I should have said was MIT.

MIT OpenCourseWare
At that time MIT’s OpenCourseWare initiative was by far the most mature open educational resource initiative; now we have many more examples. But in what way does OER-related activity relate to the sort of internal management of educational materials that concerns projects like the one with which I was involved?

The challenges of managing educational resources
The problems that institutional learning object repositories were trying to solve at that time were typically these:

  • they wanted to account for what educational content they had and where it was;
  • they wanted to promote reuse and sharing within the Institution;
  • they wanted more effective and efficient use of resources that they had paid to develop.

And why, in general, did they fail? I would say that there was a lack of buy-in or commitment all round, there was a lack of motivation from the staff to deposit and there was a lack of awareness that the repository even existed. Also there was more focus on the repository per se and systems interoperability than on directly addressing the needs of their stakeholders.

Does an open approach address these challenges?

Well, firstly, by putting your resources on the open web everyone will be able to access them, including the institution’s own staff and students. What’s more once these resources are on the open web they can be found using Google, which is how those staff and students search. Helping your staff find and have access to the resources created by other staff helps a lot with promoting reuse and sharing within the institution.

It is also becoming apparent that there are good institution-level benefits from releasing OERs.

For example the OU have traced a direct link from use of their OpenLearn website to course enrolment.

In general terms, open content raises the profile of the institution and its courses on the web, providing an effective shop window for the institution’s teaching, in a way that an inward facing repository cannot. Open content also gives prospective students a better understanding of what is offered by an institution’s courses than a prospectus can, and so helps with recruitment and retention.

There’s also a social responsibility angle on OERs. On launching the Open University’s OpenLearn initiative Prof. David Vincent said:

Our mission has always been to be open to people, places, methods and ideas and OpenLearn allows us to extend these values into the 21st century.

While the OU is clearly a special case in UK Higher Education, I don’t think there are many working in Universities who would say that something similar wasn’t at least part of what they were trying to do. Furthermore, there is a growing feeling that material produced with public funds should be available to all members of the public, and that Universities should be of benefit to the wider community not just to those scholars who happen to work within the system.

Another, less positive, harder-edged angle on social responsibility was highlighted in the ruling on a Freedom of Information request where the release of course material was required. The Information Tribunal said

it must be open to those outside the academic community to question what is being taught and to what level in our universities

We would suggest that we are looking at a future where open educational resources should be seen as the default approach, and that a special case should need to be made for resources that a public institution such as a university wants to keep “private”. But for now the point we’re making is that social responsibility is a strong motivator for some individuals, institutions and funders.

Legalities.
Releasing educational content openly on the web requires active management of intellectual property rights associated with the content used for teaching at the institution. This is something that institutions should be doing anyway, but they often fudge it. They should address questions such as:

  • Who is responsible for ensuring there is no copyright violation?
  • Who owns the teaching materials, the lecturer who wrote them or the institution?
  • Who is allowed to use materials created by a member of staff who moves on to another institution?

The process of applying open licences helps institutions address these issues, and other legal requirements such as responding to freedom of information requests relating to teaching materials (and they do happen).

Not all doom and gloom
Some things do become simpler when you release learning materials as OERs.

For example access management for the majority of users (those who just want read-only access) is a whole lot simpler if you decide to make a collection open; no need for the authentication or authorization burden that typically comes with making sure that only the right people have access.

On a larger scale, the Open University have found that setting up partnerships for teaching and learning with other institutions becomes easier if you no longer have to negotiate terms and conditions for mutual access to course materials from each institution.

Some aspects of resource description also become easier.

Some (but not all) OER projects present material in the context in which it was originally delivered, i.e. arranged as courses (the MIT OCW course, a screen capture of which I used above, is one example). This may have some disadvantages, but the advantage is that the resource is self-describing–you don’t have to rely solely on metadata to convey information such as educational level and potential educational use. This is especially important because, whereas most universities can describe their courses in ways that make sense, we struggle to agree controlled vocabularies that can be applied across the sector.

Course or resources?
The other advantage of presenting the material as courses rather than disaggregated as individual objects is that the course will be more likely to be useful to learners.

Of course the presentation of resources in the context of a course should not stop anyone from taking or pointing to a single component resource and using it in another context. That should be made as simple as possible; but it’s always going to be very hard to go in the other direction: once a course is disaggregated it’s very hard to put it back together (the source of the material could describe how to put it back together, or how it fitted in with other parts of a course, but then we’re back into the creation of additional metadata).

Summary and technical
What I’ve tried to say is that putting clearly licensed stuff onto the open web solves many problems.

What is the best technology genre for this: repository, content management system, VLE, or Web 2.0 service? Within the UKOER programme all four approaches were used successfully. Some of these technologies are primarily designed for local management and presentation of resources rather than open dissemination, and vice versa. There’s no consensus, but there is a discernible trend towards using a diversity of approaches and mixing-and-matching, e.g. some UKOER projects used repositories to hold the material and push it to Web 2.0 services; others pulled material in the other direction.

ps: While I was writing this, Timothy Vollmer over on the CreativeCommons blog was writing “Do Open Educational Resources Increase Efficiency?” making some similar points.

Image credits
Most of the images are sourced from Flickr and have one or another flavour of creative commons licence. From the top:

Repositories and the Open Web

On 19 April CETIS are holding a meeting in London on Repositories and the Open Web. The theme of the meeting is how repositories and social sharing / web 2.0 web sites compare as hosts for learning materials: how well does each facilitate the tasks of resource discovery and resource management; what approaches to resource description do the different approaches take; and are there any lessons that users of one approach can draw from the other?

Both the title of the event (does the ‘and’ imply a distinction? why not repositories on the open web?) and the tag CETISROW may be taken as slightly provocative. Well, the tag is meant lightheartedly, of course, and yes, there is a rich vein of work on how repositories can work as part of the web. Looking back at previous CETIS events, I would like to highlight these contributions to previous meetings:

  • Lara Whitelaw presented on the PROWE Project, about using wikis and blogs as shared repositories to support part-time distance tutors in June 2006.
  • David Davies spoke about RSS, Yahoo! Pipes and mashups in June 2007.
  • Roger Greenhalgh, talking about the National Rural Knowledge Exchange, in the May 2008 meeting. And many of us remember his “what’s hot in pigs” intervention in an earlier meeting.
  • Richard Davis talking about SNEEP (social network extensions for ePrints) at the same meeting.

Most recently we’ve seen a natural intersection between the aims of Open Educational Resources initiatives and the use of hosting on web 2 and social sharing sites, so, for example, the technical requirements suggested for the UKOER programme said this under delivery platforms:

Projects are free to use any system or application as long as it is capable of delivering content freely on the open web. However all projects must also deposit their content in JorumOpen. In addition projects should use platforms that are capable of generating RSS/Atom feeds, particularly for collections of resources e.g. YouTube channels. Although this programme is not about technical development projects are encouraged to make the most of the functionality provided by their chosen delivery platforms.

We have followed this up with some work looking at the use of distribution platforms for UKOER resources which treats web 2 platforms and repository software as equally useful for that task.

So, there’s a longstanding recognition that repositories live on the open web, and that formal repositories aren’t the only platform suitable for the management and dissemination of learning materials. But I would be missing something I think important if I left it at that. For some time I’ve had misgivings about the direction in which conceptualising your resource management and dissemination as a repository leads. A while back a colleague noticed that a description of some proposed specification work, which originated from repository vendors, developers and project managers, talked about content being “hidden inside repositories”, which we thought revealing. Similarly, I’ve written before that repository-think leads to talk of interoperability between repositories and repository-related services. Pretty soon one ends up with a focus on repositories and repository-specific standards per se and not on the original problem of resource management and dissemination. A better solution, if you want to disseminate your resources widely, is not to “hide them in repositories” in the first place. Also, in repository-world the focus is on metadata rather than resource description: the encoding of descriptive data into fields can be great for machines, but I don’t think that we’ve done a great job of getting that encoding right for educational characteristics of resources, and this has been at the expense of providing suitable information for people.

Of course not every educational resource is open, and so the open web isn’t an appropriate place for all collections. Also, once you start using some of the web 2.0 social sharing sites for resource management you begin to hit some problems (no option for creative commons licensing, assumptions that the uploader created/owns the resource, limitations on export formats, etc.)–though there are some exceptions. It is, however, my belief that all repository software could benefit from the examples shown by the best of the social sharing websites, and my hope that we will see that in action during this meeting.

Detail about the meeting (agenda, location, etc.) will be posted on the CETIS wiki.

Registration is open, through the CETIS events system.

Repository standards

Tore Hoel tweeted:

The most successful repository initiatives do not engage with LT standards EDRENE report concludes #icoper

pointing me to what looks like a very interesting report which also concludes

Important needs expressed by content users include:

  • Minimize number of repositories necessary to access

Of these, the first bullet point clearly relates to interoperability of repositories, and indicates the importance of focusing on repository federations, including metadata harvesting and providing central indexes for searching for educational content.

Coincidentally I had just finished an email replying to someone who asked about repository aggregation in the context of Open Educational Resources because she is “Trying to get colleagues here to engage with the culture of sharing learning content. Some of them are aware that there are open educational learning resources out there but they don’t want to visit and search each repository.” My reply covered Google advanced search (with the option to limit by licence type), Google custom search engines for OERs, OER Commons, OpenCourseWare Consortium search, the Creative Commons Search, the Steeple podcast aggregator and the similar-in-concept Ensemble Feed finder.

I concluded: you’ll probably notice that everything I’ve written above relies on resources being on the open web (as full text and summarized in RSS feeds) but not necessarily in repositories. If there are any OER discovery services built on repository standards like OAI-PMH or SRU or the like then they are pretty modest in their success. Of course using a repository is a fine way of putting resources onto the web, but you might want to think about things like search engine optimization, making sure Google has access to the full text resource, making sure you have a site map, encouraging (lots of) links from other domains to resources (rather than metadata records), making sure you have a rich choice of RSS feeds and so on.
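As a small illustration of the site map point, here’s a sketch of generating a minimal sitemap.xml (per the sitemaps.org protocol) from a list of resource pages; the URLs are invented, and a real repository would pull them from its own database:

```python
import xml.etree.ElementTree as ET

# Hypothetical resource URLs; a repository would list every full-text resource page.
resource_urls = [
    "https://example.ac.uk/oer/beam-bending",
    "https://example.ac.uk/oer/thermodynamics-1",
]

# Minimal sitemap following the sitemaps.org protocol: a <urlset> of <url>/<loc> pairs.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)
for url in resource_urls:
    entry = ET.SubElement(urlset, "url")
    ET.SubElement(entry, "loc").text = url

sitemap = ET.tostring(urlset, encoding="unicode")
print(sitemap)
```

The point being that making a collection crawlable is a modest, mechanical job compared with standing up repository-specific search interfaces.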

I have some scribbled notes on 4 or 5 things that people think are good about repositories but which may also be harmful; a focus on interoperability between repositories and repository-related services (when it is at the expense of being part of the open web) is on there.

Feeding a repository

There has been some discussion recently about mechanisms for remote or bulk deposit into repositories and similar services. David Flanders ran a very thought-provoking and lively show-and-tell meeting a couple of weeks ago looking at deposit. In part this is familiar territory: looking at and tweaking the work that the creators of the SWORD profile have done based on APP, or looking again at WebDAV. But there is also a newly emerging approach of using RSS or Atom feeds to populate repositories, a sort of feed-deposit. Coincidentally we also received a query at CETIS from a repository which is looking to collect outputs of the UKOER programme, asking for help in firming up the requirements for bulk or remote deposit, and asking how RSS might fit into this.

So what is this feed-deposit idea? The first thing to be aware of is that, as far as I can make out, a lot of the people who talk about this don’t necessarily have the same idea of “repository” and “deposit” as I do. For example the Nottingham Xpert rapid innovation project and the Ensemble feed aggregator are both populated by feeds (you can also disseminate material through iTunesU this way). But (I think) these are all links-only collections, so I would call them catalogues, not repositories, and I would say that they work by metadata harvest(*), not deposit. However, they do show that you can do something with feeds, which those who think that RSS or Atom is only about showing the last ten items published should take note of. The other thing to take note of is podcasting, by which I don’t mean sticking audio files on a web server and letting people find them, but feeds that either carry or point to audio/video content so that applications and devices like phones and network-enabled media players can automatically load that content. If you combine what Xpert and Ensemble are doing by way of getting information about entire collections with the way that podcasts let you automatically download content, then you could populate a repository through feeds.

The trouble is, though, that once you get down to details there are several problems and several different ways of overcoming them.

For example, how do you go beyond having a feed for just the last 10 resources? Putting everything into one feed doesn’t scale. If your content is broken down into manageable-sized collections (e.g. the OU’s OpenLearn courses, and I guess many other OER projects) you could put everything from each collection into a feed and then have an OPML file to say where all the different feeds are (which works up to a point, especially if the feeds will be fairly static, until your OPML file gets too large). Or you could have an API that allowed the receiver of the feed to specify how they wanted to chunk up the data: OpenSearch should be useful here, and it might be worth looking at YouTube as an example. Then there are similar choices to be made for how just about every piece of metadata and the content itself is expressed in the feed, starting with the choice of flavour(s) of RSS or Atom feed.
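To make the OPML-plus-podcast combination concrete, here’s a sketch in Python of the two halves: reading feed locations out of an OPML file, and reading enclosure URLs out of a podcast-style feed. The URLs and documents are toy examples of my own invention; a real harvester would fetch the feeds over HTTP rather than parse in-line strings:

```python
import xml.etree.ElementTree as ET

# A toy OPML document listing the collection feeds (hypothetical URLs).
opml = """<opml version="2.0"><body>
  <outline text="Course A" xmlUrl="https://example.ac.uk/feeds/course-a.rss"/>
  <outline text="Course B" xmlUrl="https://example.ac.uk/feeds/course-b.rss"/>
</body></opml>"""

# Step 1: find every feed the OPML file points at.
feed_urls = [o.get("xmlUrl")
             for o in ET.fromstring(opml).iter("outline")
             if o.get("xmlUrl")]

# Step 2: in each feed, a podcast-style <enclosure> says what content to fetch.
# (A real harvester would download each of feed_urls; here we parse a sample.)
sample_feed = """<rss version="2.0"><channel>
  <item><title>Lecture 1</title>
    <enclosure url="https://example.ac.uk/media/lecture1.mp4" type="video/mp4"/>
  </item>
</channel></rss>"""

to_download = [e.get("url") for e in ET.fromstring(sample_feed).iter("enclosure")]
print(feed_urls, to_download)
```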

But, feed-deposit is a potential solution, and it’s not good to try to start with a solution and then articulate the problem. The problem that needs addressing (by the repository that made the query I mentioned above) is how best to deposit 100s of items given (1) a local database which contains the necessary metadata (2) enough programming expertise to read that metadata from the database and republish or post to an API. The answer does not involve someone sat for a week copy-and-pasting into a web form that the repository provides as its only means of deposit.

There are several ways of dealing with that. So far a colleague who is in this position has had success depositing into Flickr, SlideShare and Scribd by repeated calls to their respective APIs for remote deposit—which you could call a depositor-push approach—but an alternative is that she puts the resources somewhere and provides information to tell repositories where they are, so that any repository that listens can come and harvest them—which would be more like a repository-pull approach, and in which case feed-deposit might be the solution.

[* Yes, I know about OAI-PMH; the comparison is interesting, but this is a long post already.]

Resource description requirements for a UKOER project

CETIS have provided information on what we think are the metadata requirements for the UK OER programme, but we have also said that individual projects should think about their own metadata requirements in addition to these. As an example of what I mean by this, here is what I produced for the Engineering Subject Centre’s OER project.

Like it says on the front page, it’s an attempt to define what information about a resource should be provided, why, for whom, and in what format, where:

“Who” includes the project funders (HEFCE, with JISC and the Academy as their agents), the project partners contributing resources, the project manager, end users (teachers and students), and aggregators, that is, people who wish to build services on top of the collection.

“Why” includes resource management, selection and use as well as discovery through Google or otherwise, etc. etc.

“Format” includes free text for humans to read (which, incidentally, is what Google works from) and encoded text for machine operations (e.g. XML, RSS, HTML meta tags, microformats, or metadata embedded in other formats or held in the database of whatever content management system lies behind the host we are using).
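As a toy example of the encoded-text case, one common pattern is expressing Dublin Core elements as HTML meta tags. The resource values here are invented; the point is just the shape of the output:

```python
# Toy generator: Dublin Core metadata rendered as HTML <meta> tags.
# The resource description values are invented for illustration.
from html import escape

resource = {
    "DC.title": "Worked solutions: statics problem sheet 3",
    "DC.creator": "A. Lecturer",
    "DC.rights": "http://creativecommons.org/licenses/by-nc/2.0/uk/",
}

tags = "\n".join(
    f'<meta name="{escape(k)}" content="{escape(v)}" />'
    for k, v in resource.items()
)
print(tags)
```

The same record could equally be serialised as RSS item elements or embedded in a page as a microformat; the underlying description stays the same, only the encoding changes.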

You can read it on Scribd: Resource description requirements for EngSC OER project

[I should note that I work for the Engineering Subject Centre as well as CETIS and this work was not part of my CETIS work.]

It would be useful to know if other projects have produced anything similar…

Distribution platforms for OERs

One of the workpackages for CETIS’s support of the UKOER programme is:

Technical Guidelines–Services and Applications Inventory and Guidance:
Checklist and notes to support projects in selecting appropriate publication/distribution applications and services with some worked examples (or recommendations).
Output: set of wiki pages based on content type and identifying relevant platforms, formats, standards, ipr issues, etc.

I’ve made a start on this here, in a way which I hope will combine the three elements mentioned in the workpackage:

  1. An inventory of host platforms by resource type: which platforms are being used for which media or resource types?
  2. A checklist of technical factors that projects should consider in their choice of platform.
  3. Further information and guidance for some of the host platforms; essentially, the checklist filled in.

In keeping with the nature of this phase of the UKOER programme as a pilot, we’re trying not to be prescriptive about the type of platform projects will use. Specifically, we’re not assuming that they will use standard repository software and are encouraging projects to explore and share any information about the suitability of web2.0 social sharing sites. At the moment the inventory is pretty biased to these web2.0 sites, but that’s just a reflection of where I think new information is required.

How you can help

Feedback
Any feedback on the direction of this work would be welcome. Are there any media types I’m not considering that I should? Are the factors being considered in the checklist the right ones? Is the level of detail sufficient? Where are the errors?

Information
I want to focus on the platforms that are actually being used, so it would be helpful to know which these are. Also, I know from talking to some of you that there is invaluable experience of using some of these services: some APIs are better documented than others, some offer better functionality than others, and some have limitations that aren’t apparent until you try to use them seriously. It would be great to have this in-depth information; there is space in the entry for each platform for these “notes and comments”.

Contributions
The more entries are filled out the better, but there’s a limit on what I can do, so all contributions would be welcome. In particular, I know that iTunes/iTunesU is important for audio and video podcasting, but I don’t have access myself — it seems to require some sort of plug-in called “iTunes” ;-) — so if anyone can help with that I would be especially grateful.

Depending on how you feel, you can help by emailing me (philb@icbl.hw.ac.uk), or by registering on the CETIS wiki and either using the article talk page (please sign your comments) or editing the article itself. Anything you write is likely to be distributed under a Creative Commons cc-by-nc licence.