EdReNe: Building successful educational repositories

EdReNe, the EU-funded Educational Repositories Network, has just published what looks like a useful set of recommendations on Building successful educational repositories [pdf]. Many of the recommendations seem motherhood-and-apple-pie stuff: engage users, have policies, etc., though some of them, e.g. “support the needs of existing communities”, have interesting implications when thought through in more depth (in this case: don’t base your repository strategy entirely on creating a new community).

Others that caught my eye:

  • Take advantage of generally used, open standards to allow for the broadest range of partnerships, future adaptability and innovation.
    With the comment “Use standards that are ‘as broad as possible’” – a reference to “web-wide” standards rather than those that are specific to the repository world?
  • Acknowledge that integration with a range of tools and services will greatly benefit uptake and use of digital learning resources.
    “What is useful, is the ability to integrate with the tools/service that the user selects.” So you’ll need an API.
  • Support the development of ‘sharing as a culture’ by providing user friendly mechanisms for depositing and repurposing
  • Open up information silos by a strong focus on web services, APIs and other ways of allowing seamless integration across services.
    “Repositories have to interface with many systems.”
  • Make it easy to participate – for all members
    “Barriers to participation are the single biggest problem.”
  • Present clear and easy-to-understand information on usage rights
    and
  • Clearly express usage rights to users when depositing or accessing resources
    The “when accessing” aspect of this is something that has been exercising me recently; it’s hard to believe how many OERs don’t have any CC licence information displayed on the resource itself. (For added irony, this report bears no licence information within it.)
  • Support open licensing to increase impact of funding and maximize possibilities for reuse and re-purposing,
  • Encourage use of CC-BY licenses when publishing own work
    and
  • Encourage institutions to engage in sharing and production of open content (institution management)
    The OER effort is clearly having an impact on repository thinking, though there are comments on these and other recommendations reflecting that not all resources in repositories will be open.
  • When content standards are encouraged, this should be done with central guidance
    “A top-down strategy for fixing standards does not work anymore.” (Well, it only ever worked in some limited scenarios.)

The recommendations are the output of EdReNe’s 5th seminar which took place in Copenhagen, 6-7 October 2010. Thanks to LTSO for the info about this.

Semantic web applications in higher education

Last week I was in Southampton for the second workshop on Semantic web applications in higher education (SemHE), organised by Thanasis Tiropanis and friends from the Learning Societies Lab at Southampton University. These same people had worked with CETIS on the Semantic Technologies in Learning and Teaching (SemTech) project. The themes of the meeting were an emphasis on using this technology to solve real problems, i.e. the applications in the workshop title, and, to quote Thanasis in his introduction, a consequent “move away from complex idiosyncratic ontologies not much used outside of the original developers” and towards a simpler “linked data field”.

First: the scope of the workshop, at least in terms of what was meant by “higher education” in the title. The interests of those who attended came under at least two (not mutually exclusive) headings. One was HE as an enterprise, and the application of semantic technologies to the running of the university, i.e. e-administration, resource and facility management, and the like. The other was the role of semantic technologies in teaching and learning, one aspect of which was summed up nicely by Su White as identifying the native semantic technologies that would give students an authentic learning experience, preparing them for a world where massive amounts of data are openly available, e.g. preparing geography students to work with real data sets.

The emphasis on solving real problems was nicely encapsulated by a presentation from Farhana Sarker in which she identified around 20 broad challenges facing UK HE, such as management, funding, widening participation, retention, contribution to the economy, assessment, plagiarism, group formation in learning and teaching, construction of personal and group knowledge, and so on. She then presented what you might call a factorisation of the data that could help address these challenges into about nine thematic repositories (using that word in a broad sense) containing: course information, teaching material, student records, research output, research activities, staff expertise, infrastructure data, accreditation records (course and institutional accreditation) and staff development programme details (I may have missed a couple). Of course each repository addresses more than one of the challenges, and to do so much of the data held in them needs to be shared outside the institution.

A nice, concrete example of using shared data to address a problem in resource management and discovery was provided by Dave Lambert, showing how external linked data sources such as Dewey.info, the Library of Congress, GeoNames, sindice.com and zemanta.com, and a vocabulary drawn from the FOAF, DC in RDFS, SKOS, WGS84 and Timeline ontologies, have been used by the OU to catalogue videos in the Annomation tool and provide a discovery service through the SugarTube project.

One comment that Dave made was that many relevant ontologies were too heavyweight for his purpose, and this focus on what is needed to solve a problem connected with another theme that ran through the meeting: pragmatism and as much simplicity as possible. Chris Gutteridge made a very interesting observation, that the uptake of semantic technologies, like the uptake of the web in the late 1990s, would involve a change in the people working on it, from those doing so because they were interested in the semantic web to those doing so because their boss had told them to. This has some interesting consequences. For example, there are clear gains to be made (says Chris) from the application of semantic technologies to e-admin; however, the IT support for admin is not often well versed in semantic ideas. Therefore, to realise these gains, those pioneering the use of the semantic web and linked data should supply patterns that are easy to follow; consuming data from a million different ontologies won’t scale.

Towards the end of the day the discussion on pragmatism rather than idealism settled on the proposal (I forget who made it) that ontologies were a barrier to mass adoption of the semantic web, and that it would be better to create a “big bag of predicates” with domain thing, range thing, the suggestion being that more specific domains or ranges tend to be ignored anyway. (Aside: I don’t know whether the domain and range would be owl:Things, or whether it would matter if rdfs:Resource were used instead. If you can explain how a distinction between those two helps interoperability then I would be interested; throw skos:Concept into the mix and I’ll buy you a pint.)
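Just to make that concrete, here is a minimal sketch (Python with rdflib, using a made-up example.org namespace and property name) of what a “big bag of predicates” declaration might look like: the predicate is typed as a property, but its domain and range are the maximally permissive owl:Thing.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/terms/")  # hypothetical namespace, purely for illustration

g = Graph()
# A "big bag of predicates" style declaration: the predicate exists,
# but its domain and range are left as wide open as possible.
g.add((EX.relatesTo, RDF.type, RDF.Property))
g.add((EX.relatesTo, RDFS.domain, OWL.Thing))
g.add((EX.relatesTo, RDFS.range, OWL.Thing))
# Any two resources can now be linked with it, no questions asked.
g.add((EX.myCourse, EX.relatesTo, EX.geometry))

print(g.serialize(format="turtle"))
```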

Returning to the SemTech project, the course of the meeting did a lot to reiterate the final report of that project, and in particular the roadmap it produced, which was a sequence of 1) release data openly as linked data with an emphasis on lightweight knowledge models; 2) creation and deployment of applications built on this data; 3) emergence of ontologies and pedagogy-aware semantic applications. While the linked data cloud shows the progress of step 1, I would suggest that it is worth keeping an eye on whether step 2 is happening (the SemTech project provided a baseline survey for comparison, so what I am suggesting is a follow-up of that at some point).

Finally: thanks to Thanasis for organising the workshop, I know he had a difficult time of it, and I hope that doesn’t put him off organising a third (once you call something the second… you’ve created a series!)

Sharing service information?

Over the past few weeks the question of how to find service endpoints keeps coming up in conversation (I know, that says a lot about the sort of conversations I have); for example, we have been asked whether we can provide information about the RSS feed locations for the services/collections created by all the UKOER projects. I would generalise this to service endpoints, by which I mean things like the base URL for OAI-PMH, RSS/Atom feed locations or SRU target locations; more generally, the locations of the web API or protocol implementations that provide machine-to-machine interoperability. These are often harder to find than they should be, and I would like to recommend one approach, and suggest another, to help make them easier to find.
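By way of illustration (the base URL below is made up), once you know where an endpoint such as an OAI-PMH base URL lives, using it is trivial; the hard part is finding it in the first place. A minimal sketch in Python:

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical OAI-PMH base URL: the point is that the request is
# easy to construct once the endpoint is known.
BASE_URL = "http://example.org/oai"

query = urllib.parse.urlencode({"verb": "Identify"})
with urllib.request.urlopen(f"{BASE_URL}?{query}") as response:
    root = ET.parse(response).getroot()

ns = {"oai": "http://www.openarchives.org/OAI/2.0/"}
print(root.findtext(".//oai:repositoryName", namespaces=ns))
```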

The approach I would like to recommend to those who provide service endpoints, i.e. those of you who have a web-based service (e.g. a repository or OER collection) that supports machine-to-machine interoperability (e.g. for metadata dissemination, remote search, or remote upload), is the one taken by web 2.0 hosts. Most of these have a reasonably easy-to-find section of their website devoted to documenting their API, providing “how-to” information about what can be done with it, with examples you can follow, and, in the best cases, simple step-by-step instructions. Here’s a quick list by way of providing examples.

I’ll mention Xpert Labs as well because, while the “labs” or “backstage” approach in general isn’t quite what I mean by simple “how-to” information, it looks like Xpert are heading that way and “labs” sums up the experimental nature of what they provide.

That helps people wanting to interoperate with the services and sites they already know about, but it raises a more fundamental question: how do you find those services in the first place? For example, how do you find all those collections of OERs? Well, some interested third party could build a registry for you, but that’s an extra effort for someone who is neither providing nor using the data/service/API. Furthermore, once the information is in the registry it’s dead, or at least at risk of death. What I mean is that there is little contact between the service provider and the service registry: the provider doesn’t really rely on the registry for people to use their services, and the registry doesn’t actually use the information it stores. Thus it’s easy for the provider to forget to tell the registry when the information changes, and if it does change there is little chance of the registry maintainer noticing.

So my suggestion is that those who are building aggregation services based on interoperating with various other sites should provide access to information about the endpoints they use. An example of this working is the JournalToCs service, which is an RSS aggregator for research journal tables of contents but which has an API that lets you find information about the journals it knows about (JOPML showed the way here, taking information from a JISC project that spawned JournalToCs and passing on lists of RSS feeds as OPML). Hopefully this approach, of endpoint users providing information about the endpoints they use, would mean that only information that actually works and is useful (at least for them) gets shared.
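To make the OPML idea concrete, here is a minimal sketch (the URL is a placeholder) of pulling a list of feed endpoints out of an OPML document of the sort JOPML passed on:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder OPML location; JOPML-style services publish feed lists this way.
OPML_URL = "http://example.org/feeds.opml"

with urllib.request.urlopen(OPML_URL) as response:
    tree = ET.parse(response)

# In OPML, each <outline> describing a feed carries an xmlUrl attribute
# pointing at the RSS/Atom endpoint.
feeds = [o.get("xmlUrl") for o in tree.iter("outline") if o.get("xmlUrl")]

for url in feeds:
    print(url)
```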

Descriptions and metadata; documents and RDF

I keep coming back to thinking about embedding metadata into human-oriented resource descriptions, i.e. web pages.

Last week I was discussing RDFa vs triple stores with Wilbert. Wilbert was making the point that publishing RDF is easier to manage, less error-prone and easier on the consumer if you deal with it on its own, rather than trying to encode triples and produce a human-readable web page with valid XHTML all at the same time. A valid point, though Wilbert’s starting point was “if you’re wanting to publish RDF”, and that left me still with the question of when we want metadata, i.e. encoded, machine-readable resource descriptions, when we want resource descriptions that people can read, and whether we really have to separate the two.

Then yesterday, following a recommendation by Dan Rehak, I read this excellent comparison of three approaches that could be used to manage resource descriptions or metadata: relational databases, document stores/NoSQL, and triple stores/RDF. It really helps in that it explains how storing information about “atomic” resources is a strength of document stores (with features like versioning and flexible schemas) and storing relationships is a strength of triple stores (with, you know, features like links between concepts). So you might store information about a resource as an XML document structured by some schema so that you could extract the title, author name and so on, but sometimes you want to give more detail, e.g. you might want to show how the subject relates to other subjects, in which case you’re into the world where RDF has strengths. And then again, while an author name is enough for many uses, an unambiguous identifier for the author, encoded so that a machine will understand it as a link to more information about the author, is also useful.
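To illustrate that split, here is a minimal sketch in which the same invented record is held as a simple document for its “atomic” description, with the relationships expressed as triples (the names, URIs and vocabulary choices are mine, purely for illustration):

```python
import json

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, SKOS

# The "atomic" description -- what a document store handles well.
record = {
    "title": "An example resource",
    "author": "A. N. Other",
    "subject": "algebra",
}
print(json.dumps(record, indent=2))

# The relationships -- an unambiguous author identifier and how the subject
# sits among other subjects -- are where triples earn their keep.
EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()
res = EX.resource1
g.add((res, DC.title, Literal(record["title"])))
g.add((res, DC.creator, URIRef("http://example.org/people/another")))
g.add((res, DC.subject, EX.algebra))
g.add((EX.algebra, SKOS.broader, EX.mathematics))
print(g.serialize(format="turtle"))
```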

Also relevant:

CETIS “What metadata…?” meeting summary

Yesterday we had a meeting in London with about 25 people thinking about the question “What metadata is really useful?”

My thinking behind having a meeting on this subject was that resource description can be a lot of effort, so we need to be careful that the decisions we make about how it is done are evidence-based. Given the right data we should be able to get evidence about what metadata is really used for, as opposed to what we might speculate it is useful for (with the caveat that we need to allow for innovation, which sometimes involves supporting speculative usage scenarios). So, what data do we have, and what evidence could we get, that would help us decide such things as whether describing a characteristic such as the “typical learning time for using a resource” is or isn’t helpful enough to justify the effort? Pierre Far went to an even more basic level and asked in his presentation why we use XML for sharing metadata: is it the result of a reasoned appraisal of the alternatives, such as JSON, or did it just seem the right thing to do at some point?
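Just to make the comparison Pierre was asking about tangible, here is the same invented record serialised both ways; this is a sketch, not an argument for either:

```python
import json
import xml.etree.ElementTree as ET

# The same minimal, made-up record serialised as JSON and as XML.
record = {"title": "An example resource", "creator": "A. N. Other"}

print(json.dumps(record))

root = ET.Element("resource")
for field, value in record.items():
    ET.SubElement(root, field).text = value
print(ET.tostring(root, encoding="unicode"))
```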

Dan Rehak made the very useful point to me that we need a reason for wanting to answer such questions, i.e. what is it we want to do? What is the challenge? Most of the people in the room were interested in disseminating educational resources (often OERs): some have an interest in disseminating resources provided by their own project or organization, others have an interest in services that help users find resources from a wide range of providers. So I had “help users find the resources they need” as the sort of reason for asking these questions; but I think Dan was after something new, less generic, and (though he would never say this) less vague and unimaginative. What he suggested as a challenge was something like “how do you build a recommender system for learning materials?”, which is a good angle, and I know it’s one that Dan is interested in at the moment; I hope that others can either buy into that challenge or have something equally interesting that they want to do.

I had suggested that user surveys, existing metadata and search logs are potential sources of data reflecting real use and real user behaviour, and no one had disagreed, so I structured much of the meeting around discussion of those. We had short overviews of examples of previous work on each of these, and some discussion about that, followed by group discussions in more depth on each. I didn’t want this to be an academic exercise; I wanted the group discussions to turn up ideas that could be taken forward and acted on, and I was happy at the end of the day. Here’s a sampler of the ideas turned up during the day:
* continue to build the resources with background information that I gathered for the meeting.
* promote the use of common survey tools, for example the online tool used by David Davies for the MeDeV subject centre (results here).
* textual analysis of metadata records to show what is being described in what terms.
* sharing search logs in a common format so that they can be analysed by others (echoes here of Dave Pattern’s sharing of library usage data and the subsequent work on business intelligence that can be extracted from it).
* analysis of search logs to show which queries yield zero hits, which would identify topics for which there is unmet demand (see the sketch just below).
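As a sketch of that last idea (the log format, with its query and hit_count columns, is an assumption rather than an agreed standard), counting zero-hit queries from a shared search log might look like this:

```python
import csv
from collections import Counter

# Assumed log format: one row per search with "query" and "hit_count" columns.
zero_hit = Counter()
with open("search_log.csv", newline="") as log:
    for row in csv.DictReader(log):
        if int(row["hit_count"]) == 0:
            zero_hit[row["query"].strip().lower()] += 1

# The most frequent zero-hit queries point at topics with unmet demand.
for query, count in zero_hit.most_common(20):
    print(count, query)
```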

In the coming weeks we shall be working through the ideas generated at the meeting in more depth with the intention of seeing which can actually be brought to fruition. In the meantime keep an eye on the wikipage for the meeting which I shall be turning into a more detailed record of the event.

Analysing OCWSearch logs

We have a meeting coming up on the topic of investigating what data we have (or could acquire) to answer the question of what metadata is really required to support the discovery, selection, use and management of educational resources. At the same time as I was writing a blog post about that, over at OCWSearch they were publishing the list of top searches for their collection (I think Pierre Far is the person to thank for that). So, what does this tell us about metadata requirements?

I’ve been through the terms in the top half of the list (it says that the list is roughly in descending order of popularity, though it would be really good to know more about how popular each search term was) and tried to judge what characteristic or property of the resource the searcher was searching on.

There were just under 170 search terms in total. It doesn’t surprise me that the vast majority (over 95%) of them are subject searches. Both higher-level, broad subject terms (disciplines, e.g. “Mathematics”) and lower-level, finer-grained subject terms (topics, e.g. “Applied Geometric Algebra”) crop up in abundance. I’m not sure you can say much about their relative importance.

What’s left is (to me) more interesting. We have:

  • resource types, specifically: “online text book”, “audio”, “online classes”.
  • People, who seem to be staff at MIT; so while it’s possible someone is searching for material about them or about their theories, I think it is more likely that people are searching for them as resource creators.
  • level, specifically: 101, Advanced (x2), college-level. These are often used in conjunction with subject terms.
  • Course codes e.g. HSM 260, 15.822, Psy 315. (These also imply a level and a subject.)

I think with more data and more time spent on the analysis we could get some interesting results from this sort of approach.
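If anyone wanted to automate the rough judgements I made by hand, a sketch might look like the following (the keyword lists and patterns are mine, purely for illustration; a real analysis would derive them from the log itself):

```python
import re

# Illustrative keyword lists and course-code patterns, not the ones used above.
LEVEL_TERMS = {"101", "advanced", "college-level", "introductory"}
TYPE_TERMS = {"audio", "video", "text book", "textbook", "online classes"}
COURSE_CODE = re.compile(r"^[A-Za-z]{2,4}\s?\d{2,3}$|^\d{1,2}\.\d{3}$")

def classify(term):
    t = term.strip().lower()
    if COURSE_CODE.match(term.strip()):
        return "course code"
    if any(w in t for w in TYPE_TERMS):
        return "resource type"
    if any(w in t for w in LEVEL_TERMS):
        return "level"
    return "subject"  # the default, and by far the largest bucket

for term in ["Applied Geometric Algebra", "Psy 315", "advanced mathematics", "audio"]:
    print(term, "->", classify(term))
```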

Jorum and Google ranking

Les Carr has posted an interesting analysis, Visibility of OER Material: the Jorum Learning and Teaching Competition. He searches for six resources on Google and compares the ranking, in the results page, of the copy in Jorum with the copy held elsewhere. The results are mixed: sometimes Jorum has the top place, sometimes some other site (institutional or author’s site) is top, though it should be said that with one exception we’re talking about which is first and which is second. In other words, both would be found quite easily.

Les concludes:

Can we draw any general patterns from this small sample? To be honest, I don’t think so! The range of institutions is too diverse. Some of the alternative locations are highly visible, so it is not surprising that Jorum is eclipsed by their ranking (e.g. Cambridge, very newsworthy Gurkhas international organisation). Some 49% of Open Jorum’s records provide links to external sources rather than holding bitstream contents directly. It would be very interesting to see the bigger picture of OER visibility by undertaking a more comprehensive survey.

Yes, it would be very interesting to see the bigger picture, and it would also be interesting to see a more thorough investigation of just Jorum’s role (I don’t think Les will mind the implication that he has done no more than scratch the surface).

Some random thoughts that this raises in my mind:

  • Title searches are too easy; the quality of resource description will only be tested by searching for the keywords that are really used by people looking for these resources. Some will know the title of the resource, but not many. Just have a play with using the most important one or two words from the title rather than the whole title and see how the results change.
  • To say that Jorum enhances or doesn’t enhance visibility depending on whether it comes above or below the alternative sites is too simplistic. If it links to the other site, Jorum will enhance the visibility of that site even if it ranks below it; having the same resource represented twice in the search engine results page enhances its visibility no matter what the ordering; on the other hand, having links from elsewhere pointing to two separate sites probably reduces the visibility of both.
  • Sometimes Jorum hosts a copy of the resource, sometimes it just points to a copy elsewhere; that’s got to have an effect (hasn’t it?).
  • What is the cause of the difference? When I’ve tried similar (very superficial) comparisons, I’ve noticed that Jorum gets some of the basics of SEO right (e.g. using the resource’s title in the HTML title element; curiously it doesn’t seem to set the meta description). How does this compare to other hosts? I’ve noticed some other OER sites that don’t get this right, so we could see Jorum as guaranteeing a certain basic quality of resource discovery rather than as necessarily enhancing visibility. (Question: is this really necessary?) A rough sketch of checking these two basics follows this list.
  • What happens over time? Do people link to the copy in Jorum or elsewhere? This will vary a lot, but there may be a trend. I’ll note in passing that choosing six resources that had been promoted by Jorum’s learning and teaching competition may have skewed the results.
  • Which should be highest ranked anyway? Do we want Jorum to be highly ranked to reflect its role as part of the national infrastructure, a place to showcase what you’ve produced; or do institutions see releasing OERs as part of a marketing strategy, so that the best Jorum can do is quietly improve the ranking of the OERs on the institution’s site by linking to them? This surely relates to the choice between having Jorum host the resource and just having it link to the resource on the institution’s site (doesn’t it?).
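On that SEO point, here is a rough sketch (the URL is a placeholder) of checking whether a given resource page sets those two basics, the HTML title and the meta description:

```python
import urllib.request
from html.parser import HTMLParser

class HeadCheck(HTMLParser):
    """Collects the <title> text and any <meta name="description"> content."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.description = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and (attrs.get("name") or "").lower() == "description":
            self.description = attrs.get("content")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# Placeholder URL: point this at whichever OER page you want to check.
with urllib.request.urlopen("http://example.org/resource") as response:
    html = response.read().decode("utf-8", errors="replace")

checker = HeadCheck()
checker.feed(html)
print("title:", checker.title.strip() or "MISSING")
print("meta description:", checker.description or "MISSING")
```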

Sorry, all questions and no answers!

An open and closed case for educational resources

I gave a pecha kucha presentation (20 slides, 20 seconds per slide) at the Repository Fringe in Edinburgh last week. I’ve put the slides on slideshare, and there’s also a video of the presentation, but since the slides are just pictures, the notes are a bit disjointed, and my delivery was rather rushed, it seems to me that it would be useful to reproduce what I said here. Without the 20-seconds-per-slide constraint.

The main thrust of the argument is that Open Educational Resource (OER) or OpenCourseWare (OCW) release can be a good way of overcoming some of the problems institutions have in managing their learning materials. By OER or OCW release we mean an institution, group or individual disseminating their educational resources under Creative Commons licences that allow anyone to take and use those resources for free. As you probably know, over the last year or so HEFCE have put a lot of money into the UKOER programme.

I first started thinking about this approach in relation to building repositories four or five years ago.

I was on the advisory group for a typical institutional learning object repository project. The approach that they, and many others like them at the time, had chosen was to build a closed, inward-facing repository, providing access and services only within the institution. The project was concerned about interoperability with their library systems and worried a lot about metadata.

[Slide image: Castle Kennedy]
The repository was not a success. In the final advisory group meeting I was asked whether I could provide an example of an institution with a successful learning object repository. I gave a rambling, unsatisfactory answer about how there were a few institutions trying the same approach, but it was difficult to know what was happening since they (like the one I was working with) didn’t want to broadcast much information about what they were doing.

And two days later it dawned on me that what I should have said was MIT.

MIT OpenCourseWare
At that time MIT’s OpenCourseWare initiative was by far the most mature open educational resource initiative, but now we have many more examples. But in what way does OER-related activity relate to the sort of internal management of educational materials that concerns projects like the one with which I was involved?

The challenges of managing educational resources
The problems that institutional learning object repositories were trying to solve at that time were typically these:

  • they wanted to account for what educational content they had and where it was;
  • they wanted to promote reuse and sharing within the Institution;
  • they wanted more effective and efficient use of resources that they had paid to develop.

And why, in general, did they fail? I would say that there was a lack of buy-in or commitment all round: a lack of motivation for staff to deposit, and a lack of awareness that the repository even existed. There was also more focus on the repository per se, and on systems interoperability, than on directly addressing the needs of the stakeholders.

Does an open approach address these challenges?

Well, firstly, if you put your resources on the open web everyone will be able to access them, including the institution’s own staff and students. What’s more, once these resources are on the open web they can be found using Google, which is how those staff and students search. Helping your staff find, and have access to, the resources created by other staff helps a lot with promoting reuse and sharing within the institution.

It is also becoming apparent that there are good institution-level benefits from releasing OERs.

For example the OU have traced a direct link from use of their OpenLearn website to course enrolment.

In general terms, open content raises the profile of the institution and its courses on the web, providing an effective shop window for the institution’s teaching, in a way that an inward facing repository cannot. Open content also gives prospective students a better understanding of what is offered by an institution’s courses than a prospectus can, and so helps with recruitment and retention.

There’s also a social responsibility angle on OERs. On launching the Open University’s OpenLearn initiative, Prof. David Vincent said:

Our mission has always been to be open to people, places, methods and ideas and OpenLearn allows us to extend these values into the 21st century.

While the OU is clearly a special case in UK Higher Education, I don’t think there are many working in Universities who would say that something similar wasn’t at least part of what they were trying to do. Furthermore, there is a growing feeling that material produced with public funds should be available to all members of the public, and that Universities should be of benefit to the wider community not just to those scholars who happen to work within the system.

Another, less positive, harder-edged angle on social responsibility was highlighted in the ruling on a Freedom of Information request which required the release of course material. The Information Tribunal said:

it must be open to those outside the academic community to question what is being taught and to what level in our universities

We would suggest that we are looking at a future where open educational resources should be seen as the default approach, and that a special case should need to be made for resources that a public institution such as a university wants to keep “private”. But for now the point we’re making is that social responsibility is a strong motivator for some individuals, institutions and funders.

Legalities
Releasing educational content openly on the web requires active management of the intellectual property rights associated with the content used for teaching at the institution. This is something that institutions should be doing anyway, but often fudge. They should address questions such as:

  • Who is responsible for ensuring there is no copyright violation?
  • Who owns the teaching materials, the lecturer who wrote them or the institution?
  • Who is allowed to use materials created by a member of staff who moves on to another institution?

The process of applying open licences helps institutions address these issues, and other legal requirements such as responding to freedom of information requests relating to teaching materials (and they do happen).

Not all doom and gloom
Some things do become simpler when you release learning materials as OERs.

For example access management for the majority of users (those who just want read-only access) is a whole lot simpler if you decide to make a collection open; no need for the authentication or authorization burden that typically comes with making sure that only the right people have access.

On a larger scale, the Open University have found that setting up partnerships for teaching and learning with other institutions becomes easier if you no longer have to negotiate terms and conditions for mutual access to course materials from each institution.

Some aspects of resource description also become easier.

Some (but not all) OER projects present material in the context in which it was originally delivered, i.e. arranged as courses (the MIT OCW course a screen capture of which I used above is one example). This may have some disadvantages, but the advantage is that the resource is self-describing: you don’t have to rely solely on metadata to convey information such as educational level and potential educational use. This is especially important because, whereas most universities can describe their courses in ways that make sense, we struggle to agree controlled vocabularies that can be applied across the sector.

Course or resources?
The other advantage of presenting the material as courses, rather than disaggregated as individual objects, is that the course is more likely to be useful to learners.

Of course the presentation of resources in the context of a course should not stop anyone from taking or pointing to a single component resource and using it in another context. That should be made as simple as possible; but it’s always going to be very hard to go in the other direction: once a course is disaggregated it’s very hard to put it back together (the source of the material could describe how to put it back together, or how it fitted in with other parts of a course, but then we’re back to the creation of additional metadata).

Summary and technical
What I’ve tried to say is that putting clearly licensed stuff onto the open web solves many problems.

What is the best technology genre for this: repository, content management system, VLE, or Web 2.0 service? Within the UKOER programme all four approaches were used successfully. Some of these technologies are primarily designed for local management and presentation of resources rather than open dissemination, and vice versa. There’s no consensus, but there is a discernible trend towards using a diversity of approaches and mixing and matching, e.g. some UKOER projects used repositories to hold the material and push it out to Web 2.0 services; others pulled material in the other direction.

PS: While I was writing this, Timothy Vollmer over on the Creative Commons blog was writing “Do Open Educational Resources Increase Efficiency?”, making some similar points.

Image credits
Most of the images are sourced from Flickr and have one or another flavour of Creative Commons licence.

What do we know about educational metadata requirements?

We at CETIS are in the early stages of planning a meeting (pencilled in for October, date and venue tbc) to collect and compare evidence on what we know about user requirements for metadata to support the discovery, retrieval, use and management of educational resources. We would like to know who has what to contribute: so if you’re in the business of creating metadata for educational resources, please would you come and tell us what it is useful for.

One approach to developing metadata standards and application profiles is to start with use cases and derive requirements from them; the problem is that, when standardizing a new domain, these use cases are often aspirational. In other words, someone argues a case for describing some characteristic of a resource (shall we use “semantic density” as an example?) because they would like to use those descriptions for some future application that they think would be valuable. Whether or not that application materialises, the metadata to describe the characteristic remains in the standard. Once the domain matures we can look back at what is actually useful practice. Educational metadata is now a mature domain, and some of this reviewing of what has been found to be useful is happening; it is this that we want to extend. We hope that in doing so we will help those involved in disseminating and managing educational resources make evidence-based decisions about what metadata they should provide.

I can think of three approaches to reviewing what metadata really is useful. The first is to look at what metadata has been created, that is, which fields have actually been used (a minimal sketch of this approach appears at the end of this post). This has been happening for some time now: back in 2004 Norm Friesen looked at LOM instances to see which elements were used, and Carol Jean Godby looked at application profiles to see which elements were recommended for use. More recent work associated with the IEEE LOM working group seems to confirm the findings of these early studies.

The second approach is to survey users of educational resources to find out how they search for them. David Davies presented the results of a survey asking “what do people look for when they search online for learning resources?” at a recent CETIS meeting. Finally, we can look directly at the logs kept by repositories and catalogues of educational materials to ascertain the real search habits of users: what terms do they search for, what characteristics do they look for, which browse links do they click? I’m not sure this final type of information is shared much, if at all, at present (though there have been some interesting ideas floated recently about sharing various types of analytic information for OERs, and there is the wider Collective Intelligence work of OLNet).

If you have information from any of these approaches (or one I haven’t thought of) that you would be willing to share at the meeting, I would like to hear from you. Leave a comment below or email phil.barker@hw.ac.uk.
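To make the first of those approaches concrete, here is a minimal sketch of the kind of element-usage count Friesen did (the directory of LOM XML records and its layout are assumptions):

```python
import glob
import xml.etree.ElementTree as ET
from collections import Counter

# Assumed layout: a directory of LOM metadata records, one XML file each.
usage = Counter()
for path in glob.glob("lom_records/*.xml"):
    root = ET.parse(path).getroot()
    for element in root.iter():
        # Strip the namespace so, e.g., {http://...LOM}title counts as "title".
        usage[element.tag.split("}")[-1]] += 1

for name, count in usage.most_common(25):
    print(f"{name}\t{count}")
```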