We at CETIS are in the early stages of planning a meeting (pencilled in for October, date and venue tbc) to collect and compare evidence on what we know about user requirements for metadata to support the discovery, retrieval, use and management of educational resources. We would like to know who has what to contribute: so if you’re in the business of creating metadata for educational resources, please would you come and tell us what it is useful for.
One approach taken to developing metadata standards and application profiles is to start with use cases and derive requirements from them; the problem is that when standardizing a new domain these use cases are often aspirational. In other words, someone argues a case for describing some characteristic of a resource (shall we use “semantic density” as an example?) because they would like to use those descriptions in some future application that they think would be valuable. Whether or not that application materialises, the metadata to describe the characteristic remains in the standard. Once the domain matures we can look back at what is actually useful practice. Educational metadata is now a mature domain, and some of this reviewing of what has been found to be useful is happening; it is this work that we want to extend. We hope that in doing so we will help those involved in disseminating and managing educational resources make evidence-based decisions on what metadata they should provide.
I can think of three approaches to reviewing what metadata really is useful. The first is to look at what metadata has been created, that is, which fields have actually been used. This has been happening for some time now: for example, back in 2004 Norm Friesen looked at LOM instances to see which elements were used, and Carol Jean Godby looked at application profiles to see which elements were recommended for use. More recent work associated with the IEEE LOM working group seems to confirm the findings of these early studies. The second approach is to survey users of educational resources to find out how they search for them. David Davies presented the results of a survey asking “what do people look for when they search online for learning resources?” at a recent CETIS meeting. Finally, we can look directly at the logs kept by repositories and catalogues of educational materials to ascertain the real search habits of users: what terms they search for, what characteristics they look for, and which browse links they click. I’m not sure that this final type of information is shared much, if at all, at present (though there have been some interesting ideas floated recently about sharing various types of analytic information for OERs, and there is the wider Collective Intelligence work of OLNet). If you have information from any of these approaches (or one I haven’t thought of) that you would be willing to share at the meeting I would like to hear from you. Leave a comment below or email phil.barker@hw.ac.uk .
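As a rough illustration of the first approach, a tally of which elements appear across a collection of metadata records can be sketched in a few lines. The two sample records below are invented for the example and use simplified element names rather than the full IEEE LOM namespaces; in practice you would read real LOM instances from files or an OAI-PMH harvest.

```python
# Sketch: count, for each parent/child element path, how many records
# in a collection use that element at least once. The sample records
# are hypothetical, simplified LOM-like instances (no namespaces).
from collections import Counter
import xml.etree.ElementTree as ET

SAMPLE_RECORDS = [
    "<lom><general><title>Intro to algebra</title>"
    "<language>en</language></general>"
    "<educational><difficulty>easy</difficulty></educational></lom>",
    "<lom><general><title>Cell biology</title></general>"
    "<technical><format>text/html</format></technical></lom>",
]

def element_usage(records):
    """Return a Counter mapping element paths to the number of records
    in which that path occurs at least once."""
    usage = Counter()
    for xml_text in records:
        root = ET.fromstring(xml_text)
        paths = set()  # a set, so each record counts a path only once
        for parent in root.iter():
            for child in parent:
                paths.add(f"{parent.tag}/{child.tag}")
        usage.update(paths)
    return usage

usage = element_usage(SAMPLE_RECORDS)
for path, count in usage.most_common():
    print(f"{path}: used in {count} of {len(SAMPLE_RECORDS)} records")
```

Run over a real repository's records, output like this shows at a glance which elements are widely populated and which are rarely, if ever, used.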