11 thoughts on “UKOER 2: Collections, technology, and community”

  1. Hmm, have just thought through the implications of separating the Collections strand out… e.g. use of WordPress in the rest of the programme is nil. I’m guessing the areas where it will make the biggest difference are platforms and standards. Time to get back to the data.

  2. Pingback: UKOER 2: Collections, technology, and community | Calling All Lecturers | Scoop.it

  3. CSAP used WordPress as a hybrid tool, part blog and part collection; OERBITAL used WordPress purely as a blog (after initial consideration of its other possible uses).
    Triton used WordPress as a lightweight aggregator of multiple content types, presenting a single interface. Triton found that there may be a need for some kind of middleware/directory aggregation function that community portals and services could be built on. CSAP found that taxonomy construction required both technical expertise and subject knowledge. OpenFieldwork worked on extracting resources relevant to fieldwork, but found that they had to do a lot of the tagging and geolocation themselves. OpenFieldwork also displayed a spectrum of materials with different licences.
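
    The “lightweight aggregator” Triton describes is, at heart, a routine that merges several feeds into one date-sorted list behind a single interface. The sketch below shows that pattern in plain PHP; it is not Triton’s actual code – the feed URLs are placeholders, only RSS 2.0 is handled, and a real aggregator would also need Atom support, deduplication and caching.

    ```php
    <?php
    // Merge items from several feeds (placeholder URLs) into one list,
    // newest first, so they can be presented through a single interface.
    $feeds = array(
        'http://example.org/podcasts/feed',
        'http://example.org/slides/feed',
        'http://example.org/readinglists/feed',
    );

    $items = array();
    foreach ($feeds as $url) {
        $rss = @simplexml_load_file($url);   // RSS 2.0 only in this sketch
        if ($rss === false) {
            continue;                        // skip feeds that are down or malformed
        }
        foreach ($rss->channel->item as $item) {
            $items[] = array(
                'title'  => (string) $item->title,
                'link'   => (string) $item->link,
                'source' => (string) $rss->channel->title,
                'date'   => (int) strtotime((string) $item->pubDate),
            );
        }
    }

    // Sort across all sources by publication date, newest first.
    usort($items, function ($a, $b) { return $b['date'] - $a['date']; });

    foreach ($items as $i) {
        printf("[%s] %s – %s\n", $i['source'], $i['title'], $i['link']);
    }
    ```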

  4. Delores: their static collection was created manually and contains only openly licensed content (used as training material for sux0r). The level of structured resource description within the resources made it difficult for automatic classification to work.

  5. OAI-PMH as a machine-to-machine / repo-to-repo protocol – it’s a completely different world from the publishing side: different skills, knowledge and tools. OERBITAL didn’t use OAI-PMH beyond an initial investigation – irrelevant for a static collection.
    Again, a perceived need for a centralised subject community portal to mediate repository harvesting. Xpert was used by most projects – intermediated OAI-PMH?
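
    For anyone coming from the publishing side, the “completely different world” of harvesting roughly amounts to paging through a repository’s ListRecords responses. The sketch below shows that loop in PHP under stated assumptions: the base URL is hypothetical, error handling is minimal, and a real harvester would also deal with deleted records, sets and incremental (datestamp-based) harvests.

    ```php
    <?php
    // Minimal OAI-PMH ListRecords harvest (sketch only; placeholder endpoint).
    $baseUrl = 'http://repository.example.ac.uk/oai';
    $params  = array('verb' => 'ListRecords', 'metadataPrefix' => 'oai_dc');

    do {
        $response = file_get_contents($baseUrl . '?' . http_build_query($params));
        if ($response === false) {
            break;  // request failed
        }
        $xml = @simplexml_load_string($response);
        if ($xml === false) {
            break;  // malformed response: stop rather than loop forever
        }
        $xml->registerXPathNamespace('oai', 'http://www.openarchives.org/OAI/2.0/');
        $xml->registerXPathNamespace('dc', 'http://purl.org/dc/elements/1.1/');

        // One Dublin Core field per record stands in for real processing here.
        foreach ($xml->xpath('//dc:title') as $title) {
            echo trim((string) $title), "\n";
        }

        // OAI-PMH pages its results: keep requesting while a resumption token is returned.
        $token  = $xml->xpath('//oai:resumptionToken');
        $params = array('verb' => 'ListRecords');
        if (!empty($token) && trim((string) $token[0]) !== '') {
            $params['resumptionToken'] = trim((string) $token[0]);
        } else {
            break;
        }
    } while (true);
    ```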

  6. Phil Barker: Harking back to Andy Powell’s comment at the start of the IE architecture – “deposit at an institutional level and discover at a subject level” – is this even more true for OER?

  7. CSAP: role of the institution as quality control (vs. anyone can deposit); OERBITAL: quality control is highly labour-intensive, and wading through poor-quality resources is demoralising.

  8. Quality review via specialists or at scale? Academics are getting used to approximate measures of quality – a parallel to Amazon. Wikipedia now offers the opportunity to review pages.

  9. We didn’t use OAI-PMH as there is very little OER in OAI-PMH feeds – OERCommons is about the only case – and OAI-PMH is usually appallingly slow. So the benefits, if any, are incredibly small, and when traded off against the speed it’s not worth the hassle.

    The format of OAI-PMH is at best just harder RSS, so an XML parser can do both without worrying too much (it’s not a lot of code or a big time requirement). The issue, as above, is that there is next to nothing there, and when you do find something it’s slow – especially if your PHP script only has 30 seconds to run.

    A possible omission in this analysis is the difficulty of getting at content, as we partially covered in our blog post – there is no standard API or approach, and no collection large enough that code doesn’t have to be mangled for each different source. So the lack of OAI could possibly be perceived as a lack of support for SRU.
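
    A small illustration of the “harder RSS” point above, and of how the per-source mangling tends to look in practice: one SimpleXML routine can pull titles out of either an RSS 2.0 feed or an OAI-PMH oai_dc response, with the differences confined to a single branch. The URLs are placeholders, and anything beyond these two formats (Atom, SRU, bespoke APIs) would need branches of its own.

    ```php
    <?php
    // Extract resource titles from either an RSS 2.0 feed or an OAI-PMH
    // (oai_dc) response using the same XML parser. Sketch only.
    function extract_titles($xmlString)
    {
        $xml = @simplexml_load_string($xmlString);
        if ($xml === false) {
            return array();
        }

        if ($xml->getName() === 'OAI-PMH') {
            // Dublin Core titles sit in their own namespace inside each record.
            $xml->registerXPathNamespace('dc', 'http://purl.org/dc/elements/1.1/');
            $nodes = $xml->xpath('//dc:title');
        } else {
            // RSS 2.0 items carry no namespace, so a plain path is enough.
            $nodes = $xml->xpath('//channel/item/title');
        }

        $titles = array();
        foreach ($nodes as $node) {
            $titles[] = trim((string) $node);
        }
        return $titles;
    }

    // Same call, two very different sources (placeholder URLs).
    print_r(extract_titles(file_get_contents('http://example.org/oer/feed.rss')));
    print_r(extract_titles(file_get_contents('http://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc')));
    ```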