An open and closed case for educational resources

I gave a pecha kucha presentation (20 slides, 20 seconds per slide) at the Repository Fringe in Edinburgh last week. I’ve put the slides on SlideShare, and there’s also a video of the presentation, but since the slides are just pictures, the notes are a bit disjointed, and my delivery was rather rushed, it seems to me that it would be useful to reproduce what I said here. Without the 20-seconds-per-slide constraint.

The main thrust of the argument is that Open Educational Resource (OER) or OpenCourseWare (OCW) release can be a good way of overcoming some of the problems institutions have regarding the management of their learning materials. By OER or OCW release we mean an institution, group or individual disseminating their educational resources under Creative Commons licences that allow anyone to take and use those resources for free. As you probably know, over the last year or so HEFCE have put a lot of money into the UKOER programme.

I first started thinking about this approach in relation to building repositories four or five years ago.

I was on the advisory group for a typical institutional learning object repository project. The approach that they and many others like them at the time had chosen was to build a closed, inward-facing repository, providing access and services only within the institution. The project was concerned about interoperability with their library systems and worried a lot about metadata.

The repository was not a success. In the final advisory group meeting I was asked whether I could provide an example of an institution with a successful learning object repository. I gave some rambling unsatisfactory answer about how there were a few institutions trying the same approach but it was difficult to know what was happening since they (like the one I was working with) didn’t want to broadcast much information about what they were doing.

And two days later it dawned on me that what I should have said was MIT.

MIT OpenCourseWare
At that time MIT’s OpenCourseWare initiative was by far the most mature open educational resource initiative; now we have many more examples. But in what way does OER-related activity relate to the sort of internal management of educational materials that concerned projects like the one with which I was involved?

The challenges of managing educational resources
The problems that institutional learning object repositories were trying to solve at that time were typically these:

  • they wanted to account for what educational content they had and where it was;
  • they wanted to promote reuse and sharing within the institution;
  • they wanted more effective and efficient use of resources that they had paid to develop.

And why, in general, did they fail? I would say that there was a lack of buy-in or commitment all round, there was a lack of motivation from the staff to deposit and there was a lack of awareness that the repository even existed. Also there was more focus on the repository per se and systems interoperability than on directly addressing the needs of their stakeholders.

Does an open approach address these challenges?

Well, firstly, by putting your resources on the open web everyone will be able to access them, including the institution’s own staff and students. What’s more once these resources are on the open web they can be found using Google, which is how those staff and students search. Helping your staff find and have access to the resources created by other staff helps a lot with promoting reuse and sharing within the institution.

It is also becoming apparent that there are good institution-level benefits from releasing OERs.

For example the OU have traced a direct link from use of their OpenLearn website to course enrolment.

In general terms, open content raises the profile of the institution and its courses on the web, providing an effective shop window for the institution’s teaching, in a way that an inward-facing repository cannot. Open content also gives prospective students a better understanding of what is offered by an institution’s courses than a prospectus can, and so helps with recruitment and retention.

There’s also a social responsibility angle on OERs. On launching the Open University’s OpenLearn initiative Prof. David Vincent said:

Our mission has always been to be open to people, places, methods and ideas and OpenLearn allows us to extend these values into the 21st century.

While the OU is clearly a special case in UK Higher Education, I don’t think there are many working in Universities who would say that something similar wasn’t at least part of what they were trying to do. Furthermore, there is a growing feeling that material produced with public funds should be available to all members of the public, and that Universities should be of benefit to the wider community not just to those scholars who happen to work within the system.

Another, less positive, harder-edged angle on social responsibility was highlighted in the ruling on a Freedom of Information request where the release of course material was required. The Information Tribunal said:

it must be open to those outside the academic community to question what is being taught and to what level in our universities

We would suggest that we are looking at a future where open educational resources should be seen as the default approach, and that a special case should need to be made for resources that a public institution such as a university wants to keep “private”. But for now the point we’re making is that social responsibility is a strong motivator for some individuals, institutions and funders.

Legalities
Releasing educational content openly on the web requires active management of intellectual property rights associated with the content used for teaching at the institution. This is something that institutions should be doing anyway, but they often fudge it. They should address questions such as:

  • Who is responsible for ensuring there is no copyright violation?
  • Who owns the teaching materials, the lecturer who wrote them or the institution?
  • Who is allowed to use materials created by a member of staff who moves on to another institution?

The process of applying open licences helps institutions address these issues, and other legal requirements such as responding to freedom of information requests relating to teaching materials (and they do happen).

Not all doom and gloom
Some things do become simpler when you release learning materials as OERs.

For example access management for the majority of users (those who just want read-only access) is a whole lot simpler if you decide to make a collection open; no need for the authentication or authorization burden that typically comes with making sure that only the right people have access.

On a larger scale, the Open University have found that setting up partnerships for teaching and learning with other institutions becomes easier if you no longer have to negotiate terms and conditions for mutual access to course materials from each institution.

Some aspects of resource description also become easier.

Some (but not all) OER projects present material in the context in which it was originally delivered, i.e. arranged as courses (the MIT OCW course a screen capture of which I used above is one example). This may have some disadvantages, but the advantage is that the resource is self-describing: you don’t have to rely solely on metadata to convey information such as educational level and potential educational use. This is especially important because, whereas most universities can describe their courses in ways that make sense, we struggle to agree controlled vocabularies that can be applied across the sector.

Course or resources?
The other advantage of presenting the material as courses rather than disaggregated as individual objects is that the course will be more likely to be useful to learners.

Of course the presentation of resources in the context of a course should not stop anyone from taking or pointing to a single component resource and using it in another context. That should be made as simple as possible; but it’s always going to be very hard to go in the other direction: once a course is disaggregated it’s very hard to put it back together. (The source of the material could describe how to put it back together, or how it fitted in with other parts of a course, but then we’re back into the creation of additional metadata.)

Summary and technical
What I’ve tried to say is that putting clearly licensed stuff onto the open web solves many problems.

What is the best technology genre for this: repository, content management system, VLE, or Web 2.0 service? Within the UKOER programme all four approaches were used successfully. Some of these technologies are primarily designed for local management and presentation of resources rather than open dissemination, and vice versa. There’s no consensus, but there is a discernible trend towards using a diversity of approaches and mixing-and-matching, e.g. some UKOER projects used repositories to hold the material and push it to Web 2.0 services; others pulled material in the other direction.

ps: While I was writing this, Timothy Vollmer over on the CreativeCommons blog was writing “Do Open Educational Resources Increase Efficiency?” making some similar points.

CETIS Gathering

At the end of June we ran an event about technical approaches to gathering open educational resources. Our intent was that we would provide space and facilities for people to come and talk about these issues, but we would not prescribe anything like a schedule of presentations or discussion topics. So, people came, but what did they talk about?

In the morning we had a large group discussing approaches to aggregating resources and information about them through feeds such as RSS or Atom, and another smaller group discussing tracking what happens to OER resources once they are released.

I wasn’t part of the larger discussion, but I gather that they were interested in the limits of what can be brought in by RSS, and in difficulties due to the (shall we say) flexible semantics of the elements typically used in RSS even when extended in the typical way with Dublin Core. They would like to bring in information which is more tightly defined, and also information from a broader range of sources relating to the actual use of the resource. They would also like to identify the contents of resources at a finer granularity (e.g. an image or movie rather than a lesson) while retaining the context of the larger resource. These are perennial issues, and bring to my mind technologies such as OAI-PMH with metadata richer than the default Dublin Core, Dublin Core Terms (in contrast to the Dublin Core Element Set), OAI-ORE, and projects such as PerX and TicToCs (see JournalToCs) (just to mention two which happened to be based in the same office as me). At CETIS we will continue to explore these issues, but I think it is recognised that the solution is not as simple as adopting a new metadata standard that is in some way better than what we have now.
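
To make the “flexible semantics” point concrete, here is a minimal sketch (in Python, with a hypothetical feed URL) of pulling Dublin Core fields out of an RSS 2.0 feed. An element like dc:subject tells you nothing about whether its value is a keyword, a classification code, or free text, which is exactly the problem described above.

```python
# A minimal sketch of reading Dublin Core extensions in an RSS 2.0 feed.
# The feed URL is a placeholder; real feeds vary in which elements they
# populate and what their values mean.
import urllib.request
import xml.etree.ElementTree as ET

NS = {"dc": "http://purl.org/dc/elements/1.1/"}
FEED_URL = "https://example.org/oer/feed.rss"  # hypothetical feed

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

for item in tree.findall(".//item"):
    title = item.findtext("title", default="(untitled)")
    link = item.findtext("link", default="")
    # dc:subject might hold a keyword, a classification code, or free
    # text -- the element name alone does not tell you which.
    subjects = [s.text for s in item.findall("dc:subject", NS) if s.text]
    rights = item.findtext("dc:rights", default="(no rights statement)",
                           namespaces=NS)
    print(title, link, subjects, rights)
```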

The discussion on tracking resources (summarized here by Michelle Bachler) was prompted by some work from the Open University’s OLNet on Collective Intelligence, and also some CETIS work on tracking OERs. For me the big “take-home” idea was that many individual OER providers and services must hold information about the use of their resources which, while interesting in itself, would become really useful if made available more widely. So how about, for example, open usage information about open resources? That could really give us some data to analyse.

There were some interesting overlaps between the two discussions: for example, how to make sure that a resource is identified in such a way that you can track it and gather information about it from many sources, and what role usage information can play in the full description of a resource.

After lunch we had a demo of a search service built by cross-searching web 2.0 resource hosts via their APIs, which has been used by the Engineering Subject Centre’s OER pilot project. This led on to a discussion of the strengths and limitations of this approach: essentially it is relatively simple to implement and can be used to provide a tailored search for a specialised OER collection, so long as the number of targets being searched is reasonably low and their APIs are stable and reliable. The general approach of pulling in information via APIs could also be useful for pulling in some of the richer information discussed in the morning. The diversity of APIs led on to another well-rehearsed discussion mentioning SRU and OpenSearch as standard alternatives.
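
For the curious, here is a rough sketch of what that cross-search approach amounts to: one query goes to each host’s search API and the results are merged. The endpoints and response shapes below are hypothetical; each real host has its own URL scheme, parameters and result format, which is why the approach only stays manageable while the list of targets is short.

```python
# A sketch of cross-searching several hosts' APIs and merging results.
# The endpoints and the JSON they return are hypothetical.
import json
import urllib.parse
import urllib.request

HOSTS = {
    "hostA": "https://api.hosta.example/search?q={query}",
    "hostB": "https://api.hostb.example/v1/find?terms={query}",
}

def cross_search(query):
    results = []
    q = urllib.parse.quote(query)
    for host, template in HOSTS.items():
        url = template.format(query=q)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                hits = json.load(resp)
        except OSError:
            continue  # an unreliable API simply drops out of the results
        # Normalising each host's result shape is the real work; here we
        # pretend every host returns a list of {"title": ..., "url": ...}.
        results.extend({"host": host, **hit} for hit in hits)
    return results

print(cross_search("control systems"))
```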

We also had a demonstration of the iCOPER search / metadata enrichment tool, which uses REST, Atom and SPI to allow annotation of metadata records: very interesting as a follow-on from the discussions above, which were beginning to see metadata not as a static record but as an evolving body of information associated with a resource.

Throughout the day, but especially after these demos, people were talking in twos and threes, finding out about QTI, Xerte, cohere, and anything else that one person knew about and others wanted to know about. I hope people who came found it useful, but it’s very difficult as an organiser of such an event to provide a single definitive summary!

Additional technical work for UKOER

CETIS has been funded by JISC to do some additional technical work relevant to the UKOER programme. The work will cover three topics: deposit via RSS feeds, aggregation of OERs, and tracking & analysis of OER use.

Feed deposit
There is a need for services hosting OERs to provide a mechanism for depositors to upload multiple resources with minimal human intervention per resource. One possible way to meet this requirement, already identified by some projects, is “feed deposit”. This approach is inspired by the way in which metadata and content are loaded onto user devices and applications in podcasting. In short, RSS and Atom feeds are capable, in principle, of delivering the metadata required for deposit into a repository, and in addition can either provide a pointer to the content or embed the content itself in the feed. There are a number of issues with this approach that would need to be overcome.
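
As a rough illustration of the podcast-inspired mechanics, here is a sketch (with a hypothetical feed URL) of walking a feed, reading each item’s metadata, and fetching the content its enclosure points to. Mapping that metadata to the repository’s own schema, and handling updates and deletions, are among the issues that would still need to be overcome.

```python
# A sketch of the ingest half of feed deposit: read each item's metadata
# and download the content its enclosure points to. The feed URL is
# hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.org/oer/deposit-feed.rss"  # hypothetical

with urllib.request.urlopen(FEED_URL) as resp:
    tree = ET.parse(resp)

for item in tree.findall(".//item"):
    metadata = {
        "title": item.findtext("title"),
        "description": item.findtext("description"),
    }
    enclosure = item.find("enclosure")  # points at the actual content
    if enclosure is None:
        continue  # metadata-only item: a pointer, not a deposit
    with urllib.request.urlopen(enclosure.get("url")) as content:
        payload = content.read()
    # A real system would now pass payload + metadata to the
    # repository's ingest pipeline.
    print(metadata["title"], enclosure.get("type"), len(payload))
```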

In this work we will: (1) identify projects, initiatives, services, etc. that are engaged in relevant work [if that’s you, please get in touch]; (2) identify and validate the issues that would arise with respect to feed deposit, starting with those outlined in the Jorum paper linked to above; (3) identify current approaches used to address these issues, and identify where consensus may be readily achieved.

Aggregation of OERs
There is interest in facilitating a range of options for the provision of aggregations of resources representing the whole or a subset of the UKOER programme output (possibly along with resources from other sources). There have been some developments that implement solutions based on RSS aggregation, e.g. Ensemble and Xpert; and the UKOLN tagometer measures the number of resources on various sites that are tagged as relevant to the UKOER programme.

In this work we will illustrate and report on other approaches, namely (a) Google custom search, (b) query and result aggregation through Yahoo pipes and (c) querying through the host service APIs. We will document the benefits and affordances as well as drawbacks and limitations of each of these approaches. These include the ease with which they may be adopted, and the technical expertise necessary for their development, their dependency on external services (which may still be in beta), their scalability, etc.

Tracking and analysis of OER use
Monitoring, through technical means, the release of resources through various channels, how those resources are used and reused, and the comments and ratings associated with them is highly relevant to evaluating the uptake of OERs. CETIS have already described some of the options for resource tracking that are relevant to the UKOER programme.

In this work we will write and commission case studies to illustrate the use of these methods, and synthesise the lessons learnt from this use.

Who’s involved in this work
The work will be managed by me, Phil Barker, and Lorna M Campbell.

Lisa J Rogers will be doing most of the work related to feed deposit and aggregation of OERs.

R John Robertson will be doing most of the work relating to Tracking and analysis of OER use.

Please do contact us if you’re interested in this work.

Repositories and the Open Web: report

The CETISROW event took place at Birkbeck College, London, on 19 April 2010, and I have to say it wasn’t really much of a row. There seemed to me to be more agreement on common themes than disagreement, so I’ll try to pull those together in this report, and if anyone disagrees with them there’s a “comment” form at the bottom of this page :-)

Focus on our aims, not the means; and by “means” I mean repositories. The sort of aims I have in mind are hosting, disseminating (or sharing), organising, and managing resources, facilitating social interaction around resources, and facilitating resource discovery. I was struck by how the Sheen Sharing project (about which Sarah Currier reported) had started by building what their community of users actually wanted and could use at that time, rather than working with early adopters in the hope that they could somehow persuade the mainstream and laggards to follow. Roger Greenhalgh illustrated how wider aims such as social cohesion and knowledge transfer could be fostered through sites focussed on meeting community needs.

One of the participants mentioned at the end how pleased she was that we had progressed to talking in these terms rather than hitting people over the head with all the requirements that come from running a repository. I hope this would please Rachel Heery who, reflecting on various JISC Repositories programmes, made the point a while back that we might get better value from a focus on strategic objectives rather than a specific technology supposed to achieve those objectives.

So, what’s to do if we want to progress with this? We need to be clear about what the requirements are, so there is work to do building on and extending the survey on what people look for when they search online for learning resources from the MeDeV Organising OERs project presented by David Davies, and more work on getting systems to fit with needs: what the EdShare participants call cognitive ergonomics.

There was also a broad theme of working with what is already there, which I think came through in a couple of sub-themes about web-scale systems and web-wide standards.

Firstly there were several accounts of working with existing services to provide hosting or community. Sheen Sharing (see above) did this, as did the Materials and Engineering subject centres’ OER projects that Lisa J Rogers reported on. Joss Winn also reported on using existing tools and communities, saying:

I don’t think it’s worth developing social features for repositories when there is already an abundance of social software available. It’s a waste of time and effort and the repository scene will never be able to trump the features that the social web scene offers and that people increasingly expect to use.

Perhaps this is where we get closest to disagreement, since the EdShare team have been developing social features for ePrints that mirror those found on Web 2.0 sites. (The comment form is at the bottom…)

Related to this was the second theme of working with the technologies and specifications of web 2.0 sites, most notably RSS/ATOM syndication feeds. Patrick Lockley’s presentation on the Xpert repository was entirely about this, and Lisa Rogers and Sarah Currier both emphasised the importance of RSS (and in Lisa’s case easily-implemented APIs in general) in getting what they had done to work.

So, again, what do we need to do to continue this? Firstly there was a call to do more to synthesise and disseminate information about what approaches people are trying and what is working, so that other projects can follow the successful pioneers. Secondly there is potentially work to be done in smoothing the path that is taken; for example, the Xpert project has found many complexities and irregularities in syndication feeds that could perhaps be avoided if we could provide some norms and guidelines for how to use them.

A theme that didn’t quite get discussed, but is nonetheless interesting, was around openness. Joss Winn made a very valid distinction between the open web and the social web, one which I had blurred in the build-up to the event. So Facebook is part of the social web but is by no means open. There was some discussion about whether openness is important in achieving the goals of, e.g., disseminating learning resources. For example, iTunesU is used successfully by many to disseminate pod- and videocasts of lectures, and arguably the vertical integration offered by Apple’s ownership/control of all the levels leads to a better user experience than is the case for some of the alternatives.

All in all, I think we found ourselves broadly in agreement with the outcomes of the ADL Repository and Registries summit, as summarised by Dan Rehak, especially in: the increase in interest in social media and web 2.0 rather than conventional, formal repositories; the focus on understanding what we are really trying to do and finding out what users really want; and in not wanting new standards, especially not new repository-specific standards.

Finally, thanks to Roger Greenhalgh, I now know that there is a world carrot museum online.

Repositories and the Open Web

On 19 April CETIS are holding a meeting in London on Repositories and the Open Web. The theme of the meeting is how repositories and social sharing / web 2.0 web sites compare as hosts for learning materials: how well does each facilitate the tasks of resource discovery and resource management; what approaches to resource description do the different approaches take; and are there any lessons that users of one approach can draw from the other?

Both the title of the event (does the ‘and’ imply a distinction? why not repositories on the open web?) and the tag CETISROW may be taken as slightly provocative. Well, the tag is meant lightheartedly, of course, and yes, there is a rich vein of work on how repositories can work as part of the web. Just looking back at previous CETIS events, I would like to highlight these contributions to previous meetings:

  • Lara Whitelaw presented on the PROWE Project, about using wikis and blogs as shared repositories to support part-time distance tutors, in June 2006.
  • David Davies spoke about RSS, Yahoo! Pipes and mashups in June 2007.
  • Roger Greenhalgh, talking about the National Rural Knowledge Exchange, in the May 2008 meeting. And many of us remember his “what’s hot in pigs” intervention in an earlier meeting.
  • Richard Davis talking about SNEEP (social network extensions for ePrints) at the same meeting.

Most recently we’ve seen a natural intersection between the aims of Open Educational Resources initiatives and the use of hosting on web 2 and social sharing sites, so, for example, the technical requirements suggested for the UKOER programme said this under delivery platforms:

Projects are free to use any system or application as long as it is capable of delivering content freely on the open web. However all projects must also deposit their content in JorumOpen. In addition projects should use platforms that are capable of generating RSS/Atom feeds, particularly for collections of resources e.g. YouTube channels. Although this programme is not about technical development projects are encouraged to make the most of the functionality provided by their chosen delivery platforms.

We have followed this up with some work looking at the use of distribution platforms for UKOER resources which treats web 2 platforms and repository software as equally useful for that task.
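
As an aside, the feed requirement in that guidance is not onerous: a minimal RSS 2.0 feed for a collection of resources can be generated with nothing more than a standard library, as in this sketch (the resources listed are made up).

```python
# A minimal sketch of generating an RSS 2.0 feed for a collection of
# resources, using only the Python standard library. The resource list
# is illustrative.
import xml.etree.ElementTree as ET

resources = [
    {"title": "Thermodynamics lecture 1", "url": "https://example.org/r/1",
     "description": "Slides and notes, CC BY."},
    {"title": "Thermodynamics lecture 2", "url": "https://example.org/r/2",
     "description": "Slides and notes, CC BY."},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example OER collection"
ET.SubElement(channel, "link").text = "https://example.org/collection"
ET.SubElement(channel, "description").text = "An open collection of course materials."

for r in resources:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = r["title"]
    ET.SubElement(item, "link").text = r["url"]
    ET.SubElement(item, "description").text = r["description"]

print(ET.tostring(rss, encoding="unicode"))
```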

So, there’s a longstanding recognition that repositories live on the open web, and that formal repositories aren’t the only platform suitable for the management and dissemination of learning materials. But I would be missing something I think important if I left it at that. For some time I’ve had misgivings about the direction in which conceptualising your resource management and dissemination as a repository leads. A while back a colleague noticed that a description of some proposed specification work, which originated from repository vendors, developers and project managers, talked about content being “hidden inside repositories”, which we thought revealing. Similarly, I’ve written before that repository-think leads to talk of interoperability between repositories and repository-related services. Pretty soon one ends up with a focus on repositories and repository-specific standards per se, and not on the original problem of resource management and dissemination. A better solution, if you want to disseminate your resources widely, is not to “hide them in repositories” in the first place. Also, in repository-world the focus is on metadata rather than resource description: the encoding of descriptive data into fields can be great for machines, but I don’t think we’ve done a great job of getting that encoding right for the educational characteristics of resources, and this has been at the expense of providing suitable information for people.

Of course not every educational resource is open, and so the open web isn’t an appropriate place for all collections. Also, once you start using some of the web 2.0 social sharing sites for resource management you begin to hit some problems (no option for Creative Commons licensing, assumptions that the uploader created/owns the resource, limitations on export formats, etc.), though there are some exceptions. It is, however, my belief that all repository software could benefit from the examples shown by the best of the social sharing websites, and my hope that we will see that in action during this meeting.

Detail about the meeting (agenda, location, etc.) will be posted on the CETIS wiki.

Registration is open, through the CETIS events system.

Feeding a repository

There has been some discussion recently about mechanisms for remote or bulk deposit in repositories and similar services. David Flanders ran a very thought-provoking and lively show-and-tell meeting a couple of weeks ago looking at deposit. In part this is familiar territory: looking at and tweaking the work that the creators of the SWORD profile have done based on APP, or looking again at WebDAV. But there is also a newly emerging approach of using RSS or Atom feeds to populate repositories, a sort of feed-deposit. Coincidentally we also received a query at CETIS from a repository which is looking to collect outputs of the UKOER programme, asking for help in firming up the requirements for bulk or remote deposit, and asking how RSS might fit into this.

So what is this feed-deposit idea? The first thing to be aware of is that, as far as I can make out, a lot of the people who talk about this don’t necessarily have the same idea of “repository” and “deposit” as I do. For example, the Nottingham Xpert rapid innovation project and the Ensemble feed aggregator are both populated by feeds (you can also disseminate material through iTunesU this way). But (I think) these are all links-only collections, so I would call them catalogues, not repositories, and I would say that they work by metadata harvest(*), not deposit. However, they do show that you can do something with feeds, which those who think RSS or Atom is about stuff like showing the last ten items published should take note of. The other thing to take note of is podcasting, by which I don’t mean sticking audio files on a web server and letting people find them, but feeds that either carry or point to audio/video content so that applications and devices like phones and wireless-network-enabled media players can automatically load that content. If you combine what Xpert and Ensemble are doing by way of getting information about entire collections with the way that podcasts let you automatically download content, then you could populate a repository through feeds.

The trouble is, though, that once you get down to details there are several problems and several different ways of overcoming them.

For example, how do you go beyond having a feed for just the last 10 resources? Putting everything into one feed doesn’t scale. If your content is broken down into manageable-sized collections (e.g. the OU’s OpenLearn courses, and I guess many other OER projects) you could put everything from each collection into a feed and then have an OPML file to say where all the different feeds are (which works up to a point, especially if the feeds will be fairly static, until your OPML file gets too large). Or you could have an API that allows the receiver of the feed to specify how they want to chunk up the data: OpenSearch should be useful here, and it might be worth looking at YouTube as an example. Then there are similar choices to be made for how just about every piece of metadata, and the content itself, is expressed in the feed, starting with the choice of flavour(s) of RSS or Atom feed.
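
To illustrate the OpenSearch suggestion, here is a sketch of how a receiver might page through an entire collection using OpenSearch-style searchTerms/startIndex/count parameters. The URL template is hypothetical, standing in for what a real OpenSearch description document would advertise.

```python
# A sketch of paging through a whole collection via OpenSearch-style
# parameters, rather than settling for the last ten items. The endpoint
# and template are hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

TEMPLATE = "https://example.org/oer/search?q={terms}&start={start}&count={count}"
PAGE_SIZE = 50

def fetch_all(terms):
    start = 1  # OpenSearch start indices are 1-based by default
    while True:
        url = TEMPLATE.format(terms=terms, start=start, count=PAGE_SIZE)
        with urllib.request.urlopen(url) as resp:
            tree = ET.parse(resp)
        items = tree.findall(".//item")
        if not items:
            break  # past the end of the collection
        for item in items:
            yield item.findtext("title"), item.findtext("link")
        start += PAGE_SIZE

for title, link in fetch_all("engineering"):
    print(title, link)
```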

But feed-deposit is a potential solution, and it’s not good to start with a solution and then articulate the problem. The problem that needs addressing (by the repository that made the query I mentioned above) is how best to deposit hundreds of items given (1) a local database which contains the necessary metadata and (2) enough programming expertise to read that metadata from the database and republish it or post it to an API. The answer does not involve someone sat for a week copy-and-pasting into a web form that the repository provides as its only means of deposit.

There are several ways of dealing with that. So far a colleague who is in this position has had success depositing into Flickr, SlideShare and Scribd by repeated calls to their respective APIs for remote deposit (which you could call a depositor-push approach), but an alternative is that she puts the resources somewhere and provides information to tell repositories where they are, so that any repository that listens can come and harvest them (which would be more like a repository-pull approach, and in which case feed-deposit might be the solution).
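
For concreteness, here is a sketch of the depositor-push loop: read each record from the local database and make one API call per resource. The table layout and the deposit endpoint are hypothetical; the real upload APIs (Flickr, SlideShare, Scribd) each have their own parameters and authentication, which this glosses over.

```python
# A sketch of depositor-push bulk deposit: one API call per record in a
# local database. The database schema and endpoint are hypothetical.
import json
import sqlite3
import urllib.request

DEPOSIT_URL = "https://repository.example/api/deposit"  # hypothetical

conn = sqlite3.connect("resources.db")
rows = conn.execute("SELECT title, description, licence, file_url FROM resources")

for title, description, licence, file_url in rows:
    record = {"title": title, "description": description,
              "licence": licence, "file_url": file_url}
    req = urllib.request.Request(
        DEPOSIT_URL,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(title, resp.status)  # one deposit call per resource

conn.close()
```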

[* Yes, I know about OAI-PMH, the comparison is interesting, but this is a long post already.]

Resource description requirements for a UKOER project

CETIS have provided information on what we think are the metadata requirements for the UK OER programme, but we have also said that individual projects should think about their own metadata requirements in addition to these. As an example of what I mean by this, here is what I produced for the Engineering Subject Centre’s OER project.

Like it says on the front page, it’s an attempt to define what information about a resource should be provided, why, for whom, and in what format, where:

“Who” includes project funders (HEFCE, with JISC and the Academy as their agents), project partners contributing resources, project managers, end users (teachers and students), and aggregators, that is, people who wish to build services on top of the collection.

“Why” includes resource management, selection and use as well as discovery through Google or otherwise, etc. etc.

“Format” includes free text for humans to read (which is incidentally what Google works from) and encoded text for machine operations (e.g. XML, RSS, HTML metatags, microformats, metadata embedded in other formats or held in the database of whatever content management system lies behind the host we are using).
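
As a small illustration of that last point, the same description can serve people as visible free text and machines as encoded fields in one page. The sketch below uses the common Dublin Core meta-tag convention; the resource described is made up, and how much control you have over a page’s head varies by host.

```python
# A sketch of publishing one description twice: as Dublin Core meta tags
# for machines and as visible free text for people (and Google). The
# resource is invented for illustration.
resource = {
    "title": "Laplace transforms: an introduction",
    "description": "First-year lecture notes with worked examples.",
    "creator": "Engineering Subject Centre",
    "rights": "http://creativecommons.org/licenses/by/2.0/uk/",
}

page = f"""<html>
<head>
  <title>{resource['title']}</title>
  <meta name="DC.title" content="{resource['title']}">
  <meta name="DC.creator" content="{resource['creator']}">
  <meta name="DC.rights" content="{resource['rights']}">
</head>
<body>
  <h1>{resource['title']}</h1>
  <p>{resource['description']}</p>  <!-- the free text people (and Google) read -->
</body>
</html>"""
print(page)
```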

You can read it on Scribd: Resource description requirements for EngSC OER project

[I should note that I work for the Engineering Subject Centre as well as CETIS and this work was not part of my CETIS work.]

It would be useful to know if other projects have produced anything similar…

Distribution platforms for OERs

One of the workpackages for CETIS’s support of the UKOER programme is:

Technical Guidelines–Services and Applications Inventory and Guidance:
Checklist and notes to support projects in selecting appropriate publication/distribution applications and services with some worked examples (or recommendations).
Output: set of wiki pages based on content type and identifying relevant platforms, formats, standards, ipr issues, etc.

I’ve made a start on this here, in a way which I hope will combine the three elements mentioned in the workpackage:

  1. An inventory of host platforms by resource type: which platforms are being used for which media or resource types?
  2. A checklist of technical factors that projects should consider in their choice of platform.
  3. Further information and guidance for some of the host platforms; essentially, that’s the checklist filled in.

In keeping with the nature of this phase of the UKOER programme as a pilot, we’re trying not to be prescriptive about the type of platform projects will use. Specifically, we’re not assuming that they will use standard repository software and are encouraging projects to explore and share any information about the suitability of web2.0 social sharing sites. At the moment the inventory is pretty biased to these web2.0 sites, but that’s just a reflection of where I think new information is required.

How you can help

Feedback
Any feedback on the direction of this work would be welcome. Are there any media types I’m not considering that I should? Are the factors being considered in the checklist the right ones? Is the level of detail sufficient? Where are the errors?

Information
I want to focus on the platforms that are actually being used, so it would be helpful to know which these are. Also, I know from talking to some of you that there is invaluable experience about using some of these services; for example, some APIs are better documented than others, some offer better functionality than others, and some have limitations that aren’t apparent until you try to use them seriously. It would be great to have this in-depth information; there is space in the entry for each platform for these “notes and comments”.

Contributions
The more entries are filled out the better, but there’s a limit on what I can do, so all contributions would be welcome. In particular, I know that iTunes/iTunesU is important for audio video / podcasting, but I don’t have access myself — it seems to require some sort of plug-in called “iTunes” ;-) — so if anyone can help with that I would be especially grateful.

Depending on how you feel, you can help by emailing me (philb@icbl.hw.ac.uk), or by registering on the CETIS wiki and either using the article talk page (please sign your comments) or editing the article itself. Anything you write is likely to be distributed under a Creative Commons CC BY-NC licence.