Other Voices » architecture (Cetis blog for guest posts)
http://blogs.cetis.org.uk/othervoices

Improving the student experience with an improved tutorial selection process
Fri, 17 Aug 2012
http://blogs.cetis.org.uk/othervoices/2012/08/17/improving-the-student-experience-with-improved-an-improved-tutorial-selection-process/

As part of the JISC Institutional Approaches to Curriculum Design programme, the UG-Flex project at the University of Greenwich set out to “reveal and enhance the University’s curriculum development processes in order to support a more agile and diverse curriculum underpinned by integrated systems.”

As part of an ongoing dialogue about the technical aspects of the project, the team shared with us some of their plans for developing more sophisticated, real-time timetabling processes. Although this work is not directly related to the UG-Flex project, it is a good example of how the choice of systems, and their integration, can make a positive contribution to the day-to-day delivery of the University’s curriculum. Clifton Kandler, Web Services Manager, explains more in this guest post.

The Problem
Like many universities, at the start of courses (programmes) our course leaders are faced with the need to separate students into groups for tutorials, lab sessions and, on larger courses, lectures. For our Business School, which has courses with up to 500 registered students, this has been a particularly important issue for some time. Having moved on from collating students’ tutorial selections from pieces of paper placed on notice boards, prior to our migration to Moodle the Business School used the group functionality within WebCT either to allocate students to tutorials or to enable them to self-select a tutorial slot.

The lack of integration between WebCT and our timetabling system, Syllabus+ from Scientia, however meant that setting up these groups within WebCT was a manual process. Once students had been allocated to tutorials or had made a tutorial selection, manual intervention was again required to pass this information back to the timetabling system so that a personalised timetable could be constructed for each student, accessed via our portal (Luminis from Ellucian).

These points of manual intervention resulted in errors and delays in providing students with accurate timetable information at the start of courses; frustration on the part of course leaders, who could not be sure who should be in their tutorials; and consequent delays in, for example, organising students into groups for group assignments.

The Opportunities
A clear opportunity existed to improve the experience of students, academics, School administrators and timetabling staff by integrating the systems involved and removing the points of manual intervention.

The decision to migrate to Moodle as our institutional VLE for the start of the 2011/12 academic year also provided an opportunity to develop the environment to meet our specific challenges; this was one of the decisive factors in choosing Moodle. The timetable selection block was the first area of development chosen.

The final opportunity came in the form of SunGard’s Infinity Process Platform (IPP). This business process management tool enabled us, with SunGard’s help, to model, analyse and execute the workflows and integrations required. The tool is used extensively in the financial services industry, and Greenwich is the first institution to use it in a Higher Education context.

The Process
A series of workshops was held with representatives from Schools to further understand the problem to be addressed and to draw up a list of requirements and a timetable for development. As well as meeting the objectives set, a major outcome of these workshops was participants’ acknowledgment of the complexity, from a systems integration perspective, of producing a solution to the issues raised. The following requirements were identified:

The solution should enable:
*Allocation of students on a Moodle course to tutorials.
*Self-selection of a tutorial by students.
*Reduction of the size of a tutorial on the fly – allowing staff to hide the full tutorial capacity in case they need to move students.
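Conceptually, the self-selection requirement comes down to capacity bookkeeping: a student may only join a tutorial that still has advertised places free, and staff can lower the advertised size below the real capacity to hold back places for students they may need to move. A minimal sketch of that logic follows; all class and function names here are hypothetical, and the real block is of course implemented inside Moodle rather than as standalone code.

```python
class Tutorial:
    """A tutorial slot with a real capacity and an advertised limit.

    Staff can set `advertised` below `capacity` to hold back places
    for students they may need to move ("reduce the size on the fly").
    """

    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.advertised = capacity  # staff may lower this at any time
        self.students = []

    def places_left(self):
        return self.advertised - len(self.students)

    def enrol(self, student):
        if student in self.students:
            return False  # already in this tutorial
        if self.places_left() <= 0:
            return False  # full, as advertised
        self.students.append(student)
        return True


def self_select(student, tutorials, choice):
    """Move a student into their chosen tutorial, leaving any other slot."""
    target = tutorials[choice]
    if not target.enrol(student):
        return False
    for t in tutorials.values():
        if t is not target and student in t.students:
            t.students.remove(student)
    return True
```

The same bookkeeping also covers staff-driven allocation: an allocator simply calls the enrol step on behalf of each student instead of the student choosing.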

An eight week timetable was identified for the delivery of the project.

Systems Integration Achieved

[Diagram of system integration]

Challenges
The major challenge for this development has been managing the co-ordination of the four parties involved (including Greenwich): the University of London Computing Centre, who host our Moodle environment; SunGard, who provide IPP; and Scientia, who provide the timetable system. A steep learning curve was involved in delivering this project within a tight eight-week time frame and on budget.

Enabling users to articulate their requirements was an additional challenge; the tendency is for users largely to ask for what they already have, and only to fully understand their requirements after actual use (see below). The ability to develop quickly in Moodle and IPP has meant that we have been able to respond to new requirements as they have emerged.

Implementation and Subsequent Development
The selection block was used on all 561 Business School courses at the start of the 2012/13 academic year and has been very popular with students, who report valuing the additional control they now have over their timetables. We are clearly providing a better service to students.

Following the initial implementation the timetable block has been further developed to provide the following additional functionality:
● Allow staff to time-release the block on a course-by-course basis.
● Allow staff to make changes to all activities in one go.
● Allow staff to download the list of allocated students for a tutorial.
● Hide individual tutorials.
● Change the size of a tutorial in Moodle – this change is not written back to Syllabus+.

The timetable block will be used by our Engineering, Humanities and Social Sciences, and Computing and Maths Schools, as well as Business, at the start of the 2012/13 academic year, which means that over 70% of courses will be using the development.

Conclusion
The development of the timetable selection block has not only enabled us to improve the student experience through process improvement, but has also given us our first serious experience of working with a business process modelling tool, supporting Greenwich’s desire to be a more agile and service-oriented institution.

About Clifton
Clifton Kandler is the Web Services Manager at the University of Greenwich, leading the team responsible for the development, implementation and support of the University’s VLE, Portal, Library management system and e-portfolio.

Curriculum Design: The Big Picture
Wed, 29 Jun 2011
http://blogs.cetis.org.uk/othervoices/2011/06/29/curriculum-design-the-big-picture/

Guest Post

As part of our series of articles on the technical aspects of the e-Learning Programme, over the next few weeks Lou McGill, e-learning consultant, will be focussing on the Institutional Approaches to Curriculum Design projects. Now three years into the four-year programme, these twelve projects have been exploring the convoluted and sometimes opaque processes universities use to design, validate and deliver courses. In this first post Lou discovers just how useful modelling approaches have been in helping projects clarify these curriculum processes.

Engaging with institutional processes and practice

Learning and teaching is, of course, the core business of our universities and colleges, but the processes around how courses are designed and developed are sometimes fragmented. Curriculum design connects with several processes and systems within an educational institution and impacts on a range of stakeholders. It is difficult to engage with institutional processes without referring to ‘business’ language – and talking about curriculum design in this way can easily alienate the very people you need to engage. Taking a business process view of educational activities, however, can help to highlight technical and system requirements as well as supporting strategic planning and development. Similarly, focussing on efficiencies, reducing duplication and saving time can result in real enhancements for both the staff and student experience, as highlighted by the JISC Transforming Curriculum Delivery Through Technology programme, which has just completed (see the end of this post for more information).

Projects taking part in the ongoing JISC funded Institutional approaches to Curriculum Design programme have become very familiar with the challenges of taking an institution-wide business process approach. How can you get academic staff who often see the course approval process as a ‘form-filling exercise’ to use it as an opportunity to re-imagine and re-think their curricula? How do you convince staff to see the bigger institution-wide picture and understand the links between the seemingly separate processes that support teaching and learning? How do you utilise technologies to take some of the burden of these processes away, leaving staff time to focus on more important activities?

Projects have been responding to these challenges by looking at ways to integrate institutional systems effectively to prevent duplication and streamline processes, particularly in relation to learning and teaching technologies such as institutional VLEs (Virtual Learning Environments) or e-portfolio systems. They have had to articulate and demonstrate the added value that linking these systems brings to a range of different stakeholders. Their experiences in identifying which technologies and standards can meet institutional needs and, perhaps more importantly, which people in the institution have the knowledge to inform these decisions, are of value to other institutions. Key to streamlining these processes and integrating systems is the need to identify which data is central and how institutions collate, share and manage that information.

Project teams, like those at many universities, have also been grappling with the need to align curriculum design with external drivers such as employer needs, the widening participation agenda or practical things like UCAS or HERA requirements; institutional requirements for increased efficiency; and flexibility to respond to changing learner needs. Many institutional systems reflect a time when standard three-year degrees were the norm. Modularisation of courses and increasing numbers of part-time, distance and work-based students have resulted in the need for more agile systems that can reflect changing learning patterns, and for more flexible support mechanisms.

Sarah Knight, Programme Manager for the JISC e-Learning Team says ‘This is a difficult time for educational institutions as they struggle to make sure that they continue to offer high quality learning and teaching whilst responding to drivers for increased efficiency and the need to offer flexible learning choices. The projects in this programme are making excellent use of business process modelling and other innovative approaches to engage stakeholders, highlight their strengths and adapt their systems to be more effective. The wider HE and FE communities should find much to inform their own practice.’

For many projects the programme timing mirrored an institutional desire to review existing systems which provided an ideal opportunity to re-examine the processes affected by these systems.
The processes covered so far by the programme include:

  • course creation, approval, validation, documentation, QA, management (timetabling, resource allocation) and modification;
  • student data – enrolment, admissions, registration, progression and assessment, records and e-portfolios;
  • marketing and advertising – recruitment.

This range of processes is underpinned by a number of systems and data sources, many of which will have been developed over time in response to specific needs, and often without institution-wide consideration. At City University the PREDICT project (Promoting Realistic and Engaging Discussions in Curriculum Teams) proved timely, as the institution had identified a need to review its IS systems. The obvious practical value of the two things happening at the same time has been augmented by the long-term benefits of raising the profile and understanding of curriculum planning requirements. John Gallagher, Enterprise Architect at City, said:

‘Historically with IS our projects have ignored the impact of both Curriculum design and delivery unless that was the specific focus of the project. However now, thanks to the profile of PREDICT, we do try to assess any possible impact. An example of this recently was the introduction of a new Student Module Feedback system to provide additional information in order to assess academic performance. However this feedback is also essential for staff to review the design and delivery of the module. Prior to PREDICT we would have ignored this angle as it was not a specific deliverable of the project!’

Where to start?

With a four-year programme, projects had time to do a comprehensive baseline review and to invest significant time in engaging stakeholders. Baselining or benchmarking activities are of high value to any change process, as they offer time to look at and record where an institution is at the start of the process, to really examine existing practice, to identify strengths and to highlight weaknesses. Having to explain in detail something you do to others provides an opportunity to question activities that seem obvious but may be based on historical or traditional approaches that are no longer relevant.

When baselining is done across a programme of activities a picture of the sector begins to emerge; this one has highlighted issues relating to marketing, quality assurance and enhancement, understanding and responding to employer and learner needs and demands, assessment and feedback, and managing course information. These areas are discussed in more detail in a briefing paper produced in February 2011 and will, of course, be described further in the final synthesis reports at the end of the programme. This series of blog posts will focus on issues around managing course information and how institutions link this to other data through a range of processes and systems.

Business process modelling

During the baselining stage many diagrams were developed to capture the various processes and the data that flow from them. One example is the process workflow diagram from the PIP project (Principles in Patterns) at the University of Strathclyde.

[Process workflow diagram from the PIP project]

It can be useful to see how other institutions visualise their activities and processes, even if these visualisations tend to be context specific, but it is the process of developing the diagrams that has been of most significance to the institutions involved – such as the curriculum approval process mapping described by the PALET project (Programme Approval Lean Electronic Toolset) from Cardiff University. The dialogue and learning to be had from identifying, recording and sharing how the various business processes link up have been noted by several projects. The SRC project (Supporting Responsive Curricula) has produced an excellent case study detailing Manchester Metropolitan University’s stakeholder requirements for an academic database of programmes and units; the document describes the activities and includes examples of process diagrams. Engaging stakeholders through the use of rich pictures – a diagramming technique developed to capture stakeholder views in a non-confrontational way – was a method adopted by the UG-Flex project at the University of Greenwich. See this example – computer says no!

[Rich picture from UG-Flex: ‘computer says no’]

(A further explanation of developing rich pictures is available at: http://jiscdesignstudio.pbworks.com/w/page/24763278/Rich-Pictures)

It is clear even from these brief examples that projects have adopted a range of modelling approaches. JISC CETIS ran a workshop in May 2009 to highlight the ArchiMate enterprise architecture modelling language for curriculum design processes. Discussions with projects at the end of the CETIS workshop highlighted that:

‘In terms of cost-benefit, adopting a modelling approach for those projects that didn’t already use it, opinions differed. Some felt that the investment in software and skills acquisition were only worth it if an institution took the strategic decision to adopt a modelling approach. Others felt that a lightweight, iterative use of Archimate, perhaps using common drawing tools such as Visio during the trial stage, was a good first step. Yet others thought that modelling was worth doing only with considerable investment in tools and skills right at the beginning.’ Wilbert Kraan, JISC CETIS

The SRC project used ArchiMate components to present visualisations during baselining activities and has also blogged about this. The team presented at the CETIS workshop on their experiences with ArchiMate, which they decided to try after finding the Course Validation Reference Model (COVARM) useful but a bit complex for most stakeholders. The SRC project also used UML (Unified Modelling Language) to develop more detailed diagrams; examples are available in the SRC case study.

A range of project diagrams and process maps reflecting the different modelling approaches will be made available in the JISC Design Studio which is being added to as projects progress. The JISC CETIS Architecture and Modelling page provides an ongoing picture of developments in this area. It seems likely that these projects have significant potential to further our understanding of some key interoperability standards such as XCRI, learning design specifications, competency standards and qualification frameworks, particularly in relation to how these support data sharing across the range of institutional systems.

A shorter, two-year sister programme ran in parallel to the Institutional Approaches to Curriculum Design programme, focussing on curriculum delivery – the space where students engage with the curriculum. The two programmes naturally overlapped, curriculum design and delivery having close synergies. The Transforming Curriculum Delivery Through Technology programme has now completed, and its outcomes (lessons learned) and outputs (case studies, guidelines, etc.) are incorporated into the JISC Design Studio. Both programmes are feeding into this resource, which was created during the programmes to serve both the projects themselves and, ultimately, the wider community. http://jiscdesignstudio.pbworks.com/w/page/40379712/Transforming-Curriculum-Delivery-through-Technology

http://jiscdesignstudio.pbworks.com/w/page/40489793/Institutional-Approaches-to-Curriculum-Design

About Lou

Lou McGill is currently working independently and has recently been involved in synthesis and evaluation activities for the HE Academy/JISC UKOER programme and the JISC Transforming Curriculum Delivery Through Technology programme. She led the team that produced the Good Intentions report on business cases for sharing and worked on the LLiDA study (Learning Literacies in a Digital Age). She has experience of working in a range of HE institutions as a librarian, learning technologist and project manager, and used to be a JISC Programme Manager on the e-learning team. In the distant past she worked for NIACE (the adult learning organisation) and in health education for the NHS. Her interests and experience include digital literacy, information literacy, open education, distance learning, managing organisational change, and effective use of technologies to support learning. Further information on Lou’s work can be found at: http://loumcgill.co.uk

The Learning Registry: “Social Networking for Metadata”
Tue, 22 Mar 2011
http://blogs.cetis.org.uk/othervoices/2011/03/22/thelearningregistry/

Guest post

Dan Rehak talks about the Learning Registry, a new initiative offering an alternative approach to learning resource discovery, sharing and usage tracking. The Learning Registry prioritizes sharing and second-party usage data and analytics over first-party metadata. It’s an enabling infrastructure accessible by anyone, has no mandated data standards, can be replicated worldwide, and is open, cloud-based, and app-ready.

Isn’t this all too hard?

Let’s imagine that you’re a secondary school physics teacher and you want to build a lesson on orbital mechanics which combines elements of physics, maths, the history of the space program and a writing assignment. Where would you go to find the learning resources you need, either lessons to reuse or individual pieces? A search engine might help in finding the individual pieces, but even formulating a query for the entire lesson is difficult. If you want images and primary historic source material, you’ll probably have to search individual collections: NASA, the US National Archives, Smithsonian, Library of Congress, and probably multiple repositories for each. Let’s assume you found several animations on orbital mechanics. Can you tell which of these are right for your students (without having to preview each)? Is there any information about who else has used them and how effective they were? How can you provide your feedback about the resources you used, both to other teachers and to the organizations that published or curated them? Is there any way to aggregate this feedback to improve discoverability?

The Learning Registry.

The Learning Registry (http://www.learningregistry.org/) is defining and building an infrastructure to help answer these questions. It provides a means for anyone to “publish” information about their learning resources (both resources specifically created for education and primary source materials, including historic and cultural heritage resources). It allows anyone to use the published information. Beyond metadata and descriptions, this information includes usage data, feedback, rankings, likes, etc.; we call this paradata. It provides a metadata timeline—a stream of activity data about a learning resource. It enables building better discovery tools (search, recommender systems), but it’s not a search engine, a repository, or a registry in the conventional sense.

Share. Find. Use. Amplify.

With the Learning Registry, anyone can put any information into the timeline, anyone can get it from the timeline, anyone can filter parts of it for their communities, and anyone can provide feedback, all through a distributed infrastructure. NASA could “announce” a new animation. The PBS (US Public Broadcasting Service) Teachers portal could be watching for NASA publishing events, and add the resource to their secondary school “Science and Tech” stream. The National Science Digital Library (NSDL) could also provide it via one of their “Pathways”. A teacher using the PBS portal could find it, rank it, comment on it, or share it via the PBS portal. PBS can publish this paradata back into the timeline. NSDL would see this new paradata and could update their information about the resource. NSDL paradata also flows back into the timeline. NASA could monitor the timeline and see how and where their resource is being used; the timeline and cumulative paradata provides more contextual information than just server access logs. This enables resource and information sharing, discovery, access, integration and knowledge amplification among producers, consumers and brokers. We believe that this paradata timeline can be more valuable than traditional metadata for learning resource discovery and that collaborative publishing to the network amplifies the available knowledge about what learning resources are effective in which learning contexts.
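To make the timeline idea concrete: each event in the stream is just a small assertion about a resource, and a consumer (a portal, a recommender system) can fold the stream into per-resource summaries. The event structure below is entirely invented for illustration; the Learning Registry deliberately mandates no paradata format, so real streams will vary.

```python
# Three hypothetical paradata events about one animation. The field
# names ("resource", "verb", "actor") and the URL are illustrative only.
events = [
    {"resource": "http://example.org/nasa/orbit-animation",
     "verb": "shared", "actor": "PBS Teachers"},
    {"resource": "http://example.org/nasa/orbit-animation",
     "verb": "ranked", "actor": "a teacher", "value": 4},
    {"resource": "http://example.org/nasa/orbit-animation",
     "verb": "shared", "actor": "NSDL"},
]


def timeline_summary(events):
    """Fold a paradata stream into per-resource activity counts,
    the kind of aggregate a discovery tool might feed on."""
    summary = {}
    for e in events:
        counts = summary.setdefault(e["resource"], {})
        counts[e["verb"]] = counts.get(e["verb"], 0) + 1
    return summary
```

In this sketch the NASA animation would show two shares and one ranking; it is exactly this kind of accumulated activity, rather than first-party metadata alone, that the timeline makes available to everyone on the network.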

[Diagram: Share. Find. Use. Amplify.]

The Infrastructure.

The operational Learning Registry is a light-weight learning resource information sharing network: a high-latency, loosely connected network of master-master synchronizing brokers (nodes) distributing resources, metadata and paradata. The network provides distributed publishing and access. There are simple RESTful+JSON core APIs (limited to publish, obtain, and distribute). You can publish anything to the timeline at any node and access it all from any other node. There is no central control, central registries or central repositories in the core resource distribution network. All data eventually flows to all nodes.
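As a sketch of what the publish API handles, here is roughly the JSON "resource data description" envelope a publisher might POST to a node's /publish endpoint. The field names follow the draft specification as it stood around this time (doc_version 0.23.0) and may since have changed, so treat this as illustrative rather than authoritative; the locator and submitter values are made up.

```python
import json


def resource_envelope(locator, payload, schema, submitter):
    """Build a "resource data description" envelope for one resource.

    Field names follow the draft Learning Registry spec (circa 2011);
    check the current spec before publishing for real.
    """
    return {
        "doc_type": "resource_data",
        "doc_version": "0.23.0",
        "resource_data_type": "metadata",   # or "paradata"
        "active": True,
        "identity": {"submitter": submitter, "submitter_type": "agent"},
        "resource_locator": locator,        # URL of the resource itself
        "payload_placement": "inline",
        "payload_schema": schema,           # declared, not mandated
        "resource_data": payload,           # any format the publisher likes
    }


def publish_body(envelopes):
    """The JSON body POSTed to a node's /publish endpoint."""
    return json.dumps({"documents": envelopes})
```

Note what is absent: there is no required metadata schema inside `resource_data`; the envelope only declares what the payload claims to be, which is what lets the network carry metadata and paradata in any format side by side.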

Apps and APIs.

We’re also defining a small set of non-core APIs for integration with existing edge services, e.g., SWORD for repository publishing, OAI-PMH for harvest from the network to local stores, SiteMaps for search engine exposure, ATOM for syndication, and we encourage people to build and share more APIs. We don’t provide (and don’t plan to provide) a search engine, community portal, recommender system, etc. We want the community to build their own value-added services on top of the core network infrastructure and are exploring a variety of tools and social plug-ins (e.g., Fedora, DSpace, Moodle, SAKAI, WordPress, Facebook). Thus while we’re defining the transport network, we only expect that it will be used directly by application developers. Teachers and students will interact with the timeline via their local environments, and shouldn’t know or care about the Learning Registry infrastructure. Our aim is only to provide the essential core features and to be enabling.

[Diagram: Apps and APIs]

No Metadata Standards.

We don’t mandate any metadata or paradata formats nor do we attempt to harmonize—that would be futile. We encourage simple hashtags and allow “legacy” metadata in any format. We assume some smart people will do some interesting (and unanticipated) things with the timeline data stream (e.g., consider the JISC CETIS OER Technical MiniProject on the Analysis of Learning Resource Metadata Records as an example) and that sub-communities will gravitate towards shared approaches.

Cool Tech.

Our initial implementation is built upon CouchDB (a NoSQL document-oriented database with RESTful JSON APIs, native JavaScript code execution, MapReduce support, and distributed data replication). Our design approach is influenced by its capabilities and design, but the APIs provide an abstraction layer on top of Couch. We’re using Python as an application layer where needed, but apps can be built in your favorite environment. You can install and run a node on your own hardware (Linux, Windows or Mac) or you can stand up a node in the cloud; we’re currently hosting nodes on Amazon EC2. We’re aiming for zero-config installers to make adding a node to the network simple and fast.
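The "all data eventually flows to all nodes" behaviour described above comes largely for free from CouchDB's replication. A toy model of that convergence property (plain Python, hypothetical names, no CouchDB involved; real replication also tracks revisions and resolves conflicts, which this ignores):

```python
class Node:
    """A toy stand-in for a Learning Registry node: a bag of documents
    keyed by id. Real nodes are CouchDB instances; this only models the
    convergence property of master-master synchronisation."""

    def __init__(self):
        self.docs = {}

    def publish(self, doc_id, doc):
        self.docs[doc_id] = doc


def replicate(a, b):
    """Bidirectional sync: each side takes documents it is missing."""
    for doc_id, doc in a.docs.items():
        b.docs.setdefault(doc_id, doc)
    for doc_id, doc in b.docs.items():
        a.docs.setdefault(doc_id, doc)
```

Even with only pairwise, high-latency connections between neighbouring nodes, repeated replication rounds carry every published document to every node, with no central registry in the path.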

[Diagram: Learning Registry network infrastructure]

The Project.

The project leadership comes from the U.S. Department of Education and U.S. Department of Defense, but we aim to be fully open and collaborative: open documents (CC-BY-3.0 in a public Google Docs collection: http://goo.gl/8I9Gc), open process (join our discussion list, participate in our calls [announced on the list], http://groups.google.com/group/learningregistry), open metadata and paradata (CC-BY-3.0), open source (Apache 2.0, code @ http://git.learningregistry.org/, project tracking @ http://tracker.learningregistry.org/), Open Spec (OWF, draft Spec @ http://goo.gl/2Cf3L). We’re working with a number of Governmental, NGO and commercial organizations worldwide. An initial public beta network is planned for September 2011; interested parties can connect to the testbed in April.

Try it @ OER Hackday.

We’re coming to Manchester. We have a working infrastructure with several operational nodes, basic installers, functioning core APIs and basic SWORD and OAI-PMH integration. We want to test our baseline and our other assumptions. We are interested in connecting to various sources (testing our APIs) and building sufficient data to try “interesting things”. We want to understand what value-added APIs are useful, explore doing things with the paradata timeline and mashup the Learning Registry with other tools (e.g., we’re exploring IndexTank as an external search index). Our rule of thumb is that it should take at most two hours to understand an API and less than a day to build something useful with it. Are we on track?

About Dan

Daniel R. Rehak, Ph.D., is Senior Technical Advisor, Advanced Distributed Learning Initiative (ADL) where he provides technical expertise in the areas of systems design, information management and architecture, with emphasis on learning and training technologies. He provides technical leadership to the DoD for the development of the Learning Registry.
