PLE, e-p, or what?

The concept of the personal learning environment could usefully be related more closely to the e-portfolio (e-p), as both can support the informal learning of skills, competence and so on, whether or not those abilities are formally defined.

Several people at CETIS/IEC here in Bolton had a wide-ranging discussion this Thursday morning (2010-02-18), focused around the concept of the “personal learning environment” or PLE. It’s a concept that CETIS people helped develop, starting from the Colloquia system around 1996, and which Bill Olivier and Oleg Liber formulated in a paper in 2001 — see http://is.gd/8DWpQ . The idea is definitely related to the e-portfolio, in that an e-p can store information related to this personal learning, and the idea is generally for portfolio information to continue “life-long” across different episodes of learning.

As Scott Wilson pointed out, it may be that the PLE concept overreached itself. Even to conceive of “a” system that supports personal learning in general is hazardous, as it invites people to design a “big” system in their own mind. Inevitably, such a “big” system is impractical, and the work on PLEs that was done between, say, 2000 and 2005 has now been taken forward in different ways — Scott’s work on widgets is a good example of enabling tools with a more limited scope, but which can be joined together as needed.

We’ve seen parallel developments in the e-portfolio world. I think back to LUSID, from 1997, where the emphasis was on individuals auditing and developing their transferable / employability skills. Then we increasingly saw the emergence of portfolio tools that included more functionality: presentation to others (through the web), and “social” communication and collaboration tools. Just as widgets can be seen as dethroning the concept of monolithic learning technology in general, so the “thin portfolio” concept (borrowing from the prior “personal information aggregation and distribution service” concept) represents the idea that you don’t need to hold all that portfolio information on one server, but that it is very helpful to have one place where you can access all “your” information, and set permissions for others to view it. This concept is only beginning to be implemented. The current PIOP 3 work plans to lay down more of the web services groundwork for this, but perhaps we should be looking over at the widgets work.
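Just to make the “thin portfolio” idea a little more concrete, here is a rough Python sketch — entirely illustrative, not PIOP or widget code, and all the names are my own invention. The point is only that the service holds references to items living on other servers, plus per-viewer permissions, rather than holding the content itself.

```python
# A minimal, hypothetical sketch of a "thin portfolio": references to items held
# elsewhere, with per-viewer permissions set in one place.
from dataclasses import dataclass, field

@dataclass
class PortfolioItemRef:
    """Reference to a piece of portfolio information stored on some other server."""
    title: str
    url: str                                   # where the item actually lives
    allowed_viewers: set = field(default_factory=set)

class ThinPortfolio:
    """One place to list 'your' information and decide who may view each item."""
    def __init__(self, owner: str):
        self.owner = owner
        self.items: list[PortfolioItemRef] = []

    def add_item(self, title: str, url: str) -> PortfolioItemRef:
        ref = PortfolioItemRef(title, url)
        self.items.append(ref)
        return ref

    def grant(self, ref: PortfolioItemRef, viewer: str) -> None:
        ref.allowed_viewers.add(viewer)

    def visible_to(self, viewer: str) -> list[PortfolioItemRef]:
        # The owner sees everything; others see only what they have been granted.
        if viewer == self.owner:
            return list(self.items)
        return [i for i in self.items if viewer in i.allowed_viewers]
```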

Skills and competences have long been connected with portfolio tools. Back in 1997 LUSID had a framework structure for employability skills. But what is new is the recent, greatly enlarged interest in learning outcomes, abilities, skills and competencies. Recent reading for eCOTOOL has revealed that the ECVET approach, as well as being firmly based on “outcomes” (on which ICOPER also focuses), also recognises non-formal and informal learning as central. Thus ECVET credit is not attached only to vocational courses, but also to the accreditation of prior learning by institutions that are prepared to validate the outcomes involved. Can we, perhaps, connect with this European policy, and develop tools aimed at helping to implement it? It takes far-sighted institutions to give up the short-term gain of students enrolled on courses and instead to assess their prior learning and validate their existing abilities. But surely it makes sense in the long run, as long as standards are maintained?

If we are to have learning technology — and it really doesn’t matter if you call them PLEs, e-portfolios or whatever — that supports the acquisition or improvement of skills and competence by individuals in their own diverse ways, then surely a central organising principle within those tools needs to be the skills, competencies or whatever that the individual wants to acquire or improve. Can we perhaps draw on the insights of PLE and related work, put them together with e-portfolio work, and focus on tools to manage the components of competence? In the IEC, we have all our experience from the TENCompetence project that has finished, as well as ICOPER, which is under way, and eCOTOOL, which is starting. Then we expect there will be work associated with PIOP 3 that brings in frameworks of skill and competence. Few people can be in a better position to do this work than we are in CETIS/IEC.

In part, I would formulate this as providing technology and tools to help people recognise their existing (uncertificated) skills, evidence them (the portfolio part), and then help them, and the institutions they attend, to assess this “prior learning” (APL) and bring it into the world of formal recognition and qualifications.

But I think there is another very important aspect to the technology connected with the PLE concept, and that is to provide the guidance that learners need to ensure they get on the “right” course. At the meeting, we discussed how employers often do not want the very graduates whose studies have titles that seem to relate directly to the job. What has gone wrong? It’s all very well treating students like customers — “the customer is always right” — but what happens when a learner wants to take a course aimed at something one believes they are not going to be successful at? Perhaps the right intervention is to start earlier, helping learners clarify their values before their goals, and understand who they are before deciding what they should do. This would be “personal learning” in the sense of learning about oneself. Perhaps the PDP part of the e-portfolio community, and those who come from careers guidance, know more about this, but even they sometimes seem not to know what to do for the best. To me, this self-knowledge requires a social dimension (with the related existing tools), and is something that needs to be able to draw on many aspects of a learner’s life (a “lifewide” portfolio, perhaps).

So, to reconstruct PLE ideas, not as monolithic systems, but as parts, there are two key parts in my view.

The first would be a tool for bringing together evidence residing in different systems, and organising it to provide material for reflection on, and evidence of, skills and competence across different areas of life; it would integrate with institutional systems for recognising what has already been learned, as well as slotting people into suitable learning opportunities. This would play a natural part in continuing professional development, in meeting the relatively short-term learning, education and training needs that are visible from an existing working perspective, and thus in the kind of workplace learning that many are predicting will need to grow.

The second may perhaps be not one tool but several, to help people understand themselves, their values, their motives, their real goals, and the activities and employment that they would actually find satisfying, rather than what they might falsely imagine. Without this function, any learning, education or training risks being wasted. Doing this seems much more challenging, but also much more deeply interesting to me.

ICOPER and outcomes

The other European project I’m involved in for CETIS is called ICOPER. Over the last couple of weeks I’ve been doing some work improving the deliverable D2.2, mainly working with Jad Najjar. I flag it here because it uses some of the conceptual modelling work I’ve been involved in. My main direct contribution is Section 2. This starts with an adaptation of part of my diagram from a recent post here. It is adapted by removing the part on the right, for recognition, as that is of relatively minor importance to ICOPER. As ICOPER is focused on outcomes, the “desired pattern” is relabelled as “intended learning outcome or other objective”. I thought this time it would be clearer without the groupings of learning opportunity or assessment. And ICOPER is not really concerned with reflection by individuals, so that is omitted as well.

In explaining the diagram, I explain what the different colours represent. I’m still waiting for critique (or reasoned support, for that matter) of the types of thing I find so helpful in conceptual modelling (again, see previous post on this).

As I find so often, detailed thinking for any particular purpose has clarified one part of the diagram. I have introduced (and will bring back into the mainstream of my modelling) an “assessment result pattern”. I recognise that logically you cannot specify actual results as pre-requisites for opportunities, but rather patterns, such as “pass” or “at least 80%” for particular assessments. It takes a selection process (which I haven’t represented explicitly anywhere yet) to compare actual results with the required result pattern.
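A tiny sketch, in Python, of that selection process might help — the names and categories are invented purely for illustration. The prerequisite is stated as a result pattern such as “pass” or “at least 80%”, and actual results are compared against it.

```python
# Illustrative only: a required "assessment result pattern" and the selection
# step that compares an actual result with it.
from dataclasses import dataclass

@dataclass
class AssessmentResultPattern:
    assessment_id: str
    kind: str              # "pass" or "min_percentage" (invented categories)
    threshold: float = 0.0

@dataclass
class AssessmentResult:
    assessment_id: str
    passed: bool
    percentage: float

def satisfies(result: AssessmentResult, pattern: AssessmentResultPattern) -> bool:
    """The 'selection process': does this actual result match the required pattern?"""
    if result.assessment_id != pattern.assessment_id:
        return False
    if pattern.kind == "pass":
        return result.passed
    if pattern.kind == "min_percentage":
        return result.percentage >= pattern.threshold
    return False

# e.g. a prerequisite of "at least 80%" in a (hypothetical) assessment "A1":
prerequisite = AssessmentResultPattern("A1", "min_percentage", 80.0)
print(satisfies(AssessmentResult("A1", True, 85.0), prerequisite))  # True
```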

Overall, this section 2 of the deliverable explains quite a lot about a part of the overall conceptual model, presented at least approximately from the point of view of ICOPER. The title of this deliverable, “Model for describing learning needs and learning opportunities taking context ontology modelling into account”, was perhaps not what would have been chosen at the time of writing, but we needed to write to satisfy that title. Here, “learning needs” is understood as intended learning outcomes, which is not difficult to cover as it is central to ICOPER.

The deliverable as a whole continues with a review of MLO, the prospective European Standard on Metadata for Learning Opportunities (Advertising), to cover the “learning opportunities” aspect. Then it goes on to suggest an information model for “Learning Outcome Definitions”. This is a tricky one, as one cannot really avoid IMS RDCEO and IEEE RCD. As I’ve argued in the past, I don’t think these are really substantially more helpful than just using Dublin Core, and in a way the ICOPER work here implicitly recognises this, in that even though they still doff a cap to those two specs, most of RDCEO is “profiled” away, and instead a “knowledge / skill / competence” category is added, to square with the concepts as described in the EQF.
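To give a flavour of what such a definition might boil down to — and this is purely my own illustrative sketch, not the ICOPER information model — here is a learning outcome definition with a few Dublin-Core-like fields plus the EQF-style “knowledge / skill / competence” category.

```python
# Illustrative sketch only: a learning outcome definition with Dublin-Core-like
# fields and an EQF-style category. Field names and the example URI are invented.
from dataclasses import dataclass

EQF_CATEGORIES = {"knowledge", "skill", "competence"}

@dataclass
class LearningOutcomeDefinition:
    identifier: str          # ideally a dereferenceable URI
    title: str
    description: str
    category: str            # one of EQF_CATEGORIES

    def __post_init__(self):
        if self.category not in EQF_CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

outcome = LearningOutcomeDefinition(
    identifier="http://example.org/outcomes/wound-care-1",   # hypothetical URI
    title="Basic wound care",
    description="Can clean and dress a simple wound under supervision.",
    category="skill",
)
```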

Perhaps the other really interesting part of the deliverable was one into which we put quite a lot of joint thinking. Jad came up with the title “Personal Achieved Learning Outcomes” (PALO), which is fine for what is intended to be covered here. What we have come up with (provisionally, it must be emphasised) is a very interesting mixture of bits that correspond to the overall conceptual model, with the addition of the kind of detail needed to turn a conceptual model into an information or data model. Again, not surprisingly, this raises some interesting questions for the overall conceptual model. How does the concept of achievement (in this deliverable) relate to the overall model’s “personal claim expression”? This “PALO” model is a good effort towards something that I haven’t personally written much about: how do you represent context in a helpful way for intended learning outcomes or competences? If you’re interested, see what you think. For most skills and competences, one can imagine several aspects of context that are really meaningful, and without which describing things would definitely lose something. Can you do it better?
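As a rough, provisional sketch only — not the PALO specification, and with field names invented by me — an achieved learning outcome might be recorded something like this, with the context attached explicitly.

```python
# A hypothetical sketch of recording an achieved learning outcome with context.
from dataclasses import dataclass
from datetime import date

@dataclass
class Achievement:
    learner_id: str
    outcome_uri: str          # points to a learning outcome / competence definition
    date_achieved: date
    context: str              # the setting in which the outcome was demonstrated
    evidence_url: str = ""    # optional link to supporting portfolio evidence

ach = Achievement(
    learner_id="learner-42",
    outcome_uri="http://example.org/outcomes/wound-care-1",  # hypothetical
    date_achieved=date(2010, 2, 1),
    context="supervised clinical placement, community health centre",
    evidence_url="http://portfolio.example.org/items/123",
)
```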

I hope I’ve written enough to stimulate a few people at least to skim through that deliverable D2.2.

Development of a conceptual model 5

This conceptual model now includes basic ideas about what goes on in the individual, plus some of the most important concepts for PDP and e-portfolio use, as well as the generalised, formalisable concepts and processes surrounding individual action. It has come a long way since the last time I wrote about it.

The minimised version is here first… (I recommend viewing the images below separately, perhaps with a right-click.)

[Image: eurolmcm25-min3]

and that is complex enough, with so many relationship links looking like a bizarre and distorted spider’s web. Now for the full version, which is quite scarily complex…

[Image: eurolmcm25]

Perhaps that is the inevitable way things happen. One thinks some more. One talks to some more people. The model grows, develops, expands. The parts connected to “placement processes” were stimulated by Luk Vervenne’s contribution to the workshop in Berlin of my previous blog entry. But — and I find it hard to escape from this — much of the development is based on internal logic, and on just looking at it from different points of view.

It still makes sense to me, of course, because I’ve been with it through its growth and development. But is there any point in putting such a complex structure up on my blog? I do not know. It’s reached the stage where perhaps it needs turning into a paper-length exposition, particularly including all the explanatory notes that you can see if you use CmapTools, and breaking it down into more digestible, manageable parts. I’ve put the CXL file and a PDF version up on my own concept maps page. I can only hope that some people will find this interesting enough to look carefully at some of the detail, and comment… (please!) If you’re really interested, get in touch to talk things over with me. But the thinking will in any case surface in other places. And I’ll link from here later if I do a version with comments that is easier to get at.

Development of a conceptual model 4

This version of the conceptual model (of learning opportunity provision + assessment + award of credit or qualification) uses the CmapTools facility for grouping nodes; and it further extends the use of my own “top ontology” (introduced in my book).

There are now two diagrams: a contracted and an expanded version. When you use CmapTools, you can click on the << or >> symbols, and the attached box will expand to reveal the detail, or contract to hide it. This grouping was suggested by several people in discussion, particularly Christian Stracke. Let’s look at the two diagrams first, then go on to draw out the other points.

[Image: eurolmcm13-contracted1]

You can’t fail to notice that this is remarkably simpler than the previous version. What is important is to note the terms chosen for the groupings. It is vital to the communicative effectiveness of the pair of diagrams that the term for the grouping represents the things contained by the grouping, and in the top case — “learning opportunity provision” — it was Cleo Sgouropoulou who helped find that term. Most of the links seem to work OK with these groupings, though some are inevitably less than fully clear. So, on to the full, expanded diagram…

[Image: eurolmcm13-expanded1]

I was favourably impressed with the way in which CmapTools allows grouping to be done, and how the tools work.

Mainly the same things are there as in the previous version. The only change is that, instead of having one blob for qualification, and one for credit value, both have been split into two. This followed on from being uncomfortable with the previous position of “qualification”, where it appeared that the same thing was wanted or led to, and awarded. It is, I suggest, much clearer to distinguish the repeatable pattern — that is, the form of the qualification, represented by its title and generic properties — and the particular qualification awarded to a particular learner on a particular date. I originally came to this clear distinction, between patterns and expressions, in my book, when trying to build a firmer basis for the typology of information represented in e-portfolio systems. But in any case, I am now working on a separate web page to try to explain it more clearly. When done, I’ll post that here on my blog.

A pattern, like a concept, can apply to many different things, at least in principle. Most of the documentation surrounding courses, assessment, and the definitions about qualifications and credit, are essentially repeatable patterns. But in contrast, an assessment result, like a qualification or credit awarded, is in effect an expression, relating one of those patterns to a particular individual learner at a particular time. They are quite different kinds of thing, and much confusion may be caused by failing to distinguish which one is talking about, particularly when discussing things like qualifications.
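To illustrate the distinction — and this is only a sketch of mine, not taken from any specification — the two kinds of thing might be represented like this: the pattern carries the repeatable title and generic properties, while the expression relates one pattern to one learner on one date.

```python
# Illustrative sketch of the pattern / expression distinction described above.
from dataclasses import dataclass
from datetime import date

@dataclass
class QualificationPattern:
    """The repeatable form: title and generic properties, applicable to many learners."""
    title: str
    awarding_body: str
    level: str

@dataclass
class QualificationAwarded:
    """The expression: relates one pattern to one learner at one particular time."""
    pattern: QualificationPattern
    learner_id: str
    date_awarded: date

# All names and values here are hypothetical examples.
bsc = QualificationPattern("BSc (Hons) Computing", "University of Example", "FHEQ level 6")
award = QualificationAwarded(bsc, "learner-42", date(2009, 7, 15))
```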

This distinction between types of thing at the most generic level is what I am trying to represent with the colour and shape scheme in these diagrams. You could call it my “top ontology” if you like, and I hope it is useful.

CmapTools is available free. It has been a great tool for me, as I don’t often get round to diagrams, but CmapTools makes it easy to draw the kinds of models I want to draw. If you have it, you might like to try finding and downloading the actual maps, which you can then play with. Of course, there is only one map, not two; but I have put it in both forms on the ICOPER Cmap server, and also directly in CXL form on my own site. If you do, you will see all the explanatory comments I have made on the nodes. Please feel free to send me back any elaborations you create.

Development of a conceptual model 3

I spent 3 days in Lyon this week, in meetings with European project colleagues and learning technology standardization people. This model had a good airing, and there was lots of discussion and feedback. So it has developed quite a lot over the three days from the previous version.
[Image: eurolmcm12]

So, let’s start at the top left. The French contingent wanted to add some kind of definition of structure to the MLO (Metadata for Learning Opportunities) draft CWA (CEN Workshop Agreement) and it seemed like a good idea to put this in somewhere. I’ve added it as “combination rule set”. As yet we haven’t agreed its inclusion, let alone its structure, but if it is represented as a literal text field just detailing what combinations of learning opportunities are allowed by a particular provider, that seems harmless enough. A formal structure can await future discussion.

Still referring to MLO, the previous “assessment strategy” really only related to MLO and nothing else. As it was unclear from the diagram what it was, I’ve taken it out. There is usually some designed relationship between a course and a related assessment, but though perhaps ideally the relationship should be through intended learning outcomes (as shown), it may not be so — in fact it might involve those combination rules — so I’ve put in a dotted relationship “linked to”. The dotted relationships are meant to indicate some caution: in this case its nature is unclear; while the “results in” relationship is really through a chain of other ones. I’ve also made dotted the relationship between a learning opportunity specification and a qualification. Yes, perhaps the learning opportunity is intended to lead to the award of a qualification, but that is principally the intention of the learning opportunity provider, and may vary with other points of view.

Talking about the learning opportunity provider, discussion at the meetings, particularly with Mark Stubbs, suggested that the important relationships between a provider and a learning opportunity specification are those of validation and advertising. And the simple terms “runs” and “run by” seem to express reasonably well how a provider relates to an instance. I am suggesting that these terms might replace the confusingly ambiguous “offer” terminology in MLO.

Over on the right of the diagram, I’ve tidied up the arrows a bit. The Educational Credit Information Model CWA (now approved) has value, level and scheme on a par, so I thought it would be best to reflect that in the diagram with just one blob. Credit transfer and accumulation schemes may or may not be tied to wider qualifications frameworks with levels. I’ve left that open, but represented levels in frameworks separately from credit.

I’ve also added a few more common-sense relationships with the learner, who is and should be central to this whole diagram. Learners aspire to vague things like intended learning outcomes as well as specific results and qualifications. They get qualifications. And how do learners relate to learning opportunity specifications? One would hope that they would be useful for searching, for investigation, as part of the process of a learner deciding to enrol on a course.

I’ve added a key in the top right. It’s not quite adequate, I think, but I’m increasingly convinced that this kind of distinction is very helpful and important for discussing and agreeing conceptual models. I’m hoping to revisit the distinctions I made in my book, and to refine the key so that it is even clearer what kind of concept each one is.

Development of a conceptual model 2

As promised, the model is gently evolving from the initial one posted.

[Image: eurolmcm111]

Starting from the left, I’ve added a “creates” relationship between the assessing body and the assessment specification, to mirror the one for learning. Then, I’ve reversed the arrows and amended the relationship captions accordingly, for some of the middle part of the diagram. This is to make it easier to read off scenarios from the diagram. Of course, each arrow could be drawn in either direction in principle, just by substituting an inverse relationship, but often one direction makes more sense than the other. I’ve also amended some other captions for clarity.

An obvious scenario to read off would be this: “The learner enrols on a course, which involves doing some activities (like listening, writing, practical work, tests, etc.). These activities result in records (e.g. submitted coursework), which are assessed in a process specified by the assessing body, designed to evaluate the intended learning outcomes that are the objectives of the course. As a result of this summative assessment, the awarding body awards the learner a qualification.” I hope that one sounds plausible.

The right hand side of the diagram hadn’t had much attention recently. To simplify things a little, I decided that level and framework are so tightly joined that there is no need to separate them in this model. Then, mirroring the idea that a learner can aspire to an assessment outcome, it’s natural also to say that a learner may want a qualification. And what happens to credits after they have been awarded? They are normally counted towards a qualification — but this has to be processed, it is not automatic, so I’ve included that in the awarding process.

I’m still reasonably happy with the colour and shape scheme, in which yellow ovals are processes or activities (you can ask, “when did this happen?”), green things are parts of the real world, things that have concrete existence; and blue things are information.

Development of a conceptual model

Reflecting on the challenging field of conceptual models, I thought of the idea of exposing my evolving conceptual model that extends across the areas of learner mobility, learning, evaluation/assessment, credit, qualifications and awards, and intended learning outcomes — which could easily be detailed to cover knowledge, skill and competence.

[Image: eurolmcm10]

This is more or less the whole thing as it is at present. It will evolve, and I would like that evolution to illustrate how a model can develop as a result of taking other ideas into account. It also wants a great deal of explanation. I invite questions as comments (or directly) so that I can judge what explanation is helpful. I also warmly welcome contrasting views, to help my conceptual model to grow and develop.

It originates in work with the European Learner Mobility team specifying a model for European Learner Mobility documents — that currently include the Diploma Supplement (DS) and Certificate Supplement. This in turn is based on the European draft standard Metadata for Learning Opportunities (MLO), which is quite similar to the UK’s (and CETIS’s) XCRI. (Note: some terminology has been modified from MLO.) Alongside the DS, the model is intended to cover the UK’s HEAR — Higher Education Achievement Report. And the main advance from previous models of these things, including transcripts of course results, is that it aims to cover intended learning outcomes in a coherent way.

This work is evolving already with valued input from colleagues I talk to in

but I wanted to publish it here so that anyone can contribute, and anyone in any of these groups can refer to it and pass it round — even if as a “straw man”.

It would have been better to start from the beginning, so that I could explain the origin of each part. However that is not feasible, so I will have to be content with starting from where I am, and hoping that the reasoning supporting each feature will become clear in time, as there is an interest. Of course, at any time, the reasoning may not adequately support the feature, and on realising that I will want to change the model.

Please comment if there are discrepancies between this model and your model of the same things, and we can explore the language expressing the divergence of opinion, and the possibility for unification.

Obviously this relates to the SC36 model I discussed yesterday.

See also the next version.

Book finally available

My book, “Electronic Portfolios: Personal information, personal development and personal values”, has recently been published, and is at last available on Amazon UK etc. (or .fr or .de or .com).

The publishers have it in their catalogue.

I was very surprised by the high list price, over which I have had no influence; I would have priced it at no more than half that. Perhaps the publishers aren’t expecting all that many sales? But I hope that doesn’t stop people ordering it for their libraries. It is relevant to many different people, and the principles should be valid for a few years, so I’d say it’s worth having in any library where there are educators using e-portfolios, or developers developing them.

Skills frameworks, interoperability, portfolios, etc.

Last Thursday (2009-04-16) I went to a very interesting meeting in Leeds, specially arranged, at the Leeds Institute of Medical Education, between various interested parties, about their needs and ideas for interoperability with e-portfolio tools – but also about skills frameworks.

It was interesting particularly because it showed more evidence of a groundswell of willingness to work towards e-portfolio interoperability, and this has two aspects for the people gathered (6 including me). On the one hand, the ALPS CETL is working with MyKnowledgeMap (MKM) – a small commercial learning technology vendor based in York – on a project involving health and social care students in their 5 HEIs around Leeds. They are using the MKM portfolio tool, Multi-Port, but are aware of a need to have records which are portable between their system and others. It looks like being a fairly straightforward case of a vendor with a portfolio tool being drawn in to the LEAP2A fold on the back of the success we have had so far – without the need for extra funding. The outcome should be a classic interoperability win-win: learners will be able to export their records to PebblePad, Mahara, etc., and the MKM tool users will be able to import their records from the LEAP2A-implementing systems to kick-start their portfolio records there with the ALPS CETL or other MKM sites.

MKM tools, as the MKM name suggests, do cover the representation of skills frameworks, and this forms a bridge between the two threads of this meeting: first, the ALPS CETL work, and second, the more challenging area of medical education, where frameworks – of knowledge, skill or competence – abound and are pretty important for medical students, and in the professional development of medical practitioners and health professionals more generally.

In this more challenging side of the meeting, we discussed some of the issues surrounding skills frameworks in medical education – including the transfer of students at undergraduate level; the transfer between a medical school like Leeds and a teaching hospital, where the doctors may well soon be using the NHS Foundation Year e-portfolio tools in conjunction with their further training and development; and then on to professional life.

The development of LEAP2A has probably been helped greatly by not trying to do too much all at once. We haven’t yet fully dealt with how to integrate skills frameworks into e-portfolio information. At one very simple level we have covered it – if each skill definition has a URI, that can be referred to by an “ability” item in the LEAP2A. But at another level it remains a great challenge. Here in medical education we have not one, but several real-life scenarios calling for interoperable skills frameworks for use with portfolio tools. So how are we actually going to advise the people who want to create skills frameworks about how to do this in a useful way? Their users, using their portfolio tools, want to carry forward the learning (against learning outcomes) and evidence (of competence) to another setting. They want the information to be ready to use, to save them repetition that is potentially wasteful to the institution as well as the learner.
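The “very simple level” amounts to something like the sketch below. LEAP2A itself is Atom-based XML, so this is only an illustrative outline of the idea, not the actual serialisation, and the framework and URIs are hypothetical: an “ability” item points at an externally published skill definition purely by its URI.

```python
# Illustrative only: an "ability" item referring to a skill definition by URI.
# This is not LEAP2A syntax (which is Atom XML); names and URIs are invented.
ability_item = {
    "type": "ability",
    "title": "Communicates effectively with patients",
    # The skill definition lives in an externally published framework and is
    # referred to purely by its URI:
    "skill_definition": "http://example.org/frameworks/medical/communication-1",
    "evidence": ["http://portfolio.example.org/items/987"],   # supporting items
}
```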

The answer necessarily goes beyond portfolio technology, and needs to tackle the issues that several people are currently working on: European projects like TENCompetence and ICOPER, where I have given presentations or written papers; older JISC project work I have been involved with (ioNW2, SPWS); and now the recently set up CETIS team on competences.

Happily, it seems like we are all pushing at an open door. I am happy to be able to respond in my role as Learning Technology Advisor for e-portfolio technology, and point MKM towards the documentation on – and those with experience of implementing – LEAP2A. And the new competence team has been looking for a good prompt to hold an initial meeting. I imagine we might hold a meeting, perhaps around the beginning of July, focused on frameworks of skills, competence and knowledge, and their use together with curriculum learning outcomes, with assessment criteria, and with portfolio evidence. The Leeds people would be very willing to contribute. Then, perhaps, JISC might offer a little extra funding (on the same lines as previous PIOP and XCRI projects) to get together a group of medical educators to implement LEAP2A and related skills frameworks – in whatever way we all agree is good to take forward the skills framework developments.

Anti-social software

Social software is good for learning if, and only if, the society of learners is, or can be persuaded to be, positive towards learning. But what if you’re a teenager in a peer group in which learning is uncool? Perhaps we need software that expressly excludes the peer group?

I was at a meeting in Birkbeck, London, on November 18th, called “Workshop on Personalised Technologies for Lifelong Learning”, which included outcomes (which I missed) from the MyPlan project – generally to do with e-portfolio systems, lifelong learning, etc. In the general discussion, “Next Generation Environments for Lifelong Learning”, it was Andrew Ravenscroft (who manages the fascinating InterLoc) who came out with the phrase “antisocial software”, but I thought it was so apt, even though a bit extreme, that it needs popularising.

There’s enough of a serious point there to be well worth thinking about carefully. The general assumption (among those keen on social software) that social software is a potential positive force for learning needs challenging, not because it isn’t often true, but because it isn’t always true. Rather, you have to start by thinking about what the social group norms and values are. It has been said that in some school environments, achievement is a serious handicap to social success in the peer group. Surely, in these environments, it is not a good idea to use social software for learning, in the sense of doing learning in a group which involves all the peer group by default.

Instead, learning ideally needs to be done out of the view of the peer group, or in a setting where the peer group’s social norms and values do not apply. One way of doing this for traditional classroom learning is to introduce strong behaviour rules that are very different from behaviour outside the classroom. This approach would be the one proposed by various “old school” teachers, and there are books I remember from teacher training days where these approaches are promulgated. Another way of doing this, which could also now be thought of as traditional, is through a more personalised approach, where learners work on their own worksheets. But for e-learning in these environments, what is needed is to separate the learning experience from the social group, not to link them.

Of course, learning software that works that way would not really be “antisocial”, for two reasons. Firstly, one could have social software with varying degrees of privacy. Learners could use the more openly social facilities with the peer group, and private ones with teachers. Indeed, learners actually interested in learning might benefit from support in their interactions in the peer group. Secondly, social software with these capabilities could help learners find those other minority individuals who also want to learn, and smaller groups could be formed, outside the view of the majority.
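A minimal sketch of what “varying degrees of privacy” might mean in such software — the visibility levels and roles here are entirely my own invention, just to show the shape of the idea:

```python
# Hypothetical sketch: each posting has a visibility level, so learning activity
# need not be exposed to the whole peer group by default.
from dataclasses import dataclass

VISIBILITY_LEVELS = ("private", "teacher", "study_group", "peer_group")

@dataclass
class Posting:
    author: str
    text: str
    visibility: str   # one of VISIBILITY_LEVELS

def can_see(posting: Posting, viewer: str, viewer_role: str) -> bool:
    """Decide whether a viewer (identified by role) should see a posting."""
    if viewer == posting.author:
        return True
    if posting.visibility == "private":
        return False
    if posting.visibility == "teacher":
        return viewer_role == "teacher"
    if posting.visibility == "study_group":
        return viewer_role in ("teacher", "study_group_member")
    return True   # "peer_group": openly visible to the whole group
```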

A wider point relates to other things of interest to me, particularly the multiplicity of personality. Teenagers in particular have different “personas”, or whatever you want to call this phenomenon of behaving in different ways in different contexts, and being embarrassed if behaviour displaying the values from one context slips through into another. E-portfolio tools, as I will be describing in my book, could be used to help young learners to recognise the differences between the different contexts they find themselves in, and to adapt how they present themselves in each of those contexts.

To end with a much wider-reaching question: could we use anti-social software, not only in schools, to subvert social norms which do not value learning, but also perhaps as an aid to subversion in an organisation where the peer culture has turned against really effective work, or in a country ruled by a force which is fundamentally opposed to democratic and accountable government?