Evaluation of Online-Participation
Norbert Kersting | Universität Münster (Germany)
Monitoring and Evaluation of E-Participation
Abstract: Monitoring and evaluation instruments are meant to enhance the quality of policy implementation. It is obvious that in numerous cases this monitoring and evaluation of online and offline participation does not exist or is not applied by external actors. In the participatory instruments of the invented space, monitoring and evaluation is often ignored, or there is no time or funding to implement it thoroughly. The paper refers to the long history of participatory research. It shows that there are numerous participatory methods, but only a few concepts of evaluation. It criticizes theoretical concepts leading to indicators, such as Arnstein's ladder of participation, political action studies and civic engagement, as well as the theoretical and historical blindness of newer instruments. Finally, it argues that categories and concepts do not differ in research on online and offline participation, but the theoretical foundations of political participation do.
How do we assess online participation? Is it possible to assess it with the same tools that are used to assess offline/traditional participation?
Acknowledged crisis of representative democracy: lack of responsiveness and accountability, post-parliamentarism, post-democracy, against elections, against democracy…
Jason Brennan states that we have hooligans (trolls who do not like anything), hobbits (who actually do not care), and all the people in between, most of them cynics.
In many European countries there have been local government reforms, some of them including more participatory processes such as direct democracy at the local level.
Participatory instruments. Evaluation 1. Criteria:
- Participation: openness and equality
- Control, responsiveness.
Participatory instruments. Evaluation 2. Purposes:
- Brainstorming: sharing knowledge and ideas, capacity-building
- Planning: problem-solving, innovation, strategy or action plan, decision-making.
- Networking: building relationships, personal/leader development.
- Conflict resolution: dealing with conflict, generating awareness, sharing vision.
The formal part is also important: can we compare voting with demonstrations? Should we? With what instruments?
Q: what could be done to carry out more and better evaluations of participatory processes? Kersting: benchmark good cases, have processes adopted by as many governments as possible, create standards, etc.
Ismael Peña-López: maybe, from a rational-choice approach, it is true that "politicians do not assess" participation. But from a post-Marxist approach, taking into account the theories of Hannah Arendt or Antonio Gramsci, politicians do plan participatory processes, not in order to achieve "real impact" but to control the narrative, and a way of assessing it would simply be winning the elections, or placing a specific topic on the public agenda and being hegemonic in this discourse.
Maria A. Wimmer | Universität Koblenz
Evaluation of e-Participation Initiatives
There are a number of evaluation frameworks, with similarities and differences.
The MOMENTUM evaluation approach has:
- What to evaluate. Assets to be assessed: tools, processes, topics, policies.
- How to evaluate. Evaluation criteria: usability, appropriateness, interest, policies met.
- Main target of evaluation and impact towards target groups.
- Efficiency: system quality, information quality, service quality.
- Efficacy: information, communication, decision, expectations.
- Effectiveness: what the current situation is and what the future situation is expected to look like.
If you need to cite this article in a formal way (i.e. for bibliographical purposes) I dare suggest:
Peña-López, I. (2017) “OP@LL Conference (VI): Evaluation of Online-Participation” In ICTlogy,
#171, December 2017. Barcelona: ICTlogy.
Retrieved month dd, yyyy from http://ictlogy.net/review/?p=4590