By Ismael Peña-López (@ictlogist), 25 June 2013
Moderator: Marc Vilalta Reixach. Lecturer, School of Law and Political Science (UOC).
Regulating Code: Towards Prosumer Law?
Chris Marsden. Professor of Law, Law School, University of Sussex
Ian Brown. Senior Research Fellow at the Oxford Internet Institute, Oxford University
(Communication after the book Regulating Code. Good Governance and Better Regulation in the Information Age by Brown and Marsden).
We certainly are prosumers, but we are certainly not super-users or geeks. Most US academic arguments for self-regulation may work for geeks, but not for the remaining 99% of users/prosumers.
What does regulation teach us about code? We need more ex-ante intervention, added to the ex-post kind. More interoperability and open code/data procurement. And a certain policy bias towards open code.
Prosumer law suggests a more directed intervention: solutions to the problems posed by dominant networking sites, preventing the erection of fences around pieces of information and the commons, etc.
It is not sufficient to permit data deletion, as that merely covers the user's tracks. What is needed is interconnection and interoperability, more than transparency and a theoretical possibility to switch: the possibility for prosumers to interoperate, which permits exit.
Increased interoperability would increase transparency while not increasing the "data hazard".
Neoliberalism is about liberalization, deregulation and privatization. The underlying idea is to boost competition, understood as a good thing. In telecommunications, we are moving from sectoral regulation to competition law. The question being: which is better, sectoral regulation or competition law? Are they both compatible? Or is it a matter of time, with sectoral regulation suited to the early stages as a temporary solution, until competition law can become the main tool in use?
Some examples in the US show that sectoral regulation is incompatible with competition. In these cases, sectoral regulation prevailed over competition law.
On the other hand, European cases have proven the compatibility between sectoral regulation and competition law.
In the case of Chile, after a very early (de)regulation of the sector and a major preponderance of competition law, some new sectoral regulation was approved, especially to protect some "public goods" based on telecommunications.
Emulated competition would be a legal framework whose aim would be to promote competition (thus acting as competition law) but including some ex-ante conditions (regulation) to protect specific goods and services. An underlying goal is to promote competition in order to put an end to monopolies, while trying to avoid de facto oligopolies.
How can big data be used by governments to issue sanctions? This is not about "digitizing" the Administration, but about innovating processes. For instance: could big data be used to check whether the income declared to the tax agency fits the wealth/consumption level a specific citizen displays on social networking sites?
What are the legal consequences of such an action?
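As a purely illustrative aside (not part of the talk), a minimal sketch of what such a cross-check could look like; the citizen identifiers, the inferred spending figures and the 150% threshold are all invented assumptions, not any actual tax-agency procedure:

```python
# Hypothetical sketch: flagging mismatches between declared income and
# consumption signals inferred from public social-network activity.
# All identifiers, figures and the 1.5x threshold are illustrative assumptions.

declared_income = {          # citizen_id -> yearly income declared to the tax agency
    "A-001": 22_000,
    "A-002": 85_000,
}

inferred_spending = {        # citizen_id -> spending level estimated from public posts
    "A-001": 60_000,         # e.g. frequent luxury travel, visible purchases
    "A-002": 40_000,
}

MISMATCH_RATIO = 1.5         # flag when inferred spending exceeds 150% of declared income

def flag_mismatches(declared, inferred, ratio=MISMATCH_RATIO):
    """Return citizen ids whose inferred spending exceeds declared income by `ratio`."""
    return [
        cid for cid, income in declared.items()
        if inferred.get(cid, 0) > income * ratio
    ]

print(flag_mismatches(declared_income, inferred_spending))  # ['A-001']
```

Whether such automated flagging could lawfully lead to a sanction is precisely the open question raised above.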
Discussion
Agustí Cerrillo: how does the new Spanish Transparency Law fit in the era of big data? Valero: it does not. It is a law that will be born already outdated.
Hildebrandt: what happens with the reutilization of public information? Can the government reuse it (even for purposes different from the ones for which citizen information was provided)? Valero: on the one hand, why not? Why not enable the reutilization of public information? On the other hand, there is an issue concerning privacy. Dissociating information from identity would be an option, but the problem is that it is becoming increasingly easy to perform reverse engineering and relate identities back to information. Of course, different purposes may require consent, but that would put a lot of stress on the government's part. Maybe transparency (letting the citizen know all the different purposes) would settle the problem.
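To illustrate why dissociation alone is fragile, here is a toy sketch (mine, not from the session) of a linkage attack: a dataset released without names is re-identified by joining it with a public register on shared quasi-identifiers. All records and field names are invented.

```python
# Minimal sketch of a linkage ("reverse engineering") attack on a
# "dissociated" dataset. Records and fields are invented for illustration.

released = [  # anonymised dataset: no names, but quasi-identifiers remain
    {"zip": "08018", "birth_year": 1980, "gender": "F", "diagnosis": "asthma"},
    {"zip": "08018", "birth_year": 1975, "gender": "M", "diagnosis": "diabetes"},
]

public_register = [  # e.g. an electoral roll or public social-network profiles
    {"name": "Jane Doe", "zip": "08018", "birth_year": 1980, "gender": "F"},
    {"name": "John Roe", "zip": "08018", "birth_year": 1975, "gender": "M"},
]

def reidentify(released_rows, register):
    """Link anonymised rows back to names via shared quasi-identifiers."""
    matches = []
    for row in released_rows:
        for person in register:
            if all(row[k] == person[k] for k in ("zip", "birth_year", "gender")):
                matches.append((person["name"], row["diagnosis"]))
    return matches

print(reidentify(released, public_register))
# [('Jane Doe', 'asthma'), ('John Roe', 'diabetes')]
```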
9th Internet, Law and Politics Conference (2013)
By Ismael Peña-López (@ictlogist), 25 June 2013
Mireille Hildebrandt. Professor of Smart Environments, Data Protection and the Rule of Law at the Institute for Computing and Information Sciences (iCIS) at Radboud University Nijmegen
Slaves of Big Data. Are we?
Big data says that n = all.
We are worshipping big data, believing in it, as if it were "Godspeech" that cannot be contested. But it is developed by governments, businesses and scientists.
Defining Big Data
Things one can do at a large scale that cannot be done at a smaller scale (Mayer-Schönberger & Cukier).
The non-trivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data (Fayyad et al.).
We assume that machines can learn. Even if this is true, the question is whether we (humans) can anticipate this learning and make it useful.
Normally, quantitative research means formulating a hypothesis, testing it upon a representative sample, and trying to extrapolate the results from the sample to the general population. But with big data, as the sample grows, uncertainty shrinks. This has implied a movement towards "datafication", where everything needs to be recorded so that it can be processed afterwards. We translate the flux of life into data, tracking and recording all aspects of life.
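A quick numeric aside (mine, not the speaker's) illustrating that claim: the standard error of a sample mean falls roughly as 1/√n, so a million records pin an estimate down far more tightly than a hundred. The population parameters below are arbitrary assumptions for the demo.

```python
# Illustration: uncertainty about a sample mean shrinks as the sample grows.
import random
import statistics

random.seed(42)
population_mean, population_sd = 50.0, 10.0

for n in (100, 10_000, 1_000_000):
    sample = [random.gauss(population_mean, population_sd) for _ in range(n)]
    std_error = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
    print(f"n={n:>9,}  sample mean={statistics.mean(sample):6.2f}  std. error={std_error:.4f}")
```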
Exploiting these data is no longer about queries, but about data mining, and about creating "data derivatives", which are anticipations: present futures of the future present (Elena Esposito). It is also about a certain "end of theory" (Chris Anderson), where a pragmatist approach makes us shift from causality (back) to correlation. We also move away from creating or defining concepts, which in turn shape the way we understand reality. We move from expertise to data science. Are we on the verge of a data dictatorship? Is this the end of free will?
There are novel inequalities due to new knowledge asymmetries between data subjects and data controllers.
What does it mean to skip the theory and to limit oneself to generating and testing hypotheses against ‘a’ population?
What does the opacity of computational techniques mean for the robustness of the outcomes?
Personal data management in the age of Big Data
Should we shift towards data minimisation? Towards blocking access to our data?
New personal data typology needed for data protection: volunteered data, behavioural data, inferred data.
For data protection to be an effective right, the agents performing big data profiling should provide all kinds of information on how this profiling is done, with special attention to procedures and outcomes.
If we cannot have "privacy by design", we should have personal data management: context-aware data management.
Personal data management: sufficient autonomy to develop one’s identity; dependent on the context of the transactions; enabling considerations of constraints relevant to personal preferences (Bus & Nguyen).
A rising problem: we will eventually be able to market data and make profits from it. Do we have any ethical approach towards the way these data were obtained, or towards how and when they were created?
Who are we in the era of Big Data?
Imperfection, ambiguity, opacity, disorder, and the opportunity to err, sin, to do the wrong thing: all of these are constitutive of human freedom, and any attempt to root them out will root out that freedom as well (Morozov).
So, where is the balance between techno-optimism and techno-pessimism?
Luhmann’s double contingency explains how there is a double contingency in behaviour and communications: one can choose how to act from different options, but this action will trigger different ranges of reactions from third parties. Are we preserving this double contingency in the era of big data? Do big data machines anticipate us so much that contingency disappears altogether?
What if it is machines that anticipate us? What if we anticipate how machines anticipate us?
Caveats
- Datafication multiplies and reduces: N is not all, not at all.
- We create machines like us… are we increasingly becoming like machines?
- Monetisation as a means to reinstate double contingency. Total transparency by means of monetisation.
Can we move from overexposure to clair-obscur? Can we build data buffers to avoid overflow?
Discussion
Chris Marsden: how can we make policy-makers take all these important issues into account? Hildebrandt: there are two big problems with big data: (1) the enthusiasm that big data will solve it all, and (2) the huge business opportunities around big data. There surely is a way to "tweak" business models so that privacy, business opportunities and innovation can actually coexist. Capitalism surely has ways to achieve a balance.
Hildebrandt: big data, monetisation, etc. will surely change who we are. The question being: should we always remain who we are? Instead of privacy, maybe we should shift to concentrating on transparency, on knowing what things happen and why they happen. For instance, can we put more attention on what we need to solve rather than on what solutions are available? That is also transparency.
9th Internet, Law and Politics Conference (2013)
By Ismael Peña-López (@ictlogist), 25 June 2013
Moderator: Miquel Peguera. Lecturer, School of Law and Political Science (UOC).
There is a need to restore the legitimacy of copyright, but, of course, re-balancing its role with respect to other rights. And we have to go back to the origins of copyright: providing incentives for creation.
These incentives have limits and exceptions: private copying, incidental use, academic use, quotation, parody, etc. But these limits should not harm the legitimate activities of rights holders.
The problem is that users (user-generated content), search engines, etc. are making cutting-edge uses that are not appropriately contemplated by the law.
So the norm should be somewhat more open, so as to harm neither the creation of works by authors nor creation by non-professionals, the diffusion of the former's works, or the emerging industries based on digital content.
3D Printing, the Internet and Patent Law – A History Repeating?
Marc Mimler. Queen Mary Intellectual Property Research Institute, Centre for Commercial Law Studies (CCLS), Queen Mary University of London
3D printing was initially conceived for rapid prototyping. 3D printing would normally begin with a designer drawing the blueprints of the object to be printed, but scanners have evolved enough to be able to scan 3D objects, map them, and then replicate them on 3D printers.
Some advantages: reversing the offshoring of low-cost labour economic activities, reducing environmental impact by reducing the transportation of goods, empowerment of end users, who can now print their own designs, etc.
On the other side, 3D printing can imply direct patent infringement (creating replicas and counterfeits), but also indirect patent infringement, as virtually anyone can create those replicas.
There are some differences though: how the invention is put into effect, how the patented product is produced (as sometimes the process is part of the patent, or sometimes it is the process itself that is the object of the patent), etc.
3D printing means that it is not only copyright that has to be rethought in the new digital realm, but also patent law.
Intellectual Privacy: A Fortress for the Individual User?
Irina Baraliuc. Research Group on Law, Science, Technology & Society (LSTS), Vrije Universiteit Brussel (VUB), doctoral researcher
There is a public debate on fundamental rights in the digital context, which has given rise to judicial activity on the balancing of fundamental rights.
Julie E. Cohen defines intellectual privacy as the breathing space of intellectual activity: informational privacy, spatial privacy. Neil M. Richards speaks of the ability to develop ideas without any interference: freedom of thought and belief, spatial privacy, freedom of intellectual exploration, confidentiality.
These concepts are closely related to privacy, data protection, freedom of thought, freedom of expression… and copyright.
Concerning privacy and data protection, there are some points to be made: how the lack of copyright in the private space can affect creation or intellectual privacy; how DRM can affect it too; etc.
Surveillance can affect freedom of thought and thus creation, intellectual activity and intellectual privacy.
We have to build a copyright-concerned private space online, taking into account intellectual "privacy" or "freedom", "self-determination" or "autonomy".
Discussion
Pedro Letai: more than having a comprehensive approach, what we should be thinking about is that there is a change of paradigm (from "everything closed" to "everything open" by default). And maybe regulations should be made according to the latter paradigm, thinking more about the diffusion of one's work, rather than (only) about providing incentives for the creation of new works.
9th Internet, Law and Politics Conference (2013)