Moderator: María José Pifarré. Lecturer, School of Law and Political Science (UOC).
Digital Surveillance and Criminal Investigation: Blurring of Thresholds and Boundaries in the Criminal Justice System?
John Vervaele. Full-time professor of economic and European criminal law at Utrecht Law School (the Netherlands) and professor of European criminal law at the College of Europe in Bruges (Belgium).
Criminal law is about prosecuting suspects of alleged crimes. To do so, some agents have coercive and sanctioning powers, exercised within boundaries and thresholds that are of the utmost importance. One of them is the jurisdiction to investigate: only judicial authorities can investigate, or police authorities under the supervision of judicial authorities. To investigate there must be suspicion, and a threshold for that suspicion: reasonable suspicion, probable cause, indices of criminality… The more coercive the measure, the higher the threshold. And, very importantly, these are reactive measures, always ex-post, after a crime has been committed.
But there has been a paradigm change within criminal law itself.
The criminal justice system and its tools are more and more used not to combat committed crime, but to prevent not only crime but even risks and threats; that is, to fight for security. Security is legitimating this change of paradigm. On the other hand, under new regulatory statutes, security agencies are being granted the powers of criminal justice.
What about the protective side of criminal justice? Protecting security has to be balanced with the rule of law and other human rights. And the greater the need (the "more security" we need), the lower the thresholds required to pursue criminal investigations. So low, indeed, that we have even shifted towards ex-ante action, towards acting upon suspicions, risks or threats against security. In this new paradigm, criminal justice-like powers are used to prevent the commission of crime, not to prosecute its actual commission.
Plus, the Information Society has also influenced criminal justice and criminal investigation.
Having data before any criminal investigation is under way is regarded as very important. So we have gone even beyond the suspicion that a crime will be committed, and we now gather data just in case. We now want to predict behaviour.
Summing up, we have gone from realizing that a crime has been committed, investigating it and thus needing to gather data; to gathering data just in case, performing active surveillance (just in case, too) and, in the end, trying to predict the threat that a crime could be committed (before it is actually, or ever, committed).
The combination of the change of paradigm and the influence of the Information Society on Criminal Law and Criminal Investigation is a major upheaval in the discipline. And it is a fact that this major upheaval is not restricted to national security, but is spreading to all other criminal offences.
New challenges for the protection of privacy in the age of the cloud.
Ivan Salvadori. Professor of Criminal Law and Criminal Computer Law at the University of Barcelona and Postdoctoral Researcher at Università di Verona (Italy)
Cloud computing faces many threats: identity theft, computer damage, abuse of private information, data theft, hacking/cracking, DDoS attacks, etc.
Entering an information system without permission and breaking its security measures has been qualified as a criminal act.
The problem with some laws on illegal access to information systems is that, for instance, employees of cloud providers or other insiders will "never" actually break any security measure. But this can be addressed as "remaining" in the system (for too long) beyond management needs.
In the same line, the appropriation and illicit diffusion of codes for accessing information systems (e.g. cracking and distributing passwords) has also been considered a crime in several legal systems.
So, we do not need many more tools than the ones we already have to prevent or punish misuses of data in the cloud or illicit access to cloud systems.
The reform of the European regulation on privacy protection aims at providing a common framework for all these aspects, taking into account access to information, the right to be forgotten, etc.
Chris Marsden: do we find any historical precedent for this shift in criminal law or criminal investigation? Vervaele: I don't think there is a recent precedent for such an increasing (con)fusion between the legislative, judicial and executive powers, at least not in the modern era.
Julián Valero: is it legitimate to delegate all data and all data management to third parties? Salvadori: depending on the law, this delegation could imply an abandonment of responsibilities and could thus be punished by the law. Vervaele: maybe the very concept of "privacy" is obsolete, and we should begin to speak about "informational self-determination".
9th Internet, Law and Politics Conference (2013)
Moderator: Clara Marsan. Lecturer, School of Law and Political Science (UOC).
Terrorist use of the Internet: communicating, recruiting, fundraising, training, launching propaganda videos, etc.
This poses surveillance challenges: filtering terrorist communications, locating them, uncovering hidden terrorist identities, etc.
Data mining:
- can make sense of huge amounts of data.
- creates “new” knowledge.
- can generate hypotheses: you do not need a prior theory.
Data mining is “seeing the forest from the trees”.
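The claim that data mining can generate hypotheses without a prior theory can be illustrated with a minimal sketch on entirely hypothetical data: instead of testing one pre-chosen relationship, the script scans every pair of variables and flags strong correlations as candidate "patterns" (all variable names here are invented for illustration).

```python
import random

random.seed(0)

# Hypothetical dataset: 1000 records, 5 variables. Variable "d" is
# secretly derived from "a"; the rest are independent noise.
n = 1000
data = {v: [random.random() for _ in range(n)] for v in "abc"}
data["d"] = [x + 0.1 * random.random() for x in data["a"]]
data["e"] = [random.random() for _ in range(n)]

def corr(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# No prior hypothesis: test every pair of variables and keep the
# strong correlations as candidate patterns.
names = sorted(data)
candidates = [(u, v, corr(data[u], data[v]))
              for i, u in enumerate(names) for v in names[i + 1:]]
strong = [(u, v, round(r, 2)) for u, v, r in candidates if abs(r) > 0.5]
print(strong)  # only the (a, d) pair stands out
```

The pattern between `a` and `d` surfaces without anyone having hypothesised it beforehand, which is exactly the "seeing the forest from the trees" point: at scale, the data themselves suggest what to investigate.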
Governments cannot disclose their data mining procedures, as doing so would enable criminals to circumvent them by changing their behaviour.
How do we get both security and privacy?
The European Court of Human Rights asks for:
- is there an interference?
- is the interference justified?
The problem is how to balance legality, legitimacy and proportionality. Are there procedural safeguards that limit the scope of the law?
What happens when huge amounts of data are transferred to national security agencies? Usually, security wins in the trade-off with privacy.
When different countries require the collection of data on air passengers, incompatibilities may arise between different countries' regulations. What to do? What can airlines do to comply with both regulations (origin and destination)?
The European Union has bilateral agreements with the US, Canada and Australia. And they all differ from one another: they have different goals, define different sets of data to be shared/transmitted and different time-spans during which data can be used, and, indeed, they all rely on the domestic (destination) regulation to tell what rights apply to the European citizens providing their data.
One of the problems with EU regulation on international data transmission is that it has always been reactive to the demands of third countries. The EU should be more proactive and try to agree on shared regulation that lies within some red lines drawn by the EU itself.
What happens when a person is attributed the authorship of a text they have never written? Can they claim "non-authorship"? To whom? How? E.g. the Wikipedia entry for a writer attributes to them the authorship of a work they never wrote, and the Wikipedia managers will not change the entry despite the "author" clarifying that they never wrote that piece.
Some laws (e.g. in Germany) consider it illicit to attribute to someone writings that they never penned, especially when these writings can distort the image or the personal identity of that person, e.g. by identifying them with ideologies that they do not share.
This would be a right to one's identity, but not from the usual approach to the issue. It may be necessary, as the Internet has dramatically increased the potential to alter one's words or ideas, and no other perspective (privacy, identity, intellectual property rights, etc.) fully addresses it.
Maybe the best approach would be the one that applies to mass media: the right to rectification, that is, the right to be presented in society the way one wishes. The problem is that the Internet has multiplied the difficulties of identifying what is a medium, who is the owner/administrator, who is responsible for a specific piece of content, etc.
What should be stored, and what can already be used "because it's out there"? Colonna: surely the line should be drawn around the principle of proportionality… wherever that principle may lie.
Clara Marsan: Is there any research on the impact on privacy vs. the performance of surveillance practices? Literature on “traditional” surveillance usually says that the impact on privacy is much bigger than the successes against terrorism. Colonna: the problem is that most of this information is classified, so there is no way of telling the impact or the benefits of digital surveillance.
Moderator: Marc Vilalta Reixach. Lecturer, School of Law and Political Science (UOC).
Regulating Code: Towards Prosumer Law?
Chris Marsden. Professor of Law, Law School, University of Sussex
Ian Brown. Senior Research Fellow at the Oxford Internet Institute, Oxford University
(Communication after the book Regulating Code. Good Governance and Better Regulation in the Information Age by Brown and Marsden).
We certainly are prosumers, but we are surely not super-users or geeks. Most US academic arguments for self-regulation may work for geeks, but not for the remaining 99% of users/prosumers.
What does regulation teach us about code? We need more ex-ante intervention, added to ex-post intervention; more interoperability and open code/data procurement; and a certain policy bias towards open code.
Prosumer law suggests a more directed intervention: solutions to the problems of dominant networking sites, preventing the erection of fences around pieces of information and the commons, etc.
It is not sufficient to permit data deletion, as that only covers the user's tracks. Interconnection and interoperability matter more than transparency and the theoretical possibility to switch: the possibility for prosumers to interoperate so as to permit exit.
Increased interoperability would increase transparency without increasing "data hazard".
Neoliberalism is about liberalization, deregulation and privatization. The underlying idea is to boost competition, understood as a good thing. In telecommunications, we are moving from sectoral regulation to competition law. The question is: which is better, sectoral regulation or competition law? Are they compatible? Or is it a matter of timing, sectoral regulation being suited to the early stages and as a temporary solution, until competition law can become the main tool in use?
Some examples in the US show that sectoral regulation is incompatible with competition. In these cases, sectoral regulation prevailed over competition law.
In Europe, on the other hand, cases have proven the compatibility of sectoral regulation and competition law.
In the case of Chile, after a very early (de)regulation of the sector and a major preponderance of competition law, new sectoral regulation was approved especially to protect some telecommunications-based "public goods".
Emulated competition would be a legal framework aimed at promoting competition (thus acting as competition law) but including some ex-ante conditions (regulation) to protect specific goods and services. An underlying goal is to promote competition in order to do away with monopolies, while trying to avoid actual oligopolies.
How can big data be used by governments to issue sanctions? It is not about "digitizing" the Administration, but about innovating processes. For instance: could Big Data be used to check whether the income declared to the tax agency fits the wealth/consumption level a specific citizen displays on social networking sites?
What are the legal consequences of such an action?
Agustí Cerrillo: how does the new Spanish Transparency Law fit in the era of Big Data? Valero: it does not. It is a law that will be born already obsolete.
Hildebrandt: what happens with the reuse of public information? Can the government reuse it (even for purposes different from those for which the citizen's information was provided)? Valero: on the one hand, why not? Why not enable the reuse of public information? On the other hand, there is an issue concerning privacy. Dissociating information from identity would be an option, but the problem is that it is becoming increasingly easy to perform reverse engineering and relate identities to information. Of course, different purposes may require consent, but that would put a lot of stress on the government's part. Maybe transparency (letting citizens know all the different purposes) would settle the problem.
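Valero's point that dissociated data can be re-linked by reverse engineering can be sketched in a few lines, on entirely hypothetical data: if records are "anonymised" by hashing a low-entropy identifier (here an invented five-digit ID), anyone can rebuild the mapping by hashing every possible value.

```python
import hashlib

def pseudonymise(identifier: str) -> str:
    # Naive dissociation: replace the identifier with its SHA-256 hash.
    return hashlib.sha256(identifier.encode()).hexdigest()

# "Anonymised" record released by a (hypothetical) public agency.
released = {pseudonymise("04217"): {"income_band": "high"}}

# Re-identification: the identifier space (five digits) is small enough
# to enumerate, so the hash is reversible by brute force.
rainbow = {pseudonymise(f"{i:05d}"): f"{i:05d}" for i in range(100_000)}

recovered = {}
for token, record in released.items():
    if token in rainbow:
        recovered[rainbow[token]] = record
print(recovered)  # the identity is linked back to the record
```

The design lesson matches the discussion: dissociation alone is fragile whenever the hidden identifier can be enumerated or correlated with auxiliary data, which is why consent or transparency about purposes is raised as the alternative safeguard.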
Mireille Hildebrandt. Professor of Smart Environments, Data Protection and the Rule of Law at the Institute for Computing and Information Sciences (iCIS) at Radboud University Nijmegen
Slaves of Big Data. Are we?
Big data says that n = all.
We are worshipping big data, believing in it, as if it were "Godspeech" that cannot be contested. But it is developed by governments, businesses and scientists.
Defining Big Data
“Things one can do at a large scale that cannot be done at a smaller scale” (Mayer-Schönberger & Cukier).
“The non-trivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data” (Fayyad et al.).
We assume that machines can learn. While this is true, the question is whether we (humans) can anticipate this learning and make it useful.
Normally, quantitative research means stating a hypothesis, testing it on a representative sample, and extrapolating the results from the sample to the general population. But with big data, as the sample grows, uncertainty shrinks. This has driven a movement towards 'datafication', where everything needs to be recorded so that it can be processed afterwards. We translate the flux of life into data, tracking and recording all aspects of life.
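The claim that uncertainty shrinks as the sample grows is the familiar standard-error effect (the error of a sample mean falls roughly like 1/sqrt(n)). A quick simulation on invented uniform data makes the point concrete:

```python
import random

random.seed(1)

def sample_mean_error(n: int, trials: int = 200) -> float:
    """Average absolute error of a sample mean vs the true mean (0.5)
    of a uniform(0, 1) population, over many simulated samples."""
    errs = []
    for _ in range(trials):
        sample = [random.random() for _ in range(n)]
        errs.append(abs(sum(sample) / n - 0.5))
    return sum(errs) / trials

# As n grows, the estimate's error shrinks roughly like 1/sqrt(n).
for n in (10, 100, 1000, 10000):
    print(n, round(sample_mean_error(n), 4))
```

Note the limit of the argument, which the talk itself raises later ("N is not all, not at all"): a bigger sample reduces sampling error, but not the bias in what was recorded in the first place.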
Exploiting these data is no longer about queries, but about data mining, and about creating 'data derivatives', which are anticipations: present futures of the future present (Elena Esposito). It is also about a certain "end of theory" (Chris Anderson), where a pragmatist approach makes us shift from causality (back) to correlation. We also move away from creating or defining concepts, which in turn shape the way we understand reality. We move from expertise to data science. Are we on the verge of a data dictatorship? Is this the end of free will?
There are novel inequalities due to new knowledge asymmetries between data subjects and data controllers.
What does it mean to skip the theory and to limit oneself to generating and testing hypotheses against ‘a’ population?
What does the opacity of computational techniques mean for the robustness of the outcomes?
Personal data management in the age of Big Data
Should we shift towards data minimisation? Towards blocking access to our data?
New personal data typology needed for data protection: volunteered data, behavioural data, inferred data.
For data protection to be an effective right, the agents performing Big Data profiling should provide all kinds of information on how this profiling is being done, with special attention to procedures and outcomes.
If we cannot have "privacy by design", we should have personal data management: context-aware data management.
Personal data management: sufficient autonomy to develop one’s identity; dependent on the context of the transactions; enabling considerations of constraints relevant to personal preferences (Bus & Nguyen).
A rising problem: we will eventually be able to market data and make profits from it. Do we have any ethical approach towards the way these data were obtained, or how and when they were created?
Who are we in the era of Big Data?
Imperfection, ambiguity, opacity, disorder, and the opportunity to err, sin, to do the wrong thing: all of these are constitutive of human freedom, and any attempt to root them out will root out that freedom as well, Morozov.
So, where is the balance between techno-optimism and techno-pessimism?
Luhmann's double contingency explains how behaviour and communications are doubly contingent: one can choose how to act from different options, but that action will in turn prompt different ranges of reactions from third parties. Are we preserving this double contingency in the era of Big Data? Do big data machines anticipate us so much that contingency disappears altogether?
What if it is machines that anticipate us? What if we anticipate how machines anticipate us?
- Datafication multiplies and reduces: N is not all, not at all.
- We create machines like us… are we increasingly like machines?
- Monetisation as a means to reinstate double contingency. Total transparency by means of monetisation.
Can we move from overexposure to clair-obscur? Can we build data buffers to avoid overflow?
Chris Marsden: how can we make policy-makers take these important issues into account? Hildebrandt: there are two big problems with Big Data: (1) the enthusiasm that Big Data will solve it all, and (2) the huge business opportunities around Big Data. There surely is a way to "tweak" business models so that privacy, business opportunities and innovation can actually coexist. Capitalism surely has ways to achieve a balance.
Hildebrandt: big data, monetisation, etc. will surely change who we are. The question being: should we always remain who we are? Instead of privacy, maybe we should shift our focus to transparency, to knowing what things happen and why. For instance, can we pay more attention to what we need to solve, rather than to what solutions are available? That is also transparency.
Moderator: Miquel Peguera. Lecturer, School of Law and Political Science (UOC).
There is a need to restore the legitimacy of copyright, but, of course, re-balancing its role against other rights. And we have to go back to the origins of copyright: incentivising creation.
These incentives have limits and exceptions: private copying, incidental use, academic use, quotation, parody, etc. But these limits should not harm the legitimate activities of rights holders.
The problem is that users (user-generated content), search engines, etc. are making cutting-edge uses that are not appropriately contemplated by the law.
So, the norm should be somewhat more open, so as to harm neither creation by authors nor creation by non-professionals, neither the diffusion of the former's works nor the emerging industries based on digital content.
3d Printing, the Internet and Patent Law – A History Repeating?
Marc Mimler. Queen Mary Intellectual Property Research Institute, Centre for Commercial Law Studies (CCLS), Queen Mary University of London
3D printing was initially conceived for rapid prototyping. 3D printing would normally begin with a designer drawing the blueprints of the object to be printed, but scanners have evolved enough to scan 3D objects, map them and replicate them on 3D printers.
Some advantages: reversing offshoring low-cost labour economic activities, reducing the environmental impact by reducing transportation of goods, empowerment for the end user that can now print their own designs, etc.
On the other side, 3D printing can imply direct patent infringement (creating replicas and counterfeits), but also indirect patent infringement, as virtually anyone can create those replicas.
There are some differences, though: how the invention is put into effect, how the patented product is produced (sometimes the process is part of the patent, and sometimes the process itself is the object of the patent), etc.
3D printing means that it is not only copyright that has to be rethought in the new digital realm, but also patent law.
Intellectual Privacy: A Fortress for the Individual User?
Irina Baraliuc. Doctoral researcher, Research Group on Law, Science, Technology & Society (LSTS), Vrije Universiteit Brussel (VUB)
There is a public debate on fundamental rights in the digital context, which has given rise to judicial activity on the balancing of fundamental rights.
Julie E. Cohen defines intellectual privacy as the breathing space of intellectual activity: informational privacy, spatial privacy. Neil M. Richards speaks of the ability to develop ideas without any interference: freedom of thought and belief, spatial privacy, freedom of intellectual exploration, confidentiality.
These concepts are closely related to privacy, data protection, freedom of thought, freedom of expression… and copyright.
Concerning privacy and data protection, there are some points to be made: how the lack of copyright in the private space can affect creation or intellectual privacy; how DRM can affect it too; etc.
Surveillance can affect freedom of thought and thus creation, intellectual activity and intellectual privacy.
We have to build a copyright-concerned private space online, taking into account intellectual "privacy" or "freedom", "self-determination" or "autonomy".
Pedro Letai: rather than a comprehensive approach, what we should be thinking about is that there is a change of paradigm (from "everything closed" to "everything open" by default). And maybe regulations should be made according to the latter paradigm, thinking more about the diffusion of one's work rather than (only) about providing incentives for the creation of new works.