Chairs: Mònica Vilasau, Lecturer, School of Law and Political Science (UOC).
Yves Poullet, Rector of the University of Namur (Belgium). Professor at the Faculties of Law of the Universities of Namur (UNamur) and Liège (ULg).
A new privacy age, towards citizens’ empowerment: new issues and new challenges
Changes in the technological landscape
Characteristics of the new information systems, between tera and nano: greater ability to store speech, data and images, with increasing capacity for transmission, for processing and for storage. At the other end, a multiplication of terminal devices, which are now ubiquitous (GPS, RFID, mobiles, human implants…).
New applications. New ways to collect data, especially through Web 2.0 platforms (social networking sites, online services…) and ambient intelligence (RFID, body implants…). And new ways of storing data, such as cloud computing.
We have to acknowledge that we have ever less control over, and even ownership of, our own data, which “live in the cloud”. Indeed, we do not know where the data are, in what territory, or which laws apply to them.
New methods of data processing. Profiling, a method using three steps: data warehouse, data mining, profiling of individuals. Neuroelectronics, which is the possibility to modify the functioning of our brain (through body implants and brain computer interfaces, e.g. to stimulate the memory function or to reduce stress). Affective computing, on how to interpret feelings (e.g. facial movements) and to adapt the environment or to take decisions on the basis of that interpretation.
New actors. Standardisation of communication terminals and protocols, led by private organizations (IETF, W3C) and not by public/international ones. New emerging actors, such as terminal producers, whose behaviour goes unregulated, without “technology control”. New gatekeepers. Blurring of borders and, with them, of states’ sovereignty.
The legal answer: privacy or/and data protection
Initially, privacy was understood as a right to opacity, the right to be left alone. Progressively, data protection emerged as a new constitutional right besides privacy: a way of re-establishing a certain equilibrium between the informational powers, a right to self-determination, to control the flows of one’s informational image.
- Legitimacy of the processing.
- Right to transparent processing for the data subject.
- Data protection authority (a new actor) as a balance keeper.
There is a trend of understanding privacy in this negative approach, without reference to the broader ‘privacy’ concept. We need to reassess the value of data protection today, and to carefully manage the delicate balance between the need for intermittent retreat from others and the need for interaction and cooperation with them (cf. Arendt), now that social networking sites foster a pervasive Lacanian “extimacy”.
New privacy risks:
- Opacity, and the risks of anticipatory conformism.
- Decontextualization, data collected in one context might be used in another context.
- Reductionism: from the individual to her data and, finally, to her profile, built using data related to other people.
- Increasing asymmetry between the informational powers of the data subject, on the one hand, and the data controller, on the other.
- Towards a surveillance society.
- Abolition of some rights.
The human being facing ICTs: a person traced and surveilled, a person “without masks”, a person reduced, a person normalized. What is at stake is dignity, individual self-determination, social justice and, ultimately, democracy. Privacy, which is much more than data protection, should be seen as self-development.
New rights of the data subject: right to be forgotten, right to data portability.
10th Internet, Law and Politics Conference (2014)
Notes from the research seminar Searching information on the Internet: legal implications, by Julià Minguillón, held at the Open University of Catalonia, Barcelona, Spain, on April 29th, 2010.
Tim Berners-Lee created the World Wide Web based on a structure and protocols that require linking to work. A URL (or URI) identifies a document that can be found on the Internet, creating a directed graph: A points to B, but we (usually) cannot walk the inverse way; the link is not reversible (i.e. you need another link to go from B to A; the initial A-to-B link does not serve this purpose).
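This non-reversibility can be sketched as a toy directed graph (page names A, B, C are hypothetical): following a link forward is cheap, while finding who links *to* a page requires scanning the whole graph.

```python
# Minimal sketch of the Web as a directed graph: an edge A -> B means
# "page A links to page B". Page names are made up for illustration.
links = {
    "A": ["B"],  # A links to B...
    "B": ["C"],  # ...but B does not link back to A
    "C": [],
}

def outgoing(page):
    """Pages reachable by following links FROM `page` (stored directly)."""
    return links.get(page, [])

def incoming(page):
    """Pages that link TO `page`: requires inspecting every other page."""
    return [src for src, dsts in links.items() if page in dsts]

print(outgoing("A"))  # ['B']
print(incoming("A"))  # []  -- no link leads back to A
```

The asymmetry between the two functions mirrors the asymmetry of the Web itself: `outgoing` reads one entry, `incoming` must traverse everything.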
There are two main strategies to explore the Internet and find information within: browsing and searching.
One of the “problems” of the Internet is that, as a graph, it has no centre: no place that can be considered its beginning.
There are some initiatives to map the Internet and to index it (like the Open Directory Project), but the speed of growth of the Internet has made them difficult to maintain… and even to use. Search engines take a different, automated approach, in three steps:
- a web crawler explores the Internet, retrieving information about the content and the structure of a web site;
- an index is created where the information is listed and categorized, and
- a query manager enables the user to ask the index and retrieve the desired information.
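The three steps can be sketched in miniature. The snippet below is a simplified, in-memory model (the `WEB` dictionary and its URLs are hypothetical; a real crawler would fetch pages over HTTP):

```python
from collections import defaultdict

# Hypothetical, in-memory "web": URL -> (page text, outgoing links).
WEB = {
    "http://example.org/a": ("cats and dogs", ["http://example.org/b"]),
    "http://example.org/b": ("dogs only", []),
}

def crawl(start):
    """Step 1: the crawler follows links, collecting every reachable page."""
    seen, frontier = {}, [start]
    while frontier:
        url = frontier.pop()
        if url in seen or url not in WEB:
            continue
        text, out_links = WEB[url]
        seen[url] = text
        frontier.extend(out_links)
    return seen

def build_index(pages):
    """Step 2: an inverted index, mapping each word to the URLs containing it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.split():
            index[word].add(url)
    return index

def query(index, word):
    """Step 3: the query manager asks the index, not the live web."""
    return sorted(index.get(word, set()))

index = build_index(crawl("http://example.org/a"))
print(query(index, "dogs"))  # both pages contain the word "dogs"
```

Note that the query is answered entirely from the index, which is why results can lag behind (or outlive) the live pages.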
Web crawlers require that pages are linked to be able to visit them. Ways to prevent web crawlers from exploring a web site (besides unlinking) include protection by username/password, use of CAPTCHAs, use of exclusion protocols (e.g. in robots.txt files), etc.
Exclusion protocol (robots.txt):
- Has to be public;
- It is an indication, not compulsory;
- It may disclose sensitive information;
- Google hack: intitle:index.of robots.txt
- Search engines find sensitive information.
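A well-behaved crawler can check these rules with Python’s standard `urllib.robotparser`. In this sketch the robots.txt content is parsed from a string rather than fetched over the network, and the domain and paths are hypothetical:

```python
import urllib.robotparser

# Hypothetical robots.txt content; normally fetched from
# http://example.org/robots.txt -- and, as noted above, that file is
# itself public, so the Disallow line advertises the "private" path.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("MyCrawler", "http://example.org/index.html"))      # True
print(rp.can_fetch("MyCrawler", "http://example.org/private/a.html"))  # False
```

Nothing technically stops a crawler from ignoring the answer: compliance is voluntary, which is exactly why robots.txt is an indication rather than a protection.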
- Content and links are different things. A linked content might not be in the same place as the source content where the link is published.
- Users can link to sensitive information/content.
- Broken links and permalinks: content might be moved but engines/users might track and re-link that content.
- Outdated versions (cache): to avoid repeated visits, search engines save old versions of sites (caches), which remain available for a certain time even if the original content is deleted.
- Software vulnerabilities.
- Browsing patterns (case of AOL): what a user does on the Internet can be tracked and reveal personal information.
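The outdated-versions problem above can be sketched as a cache with a time-to-live: the saved copy keeps being served for a fixed period even after the original is deleted (paths, contents and times are made up):

```python
# Toy model of a search-engine cache: a saved copy of each page plus
# the moment it was fetched; the copy is served until it is TTL seconds
# old, regardless of what happens to the original page.
TTL = 7200  # e.g. two hours

origin = {"/page": "removed report"}            # the live site
cache = {"/page": ("removed report", 1000.0)}   # (saved copy, fetch time)

def get(path, now):
    """Serve the cached copy while it is fresh; otherwise ask the origin."""
    if path in cache:
        copy, fetched_at = cache[path]
        if now - fetched_at < TTL:
            return copy
    return origin.get(path)

del origin["/page"]           # the site removes the content...
print(get("/page", 2000.0))   # 'removed report' -- still served from cache
print(get("/page", 10000.0))  # None -- cache expired, origin has nothing
```

This is the gap the notes return to below with Ramon Casas’ museum analogy: deletion at the source does not immediately propagate to the copies.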
Nowadays, most ways to remain anonymous on the Internet consist of opting out of services such as web crawling by search engines.
With the Web 2.0, things become more complicated. Initially, “all” content was originated by the “owner” of a website: you needed hosting and had to directly manage that site. When everyone can create or share content in a very easy and immediate way, the server/hosting-manager-content relationship is not as straightforward as it used to be.
Linking and tagging complicate the landscape even more. And with the upcoming semantic web, cross-searching and crossing data from different sources can make it easy to retrieve complex information and uncover truly sensitive information.
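The cross-searching risk can be illustrated with a simple join: two datasets that are fairly innocuous on their own become sensitive when linked on a shared key (all records and field names below are invented):

```python
# Hypothetical records from two independent sources.
site_a = [  # e.g. a public member directory
    {"email": "ana@example.org", "name": "Ana"},
    {"email": "ben@example.org", "name": "Ben"},
]
site_b = [  # e.g. pseudonymous posts on a health forum
    {"email": "ana@example.org", "topic": "chronic illness"},
]

def cross(a, b, key):
    """Join two datasets on a common field, as a cross-search might."""
    lookup = {row[key]: row for row in a}
    return [{**lookup[row[key]], **row} for row in b if row[key] in lookup]

print(cross(site_a, site_b, "email"))
# [{'email': 'ana@example.org', 'name': 'Ana', 'topic': 'chronic illness'}]
```

Neither source published the combination “Ana + chronic illness”; the join did, which is the essence of the decontextualization risk discussed in the first part of these notes.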
- Users demand more and more services and are willing to give their privacy away for a handful of candies.
- Personalization is often on a trade-off relationship with privacy, and people demand more personalization.
- Opt-in should be the default, but it raises barriers to quick access to sites/services, hence opt-out is the default.
- An increased trend in egosurfing and the aim for e-stardom is accompanied by an increasing trail of data left behind by users.
Many actors are involved in putting content online and making it findable:
- The creator of content
- The uploader
- The one who links
- The one who tags
- Search engines
- End users
- Social networking sites
Ramon Casas points at the Google cache: while not strictly necessary to run the search engine, it represents an illegal copy of, and/or access to, content that (in many cases) was removed from its original website. In his example, the museum closes at 20:00 but Google leaves the back door open until 22:00.
Bruce Kasanoff (2001). Making It Personal: How to Profit from Personalization without Invading Privacy. See a review by Julià Minguillón at UOC Papers.