…the only way forward is to go beyond dual visions in this matter and to rely on assessments of actual risks rather than fixed definitions and obligations [120, 128].
However, more nuanced views have also been expressed on this topic. For example, the Working Party 29 [6] stresses that the risk-based approach should never lead to a weakening of the rights of individuals: the rights granted to the data subject (right of access, erasure, objection, etc.) should be respected regardless of the level of risk. The fundamental principles applicable to data controllers should also remain the same (legitimacy, data minimization, purpose limitation, transparency, data integrity, etc.), even if their implementation can be scaled based on the results of a risk assessment. In addition, the risk-based approach should consider not only harms to individuals but also general societal impacts.
Some privacy advocates also fear that the flexibility provided by the risk-based approach may be abused by some organizations and that risk assessment may be perverted into a self-legitimation exercise [57]. To avoid this drift and to ensure that the risk-based approach really contributes to improving privacy, a number of conditions have to be met. First and foremost, the analysis has to be rigorous, both from the technical and from the procedural point of view. The methodology used for the analysis should be clearly defined, as should the assumptions about the context and the potential privacy impacts. This is a key requirement to ensure that the results of a privacy risk analysis are trustworthy and can be subject to independent checks.
However, while existing PIA frameworks and guidelines [160, 161, 163] provide a good deal of detail on organizational aspects (including budget allocation, resource allocation, stakeholder consultation, etc.), they are much vaguer on the technical part, in particular on the actual risk assessment task.
A key step toward a better convergence between PIA frameworks geared toward legal and organizational issues on the one hand and technical approaches to privacy risk analysis on the other is to agree on a common terminology and a set of basic notions. It is also necessary to characterize the main tasks to be carried out in a privacy risk analysis, together with their inputs and outputs.
The above objectives are precisely the subject of this book. The intended audience includes both computer scientists looking for an introductory survey on privacy risk analysis and stakeholders involved in a PIA process who wish to address its technical aspects in a rigorous way. We hope that the reader will have as much pleasure in reading this book as we had in putting it together.
Sourya Joyee De and Daniel Le Métayer
August 2016
1 The notion is not even referred to explicitly in the text of the Directive.
2 More precisely, the GDPR uses the wording “Data Protection Impact Assessment.”
Acknowledgments
We thank our colleagues of the PRIVATICS research group in Grenoble and Lyon, in particular Gergely Ács and Claude Castelluccia for their comments on an earlier draft of this book and many fruitful discussions on privacy risk analysis. This work has been partially funded by the French ANR-12-INSE-0013 project BIOPRIV and the Inria Project Lab CAPPRIS.
Sourya Joyee De and Daniel Le Métayer
August 2016
CHAPTER 1
Introduction
Considering that the deployment of new information technologies can lead to substantial privacy risks for individuals, there is a growing recognition that a privacy impact assessment (PIA) should be conducted before the design of a product collecting or processing personal data. De facto, PIAs have become more and more popular during the last decade. Several countries such as Australia, New Zealand, Canada, the U.S. and the United Kingdom [164] have played a leading role in this movement. Europe has also promoted PIAs in areas such as RFIDs [9, 107] and smart grids [11, 12] and is putting strong emphasis on privacy and data protection risk analysis in its new General Data Protection Regulation (GDPR)1 [48]. However, while existing PIA frameworks and guidelines provide a good deal of detail on organizational aspects (including budget allocation, resource allocation, stakeholder consultation, etc.), they are much vaguer on the technical part (what we call “Privacy Risk Analysis” or “PRA” in this book), in particular on the actual risk assessment task. Some tools have also been proposed to help in the management of organizational aspects [3, 118, 144], but no support currently exists to perform the technical analysis. For PIAs to keep their promises and really play a decisive role in enhancing privacy protection, they should be more precise with regard to these technical aspects. This is a key requirement to ensure that their results are trustworthy and can be subject to independent checks. However, this is also a challenge because privacy is a multifaceted notion involving a wide variety of factors that may be difficult to assess.
Some work has already been carried out on PRA in the computer science community [39, 40, 52, 169], but the results of these efforts are not yet integrated within existing PIA frameworks. A first step toward a better convergence between PIA frameworks geared toward legal and organizational issues on the one hand and technical approaches to PRA on the other is to agree on a common terminology and a set of basic notions. It is also necessary to characterize the main tasks to be carried out in a privacy risk analysis, together with their inputs and outputs.
Surveys of current practices and recommendations have already been published for PIAs [29, 160, 163, 164] but, as far as we know, not for PRAs. The goal of this book is to fill this gap by providing an introduction to the basic notions, requirements and key steps of a privacy risk analysis. Apart from Chapter 9, in which we put PRA in the context of a PIA, we focus here on the technical part of the process. For example, we do not consider legal obligations such as the obligation to notify the supervisory authority before carrying out personal data processing (in European jurisdictions). Neither do we discuss the organization of the stakeholder consultation, which forms an integral part of a PIA.
Another choice made in this book is to focus on the privacy risks for persons (including individuals, groups and society as a whole) who may suffer from privacy violations, rather than on the risks for the organizations processing the data (data controllers or data processors in European terminology). Certain frameworks [55, 106, 107] integrate both types of risks, but we believe that this can be a source of confusion because, even if they are interrelated, these risks concern two types of stakeholders with different, and sometimes conflicting, interests. The risks that privacy issues pose to businesses, or to organizations in general, can be analyzed in a second stage, once the privacy risks for persons have been evaluated, since the former can be seen as indirect consequences of the latter.
Chapter 2 sets the scene with a review of the common terms used in privacy risk analysis, a study of their variations and a definition of the terminology used in this book. We proceed with detailed presentations of the components of a privacy risk analysis and suggested classifications, considering successively processing systems (Chapter 3), personal data (Chapter 4), stakeholders (Chapter 5), risk sources (Chapter 6), feared events (Chapter 7) and privacy harms (Chapter 8). Then, we show how all the notions introduced in this book can be used in a privacy risk analysis process (Chapter 9). We conclude with a reflection on security and privacy risk analysis and avenues for further work (Chapter 10).
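As a purely illustrative sketch (not taken from the book), the notions surveyed in these chapters could be organized in a simple data model along the following lines. All class names, attributes and qualitative scales below are assumptions made for illustration only, not the book's own definitions.

```python
# Illustrative sketch only: a minimal data model for some of the notions
# surveyed in Chapters 3-8 (risk sources, feared events, privacy harms).
# Names, attributes and scales are assumptions for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskSource:
    name: str                                   # e.g., "external attacker", "curious insider"
    capabilities: List[str] = field(default_factory=list)

@dataclass
class FearedEvent:
    description: str                            # e.g., "re-identification of household occupancy"
    likelihood: str                             # qualitative scale, e.g., "low" / "medium" / "high"
    exploited_by: List[RiskSource] = field(default_factory=list)

@dataclass
class PrivacyHarm:
    description: str                            # e.g., "targeted burglary", "price discrimination"
    severity: str                               # qualitative scale, e.g., "limited" / "significant"
    caused_by: List[FearedEvent] = field(default_factory=list)

# In such a model, a privacy risk can be read off as a (feared event, harm)
# pair together with its associated likelihood and severity levels.
```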
We use a running example in the area of smart grids (the BEMS System introduced in Chapter