LINDDUN: A Privacy Threat Analysis Framework

The document presents the LINDDUN privacy threat analysis framework. It provides a systematic methodology for modeling privacy threats analogous to the STRIDE framework for security threats. The methodology uses a data flow diagram model and maps identified privacy threat types to elements in the model. It also includes an extensive catalogue of privacy threat tree patterns to detail the threat analysis. Finally, it describes how privacy-enhancing technologies can be mapped to identified privacy threats to help select appropriate countermeasures.


An up-to-date version of the LINDDUN Privacy by Design framework

can be found on the official website:


https://distrinet.cs.kuleuven.be/software/linddun
LINDDUN: a privacy threat analysis framework

Abstract

Ready or not, the digitalization of information has come and privacy is standing out there, possibly at stake. Although digital privacy is an identified priority in our society, few systematic, effective methodologies exist that deal with privacy threats thoroughly. This paper presents a comprehensive framework to model privacy threats in software-based systems. First, this work provides a systematic methodology to model privacy-specific threats. Analogous to STRIDE, an information flow oriented model of the system is leveraged to guide the analysis and to provide broad coverage. The methodology instructs the analyst on what issues should be investigated, and where in the model those issues could emerge. This is achieved by (i) defining a list of privacy threat types and (ii) providing the mappings between threat types and the elements in the system model. Second, this work provides an extensive catalogue of privacy-specific threat tree patterns that can be used to detail the threat analysis outlined above. Finally, this work provides the means to map the existing privacy-enhancing technologies (PETs) to the identified privacy threats. Therefore, the selection of sound privacy countermeasures is simplified.

1 Introduction

Privacy becomes increasingly important in today's society. Most information is now digitalized to facilitate quick and easy access. It is thus extremely important that digital privacy is sufficiently protected, to prevent personal information from being revealed to unauthorized subjects. A stepping stone of security and privacy analysis is threat modeling, i.e., the "black hat" activity of looking into what can possibly go wrong in a system. Threats are crucial to the definition of the requirements and play a key role in the selection of the countermeasures. Unfortunately, the state of the art lacks systematic approaches to model privacy threats, elicit privacy requirements, and instantiate privacy-enhancing countermeasures accordingly.

Indeed, there is an asymmetry for privacy with respect to security concerns: the latter have far better support in terms of methodological approaches to threat modeling. For instance, in the goal-oriented requirements space, KAOS [1] provides a methodology to systematically analyze a system's anti-goals (and the corresponding refined threats) and therefore derive security requirements [2]. The same holds in the area of scenario-based techniques. For instance, Microsoft's STRIDE is an industrial-level methodology for eliciting threat scenarios and, therefore, deriving security use cases [3]. Notably, a significantly sized body of reusable knowledge is also available in the secure software engineering community. Security knowledge is often packaged in the shape of checklists and patterns. For instance, STRIDE comes bundled with a catalogue of security threat tree patterns that can be readily instantiated in the system at hand so as to elicit a close-to-exhaustive set of potential security threats. Methodologies and knowledge are two important pillars for software security and privacy, including requirements engineering [4]. Surprisingly, privacy is still lagging behind. For instance, STRIDE does not cover privacy threats.

This paper contributes to the aforementioned dimensions, in terms of methodology and knowledge, by providing a comprehensive privacy threat modeling framework. A high-level overview of this work is sketched out in Section 3.

First, this work provides a systematic methodology to model privacy-specific threats. Analogous to STRIDE, an information flow oriented model of the system is leveraged to guide the analysis and to provide broad coverage. The data flow diagram (DFD) notation has been selected and, for reference, it is described in Section 2. The methodology instructs the analyst on what issues should be investigated and where in the model those issues could emerge. This is achieved by defining a list of privacy threat types and by providing the mapping between the threat types and the elements in the system model. This part of the methodology is described in Section 5. Note that the privacy threat types have been identified in contrast with well-known privacy objectives, which are summarized in Section 4.

Second, this work provides an extensive catalogue of privacy-specific threat tree patterns that can be used to detail the threat analysis outlined above. In a nutshell, they refine the privacy threat types by providing concrete examples. The catalogue is described in Section 6, while Section 8 illustrates how to instantiate the threat tree patterns in order to elicit the misuse cases.

An additional contribution of this paper refers to the software engineering phase. This work provides the means to map the existing privacy-enhancing technologies (PETs) to the identified privacy threats, which simplifies the selection of sound privacy countermeasures. This is described in Section 9.

2 Background: security threat modeling using STRIDE

Security, in contrast to privacy, has already been well integrated in the Security Development Lifecycle (SDL) [3], which is a well-established methodology. To build a secure software system, an important aspect is to consider how an attacker might compromise the system by exploiting design flaws, and to build the necessary defense mechanisms into the system. In this respect, threat modeling plays the key role, and the SDL has integrated a systematic approach for security threat modeling using STRIDE. In this section, we briefly review the STRIDE threat modeling process, which consists of nine high-level steps.

Step 1: Define use scenarios. System designers need to determine which key functionality is within the scope.

Step 2: Gather a list of external dependencies. Each application depends on the operating system it runs on, the database it uses, and so on; these dependencies need to be defined.

Step 3: Define security assumptions. In the analysis phase, decisions are often based on implicit assumptions. Therefore, it is important to note down all the assumptions, to understand the entire system comprehensively.

Step 4: Create external security notes. Because each external dependency can have its implication on security, it is useful to list all the restrictions and implications introduced by these dependencies. An example of such a security note is to specify which ports are open for database access or HTTP traffic.

Step 5: Create one or more DFDs of the application being analyzed. The software-based system being analyzed is decomposed into relevant (either logical or structural) components, and for each of these parts the corresponding threats are analyzed. This process is repeated over an increasingly refined model until a level is reached where the residual threats are acceptable.

The system is graphically represented using a data flow diagram (DFD), with the following elements: data flows (i.e. communication data), data stores (i.e. logical data or concrete databases, files, and so on), processes (i.e. units of functionality or programs) and external entities (i.e. endpoints of the system like users, external services, and so on). For threat modeling, trust boundaries are also introduced to indicate the border between trustworthy and untrustworthy elements.

An example DFD is shown in Figure 1 to illustrate a use case application (Social Network 2.0) that will be discussed throughout this paper. This Social Network 2.0 application is an abstract representation of a social network, where online users share personal information such as relationship status, pictures, and comments with their friends. In the DFD, the user is represented as an entity that interacts with the system. The Social Network 2.0 application contains two processes (the portal and the service) and one data store containing all the personal information of the users. The trust boundary shows that the processes, the data store, and the communication (data flows) between the two are assumed to be trustworthy in this particular setting.

Figure 1: The Data Flow Diagram (DFD) of the Social Network 2.0 application

Step 6: Determine threat types. The STRIDE threat taxonomy is used to identify security threat types. STRIDE is an acronym for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. These threats are the negation of the main security properties, namely confidentiality, integrity, availability, authentication, authorization and non-repudiation.

Step 7: Identify the threats to the system. Each element of the data flow diagram is assigned a set of susceptible threats. Table 1 gives an overview of the different DFD elements with the corresponding security threats they are subject to (marked with ×).

To identify which threats are applicable to a specific system, threat tree patterns can be used. For each valid intersection in Table 1, a threat tree pattern suggests the possible security-related preconditions for the STRIDE category, in order to help analysts determine the relevance of a threat for the system. An example threat tree is presented in Figure 2. Each path of the threat tree indicates a valid attack path. Note that some trees cascade. For example, the tree in Figure 2 shows the conditions that could lead to tampering threats against a process. A node drawn as a circle (or oval) in the threat tree denotes a root threat. These are the main STRIDE threats which, indirectly, can lead to another root threat, e.g. someone can indirectly tamper with a process by spoofing an external entity. A node drawn as a rectangle suggests a concrete threat in an attack path. The arrows connecting the nodes in general refer to an OR relation among the various preconditions, unless an AND relation is indicated explicitly with "AND". Afterwards, the identified threats need to be documented as misuse cases, i.e., as a collection of threat scenarios in the system.
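To make the DFD decomposition of Step 5 concrete, the following minimal sketch (not part of the original paper; all class and variable names are illustrative) shows how the Social Network 2.0 elements and the trust boundary could be captured as plain data structures, so that the later mapping steps can iterate over them mechanically.

```python
from dataclasses import dataclass

# Illustrative element types matching the DFD notation described above.
ENTITY, PROCESS, DATA_STORE, DATA_FLOW = "entity", "process", "data store", "data flow"

@dataclass
class Element:
    name: str
    kind: str              # one of the four DFD element types
    trusted: bool = False  # True if the element lies inside the trust boundary

# The Social Network 2.0 DFD of Figure 1, written out as a list of elements.
SOCIAL_NETWORK_DFD = [
    Element("User", ENTITY, trusted=False),
    Element("Portal", PROCESS, trusted=True),
    Element("Social network service", PROCESS, trusted=True),
    Element("Social network DB", DATA_STORE, trusted=True),
    Element("User data stream (user - portal)", DATA_FLOW, trusted=False),
    Element("Service data stream (portal - service)", DATA_FLOW, trusted=True),
    Element("DB data stream (service - DB)", DATA_FLOW, trusted=True),
]

if __name__ == "__main__":
    for e in SOCIAL_NETWORK_DFD:
        boundary = "inside" if e.trusted else "outside"
        print(f"{e.kind:<11} {e.name} ({boundary} the trust boundary)")
```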

Table 1: Security properties with the corresponding security threats and the DFD elements susceptible to them (DF = data flow, DS = data store, P = process, E = external entity), as proposed by the Security Development Lifecycle (SDL) [3].

Security property   | Security threat         | DF | DS | P | E
Authentication      | Spoofing                |    |    | × | ×
Integrity           | Tampering               | ×  | ×  | × |
Non-repudiation     | Repudiation             |    | ×  | × | ×
Confidentiality     | Information disclosure  | ×  | ×  | × |
Availability        | Denial of service       | ×  | ×  | × |
Authorization       | Elevation of privilege  |    |    | × |

Figure 2: Example security threat tree pattern of tampering with a process [3]

Step 8: Determine risk. For each threat, the appropriate security risk level has to be determined, which can be used to define the priorities of the threats to be resolved.

Step 9: Plan mitigation. In the final step of the methodology, the risk of the threat is reduced or eliminated by introducing proper countermeasures and defenses. Mitigating the risk of a threat corresponds to eliminating one attack path in the threat tree. An overview of some possible mitigation technologies linked to each security property is provided.

These steps are security-related and should be enhanced by the corresponding privacy perspective in order to perform a privacy threat analysis. In particular, privacy assumptions need to be specified in step 3 and external privacy notes are considered in step 4. This paper proposes privacy-specific extensions to the key steps: determining privacy threat types (step 6) in Section 5.1 and identifying privacy threats (step 7) in Sections 5.2 to 8. The mitigation of privacy threats via privacy enhancing solutions (step 9) is discussed in Section 9.

3 Our approach – the LINDDUN methodology

In this work, we propose a systematic approach for privacy threat modeling – the LINDDUN methodology – to elicit the privacy requirements of software-intensive systems and select privacy enhancing technologies accordingly. Each letter of "LINDDUN" stands for a privacy threat type obtained by negating a privacy property. Privacy properties and threat types are briefly described in Sections 4 and 5, respectively.

Figure 3 depicts the building blocks of LINDDUN. In the figure, a distinction is marked between the proposed methodology and the supporting knowledge provided to assist each step. First of all, a data flow diagram is created based on the high-level system description. This is followed by mapping privacy threats to the DFD elements, using Table 4 as a guide to determine the corresponding threats. In particular, a number of privacy threat tree patterns from Section 6 will be proposed to detail the privacy threat instances in a designated system, by providing an overview of the most common preconditions of each threat. Next, the identified privacy threats that are relevant to the designated system are documented as misuse cases (cf. Section 8). A misuse case presents a collection of threat scenarios in the system.

The identified privacy threats then need to be evaluated and prioritized via risk assessment. Indeed, due to both time and budget constraints, not all threats are worth further treatment. Note that details on the risk-analysis process are beyond the scope of this work.

The last two steps comprise so-called "white hat" activities. The privacy requirements of the system are elicited from the misuse cases following the mapping in Table 6. Finally, appropriate privacy enhancing solutions are selected according to the privacy requirements. Table 7 provides an overview of the state-of-the-art privacy enhancing techniques and the mapping to their corresponding privacy objectives.

The fact that the LINDDUN framework and STRIDE are based on similar approaches creates synergy. Therefore, the privacy and security analysis can be closely integrated into the SDL. Nevertheless, the aforementioned LINDDUN framework for privacy can be performed independently.


Figure 3: The LINDDUN methodology and the required system-specific knowledge

4 Privacy properties

It is not the intention of this paper to propose a new taxonomy of privacy definitions. However, it is crucial to have the right basis for the proposed LINDDUN framework; therefore, definitions of privacy properties are elaborately studied and reviewed in this section. The literature is rich in studies that conceptualize privacy, and we refer interested readers to the work by Solove [5, 6] for a comprehensive understanding of privacy. Most privacy properties in the LINDDUN framework comply with the terminology proposed by Pfitzmann et al. [7], which is widely recognized in the privacy research community.

4.1 Understanding privacy: hard privacy vs. soft privacy

As an abstract and subjective concept, the definition of privacy varies depending on social and cultural issues, study disciplines, stakeholder interests, and application context. Popular privacy definitions include "the right to be let alone", focusing on freedom from intrusion, and "the right to informational self-determination", allowing individuals to "control, edit, manage, and delete information about themselves and decide when, how and to what extent that information is communicated to others" [8].

Privacy can be distinguished as hard privacy and soft privacy, as proposed by Danezis [9]. The data protection goal of hard privacy is data minimization, based on the assumption that personal data is not divulged to third parties. The system model of hard privacy is that a data subject (as a security user) provides as little data as possible and tries to reduce the need to "trust" other entities. The threat model includes the service provider, the data holder, and an adversarial environment, where strategic adversaries with certain resources are motivated to breach privacy, similar to security systems. Soft privacy, on the contrary, is based on the assumption that the data subject has lost control of personal data and has to trust the honesty and competence of data controllers. The data protection goal of soft privacy is to provide data security and to process data with specific purpose and consent, by means of policies, access control, and audit. The system model is that the data subject provides personal data and the data controller (as a security user) is responsible for the data protection. Consequently, a weaker threat model applies, including different parties with inequality of power, such as external parties, honest insiders who make errors, and corrupt insiders within honest data holders. An overview of hard and soft privacy solutions will be given in Section 9.

Besides conceptualizing privacy, another research challenge is to define privacy properties in software-based systems. Some classical security properties are desired for building in privacy, including confidentiality (ensuring that information is accessible only by authorized parties), integrity (safeguarding the accuracy and completeness of information and processing methods), availability (or censorship resistance, ensuring information is accessible to authorized users), and non-repudiation (ensuring one is not able to deny what one has done). The definitions of these properties can be found in ISO 17799 [10].

In addition, a number of properties are also appreciated, including anonymity (hiding the link between an identity and an action or a piece of information), unlinkability (hiding the link between two or more actions, identities and pieces of information), undetectability (or covertness) and unobservability (hiding a user's activity), plausible deniability (the opposite of non-repudiation: no one else can prove that one has said or done something), and forward security (also referred to as forward secrecy and freedom from compulsion, meaning that once the communication is securely over, it cannot be decrypted any more).

We decided to include the following privacy properties in the proposed framework: unlinkability, anonymity and pseudonymity, plausible deniability, undetectability and unobservability, and confidentiality (hiding data content, including access control) as hard privacy properties; and user content awareness (including feedback for user privacy awareness, and data update and expiry) together with policy and consent compliance as soft privacy properties. These properties are described in the following sections. Note that properties such as integrity, availability, and forward security are also important for privacy. However, we consider them typical security properties; hence they are to be considered in a security engineering framework, such as STRIDE.

4.2 Unlinkability

The unlinkability property refers to hiding the link between two or more actions, identities, and pieces of information. Examples of unlinkability include hiding the links between two anonymous messages sent by the same person, two web page visits by the same user, entries in two databases related to the same person, or two people related by a friendship link in a social network.

Unlinkability is defined by Pfitzmann et al. as [7]: "Unlinkability of two or more items of interest (IOIs, e.g., subjects, messages, actions, ...) from an attacker's perspective means that within the system (comprising these and possibly other items), the attacker cannot sufficiently distinguish whether these IOIs are related or not." Although it is not explicitly mentioned, the definition of unlinkability implies that the two or more IOIs are of comparable types, otherwise it is infeasible to make the comparison.

4.3 Anonymity

Essentially, the anonymity property refers to hiding the link between an identity and an action or a piece of information. Examples are the anonymous sender of an email, the writer of a text, a person accessing a service, a person to whom an entry in a database relates, and so on.

Anonymity is defined as [7]: "Anonymity of a subject from an attacker's perspective means that the attacker cannot sufficiently identify the subject within a set of subjects, the anonymity set." Anonymity can also be described in terms of unlinkability. If one considers the sending and receiving of messages as attributes, the items of interest (IOIs) are who has sent or received which message. Then, "anonymity of a subject with respect to an attribute may be defined as unlinkability of this subject and this attribute." For instance, sender anonymity of a subject means that, to this potentially sending subject, each message is unlinkable.

4.4 Pseudonymity

The pseudonymity property suggests that it is possible to build a reputation on a pseudonym and to use multiple pseudonyms for different purposes. Examples include a person publishing comments on social network sites under different pseudonyms and a person using a pseudonym to subscribe to a service.

Pfitzmann et al. [7] define pseudonymity as: "A pseudonym is an identifier of a subject other than one of the subject's real names. Pseudonymity is the use of pseudonyms as identifiers. A subject is pseudonymous if a pseudonym is used as identifier instead of one of its real names." Pseudonymity can also be perceived with respect to linkability. Whereas anonymity and identifiability (or accountability) are the extremes with respect to linkability to subjects, pseudonymity is the entire field between and including these extremes. Thus, pseudonymity comprises all degrees of linkability to a subject.
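The definitions above are deliberately qualitative ("cannot sufficiently distinguish"). As a side note that is not part of this paper, the anonymity literature commonly quantifies such properties over the anonymity set; one widely used metric is the entropy of the attacker's probability distribution over its members:

```latex
% Entropy-based quantification of anonymity (a standard metric from the anonymity
% literature, e.g. the "degree of anonymity" proposals of 2002; not part of LINDDUN).
% Let S be the anonymity set and p_i the attacker's probability that subject i in S
% is linked to the item of interest.
\[
  H = -\sum_{i \in S} p_i \log_2 p_i ,
  \qquad
  0 \le H \le \log_2 |S| ,
  \qquad
  d = \frac{H}{\log_2 |S|} \in [0,1].
\]
% H reaches log2|S| when the attacker has no information (uniform p_i = 1/|S|) and
% drops to 0 when one subject is identified with certainty; d is the normalized
% "degree of anonymity".
```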

4.5 Plausible deniability

For privacy, plausible deniability refers to the ability to deny having performed an action that other parties can neither confirm nor contradict. Plausible deniability from an attacker's perspective means that an attacker cannot prove that a user knows, has done or has said something. Sometimes, depending on the application, plausible deniability is desirable over non-repudiation; for instance, in an application used by whistleblowers, users will want to deny ever having sent a certain message, to protect their safety. Other examples include off-the-record conversations, the possibility to deny the existence of an encrypted file, to deny that a file was transmitted from a data source, or to deny that a database record belongs to a person.

The relation between non-repudiation and plausible deniability is, according to Roe [11]: "The goal of the non-repudiation service is to provide irrefutable evidence concerning the occurrence or non-occurrence of an event or action. If we believe that there is a need for this as a security service [...] we must also concede that some participants desire the opposite effect: that there be no irrefutable evidence concerning a disputed event or action." This "complementary service" is plausible deniability. In particular, it ensures that "an instance of communication between computer systems leaves behind no unequivocal evidence of its having taken place. Features of communications protocols that were seen as defects from the standpoint of non-repudiation can be seen as benefits from the standpoint of this converse problem, which is called plausible deniability."

4.6 Undetectability and unobservability

The undetectability and unobservability properties refer to hiding the user's activities. Practical examples include: it is impossible to know whether an entry in a database corresponds to a real person, or to distinguish whether someone or no one is in a given location.

Undetectability is defined as [7]: "Undetectability of an item of interest (IOI) from an attacker's perspective means that the attacker cannot sufficiently distinguish whether it exists or not. If we consider messages as IOIs, this means that messages are not sufficiently discernible from, e.g., random noise." For anonymity and unlinkability, not the IOI itself, but only its relationship to the subject or other IOIs is protected. For undetectability, the IOIs are protected as such.

Undetectability by uninvolved subjects together with anonymity even if IOIs can be detected is defined as unobservability [7]: "Unobservability of an item of interest (IOI) means undetectability of the IOI against all subjects uninvolved in it and anonymity of the subject(s) involved in the IOI even against the other subject(s) involved in that IOI." The definition suggests that unobservability is undetectability by uninvolved subjects AND anonymity even if IOIs can be detected. Consequently, unobservability implies anonymity, and unobservability implies undetectability. This means that, with respect to the same attacker, unobservability always reveals only a subset of the information anonymity reveals. Later sections of this paper will focus on undetectability, since unobservability is in fact a combination of undetectability and anonymity.

4.7 Confidentiality

The confidentiality property refers to hiding the data content or the controlled release of data content. Examples include transferring encrypted email, and applying access control to a classified document or to a database containing sensitive information.

NIST [12] describes confidentiality as follows: confidentiality means preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information. Although confidentiality is a security property, as the definition above states, it is also important for preserving privacy properties, such as anonymity and unlinkability. Therefore, confidentiality is also considered an important privacy property.

4.8 Content awareness

Unlike the aforesaid classical privacy properties, to our knowledge the following two properties, namely content awareness, and policy and consent compliance, are not explicitly defined in the literature. However, we consider them important privacy objectives, due to their significance for privacy and data protection. With the emergence of Web 2.0 technologies, users tend to provide excessive information to service providers and lose control of their personal information. Therefore, the content awareness property is proposed to make sure that users are aware of their personal data and that only the minimum necessary information is sought and used to allow for the performance of the function to which it relates.

The more personally identifiable information a data subject discloses, the higher the risk of a privacy violation. To ensure content awareness, a number of technical enforcement tools have been developed. For instance, the concept of personal information feedback tools has been promoted [13, 14] to help users gain privacy awareness and self-determine which personal data to disclose.

The Platform for Privacy Preferences Project (P3P) [15] has been designed to allow websites (as data controllers) to declare their intended use of the information that they collect about the browsing users (as data subjects). P3P addresses the content awareness property by making users aware of how personal data are processed by the data controller.

Although not necessarily privacy-oriented, another responsibility of the user, within the realm of the content awareness objective, is to keep the user's data up-to-date to prevent wrong decisions based on incorrect data. This means that the data subject or the data controller (depending on the application) is responsible for deleting and updating inaccurate information. For example, it is crucial to maintain patients' data in e-health applications. Imagine a doctor forgetting to mention that a patient is diabetic: the absence of this information could have fatal consequences for patients taking medication without considering negative side effects on diabetics.

To summarize, the content awareness property focuses on the user's consciousness regarding his own data. The user needs to be aware of the consequences of sharing information. These consequences can refer to the user's privacy, which can be violated by sharing too much personally identifiable information, as well as to undesirable results caused by providing incomplete or incorrect information.

4.9 Policy and consent compliance

Unlike the content awareness property, which is focused on the user, the policy and consent compliance property requires the whole system – including data flows, data stores, and processes – as data controller to inform the data subject about the system's privacy policy, or to allow the data subject to specify consents in compliance with legislation, before users access the system. According to the definitions from the EU Directive 95/46/EC [16]: "Controller shall mean the natural or legal person, public authority, agency or any other body which alone or jointly with others determines the purposes and means of the processing of personal data." "The data subject's consent shall mean any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed."

A policy specifies one or more rules with respect to data protection. These are general rules determined by the stakeholders of the system. Consents specify one or more data protection rules as well; however, these rules are determined by the user and only relate to the data regarding this specific user. The policy and consent compliance property essentially ensures that the system's policy and the user's consent, specified in textual form, are indeed implemented and enforced.

This property is closely related to legislation. There are a number of legal frameworks addressing the raised concerns of data protection, such as the Health Insurance Portability and Accountability Act (HIPAA) [17] in the United States, the Data Protection Directive 95/46/EC [16] in Europe, the Personal Information Protection and Electronic Documents Act and Privacy Act [18] in Canada, the Commonwealth Privacy Act 1988 and Privacy Amendment (Private Sector) Act 2000 [19] in Australia, and the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data [20].

One example of consent compliance is in e-health: in some countries, healthcare professionals are not allowed to intervene until the data subject has given informed consent for medical treatment.

There are initiatives to protect data subjects and create openness; however, it is evidently important to ensure that internal rules actually comply with what is promised in policies and consents. Unfortunately, few technical solutions exist to guarantee this compliance. A possible non-technical solution is to use employee contracts to enforce penalties (e.g., getting fired or paying fines) to ensure compliance. Another solution is to hire an auditor to check policy compliance. Eventually, necessary legal actions can be taken by data subjects in case of noncompliance.

Breaux et al. [21] pointed out that, to ensure a product complies with its privacy and security goals, legal requirements need to be identified and refined into product requirements, and the product requirements need to be integrated into the ongoing product design and testing processes. They presented an industry case study in which requirements of Cisco products were specified to comply with Section 508 of the U.S. Workforce Investment Act (WIA) of 1998 [22]. They developed a set of qualitative metrics to rationalize the comparison of two requirements. These metrics demonstrate that alignments between legal and product requirements can be described in detail by using the goal-oriented concept of refinement. Their analysis revealed that a frame-based requirements analysis method [23], which itemizes requirements and preserves legal language, is useful to incorporate legal requirements into a manufacturer's compliance framework.

Table 2: In the LINDDUN methodology, privacy properties and the corresponding privacy threats are categorized as hard privacy and soft privacy

Privacy properties                 | Privacy threats
Hard privacy
Unlinkability                      | Linkability
Anonymity & pseudonymity           | Identifiability
Plausible deniability              | Non-repudiation
Undetectability & unobservability  | Detectability
Confidentiality                    | Disclosure of information
Soft privacy
Content awareness                  | Content unawareness
Policy and consent compliance      | Policy and consent non-compliance

5 Mapping privacy threats to DFD

In this section, we present the privacy threat categories based on the above-mentioned privacy properties. We also discuss how to map these categories to the DFD elements.

5.1 Privacy threat categories

As shown in Table 2, the methodology considers seven types of threats; LINDDUN is the mnemonic acronym that we use. The following list describes the LINDDUN components:

1. Linkability of two or more items of interest (IOIs, e.g., subjects, messages, actions, etc.) allows an attacker to sufficiently distinguish whether these IOIs are related or not within the system.

2. Identifiability of a subject means that the attacker can sufficiently identify the subject associated to an IOI, for instance, the sender of a message. Usually, identifiability refers to a set of potential subjects, called the identifiability set [7]. In essence, identifiability is a special case of linkability in which a subject and its attributes are involved. Identifiability is a threat to both anonymity and pseudonymity.

3. Non-repudiation, in contrast to security, is a threat for privacy. Non-repudiation allows an attacker to gather evidence to counter the claims of the repudiating party, and to prove that a user knows, has done or has said something.

4. Detectability of an IOI means that the attacker can sufficiently distinguish whether such an item exists or not. If we consider messages as IOIs, it means that messages are sufficiently discernible from random noise.

5. Information Disclosure threats expose personal information to individuals who are not supposed to have access to it.

6. Content Unawareness indicates that a user is unaware of the information disclosed to the system. The user either provides too much information, which allows an attacker to easily retrieve the user's identity, or inaccurate information, which can cause wrong decisions or actions.

7. Policy and consent Noncompliance means that, even though the system shows its privacy policies to its users, there is no guarantee that the system actually complies with the advertised policies. Therefore, the user's personal data might still be revealed.

5.2 Mapping privacy threat categories to the system

This section provides the guidelines to identify privacy threats of a software-based system. First, a Data Flow Diagram (DFD) is created in correspondence to the application's use case scenarios. Second, privacy threats are mapped to the DFD.

5.2.1 Creating Application DFD Based On Use Case Scenarios

The DFD is chosen to represent a software system for two reasons. First, the DFD has proven to be sufficiently expressive in a number of case studies examined by the authors. Second, the DFD is also used by the SDL threat modeling process; hence, by deploying the same modeling technique, an interesting synergy can be created between the proposed framework and the SDL process.

Running example: Social Network 2.0
In our running example, Social Network 2.0, Alice is a registered user of a social network. Each time Alice updates her friends list, she first connects to the social network's web portal. Accordingly, the portal communicates with the social network's server, and eventually, the friendship information of Alice and all other users of that social network is stored in a database.

The DFD for the Social Network 2.0 application was already presented in Figure 1 of Section 2. Table 3 lists the DFD elements.

Table 3: DFD elements in the Social Network 2.0 application

Entity      | User
Process     | Portal
            | Social network service
Data store  | Social network DB
Data flow   | User data stream (user – portal)
            | Service data stream (portal – service)
            | DB data stream (service – DB)

The creation of the DFD is an important part of the analysis. If the DFD were incorrect, the analysis results would be wrong as well. Since privacy focuses on the protection of the user's personal information, it is important to consider where the information will be stored or passed by, as these are the crucial elements for building in privacy.

5.2.2 Mapping Privacy Threats to DFD

After the DFD elements are listed, we identify the privacy threat categories for each DFD element by following the mapping depicted in Table 4. Each intersection marked with the symbol × indicates a potential privacy threat at the corresponding DFD element in the system.

Table 4: Mapping LINDDUN privacy threats to DFD element types

DFD element | L | I | N | D | D | U | N
Data store  | × | × | × | × | × |   | ×
Data flow   | × | × | × | × | × |   | ×
Process     | × | × | × | × | × |   | ×
Entity      | × | × |   |   |   | × |

From left to right: L linkability, I identifiability, N non-repudiation, D detectability, D information disclosure, U content unawareness, N policy/consent non-compliance

In essence, each DFD element is subject to certain privacy threats, and the nature of the potential privacy threat is determined by the DFD element type. For example, a data flow is subject to a number of privacy threats such as identifiability, linkability, detectability, non-repudiation, and information disclosure.
The following sections explain how these privacy threats affect the DFD elements. More threat scenarios corresponding to our running example will be discussed in Section 8.

The nature of linkability indicates that the threat affects DFD elements in pairs. In other words, linkability of a DFD element refers to a pair (x1, x2), where x ∈ {E, DF, DS, P} is the linkable IOI. Obviously, linkability at an entity, from an attacker's perspective, means that within the system (comprising these and possibly other items), the attacker can sufficiently distinguish whether these entities are related or not. A similar description applies to data flows, data stores, and processes.

The identifiability threat affects all four DFD elements, such that each DFD element is made explicit as the attribute that identifiability (or its opposite property, anonymity) relates to, by forming a pair with a subject. Essentially, identifiability at each DFD element refers to a pair (x, y), where x ∈ {E} is the identifiable subject, and y ∈ {E, DS, DF, P} is the attribute identifiability relates to. For example, identifiability at an entity refers to a pair (E, E), meaning to identify an entity within a set of entities. Identifiability at a data flow refers to a pair (E, DF), meaning that a message is linkable to a potentially sending or receiving subject. Identifiability at a data store refers to a pair (E, DS), meaning that a database entry is linkable to a potential data holder or subject. Identifiability at a process refers to a pair (E, P), meaning that a process is linkable to a potentially accessing subject.

Non-repudiation, the opposite of plausible deniability, is a privacy threat that affects the DFD elements of data flow, data store and process. Non-repudiation might be appreciated for some systems but undesirable for others; it depends on the system requirements. For e-commerce applications, non-repudiation is an important security property. Imagine a situation where a buyer signs for a purchased item upon receipt; the vendor can later use the signed receipt as evidence that the user received the item. For other applications, such as off-the-record conversations, participants may desire plausible deniability for privacy protection, such that there will be no record to demonstrate the communication event, the participants and the content. In this scenario, non-repudiation is a privacy threat. Even though the entity is the only DFD element able to (non-)repudiate, the non-repudiation privacy threat actually occurs at data flows, data stores, and processes. Similar to linkability and identifiability, non-repudiation at each DFD element refers to a pair (x, y), where x ∈ {E} is the non-repudiating subject, and y ∈ {DS, DF, P} is the attribute it relates to.

Detectability threats occur at data flows, data stores, and processes, meaning that the attacker can sufficiently distinguish whether such an item exists or not. Though in some applications techniques such as covert channels and steganography can be used to protect both messages (data flow) and communicating parties (entity), in this case the threat actually occurs at the data flow instead of the entity. In other words, the assets we want to protect against the detectability threat include data flows, data stores, and processes.

Information disclosure threats affect data flows, data stores, and processes, referring to the exposure of information at these DFD elements to individuals who are not supposed to have access to it.

The content unawareness threat is related to the entity, since the entity (data subject or data controller) is actually responsible for providing the necessary consents to process personal data and for updating or deleting expired information.

Policy and consent noncompliance is a threat that affects the system as a whole, because each system component (including data flows, data stores and processes) is responsible for ensuring that actions are taken in compliance with privacy policies and data subjects' consents.

Running example: Social Network 2.0
Considering the Social Network 2.0 application, the list of generic privacy threats to the modeled system is depicted in Table 5. This is obtained by gathering the elements from Table 3 and then determining the susceptible threats with Table 4.

6 Detailing privacy threats via threat tree patterns

This section presents an extensive catalog of threat tree patterns that can be used to detail the privacy threats to a realistic system. For each marked intersection in Table 4, a threat tree pattern exists showing the detailed preconditions for this specific threat category to materialize. The preconditions are hence vulnerabilities that can be exploited for a privacy attack scenario.

The present catalog is based on the state-of-the-art privacy developments, and the threat trees reflect common attack patterns and help application designers think about privacy conditions in the system. However, the threat trees depicted in this section represent the best effort so far. The catalog is subject to continuous improvement in order to reflect newly discovered threats. Further, the catalog is meant to be updated as new results become available from the industrial validation experiments.

The threat tree catalog can be consulted on https://people.cs.kuleuven.be/~kim.wuyts/private/ERISE/.
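The mapping just described (Table 4 applied to the elements of Table 3) is mechanical, which is what makes the elicitation systematic. The sketch below mirrors the earlier STRIDE example and generates the rows of Table 5 for the Social Network 2.0 case; it is illustrative only and not part of the paper.

```python
# LINDDUN mapping from Table 4: DFD element type -> susceptible privacy threat categories.
LINDDUN_BY_ELEMENT_TYPE = {
    "data store": ["Linkability", "Identifiability", "Non-repudiation", "Detectability",
                   "Information disclosure", "Policy/consent non-compliance"],
    "data flow":  ["Linkability", "Identifiability", "Non-repudiation", "Detectability",
                   "Information disclosure", "Policy/consent non-compliance"],
    "process":    ["Linkability", "Identifiability", "Non-repudiation", "Detectability",
                   "Information disclosure", "Policy/consent non-compliance"],
    "entity":     ["Linkability", "Identifiability", "Content unawareness"],
}

# The DFD elements of Table 3 (Social Network 2.0).
ELEMENTS = [
    ("Social network DB", "data store"),
    ("User data stream (user - portal)", "data flow"),
    ("Service data stream (portal - service)", "data flow"),
    ("DB data stream (service - DB)", "data flow"),
    ("Portal", "process"),
    ("Social network service", "process"),
    ("User", "entity"),
]

if __name__ == "__main__":
    # Each printed (category, element) pair corresponds to one mark in Table 5.
    for name, kind in ELEMENTS:
        for category in LINDDUN_BY_ELEMENT_TYPE[kind]:
            print(f"{category:<30} at {kind}: {name}")
```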

Table 5: Determining privacy threats for DFD elements within the Social Network 2.0 application

Threat target                                         | L | I | N | D | D | U | N
Data store | Social network DB                        | × | × | × | × | × |   | ×
Data flow  | User data stream (user – portal)         | × | × | × | × | × |   | ×
           | Service data stream (portal – service)   | × | × | × | × | × |   | ×
           | DB data stream (service – DB)            | × | × | × | × | × |   | ×
Process    | Portal                                   | × | × | × | × | × |   | ×
           | Social network service                   | × | × | × | × | × |   | ×
Entity     | User                                     | × | × |   |   |   | × |

From left to right: L linkability, I identifiability, N non-repudiation, D detectability, D information disclosure, U content unawareness, N policy/consent non-compliance

7 Scoping the privacy analysis based on assumptions

As illustrated in Section 5.2, several DFD element categories correspond to multiple threat categories. This implies that when the system, and hence the DFD, increases in size, the number of threats will grow exponentially. To control the number of threats to be analyzed, one can make assumptions based on the system in general (e.g. the data flow between process X and process Y will be encrypted), or assumptions and decisions can be obtained by inspecting the privacy threat trees (e.g. there are no non-repudiation threats applicable to this system).

Running example: Social Network 2.0
When inspecting the Social Network 2.0 application, we primarily assume that DFD elements within the trust boundary (marked as a dashed line in Figure 1) are trustworthy. We trust the processes within the boundary, as well as all data flows inside the trust boundary. Therefore, we will not discuss linkability, identifiability, and information disclosure threats on these elements. We do not, however, trust the user and its communication with the portal, and we also want to protect the data store containing all the users' information.

Moreover, after careful consideration of the corresponding privacy threat trees, non-repudiation and detectability threats are considered irrelevant for social networks. Presumably, this depends on what privacy properties are required for a particular social network system. In case plausible deniability and undetectability would be desirable for a certain application, we should still consider these threats for each DFD element accordingly.

Finally, the non-compliance threat tree indicates that its corresponding threats are not specific to one DFD element but are applicable to the entire system; therefore it was decided to merge the non-compliance threats of the different DFD elements into one large threat.

By applying these assumptions to the initial mapping, the number of threats has significantly decreased from 40 to 10.
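The following minimal sketch (not from the paper; predicate and variable names are invented for illustration) shows how assumptions of this kind can be applied to the generic threat list of Table 5 to obtain the reduced set.

```python
# Illustrative filtering of the generic threat list according to the Section 7 assumptions.
# A threat is a tuple (element_name, element_kind, trusted, category), e.g. the output of
# the earlier enumeration sketch joined with the trust-boundary information.

def keep(element_name, kind, trusted, category):
    # Assumption 1: elements inside the trust boundary are trustworthy, so drop
    # linkability, identifiability and information disclosure threats on them.
    if trusted and category in {"Linkability", "Identifiability", "Information disclosure"}:
        return False
    # Assumption 2: non-repudiation and detectability are considered irrelevant here.
    if category in {"Non-repudiation", "Detectability"}:
        return False
    return True

def scope(threats):
    kept = [t for t in threats if keep(*t)]
    # Assumption 3: merge per-element non-compliance threats into one system-wide threat.
    compliance = [t for t in kept if t[3] == "Policy/consent non-compliance"]
    others = [t for t in kept if t[3] != "Policy/consent non-compliance"]
    if compliance:
        others.append(("whole system", "system", False, "Policy/consent non-compliance"))
    return others

if __name__ == "__main__":
    example = [
        ("User data stream (user - portal)", "data flow", False, "Information disclosure"),
        ("Portal", "process", True, "Information disclosure"),
        ("Social network DB", "data store", True, "Policy/consent non-compliance"),
    ]
    for threat in scope(example):
        print(threat)
```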
8 Documenting Threat Scenarios in Misuse Cases

Threat tree patterns are used to detail the generic LINDDUN threat categories into specific threat instances that can occur in a system. Furthermore, some threat instances could have been discarded during the risk-analysis step. The result of the above process should be a collection of threat scenarios that need to be documented. To this aim, misuse cases can be used. In particular, a misuse case can be considered as a use case from the misactor's point of view. A misactor is someone who intentionally or unintentionally initiates the misuse case. Alexander [26] provides some example misuse cases, together with the corresponding (positive) use cases. We chose misuse cases because they represent a well-established technique to elicit requirements, and a number of support tools exist as well.

The structure of a misuse case, which is based on the template provided by Sindre and Opdahl [27], is described below:

Summary: provides a brief description of the threat.
Assets, stakeholders and threats: describes the assets being threatened, their importance to the different stakeholders, and what the potential damage is if the misuse case succeeds.
Primary misactor: describes the type of misactor performing the misuse case. Possible types are insiders, people with a certain technical skill, and so on. Also, some misuse cases could occur accidentally whereas others are most likely to be performed intentionally.

Basic Flow: discusses the normal flow of actions, resulting in a successful attack for the misactor.
Alternative Flows: describes the other ways the misuse can occur.
Trigger: describes how and when the misuse case is initiated.
Preconditions: preconditions that the system must meet for the attack to be feasible.
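As a small illustration (not part of the paper; the class and field names are hypothetical), the template above maps directly onto a record type, which is convenient when misuse cases are collected and later fed into risk assessment:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MisuseCase:
    """One documented privacy threat scenario, following the template described above."""
    title: str
    summary: str
    assets_stakeholders_threats: str
    primary_misactor: str
    basic_flow: List[str] = field(default_factory=list)
    alternative_flows: List[List[str]] = field(default_factory=list)
    trigger: str = ""
    preconditions: List[str] = field(default_factory=list)

# Abbreviated version of MUC 1 from this section.
muc1 = MisuseCase(
    title="MUC 1 - Linkability of social network database (data store)",
    summary="Data entries can be linked to the same person",
    assets_stakeholders_threats="Personally identifiable information (PII) of the user",
    primary_misactor="skilled insider / skilled outsider",
    basic_flow=["misactor gains access to the database",
                "misactor links data entries and possibly re-identifies the data subject"],
    trigger="by misactor, can always happen",
    preconditions=["no or insufficient protection of the data store",
                   "no or insufficient data anonymization"],
)
```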
The preconditions refer to the leaf nodes of the threat tree patterns, and the basic (and alternative) flows describe how a misuser could exploit these weaknesses of the system in order to mount an attack.

Running example: Social Network 2.0
In our running example, we assume that communication and processes within the social network service provider are trustworthy (see the trust boundary in the DFD depicted in Figure 1). However, we want to protect the data store against information disclosure. The data controllers could be users, social network providers, and application providers.

To illustrate how to create a misuse case based on the threat tree patterns, consider the threat tree of linkability at the data store. The tree illustrates that the system is susceptible to this threat when the data store is not sufficiently protected against information disclosure and no sufficient data anonymization techniques are employed. These are the preconditions of the misuse case. To create the attack scenarios, it is clear that the attacker first needs to have access to the data store, and secondly, either the user (as the data subject) can be re-identified (the basic flow) or the pseudonyms can be linked (the alternative flow). The resulting misuse case is presented below. The additional nine misuse cases applicable to the social network example are described in Appendix A.

Title: MUC 1 – Linkability of social network database (data store)
Summary: Data entries can be linked to the same person (without necessarily revealing the person's identity).
Assets, stakeholders and threats: Personally Identifiable Information (PII) of the user.
• The user:
  – Data entries can be linked to each other, which might reveal the person's identity
  – The misactor can build a profile of a user's online activities (interests, active times, comments, updates, etc.)
Primary misactor: skilled insider / skilled outsider
Basic Flow:
1. The misactor gains access to the database
2. The misactor can link the data entries together and possibly re-identify the data subject from the data content
Alternative Flow:
1. The misactor gains access to the database
2. Each data entry is linked to a pseudonym
3. The misactor can link the different pseudonyms together (linkability of entity)
4. Based on the pseudonyms, the misactor can link the different data entries
Trigger: by misactor, can always happen.
Preconditions:
• no or insufficient protection of the data store
• no or insufficient data anonymization techniques, or strong data mining applied

Note that formulating soft privacy threats is less straightforward and requires some out-of-the-box thinking for suitable (non-)technical solutions. We refer the reader to misuse cases 9 and 10 in the appendix as an example of the latter case.

8.1 Risk Assessment

Similarly to STRIDE, LINDDUN can suggest a (large) number of documented threats. Before the process moves forward, the identified threats must be prioritized. Only the important ones should be considered for inclusion in the requirements specification and, consequently, in the design of the solution. Risk assessment techniques provide support for this stage. In general, risk is calculated as a function of the likelihood of the attack scenario depicted in the MUC (misuse case) and its impact. The risk value is used to sort the MUCs: the higher the risk, the more important the MUC is.

The LINDDUN framework (similarly to STRIDE) is independent from the specific risk assessment technique that is used. The analyst is free to pick the technique of choice, for instance the OWASP Risk Rating Methodology [28], Microsoft's DREAD [29], NIST's Special Publication 800-30 [30], or SEI's OCTAVE [31]. These techniques leverage the information contained in the MUC, such as the involved assets (for the impact), and the attacker profile as well as the basic/alternative flows (for the likelihood). Many of the above-mentioned techniques include privacy considerations when assessing the impact of a threat. However, as a research challenge, a privacy-specific risk assessment technique is worth investigating, should on-field experience reveal inadequacies in state-of-the-art techniques. This goes beyond the scope of this work.
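A minimal sketch of the prioritization step follows. It is illustrative only: the 1-5 rating scale, the threshold, and the example MUC titles and scores are made up and are not prescribed by LINDDUN or by the risk methods cited above.

```python
# Sort documented misuse cases by a simple risk score: risk = likelihood x impact,
# with both factors rated here on an assumed 1-5 scale.

def risk(likelihood: int, impact: int) -> int:
    return likelihood * impact

# Hypothetical (name, likelihood, impact) triples for documented misuse cases.
mucs = [
    ("MUC A - linkability of the data store", 4, 5),
    ("MUC B - information disclosure of a data flow", 3, 4),
    ("MUC C - content unawareness of the user", 2, 3),
]

THRESHOLD = 10  # only MUCs at or above this risk are carried into the requirements
prioritized = sorted(mucs, key=lambda m: risk(m[1], m[2]), reverse=True)
selected = [m for m in prioritized if risk(m[1], m[2]) >= THRESHOLD]

for name, likelihood, impact in prioritized:
    print(f"risk={risk(likelihood, impact):>2}  {name}")
```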

9 From threat analysis to privacy enhancing solutions

This section explains the elicitation of privacy requirements from the threat analysis and the selection of mitigation strategies and techniques based on the privacy objectives.

9.1 Eliciting Privacy Requirements: From Privacy Threat Analysis to Mitigation Strategy

Misuse cases describe the relevant (risk-wise) threat scenarios for the system. The preconditions are based on the threat tree patterns, and the basic and alternative flows are inspired by the system's use cases.

As a next step, the system's (positive) requirements can be extracted from the misuse cases. To this aim, the specification of the privacy requirements is facilitated by Table 6, which maps the types of threat scenarios to types of privacy requirements. Note that the table is a refinement of the more generic objectives in Table 2.

9.2 From Privacy Requirements to Privacy Enhancing Solutions

Similarly to security, privacy requirements can be satisfied via a range of solution strategies:

1. Warning the user could be a valid strategy for lower risk (but still relevant) threats. However, precautions have to be taken so that users, especially nontechnical ones, do not make poor trust decisions.

2. Removing or turning off the feature is the only way to reduce the risk to zero. When threat models indicate that the risk is too great or the mitigation techniques are untenable, it is best not to build the feature in the first place, in order to keep a balance between user features and potential privacy risks.

3. Countering threats with either preventive or reactive privacy enhancing technology is the most commonly used strategy to solve specific issues.

This section mainly focuses on the last strategy. When countering threats with technology is chosen as the mitigation strategy, system designers have to identify the sound and appropriate privacy enhancing technology (PET). We summarize the state-of-the-art PETs in Table 7 and map these techniques to each of the corresponding privacy requirements of Table 6. As a result, improved guidance is provided to the system designers over the solution selection process.
fits. In that scenario, one solution could be building a secu-
This section mainly focuses on the last strategy. When rity agriculture out of smart clients and an untrusted central
countering threats with technology is chosen as the mitiga- server to removes the need for faith in network operators
tion strategy, system designers have to identify the sound and gives users control of their privacy [82]. Another so-
and appropriate privacy enhancing technology (PET). We lution could be using encryption to enforce access control
summarize the state-of-art PETs in Table 7 and map these for users’ personal information based on their privacy pref-
techniques to each of the corresponding privacy require- erences [83, 84].
ments of Table 6. As a result, improved guidance is pro- Another research discussion is concerning practicality to
vided to the system designers over the solution selection build user privacy feedback tools. In short, from a techni-
process. cal point of view, feedback could be realized by means of
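The sketch encrypts a profile attribute on the client before it reaches the (untrusted) provider, so that only parties holding the audience key can read it. It is a minimal illustration that assumes the third-party Python package "cryptography"; how the audience key is distributed to the intended readers (e.g. the user's friends) is deliberately left out, and this is not the actual scheme proposed in [83, 84].

    # Minimal sketch: the provider only ever stores ciphertext of the attribute.
    from cryptography.fernet import Fernet

    audience_key = Fernet.generate_key()   # shared with the intended audience out of band
    client = Fernet(audience_key)

    # What the social network stores instead of the plaintext attribute:
    stored_on_server = client.encrypt(b"birthday: 1984-06-01")

    # A friend holding the audience key recovers the attribute locally:
    print(Fernet(audience_key).decrypt(stored_on_server).decode())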
Table 6: Privacy objectives based on LINDDUN threat types (E-Entity, DF-Data Flow, DS-Data Store, P-Process)

LINDDUN threats Elementary privacy objectives


Linkability of (E, E) Unlinkability of (E, E)
Linkability of (DF, DF ) Unlinkability of (DF, DF )
Linkability of (DS, DS) Unlinkability of (DS, DS)
Linkability of (P, P ) Unlinkability of (P, P )
Identifiability of (E, E) Anonymity / pseudonymity of (E, E)
Identifiability of (E, DF ) Anonymity / pseudonymity of (E, DF )
Identifiability of (E, DS) Anonymity / pseudonymity of (E, DS)
Identifiability of (E, P ) Anonymity / pseudonymity of (E, P )
Non-repudiation of (E, DF ) Plausible deniability of (E, DF )
Non-repudiation of (E, DS) Plausible deniability of (E, DS)
Non-repudiation of (E, P ) Plausible deniability of (E, P )
Detectability of DF Undetectability of DF
Detectability of DS Undetectability of DS
Detectability of P Undetectability of P
Information Disclosure of DF Confidentiality of DF
Information Disclosure of DS Confidentiality of DS
Information Disclosure of P Confidentiality of P
Content Unawareness of E Content awareness of E
Policy and consent noncompliance of the system    Policy and consent compliance of the system
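For tool support, the mapping of Table 6 can be encoded directly as a lookup table. The short Python sketch below is our own illustration of that idea (the threat names are abbreviated; the table above remains the authoritative mapping).

    # Table 6 as a lookup: LINDDUN threat type -> elementary privacy objective.
    OBJECTIVES = {
        "Linkability":                      "Unlinkability",
        "Identifiability":                  "Anonymity / pseudonymity",
        "Non-repudiation":                  "Plausible deniability",
        "Detectability":                    "Undetectability",
        "Information disclosure":           "Confidentiality",
        "Content unawareness":              "Content awareness",
        "Policy and consent noncompliance": "Policy and consent compliance",
    }

    def objective(threat: str, element: str) -> str:
        """Objective countering a threat instantiated at a DFD element (E, DF, DS or P)."""
        return f"{OBJECTIVES[threat]} of {element}"

    print(objective("Linkability", "(DS, DS)"))     # Unlinkability of (DS, DS)
    print(objective("Identifiability", "(E, DF)"))  # Anonymity / pseudonymity of (E, DF)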

Another research discussion concerns the practicality of building user privacy feedback tools. In short, from a technical point of view, feedback could be realized by means of data mining techniques (e.g., the k-anonymity model) to counter user identification and data profiling attacks. Such a tool compares the data a user sends to the social network with the whole set of data composed of the data of all the network's users, and checks the "uniqueness" of the personal identifiable information (PII) of the user. With unique PII, a user has a higher probability of being identified. The tool then warns users each time their activities provoke privacy risks, e.g. it shows a risk level of identifiability by posting a message "you are about to leave the anonymity safe zone". There are some research incentives for feedback systems for social networks [13, 14, 81]. However, this concept implies a paradox: in order to ensure accurate feedback, the feedback tool itself should be a "perfect attacker" that knows all the data from all users. Due to the space and scope limits of this paper, we cannot discuss this in detail. We encourage interested readers to formalize the feedback system model and to investigate whether it is technically realistic to realize the feedback concept, and beyond which threshold the feedback could be considered satisfactory. Intuitively speaking, the aforementioned feedback concept is not purely a technical problem but rather an education problem of raising users' privacy awareness. The usability of such feedback tools is also an issue: how to design a user-friendly interface and encourage users to actually use the feedback remains a research challenge.
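A toy version of such a uniqueness check is sketched below; it is our own illustration rather than an existing tool. Before a post is published, the attributes it would reveal are compared against the attributes already visible for all users, and a warning is raised when fewer than k users share the same combination, i.e. when the poster is about to leave the "anonymity safe zone". The records and the threshold k are hypothetical, and a real tool would need exactly the global view discussed above.

    # k-anonymity-style feedback check (illustrative data and threshold).
    from collections import Counter

    visible_users = [
        {"city": "Leuven", "age_range": "20-29", "employer": "university"},
        {"city": "Leuven", "age_range": "20-29", "employer": "university"},
        {"city": "Ghent",  "age_range": "30-39", "employer": "hospital"},
    ]

    def check_post(attrs: dict, population: list, k: int = 2) -> None:
        """Warn when fewer than k users would share the revealed attributes."""
        counts = Counter(tuple(sorted(u.items())) for u in population)
        matches = counts[tuple(sorted(attrs.items()))] + 1   # +1 counts the poster
        if matches < k:
            print("Warning: you are about to leave the anonymity safe zone.")
        else:
            print(f"{matches} users share these attributes; identifiability risk is lower.")

    check_post({"city": "Leuven", "age_range": "20-29", "employer": "university"}, visible_users)
    check_post({"city": "Antwerp", "age_range": "40-49", "employer": "bakery"}, visible_users)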
10 Conclusion

In this paper, we have presented a comprehensive framework to model privacy threats in software-based systems, elicit privacy requirements, and instantiate privacy enhancing countermeasures. The primary contribution is the systematic methodology to model privacy specific threats. This is achieved by defining a list of privacy threat types and providing the necessary mappings to the elements in the system model. The second contribution is represented by the supporting body of knowledge, namely, an extensive catalogue of privacy specific threat tree patterns. In addition, this work provides the means to map the most commonly known privacy enhancing technologies (PETs) to the identified privacy threats and the elicited privacy requirements. The privacy threat tree patterns and categorization of suggested PETs are expected to be continuously updated and improved upon, since new threats keep emerging, just as new privacy technologies keep evolving.

As future work, we plan to apply the proposed framework to larger case studies; for instance, validation in the context of a national e-health system is being performed.
Table 7: Mapping privacy objectives with privacy enhancing techniques (U – Unlinkability, A – Anonymity / Pseudonymity, P – Plausible deniability, D – Undetectability / unobservability, C – Confidentiality, W – Content Awareness, O – Policy and consent compliance of the system)

Mitigation techniques (PETs), with marks indicating the supported objectives (U A P D C W O):

Anonymity systems
  Mix-networks (1981) [35], DC-networks (1985) [36, 37], ISDN-mixes [38], Onion Routing (1996) [39], Crowds (1998) [40], single proxies (90s) (Penet pseudonymous remailer (1993-1996), Anonymizer, SafeWeb), anonymous remailers (Cypherpunk Type 0, Type 1 [41], Mixmaster Type 2 (1994) [42], Mixminion Type 3 (2003) [43]), and low-latency communication (Freedom Network (1999-2001) [44], Java Anon Proxy (JAP) (2000) [45], Tor (2004) [46])  × × ×
  DC-net & MIX-net + dummy traffic, ISDN-mixes [38]  × × × ×
  Broadcast systems [47, 48] + dummy traffic  × × ×
Privacy preserving authentication
  Private authentication [49, 50]  × ×
  Anonymous credentials (single show [51], multishow [52])  × ×
  Deniable authentication [53]  × × ×
  Off-the-record messaging [54]  × × × ×
Privacy preserving cryptographic protocols
  Multi-party computation (secure function evaluation) [55, 56]  × ×
  Anonymous buyer-seller watermarking protocol [57]  × × ×
Information retrieval
  Private information retrieval [58] + dummy traffic  × × ×
  Oblivious transfer [59, 60]  × × ×
  Privacy preserving data mining [61, 62]  × × ×
  Searchable encryption [63], private search [64]  × ×
Data anonymization
  K-anonymity model [25, 65], l-diversity [66]  × ×
Information hiding
  Steganography [67]  × × ×
  Covert communication [68]  × × ×
  Spread spectrum [69]  × × ×
Pseudonymity systems
  Privacy enhancing identity management system [70]  × ×
  User-controlled identity management system [71]  × ×
  Privacy preserving biometrics [72]  × × ×
Encryption techniques
  Symmetric key & public key encryption [73]  ×
  Deniable encryption  × ×
  Homomorphic encryption [74]  ×
  Verifiable encryption [75]  ×
Access control techniques
  Context-based access control [76]  ×
  Privacy-aware access control [77, 78]  ×
Policy and feedback tools
  Policy communication (P3P [15])  ×
  Policy enforcement (XACML [79], EPAL [80])  ×
  Feedback tools for user privacy awareness [13, 14, 81]  ×
  Data removal tools (spyware removal, browser cleaning tools, activity traces eraser, harddisk data eraser)  ×
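In the same spirit, the mapping of Table 7 can be queried mechanically once the elementary privacy requirements are known. The Python sketch below encodes only a handful of entries, with objective letters chosen by us for illustration; the full table and the analyst's judgement remain the authority on which PET actually fits a given system.

    # Suggest candidate PETs for an objective (partial, illustrative encoding of Table 7).
    PET_CATALOGUE = {
        "Onion routing / Tor [39, 46]":                      {"U", "A", "D"},
        "Anonymous credentials [51, 52]":                    {"U", "A"},
        "K-anonymity / l-diversity [25, 65, 66]":            {"U", "A"},
        "Symmetric / public key encryption [73]":            {"C"},
        "Policy enforcement (XACML [79], EPAL [80])":        {"O"},
        "Feedback tools for privacy awareness [13, 14, 81]": {"W"},
    }

    def suggest(objective: str) -> list:
        """objective is one of the column labels U, A, P, D, C, W, O."""
        return [pet for pet, covered in PET_CATALOGUE.items() if objective in covered]

    print(suggest("A"))   # candidate PETs supporting anonymity / pseudonymity
    print(suggest("C"))   # candidate PETs supporting confidentiality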
Table 8: Social Network 2.0 example: from misuse cases to privacy requirements and suggested mitigation strategies and techniques

1. Linkability of social network data store
   Privacy requirement: Unlinkability of data entries within the social network database
   Mitigation: Apply data anonymization techniques, such as k-anonymity [65].
   Privacy requirement: Protection of the data store
   Mitigation: Enforce data protection by means of relationship-based access control [77].

2. Linkability of data flow of the user data stream (user-portal)
   Privacy requirement: Unlinkability of messages of user-portal communication; channel confidentiality
   Mitigation: Deploy an anonymity system, such as Tor [46].

3. Linkability of entities (the social network users)
   Privacy requirement: Unlinkability of different pseudonyms (user IDs) of social network users; channel confidentiality
   Mitigation: 1) Technical enforcement: deploy an anonymity system, such as Tor [46], for communication between the user and the social network web portal; 2) User privacy awareness: inform users that revealing too much information online can be privacy invasive.

4. Identifiability at the social network data store
   Privacy requirement: Anonymity of social network users, such that the user will not be identified from social network database entries
   Mitigation: Protection of the data store, by applying data anonymization techniques such as k-anonymity [65].
   Privacy requirement: Protection of the data store
   Mitigation: Enforce data protection by means of relationship-based access control [77].

5. Identifiability at data flow of the user data stream (user-portal)
   Privacy requirement: Anonymity of social network users, such that the user will not be identified from user-portal communication by content; channel confidentiality
   Mitigation: Deploy an anonymity system, such as Tor [46], for communication between the user and the social network web portal.

6. Identifiability of the social network users
   Privacy requirement: Pseudonymize user IDs
   Mitigation: 1) Apply secure pseudonymization techniques to issue pseudonyms as user IDs; 2) User privacy awareness: inform users that using their real ID poses a risk of privacy violation.
   Privacy requirement: Use identity management to ensure that unlinkability is sufficiently preserved (as seen by an attacker) between the partial identities of an individual person required by the applications
   Mitigation: Employ privacy preserving identity management, e.g. as proposed in [70], together with a user-controlled identity management system [71] to ensure user-controlled linkability of personal data. The system supports the user in making an informed choice of pseudonyms representing his or her partial identities, makes the flow of the user's identity attributes explicit to the user, and gives the user a large degree of control.
   Privacy requirement: Confidentiality of the data flow in user-portal communication
   Mitigation: Deploy an anonymity system such as Tor [46].

7. Information disclosure at the social network data store
   Privacy requirement: Release of the social network data store should be controlled according to the user's privacy preferences
   Mitigation: Apply access control at the social network databases, e.g. privacy-aware collaborative access control based on relationships [77].

8. Information disclosure of communication between the user and the social network
   Privacy requirement: Confidentiality of communication between the user and the social network should be ensured
   Mitigation: Employ a secure communication channel and deploy an anonymity system such as Tor [46].

9. Content unawareness of user
   Privacy requirement: Users need to be aware that they only need to provide a minimal set of required personal data (the data minimization principle)
   Mitigation: Use feedback tools to raise users' privacy awareness.

10. Policy and consent noncompliance of the whole social network system
   Privacy requirement: Design the system in compliance with legal guidelines for privacy and data protection
   Mitigation: 1) Hire an employee who is responsible for making the policies compliant, or hire an external company for compliance auditing; 2) Ensure training obligations for employees.
   Privacy requirement: Ensure the user is aware that, in case of violation, the user is entitled to take legal action
   Mitigation: E.g., the user can sue the social network provider whenever the user's personal data is not processed according to what was consented.
   Privacy requirement: Employee contracts clearly specify do's and don'ts according to legal guidance
   Mitigation: 1) Ensure training obligations for employees; 2) Employees who disclose users' information will be penalized (dismissal, fines, etc.).
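Several of the mitigations in Table 8 rely on relationship-based ("friends only") access control at the data store [77]. The sketch below shows the core check in its simplest form; the friend graph and the policy are hypothetical, and real proposals such as [77] additionally support multi-hop relationships, relationship types and trust levels.

    # Minimal relationship-based access control check (illustrative data).
    FRIENDS = {
        "alice": {"bob", "carol"},
        "bob":   {"alice"},
        "carol": {"alice"},
    }

    POLICY = {"profile": "friends", "wall_posts": "friends", "public_bio": "anyone"}

    def may_read(requester: str, owner: str, resource: str) -> bool:
        rule = POLICY.get(resource, "owner_only")
        if requester == owner or rule == "anyone":
            return True
        if rule == "friends":
            return requester in FRIENDS.get(owner, set())
        return False

    print(may_read("bob", "alice", "profile"))       # True: bob is alice's friend
    print(may_read("dave", "alice", "profile"))      # False: dave is not a friend
    print(may_read("dave", "alice", "public_bio"))   # True: public resource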
References

[1] A. van Lamsweerde, S. Brohez, R. De Landtsheer, and D. Janssens, “From system goals to intruder anti-goals: Attack generation and resolution for security requirements engineering,” in Proceedings of the RE'03 Workshop on Requirements for High Assurance Systems (RHAS'03), pp. 49–56, 2003.

[2] A. van Lamsweerde, Requirements Engineering: From System Goals to UML Models to Software Specifications. Wiley, 2009.

[3] M. Howard and S. Lipner, The Security Development Lifecycle. Redmond, WA, USA: Microsoft Press, 2006.

[4] G. McGraw, Software Security: Building Security In. Addison-Wesley Professional, 2006.

[5] D. J. Solove, “A taxonomy of privacy,” University of Pennsylvania Law Review, vol. 154, no. 3, 2006.

[6] D. J. Solove, Understanding Privacy. Harvard University Press, May 2008.

[7] A. Pfitzmann and M. Hansen, “A terminology for talking about privacy by data minimization: Anonymity, unlinkability, undetectability, unobservability, pseudonymity, and identity management (version 0.33, April 2010),” tech. rep., TU Dresden and ULD Kiel, April 2010. http://dud.inf.tu-dresden.de/Anon_Terminology.shtml.

[8] M. Hansen, “Linkage control – integrating the essence of privacy protection into identity management systems,” in Collaboration and the Knowledge Economy: Issues, Applications, Case Studies (P. Cunningham and M. Cunningham, eds.), Proceedings of eChallenges, pp. 1585–1592, IOS Press, Amsterdam, 2008.

[9] G. Danezis, “Talk: an introduction to U-Prove privacy protection technology, and its role in the identity metasystem – what future for privacy technology,” 2008. http://www.petsfinebalance.com/agenda/index.php.

[10] “ISO 17799: Information technology – code of practice for information security management,” tech. rep., British Standards Institute, 2000.

[11] M. Roe, Cryptography and Evidence. PhD thesis, University of Cambridge, Clare College, 1997.

[12] E. McCallister, T. Grance, and K. Kent, “Guide to protecting the confidentiality of personally identifiable information (PII) (draft),” tech. rep., National Institute of Standards and Technology (U.S.), 2009.

[13] S. Lederer, J. I. Hong, A. K. Dey, and J. A. Landay, “Personal privacy through understanding and action: Five pitfalls for designers,” Personal and Ubiquitous Computing, vol. 8, pp. 440–454, 2004.

[14] S. Patil and A. Kobsa, Privacy Considerations in Awareness Systems: Designing with Privacy in Mind, ch. 8, pp. 187–206. Human-Computer Interaction Series, Springer London, June 2009.

[15] P3P, “Platform for Privacy Preferences project, W3C P3P specifications.” http://www.w3.org/TR/P3P/.

[16] EU, “Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data,” Official Journal of the European Communities, vol. 281, pp. 31–50, 1995. http://europa.eu/scadplus/leg/en/lvb/l14012.htm.

[17] HIPAA, “HIPAA administrative simplification: Enforcement; final rule. United States Department of Health & Human Services,” Federal Register / Rules and Regulations, vol. 71, no. 32, 2006.

[18] PIPEDA, “Personal Information Protection and Electronic Documents Act (2000, c. 5),” October 2009. http://laws.justice.gc.ca/en/showtdm/cs/P-8.6.

[19] Australian national privacy regulator, “Privacy Act.” http://www.privacy.gov.au/law/act.

[20] OECD, “Guidelines on the protection of privacy and transborder flows of personal data, Organization for Economic Cooperation and Development,” 1980. http://www.oecd.org/document/18/0,2340,en_2649_34255_1815186_1_1_1_1,00.html.

[21] T. D. Breaux, A. I. Anton, K. Boucher, and M. Dorfman, “Legal requirements, compliance and practice: An industry case study in accessibility,” in RE'08: Proceedings of the 16th IEEE International Requirements Engineering Conference, pp. 43–52, IEEE Society Press, September 2008.
[22] U.S. Department of Justice, “Workforce Investment Act of 1998, Sec. 508: Electronic and information technology.” http://www.justice.gov/crt/508/508law.php.

[23] T. Breaux and A. Antón, “Analyzing regulatory rules for privacy and security requirements,” IEEE Transactions on Software Engineering, vol. 34, no. 1, pp. 5–20, 2008.

[24] G. Danezis, C. Diaz, and P. Syverson, Systems for Anonymous Communication, in CRC Handbook of Financial Cryptography and Security, p. 61. Chapman & Hall, 2009.

[25] L. Sweeney, “K-anonymity: a model for protecting privacy,” Int. J. Uncertain. Fuzziness Knowl.-Based Syst., vol. 10, no. 5, 2002.

[26] I. Alexander, “Misuse cases: Use cases with hostile intent,” IEEE Software, vol. 20, no. 1, pp. 58–66, 2003.

[27] G. Sindre and A. L. Opdahl, “Templates for misuse case description,” in Proceedings of the 7th International Workshop on Requirements Engineering: Foundation for Software Quality, pp. 4–5, 2001.

[28] OWASP, “Risk rating methodology.” http://www.owasp.org/index.php/OWASP_Risk_Rating_Methodology.

[29] MSDN Library, “Improving web application security: Threats and countermeasures.”

[30] NIST, “Risk management guide for information technology systems, Special Publication 800-30.” http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf.

[31] Carnegie Mellon Software Engineering Institute (SEI), “OCTAVE.” http://www.cert.org/octave/.

[32] K. Wuyts, R. Scandariato, B. De Decker, and W. Joosen, “Linking privacy solutions to developer goals,” in International Conference on Availability, Reliability and Security, pp. 847–852, 2009.

[33] C. Kalloniatis, E. Kavakli, and S. Gritzalis, “Addressing privacy requirements in system design: the PriS method,” Requirements Engineering, vol. 13, pp. 241–255, September 2008. http://dx.doi.org/10.1007/s00766-008-0067-3.

[34] PETS, “Annual Symposium on Privacy Enhancing Technologies, homepage.” http://petsymposium.org/.

[35] D. Chaum, “Untraceable electronic mail, return addresses, and digital pseudonyms,” Communications of the ACM, vol. 24, no. 2, pp. 84–88, 1981.

[36] D. Chaum, “Security without identification: Transaction systems to make big brother obsolete,” Communications of the ACM, vol. 28, no. 10, pp. 1030–1044, 1985.

[37] D. Chaum, “The dining cryptographers problem: Unconditional sender and recipient untraceability,” Journal of Cryptology, vol. 1, no. 1, pp. 65–75, 1988.

[38] A. Pfitzmann, B. Pfitzmann, and M. Waidner, “ISDN-mixes: Untraceable communication with very small bandwidth overhead,” in Proceedings of the GI/ITG Conference on Communication in Distributed Systems, pp. 451–463, February 1991.

[39] D. M. Goldschlag, M. G. Reed, and P. F. Syverson, “Hiding routing information,” in Proceedings of Information Hiding: First International Workshop (R. Anderson, ed.), pp. 137–150, Springer-Verlag, LNCS 1174, May 1996.

[40] M. Reiter and A. Rubin, “Crowds: Anonymity for web transactions,” ACM Transactions on Information and System Security, vol. 1, June 1998.

[41] A. Bacard, “Anonymous.to: Cypherpunk tutorial.” http://www.andrebacard.com/remail.html.

[42] Mixmaster, “Mixmaster homepage.” http://mixmaster.sourceforge.net/.

[43] Mixminion, “Mixminion official site.” http://mixminion.net/.

[44] A. Back, I. Goldberg, and A. Shostack, “Freedom systems 2.1 security issues and analysis,” white paper, Zero Knowledge Systems, Inc., May 2001.

[45] O. Berthold, H. Federrath, and S. Köpsell, “Web MIXes: A system for anonymous and unobservable Internet access,” in Proceedings of Designing Privacy Enhancing Technologies: Workshop on Design Issues in Anonymity and Unobservability (H. Federrath, ed.), pp. 115–129, Springer-Verlag, LNCS 2009, July 2000.

[46] R. Dingledine, N. Mathewson, and P. Syverson, “Tor: The second-generation onion router,” in Proceedings of the 13th USENIX Security Symposium, August 2004.

[47] A. Pfitzmann and M. Waidner, “Networks without user observability – design options,” in Proceedings of EUROCRYPT 1985, Springer-Verlag, LNCS 219, 1985.
[48] M. Waidner and B. Pfitzmann, “The dining cryptographers in the disco: Unconditional sender and recipient untraceability,” in Proceedings of EUROCRYPT 1989, Springer-Verlag, LNCS 434, 1990.

[49] M. Abadi and C. Fournet, “Private authentication,” Theoretical Computer Science, vol. 322, pp. 427–476, September 2004.

[50] W. Aiello, S. M. Bellovin, M. Blaze, R. Canetti, J. Ioannidis, A. D. Keromytis, and O. Reingold, “Just fast keying: Key agreement in a hostile internet,” ACM Transactions on Information and System Security, vol. 7, 2004.

[51] S. Brands and D. Chaum, “Distance-bounding protocols (extended abstract),” in EUROCRYPT '93, Lecture Notes in Computer Science 765, pp. 344–359, Springer-Verlag, 1993.

[52] J. Camenisch and A. Lysyanskaya, “Signature schemes and anonymous credentials from bilinear maps,” in Proceedings of CRYPTO, pp. 56–72, Springer-Verlag, LNCS 3152, 2004.

[53] M. Naor, “Deniable ring authentication,” in Proceedings of CRYPTO 2002, volume 2442 of LNCS, pp. 481–498, Springer-Verlag, 2002.

[54] N. Borisov, I. Goldberg, and E. Brewer, “Off-the-record communication, or, why not to use PGP,” in Proceedings of the 2004 ACM Workshop on Privacy in the Electronic Society, pp. 77–84, ACM, New York, NY, USA, 2004.

[55] A. C.-C. Yao, “Protocols for secure computations,” in Proceedings of the Twenty-Third IEEE Symposium on Foundations of Computer Science, pp. 160–164, November 1982.

[56] M. Naor and K. Nissim, “Communication complexity and secure function evaluation,” CoRR, vol. cs.CR/0109011, 2001.

[57] M. Deng, T. Bianchi, A. Piva, and B. Preneel, “An efficient buyer-seller watermarking protocol based on composite signal representation,” in Proceedings of the 11th ACM Workshop on Multimedia and Security, (Princeton, New Jersey, USA), pp. 9–18, ACM, New York, NY, USA, 2009.

[58] B. Chor, O. Goldreich, E. Kushilevitz, and M. Sudan, “Private information retrieval,” in Journal of the ACM, pp. 41–50, 1998.

[59] M. O. Rabin, “How to exchange secrets by oblivious transfer,” Technical Report TR-81, Aiken Computation Laboratory, Harvard University, 1981.

[60] C. Cachin, “On the foundations of oblivious transfer,” in Advances in Cryptology – EUROCRYPT 1998, pp. 361–374, Springer-Verlag, LNCS 1403, 1998.

[61] V. Verykios, E. Bertino, I. Fovino, L. Provenza, Y. Saygin, and Y. Theodoridis, “State-of-the-art in privacy preserving data mining,” ACM SIGMOD Record, vol. 3, pp. 50–57, March 2004.

[62] B. Pinkas, “Cryptographic techniques for privacy preserving data mining,” SIGKDD Explorations, vol. 4, no. 2, pp. 12–19, 2002.

[63] M. Abdalla, M. Bellare, D. Catalano, E. Kiltz, T. Kohno, T. Lange, J. Malone-Lee, G. Neven, P. Paillier, and H. Shi, “Searchable encryption revisited: Consistency properties, relation to anonymous IBE, and extensions,” in Proceedings of CRYPTO, pp. 205–222, Springer-Verlag, 2005.

[64] R. Ostrovsky and W. E. Skeith III, “Private searching on streaming data,” in CRYPTO, pp. 223–240, 2005.

[65] L. Sweeney, “Achieving k-anonymity privacy protection using generalization and suppression,” Int. J. Uncertain. Fuzziness Knowl.-Based Syst., vol. 10, no. 5, 2002.

[66] A. Machanavajjhala, J. Gehrke, D. Kifer, and M. Venkitasubramaniam, “l-diversity: Privacy beyond k-anonymity,” in Proceedings of the 22nd International Conference on Data Engineering (ICDE'06), p. 24, 2006.

[67] R. Anderson and F. Petitcolas, “On the limits of steganography,” IEEE Journal on Selected Areas in Communications, vol. 16, pp. 474–481, 1998.

[68] I. Moskowitz, R. E. Newman, D. P. Crepeau, and A. R. Miller, “Covert channels and anonymizing networks,” in Workshop on Privacy in the Electronic Society, pp. 79–88, ACM, 2003.

[69] D. Kirovski and H. S. Malvar, “Robust covert communication over a public audio channel using spread spectrum,” in Information Hiding, pp. 354–368, 2001.

[70] M. Hansen, P. Berlich, J. Camenisch, S. Clauß, A. Pfitzmann, and M. Waidner, “Privacy-enhancing identity management,” Information Security Technical Report (ISTR), vol. 9, no. 1, pp. 35–44, 2004. http://dx.doi.org/10.1016/S1363-4127(04)00014-7.

[71] S. Clauß, A. Pfitzmann, M. Hansen, and E. Van Herreweghen, “Privacy-enhancing identity management,” The IPTS Report 67, pp. 8–16, September 2002.
[72] K. Simoens, P. Tuyls, and B. Preneel, “Privacy weaknesses in biometric sketches,” in Proceedings of the 2009 30th IEEE Symposium on Security and Privacy, pp. 188–203, IEEE Computer Society, Washington, DC, USA, 2009.

[73] A. J. Menezes, P. C. van Oorschot, S. A. Vanstone, and R. L. Rivest, “Handbook of applied cryptography,” 1997.

[74] C. Fontaine and F. Galand, “A survey of homomorphic encryption for non-specialists,” EURASIP Journal on Information Security, October 2007. http://www.hindawi.com/RecentlyAcceptedArticlePDF.aspx?journal=IS&number=13801.

[75] J. Camenisch and I. Damgård, “Verifiable encryption and applications to group signatures and signature sharing,” Technical Report RS-98-32, BRICS, Department of Computer Science, University of Aarhus, December 1998.

[76] C. K. Georgiadis, I. Mavridis, G. Pangalos, and R. K. Thomas, “Flexible team-based access control using contexts,” in SACMAT, pp. 21–27, 2001.

[77] B. Carminati and E. Ferrari, “Privacy-aware collaborative access control in web-based social networks,” in Proceedings of the 22nd IFIP WG 11.3 Working Conference on Data and Applications Security (DBSec 2008), 2008.

[78] C. A. Ardagna, J. Camenisch, M. Kohlweiss, R. Leenes, G. Neven, B. Priem, P. Samarati, D. Sommer, and M. Verdicchio, “Exploiting cryptography for privacy-enhanced access control: A result of the PRIME project,” Journal of Computer Security, 2009.

[79] OASIS (oasis-open.org), “XACML 3.0 – work in progress, retrieved 09 September 2009.” http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xacml#CURRENT.

[80] IBM, “Enterprise Privacy Authorization Language (EPAL 1.2).” W3C Member Submission, 10 November 2003.

[81] H. R. Lipford, A. Besmer, and J. Watson, in UPSEC.

[82] J. Anderson, C. Diaz, J. Bonneau, and F. Stajano, “Privacy-enabling social networking over untrusted networks,” in WOSN '09: Proceedings of the 2nd ACM Workshop on Online Social Networks, 2009.

[83] F. Beato, M. Kohlweiss, and K. Wouters, “Enforcing access control in social networks,” HotPETs 2009, 2009. http://www.cosic.esat.kuleuven.be/publications/article-1240.pdf.

[84] PrimeLife, “The European PrimeLife research project – privacy and identity management in Europe for life.” http://www.primelife.eu/.
A Misuse case examples

MUC 2: Linkability of the user-portal data stream (data flow)

Summary: Data flows can be linked to the same person (without necessarily revealing the person's identity)
Asset: PII of the user
• The user:
  – data flows can be linked to each other, which might reveal the person's identity
  – the attacker can build a profile of a user's online activities (interests, active time, comments, updates, etc.)
Primary misactor: skilled insider / skilled outsider
Basic Flow:
1. The misactor intercepts / eavesdrops on two or more data flows
2. The misactor can link the data flows to each other and possibly link them (by combining this information) to the user / data subject
Trigger: by misactor, can happen whenever data is communicated
Preconditions:
• No anonymous communication system used
• Information disclosure of data flow possible
Prevention capture points:
• Use strong anonymous communication techniques
• Provide a confidential channel
Prevention guarantee: Impossible to link data to each other

MUC 3: Linkability of the social network users (entity)

Summary: Entities (with different pseudonyms) can be linked to the same person (without necessarily revealing the person's identity)
Asset: PII of the user
• The user:
  – data can be linked to each other, which might reveal the person's identity
  – the attacker can build a profile of a user's online activities (interests, active time, comments, updates, etc.)
Primary misactor: skilled insider / skilled outsider
Basic Flow:
1. The misactor intercepts or eavesdrops on two or more pseudonyms
2. The misactor can link the pseudonyms to each other and possibly link them (by combining this information) to the user / data subject
Trigger: by misactor, can happen whenever data is communicated
Preconditions:
• Information disclosure of the data flow possible
• Different “pseudonyms” are linked to each other based on the content of the data flow
Prevention capture points:
• protection of information such as user temporary ID, IP address, time and location, session ID, identifier and biometrics, computer ID, communication content, e.g. apply data obfuscation to protect this information (security)
• message and channel confidentiality provided
Prevention guarantee: Impossible to link data to each other

MUC 4: Identifiability at the social network database (data store)

Summary: The user's identity is revealed
Asset: PII of the user
• The user: revealed identity
Primary misactor: skilled insider / skilled outsider
Basic Flow:
1. The misactor gains access to the database
2. The data is linked to a pseudonym
3. The misactor can link the pseudonym to the actual identity (identifiability of entity)
4. The misactor can link the data to the actual user's identity
Alternative Flow:
1. The misactor gains access to the database
2. The misactor can link information from the database to other information (from another database or information which might be publicly accessible)
3. The misactor can re-identify the user based on the combined information
Trigger: by misactor, can always happen
Preconditions:
• no or insufficient protection of the data store
• no data anonymization techniques used
Prevention capture points:
• protection of the data store (security)
• apply data anonymization techniques
Prevention guarantee: hard to impossible to link data to identity (depending on the applied technique)

MUC 5: Identifiability of user-portal data stream (data flow)

Summary: The user's identity is revealed
Asset: PII of the user
• The user: revealed identity
Primary misactor: insider / outsider
Basic Flow:
1. The misactor gains access to the data flow
2. The data contains personal identifiable information about the user (user relationships, address, etc.)
3. The misactor is able to extract personal identifiable information from the user / data subject
Trigger: by misactor, can happen whenever data is communicated
Preconditions:
• no or weak anonymous communication system used
• Information disclosure of data flow possible
Prevention capture points:
• apply anonymous communication techniques
• use a confidential channel
Prevention guarantee: hard to impossible to link data to identity (depending on the applied technique)

MUC 6: Identifiability of users of the social network system (entity)

Summary: The user's identity is revealed
Asset: PII of the user
• The user: revealed identity
Primary misactor: skilled insider / skilled outsider
Basic Flow:
1. The misactor gains access to the data flow
2. The data contains the user's password
3. The misactor has access to the identity management database
4. The misactor can link the password to the user
Alternative Flow:
1. The misactor gains access to the data flow
2. The data contains the user's password
3. The misactor can link the user's password to the user's identity (the password is the user's initials followed by the birthdate)
Trigger: by misactor, can happen whenever data is communicated and the user logs in using his “secret”
Preconditions:
• Insecure IDM system OR
• weak passwords used and information disclosure of data flow possible
Prevention capture points:
• Strong pseudonymity technique used (e.g. strong passwords)
• privacy-enhancing IDM system
• Data flow confidentiality
Prevention guarantee: hard(er) to link log-in to identity

MUC 7: Information Disclosure at the social network database (data store)

Summary: Data is exposed to unauthorized users
Asset: PII of the user
• The user: revealed sensitive data
Primary misactor: skilled insider / skilled outsider
Basic Flow:
1. The misactor gains access to the database
2. The misactor retrieves data to which he should not have access
Trigger: by misactor, can always happen
Preconditions:
• no or insufficient internal access policies
Prevention capture points:
• strong access control policies (security), for example rule-based access control based on friendships in the social network
Prevention guarantee: hard to impossible to obtain data without having the necessary permissions

MUC 8: Information Disclosure of communication between the user and the social network (data flow)

Summary: The communication is exposed to unauthorized users
Asset: PII of the user
• The user: revealed sensitive data
Primary misactor: skilled insider / skilled outsider
Basic Flow:
1. The misactor gains access to the data flow
2. The misactor retrieves data to which he should not have access
Trigger: by misactor, can happen whenever messages are being sent
Preconditions:
• communication goes through an insecure public network
Prevention capture points:
• messages sent between the user and the social network web client are encrypted and a secure communication channel is ensured
Prevention guarantee: hard to impossible to gain access to the data flow without having the right permissions

MUC 9: Content unawareness

Summary: The user is unaware that his or her anonymity is at risk due to the fact that too much personal identifiable information is released
Asset: PII of the user
• The user: revealed identity
Primary misactor: skilled insider / skilled outsider
Basic Flow:
1. The misactor gains access to the user's online comments
2. The misactor profiles the user's data and can identify the user
Trigger: by misactor, can always happen
Preconditions:
• The user provides too much personal data
Prevention capture points:
• The user provides only the minimal set of required information
Prevention guarantee: the user will be informed about potential privacy risks

MUC 10: Policy and consent noncompliance

Summary: The social network provider does not process the user's personal data in compliance with user consent, e.g., it discloses the database to third parties for secondary use
Asset: PII of the user
• The user: revealed identity and personal information
• The system / company: negative impact on reputation
Primary misactor: insider
Basic Flow:
1. The misactor gains access to the social network database
2. The misactor discloses the data to a third party
Trigger: by misactor, can always happen
Preconditions:
• the misactor can tamper with privacy policies and make consents inconsistent, OR
• policies are not managed correctly (not updated according to the user's requests)
Prevention capture points:
• Design the system in compliance with legal guidelines for privacy and data protection and keep internal policies consistent with the policies communicated to the user
• Legal enforcement: the user can sue the social network provider whenever his or her personal data is processed without consent
• Employee contracts: employees who share information with third parties will be penalized (dismissal, fines, etc.)
Prevention guarantee: Legal enforcement will lower the threat of an insider leaking information, but it will still be possible to breach the user's privacy