On the role of contextual information for privacy attacks and classification

t-matyas@microsoft.com & matyas@fi.muni.cz
(On sabbatical leave from Masaryk U. Brno, CZ)

Abstract

Many papers and articles attempt to define or even quantify privacy, typically with a major focus on anonymity. A related research exercise in the area of evidence-based trust models for ubiquitous computing environments has given us an impulse to take a closer look at the definition(s) of privacy in the Common Criteria, which we then transcribed in a bit more formal manner. This led us to a further review of unlinkability, and revision of another semi-formal model allowing for expression of anonymity and unlinkability – the Freiburg Privacy Diamond. We propose new means of describing (obviously only observable) characteristics of a system to reflect the role of contexts for profiling – and linking – users with actions in a system. We believe this approach should allow for evaluating privacy in large data sets.
1. Introduction

This paper outlines the development of our appreciation of privacy concepts that started with a research exercise on data mining in evidential data for evidence-based reputation systems. The novel idea of evidence-based reputation (or trust) systems is that such systems do not rely on an objective knowledge of user identity [1, 2, 11]. One has instead to consider possible privacy infringements based on the use of data (evidence) about previous behaviour of entities in the systems. We provide a brief introduction to evidence-based trust/reputation systems, as well as to the privacy issues, addressing the common problem of many papers that narrow the considerations of privacy to anonymity only.

The paper is structured in the following way – the remaining parts of this introductory section provide a brief overview of issues related to evidence-based systems, the Common Criteria and Freiburg Privacy Diamond models, the motivation for our research, and a simple example used to illustrate the use of privacy models. Section two then presents some of the Common Criteria concepts used in the following discussions, and also outlines the Common Criteria approach to privacy issues (families), together with a discussion of unlinkability – the most complex property/quality of privacy. The third section presents the Freiburg Privacy Diamond – a semi-formal model allowing for expression of anonymity and unlinkability, focussing on the mobile environment. Section four then examines the role of contexts in these two approaches to modelling privacy. This leads to the fifth section that proposes using contextual information to model systems for privacy evaluations, and presents non-existential definitions of the four Common Criteria privacy concepts. Section six concludes with an outline of related open issues.

1.1 Evidence-based trust/reputation

Evidence-based systems work basically with two sets of evidence (data describing interaction outcomes). The primary set contains evidence that is delivered (or selected from locally stored data) according to a given request content. That data is used for reputation evaluation to grant/reject access requests. Data in this first set may contain information from third parties representing evidence about behaviour collected by other nodes – recommenders.

The secondary set comprises data relevant to a local system. That data is used for self-assessment of the local system security in various contexts (it may be a non-deterministic process in a certain sense). This set may also be referenced as derived or secondary data. Note that there may be an intersection between the two evidence sets, with implications for the privacy issues that we are investigating.

The approach of reputation systems is rather probabilistic, and this feature directly implies properties of security mechanisms that may be defined on top of such systems. The essential problem arises with recommendations that may be artificially created by distributed types of attacks (the Sybil attack [7]), based on a large number of nodes created just to gather enough evidence and achieve maximum reputation that would allow them to launch their attack(s).

1.2 A note on the Common Criteria and Freiburg Privacy Diamond models

This paper proposes formal definitions of existing Common Criteria concepts/areas of privacy and compares them with the Freiburg Privacy Diamond model (FPD) [18]. Recent research in anonymity systems [6, 10, 15] demonstrates that it is usually unfeasible to provide perfect anonymity and that implementations of privacy enhancing systems may provide only a certain level of privacy (anonymity, pseudonymity). This led to definitions of several metrics that can quantify the level of privacy achievable in a system.

The Common Criteria class Privacy deals with aspects of privacy as outlined in its four families. Three of these families have a similar grounding with respect to entities (i.e., users or processes) whose privacy might be in danger. They are vulnerable to varying threats, which make them distinct from each other. These families are Unobservability, Anonymity, and Unlinkability. The fourth family – Pseudonymity – addresses a somewhat different kind of threats.

1.3 Motivation

While working on related issues [5], we became aware of the need to define the Common Criteria concepts (called families) dealing with privacy in a bit more precise fashion. As we were examining the definitions of privacy concepts/families as stated in the Common Criteria, two negative facts emerged. First, the definitions are given in an existential manner, and secondly, not all aspects of user interactions relevant to privacy are covered. Both issues come from research carried out in the areas of side-channel analysis and security of system implementations, showing that it is not sufficient to take into account only the idealised principals and messages. It is also very important to consider the context in/with which the interactions are undertaken. Information like physical and virtual (IP, MAC addresses) positions of users and computers, time, type of service invoked, size of messages, etc. allows one to profile typical user behaviour and successfully deteriorate the privacy of users in the system.

We propose to introduce context information (side/covert channels, like physical and virtual location of users and computers, time, type of service invoked, size of messages, etc.) into the CC model and compare it with the FPD model that reflects only one very specific piece of context information – location.

Our objectives for starting this work are as follows. Firstly, we want to provide a model that allows one to cover as many aspects of user interactions as is beneficial for improving quantification/measurement for different aspects of privacy; this model shall definitely provide for better reasoning/evaluation of privacy than the Common Criteria and Freiburg Privacy Diamond models do. Secondly, and in close relation to the first objective, we want to illustrate the deficiency of the Common Criteria treatment of privacy, and to provide a foundation that would assist in improving this treatment. Thirdly, with a long-term perspective, we aim to provide a basis for partly or fully automated evaluation of privacy.

This paper does not address all aspects of data collection for privacy models, and neither does it suggest any means for improving the level of privacy protection.

1.4 A simple example

Let us present a trivial example that we use later in this paper to compare the formal models for privacy. The attacker attempts to determine which payment cards are used by a certain person with a particular card – she is interested in linking together all the cards of this person (identification of the particular person is not part of the attacker's goal at the moment). We assume the attacker is able to collect till receipts of shoppers from the same house or the same company. For this subset of supermarket clients we then do not mind a given receipt showing only a part of the payment card number.

There are three payment cards (with numbers 11, 21, 25) used for three actual shoppings (visits of the supermarket resulting in payments – A, B, C), and there is also a set of typical baskets/shopping lists (l, m) in our simplistic example.

The attacker has a precise (100%) knowledge about connections between payment cards and shoppings, and an imprecise knowledge about classification of individual shoppings into typical "consumer group" baskets. This classification to "typical baskets" is usually done with some kind of a data-mining algorithm over actual shopping lists. Note that one could obviously achieve perfect knowledge should loyalty cards be used (and their numbers appear on the receipts), yet the introduction of this has no qualitative impact on this example.

With just a change of semantics, we may define a very similar example based on users of chat services connecting from a given Internet cafe. The categories would then be chat-room pseudonyms, chat sessions, and classification into groups based on interest (content) and/or language, with the attacker's goal of identifying pseudonyms used by one user in different chat sessions.
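To keep the example concrete for later sections, here is a minimal Python sketch of the attacker's knowledge in the supermarket scenario. Only the card numbers (11, 21, 25), payments (A, B, C) and baskets (l, m) come from the text above; the classification probabilities and all identifier names are hypothetical illustrations.

```python
# Hypothetical encoding of the attacker's knowledge from the example above.

# Precise (100%) knowledge: which card paid for which shopping.
card_of_payment = {"A": 11, "B": 21, "C": 25}

# Imprecise knowledge: probability that a shopping matches a typical basket,
# e.g. as produced by a data-mining/classification step over the receipts.
basket_of_payment = {
    "A": {"l": 0.8, "m": 0.2},
    "B": {"l": 0.1, "m": 0.9},
    "C": {"l": 0.7, "m": 0.3},
}

def cards_sharing_basket(basket, threshold=0.5):
    """Cards whose payments fall into `basket` with probability above `threshold`."""
    return [card_of_payment[p]
            for p, dist in basket_of_payment.items()
            if dist.get(basket, 0.0) > threshold]

# Cards 11 and 25 are both classified into basket "l" here -- exactly the kind
# of card-to-card link the attacker is after.
print(cards_sharing_basket("l"))   # -> [11, 25]
```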
2. Privacy in the Common Criteria

2.1. The starting point – model

Since some of the discussions and proposals in this paper are based on the Common Criteria concepts, let us briefly present the related information. Relevant Common Criteria notions and concepts are as follows [17]:

Target of Evaluation (TOE) – An IT product or system and its associated administrator and user guidance documentation that is the subject of an evaluation.

TOE Security Functions (TSF) – A set consisting of all hardware, software, and firmware of the TOE that must be relied upon for the correct enforcement of the TOE Security Policy (TSP).

TSF Scope of Control (TSC) – The set of interactions that can occur with or within a TOE and are subject to the rules of the TSP.

Subject – An entity within the TSC that causes operations to be performed.

Assets – Information or resources to be protected by the countermeasures of a TOE.

Object – An entity within the TSC that contains or receives information and upon which subjects perform operations.

User – Any entity (human user or external IT entity) outside the TOE that interacts with the TOE.

Figure 1. Common Criteria model.

We can see (fig. 1) that a user does not access objects directly but through subjects – internal representations of herself inside the TOE/TSC. This indirection is exploited for the definition of pseudonymity, as we will see later. Objects represent not only information but also services mediating access to the TOE's resources. This abstract model does not directly cover communication like in (remailer) mixes, as it explicitly describes only relations between users/subjects and resources of the target information system. However, it is not difficult to extend the proposed formal definitions of the major privacy concepts based on this model to communication systems.

2.2 Privacy in the Common Criteria

Unobservability: This family ensures that a user may use a resource or service without others, especially third parties, being able to observe that the resource or service is being used. The protected asset in this case can be information about other users' communications, about access to and use of a certain resource or service, etc. Several countries, e.g. Germany, consider the assurance of communication unobservability as an essential part of the protection of constitutional rights. Threats of malicious observations (e.g., through Trojan Horses) and traffic analysis (by others than the communicating parties) are the best-known examples.

Anonymity: This family ensures that a user may use a resource or service without disclosing the user identity. The requirements for Anonymity provide protection of the user identity. Anonymity is not intended to protect the subject identity. Although it may be surprising to find a service of this nature in a Trusted Computing Environment, possible applications include enquiries of a confidential nature to public databases, etc. A protected asset is usually the identity of the requesting entity, but can also include information on the kind of requested operation (and/or information) and aspects such as time and mode of use. The relevant threats are: disclosure of identity or leakage of information leading to disclosure of identity – often described as …

Unlinkability: This family ensures that a user may make multiple uses of resources or services without others being able to link these uses together. The protected assets are the same as in Anonymity. Relevant threats can also be classed as "usage profiling".

Pseudonymity: This family ensures that a user may use a resource or service without disclosing its user identity, but can still be accountable for that use. Possible applications are usage and charging for phone services without disclosing identity, "anonymous" use of an electronic payment, etc. In addition to the Anonymity services, Pseudonymity provides methods for authorisation without identification (at all or directly to the …).
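Purely as a reading aid (this is our rendering, not Common Criteria text), the user → subject → object indirection of fig. 1 can be sketched as a small data structure; all class and field names below are ours.

```python
from dataclasses import dataclass, field

@dataclass
class Object:
    """Information or an access-mediating service inside the TOE."""
    name: str

@dataclass
class Subject:
    """Internal representation of a user inside the TOE/TSC."""
    pseudonym: str
    operations: list = field(default_factory=list)

    def perform(self, operation: str, obj: Object) -> None:
        # Subjects, not users, operate on objects.
        self.operations.append((operation, obj))

@dataclass
class User:
    """External entity (human or IT) interacting with the TOE through subjects."""
    identity: str
    subjects: list = field(default_factory=list)

# The TOE only "sees" subjects and objects; whether a Subject can be linked
# back to its User is what the pseudonymity family constrains.
```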
2.3. Privacy families revisited

Common Criteria privacy families are defined in an existential manner and any formal definition of them has to tackle a number of ambiguities. It is unrealistic to assume perfect/absolute privacy, as demonstrated by several anonymity metrics based on anonymity sets (the number of users able to use a given resource/service in a given context) [12] or on the entropy assigned to a projection between service and user/subject identities (the uncertainty about who is using a given service).

Can we introduce a more formal definition of privacy notions and use them to define mutual relations? It is not easy, but the prospects of getting a clearer picture of the mutual relations between different privacy aspects/qualities are encouraging.

Our proposal for the CC model privacy formalisation is based on the following graphical representation (fig. 2). The set S represents observations of uses of services or resources, PID is the equivalent of subjects, and ID stands for user identities; US and UID denote all possible service use observations and identities, respectively – not only those relevant for a given system. By stating "with probability not significantly greater than" in the following definitions, we mean a negligible difference (lower than ε) from the specified value [3]. Let A be any attacker.

Figure 2. Schematics for the CC view of privacy.

Our formal transcription of the existential definitions of the CC privacy families follows.

Unobservability – there is a space of encodings (US) of which some elements (S) are defined to encode use of services. It is infeasible for A to decide, for ∀ s ∈ US with a probability significantly greater than 1/2, whether a particular s ∈ S or s ∈ (US − S).

Anonymity – there is a probability mapping mu assigning user identities to service use observations. When A:

1. knows the set ID – then ∀ s ∈ S, uID ∈ ID, she can only find mu(s) = uID with a probability not significantly greater than 1/|ID|;

2. does not know anything about ID (particular elements or size) – then for ∀ uID ∈ UID, she cannot even guess whether uID ∈ ID with a probability significantly greater than 1/2. (The probability of finding mu(s) = uID would then not be significantly greater than 1/|UID|.)

Unlinkability – let us assume there is a function δ : m × S × S → [no, yes]. This function determines whether two service uses were invoked by the same uID ∈ UID or not. Parameter m stands for a function that maps service uses (S) into a set of identities UID (e.g., mu above). It is infeasible for A, with any δ and any s1, s2 ∈ S, to decide whether the two uses were invoked by the same identity with a probability significantly greater than 1/2.

Pseudonymity – there exists, and is known to a designated party, a mapping mi relating subjects/pseudonyms in PID to user identities uID ∈ ID, but access to it is subject to strict conditions. When A:

1. knows ID, she cannot determine the correct uID with a probability significantly greater than 1/|ID|;

2. does not know ID, she can only guess with a probability not significantly greater than 1/2 whether a given identity belongs to ID.

These existential expressions can then be easily turned into probabilistic ones that allow for expressing different qualitative levels of all these privacy concepts/families. This can be done simply by changing the "not significantly greater than" expression to "not greater than ∆", where ∆ expresses the required level of the given privacy property.
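As an illustration of this shift, case 1 of the anonymity definition can be written with the explicit level ∆ (our notation; mu, S, ID and the attacker A are as defined above):

```latex
% Anonymity at level \Delta, case 1 (the attacker knows the set ID):
\forall s \in S:\qquad
  \Pr\bigl[\,\mathcal{A}(s) = m_u(s)\,\bigr] \;\le\; \frac{1}{|ID|} + \Delta
```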
2.4 The Unlinkables

Unlinkability cannot be satisfied without other privacy families. It is now understood [13, 14] that the Common Criteria definition of unlinkability does not support some aspects of unlinkability in real systems, and a Common Criteria modification proposal to this effect is currently submitted. We point the reader to the fact that when pseudonymity is flawed, an attacker may obtain the ID of an actual user. The same holds when anonymity is breached. Moreover, we are convinced that unlinkability may be a property of other privacy families. This comes straight from the formal unlinkability definition as stated above, where the mapping m is the link binding the families together. Unlinkability should ensure that the particular family (or rather its implementation) does not contain side-channels (context information) that could be exploited by an attacker. We have found, in this context, two other meanings for unlinkability during our analysis. The first meaning is expressed in the following definition of unlinkable pseudonymity. It says that when a user employs two different pseudonyms, any A is not able to connect these two pseudonyms together.

Unlinkable pseudonymity – As for the definition of pseudonymity above in part 2.3, and also for any s1, s2 ∈ S, where s1 ≠ s2, mu(s1) = u1, mu(s2) = u2 (where u1 ≠ u2):

1. If A knows ID – she cannot find (with a probability significantly greater than 1/|ID|) whether mi(u1) = mi(u2);

2. If A does not know ID – she cannot guess with a probability significantly greater than 1/4 whether the pair (mi(u1), mi(u2)) belongs to ID × ID, ID × ID', ID' × ID, or ID' × ID' (where ID' = UID − ID).

The second semantics is built on the assumption that knowledge of several pieces of mutually related information is much more powerful than knowledge of just one piece of such information. When compared with the previous definition of unlinkable pseudonymity, the definition is now concerned with a property ensuring that there is no increase in the probability of correct identification of a given user when more information is available. The same reasoning lies behind the following definition of unlinkable anonymity.

Unlinkable anonymity – As for the definition of anonymity above:

1. if A knows ID – she cannot find, with a probability significantly greater than 1/|ID|, such s1, s2 ∈ S, where s1 ≠ s2, that mu(s1) = mu(s2);

2. if A does not know ID – she cannot guess with a probability significantly greater than 1/4 whether the pair (mu(s1), mu(s2)) belongs to ID × ID, ID × ID', ID' × ID, or ID' × ID'.
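The 1/4 bound in case 2 of both definitions corresponds to a uniform guess over the four quadrants induced by ID and its complement ID' = UID − ID; in our rendering:

```latex
% Case 2 (the attacker does not know ID): picking the correct quadrant for the
% pair of mapped identities should be essentially no better than a uniform guess.
\Pr\bigl[\,\mathcal{A} \text{ picks the correct quadrant in }
          \{ID, ID'\} \times \{ID, ID'\}\,\bigr] \;\le\; \tfrac{1}{4} + \varepsilon
```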
We can apply profiling when unlinkability is breached. Basically, unlinkability should ensure that the particular family (or its implementation) does not contain side-channels that could be used when several service invocations are observed.

The example: Figure 3 depicts how the CC model captures our example from part 1.4. It is obvious that there is no information about the context information for the basket (chat) contents. This implies that an attacker will not find any link between payment cards (pseudonyms) using this model, even though the link/connection exists. This shows that the CC simply do not address contextual information.

Figure 3. The example in the CC model.
3. Freiburg Privacy Diamond

The FPD is a semi-formal anonymity (and partly also unlinkability) model by A. Zugenmaier et al. [18, 19]. The model originated from their research in the area of security in mobile environments. The model is graphically represented as a diamond with vertices User, Action, Device (alternatives for CC's user, service, and subject), and Location (fig. 4). The main reason for introducing location as a category here is probably due to the overall focus of this model on mobile environments.

Figure 4. Freiburg privacy diamond.

Anonymity of a user u performing an action a is breached when there exists a connection between a and u. This may be achieved through any path in the diamond model. Let us recap the basic definitions of the FPD model:

1. Any element x has got a type type(x) ∈ {User, Action, Device, Location}. Any two elements x, y ∈ {e | type(e) = User ∨ Action ∨ Device ∨ Location}, type(x) ≠ type(y), are in a relation R if the attacker has evidence connecting x and y.

2. The anonymity set of an action a, UR = {u | type(u) = User ∧ (u, a) ∈ R}, is either empty or |UR| > t > 1, where t is an anonymity threshold defining the minimum size of the anonymity set.

3. There is the transitivity rule saying that if (x, y) ∈ R and (y, z) ∈ R, and type(x) ≠ type(z), then (x, z) ∈ R (see the sketch at the end of this section).

4. The union of all initial relations known to an attacker A and all the information an attacker A may infer from her initial knowledge forms her view of the system – ViewA.

The book [18] also introduces three types of attacks within this model:

• Recognition attack – A realises that several users (xi, type(xi) = User) are in fact a single user.

• Linking attack – (x, y) ∈ R and (z, y) ∈ R are in the ViewA. When A is able to find just one pair (y, xi) ∈ R, then she will know that xi = x and (z, x) ∈ R.

• Intersection attack – A knows anonymity sets for several actions. When she knows that a certain user is in all anonymity sets, she can apply intersections to reduce the size of the anonymity set and eventually identify the user.

Finally, the model assigns probabilities to edges in order to express the attacker's certainty about the existence of particular relations, with some simple rules how to derive the certainty for inferred relations.
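The transitivity rule (definition 3) and the way it breaches anonymity can be illustrated with a minimal Python sketch; the elements, types and initial evidence below are ours, not from [18].

```python
from itertools import product

# Illustrative FPD-style elements: each element has a type from the model.
etype = {"u1": "User", "a1": "Action", "d1": "Device", "loc1": "Location"}

# Initial evidence known to the attacker (stored symmetrically): the user used
# a device, the device was at a location, and the action came from that location.
R = {("u1", "d1"), ("d1", "loc1"), ("loc1", "a1")}
R |= {(y, x) for (x, y) in R}

def transitive_close(rel):
    """Apply the rule (x, y) in R and (y, z) in R with type(x) != type(z)
    => (x, z) in R, until a fixpoint is reached."""
    rel = set(rel)
    changed = True
    while changed:
        changed = False
        for (x, y), (y2, z) in product(list(rel), repeat=2):
            if y == y2 and etype[x] != etype[z] and (x, z) not in rel:
                rel.update({(x, z), (z, x)})
                changed = True
    return rel

closed = transitive_close(R)
# The derived evidence now connects the user with the action, so the anonymity
# of u1 with respect to a1 is breached in the sense of the FPD model.
print(("u1", "a1") in closed)   # -> True
```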
4. Contexts in the two models

Contexts and their roles are not reflected in the CC model. Considering fig. 2, we see that the two vectors in question (mi, mu) are bound together through a pseudonym – a subject in the CC language. Contexts may be assigned to any element of the model. ID represents physical entities and we may know their mobile phone locations, addresses, patterns of network usage, etc. PID – virtual IDs – can be characterised by previous transactions and possibly virtual locations (a virtual location may in some cases be very effectively mapped on a physical location). Elements of S may be further characterised by type, provider, etc.

The edges between the sets (their elements) represent sessions taking place in the system. The information we may gather about them is highly dependent on the actual implementation of the system and may comprise contextual information such as time, length, routing path, content, etc.

4.1 Contexts in FPD

The FPD model only briefly mentions context information but does not introduce any definition of it. The attacks based on context information do not say how to perform them, but only define the changes in ViewA when an attack is performed.

Since the FPD model newly addressed the mobile computing environment, as opposed to the old-fashioned "static" environment, location had a very prominent role, as did the device to some extent. We have decided to treat these as "ordinary" context information, i.e. as any other additional information about the system that can link a user and an action (or more precisely, their identifiers).
The example: When attempting to model the example scenario (see part 1.4) in the FPD model, the attacker ends up with three diamonds, one for each service use (see fig. 5). Here user and location represent domains with no particular values, as there is no such information available. The attacker cannot find any intersection of the three diamonds – i.e., there is no attack as defined by the FPD model theory. This is obvious since the FPD model does not cover any other contextual information, only location and device.

Figure 5. The example within the FPD model.

5. Context revisited – basics of the PATS (Privacy Across-The-Street¹) model

¹ Authors of this proposal work for different institutions located across the street from each other.

We propose the following approach, inspired by the way location and device (descriptors) are represented in FPD. We suggest all context information available to an attacker be represented as vertices in a graph, where edges are weighted with the probability of the two incident vertices (contextual information, user and service IDs) being related/connected. Those connections may be between any two vertices, and a path connecting a user ID and a service ID with a certain probability value of the path suggests that a link between the service use and the user ID exists.

The graph reflects all the knowledge of an attacker at a given time. Attackers with different knowledge will build different graphs for a system, as will likely do the same attacker at different points in time.
What is not clear to us at the moment is the question whether pseudonyms should be treated differently from other contexts or not. Clearly they are more important in the model, since their connection to users and actions defines the level of pseudonymity achieved in the system. Yet at the moment we suggest all vertices be treated equally, although we suspect that some of them might be more equal than others.

There are several proposals for formal frameworks for anonymity [8, 9] and unlinkability [16]. The frameworks introduced in these papers define typed systems with several defined categories like agents, types of agents, messages [9], or an inductive system based on a modal logic of knowledge [8]. We believe that our proposal would be more flexible and would cover context information as an inherent part of the model, thus opening interesting questions.

5.1 Outline of the graph model

We denote the set of all vertices by V, the set of all identifiers of service instances by S, and the set of all user IDs by ID. There are no edges between any pair of elements of ID, only indirect paths through a linking context, and the same applies to elements of S. There is also a function Wmax calculating the overall probability weight for a path in the graph, and therefore also a way to determine the highest such value Wmax(va, vb) for a path between va and vb. The value of any path is calculated as a multiplication of the weights (w) of all its individual edges, e.g. for the path P = v1, v2, ..., vi of i vertices of the graph, the value of the path P is W(v1, vi) = w(v1, v2) × w(v2, v3) × ... × w(vi−1, vi).
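A minimal Python sketch (ours) of this computation: the graph is an adjacency map with weights in [0, 1], and Wmax is obtained by enumerating simple paths, which is adequate only for small graphs such as the toy examples in this paper; it is used informally in the definitions that follow.

```python
# Sketch of the path value W and of Wmax for the PATS graph model.
# graph[v] = {neighbour: edge weight w(v, neighbour) in [0, 1]}; undirected
# edges are simply listed in both directions.

def w_max(graph, src, dst):
    """Highest product of edge weights over all simple paths from src to dst
    (0.0 if no path exists). Plain depth-first enumeration."""
    best = 0.0

    def dfs(node, value, visited):
        nonlocal best
        if node == dst:
            best = max(best, value)
            return
        for nxt, w in graph.get(node, {}).items():
            if nxt not in visited:
                dfs(nxt, value * w, visited | {nxt})

    dfs(src, 1.0, {src})
    return best
```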
Unobservability (of a service s ∈ S) – the graph that A can build after observing the system at a given time does not include the vertex s.

Unlinkability (between two nodes v1, v2 ∈ V, at level ∆) – a graph that A can build when observing the system at a given time has no path connecting v1 with v2 with the overall probability greater than ∆, i.e. it provides Wmax(v1, v2) ≤ ∆.

Anonymity (of a user uID ∈ ID, at level ∆) – when A:

1. knows the set ID, she can only find a path from v ∈ S to uID with the weight not greater than 1/|ID| + ∆, such that Wmax(v, uID) ≤ 1/|ID| + ∆;

2. does not know anything about ID (particular elements or size), she can only find a path from v to uID with the weight not greater than ∆, i.e. Wmax(v, uID) ≤ ∆.

Pseudonymity (of a subject/pseudonym u ∈ PID, at level ∆) – there exists a path known to A from any s ∈ S to u with a satisfactory value of Wmax(s, u), but for A there is no knowledge of an edge from u to any uID ∈ ID, and when she:

1. knows the set ID, any path from u to uID has weight not greater than 1/|ID| + ∆, i.e. Wmax(u, uID) ≤ 1/|ID| + ∆;

2. does not know anything about ID (particular elements or size), the path from u to uID has weight not greater than ∆, i.e. Wmax(u, uID) ≤ ∆.

The example: Let us express our example from part 1.4 in the PATS model. Figure 6 shows how the context information about typical basket contents is connected to actual instances of shoppings. As we are interested in connections between payment cards (pseudonyms), we are looking for paths (and their aggregate values) containing pairs of particular payment cards. Let us try to find paths between card 11 and the other two cards.

Figure 6. An example with a PATS model.

Figure 7. Paths connecting payment card 11 with the other two cards.

These are the shortest (and highest value) paths only. The attacker may deduce (with a high probability) that payment cards 11 and 25 belong to the same person, though she does not know who this person is. According to our definitions, unlinkable pseudonymity is breached.
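For illustration only: the figures give no numeric weights, so the following short computation uses hypothetical values (the same ones as in the sketch accompanying section 1.4); only the card, shopping and basket names come from the example.

```python
# Two hypothetical paths of the kind shown in fig. 7, with invented weights.
path_11_25 = [("card11", "shoppingA", 1.0), ("shoppingA", "basket_l", 0.8),
              ("basket_l", "shoppingC", 0.7), ("shoppingC", "card25", 1.0)]
path_11_21 = [("card11", "shoppingA", 1.0), ("shoppingA", "basket_l", 0.8),
              ("basket_l", "shoppingB", 0.1), ("shoppingB", "card21", 1.0)]

def path_value(path):
    value = 1.0
    for _, _, w in path:
        value *= w
    return value

# With these illustrative numbers the 11-25 link is much stronger than 11-21,
# matching the conclusion that cards 11 and 25 likely belong to one person.
print(round(path_value(path_11_25), 2), round(path_value(path_11_21), 2))  # 0.56 0.08
```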
6. Conclusions and open issues

This paper points out that contexts provide (or, perhaps we can even say, produce) side-channels that are covered neither by the Common Criteria Privacy class nor by the Freiburg Privacy Diamond model. We also believe that contexts in general are not well reflected in other current research attempts to quantify the levels (and deterioration) of privacy. A simplistic introduction of pseudonyms will not guarantee perfect privacy, and we need to have some means to quantify what level of privacy is needed and/or achievable for specific scenarios. There are two solutions for protection against side-channels: hiding and so-called anonymizing. Hiding is what anonymizing networks utilise – they combine a number of messages together, thus creating a satisfactory anonymity set. Anonymizing (or, more often in practice, pseudonymising) requires the creation of layers that cloak the identity of the protected entity. The Common Criteria use this concept when defining pseudonymity that still enforces accountability of users, but hides/shades their identities.

One particularly interesting issue, related to the Common Criteria definition of unlinkability as empirically reviewed by Rannenberg and Iachello [13, 14] and more formally specified above in section 2.4, is whether the unlinkable "items" in question should only be operations (service invocations) or whether other kinds of unlinkability should also be considered. We have provided supporting evidence for a substantial revision of the unlinkability specifications, while leaving the actual revision as an item for future research.

We also provide our basic PATS model that is not so limited in the coverage of selected aspects of user interactions and therefore allows for better quantification/measurement of different aspects of privacy. This proposal, unlike the CC or FPD models, introduces a computational model (based on graph theory). One of the problems we are currently examining is atomicity of the vertices, i.e. of contextual information. We currently review various approaches to this problem, being aware that the issue of atomicity has a critical impact on the possibility of graph normalisation and therefore also on the provision of the critical properties of completeness and soundness. This work in progress includes the issue of edge dependence, for it is clear that the edges are not completely independent. We can mark sets of nodes from distinct kinds of context (e.g., pseudonyms, or IP addresses used in connections from the same provider) – let us call them domains. Then we can address additional graph properties, e.g., such that for all pairs of domains D1, D2, all sums of probabilities from any node in D1 to the nodes in D2 are not higher than a given value, typically 1.

The PATS approach allows for two definitions of anonymity: a weaker one, considering the weight of the entire path from uID to si, can be added to the stronger one above that considers only the intermediate edges from uID (to any other vertex – contextual information – that would be directly connected to uID).

Another interesting issue is the role of time, which is two-fold – firstly, time can be a piece of contextual information (the time of an action invoked by a certain subject, i.e. three mutually connected vertices). Secondly, the probabilistic weights of edges in a graph change with time, as do the sets of vertices and edges as such. Obviously, the contextual role of time may be reflected by the latter view – the time of an action invoked by a certain subject is denoted by the existence of vertices describing the action and subject identifiers, connected by an edge with weight 1, at the given time.

7. Acknowledgements

Thanks go to Andrei Serjantov and Alf Zugenmaier for their opinions and links to some interesting references in the area of privacy, and to Flaminia Luccio and other anonymous referees for pointing out some problems with an earlier version of our paper and particularly for valuable discussions of the PATS graph model details.
References

[1] A. Abdul-Rahman and S. Hailes. Supporting trust in virtual communities. In Hawaii International Conference on System Sciences 33, pages 1769–1777. ACM, 2000.

[2] J. Bacon, K. Moody, J. Bates, R. Hayton, C. Ma, A. McNeil, O. Seidel, and M. Spiteri. Generic support for distributed applications. IEEE Computer, pages 68–76, March 2000.

[3] M. Bellare. A note on negligible functions. Technical Report CS97-529, Department of Computer Science and Engineering, University of California at San Diego, 1997.

[4] V. Cahill et al. Using trust for secure collaboration in uncertain environments. IEEE Pervasive Computing Magazine, 2003.

[5] D. Cvrček and V. Matyáš. Pseudonymity in the light of evidence-based trust. In Proc. of the 12th Workshop on Security Protocols, LNCS (forthcoming), Cambridge, UK, April 2004.

[6] C. Diaz, S. Seys, J. Claessens, and B. Preneel. Towards measuring anonymity. In R. Dingledine and P. Syverson, editors, Proceedings of Privacy Enhancing Technologies Workshop (PET 2002), LNCS 2482. Springer-Verlag, April 2002.

[7] J. Douceur. The Sybil attack. In 1st International Workshop on Peer-to-Peer Systems (IPTPS'02), LNCS 2429, 2002.

[8] J. Y. Halpern and K. O'Neill. Anonymity and information hiding in multiagent systems. In Proceedings of the 16th IEEE Computer Security Foundations Workshop, 2003.

[9] D. Hughes and V. Shmatikov. Information hiding, anonymity and privacy: A modular approach. Journal of Computer Security, special issue on selected papers of WITS 2002.

[10] D. Kesdogan, D. Agrawal, and S. Penz. Limits of anonymity in open environments. In F. Petitcolas, editor, Proceedings of Information Hiding Workshop (IH 2002), LNCS 2578. Springer-Verlag, October 2002.

[11] M. Kinateder and S. Pearson. A privacy-enhanced peer-to-peer reputation system. In Proceedings of the 4th International Conference on Electronic Commerce and Web Technologies, EC-Web 2003, LNCS 2738, pages 206–215, Prague, Czech Republic, September 2003. Springer-Verlag.

[12] A. Pfitzmann and M. Köhntopp. Anonymity, unobservability and pseudonymity – a proposal for terminology. In Designing Privacy Enhancing Technologies: Proceedings of the International Workshop on the Design Issues in Anonymity and Observability, LNCS 2009, pages 1–9. Springer-Verlag, 2000.

[13] K. Rannenberg and G. Iachello. Protection profiles for remailer mixes. In International Workshop on Designing Privacy Enhancing Technologies: Design Issues in Anonymity and Unobservability, LNCS 2009, pages 181–230, Berkeley, California, 2000. Springer-Verlag.

[14] K. Rannenberg and G. Iachello. Protection profiles for remailer mixes – do the new evaluation criteria help? In 16th Annual Computer Security Applications Conference (ACSAC'00), pages 107–118. IEEE, December 2000.

[15] A. Serjantov and G. Danezis. Towards an information theoretic metric for anonymity. In Privacy Enhancing Technologies (PET), LNCS 2482, pages 41–53. Springer-Verlag, April 2002.

[16] S. Steinbrecher and S. Köpsell. Modelling unlinkability. In R. Dingledine, editor, Privacy Enhancing Technologies (PET), LNCS 2760, pages 32–47. Springer-Verlag, 2003.

[17] The Common Criteria Project Sponsoring Organisations. Common Criteria for Information Technology Security Evaluation – part 2, version 2.1. August 1999.

[18] A. Zugenmaier. Anonymity for Users of Mobile Devices through Location Addressing. RHOMBOS-Verlag, ISBN 3-930894-96-3, Berlin, 2003.

[19] A. Zugenmaier, M. Kreutzer, and G. Müller. The Freiburg Privacy Diamond: An attacker model for a mobile computing environment. In Kommunikation in Verteilten Systemen (KiVS) '03, Leipzig, 2003.