Anonymity and Trust in the Electronic World

Abstract

Privacy has never been an explicit goal of authorization mechanisms. The traditional approach to authorization relies on strong authentication of a stable identity using long term credentials. Audit is then linked to authorization via the same identity. Such an approach compels users to enter into a trust relationship with large parts of the system infrastructure, including entities in remote domains. In this dissertation we advance the view that this type of compulsory trust relationship is unnecessary and can have undesirable consequences. We examine in some detail the consequences which such undesirable trust relationships can have for individual privacy, and investigate the extent to which taking a unified approach to trust and anonymity can provide useful leverage to address threats to privacy without compromising the principal goals of authentication and audit. We conclude that many applications would benefit from mechanisms that enable them to make authorization decisions without using long-term credentials. We then propose specific mechanisms to achieve this, introducing a novel notion of a short-lived electronic identity, which we call a surrogate. This approach allows a localisation of trust: entities are not compelled to transitively trust other entities in remote domains. In particular, resolution of stable identities need only ever be done locally to the entity named. Our surrogates allow delegation, enable role-based access control policies to be enforced across multiple domains, and permit the use of non-anonymous payment mechanisms, all without compromising the privacy of a user. 
The localisation of trust resulting from the approach proposed in this dissertation also has the potential to allow clients to control the risks to which they are exposed by bearing the cost of relevant countermeasures themselves, rather than forcing clients to trust the system infrastructure to protect them and to bear an equal share of the cost of all countermeasures whether or not effective for them. This consideration means that our surrogate-based approach and mechanisms are of interest even in Kerberos-like scenarios where anonymity is not a requirement, but the remote authentication mechanism is untrustworthy.

 

Chapter 1

Introduction

1.1 Motivation

The purpose of authentication is to verify that a user is who he/she claims to be. The goal of authorization is to provide access for certain users to certain resources based on predefined business rules. An audit trail links actions to principals retrospectively. Authentication, authorization and audit trail are three traditional concerns in building a privilege management infrastructure (PMI). Traditionally, authentication is strong (based on long term credentials linked to a stable identity) and authorization is linked to audit via the authentication mechanism (explicitly using the same long term credential and identity).

Authorization decisions in electronic services are often made using a stand-alone application known as a trust management engine [10]. Trust management engines are used to aid applications faced with a request for a potentially dangerous action. The long term credentials of requestors are evaluated against policies by the trust management engine before a decision is made about the request. For example, in a typical role based access control scenario [7,35] a user presents the role server with his key certificate (which is long term and linked to a stable identity) and proves that he is the legitimate owner of that certificate. The role server can then consult a trust management engine like Keynote [9] or Policymaker [10] before activating the relevant roles.

Traditionally, identity management has been a part of the trust management envelope. The repeated use of long term digital credentials (by trust management engines to make authorization decisions) enables an adversary to correlate all the actions of a particular user and then link these actions back to the stable identity to which the credentials correspond. An adversary can thus build a complete dossier on an individual [19]. Thieves can steal sensitive personal information, terrorists can track their targets using government maintained address records, and servers at the far end can leak sensitive information about us. In many instances people have suffered damage due to the malicious use of sensitive information [4]. One of the problems is that too much information about us is stored in too many places, possibly without our explicit knowledge and consent, and we have no idea how sensitive personal information will be used by entities at the other end of the communication channel. Moreover, most attacks on electronic databases are mounted by insiders rather than outsiders [4], and tighter access controls cannot prevent an attack mounted by a user with legitimate access to the system.

There is a widely held view (which we discuss further in chapter 2) that the rapid erosion of privacy resulting from the repeated use of long term credentials in the electronic world is a threat. There have been several previous approaches (which we discuss in chapter 2) advocating anonymous transactions. The problem of using long term digital credentials linked to a stable long term identity has been well understood in the world of commercial transactions, and anonymous payment mechanisms [19,18] were designed to deal with it. However, current anonymous payment mechanisms cannot be used to address the threats to privacy in the domain of access control systems. Digital cash [19], for example, advocates complete anonymity, which makes it difficult for an auditor to link actions to principals retrospectively, and this is a legitimate requirement.

1.2 Scope of this Dissertation

This dissertation focuses primarily on the area of access control systems, and argues that the exclusive use of long term credentials represents a lost opportunity in this area. We propose that many applications would benefit from having ways of making authorization decisions without using long term credentials. We propose examples of such mechanisms in this dissertation, with which policies can be enforced in access control systems without compromising the privacy of a user. Our approach allows a localisation of trust: entities are not compelled to transitively trust [21] other entities which form part of a system.

This dissertation investigates the extent to which taking a unified approach to trust and anonymity can provide useful leverage to address the threat to privacy without compromising the principal goal of an access control mechanism, which is to allow authorized persons access to a resource and to prevent unauthorized access. In our privacy model users reveal their identities to some and conceal their identities from others, which is similar to the privacy model proposed in [1]. We focus on some prominent role based authorization models, with emphasis on providing auditable anonymous role activation mechanisms using short lived electronic identities. We propose a new layer of anonymity in current trust management systems which can coexist with traditional non-anonymous mechanisms. To be precise, we concentrate on:

  1. Establishing the extent to which uncorrelatability is a requirement for authorization systems.
  2. Examining how we can address classical trust management problems in an anonymous way.

These two requirements are not independent: at present we are prevented from thinking about the former because there is no way of doing the latter. This dissertation provides a happy middle ground between absolute privacy and zero privacy. Our approaches are founded on an ability to control both the availability and the linkability of transactions. One of the interesting aspects of transactions is that we have no real control over the actions of the other party, so in order to achieve certainty of privacy we use unlinkability as our weapon.

1.3 Contribution

In this dissertation we define a new notion of surrogates, or short lived electronic identities. We construct authorisation mechanisms using these surrogates which are suitable for use with role based access control systems such as RCBS [7], the NIST model [35] and OASIS [6], which currently rely on authentication using long term credentials for the activation of roles. We demonstrate how the powerful concept of activation of roles using prerequisites described in [6] can also be achieved by combining our surrogate-based authentication mechanisms with ring signatures [55]. We also propose an authentication mechanism using our surrogates suitable for object based distributed systems like Globe [52].

The designers of various previous anonymous transaction systems [15,43,64,14,66] emphasize the property of non-transferability, i.e. only the owner of a credential can use it, and the credential cannot be delegated for use by others. We take issue with such approaches, and investigate ways of supporting auditable delegation using our surrogates in the anonymous world. We show how our surrogates can be used to combine anonymous authentication with delegation, and give some example scenarios where this is beneficial. Our delegation mechanism does not greatly restrict the choice of delegation semantics, although for exposition we adhere to Crispo’s delegation model [25] in this dissertation.

We also show that surrogates can be combined with existing electronic payment mechanisms without compromising the anonymity of the user. Our surrogate based authentication mechanism allows an unbiased auditor with appropriate authority to correlate actions with individuals, but a malicious auditor cannot forge audit records. Moreover, we show that our surrogates can be combined with delegation in such a way as to allow a two level audit mechanism with different trust assumptions for external and internal auditors. Our approaches allow a localisation of trust, and users are not required to transitively trust [21] external entities. These mechanisms turn out to be useful even when anonymity is not a requirement, because they enable the remote authentication mechanism to be taken out of the local trust envelope.

Our surrogates are generated by modular exponentiation of the parent public key of the user, and the secret value corresponding to a surrogate is generated by modular multiplication of the private key with the exponent used to generate the surrogate. Surrogates differ from other anonymous or blinded credentials [19,12,18,43,15,63] in that anonymous credentials need to be certified by some authority [15,63], and can either be reused [15] or require the user to get a new credential issued after every single transaction [63]. The surrogates we propose are for single use, do not need to be certified, and can only be used by their legitimate owner. Surrogates are verified by proving that the presenter of the surrogate has knowledge of the secret that was used to generate it, without revealing the secret. The verification authority can verify surrogates but cannot masquerade as a legitimate user of the system.
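The construction can be illustrated with a toy discrete-log sketch; the group parameters, hash function and Schnorr-style proof of knowledge below are illustrative assumptions for exposition, not the exact protocol defined in chapter 4. The parent public key is raised to a fresh random exponent to form the surrogate, the surrogate secret is the modular product of the private key and that exponent, and verification proves knowledge of the secret without revealing it.

```python
import hashlib
import secrets

# Toy Schnorr-group parameters (far too small for real security).
q = 1019                 # prime order of the subgroup
p = 2 * q + 1            # safe prime, p = 2039
g = 4                    # generator of the order-q subgroup

def hash_to_zq(*vals):
    h = hashlib.sha256(repr(vals).encode()).digest()
    return int.from_bytes(h, "big") % q

# Parent key pair: the user's long term identity.
priv = secrets.randbelow(q - 1) + 1
pub = pow(g, priv, p)

# Surrogate: exponentiate the parent public key by a fresh random exponent x;
# the surrogate secret is the modular product priv * x.
x = secrets.randbelow(q - 1) + 1
surrogate = pow(pub, x, p)          # = g^(priv * x) mod p
surrogate_secret = (priv * x) % q

# Non-interactive Schnorr proof of knowledge of the surrogate secret:
# shows the presenter knows log_g(surrogate) without revealing it.
k = secrets.randbelow(q - 1) + 1
commitment = pow(g, k, p)
challenge = hash_to_zq(surrogate, commitment)
response = (k + challenge * surrogate_secret) % q

# Verifier's check: g^response == commitment * surrogate^challenge (mod p).
lhs = pow(g, response, p)
rhs = (commitment * pow(surrogate, challenge, p)) % p
print(lhs == rhs)  # True
```

Note that the verifier learns only that the presenter knows the secret behind this one surrogate; it never sees priv, so it cannot masquerade as the user.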

1.4 Organization

We start in chapter 2 with a brief overview of various trust management systems and anonymous transaction systems. Chapter 3 presents a brief overview of access control and authentication systems. In chapter 4 we define the new notion of surrogates, which is the main contribution of this dissertation. In chapter 5 we introduce the basic authorization protocols. We present some mechanisms for auditable anonymous delegation in chapter 6. The implications of our work in some other application areas are described, with some examples, in chapter 7. We set out our conclusions in chapter 8.

We have deliberately kept the main body of the dissertation short and included additional material in the appendix. The appendix contains a number of papers and a technical report. The first paper, entitled “A Palladium Based Solution for Bipartite Trust Management”, presents a mechanism which was developed at a very early stage of the author’s PhD project and which we now regard as a false start. This is followed by “Anonymous Authentication”, a paper which was presented at the Cambridge security protocols workshop in 2004 and will be published in Lecture Notes in Computer Science by Springer-Verlag. The paper “Uncorrelatable Electronic Transactions using Ring signatures”, in which we use ring signatures to authenticate users, is included next; it was presented at the Wholes workshop organized by the Swedish Institute of Computer Science. The paper titled “Authorization for Ubiquitous Computing”, which introduces a protocol that can be used in systems based on distributed shared objects, was presented at the Conference on Distributed Processing and Networking organized by the Indian Institute of Technology. The final item in the appendix is a technical report published by the computer science department of the University of Hertfordshire, titled “How to deceive Prying Eyes in the Electronic World”.

 

Chapter 2

Trust and Anonymity

2.1 Introduction

Considerable work has been done in the areas of both anonymous transaction systems and trust management. In this chapter we review some prominent previous approaches in the light of the research question addressed in this dissertation. Trust management systems were developed to aid applications in making decisions about requests for potentially dangerous actions, based on local policies and credentials. For our purposes we divide anonymity techniques into two major classes, namely anonymous communication channels and anonymous transaction protocols. The former operate at the network layer; the latter operate on top of anonymous channels. Examples of the former are mix nets [17], Mixminion [27] and Crowds [54]; examples of the latter are digital cash [19] and pseudonym systems [18,20,55].

In the subsections of 2.2 we discuss various trust management approaches, such as Keynote, independent trust entities, the attribute vector model, certificates and trusted computing. This is followed in section 2.3 by a discussion of anonymous communication protocols, which operate at the network layer and protect against traffic analysis. Section 2.4 reviews anonymous transaction protocols, which operate on top of an anonymous communication network as discussed in section 2.3. Since we use ring signatures with RSA keys in some of our protocols, we give a brief description of the RSA cryptosystem and of ring signatures in sections 2.5 and 2.6 respectively. This is followed by our conclusions.

 

2.2 Trust Management Approaches

2.2.1 Trust

Trust is a complex subject and there is considerable variation in the meaning of trust as used in the literature [38]. Trust is often used to mean reliance, i.e. A cannot complete A’s part of a task unless B completes what is required of B. A can validate B’s actions, and it is asserted in [21] that in this context A does not need to trust B although A needs to rely upon B. In the context of the trust management systems we discuss in section 2.2.2, trust is often used interchangeably with authorization, and a trust management system is designed to aid applications in answering the question “Should we allow this dangerous action to be carried out?”. In [38] trust is defined as “the firm belief in the competence of an entity to act dependably, securely and reliably within a specified context”. It is noted in [38] that delegation is an example of a transitive trust relationship, but in contrast it is concluded in [21] that trust is not delegation. In this dissertation we adhere to the notion of trust set out in [39], which defines trust as a measure of risk: A trusts B means that B has the ability to violate A’s security policy. In the following chapters we use the services of a third party whom we refer to as a partially trusted third party (see chapter 4). Users trust the partially trusted third party to aid them in carrying out a transaction, but the third party cannot forge transactions or audit records. In other words, the partially trusted third party has the potential to violate A’s anonymity, but cannot masquerade as A. We attempt where possible to avoid transitivity, as it might have adverse and unexpected results [21]. In particular, all trust relationships are local and users are not required to trust unknown entities external to their local domain. In the rest of this section we discuss some trust management systems. 
Systems like Keynote were proposed to address the issue of authorisation (which we deal with in this dissertation) in access control systems, whereas independent unbiased trust entities were designed to address the issue of privacy. We keep policy specification languages outside the purview of this discussion, as our proposed approach (see chapter 4) is policy neutral and can be used with any policy specification language.

 

2.2.2 Keynote

Keynote [9] is the latest version of a trust management approach [10] that originally came from Blaze and others at AT&T. Policymaker [10] was developed by the same people, and some of them were involved in the development of REFEREE [24], another trust management approach that was developed together with researchers from MIT and W3C. We will focus on Keynote here. Keynote works more or less like a database query engine. It can function as a stand-alone application interfacing with other parts of the system and helping them to make decisions. Let us lump these other parts of the system together and call them ‘client applications’. Whenever a client application faces the question “Should we carry out this dangerous action?” it refers to Keynote for an opinion, and based on that opinion it decides its future course of action. The application presents the Keynote trust management engine with the set of local policies that should be taken into account in deciding on this particular request, along with the credentials of the requestor and details of the proposed action. If the proposed action conforms to the local policy then Keynote advises the application to proceed; otherwise Keynote advises it not to perform the action, as it is against the local policy. Keynote thus acts as a compliance checker for the client application. Policies are specified in the form of assertions, and requested actions are evaluated against these assertions.

Let our policy be that we are only going to allow payment by credit card to those who are authorized to accept credit cards and have the required credentials from a bank or building society which is legally empowered to issue such authorizations. A Keynote assertion specifying this policy looks like:

  • Authorizer: “DSA: 1FFG2” # CA’s key
  • Licensee: “RSA: DEF662” # Buyer’s key
  • Conditions: “((app-domain = “BUY”) “Pay by credit card if seller is authorized to accept credit cards”)”
  • Signature: “DSA-SHA1: 1861234” # Signature of the authorizer.

The sellers need to have a digital representation of their authorization to accept credit cards. Let us suppose the seller uses an SPKI certificate, which can be roughly represented as:

  • Issuer: “RSA 2GG36” # Bank’s key
  • Subject: “RSA 7YYH5” # Seller’s key
  • Authorization: “Can accept credit cards”
  • Delegation: “No”
  • Validity: “10/02 – 10/09”

When there is a request from the buyer, the relevant application fetches the relevant credential of the seller, parses it and presents it to Keynote along with the requested action, the id of the requestor and the id of the policy to be consulted in making a decision. Based upon this information Keynote comes to a decision, which is most likely to be positive in the above instance. Keynote perfectly enforces compliance with the above policy, but that is not all we want to achieve. Here Keynote checks whether the remote host is allowed to accept credit cards, and gives a decision on that basis. But consider a situation where a corrupt or disgruntled employee steals credit card numbers and uses them, or gains access to other information such as medical records. Using Keynote, one has to live with the threat of correlation of access requests by an adversary. So we should not use a Keynote assertion to wrap the entire organization in a trust blanket when we have no information about the storage and use of records. The static nature of assertions cannot support a dynamic real world process, where information is accessed and used by several different users.
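The compliance check described above can be sketched as a simple predicate evaluation; the field names and dictionary representation below are simplified assumptions for illustration, not Keynote’s actual assertion syntax or query interface.

```python
# Toy compliance checker in the spirit of Keynote: evaluate a requested
# action against a credential and a local policy. All field names here are
# hypothetical simplifications of the assertion fields shown above.
policy = {
    "app_domain": "BUY",
    "required_authorization": "Can accept credit cards",
}

seller_credential = {
    "issuer": "RSA 2GG36",            # bank's key
    "subject": "RSA 7YYH5",           # seller's key
    "authorization": "Can accept credit cards",
    "valid": True,                     # validity-period check assumed done
}

def comply(action, credential, policy):
    # Advise "proceed" only if the action is in the policy's domain and the
    # credential grants exactly the authorization the policy requires.
    return (action["app_domain"] == policy["app_domain"]
            and credential["valid"]
            and credential["authorization"] == policy["required_authorization"])

action = {"app_domain": "BUY", "type": "pay_by_credit_card"}
print(comply(action, seller_credential, policy))  # True
```

The sketch makes the limitation discussed above concrete: the check says nothing about what the seller does with the card number after the decision is made.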

2.2.3 Independent Unbiased Trust Entities

Independent unbiased trust entities like TRUSTe, EEF [48] etc. issue a seal that is displayed on websites that conduct financial transactions online. There are also alternative dispute resolution agencies that intervene whenever there is a dispute between the consumer and the seller. These seals are rather like the trade licences displayed in most shops, or like a safety certificate. For example, any boat plying on the river Trent must have some form of authorization from the British Waterways Board and display it, but the fact is that just having a seal does not guarantee that all the employees are trained in what to do when there is a man overboard. There are checks when such seals or certificates are issued, but someone can always register, obtain a seal and later turn dishonest. We are very sceptical about the utility of such practices in cyberspace, as gathering evidence purely in the electronic world is very hard to do [57], and a seal cannot act as a guarantee of somebody’s honesty. Seals can have a psychological effect on customers who do not understand the system in much depth. So if we again put the question of what happens to our personal information, can these trust entities give us a satisfactory answer? Nor do these seals guarantee proper delivery of goods or save the seller from being defrauded. Merchant websites displaying such seals can argue that, since they display the relevant seals, they are expected to act properly. The success of these independent unbiased trust entities depends largely on self regulation, which assumes that everyone will benefit by acting honestly. The organizations issuing seals have little control over the storage, use and access control mechanisms of the target organizations. The seal issuing organizations also lack verifiable proof of the use and storage of information by the sellers, so an element of risk, or unnecessary trust, is involved between these trust entities and the sellers.

 

2.2.4 Certificates

Digital certificates were first proposed by Kohnfelder in his 1978 MIT bachelor’s thesis [42]. The digital certificate was devised after public key cryptography was introduced and the need arose for principals to communicate each other’s public keys. Typically a public key certificate contains information about the issuer (which may be a certifying authority), the subject whom it is supposed to represent, the period for which the certificate is valid, and related information. This kind of certificate cannot vouch for the trustworthiness of the subject, but is primarily used for binding keys to principals [33]. Digital certificates may also be used to check for authorization, as in SPKI [32]. As we have seen above in section 2.2.2, a certificate from a bank states whether or not a particular seller is able to accept credit cards, but that is not enough to ensure that our credit card numbers will not be used maliciously. Certification schemes depend on a global authority [33]. We need detailed assertions about who can be a CA, its authority, revocation services and so on in order to build a public key authority of some credibility. In the light of these problems, current certificates are not readily able to address the issues we are interested in, such as the proper use of personal information. Moreover, we have yet to build a global certification authority, and our proposed schemes do not need one.

 

2.2.5 Attribute Vector Model

The Attribute Vector Model [60] was developed with the goal of allowing trust decisions in pervasive computing. This model incorporates both the traditional identity based model and the context based model that is of relevance to pervasive computing. In the attribute vector model the degree of trust of an entity Si in Sj is derived as

D(Si, Sj) = f(A(Sj))

where Si and Sj are separate entities and A(Sj) is the set of Sj’s attributes. Si’s degree of trust D in Sj is evaluated using a function f which takes the attributes of Sj as input. Attributes can be traditional credentials, or they may be context-based attributes such as location. The former can be used for traditional computing purposes and the latter for pervasive computing devices. In the light of our question about the safety of our personal information, we do not think this model can be of much help when applied in the traditional context, as it operates on credentials, and we have seen earlier that credentials are not meant to guarantee somebody’s trustworthiness.
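As an illustration, f might be a simple weighted sum over attribute indicators; the attribute names and weights below are hypothetical and are not taken from [60].

```python
# Hypothetical instance of the attribute-vector trust function
# D(Si, Sj) = f(A(Sj)): f sums the weights of the attributes Sj holds.
# Attribute names and weights are illustrative assumptions only.
def degree_of_trust(attributes):
    weights = {"has_credential": 0.6, "in_trusted_location": 0.4}
    return sum(w for name, w in weights.items() if attributes.get(name))

# Sj presents a credential but is not in a trusted location.
print(degree_of_trust({"has_credential": True, "in_trusted_location": False}))  # 0.6
```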

Apart from the approaches mentioned above there are other systems, such as PICS [24], which is mostly used for content selection, to prevent children from visiting pornographic websites. In PICS we do not have the means to specify policies and credentials in the metadata format. PICS looks for labels, and based on them it approves or disapproves a decision. This kind of system cannot be used to answer the questions we started with. Microsoft Authenticode [24] is another approach, developed to tackle a particular trust management problem called code signing. Authenticode provides users with authenticity and accountability assurances for software downloaded over the internet. These approaches rely on the use of long term credentials, which is itself a problem, as discussed above in sections 3.6 and 2.2.2.

2.3 Anonymous Communication Channel

In this section we discuss mechanisms which prevent an adversary from discovering who is communicating with whom and when, thus preventing traffic analysis. This is important, as the protocols we describe later in this dissertation assume the existence of an anonymous communication channel between the communicating parties. Here we discuss some approaches to thwart traffic analysis by an adversary who can observe all traffic on the network.

Chaum introduced the idea of using relay servers, or remailers, or mixes, for anonymous communication [17]. The first widespread implementation [26] of an anonymous communication channel was produced by the Cypherpunks mailing list, and was based on the theoretical work on mixes. Later, Mixmaster [45] was developed, adding some features missing in Cypherpunk remailers. A mix network which allows the sender to choose the path is known as a free route network, whereas a mix network which allows only fixed routes is known as a cascade network. Cascades provide greater anonymity than free routes against an adversary who owns many mixes [8], but are more vulnerable to blending attacks [59]. Moreover, cascade networks have smaller anonymity sets, as the anonymity set is limited by the number of messages the weakest node can handle, whereas in free route networks larger anonymity sets are possible because no single mix acts as a bottleneck and many mixes handle messages in parallel [27].

 

2.3.1 Chaum’s Mix nets

This technique was proposed by David Chaum [17] based on public key cryptography. It allows an electronic mail system to hide who a participant communicates with as well as the content of the communication. The system assumes that:

  1. Anyone may learn the origin, destination and representation of all messages in the underlying telecommunication system, and anyone can inject, modify or remove messages.
  2. It is difficult to determine anything about the correspondences between a set of sealed items and the corresponding set of unsealed items.

The users of the system include not only the communicating partners but also a computer called a mix, which processes each mail before it is delivered. The sender encrypts a message with the recipient’s public key, and appends the address of the recipient. The encrypted message and the address of the destination are then encrypted using the public key of the mix. If the public key of the recipient is Kr and that of the mix is Km, then the input to the mix is Km(R1, Kr(R0, M), Ar), where R0 and R1 are random numbers used as confounders (a confounder is a random byte string inserted in the message, intended to make chosen and known plaintext attacks more difficult) and Ar is the address of the recipient. The mix decrypts the input using its secret key and then forwards the message to the recipient. The mix outputs messages in batches. The goal is to hide the correspondence between the input to, and output of, a mix. However, if one item is repeated in the input and is allowed to be repeated in the output, then the correspondence is revealed for that item. It is therefore important that the mix removes redundant copies before output. In the case of a single mix, the approach is to maintain a record of items used in previous batches.
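The layered structure Km(R1, Kr(R0, M), Ar) can be sketched as follows; here “encryption” is modelled symbolically as a tagged tuple, so the example shows only the message structure and who can strip which layer, not real cryptography.

```python
import secrets

# Symbolic "encryption": a tagged tuple readable only by the holder of key_id.
def seal(key_id, payload):
    return ("enc", key_id, payload)

def unseal(key_id, sealed):
    tag, kid, payload = sealed
    assert tag == "enc" and kid == key_id
    return payload

# Sender prepares Km(R1, Kr(R0, M), Ar): the inner layer is readable only by
# the recipient, the outer layer only by the mix.
M, A_r = "hello", "recipient@example"
R0, R1 = secrets.token_hex(8), secrets.token_hex(8)   # confounders
inner = seal("K_r", (R0, M))
to_mix = seal("K_m", (R1, inner, A_r))

# The mix strips its layer, discards the confounder R1, and forwards the
# still-sealed inner item to the address Ar.
r1, sealed_for_recipient, addr = unseal("K_m", to_mix)

# The recipient strips the final layer and discards R0.
r0, message = unseal("K_r", sealed_for_recipient)
print(addr, message)  # recipient@example hello
```

Because the mix never sees M and an observer never sees the correspondence between to_mix and the forwarded item (the confounders make the two ciphertexts unlinkable), neither party alone can link sender to message content.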

2.3.2 Onion Routing

Onion routing [62] is a general purpose communication infrastructure for private communication over a public network. It interfaces with off-the-shelf software and systems through specialized proxies, making it easy to integrate into existing systems. It operates by building anonymous connections within a network of real time Chaum mixes. Onion Routing’s network of core onion routers is distributed across multiple administrative domains, so that no single router can compromise the entire network. Onion routing can be used with both proxy-aware and non-proxy-aware applications. Onion routing’s anonymous connections are protocol independent and exist in three phases:

  1. Connection setup: setup begins when the initiator creates an onion which defines the path of the connection through the network. An onion is a data structure that specifies the properties of the connection at each point along the route.
  2. Data movement: data travels along the connection, and each onion router along the route uses its private key to remove one layer of encryption, as defined by the control information in the onion defining the route.
  3. Connection tear-down: this can be initiated by either end, or from the middle.
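The building and peeling of an onion along a route can be sketched as follows; as before, “encryption” is a symbolic tagged tuple so that the layering is visible, and the router names are hypothetical.

```python
# Symbolic sketch of onion construction and peeling along a three-router
# route. Each layer names the router that may strip it and the next hop;
# a router learns only its predecessor and successor, never the full route.
def wrap(route, payload):
    # Innermost layer first: wrap for the last router, then work outward.
    onion = payload
    for router, next_hop in reversed(route):
        onion = ("enc", router, next_hop, onion)
    return onion

route = [("R1", "R2"), ("R2", "R3"), ("R3", "destination")]
onion = wrap(route, "anonymous data")

# Each router strips exactly one layer and forwards to the next hop.
hop = "R1"
while hop != "destination":
    tag, router, next_hop, onion = onion
    assert tag == "enc" and router == hop
    hop = next_hop
print(hop, onion)  # destination anonymous data
```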

2.3.3 Mixmaster: Type 2 Remailer protocol

Mixmaster’s [45] design philosophy is strongly influenced by Chaum’s digital mixes. Messages are sent as one or more packets. All Mixmaster packets are of the same length, and all bits are encrypted with a 3DES [61] key at every hop, so no information about the identity of the message is visible to an observer. Even a compromised remailer can only learn the previous and next locations in the chain; it cannot discover how many preceding hops there were or how many following hops there will be. The header for the last remailer in the chain contains a flag indicating that it is the last hop, and indicates whether the message is part of a multipart message. If it is, then the message ID number is used to identify all the other parts. Only the last hop can see that a group of packets are parts of a single message. If not all parts arrive within a time limit then the message is discarded.

2.3.4 Mixminion: Type 3 Remailer protocol

Mixminion [27], a protocol for asynchronous, loosely federated remailers, addresses some of the flaws of Mixmaster while remaining as flexible as Mixmaster. Mixmaster does not support replies, whereas Mixminion introduces a new primitive called a single-use reply block (SURB), which makes replies as secure as forward messages. In Mixminion the servers themselves cannot distinguish reply messages from forward messages. Mixminion uses TLS over TCP for link encryption between remailers, with ephemeral keys to ensure forward anonymity for each message, whereas Mixmaster uses SMTP for transport. Most ISPs do not tolerate users who potentially deliver hate mail etc., and this requirement forms a barrier to wide scale remailer deployment. Mixminion allows each node to specify and advertise an exit policy, whereas Mixmaster provides no way for nodes to advertise their capabilities and roles. Replay attacks are a serious problem in mix networks: Mixmaster keeps a list of old entries, but an attacker just has to wait until the server has forgotten its old entries and then replay a message. Mixminion addresses this threat using key rotation: a message is addressed to a given key, and after the key changes no messages to the old key will be accepted. Mixminion uses synchronized redundant directory servers to provide information about the network, compared to the ad hoc approach of Mixmaster. Mixminion also has a simple dummy policy to improve anonymity.
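The difference between a bounded replay cache and key rotation can be sketched as follows; the epoch counter and method names are an illustrative model, not Mixminion’s actual server interface.

```python
# Sketch of why key rotation defeats long-delay replay where a bounded
# replay cache alone does not: each message is bound to a key epoch, and
# once the server rotates its key the old epoch is rejected outright.
class Remailer:
    def __init__(self):
        self.epoch = 0         # identifies the current key
        self.seen = set()      # replay cache for the current epoch only

    def rotate_key(self):
        self.epoch += 1
        self.seen.clear()      # safe to forget: old-epoch messages can never replay

    def accept(self, msg_id, msg_epoch):
        if msg_epoch != self.epoch:
            return False       # addressed to a retired key
        if msg_id in self.seen:
            return False       # replay within the current epoch
        self.seen.add(msg_id)
        return True

r = Remailer()
print(r.accept("m1", 0))   # True  (fresh message)
print(r.accept("m1", 0))   # False (replay, caught by the cache)
r.rotate_key()
print(r.accept("m1", 0))   # False (old epoch, rejected despite the cleared cache)
```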

 

2.4 Anonymous Transaction Protocols

2.4.1 Private Authentication in Mobile Networks

Protocols for private authentication in mobile networks were proposed in [1], where communicating parties reveal their identities to each other but not to others in a group or to third parties. When a mobile principal A wants to communicate with another mobile principal B in the same location, the two exchange messages encrypted with their public keys in such a manner that an eavesdropper cannot detect the presence of either A or B in the area. The protocols in [1] cannot be used for the scenarios we deal with in this dissertation, as in our cases the communicating partners do not reveal their identities to each other. Moreover, in [1] two parties always use long term credentials to communicate with each other, which can be used to correlate the actions of a particular principal.

 

2.4.2 On iPrivacy and Lumeria

These systems were developed to protect users’ personal information from companies. iPrivacy is a U.S.-based company from whose website [48] the user downloads software. This software encrypts the user’s personal details, creates a fictitious identity and a one-time credit card number, which is matched by the credit card company with the real credit card number; the goods are then delivered to an address chosen by the customer. With Lumeria [48] all the information is stored with Lumeria, and the customer accesses the seller via a Lumeria proxy server and can then buy goods online. These schemes can be compared with the example mentioned earlier, where one trusts one’s doctor to keep one’s medical records safe. We cannot have any verifiable proof of the integrity of software downloaded over the net or of the integrity of the company storing our personal information. We therefore think these schemes are vulnerable to abuse in the same way as the earlier ones, where we leave our information with the selling websites.

 

2.4.3 Chaum’s Digital Cash

A detailed description of digital cash can be found in [19]. The user goes to the bank with a signed request called a note; the bank credits the account of the requestor after checking the signature on the note. In this scheme users generate the note number, which the bank cannot see; this ensures that even the bank cannot track the spending habits of the user. The note number is unique for every note. After a note has been spent the bank can see the note number but cannot determine to whom the note was issued. Users carry a device called a representative, supplied by the bank, to generate the note numbers. There are also proposals for implementing an observer in the user’s representative which guards against double spending. The anonymity of a user can be compromised if he/she tries to spend a particular note twice. This scheme cannot prevent transfer of credentials, which is not a problem in the case of notes but certainly would be for driving licences, medical prescriptions etc.
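
The blinding step at the heart of Chaum’s scheme can be sketched with RSA. This is a toy (tiny primes, no padding, no representative or observer, invented note format); it shows only how the bank can sign a note number it never sees:

```python
import hashlib
import secrets

# Toy bank signing key (tiny primes -- illustration only).
p, q, e = 1009, 1013, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def blind(note_number):
    # The user hashes the self-chosen note number and blinds it with r^e.
    m = int.from_bytes(hashlib.sha256(note_number).digest(), "big") % n
    while True:
        r = secrets.randbelow(n - 2) + 2
        try:
            r_inv = pow(r, -1, n)   # r must be invertible mod n
            return (m * pow(r, e, n)) % n, r_inv, m
        except ValueError:
            continue                # gcd(r, n) != 1: pick another r

def bank_sign(blinded):
    # The bank signs without ever seeing the note number m itself.
    return pow(blinded, d, n)

def unblind(sig_blinded, r_inv):
    # (m * r^e)^d = m^d * r mod n, so multiplying by r^-1 leaves m^d.
    return (sig_blinded * r_inv) % n

def verify(m, sig):
    return pow(sig, e, n) == m

blinded, r_inv, m = blind(b"user-chosen note number 42")
sig = unblind(bank_sign(blinded), r_inv)
assert verify(m, sig)     # a valid bank signature on the hidden note
assert blinded != m       # what the bank saw reveals nothing about m
```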

 

2.4.4 Pseudonyms

A scheme using pseudonyms or short lived electronic identities was proposed in [43]. Here the user generates private and public keys, and so do the organizations. The user goes to the credential issuing organization and generates a pseudonym (short lived identity), which is a function of the secret and non-secret keys of the user and the organization. Credentials contain the nym of the user, which the user can then use for the purpose the credential is meant for. The user has different credentials for different organizations and can use credentials issued by one organization while dealing with another. Our proposals differ from pseudonyms: in our scheme the user is globally represented by his/her public key rather than by different pseudonyms; moreover, auditable delegation is not possible using pseudonyms.

 

2.4.5 Idemix

A description of idemix can be found in [15]. In idemix a user first registers a pseudonym with a global pseudonym authority (PA), and the PA issues a credential stating that the pseudonym is valid. The user can then use the pseudonym to get a reference for a credit card payment from a different organization. The organization issuing payment tokens trusts the PA. A user can then use the payment tokens with other organizations. The user, the PA and the other organizations have to be part of the idemix system: idemix issues IP addresses as well as SSL certificates to each of them. Moreover there arises a need for a global pseudonym authority which everyone needs to trust. It is our contention that building a global pseudonym authority is as difficult as building a global public key authority, and both in any case are developments yet to come to fruition. Moreover, transfer of credentials is not possible in the idemix system. The mechanisms we propose in chapter 5 do not need a global pseudonym authority, as trust relationships are more localised, e.g. students trust their own university network administrator. Moreover our mechanisms can coexist alongside non-anonymous mechanisms and do not need a significant change in the infrastructure.

 

2.4.6 Globally Unique Pseudonym

A scheme for tying attributes to pseudonyms was proposed in [64]. A user contacts a registrar with a proof of his/her identity. The registrar is not a single entity but a group of principals, and the user must contact a threshold number of them. The registrar then contacts an issuer who issues a globally unique pseudonym to the user and binds the pseudonym to the user’s public keys for signature and encryption in what is called a GUP certificate. Issuers, like registrars, are threshold entities. Having threshold entities prevents disclosure of individual information as well as registration of multiple identities by a single user. GUP certificates can be used either as attribute certificates or for commercial purposes. The scheme presented in [64] requires a global pseudonym authority and a registration authority, which is similar to trusting a third party with personal sensitive information [4]. The schemes we present in this dissertation have a more localised trust relationship rather than a global authority.


 

2.5 The Rivest Shamir Adleman Cryptosystem

The RSA crypto-system [56] relies for its security on the difficulty of factoring composites of large primes. A composite n = p*q is computed and made public, while the two primes p and q are kept secret. A value e with 1 < e < (p-1)*(q-1), coprime to (p-1)*(q-1), is chosen at random, and d such that d*e = 1 mod (p-1)*(q-1) is efficiently calculated. The public key is (e, n), while (d, n) is the private key. In order to encrypt a message M for the public key (e, n) one simply performs an exponentiation modulo n. The ciphertext is therefore

M^e mod n                                                                                         (2.1)

To decrypt, the message is simply raised to the power of the decryption key,

M^(e*d) mod n = M mod n                                                                   (2.2)

Digital signatures can also be implemented. The public verification key is denoted (e, n) while the signature key is (d, n). The signer raises the message to be signed to the power d as follows

M^d mod n                                                                                         (2.3)

and the verifier checks the signature as follows:

M^(e*d) mod n = M mod n                                                                   (2.4)

All operations are performed modulo n. Digital signatures provide integrity and non-repudiation properties: if the public key of Bob is well known, Alice can prove to a third party that Bob has signed a message. In this dissertation we use RSA keys to generate ring signatures (see section 2.6).
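
The operations above can be checked with a few lines of code; the primes below are toy values for illustration only (real keys use primes of at least 1024 bits and padding such as OAEP/PSS):

```python
# Textbook RSA with toy primes -- illustration only.
p, q = 61, 53
n = p * q                      # n = 3233, made public
phi = (p - 1) * (q - 1)        # 3120, kept secret
e = 17                         # 1 < e < phi, coprime to phi
d = pow(e, -1, phi)            # d*e = 1 mod phi; here d = 2753

M = 65
C = pow(M, e, n)               # encryption:   M^e mod n           (2.1)
assert pow(C, d, n) == M       # decryption:   M^(e*d) = M mod n   (2.2)

S = pow(M, d, n)               # signing:      M^d mod n           (2.3)
assert pow(S, e, n) == M       # verification: M^(e*d) = M mod n   (2.4)
```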

One of the major issues in public key cryptosystems is key management. Several techniques have been proposed for the distribution of public keys, which are as follows:

  1. Public announcement
  2. Publicly available directory
  3. Public key authority
  4. Public key certificates

For a detailed discussion on key management one can refer to [61,11]. For the protocols we propose in this dissertation we do not assume the existence of a global public key authority, thus eliminating transitive trust relationships between users and global authorities.

 

2.6 Ring Signatures

Protocols for our example scenarios 5.4, 5.3 and 6.3 make use of a two-level authentication mechanism where users first authenticate as members of a group using ring signatures and then authenticate using their surrogates as discussed in section 4.5. We give a brief description of ring signatures here. The ring signature was designed by Ron Rivest, Adi Shamir and Yael Tauman [55]. They call a set of possible signers a ring. One of the members of the ring actually signs using his/her own private key and the public keys of the other ring members. In this signature scheme the verifier does not learn who the signer is, only that the signer is a member of a certain group of possible signers. In producing such a signature the signatory does not need the co-operation of any other member of the group. The other members do not even have to agree to be in the group. All the signer needs to know is the public keys of all the members of the group. Unlike group signatures, ring signatures have no set-up procedure and no revocation procedures (but verification depends on ring membership at the time of signing), and any user can choose any set of possible signers. What is important from the point of view of the protocols we present in this dissertation is that a ring signature is perfectly signer-ambiguous: if there are r members of a ring then an adversary cannot guess the actual signer with probability greater than 1/r. Since complete anonymity is not a requirement in our protocols, we use ring signatures in conjunction with our surrogates to enable an auditor to link actions to individuals retrospectively.

The ring member who actually produces the signature is known as the signer and the other ring members are known as non-signers. Each member is associated with a public key Pk (via a PKI directory or certificate), the corresponding private key being Sk. It is assumed that the ring members use a trapdoor one-way permutation (such as RSA) to generate and verify signatures. Ring signatures have two procedures:

  1. ring-sign(m, P1, P2, …, Pr, Sk): This produces a signature σ on a message m, given the public keys P1, P2, …, Pr of the r ring members and the private key Sk of the signer.
  2. ring-verify(m, σ): This accepts a message m and the signature σ (which includes the public keys of all the members of the ring) and outputs either true or false.

Here we describe a ring signature scheme where individual signers use RSA as their signature scheme.

 

2.6.1 Ring Sign

Each ring member Ai has an RSA public key Pi = (ni, ei), and the trapdoor one-way permutation fi over Z_ni is defined as: fi(x) = x^ei mod ni

By the properties of RSA it is assumed that only Ai can compute the inverse permutation fi^-1 efficiently. Since the trapdoor one-way permutations of the various ring members have domains of different sizes, for ring signatures all the trapdoor permutations are extended to a common domain {0,1}^b, where 2^b is some power of two larger than all the moduli ni. The trapdoor one-way permutation fi over Z_ni is thus extended to gi over {0,1}^b; a detailed discussion can be found in [55]. The signature is generated as follows:

  1. The signer s first computes a symmetric key as the hash of the message m to be signed: k = h(m)
  2. Then the signer chooses an initialization value v uniformly at random from {0,1}^b.
  3. Next the signer picks random values xi for all the other members of the ring (1 ≤ i ≤ r, i ≠ s) uniformly from {0,1}^b and computes yi = gi(xi) for each i ≠ s.
  4. Fourth the signer solves the ring equation for ys. The ring equation uses a combining function which takes as input the key k, the initialization value v and values y1, y2, …, yr in {0,1}^b, and produces an output z in {0,1}^b. The combining function is efficiently solvable for any single input: for any s, given a b-bit value z and values for all the inputs yi except ys, it is possible to efficiently find a b-bit value for ys such that Ck,v(y1, y2, …, yr) = z. In this step the signer solves this equation for ys with z = v: Ck,v(y1, y2, …, yr) = v
  5. Next the signer uses his knowledge of the trapdoor in order to invert gs on ys and obtain xs: xs = gs^-1(ys)
  6. The signature σ on the message is defined as: σ = (P1, P2, …, Pr; v; x1, x2, …, xr)

 

2.6.2 Ring Verify

A verifier can verify an alleged signature σ on the message m, where

σ = (P1, P2, …, Pr; v; x1, …, xr), as:

  1. First, for i = 1, 2, …, r the verifier computes yi = gi(xi)
  2. Second, the verifier computes the hash of the message to get the key k: k = h(m)
  3. Finally, the verifier checks whether the yi satisfy the fundamental equation Ck,v(y1, y2, …, yr) = v. If the equation is satisfied then the verifier accepts the signature as valid.

We discussed the RSA cryptosystem based on composites in section 2.5, where we generate inverses of RSA public keys. These inverses are used to produce ring signatures as described in this section, e.g. where the private key is used to invert the trapdoor one-way permutation in step 5 of the ring-sign operation.
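
The ring-sign and ring-verify procedures above can be sketched end to end. This is a toy: the moduli are tiny, the combining function’s keyed permutation E is a mere XOR pad (the scheme in [55] uses a block cipher), and b is far too small for security. It does, however, follow the six signing steps and the verification equation literally, including the extension of each RSA permutation to the common domain {0,1}^b.

```python
import hashlib
import secrets

B = 24   # common domain {0,1}^B; 2^B must exceed every modulus (toy size)

def make_key(p, q, e=65537):
    return {"n": p * q, "e": e, "d": pow(e, -1, (p - 1) * (q - 1))}

def g(x, n, exp):
    # RSA permutation f(x) = x^exp mod n, extended from Z_n to {0,1}^B:
    # apply f block-wise and leave the few out-of-range values fixed.
    q, rem = divmod(x, n)
    if (q + 1) * n <= 1 << B:
        return q * n + pow(rem, exp, n)
    return x

def E(k, x):
    # Toy keyed permutation: XOR with a key-derived pad.  It is its own
    # inverse (so D = E below); a real scheme uses a block cipher here.
    return x ^ (int.from_bytes(hashlib.sha256(k).digest(), "big") % (1 << B))

def ring_sign(m, keys, s):
    k = hashlib.sha256(m).digest()               # step 1: k = h(m)
    v = secrets.randbelow(1 << B)                # step 2: random v
    r = len(keys)
    x = [secrets.randbelow(1 << B) if i != s else None for i in range(r)]
    y = [g(x[i], keys[i]["n"], keys[i]["e"]) if i != s else None
         for i in range(r)]                      # step 3: y_i = g_i(x_i)
    c = v                                        # step 4: solve the ring
    for i in range(s):                           #   equation C_{k,v} = v:
        c = E(k, y[i] ^ c)                       #   forward to position s,
    left = c
    c = v
    for i in range(r - 1, s, -1):                #   then backward from the end
        c = E(k, c) ^ y[i]
    y_s = E(k, c) ^ left
    x[s] = g(y_s, keys[s]["n"], keys[s]["d"])    # step 5: x_s = g_s^-1(y_s)
    return v, x                                  # step 6: (v; x_1, ..., x_r)

def ring_verify(m, keys, v, x):
    k = hashlib.sha256(m).digest()
    c = v
    for i in range(len(keys)):                   # chain c through E and g_i;
        c = E(k, g(x[i], keys[i]["n"], keys[i]["e"]) ^ c)
    return c == v                                # fundamental equation holds?

keys = [make_key(1009, 1013), make_key(1019, 1021), make_key(1031, 1033)]
v, x = ring_sign(b"access request", keys, s=1)   # member 1 is the signer
assert ring_verify(b"access request", keys, v, x)
```

The verifier sees only that some ring member closed the chain; nothing in (v; x1, …, xr) distinguishes the signer’s slot from the random ones.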

 

2.7 Conclusions

As discussed above, trust management systems, independent trust entities and certificates cannot guarantee proper use of personal sensitive information. Anonymity systems provide a solution to the problem of privacy but cannot be used for making authorization decisions with current trust management systems, as these currently make decisions based on long term credentials like key certificates. Long term credentials, on the other hand, enable an adversary to correlate all the transactions of a particular user, which compromises the privacy of the user. In chapter 5 we shall present some alternative mechanisms where trust decisions can be made using transient identities or surrogates.

Privacy enhancing technologies [19,12,18,43,15,63] cannot ignore the fact that delegation is a key organizational practice. For anonymity systems to gain widespread acceptability they must support delegation. In chapter 6 we propose a mechanism for auditable delegation of credentials in the anonymous world. We illustrate the general approach we propose by presenting a protocol which can be used in a role based access mechanism to authenticate users to roles anonymously.

 

Chapter 3

Access Control and Authentication Mechanisms

3.1 Introduction

We review some popular access control models relevant to our work in section 3.2, and describe an approach to access control in a system based on distributed shared objects in section 3.3. The delegation model we shall adhere to in this dissertation is discussed in section 3.4 and is followed by a brief description of an open network authentication system called Kerberos in section 3.5. We describe an early version of Microsoft Palladium and our solution based on this Palladium architecture in section 3.6 and subsection 3.6.1 respectively. This is followed by our conclusions in section 3.7.

 

3.2 Access Control Mechanisms

Access control mechanisms are designed to control access to valuable information and are used by the military, the government and large organizations. Their goal is to permit authorized access as well as to prevent unauthorized access. In the following subsections we give a brief overview of various access control mechanisms.

 

3.2.1 Mandatory Access Control

Mandatory access control (MAC), supported by both military and civilian governments, attaches security labels to the objects to be protected and clearance levels to the users. This model was first formalised by Bell and La Padula [5]; later Sandhu [58] presented a minimal model called BLP. In these models access is granted on the basis of the security label of the object and the clearance level of the user. MAC was developed primarily for the military environment and does not permit reading up or writing down: that is, a user cannot read information labelled above his/her clearance level, nor write to objects labelled below it. Thus information flow among entities is restricted. The system administrator is responsible for setting and maintaining both sets of security levels.
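
The two rules can be stated in a few lines; the level names below are illustrative, not taken from [5]:

```python
# Minimal sketch of the Bell-La Padula rules: "no read up, no write down".
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def can_read(subject_clearance, object_label):
    # simple-security property: read only at or below one's clearance
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance, object_label):
    # *-property: write only at or above one's clearance (no write down)
    return LEVELS[subject_clearance] <= LEVELS[object_label]

assert can_read("secret", "confidential")
assert not can_read("confidential", "secret")     # no read up
assert not can_write("secret", "confidential")    # no write down
```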

 

3.2.2 Discretionary Access Control

Discretionary access control (DAC) mechanisms, unlike MAC, allow users to control who has access to their information. Users can delegate their own rights to other users. A matrix can be used to represent access to all objects by all users. This matrix is known as an access matrix and has a row for every user and a column for every object. DAC is more appropriate for static situations where the users and the objects remain the same for a considerable length of time. In organizations where users as well as objects change frequently, DAC is inadequate [6,7].
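
A minimal sketch of such an access matrix with discretionary delegation (the users, objects and rights are invented for illustration):

```python
# Access matrix as a nested map: one row per user, one column per object.
matrix = {
    "alice": {"report.doc": {"read", "write"}, "budget.xls": {"read"}},
    "bob":   {"report.doc": {"read"}},
}

def allowed(user, obj, right):
    return right in matrix.get(user, {}).get(obj, set())

def delegate(owner, grantee, obj, rights):
    # discretionary: a user passes (a subset of) their own rights to another
    assert rights <= matrix[owner][obj], "cannot delegate rights one lacks"
    matrix.setdefault(grantee, {}).setdefault(obj, set()).update(rights)

delegate("alice", "bob", "budget.xls", {"read"})
assert allowed("bob", "budget.xls", "read")
assert not allowed("bob", "report.doc", "write")
```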

 

3.2.3 Role Based Access Control

Role Based Access Control (RBAC) has been perceived by many [34] as more appropriate than DAC to the secure information processing needs of non-military systems. The RBAC mechanism is built upon the premise that in large corporations individual employees do not own the information they process or have access to; the information is owned by the corporation, and RBAC mechanisms should prevent employees from making unauthorized use of it. Roles in RBAC model the roles individuals perform in an organization, e.g. in a hospital an individual can act as a doctor, nurse, clinician, pharmacist etc. A role is a group of permissions that an individual acting in the role can exercise within the context of the organization. The determination of role membership and the allocation of permissions to roles comply with organizational policies and are not so much at the discretion of the system administrator. These policies are based on existing practices, laws, or ethics. Users cannot pass access permissions on to other users at their own discretion. Authentication of users to roles is done using long term credentials like key certificates, role certificates etc. This enables an adversary to correlate all the transactions of a particular user. In mechanisms using long term credentials, audit is traditionally linked to access control via authentication using the same identity. We show alternative ways of authentication and audit without using long term credentials in this dissertation. We also deal with activation of roles using prerequisites as described in [6].

 

NIST model

The NIST model was proposed in [35]. The NIST RBAC model is organised as a four-step sequence of increasing functional capability. The four levels are:

  • Flat RBAC
  • Hierarchical RBAC
  • Constrained RBAC
  • Symmetric RBAC

Flat RBAC captures the essential elements of RBAC. Users are assigned to roles and permissions are assigned to roles. Flat RBAC has a requirement for role review, whereby the roles assigned to a specific user can be determined, as well as the users assigned to a specific role. Flat RBAC also requires that users are able to exercise the permissions of multiple roles simultaneously. Hierarchical RBAC adds support for role hierarchies. A hierarchy is mathematically a partial order defining a seniority relation between roles, whereby senior roles acquire the permissions of their juniors. Hierarchical RBAC can support an arbitrary partial order or may impose restrictions on the role hierarchy. Constrained RBAC adds a requirement for enforcing separation of duties. Symmetric RBAC adds permission-role review, analogous to the user-role review of Flat RBAC. The NIST RBAC model does not directly address the issue of authentication and depends on strong authentication mechanisms based on long term credentials.
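
Permission inheritance through a role hierarchy can be sketched as a walk down the seniority relation; the roles and permissions below are invented for illustration:

```python
# Hierarchical RBAC sketch: seniors inherit their juniors' permissions
# through a partial order (role and permission names are made up).
permissions = {
    "employee":  {"read_notes"},
    "doctor":    {"prescribe"},
    "clinician": {"edit_record"},
}
juniors = {"doctor": {"employee"}, "clinician": {"employee"}}  # seniority edges

def effective_permissions(role):
    perms = set(permissions.get(role, set()))
    for j in juniors.get(role, set()):
        perms |= effective_permissions(j)      # walk down the hierarchy
    return perms

assert effective_permissions("doctor") == {"prescribe", "read_notes"}
```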

 

RCBS

The Role and Context Based Security (RCBS) model was proposed in [7]. This model introduces permission centric operational Separation of Duty (SoD) constraints which can be built on top of standard RBAC models or can be introduced as an additional feature of RBAC models. RCBS introduces the concept of critical combination which groups the set of transactions of a task and enforces Per-Role Operational SoD, Static Operational SoD and Inheritance in Operational SoD between roles with reference to the transactions in the critical combination. The application of permission specific constraints as proposed in RCBS allows for finer granularity in mutual exclusion between roles. Authentication in RCBS is done using role certificates which declare the identity of the user and the roles associated with the user. An activation certificate binds the user’s identity with the names of the activated roles of the current session. The role manager and the activation manager generate the appropriate certificates. The certificates can also be used for interdomain authentication in the RCBS model.

 

OASIS

The Open Architecture for Secure Interworking Services (OASIS) [6] is an RBAC model developed at the University of Cambridge Computer Laboratory. The features of OASIS include session support, prerequisite parameters, context awareness, flexible delegation with a fast revocation mechanism, and distributed operation and management. Context awareness is achieved in OASIS either by tagging roles with parameters which depend on the environment in which the role was activated or by the use of environmental predicates. These predicates allow information outside the OASIS system to be incorporated in the access control decision process. OASIS supports a form of delegation where the assigner can transfer rights not held by the assigner, e.g. a nurse can assign a doctor the role of treating-doctor although the nurse does not have the rights of a doctor. OASIS provides such assignment of rights by means of appointments. Appointments in OASIS can express traditional delegation as well as transfer of rights not held by the assigner. In OASIS [6] the assignment of users to roles is handled by role activation rules, and these rules allow users to enter roles based on valid prerequisites. Such a prerequisite can be a role, an appointment or an environmental predicate. Environmental predicates are powerful enough to express constraints such as separation of duties and cardinalities. Every prerequisite can be tagged to make it a membership condition and can be revoked. If a revoked role forms part of a prerequisite to activate other roles, the revocation of the former can trigger further revocations, resulting in a cascade. An example scenario [6] is a hospital where the role on-duty-doctor can only be activated by a user who has already activated the roles employed_paramedic, local-user and on-duty. Another example scenario is a university where the role research-student can only be activated by a student who has already activated the role student.
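
Role activation against prerequisites can be sketched as follows, reusing the hospital and university examples above. The rule format is our own simplification: it omits parameters, appointments and cascading revocation, and treats environmental predicates as a caller-supplied test:

```python
# OASIS-style sketch: a role may be activated only when all of its
# prerequisites (roles or environmental predicates) hold.
activation_rules = {
    "on-duty-doctor":   ["employed_paramedic", "local-user", "on-duty"],
    "research-student": ["student"],
}

def can_activate(role, active_roles, predicate=lambda p: False):
    # a prerequisite is satisfied by an already-active role or by an
    # environmental predicate evaluating to true
    return all(p in active_roles or predicate(p)
               for p in activation_rules.get(role, []))

active = {"employed_paramedic", "local-user", "on-duty"}
assert can_activate("on-duty-doctor", active)
assert not can_activate("research-student", active)
```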

 

3.3 Distributed Shared Objects

We live in a world where resources are hardly ever entirely local but are usually distributed across various locations. There is seldom a central authority that manages all access to these resources. Designers of distributed systems generally use long term credentials (key certificates) for authentication purposes. This means that servers can identify their clients, and can correlate their actions.

Globe [52] is a wide area distributed system where objects are physically replicated at several locations. The principal construct in Globe is a Distributed Shared Object (DSO) which consists of several local objects residing in local address spaces. The local objects storing a part of a DSO’s state are known as replicas. A replica consists of the following subobjects:

  • Semantics subobject, which is the only subobject written by the application developer, and which contains the code that implements the DSO.
  • Communication subobject, which is responsible for communication between local objects residing in local address spaces.
  • Replication subobject, which is responsible for keeping the replica’s state consistent with the other replicas.
  • Control subobject, which takes care of invocations from the client process on the host.
  • Security subobject, which is responsible for implementing the security policies of the DSO.

To invoke a method on a DSO, a user has to create a local object in his/her own address space. This object often acts as a proxy, routing requests to appropriate replicas. The Globe Location Service facilitates the finding of replicas. A user wishing to run a replica or a user proxy needs a Globe Object Server on his/her computer, either stand-alone or integrated with another application.

The current proposals by the developers [52] of Globe are based on certificates issued by the DSO owner and follow a role based authorization model.

 

3.3.1 Access Control in Globe

If we model an electronic newspaper as a DSO then the corporate entity running the newspaper is the owner of the DSO and determines the security policies for the DSO by identifying all the meaningful roles for the object. This involves careful examination of the application concerned, which is outside the scope of this dissertation. Once the user role set is identified for an object, each user role is associated with a bit vector indicating the methods the role is allowed to execute. Grouping the bit vectors for all the roles of a particular object forms the access control matrix for the DSO. This matrix is stored in the security subobject, and when a replica is created it gets the matrix from the owner of the DSO. When a role wants to invoke a method on an object, the security subobject consults this matrix to verify whether or not the role is allowed to invoke that particular method.
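
The per-role bit vectors can be sketched directly; the roles and method names below are invented for the electronic-newspaper example, and the method ordering is ours:

```python
# Bit-vector access check: the security subobject keeps, per role, one bit
# per DSO method (bit i corresponds to METHODS[i], LSB first).
METHODS = ["read_article", "post_comment", "edit_article", "delete_article"]

acm = {                     # access control matrix: role -> bit vector
    "reader":     0b0001,   # may only read_article
    "subscriber": 0b0011,   # may also post_comment
    "editor":     0b1111,   # may invoke every method
}

def may_invoke(role, method):
    return bool((acm[role] >> METHODS.index(method)) & 1)

assert may_invoke("editor", "delete_article")
assert not may_invoke("reader", "post_comment")
```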

The DSO owner signs certificates binding users to their public keys; such a certificate is then used by the user to authenticate himself/herself while requesting access. If there are users with the power to delegate then the security subobject verifies the chain of certificates until it reaches the one signed by the DSO owner. The replicas also authenticate themselves to the user, using replica certificates issued by the DSO owner; the replica and the user can start talking to each other after they have authenticated to each other.

The disadvantage of the user using long term credentials to authenticate himself/herself arises from the fact that an adversary can correlate all the actions of any particular user. This leaks information about a user’s online activities to an adversary, which can then be used in ways harmful to the user. We present an alternative authentication model in this dissertation whereby a user can authenticate himself/herself to the replica anonymously, and an adversary cannot correlate the actions of any user. This we believe is novel, because previous approaches to authentication rely on long term credentials. Similarly, at present trust management systems like Keynote [9] make decisions based on long term credentials and policies. Our new approach does not require significant changes to the way authentication is done at present. The trust management systems can be queried using the surrogates [28,25] employed in our approach, and decisions can still be made based on fixed or dynamic policies.

 

3.4 Controlled Delegation

Delegation [30,37] is a process by which a principal A authorizes another principal B to act on its behalf by transferring a set of its rights to B, possibly for a specific period of time. In many organizations, employees higher up the hierarchy delegate certain responsibilities to their secretaries or subordinates. Such instances are quite common in the real world, so to be useful in the real world anonymity systems should support delegation in certain situations. In an organization a manager can delegate to their secretary the power to act on the manager’s behalf for a duration chosen by the delegator.

There are several commercial websites whose customer base includes people who are legally not allowed to have a credit card (children under 18). In such situations children use their parents’ credit card to buy online. The problem here is that credit card numbers, once learnt, can be reused. Things become more complicated when parents have anonymous credentials instead of credit cards: to share anonymous credentials such as those mentioned in [18,43,15,63] the owner also has to share their secret key. This is a major problem in situations where delegation is a legitimate requirement.

In this dissertation we adhere to the semantics of the delegation model proposed by Crispo [25] for the following reasons. The model considers threats posed by the delegator, thus allowing space for repudiability of actions. The trust assumptions between the delegator and the delegatee are clearly spelt out in the specification of the delegation mechanisms. It is hard for the delegator to frame the delegatee, as well as for the delegatee to frame the delegator. It is also emphasized that the principle of consent be explicitly implemented; thus the delegator delegates only with the explicit knowledge and consent of the delegatee.

In the mechanism we introduce in section 6.4 we ensure that neither the delegator is able to frame the delegatee nor the delegatee the delegator. It is also difficult for the delegatee to masquerade as the delegator, and an unbiased auditor can uniquely and irrefutably link actions to individuals without being able to forge the audit records. The delegator is no longer able to use the permissions it has delegated.

 

3.5 Kerberos

Kerberos [46] is a trusted third party authentication service and users are referred to as the clients of the Kerberos authentication service. Various services (e. g. mail services, file servers) trust Kerberos’ judgment as to the identity of a user to be accurate. We give here a brief description of the current design of Kerberos which is followed by an overview of our proposals in section 8.5.

Kerberos has a top-level authentication service issuing Ticket Granting Tickets (TGTs). The TGT contains the client’s name along with the current time, the lifetime of the ticket and the client’s IP address, and is encrypted in a key known only to the ticket-granting server and the authentication server. The authentication server then sends the ticket, along with another copy of the random session key and some other information, back to the client, encrypted in the client’s secret key, which is actually derived from the user’s password; only the client and the Kerberos authentication server can read this response. In order to gain access to a service server, the application needs to build an authenticator that contains the client’s name and IP address and the current time. The authenticator is sent to the Ticket Granting Service (TGS) along with the TGT, and the TGS verifies the TGT. The TGS, after authenticating the client, constructs a new ticket that contains a new session key, the name of the client and an expiration time, all encrypted in the service server’s key. Users prove their identity for each desired service to the server concerned, and the server also proves its identity to the user. Servers are allowed to keep track of all past requests with time stamps that are still valid, to counter replay attacks. What we see, then, is strong authentication based on permanent credentials, with audit linked to authorization via the same permanent credential used to authenticate. Such an approach gives rise to some unnecessary trust assumptions between the various entities of the system, e.g. clients are compelled to trust servers not to enable an adversary to correlate their access requests. We discuss the work that can be carried out on Kerberos to remove such compulsive trust assumptions in section 8.5.
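
The ticket flow can be sketched as follows. This is a drastic simplification: the XOR "cipher", the JSON ticket format and the key names are all invented here, and real Kerberos uses proper symmetric encryption, nonces and mutual authentication. The sketch only shows how the TGT and authenticator chain the client’s identity from the authentication server through the TGS to the service.

```python
import hashlib
import json
import secrets
import time

def crypt(key, data):
    # toy XOR cipher, its own inverse -- real Kerberos uses 3DES/AES
    pad, ctr = b"", 0
    while len(pad) < len(data):
        pad += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, pad))

def seal(key, obj):
    return crypt(key, json.dumps(obj).encode())

def unseal(key, blob):
    return json.loads(crypt(key, blob).decode())

k_tgs     = b"key shared by AS and TGS"
k_service = b"key shared by TGS and mail server"
k_client  = hashlib.sha256(b"alice's password").digest()  # from the password

# 1. The authentication server issues a TGT sealed under the TGS key,
#    and a reply (carrying the session key) sealed under the client key.
s1 = secrets.token_hex(16)
tgt = seal(k_tgs, {"client": "alice", "addr": "10.0.0.7",
                   "session": s1, "expires": time.time() + 3600})
reply = seal(k_client, {"session": s1})       # only alice can read s1

# 2. The client builds an authenticator under the session key.
s1_client = unseal(k_client, reply)["session"]
auth = seal(bytes.fromhex(s1_client),
            {"client": "alice", "time": time.time()})

# 3. The TGS checks that the authenticator matches the TGT, then issues
#    a service ticket carrying a fresh session key.
t = unseal(k_tgs, tgt)
a = unseal(bytes.fromhex(t["session"]), auth)
assert a["client"] == t["client"] and t["expires"] > time.time()
s2 = secrets.token_hex(16)
ticket = seal(k_service, {"client": t["client"], "session": s2,
                          "expires": time.time() + 3600})

# 4. The mail server can now authenticate alice from the ticket alone.
assert unseal(k_service, ticket)["client"] == "alice"
```

Note how the client’s name travels inside every sealed structure: this is exactly the long-term identity linkage the rest of this dissertation seeks to avoid.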

 

3.6 Microsoft Palladium

The Palladium architecture [16] (now known as NGSCB) was proposed in 2002 by Microsoft in association with other companies, notably Intel and HP. In Palladium the public and secret keys of a user are embedded in a chip soldered to the microprocessor, popularly known as a Fritz chip. A trusted software component called the Nexus mediates between applications on top and the keys embedded in the hardware. Applications running on top can be divided into trusted and untrusted; applications trusted by the user are known as Nexus certified agents, or NCAs. Having keys embedded in the hardware enables recipients of messages to be certain about the sender of a message, with keys tied to principals by means of certificates which can be retrieved for verification. The problem with such an approach is that it requires a global certification authority, and even given one, revocation would still be a problem: once keys are compromised, users would have to replace their trusted boxes with new ones. Moreover, such an architecture does not permit plausible deniability, which is a legitimate requirement in certain situations [57]. Finally, Palladium depends on long-term credentials, which again is bad for individual privacy.

 

3.6.1 Trusted Hardware Based Approach

As part of our initial attempt to address the problem of anonymity and trust we designed a solution using Palladium, which is explained in appendix A. Here we summarise the salient features of this scheme and explain why this approach was not pursued further. Trustworthiness of individuals, in our Palladium-based approach, was ascertained using weights assigned to principals by their peers, and a decision was made using a simple metric. We assume the Palladium machine will prevent rogue applications from stealing passwords and will implement the owner's (i.e. the owner of the keys stored in the box) security policies. There is also a trusted third party called the AAWS which acts as a repository of information about the users. In the example we present in appendix A, users get payment tokens from the bank before they buy goods electronically from an online merchant. Users authenticate themselves to the bank using a certificate issued by the AAWS, and the bank issues payment tokens to the users. We assume that merchants also use Palladium machines and that those machines are certified by the AAWS. There is a two-way reputation assessment between the merchant and the customer: customers use weights assigned to the merchant by other customers and a simple metric to ascertain the reputation of the merchant, while merchants use a compliance checker and the certificates issued by the AAWS to the customers to ascertain the trustworthiness of the customers. The users trust the AAWS with personal sensitive information, and the AAWS issues pseudonym certificates to users, which users then present at the point where they request access or a service.
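The "simple metric" mentioned above could be, for example, a weighted mean of peer ratings compared against an acceptance threshold. The sketch below is illustrative only; the weights, ratings, and threshold are hypothetical values, not data from appendix A:

```python
def reputation(weights):
    # Weighted mean of peer ratings, each rating in [0, 1].
    total = sum(w for w, _ in weights)
    return sum(w * rating for w, rating in weights) / total

# Hypothetical ratings of a merchant by previous customers:
# (weight of the rater, rating given) pairs.
peer_ratings = [(1.0, 0.9), (0.5, 0.6), (2.0, 0.8)]

score = reputation(peer_ratings)
trusted = score >= 0.7   # hypothetical acceptance threshold
```

A customer would compute `score` from the weights published by other customers and proceed with the transaction only if `trusted` holds.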

Although the exercise was useful, we later realized that the disadvantages of such an approach using Palladium outweigh the advantages. Our design does not satisfy the requirement that trust should not be transitive [21], and it needs a global certificate issuing and revocation authority, which we believe is difficult to achieve and has not happened yet. Moreover, systems need to trust the authentication mechanism of a third party: in our example, merchants need to trust the authentication mechanism of the AAWS and the bank. We discuss in chapter 8 how trusted hardware could still be used in our approach, with users retrieving and using the keys stored in the hardware.

 

3.7 Conclusions

The access control and authentication mechanisms discussed in this chapter rely on long-term credentials for making authorization decisions. Services also depend on the authentication mechanism of a trusted third party; for example, various services trust Kerberos' judgment about the identity of a requestor and accept tickets issued by the TGS. Such approaches give rise to an unnecessary and compulsive trust relationship [39] between principals and the services they use, as well as between services and the authentication mechanism. The approaches we present in chapter 5 eliminate such compulsive trust relationships between principals, thus allowing principals to independently control the threats to which they are exposed.

Traditionally, in access control mechanisms allowing delegation, authentication is linked to audit using long-term credentials. This enables an auditor to correlate actions with individuals retrospectively. Since anonymity systems do not use long-term credentials, it is difficult to link actions to individuals using such credentials, which in turn makes it difficult for an auditor to resolve disputes. Some partially anonymous mechanisms allowing audit are presented in chapter 5.

 

Chapter 4

Surrogates

4.1 Introduction

The generation and use of surrogates is the key contribution of this dissertation. In the protocols discussed in chapters 5, 6, and 7 we replace long-term credentials with surrogates, or short-lived electronic identities, thus allowing identity to be taken out of the trust management envelope. In this chapter we discuss our key and surrogate generation mechanisms. We start by describing our key generation mechanism in section 4.2. Since our surrogate generation method depends on the difficulty of computing discrete logarithms, we next introduce the discrete logarithm problem and briefly review some approaches to its solution in section 4.3. In section 4.4 we give a general overview of the features of our surrogates, followed by a description of the method used to generate surrogates in section 4.5. In section 4.6 we introduce the assumptions on which we build our protocols, and the notation which we use in the following chapters. This is followed by our conclusions in section 4.7.

 

4.2 Generation of Diffie-Hellman Keys

A detailed description of Diffie-Hellman (DH) key systems can be found in [29, 44]; here we give a brief description of the generation of DH keys. The user selects a generator g mod P (i.e. calculating consecutive powers of g generates all the elements of the multiplicative group modulo P) and selects a secret s ∈ ZP*, where ZP* is the multiplicative group [51] modulo a large prime P (i.e. s is an element between 1 and P − 1). The user constructs his/her public key X from the secret s as

X = g^s mod P

It is hard for an adversary to calculate s from X and P, as this is the discrete logarithm problem (see section 4.3), which is considered to be intractable. The user proves ownership of X by proving knowledge of s without revealing s (see section 4.5.1).
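With the textbook group P = 23, g = 5 (real deployments use primes of at least 2048 bits), the construction above can be checked directly, including the familiar shared-secret property of DH keys:

```python
# Textbook-sized parameters; 5 is a generator of the group mod 23.
P = 23
g = 5

s_carol = 6                    # Carol's secret s
X_carol = pow(g, s_carol, P)   # her public key X = g^s mod P = 8

s_bob = 15
X_bob = pow(g, s_bob, P)       # Bob's public key = 19

# Each party raises the other's public key to its own secret;
# both arrive at the same value g^(s_carol * s_bob) mod P.
shared_carol = pow(X_bob, s_carol, P)
shared_bob = pow(X_carol, s_bob, P)
assert shared_carol == shared_bob == 2
```

Recovering `s_carol` from `X_carol`, `g` and `P` is exactly the discrete logarithm problem of section 4.3; at this toy size it is trivial, at realistic sizes it is believed intractable.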

 

4.3 The Discrete Logarithm problem

The discrete logarithm problem can be described as follows. Let ZP* be the multiplicative group modulo a prime P. The discrete logarithm problem for the group may be stated as:

Given g ∈ ZP* and a ∈ ZP*, find an integer x such that g^x = a.

Such an integer x is the discrete logarithm, or index, of a to the base g, and is denoted x = ind_g a.

Diffie and Hellman used the discrete logarithm problem in cryptography as a source of a one-way function in [29]. A one-way function f is one for which, given a value y = f(x), it is difficult to compute x, at least for most values of y [51]. In their paper Diffie and Hellman proposed as a natural candidate a function based on exponentiation modulo a prime: let P be a large prime and g a generator of ZP*; then f(x) = g^x mod P, which is relatively easy to calculate using binary expansion [51]. On the other hand, inverting f clearly requires an algorithm for computing discrete logarithms in ZP*, which is thought to be an intractable problem. One of the early users of one-way functions in cryptography was Roger Needham [67], who used them to store passwords securely in multi-user operating systems. Most operating systems use passwords for authentication, and if the passwords are stored in a file then the file needs to be heavily protected by the operating system; Needham's approach was instead to store the values of a one-way function applied to the passwords. When a user typed in his/her password, the one-way function was applied to it and the resultant value was compared to the stored value.
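Needham's password scheme survives in modern systems. As a sketch, with SHA-256 standing in for the one-way function (real systems also add a per-user salt and a deliberately slow hash):

```python
import hashlib

def f(password):
    # One-way function: easy to compute, hard to invert.
    return hashlib.sha256(password.encode()).hexdigest()

# The stored table holds f(password), never the password itself.
stored = {"carol": f("correct horse battery staple")}

def check(user, attempt):
    # Apply f to the typed password and compare with the stored value.
    return stored.get(user) == f(attempt)
```

Even an attacker who reads the table learns only the hashed values, and recovering a password from them requires inverting `f`.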

 

4.3.1 Complexity of Discrete Logarithm

The security of cryptosystems against computational attacks depends on what computations are feasible. The field of mathematics that deals with this is known as computational complexity theory, and a good introduction can be found in a paper by Shafi Goldwasser [51]. Computational complexity theory classifies computational problems according to their difficulty. If one problem can be solved in polynomial time given a solution to another, then we have a polynomial reduction between the two computational problems. Such polynomial reductions can provide information about the intrinsic difficulty of a problem. For example, the order problem for a finite group generated by g mod P is to find the least n > 0 such that g^n = 1 mod P. It is shown in [51] that an algorithm to compute the discrete logarithm can be used to solve the order problem in polynomial time. Hence it is at least as difficult to compute discrete logarithms to a base as it is to compute the order of the base in the group [51].

There are three main classes of algorithms for solving the discrete logarithm problem:

  1. The first class applies to general groups: build a table of all n powers of g and then look up the group element a to find its discrete logarithm. This takes n operations to compute the table and O(n) space to store it [51]. Daniel Shanks improved these bounds to a running time of O(√n log n) and a space requirement of O(√n) group elements [41]. Pollard proposed an algorithm [50] with the same running time but a very small space requirement.
  2. The second class of algorithms applies to groups whose order has no large prime factor. A positive integer is called smooth if it has no large prime factors. S. Pohlig and M. Hellman proposed an algorithm to calculate discrete logarithms where the order of g has no large prime factor [49]. If n = ∏ pᵢ^cᵢ is the prime factorization of the order n of g, the running time of the algorithm is O(Σᵢ cᵢ (log n + √pᵢ)) group operations, where the pᵢ are the prime factors of n and the cᵢ their multiplicities. Once the logarithm is known modulo each prime power, the Chinese Remainder Theorem can be used to derive the final value of ind_g a.

  3. The third class of algorithms is probabilistic rather than deterministic and only works for groups endowed with a special structure [51]. The approaches in this category are commonly known as index calculus methods.
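Shanks' baby-step giant-step method from the first class can be sketched in a few lines; the example below recovers x = 6 from 5^6 mod 23 = 8 (the `pow(g, -m, p)` modular inverse requires Python 3.8+):

```python
from math import isqrt

def bsgs(g, a, p):
    """Find x with g^x = a (mod p), assuming g generates the group mod p,
    in about sqrt(n) group operations and storage, where n = p - 1."""
    m = isqrt(p - 1) + 1
    # Baby steps: table of g^j for j = 0 .. m-1.
    table = {pow(g, j, p): j for j in range(m)}
    # Giant steps: a * (g^-m)^i for i = 0 .. m-1; a match gives
    # x = i*m + j.
    g_inv_m = pow(g, -m, p)
    gamma = a
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * g_inv_m % p
    return None   # a is not a power of g

assert bsgs(5, 8, 23) == 6   # since 5^6 mod 23 = 8
```

At this toy size the table is tiny; for a 2048-bit prime the O(√n) space and time are utterly infeasible, which is what the surrogate constructions of this dissertation rely on.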

 

4.4 Surrogates

Our surrogates are transient numbers generated from a parent number (such as a bank account number, credit card number, or social security number) by a mathematical function similar to that used by David Chaum in the generation of group signatures [20]. A surrogate has two parts: a publicly transmitted part, and the corresponding private part from which the public part is generated. The private part is known only to the user. In this dissertation we refer to the public part as the surrogate itself, and to the corresponding private part as the private value. When we use Diffie-Hellman keys, the public key is known as the parent public key, while the secret value from which the public key is generated by modular exponentiation is known as the secret or secret key. In this chapter and the rest of this dissertation we use Diffie-Hellman keys unless otherwise explicitly mentioned (see sections 5.3 and 5.4); thus whenever we use the term parent public key we mean a Diffie-Hellman key.


Although our mechanism for producing surrogates is novel, the concrete surrogates it produces have similar properties to those required by the abstract notion of surrogate postulated in [25]. Surrogates differ from other anonymous or blinded credentials [19, 12, 18, 43, 15, 63] in that anonymous credentials need to be certified by some authority [15, 63], and can either be reused [15] or require the user to get a new credential issued after every single transaction [63]. Our surrogates are for single use, do not need to be certified, and can be generated by the user; we use them for a single transaction and then discard them. Other anonymous credentials [15, 43] can be verified by the acceptor if the acceptor knows the public key of the issuer. The surrogates defined here, by contrast, are used by proving that the presenter of the surrogate has knowledge of the private part that was used to generate it, in a way which does not reveal the private part.

The surrogates in our protocol can be independently generated by the user and by the issuing authority, which is a partially trusted third party in the sense of section 2.2.1. However, the issuing authority cannot masquerade as the user. Other parties (verifiers) can verify surrogates but cannot generate them; note that the verifiers are not in general trusted by the user. The surrogates for a particular user are generated from the parent public key of that user, and the private value corresponding to a surrogate is generated from the secret that was used to generate the parent public key. We generate DH keys for different example scenarios and then generate surrogates from the Diffie-Hellman keys.

The requirements for surrogates are:

  1. Verifiability – The verifier, who might be the owner of the resource or an access granting service, should be able to unambiguously verify that the user presenting a surrogate is the legitimate owner of the surrogate.
  2. Un-correlatability – It should be hard for an untrusted adversary to link actions to individuals retrospectively, even if the adversary manages to obtain the surrogate used for the transaction along with the transaction details (for example, if the adversary is a legitimate verifier). Someone observing the network should not be able to gain any information about the nature of the communication or the communicating parties.
  3. Misuse of surrogates – Even if an adversary manages to capture and retain a surrogate used for a particular transaction, it should still be hard for him/her to generate and/or use future surrogates.
  4. Protection from (partially) trusted third party – No adversary should be able to pretend to be a legitimate owner of a surrogate. Although a third party generates and issues the information users need to generate and use their surrogates, this third party should not be able to masquerade as the legitimate owner of a surrogate.
  5. Audit – An unbiased partially trusted auditor should be able to link actions verifiably and unforgeably to individuals, but only with the explicit knowledge and consent of the legitimate owner of the surrogate.

We allow sharing of surrogates (in particular for surrogate delegation), but learning them will not help an attacker in any way. Keys and surrogates used by principals in our protocols depend on the difficulty of computing discrete logarithms. We show how our surrogates satisfy these requirements when we describe various example scenarios in the following chapters.

 

4.5 Generation of Surrogates
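Before the construction itself, a purely hypothetical sketch may help fix ideas. The code below is one Chaum-style construction consistent with the properties listed in section 4.4 and is not necessarily the scheme defined in this section: the issuer supplies a per-transaction number r; both issuer and user can compute the surrogate X^r mod P from the public parent key, but only the user, who knows s, can compute the private value s·r mod (P − 1):

```python
# Toy group from section 4.2; all names and values are illustrative.
P, g = 23, 5
s = 6                        # Carol's long-term secret
X = pow(g, s, P)             # her parent public key, g^s mod P

r = 3                        # per-transaction number issued by the authority

# Both Carol and the issuer can compute the surrogate, since X and r
# suffice:
surrogate = pow(X, r, P)     # X^r = g^(s*r) mod P

# Only Carol can compute the private value, because it needs s:
private_value = (s * r) % (P - 1)

# A verifier checks that the presenter knows the private value behind
# the surrogate (in a real protocol via a proof of knowledge, not by
# revealing the value as done here for illustration).
assert pow(g, private_value, P) == surrogate
```

Note how this division of knowledge gives the issuer the ability to generate and recognise surrogates (supporting audit) without the ability to masquerade as their owner.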

4.6 Notation and Assumptions

In this section we outline the assumptions on which we build our new protocols. The assumptions and notation we outline in this section apply to all the protocols we present in chapters 5,6 and 7 unless otherwise explicitly mentioned.

 

4.6.1 Assumptions

For our new protocols we assume that principals have a public key generated from a secret key using the conventional Diffie-Hellman mechanism as described in section 4.2. The association between a principal and its public key is known by a partially trusted third party, e.g. the role server, and can be implemented using some offline certification authority. Entities can verify the association between principals and their keys by fetching certificates, and this verification is done with the explicit knowledge and consent of the principal. Ownership of surrogates is verified and established by proving knowledge of the corresponding private value using the signing mechanism discussed in section 4.5.1.

Our protocols depend on certain properties of the underlying communication channels over which they operate. Since we use our protocols for secure web access and for remote access to distributed objects, and since they can also potentially be used for commercial purposes, we choose to use Mixminion (see section 2.3.4), a secure anonymous communication channel. Mixminion uses TLS [61] over TCP for link encryption between remailers and uses ephemeral keys to ensure forward anonymity for each message. Mixminion also supports replies in the form of SURBs, and the replies cannot be correlated with the initial message; this enables two-way communication between the communicating partners.

We also assume the existence of a secure authenticated communication channel such as SSL/TLS. SSL/TLS allows two-way authentication, preserves data integrity and confidentiality, and is widely used. SSL/TLS can also be used for the scenarios we describe in this dissertation, e.g. remote web access and secure submission of personal information.

Based on the discussion of the complexity of the discrete logarithm problem in section 4.3, for our protocols we choose a cyclic group for which computing the order is thought to be very difficult. If (P − 1) has only small prime factors then computing discrete logarithms is easy [49], so for our protocols P is chosen such that (P − 1) has one large prime factor. This can be done by choosing a large prime q and selecting P as the smallest prime congruent to 1 mod q, or by choosing P of the form 2q + 1 where P and q [23] are both prime.
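The second method, choosing P = 2q + 1 with both P and q prime (a so-called safe prime), can be sketched as a simple search. The trial-division primality test below is adequate only for the toy range demonstrated; real parameter generation uses probabilistic tests such as Miller-Rabin on numbers of 2048+ bits:

```python
def is_prime(n):
    # Trial division; fine for the small demonstration range used here.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def safe_prime_above(start):
    """Smallest P >= start with P = 2q + 1 and both P and q prime,
    so that (P - 1) has the large prime factor q."""
    P = start | 1                              # make the candidate odd
    while not (is_prime(P) and is_prime((P - 1) // 2)):
        P += 2
    return P

P = safe_prime_above(1000)   # 1019 = 2 * 509 + 1, both prime
q = (P - 1) // 2
```

By construction (P − 1) = 2q, so its only prime factors are 2 and the large prime q, defeating the Pohlig-Hellman attack of section 4.3.1.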

 

4.6.2 Notation

  • P denotes a large prime with the properties mentioned in section 4.6.1. g is a generator modulo P (see section 4.3). Both P and g are public.

We introduce named individuals and examples in the subsequent chapters, since this aids understanding. We use a subscript on the keys to denote the user, as illustrated next.

  • Xc represents the Diffie-Hellman public key of Carol; sc represents the corresponding secret key of Carol. Xb represents the Diffie-Hellman public key of Bob; sb represents the corresponding secret key of Bob.
  • Ki+ represents the surrogate used for the ith transaction; Ki represents the private value corresponding to the surrogate Ki+. Surrogates are always generated from the corresponding user's Diffie-Hellman public key unless otherwise explicitly stated.
  • PKc represents the RSA public key of Carol. SKc represents the corresponding RSA private key (see section 2.5).
  • SIG(T) denotes a signature on message T using the mechanism described in section 4.5.1. Such signatures are always generated with the private value Ki corresponding to the surrogate Ki+ used for the ith transaction. RSIG(T) denotes the ring signature (see section 2.6) on message T; such signatures are always generated by Carol using SKc.

4.7 Conclusion

Surrogates provide a happy middle ground between absolute privacy and zero privacy. Since one of the awkward aspects of transactions is that we have no real control over the actions of the other party, in order to achieve certainty of privacy we use unlinkability as our weapon. Surrogates provide the ability to control both the availability and the linkability of transactions.

In the following chapters we demonstrate how to use our surrogates to address the problem of trust and privacy in various electronic services. In chapter 5 we integrate our surrogate-based authentication mechanism with some prominent role-based authorisation models which were discussed in section 3.2.3, and also show how the powerful concept of activation of roles using prerequisites, discussed in section 3.2.3, can be achieved using surrogates. Delegation of surrogates is discussed in chapter 6, and in chapter 7 we show how surrogates can be integrated with payment mechanisms.

 

Chapter 5

Basic Scenarios

5.1 Introduction

We have already discussed some role-based access control models [6, 7, 35] where long-term credentials such as role certificates or key certificates are used for making authorization decisions. Applications such as [52] present long-term credentials to a trust management engine [9] to check whether or not requested actions conform to local policies.

In this chapter we describe some example scenarios where long-term credentials are currently used for making authorization decisions. The mechanisms described in the following sections show alternative ways of making authorization decisions using the transient identities, or surrogates, described in section 4.4. Our mechanisms can coexist with the other, non-anonymous, authorization mechanisms in use at present. The surrogates can be used to check compliance with local policies by a trust management engine: an issuing authority, e.g. the role server in a role-based access mechanism, can communicate policies expressed in terms of roles pertaining to a surrogate or group of surrogates to a trust management engine situated at the point where the surrogates will be used, and the trust management engine can then make authorization decisions on requests from the surrogate owners. Our approach adds an extra layer of anonymity to trust management engines which can coexist with current trust management architectures.
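As an illustrative sketch (all surrogate values, role names, and permissions below are hypothetical), a trust management engine of this kind keys its policy decisions on surrogates rather than on stable identities:

```python
# Bindings communicated by the issuing authority (e.g. the role server):
# the engine learns which role each surrogate may activate, but never
# which principal owns which surrogate.
roles_by_surrogate = {
    "surrogate-417": "subscriber",
    "surrogate-902": "subscriber",
}

# Local policy: permissions granted to each role.
permissions = {"subscriber": {"read-articles"}}

def authorize(surrogate, action):
    # Look up the role bound to the presented surrogate, then check
    # whether the local policy grants the requested action to that role.
    role = roles_by_surrogate.get(surrogate)
    return role is not None and action in permissions.get(role, set())
```

The engine can thus answer "may this request proceed?" while the mapping from surrogates back to principals remains with the issuing authority alone.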

The first example scenario, discussing auditable anonymous role activation in a role-based access control system, is described in section 5.2. Section 5.3 gives an alternative mechanism for anonymous activation of roles in the case where auditability is not required. This is followed in section 5.4 by a mechanism for anonymous activation of roles with prerequisites which allows auditing. We conclude in section 5.5.

 

5.2 Scenario 1: Role Based Access Control with Fixed Roles

The role-based authorisation model we adhere to in this example scenario was discussed in section 3.2.3. Suppose Carol is a subscriber to an electronic newspaper. The newspaper uses a role-based authorization model in which Carol is assigned the role of subscriber; the process is the same for every subscriber. The subscription department (SA) prepares and sends the information Carol needs to generate her surrogates. At regular intervals the subscription department also sends to the role server (RS) the information needed to authenticate Carol and the other subscribers. SA acts as the partially trusted third party (see section 2.2.1). Every time Carol wishes to read the newspaper she first authenticates herself to the local web server (WS) using her surrogate for the current transaction, and upon successful authentication her role is activated. WS consults a local trust management engine before activating the relevant permissions for Carol. SA can link Carol's transactions but WS cannot; for every transaction Carol uses a different surrogate. The requirements which were identified in section 4.4 are as follows:

  1. Verifiability
  2. Un-correlatability
  3. Misuse of surrogates
  4. Protection from (partially) trusted third party (SA here)
  5. Audit

The protocol has a preparation phase followed by a transaction phase.

the adversary to solve equation 7.12, which is widely thought (see section 4.3) to be an intractable problem. The subscription authority, SA, initially issues the numbers the subscribers use to generate their surrogates in step 2 of the preparation phase, and also sends WS the surrogates subscribers will use to activate their accounts during the preparation phase. SA can generate surrogates belonging to subscribers using equations 4.6 and 4.2, but SA cannot masquerade as a subscriber, as it cannot generate the secret corresponding to a surrogate using equation 4.1 since sc is secret. Thus, from the surrogate a subscriber uses, SA can resolve disputes retrospectively. SA would divulge this link only in circumstances specified under legal authority, such as contract, legislation, search warrant or court order.

 

5.3 Scenario 2: Activation of Roles Without Auditability

In example scenario 1 we described a mechanism for anonymous activation of roles using surrogates, where an auditor can determine retrospectively who activated a particular role. In this section we present a mechanism for completely anonymous activation of roles, where an auditor cannot determine who activated a particular role. In our approach we view principals as members of a group, which may be a subset of all the members of a particular role. To access a resource the requestor has to prove that he/she is a member of a group that is allowed access to the requested resource.

We discussed distributed shared objects (DSOs) in section 3.3. The protocol we present here can be used to allow users to authenticate anonymously to a Globe replica. In this example a DSO models the electronic newspaper. In the original Globe proposal (see section 3.3.1) the DSO owner would create a role of subscriber and issue certificates to paid subscribers; the paid subscribers would in turn authenticate themselves to the replica using their certificates and access the newspaper.

A customer, Carol, registers with the subscription authority (SA) as a subscriber; as in the previous scenario, SA acts as the partially trusted third party (see section 2.2.1). This registration requires Carol to prove that she owns the secret key corresponding to her public key. SA informs the DSO owner about its new subscriber Carol, along with the other new subscribers. Because we are not using surrogates in this protocol, its requirements differ from those we identified in section 4.4.

The requirements for this protocol are:

  1. Verifiability – The verifier, who might be the owner of the resource or an access granting service, should be able to unambiguously verify that the user is a member of the group.
  2. Un-correlatability – It should be hard for an untrusted adversary to link actions to individuals retrospectively, even if the adversary manages to obtain the full details of the transaction (for example, if the adversary is a legitimate verifier). Someone observing the network should not be able to gain any information about the nature of the communication or the communicating parties.
  3. Unforgeability – It should be hard for an adversary to masquerade as a legitimate user of the system.

The protocol has a preparation phase followed by a transaction phase.

5.4 Scenario 3: Anonymous Activation of Roles with Prerequisites

Here we illustrate the use of a two-level authentication mechanism, using both ring signatures (see section 2.6) and surrogates. A two-level authentication mechanism supports the concept of active security [6], according to which access control decisions depend on context which is monitored; e.g. the assignment of users to roles is handled by role activation rules which require users to activate roles on possession of valid prerequisites. If there is a change in the context in which a role was initially activated then the role is revoked. Such an approach helps to organize access control systems into a hierarchical structure.

We have already described OASIS in section 3.2.3. The mechanism we present in this section can be used for anonymous activation of roles using prerequisites in the OASIS role-based authorisation system. In our example the prerequisite for activation of a role is another role. The example emulates access to a university's Learning and Information Services (LIS) by students. On registration all students are assigned the role student, and they are then assigned further roles based on their program of study; research students are subsequently assigned the role research-student. The role research-student can only be activated by a student who has already activated the role student. The LIS consults a trust management engine such as KeyNote [9] before granting specific permissions to a user.

Here students first authenticate as members of the group of students to activate their prerequisite role student, and then use their surrogates to activate specific roles according to their program of study. The LIS is accessed by staff as well as students, and the procedure for activating roles is the same for everyone. All users are members of a group depending on their affiliation with the university, say student, staff, etc. In order to have two-level authentication we use RSA keys along with DH keys.

The transaction flow can be outlined as:

  1. Carol is a research student and registers with the university Learning and Information Services using her public key. She is assigned the role of research-student by the role server (RS); RS acts as the partially trusted third party (see section 2.2.1). This registration requires Carol to prove that she owns the corresponding secret key. All students of the university are assigned the role student when they enroll.
  2. Carol first activates her student role before she accesses the database of print and electronic journals provided by the Learning and Information Services.
  3. To activate her student role Carol proves that she is a student without revealing her identity: she authenticates as a member of the group of students using a ring signature.
  4. She activates her research-student role by presenting her surrogates to the LIS. The LIS server consults the local trust management engine to ascertain whether or not Carol is allowed to carry out the actions she requested. For every transaction she uses a different surrogate pair.
  5. In case of a dispute Carol's transactions can be linked by the registration authority, which helps in auditing; but an adversary at the point of use of the surrogates cannot link surrogates belonging to Carol.

The requirements which were identified in section 4.4 are as follows:

  1. Verifiability
  2. Un-correlatability
  3. Misuse of surrogates
  4. Protection from (partially) trusted third party (RS here)
  5. Audit

The protocol is again described as a preparation phase followed by a transaction phase.

versary to calculate the equation 7.12, which is widely thought (see section 4.3) to be an intractable problem.

The role server, RS, initially issues the numbers the students use to generate their surrogates in step 2, and also sends the LIS the surrogates students will use to activate their accounts. RS can generate surrogates belonging to students using equations 4.6 and 4.2, but RS cannot masquerade as a student, as it cannot generate the secret corresponding to a surrogate using equation 4.1 since sc is secret. Thus, from the surrogate a student uses, RS can resolve disputes retrospectively. RS would divulge this link only in circumstances specified under legal authority, such as contract, legislation, search warrant or court order.

 

5.5 Conclusions

RSA keys used in conjunction with DH keys allow us to have two-level authentication. In DH key systems modulo a prime it is difficult to keep multiplicative inverses of public keys secret: when both the public key Xc and the modulus P are public, it is easy for anyone to calculate the inverse of Xc modulo P. RSA key systems modulo a composite, by contrast, can have public keys whose inverses are kept secret, because to calculate the inverse of a public key an adversary has to factorize a large number, which is thought to be an intractable problem. A two-level authentication mechanism enables anonymous activation of roles with prerequisites, as described in section 5.4. In the two-level authentication mechanism described in this dissertation, users first authenticate as members of a group using ring signatures and then authenticate using surrogates. The two-level authentication mechanism in scenario 5.4 allows us to have a two-level audit mechanism with different trust assumptions for the internal and external auditor: the external auditor can attribute actions to a group, whereas only the internal auditor can tie actions to individual DH public keys. We develop this notion further in the context of delegation in chapter 6.

The mechanisms described above show that trust management and anonymity systems can coexist. Applications can answer the question “Shall we carry out this potentially dangerous action?” without having much information about the initiator of the action in question. This opens up the possibility of an extra anonymity layer in trust management systems, e.g. Keynote [9], which can coexist with the non-anonymous service. The surrogates of a user can be presented to the trust management engine along with the policies and the request for the action, and policies can thus be enforced using surrogates.

The protocol for example scenario 2 does not make use of surrogates, and thus differs from the other protocols: users prove membership of groups using ring signatures only. The limitation of such a scheme is that, by preventing an auditor from correlating actions belonging to a particular user, it does not facilitate fair dispute resolution. Protocols like those of scenarios 1 and 3 allow an auditor to determine retrospectively who used a particular surrogate pair.

It is worth noting that identity resolution is local in our system. Only the RS can link a surrogate to its corresponding public key; an external auditor can do so only with the help of the RS. We believe this is significant, as users do not need to trust the honesty and competence of an unknown external auditor; users in our system trust only their local entities (e.g. the RS here), whom they know. However, the external auditor can prove to an arbitrary third party that a member of the group activated his/her role.

The protocols we have presented here preserve the privacy of the user. For the protocol using the two-level authentication mechanism in scenario 3 we use a ring signature, which is purely signer-ambiguous: an adversary cannot determine the signer with probability greater than 1/r. Linking surrogates with a public key, in any of the protocols, requires calculating a discrete logarithm, which is an intractable problem. It is difficult for an outsider to masquerade as a legitimate member of a group in scenario 3, as that would involve factorizing large numbers, which is thought to be an intractable problem.

 

Chapter 6

Delegation

6.1 Introduction

Traditionally, correlation of transaction records and identity theft have been the primary concerns for the designers of anonymity systems [63,15,64]. Such systems do not allow principals to share their credentials, which we believe prevents principals from delegating them. However, it is often a legitimate real-world requirement that users are able to delegate their credentials in an auditable manner. The approach we take here does not greatly restrict the choice of delegation semantics which can be adopted, although for exposition we adhere to Crispo’s delegation model [25], which we reviewed in section 3.4.

As in the mechanisms discussed in chapter 5, we assume that principals have a public key generated from a secret using conventional Diffie-Hellman as discussed in section 4.2, and that the association between a principal and its public key can be verified by a partially trusted third party. As in chapter 5, we also assume the existence of a confidential anonymous communication channel and a secure authenticated communication channel for our protocols. Users in the new protocols presented in this chapter generate surrogates with the properties discussed in section 4.4, using the method discussed in section 4.5. Using the approach we describe in this chapter, users can delegate their surrogates without the delegatee being able to determine the secret used to generate a surrogate or to reuse the surrogates of the delegator.

We start in section 6.2 with a mechanism allowing delegation where it is difficult for an auditor to link actions to individuals. In section 6.3 we present a mechanism for delegation where we use ring signatures and a two level authentication mechanism similar to scenario 3 described in section 5.4. The delegation protocol we describe in section 6.4 does not depend on ring signatures and has stronger properties than the one presented in section 6.3. We conclude in section 6.5.

 

6.2 Scenario 4: Fully Anonymous Delegation – 1

The role-based authorisation model we use here has been discussed in section 3.2.3. The transaction flow for this protocol is similar to the one described in section 5.2, the difference being that here it is hard for an auditor to determine retrospectively who used a particular surrogate pair. This protocol uses a DH key system as discussed in section 4.2, and users authenticate using only their surrogates. Though it will be hard to tell whether it was the delegator or the delegatee who used a particular surrogate pair, the auditor can determine the owner of the pair. The requirements which were identified in section 4.4 are as follows:

  1. Verifiability
  2. Un-correlatability
  3. Misuse of surrogates
  4. Protection from (partially) trusted third party (RS here)
  5. Audit

The protocol has a preparation phase followed by a transaction phase.

The role server (RS) initially issues the numbers the subscribers use to generate their surrogates in step 2 of the preparation phase, and also sends BB the surrogates subscribers will use to activate their accounts. RS can generate surrogates belonging to subscribers using equations 4.6 and 4.2, but cannot masquerade as a subscriber: generating the secret corresponding to a surrogate using equation 4.1 requires sc, which is secret. Thus, from the surrogate a subscriber uses, RS can resolve disputes retrospectively. RS would divulge this link only in circumstances specified under legal authority, such as contract, legislation, search warrant or court order.
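The division of power between RS and subscriber can be sketched as follows. The concrete forms below are our illustrative assumptions about equations 4.1, 4.2 and 4.6 (a surrogate of the form g^(sc·ti) mod P), chosen because they reproduce the properties the text states: RS can compute a surrogate from the public key alone, but not its secret.

```python
# Minimal sketch of the surrogate scheme; all concrete forms here are
# assumptions for illustration, not the dissertation's exact equations.
import secrets

P = 2**127 - 1           # assumed prime modulus (illustrative size)
g = 5                    # assumed public generator

sc = secrets.randbelow(P - 2) + 1   # subscriber's long-term secret
Xc = pow(g, sc, P)                  # long-term public key (known to RS)

# RS issues a fresh number ti for the i-th surrogate.
ti = secrets.randbelow(P - 2) + 1

# RS can compute the surrogate from the PUBLIC key alone (cf. eq. 4.6/4.2):
surrogate_rs = pow(Xc, ti, P)

# The subscriber computes the same surrogate, plus its SECRET, which
# needs sc and therefore cannot be formed by RS (cf. eq. 4.1):
surrogate_secret = (sc * ti) % (P - 1)
surrogate_user = pow(g, surrogate_secret, P)

assert surrogate_rs == surrogate_user   # RS can link, and hence audit...
# ...but RS cannot masquerade: deriving surrogate_secret from Xc and ti
# alone would require the discrete logarithm sc.
```

This is why the RS can resolve disputes retrospectively while remaining unable to act as the subscriber.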

The drawback of this scheme is that although the RS can determine the owner of a surrogate, it cannot uniquely identify who used it. We address this issue in section 6.4.

 

6.3 Scenario 5: Auditable Anonymous Delegation – 2

Let us assume that users in an organization are members of various groups: managers belong to a group labelled managers, whereas secretaries belong to a group labelled secretaries. Carol is a manager, and she uses her surrogates to authenticate and to check or submit announcements on the company bulletin board once every day. The organization uses a role-based authorization model in which Carol is assigned the role of manager. When she is out of her office she can ask her secretary to log in and check or post notices.

We propose a two-layered authentication mechanism in which a user first authenticates as a member of a group, say managers or secretaries, and then uses his/her surrogates to check or post notices on the bulletin board. We briefly introduced the features of a two-level authentication mechanism in section 5.4. We assume that Carol’s secretary also generates her own public/private RSA key pair. Users authenticate as members of a group using ring signatures as described in section 2.6: for example, Carol’s secretary authenticates as a member of the group secretaries using the public keys of the other secretaries together with her own key pair.

The second layer, authentication using surrogates, helps the auditor determine the owner of a particular surrogate, while the first layer, group authentication, helps the auditor determine who used a particular surrogate pair. For example, if Carol herself is using the bulletin board she first authenticates as a member of the group managers and then uses her surrogate pairs. From the group Carol used to authenticate, the auditor can tell that it was a manager who accessed the bulletin board, and from the surrogates that they belong to Carol. If Carol’s secretary is using the bulletin board with surrogates delegated to her by Carol, then she first authenticates as a member of the group secretaries and then uses the surrogates delegated by Carol. Here, from the group used to authenticate, the auditor can tell that it was a secretary who accessed the bulletin board, and from the surrogates, whose secretary it was. No user can authenticate as a member of a different group, and this facilitates fair dispute resolution.

  1. Carol is assigned the role of manager by the role server (RS). RS acts as a partially trusted third party and performs the same functions as RS of scenario 3 discussed in section 5.4.
  2. RS prepares and sends the information Carol needs to generate her surrogates.
  3. Carol generates the required number of surrogate pairs which depends on the number of days she will be out of office. We assume that her secretary will log in to the bulletin board once every day thus requiring one surrogate pair each day.
  4. Carol passes the surrogates to her secretary. This transfer of credentials between Carol and her secretary is done via a secure authenticated communication channel thus enabling Carol to prove later on that it was her secretary who used the surrogates in cases of dispute.
  5. While using the bulletin board Carol’s secretary first authenticates as a member of the group secretaries using ring signatures with her RSA key pair.
  6. Carol’s secretary can only use the surrogates; she learns nothing about Carol’s secret key sc in the process.
  7. The bulletin board maintains an audit record containing the surrogates and the name of the group the user belongs to.
  8. The next time Carol uses the bulletin board she authenticates as before. Anyone who learns the value of a surrogate cannot do any harm to Carol.
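The two-level audit split in the steps above can be sketched as follows; the entity names and log layout are our illustrative assumptions:

```python
# Sketch of the two-level audit: the bulletin board logs
# (surrogate, group); an external auditor sees only the group, while
# the RS alone holds the mapping from surrogate to owner.

audit_log = [
    {"surrogate": 0x5A11, "group": "managers"},
    {"surrogate": 0x7E22, "group": "secretaries"},
]

# Held only by the RS, which issued the surrogate-generation numbers.
# Both surrogates are Carol's; the second was delegated to her secretary.
rs_private_map = {0x5A11: "Carol", 0x7E22: "Carol"}

def external_audit(entry):
    # An external auditor can attribute an action only to a group.
    return entry["group"]

def internal_audit(entry, rs_map):
    # Resolving the individual owner needs the RS's private mapping.
    return rs_map[entry["surrogate"]]

assert external_audit(audit_log[1]) == "secretaries"
assert internal_audit(audit_log[1], rs_private_map) == "Carol"
# i.e. a secretary acted, using surrogates owned by Carol.
```

The external auditor's view ("a secretary used Carol's surrogates") matches the dispute-resolution story in the text without ever identifying the secretary.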

The requirements which were identified in section 4.4 are as follows:

  1. Verifiability
  2. Un-correlatability
  3. Misuse of surrogates
  4. Protection from (partially) trusted third party (RS here)
  5. Audit

The protocol is again described as a preparation phase followed by a transaction phase.

vulge this link only in circumstances specified under legal authority, such as contract, legislation, search warrant or court order. The drawback of this scheme is that the secretary can further delegate Carol’s surrogates without the explicit knowledge or consent of Carol. We address this issue in the following protocol.

 

6.4 Scenario 6: Auditable Anonymous Delegation – 3

There are several commercial websites where users can log in and watch movies by paying a monthly or yearly membership fee. Such a website requires a user to register with a credit/debit card number to become a member. The aim of registration is to produce an identification token that binds one of the user’s conventional identities (the credit card) to a bit pattern (the login name) which uniquely identifies the user in the computer system [22]. For as long as a member continues his/her subscription, the login name remains the same. Every time a member wishes to watch a movie they authenticate using the fixed login name and password, and as long as the account is valid and active there is no limit to the number of movies a member can watch.

However, it is a legitimate real-world requirement that adult members be able to let their children (under 18) watch a movie using the parents’ credentials. We refer to the principal that delegates as the delegator, and the principal that acts using the credentials of the delegator as the delegatee. The problem in this scenario is that once a child has learnt his parents’ fixed login name and password, he can reuse them at a later date without their explicit knowledge or consent. Using the following protocol, users can share their credentials in such a way that it is difficult for the delegatee to reuse the credentials of the delegator. This protocol is for authorization, not for payment; we discuss the implications of our protocols for the design of payment mechanisms in chapter 7.

The transaction flow is as follows:

  1. Carol registers with the bank with her public key.
  2. Payment is handled between the bank and the merchant. The bank prepares and sends the information Carol needs to generate her surrogates.
  3. For the ith transaction Carol prepares her ith surrogate and sends it to Bob in such a way that only Bob can use it.

The requirements which were identified in section 4.4 are as follows:

  1. Verifiability
  2. Un-correlatability
  3. Misuse of surrogates
  4. Protection from (partially) trusted third party (RS here)
  5. Audit

else Bob can frame Carol or further delegate Carol’s surrogate. If a signed blinded public key can be verified using Ki+, then an auditor can be sure that Carol signed it, as Ki is known only to Carol. Carol cannot masquerade as Bob, since sb is secret and known only to Bob; Bob cannot masquerade as Carol, as Ki is secret and known only to Carol.

Thus it is difficult for Carol to frame Bob or for Bob to frame Carol. Although an auditor can link actions to principals, it cannot forge audit records, as it cannot generate Carol’s or Bob’s signature in steps 4 or 5 respectively.

In the protocol described in this section, identity resolution is local; neither the bank nor the merchant needs to know Bob’s identity. There can also be two different auditors: one links the surrogate back to Carol, while the other can prove that it was Bob who used it. This is significant because all trust is now local, and Bob does not need to trust the bank or the merchant with personal information in order to use the merchant’s service. This has implications for the design of role-based authorisation systems: using mechanisms like ours, users can activate roles across domains without revealing personal information, and auditors can still link actions back to the original user. An external auditor can link actions back to the originating domain, but linking them to the individual user requires the co-operation of an auditor trusted locally. Thus users have control over their personal information without compromising the goals of audit and authorisation.

 

6.5 Conclusions

Contrary to the previous approaches mentioned in section 2.4, we show here that one can transfer credentials in the anonymous world without revealing the secret from which a credential was generated. Transferability in turn allows delegation, and using our approaches we can have both auditable and non-auditable transfer of credentials in the anonymous world. In the protocols presented in sections 6.2 and 6.3, although an auditor can determine the owner of a surrogate retrospectively, it is hard to determine who used it. Moreover, once the surrogate has been delegated, Carol can still use it herself and frame her secretary in the protocol discussed in section 6.2; nor can Carol control who uses the surrogate, as her secretary could well share it with others.

In the protocol discussed in section 6.4, Carol delegates her surrogate in such a way that only Bob can use it. If the surrogate is not delegated, Carol uses it herself by signing T with Ki. It is in Carol’s interest to delegate by correctly signing the blinded public key of Bob, else Bob can frame Carol or further delegate Carol’s surrogate. If a signed blinded public key can be verified using Ki+, then an auditor can be sure that Carol signed it, as Ki is known only to Carol. Carol cannot masquerade as Bob, since sb is secret; Bob cannot masquerade as Carol, as Ki is secret and known only to Carol. Thus it is difficult for either to frame the other. By signing the blinded public key of Bob, Carol can specify that only Bob, who knows sb, can use the surrogate, thus preventing Bob from further delegating it: to do so, Bob would need to share his secret key.
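The delegation step in section 6.4 can be sketched with a textbook RSA blind signature; the parameter names (Ki, Ki+, sb) follow the text, but the concrete construction below is our illustrative assumption, not necessarily the dissertation's exact scheme.

```python
# Sketch: Carol signs a BLINDED digest of Bob's public key with Ki,
# so only Bob (who knows sb) can use the delegated surrogate.
import secrets

# Carol's RSA pair: Ki (private) / Ki+ (public).  Toy primes for brevity.
p, q = 1000003, 1000033
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))     # Ki, known only to Carol

# Bob's DH public key Xb = g^sb mod P (sb known only to Bob).
P, g = 2**127 - 1, 5
sb = secrets.randbelow(P - 2) + 1
Xb = pow(g, sb, P)

# Bob blinds a digest of Xb before showing it to Carol.
m = Xb % N                             # toy "digest" of Bob's key
r = 12345                              # blinding factor, gcd(r, N) == 1
blinded = (m * pow(r, e, N)) % N

# Carol signs the blinded value with Ki; she never sees Xb itself.
blinded_sig = pow(blinded, d, N)

# Bob unblinds; the result is a valid signature on m under Ki+.
sig = (blinded_sig * pow(r, -1, N)) % N
assert pow(sig, e, N) == m

# An auditor verifying sig with Ki+ knows Carol delegated, while only
# Bob, who knows sb, can answer a challenge proving ownership of Xb.
```

Because the signature binds the surrogate to Bob's key, further delegation by Bob would require him to reveal sb, exactly the deterrent described above.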

For the protocols discussed in this chapter the bank can generate the surrogates using equations 4.6 and 4.2, and in cases of dispute involving a surrogate the bank can determine its owner, but cannot masquerade as the legitimate owner of Xc, as the corresponding sc is secret. Calculating sc requires an adversary to solve the equation

log_g Xc = sc mod P                                                                                                 (6.10)

which is thought to be hard. Moreover, even if the value of the exponent Ti is known, without knowledge of sc it is hard to generate the secret corresponding to a surrogate using equation 4.1. Thus the bank cannot masquerade as Carol to Bob or to Carol’s secretary.
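The asymmetry behind equation 6.10 can be demonstrated directly: one direction is a single modular exponentiation, while the other is exhaustive search. The toy parameters below are our own; real systems use moduli of 1024 bits or more, where the search is hopeless.

```python
# Computing Xc = g^sc mod P is fast; recovering sc from Xc (the
# discrete logarithm) is believed intractable at real key sizes.

P, g = 467, 2            # toy prime and generator (2 generates Z_467*)
sc = 153                 # the secret exponent
Xc = pow(g, sc, P)       # fast: one modular exponentiation

def brute_force_dlog(X, g, P):
    # Exhaustive search -- O(P) work, hopeless when P is ~1024 bits.
    for s in range(P - 1):
        if pow(g, s, P) == X:
            return s
    return None

assert brute_force_dlog(Xc, g, P) == sc   # feasible only because P is tiny
```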

In the protocol described in section 6.4, identity resolution is local: neither the bank nor the merchant needs to know Bob’s identity. There can also be different auditors, one who links the surrogate back to Carol while the other can prove that it was Bob who used it. This is significant because all trust is now local, and Bob can use the merchant’s service without trusting the bank or the merchant with personal information. This has implications for the design of role-based authorization systems: users can activate roles across domains without revealing personal information, and an auditor can still link actions back to the original user. An external auditor can link actions back to the user only with the co-operation of an auditor trusted locally. Thus users have control over their personal information without compromising the goals of audit and authorization.


 

Chapter 7

Implications and Extensions

7.1 Introduction

Although this dissertation has been primarily concerned with access control, our work has implications in other areas, such as electronic payment, price discrimination and licensing enforcement. If users access a service using their surrogates, then to pay for the service there should be ways of tying payment tokens to the corresponding surrogates for auditing and accounting purposes. Similarly, for price discrimination, if a buyer can prove using his/her surrogate which price band he/she belongs to, and that he/she is the legitimate owner of the surrogate, then price discrimination can be done with online transient identities, thus protecting offline identities. In this chapter we present some approaches which tie payment tokens to surrogates, and show a way of charging different prices to different customers using transient identities. As usual, we assume the existence of a secure anonymous channel, a secure, confidential authenticated communication channel, and prime numbers with the same properties as in the previous chapter.

We describe how we generate payment tokens in section 7.2. In section 7.3 we show a way of charging different prices to different users, followed in section 7.4 by a protocol which ties payment tokens to our surrogates. In section 7.6 we present a mechanism which allows sharing of payment tokens between principals, followed by conclusions in section 7.7. Principals in our protocols generate surrogates as described in section 4.5 and use either conventional Diffie-Hellman keys as described in section 4.3 or generate keys as described in ??. We do not advocate a particular payment mechanism in this dissertation; other existing payment mechanisms can be combined with our surrogates using protocols similar to those we present in this chapter.

Our protocols depend on certain properties of the underlying communication channel above which they operate. We assume the existence of a secure anonymous communication channel such as Mixminion and a secure authenticated communication channel such as SSL/TLS, both with the properties discussed in chapter 5.


7.3 Scenario 7: Price Discrimination using Online Identities

7.3.1 Motivation

The issue of the deployment and use of anonymizing technologies is analogous to a non-zero-sum game, in which the players’ (merchants’ and users’) interests are not always in direct conflict with each other and there are opportunities for both to gain. Both merchants and customers have incentives to support techniques (e.g. differential pricing) that facilitate optimal resource allocation. In games like this, where the players have common interests, it is unlikely that there will be any deployment and use of anonymizing technologies unless there is an agreement between merchants and users [2]. A model where everybody (merchants as well as users) has an incentive to cooperate is more likely to be accepted [2]. Merchants need such an incentive because the deployment of fully anonymizing techniques involves a switching cost on their part.

Users and merchants both have incentives in the deployment and use of medium anonymity systems. Medium anonymity systems support techniques (such as price discrimination) that make the economy stronger without violating the privacy of the customer. Merchants can also increase their customer base by offering privacy-conscious customers the use of anonymizing technologies. It has been argued in [65] that when merchants face privacy-conscious customers they will adopt measures to protect the privacy of the customer. It has been reported in [3,13] that if sellers share information about the tastes and buying habits of customers, then “market laws alone might produce Pareto-optimal outcomes” [2]. While [3] states that the buyer can benefit from the seller knowing him better, which is good for society, [36] concludes that the use of pseudonyms helps both society and the individual. So one can say, based on these studies [3,36,65], that differential pricing based on online identities can be effective, and that merchants and users both have incentives to cooperate in the deployment and use of such anonymizing technologies. We talk very briefly about some anonymizing technologies that permit differential pricing in the next section.

Why settling for medium anonymity systems is good for both merchants and users can be formalized more explicitly through the principle of sub-optimization:

“Local optimization in general does not lead to global optimization. ” [40]

We use the term local optimization in this context to mean procedures or designs intended to provide absolute anonymity, i.e. systems which do not allow a third party to correlate transactions belonging to any particular user under any condition. By global optimization we mean a globally optimal cooperative arrangement, i.e. an optimal allocation of resources. Since differential pricing facilitates an optimal allocation of resources [47], widespread use of technologies that do not facilitate price discrimination might create a less efficient economy. Thus the sub-optimizing decision (absolute anonymity) is inconsistent with the globally optimizing one (an efficient economy).

 

7.3.2 Transaction Flow

In our protocol a principal (Carol) proves that she is entitled to a student discount and then pays for her rail ticket using payment tokens; the amount is eventually charged to Carol’s credit card. The payment tokens are different for every transaction and cannot be linked either to each other or to the user, except by the credit card company. Before the protocol run there is a preparation phase where:

  1. Carol registers with a Local Education Authority (LEA) as a student, using her public key. This registration requires Carol to provide supporting paper documents, the details of which are outside the scope of this dissertation.
  2. The customer Carol requests payment tokens from her bank. The bank prepares and sends the information Carol needs to generate her payment tokens. These tokens are not specific to any particular product and can be used for purchasing various goods.

During the protocol run:

  1. Carol authenticates to the rail company that she is entitled to a students’ discount.
  2. Carol generates the payment tokens.
  3. Carol pays for her tickets with the payment tokens.
  4. The payment is authorized similar to the way credit cards are authorized at present. The authorization decision is communicated to the seller.

Whenever Carol authenticates to the rail company as entitled to a students’ discount, she does not need to give any personal sensitive information to the vendor, and the payment tokens cannot be linked to Carol by the vendor. The guard on the train can verify Carol’s entitlement to her ticket without Carol disclosing personal sensitive information. The requirements which were identified in section 4.4 are as follows:

  1. Verifiability
  2. Un-correlatability
  3. Misuse of surrogates
  4. Protection from (partially) trusted third party (RS here)
  5. Audit

 

7.3.3 Preparation

This phase is not specific to the rail-ticket application. Registering with the Local Education Authority (LEA) is also required in other situations (for example, for a waiver of council tax from the county council), and the payment tokens issued by the bank can be used to buy other goods too.

Registering with the LEA

Carol sends her RSA public key PKc to the LEA, using a secure, conventionally authenticated channel.

 

7.4 Scenario 8: Tying of Payment Tokens to Surrogates

We now present a protocol where we tie payment tokens to surrogates. Users use conventional DH keys as described in section 4.3 and generate surrogates as described in section 4.5. The transaction flow is outlined by means of an example:

  1. The customer Carol requests payment tokens from her bank.
  2. The bank prepares and sends the information Carol needs to generate her payment tokens and surrogates.
  3. Carol goes to a website selling goods she wants to purchase.
  4. Carol generates the payment tokens and surrogates.
  5. The seller authenticates locally whether or not Carol is the legitimate owner of the surrogate.
  6. Carol pays with her payment tokens which the seller validates with the bank.
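One plausible way of binding a per-transaction payment token to the surrogate used in the same transaction is sketched below. This is our illustrative assumption (a hash binding over a bank-shared seed), not the dissertation's exact construction; the names `seed` and `transaction` are hypothetical.

```python
# Sketch: each transaction gets a fresh (surrogate, token) pair; the
# seller sees unlinkable values, while the bank, knowing the seed and
# the issued ti values, can recompute and audit the binding.
import hashlib, secrets

P, g = 2**127 - 1, 5
sc = secrets.randbelow(P - 2) + 1        # Carol's secret
seed = secrets.token_bytes(32)           # shared by Carol and her bank

def transaction(i, ti):
    surrogate = pow(g, (sc * ti) % (P - 1), P)
    # Token bound to this surrogate: fresh per transaction and
    # uncorrelatable without the bank-held seed.
    token = hashlib.sha256(seed + i.to_bytes(4, "big")
                           + str(surrogate).encode()).hexdigest()
    return surrogate, token

s1, t1 = transaction(1, 1111)
s2, t2 = transaction(2, 2222)
assert t1 != t2                          # unlinkable across transactions
# The bank can recompute t1 from (seed, 1, s1) and so tie payment to
# surrogate for audit, while the seller cannot.
```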

The next time Carol shops with the same seller she uses different surrogates and payment tokens, which can be verified as before but cannot be correlated with any previous surrogate or payment token. Our motivation has been that Carol trusts her bank, which is a realistic assumption. The requirements which were identified in section 4.4 are as follows:

  1. Verifiability
  2. Un-correlatability
  3. Misuse of surrogates
  4. Protection from (partially) trusted third party (RS here)
  5. Audit

 

7.6 Scenario 9: Delegation of Payment Tokens

There are several commercial websites whose customer base comprises people who are legally not allowed to have a credit card (those under 18). In such situations children use their parents’ credit card to buy online. As in chapter 6, the problem here is that credit card numbers, once learnt, can be reused. Things become more complicated when parents have anonymous credentials instead of credit cards: to share anonymous credentials such as those mentioned in [18,43,15,63] the owners also have to share their secret key. This is a major problem in situations where delegation is a legitimate requirement. In this protocol we show a way of sharing payment tokens and surrogates. Carol generates her DH keys modulo a prime as shown in section 4.3. The requirements which were identified in section 4.4 are as follows:

  1. Verifiability
  2. Un-correlatability
  3. Misuse of surrogates
  4. Protection from (partially) trusted third party (RS here)
  5. Audit

7.7 Conclusions

Though we concentrate on access control in this dissertation, our work has some interesting implications in the areas of payments and differential pricing. Although we have presented some payment protocols in this chapter, we admit that they are far from perfect, and any other payment protocol could be used along with our surrogates in the protocols described above.

In our trust model the users trust the bank which replaces the need for a global pseudonym authority by a more localised trust relationship. Moreover in the real world users have to trust the bank and are legally bound to provide self identifying information to the bank.

The protocol presented in section 7.3 does not make use of surrogates, and any other public-key cryptosystem based on a trapdoor one-way permutation could be used. What we have shown is that price discrimination based on real-life entitlements need not always be a threat to privacy.

The traditional belief that transferability is not desirable in anonymity systems has led to systems which do not support delegation. Our surrogates can be auditably delegated along with the payment tokens, while preserving privacy.

 

Chapter 8

Conclusions

8.1 Introduction

In this chapter we assess the extent to which our approaches can provide useful leverage to address problems of trust and anonymity in various electronic services. In section 8.2 we revisit the problems of access control, trust management and delegation in the light of the approaches described in this dissertation. Section 8.3 gives a brief overview of how our approaches can be used in conjunction with trusted hardware. In section 8.4 we describe how our approaches can be exploited so as to reduce the trust assumptions necessary between the various entities of a system. Future work extending the approaches presented in this dissertation is discussed in section 8.5. We summarise our work in section 8.6.

 

8.2 Significance

We have shown ways in which authorization decisions can be made without using long-term credentials linked to a stable identity. Role-based access control models such as [7,35] currently rely on authentication using long-term credentials for the activation of roles, but our surrogates can be used in RCBS [7] to activate roles without compromising individual privacy, by means of the protocols described in chapter 5. The powerful concept of activating roles using prerequisites, described in section 3.2.3, can be achieved with surrogates using the protocol described in section 5.4. Our approaches can also be used for the activation of roles in applications like Globe [52], discussed in section 3.3, without the proxy being able to correlate actions belonging to a particular user; at the same time, proxies still get the benefit of consulting a local trust management engine before allowing access to a resource. Compared to the approaches mentioned in section 2.4, ours neither needs a global pseudonym authority like idemix [15] nor depends on a universal authority for the registration of pseudonyms like Globally Unique Pseudonyms [64]. Our approaches do not advocate complete anonymity, but allow an auditor with appropriate authority to correlate actions belonging to a particular user retrospectively. None of the protocols we presented in chapter 5 requires communicating partners to share a long-term secret, a feature which we believe is significant.

Keynote, discussed in section 2.2.2, currently uses long-term credentials to make authorization decisions, which can be a threat to individual privacy. Our approaches can be integrated with systems that currently use trust management engines such as Keynote: surrogates can come with explicit labels and policies which the trust management system then uses to make authorization decisions. This adds an extra layer of anonymity to trust management systems, one which can coexist peacefully with traditional non-anonymous mechanisms. We have shown that one can make trust decisions based on transient identities, and we believe this ability is useful for certain services.

In the protocols presented in this dissertation, users generate and use their own surrogates, and it is difficult for an adversary to masquerade as a legitimate user. The anonymity systems described in section 2.4 either require the user to trust a third party with sensitive personal information, as in section 2.4.2, or depend on a global authority, as in sections 2.4.5 and 2.4.6. Trusting a third party with sensitive personal information is undesirable [4]. A global pseudonym authority as advocated in [15,64] is difficult to build, or at least is as hard as a global public key certification authority, which has yet to materialise. Moreover, the trust relationship between users and a global pseudonym authority is unnecessary and compulsive, and can have undesirable consequences. In our approaches principals do not share sensitive personal information with everybody, and trust relationships are more localised: users are no longer required to accept a transitive notion of trust, nor do we presuppose a global pseudonym authority.

Previous anonymous credential management systems do not allow principals to share their credentials, a restriction which we believe prevents users from delegating credentials. We have described approaches in chapter 6 where users can delegate their surrogates without the delegatee being able to determine the secret used to generate the surrogate. The ability to delegate is, we feel, a significant improvement over previous approaches: delegation is useful [25] because resources are hardly ever entirely local, and it is often a legitimate real-world requirement that users be able to delegate jobs, together with credentials using which the delegatee can access the resources of the delegator in order to complete the job.

 

8.3 Trusted Hardware

Though the Palladium-based approach we review in section 3.6.1 does not help us to ascertain trust while maintaining individual privacy, trusted hardware can nevertheless be deployed in conjunction with the protocols we presented in this dissertation.

The two-level authentication mechanism we present in sections 5.4 and 6.3 can exploit the availability of trusted hardware. Users could make use of such hardware to authenticate as members of a group before using their surrogates in the protocols described in sections 5.4 and 6.3; one could also use trusted hardware to authenticate as a member of a group in the protocol described in section 5.3. The top-level authentication in the protocols described in sections 5.3, 5.4 and 6.3 is done using the public keys of the users. A ring signature is perfectly signer-ambiguous, so it is difficult for an adversary to identify the actual signer within a group, and at the same time difficult for an outsider to masquerade as a member of the group.

It is not necessary for keys to be permanently stored in particular hardware, as they can be fetched using a protocol like the one presented in [23]. What is important is that only the legitimate owner of the keys can fetch and use them via the trusted hardware. The user can first auditably authenticate to the hardware using surrogates, after which the hardware keeps an audit trail; the hardware can then be used by the user to authenticate as a member of the group.

Unlike our earlier approach (which we present in the appendix, and now regard as a false start), the mechanisms presented in chapter 5 advocate a more localised trust relationship. In particular, our protocols need no global certification and revocation authority. The keys are not stored in the hardware when the hardware is not in use, so even if the hardware is compromised it is still difficult to compromise the keys. Moreover, unlike a conventional Palladium-based approach, end-systems do not need to trust the authentication mechanism of any third party. Our new approaches take identity out of the authorization management infrastructure: in the approaches presented in chapter 5, users do not need to reveal any digital credential linked to a stable identity, such as a public key, in order to authenticate themselves or obtain tokens, yet the trusted hardware nevertheless makes it difficult for an adversary to forge transactions.

 

8.4 Localisation of Trust

Using the protocols proposed in this dissertation it is possible to reduce the need to trust. In these protocols, when principals request access, the server does not need to trust the authentication mechanism of a third party. In the Kerberos authentication service, by contrast, services must trust Kerberos's judgment of a user's identity to be accurate. Thus in the context of access to the LIS described in section 5.4, Kerberos advocates an approach where principals authenticate to an entity other than the LIS and obtain a ticket which they then use to access the LIS; the LIS must therefore trust the authentication mechanism of that third party. In the protocol described in section 5.4, the LIS instead authenticates users directly, but anonymously, before allowing access, so it does not need to trust any third party's authentication mechanism. The LIS holds the surrogates of the users who are allowed access, and the use of two different mechanisms makes it very difficult for someone who is not authorized to masquerade as a member of the group.

In our approach principals need only reveal their long-term credentials to entities which are legally authorized to verify them. For example, in the protocol described in section 5.2, Carol reveals the long-term credential linked to her stable identity only to the subscription authority, while registering; she does not need to reveal it every time she reads the newspaper. The subscription authority legitimately needs to know Carol, whereas the server where Carol logs in to read the newspaper needs to know only whether she is authorized to read it, not who she is. Since there is now no risk of the server correlating Carol's online activities, Carol does not need to trust the server when she logs in to read the newspaper. The threat that information about us is stored in too many places can thus be countered using the protocols presented in this dissertation.
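Carol's newspaper example can be sketched as follows. The classes and names below are illustrative assumptions, not the actual protocol of section 5.2; the point is only that the server checks authorization against registered surrogates and never sees a stable identity.

```python
import hashlib
import os

class SubscriptionAuthority:
    """Learns Carol's identity once, at registration, and thereafter
    records only the surrogates it has authorized (toy model)."""
    def __init__(self) -> None:
        self.authorized: set[bytes] = set()

    def register(self, identity: str, surrogate: bytes) -> None:
        # Verification of the long-term credential would happen here
        # (omitted); afterwards only the surrogate is retained for use.
        self.authorized.add(surrogate)

class NewspaperServer:
    """Decides access without ever learning who the reader is."""
    def __init__(self, authority: SubscriptionAuthority) -> None:
        self.authority = authority

    def can_read(self, surrogate: bytes) -> bool:
        return surrogate in self.authority.authorized

secret = os.urandom(32)
surrogate = hashlib.sha256(secret + b"issue-1").digest()
authority = SubscriptionAuthority()
authority.register("Carol", surrogate)
server = NewspaperServer(authority)
assert server.can_read(surrogate)           # authorized, yet anonymous
assert not server.can_read(os.urandom(32))  # strangers are refused
```

The server's state contains no identity to correlate, which is exactly the property argued for above.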

In the delegation protocol presented in section 6.4, Carol does not need to trust Bob to be honest: Carol has control over who can use the surrogate, as she can specify this, and Bob cannot further delegate surrogates without Carol's explicit knowledge and consent. In the protocol presented in section 6.3, Carol can herself use the surrogate she delegates, so Bob needs to trust Carol to be honest and not to use it. In the protocol discussed in section 6.4, however, Bob does not need to trust Carol, and Carol cannot use the surrogates she delegates to Bob: an auditor can uniquely and irrefutably link all actions to principals. Moreover, all identity resolution in the protocol of section 6.4 is local; Bob does not need to trust the bank or the merchant with sensitive personal information in order to view movies. An external auditor can link transactions to Bob only with the cooperation of an auditor internal to Bob's domain, and Bob need trust only this local auditor. Local identity resolution of this kind has implications for the design of role-based authorization mechanisms in which users can activate roles across domains without transitively trusting authorities outside their immediate domain [39].
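One way to see why a delegatee cannot recover the generating secret is to model a delegated surrogate as a keyed one-way function of the delegatee's name, as in the sketch below. This HMAC construction is our illustrative assumption, not the protocol of section 6.4; `delegate` and its parameters are hypothetical names.

```python
import hashlib
import hmac
import os

def delegate(master_secret: bytes, delegatee: str, nonce: bytes) -> bytes:
    """Carol derives a surrogate bound to a named delegatee. Bob receives
    only the output, never master_secret, so he cannot mint further
    surrogates; an auditor who is given master_secret can re-derive the
    value and attribute Bob's actions. (Illustrative construction.)"""
    return hmac.new(master_secret, delegatee.encode() + nonce,
                    hashlib.sha256).digest()

carol_secret = os.urandom(32)
nonce = os.urandom(16)
token_for_bob = delegate(carol_secret, "Bob", nonce)
# Audit: with the local auditor's knowledge of carol_secret, the token
# is re-derivable and therefore irrefutably attributable to Bob.
assert delegate(carol_secret, "Bob", nonce) == token_for_bob
assert delegate(carol_secret, "Eve", nonce) != token_for_bob
```

Because the delegatee's name is baked into the derivation, actions performed with the token point to Bob and not to Carol, matching the attribution property claimed above.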

The problem at present is that users have little control over the risks to which they are exposed, since they must enter into an unnecessary, compulsive trust relationship with the system; this forces them to trust the system to protect them from threats. This dissertation provides a way to allow clients to control the risks to which they are exposed by bearing the cost of relevant countermeasures themselves, rather than being forced to trust the system infrastructure and to bear an equal share of the cost of all countermeasures, whether or not effective for them. Moreover, using our mechanisms, systems do not need to trust the authentication mechanism of a third party. This feature is useful even if privacy is not required, owing to the ability which it provides to systematically weaken trust assumptions.

 

8.5 Future Work

Our work can be extended to design an open network authentication system similar to Kerberos, which was discussed in section 3.5. Kerberos has a strong authentication mechanism based on long-term credentials, and audit is linked to authorization via the same long-term credential used to authenticate. Such an approach gives rise to unnecessary trust assumptions between various entities of the system; for example, clients are compelled to trust servers not to enable an adversary to correlate their access requests.

We have argued that the trust relationship between users and the servers providing various services in this scenario is unnecessary, compulsive, and can have undesirable consequences. Services only need to know that the ticket has been issued by the ticket granting service and that the user presenting it is authorized to use the service; they do not need to know the identity of the user. An important conclusion of this dissertation is that, in this sense, the requirements of trust and anonymity are not in conflict with each other and can coexist peacefully without compromising the requirements of secure authentication and audit. We have demonstrated that even if users authenticate to a service using weak identities, an auditor can still link actions to principals, thus enabling the elimination of the present compulsive trust relationship between users and services. The approaches presented in this dissertation could be extended to investigate how they can provide useful leverage to eliminate such unnecessary and compulsive trust relationships between various other parts of the system.

Delegation is difficult in current versions of Kerberos. If tickets issued by a Kerberos authentication service are delegated, it becomes hard for an auditor to uniquely and irrefutably link actions to users, and users can frame each other. At present, to delegate anonymous credentials users have to share their secret key, which is undesirable: the delegatee can reuse the credentials of the delegator, and an auditor cannot tell whether it was the delegator or the delegatee who used a particular credential. This dissertation shows a way of introducing auditable anonymous delegation into the electronic world. Open network authentication systems are often used to issue tickets with which users can access a remote resource; our approach to auditable anonymous delegation can be extended to design open network authentication services in which users can delegate credentials without compromising the goal of audit.

The goals of the open network authentication system which we propose in this section can be summarized as follows:

  1. The service should be sure that the ticket has been issued by a ticket granting service authorized to do so.
  2. An adversary should not be able to masquerade by stealing the ticket of a legitimate ticket owner.
  3. Only the legitimate owner of a ticket should be able to use it during its period of validity.
  4. An adversary who controls a service should not be able to correlate actions of any particular user.
  5. The authorization service should allow authorized delegation of credentials.
  6. An authorized auditor should be able to uniquely and irrefutably link actions to principals.

An important benefit of an approach with reduced trust assumptions is that risks are reduced; for example, clients do not need to trust the server or the remote authentication mechanism to preserve their privacy. Preserving workstation integrity is an open problem in public networks: someone might, for example, change the software running on a workstation to record the password of a user. Our approach using weak identities, or surrogates, can be extended to address this problem. Since our passwords change after each use, an adversary gains nothing by recording a password; only the TGT service and the legitimate owner of the password can use a TGT ticket.
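The claim that recording a password gains an adversary nothing, because passwords change after each use, can be illustrated with a Lamport-style hash chain, a standard one-time-password construction which we use here as an assumption rather than as the dissertation's exact mechanism.

```python
import hashlib
import os

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Lamport-style chain: the user stores the seed, the server stores only
# the current tip h^N(seed). Each login reveals the preimage of the tip;
# the server then moves one step down the chain, so a recorded password
# never verifies again.
N = 1000
seed = os.urandom(32)
chain = [seed]
for _ in range(N):
    chain.append(h(chain[-1]))
server_tip = chain[-1]  # the server registers h^N(seed)

def login(password: bytes, tip: bytes):
    """Return the new tip on success, or None on failure."""
    return password if h(password) == tip else None

server_tip = login(chain[-2], server_tip)    # first login succeeds
assert server_tip == chain[-2]
assert login(chain[-2], server_tip) is None  # replayed password fails
```

A workstation Trojan that records `chain[-2]` learns a value the server will never accept again, which is the property needed to make compromised workstations less dangerous.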

 

8.6 Summary

We have shown mechanisms which enable us to separate identity management from the trust management envelope, thus eliminating the present compulsive trust relationship between users and various electronic services. We propose a more localised trust relationship rather than having globally trusted entities acting as repositories of sensitive personal information. Even though the protocols described in this dissertation include a partially trusted third party, that third party cannot harm the legitimate owner of a surrogate. In this dissertation we concentrated on access control models, but our work has implications in other application areas, some of which we discussed in chapter 7; there we showed that surrogates can be tied to payment tokens and used in conjunction with them. This dissertation also comments implicitly on a desirable level of privacy, one which does not impede the development of approaches allowing auditability. The approaches we present in chapters 5 and 6 do not guarantee absolute anonymity, but they make it difficult for an adversary to link actions to individuals. Previous anonymous credential management systems (discussed in chapter 2) do not allow principals to share their credentials, a restriction which we believe prevents users from delegating credentials. Our delegation mechanism in chapter 6 is, we feel, a significant improvement over previous approaches: delegation is useful [25] because resources are hardly ever entirely local, and it is often a legitimate real-world requirement that users be able to delegate both jobs and credentials using which the delegatee can access the resources of the delegator in order to complete the job. Our approaches allow an auditor with appropriate authority to link actions to individuals, thus enabling fair dispute resolution. This dissertation shows that the requirements of trust and anonymity are not in conflict with each other and that the two can coexist in a peaceful manner.
Finally, reduced trust assumptions enable users to control the risks to which they are exposed.

 

Bibliography

[1] Martin Abadi and Cedric Fournet. Private Authentication. Theoretical Computer Science, 322: 427-476, 2004.

[2] Alessandro Acquisti. Privacy In Electronic Commerce and the Economics Of Immediate Gratification. Proceedings of the 5th ACM conference on Electronic Commerce, pages 21-29,2004.

[3] Alessandro Acquisti and Hal Varian. Conditioning prices on purchase history. Technical report, University of California Berkeley, 2001.

[4] Ross Anderson. Security Engineering. Wiley, Inc, 2001.

[5] David E. Bell and Leonard J. LaPadula. Secure computer systems: Unified exposition and multics interpretation. Technical report, Mitre Corporation, 1975.

[6] Andras Belokosztolszki. Role based access control policy administration. Technical Report 586, University of Cambridge, 2004.

[7] Yolanta Beresnevichiene. A role and context based security model. Technical Report 559, University of Cambridge, 2003.

[8] Oliver Berthold, Andreas Pfitzmann, and Ronny Standtke. The Disadvantages Of Free Mix Routes And How To Overcome Them. Proceedings of the Workshop on Design Issues in Anonymity and Unobservability: Designing Privacy Enhancing Technologies: Lecture Notes in Computer Science Series, 2009: 30-49, 2000.

[9] Matt Blaze, Joan Feigenbaum, John Ioannidis, and Angelos Keromytis. The Keynote Trust Management System. Request For Comments Series, (2704), 1999.

[10] Matt Blaze, Joan Feigenbaum, and M. Strauss. Compliance Checking In The Policymaker Trust Management System. Proceedings of the 2nd Conference on Financial Cryptography: Lecture Notes in Computer Science Series, 1465: 251-265,1998.

[11] C. Boyd and A. Mathuria. Protocols for Authentication and Key Establishment. Springer, 2003.

[12] Stefan Brands. Offline cash transfer by smart cards. Technical report, Centrum voor Wiskunde en Informatica, 1994.

[13] Giacomo Calzolari and Alessandro Pavan. Optimal design of privacy policies. Technical report, University of Toulouse, 1992.

[14] J. Camenisch and A. Lysyanskaya. An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation. Proceedings of Eurocrypt 2001: Lecture Notes in Computer Science Series, 2045: 93-118, 2001.

[15] Jan Camenisch and Els Van Herreweghen. Design And Implementation Of The idemix Anonymous Credential System. Proceedings of the 9th ACM conference on Computer and Communications Security, pages 21-30,2002.

[16] Amy Carrol, Mario Juarez, Julia Polk, and Tony Leininger. Microsoft Palladium: A Business Overview. Available at http://www.microsoft.com/PressPass/features/2002/jul02/0724palladiumwp.asp, 2002.

[17] David Chaum. Untraceable Electronic Mail, Return Addresses And Digital Pseudonyms. Communications of the ACM, 24(2): 84-90,1981.

[18] David Chaum. Security Without Identification: Transaction Systems To Make Big Brother Obsolete. Communications of the ACM, 28(10): 1030-1044,1985.

[19] David Chaum. Achieving Electronic Privacy. Scientific American, pages 96-101, August 1992.

[20] David Chaum and Eugene Van Heyst. Group Signatures. Proceedings of Eurocrypt: Advances in Cryptology: Lecture Notes in Computer Science Series, 547: 257-265,1991.

[21] Bruce Christianson and William Harbison. Why Isn't Trust Transitive? Proceedings of the 3rd International Workshop on Security Protocols: Lecture Notes in Computer Science Series, 1189: 171-176, 1996.

[22] Bruce Christianson and J. A. Malcolm. Binding Bit Patterns To Real World Entities. Proceedings of the 5th International Workshop on Security Protocols Lecture Notes in Computer Science Series, 1361: 105-113,1998.

[23] Bruce Christianson, Michael Roe, and David Wheeler. Secure Sessions From Weak Secrets. Proceedings of the 11th International Workshop on Security Protocols Lecture Notes in Computer Science Series, 3364: 190-212, 2003.

[24] Yang Hua Chu. Trust management for the world wide web. Master’s thesis, M. I. T., 1997.

[25] B. Crispo. Delegation of Responsibility. PhD thesis, University of Cambridge, 1999.

[26] George Danezis. Designing and attacking anonymous communication systems. Technical Report 594, University of Cambridge.

[27] George Danezis, Roger Dingledine, and Nick Mathewson. Mixminion: Design Of A type III Anonymous Remailer. Proceedings of the 24th IEEE Symposium on Security and Privacy, pages 2-15,2003.

[28] Partha Das Chowdhury, Bruce Christianson, and J. A. Malcolm. Anonymous Authentication. To Appear in the Proceedings of the 12th International Workshop on Security Protocols Lecture Notes in Computer Science Series, 2004.

[29] W. Diffie and M. Hellman. New Directions In Cryptography. IEEE Transactions on Information Theory, 22: 472-492,1976.

[30] Y. Ding, P. Horster, and H. Peterson. A New Approach for Delegation Using Hierarchical Delegation Token. Proceedings of the 2nd Conference on Computer and Communications Security, 1996.

[31] Taher El-Gamal. A Public Key Cryptosystem And a Signature Scheme Based on Discrete Logarithms. Proceedings of CRYPTO 84 on Advances in cryptology, pages 10-18,1984.

[32] Carl Ellison, B. Frantz, Ronald Rivest, B. Thomas, and T. Ylonen. SPKI Certificate Theory. Request For Comments Series, (2693), 1999.

[33] Carl Ellison and Bruce Schneier. Ten Risks of PKI: What You Are Not Being Told About Public Key Infrastructure. Computer Security Journal, 16(1): 1-7,2000.

[34] David Ferraiolo and Richard Kuhn. Role Based Access Control. Proceedings of the 15th National Computer Security Conference, pages 13-16, 1992.

[35] David Ferraiolo, Ravi Sandhu, Serban Gavrila, Richard Kuhn, and Ramaswamy Chandramouli. Proposed NIST Standard For Role Based Access Control. ACM Transactions on Information and Systems Security, 4(3): 224-274, 2001.

[36] Eric J. Friedman and Paul Resnick. The Social Cost of Cheap Pseudonyms. Journal of Economics and Management Strategy, 10(2): 173-199, 2001.

[37] M. Gasser and E. McDermott. An Architecture for Practical Delegation in a Distributed System. Proceedings of the IEEE Symposium on Security and Privacy, 1990.

[38] Tyrone Grandison and Morris Sloman. A Survey of Trust in Internet Applications. IEEE communications Surveys Tutorials, 3(4): 2-16,2000.

[39] William Harbison. Trusting in computer systems. Technical Report 437, University of Cambridge, 1997.

[40] F. Heylighen. Evolution Selfishness and Cooperation. Journal of Ideas, 2(4): 70-76,1992.

[41] D. Knuth. The Art of Computer Programming, volume 3: Sorting and Searching. Addison Wesley, Inc, 1998.

[42] Loren M. Kohnfelder. Towards a practical public key cryptosystem. B.S. thesis, M. I. T., 1978.

[43] Anna Lysyanskaya, Ronald Rivest, Amit Sahai, and Stefan Wolf. Pseudonym Systems. Proceedings of the 6th Annual International Workshop on Selected Areas in Cryptography: Lecture Notes in Computer Science, 1758: 184-199,1999.

[44] A. Menezes, P. van Oorschot, and S. Vanstone. Handbook of Applied Cryptography. CRC Press, 1997.

[45] Ulf Möller, Lance Cottrell, Peter Palfrader, and Len Sassaman. Mixmaster Protocol Version 2. Draft, 2003.

[46] B. Clifford Neuman and Theodore Ts'o. Kerberos: An Authentication Service For Computer Networks. IEEE Communications, 32(9): 33-38, 1994.

[47] Andrew Odlyzko. Privacy, Economics, and Price Discrimination on the Internet. Proceedings of the 5th International Conference on Electronic Commerce, pages 355-366, 2003.

[48] Mary Ann Patton and Audun Jøsang. Technologies For Trust In Ecommerce. Proceedings of the IFIP working conference on Ecommerce, 2001.

[49] S. Pohlig and M. Hellman. An Improved Algorithm For Computing Logarithms And Its Cryptographic Significance. IEEE Transactions on Information Theory, 24: 106-110, 1978.

[50] J. M. Pollard. Monte Carlo Method for Index Computation mod p. Mathematics of Computation, 32(143): 918-924,1978.

[51] Carl Pomerance. Cryptology and Computational Number Theory: Proceedings of Symposia in Applied Mathematics, volume 42. American Mathematical Society, 1989.

[52] Bogdan Popescu, Martin Van Steen, and Andrew Tanenbaum. A Security Architecture for Object Based Distributed Systems. Proceedings of the 18th Annual Computer Security Applications Conference, 2002.

[53] Tal Rabin and Michael Ben-Or. Verifiable Secret Sharing and Multiparty Protocols With Honest Majority. Proceedings of the 21st ACM Symposia on Theory of Computing, pages 73-85,1989.

[54] Michael K. Reiter and Aviel D. Rubin. Crowds: Anonymity For Web Transactions. ACM Transactions on Information and Systems Security, 1(1): 66-92,1998.

[55] Ronald Rivest, Adi Shamir, and Yael Tauman. How To Leak A Secret. Proceedings of the 7th International Conference on the Theory and Applications of Cryptology and Information Security; Advances in Cryptology: Lecture Notes in Computer Science Series, 2248: 552-565,2001.

[56] Ronald Rivest, Adi Shamir, and Leonard Adleman. A Method For Obtaining Digital Signatures And Public-Key Cryptosystems. Communications of the ACM, 21(2): 120-126, 1978.

[57] Michael Roe. Cryptography and Evidence. PhD thesis, University of Cambridge, 1997.

[58] Ravi Sandhu. Lattice Based Access Control Models. IEEE Computers, 26(2): 9-19,1993.

[59] Andrei Serjantov, Roger Dingledine, and Paul Syverson. From Trickle to a Flood: Active Attacks on Several Mix Types. Proceedings of the 5th International Workshop on Information Hiding: Lecture Notes in Computer Science Series, 2482: 36-52,2002.

[60] Narendar Shankar and William A. Arbaugh. On Trust for Ubiquitous Computing. Proceedings of the Workshop on Security in Ubiquitous Computing: available at http://www.teco.edu/philip/ubicomp2002ws/, 2002.

[61] William Stallings. Cryptography and Network Security. Prentice Hall, 1999.

[62] Paul Syverson, David Goldschlag, and Michael Reed. Anonymous Connections And Onion Routing. Proceedings of the 18th IEEE Symposium on Security and Privacy, pages 44-54, 1997.

[63] Paul Syverson and David Goldschlag. Unlinkable Serial Transactions: Protocols And Applications. ACM Transactions on Information and Systems Security, 2(4): 354-389, 2000.

[64] Paul F. Syverson and Stuart Stubblebine. Authentic Attributes With Fine Grained Anonymity Protection. Proceedings of the 4th Annual Conference on Financial Cryptography: Lecture Notes in Computer Science, 1962: 276-294,2000.

[65] Curtis R. Taylor. Private demands and demands for privacy: Dynamic pricing and market for customer information. Technical report, Department of Economics of Duke University, 2002.

[66] E. R. Verheul. Self-Blindable Credential Certificates from the Weil Pairing. Proceedings of Asiacrypt 2001: Lecture Notes in Computer Science Series, 2248: 533-551, 2001.

[67] Maurice V. Wilkes. Time Sharing Computer Systems. Elsevier Science Inc., 1975.
