Bridging the Real and the Ideal for Encryption:
Security Against Chosen Objects Attack
Shashank Agrawal∗ Shweta Agrawal† Manoj Prabhakaran‡
Abstract
Cryptographic primitives like Public-Key Encryption (PKE) are widely used in applications and standards that are often too complex to analyze without idealizing the primitives. However, real-world security vulnerabilities lurk in the gaps between what the idealizations model and what the actual cryptographic security definitions guarantee. While prior work has addressed this gap in the random oracle model, here we propose a new approach in the standard model that aims to minimize this gap.
Our contributions are as follows:
• We develop a new security definition for PKE that subsumes CCA2 security, anonymity and robustness as special cases — without having to explicitly model any of these security concerns — and also addresses security concerns that arise when honest parties may use maliciously created keys. The main premise of this definition is that any of the objects in a cryptographic scheme could be adversarially generated, and that this should not compromise the security that honest parties would have when using an idealized scheme.
The definition is in the real/ideal paradigm, but instead of simulation-based security, employs the notion of indistinguishability-preserving (IND-PRE) security, following the Cryptographic Agents framework (Agrawal et al., Eurocrypt 2015). To formalize our definition, we extend the Cryptographic Agents framework to allow adversarially generated objects. The extended framework is of independent interest.
• Somewhat surprisingly, we show that in the case of PKE, the above comprehensive definition is implied by a simpler definition (which we call COA security) that combines a traditional game-based definition with a set of consistency requirements.
• Finally, we provide constructions. We present transformations from any Anonymous CCA2-secure PKE scheme to a COA-secure PKE. Under mild correctness conditions on the Anonymous CCA2-secure PKE scheme, our transformation can be instantiated quite efficiently and is arguably a viable enhancement for PKE schemes used in practice.
∗Western Digital. Email: [email protected]
†Indian Institute of Technology Madras. Email: [email protected]
‡Indian Institute of Technology Bombay. Email: [email protected].
Contents
1 Introduction
  1.1 Our Contributions
  1.2 Related Work
  1.3 Chosen Key Attacks
  1.4 Ideal PKE: Modeling and Approximating
  1.5 Limitations of ∆-s-IND-PRE security
2 Technical Overview
  2.1 COA Secure PKE
  2.2 Proving that COA Security implies ∆-s-IND-PRE Secure PKE
3 COA Secure Encryption
  3.1 Ciphertext Resistance
4 Constructing COA Secure PKE
  4.1 Assuming Universal Key Reliability
  4.2 From any Anon-CCA secure scheme
  4.3 Practical COA Secure Schemes
5 Extending Cryptographic Agents
  5.1 The New Model
  5.2 Security Definition
6 PKE in Agents Framework
  6.1 Proof of Security of Πpke: A Sketch
  6.2 Detailed Proof
A Additional Discussion
  A.1 Choice of Security Framework
  A.2 Additional Attack Scenarios
  A.3 A Comparison with the Original Cryptographic Agents Framework
  A.4 Additional Notes on Related Work
B Additional Preliminaries
  B.1 Formalism of Agents
  B.2 Cryptographic Primitives Used
C Constructing COA Secure PKE: Omitted Proofs
  C.1 Proof Omitted from Section 4.1
  C.2 Proof Omitted from Section 4.2
D On the Tightness of COA Security
E Impossibility of Γppt-IND-PRE Secure Encryption
1 Introduction
Today, we live in an intricate web of digital objects and activities, amidst a plethora of security concerns. The rise of mobile apps, cloud computing, the Internet of Things, and blockchain technologies has changed the landscape of security threats. Novel protocols that exploit the new architectures are frequently created and may rely on using primitives like PKE in innovative yet “non-standard” ways, guided by idealized security expectations on the primitives. Many real-world security vulnerabilities — in complex protocol standards as well as in customized applications — lurk in the gaps between what idealizations of cryptographic primitives promise and what the actual security definitions guarantee.
Enabling formal security guarantees for such applications is a practically important and theoretically challenging goal. In a recent (concurrent) work, Zhandry and Zhang [38] used the indifferentiability framework of Maurer et al. [29] to provide an elegant solution to this challenge.
Their construction yields a PKE scheme that is indifferentiable from a random oracle endowed with PKE functionality. Such an idealized oracle is indeed an excellent model of how PKE is intuitively interpreted in high-level security applications. This significantly improves on a prior work of Backes et al. [6] who also recognized the same problem, and proposed upgrading the security guarantee on a PKE scheme to approximate an idealized interface; however, the notion of approximation offered in [6] was based on “preserving trace properties” which is much more restrictive compared to indifferentiability.
While the notion of indifferentiability with an idealized PKE oracle, as achieved in [38], is arguably the best guarantee one could hope to have for a PKE scheme, it does not fully close the question of bridging the gap between idealized models and real schemes. The reason is that the scheme in [38] (as well as that in [6]) relies on another idealized primitive, namely a Random Oracle. The Random Oracle Model is, of course, much more well-studied and widely used than an idealized PKE oracle; yet it is an unrealizable idealization. Indeed, an important counterexample of what Random Oracles enable but the Standard Model cannot achieve relates to encryption: as pointed out by Nielsen [32], simulation-based adaptive security of (public-key or symmetric-key) encryption is unrealizable in the standard model, yet can be realized in the random oracle model. Adaptive security concerns a natural situation (wherein the decryption key is revealed to an eavesdropper after it has seen many ciphertexts), and it is imperative that any encryption scheme designed to be used in complex protocols extend its security guarantees to such situations. But Nielsen’s result shows that achieving simulation-based adaptive security in the random oracle model cannot be interpreted as a similar security guarantee in the standard model.
This leaves us with the following question:
Can a PKE scheme in the standard model approximate an idealized PKE scheme?
The first challenge in tackling this question is formalizing what it means to approximate an idealized PKE scheme, without running into impossibilities. In this work, we provide a compelling solution using the notion of indistinguishability preservation (IND-PRE) [2]. At a high level, IND-PRE requires the following:
Any bit of information that is hidden from the adversary in a system that uses the idealized encryption scheme should remain hidden from the adversary even when the system uses the real encryption scheme.
The system here (later referred to as Test) models honest users who invoke the encryption scheme arbitrarily and adaptively, but through its prescribed interface, while carrying out arbitrary other computation and interaction with the adversary. As such, this system plays roughly the role of the environment in UC security. The interaction of the system with the adversary involves exchanging objects that are part of the encryption scheme (keys, ciphertexts) as well as arbitrary strings.
While not as powerful as a simulation-based security guarantee (see later), IND-PRE security, being in the mold of the “real-ideal” paradigm, does indeed provide an approximation of an idealized PKE scheme. Instead of addressing security concerns under various specific settings, as exemplified by multiple security definitions like IND-CPA, IND-CCA [31, 35, 17], anonymity [7] or robustness [1, 30, 18], IND-PRE security accounts for all of them at once. We remark that, being agnostic about individual attacks, such a definition is likely to generalize to other unforeseen attacks. Further, by its very nature, it composes with larger schemes that use it arbitrarily, provided that honest users are restricted to its ideal interface.
Another important contribution in this work is to provide a more conventional-looking definition for PKE, called “security against Chosen Objects Attack,” or COA security, which implies the IND-PRE security mentioned above. Then the main technical results in this work separate into two parts: (1) showing that COA security implies IND-PRE security, and (2) achieving COA security under standard assumptions.
1.1 Our Contributions
• COA Security and Indistinguishability Preservation: Our first contribution is to formalize a notion of ideal-like PKE scheme. For this, technically, we use the notion of ∆-s-IND-PRE security used in the Cryptographic Agents framework [2, 3], but allow adversarially created objects in the real and ideal models (see Section 5). Then, we present a simpler security definition called “security against Chosen Objects Attack,” or COA security for short (Section 3), and show that it implies
∆-s-IND-PRE security for PKE schemes (Section 6). Showing this surprising implication forms a major part of the technical contribution of this work.
• Meeting the Security Definition: We present two constructions that transform any PKE scheme with standard security guarantees (CCA security and anonymity [7]) into a COA secure PKE scheme. The first construction (Section 4.1), which has a light overhead and is quite practical, relies on the given PKE scheme having an additional property (called universal key reliability). The second construction (Section 4.2) avoids the need for this additional property and hence proves that COA security entails no additional complexity assumptions beyond anonymous CCA security, but it may be less practical due to its additional overheads. We also note that existing PKE schemes could already be COA secure (an example is the Cramer-Shoup encryption scheme [16], with a modification proposed by Abdalla et al. [1]), and further, COA security is amenable to practical approaches to improving efficiency, like hybrid encryption.
Apart from our concrete results on PKE, we remark that the Extended Cryptographic Agents framework we develop is of independent interest. It yields strong (and as yet unrealized) definitions for primitives like obfuscation and functional encryption, which offer security even in the presence of adversarially created objects.
1.2 Related Work
Approximating an ideal PKE scheme has been explored in the random oracle model. Most prominently, this includes the concurrent work of Zhandry and Zhang [38] using the indifferentiability framework of Maurer et al. [29], and an earlier work of [6] which explored this in the CoSP framework of [5]. Both these works achieve a stronger form of idealized PKE than we achieve (in particular, they support key cycles and key-dependent messages), but rely on the random oracle model for this. In contrast, we prevent key-dependent messages, since honest parties cannot directly access the keys without transferring them to the adversary first.
In the standard model, several individual issues were identified and addressed over a period of time, since the introduction of the first formal definition of semantic security [21]. Firstly, chosen ciphertext attack security was identified [10, 31, 35, 17, 9]. Later, the notion of anonymity (also known as key privacy) was introduced by Bellare, Boldyreva, Desai and Pointcheval [7], and studied in several subsequent works [22, 23, 24, 26]. Further issues, arising from decrypting ciphertexts with “wrong” secret-keys, were identified and developed into the notion of (full) robustness [1, 30, 18]. COA security subsumes all these definitions.
We point out that some of the considerations in this line of work go beyond the interface offered by an ideal PKE scheme, and as such, these issues are not covered by our current work (or by [6, 38]). This includes complete robustness [18] (which is concerned with using a PKE scheme as a commitment scheme, wherein opening is carried out by revealing the randomness used for encryption), leakage resilience [4, 25] (which is concerned with the scenario where some function of the secret keys may be leaked), and security against related key attacks [8, 37] (which allows an adversary to inject faults into the secret key and obtain decryptions of ciphertexts under this modified secret key).
IND-PRE security definitions were introduced in [2], and further extended in [3] to s-IND-PRE security, to reason about advanced primitives like obfuscation and functional encryption. We extend the cryptographic agents framework of these works to be able to handle adversarially generated objects. Please see Appendix A.4 for additional discussion on related work.
1.3 Chosen Key Attacks
COA security lets an adversary make honest parties operate with not only maliciously created ciphertexts but also maliciously created keys. While some security issues related to keys have been modeled by anonymity [7, 22] and robustness [1, 30, 18], there are several other unexpected consequences of operating on malicious keys. We discuss a few such examples below.
• A third party could censor the communication between honest parties by providing the sender and receiver with maliciously crafted public/secret keys, such that ciphertexts carrying certain messages will not be decrypted. While the honest parties cannot expect the ciphertexts to be hiding against the adversary (which may not be a concern, e.g., if the adversary does not have access to the ciphertext), they may reasonably expect that the messages will be delivered unaltered.
However, standard encryption gives no guarantees of correctness when malicious secret-keys are used for decryption.
• A third party could present two different secret keys to two parties such that when an honestly generated ciphertext is posted, they both proceed to decrypt it differently. The adversary may also be able to create tailor-made ciphertexts which will decrypt to specific different messages for
the two parties. A variant of this attack is when the adversary observes a valid public-key posted online, and generates a fake secret-key for it, which “decrypts” ciphertexts generated using that public key into plausible messages. Existing notions of security, including “complete robustness” [18], which considers some aspects of maliciously generated keys (see Appendix A.4), do not preclude such an attack.
• An adversary may generate multiple seemingly unrelated public-keys that all produce ciphertexts that correctly decrypt under the secret-key of a given public-key.1 This can lead to unexpected attacks that are absent with an idealized encryption scheme (see Appendix A.2 for an illustration).
• As a dual of the above attack, an adversary may generate a single public-key which produces ciphertexts which decrypt under various honest parties’ secret-keys (whose public-keys the adversary had access to). Further, which ciphertexts produced using this malicious public-key are decrypted (correctly or incorrectly) by a given secret-key could depend on the message (and randomness) used during encryption.
Non-Attacks. It is also interesting to note why certain “attacks” are allowed by our security definitions. In particular, COA security does not prevent an adversary from passing off a “lossy”
public-key – which does not have a corresponding secret-key at all – as a genuine public key to be used by honest parties. One may expect that this is not very different from an adversary forgetting its secret key. However, it is not a priori clear that there is no experiment at all in which the adversary could leverage its knowledge about the key being lossy (e.g., by giving a proof that the key is lossy). What prevents such attacks is the fact that the honest users do not directly access the key or the ciphertexts produced by it.
Also, COA security does not rule out additional structure in the encryption scheme. For instance, a COA secure encryption may also support “proxy reencryption”: one could deliberately generate two key pairs (SK1, PK1) and (SK2, PK2) such that a ciphertext corresponding to PK1 can be efficiently modified (without knowing SK1) into a ciphertext corresponding to PK2. While this may appear non-ideal at first glance, this is not a problematic feature: An honest party shall not generate such key pairs or carry out the reencryption operation (as these are not part of the interface for PKE); on the other hand, if such key pairs are generated by the adversary, then it could anyway carry out the reencryption operation by decrypting (using SK1) and encrypting again.
We emphasize that an important reason why these “attacks” do not violate ∆-s-IND-PRE security is that the honest users stick to their ideal interface and access the cryptographic objects only via their handles.
1.4 Ideal PKE: Modeling and Approximating
Consider traditional security definitions like IND-CCA security for PKE. It involves a challenger interacting with an adversary, wherein the adversary’s goal is to guess a random “test bit” used by the challenger. A necessary feature of the challenger’s behavior in such a definition is that the adversary should have no advantage in guessing the test bit unless it “breaks” the encryption scheme. That is, if the encryption scheme were to be replaced by some kind of an idealization — which we shall elaborate on shortly — then even a computationally unbounded adversary will have no information about the test bit used by the challenger.
1Interestingly, Waters et al. [36] considered allowing such an attack as a feature. For us, this is a vulnerability that we seek to remove.
∆-s-IND-PRE security, which we shall use, is a “universal” version of the above security definition: It considers all behaviors for the challenger such that in the ideal setting the adversary has no (or only negligible) information about the test bit. Such universality — i.e., for whichever challengers the test bit is hidden in the ideal world, it should remain hidden in the real world as well — is the essence of IND-PRE security. ∆ refers to a class of behaviors for the challenger wherein the only information hidden from the adversary is the value of the test bit (apart from the internal state of the algorithms of the encryption scheme, including the keys generated); while more general notions are conceivable, they lead back to impossibility results in the standard model (see Appendix E). The prefix s (for statistical) signifies that, in the ideal world, the hiding should be information-theoretic.
Returning to the question of idealization, one possible approach (followed in [38], for instance) is to use random strings to represent the various objects (decryption and encryption keys, and ciphertexts). But this has the side-effect of creating a random oracle in the ideal model, making it impossible to approximate in a meaningful way in the standard model. The way around this is to use an even more natural ideal model where the objects do not have any global representation shared between the adversary and the challenger. Instead, an oracle implementing the ideal primitive will maintain separate tables for the adversary and the challenger; it will give out handles (serial numbers) to each of them to refer to the entries in their respective tables, and will also facilitate explicit transfers by copying items from one table to (the bottom of) the other.
The use of s-IND-PRE security, and an ideal model without object representations was already formalized in the minimalistic Cryptographic Agents framework [2,3]. As such we use this model, with essential extensions to capture the concern of maliciously created objects (specifically, the ability for the adversary to transfer handles to the challenger). Please see Appendix A.1 for further discussion on why this framework is a natural choice for our purposes.
Cryptographic Agents. We briefly review the Cryptographic Agents framework here (with more technical details appearing in Section 5). Cryptographic Agents were originally proposed as a framework to model various cryptographic objects ranging from modern primitives such as fully-homomorphic encryption, functional encryption and obfuscation, to classic primitives such as public key encryption, to generic group and random oracle models [2].
The framework is minimalistic and conceptually simple, and consists of the following:
• Two arbitrary entities: Test models all the honest parties, and User models the adversary.
• The ideal model has a trusted party which hands out handles to Test and User for manipulating data stored with it. The manner in which data can be manipulated in the ideal model is specified by a “schema” (which is akin to a functionality in the UC security model).
• The real model, in which cryptographic objects are used in place of the ideal handles.
• Indistinguishability Preservation: If a predicate about Test’s inputs is hidden from User in the ideal world, then it should be hidden in the real world as well.
We use a fairly straightforward formulation of the PKE schema Σpke, but making explicit the guarantees we do not seek (e.g., we allow an adversary with one secret-key for a public-key to generate more).
Beyond PKE. We point out that the extended cryptographic agents framework developed here is quite general and can be used to model security for other schemas in the presence of adversarially created objects. We leave it for future work to extend our results for PKE to schemas like signatures, or authenticated encryption. For more advanced primitives like obfuscation, the original frameworks of [2, 3] modeled security notions like indistinguishability obfuscation, differing-inputs obfuscation and VGB obfuscation by using different test families; these (and other) test families remain to be explored for obfuscation in our extended framework. As the basic security definitions for obfuscation and functional encryption are increasingly considered to be realizable, the achievability of stronger definitions emerges as an important question.
1.5 Limitations of ∆-s-IND-PRE security.
Even though ∆-s-IND-PRE security is based on an ideal world model, and subsumes all possible IND definitions, we advise caution against interpreting ∆-s-IND-PRE security on par with a simulation-based security definition (which, indeed, is unrealizable). For instance, ∆-s-IND-PRE does not require preserving non-negligible advantages: e.g., a distinguishing advantage of 0.1 in the ideal world could translate to an advantage of 0.2 in the real world. Note that this is usually not a concern, because here the ideal world is already “insecure.”
Another issue is that while an ideal encryption scheme could be used as a non-malleable commitment scheme, ∆-s-IND-PRE security makes no such assurances. This is because, in the ideal world, if a commitment is to be opened such that indistinguishability ceases, then IND-PRE security makes no more guarantees. We leave it as an intriguing question whether ∆-s-IND-PRE secure encryption could be leveraged in an indirect way to obtain a non-malleable commitment scheme.
Finally, a limitation of ∆-s-IND-PRE security that is absent in the random oracle based idealizations is related to non-blackbox composition. As mentioned earlier, Test plays the role of an environment and models arbitrary systems that use a PKE scheme. As such, a ∆-s-IND-PRE PKE scheme does compose with larger schemes that use it arbitrarily, provided that honest users are restricted to its ideal interface. However, it is not unusual for protocols to violate this ideal interface.
In particular, recall that the interface provides the users only with handles (serial numbers) for the cryptographic objects they create or receive. It does not allow honest users to directly access the ciphertexts (unless they are transferred to the adversary first); this prevents treating the ciphertext as an input to other schemes, like signatures or the encryption scheme itself (for double encryption).
Similarly, it disallows key cycles, as the honest users cannot directly access the keys. We remark that this restriction is, in fact, a desirable feature in a programming interface for a cryptographic library; violating this interface (e.g., to create a signcryption scheme) should not be up to the programmer, but should be carefully designed, analyzed and exposed as a new schema by the creators of the cryptographic library.
2 Technical Overview
We proceed to provide a technical overview of our work.
2.1 COA Secure PKE
The definition of COA security of PKE is deceptively simple. It consists of two parts: Anonymous CCA (or Anon-CCA) security [7,1] and “existential consistency.” The latter is a natural correctness guarantee that requires that even an adversarially generated ciphertext should have at most one
message and one public-key associated with it, and even maliciously generated secret-keys will decrypt it in a manner consistent with its underlying public-key and message.
More formally, there is an efficient algorithm used to accept or reject externally generated objects (keys, ciphertexts). For any object that is accepted as a secret key, there should be a deterministic procedure, which we denote by pkGen, to convert it to a public key. Additionally, for any object that is accepted as a ciphertext, there must be an information-theoretic binding of the ciphertext to a (hidden) public key and message, captured by the existence of (computationally intractable) maps pkId : CT → PK ∪ {⊥} and msgId : CT → M ∪ {⊥}. The consistency requirement insists that for any matching secret key and ciphertext, namely, when pkGen(SK) = pkId(CT), the decryption procedure must always reveal msgId(CT) (which may be ⊥), and if the key and ciphertext do not match, it must always output ⊥.
We present two general constructions for a COA secure PKE scheme, by modifying an arbitrary Anon-CCA encryption scheme. The first construction is fairly light-weight, and considering that it can be used in the hybrid encryption (KEM/DEM) mode, quite efficient. It relies on a slightly non-standard correctness requirement (which we call universal key reliability), which states that for all secret-keys that can be generated by the key generation algorithm, honestly encrypting any message with its corresponding public-key and then decrypting the resulting ciphertext should return the original message, with high probability (over the randomness used for encryption). The details of this construction are given in Section 4.1.
Our second construction is perhaps of more theoretical interest as it proves the following theorem.
Theorem 1. A COA secure PKE scheme exists if an Anon-CCA secure PKE scheme and injective one-way functions exist.
Not requiring universal key reliability of the given PKE scheme significantly complicates the construction (and the proof). As it is hard to detect a bad secret-key which may decrypt some message incorrectly, we randomize the message (via secret-sharing) before encrypting. Then, a probabilistic check can ensure that a secret-key causes decryption error with at most an inverse polynomial error probability. To drive this probability down to a negligible quantity, we include multiple encryptions, and rely on error-correction during decryption.
However, the use of secret-sharing and error-correction creates several complications with CCA security, let alone Anon-CCA security. While CCA security can be restored by carefully using a signature scheme, key anonymity requires further work. In particular, we need to analyze what happens when, in the given PKE scheme pke, a wrong key is used to decrypt a ciphertext. One might expect that such a mismatched decryption will result in “garbage” and will be of little use to the adversary in creating a decryption query in the Anon-CCA game. Unfortunately, this intuition is wrong: while the adversary cannot control the outcome of decrypting an honestly generated ciphertext using an honestly and independently generated secret-key in the given PKE scheme, it is possible that this outcome is predictable (and not ⊥). As this decryption yields only a share of a message, being able to predict it allows the adversary to control the reconstructed message. To counter this, we require that each of the shares carries a tag that was randomly chosen and included in the public-key, so that, again, the adversary will need to control the outcome of mismatched decryption. The details of this construction are given in Section 4.2 and Appendix C.2.
2.2 Proving that COA Security implies ∆-s-IND-PRE Secure PKE
Implementing the schema Σpke is a challenging task because it is highly idealized and implies numerous security guarantees that may not be immediately apparent. (For instance, the ideal world provides “ciphertext resistance” in that an adversary who gets oracle access to encryption and decryption cannot create a valid ciphertext that it did not already receive from the encryption oracle.) These guarantees are not explicit in the definition of COA security. Nevertheless, we show the following:
Theorem 2. A ∆-s-IND-PRE secure implementation of Σpke exists if a COA secure PKE scheme exists.
The construction itself is direct, syntactically translating the elements of a PKE scheme into those of an implementation of Σpke. However, the proof of security is quite non-trivial. This should not be surprising given the simplicity of the COA security definition vis-à-vis the generality of ∆-s-IND-PRE security. We use a careful sequence of hybrids to argue indistinguishability preservation.
The hybrids involve the use of an “extended schema” (which is partly ideal and partly real). To switch between these hybrids, we use both PPT simulators (which rely on Anon-CCA security) and computationally unbounded simulators (which rely on existential consistency). As we shall see, the simulators heavily rely on the fact that Test ∈ ∆, and hence the only uncertainty regarding agents transferred by Test is the choice between one of two known agents, determined by the test-bit b given as input to Test. The essential ingredients of these simulators are summarized below.2
• First, we move from the real execution to a hybrid execution in which objects originating from Test are replaced by ideal agents, while the objects originating from the adversary are left as such (in the form of non-ideal agents, which carry the real objects within). The extended schema is used to process both kinds of agents together. This hybrid uses a simulator Sb† which knows the test bit b. Since Test ∈ ∆ reports all its interactions with the ideal schema, Sb† can exactly simulate all the objects created by Test.3
Here one needs to be careful in sorting objects as originating from Test or the adversary (as only the former are replaced by ideal agents). For instance, if the adversary uses a public-key sent by Test to create a ciphertext, this would be treated as an object originating from Test.
• The next step is to show that there is a simulator S‡ which does not need to know the bit b to carry out the above simulation. This is the most delicate part of the proof. The high-level idea is to argue that the executions for b = 0 and b = 1 should proceed identically from the point of view of the adversary (as Test hides the bit b in the ideal world), and hence a joint simulation should be possible. S‡ will abort when it cannot assign a single simulated object for the two possible choices of a transferred agent, corresponding to b = 0 and b = 1. Intuitively, this event corresponds to revealing b in the ideal execution.
The actual argument is more complicated. For example, suppose Test transfers a ciphertext agent such that it has different messages in the two executions corresponding to b = 0 and b = 1. Then there is no consistent assignment of that agent to an object that works for both b = 0 and b = 1. Nevertheless this may still keep b hidden, as long as the corresponding secret-keys are not transferred. So S‡ can assign a random ciphertext to this agent; provided that the key will be “locked away”
2To facilitate keeping track of the arguments being made, we describe the corresponding hybrids fromSection 6.1.2.
The goal is to showH0≈H7, for hybrids corresponding to real executions withb= 0 andb= 1 respectively.
3This corresponds toH0≈H1 (withb= 0) andH6≈H7 (withb= 1).
and never transferred, Anon-CCA ensures that the simulation is good. To analyze this, we letS‡ maintain a list of secret-keys that get locked in this way, and will abort the simulation if one of those keys is transferred later.
Once S‡ is carefully specified, it can be verified that if it aborts with non-negligible probability, then the bit b was not hidden in the ideal world to begin with; otherwise, due to Anon-CCA security, the simulation is good. Here, b not being hidden does not yield a contradiction yet.⁴
• The next simulator S∗ is computationally unbounded, and helps us move from the ideal world with the extended schema to the ideal world involving only the schema Σpke. The key to this step is existential consistency: S∗ will use unbounded computational power to map objects sent by the adversary to ideal agents.⁵
• To prove ∆-IND-PRE security we need only consider Test ∈ ∆ such that the bit b remains hidden against a computationally unbounded adversary. For such a Test, the above two hybrids are indistinguishable from each other.⁶
Together these steps establish that if b is statistically hidden in the ideal execution, then it is (computationally) hidden in the real execution. Section 6.1 and Section 6.2 present the complete argument.
3 COA Secure Encryption
In this section, we define COA security of a PKE scheme. (See Appendix B.2.1 for its syntax.) Let SK, PK, CT, and M denote the space of secret keys, public keys, ciphertexts, and messages, respectively. We require these spaces to be mutually disjoint, with efficient membership algorithms (as could be readily enforced, say, by requiring a header), but not all the elements in a space need to be valid (i.e., actually produced by an algorithm of the scheme). We define the space of all objects O = SK ∪ PK ∪ CT ∪ M.
Let the algorithms constituting the PKE scheme be skGen, pkGen, enc, and dec. Here, we use the convention that skGen samples secret-keys, while pkGen : SK → PK deterministically converts secret-keys to public-keys. dec : SK × CT → M ∪ {⊥} is also taken to be a deterministic function.
Anonymous CCA Security. Anon-CCA (Definition 1, in Figure 1) is equivalent to AI-CCA presented in [1] as a combination of IND-CCA and IK-CCA security [7]. The experiment for Anon-CCA is similar to that of IND-CCA, except that the adversary is given two independently generated public-keys, and access to the decryption oracles corresponding to both of them. A random bit is used to select one of the keys as well as one in a pair of messages submitted by the adversary, to produce a challenge ciphertext (which cannot be queried to either decryption oracle). (Also see Appendix B.2.2.)
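The control flow of this experiment can be sketched in code. The sketch below is only illustrative: `ToyPKE` is a deliberately insecure stand-in (its public key equals its secret key, so the adversary shown trivially wins), and all names here are our own; the authoritative definition appears in Figure 1.

```python
import secrets

class ToyPKE:
    """Deliberately insecure stand-in for a PKE scheme (PK = SK),
    used only to exercise the experiment's control flow."""
    def keyGen(self):
        sk = secrets.token_bytes(16)
        return sk, sk                       # (SK, PK); toy: PK equals SK
    def enc(self, pk, m):
        return bytes(a ^ b for a, b in zip(pk, m))
    def dec(self, sk, ct):
        return bytes(a ^ b for a, b in zip(sk, ct))

def expt_ano_cca(pke, adversary):
    """One run of the Anon-CCA experiment; returns b XOR b' (as in Figure 1)."""
    sk0, pk0 = pke.keyGen()
    sk1, pk1 = pke.keyGen()
    challenge = [None]
    def dec_oracle(i, ct):
        if ct == challenge[0]:              # both oracles refuse CT*
            return None
        return pke.dec(sk0 if i == 0 else sk1, ct)
    m0, m1, guess = adversary(pk0, pk1, dec_oracle)
    if len(m0) != len(m1):
        return secrets.randbelow(2)         # malformed messages: random bit
    b = secrets.randbelow(2)
    challenge[0] = pke.enc(pk1 if b else pk0, m1 if b else m0)
    b_prime = guess(challenge[0], dec_oracle)
    return b ^ b_prime

# Since ToyPKE leaks the secret key as the public key, this adversary can
# decrypt the challenge itself and (almost) always identifies b.
def breaking_adversary(pk0, pk1, dec_oracle):
    m0, m1 = b"\x00" * 16, b"\xff" * 16
    toy = ToyPKE()
    def guess(ct_star, dec_oracle):
        return 0 if toy.dec(pk0, ct_star) == m0 else 1
    return m0, m1, guess
```

Against a secure scheme, the output b ⊕ b′ would be close to a uniform bit; here the toy adversary pins it to 0 with overwhelming probability.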
Existential Consistency. Informally, the consistency requirement is that when the objects given by an adversary are operated on by the algorithms in the encryption scheme, they should behave as objects that were generated honestly according to an ideally correct scheme. Since we would like to consider non-uniform adversaries, this means that all objects should behave consistently with underlying ideal objects. We capture this in Definition 2 (see Figure 1).
The intention of providing the algorithm acc is to run it on an object when it is received, in order to sort it as a secret-key, public-key or ciphertext (or none of them). This will be made explicit
⁴This corresponds to showing that if H2 ≈ H5, then H1 ≈ H2 and H5 ≈ H6.

⁵This shows H2 ≈ H3 and H4 ≈ H5.

⁶That is, H3 ≈ H4.
Expt^{ano-cca}_A(κ):

• (SK0, PK0) ← pke.keyGen; (SK1, PK1) ← pke.keyGen

• (m0, m1, state) ← A1^{pke.dec(SK0,·), pke.dec(SK1,·)}(1^κ, PK0, PK1)

• b ←_R {0,1} and CT∗ ← pke.enc(PKb, mb)

• b′ ← A2^{pke.dec(SK0,·), pke.dec(SK1,·)}(state, CT∗). Now, both oracles return ⊥ if queried with CT∗.

• Output b ⊕ b′, if m0, m1 ∈ M, length(m0) = length(m1) and b′ ∈ {0,1}. Else, output a random bit.

Definition 1 (Anonymous CCA Security). A public-key encryption scheme pke is anonymous chosen-ciphertext attack (Anon-CCA) secure if for all PPT adversaries A = (A1, A2), the probability that the experiment Expt^{ano-cca}_A(κ) outputs 1 is at most 1/2 + ε(κ), where ε is negligible in κ.
Definition 2 (Existential Consistency). An encryption scheme pke = (skGen, pkGen, enc, dec) is said to be existentially consistent if:

1. There is a PPT algorithm acc which takes any string as input and outputs one of the tokens {sk, pk, ct, ⊥}, such that the following probabilities are negligible:

Pr[acc(skGen()) ≠ sk]
Pr[acc(obj) = sk ∧ acc(pkGen(obj)) ≠ pk]   ∀ obj ∈ O
Pr[acc(obj) = pk ∧ acc(enc(obj, m)) ≠ ct]   ∀ m ∈ M, ∀ obj ∈ O

2. There exist (computationally inefficient) deterministic functions pkId : CT → PK ∪ {⊥pk} and msgId : CT → M ∪ {⊥} such that the following holds. Let D : SK × CT → M ∪ {⊥} be defined as

D(SK, CT) = msgId(CT) if pkGen(SK) = pkId(CT) ≠ ⊥pk, and ⊥ otherwise.

Then ∀ SK ∈ SK, PK ∈ PK, CT ∈ CT, m ∈ M, the following probabilities are negligible:

Pr[acc(SK) = sk ∧ acc(CT) = ct ∧ dec(SK, CT) ≠ D(SK, CT)]
Pr[acc(PK) = pk ∧ pkId(enc(PK, m)) ≠ PK]
Pr[acc(SK) = sk ∧ msgId(enc(pkGen(SK), m)) ≠ m].
Definition 3 (COA Security). A public-key encryption scheme is said to be COA-secure if it is Anon-CCA secure and existentially consistent. We say that it is non-trivial if the message space has more than one element.
Figure 1: Defining COA Security
in the interface provided in the cryptographic agents model. In typical schemes, acc could simply correspond to a format check (possibly reading headers and checking length). However, as we shall see in Section 4.2, a non-trivial probabilistic check can sometimes be useful to vet an object before accepting it.
We make a few observations about this definition. It does not prevent acc from accepting invalid secret-keys (i.e., those never produced by skGen). However, existential consistency guarantees that such secret-keys will behave “correctly.” pkId and msgId provide a similar guarantee for maliciously crafted ciphertexts – that any ciphertext can be (existentially) interpreted uniquely (if at all) as obtained by encrypting a message using a valid public-key.
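The existential decryption function D from Definition 2 is simple to express as code. In a real scheme, pkId and msgId are computationally inefficient; the sketch below (our own illustration) uses a transparent toy encoding ct = (pk, m) so that they become trivial.

```python
# Sketch of the existential decryption function D from Definition 2,
# parameterized by pkGen, pkId and msgId; None plays the role of ⊥.
def make_D(pkGen, pkId, msgId):
    def D(sk, ct):
        pk = pkId(ct)
        if pk is not None and pkGen(sk) == pk:
            return msgId(ct)        # decrypt only under the matching key
        return None
    return D

# Toy instantiation: sk is an int, pkGen doubles it, and a ciphertext
# literally carries (pk, m), so pkId and msgId are projections.
pkGen = lambda sk: 2 * sk
pkId  = lambda ct: ct[0]
msgId = lambda ct: ct[1]
D = make_D(pkGen, pkId, msgId)
```

Definition 2 then demands that the scheme's actual dec agree with D, except with negligible probability, on all accepted objects.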
COA Security. COA security is simply a combination of the above two security guarantees (see Figure 1 for Definition 3). While the name COA security is quite broad, our definition above may appear quite limited in its goals. It just adds a deceptively simple set of consistency properties to a standard security definition and makes no apparent attempt to address various potential attacks that arise in a scenario with multiple keys. In particular, there is no strong correctness requirement associated with public keys that can get accepted by acc (unlike for secret keys).
So an adversary may possibly send an invalid public-key which honest users may use to encrypt their messages, possibly resulting in the message being information-theoretically lost. However, a moment’s reflection may reveal that this is not very different from a scenario where the adversary sends a valid public-key, but refuses to reveal the secret-key that generated it. Nevertheless, the reader may be left with an uneasy suspicion that our definition of COA security does not address all possible scenarios involving multiple secret-keys, public-keys and ciphertexts generated by honest and malicious parties interacting with each other.
The justification for the name comes from the fact that this definition suffices to achieve security in a general and abstract model of Cryptographic Agents in Section 5. Analyzing security in this abstract model requires a detailed (and tedious) argument, but the properties needed for meeting the strong security requirement in that model can be condensed to COA security. In fact, though we omit a formal statement or proof, a converse also holds: any scheme secure in that model will essentially have to satisfy COA security. (See Appendix D.)
3.1 Ciphertext Resistance
We point out an implication of COA security – called ciphertext resistance – that will be useful later in Section 6. Ciphertext resistance requires that any PPT adversary who is given oracle access to the encryption and decryption algorithms with an honestly generated key pair (but not the keys themselves) has negligible probability of generating a new valid ciphertext for this secret key (i.e., a ciphertext that is different from the ones returned by the encryption oracle, which on decryption using the secret key yields a non-⊥ outcome).
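The ciphertext-resistance experiment can be sketched as follows. The toy scheme here is our own stand-in, not the paper's construction: it authenticates the message with an HMAC under the hidden key, so producing a fresh valid ciphertext amounts to a forgery.

```python
import hmac, hashlib, secrets

def ct_resistance_expt(adversary):
    """Adversary gets enc/dec oracles for a hidden key; it wins by outputting
    a ciphertext not returned by the encryption oracle that still decrypts
    to a non-None message."""
    sk = secrets.token_bytes(32)
    seen = set()
    def enc_oracle(m):
        ct = m + hmac.new(sk, m, hashlib.sha256).digest()
        seen.add(ct)
        return ct
    def dec_oracle(ct):
        m, tag = ct[:-32], ct[-32:]
        ok = hmac.compare_digest(tag, hmac.new(sk, m, hashlib.sha256).digest())
        return m if ok else None
    forged = adversary(enc_oracle, dec_oracle)
    return forged not in seen and dec_oracle(forged) is not None

# Mauling an honest ciphertext should not yield a fresh valid one.
def bit_flip_adversary(enc_oracle, dec_oracle):
    ct = enc_oracle(b"hello")
    return bytes([ct[0] ^ 1]) + ct[1:]
```

For this toy scheme the bit-flipping adversary loses, mirroring the guarantee that Lemma 1 derives for any non-trivial COA-secure scheme.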
Lemma 1. Any non-trivial COA-secure encryption scheme is ciphertext-resistant.
To prove this lemma, we need the following consequence of existential consistency:
max_{CT′∈CT} Pr_{SK←skGen}[acc(CT′) = ct ∧ dec(SK, CT′) ≠ ⊥]  is negligible.  (1)

It states that it is unlikely that when an honest party creates a secret-key, it would “accidentally match” a ciphertext that already exists in the system. We state and prove this (more generally) as follows.
Lemma 2. Suppose pke = (skGen, pkGen, enc, dec) is a non-trivial COA-secure encryption scheme. Then, the following probabilities are negligible:

max_{SK′∈SK} Pr_{SK←skGen}[acc(SK′) = sk ∧ pkGen(SK) = pkGen(SK′)]  (2)

max_{PK′∈PK} Pr_{SK←skGen}[acc(PK′) = pk ∧ pkGen(SK) = PK′]  (3)

max_{CT′∈CT} Pr_{SK←skGen}[acc(CT′) = ct ∧ dec(SK, CT′) ≠ ⊥]  (4)

max_{PK′∈PK, m∈M} Pr_{SK←skGen}[acc(PK′) = pk ∧ dec(SK, enc(PK′, m)) ≠ ⊥]  (5)

Proof. Note that (2) is upper bounded by (3). The latter can be shown to be negligible because of (semantic) security and existential consistency. Firstly, note that correctness is implied by the consistency requirements on acc, dec and enc/msgId: i.e.,
δ := max_{m∈M} Pr_{SK←skGen}[dec(SK, enc(pkGen(SK), m)) ≠ m]

is negligible. Now, suppose for some PK′ ∈ PK we have

Pr_{SK←skGen}[pkGen(SK) = PK′] = ε.

Then an adversary A who sends distinct m0, m1 (recall that |M| > 1) to Expt^{ano-cca}_A(κ) will be able to predict b correctly with probability 1/2 + ε/2 − δ, simply by checking if the public-key given to it equals PK′ and, if so, decrypting the challenge ciphertext with some fixed secret-key SK′ such that pkGen(SK′) = PK′. Hence, by semantic security, ε must be negligible.
To upper bound (4), note that for any CT′ ∈ CT, by the previous part, Pr_{SK←skGen}[pkGen(SK) = pkId(CT′)] is negligible. However, by existential consistency, whenever pkGen(SK) ≠ pkId(CT′), we have that dec(SK, CT′) = ⊥.

Finally, to upper bound (5), note that if pkId(enc(PK′, m)) ∈ {PK′, ⊥pk} and pkGen(SK) ≠ PK′, then by Definition 2, dec(SK, enc(PK′, m)) = ⊥, except with negligible probability. This implies that (5) is upper bounded by the sum of (3) and a negligible quantity.
Proof of Lemma 1. Lemma 1 follows from Anon-CCA security and (1). Consider a hybrid experiment derived from the ciphertext resistance experiment, in which the adversary is given oracle access to encryption and decryption using not the actual (SK, PK) pair, but a separate, independently generated key pair. Firstly, by Anon-CCA security, the two hybrids are indistinguishable. Secondly, in this hybrid, using the bound in (1), ciphertext resistance holds. Hence ciphertext resistance holds in the original experiment too.
4 Constructing COA Secure PKE
COA security imposes a fairly natural additional requirement on an Anon-CCA secure encryption scheme. We show how to transform any Anon-CCA secure PKE scheme into a COA secure PKE scheme, provided the former satisfies a simpler and natural correctness property that we refer to as universal key reliability. Later we shall show that this extra requirement can be removed, and hence a COA secure encryption scheme can be based on any Anon-CCA secure encryption scheme.
Given an Anon-CCA secure PKE scheme pke = (pke.skGen, pke.pkGen, pke.enc, pke.dec), a perfectly binding commitment scheme com, and a strongly existentially unforgeable one-time signature scheme Sign, we construct a PKE scheme pke∗ as follows.
In this construction we consider a finite message-space (which can be extended to an arbitrary message space using standard hybrid encryption). For concreteness, let the message-space be M = {0,1}^ℓ and length(m) = ℓ, where ℓ can be a function of κ. The given scheme pke is assumed to admit the message-space {0,1}^{ℓ+κ}. We use a parameter t = ω(log κ) (say t = log² κ).
Objects defined as tuples are unambiguously encoded into strings, and all the objects include implicit indicators as to which algorithms produced them.
• pke∗.skGen. It outputs (r_pke, r_com, {(r_i^0, r_i^1)}_{i∈[t]}), where r_pke is a freshly sampled random-tape for pke.skGen, r_com is a freshly sampled random-tape used by com.Commit to commit to an element in the secret-key space SK, and each r_i^b ← {0,1}^κ.

• pke∗.pkGen(SK∗). Parse SK∗ as (r_pke, r_com, {(r_i^0, r_i^1)}_{i∈[t]}). Let SK ← pke.skGen using random-tape r_pke. Compute PK := pke.pkGen(SK) and (c, d) := com.Commit(SK; r_com). Output (PK, c, {(r_i^0, r_i^1)}_{i∈[t]}). Output ⊥pk if any of the intermediate steps (including parsing the input) fails or outputs ⊥.

• pke∗.enc(PK∗, m). Let (c∗, d∗) ← com.Commit(PK∗) and (sigk, verk) ← sig.keyGen. Parse PK∗ as (PK, c, {(r_i^0, r_i^1)}_{i∈[t]}). For each i ∈ [t], additively secret-share (m, d∗, c∗, verk) into a pair (m_i^0, m_i^1), and define µ_i^b = r_i^b || m_i^b and γ = {pke.enc(PK, µ_i^b)}_{i∈[t], b∈{0,1}}. (γ = ⊥ if any of the steps above fails.) Let ξ = (c∗, γ) and let τ ← sig.Sign(sigk, ξ). Output (ξ, τ) as the ciphertext.

• pke∗.dec(SK∗, CT∗). Parse CT∗ as (ξ, τ), and further parse ξ as (c∗, {(CT_i^0, CT_i^1)}_{i∈[t]}). Parse SK∗ as (r_pke, r_com, {(r_i^0, r_i^1)}_{i∈[t]}). Then do the following:

1. Compute PK∗ := pke∗.pkGen(SK∗). Along the way, this obtains SK by running pke.skGen with random-tape r_pke.

2. Let µ_i^b = pke.dec(SK, CT_i^b). Parse µ_i^b as s_i^b || m_i^b, where s_i^b ∈ {0,1}^κ.

3. Check if there is a set S ⊆ [t], |S| > t/2, such that ∃ m, d∗, verk, ∀ i ∈ S: s_i^0 = r_i^0, s_i^1 = r_i^1 and m_i^0 ⊕ m_i^1 = (m, d∗, c∗, verk).

4. If so, check if PK∗ = com.Open(c∗, d∗) and sig.Verify(verk, ξ, τ) = 1.

5. If all the checks pass, output m. If any of the steps fail, output ⊥.

• pke∗.acc(obj). If obj has the form of a public-key (including ⊥pk) or ciphertext above, pke∗.acc(obj) outputs pk or ct respectively. However, if obj has the form of a secret-key, it proceeds as follows: Parse obj as (r_pke, r_com, {(r_i^0, r_i^1)}_{i∈[t]}), compute SK ← pke.skGen using random-tape r_pke, and let PK := pke.pkGen(SK). Pick κ random strings ρ_i ∈ {0,1}^ℓ and for i ∈ [κ], check if pke.dec(SK, pke.enc(PK, ρ_i)) = ρ_i; output sk if all the checks pass and output ⊥ otherwise. (Output ⊥ if obj does not have a form that matches a valid object.)
Figure 2: A COA secure PKE scheme without assuming universal key reliability for the underlying PKE scheme.
4.1 Assuming Universal Key Reliability
Let pke = (pke.skGen, pke.pkGen, pke.enc, pke.dec) be an Anon-CCA-secure public-key encryption scheme with the following correctness property:

Universal Key Reliability: For all SK that can be produced by pke.skGen with positive probability, and for any message m in the message-space, Pr[pke.dec(SK, pke.enc(PK, m)) ≠ m] is negligible, where PK = pke.pkGen(SK) (the probability being over the randomness of pke.enc).
Note that this is a relatively mild correctness requirement, as it needs to hold only for secret-keys that can actually be generated by pke.skGen, and also needs to hold only with high probability for honestly generated ciphertexts. In particular, it does not rule out the possibility that a (maliciously crafted) ciphertext could be decrypted into different messages by different secret-keys corresponding to a public-key. However, universal key reliability does go beyond the basic correctness guarantee for PKE, which requires the error probability to be negligible only when averaged also over the choice of the secret-key (i.e., there could be bad secret-keys which can behave arbitrarily, as long as they are unlikely to be produced). In the second construction below we shall remove this requirement on pke; but given that this is a natural correctness property that holds for typical encryption schemes used in practice, we describe a relatively efficient modification that can make such schemes COA secure.
Firstly, we apply a simple modification which treats the random-tape for pke.skGen as the secret-key. This has the advantage that we need not check whether a given secret-key could be valid, as all random-tapes (padded up with 0’s if necessary) are valid. This modification will let us extend the above correctness guarantee (stated for all secret-keys that can be generated by pke.skGen) to all secret-keys. Given this, the heart of our modification is to ensure that each ciphertext can be generated by at most one public-key (this lets us define pkId, at least for honestly generated ciphertexts) and all the secret-keys leading to a public-key “behave the same way” (this will let us define msgId via decryption using a secret-key identified this way).
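The “random tape as secret key” step can be sketched as a generic wrapper, assuming only that the base scheme's key generation is deterministic given its random tape. The hash-based base scheme below is a placeholder of our own (not an actual PKE), used to show the padding and re-derivation plumbing.

```python
import hashlib, secrets

class ToyBase:
    """Placeholder base scheme: keyGen is deterministic in its random tape."""
    def keyGen(self, tape):
        sk = hashlib.sha256(b"sk" + tape).digest()
        pk = hashlib.sha256(b"pk" + sk).digest()
        return sk, pk

class TapeKeyed:
    """Wrapper treating the skGen random tape itself as the secret key,
    so that every bit-string (padded with 0's) is a valid secret key."""
    def __init__(self, base, tape_len=32):
        self.base, self.tape_len = base, tape_len
    def skGen(self):
        return secrets.token_bytes(self.tape_len)
    def pkGen(self, sk_star):
        # pad/truncate the tape, then re-derive the base key pair
        tape = sk_star[: self.tape_len].ljust(self.tape_len, b"\x00")
        _, pk = self.base.keyGen(tape)
        return pk
```

With this wrapper, pkGen is a total deterministic function of the tape, which is exactly what lets the correctness guarantee extend to all secret keys.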
To ensure that all secret-keys that generate the same public-key decrypt identically, we shall simply include a commitment to the secret-key (or rather, the part of the secret-key that corresponds to the secret-key from the underlying PKE scheme pke) in our public-key. This will leave us with the problem of ensuring a well-defined pkId.
As a naïve attempt towards defining pkId, consider concatenating the public-key to the ciphertext: i.e., setting our new ciphertext as

CT∗ = (PK, pke.enc(PK, m)).

Clearly this will not be key-anonymous, but it would let us define pkId as required. To restore key-anonymity one may try to move the public-key inside the encryption, as CT∗ = pke.enc(PK, PK||m). While this does recover the Anon-CCA security property, it no longer lets us define a pkId function.
Instead, we may consider letting

CT∗ = (c, pke.enc(PK, d||m)),

where (c, d) ← com.Commit(PK) is obtained using a perfectly binding commitment scheme com.
This does not yet guarantee CCA security, as it leaves open the possibility of modifying the commitment without changing the decommitment information. Hence, instead of d||m above we shall use c||d||m (unless d||m can be used to efficiently and uniquely compute c). Further, note that for pkId, we need to uniquely associate a public-key for our scheme (and not for the given scheme pke), and hence we actually need (c, d) ← com.Commit(PK∗), where PK∗ is of the form (c′, PK) with (c′, d′) ← com.Commit(SK).
The proof of Anon-CCA security of the resulting scheme is not immediate. We carry it out in two parts. First, we show that the construction is CCA secure. Then, to prove Anon-CCA security, among other things we need to address what happens when the adversary takes the challenge ciphertext (c, ζ) and queries the wrong decryption oracle with (c′, ζ). Here we rely on the fact that our augmented public-key has high min-entropy (from the commitment that is included in it) to rule out the possibility that ζ accidentally decrypts (under the wrong secret-key) to yield c′ which is a perfectly binding commitment to the (wrong) public-key.
In Appendix C.1 we summarize the above construction (Figure 13) and the proof that it satisfies COA security.
4.2 From any Anon-CCA secure scheme
If the underlying PKE scheme does not offer universal key reliability, we need to design our scheme to achieve a similar property (the last item in existential consistency). The key to enforcing this property is to carry out a probabilistic check on the secret-key before accepting it. Roughly, a probabilistic check can be used to ensure that, with good probability, the secret-key will decrypt encryptions of random messages correctly. Then a randomization of the messages to encrypt (using secret-sharing) and an error-correction step during decryption can be used to ensure that honest encryption using every secret key that is accepted will result in correct decryption, except with negligible probability over the randomness of encryption and the randomness in the acceptance test.
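The probabilistic secret-key check can be sketched directly (cf. pke∗.acc in Figure 2): encrypt κ fresh random messages under the derived public key and verify that they decrypt correctly. A key that errs on a constant fraction of messages is then rejected except with probability exponentially small in κ. The toy scheme below is our own illustration, with an explicitly planted class of “bad” keys.

```python
import secrets

def acc_check(pkGen, enc, dec, sk, kappa=32):
    """Accept sk only if it decrypts kappa fresh encryptions of random
    messages correctly -- a sketch of the check in pke*.acc."""
    pk = pkGen(sk)
    for _ in range(kappa):
        rho = secrets.token_bytes(8)
        if dec(sk, enc(pk, rho)) != rho:
            return False
    return True

# Toy scheme: "good" keys decrypt by XOR; keys ending in '!' garble
# every message, modeling a key without universal reliability.
pkGen = lambda sk: sk[:8]
enc   = lambda pk, m: bytes(a ^ b for a, b in zip(pk, m))
def dec(sk, ct):
    m = bytes(a ^ b for a, b in zip(sk[:8], ct))
    return m if sk[-1:] != b"!" else bytes(8)
```

The full construction additionally secret-shares and error-corrects so that keys which pass the check also decrypt honest ciphertexts correctly with overwhelming probability.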
However, these modifications affect Anon-CCA security. The error-correction feature provides the adversary with a way to corrupt a ciphertext and still have it decrypted; this is prevented using a combination of (strong, one-time) signatures and the techniques from the previous construction.
The use of secret-sharing introduces another avenue for attack, which is thwarted by appending to the shares “tags” which are included in the public-key. The full construction is presented in Figure 2. The proof of security, which establishes Theorem 1, is presented in Appendix C.2.
4.3 Practical COA Secure Schemes
We point out that COA-security can be achieved with a relatively low overhead, and hence it would be fairly practical to require PKE schemes used in practice to meet this extra security requirement.
Hybrid Encryption. It is easy to see that hybrid encryption, by combining with a CCA secure symmetric-key encryption scheme, preserves COA security. Note that in such a hybrid scheme, the overhead of COA security applies only to short messages (keys), independent of the actual size of the data being encrypted.
Existing PKE Schemes. The Cramer-Shoup encryption scheme [16], with a minor modification, satisfies COA security. The modification, which was proposed by Abdalla et al. [1], simply declares invalid a pathological ciphertext that is independent of the public-key. It was shown in [1] that this leaves the scheme Anon-CCA secure (and also makes it meet a robustness property). For naturally defined algorithms acc, pkId and msgId (with acc requiring efficient recognizability of the group elements), it can be verified that this scheme satisfies existential consistency too.
Relaxing COA Security. COA security can be relaxed by restricting to uniform adversaries, or by allowing a random-oracle (see Appendix D). Then, the commitment can be replaced by a computationally binding commitment or even a random-oracle-based commitment.
By the above observations, combined with the fact that many of the practical PKE schemes satisfy the mild universal key reliability condition (allowing the first transformation above to be used), obtaining COA security has a very low overhead in terms of computation and communication.
5 Extending Cryptographic Agents
Cryptographic Agents [2] provides a framework that naturally models all cryptographic objects (keys as well as ciphertexts, in the case of PKE) as transferable agents, which the users can manipulate only via an idealized interface. But the original framework of [2] does not capture attacks involving maliciously created objects, as only the honest user (Test) is allowed to create objects. Hence, in the case of encryption, even CCA security could not be modeled in this framework. Here we present the technical details of the (Extended) Cryptographic Agents framework which we develop and use. Appendix A.3 gives a summary of the substantial differences between the new model and the original model of [2]. Note that this framework itself is not specialized for PKE; that is carried out in Section 6.
5.1 The New Model
Agents are interactive Turing machines with tapes for input, output, incoming communication, outgoing communication, randomness and work-space. A schema defines the behavior of agents corresponding to a cryptographic functionality. Multiple agents of a schema can interact with one another in a session (as detailed in Appendix B.1).
Ideal World Model. A schema Σ is simply a family of agents; typically, this family will have a single agent which will behave differently depending on the contents of its work-tape. (Jumping ahead, the PKE schema Σpke in Section 6 has an agent that can behave as a secret-key, public-key or ciphertext.) The ideal system for a schema Σ consists of two parties Test and User and a fixed third party B[Σ] (for “black-box”). All three parties are probabilistic polynomial time (PPT) ITMs, and have a security parameter κ built in. Test and User may be non-uniform. Test receives a test-bit b as input and User produces an output bit b′.
B[Σ] maintains two lists of handles, RTest and RUser, which contain the handles belonging to Test and User respectively. Each handle in these lists is mapped to an agent. At the beginning of an execution, both lists are empty. While Test and User can talk to each other arbitrarily, their interaction with B[Σ] can be summarized as follows:
• Creating agents. Test and User can, at any point, choose an agent from Σ and send it to B[Σ] for instantiation. More precisely, they can send a command (init, P, str) to B[Σ], where P ∈ Σ and str is an initial input for the agent. Then, B[Σ] will instantiate the agent (with an empty work-tape) and run it with str and the security parameter as inputs. It then stores (h, config) in the list of the party who sent the command (RTest or RUser), where config is the agent’s configuration after the execution and h is a new handle (say, simply, the number of handles stored so far in the list); h is returned to the relevant party (Test or User).
• Request for Session Execution. At any point in time, Test or User may request the execution of a session. We describe the process when Test requests a session execution; the process for User is symmetric.

Test can send a command (run, (h1, x1), . . . , (ht, xt)), where the hi are handles in the list RTest, and the xi are input strings for the corresponding agents.⁷ B[Σ] executes a session with the agents whose starting configurations in RTest correspond to the specified handles, with their respective inputs, till it terminates. It obtains a collection of outputs (y1, . . . , yt) and updated configurations of the agents. It generates new handles h′1, . . . , h′t corresponding to the updated configurations, adds them to RTest, and returns (h′1, . . . , h′t, y1, . . . , yt) to Test. If an agent halts in a session, no new handle h′i is given out for that agent. After a session, the old handles for the agents are not invalidated; so a party can access a configuration of an agent any number of times, by using the same handle.
• Transferring agents. Test can send a command (transfer, h) to B[Σ], upon which it looks up the entry (h, config) in RTest (if such an entry exists), adds an entry (h′, config) to RUser, where h′ is a new handle, and sends the handle h′ to User. Symmetrically, User can transfer an agent to Test using the transfer command.
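The bookkeeping performed by B[Σ] can be sketched in code, with the simplification (ours) that agents are pure functions (config, input) → (new config, output) and sessions run each agent independently for one step; real agents are interactive Turing machines that communicate within a session. The sketch shows the handle discipline: fresh handles after every session, old handles never invalidated, and transfers copying configurations between the parties' lists.

```python
class BlackBox:
    """Sketch of B[Sigma]'s handle bookkeeping (init / run / transfer).
    Handles are per-party list indices into a handle -> config table."""
    def __init__(self):
        self.repo = {"Test": [], "User": []}
    def _add(self, party, config):
        self.repo[party].append(config)
        return len(self.repo[party]) - 1          # fresh handle
    def init(self, party, agent, s):
        config, _ = agent(None, s)                # run agent on initial input
        return self._add(party, config)
    def run(self, party, agent, pairs):
        results = []
        for h, x in pairs:                        # old handles remain valid
            config, y = agent(self.repo[party][h], x)
            results.append((self._add(party, config), y))
        return results
    def transfer(self, src, dst, h):
        return self._add(dst, self.repo[src][h])  # copy configuration across
```

A toy “counter” agent suffices to exercise the interface: its configuration is a running total, and each run adds its input.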
We define the random variable ideal⟨Test(b) | Σ | User⟩ to be the output of User in an execution of the above system, when Test gets b as the test-bit. We write ideal⟨Test | Σ | User⟩ to denote the output when the test-bit is a uniformly random bit. We also define Time⟨Test | Σ | User⟩ as the maximum number of steps taken by Test (with a random input), B[Σ] and User in total.
In this work, we use the notion of statistical hiding in the ideal world as introduced in [3], rather than the original notion used in [2]. (This still results in a security definition that subsumes the traditional definitions, as they involve tests that are statistically hiding.)
Definition 4 ((Statistical) Ideal world hiding). A Test is s-hiding w.r.t. a schema Σ if, for all unbounded users User who make at most a polynomial number of queries,

ideal⟨Test(0) | Σ | User⟩ ≈ ideal⟨Test(1) | Σ | User⟩.
Real World Model. The real world for a schema Σ consists of two parties Test and User that interact with each other arbitrarily, as in the ideal world. However, the third party B[Σ] in the ideal world is replaced by two other parties I[Π, RepoTest] and I[Π, RepoUser] (when User is honest), which run the algorithms specified by a cryptographic scheme Π. A cryptographic scheme (or simply scheme) Π is a collection of stateless (possibly randomized) algorithms Π.init, Π.run and Π.receive, which use a repository Repo to store a mapping from handles to objects. More precisely, the repository is a table with entries of the form (h, obj), where h is a unique handle (say, a non-negative integer) and obj is a cryptographic object (represented, for instance, as a binary string). At the start of an execution, Repo is empty.
If a scheme implementation (I[Π, RepoTest] or I[Π, RepoUser]) receives input (init, P, str), then it runs Π.init(P, str) to obtain an object obj, which is added to Repo, and a handle is returned. If it receives the command (run, (h1, x1), · · · , (ht, xt)), then the objects (obj1, . . . , objt) corresponding to (h1, . . . , ht) are retrieved from Repo and Π.run((obj1, x1), . . . , (objt, xt)) is evaluated to obtain ((obj′1, y1), . . . , (obj′t, yt)), where the obj′i are new objects and the yi are output strings; the objects are added

⁷If a handle appears more than once among h1, . . . , ht, it is interpreted as multiple agents with the same configuration (but possibly different inputs).