%%% Deniably authenticated asymmetric encryption
%%%
%%% Copyright (c) 2010 Mark Wooding
%%% Licence: CC-BY

%% things that need sorting out
%%
%% * doing something about the sender-simulator oracle
%% * sorting out the standard-model auth token
%% * tag auth tokens, and non-directable KEMs.
%%
%% interesting remarks
%%
%% * deniable auth token using ring signatures!

\ifx\iflncs\notexist
\expandafter\let\csname iflncs\expandafter\endcsname\csname iffalse\endcsname
\fi

\iflncs
\documentclass[runningheads]{llncs}
\else
\documentclass{strayman}
\fi

\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\iflncs\else
\usepackage[palatino, helvetica, courier, maths = cmr]{mdwfonts}
\usepackage[mdwmargin]{mdwthm}
\fi
\usepackage{mdwtab, mathenv, mdwmath, crypto, mdwref, mdwlist}
\usepackage{tikz}

\usetikzlibrary{%
  fit,positioning,calc,%
  decorations.markings,%
  decorations.pathreplacing}

%%\makeatletter\def\th@construction{\th@base\relax}
%%\theoremstyle{construction}
%%\newtheorem{construction}[theorem]{Construction}
%%\defxref{con}{construction}
\iflncs\else
\numberwithin{theorem}{subsection}
\numberwithin{equation}{subsection}
\numberwithin{figure}{subsection}
\fi

\title{Deniably authenticated asymmetric encryption}
\author{Mark Wooding}

\def\Bin{\Sigma}
\def\random{\$}
\def\bitsto{\mathop{..}}
\let\epsilon\varepsilon
\let\emptystring\epsilon
\let\emptyset\varnothing
\let\mdwPr\Pr \def\Pr{\mdwPr\nolimits}
\let\len\ell
\def\proto#1{\Pi_{\text{\/\normalfont\scshape#1\/}}}
\def\Thing#1#2#3{%
  \text{\normalfont\bfseries#1}^{\text{\normalfont\scshape#2}}_{#3}}
\def\Xid#1#2{\textsc{$#1$-#2}}

\newenvironment{notation}{%
  \dimen8=\textwidth%
  \advance\dimen8-4em%
  \begin{tabular}[L]{@{\qquad}>{$}p{0.4\dimen8}<{$}p{0.6\dimen8}}%
}{%
  \end{tabular}%
}

\renewcommand\topfraction{.8}

\tikzset{
  box/.style = {draw, minimum size = 16pt, fill = #1},
  op/.style = {box = #1, shape = circle},
  rounded/.style = {rounded corners = 2mm},
  offset/.style = {transform canvas = {shift = {#1}}},
  node distance = 5mm,
  > = stealth
}

\def\fixme{\marginpar{FIXME}}
\def\cn{\fixme\textsuperscript{[\textcolor{blue}{citation needed}]}}

\bibliographystyle{mdwalpha}

\begin{document}

\maketitle

\begin{abstract}
  We consider the notion of \emph{deniably authenticated asymmetric
  encryption}: briefly, Bob can encrypt a message and send the ciphertext
  to Alice; Alice (and nobody else other than maybe Bob) can decrypt the
  message; Alice is convinced that Bob actually sent the message; and nobody
  can prove this to anyone else.

  We present formal definitions of security for this new notion, and offer
  two efficient instantiations. One is a generic construction, using any
  key-encapsulation mechanism, signature scheme, and symmetric authenticated
  encryption scheme, but it meets a relatively weak notion of deniability;
  the other has security based on the computational Diffie--Hellman problem,
  and provides strong deniability, but its security is proven only in the
  random oracle model.
\end{abstract}

\thispagestyle{empty}
\newpage
\tableofcontents
\newpage

%%%--------------------------------------------------------------------------
\section{Introduction}
\label{sec:intro}

\subsection{Authenticated asymmetric encryption}
\label{sec:intro.aae}

Secure email protocols, such as PGP
\cite{Zimmermann:1995:PSC,rfc1991,rfc2440,rfc4880} and S/MIME
\cite{rfc2311,rfc2633,rfc3851}, attempt to provide both \emph{secrecy} for
the messages they handle, and \emph{authenticity}. The former property,
informally, means that Bob can send a message to Alice and be confident that
nobody else can read it. The latter means that Alice can be confident that
the message was really sent by Bob.

This is commonly achieved by a generic sign-then-encrypt construction: the
message plaintext is signed using a standalone digital-signature algorithm
with the sender's private key, and then the message and its signature
together are encrypted with the recipient's public key:
\[ y = E_A([m, S_b(m)]) \]
This construction, among others, is analysed by An, Dodis, and Rabin
\cite{An:2002:SJS,cryptoeprint:2002:046} and shown to provide formally
defined notions of secrecy and authenticity.
As noticed by Davis \cite{Davis:2001:DSE}, the sign-then-encrypt approach has
some surprising failure modes. For example, Alice can re-encrypt the signed
message using, say, Carol's public key, and send it on:
\[ y' = E_C([m, S_b(m)]) \]
This can obviously cause confusion if the message doesn't encode the identity
of the intended recipient. But there are worse possibilities. For example,
if Alice and Bob are having an affair, each signed-and-encrypted
\emph{billet-doux} becomes potential blackmail material: Alice might threaten
to publish the signed message
\[ m, S_b(m) \]
if her demands aren't met. If Alice is a journalist and Bob is a source,
leaking secrets to her, then the signed leaks are incriminating evidence.

The encrypt-then-sign construction makes this sort of `attack' less trivial,
but still possible. The signature is applied to a ciphertext encrypted using
Alice's public key, so an \emph{unaided} third party should be unable to
verify that the ciphertext matches any particular claimed plaintext. But
there will usually be some additional information that Alice can publish to
show the correlation. For hybrid schemes, where an asymmetric encryption
scheme is used to transfer a symmetric key, publishing the symmetric key is
usually sufficient. In the case of asymmetric schemes based on trapdoor
permutations (e.g., RSA \cite{Rivest:1978:MOD}) she can publish the preimage
of the ciphertext. In the case of schemes based on Diffie--Hellman key
agreement (e.g., ElGamal's scheme \cite{ElGamal:1985:PKCb} or DLIES
\cite{cryptoeprint:1999:007,Abdalla:2001:DHIES,IEEE:2000:1363}) even
verifying the shared secret may be hard for third parties (this is the
decisional Diffie--Hellman problem), but Alice can additionally publish a
noninteractive zero-knowledge proof that the shared secret is
correct.\footnote{%
  We'll work in a group $(G, +)$ with $p = \#G$ prime and generated by $P$.
  Let $a \inr \gf{p}$ be Alice's private key, with $A = a P$. For ElGamal,
  Bob encodes his message as $M \in G$, chooses $r \inr \gf{p}$, computes $R
  = r P$ and $Z = r A$, and sends $(R, Z + M)$ as his ciphertext. Alice can
  publish $Z$, but must prove that $Z = a R$. We assume a random oracle
  $H\colon G^2 \to \gf{p}$. She chooses $u \inr \gf{p}$, computes $c = H(u
  P, u R)$ and $v = u - c a$, and publishes $(Z, c, v)$. To verify, check
  that $H(v P + c A, v R + c Z) = c$. Completeness is immediate; soundness
  holds by rewinding and mutating the random oracle -- one obtains a new $c'$
  and $v'$ for the same $u$, and can solve for $a$; and the simulator fixes
  up the random oracle after choosing $v$ at random. The structure of this
  proof follows \cite{Maurer:2009:UZK}.} %
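
The footnote's proof system can be exercised concretely. The Python sketch
below works in a toy group -- the order-11 subgroup of the integers modulo
23, writing $g^a$ for the paper's $a P$. The tiny parameters, the fixed
nonce $u$, and the SHA-256 hash-to-scalar are illustrative assumptions, not
part of the paper.

```python
import hashlib

# Toy group: the order-11 subgroup of the integers mod 23, generated by
# g = 2; multiplicative notation, so g**a stands for the paper's a P.
n, q, g = 23, 11, 2            # modulus, group order, generator

def H(S, T):
    """Random-oracle stand-in: hash a pair of group elements into Z_q."""
    data = ("%d|%d" % (S, T)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(a, R):
    """Alice's proof that Z = R^a, given her public key A = g^a."""
    u = 7                      # should be drawn at random; fixed here
    c = H(pow(g, u, n), pow(R, u, n))
    v = (u - c * a) % q
    return c, v

def verify(A, R, Z, c, v):
    """Check H(g^v A^c, R^v Z^c) = c, as in the footnote."""
    return H(pow(g, v, n) * pow(A, c, n) % n,
             pow(R, v, n) * pow(Z, c, n) % n) == c

a, A = 3, pow(g, 3, n)         # Alice's key pair: A = g^a
r, R = 5, pow(g, 5, n)         # Bob's ephemeral value: R = g^r
Z = pow(A, r, n)               # shared secret, also equal to R^a
c, v = prove(a, R)
assert verify(A, R, Z, c, v)   # completeness: honest proof accepted
```

Completeness holds by the same algebra as in the footnote: $g^v A^c = g^u$
and $R^v Z^c = R^u$, so the verifier recomputes exactly the hash input the
prover used.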

Similarly, `signcryption' schemes \cite{Zheng:1997:DSH} usually provide a
`non-repudiation' property. \fixme

With this in mind, it seems a shame that most work on asymmetric
authentication concentrates on \emph{non-repudiation} -- ensuring that Bob's
signature (or equivalent) can be demonstrated to third parties, even without
Bob's later cooperation -- or even despite his opposition.

\subsection{Related work}
\label{sec:intro.related}

Dolev, Dwork, and Naor's work \cite{Dolev:1991:NC} on nonmalleable
cryptography defines a simple asymmetric authentication protocol. If $m$ is
some message that Bob wants Alice to authenticate, he chooses a random string
$\rho$, and sends her $(m, \rho)$ encrypted using a nonmalleable encryption
scheme under Alice's public key. To authenticate the message~$m$, Alice
replies with $\rho$. The authors prove their protocol's security, on the
assumption that the encryption is nonmalleable, and comment without proof on
the `plausible deniability' that it offers.
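
The round structure of this protocol can be sketched as follows. The
`encryption' below is a keyed-handle lookup standing in for a real
nonmalleable scheme -- only the message flow is being illustrated, and the
class and names are our own invention.

```python
import secrets

# Message flow of the Dolev--Dwork--Naor authentication protocol.
class Recipient:
    """Alice: holds the decryption capability for her own public key."""
    def __init__(self):
        self._inbox = {}
    def encrypt(self, plaintext):      # anyone may call: models E_A(.)
        handle = secrets.token_hex(8)
        self._inbox[handle] = plaintext
        return handle
    def decrypt(self, handle):         # only Alice can do this
        return self._inbox[handle]

alice = Recipient()
m = b"meet at noon"                    # message Bob wants authenticated
rho = secrets.token_bytes(16)          # Bob's fresh random string
ct = alice.encrypt((m, rho))           # Bob -> Alice: E_A([m, rho])
m_recv, rho_recv = alice.decrypt(ct)   # Alice opens the ciphertext
assert rho_recv == rho                 # Alice -> Bob: replying rho
                                       # authenticates m
```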

Dwork, Naor, and Sahai \cite{cryptoeprint:1999:023} address deniability in
the context of concurrent zero-knowledge interactive proofs. They improve
the protocol of \cite{Dolev:1991:NC}, showing that the new version is
deniable under \emph{sequential} composition. They also \fixme
%%% read this and finish description

Di~Raimondo and Gennaro \cite{Raimondo:2005:NAD} \fixme
%%% ought to actually read this

There is an active literature on the topic of deniably authenticated key
exchange, where the participants are confident that they are communicating
securely with each other, but are unable to prove this to a third party.
Di~Raimondo, Gennaro, and Krawczyk \cite{cryptoeprint:2006:280} define
deniably authenticated key exchange and prove deniability properties of some
IPSEC subprotocols.

The \emph{Off the Record} protocol for instant-messaging systems
\cite{Borisov:2004:OTR,Alexander:2007:IUA} provides strong deniability for
users, for example by publishing authentication keys at the ends of
conversations. The Wrestlers key-exchange protocol
\cite{cryptoeprint:2006:386} provides strong deniability, and has been
implemented as part of a virtual private network system
\cite{Wooding:2001:TrIPE}.

All of the deniable protocols described above are fundamentally
\emph{interactive}, and therefore unsuitable for use in an encryption scheme.
On the other hand, we have an advantage over the designers of plain deniable
authentication protocols, since we can assume that both the sender \emph{and}
the recipient have public keys. This can be seen to be essential in a
non-interactive scheme as follows. If an authentication scheme doesn't
somehow identify a specific recipient (or set of recipients) then either
everyone can verify authenticity or nobody can. In the latter case, the
scheme is useless; in the former, it is undeniable. An interactive protocol
can identify the recipient implicitly through the interaction. A
non-interactive scheme doesn't have this luxury; the only means remaining is
to assume that the recipient knows something -- i.e., a private key.

Other related concepts include \emph{chameleon signatures} and \emph{ring
signatures}. A chameleon signature \cite{cryptoeprint:1998:010} allows a
designated recipient to satisfy himself as to the origin of a message, and to
create forgeries -- so the recipient is unable to convince third parties that
a message is authentic; however, the sender is able to prove a forgery, and a
failure to provide such proof may be considered an admission of authenticity.
(We shall discuss chameleon signatures further in \ref{sec:deny.weak}.) A
ring signature \cite{Rivest:2001:HLS} lets a sender `hide' among a set of
users -- without their participation -- and sign a message in such a way as
to convince a recipient that some member of the set signed it, but with no
way to determine which. (This differs from the group signatures of
\cite{Chaum:1991:GS} in two respects: firstly, the participants in a group
signature scheme are explicitly enrolled into it; and secondly, a group
signature scheme includes an authority who can reveal the individual
participant who signed a particular message.)

\fixme Naccache \cite{Naccache:2010:ITC} asked
\begin{quote}
  Construct a non-interactive authenticated PKE scheme:
  \begin{itemize}
  \item Alice encrypts a message for Bob and mixes with it a secret.
  \item Bob can ascertain that the message came from Alice.
  \item Bob cannot convey this conviction to anybody.
  \end{itemize}
\end{quote}

\subsection{Our contribution}
\label{sec:intro.contribution}

We provide formal definitions for deniably authenticated asymmetric
encryption. We give a `strong' definition, which allows a sender to deny
convincingly ever having communicated with a recipient, and a `weak'
definition, where it may be possible to prove the fact of communication, but
not the contents. (This latter definition turns out to be rather
complicated.)

We also describe simple and efficient schemes which meet these definitions of
security. Our weakly deniable scheme is generic: it uses an asymmetric
key-encapsulation mechanism, a digital signature scheme, and an authenticated
symmetric encryption scheme; security is proven in the standard model. We
show that Diffie--Hellman key distribution \cite{Diffie:1976:NDC} is a basis
for a strongly deniable authenticated encryption scheme. We describe such a
scheme and prove security, assuming the difficulty of the computational
Diffie--Hellman problem, in the random oracle model.

%%%--------------------------------------------------------------------------
\section{Preliminaries}
\label{sec:pre}

\subsection{Binary strings and encodings}
\label{sec:bin}

We write $\Bin = \{ 0, 1 \}$ as the set of binary digits; $\Bin^t$ is the set
of $t$-bit strings, and $\Bin^* = \bigcup_{0\le i} \Bin^i$ is the set of all
binary strings. We write $\emptystring$ for the empty string. If $m$ is a
binary string then $\len(m)$ is the length of $m$; so $m \in \Bin^{\len(m)}$,
and $\len(\emptystring) = 0$. If $0 \le i < \len(m)$ then $m[i]$ is the
$i$th bit of $m$. If $m$ and $n$ are two strings then $m \cat n$ is their
concatenation; we have $\len(m \cat n) = \len(m) + \len(n)$; $m \cat
\emptystring = \emptystring \cat m = m$; and $(m \cat n)[i] = m[i]$ if $i <
\len(m)$ and $n[i - \len(m)]$ otherwise. If $0 \le i \le j \le \len(m)$ then
$m[i \bitsto j]$ is the substring starting with bit~$i$ and ending before
bit~$j$ -- so $\len(m[i \bitsto j]) = j - i$, $m[i \bitsto j][k] = m[i + k]$,
and $m = m[0 \bitsto i] \cat m[i \bitsto j] \cat m[j \bitsto \len(m)]$.
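
These identities map directly onto Python's slice notation, with
\texttt{m[i:j]} playing the role of $m[i \bitsto j]$; a quick sanity check
(the particular strings are arbitrary):

```python
# Spot-check the string identities from the text using Python slices.
m, n = "0110100", "101"
i, j = 2, 5
assert len(m[i:j]) == j - i                      # len(m[i..j]) = j - i
assert all(m[i:j][k] == m[i + k] for k in range(j - i))
assert m == m[0:i] + m[i:j] + m[j:len(m)]        # the splitting identity
assert len(m + n) == len(m) + len(n)             # len(m || n)
assert (m + n)[len(m) + 1] == n[1]               # (m || n)[i] for i >= len(m)
```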

We shall assume the existence of a deterministic, unambiguous encoding of
integers, group elements, and other such objects, as bit strings. Rather
than build a complicated formalism, we shall simply write $[a, b, c, \ldots]$
as the encoding of items $a$, $b$, $c$, \dots If $n$ is an integer with $0
\le n < 2^k$ then $[n]_k$ is a $k$-bit encoding of $n$.
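
For illustration, one plausible concrete reading of $[n]_k$ is a $k$-bit
big-endian string; the text leaves the encoding abstract, so the bit order
here is our assumption.

```python
# A k-bit big-endian reading of the [n]_k encoding (bit order is an
# illustrative assumption; the paper leaves the encoding abstract).
def encode(n, k):
    assert 0 <= n < 2 ** k                # the stated precondition
    return format(n, "0{}b".format(k))

assert encode(5, 8) == "00000101"
assert len(encode(0, 4)) == 4
```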

We assume the existence of a distinguished object~$\bot$ (pronounced
`bottom'), and a set of \emph{atomic symbols}, written in
\cookie{sans-serif}, whose purpose is simply to be distinct from all other
objects.

\subsection{Algorithms, oracles, and resource bounds}
\label{sec:alg}

We present algorithms using a fairly standard imperative notation. We write
$x \gets \tau$ to mean that variable~$x$ is to be assigned the value of the
term~$\tau$. If $S$ is a set, then $x \getsr S$ means that $x$ is to be
assigned an element of $S$ chosen uniformly and independently at random.
Variables are globally scoped. Indentation is used to indicate the extent of
iterated and conditional fragments.

We make frequent use of implicit `destructuring' (or `pattern matching') in
algorithm descriptions: a tuple (in parentheses, as is conventional) or
encoded sequence (in square brackets) containing variables may appear on the
left of an assignment, or as a formal argument to a procedure, indicating
that the corresponding right-hand side or actual argument is to be decoded
and split up according to the structure of the pattern.

Oracles provided to algorithms are written as superscripts; the inputs
supplied in an oracle query are written as `$\cdot$'.

We work throughout in the tradition of concrete security, as initiated in
\cite{Bellare:1994:SCB}. We consider adversaries operating under explicit
resource bounds of time and numbers of oracle queries, and provide explicit
bounds on their success probabilities and advantages. We find that this
approach both simplifies presentation -- since we no longer have to deal with
families of objects indexed by security parameters or other artificial
features of the asymptotic approach -- and provides more practically useful
results. (As an aside, we remark that, \cite{Goldreich:1997:FMCa}
notwithstanding, it's much easier to derive asymptotic results from concrete
ones than \emph{vice versa}.)

We make use of the random oracle model, as formalized in
\cite{Bellare:1993:ROP}. Rather than give separate definitions for security
with random oracles, we shall quietly include a bound $q_H$ on an adversary's
random oracle queries when attacking a scheme which uses random oracles.

The model of computation used to measure running times is not specified -- or
especially important to our results. We do note, however, that running times
\emph{include} the time taken by the `game' infrastructure, including any
time needed for oracles to compute their results. In order to take into
account algorithms which make use of precomputed tables, we also include in
the running time a term proportional to the size of the algorithm's
description according to the (unspecified) computational model.

\subsection{Useful results}
\label{sec:handy}

The following simple lemmata will be used in our security proofs. The first
is by Shoup; the second seems to be folklore; and the third is an abstraction
of various results.

\begin{lemma}[Difference lemma \cite{Shoup:2002:OR}]
  \label{lem:shoup}

  Let $S$, $T$, and $F$ be events such that
  \[ \Pr[S \mid \bar{F}] = \Pr[T \mid \bar{F}] \]
  Then
  \[ \Pr[S] - \Pr[T] \le \Pr[F] \]
\end{lemma}
\begin{proof}
  A simple calculation:
  \begin{eqnarray*}[rl]
    \Pr[S] - \Pr[T]
    & = (\Pr[S \land F] + \Pr[S \land \bar{F}]) -
        (\Pr[T \land F] + \Pr[T \land \bar{F}]) \\
    & = (\Pr[S \land F] - \Pr[T \land F]) +
        \Pr[\bar{F}] (\Pr[S \mid \bar{F}] - \Pr[T \mid \bar{F}]) \\
    & \le \Pr[F] \eqnumber[\qed]
  \end{eqnarray*}
\end{proof}

\begin{lemma}[Advantage lemma]
  \label{lem:advantage}

  Suppose that $b \inr \{0, 1\}$ is uniformly distributed. Let $a \in \{0,
  1\}$ be a random bit, not necessarily uniform or independent of $b$. Then
  \[ \Pr[a = 1 \mid b = 1] - \Pr[a = 1 \mid b = 0] = 2 \Pr[a = b] - 1 \]
\end{lemma}
\begin{proof}
  Another calculation:
  \begin{eqnarray*}[rl]
    \Pr[a = 1 \mid b = 1] - \Pr[a = 1 \mid b = 0]
    & = \Pr[a = 1 \mid b = 1] + \Pr[a = 0 \mid b = 0] - 1 \\
    & = \frac{\Pr[a = b \land b = 1]}{\Pr[b = 1]} +
        \frac{\Pr[a = b \land b = 0]}{\Pr[b = 0]} - 1 \\
    & = 2(\Pr[a = b \land b = 1] + \Pr[a = b \land b = 0]) - 1 \\
    & = 2\Pr[a = b] - 1 \eqnumber[\qed]
  \end{eqnarray*}
\end{proof}

\begin{lemma}[Randomization lemma]
  \label{lem:random}

  Let $X$ and $Y$ be countable sets, and let $f\colon X \times Y \to \{0,
  1\}$ be a function. Suppose $y$ is a random variable taking values in $Y$
  such that, for all $x \in X$,
  \[ \Pr[f(x, y) = 1] \le \epsilon \]
  for some $\epsilon$. Let $(x_0, x_1, \ldots, x_{q-1})$ be random variables
  taking values from $X$, and independent of $y$. Then
  \[ \Pr\biggl[\bigvee_{0\le i<q} f(x_i, y) = 1\biggr] \le q \epsilon \]
\end{lemma}
\begin{proof}
  Another calculation:
  \begin{eqnarray*}[rl]
    \Pr\biggl[\bigvee_{0\le i<q} f(x_i, y) = 1\biggr]
    & \le \sum_{0\le i<q} \Pr[f(x_i, y) = 1] \\
    & = \sum_{0\le i<q} \sum_{x\in X} \Pr[f(x_i, y) = 1 \land x_i = x] \\
    & = \sum_{0\le i<q} \sum_{x\in X}
        \Pr[f(x_i, y) = 1 \mid x_i = x] \Pr[x_i = x] \\
    & = \sum_{0\le i<q} \sum_{x\in X} \Pr[f(x, y) = 1] \Pr[x_i = x] \\
    & \le \sum_{0\le i<q} \sum_{x\in X} \epsilon \cdot \Pr[x_i = x] \\
    & = q \epsilon \eqnumber[\qed]
  \end{eqnarray*}
\end{proof}

%%%--------------------------------------------------------------------------
\section{Definitions}
\label{sec:defs}

\subsection{Diffie--Hellman problems}
\label{sec:dh}

Throughout this section, let $(G, +)$ be a (necessarily cyclic) group of
prime order, written additively; let $p = \#G$ be its order, and let $P$ be a
generator of $G$. In order to simplify notation, we shall think of $G$ as
being a $\gf{p}$-vector space; group elements (vectors) will be given
uppercase letters, while elements of $\gf{p}$ (scalars) will be given
lowercase letters.

The computational Diffie--Hellman problem in $G$ is to determine $x y P$
given only $x P$ and $y P$. The problem is named after the authors of
\cite{Diffie:1976:NDC}, whose key distribution scheme relies on the
difficulty of this problem in the group $\gf{p}^*$ of units in a prime finite
field.

More formally, we use the following definition.

\begin{definition}[Computational Diffie--Hellman]
  \label{def:cdh}

  If $A$ is any algorithm, then its \emph{advantage} in solving the
  computational Diffie--Hellman problem (CDH) in $G$ is given by
  \[ \Adv{cdh}{G}(A) =
       \Pr[\textrm{$x \getsr \gf{p}$; $y \getsr \gf{p}$} :
           A(x P, y P) = x y P]
  \]
  The CDH insecurity function of $G$ is then
  \[ \InSec{cdh}(G; t) = \max_A \Adv{cdh}{G}(A) \]
  where the maximum is taken over all algorithms~$A$ completing the game in
  time~$t$.
\end{definition}

We shall also make use of the \emph{twin Diffie--Hellman} problem
\cite{cryptoeprint:2008:067}: given $x P$, $x' P$ and $y P$, to find $x y P$
and $x' y P$, given an oracle which can decide, given $(U, V, V')$, whether
$V = x U$ and $V' = x' U$.

\begin{definition}[Twin Diffie--Hellman]
  \label{def:2dh}

  An algorithm $A$'s ability to solve the twin Diffie--Hellman problem in a
  group~$G$ is measured using the following game.
  \begin{program}
    $\Game{2dh}{G}(A)$: \+ \\
      $x \getsr \gf{p}$;
      $x' \getsr \gf{p}$;
      $y \getsr \gf{p}$; \\
      $(Z, Z') \gets A^{\id{2dhp}(\cdot, \cdot, \cdot)}(x P, x' P, y P)$; \\
      \IF $Z = x y P \land Z' = x' y P$ \THEN \RETURN $1$ \ELSE \RETURN $0$;
    \- \\[\medskipamount]
    $\id{2dhp}(U, V, V')$: \+ \\
      \IF $V = x U \land V' = x' U$ \THEN \RETURN $1$ \ELSE \RETURN $0$;
  \end{program}
  If $A$ is any algorithm, then its \emph{advantage} in solving the twin
  Diffie--Hellman problem (2DH) in $G$ is given by
  \[ \Adv{2dh}{G}(A) = \Pr[\Game{2dh}{G}(A) = 1] \]
  Finally, the 2DH insecurity function of~$G$ is given by
  \[ \InSec{2dh}(G; t, q) = \max_A \Adv{2dh}{G}(A) \]
  where the maximum is taken over all algorithms~$A$ completing the game in
  time~$t$ and making at most~$q$ queries to the \id{2dhp} oracle.
\end{definition}

This all seems rather artificial; but \cite{cryptoeprint:2008:067} relates
2DH security tightly to plain CDH security. The main trick is as follows.

\begin{lemma}[Detecting 2DH triples]
  \label{lem:2dh-detect}

  Let $G$ be a cyclic group generated by~$P$, with prime order $p = \#G$.
  Let $X \in G$ be any group element; let $r$ and $s$ be any elements of
  $\gf{p}$, and set $X' = r P + s X$. If $Y = y P$ for some $y \in \gf{p}$,
  and $Z$, $Z'$ are two further group elements, then
  \begin{enumerate}
  \item if $Z = y X$, then $Z' = y X'$ if and only if $Z' = r Y + s Z$; and,
    conversely,
  \item if $Z \ne y X$ then there is precisely one possible value of $s$ for
    which $Z' = r Y + s Z$.
  \end{enumerate}
\end{lemma}
\begin{proof}
  For the first statement, suppose that $Z = y X$; then $y X' = y (r P + s X)
  = r (y P) + s (y X) = r Y + s Z$. For the second, suppose now that $Z \ne
  y X$, but
  \[ Z' = r Y + s Z \]
  holds anyway. We already have $X' = r P + s X$, so
  \[ y X' = r Y + s y X \]
  Subtracting the latter equation from the former gives
  \[ Z' - y X' = s (Z - y X) \]
  Since $Z - y X \ne 0$, it generates $G$; hence there is a single
  satisfying $s \in \gf{p}$, as claimed.
\end{proof}
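
Both statements can be checked numerically in a toy group. Below we take
$G = (\mathbb{Z}_{11}, +)$ with generator $P = 1$, so that scalar
multiplication is just multiplication modulo 11; the particular values of
$x$, $y$, $r$, and $s$ are arbitrary choices.

```python
# Numerical check of the trapdoor test in the toy group (Z_11, +),
# generator P = 1: scalar multiplication is multiplication mod 11.
p = 11
x, y, r, s = 3, 4, 5, 7
X, Y = x % p, y % p                 # X = x P, Y = y P
Xp = (r + s * x) % p                # X' = r P + s X
Z = (y * x) % p                     # Z = y X: a genuine DH value
Zp = (y * Xp) % p                   # Z' = y X'
assert Zp == (r * Y + s * Z) % p    # statement 1: the test accepts

Z_bad = (Z + 1) % p                 # now Z != y X ...
hits = [t for t in range(p)
        if (y * ((r + t * x) % p)) % p == (r * Y + t * Z_bad) % p]
assert len(hits) == 1               # statement 2: exactly one bad s
```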

\begin{theorem}[$\textrm{CDH} \Leftrightarrow \textrm{2DH}$]
  \label{th:cdh-2dh}

  Let $G$ be a cyclic group with prime order; then
  \[ \InSec{cdh}(G; t) \le
     \InSec{2dh}(G; t, q) \le
     \InSec{cdh}(G; t + t') + \frac{q}{\#G} \]
  where $t'$ is the time taken to perform two scalar multiplications and an
  addition in $G$.
\end{theorem}
\begin{proof}
  The first inequality is obvious. For the second, suppose that $A$ is any
  algorithm for solving 2DH in~$G$. Let $p = \#G$. We define algorithm~$B$
  as follows.
  \begin{program}
    $B(X, Y)$: \+ \\
      $r \getsr \gf{p}$;
      $s \getsr \gf{p}$;
      $X' \gets r P + s X$; \\
      $(Z, Z') \gets A^{\id{2dhp}'(\cdot, \cdot, \cdot)}(X, X', Y)$; \\
      \IF $\id{2dhp}'(Y, Z, Z') = 1$ \THEN \RETURN $Z$ \ELSE \RETURN $\bot$;
    \- \\[\medskipamount]
    $\id{2dhp}'(U, V, V')$: \+ \\
      \IF $V' = r U + s V$ \THEN \RETURN $1$ \ELSE \RETURN $0$;
  \end{program}
  If $A$ runs in time $t$, then $B$ runs in time~$t + t'$ as stated: while
  $\id{2dhp}'$ works differently from $\id{2dhp}$, the simultaneous
  multiplication in the former can be done more efficiently than the two
  separate multiplications in the latter; so the only overhead is in the
  additional setup.

  We have $r$ uniform on $\gf{p}$ and independent of $s$, so $\Pr[X' = C] =
  \Pr[r P = C - s X] = 1/p$ for any constant~$C$, so $X'$ is uniform on $G$
  and independent of~$s$. By \xref{lem:2dh-detect}, there are at most $q$
  values of $s$ which would cause $\id{2dhp}'$ to give an incorrect answer.
  The theorem follows.
\end{proof}
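
The reduction can also be tested end-to-end in the same toy group
$(\mathbb{Z}_{11}, +)$ with $P = 1$: discrete logarithms are trivial there,
so a perfect 2DH adversary is easy to write, and $B$ should always output
the CDH answer $x y P$. The fixed values of $r$ and $s$ are for
reproducibility only.

```python
# End-to-end sketch of the reduction B from the proof, in the toy
# group (Z_11, +) with generator P = 1.
p = 11

def A(X, Xp, Y, dhp):              # perfect 2DH adversary (toy group only)
    Z, Zp = (X * Y) % p, (Xp * Y) % p
    assert dhp(Y, Z, Zp) == 1      # sanity check: oracle accepts the truth
    return Z, Zp

def B(X, Y):                       # the reduction from the proof
    r, s = 5, 7                    # should be random; fixed here
    Xp = (r + s * X) % p           # X' = r P + s X
    dhp = lambda U, V, Vp: int(Vp == (r * U + s * V) % p)
    Z, Zp = A(X, Xp, Y, dhp)
    return Z if dhp(Y, Z, Zp) == 1 else None

x, y = 3, 4
assert B(x, y) == (x * y) % p      # B recovers x y P
```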

\subsection{Symmetric encryption}
\label{sec:se}

The syntax of symmetric encryption schemes is straightforward. The only
slightly unusual feature of our definition is that we require the key space
to be a set of binary strings of a given length: this will be important in
what follows.

\begin{definition}[Symmetric encryption syntax]
  \label{def:se-syntax}

  A \emph{symmetric encryption scheme} is a triple $\proto{se} = (\kappa, E,
  D)$, where $\kappa \ge 0$ is a nonnegative integer, and $E$ and $D$ are
  (maybe randomized) algorithms, as follows.
  \begin{itemize}
  \item The \emph{encryption algorithm}~$E$ accepts a key $K \in \Bin^\kappa$
    and a message $m \in \Bin^*$, and outputs a ciphertext $c \gets E_K(m)$.
  \item The \emph{decryption algorithm}~$D$ accepts a key $K \in \Bin^\kappa$
    and a ciphertext $c$, and outputs $m = D_K(c)$, which is either a message
    $m \in \Bin^*$ or the distinguished symbol~$\bot$. This decryption
    algorithm must be such that, for all $K \in \Bin^\kappa$, and all
    messages $m \in \Bin^*$, if $c$ is any output of $E_K(m)$ then $D_K(c)$
    outputs $m$. \qed
  \end{itemize}
\end{definition}

We define two security notions for symmetric encryption.

Our notion of secrecy is \emph{indistinguishability under chosen-ciphertext
attack} (IND-CCA). We choose a random key~$K$ for the symmetric encryption
scheme and provide the adversary with two oracles: an encryption oracle which
accepts two messages $m_0$ and $m_1$ of the same length, consistently chooses
either $m_0$ or $m_1$, encrypts it using the key~$K$, and returns the
resulting ciphertext $y = E_K(m_b)$; and a decryption oracle which accepts a
ciphertext and returns the corresponding plaintext or an indication that the
ciphertext was malformed. The scheme is considered secure if no adversary
playing this game can determine whether the encryption oracle is encrypting
$m_0$ or $m_1$. Since the game would be trivial if the adversary could ask
for decryption of ciphertexts returned by the encryption oracle, we forbid
(only) this.
602 | ||
603 | Our notion of authenticity is \emph{integrity of ciphertexts} (INT-CTXT). | |
604 | We choose a random key~$K$, and provide the adversary with two oracles: an | |
605 | encryption oracle which encrypts a message provided as input and returns the | |
606 | resulting ciphertext; and a decryption oracle which decrypts the ciphertext | |
607 | provided as input and returns the plaintext or an indication that the | |
ciphertext was invalid.  The scheme is considered secure if no adversary
609 | playing this game can query its decryption oracle on a valid ciphertext which | |
610 | wasn't returned by the encryption oracle. | |
611 | ||
612 | These notions are standard; we take them from | |
613 | \cite{Bellare:2000:AER,cryptoeprint:2000:025}. Rogaway | |
614 | \cite{cryptoeprint:2006:221} presents a rather more convenient definition | |
615 | which combines both secrecy and authenticity into a single notion; this | |
616 | definition is stronger than we need because it requires that ciphertexts be | |
617 | indistinguishable from random data, rather than merely other ciphertexts. | |
618 | ||
619 | \begin{definition}[Symmetric encryption security] | |
620 | \label{def:se-security} | |
621 | ||
622 | Let $\Pi = (\kappa, E, D)$ be a symmetric encryption scheme. An | |
623 | adversary~$A$'s ability to attack the security of $\Pi$ is measured using | |
624 | the following games. | |
625 | ||
626 | \begin{program} | |
627 | $\Game{ind-cca-$b$}{\Pi}(A)$: \+ \\ | |
628 | $K \getsr \Bin^\kappa$; \\ | |
629 | $b' \gets A^{E_K(\id{lr}(\cdot, \cdot)), D_K(\cdot)}()$; \\ | |
630 | \RETURN $b'$; | |
631 | \next | |
632 | $\id{lr}(m_0, m_1)$: \+ \\ | |
633 | \RETURN $m_b$; | |
634 | \newline | |
635 | $\Game{int-ctxt}{\Pi}(A)$: \+ \\ | |
636 | $K \getsr \Bin^\kappa$; \\ | |
637 | $Y \gets \emptyset$; $w \gets 0$; \\ | |
638 | $A^{\id{encrypt}(\cdot), \id{decrypt}(\cdot)}()$; \\ | |
639 | \RETURN $w$; \- | |
640 | \next | |
641 | $\id{encrypt}(m)$: \+ \\ | |
642 | $y \gets E_K(m)$; \\ | |
643 | $Y \gets Y \cup \{ y \}$; \\ | |
644 | \RETURN $y$; \- | |
645 | \next | |
646 | $\id{decrypt}(y)$: \+ \\ | |
647 | $m \gets D_K(y)$; \\ | |
648 | \IF $m \ne \bot \land y \notin Y$ \THEN $w \gets 1$; \\ | |
649 | \RETURN $m$; \- | |
650 | \end{program} | |
651 | ||
652 | In the IND-CCA games, we require that the two messages $m_0$ and $m_1$ | |
653 | passed to the encryption oracle have the same length, and that the | |
654 | adversary not query its decryption oracle on any ciphertext produced by the | |
655 | encryption oracle. | |
656 | ||
657 | The adversary's \emph{advantage} in these games is defined as follows. | |
658 | \begin{eqnarray*}[c] | |
659 | \Adv{ind-cca}{\Pi}(A) = | |
660 | \Pr[\Game{ind-cca-$1$}{\Pi}(A) = 1] - | |
661 | \Pr[\Game{ind-cca-$0$}{\Pi}(A) = 1] | |
662 | \\[\medskipamount] | |
663 | \Adv{int-ctxt}{\Pi}(A) = | |
664 | \Pr[\Game{int-ctxt}{\Pi}(A) = 1] | |
665 | \end{eqnarray*} | |
666 | ||
667 | Finally, the IND-CCA and INT-CTXT insecurity functions of $\Pi$ are given | |
668 | by | |
669 | \begin{eqnarray*}[c] | |
670 | \InSec{ind-cca}(\Pi; t, q_E, q_D) = \max_A \Adv{ind-cca}{\Pi}(A) | |
671 | \\[\medskipamount] | |
672 | \InSec{int-ctxt}(\Pi; t, q_E, q_D) = \max_A \Adv{int-ctxt}{\Pi}(A) | |
673 | \end{eqnarray*} | |
674 | where the maxima are taken over all adversaries~$A$ completing the game in | |
675 | time~$t$ and making at most $q_E$ and $q_D$ queries to their encryption and | |
676 | decryption oracles, respectively. | |
677 | \end{definition} | |
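The INT-CTXT game above is easy to transcribe into runnable form.  The
following Python sketch mirrors the pseudocode: the win flag $w$ is set
precisely when the decryption oracle accepts a ciphertext that the encryption
oracle never produced.  All names here are ours, and the XOR `scheme' is a
deliberately unauthenticated straw man, included only to exercise the game.

```python
import os

def play_int_ctxt(kappa, E, D, adversary):
    """Run the INT-CTXT game for a scheme (kappa, E, D); return the flag w."""
    K = os.urandom(kappa)
    Y = set()                       # ciphertexts output by the encryption oracle
    state = {'w': 0}
    def encrypt(m):
        y = E(K, m)
        Y.add(y)
        return y
    def decrypt(y):
        m = D(K, y)
        if m is not None and y not in Y:    # valid but never output: a forgery
            state['w'] = 1
        return m
    adversary(encrypt, decrypt)
    return state['w']

# Straw-man scheme: XOR with the repeated key and no authentication at all,
# so *every* string decrypts successfully.
def toy_E(K, m):
    return bytes(a ^ b for a, b in zip(m, K * (len(m) // len(K) + 1)))

toy_D = toy_E   # XOR is its own inverse

def forger(encrypt, decrypt):
    encrypt(b'a legitimately encrypted message')
    decrypt(b'any string the encryption oracle never output')
```

Against the straw man the trivial forger wins with advantage~1; an
INT-CTXT-secure scheme is exactly one for which no feasible adversary can make
the flag stick.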
678 | ||
679 | In fact, our constructions only require security against a single | |
680 | chosen-plaintext query in order to achieve the usual notions of | |
681 | indistinguishability and authenticity (\xref{sec:deny.intuition}). However, | |
in order to prove deniability we shall need to encrypt additional
683 | messages under the same key, and it would be a shame to lose secrecy or | |
684 | authenticity as a consequence. | |
685 | ||
686 | Symmetric encryption schemes according to this definition can easily be | |
constructed using a generic `encrypt-then-authenticate' approach, combining a
symmetric encryption scheme secure against chosen-plaintext attack with a
689 | message authentication code (MAC) secure against `strong' forgery under | |
690 | chosen-message attack.\footnote{% | |
691 | Such schemes may fail to meet Rogaway's stronger definition because MAC | |
692 | tags need not be indistinguishable from random data.} % | |
693 | Alternatively, there are dedicated schemes based on block ciphers, such as | |
694 | OCB~\cite{Rogaway:2002:AEA,Rogaway:2001:OCB,cryptoeprint:2001:026}, | |
695 | CCM~\cite{rfc3610,Dworkin:2003:DRB} (but note \cite{cryptoeprint:2003:070}) | |
696 | and GCM~\cite{McGrew:2004:SPG}. | |
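As a concrete illustration of the encrypt-then-authenticate composition, here
is a Python sketch built only from standard-library primitives.  The
counter-mode keystream derived from HMAC-SHA256 stands in for any
IND-CPA-secure encryption scheme, and HMAC-SHA256 over the whole ciphertext
body stands in for the strongly unforgeable MAC; the key is split so the two
components use independent subkeys.  This is an illustrative sketch of the
composition, not a vetted design.

```python
import hmac, os
from hashlib import sha256

KAPPA = 64    # key length in bytes: two independent 32-byte subkeys
TAGLEN = 32   # HMAC-SHA256 tag length

def _keystream(k_enc: bytes, nonce: bytes, n: int) -> bytes:
    """Toy stream cipher: HMAC-SHA256 in counter mode."""
    out, ctr = b'', 0
    while len(out) < n:
        out += hmac.new(k_enc, nonce + ctr.to_bytes(8, 'big'), sha256).digest()
        ctr += 1
    return out[:n]

def E(K: bytes, m: bytes) -> bytes:
    k_enc, k_mac = K[:32], K[32:]
    nonce = os.urandom(16)
    body = nonce + bytes(a ^ b for a, b in zip(m, _keystream(k_enc, nonce, len(m))))
    tag = hmac.new(k_mac, body, sha256).digest()   # MAC the whole ciphertext
    return body + tag

def D(K: bytes, c: bytes):
    k_enc, k_mac = K[:32], K[32:]
    if len(c) < 16 + TAGLEN:
        return None                                # our stand-in for bottom
    body, tag = c[:-TAGLEN], c[-TAGLEN:]
    if not hmac.compare_digest(tag, hmac.new(k_mac, body, sha256).digest()):
        return None                                # reject before decrypting
    nonce, ct = body[:16], body[16:]
    return bytes(a ^ b for a, b in zip(ct, _keystream(k_enc, nonce, len(ct))))
```

Verifying the tag before decrypting is what delivers INT-CTXT: any bit-flip in
the ciphertext invalidates the MAC, so only oracle-produced ciphertexts decrypt.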
697 | ||
698 | \subsection{Authenticated encryption with additional data} | |
699 | \label{sec:aead} | |
700 | ||
701 | \begin{definition}[AEAD syntax] | |
702 | \label{def:aead-syntax} | |
703 | ||
704 | An \emph{authenticated encryption with additional data (AEAD)} scheme is a | |
triple $\proto{aead} = (\kappa, E, D)$, where $\kappa \ge 0$ is a nonnegative
706 | integer, and $E$ and $D$ are (maybe randomized) algorithms, as follows. | |
707 | \begin{itemize} | |
708 | \item The \emph{encryption algorithm}~$E$ accepts a key $K \in | |
709 | \Bin^\kappa$, additional data~$h \in \Bin^*$, and a message $m \in | |
710 | \Bin^*$, and outputs a ciphertext $c \gets E_K(h, m)$. | |
711 | \item The \emph{decryption algorithm}~$D$ accepts a key $K \in | |
712 | \Bin^\kappa$, additional data~$h \in \Bin^*$, and a ciphertext $c$, and | |
713 | outputs $m = D_K(h, c)$ which is either a message $m \in \Bin^*$ or the | |
714 | distinguished symbol~$\bot$. This decryption algorithm must be such | |
that, for all keys $K \in \Bin^\kappa$, all additional data strings~$h \in \Bin^*$,
716 | and all messages $m \in \Bin^*$, if $c$ is any output of $E_K(h, m)$ then | |
717 | $D_K(h, c)$ outputs $m$. \qed | |
718 | \end{itemize} | |
719 | \end{definition} | |
720 | ||
721 | \begin{definition}[AEAD security] | |
722 | \label{def:aead-security} | |
723 | ||
Let $\Pi = (\kappa, E, D)$ be an AEAD scheme.  An
725 | adversary~$A$'s ability to attack the security of $\Pi$ is measured using | |
726 | the following games. | |
727 | ||
728 | \begin{program} | |
729 | $\Game{ind-cca-$b$}{\Pi}(A)$: \+ \\ | |
730 | $K \getsr \Bin^\kappa$; \\ | |
731 | $b' \gets A^{E_K(\cdot, \id{lr}(\cdot, \cdot)), | |
732 | D_K(\cdot, \cdot)}()$; \\ | |
733 | \RETURN $b'$; | |
734 | \next | |
735 | $\Game{int-ctxt}{\Pi}(A)$: \+ \\ | |
736 | $K \getsr \Bin^\kappa$; \\ | |
737 | $Y \gets \emptyset$; $w \gets 0$; \\ | |
738 | $A^{\id{encrypt}(\cdot, \cdot), \id{decrypt}(\cdot, \cdot)}()$; \\ | |
739 | \RETURN $w$; \- | |
740 | \newline | |
741 | $\id{lr}(m_0, m_1)$: \+ \\ | |
742 | \RETURN $m_b$; | |
743 | \next | |
744 | $\id{encrypt}(h, m)$: \+ \\ | |
745 | $y \gets E_K(h, m)$; \\ | |
746 | $Y \gets Y \cup \{ (h, y) \}$; \\ | |
747 | \RETURN $y$; \- | |
748 | \next | |
749 | $\id{decrypt}(h, y)$: \+ \\ | |
750 | $m \gets D_K(h, y)$; \\ | |
751 | \IF $m \ne \bot \land (h, y) \notin Y$ \THEN $w \gets 1$; \\ | |
752 | \RETURN $m$; \- | |
753 | \end{program} | |
754 | ||
755 | In the IND-CCA games, we require that the two messages $m_0$ and $m_1$ | |
756 | passed to the encryption oracle have the same length, and that the | |
757 | adversary not query its decryption oracle on any pair $(h, y)$ such that | |
758 | $y$ was output by the encryption oracle on a query with additional | |
759 | data~$h$. | |
760 | ||
761 | The adversary's \emph{advantage} in these games is defined as follows. | |
762 | \begin{eqnarray*}[c] | |
763 | \Adv{ind-cca}{\Pi}(A) = | |
764 | \Pr[\Game{ind-cca-$1$}{\Pi}(A) = 1] - | |
765 | \Pr[\Game{ind-cca-$0$}{\Pi}(A) = 1] | |
766 | \\[\medskipamount] | |
767 | \Adv{int-ctxt}{\Pi}(A) = | |
768 | \Pr[\Game{int-ctxt}{\Pi}(A) = 1] | |
769 | \end{eqnarray*} | |
770 | ||
771 | Finally, the IND-CCA and INT-CTXT insecurity functions of $\Pi$ are given | |
772 | by | |
773 | \begin{eqnarray*}[c] | |
774 | \InSec{ind-cca}(\Pi; t, q_E, q_D) = \max_A \Adv{ind-cca}{\Pi}(A) | |
775 | \\[\medskipamount] | |
776 | \InSec{int-ctxt}(\Pi; t, q_E, q_D) = \max_A \Adv{int-ctxt}{\Pi}(A) | |
777 | \end{eqnarray*} | |
778 | where the maxima are taken over all adversaries~$A$ completing the game in | |
779 | time~$t$ and making at most $q_E$ and $q_D$ queries to their encryption and | |
780 | decryption oracles, respectively. | |
781 | \end{definition} | |
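The encrypt-then-authenticate idea extends directly to the AEAD syntax: the
additional data $h$ is bound into the MAC (with an unambiguous length prefix,
so distinct $(h, c)$ pairs cannot collide) but is not itself encrypted.  The
Python sketch below is ours and purely illustrative; it assumes messages of at
most 32 bytes so that a single hash output suffices as a toy keystream.

```python
import hmac, os
from hashlib import sha256

def aead_E(K, h, m):
    """Toy AEAD encryption; m must be at most 32 bytes."""
    k_enc, k_mac = K[:32], K[32:]
    nonce = os.urandom(16)
    pad = sha256(k_enc + nonce).digest()        # one-block toy keystream
    body = nonce + bytes(a ^ b for a, b in zip(m, pad))
    # Length-prefix h so that (h, body) is parsed unambiguously by the MAC.
    tag = hmac.new(k_mac, len(h).to_bytes(8, 'big') + h + body, sha256).digest()
    return body + tag

def aead_D(K, h, c):
    k_enc, k_mac = K[:32], K[32:]
    if len(c) < 16 + 32:
        return None                             # our stand-in for bottom
    body, tag = c[:-32], c[-32:]
    good = hmac.new(k_mac, len(h).to_bytes(8, 'big') + h + body, sha256).digest()
    if not hmac.compare_digest(tag, good):
        return None                             # wrong header or tampered body
    nonce, ct = body[:16], body[16:]
    pad = sha256(k_enc + nonce).digest()
    return bytes(a ^ b for a, b in zip(ct, pad))
```

Because $h$ enters the tag, decryption under a different header fails even
though the ciphertext body is untouched, exactly as the INT-CTXT game for AEAD
demands of the pair $(h, y)$.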
782 | ||
783 | \subsection{Asymmetric encryption} | |
784 | \label{sec:ae} | |
785 | ||
786 | An \emph{asymmetric encryption scheme} allows a sender, say Alice, to | |
787 | transmit a message to a recipient, say Bob, whose public key she knows, and | |
788 | be confident that the message's contents will remain secret from anyone who | |
789 | doesn't know Bob's private key. | |
790 | ||
791 | Our notion of security is the standard find-then-guess notion of | |
792 | indistinguishability under chosen-ciphertext attack, as described in | |
793 | \cite{Bellare:1998:RAN}, which also shows that this is sufficient to achieve | |
794 | \emph{non-malleability} under chosen-ciphertext attack; see also | |
795 | \cite{Bellare:1999:NME,cryptoeprint:1999:018,cryptoeprint:2006:228}. | |
796 | ||
797 | \begin{definition}[Asymmetric encryption syntax] | |
798 | \label{def:ae-syntax} | |
799 | ||
800 | An \emph{asymmetric encryption scheme} is a triple $\proto{ae} = (G, E, D)$ | |
801 | where $G$, $E$ and $D$ are (maybe randomized) algorithms, as follows. | |
802 | \begin{itemize} | |
803 | \item The \emph{key-generation algorithm} $G$ accepts no parameters and | |
804 | outputs a pair $(x, X)\gets G()$. We call $x$ the \emph{private key} and | |
805 | $X$ the \emph{public key}. | |
806 | \item The \emph{encryption algorithm} $E$ accepts a public key $X$ and a | |
807 | message $m \in \Bin^*$, and outputs a \emph{ciphertext} $c \gets E_X(m)$. | |
808 | \item The \emph{decryption algorithm} $D$ accepts a private key $x$ and a | |
809 | ciphertext $c$, and outputs $m \gets D_x(c)$ which is either a message $m | |
810 | \in \Bin^*$ or the distinguished symbol~$\bot$. This decryption | |
811 | algorithm must be such that, for all messages $m \in \Bin^*$, if $(x, X)$ | |
812 | is any key pair output by $G$, and $c$ is any ciphertext output by | |
813 | $E_X(m)$, then $D_x(c)$ outputs $m$. \qed | |
814 | \end{itemize} | |
815 | \end{definition} | |
816 | ||
817 | \begin{definition}[Asymmetric encryption security] | |
818 | \label{def:ae-security} | |
819 | ||
820 | Let $\Pi = (G, E, D)$ be an asymmetric encryption scheme. An | |
821 | adversary~$A$'s ability to attack the security of $\Pi$ is measured using | |
822 | the following game. | |
823 | ||
824 | \begin{program} | |
825 | $\Game{ind-cca}{\Pi}(A, b)$: \+ \\ | |
826 | $(x, X) \gets G()$; \\ | |
827 | $(m_0, m_1, s) \gets A^{D_x(\cdot)}(\cookie{find}, X)$; \\ | |
828 | $c^* \gets E_X(m_b)$; \\ | |
829 | $b' \gets A^{D_x(\cdot)}(\cookie{guess}, s, c^*)$; \\ | |
830 | \RETURN $b'$; | |
831 | \end{program} | |
832 | The adversary is required to produce two messages $m_0$ and $m_1$ of equal | |
833 | lengths at the end of its \cookie{find} stage, and is forbidden from | |
querying its decryption oracle $D_x(\cdot)$ on the challenge
835 | ciphertext $c^*$ during the \cookie{guess} stage. | |
836 | ||
The adversary's \emph{advantage} in this game is measured as follows.
838 | \[ \Adv{ind-cca}{\Pi}(A) = | |
       \Pr[\Game{ind-cca}{\Pi}(A, 1) = 1] -
       \Pr[\Game{ind-cca}{\Pi}(A, 0) = 1]
841 | \] | |
842 | Finally, the IND-CCA insecurity function of $\Pi$ is given by | |
843 | \[ \InSec{ind-cca}(\Pi; t, q_D) = \max_A \Adv{ind-cca}{\Pi}(A) \] | |
844 | where the maximum is taken over all adversaries~$A$ completing the game in | |
845 | time~$t$ and making at most $q_D$ queries to their decryption oracles. | |
846 | \end{definition} | |
847 | ||
848 | \subsection{Key-encapsulation mechanisms} | |
849 | \label{sec:kem} | |
850 | ||
851 | A \emph{key-encapsulation mechanism}, or KEM, is an asymmetric cryptographic | |
852 | scheme for transferring a short random secret (e.g., a symmetric key) to a | |
853 | recipient. It differs from asymmetric encryption in that the KEM is | |
854 | responsible for choosing the random string as part of its operation, rather | |
than having it be an input.  This makes constructions simpler and more
efficient, and makes security reductions tighter (and hence more
meaningful).
858 | ||
ElGamal's encryption scheme \cite{ElGamal:1985:PKCb} can be seen, with copious
860 | hindsight, as consisting of a KEM based on the Diffie--Hellman problem | |
861 | combined with a symmetric encryption using the same group operation as a | |
862 | one-time pad. Zheng and Seberry \cite{Zheng:1993:PAA}, inspired by | |
863 | Damg\aa{}rd's `modified ElGamal' scheme \cite{Damgaard:1991:TPP}, present | |
864 | (without proofs) a number of constructions of asymmetric encryption schemes | |
865 | secure against adaptive chosen-ciphertext attacks. The DHIES scheme of | |
866 | Abdalla, Bellare, and Rogaway | |
867 | \cite{cryptoeprint:1999:007,Abdalla:2001:DHIES}, standardized in | |
\cite{IEEE:2000:1363}, uses a hash function to derive a symmetric key from
869 | the Diffie--Hellman shared secret; they prove security against | |
870 | chosen-ciphertext attack. Finally, Shoup \cite{cryptoeprint:2001:112} | |
formalizes the notion of a key-encapsulation mechanism, and describes one (RSA-KEM,
872 | formerly `Simple RSA') based on the RSA primitive \cite{Rivest:1978:MOD}. | |
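With hindsight, the DHIES-style construction is very short.  The following
Python sketch instantiates the KEM syntax over a toy group (the Mersenne prime
$2^{127} - 1$, far too small for real use) and heuristically uses SHA-256 as
the key-derivation hash, giving $\lambda = 256$; the parameter choices are
ours, for illustration only.

```python
import os
from hashlib import sha256

P = 2**127 - 1   # a Mersenne prime: toy parameters, hopelessly small for real use
G = 5

def kem_gen():
    """Key generation: private exponent x, public element X = G^x mod P."""
    x = int.from_bytes(os.urandom(16), 'big') % (P - 1) + 1
    return x, pow(G, x, P)

def kem_encap(X):
    """Encapsulation: return (K, u) where u is the clue."""
    r = int.from_bytes(os.urandom(16), 'big') % (P - 1) + 1
    u = pow(G, r, P)                      # ephemeral element: the clue
    Z = pow(X, r, P)                      # Diffie--Hellman shared secret
    K = sha256(u.to_bytes(16, 'big') + Z.to_bytes(16, 'big')).digest()
    return K, u

def kem_decap(x, u):
    """Decapsulation: recover the same Z, hence the same K, from the clue."""
    Z = pow(u, x, P)
    return sha256(u.to_bytes(16, 'big') + Z.to_bytes(16, 'big')).digest()
```

Hashing the clue $u$ along with the shared secret $Z$ binds the derived key to
the particular encapsulation, a detail DHIES-style analyses rely on.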
873 | ||
874 | \begin{definition}[Key-encapsulation mechanism syntax] | |
875 | \label{def:kem-syntax} | |
876 | ||
877 | A \emph{key-encapsulation mechanism} is a quadruple $\proto{kem} = | |
878 | (\lambda, G, E, D)$ where $\lambda \ge 0$ is a non-negative integer, and | |
879 | $G$, $E$ and $D$ are (maybe randomized) algorithms, as follows. | |
880 | \begin{itemize} | |
881 | \item The \emph{key-generation algorithm} $G$ accepts no parameters and | |
882 | outputs a pair $(x, X) \gets G()$. We call $x$ the \emph{private key} | |
883 | and $X$ the \emph{public key}. | |
884 | \item The \emph{encapsulation algorithm} $E$ accepts a public key $X$ and | |
885 | outputs a pair $(K, u) \gets E_X()$ where $K \in \Bin^\lambda$ is a bit | |
886 | string, and $u$ is a `clue'. | |
887 | \item The \emph{decapsulation algorithm} $D$ accepts a private key $x$ and | |
888 | a clue $u$, and outputs $K \gets D_x(u)$ which is either a string $K \in | |
889 | \Bin^\lambda$ or the distinguished symbol $\bot$. The decapsulation | |
890 | algorithm must be such that if $(x, X)$ is any pair of keys produced by | |
891 | $G$, and $(K, u)$ is any key/clue pair output by $E_X()$, then $D_x(u)$ | |
892 | outputs $K$. \qed | |
893 | \end{itemize} | |
894 | \end{definition} | |
895 | ||
896 | Our notion of security is indistinguishability from random under | |
897 | chosen-ciphertext attack (IND-CCA). We generate a key pair, and invoke the | |
898 | encapsulation algorithm on the public key. The adversary is provided with | |
the resulting KEM clue and a $\lambda$-bit string.  It may query an oracle
900 | which accepts a clue as input and returns the result of applying the | |
901 | decapsulation algorithm to it using the private key. The scheme is | |
902 | considered secure if the adversary cannot determine whether its input string | |
903 | was the output of the key encapsulation algorithm, or generated uniformly at | |
904 | random and independently. Since this game can be won easily if the adversary | |
is permitted to query its decapsulation oracle on its input clue, we forbid
906 | (only) this. | |
907 | ||
908 | \begin{definition}[Key encapsulation mechanism security] | |
909 | \label{def:kem-security} | |
910 | ||
911 | Let $\Pi = (\lambda, G, E, D)$ be a key encapsulation mechanism. We | |
912 | measure an adversary~$A$'s ability to attack $\Pi$ using the following | |
913 | games. | |
914 | \begin{program} | |
915 | $\Game{ind-cca-$b$}{\Pi}(A)$: \+ \\ | |
916 | $(x, X) \gets G()$; \\ | |
917 | $K_0 \getsr \Bin^\lambda$; \\ | |
918 | $(K_1, u) \gets E_X()$; \\ | |
919 | $b' \gets A^{D_x(\cdot)}(X, u, K_b)$; \\ | |
920 | \RETURN $b'$; | |
921 | \end{program} | |
922 | In these games, the adversary is forbidden from querying its decapsulation | |
923 | oracle at the challenge clue $u$. | |
924 | ||
925 | The adversary's IND-CCA \emph{advantage} is measured by | |
926 | \[ \Adv{ind-cca}{\Pi}(A) = | |
927 | \Pr[\Game{ind-cca-$1$}{\Pi}(A) = 1] - | |
928 | \Pr[\Game{ind-cca-$0$}{\Pi}(A) = 1] | |
929 | \] | |
930 | Finally, the IND-CCA insecurity function of $\Pi$ is defined by | |
931 | \[ \InSec{ind-cca}(\Pi; t, q_D) = \max_A \Adv{ind-cca}{\Pi}(A) \] | |
932 | where the maximum is taken over all adversaries~$A$ completing the game in | |
933 | time~$t$ and making at most $q_D$ queries to their decapsulation oracles. | |
934 | \end{definition} | |
935 | ||
936 | \subsection{Digital signatures} | |
937 | \label{sec:sig} | |
938 | ||
939 | Our definition for digital signatures is, more specifically, for signature | |
940 | schemes with appendix: the signing algorithm produces a signature, | |
941 | and the verification algorithm requires both the signature and the original | |
942 | message in order to function. | |
943 | ||
944 | \begin{definition}[Digital signature syntax] | |
945 | \label{def:sig-syntax} | |
946 | ||
947 | A \emph{digital signature scheme} is a triple $\proto{sig} = (G, S, | |
948 | V)$ of (maybe randomized) algorithms, as follows. | |
949 | \begin{itemize} | |
950 | \item The \emph{key-generation algorithm} $G$ accepts no parameters and | |
951 | outputs a pair $(x, X) \gets G()$. We call $x$ the \emph{private key} | |
952 | and $X$ the \emph{public key}. | |
953 | \item The \emph{signature algorithm} $S$ accepts a private key~$x$ and a | |
954 | message~$m \in \Bin^*$, and outputs a signature~$\sigma \gets S_x(m)$. | |
955 | \item The \emph{verification algorithm} $V$ accepts a public key~$X$, a | |
956 | message~$m \in \Bin^*$, and a signature~$\sigma$, and outputs a bit $b | |
957 | \gets V_X(m, \sigma)$ subject to the condition that, if $(x, X)$ is any | |
958 | key pair output by $G$, $m \in \Bin^*$ is any message, and $\sigma$ is | |
959 | any signature output by $S_x(m)$, then $V_X(m, \sigma)$ outputs $1$. | |
960 | \qed | |
961 | \end{itemize} | |
962 | \end{definition} | |
963 | ||
964 | Our notion of security for digital signatures is \emph{existential | |
965 | unforgeability under chosen-message attack} (EUF-CMA). It is essentially | |
966 | that of \cite{Goldwasser:1988:DSS}, modified to measure concrete security. | |
967 | We provide the adversary with a public key and a signing oracle which signs | |
968 | input messages using the corresponding private key. The scheme is considered | |
969 | secure if no adversary can output a valid message/signature pair for a | |
970 | message distinct from the adversary's signing queries. | |
971 | ||
972 | \begin{definition}[Digital signature security] | |
973 | \label{def:sig-security} | |
974 | ||
975 | Let $\Pi = (G, S, V)$ be a digital signature scheme. The \emph{advantage} | |
976 | of an adversary~$A$ attacking $\Pi$ is given by | |
977 | \[ \Adv{euf-cma}{\Pi}(A) = \Pr[\textrm{ | |
978 | $(x, X) \gets G()$; | |
979 | $(m, \sigma) \gets A^{S_x(\cdot)}()$ | |
980 | } : V_X(m, \sigma) = 1] | |
981 | \] | |
982 | where the adversary is forbidden from returning a message~$m$ if it queried | |
983 | its signing oracle~$S_x(\cdot)$ at $m$. | |
984 | ||
985 | The EUF-CMA \emph{insecurity function} of $\Pi$ is given by | |
986 | \[ \InSec{euf-cma}(\Pi; t, q) = \max_A \Adv{euf-cma}{\Pi}(A) \] | |
987 | where the maximum is taken over all adversaries~$A$ completing the game in | |
988 | time~$t$ and issuing at most $q$ signing-oracle queries. | |
989 | \end{definition} | |
990 | ||
991 | This notion is weaker than it might be. A possible strengthening would be to | |
992 | allow the adversary to return a pair~$(m, \sigma')$ even if it made a signing | |
993 | query for $m$, as long as the signing oracle didn't return the | |
994 | signature~$\sigma'$. We shall not require this stronger definition. | |
995 | ||
996 | \subsection{Universal one-way hashing} | |
997 | \label{sec:defs.uowhf} | |
998 | ||
999 | \begin{definition}[Universal one-way hash function syntax] | |
1000 | \label{def:uowhf.syntax} | |
1001 | ||
1002 | A \emph{universal one-way hash function} is a triple $\proto{h} = (\nu, | |
1003 | \lambda, H)$ consisting of two positive integers $\nu$ and $\lambda$, and | |
1004 | an algorithm $H$: | |
1005 | \begin{itemize} | |
1006 | \item The \emph{hashing algorithm} $H$ accepts as input a \emph{nonce}~$N | |
\in \Bin^\nu$ and a message~$m \in \Bin^*$, and outputs a $\lambda$-bit
1008 | hash $h \gets H_N(m)$. \qed | |
1009 | \end{itemize} | |
1010 | \end{definition} | |
1011 | ||
1012 | \begin{definition}[Universal one-way hash function security] | |
1013 | \label{def:uowhf.security} | |
1014 | ||
Let $\Pi = (\nu, \lambda, H)$ be a universal one-way hash function.  We
1016 | measure the \emph{advantage} of $A$ in attacking $\Pi$ by | |
\[ \Adv{uow}{\Pi}(A) =
   \Pr[\textrm{$(m, \sigma) \gets A(\cookie{find})$;
     $N \getsr \Bin^\nu$; $m' \gets A(\cookie{guess}, N, m, \sigma)$}
   : m \ne m' \land H_N(m) = H_N(m')]
\]
1022 | Finally, the \emph{insecurity function} of $\Pi$ is given by | |
1023 | \[ \InSec{uow}(\Pi; t) = \max_A \Adv{uow}{\Pi}(A) \] | |
1024 | where the maximum is taken over all adversaries $A$ completing the game in | |
1025 | time~$t$. | |
1026 | \end{definition} | |
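A common heuristic instantiation of this syntax keys an off-the-shelf hash
with the nonce: hash the nonce followed by the message and truncate to
$\lambda$ bits.  The Python sketch below makes this, and the find-then-guess
game, concrete; treating SHA-256 this way is an assumption on our part, not a
theorem, and the names are ours.

```python
import os
from hashlib import sha256

NU, LAMBDA = 16, 16          # nonce and hash lengths, in bytes here

def H(N, m):
    """Heuristic UOWHF: truncated SHA-256 of nonce || message."""
    return sha256(N + m).digest()[:LAMBDA]

def uow_game(find, guess):
    """The find-then-guess game: the adversary commits to m before seeing N,
    then must produce a distinct m' colliding with m under H_N."""
    m, state = find()
    N = os.urandom(NU)
    m2 = guess(N, m, state)
    return int(m2 != m and H(N, m) == H(N, m2))
```

Note the order of events: because the nonce is drawn \emph{after} the first
message is fixed, universal one-wayness is a strictly weaker target than full
collision resistance.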
1027 | ||
1028 | %%%-------------------------------------------------------------------------- | |
1029 | \section{Defining deniably authenticated encryption} | |
1030 | \label{sec:deny} | |
1031 | ||
1032 | \subsection{Capturing the intuition} | |
1033 | \label{sec:deny.intuition} | |
1034 | ||
1035 | The basic intuition behind deniably authenticated encryption is fairly clear. | |
If Bob sends a message to Alice, then
1037 | \begin{itemize} | |
1038 | \item (authenticated) she should be able to have confidence that the message | |
1039 | was sent by Bob; | |
1040 | \item (encryption) both she and Bob should be able to have confidence that | |
1041 | nobody else could read the message; and | |
1042 | \item (deniably) nobody should later be able to convince a third party -- | |
1043 | Justin, say -- that Bob actually sent it. | |
1044 | \end{itemize} | |
1045 | Of course, we're considering encrypted messages, so Alice will have to send | |
1046 | something other than just the message to Justin if he's to be convinced of | |
1047 | anything. (Otherwise Justin is able to compromise the indistinguishability | |
1048 | of the encryption scheme.) | |
1049 | ||
1050 | This intuition seems fairly clear; but, as we shall see, the notion of | |
1051 | deniably authenticated encryption is somewhat slippery to formalize. | |
1052 | ||
1053 | \subsubsection{Authenticated encryption} | |
1054 | An, Dodis and Rabin \cite{An:2002:SJS} study asymmetric authenticated | |
1055 | encryption and provide good definitions. They consider authenticated | |
1056 | encryption in a multiparty setting, and deal with both `insider' and | |
1057 | `outsider' security. | |
1058 | ||
1059 | The basic idea of their definitions is standard. For secrecy, an adversary | |
1060 | chooses two messages, is given an encryption of one of them, and attempts to | |
1061 | guess which message was encrypted. For authenticity, the adversary attempts | |
1062 | to construct a forged ciphertext which is accepted as authentic. In both | |
1063 | cases, the adversary is provided with oracles which simulate the other | |
1064 | parties in the protocol: specifically, an encryption oracle simulating a | |
1065 | sender, and a decryption oracle simulating a recipient. | |
1066 | ||
1067 | To properly model security in the multiparty setting, the encryption oracle | |
1068 | accepts as input a public key identifying the proposed recipient of the | |
1069 | ciphertext; and the decryption oracle accepts a public key of the claimed | |
1070 | sender. The adversary may pass any purported public keys of its choosing to | |
1071 | these oracles, whether or not it knows the corresponding private keys. | |
1072 | ||
1073 | The notion of outsider security assumes that the sender and recipient are | |
1074 | trusted, and captures attacks by third parties. Accordingly, in the outsider | |
1075 | security setting, the adversary is given only the public keys of the sender | |
1076 | and recipient. By contrast, insider security assumes that only one of the | |
1077 | participants is honest: in the secrecy game, the adversary is given the | |
1078 | sender's private key; and in the authenticity game, the adversary is given | |
1079 | the recipient's private key. The idea of insider-secure secrecy doesn't seem | |
1080 | especially useful, though it does capture a kind of forward security. The | |
1081 | notion of insider-secure authenticity captures a kind of non-repudiation | |
1082 | property which is explicitly contrary to our goals. We therefore deal only | |
1083 | with outsider security. | |
1084 | ||
1085 | Our security notion for secrecy is \emph{indistinguishability under outsider | |
1086 | chosen-ciphertext attack} (IND-OCCA). We generate a key pair each for two | |
1087 | participants, a `sender' and a `recipient'. The adversary is given the two | |
1088 | public keys and provided with two oracles: an encryption oracle, which | |
1089 | accepts a message and a public key, and encrypts the message using the public | |
1090 | key and the sender's private key, returning the resulting ciphertext; and a | |
1091 | decryption oracle, which accepts a ciphertext and a public key, and decrypts | |
1092 | the ciphertext using the recipient's private key and checks it against the | |
1093 | given public key, returning either the resulting plaintext or an indication | |
1094 | that the decryption failed. The adversary runs in two stages: in the first | |
1095 | `find' stage, it chooses two messages $m_0$ and $m_1$ of equal lengths and | |
1096 | returns them; one of these messages is encrypted by the sender using the | |
1097 | recipient's public key. The adversary's second stage is given the resulting | |
1098 | `challenge' ciphertext. The scheme is considered secure if no adversary can | |
1099 | determine which of its messages was encrypted. Since this is trivial if the | |
1100 | adversary queries its decryption oracle with the challenge ciphertext and the | |
1101 | sender's public key, (only) this is forbidden. The resulting game is | |
1102 | precisely \cite{Baek:2007:FPS}'s `FSO/FUO-IND-CCA2', which corresponds to | |
1103 | \cite{An:2002:SJS}'s notion of `multi-user outsider privacy'. A similar | |
1104 | `insider security' notion would provide the adversary with the sender's | |
1105 | private key. We prefer outsider security: the idea that we must prevent | |
1106 | Alice from decrypting the message she just encrypted to send to Bob seems | |
1107 | unnecessary. | |
1108 | ||
1109 | Our security notion for authenticity is \emph{unforgeability under outsider | |
1110 | chosen-message attack} (UF-OCMA). Again, we generate sender and recipient | |
1111 | keys, and the adversary is given the same two oracles. This time, the | |
1112 | adversary's task is to submit to the decryption oracle a ciphertext which it | |
1113 | accepts as being constructed by the sender, but is not equal to any | |
1114 | ciphertext returned by the encryption oracle. This notion is strictly weaker | |
1115 | than \cite{Baek:2007:FPS}'s FSO-UF-CMA, since it corresponds to | |
1116 | \cite{An:2002:SJS,cryptoeprint:2002:046}'s `multi-user outsider | |
1117 | authenticity', whereas FSO-UF-CMA corresponds to `multi-user \emph{insider} | |
1118 | authenticity'. This weakening of security is a necessary consequence of our | |
1119 | deniability requirements: we \emph{want} Alice to be able to forge messages | |
1120 | that appear to be sent to her by Bob: see \xref{sec:deny.insider}. On the | |
1121 | other hand, we allow the adversary to make multiple decryption queries; this | |
1122 | makes a difference to our quantitative results. | |
1123 | ||
1124 | \subsubsection{Deniability} | |
1125 | Following our intuition above, we might model Justin as an algorithm~$J$ | |
1126 | which receives as input a message~$m$ and ciphertext~$c$, and some other | |
1127 | information, and outputs either~$1$ or $0$ depending on whether it is | |
1128 | convinced that $c$ represents an authentic encryption of $m$ sent from Bob to | |
1129 | Alice. For deniability, we shall require the existence of a \emph{simulator} | |
1130 | which forges ciphertexts sufficiently well that Justin can't distinguish them | |
1131 | from ones that Bob made. The difficult questions concern which inputs we | |
1132 | should give Justin on the one hand, and the simulator on the other. | |
1133 | ||
1134 | The simulator is easier. We must provide at least Bob's public key, in order | |
1135 | to identify him as the `sender'. We must also provide Alice's \emph{private} | |
1136 | key. This initially seems excessive; but a simulator which can forge | |
1137 | messages without Alice's private key exhibits an outsider-authenticity | |
1138 | failure. (We could get around this by providing Bob's private key and | |
1139 | Alice's public key instead, but we already know that Bob can construct | |
1140 | plausible ciphertexts, so this seems uninteresting -- for now.) We should | |
1141 | also give the simulator the `target' message for which it is supposed to | |
1142 | concoct a ciphertext: allowing the simulator to make up the target message | |
1143 | yields a very poor notion of deniability. Finally, we might give the | |
1144 | simulator a \emph{valid} ciphertext from Bob to Alice. Doing this weakens | |
1145 | our deniability, because Bob may no longer be able to deny engaging in any | |
1146 | form of communication with Alice; but this seems a relatively minor | |
1147 | complaint. We therefore end up with two different notions: \emph{strong | |
1148 | deniability}, where the simulator is given only the keys and the target | |
1149 | message; and \emph{weak deniability}, where the simulator is also provided | |
1150 | with a valid sample ciphertext. | |
1151 | ||
1152 | Now we turn to Justin's inputs. It's clear that if he can say whether a | |
1153 | ciphertext is an encryption of some specified message given only Alice's and | |
1154 | Bob's public keys then he exhibits an outsider-secrecy failure. If Alice is | |
1155 | to prove anything useful about the ciphertext to Justin, then, she must | |
1156 | provide additional information, possibly in the form of a zero-knowledge | |
1157 | proof. We don't need to model this, though: it's clearly \emph{sufficient} | |
1158 | to hand Alice's private key to Justin. On the other hand, this is also | |
1159 | \emph{necessary}, since \emph{in extremis} Alice could do precisely this. | |
1160 | Similarly, it might be the case that Justin demands that Bob engage in some | |
1161 | protocol in an attempt to demonstrate his innocence. We can therefore also | |
1162 | give Bob's private key to Justin. | |
1163 | ||
1164 | We could simply measure the simulator's ability to deceive Justin separately | |
1165 | for each message; but this would only give us convincing deniability for | |
1166 | messages which are independent of the key. Instead, we split the deniability | |
game into two parts: one where Justin chooses two messages (which, unlike in
the secrecy games, need not be of the same length), and
1169 | one where he has to decide whether the ciphertext he's given is genuine or | |
1170 | simulated. | |
1171 | ||
1172 | There are ways we could make the definition stronger. For example, we might | |
1173 | consider auxiliary inputs, or even allow Justin to choose the public and | |
private keys himself, subject to their being (more or less) correctly
distributed.\footnote{%
1176 | This does actually have some merit as an idea. Consider an (admittedly | |
1177 | pathological) example. Suppose that the scheme's private key includes an | |
1178 | otherwise useless random group element $Q$, and that genuine ciphertexts | |
1179 | contain a Diffie--Hellman pair $(r P, r Q)$ for random $r$, while simulated | |
1180 | ciphertexts (for no good reason) contain a pair $(r P, s P)$ with random | |
1181 | $r$ and $s$. Distinguishing genuine ciphertexts from simulations on the | |
1182 | basis of this pair is precisely the decision Diffie--Hellman problem, so | |
1183 | Justin may well find it difficult; but if he knows $q$ such that $Q = q P$ | |
1184 | -- which isn't part of the private key -- then he'd find it easy.} % | |
1185 | We take the view that our definition is already sufficiently complicated and | |
1186 | unwieldy. | |
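To make the footnote's example concrete, here is a toy Python sketch. A small multiplicative group of prime order stands in for the additive notation above, and the parameter sizes are illustrative rather than cryptographic: a judge who knows the discrete logarithm of $Q$ distinguishes genuine pairs from simulated ones trivially, while without it the task is exactly DDH.

```python
# Toy model of the footnote's pathological scheme.  The group is the
# order-1019 subgroup of Z_2039^*; these sizes are for illustration only.
import secrets

p, q, g = 2039, 1019, 4            # safe prime p = 2q + 1; g generates order q

q_trap = secrets.randbelow(q - 1) + 1
Q = pow(g, q_trap, p)              # the `useless' element Q from the footnote

def genuine():
    # genuine ciphertexts carry a Diffie--Hellman pair (rP, rQ)
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), pow(Q, r, p)

def simulated():
    # simulated ciphertexts carry an unrelated pair (rP, sP)
    r = secrets.randbelow(q - 1) + 1
    s = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), pow(g, s, p)

def justin_with_trapdoor(pair):
    # Knowing the discrete log of Q, Justin just checks the DDH relation.
    A, B = pair
    return B == pow(A, q_trap, p)   # True => looks genuine
```

Without `q_trap`, deciding which distribution produced a pair is the decision Diffie--Hellman problem in this group.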
1187 | ||
1188 | \subsection{Definitions for strong deniability} | |
1189 | \label{sec:deny.strong} | |
1190 | ||
1191 | This gives us enough to define strong deniability. As we shall see, however, | |
1192 | weak deniability is somewhat trickier to capture. | |
1193 | ||
1194 | Following the discussion above, we define the syntax of a strongly deniable | |
1195 | authentication encryption scheme. (Weakly deniable schemes will turn out to | |
1196 | be \emph{syntactically} different, which will be somewhat annoying.) | |
1197 | ||
1198 | \begin{definition}[Strongly deniable AAE syntax] | |
1199 | \label{def:sdaee-synax} | |
1200 | ||
1201 | A \emph{strongly deniable asymmetric authenticated encryption scheme} | |
1202 | (SD-AAE) is a quadruple of (maybe randomized) algorithms | |
1203 | $\proto{sdaae} = (G, E, D, R)$ as follows. | |
1204 | \begin{itemize} | |
1205 | \item The \emph{key-generation algorithm}~$G$ accepts no parameters and | |
1206 | outputs a pair $(x, X) \gets G()$. We call $x$ the \emph{private key} | |
1207 | and $X$ the corresponding \emph{public key}. | |
1208 | \item The \emph{encryption algorithm}~$E$ accepts as input a private | |
1209 | key~$x$, a public key~$Y$, and a message~$m \in \Bin^*$, and outputs a | |
1210 | ciphertext $c \gets E_x(Y, m)$. | |
1211 | \item The \emph{decryption algorithm}~$D$ accepts as input a private | |
    key~$x$, a public key~$Y$, and a ciphertext~$c$, and outputs $m \gets
    D_x(Y, c)$ which is either a message~$m \in \Bin^*$ or the distinguished
1214 | symbol~$\bot$. The decryption algorithm must be such that, for all key | |
1215 | pairs $(x, X)$ and $(y, Y)$ output by $G$, all messages $m \in \Bin^*$, | |
1216 | if $c$ is any output of $E_x(Y, m)$, then $D_y(X, c)$ outputs $m$. | |
1217 | \item The \emph{recipient simulator}~$R$ accepts as input a private | |
1218 | key~$x$, a public key~$Y$, and a message~$m \in \Bin^*$, and outputs a | |
1219 | ciphertext $c \gets R_x(Y, m)$. \qed | |
1220 | \end{itemize} | |
1221 | \end{definition} | |
1222 | The recipient simulator is syntactically like the encryption algorithm, but | |
1223 | semantically different. In particular, the encryption algorithm is given the | |
1224 | sender's private key and the recipient's public key, while the recipient | |
1225 | simulator is given the sender's public key and the recipient's private key. | |
1226 | ||
1227 | We now proceed to define the security notions. | |
1228 | ||
1229 | \begin{definition}[Strongly deniable AAE security] | |
1230 | \label{def:sdaae-security} | |
1231 | ||
1232 | Let $\Pi = (G, E, D, R)$ be a strongly deniable asymmetric authenticated | |
1233 | encryption scheme. An adversary $A$'s ability to attack the secrecy and | |
1234 | authenticity of $\Pi$ is measured using the following games. | |
1235 | \begin{program} | |
1236 | $\Game{ind-occa}{\Pi}(A, b)$: \+ \\ | |
1237 | $(x, X) \gets G()$; | |
1238 | $(y, Y) \gets G()$; \\ | |
1239 | $(m_0, m_1, s) \gets | |
1240 | A^{E_x(\cdot, \cdot), D_y(\cdot, \cdot)}(\cookie{find}, X, Y)$; \\ | |
1241 | $c^* \gets E_x(Y, m_b)$; \\ | |
1242 | $b' \gets | |
1243 | A^{E_x(\cdot, \cdot), D_y(\cdot, \cdot)}(\cookie{guess}, c^*, s)$; \\ | |
1244 | \RETURN $b'$; \- | |
1245 | \\[\medskipamount] | |
1246 | $\Game{uf-ocma}{\Pi}(A)$: \+ \\ | |
1247 | $w \gets 0$; | |
1248 | $\mathcal{C} \gets \emptyset$; \\ | |
1249 | $(x, X) \gets G()$; | |
1250 | $(y, Y) \gets G()$; \\ | |
      $A^{\id{enc}(\cdot, \cdot), \id{dec}(\cdot, \cdot)}(X, Y)$; \\
1252 | \RETURN $w$; \- | |
1253 | \next | |
1254 | $\id{enc}(Z, m)$: \+ \\ | |
1255 | $c \gets E_x(Z, m)$; \\ | |
1256 | \IF $Z = Y$ \THEN $\mathcal{C} \gets \mathcal{C} \cup \{ c \}$; \\ | |
1257 | \RETURN $c$; \- | |
1258 | \\[\medskipamount] | |
1259 | $\id{dec}(Z, c)$: \+ \\ | |
1260 | $m \gets D_y(Z, c)$; \\ | |
1261 | \IF $Z = X \land m \ne \bot \land c \notin \mathcal{C}$ \THEN | |
1262 | $w \gets 1$; \\ | |
1263 | \RETURN $m$; \- | |
1264 | \end{program} | |
1265 | ||
  In the IND-OCCA games $\Game{ind-occa}{\Pi}(A, b)$, the adversary is
  required to produce two messages $m_0$ and $m_1$ of equal lengths at the
  end of the \cookie{find} stage, and is forbidden from querying its
  decryption oracle $D_y(\cdot, \cdot)$ on the pair $(X, c^*)$ during the
  \cookie{guess} stage; in the UF-OCMA game $\Game{uf-ocma}{\Pi}$, the
  adversary wins by querying its decryption oracle on a pair $(X, c)$ where
  $c$ is a valid ciphertext which it did not obtain from its encryption
  oracle.
1273 | ||
1274 | The adversary's \emph{advantage} in these games is measured as follows. | |
1275 | \begin{eqnarray*}[c] | |
1276 | \Adv{ind-occa}{\Pi}(A) = | |
1277 | \Pr[\Game{ind-occa}{\Pi}(A, 1) = 1] - | |
1278 | \Pr[\Game{ind-occa}{\Pi}(A, 0) = 1] | |
1279 | \\[\medskipamount] | |
1280 | \Adv{uf-ocma}{\Pi}(A) = | |
1281 | \Pr[\Game{uf-ocma}{\Pi}(A) = 1] | |
1282 | \end{eqnarray*} | |
  Finally, the IND-OCCA and UF-OCMA insecurity functions of $\Pi$ are given by
1284 | \begin{eqnarray*}[c] | |
1285 | \InSec{ind-occa}(\Pi; t, q_E, q_D) = \max_A \Adv{ind-occa}{\Pi}(A) | |
1286 | \\[\medskipamount] | |
1287 | \InSec{uf-ocma}(\Pi; t, q_E, q_D) = \max_A \Adv{uf-ocma}{\Pi}(A) | |
1288 | \end{eqnarray*} | |
1289 | where the maxima are taken over all adversaries~$A$ completing the game in | |
1290 | time~$t$ and making at most $q_E$ and $q_D$ queries to their encryption and | |
1291 | decryption oracles, respectively. | |
1292 | \end{definition} | |
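The bookkeeping in the UF-OCMA game can be rendered directly as code. The sketch below is a minimal Python harness with a deliberately fake `AAE' scheme -- the public key simply equals the private key, and HMAC plays every role -- purely to exercise the game's $\mathcal{C}$-set and win-flag logic; nothing here is a real construction from this paper.

```python
# Toy rendering of the UF-OCMA game's bookkeeping.  The "scheme" below is
# NOT asymmetric (public key == private key); it exists only so the game
# logic runs.
import hashlib, hmac, secrets

def G():
    k = secrets.token_bytes(16)
    return k, k                      # toy: public key EQUALS private key

def E(x, Z, m):
    tag = hmac.new(x + Z, m, hashlib.sha256).digest()
    return tag + m

def D(y, Z, c):
    tag, m = c[:32], c[32:]
    ok = hmac.compare_digest(tag, hmac.new(Z + y, m, hashlib.sha256).digest())
    return m if ok else None

def uf_ocma_game(A):
    w, C = 0, set()
    (x, X), (y, Y) = G(), G()
    def enc(Z, m):                   # the game's enc oracle
        c = E(x, Z, m)
        if Z == Y:
            C.add(c)
        return c
    def dec(Z, c):                   # the game's dec oracle; sets the win flag
        nonlocal w
        m = D(y, Z, c)
        if Z == X and m is not None and c not in C:
            w = 1
        return m
    A(X, Y, enc, dec)
    return w

def replay_adversary(X, Y, enc, dec):
    c = enc(Y, b'hello bob')
    assert dec(X, c) == b'hello bob'          # replays decrypt fine...
    tampered = c[:-1] + bytes([c[-1] ^ 1])
    assert dec(X, tampered) is None           # ...but tampering is caught
```

Replaying an oracle-produced ciphertext, or submitting a tampered one, never sets the win flag: only a fresh valid ciphertext counts as a forgery.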
1293 | ||
1294 | \begin{definition}[Strongly deniable AAE deniability] | |
1295 | \label{def:sdaae-deny} | |
1296 | ||
1297 | Let $\Pi = (G, E, D, R)$ be a strongly deniable asymmetric authenticated | |
1298 | encryption scheme. The judge~$J$'s ability to distinguish simulated | |
1299 | ciphertexts is defined by the following games. | |
1300 | \begin{program} | |
1301 | $\Game{sdeny}{\Pi}(J, b)$: \+ \\ | |
1302 | $(x, X) \gets G()$; | |
1303 | $(y, Y) \gets G()$; \\ | |
      $(m, \sigma) \gets J(\cookie{find}, x, X, y, Y)$; \\
1305 | \IF $b = 1$ \THEN $c \gets E_x(Y, m)$; \\ | |
1306 | \ELSE $c \gets R_y(X, m)$; \\ | |
1307 | $b' \gets J(\cookie{guess}, \sigma, c)$; \\ | |
1308 | \RETURN $b'$; | |
1309 | \end{program} | |
1310 | We measure $J$'s \emph{advantage} in distinguishing simulated ciphertexts | |
1311 | from genuine ones by | |
1312 | \[ \Adv{sdeny}{\Pi}(J) = | |
1313 | \Pr[\Game{sdeny}{\Pi}(J, 1) = 1] - | |
1314 | \Pr[\Game{sdeny}{\Pi}(J, 0) = 1] \] | |
1315 | Finally, we define the \emph{insecurity function} for the (strong) | |
1316 | deniability of $\Pi$ by | |
1317 | \[ \InSec{sdeny}(\Pi; t) = \max_J \Adv{sdeny}{\Pi}(J) \] | |
1318 | where the maximum is taken over all algorithms~$J$ completing the game in | |
1319 | time~$t$. We say that $\Pi$ is \emph{statistically deniable} if | |
1320 | $\InSec{sdeny}(\Pi; t)$ is constant; if it is identically zero then we say | |
1321 | that $\Pi$ is \emph{perfectly deniable}. | |
1322 | \end{definition} | |
1323 | ||
1324 | \subsection{Intuition for weak deniability} | |
1325 | \label{sec:deny.weak-intuition} | |
1326 | ||
1327 | If we followed our intuition, we'd define weak deniability by taking the | |
1328 | definition for strongly deniable AAE above, and simply giving the simulator a | |
1329 | sample ciphertext -- say $c_0$ in \xref{def:sdaae-deny}. | |
1330 | ||
1331 | \subsubsection{Chameleon signatures} | |
1332 | Unfortunately, this still isn't quite enough. Chameleon signatures | |
1333 | \cite{cryptoeprint:1998:010} achieve weak deniability as we've outlined above | |
1334 | but still provide a kind of nonrepudiation property. | |
1335 | ||
1336 | Chameleon signatures make use of trapdoor commitments: Alice generates a | |
1337 | trapdoor and a public key for the commitment scheme; Bob signs a message $m$ | |
1338 | by creating a commitment for $m$ using Alice's public key, signs the | |
1339 | commitment (and Alice's public commitment key) with an ordinary signature | |
1340 | scheme, and presents the pair of this signature and an opening of the | |
1341 | commitment as his chameleon signature on~$m$. This achieves a limited form | |
1342 | of deniability, called `non-transferability' in \cite{cryptoeprint:1998:010}: | |
1343 | Alice can forge signatures using her trapdoor, so she can't convince others | |
1344 | of the signature's validity -- but she can only do this for commitments that | |
1345 | Bob has already signed, or she'd break the unforgeability of the underlying | |
1346 | signature scheme. In theory, then, a dispute can be settled by demanding | |
1347 | that Bob provide an opening to the commitment different from Alice's: if he | |
1348 | can, then Alice must have used her trapdoor; otherwise he is deemed to accept | |
1349 | the validity of the signature. This looks like a deniability failure. | |
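The trapdoor commitments underlying chameleon signatures can be sketched concretely. The following Python toy uses a Pedersen-style commitment over a small prime-order group (parameters far too small for real use): Alice's trapdoor $t$ lets her open a commitment to any message of her choosing, while without $t$ a second opening amounts to solving a discrete logarithm.

```python
# Pedersen-style trapdoor ("chameleon") commitment, toy parameters.
# Alice's public commitment key is (g, h = g^t); her trapdoor is t.
import secrets

p, q, g = 2039, 1019, 4            # order-q subgroup of Z_p^*; toy sizes
t = secrets.randbelow(q - 1) + 1   # Alice's trapdoor
h = pow(g, t, p)                   # Alice's public commitment key

def commit(m, r):
    # commit(m; r) = g^m h^r = g^(m + t r)
    return (pow(g, m, p) * pow(h, r, p)) % p

def equivocate(m, r, m_new):
    # Using t, solve m + t r == m_new + t r_new (mod q) for r_new.
    return (r + (m - m_new) * pow(t, -1, q)) % q

m, r = 42, secrets.randbelow(q)
c = commit(m, r)
r2 = equivocate(m, r, 1729)
assert commit(1729, r2) == c       # same commitment, different opening
```

Bob, lacking $t$, can exhibit at most the one opening he actually signed; that asymmetry is exactly what the dispute-settlement procedure above exploits.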
1350 | ||
1351 | (Di~Raimondo and Gennaro \cite{Raimondo:2005:NAD} frame the same problem in | |
1352 | terms of a deniability failure for the recipient, and describe a notion of | |
1353 | `forward deniability' which excludes this failure. The difference in point | |
1354 | of view is mainly whether we consider the sender acting voluntarily or under | |
1355 | compulsion.) | |
1356 | ||
1357 | None of this affects strong deniability. If Alice can make up plausible | |
1358 | messages from nothing but Bob's public key then Bob has nothing to prove. | |
1359 | (We prove a formal version of this statement in \xref{th:strong-weak}.) But | |
1360 | it means that our weak deniability is still too weak. | |
1361 | ||
1362 | \subsubsection{A second simulator} | |
1363 | The fundamental problem is that, if the recipient's simulator needs a valid | |
1364 | ciphertext to work on, then there must be at least one valid ciphertext for | |
1365 | any simulated one, and Bob should be able to produce it; if he can't, then we | |
1366 | conclude that the ciphertext we have is the valid original. To avoid this, | |
1367 | then, we must demand a \emph{second} `sender' simulator: given a genuine | |
1368 | ciphertext~$c$ corresponding to a message~$m$, Alice's public key, Bob's | |
1369 | private key, and a target message~$m'$, the simulator must construct a new | |
1370 | ciphertext~$c'$. To test the efficacy of this simulator, we ask that Justin | |
1371 | attempt to distinguish between, on the one hand, a legitimate | |
1372 | message/ciphertext pair and an `Alice-forgery' created by the first | |
1373 | simulator, and on the other hand, a `Bob-forgery' created by the second | |
1374 | simulator and a legitimate message/ciphertext pair: the simulator is good if | |
Justin can't distinguish these two cases.  The intuition here is that Justin is
1376 | being asked to adjudicate between Alice and Bob: each of them claims to have | |
1377 | the `legitimate' ciphertext, but one of them has secretly used the simulator. | |
1378 | Justin shouldn't be able to work out which is telling the truth. | |
1379 | ||
1380 | A scheme based on chameleon signatures fails to meet this stricter | |
1381 | definition. Justin looks at the two chameleon signatures: if they have the | |
1382 | same commitment but different openings, then it's an Alice forgery; if the | |
1383 | commitments differ, then it's a Bob forgery. If the sender simulator, which | |
1384 | isn't given Alice's trapdoor key, can make a second ciphertext using the same | |
1385 | commitment, then it has found a second opening of the commitment, which is a | |
1386 | binding failure. | |
1387 | ||
1388 | Again, we can distinguish two variant definitions, according to whether Bob's | |
1389 | simulator has to work only on the given ciphertext, or whether it's also | |
1390 | provided with a `hint' produced by the encryption algorithm but not made | |
1391 | available to the adversary in the IND-OCCA and UF-OCMA games. We prefer this | |
1392 | latter formulation;\footnote{% | |
1393 | As described in \cite{cryptoeprint:1998:010}, we could instead attach this | |
1394 | hint to the ciphertext, encrypted under a symmetric key. We shall describe | |
1395 | a general construction of a non-leaky scheme from a leaky one.} % | |
1396 | we must therefore define formally our weakly deniable schemes as `leaky' | |
1397 | encryption schemes which provide such hints. | |
1398 | ||
1399 | \subsubsection{The secrecy and authenticity games} | |
1400 | The fact that the recipient simulator is given a sample ciphertext can affect | |
1401 | the secrecy and authenticity of the encryption scheme in surprising ways. | |
1402 | The generic weakly-deniable scheme described in \xref{sec:gwd.description} | |
1403 | provides an example of the problem. We sketch the scheme very briefly here: | |
1404 | to encrypt a message, the sender uses a KEM to generate a pseudorandom | |
1405 | string, splits the string in half, signs one half, and encrypts the signature | |
1406 | and his message using a symmetric encryption scheme with the other half of | |
1407 | the string as a key; the ciphertext is the KEM clue and the symmetric | |
1408 | ciphertext. | |
1409 | ||
1410 | It's a standard result that a scheme like this provides secrecy against | |
1411 | chosen-ciphertext attack if the KEM is secure and the symmetric encryption | |
1412 | scheme is secure against an adversary making only one encryption query -- | |
1413 | i.e., a \emph{one-time} encryption scheme is sufficient for secrecy. This is | |
1414 | very convenient: many symmetric encryption schemes, such as GCM | |
1415 | \cite{McGrew:2004:SPG}, require a nonce or IV input, which must (at least) be | |
1416 | distinct for each encryption. A one-time encryption scheme can be | |
deterministic and stateless, perhaps using the same nonce all the time.  But
GCM fails catastrophically if a nonce is reused: the keystream repeats, so
the XOR of the two plaintexts leaks, and the authentication key can be
recovered, making forgery easy.
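The keystream-reuse leak is easy to demonstrate with a toy CTR-style construction, with SHA-256 standing in for the block cipher: encrypting two messages under the same key and nonce leaks their XOR.

```python
# Toy CTR-mode keystream (SHA-256 as a stand-in block cipher) showing why
# nonce reuse is fatal for stream-cipher-style modes like GCM.
import hashlib

def keystream(key, nonce, n):
    out, ctr = b'', 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, 'big')).digest()
        ctr += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key, nonce, m):
    return xor(m, keystream(key, nonce, len(m)))

key, nonce = b'k' * 16, b'\x00' * 12
m1, m2 = b'attack at dawn!!', b'retreat at noon!'
c1, c2 = encrypt(key, nonce, m1), encrypt(key, nonce, m2)
assert xor(c1, c2) == xor(m1, m2)   # nonce reuse leaks the plaintext XOR
```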
1420 | ||
1421 | Unfortunately, the way to construct simulated ciphertexts is to recover the | |
1422 | symmetric key from the sample ciphertext and encrypt the replacement message | |
1423 | with it, including the signature from the sample, and copying the sample KEM | |
1424 | clue. We've therefore reused the symmetric key, so a one-time symmetric | |
1425 | encryption scheme is no longer suitable. Indeed, if one used GCM with a | |
1426 | fixed nonce here, authenticity for the entire scheme would be lost. It would | |
1427 | certainly be unfortunate if our security model failed to identify such a | |
1428 | defect! From a practical point of view, users are unlikely to simulate | |
1429 | ciphertexts if simulation has a security cost, which argues in favour of | |
1430 | ciphertexts being genuine, and therefore compromises deniability. | |
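As a concrete (and entirely illustrative) sketch of this simulator, the following Python toy models the scheme with stand-in primitives -- a hash of a fixed shared secret playing the KEM, and HMAC playing the signature -- and shows the recipient simulator reusing the sample's KEM clue and symmetric key, so that the two ciphertext bodies are encrypted under the same keystream.

```python
# Stand-in sketch of the generic weakly-deniable scheme and its recipient
# simulator.  SS and SK are illustrative fixed secrets, not the paper's
# actual key material; hmac stands in for a real signature scheme.
import hashlib, hmac, secrets

SS = b'alice-and-bob-shared-secret'   # stand-in for the KEM shared secret
SK = b'alice-signing-key'             # stand-in for Alice's signing key

def stream(key, n):                   # toy one-time keystream, fixed nonce
    out, i = b'', 0
    while len(out) < n:
        out += hashlib.sha256(key + i.to_bytes(4, 'big')).digest()
        i += 1
    return out[:n]

def xor(a, b):
    return bytes(s ^ t for s, t in zip(a, b))

def sign(half):                       # "signature" on one half of the string
    return hmac.new(SK, half, hashlib.sha256).digest()

def encrypt(m):
    clue = secrets.token_bytes(16)
    z = hashlib.sha256(SS + clue).digest()
    half, key = z[:16], z[16:]        # split the KEM output in half
    body = sign(half) + m             # signature, then the message
    return clue, xor(body, stream(key, len(body)))

def decrypt(clue, ct):
    z = hashlib.sha256(SS + clue).digest()
    half, key = z[:16], z[16:]
    body = xor(ct, stream(key, len(ct)))
    sig, m = body[:32], body[32:]
    return m if hmac.compare_digest(sig, sign(half)) else None

def rsim(clue, ct, m_new):
    # recipient simulator: recover the key and signature from the sample,
    # then re-encrypt m_new under the SAME clue and symmetric key
    z = hashlib.sha256(SS + clue).digest()
    key = z[16:]
    sig = xor(ct, stream(key, len(ct)))[:32]
    body = sig + m_new
    return clue, xor(body, stream(key, len(body)))

clue, ct = encrypt(b'original message')
clue2, ct2 = rsim(clue, ct, b'forged  message!')
assert decrypt(clue2, ct2) == b'forged  message!'
# key reuse: the message parts of the two ciphertexts XOR to the plaintext
# XOR -- exactly the one-time-scheme failure described above
assert xor(ct, ct2)[32:] == xor(b'original message', b'forged  message!')
```

The simulated ciphertext verifies correctly, but the shared keystream means anyone holding both ciphertexts learns the XOR of the two messages.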
1431 | ||
1432 | The problem is that the simulator can poke about inside ciphertexts in ways | |
1433 | that the adversary cannot in the usual attack game. To capture the effects | |
1434 | of simulators, then, we must allow adversaries to make use of them. We shall | |
1435 | therefore provide an oracle for the recipient-simulator. | |
1436 | ||
1437 | It's worth considering why we didn't need to provide an oracle for the | |
1438 | simulator in the strong deniability definition. The intuitive reason is that | |
1439 | the simulator can't provide information about ciphertexts, because it doesn't | |
1440 | accept one as input. The formal reason, as shown in the proof of | |
1441 | \xref{th:strong-weak}, is that we can replace simulations by real | |
1442 | encryptions using the deniability test. | |
1443 | ||
1444 | It would be convenient if we could similarly show that access to the sender | |
1445 | simulator doesn't help an adversary attack weakly deniable schemes. Alas, | |
1446 | this doesn't seem to be the case: certainly, it's not true that we can | |
1447 | necessarily simulate a sender-simulation oracle using the recipient | |
1448 | simulator. To see this, consider the following contrived example: suppose | |
1449 | that legitimate ciphertexts contain a field $x$ which is uniformly | |
1450 | distributed on $\{0, 1, 2\}$, while a recipient-simulated ciphertext contains | |
1451 | $x_R = (x + 1) \bmod 3$ where $x$ is the value in the sample ciphertext; in | |
1452 | order to achieve weak deniability, the sender simulator must include $x_S = | |
1453 | (x - 1) \bmod 3$ due to the ordering of inputs to the judge -- but now, if | |
1454 | one is given a genuine ciphertext and a simulation constructed from it, one | |
1455 | can tell immediately whether the simulation was generated by the sender or | |
1456 | the receiver. | |
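The contrived example can be spelled out in a few lines of Python: given a genuine ciphertext's field and a simulation derived from it, the offset modulo 3 identifies which simulator ran.

```python
# The contrived field-x example: the recipient simulator adds 1 mod 3, so
# the sender simulator is forced (by the ordering of the judge's inputs)
# to subtract 1 mod 3 -- and the offset betrays which simulator ran.
import secrets

def genuine_x():
    return secrets.randbelow(3)       # x uniform on {0, 1, 2}

def rsim_x(x):                        # recipient simulator's field
    return (x + 1) % 3

def ssim_x(x):                        # sender simulator's field
    return (x - 1) % 3

def which_simulator(x_genuine, x_sim):
    return 'recipient' if (x_sim - x_genuine) % 3 == 1 else 'sender'

x = genuine_x()
assert which_simulator(x, rsim_x(x)) == 'recipient'
assert which_simulator(x, ssim_x(x)) == 'sender'
```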
1457 | ||
1458 | This kind of distinguishability doesn't immediately appear to be a | |
1459 | deniability failure \emph{per se} (but see \xref{sec:deny.sufficient}), but | |
1460 | it does suggest that the simulators exhibit undesirable artifacts. Also, | |
1461 | correctly modelling the sender simulator in the attack games is very | |
1462 | complicated, because one must somehow account for the hints without simply | |
1463 | handing them to the adversary. We therefore prefer to require | |
1464 | indistinguishability of genuine/recipient-simulated ciphertext pairs from | |
1465 | genuine/sender-simulated pairs for deniability, in \emph{addition} to the | |
1466 | existing requirement that genuine/recipient-simulated pairs be | |
1467 | indistinguishable from sender-simulated/genuine pairs. | |
1468 | ||
1469 | The result of this chicanery is that, if we allowed adversaries to make $q_S$ | |
1470 | sender-simulator oracle queries, then we'd get insecurity results up to $2 | |
1471 | q_S \epsilon$ higher for secrecy and $q_S \epsilon$ for authenticity than the | |
1472 | apparent results neglecting sender-simulator queries, where $\epsilon$ is the | |
1473 | deniability insecurity to be defined later. While this seems disconcerting, | |
1474 | \begin{itemize} | |
1475 | \item we expect, as a matter of course, that $\epsilon = 0$ -- as it is in | |
1476 | our example constructions -- or at most very small, and certainly | |
1477 | `negligible' should one take an asymptotic view, so this doesn't affect | |
1478 | asymptotic results; | |
1479 | \item one would, presumably, be interested in using the simulators in | |
1480 | practice only if $\epsilon$ were indeed small or zero; and | |
1481 | \item we may salve our consciences with the knowledge that, were we to | |
1482 | provide adversarial access to the sender simulator, the secrecy and | |
1483 | authenticity games would become fiendishly complicated. | |
1484 | \end{itemize} | |
1485 | ||
1486 | \subsection{Definitions for weak deniability} | |
1487 | \label{sec:deny.weak} | |
1488 | ||
1489 | \begin{definition}[Weakly deniable AAE syntax] | |
1490 | \label{def:wdaae-syntax} | |
1491 | ||
  A \emph{weakly deniable asymmetric authenticated encryption scheme}
  (WD-AAE) is a quintuple of (maybe randomized) algorithms
  $\proto{wdaae} = (G, E, D, R, S)$ as follows.
1495 | \begin{itemize} | |
1496 | \item The \emph{key-generation algorithm}~$G$ accepts no parameters and | |
1497 | outputs a pair $(x, X) \gets G()$. We call $x$ the \emph{private key} | |
1498 | and $X$ the corresponding \emph{public key}. | |
1499 | \item The \emph{encryption algorithm}~$E$ accepts as input a private | |
1500 | key~$x$, a public key~$Y$, and a message~$m \in \Bin^*$, and outputs a | |
1501 | pair $(c, \eta) \gets E_x(Y, m)$ consisting of a ciphertext~$c$ and a | |
1502 | \emph{hint}~$\eta$. If the hint is always $\bot$ then we say that the | |
1503 | AAE scheme is \emph{non-leaky}; otherwise we say that it is \emph{leaky}. | |
1504 | \item The \emph{decryption algorithm}~$D$ accepts as input a private | |
    key~$x$, a public key~$Y$, and a ciphertext~$c$, and outputs $m \gets
    D_x(Y, c)$ which is either a message~$m \in \Bin^*$ or the distinguished
1507 | symbol~$\bot$. The decryption algorithm must be such that, for all key | |
1508 | pairs $(x, X)$ and $(y, Y)$ output by $G$, all messages $m \in \Bin^*$, | |
1509 | if $(c, \eta)$ is any output of $E_x(Y, m)$, then $D_y(X, c)$ outputs | |
1510 | $m$. | |
1511 | \item The \emph{recipient simulator}~$R$ accepts as input a private | |
1512 | key~$x$, a public key~$Y$, a ciphertext~$c$, and a message~$m \in | |
1513 | \Bin^*$, and outputs a ciphertext $c' \gets R_x(Y, c, m)$. | |
  \item The \emph{sender simulator}~$S$ accepts as input a private key~$x$, a
    public key~$Y$, a hint~$\eta$, a ciphertext~$c$, and a message~$m \in
    \Bin^*$, and outputs a ciphertext $c' \gets S_x(Y, \eta, c, m)$. \qed
1517 | \end{itemize} | |
1518 | \end{definition} | |
1519 | ||
1520 | \begin{definition}[Weakly deniable AAE security] | |
1521 | \label{def:wdaae-security} | |
1522 | ||
1523 | Let $\Pi = (G, E, D, R, S)$ be a weakly deniable asymmetric authenticated | |
1524 | encryption scheme. An adversary $A$'s ability to attack the secrecy and | |
1525 | authenticity of $\Pi$ is measured using the following games. | |
1526 | \begin{program} | |
1527 | $\Game{ind-occa}{\Pi}(A, b)$: \+ \\ | |
1528 | $(x, X) \gets G()$; | |
1529 | $(y, Y) \gets G()$; \\ | |
1530 | $(m_0, m_1, s) \gets | |
1531 | A^{\bar{E}_x(\cdot, \cdot), D_y(\cdot, \cdot), | |
1532 | R_y(\cdot, \cdot, \cdot)} | |
1533 | (\cookie{find}, X, Y)$; \\ | |
1534 | $(c^*, \eta^*) \gets E_x(Y, m_b)$; \\ | |
1535 | $b' \gets | |
1536 | A^{\bar{E}_x(\cdot, \cdot), D_y(\cdot, \cdot), | |
1537 | R_y(\cdot, \cdot, \cdot)} | |
1538 | (\cookie{guess}, c^*, s)$; \\ | |
1539 | \RETURN $b'$; \- | |
1540 | \\[\medskipamount] | |
1541 | $\Game{uf-ocma}{\Pi}(A)$: \+ \\ | |
1542 | $w \gets 0$; | |
1543 | $\mathcal{C} \gets \emptyset$; \\ | |
1544 | $(x, X) \gets G()$; | |
1545 | $(y, Y) \gets G()$; \\ | |
1546 | $A^{\id{enc}(\cdot, \cdot), \id{dec}(\cdot, \cdot), | |
1547 | \id{rsim}(\cdot, \cdot, \cdot)}(X, Y)$; \\ | |
1548 | \RETURN $w$; \- | |
1549 | \next | |
1550 | $\bar{E}_x(Z, m)$: \+ \\ | |
1551 | $(c, \eta) \gets E_x(Z, m)$; \\ | |
1552 | \RETURN $c$; \- | |
1553 | \\[\medskipamount] | |
1554 | $\id{enc}(Z, m)$: \+ \\ | |
1555 | $(c, \eta) \gets E_x(Z, m)$; \\ | |
1556 | \IF $Z = Y$ \THEN $\mathcal{C} \gets \mathcal{C} \cup \{ c \}$; \\ | |
1557 | \RETURN $c$; \- | |
1558 | \\[\medskipamount] | |
1559 | $\id{dec}(Z, c)$: \+ \\ | |
1560 | $m \gets D_y(Z, c)$; \\ | |
1561 | \IF $Z = X \land m \ne \bot \land c \notin \mathcal{C}$ \THEN | |
1562 | $w \gets 1$; \\ | |
1563 | \RETURN $m$; \- | |
1564 | \\[\medskipamount] | |
1565 | $\id{rsim}(Z, c, m')$: \+ \\ | |
1566 | $c' \gets R_y(Z, c, m')$; \\ | |
1567 | \IF $c' \ne \bot$ \THEN | |
1568 | $\mathcal{C} \gets \mathcal{C} \cup \{ c' \}$; \\ | |
1569 | \RETURN $c'$; \- | |
1570 | \end{program} | |
1571 | ||
  In $\Game{ind-occa}{\Pi}$, the adversary is required to produce two
  messages $m_0$ and $m_1$ of equal lengths at the end of the \cookie{find}
  stage, and is forbidden from querying its decryption oracle $D_y(\cdot,
  \cdot)$ on the pair $(X, c^*)$.  (It's fine to pass simulated ciphertexts
  from $R_y(\cdot, \cdot, \cdot)$ to the decryption oracle.)
1576 | ||
1577 | The adversary's \emph{advantage} in these games is measured as follows. | |
1578 | \begin{eqnarray*}[c] | |
1579 | \Adv{ind-occa}{\Pi}(A) = | |
1580 | \Pr[\Game{ind-occa}{\Pi}(A, 1) = 1] - | |
1581 | \Pr[\Game{ind-occa}{\Pi}(A, 0) = 1] | |
1582 | \\[\medskipamount] | |
1583 | \Adv{uf-ocma}{\Pi}(A) = | |
1584 | \Pr[\Game{uf-ocma}{\Pi}(A) = 1] | |
1585 | \end{eqnarray*} | |
1586 | Finally, the IND-OCCA and UF-OCMA insecurity functions of $\Pi$ are given | |
1587 | by | |
1588 | \begin{eqnarray*}[c] | |
1589 | \InSec{ind-occa}(\Pi; t, q_E, q_D, q_R) = \max_A \Adv{ind-occa}{\Pi}(A) | |
1590 | \\[\medskipamount] | |
1591 | \InSec{uf-ocma}(\Pi; t, q_E, q_D, q_R) = \max_A \Adv{uf-ocma}{\Pi}(A) | |
1592 | \end{eqnarray*} | |
1593 | where the maxima are taken over all adversaries~$A$ completing the game in | |
1594 | time~$t$ and making at most $q_E$, $q_D$, and $q_R$ queries to their | |
1595 | encryption, decryption and recipient-simulator oracles, respectively. | |
1596 | \end{definition} | |
1597 | ||
1598 | As we mentioned above, there are three kinds of pairs of ciphertexts which | |
1599 | ought to be hard to distinguish. We describe three distinct subgames, $0$, | |
1600 | $1$, and $2$, and ask the adversary to output a guess as to which one it | |
played.  If we write $p_{ij}$ for the probability that the adversary outputs
$j$ when it was playing subgame~$i$, then we define its advantage as
$\max_{i, j \in \{0, 1, 2\}} (p_{ii} - p_{ji})$.  Obviously the cases $i = j$
aren't very interesting, since they contribute zero, but it's simpler to
leave them in anyway.
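As a quick sanity check of the advantage formula, here is the computation $\max_{i, j} (p_{ii} - p_{ji})$ in Python for a made-up probability matrix.

```python
# Three-way wdeny advantage from a matrix p, where p[i][j] is the
# probability that the adversary outputs j when playing subgame i.
# (The matrix values below are made up for illustration.)
def wdeny_advantage(p):
    return max(p[i][i] - p[j][i]
               for i in range(3) for j in range(3))

p = [[0.50, 0.25, 0.25],
     [0.30, 0.40, 0.30],
     [0.20, 0.30, 0.50]]
assert abs(wdeny_advantage(p) - 0.30) < 1e-9   # achieved at i = 0, j = 2
```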
1605 | ||
1606 | The reason that we structure the definition like this is that it gives us | |
very strong results to use in reductions.  Stepping from a game which works
like subgame~$i$ to one that works like subgame~$j$ then costs at most one
term of deniability insecurity, since an adversary distinguishing the two
games immediately yields a judge distinguishing the corresponding subgames.
1609 | ||
1610 | \begin{definition}[Weakly deniable AAE deniability] | |
1611 | \label{def:wdaae-deny} | |
1612 | ||
1613 | Let $\Pi = (G, E, D, R, S)$ be a weakly deniable asymmetric authenticated | |
1614 | encryption scheme. The judge~$J$'s ability to distinguish simulated | |
1615 | ciphertexts is defined by the following game. | |
1616 | \begin{program} | |
1617 | $\Game{wdeny}{\Pi}(J, i)$: \+ \\ | |
1618 | $(x, X) \gets G()$; | |
1619 | $(y, Y) \gets G()$; \\ | |
1620 | $(m_0, m_1, \sigma) \gets J(\cookie{find}, x, X, y, Y)$; \\ | |
1621 | \IF $i = 2$ \THEN \\ \ind | |
1622 | $(c_0, \eta) \gets E_x(Y, m_0)$; \\ | |
1623 | $c_1 \gets R_y(X, c_0, m_1)$; \- \\ | |
1624 | \ELSE \IF $i = 1$ \THEN \\ \ind | |
1625 | $(c_1, \eta) \gets E_x(Y, m_1)$; \\ | |
1626 | $c_0 \gets S_x(Y, \eta, c_1, m_0)$; \- \\ | |
1627 | \ELSE \IF $i = 0$ \THEN \\ \ind | |
1628 | $(c_0, \eta) \gets E_x(Y, m_0)$; \\ | |
1629 | $c_1 \gets S_x(Y, \eta, c_0, m_1)$; \- \\ | |
1630 | $j \gets J(\cookie{guess}, \sigma, c_0, c_1)$; \\ | |
1631 | \RETURN $j$; | |
1632 | \end{program} | |
1633 | We measure $J$'s \emph{advantage} in distinguishing simulated ciphertexts | |
1634 | from genuine ones by | |
1635 | \[ \Adv{wdeny}{\Pi}(J) = \max_{i, j \in \{0, 1, 2\}} | |
       (\Pr[\Game{wdeny}{\Pi}(J, i) = i] -
        \Pr[\Game{wdeny}{\Pi}(J, j) = i])
1638 | \] | |
1639 | Finally, we define the \emph{insecurity function} for the (weak) | |
1640 | deniability of $\Pi$ by | |
1641 | \[ \InSec{wdeny}(\Pi; t) = \max_J \Adv{wdeny}{\Pi}(J) \] | |
1642 | where the maximum is taken over all algorithms~$J$ completing the game in | |
1643 | time~$t$. We say that $\Pi$ is \emph{statistically deniable} if | |
1644 | $\InSec{wdeny}(\Pi; t)$ is constant; if it is identically zero then we say | |
1645 | that $\Pi$ is \emph{perfectly deniable}. | |
1646 | \end{definition} | |
1647 | ||
1648 | \subsection{Discussion} | |
1649 | \label{sec:deny.discuss} | |
1650 | ||
1651 | \subsubsection{Sufficiency of the simulators} | |
1652 | \label{sec:deny.sufficient} | |
1653 | As we've defined them, the various simulators expect to receive only | |
1654 | `genuine' ciphertexts, as generated by the encryption algorithm $E_x(Y, m)$. | |
1655 | Especially in the case of the weak-deniability sender simulator, the reader | |
1656 | may be wondering whether this indicates a lapse in the definition. | |
1657 | ||
From a practical point of view, we may imagine that Alice, a user of a weakly
1659 | deniable AAE scheme, is confronted with a ciphertext forged by Bob. If Alice | |
1660 | has the original ciphertext which Bob used as his sample, then she could | |
1661 | reveal it directly. If the original contents were not to her liking, she | |
1662 | could also construct a sender-simulation from it. That this will be | |
1663 | satisfactory is a direct consequence of weak deniability. But \fixme | |
1664 | ||
1665 | %% E,R/S,E and E,R/E,S are indistinguishable pairs | |
1666 | %% hence E,S/E,R/S,E and so R,E/E,S/E,R | |
1667 | %% want S,R/S,E, say | |
1668 | ||
1669 | %% history is important! want S(R),R(E)/S(E),E/E,R(E) -- no, i've got lost | |
1670 | %% somewhere | |
1671 | ||
1672 | %% want S(E),R(E)/E,R(E) or so | |
1673 | ||
1674 | %% given E,R/S,E -> S,E,R/SS,S,E or E,R/E,S -> S,E,R/S',E,S | |
1675 | ||
1676 | Let us investigate the indistinguishability properties of weakly deniable | |
1677 | encryption schemes. For simplicity, we consider (for the moment) only | |
1678 | perfect deniability, and therefore which other kinds of simulated ciphertexts | |
1679 | are identically distributed. We shall show later how to extend the analysis | |
1680 | to the more general situation. | |
1681 | ||
1682 | Fix a perfectly weakly deniable AAE scheme $\Pi = (G, E, D, R, S)$. Let | |
1683 | $\mathcal{C}$ be the ciphertext space of $\Pi$, and $\mathcal{N}$ be the hint | |
1684 | space -- so $E$ outputs a result in $\mathcal{C} \times \mathcal{N}$. | |
1685 | ||
1686 | We define random variables $x$, $X$, $y$ and $Y$, corresponding to two key | |
1687 | pairs as generated by $G$. Let $\mathcal{R}$ be the space of random | |
1688 | variables taking values in $\mathcal{N} \times (\Bin^*)^2 \times | |
\mathcal{C}^2$.  For any message~$m \in \Bin^*$, we let $I_m \in \mathcal{R}$
1690 | be distributed according to the experiment | |
1691 | \[ (c, \eta) \gets E_x(Y, m) : (\eta, m, m, c, c) \] | |
1692 | ||
1693 | We define an equivalence relation $\sim$ on $\mathcal{R}$, writing $(\eta, m, | |
1694 | n, c, d) \sim (\eta', m', n', c', d')$ if and only if $(m, n) = (m', n')$ | |
1695 | and, for all judges~$J$, | |
\[ \Pr[J(x, X, y, Y, \eta, c, m, d, n) = 1] =
     \Pr[J(x, X, y, Y, \eta', c', m', d', n') = 1]
1698 | \] | |
1699 | Next we define some operations on $\mathcal{R}$: | |
1700 | \begin{eqnarray*}[rl] | |
1701 | \pi(\eta, m, n, c, d) &= (\eta, n, m, d, c) \\ | |
1702 | \delta(\eta, m, n, c, d) &= (\eta, m, m, c, c) \\ | |
1703 | \epsilon_{m'}(\eta, m, n, c, d) &= (\eta, m', n, E_x(Y, m'), d) \\ | |
1704 | \rho_{m'}(\eta, m, n, c, d) &= (\eta, m', n, R_y(X, c, m'), d) \\ | |
1705 | \sigma_{m'}(\eta, m, n, c, d) &= (\eta, m', n, S_x(Y, \eta, c, m'), d) | |
1706 | \end{eqnarray*} | |
1707 | Let us denote by $M$ the monoid generated by these operations under | |
1708 | composition $\alpha \beta = \alpha \circ \beta$, and let $e \in M$ be the | |
1709 | identity operation on $\mathcal{R}$. A simple reduction shows that, if | |
1710 | $\alpha \in M$ and $C \sim D$ then $\alpha C \sim \alpha D$. Using this | |
1711 | notation, the requirement that $\Pi$ is perfectly deniable is that, for all | |
1712 | messages $m$, $m'$, | |
1713 | \[ \pi \sigma_{m'} I_m \sim \pi \rho_{m'} I_m \sim \sigma_m I_{m'} \] | |
1714 | Since $\pi^2 = e$, we have | |
1715 | \[ \rho_{m'} I_m \sim \sigma_{m'} I_m \sim \pi \rho_m I_{m'} \] | |
1716 | ||
1717 | \newpage | |
1718 | \subsubsection{[Musings]} | |
1719 | starting points: | |
1720 | \[ \rho \sim \sigma \sim \pi \rho \qquad \pi \sigma \sim \pi \rho \sim | |
1721 | \sigma \] | |
1722 | hmm. is there a chance we can get | |
1723 | \[ \alpha \sim \beta \land C \sim D \implies \alpha C \sim \beta D \] | |
1724 | doesn't look very likely | |
1725 | ||
1726 | \[ \sigma \sim \pi \rho \] | |
1727 | ||
1728 | Let $P_J(C) = \Pr[J(x, X, y, Y, \eta, c, m, d, n) = 1]$. | |
1729 | ||
1730 | From properties of $\sim$: | |
1731 | \[ P_J(\alpha_1 C) - P_J(\alpha_0 C) = P_J(\alpha_1 D) - P_J(\alpha_0 D) \] | |
1732 | so | |
1733 | \[ P_J(\alpha_0 D) - P_J(\alpha_0 C) = P_J(\alpha_1 D) - P_J(\alpha_1 C) \] | |
1734 | ||
1735 | generic construction of non-leaky wdaae? | |
1736 | ||
1737 | \subsubsection{Strong deniability implies weak deniability} | |
1738 | \label{sec:deny.strong-weak} | |
1739 | By now, it's probably not at all clear that strong deniability actually | |
1740 | implies weak deniability; but this is nonetheless true. The important | |
1741 | observation is that, since the recipient's simulator works without reference | |
1742 | to an existing ciphertext, the standard encryption algorithm works well as a | |
1743 | sender's simulator. This is expressed in the following theorem. | |
1744 | ||
1745 | \begin{theorem}[$\textrm{SDENY} \implies \textrm{WDENY}$] | |
1746 | \label{th:strong-weak} | |
1747 | ||
1748 | Let $\Pi = (G, E, D, R)$ be a strongly deniable AAE scheme. Define | |
1749 | \[ E'_x(Y, m) = (E_x(Y, m), \bot) \textrm, \quad | |
1750 | R'_x(Y, c, m) = R_x(Y, m) | |
1751 | \quad \textrm{and} \quad | |
1752 | S'_x(Y, \eta, c, m) = E'_x(Y, m) | |
1753 | \] | |
1754 | Then $\Pi' = (G, E', D, R', S')$ is a non-leaky weakly deniable AAE scheme | |
1755 | such that | |
1756 | \begin{eqnarray*}[rl] | |
1757 | \InSec{ind-occa}(\Pi'; t, q_E, q_D, q_R) | |
1758 | & \le \InSec{ind-occa}(\Pi; t, q_E + q_R, q_D) + | |
1759 | 2 q_R \InSec{sdeny}(\Pi; t + t_E) \\ | |
1760 | \InSec{uf-ocma}(\Pi'; t, q_E, q_D, q_R) | |
1761 | & \le \InSec{uf-ocma}(\Pi; t, q_E + q_R, q_D) + | |
1762 | q_R \InSec{sdeny}(\Pi; t + t_E) \\ | |
1763 | \InSec{wdeny}(\Pi'; t) | |
1764 | & \le \InSec{sdeny}(\Pi; t + t_E) | |
1765 | \end{eqnarray*} | |
1766 | where $t_E$ is the time required for a single encryption. | |
1767 | \end{theorem} | |
1768 | \begin{proof} | |
1769 | For the IND-OCCA inequality, we use a hybrid argument. Let $A$ be an | |
1770 | adversary; we define $\G0$ to be the game in which $b \inr \{0,1\}$ is | |
1771 | chosen uniformly at random, followed by $\Game{ind-occa-$b$}{\Pi'}(A)$ of | |
1772 | \xref{def:wdaae-security}. | |
1773 | ||
1774 | Now we define $\G{i}$, for $0 \le i \le q_R$. (This definition will | |
1775 | coincide with the definition we already gave for $\G0$.) In each case, the | |
1776 | first $i$ recipient-simulator queries are answered by invoking the | |
1777 | encryption oracle instead; the remaining $q_R - i$ queries use the | |
1778 | recipient simulator as before. In $\G{q_R}$, the recipient simulator isn't | |
1779 | used at all, so this game is exactly $\Game{ind-occa-$b$}{\Pi}(A)$ of | |
1780 | \xref{def:sdaae-security}. Let $S_i$ be the event that, in $\G{i}$, the | |
1781 | output $b'$ of $A$ is equal to $b$. We claim: | |
1782 | \begin{eqnarray}[rl] | |
1783 | \Adv{ind-occa}{\Pi'}(A) & = 2 \Pr[S_0] - 1 \\ | |
1784 | \Adv{ind-occa}{\Pi}(A) & = 2 \Pr[S_{q_R}] - 1 \\ | |
1785 | \Pr[S_i] - \Pr[S_{i+1}] & \le \InSec{sdeny}(\Pi; t + t_E) | |
1786 | \qquad \textrm{for $0 \le i < q_R$} | |
1787 | \end{eqnarray} | |
1788 | The first two equations are clear. The last inequality requires a | |
1789 | reduction. We construct a judge~$J$ which | |
1790 | simulates the above game (recall that the judge is given all of the | |
1791 | necessary private keys!), except that it uses its input ciphertext to | |
1792 | respond to the $i$th recipient-simulator query. If $A$ wins the simulated | |
1793 | game by guessing correctly, then $J$ outputs $1$, declaring its ciphertext | |
1794 | to be simulated; otherwise $J$ outputs $0$. Then, conditioning on any | |
1795 | particular message~$m$ being the value of the random variable~$M$ denoting | |
1796 | the message in the $i$th recipient-simulator query, we have | |
1797 | \begin{eqnarray*}[rl] | |
1798 | \Pr[S_i] - \Pr[S_{i+1}] | |
1799 | & = \sum_{m \in \Bin^*} | |
1800 | \Pr[M = m] (\Pr[S_i \mid M = m] - \Pr[S_{i+1} \mid M = m]) \\ | |
1801 | & \begin{subsplit} \displaystyle {} | |
1802 | = \sum_{m \in \Bin^*} | |
1803 | \Pr[M = m] (\Pr[\Game{sdeny-$1$}{\Pi}(J, m, m) = 1] - {} \\[-3\jot] | |
1804 | \Pr[\Game{sdeny-$0$}{\Pi}(J, m, m) = 1]) | |
1805 | \end{subsplit} \\ | |
1806 | & \le \sum_{m \in \Bin^*} | |
1807 |           \Pr[M = m] \InSec{sdeny}(\Pi; t + t_E) \\ | |
1808 | & = \InSec{sdeny}(\Pi; t + t_E) \sum_{m \in \Bin^*} \Pr[M = m] \\ | |
1809 | & = \InSec{sdeny}(\Pi; t + t_E) | |
1810 | \end{eqnarray*} | |
1811 | which proves the claim. | |
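For concreteness, the judge used in the reduction above can be sketched as
follows; this is only a sketch, and we elide the bookkeeping for $A$'s other
oracle queries, which $J$ answers itself using the private keys it is given.
\begin{program}
  $J(x, X, y, Y, c, m)$: \+ \\
  $b \getsr \{0, 1\}$; \\
  run $A$ as in $\G{i}$, answering the $i$th recipient-simulator query
    with~$c$; \\
  let $b'$ be $A$'s output; \\
  \IF $b' = b$ \THEN \RETURN $1$ \ELSE \RETURN $0$; \-
\end{program}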
1812 | ||
1813 | The proof of the UF-OCMA inequality is very similar; we omit the details. | |
1814 | ||
1815 | Finally, we prove the deniability inequality. Let $J$ be a judge in the | |
1816 | weak deniability game. Note that our `sender simulator' is precisely the | |
1817 | official encryption algorithm, so its output is obviously `genuine'. The | |
1818 | notation is uncomfortably heavyweight; let us write $\G{b} = | |
1819 | \Game{wdeny-$b$}{\Pi'}(J)$. In $\G0$ both ciphertexts are `genuine' -- | |
1820 | they were generated by the proper encryption algorithm, using the proper | |
1821 | private key. In $\G1$, on the other hand, the left-hand ciphertext is | |
1822 | genuine but the right-hand ciphertext is simulated by~$R$. However, $R$ is | |
1823 | meant to be a good simulator of genuine ciphertexts. Indeed, if $J$ can | |
1824 | tell a pair of good, independently generated ciphertexts from a good | |
1825 | ciphertext and a (still independent) $R$-simulated one, then we can use it | |
1826 | to distinguish $R$'s simulations.\footnote{% | |
1827 | This independence, which follows from the fact that the strong | |
1828 | deniability simulator is required to work \emph{ab initio} without | |
1829 | reference to an existing ciphertext, is essential to the proof. Without | |
1830 | it, we'd `prove' that one needs only a single simulator to achieve weak | |
1831 | deniability -- but the previous discussion of chameleon signatures shows | |
1832 | that this is false.} % | |
1833 | ||
1834 | Choose some message~$m^*$. We construct an explicit~$J'$ as follows. | |
1835 | \begin{program} | |
1836 | $J'(x, X, y, Y, c, m)$: \+ \\ | |
1837 | $c^* \gets E_y(X, m^*)$; \\ | |
1838 | $b \gets J(x, X, y, Y, c^*, m^*, c, m)$; \\ | |
1839 | \RETURN $b$; | |
1840 | \end{program} | |
1841 | This $J'$ is given either a genuine or an $R$-simulated ciphertext and | |
1842 | message, allegedly for message~$m$.  It generates a genuine ciphertext | |
1843 | independently for~$m^*$.  It now has either two independent genuine | |
1844 | ciphertexts or a genuine ciphertext and a simulation, and therefore | |
1845 | distinguishes the two cases exactly as well as $J$.  We conclude that | |
1846 | \[ \Adv{wdeny}{\Pi'}(J) \le \InSec{sdeny}(\Pi; t + t_E) \] | |
1847 | as required. | |
1848 | \end{proof} | |
1849 | ||
1850 | Unsurprisingly, weak deniability doesn't imply strong deniability (unless | |
1851 | one-way functions don't exist). This will follow immediately from | |
1852 | \xref{th:gwd-not-sdeny}. | |
1853 | ||
1854 | \subsubsection{Insider authenticity} | |
1855 | \label{sec:deny.insider} | |
1856 | There is a kind of attack considered in the study of key-exchange protocols | |
1857 | (see, for example, \cite{Blake-Wilson:1997:KAP}) called \emph{key-compromise | |
1858 | impersonation}. The adversary is assumed to know the long-term secret | |
1859 | information of one of the participants -- Alice, say. Clearly, he can now | |
1860 | impersonate Alice to Bob. If the protocol is vulnerable to key-compromise | |
1861 | impersonation attacks, however, he can also impersonate Bob -- or anybody | |
1862 | else of his choosing -- to Alice.\footnote{% | |
1863 | It is apparently considered unsatisfactory for a key-exchange protocol to | |
1864 | admit key-compromise impersonation, though it's unclear to us why we might | |
1865 | expect Alice to retain any useful security properties after having been | |
1866 | corrupted.} % | |
1867 | The analogue of key-compromise-impersonation resistance in the context of | |
1868 | noninteractive asymmetric encryption is \emph{insider authenticity}. | |
1869 | ||
1870 | Deniably authenticated asymmetric encryption schemes do not provide insider | |
1871 | authenticity: the two are fundamentally incompatible. Indeed, the existence | |
1872 | of the recipient simulator~$S$ in definitions~\xref{def:sdaae-deny} | |
1873 | and~\ref{def:wdaae-deny} constitutes an insider-authenticity attack. | |
1874 | ||
1875 | Insider authenticity can be defined fairly easily by modifying the UF-OCMA | |
1876 | games of definitions~\ref{def:sdaae-security} or~\ref{def:wdaae-security} to | |
1877 | give the adversary the recipient's private key~$y$; we shall omit the tedious | |
1878 | formalities, but we shall end up with a security measure | |
1879 | $\InSec{uf-icma}(\Pi; t, q_E)$ (for `unforgeability under insider | |
1880 | chosen-message attack').  The attack is simple: retrieve a single encrypted | |
1881 | message~$m$ from the sender (if the scheme is only weakly deniable -- even | |
1882 | this is unnecessary if we have strong deniability), and pass its ciphertext | |
1883 | to the simulator~$S$, along with the private key~$y$ and a different | |
1884 | message~$m' \ne m$.  Let $c'$ be the simulator's output.  We know that the | |
1885 | decryption algorithm always succeeds on real ciphertexts; so if $p$ is the | |
1886 | probability that decrypting $c'$ succeeds, then | |
1887 | \[ \InSec{uf-icma}(\Pi; t_E + t_S, 1) \ge | |
1888 | p \ge 1 - \InSec{wdeny}(\Pi, S; t_D) \] | |
1889 | where $t_E$, $t_S$ and $t_D$ are the respective times required to encrypt a | |
1890 | message, run the recipient simulator, and decrypt a message. | |
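Written out in the paper's pseudocode conventions, the attack is the
following forger; this is only a sketch, eliding the formal interface of the
UF-ICMA game, and writing $S$ for the simulator as in the text above.
\begin{program}
  $A(y, X, Y)$: \+ \\
  obtain from the sender some ciphertext $c \gets E_x(Y, m)$; \\
  choose any message $m' \ne m$; \\
  $c' \gets S_y(X, c, m')$; \\
  \RETURN $(c', m')$; \-
\end{program}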
1891 | ||
1892 | \subsection{Generic non-leaky weakly deniably authenticated asymmetric | |
1893 | encryption} | |
1894 | \label{sec:nleak} | |
1895 | ||
1896 | Let $\proto{wdaae} = (\mathcal{G}, \mathcal{E}, \mathcal{D}, \mathcal{R}, | |
1897 | \mathcal{S})$ be a weakly deniably authenticated asymmetric encryption | |
1898 | scheme. We shall assume that the hints output by the encryption algorithm | |
1899 | are bit strings of some constant length $n$. (All of the weakly deniable | |
1900 | schemes in this paper have this property.) | |
1901 | ||
1902 | Let $\proto{se} = (\kappa, E, D)$ be a symmetric encryption scheme, and let | |
1903 | $\proto{h} = (\nu, \lambda, H)$ be a universal one-way hash function. We | |
1904 | construct a \emph{non-leaky} weakly deniably authenticated asymmetric | |
1905 | encryption scheme | |
1906 | \begin{spliteqn*} | |
1907 | \Xid{\Pi}{wdaae-nleak}(\proto{wdaae}, \proto{se}, \proto{h}) = | |
1908 | (\Xid{G}{wdaae-nleak}, \Xid{E}{wdaae-nleak}, \\ | |
1909 | \Xid{D}{wdaae-nleak}, \Xid{R}{wdaae-nleak}, \Xid{S}{wdaae-nleak}) | |
1910 | \end{spliteqn*} | |
1911 | where the component algorithms are as follows. | |
1912 | \begin{program} | |
1913 | $\Xid{G}{wdaae-nleak}()$: \+ \\ | |
1914 | $(x, X) \gets \mathcal{G}()$; \\ | |
1915 | $K \getsr \Bin^\kappa$; \\ | |
1916 | \RETURN $\bigl( (x, K), X \bigr)$; \- | |
1917 | \next | |
1918 | $\Xid{E}{wdaae-nleak}_{x, K}(Y, m)$: \+ \\ | |
1919 | $N \getsr \Bin^\nu$; \\ | |
1920 | $(c, \eta) \gets \mathcal{E}_x(Y, \emptystring)$; \\ | |
1921 | $z \gets E_K(\eta)$; | |
1922 | $h \gets H_N(z)$; \\ | |
1923 | $c' \gets \mathcal{S}_x(Y, \eta, c, N \cat h \cat m)$; \\ | |
1924 | \RETURN $\bigl( (c', z), \bot \bigr)$; \- | |
1925 | \next | |
1926 | $\Xid{D}{wdaae-nleak}_{x, K}\bigl( Y, (c, z) \bigr)$: \+ \\ | |
1927 | $m \gets \mathcal{D}_x(Y, c)$; \\ | |
1928 | \IF $m = \bot$ \THEN \RETURN $\bot$; \\ | |
1929 | $N \gets m[0 \bitsto \nu]$; | |
1930 | $h \gets m[\nu \bitsto \nu + \lambda]$; \\ | |
1931 | \IF $H_N(z) \ne h$ \THEN \RETURN $\bot$; \\ | |
1932 | \RETURN $m[\nu + \lambda \bitsto \len(m)]$; \- | |
1933 | \newline | |
1934 | $\Xid{R}{wdaae-nleak}_{x, K}\bigl( Y, (c, z), m \bigr)$: \+ \\ | |
1935 | $N \getsr \Bin^\nu$; | |
1936 | $h \gets H_N(z)$; \\ | |
1937 | \RETURN $\mathcal{R}_x(Y, c, N \cat h \cat m)$; \- | |
1938 | \next | |
1939 | $\Xid{S}{wdaae-nleak}_{x, K}\bigl( Y, \eta', (c, z), m \bigr)$: \+ \\ | |
1940 | $\eta \gets D_K(z)$; \\ | |
1941 | $N \getsr \Bin^\nu$; | |
1942 | $h \gets H_N(z)$; \\ | |
1943 | \RETURN $\mathcal{S}_x(Y, \eta, c, N \cat h \cat m)$; \- | |
1944 | \end{program} | |
1945 | ||
1946 | It is clear that the constructed scheme is non-leaky, since its hint is | |
1947 | always $\bot$.  The following theorems relate its secrecy, authenticity and deniability to those of the original scheme. | |
1948 | ||
1949 | \begin{theorem}[Non-leaky secrecy] | |
1950 | \label{th:wdaae-nleak.sec} | |
1951 | ||
1952 | \fixme | |
1953 | Let $\proto{wdaae}$ be a weakly deniably authenticated asymmetric | |
1954 | encryption scheme with $n$-bit hints; let $\proto{se}$ be a symmetric | |
1955 | encryption scheme and let $\proto{h}$ be a universal one-way hash. Then | |
1956 | \[ \InSec{ind-occa}(\Xid{\Pi}{wdaae-nleak}(\proto{wdaae}, \proto{se}, | |
1957 | \proto{h}); t, q_E, q_D, q_R) \ldots | |
1958 | \] | |
1959 | \end{theorem} | |
1960 | \begin{proof} | |
1961 | Let $A$ be any adversary attacking $\Pi = | |
1962 | \Xid{\Pi}{wdaae-nleak}(\proto{wdaae}, \proto{se}, \proto{h})$ within the | |
1963 | stated resource bounds. We use a sequence of games. Game~$\G0$ is the | |
1964 | standard IND-OCCA attack game, with the bit $b$ chosen uniformly at | |
1965 | random. In each game $\G{i}$, let $S_i$ be the event that $A$'s output is | |
1966 | equal to $b$. It then suffices to bound $\Pr[S_0]$. | |
1967 | ||
1968 | Game~$\G1$ is like $\G0$, except that we change the way the challenge | |
1969 | message $m_b$ is encrypted. Specifically, we set $z = E_K(0^n)$. A simple | |
1970 | reduction, which we omit, shows that | |
1971 | \begin{equation} \label{eq:wdaae-nleak.sec.s0} | |
1972 | \Pr[S_0] - \Pr[S_1] \le \InSec{ind-cca}(\proto{se}; t, q_E, 0) | |
1973 | \end{equation} | |
1974 | ||
1975 | Game~$\G2$ is like $\G1$, except that we again change the way the challenge | |
1976 | message $m_b$ is encrypted. The new algorithm is as follows. | |
1977 | \begin{program} | |
1978 | $N \getsr \Bin^\nu$; \\ | |
1979 | $z \gets E_K(0^n)$; | |
1980 | $h \gets H_N(z)$; \\ | |
1981 | $(c, \eta) \gets \mathcal{E}_x(Y, N \cat h \cat m_b)$; \\ | |
1982 | \RETURN $(c, z)$; | |
1983 | \end{program} | |
1984 | We claim that | |
1985 | \begin{equation} \label{eq:wdaae-nleak.sec.s1} | |
1986 | \Pr[S_1] - \Pr[S_2] \le \InSec{wdeny}(\proto{wdaae}; t) | |
1987 | \end{equation} | |
1988 | To prove the claim, we describe a reduction from the weak deniability of | |
1989 | $\proto{wdaae}$, conditioning on the value of $m_b$, and using | |
1990 | \xref{lem:random}. | |
1991 | \fixme | |
1992 | \end{proof} | |
1993 | ||
1994 | %%%-------------------------------------------------------------------------- | |
1995 | \section{Authentication tokens} | |
1996 | \label{sec:tokens} | |
1997 | ||
1998 | An obvious way for Bob to authenticate an encrypted message sent to Alice is | |
1999 | for him to include in it a secret known only to himself and her, which is | |
2000 | independent of his actual message. We call such secrets \emph{authentication | |
2001 | tokens}. | |
2002 | ||
2003 | Authentication tokens may, on their own, either be deniable or not. A | |
2004 | deniable token is one that can be computed by either the sender or the | |
2005 | recipient; nondeniable tokens can be computed only by the sender, but the | |
2006 | recipient can still verify the token's validity. The corresponding | |
2007 | encryption scheme is weakly deniable regardless of whether the tokens are | |
2008 | deniable: since the token is independent of the message, it can be | |
2009 | transferred into another encrypted message without detection. However, if | |
2010 | the recipient can't compute a token independently, we achieve only weak | |
2011 | deniability: the token's existence proves that the sender created it. If the | |
2012 | tokens are deniable then the overall encryption scheme achieves strong | |
2013 | deniability. | |
2014 | ||
2015 | We first define formally what we demand of authentication tokens. Secondly, | |
2016 | we describe and prove the security of a simple deniably authenticated | |
2017 | asymmetric encryption scheme using an authentication token. Next, we | |
2018 | describe an obvious instantiation of nondeniable authentication tokens in | |
2019 | terms of a digital signature scheme. Finally, we sketch a deniable | |
2020 | authentication token based on the computational Diffie--Hellman problem. | |
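As a preview -- and ignoring, for now, the strings and binding values
introduced below -- the kind of deniable token we have in mind is the static
Diffie--Hellman value: if $X = g^x$ and $Y = g^y$ are public keys in a
suitable cyclic group, then the token
\[ \tau = H(g^{xy}) = H(Y^x) = H(X^y) \]
(for some suitable hash function~$H$) can be computed by the sender from $x$
and $Y$, and equally by the recipient from $y$ and $X$; this symmetry is
precisely the deniability property we shall formalize.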
2021 | ||
2022 | \subsection{Definitions for authentication tokens} | |
2023 | \label{sec:tokens.definition} | |
2024 | ||
2025 | Before we start on the formal definitions, there are a couple of details we | |
2026 | need to address. The intuition we gave above will lead us to a | |
2027 | three-algorithm definition: key-generation, token creation, and verification. | |
2028 | This will give us a perfectly serviceable authentication token which will | |
2029 | entirely fail to compose usefully to give us higher-level protocols. The | |
2030 | problem is that, as we've defined it, an authentication token depends only on | |
2031 | the sender's private key and the recipient's public key for the | |
2032 | authentication-token scheme itself.  When we compose the scheme with other | |
2033 | cryptographic schemes, our overall protocol is going to end up with a bunch | |
2034 | of keys; unless we bind them together somehow, an adversary in a | |
2035 | multiparty game will be able to assemble a public key consisting of (say) a | |
2036 | legitimate recipient's authentication-token public key, and a freshly | |
2037 | generated encryption scheme public key. | |
2038 | ||
2039 | We fix this problem by introducing a fourth `binding' algorithm into the | |
2040 | definition: given a collection of public keys for a higher-level protocol, | |
2041 | compute some binding value to represent them. The binding value and other | |
2042 | keys will be additional inputs into the authentication-token construction | |
2043 | algorithm. We could instead provide the other keys as inputs to the | |
2044 | key-generation algorithm, effectively making the binding value part of the | |
2045 | authentication token's public key; the downside with this approach is that | |
2046 | two schemes which both use this trick won't compose properly with each other. | |
2047 | Splitting binding from key generation solves the dependency problem. | |
2048 | ||
2049 | There are two other details. Firstly, because we'll be encrypting the | |
2050 | tokens, we'll need to be able to encode tokens as constant-length bit | |
2051 | strings. To simplify the exposition, we'll treat this encoding as the | |
2052 | primary form of the token. And, secondly, we'll introduce a notion of | |
2053 | deniability for tokens which will require a simulation algorithm. Rather | |
2054 | than introduce a syntactic difference between deniable and nondeniable | |
2055 | tokens, we shall require all tokens to have a simulation algorithm, though | |
2056 | nondeniable schemes can use a trivial `simulator'. | |
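To illustrate, here is a sketch of the obvious signature-based nondeniable
token promised above -- a guess at its shape rather than a definition made
elsewhere in the paper.  Let $\id{sign}$ and $\id{verify}$ be the algorithms
of a signature scheme with $\kappa$-bit signatures, and write $X$ for the
public key corresponding to~$x$; the `simulator' is trivial, as permitted
for nondeniable tokens.
\begin{program}
  $T_x(Y, s, \beta)$: \+ \\
  \RETURN $\id{sign}_x([Y, s, \beta])$; \-
\next
  $V_x(Y, s, \beta, \tau)$: \+ \\
  \RETURN $\id{verify}_Y([X, s, \beta], \tau)$; \-
\next
  $R_x(Y, \beta)$: \+ \\
  \RETURN $0^\kappa$; \-
\end{program}
A binding algorithm which simply hashes its input string, ignoring the
private key, would suffice here.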
2057 | ||
2058 | \begin{definition}[Authentication token syntax] | |
2059 | \label{def:at.syntax} | |
2060 | ||
2061 | An authentication token scheme is a sextuple $\proto{tok} = (\kappa, G, B, | |
2062 | T, V, R)$, where $\kappa$ is a positive integer, and $G$, $B$, $T$, $V$, | |
2063 | and $R$ are (maybe randomized) algorithms as follows. | |
2064 | \begin{itemize} | |
2065 | \item The \emph{key-generation algorithm} $G$ accepts no parameters and | |
2066 | outputs a pair~$(x, X) \gets G()$. We call $x$ the \emph{private key} | |
2067 | and $X$ the \emph{public key}. | |
2068 | \item The \emph{binding algorithm} $B$ accepts a private key~$x$, and a | |
2069 | string $s \in \Bin^*$, and outputs a \emph{binding value} $\beta \gets | |
2070 | B_x(s)$. | |
2071 | \item The \emph{token construction algorithm} $T$ accepts a private | |
2072 | key~$x$, a public key~$Y$, a string~$s \in \Bin^*$, and a binding | |
2073 | value~$\beta$, and outputs a token $\tau \gets T_x(Y, s, \beta)$, which | |
2074 | is either a $\kappa$-bit string $\tau \in \Bin^\kappa$, or the | |
2075 | distinguished value~$\bot$. | |
2076 | \item The \emph{token verification algorithm} $V$ accepts a private | |
2077 | key~$x$, a public key~$Y$, a string~$s \in \Bin^*$, a binding | |
2078 | value~$\beta$, and a token~$\tau \in \Bin^\kappa$; it outputs a bit $b \gets | |
2079 | V_x(Y, s, \beta, \tau)$. The verification algorithm must be such that, | |
2080 | if $(x, X)$ and $(y, Y)$ are any two pairs of keys generated by $G$, $s | |
2081 | \in \Bin^*$ is any string, $\beta$ is any binding value output by | |
2082 | $B_y(s)$, and $\tau$ is any token constructed by $T_x(Y, s, \beta)$, then | |
2083 | $V_y(X, s, \beta, \tau)$ outputs $1$. | |
2084 | \item The \emph{recipient simulator} $R$ accepts a private key~$x$, a | |
2085 | public key~$Y$, and a binding value~$\beta$, and outputs a token~$\tau | |
2086 | \in \Bin^\kappa$. \qed | |
2087 | \end{itemize} | |
2088 | \end{definition} | |
2089 | ||
2090 | The security definition is fairly straightforward. We play a game in which | |
2091 | the adversary's goal is to determine the authentication token shared between | |
2092 | two users: a `sender' and a `recipient'. We grant the adversary access to | |
2093 | an oracle which constructs tokens using the sender's private key, and | |
2094 | another which verifies tokens using the recipient's key. So that the game is | |
2095 | nontrivial, we forbid the adversary from making a token-construction query | |
2096 | using the recipient's public key and binding value. | |
2097 | ||
2098 | This raises the question of where the binding value comes from.  For maximum | |
2099 | generality, we allow the adversary to select the input string, possibly | |
2100 | dependent on the recipient's public key, in a preliminary stage of the game. | |
2101 | ||
2102 | \begin{definition}[Authentication token security] | |
2103 | \label{def:at.security} | |
2104 | ||
2105 | Let $\Pi = (\kappa, G, B, T, V, R)$ be an authentication token scheme. We | |
2106 | measure an adversary $A$'s ability to attack $\Pi$ using the following | |
2107 | game. | |
2108 | \begin{program} | |
2109 | $\Game{token}{\Pi}(A)$: \+ \\ | |
2110 | $(x, X) \gets G()$; | |
2111 | $(y, Y) \gets G()$; \\ | |
2112 | $(s, t, \sigma) \gets A(\cookie{bind}, X, Y)$; \\ | |
2113 | $\alpha \gets B_x(s)$; | |
2114 | $\beta \gets B_y(t)$; \\ | |
2115 | $w \gets 0$; \\ | |
2116 | $A^{T_x(\cdot, \cdot, \cdot), \id{vrf}(\cdot, \cdot)} | |
2117 | (\cookie{guess}, \sigma, \alpha, \beta)$; \\ | |
2118 | \RETURN $w$; \- | |
2119 | \next | |
2120 | $\id{vrf}(Q, \tau)$: \+ \\ | |
2121 | $b \gets V_y(Q, t, \beta, \tau)$; \\ | |
2122 | \IF $Q = X \land b = 1$ \THEN $w \gets 1$; \\ | |
2123 | \RETURN $b$; \- | |
2124 | \end{program} | |
2125 | The adversary is forbidden from querying its token-construction oracle at | |
2126 | $(Y, t, \beta)$. | |
2127 | ||
2128 | The adversary's \emph{advantage} attacking $\Pi$ is measured by | |
2129 | \[ \Adv{token}{\Pi}(A) = \Pr[\Game{token}{\Pi}(A) = 1] \] | |
2130 | Finally, the \emph{insecurity function} of~$\Pi$ is defined by | |
2131 | \[ \InSec{token}(\Pi; t, q_T, q_V) = \max_A \Adv{token}{\Pi}(A) \] | |
2132 | where the maximum is taken over all adversaries $A$ completing the game in | |
2133 | time~$t$ and making at most $q_T$ token-construction and $q_V$ verification | |
2134 | queries. | |
2135 | \end{definition} | |
2136 | ||
2137 | Our notion of deniability for authentication tokens makes use of an | |
2138 | \emph{auxiliary input}~$a \in \Bin^*$ to the judge. In higher-level | |
2139 | protocols, this auxiliary input will be additional private keys corresponding | |
2140 | to the public keys input to the binding algorithm. From a technical point of | |
2141 | view, this complication is necessary in order to prove reductions to token | |
2142 | deniability. This isn't especially satisfying intuitively, though, so it's | |
2143 | worth considering what goes wrong if we don't allow auxiliary inputs. \fixme | |
2144 | ||
2145 | \begin{definition}[Authentication token deniability] | |
2146 | \label{def:at.deny} | |
2147 | ||
2148 | Let $\Pi = (\kappa, G, B, T, V, R)$ be an authentication token scheme. The | |
2149 | judge~$J$'s ability to distinguish simulated tokens is defined using the | |
2150 | following game. | |
2151 | \begin{program} | |
2152 | $\Game{deny}{\Pi}(J, s, a, b)$: \+ \\ | |
2153 | $(x, X) \gets G()$; | |
2154 | $(y, Y) \gets G()$; \\ | |
2155 | $\beta \gets B_y(s)$; \\ | |
2156 | $\tau_1 \gets T_x(Y, s, \beta)$; \\ | |
2157 | $\tau_0 \gets R_y(X, \beta)$; \\ | |
2158 | $b' \gets J(x, y, X, Y, s, a, \beta, \tau_b)$; \\ | |
2159 | \RETURN $b'$; \- | |
2160 | \end{program} | |
2161 | The judge's \emph{advantage} in distinguishing simulated tokens is measured | |
2162 | by | |
2163 | \[ \Adv{deny}{\Pi}(J) = \max_{s, a \in \Bin^*} \bigl( | |
2164 | \Pr[\Game{deny}{\Pi}(J, s, a, 1) = 1] - | |
2165 | \Pr[\Game{deny}{\Pi}(J, s, a, 0) = 1] \bigr) | |
2166 | \] | |
2167 | and the \emph{insecurity function} for the deniability of $\Pi$ is defined | |
2168 | as | |
2169 | \[ \InSec{deny}(\Pi; t) = \max_J \Adv{deny}{\Pi}(J) \] | |
2170 | where the maximum is take over all judges~$J$ completing in time~$t$. We | |
2171 | say that $\Pi$ is \emph{statistically deniable} if $\InSec{deny}(\Pi; t)$ | |
2172 | is constant; if it is identically zero then say that $\Pi$ is | |
2173 | \emph{perfectly deniable}. | |
2174 | \end{definition} | |
2175 | ||
2176 | \subsection{Deniably authenticated asymmetric encryption from authentication | |
2177 | tokens} | |
2178 | \label{sec:tokens.construct} | |
2179 | ||
2180 | In this section, we construct and analyse two deniably authenticated | |
2181 | asymmetric encryption schemes using | |
2182 | \begin{itemize} | |
2183 | \item a universal one-way hash function $\proto{h} = (\nu, \lambda, H)$; | |
2184 | \item an asymmetric encryption scheme $\proto{ae} = (G, E, D)$; and | |
2185 | \item an authentication token scheme $\proto{tok} = (\kappa, G, B, T, V, R)$. | |
2186 | \end{itemize} | |
2187 | The component algorithms for the schemes are shown in | |
2188 | \xref{fig:tokens.construct}. The first is a weakly deniable scheme | |
2189 | \begin{eqnarray*}[Ll] | |
2190 | \Xid{\Pi}{wdaae-tok}(\proto{ae}, \proto{tok}) = \\ | |
2191 | & (\Xid{G}{daae-tok}, \Xid{E}{daae-tok}, \Xid{D}{daae-tok}, | |
2192 | \Xid{R}{daae-tok}, \Xid{S}{daae-tok}) | |
2193 | \end{eqnarray*} | |
2194 | and the second is a strongly deniable scheme | |
2195 | \[ \Xid{\Pi}{sdaae-tok}(\proto{ae}, \proto{tok}) = | |
2196 | (\Xid{G}{daae-tok}, \Xid{\hat E}{daae-tok}, \Xid{D}{daae-tok}, | |
2197 | \Xid{\hat R}{daae-tok}) | |
2198 | \] | |
2199 | ||
2200 | \begin{figure} | |
2201 | \begin{program} | |
2202 | \begin{tikzpicture} | |
2203 | \node[box = yellow!20, minimum width = 30mm] (m) {$m$}; | |
2204 | \node[box = cyan!20, left = -0.6pt of m] (h) {$h$}; | |
2205 | \node[box = cyan!20, left = -0.6pt of h] (n) {$N$}; | |
2206 | \node[box = green!20, left = -0.6pt of n] (tau) {$\tau$}; | |
2207 | \node[op = green!20, above = of tau] (token) {$T$} edge[->] (tau); | |
2208 | \node[op = cyan!20, above = of h] (hash) {$H$} edge[->] (h); | |
2209 | \node[right = of hash] {$X$, $X'$} edge[->] (hash); | |
2210 | \node[above = of hash] (nn) {$N$} edge[->] (hash); | |
2211 | \draw[rounded, ->] ($(nn)!1/3!(hash)$) -| (n); | |
2212 | \node[left = of token] {$x'$} edge[->] (token); | |
2213 | \node[above = of token] {$Y'$, $Y$, $\beta$} edge[->] (token); | |
2214 | \draw[decorate, decoration = brace] | |
2215 | (m.south east) -- (tau.south west) | |
2216 | coordinate [pos = 0.5, below = 2.5pt] (p); | |
2217 | \node[op = red!20, below = of p] (e) {$E$} edge[<-] (p); | |
2218 | \node[left = of e] {$Y$} edge[->] (e); | |
2219 | \node[box = red!20, minimum width = 30mm + 46.8pt, below = of e] | |
2220 | {$c$} edge[<-] (e); | |
2221 | \end{tikzpicture} | |
2222 | \\[\bigskipamount] | |
2223 | $\Xid{G}{daae-tok}()$: \+ \\ | |
2224 | $(x, X) \gets G()$; | |
2225 | $(x', X') \gets G'()$; \\ | |
2226 | $\alpha \gets B_{x'}([X])$; \\ | |
2227 | \RETURN $\bigl( (x, x', X, X', \alpha), (X, X', \alpha) \bigr)$; \- | |
2228 | \next | |
2229 | $\Xid{E}{daae-tok}_{(x, x', X, X', \alpha)} | |
2230 | \bigl( (Y, Y', \beta), m \bigr)$: \+ \\ | |
2231 | $\tau \gets T_{x'}(Y', [Y], \beta)$; | |
2232 | \IF $\tau = \bot$ \THEN \RETURN $\bot$; \\ | |
2233 | $N \getsr \Bin^\nu$; $h \gets H_N([X, X'])$; \\ | |
2234 | $c \gets E_Y(\tau \cat N \cat h \cat m)$; \\ | |
2235 | \RETURN $(c, \tau)$; \- | |
2236 | \\[\medskipamount] | |
2237 | $\id{decrypt}(x, x', Y, Y', X, \alpha, c)$: \+ \\ | |
2238 | $m' \gets D_x(c)$; \\ | |
2239 | \IF $m' = \bot \lor \len(m') < \kappa + \nu + \lambda$ | |
2240 | \THEN \RETURN $(\bot, \bot)$; \\ | |
2241 | $\tau \gets m'[0 \bitsto \kappa]$; | |
2242 | $h \gets m'[\kappa + \nu \bitsto \kappa + \nu + \lambda]$; \\ | |
2243 | $N \gets m'[\kappa \bitsto \kappa + \nu]$; | |
2244 | $m \gets m'\bigl[ \kappa + \nu + \lambda \bitsto \len(m') \bigr]$; \\ | |
2245 | \IF $V_{x'}(Y', [X], \alpha, \tau) = 0 \lor | |
2246 | H_N([Y, Y']) \ne h$ \\ | |
2247 | \THEN \RETURN $(\bot, \bot)$ | |
2248 | \ELSE \RETURN $(\tau, m)$; | |
2249 | \- | |
2250 | \\[\medskipamount] | |
2251 | $\Xid{D}{daae-tok}_{(x, x', X, X', \alpha)} | |
2252 | \bigl( (Y, Y', \beta), c \bigr)$: \+ \\ | |
2253 | $(\tau, m) \gets \id{decrypt}(x, x', Y, Y', X, \alpha, c)$; \\ | |
2254 | \RETURN $m$; | |
2255 | \newline | |
2256 | $\Xid{R}{daae-tok}_{(x, x', X, X', \alpha)} | |
2257 | \bigl( (Y, Y', \beta), c, m \bigr)$: \+ \\ | |
2258 | $(\tau, m') \gets \id{decrypt}(x, x', Y, Y', X, \alpha, c)$; \\ | |
2259 | \IF $\tau = \bot$ \THEN \RETURN $\bot$; \\ | |
2260 | $N \getsr \Bin^\nu$; $h \gets H_N([Y, Y'])$; \\ | |
2261 | $c \gets E_X(\tau \cat N \cat h \cat m)$; \\ | |
2262 | \RETURN $c$; \- | |
2263 | \\[\medskipamount] | |
2264 | $\Xid{S}{daae-tok}_{(x, x', X, X', \alpha)} | |
2265 | \bigl( (Y, Y', \beta), \tau, c, m \bigr)$: \+ \\ | |
2266 | $N \getsr \Bin^\nu$; $h \gets H_N([X, X'])$; \\ | |
2267 | $c \gets E_Y(\tau \cat N \cat h \cat m)$; \\ | |
2268 | \RETURN $c$; \- | |
2269 | \next | |
2270 | $\Xid{\hat E}{daae-tok}_{(x, x', X, X', \alpha)} | |
2271 | \bigl( (Y, Y', \beta), m \bigr)$: \+ \\ | |
2272 | $(c, \tau) \gets {}$\\ | |
2273 | \qquad $\Xid{E}{daae-tok}_{(x, x', X, X', \alpha)} | |
2274 | \bigl( (Y, Y', \beta), m \bigr)$; \\ | |
2275 | \RETURN $c$; \- | |
2276 | \\[\medskipamount] | |
2277 | $\Xid{\hat R}{daae-tok}_{(x, x', X, X', \alpha)} | |
2278 | \bigl( (Y, Y', \beta), m \bigr)$: \+ \\ | |
2279 | $\tau \gets R_{x'}(Y', \alpha)$; \\ | |
2280 | $N \getsr \Bin^\nu$; $h \gets H_N([X, X'])$; \\ | |
2281 | $c \gets E_X(\tau \cat N \cat h \cat m)$; \\ | |
2282 | \RETURN $c$; \- | |
2283 | \end{program} | |
2284 | \caption{Deniably authenticated asymmetric encryption from authentication | |
2285 | tokens} | |
2286 | \label{fig:tokens.construct} | |
2287 | \end{figure} | |
2288 | ||
2289 | \begin{theorem}[Secrecy of the authentication-token DAAE] | |
2290 | \label{th:daae-tok.secrecy} | |
2291 | ||
2292 | Let $\proto{ae}$, $\proto{tok}$ and $\proto{h}$ be an asymmetric encryption | |
2293 | scheme, an authentication token scheme, and a universal one-way hash.  Then | |
2294 | \begin{spliteqn*} | |
2295 | \InSec{ind-occa}(\Xid{\Pi}{sdaae-tok}(\proto{ae}, \proto{tok}); | |
2296 | t, q_E, q_D) \le \\ | |
2297 | 2 q_E \InSec{uow}(\proto{h}; t) + | |
2298 | \InSec{ind-cca}(\proto{ae}; t, q_D) | |
2299 | \end{spliteqn*} | |
2300 | and | |
2301 | \begin{spliteqn*} | |
2302 | \InSec{ind-occa}(\Xid{\Pi}{wdaae-tok}(\proto{ae}, \proto{tok}); | |
2303 | t, q_E, q_D, q_R) \le \\ | |
2304 | 2 (q_E + q_R) \InSec{uow}(\proto{h}; t) + | |
2305 | \InSec{ind-cca}(\proto{ae}; t, q_D + q_R) | |
2306 | \end{spliteqn*} | |
2307 | \end{theorem} | |
\begin{proof}
We prove the latter inequality; the former follows by a rather simpler
version of the same argument, since the encryption algorithms are (by
construction) almost the same, and there is no recipient simulator in the
strong-deniability game.

So let $A$ be any adversary breaking the secrecy of $\Pi =
\Xid{\Pi}{wdaae-tok}$ within the given resource bounds.  It will suffice to
bound the advantage of $A$.  We use a sequence of games.  Let $\G0$ be the
standard IND-OCCA game, with $b$ chosen uniformly at random.  In
game~$\G{i}$, let $S_i$ be the event that $A$'s output is equal to $b$.  By
\xref{lem:advantage} it suffices to bound $\Pr[S_0]$.

Let $\mathcal{H}$ be the set of pairs $(N, h) = (N, H_N([X, X']))$ computed
by the encryption and recipient-simulator oracles.  Game~$\G1$ is the same
as $\G0$ except that we abort the game if $A$ makes a decryption or
recipient-simulation query with public key $(Z, Z', \gamma)$ and ciphertext
$c$ satisfying the following conditions:
\begin{itemize}
\item the ciphertext decrypts successfully, yielding a message $\tau \cat N
  \cat h \cat m$, with $\len(\tau) = \kappa$, $\len(N) = \nu$ and $\len(h)
  = \lambda$;
\item the pair $(N, h) \in \mathcal{H}$; and
\item the hash $h = H_N([Z, Z'])$; but
\item the public key $(Z, Z') \ne (X, X')$.
\end{itemize}
Let $F_1$ be the event that the game aborts for this reason.  By
\xref{lem:shoup}, we have
\begin{equation} \label{eq:daae-tok.sec.s0}
\Pr[S_0] - \Pr[S_1] \le \Pr[F_1]
\end{equation}
We claim that
\begin{equation} \label{eq:daae-tok.sec.f1}
\Pr[F_1] \le (q_E + q_R) \InSec{uow}(\proto{h}; t)
\end{equation}
For $0 \le i < q_E + q_R$, let $M_i$ be the event that we abort the game
because a hash in an input ciphertext matches the hash output by the $i$th
encryption or recipient-simulator query; then $\Pr[F_1] \le
\sum_{0\le i<q_E+q_R} \Pr[M_i]$.  It will suffice to show that
\[ \Pr[M_i] \le \InSec{uow}(\proto{h}; t) \]
But this follows immediately by a simple reduction: the \cookie{find} stage
generates $X$ and $X'$ and outputs $[X, X']$; the \cookie{guess} stage
receives a key $N^*$ to be used in responding to the $i$th oracle query,
and simulates $\G0$.  The event $M_i$ is easy to detect: if it occurs, the
reduction stops simulating the game and outputs the colliding key.

We now bound $\Pr[S_1]$ by a reduction from the IND-CCA security of
$\proto{ae}$.  The reduction's \cookie{find} stage starts by generating
authentication-token keys for the sender and recipient, and
asymmetric-encryption keys for the sender.  It generates binding values for
the sender and recipient.  Finally, it runs the \cookie{find} stage of $A$,
providing simulated encryption, decryption and recipient-simulator oracles
to be described later.  Eventually, $A$'s \cookie{find} stage completes,
returning two messages $m_0$ and~$m_1$ of equal length.  The reduction
computes a token $\tau^*$, chooses a key $N^* \inr \Bin^\nu$, computes
$h^* = H_{N^*}([X, X'])$, and outputs $m^*_0 = \tau^* \cat N^* \cat h^*
\cat m_0$ and $m^*_1 = \tau^* \cat N^* \cat h^* \cat m_1$, where $X$ and
$X'$ are the public keys of the simulated sender.  These two strings
clearly have the same length.

The reduction's \cookie{guess} stage is then invoked with a challenge
ciphertext~$c^*$.  It runs $A$'s \cookie{guess} stage on $c^*$, simulating
the oracles in the same way.  Finally, when $A$ outputs a guess $b'$, the
reduction outputs the same guess.

We now describe the simulated oracles provided by the reduction.  The only
secret required by $\Pi$'s encryption algorithm is the authentication-token
key, which the reduction chose at the beginning of the game, so the
simulated encryption oracle can use the encryption algorithm directly.  The
reduction simulates $A$'s decryption oracle using its own decryption
oracle, thereby using one decryption query for each of $A$'s decryption
queries; it verifies the token using the private key it chose earlier.
Note that $A$ is forbidden from querying its decryption oracle at $c^*$
using the sender's public key, though it may use a different public key.
However, a different sender public key will either fail to match the public
key hash embedded in $c^*$ or the game will abort, so such a query must
output $\bot$.

Finally, the reduction also uses its decryption oracle to implement its
recipient-simulator oracle, using one decryption query for each of $A$'s
recipient-simulator queries.  Again, there's a special case if the sample
ciphertext is equal to $c^*$: in this case, the reduction performs the
simulation by prepending $\tau^* \cat N \cat H_N([X, X'])$ to the query
message for some fresh $N \inr \Bin^\nu$, and encrypting the result.

Plainly, the reduction provides an accurate simulation for $A$, and
achieves the same advantage; also, the reduction completes its game in the
same time.  Therefore, by \xref{lem:advantage},
\begin{equation} \label{eq:daae-tok.sec.s1}
\Pr[S_1] \le \frac12 \InSec{ind-cca}(\proto{ae}; t, q_D + q_R) + \frac12
\end{equation}
The theorem follows by combining
equations~\ref{eq:daae-tok.sec.s0}--\ref{eq:daae-tok.sec.s1}.
\end{proof}

\begin{theorem}[Authenticity of the authentication-token DAAE]
\label{th:daae-tok.auth}

Let $\proto{ae}$ and $\proto{tok}$ be an asymmetric encryption and
an authentication token scheme, respectively.  Then
\[ \InSec{uf-ocma}(\Xid{\Pi}{sdaae-tok}(\proto{ae}, \proto{tok});
t, q_E, q_D) \le
q_E \InSec{ind-cca}(\proto{ae}; t, q_D) +
\InSec{token}(\proto{tok}; t, q_E, q_D)
\]
and
\begin{spliteqn*}
\InSec{uf-ocma}(\Xid{\Pi}{wdaae-tok}(\proto{ae}, \proto{tok});
t, q_E, q_D, q_R) \le \\
(q_E + q_R) \InSec{ind-cca}(\proto{ae}; t, q_D + q_R) +
\InSec{token}(\proto{tok}; t, q_E, q_D + q_R)
\end{spliteqn*}
\end{theorem}
\begin{proof}
Again, we give the proof in the weak deniability case: the strong
deniability case is similar but easier.

Let $A$ be any adversary attacking the authenticity of $\Pi =
\Xid{\Pi}{wdaae-tok}$ and running within the stated resource bounds.  It
suffices to bound $A$'s advantage, which we do using a sequence of games.
Let $\G0 = \Game{uf-ocma}{\Pi}(A)$ be the original attack game.  In each
game~$\G{i}$, let $S_i$ be the event that $A$ wins, i.e., that it submits a
valid forgery to its decryption oracle.  Our objective is then to bound
$\Pr[S_0]$ from above.

In game~$\G1$, we change the way that the encryption oracle works.
Specifically, if the input public key is equal to the recipient public key,
then rather than generating $\tau$ using the token-construction algorithm,
it simply sets $\tau = 0^\kappa$.  We also change the decryption and
recipient-simulator oracles so that they accept these modified ciphertexts
as being valid and, in the latter case, propagate the fake token into
the simulated ciphertext.

We claim that
\begin{equation} \label{eq:daae-tok.auth.s0}
\Pr[S_0] - \Pr[S_1] \le
(q_E + q_R) \InSec{ind-cca}(\proto{ae}; t, q_D + q_R)
\end{equation}
To prove the claim, we use a hybrid argument.  For $0 \le i \le q_E + q_R$,
we define game~$\G[H]i$ to use fake tokens for the final $i$ encryption or
simulator queries, but generate proper tokens for the first $q_E + q_R - i$
queries.  If $T_i$ is the event that the adversary wins in $\G[H]i$, then
the claim will follow if we can show that
\[ \Pr[T_i] - \Pr[T_{i+1}] \le
\InSec{ind-cca}(\proto{ae}; t, q_D + q_R)
\]
This we show by a reduction from the secrecy of $\proto{ae}$.  The
reduction accepts a public key, which it uses for the recipient, and
generates the other keys.  The \cookie{find} stage of the reduction runs
$A$, using simulated oracles, until the $i$th encryption or simulation
query; then, if $m$ is the input message, the reduction's \cookie{find}
stage chooses $N \inr \Bin^\nu$, sets $h = H_N([X, X'])$, and outputs the
two messages $\tau \cat N \cat h \cat m$ and $0^\kappa \cat N \cat h \cat
m$, which are of equal lengths.  The \cookie{guess} stage of the reduction
is given a challenge ciphertext $c^*$ corresponding to one of these
plaintexts; it records $c^*$ and $m$, and resumes execution of $A$ until
completion; finally, it outputs $1$ if $A$ submits a valid forgery and $0$
otherwise.  The reduction's simulated decryption and simulator oracles
detect the challenge ciphertext~$c^*$ and use the recorded message.
Depending on which message was chosen by the IND-CCA game, the reduction
simulates either $\G[H]i$ or $\G[H]{i+1}$; it outputs $1$ precisely when
$T_i$ or $T_{i+1}$ occurs in the respective case, and the claim follows.

Finally, we bound $\Pr[S_1]$, claiming that
\begin{equation} \label{eq:daae-tok.auth.s1}
\Pr[S_1] \le \InSec{token}(\proto{tok}; t, q_E, q_D + q_R)
\end{equation}
Again, we prove this claim by a reduction, this time from the authenticity
of $\proto{tok}$.  The \cookie{bind} stage of the reduction generates
encryption keys for the sender and recipient and outputs encodings of them.
The \cookie{guess} stage simulates $\G1$.  We handle an encryption query
naming the recipient public key by using a `fake' token $0^\kappa$; for any
other public key, we can use the token-construction oracle to generate a
token.  For decryption and the recipient simulator, we must recognize
ciphertexts from the encryption or recipient-simulator oracle and accept
them as valid despite the invalid tokens; for other ciphertexts, we decrypt
and use the token verification oracle.  If the decryption oracle receives a
valid ciphertext with the sender public key, other than one constructed by
the encryption or recipient-simulator oracles, then it must contain a valid
authentication token between the sender and the recipient -- but $\G1$
contains no mechanism for constructing such a token, so the token must have
been forged.  The claim follows.

The theorem now follows by combining
equations~\ref{eq:daae-tok.auth.s0} and~\ref{eq:daae-tok.auth.s1}.
\end{proof}

\begin{theorem}[Deniability of the authentication-token DAAE]
\label{th:daae-tok.deny}

Let $\proto{ae}$ and $\proto{tok}$ be an asymmetric encryption and
an authentication token scheme, respectively.  Then
\[ \InSec{sdeny}(\Xid{\Pi}{sdaae-tok}(\proto{ae}, \proto{tok}); t) =
\InSec{deny}(\proto{tok}; t) \]
and
\[ \InSec{wdeny}(\Xid{\Pi}{wdaae-tok}(\proto{ae}, \proto{tok}); t) = 0 \]
\end{theorem}
\begin{proof}
We prove the first statement by reduction.  Firstly, we show that
\[ \InSec{sdeny}(\Xid{\Pi}{sdaae-tok}(\proto{ae}, \proto{tok}); t) \le
\InSec{deny}(\proto{tok}; t)
\]
If $J$ is a judge distinguishing simulated ciphertexts in time~$t$ then we
construct a judge~$J'$ distinguishing simulated tokens.  Recall the inputs
to $J'$: the public and private keys $(x', X')$ of the sender and $(y',
Y')$ of the recipient, a string~$s$, the recipient's binding value~$\beta$
on $s$, an auxiliary input~$a$, and a (possibly simulated) token~$\tau$.
If $s$ is the encoding of a public key $Y$ for $\proto{ae}$, and $a = y$ is
the corresponding private key, then $J'$ chooses an $\proto{ae}$ key pair
$(x, X)$ for the sender, computes $\alpha = B_{x'}([X])$, chooses a key~$N
\inr \Bin^\nu$, computes $h = H_N([X, X'])$, constructs a ciphertext $c =
E_Y(\tau \cat N \cat h \cat m)$ for some message~$m$, and runs $J$, giving
it all of the keys and binding values, the message $m$ and the ciphertext
$c$.  Notice that $c$ is distributed exactly as a genuine ciphertext if
$\tau$ is genuine, or as a simulated ciphertext if $\tau$ is simulated;
hence we conclude that $J'$ distinguishes with advantage at most
$\InSec{deny}(\proto{tok}; t)$.  Finally, we observe that the recipient's
encryption key pair $(y, Y)$ is independent of the token key pair, so we
can apply \xref{lem:random} to complete the proof of the inequality.

Secondly, we show that
\[ \InSec{sdeny}(\Xid{\Pi}{sdaae-tok}(\proto{ae}, \proto{tok}); t) \ge
\InSec{deny}(\proto{tok}; t)
\]
We use a reduction in the other direction: given keys, a ciphertext and a
judge~$J$ distinguishing tokens, we construct a judge~$J'$ distinguishing
ciphertexts: simply decrypt the ciphertext using the recipient's decryption
key, extract the token from the recovered plaintext, and pass it, together
with the necessary keys, to $J$.

The second statement is easier.  It suffices to observe that, in both the
sender-simulator and recipient-simulator cases, both ciphertexts contain
the same token, properly generated using the token-construction algorithm;
they both contain hashes of the sender's public key using uniformly
distributed keys, the appropriate messages, and are encrypted using the
proper encryption algorithm.  Therefore all three distributions are
identical and we have perfect weak deniability.
\end{proof}

\subsection{Authentication tokens using signatures}
\label{sec:tokens.sig}

In this section, we describe a simple, non-deniable, proof-of-concept
authentication token scheme using digital signatures.  Let $\proto{sig} = (G,
S, V)$ be a signature scheme with the property that all signatures are bit
strings of some constant length, i.e., there exists an integer $\kappa$ such
that, for all messages~$m$,
\[ \textrm{$(x, X) \gets G()$;
$\sigma \gets S_x(m)$} : \sigma \in \Bin^\kappa
\]
This requirement is not especially onerous.  Most practical signature
schemes have this property already.  For those that do not, most will at
least produce signatures which can be encoded as a bounded-length string for
fixed-length messages, and this latter condition can be assured using a
universal one-way hash function.
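As a hedged sketch of this last remark (the construction, names and lengths here are our own, not part of the scheme), one can extend a signer that handles only fixed-length inputs by signing a fresh hash key together with the keyed digest of the message; blake2b's keyed mode stands in for the universal one-way hash family.

```python
# Hedged sketch (our own construction, not from the text): extend a
# signature scheme that only handles fixed-length inputs to arbitrary
# messages, by signing a fresh hash key N together with the keyed
# digest H_N(m).  blake2b's keyed mode stands in for the universal
# one-way hash family.
import hashlib
import secrets

NU = 16          # hash-key length in bytes (illustrative)
DIGEST = 32      # digest length in bytes (illustrative)

def sign_long(sign_fixed, m):
    # sign_fixed signs strings of the fixed length NU + DIGEST only.
    n = secrets.token_bytes(NU)
    d = hashlib.blake2b(m, key=n, digest_size=DIGEST).digest()
    return n, sign_fixed(n + d)

def verify_long(verify_fixed, m, n, sig):
    d = hashlib.blake2b(m, key=n, digest_size=DIGEST).digest()
    return verify_fixed(n + d, sig)

# Identity "signature" placeholder, purely to exercise the plumbing.
n, sig = sign_long(lambda s: s, b'a long message ' * 64)
assert verify_long(lambda s, t: s == t, b'a long message ' * 64, n, sig)
assert not verify_long(lambda s, t: s == t, b'another message', n, sig)
```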

If $\proto{sig} = (G, S, V)$ is a signature scheme producing $\kappa$-bit
signatures, then we define our token scheme
\[ \Xid{\Pi}{sig}(\proto{sig}) =
(\kappa, \Xid{G}{sig}, \Xid{B}{sig}, \Xid{T}{sig},
\Xid{V}{sig}, \Xid{R}{sig})
\]
where the component algorithms are as follows.
\begin{program}
$\Xid{G}{sig}()$: \+ \\
$(x, X) \gets G()$; \\
\RETURN $(x, X)$; \-
\next
$\Xid{B}{sig}_x(s)$: \+ \\
\RETURN $\emptystring$; \-
\\[\medskipamount]
$\Xid{R}{sig}_x(Y, s, \alpha)$: \+ \\
\RETURN $0^\kappa$; \-
\next
$\Xid{T}{sig}_x(Y, s, \beta)$: \+ \\
\IF $\beta \ne \emptystring$
\THEN \RETURN $\bot$; \\
$\tau \gets S_x(s)$; \\
\RETURN $\tau$; \-
\next
$\Xid{V}{sig}_x(Y, s, \alpha, \tau)$: \+ \\
$v \gets V_Y(s, \tau)$; \\
\RETURN $v$; \-
\end{program}
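To make the interfaces concrete, here is a toy Python rendering of this scheme; the underlying `signature scheme' is textbook RSA with the classic teaching parameters $p = 61$, $q = 53$, which is our own placeholder and has no security whatever.

```python
# Toy sketch of the signature token scheme Pi^sig above.  The
# underlying "signature scheme" is textbook RSA with the classic
# teaching parameters p = 61, q = 53 -- our own placeholder with no
# security whatever, used only to exercise the interfaces.
import hashlib

N_MOD = 61 * 53                          # 3233
E = 17
D = pow(E, -1, 60 * 52)                  # 2753
KAPPA = (N_MOD.bit_length() + 7) // 8    # fixed token length, in bytes

def _digest(s: bytes) -> int:
    return int.from_bytes(hashlib.sha256(s).digest(), 'big') % N_MOD

def sig_sign(s: bytes) -> bytes:                 # S_x
    return pow(_digest(s), D, N_MOD).to_bytes(KAPPA, 'big')

def sig_verify(s: bytes, tau: bytes) -> bool:    # V_X
    return pow(int.from_bytes(tau, 'big'), E, N_MOD) == _digest(s)

def tok_bind(s: bytes) -> bytes:                 # B^sig: empty binding value
    return b''

def tok_make(s: bytes, beta: bytes):             # T^sig
    if beta != b'':
        return None                              # stands in for bottom
    return sig_sign(s)

def tok_simulate() -> bytes:                     # R^sig: dummy all-zero token
    return b'\x00' * KAPPA

s = b'encoding of the recipient encryption key'
tau = tok_make(s, tok_bind(s))
assert sig_verify(s, tau)
bad = bytes([tau[0] ^ 1]) + tau[1:]
assert not sig_verify(s, bad)        # any altered token is rejected
```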

\begin{theorem}[Security of signature tokens]
Let $\proto{sig}$ be a signature scheme producing $\kappa$-bit signatures.
Then
\[ \InSec{token}(\Xid{\Pi}{sig}(\proto{sig}); t, q_T, q_V) \le
\InSec{euf-cma}(\proto{sig}; t, q_T)
\]
\end{theorem}
\begin{proof}
Immediate reduction.
\end{proof}

\begin{theorem}[Non-deniability of signature tokens]
Let $\proto{sig}$ be a signature scheme producing $\kappa$-bit signatures.
Let $R$ be any algorithm, and let $\Pi = (\kappa, \Xid{G}{sig},
\Xid{B}{sig}, \Xid{T}{sig}, \Xid{V}{sig}, R)$, where $\Xid{G}{sig}$ etc.\
are as defined above.  Then
\[ \InSec{deny}(\Pi; t) \ge 1 - \InSec{euf-cma}(\proto{sig}; t, 0) \]
\end{theorem}
\begin{proof}
Let $J$ be the judge which applies the verification algorithm
$\Xid{V}{sig}$ to its supplied public key and token, and outputs the
result.  By correctness of $\proto{sig}$, $J$ will always output $1$ given
a valid token.  Note that $R$ is not given the private signature key or any
sample messages; therefore (immediate reduction) the probability that $R$
outputs a valid signature is bounded above by $\InSec{euf-cma}(\proto{sig};
t, 0)$.
\end{proof}

Together with the equality in \xref{th:daae-tok.deny}, this theorem shows a
separation between weak and strong deniability.

\subsection{Deniable authentication tokens from the Diffie--Hellman problem}
\label{sec:tokens.dh}

This section sketches a construction of a deniable authentication token whose
security is based on the difficulty of the computational Diffie--Hellman
problem.  A full analysis, alas, must wait for another time: the construction
makes use of non-interactive zero-knowledge proofs, and developing the
necessary definitions for a satisfactory concrete analysis would require too
much space.

We require the following ingredients:
\begin{itemize}
\item a group $G$ of prime order $p = \#G$, generated by an element $P$, such
  that elements of $G$ have fixed-length encodings;
\item a digital signature scheme $\proto{sig} = (G, S, V)$; and
\item a non-malleable non-interactive zero-knowledge (NIZK) proof of
  knowledge scheme.
\end{itemize}
The token scheme works as follows.
\begin{description}
\item[Key generation] A private key is a pair of scalars $x, \hat{x} \inr
  \gf{p}$; the corresponding public key is $(X, \hat{X}) = (x P, \hat{x} P)$.
\item[Binding] We are given a private key $x$ and a string $s$.  We generate
  a signature key pair $(x', X')$ and compute a signature $\sigma \gets
  S_{x'}(s)$.  The binding value is a NIZK proof of knowledge of $x$, $X'$,
  and $\sigma$.
\item[Token construction] We are given a private key~$(x, \hat{x})$, a
  string~$s$, a binding value, and a public key $(Y, \hat{Y})$.  We verify
  the NIZK in the binding value.  If it is valid, the token is simply an
  encoding of $(x Y, x \hat{Y})$.
\item[Verification] Given a private key $(y, \hat{y})$, a public key~$(X,
  \hat{X})$, and a token $(Z, \hat{Z})$, we check that $(Z, \hat{Z}) = (y X,
  \hat{y} X)$.
\item[Simulation] Given a private key $(y, \hat{y})$ and a public key~$(X,
  \hat{X})$, we simulate the token as $(y X, \hat{y} X)$.
\end{description}
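Shorn of the signature and NIZK machinery, the algebraic core can be sketched numerically (the tiny group below is our own illustrative choice, far too small for real use): the sender's token $(x Y, x \hat{Y})$ coincides with the recipient's simulation $(y X, \hat{y} X)$, which is exactly where the perfect deniability comes from.

```python
# Toy numeric sketch of the Diffie--Hellman token core (our own
# illustrative parameters; the signature and NIZK machinery, and all
# encoding details, are omitted).  The group is the order-1019
# subgroup of Z_2039^*, written multiplicatively.
import secrets

Q = 1019               # subgroup order (prime)
P_MOD = 2 * Q + 1      # 2039, a safe prime
G = 4                  # generator of the order-Q subgroup

def keygen():
    x = secrets.randbelow(Q - 1) + 1
    xh = secrets.randbelow(Q - 1) + 1
    return (x, xh), (pow(G, x, P_MOD), pow(G, xh, P_MOD))

def token(sender_priv, recipient_pub):
    # Token construction: (x Y, x Yhat) in the paper's notation.
    x, _ = sender_priv
    Y, Yh = recipient_pub
    return (pow(Y, x, P_MOD), pow(Yh, x, P_MOD))

def verify(recipient_priv, sender_pub, tok):
    # Verification: check tok == (y X, yhat X).
    y, yh = recipient_priv
    X, _ = sender_pub
    return tok == (pow(X, y, P_MOD), pow(X, yh, P_MOD))

def simulate(recipient_priv, sender_pub):
    # Recipient simulation: (y X, yhat X), identical to a real token.
    y, yh = recipient_priv
    X, _ = sender_pub
    return (pow(X, y, P_MOD), pow(X, yh, P_MOD))

snd_priv, snd_pub = keygen()
rcv_priv, rcv_pub = keygen()
tok = token(snd_priv, rcv_pub)
assert verify(rcv_priv, snd_pub, tok)
assert simulate(rcv_priv, snd_pub) == tok   # perfect deniability
```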
Perfect deniability is immediate.  Authenticity is somewhat trickier.  We
sketch a reduction from twin Diffie--Hellman, and use \xref{th:cdh-2dh}.

The reduction is given three group elements $X$, $Y$, and $\hat{Y}$ which
will serve as components of the two public keys: we can choose $\hat{x}
\inr \gf{p}$ randomly and set $\hat{X} = \hat{x} P$.  The desired output is
precisely the corresponding authentication token.  The reduction runs the
first stage of the adversary, collecting strings to be bound to the keys.
The reduction generates signing keys and simulates the NIZKs for the
binding values (since it doesn't know the corresponding private keys).  It
then runs the second stage of the adversary.

We now describe how the oracles are simulated.  The token-construction
oracle is given a public key $Q$, string $s$, and NIZK $\pi$; at least one
of these differs from the recipient's public keys and (simulated) NIZK.  If
the NIZK differs, we use the NIZK extractor to recover the signature,
verification key, and private key, verify the signature, and answer the
query using the extracted private key.  If the public key $Q$ differs from
the recipient's public key $Y$ but the NIZK matches then the proof of
knowledge of the private key must be invalid, so we reject.  Similarly, if
$s$ is not equal to the string bound to the recipient public key then the
proof of knowledge of the signature is probably invalid (reduction from
forgery of $\proto{sig}$).  The verification oracle uses the
twin-Diffie--Hellman verification oracle to decide whether the token is
correct.  This completes the sketch of the reduction.

%%%--------------------------------------------------------------------------
\section{Non-interactive asymmetric integrity algorithms with deniability}
\label{sec:naiad}

In this section, we describe and define an important ingredient in our
strongly deniable scheme, which we term \emph{non-interactive asymmetric
integrity algorithms with deniability}, or NAIAD for short.  The basic
setup is this.  Alice and Bob both have private keys, and know each other's
public keys.  (As mentioned in the introduction, the assumption that both
participants have keys is essential if we are to have non-interactive
deniable authentication.)  Alice has a message that she wants to send to Bob,
so that Bob knows that it came from Alice, but nobody can prove this to
anyone else.

This will be an essential ingredient in our quest for strongly deniable
authenticated encryption.  We could get away with using a signature
scheme for weak deniability, but a digital signature is clear evidence that a
communication occurred, and we should like to avoid leaving such traces.

\subsection{Definitions}
\label{sec:naiad.defs}

We have a fairly clear idea of what deniability and authentication should
mean now.  Since we have seen that signatures suffice for weakly deniable
encryption, we shall focus only on strong deniability.

\begin{definition}
\label{def:naiad-syntax}

A \emph{non-interactive asymmetric integrity algorithm with deniability}
(NAIAD) is a quadruple $\proto{naiad} = (G, T, V, R)$ of (possibly
randomized) algorithms as follows.
\begin{itemize}
\item The \emph{key-generation algorithm} $G$ accepts no parameters and
  outputs a pair $(x, X) \gets G()$.  We call $x$ the \emph{private key}
  and $X$ the \emph{public key}.
\item The \emph{tagging algorithm} $T$ accepts a private key~$x$, a public
  key~$Y$, and a message~$m \in \Bin^*$, and outputs a tag~$\tau \gets
  T_x(Y, m)$.
\item The \emph{verification algorithm} $V$ accepts a private key~$x$, a
  public key~$Y$, a message~$m \in \Bin^*$, and a tag~$\tau$, and outputs a
  verdict $v \gets V_x(Y, m, \tau)$ which is a bit $v \in \{0, 1\}$.  The
  verification algorithm must be such that if $(x, X)$ and $(y, Y)$ are any
  two pairs of keys produced by $G$, $m$ is any message, and $\tau$ is any
  tag produced by $T_x(Y, m)$, then $V_y(X, m, \tau)$ outputs~$1$.
\item The \emph{recipient simulator} $R$ accepts a private key~$x$, a
  public key~$Y$, and a message~$m \in \Bin^*$, and outputs a tag~$\tau
  \gets R_x(Y, m)$.  \qed
\end{itemize}
\end{definition}

\begin{definition}
\label{def:naiad-security}

Let $\Pi = (G, T, V, R)$ be a NAIAD.  We measure an adversary~$A$'s ability
to attack $\Pi$'s authenticity using the following game.
\begin{program}
$\Game{uf-ocma}{\Pi}(A)$: \+ \\
$w \gets 0$;
$\mathcal{T} \gets \emptyset$; \\
$(x, X) \gets G()$;
$(y, Y) \gets G()$; \\
$A^{\id{tag}(\cdot, \cdot), \id{vrf}(\cdot, \cdot)}(X, Y)$; \\
\RETURN $w$; \-
\newline
$\id{tag}(Q, m)$: \+ \\
$\tau \gets T_x(Q, m)$; \\
\IF $Q = Y$ \THEN
$\mathcal{T} \gets \mathcal{T} \cup \{ (m, \tau) \}$; \\
\RETURN $\tau$; \-
\next
$\id{vrf}(Q, m, \tau)$: \+ \\
$v \gets V_y(Q, m, \tau)$; \\
\IF $v = 1 \land Q = X \land (m, \tau) \notin \mathcal{T}$ \THEN
$w \gets 1$; \\
\RETURN $v$;
\end{program}

The adversary's UF-OCMA \emph{advantage} is measured by
\[ \Adv{uf-ocma}{\Pi}(A) = \Pr[\Game{uf-ocma}{\Pi}(A) = 1] \]
Finally, the UF-OCMA insecurity function of $\Pi$ is defined by
\[ \InSec{uf-ocma}(\Pi; t, q_T, q_V) = \max_A \Adv{uf-ocma}{\Pi}(A) \]
where the maximum is taken over all adversaries~$A$ completing the game in
time~$t$ and making at most $q_T$ tagging queries and at most $q_V$
verification queries.
\end{definition}

\begin{definition}
\label{def:naiad-deniability}

Let $\Pi = (G, T, V, R)$ be a NAIAD and let $J$ (the `judge') be an
algorithm.  The simulator's ability to deceive the judge is measured by the
following games.
\begin{program}
$\Game{sdeny-$b$}{\Pi}(J, m_0, m_1)$: \+ \\
$(x, X) \gets G()$;
$(y, Y) \gets G()$; \\
$\tau_0 \gets T_y(X, m_0)$; \\
$\tau_1 \gets R_x(Y, m_1)$; \\
$b' \gets J(x, X, y, Y, \tau_b, m_b)$; \\
\RETURN $b'$;
\end{program}

We measure $J$'s \emph{advantage} in distinguishing simulated tags from
genuine ones by
\[ \Adv{sdeny}{\Pi}(J) = \max_{m_0, m_1 \in \Bin^*} \bigl(
\Pr[\Game{sdeny-$1$}{\Pi}(J, m_0, m_1) = 1] -
\Pr[\Game{sdeny-$0$}{\Pi}(J, m_0, m_1) = 1] \bigr) \]
Finally, we define the \emph{insecurity function} for the deniability of
$\Pi$ by
\[ \InSec{sdeny}(\Pi; t) = \max_J \Adv{sdeny}{\Pi}(J) \]
where the maximum is taken over all algorithms~$J$ completing the game in
time~$t$.
\end{definition}

\subsection{A NAIAD based on Diffie--Hellman}
\label{sec:naiad.dh}

We now present a simple NAIAD whose security in the random-oracle model is
based on the computational Diffie--Hellman problem.  We work in a cyclic
group $G$ with $p = \#G$ prime, generated by a point $P \in G$.  We assume a
hash function $H\colon \Bin^* \to \Bin^\kappa$, which we model as a random
oracle.  The scheme is
\[ \proto{dh}^H(G) =
(\Xid{G}{dh}, \Xid{T}{dh}^H, \Xid{V}{dh}^H, \Xid{R}{dh}^H)
\]
where the algorithms are defined as follows.
\begin{program}
$\Xid{G}{dh}()$: \+ \\
$x \getsr \gf{p}$;
$x' \getsr \gf{p}$; \\
$X \gets x P$;
$X' \gets x' P$; \\
\RETURN $\bigl((x, x'), (X, X')\bigr)$; \-
\next
$\Xid{T}{dh}^{H(\cdot)}_{x, x'}((Y, Y'), m)$: \+ \\
$\tau \gets H([m, x P, x' P, Y, Y', x Y, x Y', x' Y, x' Y'])$; \\
\RETURN $\tau$; \-
\\[\medskipamount]
$\Xid{V}{dh}^{H(\cdot)}_{x, x'}((Y, Y'), m, \tau)$: \+ \\
$\tau' \gets H([m, Y, Y', x P, x' P, x Y, x' Y, x Y', x' Y'])$; \\
\IF $\tau = \tau'$ \THEN \RETURN $1$
\ELSE \RETURN $0$; \-
\\[\medskipamount]
$\Xid{R}{dh}^{H(\cdot)}_{x, x'}((Y, Y'), m)$: \+ \\
$\tau' \gets H([m, Y, Y', x P, x' P, x Y, x' Y, x Y', x' Y'])$; \\
\RETURN $\tau'$;
\end{program}
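As a sanity check on these definitions, the following toy instantiation (our own illustrative parameters: a tiny Schnorr group, with SHA-256 standing in for the random oracle $H$) exercises the tag/verify correspondence; the recipient simulator returns exactly the digest the verifier recomputes.

```python
# Toy instantiation of the Diffie--Hellman NAIAD (our own illustrative
# parameters: a tiny Schnorr group, with SHA-256 standing in for the
# random oracle H).  The tag hashes the message, both public keys and
# the four pairwise DH values; tagger and verifier list the same
# group elements, so the digests agree.
import hashlib
import secrets

Q = 1019               # subgroup order (prime)
P_MOD = 2 * Q + 1      # 2039, a safe prime
G = 4                  # generator of the order-Q subgroup

def keygen():
    x = secrets.randbelow(Q - 1) + 1
    xp = secrets.randbelow(Q - 1) + 1
    return (x, xp), (pow(G, x, P_MOD), pow(G, xp, P_MOD))

def _hash(m, *elts):
    data = m + b''.join(e.to_bytes(2, 'big') for e in elts)
    return hashlib.sha256(data).hexdigest()

def tag(priv, own_pub, peer_pub, m):
    (x, xp), (X, Xp), (Y, Yp) = priv, own_pub, peer_pub
    return _hash(m, X, Xp, Y, Yp,
                 pow(Y, x, P_MOD), pow(Yp, x, P_MOD),
                 pow(Y, xp, P_MOD), pow(Yp, xp, P_MOD))

def verify(priv, own_pub, peer_pub, m, tau):
    # Recompute the tagger's digest from the other side: the peer's
    # key comes first, and the DH values are reordered to match.
    (x, xp), (X, Xp), (Y, Yp) = priv, own_pub, peer_pub
    expected = _hash(m, Y, Yp, X, Xp,
                     pow(Y, x, P_MOD), pow(Y, xp, P_MOD),
                     pow(Yp, x, P_MOD), pow(Yp, xp, P_MOD))
    return tau == expected

alice_priv, alice_pub = keygen()
bob_priv, bob_pub = keygen()
t = tag(alice_priv, alice_pub, bob_pub, b'hello bob')
assert verify(bob_priv, bob_pub, alice_pub, b'hello bob', t)
assert not verify(bob_priv, bob_pub, alice_pub, b'hello eve', t)
```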

\begin{theorem}[Deniability of Diffie--Hellman NAIAD]
Let $\Pi = \proto{dh}^H(G)$ be as described above.  Then $\Pi$ is
perfectly deniable; i.e.,
\[ \InSec{sdeny}(\Pi; t, q_H) = 0 \]
\end{theorem}
\begin{proof}
There is precisely one correct tag for a given message from a given sender
to a given recipient, and it can be computed either by the tagging
algorithm or by the simulator.
\end{proof}

\begin{theorem}[Security of Diffie--Hellman NAIAD]
\label{th:dh-security}

Let $\Pi = \proto{dh}^H(G)$ be as described above.  Then
\[ \InSec{uf-ocma}(\Pi; t, q_T, q_V, q_H) \le
\InSec{cdh}(G; t + t') +
\frac{2 q_H}{\#G} + \frac{q_V}{2^\kappa}
\]
where $t'$ is the time taken to perform $4 q_H + 4$ multiplications and
$2 q_H + 2$ additions in $G$, and $q_T + q_V + q_H$ hashtable probes and
insertions.
\end{theorem}
\begin{proof}
Let~$A$ be any adversary which attacks the authenticity of $\Pi$ and runs
within the stated resource bounds.  It will suffice to bound $A$'s
advantage.

Game~$\G0 \equiv \Game{uf-ocma}{\Pi}(A)$ is the standard NAIAD attack game:
$A$ receives $X = x P$, $X' = x' P$, $Y = y P$, and $Y' = y' P$ as input;
its objective is to submit a forgery to its verification oracle.

In order to simplify our presentation, we shall describe the tagging and
verification functions in a different way.  Firstly, we define the function
$D\colon G^4 \to G^8$ by
\[ D(x P, x' P, Y, Y') = (x P, x' P, Y, Y', x Y, x Y', x' Y, x' Y') \]
(This function is clearly not easily computable, though it becomes so if we
know $x$ and $x'$.)  $D$ is evidently injective, because it includes its
preimage in the image.
2872 | ||
  Next, we define a simple permutation $\pi\colon G^8 \to G^8$ by
  \[ \pi(X, X', Y, Y', Z, Z', W, W') = (Y, Y', X, X', Z, W, Z', W') \]
  (It's not important, but $\pi = \pi^{-1}$.)  If we define $D' = \pi \circ D$
  then
  \[ D(Y, Y', X, X') = D'(X, X', Y, Y') \]
  Finally, for any message~$m$, we define $H_m\colon G^8 \to \Bin^\kappa$ by
  \[ H_m(X, X', Y, Y', Z, Z', W, W') = H([m, X, X', Y, Y', Z, Z', W, W']) \]
  We can now define
  \[ T_m = H_m \circ D \qquad \textrm{and} \qquad
     V_m = H_m \circ D' = H_m \circ \pi \circ D \]
  i.e., the following diagram commutes.
  \[ \begin{tikzpicture}
    \tikzset{
      every to/.style = {above, draw, font = \footnotesize},
      every edge/.style = {every to},
      node distance = 20mm
    }
    \node (xy) {$G^4$};
    \node[coordinate, below = of xy] (x) {};
    \node[left = 5mm of x] (d) {$G^8$}
      edge [<-, left] node {$D$} (xy);
    \node[right = 5mm of x] (dd) {$G^8$}
      edge [<-, right] node {$D'$} (xy);
    \draw ($(d.east) + (0, 2pt)$)
      to[->] node {$\pi$}
      ($(dd.west) + (0, 2pt)$);
    \draw ($(dd.west) - (0, 2pt)$)
      to[->, below] node {$\pi$}
      ($(d.east) - (0, 2pt)$);
    \node[left = of d] (t) {$\Bin^\kappa$}
      edge [<-, below] node {$H_m$} (d)
      edge [<-, above left] node {$T_m$} (xy);
    \node[right = of dd] (v) {$\Bin^\kappa$}
      edge [<-, below] node {$H_m$} (dd)
      edge [<-, above right] node {$V_m$} (xy);
  \end{tikzpicture} \]
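The relation $D(Y, Y', X, X') = \pi(D(X, X', Y, Y'))$ underlying the diagram can be checked mechanically.  A small sketch in the same toy additive group as before (generator $1$; an illustrative assumption, not a secure group):

```python
# Check the commuting square: pi applied to D(X, X', Y, Y') yields
# D(Y, Y', X, X'), and pi is an involution.  The toy "group" is Z_q under
# addition with generator 1, so points coincide with their discrete logs.
q = 2**61 - 1

def D(x, xx, y, yy):
    # D(xP, x'P, Y, Y'), computed from the discrete logs of all four points.
    return (x % q, xx % q, y % q, yy % q,
            x * y % q, x * yy % q, xx * y % q, xx * yy % q)

def pi(t):
    X, XX, Y, YY, Z, ZZ, W, WW = t
    return (Y, YY, X, XX, Z, W, ZZ, WW)

x, xx, y, yy = 3, 5, 7, 11
assert pi(D(x, xx, y, yy)) == D(y, yy, x, xx)       # D' = pi . D
assert pi(pi(D(x, xx, y, yy))) == D(x, xx, y, yy)   # pi = pi^{-1}
```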

  We can consequently rewrite
  \[ T_{x, x'}((R, R'), m) = T_m(x P, x' P, R, R') \]
  and
  \[ V_{y, y'}((Q, Q'), m, \tau) = \begin{cases}
       1 & \textrm{if $\tau = V_m(y P, y' P, Q, Q')$} \\
       0 & \textrm{otherwise}
     \end{cases}
  \]

  Now, $H$ -- and hence its restriction $H_m$ -- is a random function,
  assigning a uniformly distributed and independent $\kappa$-bit string to
  each point in its domain.  Since $D$ and $D'$ are injective, the functions
  $T_m$ and $V_m$ also have this property.  (Obviously the outputs of $H_m$,
  $T_m$ and $V_m$ are not independent of \emph{each other}, merely of other
  outputs of the same function.)  It's also clear that the action of $H_m$ on
  $D(G^4)$ is determined by $T_m$, and similarly for $D'(G^4)$ and $V_m$.
  This observation motivates the definition of the next game~$\G1$.

  Game~$\G1$ redefines the three oracles provided to the adversary in terms
  of three new functions $T$, $V$ and $H$ shown in \xref{fig:dh-naiad-g1}.
  We use the `lazy sampling' technique; we implement $T$ directly as a lazily
  sampled random function; $V$ consults $T$ where applicable, and otherwise
  uses a separate lazily sampled random function; and $H$ consults $T$ or $V$
  where applicable.

  \begin{figure}
    \begin{program}
      Initialization: \+ \\
        $\mathcal{H} \gets \emptyset$; \\
        $\mathcal{T} \gets \emptyset$; \\
        $\mathcal{V} \gets \emptyset$; \-
      \\[\medskipamount]
      $T(R, R', m)$: \+ \\
        \IF $(R, R', m) \in \dom \mathcal{T}$ \THEN \\
        \quad $\tau \gets \mathcal{T}(R, R', m)$; \\
        \ELSE \\ \ind
          $\tau \getsr \Bin^\kappa$; \\
          $\mathcal{T} \gets \mathcal{T} \cup \{ (R, R', m) \mapsto \tau \}$;
          \- \\
        \RETURN $\tau$; \-
      \next
      $V'(Q, Q', m)$: \+ \\
        \IF $(Q, Q', m) \in \dom \mathcal{V}$ \THEN \\
        \quad $\tau' \gets \mathcal{V}(Q, Q', m)$; \\
        \ELSE \\ \ind
          \IF $Q = X \land Q' = X'$ \THEN
            $\tau' \gets T(Y, Y', m)$; \\
          \ELSE
            $\tau' \getsr \Bin^\kappa$; \\
          $\mathcal{V} \gets \mathcal{V} \cup
            \{ (Q, Q', m) \mapsto \tau' \}$;
          \- \\
        \RETURN $\tau'$; \-
      \\[\medskipamount]
      $V(Q, Q', m, \tau)$: \+ \\
        $\tau' \gets V'(Q, Q', m)$; \\
        \IF $\tau = \tau'$ \THEN \RETURN $1$ \ELSE \RETURN $0$; \-
      \newline
      $H(s)$: \+ \\
        \IF $s \in \dom \mathcal{H}$ \THEN
          $h \gets \mathcal{H}(s)$; \\
        \ELSE \\ \ind
          \IF $s = [m, Q, Q', R, R', Z, Z', W, W']$ for some \\
          \hspace{4em}
            $(m, Q, Q', R, R', Z, Z', W, W') \in \Bin^* \times G^8$
          \THEN \\ \ind
            \IF $(Q, Q') = (X, X') \land
              (Z, Z', W, W') = (x R, x R', x' R, x' R')$ \THEN
              $h \gets T(R, R', m)$; \\
            \ELSE \IF $(R, R') = (Y, Y') \land
              (Z, Z', W, W') = (y Q, y' Q, y Q', y' Q')$ \THEN
              $h \gets V'(Q, Q', m)$; \\
            \ELSE
              $h \getsr \Bin^\kappa$; \- \\
          \ELSE
            $h \getsr \Bin^\kappa$; \\
          $\mathcal{H} \gets \mathcal{H} \cup \{ s \mapsto h \}$; \- \\
        \RETURN $h$;
    \end{program}

    \caption{Tagging, verification and hashing functions for $\G1$}
    \label{fig:dh-naiad-g1}
  \end{figure}

  The adversary's oracles map onto these functions in a simple way: $H$ is
  precisely the hashing oracle, and
  \[ T_{x, x'}((R, R'), m) = T(R, R', m) \qquad \textrm{and} \qquad
     V_{y, y'}((Q, Q'), m, \tau) = V(Q, Q', m, \tau) \]
  Given the foregoing discussion, we see that, despite the rather radical
  restructuring of the game, all of the quantities that the adversary sees
  are distributed identically, and therefore
  \begin{equation}
    \label{eq:dh-naiad-s0}
    \Pr[S_0] = \Pr[S_1]
  \end{equation}

  Game~$\G2$ is the same as $\G1$, except that we no longer credit the
  adversary with a win if it makes a verification query $V_{y, y'}((X, X'),
  m, \tau)$ without previously making a hashing query $H([m, X, X', Y, Y', y
  X, y' X, y X', y' X'])$.  If this happens, either there has been a previous
  query to $T_{x, x'}((Y, Y'), m)$, in which case the verification query
  can't count as a win in any case, or there was no such query, in which case
  the true tag $\tau'$ will be freshly generated uniformly at random.
  Evidently, then,
  \begin{equation}
    \label{eq:dh-naiad-s1}
    \Pr[S_1] - \Pr[S_2] \le \frac{q_V}{2^\kappa}
  \end{equation}

  Game~$\G3$ is similar to $\G2$, except that we change the way that the keys
  are set up.  Rather than choosing $x'$ and $y'$ at random and setting $(X',
  Y') = (x' P, y' P)$, we choose $(u, u', v, v') \inr \gf{p}^4$ and set
  \[ X' = u P + u' X \qquad \textrm{and} \qquad Y' = v P + v' Y \]
  It's clear that (a) $X'$ and $Y'$ are still uniformly distributed on $G$,
  and (b) they are independent of $u'$ and $v'$.  Since this change doesn't
  affect the distribution of $X'$ and $Y'$, we conclude that
  \begin{equation}
    \label{eq:dh-naiad-s2}
    \Pr[S_2] = \Pr[S_3]
  \end{equation}

  Finally, we bound $\Pr[S_3]$ by a reduction from the computational
  Diffie--Hellman problem in~$G$.  The reduction receives points $X^* = x P$
  and $Y^* = y P$ and must compute $Z^* = x y P$.  It sets $X = X^*$ and $Y =
  Y^*$.  It chooses $u$, $u'$, $v$, and $v'$, determines $X'$ and $Y'$ as in
  $\G3$, and runs $A$ with simulated oracles: the tagging and verification
  oracles work exactly as in $\G3$; however, it cannot implement the hashing
  function as described, so we must change it:
  \begin{itemize}
  \item rather than checking whether $(Z, Z', W, W') = (x R, x R', x' R, x'
    R')$, we check whether $(W, W') = (u R + u' Z, u R' + u' Z')$; and
  \item rather than checking whether $(Z, Z', W, W') = (y Q, y' Q, y Q', y'
    Q')$, we check whether $(Z', W') = (v Q + v' Z, v Q' + v' W)$.
  \end{itemize}
  Let $F$ be the event that this change causes the reduction's hashing
  function to make a decision that the proper $\G3$ wouldn't make.

  Let $U$ be the event that the adversary (apparently, using the reduction's
  modified hashing function) makes a successful forgery.  In this case, we
  must have seen a hashing query $H([m, X, X', Y, Y', Z, v X + v' Z, W, v X'
  + v' W])$.  If $F$ doesn't occur, then we in fact have $Z = x y P = Z^*$;
  the work we must do in the reduction over and above running~$A$ is to
  choose $u$ etc., to compute $X'$ and $Y'$, perform the dictionary
  maintenance, and use the twin-Diffie--Hellman detector, so
  \[ \Pr[U \mid \bar{F}] \le \InSec{cdh}(G; t + t') \]
  Furthermore, the reduction and $\G3$ proceed identically unless $F$ occurs,
  so \xref{lem:shoup} gives
  \[ \Pr[S_3] - \Pr[U] \le \Pr[F] \]
  A simple calculation bounds $\Pr[U]$:
  \begin{eqnarray*}[rl]
    \Pr[U] & = \Pr[U \land F] + \Pr[U \land \bar{F}] \\
    & = \Pr[U \land F] + \Pr[U \mid \bar{F}] \Pr[\bar{F}] \\
    & \le \Pr[F] + \Pr[U \mid \bar{F}] \\
    & \le \InSec{cdh}(G; t + t') + \Pr[F]
    \eqnumber \label{eq:dh-naiad-t}
  \end{eqnarray*}
  Finally, we bound $\Pr[F]$ using \xref{lem:2dh-detect}:
  \begin{equation}
    \label{eq:dh-naiad-f}
    \Pr[F] \le \frac{2 q_H}{\#G}
  \end{equation}
  Piecing together equations~\ref{eq:dh-naiad-s0}--\ref{eq:dh-naiad-f}
  completes the proof.
\end{proof}

\section{Tag signing and nondirectable key encapsulation}
\label{sec:tag}

\subsection{Construction and general authenticity failure}
\label{sec:tag.construct}

This section describes a rather different approach to constructing a deniably
authenticated asymmetric encryption scheme.  The approach could be summed up
in the phrase `sign something irrelevant'.

More concretely, we make use of the following ingredients:
\begin{itemize}
\item a key-encapsulation mechanism $\proto{kem} = (\lambda, \mathcal{G},
  \mathcal{E}, \mathcal{D})$;
\item an authenticated encryption with additional data (AEAD) scheme
  $\proto{aead} = (\kappa, E, D)$, with $\kappa < \lambda$; and
\item a digital signature scheme $\proto{sig} = (G, S, V)$.
\end{itemize}
Given these, we define a weakly deniably authenticated asymmetric encryption
scheme
\begin{spliteqn*}
  \Xid{\Pi}{wdaae-tag}(\proto{kem}, \proto{aead}, \proto{sig}) =
    (\Xid{G}{wdaae-tag}, \Xid{E}{wdaae-tag}, \\
     \Xid{D}{wdaae-tag}, \Xid{R}{wdaae-tag}, \Xid{S}{wdaae-tag})
\end{spliteqn*}
where the component algorithms are as shown in \xref{fig:tag.construct}.

\begin{figure}
  \begin{program}
    \begin{tikzpicture}[node distance = 5mm]
      \node[box = yellow!20, minimum width = 30mm] (m) {$m$};
      \node[op = red!20, below = of m] (enc) {$E$}
        edge[<-] (m);
      \node[box = red!20, minimum width = 30mm + 15pt, below = of enc]
        (c) {$c$}
        edge[<-] (enc);
      \node[box = green!20, right = -0.6pt of c] (sig) {$\sigma$};
      \node[op = green!20, above = 10mm] at (sig |- m) (s) {$S$}
        edge[->] (sig);
      \node[above = of s] {$a'$} edge[->] (s);
      \draw[->] (s |- enc) -- (enc);
      \node[box = green!20, left = 25mm of s, below] (t2) {$\tau$};
      \node[box = blue!20, above = -0.6pt of t2] (b2) {$B$};
      \draw[decorate, decoration = brace]
        (b2.north east) -- (t2.south east)
        coordinate[pos = 0.5, right = 2.5pt] (sm);
      \draw[->] (sm) -- (s);
      \node[box = green!20, left = of t2] (tag) {$\tau$};
      \node[box = red!20, left = -0.6pt of tag] (k) {$K$};
      \draw[decorate, decoration = brace]
        (k.north west) -- (tag.north east)
        coordinate[pos = 0.5, above = 2.5pt] (z);
      \node[op = blue!20, above = 8mm of z] (kem) {$\mathcal{E}$}
        edge[->] (z);
      \node[box = blue!20, left = -0.6pt of c] (u) {$u$};
      \draw[rounded, ->] (kem) -| +(-10mm, -8mm) |- (u);
      \node (b) at (kem -| b2) {$B$} edge[->] (kem);
      \draw[->] (tag) -- (t2);
      \draw[rounded, ->] (k) |- (enc);
      \draw[->] (b) -- (b2);
    \end{tikzpicture}
  \next
    $\Xid{G}{wdaae-tag}()$: \+ \\
      $(x, X) \gets \mathcal{G}()$;
      $(x', X') \gets G()$; \\
      \RETURN $\bigl( (x, x', X), (X, X') \bigr)$; \-
    \\[\medskipamount]
    $\Xid{E}{wdaae-tag}_{(x, x', X)}\bigl( (Y, Y'), m \bigr)$: \+ \\
      $(Z, u) \gets \mathcal{E}_Y()$; \\
      $K \gets Z[0 \bitsto \kappa]$;
      $\tau \gets Z[\kappa \bitsto \lambda]$; \\
      $\sigma \gets S_{x'}([\tau, Y])$; \\
      $c \gets E_K(m, [\sigma])$; \\
      \RETURN $\bigl( K, (u, \sigma, c) \bigr)$; \-
    \\[\medskipamount]
    $\Xid{D}{wdaae-tag}_{(x, x', X)}\bigl( (Y, Y'), (u, \sigma, c) \bigr)$:
      \+ \\
      $Z \gets \mathcal{D}_x(u)$; \IF $Z = \bot$ \THEN \RETURN $\bot$; \\
      $K \gets Z[0 \bitsto \kappa]$;
      $\tau \gets Z[\kappa \bitsto \lambda]$; \\
      $v \gets V_{Y'}([\tau, X], \sigma)$;
      \IF $v \ne 1$ \THEN \RETURN $\bot$; \\
      $m \gets D_K(c, [\sigma])$; \\
      \RETURN $m$; \-
  \newline
    $\Xid{R}{wdaae-tag}_{(x, x', X)}
        \bigl( (Y, Y'), (u, \sigma, c), m' \bigr)$: \+ \\
      $Z \gets \mathcal{D}_x(u)$; \IF $Z = \bot$ \THEN \RETURN $\bot$; \\
      $K \gets Z[0 \bitsto \kappa]$;
      $c' \gets E_K(m', [\sigma])$; \\
      \RETURN $(u, \sigma, c')$; \-
  \next
    $\Xid{S}{wdaae-tag}_{(x, x', X)}
        \bigl( (Y, Y'), K, (u, \sigma, c), m' \bigr)$: \+ \\
      $c' \gets E_K(m', [\sigma])$; \\
      \RETURN $(u, \sigma, c')$; \-
  \end{program}
  \caption{Weakly deniably authenticated asymmetric encryption by signing
    tags}
  \label{fig:tag.construct}
\end{figure}

When we analyse this construction, we find that we have good secrecy
(\xref{th:wdaae-tag.sec}) and deniability (\xref{th:wdaae-tag.deny}).
However, we are unable to prove authenticity.  Indeed, it's quite easy to see
that authenticity fails, in general.

Let $\proto{ae} = (G, E, D)$ be an asymmetric encryption scheme; we can
construct from it a simple KEM $\Xid{\Pi}{ae-kem}(\proto{ae}, \kappa) =
(\kappa, G, E', D)$ where $E'$ is simply
\begin{program}
  $E'_X()$: $K \getsr \Bin^\kappa$; $u \gets E_X(K)$; \RETURN $(K, u)$;
\end{program}
We immediately have
\[ \InSec{ind-cca}(\Xid{\Pi}{ae-kem}(\proto{ae}, \kappa); t, q_D) \le
   \InSec{ind-cca}(\proto{ae}; t, q_D)
\]
by a trivial reduction.

Let $\proto{sig} = (G, S, V)$ be a digital signature scheme.  We construct
another digital signature scheme $\proto{sig}' = (G, S', V')$ as follows.
\begin{program}
  $S'_x(m)$: $\sigma \gets S_x(m)$; \RETURN $(\sigma, m)$; \\
  $V'_X(m, (m', \sigma))$: $v \gets V_X(m, \sigma)$; \IF $v = 1 \land m' = m$
    \THEN \RETURN $1$ \ELSE \RETURN $0$;
\end{program}
We immediately have
\[ \InSec{euf-cma}(\proto{sig}'; t, q_S) =
   \InSec{euf-cma}(\proto{sig}; t, q_S)
\]

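The essential feature of $\proto{sig}'$ is that its signatures carry the signed message in the clear, so anyone holding a signature learns the message (in the construction above, the tag $\tau$).  A toy sketch follows, using textbook RSA over small primes as a hypothetical stand-in signature; both the primes and the truncated hash are illustrative assumptions, not a secure scheme.

```python
# sig' appends the signed message to the signature, so a sig' signature
# leaks its message.  The underlying signature is textbook RSA on a
# truncated hash -- a toy stand-in only, wildly insecure.
import hashlib

p, q = 10007, 10009                  # small primes: insecure, illustrative
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def Hm(m):                           # short hash, guaranteed < n
    return int.from_bytes(hashlib.sha256(m).digest()[:3], 'big')

def sign(m):      return pow(Hm(m), d, n)        # S_x(m)
def verify(m, s): return pow(s, e, n) == Hm(m)   # V_X(m, sigma)

def sign2(m):                        # S'_x: signature reveals m
    return (sign(m), m)

def verify2(m, sig):                 # V'_X: check sigma and the echoed m
    s, m2 = sig
    return verify(m, s) and m2 == m

msg = b'tau || Y'
sig = sign2(msg)
assert verify2(msg, sig)
assert sig[1] == msg                 # the signed message is exposed
```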
We have the following proposition.

\begin{proposition}[General authenticity failure of tag-signing]
  \label{prop:daae-tag.auth-fail}

  Let $\proto{ae}$, $\proto{aead}$, and $\proto{sig}$ be an asymmetric
  encryption scheme, a scheme for authenticated encryption with additional
  data, and a digital signature scheme, respectively.  Then
  \[ \InSec{uf-ocma}(\Xid{\Pi}{wdaae-tag}
       (\Xid{\Pi}{ae-kem}(\proto{ae}, \lambda),
        \proto{aead}, \proto{sig}'); t, 1, 1, 0) = 1
  \]
  for a small constant $t$.
\end{proposition}
\begin{proof}
  Suppose that $\proto{ae} = (\mathcal{G}, \mathcal{E}, \mathcal{D})$, and
  $\proto{aead} = (\kappa, E, D)$.  We describe a simple adversary breaking
  authenticity.
  \begin{enumerate}
  \item The adversary is invoked with public keys $(X, X')$ for the sender
    and $(Y, Y')$ for the recipient.
  \item It requests the encryption of the message $m = 0$ for the public key
    $(Y, Y')$.  Let the ciphertext returned be $(u, \sigma', c)$: the
    signature $\sigma' = (\sigma, [\tau, Y])$ reveals the tag $\tau$.
  \item It chooses a key $K \inr \Bin^\kappa$.
  \item It computes $u' \gets \mathcal{E}_Y(K \cat \tau)$.
  \item It computes $c' \gets E_K(1, [\sigma'])$.
  \item It submits the ciphertext $(u', \sigma', c')$ to the
    decryption oracle, with public key $(X, X')$.
  \end{enumerate}
  Inspection of the decryption algorithm shows that this is a valid
  ciphertext, so the adversary wins with probability $1$.
\end{proof}

What has gone wrong?  The problem is that, while an honest user of the KEM
allows it to select the output key at random, an adversary may be able to
\emph{direct} the KEM so that it produces an output key of the adversary's
choosing.  In the next sections:
\begin{itemize}
\item we define what it means for a KEM to be \emph{nondirectable}:
  essentially, that it is hard to arrange for the output key to be anything
  in particular;
\item we show, in the random oracle model, that a class of existing KEMs is
  already nondirectable; and
\item we prove the security of our construction assuming nondirectability of
  the KEM.
\end{itemize}

\subsection{Definition of nondirectability}
\label{sec:ndir.def}

Informally, we say that a KEM $\Pi = (\kappa, G, E, D)$ is $(t,
n)$-nondirectable if, given a public key $X$ (with corresponding private key
$x$) and a set $\mathcal{T}$ of $n$ independent and uniformly distributed
$t$-bit strings, it's infeasible to find a clue $u$ for which $D_x(u)[\kappa
- t \bitsto \kappa] \in \mathcal{T}$.  Formally, we have the following
definition.
\begin{definition}[$(t, n)$-nondirectability of a KEM]
  \label{def:kem-nondir}

  Let $\Pi = (\kappa, G, E, D)$ be a key-encapsulation mechanism, let $t$ be
  an integer such that $0 < t \le \kappa$, and let $n$ be a positive integer.
  We measure an adversary $A$'s ability to attack the $(t,
  n)$-nondirectability of $\Pi$ with the following game.
  \begin{program}
    $\Game{ndir}{\Pi, t, n}(A)$: \+ \\
      $(x, X) \gets G()$; \\
      \FOR $0 \le i < n$: \\ \ind
        $\tau_i \getsr \Bin^t$; \- \\
      $\mathbf{t} \gets (\tau_0, \tau_1, \ldots, \tau_{n-1})$; \\
      $\mathcal{T} \gets \{\tau_0, \tau_1, \ldots, \tau_{n-1}\}$; \\
      $w \gets 0$; \\
      $A^{\id{try}(\cdot)}(X, \mathbf{t})$; \\
      \RETURN $w$; \-
    \next
    $\id{try}(u)$: \+ \\
      $K \gets D_x(u)$; \\
      \IF $K \ne \bot \land K[\kappa - t \bitsto \kappa] \in \mathcal{T}$
        \THEN $w \gets 1$; \\
      \RETURN $K$;
  \end{program}
  The \emph{advantage} of $A$ against the $(t, n)$-nondirectability of $\Pi$
  is given by
  \[ \Adv{ndir}{\Pi, t, n}(A) = \Pr[\Game{ndir}{\Pi, t, n}(A) = 1] \]
  Finally, the $(t, n)$-nondirectability insecurity function of $\Pi$ is
  defined by
  \[ \InSec{ndir}(\Pi, t, n; t', q_D) = \max_A \Adv{ndir}{\Pi, t, n}(A) \]
  where the maximum is taken over all adversaries $A$ completing the game in
  time~$t'$ and making at most $q_D$ oracle queries.
\end{definition}
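The game, and the reason the $\Xid{\Pi}{ae-kem}$ construction above fails it, can be sketched concretely: because an ae-kem key is simply the plaintext of an asymmetric encryption, the adversary encrypts a key whose last $t$ bits hit a target.  The toy PKE below is textbook RSA over small primes, a hypothetical stand-in for illustration only.

```python
# Sketch of the nondirectability game, plus an adversary that wins it
# against ae-kem: the KEM key is just the plaintext of an asymmetric
# encryption, so the adversary encrypts a key with a chosen t-bit suffix.
# Toy PKE is textbook RSA on small primes (insecure, illustrative).
import random

p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))
def ae_encrypt(pub, m): return pow(m, e, pub)
def ae_decrypt(c):      return pow(c, d, n)

t = 8                                # target length in bits
rng = random.Random(1)
targets = {rng.getrandbits(t) for _ in range(4)}

won = [False]
def try_oracle(u):                   # the game's try() oracle
    K = ae_decrypt(u)                # ae-kem decapsulation
    if K % (1 << t) in targets:      # do the last t bits hit a target?
        won[0] = True
    return K

# The directing adversary: any prefix at all, forced t-bit suffix.
tau0 = next(iter(targets))
K_chosen = (12345 << t) | tau0
try_oracle(ae_encrypt(n, K_chosen))
assert won[0]
```

The adversary wins with probability $1$ and a single oracle query, mirroring the authenticity failure of the previous section.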

The general definition above is useful when proving security for
constructions which use KEMs as components; but the following theorem will be
handy when proving nondirectability of specific KEMs.

\begin{theorem}[Multi-target nondirectability]
  \label{th:kem-nondir-multi}

  For any key-encapsulation mechanism $\Pi$,
  \[ \InSec{ndir}(\Pi, t, n; t', q) \le
     n \, \InSec{ndir}(\Pi, t, 1; t', q) \]
\end{theorem}
\begin{proof}
  We describe a reduction from attacking $(t, 1)$-nondirectability to
  attacking $(t, n)$-nondirectability.  Suppose, then, that we're given a
  public key~$X$, a singleton vector $\mathbf{t} = (\tau_0)$, and an
  arbitrary adversary~$A$ running within the given resource bounds.  We
  choose $\tau_i \inr \Bin^t$ at random, for $1 \le i < n$, choose a
  random permutation $\pi$ on $\{0, 1, \ldots, n - 1\}$, and set $\mathbf{t}'
  = (\tau_{\pi(0)}, \tau_{\pi(1)}, \ldots, \tau_{\pi(n-1)})$.  We run $A$ on
  $X$ and $\mathbf{t}'$.  Let $\epsilon = \Adv{ndir}{\Pi, t, n}(A)$; then we
  claim that our reduction succeeds with probability at least $\epsilon/n$.
  The theorem then follows immediately from the claim.

  To prove the claim, then, note that $\mathbf{t}'$ is uniformly distributed
  on $(\Bin^t)^n$, and $\pi$ is independent of $\mathbf{t}'$ -- in
  particular, $\pi^{-1}(0)$ is uniform and independent of $\mathbf{t}'$.
  Hence, for any algorithm selecting a coordinate of $\mathbf{t}'$, the
  probability that it chooses $\tau_0$ is at least $1/n$ (with equality if
  the coordinates are distinct).  Now consider the algorithm that runs $A$ on
  $\mathbf{t}'$ and $X$, and selects the first element for which $A$ submits
  a matching KEM clue: conditioning on $A$ winning the game, it matches
  $\tau_0$ with probability at least $1/n$, and the claim follows.
\end{proof}

\subsection{Random oracle key-encapsulation mechanisms}
\label{sec:ndir.ro}

A number of key-encapsulation mechanisms in the random oracle model follow
a similar pattern: the sender somehow constructs a clue~$u$ and an answer~$z$
such that $z$ is unpredictable given only $u$ and the recipient's public key,
but the recipient can easily determine $z$ given her private key.  To move
from unpredictability to indistinguishability from random, both sender and
recipient hash $z$ to obtain a key~$K$.  If we model the hash function as a
random oracle, then $K$ is indistinguishable from random if the adversary
can't determine~$z$ exactly.

This pattern captures both Diffie--Hellman KEMs, such as are found in DHIES
\cite{Abdalla:2001:DHIES} or Twin ElGamal \cite{cryptoeprint:2008:067}, and
KEMs based on trapdoor one-way functions, such as RSA-KEM
\cite{cryptoeprint:2001:112}.

A bit more formally, we make the following definitions.

\begin{definition}[Pre-KEM syntax]
  \label{def:pre-kem.syntax}

  A \emph{pre-KEM} is a triple $\proto{pkem} = (G, E, D)$ of (maybe
  randomized) algorithms, as follows.
  \begin{itemize}
  \item The \emph{key-generation algorithm} $G$ accepts no parameters and
    outputs a pair $(x, X) \gets G()$.  We call $x$ the \emph{private key}
    and $X$ the \emph{public key}.
  \item The \emph{encapsulation algorithm} $E$ accepts a public key $X$ and
    outputs a pair $(z, u) \gets E_X()$.  We call $z$ the \emph{answer} and
    $u$ the \emph{clue}.
  \item The \emph{decapsulation algorithm} $D$ accepts a private key $x$ and
    a clue $u$, and outputs $z \gets D_x(u)$ which is either an answer or the
    distinguished symbol $\bot$.  The decapsulation algorithm must be such
    that if $(x, X)$ is any pair of keys produced by $G$, and $(z, u)$ is any
    answer/clue pair output by $E_X()$, then $D_x(u)$ outputs $z$. \qed
  \end{itemize}
\end{definition}

The basic notion of security for a pre-KEM is \emph{one-wayness under
chosen-ciphertext attack}: given a public key and a clue, the adversary
should be unable to guess the correct answer.  We grant the adversary access
to an oracle which verifies guessed answers to clues.

\begin{definition}[Pre-KEM security]
  \label{def:pre-kem.security}

  Let $\Pi = (G, E, D)$ be a pre-KEM.  We measure an adversary~$A$'s ability
  to attack $\Pi$ using the following game.
  \begin{program}
    $\Game{ow-cca}{\Pi}(A)$: \+ \\
      $(x, X) \gets G()$; \\
      $(z^*, u^*) \gets E_X()$; \\
      $w \gets 0$;
      $A^{\id{vrf}(\cdot, \cdot)}(X, u^*)$; \\
      \RETURN $w$; \-
    \next
    $\id{vrf}(u, z)$: \+ \\
      $z' \gets D_x(u)$; \\
      \IF $z \ne z'$ \THEN \RETURN $0$; \\
      \IF $u = u^*$ \THEN $w \gets 1$; \\
      \RETURN $1$; \-
  \end{program}

  The adversary's OW-CCA \emph{advantage} is measured by
  \[ \Adv{ow-cca}{\Pi}(A) = \Pr[\Game{ow-cca}{\Pi}(A) = 1] \]
  Finally, the OW-CCA insecurity function of $\Pi$ is defined by
  \[ \InSec{ow-cca}(\Pi; t, q_V) = \max_A \Adv{ow-cca}{\Pi}(A) \]
  where the maximum is taken over all adversaries~$A$ completing the game in
  time~$t$ and making at most $q_V$ queries to their verification
  oracles.
\end{definition}

\begin{example}
  [Pre-KEM from Twin Diffie--Hellman \cite{cryptoeprint:2008:067}]
  Let $G = \langle P \rangle$ be a group with prime order $p$.  Private keys
  are $(x, y) \inr \gf{p}^2$; the corresponding public key is $(X, Y) = (x P,
  y P)$.  To encapsulate, choose $r \inr \gf{p}$; the clue is $R = r P$, and
  the answer is $z = (r X, r Y)$; to decapsulate, compute $z = (x R, y R)$.
  Security is proven using \xref{lem:2dh-detect} to implement the
  verification oracle.
\end{example}
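The twin Diffie--Hellman pre-KEM is easily sketched; as in the earlier toy examples, the `group' is $\Bin_q$ under addition with generator $1$, an illustrative assumption only, so a point coincides with its discrete log and scalar multiplication is a mod-$q$ product.

```python
# Sketch of the twin Diffie--Hellman pre-KEM.  Toy "group": Z_q under
# addition with generator 1 -- illustration only, not a secure group.
import random

q = 2**61 - 1
rng = random.Random(7)

def keygen():
    x, y = rng.randrange(1, q), rng.randrange(1, q)
    return (x, y), (x, y)            # private (x, y); public (X, Y) = (xP, yP)

def encap(pub):
    X, Y = pub
    r = rng.randrange(1, q)
    return (r * X % q, r * Y % q), r # answer z = (rX, rY); clue R = rP

def decap(priv, R):
    x, y = priv
    return (x * R % q, y * R % q)    # z = (xR, yR)

priv, pub = keygen()
z, R = encap(pub)
assert decap(priv, R) == z           # both sides recover the same answer
```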

\begin{example}[Pre-KEM from a trapdoor one-way permutation]
  A trapdoor one-way permutation generator is (informally, at least) a family
  of permutations $f_K$ and an algorithm $G$ which generates pairs $(k, K)$,
  such that $y = f_K(x)$ can be computed easily given only $K$, but computing
  the preimage of a point is hard; given the `trapdoor' $k$, however,
  computing inverses $x = f^{-1}_k(y)$ is also easy.  Private keys are
  trapdoors $k$; the corresponding public key is $K$.  To encapsulate, choose
  a random point $z$ in the domain of $f$: $z$ is itself the answer and $u =
  f_K(z)$ is the clue.  To decapsulate, calculate $z = f^{-1}_k(u)$.
  Security follows immediately by a reduction from inverting $f_K$, since
  guesses are readily confirmed using the permutation in the forward
  direction.
\end{example}

We are now ready to describe the `hash-KEM' construction of a full KEM from a
pre-KEM.  Given a pre-KEM $\Pi = (G, E, D)$, and a hash function $H\colon
\Bin^* \to \Bin^\kappa$, we construct $\Xid{\Pi}{hkem}(\Pi, H) = (\kappa, G,
\Xid{E}{hkem}, \Xid{D}{hkem})$ as follows.
\begin{program}
  $\Xid{E}{hkem}_X()$: \+ \\
    $(z, u) \gets E_X()$; \\
    $K \gets H([u, z])$; \\
    \RETURN $(K, u)$; \-
\next
  $\Xid{D}{hkem}_x(u)$: \+ \\
    $z \gets D_x(u)$; \\
    $K \gets H([u, z])$; \\
    \RETURN $K$; \-
\end{program}

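The construction amounts to one extra hash on each side.  A sketch, layered over the toy twin-DH pre-KEM (the additive group $\Bin_q$ with generator $1$ is again an illustrative assumption, not a secure instantiation):

```python
# The hash-KEM construction: K = H([u, z]) from the pre-KEM clue u and
# answer z.  Pre-KEM here is the toy twin-DH sketch over Z_q under
# addition with generator 1 -- an illustrative, insecure stand-in.
import hashlib
import random

q = 2**61 - 1
rng = random.Random(9)

def pre_keygen():
    x, y = rng.randrange(1, q), rng.randrange(1, q)
    return (x, y), (x, y)

def pre_encap(pub):
    X, Y = pub
    r = rng.randrange(1, q)
    return (r * X % q, r * Y % q), r          # (answer z, clue u = rP)

def pre_decap(priv, u):
    x, y = priv
    return (x * u % q, y * u % q)

def H(u, z):                                  # random-oracle stand-in
    return hashlib.sha256(repr((u, z)).encode()).digest()[:16]

def hkem_encap(pub):
    z, u = pre_encap(pub)
    return H(u, z), u                         # (K, u)

def hkem_decap(priv, u):
    return H(u, pre_decap(priv, u))

priv, pub = pre_keygen()
K, u = hkem_encap(pub)
assert hkem_decap(priv, u) == K
```

Because $K$ is a hash output rather than an adversarially chosen plaintext, directing the last $t$ bits of $K$ now requires finding a hash preimage, which is the intuition behind the nondirectability results.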
\begin{theorem}[Hash-KEM security]
  \label{th:hkem.sec}

  Let $\Pi$ be a pre-KEM, and let $H\colon \Bin^* \to \Bin^\kappa$ be a
  random oracle; then
  \[ \InSec{ind-cca}(\Xid{\Pi}{hkem}(\Pi, H); t, q_D, q_H) \le
     2 \InSec{ow-cca}(\Pi; t + q_H t_D + O(q_H + q_D), q_H) \]
  where $t_D$ is the time required for $\Pi$'s decapsulation algorithm, and
  the $O(\cdot)$ hides a small constant factor representing the time required
  to generate a random $\kappa$-bit string and insert it into a hash table.
\end{theorem}
3457 | \begin{proof} | |
3458 | Let $A$ be any adversary attacking $\Xid{\Pi}{hkem}(\Pi, H)$ within the | |
3459 | stated resource bounds. We use a sequence of games. Game $\G0$ chooses a | |
3460 | uniform random bit $b \inr \{0, 1\}$, and then proceeds as in the IND-CCA | |
3461 | game. Let $X$ be the public key and $u^*$ and $K^*$ be the clue and key | |
3462 | provided to the adversary. Let $x$ be the corresponding private key, and | |
3463 | let $z^* = D_x(u^*)$. | |
3464 | ||
3465 | In each game $\G{i}$, let $S_i$ be the event that the adversary's output | |
3466 | $b' = b$. By \xref{lem:advantage}, it will suffice to show that | |
3467 | \[ \Pr[S_0] \le \frac{1}{2} + \InSec{ow-cca}(\Pi; t, q_H) \] | |

  In game~$\G1$, we change the behaviour of the random oracle and the
  decapsulation oracle. Rather than decapsulating the given clue, the
  decapsulation oracle maintains a hash table: if a clue has not been
  queried before, it chooses a $\kappa$-bit string uniformly at random,
  associates the string with the clue in the hash table, and returns it; if
  the clue has been queried before, it finds the associated random string,
  and returns it. If an input to the random oracle has the form of a
  clue/answer pair $[u, z]$, where $z = D_x(u)$, then it replies as if $u$
  had been queried to the decapsulation oracle; other random-oracle queries
  are handled by maintaining a second hash table mapping query inputs to
  random $\kappa$-bit strings. While this changes the underlying machinery
  substantially, everything appears identical from the adversary's point of
  view, so
  \begin{equation} \label{eq:hkem.sec.s0}
    \Pr[S_0] = \Pr[S_1]
  \end{equation}
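  The lazy-sampling bookkeeping used by the oracles in $\G1$ is standard; as
  an illustration, it can be sketched in Python (an illustrative fragment
  only: the \texttt{decap} parameter stands for $D_x$, and all names here
  are ours, not part of the scheme).

```python
import os

KAPPA = 16  # oracle output length, in bytes

class LazyOracles:
    """Lazy-sampling simulation of the oracles in game G1."""
    def __init__(self, decap):
        self.decap = decap   # the private-key decapsulation D_x
        self.dtab = {}       # clue -> random kappa-byte string
        self.htab = {}       # other random-oracle inputs -> random string

    def decaps_oracle(self, u):
        # Answer with a fresh random string the first time, and
        # consistently thereafter; never actually decapsulate.
        if u not in self.dtab:
            self.dtab[u] = os.urandom(KAPPA)
        return self.dtab[u]

    def random_oracle(self, query):
        # If the query parses as a clue/answer pair (u, z) with z = D_x(u),
        # answer exactly as the decapsulation oracle would; otherwise use a
        # second, independent lazy table.
        if isinstance(query, tuple):
            u, z = query
            if self.decap(u) == z:
                return self.decaps_oracle(u)
        if query not in self.htab:
            self.htab[query] = os.urandom(KAPPA)
        return self.htab[query]
```

  The point of the two tables is exactly the consistency property used in
  the proof: a query $H([u, z])$ with $z = D_x(u)$ and a decapsulation query
  at $u$ return the same string, while all other answers are independent and
  uniform.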

  In game~$\G2$, we always choose $K^*$ at random, regardless of the value of
  $b$. If $b = 1$, we should have computed it as $H([u^*, z^*])$; but since
  the random oracle associates uniformly distributed independent strings with
  its inputs, the adversary cannot discover our deception unless it queries
  $H$ at $[u^*, z^*]$. Let $F_1$ be the event that the adversary makes such
  a query; then (\xref{lem:shoup}):
  \begin{equation} \label{eq:hkem.sec.s1}
    \Pr[S_1] - \Pr[S_2] \le \Pr[F_1]
  \end{equation}
  Since $b$ is uniform and not used anywhere in the game except to compare
  with the adversary's output, we have
  \begin{equation} \label{eq:hkem.sec.s2}
    \Pr[S_2] = \frac{1}{2}
  \end{equation}
  Finally, we claim that
  \begin{equation} \label{eq:hkem.sec.f1}
    \Pr[F_1] \le \InSec{ow-cca}(\Pi; t + q_H t_D + O(q_D + q_H), q_H)
  \end{equation}
  We prove this by reduction, attacking $\Pi$. Our reduction simulates $\G2$
  as described, providing the public key $X$ and clue $u^*$ to $A$; it
  answers decapsulation queries randomly, as for $\G1$, and uses its
  verification oracle to identify random-oracle inputs of the necessary form.
  Clearly if $A$ makes a query $H([u^*, z^*])$ then the reduction wins its
  game.

  The theorem follows by combining
  equations~\ref{eq:hkem.sec.s0}--\ref{eq:hkem.sec.f1}.
\end{proof}

\begin{theorem}[Pre-KEM nondirectability]
  If $H\colon \Bin^* \to \Bin^\kappa$ is a random oracle then
  \[ \InSec{ndir}(\proto{hkem}, t, n; t', q_D, q_H) \le
     \frac{(q_D + q_H) n}{2^t} \]
\end{theorem}
\begin{proof}
  Let $A$ be an adversary in the nondirectability game. $A$ is given a set
  $\mathcal{T}$ of uniformly distributed $t$-bit target strings with
  $\#\mathcal{T} \le n$. The processing of each verification query involves
  computing an answer $z$ from the input clue $u$, and querying the random
  oracle at $[u, z]$. If the clues are distinct, the random-oracle inputs
  will also be distinct. Additionally, the adversary may make random-oracle
  queries of its own. In all, there may therefore be at most $q_D + q_H$
  such distinct queries. In response to each distinct query, the random
  oracle chooses an independent uniformly distributed $\kappa$-bit string.
  The probability that any given random-oracle output has a $t$-bit suffix
  in $\mathcal{T}$ is at most $\#\mathcal{T}/2^t \le n/2^t$; the theorem
  follows by a union bound over the at most $q_D + q_H$ outputs.
\end{proof}

\subsection{Weakly deniable AAE from a nondirectable KEM}
\label{sec:tag.theorems}

Having established the theory of nondirectable KEMs, we apply it to the
construction of \xref{sec:tag.construct}.
\begin{theorem}[Tag-signing secrecy]
  \label{th:wdaae-tag.sec}

  Let $\proto{kem}$, $\proto{aead}$, and $\proto{sig}$ be a key-encapsulation
  mechanism, AEAD scheme and signature scheme, respectively. Then
  \begin{spliteqn*}
    \InSec{ind-occa}(\Xid{\Pi}{wdaae-tag}
      (\proto{kem}, \proto{aead}, \proto{sig}); t, q_E, q_D, q_R) \le \\
    2 \InSec{ind-cca}(\proto{kem}; t, q_D + q_R) +
    \InSec{ind-cca}(\proto{aead}; t, 1 + q_R, q_D)
  \end{spliteqn*}
\end{theorem}
\begin{proof}
  Let $A$ be any adversary attacking $\Pi = \Xid{\Pi}{wdaae-tag}(\proto{kem},
  \proto{aead}, \proto{sig})$ within the stated resource bounds. It will
  suffice to bound $A$'s advantage.

  We use a sequence of games. Game~$\G0$ is the standard IND-OCCA game, with
  $b$ chosen uniformly at random. In each game~$\G{i}$, let $S_i$ be the
  event that $A$'s output is equal to $b$.

  In game~$\G1$, we change the way that we compute the challenge ciphertext.
  Specifically, rather than using the key and tag output by the KEM, we
  simply choose them at random. We record the KEM clue for later use, so
  that the decryption and recipient-simulator oracles can notice it and use
  the correct key and tag. We claim that
  \begin{equation} \label{eq:wdaae-tag.sec.s0}
    \Pr[S_0] - \Pr[S_1] \le \InSec{ind-cca}(\proto{kem}; t, q_D + q_R)
  \end{equation}
  The claim follows by a simple reduction from KEM secrecy. The reduction is
  given a KEM public key $Y$, a KEM clue $u^*$ and a string $Z^* \in
  \Bin^\lambda$. It chooses a bit $b^* \inr \{0, 1\}$, generates a KEM key
  pair $(x, X)$ for the sender, and signature key pairs $(x', X')$ for the
  sender and $(y', Y')$ for the recipient. It then runs $A$'s \cookie{find}
  stage, providing it with the various public keys and the encryption,
  decryption and recipient-simulator oracles described below, eventually
  receiving two messages $m_0$ and $m_1$ of equal length. It splits $Z^*$
  into $K^*$ and $\tau^*$, computes the signature $\sigma^* \gets
  S_{x'}([\tau^*, Y])$ and the ciphertext $c^* \gets E_{K^*}(m_{b^*},
  [\sigma^*])$. It runs $A$'s \cookie{guess} stage on $(u^*, c^*,
  \sigma^*)$, again providing the various oracles, until $A$ outputs a bit.
  If $A$'s output equals $b^*$ then the reduction outputs $1$; otherwise it
  outputs $0$.

  We now describe the oracles provided by the reduction. The encryption
  oracle simply runs the encryption algorithm, using the known private
  key~$x'$ to make the necessary signatures. The decryption oracle, given a
  ciphertext triple $(u, c, \sigma)$, checks whether $u = u^*$. If so, it
  decrypts the ciphertext using the known value of $Z^*$; otherwise, it
  queries the decapsulation oracle at $u$ to obtain $Z$, and proceeds as in
  the decryption algorithm. The recipient-simulator oracle behaves
  similarly.

  We see that, if $Z^*$ is a real KEM output then the reduction simulates
  $\G0$; otherwise it simulates $\G1$, in both cases within the claimed
  resource limits. It therefore obtains advantage $\Pr[S_0] - \Pr[S_1]$,
  which is by definition bounded above as claimed.

  Finally, we bound $\Pr[S_1]$; we claim:
  \begin{equation} \label{eq:wdaae-tag.sec.s1}
    \Pr[S_1] \le
      \frac12 \InSec{ind-cca}(\proto{aead}; t, 1 + q_R, q_D) + \frac12
  \end{equation}
  Again, we use a reduction, this time from secrecy of the AEAD scheme. The
  reduction constructs the challenge ciphertext by running the
  key-encapsulation algorithm to choose a clue $u^*$, but generating $\tau^*$
  at random and using the AEAD encryption oracle to encrypt one or other of
  $A$'s chosen messages. It simulates the decryption oracle by detecting
  ciphertexts with clue $u^*$ and using the AEAD decryption oracle. (We note
  that $A$ is forbidden from querying its decryption oracle at precisely
  the challenge ciphertext, so the reduction will not itself make forbidden
  queries.) The recipient-simulator oracle similarly detects ciphertexts
  with clue $u^*$, and uses the AEAD encryption oracle (using the input
  message as both message inputs) to construct its output. The claim follows
  from an application of \xref{lem:advantage}.

  The theorem now follows by combining
  equations~\ref{eq:wdaae-tag.sec.s0} and~\ref{eq:wdaae-tag.sec.s1}, and
  again applying \xref{lem:advantage}.
\end{proof}

\begin{theorem}[Tag-signing authenticity]
  \label{th:wdaae-tag.auth}

  Let $\proto{kem}$, $\proto{aead}$, and $\proto{sig}$ be a key-encapsulation
  mechanism, AEAD scheme and signature scheme, respectively. Then
  \begin{eqnarray*}[Lcl]
    \InSec{uf-ocma}(\Xid{\Pi}{wdaae-tag}
      (\proto{kem}, \proto{aead}, \proto{sig}); t, q_E, q_D, q_R) \le \\
    & & q_E \InSec{ind-cca}(\proto{kem}; t, q_D + q_R) +
        q_E \InSec{int-ctxt}(\proto{aead}; t, 1 + q_R, q_D) \\
    & + & \InSec{ndir}(\proto{kem}, \lambda - \kappa, q_E; t, q_D) +
          \InSec{euf-cma}(\proto{sig}; t, q_E)
  \end{eqnarray*}
\end{theorem}
\begin{proof}
  Let $A$ be any adversary attacking $\Pi = \Xid{\Pi}{wdaae-tag}(\proto{kem},
  \proto{aead}, \proto{sig})$ within the stated resource bounds. It will
  suffice to bound $A$'s advantage.

  We introduce the following notation. For $0 \le i < q_E$, let $(Q_i, Q'_i,
  m_i)$ be the adversary's $i$th query to its encryption oracle. Let $(Z_i,
  u_i) \gets \mathcal{E}_{Q_i}()$ be the KEM output key and clue generated
  while answering the query, and $K_i = Z_i[0 \bitsto \kappa]$ and $\tau_i =
  Z_i[\kappa \bitsto \lambda]$. Let $\sigma_i \gets S_{x'}([\tau_i, Q_i])$
  be the signature generated, and let $c_i \gets E_{K_i}(m_i, [\sigma_i])$ be
  the AEAD ciphertext; the triple returned to the adversary is $(u_i, c_i,
  \sigma_i)$.

  We use a sequence of games. Game~$\G0$ is the standard UF-OCMA game. In
  each game~$\G{i}$, let $S_i$ be the event that $A$ queries its decryption
  oracle on a valid forgery $(u, c, \sigma)$.

  In game~$\G1$, we alter the encryption oracle, so that, when answering a
  query with $Q_i = Y$, it generates $Z_i$ uniformly at random, rather than
  using the output of the KEM. We also alter the decryption and
  recipient-simulator oracles: if the input clue is equal to $u_i$ for some
  $0 \le i < q_E$ such that $Q_i = Y$, then they use the corresponding $Z_i$
  rather than applying the decapsulation algorithm. We claim that
  \begin{equation} \label{eq:wdaae-tag.auth.s0}
    \Pr[S_0] - \Pr[S_1] \le
      q_E \InSec{ind-cca}(\proto{kem}; t, q_D + q_R)
  \end{equation}
  The claim is proven using a hybrid argument. For $0 \le i \le q_E$, we
  define the hybrid game $\G[H]i$ like $\G0$, except that the first $i$
  encryption-oracle queries are answered by generating the $Z_j$ at random,
  rather than using the KEM, as in $\G1$; the remaining $q_E - i$ queries are
  answered using the KEM, as in $\G0$. Let $T_i$ be the event in
  game~$\G[H]i$ that the adversary queries its decryption oracle on a valid
  forgery. Notice that $\G[H]0 \equiv \G0$ and $\G[H]{q_E} \equiv \G1$. A
  reduction argument, similar to that in the previous proof, shows that, for
  $0 \le i < q_E$,
  \[ \Pr[T_i] - \Pr[T_{i+1}] \le
     \InSec{ind-cca}(\proto{kem}; t, q_D + q_R)
  \]
  The claim then follows because
  \[ \Pr[S_0] - \Pr[S_1] = \Pr[T_0] - \Pr[T_{q_E}]
     = \sum_{0 \le i < q_E} \bigl( \Pr[T_i] - \Pr[T_{i+1}] \bigr)
  \]

  In game~$\G2$, we no longer credit the adversary with a win for any forgery
  attempt $(u, c, \sigma)$ if $u = u_i$ for some $0 \le i < q_E$ such that
  $Q_i = Y$. Let $F_2$ be the event that the adversary makes a decryption
  query that, in $\G1$, would be a valid forgery, but which is rejected in
  $\G2$; then (\xref{lem:shoup})
  \begin{equation} \label{eq:wdaae-tag.auth.s1}
    \Pr[S_1] - \Pr[S_2] \le \Pr[F_2]
  \end{equation}
  For each $0 \le i < q_E$, let $E_i$ be the event that the adversary makes a
  decryption query $(u_i, c, \sigma)$ which would have been a valid forgery
  in $\G1$ but is not in $\G2$. Then $F_2 = \bigvee_{0 \le i < q_E} E_i$, so
  \[ \Pr[F_2] \le \sum_{0 \le i < q_E} \Pr[E_i] \]
  We claim that
  \[ \Pr[E_i] \le \InSec{int-ctxt}(\proto{aead}; t, 1 + q_R, q_D) \]
  and it will therefore follow that
  \begin{equation} \label{eq:wdaae-tag.auth.f2}
    \Pr[F_2] \le q_E \InSec{int-ctxt}(\proto{aead}; t, 1 + q_R, q_D)
  \end{equation}
  We prove the claim by a reduction from authenticity of the AEAD scheme.
  The reduction generates keys for the sender and recipient and simulates
  $\G1$, with the following exceptions. If $Q_i = Y$ then the reduction
  generates $\tau_i$ at random, but doesn't bother choosing $K_i$: rather, it
  uses the AEAD encryption oracle to determine $c_i$. Similarly, when
  responding to a decryption query with clue $u_i$, it uses its AEAD
  decryption oracle to verify and decrypt the AEAD ciphertext; and it uses
  the AEAD encryption oracle to answer recipient-simulator queries with clue
  equal to $u_i$. If $E_i$ occurs, then there must be a decryption query
  $(u_i, c, \sigma)$ with $(c, \sigma)$ not equal to $(c_i, \sigma_i)$
  or any of the ciphertext/signature pairs output by the recipient-simulator
  oracle with clue $u_i$. But if this happens then the reduction will have
  submitted a valid ciphertext/header pair $(c, [\sigma])$, not output by the
  AEAD encryption oracle, proving the claim.

  In game~$\G3$, we no longer credit the adversary with a win for any forgery
  attempt $(u, c, \sigma)$ when $\tau = \mathcal{D}_y(u)[\kappa \bitsto
  \lambda] = \tau_i$ for any $0 \le i < q_E$ with $Q_i = Y$. Let $F_3$ be
  the event that the adversary makes a decryption query that, in $\G2$, would
  be a valid forgery, but which is rejected in $\G3$; then (\xref{lem:shoup})
  \begin{equation} \label{eq:wdaae-tag.auth.s2}
    \Pr[S_2] - \Pr[S_3] \le \Pr[F_3]
  \end{equation}
  Note that the $\tau_i$ are each uniformly distributed, so a simple
  reduction (which we omit) shows that
  \begin{equation} \label{eq:wdaae-tag.auth.f3}
    \Pr[F_3] \le \InSec{ndir}(\proto{kem}, \lambda - \kappa, q_E; t, q_D)
  \end{equation}

  Let us pause to examine the situation in $\G3$. Let $(u, c, \sigma)$ be a
  decryption query; let $Z \gets \mathcal{D}_y(u)$ and $\tau = Z[\kappa
  \bitsto \lambda]$; then we must have $\tau \ne \tau_i$ for any $0 \le i <
  q_E$ with $Q_i = Y$. We must also have $V_{X'}([\tau, Y], \sigma) \to 1$;
  but the only signatures produced by the encryption oracle are on messages
  $[\tau_i, Q_i] \ne [\tau, Y]$. Therefore the only avenue remaining to the
  adversary is to forge a signature. Another simple reduction, which again
  we omit, shows that
  \begin{equation} \label{eq:wdaae-tag.auth.s3}
    \Pr[S_3] \le \InSec{euf-cma}(\proto{sig}; t, q_E)
  \end{equation}

  Finally, piecing together
  equations~\ref{eq:wdaae-tag.auth.s0}--\ref{eq:wdaae-tag.auth.s3} completes
  the proof.
\end{proof}

\begin{theorem}[Tag-signing weak deniability]
  \label{th:wdaae-tag.deny}

  Let $\proto{kem}$, $\proto{aead}$, and $\proto{sig}$ be a key-encapsulation
  mechanism, AEAD scheme and signature scheme, respectively. Then
  \[ \InSec{wdeny}(\Xid{\Pi}{wdaae-tag}
       (\proto{kem}, \proto{aead}, \proto{sig}); t, q_E, q_D, q_R) = 0
  \]
\end{theorem}
\begin{proof}
  Inspection of the simulators reveals that any pair of ciphertexts submitted
  to the judge has the following structure: both ciphertexts contain the
  same properly generated KEM clue and signature, together with potentially
  different, but properly generated, AEAD encryptions of the appropriate
  messages. The two ciphertexts are therefore identically distributed.
\end{proof}

%%%--------------------------------------------------------------------------
\section{Generic weakly deniable construction}
\label{sec:gwd}

In this section we describe a generic construction meeting the definition of
`weak deniability' (\xref{sec:deny.weak}), and prove its security.

\subsection{Description of the construction}
\label{sec:gwd.description}

Firstly, we give an informal description; then we present the formal version
as pseudocode. We need the following ingredients.
\begin{itemize}
\item A key encapsulation mechanism (KEM;
  \xref{def:kem-syntax})~$\proto{kem} = (\lambda, \mathcal{G}, \mathcal{E},
  \mathcal{D})$, secure against chosen-ciphertext attack (IND-CCA;
  \xref{def:kem-security}).
\item A symmetric encryption scheme (\xref{def:se-syntax})~$\proto{se} =
  (\kappa, E, D)$, secure against chosen-ciphertext attack and with
  integrity of ciphertexts (IND-CCA and INT-CTXT; \xref{def:se-security}),
  where $\kappa < \lambda$.
\item A digital signature scheme (\xref{def:sig-syntax})~$\proto{sig} =
  (G, S, V)$, secure against existential forgery under chosen-message attack
  (EUF-CMA; \xref{def:sig-security}) and satisfying the additional property
  that the encodings $[\sigma]$ of all signatures $\sigma$ have the same
  length.\footnote{%
    Most practical signature schemes, e.g., based on RSA
    \cite{Rivest:1978:MOD,Bellare:1996:ESD,RSA:2002:PVR,rfc3447} or DSA
    \cite{FIPS:2000:DSS}, have this property for arbitrary messages.
    Regardless, we investigate how to weaken this requirement in
    \xref{sec:gwd.variant}.} %
\end{itemize}

Alice's private key consists of a private key~$a$ for the KEM and a private
key~$a'$ for the signature scheme; her public key consists of the two
corresponding public keys $A$ and~$A'$. Similarly, Bob's private key
consists of $b$ and $b'$ for the KEM and signature scheme, and his public key
is $B$ and $B'$. A diagram of the scheme is shown in \xref{fig:gwd}.

To send a message~$m$ to Bob, Alice performs the following steps.
\begin{enumerate}
\item She runs the KEM on Bob's public key~$B$; it returns a `clue'~$u$ and
  a $\lambda$-bit `key'~$Z$.
\item She splits $Z$ into a $\kappa$-bit key~$K$ for the symmetric encryption
  scheme, and a $(\lambda - \kappa)$-bit `tag'~$\tau$.
\item She signs $[\tau, B]$ using the signature scheme and her private
  key~$a'$, producing a signature~$\sigma$.
\item She encrypts the signature and her message using the symmetric
  encryption scheme, with key~$K$, producing a ciphertext $y$.
\item The final ciphertext consists of two elements: the KEM clue~$u$ and the
  symmetric ciphertext~$y$.
\end{enumerate}
To decrypt the message represented by $(u, y)$, Bob performs these steps.
\begin{enumerate}
\item He applies the KEM to the clue~$u$ and his private key~$b$, obtaining
  a $\lambda$-bit `key'~$Z$.
\item He splits $Z$ into a $\kappa$-bit key~$K$ for the symmetric encryption
  scheme and a $(\lambda - \kappa)$-bit tag~$\tau$.
\item He decrypts the ciphertext~$y$ using the symmetric encryption scheme
  with key~$K$, obtaining a signature~$\sigma$ and a message~$m$.
\item He verifies the signature $\sigma$ on the pair~$[\tau, B]$, using
  Alice's public key~$A'$.
\end{enumerate}

\begin{figure}
  \centering
  \begin{tikzpicture}
    \node[op = blue!20] (kem) {$\mathcal{E}$};
    \node[box = red!20, left = -0.3pt] at (0, -1) (K) {$K$};
    \node[box = green!20, right = -0.3pt] at (0, -1) (tau) {$\tau$};
    \draw[decorate, decoration = brace]
      (K.north west) -- (tau.north east)
      coordinate [pos = 0.5, above = 2.5pt] edge [<-] (kem);
    \node[box = green!20, right = of tau] (tau2) {$\tau$};
    \node[box = blue!20, right = -0.6pt of tau2] (B) {$B$};
    \node at (B |- kem) {$B$} edge [->] (kem)
                              edge [->] (B);
    \draw[->] (tau) -- (tau2);
    \node[op = green!20, below = of tau2.south east] (sig) {$S$};
    \draw[decorate, decoration = brace]
      (B.south east) -- (tau2.south west)
      coordinate [pos = 0.5, below = 2.5pt] edge [->] (sig);
    \node[left = of sig] {$a'$} edge [->] (sig);
    \node[box = green!20, below = of sig] (sigma) {$\sigma$}
      edge [<-] (sig);
    \node[box = yellow!20, right = -0.6pt of sigma, minimum width = 30mm]
      (m) {$m$};
    \node at (m |- kem) {$m$} edge [->] (m);
    \node[op = red!20, below = of m.south west] (enc) {$E$};
    \draw[decorate, decoration = brace]
      (m.south east) -- (sigma.south west)
      coordinate [pos = 0.5, below = 2.5pt] (p);
    \draw[->, rounded] (p) |- (enc);
    \draw[->, rounded] (K) |- (enc);
    \node[box = red!20, below = of enc, minimum width = 35mm] (y) {$y$};
    \node[box = blue!20, left = -0.6pt of y] (u) {$u$};
    \draw[->] (enc) -- (y);
    \draw[->, rounded] (kem) -- +(-1, 0) |- (u);
  \end{tikzpicture}
  \caption{Generic weakly-deniable asymmetric encryption}
  \label{fig:gwd}
\end{figure}

More formally, we define our generic weakly-deniable scheme
$\proto{aae-gwd}(\proto{kem}, \proto{sig}, \proto{se})$
to be the collection of algorithms $(\Xid{G}{aae-gwd}, \Xid{E}{aae-gwd},
\Xid{D}{aae-gwd}, \Xid{R}{aae-gwd}, \Xid{S}{aae-gwd})$ as follows.
\begin{program}
  $\Xid{G}{aae-gwd}()$: \+ \\
    $(x, X) \gets \mathcal{G}()$; \\
    $(x', X') \gets G()$; \\
    \RETURN $\bigl( (x, x'), (X, X') \bigr)$; \-
  \newline
  $\Xid{E}{aae-gwd}_{x, x'}\bigl((Y, Y'), m\bigr)$: \+ \\
    $(Z, u) \gets \mathcal{E}_Y()$; \\
    $K \gets Z[0 \bitsto \kappa]$;
    $\tau \gets Z[\kappa \bitsto \lambda]$; \\
    $\sigma \gets S_{x'}([\tau, Y])$; \\
    $y \gets E_K([\sigma] \cat m)$; \\
    \RETURN $\bigl((u, y), K\bigr)$; \-
\next
  $\Xid{D}{aae-gwd}_{x, x'}\bigl((Y, Y'), (u, y)\bigr)$: \+ \\
    $Z \gets \mathcal{D}_x(u)$;
    \IF $Z = \bot$ \THEN \RETURN $\bot$; \\
    $K \gets Z[0 \bitsto \kappa]$;
    $\tau \gets Z[\kappa \bitsto \lambda]$; \\
    $\hat{m} \gets D_K(y)$;
    \IF $\hat{m} = \bot$ \THEN \RETURN $\bot$; \\
    $[\sigma] \cat m \gets \hat{m}$;
    \IF $V_{Y'}([\tau, X], \sigma) = 0$ \THEN \RETURN $\bot$; \\
    \RETURN $m$; \-
  \newline
  $\Xid{R}{aae-gwd}((x, x'), (Y, Y'), (u, y), m')$: \+ \\
    $Z \gets \mathcal{D}_x(u)$;
    \IF $Z = \bot$ \THEN \RETURN $\bot$; \\
    $K \gets Z[0 \bitsto \kappa]$; \\
    $\hat{m} \gets D_K(y)$;
    \IF $\hat{m} = \bot$ \THEN \RETURN $\bot$; \\
    $[\sigma] \cat m \gets \hat{m}$; \\
    $y' \gets E_K([\sigma] \cat m')$; \\
    \RETURN $(u, y')$; \-
\next
  $\Xid{S}{aae-gwd}((Y, Y'), (x, x'), (u, y), K, m')$: \+ \\
    \\
    \\
    $\hat{m} \gets D_K(y)$;
    \IF $\hat{m} = \bot$ \THEN \RETURN $\bot$; \\
    $[\sigma] \cat m \gets \hat{m}$; \\
    $y' \gets E_K([\sigma] \cat m')$; \\
    \RETURN $(u, y')$; \-
\end{program}
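
To make the data flow concrete, here is a brief Python sketch of the scheme
with deliberately insecure placeholder primitives: in the toy `KEM' and
`signature' below, the public key simply equals the private key, and the
symmetric scheme is an ad-hoc hash-based cipher. Every name is ours, and
nothing here is a secure instantiation; the point is only to exercise the
encryption, decryption, and recipient-simulator algorithms above.

```python
# Toy, insecure stand-ins for the primitives of aae-gwd; illustration only.
import hashlib
import hmac
import os

LAM = 32      # lambda: total KEM output length, in bytes
KAP = 16      # kappa:  symmetric key length; the tag is LAM - KAP bytes
SIGLEN = 32   # all signature encodings have the same length

def kem_gen():                        # toy KEM: public key == private key!
    x = os.urandom(16)
    return x, x

def kem_encap(pub):
    u = os.urandom(16)                # the clue
    return hashlib.sha256(pub + u).digest(), u

def kem_decap(priv, u):
    return hashlib.sha256(priv + u).digest()

def sig_gen():                        # toy "signature": an HMAC whose
    k = os.urandom(16)                # verification key equals the
    return k, k                       # signing key

def sign(sk, msg):
    return hmac.new(sk, msg, hashlib.sha256).digest()

def verify(vk, msg, sig):
    return hmac.compare_digest(sign(vk, msg), sig)

def se_enc(K, m):                     # toy authenticated encryption
    n = os.urandom(16)
    pad = hashlib.sha256(K + n).digest()
    while len(pad) < len(m):
        pad += hashlib.sha256(K + n + pad[-32:]).digest()
    ct = bytes(a ^ b for a, b in zip(m, pad))
    return n + ct + hmac.new(K, n + ct, hashlib.sha256).digest()

def se_dec(K, y):
    n, ct, tag = y[:16], y[16:-32], y[-32:]
    if not hmac.compare_digest(hmac.new(K, n + ct, hashlib.sha256).digest(), tag):
        return None
    pad = hashlib.sha256(K + n).digest()
    while len(pad) < len(ct):
        pad += hashlib.sha256(K + n + pad[-32:]).digest()
    return bytes(a ^ b for a, b in zip(ct, pad))

def gwd_enc(sender_sk, rcpt_pk, m):   # E: returns ciphertext and hint K
    (a, a1), (B, B1) = sender_sk, rcpt_pk
    Z, u = kem_encap(B)
    K, tau = Z[:KAP], Z[KAP:LAM]
    sigma = sign(a1, tau + B)
    return (u, se_enc(K, sigma + m)), K

def gwd_dec(rcpt_sk, sender_pk, c):   # D
    (b, b1), (A, A1) = rcpt_sk, sender_pk
    u, y = c
    Z = kem_decap(b, u)
    K, tau = Z[:KAP], Z[KAP:LAM]
    mhat = se_dec(K, y)
    if mhat is None:
        return None
    sigma, m = mhat[:SIGLEN], mhat[SIGLEN:]
    if not verify(A1, tau + b, sigma):  # b == B in this toy KEM
        return None
    return m

def gwd_sim(rcpt_sk, c, m2):          # R: re-encrypt sigma with message m2
    (b, b1) = rcpt_sk
    u, y = c
    K = kem_decap(b, u)[:KAP]
    mhat = se_dec(K, y)
    if mhat is None:
        return None
    return (u, se_enc(K, mhat[:SIGLEN] + m2))
```

Note how `gwd_sim` mirrors $\Xid{R}{aae-gwd}$: it copies the clue, extracts
the encrypted signature, and re-encrypts it with a replacement message under
the recovered key, so its output decrypts just as a genuine ciphertext does.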

\subsection{Deniability}
\label{sec:gwd.deny}

Examining the encryption algorithm for our scheme reveals that the
authentication and secrecy parts are almost independent. Most importantly,
the signature~$\sigma$ is the only part of the ciphertext that depends on
the sender's private key, and $\sigma$ is independent of the message~$m$.
As we've seen, encrypting the signature prevents outsiders from detaching and
reusing it in forgeries, but the recipient can extract the signature and
replace the message.

This is the source of the scheme's deniability: anyone who knows the
symmetric key~$K$ can extract the signature and replace the encrypted message
with a different one.

More formally, we have the following theorem.
3927 | \begin{theorem} | |
3928 | \label{th:gwd-wdeny} | |
3929 | ||
3930 | The scheme $\Pi = \proto{aae-gwd}(\proto{kem}, \proto{sig}, | |
3931 | \proto{se})$ has perfect deniability: i.e., | |
3932 | \[ \InSec{wdeny}(\Pi; t) = 0 \] | |
3933 | \end{theorem} | |
\begin{proof}
  Simply observe that the clue and encrypted signature are copied unchanged
  from the original input ciphertext in each case, and that the symmetric
  ciphertext in each case is constructed by applying the proper symmetric
  encryption algorithm with the proper symmetric key.
\end{proof}

However, our scheme is not strongly deniable unless signatures are easily
forged. To formalize this claim, we must deal with the syntactic difference
between strongly and weakly deniable encryption schemes.

\begin{theorem}
  \label{th:gwd-not-sdeny}

  Let $\Pi = \proto{aae-gwd}(\proto{kem}, \proto{sig},
  \proto{se}) = (G, E, D, R, S)$ as defined above. Define $\pi_0(c,
  \eta) = c$ and $\bar{E}_x(Y, m) = \pi_0(E_x(Y, m))$ to discard the hint
  $\eta$. For any recipient simulator~$R'$, let $\Pi'(R') = (G, \bar{E}, D,
  R')$; then $\Pi'(R')$ is syntactically a strongly deniable AAE scheme; but
  \[ \InSec{sdeny}(\Pi'(R'); t) \ge
     1 - \InSec{euf-cma}(\proto{sig}; t_{R'} + t', 0) \]
  where $t_{R'}$ is the running time of $R'$ and $t'$ is the time taken to
  decrypt a ciphertext of $\Pi$.
\end{theorem}
\begin{proof}
  It is plain that $\Pi'(R')$ is syntactically correct.

  Recall that a strong-deniability simulator does not require a sample
  ciphertext to work from. Now, consider the judge~$J$ which, given a
  ciphertext and the appropriate keys, recovers the symmetric key using the
  recipient's private key, decrypts the symmetric ciphertext to extract the
  signature, and verifies it against the tag using the sender's public key;
  if the signature verifies OK, it outputs~$1$, otherwise~$0$. If given a
  genuine ciphertext, the judge always outputs $1$. Let $\epsilon$ be the
  probability that the judge outputs $1$ given a simulated ciphertext; then
  $J$'s advantage is $1 - \epsilon$ by definition. The theorem will follow
  from the claim that
  \[ \epsilon \le \InSec{euf-cma}(\proto{sig}; t_{R'} + t', 0) \]
  We prove this by a simple reduction: given a public verification key for
  the signature scheme, generate a KEM key pair, and run the simulator~$R'$
  on the KEM private key, the verification key, and the empty message.
  Decrypt the resulting ciphertext using the KEM private key, recovering the
  signature and tag, and return both as the forgery. The reduction runs in
  the stated time, proving the claim.
\end{proof}

\subsection{Conventional security}
\label{sec:gwd.aae}

Before we embark on the formal security proof, it's worth reflecting on the
intuitive reason that the generic scheme is secure -- in the sense of
providing (outsider) secrecy and authenticity.

Firstly we consider secrecy, which follows directly from the security of the
KEM and the symmetric encryption scheme; initially we restrict our attention
to secrecy under chosen-plaintext attack only. If the KEM is any good then
the key~$Z$ appears random; in particular the first $\kappa$ bits -- i.e.,
the symmetric key~$K$ -- are unknown to the adversary. Since~$K$ is good,
and we assume that the symmetric scheme is good, the ciphertext~$y$
hides~$m$; and since~$y$ is the only part of the overall ciphertext that
depends on~$m$, this is sufficient.

For secrecy under chosen-ciphertext attack we must show that a decryption
oracle doesn't help. A decryption query may share a KEM clue with a given
target ciphertext. If it does then we appeal to symmetric security; if not,
then the KEM security suffices.

Finally we deal with authenticity. For a forgery to be successful, it must
contain a signature which can be verified using the purported sender's public
key; if the signature scheme is good, then this must be a signature actually
made by the purported sender. If the KEM clue on the forgery doesn't match
the clue from the message from which the signature was extracted, then the
tag taken from the forgery will fail to match with high probability. If the
KEM clue does match then the symmetric key must also match, and so the
symmetric scheme's authentication will ensure that the signature and message
are both unaltered -- so the forgery is trivial.

We now present the formal security theorems.

\begin{theorem}[AAE-GWD secrecy]
  \label{th:gwd-secrecy}
  Let $\Pi = \proto{aae-gwd}(\proto{kem}, \proto{sig},
  \proto{se})$ be as defined above. Then
  \begin{spliteqn*}
    \InSec{ind-occa}(\Pi; t, q_E, q_D, q_R) \le \\
    2\,\InSec{ind-cca}(\proto{kem}; t, q_D + q_R) +
    \InSec{ind-cca}(\proto{se}; t, 1 + q_R, q_D + q_R)
  \end{spliteqn*}
\end{theorem}
\begin{theorem}[AAE-GWD authenticity]
  \label{th:gwd-authenticity}
  Let $\Pi = \proto{aae-gwd}(\proto{kem}, \proto{sig},
  \proto{se})$ be as defined above. Then
  \begin{spliteqn*}
    \InSec{uf-ocma}(\Pi; t, q_E, q_D, q_R) \le
      q_E \InSec{ind-cca}(\proto{kem}; t, q_D + q_R) + {} \\
      q_E \InSec{int-ctxt}(\proto{se}; t, 1 + q_R, q_D + q_R) +
      q_E \InSec{ind-cca}(\proto{se}; t, 1 + q_R, 0) + {} \\
      \InSec{euf-cma}(\proto{sig}; t, q_E)
  \end{spliteqn*}
\end{theorem}

\begin{proof}[Proof of \xref{th:gwd-secrecy}]
We use sequences of games over the same underlying probability space, as
described in \cite{Shoup:2002:OR,cryptoeprint:2004:332}.

Let $A$ be any adversary attacking the outsider secrecy of $\Pi$ which runs
within the stated resource bounds. It will therefore suffice to bound
$A$'s advantage.

In game~$\G0$, we toss a coin~$b \inr \{0, 1\}$, and play
$\Game{ind-occa-$b$}{\Pi}(A)$. The adversary's output is $b'$. Let $S_0$
be the event that $b = b'$. We have, by \xref{lem:advantage}, that
\begin{equation}
  \label{eq:gwd-sec-s0}
  \Adv{ind-occa}{\Pi}(A) = 2\Pr[S_0] - 1
\end{equation}
Hence bounding $\Pr[S_0]$ will be sufficient to complete the proof. In
each game~$\G{i}$ that we define, $S_i$ will be the event that $b' = b$ in
that game. As we define new games, we shall bound $\Pr[S_i]$ in terms of
$\Pr[S_{i+1}]$ until eventually we shall be able to determine this
probability explicitly.

Before we start modifying the game, we shall pause to establish some
notation. The \emph{challenge ciphertext} passed to the \cookie{guess}
stage of the adversary is a clue/ciphertext pair $c^* = (u^*, y^*)$, where
$(Z^*, u^*) \gets \mathcal{E}_Y()$, $K^* = Z^*[0 \bitsto \kappa]$, $\tau^*
= Z^*[\kappa \bitsto \lambda]$, $\sigma^* \gets S_{x'}([\tau^*, Y])$, and
$y^* \gets E_{K^*}([\sigma^*] \cat m_b)$.
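
To fix ideas, the data flow of an encryption can be sketched as runnable
Python. The hash-based KEM, the HMAC `signature', and the encrypt-then-MAC
cipher below are stand-ins invented purely so that the sketch executes; they
are not the paper's primitives and have none of the required asymmetric
security properties.

```python
import hashlib
import hmac
import os

KAPPA = 32            # symmetric key length |K| in bytes (illustrative)
SIGLEN = 32           # length of an encoded toy `signature'

# Toy stand-ins for the ingredient schemes; a real instantiation would use
# a genuine KEM, signature scheme, and authenticated symmetric scheme.

def kem_encap(pk):
    """Return (Z, u): 64 bytes of key material and a clue."""
    u = os.urandom(32)
    return hashlib.sha512(pk + u).digest(), u

def kem_decap(pk, u):               # toy only: needs no private key
    return hashlib.sha512(pk + u).digest()

def sign(sk, m):
    return hmac.new(sk, m, hashlib.sha256).digest()

def vrfy(vk, m, s):                 # toy only: vk = sk
    return hmac.compare_digest(sign(vk, m), s)

def se_encrypt(K, m):               # encrypt-then-MAC; K is used only once
    pad = hashlib.shake_256(K).digest(len(m))
    c = bytes(a ^ b for a, b in zip(m, pad))
    return c + hmac.new(K, c, hashlib.sha256).digest()

def se_decrypt(K, y):
    c, t = y[:-32], y[-32:]
    if not hmac.compare_digest(hmac.new(K, c, hashlib.sha256).digest(), t):
        return None                 # authentication failure
    pad = hashlib.shake_256(K).digest(len(c))
    return bytes(a ^ b for a, b in zip(c, pad))

def aae_encrypt(x_sig, Y, m):
    """Sender (signing key x') to recipient (KEM public key Y)."""
    Z, u = kem_encap(Y)
    K, tau = Z[:KAPPA], Z[KAPPA:]   # split Z into K and the tag tau
    sigma = sign(x_sig, tau + Y)    # sign [tau, Y]
    y = se_encrypt(K, sigma + m)    # encrypt [sigma] || m
    return u, y                     # the ciphertext is the pair (u, y)

def aae_decrypt(Y, X_sig, u, y):
    Z = kem_decap(Y, u)
    K, tau = Z[:KAPPA], Z[KAPPA:]
    mhat = se_decrypt(K, y)
    if mhat is None:
        return None
    sigma, m = mhat[:SIGLEN], mhat[SIGLEN:]
    return m if vrfy(X_sig, tau + Y, sigma) else None
```

The point is only the structure: $Z$ splits into $K$ and $\tau$, the
signature covers $[\tau, Y]$, and the encoded signature is prepended to the
message before symmetric encryption.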

Game~$\G1$ works in almost the same way as $\G0$. The difference is that
rather than computing~$Z^*$ using the key-encapsulation algorithm
$\mathcal{E}_Y$, we simply choose it uniformly at random from
$\Bin^\lambda$. Furthermore, the decryption and recipient-simulator
oracles compensate for this by inspecting the input ciphertext $c = (u,
y)$: if and only if $u = u^*$ then decryption proceeds using $Z = Z^*$
rather than using the key-decapsulation algorithm~$\mathcal{D}$.

We claim that
\begin{equation}
  \label{eq:gwd-sec-g1}
  \Pr[S_0] - \Pr[S_1] \le
  \InSec{ind-cca}(\proto{kem}; t, q_D + q_R)
\end{equation}
The proof of the claim is by a simple reduction argument: we define an
adversary~$\hat{A}$ which attacks the KEM. We describe the
adversary~$\hat{A}$ in detail by way of example; future reduction arguments
will be rather briefer.

The adversary receives as input a public key~$Y$ for the KEM, a
$\lambda$-bit string $Z^*$ and a clue~$u^*$. It generates signature key
pairs~$(x', X')$ and $(y', Y')$, and a KEM key pair~$(x, X)$, using the
key-generation algorithms; it also chooses $b \in \{0, 1\}$ uniformly at
random. It runs the \cookie{find} stage of adversary~$A$, giving it $(X,
X')$ and $(Y, Y')$ as input. Eventually, the \cookie{find} stage completes
and outputs $m_0$ and $m_1$ and a state~$s$. Our KEM adversary computes
$\sigma^*$ and $y^*$ in terms of the $Z^*$ it was given and the
message~$m_b$, and runs the \cookie{guess} stage of adversary~$A$,
providing it with the ciphertext~$(u^*, y^*)$ and the state~$s$.
Eventually, $A$ completes, outputting its guess~$b'$. If $b' = b$ then our
KEM adversary outputs~$1$; otherwise it outputs~$0$.

During all of this, we must simulate $A$'s various oracles. The encryption
oracle poses no special difficulty, since we have the signing key~$x'$. On
input $\bigl((Q, Q'), (u, y)\bigr)$, the simulated decryption oracle works
as follows. If $u = u^*$ then set $Z = Z^*$; otherwise retrieve $Z$ by
querying the decapsulation oracle at~$u$, since this is permitted by the
KEM IND-CCA game rules. Except for this slightly fiddly way of
discovering~$Z$, the simulated decryption oracle uses the `proper'
decryption algorithm, verifying the recovered signature $\sigma$ on the
tag~$\tau = Z[\kappa \bitsto \lambda]$ using the public key~$Q'$. The
recipient-simulator oracle works in a similar fashion.

If the KEM adversary is playing the `real'
$\Game{ind-cca-$1$}{\proto{kem}}$ then our KEM adversary simulates
$\G0$ perfectly; hence, the probability that it outputs $1$ is precisely
$\Pr[S_0]$. On the other hand, if it is playing
$\Game{ind-cca-$0$}{\proto{kem}}$ then it simulates~$\G1$, and the
probability that it outputs $1$ is $\Pr[S_1]$. Hence
\[ \Pr[S_0] - \Pr[S_1] = \Adv{ind-cca}{\proto{kem}}(\hat{A})
   \le \InSec{ind-cca}(\proto{kem}; t, q_D + q_R)
\]
as claimed.

Finally, we can bound $\Pr[S_1]$ explicitly in $\G1$:
\begin{equation}
  \label{eq:gwd-sec-s1}
  \Pr[S_1] \le
  \frac{1}{2} \InSec{ind-cca}(\proto{se}; t, 1 + q_R, q_D + q_R) +
  \frac{1}{2}
\end{equation}
This follows by a simple reduction to the chosen-ciphertext secrecy of
$\proto{se}$: $K^*$ will be known to the IND-CCA game, but not directly to
our adversary. The adversary generates all of the long-term asymmetric
keys and runs $A$'s \cookie{find} stage; it collects the two plaintext
messages and encrypts them, making use of the left-or-right $\proto{se}$
encryption oracle to find $y^* = E_{K^*}([\sigma^*] \cat \id{lr}(m_0,
m_1))$. The challenge ciphertext is passed to $A$'s \cookie{guess} stage,
which will eventually output a guess~$b'$; our adversary outputs this
guess.

The decryption oracle is simulated as follows. Let $(u, y)$ be the
chosen-ciphertext query. If $u \ne u^*$ then we decrypt $y$ by recovering
$K \cat \tau = \mathcal{D}_y(u)$ as usual; otherwise we must have $y \ne
y^*$, so $y$ is a legitimate query to the symmetric decryption oracle: we
recover $\hat{m} = D_{K^*}(y)$ and continue from there.

The recipient-simulator oracle is simulated as follows. Let $(Z, c, m)$ be
the simulation query, where $c = (u, y)$. If $u \ne u^*$ then we use the
KEM as usual. If $y \ne y^*$ then we can recover the signature using our
symmetric decryption oracle; otherwise, we already know that the signature
is $\sigma^*$. In either case, we construct the new plaintext and invoke
the symmetric encryption oracle to do the encryption, using the same
message as both the left and right inputs to the oracle.

This is a valid adversary, and runs in the stated resource bounds; we apply
\xref{lem:advantage} and the claim follows.

We can bound the advantage of adversary~$A$ by combining
equations~\ref{eq:gwd-sec-s0}--\ref{eq:gwd-sec-s1}:
\begin{eqnarray*}[rl]
  \Adv{ind-occa}{\Pi}(A)
  & = 2\Pr[S_0] - 1 \\
  & \le 2 \, \InSec{ind-cca}(\proto{kem}; t, q_D + q_R) +
        \InSec{ind-cca}(\proto{se}; t, 1 + q_R, q_D + q_R)
\end{eqnarray*}
completing the proof.
\end{proof}

\begin{proof}[Proof of \xref{th:gwd-authenticity}]
We use a sequence of games again.

Let $A$ be any adversary attacking the outsider authenticity of $\Pi$ and
running within the specified resource bounds. It will suffice to bound
$A$'s advantage.

Game~$\G0$ is precisely $\Game{uf-ocma}{\Pi}(A)$. We let $S_0$ be the
event that $A$ outputs a valid forgery, i.e.,
\begin{equation}
  \label{eq:gwd-auth-s0}
  \Adv{uf-ocma}{\Pi}(A) = \Pr[S_0]
\end{equation}
As before, in each subsequent game~$\G{i}$ we let $S_i$ be the
corresponding event. The two key pairs will be $((x, x'), (X, X'))$ and
$((y, y'), (Y, Y'))$. For each $0 \le i < q_E$, we define $m_i$ to be the
message in $A$'s $i$th encryption-oracle query and $(Q_i, Q'_i)$ as the
corresponding public key; and we set $(Z_i, u_i) \gets \mathcal{E}_{Q_i}()$
to be the KEM output while responding to that query, with $K_i \cat \tau_i
= Z_i$, $\sigma_i = S_{x'}([\tau_i, Q_i])$, and $y_i \gets
E_{K_i}([\sigma_i] \cat m_i)$. For each $0 \le j < q_D$, let $(Q^*_j,
u^*_j, y^*_j)$ be the adversary's $j$th decryption query, and define $Z^*_j
= \mathcal{D}_y(u^*_j)$, $K^*_j = Z^*_j[0 \bitsto \kappa]$, $\tau^*_j =
Z^*_j[\kappa \bitsto \lambda]$, and $[\sigma^*_j] \cat m^*_j = \hat{m}^*_j
= D_{K^*_j}(y^*_j)$, insofar as such quantities are well-defined. For each
$0 \le j < q_R$, let $(\hat{Q}_j, \hat{u}_j, \hat{y}_j, \hat{m}_j)$ be the
adversary's $j$th recipient-simulation query, and define $\hat{Z}_j$,
etc.\@ accordingly; let $\hat{y}'_j = E_{\hat{K}_j}([\hat{\sigma}_j] \cat
\hat{m}_j)$ be the symmetric ciphertext in the simulator oracle's output.

Game~$\G1$ is the same as~$\G0$, except that the encryption oracle
generates random keys $Z_i \inr \Bin^\lambda$ if $Q_i = Y$ rather than
using the keys output by the key encapsulation algorithm. The clue~$u_i$
returned is valid; but the key $K_i$ used for the symmetric encryption, and
the tag~$\tau_i$ signed by the signature algorithm, are random and
independent of the clue. We also modify the decryption oracle: if $Q^*_j =
X$, and $u^*_j = u_i$ matches a clue returned by the encryption oracle with
$Q_i = Y$, then the decryption oracle sets $Z^*_j = Z_i$ rather than using
the decapsulation algorithm. We claim that
\begin{equation}
  \label{eq:gwd-auth-g1}
  \Pr[S_0] - \Pr[S_1] \le
  q_E \InSec{ind-cca}(\proto{kem}; t, q_D + q_R)
\end{equation}
For this we use a hybrid argument: for $0 \le i \le q_E$ we define
game~$\G[H]{i}$ to use random keys to generate $Z_k$ for $0 \le k < i$ and
to use the `proper' encapsulation algorithm for the remaining $q_E - i$
queries; let $T_i$ be the event that the adversary returns a valid forgery
in~$\G[H]{i}$. Then
\[ \G[H]{0} \equiv \G0 \qquad \textrm{and} \qquad \G[H]{q_E} \equiv \G1 \]
but a simple reduction argument shows that, for $0 \le i < q_E$,
\[ \Pr[T_i] - \Pr[T_{i+1}] \le
   \InSec{ind-cca}(\proto{kem}; t, q_D + q_R)
\]
We construct an adversary~$\hat{A}$ attacking the KEM's secrecy by
simulating one of a pair of hybrid games. Let $\hat{A}$'s input be
$\hat{Z}, \hat{u}$. The KEM adversary proceeds by answering the first~$i$
encryption queries using random keys, using $Z_i = \hat{Z}$ for query~$i$,
and the key encapsulation algorithm for the remaining $q_E - i - 1$
queries. A decryption query can be answered by using the key decapsulation
oracle if $u^*_j \ne u_k$ for all $u_k$ queried so far, or by setting
$Z^*_j = Z_k$ directly otherwise; clearly if $k = i$ then $Z^*_j = Z_i =
\hat{Z}$; a recipient-simulator query is answered similarly. If $\hat{Z}$
is a real KEM output then we have simulated $\G[H]{i}$; otherwise we have
simulated~$\G[H]{i+1}$. The claim follows since
\begin{eqnarray*}[rl]
  \Pr[S_0] - \Pr[S_1]
  & = \Pr[T_0] - \Pr[T_{q_E}] \\
  & = \sum_{0\le i<q_E} (\Pr[T_i] - \Pr[T_{i+1}]) \\
  & \le q_E \InSec{ind-cca}(\proto{kem}; t, q_D + q_R)
\end{eqnarray*}

For all $0 \le i < q_E$, let $\mathcal{Y}_i$ be the set of symmetric
ciphertexts output by the encryption oracle or the recipient-simulator
oracle with sender public key $X$ and clue $u_i$; i.e.,
\[ \mathcal{Y}_i =
   \{ y_i \} \cup \{\, \hat{y}'_j \mid
   \hat{Q}_j = X \land \hat{u}_j = u_i \,\}
\]
Game~$\G2$ is the same as $\G1$, except that
\begin{itemize}
\item the decryption oracle returns $\bot$ whenever a query is made with
  $u^*_j = u_i$ and $y^*_j \notin \mathcal{Y}_i$, where $u_i$ is any clue
  returned by the encryption oracle so far; and
\item the recipient-simulator oracle returns $\bot$ if $\hat{u}_j = u_i$
  and $\hat{y}_j \notin \mathcal{Y}_i$.
\end{itemize}
Let $F_2$ be the event that a query of this form is rejected in $\G2$,
when it would be accepted in $\G1$. By \xref{lem:shoup} we have
\begin{equation}
  \label{eq:gwd-auth-s2}
  \Pr[S_1] - \Pr[S_2] \le \Pr[F_2]
\end{equation}
We claim that
\begin{equation}
  \label{eq:gwd-auth-f2}
  \Pr[F_2] \le q_E \InSec{int-ctxt}(\proto{se}; t, 1 + q_R, q_D + q_R)
\end{equation}
Again the proof of this claim proceeds by a hybrid argument: we introduce
hybrid games $\G[H]{i}'$ for $0 \le i \le q_E$ in which symmetric
ciphertexts not in $\mathcal{Y}_k$ are rejected, for $0 \le k < i$; so
\[ \G[H]0' \equiv \G1 \qquad \textrm{and} \qquad \G[H]{q_E}' \equiv \G2 \]
Let $0 \le i < q_E$. Let $T'_i$ be the event that a ciphertext is rejected
in $\G[H]{i+1}'$ which was not rejected in $\G[H]i'$. If this occurs, then
we must have
\[ u^*_j = u_i \textrm{,} \qquad
   y^*_j \notin \mathcal{Y}_i \textrm{,} \qquad \textrm{and} \qquad
   D_{K_i}(y^*_j) \ne \bot
\]
or
\[ \hat{u}_j = u_i \textrm{,} \qquad
   \hat{y}_j \notin \mathcal{Y}_i \textrm{,} \qquad \textrm{and} \qquad
   D_{K_i}(\hat{y}_j) \ne \bot
\]
Since $K_i$ is random in all of these games, a simple reduction shows that
\[ \Pr[T'_i] \le
   \InSec{int-ctxt}(\proto{se}; t, 1 + q_R, q_D + q_R)
\]
The reduction uses the INT-CTXT encryption oracle to perform the $i$th
encryption query, its decryption oracle to respond to any decryption query
with $u^*_j = u_i$, and both the decryption and encryption oracles to
respond to recipient-simulator queries with $\hat{u}_j = u_i$.

If $F_2$ occurs then the offending ciphertext $y^*_j \notin \mathcal{Y}_i$
(or $\hat{y}_j \notin \mathcal{Y}_i$) is an INT-CTXT forgery. Since the
$T'_i$ form a partition of $F_2$,
\[ \Pr[F_2] = \sum_{0 \le i < q_E} \Pr[T'_i] \le
   q_E \InSec{int-ctxt}(\proto{se}; t, 1 + q_R, q_D + q_R) \]
and the claim is proven.

Game~$\G3$ is the same as~$\G2$, except that the encryption oracle no
longer includes a valid signature in some ciphertexts, as follows. Let $n
= \len([\sigma])$ be the length of an encoded signature. If $Q_i = Y$,
then we set $y_i = E_{K_i}(0^n \cat m_i)$ when encrypting message~$m_i$.
Other encryption queries are not affected. We claim that
\begin{equation}
  \label{eq:gwd-auth-g3}
  \Pr[S_2] - \Pr[S_3] \le q_E \InSec{ind-cca}(\proto{se}; t, 1 + q_R, 0)
\end{equation}
Again we use a hybrid argument: we introduce hybrid games $\G[H]{i}''$ for
$0 \le i \le q_E$ in which the first $i$ encryption queries are performed
as in $\G3$ and the remaining $q_E - i$ queries are performed as in $\G2$.
Hence we have
\[ \G[H]0'' \equiv \G2 \qquad \textrm{and} \qquad
   \G[H]{q_E}'' \equiv \G3
\]
Let $T''_i$ be the event that the adversary outputs a good forgery in
$\G[H]{i}''$. A reduction to symmetric IND-CCA security shows that
\[ \Pr[T''_i] - \Pr[T''_{i+1}] \le
   \InSec{ind-cca}(\proto{se}; t, 1 + q_R, 0)
\]
The reduction works by querying the IND-CCA encryption oracle for the $i$th
query, using the pair $0^n \cat m_i$ and $[\sigma_i] \cat m_i$; other
queries are encrypted by generating $Z_i \inr \Bin^\lambda$ at random, as
before. We shall not require the decryption oracle: for decryption
queries, if $u^*_j = u_i$ then either $y^*_j \in \mathcal{Y}_i$, in which
case we set $m^*_j$ to the corresponding message (either $m_i$ or some
$\hat{m}_k$), or $y^*_j \notin \mathcal{Y}_i$, in which case we immediately
return $\bot$; for recipient-simulator queries, if $\hat{u}_j = u_i$, we
return $\bot$ if $\hat{y}_j \notin \mathcal{Y}_i$, and compute $\hat{y}'_j$
by encrypting $0^n \cat \hat{m}_j$ otherwise. Other decryption and
recipient-simulator queries are answered using the KEM private key as
usual. The claim follows because
\begin{eqnarray*}[rl]
  \Pr[S_2] - \Pr[S_3]
  & = \Pr[T''_0] - \Pr[T''_{q_E}] \\
  & = \sum_{0\le i<q_E} (\Pr[T''_i] - \Pr[T''_{i+1}]) \\
  & \le q_E \InSec{ind-cca}(\proto{se}; t, 1 + q_R, 0)
\end{eqnarray*}

Observe that in $\G3$ we never generate a signature on a message of the
form $[\tau, Y]$. In order to generate a forgery in this game, though, the
adversary must construct such a signature. We can therefore bound
$\Pr[S_3]$ using a reduction to the authenticity of $\proto{sig}$:
\begin{equation}
  \label{eq:gwd-auth-s3}
  \Pr[S_3] \le \InSec{euf-cma}(\proto{sig}; t, q_E)
\end{equation}
The reduction uses the signing oracle in order to implement $A$'s
encryption oracle. Because no signing queries are on messages of the form
$[\tau, Y]$, and a successful forgery for $\Pi$ must contain a signature
$\sigma^*_j$ for which $V_{X'}([\tau^*_j, Y])$ returns $1$, our reduction
succeeds whenever $A$ succeeds in $\G3$. The claim follows.

We can bound the advantage of adversary~$A$ by combining
equations~\ref{eq:gwd-auth-s0}--\ref{eq:gwd-auth-s3}:
\begin{eqnarray*}[rLl]
  \Adv{uf-ocma}{\Pi}(A)
  & = \Pr[S_0] \\
  & \le q_E \InSec{ind-cca}(\proto{kem}; t, q_D + q_R) + {} \\
  & & q_E \InSec{int-ctxt}(\proto{se}; t, 1 + q_R, q_D + q_R) + {} \\
  & & q_E \InSec{ind-cca}(\proto{se}; t, 1 + q_R, 0) + {} \\
  & & \InSec{euf-cma}(\proto{sig}; t, q_E)
\end{eqnarray*}
and the theorem follows.
\end{proof}

\subsection{Variants}
\label{sec:gwd.variant}

\subsubsection{Exposing the signature}
The authenticity of our scheme requires that the signature~$\sigma$ be
encrypted. Why is this? What are the consequences if it isn't encrypted?

For secrecy, it's sufficient that the signature is covered by the symmetric
encryption's authenticity. A scheme for authenticated encryption with
additional data (AEAD) \cite{Rogaway:2002:AEA} would suffice, placing the
signature in the AEAD scheme's header input. (The reader may verify that
the proof of \xref{th:gwd-secrecy} still goes through with minimal
changes.)

This is not the case for authenticity.\footnote{%
  We therefore find ourselves in the rather surprising position of
  requiring authenticity of the signature for secrecy, and secrecy of the
  signature for authenticity!} %
The problem is that it might be possible, through some backdoor in the
decapsulation function, to `direct' a KEM to produce a clue which
encapsulates a specified string. The security definition can't help us: it
deals only with honest users of the public key. Furthermore, some
signature schemes fail to hide the signed message -- indeed, signature
schemes `with message recovery' exist where this is an explicit feature.

Suppose, then, that we have such a `directable' KEM and a non-hiding
signature scheme: we'd attack our deniable AAE scheme as follows. Request
an encryption of some arbitrary message~$m$ for the targeted recipient,
extract the tag from the signature, generate a random symmetric key, and
construct a KEM clue which encapsulates the symmetric key and tag. Now we
send the clue, copy the signature from the legitimate ciphertext, and
encrypt a different message, using the signature as the additional data.

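Under these deliberately artificial assumptions, the attack can even be run
end to end. In the Python toy below, `directability' is taken to the
extreme of letting the clue simply be the key material, and the tag is read
straight off the clue, standing in for its recovery from a non-hiding
signature; none of this models any real KEM or signature scheme.

```python
import hashlib
import hmac
import os

KAPPA = 32

def sign(sk, m):
    return hmac.new(sk, m, hashlib.sha256).digest()

def vrfy(vk, m, s):                      # toy only: vk = sk
    return hmac.compare_digest(sign(vk, m), s)

# A maximally `directable' toy KEM: the clue IS the key material, so anyone
# can construct a clue encapsulating any chosen string.
def dir_encap(pk):
    Z = os.urandom(KAPPA + 16)
    return Z, Z

def dir_decap(sk, u):
    return u

def aead_seal(K, hdr, m):                # toy AEAD: the MAC covers the header
    pad = hashlib.shake_256(K).digest(len(m))
    c = bytes(a ^ b for a, b in zip(m, pad))
    return c + hmac.new(K, hdr + c, hashlib.sha256).digest()

def aead_open(K, hdr, y):
    c, t = y[:-32], y[-32:]
    if not hmac.compare_digest(hmac.new(K, hdr + c, hashlib.sha256).digest(), t):
        return None
    pad = hashlib.shake_256(K).digest(len(c))
    return bytes(a ^ b for a, b in zip(c, pad))

def encrypt_exposed(x_sig, Y, m):        # the exposed-signature variant
    Z, u = dir_encap(Y)
    K, tau = Z[:KAPPA], Z[KAPPA:]
    sigma = sign(x_sig, tau + Y)
    return u, sigma, aead_seal(K, sigma, m)

def decrypt_exposed(Y, X_sig, u, sigma, y):
    Z = dir_decap(Y, u)
    K, tau = Z[:KAPPA], Z[KAPPA:]
    if not vrfy(X_sig, tau + Y, sigma):
        return None
    return aead_open(K, sigma, y)
```

The forged ciphertext reuses $\sigma$ verbatim yet encrypts a message the
purported sender never saw, so exposing the signature forfeits outsider
authenticity.
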
We note that practical KEMs seem not to be `directable' in this manner.
KEMs which apply a cryptographic hash function to some value are obviously
hard to `direct'. KEMs based on Diffie--Hellman are likely to be
undirectable anyway assuming the difficulty of extracting discrete
logarithms -- which is of course at least as hard as computational
Diffie--Hellman.

\subsubsection{Non-leaky encryption}
We briefly sketch a variant of our weakly deniable scheme which doesn't
need to leak the symmetric key -- and therefore the sender doesn't need to
remember the symmetric key for every message he sends. We use Krawczyk and
Rabin's trick \cite{cryptoeprint:1998:010}.

We include, in each participant's private key, a key~$R$ for the symmetric
encryption scheme. When encrypting a message, a sender computes an
additional ciphertext $d = E_R(K)$, and includes $d$ in the computation of
the signature $\sigma = S_{x'}([Y, \tau, d])$ and in the combined
ciphertext. This doesn't affect authenticity, but secrecy degrades by
$\InSec{euf-cma}(\proto{sig}; t, q_E)$, since we must appeal to
unforgeability of the signatures in order to demonstrate that a decryption
query with a reused clue will be rejected unless the rest of the ciphertext
is also reused.

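A minimal Python sketch of this variant follows, again with invented
hash-based stand-ins for the primitives (and a deliberately simplistic,
deterministic escrow encryption: a real scheme would randomize $E_R$).

```python
import hashlib
import hmac
import os

KAPPA = 32

def kem_encap(pk):                       # toy hash-based KEM stand-in
    u = os.urandom(32)
    return hashlib.sha512(pk + u).digest(), u

def sign(sk, m):                         # toy `signature' stand-in
    return hmac.new(sk, m, hashlib.sha256).digest()

def se_encrypt(K, m):                    # toy encrypt-then-MAC stand-in
    pad = hashlib.shake_256(K).digest(len(m))
    c = bytes(a ^ b for a, b in zip(m, pad))
    return c + hmac.new(K, c, hashlib.sha256).digest()

def se_decrypt(K, y):
    c, t = y[:-32], y[-32:]
    if not hmac.compare_digest(hmac.new(K, c, hashlib.sha256).digest(), t):
        return None
    pad = hashlib.shake_256(K).digest(len(c))
    return bytes(a ^ b for a, b in zip(c, pad))

def encrypt_nonleaky(x_sig, R, Y, m):
    """R is the long-term symmetric key from the sender's private key."""
    Z, u = kem_encap(Y)
    K, tau = Z[:KAPPA], Z[KAPPA:]
    d = se_encrypt(R, K)                 # escrow K to ourselves
    sigma = sign(x_sig, Y + tau + d)     # sign [Y, tau, d]
    y = se_encrypt(K, sigma + m)
    return u, d, y

def recover_key(R, d):
    """Later, the sender can recover K from d alone: nothing to remember."""
    return se_decrypt(R, d)
```

The sender can thus reconstruct $K$ on demand from the ciphertext component
$d$, instead of storing the symmetric key for every message sent.
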
%%%--------------------------------------------------------------------------
\section{Open problems}
\label{sec:open}

A number of problems remain outstanding.
\begin{itemize}
\item Can we formalize the requirement that a KEM be `non-directable' and
  construct efficient non-directable KEMs? Is this sufficient to show that
  only authenticity of signatures is required in the generic scheme of
  \xref{sec:gwd}? (This would obviate the constant-length requirement on
  the signature scheme.)
\item The NAIAD of \xref{sec:naiad.dh} requires four multiplications in the
  group. These can be precomputed once per correspondent, but it still
  seems inefficient: can we construct a faster NAIAD?
\item Can we construct a NAIAD with security in the standard model?
\item It might be that the non-interactive proof systems of Groth and Sahai
  \cite{Groth:2008:ENP,cryptoeprint:2007:155} can be used to make efficient
  magic tokens. Is this really the case? Can we achieve a similarly (or
  more) efficient scheme without requiring a pairing and/or with weaker
  intractability assumptions?
\end{itemize}

%%%--------------------------------------------------------------------------
\section{Acknowledgements}
\label{sec:ack}

I'm very grateful to Christine Swart for encouraging me to actually write
this up properly. Thanks also to Matthew Kreeger for his helpful comments.

\bibliography{mdw-fixes,mdw-crypto,cryptography,cryptography2000,eprint,jcryptology,lncs1991,lncs1997b,lncs2001d,lncs2002b,rfc}

\end{document}

%%% Local variables:
%%% mode: LaTeX
%%% TeX-PDF-mode: t
%%% hs-minor-mode: nil
%%% outline-minor-mode: t
%%% End: