%%% -*-latex-*-
%%%
%%% The Wrestlers Protocol: secure, deniable key-exchange
%%%
%%% (c) 2006 Mark Wooding
%%%

\newif\iffancystyle\fancystylefalse
\newif\ifshort\shortfalse
\fancystyletrue
\InputIfFileExists{wr.cfg}\relax\relax
\errorcontextlines=\maxdimen
\showboxdepth=\maxdimen
\showboxbreadth=\maxdimen

\iffancystyle
\documentclass
[a4paper, article, 10pt, numbering, noherefloats, notitlepage,
runinsubsubsec]
{strayman}
\usepackage[T1]{fontenc}
\usepackage[palatino, helvetica, courier, maths=cmr]{mdwfonts}
\usepackage[within = subsection, mdwmargin]{mdwthm}
\usepackage{mdwlist}
\usepackage{sverb}
\PassOptionsToPackage{dvips}{xy}
\else
\documentclass[a4paper]{llncs}
\usepackage{a4wide}
\fi

\PassOptionsToPackage{show}{slowbox}
%\PassOptionsToPackage{hide}{slowbox}
\usepackage{mdwtab, mathenv, mdwmath, crypto}
\usepackage{slowbox}
\usepackage{amssymb, amstext}
\usepackage{url, multicol}
\usepackage{tabularx}
\DeclareUrlCommand\email{\urlstyle{tt}}
\ifslowboxshow
\usepackage[all]{xy}
\turnradius{4pt}
\fi

\iffancystyle
\def\next{\title[The Wrestlers Protocol]}
\else
\def\next{\title}
\fi
\next
{The Wrestlers Protocol \\
A simple, practical, secure, deniable protocol for key-exchange}
\iffancystyle
\author{Mark Wooding \\ \email{mdw@distorted.org.uk}}
\else
\author{Mark Wooding}
\institute{\email{mdw@distorted.org.uk}}
\fi

\iffancystyle
\bibliographystyle{mdwalpha}
\let\epsilon\varepsilon
\let\emptyset\varnothing
\let\le\leqslant\let\leq\le
\let\ge\geqslant\let\geq\ge
\numberwithin{table}{section}
\numberwithin{figure}{section}
\numberwithin{equation}{subsection}
\let\random\$
\else
\bibliographystyle{plain}
\expandafter\let\csname claim*\endcsname\claim
\expandafter\let\csname endclaim*\endcsname\endclaim
\fi

\iffancystyle
\newcommand{\Nupto}[1]{\N_{<{#1}}}
\else
\newcommand{\Nupto}[1]{\{0, 1, \ldots, #1 - 1\}}
\fi
\let\Bin\Sigma
\let\emptystring\lambda
\edef\Pr{\expandafter\noexpand\Pr\nolimits}
\newcommand{\bitsto}{\mathbin{..}}
\newcommand{\E}{{\mathcal{E}}}
\newcommand{\M}{{\mathcal{M}}}
\iffancystyle
\def\description{%
\basedescript{%
\let\makelabel\textit%
\desclabelstyle\multilinelabel%
\desclabelwidth{1in}%
}%
}
\fi
\def\fixme#1{\marginpar{FIXME}[FIXME: #1]}
\def\hex#1{\texttt{#1}_{x}}

\def\dbox#1{%
\vtop{%
\def\\{\unskip\egroup\hbox\bgroup\strut\ignorespaces}%
\hbox{\strut#1}%
}%
}

\def\Wident{\Xid{W}{ident}}
\def\Wkx{\Xid{W}{kx}}
\def\Wdkx{\Xid{W}{dkx}}
\def\Func#1#2{\mathcal{F}[#1\to#2]}
\def\diff#1#2{\Delta_{#1, #2}}
\def\Hid{H_{\textit{ID}}}

%% protocol run diagrams
\def\PRaction#1{#1\ar[r]}
\def\PRcreatex#1{\PRaction{\textsf{Create session\space}#1}}
\def\PRcreate#1#2#3{\PRcreatex{(\text{#1},\text{#2},#3)}}
\def\PRreceivex#1{\PRaction{\textsf{Receive\space}#1}}
\def\PRreceive#1#2{\PRreceivex{\msg{#1}{#2}}}
\def\PRsession#1{\relax\mathstrut#1\ar[r]}
\def\msg#1#2{(\cookie{#1},#2)}
\def\PRresult#1{#1}
\def\PRsendx#1{\PRresult{\textsf{Send\space}#1}}
\def\PRsend#1#2{\PRsendx{\msg{#1}{#2}}}
\def\PRcomplete#1{\textsf{Complete:\space}#1}
\def\PRstate#1{\textsf{State:\space}#1}
\def\PRignore{\textsf{(ignored)}}
\def\PRreveal{\textsf{Session-state reveal}\ar[r]}
\def\protocolrun#1{\[\xymatrix @R=0pt @C=2em {#1}\]}

%% class-ids for proof of extractor lemma
\let\Cid=\Lambda
\let\Csession=S
\let\Cindex=r
\let\Cquery=Q
\let\Chash=H
\let\Ccheck=c
\let\Ccheckset=V
\let\Ccount=\nu

\def\HG#1{\mathbf{H}_{#1}}
\begin{document}

%%%--------------------------------------------------------------------------

\maketitle

\begin{abstract}
We describe and prove (in the random-oracle model) the security of a simple
but efficient zero-knowledge identification scheme, whose security is based
on the computational Diffie-Hellman problem. Unlike other recent proposals
for efficient identification protocols, we don't need any additional
assumptions, such as the Knowledge of Exponent assumption.

From this beginning, we build a simple key-exchange protocol, and prove
that it achieves `SK-security' -- and hence security in Canetti's Universal
Composability framework.

Finally, we show how to turn the simple key-exchange protocol into a
slightly more complex one which provides a number of valuable `real-life'
properties, without damaging its security.
\end{abstract}

\iffancystyle
\newpage
\columnsep=2em \columnseprule=0pt
\tableofcontents[\begin{multicols}{2}\raggedright][\end{multicols}]
%%\listoffigures[\begin{multicols}{2}\raggedright][\end{multicols}]
%% \listoftables[\begin{multicols}{2}\raggedright][\end{multicols}]
\newpage
\fi

%%%--------------------------------------------------------------------------

\section{Introduction}


This paper proposes protocols for \emph{identification} and
\emph{authenticated key-exchange}.

An identification protocol allows one party, say Bob, to be sure that he's
really talking to another party, say Alice. It assumes that Bob has some way
of recognising Alice; for instance, he might know her public key. Our
protocol requires only two messages -- a challenge and a response -- and has
a number of useful properties. It is very similar to, though designed
independently of, a recent protocol by Stinson and Wu; we discuss their
protocol in section~\ref{sec:stinson-ident}.

Identification protocols are typically less useful than they sound. As Shoup
\cite{Shoup:1999:OFM} points out, such a protocol provides a `secure ping',
by which Bob can know that Alice is `there', but gives no guarantee that any
other communication was sent to or reached her. However, there are
situations -- e.g., between a device and a smartcard -- where such a `secure
ping' is all that is required, and a simple identification protocol can
still be useful.

An authenticated key-exchange protocol lets Alice and Bob agree on a shared
secret, known to them alone, even if there is an enemy who can read and
intercept all of their messages, and substitute messages of her own. Once
they have agreed on their shared secret, of course, they can use standard
symmetric cryptography techniques to ensure the privacy and authenticity of
their messages.


\subsection{Desirable properties of our protocols}

Our identification protocol has a number of desirable properties.
\begin{itemize}
\item It is \emph{simple} to understand and implement. In particular, it
requires only two messages.
\item It is fairly \emph{efficient}, requiring two scalar multiplications by
each of the prover and verifier.
\item It is provably \emph{secure} (in the random oracle model), assuming the
intractability of the computational Diffie-Hellman problem.
\end{itemize}

Our key-exchange protocol also has a number of desirable
properties.\footnote{%
We don't claim resistance to `key-compromise impersonation' (KCI) attacks.
To perform a KCI attack, an adversary learns a party's -- say Alice's --
secret and somehow uses this to impersonate Bob to Alice. Our model
doesn't consider this to be a legitimate attack. The purpose of a
key-exchange protocol is to provide a shared secret for some other,
high-level protocol between the participants. If one of the parties is
already corrupted -- i.e., the adversary could be impersonating her -- then
it is impossible for us to make any meaningful security statement about the
high-level protocol anyway, so it seems unreasonable to expect the
key-exchange protocol to cope.}
\begin{itemize}
\item It is fairly \emph{simple} to understand and implement, though there
are a few subtleties. In particular, it is \emph{symmetrical}. We have
implemented a virtual private network system based on this protocol.
\item It is \emph{efficient}, requiring four scalar multiplications by each
participant. The communication can be reduced to three messages by
breaking the protocol's symmetry.
\item It is provably \emph{secure} (again, in the random oracle model),
assuming the intractability of the computational Diffie-Hellman problem,
and the security of a symmetric encryption scheme.
\item It provides \emph{perfect forward secrecy}. That is, even if a user's
long-term secrets are compromised, past sessions remain secure.
\item It is \emph{deniable}. It is possible to construct simulated
transcripts of protocol executions between any number of parties without
knowing any of their private keys. The simulated transcripts are (almost)
indistinguishable from real protocol transcripts. Hence, a transcript
does not provide useful evidence that a given party was really involved in
a given protocol execution.
\end{itemize}

\subsection{Asymptotic and concrete security results}

Most security definitions for identification (particularly zero-knowledge)
and key-exchange in the literature are \emph{asymptotic}. That is, they
consider a family of related protocols, indexed by a \emph{security
parameter}~$k$; they require that any \emph{polynomially-bounded} adversary
has only \emph{negligible} advantage. A polynomially-bounded adversary is
one whose running time is bounded by some polynomial~$t(k)$. The security
definition requires that, for any such polynomially-bounded adversary, and
any polynomial $p(k)$, the adversary's advantage is less than $1/p(k)$ for
all sufficiently large values of $k$.

Such asymptotic notions are theoretically interesting, and have obvious
connections to complexity theory. Unfortunately, such an asymptotic result
tells us nothing at all about the security of a particular instance of a
protocol, or what parameter sizes one needs to choose for a given level of
security against a particular kind of adversary. Koblitz and Menezes
\cite{Koblitz:2006:ALP} (among other useful observations) give examples of
protocols, proven to meet asymptotic notions of security, whose security
proofs guarantee nothing at all for the kinds of parameters typically used in
practice.

Since, above all, we're interested in analysing a practical and implemented
protocol, we follow here the `practice-oriented provable security' approach
largely inspired by Bellare and Rogaway, and exemplified by
\cite{Bellare:1994:SCB,Bellare:1995:XMN,Bellare:1995:OAE,Bellare:1996:ESD,%
Bellare:1996:KHF,Bellare:2000:CST}; see also \cite{Bellare:1999:POP}.
Rather than attempting to say, formally, whether or not a protocol is
`secure', we associate with each protocol an `insecurity function' which
gives an upper bound on the advantage of any adversary attacking the protocol
within given resource bounds.


\subsection{Formal models for key-exchange}

Many proposed key-exchange protocols have turned out to have subtle security
flaws. The idea of using formal methods to analyse key-exchange protocols
begins with the logic of Burrows, Abadi and Needham \cite{Burrows:1989:LA}.
Their approach requires a `formalising' step, in which one expresses in the
logic the contents of the message flows, and the \emph{beliefs} of the
participants.

Bellare and Rogaway \cite{Bellare:1994:EAK} describe a model for studying the
computational security of authentication and key-exchange protocols in a
concurrent setting, i.e., where multiple parties are running several
instances of a protocol simultaneously. They define a notion of security in
this setting, and show that several simple protocols achieve this notion.
Their original paper dealt with pairs of parties using symmetric
cryptography; they extended their definitions in \cite{Bellare:1995:PSS} to
study three-party protocols involving a trusted key-distribution centre.

Blake-Wilson, Johnson and Menezes \cite{Blake-Wilson:1997:KAP} applied the
model of \cite{Bellare:1994:EAK} to key-exchange protocols using asymmetric
cryptography, and Blake-Wilson and Menezes \cite{Blake-Wilson:1998:EAA}
applied it to protocols based on the Diffie-Hellman protocol.

The security notion of \cite{Bellare:1994:EAK} is based on a \emph{game}, in
which an adversary nominates a \emph{challenge session}, and is given either
the key agreed by the participants of the challenge session, or a random
value independently sampled from an appropriate distribution. The
adversary's advantage -- and hence the insecurity of the protocol -- is
measured by its success probability in guessing whether the value it was
given is really the challenge key. This challenge-session notion was also
used by the subsequent papers described above.

Bellare, Canetti and Krawczyk \cite{Bellare:1998:MAD} described a pair of
models which they called the \textsc{am} (for `authenticated links model')
and \textsc{um} (`unauthenticated links model'). They propose a modular
approach to the design of key-exchange protocols, whereby one first designs a
protocol and proves its security in the \textsc{am}, and then applies an
authenticating `compiler' to the protocol which they prove yields a protocol
secure in the realistic \textsc{um}. Their security notion is new. They
define an `ideal model', in which an adversary is limited to assigning
sessions fresh, random and unknown keys, or matching up one session with
another, so that both have the same key. They define a protocol to be secure
if, for any adversary~$A$ in the \textsc{am} or \textsc{um}, there is an
ideal adversary~$I$, such that the outputs of $A$ and $I$ are computationally
indistinguishable.

In \cite{Shoup:1999:OFM}, Shoup presents a new model for key-exchange, also
based on the idea of simulation. He analyses the previous models,
particularly \cite{Bellare:1994:EAK} and \cite{Bellare:1998:MAD}, and
highlights some of their inadequacies.

Canetti and Krawczyk \cite{Canetti:2001:AKE} describe a new notion of
security in the basic model of \cite{Bellare:1998:MAD}, based on the
challenge-session notion of \cite{Bellare:1994:EAK}. The security notion,
called `SK-security', seems weaker in various ways than those of earlier
works such as \cite{Bellare:1994:EAK} or \cite{Shoup:1999:OFM}. However, the
authors show that their notion suffices for constructing `secure channel'
protocols, which they also define.

In \cite{Canetti:2001:UCP}, Canetti describes the `universal composition'
framework. Here, security notions are simulation-based: one defines security
notions by presenting an `ideal functionality'. A protocol securely
implements a particular functionality if, for any adversary interacting with
parties who use the protocol, there is an adversary which interacts with
parties using the ideal functionality such that no `environment' can
distinguish the two. The environment is allowed to interact freely with the
adversary throughout, differentiating this approach from that of
\cite{Bellare:1998:MAD} and \cite{Shoup:1999:OFM}, where the distinguisher
was given only transcripts of the adversary's interaction with the parties.
With security defined in this way, it's possible to prove a `universal
composition theorem': one can construct a protocol, based upon various ideal
functionalities, and then `plug in' secure implementations of the ideal
functionalities and appeal to the theorem to prove the security of the entire
protocol. The UC framework gives rise to very strong notions of security,
due to the interactive nature of the `environment' distinguisher.

Canetti and Krawczyk \cite{Canetti:2002:UCN} show that the SK-security notion
of \cite{Canetti:2001:AKE} is \emph{equivalent} to a `relaxed' notion of
key-exchange security in the UC framework, and suffices for the construction
of UC secure channels.

The result of \cite{Canetti:2002:UCN} gives us confidence that SK-security is
the `right' notion of security for key-exchange protocols. Accordingly,
SK-security is the standard against which we analyse our key-exchange
protocol.


\subsection{Outline of the paper}

The remaining sections of this paper are as follows.
\begin{itemize}
\item Section \ref{sec:prelim} provides the essential groundwork for the rest
of the paper. It introduces important notation, and describes security
notions and intractability assumptions.
\item Section \ref{sec:zk-ident} describes our zero-knowledge identification
protocol and proves its security.
\item Section \ref{sec:kx} describes the simple version of our key-exchange
protocol, and proves its security and deniability. It also describes some
minor modifications which bring practical benefits without damaging
security.
\item Finally, section \ref{sec:conc} presents our conclusions.
\end{itemize}

%%%--------------------------------------------------------------------------

\section{Preliminaries}
\label{sec:prelim}

\subsection{Miscellaneous notation}

We write $\Func{D}{R}$ for the set of all functions with domain $D$ and range
$R$.
\iffancystyle
We write $\Nupto{n} = \{\, i \in \Z \mid 0 \le i < n \,\} = \{0, 1, \ldots, n
- 1\}$ for the set of nonnegative integers less than $n$.
\fi


\subsection{Groups}

Let $(G, +)$ be a cyclic group\footnote{
We find that additive group notation is easier to read. In particular, in
multiplicative groups, one ends up with many interesting things tucked away
in little superscripts.}%
of prime order $q$, and generated by an element $P$. We shall write the
identity of $G$ as $0_G$, or simply as $0$ when no ambiguity is likely to
arise. Thus, we have $\langle P \rangle = G$ and $q P = 0$. Any $X \in G$
can be written as $X = x P$ for some $x \in \Nupto{q}$.
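To fix ideas, the additive notation can be modelled by a toy example. The
following Python sketch (ours, purely illustrative) takes $G$ to be the
integers mod $q$ under addition, with generator $P = 1$; this group is of
course cryptographically useless, since recovering $x$ from $x P$ is
trivial, but the notation carries over exactly.

```python
# Toy model of a cyclic group (G, +) of prime order q, generated by P.
# Here G is the integers mod q under addition, P = 1, and the scalar
# multiple x P is x mod q.  Cryptographically useless, but the notation
# ("q P = 0", "every X is x P") works exactly as in the text.

q = 13          # a small prime group order (illustrative choice)
P = 1           # generator

def scalar_mul(x, X=P):
    """Compute the scalar multiple x X, i.e. X added to itself x times."""
    return (x * X) % q

# The generator has order q: q P = 0.
assert scalar_mul(q) == 0

# Every X in G can be written as X = x P for some x in Nupto(q).
G = {scalar_mul(x) for x in range(q)}
assert len(G) == q
```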


\subsection{Bit strings and encodings}
\label{sec:bitenc}

Let $\Bin = \{0, 1\}$ be the set of binary digits. Then $\Bin^n$ is the set
of $n$-bit strings, and $\Bin^*$ the set of all (finite) bit strings. If $x
\in \Bin^n$ is a bit string, we write its length as $|x| = n$. For a bit
string $x \in \Bin^n$, and for $0 \le i < n$, we write $x[i]$ for the $i$th
bit of $x$. The empty string is denoted $\emptystring$.

Let $x$ and $y$ be two bit strings. If $|x| = |y| = n$, we write $x \xor y$
to mean the bitwise exclusive-or of $x$ and $y$: if $z = x \xor y$ then $|z|
= n$, and $z[i] = (x[i] + y[i]) \bmod 2$ for $0 \le i < n$. We write $x \cat
y$ to mean the concatenation of $x$ and $y$: if $z = x \cat y$ then $|z| =
|x| + |y|$ and $z[i] = x[i]$ if $0 \le i < |x|$ and $z[i] = y[i - |x|]$ if
$|x| \le i < |x| + |y|$.

Finally, we let $\bot$ be a value distinct from any bit string.
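The definitions above translate directly into code. The following Python
sketch (again ours, for illustration) models bit strings as lists of binary
digits and checks the stated properties of $\xor$ and $\cat$.

```python
# Bit strings modelled as Python lists of 0/1 digits.

def xor(x, y):
    """Bitwise exclusive-or of two equal-length bit strings:
    z[i] = (x[i] + y[i]) mod 2."""
    assert len(x) == len(y)
    return [(xi + yi) % 2 for xi, yi in zip(x, y)]

def cat(x, y):
    """Concatenation: z[i] = x[i] for i < |x|, else y[i - |x|]."""
    return x + y

x = [1, 0, 1, 1]
y = [0, 0, 1, 0]

z = xor(x, y)
assert len(z) == len(x)            # |z| = n
assert z == [1, 0, 0, 1]

w = cat(x, y)
assert len(w) == len(x) + len(y)   # |z| = |x| + |y|
assert w[5] == y[5 - len(x)]       # |x| <= i < |x| + |y| reads from y
```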

We shall want to encode group elements $X \in G$ and indices $x \in I =
\Nupto{|G|}$ as bit strings. To this end, we shall assume the existence of
integers $\ell_G, \ell_I > 0$ and functions
\[
e_S\colon S \to \Bin^{\ell_S}
\quad \textrm{and} \quad
d_S\colon \Bin^{\ell_S} \to S \cup \{ \bot \}
\qquad
\textrm{for } S \in \{ G, I \}
\]
with the following properties.
\begin{itemize}
\item The functions are \emph{unique} and \emph{unambiguous}, i.e., for any
$t \in \Bin^{\ell_S}$, we have
\[ d_S(t) = \begin{cases}
s & \text{if there is some $s \in S$ such that $t = e_S(s)$, or} \\
\bot & \text{if no such $s$ exists.}
\end{cases}
\]
\item The functions should be \emph{efficient} to compute. Indeed, we shall
be assuming that the time taken for encoding and decoding is essentially
trivial.
\end{itemize}
Note that, as we have defined them, all encodings of group elements are the
same length, and similarly for encodings of indices. This is necessary for
the security of our protocols.

We shall frequently abuse notation by omitting the encoding and decoding
functions where it is obvious that they are required.
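As an illustration of the $e_S$/$d_S$ contract, here is a hypothetical
fixed-length encoding of indices in Python; the names `e_I`, `d_I` and the
parameter choices are ours, purely for illustration, and `None` stands in
for the distinguished value $\bot$.

```python
# Sketch of a fixed-length encoding/decoding pair for indices x in
# Nupto(N), following the e_S/d_S contract: every encoding has exactly
# l_I bits, and d_I returns None (standing in for "bottom") on any
# string that is not a valid encoding.

L_I = 8          # l_I: all index encodings are exactly this many bits
N = 200          # we encode elements of Nupto(N) = {0, ..., N-1}

def e_I(x):
    assert 0 <= x < N
    return format(x, '0{}b'.format(L_I))   # fixed-width binary string

def d_I(t):
    if len(t) != L_I or any(c not in '01' for c in t):
        return None                        # not an l_I-bit string
    x = int(t, 2)
    return x if x < N else None            # reject non-encodings

# d_I inverts e_I, and everything else decodes to "bottom".
assert all(d_I(e_I(x)) == x for x in range(N))
assert d_I(format(N, '08b')) is None       # 200..255 are not encodings
assert len(e_I(0)) == len(e_I(N - 1))      # constant length, as required
```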


\subsection{Games, adversaries, and oracles}
\label{sec:games}

Many of the security definitions and results given here make use of
\emph{games}, played with an \emph{adversary}. An adversary is a
probabilistic algorithm. In some games, the adversary is additionally
equipped with \emph{oracles}, which perform computations with values chosen
by the adversary and secrets chosen by the game but not revealed to the
adversary. We impose limits on the adversary's resource usage: in
particular, the total time it takes, and the number of queries it makes to
its various oracles. Throughout, we include the size of the adversary's
program as part of its `time', in order to model adversaries which contain
large precomputed tables.

The games provide models of someone trying to attack a construction or
protocol. For security, we will either define a notion of `winning' the
game, and require that all adversaries have only a very small probability of
winning, or we will consider two different games and require that no
adversary can distinguish between the two except with very small
probability.

Our proofs make frequent use of sequences of games; see
\cite{Shoup:2004:SGT,Bellare:2004:CBG}. The presentation owes much to Shoup
\cite{Shoup:2004:SGT}. We begin with a game $\G0$ based directly on a
relevant security definition, and construct a sequence of games $\G1$, $\G2$,
\dots, each slightly different from the last. We define all of the games in
a sequence over the same underlying probability space -- the random coins
tossed by the algorithms involved -- though different games may have slightly
differently-defined events and random variables. Our goal in doing this is
to bound the probability of the adversary winning the initial game $\G0$ by
simultaneously (a) relating the probability of this event to that of
corresponding events in subsequent games, and (b) simplifying the game until
the probability of the corresponding event can be computed directly.

The following simple lemma from \cite{Shoup:2001:OR} will be frequently
useful.
\begin{lemma}[Difference Lemma]
\label{lem:shoup}
Let $S$, $T$, $F$ be events. Suppose $\Pr[S|\bar F] = \Pr[T|\bar F]$.
Then $|\Pr[S] - \Pr[T]| \le \Pr[F]$.
\end{lemma}
\begin{proof}
A simple calculation:
\begin{eqnarray*}[rl]
|\Pr[S] - \Pr[T]|
& = |(\Pr[S|F]\Pr[F] + \Pr[S|\bar F]\Pr[\bar F]) -
(\Pr[T|F]\Pr[F] + \Pr[T|\bar F]\Pr[\bar F])| \\
& = \Pr[F] \cdot |\Pr[S|F] - \Pr[T|F]| \\
& \le \Pr[F]
\end{eqnarray*}
and we're done!
\end{proof}
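The lemma is also easy to check empirically. The following Python sketch
(ours; the particular distributions are arbitrary) builds a small
probability space on which $S$ and $T$ agree whenever $F$ does not occur,
and confirms the bound by Monte-Carlo sampling.

```python
# Monte-Carlo sanity check of the Difference Lemma: construct events
# S, T, F over a shared probability space so that S and T coincide
# whenever F does not occur, then check |Pr[S] - Pr[T]| <= Pr[F]
# empirically (with a small allowance for sampling noise).

import random

random.seed(1)
TRIALS = 100_000
nS = nT = nF = 0
for _ in range(TRIALS):
    f = random.random() < 0.1           # the "failure" event F
    base = random.random() < 0.5        # shared outcome when F fails
    s = (random.random() < 0.9) if f else base   # S, T may differ on F
    t = (random.random() < 0.2) if f else base
    nS += s
    nT += t
    nF += f

# Empirically: |Pr[S] - Pr[T]| <= Pr[F], up to sampling slack.
assert abs(nS - nT) / TRIALS <= nF / TRIALS + 0.01
```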


\subsection{The random oracle model}
\label{sec:ro}

Most of our results will make use of the \emph{random oracle} model
\cite{Bellare:1993:ROP}, in which all the participants, including the
adversary, have access to a number of `random oracles'. A random oracle with
domain $D$ and range $R$ is an oracle which computes a function chosen
uniformly at random from the set of all such functions. (In the original
paper \cite{Bellare:1993:ROP}, random oracles are taken to have domain
$\Bin^*$ and range $\Bin^\omega$; we use finite random oracles here, because
they're easier to work with.)

Given a protocol proven secure in the random oracle model, we can instantiate
each random oracle by a supposedly-secure hash function and be fairly
confident that either our protocol will be similarly secure, or one of the
hash functions we chose has some unfortunate property.

Proofs in the random oracle model must be interpreted carefully. For
example, Canetti, Goldreich and Halevi \cite{Canetti:2004:ROM} show that
there are schemes which can be proven secure in the random oracle model but
provably have no secure instantiation in the standard model.

The random oracle model is useful for constructing reductions and simulators
for two main reasons.
\begin{enumerate}
\item One can use the transcript of an adversary's queries to random oracles
in order to extract knowledge from it.
\item One can `program' a random oracle so as to avoid being bound by prior
`commitments', or to guide an adversary towards solving a selected instance
of some problem.
\end{enumerate}
Our proofs only make use of the first feature. This becomes particularly
important when we consider issues of zero-knowledge and deniability in a
concurrent setting, because we want to be able to claim that we retain these
features when the random oracle is instantiated using a cryptographic hash
function, and hash functions definitely aren't `programmable' in this way!
The former property seems rather more defensible -- one would indeed hope
that the only sensible way of working out (anything about) the hash of a
particular string is to actually compute the hash function, and the random
oracle model is, we hope, just giving us a `hook' into this process.

(Our protocols can be modified to make use of bilinear pairings so as to
provide identity-based identification and key-exchange, using the techniques
of \cite{Boneh:2001:IBE}. Proving the security of the modifications we
discuss would involve `programming' random oracles, but this doesn't affect
the zero-knowledge or deniability of the resulting protocols.)


\subsection{Notation for algorithms}

We shall have occasion to describe algorithms by means of a pseudocode. Our
choice of pseudocode is unlikely to be particularly controversial. We let $x
\gets y$ denote the action of setting $x$ to the value $y$; similarly, $x
\getsr Y$ denotes the action of sampling $x$ from the set $Y$ uniformly at
random.

The expression $a \gets A^{O(\cdot, x)}(y)$ means `assign to $a$ the value
output by algorithm $A$ on input $y$, and with oracle access to the algorithm
which, given input $z$, computes $O(z, x)$'.

We make use of conditional (\IF-\ELSE) and looping (\FOR-\DO and \WHILE-\DO)
constructions; in order to reduce the amount of space taken up, the bodies of
such constructions are shown by indentation only.

We don't declare the types of our variables explicitly, assuming that these
will be obvious by inspection; also, we don't describe our variables' scopes
explicitly, leaving the reader to determine these from context.

Finally, the notation $\Pr[\textit{algorithm} : \textit{condition}]$ denotes
the probability that \textit{condition} is true after running the given
\textit{algorithm}.


\subsection{Diffie-Hellman problems}
\label{sec:dhp}

The security of our protocols is related to the hardness of the
computational, decisional, and gap Diffie-Hellman problems in the group $G$.
We define these problems and what it means for them to be `hard' here.

The \emph{computational} Diffie-Hellman problem (CDH) is as follows: given
two group elements $X = x P$ and $Y = y P$, find $Z = x y P$.
\begin{definition}[The computational Diffie-Hellman problem]
Let $(G, +)$ be a cyclic group generated by $P$. For any adversary $A$, we
say that $A$'s \emph{success probability} at solving the computational
Diffie-Hellman problem in $G$ is
\[ \Succ{cdh}{G}(A) =
\Pr[ x \getsr \Nupto{|G|}; y \getsr \Nupto{|G|} : A(x P, y P) = x y P ]
\]
where the probability is taken over the random choices of $x$ and $y$ and
any random decisions made by $A$. We say that the \emph{CDH insecurity
function} of $G$ is
\[ \InSec{cdh}(G; t) = \max_A \Succ{cdh}{G}(A) \]
where the maximum is taken over adversaries which complete in time $t$.
\end{definition}
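The game structure of this definition can be made concrete with a toy
example. The Python sketch below (ours, illustrative only) plays the CDH
game in the trivial additive group $\Z_q$ used earlier; in that group
discrete logarithms are easy, so the adversary shown always wins. The point
is the shape of the success-probability experiment, not security.

```python
# The CDH game from the definition, over the toy additive group Z_q
# (generator P = 1, so x P is just x mod q).  In this group CDH is
# trivially easy; the example only makes the experiment concrete.

import random

q = 13           # toy prime group order; in Z_q, "x P" is x mod q

def cdh_game(adversary):
    """One run of the CDH experiment: sample x, y and test whether
    A(x P, y P) = x y P."""
    x = random.randrange(q)          # x <-$ Nupto(|G|)
    y = random.randrange(q)          # y <-$ Nupto(|G|)
    return adversary(x % q, y % q) == (x * y) % q

def discrete_log_adversary(X, Y):
    # In Z_q, recovering x from X = x P is trivial (X itself); in a
    # real group this step is the discrete-logarithm problem.
    x = X
    return (x * Y) % q               # x (y P) = x y P

random.seed(0)
wins = sum(cdh_game(discrete_log_adversary) for _ in range(1000))
assert wins == 1000                  # success probability 1 in this toy group
```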
Certainly, if one can compute discrete logarithms in the group $G$ (i.e.,
given $x P$, find $x$), then one can solve the computational Diffie-Hellman
problem. The converse is not clear, though. Shoup \cite{Shoup:1997:LBD}
gives us some confidence in the difficulty of the problem by showing that a
\emph{generic} adversary -- i.e., one which makes no use of the specific
structure of a group -- performing $n$ group operations has success
probability no greater than $n^2/|G|$.

This isn't quite sufficient for our purposes. Our proofs will be able to
come up with (possibly) a large number of guesses for the correct answer, and
at most one of them will be correct. Unfortunately, working out which one is
right seems, in general, to be difficult. This is the \emph{decision}
Diffie-Hellman problem (DDH), which \cite{Shoup:1997:LBD} shows, in the
generic group model, is about as hard as CDH. (See \cite{Boneh:1998:DDP} for
a survey of the decision Diffie-Hellman problem.)

Our reference problem will be a `multiple-guess computational Diffie-Hellman
problem' (MCDH), which is captured by a game as follows. An adversary is
given a pair of group elements $(x P, y P)$, and an oracle $V(\cdot)$ which
accepts group elements as input. The adversary wins the game if it queries
$V(x y P)$.
\begin{definition}[The multiple-guess computational Diffie-Hellman problem]
\label{def:mcdh}
Let $(G, +)$ be a cyclic group generated by $P$. For some adversary $A$,
we say that $A$'s \emph{success probability} at solving the multiple-guess
computational Diffie-Hellman problem in $G$ is
\[ \Succ{mcdh}{G}(A) = \Pr[\Game{mcdh}{G}(A) = 1] \]
where $\Game{mcdh}{G}(A)$ is shown in figure~\ref{fig:mcdh}. We say that
the \emph{MCDH insecurity function of $G$} is
\[ \InSec{mcdh}(G; t, q_V) = \max_A \Succ{mcdh}{G}(A) \]
where the maximum is taken over adversaries which complete in time $t$ and
make at most $q_V$ oracle queries.
\end{definition}

\begin{figure}
\begin{program}
$\Game{mcdh}{G}(A)$: \+ \\
$w \gets 0$; \\
$x \getsr \Nupto{|G|}$; $y \getsr \Nupto{|G|}$; \\
$A^{V(\cdot)}(x P, y P)$; \\
\RETURN $w$;
\next
Function $V(Z)$: \+ \\
\IF $Z = x y P$ \THEN \\ \ind
$w \gets 1$; \\
\RETURN $1$; \- \\
\RETURN $0$;
\end{program}

\caption{The multiple-guess computational Diffie-Hellman problem:
$\Game{mcdh}{G}(A)$}
\label{fig:mcdh}
\end{figure}
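The pseudocode of figure~\ref{fig:mcdh} transcribes almost line for line
into Python. The sketch below (ours, illustrative) again uses the toy
additive group $\Z_{13}$, together with a hypothetical guessing adversary
whose modest success rate reflects the role of $q_V$ in the proposition
below.

```python
# Transcription of Game^{mcdh}_G(A): the adversary receives (x P, y P)
# and an oracle V, and wins (w = 1) if any V-query equals x y P.
# We again use the toy additive group Z_q, where x P is x mod q.

import random

q = 13           # toy prime group order

def game_mcdh(adversary):
    state = {'w': 0}
    x = random.randrange(q)
    y = random.randrange(q)

    def V(Z):                        # the verification oracle
        if Z == (x * y) % q:
            state['w'] = 1
            return 1
        return 0

    adversary(V, x % q, y % q)       # A^{V(.)}(x P, y P)
    return state['w']

def guessing_adversary(V, X, Y):
    # Makes q_V = 4 distinct random guesses; wins with probability
    # roughly q_V/q, illustrating the q_V factor in the MCDH-vs-CDH
    # comparison.
    for Z in random.sample(range(q), 4):
        if V(Z):
            return

random.seed(0)
wins = sum(game_mcdh(guessing_adversary) for _ in range(5000))
assert 0 < wins < 5000               # wins sometimes, but not always
```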

Note that our MCDH problem is not quite the `gap Diffie-Hellman problem'
(GDH). The gap problem measures the intractability of solving CDH even with
the assistance of an oracle for solving (restricted) decision Diffie-Hellman
problems in the group. Specifically, the adversary is given $(X, Y) = (x P,
y P)$ and tries to find $Z = x y P$, as for CDH, but now it has access to an
oracle $D(R, S)$ which answers $1$ if $S = x R$ and $0$ otherwise.

Clearly MCDH is at least as hard as GDH, since our simple verification oracle
$V(Z)$ can be simulated with the gap problem's DDH oracle, as $D(Y, Z)$.
However, we can (loosely) relate the difficulty of MCDH to the difficulty of
CDH.
\begin{proposition}[Comparison of MCDH and CDH security]
  For any cyclic group $(G, +)$,
  \[ \InSec{mcdh}(G; t, q_V) \le q_V\,\InSec{cdh}(G; t + O(q_V)). \]
\end{proposition}
\begin{proof}
  Let $A$ be an adversary attacking the multiple-guess computational
  Diffie-Hellman problem in $G$, and suppose that it runs in time $t$ and
  issues $q_V$ queries to its verification oracle.

  We use a sequence of games.  Game $\G0$ is the original MCDH attack game.
  In each game $\G{i}$, we let $S_i$ be the event that the adversary wins
  the game.

  Game $\G1$ is the same as $\G0$, except that we change the behaviour of the
  verification oracle.  Specifically, we make the oracle always return $0$.
  We claim that this doesn't affect the adversary's probability of winning,
  i.e., $\Pr[S_1] = \Pr[S_0]$.  To see this, note that if none of the
  adversary's $V(\cdot)$ queries was correct, then there is no change in the
  game; conversely, if any query was correct, then the adversary will have
  won regardless of its subsequent behaviour (which may differ arbitrarily
  between the two games).

  We are now ready to construct from $A$ an adversary $B$ attacking the
  standard computational Diffie-Hellman problem.
  \begin{program}
    Adversary $B(X, Y)$: \+ \\
      $n \gets 0$; \\
      \FOR $i \in \Nupto{q_V}$ \DO $Q_i \gets 0$; \\
      $A^{V(\cdot)}(X, Y)$; \\
      $r \getsr \Nupto{n}$; \\
      \RETURN $Q_r$;
    \next
    Function $V(Z')$: \+ \\
      $Q_n \gets Z'$; \\
      $n \gets n + 1$; \\
      \RETURN $0$;
  \end{program}
  Observe that $B$ provides $A$ with an accurate simulation of game $\G1$.
  Moreover, at the end of the algorithm, $n \le q_V$ counts the queries that
  $A$ made to its oracle, and the values $Q_0$, $Q_1$, \dots, $Q_{n-1}$ are
  the values of those queries.  (If $A$ makes no queries at all then it
  cannot win, so we may assume that $n > 0$.)  Hence, with probability
  $\Pr[S_1]$, at least one of the $Q_i$ is the correct answer to the CDH
  problem.  Let $\epsilon = \Pr[S_1] = \Pr[S_0]$; we claim that $B$'s
  probability of success is at least $\epsilon/q_V$.  Since $A$ was chosen
  arbitrarily, the proposition follows from this claim by maximizing over
  adversaries and counting $B$'s resources.

  We now prove the above claim.  For $0 \le i < q_V$, let $W_i$ be the
  event that $Q_i = x y P$, i.e., that $Q_i$ is the correct response.  A
  simple union bound shows that, for each $j$,
  \[ \sum_{0\le i<j} \Pr[W_i \mid n = j] \ge \Pr[S_1 \mid n = j]. \]
  We now perform a calculation:
  \begin{eqnarray*}[rl]
    \Succ{cdh}{G}(B)
      & = \sum_{0\le i<q_V} \Pr[W_i \land r = i] \\
      & = \sum_{0<j\le q_V} \Pr[n = j]
          \biggl( \sum_{0\le i<j} \Pr[W_i \land r = i \mid n = j] \biggr) \\
      & = \sum_{0<j\le q_V} \Pr[n = j]
          \biggl( \frac{1}{j} \sum_{0\le i<j} \Pr[W_i \mid n = j] \biggr) \\
      &\ge \sum_{0<j\le q_V} \Pr[n = j] \frac{\Pr[S_1 \mid n = j]}{j} \\
      &\ge \frac{1}{q_V} \sum_{0<j\le q_V} \Pr[n = j] \Pr[S_1 \mid n = j] \\
      & = \frac{\Pr[S_1]}{q_V} = \frac{\epsilon}{q_V},
  \end{eqnarray*}
  which completes the proof.
\end{proof}
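The reduction in the proof is simple enough to express directly.  Below is a sketch (ours) of the adversary $B$ in Python: it answers every oracle query with $0$, as in game $\G1$, records the queried values, and outputs one of them uniformly at random.

```python
import secrets

def reduction_B(A, X, Y, q_V):
    """Run the MCDH adversary A on the CDH instance (X, Y), answering
    every verification query with 0 (as in game G1), and return one of
    the recorded queries at random; with probability at least eps/q_V
    this is the CDH answer x y P."""
    queries = []

    def V(Z):
        queries.append(Z)
        return 0                    # game G1: the oracle always says no

    A(X, Y, V)
    if not queries:                 # a query-less A cannot have won
        return None
    return queries[secrets.randbelow(len(queries))]
```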


\subsection{Example groups and encodings}

For nonnegative integers $0 \le n < 2^\ell$, there is a natural binary
encoding $N_\ell\colon \Nupto{2^\ell} \to \Bin^\ell$ which we can define
recursively as follows.
\[ N_0(0) = \emptystring \qquad
   N_\ell(n) = \begin{cases}
     N_{\ell-1}(n) \cat 0 & if $0 \le n < 2^{\ell-1}$ \\
     N_{\ell-1}(n - 2^{\ell-1}) \cat 1 & if $2^{\ell-1} \le n < 2^\ell$.
   \end{cases}
\]
Given an encoding $a = N_\ell(n)$ we can recover $n$ as
\[ n = \sum_{0\le i<\ell} a[i] 2^i. \]
Hence, given some limit $L \le 2^\ell$, we can encode elements of $\Nupto{L}$
using the functions $(e, d)$:
\[ e(L, \ell, n) = N_\ell(n) \qquad
   d(L, \ell, a) = \begin{cases}
     N_\ell^{-1}(a) & if $N_\ell^{-1}(a) < L$ \\
     \bot & otherwise
   \end{cases}
\]
The reader can verify that the functions $e(L, \ell, \cdot)$ and $d(L, \ell,
\cdot)$ satisfy the requirements of section~\ref{sec:bitenc}.

Given some $q < 2^{\ell_I}$ and $I = \Nupto{q}$, then, we can define an
encoding $(e_I, d_I)$ by $e_I(n) = e(q, \ell_I, n)$ and $d_I(a) = d(q,
\ell_I, a)$.
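A direct Python transcription of $N_\ell$ and $(e, d)$ makes the bit ordering explicit: $a[0]$ is the least significant bit.  The function names are ours, and `None` stands in for $\bot$.

```python
def encode_N(n, ell):
    """N_ell: write 0 <= n < 2^ell as ell bits, a[i] = coefficient of 2^i."""
    assert 0 <= n < (1 << ell)
    return [(n >> i) & 1 for i in range(ell)]

def decode_N(a):
    """Recover n = sum_i a[i] 2^i from its encoding."""
    return sum(bit << i for i, bit in enumerate(a))

def e(L, ell, n):
    """e(L, ell, n) from the text: just the ell-bit encoding of n."""
    return encode_N(n, ell)

def d(L, ell, a):
    """d(L, ell, a): decode, rejecting values outside Nupto(L)."""
    n = decode_N(a)
    return n if n < L else None     # None plays the role of bot
```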

Let $p$ and $q$ be primes, with $q|(p - 1)$.  Then there is an order-$q$
subgroup of $(\Z/p\Z)^*$.  In practice, an order-$q$ element can be found
easily by taking elements $h \in (\Z/p\Z)^*$ at random and computing $g =
h^{(p-1)/q}$ until $g \ne 1$; then $G = \langle g \rangle$ is a group of $q$
elements.  Assuming that $p$ and $q$ are sufficiently large, the
Diffie-Hellman problems seem to be difficult in $G$.  Some texts recommend
additional restrictions on $p$, in particular that $(p - 1)/2q$ be either
prime or the product of large primes.  Primes of this form protect against
small-subgroup attacks; but our protocols are naturally immune to these
attacks, so such precautions are unnecessary here.  Elements of $G$ can be
encoded readily, since each element $n + p\Z$ of $\Z/p\Z$ has an obvious
`representative' integer $n$ such that $0 \le n < p$, and given $2^{\ell_G} >
p$, we can encode $n$ as $e(p, \ell_G, n)$, as above.
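The random search for an order-$q$ element can be sketched as follows; the code and the parameters are ours, and the parameters are illustrative toys, far too small for real use.

```python
import secrets

def find_generator(p, q):
    """Find a generator of the order-q subgroup of (Z/pZ)^*, where q is a
    prime dividing p - 1: raise random elements to the power (p-1)/q
    until the result isn't 1."""
    assert (p - 1) % q == 0
    while True:
        h = 2 + secrets.randbelow(p - 3)    # random h in [2, p - 2]
        g = pow(h, (p - 1) // q, p)
        if g != 1:
            return g                        # g has order exactly q

# Toy parameters: p = 23 = 2q + 1 with q = 11.
g = find_generator(23, 11)
```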

Alternatively, let $\F = \gf{p^f}$ be a finite field, and $E$ be an elliptic
curve defined over $\F$ such that the group $E(\F)$ of $\F$-rational points
of $E$ has a prime-order cyclic subgroup $G$.  Elements of $G$ can be
represented as pairs of elements of $\F$.  If $f = 1$, i.e., $\F = \Z/p\Z$,
then field elements can be encoded as above.  If $p = 2$, we can represent
the field as $\F_2[x]/(p(x))$ for some irreducible polynomial $p(x) \in
\F_2[x]$ of degree $f$.  An element $r \in \F$ can then be represented by a
polynomial $r(x)$ with degree less than $f$, and coefficients $c_i \in \{0,
1\}$, i.e.,
\[ r(x) = \sum_{0\le i<f} c_i x^i \]
and hence we can uniquely encode $r$ as an $f$-bit string $a$ such that $a[i]
= c_i$.
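This coefficient representation also supports arithmetic directly.  The sketch below (ours) represents an element of $\F_2[x]/(p(x))$ as an integer whose bit $i$ is $c_i$, exactly the encoding just described; the multiplication routine, and the choice of the AES field $\gf{2^8}$ as an example, are illustrative assumptions.

```python
def encode_gf2(r, f):
    """Encode r (an integer whose bit i is the coefficient c_i) as the
    f-bit string a with a[i] = c_i."""
    assert 0 <= r < (1 << f)
    return [(r >> i) & 1 for i in range(f)]

def gf2_mul(a, b, poly, f):
    """Multiply in F_2[x]/(p(x)); `poly` is the bit pattern of p(x)."""
    r = 0
    while b:
        if b & 1:
            r ^= a                  # add a * x^k into the result
        b >>= 1
        a <<= 1
        if a >> f:                  # degree reached f: reduce mod p(x)
            a ^= poly
    return r

# Example: GF(2^8) with the irreducible p(x) = x^8 + x^4 + x^3 + x + 1,
# the polynomial used by AES.
AES_POLY = 0b1_0001_1011
```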


\subsection{Symmetric encryption}
\label{sec:sym-enc}

Our key-exchange protocol requires a symmetric encryption scheme.  Our
definition is fairly standard, except that, rather than specifying a
key-generation algorithm, we assume that key generation simply involves
selecting a string of a given length uniformly at random.
\begin{definition}[Symmetric encryption schemes]
  A \emph{symmetric encryption scheme} $\E = (\kappa, E, D)$ consists of:
  \begin{itemize}
  \item an integer $\kappa \ge 0$;
  \item a randomized \emph{encryption algorithm} $E$ which, on input $K \in
    \Bin^\kappa$ and $p \in \Bin^*$, outputs some $c \in \Bin^*$, written $c
    \gets E_K(p)$; and
  \item a \emph{decryption algorithm} $D$ which, on input $K \in \Bin^\kappa$
    and $c \in \Bin^*$, outputs some $p' \in \Bin^* \cup \{\bot\}$, written
    $p' \gets D_K(c)$.
  \end{itemize}
  Furthermore, a symmetric encryption scheme must be \emph{sound}: that is,
  if $c \gets E_K(p)$ for some $K \in \Bin^\kappa$ and $p \in \Bin^*$, and
  $p' \gets D_K(c)$, then $p = p'$.
\end{definition}
Our security notion for symmetric encryption is the standard notion of
left-or-right indistinguishability of ciphertexts under chosen-ciphertext
attack.
\begin{definition}
  [Indistinguishability under chosen-ciphertext attack (IND-CCA)]
  \label{def:ind-cca}
  Let $\E = (\kappa, E, D)$ be a symmetric encryption scheme, and $A$ be an
  adversary.  Let $\id{lr}_b(x_0, x_1) = x_b$ for $b \in \{0, 1\}$.  Let
  \[ P_b =
     \Pr[K \getsr \Bin^\kappa;
         b' \gets A^{E_K(\id{lr}_b(\cdot, \cdot)), D_K(\cdot)}() :
         b' = 1]
  \]
  An adversary is \emph{valid} if
  \begin{itemize}
  \item for any query to its encryption oracle $E_K(\id{lr}_b(x_0, x_1))$ we
    have $|x_0| = |x_1|$, and
  \item no query to the decryption oracle $D_K(\cdot)$ is equal to any reply
    from an encryption query.
  \end{itemize}
  If $A$ is valid, then we define its \emph{advantage} in attacking the
  security of $\E$ as follows:
  \[ \Adv{ind-cca}{\E}(A) = P_1 - P_0. \]
  Further, we define the \emph{IND-CCA insecurity function of $\E$} to be
  \[ \InSec{ind-cca}(\E; t, q_E, q_D) = \max_A \Adv{ind-cca}{\E}(A) \]
  where the maximum is taken over all valid adversaries $A$ which run in time
  $t$, and issue at most $q_E$ encryption and $q_D$ decryption queries.
\end{definition}
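For concreteness, here is a Python sketch of one run of the left-or-right game, with the two validity conditions enforced as assertions.  The harness, and the deliberately worthless identity `scheme' used to exercise it, are ours.

```python
import secrets

def ind_cca_game(kappa, E, D, adversary):
    """One run of the LR game: pick K and the hidden bit b, give the
    adversary its two oracles, and return (its guess, b)."""
    K = secrets.token_bytes(kappa)
    b = secrets.randbelow(2)
    replies = set()                 # ciphertexts the encryption oracle gave

    def enc_lr(x0, x1):
        assert len(x0) == len(x1)   # validity: equal-length messages
        c = E(K, (x0, x1)[b])
        replies.add(c)
        return c

    def dec(c):
        assert c not in replies     # validity: no relaying oracle replies
        return D(K, c)

    return adversary(enc_lr, dec), b

# The identity 'scheme' is sound but trivially insecure: an adversary
# that encrypts (b"0", b"1") reads the hidden bit straight off.
E_id = lambda K, p: p
D_id = lambda K, c: c
```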


\subsection{Simulations}
\label{sec:sim}

In section~\ref{sec:zk-ident}, we shall prove that our identification
protocol is zero-knowledge; in section~\ref{sec:denial}, we show that our
key-exchange protocol is deniable.  In both of these proofs, we shall need to
demonstrate \emph{simulatability}.

\subsubsection{General framework}
Consider a game in which an adversary~$A$ interacts with some `world'~$W$,
which we shall represent as a probabilistic algorithm.  The world may in fact
represent a number of honest parties communicating in a concurrent fashion,
but we can consider them as a single algorithm for our present purposes.

Initially the world and the adversary are both given the same \emph{common
input}~$c$; in addition, the world is given a \emph{private input}~$w$.
Both $c$ and~$w$ are computed by an \emph{initialization function}~$I$, which
is considered to be part of the definition of the game.  Finally, the
adversary decides somehow that it has finished interacting, and outputs a
value~$a$.  All this we notate as
\[ (w, c) \gets I(); a \gets A^{W(w, c)}(c). \]
This game is \emph{simulatable} if there is an algorithm~$S$ -- the
\emph{simulator} -- which can compute the same things as~$A$, but all by
itself without interacting with the world.  That is, we run the simulator on
the common input~$c$, allowing it to interact in some way with the
adversary~$A$, and finally giving us an output~$s$.
\[ (w, c) \gets I(); s \gets S^A(c). \]
We shall say that the simulator is \emph{effective} if it's difficult to tell
whether a given string was output by the adversary after interacting with the
world, or by the simulator running by itself.  That is, for any algorithm~$D$
-- a \emph{distinguisher} -- running in some bounded amount of time, its
advantage
\begin{spliteqn*}
  \Pr[(w, c) \gets I(); a \gets A^{W(w, c)}(c);
      b \gets D(c, a) : b = 1] - {} \\
  \Pr[(w, c) \gets I(); s \gets S^A(c); b \gets D(c, s) : b = 1]
\end{spliteqn*}
is small.  (Note that we gave the distinguisher the common input as well as
the output of the adversary or the simulator.)

It's usual to study \emph{transcripts} of interactions in these kinds of
settings.  We are considering arbitrary adversarial outputs here, so this
certainly includes adversaries which output a transcript of their
interactions.  Indeed, for any adversary~$A$, we could construct an
adversary~$A_T$ which performs the same computation, and outputs the same
result, but also includes a complete transcript of $A$'s interaction with the
world.  Therefore we're just providing additional generality.

\subsubsection{Random oracles}
We shall be considering interactions in which all the parties have access to
several random oracles.  We could simply say that the random oracles are part
of the world~$W$.  In the setting described above, only the adversary
actually interacts with the world (and therefore would be able to query
random oracles).  The simulator would be forced to `make up' its own random
oracle, and the distinguisher would have to study the distributions of the
random-oracle queries and their responses to make up its mind about which it
was given.

However, this would be a poor model for the real world, since once we
instantiate the random oracle with a hash function, we know that everyone
would actually be able to compute the hash function for themselves.  Thus
a distinguisher in the real world would be able to tell the difference
immediately between a real interaction and the simulated transcript, since
the `random oracle' queries recorded in the latter would be wrong!

Therefore we decide not to include the random oracles as part of the world,
but instead allow all the participants -- adversary, simulator and
distinguisher -- access to them.  If we denote by~$\mathcal{H} = (H_0, H_1,
\ldots, H_{n-1})$ the collection of random oracles under consideration, the
expression for the distinguisher's advantage becomes
\begin{spliteqn*}
  \Pr[(w, c) \gets I(); a \gets A^{W(w, c), \mathcal{H}}(c);
      b \gets D^{\mathcal{H}}(c, a) : b = 1] - {} \\
  \Pr[(w, c) \gets I(); s \gets S^{A, \mathcal{H}}(c);
      b \gets D^{\mathcal{H}}(c, s) : b = 1].
\end{spliteqn*}

\subsubsection{Auxiliary inputs}
If an adversary's output can be effectively simulated, then we can
confidently state that the adversary `learnt' very little of substance from
its interaction, and certainly very little it can \emph{prove} to anyone
else.  However, as we have described the setting so far, we fix an adversary
before we choose inputs to the world, so our model says little about what an
adversary which has already acquired some knowledge might learn beyond that.
For example, an adversary might overhear some other conversation between
honest parties and be able to use this to its advantage.

To this end, we give the adversary an \emph{auxiliary input}~$u$, computed by
an algorithm~$U$.  We give $U$ both $c$ and $w$, in order to allow the
adversary to gain some (possibly partial) knowledge of the secrets of the
other parties.  We also allow $U$ access to the random oracles~$\mathcal{H}$,
because clearly in the `real world' it would be ridiculous to forbid such an
algorithm from computing a publicly-known hash function.

The simulator and distinguisher are also given the auxiliary input.  The
simulator is meant to represent the adversary's ability to compute things on
its own, without interacting with the world, and since the adversary is given
the auxiliary input, the simulator must be too.  The distinguisher must be
given the auxiliary input because otherwise the simulator could just `make
up' plausible-looking inputs.

\subsubsection{Resource limits}
We shall not allow our algorithms to perform completely arbitrary
computations and interactions.  Instead, we impose limits on the amount of
time they are allowed to take, the number of random-oracle queries they make,
and so on.  Specifically, we are interested in
\begin{itemize}
\item the time $t_A$ taken by the adversary and $t_D$ taken by the
  distinguisher,
\item the number of oracle queries $\mathcal{Q}_A = (q_{A,0}, q_{A,1},
  \ldots, q_{A,n-1})$ made by the adversary, and $\mathcal{Q}_D$ made by the
  distinguisher,
\item a number of resource bounds $\mathcal{R}$ on the adversary's
  interaction with the world (e.g., number of messages of various kinds sent
  and received), and
\item a number of bounds $\mathcal{U}$ on the contents of the adversary's
  auxiliary input~$u$.
\end{itemize}
Sometimes we shall not be interested in proving simulatability of adversaries
with auxiliary inputs.  We write $\mathcal{U} = 0$ to indicate that an
auxiliary input is not allowed.

\subsubsection{World syntax}
It will be worth our while being more precise about what a `world' actually
is, syntactically.  We define a world to be a single, randomized algorithm
taking inputs $(\iota, \sigma, \tau, \mu) \in (\Bin^*)^4$; the algorithm's
output is a pair $(\sigma', \rho) \in (\Bin^*)^2$.  We show how the
adversary's interaction is mapped on to this world algorithm in
figure~\ref{fig:sim-world}.
\begin{itemize}
\item The `input' $\iota$ is the result of the initialization function~$I$.
  That is, it is the pair $(w, c)$ of the world's private input and the
  common input.
\item The `state' $\sigma$ is empty on the world's first invocation; on each
  subsequent call, the value of the world's output $\sigma'$ is passed back.
  In this way, the world can maintain state.
\item The `type' $\tau$ is a token giving the type of invocation this is.
\item The `message' $\mu$ is any other information passed in; its form will
  typically depend on the type~$\tau$ of the invocation.
\item The `new state' $\sigma'$ is the value of $\sigma$ to pass to the next
  invocation of the world.
\item The `reply' $\rho$ is the actual output of the invocation.
\end{itemize}
There are two special invocation types.  The adversary is \emph{forbidden}
from making special invocations.
\begin{itemize}
\item The special invocation type $\cookie{init}$ is used to allow the world
  to prepare an initial state.  The world is invoked as
  \[ W^{\mathcal{H}}(\iota, \emptystring, \cookie{init}, \emptystring) \]
  and should output an initial state $\sigma'$.  The world's reply $\rho$ is
  ignored.  (Recall that $\emptystring$ represents the empty string.)
\item The special invocation type $\cookie{random}$ is used to inform the
  world that the adversary has issued a random oracle query.  The world is
  invoked as
  \[ W^{\mathcal{H}}(\iota, \sigma, \cookie{random}, (i, x, h)) \]
  to indicate that the adversary has queried its random oracle $H_i(\cdot)$
  on the input $x$, giving output~$h$.  The world may output an updated state
  $\sigma'$; its reply $\rho$ is ignored.
\end{itemize}
The latter special query is a technical device used to allow the `fake-world'
simulators we define below to be aware of the adversary's random oracle
queries without being able to `program' the random oracle.  Including it here
does little harm, and simplifies the overall exposition.

\begin{figure}
  \begin{program}
    Interaction $A^{W(w, c), \mathcal{H}}(c, u)$: \+ \\
      $(\sigma, \rho) \gets
        W((w, c), \emptystring, \cookie{init}, \emptystring)$; \\
      $a \gets A^{\id{world}(\cdot, \cdot),
                  \id{random}(\cdot, \cdot)}(c, u)$; \\
      \RETURN $a$;
    \newline
    Function $\id{world}(\tau, \mu)$: \+ \\
      \IF $\tau \in \{\cookie{init}, \cookie{random}\}$ \THEN
        \RETURN $\bot$; \\
      $(\sigma, \rho) \gets W((w, c), \sigma, \tau, \mu)$; \\
      \RETURN $\rho$; \-
    \next
    Function $\id{random}(i, x)$: \+ \\
      $h \gets H_i(x)$; \\
      $(\sigma, \rho) \gets
        W((w, c), \sigma, \cookie{random}, (i, x, h))$; \\
      \RETURN $h$;
  \end{program}

  \caption{Interacting with a world: Interaction $A^{W, \mathcal{H}}$}
  \label{fig:sim-world}
\end{figure}
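Figure~\ref{fig:sim-world} translates almost line-for-line into Python.  The sketch below is ours: worlds are functions $(\iota, \sigma, \tau, \mu) \mapsto (\sigma', \rho)$, and the wrapper hides the two special invocation types from the adversary.

```python
def interact(A, W, H, w, c, u):
    """Drive the world algorithm W on behalf of the adversary A, as in
    the figure: an 'init' call first, a 'random' call after every
    random-oracle query, and a bot (None) reply if A tries a special
    invocation type itself."""
    iota = (w, c)
    sigma, _ = W(iota, b"", "init", b"")

    def world(tau, mu):
        nonlocal sigma
        if tau in ("init", "random"):
            return None             # adversary may not make special calls
        sigma, rho = W(iota, sigma, tau, mu)
        return rho

    def random_oracle(i, x):
        nonlocal sigma
        h = H[i](x)                 # the answer comes from the oracle
        sigma, _ = W(iota, sigma, "random", (i, x, h))
        return h

    return A(world, random_oracle, c, u)

# A toy world that counts non-special invocations and echoes messages:
def counting_world(iota, sigma, tau, mu):
    if tau == "init":
        return 0, None
    if tau == "random":
        return sigma, None          # sees the query, keeps its state
    return sigma + 1, (sigma, mu)
```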

\subsubsection{Definitions}
We are now ready to begin making definitions.
\begin{definition}[Simulation security]
  \label{def:sim}
  Consider the game described above, with the initialization function~$I$,
  and the world~$W$: let $A$ be an adversary, and let~$U$ be an
  auxiliary-input function; let $S$ be a simulator, and let $D$ be a
  distinguisher.  We define $D$'s \emph{advantage against $S$'s simulation of
  $A$'s interaction with~$W$ with auxiliary inputs provided by~$U$} to be
  \[ \Adv{sim}{W, I, S}(A, U, D) =
     \Pr[\Game{real}{W, I, S}(A, U, D) = 1] -
     \Pr[\Game{sim}{W, I, S}(A, U, D) = 1]
  \]
  where the games are as shown in figure~\ref{fig:sim}.
  Furthermore, we define the \emph{simulator's insecurity function} to be
  \[ \InSec{sim}(W, I, S;
       t_D, t_A, \mathcal{Q}_D, \mathcal{Q}_A, \mathcal{R}, \mathcal{U}) =
       \max_{D, A, U} \Adv{sim}{W, I, S}(A, U, D)
  \]
  where the maximum is taken over all distinguishers~$D$ running in
  time~$t_D$ and making at most $\mathcal{Q}_D$ random-oracle queries, and
  all adversaries~$A$ running in time~$t_A$, making at most $\mathcal{Q}_A$
  random-oracle queries, not exceeding the other stated resource
  bounds~$\mathcal{R}$ on their interaction with~$W$, and auxiliary-input
  functions producing output not exceeding the stated bounds~$\mathcal{U}$.
\end{definition}
\begin{remark}
  The usual definitions of zero-knowledge, for example, require the simulator
  to work for all choices of inputs (common, private and auxiliary), rather
  than for random choices.  Our definition therefore looks weaker.  Our proof
  of zero-knowledge actually carries through to the traditional
  stronger-looking definition.  Critically, however, the standard
  universal quantification over inputs fails to capture deniability in the
  random oracle model, since the inputs can't then depend on the random
  oracle.  Our formulation therefore actually gives \emph{stronger}
  deniability than the usual one.
\end{remark}

\begin{figure}
  \begin{program}
    $\Game{real}{W, I, S}(A, U, D)$: \+ \\
      $(w, c) \gets I()$; \\
      $u \gets U^{\mathcal{H}}(w, c)$; \\
      $a \gets A^{W(w, c), \mathcal{H}}(c, u)$; \\
      $b \gets D^{\mathcal{H}}(c, u, a)$; \\
      \RETURN $b$;
    \next
    $\Game{sim}{W, I, S}(A, U, D)$: \+ \\
      $(w, c) \gets I()$; \\
      $u \gets U^{\mathcal{H}}(w, c)$; \\
      $s \gets S^{A, \mathcal{H}}(c, u)$; \\
      $b \gets D^{\mathcal{H}}(c, u, s)$; \\
      \RETURN $b$;
  \end{program}

  \caption{Games for simulation: $\Game{real}{W, I, S}$ and
    $\Game{sim}{W, I, S}$}
  \label{fig:sim}
\end{figure}

\subsubsection{Fake-world simulators}
The simulators we shall be considering in the present paper are of a specific
type which we call `fake-world simulators'.  They work by running the
adversary in a fake `cardboard cut-out' world, and attempting to extract
enough information from the adversary's previous interactions and random
oracle queries to maintain a convincing illusion.

That is, the behaviour of a fake-world simulator~$S$ is simply to allow the
adversary to interact with a `fake world'~$W'$, which was not given the
world's private input.  That is, there is some world $W'$ such that
\[ S^{A, \mathcal{H}}(c, u) \equiv A^{W'(u, c), \mathcal{H}}(c, u). \]
Fake-world simulators are convenient because they allow us to remove the
distinguisher~$D$ from consideration, as the following definition shows.
\begin{definition}[Fake-world simulation security]
  \label{def:fakesim}
  Let $I$, $W$ and $U$ be as in definition~\ref{def:sim}.  Let $A$ be an
  adversary which outputs a single bit.  Let $S$ be a fake-world simulator.
  We define $A$'s \emph{advantage against $S$'s fake-world simulation of $W$
  with auxiliary inputs provided by~$U$} to be
  \begin{spliteqn*}
    \Adv{fw}{W, I, S}(A, U) =
      \Pr[(w, c) \gets I(); u \gets U^{\mathcal{H}}(w, c);
          b \gets A^{W(w, c), \mathcal{H}}(c, u) : b = 1] - {} \\
      \Pr[(w, c) \gets I(); u \gets U^{\mathcal{H}}(w, c);
          b \gets S^{A, \mathcal{H}}(c, u) : b = 1]
  \end{spliteqn*}
  Furthermore, we define the \emph{simulator's insecurity function} to be
  \[ \InSec{fw}(W, I, S;
       t, \mathcal{Q}, \mathcal{R}, \mathcal{U}) =
       \max_{A, U} \Adv{fw}{W, I, S}(A, U)
  \]
  where the maximum is taken over all adversaries~$A$ running in time~$t$,
  making at most $\mathcal{Q}$ random-oracle queries, not exceeding the other
  stated resource bounds~$\mathcal{R}$ on its interaction with~$W$, and
  auxiliary-input functions producing output not exceeding the stated
  bounds~$\mathcal{U}$.
\end{definition}
It remains for us to demonstrate that this is a valid way of analysing
simulators; the following simple proposition shows that this is indeed the
case.
\begin{proposition}[Fake-world simulation]
  \label{prop:fakesim}
  Let $I$ be an initialization function and let $W$ be a world.  Then, for
  any fake-world simulator~$S$,
  \[ \InSec{sim}(W, I, S; t_D, t_A, \mathcal{Q}_D, \mathcal{Q}_A,
       \mathcal{R}, \mathcal{U}) \le
     \InSec{fw}(W, I, S; t_A + t_D, \mathcal{Q}_D + \mathcal{Q}_A,
       \mathcal{R}, \mathcal{U})
  \]
  (where addition of query bounds $\mathcal{Q}$ is done elementwise).
\end{proposition}
\begin{proof}
  Let $W$ and $I$ be as in the proposition statement.  Let a
  distinguisher~$D$ running in time~$t_D$ and making $\mathcal{Q}_D$
  random-oracle queries, an adversary~$A$ running in time~$t_A$ and making
  $\mathcal{Q}_A$ random-oracle queries and interacting with its world within
  the stated bounds~$\mathcal{R}$, an auxiliary-input function~$U$ satisfying
  the constraints~$\mathcal{U}$ on its output, and a fake-world simulator~$S$
  all be given.

  We construct an adversary~$B$ outputting a single bit as follows:
  \begin{program}
    Adversary $B^{W, \mathcal{H}}(c, u)$: \+ \\
      $a \gets A^{W, \mathcal{H}}(c, u)$; \\
      $b \gets D^{\mathcal{H}}(c, u, a)$; \\
      \RETURN $b$;
  \end{program}
  A glance at definitions \ref{def:sim} and~\ref{def:fakesim} and the
  resources used by $B$ shows that
  \[ \Adv{sim}{W, I, S}(A, U, D) = \Adv{fw}{W, I, S}(B, U)
       \le \InSec{fw}(W, I, S; t_D + t_A, \mathcal{Q}_D + \mathcal{Q}_A,
         \mathcal{R}, \mathcal{U})
  \]
  as required.
\end{proof}

%%%--------------------------------------------------------------------------

\section{A zero-knowledge identification scheme}
\label{sec:zk-ident}


\subsection{Description}

Here we present a simple zero-knowledge identification scheme.  Fix some
group $G$.  Suppose Alice chooses a private key $x \inr \Nupto{|G|}$, and
publishes the corresponding public key $X = x P$.  Let $H_I\colon G^2 \to
\Bin^{\ell_I}$ be a secure hash function.  Here's a simple protocol which
lets her prove her identity to Bob.
\begin{enumerate}
\item Bob selects a random $r \inr \Nupto{|G|}$, and computes $R = r P$, $Y =
  r X$, and $c = r \xor H_I(R, Y)$.  He sends the pair $(R, c)$ to Alice as
  his \emph{challenge}.
\item Alice receives $(R, c)$.  She computes $Y' = x R$ and $r' = c \xor
  H_I(R, Y')$, and checks that $R = r' P$.  If so, she sends $Y'$ as her
  \emph{response}; otherwise she sends $\bot$.
\item Bob receives $Y'$ from Alice.  He checks that $Y' = Y$.  If so, he
  accepts that he's talking to Alice; otherwise he becomes suspicious.
\end{enumerate}
We name this the Wrestlers Identification Protocol in~$G$, $\Wident^G$ (we
drop the superscript to refer to the protocol in general, or when no
ambiguity is likely to result).  A summary is shown in
figure~\ref{fig:wident}.

\begin{figure}
  \begin{description}
  \item[Setup] Group $G = \langle P \rangle$; $|G| = q$ is prime.
    $H_I(\cdot, \cdot)$ is a secure hash.
  \item[Private key] $x \inr \Nupto{q}$.
  \item[Public key] $X = x P$.
  \item[Challenge] $(R, c)$ where $r \inr \Nupto{q}$, $R = r P$, $c = r \xor
    H_I(R, r X)$.
  \item[Response] $x R = r X$ if $R = (c \xor H_I(R, x R)) P$; otherwise
    $\bot$.
  \end{description}

  \caption{Summary of the Wrestlers Identification Protocol, $\Wident$}
  \label{fig:wident}
\end{figure}
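To see the protocol in action, here is a Python sketch of $\Wident$ over a toy Schnorr group.  All names are ours, and everything about the instantiation is an illustrative assumption: the parameters are far too small for security, SHA-256 truncated to 16 bits stands in for $H_I$, the group is written multiplicatively so that $r P$ becomes $g^r \bmod p$, and we avoid $r = 0$ (a negligible deviation from the specification, to keep the toy checks clean).

```python
import hashlib, secrets

# Toy Schnorr-group parameters: q = 1289 is prime, p = 2q + 1 = 2579 is
# prime, and g = 4 generates the order-q subgroup of (Z/pZ)^*.
p, q, g = 2579, 1289, 4

def H_I(R, Y):
    """Stand-in for H_I: SHA-256 of the pair, truncated to 16 bits."""
    data = R.to_bytes(2, "big") + Y.to_bytes(2, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

def keygen():
    x = secrets.randbelow(q)
    return x, pow(g, x, p)                  # private x, public X = x P

def challenge(X):
    """Bob's step: pick r, send (R, c), remember the expected answer Y."""
    r = 1 + secrets.randbelow(q - 1)        # avoid the degenerate r = 0
    R, Y = pow(g, r, p), pow(X, r, p)       # R = r P,  Y = r X
    return Y, (R, r ^ H_I(R, Y))

def respond(x, R, c):
    """Alice's step: recompute Y' = x R, recover r', check R = r' P."""
    Y1 = pow(R, x, p)
    r1 = c ^ H_I(R, Y1)
    return Y1 if pow(g, r1, p) == R else None   # None plays the role of bot
```

Running `respond` with the right key returns Bob's expected $Y$; any other key yields $\bot$ or a wrong value.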


\subsection{Security}

In order to evaluate the security of our protocol, we present, in
figure~\ref{fig:wident-ro}, a formal description of the algorithms involved
in the protocol of figure~\ref{fig:wident}.  Here, the hash function
$H_I(\cdot, \cdot)$ is modelled as a random oracle.

\begin{figure}
  \begin{program}
    Function $\id{setup}()$: \+ \\
      $x \getsr \Nupto{q}$; \\
      $X \gets x P$; \\
      \RETURN $(x, X)$;
    \next
    Function $\id{challenge}^{H_I(\cdot, \cdot)}(X)$: \+ \\
      $r \getsr \Nupto{q}$; \\
      $R \gets r P$; $Y \gets r X$; \\
      $h \gets H_I(R, Y)$; $c \gets r \xor h$; \\
      \RETURN $(Y, R, c)$; \- \\[\medskipamount]
    Function $\id{verify}(Y, Y')$: \+ \\
      \IF $Y' = Y$ \THEN \RETURN $1$; \\
      \RETURN $0$;
    \next
    Function $\id{response}^{H_I(\cdot, \cdot)}(R, c, x)$: \+ \\
      $Y' \gets x R$; \\
      $h \gets H_I(R, Y')$; $r' \gets c \xor h$; \\
      \IF $R \ne r' P$ \THEN \RETURN $\bot$; \\
      \RETURN $Y'$;
  \end{program}

  \caption{Functions implementing $\Wident$ in the random oracle model}
  \label{fig:wident-ro}
\end{figure}

\subsubsection{Completeness}
Suppose that Bob really is talking to Alice.  Note that $Y' = x R = x (r P) =
r (x P) = r X = Y$.  Hence $r' = c \xor H_I(R, Y') = c \xor H_I(R, Y) = r$,
so $r' P = r P = R$, so Alice returns $Y' = Y$ to Bob.  Therefore $\Wident$
is \emph{complete}: if Bob really is communicating with Alice then he
accepts.

\subsubsection{Soundness}
We next show that impersonating Alice is difficult.  The natural way to prove
this would be to give an adversary a challenge and prove that its probability
of giving a correct response is very small.  However, we prove a stronger
result: we show that if the adversary can respond correctly to any of a large
collection of challenges then it can solve the MCDH problem.

Consider the game $\Game{imp-$n$}{\Wident}$ shown in
figure~\ref{fig:wident-sound}.  An adversary's probability of successfully
impersonating Alice in our protocol, by correctly responding to any one of
$n$ challenges, is exactly its probability of winning the game (i.e., causing
it to return $1$).

\begin{figure}
  \begin{program}
    $\Game{imp-$n$}{\Wident}(A)$: \+ \\
      $H_I \getsr \Func{G^2}{\Bin^{\ell_I}}$; \\
      $(x, X) \gets \id{setup}()$; \\
      $\id{win} \gets 0$; \\
      $\Xid{R}{map} \gets \emptyset$; \\
      $\mathbf{c} \gets \id{challenges}(n)$; \\
      $(R', Y') \gets A^{H_I(\cdot, \cdot), \id{check}(\cdot, \cdot)}
        (X, \mathbf{c})$; \\
      \RETURN $\id{win}$;
    \newline
    Function $\id{challenges}(n)$: \+ \\
      \FOR $i \in \Nupto{n}$ \DO \\ \ind
        $(Y, R, c) \gets \id{challenge}^{H_I(\cdot, \cdot)}(X)$; \\
        $\Xid{R}{map} \gets \Xid{R}{map} \cup \{ R \mapsto Y \}$; \\
        $\mathbf{c}[i] \gets (R, c)$; \- \\
      \RETURN $\mathbf{c}$;
    \next
    Function $\id{check}(R', Y')$: \+ \\
      \IF $R' \notin \dom \Xid{R}{map}$ \THEN \RETURN $0$; \\
      $Y \gets \Xid{R}{map}(R')$; \\
      \IF $\id{verify}(Y, Y')$ \THEN \\ \ind
        $\id{win} \gets 1$; \\
        \RETURN $1$; \- \\
      \RETURN $0$;
  \end{program}

  \caption{Soundness of $\Wident$: $\Game{imp-$n$}{\Wident}(A)$}
  \label{fig:wident-sound}
\end{figure}

\begin{theorem}[Soundness of $\Wident$]
  \label{thm:wident-sound}
  Let $A$ be any adversary running in time $t$ and making $q_I$ queries to
  its random oracle, and $q_V$ queries to its verification oracle.  Let $G$
  be a cyclic group.  Then
  \[ \Pr[\Game{imp-$n$}{\Wident^G}(A) = 1] \le
       \InSec{mcdh}(G; t', q_I + q_V)
  \]
  where $t' = t + O(q_I) + O(q_V)$.
\end{theorem}
\begin{remark}
  Note that the security bound here is \emph{independent} of the value of
  $n$.
\end{remark}
1314 \begin{proof}
1315 We prove this by defining a sequence of games $\G{i}$. The first will be
1316 the same as the attack game $\Game{imp-$n$}{\Wident}(A)$ and the others
1317 will differ from it in minor ways. In each game $\G{i}$, let $S_i$ be the
1318 event that $A$ wins the game -- i.e., that it successfully impersonates the
1319 holder of the private key~$x$.
1320
1321 Let game $\G0$ be the attack game $\Game{imp}{\Wident}(A)$, and let $(R',
1322 Y')$ be the output of $A$ in the game.
1323
1324 We define a new game $\G1$ which is the same as $\G0$, except that we query
1325 the random oracle $H_I$ at $(R', Y')$ whenever the adversary queries
1326 $\id{check}(R', Y')$. (We don't charge the adversary for this.) This
1327 obviously doesn't affect the adversary's probability of winning, so
1328 $\Pr[S_1] = \Pr[S_0]$.
1329
Game $\G2$ is like $\G1$, except that we change the way we generate
challenges and check their responses. Specifically, we define new functions
$\id{challenges}_2$ and $\id{check}_2$, as shown in
figure~\ref{fig:wident-sound-2}.
1334
1335 \begin{figure}
1336 \begin{program}
1337 Function $\id{challenges}_2(n)$: \+ \\
1338 $r^* \getsr I$; $R^* \gets r^* P$; $Y^* \gets r^* X$; \\
1339 \FOR $i \in \Nupto{n}$ \DO \\ \ind
1340 $r \getsr I$; $R \gets r R^*$; $Y \gets r Y^*$; \\
1341 $h \gets H_I(R, Y)$; $c \gets r \xor h$; \\
1342 $\Xid{R}{map} \gets \Xid{R}{map} \cup \{ R \mapsto r \}$; \\
1343 $\mathbf{c}[i] \gets (R, c)$; \- \\
1344 \RETURN $\mathbf{c}$;
1345 \next
1346 Function $\id{check}_2(R', Y')$: \+ \\
1347 \IF $R' \notin \dom \Xid{R}{map}$ \THEN \RETURN $0$; \\
1348 $r \gets \Xid{R}{map}(R')$; \\
1349 \IF $\id{verify}(Y^*, Y'/r)$ \THEN \\ \ind
1350 $\id{win} \gets 1$; \\
1351 \RETURN $1$; \- \\
1352 \RETURN $0$;
1353 \end{program}
1354
1355 \caption{Soundness of $\Wident$: $\id{challenges}_2$ and $\id{check}_2$}
1356 \label{fig:wident-sound-2}
1357 \end{figure}
1358
1359 While we're generating and checking challenges in a more complicated way
1360 here, we're not actually changing the distribution of the challenges, or
1361 changing the winning condition. Hence $\Pr[S_2] = \Pr[S_1]$.
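As a numerical sanity check of this hop, the following Python sketch implements $\id{challenges}_2$ and $\id{check}_2$ over a deliberately insecure stand-in for $G$: the integers modulo a prime $q$ under addition, with $P = 1$, so that scalar multiplication is multiplication mod $q$ and the quotient $Y'/r$ is multiplication by $r^{-1} \bmod q$. The hash truncation, group and parameters are illustrative assumptions, not part of the formal model.

```python
import hashlib
import secrets

# Toy sanity check of the G2 hop, with an INSECURE stand-in for the group
# G: integers mod the prime q under addition, generator P = 1, so scalar
# multiplication is multiplication mod q and Y'/r is multiplication by
# the inverse of r mod q.  All parameters are illustrative.
q = 2**61 - 1

def H_I(R, Y):
    """Random-oracle stand-in H_I: 16-bit truncated SHA-256."""
    d = hashlib.sha256(f"{R},{Y}".encode()).digest()
    return int.from_bytes(d[:2], "big")

x = secrets.randbelow(q)
X = x % q                              # public key X = x P

# challenges_2: rerandomize a single pair (R*, Y*) = (r* P, r* X).
r_star = 1 + secrets.randbelow(q - 1)
R_star, Y_star = r_star, (r_star * X) % q

R_map, challenges = {}, []
for i in range(5):
    r = 1 + secrets.randbelow(q - 1)   # nonzero, hence invertible mod q
    R, Y = (r * R_star) % q, (r * Y_star) % q
    c = r ^ H_I(R, Y)
    R_map[R] = r
    challenges.append((R, c))

def check_2(R1, Y1):
    """check_2: verify the response against Y* alone, via Y* = Y1/r."""
    if R1 not in R_map:
        return False
    r = R_map[R1]
    return (Y1 * pow(r, -1, q)) % q == Y_star

R0, _ = challenges[0]
assert check_2(R0, (x * R0) % q)          # the correct response wins
assert not check_2(R0, (x * R0 + 1) % q)  # a wrong response loses
```

A response $Y' = x R$ passes $\id{check}_2$ exactly when it would have passed the original $\id{check}$, which is the point of the hop.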
1362
Now we change the rules again. Let $\G3$ be the same as $\G2$ except that
we change the winning condition. Instead, we say that the adversary wins
if any of the queries $(R', Y')$ it makes to its random oracle would be a
correct response -- i.e., $\id{check}_2(R', Y')$ would return $1$. Since we query
1367 the oracle on $(R', Y')$ on its behalf at the end of the game, no adversary
1368 can do worse in this game than it does in $\G2$, so we have $\Pr[S_3] \ge
1369 \Pr[S_2]$. (It's not hard to see that this only helps quite stupid
1370 adversaries. We can transform any adversary into one for which equality
1371 holds here.)
1372
1373 Finally, let $\G4$ be the same as $\G3$ except that we change the way we
1374 generate challenges again: rather than computing $h$ and setting $c \gets h
1375 \xor r$, we just choose $c$ at random. Specifically, we use the new
1376 function, $\id{challenges}_4$, shown in figure~\ref{fig:wident-sound-4}.
1377
1378 \begin{figure}
1379 \begin{program}
1380 Function $\id{challenges}_4(n)$: \+ \\
1381 $r^* \getsr I$; $R^* \gets r^* P$; $Y^* \gets r^* X$; \\
1382 \FOR $i \in \Nupto{n}$ \DO \\ \ind
1383 $r \getsr I$; $R \gets r R^*$; \\
1384 $c \getsr \Bin^{\ell_I}$; \\
1385 $\Xid{R}{map} \gets \Xid{R}{map} \cup \{ R \mapsto r \}$; \\
1386 $\mathbf{c}[i] \gets (R, c)$; \- \\
1387 \RETURN $\mathbf{c}$;
1388 \end{program}
1389
1390 \caption{Soundness of $\Wident$: $\id{challenges}_4$}
1391 \label{fig:wident-sound-4}
1392 \end{figure}
1393
1394 Since $H_I(\cdot, \cdot)$ is a random function, the adversary can only
1395 distinguish $\G4$ from $\G3$ if it queries its random oracle at some $(R, r
1396 Y^*)$. But if it does this, then by the rule introduced in $\G3$ it has
1397 already won. Therefore we must have $\Pr[S_4] = \Pr[S_3]$.
1398
1399 Our $\id{challenges}_4$ function is interesting, since it doesn't actually
1400 make use of $r^*$ or $Y^*$ when generating its challenges. This gives us
1401 the clue we need to bound $\Pr[S_4]$: we can use adversary $A$ to solve the
1402 multiple-guess Diffie-Hellman problem in $G$ by simulating the game $\G4$.
1403 Specifically, we define the adversary $B$ as shown in
1404 figure~\ref{fig:wident-sound-cdh}. That is, for each query $A$ makes to
1405 its random oracle at a new pair $(R', Y')$, we see whether this gives us
1406 the answer we're looking for. We have $\Pr[S_0] \le \Pr[S_4] =
\Succ{mcdh}{G}(B) \le \InSec{mcdh}(G; t', q_I + q_V)$ as required.
1408
1409 \begin{figure}
1410 \begin{program}
1411 Adversary $B^{V(\cdot)}(X, R^*)$: \+ \\
1412 $F \gets \emptyset$; $\Xid{R}{map} \gets \emptyset$; \\
1413 \FOR $i \in \Nupto{n}$ \DO \\ \ind
1414 $r \getsr I$; $R \gets r R^*$; $c \getsr \Bin^{\ell_I}$; \\
1415 $\Xid{R}{map} \gets \Xid{R}{map} \cup \{ R \mapsto r \}$; \\
1416 $\mathbf{c}[i] \gets (R, c)$; \- \\
1417 $(R', Y') \gets A^{H_I(\cdot, \cdot), \id{check}(\cdot, \cdot)}
1418 (X, \mathbf{c})$; \\
1419 \IF $Y' \neq \bot$ \THEN $H_I(R', Y')$;
1420 \next
1421 Oracle $H_I(R', Y')$: \+ \\
1422 \IF $(R', Y') \in \dom F$ \THEN \\ \quad
$h \gets F(R', Y')$; \\
\ELSE \\ \ind
$\id{check}(R', Y')$; \\
$h \getsr \Bin^{\ell_I}$; \\
1427 $F \gets F \cup \{ (R', Y') \mapsto h \}$; \- \\
1428 \RETURN $h$;
1429 \- \\[\medskipamount]
1430 Oracle $\id{check}(R', Y')$: \+ \\
1431 \IF $R' \in \dom \Xid{R}{map}$ \THEN
1432 $V(Y'/\Xid{R}{map}(R'))$;
1433 \end{program}
1434
1435 \caption{Soundness of $\Wident$: reduction from MCDH}
1436 \label{fig:wident-sound-cdh}
1437 \end{figure}
1438
1439 \end{proof}
1440
1441 \subsubsection{Zero-knowledge}
1442 Finally we must prove that $\Wident$ is (statistical) zero-knowledge -- i.e.,
1443 that, except with very small probability, Bob learns nothing of use to him
1444 except that he's interacting with Alice. To do this, we show that, for any
1445 algorithm $B$ which Bob might use to produce his challenge to the real Alice,
1446 there exists a simulator $S$ which produces transcripts distributed very
1447 similarly to transcripts of real conversations between $B$ and Alice, the
1448 difference being that $S$ doesn't know Alice's key. We shall show that the
statistical difference between the two distributions is at most $2^{-\ell_I}$.
1450
1451 The intuition here is that Bob ought to know what answer Alice is going to
1452 give him when he constructs his challenge. This is certainly true if he's
1453 honest: his challenge is $R = r P$ for some $r$ he knows, so he won't learn
1454 anything useful when Alice responds with $x R = r X$. However, if Bob sends
1455 a challenge $R$ when he doesn't know the corresponding $r$, he learns
1456 something potentially useful. The accompanying check value $c = r \xor
1457 H_I(R, r X)$ keeps him honest.
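To make this flow concrete, here is a minimal Python sketch of the challenge-response exchange, again over an insecure toy model of $G$ (integers mod a prime under addition, $P = 1$); the names, hash truncation and parameter sizes are illustrative assumptions only.

```python
import hashlib
import secrets

# Minimal sketch of the challenge/response flow, with an INSECURE toy
# model of G: integers mod a prime q under addition, generator P = 1.
# The hash truncation (ell_I = 16 bits) and group are illustrative only.
q = 2**61 - 1

def mul(k, U):
    """Scalar multiplication k U in the toy group."""
    return (k * U) % q

def H_I(R, Y):
    d = hashlib.sha256(f"{R},{Y}".encode()).digest()
    return int.from_bytes(d[:2], "big")

def challenge(X):
    """Bob: an honest challenge against the public key X."""
    r = secrets.randbelow(q)
    R = mul(r, 1)                        # R = r P
    c = r ^ H_I(R, mul(r, X))            # c = r xor H_I(R, r X)
    return (R, c), r

def response(R, c, x):
    """Alice: respond with Y = x R only if the check value is right."""
    Y = mul(x, R)
    r = c ^ H_I(R, Y)                    # recover Bob's r
    return Y if mul(r, 1) == R else None # insist that R = r P

x = secrets.randbelow(q)
X = mul(x, 1)                            # Alice's key pair (x, X = x P)
(R, c), r = challenge(X)
assert response(R, c, x) == mul(r, X)    # honest Bob learns r X, nothing new
assert response(R, c ^ 1, x) is None     # a mangled check value is refused
```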
1458
1459 To show this, we present an \emph{extractor} which, given any challenge $(R,
1460 c)$ Bob can construct, and his history of random-oracle queries, either
returns a pair $(r, Y)$ such that $R = r P$ and $Y = r X$, or $\bot$;
moreover, if it returns $\bot$ then the probability that Alice returns a
response $Y' \ne \bot$ given the challenge $(R, c)$ is at most
$2^{-\ell_I}$. We can, of course, readily convert
1464 this extractor into a simulator to prove the zero-knowledge property of our
1465 protocol.
1466
1467 We shall actually consider a slightly more complex setting. We grant Bob
1468 access to an oracle which produces random, correctly-formed challenges. We
1469 require this to model the legitimate challenges of other parties when we
analyse the security of our key exchange protocol. The extractor is
permitted to fail given a challenge which merely repeats one of this
oracle's outputs.
1472
1473 \begin{definition}[Discrete-log extractors]
1474 Let $T$, $B$ be randomized algorithms. Define the game
1475 $\Game{dl-ext}{G}(T, B)$ as shown in figure~\ref{fig:dlext}. The
1476 \emph{success probability of $T$ as a discrete-log extractor against $B$}
1477 is defined as
1478 \[ \Succ{dl-ext}{G}(T, B) = \Pr[\Game{dl-ext}{G}(T, B) = 1]. \]
1479 \end{definition}
1480 Let's unpack this definition slightly. We make the following demands of our
1481 extractor.
1482 \begin{itemize}
\item It is given a bare `transcript' of $B$'s execution. In particular, it
is given only $B$'s output and a list of $B$'s random-oracle queries, in no
particular order.
1486 \item While the extractor is not given the private key~$x$, the adversary~$B$
1487 is given the private key.
1488 \item We require that, if the extractor produces values $r, Y \ne \bot$ then
1489 $r$ and $Y$ are \emph{correct}; i.e., that $R = r P$ and $Y = x R$.
1490 \item The extractor is explicitly \emph{not} given the outputs of the
1491 challenge-generation oracle $C()$, nor of the random-oracle queries issued
1492 by $C()$. However, we allow the extractor to fail (i.e., return $\bot$) if
1493 $B$ simply parrots one of its $C$-outputs.
1494 \item The extractor is allowed -- indeed \emph{required} -- to fail if the
1495 challenge $(R, c)$ is \emph{invalid} (i.e., Alice would return $\bot$ given
1496 the challenge).
1497 \end{itemize}
1498 The resulting definition bears a striking similarity to the concept of
1499 \emph{plaintext awareness} in \cite{Bellare:1998:RAN}.
1500
1501 \begin{figure}
1502 \begin{program}
1503 $\Game{dl-ext}{G}(T, B):$ \+ \\
1504 $H_I \getsr \Func{G^2}{\Bin^{\ell_I}}$;
1505 $Q_H \gets \emptyset$; $Q_C \gets \emptyset$; \\
1506 $(x, X) \gets \id{setup}()$; \\
1507 $(R, c) \gets B^{\Xid{H_I}{trap}(\cdot, \cdot), C()}(x, X)$; \\
1508 $(r, Y) \gets T(R, c, Q_H)$; \\
1509 $Y' \gets x R$; $h' \gets H_I(R, Y')$; $r' \gets c \xor h'$; \\
1510 \IF $r \ne \bot$ \THEN \\ \quad
1511 \IF $Y = \bot \lor R \ne r P \lor Y \ne Y'$ \THEN \RETURN $0$; \\
\IF $R = r' P$ \THEN $(r^*, Y^*) \gets (r', Y')$; \\
\ELSE $(r^*, Y^*) \gets (\bot, \bot)$; \\
\IF $(R, c) \in Q_C$ \THEN \RETURN $1$; \\
\IF $(r, Y) = (r^*, Y^*)$ \THEN \RETURN $1$; \\
1516 \RETURN $0$;
1517 \next
1518 Oracle $\Xid{H_I}{trap}(R', Y')$: \+ \\
1519 $h \gets H_I(R', Y')$; \\
1520 $Q_H \gets Q_H \cup \{(R', Y', h)\}$; \\
1521 \RETURN $h$; \- \\[\medskipamount]
1522 Oracle $C()$: \+ \\
1523 $r \getsr \Nupto{q}$; \\
1524 $R \gets r P$; $c \gets r \xor H_I(R, r X)$; \\
1525 $Q_C \gets Q_C \cup \{(R, c)\}$; \\
1526 \RETURN $(R, c)$
1527 \end{program}
1528
1529 \caption{Discrete log extraction game: $\Game{dl-ext}{G}(T, B)$}
1530 \label{fig:dlext}
1531 \end{figure}
1532
1533 Such an extractor indeed exists, as the following lemma states.
1534 \begin{lemma}[Effectiveness of extractor $T_\Wident$]
1535 \label{lem:dlext}
There exists a \emph{universal discrete-log extractor} $T_\Wident$ such
that, for any algorithm $B$,
1538 \[ \Succ{dl-ext}{G}(T_\Wident, B) \ge 1 - \frac{1}{2^{\ell_I}}. \]
1539 Moreover, if $B$ issues at most $q_H$ random-oracle queries, then the
1540 running time of $T_\Wident$ is $O(q_H)$.
1541 \end{lemma}
1542 We prove this result at the end of the section. For now, let us see how to
1543 prove that $\Wident$ is zero-knowledge.
1544
1545 We use the set-up described in section~\ref{sec:sim}. Our initialization
1546 function~$I_\Wident$ just chooses a random $x \in \Nupto{q}$ as the world
1547 private input and sets $X = x P$ as the common input. In the `real world',
1548 the adversary is allowed to submit a (single) challenge to the prover; it is
1549 given the prover's response, and must then compute its output. This is shown
1550 on the left hand side of figure~\ref{fig:wident-sim}.
1551
1552 The zero-knowledge property of the scheme is described by the following
1553 theorem.
1554 \begin{theorem}[Statistical zero-knowledge of $\Wident$]
1555 \label{thm:wident-zk}
1556 Let $I_\Wident$, $W_\Wident$ and $S_\Wident$ be the real-prover world and
1557 simulator shown in figure~\ref{fig:wident-sim}. Then, for any~$t$,
1558 $q_I$ and $q_C$,
1559 \[ \InSec{sim}(W_\Wident, I_\Wident, S_\Wident; t, q_I, q_C, 0) \le
\frac{q_C}{2^{\ell_I}}.
\]
where $q_C$ is the maximum number of challenges made by the adversary.
1563 \end{theorem}
1564 \begin{proof}
1565 The simulator simply uses the extractor~$T_\Wident$ to extract the answer
1566 from the adversary's history of random oracle queries. Observe that
1567 $S_\Wident$ is a fake-world simulator. By lemma~\ref{lem:dlext}, the
extractor fails with probability at most $2^{-\ell_I}$. The theorem follows
1569 by a simple union bound and proposition~\ref{prop:fakesim}.
1570 \end{proof}
1571
1572 \begin{figure}
1573 \begin{program}
1574 Initialization function $I_\Wident()$: \+ \\
1575 $x \getsr \Nupto{|G|}$; \\
1576 $X \gets x P$; \\
1577 \RETURN $(x, X)$;
1578 \- \\[\medskipamount]
1579 Real-prover world $W_\Wident^{H_I(\cdot, \cdot)}
1580 ((x, X), \sigma, \tau, \mu)$: \+ \\
1581 \IF $\tau = \cookie{challenge}$ \THEN \\ \ind
1582 $(R, c) \gets \mu$; \\
1583 $Y \gets \id{response}^{H_I(\cdot, \cdot)}(R, c, x)$; \\
1584 \RETURN $(1, Y)$; \- \\
1585 \ELSE \\ \ind
1586 \RETURN $(\sigma, \bot)$;
1587 \next
1588 Simulator $S_\Wident$'s fake world \\
1589 \hspace{1in} $W_{\text{sim}}^{H_I(\cdot, \cdot)}
1590 ((X, u), \sigma, \tau, \mu)$: \+ \\
1591 \IF $\tau = \cookie{init}$ \THEN \\ \ind
1592 \RETURN $(\emptyset, \emptystring)$; \- \\
1593 $Q_H \gets \sigma$; \\
1594 \IF $\tau = \cookie{challenge}$ \THEN \\ \ind
1595 $(R, c) \gets \mu$; \\
1596 $(r, Y) \gets T_\Wident(R, c, Q_H)$; \\
1597 \RETURN $(Q_H, Y)$; \- \\
1598 \ELSE \IF $\tau = \cookie{random}$ \THEN \\ \ind
1599 $(i, (R', Y'), h) \gets \mu$; \\
1600 $Q_H \gets Q_H \cup \{(R', Y', h)\}$; \\
1601 \RETURN $(Q_H, \emptystring)$; \- \\
1602 \ELSE \\ \ind
1603 \RETURN $(\sigma, \bot)$;
1604 \end{program}
1605
1606 \caption{Real-prover and simulator for zero-knowledge of $\Wident$}
1607 \label{fig:wident-sim}
1608 \end{figure}
1609
1610 We now return to proving that our extractor exists and functions as claimed.
1611 The following two trivial lemmata will be useful, both now and later.
1612 \begin{lemma}[Uniqueness of discrete-logs]
1613 \label{lem:unique-dl}
1614 Let $G = \langle P \rangle$ be a cyclic group. For any $X \in G$ there is
1615 a unique integer $x$ where $0 \le x < |G|$ and $X = x P$.
1616 \end{lemma}
1617 \begin{proof}
1618 Certainly such an $x$ exists, since $G$ is cyclic and finite. Suppose $X =
x P = x' P$: then $0 = x P - x' P = (x - x') P$. Hence $(x - x')$ is a
multiple of $|G|$; since $0 \le x, x' < |G|$, this forces $x = x'$.
1621 \end{proof}
1622 \begin{lemma}[Uniqueness of check values]
1623 \label{lem:unique-c}
1624 Let $G = \langle P \rangle$ be a cyclic group; let $H_I$ be a function
1625 $H_I\colon \Bin^{2\ell_G} \to \Bin^{\ell_I}$. Fix some $x \in
1626 \Nupto{|G|}$ and define the set
1627 \[ V_x = \bigl\{\, (R, c) \in G \times \Bin^{\ell_I} \bigm|
1628 R = \bigl( c \xor H_I(R, x R) \bigr) P \,\bigr\}.
1629 \]
1630 Then, for any $R$, $c$, $c'$, if $(R, c) \in V_x$ and $(R, c') \in V_x$
1631 then $c = c'$.
1632 \end{lemma}
1633 \begin{proof}
1634 From lemma~\ref{lem:unique-dl}, we see that there is a unique $r \in
1635 \Nupto{|G|}$ for which $R = r P$. Now, if $(R, c) \in V_x$, we must have
1636 $r = c \xor H_I(R, x R)$. It follows that $c = r \xor H_I(R, x R)$.
1637 \end{proof}
1638
1639 \begin{proof}[Proof of lemma~\ref{lem:dlext}]
1640 Our extractor $T_\Wident$ is shown in figure~\ref{fig:twident}.
1641
1642 \begin{figure}
1643 \begin{program}
1644 Extractor $T_\Wident(R, c, Q_H)$: \+ \\
1645 \FOR $(R', Y', h)$ \IN $Q_H$ \DO \\ \ind
1646 $r \gets h \xor c$; \\
1647 \IF $R = R' = r P \land Y' = r X$ \THEN \RETURN $(r, Y')$; \- \\
1648 \RETURN $(\bot, \bot)$;
1649 \end{program}
1650
1651 \caption{The discrete-log extractor $T_\Wident$}
1652 \label{fig:twident}
1653 \end{figure}
1654
1655 Let $B$ be any randomized algorithm, and let $(R, c, Q_H)$ be as given to
1656 the extractor by $\Game{dl-ext}{G}(T_\Wident, B)$. Let the quantities
1657 $H_I$, $Q_C$, $r$, $r'$, $x$ and $X$ be as in that game.
1658
1659 Suppose that the extractor returns values $(r, Y) \ne (\bot, \bot)$. Let
1660 $h = r \xor c$; then there must be a query $(R, Y, h) \in Q_H$, and we have
$R = r P$ and $Y = r X = r (x P) = x (r P) = x R = Y'$, so the extractor's
output is correct whenever it doesn't fail.
1663
1664 Furthermore, in the case where the extractor did not fail, we have $h =
1665 H_I(R, Y) = H_I(R, Y')$ and $c = r \xor h$, so the challenge was valid.
1666 Therefore, if the challenge was invalid, the extractor will fail.
1667
1668 We now deal with the challenge-generation oracle. Suppose that $(R, c')
1669 \in Q_C$ for some $c'$. Now, if $c = c'$ then $(R, c')$ is a repeat of
1670 some challenge from the challenge-generation oracle, and the extractor is
1671 permitted to fail. On the other hand, suppose $c \ne c'$; then, the
1672 challenge $(R, c)$ must be invalid by lemma~\ref{lem:unique-c}, so the
1673 extractor is required to fail, and we have established that indeed it will.
1674 From now on, suppose that $R$ is distinct from all the $R$-values returned
1675 by $C()$.
1676
1677 Let $Y = x R$. Suppose that $B$ queried its random oracle at $(R, Y)$.
Let $h = H_I(R, Y)$, so $r' = c \xor h$. If the challenge is valid then $R =
1679 r' P$; therefore $Y = x R = x r' P = r' X$, so we have $(R, Y, h) \in Q_H$
1680 with $R = r P$ and $Y = r X$. Hence the extractor returns $r = r'$ as
1681 required.
1682
1683 It remains to deal with the case where there is no random-oracle query at
1684 $(R, Y)$. But then $h = H_I(R, Y)$ is uniformly distributed, and
1685 independent of the entire game up to this point. Let $r$ be the correct
1686 discrete log of $R$; by lemma~\ref{lem:unique-dl} there is only one
1687 possible value. The extractor always fails under these circumstances, but
1688 a correct responder would reply with probability
1689 \[ \Pr[h = c \xor r] = \frac{1}{2^{\ell_I}}. \]
1690 This concludes the proof.
1691 \end{proof}
1692 \begin{remark}
1693 Note that the fact that the algorithm~$B$ was given the private key is
1694 irrelevant to the above argument. However, we shall need this property
1695 when we come to prove deniability for the key-exchange protocol.
1696 \end{remark}
1697 \begin{remark}
1698 It's easy to see from the above proof that the extractor works flawlessly
1699 on the `honest verifier' algorithm $\id{challenge}$ shown in
1700 figure~\ref{fig:wident-ro}. This shows that $\Wident$ is perfect
1701 zero-knowledge against honest verifiers. We're much more interested in
1702 dishonest verifiers, though.
1703 \end{remark}
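The extractor of figure~\ref{fig:twident} is simple enough to run directly. The sketch below replays it over a toy model of $G$ (integers mod a prime under addition, $P = 1$; an insecure, illustrative assumption): a verifier who forms a valid challenge must have queried $H_I(R, r X)$, so the logged query list $Q_H$ betrays $r$.

```python
import hashlib
import secrets

# Replay of the extractor T_Wident from the figure, over an INSECURE toy
# model of G (integers mod a prime q, P = 1); names and sizes are
# illustrative.  A verifier who forms a valid challenge must have queried
# H_I(R, r X), so the logged query list Q_H betrays r.
q = 2**61 - 1

def H_I(R, Y):
    d = hashlib.sha256(f"{R},{Y}".encode()).digest()
    return int.from_bytes(d[:2], "big")

def T_wident(R, c, Q_H, X):
    """Scan Q_H for a query explaining the challenge (R, c)."""
    for (R1, Y1, h) in Q_H:
        r = h ^ c
        if R == R1 == r % q and Y1 == (r * X) % q:
            return (r, Y1)
    return (None, None)                  # the extractor fails

x = secrets.randbelow(q)
X = x % q

Q_H = []                                 # transcript of B's oracle queries
def H_I_logged(R, Y):
    h = H_I(R, Y)
    Q_H.append((R, Y, h))
    return h

r = secrets.randbelow(q)                 # B builds a valid challenge...
R = r % q
c = r ^ H_I_logged(R, (r * X) % q)       # ...forcing the tell-tale query

assert T_wident(R, c, Q_H, X) == (r, (r * X) % q)  # r and Y recovered
assert T_wident(R, c ^ 1, Q_H, X) == (None, None)  # invalid challenge: fail
```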
1704
1705
1706 \subsection{An identity-based identification scheme}
1707 \label{sec:wident-id}
1708
1709 Boneh and Franklin \cite{Boneh:2001:IBE} showed how to construct an
1710 identity-based encryption scheme using bilinear pairings. The resulting
1711 encryption scheme looks somewhat like a pairing-based version of ElGamal's
1712 encryption scheme \cite{ElGamal:1984:PKC}. We can easily apply their
1713 techniques to our identification protocol, and thereby obtain an
1714 identity-based identification scheme. Providing the necessary formalisms to
1715 prove theorems analogous to our theorems~\ref{thm:wident-sound}
1716 and~\ref{thm:wident-zk} would take us too far from our objectives; but given
1717 appropriate security notions, we can readily adapt our existing proofs to the
1718 new setting.
1719
1720 \subsubsection{Bilinear pairings}
1721 Before we describe the necessary modifications to the protocol, we first give
1722 a (very brief!) summary of cryptographic pairings. (The Boneh-Franklin paper
1723 \cite{Boneh:2001:IBE} gives more detail; also \cite{Menezes:2006:IPB}
1724 provides a useful introduction to the topic.)
1725
1726 Let $(G, +)$, $(G', +)$ and $(G_T, \times)$ be cyclic groups with prime order
1727 $q$; let $P \in G$ and $P' \in G'$ be elements of order $q$ in $G$ and $G'$
1728 respectively. We say that a mapping $\hat{e}\colon G \times G' \to G_T$ is a
1729 \emph{non-degenerate bilinear pairing} if it satisfies the following
1730 properties.
1731 \begin{description}
1732 \item[Bilinearity] For all $R \in G$ and $S', T' \in G'$, we have $\hat{e}(R,
1733 S' + T') = \hat{e}(R, S')\,\hat{e}(R, T')$; and for all $R, S \in G$ and $T'
1734 \in G'$ we have $\hat{e}(R + S, T') = \hat{e}(R, T')\,\hat{e}(S, T')$.
1735 \item[Non-degeneracy] $\hat{e}(P, P') \ne 1$.
1736 \end{description}
1737 For practical use, we also want $\hat{e}(\cdot, \cdot)$ to be efficient to
1738 compute. The reader can verify that $\hat{e}(a P, b P') = \hat{e}(P,
1739 P')^{ab}$. It is permitted for the two groups $G$ and $G'$ to be equal.
1740
1741 We require a different intractability assumption, specifically that the
1742 \emph{bilinear} Diffie-Hellman problem (BDH) -- given $(a P, b P, a P', c P')
1743 \in G^2 \times G'^2$, find $\hat{e}(P, P')^{abc} \in G_T$ -- is difficult.
1744 This implies the difficulty of the computational Diffie-Hellman problem in
1745 all three of $G$, $G'$ and~$G_T$.
1746
1747 \subsubsection{The identity-based scheme}
1748 We need a trusted authority; following \cite{Schneier:1996:ACP} we shall call
1749 him Trent. Trent's private key is $t \in \Nupto{q}$; his public key is $T =
1750 t P$.
1751
1752 Finally, we need cryptographic hash functions $H_I\colon G \times G_T \to
1753 \Bin^{\ell_I}$ and $\Hid\colon \Bin^* \to G'$; a formal security analysis
1754 would model these as random oracles.
1755
1756 Alice's public key is $A = \Hid(\texttt{Alice}) \in G'$. Her private key is
1757 $K_A = t A \in G'$ -- she needs Trent to give this to her. Bob can interact
1758 with Alice in order to verify her identity as follows.
1759 \begin{enumerate}
1760 \item Bob computes $\gamma_A = \hat{e}(T, A) \in G_T$. (He can do this once
1761 and store the result if he wants, but it's not that onerous to work it out
1762 each time.)
1763 \item Bob chooses $r \inr \Nupto{q}$, and sets $R = r P$. He also computes
1764 $\psi = \gamma_A^r$, $h = H_I(R, \psi)$ and $c = r \xor h$. He sends his
1765 challenge $(R, c)$ to Alice.
\item Alice receives $(R', c')$. She computes $\psi' = \hat{e}(R', K_A)$, $h'
= H_I(R', \psi')$, and $r' = c' \xor h'$. She checks that $R' = r' P$; if
1768 so, she sends $\psi'$ back to Bob; otherwise she refuses to talk to him.
1769 \item Bob receives $\psi'$. If $\psi = \psi'$ then he accepts that he's
1770 talking to Alice.
1771 \end{enumerate}
1772 This works because $\psi = \gamma_A^r = \hat{e}(T, A)^r = \hat{e}(t P, A)^r =
1773 \hat{e}(r P, A)^t = \hat{e}(R, t A) = \psi'$.
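The algebra above can be checked mechanically with a toy (and utterly insecure) pairing: take $G = G' = \Nupto{q}$ under addition with $P = P' = 1$, $G_T = \langle g \rangle \subset \F_p^*$ where $p = 2q + 1$, and $\hat{e}(U, V') = g^{U V'} \bmod p$. Every concrete value below is an illustrative assumption.

```python
import hashlib
import secrets

# Toy run of the identity-based scheme with an utterly INSECURE pairing:
# G = G' = Z_q under addition (P = P' = 1), G_T = <g> in F_p^* with
# p = 2q + 1, and e(U, V') = g^(U V') mod p.  With q = 11, p = 23, g = 2
# (2 has order 11 mod 23) this is bilinear and non-degenerate, but
# discrete logs are trivial; it illustrates only the algebra.
q, p, g = 11, 23, 2

def e(U, V):
    """The toy pairing G x G' -> G_T."""
    return pow(g, (U * V) % q, p)

def H_I(R, psi):
    return hashlib.sha256(f"{R},{psi}".encode()).digest()[0]

def Hid(name):
    """Identity hash into G' (illustrative)."""
    return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big") % q

# bilinearity: e(U, V + V') = e(U, V) e(U, V')
assert e(3, (4 + 5) % q) == (e(3, 4) * e(3, 5)) % p

t = 1 + secrets.randbelow(q - 1); T = t   # Trent's key pair (t, T = t P)
A = Hid("Alice")                          # Alice's public key
K_A = (t * A) % q                         # her private key, issued by Trent

# Bob's challenge:
gamma_A = e(T, A)
r = 1 + secrets.randbelow(q - 1)
R = r                                     # R = r P
psi = pow(gamma_A, r, p)
c = r ^ H_I(R, psi)

# Alice's response:
psi1 = e(R, K_A)                          # psi' = e(R, K_A)
r1 = c ^ H_I(R, psi1)
assert r1 % q == R                        # her check R' = r' P passes
assert psi1 == psi                        # and Bob accepts: psi = psi'
```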
1774
1775 \subsubsection{Informal analysis}
1776 An analogue to lemma~\ref{lem:dlext} can be proven to show how to extract $r$
1777 from a verifier's random-oracle queries; statistical zero knowledge would
1778 then follow easily, as in theorem~\ref{thm:wident-zk}. Soundness is
1779 intuitively clear, since an adversary must compute $\psi = \hat{e}(P,
1780 P')^{art}$ given $A = a P'$, $R = r P$ and $T = t P$, which is an instance of
1781 the BDH problem. An analogue of theorem~\ref{thm:wident-sound} would have to
1782 prove this for an adversary capable of making identity requests as well as
1783 obtaining challenges. Finally, our key-exchange protocol can be constructed
1784 out of this identity-based identification scheme, yielding an identity-based
1785 authenticated key-exchange protocol. We leave it to the reader to work
1786 through the details.
1787
1788
1789 \subsection{Comparison with the protocol of Stinson and Wu}
1790 \label{sec:stinson-ident}
1791
1792 Our protocol is similar to a recent proposal by Stinson and Wu
1793 \cite{Stinson:2006:EST}. They restrict their attention to Schnorr groups $G
1794 \subset \F_p^*$. Let $\gamma$ be an element of order $q = |G|$. The
1795 prover's private key is $a \inr \Nupto{q}$ and her public key is $\alpha =
1796 \gamma^a$. In their protocol, the challenger chooses $r \inr \Nupto{q}$,
1797 computes $\rho = \gamma^r$ and $\psi = \alpha^r$, and sends a challenge
$(\rho, H(\rho, \psi))$. The prover checks that $\rho^q = 1$ (i.e., that
$\rho \in G$), computes
1799 $\psi = \rho^a$, checks the hash, and sends $\psi$ back by way of response.
1800 They prove their protocol's security in the random-oracle model.
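For concreteness, the Stinson-Wu exchange can be run in a tiny Schnorr group; the parameters below ($p = 23$, $q = 11$, $\gamma = 4$) are toy assumptions for illustration only, far too small for real use.

```python
import hashlib
import secrets

# Toy run of the Stinson-Wu challenge/response in a tiny Schnorr group
# G < F_p^* with p = 2q + 1: parameters are illustrative assumptions only
# (gamma = 4 has order q = 11 mod 23).
p, q, gamma = 23, 11, 4

def H(rho, psi):
    return hashlib.sha256(f"{rho},{psi}".encode()).hexdigest()

a = 1 + secrets.randbelow(q - 1)      # prover's private key
alpha = pow(gamma, a, p)              # her public key alpha = gamma^a

# Challenger:
r = 1 + secrets.randbelow(q - 1)
rho = pow(gamma, r, p)                # rho = gamma^r
psi = pow(alpha, r, p)                # psi = alpha^r
h = H(rho, psi)                       # the challenge is (rho, h)

# Prover:
assert pow(rho, q, p) == 1            # subgroup membership: rho lies in G
psi1 = pow(rho, a, p)                 # psi = rho^a
assert H(rho, psi1) == h              # the hash check keeps him honest
assert psi1 == psi                    # challenger accepts the response
```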
1801
1802 Both the Wrestlers protocol and Stinson-Wu require both prover and verifier
1803 to compute two exponentiations (or scalar multiplications) each. The
1804 sizes of the messages used by the two protocols are also identical.
1805
1806 (An earlier version of the Stinson-Wu protocol used a cofactor
exponentiation: if we set $f = (p - 1)/q$, then we use $\psi = \alpha^{rf} =
1808 \rho^{af} = \gamma^{afr}$. This is more efficient in typical elliptic curve
1809 subgroups, since the cofactor of such subgroups is usually small: indeed,
1810 \cite{SEC1} recommends rejecting groups with cofactor $f > 4$. However, in
1811 the Schnorr groups used by Stinson and Wu, the cofactor is much larger than
1812 $q$, and their new variant is more efficient.)
1813
1814 We note that the zero-knowledge property of the Stinson-Wu protocol requires
1815 the Diffie-Hellman knowledge of exponent assumption (KEA). Very briefly:
1816 suppose $A$ is a randomized algorithm which takes as input $X \in G$ and
1817 outputs a pair $(r P, r X)$; intuitively, the KEA asserts $A$ must have done
1818 this by choosing $r$ somehow and then computing its output from it.
1819 Formally, it asserts the existence of an `extractor' algorithm which takes as
1820 input the element $X$ and the random coins used by $A$ and outputs $r$ with
1821 high probability. This is a very strong assumption, and one which is
1822 unnecessary for our protocol, since we can present an \emph{explicit}
1823 extractor.
1824
1825 The KEA assumption as stated in \cite{Stinson:2006:EST} allows the extractor
1826 to fail with some negligible probability, over and above the probability that
1827 a dishonest verifier managed to guess the correct $h = H(\rho, \psi)$ without
making this random-oracle query. Not only does our protocol achieve
zero-knowledge without the KEA, but our extractor is, in this sense,
`perfect'.
1830
1831 Our protocol is just as strong as Stinson-Wu under attack from active
1832 intruders: see table~\ref{tab:wident-active} for a very brief sketch of the
1833 case-analysis which would be the basis of a proof of this.
1834
1835 \begin{table}
1836 \begin{tabular}[C]{|*{3}{c|}p{8cm}|}
1837 \hlx{hv[1]}
1838 \multicolumn{2}{|c|}{\textbf{Challenge}} &
1839 \textbf{Response} &
1840 \textbf{Security}
1841 \\ \hlx{v[1]hv}
1842 %% unpleasant hacking to make the R and c columns the same width :-(
1843 \settowidth{\dimen0}{\textbf{Challenge}}%
1844 \dimen0=.5\dimen0
1845 \advance\dimen0by-\tabcolsep
1846 \advance\dimen0by-.5\arrayrulewidth
1847 \hbox to\dimen0{\hfil$R$\hfil}
1848 & $c$ & $Y$ & Nothing to prove. \\ \hlx{v}
1849 $R$ & $c'$ & --- & Prover rejects by lemma~\ref{lem:unique-c};
1850 $Y'$ probably wrong by
1851 theorem~\ref{thm:wident-sound}. \\ \hlx{v}
1852 $R$ & $c$ & $Y'$ & Response is incorrect. \\ \hlx{v}
1853 $R'$ & --- & $Y$ & Response is incorrect. \\ \hlx{v}
1854 $R'$ & $c$ & $Y'$ & Prover rejects with probability $1 - 2^{-\ell_I}$;
1855 $Y'$ probably wrong by
1856 theorem~\ref{thm:wident-sound}. \\ \hlx{v}
1857 $R'$ & $c'$ & $Y'$ & Simulate prover using extractor
1858 (lemma~\ref{lem:dlext}); $Y'$ probably wrong by
1859 theorem~\ref{thm:wident-sound}. \\ \hlx{vh}
1860 \end{tabular}
1861
1862 \caption{Security of $\Wident$ against active intruders (summary)}
1863 \label{tab:wident-active}
1864 \end{table}
1865
1866 An identity-based analogue of Stinson-Wu can be defined using a bilinear
1867 pairing, just as we did in section~\ref{sec:wident-id}. However, to prove
1868 the zero-knowledge property, one needs to make a bilinear analogue of the
1869 knowledge of exponent assumption.
1870
1871 We suspect that a key-exchange protocol like ours can be constructed using
1872 Stinson-Wu rather than the Wrestlers identification scheme. We haven't,
1873 however, gone through all the details, since we believe our protocol is just
1874 as efficient and is based on much more conservative assumptions.
1875
1876 %%%--------------------------------------------------------------------------
1877
1878 \section{A simple key-exchange protocol}
1879 \label{sec:kx}
1880
1881 In this section, we describe a simple key-exchange protocol built out of the
1882 identification protocol shown previously.
1883
1884 The key-exchange protocol arises from the following observation. If Bob
sends a challenge, presumably to Alice, and gets a correct response, then not
only was it really Alice who received the challenge, but he knows that she
received it correctly.
1888
1889 So, if Alice and Bob authenticate each other, by the end of it, they should
1890 each have chosen a random private value, sent the corresponding public value
1891 to the other, and been convinced that it arrived safely.
1892
1893 Unfortunately, life isn't quite this kind, and we have to do some more work
1894 to make this scheme secure.
1895
1896
1897 Our key exchange protocol essentially consists of two parallel instances of
1898 the identification protocol. If Alice receives a correct response to her
1899 challenge, she will know that Bob received her challenge correctly, and
1900 \emph{vice versa}. If we let Alice's challenge be $R_0 = r_0 P$ and Bob's
1901 challenge be $R_1 = r_1 P$ then each can compute a shared secret $Z = r_0 R_1
1902 = r_0 r_1 P = r_1 R_0$ unknown to an adversary. There are, unfortunately, a
1903 few subtleties involved in turning this intuition into a secure key-exchange
1904 protocol, which we now describe.
1905
1906
1907 \subsection{Overview}
1908 \label{sec:kx-overview}
1909
1910 We present a quick, informal description of our basic key-exchange protocol.
1911 In addition to our group $G$, we shall also need a secure symmetric
1912 encryption scheme $\E = (\kappa, E, D)$, and two secure hash functions
1913 $H_I\colon \Bin^{2\ell_G} \to \Bin^{\ell_I}$ and $H_K\colon \Bin^{\ell_G+1}
1914 \to \Bin^\kappa$.
1915
1916 Suppose that Alice's and Bob's private keys are $a$ and $b$ respectively, and
1917 their public keys are $A = a P$ and $B = b P$.
1918 \begin{enumerate}
1919 \item Alice chooses a random index $r \inr \Nupto{|G|}$. She computes $R = r
1920 P$ and $c = r \xor H_I(R, r B)$. She sends the pair $(R, c)$ to Bob.
1921 \item Similarly, Bob chooses a random $s \inr \Nupto{|G|}$. He computes $S
1922 = s P$ and $d = s \xor H_I(S, s A)$. He sends $(S, d)$ to Alice.
1923 \item Alice receives $(S', d')$ from Bob. She computes $s' = d' \xor H_I(S',
1924 a S')$, and verifies that $S' = s' P$. If so, she computes $K_A = H_K(0
\cat r S')$, and sends $(R, E_{K_A}(a S'))$ to Bob.
1926 \item Similarly, Bob receives $(R', c')$ from Alice. He verifies that $R' =
1927 \bigl( c' \xor H_I(R', b R') \bigr) P$. If so, he computes $K_B = H_K(0
\cat s R')$ and sends $(S, E_{K_B}(b R'))$ to Alice.
\item Alice receives a pair $(S'', \chi_B)$ from Bob. She checks that
$S'' = S'$, decrypts $\chi_B$, and checks that $D_{K_A}(\chi_B) = r B$. If
1931 so, she uses $H_K(1 \cat r S')$ as her shared secret.
1932 \item Similarly, Bob receives $(R'', \chi_A)$ from Alice, and checks $R'' =
1933 R'$ and $D_{K_B}(\chi_A) = s A$. If so, he uses $H_K(1 \cat s R')$ as his
1934 shared secret.
1935 \end{enumerate}
1936 This is the Wrestlers Key Exchange protocol, $\Wkx^{G, \E}$ (again, we omit
1937 the superscripts when referring to the general protocol, or when confusion is
1938 unlikely). A diagrammatic summary of the protocol is shown in
1939 figure~\ref{fig:wkx}.
1940
1941 \begin{figure}
1942 \begin{description}
1943 \item[Setup] Group $G = \langle P \rangle$; $|G| = q$ is prime.
1944 $H_I(\cdot, \cdot)$ and $H_K(\cdot)$ are secure hashes. $\E = (\kappa,
1945 E, D)$ is an IND-CCA2 symmetric encryption scheme.
1946 \item[Parties] $U_i$ for $0 \le i < n$.
1947 \item[Private keys] $x_i \inr \Nupto{q}$.
1948 \item[Public keys] $X_i = x_i P$.
1949 \end{description}
1950 \[ \begin{graph}
1951 !{0; <8cm, 0cm>: <0cm, 3cm>::}
1952 *+[F]\dbox{$r_i \getsr I$; $R_i \gets r_i P$ \\
1953 $c_i \gets r_i \xor H_I(R_i, r_i X_j)$} ="i0"
1954 [d]
*+[F]\dbox{Check $R_j = \bigl(c_j \xor H_I(R_j, x_i R_j)\bigr) P$ \\
1956 $Z \gets r_i R_j$; $K \gets H_K(0, Z)$ \\
1957 $\chi_i \gets E_K(x_i R_j)$} ="i1"
1958 [d]
1959 *+[F]\dbox{Check $D_K(\chi_j) = r_i X_j$ \\
1960 Shared key is $H_K(1, Z)$} ="i2"
1961 "i0" [r]
1962 *+[F]\dbox{$r_j \getsr I$; $R_j \gets r_j P$ \\
1963 $c_j \gets r_j \xor H_I(R_j, r_j X_i)$} ="j0"
1964 [d]
*+[F]\dbox{Check $R_i = \bigl(c_i \xor H_I(R_i, x_j R_i)\bigr) P$ \\
1966 $Z \gets r_j R_i$; $K \gets H_K(0, Z)$ \\
1967 $\chi_j \gets E_K(x_j R_i)$} ="j1"
1968 [d]
1969 *+[F]\dbox{Check $D_K(\chi_i) = r_j X_i$ \\
1970 Shared key is $H_K(1, Z)$} ="j2"
1971 %
1972 "i0" : |(0)/3.25cm/*+{(R_i, c_i)} "j1"
1973 "i1" : |(0)/3.25cm/*+{(R_i, \chi_i)} "j2"
1974 "j0" : |(0)/3.25cm/*+{(R_j, c_j)} "i1"
1975 "j1" : |(0)/3.25cm/*+{(R_j, \chi_j)} "i2"
1976 \end{graph} \]
1977
1978 \caption{Summary of the Wrestlers Key Exchange protocol, $\Wkx$}
1979 \label{fig:wkx}
1980 \end{figure}
Assume, for the moment, that Alice and Bob's messages are relayed honestly.
Then:
\begin{itemize}
\item $a S' = a S = a (s P) = s (a P) = s A$, so $s' = d' \xor H_I(S', a S')
  = d \xor H_I(S, s A) = s$, and $S' = S = s P = s' P$, and therefore Alice
  responds to Bob's message;
\item similarly $b R' = r B$, so $r' = r$ and $R' = r' P$, and therefore Bob
  responds to Alice's message;
\item $b R' = b R = b (r P) = r (b P) = r B$, and $a S' = a S = a (s P) = s
  (a P) = s A$, and therefore both parties compute their responses correctly;
  and
\item $r S' = r S = r (s P) = s (r P) = s R = s R'$, so $K_A = K_B$, and
  therefore they can decrypt each other's responses, and agree the same
  shared secret.
\end{itemize}
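The algebra above can be checked mechanically. The following Python sketch
instantiates the protocol over a toy group (the order-11 subgroup of
$\mathbb{Z}_{23}^*$); the parameters, the 16-bit hash truncation, and the
variable names are illustrative assumptions only, not a secure
instantiation. It verifies that honestly relayed challenges validate and
that both parties derive the same key:

```python
import hashlib

# Illustrative toy instantiation over the order-q subgroup of Z_p^*
# (q = 11, p = 23).  These parameters and the hash truncation are
# placeholders chosen for readability; a real instantiation needs a
# large group and full-width hashes.
q, p = 11, 23
P = 4                                   # generator of the order-q subgroup

def mul(k, X=P):
    # 'Scalar multiplication' k X, written multiplicatively: X^k mod p.
    return pow(X, k, p)

def H_I(R, Y):
    # Hash onto enough bits to mask an exponent (16 bits amply covers q).
    return int.from_bytes(hashlib.sha256(b'%d|%d' % (R, Y)).digest()[:2],
                          'big')

def H_K(label, Z):
    return hashlib.sha256(b'%d|%d' % (label, Z)).hexdigest()

a, b = 3, 7                             # Alice's and Bob's private keys
A, B = mul(a), mul(b)                   # corresponding public keys

r, s = 2, 9                             # ephemeral secrets
R, S = mul(r), mul(s)                   # challenge points
c = r ^ H_I(R, mul(r, B))               # Alice's check-value
d = s ^ H_I(S, mul(s, A))               # Bob's check-value

# Bob validates Alice's challenge: b R = r B, so the masks agree and he
# recovers r' = r, confirming R = r' P.
r_prime = c ^ H_I(R, mul(b, R))
assert mul(r_prime) == R

# Alice validates Bob's challenge symmetrically.
s_prime = d ^ H_I(S, mul(a, S))
assert mul(s_prime) == S

# Both sides derive the same key, since r S = s R.
K_A = H_K(1, mul(r, S))
K_B = H_K(1, mul(s, R))
assert K_A == K_B
```

The four assertions correspond directly to the four bullet points above.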
This shows that the protocol is basically valid, but says little about its
security. The remainder of this section will describe our protocol in more
formal detail, and prove its security in a model with multiple parties and an
adversary who controls the network.

Observe that the protocol as we've presented it here is \emph{symmetrical}.
There's no notion of `initiator' or `responder'. There are a total of four
messages which must be sent before both parties accept. However, this can be
reduced to three by breaking the symmetry of the protocol and combining one
or other party's challenge and response messages. We choose to analyse the
symmetrical version, since then it suffices to consider only the two
different kinds of messages. Since our security model allows the messages to
be adversarially delayed and reordered, it is easy to show that the security
of an optimized, asymmetrical protocol is no worse than that of the
symmetrical version we present here.
\subsection{Security model and security definition}
\label{sec:um}

Our model is very similar to that of Canetti and Krawczyk
\cite{Canetti:2001:AKE}, though we have modified it in two ways.
\begin{enumerate}
\item We allow all the participants (including the adversary) in the protocol
  access to the various random oracles required to implement it.
\item Since we want to analyse a specific, practical scheme, asymptotic
  results are useless. We measure the adversary's resource usage carefully,
  and produce a quantitative bound on the adversary's advantage in the
  SK-security game.
\end{enumerate}
\subsubsection{Overview}
We briefly describe our modified model, pointing out the changes we have
made, and how they apply to our protocol. Much of Canetti and Krawczyk's
model (for example, the local and global outputs) is useful for proving more
general security properties such as demonstrating that SK-security suffices
for constructing secure channels, and we shall not concern ourselves with
such details. Other parts deal with issues such as security parameters and
ensuring that all the computation is polynomially bounded, which are
irrelevant since we are dealing with a single concrete protocol rather than a
family of them.

The entities in the model are the \emph{adversary}~$A$, and a (fixed) number
of \emph{parties}~$P_i$, for $0 \le i < n$. If the protocol under
consideration makes use of random oracles, then all the participants -- the
adversary and the parties -- are allowed access to the random oracles.

The parties and the adversary play a `game'. At the beginning of the game,
the participants are given some inputs computed by a randomized
\emph{initialization procedure}~$\id{init}$. This produces as output a pair
$(i_U, \mathbf{i})$; the value $i_U$ is the \emph{global input}, and is given
to all the participants including the adversary. The vector $\mathbf{i}$ has
$n$ components, and party $P_i$ is given $(i_U, \mathbf{i}[i])$ as input.
\subsubsection{Sessions}
Parties don't act directly. Instead, each party runs a number of
\emph{sessions}. A session is a triple $S = (P_i, P_j, s)$, where $i, j \in
\Nupto{n}$ identify the owning party and a \emph{partner}, and $s \in
\Bin^{\ell_S}$ is a \emph{session-id}. (The original model includes a
r\^ole, for distinguishing between initiators and responders. Our protocol
is symmetrical, so this distinction isn't useful.) If $P_i$ runs a session
$S = (P_i, P_j, s)$ and $P_j$ runs a session $S' = (P_j, P_i, s)$ then we say
that $S$ and $S'$ are \emph{matching}, and that $P_j$ is $P_i$'s
\emph{partner} for the session.
At most one participant in the game is \emph{active} at any given time.
Initially the adversary is active. The adversary may \emph{activate} a
session in one of two ways.
\begin{enumerate}
\item It may \emph{create a session} of a party~$P_i$, by selecting a
  session-id~$s \in \Bin^{\ell_S}$ and a partner $j$. There is no
  requirement that $P_j$ ever have a matching session. However, all sessions
  of a party must be distinct, i.e., sessions with the same partner must have
  different session-ids.
\item It may \emph{deliver a message}~$\mu \in \Bin^*$, from party~$P_j$, to
  an existing session~$S = (P_i, P_j, s)$. There is no requirement that any
  party previously sent $\mu$: the adversary is free to make up messages as
  it sees fit.
\end{enumerate}
The adversary becomes inactive, and the session becomes active. The session
performs some computation, according to its protocol, and may request a
message~$\mu$ be delivered to the matching session running in its partner
(which may not exist). The session may also \emph{terminate}. In the case
we are interested in, of key-exchange protocols, a session~$S = (P_i, P_j,
s)$ may terminate in one of two ways:
\begin{enumerate}
\item it may \emph{complete}, outputting $(i, j, s, K)$, for some
  \emph{session key}~$K$, or
\item it may \emph{abort}, outputting $(i, j, s, \bot)$.
\end{enumerate}
Once it has performed these actions, the session deactivates and the
adversary becomes active again. The adversary is given the message~$\mu$, if
any, and informed of whether the session completed or aborted, but, in the
case of completion, not of the value of the key~$K$. A session is
\emph{running} if it has been created and has not yet terminated.
\subsubsection{Other adversarial actions}
As well as activating sessions, the adversary has other capabilities, as
follows.
\begin{itemize}
\item It may \emph{expire} any session~$S$, causing the owning party to
  `forget' the session key output by that session.
\item It may \emph{corrupt} any party~$P_i$, at will: the adversary learns
  the entire state of the corrupted party, including its initial
  input~$\mathbf{i}[i]$, the state of any sessions it was running at the
  time, and the session keys of any completed but unexpired sessions. Once
  corrupted, a party can no longer be activated. Of course, the adversary
  can continue to send messages allegedly from the corrupted party.
\item It may \emph{reveal the state} of a running session~$S$, learning any
  interesting values specific to that session, but \emph{not} the owning
  party's long-term secrets.
\item It may \emph{reveal the session-key} of a completed session~$S$.
\item It may elect to be \emph{challenged} with a completed session~$S$,
  provided $S$ is unexposed, as defined below. Challenge sessions form part
  of the security notion for key-exchange protocols. See below for more
  details.
\end{itemize}
We say that a session $S = (P_i, P_j, s)$ is \emph{locally exposed} if
\begin{itemize}
\item it has had its state revealed,
\item it has had its session-key revealed, or
\item $P_i$ has been corrupted, and $S$ had not been expired when this
  happened.
\end{itemize}
A session is \emph{exposed} if it is locally exposed, or if its matching
session exists and has been locally exposed.
At the beginning of the game, a bit $b^*$ is chosen at random. The adversary
may choose to be \emph{challenged} with any completed, unexposed
session;\footnote{%
  The original Canetti-Krawczyk definition restricts the adversary to a
  single challenge session, but our proof works independently of the number
  of challenge sessions, so we get a stronger result by relaxing the
  requirement here.}
the adversary is then given either the session's key -- if $b^* = 1$ -- or a
string chosen at random and independently of the game so far from a
protocol-specific distribution -- if $b^* = 0$. At the end of the game, the
adversary outputs a single bit~$b$.
\subsubsection{SK-security}
We've now described the game; it is time to explain the adversary's goal in
it. The adversary \emph{wins} the game if either
\begin{enumerate}
\item two unexposed, matching sessions complete, but output different
  keys,\footnote{%
    The original Canetti-Krawczyk definition differs slightly here. It
    requires that `if two \emph{uncorrupted} parties complete matching
    sessions then they both output the same key' [original emphasis]. This
    can't be taken at face value, since none of the protocols they claim to
    be secure actually meet this requirement: they meet only the weaker
    requirement that parties completing matching sessions output different
    keys with negligible probability. We assume here that this is what they
    meant.}
  or
\item the adversary correctly guesses the hidden bit~$b^*$.
\end{enumerate}
More formally, we make the following definition.
\begin{definition}[SK-security]
  \label{def:sk}
  Let $\Pi^{H_0(\cdot), H_1(\cdot), \ldots}$ be a key-exchange protocol
  which makes use of random oracles $H_0(\cdot)$, $H_1(\cdot)$, \dots, and
  let $A$ be an adversary playing the game described previously, where $n$
  parties run the protocol~$\Pi$. Let $V$ be the event that any pair of
  matching, unexposed sessions completed, but output different session keys.
  Let $W$ be the event that the adversary's output bit matches the game's
  hidden bit~$b^*$. We define the adversary's \emph{advantage against the
  SK-security of the protocol~$\Pi$} to be
  \[ \Adv{sk}{\Pi}(A, n) = \max(\Pr[V], 2\Pr[W] - 1). \]
  Furthermore, we define the \emph{SK insecurity function of the
  protocol~$\Pi$} to be
  \[ \InSec{sk}(\Pi; t, n, q_S, q_M, q_{H_0}, q_{H_1}, \dots) =
       \max_A \Adv{sk}{\Pi}(A, n)
  \]
  where the maximum is taken over all adversaries~$A$ that run in total
  time~$t$ (not including time taken by the parties), create at most $q_S$
  sessions, deliver at most $q_M$~messages, and (if applicable) make at most
  $q_{H_i}$ queries to each random oracle $H_i(\cdot)$.
\end{definition}
\subsection{Security}

In order to analyse our protocol $\Wkx^{G, \E}$ properly, we must describe
exactly how it fits into the formal model described in section~\ref{sec:um}.
\subsubsection{Sessions and session-ids}
Our formal model introduced the concept of sessions, which the informal
description of section~\ref{sec:kx-overview} neglected to do. (One could
argue that we described a single session only.) As we shall show in
section~\ref{sec:kx-insecure}, our protocol is \emph{insecure} unless we
carefully modify it to distinguish between multiple sessions.

In the model, distinct key-exchange sessions are given distinct partners and
session-ids. In order to prevent sessions interfering with each other, we
shall make explicit use of the session-ids.

Suppose the session-ids are $\ell_S$-bit strings. We expand the domain of
the random oracle $H_I$ so that it's now
\[ H_I\colon G \times \Bin^{\ell_S} \times G \times G \to \Bin^{\ell_I}. \]
\subsubsection{Messages}
We split the messages of our protocol into two parts: a \emph{type}~$\tau$
and a \emph{body}~$\mu$. We assume some convenient, unambiguous encoding of
pairs $(\tau, \mu)$ as bit-strings. For readability, we present message
types as text strings, e.g., `\cookie{challenge}', though in practice one
could use numerical codes instead.

The message body itself may be a tuple of values, which, again, we assume are
encoded as bit-strings in some convenient and unambiguous fashion. We shall
abuse the notation for the sake of readability by dropping a layer of nesting
in this case: for example, we write $(\cookie{hello}, x, y, z)$ rather than
$\bigl(\cookie{hello}, (x, y, z)\bigr)$.
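As a concrete illustration, one simple unambiguous encoding -- a
hypothetical choice; the protocol only assumes that \emph{some} such
encoding is fixed -- length-prefixes each component:

```python
def encode(*parts):
    # Length-prefix each component, so distinct tuples of byte-strings
    # always encode differently: (b'ab', b'c') vs (b'a', b'bc').
    return b''.join(len(part).to_bytes(4, 'big') + part for part in parts)

def decode(data):
    # Inverse of encode: read a 4-byte length, then that many bytes.
    parts, i = [], 0
    while i < len(data):
        n = int.from_bytes(data[i:i + 4], 'big')
        parts.append(data[i + 4:i + 4 + n])
        i += 4 + n
    return tuple(parts)

msg = encode(b'challenge', b'R-bytes', b'c-bytes')
assert decode(msg) == (b'challenge', b'R-bytes', b'c-bytes')
assert encode(b'ab', b'c') != encode(b'a', b'bc')
```

The final assertion is exactly the ambiguity that a naive concatenation
would fail to avoid.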
\subsubsection{The protocol}
Our protocol is represented by three functions, shown in
figure~\ref{fig:wkx-formal}.
\begin{itemize}
\item $\id{init}(n)$ is the initialization function, as described in
  section~\ref{sec:um}. It outputs a pair $(\mathbf{p}, \mathbf{i})$, where
  $\mathbf{i}[i]$ is the private key of party~$P_i$ and $\mathbf{p}[i]$ is
  the corresponding public key. Only $P_i$ is given $\mathbf{i}[i]$, whereas
  all parties and the adversary are given $\mathbf{p}$.
\item $\id{new-session}^{H_I(\cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
  (\mathbf{p}, x, i, j, s)$ is the new-session function. This is executed by
  party~$P_i$ when the adversary decides to create a new session~$S = (P_i,
  P_j, s)$. It is also given the relevant outputs of $\id{init}$, and
  allowed access to the random oracles $H_I$ and $H_K$.
\item $\id{message}^{H_I(\cdot, \cdot, \cdot, \cdot), H_K(\cdot)} (\tau,
  \mu)$ is the incoming-message function. This is executed by a session when
  the adversary decides to deliver a message $(\tau, \mu)$ to it. It makes
  use of the subsidiary functions $\id{msg-challenge}$ and
  $\id{msg-response}$ to handle the messages.
\end{itemize}
We observe that the protocol never aborts. We could make it do so if it
receives an invalid message, but we prefer to ignore invalid messages for the
sake of robustness.\footnote{%
  Note that this protocol would therefore require modification before it was
  acceptable in the simulation-based model of \cite{Shoup:1999:OFM}. There
  it is required that a key-exchange protocol terminate after a
  polynomially-bounded number of messages are delivered to it.}
\begin{figure}
  \begin{program}
    Function $\id{init}(n)$: \+ \\
      \FOR $i \in \Nupto{n}$ \DO \\ \ind
        $x \getsr \Nupto{|G|}$; \\
        $\mathbf{i}[i] \gets x$; \\
        $\mathbf{p}[i] \gets x P$; \- \\
      \RETURN $(\mathbf{p}, \mathbf{i})$;
    \- \\[\medskipamount]
    Function $\id{new-session}^{H_I(\cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
        (\mathbf{p}, x, i, j, s)$: \+ \\
      $X \gets \mathbf{p}[i]$;
      $X' \gets \mathbf{p}[j]$;
      $C \gets \emptyset$; \\
      $r \getsr \Nupto{|G|}$;
      $R \gets r P$;
      $Y \gets r X'$; \\
      $h \gets H_I(X, s, R, Y)$;
      $c \gets r \xor h$; \\
      \SEND $(\cookie{challenge}, R, c)$;
    \- \\[\medskipamount]
    Function $\id{message}^{H_I(\cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
        (\tau, \mu)$: \+ \\
      \IF $\tau = \cookie{challenge}$ \THEN $\id{msg-challenge}(\mu)$; \\
      \ELSE \IF $\tau = \cookie{response}$ \THEN $\id{msg-response}(\mu)$;
    \next
    Function $\id{msg-challenge}(\mu)$: \+ \\
      $(R', c') \gets \mu$; \\
      $Y' \gets x R'$; \\
      $h' \gets H_I(X', s, R', Y')$; \\
      $r' \gets c' \xor h'$; \\
      \IF $R' \ne r' P$ \THEN \RETURN; \\
      $C \gets C \cup \{R'\}$; \\
      $Z \gets r R'$; \\
      $(K_0, K_1) \gets H_K(Z)$; \\
      $\chi \gets E_{K_0}(Y')$; \\
      \SEND $(\cookie{response}, R, \chi)$;
    \- \\[\medskipamount]
    Function $\id{msg-response}(\mu)$: \+ \\
      $(R', \chi') \gets \mu$; \\
      \IF $R' \notin C$ \THEN \RETURN; \\
      $Z \gets r R'$; \\
      $(K_0, K_1) \gets H_K(Z)$; \\
      $Y' \gets D_{K_0}(\chi')$; \\
      \IF $Y' \ne Y$ \THEN \RETURN; \\
      \OUTPUT $K_1$;
      \STOP;
  \end{program}

  \caption{Formalization of $\Wkx$}
  \label{fig:wkx-formal}
\end{figure}
\subsubsection{Session states}
We must specify what the adversary obtains when it chooses to reveal a
session's state. Given the program in figure~\ref{fig:wkx-formal}, we can
see that the session state consists of the variables $(x, X, X', r, R, Y,
C)$.

However, $x$ is the owning party's long-term secret, and it seems
unreasonable to disclose this to an adversary who stops short of total
corruption.

The public keys $X$ and $X'$ are part of the adversary's input anyway, so
revealing them doesn't help. Similarly, the set $C$ of valid challenges
could have been computed easily by the adversary, since a group element $R'
\in C$ if and only if the session $S$ responded to some message
$(\cookie{challenge}, R', c')$.

The value $R = r P$ is easily computed given $r$, and besides is sent in the
clear in the session's first message. The expected response $Y = r X'$ is
also easily computed from $r$. The converse is also true, since $r$ can be
recovered from $R$ and $c$ in the session's challenge message and the value
$Y$. Besides, $r$ is necessary for computing $Z$ in response to incoming
challenges.

We conclude that the only `interesting' session state is $r$.
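The recovery of $r$ from public values plus $Y$ is mechanical: the mask
simply cancels. A small Python sketch (the toy group parameters and names
are illustrative assumptions only):

```python
import hashlib

# Toy group (illustrative only): the order-11 subgroup of Z_23^*.
q, p, P = 11, 23, 4
mul = lambda k, X=P: pow(X, k, p)

def H_I(X, s, R, Y):
    # Four-argument H_I, truncated to 16 bits for the toy exponent range.
    data = b'%d|%s|%d|%d' % (X, s, R, Y)
    return int.from_bytes(hashlib.sha256(data).digest()[:2], 'big')

x, s = 3, b'session-id'           # own private key and session-id
X, X_partner = mul(x), mul(7)     # own and partner's public keys

r = 5                             # the session's ephemeral secret
R, Y = mul(r), mul(r, X_partner)  # challenge point, expected response
c = r ^ H_I(X, s, R, Y)           # check-value from the challenge message

# Given the public (R, c) and the state value Y, the mask cancels and
# r is recovered exactly.
r_recovered = c ^ H_I(X, s, R, Y)
assert r_recovered == r
```

This is why revealing $Y$ is equivalent to revealing $r$ itself.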
\subsubsection{Security}
Having formally presented the protocol, we can now state our main theorem
about its security. The proof is given in appendix~\ref{sec:sk-proof}.
\begin{theorem}[SK-security of $\Wkx$]
  \label{thm:sk}
  Let $G$ be a cyclic group. Let $\E = (\kappa, E, D)$ be a symmetric
  encryption scheme. Then
  \begin{spliteqn*}
    \InSec{sk}(\Wkx^{G, \E}; t, n, q_S, q_M, q_I, q_K) \le
      2 q_S \bigl( \InSec{ind-cca}(\E; t', q_M, q_M) + {} \\
      \InSec{mcdh}(G; t', q_K) +
      n \,\InSec{mcdh}(G; t', q_M + q_I) \bigr) +
      \frac{n (n - 1)}{|G|} +
      \frac{2 q_M}{2^{\ell_I}}
  \end{spliteqn*}
  where $t' = t + O(n) + O(q_S) + O(q_M q_I) + O(q_K)$.
\end{theorem}
\subsection{Insecure protocol variants}
\label{sec:kx-insecure}

It's important to feed the session-id and verifier's public key to the random
oracle $H_I$ when constructing the check-value~$c$. Without these, the
protocol is vulnerable to attack. In this section, we consider some variants
of our protocol which hash less information, and show attacks against them.

To simplify the presentation, we shall consider Alice and Bob, and a third
character Carol. We shall be attacking a pair of matching sessions $A$
and~$B$, run by Alice and Bob respectively. Let Alice and Bob's private keys
be $x_A$ and~$x_B$, and let their public keys be $X_A = x_A P$ and $X_B = x_B
P$ respectively.
\subsubsection{Protocol diagram notation}
In order to keep the presentation as clear as possible, we use a simple
diagrammatic notation for describing protocol runs. A line of the form
\protocolrun{\textit{action} \ar[r] & \PRsession{S} & \textit{result}}
states that the adversary performs the given \textit{action} on session~$S$,
with the stated \textit{result}. The \textit{action} may be
\begin{itemize}
\item \textsf{Create session $(P_i, P_j, s)$}: the session is created,
  running in party~$P_i$, with partner~$P_j$ and session-id~$s$.
\item \textsf{Receive $\mu$}: the session is activated with an incoming
  message~$\mu$.
\item \textsf{Session-state reveal}: the adversary requests the session's
  internal state.
\end{itemize}
The \textit{result} may be
\begin{itemize}
\item \textsf{Send $\mu'$}: the session requests the delivery of the
  message~$\mu'$.
\item \textsf{Complete: $K$}: the session completes, outputting the key~$K$.
\item \textsf{State: $\sigma$}: the session's state is revealed to
  be~$\sigma$.
\item \textsf{(Ignore)}: the result of the action is unimportant.
\end{itemize}
\subsubsection{Omitting the session-id}
Consider a protocol variant where session $S$ sets $h_S = H_I(X_N, R_S,
Y_S)$, where $N$ is the session's partner. That is, we've omitted the
session-id from the hash. An adversary can cross over two sessions between
Alice and Bob. Here's how.

The attack assumes that Alice and Bob set up two pairs of matching sessions
with each other, thus.
\protocolrun{
  \PRcreate{Alice}{Bob}{s} & \PRsession{A} &
    \PRsend{challenge}{R_A, c_A} \\
  \PRcreate{Bob}{Alice}{s} & \PRsession{B} &
    \PRsend{challenge}{R_B, c_B} \\
  \PRcreate{Alice}{Bob}{s'} & \PRsession{A'} &
    \PRsend{challenge}{R_{A'}, c_{A'}} \\
  \PRcreate{Bob}{Alice}{s'} & \PRsession{B'} &
    \PRsend{challenge}{R_{B'}, c_{B'}}
}
Observe that the session pairs use distinct session-ids $s \ne s'$, so this
is allowed. Now the adversary crosses over the challenges, using the second
pair of sessions to provide responses to the challenges issued by the first
pair. By revealing the state in the second pair of sessions, the adversary
can work out the (probably different) session keys accepted by the first
pair.
\protocolrun{
  \PRreceive{challenge}{R_B, c_B} & \PRsession{A'} &
    \PRsend{response}{R_{A'}, E_{K_{A'B,0}}(x_A R_B)} \\
  \PRreceive{challenge}{R_A, c_A} & \PRsession{B'} &
    \PRsend{response}{R_{B'}, E_{K_{B'A,0}}(x_B R_A)} \\
  \PRreceive{challenge}{R_{A'}, c_{A'}} & \PRsession{A} & \PRignore \\
  \PRreceive{challenge}{R_{B'}, c_{B'}} & \PRsession{B} & \PRignore \\
  \PRreveal & \PRsession{A'} & r_{A'} \\
  \PRreveal & \PRsession{B'} & r_{B'} \\
  \PRreceive{response}{R_{B'}, E_{K_{B'A,0}}(x_B R_A)} & \PRsession{A} &
    \PRcomplete{K_{AB',1}} \\
  \PRreceive{response}{R_{A'}, E_{K_{A'B,0}}(x_A R_B)} & \PRsession{B} &
    \PRcomplete{K_{BA',1}} \\
}
The adversary can now compute $K_{AB'} = H_K(r_{B'} R_A)$ and $K_{BA'} =
H_K(r_{A'} R_B)$. Safely in possession of both keys, the adversary can now
read and impersonate freely.
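The root cause -- that nothing session-specific enters the check-value --
can be demonstrated directly. In this Python sketch (the toy group, hash
truncation, and function names are illustrative assumptions), a challenge
generated for one session-id is accepted verbatim by a session with a
different session-id, because the weakened hash never sees the session-id:

```python
import hashlib

# Toy group (illustrative parameters only): order-11 subgroup of Z_23^*.
q, p, P = 11, 23, 4
mul = lambda k, X=P: pow(X, k, p)

def H_I(X, R, Y):
    # Weakened variant: the session-id is NOT among the hash inputs.
    data = b'%d|%d|%d' % (X, R, Y)
    return int.from_bytes(hashlib.sha256(data).digest()[:2], 'big')

x_B = 7                           # Bob's private key
X_B = mul(x_B)                    # Bob's public key

def make_challenge(r, X_verifier):
    # Alice's challenge aimed at Bob; no session-id is bound in.
    R = mul(r)
    return R, r ^ H_I(X_verifier, R, mul(r, X_verifier))

def accept(x, X, R, c):
    # Bob's check is identical in every one of his sessions, whatever
    # their session-ids, so it cannot tell the sessions apart.
    r = c ^ H_I(X, R, mul(x, R))
    return mul(r) == R

# A challenge created for session-id s is replayed into session-id s';
# both of Bob's sessions run the very same check and accept it.
R_A, c_A = make_challenge(2, X_B)
assert accept(x_B, X_B, R_A, c_A)   # session s accepts
assert accept(x_B, X_B, R_A, c_A)   # ...and so does session s'
```

Note that `accept` takes no session-id argument at all; with the full
protocol's $H_I(X, s, R, Y)$ the replayed challenge would fail in any
session other than the intended one.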
\subsubsection{Omitting the partner's public key}
Now consider a protocol variant where session $S$ sets $h_S = H_I(s, R_S,
Y_S)$, where $s$ is the session-id. An adversary can use sessions with
Carol to attack a session between Alice and Bob. Here's how the sessions are
set up.
\protocolrun{
  \PRcreate{Alice}{Bob}{s} & \PRsession{A} &
    \PRsend{challenge}{R_A, c_A} \\
  \PRcreate{Bob}{Alice}{s} & \PRsession{B} &
    \PRsend{challenge}{R_B, c_B} \\
  \PRcreate{Alice}{Carol}{s} & \PRsession{A'} &
    \PRsend{challenge}{R_{A'}, c_{A'}} \\
  \PRcreate{Bob}{Carol}{s} & \PRsession{B'} &
    \PRsend{challenge}{R_{B'}, c_{B'}}
}
Although each of Alice and Bob has two sessions with session-id~$s$, this is
allowed, since they are with different partners. The rest of the attack
proceeds identically to the previous case.
\subsection{Deniability}
\label{sec:denial}

We have claimed that the Wrestlers key-exchange protocol is \emph{deniable}.
In this section, we define what we mean, explain the limits of the
deniability of the protocol as currently presented, fix the protocol with an
additional pass (which can be collapsed to a single message flow by breaking
the protocol's symmetry), and prove the deniability of the resulting
protocol.

Our notion of deniability is taken from Di~Raimondo, Gennaro and Krawczyk
\cite{DiRaimondo:2006:DAK}, except that, as usual, we opt for a concrete
security approach.
\subsubsection{Discussion}
Our definition for deniability is that, for any adversary interacting with
participants in the protocol, a simulator exists which can compute the same
things as the adversary. In particular, since an adversary could output a
transcript of the interactions between itself and the parties, it would
follow that a simulator could do this too. If the simulator is effective,
its output is indistinguishable from that of the real adversary, and hence no
`judge' (distinguisher) should be persuaded by evidence presented by someone
who claims to have witnessed or participated in an interaction.

We work again in the model described in section~\ref{sec:um}. That is, our
adversary has complete control over the ordering and delivery of all
messages. The adversary is also able, at any time, to reveal the state of
any session. However, deniability is obviously impossible against an
adversary who can \emph{corrupt} other parties, since simulating such an
adversary's actions would necessarily require the simulator to compute
private keys corresponding to the known public keys, and this is (we
believe) difficult, because an efficient algorithm for doing this could
easily attack our protocol, which we have already proved secure. Therefore,
we forbid the adversary from corrupting parties.
In order to allow the adversary to participate in the protocol, rather than
merely observing it, we need to give it one or more private keys. We could
modify the initialization function \id{init} from figure~\ref{fig:wkx-formal}
to give the adversary a private key, but there's an easier way: we can just
give the adversary a number of private keys in its auxiliary input.
\subsubsection{Definitions}
Let $\Pi$ be a key-exchange protocol, in the model described in
section~\ref{sec:um}. We use the simulation framework of
section~\ref{sec:sim}. We define the initialization function $I_\Pi$ to be
the initialization function of $\Pi$, as before, and the corresponding
world~$W_\Pi(\iota, \sigma, \tau, \mu)$ is a fairly straightforward mapping
of the adversary's possible actions to the simulation model:
\begin{itemize}
\item The invocation $\cookie{new-session}$ with $\mu = (i, j, s)$ creates a
  new session on party~$P_i$, with partner~$P_j$ and session-id~$s$. The
  reply $\rho = (\delta, m)$ is a \emph{decision} $\delta \in
  \{\cookie{continue}, \cookie{abort}, \cookie{complete}\}$ and an output
  message $m \in \Bin^* \cup \{\bot\}$. If $m \ne \bot$ then $m$ is a
  message to be sent to the matching session (if any).
\item The invocation $\cookie{deliver}$ with $\mu = (i, j, s, m)$ delivers
  message $m$ to the session $S = (P_i, P_j, s)$. The response $\rho$ is as
  for $\cookie{new-session}$ invocations.
\item The invocation $\cookie{reveal-session-state}$ with $\mu = (i, j, s)$
  reveals to the adversary the state of the running session $S = (P_i, P_j,
  s)$. The response $\rho$ is the session's state if $S$ is indeed a running
  session, or $\bot$ otherwise.
\item The invocation $\cookie{reveal-session-key}$ with $\mu = (i, j, s)$
  reveals to the adversary the session-key of the completed session~$S =
  (P_i, P_j, s)$. The response $\rho$ is the session key~$K$ if the session
  is indeed complete, or $\bot$ otherwise.
\end{itemize}
There are no invocations corresponding to the adversary actions of corrupting
parties (since deniability against a corrupting adversary is impossible, as
discussed earlier), or session expiry or challenging (since they are useless
in this context).
We measure the deniability of a protocol~$\Pi$, using a given simulator~$S$,
by the insecurity function $\InSec{sim}(W_\Pi, I_\Pi, S; t_D, t_A,
\mathcal{Q}_D, \mathcal{Q}_A, \mathcal{R}, \mathcal{U})$ of
definition~\ref{def:sim}. The interaction bounds $\mathcal{R} = (q_S, q_M)$
we place on the adversary are on the number ($q_S$) of \cookie{new-session}
and ($q_M$) \cookie{deliver} invocations it makes.

We shall (informally) say that a protocol~$\Pi$ is deniable if there is a
simulator~$S_\Pi$ for which the insecurity function is small for appropriate
resource bounds on the adversary and distinguisher.
\subsubsection{The current protocol}
As it stands, $\Wkx$ isn't deniable, according to our definition, for
arbitrary auxiliary inputs. Let's see why.

Suppose that Bob is an informant for the secret police, and wants to convince
a judge that Alice is involved in subversive activities in cyberspace.
Unfortunately, Bob's job is difficult, because of the strong zero-knowledge
nature of the Wrestlers identification protocol. However, Bob can work with
the judge to bring him the evidence necessary to convict Alice. Here's how.

Alice's public key is $A$, and Bob's public key is $B$. The judge chooses
some session-id $s$, and $r \inr \Nupto{q}$. He computes $R = r P$ and $c =
r \xor H_I(B, s, R, r A)$, and gives Bob the triple $(s, R, c)$, keeping $r$
secret. Bob can now persuade Alice to enter into a key-exchange with him,
with session-id $s$. He uses $(R, c)$ as his challenge message. When Alice
sends back her response $(R', \chi)$ (because Bob's challenge is correctly
formed), Bob aborts and shows $(R', \chi)$ to the judge. The judge knows $r$
and can therefore decrypt $\chi$ and check the response. If it's wrong, then
Bob was cheating and gets demoted -- he can't get the right answer by himself
because that would require him to impersonate Alice. If it's right, Alice is
really a subversive element and `disappears' in the night.
We shall show in theorem~\ref{thm:denial} below that this is basically the
only attack against the deniability of the protocol. However, we can do
better.

\subsubsection{Fixing deniability}
We can fix the protocol to remove even the attack discussed above. The
change is simple: we feed \emph{both} parties' challenges to the hash
function~$H_I$ rather than just the sender's. We use a five-argument hash
function (random oracle) $H_I\colon G^2 \times \Bin^{\ell_S} \times G^2 \to
\Bin^{\ell_I}$. We introduce a new message pass at the beginning of the
protocol: each session simply sends its challenge point $R = r P$ in the
clear as a `pre-challenge'. The actual challenge is $R$ together with $c =
r \xor H_I(R', X, s, R, r X')$, where $R'$ is the pre-challenge received
from the matching session and $X'$ is the partner's public key.

By breaking symmetry, we can reduce the communication complexity of this
variant to four messages. As before, we analyse the symmetrical version.
The extra flow might seem a high price to pay, but we shall see that it has
additional benefits beyond deniability.
A summary of the new protocol is shown in figure~\ref{fig:wdkx}, and the
formal description is shown in figure~\ref{fig:wdkx-formal}.

\begin{figure}
  \begin{description}
  \item[Setup] Group $G = \langle P \rangle$; $|G| = q$ is prime.
    $H_I(\cdot, \cdot, \cdot, \cdot, \cdot)$ and $H_K(\cdot)$ are secure
    hashes. $\E = (\kappa, E, D)$ is an IND-CCA2 symmetric encryption
    scheme.
  \item[Parties] $U_i$ for $0 \le i < n$.
  \item[Private keys] $x_i \inr \Nupto{q}$.
  \item[Public keys] $X_i = x_i P$.
  \end{description}
2570 \[ \begin{graph}
2571 !{0; <8cm, 0cm>: <0cm, 3cm>::}
2572 *+[F]\dbox{$r_i \getsr I$; $R_i \gets r_i P$} ="i-1"
2573 [d]
2574 *+[F]\dbox{$c_i \gets r_i \xor H_I(R_j, X_i, s, R_i, r_i X_j)$} ="i0"
2575 [d]
2576 *+[F]\dbox{Check $R_j = \bigl(c_j \xor H_I(x_i R_j)\bigr) P$ \\
2577 $Z \gets r_i R_j$; $K \gets H_K(0, Z)$ \\
2578 $\chi_i \gets E_K(x_i R_j)$} ="i1"
2579 [d]
2580 *+[F]\dbox{Check $D_K(\chi_j) = r_i X_j$ \\
2581 Shared key is $H_K(1, Z)$} ="i2"
2582 "i-1" [r]
2583 *+[F]\dbox{$r_j \getsr I$; $R_j \gets r_j P$} ="j-1"
2584 [d]
2585 *+[F]\dbox{$c_j \gets r_j \xor H_I(R_i, X_j, s, R_j, r_j X_i)$} ="j0"
2586 [d]
2587 *+[F]\dbox{Check $R_i = \bigl(c_i \xor H_I(x_j R_i)\bigr) P$ \\
2588 $Z \gets r_j R_i$; $K \gets H_K(0, Z)$ \\
2589 $\chi_j \gets E_K(x_j R_i)$} ="j1"
2590 [d]
2591 *+[F]\dbox{Check $D_K(\chi_i) = r_j X_i$ \\
2592 Shared key is $H_K(1, Z)$} ="j2"
2593 %
2594 "i-1" : |(0)/3.25cm/*+{R_i} "j0"
2595 "i0" : |(0)/3.25cm/*+{(R_i, c_i)} "j1"
2596 "i1" : |(0)/3.25cm/*+{(R_i, \chi_i)} "j2"
2597 "j-1" : |(0)/3.25cm/*+{R_j} "i0"
2598 "j0" : |(0)/3.25cm/*+{(R_j, c_j)} "i1"
2599 "j1" : |(0)/3.25cm/*+{(R_j, \chi_j)} "i2"
2600 \end{graph} \]
2601
2602 \caption{Summary of the Deniable Wrestlers Key Exchange protocol, $\Wdkx$}
2603 \label{fig:wdkx}
2604 \end{figure}
2605
\begin{figure}
  \begin{program}
    Function $\id{init}(n)$: \+ \\
      \FOR $i \in \Nupto{n}$ \DO \\ \ind
        $x \getsr \Nupto{|G|}$; \\
        $\mathbf{i}[i] \gets x$; \\
        $\mathbf{p}[i] \gets x P$; \- \\
      \RETURN $(\mathbf{p}, \mathbf{i})$;
    \- \\[\medskipamount]
    Function $\id{new-session}^{H_I(\cdot, \cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
        (\mathbf{p}, x, i, j, s)$: \+ \\
      $X \gets \mathbf{p}[i]$;
      $X' \gets \mathbf{p}[j]$;
      $C \gets \emptyset$; \\
      $r \getsr \Nupto{|G|}$;
      $R \gets r P$;
      $Y \gets r X'$; \\
      \SEND $(\cookie{pre-challenge}, R)$;
    \- \\[\medskipamount]
    Function $\id{message}^{H_I(\cdot, \cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
        (\tau, \mu)$: \+ \\
      \IF $\tau = \cookie{pre-challenge}$ \THEN
        $\id{msg-pre-challenge}(\mu)$; \\
      \ELSE \IF $\tau = \cookie{challenge}$ \THEN
        $\id{msg-challenge}(\mu)$; \\
      \ELSE \IF $\tau = \cookie{response}$ \THEN $\id{msg-response}(\mu)$;
    \next
    Function $\id{msg-pre-challenge}(\mu)$: \+ \\
      $R' \gets \mu$; \\
      $h \gets H_I(R', X, s, R, Y)$; \\
      $c \gets r \xor h$; \\
      \SEND $(\cookie{challenge}, R, c)$;
    \- \\[\medskipamount]
    Function $\id{msg-challenge}(\mu)$: \+ \\
      $(R', c') \gets \mu$; \\
      $Y' \gets x R'$; \\
      $h' \gets H_I(R, X', s, R', Y')$; \\
      $r' \gets c' \xor h'$; \\
      \IF $R' \ne r' P$ \THEN \RETURN; \\
      $C \gets C \cup \{R'\}$; \\
      $Z \gets r R'$; \\
      $(K_0, K_1) \gets H_K(Z)$; \\
      $\chi \gets E_{K_0}(Y')$; \\
      \SEND $(\cookie{response}, R, \chi)$;
    \- \\[\medskipamount]
    Function $\id{msg-response}(\mu)$: \+ \\
      $(R', \chi') \gets \mu$; \\
      \IF $R' \notin C$ \THEN \RETURN; \\
      $Z \gets r R'$; \\
      $(K_0, K_1) \gets H_K(Z)$; \\
      $Y' \gets D_{K_0}(\chi')$; \\
      \IF $Y' \ne Y$ \THEN \RETURN; \\
      \OUTPUT $K_1$;
      \STOP;
  \end{program}

  \caption{Deniable key-exchange: formalization of $\Wdkx$}
  \label{fig:wdkx-formal}
\end{figure}

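To make the challenge mechanism above concrete, here is a toy Python sketch of the $c = r \xor H_I(R', X, s, R, Y)$ construction and its verification.  Everything here is an illustrative assumption: the tiny Schnorr-style group, the hash truncation, and the helper names are inventions for exposition, and a real implementation would use an elliptic-curve group of cryptographic size with a careful encoding of hash inputs.

```python
import hashlib

# Toy Schnorr-style group standing in for G = <P>: the subgroup of order
# q = 101 in Z_607^*.  Sizes are illustrative only.
p = 607                       # field prime; 101 divides 606
q = 101                       # prime group order
g = pow(3, (p - 1) // q, p)   # generator of the order-q subgroup
ELL_I = 16                    # hash output length ell_I, in bits (toy size)

def H_I(R_peer, X, s, R, Y):
    """Five-argument random-oracle stand-in, truncated to ELL_I bits."""
    data = b'|'.join(str(v).encode() for v in (R_peer, X, s, R, Y))
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % (1 << ELL_I)

def make_challenge(r, X_own, X_peer, R_peer, s):
    """Sender: from ephemeral r and the peer's pre-challenge R_peer,
    build (R, c) with c = r xor H_I(R', X, s, R, r X')."""
    R = pow(g, r, p)
    Y = pow(X_peer, r, p)         # r X' -- the Diffie-Hellman value
    return R, r ^ H_I(R_peer, X_own, s, R, Y)

def check_challenge(R, c, x_own, R_own, X_peer, s):
    """Receiver: recover r' = c xor H_I(...) and verify R = r' P."""
    Y = pow(R, x_own, p)          # x R' equals r X', so both hashes agree
    r_prime = c ^ H_I(R_own, X_peer, s, R, Y)
    return pow(g, r_prime, p) == R
```

The receiver recomputes the same hash input because $x R' = r X'$ is the shared Diffie--Hellman value, so a genuine challenge verifies while a tampered $c$ yields the wrong $r'$.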
The security of this variant is given by the following theorem, whose proof
is in appendix~\ref{sec:sk2-proof}.
\begin{theorem}[SK-security of $\Wdkx$]
  \label{thm:sk2}
  Let $G$ be a cyclic group.  Let $\E = (\kappa, E, D)$ be a symmetric
  encryption scheme.  Then
  \[ \InSec{sk}(\Wdkx^{G, \E}; t, n, q_S, q_M, q_I, q_K) =
     \InSec{sk}(\Wkx^{G, \E}; t, n, q_S, q_M, q_I, q_K). \]
\end{theorem}

\subsubsection{Deniability of the Wrestlers protocols}
In order to quantify the level of deniability our protocols provide, we shall
impose a limit on the auxiliary input to the adversary.  In particular, we
shall use $\mathcal{U}$ of definition~\ref{def:sim} to count the number of
\emph{challenges} in the auxiliary input.  That is, $\mathcal{U} = n_C$ is
the number of tuples $(i, j, s, R', R, c)$ for which there is an $r$ such
that $R = r P$ and $c = r \xor H_I(R', X_j, s, R, r X_i)$ (or without the
$R'$ for $\Wkx$).

With this restriction in place, we can state the following theorem about the
deniability of our protocols.
\begin{theorem}[Deniability of $\Wkx$ and $\Wdkx$]
  \label{thm:denial}
  There exist simulators $S_\Wkx$ and $S_\Wdkx$ such that
  \[ \InSec{sim}(W_{\Wkx^{G, \E}}, I_{\Wkx^{G, \E}}, S_{\Wkx^{G, \E}};
       t_D, t_A, \mathcal{Q}_D, \mathcal{Q}_A, (q_S, q_M), 0) \le
       \frac{q_M}{2^{\ell_I}}
  \]
  and
  \[ \InSec{sim}(W_{\Wdkx^{G, \E}}, I_{\Wdkx^{G, \E}}, S_{\Wdkx^{G, \E}};
       t_D, t_A, \mathcal{Q}_D, \mathcal{Q}_A, (q_S, q_M), n_C) \le
       \frac{n_C q_S}{|G|} +
       \frac{q_M}{2^{\ell_I}}.
  \]
  The running time of the simulators is $O(t_A) + O(\mathcal{Q}_A q_M)$.
\end{theorem}
\begin{proof}
  The simulators $S_\Wkx$ and $S_\Wdkx$ are very similar.  We describe both
  here.  Both are fake-world simulators, working as follows.
  \begin{enumerate}
  \item Initially, the simulator constructs simulated parties $P_i$, for $0
    \le i < n$, giving each the public key $X_i$ from the common input.
  \item Suppose the adversary requests creation of a new session $S = (P_i,
    P_j, s)$.  Then the simulator creates a new session, including a random
    value $r_S \inr \Nupto{|G|}$, and computes $R_S = r_S P$ and $Y_S = r_S
    X_j$.  For $\Wdkx$, it sends the message $(\cookie{pre-challenge}, R_S)$;
    for $\Wkx$, it additionally computes $h = H_I(X_i, s, R_S, Y_S)$ and
    sends $(\cookie{challenge}, R_S, r_S \xor h)$.
  \item Suppose, for $\Wdkx$, the adversary sends a message
    $(\cookie{pre-challenge}, R')$ to a session~$S = (P_i, P_j, s)$.  The
    simulator computes $h = H_I(R', X_i, s, R_S, Y_S)$, and sends
    $(\cookie{challenge}, R_S, r_S \xor h)$.
  \item Suppose the adversary sends a message $(\cookie{challenge}, R', c')$
    to session $S = (P_i, P_j, s)$.  The simulator doesn't know $x_i$.
    \begin{enumerate}
    \item If $R' = R_T$ for some other simulated session $T$, then the
      simulator knows $r_T$ such that $R_T = r_T P$.  Let $Y' = r_T X_i$.
      The simulator computes $h = H_I(X_j, s, R', Y')$ (resp.\ $h = H_I(R_S,
      X_j, s, R', Y')$) for $\Wkx$ (resp.\ $\Wdkx$) and checks that $r_T = c'
      \xor h$.  If not, the simulator discards the message.  Otherwise, it
      computes $(K_0, K_1) = H_K(r_S R')$, and sends the message
      $(\cookie{response}, R_S, E_{K_0}(Y'))$.
    \item \label{en:simextract} Otherwise the simulator runs the extractor
      $T_\Wident$ on the adversary's history of queries $H_I(X_j, s, R',
      \cdot)$ (resp.\ $H_I(R_S, X_j, s, R', \cdot)$) for $\Wkx$ (resp.\
      $\Wdkx$).  The extractor returns $(r', Y')$.  If $Y' = \bot$ then the
      simulator ignores the message.  Otherwise, the simulator computes
      $(K_0, K_1) = H_K(r_S R')$ and sends back $(\cookie{response}, R_S,
      E_{K_0}(Y'))$.
    \end{enumerate}
  \item Suppose the adversary sends a message $(\cookie{response}, R', \chi)$
    to session $S = (P_i, P_j, s)$.  The simulator computes $(K_0, K_1) =
    H_K(r_S R')$, and decrypts $Y' = D_{K_0}(\chi)$.  If $Y' \ne Y_S$ then
    the simulator discards the message.  Otherwise, it makes the simulated
    session complete, and outputs key $K_1$.
  \item Finally, if the adversary reveals a session's state, the simulator
    reveals $r_S$ as required; if the adversary reveals a session-key, the
    simulator reveals the $K_1$ it output.
  \end{enumerate}
  The only point where the simulation fails to be perfect is in
  step~\ref{en:simextract}.  Let $R'$ and $c'$ be the values from an incoming
  challenge message to session $S = (P_i, P_j, s)$.  Let $r'$ be such that
  $R' = r' P$ and let $Y' = r' X_i$.  If a random-oracle query $H_I(X_j, s,
  R', Y')$ (or $H_I(R_S, X_j, s, R', Y')$ for $\Wdkx$) has been issued, then
  there are a number of possibilities.  Let $h'$ be the result of this query.
  \begin{itemize}
  \item The adversary made this query.  Then the extractor will find it and
    return $Y'$ if $c' = h' \xor r'$, or $\bot$ otherwise.
  \item Some simulated session $U = (P_{i'}, P_{j'}, s')$ made this query.
    But simulated sessions only make $H_I$-queries when constructing
    challenges, so $R' = R_U$ for some session~$U$.  But the simulator does
    something different in that case.
  \item In $\Wdkx$, the quadruple $(s, R_S, R', c')$ came from the
    adversary's auxiliary input.  In this case the simulator must fail.  But
    $R_S = r_S P$, and $r_S$ was chosen uniformly at random.  If there are at
    most $n_C$ challenge sets in the auxiliary input then this happens with
    probability at most $n_C/|G|$ for any given session.
  \end{itemize}
  We conclude that the simulator fails with probability
  \[ \frac{q_M}{2^{\ell_I}} + \frac{q_S n_C}{|G|}. \]
  (Note that we only consider $n_C = 0$ for $\Wkx$.)  No adversary can
  distinguish the simulator from a real interaction unless the simulator
  fails, and the simulator is a fake-world simulator.  We therefore apply
  proposition~\ref{prop:fakesim}; the theorem follows.
\end{proof}


\subsection{Practical issues}
\label{sec:practice}

\subsubsection{Denial of service from spoofers}
The adversary we considered in~\ref{sec:um} is very powerful.  Proving
security against such a powerful adversary is good and useful.  However,
there are other useful security properties we might like against weaker
adversaries.

Eavesdropping on the Internet is actually nontrivial.  One needs control of
one of the intermediate routers between two communicating parties.  (There
are tricks one can play involving subversion of DNS servers, but this is also
nontrivial.)  However, injecting packets with bogus source addresses is very
easy.

Layering the protocol over TCP \cite{RFC0793} ameliorates this problem because
an adversary needs to guess or eavesdrop in order to obtain the correct
sequence numbers for a spoofed packet; but the Wrestlers protocol is
sufficiently simple that we'd like to be able to run it over UDP
\cite{RFC0768}, for which spoofing is trivial.

Therefore, it's possible for anyone on the 'net to send Alice a spurious
challenge message $(R, c)$.  She will then compute $Y = a R$, recover $r' = c
\xor H_I(\ldots, R, Y)$, check that $R = r' P$, and so on.  That's at least
two scalar multiplications to respond to a spoofed packet, and even with very
efficient group operations, coping with this kind of simple denial-of-service
attack might be difficult.

A straightforward solution is to use the Deniable variant of the protocol,
and require each challenge message to quote the matching session's challenge
$R'$.  That is, upon receiving a $(\cookie{pre-challenge}, R')$, the
session sends $(\cookie{challenge}, R', R, c)$.  Alice then rejects any
\emph{incoming} challenge message which doesn't quote her current challenge
value.  Now only eavesdroppers can force her to perform expensive
computations.

Indeed, one need not quote the entire challenge $R'$: it suffices to send
some short hard-to-guess hash of it, maybe just the bottom 128 bits or so.

This can't reduce security.  Consider any adversary attacking this protocol
variant.  We can construct an adversary which attacks the original protocol
just as efficiently.  The new adversary attaches fake $R'$ values to
challenges output by other parties, and strips them off on delivery,
discarding messages with incorrect $R'$ values.

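The cheap pre-filter described above might be sketched as follows; the helper names are hypothetical, and truncating SHA-256 to 128 bits is just one arbitrary choice of `short hard-to-guess hash'.

```python
import hashlib
import hmac

def challenge_tag(R_peer: bytes, bits: int = 128) -> bytes:
    """Short quote of the peer's challenge R': the bottom `bits` bits of a
    hash of it, rather than the whole group element."""
    return hashlib.sha256(R_peer).digest()[: bits // 8]

def worth_processing(my_R: bytes, quoted_tag: bytes) -> bool:
    """Cheap check run before the two expensive scalar multiplications:
    reject any incoming challenge that doesn't quote our current challenge."""
    return hmac.compare_digest(challenge_tag(my_R), quoted_tag)
```

Only after this comparison succeeds does the session go on to the scalar multiplications, so off-path spoofers can no longer force expensive work.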
\subsubsection{Key confirmation}
Consider an application which uses the Wrestlers protocol to re-exchange keys
periodically.  The application may be willing to \emph{receive} incoming
messages using the new key as soon as the key exchange completes
successfully; however, it should refrain from \emph{sending} messages under
the new key until it knows that its partner has also completed.  The partner
may not have received the final response message, and therefore be unwilling
to accept a new key; it will therefore (presumably) reject incoming messages
under this new key.

While key confirmation is unnecessary for \emph{security}, it has
\emph{practical} value, since it solves the above problem.  If the
application sends a \cookie{switch} message when it `completes', it can
signal its partner that it is indeed ready to accept messages under the new
key.  Our implementation sends $(\cookie{switch-rq}, E_{K_0}(H_S(0, R, R')))$
as its switch message; the exact contents aren't important.  Our
retransmission policy (below) makes use of an additional message
\cookie{switch-ok}, which can be defined similarly.

It's not hard to show that this doesn't adversely affect the security of the
protocol, since the encrypted message is computed only from public values.
In the security proof, we modify the generation of \cookie{response}
messages, so that the plaintexts are a constant string rather than the true
responses, guaranteeing that the messages give no information about the
actual response.  To show this is unlikely to matter, we present an adversary
attacking the encryption scheme by encrypting either genuine responses or
fake constant strings.  Since the adversary can't distinguish which is being
encrypted (by the definition of IND-CCA security,
definition~\ref{def:ind-cca}), the change goes unnoticed.  To incorporate our
switch messages, we need only modify this adversary to implement the modified
protocol.  This is certainly possible, since the messages contain (hashes of)
public values.  We omit the details.

\subsubsection{Optimization and piggybacking}
We can optimize the number of messages sent by combining them.  Here's one
possible collection of combined messages:
\begin{description}
\item [\cookie{pre-challenge}] $R$
\item [\cookie{challenge}] $R'$, $R$, $c = r \xor H_I(R', X, s, R, r X')$
\item [\cookie{response}] $R'$, $R$, $c$, $E_{K_0}(x R')$
\item [\cookie{switch}] $R'$, $E_{K_0}(x R', H_S(0, R, R'))$
\item [\cookie{switch-ok}] $R'$, $E_{K_0}(H_S(1, R, R'))$
\end{description}
The combination is safe:
\begin{itemize}
\item the \cookie{switch} and \cookie{switch-ok} messages are safe by the
  argument above; and
\item the other recombinations can be performed and undone in a `black box'
  way, by an appropriately defined SK-security adversary.
\end{itemize}

\subsubsection{Unreliable transports}
The Internet's User Datagram Protocol (UDP) \cite{RFC0768} is a simple,
unreliable protocol for transmitting datagrams.  However, it's very
efficient, and rather attractive as a transport for datagram-based
applications such as virtual private networks (VPNs).  Since UDP is a
best-effort rather than a reliable transport, it can occasionally drop
packets.  Therefore it is necessary for a UDP application to be able to
retransmit messages which appear to have been lost.

We recommend the following simple retransmission policy for running the
Wrestlers protocol over UDP.
\begin{itemize}
\item Initially, send out the \cookie{pre-challenge} message every minute.
\item On receipt of a \cookie{pre-challenge} message, send the corresponding
  full \cookie{challenge}, but don't retain any state.
\item On receipt of a (valid) \cookie{challenge}, record the challenge value
  $R'$ in a table, together with $K = (K_0, K_1)$ and the response $Y' = x
  R'$.  If the table is full, overwrite an existing entry at random.  Send
  the corresponding \cookie{response} message, and retransmit it every ten
  seconds or so.
\item On receipt of a (valid) \cookie{response}, discard any other
  challenges, and stop sending \cookie{pre-challenge} and \cookie{response}
  retransmits.  At this point, the basic protocol described above would
  \emph{accept}, so the key $K_1$ is known to be good.  Send the
  \cookie{switch} message, including its response to the (now known-good)
  sender's challenge.
\item On receipt of a (valid) \cookie{switch}, send back a \cookie{switch-ok}
  message and stop retransmissions.  It is now safe to start sending messages
  under $K_1$.
\item On receipt of a (valid) \cookie{switch-ok}, stop retransmissions.  It
  is now safe to start sending messages under $K_1$.
\end{itemize}

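The policy above can be viewed as a small state machine; the following Python sketch is one possible shape for it.  The class name, table size, and the way timers are left to the caller are all illustrative assumptions, not part of the protocol.

```python
import secrets

class WrestlersRetransmit:
    """Sketch of the retransmission policy for Wrestlers-over-UDP.
    Timer handling is abstract: `phase` tells the caller which message
    to keep retransmitting (pre-challenge every minute, response every
    ten seconds or so), and `done` means traffic may flow under K_1."""
    TABLE_SIZE = 32   # arbitrary bound on remembered challenges

    def __init__(self):
        self.table = {}                  # R' -> (K, Y'), bounded table
        self.phase = 'pre-challenge'

    def on_challenge(self, R_peer, K, Y):
        # Record the challenge; evict a random victim if the table is full.
        if R_peer not in self.table and len(self.table) >= self.TABLE_SIZE:
            del self.table[secrets.choice(list(self.table))]
        self.table[R_peer] = (K, Y)
        self.phase = 'response'

    def on_response(self, R_peer):
        if R_peer not in self.table:
            return False                 # unknown challenge: ignore
        self.table = {R_peer: self.table[R_peer]}   # discard the others
        self.phase = 'switch'            # K_1 is good; send switch
        return True

    def on_switch(self):
        self.phase = 'switch-ok'         # reply, stop retransmitting

    def on_switch_ok(self):
        self.phase = 'done'              # safe to send under K_1
```

Note how the random eviction mirrors the `overwrite an existing entry at random' rule, which bounds the state a flood of (valid-looking) challenges can consume.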
\subsubsection{Key reuse}
Suppose our symmetric encryption scheme $\E$ is not only IND-CCA secure
(definition~\ref{def:ind-cca}) but also provides integrity of plaintexts
\cite{Bellare:2000:AER} (or, alternatively, is an AEAD scheme \fixme{AEAD
cite}).  Then we can use it to construct a secure channel, by including
message type and sequence number fields in the plaintexts, along with the
message body.  If we do this, we can actually get away with just the one key
$K = H_K(Z)$ rather than both $K_0$ and $K_1$.

To do this, it is essential that the plaintext messages (or additional data)
clearly distinguish between the messages sent as part of the key-exchange
protocol and messages sent over the `secure channel'.  Otherwise, there is a
protocol-interference attack: an adversary can replay key-exchange
ciphertexts to insert the corresponding plaintexts into the channel.

We offer a sketch proof of this claim in appendix~\ref{sec:sc-proof}.

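The type-and-sequence framing just described might look like the following sketch; the tag values and field widths are illustrative choices, not prescribed by the protocol.

```python
import struct

# Plaintext framing for the single-key variant: a one-byte type tag and a
# 64-bit sequence number precede every message body, so key-exchange
# traffic and channel traffic can never be confused after decryption.
TYPE_KEYEXCH = 0   # messages belonging to the key-exchange protocol
TYPE_CHANNEL = 1   # messages carried over the secure channel

_HEADER = struct.Struct('>BQ')

def frame(msg_type: int, seq: int, body: bytes) -> bytes:
    """Build the plaintext handed to the INT-PTXT/AEAD encryption."""
    return _HEADER.pack(msg_type, seq) + body

def unframe(plaintext: bytes):
    """Split a decrypted plaintext back into (type, sequence, body)."""
    msg_type, seq = _HEADER.unpack_from(plaintext)
    return msg_type, seq, plaintext[_HEADER.size:]
```

A receiver that checks the type tag and sequence number after decryption rejects replayed key-exchange ciphertexts instead of delivering them as channel data, which is exactly the protocol-interference attack the framing is meant to block.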
%%%--------------------------------------------------------------------------

\section{Conclusions}
\label{sec:conc}

We have presented new protocols for identification and authenticated
key-exchange, and proven their security.  We have shown them to be efficient
and simple.  We have also shown that our key-exchange protocol is deniable.
Finally, we have shown how to modify the key-exchange protocol for practical
use, and proven that this modification is still secure.

%%%--------------------------------------------------------------------------

\section{Acknowledgements}

The Wrestlers Protocol is named after the Wrestlers pub in Cambridge where
Clive Jones and I worked out the initial design.

%%%--------------------------------------------------------------------------

\bibliography{mdw-crypto,cryptography2000,cryptography,rfc}

%%%--------------------------------------------------------------------------

\appendix
\section{Proofs}

\subsection{Proof of theorem~\ref{thm:sk}}
\label{sec:sk-proof}

Before we embark on the proof proper, let us settle on some notation.  Let
$P_i$ be a party.  Then we write $x_i$ for $P_i$'s private key and $X_i = x_i
P$ for $P_i$'s public key.  Let $S = (P_i, P_j, s)$ be a session.  We write
$r_S$ for the random value chosen at the start of the session, and $R_S$,
$c_S$ etc.\ for the corresponding derived values in that session.

The proof uses a sequence of games.  For each game~$\G{i}$, let $V_i$ be the
event that some pair of unexposed, matching sessions both complete but output
different keys, and let $W_i$ be the event that the adversary's final output
equals the game's hidden bit~$b^*$.  To save on repetition, let us write
\[ \diff{i}{j} = \max(|\Pr[V_i] - \Pr[V_j]|, |\Pr[W_i] - \Pr[W_j]|). \]
Obviously,
\[ \diff{i}{j} \le \sum_{i\le k<j} \diff{k}{k + 1}. \]

Here's a quick overview of the games we use.
\begin{itemize}
\item $\G0$ is the original SK-security game.
\item In $\G1$, we abort the game unless all parties' public keys are
  distinct.  Since keys are generated at random, parties are unlikely to be
  given the same key by accident.
\item In $\G2$, we change the way sessions respond to challenge messages, by
  using the extractor to fake up answers to most challenges.  Since the
  extractor is good, the adversary is unlikely to notice.
\item In $\G3$, we abort the game if the adversary ever queries $H_K(\cdot)$
  on the Diffie-Hellman secret $r_S r_T P$ shared between two unexposed
  matching sessions.  We show that this is unlikely to happen if the
  Diffie-Hellman problem is hard.
\item In $\G4$, we abort the game if any unexposed session \emph{accepts} a
  response message which wasn't sent by a matching session.
\end{itemize}
Finally, we show that the adversary has no advantage in $\G4$.  The theorem
follows.

For ease of comprehension, we postpone the detailed proofs of some of the
steps until after we conclude the main proof.

Let $A$ be a given adversary which runs in time~$t$, creates at most~$q_S$
sessions, delivers at most~$q_M$ messages, and makes at most~$q_I$ queries to
its $H_I(\cdot, \cdot, \cdot, \cdot)$ oracle and at most~$q_K$ queries to its
$H_K(\cdot)$ oracle.  Let $\G0$ be the original SK-security game of
definition~\ref{def:sk}, played with adversary~$A$.

Game~$\G1$ is the same as game~$\G0$ except, if the initialization function
reports two parties as having the same public key (i.e., we have $X_i = X_j$
for some $0 \le i < j < n$), we stop the game immediately and without
crediting the adversary with a win.  This only happens when the corresponding
private keys are equal, i.e., $x_i = x_j$, and since the initialization
function chooses private keys uniformly at random, this happens with
probability at most $\binom{n}{2}/|G|$.  If this doesn't happen, the game is
identical to $\G0$, so we can apply lemma~\ref{lem:shoup} and see that
\begin{equation}
  \label{eq:sk-g0-g1}
  \diff{0}{1} \le \frac{1}{|G|} \binom{n}{2} = \frac{n (n - 1)}{2 |G|}.
\end{equation}
In game~$\G1$ and onwards, we can assume that public keys for distinct
parties are themselves distinct.  Note that the game now takes at most
$O(q_I)$ times longer to process each message delivered by the adversary.
This is where the $O(q_I q_M)$ term comes from in the theorem statement.

Game~$\G2$ is the same as game~$\G1$, except that we change the way that we
make parties respond to \cookie{challenge} messages $(\cookie{challenge}, R,
c)$.  Specifically, suppose that $S = (P_i, P_j, s)$ is a session.
\begin{itemize}
\item Suppose $T = (P_j, P_i, s)$ is the matching session of $S$.  The game
  proceeds as before if $(R, c) = (R_T, c_T)$ is the challenge issued by $T$.
\item Otherwise, we run the extractor $T_\Wident$ on the adversary's history
  so far of oracle queries $H_I(X_i, s, R, \cdot)$ to determine a pair $(r,
  Y)$.  If $r = \bot$ then we discard the message.  Otherwise, we add $R$ to
  the list~$C$, and return a fake response to the adversary by computing $K =
  H_K(r R_S)$ and handing the adversary $(\cookie{response}, R_S, E_K(Y))$.
\end{itemize}
The following lemma shows how this affects the adversary's probabilities of
winning.
\begin{lemma}
  \label{lem:sk-g1-g2}
  \begin{equation}
    \label{eq:sk-g1-g2}
    \diff{1}{2} \le \frac{q_M}{2^{\ell_I}}.
  \end{equation}
\end{lemma}

Let us say that a session $S = (P_i, P_j, s)$ is \emph{ripe} if
\begin{itemize}
\item there is a matching session $T = (P_j, P_i, s)$, and
\item $S$ is unexposed.
\end{itemize}
Suppose that $S$ is a ripe session, and that it has a matching session~$T$:
let $Z_S = Z_T = r_S r_T P$.

Game~$\G3$ is the same as $\G2$, except that the game is immediately aborted
if ever the adversary queries its random oracle $H_K(\cdot)$ at a value $Z_S$
for any ripe session~$S$.  The following lemma shows how this affects the
adversary's probabilities of winning.
\begin{lemma}
  \label{lem:sk-g2-g3}
  For some $t'$ within the bounds given in the theorem statement we have
  \begin{equation}
    \label{eq:sk-g2-g3}
    \diff{2}{3} \le q_S \InSec{mcdh}(G; t', q_K).
  \end{equation}
\end{lemma}

Game~$\G4$ is the same as $\G3$ except that the game is immediately aborted
if ever the adversary sends a response message to a ripe session~$S$ which
wasn't output by its matching session as a response to $S$'s challenge, with
the result that $S$ completes.

Let's make this more precise.  Let $S$ and~$T$ be a pair of matching
sessions.  Let $C_S = (\cookie{challenge}, R_S, c_S)$ be the challenge
message sent by $S$.  Let $M_T$ be the set of messages which $T$ has sent
upon delivery of $C_S$.  Then, in $\G4$, we abort the game if, for any pair
$S$ and~$T$ of matching, unexposed sessions, $S$ has completed as a result of
being sent a message $\mu \notin M_T$.  We have the following lemma.
\begin{lemma}
  \label{lem:sk-g3-g4}
  For a $t'$ within the stated bounds, we have
  \begin{equation}
    \label{eq:sk-g3-g4}
    \diff{3}{4} \le q_S \bigl( \InSec{ind-cca}(\E; t', q_M, q_M) +
      n \cdot \InSec{mcdh}(G; t', q_M + q_I) \bigr).
  \end{equation}
\end{lemma}

Finally, let us consider the state we're in with $\G4$.
\begin{itemize}
\item No ripe session completes except as a result of the adversary
  faithfully delivering messages between it and its matching session.
\item The adversary never queries $Z_S$ for any ripe session~$S$.  If we set
  $K_S = (K_{S, 0}, K_{S, 1}) = H_K(Z_S)$, then $K_{S, 1}$ is the key output
  by $S$ when it completes.
\item If $S$ and $T$ are matching ripe sessions, then $K_S = K_T$, since $Z_S
  = r_S R_T = r_T R_S = Z_T$.
\item For any ripe session~$S$, $K_{S, 1}$ is uniformly distributed in
  $\Bin^\kappa$ and independent of the adversary's view.
\item If $S = (P_i, P_j, s)$ and $T = (P_j, P_i, s)$ are matching ripe
  sessions, then $Z_S$ depends only on $r_S$ and $r_T$.  Hence, once $S$
  and~$T$ complete, and erase their states, $Z_S$ is independent of
  everything except the messages sent between the two sessions.  In
  particular, $Z_S$ is independent of the long-term secrets $x_i$ and~$x_j$,
  so if either player is later corrupted, the key $K_{S, 1}$ remains
  independent of the adversary's view.
\item Hence, the keys output by unexposed sessions are indistinguishable from
  freshly-generated random strings, and remain so indefinitely.
\end{itemize}
We conclude that, for any adversary $A$,
\begin{equation}
  \label{eq:sk-g4}
  \Pr[V_4] = 0 \qquad \text{and} \qquad \Pr[W_4] = \frac{1}{2}.
\end{equation}
Putting equations~\ref{eq:sk-g0-g1}--\ref{eq:sk-g4} together, we find
\begin{spliteqn}
  \Adv{sk}{\Wkx^{G, \E}}(A) \le
    2 q_S \bigl(\InSec{ind-cca}(\E; t', q_M, q_M) + {} \\
    \InSec{mcdh}(G; t', q_K) +
    n \,\InSec{mcdh}(G; t', q_M + q_I) \bigr) + {}
    \frac{n (n - 1)}{|G|} +
    \frac{2 q_M}{2^{\ell_I}}.
\end{spliteqn}
The theorem follows, since $A$ was chosen arbitrarily.


\begin{proof}[Proof of lemma~\ref{lem:sk-g1-g2}]
  The two games $\G1$ and~$\G2$ differ only in whether they accept or reject
  particular challenge messages $(\cookie{challenge}, R, c)$.

  We claim firstly that no message is \emph{accepted} by $\G2$ which would
  have been rejected by $\G1$.  To prove the claim, it is sufficient to note
  that the extractor's output, if not $\bot$, is always correct, and hence if
  $\G2$ accepts a message then $\G1$ would have done so too.

  Since $\G2$ also behaves identically when the adversary submits to~$S$ the
  challenge from the matching session~$T$, we have nothing to prove in this
  case.  Let $F$ be the event that the adversary submits a message
  $(\cookie{challenge}, R, c)$ to a session~$S$ which $S$ would have accepted
  in $\G1$ but which is rejected by the new rule in~$\G2$.  By
  lemma~\ref{lem:shoup} we have $\diff{1}{2} \le \Pr[F]$.  To prove the
  current lemma, therefore, we must show that $\Pr[F] \le q_M/2^{\ell_I}$.

  Rather than consider individual challenge messages, we consider
  \emph{classes} of messages.  We shall refer to a quadruple~$\Cid = (i, j,
  s, R)$ as a \emph{class-id}, and define some useful functions:
  \begin{itemize}
  \item the class's \emph{session} $\Csession(\Cid) = (P_i, P_j, s)$;
  \item the class's \emph{index} $\Cindex(\Cid)$ is $r \in I$ where $R = r
    P$, which is well-defined by lemma~\ref{lem:unique-dl};
  \item the class's \emph{query} $\Cquery(\Cid) = (X_j, s, R, x_i R)$;
  \item the class's \emph{hash} $\Chash(\Cid) = H_I(\Cquery(\Cid)) = H_I(X_j,
    s, R, x_i R)$;
  \item the class's \emph{check-value} $\Ccheck(\Cid) = \Chash(\Cid) \xor
    \Cindex(\Cid)$;
  \item the class's \emph{check-set} $\Ccheckset(\Cid)$ is the set of
    check-values $c$ such that a message $(\cookie{challenge}, R, c)$ was
    sent to session $S = (P_i, P_j, s)$; and
  \item the class's \emph{count} $\Ccount(\Cid) = |\Ccheckset(\Cid)|$.
  \end{itemize}

3146 Consider any class-id~$\Cid = (i, j, s, R)$. A natural question which
3147 arises is: which participants have issued $\Cid$'s query, i.e., queried
3148 $H_I$ at $\Cquery(\Cid)$?
3149
3150 We can characterise the $H_I(\cdot, \cdot, \cdot, \cdot)$ queries of a
3151 session $U = (P_{i'}, P_{j'}, s')$ as follows:
3152 \begin{itemize}
3153 \item computing the check-value for the challenge $R_U$ by querying
3154 $H_I(X_{i'}, s', R_U, r_U X_{j'})$, and
3155 \item checking an incoming challenge~$R'$ by querying $H_I(X_{j'}, s', R',
3156 x_{i'} R')$.
3157 \end{itemize}
3158 The class~$\Cid$'s query $\Cquery(\Cid)$ is $U$'s check-value query if
3159 \[ (j, i, s, R) = (i', j', s', R_U) \]
3160 i.e., $U$ is the matching session of $\Csession(\Cid)$, and moreover $R =
3161 R_U$ is the challenge value issued by $U$. For any $c \in
3162 \Ccheckset(\Cid)$, if $c = \Ccheck(\Cid)$ then $(\cookie{challenge}, R, c)$
3163 is precisely the challenge message issued by~$U$ to $\Csession(\Cid)$; the
3164 rules for handling this message didn't change. However, if $c \ne
3165 \Ccheck(\Cid)$ then the message would have been rejected in $\G1$, and we
3166 have already shown that $\G2$ continues to reject all messages rejected by
3167 $\G1$.
3168
3169 Let us say that a class-id~$\Cid = (i, j, s, R)$ is \emph{bad} if
3170 \begin{enumerate}
3171 \item the value $R$ is not the challenge issued by $\Csession(\Cid)$'s
3172 matching session, and
3173 \item the adversary has not issued $\Cid$'s query $\Cquery(\Cid)$,
3174 \emph{but}
3175 \item $\Ccheck(\Cid) \in \Ccheckset(\Cid)$, so one of the check-values
3176 submitted to $S$ was actually correct.
3177 \end{enumerate}
3178 We claim that our extractor will work perfectly unless some class-id is
3179 bad. Certainly, if $R$ was issued by the matching session, there is
3180 nothing to prove; if the adversary has issued the relevant query then the
3181 extractor will recover $\Cindex(\Cid)$ just fine; and if $\Ccheck(\Cid)
3182 \notin \Ccheckset(\Cid)$ then all messages in the class would have been
3183 rejected by $\G1$ anyway.
3184
Let $B(\Cid)$ be the event that the class~$\Cid$ is bad.  We claim that
\[ \Pr[B(\Cid)] \le \frac{\Ccount(\Cid)}{2^{\ell_I}}. \]
The present lemma follows, since
\[ \diff{1}{2}
   \le \Pr[F]
   \le \sum_\Cid \Pr[B(\Cid)]
   \le \sum_\Cid \frac{\Ccount(\Cid)}{2^{\ell_I}}
   =   \frac{1}{2^{\ell_I}} \sum_\Cid \Ccount(\Cid)
   \le \frac{q_M}{2^{\ell_I}}
\]
as required.
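As a quick numeric sanity check (our illustration, not part of the proof), the final summation simply combines the per-class bounds with the fact that the class counts total at most $q_M$; the counts below are arbitrary illustrative values:

```python
# Illustration (ours) of the summation above: each class Cid contributes
# at most Ccount(Cid)/2^{l_I}, and the counts total at most q_M, so the
# whole sum is at most q_M/2^{l_I}.  Counts and l_I are arbitrary.
from fractions import Fraction

l_I = 8                        # check-value length in bits
counts = [3, 1, 4, 2]          # illustrative Ccount(Cid) values
q_M = 16                       # message bound: sum of counts is <= q_M

total = sum(Fraction(c, 2**l_I) for c in counts)
assert sum(counts) <= q_M
assert total <= Fraction(q_M, 2**l_I)
```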

Now observe that, in $\G2$, sessions don't actually check incoming
challenges in this way any more -- instead we run the extractor.  So, to
prove the claim, we consider a class~$\Cid$ where properties~1 and~2 above
hold.  The correct hash $\Chash(\Cid)$ is then independent of the rest of
the game, so the probability that $\Ccheck(\Cid) \in \Ccheckset(\Cid)$ is
precisely $\Ccount(\Cid)/2^{\ell_I}$ as required.

This completes the proof of the lemma.
\end{proof}

\begin{proof}[Proof of lemma~\ref{lem:sk-g2-g3}]
  Let $F$ be the event that the adversary makes a query $H_K(Z_S)$ for some
  ripe session~$S$.  Since $\G3$ proceeds exactly as $\G2$ did unless $F$
  occurs, we apply lemma~\ref{lem:shoup}, which tells us that $\diff{2}{3}
  \le \Pr[F]$.  We must therefore bound this probability.

  To do this, we consider a new game~$\G3'$, which is the same as $\G3$,
  except that, at the start of the game, we choose a random number $k \inr
  \Nupto{q_S}$.  For $0 \le i < q_S$, let $S_i$ be the $i$th session created
  by the adversary.  We define $F'$ to be the event that the adversary
  queries $H_K(Z_{S_k})$ when $S_k$ is ripe.

  The lemma now follows from these two claims.

  \begin{claim}
    $\Pr[F] \le q_S \Pr[F']$.
  \end{claim}
  To see this, for any session~$S$, let $F_S$ be the event that the adversary
  queries~$H_K(Z_S)$ when $S$ is ripe.  Then
  \[ \Pr[F] \le \sum_{0\le i<q_S} \Pr[F_{S_i}]. \]
  Hence,
  \[ \Pr[F'] = \Pr[F_{S_k}] = \sum_{0\le i<q_S} \Pr[F_{S_i}] \Pr[k = i]
     = \frac{1}{q_S} \sum_{0\le i<q_S} \Pr[F_{S_i}]
     \ge \frac{\Pr[F]}{q_S},
  \]
  proving the claim.
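  The averaging step can be checked numerically; here is a small sketch
  (ours, with arbitrary per-session probabilities) confirming that guessing
  the session uniformly loses at most a factor of $q_S$:

```python
# Numeric check (ours) of the isolation argument: with k uniform,
# Pr[F'] = (1/q_S) sum_i Pr[F_{S_i}], which is at least Pr[F]/q_S
# because the union bound gives Pr[F] <= sum_i Pr[F_{S_i}].
from fractions import Fraction

p = [Fraction(1, 8), Fraction(1, 16), Fraction(3, 32)]  # Pr[F_{S_i}], arbitrary
q_S = len(p)

pr_F_prime = sum(p) / q_S       # Pr[F'] = sum_i Pr[F_{S_i}] * Pr[k = i]
union_bound = sum(p)            # Pr[F] is at most this

# Hence Pr[F] <= union_bound = q_S * Pr[F'], i.e. Pr[F] <= q_S Pr[F'].
assert q_S * pr_F_prime == union_bound
```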

  \begin{claim}
    For some $t' = t + O(n) + O(q_S q_M) + O(q_I) + O(q_K)$, we have
    $\Pr[F'] \le \InSec{mcdh}(G; t', q_K)$.
  \end{claim}
  To prove this claim, we construct an adversary~$B$ which solves the MCDH
  problem in~$G$.  The adversary works as follows.
  \begin{enumerate}
  \item It is given a pair $(R^*, S^*) = (r^* P, s^* P)$ of group elements;
    its objective is to make a verification-oracle query $V(Z^*)$ where $Z^*
    = r^* s^* P$.
  \item It sets up a simulation of the game~$\G3'$, by running the
    $\id{init}$ function, and simulating all of the parties.  In particular,
    it chooses a random~$k \inr \Nupto{q_S}$.
  \item It sets up accurate simulations of the random oracles $H_K(\cdot)$
    and $H_I(\cdot, \cdot, \cdot, \cdot)$, which choose random outputs for
    new, fresh inputs.  However, whenever $A$ queries $H_K(\cdot)$ on a group
    element $Z$, $B$ also queries $V(Z)$.
  \item It runs $A$ in its simulated game.  It responds to all of $A$'s
    instructions faithfully, until the $k$th session-creation.
  \item When creating the $k$th session $S = S_k = (P_i, P_j, s)$, $B$ has
    party~$P_i$ choose $R^*$ as its challenge, rather than choosing $r_S$ and
    setting $R_S = r_S P$.  Because it simulates all the parties, $B$ can
    compute $Y_S = x_j R^*$, which is still correct.
  \item If $A$ requests the creation of a matching session $T = (P_j, P_i,
    s)$ then $B$ has party~$P_j$ choose $S^*$ as its challenge.  Again, $B$
    computes $Y_T = x_i S^*$.
  \item If $A$ ever corrupts the parties $P_i$ or $P_j$, or reveals the
    session state of $S$ or $T$, then $B$ stops the simulation abruptly and
    halts.
  \end{enumerate}
  Adversary $B$'s running time is within the bounds required of $t'$, and $B$
  makes at most $q_K$ queries to $V(\cdot)$; we therefore have
  \[ \Pr[F'] \le \Succ{mcdh}{G}(B) \le \InSec{mcdh}(G; t', q_K) \]
  as required.
\end{proof}
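To make the embedding step concrete, here is a toy sketch (our illustration, not the paper's construction) in a small multiplicative group, written multiplicatively: $B$ plants its MCDH challenge as a session's ephemeral value, and its hook on the simulated $H_K$ forwards candidate $Z$ values to the verification oracle.

```python
# Toy illustration (not a security proof) of the embedding step in the
# reduction: B plants its MCDH challenge R* as session S_k's challenge
# and wins if the simulated adversary ever queries H_K at Z* = r* s* P.
# All names here are ours; the group is a small prime-order subgroup.
import random

p = 1019            # small prime; 1018 = 2 * 509
q = 509             # prime subgroup order
g = 4               # 4 = 2^2 has order 509 mod 1019

random.seed(1)
r_star = random.randrange(1, q)
s_star = random.randrange(1, q)
R_star = pow(g, r_star, p)           # B receives only (R*, S*)
S_star = pow(g, s_star, p)
Z_star = pow(g, r_star * s_star, p)  # B's verification target

def V(Z):
    """B's verification oracle in the MCDH game."""
    return Z == Z_star

def H_K_hook(Z):
    """B's hook: every H_K query on a group element is also sent to V."""
    return V(Z)

# A stand-in adversary that (unlike a real one) knows s*: its H_K query
# on the shared secret (R*)^{s*} = g^{r* s*} lets B solve the instance.
adversary_query = pow(R_star, s_star, p)
assert H_K_hook(adversary_query)
```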

\begin{proof}[Proof of lemma~\ref{lem:sk-g3-g4}]
  Let $F_4$ be the event under which we abort the game~$\G4$.  Clearly, if
  $F_4$ doesn't occur, games $\G3$ and $\G4$ proceed identically, so we can
  apply lemma~\ref{lem:shoup} to see that $\diff{3}{4} \le \Pr[F_4]$.
  Bounding $\Pr[F_4]$, however, is somewhat complicated.  We use a further
  sequence of games.

  Firstly, let $\G5$ be like $\G4$ with the exception that we choose, at
  random, an integer~$k \inr \Nupto{q_S}$.  As we did in the proof of
  lemma~\ref{lem:sk-g2-g3}, let $S_i$ be the $i$th session created by the
  adversary.  For each session~$S_i$, let $T_i$ be its matching session, if
  there is one.  We define $F_5$ to be the event that
  \begin{itemize}
  \item $S_k$ completes immediately following delivery of a message $\mu
    \notin M_{T_k}$, and
  \item $S_k$ was ripe at this point.
  \end{itemize}
  For games~$\G{i}$, for $i > 5$, we define the event $F_i$ to be the event
  corresponding to $F_5$ in $\G{i}$.  Note that if $S_k$ \emph{is} sent a
  message in $M_{T_k}$ then $S_k$ immediately completes.

  \begin{claim}
    $\Pr[F_4] \le q_S \Pr[F_5]$.
  \end{claim}
  This claim is proven exactly as we did for claim~1 of
  lemma~\ref{lem:sk-g2-g3}.

  Let~$\G6$ be the same as $\G5$ except that we change the encrypted
  responses of session~$S_k$ and its matching session~$T_k$.  Let $K^* =
  (K_0^*, K_1^*) = H_K(Z_{S_k})$.  Then, rather than sending
  $(\cookie{response}, R_S, E_{K_0^*}(Y_T))$, session~$S_k$ sends
  $(\cookie{response}, R_S, E_{K_0^*}(1^{\ell_G}))$.
  \begin{claim}
    $|\Pr[F_6] - \Pr[F_5]| \le \InSec{ind-cca}(\E; t', q_M, q_M)$.
  \end{claim}
  To prove this claim, we construct an adversary~$B$ which attacks the
  IND-CCA security of our encryption scheme $\E$.  The adversary~$B$ works as
  follows.
  \begin{enumerate}
  \item It is given no input, but a pair of oracles $E(\cdot, \cdot)$ and
    $D(\cdot)$; the former encrypts either the left or right input, according
    to a hidden bit, and the latter decrypts ciphertexts other than those
    returned by $E(\cdot, \cdot)$.  Its goal is to guess the hidden bit.
  \item It sets up a simulation of the game~$\G5$, by running the $\id{init}$
    function, and simulating all of the parties.  In particular, it chooses a
    random $k \inr \Nupto{q_S}$.
  \item It sets up accurate simulations of the random oracles $H_K(\cdot)$
    and $H_I(\cdot, \cdot, \cdot, \cdot)$.
  \item It runs $A$ in its simulated game.  It responds to all of $A$'s
    instructions faithfully, except for the matching sessions $S_k$ and
    $T_k$.  Let $S = S_k = (P_i, P_j, s)$, and $T = T_k = (P_j, P_i, s)$.
  \item Suppose $T$ is sent the message $C_S = (\cookie{challenge}, R_S,
    c_S)$.  Rather than computing $K^* = H_K(r_T R_S)$ and performing the
    encryption properly, $B$ queries its left-or-right encryption oracle
    $E(\cdot, \cdot)$ on $E(1^{\ell_G}, x_j R_S)$, and sends the resulting
    ciphertext~$\chi$ back to~$S$ as part of a message $(\cookie{response},
    R_T, \chi)$.  The set $M_T$ is precisely the set of messages constructed
    in this fashion.  (Recall that challenge messages other than $C_S$ aren't
    actually delivered to $T$, since we simulate the responses using the
    extractor, as of $\G2$.)
  \item Suppose $S$ is sent a message $M = (\cookie{response}, R_T, \chi) \in
    M_T$.  We immediately stop the simulation, and $B$ returns $0$.
  \item Suppose, instead, that $S$ is sent some message $M' =
    (\cookie{response}, R, \chi) \notin M_T$.  There are two cases to
    consider.  If $R = R_T$ then we must have $\chi$ distinct from the
    ciphertexts returned by the $E(\cdot, \cdot)$ oracle, so we can invoke
    the decryption oracle $D(\cdot)$ on $\chi$ to obtain a response $Y$.
    Alternatively, if $R \ne R_T$, we can compute the key $K = (K_0, K_1) =
    H_K(r_S R)$, and recover $Y = D_{K_0}(\chi)$.  In either case, if $Y =
    r_S X_j$ then $S$ would complete at this point: $B$ stops the simulation
    and returns $1$.
  \item If $A$ exposes $S$ (by corrupting $P_i$ or~$P_j$, or revealing $S$ or
    $T$) then we stop the simulation and $B$ returns $0$.
  \item Finally, if the game stops, either because $A$ halts, or because of
    one of the special rules introduced in earlier games, $B$ returns $0$.
  \end{enumerate}
  It is clear that $B$'s oracle queries are acceptable, since $|x_j R_S| =
  \ell_G$ by definition, and $B$ never queries $D(\cdot)$ on a ciphertext
  returned by its encryption oracle.  By the rules of~$\G3$, we know that the
  game stops immediately if $A$ ever queries $H_K(Z_S)$, so the key $K^*$ is
  independent of everything in $A$'s view except the ciphertexts $\chi$
  output by $S$ and $T$.  Therefore, if the hidden bit of the IND-CCA game is
  $1$, $B$ accurately simulates $\G5$, whereas if the bit is $0$ then $B$
  accurately simulates $\G6$.  We issue no more than $q_M$ encryption or
  decryption queries.  Finally, $B$'s running time is within the bounds
  allowed for $t'$.  Therefore,
  \[ \Adv{ind-cca}{\E}(B) = \Pr[F_5] - \Pr[F_6]
     \le \InSec{ind-cca}(\E; t', q_M, q_M). \]
  We construct the adversary~$\bar{B}$ which is the same as $B$ above, except
  that $\bar{B}$ returns $0$ whenever $B$ returns~$1$, and \emph{vice versa}.
  Clearly
  \[ \Adv{ind-cca}{\E}(\bar{B})
       = (1 - \Pr[F_5]) - (1 - \Pr[F_6])
       = \Pr[F_6] - \Pr[F_5]
       \le \InSec{ind-cca}(\E; t', q_M, q_M).
  \]
  This proves the claim.

  Let $\G7$ be the same as $\G6$, except that at the start of the game we
  choose a random $m \inr \Nupto{n}$, and when the adversary creates the
  session $S = S_k = (P_i, P_j, s)$, we abort the game unless $j = m$.
  Clearly we have $\Pr[F_6] = n \Pr[F_7]$.

  Finally, we can explicitly bound $\Pr[F_7]$.  In $\G6$, the adversary's
  view is independent of the correct response $Y_S = r_S X_j = x_j R_S$ to
  $S$'s challenge.  Therefore, if $A$ manages to send any message $\mu
  \notin M_T$ which causes $S$ to complete, then it has impersonated~$P_j$.
  \begin{claim}
    $\Pr[F_7] \le \InSec{mcdh}(G; t', q_M + q_I)$.
  \end{claim}
  The present lemma follows from this and the previous claims.

  To prove the claim formally, we construct an adversary~$B'$, which behaves
  as follows.
  \begin{enumerate}
  \item It is given as input a public key $X^*$ and a single challenge $(R^*,
    c^*)$, a random oracle $H^*_I(\cdot, \cdot)$, and an oracle $V(\cdot,
    \cdot)$, which verifies responses $(R, Y)$.  Its goal is to invoke
    $V(\cdot, \cdot)$ with a correct response to the challenge.
  \item It chooses a random $k \inr \Nupto{q_S}$ and $m \inr \Nupto{n}$.  It
    sets up a simulation of the game~$\G7$, by running the $\id{init}$
    function, and simulating all of the parties, except that it gives party
    $P_m$ the public key $X^*$.  This makes no difference, since $P_m$
    doesn't actually need to give any `honest' responses because of the
    change we made in $\G6$.
  \item It sets up accurate simulations of the random oracles $H_K(\cdot)$
    and $H_I(\cdot, \cdot, \cdot, \cdot)$, with one exception -- see below.
  \item It runs $A$ in its simulated game.  It responds to all of $A$'s
    instructions faithfully, except for the session $S_k$.  Let $S = S_k =
    (P_i, P_j, s)$, and let $T = T_k = (P_j, P_i, s)$ be its matching
    session.
  \item When session~$S$ is created, $B'$ checks that $j = m$, and if not
    stops the simulation and halts.  Otherwise, session~$S$ sends $C_S =
    (\cookie{challenge}, R^*, c^*)$ as its challenge to~$T$.
  \item When $A$ makes a query $H_I(X^*, s, R, Y)$, $B'$ answers it by
    querying its \emph{own} random oracle $H^*_I(R, Y)$.
  \item When $S$ receives a message $(\cookie{response}, R, \chi)$, $B'$
    computes $(K_0, K_1) = H_K(r_S R)$, and $Y = D_{K_0}(\chi)$.  If $Y \ne
    \bot$ then $B'$ calls $V(R, Y)$.
  \item If $A$ reveals $S$ or corrupts $P_i$ or $P_j$ then $B'$ stops the
    simulation immediately and halts.
  \end{enumerate}
  The running time of $B'$ is within the bounds required of $t'$; it makes at
  most $q_I$ random-oracle and at most $q_M$ verification queries.  Clearly
  $B'$ succeeds whenever $F_7$ occurs.  The claim follows from
  theorem~\ref{thm:wident-sound}.
\end{proof}


\subsection{Proof of theorem~\ref{thm:sk2}}
\label{sec:sk2-proof}

The proof is almost identical to the proof of theorem~\ref{thm:sk}, in
appendix~\ref{sec:sk-proof}.  Unfortunately a black-box reduction doesn't
seem possible.

We use the games and notation of appendix~\ref{sec:sk-proof}.

The change to the check-value calculation doesn't affect key-generation at
all, so the transition to $\G1$ goes through as before.

The transition from $\G1$ to $\G2$ -- answering challenges using the
extractor -- needs a little care.  Let $S = (P_i, P_j, s)$ be a session, and
consider an incoming message $(\cookie{challenge}, R, c)$.
\begin{itemize}
\item If $T = (P_j, P_i, s)$ is the matching session to $S$, and $R = R_T$ is
  the public challenge value of $T$, and $c = r_T \xor H_I(R_S, X_j, s, R_T,
  r_T X_i)$ is the check-value output by $T$ when it received
  $(\cookie{pre-challenge}, R_S)$ as input, then $S$ replies as it did in
  $\G1$.
\item If the challenge message is any other message, then we use the
  extractor.
\end{itemize}
As in lemma~\ref{lem:sk-g1-g2}, we examine which sessions could have queried
$H_I(R_S, X_j, s, R, x_i R)$, and for the same reasons conclude that only the
matching session would have done this, and only in response to the
pre-challenge $R_S$.  It follows that $\diff{1}{2} \le q_M/2^{\ell_I}$ as
before.

The remaining game steps go through unchanged.  In particular, we conclude
that a ripe session will only complete if the adversary has transmitted
messages from its matching session correctly, and the session key is
independent of the adversary's view.  The theorem follows.


\subsection{Sketch proof of single-key protocol for secure channels}
\label{sec:sc-proof}

We want to show that the Wrestlers Key-Exchange protocol, followed by use of
the encryption scheme $\E$, with the \emph{same} key $K = K_0$, still
provides secure channels.

\subsubsection{Secure channels definition}
We (very briefly!) recall the \cite{Canetti:2001:AKE} definition of a secure
channels protocol.  We again play a game with the adversary.  At the
beginning, we choose a bit $b^* \inr \{0, 1\}$ at random.  We allow the
adversary the ability to establish \emph{secure channels} sessions between
the parties.  Furthermore, for any running session $S = (P_i, P_j, s)$, we
allow the adversary to request $S$ to send a message~$\mu$ through its
secure channel.  Finally, the adversary is allowed to choose one ripe
\emph{challenge} session, and ask for it to send one of a \emph{pair} of
messages $(\mu_0, \mu_1)$, subject to the restriction that $|\mu_0| =
|\mu_1|$; the session sends message $\mu_{b^*}$.  The adversary may not
expose the challenge session.

The adversary wins if (a)~it can guess the bit $b^*$, or (b)~it can cause a
ripe session $S$ (i.e., an unexposed, running session), with a matching
session~$T$, to output a message other than one that it requested that $T$
send.

\subsubsection{Protocol definition}
The protocol begins with Wrestlers key-exchange.  The encryption in the
key-exchange protocol is performed as $E_K(\cookie{kx}, \cdot)$; encryption
for secure channels is performed as $E_K(\cookie{sc}, i, o, \cdot)$, where
$i$ is a sequence number to prevent replays and $o \in \{S, T\}$ identifies
the sender.

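One way to realise this domain-separated framing is sketched below.  This is
our illustration only: the paper's $\E$ is an abstract IND-CCA and INT-PTXT
scheme, and the concrete encrypt-then-MAC construction, function names, and
wire layout here are all our assumptions.

```python
# Sketch (ours) of the framing E_K(tag, seq, origin, msg): the header
# (tag, sequence number, sender) is bound into the MAC, so replays,
# reflected messages, and kx/sc cross-protocol confusion are rejected.
import hashlib, hmac, os, struct

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + struct.pack(">Q", ctr)).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, tag: bytes, seq: int, origin: bytes, msg: bytes) -> bytes:
    """Encrypt msg, binding the header (tag, seq, origin) into the MAC."""
    header = tag + struct.pack(">Q", seq) + origin
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(msg, _keystream(key, nonce, len(msg))))
    mac = hmac.new(key, header + nonce + ct, hashlib.sha256).digest()
    return nonce + ct + mac

def open_(key: bytes, tag: bytes, seq: int, origin: bytes, blob: bytes):
    nonce, ct, mac = blob[:16], blob[16:-32], blob[-32:]
    header = tag + struct.pack(">Q", seq) + origin
    want = hmac.new(key, header + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, want):
        return None   # reject: wrong tag, replayed/skipped seq, or wrong sender
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

K = os.urandom(32)
blob = seal(K, b"sc", 0, b"S", b"hello")
assert open_(K, b"sc", 0, b"S", blob) == b"hello"
assert open_(K, b"sc", 1, b"S", blob) is None   # replay at wrong seq rejected
assert open_(K, b"kx", 0, b"S", blob) is None   # cross-protocol use rejected
```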
\subsubsection{Proof sketch}
We use the games and notation of appendix~\ref{sec:sk-proof}.

The proof goes through as far as the step between $\G5$ and $\G6$ in the
proof of lemma~\ref{lem:sk-g3-g4}.  Here we make the obvious adjustments to
our adversary against the IND-CCA security of $\E$.  (Our adversary will need
to use the left-or-right oracle for messages sent using the secure channel
built on $K^*$.  That's OK.)

In $\G4$, we know that ripe sessions agree on the correct key, and the
adversary never queries the random oracle $H_K$ at $Z_S$, so the key is
independent of the adversary's view.

We define a new game $\G8$, which is like $\G4$, except that we stop the game
if the adversary ever forges a message sent over the secure channel.  That
is, if a ripe session $S$ ever announces receipt of a message not sent (at
the adversary's request) by its matching session $T$.  Let $F_8$ be the event
that a forgery occurs.  We apply lemma~\ref{lem:shoup}, which tells us that
$\diff{4}{8} \le \Pr[F_8]$.  To bound $\Pr[F_8]$, we isolate a session at
random (as in lemmata \ref{lem:sk-g2-g3} and~\ref{lem:sk-g3-g4}), which
tells us that
\begin{equation}
  \label{eq:sc-g4-g8}
  \diff{4}{8} \le q_S \cdot \InSec{int-ptxt}(\E; t', q_M, q_M).
\end{equation}
Finally, we can bound the adversary's advantage at guessing the hidden bit
$b^*$.  We isolate (we hope) the challenge session $S$ by choosing a target
session at random, as before.  Let $K^* = H_K(Z_S)$ be the key agreed by the
session (if it becomes ripe).  We define an adversary $B$ against the IND-CCA
security of $\E$.  The adversary $B$ simulates the game.  If the adversary
exposes the target session, or doesn't choose it as the challenge session,
$B$ fails (and outputs $0$); otherwise it uses the left-or-right encryption
oracle to encrypt the adversary's pair of message choices, and outputs the
adversary's guess.  Let $b$ be the adversary's output, and let $\epsilon$ be
the advantage of our IND-CCA distinguisher.  Then
\begin{eqnarray*}[rl]
  \Pr[b = b^*]
  & = \Pr[b = b^* \wedge b^* = 1] + \Pr[b = b^* \wedge b^* = 0] \\
  & = \frac{1}{2}\bigl( \Pr[b = b^* \mid b^* = 1] +
      \Pr[b = b^* \mid b^* = 0] \bigr) \\
  & = \frac{1}{2}\bigl( \Pr[b = b^* \mid b^* = 1] +
      (1 - \Pr[b \ne b^* \mid b^* = 0]) \bigr) \\
  & = \frac{1}{2}\bigl( \Pr[b = 1 \mid b^* = 1] -
      \Pr[b = 1 \mid b^* = 0] + 1 \bigr) \\
  & = \frac{1}{2}\bigl( 1 + q_S\,\Adv{ind-cca}{\E}(B) \bigr) \\
  & \le \frac{1}{2} \bigl( 1 + q_S\,\InSec{ind-cca}(\E; t', q_M, q_M) \bigr).
    \eqnumber
\end{eqnarray*}
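The conditioning steps in the display above reduce to a simple identity,
which we can double-check numerically (our sanity check, with arbitrary
conditional probabilities):

```python
# Check (ours) of the identity used above: with Pr[b* = 1] = 1/2,
#   Pr[b = b*] = (1 + Pr[b=1 | b*=1] - Pr[b=1 | b*=0]) / 2.
from fractions import Fraction

p1 = Fraction(3, 4)   # Pr[b = 1 | b* = 1], arbitrary illustrative value
p0 = Fraction(1, 3)   # Pr[b = 1 | b* = 0], arbitrary illustrative value

lhs = Fraction(1, 2) * p1 + Fraction(1, 2) * (1 - p0)   # Pr[b = b*]
rhs = (1 + p1 - p0) / 2
assert lhs == rhs
```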

\end{document}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End: