%%% -*-latex-*-
%%%
%%% The Wrestlers Protocol: secure, deniable key-exchange
%%%
%%% (c) 2006 Mark Wooding
%%%

\ifx\iffancystyle\xxundefined\newif\iffancystyle\fancystyletrue\fi
\ifx\ifshort\xxundefined\newif\ifshort\shortfalse\fi

\typeout{Options:}
\typeout{  Fancy style: \iffancystyle ON\else OFF\fi}
\typeout{  Short version: \ifshort ON\else OFF\fi}

\errorcontextlines=\maxdimen
\showboxdepth=\maxdimen
\showboxbreadth=\maxdimen

\iffancystyle
  \documentclass{strayman}
  \parskip=0pt plus 1pt \parindent=1.2em
  \usepackage[T1]{fontenc}
  \usepackage[palatino, helvetica, courier, maths=cmr]{mdwfonts}
  \usepackage[within = subsection, mdwmargin]{mdwthm}
  \usepackage{mdwlist}
  \usepackage{sverb}
  \ifpdfing\else
    \PassOptionsToPackage{dvips}{xy}
  \fi
\else
  \PassOptionsToClass{runningheads}{llncs}
  \documentclass{llncs}
\fi

\PassOptionsToPackage{show}{slowbox}
%\PassOptionsToPackage{hide}{slowbox}
\usepackage{mdwtab, mdwmath, crypto}
\usepackage{slowbox}
\usepackage{amssymb, amstext}
\usepackage{url, multicol}
\usepackage{tabularx}
\DeclareUrlCommand\email{\urlstyle{tt}}
\ifslowboxshow
  \usepackage[all]{xy}
  \turnradius{4pt}
\fi
\usepackage{mathenv}

\newcommand{\Nupto}[1]{\{0, 1, \ldots, #1 - 1\}}
\iffancystyle
  \let\next\title
\else
  \def\next[#1]{\titlerunning{#1}\title}
\fi
\next
  [The Wrestlers Protocol]
  {The Wrestlers Protocol%
    \ifshort\thanks{This is an extended abstract; the full version
      \cite{Wooding:2006:WP} is available from
      \texttt{http://eprint.iacr.org/2006/386/}.}\fi \\
    A simple, practical, secure, deniable protocol for key-exchange}
\iffancystyle
  \author{Mark Wooding \\ \email{mdw@distorted.org.uk}}
\else
  \author{Mark Wooding}
  \institute{\email{mdw@distorted.org.uk}}
\fi

\iffancystyle
  \bibliographystyle{mdwalpha}
  \let\epsilon\varepsilon
  \let\emptyset\varnothing
  \let\le\leqslant\let\leq\le
  \let\ge\geqslant\let\geq\ge
  \numberwithin{table}{section}
  \numberwithin{figure}{section}
  \numberwithin{equation}{subsection}
  \let\random\$
\else
  \bibliographystyle{splncs}
  \expandafter\let\csname claim*\endcsname\claim
  \expandafter\let\csname endclaim*\endcsname\endclaim
\fi

\let\Bin\Sigma
\let\emptystring\lambda
\edef\Pr{\expandafter\noexpand\Pr\nolimits}
\newcommand{\bitsto}{\mathbin{..}}
\newcommand{\E}{{\mathcal{E}}}
\newcommand{\M}{{\mathcal{M}}}
\iffancystyle
  \def\description{%
    \basedescript{%
      \let\makelabel\textit%
      \desclabelstyle\multilinelabel%
      \desclabelwidth{1in}%
    }%
  }
\fi
\def\fixme#1{\marginpar{FIXME}[FIXME: #1]}
\def\hex#1{\texttt{#1}_{x}}

\newenvironment{longproof}[1]{%
  \ifshort#1\expandafter\ignore
  \else\proof\fi
}{%
  \ifshort\else\endproof\fi
}

\def\dbox#1{%
  \vtop{%
    \def\\{\unskip\egroup\hbox\bgroup\strut\ignorespaces}%
    \hbox{\strut#1}%
  }%
}

\def\Wident{\Xid{W}{ident}}
\def\Wkx{\Xid{W}{kx}}
\def\Wdkx{\Xid{W}{dkx}}
\def\Func#1#2{\mathcal{F}[#1\to#2]}
\def\diff#1#2{\Delta_{#1, #2}}
\def\Hid{H_{\textit{ID}}}

%% protocol run diagrams
\def\PRaction#1{#1\ar[r]}
\def\PRcreatex#1{\PRaction{\textsf{Create session\space}#1}}
\def\PRcreate#1#2#3{\PRcreatex{(\text{#1},\text{#2},#3)}}
\def\PRreceivex#1{\PRaction{\textsf{Receive\space}#1}}
\def\PRreceive#1#2{\PRreceivex{\msg{#1}{#2}}}
\def\PRsession#1{\relax\mathstrut#1\ar[r]}
\def\msg#1#2{(\cookie{#1},#2)}
\def\PRresult#1{#1}
\def\PRsendx#1{\PRresult{\textsf{Send\space}#1}}
\def\PRsend#1#2{\PRsendx{\msg{#1}{#2}}}
\def\PRcomplete#1{\textsf{Complete:\space}#1}
\def\PRstate#1{\textsf{State:\space}#1}
\def\PRignore{\textsf{(ignored)}}
\def\PRreveal{\textsf{Session-state reveal}\ar[r]}
\def\protocolrun#1{\[\xymatrix @R=0pt @C=2em {#1}\]}

\def\protocol{%
  \unskip\bigskip
  \begin{tabular*}{\linewidth}%
    {@{\qquad}l@{\extracolsep{0ptplus1fil}}r@{\qquad}}}
\def\endprotocol{\end{tabular*}}
\def\send#1#2{\noalign{%
  \centerline{\xy\ar @{#1}|*+{\mathstrut#2}<.5\linewidth, 0pt>\endxy}}}

%% class-ids for proof of extractor lemma
\let\Cid=\Lambda
\let\Csession=S
\let\Cindex=r
\let\Cquery=Q
\let\Chash=H
\let\Ccheck=c
\let\Ccheckset=V
\let\Ccount=\nu

\def\HG#1{\mathbf{H}_{#1}}

\iffancystyle\else
  \let\xsssec\subsubsection\def\subsubsection#1{\xsssec[#1]{#1.}}
\fi

\begin{document}

%%%--------------------------------------------------------------------------

\maketitle
\iffancystyle \thispagestyle{empty} \fi

\begin{abstract}
  We describe and prove (in the random-oracle model) the security of a simple
  but efficient zero-knowledge identification scheme, whose security is based
  on the computational Diffie-Hellman problem.  Unlike other recent proposals
  for efficient identification protocols, we don't need any additional
  assumptions, such as the Knowledge of Exponent assumption.

  From this beginning, we build a simple key-exchange protocol, and prove
  that it achieves `SK-security' -- and hence security in Canetti's Universal
  Composability framework.

  Finally, we show how to turn the simple key-exchange protocol into a
  slightly more complex one which provides a number of valuable `real-life'
  properties, without damaging its security.
\end{abstract}

\iffancystyle
\newpage
\thispagestyle{empty}
\columnsep=2em \columnseprule=0pt
\tableofcontents[\begin{multicols}{2}\raggedright][\end{multicols}]
%%\listoffigures[\begin{multicols}{2}\raggedright][\end{multicols}]
%% \listoftables[\begin{multicols}{2}\raggedright][\end{multicols}]
\newpage
\fi

%%%--------------------------------------------------------------------------

\section{Introduction}

This paper proposes protocols for \emph{identification} and
\emph{authenticated key-exchange}.

An identification protocol allows one party, say Bob, to be sure that he's
really talking to another party, say Alice.  It assumes that Bob has some way
of recognising Alice; for instance, he might know her public key.  Our
protocol requires only two messages -- a challenge and a response -- and has
a number of useful properties.  It is very similar to, though designed
independently of, a recent protocol by Stinson and Wu
\cite{Stinson:2006:EST}; we discuss their protocol and compare it to ours in
\ifshort the full version of this paper. \else
section~\ref{sec:stinson-ident}. \fi

Identification protocols are typically less useful than they sound.  As
Shoup \cite{Shoup:1999:OFM} points out, such a protocol provides only a
`secure ping', by which Bob can know that Alice is `there', but provides no
guarantee that any other communication was sent to or reached her.  However,
there are situations where all that is needed is an authentic channel
between two entities -- e.g., a device and a smartcard -- and there a simple
identification protocol can still be useful.

An authenticated key-exchange protocol lets Alice and Bob agree on a shared
secret, known to them alone, even if there is an enemy who can read and
intercept all of their messages, and substitute messages of her own.  Once
they have agreed on their shared secret, of course, they can use standard
symmetric cryptography techniques to ensure the privacy and authenticity of
their messages.


\subsection{Desirable properties of our protocols}

Our identification protocol has a number of desirable properties.
\begin{itemize}
\item It is \emph{simple} to understand and implement.  In particular, it
  requires only two messages.
\item It is fairly \emph{efficient}, requiring two scalar multiplications by
  each of the prover and verifier.
\item It is provably \emph{secure} (in the random oracle model), assuming the
  intractability of the computational Diffie-Hellman problem.
\end{itemize}

Our key-exchange protocol also has a number of desirable properties.
\begin{itemize}
\item It is fairly \emph{simple} to understand and implement, though there
  are a few subtleties.  In particular, it is \emph{symmetrical}.  We have
  implemented a virtual private network system based on this protocol.
\item It is \emph{efficient}, requiring four scalar multiplications by each
  participant.  The communication can be reduced to three messages by
  breaking the protocol's symmetry.
\item It is provably \emph{secure} (again, in the random oracle model),
  assuming the intractability of the computational Diffie-Hellman problem,
  and the security of a symmetric encryption scheme.
\item It provides \emph{perfect forward secrecy}.  That is, even if a user's
  long-term secrets are compromised, past sessions remain secure.
\item It is \emph{deniable}.  It is possible to construct simulated
  transcripts of protocol executions between any number of parties without
  knowing any of their private keys.  The simulated transcripts are (almost)
  indistinguishable from real protocol transcripts.  Hence, a transcript
  does not provide useful evidence that a given party was really involved in
  a given protocol execution.
\end{itemize}

\ifshort\else
\subsection{Asymptotic and concrete security results}

Most security definitions for identification (particularly zero-knowledge)
and key-exchange in the literature are \emph{asymptotic}.  That is, they
consider a family of related protocols, indexed by a \emph{security
parameter}~$k$; they require that any \emph{polynomially-bounded} adversary
has only \emph{negligible} advantage.  A polynomially-bounded adversary is
one whose running time is bounded by some polynomial~$t(k)$.  The security
definition requires that, for any such polynomially-bounded adversary, and
any polynomial $p(k)$, the adversary's advantage is less than $1/p(k)$ for
all sufficiently large values of $k$.

Such asymptotic notions are theoretically interesting, and have obvious
connections to complexity theory.  Unfortunately, such an asymptotic result
tells us nothing at all about the security of a particular instance of a
protocol, or what parameter sizes one needs to choose for a given level of
security against a particular kind of adversary.  Koblitz and Menezes
\cite{Koblitz:2006:ALP} (among other useful observations) give examples of
protocols, proven to meet asymptotic notions of security, whose security
proofs guarantee nothing at all for the kinds of parameters typically used
in practice.

Since, above all, we're interested in analysing a practical and implemented
protocol, we follow here the `practice-oriented provable security' approach
largely inspired by Bellare and Rogaway, and exemplified by
\cite{Bellare:1994:SCB,Bellare:1995:XMN,Bellare:1995:OAE,Bellare:1996:ESD,%
Bellare:1996:KHF,Bellare:2000:CST}; see also \cite{Bellare:1999:POP}.
Rather than attempting to say, formally, whether or not a protocol is
`secure', we associate with each protocol an `insecurity function' which
gives an upper bound on the advantage of any adversary attacking the
protocol within given resource bounds.
\fi

\subsection{Formal models for key-exchange}

\ifshort

The first model for studying the \emph{computational} security of
key-exchange protocols (rather than using protocol-analysis logics like that
of \cite{Burrows:1989:LA}) was given by Bellare and Rogaway
\cite{Bellare:1994:EAK}; the model has since been enhanced, both by the
original authors and others, in \cite{Bellare:1995:PSS,%
Blake-Wilson:1997:KAP,Blake-Wilson:1998:EAA}.  The model defines security
in terms of a game: key-exchange protocols are secure if an adversary can't
distinguish the key agreed by a chosen `challenge session' from a key chosen
independently at random.  Other models for key-exchange have been proposed in
\cite{Bellare:1998:MAD} and \cite{Shoup:1999:OFM}; these use a different
notion of security, involving implementation of an ideal functionality.

\else

Many proposed key-exchange protocols have turned out to have subtle security
flaws.  The idea of using formal methods to analyse key-exchange protocols
begins with the logic of Burrows, Abadi and Needham \cite{Burrows:1989:LA}.
Their approach requires a `formalising' step, in which one expresses in the
logic the contents of the message flows, and the \emph{beliefs} of the
participants.

Bellare and Rogaway \cite{Bellare:1994:EAK} describe a model for studying the
computational security of authentication and key-exchange protocols in a
concurrent setting, i.e., where multiple parties are running several
instances of a protocol simultaneously.  They define a notion of security in
this setting, and show that several simple protocols achieve this notion.
Their original paper dealt with pairs of parties using symmetric
cryptography; they extended their definitions in \cite{Bellare:1995:PSS} to
study three-party protocols involving a trusted key-distribution centre.

Blake-Wilson, Johnson and Menezes \cite{Blake-Wilson:1997:KAP} applied the
model of \cite{Bellare:1994:EAK} to key-exchange protocols using asymmetric
cryptography, and Blake-Wilson and Menezes \cite{Blake-Wilson:1998:EAA}
applied it to protocols based on the Diffie-Hellman protocol.

The security notion of \cite{Bellare:1994:EAK} is based on a \emph{game}, in
which an adversary nominates a \emph{challenge session}, and is given either
the key agreed by the participants of the challenge session, or a random
value independently sampled from an appropriate distribution.  The
adversary's advantage -- and hence the insecurity of the protocol -- is
measured by its success probability in guessing whether the value it was
given is really the challenge key.  This challenge-session notion was also
used by the subsequent papers described above.

Bellare, Canetti and Krawczyk \cite{Bellare:1998:MAD} described a pair of
models which they called the \textsc{am} (for `authenticated links model')
and \textsc{um} (`unauthenticated links model').  They propose a modular
approach to the design of key-exchange protocols, whereby one first designs
a protocol and proves its security in the \textsc{am}, and then applies an
authenticating `compiler' to the protocol, which they prove yields a
protocol secure in the realistic \textsc{um}.  Their security notion is new.
They define an `ideal model', in which an adversary is limited to assigning
sessions fresh, random and unknown keys, or matching up one session with
another, so that both have the same key.  They define a protocol to be
secure if, for any adversary~$A$ in the \textsc{am} or \textsc{um}, there is
an ideal adversary~$I$, such that the outputs of $A$ and $I$ are
computationally indistinguishable.

In \cite{Shoup:1999:OFM}, Shoup presents a new model for key-exchange, also
based on the idea of simulation.  He analyses the previous models,
particularly \cite{Bellare:1994:EAK} and \cite{Bellare:1998:MAD}, and
highlights some of their inadequacies.

\fi

Canetti and Krawczyk \cite{Canetti:2001:AKE} describe a new notion of
security in the model of \cite{Bellare:1998:MAD}, based on the
challenge-session notion of \cite{Bellare:1994:EAK}.  The security notion,
called `SK-security', seems weaker in various ways than those of earlier
works such as \cite{Bellare:1994:EAK} or \cite{Shoup:1999:OFM}.  However,
the authors show that their notion suffices for constructing `secure
channel' protocols, which they also define.

\ifshort\else
In \cite{Canetti:2001:UCS}, Canetti describes the `universal composition'
framework.  Here, security notions are simulation-based: one defines
security notions by presenting an `ideal functionality'.  A protocol
securely implements a particular functionality if, for any adversary
interacting with parties who use the protocol, there is an adversary which
interacts with parties using the ideal functionality such that no
`environment' can distinguish the two.  The environment is allowed to
interact freely with the adversary throughout, differentiating this approach
from that of \cite{Bellare:1998:MAD} and \cite{Shoup:1999:OFM}, where the
distinguisher was given only transcripts of the adversary's interaction with
the parties.  With security defined in this way, it's possible to prove a
`universal composition theorem': one can construct a protocol, based upon
various ideal functionalities, and then `plug in' secure implementations of
the ideal functionalities and appeal to the theorem to prove the security of
the entire protocol.  The UC framework gives rise to very strong notions of
security, due to the interactive nature of the `environment' distinguisher.
\fi

Canetti and Krawczyk \cite{Canetti:2002:UCN} show that the SK-security
notion of \cite{Canetti:2001:AKE} is \emph{equivalent} to a `relaxed' notion
of key-exchange security in the UC framework\ifshort\space of
\cite{Canetti:2001:UCS}\fi, and suffices for the construction of UC secure
channels.

The result of \cite{Canetti:2002:UCN} gives us confidence that SK-security
is the `right' notion of security for key-exchange protocols.  Accordingly,
SK-security is the standard against which we analyse our key-exchange
protocol.


\subsection{Outline of the paper}

The remaining sections of this paper are as follows.
\begin{itemize}
\item Section \ref{sec:prelim} provides the essential groundwork for the rest
  of the paper.  It introduces important notation, and describes security
  notions and intractability assumptions.
\item Section \ref{sec:zk-ident} describes our zero-knowledge identification
  protocol and proves its security.
\item Section \ref{sec:kx} describes the simple version of our key-exchange
  protocol, and proves its security and deniability.  It also describes some
  minor modifications which bring practical benefits without damaging
  security.
\item Finally, section \ref{sec:conc} presents our conclusions.
\end{itemize}

\ifshort
The full version of this paper describes how to make our protocols
identity-based, using bilinear pairings and the techniques introduced in
\cite{Boneh:2003:IBE}.  It also contains proofs of the various theorems
stated here.
\fi

%%%--------------------------------------------------------------------------

\section{Preliminaries}
\label{sec:prelim}
433
32609df3
MW
434\ifshort
435\subsection{Basics}
436\let\prelimsec\subsubsection
437\else
438\let\prelimsec\subsection
439\fi
440
441\prelimsec{Miscellaneous notation}
a6e375a6
MW
442
443We write $\Func{D}{R}$ for the set of all functions with domain $D$ and range
444$R$.

\prelimsec{Groups}

Let $(G, +)$ be a cyclic group\footnote{
  We find that additive group notation is easier to read.  In particular, in
  multiplicative groups, one ends up with many interesting things tucked
  away in little superscripts.}%
of prime order $q$, and generated by an element $P$.  We shall write the
identity of $G$ as $0_G$, or simply as $0$ when no ambiguity is likely to
arise.  Thus, we have $\langle P \rangle = G$ and $q P = 0$.  Any $X \in G$
can be written as $X = x P$ for some $x \in \{0, 1, \ldots, q - 1\}$.

We consider a cyclic group of order $n$ as a $\Z/n\Z$-module, and in
particular our group $G$ can be seen as a vector space over $\gf{q}$.  This
makes the notation slightly more convenient.
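The additive notation can be made concrete with a toy model.  The sketch
below (purely illustrative; all names are our own) takes $G = \Z/q\Z$ under
addition, with generator $P = 1$.  Note that discrete logarithms are trivial
in this group, so a real instantiation would use, say, an elliptic-curve
group of prime order $q$ instead.

```python
# Toy model of the additive-group notation: G = Z/qZ under addition,
# generated by P = 1.  This illustrates the notation only -- discrete
# logarithms are trivial here, so this is not a secure instantiation.

q = 1000003          # a prime: the group order
P = 1                # a generator of G

def scalar_mul(x, Q):
    """Compute the scalar multiple x Q in G."""
    return (x * Q) % q

# Any X in G can be written as X = x P for some x in {0, 1, ..., q - 1}.
X = scalar_mul(424242, P)
assert X == 424242

# q P = 0, the group identity.
assert scalar_mul(q, P) == 0
```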

\prelimsec{Bit strings and encodings}
\label{sec:bitenc}

Let $\Bin = \{0, 1\}$ be the set of binary digits.  Then $\Bin^n$ is the set
of $n$-bit strings, and $\Bin^*$ the set of all (finite) bit strings.  If $x
\in \Bin^n$ is a bit string, we write its length as $|x| = n$.  For a bit
string $x \in \Bin^n$, and for $0 \le i < n$, we write $x[i]$ for the $i$th
bit of $x$.  The empty string is denoted $\emptystring$.

Let $x$ and $y$ be two bit strings.  If $|x| = |y| = n$, we write $x \xor y$
to mean the bitwise exclusive-or of $x$ and $y$\ifshort\else: if $z = x \xor
y$ then $|z| = n$, and $z[i] = (x[i] + y[i]) \bmod 2$ for $0 \le i < n$\fi.
We write $x \cat y$ to mean the concatenation of $x$ and $y$\ifshort\else:
if $z = x \cat y$ then $|z| = |x| + |y|$ and $z[i] = x[i]$ if $0 \le i < |x|$
and $z[i] = y[i - |x|]$ if $|x| \le i < |x| + |y|$\fi.
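These definitions can be checked directly on small examples.  The minimal
sketch below represents bit strings as Python lists of bits, a
representation chosen purely for illustration:

```python
# Bit strings as lists of bits.  xor and cat follow the definitions in
# the text: (x xor y)[i] = (x[i] + y[i]) mod 2 for equal-length strings,
# and x cat y is x followed by y, so |x cat y| = |x| + |y|.

def xor(x, y):
    """Bitwise exclusive-or of two equal-length bit strings."""
    assert len(x) == len(y)
    return [(xi + yi) % 2 for xi, yi in zip(x, y)]

def cat(x, y):
    """Concatenation of two bit strings."""
    return x + y

x = [1, 0, 1, 1]
y = [0, 0, 1, 0]
z = xor(x, y)
assert z == [1, 0, 0, 1] and len(z) == len(x)

w = cat(x, y)
assert len(w) == len(x) + len(y)
assert w[2] == x[2] and w[len(x) + 2] == y[2]   # the index conditions
```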

Finally, we let $\bot$ be a value distinct from any bit string.

We shall want to encode group elements $X \in G$ and indices $x \in I =
\gf{q}$ as bit strings.
\ifshort
To this end, we shall assume the existence of efficient, unambiguous
encodings of group elements as $\ell_G$-bit strings, and indices as
$\ell_I$-bit strings.  To reduce clutter, we shall leave encoding and
decoding as implicit operations.
\else
To this end, we shall assume the existence of
integers $\ell_G, \ell_I > 0$ and functions
\[
  e_S\colon S \to \Bin^{\ell_S}
  \quad \textrm{and} \quad
  d_S\colon \Bin^{\ell_S} \to S \cup \{ \bot \}
  \qquad
  \textrm{for } S \in \{ G, I \}
\]
with the following properties.
\begin{itemize}
\item The functions are \emph{unique} and \emph{unambiguous}, i.e., for any
  $t \in \Bin^{\ell_S}$, we have
  \[ d_S(t) = \begin{cases}
       s & \text{if there is some $s \in S$ such that $t = e_S(s)$, or} \\
       \bot & \text{if no such $s$ exists.}
     \end{cases}
  \]
\item The functions should be \emph{efficient} to compute.  Indeed, we shall
  be assuming that the time taken for encoding and decoding is essentially
  trivial.
\end{itemize}
Note that, as we have defined them, all encodings of group elements are the
same length, and similarly for encodings of indices.  This is necessary for
the security of our protocols.

We shall frequently abuse notation by omitting the encoding and decoding
functions where it is obvious that they are required.
\fi
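As a sketch of what such an encoding might look like, the following models
indices as integers modulo a toy prime $q$ and encodes them as fixed-length
binary strings.  All names here are illustrative, not part of the protocol:

```python
# Sketch of fixed-length, unambiguous encodings, modelling indices as
# elements of Z/qZ.  Every encoding has exactly ell bits -- the
# fixed-length property the text says is needed for security -- and
# decoding rejects strings that encode no element by returning None
# (playing the role of "bottom").

q = 1000003                       # toy prime order
ell = q.bit_length()              # ell_I: enough bits for any index

def encode(x):
    """Encode x in {0, ..., q-1} as an ell-bit string."""
    assert 0 <= x < q
    return format(x, '0%db' % ell)

def decode(t):
    """Decode an ell-bit string; None if it encodes no element."""
    if len(t) != ell or any(c not in '01' for c in t):
        return None
    x = int(t, 2)
    return x if x < q else None

assert decode(encode(12345)) == 12345              # d(e(s)) = s
assert len(encode(0)) == len(encode(q - 1))        # all encodings same length
assert decode('1' * ell) is None                   # out-of-range: rejected
```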

\ifshort\else
\prelimsec{Games, adversaries, and oracles}
\label{sec:games}

Many of the security definitions and results given here make use of
\emph{games}, played with an \emph{adversary}.  An adversary is a
probabilistic algorithm.  In some games, the adversary is additionally
equipped with \emph{oracles}, which perform computations with values chosen
by the adversary and secrets chosen by the game but not revealed to the
adversary.  We impose limits on the adversary's resource usage: in
particular, the total time it takes, and the number of queries it makes to
its various oracles.  Throughout, we include the size of the adversary's
program as part of its `time', in order to model adversaries which contain
large precomputed tables.

The games provide models of someone trying to attack a construction or
protocol.  For security, we will either define a notion of `winning' the
game, and require that all adversaries have only a very small probability
of winning, or we consider two different games and require that no
adversary can distinguish between the two except with very small
probability.

Our proofs make frequent use of sequences of games; see
\cite{Shoup:2004:SGT,Bellare:2004:CBG}.  The presentation owes much to Shoup
\cite{Shoup:2004:SGT}.  We begin with a game $\G0$ based directly on a
relevant security definition, and construct a sequence of games $\G1$,
$\G2$, \dots, each slightly different from the last.  We define all of the
games in a sequence over the same underlying probability space -- the
random coins tossed by the algorithms involved -- though different games may
have slightly differently-defined events and random variables.  Our goal in
doing this is to bound the probability of the adversary winning the initial
game $\G0$ by simultaneously (a) relating the probability of this event to
that of corresponding events in subsequent games, and (b) simplifying the
game until the probability of the corresponding event can be computed
directly.

The following simple lemma from \cite{Shoup:2001:OR} will be frequently
useful.
\begin{lemma}[Difference Lemma]
  \label{lem:shoup}
  Let $S$, $T$, $F$ be events.  Suppose $\Pr[S \mid \bar F] =
  \Pr[T \mid \bar F]$.  Then $|\Pr[S] - \Pr[T]| \le \Pr[F]$.
\end{lemma}
\begin{proof}
  A simple calculation:
  \begin{eqnarray*}[rl]
    |\Pr[S] - \Pr[T]|
    & = |(\Pr[S \mid F]\Pr[F] + \Pr[S \mid \bar F]\Pr[\bar F]) -
        (\Pr[T \mid F]\Pr[F] + \Pr[T \mid \bar F]\Pr[\bar F])| \\
    & = \Pr[F] \cdot |\Pr[S \mid F] - \Pr[T \mid F]| \\
    & \le \Pr[F]
  \end{eqnarray*}
  and we're done!
\end{proof}
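The lemma can also be sanity-checked numerically, by coupling two games over
the same coins so that they behave identically except when a failure
event~$F$ occurs.  The simulation below is purely illustrative (the games
and probabilities are made up for the purpose):

```python
# Monte-Carlo sanity check of the Difference Lemma.  Two "games" are run
# over the same underlying coins.  Outside the failure event F they
# behave identically; on F, game 0 counts a win (event S) while game 1
# (event T) does not.  So |Pr[S] - Pr[T]| should be at most Pr[F].

import random

rng = random.Random(1)
N = 200_000
s = t = f = 0
for _ in range(N):
    coin = rng.random()
    if coin < 0.1:       # event F: probability 0.1 -- the games differ
        f += 1
        s += 1           # game 0 wins on F; game 1 does not
    else:                # identical behaviour outside F
        win = coin < 0.5
        s += win
        t += win

pr_s, pr_t, pr_f = s / N, t / N, f / N
assert abs(pr_s - pr_t) <= pr_f + 1e-9   # |Pr[S] - Pr[T]| <= Pr[F]
```

Here the bound is in fact met with equality, since the games differ on every
outcome in $F$.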
\fi


\prelimsec{The random oracle model}
\label{sec:ro}

\ifshort\else
In particular, most of our results will make use of the \emph{random oracle}
model \cite{Bellare:1993:ROP}, in which all the participants, including the
adversary, have access to a number of `random oracles'.  A random oracle
with domain $D$ and range $R$ is an oracle which computes a function chosen
uniformly at random from the set of all such functions.  (In the original
paper \cite{Bellare:1993:ROP}, random oracles are considered as having
domain $\Bin^*$ and range $\Bin^\omega$; we use finite random oracles here,
because they're easier to work with.)

Given a protocol proven secure in the random oracle model, we can
instantiate each random oracle by a supposedly-secure hash function and be
fairly confident that either our protocol will be similarly secure, or one
of the hash functions we chose has some unfortunate property.

Proofs in the random oracle model must be interpreted carefully.  For
example, Canetti, Goldreich and Halevi \cite{Canetti:2004:ROM} show that
there are schemes which can be proven secure in the random oracle model but
provably have no secure instantiation in the standard model.
\fi

The random oracle model \ifshort\cite{Bellare:1993:ROP} \fi is useful for
constructing reductions and simulators for two main reasons.
\begin{enumerate}
\item One can use the transcript of an adversary's queries to random oracles
  in order to extract knowledge from it.
\item One can `program' a random oracle so as to avoid being bound by prior
  `commitments', or to guide an adversary towards solving a selected
  instance of some problem.
\end{enumerate}
Our proofs only make use of the first feature.  This becomes particularly
important when we consider issues of zero-knowledge and deniability in a
concurrent setting, because we want to be able to claim that we retain these
features when the random oracle is instantiated using a cryptographic hash
function, and hash functions definitely aren't `programmable' in this way!
The former property seems rather more defensible -- one would indeed hope
that the only sensible way of working out (anything about) the hash of a
particular string is to actually compute the hash function, and the random
oracle model is, we hope, just giving us a `hook' into this process.
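In reductions, a finite random oracle is typically realised by \emph{lazy
sampling}: answers are chosen uniformly on first query and replayed
thereafter, and the accumulated transcript is exactly what a simulator can
inspect.  A hedged sketch (class and method names are our own):

```python
# Lazy sampling of a random oracle with range {0,1}^n: each fresh query
# gets an independent uniform answer, and repeated queries are answered
# consistently from a stored transcript.  A simulator can inspect this
# transcript -- the knowledge-extraction feature our proofs rely on.

import random

class RandomOracle:
    def __init__(self, n, seed=None):
        self.n = n                       # output length in bits
        self.rng = random.Random(seed)
        self.transcript = {}             # query -> answer

    def query(self, x):
        if x not in self.transcript:
            self.transcript[x] = self.rng.getrandbits(self.n)
        return self.transcript[x]

H = RandomOracle(128, seed=42)
a = H.query(b'alice')
assert H.query(b'alice') == a            # consistent on repeated queries
assert 0 <= a < 2 ** 128
assert list(H.transcript) == [b'alice']  # exactly the queries made so far
```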

\ifshort\else
(Our protocols can be modified to make use of bilinear pairings so as to
provide identity-based identification and key-exchange, using the techniques
of \cite{Boneh:2003:IBE}.  Proving the security of the modifications we
discuss would involve `programming' random oracles, but this doesn't affect
the zero-knowledge or deniability of the resulting protocols.)
\fi


\ifshort\else
\prelimsec{Notation for algorithms}

We shall have occasion to describe algorithms by means of a pseudocode.  Our
choice of pseudocode is unlikely to be particularly controversial.  We let
$x \gets y$ denote the action of setting $x$ to the value $y$; similarly,
$x \getsr Y$ denotes the action of sampling $x$ from the set $Y$ uniformly
at random.

The expression $a \gets A^{O(\cdot, x)}(y)$ means `assign to $a$ the value
output by algorithm $A$ on input $y$, and with oracle access to the
algorithm which, given input $z$, computes $O(z, x)$'.

We make use of conditional (\IF-\ELSE) and looping (\FOR-\DO and
\WHILE-\DO) constructions; in order to reduce the amount of space taken up,
the bodies of such constructions are shown by indentation only.

We don't declare the types of our variables explicitly, assuming that these
will be obvious by inspection; also, we don't describe our variables' scopes
explicitly, leaving the reader to determine these from context.

Finally, the notation $\Pr[\textit{algorithm} : \textit{condition}]$ denotes
the probability that \textit{condition} is true after running the given
\textit{algorithm}.
\fi
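The oracle-access notation $a \gets A^{O(\cdot, x)}(y)$ can be mirrored in
code by handing the algorithm a closure that hides the secret.  A toy
illustration (all names and the keyed computation are hypothetical):

```python
# The notation a <- A^{O(., x)}(y): run algorithm A on input y, giving
# it access to an oracle which computes O(z, x) on adversary-chosen z,
# without ever revealing the secret x itself.

def O(z, x):
    return (z * x) % 97          # some keyed computation (toy example)

def A(oracle, y):
    # A may call the oracle on values of its choice, but never sees x.
    return oracle(y) + oracle(y + 1)

x = 13                           # secret held by the game
a = A(lambda z: O(z, x), 5)      # a <- A^{O(., x)}(5)
assert a == (5 * 13) % 97 + (6 * 13) % 97
```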
a6e375a6 649
32609df3 650\prelimsec{Diffie-Hellman problems}
a6e375a6
MW
651\label{sec:dhp}
652
653The security of our protocols is related to the hardness of the
654computational, decisional, and gap Diffie-Hellman problems in the group $G$.
655We define these problems and what it means for them to be `hard' here.
656
657The \emph{computational} Diffie-Hellman problem (CDH) is as follows: given
658two group elements $X = x P$ and $Y = y P$, find $Z = x y P$.
32609df3 659\ifshort\else
a6e375a6
MW
660\begin{definition}[The computational Diffie-Hellman problem]
661 Let $(G, +)$ be a cyclic group generated by $P$. For any adversary $A$, we
662 say that $A$'s \emph{success probability} at solving the computational
663 Diffie-Hellman problem in $G$ is
664 \[ \Succ{cdh}{G}(A) =
32609df3 665 \Pr[ x \getsr I; y \getsr \Z/\#G\Z : A(x P, y P) = x y P ]
a6e375a6
MW
666 \]
667 where the probability is taken over the random choices of $x$ and $y$ and
668 any random decisions made by $A$. We say that the \emph{CDH insecurity
669 function} of $G$ is
670 \[ \InSec{cdh}(G; t) = \max_A \Succ{cdh}{G}(A) \]
671 where the maximum is taken over adversaries which complete in time $t$.
672\end{definition}
Certainly, if one can compute discrete logarithms in the group $G$ (i.e.,
given $x P$, find $x$), then one can solve the computational Diffie-Hellman
problem.  The converse is not clear, though.  Shoup \cite{Shoup:1997:LBD}
gives us some confidence in the difficulty of the problem by showing that a
\emph{generic} adversary -- i.e., one which makes no use of the specific
structure of a group -- has success probability no greater than $q^2/\#G$
after $q$ group operations.

This isn't quite sufficient for our purposes.  Our proofs will be able to
come up with (possibly) a large number of guesses for the correct answer, and
at most one of them will be correct.  Unfortunately, working out which one is
right seems, in general, to be difficult.  This is the \emph{decision}
Diffie-Hellman problem (DDH), which \cite{Shoup:1997:LBD} shows, in the
generic group model, is about as hard as CDH.  (See \cite{Boneh:1998:DDP} for
a survey of the decision Diffie-Hellman problem.)
\par\fi
Our reference problem will be a `multiple-guess computational Diffie-Hellman
problem' (MCDH), which is captured by a game as follows.  An adversary is
given a pair of group elements $(x P, y P)$, and an oracle $V(\cdot)$ which
accepts group elements as input.  The adversary wins the game if it queries
$V(x y P)$.

\begin{figure}
  \begin{program}
    $\Game{mcdh}{G}(A)$: \+ \\
      $w \gets 0$; \\
      $x \getsr \Z/\#G\Z$; $y \getsr \Z/\#G\Z$; \\
      $A^{V(\cdot)}(x P, y P)$; \\
      \RETURN $w$;
    \next
    Function $V(Z)$: \+ \\
      \IF $Z = x y P$ \THEN \\ \ind
        $w \gets 1$; \\
        \RETURN $1$; \- \\
      \RETURN $0$;
  \end{program}

  \caption{The multiple-guess computational Diffie-Hellman problem:
    $\Game{mcdh}{G}(A)$}
  \label{fig:mcdh}
\end{figure}

\begin{definition}[The multiple-guess computational Diffie-Hellman problem]
  \label{def:mcdh}
  Let $(G, +)$ be a cyclic group generated by $P$.  For some adversary $A$,
  we say that $A$'s \emph{success probability} at solving the multiple-guess
  computational Diffie-Hellman problem in $G$ is
  \[ \Succ{mcdh}{G}(A) = \Pr[\Game{mcdh}{G}(A) = 1] \]
  where $\Game{mcdh}{G}(A)$ is shown in figure~\ref{fig:mcdh}.  We say that
  the \emph{MCDH insecurity function of $G$} is
  \[ \InSec{mcdh}(G; t, q_V) = \max_A \Succ{mcdh}{G}(A) \]
  where the maximum is taken over adversaries which complete in time $t$ and
  make at most $q_V$ oracle queries.
\end{definition}
\ifshort
We can (loosely) relate the difficulty of MCDH to the difficulty of
the standard CDH problem, in which the adversary is allowed only a single
guess.
\else

Note that our MCDH problem is not quite the `gap Diffie-Hellman problem'
(GDH).  The gap problem measures the intractability of solving CDH even with
the assistance of an oracle for solving (restricted) decision Diffie-Hellman
problems in the group.  Specifically, the adversary is given $(X, Y) = (x P,
y P)$ and tries to find $Z = x y P$, as for CDH, but now it has access to an
oracle $D(R, S)$ which answers $1$ if $S = x R$ and $0$ otherwise.

Clearly MCDH is at least as hard as GDH, since our simple verification oracle
$V(Z)$ can be simulated with the gap problem's DDH oracle, as $D(Y, Z)$.
However, we can (loosely) relate the difficulty of MCDH to the difficulty of
CDH.
\fi
\begin{proposition}[Comparison of MCDH and CDH security]
  For any cyclic group $(G, +)$,
  \[ \InSec{mcdh}(G; t, q_V) \le
       \ifshort q_V\,\InSec{mcdh}(G; t + O(q_V), 1) \else
       q_V\,\InSec{cdh}(G; t + O(q_V)) \fi.
  \]
\end{proposition}
\begin{longproof}{The proof of this proposition may be found in the full
    version of this paper.}
  Let $A$ be an adversary attacking the multiple-guess computational
  Diffie-Hellman problem in $G$, and suppose that it runs in time $t$ and
  issues $q_V$ queries to its verification oracle.

  We use a sequence of games.  Game $\G0$ is the original MCDH attack game.
  In each game $\G{i}$, we let $S_i$ be the event that the adversary wins the
  game.

  Game $\G1$ is the same as $\G0$, except that we change the behaviour of the
  verification oracle.  Specifically, we make the oracle always return $0$.
  We claim that this doesn't affect the adversary's probability of winning,
  i.e., $\Pr[S_1] = \Pr[S_0]$.  To see this, note that if none of the
  adversary's $V(\cdot)$ queries was correct, then there is no change in the
  game; conversely, if any query was correct, then the adversary will have
  won regardless of its subsequent behaviour (which may differ arbitrarily
  between the two games).

  We are now ready to construct from $A$ an adversary $B$ attacking the
  standard computational Diffie-Hellman problem.
  \begin{program}
    Adversary $B(X, Y)$: \+ \\
      $n \gets 0$; \\
      \FOR $i \in \Nupto{q_V}$ \DO $Q_i \gets 0$; \\
      $A^{V(\cdot)}(X, Y)$; \\
      $r \getsr \Nupto{n}$; \\
      \RETURN $Q_r$;
    \next
    Function $V(Z')$: \+ \\
      $Q_n \gets Z'$; \\
      $n \gets n + 1$; \\
      \RETURN $0$;
  \end{program}
  Observe that $B$ provides $A$ with an accurate simulation of game $\G1$.
  Moreover, at the end of the algorithm, we have $0 < n \le q_V$, and the
  values $Q_0$, $Q_1$, \dots, $Q_{n-1}$ are the values of $A$'s oracle
  queries.  Hence, with probability $\Pr[S_1]$, at least one of the $Q_i$
  is the correct answer to the CDH problem.  Let $\epsilon = \Pr[S_1] =
  \Pr[S_0]$; we claim that $B$'s probability of success is at least
  $\epsilon/q_V$.  The proposition follows directly from this claim, since
  $A$ was chosen arbitrarily: we maximize over adversaries and count $B$'s
  resource usage.

  We now prove the above claim.  For $0 \le i < q_V$, let $W_i$ be the
  event that $Q_i = x y P$, i.e., that $Q_i$ is the correct response.  A
  simple union bound shows that
  \[ \sum_{0\le i<j} \Pr[W_i \mid n = j] \ge \epsilon. \]
  We now perform a calculation:
  \begin{eqnarray*}[rl]
    \Succ{cdh}{G}(B)
      & = \sum_{0\le i<q_V} \Pr[W_i \land r = i] \\
      & = \sum_{0<j\le q_V} \Pr[n = j]
            \biggl( \sum_{0\le i<j} \Pr[W_i \land r = i \mid n = j] \biggr) \\
      & = \sum_{0<j\le q_V} \Pr[n = j]
            \biggl( \frac{1}{j} \sum_{0\le i<j} \Pr[W_i \mid n = j] \biggr) \\
      &\ge \sum_{0<j\le q_V} \Pr[n = j] \frac{\epsilon}{j} \\
      &\ge \frac{\epsilon}{q_V} \sum_{0<j\le q_V} \Pr[n = j] \\
      & = \frac{\epsilon}{q_V},
  \end{eqnarray*}
  which completes the proof.
\end{longproof}
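Purely as an illustration (not part of the formal proof), the reduction above can be sketched in Python.  The toy group, the adversary `mcdh_adversary`, and all names here are our own assumptions; the wrapper mirrors game $\G1$ by answering every verification query with $0$ while recording the queries $Q_0, \ldots, Q_{n-1}$.

```python
import secrets

def cdh_adversary_from_mcdh(A):
    """Wrap an MCDH adversary A(X, Y, V) as a CDH adversary B(X, Y).

    As in game G1 of the proof, the simulated oracle V always answers 0
    but records A's queries; B then returns one of them at random."""
    def B(X, Y):
        queries = []              # Q_0, ..., Q_{n-1}
        def V(Z):
            queries.append(Z)
            return 0              # the oracle always returns 0 in G1
        A(X, Y, V)
        r = secrets.randbelow(len(queries))   # r chosen uniformly from N_n
        return queries[r]
    return B

# Toy demonstration in the order-1019 subgroup of F_2039^* (generator 4).
p, q, g = 2039, 1019, 4
x, y = 123, 456
X, Y, Z = pow(g, x, p), pow(g, y, p), pow(g, x * y, p)

def mcdh_adversary(X_, Y_, V):    # a hypothetical adversary that knows Z
    V(17)                         # one wrong guess ...
    V(Z)                          # ... then the correct one

B = cdh_adversary_from_mcdh(mcdh_adversary)
assert B(X, Y) in (17, Z)         # B succeeds when r picks the right query
```

With $n = 2$ recorded queries, $B$ returns the correct $Z$ with probability $1/2$, matching the $\epsilon/q_V$ loss in the proposition.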

\ifshort\else
\prelimsec{Example groups and encodings}

For nonnegative integers $0 \le n < 2^\ell$, there is a natural binary
encoding $N_\ell\colon \Nupto{2^\ell} \to \Bin^\ell$ which we can define
recursively as follows.
\[ N_0(0) = \emptystring \qquad
   N_\ell(n) = \begin{cases}
     N_{\ell-1}(n) \cat 0 & if $0 \le n < 2^{\ell-1}$ \\
     N_{\ell-1}(n - 2^{\ell-1}) \cat 1 & if $2^{\ell-1} \le n < 2^\ell$.
   \end{cases}
\]
Given an encoding $a = N_\ell(n)$ we can recover $n$ as
\[ n = \sum_{0\le i<\ell} a[i] 2^i. \]
Hence, given some limit $L \le 2^\ell$, we can encode elements of $\Nupto{L}$
using the functions $(e, d)$:
\[ e(L, \ell, n) = N_\ell(n) \qquad
   d(L, \ell, a) = \begin{cases}
     N_\ell^{-1}(a) & if $N_\ell^{-1}(a) < L$ \\
     \bot & otherwise
   \end{cases}
\]
The reader can verify that the functions $e(L, \ell, \cdot)$ and $d(L, \ell,
\cdot)$ satisfy the requirements of section~\ref{sec:bitenc}.
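For concreteness, the pair $(e, d)$ can be sketched in Python (the names `encode` and `decode` are ours); the decoder follows the convention $n = \sum_i a[i]\,2^i$ above, with `None` standing in for $\bot$.

```python
def encode(L, ell, n):
    """e(L, ell, n): the ell-bit encoding N_ell(n), least significant bit first."""
    assert 0 <= n < L <= 2**ell
    return [(n >> i) & 1 for i in range(ell)]

def decode(L, ell, a):
    """d(L, ell, a): invert N_ell, returning None (for 'bottom') if n >= L."""
    n = sum(a[i] << i for i in range(ell))
    return n if n < L else None
```

Encoding round-trips for every $n < L$, and strings decoding to a value outside $\Nupto{L}$ are rejected: `decode(500, 10, [1]*10)` is `None` since the string decodes to $1023 \ge 500$.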

Given some $q$ with $q < 2^{\ell_I}$, then, we can define an encoding
$(e_\F, d_\F)$ by $e_\F(n) = e(q, \ell_I, n)$ and $d_\F(a) = d(q, \ell_I,
a)$.

Let $p$ and $q$ be primes, with $q \mid (p - 1)$.  Then there is an order-$q$
subgroup of $\F_p^*$.  In practice, an order-$q$ element can be found easily
by taking elements $h \in \F_p^*$ at random and computing $g = h^{(p-1)/q}$
until $g \ne 1$; then $G = \langle g \rangle$ is a group of $q$ elements.
Assuming that $p$ and $q$ are sufficiently large, the Diffie-Hellman problems
seem to be difficult in $G$.  Some texts recommend additional restrictions on
$p$, in particular that $(p - 1)/2q$ be either prime or the product of large
primes.  Primes of this form protect against small-subgroup attacks; but our
protocols are naturally immune to these attacks, so such precautions are
unnecessary here.  Elements of $G$ can be encoded readily, since each element
$n + p\Z$ of $\F_p = \Z/p\Z$ has an obvious `representative' integer $n$ such
that $0 \le n < p$, and given $2^{\ell_G} > p$, we can encode $n$ as $e(p,
\ell_G, n)$, as above.
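The sampling procedure just described can be sketched as follows (a rough illustration; the toy primes $p = 2039 = 2 \cdot 1019 + 1$ and $q = 1019$ are our own example, far too small for real use).

```python
import secrets

def order_q_generator(p, q):
    """Find a generator of the order-q subgroup of F_p^*: take random
    h in F_p^* and compute g = h^((p-1)/q) mod p until g != 1.
    Assumes q is prime and q | p - 1."""
    assert (p - 1) % q == 0
    while True:
        h = secrets.randbelow(p - 2) + 2     # random h in {2, ..., p-1}
        g = pow(h, (p - 1) // q, p)
        if g != 1:
            return g                          # g has order exactly q

g = order_q_generator(2039, 1019)
assert g != 1 and pow(g, 1019, 2039) == 1     # g generates the order-q subgroup
```

Since $q$ is prime, any $g \ne 1$ with $g^q = 1$ has order exactly $q$, so the loop terminates with a valid generator after an expected constant number of trials.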

Alternatively, let $\F = \gf{p^f}$ be a finite field, and $E$ be an elliptic
curve defined over $\F$ such that the group $E(\F)$ of $\F$-rational points
of $E$ has a prime-order cyclic subgroup $G$.  Elements of $G$ can be
represented as pairs of elements of $\F$.  If $f = 1$, i.e., $\F = \Z/p\Z$,
then field elements can be encoded as above.  If $p = 2$, we can represent
the field as $\F_2[x]/(p(x))$ for some irreducible polynomial $p(x) \in
\F_2[x]$ of degree $f$.  An element $r \in \F$ can then be represented by a
polynomial $r(x)$ with degree less than $f$, and coefficients $c_i \in \{0,
1\}$, i.e.,
\[ r(x) = \sum_{0\le i<f} c_i x^i \]
and hence we can uniquely encode $r$ as an $f$-bit string $a$ such that $a[i]
= c_i$.
\fi


\prelimsec{Symmetric encryption}
\label{sec:sym-enc}

Our key-exchange protocol requires a symmetric encryption scheme.  Our
definition is fairly standard, except that, rather than specifying a
key-generation algorithm, we assume that key generation simply involves
selecting a string of a given length uniformly at random.
\begin{definition}[Symmetric encryption schemes]
  A \emph{symmetric encryption scheme} $\E = (\kappa, E, D)$ consists of:
  \begin{itemize}
  \item an integer $\kappa \ge 0$;
  \item a randomized \emph{encryption algorithm} $E$ which, on input $K \in
    \Bin^\kappa$ and $p \in \Bin^*$, outputs some $c \in \Bin^*$, written $c
    \gets E_K(p)$;
  \item a \emph{decryption algorithm} $D$ which, on input $K \in \Bin^\kappa$
    and $c \in \Bin^*$, outputs some $p' \in \Bin^* \cup \{\bot\}$, written
    $p' \gets D_K(c)$.
  \end{itemize}
  Furthermore, a symmetric encryption scheme must be \emph{sound}: that is,
  if $c \gets E_K(p)$ for some $K \in \Bin^\kappa$ and $p \in \Bin^*$, and
  $p' \gets D_K(c)$, then $p = p'$.
\end{definition}
Our security notion for symmetric encryption is the standard notion of
left-or-right indistinguishability of ciphertexts under chosen-ciphertext
attack.
\begin{definition}[IND-CCA]
  \label{def:ind-cca}
  Let $\E = (\kappa, E, D)$ be a symmetric encryption scheme, and $A$ be an
  adversary.  Let $\id{lr}_b(x_0, x_1) = x_b$ for $b \in \{0, 1\}$.  Let
  \[ P_b =
       \Pr[K \getsr \Bin^\kappa;
           b \gets A^{E_K(\id{lr}_b(\cdot, \cdot)), D_K(\cdot)}() :
           b = 1]
  \]
  An adversary is \emph{valid} if
  \begin{itemize}
  \item for any query to its encryption oracle $E_K(\id{lr}_b(x_0, x_1))$ we
    have $|x_0| = |x_1|$, and
  \item no query to the decryption oracle $D_K(\cdot)$ is equal to any reply
    from an encryption query.
  \end{itemize}
  If $A$ is valid, then we define its \emph{advantage} in attacking the
  security of $\E$ as follows:
  \[ \Adv{ind-cca}{\E}(A) = P_1 - P_0. \]
  Further, we define the \emph{IND-CCA insecurity function of $\E$} to be
  \[ \InSec{ind-cca}(\E; t, q_E, q_D) = \max_A \Adv{ind-cca}{\E}(A) \]
  where the maximum is taken over all valid adversaries $A$ which run in time
  $t$, and issue at most $q_E$ encryption and $q_D$ decryption queries.
\end{definition}
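The experiment in the definition can be sketched as a small Python harness (entirely our own illustration); the deliberately worthless identity `scheme' below is sound in the sense of the previous definition, yet a trivial adversary achieves advantage $P_1 - P_0 = 1$.

```python
import secrets

def ind_cca_game(kappa, E, D, adversary, b):
    """Run the left-or-right IND-CCA experiment for a fixed hidden bit b,
    enforcing the two validity conditions from the definition."""
    K = secrets.token_bytes(kappa)
    replies = set()                       # ciphertexts returned by E_K(lr_b(., .))

    def enc(x0, x1):
        assert len(x0) == len(x1)         # validity: equal-length inputs only
        c = E(K, (x0, x1)[b])
        replies.add(c)
        return c

    def dec(c):
        assert c not in replies           # validity: never decrypt oracle replies
        return D(K, c)

    return adversary(enc, dec)

# A broken scheme: 'encryption' is the identity map (sound, but insecure).
E_id = lambda K, p: bytes(p)
D_id = lambda K, c: bytes(c)
adv = lambda enc, dec: int(enc(b"L", b"R") == b"R")   # reads off the hidden bit

P0 = ind_cca_game(16, E_id, D_id, adv, 0)
P1 = ind_cca_game(16, E_id, D_id, adv, 1)
assert P1 - P0 == 1                       # maximal advantage against E_id
```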


\subsection{Simulations}
\label{sec:sim}

In section~\ref{sec:zk-ident}, we shall prove that our identification
protocol is zero-knowledge; in section~\ref{sec:denial}, we show that our
key-exchange protocol is deniable.  In both of these proofs, we shall need to
demonstrate \emph{simulatability}.

\ifshort

We consider an adversary~$A$ interacting with a `world'~$W$; we model both as
probabilistic algorithms.  Both $A$ and~$W$ are given a common input~$c$; the
world is additionally given a private input~$w$; these are chosen by a
randomized initialization function $I$.  The adversary is additionally given
an auxiliary input~$u$ computed from $w$ by a randomized algorithm~$U$.  All
these algorithms -- the adversary and the world, but also the initialization
and auxiliary-input algorithms $I$ and~$U$ -- have access to a number of
random oracles $\mathcal{H} = (H_0, H_1, \ldots, H_{n-1})$.  The adversary
eventually decides to stop interacting, and produces an output~$a$.

A \emph{simulator} for $A$'s interaction with $W$ is an algorithm $S^A$ which
attempts to produce a similar output distribution, but without interacting
with $W$.  The simulator is given the same inputs $(c, u)$ as we gave
to~$A$, and $S$ is also allowed to query the random oracles~$\mathcal{H}$.

To measure the effectiveness of a simulator, we consider a distinguisher~$D$
which is given $(c, u, a)$, and access to $\mathcal{H}$, and returns a bit
$b$ representing its verdict as to whether the output $a$ was produced by the
adversary or the simulator.

\else

\subsubsection{General framework}
Consider a game in which an adversary~$A$ interacts with some `world'~$W$,
which we shall represent as a probabilistic algorithm.  The world may in fact
represent a number of honest parties communicating in a concurrent fashion,
but we can consider them as a single algorithm for our present purposes.

Initially the world and the adversary are both given the same \emph{common
  input}~$c$; in addition, the world is given a \emph{private input}~$w$.
Both $c$ and~$w$ are computed by an \emph{initialization function}~$I$, which
is considered to be part of the definition of the game.  Finally, the
adversary decides somehow that it has finished interacting, and outputs a
value~$a$.  All this we notate as
\[ (w, c) \gets I(); a \gets A^{W(w, c)}(c). \]
This game is \emph{simulatable} if there is an algorithm~$S$ -- the
\emph{simulator} -- which can compute the same things as~$A$, but all by
itself without interacting with the world.  That is, we run the simulator on
the common input~$c$, allowing it to interact in some way with the
adversary~$A$, and finally giving us an output~$s$.
\[ (w, c) \gets I(); s \gets S^A(c). \]
We shall say that the simulator is \emph{effective} if it's difficult to tell
whether a given string was output by the adversary after interacting with the
world, or by the simulator running by itself.  That is, for any algorithm~$D$
-- a \emph{distinguisher} -- running in some bounded amount of time, its
advantage
\begin{spliteqn*}
  \Pr[(w, c) \gets I(); a \gets A^{W(w, c)}(c);
      b \gets D(c, a) : b = 1] - {} \\
  \Pr[(w, c) \gets I(); s \gets S^A(c); b \gets D(c, s) : b = 1]
\end{spliteqn*}
is small.  (Note that we gave the distinguisher the common input as well as
the output of the adversary or the simulator.)
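The two experiments above can be phrased as a small Python harness (all names are ours, and the trivial world, adversary and simulator below are placeholder assumptions, chosen so that the world's replies never depend on $w$ and the simulation is therefore perfect).

```python
def advantage(I, W, A, S, D, trials=1000):
    """Estimate the distinguisher's advantage: its acceptance rate on real
    interactions A <-> W minus its rate on the simulator's outputs."""
    real = sim = 0
    for _ in range(trials):
        w, c = I()
        real += D(c, A(lambda msg: W(w, c, msg), c))   # real experiment
        w, c = I()
        sim += D(c, S(A, c))                           # simulated experiment
    return (real - sim) / trials

# A toy game where the world ignores its private input w, so a simulator
# running A against a fake world is perfect and the advantage is exactly 0.
I = lambda: (42, 7)                            # w = 42, common input c = 7
W = lambda w, c, msg: c + msg                  # replies use only public data
A = lambda world, c: world(1)                  # A outputs the world's reply
S = lambda A_, c: A_(lambda msg: c + msg, c)   # fake world; no w needed
D = lambda c, a: int(a == c + 1)

assert advantage(I, W, A, S, D) == 0.0
```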

It's usual to study \emph{transcripts} of interactions in these kinds of
settings.  We are considering arbitrary adversarial outputs here, so this
certainly includes adversaries which output a transcript of their
interactions.  Indeed, for any adversary~$A$, we could construct an
adversary~$A_T$ which performs the same computation, and outputs the same
result, but also includes a complete transcript of $A$'s interaction with the
world.  Therefore we're just providing additional generality.

\subsubsection{Random oracles}
We shall be considering interactions in which all the parties have access to
several random oracles.  We could simply say that the random oracles are part
of the world~$W$.  In the setting described above, only the adversary
actually interacts with the world (and therefore would be able to query
random oracles).  The simulator would be forced to `make up' its own random
oracle, and the distinguisher would have to study the distributions of the
random-oracle queries and their responses to make up its mind about which it
was given.

However, this would be a poor model for the real world, since once we
instantiate the random oracle with a hash function, we know that everyone
would actually be able to compute the hash function for themselves.  Thus
a distinguisher in the real world would be able to tell the difference
immediately between a real interaction and the simulated transcript, since
the `random oracle' queries recorded in the latter would be wrong!

Therefore we decide not to include the random oracles as part of the world,
but instead allow all the participants -- adversary, simulator and
distinguisher -- access to them.  If we denote by~$\mathcal{H} = (H_0, H_1,
\ldots, H_{n-1})$ the collection of random oracles under consideration, the
expression for the distinguisher's advantage becomes
\begin{spliteqn*}
  \Pr[(w, c) \gets I(); a \gets A^{W(w, c), \mathcal{H}}(c);
      b \gets D^{\mathcal{H}}(c, a) : b = 1] - {} \\
  \Pr[(w, c) \gets I(); s \gets S^{A, \mathcal{H}}(c);
      b \gets D^{\mathcal{H}}(c, s) : b = 1].
\end{spliteqn*}

\subsubsection{Auxiliary inputs}
If an adversary's output can be effectively simulated, then we can
confidently state that the adversary `learnt' very little of substance from
its interaction, and certainly very little it can \emph{prove} to anyone
else.  However, as we have described the setting so far, we fix an adversary
before we choose inputs to the world, so our model says little about what an
adversary which has already acquired some knowledge might learn beyond that.
For example, an adversary might overhear some other conversation between
honest parties and be able to use this to its advantage.

To this end, we give the adversary an \emph{auxiliary input}~$u$, computed by
an algorithm~$U$.  We give $U$ both $c$ and $w$, in order to allow the
adversary to gain some (possibly partial) knowledge of the secrets of the
other parties.  We also allow $U$ access to the random oracles~$\mathcal{H}$,
because clearly in the `real world' it would be ridiculous to forbid such an
algorithm from computing a publicly-known hash function.

The simulator and distinguisher are also given the auxiliary input.  The
simulator is meant to represent the adversary's ability to compute things on
its own, without interacting with the world, and since the adversary is given
the auxiliary input, the simulator must be too.  The distinguisher must be
given the auxiliary input because otherwise the simulator could just `make
up' plausible-looking inputs.

\fi

\ifshort
Because we're interested in a concrete, quantitative analysis, we must
constrain the resource usage of the various algorithms described above.
Specifically, we shall be interested in
\else

\subsubsection{Resource limits}
We shall not allow our algorithms to perform completely arbitrary
computations and interactions.  Instead, we impose limits on the amount of
time they are allowed to take, the number of random-oracle queries they make,
and so on.  Specifically, we are interested in
\fi
\begin{itemize}
\item the time $t_A$ taken by the adversary and $t_D$ taken by the
  distinguisher,
\item the number of oracle queries $\mathcal{Q}_A = (q_{A,0}, q_{A,1},
  \ldots, q_{A,n-1})$ made by the adversary, and $\mathcal{Q}_D$ made by the
  distinguisher,
\item a number of resource bounds $\mathcal{R}$ on the adversary's
  interaction with the world (e.g., number of messages of various kinds sent
  and received), and
\item a number of bounds $\mathcal{U}$ on the contents of the adversary's
  auxiliary input~$u$.
\end{itemize}
Sometimes we shall not be interested in proving simulatability of adversaries
with auxiliary inputs.  We write $\mathcal{U} = 0$ to indicate that auxiliary
input is not allowed.

\ifshort\else

\subsubsection{World syntax}
It will be worth our while being more precise about what a `world' actually
is, syntactically.  We define a world to be a single, randomized algorithm
taking inputs $(\iota, \sigma, \tau, \mu) \in (\Bin^*)^4$; the algorithm's
output is a pair $(\sigma', \rho) \in (\Bin^*)^2$.  We show how the
adversary's interaction is mapped on to this world algorithm in
figure~\ref{fig:sim-world}.
\begin{itemize}
\item The `input' $\iota$ is the result of the initialization function~$I$.
  That is, it is the pair $(w, c)$ of the world's private input and the
  common input.
\item The `state' $\sigma$ is empty on the world's first invocation; on each
  subsequent call, the value of the world's output $\sigma'$ is passed back.
  In this way, the world can maintain state.
\item The `type' $\tau$ is a token giving the type of invocation this is.
\item The `message' $\mu$ is any other information passed in; its form will
  typically depend on the type~$\tau$ of the invocation.
\item The `new state' $\sigma'$ is the value of $\sigma$ to pass to the next
  invocation of the world.
\item The `reply' $\rho$ is the actual output of the invocation.
\end{itemize}
There are two special invocation types.  The adversary is \emph{forbidden}
from making special invocations.
\begin{itemize}
\item The special invocation type $\cookie{init}$ is used to allow the world to
  prepare an initial state.  The world is invoked as
  \[ W^{\mathcal{H}}(\iota, \emptystring, \cookie{init}, \emptystring) \]
  and should output an initial state $\sigma'$.  The world's reply $\rho$ is
  ignored.  (Recall that $\emptystring$ represents the empty string.)
\item The special invocation type $\cookie{random}$ is used to inform the
  world that the adversary has issued a random oracle query.  The world is
  invoked as
  \[ W^{\mathcal{H}}(\iota, \sigma, \cookie{random}, (i, x, h)) \]
  to indicate that the adversary has queried its random oracle $H_i(\cdot)$
  on the input $x$, giving output~$h$.  The world may output an updated state
  $\sigma'$; its reply $\rho$ is ignored.
\end{itemize}
The latter special query is a technical device used to allow the `fake-world'
simulators we define below to be aware of the adversary's random oracle
queries without being able to `program' the random oracle.  Including it here
does little harm, and simplifies the overall exposition.

\begin{figure}
  \begin{program}
    Interaction $A^{W(w, c), \mathcal{H}}(c, u)$: \+ \\
      $(\sigma, \rho) \gets
        W((w, c), \emptystring, \cookie{init}, \emptystring)$; \\
      $a \gets A^{\id{world}(\cdot, \cdot),
        \id{random}(\cdot, \cdot)}(c, u)$; \\
      \RETURN $a$;
    \newline
    Function $\id{world}(\tau, \mu)$: \+ \\
      \IF $\tau \in \{\cookie{init}, \cookie{random}\}$ \THEN
        \RETURN $\bot$; \\
      $(\sigma, \rho) \gets W((w, c), \sigma, \tau, \mu)$; \\
      \RETURN $\rho$; \-
    \next
    Function $\id{random}(i, x)$: \+ \\
      $h \gets H_i(x)$; \\
      $(\sigma, \rho) \gets
        W((w, c), \sigma, \cookie{random}, (i, x, h))$; \\
      \RETURN $h$;
  \end{program}

  \caption{Interacting with a world: Interaction $A^{W, \mathcal{H}}$}
  \label{fig:sim-world}
\end{figure}

\subsubsection{Definitions}
We are now ready to begin making definitions.
\fi

\begin{definition}[Simulation security]
  \label{def:sim}
  Consider the game described above, with the initialization function~$I$,
  and the world~$W$: let $A$ be an adversary, and let~$U$ be an
  auxiliary-input function; let $S$ be a simulator, and let $D$ be a
  distinguisher.  We define $D$'s \emph{advantage against $S$'s simulation of
  $A$'s interaction with~$W$ with auxiliary inputs provided by~$U$} to be
  \[ \Adv{sim}{W, I, S}(A, U, D) =
       \Pr[\Game{real}{W, I, S}(A, U, D) = 1] -
       \Pr[\Game{sim}{W, I, S}(A, U, D) = 1]
  \]
  where the games are as shown in figure~\ref{fig:sim}.
  Furthermore, we define the \emph{simulator's insecurity function} to be
  \[ \InSec{sim}(W, I, S;
       t_D, t_A, \mathcal{Q}_D, \mathcal{Q}_A, \mathcal{R}, \mathcal{U}) =
       \max_{D, A, U} \Adv{sim}{W, I, S}(A, U, D)
  \]
  where the maximum is taken over all distinguishers~$D$ running in
  time~$t_D$ and making at most $\mathcal{Q}_D$ random-oracle queries, and
  all adversaries~$A$ running in time~$t_A$, making at most $\mathcal{Q}_A$
  random-oracle queries, not exceeding the other stated resource
  bounds~$\mathcal{R}$ on its interaction with~$W$, and auxiliary-input
  functions producing output not exceeding the stated bounds~$\mathcal{U}$.
\end{definition}
\begin{remark}
  The usual definitions of zero-knowledge, for example, require the simulator
  to work for all choices of inputs (common, private and auxiliary), rather
  than for random choices.  Our definition therefore looks weaker.  Our proof
  of zero-knowledge actually carries through to the traditional
  stronger-looking definition.  Critically, however, the standard
  universal quantification over inputs fails to capture deniability in the
  random oracle model, since the inputs can't therefore depend on the random
  oracle.  Our formulation therefore actually gives \emph{stronger}
  deniability than the usual one.
\end{remark}

\begin{figure}
  \begin{program}
    $\Game{real}{W, I, S}(A, U, D)$: \+ \\
      $(w, c) \gets I()$; \\
      $u \gets U^{\mathcal{H}}(w, c)$; \\
      $a \gets A^{W(w, c), \mathcal{H}}(c, u)$; \\
      $b \gets D^{\mathcal{H}}(c, u, a)$; \\
      \RETURN $b$;
    \next
    $\Game{sim}{W, I, S}(A, U, D)$: \+ \\
      $(w, c) \gets I()$; \\
      $u \gets U^{\mathcal{H}}(w, c)$; \\
      $s \gets S^{A, \mathcal{H}}(c, u)$; \\
      $b \gets D^{\mathcal{H}}(c, u, s)$; \\
      \RETURN $b$;
  \end{program}

  \caption{Games for simulation: $\Game{real}{W, I, S}$ and
    $\Game{sim}{W, I, S}$}
  \label{fig:sim}
\end{figure}

\ifshort\else

\subsubsection{Fake-world simulators}
The simulators we shall be considering in the present paper are of a specific
type which we call `fake-world simulators'.  They work by running the
adversary in a fake `cardboard cut-out' world, and attempting to extract
enough information from the adversary's previous interactions and random
oracle queries to maintain a convincing illusion.

That is, the behaviour of a fake-world simulator~$S$ is simply to allow the
adversary to interact with a `fake world'~$W'$, which was not given the
world's private input.  That is, there is some world $W'$ such that
\[ S^{A, \mathcal{H}}(c, u) \equiv A^{W'(u, c), \mathcal{H}}(c, u). \]
Fake-world simulators are convenient because they allow us to remove the
distinguisher~$D$ from consideration, as the following definition shows.
\begin{definition}[Fake-world simulation security]
  \label{def:fakesim}
  Let $I$, $W$ and $U$ be as in definition~\ref{def:sim}.  Let $A$ be an
  adversary which outputs a single bit.  Let $S$ be a fake-world simulator.
  We define $A$'s \emph{advantage against $S$'s fake-world simulation of $W$
  with auxiliary inputs provided by~$U$} to be
  \begin{spliteqn*}
    \Adv{fw}{W, I, S}(A, U) =
      \Pr[(w, c) \gets I(); u \gets U^{\mathcal{H}}(w, c);
          b \gets A^{W(w, c), \mathcal{H}}(c, u) : b = 1] - {} \\
      \Pr[(w, c) \gets I(); u \gets U^{\mathcal{H}}(w, c);
          b \gets S^{A, \mathcal{H}}(c, u) : b = 1]
  \end{spliteqn*}
  Furthermore, we define the \emph{simulator's insecurity function} to be
  \[ \InSec{fw}(W, I, S;
       t, \mathcal{Q}, \mathcal{R}, \mathcal{U}) =
       \max_{A, U} \Adv{fw}{W, I, S}(A, U)
  \]
  where the maximum is taken over all adversaries~$A$ running in time~$t$,
  making at most $\mathcal{Q}$ random-oracle queries, not exceeding the other
  stated resource bounds~$\mathcal{R}$ on its interaction with~$W$, and
  auxiliary-input functions producing output not exceeding the stated
  bounds~$\mathcal{U}$.
\end{definition}
It remains for us to demonstrate that this is a valid way of analysing
simulators; the following simple proposition shows that this is indeed the
case.
\begin{proposition}[Fake-world simulation]
  \label{prop:fakesim}
  Let $I$ be an initialization function and let $W$ be a world.  Then, for
  any fake-world simulator~$S$,
  \[ \InSec{sim}(W, I, S; t_D, t_A, \mathcal{Q}_D, \mathcal{Q}_A,
       \mathcal{R}, \mathcal{U}) \le
     \InSec{fw}(W, I, S; t_A + t_D, \mathcal{Q}_D + \mathcal{Q}_A,
       \mathcal{R}, \mathcal{U})
  \]
  (where addition of query bounds $\mathcal{Q}$ is done elementwise).
\end{proposition}
\begin{proof}
  Let $W$ and $I$ be as in the proposition statement; also let a
  distinguisher~$D$ running in time~$t_D$ and making $\mathcal{Q}_D$
  random-oracle queries, an adversary~$A$ running in time~$t_A$ and making
  $\mathcal{Q}_A$ random-oracle queries and interacting with its world within
  the stated bounds~$\mathcal{R}$, an auxiliary-input function~$U$ satisfying
  the constraints~$\mathcal{U}$ on its output, and a fake-world simulator~$S$
  all be given.

  We construct an adversary~$B$ outputting a single bit as follows:
  \begin{program}
    Adversary $B^{W, \mathcal{H}}(c, u)$: \+ \\
      $a \gets A^{W, \mathcal{H}}(c, u)$; \\
      $b \gets D^{\mathcal{H}}(c, u, a)$; \\
      \RETURN $b$;
  \end{program}
  A glance at definitions \ref{def:sim} and~\ref{def:fakesim} and the
  resources used by $B$ shows that
  \[ \Adv{sim}{W, I, S}(A, U, D) = \Adv{fw}{W, I, S}(B, U)
       \le \InSec{fw}(W, I, S; t_D + t_A, \mathcal{Q}_D + \mathcal{Q}_A,
         \mathcal{R}, \mathcal{U})
  \]
  as required.
\end{proof}
\fi

%%%--------------------------------------------------------------------------

\section{A zero-knowledge identification scheme}
\label{sec:zk-ident}


\subsection{Description}

Here we present a simple zero-knowledge identification scheme.  Fix some
group $G$ with prime order $q = \#G$.  Suppose Alice chooses a private key $x
\inr \gf{q}$, and publishes the corresponding public key $X = x P$.  Let
$H_I\colon G^2 \to \Bin^{\ell_I}$ be a secure hash function.  Here's a simple
protocol which lets her prove her identity to Bob.
\begin{enumerate}
\item Bob selects a random $r \inr \gf{q}$, and computes $R = r P$, $Y = r X$,
  and $c = r \xor H_I(R, Y)$.  He sends the pair $(R, c)$ to Alice as his
  \emph{challenge}.
\item Alice receives $(R, c)$.  She computes $Y' = x R$ and $r' = c \xor
  H_I(R, Y')$, and checks that $R = r' P$.  If so, she sends $Y'$ as her
  \emph{response}; otherwise she sends $\bot$.
\item Bob receives $Y'$ from Alice.  He checks that $Y' = Y$.  If so, he
  accepts that he's talking to Alice; otherwise he becomes suspicious.
\end{enumerate}
We name this the Wrestlers Identification Protocol in~$G$, $\Wident^G$ (we
drop the superscript to refer to the protocol in general, or when no
ambiguity is likely to result).  A summary is shown in
figure~\ref{fig:wident}.
1311
\begin{figure}
  \begin{description}
  \item[Setup] Group $G = \langle P \rangle$; $\#G = q$ is prime.
    $H_I(\cdot, \cdot)$ is a secure hash.
  \item[Private key] $x \inr \gf{q}$.
  \item[Public key] $X = x P$.
  \item[Challenge] $(R, c)$ where $r \inr \gf{q}$, $R = r P$, $c = r \xor
    H_I(R, r X)$.
  \item[Response] $x R = r X$ if $R = (c \xor H_I(R, x R)) P$; otherwise
    $\bot$.
  \end{description}

  \caption{Summary of the Wrestlers Identification Protocol, $\Wident$}
  \label{fig:wident}
\end{figure}
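To make the message flow concrete, here is a minimal Python sketch of a
$\Wident$ run over a toy Schnorr group, written multiplicatively (so $x P$
becomes $g^x \bmod p$). The tiny parameters, the SHA-256 stand-in for $H_I$
and the choice $\ell_I = 16$ are illustrative assumptions only, not part of
the protocol.

```python
import hashlib
import secrets

# Toy parameters, for illustration only (far too small for real use;
# a deployment would pick a large elliptic-curve or Schnorr group).
p, q, g = 2039, 1019, 4   # g generates the order-q subgroup of Z_p^*
ELL_I = 16                # bit length of H_I's output; must cover log2(q)

def H_I(R, Y):
    """Stand-in for the random oracle H_I: G^2 -> {0,1}^ELL_I."""
    d = hashlib.sha256(f"{R},{Y}".encode()).digest()
    return int.from_bytes(d[:ELL_I // 8], "big")

def keygen():
    x = secrets.randbelow(q - 1) + 1      # private key x
    return x, pow(g, x, p)                # public key X = g^x

def challenge(X):
    """Bob's side: build a challenge (R, c), remembering Y for later."""
    r = secrets.randbelow(q - 1) + 1
    R, Y = pow(g, r, p), pow(X, r, p)
    return (R, r ^ H_I(R, Y)), Y

def respond(x, R, c):
    """Alice's side: answer only if the challenge proves knowledge of r."""
    Y1 = pow(R, x, p)                     # Y' = R^x
    r1 = c ^ H_I(R, Y1)                   # r' = c xor H_I(R, Y')
    return Y1 if pow(g, r1, p) == R else None

x, X = keygen()
(R, c), Y = challenge(X)
assert respond(x, R, c) == Y              # honest run: Bob accepts
assert respond(x, R, c ^ 1) is None       # tampered check value is rejected
```

Note how the final check mirrors Alice's test $R = r' P$: a challenge built
without knowledge of the corresponding $r$ is rejected before any useful
response leaks.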


\subsection{Security}

In order to evaluate the security of our protocol, we present a formal
description of the algorithms involved in figure~\ref{fig:wident}. Here, the
hash function $H_I(\cdot, \cdot)$ is modelled as a random oracle.

\begin{figure}
  \begin{program}
    Function $\id{setup}()$: \+ \\
      $x \getsr \gf{q}$; \\
      $X \gets x P$; \\
      \RETURN $(x, X)$;
    \ifshort\newline\else\next\fi
    Function $\id{challenge}^{H_I(\cdot, \cdot)}(X)$: \+ \\
      $r \getsr \gf{q}$; \\
      $R \gets r P$; $Y \gets r X$; \\
      $h \gets H_I(R, Y)$; $c \gets r \xor h$; \\
      \RETURN $(Y, R, c)$; \- \\[\medskipamount]
    Function $\id{verify}(Y, Y')$: \+ \\
      \IF $Y' = Y$ \THEN \RETURN $1$; \\
      \RETURN $0$;
    \next
    Function $\id{response}^{H_I(\cdot, \cdot)}(R, c, x)$: \+ \\
      $Y' \gets x R$; \\
      $h \gets H_I(R, Y')$; $r' \gets c \xor h$; \\
      \IF $R \ne r' P$ \THEN \RETURN $\bot$; \\
      \RETURN $Y'$;
  \end{program}

  \caption{Functions implementing $\Wident$ in the random oracle model}
  \label{fig:wident-ro}
\end{figure}

\subsubsection{Completeness}
Suppose that Bob really is talking to Alice. Note that $Y' = x R = x (r P) =
r (x P) = r X = Y$. Hence $r' = c \xor H_I(R, Y') = c \xor H_I(R, Y) = r$,
so $r' P = r P = R$, so Alice returns $Y' = Y$ to Bob. Therefore $\Wident$
is \emph{complete}: if Bob really is communicating with Alice then he
accepts.

\subsubsection{Soundness}
We next show that impersonating Alice is difficult. The natural way to prove
this would be to give an adversary a challenge and prove that its probability
of giving a correct response is very small. However, we prove a stronger
result: we show that if the adversary can respond correctly to any of a large
collection of challenges then it can solve the MCDH problem.

Consider the game $\Game{imp-$n$}{\Wident}$ shown in
figure~\ref{fig:wident-sound}. An adversary's probability of successfully
impersonating Alice in our protocol, by correctly responding to any one of
$n$ challenges, is exactly its probability of winning the game (i.e., causing
it to return $1$).

\begin{figure}
  \begin{program}
    $\Game{imp-$n$}{\Wident}(A)$: \+ \\
      $H_I \getsr \Func{G^2}{\Bin^{\ell_I}}$; \\
      $(x, X) \gets \id{setup}()$; \\
      $\id{win} \gets 0$; \\
      $\Xid{R}{map} \gets \emptyset$; \\
      $\mathbf{c} \gets \id{challenges}(n)$; \\
      $(R', Y') \gets A^{H_I(\cdot, \cdot), \id{check}(\cdot, \cdot)}
        (X, \mathbf{c})$; \\
      \RETURN $\id{win}$;
    \newline
    Function $\id{challenges}(n)$: \+ \\
      \FOR $i \in \Nupto{n}$ \DO \\ \ind
        $(Y, R, c) \gets \id{challenge}^{H_I(\cdot, \cdot)}(X)$; \\
        $\Xid{R}{map} \gets \Xid{R}{map} \cup \{ R \mapsto Y \}$; \\
        $\mathbf{c}[i] \gets (R, c)$; \- \\
      \RETURN $\mathbf{c}$; \next
    Function $\id{check}(R', Y')$: \+ \\
      \IF $R' \notin \dom \Xid{R}{map}$ \THEN \RETURN $0$; \\
      $Y \gets \Xid{R}{map}(R')$; \\
      \IF $\id{verify}(Y, Y')$ \THEN \\ \ind
        $\id{win} \gets 1$; \\
        \RETURN $1$; \- \\
      \RETURN $0$;
  \end{program}

  \caption{Soundness of $\Wident$: $\Game{imp-$n$}{\Wident}(A)$}
  \label{fig:wident-sound}
\end{figure}

\begin{theorem}[Soundness of $\Wident$]
  \label{thm:wident-sound}
  Let $A$ be any adversary running in time $t$ and making $q_I$ queries to
  its random oracle, and $q_V$ queries to its verification oracle. Let $G$
  be a cyclic group. Then
  \[ \Pr[\Game{imp-$n$}{\Wident^G}(A) = 1] \le
       \InSec{mcdh}(G; t', q_I + q_V)
  \]
  where $t' = t + O(q_I) + O(q_V)$.
\end{theorem}
\begin{remark}
  Note that the security bound here is \emph{independent} of the value of
  $n$.
\end{remark}
\begin{longproof}{The proof of this theorem can be found in the full version
    of the paper.}
  We prove this by defining a sequence of games $\G{i}$. The first will be
  the same as the attack game $\Game{imp-$n$}{\Wident}(A)$ and the others
  will differ from it in minor ways. In each game $\G{i}$, let $S_i$ be the
  event that $A$ wins the game -- i.e., that it successfully impersonates the
  holder of the private key~$x$.

  Let game $\G0$ be the attack game $\Game{imp-$n$}{\Wident}(A)$, and let
  $(R', Y')$ be the output of $A$ in the game.

  We define a new game $\G1$ which is the same as $\G0$, except that we query
  the random oracle $H_I$ at $(R', Y')$ whenever the adversary queries
  $\id{check}(R', Y')$. (We don't charge the adversary for this.) This
  obviously doesn't affect the adversary's probability of winning, so
  $\Pr[S_1] = \Pr[S_0]$.

  Game $\G2$ is like $\G1$, except that we change the way we generate
  challenges and check their responses. Specifically, we define new
  functions $\id{challenges}_2$ and $\id{check}_2$, as shown in
  figure~\ref{fig:wident-sound-2}.

  \begin{figure}
    \begin{program}
      Function $\id{challenges}_2(n)$: \+ \\
        $r^* \getsr I$; $R^* \gets r^* P$; $Y^* \gets r^* X$; \\
        \FOR $i \in \Nupto{n}$ \DO \\ \ind
          $r \getsr I$; $R \gets r R^*$; $Y \gets r Y^*$; \\
          $h \gets H_I(R, Y)$; $c \gets r \xor h$; \\
          $\Xid{R}{map} \gets \Xid{R}{map} \cup \{ R \mapsto r \}$; \\
          $\mathbf{c}[i] \gets (R, c)$; \- \\
        \RETURN $\mathbf{c}$;
      \next
      Function $\id{check}_2(R', Y')$: \+ \\
        \IF $R' \notin \dom \Xid{R}{map}$ \THEN \RETURN $0$; \\
        $r \gets \Xid{R}{map}(R')$; \\
        \IF $\id{verify}(Y^*, Y'/r)$ \THEN \\ \ind
          $\id{win} \gets 1$; \\
          \RETURN $1$; \- \\
        \RETURN $0$;
    \end{program}

    \caption{Soundness of $\Wident$: $\id{challenges}_2$ and $\id{check}_2$}
    \label{fig:wident-sound-2}
  \end{figure}

  While we're generating and checking challenges in a more complicated way
  here, we're not actually changing the distribution of the challenges, or
  changing the winning condition. Hence $\Pr[S_2] = \Pr[S_1]$.

  Now we change the rules again. Let $\G3$ be the same as $\G2$ except that
  we change the winning condition. Instead, we say that the adversary wins
  if any of the queries to its random oracle $H_I(R', Y')$ would be a correct
  response -- i.e., $\id{check}_2(R', Y')$ would return $1$. Since we query
  the oracle on $(R', Y')$ on its behalf at the end of the game, no adversary
  can do worse in this game than it does in $\G2$, so we have $\Pr[S_3] \ge
  \Pr[S_2]$. (It's not hard to see that this only helps quite stupid
  adversaries. We can transform any adversary into one for which equality
  holds here.)

  Finally, let $\G4$ be the same as $\G3$ except that we change the way we
  generate challenges again: rather than computing $h$ and setting $c \gets h
  \xor r$, we just choose $c$ at random. Specifically, we use the new
  function, $\id{challenges}_4$, shown in figure~\ref{fig:wident-sound-4}.

  \begin{figure}
    \begin{program}
      Function $\id{challenges}_4(n)$: \+ \\
        $r^* \getsr I$; $R^* \gets r^* P$; $Y^* \gets r^* X$; \\
        \FOR $i \in \Nupto{n}$ \DO \\ \ind
          $r \getsr I$; $R \gets r R^*$; \\
          $c \getsr \Bin^{\ell_I}$; \\
          $\Xid{R}{map} \gets \Xid{R}{map} \cup \{ R \mapsto r \}$; \\
          $\mathbf{c}[i] \gets (R, c)$; \- \\
        \RETURN $\mathbf{c}$;
    \end{program}

    \caption{Soundness of $\Wident$: $\id{challenges}_4$}
    \label{fig:wident-sound-4}
  \end{figure}

  Since $H_I(\cdot, \cdot)$ is a random function, the adversary can only
  distinguish $\G4$ from $\G3$ if it queries its random oracle at some $(R, r
  Y^*)$. But if it does this, then by the rule introduced in $\G3$ it has
  already won. Therefore we must have $\Pr[S_4] = \Pr[S_3]$.

  Our $\id{challenges}_4$ function is interesting, since it doesn't actually
  make use of $r^*$ or $Y^*$ when generating its challenges. This gives us
  the clue we need to bound $\Pr[S_4]$: we can use adversary $A$ to solve the
  multiple-guess Diffie-Hellman problem in $G$ by simulating the game $\G4$.
  Specifically, we define the adversary $B$ as shown in
  figure~\ref{fig:wident-sound-cdh}. That is, for each query $A$ makes to
  its random oracle at a new pair $(R', Y')$, we see whether this gives us
  the answer we're looking for. We have $\Pr[S_0] \le \Pr[S_4] =
  \Succ{mcdh}{G}(B) \le \InSec{mcdh}(G; t', q_I + q_V)$ as required.

  \begin{figure}
    \begin{program}
      Adversary $B^{V(\cdot)}(X, R^*)$: \+ \\
        $F \gets \emptyset$; $\Xid{R}{map} \gets \emptyset$; \\
        \FOR $i \in \Nupto{n}$ \DO \\ \ind
          $r \getsr I$; $R \gets r R^*$; $c \getsr \Bin^{\ell_I}$; \\
          $\Xid{R}{map} \gets \Xid{R}{map} \cup \{ R \mapsto r \}$; \\
          $\mathbf{c}[i] \gets (R, c)$; \- \\
        $(R', Y') \gets A^{H_I(\cdot, \cdot), \id{check}(\cdot, \cdot)}
          (X, \mathbf{c})$; \\
        \IF $Y' \neq \bot$ \THEN $H_I(R', Y')$;
      \next
      Oracle $H_I(R', Y')$: \+ \\
        \IF $(R', Y') \in \dom F$ \THEN \\ \quad
          $h \gets F(R', Y')$; \\
        \ELSE \\ \ind
          $\id{check}(R', Y')$; \\
          $h \getsr \Bin^{\ell_I}$; \\
          $F \gets F \cup \{ (R', Y') \mapsto h \}$; \- \\
        \RETURN $h$;
      \- \\[\medskipamount]
      Oracle $\id{check}(R', Y')$: \+ \\
        \IF $R' \in \dom \Xid{R}{map}$ \THEN
          $V(Y'/\Xid{R}{map}(R'))$;
    \end{program}

    \caption{Soundness of $\Wident$: reduction from MCDH}
    \label{fig:wident-sound-cdh}
  \end{figure}
\end{longproof}

\subsubsection{Zero-knowledge}
Finally we must prove that $\Wident$ is (statistical) zero-knowledge -- i.e.,
that, except with very small probability, Bob learns nothing of use to him
except that he's interacting with Alice. To do this, we show that, for any
algorithm $B$ which Bob might use to produce his challenge to the real Alice,
there exists a simulator $S$ which produces transcripts distributed very
similarly to transcripts of real conversations between $B$ and Alice, the
difference being that $S$ doesn't know Alice's key. We shall show that the
statistical difference between the two distributions is at most
$2^{-\ell_I}$.

The intuition here is that Bob ought to know what answer Alice is going to
give him when he constructs his challenge. This is certainly true if he's
honest: his challenge is $R = r P$ for some $r$ he knows, so he won't learn
anything useful when Alice responds with $x R = r X$. However, if Bob sends
a challenge $R$ when he doesn't know the corresponding $r$, he learns
something potentially useful. The accompanying check value $c = r \xor
H_I(R, r X)$ keeps him honest.

To show this, we present an \emph{extractor} which, given any challenge $(R,
c)$ Bob can construct, and his history of random-oracle queries, either
returns a pair $(r, Y)$ such that $R = r P$ and $Y = r X$, or $\bot$;
moreover, in the latter case, the probability that Alice returns a response
$Y' \ne \bot$ given the challenge $(R, c)$ is at most $2^{-\ell_I}$. We can,
of course, readily convert this extractor into a simulator to prove the
zero-knowledge property of our protocol.

We shall actually consider a slightly more complex setting. We grant Bob
access to an oracle which produces random, correctly-formed challenges. We
require this to model the legitimate challenges of other parties when we
analyse the security of our key exchange protocol.

\begin{definition}[Discrete-log extractors]
  Let $T$, $B$ be randomized algorithms. Define the game
  $\Game{dl-ext}{G}(T, B)$ as shown in figure~\ref{fig:dlext}. The
  \emph{success probability of $T$ as a discrete-log extractor against $B$}
  is defined as
  \[ \Succ{dl-ext}{G}(T, B) = \Pr[\Game{dl-ext}{G}(T, B) = 1]. \]
\end{definition}

\begin{figure}
  \begin{program}
    $\Game{dl-ext}{G}(T, B):$ \+ \\
      $H_I \getsr \Func{G^2}{\Bin^{\ell_I}}$;
      $Q_H \gets \emptyset$; $Q_C \gets \emptyset$; \\
      $(x, X) \gets \id{setup}()$; \\
      $(R, c) \gets B^{\Xid{H_I}{trap}(\cdot, \cdot), C()}(x, X)$; \\
      $(r, Y) \gets T(R, c, Q_H)$; \\
      $Y' \gets x R$; $h' \gets H_I(R, Y')$; $r' \gets c \xor h'$; \\
      \IF $r \ne \bot$ \THEN \\ \quad
        \IF $Y = \bot \lor R \ne r P \lor Y \ne Y'$ \THEN \RETURN $0$; \\
      \IF $R = r' P$ \THEN $(r^*, Y^*) \gets (r', Y')$; \\
      \ELSE $(r^*, Y^*) \gets (\bot, \bot)$; \\
      \IF $(R, c) \in Q_C$ \THEN \RETURN $1$; \\
      \IF $(r, Y) = (r^*, Y^*)$ \THEN \RETURN $1$; \\
      \RETURN $0$;
    \next
    Oracle $\Xid{H_I}{trap}(R', Y')$: \+ \\
      $h \gets H_I(R', Y')$; \\
      $Q_H \gets Q_H \cup \{(R', Y', h)\}$; \\
      \RETURN $h$; \- \\[\medskipamount]
    Oracle $C()$: \+ \\
      $r \getsr \gf{q}$; \\
      $R \gets r P$; $c \gets r \xor H_I(R, r X)$; \\
      $Q_C \gets Q_C \cup \{(R, c)\}$; \\
      \RETURN $(R, c)$;
  \end{program}

  \caption{Discrete log extraction game: $\Game{dl-ext}{G}(T, B)$}
  \label{fig:dlext}
\end{figure}

Let's unpack this definition slightly. We make the following demands of our
extractor.
\begin{itemize}
\item It is given a bare `transcript' of $B$'s execution. In particular, it
  is given only its output and a list of $B$'s random-oracle queries in no
  particular order.
\item While the extractor is not given the private key~$x$, the adversary~$B$
  is given the private key.
\item We require that, if the extractor produces values $r, Y \ne \bot$ then
  $r$ and $Y$ are \emph{correct}; i.e., that $R = r P$ and $Y = x R$.
\item The extractor is explicitly \emph{not} given the outputs of the
  challenge-generation oracle $C()$, nor of the random-oracle queries issued
  by $C()$. However, we allow the extractor to fail (i.e., return $\bot$) if
  $B$ simply parrots one of its $C$-outputs.
\item The extractor is allowed -- indeed \emph{required} -- to fail if the
  challenge $(R, c)$ is \emph{invalid} (i.e., Alice would return $\bot$ given
  the challenge).
\end{itemize}
The resulting definition bears a striking similarity to the concept of
\emph{plaintext awareness} in \cite{Bellare:1998:RAN}.

Such an extractor indeed exists, as the following lemma states.
\begin{lemma}[Effectiveness of extractor $T_\Wident$]
  \label{lem:dlext}
  There exists a \emph{universal discrete-log extractor} $T_\Wident$, shown
  in figure~\ref{fig:twident}, such that, for any algorithm $B$,
  \[ \Succ{dl-ext}{G}(T_\Wident, B) \ge 1 - \frac{1}{2^{\ell_I}}. \]
  Moreover, if $B$ issues at most $q_H$ random-oracle queries, then the
  running time of $T_\Wident$ is $O(q_H)$.
\end{lemma}
\ifshort
The proof of this lemma is given in the full version of this paper.
\else
We prove this result at the end of the section. For now, let us see how to
prove that $\Wident$ is zero-knowledge.
\fi

\begin{figure}
  \begin{program}
    Extractor $T_\Wident(R, c, Q_H)$: \+ \\
      \FOR $(R', Y', h)$ \IN $Q_H$ \DO \\ \ind
        $r \gets h \xor c$; \\
        \IF $R = R' = r P \land Y' = r X$ \THEN \RETURN $(r, Y')$; \- \\
      \RETURN $(\bot, \bot)$;
  \end{program}

  \caption{The discrete-log extractor $T_\Wident$}
  \label{fig:twident}
\end{figure}

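As a sanity check, the extractor's single scan over the query history can be
sketched in Python. This is a hedged illustration only: the toy group and
SHA-256 oracle are assumptions, the public key $X$ is passed explicitly
rather than taken from the environment, and \texttt{None} stands for
$(\bot, \bot)$.

```python
import hashlib
import secrets

# Toy group and oracle, for illustration only (insecure sizes).
p, q, g = 2039, 1019, 4            # g generates the order-q subgroup of Z_p^*

def H_I(R, Y):
    d = hashlib.sha256(f"{R},{Y}".encode()).digest()
    return int.from_bytes(d[:2], "big")   # ell_I = 16 bits

def T_wident(R, c, Q_H, X):
    """Scan the verifier's oracle queries (R', Y', h) for one that explains
    the challenge (R, c); return (r, Y') on success, else None."""
    for (R1, Y1, h) in Q_H:
        r = h ^ c
        if R == R1 == pow(g, r, p) and Y1 == pow(X, r, p):
            return r, Y1
    return None

# An honest verifier's transcript: his single oracle query lands in Q_H,
# and the extractor recovers exactly the r and Y he used.
x = secrets.randbelow(q - 1) + 1
X = pow(g, x, p)
r = secrets.randbelow(q - 1) + 1
R, Y = pow(g, r, p), pow(X, r, p)
Q_H = [(R, Y, H_I(R, Y))]          # the verifier's random-oracle history
c = r ^ H_I(R, Y)
assert T_wident(R, c, Q_H, X) == (r, Y)
# A challenge whose r never went through the oracle yields None.
assert T_wident(R, c ^ 1, Q_H, X) is None
```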
We use the set-up described in section~\ref{sec:sim}. Our initialization
function~$I_\Wident$ just chooses a random $x \in \gf{q}$ as the world
private input and sets $X = x P$ as the common input. In the `real world',
the adversary is allowed to submit a (single) challenge to the prover; it is
given the prover's response, and must then compute its output. This is shown
on the left hand side of figure~\ref{fig:wident-sim}.

The zero-knowledge property of the scheme is described by the following
theorem.
\begin{theorem}[Statistical zero-knowledge of $\Wident$]
  \label{thm:wident-zk}
  Let $I_\Wident$, $W_\Wident$ and $S_\Wident$ be the real-prover world and
  simulator shown in figure~\ref{fig:wident-sim}. Then, for any~$t$,
  $q_I$ and $q_C$,
  \[ \InSec{sim}(W_\Wident, I_\Wident, S_\Wident; t, q_I, q_C, 0) \le
       \frac{q_C}{2^{\ell_I}}.
  \]
  where $q_C$ is the maximum number of challenges allowed to the adversary.
\end{theorem}
\begin{longproof}{}
  The simulator simply uses the extractor~$T_\Wident$ to extract the answer
  from the adversary's history of random oracle queries. Observe that
  $S_\Wident$ is a fake-world simulator. By lemma~\ref{lem:dlext}, the
  extractor fails with probability only $2^{-\ell_I}$. The theorem follows
  by a simple union bound and proposition~\ref{prop:fakesim}.
\end{longproof}

%\ifshort\else
\begin{figure}
  \begin{program}
    Initialization function $I_\Wident()$: \+ \\
      $x \getsr \gf{q}$; \\
      $X \gets x P$; \\
      \RETURN $(x, X)$;
    \- \\[\medskipamount]
    Real-prover world $W_\Wident^{H_I(\cdot, \cdot)}
        ((x, X), \sigma, \tau, \mu)$: \+ \\
      \IF $\tau = \cookie{challenge}$ \THEN \\ \ind
        $(R, c) \gets \mu$; \\
        $Y \gets \id{response}^{H_I(\cdot, \cdot)}(R, c, x)$; \\
        \RETURN $(1, Y)$; \- \\
      \ELSE \\ \ind
        \RETURN $(\sigma, \bot)$;
    \next
    Simulator $S_\Wident$'s fake world \\
    \hspace{1in} $W_{\text{sim}}^{H_I(\cdot, \cdot)}
        ((X, u), \sigma, \tau, \mu)$: \+ \\
      \IF $\tau = \cookie{init}$ \THEN \\ \ind
        \RETURN $(\emptyset, \emptystring)$; \- \\
      $Q_H \gets \sigma$; \\
      \IF $\tau = \cookie{challenge}$ \THEN \\ \ind
        $(R, c) \gets \mu$; \\
        $(r, Y) \gets T_\Wident(R, c, Q_H)$; \\
        \RETURN $(Q_H, Y)$; \- \\
      \ELSE \IF $\tau = \cookie{random}$ \THEN \\ \ind
        $(i, (R', Y'), h) \gets \mu$; \\
        $Q_H \gets Q_H \cup \{(R', Y', h)\}$; \\
        \RETURN $(Q_H, \emptystring)$; \- \\
      \ELSE \\ \ind
        \RETURN $(\sigma, \bot)$;
  \end{program}

  \caption{Real-prover and simulator for zero-knowledge of $\Wident$}
  \label{fig:wident-sim}
\end{figure}
%\fi

\ifshort\else
We now return to proving that the extractor $T_\Wident$ functions as claimed.
The following two trivial lemmata will be useful, both now and later.
\begin{lemma}[Uniqueness of discrete-logs]
  \label{lem:unique-dl}
  Let $G = \langle P \rangle$ be a cyclic group. For any $X \in G$ there is
  a unique $x \in \gf{q}$ where $X = x P$.
\end{lemma}
\begin{proof}
  Certainly such an $x$ exists, since $G$ is cyclic and finite. Suppose $X =
  x P = x' P$: then $0 = x P - x' P = (x - x') P$. Hence $(x - x')$ is a
  multiple of $q$, i.e., $x = x'$.
\end{proof}
\begin{lemma}[Uniqueness of check values]
  \label{lem:unique-c}
  Let $G = \langle P \rangle$ be a cyclic group of prime order $q$; let $H_I$
  be a function $H_I\colon \Bin^{2\ell_G} \to \Bin^{\ell_I}$. Fix some $x
  \in \gf{q}$ and define the set
  \[ V_x = \bigl\{\, (R, c) \in G \times \Bin^{\ell_I} \bigm|
       R = \bigl( c \xor H_I(R, x R) \bigr) P \,\bigr\}.
  \]
  Then, for any $R$, $c$, $c'$, if $(R, c) \in V_x$ and $(R, c') \in V_x$
  then $c = c'$.
\end{lemma}
\begin{proof}
  From lemma~\ref{lem:unique-dl}, we see that there is a unique $r \in \gf{q}$
  for which $R = r P$. Now, if $(R, c) \in V_x$, we must have $r = c \xor
  H_I(R, x R)$. It follows that $c = r \xor H_I(R, x R)$; this determines
  $c$ uniquely given $R$, so $c = c'$.
\end{proof}

\begin{proof}[Proof of lemma~\ref{lem:dlext}]
  Let $B$ be any randomized algorithm, and let $(R, c, Q_H)$ be as given to
  the extractor by $\Game{dl-ext}{G}(T_\Wident, B)$. Let the quantities
  $H_I$, $Q_C$, $r$, $r'$, $x$ and $X$ be as in that game.

  Suppose that the extractor returns values $(r, Y) \ne (\bot, \bot)$. Let
  $h = r \xor c$; then there must be a query $(R, Y, h) \in Q_H$, and we have
  $R = r P$ and $Y = r X = r (x P) = x (r P) = x R = Y'$, so the extractor's
  output must be correct unless it fails.

  Furthermore, in the case where the extractor did not fail, we have $h =
  H_I(R, Y) = H_I(R, Y')$ and $c = r \xor h$, so the challenge was valid.
  Therefore, if the challenge was invalid, the extractor will fail.

  We now deal with the challenge-generation oracle. Suppose that $(R, c')
  \in Q_C$ for some $c'$. Now, if $c = c'$ then $(R, c')$ is a repeat of
  some challenge from the challenge-generation oracle, and the extractor is
  permitted to fail. On the other hand, suppose $c \ne c'$; then, the
  challenge $(R, c)$ must be invalid by lemma~\ref{lem:unique-c}, so the
  extractor is required to fail, and we have established that indeed it will.
  From now on, suppose that $R$ is distinct from all the $R$-values returned
  by $C()$.

  Let $Y = x R$. Suppose that $B$ queried its random oracle at $(R, Y)$.
  Let $h = H_I(R, Y)$, so $r' = c \xor h$. If the challenge is valid then
  $R = r' P$; therefore $Y = x R = x r' P = r' X$, so we have $(R, Y, h) \in
  Q_H$ with $R = r' P$ and $Y = r' X$. Hence the extractor returns $r = r'$
  as required.

  It remains to deal with the case where there is no random-oracle query at
  $(R, Y)$. But then $h = H_I(R, Y)$ is uniformly distributed, and
  independent of the entire game up to this point. Let $r$ be the correct
  discrete log of $R$; by lemma~\ref{lem:unique-dl} there is only one
  possible value. The extractor always fails under these circumstances, but
  a correct responder would reply with probability
  \[ \Pr[h = c \xor r] = \frac{1}{2^{\ell_I}}. \]
  This concludes the proof.
\end{proof}
\begin{remark}
  Note that the fact that the algorithm~$B$ was given the private key is
  irrelevant to the above argument. However, we shall need this property
  when we come to prove deniability for the key-exchange protocol.
\end{remark}
\begin{remark}
  It's easy to see from the above proof that the extractor works flawlessly
  on the `honest verifier' algorithm $\id{challenge}$ shown in
  figure~\ref{fig:wident-ro}. This shows that $\Wident$ is perfect
  zero-knowledge against honest verifiers. We're much more interested in
  dishonest verifiers, though.
\end{remark}
\fi


\ifshort\else
\subsection{An identity-based identification scheme}
\label{sec:wident-id}

Boneh and Franklin \cite{Boneh:2003:IBE} showed how to construct an
identity-based encryption scheme using bilinear pairings. The resulting
encryption scheme looks somewhat like a pairing-based version of ElGamal's
encryption scheme \cite{ElGamal:1985:PKC}. We can easily apply their
techniques to our identification protocol, and thereby obtain an
identity-based identification scheme. Providing the necessary formalisms to
prove theorems analogous to our theorems~\ref{thm:wident-sound}
and~\ref{thm:wident-zk} would take us too far from our objectives; but given
appropriate security notions, we can readily adapt our existing proofs to the
new setting.

\subsubsection{Bilinear pairings}
Before we describe the necessary modifications to the protocol, we first give
a (very brief!) summary of cryptographic pairings. (The Boneh-Franklin paper
\cite{Boneh:2003:IBE} gives more detail; also \cite{Menezes:2005:IPB}
provides a useful introduction to the topic.)

Let $(G, +)$, $(G', +)$ and $(G_T, \times)$ be cyclic groups with prime order
$q$; let $P \in G$ and $P' \in G'$ be elements of order $q$ in $G$ and $G'$
respectively. We say that a mapping $\hat{e}\colon G \times G' \to G_T$ is a
\emph{non-degenerate bilinear pairing} if it satisfies the following
properties.
\begin{description}
\item[Bilinearity] For all $R \in G$ and $S', T' \in G'$, we have $\hat{e}(R,
  S' + T') = \hat{e}(R, S')\,\hat{e}(R, T')$; and for all $R, S \in G$ and $T'
  \in G'$ we have $\hat{e}(R + S, T') = \hat{e}(R, T')\,\hat{e}(S, T')$.
\item[Non-degeneracy] $\hat{e}(P, P') \ne 1$.
\end{description}
For practical use, we also want $\hat{e}(\cdot, \cdot)$ to be efficient to
compute. The reader can verify that $\hat{e}(a P, b P') = \hat{e}(P,
P')^{ab}$. It is permitted for the two groups $G$ and $G'$ to be equal.

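For intuition, these axioms can be checked mechanically on a toy (and
cryptographically worthless) bilinear map: take $G = G' = (\gf{q}, +)$ with
$P = P' = 1$, take $G_T$ to be the order-$q$ subgroup of $\F_p^*$, and set
$\hat{e}(a, b) = g^{ab}$. This is an illustrative assumption only -- real
schemes use pairings on elliptic curves -- but the map below satisfies the
pairing laws exactly.

```python
import secrets

# Toy bilinear map, for illustrating the pairing axioms only.  Discrete
# logs in (Z_q, +) are trivial, so this map gives no security whatsoever.
p, q, g = 2039, 1019, 4        # g generates the order-q subgroup of Z_p^*

def e(a, b):
    """e(aP, bP') = e(P, P')^{ab}; with P = P' = 1 this is g^{ab} mod p."""
    return pow(g, (a * b) % q, p)

a, b, c = (secrets.randbelow(q) for _ in range(3))
assert e(a, (b + c) % q) == e(a, b) * e(a, c) % p    # bilinear in 2nd arg
assert e((a + b) % q, c) == e(a, c) * e(b, c) % p    # bilinear in 1st arg
assert e(1, 1) != 1                                  # non-degeneracy
assert e(a, b) == pow(e(1, 1), a * b, p)             # e(aP,bP') = e(P,P')^{ab}
```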
We require a different intractability assumption, specifically that the
\emph{bilinear} Diffie-Hellman problem (BDH) -- given $(a P, b P, a P', c P')
\in G^2 \times G'^2$, find $\hat{e}(P, P')^{abc} \in G_T$ -- is difficult.
This implies the difficulty of the computational Diffie-Hellman problem in
all three of $G$, $G'$ and~$G_T$.

\subsubsection{The identity-based scheme}
We need a trusted authority; following \cite{Schneier:1996:ACP} we shall call
him Trent. Trent's private key is $t \in \gf{q}$; his public key is $T =
t P$.

Finally, we need cryptographic hash functions $H_I\colon G \times G_T \to
\Bin^{\ell_I}$ and $\Hid\colon \Bin^* \to G'$; a formal security analysis
would model these as random oracles.

Alice's public key is $A = \Hid(\texttt{Alice}) \in G'$. Her private key is
$K_A = t A \in G'$ -- she needs Trent to give this to her. Bob can interact
with Alice in order to verify her identity as follows.
\begin{enumerate}
\item Bob computes $\gamma_A = \hat{e}(T, A) \in G_T$. (He can do this once
  and store the result if he wants, but it's not that onerous to work it out
  each time.)
\item Bob chooses $r \inr \gf{q}$, and sets $R = r P$. He also computes
  $\psi = \gamma_A^r$, $h = H_I(R, \psi)$ and $c = r \xor h$. He sends his
  challenge $(R, c)$ to Alice.
\item Alice receives $(R', c')$. She computes $\psi' = \hat{e}(R', K_A)$, $h'
  = H_I(R', \psi')$, and $r' = c' \xor h'$. She checks that $R' = r' P$; if
  so, she sends $\psi'$ back to Bob; otherwise she refuses to talk to him.
\item Bob receives $\psi'$. If $\psi = \psi'$ then he accepts that he's
  talking to Alice.
\end{enumerate}
This works because $\psi = \gamma_A^r = \hat{e}(T, A)^r = \hat{e}(t P, A)^r =
\hat{e}(r P, A)^t = \hat{e}(R, t A) = \psi'$.

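Under the toy pairing $\hat{e}(a, b) = g^{ab}$ from the previous subsection
(again an insecure illustration; \texttt{Hid}, \texttt{H\_I} and all
parameters are stand-in assumptions, not part of the scheme), the whole
identity-based exchange can be run end to end:

```python
import hashlib
import secrets

# End-to-end run of the identity-based identification scheme over a toy
# pairing e(a, b) = g^{ab} with P = P' = 1.  Illustration only: insecure.
p, q, g = 2039, 1019, 4

def e(a, b):                     # toy pairing
    return pow(g, (a * b) % q, p)

def Hid(name):                   # stand-in for Hid: {0,1}* -> G' (here Z_q)
    d = hashlib.sha256(name.encode()).digest()
    return int.from_bytes(d, "big") % q

def H_I(R, psi):                 # stand-in for H_I: G x G_T -> {0,1}^16
    d = hashlib.sha256(f"{R},{psi}".encode()).digest()
    return int.from_bytes(d[:2], "big")

t = secrets.randbelow(q - 1) + 1          # Trent's private key
T = t                                     # T = t P, with P = 1
A = Hid("Alice")                          # Alice's identity public key
K_A = (t * A) % q                         # her private key K_A = t A

# Bob's challenge.
gamma_A = e(T, A)
r = secrets.randbelow(q - 1) + 1
R = r                                     # R = r P
psi = pow(gamma_A, r, p)
c = r ^ H_I(R, psi)

# Alice's response: psi' = e(R, K_A), sent only if the check passes.
psi1 = e(R, K_A)
r1 = c ^ H_I(R, psi1)
response = psi1 if r1 % q == R else None  # check R = r' P (P = 1)

assert response == psi                    # Bob accepts
```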
\subsubsection{Informal analysis}
An analogue to lemma~\ref{lem:dlext} can be proven to show how to extract $r$
from a verifier's random-oracle queries; statistical zero knowledge would
then follow easily, as in theorem~\ref{thm:wident-zk}. Soundness is
intuitively clear, since an adversary must compute $\psi = \hat{e}(P,
P')^{art}$ given $A = a P'$, $R = r P$ and $T = t P$, which is an instance of
the BDH problem. An analogue of theorem~\ref{thm:wident-sound} would have to
prove this for an adversary capable of making identity requests as well as
obtaining challenges. Finally, our key-exchange protocol can be constructed
out of this identity-based identification scheme, yielding an identity-based
authenticated key-exchange protocol. We leave it to the reader to work
through the details.
\fi


\ifshort\else
\subsection{Comparison with the protocol of Stinson and Wu}
\label{sec:stinson-ident}

Our protocol is similar to a recent proposal by Stinson and Wu
\cite{Stinson:2006:EST}. They restrict their attention to Schnorr groups $G
\subset \F_p^*$. Let $\gamma$ be an element of order $q = \#G$. The
prover's private key is $a \inr \gf{q}$ and her public key is $\alpha =
\gamma^a$. In their protocol, the challenger chooses $r \inr \gf{q}$, computes
$\rho = \gamma^r$ and $\psi = \alpha^r$, and sends a challenge $(\rho,
H(\psi))$. The prover checks that $\rho^q = 1$, computes $\psi = \rho^a$,
checks the hash, and sends $\psi$ back by way of response. They prove their
protocol's security in the random-oracle model.

Both the Wrestlers protocol and Stinson-Wu require both prover and verifier
to compute two exponentiations (or scalar multiplications) each. The
sizes of the messages used by the two protocols are also identical.

(An earlier version of the Stinson-Wu protocol used a cofactor
exponentiation: if we set $f = (p - 1)/q$, then we use $\psi = \alpha^{rf} =
\rho^{af} = \gamma^{afr}$. This is more efficient in typical elliptic curve
subgroups, since the cofactor of such subgroups is usually small: indeed,
\cite{SEC1} recommends rejecting groups with cofactor $f > 4$. However, in
the Schnorr groups used by Stinson and Wu, the cofactor is much larger than
$q$, and their new variant is more efficient.)

We note that the zero-knowledge property of the Stinson-Wu protocol requires
the Diffie-Hellman knowledge of exponent assumption (KEA). Very briefly:
suppose $A$ is a randomized algorithm which takes as input $X \in G$ and
outputs a pair $(r P, r X)$; intuitively, the KEA asserts $A$ must have done
this by choosing $r$ somehow and then computing its output from it.
Formally, it asserts the existence of an `extractor' algorithm which takes as
input the element $X$ and the random coins used by $A$ and outputs $r$ with
high probability. This is a very strong assumption, and one which is
unnecessary for our protocol, since we can present an \emph{explicit}
extractor.

The KEA assumption as stated in \cite{Stinson:2006:EST} allows the extractor
to fail with some negligible probability, over and above the probability that
a dishonest verifier managed to guess the correct $h = H(\psi)$ without
making this random-oracle query. Not only does our protocol achieve
zero-knowledge without the KEA, our extractor is, in this sense, `perfect'.

1953Our protocol is just as strong as Stinson-Wu under attack from active
1954intruders: see table~\ref{tab:wident-active} for a very brief sketch of the
1955case-analysis which would be the basis of a proof of this.

\begin{table}
  \begin{tabular}[C]{|*{3}{c|}p{8cm}|}
    \hlx{hv[1]}
    \multicolumn{2}{|c|}{\textbf{Challenge}} &
      \textbf{Response} &
      \textbf{Security}
    \\ \hlx{v[1]hv}
    %% unpleasant hacking to make the R and c columns the same width :-(
    \settowidth{\dimen0}{\textbf{Challenge}}%
    \dimen0=.5\dimen0
    \advance\dimen0by-\tabcolsep
    \advance\dimen0by-.5\arrayrulewidth
    \hbox to\dimen0{\hfil$R$\hfil}
    & $c$ & $Y$ & Nothing to prove. \\ \hlx{v}
    $R$ & $c'$ & --- & Prover rejects by lemma~\ref{lem:unique-c};
                       $Y'$ probably wrong by
                       theorem~\ref{thm:wident-sound}. \\ \hlx{v}
    $R$ & $c$ & $Y'$ & Response is incorrect. \\ \hlx{v}
    $R'$ & --- & $Y$ & Response is incorrect. \\ \hlx{v}
    $R'$ & $c$ & $Y'$ & Prover rejects with probability $1 - 2^{-\ell_I}$;
                        $Y'$ probably wrong by
                        theorem~\ref{thm:wident-sound}. \\ \hlx{v}
    $R'$ & $c'$ & $Y'$ & Simulate prover using extractor
                         (lemma~\ref{lem:dlext}); $Y'$ probably wrong by
                         theorem~\ref{thm:wident-sound}. \\ \hlx{vh}
  \end{tabular}

  \caption{Security of $\Wident$ against active intruders (summary)}
  \label{tab:wident-active}
\end{table}

An identity-based analogue of Stinson-Wu can be defined using a bilinear
pairing, just as we did in section~\ref{sec:wident-id}.  However, to prove
the zero-knowledge property, one needs to make a bilinear analogue of the
knowledge-of-exponent assumption.

We suspect that a key-exchange protocol like ours can be constructed using
Stinson-Wu rather than the Wrestlers identification scheme.  We haven't,
however, gone through all the details, since we believe our protocol is just
as efficient and is based on much more conservative assumptions.
\fi

%%%--------------------------------------------------------------------------

\section{A simple key-exchange protocol}
\label{sec:kx}

In this section, we describe a simple key-exchange protocol built out of the
identification protocol shown previously.

The key-exchange protocol arises from the following observation.  If Bob
sends a challenge, presumably to Alice, and gets a correct response, then not
only did he really send the challenge to Alice but he knows that she received
it correctly.

So, if Alice and Bob authenticate each other, by the end of it, they should
each have chosen a random private value, sent the corresponding public value
to the other, and been convinced that it arrived safely.

Unfortunately, life isn't quite this kind, and we have to do some more work
to make this scheme secure.

Our key-exchange protocol essentially consists of two parallel instances of
the identification protocol.  If Alice receives a correct response to her
challenge, she will know that Bob received her challenge correctly, and
\emph{vice versa}.  If we let Alice's challenge be $R_0 = r_0 P$ and Bob's
challenge be $R_1 = r_1 P$ then each can compute a shared secret $Z = r_0 R_1
= r_0 r_1 P = r_1 R_0$ unknown to an adversary.  There are, unfortunately, a
few subtleties involved in turning this intuition into a secure key-exchange
protocol, which we now describe.


\subsection{Overview}
\label{sec:kx-overview}

We present a quick, informal description of our basic key-exchange protocol.
In addition to our group $G$, we shall also need a secure symmetric
encryption scheme $\E = (\kappa, E, D)$, and two secure hash functions
$H_I\colon \Bin^{2\ell_G} \to \Bin^{\ell_I}$ and $H_K\colon \Bin^{\ell_G+1}
\to \Bin^\kappa$.

Suppose that Alice's and Bob's private keys are $a$ and $b$ respectively, and
their public keys are $A = a P$ and $B = b P$.
\begin{enumerate}
\item Alice chooses a random index $r \inr \gf{q}$.  She computes $R = r P$
  and $c = r \xor H_I(R, r B)$.  She sends the pair $(R, c)$ to Bob.
\item Similarly, Bob chooses a random $s \inr \gf{q}$.  He computes $S = s P$
  and $d = s \xor H_I(S, s A)$.  He sends $(S, d)$ to Alice.
\item Alice receives $(S', d')$ from Bob.  She computes $s' = d' \xor H_I(S',
  a S')$, and verifies that $S' = s' P$.  If so, she computes $K_A = H_K(0
  \cat r S')$, and sends $R, E_{K_A}(a S')$ to Bob.
\item Similarly, Bob receives $(R', c')$ from Alice.  He verifies that $R' =
  \bigl( c' \xor H_I(R', b R') \bigr) P$.  If so, he computes $K_B = H_K(0
  \cat s R')$ and sends $S, E_{K_B}(b R')$ to Alice.
\item Alice receives a pair $(S'', \chi_B)$ from Bob.  She checks that $S'' =
  S'$ and that $D_{K_A}(\chi_B) = r B$.  If so, she uses $H_K(1 \cat r S')$
  as her shared secret.
\item Similarly, Bob receives $(R'', \chi_A)$ from Alice, and checks $R'' =
  R'$ and $D_{K_B}(\chi_A) = s A$.  If so, he uses $H_K(1 \cat s R')$ as his
  shared secret.
\end{enumerate}
This is the Wrestlers Key Exchange protocol, $\Wkx^{G, \E}$ (again, we omit
the superscripts when referring to the general protocol, or when confusion is
unlikely).  A diagrammatic summary of the protocol is shown in
figure~\ref{fig:wkx}.
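To make the message flow concrete, here is a minimal Python sketch of one
honest run of the six steps above.  The group, the hash constructions and the
XOR `encryption' are toy stand-ins of our own choosing, picked only so that
the sketch is self-contained and runnable: in particular the cipher is
\emph{not} IND-CCA2, and the parameters are far too small for real use.

```python
import hashlib
import random

# Toy parameters (illustrative only): the order-q subgroup of Z_p^*, with
# p = 2039 = 2q + 1 and q = 1019 both prime.  Exponentiation g^r here plays
# the role of the scalar multiplication r P in the group G of the paper.
p, q, g = 2039, 1019, 4

def H_I(R, Y):
    """Masking hash H_I; returns a 64-bit integer (ell_I = 64 here)."""
    data = b'I' + R.to_bytes(8, 'big') + Y.to_bytes(8, 'big')
    return int.from_bytes(hashlib.sha256(data).digest()[:8], 'big')

def H_K(pre, Z):
    """Key-derivation hash H_K(pre cat Z); returns a 16-byte key."""
    return hashlib.sha256(bytes([pre]) + Z.to_bytes(8, 'big')).digest()[:16]

def E(K, m):
    """Stand-in for the IND-CCA2 scheme E: XOR with a hash-derived pad.
    (Not actually IND-CCA2 -- purely to keep the sketch self-contained.)"""
    pad = hashlib.sha256(b'E' + K).digest()[:len(m)]
    return bytes(x ^ y for x, y in zip(m, pad))

D = E  # XOR decryption is the same operation

enc = lambda n: n.to_bytes(8, 'big')   # fixed-width group-element encoding

a, b = random.randrange(1, q), random.randrange(1, q)  # private keys
A, B = pow(g, a, p), pow(g, b, p)                      # public keys

# Steps 1 and 2: masked challenges.
r, s = random.randrange(1, q), random.randrange(1, q)
R = pow(g, r, p); c = r ^ H_I(R, pow(B, r, p))   # c = r xor H_I(R, r B)
S = pow(g, s, p); d = s ^ H_I(S, pow(A, s, p))   # d = s xor H_I(S, s A)

# Step 3: Alice recovers s' from Bob's challenge, checks it, and responds.
s1 = d ^ H_I(S, pow(S, a, p))          # s' = d' xor H_I(S', a S')
assert S == pow(g, s1, p)
K_A = H_K(0, pow(S, r, p))             # K_A = H_K(0 cat r S')
chi_A = E(K_A, enc(pow(S, a, p)))      # E_{K_A}(a S')

# Step 4: Bob does the same with Alice's challenge.
r1 = c ^ H_I(R, pow(R, b, p))
assert R == pow(g, r1, p)
K_B = H_K(0, pow(R, s, p))
chi_B = E(K_B, enc(pow(R, b, p)))      # E_{K_B}(b R')

# Steps 5 and 6: each decrypts and checks the other's response.
assert D(K_A, chi_B) == enc(pow(B, r, p))   # Alice checks it equals r B
assert D(K_B, chi_A) == enc(pow(A, s, p))   # Bob checks it equals s A

# Both derive the same shared secret H_K(1 cat r S') = H_K(1 cat s R').
key_A = H_K(1, pow(S, r, p))
key_B = H_K(1, pow(R, s, p))
assert key_A == key_B
```

Note that the challenge mask is recoverable by the intended recipient alone:
only a party knowing $a$ (or $r$) can recompute $H_I(S', a S')$ and hence
unmask $d'$.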

\begin{figure}
  \begin{description}
  \item[Setup] Group $G = \langle P \rangle$; $\#G = q$ is prime.
    $H_I(\cdot, \cdot)$ and $H_K(\cdot)$ are secure hashes.  $\E = (\kappa,
    E, D)$ is an IND-CCA2 symmetric encryption scheme.
  \item[Parties] $U_i$ for $0 \le i < n$.
  \item[Private keys] $x_i \inr \gf{q}$.
  \item[Public keys] $X_i = x_i P$.
  \end{description}

  \begin{protocol}
    $r_i \getsr I$; $R_i \gets r_i P$; &
    $r_j \getsr I$; $R_j \gets r_j P$; \\
    $c_i \gets r_i \xor H_I(R_i, r_i X_j)$; &
    $c_j \gets r_j \xor H_I(R_j, r_j X_i)$; \\
    \send{->}{(R_i, c_i)}
    \send{<-}{(R_j, c_j)}
    Check $R_j = \bigl(c_j \xor H_I(R_j, x_i R_j)\bigr) P$; &
    Check $R_i = \bigl(c_i \xor H_I(R_i, x_j R_i)\bigr) P$; \\
    $Z \gets r_i R_j$; $(K_0, K_1) \gets H_K(Z)$; &
    $Z \gets r_j R_i$; $(K_0, K_1) \gets H_K(Z)$; \\
    $\chi_i \gets E_{K_0}(x_i R_j)$; &
    $\chi_j \gets E_{K_0}(x_j R_i)$; \\
    \send{->}{(R_i, \chi_i)}
    \send{<-}{(R_j, \chi_j)}
    Check $D_{K_0}(\chi_j) = r_i X_j$; &
    Check $D_{K_0}(\chi_i) = r_j X_i$; \\
    Shared key is $K_1$. & Shared key is $K_1$.
  \end{protocol}

  \caption{Summary of the Wrestlers Key Exchange protocol, $\Wkx$}
  \label{fig:wkx}
\end{figure}

Assume, for the moment, that Alice's and Bob's messages are relayed honestly.
Then:
\begin{itemize}
\item $a S' = a S = a (s P) = s (a P) = s A$, so $s' = d' \xor H_I(S', a S')
  = d \xor H_I(S, s A) = s$, and $S' = S = s P = s' P$, and therefore Alice
  responds to Bob's message;
\item similarly $b R' = r B$, so $r' = r$ and $R' = r' P$, and therefore Bob
  responds to Alice's message;
\item $b R' = b R = b (r P) = r (b P) = r B$, and $a S' = a S = a (s P) = s
  (a P) = s A$, and therefore both parties compute their responses correctly;
  and
\item $r S' = r S = r (s P) = s (r P) = s R = s R'$, so $K_A = K_B$, and
  therefore they can decrypt each other's responses, and agree on the same
  shared secret.
\end{itemize}
This shows that the protocol is basically valid, but says little about its
security.  The remainder of this section will describe our protocol in more
formal detail, and prove its security in a model with multiple parties and an
adversary who controls the network.

Observe that the protocol as we've presented it here is \emph{symmetrical}:
there's no notion of `initiator' or `responder'.  A total of four messages
must be sent before both parties accept.  However, this can be reduced to
three by breaking the symmetry of the protocol and combining one or other
party's challenge and response messages.  We choose to analyse the
symmetrical version, since to do so, it suffices to consider only the two
different kinds of messages.  Since our security model allows the messages to
be adversarially delayed and reordered, it is easy to show that the security
of an optimized, asymmetrical protocol is no worse than that of the
symmetrical version we present here.


\subsection{Security model and security definition}
\label{sec:um}

Our model is very similar to that of Canetti and Krawczyk
\cite{Canetti:2001:AKE}, though we have modified it in two ways.
\begin{enumerate}
\item We allow all the participants (including the adversary) in the protocol
  access to the various random oracles required to implement it.
\item Since we want to analyse a specific, practical scheme, asymptotic
  results are useless.  We measure the adversary's resource usage carefully,
  and produce a quantitative bound on the adversary's advantage in the
  SK-security game.
\end{enumerate}

\ifshort

Readers interested in the details of the model should see Canetti and
Krawczyk's paper \cite{Canetti:2001:AKE}, or the full version of this paper.

\else

\subsubsection{Overview}
We briefly describe our modified model, pointing out the changes we have
made, and how they apply to our protocol.  Much of Canetti and Krawczyk's
model (for example, the local and global outputs) is useful for proving more
general security properties, such as demonstrating that SK-security suffices
for constructing secure channels, and we shall not concern ourselves with
such details.  Other parts deal with issues such as security parameters and
ensuring that all the computation is polynomially bounded, which are
irrelevant since we are dealing with a single concrete protocol rather than a
family of them.

The entities in the model are the \emph{adversary}~$A$, and a (fixed) number
of \emph{parties}~$P_i$, for $0 \le i < n$.  If the protocol under
consideration makes use of random oracles, then all the participants -- the
adversary and the parties -- are allowed access to the random oracles.

The parties and the adversary play a `game'.  At the beginning of the game,
the participants are given some inputs computed by a randomized
\emph{initialization procedure}~$\id{init}$.  This produces as output a pair
$(i_U, \mathbf{i})$; the value $i_U$ is the \emph{global input}, and is given
to all the participants, including the adversary.  The vector $\mathbf{i}$
has $n$ components, and party $P_i$ is given $(i_U, \mathbf{i}[i])$ as input.

\subsubsection{Sessions}
Parties don't act directly.  Instead, each party runs a number of
\emph{sessions}.  A session is represented by a triple $S = (P_i, P_j, s)$,
where $i, j \in \Nupto{n}$ identify the owning party and a \emph{partner},
and $s \in \Bin^{\ell_S}$ is a \emph{session-id}.  (The original model
includes a r\^ole, for distinguishing between initiators and responders.  Our
protocol is symmetrical, so this distinction isn't useful.)  If $P_i$ runs a
session $S = (P_i, P_j, s)$ and $P_j$ runs a session $S' = (P_j, P_i, s)$
then we say that $S$ and $S'$ are \emph{matching}, and that $P_j$ is $P_i$'s
\emph{partner} for the session.

At most one participant in the game is \emph{active} at any given time.
Initially the adversary is active.  The adversary may \emph{activate} a
session in one of two ways.
\begin{enumerate}
\item It may \emph{create a session} of a party~$P_i$, by selecting a
  session-id~$s \in \Bin^{\ell_S}$ and a partner~$j$.  There is no
  requirement that $P_j$ ever have a matching session.  However, all sessions
  of a party must be distinct, i.e., sessions with the same partner must have
  different session-ids.
\item It may \emph{deliver a message}~$\mu \in \Bin^*$, from party~$P_j$, to
  an existing session~$S = (P_i, P_j, s)$.  There is no requirement that any
  party previously sent $\mu$: the adversary is free to make up messages as
  it sees fit.
\end{enumerate}
The adversary becomes inactive, and the session becomes active.  The session
performs some computation, according to its protocol, and may request that a
message~$\mu$ be delivered to the matching session running in its partner
(which may not exist).  The session may also \emph{terminate}.  In the case
we are interested in, of key-exchange protocols, a session~$S = (P_i, P_j,
s)$ may terminate in one of two ways:
\begin{enumerate}
\item it may \emph{complete}, outputting $(i, j, s, K)$, for some
  \emph{session key}~$K$, or
\item it may \emph{abort}, outputting $(i, j, s, \bot)$.
\end{enumerate}
Once it has performed these actions, the session deactivates and the
adversary becomes active again.  The adversary is given the message~$\mu$, if
any, and informed of whether the session completed or aborted, but, in the
case of completion, not of the value of the key~$K$.  A session is
\emph{running} if it has been created and has not yet terminated.

\subsubsection{Other adversarial actions}
As well as activating sessions, the adversary has other capabilities, as
follows.
\begin{itemize}
\item It may \emph{expire} any session~$S$, causing the owning party to
  `forget' the session key output by that session.
\item It may \emph{corrupt} any party~$P_i$, at will: the adversary learns
  the entire state of the corrupted party, including its initial
  input~$\mathbf{i}[i]$, the state of any sessions it was running at the
  time, and the session keys of any completed but unexpired sessions.  Once
  corrupted, a party can no longer be activated.  Of course, the adversary
  can continue to send messages allegedly from the corrupted party.
\item It may \emph{reveal the state} of a running session~$S$, learning any
  interesting values specific to that session, but \emph{not} the owning
  party's long-term secrets.
\item It may \emph{reveal the session-key} of a completed session~$S$.
\item It may elect to be \emph{challenged} with a completed session~$S$,
  provided it is unexposed.  Challenge sessions form part of the security
  notion for key-exchange protocols.  See below for more details.
\end{itemize}
We say that a session $S = (P_i, P_j, s)$ is \emph{locally exposed} if
\begin{itemize}
\item it has had its state revealed,
\item it has had its session-key revealed, or
\item $P_i$ has been corrupted, and $S$ had not been expired when this
  happened.
\end{itemize}
A session is \emph{exposed} if it is locally exposed, or if its matching
session exists and has been locally exposed.

At the beginning of the game, a bit $b^*$ is chosen at random.  The adversary
may choose to be \emph{challenged} with any completed, unexposed
session;\footnote{%
  The original Canetti-Krawczyk definition restricts the adversary to a
  single challenge session, but our proof works independently of the number
  of challenge sessions, so we get a stronger result by relaxing the
  requirement here.}
the adversary is then given either the session's key -- if $b^* = 1$ -- or a
string chosen at random and independently of the game so far from a
protocol-specific distribution -- if $b^* = 0$.  At the end of the game, the
adversary outputs a single bit~$b$.

\subsubsection{SK-security}
We've now described the game; it is time to explain the adversary's goal in
it.  The adversary \emph{wins} the game if either
\begin{enumerate}
\item two unexposed, matching sessions complete, but output different
  keys,\footnote{%
    The original Canetti-Krawczyk definition differs slightly here.  It
    requires that `if two \emph{uncorrupted} parties complete matching
    sessions then they both output the same key' [original emphasis].  This
    can't be taken at face value, since none of the protocols they claim to
    be secure actually meet this requirement: they meet only the weaker
    requirement that parties completing matching sessions output different
    keys with negligible probability.  We assume here that this is what they
    meant.}
  or
\item the adversary correctly guesses the hidden bit~$b^*$.
\end{enumerate}
More formally, we make the following definition.
\fi
\begin{definition}[SK-security]
  \label{def:sk}
  Let $\Pi^{H_0(\cdot), H_1(\cdot), \ldots}$ be a key-exchange protocol
  which makes use of random oracles $H_0(\cdot)$, $H_1(\cdot)$, \dots, and
  let $A$ be an adversary playing the game described \ifshort in
  \cite{Canetti:2001:AKE}\else previously\fi, where $n$
  parties run the protocol~$\Pi$.  Let $V$ be the event that any pair of
  matching, unexposed sessions completed, but output different session keys.
  Let $W$ be the event that the adversary's output bit matches the game's
  hidden bit~$b^*$.  We define the adversary's \emph{advantage against the
  SK-security of the protocol~$\Pi$} to be
  \[ \Adv{sk}{\Pi}(A, n) = \max(\Pr[V], 2\Pr[W] - 1). \]
  Furthermore, we define the \emph{SK insecurity function of the
  protocol~$\Pi$} to be
  \[ \InSec{sk}(\Pi; t, n, q_S, q_M, q_{H_0}, q_{H_1}, \dots) =
       \max_A \Adv{sk}{\Pi}(A, n)
  \]
  where the maximum is taken over all adversaries~$A$ which run in total
  time~$t$ (not including time taken by the parties), create at most $q_S$
  sessions, deliver at most $q_M$~messages, and (if applicable) make at most
  $q_{H_i}$ random-oracle queries to each random oracle $H_i(\cdot)$.
\end{definition}


\subsection{Security}

In order to analyse our protocol $\Wkx^{G, \E}$ properly, we must describe
exactly how it fits into our formal model.

\subsubsection{Sessions and session-ids}
Our formal model introduced the concept of sessions, which the informal
description of section~\ref{sec:kx-overview} neglected to do.  (One could
argue that we described a single session only.)  As we shall show in
section~\ref{sec:kx-insecure}, our protocol is \emph{insecure} unless we
carefully modify it to distinguish between multiple sessions.

In the model, distinct key-exchange sessions are given distinct partners and
session-ids.  In order to prevent sessions interfering with each other, we
shall make explicit use of the session-ids.

Suppose the session-ids are $\ell_S$-bit strings.  We expand the domain of
the random oracle $H_I$ so that it's now
\[ H_I\colon G \times \Bin^{\ell_S} \times G \times G \to \Bin^{\ell_I}. \]
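Concretely, the extended $H_I$ might be realized as follows.  This is only a
sketch: the SHA-256 instantiation and the length-prefixed encoding of the
inputs are our own illustrative assumptions, not part of the protocol's
specification.

```python
import hashlib

ELL_I = 128  # output length ell_I, in bits

def H_I(X, s, R, Y):
    """Masking hash over (public key X, session-id s, challenge R, Y = r X').

    X, R and Y are byte encodings of group elements; s is the ell_S-bit
    session-id as bytes.  Each input is length-prefixed so that the combined
    encoding is unambiguous.  Returns an ell_I-bit integer.
    """
    h = hashlib.sha256()
    for part in (X, s, R, Y):
        h.update(len(part).to_bytes(2, 'big'))
        h.update(part)
    return int.from_bytes(h.digest(), 'big') >> (256 - ELL_I)

# Distinct session-ids now separate otherwise-identical challenges:
m0 = H_I(b'X', b'session-0', b'R', b'Y')
m1 = H_I(b'X', b'session-1', b'R', b'Y')
assert m0 != m1
```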

\subsubsection{Messages}
We split our protocol's messages into two parts: a \emph{type}~$\tau$ and a
\emph{body}~$\mu$.  We assume some convenient, unambiguous encoding of pairs
$(\tau, \mu)$ as bit-strings.  For readability, we present message types as
text strings, e.g., `\cookie{challenge}', though in practice one could use
numerical codes instead.

The message body itself may be a tuple of values, which, again, we assume are
encoded as bit-strings in some convenient and unambiguous fashion.  We shall
abuse the notation for the sake of readability by dropping a layer of nesting
in this case: for example, we write $(\cookie{hello}, x, y, z)$ rather than
$\bigl(\cookie{hello}, (x, y, z)\bigr)$.

\subsubsection{The protocol}
Our protocol is represented by three functions, shown in
figure~\ref{fig:wkx-formal}.
\begin{itemize}
\item $\id{init}(n)$ is the initialization function, as described in
  section~\ref{sec:um}.  It outputs a pair $(\mathbf{p}, \mathbf{i})$, where
  $\mathbf{i}[i]$ is the private key of party~$P_i$ and $\mathbf{p}[i]$ is
  the corresponding public key.  Only $P_i$ is given $\mathbf{i}[i]$, whereas
  all parties and the adversary are given $\mathbf{p}$.
\item $\id{new-session}^{H_I(\cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
  (\mathbf{p}, x, i, j, s)$ is the new-session function.  This is executed by
  party~$P_i$ when the adversary decides to create a new session~$S = (P_i,
  P_j, s)$.  It is also given the relevant outputs of $\id{init}$, and
  allowed access to the random oracles $H_I$ and $H_K$.
\item $\id{message}^{H_I(\cdot, \cdot, \cdot, \cdot), H_K(\cdot)} (\tau,
  \mu)$ is the incoming-message function.  This is executed by a session when
  the adversary decides to deliver a message $(\tau, \mu)$ to it.  It makes
  use of the subsidiary functions $\id{msg-challenge}$ and
  $\id{msg-response}$ to handle the messages.
\end{itemize}
We observe that the protocol never aborts.  We could make it do so if it
receives an invalid message, but we prefer to ignore invalid messages for the
sake of robustness.\footnote{%
  Note that this protocol would therefore require modification before it was
  acceptable in the simulation-based model of \cite{Shoup:1999:OFM}.  There
  it is required that a key-exchange protocol terminate after a
  polynomially-bounded number of messages are delivered to it.}

\begin{figure}
  \begin{program}
    Function $\id{init}(n)$: \+ \\
      \FOR $i \in \Nupto{n}$ \DO \\ \ind
        $x \getsr \gf{q}$; \\
        $\mathbf{i}[i] \gets x$; \\
        $\mathbf{p}[i] \gets x P$; \- \\
      \RETURN $(\mathbf{p}, \mathbf{i})$;
    \- \\[\medskipamount]
    Function $\id{new-session}^{H_I(\cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
        (\mathbf{p}, x, i, j, s)$: \+ \\
      $X \gets \mathbf{p}[i]$;
      $X' \gets \mathbf{p}[j]$;
      $C \gets \emptyset$; \\
      $r \getsr \gf{q}$;
      $R \gets r P$;
      $Y \gets r X'$; \\
      $h \gets H_I(X, s, R, Y)$;
      $c \gets r \xor h$; \\
      \SEND $(\cookie{challenge}, R, c)$;
    \- \\[\medskipamount]
    Function $\id{message}^{H_I(\cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
        (\tau, \mu)$: \+ \\
      \IF $\tau = \cookie{challenge}$ \THEN $\id{msg-challenge}(\mu)$; \\
      \ELSE \IF $\tau = \cookie{response}$ \THEN $\id{msg-response}(\mu)$;
    \next
    Function $\id{msg-challenge}(\mu)$: \+ \\
      $(R', c') \gets \mu$; \\
      $Y' \gets x R'$; \\
      $h' \gets H_I(X', s, R', Y')$; \\
      $r' \gets c' \xor h'$; \\
      \IF $R' \ne r' P$ \THEN \RETURN; \\
      $C \gets C \cup \{R'\}$; \\
      $Z \gets r R'$; \\
      $(K_0, K_1) \gets H_K(Z)$; \\
      $\chi \gets E_{K_0}(Y')$; \\
      \SEND $(\cookie{response}, R, \chi)$;
    \- \\[\medskipamount]
    Function $\id{msg-response}(\mu)$: \+ \\
      $(R', \chi') \gets \mu$; \\
      \IF $R' \notin C$ \THEN \RETURN; \\
      $Z \gets r R'$; \\
      $(K_0, K_1) \gets H_K(Z)$; \\
      $Y' \gets D_{K_0}(\chi')$; \\
      \IF $Y' \ne Y$ \THEN \RETURN; \\
      \OUTPUT $K_1$;
      \STOP;
  \end{program}

  \caption{Formalization of $\Wkx$}
  \label{fig:wkx-formal}
\end{figure}
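The three functions of figure~\ref{fig:wkx-formal} can be exercised directly.
The sketch below transcribes $\id{new-session}$, $\id{msg-challenge}$ and
$\id{msg-response}$ into Python over a toy group (an order-1019 subgroup of
$\gf{2039}^*$ standing in for $G$), with truncated SHA-256 for the random
oracles and XOR for $\E$; all of these instantiations are illustrative
assumptions of ours, not part of the protocol.

```python
import hashlib
import random

p, q, g = 2039, 1019, 4   # toy group: g generates the order-q subgroup of Z_p^*

def oracle(tag, *ints):
    """64-bit random-oracle stand-in, keyed by a tag string."""
    data = tag.encode() + b''.join(i.to_bytes(8, 'big') for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest()[:8], 'big')

class Session:
    """One session (P_i, P_j, s), following the figure's three functions."""

    def __init__(self, x, pub, i, j, s):
        # new-session: set up state and emit the masked challenge.
        self.x, self.X, self.X1, self.s = x, pub[i], pub[j], s
        self.C = set()                       # challenges we have answered
        self.r = random.randrange(1, q)
        self.R = pow(g, self.r, p)           # R = r P
        self.Y = pow(self.X1, self.r, p)     # Y = r X'
        self.key = None
        c = self.r ^ oracle('I', self.X, s, self.R, self.Y)
        self.outbox = [('challenge', self.R, c)]

    def message(self, tau, R1, body):
        if tau == 'challenge':
            # msg-challenge: verify the mask, remember R', respond.
            Y1 = pow(R1, self.x, p)                        # Y' = x R'
            r1 = body ^ oracle('I', self.X1, self.s, R1, Y1)
            if R1 != pow(g, r1, p):
                return                                     # ignore junk
            self.C.add(R1)
            K0 = oracle('K0', pow(R1, self.r, p))
            self.outbox.append(('response', self.R, Y1 ^ K0))  # toy E_{K0}
        elif tau == 'response':
            # msg-response: only accept responses to challenges we answered.
            if R1 not in self.C:
                return
            Z = pow(R1, self.r, p)
            if body ^ oracle('K0', Z) != self.Y:           # D_{K0}(chi') != Y
                return
            self.key = oracle('K1', Z)                     # output K_1

# init(2), then one honest run between matching sessions:
priv = [random.randrange(1, q) for _ in range(2)]
pub = [pow(g, x, p) for x in priv]
Si = Session(priv[0], pub, 0, 1, s=7)
Sj = Session(priv[1], pub, 1, 0, s=7)
Sj.message(*Si.outbox[0])          # deliver P_0's challenge to P_1
Si.message(*Sj.outbox[0])          # P_1's challenge to P_0
Si.message(*Sj.outbox[1])          # P_1's response to P_0
Sj.message(*Si.outbox[1])          # P_0's response to P_1
assert Si.key is not None and Si.key == Sj.key
```

Note how the `never aborts' policy appears as bare \texttt{return}s: invalid
messages are silently dropped and the session simply keeps running.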

\subsubsection{Session states}
We must specify what the adversary obtains when it chooses to reveal a
session's state.  Given the program in figure~\ref{fig:wkx-formal}, we can
see that the session state consists of the variables $(x, X, X', r, R, Y,
C)$.

However, $x$ is the owning party's long-term secret, and it seems
unreasonable to disclose this to an adversary who stops short of total
corruption.

The public keys $X$ and $X'$ are part of the adversary's input anyway, so
revealing them doesn't help.  Similarly, the set $C$ of valid challenges
could have been computed easily by the adversary, since a group element $R'
\in C$ if and only if the session $S$ responded to some message
$(\cookie{challenge}, R', c')$.

The value $R = r P$ is easily computed given $r$, and besides is sent in the
clear in the session's first message.  The expected response $Y = r X'$ is
also easily computed from $r$.  The converse is also true, since $r$ can be
recovered from $R$ and $c$ in the session's challenge message and the value
$Y$.  Besides, $r$ is necessary for computing $Z$ in response to incoming
challenges.

We conclude that the only `interesting' session state is $r$.

\subsubsection{Security}
Having formally presented the protocol, we can now state our main theorem
about its security.  The proof is given in \ifshort the full version of the
paper\else appendix~\ref{sec:sk-proof}\fi.
\begin{theorem}[SK-security of $\Wkx$]
  \label{thm:sk}
  Let $G$ be a cyclic group.  Let $\E = (\kappa, E, D)$ be a symmetric
  encryption scheme.  Then
  \begin{spliteqn*}
    \InSec{sk}(\Wkx^{G, \E}; t, n, q_S, q_M, q_I, q_K) \le
      2 q_S \bigl( \InSec{ind-cca}(\E; t', q_M, q_M) + {} \\
      \InSec{mcdh}(G; t', q_K) +
      n \,\InSec{mcdh}(G; t', q_M + q_I) \bigr) +
      \frac{n (n - 1)}{q} +
      \frac{2 q_M}{2^{\ell_I}},
  \end{spliteqn*}
  where $t' = t + O(n) + O(q_S) + O(q_M q_I) + O(q_K)$.
\end{theorem}


\ifshort\else
\subsection{Insecure protocol variants}
\label{sec:kx-insecure}

It's important to feed the session-id and the verifier's public key to the
random oracle $H_I$ when constructing the check-value~$c$.  Without these,
the protocol is vulnerable to attack.  In this section, we consider some
variants of our protocol which hash less information, and show attacks
against them.

To simplify the presentation, we shall consider Alice and Bob, and a third
character Carol.  We shall be attacking a pair of matching sessions $A$
and~$B$, run by Alice and Bob respectively.  Let Alice's and Bob's private
keys be $x_A$ and~$x_B$, and let their public keys be $X_A = x_A P$ and $X_B
= x_B P$ respectively.

\subsubsection{Protocol diagram notation}
In order to keep the presentation as clear as possible, we use a simple
diagrammatic notation for describing protocol runs.  A line of the form
\protocolrun{\textit{action} \ar[r] & \PRsession{S} & \textit{result}}
states that the adversary performs the given \textit{action} on session~$S$,
with the stated \textit{result}.  The \textit{action} may be
\begin{itemize}
\item \textsf{Create session $(P_i, P_j, s)$}: the session is created,
  running in party~$P_i$, with partner~$P_j$ and session-id~$s$.
\item \textsf{Receive $\mu$}: the session is activated with an incoming
  message~$\mu$.
\item \textsf{Session-state reveal}: the adversary requests the session's
  internal state.
\end{itemize}
The \textit{result} may be
\begin{itemize}
\item \textsf{Send $\mu'$}: the session requests the delivery of the
  message~$\mu'$.
\item \textsf{Complete: $K$}: the session completes, outputting the key~$K$.
\item \textsf{State: $\sigma$}: the session's state is revealed to
  be~$\sigma$.
\item \textsf{(Ignore)}: the result of the action is unimportant.
\end{itemize}

\subsubsection{Omitting the session-id}
Consider a protocol variant where session $S$ sets $h_S = H_I(X_N, R_S,
Y_S)$, where $N$ is the session's partner.  That is, we've omitted the
session-id from the hash.  An adversary can cross over two sessions between
Alice and Bob.  Here's how.

The attack assumes that Alice and Bob set up two pairs of matching sessions
with each other, thus.
\protocolrun{
  \PRcreate{Alice}{Bob}{s} & \PRsession{A} &
    \PRsend{challenge}{R_A, c_A} \\
  \PRcreate{Bob}{Alice}{s} & \PRsession{B} &
    \PRsend{challenge}{R_B, c_B} \\
  \PRcreate{Alice}{Bob}{s'} & \PRsession{A'} &
    \PRsend{challenge}{R_{A'}, c_{A'}} \\
  \PRcreate{Bob}{Alice}{s'} & \PRsession{B'} &
    \PRsend{challenge}{R_{B'}, c_{B'}}
}
Observe that the session pairs use distinct session-ids $s \ne s'$, so this
is allowed.  Now the adversary crosses over the challenges, using the second
pair of sessions to provide responses to the challenges issued by the first
pair.  By revealing the state in the second pair of sessions, the adversary
can work out the (probably different) session keys accepted by the first
pair.
\protocolrun{
  \PRreceive{challenge}{R_B, c_B} & \PRsession{A'} &
    \PRsend{response}{R_{A'}, E_{K_{A'B,0}}(x_A R_B)} \\
  \PRreceive{challenge}{R_A, c_A} & \PRsession{B'} &
    \PRsend{response}{R_{B'}, E_{K_{B'A,0}}(x_B R_A)} \\
  \PRreceive{challenge}{R_{A'}, c_{A'}} & \PRsession{A} & \PRignore \\
  \PRreceive{challenge}{R_{B'}, c_{B'}} & \PRsession{B} & \PRignore \\
  \PRreveal & \PRsession{A'} & r_{A'} \\
  \PRreveal & \PRsession{B'} & r_{B'} \\
  \PRreceive{response}{R_{B'}, E_{K_{B'A,0}}(x_B R_A)} & \PRsession{A} &
    \PRcomplete{K_{AB',1}} \\
  \PRreceive{response}{R_{A'}, E_{K_{A'B,0}}(x_A R_B)} & \PRsession{B} &
    \PRcomplete{K_{BA',1}} \\
}
The adversary can now compute $K_{AB'} = H_K(r_{B'} R_A)$ and $K_{BA'} =
H_K(r_{A'} R_B)$.  Safely in possession of both keys, the adversary can now
read and impersonate freely.
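The failure is easy to reproduce in miniature: without the session-id, the
masks computed in sessions $s$ and $s'$ for the same partner key, challenge
and Diffie-Hellman value coincide, so material solicited in one session
verifies in the other.  A sketch (the hash instantiation and byte encodings
are illustrative assumptions of ours):

```python
import hashlib

def mask(*parts):
    """An H_I-style mask over length-prefixed byte-string inputs."""
    h = hashlib.sha256()
    for part in parts:
        h.update(len(part).to_bytes(2, 'big'))
        h.update(part)
    return int.from_bytes(h.digest()[:8], 'big')

X_N, R, Y = b'partner-key', b'challenge-R', b'dh-value-Y'

# Flawed variant: h_S = H_I(X_N, R_S, Y_S), with no session-id.  Sessions
# s and s' compute identical masks, so the cross-over goes undetected.
assert mask(X_N, R, Y) == mask(X_N, R, Y)

# Full protocol: h_S = H_I(X_N, s, R_S, Y_S).  The mask now binds the
# session-id, and a value taken from session s' fails verification in s.
assert mask(X_N, b's', R, Y) != mask(X_N, b"s'", R, Y)
```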

\subsubsection{Omitting the partner's public key}
Now consider a protocol variant where session $S$ sets $h_S = H_I(s, R_S,
Y_S)$, where $s$ is the session-id.  An adversary can use sessions with
Carol to attack a session between Alice and Bob.  Here's how the sessions
are set up.
\protocolrun{
  \PRcreate{Alice}{Bob}{s} & \PRsession{A} &
    \PRsend{challenge}{R_A, c_A} \\
  \PRcreate{Bob}{Alice}{s} & \PRsession{B} &
    \PRsend{challenge}{R_B, c_B} \\
  \PRcreate{Alice}{Carol}{s} & \PRsession{A'} &
    \PRsend{challenge}{R_{A'}, c_{A'}} \\
  \PRcreate{Bob}{Carol}{s} & \PRsession{B'} &
    \PRsend{challenge}{R_{B'}, c_{B'}}
}
Although each of Alice and Bob has two sessions with session-id~$s$, this is
allowed, since they are with different partners.  The rest of the attack
proceeds identically to the previous case.
\fi

\subsection{Deniability}
\label{sec:denial}

We have claimed that the Wrestlers key-exchange protocol is \emph{deniable}.
In this section, we define what we mean, explain the limits of the
deniability of the protocol as currently presented, fix the protocol with an
additional pass (which can be collapsed to a single message flow by breaking
the protocol's symmetry), and prove the deniability of the resulting
protocol.

Our notion of deniability is taken from Di~Raimondo, Gennaro and Krawczyk
\cite{DiRaimondo:2006:DAK}, except that, as usual, we opt for a concrete
security approach.
2572
2573\subsubsection{Discussion}
2574Our definition for deniability is that, for any adversary interacting with
2575participants in the protocol, a simulator exists which can compute the same
2576things as the adversary. In particular, since an adversary could output a
2577transcript of the interactions between itself and the parties, it would
2578follow that a simulator could do this too. If the simulator is effective,
2579its output is indistinguishable from that of the real adversary, and hence no
2580`judge' (distinguisher) should be persuaded by evidence presented by someone
2581who claims to have witnessed or participated in an interaction.
2582
We work again in the model described in~\ref{sec:um}.  That is, our
adversary has complete control over the ordering and delivery of all
messages.  The adversary is also able, at any time, to reveal the state of
any session.  However, deniability is obviously impossible against an
adversary who can \emph{corrupt} other parties, since simulating such an
adversary's actions would necessarily require the simulator to compute
private keys corresponding to the known public keys, and this is (we
believe) difficult, because an efficient algorithm for doing this could
easily attack our protocol, which we already proved secure.  Therefore, we
forbid the adversary from corrupting parties.

In order to allow the adversary to participate in the protocol, rather than
merely observing it, we need to give it one or more private keys.  We could
modify the initialization function \id{init} from figure~\ref{fig:wkx-formal}
to give the adversary a private key, but there's an easier way: we can just
give the adversary a number of private keys in its auxiliary input.

\subsubsection{Definitions}
Let $\Pi$ be a key-exchange protocol, in the model described in
section~\ref{sec:um}.  We use the simulation framework of
section~\ref{sec:sim}.  We define the initialization function $I_\Pi$ to be
the initialization function of $\Pi$, as before, and the corresponding
world~$W_\Pi(\iota, \sigma, \tau, \mu)$ is a fairly straightforward mapping
of the adversary's possible actions to the simulation model:
\begin{itemize}
\item The invocation $\cookie{new-session}$ with $\mu = (i, j, s)$ creates a
  new session on party~$P_i$, with partner~$P_j$ and session-id~$s$.  The
  reply $\rho = (\delta, m)$ is a \emph{decision} $\delta \in
  \{\cookie{continue}, \cookie{abort}, \cookie{complete}\}$ and an output
  message $m \in \Bin^* \cup \{\bot\}$.  If $m \ne \bot$ then $m$ is a
  message to be sent to the matching session (if any).
\item The invocation $\cookie{deliver}$ with $\mu = (i, j, s, m)$ delivers
  message $m$ to the session $S = (P_i, P_j, s)$.  The response $\rho$ is as
  for $\cookie{new-session}$ invocations.
\item The invocation $\cookie{reveal-session-state}$ with $\mu = (i, j, s)$
  reveals to the adversary the state of the running session $S = (P_i, P_j,
  s)$.  The response $\rho$ is the session's state if $S$ is indeed a running
  session, or $\bot$ otherwise.
\item The invocation $\cookie{reveal-session-key}$ with $\mu = (i, j, s)$
  reveals to the adversary the session-key of the completed session~$S =
  (P_i, P_j, s)$.  The response $\rho$ is the session key~$K$ if the session
  is indeed complete, or $\bot$ otherwise.
\end{itemize}
There are no invocations corresponding to the adversary actions of corrupting
parties (since deniability against a corrupting adversary is impossible, as
discussed earlier), or session expiry or challenging (since they are useless
in this context).

We measure the deniability of a protocol~$\Pi$, using a given simulator~$S$,
by the insecurity function $\InSec{sim}(W_\Pi, I_\Pi, S; t_D, t_A,
\mathcal{Q}_D, \mathcal{Q}_A, \mathcal{R}, \mathcal{U})$ of
definition~\ref{def:sim}.  The interaction bounds $\mathcal{R} = (q_S, q_M)$
we place on the adversary are the number $q_S$ of \cookie{new-session}
invocations and the number $q_M$ of \cookie{deliver} invocations it makes.

We shall (informally) say that a protocol~$\Pi$ is deniable if there is a
simulator~$S_\Pi$ for which the insecurity function is small for appropriate
resource bounds on the adversary and distinguisher.

\subsubsection{The current protocol}
As it stands, $\Wkx$ isn't deniable, according to our definition, for
arbitrary auxiliary inputs.  Let's see why.

Suppose that Bob is an informant for the secret police, and wants to convince
a judge that Alice is involved in subversive activities in cyberspace.
Unfortunately, Bob's job is difficult, because of the strong zero-knowledge
nature of the Wrestlers identification protocol.  However, Bob can work with
the judge to bring him the evidence necessary to convict Alice.  Here's how.

Alice's public key is $A$, and Bob's public key is $B$.  The judge chooses
some session-id $s$, and $r \inr \gf{q}$.  He computes $R = r P$ and $c =
r \xor H_I(B, s, R, r A)$, and gives Bob the triple $(s, R, c)$, keeping $r$
secret.  Bob can now persuade Alice to enter into a key-exchange with him,
with session-id $s$.  He uses $(R, c)$ as his challenge message.  When Alice
sends back her response $(R', \chi)$ (because Bob's challenge is correctly
formed), Bob aborts and shows $(R', \chi)$ to the judge.  The judge knows $r$
and can therefore decrypt $\chi$ and check the response.  If it's wrong, then
Bob was cheating and gets demoted -- he can't get the right answer by himself
because that would require him to impersonate Alice.  If it's right, Alice is
really a subversive element and `disappears' in the night.

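The arithmetic behind the judge's trick can be checked mechanically.  The
following Python sketch replays the attack in a toy additive group
$\mathbb{Z}_p$ with generator $P = 1$; it illustrates only the message
computations, not the security (discrete logs are trivial here, and the
hash and all numeric values are stand-ins of our own choosing):

```python
import hashlib

# Toy additive group Z_p with generator P = 1: `scalar multiplication'
# is multiplication mod p.  Illustration only -- a real deployment
# would use an elliptic-curve group.
p = 2**61 - 1

def mul(k, Q):
    return (k * Q) % p

def H_I(*args):  # stand-in for the four-argument random oracle
    data = b'|'.join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest()[:8], 'big') % 2**61

a = 123456789                  # Alice's private key
A = mul(a, 1)                  # ... and public key
B = mul(987654321, 1)          # Bob's public key

# The judge prepares a challenge for session-id s, keeping r secret.
s, r = 'subversion', 42424242
R = mul(r, 1)
c = r ^ H_I(B, s, R, mul(r, A))

# Alice validates Bob's relayed challenge exactly as the protocol
# prescribes: a R = r A, so she recovers r and the check passes.
r_alice = c ^ H_I(B, s, R, mul(a, R))
assert r_alice == r and mul(r_alice, 1) == R

# Her response is encrypted under a key derived from r' R', which the
# judge can rederive from his secret r and her ephemeral point.
r_A = 777                      # Alice's ephemeral scalar
R_A = mul(r_A, 1)
assert mul(r_A, R) == mul(r, R_A)   # Z agrees, so the judge can decrypt
```
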
We shall show in theorem~\ref{thm:denial} below that this is basically the
only attack against the deniability of the protocol.  However, we can do
better.

\subsubsection{Fixing deniability}
We can fix the protocol to remove even the attack discussed above.  The
change is simple: we feed \emph{both} parties' challenges to the hash
function~$H_I$ rather than just the sender's.  We use a five-argument hash
function (random oracle) $H_I\colon G^2 \times \Bin^{\ell_S} \times G^2 \to
\Bin^{\ell_I}$.  We introduce a new message pass at the beginning of the
protocol: each session simply sends its challenge point $R = r P$ in the
clear as a `pre-challenge'.  The actual challenge is $R$ and $c = r \xor
H_I(R', X, s, R, r X')$, where $R'$ is the pre-challenge of the matching
session and $X'$ is the partner's public key.

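As a sanity check on the new challenge computation, here is a minimal Python
sketch in the same toy additive group (generator $P = 1$; the hash and all
values are illustrative): the five-argument hash binds the partner's fresh
pre-challenge, and verification still works because $x_j R_i = r_i X_j$.

```python
import hashlib

p = 2**61 - 1                        # toy additive group Z_p, P = 1

def mul(k, Q):
    return (k * Q) % p

def H_I(*args):  # stand-in for the five-argument random oracle
    data = b'|'.join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest()[:8], 'big') % 2**61

x_i, x_j = 1111, 2222                # private keys
X_i, X_j = mul(x_i, 1), mul(x_j, 1)  # public keys
s = 'session-0'

# Pre-challenge pass: both sides announce R = r P in the clear.
r_i, r_j = 3333, 4444
R_i, R_j = mul(r_i, 1), mul(r_j, 1)

# i's actual challenge binds j's fresh pre-challenge R_j ...
c_i = r_i ^ H_I(R_j, X_i, s, R_i, mul(r_i, X_j))

# ... and j verifies it using x_j R_i = r_i X_j.
r_rec = c_i ^ H_I(R_j, X_i, s, R_i, mul(x_j, R_i))
assert r_rec == r_i and mul(r_rec, 1) == R_i
```
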
By breaking symmetry, we can reduce the communication complexity of this
variant to four messages.  As before, we analyse the symmetrical version.
The extra flow might seem a high price to pay, but we shall see that it has
additional benefits beyond deniability.

A summary of the new protocol is shown in figure~\ref{fig:wdkx}, and the
formal description is shown in figure~\ref{fig:wdkx-formal}.

\begin{figure}
  \begin{description}
  \item[Setup] Group $G = \langle P \rangle$; $\#G = q$ is prime.
    $H_I(\cdot, \cdot, \cdot, \cdot, \cdot)$ and $H_K(\cdot)$ are secure
    hashes.  $\E = (\kappa, E, D)$ is an IND-CCA2 symmetric encryption
    scheme.
  \item[Parties] $U_i$ for $0 \le i < n$.
  \item[Private keys] $x_i \inr \gf{q}$.
  \item[Public keys] $X_i = x_i P$.
  \end{description}

  \begin{protocol}
    $r_i \getsr I$; $R_i \gets r_i P$; &
    $r_j \getsr I$; $R_j \gets r_j P$; \\
    \send{->}{R_i}
    \send{<-}{R_j}
    $c_i \gets r_i \xor H_I(R_j, X_i, s, R_i, r_i X_j)$; &
    $c_j \gets r_j \xor H_I(R_i, X_j, s, R_j, r_j X_i)$; \\
    \send{->}{(R_i, c_i)}
    \send{<-}{(R_j, c_j)}
    Check $R_j = \bigl(c_j \xor H_I(R_i, X_j, s, R_j, x_i R_j)\bigr) P$; &
    Check $R_i = \bigl(c_i \xor H_I(R_j, X_i, s, R_i, x_j R_i)\bigr) P$; \\
    $Z \gets r_i R_j$; $(K_0, K_1) \gets H_K(Z)$; &
    $Z \gets r_j R_i$; $(K_0, K_1) \gets H_K(Z)$; \\
    $\chi_i \gets E_{K_0}(x_i R_j)$; &
    $\chi_j \gets E_{K_0}(x_j R_i)$; \\
    \send{->}{(R_i, \chi_i)}
    \send{<-}{(R_j, \chi_j)}
    Check $D_{K_0}(\chi_j) = r_i X_j$; &
    Check $D_{K_0}(\chi_i) = r_j X_i$; \\
    Shared key is $K_1$. & Shared key is $K_1$.
  \end{protocol}

  \caption{Summary of the Deniable Wrestlers Key Exchange protocol, $\Wdkx$}
  \label{fig:wdkx}
\end{figure}

\begin{figure}
  \begin{program}
    Function $\id{init}(n)$: \+ \\
      \FOR $i \in \Nupto{n}$ \DO \\ \ind
        $x \getsr \gf{q}$; \\
        $\mathbf{i}[i] \gets x$; \\
        $\mathbf{p}[i] \gets x P$; \- \\
      \RETURN $(\mathbf{p}, \mathbf{i})$;
    \- \\[\medskipamount]
    Function $\id{new-session}^{H_I(\cdot, \cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
        (\mathbf{p}, x, i, j, s)$: \+ \\
      $X \gets \mathbf{p}[i]$;
      $X' \gets \mathbf{p}[j]$;
      $C \gets \emptyset$; \\
      $r \getsr \gf{q}$;
      $R \gets r P$;
      $Y \gets r X'$; \\
      \SEND $(\cookie{pre-challenge}, R)$;
    \- \\[\medskipamount]
    Function $\id{message}^{H_I(\cdot, \cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
        (\tau, \mu)$: \+ \\
      \IF $\tau = \cookie{pre-challenge}$ \THEN
        $\id{msg-pre-challenge}(\mu)$; \\
      \ELSE \IF $\tau = \cookie{challenge}$ \THEN
        $\id{msg-challenge}(\mu)$; \\
      \ELSE \IF $\tau = \cookie{response}$ \THEN $\id{msg-response}(\mu)$;
    \next
    Function $\id{msg-pre-challenge}(\mu)$: \+ \\
      $R' \gets \mu$; \\
      $h \gets H_I(R', X, s, R, Y)$; \\
      $c \gets r \xor h$; \\
      \SEND $(\cookie{challenge}, R, c)$;
    \- \\[\medskipamount]
    Function $\id{msg-challenge}(\mu)$: \+ \\
      $(R', c') \gets \mu$; \\
      $Y' \gets x R'$; \\
      $h' \gets H_I(R, X', s, R', Y')$; \\
      $r' \gets c' \xor h'$; \\
      \IF $R' \ne r' P$ \THEN \RETURN; \\
      $C \gets C \cup \{R'\}$; \\
      $Z \gets r R'$; \\
      $(K_0, K_1) \gets H_K(Z)$; \\
      $\chi \gets E_{K_0}(Y')$; \\
      \SEND $(\cookie{response}, R, \chi)$;
    \- \\[\medskipamount]
    Function $\id{msg-response}(\mu)$: \+ \\
      $(R', \chi') \gets \mu$; \\
      \IF $R' \notin C$ \THEN \RETURN; \\
      $Z \gets r R'$; \\
      $(K_0, K_1) \gets H_K(Z)$; \\
      $Y' \gets D_{K_0}(\chi')$; \\
      \IF $Y' \ne Y$ \THEN \RETURN; \\
      \OUTPUT $K_1$;
      \STOP;
  \end{program}

  \caption{Deniable key-exchange: formalization of $\Wdkx$}
  \label{fig:wdkx-formal}
\end{figure}

The security of this variant is given by the following theorem, whose proof
is \ifshort given in the full version of this paper\else in
appendix~\ref{sec:sk2-proof}\fi.
\begin{theorem}[SK-security of $\Wdkx$]
  \label{thm:sk2}
  Let $G$ be a cyclic group.  Let $\E = (\kappa, E, D)$ be a symmetric
  encryption scheme.  Then
  \[ \InSec{sk}(\Wdkx^{G, \E}; t, n, q_S, q_M, q_I, q_K) =
     \InSec{sk}(\Wkx^{G, \E}; t, n, q_S, q_M, q_I, q_K).
  \]
\end{theorem}

\subsubsection{Deniability of the Wrestlers protocols}
In order to quantify the level of deniability our protocols provide, we shall
impose a limit on the auxiliary input to the adversary.  In particular, we
shall use $\mathcal{U}$ of definition~\ref{def:sim} to count the number of
\emph{challenges} in the auxiliary input.  That is, $\mathcal{U} = n_C$ is
the number of tuples $(i, j, s, R', R, c)$ for which there is an $r$ such
that $R = r P$ and $c = r \xor H_I(R', X_j, s, R, r X_i)$ (or without the
$R'$ for $\Wkx$).

With this restriction in place, we can state the following theorem about the
deniability of our protocols.
\begin{theorem}[Deniability of $\Wkx$ and $\Wdkx$]
  \label{thm:denial}
  There exist simulators $S_\Wkx$ and $S_\Wdkx$ such that
  \[ \InSec{sim}(W_{\Wkx^{G, \E}}, I_{\Wkx^{G, \E}}, S_{\Wkx^{G, \E}};
       t_D, t_A, \mathcal{Q}_D, \mathcal{Q}_A, (q_S, q_M), 0) \le
       \frac{q_M}{2^{\ell_I}}
  \]
  and
  \iffancystyle\[\else\begin{spliteqn*}\fi
    \InSec{sim}(W_{\Wdkx^{G, \E}}, I_{\Wdkx^{G, \E}}, S_{\Wdkx^{G, \E}};
      t_D, t_A, \mathcal{Q}_D, \mathcal{Q}_A, (q_S, q_M), n_C) \le
    \iffancystyle\else\\\fi
    \frac{n_C q_S}{\#G} +
    \frac{q_M}{2^{\ell_I}}.
  \iffancystyle\]\else\end{spliteqn*}\fi
  The running time of the simulators is $O(t_A) + O(\mathcal{Q}_A q_M)$.
\end{theorem}
\begin{longproof}{The proof of this theorem can be found in the full version
    of this paper.}
  The simulators $S_\Wkx$ and $S_\Wdkx$ are very similar.  We describe both
  here.  Both are fake-world simulators, working as follows.
  \begin{enumerate}
  \item Initially, it constructs simulated parties $P_i$, for $0 \le i < n$,
    giving each the public key $X_i$ from the common input.
  \item Suppose the adversary requests creation of a new session $S = (P_i,
    P_j, s)$.  Then the simulator creates a new session, including a random
    value $r_S \inr \gf{q}$, and computes $R_S = r_S P$, and $Y_S = r_S
    X_j$.  For $\Wdkx$, it sends the message $(\cookie{pre-challenge}, R_S)$;
    for $\Wkx$, it additionally computes $h = H_I(X_i, s, R_S, Y_S)$ and
    sends $(\cookie{challenge}, R_S, r_S \xor h)$.
  \item Suppose, for $\Wdkx$, the adversary sends a message
    $(\cookie{pre-challenge}, R')$ to a session~$S = (P_i, P_j, s)$.  The
    simulator computes $h = H_I(R', X_i, s, R_S, Y_S)$, and sends
    $(\cookie{challenge}, R_S, r_S \xor h)$.
  \item Suppose the adversary sends a message $(\cookie{challenge}, R', c')$
    to session $S = (P_i, P_j, s)$.  The simulator doesn't know $x_i$.
    \begin{enumerate}
    \item If $R' = R_T$ for some other simulated session $T$, then the
      simulator knows $r_T$ such that $R_T = r_T P$.  Let $Y' = r_T X_i$.
      The simulator computes $h = H_I(X_j, s, R', Y')$ (resp.\ $h = H_I(R_S,
      X_j, s, R', Y')$) for $\Wkx$ (resp.\ $\Wdkx$) and checks that $r_T = c'
      \xor h$.  If not, the simulator discards the message.  Otherwise, it
      computes $(K_0, K_1) = H_K(r_S R')$, and sends the message
      $(\cookie{response}, R_S, E_{K_0}(Y'))$.
    \item \label{en:simextract} Otherwise the simulator runs the extractor
      $T_\Wident$ on the adversary's history of queries $H_I(X_j, s, R',
      \cdot)$ (resp.\ $H_I(R_S, X_j, s, R', \cdot)$) for $\Wkx$ (resp.\
      $\Wdkx$).  The extractor returns $(r', Y')$.  If $Y' = \bot$ then the
      simulator ignores the message.  Otherwise, the simulator computes
      $(K_0, K_1) = H_K(r_S R')$ and sends back $(\cookie{response}, R_S,
      E_{K_0}(Y'))$.
    \end{enumerate}
  \item Suppose the adversary sends a message $(\cookie{response}, R', \chi)$
    to session $S = (P_i, P_j, s)$.  The simulator computes $(K_0, K_1) =
    H_K(r_S R')$, and decrypts $Y' = D_{K_0}(\chi)$.  If $Y' \ne Y_S$ then
    the simulator discards the message.  Otherwise, it makes the simulated
    session complete, and outputs key $K_1$.
  \item Finally, if the adversary reveals a session's state, the simulator
    reveals $r_S$ as required; if the adversary reveals a session-key, the
    simulator reveals the $K_1$ it output.
  \end{enumerate}
  The only point where the simulation fails to be perfect is in
  \ref{en:simextract}.  Let $R'$ and $c'$ be the values from an incoming
  challenge message to session $S = (P_i, P_j, s)$.  Let $r'$ be such that
  $R' = r' P$ and let $Y' = r' X_i$.  If a random-oracle query $H_I(X_j, s,
  R', Y')$ (or $H_I(R_S, X_j, s, R', Y')$ for $\Wdkx$) has been issued, then
  there are a number of possibilities.  Let $h'$ be the result of this query.
  \begin{itemize}
  \item The adversary made this query.  Then the extractor will find it and
    return $Y'$ if $c' = h' \xor r'$, or $\bot$ otherwise.
  \item Some simulated session $U = (P_{i'}, P_{j'}, s')$ made this query.
    But simulated sessions only make $H_I$-queries when constructing
    challenges, so $R' = R_U$ for some session~$U$.  But the simulator does
    something different in that case.
  \item In $\Wdkx$, the quadruple $(s, R_S, R', c')$ came from the
    adversary's auxiliary input.  In this case the simulator must fail.  But
    $R_S = r_S P$, and $r_S$ was chosen uniformly at random.  If there are at
    most $n_C$ challenge sets in the auxiliary input then this happens with
    probability at most $n_C/\#G$ for any given session.
  \end{itemize}
  We conclude that the simulator fails with probability
  \[ \frac{q_M}{2^{\ell_I}} + \frac{q_S n_C}{\#G}. \]
  (Note that we only consider $n_C = 0$ for $\Wkx$.)  No adversary can
  distinguish the simulator from a real interaction unless the simulator
  fails, and the simulator is a fake-world simulator.  We therefore apply
  proposition~\ref{prop:fakesim}; the theorem follows.
\end{longproof}

\ifshort\else
\subsection{Practical issues}
\label{sec:practice}

\subsubsection{Denial of service from spoofers}
The adversary we considered in~\ref{sec:um} is very powerful.  Proving
security against such a powerful adversary is good and useful.  However,
there are other useful security properties we might like against weaker
adversaries.

Eavesdropping on the Internet is actually nontrivial.  One needs control of
one of the intermediate routers between two communicating parties.  (There
are tricks one can play involving subversion of DNS servers, but this is also
nontrivial.)  However, injecting packets with bogus source addresses is very
easy.

Layering the protocol over TCP \cite{RFC0793} ameliorates this problem because
an adversary needs to guess or eavesdrop in order to obtain the correct
sequence numbers for a spoofed packet; but the Wrestlers protocol is
sufficiently simple that we'd like to be able to run it over UDP
\cite{RFC0768}, for which spoofing is trivial.

Therefore, it's possible for anyone on the 'net to send Alice a spurious
challenge message $(R, c)$.  She will then compute $Y = a R$, recover $r' = c
\xor H_I(\ldots, R, Y)$, and check that $R = r' P$, and so on.  That's at
least two scalar multiplications to respond to a spoofed packet, and even
with very efficient group operations, coping with this kind of simple
denial-of-service attack might be difficult.

A straightforward solution is to use the deniable variant of the protocol,
and require each challenge message to quote its matching session's
pre-challenge $R'$.  That is, upon receiving a $(\cookie{pre-challenge},
R')$, the session sends $(\cookie{challenge}, R', R, c)$.  Alice then rejects
any \emph{incoming} challenge message which doesn't quote her current
challenge value.  Now only eavesdroppers can force her to perform expensive
computations.

Indeed, one need not quote the entire challenge $R'$: it suffices to send
some short hard-to-guess hash of it, maybe just the bottom 128 bits or so.

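A sketch of this cheap pre-check in Python (the hash choice, the 128-bit
truncation, and all values here are illustrative, not fixed by the
protocol):

```python
import hashlib

def quote(R):
    # Short hard-to-guess quotation of a challenge point: bottom 128
    # bits of a hash.  (Truncation length is illustrative.)
    return hashlib.sha256(str(R).encode()).digest()[:16]

my_R = 123456789          # our outstanding pre-challenge point (toy value)

def accept_challenge(quoted):
    # Constant-cost comparison, done before any scalar multiplication.
    return quoted == quote(my_R)

assert accept_challenge(quote(my_R))       # genuine challenge: proceed
assert not accept_challenge(quote(42))     # spoofed: rejected cheaply
```
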
This can't reduce security.  Consider any adversary attacking this protocol
variant.  We can construct an adversary which attacks the original protocol
just as efficiently.  The new adversary attaches fake $R'$ values to
challenges output by other parties, and strips them off on delivery,
discarding messages with incorrect $R'$ values.

\subsubsection{Key confirmation}
Consider an application which uses the Wrestlers protocol to re-exchange keys
periodically.  The application can be willing to \emph{receive} incoming
messages using the new key as soon as the key exchange completes
successfully; however, it should refrain from \emph{sending} messages under
the new key until it knows that its partner has also completed.  The partner
may not have received the final response message, and therefore be unwilling
to accept a new key; it will therefore (presumably) reject incoming messages
under this new key.

While key confirmation is unnecessary for \emph{security}, it has
\emph{practical} value, since it solves the above problem.  If the
application sends a \cookie{switch} message when it `completes', it can
signal its partner that it is indeed ready to accept messages under the new
key.  Our implementation sends $(\cookie{switch-rq}, E_{K_0}(H_S(0, R, R')))$
as its switch message; the exact contents aren't important.  Our
retransmission policy (below) makes use of an additional message
\cookie{switch-ok}, which can be defined similarly.

It's not hard to show that this doesn't adversely affect the security of the
protocol, since the encrypted message is computed only from public values.
In the security proof, we modify the generation of \cookie{response}
messages, so that the plaintexts are a constant string rather than the true
responses, guaranteeing that the messages give no information about the
actual response.  To show this is unlikely to matter, we present an adversary
attacking the encryption scheme by encrypting either genuine responses or
fake constant strings.  Since the adversary can't distinguish which is being
encrypted (by the definition of IND-CCA security,
definition~\ref{def:ind-cca}), the change goes unnoticed.  In order to
incorporate our switch messages, we need only modify this adversary, to
implement the modified protocol.  This is certainly possible, since the
messages contain (hashes of) public values.  We omit the details.

However, while the extra message doesn't affect the security of our protocol,
it would be annoying if an adversary could forge the switch request message,
since this would be a denial of service.  In the strong adversarial model,
this doesn't matter, since the adversary can deny service anyway, but it's a
concern against less powerful adversaries.  Most IND-CCA symmetric encryption
schemes also provide integrity of plaintexts \cite{Bellare:2000:AER} (e.g.,
the encrypt-then-MAC generic composition approach \cite{Bellare:2000:AER,%
Krawczyk:2001:OEA}, and the authenticated-encryption modes of
\cite{Rogaway:2003:OCB,Bellare:2004:EAX,McGrew:2004:SPG}), so this isn't a
great imposition.

2984\subsubsection{Optimization and piggybacking}
2985We can optimize the number of messages sent by combining them. Here's one
2986possible collection of combined messages:
2987\begin{description}
2988\item [\cookie{pre-challenge}] $R$
2989\item [\cookie{challenge}] $R'$, $R$, $c = H_I(R', X, s, R, c) \xor r$
2990\item [\cookie{response}] $R'$, $R$, $c$, $E_{K_0}(x R')$
2991\item [\cookie{switch}] $R'$, $E_{K_0}(x R', H_S(0, R, R'))$
2992\item [\cookie{switch-ok}] $R'$, $E_{K_0}(H_S(1, R, R'))$
2993\end{description}
2994The combination is safe:
2995\begin{itemize}
2996\item the \cookie{switch} and \cookie{switch-ok} messages are safe by the
2997 argument above; and
2998\item the other recombinations can be performed and undone in a `black box'
2999 way, by an appropriately defined SK-security adversary.
3000\end{itemize}
\subsubsection{Unreliable transports}
The Internet User Datagram Protocol (UDP) \cite{RFC0768} is a simple,
unreliable protocol for transmitting datagrams.  However, it's very
efficient, and rather attractive as a transport for datagram-based
applications such as virtual private networks (VPNs).  Since UDP is a
best-effort rather than a reliable transport, it can occasionally drop
packets.  Therefore it is necessary for a UDP application to be able to
retransmit messages which appear to have been lost.

We recommend the following simple retransmission policy for running the
Wrestlers protocol over UDP.
\begin{itemize}
\item Initially, send out the \cookie{pre-challenge} message every minute.
\item On receipt of a \cookie{pre-challenge} message, send the corresponding
  full \cookie{challenge}, but don't retain any state.
\item On receipt of a (valid) \cookie{challenge}, record the challenge value
  $R'$ in a table, together with $K = (K_0, K_1)$ and the response $Y' = x
  R'$.  If the table is full, overwrite an existing entry at random.  Send
  the corresponding \cookie{response} message, and retransmit it every ten
  seconds or so.
\item On receipt of a (valid) \cookie{response}, discard any other
  challenges, and stop sending \cookie{pre-challenge} and \cookie{response}
  retransmits.  At this point, the basic protocol described above would
  \emph{accept}, so the key $K_1$ is known to be good.  Send the
  \cookie{switch} message, including its response to the (now known-good)
  sender's challenge.
\item On receipt of a (valid) \cookie{switch}, send back a \cookie{switch-ok}
  message and stop retransmissions.  It is now safe to start sending messages
  under $K_1$.
\item On receipt of a (valid) \cookie{switch-ok}, stop retransmissions.  It
  is now safe to start sending messages under $K_1$.
\end{itemize}
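The policy above can be pictured as a small state machine.  The following
Python sketch is ours (real timers, networking, and table eviction are
elided); it only tracks which retransmission timers are live at each step.

```python
# Hedged sketch of the retransmission policy; intervals are illustrative.
class WkxRetransmit:
    def __init__(self, send):
        self.send = send
        self.state = 'PRE'                 # retransmitting pre-challenge
        self.timers = {'pre-challenge': 60}

    def on_pre_challenge(self, R):
        # respond statelessly; keep no record of R
        self.send('challenge', R)

    def on_challenge(self, R, keys, response):
        # record (R', K, Y') in a bounded table (eviction omitted here)
        self.table = {'R': R, 'K': keys, 'Y': response}
        self.timers['response'] = 10       # retransmit every ~10s
        self.send('response', R)

    def on_response(self):
        # the basic protocol would accept here: K_1 is known good
        self.timers.pop('pre-challenge', None)
        self.timers.pop('response', None)
        self.timers['switch'] = 10
        self.state = 'SWITCHING'
        self.send('switch')

    def on_switch(self):
        self.send('switch-ok')
        self.timers.pop('switch', None)
        self.state = 'DONE'                # safe to send under K_1

    def on_switch_ok(self):
        self.timers.pop('switch', None)
        self.state = 'DONE'                # safe to send under K_1

sent = []
m = WkxRetransmit(lambda *msg: sent.append(msg))
m.on_challenge(R=1, keys=('K0', 'K1'), response='Y')
m.on_response()
m.on_switch_ok()
assert m.state == 'DONE' and not m.timers
```
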
\subsubsection{Key reuse}
Suppose our symmetric encryption scheme $\E$ is not only IND-CCA secure
(definition~\ref{def:ind-cca}) but also provides integrity of plaintexts
\cite{Bellare:2000:AER} (or, alternatively, is an AEAD scheme
\cite{Rogaway:2002:AEA}).  Then we can use it to construct a secure channel,
by including message type and sequence number fields in the plaintexts, along
with the message body.  If we do this, we can actually get away with just the
one key $K = H_K(Z)$ rather than both $K_0$ and $K_1$.

To do this, it is essential that the plaintext messages (or additional data)
clearly distinguish between the messages sent as part of the key-exchange
protocol and messages sent over the `secure channel'.  Otherwise, there is a
protocol-interference attack: an adversary can replay key-exchange
ciphertexts to insert the corresponding plaintexts into the channel.
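
To make the framing concrete, here is a hedged Python sketch: a MAC-only toy
channel stands in for a real scheme with both privacy and integrity of
plaintexts, and the field layout and type values are ours, not fixed by the
protocol.

```python
import hmac, hashlib, struct

# Frame plaintexts with a message-type byte and sequence number before
# authentication, so key-exchange messages and channel traffic can never
# be confused.  (Toy: a real channel would also encrypt pt.)
TYPE_KX, TYPE_DATA = 0, 1

def frame(mtype, seq, body):
    return struct.pack('>BQ', mtype, seq) + body

def protect(K, mtype, seq, body):
    pt = frame(mtype, seq, body)
    tag = hmac.new(K, pt, hashlib.sha256).digest()
    return pt + tag

def accept(K, expected_type, expected_seq, msg):
    pt, tag = msg[:-32], msg[-32:]
    if not hmac.compare_digest(tag, hmac.new(K, pt, hashlib.sha256).digest()):
        return None
    mtype, seq = struct.unpack('>BQ', pt[:9])
    if mtype != expected_type or seq != expected_seq:
        return None        # replayed key-exchange ciphertexts bounce here
    return pt[9:]

K = b'K' * 32
kx_msg = protect(K, TYPE_KX, 0, b'response ...')
# A replayed key-exchange ciphertext is rejected by the data channel:
assert accept(K, TYPE_DATA, 0, kx_msg) is None
assert accept(K, TYPE_KX, 0, kx_msg) == b'response ...'
```
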

We offer a sketch proof of this claim in appendix~\ref{sec:sc-proof}.
\fi

%%%--------------------------------------------------------------------------

\section{Conclusions}
\label{sec:conc}

We have presented new protocols for identification and authenticated
key-exchange, and proven their security.  We have shown them to be efficient
and simple.  We have also shown that our key-exchange protocol is deniable.
Finally, we have shown how to modify the key-exchange protocol for practical
use, and proven that this modification is still secure.

%%%--------------------------------------------------------------------------

\section{Acknowledgements}

The Wrestlers Protocol is named after the Wrestlers pub in Cambridge where
Clive Jones and I worked out the initial design.

%%%--------------------------------------------------------------------------

\bibliography{mdw-crypto,cryptography2000,cryptography,rfc,std}

%%%--------------------------------------------------------------------------

\ifshort\def\next{\end{document}}\expandafter\next\fi
\appendix
\section{Proofs}

\subsection{Proof of theorem~\ref{thm:sk}}
\label{sec:sk-proof}

Before we embark on the proof proper, let us settle on some notation.  Let
$P_i$ be a party.  Then we write $x_i$ for $P_i$'s private key and $X_i =
x_i P$ for $P_i$'s public key.  Let $S = (P_i, P_j, s)$ be a session.  We
write $r_S$ for the random value chosen at the start of the session, and
$R_S$, $c_S$, etc.\ are the corresponding derived values in that session.

The proof uses a sequence of games.  For each game~$\G{i}$, let $V_i$ be the
event that some pair of unexposed, matching sessions both complete but output
different keys, and let $W_i$ be the event that the adversary's final output
equals the game's hidden bit~$b^*$.  To save on repetition, let us write
\[ \diff{i}{j} = \max(|\Pr[V_i] - \Pr[V_j]|, |\Pr[W_i] - \Pr[W_j]|). \]
Obviously,
\[ \diff{i}{j} \le \sum_{i\le k<j} \diff{k}{k + 1}. \]

Here's a quick overview of the games we use.
\begin{itemize}
\item $\G0$ is the original SK-security game.
\item In $\G1$, we abort the game unless all parties' public keys are
  distinct.  Since keys are generated at random, parties are unlikely to be
  given the same key by accident.
\item In $\G2$, we change the way sessions respond to challenge messages, by
  using the extractor to fake up answers to most challenges.  Since the
  extractor is good, the adversary is unlikely to notice.
\item In $\G3$, we abort the game if the adversary ever queries $H_K(\cdot)$
  on the Diffie-Hellman secret $r_S r_T P$ shared between two unexposed
  matching sessions.  We show that this is unlikely to happen if the
  Diffie-Hellman problem is hard.
\item In $\G4$, we abort the game if any unexposed session \emph{accepts} a
  response message which wasn't sent by a matching session.
\end{itemize}
Finally, we show that the adversary has no advantage in $\G4$.  The theorem
follows.

For ease of comprehension, we postpone the detailed proofs of some of the
steps until after we conclude the main proof.

Let $A$ be a given adversary which runs in time~$t$, creates at most~$q_S$
sessions, delivers at most~$q_M$ messages, and makes at most~$q_I$ queries to
its $H_I(\cdot, \cdot, \cdot, \cdot)$ oracle and at most~$q_K$ queries to its
$H_K(\cdot)$ oracle.  Let $\G0$ be the original SK-security game of
definition~\ref{def:sk}, played with adversary~$A$.

Game~$\G1$ is the same as game~$\G0$ except, if the initialization function
reports two parties as having the same public key (i.e., we have $X_i = X_j$
for some $0 \le i < j < n$), we stop the game immediately and without
crediting the adversary with a win.  This only happens when the corresponding
private keys are equal, i.e., $x_i = x_j$, and since the initialization
function chooses private keys uniformly at random, this happens with
probability at most $\binom{n}{2}/\#G$.  If this doesn't happen, the game is
identical to $\G0$, so we can apply lemma~\ref{lem:shoup} and see that
\begin{equation}
  \label{eq:sk-g0-g1}
  \diff{0}{1} \le \frac{1}{\#G} \binom{n}{2} = \frac{n (n - 1)}{2 \#G}.
\end{equation}
In game~$\G1$ and onwards, we can assume that public keys for distinct
parties are themselves distinct.  Note that the game now takes at most
$O(q_I)$ times longer to process each message delivered by the adversary.
This is where the $O(q_I q_M)$ term comes from in the theorem statement.

Game~$\G2$ is the same as game~$\G1$, except that we change the way that we
make parties respond to \cookie{challenge} messages $(\cookie{challenge}, R,
c)$.  Specifically, suppose that $S = (P_i, P_j, s)$ is a session.
\begin{itemize}
\item Suppose $T = (P_j, P_i, s)$ is the matching session of $S$.  The game
  proceeds as before if $(R, c) = (R_T, c_T)$ is the challenge issued by $T$.
\item Otherwise, we run the extractor $T_\Wident$ on the adversary's history
  so far of oracle queries $H_I(X_i, s, R, \cdot)$ to determine a pair $(r,
  Y)$.  If $r = \bot$ then we discard the message.  Otherwise, we add $R$ to
  the list~$C$, and return a fake response to the adversary by computing
  $(K_0, K_1) = H_K(r R_S)$ and handing the adversary $(\cookie{response},
  R_S, E_{K_0}(Y))$.
\end{itemize}
The following lemma shows how this affects the adversary's probabilities of
winning.
\begin{lemma}
  \label{lem:sk-g1-g2}
  \begin{equation}
    \label{eq:sk-g1-g2}
    \diff{1}{2} \le \frac{q_M}{2^{\ell_I}}.
  \end{equation}
\end{lemma}

3165Let us say that a session $S = (P_i, P_j, s)$ is \emph{ripe} if
3166\begin{itemize}
3167\item there is a matching session $T = (P_j, P_i, s)$, and
3168\item $S$ is unexposed.
3169\end{itemize}
3170Suppose that $S$ is a ripe session, and that it has a matching session~$T$:
3171let $Z_S = Z_T = r_S r_T P$.
3172
Game~$\G3$ is the same as $\G2$, except that the game is immediately aborted
if ever the adversary queries its random oracle $H_K(\cdot)$ at a value $Z_S$
for any ripe session~$S$.  The following lemma shows how this affects the
adversary's probabilities of winning.
\begin{lemma}
  \label{lem:sk-g2-g3}
  For some $t'$ within the bounds given in the theorem statement, we have
  \begin{equation}
    \label{eq:sk-g2-g3}
    \diff{2}{3} \le q_S \InSec{mcdh}(G; t', q_K).
  \end{equation}
\end{lemma}

3185
3186Game~$\G4$ is the same as $\G3$ except that the game is immediately aborted
3187if ever the adversary sends a response message to a ripe session~$S$ which
3188wasn't output by its matching session as a response to $S$'s challenge, with
3189the result that $S$ completes.
3190
3191Let's make this more precise. Let $U$ and $V$ be a pair of matching
3192sessions. Let $C_U = (\cookie{challenge}, R_U, c_U$ be the challenge message
3193sent by $U$. Let $M_T$ be the set of messages which $T$ has sent upon
3194delivery of $C_U$. Then, in $\G4$, we abort the game if, for any pair $S$
3195and~$T$ of matching, unexposed sessions, $S$ has completed as a result of
3196being sent a message $\mu \notin M_T$. We have the following lemma.
\begin{lemma}
  \label{lem:sk-g3-g4}
  For a $t'$ within the stated bounds, we have
  \begin{equation}
    \label{eq:sk-g3-g4}
    \diff{3}{4} \le q_S \bigl( \InSec{ind-cca}(\E; t', q_M, q_M) +
      n \cdot \InSec{mcdh}(G; t', q_M + q_I) \bigr).
  \end{equation}
\end{lemma}

Finally, let us consider the state we're in with $\G4$.
\begin{itemize}
\item No ripe session completes except as a result of the adversary
  faithfully delivering messages between it and its matching session.
\item The adversary never queries $Z_S$ for any ripe session~$S$.  If we set
  $K_S = (K_{S, 0}, K_{S, 1}) = H_K(Z_S)$, then $K_{S, 1}$ is the key output
  by $S$ when it completes.
\item If $S$ and $T$ are matching ripe sessions, then $K_S = K_T$, since $Z_S
  = r_S R_T = r_T R_S = Z_T$.
\item For any ripe session~$S$, $K_{S, 1}$ is uniformly distributed in
  $\Bin^\kappa$ and independent of the adversary's view.
\item If $S = (P_i, P_j, s)$ and $T = (P_j, P_i, s)$ are matching ripe
  sessions, then $Z_S$ depends only on $r_S$ and $r_T$.  Hence, once $S$
  and~$T$ complete, and erase their states, $Z_S$ is independent of
  everything except the messages sent between the two sessions.  In
  particular, $Z_S$ is independent of the long-term secrets $x_i$ and~$x_j$,
  so if either player is later corrupted, the key $K_{S, 1}$ remains
  independent of the adversary's view.
\item Hence, the keys output by unexposed sessions are indistinguishable from
  freshly-generated random strings, and remain so indefinitely.
\end{itemize}
We conclude that, for any adversary $A$,
\begin{equation}
  \label{eq:sk-g4}
  \Pr[V_4] = 0 \qquad \text{and} \qquad \Pr[W_4] = \frac{1}{2}.
\end{equation}
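To spell out the second equality (a routine calculation, included for
completeness): write $b^*$ for the game's hidden bit and $b$ for the
adversary's guess.  In $\G4$ the keys output by unexposed sessions are
uniform and independent of the adversary's view, so $b$ is independent of
$b^*$, and
\[ \Pr[W_4] = \Pr[b = b^*]
     = \Pr[b = 0] \Pr[b^* = 0] + \Pr[b = 1] \Pr[b^* = 1]
     = \frac{1}{2}. \]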
Putting equations~\ref{eq:sk-g0-g1}--\ref{eq:sk-g4} together, we find
\begingroup \splitright=4em minus 4em
\begin{spliteqn}
  \Adv{sk}{\Wident^{G, \E}}(A) \le
    2 q_S \bigl(\InSec{ind-cca}(\E; t', q_M, q_M) + {} \\
      \InSec{mcdh}(G; t', q_K) +
      n \,\InSec{mcdh}(G; t', q_M + q_I) \bigr) + {}
    \frac{n (n - 1)}{\#G} +
    \frac{2 q_M}{2^{\ell_I}}.
\end{spliteqn} \endgroup
The theorem follows, since $A$ was chosen arbitrarily.


\begin{proof}[Proof of lemma~\ref{lem:sk-g1-g2}]
  The two games $\G1$ and~$\G2$ differ only in whether they accept or reject
  particular challenge messages $(\cookie{challenge}, R, c)$.

  We claim firstly that no message is \emph{accepted} by $\G2$ which would
  have been rejected by $\G1$.  To prove the claim, it is sufficient to note
  that the extractor's output, if not $\bot$, is always correct, and hence if
  $\G2$ accepts a message then $\G1$ would have done so too.

  Since $\G2$ also behaves identically when the adversary submits to~$S$ the
  challenge from the matching session~$T$, we have nothing to prove in this
  case.  Let $F$ be the event that the adversary submits a message
  $(\cookie{challenge}, R, c)$ to a session~$S$ which $S$ would have accepted
  in $\G1$ but which is rejected by the new rule in~$\G2$.  By
  lemma~\ref{lem:shoup} we have $\diff{1}{2} \le \Pr[F]$.  To prove the
  current lemma, therefore, we must show that $\Pr[F] \le q_M/2^{\ell_I}$.

  Rather than consider individual challenge messages, we consider
  \emph{classes} of messages.  We shall refer to a quadruple~$\Cid = (i, j,
  s, R)$ as a \emph{class-id}, and define some useful functions:
  \begin{itemize}
  \item the class's \emph{session} $\Csession(\Cid) = (P_i, P_j, s)$;
  \item the class's \emph{index} $\Cindex(\Cid)$ is $r \in I$ where $R = r
    P$, which is well-defined by lemma~\ref{lem:unique-dl};
  \item the class's \emph{query} $\Cquery(\Cid) = (X_j, s, R, x_i R)$;
  \item the class's \emph{hash} $\Chash(\Cid) = H_I(\Cquery(\Cid)) = H_I(X_j,
    s, R, x_i R)$;
  \item the class's \emph{check-value} $\Ccheck(\Cid) = \Chash(\Cid) \xor
    \Cindex(\Cid)$;
  \item the class's \emph{check-set} $\Ccheckset(\Cid)$ is the set of
    check-values $c$ such that a message $(\cookie{challenge}, R, c)$ was
    sent to session $S = (P_i, P_j, s)$; and
  \item the class's \emph{count} $\Ccount(\Cid) = |\Ccheckset(\Cid)|$.
  \end{itemize}

  Consider any class-id~$\Cid = (i, j, s, R)$.  A natural question which
  arises is: which participants have issued $\Cid$'s query, i.e., queried
  $H_I$ at $\Cquery(\Cid)$?

  We can characterise the $H_I(\cdot, \cdot, \cdot, \cdot)$ queries of a
  session $U = (P_{i'}, P_{j'}, s')$ as follows:
  \begin{itemize}
  \item computing the check-value for the challenge $R_U$ by querying
    $H_I(X_{i'}, s', R_U, r_U X_{j'})$, and
  \item checking an incoming challenge~$R'$ by querying $H_I(X_{j'}, s', R',
    x_{i'} R')$.
  \end{itemize}
  The class~$\Cid$'s query $\Cquery(\Cid)$ is $U$'s check-value query if
  \[ (j, i, s, R) = (i', j', s', R_U) \]
  i.e., $U$ is the matching session of $\Csession(\Cid)$, and moreover $R =
  R_U$ is the challenge value issued by $U$.  For any $c \in
  \Ccheckset(\Cid)$, if $c = \Ccheck(\Cid)$ then $(\cookie{challenge}, R, c)$
  is precisely the challenge message issued by~$U$ to $\Csession(\Cid)$; the
  rules for handling this message didn't change.  However, if $c \ne
  \Ccheck(\Cid)$ then the message would have been rejected in $\G1$, and we
  have already shown that $\G2$ continues to reject all messages rejected by
  $\G1$.

  Let us say that a class-id~$\Cid = (i, j, s, R)$ is \emph{bad} if
  \begin{enumerate}
  \item the value $R$ is not the challenge issued by $\Csession(\Cid)$'s
    matching session, and
  \item the adversary has not issued $\Cid$'s query $\Cquery(\Cid)$,
    \emph{but}
  \item $\Ccheck(\Cid) \in \Ccheckset(\Cid)$, so one of the check-values
    submitted to $S$ was actually correct.
  \end{enumerate}
  We claim that our extractor will work perfectly unless some class-id is
  bad.  Certainly, if $R$ was issued by the matching session, there is
  nothing to prove; if the adversary has issued the relevant query then the
  extractor will recover $\Cindex(\Cid)$ just fine; and if $\Ccheck(\Cid)
  \notin \Ccheckset(\Cid)$ then all messages in the class would have been
  rejected by $\G1$ anyway.

  Let $B(\Cid)$ be the event that the class~$\Cid$ is bad.  We claim that
  \[ \Pr[B(\Cid)] \le \frac{\Ccount(\Cid)}{2^{\ell_I}}. \]
  The present lemma follows, since each message delivered by the adversary
  contributes a check-value to at most one class's check-set, so $\sum_\Cid
  \Ccount(\Cid) \le q_M$, and hence
  \[ \diff{1}{2}
       \le \Pr[F]
       \le \sum_\Cid \Pr[B(\Cid)]
       \le \sum_\Cid \frac{\Ccount(\Cid)}{2^{\ell_I}}
       = \frac{1}{2^{\ell_I}} \sum_\Cid \Ccount(\Cid)
       \le \frac{q_M}{2^{\ell_I}}
  \]
  as required.

  Now observe that, in $\G2$, sessions don't actually check incoming
  challenges in this way any more -- instead we run the extractor.  So, to
  prove the claim, we consider a class~$\Cid$ where properties~1 and~2 above
  hold.  The correct hash $\Chash(\Cid)$ is then independent of the rest of
  the game, so the probability that $\Ccheck(\Cid) \in \Ccheckset(\Cid)$ is
  precisely $\Ccount(\Cid)/2^{\ell_I}$ as required.

  This completes the proof of the lemma.
\end{proof}

\begin{proof}[Proof of lemma~\ref{lem:sk-g2-g3}]
  Let $F$ be the event that the adversary makes a query $H_K(Z_S)$ for some
  ripe session~$S$.  Since $\G3$ proceeds exactly as $\G2$ did unless $F$
  occurs, we apply lemma~\ref{lem:shoup}, which tells us that $\diff{2}{3}
  \le \Pr[F]$.  We must therefore bound this probability.

  To do this, we consider a new game~$\G3'$, which is the same as $\G3$,
  except that, at the start of the game, we choose a random number $k \inr
  \Nupto{q_S}$.  For $0 \le i < q_S$, let $S_i$ be the $i$th session created
  by the adversary.  We define $F'$ to be the event that the adversary
  queries $H_K(Z_{S_k})$ when $S_k$ is ripe.

  The lemma now follows from these two claims.

  \begin{claim}
    $\Pr[F] \le q_S \Pr[F']$.
  \end{claim}
  To see this, for any session~$S$, let $F_S$ be the event that the adversary
  queries~$H_K(Z_S)$ when $S$ is ripe.  Then
  \[ \Pr[F] \le \sum_{0\le i<q_S} \Pr[F_{S_i}]. \]
  Hence, since $k$ is chosen independently of the adversary's behaviour,
  \[ \Pr[F'] = \Pr[F_{S_k}] = \sum_{0\le i<q_S} \Pr[F_{S_i}] \Pr[k = i]
       = \frac{1}{q_S} \sum_{0\le i<q_S} \Pr[F_{S_i}]
       \ge \frac{\Pr[F]}{q_S}
  \]
  proving the claim.

  \begin{claim}
    For some $t' = t + O(n) + O(q_S q_M) + O(q_I) + O(q_K)$, we have
    $\Pr[F'] \le \InSec{mcdh}(G; t', q_K).$
  \end{claim}
  To prove this claim, we construct an adversary~$B$ which solves the MCDH
  problem in~$G$.  The adversary works as follows.
  \begin{enumerate}
  \item It is given a pair $(R^*, S^*) = (r^* P, s^* P)$ of group elements;
    its objective is to make a verification-oracle query $V(Z^*)$ where $Z^*
    = r^* s^* P$.
  \item It sets up a simulation of the game~$\G3'$, by running the
    $\id{init}$ function, and simulating all of the parties.  In particular,
    it chooses a random~$k \in \Nupto{q_S}$.
  \item It sets up accurate simulations of the random oracles $H_K(\cdot)$
    and $H_I(\cdot, \cdot, \cdot, \cdot)$, which choose random outputs for
    new, fresh inputs.  However, whenever $A$ queries $H_K(\cdot)$ on a group
    element $Z$, $B$ also queries $V(Z)$.
  \item It runs $A$ in its simulated game.  It responds to all of $A$'s
    instructions faithfully, until the $k$th session-creation.
  \item When creating the $k$th session $S = S_k = (P_i, P_j, s)$, $B$ has
    party~$P_i$ choose $R^*$ as its challenge, rather than choosing $r_S$ and
    setting $R_S = r_S P$.  Because it simulates all the parties, $B$ can
    compute $Y_S = x_j R^*$, which is still correct.
  \item If $A$ requests the creation of a matching session $T = (P_j, P_i,
    s)$ then $B$ has party~$P_j$ choose $S^*$ as its challenge.  Again, $B$
    computes $Y_T = x_i S^*$.
  \item If $A$ ever corrupts the parties $P_i$ or $P_j$, or reveals the
    session state of $S$ or $T$ then $B$ stops the simulation abruptly and
    halts.
  \end{enumerate}
  If $F'$ occurs then $A$ queries $H_K(\cdot)$ at $Z_{S_k} = r^* s^* P =
  Z^*$, whereupon $B$ queries $V(Z^*)$ and wins its game.  Adversary $B$'s
  running time is within the bounds required of $t'$, and $B$ makes at most
  $q_K$ queries to $V(\cdot)$; we therefore have
  \[ \Pr[F'] \le \Succ{mcdh}{G}(B) \le \InSec{mcdh}(G; t', q_K) \]
  as required.
\end{proof}

\begin{proof}[Proof of lemma~\ref{lem:sk-g3-g4}]
  Let $F_4$ be the event under which we abort the game~$\G4$.  Clearly, if
  $F_4$ doesn't occur, games $\G3$ and $\G4$ proceed identically, so we can
  apply lemma~\ref{lem:shoup} to see that $\diff{3}{4} \le \Pr[F_4]$.
  Bounding $\Pr[F_4]$, however, is somewhat complicated.  We use a further
  sequence of games.

  Firstly, let $\G5$ be like $\G4$ with the exception that we choose, at
  random, an integer~$k \inr \Nupto{q_S}$.  As we did in the proof of
  lemma~\ref{lem:sk-g2-g3}, let $S_i$ be the $i$th session created by the
  adversary.  For each session~$S_i$, let $T_i$ be its matching session, if
  there is one.  We define $F_5$ to be the event that
  \begin{itemize}
  \item $S_k$ completes immediately following delivery of a message $\mu
    \notin M_{T_k}$, and
  \item $S_k$ was ripe at this point.
  \end{itemize}
  For games~$\G{i}$, for $i > 5$, we define the event $F_i$ to be the event
  corresponding to $F_5$ in $\G{i}$.  Note that if $S_k$ \emph{is} sent a
  message in $M_{T_k}$ then $S_k$ immediately completes.

  \begin{claim}
    $\Pr[F_4] \le q_S \Pr[F_5]$.
  \end{claim}
  This claim is proven exactly as we did for claim~1 of
  lemma~\ref{lem:sk-g2-g3}.

  Let~$\G6$ be the same as $\G5$ except that we change the encrypted
  responses of session~$S_k$ and its matching session~$T_k$.  Let $K^* =
  (K_0^*, K_1^*) = H_K(Z_S)$.  Then, rather than sending $(\cookie{response},
  R_S, E_{K_0^*}(Y_T))$, session~$S$ sends $(\cookie{response}, R_S,
  E_{K_0^*}(1^{\ell_G}))$.
  \begin{claim}
    $|\Pr[F_6] - \Pr[F_5]| \le \InSec{ind-cca}(\E; t', q_M, q_M).$
  \end{claim}
  To prove this claim, we construct an adversary~$B$ which attacks the
  IND-CCA security of our encryption scheme $\E$.  The adversary~$B$ works as
  follows.
  \begin{enumerate}
  \item It is given no input, but a pair of oracles $E(\cdot, \cdot)$ and
    $D(\cdot)$; the former encrypts either the left or right input, according
    to a hidden bit, and the latter decrypts ciphertexts other than those
    returned by $E(\cdot, \cdot)$.  Its goal is to guess the hidden bit.
  \item It sets up a simulation of the game~$\G5$, by running the $\id{init}$
    function, and simulating all of the parties.  In particular, it chooses a
    random $k \in \Nupto{q_S}$.
  \item It sets up accurate simulations of the random oracles $H_K(\cdot)$
    and $H_I(\cdot, \cdot, \cdot, \cdot)$.
  \item It runs $A$ in its simulated game.  It responds to all of $A$'s
    instructions faithfully, except for the matching sessions $S_k$ and
    $T_k$.  Let $S = S_k = (P_i, P_j, s)$, and $T = T_k = (P_j, P_i, s)$.
  \item Suppose $T$ is sent the message $C_S = (\cookie{challenge}, R_S,
    c_S)$.  Rather than computing $K^* = H_K(r_T R_S)$ and performing the
    encryption properly, $B$ queries its left-or-right encryption oracle
    $E(\cdot, \cdot)$ on the pair $(1^{\ell_G}, x_j R_S)$, and sends the
    resulting ciphertext~$\chi$ back to~$S$ as part of a message
    $(\cookie{response}, R_T, \chi)$.  The set $M_T$ is precisely the set of
    messages constructed in this fashion.  (Recall that challenge messages
    other than $C_S$ aren't actually delivered to $T$, since we simulate the
    responses using the extractor, as of $\G2$.)
  \item Suppose $S$ is sent a message $M = (\cookie{response}, R_T, \chi) \in
    M_T$.  We immediately stop the simulation, and $B$ returns $0$.
  \item Suppose, instead, that $S$ is sent some message $M' =
    (\cookie{response}, R, \chi) \notin M_T$.  There are two cases to
    consider.  If $R = R_T$ then we must have $\chi$ distinct from the
    ciphertexts returned by the $E(\cdot, \cdot)$ oracle, so we can invoke
    the decryption oracle $D(\cdot)$ on $\chi$ to obtain a response $Y$.
    Alternatively, if $R \ne R_T$, we can compute the key $K = (K_0, K_1) =
    H_K(r_S R)$, and recover $Y = D_{K_0}(\chi)$.  In either case, if $Y =
    r_S X_j$ then $S$ would complete at this point: $B$ stops the simulation
    and returns $1$.
  \item If $A$ exposes $S$ (by corrupting $P_i$ or~$P_j$, or revealing $S$ or
    $T$) then we stop the simulation and $B$ returns $0$.
  \item Finally, if the game stops, either because $A$ halts, or because of
    one of the special rules introduced in earlier games, $B$ returns $0$.
  \end{enumerate}
  It is clear that $B$'s oracle queries are acceptable, since $|x_j R_S| =
  \ell_G$ by definition, and $B$ never queries $D(\cdot)$ on a ciphertext
  returned by its encryption oracle.  By the rules of~$\G3$, we know that the
  game stops immediately if $A$ ever queries $Z_S$, so the key $K^*$ is
  independent of everything in $A$'s view except the ciphertexts $\chi$
  output by $S$ and $T$.  Therefore, if the hidden bit of the IND-CCA game is
  $1$, $B$ accurately simulates $\G5$, whereas if the bit is $0$ then $B$
  accurately simulates $\G6$.  We issue no more than $q_M$ encryption or
  decryption queries.  Finally, $B$'s running time is within the bounds
  allowed for $t'$.  Therefore,
  \[ \Adv{ind-cca}{\E}(B) = \Pr[F_5] - \Pr[F_6]
       \le \InSec{ind-cca}(\E; t', q_M, q_M). \]
  We construct the adversary~$\bar{B}$ which is the same as $B$ above, except
  that $\bar{B}$ returns $0$ whenever $B$ returns~$1$, and \emph{vice versa}.
  Clearly
  \[ \Adv{ind-cca}{\E}(\bar{B})
       = (1 - \Pr[F_5]) - (1 - \Pr[F_6])
       = \Pr[F_6] - \Pr[F_5]
       \le \InSec{ind-cca}(\E; t', q_M, q_M).
  \]
  This proves the claim.

  Let $\G7$ be the same as $\G6$, except that at the start of the game we
  choose a random $m \in \Nupto{n}$, and when the adversary creates the
  session $S = S_k = (P_i, P_j, s)$, we abort the game unless $j = m$.
  Clearly we have $\Pr[F_6] = n \Pr[F_7]$.
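  In a little more detail: $m$ is uniform on $\Nupto{n}$ and independent of
  everything else in the game, so
  \[ \Pr[F_7] = \Pr[F_6 \wedge j = m] = \Pr[F_6] \Pr[j = m]
       = \frac{\Pr[F_6]}{n}. \]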

  Finally, we can explicitly bound $\Pr[F_7]$.  In $\G6$ (and hence also in
  $\G7$), the adversary's view is independent of the correct response $Y_S =
  r_S X_j = x_j R_S$ to $S$'s challenge.  Therefore, if $A$ manages to send
  any message $\mu \notin M_T$ which causes $S$ to complete, then it has
  impersonated~$P_j$.
  \begin{claim}
    $\Pr[F_7] \le \InSec{mcdh}(G; t', q_M + q_I)$.
  \end{claim}
  The present lemma follows from this and the previous claims.

  To prove the claim formally, we construct an adversary~$B'$, which behaves
  as follows.
  \begin{enumerate}
  \item It is given as input a public key $X^*$ and a single challenge $(R^*,
    c^*)$, a random oracle $H^*_I(\cdot, \cdot)$, and an oracle $V(\cdot,
    \cdot)$, which verifies responses $(R, Y)$.  Its goal is to invoke
    $V(\cdot, \cdot)$ with a correct response to the challenge.
  \item It chooses a random $k \in \Nupto{q_S}$ and $m \in \Nupto{n}$.  It
    sets up a simulation of the game~$\G7$, by running the $\id{init}$
    function, and simulating all of the parties, except that it gives party
    $P_m$ the public key $X^*$.  This makes no difference, since $P_m$
    doesn't actually need to give any `honest' responses because of the
    change we made in $\G6$.
  \item It sets up accurate simulations of the random oracles $H_K(\cdot)$
    and $H_I(\cdot, \cdot, \cdot, \cdot)$, with one exception -- see below.
  \item It runs $A$ in its simulated game.  It responds to all of $A$'s
    instructions faithfully, except for the session $S_k$.  Let $S = S_k =
    (P_i, P_j, s)$, and let $T = T_k = (P_j, P_i, s)$ be its matching
    session.
  \item When session~$S$ is created, $B'$ checks that $j = m$, and if not
    stops the simulation and halts.  Otherwise, $B'$ uses its given
    challenge: session~$S$ sends $C_S = (\cookie{challenge}, R^*, c^*)$ as
    its challenge to~$T$.
  \item When $A$ makes a query $H_I(X^*, s, R, Y)$, $B'$ answers it by
    querying its \emph{own} random oracle $H^*_I(R, Y)$.
  \item When $S$ receives a message $(\cookie{response}, R, \chi)$, we
    compute $(K_0, K_1) = H_K(r_S R)$, and $Y = D_{K_0}(\chi)$.  If $Y \ne
    \bot$ then $B'$ calls $V(R, Y)$.
  \item If $A$ reveals $S$ or corrupts $P_i$ or $P_j$ then $B'$ stops the
    simulation immediately and halts.
  \end{enumerate}
  The running time of $B'$ is within the bounds required of $t'$; it makes at
  most $q_I$ random-oracle and at most $q_M$ verification queries.  Clearly
  $B'$ succeeds whenever $F_7$ occurs.  The claim follows from
  theorem~\ref{thm:wident-sound}.
\end{proof}


\subsection{Proof of theorem~\ref{thm:sk2}}
\label{sec:sk2-proof}

The proof is almost identical to the proof of theorem~\ref{thm:sk}, in
appendix~\ref{sec:sk-proof}.  Unfortunately a black-box reduction doesn't
seem possible.

We use the games and notation of section~\ref{sec:sk-proof}.

The change to the check-value calculation doesn't affect key-generation at
all, so the transition to $\G1$ goes through as before.

The transition from $\G1$ to $\G2$ -- answering challenges using the
extractor -- needs a little care.  Let $S = (P_i, P_j, s)$ be a session, and
consider an incoming message $(\cookie{challenge}, R, c)$.
\begin{itemize}
\item If $T = (P_j, P_i, s)$ is the matching session to $S$, and $R = R_T$ is
  the public challenge value of $T$, and $c = r_T \xor H_I(R_S, X_j, s, R_T,
  r_T X_i)$ is the check-value output by $T$ when it received
  $(\cookie{pre-challenge}, R_S)$ as input, then $S$ replies as it did in
  $\G1$.
\item If the challenge message is any other message, then we use the
  extractor.
\end{itemize}
As in lemma~\ref{lem:sk-g1-g2}, we examine which sessions could have queried
$H_I(R_S, X_j, s, R, x_i R)$, and for the same reasons conclude that only the
matching session would have done this, and only in response to the
pre-challenge $R_S$.  It follows that $\diff{1}{2} \le q_M/2^{\ell_I}$ as
before.

The remaining game steps go through unchanged.  In particular, we conclude
that a ripe session will only complete if the adversary has transmitted
messages from its matching session correctly, and the session key is
independent of the adversary's view.  The theorem follows.



\subsection{Sketch proof of single-key protocol for secure channels}
\label{sec:sc-proof}

We want to show that the Wrestlers Key-Exchange protocol, followed by use of
the encryption scheme $\E$, with the \emph{same} key $K = K_0$, still
provides secure channels.

\subsubsection{Secure channels definition}
We (very briefly!) recall the \cite{Canetti:2001:AKE} definition of a secure
channels protocol.  We again play a game with the adversary.  At the
beginning, we choose a bit $b^* \inr \{0, 1\}$ at random.  We allow the
adversary the ability to establish \emph{secure channels} sessions within the
parties.  Furthermore, for any running session $S = (P_i, P_j, s)$, we allow
the adversary to request $S$ to send a message~$\mu$ through its secure
channel.  Finally, the adversary is allowed to choose one ripe
\emph{challenge} session, and ask for it to send one of a \emph{pair} of
messages $(\mu_0, \mu_1)$, subject to the restriction that $|\mu_0| =
|\mu_1|$; the session sends message $\mu_{b^*}$.  The adversary may not
expose the challenge session.

The adversary wins if (a)~it can guess the bit $b^*$, or (b)~it can cause a
ripe session $S$ (i.e., an unexposed, running session with a matching
session~$T$) to output a message other than one that it requested that $T$
send.

\subsubsection{Protocol definition}
The protocol begins with Wrestlers key-exchange.  The encryption in the
key-exchange protocol is performed as $E_K(\cookie{kx}, \cdot)$; encryption
for secure channels is performed as $E_K(\cookie{sc}, i, o, \cdot)$, where
$i$ is a sequence number to prevent replays and $o \in \{S, T\}$ identifies
the sender.

\subsubsection{Proof sketch}
We use the games and notation of appendix~\ref{sec:sk-proof}.

The proof goes through as far as the step between $\G5$ and $\G6$ in the
proof of lemma~\ref{lem:sk-g3-g4}.  Here we make the obvious adjustments to
our adversary against the IND-CCA security of $\E$.  (Our adversary will need
to use the left-or-right oracle for messages sent using the secure channel
built on $K^*$.  That's OK.)

In $\G4$, we know that ripe sessions agree on the correct key, and the
adversary never queries the random oracle at $Z_S$, so the key is independent
of the adversary's view.

We define a new game $\G8$, which is like $\G4$, except that we stop the game
if the adversary ever forges a message sent over the secure channel.  That
is, if a ripe session $S$ ever announces receipt of a message not sent (at
the adversary's request) by its matching session $T$.  Let $F_8$ be the event
that a forgery occurs.  We apply lemma~\ref{lem:shoup}, which tells us that
$\diff{4}{8} \le \Pr[F_8]$.  To bound $\Pr[F_8]$, we isolate a session at
random (as in lemmata \ref{lem:sk-g2-g3} and~\ref{lem:sk-g3-g4}), which
tells us that
\begin{equation}
  \label{eq:sc-g4-g8}
  \diff{4}{8} \le q_S \cdot \InSec{int-ptxt}(\E; t', q_M, q_M).
\end{equation}
Finally, we can bound the adversary's advantage at guessing the hidden bit
$b^*$.  We isolate (we hope) the challenge session $S$ by choosing a target
session at random, as before.  Let $K^* = H_K(Z_S)$ be the key agreed by the
session (if it becomes ripe).  We define an adversary $B$ against the
IND-CCA security of $\E$.  The adversary $B$ simulates the game.  If the
adversary exposes the target session, or doesn't choose it as the challenge
session, $B$ fails (and exits 0); otherwise it uses the left-or-right
encryption oracle to encrypt both of the adversary's message choices, and
outputs the adversary's choice.  Let $b$ be the adversary's output.  Then
\begin{eqnarray*}[rl]
  \Pr[b = b^*]
  & = \Pr[b = b^* \wedge b^* = 1] + \Pr[b = b^* \wedge b^* = 0] \\
  & = \frac{1}{2}\bigl( \Pr[b = b^* \mid b^* = 1] +
      \Pr[b = b^* \mid b^* = 0] \bigr) \\
  & = \frac{1}{2}\bigl( \Pr[b = b^* \mid b^* = 1] +
      (1 - \Pr[b \ne b^* \mid b^* = 0]) \bigr) \\
  & = \frac{1}{2}\bigl( \Pr[b = 1 \mid b^* = 1] -
      \Pr[b = 1 \mid b^* = 0] + 1 \bigr) \\
  & = \frac{1}{2}\bigl(1 + q_S\,\Adv{ind-cca}{\E}(B) \bigr) \\
  & \le \frac{1}{2} \bigl( 1 + q_S\,\InSec{ind-cca}(\E; t', q_M, q_M) \bigr).
  \eqnumber
\end{eqnarray*}
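The factor of $q_S$ in the penultimate step deserves a remark (this is our
reading of the construction above): $B$ plays the underlying game only when
its random guess at the target session is correct, which happens with
probability $1/q_S$ independently of everything else, and otherwise $B$
outputs $0$ regardless of the hidden bit; hence
\[ \Adv{ind-cca}{\E}(B)
     = \frac{1}{q_S} \bigl( \Pr[b = 1 \mid b^* = 1]
         - \Pr[b = 1 \mid b^* = 0] \bigr). \]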

\end{document}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End: