%%% -*-latex-*-
%%%
%%% The Wrestlers Protocol: secure, deniable key-exchange
%%%
%%% (c) 2006 Mark Wooding
%%%

\makeatletter
\def\@doit#1{#1}
\ifx\iffancystyle\xxundefined\expandafter\@doit\else\expandafter\@gobble\fi
{\newif\iffancystyle\fancystyletrue}
\ifx\ifshort\xxundefined\expandafter\@doit\else\expandafter\@gobble\fi
{\newif\ifshort\shortfalse}
\iffancystyle\expandafter\@gobble\else\expandafter\@doit\fi{\newif\ifpdfing}
\makeatother

\typeout{Options:}
\typeout{  Fancy style: \iffancystyle ON\else OFF\fi}
\typeout{  Short version: \ifshort ON\else OFF\fi}

\errorcontextlines=\maxdimen
\showboxdepth=\maxdimen
\showboxbreadth=\maxdimen

\iffancystyle
  \documentclass[indentpar]{strayman}
  \usepackage[T1]{fontenc}
  \usepackage[palatino, helvetica, courier, maths=cmr]{mdwfonts}
  \usepackage[within = subsection, mdwmargin]{mdwthm}
  \usepackage{mdwlist}
  \usepackage{sverb}
  \ifpdfing\else
    \PassOptionsToPackage{dvips}{xy}
  \fi
\else
  \PassOptionsToClass{runningheads}{llncs}
  \documentclass{llncs}
\fi

\PassOptionsToPackage{show}{slowbox}
%\PassOptionsToPackage{hide}{slowbox}
\usepackage{mdwtab, mdwmath, crypto}
\usepackage{slowbox}
\usepackage{amssymb, amstext}
\usepackage{url, multicol}
\usepackage{tabularx}
\DeclareUrlCommand\email{\urlstyle{tt}}
\ifslowboxshow
  \usepackage[all]{xy}
  \turnradius{4pt}
\fi
\usepackage{mathenv}

\newcommand{\Nupto}[1]{\{0, 1, \ldots, #1 - 1\}}
\iffancystyle
  \let\next\title
\else
  \def\next[#1]{\titlerunning{#1}\title}
\fi
\next
  [The Wrestlers Protocol]
  {The Wrestlers Protocol%
    \ifshort\thanks{This is an extended abstract; the full version
      \cite{wooding-2006:wrestlers} is available from
      \texttt{http://eprint.iacr.org/2006/386/}.}\fi \\
    A simple, practical, secure, deniable protocol for key-exchange}
\iffancystyle
  \author{Mark Wooding \\ \email{mdw@distorted.org.uk}}
\else
  \author{Mark Wooding}
  \institute{\email{mdw@distorted.org.uk}}
\fi

\iffancystyle
  \bibliographystyle{mdwalpha}
  \let\epsilon\varepsilon
  \let\emptyset\varnothing
  \let\le\leqslant\let\leq\le
  \let\ge\geqslant\let\geq\ge
  \numberwithin{table}{section}
  \numberwithin{figure}{section}
  \numberwithin{equation}{subsection}
  \let\random\$
\else
  \bibliographystyle{splncs}
  \expandafter\let\csname claim*\endcsname\claim
  \expandafter\let\csname endclaim*\endcsname\endclaim
\fi

\let\Bin\Sigma
\let\emptystring\lambda
\edef\Pr{\expandafter\noexpand\Pr\nolimits}
\newcommand{\bitsto}{\mathbin{..}}
\newcommand{\E}{{\mathcal{E}}}
\newcommand{\M}{{\mathcal{M}}}
\iffancystyle
  \def\description{%
    \basedescript{%
      \let\makelabel\textit%
      \desclabelstyle\multilinelabel%
      \desclabelwidth{1in}%
    }%
  }
\fi
\def\fixme#1{\marginpar{FIXME}[FIXME: #1]}
\def\hex#1{\texttt{#1}_{x}}

\newenvironment{longproof}[1]{%
  \ifshort#1\expandafter\ignore
  \else\proof\fi
}{%
  \ifshort\else\endproof\fi
}

\def\dbox#1{%
  \vtop{%
    \def\\{\unskip\egroup\hbox\bgroup\strut\ignorespaces}%
    \hbox{\strut#1}%
  }%
}

\def\Wident{\Xid{W}{ident}}
\def\Wkx{\Xid{W}{kx}}
\def\Wdkx{\Xid{W}{dkx}}
\def\Func#1#2{\mathcal{F}[#1\to#2]}
\def\diff#1#2{\Delta_{#1, #2}}
\def\Hid{H_{\textit{ID}}}

%% protocol run diagrams
\def\PRaction#1{#1\ar[r]}
\def\PRcreatex#1{\PRaction{\textsf{Create session\space}#1}}
\def\PRcreate#1#2#3{\PRcreatex{(\text{#1},\text{#2},#3)}}
\def\PRreceivex#1{\PRaction{\textsf{Receive\space}#1}}
\def\PRreceive#1#2{\PRreceivex{\msg{#1}{#2}}}
\def\PRsession#1{\relax\mathstrut#1\ar[r]}
\def\msg#1#2{(\cookie{#1},#2)}
\def\PRresult#1{#1}
\def\PRsendx#1{\PRresult{\textsf{Send\space}#1}}
\def\PRsend#1#2{\PRsendx{\msg{#1}{#2}}}
\def\PRcomplete#1{\textsf{Complete:\space}#1}
\def\PRstate#1{\textsf{State:\space}#1}
\def\PRignore{\textsf{(ignored)}}
\def\PRreveal{\textsf{Session-state reveal}\ar[r]}
\def\protocolrun#1{\[\xymatrix @R=0pt @C=2em {#1}\]}

\def\protocol{%
  \unskip\bigskip
  \begin{tabular*}{\linewidth}%
    {@{\qquad}l@{\extracolsep{0ptplus1fil}}r@{\qquad}}}
\def\endprotocol{\end{tabular*}}
\def\send#1#2{\noalign{%
  \centerline{\xy\ar @{#1}|*+{\mathstrut#2}<.5\linewidth, 0pt>\endxy}}}

%% class-ids for proof of extractor lemma
\let\Cid=\Lambda
\let\Csession=S
\let\Cindex=r
\let\Cquery=Q
\let\Chash=H
\let\Ccheck=c
\let\Ccheckset=V
\let\Ccount=\nu

\def\HG#1{\mathbf{H}_{#1}}

\iffancystyle\else
  \let\xsssec\subsubsection\def\subsubsection#1{\xsssec[#1]{#1.}}
\fi

\begin{document}

%%%--------------------------------------------------------------------------

\maketitle
\iffancystyle \thispagestyle{empty} \fi

\begin{abstract}
  We describe and prove (in the random-oracle model) the security of a simple
  but efficient zero-knowledge identification scheme, whose security is based
  on the computational Diffie--Hellman problem.  Unlike other recent
  proposals for efficient identification protocols, we don't need any
  additional assumptions, such as the Knowledge of Exponent assumption.

  From this beginning, we build a simple key-exchange protocol, and prove
  that it achieves `SK-security' -- and hence security in Canetti's Universal
  Composability framework.

  Finally, we show how to turn the simple key-exchange protocol into a
  slightly more complex one which provides a number of valuable `real-life'
  properties, without damaging its security.
\end{abstract}

\iffancystyle
\newpage
\thispagestyle{empty}
\columnsep=2em \columnseprule=0pt
\tableofcontents[\begin{multicols}{2}\raggedright][\end{multicols}]
%%\listoffigures[\begin{multicols}{2}\raggedright][\end{multicols}]
%% \listoftables[\begin{multicols}{2}\raggedright][\end{multicols}]
\newpage
\fi

%%%--------------------------------------------------------------------------
\section{Introduction}

This paper proposes protocols for \emph{identification} and
\emph{authenticated key-exchange}.

An identification protocol allows one party, say Bob, to be sure that he's
really talking to another party, say Alice.  It assumes that Bob has some way
of recognising Alice; for instance, he might know her public key.  Our
protocol requires only two messages -- a challenge and a response -- and has
a number of useful properties.  It is very similar to, though designed
independently of, a recent protocol by Stinson and Wu
\cite{stinson-wu-2006:two-flow-zero-knowledge}; we discuss their protocol and
compare it to ours in \ifshort the full version of this paper. \else
section~\ref{sec:stinson-ident}. \fi

Identification protocols are typically less useful than they sound.  As Shoup
\cite{shoup-1999:formal-model-key-exchange} points out, an identification
protocol provides a `secure ping', by which Bob can know that Alice is
`there', but provides no guarantee that any other communication was sent to
or reached her.  However, there are situations where there is an authentic
channel between two entities -- e.g., a device and a smartcard -- where a
simple identification protocol can still be useful.

An authenticated key-exchange protocol lets Alice and Bob agree on a shared
secret, known to them alone, even if there is an enemy who can read and
intercept all of their messages, and substitute messages of her own.  Once
they have agreed on their shared secret, of course, they can use standard
symmetric cryptography techniques to ensure the privacy and authenticity of
their messages.


\subsection{Desirable properties of our protocols}

Our identification protocol has a number of desirable properties.
\begin{itemize}
\item It is \emph{simple} to understand and implement.  In particular, it
  requires only two messages.
\item It is fairly \emph{efficient}, requiring two scalar multiplications by
  each of the prover and verifier.
\item It is provably \emph{secure} (in the random oracle model), assuming the
  intractability of the computational Diffie--Hellman problem.
\end{itemize}

Our key-exchange protocol also has a number of desirable
properties.
\begin{itemize}
\item It is fairly \emph{simple} to understand and implement, though there
  are a few subtleties.  In particular, it is \emph{symmetrical}.  We have
  implemented a virtual private network system based on this protocol.
\item It is \emph{efficient}, requiring four scalar multiplications by each
  participant.  The communication can be reduced to three messages by
  breaking the protocol's symmetry.
\item It is provably \emph{secure} (again, in the random oracle model),
  assuming the intractability of the computational Diffie--Hellman problem,
  and the security of a symmetric encryption scheme.
\item It provides \emph{perfect forward secrecy}.  That is, even if a user's
  long-term secrets are compromised, past sessions remain secure.
\item It is \emph{deniable}.  It is possible to construct simulated
  transcripts of protocol executions between any number of parties without
  knowing any of their private keys.  The simulated transcripts are (almost)
  indistinguishable from real protocol transcripts.  Hence, a transcript
  does not provide useful evidence that a given party was really involved in
  a given protocol execution.
\end{itemize}

\ifshort\else
\subsection{Asymptotic and concrete security results}

Most security definitions for identification (particularly zero-knowledge)
and key-exchange in the literature are \emph{asymptotic}.  That is, they
consider a family of related protocols, indexed by a \emph{security
parameter}~$k$; they require that any \emph{polynomially-bounded} adversary
has only \emph{negligible} advantage.  A polynomially-bounded adversary is
one whose running time is bounded by some polynomial~$t(k)$.  The security
definition requires that, for any such polynomially-bounded adversary, and
any polynomial $p(k)$, the adversary's advantage is less than $1/p(k)$ for
all sufficiently large values of $k$.

Such asymptotic notions are theoretically interesting, and have obvious
connections to complexity theory.  Unfortunately, such an asymptotic result
tells us nothing at all about the security of a particular instance of a
protocol, or what parameter sizes one needs to choose for a given level of
security against a particular kind of adversary.  Koblitz and Menezes
\cite{koblitz-menezes-2006:another-look-provable-security-ii} (among other
useful observations) give examples of protocols, proven to meet asymptotic
notions of security, whose security proofs guarantee nothing at all for the
kinds of parameters typically used in practice.

Since, above all, we're interested in analysing a practical and implemented
protocol, we follow here the `practice-oriented provable security' approach
largely inspired by Bellare and Rogaway, and exemplified by
\cite{bellare-1994:security-cbc,bellare-1995:xor-macs,%
bellare-rogaway-1995:oaep,bellare-rogaway-1996:exact-security-sigs,%
bellare-1996:hmac,bellare-1997:concrete-symmetric}; see also
\cite{bellare-1999:practice-oriented-provable-security}.
Rather than attempting to say, formally, whether or not a protocol is
`secure', we associate with each protocol an `insecurity function' which
gives an upper bound on the advantage of any adversary attacking the protocol
within given resource bounds.
\fi

\subsection{Formal models for key-exchange}

\ifshort

The first model for studying the \emph{computational} security of
key-exchange protocols (rather than using protocol-analysis logics like that
of \cite{burrows-1989:logic-authn}) was given by Bellare and Rogaway
\cite{bellare-rogaway-1994:entity-authn-key-distrib}; the model has since
been enhanced, both by the original authors and others, in
\cite{bellare-rogaway-1995:session-key-distrib,%
  blake-wilson-1997:key-agreement,%
  blake-wilson-menezes-1998:asymm-key-transport}.
The model defines security in terms of a game: key-exchange protocols are
secure if an adversary can't distinguish the key agreed by a chosen
`challenge session' from a key chosen independently at random.  Other models
for key-exchange have been proposed in
\cite{bellare-1998:modular-key-exchange} and
\cite{shoup-1999:formal-model-key-exchange}; these use a different notion of
security, involving implementation of an ideal functionality.

\else

Many proposed key-exchange protocols have turned out to have subtle security
flaws.  The idea of using formal methods to analyse key-exchange protocols
begins with the logic of Burrows, Abadi and Needham
\cite{burrows-1989:logic-authn}.  Their approach requires a `formalising'
step, in which one expresses in the logic the contents of the message flows,
and the \emph{beliefs} of the participants.

Bellare and Rogaway \cite{bellare-rogaway-1994:entity-authn-key-distrib}
describe a model for studying the computational security of authentication
and key-exchange protocols in a concurrent setting, i.e., where multiple
parties are running several instances of a protocol simultaneously.  They
define a notion of security in this setting, and show that several simple
protocols achieve this notion.  Their original paper dealt with pairs of
parties using symmetric cryptography; they extended their definitions in
\cite{bellare-rogaway-1995:session-key-distrib} to study three-party
protocols involving a trusted key-distribution centre.

Blake-Wilson, Johnson and Menezes \cite{blake-wilson-1997:key-agreement}
applied the model of \cite{bellare-rogaway-1994:entity-authn-key-distrib} to
key-exchange protocols using asymmetric cryptography, and Blake-Wilson and
Menezes \cite{blake-wilson-menezes-1998:asymm-key-transport} applied it to
protocols based on the Diffie--Hellman protocol.

The security notion of \cite{bellare-rogaway-1994:entity-authn-key-distrib}
is based on a \emph{game}, in which an adversary nominates a \emph{challenge
session}, and is given either the key agreed by the participants of the
challenge session, or a random value independently sampled from an
appropriate distribution.  The adversary's advantage -- and hence the
insecurity of the protocol -- is measured by its success probability in
guessing whether the value it was given is really the challenge key.  This
challenge-session notion was also used by the subsequent papers described
above.

Bellare, Canetti and Krawczyk \cite{bellare-1998:modular-key-exchange}
described a pair of models which they called the \textsc{am} (for
`authenticated links model') and \textsc{um} (`unauthenticated links model').
They propose a modular approach to the design of key-exchange protocols,
whereby one first designs a protocol and proves its security in the
\textsc{am}, and then applies an authenticating `compiler' to the protocol
which they prove yields a protocol secure in the realistic \textsc{um}.
Their security notion is new.  They define an `ideal model', in which an
adversary is limited to assigning sessions fresh, random and unknown keys, or
matching up one session with another, so that both have the same key.  They
define a protocol to be secure if, for any adversary~$A$ in the \textsc{am}
or \textsc{um}, there is an ideal adversary~$I$, such that the outputs of $A$
and $I$ are computationally indistinguishable.

In \cite{shoup-1999:formal-model-key-exchange}, Shoup presents a new model
for key-exchange, also based on the idea of simulation.  He analyses the
previous models, particularly
\cite{bellare-rogaway-1994:entity-authn-key-distrib} and
\cite{bellare-1998:modular-key-exchange}, and highlights some of their
inadequacies.

\fi

Canetti and Krawczyk
\cite{canetti-krawczyk-2001:secure-channels-eprint,%
  canetti-krawczyk-2001:secure-channels}
describe a new notion of security in the model of
\cite{bellare-1998:modular-key-exchange}, based on the challenge-session
notion of \cite{bellare-rogaway-1994:entity-authn-key-distrib}.  The security
notion, called `SK-security', seems weaker in various ways than those of
earlier works such as \cite{bellare-rogaway-1994:entity-authn-key-distrib} or
\cite{shoup-1999:formal-model-key-exchange}.  However, the authors show that
their notion suffices for constructing `secure channel' protocols, which they
also define.

\ifshort\else In \cite{canetti-2001:uc-security-eprint,%
  canetti-2001:uc-security},
Canetti describes the `universal composition' framework.  Here, security
notions are simulation-based: one defines security notions by presenting an
`ideal functionality'.  A protocol securely implements a particular
functionality if, for any adversary interacting with parties who use the
protocol, there is an adversary which interacts with parties using the ideal
functionality such that no `environment' can distinguish the two.  The
environment is allowed to interact freely with the adversary throughout,
differentiating this approach from that of
\cite{bellare-1998:modular-key-exchange} and
\cite{shoup-1999:formal-model-key-exchange}, where the distinguisher was
given only transcripts of the adversary's interaction with the parties.  With
security defined in this way, it's possible to prove a `universal composition
theorem': one can construct a protocol, based upon various ideal
functionalities, and then `plug in' secure implementations of the ideal
functionalities and appeal to the theorem to prove the security of the entire
protocol.  The UC framework gives rise to very strong notions of security,
due to the interactive nature of the `environment' distinguisher. \fi

Canetti and Krawczyk \cite{canetti-krawczyk-2002:uc-key-exchange} show that
the SK-security notion of \cite{canetti-krawczyk-2001:secure-channels} is
\emph{equivalent} to a `relaxed' notion of key-exchange security in the UC
framework\ifshort\space of \cite{canetti-2001:uc-security}\fi, and suffices
for the construction of UC secure channels.

The result of \cite{canetti-krawczyk-2002:uc-key-exchange} gives us
confidence that SK-security is the `right' notion of security for
key-exchange protocols.  Accordingly, SK-security is the standard against
which we analyse our key-exchange protocol.


\subsection{Outline of the paper}

The remaining sections of this paper are as follows.
\begin{itemize}
\item Section \ref{sec:prelim} provides the essential groundwork for the rest
  of the paper.  It introduces important notation, and describes security
  notions and intractability assumptions.
\item Section \ref{sec:zk-ident} describes our zero-knowledge identification
  protocol and proves its security.
\item Section \ref{sec:kx} describes the simple version of our key-exchange
  protocol, and proves its security and deniability.  It also describes some
  minor modifications which bring practical benefits without damaging
  security.
\item Finally, section \ref{sec:conc} presents our conclusions.
\end{itemize}

\ifshort
The full version of this paper describes how to make our protocols
identity-based via bilinear pairings, using the techniques introduced in
\cite{boneh-franklin-2003:ibe-weil-pairing}.  It also contains proofs of the
various theorems stated here.
\fi

%%%--------------------------------------------------------------------------
\section{Preliminaries}
\label{sec:prelim}

\ifshort
\subsection{Basics}
\let\prelimsec\subsubsection
\else
\let\prelimsec\subsection
\fi

\prelimsec{Miscellaneous notation}

We write $\#S$ for the cardinality of a set $S$.  We write $\Func{D}{R}$ for
the set of all functions with domain $D$ and range $R$.

\prelimsec{Groups}

Let $(G, +)$ be a cyclic group\footnote{
  We find that additive group notation is easier to read.  In particular, in
  multiplicative groups, one ends up with many interesting things tucked away
  in little superscripts.}%
of prime order $q$, and generated by an element $P$.  We shall write the
identity of $G$ as $0_G$, or simply as $0$ when no ambiguity is likely to
arise.  Thus, we have $\langle P \rangle = G$ and $q P = 0$.  Any $X \in G$
can be written as $X = x P$ for some $x \in \{0, 1, \ldots, q - 1\}$.

We consider a cyclic group of order $n$ as a $\Z/n\Z$-module, and in
particular our group $G$ can be seen as a vector space over $\gf{q}$.  This
makes the notation slightly more convenient.

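The additive notation can be sketched concretely by modelling a toy cyclic group as $\Z/q\Z$ itself (illustration only: in $\Z/q\Z$ discrete logarithms are trivial, so a real instantiation would use an elliptic-curve group or a prime-order subgroup of $(\Z/p\Z)^*$; the tiny prime here is hypothetical):

```python
# Toy model of a cyclic group (G, +) of prime order q, generated by P.
# Here G is Z/qZ under addition, P = 1, and the scalar multiplication
# x P is simply x mod q.
q = 101          # a small prime order -- far too small for real use
P = 1            # generator

def scalar_mul(x, Q):
    """Compute x Q in the toy group."""
    return (x * Q) % q

assert scalar_mul(q, P) == 0          # q P = 0, the group identity
X = scalar_mul(42, P)                 # any X in G is X = x P
assert X == 42 % q
```

The module structure mentioned above is visible here: scalars live in $\Z/q\Z$, and scalar multiplication distributes over the group operation.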
\prelimsec{Bit strings and encodings}
\label{sec:bitenc}

Let $\Bin = \{0, 1\}$ be the set of binary digits.  Then $\Bin^n$ is the set
of $n$-bit strings, and $\Bin^*$ the set of all (finite) bit strings.  If $x
\in \Bin^n$ is a bit string, we write its length as $|x| = n$.  For a bit
string $x \in \Bin^n$, and for $0 \le i < n$, we write $x[i]$ for the $i$th
bit of $x$.  The empty string is denoted $\emptystring$.

Let $x$ and $y$ be two bit strings.  If $|x| = |y| = n$, we write $x \xor y$
to mean the bitwise exclusive-or of $x$ and $y$\ifshort\else: if $z = x \xor
y$ then $|z| = n$, and $z[i] = (x[i] + y[i]) \bmod 2$ for $0 \le i < n$\fi.
We write $x \cat y$ to mean the concatenation of $x$ and $y$\ifshort\else: if
$z = x \cat y$ then $|z| = |x| + |y|$ and $z[i] = x[i]$ if $0 \le i < |x|$
and $z[i] = y[i - |x|]$ if $|x| \le i < |x| + |y|$\fi.
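The index conventions for $\xor$ and $\cat$ can be sketched as follows (an illustrative model of bit strings as lists of bits; nothing here is used by the protocols themselves):

```python
# Bit strings modelled as Python lists of 0/1.  xor is defined only for
# equal lengths; cat shows the index convention z[i] = x[i] for i < |x|
# and z[i] = y[i - |x|] for |x| <= i < |x| + |y|.
def xor(x, y):
    assert len(x) == len(y)
    return [(xi + yi) % 2 for xi, yi in zip(x, y)]

def cat(x, y):
    return x + y

x, y = [1, 0, 1], [1, 1, 0]
z = xor(x, y)
assert z == [0, 1, 1] and len(z) == len(x)
w = cat(x, y)
assert len(w) == len(x) + len(y)
assert w[4] == y[4 - len(x)]          # index convention for the y half
```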

Finally, we let $\bot$ be a value distinct from any bit string.

We shall want to encode group elements $X \in G$ and indices $x \in I =
\gf{q}$ as bit strings.
\ifshort
To this end, we shall assume the existence of efficient, unambiguous
encodings of group elements as $\ell_G$-bit strings, and indices as
$\ell_I$-bit strings.  To reduce clutter, we shall leave encoding and
decoding as implicit operations.
\else
To this end, we shall assume the existence of
integers $\ell_G, \ell_I > 0$ and functions
\[
  e_S\colon S \to \Bin^{\ell_S}
  \quad \textrm{and} \quad
  d_S\colon \Bin^{\ell_S} \to S \cup \{ \bot \}
  \qquad
  \textrm{for } S \in \{ G, \F \}
\]
with the following properties.
\begin{itemize}
\item The functions are \emph{unique} and \emph{unambiguous}, i.e., for any
  $t \in \Bin^{\ell_S}$, we have
  \[ d_S(t) = \begin{cases}
       s & if there is some $s \in S$ such that $t = e_S(s)$, or \\
       \bot & if no such $s$ exists.
     \end{cases}
  \]
\item The functions should be \emph{efficient} to compute.  Indeed, we shall
  be assuming that the time taken for encoding and decoding is essentially
  trivial.
\end{itemize}
Note that, as we have defined them, all encodings of group elements are the
same length, and similarly for encodings of indices.  This is necessary for
the security of our protocols.

We shall frequently abuse notation by omitting the encoding and decoding
functions where it is obvious that they are required.
\fi

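A minimal sketch of such a fixed-length encoding for indices follows (illustration only: the byte-aligned length `ELL_I` and the use of `None` as a stand-in for $\bot$ are assumptions of this sketch, not the paper's definitions):

```python
# Unambiguous fixed-length encoding of indices x in gf(q).  Every
# encoding has the same length, and decoding any out-of-range or
# wrong-length string returns None (standing in for 'bot').
q = 101
ELL_I = 2                              # encoding length, in bytes

def encode_index(x):
    assert 0 <= x < q
    return x.to_bytes(ELL_I, 'big')

def decode_index(t):
    if len(t) != ELL_I:
        return None                    # wrong length: bot
    x = int.from_bytes(t, 'big')
    return x if x < q else None        # no preimage exists: bot

t = encode_index(77)
assert len(t) == ELL_I
assert decode_index(t) == 77
assert decode_index((q + 5).to_bytes(ELL_I, 'big')) is None
```

The uniform length matters: if encodings could vary in length, message layouts would leak information about the encoded values.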
\ifshort\else
\prelimsec{Games, adversaries, and oracles}
\label{sec:games}

Many of the security definitions and results given here make use of
\emph{games}, played with an \emph{adversary}.  An adversary is a
probabilistic algorithm.  In some games, the adversary is additionally
equipped with \emph{oracles}, which perform computations with values chosen
by the adversary and secrets chosen by the game but not revealed to the
adversary.  We impose limits on the adversary's resource usage: in
particular, the total time it takes, and the number of queries it makes to
its various oracles.  Throughout, we include the size of the adversary's
program as part of its `time', in order to model adversaries which contain
large precomputed tables.

The games provide models of someone trying to attack a construction or
protocol.  For security, we will either define a notion of `winning' the
game, and require that all adversaries have only a very small probability of
winning, or we consider two different games and require that no adversary can
distinguish between the two except with very small probability.

Our proofs make frequent use of sequences of games; see
\cite{bellare-rogaway-2006:triple-enc,shoup-2004:sequences-of-games}.
The presentation owes much to Shoup \cite{shoup-2004:sequences-of-games}.  We
begin with a game $\G0$ based directly on a relevant security definition, and
construct a sequence of games $\G1$, $\G2$, \dots, each slightly different
from the last.  We define all of the games in a sequence over the same
underlying probability space -- the random coins tossed by the algorithms
involved -- though different games may have slightly differently-defined
events and random variables.  Our goal in doing this is to bound the
probability of the adversary winning the initial game $\G0$ by simultaneously
(a) relating the probability of this event to that of corresponding events in
subsequent games, and (b) simplifying the game until the probability of the
corresponding event can be computed directly.

The following simple lemma from \cite{shoup-2001:oaep-reconsidered} will be
frequently useful.
\begin{lemma}[Difference Lemma]
  \label{lem:shoup}
  Let $S$, $T$, $F$ be events.  Suppose $\Pr[S \mid \bar F] =
  \Pr[T \mid \bar F]$.  Then $|\Pr[S] - \Pr[T]| \le \Pr[F]$.
\end{lemma}
\begin{proof}
  A simple calculation:
  \begin{eqnarray*}[rl]
    |\Pr[S] - \Pr[T]|
    & = |(\Pr[S \mid F]\Pr[F] + \Pr[S \mid \bar F]\Pr[\bar F]) -
        (\Pr[T \mid F]\Pr[F] + \Pr[T \mid \bar F]\Pr[\bar F])| \\
    & = \Pr[F] \cdot |\Pr[S \mid F] - \Pr[T \mid F]| \\
    & \le \Pr[F]
  \end{eqnarray*}
  and we're done!
\end{proof}
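The lemma can also be checked numerically on a toy probability space (a worked illustration, not part of the proof; the sample space and events here are invented): if $S$ and $T$ agree outside $F$, their probabilities differ by at most $\Pr[F]$.

```python
# Exact probabilities on a small uniform sample space.  S and T are
# chosen to differ only inside the 'failure' event F, mirroring the
# hypothesis Pr[S | not F] = Pr[T | not F].
from fractions import Fraction

omega = range(12)                      # sample space, uniform
F = {0, 1, 2}                          # failure event, Pr[F] = 1/4
S = {0, 5, 6}
T = {1, 2, 5, 6}

def pr(event):
    return Fraction(len(event), len(omega))

assert S - F == T - F                  # S and T coincide outside F
assert abs(pr(S) - pr(T)) <= pr(F)     # the Difference Lemma's bound
```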
\fi


\prelimsec{The random oracle model}
\label{sec:ro}

\ifshort\else
In particular, most of our results will make use of the \emph{random oracle}
model \cite{bellare-rogaway-1993:random-oracles}, in which all the
participants, including the adversary, have access to a number of `random
oracles'.  A random oracle with domain $D$ and range $R$ is an oracle which
computes a function chosen uniformly at random from the set of all such
functions.  (In the original paper
\cite{bellare-rogaway-1993:random-oracles}, random oracles are considered as
having domain $\Bin^*$ and range $\Bin^\omega$; we use finite random oracles
here, because they're easier to work with.)
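In a simulation, such a finite random oracle is typically realised by lazy sampling; the following sketch (an illustrative model, not part of any protocol here) also keeps the query transcript, which is exactly the object a reduction can inspect:

```python
# Lazily-sampled random oracle: the first query at a point samples the
# image uniformly at random; repeat queries return the same answer.
# The dict doubles as the transcript of all queries made so far.
import secrets

class RandomOracle:
    def __init__(self, range_bits):
        self.range_bits = range_bits
        self.table = {}                # query transcript

    def query(self, x):
        if x not in self.table:
            self.table[x] = secrets.randbits(self.range_bits)
        return self.table[x]

H = RandomOracle(128)
a = H.query(b'some string')
assert H.query(b'some string') == a    # consistent on repeat queries
assert len(H.table) == 1
```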

Given a protocol proven secure in the random oracle model, we can instantiate
each random oracle by a supposedly-secure hash function and be fairly
confident that either our protocol will be similarly secure, or one of the
hash functions we chose has some unfortunate property.

Proofs in the random oracle model must be interpreted carefully.  For
example, Canetti, Goldreich and Halevi \cite{canetti-2004:rand-oracle-revisit}
show that there are schemes which can be proven secure in the random oracle
model but provably have no secure instantiation in the standard model.
\fi

The random oracle model \ifshort\cite{bellare-rogaway-1993:random-oracles}
\fi is useful for constructing reductions and simulators for two main
reasons.
\begin{enumerate}
\item One can use the transcript of an adversary's queries to random oracles
  in order to extract knowledge from it.
\item One can `program' a random oracle so as to avoid being bound by prior
  `commitments', or to guide an adversary towards solving a selected instance
  of some problem.
\end{enumerate}
Our proofs only make use of the first feature.  This becomes particularly
important when we consider issues of zero-knowledge and deniability in a
concurrent setting, because we want to be able to claim that we retain these
features when the random oracle is instantiated using a cryptographic hash
function, and hash functions definitely aren't `programmable' in this way!
The former property seems rather more defensible -- one would indeed hope
that the only sensible way of working out (anything about) the hash of a
particular string is to actually compute the hash function, and the random
oracle model is, we hope, just giving us a `hook' into this process.

\ifshort\else
(Our protocols can be modified to make use of bilinear pairings so as to
provide identity-based identification and key-exchange, using the techniques
of \cite{boneh-franklin-2003:ibe-weil-pairing}.  Proving the security of the
modifications we discuss would involve `programming' random oracles, but this
doesn't affect the zero-knowledge or deniability of the resulting protocols.)
\fi


\ifshort\else
\prelimsec{Notation for algorithms}

We shall have occasion to describe algorithms by means of a pseudocode.  Our
choice of pseudocode is unlikely to be particularly controversial.  We let $x
\gets y$ denote the action of setting $x$ to the value $y$; similarly, $x
\getsr Y$ denotes the action of sampling $x$ from the set $Y$ uniformly at
random.

The expression $a \gets A^{O(\cdot, x)}(y)$ means `assign to $a$ the value
output by algorithm $A$ on input $y$, and with oracle access to the algorithm
which, given input $z$, computes $O(z, x)$'.

We make use of conditional (\IF-\ELSE) and looping (\FOR-\DO and \WHILE-\DO)
constructions; in order to reduce the amount of space taken up, the bodies of
such constructions are shown by indentation only.

We don't declare the types of our variables explicitly, assuming that these
will be obvious by inspection; also, we don't describe our variables' scopes
explicitly, leaving the reader to determine these from context.

Finally, the notation $\Pr[\textit{algorithm} : \textit{condition}]$ denotes
the probability that \textit{condition} is true after running the given
\textit{algorithm}.
32609df3 670\fi
a6e375a6 671
\prelimsec{Diffie-Hellman problems}
\label{sec:dhp}

The security of our protocols is related to the hardness of the
computational, decisional, and gap Diffie-Hellman problems in the group $G$.
We define these problems and what it means for them to be `hard' here.

The \emph{computational} Diffie-Hellman problem (CDH) is as follows: given
two group elements $X = x P$ and $Y = y P$, find $Z = x y P$.
\ifshort\else
\begin{definition}[The computational Diffie-Hellman problem]
  Let $(G, +)$ be a cyclic group generated by $P$. For any adversary $A$, we
  say that $A$'s \emph{success probability} at solving the computational
  Diffie-Hellman problem in $G$ is
  \[ \Succ{cdh}{G}(A) =
       \Pr[ x \getsr \Z/\#G\Z; y \getsr \Z/\#G\Z : A(x P, y P) = x y P ]
  \]
  where the probability is taken over the random choices of $x$ and $y$ and
  any random decisions made by $A$. We say that the \emph{CDH insecurity
  function} of $G$ is
  \[ \InSec{cdh}(G; t) = \max_A \Succ{cdh}{G}(A) \]
  where the maximum is taken over adversaries which complete in time $t$.
\end{definition}
Certainly, if one can compute discrete logarithms in the group $G$ (i.e.,
given $x P$, find $x$), then one can solve the computational Diffie-Hellman
problem. The converse is not clear, though. Shoup
\cite{shoup-1997:dh-lower-bounds} gives us some confidence in the difficulty
of the problem by showing that a \emph{generic} adversary -- i.e., one which
makes no use of the specific structure of a group -- making $q$ queries has
success probability no greater than $q^2/\#G$.

This isn't quite sufficient for our purposes. Our proofs will be able to
come up with (possibly) a large number of guesses for the correct answer, and
at most one of them will be correct. Unfortunately, working out which one is
right seems, in general, to be difficult. This is the \emph{decision}
Diffie-Hellman problem (DDH), which \cite{shoup-1997:dh-lower-bounds} shows,
in the generic group model, is about as hard as CDH. (See
\cite{boneh-1998:ddh} for a survey of the decision Diffie-Hellman problem.)
\par\fi
Our reference problem will be a `multiple-guess computational Diffie-Hellman
problem' (MCDH), which is captured by a game as follows. An adversary is
given a pair of group elements $(x P, y P)$, and an oracle $V(\cdot)$ which
accepts group elements as input. The adversary wins the game if it queries
$V(x y P)$.

\begin{figure}
  \begin{program}
    $\Game{mcdh}{G}(A)$: \+ \\
      $w \gets 0$; \\
      $x \getsr \Z/\#G\Z$; $y \getsr \Z/\#G\Z$; \\
      $A^{V(\cdot)}(x P, y P)$; \\
      \RETURN $w$;
    \next
    Function $V(Z)$: \+ \\
      \IF $Z = x y P$ \THEN \\ \ind
        $w \gets 1$; \\
        \RETURN $1$; \- \\
      \RETURN $0$;
  \end{program}

  \caption{The multiple-guess computational Diffie-Hellman problem:
    $\Game{mcdh}{G}(A)$}
  \label{fig:mcdh}
\end{figure}

\begin{definition}[The multiple-guess computational Diffie-Hellman problem]
  \label{def:mcdh}
  Let $(G, +)$ be a cyclic group generated by $P$. For some adversary $A$,
  we say that $A$'s \emph{success probability} at solving the multiple-guess
  computational Diffie-Hellman problem in $G$ is
  \[ \Succ{mcdh}{G}(A) = \Pr[\Game{mcdh}{G}(A) = 1] \]
  where $\Game{mcdh}{G}(A)$ is shown in figure~\ref{fig:mcdh}. We say that
  the \emph{MCDH insecurity function of $G$} is
  \[ \InSec{mcdh}(G; t, q_V) = \max_A \Succ{mcdh}{G}(A) \]
  where the maximum is taken over adversaries which complete in time $t$ and
  make at most $q_V$ oracle queries.
\end{definition}
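To make the game concrete, here is a small Python sketch of the MCDH game in the toy group $(\Z/q\Z, +)$ with generator $P = 1$. In this group a `point' $x P$ is just the integer $x$, so CDH is trivially easy; the sketch (function names ours) illustrates only the game's bookkeeping, not a hard instance.

```python
import secrets

Q = 997                        # a toy prime; the group is (Z/QZ, +) with P = 1

def mcdh_game(adversary):
    """The MCDH game: sample x, y, run the adversary with the oracle V,
    and return 1 iff some V-query was the correct answer x y P."""
    x = secrets.randbelow(Q)
    y = secrets.randbelow(Q)
    won = False

    def V(Z):                  # the verification oracle
        nonlocal won
        if Z == (x * y) % Q:
            won = True
            return 1
        return 0

    adversary(x, y, V)         # the adversary sees X = x P and Y = y P
    return 1 if won else 0

def easy_adversary(X, Y, V):
    # In (Z/QZ, +) `scalars' and `points' coincide, so x y P = X Y mod Q;
    # this adversary therefore wins with probability 1.
    V((X * Y) % Q)
```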
\ifshort
We can (loosely) relate the difficulty of MCDH to the difficulty of
the standard CDH problem, in which the adversary is allowed only a single
guess.
\else
Note that our MCDH problem is not quite the `gap Diffie-Hellman problem'
(GDH). The gap problem measures the intractability of solving CDH even with
the assistance of an oracle for solving (restricted) decision Diffie-Hellman
problems in the group. Specifically, the adversary is given $(X, Y) = (x P,
y P)$ and tries to find $Z = x y P$, as for CDH, but now it has access to an
oracle $D(R, S)$ which answers $1$ if $S = x R$ and $0$ otherwise.

Clearly MCDH is at least as hard as GDH, since our simple verification oracle
$V(Z)$ can be simulated with the gap problem's DDH oracle, as $D(Y, Z)$.
However, we can (loosely) relate the difficulty of MCDH to the difficulty of
CDH.
\fi
\begin{proposition}[Comparison of MCDH and CDH security]
  For any cyclic group $(G, +)$,
  \[ \InSec{mcdh}(G; t, q_V) \le
       \ifshort q_V\,\InSec{mcdh}(G; t + O(q_V), 1) \else
       q_V\,\InSec{cdh}(G; t + O(q_V)) \fi.
  \]
\end{proposition}
\begin{longproof}{The proof of this proposition may be found in the full
    version of this paper.}
  Let $A$ be an adversary attacking the multiple-guess computational
  Diffie-Hellman problem in $G$, and suppose that it runs in time $t$ and
  issues $q_V$ queries to its verification oracle.

  We use a sequence of games. Game $\G0$ is the original MCDH attack game.
  In each game $\G{i}$, we let $S_i$ be the event that the adversary wins
  the game.

  Game $\G1$ is the same as $\G0$, except that we change the behaviour of the
  verification oracle. Specifically, we make the oracle always return $0$.
  We claim that this doesn't affect the adversary's probability of winning,
  i.e., $\Pr[S_1] = \Pr[S_0]$. To see this, note that if none of the
  adversary's $V(\cdot)$ queries was correct, then there is no change in the
  game; conversely, if any query was correct, then the adversary will have
  won regardless of its subsequent behaviour (which may differ arbitrarily
  between the two games).

  We are now ready to construct from $A$ an adversary $B$ attacking the
  standard computational Diffie-Hellman problem.
  \begin{program}
    Adversary $B(X, Y)$: \+ \\
      $n \gets 0$; \\
      \FOR $i \in \Nupto{q_V}$ \DO $Q_i \gets 0$; \\
      $A^{V(\cdot)}(X, Y)$; \\
      $r \getsr \Nupto{n}$; \\
      \RETURN $Q_r$;
    \next
    Function $V(Z')$: \+ \\
      $Q_n \gets Z'$; \\
      $n \gets n + 1$; \\
      \RETURN $0$;
  \end{program}
  Observe that $B$ provides $A$ with an accurate simulation of game $\G1$.
  Moreover, at the end of the algorithm, we have $0 < n \le q_V$, and the
  values $Q_0$, $Q_1$, \dots, $Q_{n-1}$ are the values of $A$'s oracle
  queries. Hence, with probability $\Pr[S_1]$, at least one of the $Q_i$
  is the correct answer to the CDH problem. Let $\epsilon = \Pr[S_1] =
  \Pr[S_0]$; we claim that $B$'s probability of success is at least
  $\epsilon/q_V$. The proposition follows directly from this claim: since
  $A$ was chosen arbitrarily, we can maximize over adversaries and account
  for the resources used.

  We now prove the above claim. For $0 \le i < q_V$, let $W_i$ be the
  event that $Q_i = x y P$, i.e., that $Q_i$ is the correct response. A
  simple union bound shows that
  \[ \sum_{0\le i<j} \Pr[W_i \mid n = j] \ge \epsilon. \]
  We now perform a calculation:
  \begin{eqnarray*}[rl]
    \Succ{cdh}{G}(B)
      & = \sum_{0\le i<q_V} \Pr[W_i \land r = i] \\
      & = \sum_{0<j\le q_V} \Pr[n = j]
            \biggl( \sum_{0\le i<j} \Pr[W_i \land r = i \mid n = j] \biggr) \\
      & = \sum_{0<j\le q_V} \Pr[n = j]
            \biggl( \frac{1}{j} \sum_{0\le i<j} \Pr[W_i \mid n = j] \biggr) \\
      &\ge \sum_{0<j\le q_V} \Pr[n = j] \frac{\epsilon}{j} \\
      &\ge \frac{\epsilon}{q_V} \sum_{0<j\le q_V} \Pr[n = j] \\
      & = \frac{\epsilon}{q_V},
  \end{eqnarray*}
  which completes the proof.
\end{longproof}
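The construction of $B$ from $A$ can also be sketched directly in code: the fake oracle records $A$'s queries while always answering $0$ (as in game $\G1$), and $B$ returns one recorded query uniformly at random. This is an illustrative sketch; the `guessing' adversary below is hypothetical.

```python
import secrets

def cdh_adversary_from_mcdh(A, q_V):
    """Build the CDH adversary B of the proof from an MCDH adversary A."""
    def B(X, Y):
        queries = []
        def V(Z):
            if len(queries) < q_V:     # respect the query bound q_V
                queries.append(Z)
            return 0                   # game G1: the oracle always refuses
        A(X, Y, V)
        # Return Q_r for r chosen uniformly from the n recorded queries.
        return queries[secrets.randbelow(len(queries))]
    return B

# A hypothetical MCDH adversary making three guesses at the answer.
def guessing_adversary(X, Y, V):
    for guess in (X + Y, X * Y, X - Y):
        V(guess)

B = cdh_adversary_from_mcdh(guessing_adversary, q_V=3)
```

If one of $A$'s $q_V$ guesses is correct, $B$ succeeds with probability at least $1/q_V$, matching the $\epsilon/q_V$ bound above.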


\ifshort\else
\prelimsec{Example groups and encodings}

For nonnegative integers $0 \le n < 2^\ell$, there is a natural binary
encoding $N_\ell\colon \Nupto{2^\ell} \to \Bin^\ell$ which we can define
recursively as follows.
\[ N_0(0) = \emptystring \qquad
   N_\ell(n) = \begin{cases}
     N_{\ell-1}(n) \cat 0 & if $0 \le n < 2^{\ell-1}$ \\
     N_{\ell-1}(n - 2^{\ell-1}) \cat 1 & if $2^{\ell-1} \le n < 2^\ell$.
   \end{cases}
\]
Given an encoding $a = N_\ell(n)$ we can recover $n$ as
\[ n = \sum_{0\le i<\ell} a[i] 2^i. \]
Hence, given some limit $L \le 2^\ell$, we can encode elements of $\Nupto{L}$
using the functions $(e, d)$:
\[ e(L, \ell, n) = N_\ell(n) \qquad
   d(L, \ell, a) = \begin{cases}
     N_\ell^{-1}(a) & if $N_\ell^{-1}(a) < L$ \\
     \bot & otherwise
   \end{cases}
\]
The reader can verify that the functions $e(L, \ell, \cdot)$ and $d(L, \ell,
\cdot)$ satisfy the requirements of section~\ref{sec:bitenc}.
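The recursive definition translates directly into code. In the following Python sketch (function names ours), a bit string is a list of bits with $a[i]$ the coefficient of $2^i$, and \texttt{None} stands in for $\bot$.

```python
def N(ell, n):
    """The encoding N_ell: {0, ..., 2^ell - 1} -> {0,1}^ell, as a bit list."""
    assert 0 <= n < 2 ** ell
    if ell == 0:
        return []                          # N_0(0) is the empty string
    if n < 2 ** (ell - 1):
        return N(ell - 1, n) + [0]
    return N(ell - 1, n - 2 ** (ell - 1)) + [1]

def e(L, ell, n):
    """Encode n in {0, ..., L - 1} as an ell-bit string."""
    return N(ell, n)

def d(L, ell, a):
    """Decode an ell-bit string; out-of-range strings decode to None (bot)."""
    n = sum(a[i] << i for i in range(ell))
    return n if n < L else None
```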
859
32609df3
MW
860Given some $q$ with $q < 2^{\ell_I}$, then, we can define an encoding
861$(e_\F, d_\F)$ by $e_\F(n) = e(q, \ell_I, n)$ and $d_\F(a) = d(q, \ell_I,
862a)$.
863
Let $p$ and $q$ be primes, with $q \mid (p - 1)$. Then there is an order-$q$
subgroup of $\F_p^*$. In practice, an order-$q$ element can be found easily
by taking elements $h \in \F_p^*$ at random and computing $g = h^{(p-1)/q}$
until $g \ne 1$; then $G = \langle g \rangle$ is a group of $q$ elements.
Assuming that $p$ and $q$ are sufficiently large, the Diffie-Hellman problems
seem to be difficult in $G$. Some texts recommend additional restrictions on
$p$, in particular that $(p - 1)/2q$ be either prime or the product of large
primes. Primes of this form protect against small-subgroup attacks; but our
protocols are naturally immune to these attacks, so such precautions are
unnecessary here. Elements of $G$ can be encoded readily, since each element
$n + p\Z$ of $\F_p = \Z/p\Z$ has an obvious `representative' integer $n$ such
that $0 \le n < p$, and given $2^{\ell_G} > p$, we can encode $n$ as $e(p,
\ell_G, n)$, as above.
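The sampling procedure just described is a few lines of Python; the toy parameters below are of course far too small for any real use.

```python
import secrets

def find_order_q_generator(p, q):
    """Find a generator of the order-q subgroup of F_p^*, where q is prime
    and q | p - 1: take random h and set g = h^((p-1)/q) mod p."""
    assert (p - 1) % q == 0
    while True:
        h = 1 + secrets.randbelow(p - 1)   # uniform over {1, ..., p - 1}
        g = pow(h, (p - 1) // q, p)
        if g != 1:                         # g^q = 1, so g != 1 has order q
            return g

# Toy parameters: p = 23, q = 11, with 11 | 22.
g = find_order_q_generator(23, 11)
```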
Alternatively, let $\F = \gf{p^f}$ be a finite field, and $E$ be an elliptic
curve defined over $\F$ such that the group $E(\F)$ of $\F$-rational points
of $E$ has a prime-order cyclic subgroup $G$. Elements of $G$ can be
represented as pairs of elements of $\F$. If $f = 1$, i.e., $\F = \Z/p\Z$,
then field elements can be encoded as above. If $p = 2$, we can represent
the field as $\F_2[x]/(p(x))$ for some irreducible polynomial $p(x) \in
\F_2[x]$ of degree $f$. An element $r \in \F$ can then be represented by a
polynomial $r(x)$ with degree less than $f$, and coefficients $c_i \in \{0,
1\}$, i.e.,
\[ r(x) = \sum_{0\le i<f} c_i x^i \]
and hence we can uniquely encode $r$ as an $f$-bit string $a$ such that $a[i]
= c_i$.
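The coefficient encoding is immediate in code. In this sketch (names ours), an element of $\gf{2^f}$ is held as an integer whose bit $i$ is the coefficient $c_i$ of $x^i$.

```python
def encode_gf2m(r, f):
    """Encode r in F_{2^f} -- an integer coefficient mask -- as the
    f-bit string a with a[i] = c_i."""
    assert 0 <= r < 2 ** f
    return [(r >> i) & 1 for i in range(f)]

def decode_gf2m(a):
    """Recover the coefficient mask from the bit string."""
    return sum(a[i] << i for i in range(len(a)))
```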
\fi


\prelimsec{Symmetric encryption}
\label{sec:sym-enc}

Our key-exchange protocol requires a symmetric encryption scheme. Our
definition is fairly standard, except that, rather than specifying a
key-generation algorithm, we assume that key generation simply involves
selecting a string of a given length uniformly at random.
\begin{definition}[Symmetric encryption schemes]
  A \emph{symmetric encryption scheme} $\E = (\kappa, E, D)$ consists of:
  \begin{itemize}
  \item an integer $\kappa \ge 0$;
  \item a randomized \emph{encryption algorithm} $E$ which, on input $K \in
    \Bin^\kappa$ and $p \in \Bin^*$, outputs some $c \in \Bin^*$, written $c
    \gets E_K(p)$; and
  \item a \emph{decryption algorithm} $D$ which, on input $K \in \Bin^\kappa$
    and $c \in \Bin^*$, outputs some $p' \in \Bin^* \cup \{\bot\}$, written
    $p' \gets D_K(c)$.
  \end{itemize}
  Furthermore, a symmetric encryption scheme must be \emph{sound}: that is,
  if $c \gets E_K(p)$ for some $K \in \Bin^\kappa$ and $p \in \Bin^*$, and
  $p' \gets D_K(c)$, then $p = p'$.
\end{definition}
Our security notion for symmetric encryption is the standard notion of
left-or-right indistinguishability of ciphertexts under chosen-ciphertext
attack.
\begin{definition}[IND-CCA]
  \label{def:ind-cca}
  Let $\E = (\kappa, E, D)$ be a symmetric encryption scheme, and $A$ be an
  adversary. Let $\id{lr}_b(x_0, x_1) = x_b$ for $b \in \{0, 1\}$. Let
  \[ P_b =
       \Pr[K \getsr \Bin^\kappa;
           b' \gets A^{E_K(\id{lr}_b(\cdot, \cdot)), D_K(\cdot)}() :
           b' = 1]
  \]
  An adversary is \emph{valid} if
  \begin{itemize}
  \item for any query to its encryption oracle $E_K(\id{lr}_b(x_0, x_1))$ we
    have $|x_0| = |x_1|$, and
  \item no query to the decryption oracle $D_K(\cdot)$ is equal to any reply
    from an encryption query.
  \end{itemize}
  If $A$ is valid, then we define its \emph{advantage} in attacking the
  security of $\E$ as follows
  \[ \Adv{ind-cca}{\E}(A) = P_1 - P_0. \]
  Further, we define the \emph{IND-CCA insecurity function of $\E$} to be
  \[ \InSec{ind-cca}(\E; t, q_E, q_D) = \max_A \Adv{ind-cca}{\E}(A) \]
  where the maximum is taken over all valid adversaries $A$ which run in time
  $t$, and issue at most $q_E$ encryption and $q_D$ decryption queries.
\end{definition}
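The left-or-right game can be rendered as a short Python sketch (names ours), with the validity conditions enforced by assertion. The toy `identity scheme' shown is sound but trivially insecure: an adversary that simply inspects the challenge ciphertext attains advantage $P_1 - P_0 = 1$.

```python
import secrets

def ind_cca_game(scheme, adversary, b):
    """The left-or-right IND-CCA game: `scheme' is a triple (kappa, E, D);
    the adversary gets the oracles E_K(lr_b(., .)) and D_K(.), and its
    output bit is returned."""
    kappa, E, D = scheme
    K = secrets.token_bytes(kappa // 8)
    replies = []

    def enc(x0, x1):
        assert len(x0) == len(x1)      # validity: equal-length pairs only
        c = E(K, x1 if b else x0)      # lr_b(x_0, x_1) = x_b
        replies.append(c)
        return c

    def dec(c):
        assert c not in replies        # validity: no decrypting oracle replies
        return D(K, c)

    return adversary(enc, dec)

# A sound but insecure `identity scheme', and an adversary that breaks it.
identity_scheme = (128, lambda K, p: p, lambda K, c: c)
distinguisher = lambda enc, dec: 1 if enc(b"left", b"rite") == b"rite" else 0
```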


\subsection{Simulations}
\label{sec:sim}

In section~\ref{sec:zk-ident}, we shall prove that our identification
protocol is zero-knowledge; in section~\ref{sec:denial}, we show that our
key-exchange protocol is deniable. In both of these proofs, we shall need to
demonstrate \emph{simulatability}.

\ifshort

We consider an adversary~$A$ interacting with a `world'~$W$; we model both as
probabilistic algorithms. Both $A$ and~$W$ are given a common input~$c$; the
world is additionally given a private input~$w$; these are chosen by a
randomized initialization function $I$. The adversary is additionally given
an auxiliary input~$u$ computed from $w$ by a randomized algorithm~$U$. All
these algorithms -- the adversary and the world, but also the initialization
and auxiliary-input algorithms $I$ and~$U$ -- have access to a number of
random oracles $\mathcal{H} = (H_0, H_1, \ldots, H_{n-1})$. The adversary
eventually decides to stop interacting, and produces an output~$a$.

A \emph{simulator} for $A$'s interaction with $W$ is an algorithm $S^A$ which
attempts to produce a similar output distribution, but without interacting
with $W$. The simulator is given the same inputs $(c, u)$ as we gave
to~$A$, and $S$ is also allowed to query the random oracles~$\mathcal{H}$.

To measure the effectiveness of a simulator, we consider a distinguisher~$D$
which is given $(c, u, a)$, and access to $\mathcal{H}$, and returns a bit
$b$ representing its verdict as to whether the output $a$ was produced by the
adversary or the simulator.

\else

\subsubsection{General framework}
Consider a game in which an adversary~$A$ interacts with some `world'~$W$,
which we shall represent as a probabilistic algorithm. The world may in fact
represent a number of honest parties communicating in a concurrent fashion,
but we can consider them as a single algorithm for our present purposes.

Initially the world and the adversary are both given the same \emph{common
input}~$c$; in addition, the world is given a \emph{private input}~$w$.
Both $c$ and~$w$ are computed by an \emph{initialization function}~$I$, which
is considered to be part of the definition of the game. Finally, the
adversary decides somehow that it has finished interacting, and outputs a
value~$a$. All this we notate as
\[ (w, c) \gets I(); a \gets A^{W(w, c)}(c). \]
This game is \emph{simulatable} if there is an algorithm~$S$ -- the
\emph{simulator} -- which can compute the same things as~$A$, but all by
itself without interacting with the world. That is, we run the simulator on
the common input~$c$, allowing it to interact in some way with the
adversary~$A$, and finally giving us an output~$s$.
\[ (w, c) \gets I(); s \gets S^A(c). \]
We shall say that the simulator is \emph{effective} if it's difficult to tell
whether a given string was output by the adversary after interacting with the
world, or by the simulator running by itself. That is, for any algorithm~$D$
-- a \emph{distinguisher} -- running in some bounded amount of time, its
advantage
\begin{spliteqn*}
  \Pr[(w, c) \gets I(); a \gets A^{W(w, c)}(c);
      b \gets D(c, a) : b = 1] - {} \\
  \Pr[(w, c) \gets I(); s \gets S^A(c); b \gets D(c, s) : b = 1]
\end{spliteqn*}
is small. (Note that we gave the distinguisher the common input as well as
the output of the adversary or the simulator.)

It's usual to study \emph{transcripts} of interactions in these kinds of
settings. We are considering arbitrary adversarial outputs here, so this
certainly includes adversaries which output a transcript of their
interactions. Indeed, for any adversary~$A$, we could construct an
adversary~$A_T$ which performs the same computation, and outputs the same
result, but also includes a complete transcript of $A$'s interaction with the
world. Therefore we're just providing additional generality.

\subsubsection{Random oracles}
We shall be considering interactions in which all the parties have access to
several random oracles. We could simply say that the random oracles are part
of the world~$W$. In the setting described above, only the adversary
actually interacts with the world (and therefore would be able to query
random oracles). The simulator would be forced to `make up' its own random
oracle, and the distinguisher would have to study the distributions of the
random-oracle queries and their responses to make up its mind about which it
was given.

However, this would be a poor model for the real world, since once we
instantiate the random oracle with a hash function, we know that everyone
would actually be able to compute the hash function for themselves. Thus
a distinguisher in the real world would be able to tell the difference
immediately between a real interaction and the simulated transcript, since
the `random oracle' queries recorded in the latter would be wrong!

Therefore we decide not to include the random oracles as part of the world,
but instead allow all the participants -- adversary, simulator and
distinguisher -- access to them. If we denote by~$\mathcal{H} = (H_0, H_1,
\ldots, H_{n-1})$ the collection of random oracles under consideration, the
expression for the distinguisher's advantage becomes
\begin{spliteqn*}
  \Pr[(w, c) \gets I(); a \gets A^{W(w, c), \mathcal{H}}(c);
      b \gets D^{\mathcal{H}}(c, a) : b = 1] - {} \\
  \Pr[(w, c) \gets I(); s \gets S^{A, \mathcal{H}}(c);
      b \gets D^{\mathcal{H}}(c, s) : b = 1].
\end{spliteqn*}

\subsubsection{Auxiliary inputs}
If an adversary's output can be effectively simulated, then we can
confidently state that the adversary `learnt' very little of substance from
its interaction, and certainly very little it can \emph{prove} to anyone
else. However, as we have described the setting so far, we fix an adversary
before we choose inputs to the world, so our model says little about what an
adversary which has already acquired some knowledge might learn beyond that.
For example, an adversary might overhear some other conversation between
honest parties and be able to use this to its advantage.

To this end, we give the adversary an \emph{auxiliary input}~$u$, computed by
an algorithm~$U$. We give $U$ both $c$ and $w$, in order to allow the
adversary to gain some (possibly partial) knowledge of the secrets of the
other parties. We also allow $U$ access to the random oracles~$\mathcal{H}$,
because clearly in the `real world' it would be ridiculous to forbid such an
algorithm from computing a publicly-known hash function.

The simulator and distinguisher are also given the auxiliary input. The
simulator is meant to represent the adversary's ability to compute things on
its own, without interacting with the world, and since the adversary is given
the auxiliary input, the simulator must be too. The distinguisher must be
given the auxiliary input because otherwise the simulator could just `make
up' plausible-looking inputs.

\fi

\ifshort
Because we're interested in a concrete, quantitative analysis, we must
constrain the resource usage of the various algorithms described above.
Specifically, we shall be interested in
\else

\subsubsection{Resource limits}
We shall not allow our algorithms to perform completely arbitrary
computations and interactions. Instead, we impose limits on the amount of
time they are allowed to take, the number of random-oracle queries they make,
and so on. Specifically, we are interested in
\fi
\begin{itemize}
\item the time $t_A$ taken by the adversary and $t_D$ taken by the
  distinguisher,
\item the number of oracle queries $\mathcal{Q}_A = (q_{A,0}, q_{A,1},
  \ldots, q_{A,n-1})$ made by the adversary, and $\mathcal{Q}_D$ made by the
  distinguisher,
\item a number of resource bounds $\mathcal{R}$ on the adversary's
  interaction with the world (e.g., number of messages of various kinds sent
  and received), and
\item a number of bounds $\mathcal{U}$ on the contents of the adversary's
  auxiliary input~$u$.
\end{itemize}
Sometimes we shall not be interested in proving simulatability of adversaries
with auxiliary inputs. We write $\mathcal{U} = 0$ to indicate that auxiliary
input is not allowed.

\ifshort\else

\subsubsection{World syntax}
It will be worth our while being more precise about what a `world' actually
is, syntactically. We define a world to be a single, randomized algorithm
taking inputs $(\iota, \sigma, \tau, \mu) \in (\Bin^*)^4$; the algorithm's
output is a pair $(\sigma', \rho) \in (\Bin^*)^2$. We show how the
adversary's interaction is mapped on to this world algorithm in
figure~\ref{fig:sim-world}.
\begin{itemize}
\item The `input' $\iota$ is the result of the initialization function~$I$.
  That is, it is the pair $(w, c)$ of the world's private input and the
  common input.
\item The `state' $\sigma$ is empty on the world's first invocation; on each
  subsequent call, the value of the world's output $\sigma'$ is passed back.
  In this way, the world can maintain state.
\item The `type' $\tau$ is a token giving the type of invocation this is.
\item The `message' $\mu$ is any other information passed in; its form will
  typically depend on the type~$\tau$ of the invocation.
\item The `new state' $\sigma'$ is the value of $\sigma$ to pass to the next
  invocation of the world.
\item The `reply' $\rho$ is the actual output of the invocation.
\end{itemize}
There are two special invocation types. The adversary is \emph{forbidden}
from making special invocations.
\begin{itemize}
\item The special invocation type $\cookie{init}$ is used to allow the world to
  prepare an initial state. The world is invoked as
  \[ W^{\mathcal{H}}(\iota, \emptystring, \cookie{init}, \emptystring) \]
  and should output an initial state $\sigma'$. The world's reply $\rho$ is
  ignored. (Recall that $\emptystring$ represents the empty string.)
\item The special invocation type $\cookie{random}$ is used to inform the
  world that the adversary has issued a random oracle query. The world is
  invoked as
  \[ W^{\mathcal{H}}(\iota, \sigma, \cookie{random}, (i, x, h)) \]
  to indicate that the adversary has queried its random oracle $H_i(\cdot)$
  on the input $x$, giving output~$h$. The world may output an updated state
  $\sigma'$; its reply $\rho$ is ignored.
\end{itemize}
The latter special query is a technical device used to allow the `fake-world'
simulators we define below to be aware of the adversary's random oracle
queries without being able to `program' the random oracle. Including it here
does little harm, and simplifies the overall exposition.

\begin{figure}
  \begin{program}
    Interaction $A^{W(w, c), \mathcal{H}}(c, u)$: \+ \\
      $(\sigma, \rho) \gets
        W((w, c), \emptystring, \cookie{init}, \emptystring)$; \\
      $a \gets A^{\id{world}(\cdot, \cdot),
                  \id{random}(\cdot, \cdot)}(c, u)$; \\
      \RETURN $a$;
    \newline
    Function $\id{world}(\tau, \mu)$: \+ \\
      \IF $\tau \in \{\cookie{init}, \cookie{random}\}$ \THEN
        \RETURN $\bot$; \\
      $(\sigma, \rho) \gets W((w, c), \sigma, \tau, \mu)$; \\
      \RETURN $\rho$; \-
    \next
    Function $\id{random}(i, x)$: \+ \\
      $h \gets H_i(x)$; \\
      $(\sigma, \rho) \gets
        W((w, c), \sigma, \cookie{random}, (i, x, h))$; \\
      \RETURN $h$;
  \end{program}

  \caption{Interacting with a world: Interaction $A^{W, \mathcal{H}}$}
  \label{fig:sim-world}
\end{figure}

\subsubsection{Definitions}
We are now ready to begin making definitions.
\fi

\begin{definition}[Simulation security]
  \label{def:sim}
  Consider the game described above, with the initialization function~$I$,
  and the world~$W$: let $A$ be an adversary, and let~$U$ be an
  auxiliary-input function; let $S$ be a simulator, and let $D$ be a
  distinguisher. We define $D$'s \emph{advantage against $S$'s simulation of
  $A$'s interaction with~$W$ with auxiliary inputs provided by~$U$} to be
  \[ \Adv{sim}{W, I, S}(A, U, D) =
       \Pr[\Game{real}{W, I, S}(A, U, D) = 1] -
       \Pr[\Game{sim}{W, I, S}(A, U, D) = 1]
  \]
  where the games are as shown in figure~\ref{fig:sim}.
  Furthermore, we define the \emph{simulator's insecurity function} to be
  \[ \InSec{sim}(W, I, S;
       t_D, t_A, \mathcal{Q}_D, \mathcal{Q}_A, \mathcal{R}, \mathcal{U}) =
       \max_{D, A, U} \Adv{sim}{W, I, S}(A, U, D)
  \]
  where the maximum is taken over all distinguishers~$D$ running in
  time~$t_D$ and making at most $\mathcal{Q}_D$ random-oracle queries, and
  all adversaries~$A$ running in time~$t_A$, making at most $\mathcal{Q}_A$
  random-oracle queries, not exceeding the other stated resource
  bounds~$\mathcal{R}$ on its interaction with~$W$, and auxiliary-input
  functions producing output not exceeding the stated bounds~$\mathcal{U}$.
\end{definition}
\begin{remark}
  The usual definitions of zero-knowledge, for example, require the simulator
  to work for all choices of inputs (common, private and auxiliary), rather
  than for random choices. Our definition therefore looks weaker. Our proof
  of zero-knowledge actually carries through to the traditional
  stronger-looking definition. Critically, however, the standard
  universal quantification over inputs fails to capture deniability in the
  random oracle model, since the inputs can't then depend on the random
  oracle. Our formulation therefore actually gives \emph{stronger}
  deniability than the usual one.
\end{remark}

\begin{figure}
  \begin{program}
    $\Game{real}{W, I, S}(A, U, D)$: \+ \\
      $(w, c) \gets I()$; \\
      $u \gets U^{\mathcal{H}}(w, c)$; \\
      $a \gets A^{W(w, c), \mathcal{H}}(c, u)$; \\
      $b \gets D^{\mathcal{H}}(c, u, a)$; \\
      \RETURN $b$;
    \next
    $\Game{sim}{W, I, S}(A, U, D)$: \+ \\
      $(w, c) \gets I()$; \\
      $u \gets U^{\mathcal{H}}(w, c)$; \\
      $s \gets S^{A, \mathcal{H}}(c, u)$; \\
      $b \gets D^{\mathcal{H}}(c, u, s)$; \\
      \RETURN $b$;
  \end{program}

  \caption{Games for simulation: $\Game{real}{W, I}$ and $\Game{sim}{W, I}$}
  \label{fig:sim}
\end{figure}

\ifshort\else
\subsubsection{Fake-world simulators}
The simulators we shall be considering in the present paper are of a specific
type which we call `fake-world simulators'. They work by running the
adversary in a fake `cardboard cut-out' world, and attempting to extract
enough information from the adversary's previous interactions and random
oracle queries to maintain a convincing illusion.

That is, the behaviour of a fake-world simulator~$S$ is simply to allow the
adversary to interact with a `fake world'~$W'$, which was not given the
world's private input: there is some world $W'$ such that
\[ S^{A, \mathcal{H}}(c, u) \equiv A^{W'(u, c), \mathcal{H}}(c, u). \]
Fake-world simulators are convenient because they allow us to remove from
consideration the distinguisher~$D$, as the following definition shows.
\begin{definition}[Fake-world simulation security]
  \label{def:fakesim}
  Let $I$, $W$ and $U$ be as in definition~\ref{def:sim}. Let $A$ be an
  adversary which outputs a single bit. Let $S$ be a fake-world simulator.
  We define $A$'s \emph{advantage against $S$'s fake-world simulation of $W$
  with auxiliary inputs provided by~$U$} to be
  \begin{spliteqn*}
    \Adv{fw}{W, I, S}(A, U) =
      \Pr[(w, c) \gets I(); u \gets U^{\mathcal{H}}(w, c);
          b \gets A^{W(w, c), \mathcal{H}}(c, u) : b = 1] - {} \\
      \Pr[(w, c) \gets I(); u \gets U^{\mathcal{H}}(w, c);
          b \gets S^{A, \mathcal{H}}(c, u) : b = 1]
  \end{spliteqn*}
  Furthermore, we define the \emph{simulator's insecurity function} to be
  \[ \InSec{fw}(W, I, S;
       t, \mathcal{Q}, \mathcal{R}, \mathcal{U}) =
       \max_{A, U} \Adv{fw}{W, I, S}(A, U)
  \]
  where the maximum is taken over all adversaries~$A$ running in time~$t$,
  making at most $\mathcal{Q}$ random-oracle queries, not exceeding the other
  stated resource bounds~$\mathcal{R}$ on its interaction with~$W$, and
  auxiliary-input functions producing output not exceeding the stated
  bounds~$\mathcal{U}$.
\end{definition}
It remains for us to demonstrate that this is a valid way of analysing
simulators; the following simple proposition shows that this is indeed the
case.
\begin{proposition}[Fake-world simulation]
  \label{prop:fakesim}
  Let $I$ be an initialization function and let $W$ be a world. Then, for
  any fake-world simulator~$S$,
  \[ \InSec{sim}(W, I, S; t_D, t_A, \mathcal{Q}_D, \mathcal{Q}_A,
       \mathcal{R}, \mathcal{U}) \le
       \InSec{fw}(W, I, S; t_A + t_D, \mathcal{Q}_D + \mathcal{Q}_A,
       \mathcal{R}, \mathcal{U})
  \]
  (where addition of query bounds $\mathcal{Q}$ is done elementwise).
\end{proposition}
\begin{proof}
  Let $W$ and $I$ as in the proposition statement be given; also let a
  distinguisher~$D$ running in time~$t_D$ and making $\mathcal{Q}_D$
  random-oracle queries, an adversary~$A$ running in time~$t_A$ and making
  $\mathcal{Q}_A$ random-oracle queries and interacting with its world within
  the stated bounds~$\mathcal{R}$, an auxiliary-input function~$U$ satisfying
  the constraints~$\mathcal{U}$ on its output, and a fake-world simulator~$S$
  all be given.

  We construct an adversary~$B$ outputting a single bit as follows.
  \begin{program}
    Adversary $B^{W, \mathcal{H}}(c, u)$: \+ \\
      $a \gets A^{W, \mathcal{H}}(c, u)$; \\
      $b \gets D^{\mathcal{H}}(c, u, a)$; \\
      \RETURN $b$;
  \end{program}
  A glance at definitions \ref{def:sim} and~\ref{def:fakesim} and the
  resources used by $B$ shows that
  \[ \Adv{sim}{W, I, S}(A, U) = \Adv{fw}{W, I, S}(B, U)
       \le \InSec{fw}(W, I, S; t_D + t_A, \mathcal{Q}_D + \mathcal{Q}_A,
                      \mathcal{R}, \mathcal{U})
  \]
  as required.
\end{proof}
\fi

%%%--------------------------------------------------------------------------

\section{A zero-knowledge identification scheme}
\label{sec:zk-ident}


\subsection{Description}

Here we present a simple zero-knowledge identification scheme.  Fix some
group $G$ with prime order $q = \#G$.  Suppose Alice chooses a private key $x
\inr \gf{q}$, and publishes the corresponding public key $X = x P$.  Let
$H_I\colon G^2 \to \Bin^{\ell_I}$ be a secure hash function.  Here's a simple
protocol which lets her prove her identity to Bob.
\begin{enumerate}
\item Bob selects a random $r \inr \gf{q}$, and computes $R = r P$, $Y = r X$,
  and $c = r \xor H_I(R, Y)$.  He sends the pair $(R, c)$ to Alice as his
  \emph{challenge}.
\item Alice receives $(R, c)$.  She computes $Y' = x R$ and $r' = c \xor
  H_I(R, Y')$, and checks that $R = r' P$.  If so, she sends $Y'$ as her
  \emph{response}; otherwise she sends $\bot$.
\item Bob receives $Y'$ from Alice.  He checks that $Y' = Y$.  If so, he
  accepts that he's talking to Alice; otherwise he becomes suspicious.
\end{enumerate}
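To make the message flow concrete, the steps above can be sketched in a few lines of Python.  This is our own toy illustration, not part of the protocol definition: it models $G$ as the additive group of integers modulo a prime (trivially insecure, since discrete logs there are easy to compute) and uses SHA-256 as a stand-in for $H_I$, but it exercises Alice's validity check and Bob's verification exactly as described.

```python
# Toy sketch of one Wrestlers identification run (illustration only, NOT
# secure): G is the additive group of integers mod a prime q, so "k P"
# means k*P mod q, and SHA-256 stands in for the hash H_I.
import hashlib
import secrets

q = (1 << 127) - 1           # a Mersenne prime; toy group order
P = 2                        # toy generator

def mul(k, Q):               # "scalar multiplication" in the toy group
    return (k * Q) % q

def H_I(R, Y):               # stand-in for the hash H_I: G^2 -> bits
    return int.from_bytes(hashlib.sha256(f"{R},{Y}".encode()).digest(), "big")

def challenge(X):            # Bob: build (R, c); remember Y for verification
    r = secrets.randbelow(q)
    R, Y = mul(r, P), mul(r, X)
    return (R, r ^ H_I(R, Y)), Y

def response(R, c, x):       # Alice: answer only if the challenge checks out
    Y1 = mul(x, R)
    r1 = c ^ H_I(R, Y1)
    return Y1 if mul(r1, P) == R else None

x = secrets.randbelow(q); X = mul(x, P)      # Alice's key pair
(R, c), Y = challenge(X)
assert response(R, c, x) == Y                # completeness: Bob accepts
assert response(R, c ^ 1, x) is None         # mangled check value: refused
```

The final assertion shows the role of the check value $c$: flipping a single bit of it leaves Alice's recovered $r'$ different from $r$, so her check $R = r' P$ fails and she returns nothing.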
We name this the Wrestlers Identification Protocol in~$G$, $\Wident^G$ (we
drop the superscript to refer to the protocol in general, or when no
ambiguity is likely to result).  A summary is shown in
figure~\ref{fig:wident}.

\begin{figure}
  \begin{description}
  \item[Setup] Group $G = \langle P \rangle$; $\#G = q$ is prime.
    $H_I(\cdot, \cdot)$ is a secure hash.
  \item[Private key] $x \inr \gf{q}$.
  \item[Public key] $X = x P$.
  \item[Challenge] $(R, c)$ where $r \inr \gf{q}$, $R = r P$, $c = r \xor
    H_I(R, r X)$.
  \item[Response] $x R = r X$ if $R = (c \xor H_I(R, x R)) P$; otherwise
    $\bot$.
  \end{description}

  \caption{Summary of the Wrestlers Identification Protocol, $\Wident$}
  \label{fig:wident}
\end{figure}


\subsection{Security}

In order to evaluate the security of our protocol, we present a formal
description of the algorithms involved in figure~\ref{fig:wident}.  Here, the
hash function $H_I(\cdot, \cdot)$ is modelled as a random oracle.

\begin{figure}
  \begin{program}
    Function $\id{setup}()$: \+ \\
      $x \getsr \gf{q}$; \\
      $X \gets x P$; \\
      \RETURN $(x, X)$;
    \ifshort\newline\else\next\fi
    Function $\id{challenge}^{H_I(\cdot, \cdot)}(X)$: \+ \\
      $r \getsr \gf{q}$; \\
      $R \gets r P$; $Y \gets r X$; \\
      $h \gets H_I(R, Y)$; $c \gets r \xor h$; \\
      \RETURN $(Y, R, c)$; \- \\[\medskipamount]
    Function $\id{verify}(Y, Y')$: \+ \\
      \IF $Y' = Y$ \THEN \RETURN $1$; \\
      \RETURN $0$;
    \next
    Function $\id{response}^{H_I(\cdot, \cdot)}(R, c, x)$: \+ \\
      $Y' \gets x R$; \\
      $h \gets H_I(R, Y')$; $r' \gets c \xor h$; \\
      \IF $R \ne r' P$ \THEN \RETURN $\bot$; \\
      \RETURN $Y'$;
  \end{program}

  \caption{Functions implementing $\Wident$ in the random oracle model}
  \label{fig:wident-ro}
\end{figure}

\subsubsection{Completeness}
Suppose that Bob really is talking to Alice.  Note that $Y' = x R = x (r P) =
r (x P) = r X = Y$.  Hence $r' = c \xor H_I(R, Y') = c \xor H_I(R, Y) = r$,
so $r' P = r P = R$, so Alice returns $Y' = Y$ to Bob.  Therefore $\Wident$
is \emph{complete}: if Bob really is communicating with Alice then he
accepts.

\subsubsection{Soundness}
We next show that impersonating Alice is difficult.  The natural way to prove
this would be to give an adversary a challenge and prove that its probability
of giving a correct response is very small.  However, we prove a stronger
result: we show that if the adversary can respond correctly to any of a large
collection of challenges then it can solve the MCDH problem.

Consider the game $\Game{imp-$n$}{\Wident}$ shown in
figure~\ref{fig:wident-sound}.  An adversary's probability of successfully
impersonating Alice in our protocol, by correctly responding to any one of
$n$ challenges, is exactly its probability of winning the game (i.e., causing
it to return $1$).

\begin{figure}
  \begin{program}
    $\Game{imp-$n$}{\Wident}(A)$: \+ \\
      $H_I \getsr \Func{G^2}{\Bin^{\ell_I}}$; \\
      $(x, X) \gets \id{setup}()$; \\
      $\id{win} \gets 0$; \\
      $\Xid{R}{map} \gets \emptyset$; \\
      $\mathbf{c} \gets \id{challenges}(n)$; \\
      $(R', Y') \gets A^{H_I(\cdot, \cdot), \id{check}(\cdot, \cdot)}
        (X, \mathbf{c})$; \\
      \RETURN $\id{win}$;
    \newline
    Function $\id{challenges}(n)$: \+ \\
      \FOR $i \in \Nupto{n}$ \DO \\ \ind
        $(Y, R, c) \gets \id{challenge}^{H_I(\cdot, \cdot)}(X)$; \\
        $\Xid{R}{map} \gets \Xid{R}{map} \cup \{ R \mapsto Y \}$; \\
        $\mathbf{c}[i] \gets (R, c)$; \- \\
      \RETURN $\mathbf{c}$; \next
    Function $\id{check}(R', Y')$: \+ \\
      \IF $R' \notin \dom \Xid{R}{map}$ \THEN \RETURN $0$; \\
      $Y \gets \Xid{R}{map}(R')$; \\
      \IF $\id{verify}(Y, Y')$ \THEN \\ \ind
        $\id{win} \gets 1$; \\
        \RETURN $1$; \- \\
      \RETURN $0$;
  \end{program}

  \caption{Soundness of $\Wident$: $\Game{imp-$n$}{\Wident}(A)$}
  \label{fig:wident-sound}
\end{figure}

\begin{theorem}[Soundness of $\Wident$]
  \label{thm:wident-sound}
  Let $A$ be any adversary running in time $t$ and making $q_I$ queries to
  its random oracle, and $q_V$ queries to its verification oracle.  Let $G$
  be a cyclic group.  Then
  \[ \Pr[\Game{imp-$n$}{\Wident^G}(A) = 1] \le
       \InSec{mcdh}(G; t', q_I + q_V)
  \]
  where $t' = t + O(q_I) + O(q_V)$.
\end{theorem}
\begin{remark}
  Note that the security bound here is \emph{independent} of the value of
  $n$.
\end{remark}
\begin{longproof}{The proof of this theorem can be found in the full version
    of the paper.}
  We prove this by defining a sequence of games $\G{i}$.  The first will be
  the same as the attack game $\Game{imp-$n$}{\Wident}(A)$ and the others
  will differ from it in minor ways.  In each game $\G{i}$, let $S_i$ be the
  event that $A$ wins the game -- i.e., that it successfully impersonates the
  holder of the private key~$x$.

  Let game $\G0$ be the attack game $\Game{imp-$n$}{\Wident}(A)$, and let
  $(R', Y')$ be the output of $A$ in the game.

  We define a new game $\G1$ which is the same as $\G0$, except that we query
  the random oracle $H_I$ at $(R', Y')$ whenever the adversary queries
  $\id{check}(R', Y')$.  (We don't charge the adversary for this.)  This
  obviously doesn't affect the adversary's probability of winning, so
  $\Pr[S_1] = \Pr[S_0]$.

  Game $\G2$ is like $\G1$, except that we change the way we generate
  challenges and check their responses.  Specifically, we use the new
  functions $\id{challenges}_2$ and $\id{check}_2$, as shown in
  figure~\ref{fig:wident-sound-2}.

  \begin{figure}
    \begin{program}
      Function $\id{challenges}_2(n)$: \+ \\
        $r^* \getsr I$; $R^* \gets r^* P$; $Y^* \gets r^* X$; \\
        \FOR $i \in \Nupto{n}$ \DO \\ \ind
          $r \getsr I$; $R \gets r R^*$; $Y \gets r Y^*$; \\
          $h \gets H_I(R, Y)$; $c \gets r \xor h$; \\
          $\Xid{R}{map} \gets \Xid{R}{map} \cup \{ R \mapsto r \}$; \\
          $\mathbf{c}[i] \gets (R, c)$; \- \\
        \RETURN $\mathbf{c}$;
      \next
      Function $\id{check}_2(R', Y')$: \+ \\
        \IF $R' \notin \dom \Xid{R}{map}$ \THEN \RETURN $0$; \\
        $r \gets \Xid{R}{map}(R')$; \\
        \IF $\id{verify}(Y^*, Y'/r)$ \THEN \\ \ind
          $\id{win} \gets 1$; \\
          \RETURN $1$; \- \\
        \RETURN $0$;
    \end{program}

    \caption{Soundness of $\Wident$: $\id{challenges}_2$ and $\id{check}_2$}
    \label{fig:wident-sound-2}
  \end{figure}

  While we're generating and checking challenges in a more complicated way
  here, we're not actually changing the distribution of the challenges, or
  changing the winning condition.  Hence $\Pr[S_2] = \Pr[S_1]$.

  Now we change the rules again.  Let $\G3$ be the same as $\G2$ except that
  we change the winning condition.  Instead, we say that the adversary wins
  if any of the queries to its random oracle $H_I(R', Y')$ would be a correct
  response -- i.e., $\id{check}_2(R', Y')$ would return $1$.  Since we query
  the oracle on $(R', Y')$ on its behalf at the end of the game, no adversary
  can do worse in this game than it does in $\G2$, so we have $\Pr[S_3] \ge
  \Pr[S_2]$.  (It's not hard to see that this only helps quite stupid
  adversaries.  We can transform any adversary into one for which equality
  holds here.)

  Finally, let $\G4$ be the same as $\G3$ except that we change the way we
  generate challenges again: rather than computing $h$ and setting $c \gets h
  \xor r$, we just choose $c$ at random.  Specifically, we use the new
  function, $\id{challenges}_4$, shown in figure~\ref{fig:wident-sound-4}.

  \begin{figure}
    \begin{program}
      Function $\id{challenges}_4(n)$: \+ \\
        $r^* \getsr I$; $R^* \gets r^* P$; $Y^* \gets r^* X$; \\
        \FOR $i \in \Nupto{n}$ \DO \\ \ind
          $r \getsr I$; $R \gets r R^*$; \\
          $c \getsr \Bin^{\ell_I}$; \\
          $\Xid{R}{map} \gets \Xid{R}{map} \cup \{ R \mapsto r \}$; \\
          $\mathbf{c}[i] \gets (R, c)$; \- \\
        \RETURN $\mathbf{c}$;
    \end{program}

    \caption{Soundness of $\Wident$: $\id{challenges}_4$}
    \label{fig:wident-sound-4}
  \end{figure}

  Since $H_I(\cdot, \cdot)$ is a random function, the adversary can only
  distinguish $\G4$ from $\G3$ if it queries its random oracle at some $(R, r
  Y^*)$.  But if it does this, then by the rule introduced in $\G3$ it has
  already won.  Therefore we must have $\Pr[S_4] = \Pr[S_3]$.

  Our $\id{challenges}_4$ function is interesting, since it doesn't actually
  make use of $r^*$ or $Y^*$ when generating its challenges.  This gives us
  the clue we need to bound $\Pr[S_4]$: we can use adversary $A$ to solve the
  multiple-guess Diffie-Hellman problem in $G$ by simulating the game $\G4$.
  Specifically, we define the adversary $B$ as shown in
  figure~\ref{fig:wident-sound-cdh}.  That is, for each query $A$ makes to
  its random oracle at a new pair $(R', Y')$, we see whether this gives us
  the answer we're looking for.  We have $\Pr[S_0] \le \Pr[S_4] =
  \Succ{mcdh}{G}(B) \le \InSec{mcdh}(G; t', q_I + q_V)$ as required.

  \begin{figure}
    \begin{program}
      Adversary $B^{V(\cdot)}(X, R^*)$: \+ \\
        $F \gets \emptyset$; $\Xid{R}{map} \gets \emptyset$; \\
        \FOR $i \in \Nupto{n}$ \DO \\ \ind
          $r \getsr I$; $R \gets r R^*$; $c \getsr \Bin^{\ell_I}$; \\
          $\Xid{R}{map} \gets \Xid{R}{map} \cup \{ R \mapsto r \}$; \\
          $\mathbf{c}[i] \gets (R, c)$; \- \\
        $(R', Y') \gets A^{H_I(\cdot, \cdot), \id{check}(\cdot, \cdot)}
          (X, \mathbf{c})$; \\
        \IF $Y' \neq \bot$ \THEN $H_I(R', Y')$;
      \next
      Oracle $H_I(R', Y')$: \+ \\
        \IF $(R', Y') \in \dom F$ \THEN \\ \quad
          $h \gets F(R', Y')$; \\
        \ELSE \\ \ind
          $\id{check}(R', Y')$; \\
          $h \getsr \Bin^{\ell_I}$; \\
          $F \gets F \cup \{ (R', Y') \mapsto h \}$; \- \\
        \RETURN $h$;
      \- \\[\medskipamount]
      Oracle $\id{check}(R', Y')$: \+ \\
        \IF $R' \in \dom \Xid{R}{map}$ \THEN
          $V(Y'/\Xid{R}{map}(R'))$;
    \end{program}

    \caption{Soundness of $\Wident$: reduction from MCDH}
    \label{fig:wident-sound-cdh}
  \end{figure}
\end{longproof}

\subsubsection{Zero-knowledge}
Finally we must prove that $\Wident$ is (statistical) zero-knowledge -- i.e.,
that, except with very small probability, Bob learns nothing of use to him
except that he's interacting with Alice.  To do this, we show that, for any
algorithm $B$ which Bob might use to produce his challenge to the real Alice,
there exists a simulator $S$ which produces transcripts distributed very
similarly to transcripts of real conversations between $B$ and Alice, the
difference being that $S$ doesn't know Alice's key.  We shall show that the
statistical difference between the two distributions is $2^{-\ell_I}$.

The intuition here is that Bob ought to know what answer Alice is going to
give him when he constructs his challenge.  This is certainly true if he's
honest: his challenge is $R = r P$ for some $r$ he knows, so he won't learn
anything useful when Alice responds with $x R = r X$.  However, if Bob sends
a challenge $R$ when he doesn't know the corresponding $r$, he learns
something potentially useful.  The accompanying check value $c = r \xor
H_I(R, r X)$ keeps him honest.

To show this, we present an \emph{extractor} which, given any challenge $(R,
c)$ Bob can construct, and his history of random-oracle queries, either
returns a pair $(r, Y)$ such that $R = r P$ and $Y = r X$, or $\bot$;
moreover, if it returns $\bot$, the probability that Alice returns a response
$Y' \ne \bot$ given the challenge $(R, c)$ is $2^{-\ell_I}$.  We can, of
course, readily convert this extractor into a simulator to prove the
zero-knowledge property of our protocol.

We shall actually consider a slightly more complex setting.  We grant Bob
access to an oracle which produces random, correctly-formed challenges.  We
require this to model the legitimate challenges of other parties when we
analyse the security of our key exchange protocol.

\begin{definition}[Discrete-log extractors]
  Let $T$, $B$ be randomized algorithms.  Define the game
  $\Game{dl-ext}{G}(T, B)$ as shown in figure~\ref{fig:dlext}.  The
  \emph{success probability of $T$ as a discrete-log extractor against $B$}
  is defined as
  \[ \Succ{dl-ext}{G}(T, B) = \Pr[\Game{dl-ext}{G}(T, B) = 1]. \]
\end{definition}

\begin{figure}
  \begin{program}
    $\Game{dl-ext}{G}(T, B):$ \+ \\
      $H_I \getsr \Func{G^2}{\Bin^{\ell_I}}$;
      $Q_H \gets \emptyset$; $Q_C \gets \emptyset$; \\
      $(x, X) \gets \id{setup}()$; \\
      $(R, c) \gets B^{\Xid{H_I}{trap}(\cdot, \cdot), C()}(x, X)$; \\
      $(r, Y) \gets T(R, c, Q_H)$; \\
      $Y' \gets x R$; $h' \gets H_I(R, Y')$; $r' \gets c \xor h'$; \\
      \IF $r \ne \bot$ \THEN \\ \quad
        \IF $Y = \bot \lor R \ne r P \lor Y \ne Y'$ \THEN \RETURN $0$; \\
      \IF $R = r' P$ \THEN $(r^*, Y^*) \gets (r', Y')$; \\
      \ELSE $(r^*, Y^*) \gets (\bot, \bot)$; \\
      \IF $(R, c) \in Q_C$ \THEN \RETURN $1$; \\
      \IF $(r, Y) = (r^*, Y^*)$ \THEN \RETURN $1$; \\
      \RETURN $0$;
    \next
    Oracle $\Xid{H_I}{trap}(R', Y')$: \+ \\
      $h \gets H_I(R', Y')$; \\
      $Q_H \gets Q_H \cup \{(R', Y', h)\}$; \\
      \RETURN $h$; \- \\[\medskipamount]
    Oracle $C()$: \+ \\
      $r \getsr \gf{q}$; \\
      $R \gets r P$; $c \gets r \xor H_I(R, r X)$; \\
      $Q_C \gets Q_C \cup \{(R, c)\}$; \\
      \RETURN $(R, c)$
  \end{program}

  \caption{Discrete log extraction game: $\Game{dl-ext}{G}(T, B)$}
  \label{fig:dlext}
\end{figure}

Let's unpack this definition slightly.  We make the following demands of our
extractor.
\begin{itemize}
\item It is given a bare `transcript' of $B$'s execution.  In particular, it
  is given only its output and a list of $B$'s random-oracle queries in no
  particular order.
\item While the extractor is not given the private key~$x$, the adversary~$B$
  is given the private key.
\item We require that, if the extractor produces values $r, Y \ne \bot$ then
  $r$ and $Y$ are \emph{correct}; i.e., that $R = r P$ and $Y = x R$.
\item The extractor is explicitly \emph{not} given the outputs of the
  challenge-generation oracle $C()$, nor of the random-oracle queries issued
  by $C()$.  However, we allow the extractor to fail (i.e., return $\bot$) if
  $B$ simply parrots one of its $C$-outputs.
\item The extractor is allowed -- indeed \emph{required} -- to fail if the
  challenge $(R, c)$ is \emph{invalid} (i.e., Alice would return $\bot$ given
  the challenge).
\end{itemize}
The resulting definition bears a striking similarity to the concept of
\emph{plaintext awareness} in \cite{bellare-1998:pub-enc-notions}.

Such an extractor indeed exists, as the following lemma states.
\begin{lemma}[Effectiveness of extractor $T_\Wident$]
  \label{lem:dlext}
  There exists a \emph{universal discrete-log extractor} $T_\Wident$, shown
  in figure~\ref{fig:twident}, such that, for any algorithm $B$,
  \[ \Succ{dl-ext}{G}(T_\Wident, B) \ge 1 - \frac{1}{2^{\ell_I}}. \]
  Moreover, if $B$ issues at most $q_H$ random-oracle queries, then the
  running time of $T_\Wident$ is $O(q_H)$.
\end{lemma}
\ifshort
The proof of this lemma is given in the full version of this paper.
\else
We prove this result at the end of the section.  For now, let us see how to
prove that $\Wident$ is zero-knowledge.
\fi

\begin{figure}
  \begin{program}
    Extractor $T_\Wident(R, c, Q_H)$: \+ \\
      \FOR $(R', Y', h)$ \IN $Q_H$ \DO \\ \ind
        $r \gets h \xor c$; \\
        \IF $R = R' = r P \land Y' = r X$ \THEN \RETURN $(r, Y')$; \- \\
      \RETURN $(\bot, \bot)$;
  \end{program}

  \caption{The discrete-log extractor $T_\Wident$}
  \label{fig:twident}
\end{figure}

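The loop in $T_\Wident$ is simple enough to exercise directly.  Here is a toy sketch in Python -- our own illustration, using the same insecure integers-mod-$q$ stand-in for $G$ one might use to prototype the protocol -- showing the extractor recovering $(r, Y)$ from an honest verifier's single random-oracle query, and failing on an invalid challenge, as the lemma requires.

```python
# Toy sketch of the extractor T_Wident (illustration only): G is modelled
# as the additive group of integers mod a prime q, so discrete logs are
# easy, but the extractor's bookkeeping is exactly as in the figure.
import hashlib
import secrets

q = (1 << 127) - 1           # a Mersenne prime; toy group order
P = 2                        # toy generator

def H_I(R, Y):               # stand-in for the random oracle H_I
    return int.from_bytes(hashlib.sha256(f"{R},{Y}".encode()).digest(), "big")

def T_Wident(R, c, Q_H, X):
    # Scan B's transcript for a query (R', Y', h) explaining (R, c):
    # r = h xor c must satisfy R = R' = r P and Y' = r X; otherwise fail.
    for R1, Y1, h in Q_H:
        r = h ^ c
        if R == R1 == (r * P) % q and Y1 == (r * X) % q:
            return r, Y1
    return None, None

# An honest verifier makes exactly one H_I query; extraction recovers r.
x = secrets.randbelow(q); X = (x * P) % q
r = secrets.randbelow(q)
R, Y = (r * P) % q, (r * X) % q
h = H_I(R, Y)
Q_H = [(R, Y, h)]
c = r ^ h
assert T_Wident(R, c, Q_H, X) == (r, Y)             # extraction succeeds
assert T_Wident(R, c ^ 1, Q_H, X) == (None, None)   # invalid challenge: fails
```

The second assertion mirrors the requirement that the extractor fail on invalid challenges: a mangled $c$ yields a candidate $r$ that no longer satisfies $R = r P$.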
We use the set-up described in section~\ref{sec:sim}.  Our initialization
function~$I_\Wident$ just chooses a random $x \in \gf{q}$ as the world
private input and sets $X = x P$ as the common input.  In the `real world',
the adversary is allowed to submit a (single) challenge to the prover; it is
given the prover's response, and must then compute its output.  This is shown
on the left hand side of figure~\ref{fig:wident-sim}.

The zero-knowledge property of the scheme is described by the following
theorem.
\begin{theorem}[Statistical zero-knowledge of $\Wident$]
  \label{thm:wident-zk}
  Let $I_\Wident$, $W_\Wident$ and $S_\Wident$ be the real-prover world and
  simulator shown in figure~\ref{fig:wident-sim}.  Then, for any~$t$,
  $q_I$ and $q_C$,
  \[ \InSec{sim}(W_\Wident, I_\Wident, S_\Wident; t, q_I, q_C, 0) \le
     \frac{q_C}{2^{\ell_I}}
  \]
  where $q_C$ is the maximum number of challenges allowed by the adversary.
\end{theorem}
\begin{longproof}{}
  The simulator simply uses the extractor~$T_\Wident$ to extract the answer
  from the adversary's history of random oracle queries.  Observe that
  $S_\Wident$ is a fake-world simulator.  By lemma~\ref{lem:dlext}, the
  extractor fails with probability only $2^{-\ell_I}$.  The theorem follows
  by a simple union bound and proposition~\ref{prop:fakesim}.
\end{longproof}

%\ifshort\else
\begin{figure}
  \begin{program}
    Initialization function $I_\Wident()$: \+ \\
      $x \getsr \gf{q}$; \\
      $X \gets x P$; \\
      \RETURN $(x, X)$;
    \- \\[\medskipamount]
    Real-prover world $W_\Wident^{H_I(\cdot, \cdot)}
        ((x, X), \sigma, \tau, \mu)$: \+ \\
      \IF $\tau = \cookie{challenge}$ \THEN \\ \ind
        $(R, c) \gets \mu$; \\
        $Y \gets \id{response}^{H_I(\cdot, \cdot)}(R, c, x)$; \\
        \RETURN $(1, Y)$; \- \\
      \ELSE \\ \ind
        \RETURN $(\sigma, \bot)$;
    \next
    Simulator $S_\Wident$'s fake world \\
        \hspace{1in} $W_{\text{sim}}^{H_I(\cdot, \cdot)}
        ((X, u), \sigma, \tau, \mu)$: \+ \\
      \IF $\tau = \cookie{init}$ \THEN \\ \ind
        \RETURN $(\emptyset, \emptystring)$; \- \\
      $Q_H \gets \sigma$; \\
      \IF $\tau = \cookie{challenge}$ \THEN \\ \ind
        $(R, c) \gets \mu$; \\
        $(r, Y) \gets T_\Wident(R, c, Q_H)$; \\
        \RETURN $(Q_H, Y)$; \- \\
      \ELSE \IF $\tau = \cookie{random}$ \THEN \\ \ind
        $(i, (R', Y'), h) \gets \mu$; \\
        $Q_H \gets Q_H \cup \{(R', Y', h)\}$; \\
        \RETURN $(Q_H, \emptystring)$; \- \\
      \ELSE \\ \ind
        \RETURN $(\sigma, \bot)$;
  \end{program}

  \caption{Real-prover and simulator for zero-knowledge of $\Wident$}
  \label{fig:wident-sim}
\end{figure}
%\fi

\ifshort\else
We now return to proving that the extractor $T_\Wident$ functions as claimed.
The following two trivial lemmata will be useful, both now and later.
\begin{lemma}[Uniqueness of discrete-logs]
  \label{lem:unique-dl}
  Let $G = \langle P \rangle$ be a cyclic group.  For any $X \in G$ there is
  a unique $x \in \gf{q}$ where $X = x P$.
\end{lemma}
\begin{proof}
  Certainly such an $x$ exists, since $G$ is cyclic and finite.  Suppose $X =
  x P = x' P$: then $0 = x P - x' P = (x - x') P$.  Hence $(x - x')$ is a
  multiple of $q$, i.e., $x = x'$.
\end{proof}
\begin{lemma}[Uniqueness of check values]
  \label{lem:unique-c}
  Let $G = \langle P \rangle$ be a cyclic group of prime order $q$; let $H_I$
  be a function $H_I\colon \Bin^{2\ell_G} \to \Bin^{\ell_I}$.  Fix some $x
  \in \gf{q}$ and define the set
  \[ V_x = \bigl\{\, (R, c) \in G \times \Bin^{\ell_I} \bigm|
       R = \bigl( c \xor H_I(R, x R) \bigr) P \,\bigr\}.
  \]
  Then, for any $R$, $c$, $c'$, if $(R, c) \in V_x$ and $(R, c') \in V_x$
  then $c = c'$.
\end{lemma}
\begin{proof}
  From lemma~\ref{lem:unique-dl}, we see that there is a unique $r \in \gf{q}$
  for which $R = r P$.  Now, if $(R, c) \in V_x$, we must have $r = c \xor
  H_I(R, x R)$.  It follows that $c = r \xor H_I(R, x R)$.
\end{proof}

\begin{proof}[Proof of lemma~\ref{lem:dlext}]
  Let $B$ be any randomized algorithm, and let $(R, c, Q_H)$ be as given to
  the extractor by $\Game{dl-ext}{G}(T_\Wident, B)$.  Let the quantities
  $H_I$, $Q_C$, $r$, $r'$, $x$ and $X$ be as in that game.

  Suppose that the extractor returns values $(r, Y) \ne (\bot, \bot)$.  Let
  $h = r \xor c$; then there must be a query $(R, Y, h) \in Q_H$, and we have
  $R = r P$ and $Y = r X = r (x P) = x (r P) = x R = Y'$, so the extractor's
  output must be correct unless it fails.

  Furthermore, in the case where the extractor did not fail, we have $h =
  H_I(R, Y) = H_I(R, Y')$ and $c = r \xor h$, so the challenge was valid.
  Therefore, if the challenge was invalid, the extractor will fail.

  We now deal with the challenge-generation oracle.  Suppose that $(R, c')
  \in Q_C$ for some $c'$.  Now, if $c = c'$ then $(R, c')$ is a repeat of
  some challenge from the challenge-generation oracle, and the extractor is
  permitted to fail.  On the other hand, suppose $c \ne c'$; then, the
  challenge $(R, c)$ must be invalid by lemma~\ref{lem:unique-c}, so the
  extractor is required to fail, and we have established that indeed it will.
  From now on, suppose that $R$ is distinct from all the $R$-values returned
  by $C()$.

  Let $Y = x R$.  Suppose that $B$ queried its random oracle at $(R, Y)$.
  Let $h = H_I(R, Y)$, so $r' = c \xor h$.  If the challenge is valid then
  $R = r' P$; therefore $Y = x R = x r' P = r' X$, so we have $(R, Y, h) \in
  Q_H$ with $R = r P$ and $Y = r X$.  Hence the extractor returns $r = r'$ as
  required.

  It remains to deal with the case where there is no random-oracle query at
  $(R, Y)$.  But then $h = H_I(R, Y)$ is uniformly distributed, and
  independent of the entire game up to this point.  Let $r$ be the correct
  discrete log of $R$; by lemma~\ref{lem:unique-dl} there is only one
  possible value.  The extractor always fails under these circumstances, but
  a correct responder would reply with probability
  \[ \Pr[h = c \xor r] = \frac{1}{2^{\ell_I}}. \]
  This concludes the proof.
\end{proof}
\begin{remark}
  Note that the fact that the algorithm~$B$ was given the private key is
  irrelevant to the above argument.  However, we shall need this property
  when we come to prove deniability for the key-exchange protocol.
\end{remark}
\begin{remark}
  It's easy to see from the above proof that the extractor works flawlessly
  on the `honest verifier' algorithm $\id{challenge}$ shown in
  figure~\ref{fig:wident-ro}.  This shows that $\Wident$ is perfect
  zero-knowledge against honest verifiers.  We're much more interested in
  dishonest verifiers, though.
\end{remark}
\fi


\ifshort\else
\subsection{An identity-based identification scheme}
\label{sec:wident-id}

Boneh and Franklin \cite{boneh-franklin-2003:ibe-weil-pairing} showed how to
construct an identity-based encryption scheme using bilinear pairings.  The
resulting encryption scheme looks somewhat like a pairing-based version of
ElGamal's encryption scheme \cite{elgamal-1985:dlog-enc-sign}.  We can
easily apply their techniques to our identification protocol, and thereby
obtain an identity-based identification scheme.  Providing the necessary
formalisms to prove theorems analogous to our theorems~\ref{thm:wident-sound}
and~\ref{thm:wident-zk} would take us too far from our objectives; but given
appropriate security notions, we can readily adapt our existing proofs to the
new setting.

\subsubsection{Bilinear pairings}
Before we describe the necessary modifications to the protocol, we first give
a (very brief!) summary of cryptographic pairings.  (The Boneh--Franklin
paper \cite{boneh-franklin-2003:ibe-weil-pairing} gives more detail; also
\cite{menezes-2005:intro-pairing-crypto} provides a useful introduction to
the topic.)

Let $(G, +)$, $(G', +)$ and $(G_T, \times)$ be cyclic groups with prime order
$q$; let $P \in G$ and $P' \in G'$ be elements of order $q$ in $G$ and $G'$
respectively.  We say that a mapping $\hat{e}\colon G \times G' \to G_T$ is a
\emph{non-degenerate bilinear pairing} if it satisfies the following
properties.
\begin{description}
\item[Bilinearity] For all $R \in G$ and $S', T' \in G'$, we have $\hat{e}(R,
  S' + T') = \hat{e}(R, S')\,\hat{e}(R, T')$; and for all $R, S \in G$ and $T'
  \in G'$ we have $\hat{e}(R + S, T') = \hat{e}(R, T')\,\hat{e}(S, T')$.
\item[Non-degeneracy] $\hat{e}(P, P') \ne 1$.
\end{description}
For practical use, we also want $\hat{e}(\cdot, \cdot)$ to be efficient to
compute.  The reader can verify that $\hat{e}(a P, b P') = \hat{e}(P,
P')^{ab}$.  It is permitted for the two groups $G$ and $G'$ to be equal.

We require a different intractability assumption, specifically that the
\emph{bilinear} Diffie-Hellman problem (BDH) -- given $(a P, b P, a P', c P')
\in G^2 \times G'^2$, find $\hat{e}(P, P')^{abc} \in G_T$ -- is difficult.
This implies the difficulty of the computational Diffie-Hellman problem in
all three of $G$, $G'$ and~$G_T$.

\subsubsection{The identity-based scheme}
We need a trusted authority; following \cite{schneier-1996:applied-crypto} we
shall call him Trent.  Trent's private key is $t \in \gf{q}$; his public key
is $T = t P$.

Finally, we need cryptographic hash functions $H_I\colon G \times G_T \to
\Bin^{\ell_I}$ and $\Hid\colon \Bin^* \to G'$; a formal security analysis
would model these as random oracles.

Alice's public key is $A = \Hid(\texttt{Alice}) \in G'$.  Her private key is
$K_A = t A \in G'$ -- she needs Trent to give this to her.  Bob can interact
with Alice in order to verify her identity as follows.
\begin{enumerate}
\item Bob computes $\gamma_A = \hat{e}(T, A) \in G_T$.  (He can do this once
  and store the result if he wants, but it's not that onerous to work it out
  each time.)
\item Bob chooses $r \inr \gf{q}$, and sets $R = r P$.  He also computes
  $\psi = \gamma_A^r$, $h = H_I(R, \psi)$ and $c = r \xor h$.  He sends his
  challenge $(R, c)$ to Alice.
\item Alice receives $(R', c')$.  She computes $\psi' = \hat{e}(R', K_A)$,
  $h' = H_I(R', \psi')$, and $r' = c' \xor h'$.  She checks that $R' = r'
  P$; if so, she sends $\psi'$ back to Bob; otherwise she refuses to talk to
  him.
\item Bob receives $\psi'$.  If $\psi = \psi'$ then he accepts that he's
  talking to Alice.
\end{enumerate}
This works because $\psi = \gamma_A^r = \hat{e}(T, A)^r = \hat{e}(t P, A)^r =
\hat{e}(r P, A)^t = \hat{e}(R, t A) = \psi'$.

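The identity above can be checked mechanically with a faked pairing.  The sketch below is purely our own illustration: it represents $G$ and $G'$ as integers mod $q$ with $P = P' = 1$, takes $G_T$ as the order-$q$ subgroup of $\Z_p^*$, and defines $\hat{e}(a P, b P') = g^{ab}$ directly from the discrete logs (which is only possible because the toy parameters make logs trivial); the identity hash and all the parameter values are stand-ins.

```python
# Toy run of the identity-based variant with a faked pairing (illustration
# only, totally insecure).  q = 11, p = 23 (q divides p - 1), and g = 2 has
# order 11 mod 23, so e(aP, bP') = g^(a*b) mod p is bilinear by construction.
import hashlib

q, p, g = 11, 23, 2

def e(R, S):                 # toy pairing: elements "are" their dlogs (P = 1)
    return pow(g, (R * S) % q, p)

def H_I(R, psi):             # stand-in hash H_I: G x G_T -> bits
    return int.from_bytes(hashlib.sha256(f"{R},{psi}".encode()).digest(), "big")

t = 7; T = t % q             # Trent's key pair (T = t P with P = 1)
A = 5                        # stand-in for Hid("Alice") in G'
K_A = (t * A) % q            # Alice's private key K_A = t A

gamma_A = e(T, A)            # Bob: gamma_A = e(T, A) = g^(t a)
r = 3; R = r % q
psi = pow(gamma_A, r, p)     # psi = gamma_A^r = g^(r t a)
c = r ^ H_I(R, psi)

psi1 = e(R, K_A)             # Alice: psi' = e(R, K_A) = g^(r t a)
r1 = c ^ H_I(R, psi1)
assert r1 % q == R           # Alice's check R' = r' P (here P = 1)
assert psi1 == psi           # Bob accepts: psi = psi'
```

Both sides arrive at $g^{rta}$, exactly the chain of equalities given above.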
\subsubsection{Informal analysis}
An analogue to lemma~\ref{lem:dlext} can be proven to show how to extract $r$
from a verifier's random-oracle queries; statistical zero knowledge would
then follow easily, as in theorem~\ref{thm:wident-zk}.  Soundness is
intuitively clear, since an adversary must compute $\psi = \hat{e}(P,
P')^{art}$ given $A = a P'$, $R = r P$ and $T = t P$, which is an instance of
the BDH problem.  An analogue of theorem~\ref{thm:wident-sound} would have to
prove this for an adversary capable of making identity requests as well as
obtaining challenges.  Finally, our key-exchange protocol can be constructed
out of this identity-based identification scheme, yielding an identity-based
authenticated key-exchange protocol.  We leave it to the reader to work
through the details.
\fi


\ifshort\else
\subsection{Comparison with the protocol of Stinson and Wu}
\label{sec:stinson-ident}

Our protocol is similar to a recent proposal by Stinson and Wu
\cite{stinson-wu-2006:two-flow-zero-knowledge}.  They restrict their
attention to Schnorr groups $G \subset \F_p^*$.  Let $\gamma$ be an element
of order $q = \#G$.  The prover's private key is $a \inr \gf{q}$ and her
public key is $\alpha = \gamma^a$.  In their protocol, the challenger chooses
$r \inr \gf{q}$, computes $\rho = \gamma^r$ and $\psi = \alpha^r$, and sends
a challenge $(\rho, H(\psi))$.  The prover checks that $\rho^q \ne 1$,
computes $\psi = \rho^a$, checks the hash, and sends $\psi$ back by way of
response.  They prove their protocol's security in the random-oracle model.

Both the Wrestlers protocol and Stinson--Wu require both prover and verifier
to compute two exponentiations (or scalar multiplications) each.  The
sizes of the messages used by the two protocols are also identical.

(An earlier version of the Stinson--Wu protocol used a cofactor
exponentiation: if we set $f = (p - 1)/q$, then we use $\psi = \alpha^{rf} =
\rho^{af} = \gamma^{afr}$.  This is more efficient in typical elliptic curve
subgroups, since the cofactor of such subgroups is usually small: indeed,
\cite{certicom-2000:sec1} recommends rejecting groups with cofactor $f > 4$.
However, in the Schnorr groups used by Stinson and Wu, the cofactor is much
larger than $q$, and their new variant is more efficient.)

We note that the zero-knowledge property of the Stinson--Wu protocol requires
the Diffie--Hellman knowledge of exponent assumption (KEA). Very briefly:
suppose $A$ is a randomized algorithm which takes as input $X \in G$ and
outputs a pair $(\gamma^r, X^r)$; intuitively, the KEA asserts that $A$ must
have done this by choosing $r$ somehow and then computing its output from it.
Formally, it asserts the existence of an `extractor' algorithm which takes as
input the element $X$ and the random coins used by $A$ and outputs $r$ with
high probability. This is a very strong assumption, and one which is
unnecessary for our protocol, since we can present an \emph{explicit}
extractor.

The KEA assumption as stated in
\cite{stinson-wu-2006:two-flow-zero-knowledge} allows the extractor to fail
with some negligible probability, over and above the probability that a
dishonest verifier managed to guess the correct $h = H(\psi)$ without making
this random-oracle query. Not only does our protocol achieve zero-knowledge
without the KEA, our extractor is, in this sense, `perfect'.

Our protocol is just as strong as Stinson--Wu under attack from active
intruders: see table~\ref{tab:wident-active} for a very brief sketch of the
case-analysis which would be the basis of a proof of this.

\begin{table}
  \begin{tabular}[C]{|*{3}{c|}p{8cm}|}
    \hlx{hv[1]}
    \multicolumn{2}{|c|}{\textbf{Challenge}} &
    \textbf{Response} &
    \textbf{Security}
    \\ \hlx{v[1]hv}
    %% unpleasant hacking to make the R and c columns the same width :-(
    \settowidth{\dimen0}{\textbf{Challenge}}%
    \dimen0=.5\dimen0
    \advance\dimen0by-\tabcolsep
    \advance\dimen0by-.5\arrayrulewidth
    \hbox to\dimen0{\hfil$R$\hfil}
    & $c$ & $Y$ & Nothing to prove. \\ \hlx{v}
    $R$ & $c'$ & --- & Prover rejects by lemma~\ref{lem:unique-c};
                       $Y'$ probably wrong by
                       theorem~\ref{thm:wident-sound}. \\ \hlx{v}
    $R$ & $c$ & $Y'$ & Response is incorrect. \\ \hlx{v}
    $R'$ & --- & $Y$ & Response is incorrect. \\ \hlx{v}
    $R'$ & $c$ & $Y'$ & Prover rejects with probability $1 - 2^{-\ell_I}$;
                        $Y'$ probably wrong by
                        theorem~\ref{thm:wident-sound}. \\ \hlx{v}
    $R'$ & $c'$ & $Y'$ & Simulate prover using extractor
                         (lemma~\ref{lem:dlext}); $Y'$ probably wrong by
                         theorem~\ref{thm:wident-sound}. \\ \hlx{vh}
  \end{tabular}

  \caption{Security of $\Wident$ against active intruders (summary)}
  \label{tab:wident-active}
\end{table}

An identity-based analogue of Stinson--Wu can be defined using a bilinear
pairing, just as we did in section~\ref{sec:wident-id}. However, to prove
the zero-knowledge property, one needs to make a bilinear analogue of the
knowledge of exponent assumption.

We suspect that a key-exchange protocol like ours can be constructed using
Stinson--Wu rather than the Wrestlers identification scheme. We haven't,
however, gone through all the details, since we believe our protocol is just
as efficient and is based on much more conservative assumptions.
\fi

%%%--------------------------------------------------------------------------
\section{A simple key-exchange protocol}
\label{sec:kx}

In this section, we describe a simple key-exchange protocol built out of the
identification protocol shown previously.

The key-exchange protocol arises from the following observation. If Bob
sends a challenge, presumably to Alice, and gets a correct response, then not
only did he really send the challenge to Alice but he knows that she received
it correctly.

So, if Alice and Bob authenticate each other, by the end of it, they should
each have chosen a random private value, sent the corresponding public value
to the other, and been convinced that it arrived safely.

Unfortunately, life isn't quite this kind, and we have to do some more work
to make this scheme secure.


Our key exchange protocol essentially consists of two parallel instances of
the identification protocol. If Alice receives a correct response to her
challenge, she will know that Bob received her challenge correctly, and
\emph{vice versa}. If we let Alice's challenge be $R_0 = r_0 P$ and Bob's
challenge be $R_1 = r_1 P$ then each can compute a shared secret $Z = r_0 R_1
= r_0 r_1 P = r_1 R_0$ unknown to an adversary. There are, unfortunately, a
few subtleties involved in turning this intuition into a secure key-exchange
protocol, which we now describe.
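The shared-secret observation can be checked concretely. The sketch below is
illustrative only: the tiny parameters $p$, $q$ and $g$ are toy stand-ins,
and the paper's additively-written scalar multiplication $r P$ is modelled as
exponentiation in the order-$q$ subgroup of $\F_p^*$.

```python
# Toy sketch (illustrative, NOT secure): each party's challenge doubles as
# a Diffie-Hellman share.  The paper's additive "r P" becomes pow(g, r, p)
# in the order-q subgroup of Z_p^*; the parameters are toy stand-ins.
import secrets

p, q, g = 227, 113, 4          # toy parameters: 227 = 2*113 + 1, ord(4) = 113

r0 = secrets.randbelow(q)      # Alice's secret scalar
r1 = secrets.randbelow(q)      # Bob's secret scalar
R0 = pow(g, r0, p)             # Alice's challenge R_0 = r_0 P
R1 = pow(g, r1, p)             # Bob's challenge R_1 = r_1 P

Z_alice = pow(R1, r0, p)       # Z = r_0 R_1
Z_bob   = pow(R0, r1, p)       # Z = r_1 R_0
assert Z_alice == Z_bob        # both equal r_0 r_1 P
```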


\subsection{Overview}
\label{sec:kx-overview}

We present a quick, informal description of our basic key-exchange protocol.
In addition to our group $G$, we shall also need a secure symmetric
encryption scheme $\E = (\kappa, E, D)$, and two secure hash functions
$H_I\colon \Bin^{2\ell_G} \to \Bin^{\ell_I}$ and $H_K\colon \Bin^{\ell_G+1}
\to \Bin^\kappa$.

Suppose that Alice's and Bob's private keys are $a$ and $b$ respectively, and
their public keys are $A = a P$ and $B = b P$.
\begin{enumerate}
\item Alice chooses a random index $r \inr \gf{q}$. She computes $R = r P$ and
  $c = r \xor H_I(R, r B)$. She sends the pair $(R, c)$ to Bob.
\item Similarly, Bob chooses a random $s \inr \gf{q}$. He computes $S = s P$
  and $d = s \xor H_I(S, s A)$. He sends $(S, d)$ to Alice.
\item Alice receives $(S', d')$ from Bob. She computes $s' = d' \xor H_I(S',
  a S')$, and verifies that $S' = s' P$. If so, she computes $K_A = H_K(0
  \cat r S')$, and sends $R, E_{K_A}(a S')$ to Bob.
\item Similarly, Bob receives $(R', c')$ from Alice. He verifies that $R' =
  \bigl( c' \xor H_I(R', b R') \bigr) P$. If so, he computes $K_B = H_K(0
  \cat s R')$ and sends $S, E_{K_B}(b R')$ to Alice.
\item Alice receives a ciphertext $(S'', \chi_B)$ from Bob. She checks that
  $S'' = S'$, decrypts $\chi_B$, and checks that $D_{K_A}(\chi_B) = r B$. If
  so, she uses $H_K(1 \cat r S')$ as her shared secret.
\item Similarly, Bob receives $(R'', \chi_A)$ from Alice, and checks $R'' =
  R'$ and $D_{K_B}(\chi_A) = s A$. If so, he uses $H_K(1 \cat s R')$ as his
  shared secret.
\end{enumerate}
This is the Wrestlers Key Exchange protocol, $\Wkx^{G, \E}$ (again, we omit
the superscripts when referring to the general protocol, or when confusion is
unlikely). A diagrammatic summary of the protocol is shown in
figure~\ref{fig:wkx}.
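The six steps above can also be traced in a runnable toy sketch. Everything
below is a stand-in rather than a real instantiation: the group is a tiny
multiplicatively-written toy subgroup, $H_I$ and $H_K$ are truncated SHA-256,
$\E$ is a one-time xor `cipher', and the $S'' = S'$, $R'' = R'$ bookkeeping
is elided.

```python
# Toy trace of the Wrestlers KX steps (illustrative only; toy group, toy
# hashes, toy cipher -- not a secure instantiation).  "r P" is modelled as
# pow(g, r, p) in the order-q subgroup of Z_p^*.
import hashlib, secrets

p, q, g = 227, 113, 4                       # toy group parameters

def H(tag, *vals):                          # stand-in for H_I / H_K
    data = tag.encode() + b''.join(v.to_bytes(4, 'big') for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest()[:4], 'big')

def E(k, m):                                # toy xor "cipher": E = D
    return m ^ H('E', k)
D = E

# Long-term keys: Alice (a, A), Bob (b, B).
a = secrets.randbelow(q); A = pow(g, a, p)
b = secrets.randbelow(q); B = pow(g, b, p)

# Steps 1-2: masked challenges c = r xor H_I(R, r B), d = s xor H_I(S, s A).
r = secrets.randbelow(q); R = pow(g, r, p); c = r ^ H('I', R, pow(B, r, p))
s = secrets.randbelow(q); S = pow(g, s, p); d = s ^ H('I', S, pow(A, s, p))

# Step 3: Alice recovers s' = d xor H_I(S, a S), checks S = s' P, responds.
s_ = d ^ H('I', S, pow(S, a, p))            # a S = s A, so the mask cancels
assert pow(g, s_, p) == S
K_A = H('K0', pow(S, r, p)); chi_A = E(K_A, pow(S, a, p))

# Step 4: Bob, symmetrically, with Alice's challenge.
r_ = c ^ H('I', R, pow(R, b, p))
assert pow(g, r_, p) == R
K_B = H('K0', pow(R, s, p)); chi_B = E(K_B, pow(R, b, p))

# Steps 5-6: each decrypts the other's response and checks it.
assert D(K_A, chi_B) == pow(B, r, p)        # Alice checks r B
assert D(K_B, chi_A) == pow(A, s, p)        # Bob checks s A

shared = H('K1', pow(S, r, p))              # H_K(1 || r S') on both sides
assert shared == H('K1', pow(R, s, p))
```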

\begin{figure}
  \begin{description}
  \item[Setup] Group $G = \langle P \rangle$; $\#G = q$ is prime.
    $H_I(\cdot, \cdot)$ and $H_K(\cdot)$ are secure hashes. $\E = (\kappa,
    E, D)$ is an IND-CCA2 symmetric encryption scheme.
  \item[Parties] $U_i$ for $0 \le i < n$.
  \item[Private keys] $x_i \inr \gf{q}$.
  \item[Public keys] $X_i = x_i P$.
  \end{description}
  \begin{protocol}
    $r_i \getsr I$; $R_i \gets r_i P$; &
    $r_j \getsr I$; $R_j \gets r_j P$; \\
    $c_i \gets r_i \xor H_I(R_i, r_i X_j)$; &
    $c_j \gets r_j \xor H_I(R_j, r_j X_i)$; \\
    \send{->}{(R_i, c_i)}
    \send{<-}{(R_j, c_j)}
    Check $R_j = \bigl(c_j \xor H_I(R_j, x_i R_j)\bigr) P$; &
    Check $R_i = \bigl(c_i \xor H_I(R_i, x_j R_i)\bigr) P$; \\
    $Z \gets r_i R_j$; $(K_0, K_1) \gets H_K(Z)$; &
    $Z \gets r_j R_i$; $(K_0, K_1) \gets H_K(Z)$; \\
    $\chi_i \gets E_{K_0}(x_i R_j)$; &
    $\chi_j \gets E_{K_0}(x_j R_i)$; \\
    \send{->}{(R_i, \chi_i)}
    \send{<-}{(R_j, \chi_j)}
    Check $D_{K_0}(\chi_j) = r_i X_j$; &
    Check $D_{K_0}(\chi_i) = r_j X_i$; \\
    Shared key is $K_1$. & Shared key is $K_1$.
  \end{protocol}

  \caption{Summary of the Wrestlers Key Exchange protocol, $\Wkx$}
  \label{fig:wkx}
\end{figure}

Assume, for the moment, that Alice and Bob's messages are relayed honestly.
Then:
\begin{itemize}
\item $a S' = a S = a (s P) = s (a P) = s A$, so $s' = d' \xor H_I(S', a S') =
  d \xor H_I(S, s A) = s$, and $S' = S = s P = s' P$, and therefore Alice
  responds to Bob's message;
\item similarly $b R' = r B$, so $r' = r$ and $R' = r' P$, and therefore Bob
  responds to Alice's message;
\item $b R' = b R = b (r P) = r (b P) = r B$, and $a S' = a S = a (s P) = s
  (a P) = s A$, and therefore both parties compute their responses correctly;
  and
\item $r S' = r S = r (s P) = s (r P) = s R = s R'$, so $K_A = K_B$, and
  therefore they can decrypt each others' responses, and agree the same
  shared secret.
\end{itemize}
This shows that the protocol is basically valid, but says little about its
security. The remainder of this section will describe our protocol in more
formal detail, and prove its security in a model with multiple parties and an
adversary who controls the network.

Observe that the protocol as we've presented it here is \emph{symmetrical}.
There's no notion of `initiator' or `responder'. There are a total of four
messages which must be sent before both parties accept. However, this can be
reduced to three by breaking the symmetry of the protocol and combining one
or other party's challenge and response messages. We choose to analyse the
symmetrical version, since to do so, it suffices to consider only the two
different kinds of messages. Since our security model allows the messages to
be adversarially delayed and reordered, it is easy to show that the security
of an optimized, asymmetrical protocol is no worse than the symmetrical
version we present here.


\subsection{Security model and security definition}
\label{sec:um}

Our model is very similar to that of Canetti and Krawczyk
\cite{canetti-krawczyk-2001:secure-channels}, though we have modified it in
two ways.
\begin{enumerate}
\item We allow all the participants (including the adversary) in the protocol
  access to the various random oracles required to implement it.
\item Since we want to analyse a specific, practical scheme, asymptotic
  results are useless. We measure the adversary's resource usage carefully,
  and produce a quantitative bound on the adversary's advantage in the
  SK-security game.
\end{enumerate}

\ifshort

Readers interested in the details of the model should see Canetti and
Krawczyk's paper \cite{canetti-krawczyk-2001:secure-channels}, or the full
version of this paper.

\else

\subsubsection{Overview}
We briefly describe our modified model, pointing out the changes we have
made, and how they apply to our protocol. Much of Canetti and Krawczyk's
model (for example, the local and global outputs) is useful for proving more
general security properties such as demonstrating that SK-security suffices
for constructing secure channels, and we shall not concern ourselves with
such details. Other parts deal with issues such as security parameters and
ensuring that all the computation is polynomially bounded, which are
irrelevant since we are dealing with a single concrete protocol rather than a
family of them.

The entities in the model are the \emph{adversary}~$A$, and a (fixed) number
of \emph{parties}~$P_i$, for $0 \le i < n$. If the protocol under
consideration makes use of random oracles, then all the participants -- the
adversary and the parties -- are allowed access to the random oracles.

The parties and the adversary play a `game'. At the beginning of the game,
the participants are given some inputs computed by a randomized
\emph{initialization procedure}~$\id{init}$. This produces as output a pair
$(i_U, \mathbf{i})$; the value $i_U$ is the \emph{global input}, and is given
to all the participants including the adversary. The vector $\mathbf{i}$ has
$n$ components, and party $P_i$ is given $(i_U, \mathbf{i}[i])$ as input.

\subsubsection{Sessions}
Parties don't act directly. Instead, each party runs a number of
\emph{sessions}. A session is represented by a triple $S = (P_i, P_j, s)$,
where $i, j \in \Nupto{n}$ identify the owning party and a \emph{partner},
and $s \in \Bin^{\ell_S}$ is a \emph{session-id}. (The original model
includes a r\^ole, for distinguishing between initiators and responders. Our
protocol is symmetrical, so this distinction isn't useful.) If $P_i$ runs a
session $S = (P_i, P_j, s)$ and $P_j$ runs a session $S' = (P_j, P_i, s)$
then we say that $S$ and $S'$ are \emph{matching}, and that $P_j$ is $P_i$'s
\emph{partner} for the session.

At most one participant in the game is \emph{active} at any given time.
Initially the adversary is active. The adversary may \emph{activate} a
session in one of two ways.
\begin{enumerate}
\item It may \emph{create a session} of a party~$P_i$, by selecting a
  session-id~$s \in \Bin^{\ell_S}$ and a partner $j$. There is no
  requirement that $P_j$ ever have a matching session. However, all sessions
  of a party must be distinct, i.e., sessions with the same partner must have
  different session-ids.
\item It may \emph{deliver a message}~$\mu \in \Bin^*$, from party~$P_j$, to
  an existing session~$S = (P_i, P_j, s)$. There is no requirement that any
  party previously sent $\mu$: the adversary is free to make up messages as
  it sees fit.
\end{enumerate}
The adversary becomes inactive, and the session becomes active. The session
performs some computation, according to its protocol, and may request a
message~$\mu$ be delivered to the matching session running in its partner
(which may not exist). The session may also \emph{terminate}. In the case
we are interested in, of key-exchange protocols, a session~$S = (P_i, P_j,
s)$ may terminate in one of two ways:
\begin{enumerate}
\item it may \emph{complete}, outputting $(i, j, s, K)$, for some
  \emph{session key}~$K$, or
\item it may \emph{abort}, outputting $(i, j, s, \bot)$.
\end{enumerate}
Once it has performed these actions, the session deactivates and the
adversary becomes active again. The adversary is given the message~$\mu$, if
any, and informed of whether the session completed or aborted, but, in the
case of completion, not of the value of the key~$K$. A session is
\emph{running} if it has been created and has not yet terminated.

\subsubsection{Other adversarial actions}
As well as activating sessions, the adversary has other capabilities, as
follows.
\begin{itemize}
\item It may \emph{expire} any session~$S$, causing the owning party to
  `forget' the session key output by that session.
\item It may \emph{corrupt} any party~$P_i$, at will: the adversary learns
  the entire state of the corrupted party, including its initial
  input~$\mathbf{i}[i]$, the state of any sessions it was running at the
  time, and the session keys of any completed but unexpired sessions. Once
  corrupted, a party can no longer be activated. Of course, the adversary
  can continue to send messages allegedly from the corrupted party.
\item It may \emph{reveal the state} of a running session~$S$, learning any
  interesting values specific to that session, but \emph{not} the owning
  party's long-term secrets.
\item It may \emph{reveal the session-key} of a completed session~$S$.
\item It may elect to be \emph{challenged} with a completed session~$S$,
  provided $S$ is unexposed. Challenge sessions form part of the security
  notion for key-exchange protocols. See below for more details.
\end{itemize}
We say that a session $S = (P_i, P_j, s)$ is \emph{locally exposed} if
\begin{itemize}
\item it has had its state revealed,
\item it has had its session-key revealed, or
\item $P_i$ has been corrupted, and $S$ had not been expired when this
  happened.
\end{itemize}
A session is \emph{exposed} if it is locally exposed, or if its matching
session exists and has been locally exposed.

At the beginning of the game, a bit $b^*$ is chosen at random. The adversary
may choose to be \emph{challenged} with any completed, unexposed
session;\footnote{%
  The original Canetti-Krawczyk definition restricts the adversary to a
  single challenge session, but our proof works independently of the number
  of challenge sessions, so we get a stronger result by relaxing the
  requirement here.}
the adversary is then given either the session's key -- if $b^* = 1$ -- or a
string chosen at random and independently of the game so far from a
protocol-specific distribution -- if $b^* = 0$. At the end of the game, the
adversary outputs a single bit~$b$.

\subsubsection{SK-security}
We've now described the game; it is time to explain the adversary's goal in
it. The adversary \emph{wins} the game if either
\begin{enumerate}
\item two unexposed, matching sessions complete, but output different
  keys,\footnote{%
    The original Canetti-Krawczyk definition differs slightly here. It
    requires that `if two \emph{uncorrupted} parties complete matching
    sessions then they both output the same key' [original emphasis]. This
    can't be taken at face value, since none of the protocols they claim to
    be secure actually meet this requirement: they meet only the weaker
    requirement that parties completing matching sessions output different
    keys with negligible probability. We assume here that this is what they
    meant.}
  or
\item the adversary correctly guesses the hidden bit~$b^*$.
\end{enumerate}
More formally, we make the following definition.
\fi
\begin{definition}[SK-security]
  \label{def:sk}
  Let $\Pi^{H_0(\cdot), H_1(\cdot), \ldots}$ be a key-exchange protocol
  which makes use of random oracles $H_0(\cdot)$, $H_1(\cdot)$, \dots, and
  let $A$ be an adversary playing the game described \ifshort in
  \cite{canetti-krawczyk-2001:secure-channels}\else previously\fi, where $n$
  parties run the protocol~$\Pi$. Let $V$ be the event that any pair of
  matching, unexposed sessions completed, but output different session keys.
  Let $W$ be the event that the adversary's output bit matches the game's
  hidden bit~$b^*$. We define the adversary's \emph{advantage against the
  SK-security of the protocol~$\Pi$} to be
  \[ \Adv{sk}{\Pi}(A, n) = \max(\Pr[V], 2\Pr[W] - 1). \]
  Furthermore, we define the \emph{SK insecurity function of the
  protocol~$\Pi$} to be
  \[ \InSec{sk}(\Pi; t, n, q_S, q_M, q_{H_0}, q_{H_1}, \dots) =
       \max_A \Adv{sk}{\Pi}(A, n)
  \]
  where the maximum is taken over all adversaries~$A$ with total running
  time~$t$ (not including time taken by the parties), which create at most
  $q_S$ sessions, deliver at most $q_M$~messages, and (if applicable) make
  at most $q_{H_i}$ random-oracle queries to each random oracle $H_i(\cdot)$.
\end{definition}


\subsection{Security}

In order to analyse our protocol $\Wkx^{G, \E}$ properly, we must describe
exactly how it fits into our formal model.

\subsubsection{Sessions and session-ids}
Our formal model introduced the concept of sessions, which the informal
description of section~\ref{sec:kx-overview} neglected to mention. (One
could argue that we described a single session only.) As we shall show in
section~\ref{sec:kx-insecure}, our protocol is \emph{insecure} unless we
carefully modify it to distinguish between multiple sessions.

In the model, distinct key-exchange sessions are given distinct partners and
session-ids. In order to prevent sessions interfering with each other, we
shall make explicit use of the session-ids.

Suppose the session-ids are $\ell_S$-bit strings. We expand the domain of
the random oracle $H_I$ so that it's now
\[ H_I\colon G \times \Bin^{\ell_S} \times G \times G \to \Bin^{\ell_I}. \]
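Concretely, an expanded $H_I$ might be instantiated as below. The
length-prefixed encoding, the choice of SHA-256 and the output length are
illustrative assumptions, not part of the protocol specification.

```python
# Hypothetical instantiation of the expanded H_I: the public key X and the
# session-id s are hashed alongside R and Y, and the digest is truncated
# to ell_I bits.  Encoding and parameters here are illustrative only.
import hashlib

ELL_I = 128                                  # example output length, in bits

def h_i(X: bytes, s: bytes, R: bytes, Y: bytes) -> bytes:
    # Length-prefix each field so the four-way encoding is unambiguous.
    data = b''.join(len(f).to_bytes(2, 'big') + f for f in (X, s, R, Y))
    return hashlib.sha256(data).digest()[:ELL_I // 8]

# Changing only the session-id changes the check-value mask.
t0 = h_i(b'X', b'session-0', b'R', b'Y')
t1 = h_i(b'X', b'session-1', b'R', b'Y')
assert t0 != t1
```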

\subsubsection{Messages}
We split the messages of our protocol into two parts: a \emph{type}~$\tau$
and a \emph{body}~$\mu$. We assume some convenient, unambiguous encoding of
pairs $(\tau, \mu)$ as bit-strings. For readability, we present message
types as text strings, e.g., `\cookie{challenge}', though in practice one
could use numerical codes instead.

The message body itself may be a tuple of values, which, again, we assume are
encoded as bit-strings in some convenient and unambiguous fashion. We shall
abuse the notation for the sake of readability by dropping a layer of nesting
in this case: for example, we write $(\cookie{hello}, x, y, z)$ rather than
$\bigl(\cookie{hello}, (x, y, z)\bigr)$.

\subsubsection{The protocol}
Our protocol is represented by three functions, shown in
figure~\ref{fig:wkx-formal}.
\begin{itemize}
\item $\id{init}(n)$ is the initialization function, as described in
  section~\ref{sec:um}. It outputs a pair $(\mathbf{p}, \mathbf{i})$, where
  $\mathbf{i}[i]$ is the private key of party~$P_i$ and $\mathbf{p}[i]$ is
  the corresponding public key. Only $P_i$ is given $\mathbf{i}[i]$, whereas
  all parties and the adversary are given $\mathbf{p}$.
\item $\id{new-session}^{H_I(\cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
  (\mathbf{p}, x, i, j, s)$ is the new-session function. This is executed by
  party~$P_i$ when the adversary decides to create a new session~$S = (P_i,
  P_j, s)$. It is also given the relevant outputs of $\id{init}$, and
  allowed access to the random oracles $H_I$ and $H_K$.
\item $\id{message}^{H_I(\cdot, \cdot, \cdot, \cdot), H_K(\cdot)} (\tau,
  \mu)$ is the incoming-message function. This is executed by a session when
  the adversary decides to deliver a message $(\tau, \mu)$ to it. It makes
  use of the subsidiary functions $\id{msg-challenge}$ and
  $\id{msg-response}$ to handle the messages.
\end{itemize}
We observe that the protocol never aborts. We could make it do so if it
receives an invalid message, but we prefer to ignore invalid messages for the
sake of robustness.\footnote{%
  Note that this protocol would therefore require modification before it was
  acceptable in the simulation-based model of
  \cite{shoup-1999:formal-model-key-exchange}. There it is required that a
  key-exchange protocol terminate after a polynomially-bounded number of
  messages are delivered to it.}

\begin{figure}
  \begin{program}
    Function $\id{init}(n)$: \+ \\
      \FOR $i \in \Nupto{n}$ \DO \\ \ind
        $x \getsr \gf{q}$; \\
        $\mathbf{i}[i] \gets x$; \\
        $\mathbf{p}[i] \gets x P$; \- \\
      \RETURN $(\mathbf{p}, \mathbf{i})$;
    \- \\[\medskipamount]
    Function $\id{new-session}^{H_I(\cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
        (\mathbf{p}, x, i, j, s)$: \+ \\
      $X \gets \mathbf{p}[i]$;
      $X' \gets \mathbf{p}[j]$;
      $C \gets \emptyset$; \\
      $r \getsr \gf{q}$;
      $R \gets r P$;
      $Y \gets r X'$; \\
      $h \gets H_I(X, s, R, Y)$;
      $c \gets r \xor h$; \\
      \SEND $(\cookie{challenge}, R, c)$;
    \- \\[\medskipamount]
    Function $\id{message}^{H_I(\cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
        (\tau, \mu)$: \+ \\
      \IF $\tau = \cookie{challenge}$ \THEN $\id{msg-challenge}(\mu)$; \\
      \ELSE \IF $\tau = \cookie{response}$ \THEN $\id{msg-response}(\mu)$;
    \next
    Function $\id{msg-challenge}(\mu)$: \+ \\
      $(R', c') \gets \mu$; \\
      $Y' \gets x R'$; \\
      $h' \gets H_I(X', s, R', Y')$; \\
      $r' \gets c' \xor h'$; \\
      \IF $R' \ne r' P$ \THEN \RETURN; \\
      $C \gets C \cup \{R'\}$; \\
      $Z \gets r R'$; \\
      $(K_0, K_1) \gets H_K(Z)$; \\
      $\chi \gets E_{K_0}(Y')$; \\
      \SEND $(\cookie{response}, R, \chi)$;
    \- \\[\medskipamount]
    Function $\id{msg-response}(\mu)$: \+ \\
      $(R', \chi') \gets \mu$; \\
      \IF $R' \notin C$ \THEN \RETURN; \\
      $Z \gets r R'$; \\
      $(K_0, K_1) \gets H_K(Z)$; \\
      $Y' \gets D_{K_0}(\chi')$; \\
      \IF $Y' \ne Y$ \THEN \RETURN; \\
      \OUTPUT $K_1$;
      \STOP;
  \end{program}

  \caption{Formalization of $\Wkx$}
  \label{fig:wkx-formal}
\end{figure}

\subsubsection{Session states}
We must specify what the adversary obtains when it chooses to reveal a
session's state. Given the program in figure~\ref{fig:wkx-formal}, we can
see that the session state consists of the variables $(x, X, X', r, R, Y,
C)$.

However, $x$ is the owning party's long-term secret, and it seems
unreasonable to disclose this to an adversary who stops short of total
corruption.

The public keys $X$ and $X'$ are part of the adversary's input anyway, so
revealing them doesn't help. Similarly, the set $C$ of valid challenges
could have been computed easily by the adversary, since a group element $R'
\in C$ if and only if the session $S$ responded to some message
$(\cookie{challenge}, R', c')$.

The value $R = r P$ is easily computed given $r$, and besides is sent in the
clear in the session's first message. The expected response $Y = r X'$ is
also easily computed from $r$. The converse is also true, since $r$ can be
recovered from $R$ and $c$ in the session's challenge message and the value
$Y$. Besides, $r$ is necessary for computing $Z$ in response to incoming
challenges.

We conclude that the only `interesting' session state is $r$.

\subsubsection{Security}
Having formally presented the protocol, we can now state our main theorem
about its security. The proof is given in \ifshort the full version of the
paper\else appendix~\ref{sec:sk-proof}\fi.
\begin{theorem}[SK-security of $\Wkx$]
  \label{thm:sk}
  Let $G$ be a cyclic group. Let $\E = (\kappa, E, D)$ be a symmetric
  encryption scheme. Then
  \begin{spliteqn*}
    \InSec{sk}(\Wkx^{G, \E}; t, n, q_S, q_M, q_I, q_K) \le
      2 q_S \bigl( \InSec{ind-cca}(\E; t', q_M, q_M) + {} \\
      \InSec{mcdh}(G; t', q_K) +
      n \,\InSec{mcdh}(G; t', q_M + q_I) \bigr) +
      \frac{n (n - 1)}{q} +
      \frac{2 q_M}{2^{\ell_I}}.
  \end{spliteqn*}
  where $t' = t + O(n) + O(q_S) + O(q_M q_I) + O(q_K)$.
\end{theorem}


\ifshort\else
\subsection{Insecure protocol variants}
\label{sec:kx-insecure}

It's important to feed the session-id and verifier's public key to the random
oracle $H_I$ when constructing the check-value~$c$. Without these, the
protocol is vulnerable to attack. In this section, we consider some variants
of our protocol which hash less information, and show attacks against them.

To simplify the presentation, we shall consider Alice and Bob, and a third
character Carol. We shall be attacking a pair of matching sessions $A$
and~$B$, run by Alice and Bob respectively. Let Alice and Bob's private keys
be $x_A$ and~$x_B$, and let their public keys be $X_A = x_A P$ and $X_B = x_B
P$ respectively.

\subsubsection{Protocol diagram notation}
In order to keep the presentation as clear as possible, we use a simple
diagrammatic notation for describing protocol runs. A line of the form
\protocolrun{\textit{action} \ar[r] & \PRsession{S} & \textit{result}}
states that the adversary performs the given \textit{action} on session~$S$,
with the stated \textit{result}. The \textit{action} may be
\begin{itemize}
\item \textsf{Create session $(P_i, P_j, s)$}: the session is created,
  running in party~$P_i$, with partner~$P_j$ and session-id~$s$.
\item \textsf{Receive $\mu$}: the session is activated with an incoming
  message~$\mu$.
\item \textsf{Session-state reveal}: the adversary requests the session's
  internal state.
\end{itemize}
The \textit{result} may be
\begin{itemize}
\item \textsf{Send $\mu'$}: the session requests the delivery of the
  message~$\mu'$.
\item \textsf{Complete: $K$}: the session completes, outputting the key~$K$.
\item \textsf{State: $\sigma$}: the session's state is revealed to
  be~$\sigma$.
\item \textsf{(Ignore)}: the result of the action is unimportant.
\end{itemize}

\subsubsection{Omitting the session-id}
Consider a protocol variant where session $S$ sets $h_S = H_I(X_N, R_S,
Y_S)$, where $N$ is the session's partner. That is, we've omitted the
session-id from the hash. An adversary can cross over two sessions between
Alice and Bob. Here's how.

The attack assumes that Alice and Bob set up two pairs of matching sessions
with each other, thus.
\protocolrun{
  \PRcreate{Alice}{Bob}{s} & \PRsession{A} &
    \PRsend{challenge}{R_A, c_A} \\
  \PRcreate{Bob}{Alice}{s} & \PRsession{B} &
    \PRsend{challenge}{R_B, c_B} \\
  \PRcreate{Alice}{Bob}{s'} & \PRsession{A'} &
    \PRsend{challenge}{R_{A'}, c_{A'}} \\
  \PRcreate{Bob}{Alice}{s'} & \PRsession{B'} &
    \PRsend{challenge}{R_{B'}, c_{B'}}
}
Observe that the session pairs use distinct session-ids $s \ne s'$, so this
is allowed. Now the adversary crosses over the challenges, using the second
pair of sessions to provide responses to the challenges issued by the first
pair. By revealing the state in the second pair of sessions, the adversary
can work out the (probably different) session keys accepted by the first
pair.
\protocolrun{
  \PRreceive{challenge}{R_B, c_B} & \PRsession{A'} &
    \PRsend{response}{R_{A'}, E_{K_{A'B,0}}(x_A R_B)} \\
  \PRreceive{challenge}{R_A, c_A} & \PRsession{B'} &
    \PRsend{response}{R_{B'}, E_{K_{B'A,0}}(x_B R_A)} \\
  \PRreceive{challenge}{R_{A'}, c_{A'}} & \PRsession{A} & \PRignore \\
  \PRreceive{challenge}{R_{B'}, c_{B'}} & \PRsession{B} & \PRignore \\
  \PRreveal & \PRsession{A'} & r_{A'} \\
  \PRreveal & \PRsession{B'} & r_{B'} \\
  \PRreceive{response}{R_{B'}, E_{K_{B'A,0}}(x_B R_A)} & \PRsession{A} &
    \PRcomplete{K_{AB',1}} \\
  \PRreceive{response}{R_{A'}, E_{K_{A'B,0}}(x_A R_B)} & \PRsession{B} &
    \PRcomplete{K_{BA',1}} \\
}
The adversary can now compute $K_{AB'} = H_K(r_{B'} R_A)$ and $K_{B'A} =
H_K(r_{A'} R_B)$. Safely in possession of both keys, the adversary can now
read and impersonate freely.
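The root cause can be seen in a few lines of toy code, reusing the same
illustrative multiplicative toy group as earlier sketches: with the
session-id omitted from the hash, the challenge check is completely
independent of which session verifies it, which is exactly what the
cross-over above exploits.

```python
# Toy demo (illustrative parameters only): without the session-id in the
# hash, a challenge minted in one session passes verification in any other
# session with the same partner.
import hashlib, secrets

p, q, g = 227, 113, 4                 # toy group: order-q subgroup of Z_p^*

def h_bad(X, R, Y):                   # variant hash: session-id omitted
    data = b''.join(v.to_bytes(4, 'big') for v in (X, R, Y))
    return int.from_bytes(hashlib.sha256(data).digest()[:4], 'big')

x_B = secrets.randbelow(q); X_B = pow(g, x_B, p)    # Bob's key pair

# Alice's session A' (session-id s') builds a challenge for Bob ...
r = secrets.randbelow(q); R = pow(g, r, p)
c = r ^ h_bad(X_B, R, pow(X_B, r, p))

# ... and Bob's session B (session-id s != s') accepts it unchanged:
r_rec = c ^ h_bad(X_B, R, pow(R, x_B, p))
assert pow(g, r_rec, p) == R          # check passes: nothing binds s
```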

\subsubsection{Omitting the partner's public key}
Now consider a protocol variant where session $S$ sets $h_S = H_I(s, R_S,
Y_S)$, where $s$ is the session-id.  An adversary can use sessions with
Carol to attack a session between Alice and Bob.  Here's how the sessions
are set up.
\protocolrun{
  \PRcreate{Alice}{Bob}{s} & \PRsession{A} &
    \PRsend{challenge}{R_A, c_A} \\
  \PRcreate{Bob}{Alice}{s} & \PRsession{B} &
    \PRsend{challenge}{R_B, c_B} \\
  \PRcreate{Alice}{Carol}{s} & \PRsession{A'} &
    \PRsend{challenge}{R_{A'}, c_{A'}} \\
  \PRcreate{Bob}{Carol}{s} & \PRsession{B'} &
    \PRsend{challenge}{R_{B'}, c_{B'}}
}
Although each of Alice and Bob has two sessions with session-id~$s$, this is
allowed, since they are with different partners.  The rest of the attack
proceeds identically to the previous case.
\fi

\subsection{Deniability}
\label{sec:denial}

We have claimed that the Wrestlers key-exchange protocol is \emph{deniable}.
In this section, we define what we mean, explain the limits of the
deniability of the protocol as currently presented, fix the protocol with an
additional pass (which can be collapsed to a single message flow by breaking
the protocol's symmetry), and prove the deniability of the resulting
protocol.

Our notion of deniability is taken from Di~Raimondo, Gennaro and Krawczyk
\cite{raimondo-2006:deniable-authn-key-exchange}, except that, as usual, we
opt for a concrete security approach.

\subsubsection{Discussion}
Our definition for deniability is that, for any adversary interacting with
participants in the protocol, a simulator exists which can compute the same
things as the adversary.  In particular, since an adversary could output a
transcript of the interactions between itself and the parties, it would
follow that a simulator could do this too.  If the simulator is effective,
its output is indistinguishable from that of the real adversary, and hence no
`judge' (distinguisher) should be persuaded by evidence presented by someone
who claims to have witnessed or participated in an interaction.

We work again in the model described in~\ref{sec:um}.  That is, our adversary
has complete control over the ordering and delivery of all messages.  The
adversary is also able, at any time, to reveal the state of any session.
However, deniability is obviously impossible against an adversary who can
\emph{corrupt} other parties, since simulating such an adversary's actions
would necessarily require the simulator to compute private keys corresponding
to the known public keys, and this is (we believe) difficult, because an
efficient algorithm for doing this could easily attack our protocol, which we
already proved secure.  Therefore, we forbid the adversary from corrupting
parties.

In order to allow the adversary to participate in the protocol, rather than
merely observing it, we need to give it one or more private keys.  We could
modify the initialization function \id{init} from figure~\ref{fig:wkx-formal}
to give the adversary a private key, but there's an easier way: we can just
give the adversary a number of private keys in its auxiliary input.

\subsubsection{Definitions}
Let $\Pi$ be a key-exchange protocol, in the model described in
section~\ref{sec:um}.  We use the simulation framework of
section~\ref{sec:sim}.  We define the initialization function $I_\Pi$ to be
the initialization function of $\Pi$, as before, and the corresponding
world~$W_\Pi(\iota, \sigma, \tau, \mu)$ is a fairly straightforward mapping
of the adversary's possible actions to the simulation model:
\begin{itemize}
\item The invocation $\cookie{new-session}$ with $\mu = (i, j, s)$ creates a
  new session on party~$P_i$, with partner~$P_j$ and session-id~$s$.  The
  reply $\rho = (\delta, m)$ is a \emph{decision} $\delta \in
  \{\cookie{continue}, \cookie{abort}, \cookie{complete}\}$ and an output
  message $m \in \Bin^* \cup \{\bot\}$.  If $m \ne \bot$ then $m$ is a
  message to be sent to the matching session (if any).
\item The invocation $\cookie{deliver}$ with $\mu = (i, j, s, m)$ delivers
  message $m$ to the session $S = (P_i, P_j, s)$.  The response $\rho$ is as
  for $\cookie{new-session}$ invocations.
\item The invocation $\cookie{reveal-session-state}$ with $\mu = (i, j, s)$
  reveals to the adversary the state of the running session $S = (P_i, P_j,
  s)$.  The response $\rho$ is the session's state if $S$ is indeed a running
  session, or $\bot$ otherwise.
\item The invocation $\cookie{reveal-session-key}$ with $\mu = (i, j, s)$
  reveals to the adversary the session-key of the completed session~$S =
  (P_i, P_j, s)$.  The response $\rho$ is the session key~$K$ if the session
  is indeed complete, or $\bot$ otherwise.
\end{itemize}
There are no invocations corresponding to the adversary actions of corrupting
parties (since deniability against a corrupting adversary is impossible, as
discussed earlier), or session expiry or challenging (since they are useless
in this context).

We measure the deniability of a protocol~$\Pi$, using a given simulator~$S$,
by the insecurity function $\InSec{sim}(W_\Pi, I_\Pi, S; t_D, t_A,
\mathcal{Q}_D, \mathcal{Q}_A, \mathcal{R}, \mathcal{U})$ of
definition~\ref{def:sim}.  The interaction bounds $\mathcal{R} = (q_S, q_M)$
we place on the adversary are on the number ($q_S$) of \cookie{new-session}
invocations and the number ($q_M$) of \cookie{deliver} invocations it makes.

We shall (informally) say that a protocol~$\Pi$ is deniable if there is a
simulator~$S_\Pi$ for which the insecurity function is small for appropriate
resource bounds on the adversary and distinguisher.
\subsubsection{The current protocol}
As it stands, $\Wkx$ isn't deniable, according to our definition, for
arbitrary auxiliary inputs.  Let's see why.

Suppose that Bob is an informant for the secret police, and wants to convince
a judge that Alice is involved in subversive activities in cyberspace.
Unfortunately, Bob's job is difficult, because of the strong zero-knowledge
nature of the Wrestlers identification protocol.  However, Bob can work with
the judge to bring him the evidence necessary to convict Alice.  Here's how.

Alice's public key is $A$, and Bob's public key is $B$.  The judge chooses
some session-id $s$, and $r \inr \gf{q}$.  He computes $R = r P$ and $c =
r \xor H_I(B, s, R, r A)$, and gives Bob the triple $(s, R, c)$, keeping $r$
secret.  Bob can now persuade Alice to enter into a key-exchange with him,
with session-id $s$.  He uses $(R, c)$ as his challenge message.  When Alice
sends back her response $(R', \chi)$ (because Bob's challenge is correctly
formed), Bob aborts and shows $(R', \chi)$ to the judge.  The judge knows $r$
and can therefore decrypt $\chi$ and check the response.  If it's wrong, then
Bob was cheating and gets demoted -- he can't get the right answer by himself
because that would require him to impersonate Alice.  If it's right, Alice is
really a subversive element and `disappears' in the night.
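The judge's trap challenge can be sketched concretely.  The following toy
Python fragment uses the multiplicative group $\mathbb{Z}_p^*$ in place of
the group $\langle P \rangle$ and SHA-256 in place of $H_I$; all names and
parameter choices here are illustrative, not the paper's.

```python
# Toy sketch of the judge's trap challenge against the deniability of Wkx.
# Z_p^* with generator g stands in for the group <P>; H_I is SHA-256.
import hashlib, secrets

p = 2**127 - 1        # a Mersenne prime; illustrative stand-in for #G
g = 3

def H_I(*args):
    # Toy random oracle: hash a canonical encoding of the arguments.
    return int.from_bytes(hashlib.sha256(repr(args).encode()).digest(), 'big')

# Long-term keys: Alice's is (a, A), Bob's is (b, B).
a = secrets.randbelow(p - 1) + 1; A = pow(g, a, p)
b = secrets.randbelow(p - 1) + 1; B = pow(g, b, p)

# The judge constructs the challenge, keeping r secret from Bob:
#   R = r P,  c = r xor H_I(B, s, R, r A).
s = b'session-id'
r = secrets.randbelow(p - 1) + 1
R = pow(g, r, p)
c = r ^ H_I(B, s, R, pow(A, r, p))

# Alice, receiving (R, c) as Bob's challenge, runs her usual check: compute
# Y = x_A R = r A, recover r' = c xor H_I(B, s, R, Y), and verify R = r' P.
Y = pow(R, a, p)
r_prime = c ^ H_I(B, s, R, Y)
assert pow(g, r_prime, p) == R   # the trap challenge is correctly formed
```

Since the challenge verifies, Alice answers it; her response, which only the
judge can decrypt, is exactly the incriminating evidence described above.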

We shall show in theorem~\ref{thm:denial} below that this is basically the
only attack against the deniability of the protocol.  However, we can do
better.

\subsubsection{Fixing deniability}
We can fix the protocol to remove even the attack discussed above.  The
change is simple: we feed \emph{both} parties' challenges to the hash
function~$H_I$ rather than just the sender's.  We use a five-argument hash
function (random oracle) $H_I\colon G^2 \times \Bin^{\ell_S} \times G^2 \to
\Bin^{\ell_I}$.  We introduce a new message pass at the beginning of the
protocol: each session simply sends its challenge point $R = r P$ in the
clear as a `pre-challenge'.  The actual challenge is $R$ and $c = r \xor
H_I(R', X, s, R, r X')$, where $R'$ is the pre-challenge of the matching
session, $X$ is the sender's public key, and $X'$ is the partner's public
key.

By breaking symmetry, we can reduce the communication complexity of this
variant to four messages.  As before, we analyse the symmetrical version.
The extra flow might seem a high price to pay, but we shall see that it has
additional benefits beyond deniability.

A summary of the new protocol is shown in figure~\ref{fig:wdkx}, and the
formal description is shown in figure~\ref{fig:wdkx-formal}.

\begin{figure}
  \begin{description}
  \item[Setup] Group $G = \langle P \rangle$; $\#G = q$ is prime.
    $H_I(\cdot, \cdot, \cdot, \cdot, \cdot)$ and $H_K(\cdot)$ are secure
    hashes.  $\E = (\kappa, E, D)$ is an IND-CCA2 symmetric encryption
    scheme.
  \item[Parties] $U_i$ for $0 \le i < n$.
  \item[Private keys] $x_i \inr \gf{q}$.
  \item[Public keys] $X_i = x_i P$.
  \end{description}

  \begin{protocol}
    $r_i \getsr I$; $R_i \gets r_i P$; &
    $r_j \getsr I$; $R_j \gets r_j P$; \\
    \send{->}{R_i}
    \send{<-}{R_j}
    $c_i \gets r_i \xor H_I(R_j, X_i, s, R_i, r_i X_j)$; &
    $c_j \gets r_j \xor H_I(R_i, X_j, s, R_j, r_j X_i)$; \\
    \send{->}{(R_i, c_i)}
    \send{<-}{(R_j, c_j)}
    Check $R_j = \bigl(c_j \xor H_I(R_i, X_j, s, R_j, x_i R_j)\bigr) P$; &
    Check $R_i = \bigl(c_i \xor H_I(R_j, X_i, s, R_i, x_j R_i)\bigr) P$; \\
    $Z \gets r_i R_j$; $(K_0, K_1) \gets H_K(Z)$; &
    $Z \gets r_j R_i$; $(K_0, K_1) \gets H_K(Z)$; \\
    $\chi_i \gets E_{K_0}(x_i R_j)$; &
    $\chi_j \gets E_{K_0}(x_j R_i)$; \\
    \send{->}{(R_i, \chi_i)}
    \send{<-}{(R_j, \chi_j)}
    Check $D_{K_0}(\chi_j) = r_i X_j$; &
    Check $D_{K_0}(\chi_i) = r_j X_i$; \\
    Shared key is $K_1$. & Shared key is $K_1$.
  \end{protocol}

  \caption{Summary of the Deniable Wrestlers Key Exchange protocol, $\Wdkx$}
  \label{fig:wdkx}
\end{figure}
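The computations in figure~\ref{fig:wdkx} can be traced end-to-end in a toy
instantiation.  The sketch below uses $\mathbb{Z}_p^*$ as a stand-in for $G
= \langle P \rangle$, SHA-256-derived functions for $H_I$ and $H_K$, and a
simple XOR mask for $E/D$ (illustrative only: it is \emph{not} IND-CCA2);
all choices are ours, not the paper's.

```python
# Sketch of one complete Wdkx run: pre-challenges, challenges, checks,
# responses, and agreement on the shared key K_1.
import hashlib, secrets

p = 2**127 - 1; g = 3            # toy group: Z_p^* with generator g
s = b'session-id'

def H_I(*args):
    return int.from_bytes(hashlib.sha256(b'I' + repr(args).encode()).digest(), 'big')

def H_K(Z):
    d = hashlib.sha256(b'K' + repr(Z).encode()).digest()
    return d[:16], d[16:]        # (K_0, K_1)

def E(K, m):
    # Toy cipher: XOR the plaintext (a group element) with a key-derived mask.
    return m ^ int.from_bytes(hashlib.sha256(b'E' + K).digest(), 'big')
D = E                            # XOR masking is its own inverse

# Long-term keys for parties i and j.
x_i = secrets.randbelow(p - 1) + 1; X_i = pow(g, x_i, p)
x_j = secrets.randbelow(p - 1) + 1; X_j = pow(g, x_j, p)

# Pre-challenges: each side sends R = r P in the clear.
r_i = secrets.randbelow(p - 1) + 1; R_i = pow(g, r_i, p)
r_j = secrets.randbelow(p - 1) + 1; R_j = pow(g, r_j, p)

# Challenges: c = r xor H_I(partner's R, own X, s, own R, r X').
c_i = r_i ^ H_I(R_j, X_i, s, R_i, pow(X_j, r_i, p))
c_j = r_j ^ H_I(R_i, X_j, s, R_j, pow(X_i, r_j, p))

# Each side checks the other's challenge: R' = (c' xor H_I(..., x R')) P.
assert pow(g, c_j ^ H_I(R_i, X_j, s, R_j, pow(R_j, x_i, p)), p) == R_j
assert pow(g, c_i ^ H_I(R_j, X_i, s, R_i, pow(R_i, x_j, p)), p) == R_i

# Keys and responses: Z = r R', chi = E_{K_0}(x R').
K0_i, K1_i = H_K(pow(R_j, r_i, p))
K0_j, K1_j = H_K(pow(R_i, r_j, p))
chi_i = E(K0_i, pow(R_j, x_i, p))
chi_j = E(K0_j, pow(R_i, x_j, p))

# Response checks: D_{K_0}(chi') must equal r X'.
assert D(K0_i, chi_j) == pow(X_j, r_i, p)
assert D(K0_j, chi_i) == pow(X_i, r_j, p)
assert K1_i == K1_j              # both sides derive the same shared key K_1
```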

\begin{figure}
  \begin{program}
    Function $\id{init}(n)$: \+ \\
      \FOR $i \in \Nupto{n}$ \DO \\ \ind
        $x \getsr \gf{q}$; \\
        $\mathbf{i}[i] \gets x$; \\
        $\mathbf{p}[i] \gets x P$; \- \\
      \RETURN $(\mathbf{p}, \mathbf{i})$;
    \- \\[\medskipamount]
    Function $\id{new-session}^{H_I(\cdot, \cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
        (\mathbf{p}, x, i, j, s)$: \+ \\
      $X \gets \mathbf{p}[i]$;
      $X' \gets \mathbf{p}[j]$;
      $C \gets \emptyset$; \\
      $r \getsr \gf{q}$;
      $R \gets r P$;
      $Y \gets r X'$; \\
      \SEND $(\cookie{pre-challenge}, R)$;
    \- \\[\medskipamount]
    Function $\id{message}^{H_I(\cdot, \cdot, \cdot, \cdot, \cdot), H_K(\cdot)}
        (\tau, \mu)$: \+ \\
      \IF $\tau = \cookie{pre-challenge}$ \THEN
        $\id{msg-pre-challenge}(\mu)$; \\
      \ELSE \IF $\tau = \cookie{challenge}$ \THEN
        $\id{msg-challenge}(\mu)$; \\
      \ELSE \IF $\tau = \cookie{response}$ \THEN $\id{msg-response}(\mu)$;
    \next
    Function $\id{msg-pre-challenge}(\mu)$: \+ \\
      $R' \gets \mu$; \\
      $h \gets H_I(R', X, s, R, Y)$; \\
      $c \gets r \xor h$; \\
      \SEND $(\cookie{challenge}, R, c)$;
    \- \\[\medskipamount]
    Function $\id{msg-challenge}(\mu)$: \+ \\
      $(R', c') \gets \mu$; \\
      $Y' \gets x R'$; \\
      $h' \gets H_I(R, X', s, R', Y')$; \\
      $r' \gets c' \xor h'$; \\
      \IF $R' \ne r' P$ \THEN \RETURN; \\
      $C \gets C \cup \{R'\}$; \\
      $Z \gets r R'$; \\
      $(K_0, K_1) \gets H_K(Z)$; \\
      $\chi \gets E_{K_0}(Y')$; \\
      \SEND $(\cookie{response}, R, \chi)$;
    \- \\[\medskipamount]
    Function $\id{msg-response}(\mu)$: \+ \\
      $(R', \chi') \gets \mu$; \\
      \IF $R' \notin C$ \THEN \RETURN; \\
      $Z \gets r R'$; \\
      $(K_0, K_1) \gets H_K(Z)$; \\
      $Y' \gets D_{K_0}(\chi')$; \\
      \IF $Y' \ne Y$ \THEN \RETURN; \\
      \OUTPUT $K_1$;
      \STOP;
  \end{program}

  \caption{Deniable key-exchange: formalization of $\Wdkx$}
  \label{fig:wdkx-formal}
\end{figure}

The security of this variant is given by the following theorem, whose proof
is \ifshort given in the full version of this paper\else in
appendix~\ref{sec:sk2-proof}\fi.
\begin{theorem}[SK-security of $\Wdkx$]
  \label{thm:sk2}
  Let $G$ be a cyclic group.  Let $\E = (\kappa, E, D)$ be a symmetric
  encryption scheme.  Then
  \[ \InSec{sk}(\Wdkx^{G, \E}; t, n, q_S, q_M, q_I, q_K) =
       \InSec{sk}(\Wkx^{G, \E}; t, n, q_S, q_M, q_I, q_K)
  \]
\end{theorem}

\subsubsection{Deniability of the Wrestlers protocols}
In order to quantify the level of deniability our protocols provide, we shall
impose a limit on the auxiliary input to the adversary.  In particular, we
shall use $\mathcal{U}$ of definition~\ref{def:sim} to count the number of
\emph{challenges} in the auxiliary input.  That is, $\mathcal{U} = n_C$ is
the number of tuples $(i, j, s, R', R, c)$ for which there is an $r$ such
that $R = r P$ and $c = r \xor H_I(R', X_j, s, R, r X_i)$ (or without the
$R'$ for $\Wkx$).

With this restriction in place, we can state the following theorem about the
deniability of our protocols.
\begin{theorem}[Deniability of $\Wkx$ and $\Wdkx$]
  \label{thm:denial}
  There exist simulators $S_\Wkx$ and $S_\Wdkx$ such that
  \[ \InSec{sim}(W_{\Wkx^{G, \E}}, I_{\Wkx^{G, \E}}, S_{\Wkx^{G, \E}};
       t_D, t_A, \mathcal{Q}_D, \mathcal{Q}_A, (q_S, q_M), 0) \le
       \frac{q_M}{2^{\ell_I}}
  \]
  and
  \iffancystyle\[\else\begin{spliteqn*}\fi
    \InSec{sim}(W_{\Wdkx^{G, \E}}, I_{\Wdkx^{G, \E}}, S_{\Wdkx^{G, \E}};
      t_D, t_A, \mathcal{Q}_D, \mathcal{Q}_A, (q_S, q_M), n_C) \le
    \iffancystyle\else\\\fi
    \frac{n_C q_S}{\#G} +
    \frac{q_M}{2^{\ell_I}}.
  \iffancystyle\]\else\end{spliteqn*}\fi
  The running time of the simulators is $O(t_A) + O(\mathcal{Q}_A q_M)$.
\end{theorem}
\begin{longproof}{The proof of this theorem can be found in the full version
    of this paper.}
  The simulators $S_\Wkx$ and $S_\Wdkx$ are very similar.  We describe both
  here.  Both are fake-world simulators, working as follows.
  \begin{enumerate}
  \item Initially, it constructs simulated parties $P_i$, for $0 \le i < n$,
    giving each the public key $X_i$ from the common input.
  \item Suppose the adversary requests creation of a new session $S = (P_i,
    P_j, s)$.  Then the simulator creates a new session, including a random
    value $r_S \inr \gf{q}$, and computes $R_S = r_S P$, and $Y_S = r_S
    X_j$.  For $\Wdkx$, it sends the message $(\cookie{pre-challenge}, R_S)$;
    for $\Wkx$, it additionally computes $h = H_I(X_i, s, R_S, Y_S)$ and
    sends $(\cookie{challenge}, R_S, r_S \xor h)$.
  \item Suppose, for $\Wdkx$, the adversary sends a message
    $(\cookie{pre-challenge}, R')$ to a session~$S = (P_i, P_j, s)$.  The
    simulator computes $h = H_I(R', X_i, s, R_S, Y_S)$, and sends
    $(\cookie{challenge}, R_S, r_S \xor h)$.
  \item Suppose the adversary sends a message $(\cookie{challenge}, R', c')$
    to session $S = (P_i, P_j, s)$.  The simulator doesn't know $x_i$.
    \begin{enumerate}
    \item If $R' = R_T$ for some other simulated session $T$, then the
      simulator knows $r_T$ such that $R_T = r_T P$.  Let $Y' = r_T X_i$.
      The simulator computes $h = H_I(X_j, s, R', Y')$ (resp.\ $h = H_I(R_S,
      X_j, s, R', Y')$) for $\Wkx$ (resp.\ $\Wdkx$) and checks that $r_T = c'
      \xor h$.  If not, the simulator discards the message.  Otherwise, it
      computes $(K_0, K_1) = H_K(r_S R')$, and sends the message
      $(\cookie{response}, R_S, E_{K_0}(Y'))$.
    \item \label{en:simextract} Otherwise the simulator runs the extractor
      $T_\Wident$ on the adversary's history of queries $H_I(X_j, s, R',
      \cdot)$ (resp.\ $H_I(R_S, X_j, s, R', \cdot)$) for $\Wkx$ (resp.\
      $\Wdkx$).  The extractor returns $(r', Y')$.  If $Y' = \bot$ then the
      simulator ignores the message.  Otherwise, the simulator computes
      $(K_0, K_1) = H_K(r_S R')$ and sends back $(\cookie{response}, R_S,
      E_{K_0}(Y'))$.
    \end{enumerate}
  \item Suppose the adversary sends a message $(\cookie{response}, R', \chi)$
    to session $S = (P_i, P_j, s)$.  The simulator computes $(K_0, K_1) =
    H_K(r_S R')$, and decrypts $Y' = D_{K_0}(\chi)$.  If $Y' \ne Y_S$ then
    the simulator discards the message.  Otherwise, it makes the simulated
    session complete, and outputs key $K_1$.
  \item Finally, if the adversary reveals a session's state, the simulator
    reveals $r_S$ as required; if the adversary reveals a session-key, the
    simulator reveals the $K_1$ it output.
  \end{enumerate}
  The only point where the simulation fails to be perfect is in
  \ref{en:simextract}.  Let $R'$ and $c'$ be the values from an incoming
  challenge message to session $S = (P_i, P_j, s)$.  Let $r'$ be such that
  $R' = r' P$ and let $Y' = r' X_i$.  If a random-oracle query $H_I(X_j, s,
  R', Y')$ (or $H_I(R_S, X_j, s, R', Y')$ for $\Wdkx$) has been issued, then
  there are a number of possibilities.  Let $h'$ be the result of this query.
  \begin{itemize}
  \item The adversary made this query.  Then the extractor will find it and
    return $Y'$ if $c' = h' \xor r'$, or $\bot$ otherwise.
  \item Some simulated session $U = (P_{i'}, P_{j'}, s')$ made this query.
    But simulated sessions only make $H_I$-queries when constructing
    challenges, so $R' = R_U$ for some session~$U$.  But the simulator does
    something different in that case.
  \item In $\Wdkx$, the quadruple $(s, R_S, R', c')$ came from the
    adversary's auxiliary input.  In this case the simulator must fail.  But
    $R_S = r_S P$, and $r_S$ was chosen uniformly at random.  If there are at
    most $n_C$ challenge sets in the auxiliary input then this happens with
    probability at most $n_C/\#G$ for any given session.
  \end{itemize}
  We conclude that the simulator fails with probability
  \[ \frac{q_M}{2^{\ell_I}} + \frac{q_S n_C}{\#G}. \]
  (Note that we only consider $n_C = 0$ for $\Wkx$.)  No adversary can
  distinguish the simulator from a real interaction unless the simulator
  fails, and the simulator is a fake-world simulator.  We therefore apply
  proposition~\ref{prop:fakesim}; the theorem follows.
\end{longproof}

\ifshort\else
\subsection{Practical issues}
\label{sec:practice}

\subsubsection{Denial of service from spoofers}
The adversary we considered in~\ref{sec:um} is very powerful.  Proving
security against such a powerful adversary is good and useful.  However,
there are other useful security properties we might like against weaker
adversaries.

Eavesdropping on the Internet is actually nontrivial.  One needs control of
one of the intermediate routers between two communicating parties.  (There
are tricks one can play involving subversion of DNS servers, but this is also
nontrivial.)  However, injecting packets with bogus source addresses is very
easy.

Layering the protocol over TCP \cite{rfc793} ameliorates this problem because
an adversary needs to guess or eavesdrop in order to obtain the correct
sequence numbers for a spoofed packet; but the Wrestlers protocol is
sufficiently simple that we'd like to be able to run it over UDP
\cite{rfc768}, for which spoofing is trivial.

Therefore, it's possible for anyone on the 'net to send Alice a spurious
challenge message $(R, c)$.  She will then compute $Y = a R$, recover $r' = c
\xor H_I(\ldots, R, Y)$, check that $R = r' P$, and so on.  That's at least
two scalar multiplications to respond to a spoofed packet, and even with very
efficient group operations, coping with this kind of simple denial-of-service
attack might be difficult.

A straightforward solution is to use the deniable variant of the protocol,
and require each challenge to quote the matching session's pre-challenge
$R'$.  That is, upon receiving a $(\cookie{pre-challenge}, R')$, the
session sends $(\cookie{challenge}, R', R, c)$.  Alice then rejects any
\emph{incoming} challenge message which doesn't quote her current challenge
value.  Now only eavesdroppers can force her to perform expensive
computations.

Indeed, one need not quote the entire challenge $R'$: it suffices to send
some short hard-to-guess hash of it, maybe just the bottom 128 bits or so.
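The quoting check is cheap to implement.  Here is a minimal Python sketch;
the choice of SHA-256 truncated to 128 bits and the helper names are ours,
purely for illustration.

```python
# Sketch: quote only a short hash of the pre-challenge R', and reject
# incoming challenges whose quote doesn't match our current challenge value.
import hashlib, hmac, secrets

def quote(R):
    # Short, hard-to-guess digest of the challenge point (128 bits here).
    return hashlib.sha256(repr(R).encode()).digest()[:16]

def accept_challenge(quoted, R_current):
    # Constant-time comparison, so the check itself leaks nothing useful.
    return hmac.compare_digest(quoted, quote(R_current))

R_alice = secrets.randbits(127)   # Alice's current challenge point

# A genuine partner (or an eavesdropper) can quote correctly; a blind
# spoofer, who never saw R_alice, almost certainly cannot.
assert accept_challenge(quote(R_alice), R_alice)
assert not accept_challenge(quote(secrets.randbits(127)), R_alice)
```

Only after this cheap comparison succeeds does Alice pay for the two scalar
multiplications of the full challenge check.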

This can't reduce security.  Consider any adversary attacking this protocol
variant.  We can construct an adversary which attacks the original protocol
just as efficiently.  The new adversary attaches fake $R'$ values to
challenges output by other parties, and strips them off on delivery,
discarding messages with incorrect $R'$ values.

\subsubsection{Key confirmation}
Consider an application which uses the Wrestlers protocol to re-exchange keys
periodically.  The application can be willing to \emph{receive} incoming
messages using the new key as soon as the key exchange completes
successfully; however, it should refrain from \emph{sending} messages under
the new key until it knows that its partner has also completed.  The partner
may not have received the final response message, and therefore be unwilling
to accept a new key; it will therefore (presumably) reject incoming messages
under this new key.

While key confirmation is unnecessary for \emph{security}, it has
\emph{practical} value, since it solves the above problem.  If the
application sends a \cookie{switch} message when it `completes', it can
signal its partner that it is indeed ready to accept messages under the new
key.  Our implementation sends $(\cookie{switch-rq}, E_{K_0}(H_S(0, R, R')))$
as its switch message; the exact contents aren't important.  Our
retransmission policy (below) makes use of an additional message
\cookie{switch-ok}, which can be defined similarly.

It's not hard to show that this doesn't adversely affect the security of the
protocol, since the encrypted message is computed only from public values.
In the security proof, we modify the generation of \cookie{response}
messages, so that the plaintexts are a constant string rather than the true
responses, guaranteeing that the messages give no information about the
actual response.  To show this is unlikely to matter, we present an adversary
attacking the encryption scheme by encrypting either genuine responses or
fake constant strings.  Since the adversary can't distinguish which is being
encrypted (by the definition of IND-CCA security,
definition~\ref{def:ind-cca}), the change goes unnoticed.  In order to
incorporate our switch messages, we need only modify this adversary to
implement the modified protocol.  This is certainly possible, since the
messages contain (hashes of) public values.  We omit the details.

However, while the extra message doesn't affect the security of our protocol,
it would be annoying if an adversary could forge the switch request message,
since this would be a denial of service.  In the strong adversarial model,
this doesn't matter, since the adversary can deny service anyway, but it's a
concern against less powerful adversaries.  Most IND-CCA symmetric encryption
schemes also provide integrity of plaintexts
\cite{bellare-namprempre-2000:authn-enc-notions} (e.g., the
encrypt-then-MAC generic composition approach
\cite{bellare-namprempre-2000:authn-enc-notions,%
  krawczyk-2001:order-enc-authn},
and the authenticated-encryption modes of
\cite{rogaway-2001:ocb,bellare-2004:eax,%
  mcgrew-viega-2004:gcm-security-performance}),
so this isn't a great imposition.
\subsubsection{Optimization and piggybacking}
We can optimize the number of messages sent by combining them.  Here's one
possible collection of combined messages:
\begin{description}
\item [\cookie{pre-challenge}] $R$
\item [\cookie{challenge}] $R'$, $R$, $c = r \xor H_I(R', X, s, R, r X')$
\item [\cookie{response}] $R'$, $R$, $c$, $E_{K_0}(x R')$
\item [\cookie{switch}] $R'$, $E_{K_0}(x R', H_S(0, R, R'))$
\item [\cookie{switch-ok}] $R'$, $E_{K_0}(H_S(1, R, R'))$
\end{description}
The combination is safe:
\begin{itemize}
\item the \cookie{switch} and \cookie{switch-ok} messages are safe by the
  argument above; and
\item the other recombinations can be performed and undone in a `black box'
  way, by an appropriately defined SK-security adversary.
\end{itemize}

\subsubsection{Unreliable transports}
The Internet User Datagram Protocol (UDP) \cite{rfc768} is a simple,
unreliable protocol for transmitting datagrams.  However, it's very
efficient, and rather attractive as a transport for datagram-based
applications such as virtual private networks (VPNs).  Since UDP is a
best-effort rather than a reliable transport, it can occasionally drop
packets.  Therefore it is necessary for a UDP application to be able to
retransmit messages which appear to have been lost.

We recommend the following simple retransmission policy for running the
Wrestlers protocol over UDP.
\begin{itemize}
\item Initially, send out the \cookie{pre-challenge} message every minute.
\item On receipt of a \cookie{pre-challenge} message, send the corresponding
  full \cookie{challenge}, but don't retain any state.
\item On receipt of a (valid) \cookie{challenge}, record the challenge value
  $R'$ in a table, together with $K = (K_0, K_1)$ and the response $Y' = x
  R'$.  If the table is full, overwrite an existing entry at random.  Send
  the corresponding \cookie{response} message, and retransmit it every ten
  seconds or so.
\item On receipt of a (valid) \cookie{response}, discard any other
  challenges, and stop sending \cookie{pre-challenge} and \cookie{response}
  retransmits.  At this point, the basic protocol described above would
  \emph{accept}, so the key $K_1$ is known to be good.  Send the
  \cookie{switch} message, including its response to the (now known-good)
  sender's challenge.
\item On receipt of a (valid) \cookie{switch}, send back a \cookie{switch-ok}
  message and stop retransmissions.  It is now safe to start sending messages
  under $K_1$.
\item On receipt of a (valid) \cookie{switch-ok}, stop retransmissions.  It
  is now safe to start sending messages under $K_1$.
\end{itemize}
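The policy above can be sketched as a small state machine.  The state names
and the transition table here are our own labels for the bullets, not part
of the paper's specification.

```python
# Sketch of the UDP retransmission policy as a state machine.  Each entry
# maps (state, received message) to (next state, action to take); unknown
# or out-of-order messages are simply ignored.
def step(state, msg):
    """Return (new_state, action) for a received message, or None to ignore."""
    table = {
        ('start',     'pre-challenge'): ('start',     'send challenge (stateless)'),
        ('start',     'challenge'):     ('responded', "record R', K, Y'; send response, retransmit ~10s"),
        ('responded', 'response'):      ('switching', 'stop retransmits; K_1 is good; send switch'),
        ('switching', 'switch'):        ('done',      'send switch-ok; safe to send under K_1'),
        ('switching', 'switch-ok'):     ('done',      'stop retransmissions; safe to send under K_1'),
    }
    return table.get((state, msg))

# One successful run: receive a challenge, then a response, then switch-ok.
state = 'start'
for msg in ['challenge', 'response', 'switch-ok']:
    state, action = step(state, msg)
assert state == 'done'
assert step('start', 'bogus') is None    # unrecognized messages are dropped
```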

\subsubsection{Key reuse}
Suppose our symmetric encryption scheme $\E$ is not only IND-CCA secure
(definition~\ref{def:ind-cca}) but also provides integrity of plaintexts
\cite{bellare-namprempre-2000:authn-enc-notions} (or, alternatively, is an
AEAD scheme \cite{rogaway-2002:aead}).  Then we can use it to construct a
secure channel, by including message type and sequence number fields in the
plaintexts, along with the message body.  If we do this, we can actually get
away with just the one key $K = H_K(Z)$ rather than both $K_0$ and $K_1$.

To do this, it is essential that the plaintext messages (or additional data)
clearly distinguish between the messages sent as part of the key-exchange
protocol and messages sent over the `secure channel'.  Otherwise, there is a
protocol-interference attack: an adversary can replay key-exchange
ciphertexts to insert the corresponding plaintexts into the channel.
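The framing can be sketched as follows: an authenticated header carries a
message-type field and a sequence number, so a key-exchange ciphertext can
never be accepted as a channel message.  The primitives below (HMAC-SHA256
over an XOR keystream in an encrypt-then-MAC arrangement) and the exact
header layout are our own toy choices, not the paper's.

```python
# Sketch: single-key secure channel with type and sequence fields bound into
# every frame, preventing protocol-interference replays.
import hashlib, hmac, struct

TYPE_KX, TYPE_DATA = 0, 1    # distinct codes for key-exchange vs channel data

def keystream(key, header, n):
    # Derive n bytes of keystream bound to the (type, seq) header.
    out, ctr = b'', 0
    while len(out) < n:
        out += hashlib.sha256(key + header + struct.pack('>I', ctr)).digest()
        ctr += 1
    return out[:n]

def seal(key, mtype, seq, body):
    header = struct.pack('>BQ', mtype, seq)          # 1-byte type, 8-byte seq
    ct = bytes(a ^ b for a, b in zip(body, keystream(key, header, len(body))))
    tag = hmac.new(key, header + ct, hashlib.sha256).digest()
    return header + ct + tag

def open_(key, mtype, seq, frame):
    header, ct, tag = frame[:9], frame[9:-32], frame[-32:]
    if header != struct.pack('>BQ', mtype, seq):     # wrong type or replayed seq
        return None
    if not hmac.compare_digest(tag, hmac.new(key, header + ct, hashlib.sha256).digest()):
        return None
    return bytes(a ^ b for a, b in zip(ct, keystream(key, header, len(ct))))

K = b'K' * 32
frame = seal(K, TYPE_KX, 0, b'key-exchange payload')
assert open_(K, TYPE_KX, 0, frame) == b'key-exchange payload'
assert open_(K, TYPE_DATA, 0, frame) is None   # replay into the channel fails
```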

We offer a sketch proof of this claim in appendix~\ref{sec:sc-proof}.
\fi

%%%--------------------------------------------------------------------------
\section{Conclusions}
\label{sec:conc}

We have presented new protocols for identification and authenticated
key-exchange, and proven their security.  We have shown them to be efficient
and simple.  We have also shown that our key-exchange protocol is deniable.
Finally, we have shown how to modify the key-exchange protocol for practical
use, and proven that this modification is still secure.

%%%--------------------------------------------------------------------------
\section{Acknowledgements}

The Wrestlers Protocol is named after the Wrestlers pub in Cambridge where
Clive Jones and I worked out the initial design.

%%%--------------------------------------------------------------------------

\bibliography{mdw-crypto}

%%%--------------------------------------------------------------------------
\ifshort\def\next{\end{document}}\expandafter\next\fi
\appendix
\section{Proofs}

\subsection{Proof of theorem~\ref{thm:sk}}
\label{sec:sk-proof}

3111Before we embark on the proof proper, let us settle on some notation. Let
3112$P_i$ be a party. Then we write $x_i$ for $P_i$'s private key and $X_i = x_i
3113P$ is $P_i$'s public key. Let $S = (P_i, P_j, s)$ be a session. We write
3114$r_S$ for the random value chosen at the start of the session, and $R_S$,
3115$c_S$ etc.\ are the corresponding derived values in that session.
3116
a6e375a6
MW
3117The proof uses a sequence of games. For each game~$\G{i}$, let $V_i$ be the
3118event that some pair of unexposed, matching sessions both complete but output
3119different keys, and let $W_i$ be the event that the adversary's final output
3120equals the game's hidden bit~$b^*$. To save on repetition, let us write
3121\[ \diff{i}{j} = \max(|\Pr[V_i] - \Pr[V_j]|, |\Pr[W_i] - \Pr[W_j]|). \]
3122Obviously,
3123\[ \diff{i}{j} \le \sum_{i\le k<j} \diff{k}{k + 1}. \]
3124
3125Here's a quick overview of the games we use.
3126\begin{itemize}
3127\item $\G0$ is the original SK-security game.
3128\item In $\G1$, we abort the game unless all parties' public keys are
3129 distinct. Since keys are generated at random, parties are unlikely to be
3130 given the same key by accident.
3131\item In $\G2$, we change the way sessions respond to challenge messages, by
3132 using the extractor to fake up answers to most challenges. Since the
3133 extractor is good, the adversary is unlikely to notice.
3134\item In $\G3$, we abort the game if the adversary ever queries $H_K(\cdot)$
3135 on the Diffie-Hellman secret $r_S r_T P$ shared between two unexposed
3136 matching sessions. We show that this is unlikely to happen if the
3137 Diffie-Hellman problem is hard.
3138\item In $\G4$, we abort the game if any unexposed session \emph{accepts} a
3139 response message which wasn't sent by a matching session.
3140\end{itemize}
3141Finally, we show that the adversary has no advantage in $\G4$. The theorem
3142follows.
3143
3144For ease of comprehension, we postpone the detailed proofs of some of the
3145steps until after we conclude the main proof.
3146
3147Let $A$ be a given adversary which runs in time~$t$, creates at most~$q_S$
3148sessions, delivers at most~$q_M$ messages, and makes at most~$q_I$ queries to
3149its $H_I(\cdot, \cdot, \cdot, \cdot)$ oracle and at most~$q_K$ queries to its
3150$H_K(\cdot)$ oracle. Let $\G0$ be the original SK-security game of
3151definition~\ref{def:sk}, played with adversary~$A$.
3152
Game~$\G1$ is the same as game~$\G0$ except, if the initialization function
reports two parties as having the same public key (i.e., we have $X_i = X_j$
for some $0 \le i < j < n$), we stop the game immediately and without
crediting the adversary with a win.  This only happens when the corresponding
private keys are equal, i.e., $x_i = x_j$, and since the initialization
function chooses private keys uniformly at random, this happens with
probability at most $\binom{n}{2}/\#G$.  If this doesn't happen, the game is
identical to $\G0$, so we can apply lemma~\ref{lem:shoup} and see that
\begin{equation}
  \label{eq:sk-g0-g1}
  \diff{0}{1} \le \frac{1}{\#G} \binom{n}{2} = \frac{n (n - 1)}{2 \#G}.
\end{equation}
In game~$\G1$ and onwards, we can assume that public keys for distinct
parties are themselves distinct.  Note that the game now takes at most
$O(q_I)$ times longer to process each message delivered by the adversary.
This is where the $O(q_I q_M)$ term comes from in the theorem statement.

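To get a feel for the size of the $\G0$--$\G1$ hop, the pair-collision bound
$\binom{n}{2}/\#G$ can be evaluated numerically; the following Python sketch
is purely illustrative (the group order $2^{256}$ is a placeholder, not a
parameter fixed by the protocol):

```python
from math import comb

def key_collision_bound(n, group_order):
    """Upper bound binom(n, 2)/#G on the probability that two of n
    uniformly random private keys coincide: a union bound over all
    unordered pairs of parties on the event x_i = x_j."""
    return comb(n, 2) / group_order

# Even with a thousand parties, a group of order ~2^256 makes the
# bound negligible.
bound = key_collision_bound(1000, 2**256)
```

With $n = 1000$ and $\#G \approx 2^{256}$ the bound is below $2^{-200}$,
which is why this hop costs essentially nothing.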
Game~$\G2$ is the same as game~$\G1$, except that we change the way that we
make parties respond to \cookie{challenge} messages $(\cookie{challenge}, R,
c)$.  Specifically, suppose that $S = (P_i, P_j, s)$ is a session.
\begin{itemize}
\item Suppose $T = (P_j, P_i, s)$ is the matching session of $S$.  The game
  proceeds as before if $(R, c) = (R_T, c_T)$ is the challenge issued by $T$.
\item Otherwise, we run the extractor $T_\Wident$ on the adversary's history
  so far of oracle queries $H_I(X_i, s, R, \cdot)$ to determine a pair $(r,
  Y)$.  If $r = \bot$ then we discard the message.  Otherwise, we add $R$ to
  the list~$C$, and return a fake response to the adversary by computing $K =
  H_K(r R_S)$ and handing the adversary $(\cookie{response}, R_S, E_K(Y))$.
\end{itemize}
The following lemma shows how this affects the adversary's probabilities of
winning.
\begin{lemma}
  \label{lem:sk-g1-g2}
  \begin{equation}
    \label{eq:sk-g1-g2}
    \diff{1}{2} \le \frac{q_M}{2^{\ell_I}}.
  \end{equation}
\end{lemma}

Let us say that a session $S = (P_i, P_j, s)$ is \emph{ripe} if
\begin{itemize}
\item there is a matching session $T = (P_j, P_i, s)$, and
\item $S$ is unexposed.
\end{itemize}
Suppose that $S$ is a ripe session, and that it has a matching session~$T$:
let $Z_S = Z_T = r_S r_T P$.

Game~$\G3$ is the same as $\G2$, except that the game is immediately aborted
if ever the adversary queries its random oracle $H_K(\cdot)$ at a value $Z_S$
for any ripe session~$S$.  The following lemma shows how this affects the
adversary's probabilities of winning.
\begin{lemma}
  \label{lem:sk-g2-g3}
  For some $t'$ within the bounds given in the theorem statement we have
  \begin{equation}
    \label{eq:sk-g2-g3}
    \diff{2}{3} \le q_S \InSec{mcdh}(G; t', q_K).
  \end{equation}
\end{lemma}

Game~$\G4$ is the same as $\G3$ except that the game is immediately aborted
if ever the adversary sends a response message to a ripe session~$S$ which
wasn't output by its matching session as a response to $S$'s challenge, with
the result that $S$ completes.

Let's make this more precise.  Let $U$ and $V$ be a pair of matching
sessions.  Let $C_U = (\cookie{challenge}, R_U, c_U)$ be the challenge
message sent by $U$.  Let $M_V$ be the set of messages which $V$ has sent
upon delivery of $C_U$.  Then, in $\G4$, we abort the game if, for any pair
$S$ and~$T$ of matching, unexposed sessions, $S$ has completed as a result of
being sent a message $\mu \notin M_T$.  We have the following lemma.
\begin{lemma}
  \label{lem:sk-g3-g4}
  For a $t'$ within the stated bounds, we have
  \begin{equation}
    \label{eq:sk-g3-g4}
    \diff{3}{4} \le q_S \bigl( \InSec{ind-cca}(\E; t', q_M, q_M) +
      n \cdot \InSec{mcdh}(G; t', q_M + q_I) \bigr).
  \end{equation}
\end{lemma}

Finally, let us consider the state we're in with $\G4$.
\begin{itemize}
\item No ripe session completes except as a result of the adversary
  faithfully delivering messages between it and its matching session.
\item The adversary never queries $Z_S$ for any ripe session~$S$.  If we set
  $K_S = (K_{S, 0}, K_{S, 1}) = H_K(Z_S)$, then $K_{S, 1}$ is the key output
  by $S$ when it completes.
\item If $S$ and $T$ are matching ripe sessions, then $K_S = K_T$, since $Z_S
  = r_S R_T = r_T R_S = Z_T$.
\item For any ripe session~$S$, $K_{S, 1}$ is uniformly distributed in
  $\Bin^\kappa$ and independent of the adversary's view.
\item If $S = (P_i, P_j, s)$ and $T = (P_j, P_i, s)$ are matching ripe
  sessions, then $Z_S$ depends only on $r_S$ and $r_T$.  Hence, once $S$
  and~$T$ complete, and erase their states, $Z_S$ is independent of
  everything except the messages sent between the two sessions.  In
  particular, $Z_S$ is independent of the long-term secrets $x_i$ and~$x_j$,
  so if either player is later corrupted, the key $K_{S, 1}$ remains
  independent of the adversary's view.
\item Hence, the keys output by unexposed sessions are indistinguishable from
  freshly-generated random strings, and remain so indefinitely.
\end{itemize}
We conclude that, for any adversary $A$,
\begin{equation}
  \label{eq:sk-g4}
  \Pr[V_4] = 0 \qquad \text{and} \qquad \Pr[W_4] = \frac{1}{2}.
\end{equation}
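As a sanity check on the bookkeeping, the per-hop bounds can be assembled
numerically.  The following Python sketch mirrors the algebra; the insecurity
terms are supplied as plain numbers for illustration and are not derived from
any concrete scheme:

```python
def sk_advantage_bound(n, group_order, q_S, q_M, ell_I,
                       insec_indcca, insec_mcdh_qK, insec_mcdh_qMqI):
    """Assembles 2*(diff01 + diff12 + diff23 + diff34) from the four
    game-hop bounds, with the InSec terms given as numbers."""
    diff01 = n * (n - 1) / (2 * group_order)       # eq. sk-g0-g1
    diff12 = q_M / 2**ell_I                        # eq. sk-g1-g2
    diff23 = q_S * insec_mcdh_qK                   # eq. sk-g2-g3
    diff34 = q_S * (insec_indcca + n * insec_mcdh_qMqI)  # eq. sk-g3-g4
    return 2 * (diff01 + diff12 + diff23 + diff34)
```

Expanding the return value recovers exactly the displayed bound: the factor
two turns $n(n-1)/2\#G$ into $n(n-1)/\#G$ and $q_M/2^{\ell_I}$ into
$2q_M/2^{\ell_I}$.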
Putting equations~\ref{eq:sk-g0-g1}--\ref{eq:sk-g4} together, we find
\begingroup \splitright=4em minus 4em
\begin{spliteqn}
  \Adv{sk}{\Wident^{G, \E}}(A) \le
    2 q_S \bigl(\InSec{ind-cca}(\E; t', q_M, q_M) + {} \\
    \InSec{mcdh}(G; t', q_K) +
    n \,\InSec{mcdh}(G; t', q_M + q_I) \bigr) + {}
    \frac{n (n - 1)}{\#G} +
    \frac{2 q_M}{2^{\ell_I}}.
\end{spliteqn} \endgroup
The theorem follows, since $A$ was chosen arbitrarily.

\begin{proof}[Proof of lemma~\ref{lem:sk-g1-g2}]
  The two games $\G1$ and~$\G2$ differ only in whether they accept or reject
  particular challenge messages $(\cookie{challenge}, R, c)$.

  We claim firstly that no message is \emph{accepted} by $\G2$ which would
  have been rejected by $\G1$.  To prove the claim, it is sufficient to note
  that the extractor's output, if not $\bot$, is always correct, and hence if
  $\G2$ accepts a message then $\G1$ would have done so too.

  Since $\G2$ also behaves identically when the adversary submits to~$S$ the
  challenge from the matching session~$T$, we have nothing to prove in this
  case.  Let $F$ be the event that the adversary submits a message
  $(\cookie{challenge}, R, c)$ to a session~$S$ which $S$ would have accepted
  in $\G1$ but would be rejected by the new rule in~$\G2$.  By
  lemma~\ref{lem:shoup} we have $\diff{1}{2} \le \Pr[F]$.  To prove the
  current lemma, therefore, we must show that $\Pr[F] \le q_M/2^{\ell_I}$.

  Rather than consider individual challenge messages, we consider
  \emph{classes} of messages.  We shall refer to a quadruple~$\Cid = (i, j,
  s, R)$ as a \emph{class-id}, and define some useful functions:
  \begin{itemize}
  \item the class's \emph{session} $\Csession(\Cid) = (P_i, P_j, s)$;
  \item the class's \emph{index} $\Cindex(\Cid)$ is $r \in I$ where $R = r
    P$, which is well-defined by lemma~\ref{lem:unique-dl};
  \item the class's \emph{query} $\Cquery(\Cid) = (X_j, s, R, x_i R)$;
  \item the class's \emph{hash} $\Chash(\Cid) = H_I(\Cquery(\Cid)) = H_I(X_j,
    s, R, x_i R)$;
  \item the class's \emph{check-value} $\Ccheck(\Cid) = \Chash(\Cid) \xor
    \Cindex(\Cid)$;
  \item the class's \emph{check-set} $\Ccheckset(\Cid)$ is the set of
    check-values $c$ such that a message $(\cookie{challenge}, R, c)$ was
    sent to session $S = (P_i, P_j, s)$; and
  \item the class's \emph{count} $\Ccount(\Cid) = |\Ccheckset(\Cid)|$.
  \end{itemize}

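  The check-value machinery above can be made concrete with a toy model.  In
  the Python sketch below, a multiplicative group modulo a Mersenne prime
  stands in for $G$ (insecure, and chosen purely so the algebra runs), and
  $H_I$ is SHA-256 truncated to 64 bits; all parameter choices are
  illustrative placeholders, not the paper's:

```python
import hashlib

P_MOD = 2**61 - 1   # toy group: integers mod a Mersenne prime
G_GEN = 3           # generator, playing the role of P

def point(x):
    """'x P' in the paper's additive notation."""
    return pow(G_GEN, x, P_MOD)

def h_i(X, s, R, Y):
    """H_I(X, s, R, Y), truncated to 64 bits."""
    data = f"{X}|{s}|{R}|{Y}".encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def issue_challenge(r, x_i, X_j, s):
    """P_i picks index r and sends (R, c) with c = Cindex xor Chash;
    the issuer's query uses r X_j = x_j R."""
    R = point(r)
    c = r ^ h_i(point(x_i), s, R, pow(X_j, r, P_MOD))
    return R, c

def check_challenge(R, c, x_j, X_i, s):
    """P_j recomputes Chash via x_j R, recovers the candidate index
    r = c xor Chash, and accepts only if R really is r P."""
    r = c ^ h_i(X_i, s, R, pow(R, x_j, P_MOD))
    return point(r) == R
```

  A challenge verifies exactly when $c = \Ccheck(\Cid)$; flipping any bit of
  $c$ shifts the recovered index and the final equality fails, which is the
  mechanism the badness analysis below relies on.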
  Consider any class-id~$\Cid = (i, j, s, R)$.  A natural question which
  arises is: which participants have issued $\Cid$'s query, i.e., queried
  $H_I$ at $\Cquery(\Cid)$?

  We can characterise the $H_I(\cdot, \cdot, \cdot, \cdot)$ queries of a
  session $U = (P_{i'}, P_{j'}, s')$ as follows:
  \begin{itemize}
  \item computing the check-value for the challenge $R_U$ by querying
    $H_I(X_{i'}, s', R_U, r_U X_{j'})$, and
  \item checking an incoming challenge~$R'$ by querying $H_I(X_{j'}, s', R',
    x_{i'} R')$.
  \end{itemize}
  The class~$\Cid$'s query $\Cquery(\Cid)$ is $U$'s check-value query if
  \[ (j, i, s, R) = (i', j', s', R_U) \]
  i.e., $U$ is the matching session of $\Csession(\Cid)$, and moreover $R =
  R_U$ is the challenge value issued by $U$.  For any $c \in
  \Ccheckset(\Cid)$, if $c = \Ccheck(\Cid)$ then $(\cookie{challenge}, R, c)$
  is precisely the challenge message issued by~$U$ to $\Csession(\Cid)$; the
  rules for handling this message didn't change.  However, if $c \ne
  \Ccheck(\Cid)$ then the message would have been rejected in $\G1$, and we
  have already shown that $\G2$ continues to reject all messages rejected by
  $\G1$.

  Let us say that a class-id~$\Cid = (i, j, s, R)$ is \emph{bad} if
  \begin{enumerate}
  \item the value $R$ is not the challenge issued by $\Csession(\Cid)$'s
    matching session, and
  \item the adversary has not issued $\Cid$'s query $\Cquery(\Cid)$,
    \emph{but}
  \item $\Ccheck(\Cid) \in \Ccheckset(\Cid)$, so one of the check-values
    submitted to $S$ was actually correct.
  \end{enumerate}
  We claim that our extractor will work perfectly unless some class-id is
  bad.  Certainly, if $R$ was issued by the matching session, there is
  nothing to prove; if the adversary has issued the relevant query then the
  extractor will recover $\Cindex(\Cid)$ just fine; and if $\Ccheck(\Cid)
  \notin \Ccheckset(\Cid)$ then all messages in the class would have been
  rejected by $\G1$ anyway.

  Let $B(\Cid)$ be the event that the class~$\Cid$ is bad.  We claim that
  \[ \Pr[B(\Cid)] \le \frac{\Ccount(\Cid)}{2^{\ell_I}}. \]
  The present lemma follows, since
  \[ \diff{1}{2}
       \le \Pr[F]
       \le \sum_\Cid \Pr[B(\Cid)]
       \le \sum_\Cid \frac{\Ccount(\Cid)}{2^{\ell_I}}
         = \frac{1}{2^{\ell_I}} \sum_\Cid \Ccount(\Cid)
       \le \frac{q_M}{2^{\ell_I}}
  \]
  as required.

  Now observe that, in $\G2$, sessions don't actually check incoming
  challenges in this way any more -- instead we run the extractor.  So, to
  prove the claim, we consider a class~$\Cid$ where properties~1 and~2 above
  hold.  The correct hash $\Chash(\Cid)$ is then independent of the rest of
  the game, so the probability that $\Ccheck(\Cid) \in \Ccheckset(\Cid)$ is
  precisely $\Ccount(\Cid)/2^{\ell_I}$ as required.

  This completes the proof of the lemma.
\end{proof}

\begin{proof}[Proof of lemma~\ref{lem:sk-g2-g3}]
  Let $F$ be the event that the adversary makes a query $H_K(Z_S)$ for some
  ripe session~$S$.  Since $\G3$ proceeds exactly as $\G2$ did unless $F$
  occurs, we apply lemma~\ref{lem:shoup}, which tells us that $\diff{2}{3}
  \le \Pr[F]$.  We must therefore bound this probability.

  To do this, we consider a new game~$\G3'$, which is the same as $\G3$,
  except that, at the start of the game, we choose a random number $k \inr
  \Nupto{q_S}$.  For $0 \le i < q_S$, let $S_i$ be the $i$th session created
  by the adversary.  We define $F'$ to be the event that the adversary
  queries $H_K(Z_{S_k})$ when $S_k$ is ripe.

  The lemma now follows from these two claims.

  \begin{claim}
    $\Pr[F] \le q_S \Pr[F']$.
  \end{claim}
  To see this, for any session~$S$, let $F_S$ be the event that the adversary
  queries~$H_K(Z_S)$ when $S$ is ripe.  Then
  \[ \Pr[F] \le \sum_{0\le i<q_S} \Pr[F_{S_i}]. \]
  Hence,
  \[ \Pr[F'] = \Pr[F_{S_k}] = \sum_{0\le i<q_S} \Pr[F_{S_i}] \Pr[k = i]
       = \frac{1}{q_S} \sum_{0\le i<q_S} \Pr[F_{S_i}]
       \ge \frac{\Pr[F]}{q_S}
  \]
  proving the claim.

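  The averaging argument in claim~1 is just a numerical fact about guessing a
  target uniformly; the following Python sketch checks it on arbitrary
  per-session probabilities (the probability values are illustrative):

```python
def averaging_loss(p_sessions):
    """Given per-session probabilities Pr[F_{S_i}], returns
    (Pr[F'], union bound on Pr[F]): guessing the target session
    uniformly at random loses at most a factor of q_S."""
    q_s = len(p_sessions)
    pr_f_prime = sum(p_sessions) / q_s       # (1/q_S) sum_i Pr[F_{S_i}]
    pr_f_union = min(1.0, sum(p_sessions))   # upper bound on Pr[F]
    return pr_f_prime, pr_f_union
```

  For every input, the union bound on $\Pr[F]$ is at most $q_S$ times
  $\Pr[F']$, which is exactly the inequality claim~1 asserts.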
  \begin{claim}
    For some $t' = t + O(n) + O(q_S q_M) + O(q_I) + O(q_K)$, we have
    $\Pr[F'] \le \InSec{mcdh}(G; t', q_K).$
  \end{claim}
  To prove this claim, we construct an adversary~$B$ which solves the MCDH
  problem in~$G$.  The adversary works as follows.
  \begin{enumerate}
  \item It is given a pair $(R^*, S^*) = (r^* P, s^* P)$ of group elements;
    its objective is to make a verification-oracle query $V(Z^*)$ where $Z^*
    = r^* s^* P$.
  \item It sets up a simulation of the game~$\G3'$, by running the
    $\id{init}$ function, and simulating all of the parties.  In particular,
    it chooses a random~$k \in \Nupto{q_S}$.
  \item It sets up accurate simulations of the random oracles $H_K(\cdot)$
    and $H_I(\cdot, \cdot, \cdot, \cdot)$, which choose random outputs for
    new, fresh inputs.  However, whenever $A$ queries $H_K(\cdot)$ on a group
    element $Z$, $B$ also queries $V(Z)$.
  \item It runs $A$ in its simulated game.  It responds to all of $A$'s
    instructions faithfully, until the $k$th session-creation.
  \item When creating the $k$th session $S = S_k = (P_i, P_j, s)$, $B$ has
    party~$P_i$ choose $R^*$ as its challenge, rather than choosing $r_S$ and
    setting $R_S = r_S P$.  Because it simulates all the parties, $B$ can
    compute $Y_S = x_j R^*$, which is still correct.
  \item If $A$ requests the creation of a matching session $T = (P_j, P_i,
    s)$ then $B$ has party~$P_j$ choose $S^*$ as its challenge.  Again, $B$
    computes $Y_T = x_i S^*$.
  \item If $A$ ever corrupts the parties $P_i$ or $P_j$, or reveals the
    session state of $S$ or $T$, then $B$ stops the simulation abruptly and
    halts.
  \end{enumerate}
  Adversary $B$'s running time is within the bounds required of $t'$, and $B$
  makes at most $q_K$ queries to $V(\cdot)$; we therefore have
  \[ \Pr[F'] \le \Succ{mcdh}{G}(B) \le \InSec{mcdh}(G; t', q_K) \]
  as required.
\end{proof}

\begin{proof}[Proof of lemma~\ref{lem:sk-g3-g4}]
  Let $F_4$ be the event under which we abort the game~$\G4$.  Clearly, if
  $F_4$ doesn't occur, games $\G3$ and $\G4$ proceed identically, so we can
  apply lemma~\ref{lem:shoup} to see that $\diff{3}{4} \le \Pr[F_4]$.
  Bounding $\Pr[F_4]$, however, is somewhat complicated.  We use a further
  sequence of games.

  Firstly, let $\G5$ be like $\G4$ with the exception that we choose, at
  random, an integer~$k \inr \Nupto{q_S}$.  As we did in the proof of
  lemma~\ref{lem:sk-g2-g3}, let $S_i$ be the $i$th session created by the
  adversary.  For each session~$S_i$, let $T_i$ be its matching session, if
  there is one.  We define $F_5$ to be the event that
  \begin{itemize}
  \item $S_k$ completes immediately following delivery of a message $\mu
    \notin M_{T_k}$, and
  \item $S_k$ was ripe at this point.
  \end{itemize}
  For games~$\G{i}$, for $i > 5$, we define the event $F_i$ to be the event
  corresponding to $F_5$ in $\G{i}$.  Note that if $S_k$ \emph{is} sent a
  message in $M_{T_k}$ then $S_k$ immediately completes.

  \begin{claim}
    $\Pr[F_4] \le q_S \Pr[F_5]$.
  \end{claim}
  This claim is proven exactly as we did for claim~1 of
  lemma~\ref{lem:sk-g2-g3}.

  Let~$\G6$ be the same as $\G5$ except that we change the encrypted
  responses of session~$S_k$ and its matching session~$T_k$.  Let $K^* =
  (K_0^*, K_1^*) = H_K(Z_S)$.  Then, rather than sending $(\cookie{response},
  R_S, E_{K_0^*}(Y_T))$, session~$S$ sends $(\cookie{response}, R_S,
  E_{K_0^*}(1^{\ell_G}))$.
  \begin{claim}
    $|\Pr[F_6] - \Pr[F_5]| \le \InSec{ind-cca}(\E; t', q_M, q_M).$
  \end{claim}
  To prove this claim, we construct an adversary~$B$ which attacks the
  IND-CCA security of our encryption scheme $\E$.  The adversary~$B$ works as
  follows.
  \begin{enumerate}
  \item It is given no input, but a pair of oracles $E(\cdot, \cdot)$ and
    $D(\cdot)$; the former encrypts either the left or right input, according
    to a hidden bit, and the latter decrypts ciphertexts other than those
    returned by $E(\cdot, \cdot)$.  Its goal is to guess the hidden bit.
  \item It sets up a simulation of the game~$\G5$, by running the $\id{init}$
    function, and simulating all of the parties.  In particular, it chooses a
    random $k \in \Nupto{q_S}$.
  \item It sets up accurate simulations of the random oracles $H_K(\cdot)$
    and $H_I(\cdot, \cdot, \cdot, \cdot)$.
  \item It runs $A$ in its simulated game.  It responds to all of $A$'s
    instructions faithfully, except for the matching sessions $S_k$ and
    $T_k$.  Let $S = S_k = (P_i, P_j, s)$, and $T = T_k = (P_j, P_i, s)$.
  \item Suppose $T$ is sent the message $C_S = (\cookie{challenge}, R_S,
    c_S)$.  Rather than computing $K^* = H_K(r_T R_S)$ and performing the
    encryption properly, $B$ queries its left-or-right encryption oracle
    $E(\cdot, \cdot)$ on $E(1^{\ell_G}, x_j R_S)$, and sends the resulting
    ciphertext~$\chi$ back to~$S$ as part of a message $(\cookie{response},
    R_T, \chi)$.  The set $M_T$ is precisely the set of messages constructed
    in this fashion.  (Recall that challenge messages other than $C_S$ aren't
    actually delivered to $T$, since we simulate the responses using the
    extractor, as of $\G2$.)
  \item Suppose $S$ is sent a message $M = (\cookie{response}, R_T, \chi) \in
    M_T$.  We immediately stop the simulation, and $B$ returns $0$.
  \item Suppose, instead, that $S$ is sent some message $M' =
    (\cookie{response}, R, \chi) \notin M_T$.  There are two cases to
    consider.  If $R = R_T$ then we must have $\chi$ distinct from the
    ciphertexts returned by the $E(\cdot, \cdot)$ oracle, so we can invoke
    the decryption oracle $D(\cdot)$ on $\chi$ to obtain a response $Y$.
    Alternatively, if $R \ne R_T$, we can compute the key $K = (K_0, K_1) =
    H_K(r_S R)$, and recover $Y = D_{K_0}(\chi)$.  In either case, if $Y =
    r_S X_j$ then $S$ would complete at this point: $B$ stops the simulation
    and returns $1$.
  \item If $A$ exposes $S$ (by corrupting $P_i$ or~$P_j$, or revealing $S$ or
    $T$) then we stop the simulation and $B$ returns $0$.
  \item Finally, if the game stops, either because $A$ halts, or because of
    one of the special rules introduced in earlier games, $B$ returns $0$.
  \end{enumerate}
  It is clear that $B$'s oracle queries are acceptable, since $|x_j R_S| =
  \ell_G$ by definition, and $B$ never queries $D(\cdot)$ on a ciphertext
  returned by its encryption oracle.  By the rules of~$\G3$, we know that the
  game stops immediately if $A$ ever queries $Z_S$, so the key $K^*$ is
  independent of everything in $A$'s view except the ciphertexts $\chi$
  output by $S$ and $T$.  Therefore, if the hidden bit of the IND-CCA game is
  $1$, $B$ accurately simulates $\G5$, whereas if the bit is $0$ then $B$
  accurately simulates $\G6$.  We issue no more than $q_M$ encryption or
  decryption queries.  Finally, $B$'s running time is within the bounds
  allowed for $t'$.  Therefore,
  \[ \Adv{ind-cca}{\E}(B) = \Pr[F_5] - \Pr[F_6]
       \le \InSec{ind-cca}(\E; t', q_M, q_M). \]
  We construct the adversary~$\bar{B}$ which is the same as $B$ above, except
  that $\bar{B}$ returns $0$ whenever $B$ returns~$1$, and \emph{vice versa}.
  Clearly
  \[ \Adv{ind-cca}{\E}(\bar{B})
       = (1 - \Pr[F_5]) - (1 - \Pr[F_6])
       = \Pr[F_6] - \Pr[F_5]
       \le \InSec{ind-cca}(\E; t', q_M, q_M).
  \]
  This proves the claim.

  Let $\G7$ be the same as $\G6$, except that at the start of the game we
  choose a random $m \in \Nupto{n}$, and when the adversary creates the
  session $S = S_k = (P_i, P_j, s)$, we abort the game unless $j = m$.
  Clearly we have $\Pr[F_6] = n \Pr[F_7]$.

  Finally, we can explicitly bound $\Pr[F_7]$.  In $\G6$, the adversary's
  view is independent of the correct response $Y_S = r_S X_j = x_j R_S$ to
  $S$'s challenge.  Therefore, if $A$ manages to send any message $\mu \notin
  M_T$ which causes $S$ to complete, then it has impersonated~$P_j$.
  \begin{claim}
    $\Pr[F_7] \le \InSec{mcdh}(G; t', q_M + q_I)$.
  \end{claim}
  The present lemma follows from this and the previous claims.

  To prove the claim formally, we construct an adversary~$B'$, which behaves
  as follows.
  \begin{enumerate}
  \item It is given as input a public key $X^*$ and a single challenge $(R^*,
    c^*)$, a random oracle $H^*_I(\cdot, \cdot)$, and an oracle $V(\cdot,
    \cdot)$, which verifies responses $(R, Y)$.  Its goal is to invoke
    $V(\cdot, \cdot)$ with a correct response to the challenge.
  \item It chooses a random $k \in \Nupto{q_S}$ and $m \in \Nupto{n}$.  It
    sets up a simulation of the game~$\G7$, by running the $\id{init}$
    function, and simulating all of the parties, except that it gives party
    $P_m$ the public key $X^*$.  This makes no difference, since $P_m$
    doesn't actually need to give any `honest' responses because of the
    change we made in $\G6$.
  \item It sets up accurate simulations of the random oracles $H_K(\cdot)$
    and $H_I(\cdot, \cdot, \cdot, \cdot)$, with one exception -- see below.
  \item It runs $A$ in its simulated game.  It responds to all of $A$'s
    instructions faithfully, except for the session $S_k$.  Let $S = S_k =
    (P_i, P_j, s)$, and let $T = T_k = (P_j, P_i, s)$ be its matching
    session.
  \item When session~$S$ is created, $B'$ checks that $j = m$, and if not
    stops the simulation and halts.  Otherwise, $B'$ invokes its oracle~$C()$
    to obtain a pair $(R, c)$.  Session~$S$ sends $C_S = (\cookie{challenge},
    R, c)$ as its challenge to~$T$.
  \item When $A$ makes a query $H_I(X^*, s, R, Y)$, $B'$ answers it by
    querying its \emph{own} random oracle $H^*_I(R, Y)$.
  \item When $S$ receives a message $(\cookie{response}, R, \chi)$, we
    compute $(K_0, K_1) = H_K(r_S R)$, and $Y = D_{K_0}(\chi)$.  If $Y \ne
    \bot$ then $B'$ calls $V(R, Y)$.
  \item If $A$ reveals $S$ or corrupts $P_i$ or $P_j$ then $B'$ stops the
    simulation immediately and halts.
  \end{enumerate}
  The running time of $B'$ is within the bounds required of $t'$; it makes at
  most $q_I$ random-oracle and at most $q_M$ verification queries.  Clearly
  $B'$ succeeds whenever $F_7$ occurs.  The claim follows from
  theorem~\ref{thm:wident-sound}.
\end{proof}

\subsection{Proof of theorem~\ref{thm:sk2}}
\label{sec:sk2-proof}

The proof is almost identical to the proof of theorem~\ref{thm:sk}, in
appendix~\ref{sec:sk-proof}.  Unfortunately a black-box reduction doesn't
seem possible.

We use the games and notation of section~\ref{sec:sk-proof}.

The change to the check-value calculation doesn't affect key-generation at
all, so the transition to $\G1$ goes through as before.

The transition from $\G1$ to $\G2$ -- answering challenges using the
extractor -- needs a little care.  Let $S = (P_i, P_j, s)$ be a session, and
consider an incoming message $(\cookie{challenge}, R, c)$.
\begin{itemize}
\item If $T = (P_j, P_i, s)$ is the matching session to $S$, and $R = R_T$ is
  the public challenge value of $T$, and $c = r_T \xor H_I(R_S, X_j, s, R_T,
  r_T X_i)$ is the check-value output by $T$ when it received
  $(\cookie{pre-challenge}, R_S)$ as input, then $S$ replies as it did in
  $\G1$.
\item If the challenge message is any other message, then we use the
  extractor.
\end{itemize}
As in lemma~\ref{lem:sk-g1-g2}, we examine which sessions could have queried
$H_I(R_S, X_j, s, R, x_i R)$, and for the same reasons conclude that only the
matching session would have done this, and only in response to the
pre-challenge $R_S$.  It follows that $\diff{1}{2} \le q_M/2^{\ell_I}$ as
before.

The remaining game steps go through unchanged.  In particular, we conclude
that a ripe session will only complete if the adversary has transmitted
messages from its matching session correctly, and the session key is
independent of the adversary's view.  The theorem follows.


\subsection{Sketch proof of single-key protocol for secure channels}
\label{sec:sc-proof}

We want to show that the Wrestlers Key-Exchange protocol, followed by use of
the encryption scheme $\E$, with the \emph{same} key $K = K_0$, still
provides secure channels.

\subsubsection{Secure channels definition}
We (very briefly!) recall the \cite{canetti-krawczyk-2001:secure-channels}
definition of a secure channels protocol.  We again play a game with the
adversary.  At the beginning, we choose a bit $b^* \inr \{0, 1\}$ at random.
We allow the adversary the ability to establish \emph{secure channels}
sessions within the parties.  Furthermore, for any running session $S = (P_i,
P_j, s)$, we allow the adversary to request $S$ to send a message~$\mu$
through its secure channel.  Finally, the adversary is allowed to choose one
ripe \emph{challenge} session, and ask for it to send one of a \emph{pair}
of messages $(\mu_0, \mu_1)$, subject to the restriction that $|\mu_0| =
|\mu_1|$; the session sends message $\mu_{b^*}$.  The adversary may not
expose the challenge session.

The adversary wins if (a)~it can guess the bit $b^*$, or (b)~it can cause a
ripe session $S$ (i.e., an unexposed, running session), with a matching
session~$T$, to output a message other than one that it requested that $T$
send.

\subsubsection{Protocol definition}
The protocol begins with Wrestlers key-exchange.  The encryption in the
key-exchange protocol is performed as $E_K(\cookie{kx}, \cdot)$; encryption
for secure channels is performed as $E_K(\cookie{sc}, i, o, \cdot)$, where
$i$ is a sequence number to prevent replays and $o \in \{S, T\}$ identifies
the sender.

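The role of the \cookie{kx}/\cookie{sc} label, the sequence number $i$ and
the direction flag $o$ can be illustrated with a toy Python sketch.  The
real construction encrypts with the IND-CCA scheme $\E$; here a stub
encode-then-MAC under $K$ stands in, since only the framing is the point --
this is illustrative code, not a secure channel:

```python
import hmac, hashlib

def seal(K, label, i, o, msg):
    """Stub for E_K(label, i, o, msg): frame the plaintext with the
    label, sequence number and direction flag, then MAC the whole."""
    body = f"{label}|{i}|{o}|".encode() + msg
    return body, hmac.new(K, body, hashlib.sha256).digest()

class Endpoint:
    def __init__(self, K, o):
        self.K, self.o = K, o          # o is "S" or "T"
        self.send_seq = self.recv_seq = 0

    def send(self, msg):
        blob = seal(self.K, "sc", self.send_seq, self.o, msg)
        self.send_seq += 1
        return blob

    def recv(self, blob):
        body, tag = blob
        peer = "T" if self.o == "S" else "S"
        header = f"sc|{self.recv_seq}|{peer}|".encode()
        if not body.startswith(header):
            return None    # replay, reflection, or kx frame: reject
        if not hmac.compare_digest(
                tag, hmac.new(self.K, body, hashlib.sha256).digest()):
            return None    # forged frame: reject
        self.recv_seq += 1
        return body[len(header):]
```

A replayed frame fails because the receiver's sequence number has moved on,
and a reflected frame fails because its direction flag matches the receiver
rather than the peer; key-exchange frames are excluded by the \cookie{kx}
label.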
\subsubsection{Proof sketch}
We use the games and notation of appendix~\ref{sec:sk-proof}.

The proof goes through as far as the step between $\G5$ and $\G6$ in the
proof of lemma~\ref{lem:sk-g3-g4}.  Here we make the obvious adjustments to
our adversary against the IND-CCA security of $\E$.  (Our adversary will need
to use the left-or-right oracle for messages sent using the secure channel
built on $K^*$.  That's OK.)

In $\G4$, we know that ripe sessions agree on the correct key, and the
adversary never queries the random oracle, so the key is independent of the
adversary's view.

We define a new game $\G8$, which is like $\G4$, except that we stop the game
if the adversary ever forges a message sent over the secure channel.  That
is, if a ripe session $S$ ever announces receipt of a message not sent (at
the adversary's request) by its matching session $T$.  Let $F_8$ be the event
that a forgery occurs.  We apply lemma~\ref{lem:shoup}, which tells us that
$\diff{4}{8} \le \Pr[F_8]$.  To bound $\Pr[F_8]$, we isolate a session at
random (as in lemmata \ref{lem:sk-g2-g3} and~\ref{lem:sk-g3-g4}), which tells
us that
\begin{equation}
  \label{eq:sc-g4-g8}
  \diff{4}{8} \le q_S \cdot \InSec{int-ptxt}(\E; t', q_M, q_M).
\end{equation}
Finally, we can bound the adversary's advantage at guessing the hidden bit
$b^*$.  We isolate (we hope) the challenge session $S$ by choosing a target
session at random, as before.  Let $K^* = H_K(Z_S)$ be the key agreed by the
session (if it becomes ripe).  We define an adversary $B$ against the IND-CCA
security of $\E$.  The adversary $B$ simulates the game.  If the adversary
exposes the target session, or doesn't choose it as the challenge session,
$B$ fails (and exits 0); otherwise it uses the left-or-right encryption
oracle to encrypt both of the adversary's message choices, and outputs the
adversary's choice.  Let $b$ be the adversary's output, and let $\epsilon$ be
the advantage of our IND-CCA distinguisher.  Then
\begin{eqnarray*}[rl]
  \Pr[b = b^*]
  & = \Pr[b = b^* \wedge b^* = 1] + \Pr[b = b^* \wedge b^* = 0] \\
  & = \frac{1}{2}\bigl( \Pr[b = b^* \mid b^* = 1] +
      \Pr[b = b^* \mid b^* = 0] \bigr) \\
  & = \frac{1}{2}\bigl( \Pr[b = b^* \mid b^* = 1] +
      (1 - \Pr[b \ne b^* \mid b^* = 0]) \bigr) \\
  & = \frac{1}{2}\bigl( \Pr[b = 1 \mid b^* = 1] -
      \Pr[b = 1 \mid b^* = 0] + 1 \bigr) \\
  & = \frac{1}{2}\bigl(1 + q_S\,\Adv{ind-cca}{\E}(B) \bigr) \\
  & \le \frac{1}{2} \bigl( 1 + q_S\,\InSec{ind-cca}(\E; t', q_M, q_M) \bigr).
    \eqnumber
\end{eqnarray*}

\end{document}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End: