
The following question is intentionally informal because I don't understand things well enough to make them entirely formal. The topic is Universal Composability.

An important notion in the definition of security is indistinguishability to an environment. To be more precise, you have two 'open' systems, and the notion of equality we are interested in is 'contextual' equality: no environment should be able to distinguish between the two open systems.

(As a side note, these 'open systems' are often not just the protocol; they also include an attacker / adversary, which models possible adversarial influence / leakage.)

When you want to model concurrent systems, I think the environment also plays the role of the scheduler, choosing the next 'machine' in the system to be activated, and so on.

On a very high level, it seems to me that the environment can count the number / kinds of scheduling decisions and thereby distinguish two systems that should 'intuitively' be equal.

A vague / informal example: compare a sequential ideal functionality with a concurrent distributed implementation. It seems possible that the environment would have to make more 'scheduling decisions' in the second case, and could thus 'count' these and distinguish the two systems.
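To make the worry concrete, here is a toy sketch (in Python, with entirely invented names; this is *not* the UC execution model) of an environment that distinguishes two functionally identical systems purely by counting how many activations it must grant before an output appears:

```python
# Toy illustration only: two "systems" that compute the same function,
# where the environment acts as scheduler and counts activations.

def run_until_output(system, env_input):
    """Activate the system repeatedly; return (output, activation_count)."""
    system.receive(env_input)
    count = 0
    while system.output() is None:
        system.activate()          # environment acts as scheduler
        count += 1
    return system.output(), count

class SequentialSystem:
    """Models a sequential ideal functionality: done in one activation."""
    def __init__(self):
        self._in = None
        self._out = None
    def receive(self, x):
        self._in = x
    def activate(self):
        self._out = self._in * 2
    def output(self):
        return self._out

class DistributedSystem:
    """Models a two-party implementation: party 1 computes, party 2
    forwards, so two activations are needed."""
    def __init__(self):
        self._in = None
        self._mid = None
        self._out = None
    def receive(self, x):
        self._in = x
    def activate(self):
        if self._mid is None:
            self._mid = self._in * 2   # party 1 computes
        else:
            self._out = self._mid      # party 2 forwards
    def output(self):
        return self._out

# Same functional output, different activation counts:
out1, c1 = run_until_output(SequentialSystem(), 21)
out2, c2 = run_until_output(DistributedSystem(), 21)
assert out1 == out2 == 42
assert c1 == 1 and c2 == 2   # the environment can "count" and distinguish
```

The two systems agree on every output, yet the environment tells them apart from the schedule alone, which is exactly the concern above.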

Is this a real issue in UC? Or maybe I am missing something?


1 Answer

This is a very complex question, but I'll try. The following comes from Canetti's JACM paper, "Universally Composable Security."

Traditionally, executions of systems that include concurrent executions of physically separate processes are mathematically captured by first considering the un-ordered collection of the executions of the locally sequential processes, and then considering all possible ways to interleave the local executions to form a single “sequentialized” execution, subject to certain causality constraints. (This modeling is often dubbed “non-deterministic scheduling.”) Here the granularity of “atomic events” (namely, events that are assumed to be executed at a single location and without interruption) is a key factor in determining the level of concurrency under consideration, and thus the expressive power of the model.

The model (or “mental experiment”) considered here is simpler: Instead of determining the granularity of “atomic events” ahead of time, and then considering “all possible interleavings” of these atomic events, the present model lets the processes themselves determine both the granularity of atomic events (by deciding when to send a message to another machine) and the specific interleaving of events (by deciding which machine to send a message to). The result is a single-threaded sequential execution that is determined by the aggregate of the local decisions made algorithmically by each machine.

Observe however that the simplicity of the formalism does not restrict its expressive power: Indeed, it is possible to capture any granularity of atomic events, by programming the machines to send messages at the end of the desired atomic sequence of operations. Arbitrary and adversarial interleaving of events is captured by having the machines send messages to adversarial machines that represent the desired level of variability in timing and ordering of events (as exemplified by the modeling of asynchronous communication channels sketched in a previous comment). Furthermore, this modeling allows restricting attention to computationally bounded scheduling of events, as opposed to fully non-deterministic scheduling. This ability is crucial for capturing security guarantees that hold only against computationally bounded adversaries.

There is a ton to unpack here, but I think that answers your question. In UC you specify how your system works by specifying the messages that get passed around. That determines the "atomic events", so the processes determine the granularity and interleaving, as Canetti says, through this specification. But anywhere a message is sent back to the environment or the adversary (which are in cahoots; in fact you can consider a dummy adversary that is entirely controlled by the environment), the environment/adversary determines what happens next. This requires a great deal of care when specifying your system. A simple example is the reentrancy example of EasyUC.
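As an illustration of that single-threaded, message-driven execution, here is a toy sketch (invented names, an assumed simplification; not Canetti's actual formalism) in which whichever machine currently holds control names the next machine to be activated by addressing a message to it:

```python
# Toy model of single-threaded execution: each step, the addressed machine
# processes the incoming message and either names the next recipient or
# halts. The environment doubles as the dummy adversary here.

def execute(machines, first, msg, max_steps=10):
    """Run one sequential execution; return the activation trace."""
    trace = []
    current, payload = first, msg
    for _ in range(max_steps):
        trace.append(current)
        nxt = machines[current](payload)
        if nxt is None:            # no outgoing message: execution halts
            break
        current, payload = nxt
    return trace

# Three toy machines; "env" plays both environment and dummy adversary.
def env(payload):
    return ("P1", "input") if payload == "start" else None

def p1(payload):
    return ("P2", "share")     # P1 decides that P2 runs next

def p2(payload):
    return ("env", "output")   # control returns to the environment

machines = {"env": env, "P1": p1, "P2": p2}
print(execute(machines, "env", "start"))   # ['env', 'P1', 'P2', 'env']
```

Note how the interleaving is determined algorithmically by the machines' own send decisions, not by an external nondeterministic scheduler, matching the quoted passage.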

Is this a real issue in UC? Or maybe I am missing something?

The UC framework can handle it, but the proof complexity goes up significantly depending on how much power you give the environment/adversary to control things. At every atomic event you have to consider how you will handle everything your environment/adversary can cause to happen.
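For instance, one way a functionality's writer can sidestep reentrancy entirely is to reject new inputs while a command is in progress. A hedged toy sketch of that pattern (invented names, not the actual EasyUC code):

```python
# Toy sketch: an "ideal functionality" that refuses reentrant commands.
# Rejecting while busy is a design choice by the functionality's author,
# not something the framework itself enforces.

class IdealFunctionality:
    def __init__(self):
        self.busy = False

    def command(self, x):
        if self.busy:
            return "Error"        # reject the reentrant call
        self.busy = True
        return ("pending", x)     # hand control to the adversary/simulator

    def deliver(self):
        self.busy = False         # adversary allows the command to complete
        return "done"

f = IdealFunctionality()
assert f.command(1) == ("pending", 1)
assert f.command(2) == "Error"    # a second input before delivery is rejected
assert f.deliver() == "done"
assert f.command(3) == ("pending", 3)
```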

  • Thanks for your response. Something that I didn't know was that UC could handle reentrancy, and that multiple commands can be input into the system before any of them has completed. Most ideal functionalities I have seen just return Error when the environment calls them while an existing command is 'in progress'. Is my understanding correct that this is not actually enforced / required by the framework? Commented Mar 3 at 21:37
  • Yes, you are correct. Commented Mar 3 at 22:55
