Secure Multiparty Computation


Communications of the ACM, January 2021, Vol. 64 No. 1, Pages 86-96
Review Articles
By Yehuda Lindell

“Over the past three decades, many different techniques have been developed for constructing MPC [MultiParty Computation] protocols with different properties, and for different settings.”

 

Distributed computing considers the scenario where a number of distinct, yet connected, computing devices (or parties) wish to carry out a joint computation of some function. For example, these devices may be servers that hold a distributed database system, and the function to be computed may be a database update of some kind. The aim of secure multiparty computation is to enable parties to carry out such distributed computing tasks in a secure manner. Whereas distributed computing often deals with questions of computing under the threat of machine crashes and other inadvertent faults, secure multiparty computation is concerned with the possibility of deliberately malicious behavior by some adversarial entity (such faults have also been considered in the distributed computing literature, where they are called Byzantine faults). That is, it is assumed that a protocol execution may come under “attack” by an external entity, or even by a subset of the participating parties. The aim of this attack may be to learn private information or to cause the result of the computation to be incorrect. Thus, two important requirements of any secure computation protocol are privacy and correctness. The privacy requirement states that nothing should be learned beyond what is absolutely necessary; more exactly, parties should learn their output and nothing else. The correctness requirement states that each party should receive its correct output; the adversary must not be able to cause the result of the computation to deviate from the function that the parties had set out to compute.
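To give a flavor of how such protocols achieve privacy, here is a minimal sketch of secure summation via additive secret sharing, one of the classic MPC building blocks. This is an illustrative toy (the modulus and function names are choices made for this sketch, not from the article), and it omits the secure pairwise channels and malicious-behavior protections a real protocol needs:

```python
import secrets

# Additive secret sharing mod a large prime: each party splits its
# private input into random shares. The sum of everyone's shares
# reveals the total, while any proper subset of one party's shares
# reveals nothing about that party's input.

P = 2**61 - 1  # a large prime modulus (an assumption of this sketch)

def share(x, n):
    """Split secret x into n additive shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def secure_sum(inputs):
    """Each party i sends its j-th share to party j; each party then
    publishes only the sum of the shares it received."""
    n = len(inputs)
    all_shares = [share(x, n) for x in inputs]
    # Party j locally adds the j-th share from every party.
    partials = [sum(all_shares[i][j] for i in range(n)) % P
                for j in range(n)]
    return sum(partials) % P

print(secure_sum([10, 20, 30]))  # 60, without any party seeing another's input
```

Each published partial sum is itself uniformly random, so the parties jointly learn the total and nothing else, matching the privacy requirement above.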

 

Secure multiparty computation can be used to solve a wide variety of problems, enabling the utilization of data without compromising privacy. Consider, for example, the problem of comparing a person’s DNA against a database of cancer patients’ DNA, with the goal of determining whether the person is in a high-risk group for a certain type of cancer. Such a task clearly has important health and societal benefits. However, DNA information is highly sensitive, and should not be revealed to private organizations. This dilemma can be solved by running a secure multiparty computation that reveals only the category of cancer that the person’s DNA is close to (or none). In this example, the privacy requirement ensures that only the category of cancer is revealed, and nothing else about anyone’s DNA (neither the DNA of the person being compared nor the DNA of the patients in the database). Furthermore, the correctness requirement guarantees that a malicious party cannot change the result (for example, make the person think that they are at risk of a type of cancer, and therefore need screening).

 

In another example, consider a trading platform where parties provide offers and bids, and are matched whenever an offer is greater than a bid (with, for example, the price of the trade being some function of the offer and bid prices). In such a scenario, it can be beneficial from a game theoretic perspective to not reveal the parties’ actual offers and bids (because this information can be used by others in order to artificially raise prices or provide bids that are lower than their utility). Privacy here guarantees that only the match between buyer and seller and the resulting price is revealed, and correctness would guarantee that the price revealed is the correct one according to the function (and, for example, not some lower value). It is interesting to note that in some cases privacy is more important (such as in the DNA example), whereas in others correctness is more important (such as in the trading example). In any case, MPC guarantees both of these properties, and more.

 

A note on terminology. In the literature, beyond secure multiparty computation (with acronym MPC, and sometimes SMPC), there are also references to secure function evaluation (SFE). These notions overlap significantly and are often used synonymously. In addition, special cases of MPC often have their own names. Two examples are private set intersection (PSI), which considers the secure computation of the intersection of private sets, and threshold cryptography, which considers the secure computation of digital signatures and decryption, where no single party holds the private key.
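A concrete way to pin down what PSI computes is to write its ideal functionality: an imaginary trusted party that receives both private sets and outputs only their intersection. A PSI protocol must emulate exactly this input/output behavior without any trusted party. The sketch below (with illustrative names and data) specifies only what is revealed, not how to compute it securely:

```python
# Ideal functionality of private set intersection (PSI): a trusted
# party receives both private sets and outputs only the intersection.
# Everything outside the intersection remains hidden from both sides.

def psi_ideal(set_a, set_b):
    return set_a & set_b

# Illustrative inputs (hypothetical data, not from the article):
alice = {"dan@example.com", "eve@example.com"}
bob = {"eve@example.com", "frank@example.com"}
print(psi_ideal(alice, bob))  # only the common element is revealed
```

A secure PSI protocol guarantees that each party learns no more about the other's set than this ideal trusted party would have told it.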

Security of MPC

The definitional paradigm. As we have mentioned, the setting that we consider is one where an adversarial entity controls some subset of the parties and wishes to attack the protocol execution. The parties under the control of the adversary are called corrupted, and follow the adversary’s instructions. Secure protocols should withstand any adversarial attack (the exact power of the adversary is discussed later). In order to formally claim and prove that a protocol is secure, a precise definition of security for multiparty computation is required. A number of different definitions have been proposed, and they aim to ensure a number of important security properties that are general enough to capture most (if not all) multiparty computation tasks. We now describe the most central of these properties:

 

  1. Privacy: No party should learn anything more than its prescribed output. In particular, the only information that should be learned about other parties’ inputs is what can be derived from the output itself. For example, in an auction where the only bid revealed is that of the highest bidder, it is clearly possible to derive that all other bids were lower than the winning bid. However, nothing else should be revealed about the losing bids.
  2. Correctness: Each party is guaranteed that the output that it receives is correct. To continue with the example of an auction, this implies that the party with the highest bid is guaranteed to win, and no party such as the auctioneer can influence this.
  3. Independence of Inputs: Corrupted parties must choose their inputs independently of the honest parties’ inputs. This property is crucial in a sealed auction, where bids are kept secret and parties must fix their bids independently of others. We note that independence of inputs is not implied by privacy. For example, it may be possible to generate a higher bid without knowing the value of the original one. Such an attack can actually be carried out on some encryption schemes (that is, given an encryption of $100, it is possible to generate a valid encryption of $101 without knowing the original encrypted value).
  4. Guaranteed output delivery: Corrupted parties should not be able to prevent honest parties from receiving their output. In other words, the adversary should not be able to disrupt the computation by carrying out a “denial of service” attack.
  5. Fairness: Corrupted parties should receive their outputs if and only if the honest parties also receive their outputs. The scenario where a corrupted party obtains output and an honest party does not should not be allowed to occur. This property can be crucial, for example, in the case of contract signing. Specifically, it would be very problematic if the corrupted party received the signed contract and the honest party did not. Note that guaranteed output delivery implies fairness, but the converse is not necessarily true.
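The malleability attack mentioned under independence of inputs can be made concrete with a deliberately toy scheme: encryption by a one-time additive mask. This is an illustration of malleability only, not a real cryptosystem, and the parameter choices here are assumptions of the sketch:

```python
import secrets

# Toy malleable encryption: a one-time additive mask mod N. Because
# the scheme is additively malleable, an adversary can shift the
# hidden plaintext without ever learning it or the key.

N = 2**64
key = secrets.randbelow(N)  # secret key, unknown to the adversary

def enc(m):
    return (m + key) % N

def dec(c):
    return (c - key) % N

c = enc(100)              # honest bidder encrypts a $100 bid
c_mauled = (c + 1) % N    # adversary adds 1 without knowing the key
assert dec(c_mauled) == 101  # a valid encryption of $101
```

The adversary never decrypts anything, yet its ciphertext is a correct encryption of a strictly higher bid, which is exactly why privacy alone does not imply independence of inputs.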

We stress that this list does not constitute a definition of security, but rather a set of requirements that should hold for any secure protocol. Indeed, one possible approach to defining security is to just generate a list of separate requirements (as mentioned) and then say that a protocol is secure if all of these requirements are fulfilled. However, this approach is not satisfactory for the following reasons. First, it may be possible that an important requirement was missed. This is especially true because different applications have different requirements, and we would like a definition that is general enough to capture all applications. Second, the definition should be simple enough so that it is trivial to see that all possible adversarial attacks are prevented by the proposed definition.


About the Author:

Yehuda Lindell is a professor in the Department of Computer Science at Bar Ilan University, Ramat Gan, Israel, and is the CEO and co-founder of Unbound Tech.
