a3nm's blog

Real-life decentralized killer game generation


The killer game is played as follows: each player is given a "victim" (another player) which he has to "kill" in some way (which depends on the variant played). Whenever a player manages to kill his victim, he gets the victim's victim as his new victim, and the killed player is eliminated. The aim of the game is to kill without being killed; the winner is the last man standing.

The usual way to play the game is to ask some trusted third party to generate the list of killer-victim relations and to give each player a piece of paper indicating his victim. Of course, it would be quite natural to use a computer to generate the game instead, so that no one has to sit out the game in order to create it. What I will present in this post is an even nicer system: a way for players to organize a killer game among themselves, in such a way as to prevent anyone from getting any information about the generated game. No computer needed: you only need paper, a pen, and a deck of n cards (where you number the players from 1 to n).

As for security, the main requirement isn't to prevent deliberate trickery but rather the accidental learning of information. In other words, we want to organize a game between people who are honest but who would rather not remember things about the game inadvertently (ie. we don't want to depend on their having a bad memory). Hence, we assume that players don't actually try to cheat or form coalitions to cheat together, but that they just want to generate the game without having to make an effort not to remember what they have done.

The problem can be separated into two steps:

  1. First, generate the permutation mapping each player to his victim. (It is a permutation since each player is the victim of exactly one player.) We can't accept just any permutation: the permutation needs to have only one orbit. (Otherwise, the game would have multiple winners.)
  2. Second, generate the papers giving each player his victim. This isn't straightforward at all, because we can't just ask one player to write them down: the papers need to be written in a way which prevents anybody from knowing their contents.

The first step, in fact, is easy to solve. If you take a deck of n cards and shuffle it collaboratively (ie. every player gets a chance to shuffle the deck, to prevent cheating), the permutation mapping the i-th card of the deck to the i+1-th one (modulo n) is a permutation with only one orbit; moreover, if the order of cards is chosen uniformly at random, the resulting permutation is also chosen uniformly at random among all such permutations. Thus, the players have a way to securely create a suitable permutation together.
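This first step can be sketched in code (a simple simulation only: `random.shuffle` stands in for the collaborative shuffle, and the function name is mine):

```python
import random

def killer_permutation(n):
    """Shuffle a deck of n numbered cards, then map each card to the
    next one in the deck (wrapping around at the end), which yields a
    permutation with a single orbit."""
    deck = list(range(1, n + 1))
    random.shuffle(deck)  # stands in for the collaborative shuffle
    return {deck[i]: deck[(i + 1) % n] for i in range(n)}

victim = killer_permutation(6)
```

Starting from any player and repeatedly taking victims visits all n players before coming back to the start: this is exactly the single-orbit property we need for the game to have a unique winner.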

What remains is to generate pieces of paper mapping players to their victims. We will assume that, for each player, we have a blank piece of paper on which we will apply the following operations:

  1. Write the number of the victim on the paper.
  2. Fold the paper in such a way that the victim number cannot be read.
  3. Write on the outside of the paper the number of the player who should kill the victim.
  4. Fold the paper to make both numbers unreadable, and put the paper in an urn.

Now, the idea is that each player but the first one will apply the following procedure:

  • Input: The deck of cards and a piece of paper folded once.
  • Output: The deck of cards without the top card and another piece of paper folded once.
  • Procedure:
    1. Draw the top card of the deck, secretly memorize its value, and discard the card (into a discard pile which will not be examined).
    2. Write the value on the outside of the folded piece of paper, fold it a second time, and put it in the urn.
    3. Take a new piece of paper, write the value on it, and fold it once.

What is supposed to happen is that the players pass the deck of cards (along with a piece of paper) to each other, producing papers along the way. The order of players is determined collaboratively at random (ie. the players draw a random number together, for example by committing to values and then XORing them). The question of the first and last card still needs to be settled, and this is the role of the first player. More precisely, the first player starts off the process and ends it in the following way:

  • At the beginning, take the deck of cards and follow the procedure above, skipping the second step (there is no incoming paper yet): draw and memorize the top card, and prepare the first piece of paper (folded once).
  • At the end, take the empty deck of cards and the last piece of paper (folded once), and apply the second step using the value memorized at the beginning.
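The collaborative random draw used to order the players can be sketched as a commit-then-reveal scheme (a minimal sketch: the use of SHA-256 commitments and the player count are my choices, not part of the original description):

```python
import hashlib
import secrets

def commit(value: bytes, nonce: bytes) -> str:
    """Hash commitment: binds a player to a value without revealing it."""
    return hashlib.sha256(nonce + value).hexdigest()

# Phase 1: each player secretly picks a value and publishes a commitment.
picks = []
for _ in range(4):  # four players, for instance
    value, nonce = secrets.token_bytes(8), secrets.token_bytes(16)
    picks.append((value, nonce, commit(value, nonce)))

# Phase 2: everyone reveals value and nonce; the commitments are
# verified, then the values are XORed together, so that no single
# player can bias the final result.
shared = 0
for value, nonce, published in picks:
    assert commit(value, nonce) == published
    shared ^= int.from_bytes(value, "big")
```

As long as at least one player picks his value uniformly at random, the XOR of all values is itself uniformly distributed, which is why nobody needs to be trusted individually.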

Of course, once we have run all of this, all that remains is to empty the urn, unfold each paper once, and give each paper to the player indicated on the outside. Players then open their paper and find their victim.
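The whole generation process can be sketched as a simulation (a sketch only: dicts stand in for the folded papers, `random.shuffle` for the collaborative shuffle, and the function name is mine):

```python
import random

def generate_papers(deck):
    """Simulate the paper-passing round: each card drawn is written down
    twice, once as a killer (on the outside of the incoming paper) and
    once as a victim (on the inside of the next paper)."""
    urn = []
    first_value = deck[0]  # first player draws and memorizes the top card
    inside = first_value   # value written inside the current paper
    for value in deck[1:]:
        # The next player draws a card, closes the incoming paper with
        # its value as the killer, and starts a new paper with the same
        # value inside as the victim.
        urn.append({"killer": value, "victim": inside})
        inside = value
    # The first player closes the last paper with the memorized value.
    urn.append({"killer": first_value, "victim": inside})
    return urn

deck = list(range(1, 7))
random.shuffle(deck)
papers = generate_papers(deck)
```

Reading the urn afterwards, the killer-to-victim mapping is exactly the single-orbit permutation defined by the deck order, even though each participant only ever saw the one card value he drew.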

What remains is to check that, by following this scheme, players get no information about the game. But the beautiful thing is that each player only sees a single value, so he really can't learn anything about any player's relation to the other players. In fact, to put it in a nutshell, the idea is that the papers represent pairs of a value and its image under the permutation, and that we generate them by asking each player to write a given number down twice: once as a value, and once as an image.

As for actually playing the game, what's really funny is that you don't have any information about the game's progress, because nobody knows what the players are up to. Players kill each other in a decentralized way, and, at some point, a player should discover that the person he just killed was his own killer, which means that he has won the game.

This scheme was successfully implemented two years ago to play a game in my Math spé class. Of course, the protocol wasn't respected to the letter (we didn't have enough people to generate the game, so players took multiple turns writing numbers down, which we tried to space out as much as possible), and we had to work our way around a few practical difficulties, but everything went smoothly. The generated permutation was correct (an entirely unexpected thing, given that the deck used was a standard deck with a conversion rule matching value and suit to player number and then to player name, with a lot of opportunities for mistakes), and the game was played to its conclusion (which was once again very fortunate, given that there wasn't anyone able to correct errors or to make things move should the game get stuck somewhere).

Of course, the best thing was that I was able to play the game even though I was the one who had made the whole scheme up.

Identity theft on Facebook


I find it quite amazing that people, when they see the profile page of "John Doe" on Facebook, assume that the page is indeed controlled by John Doe and allows them to contact John Doe. (No, in fact, they wouldn't do that for someone called "John Doe", but let's assume that the name isn't "John Doe" but something sufficiently unique to ensure that there aren't multiple people with that name.)

If you think about it, this is absurd: what they are seeing is a page managed by Facebook, which claims that it gave control of the page to someone who claimed to be John Doe. Even if we assume that Facebook isn't interfering, at no point whatsoever did anybody check that this person was indeed John Doe.

In practice, things work out quite well because there aren't that many people willing to take the time to impersonate someone else on Facebook. Nevertheless, I know of people who had to complain to Facebook because some sort of enemy had created an (insulting) Facebook profile in their name. This shows that there is a problem; in theory, when confronted with such a page, nobody should assume that the page has anything to do with the person whose name appears on it...

(Of course, this problem isn't restricted to Facebook, but extends to most of what you find online. Not everything, though: notable exceptions are the OpenPGP web of trust, and the HTTPS certificate authority system, which is itself quite flawed.)

A chance to express

Does anyone else, when reading a book by a famous author (or any text for which the author could clearly expect that it would be read by a lot of people), think something along the lines of:

That was it, a chance to be read or heard by many people, to tell them something that matters and get your point across. This was a chance to pick important ideas and start spreading them. This was a chance to convince, a chance to change the way we think, a chance to change the world.

And, yes, what you said was probably important, and you said it well. But was that really the most important thing you could come up with?

Canonical choices


I think that most people would agree that the vast majority of blockbusters don't have much artistic value, and that you are more likely to find it in lesser-known films. Despite this, blockbusters are immensely successful. Why?

More generally, there are a lot of domains for which you have well-known, popular, and not really outstanding solutions, and lesser-known, potentially outstanding solutions. (Think going to McDonald's or Starbucks vs. going to that original little restaurant on the next street.)

There are other contexts in which the most popular solution is worse than the alternatives and where this fact can be accounted for by inertia and migration costs: Windows vs. Linux, Qwerty vs. Dvorak, and so on. But this explanation doesn't work here, because there are no migration costs.

Another explanation is people's unwillingness to take risks. Original movies can be outstanding, but they can also be horrible, whereas blockbusters are usually neither good nor bad. This is probably a factor which accounts for part of the phenomenon.

However, I would like to propose an additional, different explanation, which works for any choice made by a group of people, and which is totally independent of the relative merits of the different solutions. My point is that some solutions are sufficiently popular to be canonical, in the sense that choosing them isn't even perceived as a choice anymore, whereas choosing anything else is perceived as a deliberate choice. From this, it follows that if you decide to go see a canonical solution, it can be either good or bad, but if it is bad, you won't come out as having made the wrong choice: the film was bad, period. Whereas if you choose something original, it can be either good or bad, and if it is bad, you are sure to carry the responsibility of having made the wrong choice.

To summarize: popular choices stay popular because people are unwilling to take the risk of choosing something else and being wrong. (This is not the same thing as being unwilling to take the risk of being disappointed.)

This also explains why TV is still popular even though being unable to choose precisely what you will watch and when you will watch it really sucks. Choosing something else to watch requires you to make a choice, expose your tastes, and take risks, whereas no one will blame you if you choose the canonical solution of "watching TV", no matter how bad it turns out to be.

A more cynical way of seeing things is the following: canonical choices, no matter their quality, give people something to comment on and talk about. It is socially consensual to go for the canonical solution and criticize it, whereas it is risky to take the initiative of picking something original. In fact, you can comment on canonical choices more freely precisely because they are canonical: saying that you found some blockbuster boring won't offend anyone, because there was no real choice involved in deciding to go and see it, whereas saying that you didn't enjoy something which someone really chose to see won't be very nice to that person (because he either found it interesting and wanted to share it, or thought that it could be interesting and turned out to be wrong).

Publishing the public details of your life


I recently wondered about what it would be like to learn your own life by heart. Let me now wonder about what it would be like to publish the public details of your life. But first, let me explain what I mean by this convoluted expression.

There is remarkably little in the life of most people which is "really" private. For most of what you do, there is at least one other person who knows about it. Some of these people are family members or friends, whom you trust and expect to respect your privacy. But others are people you neither know nor trust: all the strangers who saw you someday at some place, the cashier who knows what you bought last time you went to the supermarket, and so on. These aren't the worst, and I'll even forget about them and assume in the rest of this post that all of them are trustworthy. The things which aren't trustworthy are the information systems run by organizations which you have no reason at all to trust: public transportation companies, phone companies, banks, the State, and, last but not least, your Internet provider and the websites you visit and services you use online. (Maybe you trust these organizations, but, personally, I don't think that trust can apply to juristic persons.)

Now, we don't give much thought to this, because we assume that all these organizations aren't actively working together to stalk us. But this way of thinking gives a false sense of security. These parties are untrusted, so good security practice should force us to assume that they did the worst possible thing--namely, that they collaborated and shared all the info they had to draw all possible conclusions. This is not that far-fetched if you keep in mind that most of the organizations I mentioned would probably gladly hand over any information they have about you to the police if asked (and probably wouldn't insist much if due process were bypassed).

Hence the idea I wish to develop here: what if you wanted to distinguish the private part of your life (ie. the "very private", namely things that no one but you knows about, and the "sort-of private", namely things that no one but you and trusted people know about) from the public part of your life (ie. things that at least one untrusted party knows about), and, to help your brain make that distinction, you assumed the worst-case scenario and started publishing all the public details of your life... (The rationale being the following: if this information is public, you might as well tell it to everybody rather than offering it exclusively to the big organizations which got the info in the first place.)

A thought experiment such as this one makes you realise the sheer quantity of things in your life which are public according to the above definition. Roughly speaking, you could probably publish:

Important parts of your medical history
Because even though your doctor may have a legal obligation to keep it secret, and even though you might trust him, lots of information about social security, prescriptions and the like is probably going through some electronic system leading to an untrusted party.
Everything you buy with a debit card
Your bank knows everything about that.
Everything you say over the phone
The phone company can hear it all. 'nuff said.
Your current precise location, 24-7
If you carry a mobile phone, the phone company knows where you are at any point in time. CCTV can add some precision in public places. If you travel using public transportation, the public transportation company usually knows where you go; if you drive, beware of CCTV and toll roads.
Everything you do on the Internet
It is possible to communicate privately (in the strict sense defined above) over the Internet, using end-to-end encryption (that is, encryption and decryption are performed on the trusted users' machines, which should be running a secure and open-source OS). However, unless you know you're doing that, you probably aren't, and, when you're using the Internet, you're either using encryption to talk to an untrusted big organization like Google (and then they know what you do) or no encryption at all (and then your Internet provider knows what you do).