
Collective information security

The world we know is held together by spit and string

Why is confidentiality broken?

Should we all have to be digital privacy experts to survive in the 21st century? How could we change the rules so that we can focus on our day jobs?

(I give you permission to despair if you can do it amusingly; I’d prefer amusingly with hope.)

Slamming PGP, and the cypherpunk model of human behaviour it assumes, is a cottage industry.

Vinay Gupta:

GPG and HTTPS (X509) are broken in usability terms because the conceptual model of trust embedded in each network does not correspond to how people actually experience the world. As a result, there is a constant grind between people and these systems, mainly showing up as a series of user interface disasters. The GPG web of trust results in absurd social constructs like signing parties because it does not work, and creating social constructs that weird to support it is a sign of that: stand in a line and show 50 strangers your government ID to prove you exist? Really? Likewise, anybody who’s tried to buy an X509 certificate (HTTPS cert) knows the process is absurd: anybody who’s really determined can probably figure out how to fake your details if they happen to be doing this before you do it for yourself, and of the 1500 or so Certificate Authorities issuing trust credentials at least one is weak or compromised by a State, and all your browser will tell you is “yes, I trust this credential absolutely.” You just don’t get any say in the matter at all.

[…]

The best explanation of this in more detail is the Ode to the Granovetter Diagram, which shows how this different trust model maps cleanly to the networks of human communication found by Mark Granovetter in his sociological research. We’re talking about building trust systems which correspond to actual trust systems as they are found in the real world, not the broken military abstractions of X509 or the flawed cryptoanarchy of GPG.
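To make Gupta’s “yes, I trust this credential absolutely” point concrete, here is a minimal sketch (Python standard library only, against a hypothetical host): the default TLS client accepts the server’s certificate if it chains to any one of the root CAs in the local trust store, and the default API gives you no per-CA or per-site say in which of them get to vouch.

```python
# A sketch of the all-or-nothing X509 trust decision described above,
# using only Python's standard library. The hostname is a hypothetical
# stand-in; any HTTPS site would do.
import socket
import ssl

hostname = "example.com"

# create_default_context() loads every root CA the operating system ships
# with; each of them is trusted equally and unconditionally.
context = ssl.create_default_context()
print(len(context.get_ca_certs()), "root CAs loaded, all trusted absolutely")

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # If we reach this line, *some* CA (we did not get to choose which)
        # vouched for the certificate. The only alternative outcome is an
        # ssl.SSLCertVerificationError; there is no middle ground and no
        # per-CA policy hook in this API.
        cert = tls.getpeercert()
        print("vouched for by:", dict(pair[0] for pair in cert["issuer"]))
```

The point is not that this is hard to code; it is that the trust decision is binary and made on your behalf by whichever of those hundreds of roots happens to have signed the chain.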

See also: I went to the same school as Julian Assange but we learned different lessons.

James Mickens, This World of Ours:

When someone says “assume that a public key cryptosystem exists,” this is roughly equivalent to saying “assume that you could clone dinosaurs, and that you could fill a park with these dinosaurs, and that you could get a ticket to this ‘Jurassic Park,’ and that you could stroll throughout this park without getting eaten, clawed, or otherwise quantum entangled with a macroscopic dinosaur particle.”

Here is a lucid explanation by Ross Anderson:

information insecurity is at least as much due to perverse incentives. Many of the problems can be explained more clearly and convincingly using the language of microeconomics: network externalities, asymmetric information, moral hazard, adverse selection, liability dumping and the tragedy of the commons. […]

In a survey of fraud against autoteller machines [4], it was found that patterns of fraud depended on who was liable for them. In the USA, if a customer disputed a transaction, the onus was on the bank to prove that the customer was mistaken or lying; this gave US banks a motive to protect their systems properly. But in Britain, Norway and the Netherlands, the burden of proof lay on the customer: the bank was right unless the customer could prove it wrong. Since this was almost impossible, the banks in these countries became careless. Eventually, epidemics of fraud demolished their complacency. US banks, meanwhile, suffered much less fraud; although they actually spent less money on security than their European counterparts, they spent it more effectively [4].

Renee DiResta, The Digital Maginot Line (not sure about DiResta’s infosec expertise, but very sure about the nice turn of phrase), related to [insurgence economics]({filename}economics_of_insurgence.md):

Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble.

But ultimately the information war is about territory — just not the geographic kind.

In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics. […] The key problem is this: platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors.

Refs