Privacy-preserving (and digital identity) tech needs to become a lot more like sex education or drug education campaigns. This thought came to me during #RWOT11 @WebOfTrustInfo.
A lot of digital identity and privacy messaging is fear-based, focused on how Bad Things Can Happen With Your Data.
Many people *don’t* care about zero-knowledge proofs or private and secure identity because, for many audiences, it’s complex and boring. So preaching that people should Really Care About Tracking doesn’t fly: a lot of people *like* ads.
I do myself. Normally I’m armed with 3 different ad/tracking blockers, but if I’m buying something expensive, I’ll turn them off and *browse with intent*, hoping to get a targeted ad with discounts. Or I’ll add something to a shopping cart, hoping to get an email with a coupon.
Preaching to people that they should care about how their data gets shared and tracked and resold is obviously noble, but it’s very similar to preaching abstinence as a measure to prevent STIs, or saying “don’t do drugs” to teenagers.
Instead, a lot of successful campaigns in those spaces acknowledged that people will engage in risky behaviour.
Concertgoers will do drugs…so offer them testing. People will engage in risky sexual behaviour…so offer them condoms or the morning-after pill.
People *will* do “risky” things with their data, online personas, and reputation, mainly because our data and identity sharing methods right now are geared toward risky, trackable sharing.
How can we offer methods of “privacy harm reduction”, in a way that’s not preachy?
A good example of where privacy tech has been successful is VPNs. They are everywhere: on YouTube, podcasts, online…and while some of it is about hiding browsing activity, most VPN players *really* made an impact by offering the ability to watch geoblocked streaming content.
And sure, the other reason is that VPNs are an extremely lucrative business with fat margins.
But on both fronts, *people* and *companies* had simple-to-understand value propositions that didn’t need to be preachy.
That’s the food for thought I’m mulling over at #RWOT11, among other topics.
When people engage in “risky behaviour” with their data, what are the harm reduction techniques that will make them safer, without necessarily pitching the blanket idea of “more privacy”?