The Path to Self-Sovereign Identity

Today I head out to a month-long series of events associated with identity: I’m starting with the 22nd (!) Internet Identity Workshop next week; then I’m speaking at the blockchain conference Consensus about identity; next I am part of the team putting together the first ID2020 Summit on Digital Identity at the United Nations; and finally I'm hosting the second #RebootingWebOfTrust design workshop on decentralized identity.

At all of these events I want to share a vision for how we can enhance the ability of digital identity to enable trust while preserving individual privacy. This vision is what I call “Self-Sovereign Identity”.

Why do we need this vision now? Governments and companies are sharing an unprecedented amount of information, cross-correlating everything from viewing habits to purchases, to where people are located during the day, to where they sleep at night, and with whom they associate. In addition, as the Third World enters the computer age, digital citizenship is providing Third World residents with greater access to human rights and to the global economy. When properly designed and implemented, self-sovereign identity can offer these benefits while also protecting individuals from the ever-increasing control of those in power, who may not have the best interests of the individual at heart.

But what exactly do I mean by “Self-Sovereign Identity”?

You Can’t Spell Identity without an “I”

Identity is a uniquely human concept. It is that ineffable “I” of self-consciousness, something that is understood worldwide by every person living in every culture. As René Descartes said, Cogito ergo sum: I think, therefore I am.

However, modern society has muddled this concept of identity. Today, nations and corporations conflate driver’s licenses, social security cards, and other state-issued credentials with identity; this is problematic because it suggests a person can lose his very identity if a state revokes his credentials or even if he just crosses state borders. I think, but I am not.

Identity in the digital world is even trickier. It suffers from the same problem of centralized control, but it’s simultaneously very balkanized: identities are piecemeal, differing from one Internet domain to another.

As the digital world becomes increasingly important to the physical world, it also presents a new opportunity; it offers the possibility of redefining modern concepts of identity. It might allow us to place identity back under our control — once more reuniting identity with the ineffable “I”.

In recent years, this redefinition of identity has begun to have a new name: self-sovereign identity. However, in order to understand this term, we need to review some history of identity technology:

The Evolution of Identity

The models for online identity have advanced through four broad stages since the advent of the Internet: centralized identity, federated identity, user-centric identity, and self-sovereign identity.

Phase One: Centralized Identity (administrative control by a single authority or hierarchy)

In the Internet’s early days, centralized authorities became the issuers and authenticators of digital identity. Organizations like IANA (1988) determined the validity of IP addresses and ICANN (1998) arbitrated domain names. Then, beginning in 1995, certificate authorities (CAs) stepped up to help Internet commerce sites prove they were who they said they were.

Some of these organizations took a small step beyond centralization and created hierarchies. A root controller could anoint other organizations to each oversee their own hierarchy. However, the root still had the core power — they were just creating new, less powerful centralizations beneath them.

Unfortunately, granting control of digital identity to centralized authorities of the online world suffers from the same problems caused by the state authorities of the physical world: users are locked in to a single authority who can deny their identity or even confirm a false identity. Centralization innately gives power to the centralized entities, not to the users.

As the Internet grew, as power accumulated across hierarchies, a further problem was revealed: identities were increasingly balkanized. They multiplied as web sites did, forcing users to juggle dozens of identities on dozens of different sites — while having control over none of them.

To a large extent, identity on the Internet today is still centralized — or at best, hierarchical. Digital identities are owned by CAs, domain registrars, and individual sites, and then rented to users or revoked at any time. However, for the last two decades there’s also been a growing push to return identities to the people, so that they actually could control them.

Interlude: Foreshadowing the Future

PGP (1991) offered one of the first hints toward what could become self-sovereign identity. It introduced the 'Web of Trust'1, which established trust for a digital identity by allowing peers to act as introducers and validators of public keys2. Anyone could be a validator in the PGP model. The result was a powerful example of decentralized trust management, but it focused on email addresses, which meant that it still depended on centralized hierarchies. For a variety of reasons, PGP never became broadly adopted.
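
To make the introducer idea concrete, here is a toy sketch (in Python, not real PGP) of how endorsements from already-trusted peers can establish trust in an unfamiliar key. The names, the threshold, and the data structures are all invented for the illustration.

```python
# Toy illustration of a PGP-style web of trust (not real PGP).
# A key is trusted if it is endorsed ("signed") by enough keys
# that the verifier already trusts as introducers.

TRUST_THRESHOLD = 2  # arbitrary threshold chosen for this example

# endorsements: signer -> set of keys they vouch for (hypothetical data)
endorsements = {
    "alice": {"carol", "dave"},
    "bob":   {"carol"},
    "carol": {"dave"},
}

def is_trusted(key: str, trusted_introducers: set[str]) -> bool:
    """Count how many already-trusted introducers vouch for `key`."""
    vouches = sum(1 for signer in trusted_introducers
                  if key in endorsements.get(signer, set()))
    return vouches >= TRUST_THRESHOLD

# Suppose we directly trust alice and bob as introducers:
print(is_trusted("carol", {"alice", "bob"}))  # True  (two vouches)
print(is_trusted("dave",  {"alice", "bob"}))  # False (only one vouch)
```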

Other early thoughts appeared in “Establishing Identity without Certification Authorities” (1996), a paper by Carl Ellison that examined how digital identity was created3. He considered both authorities such as Certificate Authorities and peer-to-peer systems like PGP as options for defining digital identity. He then settled on a method for verifying online identity by exchanging shared secrets over a secure channel. This allowed users to control their own identity without depending on a managing authority.
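
As a rough illustration of the shared-secret idea (not Ellison's actual protocol), the sketch below shows a challenge-response in which a party proves knowledge of a pre-shared secret without transmitting it; the secret and nonce size are arbitrary values chosen for the example.

```python
import hmac, hashlib, secrets

# Toy challenge-response using a pre-shared secret (illustrative only;
# a real exchange would run over an already-secure channel).
shared_secret = b"correct horse battery staple"   # hypothetical secret

def respond(challenge: bytes, secret: bytes) -> bytes:
    # Prove knowledge of the secret without sending it directly.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

challenge = secrets.token_bytes(16)               # verifier's random nonce
response = respond(challenge, shared_secret)      # prover's answer

# Verifier recomputes and compares in constant time.
expected = respond(challenge, shared_secret)
print(hmac.compare_digest(response, expected))    # True if secrets match
```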

Ellison was also at the heart of the SPKI/SDSI project (1999)4,5. Its goal was to build a simpler public key infrastructure for identity certificates that could replace the complicated X.509 system. Although centralized authorities were considered as an option, they were not the only option.

It was a beginning, but an even more revolutionary reconception of identity in the 21st century would be required to truly bring self-sovereignty to the forefront.

Phase Two: Federated Identity (administrative control by multiple, federated authorities)

The next major advancement for digital identity occurred at the turn of the century when a variety of commercial organizations moved beyond hierarchy to debalkanize online identity in a new manner.

Microsoft’s Passport (1999) initiative was one of the first. It imagined federated identity, which allowed users to utilize the same identity on multiple sites. However, it put Microsoft at the center of the federation, which made it almost as centralized as traditional authorities.

In response, Sun Microsystems organized the Liberty Alliance (2001). They resisted the idea of centralized authority, instead creating a "true" federation; the result, however, was an oligarchy: the power of centralized authority was now divided among several powerful entities.

Federation improved on the problem of balkanization: users could wander from site to site under the system. However, each individual site remained an authority.

Phase Three: User-Centric Identity (individual or administrative control across multiple authorities without requiring a federation)

The Augmented Social Network (2000) laid the groundwork for a new sort of digital identity in their proposal for the creation of a next-generation Internet. In an extensive white paper6, they suggested building “persistent online identity” into the very architecture of the Internet. From the viewpoint of self-sovereign identity, their most important advance was “the assumption that every individual ought to have the right to control his or her own online identity”. The ASN group felt that Passport and the Liberty Alliance could not meet these goals because the “business-based initiatives” put too much emphasis on the privatization of information and the modeling of users as consumers.

These ASN ideas would become the foundation of much that followed.

The Identity Commons (2001-Present) began to consolidate the new work on digital identity with a focus on decentralization. Their most important contribution may have been the creation, in association with the Identity Gang, of the Internet Identity Workshop (2005-Present) working group. For the last ten years, the IIW has advanced the idea of decentralized identity in a series of semi-yearly meetings.

The IIW community focused on a new term that countered the server-centric model of centralized authorities: user-centric identity. The term suggests that users are placed in the middle of the identity process. Initial discussions of the topic focused on creating a better user experience7, which underlined the need to put users front and center in the quest for online identity. However the definition of a user-centric identity soon expanded to include the desire for a user to have more control over his identity and for trust to be decentralized8.

The work of the IIW has supported many new methods for creating digital identity, including OpenID (2005), OpenID 2.0 (2006), OpenID Connect (2014), OAuth (2010), and FIDO (2013). As implemented, user-centric methodologies tend to focus on two elements: user consent and interoperability. By adopting them, a user can decide to share an identity from one service to another and thus debalkanize his digital self.

The user-centric identity communities had even more ambitious visions; they intended to give users complete control of their digital identities. Unfortunately, powerful institutions co-opted their efforts and kept them from fully realizing their goals. Much as with the Liberty Alliance, final ownership of user-centric identities today remains with the entities that register them.

OpenID offers an example. A user can theoretically register his own OpenID, which he can then use autonomously. However, this takes some technical know-how, so the casual Internet user is more likely to use an OpenID from one public web site as a login for another. If the user selects a site that is long-lived and trustworthy, he can gain many of the advantages of a self-sovereign identity — but it could be taken away at any time by the registering entity!

Facebook Connect (2008) appeared a few years after OpenID, leveraging lessons learned, and thus was several times more successful largely due to a better user interface9. Unfortunately, Facebook Connect veers even further from the original user-centric ideal of user control. To start with, there’s no choice of provider; it’s Facebook. Worse, Facebook has a history of arbitrarily closing accounts, as was seen in their recent real-name controversy10. As a result, people who access other sites with their “user-centric” Facebook Connect identity may be even more vulnerable than OpenID users to losing that identity in multiple places at one time.

It’s central authorities all over again. Worse, it’s like state-controlled authentication of identity, except with a self-elected “rogue” state.

In other words: being user-centric isn’t enough.

Phase Four: Self-Sovereign Identity (individual control across any number of authorities)

User-centric designs turned centralized identities into interoperable federated identities with centralized control, while also respecting some level of user consent about how to share an identity (and with whom). It was an important step toward true user control of identity, but just a step. To take the next step required user autonomy.

This is the heart of self-sovereign identity, a term that’s coming into increased use in the ‘10s. Rather than just advocating that users be at the center of the identity process, self-sovereign identity requires that users be the rulers of their own identity.

One of the first references to identity sovereignty occurred in February 2012, when developer Devon Loffreto wrote about “Sovereign Source Authority”11. He said that individuals “have an established Right to an ‘identity’”, but that national registration destroys that sovereignty. Some ideas are in the air, so it’s no surprise that almost simultaneously, in March 2012, Patrick Deegan began work on Open Mustard Seed, an open-source framework that gives users control of their digital identity and their data in decentralized systems12. It was one of several "personal cloud" initiatives that appeared around the same time.

Since then, the idea of self-sovereign identity has proliferated. Loffreto has blogged about how the term has evolved13. As a developer, he shows one way to address self-sovereign identity: as a mathematical policy, where cryptography is used to protect a user’s autonomy and control. However, that’s not the only model. Respect Network instead addresses self-sovereign identity as a legal policy; they define contractual rules and principles that members of their network agree to follow14. The Windhover Principles for Digital Identity, Trust and Data15 and Evernym’s Identity System Essentials16 offer some additional perspectives on the rapid advent of self-sovereign identity since 2012.

In the last year, self-sovereign identity has also entered the sphere of international policy17. This has largely been driven by the refugee crisis that has beset Europe, which has resulted in many people lacking a recognized identity due to their flight from the state that issued their credentials. However, it’s a long-standing international problem, as foreign workers have often been abused by the countries they work in due to the lack of state-issued credentials.

If self-sovereign identity was becoming relevant a few years ago, in light of current international crises its importance has skyrocketed.

The time to move toward self-sovereign identity is now.

A Definition of Self-Sovereign Identity

With all that said, what is self-sovereign identity exactly? The truth is that there’s no consensus. As much as anything, this article is intended to begin a dialogue on that topic. However, I wish to offer a starting position.

Self-sovereign identity is the next step beyond user-centric identity and that means it begins at the same place: the user must be central to the administration of identity. That requires not just the interoperability of a user’s identity across multiple locations, with the user’s consent, but also true user control of that digital identity, creating user autonomy. To accomplish this, a self-sovereign identity must be transportable; it can’t be locked down to one site or locale.

A self-sovereign identity must also allow ordinary users to make claims, which could include personally identifying information or facts about personal capability or group membership18. It can even contain information about the user that was asserted by other persons or groups.
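
To make this concrete, here is a hypothetical, much-simplified sketch of what an identity record carrying such claims might look like as data; the field names and identifiers are invented for this example and follow no particular standard.

```python
# Hypothetical, simplified self-sovereign identity record with claims.
# Field names and identifiers are illustrative only.
identity = {
    "id": "did:example:alice-7f3c",          # made-up decentralized identifier
    "claims": [
        {   # a self-asserted claim
            "asserted_by": "did:example:alice-7f3c",
            "statement": {"over_18": True},
        },
        {   # a claim asserted by another party (e.g., an employer)
            "asserted_by": "did:example:acme-corp",
            "statement": {"employee": True, "role": "engineer"},
        },
    ],
}

# The holder can present only the claims relevant to a given interaction:
relevant = [c for c in identity["claims"] if "over_18" in c["statement"]]
print(relevant)
```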

In the creation of a self-sovereign identity, we must be careful to protect the individual. A self-sovereign identity must defend against financial and other losses, prevent human rights abuses by the powerful, and support the rights of the individual to be oneself and to freely associate19.

However, there’s a lot more to self-sovereign identity than just this brief summation. Any self-sovereign identity must also meet a series of guiding principles — and these principles actually provide a better, more comprehensive, definition of what self-sovereign identity is. A proposal for them follows:

Ten Principles of Self-Sovereign Identity

A number of different people have written about the principles of identity. Kim Cameron wrote one of the earliest “Laws of Identity”20, while the aforementioned Respect Network policy21 and W3C Verifiable Claims Task Force FAQ22 offer additional perspectives on digital identity. This section draws on all of these ideas to create a group of principles specific to self-sovereign identity. As with the definition itself, consider these principles a departure point to provoke a discussion about what’s truly important.

These principles attempt to ensure the user control that’s at the heart of self-sovereign identity. However, they also recognize that identity can be a double-edged sword — usable for both beneficial and maleficent purposes. Thus, an identity system must balance transparency, fairness, and support of the commons with protection for the individual.

  1. Existence. Users must have an independent existence. Any self-sovereign identity is ultimately based on the ineffable “I” that’s at the heart of identity. It can never exist wholly in digital form. This must be the kernel of self that is upheld and supported. A self-sovereign identity simply makes public and accessible some limited aspects of the “I” that already exists.
  2. Control. Users must control their identities. Subject to well-understood and secure algorithms that ensure the continued validity of an identity and its claims, the user is the ultimate authority on their identity. They should always be able to refer to it, update it, or even hide it. They must be able to choose celebrity or privacy as they prefer. This doesn’t mean that a user controls all of the claims on their identity: other users may make claims about a user, but they should not be central to the identity itself.
  3. Access. Users must have access to their own data. A user must always be able to easily retrieve all the claims and other data within his identity. There must be no hidden data and no gatekeepers. This does not mean that a user can necessarily modify all the claims associated with his identity, but it does mean they should be aware of them. It also does not mean that users have equal access to others’ data, only to their own.
  4. Transparency. Systems and algorithms must be transparent. The systems used to administer and operate a network of identities must be open, both in how they function and in how they are managed and updated. The algorithms should be free, open-source, well-known, and as independent as possible of any particular architecture; anyone should be able to examine how they work.
  5. Persistence. Identities must be long-lived. Preferably, identities should last forever, or at least for as long as the user wishes. Though private keys might need to be rotated and data might need to be changed, the identity remains. In the fast-moving world of the Internet, this goal may not be entirely reasonable, so at the least identities should last until they’ve been outdated by newer identity systems. This must not contradict a “right to be forgotten”; a user should be able to dispose of an identity if he wishes and claims should be modified or removed as appropriate over time. To do this requires a firm separation between an identity and its claims: they can't be tied forever.
  6. Portability. Information and services about identity must be transportable. Identities must not be held by a singular third-party entity, even if it's a trusted entity that is expected to work in the best interest of the user. The problem is that entities can disappear — and on the Internet, most eventually do. Regimes may change, users may move to different jurisdictions. Transportable identities ensure that the user remains in control of his identity no matter what, and can also improve an identity’s persistence over time.
  7. Interoperability. Identities should be as widely usable as possible. Identities are of little value if they only work in limited niches. The goal of a 21st-century digital identity system is to make identity information widely available, crossing international boundaries to create global identities, without losing user control. Thanks to persistence and autonomy these widely available identities can then become continually available.
  8. Consent. Users must agree to the use of their identity. Any identity system is built around sharing that identity and its claims, and an interoperable system increases the amount of sharing that occurs. However, sharing of data must only occur with the consent of the user. Though other users such as an employer, a credit bureau, or a friend might present claims, the user must still offer consent for them to become valid. Note that this consent might not be interactive, but it must still be deliberate and well-understood.
  9. Minimalization. Disclosure of claims must be minimized. When data is disclosed, that disclosure should involve the minimum amount of data necessary to accomplish the task at hand. For example, if only a minimum age is called for, then the exact age should not be disclosed, and if only an age is requested, then the more precise date of birth should not be disclosed. This principle can be supported with selective disclosure, range proofs, and other zero-knowledge techniques, but non-correlatability is still a very hard (perhaps impossible) task; the best we can do is to use minimalization to support privacy as best as possible. (A toy sketch of minimal disclosure follows this list.)
  10. Protection. The rights of users must be protected. When there is a conflict between the needs of the identity network and the rights of individual users, then the network should err on the side of preserving the freedoms and rights of the individuals over the needs of the network. To ensure this, identity authentication must occur through independent algorithms that are censorship-resistant and force-resilient and that are run in a decentralized manner.
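
As promised under the Minimalization principle above, here is a toy sketch of disclosing only a derived fact (a minimum age) rather than the underlying data (the date of birth). It is not a zero-knowledge proof, and real selective-disclosure systems use far stronger cryptography; all values are invented for the illustration.

```python
from datetime import date

birth_date = date(1990, 6, 15)   # hypothetical credential data, never sent as-is

def is_at_least(birth: date, years: int, today: date) -> bool:
    """Derive only the fact the verifier asked for: 'at least N years old'."""
    age = today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))
    return age >= years

# Only the derived boolean is disclosed; the birth date stays with the user.
print(is_at_least(birth_date, 18, date(2016, 4, 25)))  # True
```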

I seek your assistance in taking these principles to the next level. I will be at the IIW conference this week, at other conferences this month, and in particular I will be meeting with other identity technologists on May 21st and 22nd in NYC after the ID2020 Summit on Digital Identity. These principles will be placed on GitHub, and we hope to collaborate with all those interested in refining them through the workshop or through GitHub pull requests from the broader community. Come join us!

Conclusion

The idea of digital identity has been evolving for a few decades now, from centralized identities to federated identities to user-centric identities to self-sovereign identities. However, even today exactly what a self-sovereign identity is, and what rules it should recognize, aren’t well-known.

This article seeks to begin a dialogue on that topic, by offering up a definition and a set of principles as a starting point for this new form of user-controlled and persistent identity of the 21st century.

Glossary

The following terms are relevant to this article. These are just a subset of the terms generally used to discuss digital identity, and have been minimized to avoid unnecessary complexity.

Authority. A trusted entity that is able to verify and authenticate identities. Classically, this was a centralized (or later, federated) entity. Now, this can also be an open and transparent algorithm run in a decentralized manner.

Claim. A statement about an identity. This could be: a fact, such as a person's age; an opinion, such as a rating of their trustworthiness; or something in between, such as an assessment of a skill.

Credential. In the identity community this term overlaps with claims. Here it is used instead for the dictionary definition: "entitlement to privileges, or the like, usually in written form"23. In other words, credentials refer to the state-issued plastic and paper IDs that grant people access in the modern world. A credential generally incorporates one or more identifiers and numerous claims about a single entity, all authenticated with some sort of digital signature.
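
As an illustration of that last sentence, the sketch below bundles an identifier and a few claims into a credential and authenticates the whole thing. It uses an HMAC purely for brevity, where a real credential would carry a public-key signature from the issuer, and every field name and key is hypothetical.

```python
import hmac, hashlib, json

# Hypothetical issuer key and credential contents (illustrative only).
issuer_key = b"issuer-secret-key"

credential = {
    "identifier": "did:example:alice-7f3c",            # who the credential is about
    "claims": {"licensed_driver": True, "class": "C"},
    "issuer": "did:example:dmv",
}

# Serialize deterministically, then authenticate the whole credential.
payload = json.dumps(credential, sort_keys=True).encode()
credential_signature = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()

# A verifier holding the same key can check the credential is untampered.
check = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
print(hmac.compare_digest(credential_signature, check))  # True
```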

Identifier. A name or other label that uniquely identifies an identity. For simplicity's sake, this term has been avoided in this article (except in this glossary), but it's generally important to an understanding of digital identity.

Identity. A representation of an entity. It can include claims and identifiers. In this article, the focus is on digital identity.

Thanks To…

Thanks to various people who commented on early drafts of this article. Some of their suggestions were used word for word, some were adapted to the text, and everything was carefully considered. The most extensive revisions came from comments by Shannon Appelcline, Dave Crocker, Anil John, and Drummond Reed. Other commentators and contributors include: Doc Searls, Kaliya Young, Devon Loffreto, Greg Slepak, Alex Fowler, Fen Labalme, Justin Newton, Markus Sabadello, Adam Back, Ryan Shea, Manu Sporny, and Peter Todd. I know much of the commentary didn't make it into this draft, but the discussion on this topic continues…

Image by John Hain licensed CC0 https://pixabay.com/en/identity-mask-disguise-mindset-510866/

The opinions in this article are my own, not my employer's nor necessarily the opinions of those that have offered commentary on it.


1 Jon Callas, Phil Zimmermann. 2015. “The PGP Paradigm”. #RebootingWebOfTrust Design Workshop. https://github.com/WebOfTrustInfo/rebooting-the-web-of-trust/blob/master/topics-and-advance-readings/PGP-Paradigm.pdf.
2 Appelcline, Crocker, Farmer, Newton. 2015. “Rebranding the Web of Trust”. #RebootingWebOfTrust Design Workshop. https://github.com/WebOfTrustInfo/rebooting-the-web-of-trust/blob/master/final-documents/rebranding-web-of-trust.pdf
3 Ellison, Carl. 1996. “Establishing Identity without Certification Authorities”. 6th USENIX Security Symposium. http://irl.cs.ucla.edu/~yingdi/pub/papers/Ellison-OldFriend-USENIX-Security-1996.pdf.
4 Ellison, C. 1999. “RFC 2692: SPKI Requirements”. IETF. https://tools.ietf.org/html/rfc2692
5 Ellison, C., et al. 1999. “RFC 2693: SPKI Certificate Theory”. IETF. https://tools.ietf.org/html/rfc2693
6 Jordan, Ken, Jan Hauser, and Steven Foster. 2003. “The Augmented Social Network: Building Identity and Trust into the Next-Generation Internet”. Networking: A Sustainable Future. http://asn.planetwork.net/asn-archive/AugmentedSocialNetwork.pdf
7 Jøsang, Audun and Simon Pope. 2005. “User Centric Identity Management”. AusCERT Conference 2005. http://folk.uio.no/josang/papers/JP2005-AusCERT.pdf
8 Verifiable Claims Task Force. 2016. “[Editor Draft] Verifiable Claims Working Group Frequently Asked Questions”. W3C Technology and Society Domain. http://w3c.github.io/webpayments-ig/VCTF/charter/faq.html
9 Gilbertson, Scott. 2011. “OpenID: The Web’s Most Successful Failure”. Webmonkey. http://www.webmonkey.com/2011/01/openid-the-webs-most-successful-failure
10 Hassine, Wafa Ben and Eva Galperin. 2015. “Changes to Facebook’s ‘Real Name’ Policy Still Don’t Fix the Problem”. EFF. https://www.eff.org/deeplinks/2015/12/changes-facebooks-real-names-policy-still-dont-fix-problem
11 Loffreto, Devon. 2012. “What is ‘Sovereign Source Authority’?” The Moxy Tongue. http://www.moxytongue.com/2012/02/what-is-sovereign-source-authority.html
12 Open Mustard Seed. 2013. “Open Mustard Seed (OMS) Framework”. ID3. https://idcubed.org/open-platform/platform/
13 Loffreto, Devon. 2016. “Self-Sovereign Identity”. The Moxy Tongue. http://www.moxytongue.com/2016/02/self-sovereign-identity.html
14 Respect Network. 2016. “The Respect Trust Network v2.1”. oixnet.org. http://oixnet.org/wp-content/uploads/2016/02/respect-trust-framework-v2-1.pdf
15 Graydon, Carter. 2014. “Top Bitcoin Companies Propose the Windhover Principles – A New Digital Framework for Digital Identity, Trust and Open Data”. CCN. https://www.cryptocoinsnews.com/top-bitcoin-companies-propose-windhover-principles-new-digital-framework-digital-identity-trust-open-data/
16 Smith, Samuel M. and Khovratovich, Dmitry. 2016. “Identity System Essentials”. Evernym. http://www.evernym.com/assets/doc/Identity-System-Essentials.pdf
17 Dahan, Mariana and John Edge. 2015. “The World Citizen: Transforming Statelessness into Global Citizenship”. The World Bank. http://blogs.worldbank.org/ic4d/category/tags/self-sovereign-identity-systems
18 Identity Commons. 2007. “Claim”. IDCommons Wiki. http://wiki.idcommons.net/Claim
19 Christopher Allen. 2015. “The Four Kinds of Privacy”. Life With Alacrity blog. http://www.lifewithalacrity.com/2015/04/the-four-kinds-of-privacy.html
20 Cameron, Kim. 2005. “The Laws of Identity”. https://msdn.microsoft.com/en-us/library/ms996456.aspx
21 Respect Network. 2016. “The Respect Trust Network v2.1”. oixnet.org. http://oixnet.org/wp-content/uploads/2016/02/respect-trust-framework-v2-1.pdf
22 Verifiable Claims Task Force. 2016. “[Editor Draft] Verifiable Claims Working Group Frequently Asked Questions”. W3C Technology and Society Domain. http://w3c.github.io/webpayments-ig/VCTF/charter/faq.html
23 "Definition of Credential". Dictionary.com. http://www.dictionary.com/browse/credential?s=t

Defining “Participatory Ecosystem” — Grow the Pie, Not Slice It!

As part of being a member of the sustainable MBA community at Pinchot University, I have been trying to articulate what I like about the kinds of collaboration that are possible even inside a competitive industry. In our MBA program, we don't just teach about competitive strategy (using classics like Porter's book), but we also teach about the nature of coopetition. These practices are more likely to lead to sustainable businesses (not only sustainable=green, but sustainable=enduring).

I have been a participant during several periods of technology history where coopetition has been practiced, most recently during the early days of the Twitter ecosystem. I can distinctly remember two companies offering competing Twitter clients sharing best practices at the iOSDevCamp hackathon I hosted. Unfortunately these practices ended when Twitter pulled the rug out from under independent developers by controlling the API keys under the guise of Delivering a Consistent Twitter Experience (post now only visible on Internet Archive). There are some who argue that Twitter may have ultimately killed its ecosystem in pursuit of short-term profits over a long-term community of innovators. We have seen this takeover pattern elsewhere, as described in How Ecosystems Became the New Walled Gardens.

So what makes a great participatory ecosystem? My recent work in trying to understand Elinor Ostrom's design principles for the collective governance of the commons and some other thoughts have led me to this definition:

Participatory Ecosystem

A participatory ecosystem is a business ecosystem with relatively low barriers to economic participation, artistic and professional expression, and civic engagement by all stakeholders, including producers and consumers.

It has strong support for creating, sharing, and increasing the production of goods and services of value to the ecosystem. A participatory ecosystem will have system processes where established stakeholders are incentivized to share knowledge, access to markets, and capital with new stakeholders in the ecosystem, and to turn consumers into producers.

Stakeholders with leadership positions in a participatory ecosystem may change over time, but the function of an ecosystem leader is valued by the community because it enables members to move toward shared visions, to grow the market, to align their investments, and to find mutually supportive roles in the ecosystem.

A participatory ecosystem is resilient, and has system processes, such as decentralization, to prevent bad actors. Every stakeholder in a participatory ecosystem believes that their goods and services matter, and feels some degree of social connection and community with the others (at least they care what other people think about what they produce). Not every stakeholder must produce, but all must believe that they are free to produce when ready and that what they contribute will be appropriately valued.

This is not completely an original idea — it is a mashup of Henry Jenkins’ “participatory culture” definition with James Moore’s “business ecosystem” ideas. I've added a few things I've observed about working participatory ecosystems, and have also run this definition by a number of my colleagues in a variety of industries for small tweaks.

Most recently I've been using it in talks about the blockchain ecosystem. I have recently joined Blockstream where I am leading a number of strategic initiatives, including the company’s participation in cross-industry collaborations, as well as future technical efforts with international standards organizations such as W3C, IETF, and OASIS. In these efforts I have been sharing this definition more broadly, and it seems to be resonating with the community, in particular as the blockchain community cares deeply about decentralization.

I've posted this definition to GitHub hoping that others can fork it and submit pull requests to make it better — just like how the open-source world works!

 


A Revised “Ostrom’s Design Principles for Collective Governance of the Commons”


The traditional economic definition of “the commons” is those resources that are held in common and not privately owned. This is closely related to the economic concept of public goods, which are goods that are both non-excludable (in that individuals cannot be effectively excluded from use) and non-rivalrous (where use by one individual does not reduce availability to others).

My own personal definition for the commons is broader — any regenerative, self-organizing complex system that can be drawn upon for deep wealth. These can include traditional commons, such as lumber, fish, etc., but can also include other regenerative systems such as communities, markets, intellectual property, etc.

In 2009, Elinor Ostrom received the Nobel Prize in Economics for her “analysis of economic governance, especially the commons”. In that work, she listed 8 principles for effectively managing the commons and guarding against its tragedy.

However, I've found her original words — as well as many adaptations I've seen since — to be not very accessible. Also, since the original release of the list of 8 principles there has been further research resulting in updates and clarifications to her original list.

I also wanted to generalize her principles for broader use given my broader definition of the commons, and apply them to everything from how to manage an online community to how a business should function with competitors. I went to the original version, more contemporary research, and a number of modern adaptations to define a new set of design principles.

My first draft divided up her 8 principles into 10. I then ran this list by a number of experts in this field, and here is my second draft based on their feedback. This draft in fact breaks up her principles into 12 different ones, but I have retained her old numbering system as there are a large number of works that refer to the original 8. In addition, there appear to be some differences of opinion on number 8, so I've included two variations.

Ostrom’s Design Principles for Collective Governance of the Commons

or

How to Avoid the Tragedy of the Commons within Self-Organizing Systems

1A. DEFINE AUTHORIZED USE: The community of those who have the right to use the common resource is clearly defined.

1B. DEFINE COMMONS BOUNDARIES: The boundaries of the commons are clearly defined so as to separate the usage rules from the larger environment.

2A. MAKE COSTS PROPORTIONAL: Costs for using and maintaining the commons are proportional to the benefits that those users receive from the commons.

2B. PAY ALL COSTS: People that use the commons keep costs inside the local system as much as possible. They do not externalize costs either to neighbors or future generations.

3A. DECIDE INCLUSIVELY: Everyone who benefits from or is affected by the use of the commons makes choices about it. This includes decision-making on resource allocation and on rules and responsibilities for use of the commons. Members of the community operate with respect and mutual regard for each other.

3B. ADAPT LOCALLY: Members of the community adapt the rules and culture for the commons to local needs, resource qualities and environmental conditions.

4A. SHARE KNOWLEDGE: All members are actively engaged in observing and sharing knowledge about the conditions of the system.

4B. MONITOR EFFECTIVELY: Monitors who are members of (or supported by and accountable to) the community view and report on the use and maintenance of the commons by community members and others.

5. HOLD ACCOUNTABLE: Violators of the rules or culture of the commons face graduated sanctions, proportional to the seriousness of the transgression. These are applied by members of the community or by others who are accountable to the community for applying such consequences.

6. PROMPTLY RESOLVE CONFLICTS: The community offers inexpensive, fast and easy access to mediation for effective conflict resolution.

7. GOVERN LOCALLY: Community self-determination is recognized and supported by higher-level authorities.

8. CONNECT TO RELATED SYSTEMS: Any side effects or other repercussions imposed by one community on another in managing their commons should be addressed in the context of larger, nested communities that have a legitimate role in those consequences. These externalities should be resolved by the community at the most immediate or local level (aka subsidiarity) that can operate from effective human relationships, rather than by a faceless authority.

ALT 8. COORDINATE WITH RELATED SYSTEMS: For groups that are part of larger social systems, there must be appropriate coordination among relevant groups. Every sphere of activity has an optimal scale. Large scale governance requires finding the optimal scale for each sphere of activity and appropriately coordinating the activities, a concept called polycentric governance. A related concept is subsidiarity, which assigns governance tasks by default to the lowest jurisdiction, unless this is explicitly determined to be ineffective.


A Spectrum of Consent

I have made understanding of consent and consensus, in both their human and technological forms, a major part of my career. I have explored them through my work in cryptographic technologies, but also in human terms at the Group Pattern Language Project, by co-authoring with Shannon Appelcline a forthcoming book on the design of collaborative games, and through another book in progress on the patterns of cooperative play. My business management style is also more collaborative and inclusive.

This topic is so important to me that the company I founded in 1988 (which eventually led the effort to establish TLS 1.0 as an Internet standard) was named "Consensus Development" (archive.org).

Thus I've been fascinated this week to watch a major online community try to define for itself what “consent” and “consensus” in their community will mean. This community in question is the Bitcoin cryptocurrency community, which is faced with a minority of the community attempting to “hard fork”. This weekend they meet in Montreal to attempt to discover another way to return to some form of unanimity and broad consent of their stakeholders.

This is one of the hardest problems in human interactions. Consent comes from the Latin, meaning “feel together”, which this community now believes it lacks. But it is particularly poignant that this particular community is facing these questions. Bitcoin is technologically based on a formal protocol that uses a mathematical and cryptographic method for consensus called the “blockchain”. Every 10 minutes, thousands of nodes and hundreds of miners arrive at a consensus on all the bitcoin transactions during that time. Bitcoin, and the larger blockchain community, are the world's experts on cryptographic forms of consensus. But the human consensus problem is still hard for them too.

A Spectrum of Consent

In my recent efforts to understand this topic, and some recent dives into my Systems For Collective Choice, the topic of a Spectrum of Consent has repeatedly come up. There appear to be a range of levels of consent required for various deliberative processes and voting systems.

I have had these incomplete notes on the Spectrum of Consent around for a couple of years, which I've only shared privately. However, given my discussions this week at the Consensus 2015 conference and elsewhere, it was time to share. I welcome comments and suggestions!

Basically, many groups view consensus processes as a requirement; however, many others see consensus processes as unworkable and a serious problem to be avoided. These efforts to define A Spectrum Of Consent are in part meant to help me understand this dichotomy. I personally believe that certain deliberative processes and voting systems are good for some things, and some for others. But I find I'm rare in that opinion — most people lean strongly one way or the other.

There is clearly some conflation and orthogonality in my list below of types of representation (who decides), deliberation (how you approach a decision), and choice selection method (voting system), but I think this is a good start at defining a spectrum.

Uniform Consensus (or Absolute Agreement) — A voting system where all parties are required to support and agree to a decision, with no one abstaining (declining to either agree or disagree).

Unanimous Consent (or Unanimity) — A voting system where all parties support and agree to a decision, however, some parties may abstain by raising no objections.

Consensus Decision Making — A deliberative process of reaching unanimous consent (everyone agrees or abstains) with a number of procedural rules or cultural norms that limit blocking. In most forms of Consensus Decision Making all members have equal voice in the deliberation and an equal vote. Most ask a closing question such as "Are there any remaining unresolved concerns?" or "Are there any paramount objections?". Some believe that in Consensus Decision Making there is never a “vote”. (Another definition of consensus) (PDF on the Consensus Process) and (when to not use consensus)

Sense of Meeting — A deliberative practice originally created by the Quakers. It is related to consensus in that it seeks “unity in the discernment of a decision”. It is not necessary for every member to fully agree with a decision, but rather for members “to discern that as a body they are called in a particular direction.”

Consensus Minus One — A voting system where a party may block only if they can find at least one other uninvolved party to agree to join together to block. Otherwise the decision passes.

Consensus Seeking — A deliberative process that attempts to reach unanimous consent, but can fall back to a majority vote when required. It is essentially Consensus Decision Making in its deliberative process — listening to those with objections — but the ability to go to a vote stops the "tyranny of the minority" that sometimes happens with consensus.

Mediated Consensus — Related to Consensus Seeking. An immediate decision can be made by consensus, but if that fails the blockers and proponents have to meet separately, with or without a mediator, and if they still don't agree then it's brought back to the next meeting for a vote.

Distributed Consensus (still seeking the right name for this one) — There are some rules for how decision making is distributed into smaller, inter-related groups, each of which has authority over a domain. Consensus is required within each group but not of the whole.

Consequential Consensus (still seeking the right name for this one) — Only those affected by the outcome of the decision can participate and vote in the decision, which requires consensus of all those affected.

Representative Consensus (still seeking the right name for this one) — Each party who participates in the deliberation and votes represents the interests of others under some set of rules. Consensus is required only among the representatives.

Appreciative Inquiry Based Deliberation (another one needing a name) — A deliberative process that focuses on moving forward on things that there is agreement on, that are "the best of what is, in order to imagine what could be, followed by collective design of a desired future state that is compelling and thus, does not require the use of incentives, coercion or persuasion for planned change to occur."

Blocking or Vetoing Representation — Voting systems where one or more members may have the right to block the deliberative process before a vote, or veto after a vote.

Absolute Super Majority — A voting system where support for a proposal to pass must be larger than a simple majority, based on the entire membership rather than only on those present and voting, typically 2/3rds.

Super Majority or Qualified Majority — A voting system where support for a proposal to pass must be greater than a simple majority of those present and voting, typically 2/3rds of those voting.

Absolute Majority — A voting system where support for a proposal to pass must be 50% + 1 vote, based on the entire membership rather than on those present and voting.

Simple Majority — A voting system where support for a proposal to pass must be 50% + 1 vote of those present and voting.
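
The arithmetic behind these four majority thresholds is easy to conflate, so here is a small worked sketch; the membership, turnout, and vote counts are invented numbers.

```python
# Toy comparison of the four majority thresholds defined above.
members_total = 100   # entire membership (hypothetical)
present_voting = 60   # those actually present and voting (hypothetical)
yes_votes = 35

simple_majority        = yes_votes > present_voting / 2       # > 30  -> True
absolute_majority      = yes_votes > members_total / 2        # > 50  -> False
super_majority         = yes_votes >= 2 * present_voting / 3  # >= 40 -> False
absolute_super_majority = yes_votes >= 2 * members_total / 3  # >= ~67 -> False

print(simple_majority, absolute_majority, super_majority, absolute_super_majority)
```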

Rules of Order or Parliamentary Procedure — A deliberative process such as Robert's Rules of Order or other rules used by legislative bodies such as a senate or parliament, but often used by corporations for decision making, such as a board of directors meeting. At their heart is the rule of the majority with respect for the minority. Their object is to allow deliberation upon questions of interest to the group and to arrive at the sense or the will of the organization as a whole upon these questions.

Plurality or Relative Majority — A voting system where when there are multiple options, the largest number of votes wins.

Right to Fork — Unique to open source communities: consensus can be reached by having the stakeholders lacking consensus split off to form their own consensus, and since the assets are largely intellectual they can compete equally for the attention of the community and the markets they serve.

Citizen Assembly (sometimes called sortition, public sector representation, jury or allotment) — A deliberative process where a representative random sample of eligible voters is selected to make binding decisions for the group. The voting system used may be consensus or some form of majority. Ancient Athenian democracy actually used a form of this in the Boule and in juries.

Distributed Authority — There are some rules for how decision making is distributed into smaller, inter-related groups, each of which has authority over a domain. How each group makes decisions is decided by that group.

Executive Authority — Rules for how a party is elected to represent a group, who then has authority to make decisions for that group, typically for a limited period of time.

Dictatorship — Executive authority exercised to ensure a monopoly of authority by its elected representative.

(image credit: grant horwood, aka frymaster CC BY-SA 2.5-2.0-1.0 )


Speaking at Consensus 2015

Christopher Allen Internet Security Pioneer Speaking at Consensus 10 Sep NYC

I'm heading out today to New York City for Consensus 2015, where I am speaking on the panel ‘Bitcoin and its Antecedents: A Look at the History and Evolution of Digital Cash’:

Bitcoin is far from the first attempt at creating a form of digital money with the potential to upend existing systems. Our panelists will look at bitcoin's predecessors and close cousins. Nathaniel Popper wrote the book Digital Gold, which delves into bitcoin's genesis; Christopher Allen is an internet security expert who has been involved in digital cash systems including Digicash for decades, while Garrick Hileman is CoinDesk's lead analyst and an economic historian at the LSE, specializing in alternative and private monies.

This invite probably came about from an extended interview that Tim Swanson, author of the blockchain book and blog The Great Wall of Numbers, shared with his readers. Tim wrote me saying “There is some dispute, apparently, of the history and precursors of distributed computing / consensus before Bitcoin”, and asked me if I would share my “own view of what a blockchain is defined as, what Nakamoto consensus is relative to other 'solutions' to Byzantine faults, triple entry accounting, etc.” He published my answers in his post A blockchain with emphasis on the ‘a’. For posterity, I thought I'd also share my part of the interview here:

I certainly was an early digital currency banner waver — I did some consulting work with Xanadu, and later for the very early Digicash. At various points in the growth of SSL both First Virtual and PGP tried to acquire my company. When I saw Nick’s “First Monday” article the day it came out, it immediately clicked a number of different puzzle pieces that I’d not quite put together into one place. I immediately started using the term smart contracts and was telling my investors, and later Certicom, that this is what we really should be doing (maybe because I was getting tired of battles in SSL/TLS standards when that wasn’t what Consensus Development had been really founded to solve).

However, in the end, I don’t think anything I did actually went anywhere, either technically or as a business, other than maybe getting some other technologists interested. So in the end I’m more of a witness to the birth of these technologies than a creator.

History in this area is distorted by software patents — there are a number of innovative approaches that were scrapped because of awareness of litigious patent holders. I distinctly remember when I first heard about some innovative hash chain ideas that a number of us wanted to use hash trees with them, but we couldn’t figure out how to avoid the 1979 Merkle Hash Tree patent, whose base patent wouldn’t expire until ’96, as well as some other subsidiary hash tree and time stamp patents that wouldn’t expire until the early 2000s.

As I recall, at the time we were all trying to solve the micropayment problem. Digicash had used cryptography for larger-sized cash transactions, whereas First Virtual, Cybercash and others were focused on securing the ledger side and needed larger transaction fees and thus larger amounts of money to function. To scale down we were all looking at hash chain ideas from Lamport’s S/KEY from the late 80’s and distributed transactional ledgers from X/Open’s DTP from the early 90s as inspirations. DEC introduced Millicent during this period, and I distinctly remember people saying “this will not work, it requires consumers to hold keys in an electronic wallet”. On the cryptographic hash side of this problem Adam Back did Hashcash, and Rivest and his crew introduced PayWord and MicroMint. On the transaction side CMU introduced NetBill.
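
To show why hash chains appealed to micropayment designers, here is a toy sketch in the spirit of S/KEY and PayWord (it matches neither scheme exactly): the payer commits to the end of a hash chain, and each revealed preimage acts as a cheaply verifiable "coin". The seed and chain length are arbitrary values chosen for the example.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Build a hash chain from a secret seed: w_n, w_{n-1} = h(w_n), ..., w_0.
seed = b"toy-seed-value"                      # hypothetical secret seed
chain = [seed]
for _ in range(10):
    chain.append(h(chain[-1]))
commitment = chain[-1]                         # w_0, shared with the payee up front

# Spending a coin means revealing the next preimage; the payee verifies it
# with a single cheap hash against the previously known value.
def verify(payment: bytes, last_known: bytes) -> bool:
    return h(payment) == last_known

first_payment = chain[-2]
print(verify(first_payment, commitment))       # True
```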

Nick Szabo wrote about using hashes for post-unforgeable transaction logs in his original smart contract paper in ’97, in which he referred to Surety’s work (and they held the Merkle hash tree and other time signature patents), but in that original paper he did not look at proof-of-work at all. It was another year before he, Wei Dai, and Hal Finney started talking about using proof-of-work as a possible foundational element for smart contracts. I remember some discussions over beer in Palo Alto circa ’99 with Nick, after I became CTO of Certicom, about creating dedicated proof-of-work secure hardware that would create tokens that could be used as an underlying basis for his smart contract ideas. This was interesting to Certicom as we had very good connections into the cryptographic hardware industry, and I recommended that we should hire him. Nick eventually joined Certicom, but by that point they had cancelled my advanced cryptography group to raise profits in order to go public in the US (causing me to resign), and then later ceased all work in that area when the markets fell in 2001.

I truly believe that we could have had cryptographic smart contracts by ’04 if Certicom had not focused on short-term profits (see Solution #3 at the bottom of this post for my thoughts back in 2004 after a 3-year non-compete and NDA)…

What is required, I believe, is a major paradigm shift. We need to leave the whole business of fear behind and instead embrace a new model: using cryptography to enable business rather than to prevent harm. We need to add value by making it possible to do profitable business in ways that are impossible today. There are, fortunately, many cryptographic opportunities, if we only consider them. Cryptography can be used to make business processes faster and more efficient. With tools derived from cryptography, executives can delegate more efficiently and introduce better checks and balances. They can implement improved decision systems. Entrepreneurs can create improved auction systems.
Nick Szabo is one of the few developers who has really investigated this area, through his work on Smart Contracts. He has suggested ways to create digital bearer certificates, and has contemplated some interesting secure auctioning techniques and even digital liens. Expanding upon his possibilities we can view the ultimate Smart Contract as a sort of Smart Property. Why not form a corporation on the fly with digital stock certificates, allow it to engage in its creative work, then pay out its investors and workers and dissolve? With new security paradigms, this is all possible.

When I first heard about Bitcoin, I saw it as clearly having two different parts. First was a mix of old ideas about unforgeable transaction logs using hash trees combined into blocks connected by hash chains. This clearly is the “blockchain”. But in order for this blockchain to function, it needed timestamping, for which fortunately all the patents had expired. The second essential part of Bitcoin was a proof-of-work system to timestamp the blocks, which clearly was based on Back’s HashCash rather than the way transactions were timestamped in Szabo’s bit gold design.

I have to admit, when I first saw it I didn’t really see much in Bitcoin that was innovative — but I did appreciate how it combined a number of older ideas into one place. I did not predict its success, but thought it was an interesting experiment that might lead to a more elegant solution. (BTW, IMHO Bitcoin became successful more because of how it leveraged cypherpunk memes and their incentives to participate in order to bootstrap the ecosystem rather than because of any particularly elegant or original cryptographic ideas.)

In my head, Bitcoin consists of blocks of cryptographic transactional ledgers chained together, plus one particular approach to timestamping this block chain that uses a proof-of-work method of consensus. I’ve always thought of blockchain and mining as separate innovations.
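
To illustrate that separation, here is a toy sketch (nothing like Bitcoin's real data structures, difficulty, or consensus rules): blocks are chained together by hashes, while a distinct proof-of-work loop "timestamps" each block by searching for a nonce.

```python
import hashlib, json

DIFFICULTY_PREFIX = "0000"   # toy difficulty, far easier than real mining

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash: str, transactions: list[str]) -> dict:
    """Chain a new block to prev_hash, then search for a nonce (proof-of-work)."""
    block = {"prev": prev_hash, "txs": transactions, "nonce": 0}
    while not block_hash(block).startswith(DIFFICULTY_PREFIX):
        block["nonce"] += 1
    return block

genesis = mine("0" * 64, ["coinbase payout"])
block_1 = mine(block_hash(genesis), ["alice pays bob", "bob pays carol"])

# The chain of hashes is the ledger structure; the nonce search is the
# separate proof-of-work step that makes rewriting history expensive.
print(block_1["prev"] == block_hash(genesis))  # True
```

The point of the sketch is only that the hash-chained ledger and the nonce search are separable pieces, which is the distinction drawn above.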

To support this separation for your article, I have one more quote to offer you from Nick Szabo:

Instead of my automated market to account for the fact that the difficulty of puzzles can often radically change based on hardware improvements and cryptographic breakthroughs (i.e. discovering algorithms that can solve proofs-of-work faster), and the unpredictability of demand, Nakamoto designed a Byzantine-agreed algorithm adjusting the difficulty of puzzles. I can’t decide whether this aspect of Bitcoin is more feature or more bug, but it does make it simpler.

As to your question of when the community first started using the word consensus, I am not sure. The cryptographic company I founded in 1988 that eventually created the reference implementation of SSL 3.0 and offered the first TLS 1.0 toolkits was named “Consensus Development”, so my memory is distorted. To me, the essential problem has always been how to solve consensus. I may have first read about it in “The Ecology of Computation”, published in 1988, which predicted many distributed computational approaches that are only becoming possible today, and which mentions among other things such concepts as Distributed Scheduling Protocols, Byzantine Fault-Tolerance, Computational Auctions, etc. But I also heard it in various science fiction books of the period, so that is why I named my company after it.


The Four Kinds of Privacy

(This article has been cross-posted in Medium)

Ssh by Katie Tegtmeyer CC-BY (crop2)

Privacy is hitting the headlines more than ever. As computer users are asked to change their passwords again and again in the wake of exploits like Heartbleed and Shellshock, they're becoming aware of the vulnerability of their online data — a susceptibility that was recently verified by scores of celebrities who had their most intimate photographs stolen.

Any of us could have our privacy violated at any time… but what does that mean exactly?

I think that's a tricky question, and I say that with decades of experience in the privacy and security fields. In the early '90s I helped with anti-Clipper Chip activism and supported efforts to allow free export of RSAREF cryptography tools such as PGP. I'm also the co-author of the internet SSL standard, now the most broadly deployed security standard in the world (the "s" in "https"). I've been involved with several security companies — as an entrepreneur, a former CTO of Certicom, and most recently as the VP of Developer Relations at Blackphone.

In all that time, I've been very careful about my use of the word "privacy". For example, I founded a company that designed and sold the first commercial SSL toolkits. The technology clearly included privacy features, but our marketing said that SSL offered "message integrity", "confidentiality" and "authentication". We did not use the word "privacy".

This purposeful omission resulted from my feeling that the concept was too overloaded; if I promised privacy I might not be promising the same thing to everyone because people view the term in different ways.

What privacy means varies between Europe and the US, between libertarians and public figures, between the developed world and developing countries, and between women and men. I think it's possible to differentiate no fewer than four different kinds of privacy, each of which is important to different people — and I'm going to define them over the course of this article.

Prelude: Defining Public

However, before you can define privacy, you first have to define its opposite number: what's public. Merriam-Webster.com's first definition of "public" includes two different descriptions that are somewhat contradictory. The first is "exposed to general view: open" and the second is "well-known; prominent".

These definitions are at odds with each other because it's easy for something to be exposed to view without it being prominent. In fact, that defines a lot of our everyday life. If you have a conversation with a friend in a restaurant or if you do something in your living room with the curtains open or if you write a message for friends on Facebook, you presume that what you're doing will not be well-known, but it certainly could be open to general view.

So, is it public or is it private?

It turns out that this is a very old question. When talking about recent celebrity photo thefts, Kyle Chayka highlighted the early 20th-century case of Gabrielle Darley Melvin, an ex-prostitute who had been acquitted of murder. After settling quietly into marriage, Melvin found herself the subject of an unauthorized movie that ripped apart the fabric of her new life. Melvin sued the makers of the film but lost. She was told in the 1931 decision Melvin v. Reid: "When the incidents of a life are so public as to be spread upon a public record they come within the knowledge and into the possession of the public and cease to be private." So the California courts had one answer for what was public and what was private.

Melvin's case was one where public events had clearly entered the public consciousness. However today, more and more of what people once thought of as private is also escaping into the public sphere — and we're often surprised by it. That's because we think that private means secret, and it doesn't; it just means something that isn't public. Yet. Anil Dash recently discussed this on Medium and he highlighted a few reasons that privacy is rapidly sliding down this slippery slope of publication.

First, companies have financial incentives for making things "well-known" or "prominent". The 24-hour news cycle is forcing media to report everything, so they're mobbing celebrities and publishing conversations from Facebook or Twitter that people consider private. Meanwhile, an increasing number of tech companies are mining deep ores of data and selling them to the highest bidder; the more information that they can find, the more they can sell.

Second, technology is making it ridiculously easy to take those situations that are considered "private" despite being "exposed to general view" and making them public. It's easier than ever to record a conversation, or steal data, or photograph or film through a window, or overhear a discussion. Some of these methods are legal, some aren't, but they're all happening — and they seem to be becoming more frequent.

The problem is big enough that governments are passing laws on the topic. In May 2014, the European Court of Justice decreed that people had a "Right to Be Forgotten": they should be able to remove information about themselves from the public sphere if it's no longer relevant, making it private once more. Whether this was a good decision remains to be seen, as it's already resulted in the removal of information that clearly is relevant to the public sphere. In addition, it's very much at odds with the laws and rights of the United States, especially the free speech clause of the First Amendment. As Jeffrey Toobin said in a recent New Yorker article: "In Europe, the right to privacy trumps freedom of speech; the reverse is true in the United States."

All of this means that the line between public and private remains as fuzzy as ever.

We have a deep need for the public world: both to be a part of it and to share ourselves with it. However, we also have a deep need for privacy: to keep our information, our households, our activities, and our intimate connections free from general view. In the modern world, drawing the line between these two poles is something that every single person has to consider and manage.

That's why it's important that each individual define their own privacy needs — so that they can fight for the kinds of privacy that are important to them.

The First Kind: Defensive Privacy

The first type of privacy is defensive privacy, which protects against transient financial loss resulting from information collection or theft. This is the territory of phishers, conmen, blackmailers, identity thieves, and organized crime. It could also be the purview of governments that seize assets from people or businesses.

An important characteristic of defensive privacy is that any loss is ultimately transitory. A phisher might temporarily access a victim's bank accounts, or an identity thief might cause problems by taking out new credit in the victim's name, or a government might confiscate a victim's assets. However, once a victim's finances have been lost, and they've spent some time clearing the problem up, they can then get back on their feet. The losses also might be recoverable — or if not, at least insurable.

This type of privacy is nonetheless important because assaults against it are very common and the losses can still be very damaging. The latest Bureau of Justice report says that 16.6 million Americans were affected in 2012 by identity theft alone, resulting in $24.7 billion being stolen, or about $1,500 per victim. Though most victims were able to clear up the problems in less than a day, 10% had to spend more than a month.

Though defensive privacy is usually violated by illegal acts, US courts haven't always recognized the need for privacy of financial records. For example, in the 1976 case United States v. Miller — which was about defrauding the government of whiskey tax — the Supreme Court declared that an individual's financial records at a bank were not "private papers" but instead the "business records of the banks".

It used to be that the biggest danger to defensive privacy was someone digging through a victim's trash for old bank and credit card statements. However, today the interconnectedness of the world is making it easier than ever to trick someone out of their financial information. Phishing emails are growing increasingly sophisticated, while social scammers are taking over Facebook and email accounts to pretend to be friends in distress in a foreign country. The newest fad seems to be a resurrection of an old con: cold calling victims and pretending to be Microsoft support in the hope of taking over a victim's computer.

In other words, defensive privacy is more important than ever — which is unfortunately a trend that crosses all of the kinds of privacy in recent years.

The Second Kind: Human Rights Privacy

The second type of privacy is human rights privacy, which protects against existential threats resulting from information collection or theft. This is the territory of stalkers and other felonious criminals, as well as authoritarian governments and other persons intent on doing damage to someone, whether for personal reasons or for his or her beliefs or political views.

An important characteristic of human rights privacy is that violations usually result in more long-lasting losses than was the case with defensive privacy. Most obviously, authoritarian governments and hardline theocracies might imprison or kill victims while criminals might murder them. However, political groups could also ostracize or blacklist a victim.

In Europe, human rights privacy is one of the most crucial sorts of privacy. This comes from the continent's history: the Netherlands in the 1930s had a very comprehensive administrative census and registration of its own population that collected a lot of personal information. The Nazis captured this data within the first three days of occupation. "The Dark Side of Numbers" by William Seltzer and Margo Anderson (published in Social Research Vol. 68 No. 2 — Summer 2001) reports the inevitable result: Dutch Jews had the highest death rate (73 percent) of Jews residing in any occupied Western European country — far higher than the death rate among the Jewish population of Belgium (40 percent) or France (25 percent). Even the death rate in Germany was lower than in the Netherlands, because the Jews there had avoided registration.

However, the issue of human rights privacy didn't end there. As Oxford professor Viktor Mayer-Schönberger explained in Jeffrey Toobin's New Yorker article, Europe continued to experience serious human rights attacks during the Cold War. Mayer-Schönberger states: "With the Stasi, in East Germany, the task of capturing information and using it to further the power of the state is reintroduced and perfected by the society. So we had two radical ideologies, Fascism and Communism, and both end up with absolutely shockingly tight surveillance states."

In the United States, citizens are shielded from governmental abuses of human rights privacy by the Fourth Amendment, which protects against unreasonable searches and seizures. As a result, I personally felt good about human rights privacy in the US — until the presidency of George W. Bush and his first attorney general, John Ashcroft. Under their watch, the Patriot Act dramatically increased the ability of the US government to surveil its citizens, with some of the worst mandates of that law making it illegal for the subject of the surveillance to even talk about being targeted.

Unfortunately, individuals aren't the only ones facing the governmental breach of human rights privacy in the US. Ever since the 1972 Supreme Court case Branzburg v. Hayes, the press has also been fighting against governmental demands to reveal confidential sources and against direct governmental assaults on sites like WikiLeaks. Some US states have created shield laws to protect the press's privacy, but they're often not sufficient.

Today journalist James Risen faces jail for protecting a source. In a court filing, Risen said "…this court should reconsider Branzburg because in recent years, subpoenas to journalists seeking the identity of confidential sources have 'become commonplace.'" The Obama administration responded by stating, "…reporters have no privilege to refuse to provide direct evidence of criminal wrongdoing by confidential sources." The US Supreme Court refused Risen's appeal in June 2014, leaving him on the hook for the information or jail time. Meanwhile, other journalists report that surveillance is increasingly intimidating sources, making it hard to report on government activities.

On the bright side, these American governmental excesses are causing some corporations to consider how they can bolster the cause of human rights privacy: with the release of iOS 8, Apple announced that they would no longer have access to users' mobile passcodes, making it nearly impossible for them to turn information over to the government. Google quickly followed suit by making Android encryption easier for the public to access. Predictably, the US government isn't happy, and security experts are nonplussed by their complaints.

Though governments are the biggest actors on the stage of human rights breaches, individuals can also attack this sort of privacy — with cyberbullies being among the prime culprits. Though a bully's harassment may involve only words, the attackers frequently release personal information about the person they're harassing, which can cause the bullying to snowball. For example, Jessica Logan, Hope Sitwell, and Amanda Todd were bullied after their nude pictures were broadcast, while Tyler Clementi was bullied after a hall-mate streamed video of Clementi kissing another young man. In the worst cases, cyberbullying ends in suicide, showing the existential dangers of these privacy breaches.

Even if threats aren't taken seriously, cyberbullying doubles back to the issue of defensive privacy, as revealed by Amanda Hess in her article "Why Women Aren't Welcome on the Internet" where she wrote: "Threats of rape, death, and stalking can overpower our emotional bandwidth, take up our time, and cost us money through legal fees, online protection services, and missed wages."

Hess is just one of a long list of women who have been targeted by cyberbullies online. Programmer Kathy Sierra was largely driven out of the tech industry back in 2007 for being too influential; she recently wrote about how this is part of a long-standing pattern of purposeful violation.

This year, a study by Pew Research revealed that about 25% of women aged 18–24 have been stalked or sexually harassed online — numbers far in excess of their male peers. It was a timely study because Zoe Quinn, Anita Sarkeesian, and Brianna Wu all came under cyberbullying assault around the same time due to their positions as female leaders in the gaming field. When actress and gamer Felicia Day dared to speak out against this problem, her home address and personal email were revealed in a "doxing" within an hour, showing how fragile privacy is on the modern internet.

It's not surprising that some of my own female associates are unwilling to post pictures or other personal information online for fear of just this sort of attack. And we're relatively lucky in the western world: in a hardline theocracy like Afghanistan, a woman's need for privacy can be even stronger, with many women not willing to use their real names online — or a female name at all.

Individuals taking part in the democratic process can also come under a personal onslaught of this sort. This has gained prominence in California in recent years as numerous people have been called to task over their contributions to Prop 8, an anti-gay-marriage ballot measure that is now widely considered homophobic and bigoted. Brendan Eich, Mozilla's CEO, was forced out of his job as a result of such a contribution. The fact that his donation was made six years earlier, in a very different political climate, shows how long-lasting this sort of damage can be. On the flip side, the NRA has also been known to go after public figures with anti-gun views. Political repercussions of this sort are a more complex issue than simple cyberbullying, but they demonstrate another situation where individuals might wish they could assert their human rights privacy to shield themselves from damage.

The Third Kind: Personal Privacy

The third type of privacy is personal privacy, which protects persons against observation and intrusion; it's what Judge Thomas Cooley called "the right to be let alone", as cited by future Supreme Court Justice Louis Brandeis in "The Right to Privacy", which he co-wrote with Samuel Warren for the Harvard Law Review of December 15, 1890. Brandeis' championing of this sort of privacy would result in a new, uniquely American right that has at times been attributed to the First Amendment (giving freedom of speech within one's house), the Fourth Amendment (protecting one's house from search & seizure by the government), and the Fifth Amendment (protecting one's private house from public use). This right can also be found in state constitutions, such as the Constitution of California, which promises "safety, happiness, and privacy".

When personal privacy is breached we can lose our right to be ourselves. Without Brandeis' protection, we could easily come under Panoptic observation where we could be forced to act unlike ourselves even in our personal lives. Unfortunately, this isn't just a theory: a recent PEN America survey shows that 1 in 6 authors already self-censor due to NSA surveillance. Worse, it could be damaging: another report shows that unselfconscious time is restorative and that low self-esteem, depression, and anxiety can result from its lack.

Though Brandeis almost single-handedly created personal privacy, it didn't come easily. The Supreme Court refused to protect it in a 1928 wire-tapping case called Olmstead v. United States; in response, Brandeis wrote a famous dissenting opinion that brought his ideas about privacy into the official record of the Court. Despite this initial loss, personal privacy gained advocates in the Supreme Court over the years, who often referred to Brandeis' 1890 article. By the 1960s, Brandeis' ideas were in the mainstream and in the 1967 Supreme Court case Katz v. United States, Olmstead was finally overturned. Though the Court's opinion said that personal privacy was "left largely to the law of the individual States", it had clearly become a proper expectation.

The ideas that Brandeis championed about personal privacy are shockingly modern. They focused on the Victorian equivalent of the paparazzi, as Brandeis made clear when he said: "Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life." He also carefully threaded the interface between public and private, saying, "The right to privacy does not prohibit any publication of matter which is of public or general interest."

Today, personal privacy is the special concern of the more libertarian-oriented founders of the Internet, such as BitTorrent founder Bram Cohen, who demanded the basic human rights "to be free of intruders" and "to privacy". Personal privacy is more of an issue than ever for celebrities and other public figures, but attacks upon it also touch the personal lives of average citizens. It's the focus of the "do not call" registry and other telemarketing laws and of ordinances outlawing soliciting and trespass. It's the right at the heart of doing what we please in our own homes — whether it be eating dinner in peace, discussing controversial politics & religion with our close friends, or playing sexual games with our partners.

Though personal privacy has grown in power in America since the 1960s, it's still under constant attack from the media, telemarketing interests, and the government. Meanwhile, it's not an absolute across the globe: some cultures, such as those in China and parts of Europe, actively preach against it — advocating that community and sharing trump personal privacy.

The Fourth Kind: Contextual Privacy

The fourth type of privacy is contextual privacy, which protects persons against unwanted intimacy. This is what danah boyd calls "The Ickiness Factor":

"Ickiness is the guttural reaction that makes you cringe, scrunch your nose or gasp 'ick' simply because there's something slightly off, something disconcerting, something not socially right about an interaction…. The ickiness factor is tightly coupled with issues [of] feeling vulnerable or getting the sense that someone else is vulnerable because of a given situation."

Failure to defend contextual privacy can result in the loss of relationships with others. No one presents the same persona in the office as they do when spending time with their kids or even when meeting with other parents. They speak different languages to each of these different "tribes", and their connection to each tribe could be at risk if they misspoke by putting on the wrong persona or speaking the wrong language at the wrong time. Worse, the harm from this sort of privacy breach may also be increasing in the age of data mining, as bad actors can increasingly put together information from different contexts and twist it into a "complete" picture that might be simultaneously damning and completely false.

Though it's easy to understand what can be lost with a breach of contextual privacy, the concept can still be confusing because contextual privacy overlaps with other sorts of privacy; it could involve the theft of information or an unwelcome intrusion, but the result is different. Where theft of information might make you feel disadvantaged (if a conman stole your financial information) or endangered (if the government discovered you were whistleblowing), and where an intrusion might make you feel annoyed (if a telemarketer called during dinner), a violation of contextual privacy instead makes you feel emotionally uncomfortable and exposed — or as boyd said, "vulnerable".

I believe that this feeling of vulnerability often comes from an inappropriate level of intimacy. Some of the more professional social networks seem particularly prone to it: I felt uncomfortable when I realized that my professional colleagues on one of the earliest social networks could see that I was in a committed relationship, and in turn I could see that some of them were in open marriages. Not that I had any problem with the information, but did I "need to know" in the context of business?

You probably won't find any case law about contextual privacy, because it's a fairly new concept and because there's less obvious harm. However, social networks from Facebook and LinkedIn to Twitter and LiveJournal are all facing contextual privacy problems as they each try to become the home for all sorts of social interaction. This causes people — in particular women and members of LGBT communities — to try to keep multiple profiles, only to find "real name" policies working against them.

Meanwhile, some social networks make things worse by creating artificial social pressures to reveal information, as occurs when someone tags you in a Facebook picture or in a status update. To date, Google+ is one of the few networks to attempt a solution by creating "circles", which can be used to precisely specify who in your social network should get each piece of information that you share. However, it's unclear how many people use this feature. A related feature on Facebook, called "Lists", is rarely used.
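
To see how this kind of audience scoping works in principle, here is a small, purely illustrative Python sketch of a per-context sharing model. The Profile class, its fields, and the circle names are hypothetical — this is not how Google+ circles or Facebook Lists are actually implemented — but it shows the basic idea: every post carries its own audience, so information shared in one context doesn't leak into another by default.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """A hypothetical per-context audience model, loosely in the spirit of circles/lists."""
    circles: dict[str, set[str]] = field(default_factory=dict)       # circle name -> member ids
    posts: list[tuple[str, set[str]]] = field(default_factory=list)  # (text, allowed circles)

    def share(self, text: str, *circles: str) -> None:
        """Attach an explicit audience (one or more circles) to each post."""
        self.posts.append((text, set(circles)))

    def feed_for(self, viewer: str) -> list[str]:
        """Show a post only if the viewer belongs to at least one circle it was shared with."""
        viewer_circles = {name for name, members in self.circles.items() if viewer in members}
        return [text for text, audience in self.posts if audience & viewer_circles]

me = Profile(circles={"colleagues": {"dana"}, "close friends": {"sam"}})
me.share("Conference talk accepted!", "colleagues", "close friends")
me.share("Relationship news", "close friends")
print(me.feed_for("dana"))   # ['Conference talk accepted!']
print(me.feed_for("sam"))    # both posts
```

The design choice that matters here is that the audience is chosen at the moment of sharing, per item, rather than being a single global "friends" bucket — which is exactly the contextual boundary that one-size-fits-all networks erode.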

If a lack of personal privacy causes you to "not be yourself", a loss of contextual privacy allows others to "not see you as yourself". You risk being perceived as "other" when your actions are seen out of context.

Case Studies

All four of these kinds of privacy can intersect. For example, some social networks allow you to reveal your sexual orientation, which could be used secretly by an employer to discriminate against you (defensive privacy) or by a future Ashcroftian government to violate your civil rights (human rights privacy). It might lead to you being bothered at home by people who either agree or disagree with your orientation (personal privacy), and it's often an inappropriate revelation for casual professional acquaintances (contextual privacy).

Studies of current events offer more examples.

The First Case: Celebrity Hacking

The recent unauthorized release of nude pictures of Jennifer Lawrence and others demonstrates the complex permutations of privacy.

To start with, it highlighted the problematic divide between public and private. There's no doubt that the pictures were intended to be private, and unlike some of the more troublesome uses of recording devices, these private situations weren't made public through the uncomfortable use of surveillance technology. Instead, the photos were publicized following theft. Unfortunately, now that they're in the public eye, they're unlikely to go away, ever. In other words, the genie's out of the bottle. All that the celebrities can do is make the photos less accessible by asserting their copyright (if they can get search engines to de-link the information).

Perhaps Europe's Right to Be Forgotten will someday help to remove this type of occurrence from the internet. Or perhaps something like California's recent SB 255 might make it illegal to distribute nude photos without the model's permission. Alternatively, it could be social mores that win out, if we follow "the ethics of looking away", as advocated by Jessica Valenti; Jennifer Lawrence herself agreed there are ethical problems, saying: "Anybody who looked at those pictures, you're perpetuating a sexual offense." However, for now, the pictures are out there and being viewed.

When considering the four kinds of privacy, these celebrities primarily suffered from an unwanted intimacy with the entire world (contextual privacy). Lawrence explained the problem succinctly, saying: "I didn't tell you that you could look at my naked body."

Celebrities and other high-profile victims might also face financial repercussions (defensive privacy). Ariana Grande was one of the younger victims of the theft and one of the few to risk the Streisand Effect by saying her photos weren't authentic. She should never have had to make such a statement one way or the other, but one understands why she would: as a teen pop star recently featured on a pair of Nickelodeon shows, she risks the greatest fiscal damage of any of the victims. But even Lawrence worried, stating: "I didn't know how this would affect my career."

Kevin Sullivan offered some of the most adroit commentary on the theft when he said "Don't call it a scandal". By calling this a scandal and smugly shaking our heads, we're blaming the victims, not the criminals. Worse, we're embarking on a slippery slope where we suggest that people take away their own freedoms through self-censorship. By calling it a scandal, we are saying that even in their own homes, using their own cameras, these people shouldn't be themselves (personal privacy).

The Second Case: #GamerGate

The widespread and ongoing harassment of game designer Zoe Quinn is a somewhat frightening example because of its intense focus on the repeated privacy violations of a single person.

It started with a blog post by Quinn's ex-boyfriend that accused her of infidelity. This once more highlights the troubled modern divide between public and private. Though some countries might prohibit the release of information of this sort under their defamation laws, it seems unlikely that the US's slander or libel laws would do so. Despite the likely legality of that initial post, the public still learned inappropriate information about Quinn (contextual privacy).

However, worse privacy violations followed: the online community 4chan doxed Quinn by collecting considerable semi-public information about her (more contextual privacy), which allowed online harassment to evolve into threatening phone calls (human rights privacy). The information revealed during the harassment included the fact that Quinn had dated a game industry journalist, which could impact Quinn's ability to sell games in the future or even to get reviewed (defensive privacy). Obviously, the continual harassment has also violated Quinn's personal space and her right to be let alone (personal privacy).

Following the initial assaults on Quinn, the #GamerGate mob has moved on to harass other female game designers, including Anita Sarkeesian and Brianna Wu, as well as gamer and actress Felicia Day — all using the same privacy-busting methodology of cyberbullying and doxing. The movement may have reached its height when terrorist threats were made against a talk planned by Sarkeesian, forcing her to cancel it.

Twenty years ago, it was difficult for groups of individuals to work together to violate the privacy rights of an individual; there also wasn't a medium that could be used to publish personal details about someone unless there was a clear public interest. The cyberbullying stemming from #GamerGate shows that's all changed in the modern age — which is why privacy is such a major concern for so many people on the internet.

The Third Case: The NSA

Though it's become very easy for individuals to violate the privacy of celebrities like Jennifer Lawrence or individuals like Zoe Quinn, the government has also been stepping up its own intrusions in the 21st century, especially since 9/11. In the United States, it's the NSA who's been at the heart of the privacy controversies (with a little help from their friends in the FBI).

Many of the new privacy-busting powers of these organizations originated with the USA Patriot Act, but follow-up laws like the Protect America Act of 2007 and the FISA Amendment Act of 2008 are equally invasive. We only know how these laws are actually being utilized thanks to the whistle-blowing of Edward Snowden and others.

Most obviously, these types of laws have allowed the collection of private data in large quantities (human rights privacy). We now know that the NSA uses a previously secret program called PRISM to coordinate data collection using court orders and that they also appear to have broken into communication links to tap major data centers. There are claims that they also exploited the Heartbleed bug for years in order to collect data, though they deny this. The human rights repercussions of this multi-pronged, largely unfettered data collection are mind-boggling.

One of the most worrisome issues with this sort of large-scale data collection is that it can result in mistakes when something is taken out of context (contextual privacy). Someone might feel like they have nothing to fear from the government — until they're arrested for something they said while roleplaying in an MMORPG. Unfortunately, this isn't an unfounded concern: Snowden's whistleblowing revealed that the NSA spies on video game chats!

Meanwhile, this type of contextual disconnect has already caused problems in other legal contexts, such as when firefighter Lt. Philip Lyons was falsely charged with arson based on the purchase of a fire-starter that was revealed through loyalty card records. Reports similarly indicate that the FBI was tracking the sale of Middle Eastern foods in San Francisco as a terrorist-finding tool.

The European Union has much more mature protections against data collection of this sort, thanks to the Data Protection Directive of 1995, which is now in the process of being reformed. Sadly, the US has only responded in a piecemeal way that largely depends on private corporations' privacy policies.

Returning to the US, we find that many of these data collection and spying laws have been wrapped up with authoritarian regulations that go after individuals — such as Ladar Levison, founder of Lavabit, one of Edward Snowden's email providers.

Levison wrote an article about his interactions with the FBI that reads like an Orwellian parody. He described how the FBI served him papers seven times and forced him to appear in a court 1,000 miles from home without an attorney as part of their heavy-handed attempt to acquire Snowden's encryption keys. His company, Lavabit, was eventually shut down as a result. Because Levison tried to protect the privacy of one of his customers, he was threatened with jail and compelled to silence (personal privacy). Even Yahoo's CEO Marissa Mayer is personally worried about the trend. She says tech companies aren't talking more about the government's surveillance because they're afraid: "Releasing classified information is treason and you are incarcerated."

The issue of compelled silence arises from the National Security Letters authorized by the USA Patriot Act. It's particularly troublesome because it threatens democracy just as much as a lack of voting anonymity would. Because the Letters can't be discussed, there's no way to assess whether the government is doing a good job or not — and thus no way to punish a government at the voting booth if they're stepping over the line.

The Fourth Case: Americans in Afghanistan

My final case study is a personal one. I've regularly taught "Digital Influence" and "Using the Social Web for Social Change" classes in the Bainbridge Graduate Institute's MBA in Sustainable Systems program at Pinchot University. One of my students there was Luisa Walmsley.

While at BGI, Luisa met a woman who had co-founded Afghanistan's largest telecommunications company. Luisa later went to work at the company, then started her own consultancy doing operations and business development for Afghan media and technology companies — in the process helping women in Afghanistan to become better entrepreneurs. She's also been involved with Afghanistan's first-ever social media summit.

Unfortunately, being a female professional in Afghanistan can still be quite dangerous. Women have only been able to widely reenter the work force in the last decade, since the 2001 overthrow of the Taliban, and today they are sometimes still attacked or threatened for doing so. This makes Luisa's need for privacy important (human rights privacy).

Luisa's situation also demonstrates the sorts of divides you can find in a human rights privacy case. While in Afghanistan, Luisa follows basic security procedures to protect her privacy from both terrorists and the government — such as varying the times of day that she travels to the office. She's still willing to use social media, but she is careful: for example, occasionally lying about her location, not talking about her travel plans, or waiting days to post pictures that might reveal her whereabouts. However, Luisa has to be even more careful with the human rights concerns of the women she's working with, who could be assaulted or killed for their decision to make a career for themselves.

Even outside of her work proper, Luisa has run into other sorts of privacy issues in Afghanistan. She was once asked by an Afghan government agency to remove a Facebook post that reprinted an international organization's security alert concerning election security, which she'd reposted in order to warn her friends. She's also received phone harassment (personal privacy) that would be unthinkable in the US: in Afghanistan, men cold call phone numbers and, when they hear a woman's voice on the other end, they continuously call back. Luisa was often able to stop the calls, but some of her female friends would receive over 200 harassing phone calls of this sort a day.

Unfortunately, Luisa's work also runs straight back into the privacy problems originating with the NSA. In 2013, the NSA admitted that they tracked phone calls of people "two to three hops" away from suspected terrorists. However in Afghanistan, everyone working in human rights or in the media is going to have some direct contact with the Taliban. Luisa surely has (one hop), so as soon as she calls or emails me (two hops), I'm suddenly in the NSA jackpot (more human rights privacy). And how does the NSA get that information? In 2014 we learned that they're recording nearly all domestic and international phone calls in Afghanistan (massive human rights privacy).

Because we know that the FBI once thought it was relevant to track falafel sales, and because I gained an appreciation for Middle Eastern music 15 years ago, I could easily become a serious terror suspect myself (yet more human rights privacy). Worse, if you're reading this article (three hops), you now might be in the NSA's sights too. Hope you don't like hummus or baba ghannouj!
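
For a sense of how quickly "two to three hops" expands, here is a small, purely illustrative Python sketch: a breadth-first search over a toy call graph with made-up names. It is not how the NSA's contact chaining actually works, but it shows the mechanic — and if each person has on the order of a hundred contacts, three hops can sweep in roughly a hundred cubed, or a million, people.

```python
from collections import deque

def contacts_within(graph: dict[str, set[str]], target: str, max_hops: int) -> set[str]:
    """Everyone reachable from `target` within `max_hops` calls or emails (breadth-first search)."""
    seen, frontier = {target}, deque([(target, 0)])
    while frontier:
        person, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for contact in graph.get(person, set()):
            if contact not in seen:
                seen.add(contact)
                frontier.append((contact, hops + 1))
    return seen - {target}

# Toy call graph: a suspect talks to a journalist, who talks to a consultant, who talks to me.
calls = {
    "suspect": {"journalist"},
    "journalist": {"suspect", "consultant"},
    "consultant": {"journalist", "me"},
    "me": {"consultant", "you"},
    "you": {"me"},
}
print(contacts_within(calls, "suspect", 3))   # {'journalist', 'consultant', 'me'}
```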

Conclusion

Privacy is important — perhaps more so than ever due to the assaults by con artists, governments, the press, and social media in the 21st century. However, in order to protect your own privacy, you must define your terms.

First, it requires understanding the difference between private and public, considering how to prevent information from escaping from the one to the other, and thinking about how that might interact with free speech.

Second, it requires understanding the different kinds of privacy and figuring out which one is most relevant to you.

• Is it defensive privacy? Where you worry about conmen, phishers, or other thieves stealing from you? Do you fear financial loss?

• Is it human rights privacy? Where you're threatened by the actions of a government or other entity? Do you fear the loss of your life or your freedom?

• Is it personal privacy? Where you just want to be left alone? Do you fear the loss of the freedom to be yourself?

• Is it contextual privacy? Where you want to partition different portions of your life? Do you fear the loss of your relationships?

Most people will have some concerns about multiple sorts of privacy, but there will probably be one or two that strike a chord.

So what's important to you?

About the Author

Christopher Allen is a technologist and entrepreneur with a long history of work in the privacy and security fields. As the founder of Consensus Development, he led and co-authored the IETF TLS (SSL 3.0) standard and sold the first commercial SSL toolkits. He's also the former CTO of the cryptographic security company Certicom (now part of Blackberry). Most recently he was VP of Developer Relations for the secure smartphone company Blackphone, building on his experience in the mobile software industry. He has produced a number of iOS apps, speaks at conferences, and organizes developer hackathons at events such as iOSDevCamp. Christopher also teaches Technology Leadership in the MBA in Sustainable Systems program at Bainbridge Graduate Institute at Pinchot University.

This article is based on an original post written in 2004 in Christopher's blog Life With Alacrity, and presentations on the topic of privacy used in his classes and posted at slideshare.net.

Image Credits

Cover Image is cropped from Katie Tegtmeyer's "Talk Shows on Mute" from Flickr, and is licensed CC-BY.

Public is cropped from Mike Baird's "July 4th Parade in Cayucos, CA" from Flickr, and is licensed CC-BY.

Defensive Privacy is cropped from Christophe Verdier's "Day 342 — Hacker" from Flickr, and is licensed CC-BY-NC.

Human Rights Privacy is from a 1942 photo of Anne Frank, and is offered under fair use.

Personal Privacy is cropped from DonkeyHotey's "Don't Tread on Me" on Flickr, and is licensed CC-BY-SA.

Contextual Privacy is from Meg Willis's "Post Secret #2" from Flickr and is licensed CC-BY.

Portrait of Jennifer Lawrence is by Gage Skidmore "Jennifer Lawrence SDCC 2013 X-Men" from Japanese Wikipedia and is licensed CC-BY-SA.

Self Portrait of Zoe Quinn is by Zoe Quinn from Wikipedia and is licensed CC-BY-SA.

NSA Seal is converted from an SVG image on Wikipedia and is in the public domain.

Portrait of Ladar Levison is cropped from a photo by Joe Paglieri from CNN Money, and is offered under fair use.

Luisa Walmsley at the Afghan Social Media Summit is cropped from a photo by Luisa Walmsley and is licensed CC-BY.

Appendices & Bibliography

See the following post The Four Kinds of Privacy: Appendices & Bibliography.


10 Design Principles for Governing the Commons

In 2009, Elinor Ostrom received the Nobel Prize in Economics for her "analysis of economic governance, especially the commons".

Since then I've seen a number of different versions of her list of 8 principles for effectively managing against the tragedy of the commons. However, I've found her original words — as well as many adaptations I've seen since — to be not very accessible. Also, since the original release of the 8 principles, further research has resulted in updates and clarifications to her original list.

This last weekend, at two very different events — one on the future of working and the other on the future of the block chain (used by technologies like bitcoin) — I wanted to share these principles. However, I was unable to effectively articulate them.

So I decided to take an afternoon to re-read the original, as well as more contemporary adaptations, to see if I could summarize them into a list of effective design principles. I also wanted to generalize them for broader use, as I often apply them to everything from how to manage an online community to how a business should function with competitors.

I ended up with 10 principles, each beginning with a verb-oriented commandment, followed by the design principle itself.

  1. DEFINE BOUNDARIES: There are clearly defined boundaries around the common resources of a system from the larger environment.
  2. DEFINE LEGITIMATE USERS: There is a clearly defined community of legitimate users of those resources.
  3. ADAPT LOCALLY: Rules for use of resources are adapted to local needs and conditions.
  4. DECIDE INCLUSIVELY: Those using resources are included in decision making.
  5. MONITOR EFFECTIVELY: There exists effective monitoring of the system by accountable monitors.
  6. SHARE KNOWLEDGE: All parties share knowledge of local conditions of the system.
  7. HOLD ACCOUNTABLE: Have graduated sanctions for those who violate community rules.
  8. OFFER MEDIATION: Offer cheap and easy access to conflict resolution.
  9. GOVERN LOCALLY: Community self-determination is recognized by higher-level authorities.
  10. DON'T EXTERNALIZE COSTS: Resource systems embedded in other resource systems are organized in and accountable to multiple layers of nested communities.

I welcome your thoughts on ways to improve on this summarized list. In particular, in #10 I'd like to find a better way to express its complexity (the original is even more obtuse).


Mini Resume Card for Conference Season

Between the busyness of the March/April conference season and leaving Blackphone, I've run out of business cards. Rather than rush to print a bunch of new ones, I've created this mini-resume for digital sharing, along with a two-sided Avery business card version that I am printing on my laser printer and sharing.

Not as pretty as my old Life With Alacrity cards, but effective in getting across the diversity of my professional experience and interests.

Christopher Allen Micro Resume

As someone who teaches Personal Branding in my courses at [email protected], I always find it hard to practice what I preach and ask for advice and suggestions. In this case I'm trying to tame my three-headed Cerberus of a profession, with a Privacy/Crypto/Developer Community head, an Innovative Business Educator/Instructional Designer head, and a Collaborative Tools, Processes, Games and Play head. All come tied together in my body as ultimately being about collaboration, but it is hard to explain some of the correspondences.


Deep Context Shared Languages

If you consider yourself a futurist or an agent of change, you should read the article from The Atlantic "Shaka, When the Walls Fell". Yes, it uses a Star Trek episode as an allegory, and yes, it is a bit confusing, with a lot of complexity and depth, but it is a good introduction to a topic I care about — Deep Context Shared Languages.

I consider one of my missions in life to be to "create tools that allow people to communicate about complexity". There are many problems to address and many possible solutions to those problems. My Infinite Canvas Suite app tries to address problems of linearity. Another way to manage complexity is facilitating the creation and learning of Shared Languages.

Some of what a Shared Language does is tribal — it has an elemental power to help a group form. But what a shared language also does is allow a group to take shortcuts. For instance, with the Group Works Deck Pattern Language, I can say the phrase "Viewpoint Shift" and practitioners know that I mean "Step from your usual perspective into another in order to better understand someone, shift energy, reframe meanings, open up new ideas, or simply see a situation with new eyes." But they also know more than that — they can connect that concept to their own experience of doing this practice.

The first time I was introduced to this concept was by expert facilitator and futurist Matt Taylor, who has in his communities a Shared Language concept called a "Glass Bead Game", which comes from the novel by Hermann Hesse. As Wikipedia cites it:

"The Glass Bead Game is "a kind of synthesis of human learning" in which themes, such as a musical phrase or a philosophical thought, are stated. As the Game progresses, associations between the themes become deeper and more varied."

When you know the shared language of Matt's community, you can express concisely "In the SOLUTION BOX I am at INSIGHT, PHILOSOPHY and SCHEMATIC DESIGN." Which means, in other words, "I have a mature idea that I feel is a synthesis of all the challenges inherent in the project, I can talk about it with some clarity and express it as a visual solution. I have done a lot of work, yet I do not have a buildable design." Someone familiar with the language can then respond in kind, rather than coming from a different perspective in the solution box and thinking that I have built something that is ready for production.

Sometimes hashtags are a more contemporary signal that a Deep Context may exist. For instance, just saying #GlassBeadGame is a challenge for someone to respond in kind with a model of their own. Hashtags like #Intrapreneuring or #FreedomToFail also have Deep Contexts within different communities. If someone says #Cynefin to me, I know that we share an understanding about the difference between "complicated" and "complex" and how to work with each.

So what makes for great deep context shared languages? I don't know, but this article touches on it. It is more than just metaphor, more than allegory, more than meme. But if we can figure out how to create and teach it, it may be a powerful tool for communicating about complexity.

KEYQUOTE: "Here we might distinguish between the invocation of a particular logic and the simulation of a creature, thing, or idea by replicating its image. The simulation of life in art often concerns the reproduction of surfaces: in painting, the appearance of form, perspective, or the rendition of light; in literature the appearance of character or event; in photography and cinema the rendition of the world as it appears through optical element and upon emulsion or sensor; in theater the rendition of the behavior of a character or situation.

While all these examples “simulate” to various extents, they do so by a process of rendering. For example, the writer might simulate a convincing verbal intercourse by producing a credibility that allows the reader to take it as reality. Likewise, the actor might render a visible behavior or intonation that is suggestive of a particular emotion, event, or history that the theatrical or cinematic viewer takes as evidence for some unseen motivation.

A logic is also a behavior, but it is a behavior unlike the behavior of the literary or theatrical character, for whom behaving involves producing an outward sign of some deeper but abstracted motivation, understanding, or desire. By contrast logics are pure behaviors. They are abstract and intangible and yet also real.

If we pretend that “Shaka, when the walls fell” is a signifier, then its signified not by the fictional mythological character Shaka, nor the myth that contains whatever calamity caused the walls to fall, but the logic by which the situation itself came about. Tamarian language isn’t really language at all, but machinery."


Another post about Shared Language:


Freedom to Fail & Freakonomics podcast "Failure is Your Friend"


"Look Up" Your Strong Ties, Connect to Your Weak Ties


"The Really Big Questions" Podcast Asks "Why Do We Share?"