
Trust, Governance, and the Internet

By: Gary Metcalf | 30 Oct | 0 comments

 


A recent article by Bruce Schneier, author of the book Liars and Outliers, is titled “The Battle for Power on the Internet.” As he introduces the problem: “We’re in the middle of an epic battle for power in cyberspace. On one side are the traditional, organized, institutional powers such as governments and large multinational corporations. On the other are the distributed and nimble: grassroots movements, dissident groups, hackers, and criminals” (par. 1).

It is certainly a familiar theme at present. Reports about surveillance and cyber-security fill a great deal of the news, and there are constant tensions between security and convenience. It’s great to be able to access my bank accounts and my home security system through a smartphone, but doing so puts a great deal of information into places that I don’t control, and which others with the right expertise might access. Examples, from social media to online shopping and investing, go on and on.

A major difficulty is the tendency to deal with all things Internet as if they were new, as if nothing similar had ever existed before and there were no frames of reference from which to work. Schneier’s analogies are helpful in this respect. A theme that he uses throughout this particular article is that of feudal societies, from which he draws a connection in terms of security:

Feudal security consolidates power in the hands of the few. Internet companies, like lords before them, act in their own self-interest. They use their relationship with us to increase their profits, sometimes at our expense. They act arbitrarily. They make mistakes. They’re deliberately—and incidentally— changing social norms. Medieval feudalism gave the lords vast powers over the landless peasants; we’re seeing the same thing on the Internet (par. 8). 

In cyberspace, the number of “landless peasants” is growing exponentially. Not many years ago, if you wanted to access the Internet you needed at least a basic knowledge of computing. The shift is now rapidly away from computers per se, toward tablets and smartphones. These are devices for consuming and using rather than tools for participating in how things develop. Communities of open-source developers are still active, but most tend to work on variations of Linux operating systems. (A quick look at a Wikipedia entry shows Linux to hold only 1.65% of the desktop computing market. By contrast, according to the same entry, Linux runs on 20.4% of servers and 95.2% of supercomputers.) As more and more of our daily activities are accomplished with devices that access the Internet, relatively fewer people have the capacity to affect the systems on which most of us now rely. As Schneier explains:

The problem is that leveraging Internet power requires technical expertise. Those with sufficient ability will be able to stay ahead of institutional powers… Most people, though, are stuck in the middle. These are people who don’t have the technical ability to evade either the large governments and corporations, avoid the criminal and hacker groups who prey on us, or join any resistance or dissident movements. These are the people who accept default configuration options, arbitrary terms of service, NSA-installed back doors, and the occasional complete loss of their data. These are the people who get increasingly isolated as government and corporate power align. In the feudal world, these are the hapless peasants. And it’s even worse when the feudal lords—or any powers—fight each other (par. 21-22).

With respect to security threats posed by criminal activity, Schneier seems to treat the problem as inevitable, one inherent in human nature and in societies:

Very broadly, because of the way humans behave as a species and as a society, every society is going to have a certain amount of crime. And there’s a particular crime rate society is willing to tolerate. With historically inefficient criminals, we were willing to live with some percentage of criminals in our society. As technology makes each individual criminal more powerful, the percentage we can tolerate decreases (par. 24). 

His answer to this dilemma lies in the manner in which centralized bodies of governance are held accountable through public oversight:

Transparency and oversight give us the confidence to trust institutional powers to fight the bad side of distributed power, while still allowing the good side to flourish. For if we’re going to entrust our security to institutional powers, we need to know they will act in our interests and not abuse that power. Otherwise, democracy fails (par. 30). 

At present, those in control wield an increasing amount of power, which in Schneier’s view needs to be reconsidered:

Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad-hoc and one-sided. Those in power have a lot of rights, but increasingly few responsibilities or limits. We need to rebalance this relationship (par. 33). 

He concludes by calling for a much larger discussion about the Internet’s role in our lives, and in society:

We’re at the beginning of some critical debates about the future of the Internet: the proper role of law enforcement, the character of ubiquitous surveillance, the collection and retention of our entire life’s history, how automatic algorithms should judge us, government control over the Internet, cyberwar rules of engagement, national sovereignty on the Internet, limitations on the power of corporations over our data, the ramifications of information consumerism, and so on (par. 35).

Schneier’s critique of these issues is insightful and timely. While he draws on analogies from social forms of governance, he tends to stay focused on questions of applied technology. Many of the issues that Schneier raises in this particular context, however, parallel larger social issues.

Questions of governance and human nature, of individual skills and participation, run deeper than problems of technology, as important and pervasive as those are at present. As Schneier describes well, the Internet started as a new space, where old rules and problems were thought not to apply. It was something of a “greenfield,” to be developed in ways that many envisioned as utopian. But like every space that humans occupy, it quickly began to mirror both the good and the bad that pervade other aspects of life in our societies.

A new book, currently in press at Springer in Tokyo, and the first in our new Translational Systems Science series, explores many of these issues in a larger, worldwide context. For decades, systems thinkers have proposed ways to consciously design human social systems. Many were initially focused on work in organizations, such as the Idealized Design of Russ Ackoff, the Interactive Management of John Warfield, the Open Systems Theory of Fred and Merrelyn Emery, the Viable System Model of Stafford Beer, and the more recent Structured Dialogic Design of Aleco Christakis and his colleagues. Others, such as Bela Banathy’s Social Systems Design, were intended for broader applications from the beginning.

One of the themes explored in my opening chapter for this book is the tension between what we believe we want, in an idealized sense, and the ways in which our social systems continue to evolve in directions that we do not like. It would appear that we have made progress over the centuries. There are notable exceptions, but in most places, conditions for the average person are better than they were several hundred years ago. And yet dissatisfaction with the formal governments of our nation-states seems to continue to rise, as do concerns about environmental conditions, disparity of wealth, and more.

Over 2,400 years ago, Plato described his own vision of a utopian system of governance in “The Republic.” He proposed rule by an elite, but as servant-leaders to the state. They were to be selected, raised, and educated as the best of what society could produce. They were to have no personal wealth and no individual claim to family. They were to live for the good of the society.

As proposed in the book, there are many questions about the relevance and legitimacy of Plato’s ideas. But as we contemplate what we want, and how we might create our possible futures, it is an interesting place from which to begin. What if Plato’s philosopher-kings were to rule the Internet? Should we create such a guiding image? Do we think this is what we want, as the Internet so profoundly impacts the real worlds in which we live? And if not, what do we want?

 

Read other posts by Gary Metcalf
