by Dirk Helbing
Food for thought, to trigger debate
How can we protect companies and people from violence and exploitation? How can we open up information systems for everyone without promoting an explosion of malicious activities such as cyber-crime? And how can we support compliance with the rule sets on which self-regulating systems are built?
These challenges are addressed by Social Information Technology based on the concept of crowd security. A self-regulating system of moderators and the use of reputation systems are part of the concept. Today’s reputation systems, however, are not good enough. It is essential to allow for multiple quality criteria and diverse recommendations, which are user-controlled. This leads to the concept of “social filtering” as a basis of a self-regulating information ecosystem, which promotes diversity and innovation.
Better awareness can help to keep us from engaging in detrimental, unfair or unsustainable interactions. However, we also need mechanisms and tools to protect us from violence, destruction and exploitation. Therefore, can we build Social Information Technologies for protection? And what would they look like? The aim of such Social Information Technologies would be to avoid such negative interactions, to organize (collective) support, or to ensure fair compensation. Of course, we also need to address here the issues of cyber-security and of the world's peace-keeping approach. Let us start with the latter.
The "Balance of Threat" can be unstable
Like many, I was raised during the Cold War. Military threats were serious and real, but a third world war did not happen. This is generally considered a success of the “Balance of Threat” (or “Balance of Terror”): if one side were to attack the other, there would still be time to launch enough intercontinental nuclear warheads to eradicate the attacker. Given the "nuclear overkill", and assuming that no side would be crazy enough to risk elimination, nobody would start such a war.
However, what if this calculus is fundamentally flawed? There were quite a number of instances within a 60-year period where the world came frighteningly close to a third world war. The Cuban missile crisis is just the best known, but there were others that most of us never heard about (see "World War III" and "Risks of nuclear accidents is rising"). Perhaps we have survived the tragedy of nuclear deterrence by sheer chance?
The alarming misconception is that only shifts in relative power can destabilize a “Balance of Threat”. This falsely assumes that balanced situations, called equilibria, are inherently stable, which is often not the case. To illustrate, recall the simple experiment of circular vehicle flow discussed earlier (see video): although it is apparently not difficult to drive a car at constant speed together with other cars, the equilibrium traffic flow will break down sooner or later. If the density on the traffic circle exceeds a certain value, a so-called "phantom traffic jam" will form without any particular reason – no accident, no obstacle, nothing. The lesson here is that dynamical systems starting in equilibrium can easily get out of control, even if everyone has good information, the latest technology and the best intentions.
What if this is similarly true for the balance of threat? What if this equilibrium is unstable? Then it could suddenly and unexpectedly break down. I would contend that a global-scale war may start for two fundamentally different reasons. Consider a simple analogue from physics, in which a metal plate is pushed from two opposite sides. In the first situation, if either of the two sides holding the plate becomes stronger than the other, the metal plate will move; hence, the spheres of influence will shift. The second possibility is that both sides push equally hard, but so hard that the metal plate suddenly bends and eventually breaks.
Often, when an international conflict emerges, an action from one side triggers a counter-action from the opposing side. One sanction is met by a counter-measure, and so on. In this escalating chain of events, everyone pushes harder and harder without any chance for either side to gain the upper hand. In the physics example, the metal plate may bend or break. In practical terms, the nerves of a political leader or army general, for example, are not infinitely strong. Furthermore, not all events are under their control. Thus, under enormous pressure, things may keep escalating and suddenly get out of control, even if nobody wants this to happen, even if everyone just wants to save face. And this is still the most optimistic scenario, one in which all actors act rationally – for which there is, however, no guarantee.
In recent years, evidence has accumulated demonstrating that many wars in human history have occurred due to one of the two instabilities discussed above. The FuturICT blog on the Complexity Time Bomb described how war can result without aggressive intentions on either side. Furthermore, recent books have revealed that World War I resulted from an eventual loss of control – the outcome of a long chain of events, a domino effect that probably resulted from the second kind of instability. Moreover, the conflict in the Middle East has lasted for many decades, and it has taught us one thing: winning every battle does not necessarily win a war (as a former secret service chief is quoted in the movie “The Gatekeepers”). Similar lessons had to be learned from the wars in Afghanistan and Iraq. Therefore, a new kind of thinking about security is needed.
Limits of the sanctioning approach
Whilst sanctioning might create social order in some cases, it can cause instability and escalation in others. In the conflict in the Middle East, punishment is unsuccessful: the punishee does not accept the punishment, because values and culture differ. In such cases, the punishment is considered an undue assault and aggression, and a strong enough punishee will therefore strike back to defend his or her own values and culture. In this manner, a cycle of escalation ensues, in which both sides drive the escalation further, each fuelled by the conviction that they are doing the right thing. In such a situation, deterrence is clearly not an effective solution. In other words, it is not useful to organize security alliances among countries that share the same values, as this creates precisely the cultural blocs that are unable to exercise acceptable sanctioning measures and will therefore run into escalating conflicts that can result in wars. Instead, we need a new, symmetrical security architecture, suited for a multi-polar world and able to deal with cultural diversity. What we need are new strategies and a new kind of thinking. We also need a suitable approach in the face of newly emerging cyber-threats.
How to manage a multi-polar world?
In the past, we had a world with a few superpowers and blocs of countries forming alliances with them. Whenever one of these countries came under attack, it was under the protection of the others belonging to the same bloc. After World War II, the United States and the Soviet Union were the only superpowers remaining. With the breakdown of the Warsaw Pact, there remained just one superpower. China is now the strongest economic power in the world, and with Russia's comeback to world politics through the conflicts in Syria and Ukraine, we are now living in a multi-polar world. Such a world is no longer well controllable, as the "Three-Body Problem" suggests. This problem originally refers to the interaction of three celestial bodies, for which chaotic dynamics may result despite the simple conservation laws of mechanics. So, how much more unpredictable would a multi-polar world be?
It is becoming increasingly obvious that today no power in the world (political or economic) is strong enough to play the role of a world police, and that we need a new security architecture. If this were to be an architecture for the entire world, it would need a number of features: The classical security alliances (power blocs) would have to be overcome. In view of globalization, thinking from the perspective of nation states makes less and less sense. Furthermore, the concept of a "Balance of Threat" would have to be replaced by a "Network of Trust." The concept would have to be symmetric, not based on exclusive rights or veto power. It would have to be based on a set of shared values, and whoever violates them would feel the joint response of all the other countries in the world, independently of their classical alliances. For this approach to work well, mutual trust would have to grow, which would require more transparency and less secrecy.
In the emerging digital society, how much secrecy is still essential? I cannot give a definitive answer to this, but I do believe that secrecy at the right time, place and context may have some benefits (e.g. privacy). But how much opacity should public institutions acting on behalf of their citizens be allowed to have? And for what time period? Will the concept of secrecy be feasible at all in the future? Certainly, Wikileaks and the Snowden revelations raise the question of whether secrets can still be kept in a data-rich world. Moreover, secret services have often been accused of engaging in unlawful behaviour, which they claim is necessary to get an inside view of the closed circles of terrorism and organized crime. However, it has been stressed by some that such a strategy may actually promote terrorism and crime, and undermine the legitimacy of secret services, or even of the states or powers they serve. Finally, the effectiveness of secret services has often been questioned, as has the question of whether they do more good than harm.
What alternatives might we have to create a new security architecture? In this context, it is relevant to consider that more than 95 percent of the knowledge of secret services derives from public sources. As ever more activities in the world now leave a digital shadow and become traceable in real time, couldn't the largest part of public security be produced by public services rather than secret services? This does not necessarily mean closing down secret services, but opening up more information to wider circles. For example, why shouldn't specially qualified and authorized teams at public universities develop the algorithms and do the data mining to identify suspicious activities? Thanks to their higher transparency, they would be exposed to scientific criticism and public scrutiny, and would therefore be able to deliver higher-quality results. Given the many mistakes one can make when mining data, this would probably reduce the risk of wrong conclusions and other undesirable side effects. I am convinced that a step towards more transparency could greatly increase the perceived legitimacy of the security apparatus, as well as people's trust in the activities of their governments and states.
Perhaps some readers of this book will find the above proposal to build public security on public efforts absurd, but it's not. In many countries, the police have already started to involve citizens in their search for criminals, such as through public webpages displaying pictures of suspects, as well as via text messages and social media. "Crowd security" is just the next logical step. In fact, we might put this into a bigger picture. As we know, the Internet started off with ARPANET, a military communication network. Opening it up for civilian use eventually enabled the creation of the World Wide Web, which then triggered entirely new kinds of business and the digital economy. With the invention and ubiquity of social media, a large proportion of us have become part of a world-spanning network. The volume and dynamics of the related digital economy have become so extensive that the military and secret services often cannot keep up with it anymore and, hence, are increasingly buying into civilian business solutions. This clearly shows that a future concept to protect our society and its citizens must largely build on the power of civil society.
Crowd security rather than super powers
Let me give an example of a system, in which crowd security is surprisingly effective and efficient, and where it creates "civic resilience". In the late nineties, I spent some time as a visiting scientist at Tel Aviv University with Isaac Goldhirsch. At that time I read in the tourist guide that the average age of people in the country was 32, so I was prepared for the worst. But I found myself enjoying my stay in the Middle East immensely. Despite the daily threats, people seemed to have a positive attitude towards life.
One of the things that impressed me much was the way security at public beaches was achieved, all based on unwritten rules. Everyone knew that any bag at the beach might contain a bomb that could kill you. Bags with nobody around were considered particularly suspicious. But at a beach there are always some people swimming, so unattended baggage is normal. In this situation, people solve the problem by forming an invisible security network. Upon arriving at the beach, everyone becomes part of this informal network and implicitly takes responsibility for what is going on. That is, everyone scans the neighbourhood for suspicious activities. Who has newly arrived at the beach? What kind of people are they? How do they behave? Do they know others? Where do they go when leaving their baggage alone, etc.? In this way, it is almost impossible to leave a bag containing a bomb without arousing the suspicion of other people. To the best of my knowledge, there were relatively few bomb explosions at the beaches.
I would like to term the above distributed security activity "crowd security". We have recently learned about the benefits of "crowd intelligence," "crowd sourcing," and "crowd funding," so why not "crowd security"? In fact, the way societies establish and maintain social norms is very much based on a "peer punishment" of those who violate these norms. From raising eyebrows to criticizing others, or showing solidarity with someone who is being attacked, there is a lot one can do to support a fair coexistence of people. I recall that, during one of our summer schools on Lipari Island in Italy, one of our US speakers noted: "In my country, you cannot even distribute some flyers in a private mall without security stepping in, but nevertheless, there are shootings all the time. I am surprised that everything is so peaceful in the public space on this island: young people next to old ones, Italians next to all sorts of foreigners, and I have not even seen a single policeman all these days." Again, people seem to be able to sort things out in a constructive way.
How, then, can we generalize this to an international context? I have sometimes wondered whether having less power might work better than having more. When you have little power, you must be sensitive to what happens in your environment, and this will help you to adapt (thereby allowing self-regulation to work). However, if you have a lot of power, you might not make a sufficient effort to find a solution that satisfies as many people as possible. You would rather prioritize your own interests and force everybody else to adapt. But this would not create a system-optimal solution. As the example of cake-cutting suggests, the outcome wouldn't be fair, and therefore not sustainable in the long run. Why? Because if you were too powerful, you would no longer get honest answers, and sooner or later you would make really big mistakes that take a long time to recover from. For good reasons, Switzerland does not have a single leader. The role of the presidency is held for a short period and rotates. This is interesting, as it requires everyone to find a sustainable balance of interests that is supported by many and, hence, has higher legitimacy. But there are more arguments than this for a decentralized, bottom-up "crowd security" approach.
The immune system as prime example
One of the most astonishing complex systems in the world is our immune system. Even though we are bombarded every day by thousands of viruses, bacteria, and other harmful agents, our immune system is pretty good at protecting us, usually for five to ten decades. This is probably more effective than any other protection system we know. And there is another, even more surprising fact: in contrast to our central nervous system, the immune system is "decentrally organized". It is a well-known fact that decentralized systems tend to be more resilient. In particular, while targeted attacks or point failures can shut down a centralized system, a decentralized system will usually survive the impact of attacks and recover. This is one reason for the robustness of the Internet -- and also for the success of guerrilla defence strategies (whether we like this or not).
Turning enemies into friends
There is actually a further surprise: a major part of our healthy immune response is based on our digestive tract, which contains up to a million billion bacteria -- about ten times more than our body has cells. These bacteria make the contents of our food accessible to our body by splitting it into ingredients, while finding food for themselves in the process. The rich zoo of about a thousand different kinds of bacteria in us even forms an entire ecosystem, which fights dangerous intruding bacteria that do not match the needs of our body. Bacteria that were once our enemies have been turned into our allies through a symbiotic relationship that emerged through an evolutionary process. My friend and colleague Dirk Brockmann recently pointed out to me the really amazing level of cooperation which is the basis of all developed life and is now studied in the field of hologenomics. In fact, humans, too, have come up with tricky mechanisms encouraging cooperation. These are often based on exchange, such as trade, and on systems of mutual incentives, which promote coordination and cooperation. Social Information Technologies are intended to support this.
So why don't we build our societal protection system and the future Internet in a way that is inspired by our biological immune system? It appears that societies, too, have something like a basic immune system. The peer-to-peer sanctioning of deviations from social norms, which I already mentioned, is one example of this. We now witness Internet vigilantes or lynch mobs on the web, criticizing things that people find improper or distasteful. I acknowledge that lynch mobbing can be problematic and may violate human rights; this will require us to find a suitable framework. It seems that we are seeing here the early stage of the evolution of a new, social immune system. Rather than censoring or turning off social media, as in some countries, we should develop them further to make them compatible with our laws and cultural values. Then systems like these could provide useful feedback that would help our societies and economy to provide better conditions, products and services.
The question is: how do we best obtain a high level of security in a self-regulating economy and society? In perspective, we might create a security system that is partly based on automated routines and partly on crowd intelligence. Let me illustrate this again with the example of the Internet: assume that servers which are part of the Internet architecture would autonomously analyze the data traffic for suspicious properties, but -- in contrast to what we are seeing today -- without centralized data collection and data analytics. (Our brain certainly does not record and evaluate everything that happens in our immune system, including the digestive tract, but our body is nevertheless protected pretty well.) If suspicious activities are detected, a number of responses are conceivable, for example: (1) the execution of the activity could be put on hold while the sender is asked for feedback, or (2) the event could trigger an alert to the sender or receiver of the data, a local administrator, or a public forum, whatever seems appropriate. The published information could then be screened in a crowd-based way, to determine possible risks (particularly systemic risks) and to take proper action. While actions of type (1) would be performed automatically by computers, algorithms, or bots, actions of type (2) would correspond to the complementary crowd security approach. In fact, there would be several levels of self-regulation by the crowd, as I describe later. One may also imagine a closer meshing of computational and human-based procedures, which would mutually enhance each other.
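The two response types just described can be sketched in a few lines of Python. This is only an illustration of the decision logic, not a real network stack: the function names, the packet fields, and the crude size-based heuristic in `is_suspicious` are all invented for the example.

```python
# Sketch of the two-tier response: automated hold (type 1) vs. crowd alert (type 2).
# All names and fields are hypothetical; the heuristic is a placeholder.

def is_suspicious(packet):
    # Placeholder heuristic: flag unusually large payloads.
    return len(packet.get("payload", "")) > 1000

def handle_packet(packet, alert_log):
    if not is_suspicious(packet):
        return "forwarded"
    if packet.get("sender_reachable", False):
        # Response (1): put the activity on hold and ask the sender for feedback.
        return "held_pending_feedback"
    # Response (2): publish an alert for crowd-based screening.
    alert_log.append({"packet": packet, "status": "needs_review"})
    return "alerted"

alerts = []
print(handle_packet({"payload": "x" * 10, "sender_reachable": True}, alerts))    # forwarded
print(handle_packet({"payload": "x" * 5000, "sender_reachable": True}, alerts))  # held_pending_feedback
print(handle_packet({"payload": "x" * 5000, "sender_reachable": False}, alerts)) # alerted
```

Note that the alert log, not a central database, is what the crowd would screen; each server keeps only its own.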
Managing the chat room
We have seen that information exchange and communication on the web have evolved quickly. In the beginning, there was no regulation or self-regulation in place at all. These were the times of the Wild Wild Web, when people often did not respect human dignity or the rights of companies. But police and other executive authorities were also experimenting with new and controversial Internet-based instruments, such as Internet pillories to publicly name people.
All in all, however, one can see a gradual development of improved mechanisms and instruments. For example, public comments in news forums were initially published without moderation, but this spread a lot of low-quality content. Then, comments were increasingly assessed for their lawfulness (e.g. for respecting human dignity) before they went on the web. Then, it became possible to comment on comments. Now, comments are rated by the readers, and good ones are pushed to the top. The next logical step would be to rate commentators and raters. We can thus see the evolution of a self-regulatory system that channels the free expression of speech into increasingly constructive paths. I believe it is possible to reach a responsible use of the Internet based on principles of self-regulation. Eventually, most malicious behaviour will be managed by automated and crowd-based mechanisms, such as the reporting of inappropriate content and reputation-based placement. A small fraction will have to be handled by a moderator, such as a chat room master, and there will be a hierarchy of complaint instances to handle the remaining, complicated cases. I expect that, in the end, only a few cases will remain to be decided in court, while most activities will be self-governed by social feedback loops in the form of sanctions and rewards by peers.
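The last step mentioned above, rating the raters, can be illustrated with a toy ranking function: each up- or down-vote is weighted by the voter's own reputation, so that votes from trusted raters count more. The names, reputation values, and weighting scheme are invented for illustration.

```python
# Sketch of reputation-weighted comment ranking ("rate commentators and raters").
# Data and weights are illustrative, not from any real platform.

def comment_score(ratings, rater_reputation):
    """Weight each up/down vote (+1/-1) by the rater's reputation (0..1)."""
    return sum(vote * rater_reputation.get(rater, 0.5) for rater, vote in ratings)

reputation = {"alice": 0.9, "bob": 0.2, "carol": 0.7}

comments = {
    "c1": [("alice", +1), ("bob", -1)],   # upvoted by a trusted rater
    "c2": [("bob", +1), ("carol", -1)],   # upvoted only by a low-reputation rater
}

ranked = sorted(comments, key=lambda c: comment_score(comments[c], reputation), reverse=True)
print(ranked)  # c1 ranks above c2
```

A single high-reputation upvote thus outweighs several low-reputation ones, which is exactly what makes vote manipulation by throwaway accounts less attractive.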
The above mechanisms will also feed back from the virtual to the real world, and we will see an evolution of our over-regulated, inefficient, expensive and slow legal system into one that is largely self-regulating, more effective and more efficient. Here, we may learn from the way interactive multi-player online games or Interactive Virtual Worlds are managed, particularly those populated by children. One of my colleagues, Seth Frey, has pointed me to one such example, the Penguin Club. To keep bad influences away from children, communication and actions within the Penguin Club world are monitored by administrators. As the entire population of Penguin Club users is too large to be mastered by a single person, there are several communities run on several servers, i.e. the Penguin Club world is distributed. Moreover, as every administrator manages his or her community autonomously, these may be viewed as parallel virtual worlds. This provides us with an exceptional opportunity to compare different ways of governance. Our study is far from complete, so I just want to mention this much: it turns out that, if vandalism is automatically sanctioned by a robotic computer program, this tends to suppress creativity and results in a boring world. This is reminiscent of the many failed past attempts to create well-functioning, liveable cities managed in a top-down way.
Returning to the virtual world of the Penguin Club, I certainly don't want to argue in favour of vandalism, but I want to point out the following: the most creative and innovative ideas are, by their very nature, incompatible with established rules, and it requires human judgement to determine whether they should be accepted or sanctioned. This has an interesting implication: we may actually allow for different rules to be implemented in different communities, as they may find different things acceptable or not. This will eventually lead to diverse Interactive Virtual Worlds, giving people the opportunity to personally choose the world(s) that fit them.
Embedding in our current institutional system
Of course, we need to make sure to stay within the limits of the constitution and fundamental laws, such as human rights and respect for human dignity. Such decisions may require difficult moral judgements and particular qualifications of the "judge," i.e. the administrator of the gaming community or chat room. So it makes sense to have a hierarchy of such "judges," ranked by their qualification to decide difficult matters in an acceptable and respected way. These arbiters would be called "community moderators".
How would a "hierarchy of competence" emerge among such community moderators? It would be based on previous merits, i.e. on qualifications, contributions, and performance. Decisions would be rated both from the lower and the upper level. Over sufficiently many decisions, this would determine who will be promoted -- always for a limited period of time -- and who will not. If the punished individual accepts the sentence of the arbiter, the moderation procedure is finished, and the sentence is published. Otherwise, the procedure continues on the next higher level, which is supposed to spend more effort on finding a judgement compatible with previous traditions, in order to reach a reasonable level of continuity and predictability.
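As a rough sketch, such a promotion rule could look as follows. The 0-to-1 rating scale, the equal weighting of ratings from below and above, and the thresholds are my own assumptions, meant only to make the mechanism concrete.

```python
# Sketch of time-limited promotion based on decision ratings from the level
# below and the level above. Scale, weights and thresholds are assumptions.

def promotion_score(decision_ratings):
    """Each decision carries one rating from below and one from above (0..1)."""
    scores = [(below + above) / 2 for below, above in decision_ratings]
    return sum(scores) / len(scores)

def decide_promotion(decision_ratings, min_decisions=10, threshold=0.75):
    if len(decision_ratings) < min_decisions:
        return "not enough decisions yet"
    if promotion_score(decision_ratings) >= threshold:
        return "promoted (for a limited term)"
    return "not promoted"

ratings = [(0.8, 0.9)] * 12  # 12 decisions, consistently well rated
print(decide_promotion(ratings))  # promoted (for a limited term)
```

Requiring a minimum number of decisions before any promotion keeps a few lucky judgements from propelling a moderator upward.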
Whoever asks for a judgement process (or a revision) would have to cover the costs (depending on the system, this might also be virtual money, such as credit points). Judgements at higher levels would become more expensive, and, for the sake of fairness, fees and fines would not correspond to a fixed absolute amount of money, but to a certain percentage of the earnings made in the past, for example, in the last three years. In Switzerland, such a percentage-based system is successfully applied to traffic fines.
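The percentage-based principle is easy to express in code. The 2% rate and the three-year averaging below are illustrative numbers of my own; the point is only that the relative burden is the same for low and high earners.

```python
# Toy illustration of earnings-proportional fines. The rate is invented.

def percentage_fine(earnings_last_3_years, percentage):
    """Fine as a share of average annual earnings over the last three years."""
    avg_annual = sum(earnings_last_3_years) / 3
    return avg_annual * percentage / 100

# The same 2% fine weighs equally on low and high earners, relative to income.
print(percentage_fine([40_000, 42_000, 44_000], 2))     # 840.0
print(percentage_fine([400_000, 420_000, 440_000], 2))  # 8400.0
```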
Only when the above-described self-regulation fails to resolve a conflict of interest across all judgement instances of the Interactive Virtual World would today's central authorities need to step in. One might even imagine that many of today's legal cases could be handled in the above crowd-based way of conflict resolution, and that today's judges would then only form the highest level of the hierarchy. This would fit the system of self-regulation proposed above into our current organization of society. I expect the resulting procedures to be effective and efficient. The long duration of many court cases could be cut down dramatically. In other words, new community-based institutions of self-regulation should be able to help resolve the large majority of conflicts of interest better than existing institutions. I see the role of courts, police, and military mainly as helping to restore a balance of interests and power when other means have failed. In this connection, it is important to remember that control attempts in complex systems often fail and tend to damage the functionality of the system rather than fixing it in a sustainable way. Therefore, I don't think that these institutions should try to control what happens in society.
I believe that, over time, the principles of self-regulation will replace today's over-regulated system. A hundred years ago, only a handful of laws were made in the United Kingdom in a year. Now, a new regulation is put into practice every few hours. In this way, we have arrived at a system with literally tens of thousands of regulations. Even though we are supposed to, nobody can know all of them (yet ignorance does not excuse us). Moreover, many laws are revised shortly after their first implementation.
Even lawyers don't know all laws and regulations by heart. If you ask them whether one thing is right or the opposite, they will usually answer: "It depends." So we are confronted with a system of partially inconsistent over-regulation, which puts most people in a situation where they effectively violate laws several times a year -- and they don't even know in advance how a court would judge the situation. This creates an awkward element of arbitrariness in our legal system. While some people get prosecuted, others get away, and this creates an unfair system -- not just because some can afford better lawyers than others.
However, this is not the only way an unfair situation is created, while our legal system intends just the opposite, i.e. to ensure a system that does not generate advantages for some individuals, companies, or groups. So what is the problem? Whenever a new law or regulation is applied, it requires some people or companies to adapt a lot, while others have to adapt just a little. This creates advantages for some and disadvantages for others. Powerful stakeholders will make sure a new law fits their needs, such that they must adapt only a little, while their competitors have to adapt a lot. Hence, the new law will make them even more powerful. However, even if there were no lobbying to tailor law-making to particular interest groups, the outcome would be similar; just the stakeholders who profit most would vary more over time. The reason is simple: if N regulations are made, and p is the probability that you have to adapt little, while (1-p) is the chance that you have to adapt a lot, the probability that you are a beneficiary exactly k times follows the binomial distribution C(N,k) p^k (1-p)^(N-k). In other words, there is automatically a very small percentage of stakeholders who benefit enormously from regulations, while the great majority is considerably disadvantaged relative to them. Putting it differently: the homogenization of the socio-economic world comes with a serious problem: the more rules we apply to everyone, the more people will find that this world does not fit their needs well. And this explains a lot of the frustration among citizens and companies, not just in the European Union.
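This binomial argument can be checked numerically with illustrative figures: even when everyone has a fifty-fifty chance of benefiting from each individual regulation, the probability of benefiting from the large majority of, say, 100 regulations is vanishingly small, so consistent winners are automatically a tiny minority.

```python
# Numerical check of the binomial argument. N, k and p are illustrative.

from math import comb

def prob_beneficiary_k_times(N, k, p):
    """Binomial probability of benefiting from exactly k of N regulations."""
    return comb(N, k) * p**k * (1 - p)**(N - k)

N, p = 100, 0.5
# Probability of benefiting from at least 70 of the 100 regulations:
tail = sum(prob_beneficiary_k_times(N, k, p) for k in range(70, N + 1))
print(f"{tail:.6f}")  # far below 1 percent of stakeholders benefit this often
```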
Only a highly diverse system with many niches governed by their own sets of rules allows everyone to thrive. Interestingly, this is exactly how nature works. It is the existence of numerous niches that allows many species to survive, and new ones to emerge. For similar reasons, socio-economic diversity is an important precondition for innovation, which in turn is important for economic prosperity and social well-being. Nature is much less governed by rules than today's service societies. For example, recent discoveries in "epigenetics" have revealed that not even the genetic code is always read in the same way; rather, its transcription largely depends on the biological and social environment.
So how can we build socio-economic niches in which people can self-organize according to their own rules, within the boundaries of our constitution? Can we find mechanisms that promote social order, but allow different communities to co-exist, each governed by its own set of values and quality criteria? Yes, I believe this is possible. Social Information Technologies will help people and companies to master the increasing levels of diversity in a mutually beneficial way. Furthermore, reputation systems can promote cooperation. If they are multi-dimensional, pluralistic, and community-driven, they can offer a powerful framework for social self-regulation, which provides enough space for diversity and opportunities for everyone.
Pluralistic, community-driven reputation systems
Here I want to elaborate a bit more on another important component of the "social immune system", namely reputation systems. These days, reputation and recommender systems are spreading across the Web, which underscores their value and function. People can rate products, news, and comments, and they do! If they make the effort, there must be a reason for it. In fact, Amazon, eBay, TripAdvisor and many other platforms offer recommendations in exchange. Such recommendations benefit not only users, who tend to get better service, but also companies, since a higher reputation allows them to sell a product or service at a higher price. However, it is not good enough to let a company decide what recommendations we get and how we see the world. This would invite manipulation and undermine the "wisdom of the crowd", leading to bad outcomes. It is therefore important that recommender systems do not reduce socio-diversity. In other words, we should be able to look at the world from our own perspective, based on our own values and quality criteria. Only when these different perspectives come together can collective intelligence emerge.
As a consequence, reputation systems would have to become much more user-controlled and pluralistic. When users post ratings or comments on products, companies, news, pieces of information, and information sources (including people), it should therefore be possible to assess not just the overall quality, but also different quality dimensions, such as physical, chemical, biological, environmental, economic, technological, and social qualities. Such dimensions may include popularity, durability, sustainability, social factors, or how controversial something is. It then becomes possible to identify communities based on shared tastes (and social relationships).
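A multi-dimensional rating could be represented as a simple record of per-dimension scores. This is a minimal sketch; the dimension names and the 0-to-5 scale are illustrative assumptions, not a fixed standard:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Rating:
    rater: str
    scores: dict  # quality dimension -> score in [0, 5], e.g. {"durability": 4}

def aggregate(ratings, dimension):
    """Average one quality dimension over the ratings that actually include it."""
    values = [r.scores[dimension] for r in ratings if dimension in r.scores]
    return mean(values) if values else None

# Raters need not score the same dimensions -- that is the point of
# pluralistic, multi-dimensional assessment.
ratings = [
    Rating("alice", {"durability": 5, "sustainability": 2}),
    Rating("bob", {"durability": 3, "popularity": 4}),
]
print(aggregate(ratings, "durability"))  # averages only the raters who scored it
```

Because each dimension is aggregated separately, different communities can weight the dimensions they care about, rather than being forced into one overall score.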
We know that people care about different things. Some may love slapstick comedies, while others detest them. So it is important to consider the relevant reference group, and this may even change depending on the role we currently take, e.g. at work, at home, or in a circle of friends. To take this into account, each person should be able to have several profiles, which we may call "personas". For example, book recommendations would have to be different if we look for a book for ourselves, for our family members, or for our friends.
Creating a trend to the better
Overall, the challenge of creating a universal, pluralistic reputation system may be imagined as having to transfer the principles on which social order in a village is based to the global village, i.e. to the conditions of a globalized world. The underlying success principle is a merit-based matching of people making similar efforts. This can prevent the erosion of cooperation based on "indirect reciprocity," as scientists would say. For this approach to play out well, there are a number of things to consider: (1) the reputation system must be resistant to manipulation attempts; (2) people should not be terrorized by rumours; (3) to allow for more individual exploration and innovation than in a village, one would like to have the advantages of the greater freedoms of city life -- this requires sufficient options for anonymity (to an extent that does not challenge systemic stability).
First, to respect the right of informational self-determination, each person would be able to decide what kinds of personal information (social, economic, health, intimate, or other) he or she makes accessible, for what purpose, for what period of time, and to what circle (such as everyone, non-profit organizations, commercial companies, friends, family members, or just particular individuals). These settings would then allow selected others to access and decrypt selected personal information. Of course, one might also decide not to reveal any personal information at all. However, I expect that having a reputation for something will be better for most people than having none, if it helps them find people with similar preferences and tastes.
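Such disclosure settings could be stored as per-category permission records. The following is a hypothetical sketch under the assumptions just stated; the field names and example circles are my own, not part of any existing system:

```python
from datetime import date

# Each record says: this category of personal information is readable
# by this circle, for this purpose, until this date.
permissions = [
    {"category": "health", "circle": "family", "purpose": "care",
     "valid_until": date(2030, 1, 1)},
    {"category": "economic", "circle": "everyone", "purpose": "reputation",
     "valid_until": date(2026, 6, 30)},
]

def may_access(perms, category, circle, today):
    """True if some still-valid permission grants this circle this category."""
    return any(p["category"] == category and p["circle"] == circle
               and today <= p["valid_until"] for p in perms)

print(may_access(permissions, "health", "family", date(2026, 1, 1)))  # True
```

In a real system the check would gate decryption keys rather than plaintext, but the access logic itself stays this simple.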
Second, people should be able to post their comments or ratings in an anonymous, pseudonymous, or personally identifiable way. Pseudonymous posts would then have, for example, a 10 times higher weight than anonymous ones, and personal ones a 10 times higher weight than pseudonymous ones. Moreover, everyone who posts something would have to declare the category of information: is it a fact (potentially falsifiable and linked to evidence allowing others to check it), an advertisement (if there is a personal benefit for posting it), or an opinion (any other information)? Ratings would always have the category "opinion" or "advertisement". If people use the wrong category or post false information, as identified and reported by, say, 10 others, the weight of their ratings (their "influence") would be reduced by a factor of 10 (of course, these values may be adjusted). All other ratings by the same person or pseudonym would be reduced by a factor of 2. This mechanism ensures that manipulation and cheating do not pay off.
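The weighting scheme just described can be written down directly. This is a minimal sketch; the factors (10x per identity level, 10x penalty on a flagged post, 2x penalty on the author's other posts) follow the text and are, as it notes, adjustable:

```python
# Base weights per identity level: each step up is worth 10x more.
BASE_WEIGHT = {"anonymous": 1.0, "pseudonymous": 10.0, "personal": 100.0}

def rating_weight(identity, misreported=False, author_flagged=False):
    """Weight of one rating, given identity level and reported violations."""
    w = BASE_WEIGHT[identity]
    if misreported:       # this very post was flagged as miscategorized or false
        w /= 10.0
    elif author_flagged:  # another post by the same person/pseudonym was flagged
        w /= 2.0
    return w

print(rating_weight("pseudonymous"))                    # 10.0
print(rating_weight("personal", misreported=True))      # 10.0
print(rating_weight("anonymous", author_flagged=True))  # 0.5
```

Note the incentive structure this creates: a flagged personal post falls back to the weight of an honest pseudonymous one, so cheating costs exactly the credibility bonus that identifiability earned.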
Third, users would be able to choose among many different reputation filters and recommender algorithms. Just imagine: we could set up the filters ourselves, share them with our friends and colleagues, modify them, and rate them. For example, we could have filters recommending the latest news, the most controversial stories, the news our friends are interested in, or a surprise filter. We could then choose the set of filters we find most useful. Considering credibility and relevance, the filters would also put a stronger weight on information sources we trust (e.g. the opinions of friends or family members), and disregard information sources we do not want to rely on (e.g. anonymous ratings). For this, users would also rate information sources, i.e. other raters. Spammers would then quickly lose their reputation and, with it, their influence on the recommendations made.
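A trust-based filter of this kind reduces to a weighted average. The sketch below is illustrative: the trust values per source type are assumptions a user would set herself, and giving a source zero trust simply removes it from the result:

```python
def filtered_score(ratings, trust):
    """Trust-weighted average of (source, score) pairs; zero-trust sources drop out."""
    weighted = [(trust.get(source, 0.0), score) for source, score in ratings]
    total = sum(w for w, _ in weighted)
    if total == 0:
        return None  # nothing trustworthy to base a recommendation on
    return sum(w * s for w, s in weighted) / total

# User-chosen trust settings (assumed values): full trust in friends,
# a little less in family, none at all in anonymous ratings.
trust = {"friend": 1.0, "family": 0.8, "anonymous": 0.0}
ratings = [("friend", 5), ("anonymous", 1), ("family", 4)]
print(filtered_score(ratings, trust))  # anonymous rating is ignored
```

Because the trust table belongs to the user rather than to a platform, two people can see entirely different scores for the same ratings, which is precisely the pluralism argued for above.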
Users would not only use information filters (such as those generating personalized recommendations), but would also be able to generate, share, and modify them. I would like to call this approach "social filtering." (A simple system of this kind has been implemented in the Virtual Journal.)
Together, the personal information filters would establish an "information ecosystem," in which increasingly reliable filters evolve through modification and selection, steadily enhancing our ability to find meaningful information. The pluralistic reputation values of companies and their products (e.g. insurance contracts or loan schemes) would then give a quite differentiated picture, which can also help companies to develop customized and more useful, successful products. Reputation systems are therefore advantageous for both customers and producers: customers get better offers, and producers can charge a higher price for better quality, leading to mutual benefit.
Social Information Technologies for protection might be imagined to work like a kind of immune system, i.e. a decentralized system that responds to changes in our environment and checks their compatibility with our own values and interests. If negative externalities are to be expected (i.e. if the value of an interaction would be negative), a protective "immune response" would be triggered.
Part of this would be an alarm system, a kind of "radar" that alerts users to impending dangers and makes them aware of them. In fact, the "Internet of Things" will make changes (both gains and losses) measurable, including psychological impacts such as stress, or social impacts such as changes in reputation or power. Social Information Technologies for protection would help people stand together against those who attack or exploit them. A similar protection mechanism could be set up for institutions, or even countries. Such social protection ("crowd security") might often be more efficient and effective than long and complicated lawsuits. Of course, protection by legal institutions would still exist, but lawsuits would become a last resort rather than a first resort, reserved for cases where social protection fails, e.g. when someone needs protection from organized crime. Note that a suitably designed reputation system alone could already be quite effective in discouraging certain kinds of exploitation or aggression, as it would discourage others from interacting with such people or companies, reducing the further success of those who trouble others.