Trust Networks: How We Actually Know Things
Introduction
How do you know what you know? Perception, intuition, empiricism, logical deduction: these are the kinds of answers philosophers have proposed for millennia. And yet when you reflect carefully on your store of knowledge, you'll discover that very little of it is the direct product of such methods. Instead it relies on trust.
Because trusting the right people is so important in life, we have an excellent intuitive understanding of trust. But when we think about knowledge in the abstract we become strangely stupid, and ignore our dependence on trust like camels ignoring the sand under their feet. We imagine that the transfer of information from person to person is a transparent and frictionless process, even though we don't act this way in practice.
Our ignorance is learned. Trust is based on ad hominem reasoning, which evaluates a claim by judging the person making it instead of the claim itself. Since we've been taught to believe that ad hominem isn't a valid form of argument, and that knowledge is based on valid arguments, we assume knowledge can't be based on ad hominems. But this is a mistake. Judgments of trustworthiness, and therefore ad hominems, are both ubiquitous and indispensable, even to the higher realms of knowledge.
In this essay I'll use a series of examples to show just why trust and ad hominem judgments of trustworthiness are so important. I'll begin with exceedingly simple instances of trust-based knowledge in order to demonstrate the concept clearly, and then gradually build on these to show how our political and even scientific knowledge is likewise grounded in trust. In the second part, I'll show how people group into networks of mutually trusting subjects. These trust networks are shaped by social conflicts that impair our ability to produce knowledge and degrade the accuracy of our beliefs—a problem that's particularly damaging in the present day. In the third part, I'll examine how individuals outsource their cognition to these networks; and in the fourth, I'll discuss potential remedies.
By drawing your attention to ad hominems, trust networks, and strategies for managing them, I hope I'll help you to reflect on your own beliefs and improve their accuracy.
Contents
Part I: How We Depend On Trust
A. Let's Cheat At Cards (The Basics of Trust)
B. Let's Start a War (Political Reporting)
C. Let's Oppress The Peasants In The Name Of Science (As If We Needed An Excuse!)
D. Let's Ad Hominem God (Faith, Paranoia, and Conspiracies)
Part II: Trust Networks and Epistemic Load
A. Let's Fire A Scrupulous Researcher (Trust Networks)
B. Let's Fry Our Brains On Social Media (Signaling Load)
C. Let's Kill A Virtuous Man (Partisan Load)
D. Let's Fight A Dirty War (Epistemic Load)
E. Let's Deal Drugs (Hacking Trust Networks)
Part III: Efficient Stupidity
A. Let's Outsource Intelligence (Passive Cognition)
B. Let's Paint It Gray (Trend Following)
C. Let's Start A Stampede (Passive Leverage)
D. Let's Play Opposite Day (Contrarians and Cognitive Altruists)
Part IV: Techniques For Improving Belief Accuracy In A World Of Ad Hominems
A. Let's Crash A Plane (Black Box Verification)
B. Let's Get Medieval (Social Technologies To Manage Trust)
C. Let's Read Fiction (Techniques For Navigating A World of Ad Hominems)
Part I: How We Depend On Trust
A. Let's Cheat At Cards (The Basics of Trust)
Imagine, dear readers, that you're playing poker. You've a very good hand and you're about to bet the farm on it. Your friend steps behind you and whispers in your ear. He informs you that one of your opponents has a royal flush; you'd better fold now, because otherwise you'll lose everything. Do you trust him? Of course you do. He's your friend! You take his claim as a matter of fact and fold.
Now imagine a very similar scenario. This time it's not your friend but your enemy who claims to have seen your opponent's hand and tells you to fold. He's a jealous bully who's hated you ever since his high school crush spurned him and went to prom with you instead. Do you trust him? Of course you don't. He's your enemy! After deciding he's not clever enough to use reverse psychology, you conclude he must be lying and bet the farm instead of folding.
Imagine the scenario a third time, except in this case it's a mischievous trickster who tells you the same things. He likes playing cruel practical jokes on people. Or again: now the man urging you to fold isn't your enemy per se, but he is the brother of another card player. Do you trust him?
It's obvious that when someone offers you information you can't verify directly, your decision whether to accept it depends on your evaluation of his character. And not only his character, but also his relation to you as a friend or foe, his history of being right or wrong, and even his relation to the other parties affected by your beliefs. In other words, trust-based knowledge depends on ad hominem judgments.
What's perhaps not so obvious is that more sophisticated forms of knowledge depend on trust in much the same way this simple card cheating arrangement does.
B. Let's Start a War (Political Reporting)
Imagine, dear readers, that you open up the New York Times one morning and see a shocking headline. A foreign country—let's call it Atlantis—has attacked one of your ships unprovoked, blown it up, and killed hundreds of civilians. It's an act of war.
Well, probably. The government of Atlantis denies any involvement. Few people were on the scene. You have no way to personally confirm any of the facts. The most damning claims against Atlantis come from confidential sources in national security. Nevertheless, the journalist writes as if the reported version of events were confirmed beyond question and definitely true. Indeed, he seems just as confident as if he'd died on the ship himself!
So: do you support a declaration of war? The question isn't as easy as it might seem. Let's think about how you'd answer it.
You would like, of course, to confirm the facts directly. But as I've already stated, there's no way to do that. You're dependent on secondhand reports. And while you can sometimes identify falsehoods by looking for contradictions in hearsay, none are apparent here. It's simply a matter of deciding whether you trust the men making the claims: the journalist, his editors, media owners, your national security administration, etc. To do this you'll consider their previous record of honesty or dishonesty, as well as their motivations and loyalties, just as you did when cheating at cards. And then you'll make the most critical decision of all—the decision whether or not to go to war—on the basis of an ad hominem judgment of trustworthiness.
Of course, however important they might be, nobody believes our political decisions can always be based on absolute knowledge. So you're likely thinking, “Trust may be relevant in politics; but it's not relevant to the production of real knowledge like physics or mathematics.”
But that isn't true. Even hard science depends on trust, as I'll demonstrate next.
C. Let's Oppress The Peasants In The Name Of Science (As If We Needed An Excuse!)
Imagine, dear readers, a pair of scientists. Tycho is a wealthy noble who dabbles in astrology in his spare time. Johannes is a theorist with a great mind for geometrical models.
One day a private island happens to fall into Tycho's hands. He does what anyone would do with a private island: forces his peasants to build a state-of-the-art facility for recording astronomical observations. (Let's face it, if he didn't crack the whip they'd just be swilling cheap beer and reenacting the least wholesome scenes from Chaucer.) Tycho then presents his observations to Johannes, who will set to work searching them for patterns.
Johannes' theories can only be as good as the observations that form their basis. So Johannes needs to confirm the observations provided by Tycho are indeed correct; if they're not, trying to fit them to geometrical models will be a waste of time. This isn't just a matter of evaluating Tycho's methods. It's also a question of trust. Is Tycho reliable, or lazy, erratic, and sloppy? Is he a friend, or an enemy who might knowingly share false information? Would he fabricate data to advance his career? Is he the partly unwilling victim of an astronomical blackmail ring? Johannes' scientific research depends on the answers to these distinctly unscientific questions, because they all have bearing on the reliability of Tycho's data.
This example drawn from the heart of physics is analogous to the card cheats we discussed earlier. One party transfers information to another party who can't verify it directly. The recipient is therefore forced to substitute an ad hominem judgment for direct verification.
Such substitution is omnipresent and indeed indispensable to the progress of science. Without it Johannes would have to repeat the observations himself. Even assuming he could arrange to do this (impossible in practice, since like far too many of us, he couldn't afford a private island), the efficiency cost would eventually halt progress. Indeed, if there's no trust, then every single person—to know anything at all—has to personally perform every experiment and verify every piece of information that feeds into the final conclusion. There's simply not enough time to do that.
As a matter of historical fact, Tycho seems to have been as trustworthy and exacting as could be hoped. But because his assistants were sloppy, his astronomical tables contain random errors. Johannes would have therefore been wise to keep the question of trustworthiness in the back of his mind even during his theoretical research, lest he throw out a good theory before considering that some anomalous data points might have been the product of human failings.
This scenario isn't unique to a particular early moment in the history of physics, nor is it a mere irrelevant technicality in the knowledge production process that you can brush aside, as you might feel tempted to do. Untrustworthy research is a sad reality, and not only in the soft sciences. For instance, Alzheimer's research was set back substantially by a fraudulent but highly cited paper that scientists in the field mistakenly trusted. Alzheimer's is responsible for over 100,000 deaths per year in the United States alone, so slowing progress toward the cure by as little as a day is equivalent to an act of mass murder. And someone was willing to commit that act of mass murder, dear readers, just for the paltry amount of status and money he'd gain by advancing his scientific career!
Even the hardest of hard sciences, mathematics, is dependent on trust. Because anyone can verify all the propositions in Euclid personally, we're tempted to assume professional mathematicians individually work through every proof from the Pythagorean theorem up to the latest publications, with no need to trust other humans. This is far from the case. Today's advanced mathematics has reached such a degree of complexity that only a tiny number of people possess both the brainpower and the time to verify new theories, let alone to check each step from the beginning of mathematical history. So although we conceive of mathematics as an accumulating sequence of absolutely certain intuitions, in practice this sequence can't be completed within any individual human mind. Instead, big gaps between intuitions are linked together with nothing more solid than trust in other mathematicians who claim to have checked them. In other words: ad hominem judgment.
A salient recent example is Shinichi Mochizuki's Inter-Universal Teichmüller Theory. Mochizuki claims his theory solves important mathematical problems, but its length and complexity are such that hardly anyone can confirm his claim. The tentative consensus among professional mathematicians seems to be that Mochizuki's proof is flawed, but it's hard to say what “consensus” could even mean in a context where only a handful of people can examine it with the necessary thoroughness.
Mochizuki, for his part, has gathered local supporters around him and published the Inter-Universal Teichmüller Theory in a journal he edits. The scenario resembles a gang turf war more than a shining ornament atop the ivory tower of knowledge, because even the large majority of seasoned professional mathematicians can do no better than guess at the theory's validity by deciding which little posse of better-informed mathematicians they consider more trustworthy. And even if a genuine consensus eventually does emerge, the vast majority of mathematicians will still decide the theory's validity on the basis of trust in other mathematicians, not direct knowledge.
So for the sciences, however hard or soft they might be, the traditionally recognized means of knowing—perception, intuition, etc.—are only the initial stages of knowledge production. Trust is an ineliminable corollary of the division of labor that permits their advanced development.
To summarize the argument: advanced knowledge production requires a division of labor; the division of labor requires transferring information that recipients can't verify directly; and wherever direct verification is impossible, an ad hominem judgment of the source's trustworthiness must stand in for it. Hence even the hardest sciences rest on trust.
D. Let's Ad Hominem God (Faith, Paranoia, and Conspiracies)
We've established that trust grounds our knowledge in card games, politics, science, and even mathematics. But what if I told you that trust actually grounds all our knowledge of anything whatsoever? Descartes came within a small step of making such a claim—and I'm going to take that small step for him.
Descartes speculated that there might be an evil demon distorting all of our perceptions and even our most basic cognition, so that nothing we perceive or think is real. The “cogito ergo sum” that follows this speculation is famous, of course. But few people today bother to read past the cogito to learn just why Descartes ultimately decided he could trust his thoughts and perceptions, and trust the world to be real rather than an illusion created by an evil demon. It was because he deduced—by means of the ontological proof—that God exists and cannot be an evil demon.
But I, dear reader, would rather pull that ontological proof out from under your feet like a magic carpet. Because if one doesn't accept the ontological proof—and I doubt you do—then in my new and upgraded version of Descartes' argument one can only take God's existence, goodness, and even the reality of the world itself on faith.
This faith that the Creator of our world is good is nothing but the ultimate ad hominem judgment: an extreme form of trust to underpin all of existence. And the paranoia motivating the fear that this Creator might instead be an evil deceiver—that too is nothing but an ad hominem judgment: an extreme form of distrust calling all of existence into question.
God and the evil demon in my version of Descartes' thought-experiment represent the two poles of what I call the faith-paranoia axis. This axis is fundamental to human cognition. How do you know, for instance, that your friends aren't secretly conspiring to murder you? Well, you don't know absolutely. They might be. And sometimes the mad begin to think along just these lines.
You're probably inclined to dismiss this type of paranoia as irrational and of no relevance to a sane person like yourself. But that's not the case, because even if it's “mad,” it contains no logical error. When you look at someone's face, listen to their tone of voice, and examine their diction, it isn't reason, but rather intuition—yes, the kind of intuition we use in ad hominem judgment—that informs you whether they're telling the truth or lying, whether they're a friend or a foe. And this intuition hasn't been calibrated by reason or scientific experiment, but by experience, and especially by that supra-individual form of assimilated experience known as instinct: your evolutionary inheritance.
Now: are there not people who are too credulous or too distrusting? And how can you be sure you're not among them? What if you began to sense an untrustworthy look in your friend's eyes; what if you began to suspect a cabal of evil scientists fabricated all the data at your disposal—what if, at certain moments, you began to sense the kind of skipping frame rate you'd expect from a glitchy VR set? Reason, in the strict sense, wouldn't offer you an exit.
The truth is that we never have enough information to be entirely sure there's no trick, no illusion, no conspiracy behind the way things appear; we can never exclude all possible grounds for paranoia on the basis of reason alone. The screen you think you're reading right now, the ground beneath your feet, even your memories might be illusory. You might be in a basement dungeon, trapped inside a full-body VR suit by means of which an evil content moderator is manipulating your experiences—Plato's cave and Descartes' evil demon in modern costumes. (Apologies if I offend any of my dear readers by pointing this out, but the simulation hypothesis, so dear to Silicon Valley intellectuals, is a rehash of a thought-experiment already known to philosophy for hundreds and indeed thousands of years. I have no idea why “Plato's cave, but with computers!” so easily impresses intelligent people.)
Thus ad hominem judgments of trustworthiness, understood broadly, don't just ground politics and science and cards; they're at the very core of our ability to know anything at all. Our minds are tuned for a certain balance of faith and paranoia, a particular position on the faith-paranoia axis of cognition, that enables us to make good decisions about when to trust and when to doubt. Those who are mentally ill tilt away from the optimum, have too much faith or too much paranoia, and make, by consequence, poor choices. Or at least choices that seem poor to us. For functioning effectively in the world we inhabit may require trusting certain illusions, and perhaps it's precisely by seeing through to the true reality that the insane become dysfunctional.
But you're not dysfunctional, are you, dear readers? In fact, you're ready to continue to the second part of this essay. It's just below.
Part II: Trust Networks and Epistemic Load
A. Let's Fire A Scrupulous Researcher (Trust Networks)
Thus far we've established that most of our knowledge depends on trust, and also that we tend to trust friends and distrust enemies. Friendship is typically mutual, so in practice groups of friends trust each other and distrust groups of enemies. The consequence of this is that when it comes to divisive issues—even those normally considered in the purview of science—humans cluster into distinct networks with shared beliefs. I call these trust networks.
We can formally define a trust network as a group of subjects who assume without direct verification that each other's claims are more likely to be true than the claims of non-members. The magnitude of this trust differential varies according to the circumstances.
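In symbols (the notation is mine, purely for illustration): writing P_i(claim | source) for the credence member i grants a claim on the word of a given source, a trust network N is a set of subjects for whom

$$P_i(\text{claim} \mid j \in N) - P_i(\text{claim} \mid k \notin N) = \delta > 0$$

where the trust differential δ is larger or smaller as circumstances dictate.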
In the examples that follow I'm going to emphasize the harms done by dysfunctional trust networks. This is because when trust networks are working well and arrive at the truth, we don't really need to understand how they work—just as we don't need to understand how our car works until the day it stops working. But when they fail, understanding becomes useful to facilitate repair. Thus it's first and foremost to the cases and causes of failure that we ought to direct our attention.
Nevertheless, despite the discouragingly realistic scenarios you're about to read, I don't believe, nor am I arguing, that trust networks are always harmful and dysfunctional, that all science is corrupt, that everything is relative, that social factors completely overwhelm our ability to discover the truth, nor even that dissidents are necessarily more trustworthy than venerable institutions. I'll offer a more positive vision in Part IV of this essay, so please be patient.
I'd like you to imagine, dear readers, a social scientist named Tom. Tom is a supporter of a political party we'll call the Yellow Party. In fact, it's hard to finish a conversation with Tom before hearing him bash the opposition, the Purple Party, at least once. Tom's latest published and peer-reviewed social science research confirms that the Yellow Party has the best platform ever, almost guaranteed to solve the most pressing social problems, and that the Purple Party has the worst platform in history, or nearly so. There's a second social scientist named Dick. His field of research is a bit different from Tom's, but he also supports the Yellow Party. And Dick's latest published research also confirms that the Yellow Party has the best platform ever, and that the Purple Party has the worst in history, or nearly so.
Now, the default human behavior is for Tom and Dick to evaluate each other on an ad hominem basis, recognize each other as allies, and then by consequence trust each other's research with less than deeply skeptical inspection. And that's exactly what they do. Of course certain i's have to be dotted and t's crossed, and the basic structure of the research has to resemble good science just as a card cheat's account of your opponent's hand has to include the correct number of cards if you're to take him seriously. But that's a simple enough matter, and both Tom and Dick are highly skilled at making their work look professional. So skilled, in fact, that they can make it look professional whether or not it's strictly true. Soon Tom is telling his students how great Dick's research is, and Dick is telling his students how great Tom's research is. When it comes time to select graduate students, Tom is disinclined to accept applicants who doubt Dick, and Dick is disinclined to accept applicants who doubt Tom. Well, unless they're really pretty.
There's also a third social scientist named Harry. Harry is scrupulously neutral and sets a very high standard for his research. He does everything he can to exclude bias and neither hacks nor even slashes his p's. He's pretty much oblivious to both parties. And when it comes time to pick graduate students or select new professors, Harry judges them purely on the basis of ability.
Harry, unfortunately, is about to win the academic version of a Darwin award. He, Tom, and Dick are all on the selection committee for new professors. When they have equal qualifications, Harry is just as likely to select a supporter of the Yellow Party as he is a supporter of the Purple Party. But Tom and Dick are inclined to favor supporters of the Yellow Party, all things being equal, and sometimes even when all things aren't equal. They don't actually feel biased. They just know their side is closer to the truth and, well, more trustworthy. We can distinguish this preference for friendly personnel from the extra trust we put in “friendly” beliefs, but because they're rooted in the same ad hominem judgment they go hand in hand, and ultimately reinforce one another.
In the long run it doesn't matter how many Harrys there are, because a positive bias added to any number of neutral votes will always generate a positive total bias. As it happens, Purple Party supporters are less likely to enter this field than Yellow Party supporters, so Tom and Dick start with a numerical advantage by default. Since the biased are more likely to hire the biased, the total bias will increase with each iteration of the hiring process until Yellow Party supporters dominate the institution.
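A toy simulation makes the ratchet concrete. In the sketch below (my own construction, with every parameter illustrative), seats on a committee turn over one at a time: partisans vote to hire a fellow partisan with probability 0.5 + b, while the neutral Harrys vote an even 0.5. With a partisan fraction f, the committee's total hiring bias is 0.5 + b·f, which exceeds one half whenever any partisans are present, no matter how many Harrys dilute them; and since each biased hire tends to raise f, the partisan share climbs toward an equilibrium of 0.5/(1 - b), which means total domination once b reaches 0.5.

```python
import random

def hiring_drift(n_seats=100, n_rounds=2000, bias=0.3,
                 init_partisans=10, seed=7):
    """Toy model of iterated committee hiring (illustrative numbers only).
    True = partisan seat, False = neutral Harry."""
    random.seed(seed)
    seats = [True] * init_partisans + [False] * (n_seats - init_partisans)
    for _ in range(n_rounds):
        i = random.randrange(n_seats)        # one seat turns over
        voters = seats[:i] + seats[i + 1:]   # the rest form the committee
        # Partisans tilt toward hiring partisans; Harrys don't.
        p = sum(0.5 + bias if v else 0.5 for v in voters) / len(voters)
        seats[i] = random.random() < p       # the replacement hire
    return sum(seats) / n_seats

print(hiring_drift())  # starts at 0.10; drifts toward 0.5/(1 - 0.3), about 0.71
```

The particular numbers don't matter; what matters is that neutrality dilutes the bias but never cancels it.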
As Yellow Party supporters become a majority and then a supermajority, scrupulously neutral Harry will begin to look very purple compared to the average professor. He's drifting away from the scientific consensus—or rather new personnel are drifting the scientific consensus away from him. If he hangs around long enough and dissents too loudly from the rising orthodoxy, he might even be ostracized as a Purple Party sympathizer and pressured out of his job.
The final result is that Tom and Dick build an institutionally validated network of mutually trusting allies with a shared picture of the world, which they'll call, for lack of a better term, “established science,” whence information supporting Purple Party positions will be thoroughly and effectively suppressed.
Tom and Dick's dominance in academia will grant them the ability to provide their friends with training, salaries, and credentials, while denying the same to their enemies. Since training amounts to a genuine advantage and bureaucratic hiring and promotion processes typically give more weight to credentials than results, the Yellow Party may eventually come to dominate official bureaucracies as well, and these bureaucracies may return the favor by rewarding Tom, Dick, and their friends with more research grants. In fact, the whole process could have been set in motion the other way around, with partisan grantors shifting the initial balance in academia by funding a greater number of friendly grantees. Either way it creates a self-reinforcing trust network of captured institutions.
However, Tom and Dick's enemies—the supporters of the Purple Party—are not impressed. Their default behavior is to distrust any supporters of the Yellow Party, and most especially to distrust anyone whose research claims the Yellow Party has the best political platform in history and the Purple Party the worst. They'll therefore tend to separate themselves from supporters of the Yellow Party and form a distinct trust network with its own mutually reinforcing allies and its own shared picture of the world. Naturally they won't dominate the same institutions. And when they do try to engage in social science and the like without institutional support, they'll typically be amateurish and sloppy, since they've been denied access to the training and funding necessary to achieve professional quality. (This will in actual fact decrease their trustworthiness, but as a consequence of their exclusion, not necessarily of the values, interests, and capabilities the Yellow Party will blame for their incompetence.) Eventually institutions will be viewed as part of either the Purple Party's trust network or part of the Yellow Party's trust network, depending on which faction dominates.
All this derives from simple game theory, without the need for unusual political conditions or any formal conspiracy whatsoever—just a predominance of like minds who instinctively filter new entrants for more like minds in an environment where performance and conformity to reality are selected for only weakly, or not at all.
In saying this I don't intend to imply that a conspiracy to precipitate the same result wouldn't be feasible, but only that it isn't strictly necessary. In fact, clever conspirators could achieve great things with little effort by leveraging the game theory just analyzed as a fulcrum to move the world. Once their partisans were installed, the conquest would be largely self-sustaining, or at least sustaining long enough for the ant to climb to the top of the blade of grass, and the mutual reinforcement we've just described could give them unbreakable control of several key institutions at once.
As for members of the lay public, they have even less time to thoroughly scrutinize Dick's work than Tom does, so how much they trust his research is in large part a function of whether he falls within their own trust network. Here it's important to clarify, however, that trust networks aren't only nor even necessarily a matter of political alignment (nor should you imagine that “centrists” are disconnected from these networks—to the contrary). The game theory I've just described can play out with any kind of faction, any number of trust networks can coexist, interconnect, and overlap, and there are other intra-network factors that influence trust as well. The boundaries between one network and another are not strict; they resemble rather a colony of spiderwebs crossing over and under each other. If you did set out to map them, you'd find that we all participate in many. For instance, your extended family could be considered a trust network even though the members may differ in political alignment; likewise for professional associations. I've chosen to illustrate here with a pair of fictional political parties because it's simple, easy, and familiar.
When social conflict is minimal and selection for genuine competence is incentivized, the effects described above will be modest and the divergence of trust networks will remain incomplete. This is because the requirement for competence puts a brake on selection for partisans and impractical ideologies. But in times and places where social conflict is heightened and selection for competence is not incentivized (or worse still, where partisanship per se is incentivized, as in journalism today), the trust networks will separate faster and diverge more substantially, because the friend-enemy distinction will be weighted more heavily—to the point where it overwhelms competence and character. As the famous saying goes, “He may be a psychopath, but he's our psychopath.”
The basic mathematical model below illustrates this point.
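Here is a minimal sketch of one such model, with the weighting scheme chosen purely for illustration. Suppose subject i assigns subject j the trust score

$$T_{ij} = w\,F_{ij} + (1 - w)\,C_j$$

where F_{ij} is 1 if i counts j a friend and 0 if an enemy, C_j (between 0 and 1) is j's perceived character and competence, and w (between 0 and 1) is the weight given to the friend-enemy distinction; i accepts j's claims when T_{ij} clears some threshold. In harmonious times w is small, acceptance tracks character, and the networks of different factions overlap broadly. As conflict drives w toward 1, the character term (1 - w)C_j shrinks to nothing, acceptance tracks alignment alone, and the population splits into disjoint networks, each trusting its own psychopaths over the other side's saints.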
So, I've explained the concept of trust networks and given an overview of how they operate. In the next few sections I'm going to drill down and look more closely at how social forces within and between trust networks degrade the accuracy of collectively held beliefs. I call the action of these forces on belief epistemic load, and I'm going to separately analyze two salient components: signaling load and partisan load, both of which hinge on ad hominem judgments.
To tackle the first I'll begin with an abbreviated discussion of social signaling itself, already a familiar concept to my dear readers, and then use a few colorful examples to demonstrate how it degrades knowledge.
B. Let's Fry Our Brains On Social Media (Signaling Load)
We signal our identity, our positive character traits, and our membership in social groups and sub-groups by diverse means, which include clothing, hairstyles, aesthetic tastes, car models, bumper stickers, speech habits, customs, rituals, and even beliefs—professed or real, true or false. Such signaling is a basic component of our participation in society, and even refusal to signal, e.g. wearing utilitarian clothing, can and will be interpreted as a signal.
The prominence of these various signaling methods changes over time, and each has a different set of side-effects, on account of which their total merit is decidedly unequal. Belief is the most dangerous category of signaling by a long way; hairstyles and bumper stickers, on the other hand, are fairly benign. For while some may have died from dressing in climatically unsuitable uniforms, and a few from wearing inadvisably long hair too close to heavy machinery, millions upon millions have died because they adopted harmful beliefs to serve the same basic purpose. Indeed, socially motivated adoption of harmful beliefs is perhaps the single most common failure mode for civilized humans. Thus it behooves us to encourage our society to signal through other means whenever possible.
When our social regime uses beliefs to signal, we suffer a strong pressure to pronounce, demonstrate, and hold the beliefs that are optimal for signaling rather than those that optimally represent reality, which causes our collective beliefs to shift away from the latter and toward the former. I call this pressure on belief signaling load. Of course, beliefs always play a signaling role to some extent, but the degree and the particular types of beliefs involved differ by time and place, and they both influence the magnitude of the signaling load. I'll illustrate with the following pair of examples. They're more florid than the previous ones; since I'm writing this essay between novels I hope you'll forgive me.
Imagine, dear reader, that you're a young Occidental in the second half of the 20th century. Music has at this time an outsized importance: rock and roll songs are everywhere, the big hits are a big deal, everyone you know is going to concerts and wearing their favorite bands' t-shirts as well.
In another day and age you might not have been musically inclined, but rock and roll is so ubiquitous that you come to understand and enjoy it well enough, and you too navigate the host of bands to find the ones you like, wear their t-shirts, and take girls to their concerts. You're planning to be a high-powered lawyer with a big salary one day, so you pick bands that are polished and popular. Nothing too punky or arty—those bands are for the kids who are going to be brewing your coffee after you make it big; and when you see someone in the wrong t-shirt you're thankful for the clear warning to steer clear.
Your musical signaling gets you in with the right crowd, it draws the attention of the right kind of girls and lets them know you're the right kind of guy, and in short it allows you to advertise your identity and group membership effectively and at minimal personal cost, all while enjoying some pretty decent tunes and funding the arts. It's a win-win deal that clarifies social groups marvelously, and it seems so normal to you that you can hardly imagine society working any other way.
Okay. Now forget all that, and imagine instead, dear readers—I know this one will be hard—imagine instead you're a young Occidental in the first half of the 21st century. By the time you were old enough to tie your shoes your neck was already permanently craning forward, but you don't mind, because everyone else is addicted to their phone too. Music is passé, although you're unaware of just how passé, because you have no memory of the previous century. These days social signaling is mostly happening online.
You have accounts with several social media networks. Some of these are set up for signaling with photos and videos, showcasing lifestyles which may or may not actually be lived by the narcissists pictured, while others are set up for signaling with small chunks of text known as “takes.” (“Takes” are expressions of personal beliefs or opinions related to the current topic everyone is discussing. They don't require any special knowledge, experience, or insight that would justify opening one's mouth in public, but resemble instead the effusions of whales who gather in one place to spout the same water to varying heights.)
Like your 20th-century predecessor and everyone else, you know intuitively that you need to signal your group membership and identity so you can find the right crowd and the right girls; and you know intuitively that it's good to be popular. Since you're not quite handsome enough to share lifestyle photos of yourself surfing, you decide to post your takes instead. They come from the heart (sort of, maybe—well, actually you're not quite sure and it's not worth worrying about) but it so happens that they amount to paraphrases of already popular takes, signed under your own name. Not that you're intentionally copying, of course. It just ends up looking that way to an outside observer.
You don't have any special insight into whether the semantic content of these takes is actually true (who does?) but it feels right to post them, and lo and behold, your signaling helps to make you some friends. Your friends, naturally, express similar beliefs and congratulate you on yours, and everyone in the trust network you've joined echoes the same takes with small variations, each aiming to post the coolest and most popular expression of shared beliefs that are under no particular obligation to reflect reality. (I'll intervene in this story to tell you a secret, dear readers. Many of these beliefs are false! Can you believe it?)
You have lofty aspirations, much like our ambitious lawyer from the previous century, but law isn't looking like such a great career these days, so you've set your eyes instead on investment banking (salaries start at $150k they say). You notice some social media accounts are advocating for offbeat, unpopular, and even taboo beliefs—beliefs not shared by the big or cool accounts, nor the accounts of model finance professionals you hope one day to emulate. You instinctively avoid following any of these losers. You're not even sure you'd let them serve you coffee, because if they found a sly way to sneak one of their unpopular ideas under your guard, you might say the wrong thing at a company party and fall into the same hole they'll end up in.
While it would be harmful to acknowledge, let alone understand, weirdos with socially unacceptable views, conflict is popular, and everyone enjoys blaming problems on a bad guy lest they have to accept the blame themselves. Happily you live in a democracy (yay) and there's a political party called the Purple Party ready to serve the purpose. The Purple Party is a big group of second-rate people with dumb ideas, though none quite so dangerous that you're afraid to recognize their existence. You know they're second-rate because none of the cool and popular people, nor the professionals you look up to, are willing to openly align themselves with the Purple Party. All the right people are in the Yellow Party. Perhaps you haven't spent any time thinking critically about the ideals or policies or actions of either party, but your friends and role models love to bash the Purple Party and anything associated with it, and in fact, they're such a functionally perfect outgroup that there's no reason to learn any more about them.
By default you assume that claims from supporters of the Purple Party are false, or at least so misleading they're as good as false, and you assume that claims from supporters of the Yellow Party are true, or at least so beneficent they're as good as true; and the less the Purple Party likes a proposal from someone on your side the better it must be, even if it seems rather bizarre and perhaps quite dangerous on first blush. Indeed, you assume all this so intuitively that the surface impression rippling across your consciousness is of a man gravitating toward the truth and speaking his mind, nothing more nor less. Thus your social signaling has integrated you and your friends into a wider trust network oriented partly, though not exclusively, around the friend-enemy distinction—perhaps the very trust network Tom and Dick were building up in our previous section.
Your counterparts in the Purple Party, of course, have done the same thing, and each trust network exerts a repulsive effect on the other, driving their beliefs in the opposite direction. Since you'll become more popular with the right people if you pillory the Purple Party, you pepper your posts with a pickled peck of just such pillorying. You do it with moderation though: you're aiming to be a respectable salaryman, not an agitator. And it works perfectly. All the right people like you more, and the wrong people are powerless losers anyway—though you've figured out that it's standard practice to pretend they're powerful in order to assign them a greater share of blame than would otherwise be possible, and to portray you and your friends as plucky underdogs, which is, after all, kind of cool in TV shows and not a frame you'd be willing to cede to the other side.
In short, it's obvious to you that studying any of the sociopolitical issues underlying your beliefs in depth would be a total waste of time. You're just playing a signaling game—and you're winning. You carry the winning you learned on social media over into real life too. When you're chumming with other suits in the finance industry over drinks you profess the same beliefs and mock the Purple Party in the same way. They eat it up, because it confirms you're one of them. Success will soon be yours; you only have to sit down and stretch out a knee—just like that—and an aspiring trophy wife who's also learned to adopt the right beliefs will sit on it. The world is your oyster.
I hope you found these two stories entertaining, dear readers. They're admittedly a parodical, oversaturated, high-contrast version of reality, but not so unfair and exaggerated that you'd be justified in dismissing them. Millions of people operate in essentially the same way as our example signalers, but lack the capacity to reflect on their own behavior in such excruciating detail.
Without indicting absolutely everyone, I'll also mention that many self-identified moderates who aren't keen to firmly take the side of one party or another are still happy to jump in on particular issues (those with lower stakes and a clear majority position reducing the potential for blowback) where they play out the very same game without appreciably greater probity: moral grandstanding at the expense of preapproved scapegoats, staged safely within the bounds of conventional wisdom. Others remain silent but listen from the sidelines well enough to know which way the wind blows, and the right takes echo in their brains and shape their opinions even if they choose not to repeat them aloud. Certainly some people resist the whole charade, but not enough people; and because they don't play the game, their visibility in the social media ecosystem, and hence their impact on society, is too small to shift the balance of mass opinion.
The two parallel examples above display two different signaling regimes with the same primary function. Yet the first regime has a low cost to society, and perhaps even positive externalities (someone has to fund the arts, right? right?), whereas the second regime entails significant negative externalities on account of its high epistemic load.
I'll explain. Under the rock-and-roll signaling regime, music is serving two functions. The primary function is to signal personal characteristics and group membership. The secondary function is to provide aesthetic enjoyment. These two functions are imperfectly aligned (the best music and the music most effective for signaling are not the same), but little harm comes from the misalignment (financial incentives will favor signaling efficacy over aesthetic quality, but this is more of an irritation to musicians than a serious harm to society at large). For a discussion of the problems caused by signaling with aesthetic opinions, you can read my essay Against Good Taste. But the point that's relevant to us here is that under this signaling regime, the signaling load affecting general knowledge is low.
Now, under the social-media signaling regime, professed beliefs are again serving two functions, and the primary function is again to signal admirable personal characteristics and group membership. But this time the secondary, almost vestigial function is to represent reality and thereby enable society to make good choices. Remember that the typically selfish and unreflective individual signaler is motivated by the primary function, and pays the secondary function only the minimum attention that allows him to maintain the appearance of caring, and sometimes not even that. Because the two functions are poorly aligned and the secondary function makes a large impact on collective prosperity, this signaling regime has potentially enormous negative externalities. Yes, I'm referring to mass adoption of wrong or even outright crazy beliefs—the disastrous consequence of high epistemic load.
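The misalignment is easy to put in miniature. In the sketch below (mine, with quadratic losses assumed purely for convenience), a signaler chooses the professed belief that best trades off fidelity to the truth against proximity to the socially fashionable position. The optimum is a simple weighted average, so as the signaling weight rises, the professed belief slides smoothly away from reality:

```python
def professed_belief(truth, fashionable, signal_weight):
    """One-dimensional toy model of signaling load (illustrative only).
    The signaler minimizes
        signal_weight * (b - fashionable)**2
            + (1 - signal_weight) * (b - truth)**2,
    whose minimum is the weighted average returned below."""
    return signal_weight * fashionable + (1 - signal_weight) * truth

for w in (0.0, 0.5, 0.9):
    b = professed_belief(truth=0.0, fashionable=1.0, signal_weight=w)
    print(f"signaling weight {w:.1f} -> professed belief {b:.2f} (truth at 0.00)")
```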
You might object that no one really believes the things they profess to believe for the purposes of signaling. If only this were the case! We've already pointed out that direct knowledge is often unavailable, so that ad hominems are the only means even the most intelligent people have of distinguishing true claims from false ones. Because these necessary ad hominem judgments are largely interpretations of signals, the beliefs that make for good signals also make their proponents seem more trustworthy—and because they emanate from these more trustworthy-seeming people, they're also likelier to be taken as true.
But that's not all, dear readers. Trendy false beliefs typically sound better than the truth, so they're very tempting to the ignorant. Educators seem to be the kind of people who are particularly enamored of them, and kids scarf them down like candy from strangers—their parents rarely bothering to warn them away. When they do accrue more experience they're slow to correct their errors. And for many adults too it's psychologically easier to make professed beliefs into genuinely held ones: by so doing they gain an advantage in public discourse, since they don't have to fight through the friction of subjectively felt dissonance when they're speaking. Natural liars and natural fools enjoy a similar advantage. In fact, the belief signaling regime is a dream come true for sociopathic status maximizers of all stripes, who extend the competitive lead over naive altruists they've enjoyed ever since village size exceeded Dunbar's number and cousin marriage was banned out to the seventh degree. Some of them are reading right now, laughing to themselves that Sanilac is wasting time combating socially advantageous lies on the meta level while they're doing more productive things, like making money and donating a tax-deductible fraction of it toward the advancement of animal welfare to price their lessers out of red meat.
Low quality people convincing themselves to believe the false claims they use to signal their high quality isn't the only problem. As the situation worsens, respectable public figures begin to shun the truth, so that when intelligent and upright citizens who aren't unnaturally absorbed in the relevant issues (that is, the vast majority) look to them for guidance, they're misled; before long lies become the conventional wisdom in polite society. Once false beliefs gain such social preeminence that it's taboo to refute them with the truth, public reasoning is obliged to take the lies as premises, and thus forced toward faulty conclusions one cannot refute honestly in the public square—even in cases where “everyone knows” they're wrong. Whence an odd phenomenon one observes from time to time like the aurora borealis of a society frozen by epistemic darkness: to promote the public good, well-intentioned advocates attempt to “refute” these faulty conclusions with dishonest arguments of their own, having been barred from presenting the honest arguments against them precisely because they depend on taboo premises! This never works very well, nor for very long; and in the end, unprincipled exceptions are ferreted out and important public decisions are made on the basis of false beliefs even if everyone does know they're false. Sometimes these include the foundational assumptions of a nation or culture, setting future generations up for serious trouble.
In short, social signaling with false beliefs genuinely degrades knowledge and creates problems that can't be blithely waved away just by saying, “No big deal—it doesn't matter if people keep saying that, because everyone knows it's not actually true.”
Now let's consider a more concrete example of signaling load: bag bans. Though only recently instituted, they're already a classic example of virtue signaling. I'm sure you're familiar with this concept. While real and useful, it's sometimes overused as a catch-all way of criticizing any virtuous behavior that could have dubious motives, even if that behavior might be genuinely virtuous, or simply have good effects. Such criticism is misguided, because the most important element of beneficent social engineering is to align publicly recognized virtue with genuine virtue in order to maximize pursuit of the good. Be that as it may, our aim here is simple: we're just going to observe the effect virtue signaling has on collective beliefs.
I'd like you to imagine, dear readers, that disposable plastic bags are better for the environment than paper bags, but most people aren't aware of this and don't have time to read research demonstrating it. And suppose paper bags seem/feel/sound more natural and therefore better for the environment (they're brown like tree bark and also kind of rough!) even though the reverse is true. And suppose further that “saving the environment” is the most publicly recognized locus for demonstrating care. In such a scenario individuals will draw a personal advantage from proclaiming that paper bags are better than plastic, spreading misinformation to this effect, and even supporting a campaign to ban the latter, creating the impression they're virtuous people motivated by high morals and concerned with the public good. If their campaign starts to take, many respectable public figures, who know well when to catch a rising tide, will join the chorus denouncing plastic.
For the most ambitious social climbers even paper won't be a sufficient signal of how virtuous they are. Only reusable organic cotton bags will do. Sadly, while reusable organic cotton might send the most impressive signal, research shows it's actually the worst choice for the environment by a wide margin. Still—are you going to let yourself be caught with a cheap disposable plastic bag when all your friends are toting their locally grown kale home swaddled in organic free-range fair-trade cotton? Of course not. You'd look like the wrong kind of person—your status as a trusted group member would decline precipitously; and when you brag about the environmental benefits of your new hemp hiking boots, they wouldn't take you seriously. (Strangely, no one ever proposes to ban reusable organic cotton bags. If they really did care about the environment and wanted to go around banning things, they would.)
I spent about eight hours looking into the research behind plastic bag bans for the sake of these few paragraphs, and even such a substantial waste of time wasn't enough to clear up all the relevant ambiguities. It's possible, dear readers, that I've still overlooked some essential subtlety and drawn the wrong conclusions—in fact, maybe you shouldn't trust me! But the average voter won't spend eight seconds. He'll make a quick ad hominem judgment about the ban's proponents, whereupon their display of good intentions, bolstered by natural intuitions about words printed in green (green!) ink on rough brown paper and the sheer number of respectable people who seem to agree, will persuade him to believe the claims. Mainstream support will build into an avalanche, and retailers will loudly join in to make the inevitable a marketing opportunity. Eventually almost everyone will say and believe that paper is better than plastic—albeit not quite as good as reusable organic cotton.
In short, the initial conditions generate a signaling load that gradually pulls the whole trust network into falsehood. These shared delusions lead in turn to harmful policies and behaviors which, much like child sacrifices and circumcision, have the curious effect of reinforcing the delusions; for it's psychologically difficult to countenance the possibility that you and everyone you know could have made or enforced sacrifices which in the best case serve no purpose at all, and in the worst are both destructive and completely retarded. Those who've sacrificed in vain seek out justifications for the unjustifiable to avoid the anguish of regret, and one can assume a weighty initiation sacrifice would improve cults' member retention markedly.
The example of bag bans is instructive for several reasons. First, our intuitions about environmental impact are here directly opposite to actual environmental impact. Reusable organic cotton sounds the best, but it's the worst; disposable plastic sounds the worst, but it's the best; paper sounds better than plastic, but it's not as good. Since the beliefs optimal for signaling—those that flatter our uninformed intuitions and present us to others as caring and self-sacrificing—are very far from the beliefs that accurately reflect reality, they produce an especially high signaling load.
Second, there's a feedback loop that further augments this signaling load. At first just a few brave pioneers leverage human intuitions about brown paper, organic cotton, and reusability to show care and win status points. When this works, more people join in, and by consequence the faulty intuitions begin to seem more reliably true to those watching from the sidelines, who make ad hominem judgments based in part on crowd size (simple rule: “many people saying the same thing are more trustworthy than one”). This encourages others to join the crowd, causing the faulty intuitions to seem more true, causing the signaling to be more effective, causing more people to join the crowd, etc., etc. If momentum is strong enough, eventually most people within the trust network believe the falsehood, and that falsehood becomes the default assumption even for smart and inquisitive people who haven't yet taken the time to inquire at length. Dissent is then punished with a loss of status points that locks in the lie, because as the false belief seems truer, dissenters seem both even more wrong and even less caring, and therefore lower status—and therefore even more wrong! This cascade effect is one reason why a dubious idea might take hold in one society but not another: small variations in the early stages may determine whether or not an avalanche occurs. (Note that the process we've delineated here can also launch true ideas to prominence, provided luck favors them with a signaling advantage.)
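The avalanche dynamics lend themselves to an equally simple sketch: a Granovetter-style threshold model, my own toy construction with every parameter illustrative. Each person adopts the fashionable belief once the share of adopters they see exceeds a personal threshold (the crowd-size rule from above), and a handful of status-seeking pioneers adopt unconditionally. Whether the avalanche occurs can then hinge on small differences in the number of pioneers:

```python
import random

def cascade(n=1000, pioneers=20, seed=3):
    """Threshold model of the status-feedback loop described above
    (illustrative only). Each person adopts once the adopter fraction
    exceeds their personal threshold, drawn uniformly at random;
    pioneers adopt unconditionally and never defect."""
    random.seed(seed)
    thresholds = [random.random() for _ in range(n)]
    adopters = pioneers
    while True:
        total = sum(1 for t in thresholds if t < adopters / n)
        total = max(total, pioneers)
        if total == adopters:
            return adopters / n              # final share of believers
        adopters = total

# Exact outcomes vary with the random seed, but typically a small band
# of pioneers fizzles while a slightly larger one sweeps the population.
for p in (10, 20, 40):
    print(p, "pioneers ->", f"{cascade(pioneers=p):.0%} adoption")
```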
Incidentally, the performance of reusable plastic bags varies enormously depending on the number of times they're reused, and studies vary further in the number of reuses they consider to be a necessary minimum. Advocates for one side or the other can easily portray them as better or worse than disposable plastic bags by assuming whatever reuse numbers suit the conclusions they want to draw, whether these are practically possible or not, leaving them great leeway to mislead the public without lying directly. Take a moment to consider, dear readers, just how many opportunities there must be to lie about more complicated issues if it's so easy to create confusion about grocery bags! And in truth even this debate is disingenuous, because if the extra cost of reusable bags—estimated at $5 per capita by two independent studies—were simply levied as a direct tax and earmarked for environmental cleanup, we would attain a greater benefit with much less daily annoyance. But of course, such a tax would be useless for signaling—defeating the true purpose of the ban.
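The arithmetic behind that leeway fits on a napkin. In the sketch below the manufacturing footprints are placeholders I've invented for illustration (real life-cycle figures vary by study and by impact category), but the structure of the trick is faithful: pick the reuse count, and you've picked the conclusion.

```python
def breakeven_uses(reusable_footprint, disposable_footprint):
    """Number of trips at which a reusable bag's per-use footprint drops
    below a disposable's. Footprints are per-bag manufacturing impacts
    in arbitrary units; the inputs below are hypothetical."""
    return reusable_footprint / disposable_footprint

# Hypothetical bags: one costing 20x a disposable to produce, one 150x.
for name, footprint in [("thick reusable plastic", 20), ("organic cotton", 150)]:
    print(f"{name}: better than disposables only after "
          f"{breakeven_uses(footprint, 1):.0f} uses")

# An advocate assuming 200 reuses calls both bags a clear win; a critic
# assuming 25 (a bag lost or worn out within months) calls cotton a
# disaster. Same inputs, opposite conclusions.
```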
Whatever the truth turns out to be, both the economic inefficiency and the environmental harm caused by choosing the wrong grocery bags (quite insignificant) pale in comparison to the potential consequences when signaling load affects the collective beliefs relevant to issues with higher stakes and greater inter-group acrimony. I'll leave it to you to imagine what those issues might be.
Let's summarize before we move on to the next section.
Humans use declarations and demonstrations of belief as social signals that increase their status and perceived trustworthiness within their group. Using beliefs as social signals interferes with their other function, namely to represent reality, by pulling them away from reality and toward the optimal signals instead. This signaling load isn't just a plebeian bias we can easily shrug off; it affects smart and open-minded people too, and the divide between privately held and publicly declared beliefs isn't wide enough to nullify it. That's because, as we established in Part I, most of our knowledge actually derives from ad hominem judgments in one way or another, and ad hominem judgments are profoundly influenced by signals of character, status, and group membership. Signaling load degrades our picture of reality, sometimes severely, and this degraded picture of reality leads to poor choices, which lead in turn to poor outcomes.
Signaling load isn't the only force degrading collective beliefs. In the next section I'll examine a second force I call partisan load.
C. Let's Kill A Virtuous Man (Partisan Load)
Imagine, dear reader, that you're holding a gun. You come upon a brawl between a known local blackguard and a visiting tourist of evidently respectable character. Their struggle is so vicious that one of them will surely die whether or not you intervene. Both combatants shout, “Rescue me! Shoot him!” Whom will you shoot? The blackguard, of course. Now imagine the same scenario, but you're at war. The gentleman is a brave enemy soldier; the blackguard is one of your countrymen, whose second-degree murder sentence was commuted so he could be pressed into the army. “Rescue me! Shoot him!” Whom will you shoot this time?
In each case you make an ad hominem judgment. But your answer changes, because as an inter-group conflict intensifies, the weight you give to the friend-enemy distinction increases at the expense of character traits like honesty, good judgment, and uprightness.
The same pattern applies in the realm of knowledge. Ad hominem judgments of friend and enemy work as a kind of epistemic immune system, exerting a protective effect in circumstances where our enemies might want to deceive us. The price for this protection is a relative reduction in the weight we give to good character, which during harmonious times is more closely correlated with reliable sources of information. Even if it's beneficial during a conflict, partisanship can never lift our ability to learn new truths above where it stands in peacetime; it rather impairs knowledge production and degrades the accuracy of beliefs for all factions, and thus society as a whole.
There are, however, exceptions to this analysis. Curiously, the net knowledge degradation caused by inter-group conflict isn't directly proportionate to the level of inter-group violence. This is because a serious threat, like a war, tends to focus the mind on realities relevant to surviving that threat; whereas when the stakes are lower but blood runs high, posturing to win intra-group status competitions has a more favorable cost-benefit ratio. Further, when tensions are low, excessive displays of partisanship are selected against, because they reduce the perception of trustworthiness. One sees here that the brew of factors impacting ad hominem judgments is very complex and impossible to model fully in a conceptual analysis—yet we intuitively understand these complexities in the context of narrative examples and real life.
Antipathy between conflicting trust networks doesn't just deemphasize good character. It also encourages doubt, paranoia, and aversion to the claims of perceived enemies—even when they're true. This partisan load takes the form of a repulsive force driving the beliefs of conflicting trust networks in opposite directions. “The evil enemy believes X, therefore I believe not-X.” In its most extreme forms one can witness grown men playing Opposite Day like small children: “The evil enemy believes X, therefore I believe negative-X!”
AIDS denialism in South Africa is an especially striking real-world example of the harmful effects of partisan load. In the 1990s and early 2000s, paranoia about racism and colonialism led the South African government to reject the Western scientific consensus that HIV causes AIDS and assert that the medicines developed by Western institutions to treat HIV were harmful rather than helpful. Denialists were able to trot out a few credentialed dissidents who supported these views to give their case the appearance of legitimacy and paint the medical establishment as corrupt. (N.B. dissidents are not always right, even when the system is broken.) Indeed, some South Africans believed Western medicines were intended to kill. If this seems farfetched, consider that vaccine skeptics in the West expressed a very similar range of sentiments in the polarized political climate surrounding the COVID epidemic, with the most extreme asserting that the new mRNA vaccines were part of a vast conspiracy intentionally aiming at mass murder or sterilization.
Denialism resulted in over 300,000 preventable deaths in South Africa between 2000 and 2005. While it's tempting to ascribe such a disaster purely to stupidity, long known to be the most powerful force guiding the behavior of human groups, this wasn't the only cause. It happened because the kind of ad hominem judgments we're all forced to make became heavily weighted toward the friend-enemy distinction in the context of an inter-group conflict, enabling the spread of paranoiac skepticism about information provided by a purported enemy. In short, it happened because of partisan load. While the death toll may not always rise so high, we see the same game play out whenever knowledge impacts divisive issues.
In the early days of the COVID epidemic, inter-group tensions pushed partisan load to the point of absurdity, degrading the quality of collective knowledge even on seemingly simple issues. The shift in beliefs about masks is especially instructive. (Perhaps my memory of these events is imperfect, but I've no wish to relive them by digging up old social media posts to confirm the story, so you may consider this a mere fable if you like.)
Under the direction of prominent members, one side, which we'll call Trust Network A, initially formed the collective belief that masks wouldn't help to reduce the spread of the disease. The other side, Trust Network B, had been primed by the high acrimony and antagonism of that year to doubt any position taken by Trust Network A, and believed to the contrary that this was a lie invented to cover up a shortage. Trust Network A vociferously denied this accusation. The advance guard of Trust Network B, always keen on self-reliance and preparedness for apocalyptic disaster (traits they fetishized to feed their never-to-be-realized dream of return to a Wild West where nature could be relied on to reward individual ability over parasitic social finesse) even began sharing recipes for homemade masks on social media.
And then, dear readers, before these recipes could spread from the fringes to the inattentive heartland of Trust Network B, Trust Network A flipped its position. Now Trust Network A collectively decided that masks did slow the spread of the disease, and promptly forgot its original position. It even jumped past this, and declared further that the very recently dismissed masks were so effective they should be mandatory! Thereupon Trust Network B, like a raging bull who'd not only seen red but was also being prodded from behind by spears, reversed its position too—and declared that masks do not, in fact, actually work. Not only that: some members of Trust Network B went further still, and declared that masks were harmful—that they were even worse than the disease they were intended to prevent! (Did they copy this playbook from the South African government, or was it just a happy coincidence?) When Trust Network A heard these refusals it was fired with antagonistic rage, like a skunk driven out of his burrow by wild dogs. Its members redoubled their insistence on mask mandates, demanded fresh mandates on other matters besides, and began advertising their masked faces as signals of epistemic solidarity—never mind that they'd all held the opposite view just a short while prior.
The question of whether masks do or don't reduce the spread of disease is a straightforward and objective one. There is, dear readers, only one right answer. But the bizarre reversals I've just described cannot be explained by a story of independent actors researching a question directly and arriving somehow at different conclusions. It's quite clear that the beliefs of both Trust Networks A and B were motivated by partisan load—which caused them to trust friendly sources without skepticism and reject enemy sources without consideration, even to the point of playing Opposite Day. Of course, since the question of whether masks help is as simple as X or not-X, partisans of one side ended up being correct (for our purposes here it doesn't matter which, and I won't venture my opinion). A tiny number of people may have even examined the available information with enough care to determine the right answer. Nevertheless, the great majority of them were only correct by accident, and deserve no more credit for the right answer than the Queen does when a flipped coin shows heads.
I should pause here to disabuse you of the mistaken notion that centrist, moderate, or neutral parties don't participate in trust networks and aren't subject to partisan load. Recall first that trust networks aren't closed containers, but chaotic webs which often lack clearly defined boundaries. Centrists are tied into these and have their own sets of trusted and distrusted sources. They are, moreover, subject to partisan load, but in their case it takes the form of forces repelling them from the extremes and pulling them toward the center—regardless of which beliefs are true or false judged on their own merits.
It's typical for centrists to lazily adopt the attitudes that allow them to minimize friction and get along with everyone. When it comes time to signal, they vigorously signal their normalcy, which amounts to a regurgitation of the acceptable opinions inculcated by the laugh track of late-night comedy television (designed for this purpose). So the idea that centrists are by nature the intellectual superiors of extremists isn't remotely true. Finally, even the most neutral parties still need to join sets of mutually trusting sources and engage in signaling in order to build an accurate picture of the world and function in human society (and as I'll explain in Part IVC, their relative indifference to partisan conflict is not necessarily so wise as it might appear). Thus the epistemic critique of partisanship in this section should not be read as a denunciation of taking sides as such, but as an admonition to subject your beliefs to the appropriate scrutiny if and when you do, in order to minimize the inevitable distortion.
Now I'd like you to consider, dear readers, a worst-case scenario for collective knowledge. Imagine you're living in a society split into two highly antagonistic trust networks that fight not on the honest, material plane of physical war, but on the slimy, smog-ridden plane of ideas. For signaling your society relies largely on demonstrations of belief that show allegiance to one trust network or the other. Individuals who are more afflicted by partisan and signaling load and thus further from reality enjoy more influence, visibility, and power than the average person, and far more than is justified by the accuracy of their views. This applies to the intelligent as well as the stupid, because intelligence and resistance to epistemic load are distinct and poorly correlated traits. Indeed, smarties who are deluded or deceptive produce sophisticated rationalizations that can do greater harm than simplistic ones which are more easily seen through.
Does this sound familiar? It's a fair description of Occidental society in the age of social media.
Let's summarize before we move on.
Antagonism between conflicting trust networks causes their members to weight the friend-enemy distinction more heavily than other elements of character when they make ad hominem judgments, and to distrust and even invert the collective beliefs of enemy trust networks. I call the distortionary pressure this exerts on beliefs partisan load, and although it offers some protection against deception, it has the net effect of degrading the accuracy of collective knowledge relative to what it would be in the absence of conflict.
D. Let's Fight A Dirty War (Epistemic Load)
So, both signaling load and partisan load degrade our knowledge. They also overlap with and amplify each other, and their combined effect is more than merely additive. I'll call the sum of all forces inducing knowledge degradation epistemic load. For reasons I'll discuss further on, some forms of social organization induce a higher epistemic load than others, and some realms of knowledge are inherently prone to a higher epistemic load than others (the hard sciences are relatively, but not entirely, immune).
Signaling load and partisan load aren't the only forces that contribute to epistemic load, but their effects on collective knowledge are the most complex and the most worthy of extended analysis. One could easily define others and produce a fairly conventional list of self-explanatory biases. I'll mention a very simple and familiar one in passing: incentives. Men tend to believe things they're incentivized to believe, whether or not they're true. Homeowners, for instance, usually believe it's good for society when the cost of buying a house continually increases, even though this is manifestly false. The best solution is to turn this negative into a positive by incentivizing correct beliefs: paying people for good outcomes or correct predictions, for example, as in betting markets. Unfortunately it's often impossible to do this, and in practice false beliefs, or at least false professions of belief, are incentivized instead.
I'll illustrate the combined effects of signaling and partisan load with a recent example. Before I continue, please remind yourself that our purpose here is only to understand how people form their beliefs. For this reason we're once again not interested in which side is correct. To help you set aside your opinions about the issue I'm going to write in the past tense, as if I'm telling a story. (If you think you can discern my own opinion from the following text, you're wrong.)
The two sides in the anthropogenic climate change debate of the late twentieth and early twenty-first century were often incorrectly portrayed as “pro-science” and “anti-science.” While there were some genuinely anti-science doubters, the true source of disagreement about climate change was not the validity of scientific method. What doubters were doubting was whether the people and institutions carrying out the method could really be trusted. Internal consensus isn't irreproachable proof of trustworthiness, because partisans (like Tom and Dick from our earlier academic example) can potentially promote their allies and squeeze out dissenters until institutions are dominated by a single unified trust network telling the same slanted story.
The critical point omitted from virtually every discussion of the issue was that detailed climate models are simply too complex for the lay public to understand or verify. It's fundamentally impossible for politicians and voters to make decisions about issues like these without relying on ad hominem judgments of institutions, researchers, and advocates—and this limitation applies to both doubters and believers. The two sides drew opposite conclusions because they differed in their views of who was a friend and who an enemy, who was reliable and who unreliable. Neither side was keen to acknowledge that their positions were derived from ad hominem judgment rather than direct knowledge, because such an admission would have weakened their credibility in public debates where the appearance of scientificity was an advantage. So they ended up talking past each other, trading graphs and equations when only open discussion of character would have gotten to the heart of their disagreement.
When we're unable to carry out a direct verification, whether because we can't see our opponents' cards in a game of poker or because we don't have years to learn the necessary scientific background in a game of climate models, we determine the validity of information by judging its providers, sometimes on a frankly partisan basis. “What's their track record?” “Do they have a motive to deceive us?” “Do they agree with us about other issues?” “Are they our kind of people?” “Who, whom?” etc. But to bolster our case and our ego many of us pretend, both to ourselves and others, that we've drawn our conclusion on the basis of direct knowledge to which we've in fact no access. Unfortunately, the pretense that we are or even can be above ad hominems bars the way to reflective self-criticism—and thus decreases rather than increases the accuracy of our beliefs.
The climate change debate shows how partisan load and signaling load can overlap and amplify each other, creating a cascade effect still more intense than that described in our bag ban example. The amplification occurs because beliefs about climate change are affected not only by partisanship and by signaling, but by the signaling of partisanship. I'll narrate this cascade in the abstract, leave you the joy of applying it to the particulars of the case, and sketch a toy simulation after the list.
- Prominent members of Trust Network A endorse Position X, while prominent members of Trust Network B denounce Position X.
- The rest of Trust Network A endorses Position X because they trust their prominent members and distrust their enemies' prominent members; the rest of Trust Network B denounces Position X for the same reasons.
- Position X becomes ubiquitous in Trust Network A, while Position not-X becomes ubiquitous in Trust Network B.
- Because of this ubiquity, Position X can serve as a signal of allegiance to Trust Network A and Position not-X a signal of allegiance to Trust Network B, each increasing status and trustworthiness within those respective networks and decreasing it in the opposing network.
- This further incentivizes members of Trust Network A to loudly endorse Position X and members of Trust Network B to loudly denounce Position X.
- Because even more members of Trust Network A now endorse Position X even more loudly, it seems even more true to members of that trust network and becomes an even more widely recognized signal of allegiance. And the same happens for Position not-X in Trust Network B.
- Open dissent is strongly disincentivized because it causes one to appear disloyal and thus lose status within one's trust network.
- Now both trust networks are locked into opposing beliefs and hold them with much greater confidence than was justified by the initial information. Even if further evidence eventually proves one side right, the other will cling to its mistaken beliefs for an unreasonably long time, because whoever breaks rank first will seem traitorous and suffer a loss of status.
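For those who'd like to watch this loop run, here's a toy simulation in Python. Everything in it is an illustrative assumption of my own, not a measured quantity: each network starts with a small prominent minority staking out a position, and each undecided member adopts the network's position with a probability that grows both with his own network's consensus (trust) and with the enemy network's opposing consensus (partisan repulsion).

```python
import random

random.seed(0)

N = 1000          # members per trust network (assumed)
PROMINENT = 0.02  # fraction who stake out a position first (assumed)

# +1 means "endorses X", -1 means "endorses not-X", 0 means undecided
net_a = [1 if random.random() < PROMINENT else 0 for _ in range(N)]
net_b = [-1 if random.random() < PROMINENT else 0 for _ in range(N)]

for rnd in range(10):
    share_a = sum(v == 1 for v in net_a) / N    # X's share in A
    share_b = sum(v == -1 for v in net_b) / N   # not-X's share in B
    print(f"round {rnd}: X in A = {share_a:.0%}, not-X in B = {share_b:.0%}")
    for net, own, enemy, pos in ((net_a, share_a, share_b, 1),
                                 (net_b, share_b, share_a, -1)):
        for i, v in enumerate(net):
            if v == 0:
                # trust in one's own network's consensus, plus partisan
                # repulsion from the enemy network's consensus
                if random.random() < 0.5 * own + 0.5 * enemy:
                    net[i] = pos
```

Run it and you'll see both networks saturate toward unanimity within a handful of rounds, each convinced by nothing more than its own echo and its enemy's denials.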
As a final summary, here's a geometrical rendering of how partisan load and signaling load work together to degrade knowledge and increase the divergence of trust networks.
The brown square in Panel 1 represents a real object whose shape is not immediately obvious to observers. Suppose that in the absence of any epistemic load, we would estimate the location of its vertices at the green dots in Panel 2. These beliefs generate an imperfect but recognizable approximation (the black polygon) of the real object (the brown square).
The yellow and purple polygons in Panel 3 represent the shapes one would publicly describe to maximize personal benefit from social signaling in two different trust networks. Note that these optimal signaling shapes are not the same for the two networks; nor do they match the real object. We can think of the vertices of both the original brown square and the two colored polygons as exerting an attractive force on beliefs (the green dots) that increases with distance. This is shown in Panel 4. The attractive force exerted by the two colored polygons is signaling load, and its magnitude varies depending on the circumstances.
When this signaling load shifts beliefs toward the colored polygons the new equilibrium beliefs end up in different locations and describe different shapes (Panel 5).
Now, suppose the two trust networks are antagonistic. Their beliefs will exert an additional repulsive force on each other, pushing them toward the pastel gold and lavender dots as shown in Panel 6. This repulsive force is partisan load, and once again its magnitude varies depending on the circumstances.
Panel 7 shows the final shapes described by the collective beliefs of two antagonistic trust networks. They now bear little resemblance to the real object seen in Panel 1.
Epistemic load degrades the quality of knowledge right across society and in all of the conflicting trust networks, even if one is less bad than the other. This last point is critical, but usually missed by partisans. Society becomes like a ship trying to navigate the reef with a distorted map: unless providence intervenes, it's sure to run aground. And all this is before we consider the impact of foul play. A signaling regime with potent effects on mass belief is practically begging political actors, both state and non-state, to manipulate it for their own nefarious purposes, causing further degradation. I'll illustrate in the next section.
E. Let's Deal Drugs (Hacking Trust Networks)
This time I'd like you to imagine, dear reader, that you're cynical and amoral. Not on the level of a politician though; you're just an entrepreneur whose predominant motive is profit. You want to sell your product, whatever it takes, and you want to sell it today—on social media. For this example let's say your product is some kind of vitamin supplement, but it could be a lot of things.
Now, the evidence supporting your supplements is weak at best. But you know if you cite enough n=11 p=0.05 studies from universities in Iran, India, and Idaho people will be a little impressed by the sheer quantity of sciencey-looking citations and not take the time to sort through them to the point of recognizing they're all bunk. Ultimately they're going to judge your claims on an ad hominem basis—you're a salesman, so you don't need to read my essay to understand this; it's in your blood.
As we discussed earlier, people on social media are repeating unexamined beliefs to participate in a signaling game that increases their status within their trust network. So, you decide to hack the system to maximize your personal profits. It's easy. You do market research to determine the demographic most likely to purchase your supplements, create a social media account, and repeat all the takes that signal status, trustworthiness, and friendship to that demographic. You also share pictures of your lifestyle and physique that are flattering to you and aspirational to your target audience. Since you're amoral and profit-driven, it's easy for you to ignore any vestigial awareness of reality that might hamper normal humans and instead send the perfect signals regardless of their truthfulness. Once you develop a following you begin to shill your supplements. And it works. The right people trust you because they can see you're good and on their side. In fact, you're so unconcerned with representing reality that you seem superhuman and almost more on their side than any of their actual friends!
Yes, I'm aware that this is just standard marketing practice updated for modern times. The point relevant to our current argument is that amoral or malicious actors can easily hack the various forms of ad hominem judgment we've analyzed in this essay to build an unjustified trust in their products and claims. This sales technique is more effective now than it was in the past, because social media has increased the importance of personal brands relative to corporate brands. And it isn't limited to selling vitamins. It works well for products that are connected, however vaguely, to culture and lifestyle. And it's especially effective for promoting ideas.
The possibilities for political manipulation are too obvious and too boring to expand on at length, but I'll cite some in brief. For instance, you can signal allegiance to an enemy trust network to gain entry, and then sabotage it, perhaps by exaggerating unpopular positions (e.g. extremism and calls for violence), perhaps by drawing attention to distractions like nonexistent conspiracies, perhaps by promoting pied-piper leaders, or perhaps by earning the trust of key members and then exposing their dirty personal secrets. You can also sow hate and distrust between different factions within an enemy trust network to reduce its coherence and persuade some of its members to defect (e.g. by fomenting a battle of the sexes within a mixed-gender coalition to break it in half and render it impotent), or shrink an enemy trust network to an uncompetitive size by encouraging their purists to gatekeep over insignificant differences. You can flood a small enemy trust network with so many of your own agents that even if most of the latter are detected and purged, the resulting atmosphere of suspicion will cause legitimate members to suspect each other, effectively inducing an autoimmune disorder. And you can denounce essential keystone beliefs and their proponents while valorizing unimportant or false beliefs and their proponents, presenting the latter as an alternative in order to drive an enemy network down a false trail into a dead end.
It's difficult, and perhaps impossible, to protect against all these modes of attack, so once again the best way to minimize trouble is to avoid having determined enemies in the first place—and most especially to avoid having the kind of determined enemies who can pass as allies.
The most powerful hacking technique of all operates on a higher level of structure. You can use media, education, and the arts to change which values and ideas are associated with good qualities and which with bad, and to create new taboos and erase old ones. In other words, to change the optimal signals (the yellow and purple polygons from our geometrical example), causing signaling load to pull beliefs in a new direction, so they eventually arrive at a new equilibrium point. You can then repeat the process, shifting the optimal signals even further, and iterate until collective beliefs are drawn quite far from their original position. Of course, this isn't just a top-down process, because changing conditions on the ground can bring about changes to the media and arts naturally, which then have a feedback effect solidifying cultural understanding of the new conditions. But that doesn't mean top-down manipulation by artists, journalists, and their employers has no influence. To the contrary.
Let's consider the example of drug decriminalization. I've no doubt that many of my dear readers are in favor of decriminalization—and indeed, what could go wrong? True, the zero-tolerance policy in Singapore cut overdose deaths, but only at the unconscionable price of killing drug dealers instead; and I myself am a noted advocate for exporting traditionally Oriental decadence to the Occident. The following analysis of media influence on perceptions of decriminalization will hold regardless of whether decriminalization is good or bad, and regardless of whether you're in favor or opposed: one can manipulate the value of signals for good causes as well as bad ones.
Suppose, then, that drug decriminalization is one of your faction's policy goals, but the public widely views drugs as harmful, low status, and even taboo, and decriminalization as out of the question. And suppose your faction has influence in the media and arts, especially those arts which have the greatest power over the public, namely television and cinema. Now, little by little you can insert into these television shows and movies, whose nominal topics may be entirely unrelated (generic action, comedy, romance, etc. plots), positive portrayals of drug users and negative portrayals of law enforcement. Manipulating status in this way is perhaps the most powerful of all propaganda techniques.
The best approach is the subtlest one—you won't have your heroic lead actor shooting up out of the blue. Instead you'll use minor background characters to normalize drug use. That is, you'll portray it as a casual background activity that normal people do on a normal day, and therefore not taboo or really even bad, and certainly nothing that should be criminal, because criminals aren't normal. To get there you'll start very small, with sympathetic characters in bit parts, then comedic actors, and eventually leading roles. The public will eat it up, provided you feed it to them slowly enough.
You can do the same thing in the news media. Consider the example below. This particular article presents itself as an anti-drug article. But is it? To answer that question let's set aside the question of the author's intent, which is unknowable, and just focus on the effect. Note that the information is presented via narrative, anecdote, and arbitrarily selected biographies instead of a statistical report. This already causes us to perceive the issue through the lens of character.
The payload here isn't in the article's factual claims, but in the headline and images you can see above, so it's these I'll analyze. Cocaine is basically pizza, just a normal thing normal people normally order from a normal “delivery service” on a normal day, and a pastime “long popular among New York professionals” (in other words, associated with above-average status). On the left, a thirtysomething woman wears humble clothing and glasses and holds an apple pie over a background of colonial windows; on the right, a flannel-clad fat lady with a little dog represents the median casually dressed 50-year-old exurban American woman as she displays herself to family on Facebook; at the center, a proud expectant father with his pregnant wife.
Whatever the author's intent, the message is that cocaine is wholesome, all-American, family-friendly, and indeed for everyone—including your nice cocaine aunt with the cute doggie who probably also enjoys a sniffle now and then. Faced with this array of symbols, only the most curmudgeonly oldster could come away believing it should be illegal, because normal people like these just shouldn't be treated like criminals. Indeed, the article's apparently anti-drug message perfectly sets up a popular talking point for decriminalization advocates: if cocaine were legal (as it should be, because it's a normal thing normal people do), it would be properly regulated and therefore not tainted—and the charismatic megafauna on display would still be alive today. Criminalization is therefore doing harm.
The article affects our ad hominem judgments in two senses. The obvious one is that the wholesomeness of the individuals pictured lends direct support to the cause of decriminalization. Less obvious is that association with these wholesome drug-users will indirectly increase the assumed trustworthiness of related groups, including other drug users, decriminalization advocates, and their supporters, and thus increase the apparent reliability of claims made by these groups. All this shifts the epistemic load on relevant beliefs in a direction that benefits the cause of decriminalization and makes it appear wiser and safer than it seemed previously. And with repeated shifts over time, the public won't just change their values to favor decriminalization. They'll actually come to believe the facts support it, because the facts insofar as they know them are nothing more than derivatives of ad hominem judgments. If the article had instead told the same story, as it could easily have done, by highlighting the biographies of unpleasant characters—violent criminals and lowlifes—this beneficial shift would not have come about.
Below you can read, should the mood strike you, less heartening tales of decriminalization past.
The media's influence on epistemic load is a voluminous topic, but this brief analysis suffices for our purposes here. Before closing I'll mention one further tactic for manipulating trust networks that's in common use today: soft censorship on social media. I define soft censorship as a form of censorship that makes ideas harder to find without banning their expression outright.
Some people have convinced themselves that censorship doesn't work, and that it always backfires in the end. This is not the case. By reducing the visibility of enemy ideas and increasing the visibility of friendly ideas, censors don't just reduce the number of people exposed to enemy ideas, although this is already an important achievement in itself. They also cause an apparent weakening in the case for enemy beliefs just by making them less popular than they actually would be absent interference—and this is especially effective if they censor imperceptibly. Recall that one of the most typical forms of ad hominem judgment is to trust popular ideas more than unpopular ones. That means if you make ideas look unpopular using surreptitious soft censorship you will actually make them less popular in reality. By reducing the visibility of enemy advocates, you can also shrink their potential base of financial support and thereby starve them out, which, it should go without saying, further slows the dissemination of their ideas. Consider, dear readers, what effect this type of suppression has when it's deployed en masse against many thousands of influencers writing on many different topics. The sum effect can be a wholesale shift in the direction of society, achieved through means that are quite subtle, and even unnoticeable, to the general public.
I've now mentioned several times that popular ideas seem more likely to be true than unpopular ones. But why exactly is this so? That's a question I'll answer in the next part.
Part III: Efficient Stupidity
Philosophers, dear readers, have a fatal flaw. They're happy to spend hours and days and months and years ruminating on the limits of knowledge with total indifference to the economic incentives that should be encouraging them to put their minds to better use.
While there are obvious reasons their indifference to material costs and benefits constitutes a fatal flaw, in this part I'll be addressing the least obvious. Because all information comes at a cost, the limits on knowledge end up being, precisely, economic. Any account that fails to consider this has no hope of explaining how we really know things. So ironically, the same indifference to economic practicalities that gives philosophers freedom to contemplate the limits of knowledge makes them disinclined to recognize the true nature of those limits.
As I wrote earlier, advanced scientific and mathematical knowledge is inescapably dependent on trust, because it's impossible for any one person to afford the time and resources necessary to confirm every link in the long chain of progress. “Afford the time and resources” isn't just a casual turn of phrase here, but the very substance of the observation. Trust is fundamentally a strategy for obtaining information at lower cost. Without it, that information would be unaffordable—and therefore unknowable. To know more, we must know cheaply.
I'm going to further elaborate this economic theory of knowledge using concepts that originate in the realm of finance and investing. My intent is not at all to comment on finance as such, but merely to repurpose the ideas for use in a different field. They'll not only allow us to better understand how we each assemble the stock of information that shapes our view of the world, but also explain a variety of perplexing social phenomena, including mass delusions, bad fashion trends, and the remarkable success propagandists have at fooling smart people into believing dumb things.
A. Let's Outsource Intelligence (Passive Cognition)
Suppose you're looking to invest your savings. The market is pricing a security on the expectation it will return four percent per annum, but after weeks of research, you find good evidence it's more likely to return five. That information was worth the trouble, because it enables you to make a substantial profit by buying the security below its real value. Now, suppose the market is pricing a second security on the expectation it will return four percent, and after weeks of research, you find good evidence it's more likely to return 4.01%. That information wasn't worth the trouble, because the small advantage it provides doesn't justify the time you spent to obtain it. In other words, you overpaid for information. In this example you overpaid with your own time and labor, but you could just as well have overpaid by buying the information from a financial advisor, or hiring him to research and trade on your behalf.
As markets mature and competent investors multiply, the average gap between the cost of information and the surplus return provided by that information shrinks toward zero, and even, as in the second instance we've just described, falls below it. By consequence most investors will come out ahead if they blindly buy at market prices, without paying anything at all for information.
The argument I've just outlined is well known. Yet the specifics of its articulation draw so much attention that its broader implications have been overlooked. Let's strip out all these specifics and reformulate the argument from scratch to encompass cognition in general.
If, for a given topic, (a) the benefit one can expect from actively investigating it minus (c) the cost of actively investigating it is less than (p) the benefit one can expect from passively trusting conventional opinion, then passively trusting conventional opinion is the more profitable course.
In short: if a-c<p, then use “passive cognition,” as I'll call it.
Before you continue, please take a moment to understand this equation, as I'll refer back to it over and over. I'm going to leave the definition of “conventional opinion” intentionally vague, because what counts as conventional opinion in any given case varies according to a number of factors, some dependent on individual judgment, and specifying all of these factors would consume an inordinate amount of time without altering the main argument. The relevant point is that in contradistinction to the typical ad hominem reasoning we've analyzed previously, passive cognition doesn't hinge on evaluating the expertise and reliability of particular subjects (e.g. the skill of this or that financial advisor), but instead superficially surveying a weighted aggregate of subjects. The term conventional opinion refers to the dominant beliefs of this aggregate in the judgment of a passive thinker.
The most obvious form of passive cognition is herd behavior. Herd behavior means trusting that those around you are displaying the right beliefs and actions and thoughtlessly copying them without verifying this yourself. It's commonplace to mock herd behavior as stupid, yet this mockery elides the more interesting puzzle of why anyone would exhibit such behavior in the first place. The inequality a-c<p provides the answer. Herd behavior may indeed be “stupid” (because it involves no application of intelligence), but by the same token, it's also efficient (because it obtains information at virtually no cost, and thereby the maximum net benefit). You might call it efficient stupidity. And in our resource-constrained world, efficiency wins—pulling stupidity along on its coattails.
Whenever a-c<p and individual benefit is the aim, passive cognition is objectively the best strategy. Those who employ it to reduce the price they pay for information can afford more information than those who don't, or at least obtain the same amount of information at lower cost, and that gives them a competitive advantage. The inequality a-c<p isn't just a cheat code for the lazy and unintelligent, but a normal state of affairs among social creatures, and we act on it instinctively and continually, without being aware we're doing so. A series of illustrations will make this clear.
Suppose you're eating a tasty tuft of grass when you notice your fellow wildebeests beginning to flee. Should you linger to see whether there's really a lion behind them? The cost of verification could be your life, whereas your savings if you discover the herd is wrong will be tiny. Furthermore, although it's possible they're running for no reason, it's not probable. Since a is low, c is high, and p is considerable, a-c<<p. Blindly joining the stampede is therefore the correct course. Wildebeests might not formally make this a-c<p calculation, but evolution has molded their instincts according to its result.
And ours too. The example just given is not without human parallels. Suppose you're about to enter a building when a crowd of people comes rushing from the doors screaming. Should you go inside to check whether there's really something to fear? Certainly not. The potential cost of inspecting the building would exceed the potential benefit by a wide margin. When a-c<p, the logic is the same, whether for man or for beast.
Now let's consider a quite different example. Suppose you're a successful architect who wants to take up flute as a hobby. You don't know anything about flutes, but you still want a quality instrument. So you simply buy a reputable brand at the market price. It might not be the best value, but you realize the extra time you'd spend picking through cheap and obscure brands to find a diamond in the rough wouldn't justify the potential savings, especially when your pockets are deep and your ear isn't yet developed enough to discern small differences clearly.
Most people choose appliances and the like in the same way. And they're not necessarily wrong to do so. Depending on how much money you have, how much you value your time, and how much brand markup you'll have to pay, spending that time to read consumer reports and comb through reviews instead of blindly buying a reputable product at its market price may or may not provide a net benefit. When it comes to shopping, it's often the case that a-c<p.
Our earlier discussion of plastic-bag bans offers yet another example of a-c<p, and one worthy of particular attention. I mentioned it took me around eight hours of research to present you with an estimate of bags' environmental impact in which I had reasonable confidence. That gives us a value for c. To determine the value of a-p, let's estimate the surplus profit I can expect from having knowledge about shopping bags that's more accurate than conventional opinion. Hm. What surplus profit?
In a world where plastic bags have been banned and this ban is widely considered justifiable, not only does the knowledge that they're better for the environment than paper fail to bring me any surplus revenue, it actually entails a loss. For I now find myself tempted to engage in criminal acts of disposable-plastic-grocery-bag use that will damage my reputation (most of my fellow men having passively accepted the erroneous conventional opinion), and, moreover, subject to the chronic psychological distress that comes from watching those fellow men act on demonstrably false information day in and day out. Since a-p is squarely negative and c is eight hours' labor, a-c<<p.
So actively seeking out the truth about bag bans was, if not technically stupid, at least an exceedingly foolish form of self-harm. I'll call it inefficient smarts. And that's the case even though conventional opinion is wrong and the correct answer (banning plastic bags harms the environment) is its opposite.
In so many cases, passive cognition appears to be a free win. But the fact that it can come out ahead even when it's actually providing false information forces us to confront a series of troubling questions. How often does passively obtained information turn out to be wrong? How many people need to be engaged in active cognition for passivity to have a good chance at approximating the truth? What happens to collectively shared passive beliefs when the rewards are insufficient to support that critical mass of active investigators?
In the realm of finance these are controversial questions, and as a mere starving artist I wouldn't presume to put forth an opinion. But elsewhere the answers are easier to determine. Not only are many of the scenarios where we rely on passive cognition simpler and more transparent than financial markets, but the countervailing force acting to restrain excesses in markets, namely, the long-run profits expected from correctly betting against the herd, are often entirely absent.
If none of the wildebeests pause to verify the reason they're running, an enormous stampede can form after a single animal overreacts to a mirage. And not only that: as this baseless stampede enlarges, the case for participating strengthens. The cost c increases, because a wildebeest who turns back will be trampled by those behind him; and p appears to increase as well, because the growing number of running wildebeests lends more support to the view that there must be some threat worth running from. As a result, the inequality a-c<p becomes more and more lopsided, and the stampede generates its own rationale in a self-reinforcing loop—a phenomenon I've already referenced in a previous section, but whose mechanism we can now explain mathematically. Self-reinforcing stampedes occur among humans too, sometimes leading to tragic consequences.
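Since I promised a mathematical explanation, here's the lopsidedness made explicit in a toy calculation. All values are illustrative assumptions: the cost of stopping and the apparent evidence both grow with the fraction of the herd already running.

```python
a = 0.1        # benefit of verifying, if the herd turns out to be wrong
running = 0.01 # fraction of the herd already running

for step in range(8):
    c = 1.0 + 50.0 * running  # risk of being trampled rises with the crowd
    p = 10.0 * running        # apparent evidence scales with the crowd
    print(f"step {step}: running={running:.0%}, a-c={a - c:6.2f}, p={p:5.2f}")
    if a - c < p:             # each animal's individual calculation
        running = min(1.0, 2 * running)   # more animals join; it snowballs
```

Each joiner makes the next animal's decision to join even easier, which is the whole trick.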
While physical stampedes involving homo sapiens might be rare, “cognitive stampedes” are far from it, especially where issues of social or political relevance are concerned. Once again, a-c<p explains why. For such issues the surplus revenue to be won from active cognition hardly ever exceeds its cost. That's because the cost of actively finding accurate information about them is borne individually, whereas the benefit, though large in aggregate, is so diffused throughout the population that it hardly helps the individual investigator. Even this diluted benefit only falls to that investigator if enough of his fellow men come to see the truth too, which is a rare thing; and if they don't, his actively won knowledge may well prove harmful, because discordant views isolate him socially and impair his ability to coordinate with his misguided peers. So all the incentives favor passive cognition. And naturally, that's what most people choose.
I'll once again illustrate using the example of bag bans. For whom is a-c>p? No investor can profit from learning the truth about bag bans. Plastic-bag manufacturers themselves don't benefit more from understanding real environmental impacts than from simply defending their interests. At best a handful of academic researchers and obscure journalists might eke out a small gain from writing honest papers (or not: dishonest papers sometimes bring greater rewards). But even so, they're easily drowned out by the louder multitude of politicians, journalists, and activists who personally benefit from repeating the nice-sounding claim that plastic-bag bans are a great idea without taking any time at all to look into the facts—and it's these latter groups who establish the conventional opinion the passive masses will subsequently embrace.
So the individually optimal strategy of accepting conventional opinion when a-c<p leads to a social quagmire, where everyone follows everyone else into an erroneous view that causes costly mistakes. And just like the wildebeest stampede I described a moment ago, this cognitive stampede is self-reinforcing. The more people passively share the erroneous view, the truer it seems; the social penalty for resisting the tide continually increases. Thus a false belief can approach fixation with only a tiny number of people having ever given it serious thought.
This point deserves special emphasis. If only a few individuals resorted to passive cognition, they'd be able to freeload on information others have paid for without generating harmful side-effects. But if a large fraction of a trust network uses passive cognition (and this is the norm), they cause an overall degradation in the accuracy of that network's beliefs. This degradation occurs because when passive thinkers adopt the conventional opinion, their sheer numbers cause that conventional opinion to seem even more true to other passive thinkers, who are by definition not scrutinizing the evidence for the belief, but only its dominance. Iteration of this process can generate a convergence cascade toward uniform beliefs, much like we observed in wildebeest stampedes a few paragraphs earlier.
Those who praise “the wisdom of the crowd” neglect to consider that as soon as people begin trusting the wisdom of a crowd, they make that crowd less wise, because they themselves are part of the crowd. For instance, if ten men guess the weight of a bull and another ten then passively repeat their median guess, the latter group reduces the standard deviation of total guesses and creates an illusion of greater agreement than really exists—lowering the signal-to-noise ratio while seeming to increase it. And in many real-world cases like bag bans, the ratio of passive to active is over 100:1.
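The bull-weight arithmetic can be checked in a few lines. The numbers here are illustrative, and I've used a mere 1:1 ratio of passive to active so the effect stays visible.

```python
import random
import statistics

random.seed(1)
true_weight = 1000
# ten independent, noisy guesses by active estimators
active = [random.gauss(true_weight, 100) for _ in range(10)]
# ten passive thinkers who simply repeat the median guess
passive = [statistics.median(active)] * 10

print("stdev, active only:  ", round(statistics.stdev(active), 1))
print("stdev, with passives:", round(statistics.stdev(active + passive), 1))
# The spread shrinks and the apparent agreement rises, but the passive
# repeaters have added no information whatsoever.
```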
So passive cognition precipitates and then reinforces a trust network's convergence on beliefs that aren't justified by a proportionately large number of active investigations arriving at identical findings, but which almost everyone unreflectively interprets as being so justified. It encourages overconfidence and inhibits dissent, and therefore inhibits error correction as well. We can call this form of distortion “passive load,” and it's certainly more distortionary outside of financial markets than within them.
The side-effects of passive cognition don't stop with the distortion it introduces directly. You'll have noticed that this section's analysis of bag bans resembles in some respects the earlier analysis concerning social signaling. What I demonstrated previously was that our propensity to use beliefs as social signals rather than as representations of fact ends up making those beliefs less accurate. What I've demonstrated this time is that on the basis of rational self-interest, none of us ever had a reason to scrutinize the facts about bag bans in the first place, because passively adopting conventional beliefs is the most profitable strategy; and this introduces distortions of its own. So signaling load and passive load are two distinct forces, which can interact with and indeed amplify each other, and in the case of passive load this amplificatory effect can be quite large.
Here's how. On superficial examination, truth-indifferent social signalers appear to raise the likelihood that their announced beliefs are true. Passive thinkers are hence more inclined to adopt the same beliefs, thereby increasing the uniformity of public opinion. This increase in the uniformity of public opinion causes non-convergence among others to look more like a sign of ignorance, delusion, or worse, and gives it a more firmly negative signaling value. In yet another feedback loop, this increases the incentive to signal belief, which increases the number of signalers, which encourages even more passive adoption of the same beliefs, and so on. (By now it should be clear that all of our social epistemic shortcuts are prone to convergence cascades, which reappear over and over in different analyses.)
To summarize this section, let's set up a simple game-theoretical model. In our model society there are only five individuals. One active investigator determined to find the truth, one politician who'll say whatever is to his advantage, one social signaler who doesn't care about the truth, and two passive thinkers. Of the two passive thinkers, the first adopts a belief whenever more people agree with it than disagree, and the second, less credulous, adopts a belief whenever most people agree with it.
Upon concluding his research, the active investigator makes claim X. But the politician decides claim X is to his disadvantage, so he makes claim not-X. The social signaler senses that claim not-X sounds nicer than claim X, so he too endorses claim not-X. The cautious passive thinker is not yet ready to take a side. But the incautious passive thinker sees that two people believe claim not-X while only one believes claim X, so he too comes to believe claim not-X. This changes the numbers. Now three out of five people believe claim not-X—and so the cautious passive thinker comes to believe it too. At the end of this process the single active investigator is alone in believing claim X, while the four others, none of whom have seriously investigated the claim, believe not-X more confidently than ever before.
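For the curious, the model translates directly into a few lines of Python; the adoption thresholds are exactly those just stated, and the order of moves follows the narrative.

```python
def run_model():
    beliefs = {"investigator": "X",      # found the truth directly
               "politician":   "not-X",  # says whatever benefits him
               "signaler":     "not-X"}  # echoes whatever sounds nicer
    population = 5

    def count(claim):
        return sum(1 for b in beliefs.values() if b == claim)

    # incautious passive thinker: adopts whichever claim has a plurality
    beliefs["passive_1"] = "not-X" if count("not-X") > count("X") else "X"
    # cautious passive thinker: adopts a claim only once most people hold it
    if count("not-X") > population / 2:
        beliefs["passive_2"] = "not-X"
    elif count("X") > population / 2:
        beliefs["passive_2"] = "X"

    return beliefs

print(run_model())
# {'investigator': 'X', 'politician': 'not-X', 'signaler': 'not-X',
#  'passive_1': 'not-X', 'passive_2': 'not-X'}
```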
What's important here isn't only that most people ended up believing not-X, but that the passive thinkers amplified a 2:1 disagreement among motivated early voices into a 4:1 disagreement in the trust network as a whole, effectively providing leverage to those early voices. A more sophisticated model would show that these ratios can be much bigger, with passive thinkers amplifying noisy truth-indifferent voices by orders of magnitude in the right circumstances.
The combined impact of passive load and signaling load—as well as simple dishonesty—allows total falsehoods to dominate a trust network indefinitely, even when the effects of believing those falsehoods are harmful for nearly everyone involved. This unpleasant outcome is rarely acknowledged in public life, not only because the apologetics of democracy forbid it, but because freeloaders and grifters benefit from speaking as if they're inefficient smarties with active knowledge of everything they discuss, whether or not that's actually the case. They also flatter their audiences by addressing them as if they're all active thinkers too; and they don't have to make any particular effort to maintain this pretense, because even the most ignorant members of the public can rarely be shaken from the mistaken assumption that their opinions are grounded in something more substantial than imitation of the bipedal wildebeests around them.
But we're not yet done. The equation a-c<p also allows me to free you from the dangerous illusion that the people at the top of our society are necessarily, or even normally, active thinkers we can rely on to mitigate the problems caused by passive cognition. This is far from the case.
To gain the kind of competitive advantage that brings success, it's usually necessary to direct one's cognitive resources toward a narrow domain. Because these resources are finite, their deployment in that domain comes at the expense of their deployment elsewhere, and vice versa. This opportunity cost raises c for all investigations outside one's specialty, widening, in other words, the range of topics for which a-c<p.
For instance, self-made business tycoons are indubitably among the most capable people in our society. Does that mean they have accurate knowledge about everything? Of course not. And sometimes the opposite can even be true. To reach such exceptional summits of wealth, they must long sustain a focus on moneymaking activities. Because even the most capable person has limited time, the trade-off for this focus is that they're compelled to passively absorb their information about topics that don't contribute to the maximization of their wealth, and in which they may indeed have little interest for that very reason. Consequently their views on those other topics are often generic, unexamined copy-pastes from the herd (not necessarily the plebeian herd, but rather the respectable herd that avoids acknowledging itself as such), even though their personal success evinces unusual ability, and their power to influence the world is many orders of magnitude greater than the average herd member's. This broad passivity doesn't change until they scale back their focus on wealth and direct their cognitive energies into the processing of unrelated information—from a ground-zero starting point that's often no better than a clever teenager's. And occasionally they do; but with all the delights money can buy at their beck and call, it's a rare thing indeed. Whence the bizarre spectacle of men with a twelve-figure net worth reading their social values, politics, and “insights” straight from the editorial board of a popular newspaper.
The same observation can be extended a fortiori to most other careerists, as nearly all professions require less breadth of knowledge than founders and investors do. For instance, the training necessary to perform open-heart surgery is long, intense, and specific, with little relation to the external world. This active concentration in one sector forces a higher passive allocation in all the others. So narrow professional success comes at the price of, and in proportion to, broad ignorance. The ignorance isn't immediately obvious, because passive cognition provides cheap information to fill the void—like styrofoam packing peanuts that fill in, for just a few cents a liter, a large box with a tiny payload.
For all these reasons and others as well, the professional class of “elite human capital” often forms the tightest and least forgiving herd, with the biggest gap between cognitive power and actively gathered knowledge of the larger world. They perpetually overestimate the breadth of their understanding, because they're surrounded by capable, successful people who share the same goals and incentives favoring signaling and passive cognition at the expense of honesty and curiosity, and who therefore share too the same unexamined beliefs. Where narrowly demanding professions are concerned, it's no exaggeration to say that success is proof of ignorance. More generally, there's no reason to believe that those with status and power are less prone to passive load than the rest of society.
So passive cognition is a double-edged sword. It's extremely useful and indeed indispensable because it provides us with a large quantity of good-enough, functional information at almost no cost. But as a side-effect it generates a variety of harmful social phenomena, from self-reinforcing stampedes instigated by mirages, to ignorant but powerful billionaires “leading” the herd from the rear.
B. Let's Paint It Gray (Trend Following)
Thus far we've mainly been concerned with applying passive cognition to the realm of facts that can be evaluated on a simple true-or-false basis. Whatever complex social dynamics surround them, the claim that paper bags are better for the environment, or that a building is burning, or that a lion is chasing you, fall into this category. But passive cognition can also be applied to value judgments of various sorts. Not only the right price for a flute, or a fridge, or General Electric stock, but also the best characteristics to look for in a spouse and the most appealing color for a house or a blouse. If you don't have the time or talent to develop a good eye for aesthetics, you might simply weigh conventional opinion on what spouse or house or blouse you should buy, perhaps on the basis of magazine covers at the checkout, the end-cap advertisements at expensive clothing shops, the tastes of the cool kids at school, or whatever images repeatedly appear in your social media feed.
Most people are indeed too busy for beauty, so a quick drive down any modern residential street to look at the facades and gardens and attached garages will put a painful amount of efficient stupidity on full-frontal display. More stupidity, indeed, than the theory of passive cognition we've elaborated thus far could generate on its own. That's because an additional element intervenes: change over time. In other words, trends. Trends have different mechanics and consequences than passive cognition, and it's these I'll examine next.
Two simple principles are enough to generate the behavior that accompanies trends. First and most obvious, something whose price is about to rise is worth more than it otherwise would be, while something whose price is about to fall is worth less. Second, things are more likely to continue on their current path than to change direction. We call this tendency momentum.
When we consider these two points together, we find that something whose price has been rising is worth more than it otherwise would be, and something whose price has been falling is worth less, because momentum increases the likelihood of a higher or lower price in the future. Pricing that anticipated gain or loss into the present causes the rate of change to increase, which amounts to an increase in its momentum, and thus a further increase in the anticipated future price.
Normally these effects are restrained by the actual value of the thing, which is separate from price and scarcely affected by momentum. But when market participants' reaction to momentum is particularly pronounced, it produces a feedback loop strong enough to completely untether price from value. The result is a bubble.
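To see the feedback loop in miniature, here's a small numerical sketch; the parameters are invented for illustration, not drawn from any real market. Demand chases the most recent price change, while a much weaker force pulls the price back toward value, and a tiny initial impetus inflates into a boom, then a bust, even though value itself never moves.

```python
# Toy model of momentum chasing. All numbers are illustrative.
def simulate(periods=40, value=100.0, momentum_weight=0.95, value_weight=0.03):
    prices = [100.0, 101.0]  # a small initial impetus
    for _ in range(periods):
        momentum = prices[-1] - prices[-2]    # cheap signal: recent change
        pull_to_value = value - prices[-1]    # costly signal: actual worth
        prices.append(prices[-1] + momentum_weight * momentum
                      + value_weight * pull_to_value)
    return prices

prices = simulate()
# The price overshoots the unchanged value of 100, then crashes below it.
print(round(max(prices), 1), round(min(prices), 1))
```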
Trading on momentum is, like passive cognition and ad hominem reasoning, a way of obtaining information at reduced cost. It reacts to price movements (information that can be obtained cheaply) instead of devoting effort to understanding the thing itself (information that can only be obtained with effort and intelligence). However, momentum isn't just a shortcut that some use and others can ignore at will. Momentum traders are market participants whose purchases and sales cause prices to diverge from value for everyone. Furthermore, humans who aren't consciously executing a momentum-based strategy still react to momentum instinctively and by default. Momentum effects can dominate for a considerable time before value reasserts itself. That means even someone with magically perfect knowledge of value is obliged to surf the waves of momentum in the short to medium term, whether he wants to or not.
We tend to think of bubbles as the province of financial markets. Yet in such markets one can make reasonable calculations about what valuations are actually plausible, and this anchor tends to reduce, albeit not prevent, the incidence of bubbles. In the fuzzier realm of aesthetics, where no objective, numerical calculation of value can be made, and where novelty is itself a source of value, bubbles form far more easily. Indeed, modern fashion, whether for houses or for blouses, amounts to an endless jacuzzi of bubbles with only the most tenuous basis in value.
Trends impact many domains, from fashion to art to ethics to politics. I'm going to begin by addressing the most frivolous of these and then work upward. As you read, I'd like you to keep in mind that the patterns we can observe clearly and easily in a frivolous example will hold too for more serious cases, where we'd prefer to believe mere momentum could have no influence on our decisions.
One small note before I continue. In this essay I'm going to write “trend following” or “momentum trading” depending only on what flows well in context, and without implying any distinction between the two, because this best matches our ordinary manner of speaking.
Imagine, dear reader, that you're a girl of above-average but not quite spectacular coolness, hampered by a weak intuition for beauty. One fall the coolest girl, whom we'll call Tiffany, comes back to school in permed hair and a jacket with big shoulderpads. It's a look you'd seen on MTV, but never in person, and it's pretty weird; but Tiffany has such natural beauty, style, and charisma that she makes it work, or at least makes it seem to work. Before long her friends, the second coolest girls at school, are sporting the same hair and the same shoulderpads.
When you pause to think about it during math class, where there's nothing else to do anyway, you can't quite tell if the big hair and big shoulderpads are fantastic or ridiculous; and at any rate Tiffany's charisma—she's dating the quarterback—warps your underdeveloped aesthetic sensibilities so much that you lose all power of discrimination. The important thing to hold on to, you understand without being able to put it into words, is that the new style is trending—and in a short time it won't need Tiffany's help to be recognized as cool. Everyone will be impressed. That means the faster you adopt it, the higher you'll end up rising in the esteem of your classmates. So when your math teacher calls on you to answer a question you weren't paying attention to, you tell him you have a fever, and also a feminine issue that's too delicate to explain; and after you demonstrate with a few coughs he gives you a pass to go home early.
As soon as you get home you dial the salon and schedule a perm. In the best of all possible worlds, you think, your parents would have bought you a carphone.
Sure enough, the new style catches on. As the school year passes it becomes more and more popular, and perms get bigger and bigger. Happily your early adoption of a style that rose in value so quickly gave your coolness a serious boost—almost enough to elevate you to the coveted ranks of Tiffany's friends (though not quite).
In the final months hairstyles accelerate toward a volume no one would have believed possible a year prior, every top includes shoulderpads that belong under a football jersey, Tiffany's perm is so big her head looks like an electrocuted willow tree, and, finally, even the least cool girl in school shows up to prom with big hair.
That moment, naturally, marks the peak. For as soon as the last buyer is all in, the momentum must end: there's simply nowhere to go but down.
Tiffany has an unbeatable intuition for trends, and she's the first to realize the one she'd kicked off is now rolling over. It might be a little premature, but she's not going to risk holding the bag for a dated style. So the next fall she shows up to school in overalls—and a pixie cut with no volume at all. Although Tiffany's a little too far ahead for girls of below-average coolness to understand yet, it becomes immediately apparent to you that the giant hair and big shoulderpads you've been wearing are, well, kind of silly. They're not even on MTV anymore, except for reruns of last year's hits. Tiffany's friends drop them in rapid succession, with momentum speeding down even faster than it sped up, and you drop them too. By the time graduation rolls around no one at your school would be caught dead with big hair.
You still don't really have any feeling for beauty, but thanks to the eagle eye you kept on the changing trends, you played your hand better than many girls who do. And now Tiffany's even inviting you to the best graduation party. The backup quarterback might be in your future.
Surely, some of my dear readers are thinking, blind trend following is just kid stuff. Adults grow out of the silliness recorded in the story above. Ha. Allow me to introduce you to a color known as millennial gray.
You grow up, graduate from school, find a job, and get married. You've always wanted a house, and you start hunting as soon as you can afford one. Your husband's job is so boring you keep forgetting his title, but he's making six figures and you have a lot of options. It definitely has to be the right house. All those brown and cream ones are starting to look super dated. You're not sure what Tiffany would think if she found out you were living in one of them, but you know it wouldn't be good.
So, what is the right house? Even if you had a good feeling for architecture you'd be too busy at your own job to spend much time mulling styles (as it happens you have no better feeling for architecture or interior design than for couture); but what you do know is that you want something modern and fresh and—you're paging through a trendy magazine on your lunch hour when an image rises up from it and imprints on the similarly colored matter in your skull—gray.
It might be the same color as the old multistory concrete garage you park in at the mall, but it's definitely trending. It's on literally every new house in this month's magazine, like, literally literally, and the words “modern” or “fresh,” or often both together, appear along with it on each glossy page. Part of you wonders if millennial gray doesn't look a bit... well, ugly. But then, you had the same vague sense of unease when you first saw your hair after a perm; and look how well that turned out! Clearly you need a gray house.
Your husband doesn't get it at first (he thought some of those dated brick houses looked homey) but he will eventually. So you insist. You explain that gray is trending, and pretty soon all of your friends will have gray houses, and that if you're the first to have one you'll be so.... You start to say “cool,” but that's a word for kids, not adults. You don't really know what the right word for adults is, but you know whatever it is is a great thing to be and it's a lot like being cool, except, obviously, for adults. After a few weeks he gives in (of course)—and you buy one. With a three-car garage!
You invite Tiffany and her husband to your housewarming party. She shows up in a tight beige dress and beige heels. Maybe her wardrobe is still one step ahead of yours, but you get the feeling she doesn't have a trendy gray house yet. If she did, wouldn't she have invited you to her own housewarming party? You're not quite sure, and the possible answers to the question have very different implications that set your head spinning, but you can't think of the right way to coax the answer out of her before dinner ends. None of your friends mentioned Tiffany throwing a housewarming party....
The years roll on. You have a boy, and all of his family memories are made with a gray background. The turnover for house styles is much slower than the turnover for hairstyles. New houses don't get built very quickly, and old people are slow to understand the genius of millennial gray. Most of them could freshen their homes up so easily, but for some reason they absolutely refuse to paint their red brick a modern color! So while the momentum behind the gray trend takes longer to build, it lasts longer too. A lot longer. In fact, you're already dyeing your hair black when you see a remodeling show on TV one day and realize it's time to give your house the same treatment. In a hurry!
Suddenly you no longer understand what it was you saw in millennial gray. Did it really look this dreadful all along?
(Yes. Yes it did.)
In the story above, our heroine, or anti-heroine as the case may be, relies on trend following to make her stylistic choices. It serves as a substitute for valuation that requires less time, effort, and talent, but yields similar or indeed superior social advantages, making her a respected member of her social network. Tiffany's approach is also worthy of note. In contrast to our heroine, Tiffany has a good aesthetic sense and is willing to put in time and effort; but she chases trends anyway—even when they're ugly—because she knows, either consciously or unconsciously, that these trends affect the perceptions of those around her. Her skill at this game allows her to repeatedly “buy” lower and “sell” higher than her look would merit in the absence of widespread trend-following behavior.
This sort of story is familiar to everyone, but when the mechanics are thus spelled out in plain language they seem nonsensical. Why, after all, should switching styles at the right time result in a “profit”? Before we go further, let's take a moment to make explicit the reasons people follow trends.
First, for the sake of social advantages. People who are believed to have a superior sense of style are more respected, have more networking opportunities, and their opinions are given more weight. Since styles change over time and are subject to momentum effects similar to those in financial markets, trend-following strategies help to attain these social advantages. Second, for the sake of tribalism. People follow stylistic trends to be part of a community that's also following the same trends, and to position themselves within that community. This isn't just a matter of signaling, but also coordination and a sense of belonging, and those who refuse to follow the trends for too long risk exclusion. Third, people follow trends to avoid having to process the changing world themselves. It's much easier to look around and notice something's going out of style than to think through the implications of social shifts that might be causing that change in style and react to them by your own lights. Fourth, for novelty. When they see a new and interesting style, people want more of that style—until eventually they grow bored of it, and the trend changes to favor some other novelty.
All four of these reasons play a role in our tale of blouses and houses, and they apply not just to styles, but to trends of a more serious nature as well. And yet, if there are such good reasons to follow trends—just as there are such good reasons to use passive cognition—why do we feel so little respect for the heroine of our story, and indeed less respect than she enjoys among peers who haven't peered so thoroughly into her thought processes?
The answer has to do with the gap between price and value. Momentum is an unavoidable feature of markets, so all participants have to navigate it to some degree, and it's better to navigate it well than badly. But when someone's profits derive primarily from price movements that are unrelated to value creation and efficient capital allocation, he's gaming a system whose entire justification for existing is, precisely, value creation and efficient capital allocation. A person who extracts wealth from others without producing anything in return is a parasite by definition.
Similarly, the heroine of our story doesn't seem to have added any true beauty or charm to the world, nor has she displayed genuine virtues, nor does she have any understanding of couture or architecture or even hair. She's gotten ahead by blindly trading on momentum in the “prices” of blouses and houses: gaming the system rather than creating value.
Her strategy doesn't just reduce our respect for her. It does real harm as well. I'll explain further by analyzing some examples from the arts.
Trend following in the arts is something I look upon with special horror, because it's the cause of so much ugliness and stupidity. Works of art—and I include in this category every art from hairdressing to architecture to poetry—are subject to natural change over time due to shifts in how we live, our social mood, the evolving creative technologies available to artists, and our desire for novelty, which forces change even when all the prior factors remain static. This natural change invites momentum behaviors, and thus large deviations between price and value. By consequence, low-quality and indeed outright ugly art can dominate society for extended periods of time.
The ugliness doesn't appear just because the world is changing and art is changing with it. Artists can create beautiful forms of art to suit almost any imaginable social shift. It happens because even though almost everyone is involved with art in one way or another, few pay close attention, and even fewer care about value. Some want to chase trends for the four reasons given a moment ago, although they rarely reflect on their motivations explicitly; and the bulk of the population are passive thinkers who amount to human leverage amplifying the choices of these trend followers. Worse still, active trend followers and their passive retinue typically latch on to superficial indicators rather than giving full consideration to the variety and depth of stylistic changes that might suit the changing times.
Consider an example. Popular music in the 1980s was characterized by a handful of distinctive sounds, the most famous of which was the gated snare. The gated snare was first used on some records from the late 1970s, where it was entirely appropriate; and because it was a new sound people liked, producers were naturally inclined to use it more often. But soon it became trendy, then ubiquitous, and then every record had to include a gated snare, just as every house would later have to be painted millennial gray, whether it was appropriate or not; and a musician who didn't put a gated snare on his record was likely to end up begging for change on the street. An exaggeration, but less so than you might think. Catering to trend followers is hugely important for popular success, and the loss of their support can easily push revenues below expenses. “If you're not trending, you're ending,” the saying says, and for good reason.
As a result, many records from that period are marred by a snare sound that doesn't suit the music, and indeed inflicts significant damage to its overall quality. Someone will retort, “it sounded good at the time, and that's what really matters.” But while I don't have space to prove it here, I deny that this was so. Trendy elements like a gated snare or gray paint are, when used inappropriately or to excess, never good. Not at the time, not before the time, and not after the time.
It's by superficially chasing such trendy elements that the heroine of our story chose her houses and blouses. It's also on a similarly superficial basis that everyone, for a period of ten or twenty years, was enamored of granite countertops whether or not the color and texture of granite actually suited their kitchen (it usually doesn't, and indeed can look rather horrible). Forj thosej jwho arej jinsensitive toj thej jarts jand havej troublej junderstanding whyj thisj jis soj badj, jimagine aj trendj wherej jevery wordj needsj toj beginj jor jend jwith thej jletter “j.” Yes, it's that bad. The extra j's don't belong there, but when they're salient elements of a trend people will demand them regardless. That's why, at the peak of AI mania, a CEO can boost his stock by mentioning AI fifteen times in an earnings call even though his company only makes dog food.
Before I continue I want to make clear that these are not problems artists, entertainers, designers, influencers, and impresarios can solve just by focusing on value and ignoring trends, which is the naive advice their “wise” teachers might give them. In the world of popular entertainment betting against trends hardly ever generates alpha, even when you “win” the bet in the long term (e.g. the trend was retarded and everyone eventually notices), because revenues are heavily concentrated in the short term. Suppose, for instance, you produce a game show that goes against the grain of current trends. Will the public want to watch that game show in five years when they realize the trends were wrong and you were right? Of course not. They prioritize current productions and don't care about old ones. And this largely holds for less frivolous forms of entertainment as well. So unless you time your contrarian bet perfectly (like Tiffany did at the tail end of the big-hair bubble), you'll fail to stay solvent in the short term with no chance to recoup when the market stops being irrational. This, of course, doesn't mean value never reasserts itself, but only that the revenues gained when it does hardly ever compensate for earlier losses. The great difficulty of profiting from a position contrary to stylistic trends, well understood by all the firms that fund entertainment, is what makes popular successes and failures in the sector so weakly correlated with genuine value. And no, “high” culture and avant-garde art aren't immune to this jacuzzi effect. To the contrary.
Now, are trends in the broader sense always bad? No. As I've mentioned, there are perfectly valid reasons for art to change over time, and artists who drive these changes forward in compelling ways are doing good and deserve credit. But trend following, like all the other cost-cutting epistemic techniques we've analyzed in this essay, almost always ends up distorting our collective beliefs despite being an efficient use of cognitive resources on an individual level. Following the nomenclature already established, we can call this distortionary force “momentum load.”
The examples I've given thus far mainly have to do with aesthetic judgment, and while they're dear to my heart as an artist, they might mislead you into underestimating the harmful effects of momentum load. Trends influence people not just when it comes to houses and blouses, but spouses too—and political causes and moral standards and even matters of fact. Trends influence what careers people pursue, who they spend their time with, and how many children they have.
And yet, when one searches beyond the realms of fashion and finance already addressed, clear-cut examples become elusive, making it hard to prove that trends are as influential as I claim. This difficulty is actually predicted by the theory. Momentum is not the initial impetus, but rather a continuation in excess of what one would expect on the basis of that initial impetus. It never occurs in isolation; so when the cold logic of dollars and cents isn't available to objectively check price against value, it's impossible to prove beyond doubt that there's an excess for which momentum is responsible. Even so, if you were to ask the honest opinion of anyone whose profits are substantially affected by the human tendency to follow trends (this includes not only fashion designers, entertainers, and investors, but also marketers, influencers, politicians, and propagandists), they'd all agree that momentum effects are real, and managing them well is important.
One example I'll put forward for your consideration is the LGBT craze that occurred around 2020. After a very long period of time where the fraction of society identifying as non-heterosexual hovered consistently around a few percent, it began to rise gradually, and then shot up rapidly, particularly for the youngest generation. The initial impetus—extensive promotion of non-heterosexual lifestyles in the media—was almost certainly amplified by trend-following behavior, resulting in a greater and faster change in attitudes than anyone expected.
Social media offers a particularly clear example of trend following, which is conveniently labeled as such. Once a topic begins “trending,” the algorithm shows it to more users, and so influencers post more about that topic to gain greater visibility—which causes it to trend even more. This extra boost for trending topics can have a meaningful influence on public opinion. For instance, suppose a public figure is caught misbehaving. Once discussion of their misbehavior passes a certain threshold of popularity on social media, there will be a non-linear increase in visibility, causing the news to go much further and be seen by many more people than would normally care. This affects the reputation of that public figure in a way that isn't proportionate to the actual interest of the story.
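A minimal sketch makes the non-linearity plain. The threshold and boost factor below are hypothetical, but any sharp visibility bonus for “trending” topics produces the same shape:

```python
# Hypothetical "trending" boost: below the threshold, visibility tracks
# organic interest; above it, the algorithm multiplies reach.
def visibility(organic_interest, threshold=1000, boost=5.0):
    if organic_interest < threshold:
        return organic_interest
    return organic_interest * boost

for interest in (900, 999, 1001, 1100):
    print(interest, "->", visibility(interest))
# 999 -> 999, but 1001 -> 5005.0: two nearly identical stories, and one
# reaches five times as many people for crossing the threshold.
```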
Trend following of this sort isn't just an artificial product of social media algorithms. News and gossip in general exhibit the same tendency. Once a topic reaches a certain threshold of trendiness, it begins to crop up in casual conversation and light journalism. This isn't because the topic is of particular importance and demands discussion, but because its popularity makes it a good pretext for people to position themselves socially by expressing their opinion about it, regardless of whether they'd normally take any interest. Naturally after it assumes this social function the topic becomes even more popular.
Celebrities and aspiring celebrities will intentionally let themselves be caught behaving outrageously in the hopes that they become a popular topic of conversation. Activists similarly strive to make their topic of concern one that everyone feels compelled to talk about regardless of how relevant it really is. For instance, transgenderism is arguably a topic of little relevance to pretty much anyone, but was dropped on society by activists like discursive anthrax—infecting more and more conversations and driving out less trendy topics of greater importance. In each of these cases crossing a certain threshold of interest results in a nonlinear increase, because people gravitate toward a shared set of conversational Schelling points that are determined by trends.
C. Let's Start A Stampede (Passive Leverage)
These observations lead us to a new issue. Momentum and passive cognition hold such sway over the public that any competent propagandist or marketer will do his best to manipulate them for his own ends. I've already discussed propagandistic manipulation earlier, so the following paragraphs will only address the particular techniques relevant to this part of the essay.
Passive thinkers are the easiest group to manipulate, because their entire strategy is to inattentively follow conventional opinion. The methods for doing so are several.
In my analysis of bag bans I pointed out that the first authoritative voices are especially influential, because credulous passive thinkers take them as a cue, and less credulous passive thinkers eventually follow their lead. To generalize, propagandists make a special effort to set the initial tone of news coverage and establish the appearance of conventional opinion from the get-go, because this can create a cascade of passive agreement that crystallizes the target beliefs throughout the trust network permanently.
The most effective technique for manipulating passive thinkers is the illusion of consensus. To create an illusion of consensus, propagandists send onstage an apparently diverse range of voices who argue over minor or irrelevant points, but agree on a target set of beliefs. These voices seem to fairly represent different perspectives, but they don't: the propagandist has selected them in advance to exclude any possibility of genuine dissent on the issues of relevance, or else to associate it with disreputable people. Televised roundtables follow this pattern, and the editorial boards of major newspapers are selected according to the same principle, including pseudo-dissenters who complete the illusion of consensus with the illusion of debate. Passive thinkers take this stage show at face value, and adopt the target beliefs as if they'd just witnessed a multiplicity of different perspectives in agreement, which would normally be a strong signal of accuracy.
The illusion-of-consensus technique is especially effective in movies and television shows. By having all the characters express agreement with a particular view, propagandists can trick passive thinkers into believing there's a real consensus around that view. On an instinctive, subconscious level, they can't distinguish between the scripted agreement of characters and the real agreement of the kind of people those characters are written to represent. And this works perfectly well on intelligent people who should know better, but are simply too busy or too careless to think twice about the media they're casually consuming.
Propagandists can also stop short of an illusion of consensus and simply create the illusion that a target fringe view is common, normal, and part of the conventional palette of opinions. This clears the path for a subsequent social shift in the desired direction.
The Wall Street Journal article I analyzed earlier manipulates passive thinkers in a way that's closely related to the illusion of consensus technique. It deploys a series of anecdotes to portray drug users as representative of a wide cross-section of conventional society and drug couriers as deliverymen like any other deliverymen, whereas in reality drug users and drug deliverymen are more common in certain groups and among certain types of people to whom passive thinkers would be less sympathetic. None of the anecdotes are false, because drug users and sellers can indeed be found in all classes; yet selecting them to skew the perceived distribution in a statistically inaccurate way hits passive thinkers like a knockout blow. They absorb the payload as if it accurately represents conventional opinion without ever questioning the selection process. That's because the average person is statistically illiterate, but develops a feeling for population distributions by collating the many anecdotes he encounters in daily life. Even statistically literate people do this when they're passive and inattentive. It's an intuitive approach that works reasonably well in the absence of mass media, but as you can see, it's easily fooled today.
Finally, propagandists can reverse the illusion of consensus technique and instead create the illusion that no one trustworthy believes X, when in fact many or even most trustworthy people do believe X. Propagandists accomplish this either by excluding those who believe X from public debate, or by stigmatizing X and shaming those who believe it to discourage them from expressing their views openly. This technique is the most common and widely used method for neutering dissent and shrinking the Overton window, because as far as passive thinkers are concerned, unshared views neither exist, nor figure into conventional opinion. For instance, three-letter agencies famously pushed the derogatory phrase “conspiracy theory” to imply that anyone who raised questions about suspicious events was a delusional paranoiac.
Trends, of course, are manipulable as well. A moment ago I mentioned how trending news can have a disproportionate effect on the reputation of public figures. The same can be said for any newsworthy events that reflect positively or negatively on certain beliefs and values, or raise the profile of particular issues. Small nudges that hinder or enhance momentum, suppressing certain issues and pulling others to prominence, gradually but meaningfully change the views of the public. For instance, social media censors can kill the momentum of a tearjerking story about a father who protected his family with a gun by deboosting it in the algorithm (ensuring it's seen by fewer people), and they can cause a horrifying story about a father who killed his family with a gun to trend by boosting it, thus artificially shifting public opinion on gun rights. The degree of manipulation they need to use is smaller than one might think, because most stories of this sort will simply disappear if their momentum is killed. That's how momentum works: nobody cares about old news.
Since the popularity of a news story depends largely on its novelty, merely delaying it is enough to crater its impact. Similarly, the delayed correction of a false story has much less influence than the false story itself. Censors permit dissent again after a topic stops trending and falls out of public view, because the window of opportunity for influencing public opinion has closed. The existence of impotent dissent helps to perpetuate the illusion of freedom—and the illusion that dissenting views had always been allowed, but were simply unpopular (and therefore, as far as passive thinkers are concerned, wrong). While my remarks here highlight trend manipulation on social media, the old media world has used similar techniques for generations, killing or boosting stories at will, and with similar effects.
I'll conclude this section with what's perhaps the most iconic manipulation of passive cognition and momentum: the buffalo cliff.
The Native Americans of the Great Plains were said to send a boy dressed in a buffalo skin out into the herd of buffalo; and when the hunters appeared behind them the boy would imitate a young buffalo taking fright, and begin to run. Seeing what appeared to be a fleeing friend, one anxious buffalo would follow his lead, and then another, until the whole herd was stampeding along a course the hunters had prearranged. This course led the buffalo toward a high cliff that was impossible to see from a distance. And so the buffalo, the whole herd of them, who could not stop the momentum of their heavy bodies in the brief time between when they glimpsed their end and when they met it, would rush over the cliff and break their strong backs on the rocks below, where some would be butchered for food, and the rest left to rot in the sun. Such are the wages of passive cognition when hostile manipulators hold sway.
The catalog of propaganda techniques I've provided in this essay is of critical importance, because today a number of other writers continue to promulgate the erroneous view that mass mania and passive delusion are accidental occurrences in which everyone can be caught up without anyone being to blame, like mindless particles participating in a hurricane that forms through natural processes rather than human intention. This is far from the case, but many are taken in by the argument, because “debunking” the “superstition” that there could possibly be anyone behind the curtain sounds very intellectual and scientific—like explaining to ancient Babylonians that their statue of Marduk doesn't cause rain or drought depending on his mood and the quantity of sacrifices he's been offered.
Blame ought to be apportioned on both sides. First, passive thinkers are deserving of blame, because their overconfident freeloading puts our entire republic at risk. It's true that no one can know everything actively. But everyone can know that they don't know actively, and humbly refrain from adopting unexamined views with high confidence in a world awash in propaganda. Everyone can understand the techniques I've described and temper their credulity, particularly where mass media is concerned, and weight instead the firsthand anecdotes of friends over curated or fictional ones. I'm aware that most people won't exercise this restraint because they're lazy and stupid, but that doesn't make their poor behavior any less blameworthy.
Second, the propagandists and their masters—and there are both propagandists and masters, not always, but at least often—deserve to be blamed.
Reflect back on all the means to manipulate public opinion and all the flaws of social knowledge described in this essay, dear readers, and ask yourself whether someone more intent on exercising power than obeying a laissez-faire moral code would neglect to use and abuse them. This question has only one plausible answer. Yet their apologists have successfully confused the issue by arguing that a master of conspiracies with a million hands, who drives the buffalo over the cliff by moving each of their legs for them the whole way, could not possibly exist. This self-evidently ludicrous strawman has nothing to do with how real manipulation works. The devil's greatest trick, as the saying says, is persuading you he couldn't exist.
What actually happens is that passive thinkers and trend followers hand their manipulators, gift-wrapped, an almost unbelievable amount of leverage; and with the right fulcrum—in entertainment, in the media and social media, in the universities, and so forth—those manipulators can move the world. They do this not with absolute control, but with small, well-chosen nudges that leave herd dynamics to take care of the details. They don't care how the herd gets there, or who runs over whom on the way, but only that the buffalo move in the direction of the chosen cliff. And—they do.
In light of this carnival it's difficult to entertain the idea that democracy in the age of mass media could ever be what it purports to be. The powers of manipulators are simply too great, and the average man's resistance too feeble and halfhearted. Voting is downstream of information, and therefore downstream of propaganda. So as long as the pretense of democracy persists, the real, long-term direction of society will be determined by the oligarchy that most effectively controls its organs of information—by placing their lackeys and partisans in chokepoints for the dissemination of that information, such as leadership and decision-making positions in media, entertainment, and academia, and by funding producers of information who promote their desired narrative, including professors, influencers, artists, and journalists. The only effective way to combat this oligarchy, should it prove hostile, is not to wait for the passive masses to stop freeloading and signalers to stop signaling and trend followers to stop chasing trends, things they're reluctant to do even after reality punches them in the face, but to form a less destructive counter-oligarchy that captures these information chokepoints and funds its own information producers to guide society in some direction better than a buffalo cliff. Easier said than done, of course. But nowhere near as hard—as impossible, I'm inclined to say—as reforming the organs of information to lock out manipulators for good.
D. Let's Play Opposite Day (Contrarians and Cognitive Altruists)
These conclusions are grim, but they wouldn't be complete if they failed to mention the few who do work against epistemic load and reduce the harms of passivity and trend following. These fall into two groups: contrarians and cognitive altruists—those whom I humorously but accurately described as “inefficient smarties.”
Contrarians are people who, by disposition and without reasonable justification, continually try to find fault with and work in opposition to conventional opinion. When conventional opinion is to stop, they go; when conventional opinion favors black, they choose white. They're few in number, and rather irritating. But once again, the facile observation that always disagreeing for no good reason is senseless and annoying distracts from the more interesting question of why anyone would behave that way in the first place.
Investment researchers argue that “contrarian” trading, which is to say, betting against momentum, has a historical record of providing positive returns on a five-to-ten-year time scale. This is because growth stocks tend to be bid up beyond their potential, while value stocks tend to be oversold, and eventually these excesses are corrected, putting contrarians ahead.
Although our social world is much messier than the stock market, there's reason to believe a net contrarian benefit still applies. I've already offered several non-financial examples of how people chase trends beyond the point justified by real value, and this implies that if value were to reassert itself, there would be a reversion to the mean just as there is in markets. Novelty, moreover, tautologically wears off in time. So many trends end up flagging, and then reversing, sometimes dramatically. Those who bet on the reversal of a trend will lose out in the short and perhaps even the medium term, when momentum dominates, but often enough they'll make up for it in the long term when valuations normalize. Their contrarianism may also lead them to stumble onto valuable information because everyone else is looking for answers in the wrong place. So in addition to long-term benefits from regression to the mean, a contrarian strategy can occasionally generate windfall profits, despite failing the majority of the time.
Of course, if too many people were to bet against momentum to take advantage of a contrarian premium, that premium would disappear. So there must be an optimal fraction of such bets; and all things being equal, we should expect society to tend toward that fraction over time. Whether dispositional contrarians are the product of evolutionary game theory or have merely found an edge by chance is impossible to say with confidence, but considering the logic just outlined, the former explanation is certainly plausible.
Society as a whole benefits from contrarianism, and likely more so than the contrarians themselves. This is because when contrarians constantly search for flaws in conventional opinion and announce those flaws to their fellow men, they exert on their trust network a force contrary to passive load, and when they oppose themselves to the trend of the moment they exert a force contrary to momentum load—sometimes small, but sometimes substantial, making bubbles flatter, shorter, and less frequent than they otherwise would be. A simple refusal to go along with the crowd is sometimes enough to encourage a few of the less credulous passive thinkers to slow down and question their views, preventing a cascade toward uniformity. We can easily demonstrate this with our earlier toy model of five knowers. Adding a single contrarian is enough to stop the two passive thinkers from taking a side, making the final ratio of those who believe not-X to those who believe X 1:1 instead of 4:1. Obviously that's a big difference; and when the issue under contention is important it will lead to a huge divergence in outcomes. Finally, by doggedly looking wherever others aren't, contrarians increase the innovation rate of their society relative to one dominated by conformists, who are more likely to get stuck in dead ends because they're all looking in the same place for the same answers.
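For those who want the arithmetic of the five knowers spelled out, here's one way to code it. Since the model's details were given earlier in this essay, the roles below (three vocal knowers and two passive thinkers who join a clear majority but abstain on a tie) are a plausible reconstruction rather than a restatement:

```python
# Sketch of the toy model. The roles and tie-breaking rule are assumptions.
def tally(vocal_beliefs, n_passive=2, contrarian=False):
    expressed = list(vocal_beliefs)
    if contrarian:
        # The contrarian opposes whatever the current majority asserts.
        majority = max(set(expressed), key=expressed.count)
        expressed.append("X" if majority == "not-X" else "not-X")
    x, not_x = expressed.count("X"), expressed.count("not-X")
    if x != not_x:  # passive thinkers join a clear majority, abstain on a tie
        expressed += ["X" if x > not_x else "not-X"] * n_passive
    return expressed.count("not-X"), expressed.count("X")

print(tally(["not-X", "not-X", "X"]))                   # (4, 1)
print(tally(["not-X", "not-X", "X"], contrarian=True))  # (2, 2), i.e. 1:1
```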
Different societies have different game-theoretical optima for their contrarian fraction. For instance, some societies punish vocal dissent more harshly, whether through informal or legal means, while other societies are more tolerant. Over time these differences should increase or reduce the number of contrarians. Societies that are more open to contrarians will have a better resistance to excesses of momentum and a higher innovation rate, while societies that are less open will be more orderly, but also less innovative and more prone to mass mania. Contrarians are also inherently resistant to tyranny, because whatever an authority tells them to do they will do the opposite, and this exerts a protective effect on their entire society. In good times a society without any contrarians will look neater, more orderly, steadier, and more respectable; but in the long run it will fall behind.
Contrarians are neither the only nor the most important group that helps a trust network resist epistemic load. Cognitive altruists constitute a second, more sophisticated group.
Cognitive altruists are people who seek out and share socially relevant information even when a-c<p. In other words, they aren't just active investigators, they're people who actively investigate in the face of incentives that favor passivity. They choose the losing strategy where efficient dummies choose the winning one. Trustworthy people can already be considered altruists in a sense, because they keep to the truth in cases where it would be to their advantage to lie; but being trustworthy and ignoring a-c<p are two different things, and it's the second I'd like to highlight here.
While cognitive altruists normally have high curiosity, curiosity alone isn't sufficient to make a cognitive altruist, since learning baseball statistics or other such empty information is of no use to others, and “the joy of learning” is included in the calculation of c. True cognitive altruists take on a genuine net cost and provide the benefit of actively investigated information to people around them, even if those people are passive freeloaders who give nothing in return. They have a passion for truth and take pains to correct public falsehoods. They try to destroy harmful illusions and bring the facts to public view even when those facts are unpopular. They resist trends not because they're contrarian, but because the trends are wrong. They fight against groupthink, but the benefits of their struggle ultimately go to the very groupthinkers they fight.
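The decision rule can be written out in a few lines. Here a is the private benefit of actively investigated information, c its all-in cost (joys of learning included), and p the payoff of passively absorbing conventional opinion; this gloss follows the notation established earlier, and the numbers are invented:

```python
# a: private benefit of active investigation; c: its cost; p: payoff of
# staying passive. The example values are invented for illustration.
def efficient_dummy(a, c, p):
    return "investigate" if a - c > p else "stay passive"

def cognitive_altruist(a, c, p):
    return "investigate"  # ignores the algebra and investigates anyway

print(efficient_dummy(a=5, c=4, p=3))     # stay passive, since 5 - 4 < 3
print(cognitive_altruist(a=5, c=4, p=3))  # investigate
```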
Once again, we should ask why such people even exist, when to all appearances they instinctively insist on a losing strategy that fails the basic algebra of a-c<p.
The evolution of cognitive altruism can be explained along the same lines as other forms of altruism. While the altruist himself suffers a cost without a benefit, his relatives enjoy a larger total benefit due to his sacrifice, which increases the frequency of altruistic genes. It's important to note that the altruist does not necessarily identify and target his kin with this behavior even though relatedness is the source of the fitness advantage. That's because in the prehistoric evolutionary context, it would suffice to target those with whom one interacts most frequently, due to the small size and high relatedness of tribes and the comparatively low number of inter-tribal interactions. By consequence the fruits of his cognitive altruism are, in the modern world, shared with all rather than targeted to relatives.
The smaller and less dense communities of prehistory should also have provided greater incentives to active investigators than do denser and better connected modern ones, because the effectiveness of passivity scales with both population and density. All things being equal, fewer people require a higher fraction of active thinkers, reaching 100% upon Robinson Crusoe's shipwreck and falling no lower than 50% upon Friday's arrival. A city of 10,000 and a city of 1,000,000 need the same number of meteorologists.
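The meteorologist point reduces to simple division: if a community needs a roughly fixed headcount of active investigators per topic, the required fraction of active thinkers falls off as one over the population. The headcount of one below is, of course, just for illustration:

```python
# Required fraction of active thinkers for a fixed headcount per topic.
def active_fraction(population, needed_investigators=1):
    return min(1.0, needed_investigators / population)

for n in (1, 2, 10_000, 1_000_000):
    print(n, active_fraction(n))
# 1 -> 1.0 (Crusoe alone), 2 -> 0.5 (Friday arrives),
# 10_000 -> 0.0001, 1_000_000 -> 0.000001: same headcount, shrinking fraction.
```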
Now, evolutionary psychology is far afield from the main thrust of this essay. And it's something I normally avoid in principle, because the temptation to give undue credence to speculation that sounds plausible but can be neither proven with logical certainty nor tested empirically is too great. Nevertheless, when basic math tells us that cognitive altruists are unfit in the modern environment, we're compelled to consider how past conditions might have differed from current ones to make sense of their behavior; and the account above is cogent.
Today cognitive altruists provide broad advantages to a mass society that doesn't reward them (or their relatives) in due proportion. Passive thinkers, meanwhile, have time and energy and opportunity to enrich themselves because cognitive altruists are spending their own time and energy keeping the collective beliefs that sustain our way of life from going off the rails. I'm not going to launch into a series of concrete examples demonstrating exactly how cognitive altruists help society, because this whole essay has been an exercise in cognitive altruism. Indeed, anyone who's read this far is a cognitive altruist to some degree. The fact that you now know bag bans are a sick joke is thanks to cognitive altruism—both yours and mine. You're consuming information about social problems that need to be corrected, but that the system lacks the incentives to correct; and by consuming and sharing that information without incentives, you're helping to push society in a better direction. For instance, if we were to add a cognitive altruist to the six knowers now in our toy model, it would be enough to break the 1:1 impasse, shifting his whole society in a direction which, while not guaranteed to be true, is at least more likely to be true than the uninformed assertions by interested parties that otherwise dominate debate.
In fact, only cognitive altruists can rescue a society whose informational incentives are dysfunctional, because only cognitive altruists are resistant to those incentives, and resistance to incentives is precisely what's required. What do I mean by “resistant to incentives”? Cognitive altruists seek a fundamental good even when it comes at a net personal cost. They don't calculate a-c<p. They just investigate anyway. Because they're indifferent to whether their investigation entails a net loss or gain, disincentives don't deter them unless they're quite substantial.
And yet, your loss and mine will be passive thinkers' gain. The state has always endeavored to harness the altruism bequeathed it by tribal evolution without rewarding that altruism to the extent that would be required to maintain it, and in the long run the expected result should be a continual decay of this precious legacy, concealed by a reduction in uncontrolled criminal violence that does nothing to reverse the more fundamental trend. Civilization squanders heroic bravery and self-sacrifice in the public square just as it does on the fields of war, and squanders the minor acts of heroism and self-sacrifice as well, only less visibly: altruism depletion, to give it a name. Even religion is all too often complicit.
We're doing nothing to slow such depletion. Nor is it anywhere even a topic of discussion. I'm raising the topic here because if advanced knowledge, and thus human progress, depend on trust, and if the accuracy of a society's collective beliefs depends on those who are willing to pay a personal price to defend us all from epistemic load, then human advancement and human prosperity depend, in the final analysis, on securing the persistence of people who are trustworthy and people who are cognitively altruistic, without which the whole thing will fall apart.
In failing to recognize how fundamentally knowledge depends on character, and indeed on a particular type of human capital that was produced, like coal and oil, by the pressures of the distant past, epistemologists have failed in turn to recognize how recklessly we're spending resources we haven't yet the means to renew—and which will one day run out. And it goes without saying that although knowledge is the topic of this essay, the same argument can be applied to other forms of altruism. By spending down the stock of people who are resistant to incentives, we march toward a world where systems that stumble into perverse incentives, as they inevitably do, can never be corrected short of a disastrous crash, and where the foundation of trust that allows civilization to be civilization crumbles away.
By now I've offered enough examples of how trust networks function and malfunction. In the fourth part of this essay I'll discuss techniques, both personal and institutional, for mitigating these harms and arriving closer to the truth.
Part IV: Techniques For Improving Belief Accuracy In A World Of Ad Hominems
A. Let's Crash A Plane (Black Box Verification)
Knowledge that's based on ad hominem judgments of trustworthiness is inherently less reliable than knowledge derived directly from perception or intuition. This is because we're often wrong about who's trustworthy, whereas we're rarely wrong in simple observations or matters of logic. Since each instance of trust-based information transfer in a knowledge production process has a chance of introducing fresh errors or falsehoods, the unreliability of trust-based knowledge is cumulative. Thus if we were wholly dependent on trust, the signal in advanced knowledge production would eventually be so degraded by noise that no further progress would be possible.
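A back-of-the-envelope calculation shows how quickly the degradation compounds. If each trust-based transfer is faithful with probability r (and errors strike independently, a simplification), then a chain of n transfers is faithful only about r to the nth power of the time; the value of r below is invented for illustration:

```python
# Cumulative reliability of a chain of trust-based transfers.
r = 0.95  # per-transfer fidelity (illustrative)
for n in (1, 5, 20, 50):
    print(n, round(r ** n, 3))
# 1 -> 0.95, 5 -> 0.774, 20 -> 0.358, 50 -> 0.077
```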
Fortunately we aren't wholly dependent on trust. We have an alternative means of confirming a complex body of knowledge that's rough but reliable. Treat it as a black box, ignoring its inner workings no matter how sophisticated they might be, and attend only to outputs—that is, predictions or products. If the outputs are good, deem the information within the black box to be true, at least approximately.
For instance, consider the scientific theory behind nuclear power. Not only is it too complex for you to fully understand without considerable study, but the experiments proving it are also impossible for you to carry out personally. Based on our previous analysis it might seem, then, that you could only determine the truth of the theory by evaluating the trustworthiness of the sources who propound it. Not so. It happens that your house runs on power from a local nuclear power plant. The engineers who build and maintain the plant all assert that the established science behind nuclear power is the basis of their work. You can therefore treat the theory behind nuclear power as a black box and judge its validity by its observable output without any understanding of the science. Simple: does it work? When you flick the switch, the lights in your house turn on. Therefore the theory is true, or at least true enough.
For a contrasting instance, consider the classic example of psychoanalysis. Imagine it's the 20th century and you're seeing a psychotherapist to help with your mental issues. But you don't seem to be getting better. You begin to wonder whether psychoanalysis is really true. Perhaps your therapist is just a masterful fraud. You look into the theories behind his therapy, but some of them are quite complex, there are many wordy volumes to read, and while you could grasp them if you took the time, you don't really feel like taking the time. So you treat psychoanalysis as a black box and evaluate the most basic measurement of its output. Namely, the cure rate. As it turns out, the cure rate is abysmal. You therefore deem the theory false without bothering to understand it.
As a third example, the efficacy of medical treatments can be determined by comparing all-cause mortality for the control and experimental groups in a large randomized sample. This endpoint expresses the result simply and thoroughly, and is difficult to fake.
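Sketched as a calculation, the endpoint is almost embarrassingly simple, which is exactly its virtue; the counts below are hypothetical:

```python
# All-cause mortality comparison between trial arms. Counts are hypothetical.
def mortality_rate(deaths, n):
    return deaths / n

control = mortality_rate(deaths=120, n=10_000)
treated = mortality_rate(deaths=95, n=10_000)
print(f"control {control:.2%}, treated {treated:.2%}, "
      f"absolute risk reduction {control - treated:.2%}")
# A death is a death, whatever the theory inside the box says about mechanisms.
```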
We use the black box verification method all the time. Almost no one personally repeats the foundational experiments that prove the validity of physics or any other science, but we can all observe that satellites and computer chips work, and that you can book flights to circle the globe, and that the people who make all these things possible rely on a body of knowledge which must therefore be true, or at least true-ish. And the main reason the hard sciences are less afflicted by epistemic load than other realms of human inquiry is that they make clear predictions which can usually be tested for accuracy in a straightforward way.
There are, however, numerous limitations to black box verification. It doesn't work in cases where outputs are difficult to observe and measure. It can't confirm or disprove historical accounts, nor directly verify predictions about rare events in the distant future. It's hard to apply it to very complex and interconnected systems like human societies, where a tangled multitude of causes and perpetually shifting conditions influence the final outcome. Furthermore, a theory may produce good outputs because it works well enough for a given use case, but still only be approximately and not strictly true (e.g., Ptolemy's astronomy is sufficient for maritime navigation). And the men operating the mechanism within the box—the ones creating products or making predictions—may lie about their theories and methods, or even be themselves deluded about the extent to which their theories are essential to their results, creating the illusion of a connection where none exists.
For instance, a personal trainer may recommend a mixture of beneficial and useless exercises. His clients will improve their fitness, but his understanding of exactly why will not be correct. Less innocently, a successful CEO—I'll invent an example—could falsely ascribe his high performance to his charitable willingness to hire underqualified ex-cons other companies reject, and from there he could go on to make further claims about human resources management that aren't true but which sound nice to his customer base, while the real reason for his success is the consistently high quality work he squeezes out of a very boring and standard engineering department by driving them to work long hours. Since pleasant-sounding lies about his methods may increase his company's profits, or even his personal social status at the expense of those profits, if we take him at his word without evaluating his trustworthiness, black box verification will generate a false positive.
So focusing on results can't fully eliminate our dependence on trust. But it does go a long way toward purifying the signal and reducing the noise in our knowledge base.
B. Let's Get Medieval (Social Technologies To Manage Trust)
We've developed an array of social technologies to manage trust and thereby improve the efficiency and reliability of knowledge production. Take, for instance, the organization of a generic manufacturing business. We don't usually think of business management as a knowledge problem, but it's possible to understand it as such.
Consider Knight Manufacturing Inc., a large company producing various widgets, which consist of various components, which are in turn produced through various processes. The CEO would like to verify that these processes are carried out in a fashion that maximizes the efficiency of widget production. But they're too numerous and complex for him to evaluate directly; he lacks both the technical expertise and the time. His solution is to take advantage of the black box verification technique we just discussed by organizing the whole company into black box fiefdoms. Although you probably haven't heard it explained in these terms before, you already understand how this works.
The CEO delegates the management of each division to a director. Rather than trying to evaluate every employee and process within those divisions personally, the CEO treats all the divisions as black boxes whose internal workings are unknown, and compensates the directors on the basis of the outputs of their assigned boxes (e.g. net profits on the widgets for which their respective divisions are responsible). The directors in turn don't understand every facet of engineering and production for every component, but instead appoint managers for each sub-division, treat their internal workings as a black box, and compensate them on the basis of their output. And so on.
This nested structure resembles a feudal state with a lord and vassals who in turn have their own vassals, each left to run his fief according to his own judgment and then evaluated by his lord on the basis of easily measured outputs like soldiers supplied and taxes paid. By incentivizing the desired outputs, the CEO also ensures his employees' motivations are aligned with the company's. This means he can trust them to do their work properly even if he never personally checks nor even understands the specifics.
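For readers who think better in code, here's a toy sketch of the fiefdom structure in Python. The names and the bonus formula are invented for illustration; the point is only that each lord reads a single measured number from each vassal's box and never looks inside.

```python
from dataclasses import dataclass, field

@dataclass
class Fief:
    manager: str
    own_output: float = 0.0        # profit produced directly, if any
    vassals: list["Fief"] = field(default_factory=list)

    def measured_output(self) -> float:
        # The only number the lord above ever sees.
        return self.own_output + sum(v.measured_output() for v in self.vassals)

def compensate(fief: Fief, bonus_rate: float = 0.01) -> dict[str, float]:
    """Pay each manager a share of his own box's output, recursively."""
    pay = {fief.manager: bonus_rate * fief.measured_output()}
    for v in fief.vassals:
        pay.update(compensate(v, bonus_rate))
    return pay

company = Fief("CEO", vassals=[
    Fief("Director A", vassals=[Fief("Manager A1", 400.0),
                                Fief("Manager A2", 250.0)]),
    Fief("Director B", vassals=[Fief("Manager B1", 600.0)]),
])
print(compensate(company))  # each fief judged purely on its measured output
```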
Admittedly this model is impracticable as presented, because human organizations are never so straightforward in practice. For instance, some directors might fake their output data if the CEO really never peers inside the black boxes to confirm they're trustworthy. Nevertheless, it illustrates in simplified form the most important method large businesses use to reduce their dependence on trust.
A capitalist economy can also be thought of as a way of solving a knowledge problem by means of black boxes. Consumers evaluate and buy products without any understanding of how the technologies work or what goes on within the companies they're supporting. They simply reward good outputs, and this rewards the companies that have good know-how and good methods. Over time competition and natural selection increase the economy's collective “knowledge” without any need for trust. Political types would be delighted to run an economy by evaluating and rewarding methods rather than results, but one would expect much lower efficiency from such an approach.
The feudal black box method used by businesses isn't the only social structure we've developed to manage trust. The bureaucratic or legalistic approach, essentially the opposite of the feudal one, is to organize an institution around algorithmic processes that minimize reliance on employees' individual judgment and reduce them instead to mechanistic executors. In other words, rather than aligning incentives, delegating power, and trusting subordinates' judgment, these bureaucracies restrict their employees' freedom of action so that outcomes are unaffected by individual whims, and therefore more trustworthy.
We also rely on institutions to mediate trust by issuing credentials and certifications. By evaluating the trustworthiness of a known credentialing institution instead of judging individuals on their own merits, we can outsource the effort required for due diligence and thus reduce our costs. In much the same way, branding serves as a form of certification that saves customers the trouble of evaluating the trustworthiness of products individually, let alone the trustworthiness of the company's individual workers. A respected brand can collect part of this savings in the premium it charges for its products. (Note that in each of these cases an “ad hominem” judgment of trustworthiness is still in play; it's just applied to organizations rather than individuals.)
Sadly, all these social technologies for minimizing our dependence on trust have flaws.
The problems with bureaucracies are almost too cliché to need explanation. Algorithmic processes take on a life of their own rather than efficiently executing their intended purpose; and corrupt or partisan employees can manipulate procedures which appear on surface examination to allow little room for subjective intervention, and not only produce their desired outcome, but do so with a convincing patina of legitimacy, since the bureaucratic design allows them to plausibly, but falsely, deny they have any scope for personal influence.
Credentialing institutions can be captured by partisans and thereafter certify mediocre orthodoxy rather than excellence, squeezing out honest dissenters and cantankerous geniuses along with crackpots and incompetents, and thus eventually degrading the quality of knowledge production. Worse still, bureaucracies and credentialing institutions may mutually reinforce each other and calcify this type of inefficiency: bureaucracies mandate credentials to limit the scope of human judgment in personnel decisions, causing the bureaucracy itself—and other institutions it regulates—to be dominated by credentialed employees who then press for still greater emphasis on credentials out of self-interest and self-regard, whether or not the credentials are accurate assessments of competence, and perhaps especially if they aren't.
Black-box feudalism can fail too. Not every project can be separated into components with easily measurable outputs. Nor can all outputs be measured and incentivized to align the motives of employees and owners. For instance, the search for an Alzheimer's cure is a long project, longer than investors' time horizons and indeed longer than any individual scientist's career. The steps toward a cure don't produce partial cures, nor even any profitable technology. The best and most diligent researchers might end up following a false trail to a dead end, and in some cases this negative result may even be a prerequisite for further progress. Thus it's not possible to organize the project simply by incentivizing the desired final endpoint of a cure. And because measurable intermediary outputs like citations aren't and can't be tightly linked to the goal, incentivizing them invites the distortion described by Goodhart's Law (when a measure becomes a target, it ceases to be a good measure), creating an illusion of progress where none is being made and slowing genuine progress.
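To illustrate that distortion with a deliberately crude toy model (all numbers invented): suppose researchers split their effort between genuine progress and citation farming, and citations only loosely track real work. Once pay depends on citations, effort migrates to the proxy, and the metric rises exactly as the thing it was meant to track decays.

```python
# Goodhart's Law in miniature: optimizing the proxy decouples it from the goal.
def outcomes(effort_on_proxy: float) -> tuple[float, float]:
    effort_on_goal = 1.0 - effort_on_proxy
    progress = effort_on_goal                           # real steps toward a cure
    citations = 0.3 * effort_on_goal + effort_on_proxy  # the measurable proxy
    return progress, citations

for label, effort in [("citations not incentivized", 0.1),
                      ("citations incentivized    ", 0.9)]:
    progress, citations = outcomes(effort)
    print(f"{label}: progress={progress:.2f}, citations={citations:.2f}")
# progress falls from 0.90 to 0.10 while measured citations rise from 0.37
# to 0.93: an illusion of progress where none is being made.
```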
Social technologies that minimize our reliance on trust are valuable and indeed necessary for advanced knowledge production. But they don't actually eliminate trust and ad hominem judgments from the knowledge production process, and they bring new risks of their own.
I'd like to write at length about how we can improve our institutions to be more trustworthy. Unfortunately that project is beyond my expertise, so instead I'll limit myself to one concluding observation that may prove useful.
The soft sciences presently suffer from very low reliability. Far too many—maybe even most—published research results are false. Nevertheless, this doesn't necessarily imply that all knowledge producers in the soft sciences are frauds doing sloppy work. A significant fraction of them may be scrupulous and doing excellent, reproducible work. The problem is that outsiders have no way to know which fraction. They don't have time to scrutinize individual papers and the careers of individual researchers to determine which are trustworthy. So although the field of e.g. academic psychology is made up of many individuals and universities of varying quality, it operates as a single institution, and ad hominem judgments of reliability are directed at this single institution, not at the individuals within it. That means if half of the field is bogus, the public will ignore the other half too (should ignore, at least; the prestige associated with academia still causes laymen to take bad science more seriously than they ought). The principle underlying this wholesale dismissal is simple. A compass that's wrong half the time is a worthless compass. Inferior, even, to folk wisdom.
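To see why in quantitative terms, here's a minimal sketch under a symmetric-error model I've invented for illustration: suppose the field endorses true claims, and rejects false ones, with some fixed accuracy. An endorsement then multiplies your odds that a claim is true by the likelihood ratio accuracy / (1 - accuracy). At 50% accuracy that ratio is 1, and the field's verdict carries no information whatsoever.

```python
# Bayesian update on "the field endorses this claim" (toy symmetric model).
def posterior(prior: float, accuracy: float) -> float:
    """Belief that a claim is true after an endorsement by the field."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = accuracy / (1 - accuracy)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

for acc in (0.9, 0.7, 0.5):
    print(f"field accuracy {acc:.0%}: "
          f"prior 50% -> posterior {posterior(0.5, acc):.0%}")
# At 90% accuracy an endorsement moves you from 50% to 90% confidence;
# at 50% accuracy it leaves you exactly where you started. Worthless.
```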
So if you're an academic psychologist doing high quality work, it isn't enough to just keep doing high quality work in your own little corner. It's essential that you police the frauds and toss them out of the field; otherwise the reception of your own work will suffer, no matter its quality. You need to make academic psychology as a whole into a reliable compass by reducing the error rate. How that can be done is a question I'll leave you to answer.
C. Let's Read Fiction (Techniques For Navigating A World of Ad Hominems)
If our beliefs are really so dependent on the unreliable foundation of ad hominem judgments whose defects we discussed in the first two parts of this essay, it might seem that all is lost and we can never find the truth. But once we recognize and accept the way we really know things we can use our new self-understanding to fine-tune our methods and reduce our error rate. This can't be done by following just one or two simple rules. There's a long list of techniques that can help us to navigate the chaotic sea of character and find the truth.
- Stop pretending ad hominem judgments are irrational and avoidable. Instead, accept that they're necessary and use them consciously and intelligently. As I explained in Part I, even the smartest and best informed person needs to rely on ad hominem judgments for much of his knowledge. If you pretend you can know everything directly, or even that you can suspend judgment for any question you haven't answered directly, you'll only sink yourself deeper into delusion, and your beliefs will be less rather than more accurate.
- Don't delude yourself into assuming your own claims will be evaluated entirely on their merits. Instead, accept that you too will be the object of ad hominem judgments, and these judgments will significantly impact the reception of your claims. To be an effective knowledge disseminator you should signal trustworthiness with your track record, your outward presentation, your credentials, your character, and your alliances. Even if he has a brilliant idea, a man with no credentials, no connections, shabby clothes, and poor interpersonal skills will lose far more time lobbying to get that brilliant idea taken seriously than he would have spent if he'd polished his signaling first. Businessmen and politicians understand this intuitively. Unfortunately the kinds of men who invent important new ideas usually do not, perhaps because it's precisely indifference to signaling and partisanship that enables them to discover what other people overlook.
- Use the black box method to test claims whose details you can't understand or evaluate directly. Just check the final outputs: if they're good, the information within the box is probably good too; if they're bad, the information within the box is flawed at best, and likely wrong.
- Evaluate your sources' record. Do they have a history of making accurate predictions or producing good practical results? Those who've been right in the past are likely to be right again. And you too should try to build a track record of reliability, because in the course of time it will add weight to your claims. However, there's an important caveat, which I'll explain below.
- Distrust prestige transfer. Some public figures try to leverage their record of success in one domain to create the impression of reliability in another, unrelated domain. For instance, someone who's distinguished himself as a linguist or chess player might try to transfer the prestige he's accrued in his original field to a new field, like political theory, where it hasn't been honestly earned. But success in different domains requires a different cognitive style, different strengths, a different knowledge base, and years of different experience. It's not uncommon for successful CEOs to have naive views about topics that don't directly impact their business operations. You can't assume the composers of masses are experts in theology, nor vice versa. So prestige transfer should awaken your distrust.
- Evaluate your sources' incentives and disincentives. Sources who are incentivized to tell the truth are inherently more reliable. Yet when it comes to topics of public relevance such sources rarely exist, and the best one can hope for is to find sources who aren't too strongly incentivized to lie. Of course, a few rare people do compulsively seek and tell the truth due to an innate altruistic instinct, but they usually lack the motivation to explore complex topics in depth, and are therefore less useful than one would wish. (As previously mentioned, betting markets are an attempt at solving this problem.)
- Evaluate your sources' cognitive styles and use this information to interpret their claims. Some people tend to be paranoid and overstate possible negative outcomes, others are undisciplined and jump on new ideas without thoroughly examining them, others are especially prone to partisanship, others are stubborn and never back down when they're wrong, etc. By identifying the character of a speaker you can estimate the risk he'll make a habitual error or exaggeration, and use this to translate his claims into a more accurate form. For instance, a paranoid thinker can be expected to overestimate risk, treating low probability futures as if they're matters of pressing concern, so when absorbing his warnings (e.g. about asteroids) you should downgrade their urgency.
- Build a stable of trusted sources. Because past record is a good indicator of present reliability, it's important to observe your sources over a period of years. This is a time-consuming process by nature, so when you discover a reliable source you should consider him a valuable long-term acquisition.
- Lower your confidence in the most popular sources. Sources and claims that are afflicted by higher than average epistemic load are amplified, especially in the social media ecosystem. Because of this the most prominent people are not the most reliable people. You should interpret great popularity among the general public as a negative sign with respect to trustworthiness.
- Give obscure outsiders a chance. If you always follow the obvious signals of trustworthiness, like credentials, respectable presentation, and uniformly palatable opinions, you'll sometimes trap yourself in a cul de sac of mutually reinforcing conformists who've shut out dissenters. Once in a while you should make a foray into the wilderness, because obscure and disagreeable outsiders who are ridiculed, denounced, ostracized, and shamed by the mainstream occasionally are right when everyone respectable is wrong. Usually, of course, they're a waste of your time.
- Estimate the effect of signaling load and attempt to correct for it. Comb through all your beliefs to determine which function as positive social signals, and lower your confidence in these beliefs. You should lower your confidence even more if you've fallen into the habit of using them as signals yourself. Of course, they might be true; but the expected pattern is for them to be exaggerated in the direction of optimal signaling, and they could even be empty fabrications. You should also raise your confidence in beliefs that send negative signals. It's likely that some of these are correct, but socially unpalatable, and therefore unfairly denounced. Of course, it goes without saying that you should try to avoid overcorrecting. (Note that a dissident subculture isn't immune to signaling load, but rather develops its own local signals that aren't functionally different from those of society at large. Thus, being a dissident, or contrarian, or minority does not in any way exempt you from the need to correct for signaling load.)
- Estimate the effect of partisan load and attempt to correct for it. Most people already assume the truth falls somewhere between the extreme statements of opposing factions, so it might seem that correcting for partisan load is as simple as embracing moderation and aiming toward the center. However, this type of lazy centrism isn't actually a good way to find the truth. Political actors are experts at manipulating it. For instance, as we discussed earlier, they can use propaganda to portray their favored views as normal and centrist even if they're partisan minority views in reality. They can also encourage their extremists to be more extreme in order to move the perceived center closer to their side. (E.g. Trust Network A says the answer is 1, Trust Network B says the answer is -1, a lazy centrist concludes the answer is 0. Whence political operators in Trust Network A can use a common sales tactic to get their way: by overshooting and claiming the answer is 3, they cause lazy centrists to conclude that it's 1, their original desideratum. Because they're vulnerable to this tactic, lazy centrists can actually encourage extremism! A toy sketch of this overshoot tactic appears after this list.) There are, furthermore, plenty of historically verifiable cases where one side turned out to be wholly correct and the other wholly wrong, so that centrism would not have arrived at the truth. Thus, when you try to correct for partisan load, you shouldn't just take a moderate position between two sides and stop there. It's better to analyze the effects of partisan load carefully first.
- Don't assume that partisanship as such is bad. Partisan load does degrade the accuracy of our beliefs, but that doesn't necessarily mean you should reject partisanship. The reason partisanship isn't wholly bad, and indeed the reason it's a natural instinct in the first place, is that it's entirely possible—even likely—that an enemy is really your enemy. In other words, one trust network may be a real antagonist whose members really wish to deceive you and do you harm because they have interests that are contrary to yours. Humans are individuals who exhibit tribal coherence. If you insist on being naive and judge everyone only as an individual, you and your allies risk defeat, and in the worst case, even annihilation. Someone who encourages you to ignore partisanship when a genuine conflict is underway is not your friend, but your enemy, or at best a fool. Before rejecting partisanship you should evaluate the whole landscape in detail and choose a side if need be.
- Hide or camouflage unpopular views and signals of partisan alignment when trying to communicate to moderates, opponents, and general audiences. If you send the wrong signals or create the wrong associations you'll trigger an immediate rejection of your claims, no matter how good or true they are, because you'll be identified as an enemy and therefore dismissed. One solution to this is to focus narrowly on your issue of interest and avoid addressing other topics entirely. This prevents any controversial or partisan-aligned views you may hold from becoming a divisive distraction and reducing your impact. Another tactic is to advocate for positions that are more moderate than your actual beliefs, pushing for a direction and then pushing again rather than selling your ultimate target up front. Both of these approaches are in common use.
- Use your instincts. We have fine intuitions for making ad hominem judgments in context, and the rational judgments we make in the abstract are quite myopic in comparison. Good instincts are a serious asset, so if you have them you should value them. This is not, of course, to say that they can never be wrong.
- Use sensory information. Factual information is conveyed most efficiently in text form. However, information about the human subjects who transfer this factual information is conveyed most efficiently in audiovisual form, and some of the information that can be found in appearance and voice is completely absent from text.
- Look out for hackers. Look for signs that someone is intentionally manipulating ad hominem signals to induce trust or distrust where they aren't merited. Unfortunately it's not always possible to identify bad actors before they've done harm.
- Be forgiving of humans who are in the grip of bad ideas. I'm not so keen on this one myself, dear readers, but I feel at least obliged to mention it in order to signal care. All of us have some wrong and indeed outright stupid ideas we can't recognize as such. This isn't necessarily because we're stupid ourselves, although often that is indeed the case. Rather it's because ad hominem judgments, while unavoidable, are an imperfect source of knowledge, and they can't be relied on to filter out every bad idea percolating through our trust networks. We ought to be forgiving of others who are also in the grip of foolish ideas thus acquired, especially when they're young and inexperienced.
- Avoid overconfidence. It feels good to be confident in the beliefs of your trust network. But for the reasons just mentioned, it's inevitable that this confidence will sometimes be misplaced. If you want to form a probabilistically accurate picture of the world you should abstain from the joy of overconfidence, and always remain open to the possibility that some of your beliefs are false. In fact, it's safe to assume that some of your beliefs are false.
- Read fiction. As a writer, of course I would tell you to read fiction. So obviously you shouldn't trust me. But the reason I've littered this essay with so many examples is that, outside of real-world experience, narratives are the best means we have for thinking about and understanding ad hominem judgments and trust networks. Trust is the stuff novels are made of. Even cheap soap operas often take trustworthiness and trust networks as their main topic, with the drama unfolding around questions like: who's conspiring with whom, who's really on whose side, who's lying and who's telling the truth? If you keep your nose buried in numbers and make the mistake of dismissing everything else as wordy nonsense, you might end up trusting the wrong people and pay the price for it.
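As promised above, here's a toy numeric sketch of the overshoot tactic from the discussion of partisan load. All numbers are invented; the point is only that averaging whatever the two loudest claims happen to be hands the steering wheel to whoever exaggerates most.

```python
# Lazy centrism as an averaging rule (toy model, invented numbers).
def lazy_centrist(claim_a: float, claim_b: float) -> float:
    return (claim_a + claim_b) / 2

print(lazy_centrist(1, -1))   # honest claims: the centrist lands on 0
print(lazy_centrist(3, -1))   # A overshoots to 3: the centrist lands on 1,
                              # exactly what Network A wanted all along
```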
So. You've made it to the end, dear readers. Ad hominems are certainly a logical fallacy (and nothing in this essay should be construed to imply otherwise). But they're also indispensable, and we ought to accept that and give ourselves permission to make good use of them. If we don't, we'll simply make bad use of them. In other words, if we're to have the most accurate knowledge we can actually obtain given our practical limitations, our goal shouldn't be overcoming bias, but optimizing bias.
If you feel you've learned something from this free article, consider making a donation to support my work.
Trust Networks was written in 2022. Part III was added in 2024.
Other writing by J. Sanilac
Memoirs of an Evil Vizier – a comic novel
Dispelling Beauty Lies – the truth about feminine beauty, including practical advice for women
Ultrahumanism – a middle path through the jungle of modern and future technology
A Pragmatical Analysis of Religious Beliefs – are pragmatism and belief opposites?
Against Good Taste – aesthetics and harmful social signaling
Critique of the Mind-Body Problem – it's not solvable
GIMBY – a movement for low-density housing
The Computer-Simulation Theory Is Silly – GPWoo
End Attached Garages Now – a manifesto
Milgram Questions – what they are and how to call them out
Amor Fatty – how an obesity cure will end the body positivity movement