Analogies are ubiquitous in writing, and are likely most common in writing about the social sciences. Perhaps it is because the social sciences are so terribly imprecise, impossible to quantify in the manner of the physical sciences, leaving the use of analogies as the only means of analyzing situations and drawing conclusions. Used properly, there is nothing objectionable about argument by analogy; I have used it extensively myself. So long as one bears in mind that being analogous is not the same as being identical, and those using the analogies choose them well, selecting analogies that match in all essential respects, they can provide quite useful insight.
On the other hand, analogies can also be terribly misleading. If the situation chosen bears only a superficial resemblance, or differs in some essential way, it is easy for an analogy to mislead rather than elucidate. Sometimes, perhaps, this is done intentionally, by those whose arguments would not stand on their own, but more often, it seems, those who offer such bad analogies are themselves misled*. Whether offered sincerely or to deceive, bad analogies are rather dangerous, as, being somewhat persuasive, and seeming sound to those unaware of which aspects of the argument are essential and which are not, they can easily lull many into a completely false sense of understanding of a situation about which they remain woefully uninformed.
Perhaps it should come as no surprise to those who read my blog regularly, but a great example of the harm a bad analogy can do is found in the case of my favorite whipping boy, Wikipedia**.
Wikipedia arose out of a very simple analogy. The idea was to take the principles which had worked so well in the free software movement, and apply them to online information. Thus, they hoped to take the volunteer collaboration that had created all sorts of quite popular software, such as Perl and Linux, and use it to create a repository of knowledge***. It sounds good on paper, I suppose, provided you don't think about it too long. But if you do, then you begin to notice that there are some very real problems with the analogy between software and encyclopedias, most notably, the way the two are used, and the minimum requirements to consider each a success.
Let us look at the original model, the development cycle of free software. The primary principle of this model was expressed early in the formation of the free software movement in the aphorism "release early, release often." This expresses perfectly the basic concept. Free software, being a collaborative effort, does not wait, as does commercial software, for all the bugs to be ironed out. Instead, once the software is in some vaguely useful form, that is, when it is sufficiently complete to make clear what it is supposed to do, and for coders to see how it is done, the code is made public. Those who have an interest in the subject matter then download the program and source code, start using it, and, when it breaks, go into the code to make patches, which they then share. In addition, these users also spot features they might need that were not included, and add them, resulting in code that is, through an iterative process, gradually freed of bugs, and provided with expanded features.
Of course, throughout this process there is a give and take. Sometimes a given patch breaks another, or reintroduces a bug previously fixed or removed. Or sometimes new features introduce new bugs, which need to be fixed in later revisions. Or, perhaps, a new feature might prove unpopular with the majority of users, and thus is ignored or eventually removed by later developers (sometimes only to be reintroduced by a strongly attached partisan later). In short, the code goes through a process whereby the number of bugs shrinks and grows, while at the same time the number of features and utility of the code undergoes a similar process, hopefully, despite all the oscillations, trending toward ever better and more useful code.
And this was the model upon which Wikipedia was based. Through an analogy to this design model, it was decided that an encyclopedia could be created which would be written by enthusiastic amateurs and unpaid volunteer professionals, with each contributing his own content, revising content that had previously been provided, and expanding the encyclopedia to include previously absent material. As with free software, and like that well-known (if cynical) description of the adversarial courtroom, the interplay of various individual contributions would gradually trend toward a more comprehensive, more error-free encyclopedia.
I have written before about the logical fallacies inherent in this model, but as they point out one of the problems with the analogy, allow me to repeat those objections once more. In free software, generally there are few really strong emotions involved. Sometimes a programmer may be attached to a particular way of doing something, or become fixated on including or excluding a specific feature, but, for the most part, these conflicts have little impact. At worst, the dissident developer will "branch" the project, creating an alternate version which matches his preferences, or, more often, will make an optional "module" containing the feature he prefers, so others can choose to use it or not.
Encyclopedias are not like programming. Many topics elicit very strong feelings. Whether it is debate over whether or not US foreign policy in 1950s Guatemala was justified, or over the proper attitude toward East Timor, people have strong feelings and are not about to give them up. And thus there take place, not iterative revisions tending toward truth, but "reversion wars," where one individual changes an article and another changes it back, creating an endless cycle of opinionated and contradictory positions. And, unlike software, there is no way to create modular truth, to remove some controversial facts and make their inclusion a user choice. One must have a single narrative, one description, and thus the writers must agree on a final form, at least for the moment, or else the user is going to be lost in a welter of changing articles.
Which brings me to a second significant difference which the analogy hides. Software allows users to choose to use an earlier version. In the case of almost all free software, this takes the form of a "stable" release, that being an earlier version of the program which users and developers have found to be relatively free of bugs and capable of performing the functions desired. Of course, it omits a number of newer features, and may retain some bugs since fixed, but it provides users unwilling to tolerate bugs a starting point in using the software, a version which is more like commercial software. And in so doing it avoids many of the bigger headaches of the free software model, the constant introduction of new bugs as developers continue to modify the code.
What the analogy ignores is that such an option is impossible with an encyclopedia. As I said above, you must have one version. Truth is not modular, nor can it exist in revisions. The encyclopedia, of necessity, presents a single, correct page. Yes, there is a change history, but it is silly to think someone would choose to read the version of the page from four weeks ago to have a more stable encyclopedia. No, the encyclopedia must always present the current page as closest to the truth, or else the whole philosophy behind Wikipedia falls apart. And thus, for better or worse, the user is left with a single choice, without the safety valve of a "stable" release.
Finally, another point I have often made, though in different form: software and encyclopedias have differing demands. A program can work 90% of the time and be acceptable. A few bugs, a few crashes, can be allowed. The revision model even relies on this. If a single bug were enough to make code unusable, then the free software release model would never work; it is precisely because some bugs can be shrugged off that this model works at all.
On the other hand, reference books have to be as close to 100% accurate as possible. Think of it this way: in the example above I said a 90% functional program is acceptable. What about a 90% accurate encyclopedia? If you know in advance 1 in 10 articles, or 1 in 10 facts, is wrong, can you even use the encyclopedia? Not knowing whether what you read is true or false (and if you knew enough to evaluate the truth, you wouldn't be using an encyclopedia), you must assume everything is false, making the entire encyclopedia worthless****. Thus, the analogy also fails by not recognizing the very different standards by which the two are judged.
But this is not an article about Wikipedia, but about analogies. And that brings me to my conclusion. As I said above, analogies can be very useful, and in social sciences may be the only way to argue. But one must be very careful with any analogy, even the most seemingly innocent, as making a false comparison can leave you convinced you know the truth, when really you have bought into a comparison between very dissimilar things.
* In my late teens and early twenties I was fond of Camus. However, even before I became a full-fledged Objectivist and rejected the absurdities of Existentialism, I saw that many of Camus' descriptions of human thought were based on terribly tortured analogies. Sometimes they sounded good, and it was clear Camus himself found them convincing, but even when I was much more inclined to a sympathetic view of his works, I found some of them so far-fetched as to be absurd. (I used to mock him by writing fictional quotes supposedly from "The Myth of Sisyphus". For example: "A man's soul is a desert, and like a desert is filled with cacti and aloes and such. These plants all need water to grow, and so you must make sure your soul is well hydrated. Thus, drink five glasses of water a day." It was a bit more goofy than his arguments, but not much.)
** Wikipedia is the subject of a lot of criticism in this blog and its predecessor "Random Notes". This is not due to any particular dislike for the site, but rather, quite simply, because I find the philosophy behind Wikipedia both completely false, and a good example of many of the errors of reason particularly common to our modern minds. See "The Tragedy of the Creative Commons".
*** I am not going to argue here about the free software philosophy, as I have looked at two very different design models in my essay "Some Libertarian Analogies", as well as two different approaches to copyright in "Copyright as Politics". However, I would point out that, in the case of the most successful free software, such as Linux (Linus Torvalds) and Perl (Larry Wall), there has always been a single personality, or at most a very small group of developers (say, two or three), behind the initial product, and that person has generally kept somewhat tight control over its future course, making the claim of complete free and open collaboration a bit suspect.
**** Obviously no encyclopedia is ever going to be perfect, but commercial print versions do their best to present the best current understanding of the matter in question, presenting contentious or unsettled matters as such, if they are presented at all. Thanks to the ability of anyone to edit Wikipedia, many articles contain quite controversial positions presented as fact. Nor is that all. As I pointed out before, even if vandalism is quickly caught and corrected, the fact remains that, at the moment you read it, you have no way to know if the last edit was by an expert, an ill-informed amateur, a proponent of a crank theory, a hoaxer or a lunatic. Unless the vandalism is terribly obvious, it is impossible to know if your page is real, valid information, presented honestly, or a hoax. (And this ignores the fact that we all have heard of hoaxes which remained on Wikipedia for a long time, sometimes being cited by other articles, or even journalists. So the situation is even worse than this best possible version I am presenting here.)