The reader is invited to answer the following multiple-choice question: Which of these prophesied ends of the world, as gleaned from various information sources on the (End of the) World Wide Web, is most likely to occur in the year 2000 or shortly thereafter?
See end of article for related links.
A. The detonation of an atomic bomb in a major European city by a Libyan terrorist, as predicted by Pope John XXIII in 1961.
B. The sudden opening of the secret chambers in the Great Pyramid at Giza, with the revelation of secrets that lead to the establishment of Satan as a “public figure,” causing the American militia to start a “massive” war that eventually destroys the world, as predicted by Arizona militia leader William Cooper.
C. A worldwide atomic conflagration, as predicted in a hidden message in the Pentateuch.
D. The capture by the CIA of a space alien who crash-landed in the New Mexico desert. The alien proves to have escaped from a planet destroyed by God, who is “furious with His creations everywhere,” and is destroying them one by one as he works his way through the galaxies. Next up is… well, take a wild guess.
E. Ice, caused to accumulate by the first alignment in 6,000 years of Mercury, Venus, Earth, Mars, Jupiter, and Saturn, builds up on the South Pole to the point where it tips the world’s axis and destroys all life, as predicted in R.W. Noone’s Ice, the Ultimate Disaster.
F. The sun takes a turn for the superhot, melting the polar ice caps and flooding the world, sending to Hell all but the genuinely repentant, as foretold in three scrolls found in Noah’s Ark, which was unearthed more or less intact on a slope of Turkey’s Mount Ararat.
G. At the stroke of midnight on January 1, 2000, computers of all sizes and shapes the world over begin failing, as the “millennium bug” or “Y2K bug” comes alive inside the machines. Mainframe, midi-, mini-, and personal computers are joined in the collapse by computer chips embedded in automobiles, airplanes, elevators, medical machinery, and various other machines and appliances, in a conflagration that rapidly spreads to the point where it ends civilization as we know it. Electricity and telephone service are halted. Financial markets and the world banking system collapse. Prison doors suddenly pop open, turning loose hordes of monsters. Hospitals cease operating—sometimes in mid-operation—and patients die. Worldwide depression, famine, and anarchy result. So it is foretold by legions of computer engineers.
This bizarre turn of events will be caused by the practice in computer programming of using only the last two digits of a year—86 for 1986, for example—in software code. When the date clicks over to 00, computers and computerized machinery will read the date as 1900, with devastating consequences.
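The mechanics of the bug are easy to see in miniature. The sketch below is my own illustration, not code from any actual legacy system: a date calculation that stores years as two digits, in the style of much COBOL-era software, alongside the "windowing" workaround that many remediation teams actually applied.

```python
def years_elapsed(start_yy: int, current_yy: int) -> int:
    """Naive two-digit arithmetic: breaks when the century rolls over."""
    return current_yy - start_yy

# A loan opened in 1986, checked in 1999: works fine.
print(years_elapsed(86, 99))   # 13

# The same loan checked in 2000 ("00"): the machine sees 1900.
print(years_elapsed(86, 0))    # -86, nonsense that can derail downstream logic

# The common "windowing" fix: pivot on a cutoff year, so that
# 00-49 is read as 2000-2049 and 50-99 as 1950-1999.
def expand_year(yy: int, pivot: int = 50) -> int:
    """Expand a two-digit year to four digits using a pivot window."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(0) - expand_year(86))  # 14, the correct span
```

Windowing, note, only postpones the reckoning: a pivot of 50 merely moves the cliff out to 2050.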
The catastrophe can be avoided, say the folks at http://www.earthascending.com/maya/Cal_Index.html, if we would only stop using the unnatural Gregorian calendar, an “artificial time” calendar “which the accidents of history happened to impose on humankind…. The Y2K computer crisis is inseparable from the question of calendar reform…. The Gregorian 12-month calendar and the related artificial timing device, the 60-minute mechanical clock, are both based on the erroneous assumption that measurements of space may be used to measure time. This error in time… leads to the artificial and mechanical 12:60 timing frequency upon which all of modern human civilization is based. The result is a self-destructive society addicted to mechanized speed, increasingly at odds with nature, and so short-sighted in its infatuation with technology that it has written its own death-wish: Y2K.” If we would only switch to “the regular measure of the Perpetual 13-Moon Calendar,” as devised by the Mayans, we would fall into the rhythms of the natural world, shed our pretensions, and be saved.
Well, almost. It turns out that this solution will only buy us 12 more years, because…
H. The Mayan calendar begins with the birth of Venus and ends 13 “baktun cycles” later, at Winter Solstice 2012. And the end of the calendar brings with it the end of the world.
Now for the results of your test: If you answered A, B, D, E, F, or H, you’ve been spending far too much time on the Internet. If you answered G, you have passed the chiliastic straight-face test, and have won the right to read on.
The obvious reason to believe in the Millennium Bug scenario and throw out the others is the credibility—and the ubiquity—of Y2K doomsayers. While the other end-of-the-world stories are propounded by the usual suspects—murky biblical passages, cave-dwelling survivalists, interpreters of cryptic papal pronouncements, wacko writers—the Y2K scenario is predicted by thousands of computer hardware and software engineers, eminently rational and scientific types who figure to be dispassionate in the face of irrational fear and mass hysteria. We tend not to associate superstition with spreadsheets.
The engineers have been joined by choruses of sober-faced lawyers, financial managers, government officials, and Defense Department officers who keep cranking out relentlessly analytical reports on the current state of the nation’s computer systems, the high cost and complexity of repairing them in time, and the utter unlikelihood that catastrophe can be averted. The US government’s Defense Logistics Agency, for example, breaks down the problem into numbers that sound daunting, to say the least. DLA says that it has 39,577,427 lines of code in its various computer systems, underlying 60,000 programs, 33,416 screens, 236,271 files, and 10,379 database tables. The system is a patchwork using 77 computer languages, 35 different hardware platforms, 16 operating systems, and 311 commercial software packages. After having inspected 5 million lines of code, the DLA finds that between 1 percent and 5 percent of it must be rewritten, at a cost of between $1 and $8 per line, in order to forestall a bite from the millennium bug.
The DLA summary highlights the central reason Y2K doomsayers are so dismayed: the complexity and interconnectivity of today’s computer systems. So many different systems from so many different computing generations interface in so many different ways that it is nearly impossible to predict how widespread the impact of a single Y2K failure will be. Combined with the difficulty of finding a bug in million-line programs that can be more than 20 years old and written in a language (COBOL) no longer used in state-of-the-art computing, the dimensions of the problem are largely unknown. Even if armies of programmers were to correct millions of date-calculating lines of code that would fall prey to the Y2K problem, no one will ever know whether all of the vulnerable lines have been found and fixed. Nor does anyone know how much havoc a single vulnerable line can wreak by spreading the consequences of its own collapse through its networked connections with healthy systems.
Engineers, financiers, and managers of large systems everywhere are spending enormous amounts of time trying to anticipate, define, and circumvent the Y2K problem. The federal government has budgeted $5 billion in 1999 for prevention. Fortune 500 companies will spend an additional $11 billion, with no assurance that their efforts will eliminate the problem. And the Federal Aviation Administration, looking at a network of 290 airlines, 550 airports, 17,000 suppliers, and some 23 million lines of code on 250 machines—some of which, incredibly, still use vacuum tubes—already has taken the “triage” approach. Since there isn’t time to fix everything, the FAA has decided, we have to pick which systems to risk leaving uncorrected.
Capers Jones, chair of the Burlington, Massachusetts, company Software Productivity Research Inc., a consulting firm that measures the efficiency and efficacy of software and tailors it to a company’s needs, offers an unusually cogent and dispassionate analysis of the problem and its potential danger to civilization. In a paper titled “How Serious Is the Year 2000 Software Problem?” he breaks Y2K down into tangible components. There are, by his estimation, 36,000,000 software applications in the United States, and most likely 50 percent of them are both active and “have Year 2000 hits.” Based on 25 years of “accumulated research on the efficiency of finding errors of other kinds,” Jones calculates, “the US average for ‘defect removal efficiency’ is just about 85 percent,” which means that under the “best-case scenario,” there would be “just over 1,700,000 unrepaired applications.”
After noting that the US is in better shape than any other country to deal with Y2K, and going through similarly complicated and detailed prognostication on the scope of the problem worldwide, Jones attempts to predict various success levels of efforts to eliminate Y2K problems. Repair only 80 percent of the Y2K bugs, he estimates, and there will likely be a severe worldwide recession. A 70 percent repair rate would result in worldwide depression; at 50 percent many governments would be toppled; and there would be worldwide famine if only 25 percent of the problems were fixed in time. So the real problem is not so much the bug itself as our efficiency in attacking it.
The leading repository of alarming and terrifying Y2K information is Gary North’s Web site, where various worldwide ripple effects, from the collapse of financial institutions to famine, are breathlessly predicted. “The domino effect is the problem of falling systems,” North writes. “One system fails, but another is dependent on it…. This is what makes Y2K the most complex problem facing the world—possibly ever.” Among the possible dominoes: farming. “Modern commercial farming is tied to hybrid seeds. The plants produced by hybrid seeds produce seeds that will not produce healthy plants if planted. Every year, almost every large farm on earth must re-order another batch of hybrid seeds. If, for any reason, the seed companies fail, or the banks fail, farmers will not be able to plant anything. This will lead to a famine. Let’s not hedge our words: FAMINE…. If this is one of the dominoes, the result will be widespread starvation.”
Not one to err on the side of understatement, North describes Y2K as “a very big problem, worldwide in scope and without historical precedent (unless we count the Tower of Babel).”
This invocation of Babel is both troubling and telling. For no matter how detailed and engineerishly reasonable the explanations of the Y2K problem and its potential consequences, I have never found it possible to believe that it amounts to anything genuinely dangerous. The problem with the millennium bug in my mind has always been that it is too perfect a bug for our times. It is too apt a symbol to be real, too ingenious and appropriate an end to 20th-century civilization to be credible. It belongs in the realm of art rather than science. It reads like the Tower of Babel story updated for our age. Ours is a civilization that has fallen in thrall to engineers and scientists, who teach us that all religion, all belief in God, all faith, all irrational feeling is false, and that truth can be found only through hard, exact science. The computer is the false god of our age, a device in which we believe with the fervor former civilizations reserved for their gods and spirits. Small wonder, then, that as the End of the Millennium approaches, we are being brought up short, as were the prideful builders of the Tower of Babel, by an outraged God with such a finely honed sense of irony that He has arranged for the very technology that shut down our belief in Him to be our downfall.
So, at least, goes the thinking on countless Web sites: “One way to test our belief system is to deny us access to our treasures,” writes an anonymous correspondent at http://www.biblecode.com/cgi/netforum/discuss/a.cgi/14—2.2.1. “To permit an environment of confusion among the computer professionals, to permit the record systems of the world to die in their tracks. To bring us to the point of understanding that all that we have lived for on this earth, all that we have invented, improved, struggled for and built is worthless. The pensions, the profit-sharing, the savings accounts, the equity in our homes. It is all gone. Or worse yet, not ‘accessible.'”
Whenever I give voice to my skepticism, though, I am bombarded with vitriolic mail insisting that my complacency will lead directly to the deaths of dialysis patients, visitors to emergency rooms, surgery patients, and others who are under the direct care of our medical industry when the bug hits. “We are running out of time for constructive solutions,” writes Siobhan Harper-Jones of ComputingSafe 2000 LLC in a letter to the editor of the Weekly, “and articles such as Moody’s [“Die, Microsoft, Die!” SW 4/30/98] only encourage the public to delay further, because on your say-so they believe that there is no problem. As a result, you make panic more likely by delaying realization and constructive preparation until it is unmistakably too late to take any steps.” Ms. Harper-Jones goes on to list an array of hospital machinery (“ventilator, dialysis machine, pacemaker, fetal monitor . . . “) that relies on “embedded systems based on microchips” and that will kill patients unfortunate enough to be hooked up to them at midnight on January 1, 2000.
Talk with the people who work directly with purportedly endangered machinery, though, and you hear a more sanguine voice. Dr. Jon Ransom, director of the Northwest Kidney Center at St. Joseph’s Hospital in Bellingham, says, “I haven’t even thought about the Year 2000 bug. There isn’t any computing component to our dialysis machines. There’s a little clock on them that tells you what time it is, but that’s not going to affect the patient. The worst thing that might happen to us is some kind of minor glitch in our billing software.”
The bug then may prove a boon rather than a bane to Ransom’s patients.
At the University of Washington Medical Center, Brad Cummings, the center’s Year 2000 program manager, works full-time both at reducing millennium bug risks and at cautioning people not to overworry. Cummings describes his current work, still in its early stages, as “verifying inventory, ascertaining compliance [with Year 2000 safety standards], and prioritizing.” He estimates that there are some “10,000 devices” that need examination—from personal computers to mainframes to elevators to patient-care machines—all of which “have some sort of date computation in them. Some new elevators, for example, have chips that look at when the last maintenance was done on them, then compares that to today’s date.”
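The elevator example Cummings gives is a good stand-in for the whole embedded-systems worry. The sketch below is my own hypothetical reconstruction of such a check, not UW Medical Center code: a chip that compares the last maintenance date to today and flags the unit when service is overdue. With full four-digit dates the rollover is harmless; the feared failure arises only if the chip expands a stored “00” to 1900.

```python
from datetime import date

MAINTENANCE_INTERVAL_DAYS = 365  # hypothetical service interval

def needs_service(last_service: date, today: date) -> bool:
    """Flag the unit if more than one interval has passed since service."""
    return (today - last_service).days > MAINTENANCE_INTERVAL_DAYS

# Four-digit years handle the rollover correctly:
print(needs_service(date(1999, 6, 1), date(2000, 1, 1)))  # False: 214 days

# But if the chip reads "00" as 1900, the elapsed interval goes hugely
# negative, and the unit never again reports itself due for service.
print(needs_service(date(1999, 6, 1), date(1900, 1, 1)))  # False, wrongly
```

The failure mode here is telling: nothing crashes on January 1. The machine simply goes quietly, indefinitely wrong—which is why Cummings’ inventory-and-verify drudgery, not heroics, is the real work.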
After working the problem for a year, Cummings has determined that “about 10 percent of our items are actually date-dependent, but most of those don’t have Y2K issues.” Based on his own experience, he is skeptical of high cost estimates, since “a lot of the stuff we’re doing for Y2K is stuff we would be doing anyway—a lot of software is being upgraded for other reasons. And many of the people working on this would be working anyway, so they don’t cost anything additional.”
While conceding that “there will be some things that are not going to work properly” on the Day of Reckoning, Cummings also sees Y2K as “a management and business issue as opposed to a technical issue,” and expects only minor machine misbehavior on January 1, 2000. When he looks up from his hands-on work at Y2K, he sees hysteria and hype far out of proportion to the real problem. “A lot of attorneys look at this as the greatest thing since asbestos,” he says at one point; later, he decries both “consultants who have an incentive to overstate the situation” and the Disinformation Superhighway: “The signal-to-noise ratio is a little low on the Web.”
It took me weeks of sifting through Web noise to find a trustworthy signal, in the form of Keith Alexander—a software engineer with extensive experience in “optimizing code designed to give the proper date results in as little data space as possible,” who weighs in on Y2K Web discussions with “don’t believe the hype” arguments.
“I don’t subscribe to the hysteria for a number of reasons,” he wrote me during a series of e-mail exchanges. “One reason is that there are a lot of programmers working hard to fix the critical systems right now…. Another technical reason not to worry is that some of these feared problems probably don’t actually exist…. Another is that people will be around on that day to help services along…. People will be around to save us from trouble. The CAT-scan machine operator can just fudge it and enter a date from the previous year. Sure the system should be fixed, but you will still get your CAT scan on time and no lives will be lost…. In many cases, nothing will happen. Reports and computer output may say that something happened in 1901 but in many cases that will be it. If the date was just there to provide information to a human, then the human will probably figure out what is going on. It would be nice if the output was correct, but it is no big deal.”
Alexander agrees that not all the Y2K bugs out there will be found and fixed. “But what is implied is that the life-threatening code (if there really is any) is what won’t get fixed. I just don’t buy that argument. I think that we won’t have it all fixed and it will be really irritating. But irritating is a long way from deadly so I think the problem is hyped.” He attributes the hype to “a public/media who understand the problem just enough to be dangerous,” and “people who do know how big the problem is and would like it fixed. They are talking to the first people telling them that there is more code to be fixed in the time left than can be fixed.” While that is true, it is also true that much of the code is inconsequential.
While there is some truth to the contentions of Cummings and Alexander that Y2K hype is attributable to relatively obvious human motivation and emotion, I’ve always felt that there was a special tint to the great expectations attending the Year 2000—the tint of religion, or superstition, or cosmic consciousness. Something about the nature of the problem as it is described by those who fear it—vast, largely indefinable, essentially unknowable, civilization-destroying if not world-ending—gives it more the air of the Last Judgment or the Apocalypse than of a simple yet vast engineering problem.
Weeks of wandering through the digital desert finally led me to an oasis in the form of www.mille.org, owned and operated by the Center for Millennial Studies, established by Richard Landes, a professor of medieval history at Boston University. Landes’ center is a multidisciplinary site for historians, engineers, businesspeople, financiers, and others with interest in the Millennium Bug to gather and swap ideas, notions, theories, and predictions. It is also a site where Y2K is studied in its proper context: not the world of technology, but the world of religion’s “apocalyptic time”—defined, writes Landes, as “that perception of time in which the End of the World (variously imagined) is so close that its anticipation changes the behavior of the believer.”
Landes is fascinated by the Year 2000 bug, which, in the words of Nicholas Zvegintsov, one of his cohorts, “is neatly attached to the most memorable date and time for the next or the last thousand years. Even the word ‘millennium’ carries with it overtones of unreal hope and mysterious dread.” Adds Landes: “It connects at odd but compelling angles with millennialism broadly defined, and deserves our attention.”
Landes sees the present time as a replay of the years leading up to the year 1000 and the year 1033, during which times prophets rose up proclaiming that the Last Judgment was at hand. In both cases, the prophecies were engendered by predictions that the world would end with the Last Judgment 1,000 years after the death of Jesus Christ. (This led to some confusion between those who believed that the Last Judgment would come at the turning of the millennium proper and those who believed it would come exactly 1,000 years after Christ’s death in AD 33.) The only difference between the years immediately before those two times and the years before 2000, Landes believes, is that in the interim we have shifted from “religious to scientific symbols.” Symbols aside, we still approach our millennium, as our Christian ancestors did theirs, prey to “historical and eschatological imaginations” that keep conjuring up “promises of a radically altered existence.”
Landes sees premillennial discourse as a debate between “the roosters and the owls.” The rooster is “the apocalyptic believer, the one who, thrilled at the prospect, heralds a new day by crowing at the dawn, stirring the barnyard to wake, piercing the still air with his penetrating cries, joining in with other roosters.” The owl, skeptical and annoyed, is an “anti-apocalyptic believer, the one who prefers the caution and quiet of the familiar, who argues no, the night is still young and long hours separate us from the dawn, who warns that the foxes are out, the master still asleep, and only disaster can come from rousing the barnyard to untimely activity.”
Inevitably, the roosters hold sway as the millennium nears. We are, after all, conditioned by 2,000 years of a culture rooted in a religion that, as the theologian Dietrich Bonhoeffer writes in Creation and Fall, “bears witness to the end of all things. It lives from the end, it thinks from the end, it acts from the end, it proclaims its message from the end.” However secular our pretensions, most Americans, psychologically speaking, are still early Christians.
Even so, Landes believes that journalists and other reasonable sorts should take heart, for the roosters’ “dominance is as brief as it is powerful. With the (inevitable) passing of the apocalyptic moment, the roosters must either change their tune (redate, expunge the apocalyptic element) or cede the floor to the owls who, retrospectively, prove correct.”
When I look ahead, then, I see that we will see, by February 2000, a spate of revisionist articles as one reformed alarmist engineer after another weighs in with his analysis of the Millennium Bug That Wasn’t. We will hear about the heroic efforts of Y2K programmers, who staved off the End of Civilization as We Know It. We will hear about excruciatingly close calls, about elevator banks that were revived after a few seconds’ down time . . .
Or, more likely, we will hear none of the above. For the world will indeed have ended, as predicted, at the Dawn of 2000. But not, as is now believed, because of answer G in our multiple-choice quiz. As you finish reading this, India and Pakistan, locked in a nuclear arms race, are preparing to commence a war that will lead in short order to scenario C. (The Pentateuch was right!) The rest of the world, distracted by the Y2K hysteria, was unable to intervene in time. Thus the world will end not with a programmer’s whimper, but with a great big bang-bang-bang. Cock-a-doodle-doom!
Related Links and information:
wacky Y2K site
Defense Logistics Agency
Gary North’s Dire Predictions
The Bible Code