“You never get credit for the disasters you avert,” technology forecaster Paul Saffo told The New York Times in 2013.
The greatest fear of the Y2K (Year 2000) bug was the fear of the unknown. In 1999, we thought “the unfixable” Y2K bug was a first step in our dystopian future. We can all have a laugh about it now, but very few of us were laughing in December 1999. We didn’t know, and that’s what scared us.
The crux of the fear, for those who didn’t live through it, was that computer programmers didn’t bother storing the full four digits of a year. 1995, for example, was stored by most computers as 95; 1996 was stored as 96, and so on. The fear was that when the calendar flipped from 1999 to 2000, the computers in computer-reliant industries might not be able to distinguish 2000 from 1900, because up to that point they had only stored years as two digits.
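To make the two-digit shorthand concrete, here is a minimal sketch in C of the kind of arithmetic people worried about. The account-age scenario and the variable names are my own hypothetical illustration, not code from any actual system of the era:

    #include <stdio.h>

    int main(void) {
        /* Years stored as two digits, the way many older systems stored them. */
        int year_account_opened = 97;  /* meaning 1997 */
        int current_year        = 0;   /* January 2000, stored as "00" */

        /* Naive subtraction assumes both years belong to the same century. */
        int account_age = current_year - year_account_opened;

        /* Prints -97 instead of 3: the program now believes the account
           opens 97 years in the future. */
        printf("Account age in years: %d\n", account_age);

        /* One common remediation, often called "windowing": treat two-digit
           years below a pivot (here 50) as 20xx and the rest as 19xx. */
        int opened_full  = (year_account_opened < 50 ? 2000 : 1900) + year_account_opened;
        int current_full = (current_year        < 50 ? 2000 : 1900) + current_year;
        printf("With windowing: %d years\n", current_full - opened_full);

        return 0;
    }

A negative number is harmless in a toy program like this one; the fear was what an equivalent miscalculation might do inside billing, scheduling, or interest-bearing systems.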
If it needed fixing, did the fix require a collective effort from private companies, independent engineers, and government expenditures that some estimate in the range of $400 to $600 billion, or was it a relatively easy repair? Was the problem greatly exaggerated and overhyped?
Some of us had our own, internalized doomsday clock in December of 1999, because we feared the unknown. Did we fear it from the comfort of our own homes, because we were told to fear it? We were told it would affect every human’s daily life in one way or another. Large or small, we thought everyday life wouldn’t be as great as it once was in December 1999. Some of us thought the electricity grid might go down, we heard planes might fall from the sky, our cars and unprepared computers would become inoperable, and our banks’ automatic teller machines (ATMs) would not dispense money. We all laugh about it now, but some maintain that tragedy was averted, and when tragedy is averted without a noteworthy event, we quickly forget how tragic it could’ve and probably should’ve been.
If a problem solver fixes a problem before it ever becomes a problem, they receive no credit for it. If they’re concerned with receiving some form of credit, the most advantageous route is to withhold the solution until noteworthy events occur, and then fix it and save the world. As we all know, this did not happen in the Y2K scare.
I knew people who stocked their pantries with bottled water and grain pellets, I knew others who withdrew extra cash from their bank’s ATM, and I knew a number of people who bought Y2K software updates for their computers. No one knew everything, but we all knew some things, and everyone knew that we had to be prepared for anything. Our reaction to the scare defined us in 1999, but it further defined us on January 1, 2000, when what we did to avoid becoming victims of something that never happened suddenly became noteworthy. Whatever you did became the subject of ridicule.
The theoretical question we asked one another in 1999 was not when it would affect us, because we all knew that. The question was how much it would affect our daily lives. Few reasonable and rational adults asked whether it would affect us at all. Because computers were still relatively new to us, we considered it a fait accompli that it would. We grew up with science fiction movies built on the plot that that which helps man today could one day, and in some way, ruin man in a dystopian manner that no one saw coming.
In those movies, the proverbial street-corner bell ringer was always the best-looking actor in the movie (which lent their character more gravitas), warning the less attractive (and thus less aware) side characters of impending doom. None of the average-to-ugly actors in the movies recognized the true, impending threat for what it was until it was too late. We didn’t want anyone to consider us average-to-ugly, so we mentally prepared for the day when an attractive person lofted a preposterous notion to us.
In 1985, someone posed a theoretical question about how Y2K might affect computers when the century switched, but the problem for us was we didn’t know how attractive that theoretician was, so we didn’t take it seriously. Their theoretical notion hinged on the idea that for decades computer programmers wrote the year in shorthand. They didn’t write out the year 1985; they wrote 85. Some claimed the shorthand was done to save memory space. Thus, when the year flipped from 99 to 00, we feared that all of our computers would believe the year was 1900, 1800, or even year 00. Most of us didn’t believe that computers would transport us bedside, next to the baby Jesus, but we feared that our computers would fail to recognize the logic of the switch, and that the resulting bug might introduce such internal confusion in the computers’ mainframes that they would simply shut down. We feared any human input introduced to combat this inconsistency would prove insufficient, that human interference could lead to unforeseen complications, and that our computers would be unable to sort it out. The theoretical question reached hysterical proportions in the fourteen years between 1985 and 1999, as America grew more and more reliant on computers for everything from its most important activities (travel) to its most basic (ATMs and the electrical grid).
My guess is that the recipient of that first theoretical question brought it to a closed-door boardroom, some of those board members took the question out to other parties, and eventually someone in the media heard it and thought it might make an excellent recurring question to put to an audience every week. They could start a Tech Tuesday story of the week in which they asked the informed and uninformed what they thought of a problem that wasn’t a problem yet, but could be a problem when the calendar flipped.
Media figures play two roles in our lives: they tell us what we need to hear, read, and see, and they tell us what we want to hear. We don’t want to hear eggheads talk in ones and zeroes, unless they can make it apply to our lives with a quality presentation. That, in my opinion, provides stark clarity on our mindset, because we prefer the presentations inherent in science fiction to the hard science of the actual factual.
“Nobody cares about computer programming,” we can guess a network executive said in response to that ambitious reporter’s Tech Thursday proposal. “Why should I care about this?”
“The angle we’re proposing is more granular,” this reporter said. “The first network focused on the larger question of computer technology in their Tech Tuesday reports. In our Tech Thursday features, we’ll explore how much of our lives are now dependent on computers. Our energy grid, the tanks at the gas station, and the ATMs. We plan on bringing this theoretical problem home to where people live. We will say this Y2K bug is not just going to affect Silicon Valley and Wall Street; it could have far-reaching implications for citizens watching from Pocatello, Idaho, to Destin, Florida, and here’s how …”
As usual with hysterical premises of this sort, the one component most news agencies, and the word-of-mouth hysteria that follows, fail to address is human ingenuity. Rarely do we hear a reporter say, “We’ve all heard the problem called the Y2K bug, but we rarely hear about proposed solutions. Today, our guest Derrick Aspergren will talk about proposed solutions to comfort the audience at home.” The problem for news agencies is that the Derrick Aspergrens of the world are often not very attractive or charismatic, and they speak in ones and zeroes. Even though most computer problems and solutions involve a lexicon of ones and zeroes, no one wants to hear it, and few will remember it. As a result, news agencies rarely give the Derrick Aspergrens airtime, and they focus instead on the dramatic and provocative, proverbial bell ringers standing on a street corner.
In 1999, we rarely heard the question: can hardware engineers and electrical engineers fix a problem they created? The learned fear we’re conditioned to believe, based on the plot lines of so many science fiction movies, is that if we dig deep enough, we’ll discover that this isn’t a human problem at all, but a problem generated by a scary conglomeration of ones and zeroes we call AI (artificial intelligence). We knew little-to-nothing about the potential of AI in 1999, but we feared it, and its potential, because we feared the unknown. “AI is here, and there’s nothing we can do about it!” was (and is) the battle cry of conspiracy theorists on the radio, in our neighborhoods, and in our workplaces. The truth is often much less dramatic.
The truth, we now know, was somewhere south of the hype. The truth lived somewhere in the question of whether the Y2K fear was real. If it required a big, worldwide fix, as some suggest happened, how come there were no Nobel Prizes handed out? “That’s because it required a collective effort from so many minds around the world that there was no individual to accord credit.” Or was the fix so easy that any hardware engineer worth half of his college tuition payments was able to do it?
Was the Y2K scare a tragedy averted by hardware engineers enduring mind-numbing hours of editing, or was the entire affair hyped up through media mis-, dis-, or mal-information? I don’t remember the reports from every media outlet, but how much focus did the round-robin hysteria generated by the media place on possible and probable fixes? Some suggest that if there was a need for a fix, it could be easily accomplished by hardware programmers, and others suggest it was never the world-shaking threat we thought it was.
The problem for us was that the problem was so much more interesting than the fix. Take a step back to December 1999, and imagine this news report: “Here we have a man named Geoffrey James, who says, ‘If Y2K experts (some of whom have a software background but none a hardware background) ask some electrical engineers about date checking in embedded systems, they will learn that only a complete idiot would do anything resembling the conversion and comparison of calendar dates inside a chip. We use elapsed time, which is a simple, single counter; it takes ten seconds to add to a circuit.’”
“I may be oversimplifying, but ultimately the reasoning doesn’t matter,” Geoffrey continues. “The unfixable system problem either isn’t real or isn’t significant enough to spawn a disaster, because there aren’t any.” That rational and reasonable explanation from someone purportedly in the know would’ve gone in one ear and out the other, because for some of us there are no absolutes, and there are no quick fixes. When someone dangles the prospect of a simple solution to the simplest problems, we swat them away (I’ll sketch what Geoffrey meant by an elapsed-time counter after this exchange):
“You mean to tell me that all they have to do is add something to a circuit? I ain’t buying it, brother, and if I were you, I wouldn’t buy it either. I wouldn’t go out into the world naked with the beliefs of some egghead. We all have to prepare for this; in one way or another, we must prepare.”
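For anyone curious what Geoffrey was driving at, here is a minimal sketch in C contrasting the two approaches. The maintenance-interval scenario and the function names are my own hypothetical illustration of the idea, not code from any real embedded system:

    #include <stdio.h>
    #include <stdint.h>

    /* The feared approach: keep a two-digit year inside the device and
       compare calendar dates. At the 99 -> 00 rollover, a later date
       suddenly looks earlier. */
    static int date_check_says_due(int last_service_yy, int current_yy) {
        return (current_yy - last_service_yy) >= 1;  /* 0 - 99 = -99, so "not due" */
    }

    /* The approach Geoffrey describes: a single elapsed-time counter.
       The chip never knows or cares what the calendar year is. */
    static int elapsed_check_says_due(uint32_t seconds_since_service,
                                      uint32_t interval_seconds) {
        return seconds_since_service >= interval_seconds;
    }

    int main(void) {
        /* Serviced in "99", checked in "00": the date comparison wrongly says no. */
        printf("date comparison due? %d\n", date_check_says_due(99, 0));

        /* Roughly a year's worth of seconds has ticked by: correctly due,
           no matter what the calendar says. */
        printf("elapsed counter due? %d\n", elapsed_check_says_due(31536000u, 31536000u));
        return 0;
    }

A counter that only ever counts up has no century to get wrong, which is why, in Geoffrey’s telling, it takes seconds to design in and sidesteps the whole debate.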
Some of us thought the Y2K bug would force us back to the primal life of the cavemen, or at least back to the latest and greatest technology of the McKinley administration of 1900. Friends of mine thought those of us who knew how to hunt and forage for food would once again take their rightful place atop the kingdom, above those who had grown so accustomed to the comfy life of a visit to the neighborhood grocery store. More than one person I knew thought our appliances might explode, and that Americans might finally know what it’s like to live in the poorest third-world nations. They thought we would return to our primal life, and our TV shows and movies reflected that fear, anxiety, and (some say) desire to return to our primal roots.
News reports stated that hardware engineers and other electrical engineers were working on the problem, but that they weren’t sure they would have a workable solution in time. We knew the line, “For every problem there is a solution,” but when you’re in the midst of hysteria, lines like, “This was a man-made problem that requires a man-made solution,” provide no comfort. We all know that tangled within mankind is a ratio of geniuses who not only know how to propose solutions but also know how to apply and implement them. We know this, but humans suffer from an ever-present inferiority complex that suggests no mere mortal can resolve a crisis like this one. We know this because no self-respecting science fiction writer would ever be so lazy as to suggest that a mortal, whether they be a military leader with a bloodlust who wants to detonate a warhead on the monster, a policeman who believes that a bullet can kill it, or an egotistical scientist, can resolve this particular dystopian dilemma.
Even though this was a man-made problem, few outside the halls of hardware engineering offices believed man could solve it. We had heard about geniuses who brought us incredible leaps of technology so often that it was old hat to us. We knew they could build it, but there was this fear, born of the human inferiority complex and propagated by the sci-fi movies we loved, that technology had spiraled so far out of our control that it was now beyond human comprehension to fix.
Was Y2K overhyped as an unfixable problem, was the solution so elementary that it simply took a mind-numbing number of man-hours to implement, or was it a simple hardware fix? I don’t know if the numerous media outlets who ran their Tech Tuesday features ever focused on the idea that the Y2K problem, of two digits vs. four, was generated by a theoretical question someone asked some fifteen years before, but I told my terrified friends as much. “If this whole thing is based on a theoretical question, what is the theoretical answer?” With fellow uneducated types, I added, “And if we search through the theoretical answers, we might find an actual one.” The theme of my response involved the hope that we weren’t so terrified by the questions that we failed to seek answers, and I was shouted down. I was shouted down by uneducated types, like me, and I was, am, and forever will be woefully uninformed on this subject. They told me that I didn’t understand the complexities involved, that this situation was far more serious than I realized, and that I was underestimating it.

I’d love to say that I adjusted the focus of my glasses as I attempted to adjust theirs, but when the screaming majority in your inner circle reaches consensus, those who are relatively uninformed either fall silent or buckle. I cowered, and I regrettably conformed to some of their fears, but I didn’t know any better. None of us did. The one takeaway I have from the hysteria we now call Y2K is that we should use the fears of that hysteria as a precedent. If we have theoretical fears based on theoretical questions, we should ask them of the more informed, more educated “experts,” because theoretical questions could eventually lead to some actual answers. The alternative might result in us shutting down the world over some hysterical fear of the unknown.