Why 2K? - Part 2
by Wayne M. Krakau - Chicago Computer Guide, March 1999
Well, we're back to that critical question of why in the world the computer industry got itself into this Year 2000 mess. Last month's example showed that only the young and naive are foolish enough to try to buck the "quick and dirty" doctrine within the computer industry if management is really determined to embrace that shortsighted attitude.

In the PC world, this doctrine is exemplified by that old favorite program (sarcasm intended), MS-DOS. Never in the history of computing has there been a program that has invited more criticism. Complaints over the years have included such subjects as memory management, physical memory limitations, and the lack of management utilities, but most are directly related to weaknesses in the original design. Even today, a lot of the problems with Windows 95 and Windows 98 can be traced back to the fact that they are still running with pieces of MS-DOS within them.

If you buy into Microsoft's, and especially Bill Gates', claim that Microsoft is a bastion of innovation (yeah, right), you might think that MS-DOS's deficiencies were some project management error made by Mr. Gates himself in the early days of the company. Well, while that innovation story plays well in the press as a potential defense against government intervention in Microsoft's affairs due to alleged illegal monopolistic practices, it doesn't match the facts. MS-DOS is a prime example. Microsoft purchased it to fill IBM's request for an operating system for Intel 8086/88 processors. It was written by one person, gradually taking form over a couple of weeks, to test the new Intel 8086 processor. It was never intended to be a full-featured operating system. It was only a bit of test code hammered out to accomplish one specific, limited task. Does this sound familiar? I'll give you a hint. MS-DOS's original name was QDOS, the Quick and Dirty Operating System, a truly appropriate moniker. Sure, Microsoft polished it up a bit before releasing it and added various improvements over the life of the product, but mostly, it was still the quickie test program that a lone programmer originally created. (For you Oliver Stone fans, think of Bill Gates as playing the part of the second programmer on the grassy knoll.) Thus, the entire PC industry was held back by the limitations built into the design of a program that was never intended to be used beyond those first couple of weeks of testing.

There are two other closely related factors involved in the spread of the Y2K bug. The first is what I have previously referred to in my columns as the Programmer as an Artist Syndrome. This is a pervasive attitude that is spread from programmer to programmer and is significantly aggravated by the fact that the non-computer public at large holds programmers in awe due to their knowledge of computers. Programmers have not been properly indoctrinated with the idea that they are craftspeople, charged with the intrinsically creative task of making useful programs, not merely artists making programs strictly for their esthetic value or simply as an outlet for their creative urges, with little or no thought of usefulness, maintainability, or reliability. The most obvious sign of this is when a programmer is personally offended when either a supervisor or a peer review team constructively criticizes his or her precious code. It's like telling Picasso that his representation of women in his paintings is not realistic enough.

The second factor is what I call the Macho Programmer Syndrome, though it is definitely not exclusive to males. This attitude problem, at its most severe, instills programmers with the belief that the more convoluted, obtuse, and downright unreadable their code is, the better.
Their perceived job is to come up with incredibly unique, imaginative, and original algorithms (formulas), and if nobody else can understand them, so much the better. An example of this is an interview with Apple's lead programmer for the original Mac that I read in a national magazine years ago. He openly bragged that when his team found a bug in a particular subroutine (a logical subsection of a program) more than about two weeks after it was written, even the programmer who wrote it, much less any other team member, couldn't understand or debug it. His solution was simply to throw out the entire subroutine and rewrite it from scratch. He was very proud of his team's ability to write this stuff. Luckily, this attitude is not as common as it used to be.

Going against these attitudes is the trend toward programming methodologies, which sadly appears mostly in the corporate world and not in smaller companies. These methodologies include such practices as establishing corporate standards; teaching structured programming, design, and especially testing techniques; including peer review teams on all projects; encouraging general business education among computer specialists; and various other quality-related policies. These policies increase direct development costs but greatly reduce the future costs of debugging, maintenance, and enhancements.

In a perfect world, these methodologies would win out. In our real world of downsizing, mergers, acquisitions, and especially the popularity of the next-quarter-results-only style of management, there's lots of backsliding. In addition, computer departments are considered an expense, almost all programmers are salaried, and computing experience is not valued by management. Just to make things even more interesting, the Peter Principle butts in, so that expert programmers are sometimes promoted, only to become lousy (and often frustrated) managers.

Because of these management factors, computer training is often the first thing to go when budget cuts are needed. Staffing budgets come next. Why pay for extra staff or for outside temps when you can simply force the existing salaried staff to work 12 hours a day, seven days a week, rotating between day and night shifts to avoid overloading your mainframe or your LAN? This is not an exaggeration. It is from personal experience. It was, in fact, the final straw in my decision to leave the corporate world and turn my part-time business into a full-time proposition. I figured that if I was going to get an ulcer or even a heart attack from overwork, I might as well do it in a comparatively pleasant environment of my own choosing. Dilbert is really a lot closer to nonfiction than most people realize.

If the older, more experienced programmers leave for better jobs elsewhere, then just hire lower-priced beginners (or import foreign employees with special visas whom you can oppress even more than your old employees), with the added benefit of having avoided paying some pensions. Or, you could just make the remaining staff work even longer hours. If the older group doesn't leave fast enough, then find ways either to make them even more miserable or to outright fire them, as long as you document the firings well enough to avoid age discrimination suits, of course. While some of this might sound like programmer-bashing, I intend it as a criticism more of the education, ongoing training, and management of programmers.
I am definitely in favor of increased self-responsibility and higher ethical standards for individual programmers, but I think that I have made a case for providing a better programming environment to avoid messes like the Y2K bug. Dealing with programmers who have never worked in a supervised team environment is basically a gamble. Some of them have realized the necessity of quality control techniques on their own. Others have learned that lesson in classrooms. But many haven't been exposed to these techniques at all, and still others left the corporate world specifically because they felt oppressed by ham-handed attempts to implement quality control methods. It's a throw of the dice.

Next month this series will continue. For now, does anybody know of some good land with a defensible perimeter for sale somewhere in the wilds of Montana?
©1999, Wayne M. Krakau