Crisis Within the Crisis: Y2K Cost Estimation Errors and Impact on Y2K Budgets

 

Introductory Remarks from ASM '97 Conference

 

By Dr. Keith Jones, C.Q.A., C.S.T.E.

 

------------------------------------------------------------------------

 

 

Good afternoon.

 

I have a couple of introductory points that I would like to make before I begin discussing the important topics of Year 2000 cost estimating and project monitoring. Some of the materials I am going to cover focus on some relatively exotic and controversial metrics and methods you might not normally consider using -- but they might still be useful for Y2K projects.

 

This presentation deals with a very timely and mission-critical topic -- most of the organizations in the world today will either survive and thrive or falter and fall by the wayside depending on how well they budget and manage their Year 2000 software repair projects. There is a growing "crisis within the crisis": many organizations, as well as programmer outsourcing vendors, are already getting into trouble and losing critical time and resources because of widespread misunderstanding and misapplication of the most widely known Year 2000 cost estimating benchmarks, both in budgeting their Y2K projects and in contract negotiations with their Y2K outsourcing vendors.

 

One notable example is the Gartner Group benchmark of $1.10 to $1.65 per Line of Code, which has been widely reported in the press and is the most commonly recommended way to produce a 'quick-and-dirty' initial Y2K project scoping for organizations that do not have a Function Point software portfolio inventory. This is a method that I recommend and explain in detail in my book. Unfortunately, too many CEOs are buying Y2K awareness books rather than detailed 'how-to' solutions books such as ours, and as a result many organizational leaders are misled by the seeming simplicity of the Gartner benchmarks and end up making "order of magnitude" errors in their Year 2000 budgets. Hopefully in 1998 we can take most organizations past the Y2K awareness stage, and the emphasis will start moving quickly toward realistic solutions such as those in the ISQA book.
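
To make the 'quick-and-dirty' scoping concrete, here is a minimal arithmetic sketch in Python. The portfolio size is a hypothetical placeholder, not a figure from Gartner or from our book; only the per-LOC range comes from the published benchmark.

    # Quick-and-dirty Y2K scoping using the published per-LOC benchmark range.
    # The portfolio size below is a hypothetical placeholder for illustration only.
    PORTFOLIO_LOC = 4_000_000            # raw lines of code in the inventory (hypothetical)
    RATE_LOW, RATE_HIGH = 1.10, 1.65     # published dollars per line of code

    low_estimate = PORTFOLIO_LOC * RATE_LOW
    high_estimate = PORTFOLIO_LOC * RATE_HIGH
    print(f"Initial scoping range: ${low_estimate:,.0f} to ${high_estimate:,.0f}")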

 

These "order of magnitude" errors that are beginning to surface as most organizations complete their first Year 2000 pilot project in 1997 are sometimes budgeted on the high side, but in most cases they run short of funds pretty fast because of a failure to take into account all of the factors that previous research has shown to be critical. As a side note, we are already seeing testing to require up to 50% more than anyone including Gartner Group thought was needed in the past (and Gartner and SPR both already thought testing was between 40% to 50% of total costs). So total average costs are rising, but the good news is that we are able to confirm that the nature of the type of industry or business is the primary risk driver, and if your industry has lower risks, you may fall on the low side on actual average costs, and you may not have as much to worry about (see our additional late breaking slides on our website for details). But if you are in a higher risk industry, then you can expect your average costs to be at least twice as much as previously thought.

 

It is also important to be aware of all the underlying assumptions in the Gartner Group benchmarks before you use them as hard-and-fast rules to budget any Year 2000 project, and especially before you sign any contracts. The most critical abuses of the GG benchmarks occur because no one on staff was assigned to read the full details of the GG methodology (anyone who plans to use their benchmarks should not do so without getting the full supporting documentation from Gartner). Very quickly, the main sources of the "order of magnitude" errors resulting from the GG benchmarks are: 1) not "backing out" comment lines from raw LOC counts; 2) not factoring other languages against the COBOL level of effort assumed in the GG benchmarks; 3) not providing additional factors for all of the overhead categories, since the raw GG benchmarks essentially estimate labor only; and 4) not taking into account any of the risk adjustment factors such as those SPR has identified.
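
As a rough illustration of those four corrections, the following sketch applies them to a raw per-LOC estimate. Every factor value shown is a hypothetical placeholder; substitute the figures from the full Gartner documentation, your own language mix, your own overhead categories, and your own risk assessment (for example, SPR's factors).

    # Sketch of the four adjustments described above, applied to a raw LOC-based estimate.
    # All factor values are hypothetical placeholders, not published figures.
    RAW_LOC = 4_000_000          # raw physical lines, including comments (hypothetical)
    COMMENT_RATIO = 0.20         # 1) share of lines that are comments, to be backed out
    LANGUAGE_FACTOR = 1.3        # 2) effort relative to COBOL for this portfolio's language mix
    RATE_PER_LOC = 1.65          # top of the published labor-only benchmark range
    OVERHEAD_FACTOR = 1.5        # 3) tools, management, testing, and other non-labor costs
    RISK_FACTOR = 1.4            # 4) industry / portfolio risk adjustment

    logical_loc = RAW_LOC * (1 - COMMENT_RATIO)
    labor_cost = logical_loc * LANGUAGE_FACTOR * RATE_PER_LOC
    total_budget = labor_cost * OVERHEAD_FACTOR * RISK_FACTOR
    print(f"Labor-only estimate: ${labor_cost:,.0f}")
    print(f"Fully loaded budget: ${total_budget:,.0f}")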

 

Even in the best case, if your organization has never used software measures to scope and estimate projects before, you should be fully aware that your estimates will be at least 50% too low or 50% too high -- and either one can break your organization, by getting you in too far over your head to take advantage of triage, or by tying up critical resources that might have been better directed to new development in support of new business growth. Gartner performed a very important service by making the public aware of the sheer magnitude of Year 2000 repair costs, and by providing a simple, crude rule of thumb for estimating, but most of their warnings about the use of non-standard metrics and the likely "order of magnitude" errors have been largely ignored since then. So it is important to get the word out that Y2K cost estimation is not a simple or easy process. Because of the number of uncontrollable and unique variables involved, it will take a lot of work and careful monitoring before your cost models are calibrated to the point where you can apply a consistent set of cost driver rules to your own organization's Y2K project estimates.

 

All of this is leading to a "crisis within the crisis". On one hand, some organizations may be agreeing to fixed-bid contract amounts that are 50% or more above what the work at their site actually merits (this is happening most often in the "lower" risk industries). On the other hand, a lot of CEOs have been using these published benchmarks in hard-nosed negotiations and are refusing to pay more than the published benchmark ranges, even when they are in a very high-risk industry with many other overwhelming and obvious risk factors. The result is that a lot of funding that should be going to Y2K repair expenses is going to windfall profit-taking by some vendors, while a lot of necessary Y2K repairs in some of the most mission-critical software at many organizations are simply not going to get done, because of delays in the negotiating process caused by mistaken assumptions about top-end Y2K repair cost limits.

 

Gartner has repeatedly warned about the potential for order of magnitude errors when using non-standard project estimation metrics, or in any situation where you have to base your estimates on limited data. In fact, most Y2K experts are now warning against any use of fixed-bid contract arrangements, due to the potential for significant losses on both sides of the table, and recommend instead that some form of activity-based billing be used. The reason most organizations as well as vendors have balked at this form of contract is that each side fears the other will too easily walk away from the agreement, at a time when critical needs and opportunities would be lost in the short time remaining before Y2K-Day. This is why I.S.Q.A. has recommended "hybrid" activity-based billing contracts that are "packeted" within small to medium-sized "project blocks", to help assure ongoing support through completion, for greater security as well as economy for both sides.

 

Regardless of the structure of the Y2K repair work agreements, it is critically important to avoid delays and get started as soon as possible. Tying some level of benchmarking to the compensation clauses of the working agreements -- similar to performance bonus clauses, but using some form of industry-average "cost of labor" increase, published by a reputable third party, as the driver that mediates a final adjustable rate -- will afford both parties optimum protection, by assuring that the most possible work gets done at prevailing market rates that are fair to both the customer and the service providers.
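
A minimal sketch of that adjustable-rate mechanism, assuming a hypothetical base rate and a hypothetical third-party labor-cost index, might look like this:

    # Sketch of the "adjustable rate" compensation idea: the contract rate moves with an
    # industry-standard labor-cost index published by a reputable third party.
    # The base rate and index values below are hypothetical placeholders.
    BASE_HOURLY_RATE = 85.00     # rate agreed at contract signing (hypothetical)
    INDEX_AT_SIGNING = 100.0     # third-party labor-cost index when the contract starts
    INDEX_THIS_PERIOD = 112.0    # same index at the current billing period

    adjusted_rate = BASE_HOURLY_RATE * (INDEX_THIS_PERIOD / INDEX_AT_SIGNING)
    print(f"Adjusted billing rate this period: ${adjusted_rate:.2f} per hour")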

 

Although factors such as language, age of programs, and industry type are the most important determinants of typical Y2K project costs, in actuality many more localized factors determine your actual Y2K project costs. To a very great extent, the most important determinants of actual costs will be how much money, time, and quality have been invested in your software, and how consistently and thoroughly it has been maintained. If your organization has gone through a lot of business turmoil over the past decade, with downsizing or belt-tightening that led to considerable neglect of the legacy software systems you now seek to repair, your prospects Beyond 2000 are not bright.

 

Anyone who is acquainted with the breakdown of how many organizations fall at each of the SEI maturity levels should not have been too surprised at the results of the impromptu survey Ed Yourdon conducted on us yesterday morning in his talk about "Death March Projects". Less than one percent of all the delegates at this ASM conference were under budget and ahead of schedule on their Year 2000 projects. About 25% were between 5% and 10% over budget and behind schedule (which is actually very good), and the rest of us were 50% to 100% (or more) over budget.

 

I would suggest that these results are very closely related to the SEI maturity levels of the organizations we represent. The organizations represented here at ASM are probably all in the top levels of Software Engineering Institute maturity, and the most important determinants of whether you are under budget or 50% or more over budget on your Y2K projects are directly related to your SEI maturity and the effectiveness of the processes you use to measure your software and apply those measurements to real-world project planning. Ed Yourdon has also pointed out that, fair or not, after the smoke clears, software measurement and software quality professionals will probably receive more blame than anyone else for the Year 2000 crisis; yet I would contend that the organizations already on the highest rungs of software measurement maturity are going to be the best prepared, and ultimately the most successful, in responding to it.

 

I think that anyone who has ever attended an ASM conference would personally prefer to use some version of Function Points rather than Lines of Code to measure software. But remember that the 3000 or so organizations that routinely send delegates to ASM actually represent only a small percentage of U.S. Fortune 500 companies, and a similarly small percentage of the U.S. GNP. The organizations that invest in a systematic software measurement program and use the best available methods are among the "less than 3%" of all organizations traditionally quoted as formally committing to any level of systematic software measurement. However, they are consistently at the top of the SEI maturity scale, and they not only know how to collect software measurements but also how to apply them effectively to reduce costs and increase productivity and quality.

 

All of the ASM delegates here today should probably thank their organization's upper management (and pat yourselves on the back as well) for having the foresight and good judgement to make the commitment to even attempt to achieve "Best Of Class" status as an industry leader along the guidelines recognized by the SEI maturity scale.

 

You should be grateful, because if anyone survives Year 2000, it will probably include most of the people here today (I said, 'probably'). But having said that, you should never forget about the other 99% of the world's economy who may not be so lucky. Never assume that your own organization will survive just because it deserves to.

 

Year 2000 has been referred to as a "technology disease". Gartner Group has called it a "virus". The reason is that even if you completely repair and test ALL of your own software for Year 2000 defects, if even ONE critical customer, supplier, or strategic partner does not, your organization may STILL not survive -- because they may "infect" you with bad data.

 

As a quick side note, I would like to point out that both size and complexity are only secondary factors in Year 2000 project estimation and monitoring. The most important measurement for Year 2000 is defect density. The size and complexity of the software ARE product metrics and DO impact the overall amount of work to be done, but the most important determinants of Y2K project cost and success will be: a) how many program units have to be opened and recompiled, and b) the types of changes that need to be applied and tested. Always remember that the Year 2000 problem is mostly about software defects, not size.
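
As a hedged sketch of what a defect-driven estimate might look like, the following example scales effort by the number of program units opened and the type of change applied to each. The change types, program counts, and per-unit hours are all hypothetical placeholders, not published figures.

    # Sketch of an estimate driven by defect density rather than raw size: cost scales with
    # how many program units must be opened and recompiled, and with the kind of change
    # applied to each. All counts and per-unit hours are hypothetical placeholders.
    programs_to_open = {           # program units needing repair, by change type
        "windowing_fix": 320,
        "field_expansion": 90,
        "bridge_program": 25,
    }
    hours_per_program = {          # average repair + retest effort per unit, by change type
        "windowing_fix": 6,
        "field_expansion": 24,
        "bridge_program": 40,
    }
    recompile_overhead_hours = 2   # configuration-management cost of opening any unit

    total_hours = sum(
        count * (hours_per_program[change] + recompile_overhead_hours)
        for change, count in programs_to_open.items()
    )
    print(f"Estimated repair effort: {total_hours:,} staff-hours")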

 

Do you know what is currently the #1 unexpected problem that causes Year 2000 schedule slippages?  Answer:  Compiler and reconfiguration problems. Complexity and number of defects per program unit increase time and effort per repair, but the more programs you have to open the greater the chances of Configuration Management problems.

 

Which is worse? One program with 10 century-date defects, or ten programs with one Y2K problem each? Answer: they are both bad, but 10 defects in one program may mean a whole lot more test cases, and may take a whole lot more than 10 times longer to test. But again, the more programs you open, the greater the chances of a real showstopper you cannot fix.
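
A purely hypothetical pairwise-interaction model illustrates why ten fixes in one program can demand far more than ten times the test cases of one fix each in ten programs; the case counts below are illustrative assumptions, not measured data.

    # Hypothetical model of Y2K regression-testing effort: each fix needs its own test
    # cases, plus one extra case for every pair of fixes that could interact within the
    # same program. All numbers are illustrative only.
    def test_cases(defects_per_program: int, programs: int, cases_per_fix: int = 3) -> int:
        pairwise = defects_per_program * (defects_per_program - 1) // 2
        return programs * (defects_per_program * cases_per_fix + pairwise)

    print(test_cases(defects_per_program=10, programs=1))   # one program, ten defects  -> 75
    print(test_cases(defects_per_program=1, programs=10))   # ten programs, one defect  -> 30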

 

Do you know the average cost of the typical undetected "trivial" (or cosmetic) Y2K defect? Avatar did a study and came up with a magic number of $4,666 per bug. But as Capers Jones will tell you, just one "nontrivial" uncorrected Y2K defect could create a liability situation that forces your organization to permanently shut its doors.

 

Just after this tutorial, Capers Jones will be speaking on one of the most important issues in software metrics today. Capers Jones is very concerned, and I am as well, about the amount of effort that has been directed at public argument over the details of the various methods being used to estimate the costs of Year 2000 projects, not just for individual organizations but with regard to total worldwide costs. Capers is calling for a "truce": all of the various methods have their merit and their proper place, and it is more important to use the best method you have available to set an upper-bound estimate for budgeting your Year 2000 projects, stop wasting time, and get on with your repairs. I would further urge everyone to be aware that the initial cost estimates are not an end point in themselves, and there WILL be "order of magnitude" errors regardless of what measures or methods you use for estimating (that is true for both Function Points and Lines of Code -- although much greater order of magnitude errors will occur with the use of non-standard measurements and methods for assessing inventories).

 

The reason for this is that Year 2000 is such an unusual situation, and involves so many unknowns and uncontrollable variables compared to any project that you have been accustomed to in the past, that you will have to continuously refine and readjust your estimates. But the good news is that once you calibrate your models and begin to get accurate estimates for given types of software in your portfolio, most errors will drop off.

 

In past years a lot of ASM conferences have focused on debates about sizing and complexity metrics. The debates have been pretty much one-sided, and most organizations that commit resources to any quality measurement program approaching the top levels of SEI maturity already know the best methods for achieving the full benefits of their software measurements.

 

But what about the organizations that have not invested in a world-class quality measurement program in the past? These organizations are going to face some incredible challenges in putting together even a rudimentary software measurement team to support effective Year 2000 project planning, cost estimating, and monitoring. Some organizations that have invested sizable fortunes in their measurement programs, while watching strategic partners (or even competitors) flourish without making comparable investments, will probably feel a little like the smart farmers who saved up more than enough food for the winter when their foolish neighbors turn to them for help.

 

But never forget that it is not just your competition, but also your customers, your strategic partners, and the critical supply chain vendors who most directly impact your business operations and cash flow, who may be "caught cold" without a quality software measurement program for their own Y2K project planning and management.

 

Always remember that all of your best efforts to repair Year 2000 date defects, and all of the investments you have made up to this point in your software measurement programs -- in preparation for a crisis just like Year 2000 -- will be totally in vain if the critical business processes external to your organization are compromised because they lack the software measurement resources and expertise that you have achieved. If your customers, suppliers, and partners cannot do business, or worse, cannot pay their bills, then your own high level of SEI maturity, or any amount of money you have recently invested in fixing your own Year 2000 date-handling defects, may not be able to avert your own, similar Year 2000 failures. Always remember that "no computer is an island".

 

If you can spare the time and resources to provide all your critical suppliers and partners with your own expert software measurement services, then you should do so. But if your organization cannot spare you, then you should at least recommend that they immediately start a very simple, rudimentary software measurement program, similar to what your own organization probably attempted long ago, when you were at the introductory levels of software measurement and project cost estimating.

 

All of this said, this presentation is intended as a very high-level description of such a rudimentary measurement program, which you might recommend to critical suppliers or partners who are novices at software measurement and project cost estimating models. What is unique about this proposal is that it applies methods that provide the best bridge between your own advanced methods and their more basic ones, so that you can be better assured that your most critical customers, partners, and supply chain vendors will be able to pay their bills and continue to feed your own business growth (make very sure that your senior management, and especially your finance and marketing departments, understand the importance of this) -- so your own organization can grow and prosper Beyond 2000.

 

(The regular tutorial followed. All materials are provided in the ASM Proceedings and can be obtained from ISQA, or taught as an in-house tutorial by ISQA-certified trainers.)

 

Keith Jones, Ph.D., C.Q.A., C.S.T.E.

ISQA, Box 2437, Palm Harbor, FL 34682

www.isqa.com