
Established September, 1992

Newsletter of the Boston SPIN

Issue 11, September 1996




SEP 17 (Tuesday): Boston SPIN meeting.

"Good Enough Software Quality"
James Bach (ST Labs, Chief Engineer)
6:30 PM (refreshments), 7:00-8:30 PM (meeting)
GTE, Building #5, 77 A Street, Needham, MA
Info: Ken Oasis, (617) 563-4197, or email:

NOV 2 (Saturday), 9:00 AM to 4:30 PM:

Watts Humphrey on "Improving Personal Productivity".
A Professional Development Seminar of the Greater Boston
Chapter of the ACM. $75 to $95 including lunch and notes.
To obtain flyer with pre-registration form:
or leave a message at 617-862-1181.

MAR 17-20, 1997 -- SEPG Conference

"People - Process - Technology: Stuff That Works"
San Jose Convention Center, San Jose, California
Contact: SEI Customer Relations, 412-268-5800
FAX: 412-268-5758

We have 2 separate email lists: one for this newsletter and one containing LOTS of announcements that we receive from process organizations and forward out. To add yourself to the announcements list send email to Charlie Ryan



We have started a Job Bank bulletin board at meetings. Job opportunities may also be submitted to this newsletter. See the new "Job Bank" section.



February 1996 Meeting Report
by Ed Maher, courtesy of Digital Equipment Corporation

Topic: Inspections


Jean MacLeod (Hewlett-Packard)

Steve Bramley (Motorola ISG)

Peter Harris (Sybase Inc.)

Moderator: Don O'Neill (independent consultant)


This was an interesting set of presentations on software inspections.

Each presenter had a different slant on the topic:

  • Jean spoke about the set-up and evolution of an inspection program.
  • Peter addressed how management should be involved and can benefit from an inspection program.
  • Steve provided some real-life examples of how inspection data can be analyzed and used to improve both the inspection process and the software development process.




Don started out by taking a survey of the audience to see how many came from organizations that use software inspections (about 20% of the people in the audience raised their hands). This is fairly reflective of industry as a whole; data shows that about 22% of all software organizations use inspections as part of their standard process.

He then presented the high-level components of a standard (Fagan-based) software inspection program:

  • Commitment & motivation: There has to be an acknowledged problem that inspections solve.
  • A formal roll-out program, including:
    • training
    • policy
    • procedure
    • organizational coordination
    • measurement
  • Defined elements for the software inspection process, including:
    • structured review
    • defined roles
    • use of checklists
    • use of forms & reports
    • collection of data.
  • Return on investment; it is important to clearly demonstrate the ROI of software inspections.

He also mentioned that the use of inspection data should be a natural part of the engineering process.



HP has been doing formal reviews and inspections since the 1970s. They have an "HP Standard Inspection Process". The basic steps are:

  • planning
  • kick-off
  • preparation
  • logging meeting
  • rework
  • follow-up

As frequently happens once you have a standard process, there was pushback because people felt that the "standard" process didn't apply to their situation. This was not resistance to change; she believed that the engineers recognized the benefits of inspections. Their concerns were based on the cost (Is the ROI worth it in every instance?) and the formality (Can the same goal sometimes be achieved with a little less formality?).

To address these concerns, they developed some scaled-down review processes which could be used in some circumstances. These would meet the goal of identifying defects, but they would take less time and involve less formality than a formal inspection. A logging meeting would not always be necessary, but data would be collected.

Having different review/inspection techniques meant that they needed guidelines to help determine which process to follow in each situation. Product scope and risk were identified as being the driving factors in this decision. Supporting this were things such as: the business goals, the availability of the needed documentation, the availability of the needed expertise, the objective of the review, the desirable feedback mechanism, and side benefits (such as cross training).

A decision matrix was developed, with product scope on the vertical axis and risk on the horizontal axis. As scope and risk increase you move from the bottom left to the top right of the matrix. Where you land determines whether an inspection or a review is more appropriate.
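The decision matrix described above can be sketched as a small lookup. The thresholds, category names, and the idea of summing the two ratings are illustrative assumptions for this sketch, not HP's actual matrix:

```python
# Illustrative sketch of a scope/risk decision matrix. The thresholds and
# review-type names here are assumptions, not HP's actual categories.

def choose_review(scope: int, risk: int) -> str:
    """scope and risk each rated 1 (low) to 3 (high)."""
    score = scope + risk
    if score >= 5:       # top-right of the matrix: high scope and high risk
        return "formal inspection"
    elif score >= 3:     # middle band of the matrix
        return "scaled-down review"
    else:                # bottom-left: low scope, low risk
        return "informal desk check"

print(choose_review(3, 3))  # -> formal inspection
print(choose_review(1, 1))  # -> informal desk check
```

The point of the matrix is that moving from bottom-left to top-right increases the formality warranted, whatever the exact cutoffs are.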

To close she described what she felt made for effective reviews:

  • Good planning
  • Timeliness
  • Explicit definition of the review objective
  • Clear communication before the actual review
  • Adequate preparation by all participants
  • Assignment of roles
  • Use of checklists & standards
  • Logging everything (don't waste time debating during the meeting)
  • Constructive interaction



Peter (who currently works at Sybase) got up to discuss his prior experiences at Lotus, focusing on a manager's role in an inspection program. He made the point that most inspection programs include emphasis on management not being involved in an inspection meeting and not seeing specific defect data. The problem is that there isn't usually enough attention to what a manager *can* do.

He also pointed out that management frequently resists inspections based on a belief that the engineers don't want to do them. If this is true, then focusing on the problem of "management support" isn't worth the effort -- their support reflects their perception of the engineers' support. He believes that the challenge is to demonstrate the benefits to the engineers and to define the program with a focus on "management participation," not just "management support."

Managers should: use the results, ask questions, trust the participants' judgment, and (as everyone knows) stay out of the meetings. The point about trusting the judgment of the reviewers is critical. If a manager agrees to spend the cycles to have an inspection, but vetoes a recommendation for a rewrite, then the message is that the manager really doesn't believe in the process.

Peter addressed a problem associated with publishing a schedule which shows that resources are going to be tied up doing inspections. The inspection time is frequently looked at as being discretionary. A way to convince people that inspection time is part of the critical path (and shouldn't be compromised) is to use the data. Peter had some data from real projects at Lotus:

  • The average defect found in an inspection cost $37.50.
  • The average defect found in a QA cycle cost $450.00.
  • A customer patch (on average) cost $10,000.
  • Changes to the distribution media cost $250,000 to $1,000,000.

This is a pretty compelling argument that the time spent finding defects in an inspection is time well spent.
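Peter's figures make the argument concrete: relative to finding a defect in inspection, each later phase is an order of magnitude (or several) more expensive. A quick check of the multipliers:

```python
# Peter's per-defect cost figures from Lotus, by the phase in which the
# defect is found (the dictionary layout is just for this illustration).
cost_per_defect = {
    "inspection": 37.50,
    "qa_cycle": 450.00,
    "customer_patch": 10_000.00,
}

# Relative cost of letting a defect slip past inspection:
for phase, cost in cost_per_defect.items():
    multiple = cost / cost_per_defect["inspection"]
    print(f"{phase}: {multiple:.0f}x the inspection cost")
```

A defect that escapes inspection costs 12x as much to find in QA, and roughly 267x as much once it reaches a customer patch.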

Peter closed by stating that management support of an inspection program is not enough. Managers need to understand the problem that inspections are addressing, understand how inspections solve the problem, and be involved in the inspection program.



Steve spoke about how we can use inspection results to fine-tune the inspection process and to improve the development process.

Most of his talk was based on the use of one project as an example. (He had a disclaimer that some data has been changed and that the described process is not the standard Motorola process.)

His organization determined that they needed to do something to improve the effectiveness of their inspections. He made a deal with a project: if they gave him the data, he would give them timely feedback about their inspection process and (as a bonus) feedback on their development process. He began collecting the metrics, focusing on productivity (as measured by: # defects found / total effort to find them). He used statistical techniques to identify productivity outliers and then analyzed the inspection data from those inspections -- looking not just at the metrics, but also at the defect descriptions.
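The analysis loop Steve described can be sketched roughly as follows. The talk did not specify which statistical technique he used, so a simple cutoff of 1.5 standard deviations from the mean is assumed here, and the inspection data is invented for illustration:

```python
# Sketch of the productivity analysis described above: productivity is
# defects found per unit of total effort. The data and the 1.5-sigma
# outlier cutoff are illustrative assumptions, not Motorola's method.
from statistics import mean, stdev

# (defects_found, total_effort_hours) for each inspection -- sample data
inspections = [(12, 6.0), (9, 5.0), (30, 3.0), (10, 6.5), (2, 8.0), (11, 5.5)]

productivity = [defects / effort for defects, effort in inspections]
mu, sigma = mean(productivity), stdev(productivity)

# Flag outlier inspections for a closer look -- not just at the metrics,
# but also at the defect descriptions logged in those meetings.
outliers = [i for i, p in enumerate(productivity) if abs(p - mu) > 1.5 * sigma]
print(outliers)  # inspection #2 found 30 defects in 3 hours: worth a look
```

The flagged inspections are then examined by hand, which is where the kind of feedback listed below comes from.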

He showed us a number of actual feedback reports from specific inspections. Each feedback report was divided into two major parts: information & recommendations on the inspection process, and information & recommendations on the software development process.

Below are some of the specific things that he was able to identify and report back to the project team (these come from about ten different inspection meetings).

Inspection process feedback examples:

  • The number and type of defects indicate that this spec wasn't ready to be inspected (they should assess readiness before starting an inspection meeting).
  • Metrics which would allow for a productivity measure were not provided.
  • Issues are being tagged as defects.
  • Problem descriptions were not logged (only descriptions of the fixes).
  • Many problems with the code comments are being logged as defects (the process calls for defects to be only those things that affect the executable).
  • A four-hour inspection meeting is too long.

Development process feedback examples:

  • A number of defects indicate that the coding standard should be changed to address the use of nested "includes."
  • It looks as if the developer had a difficult time getting needed input from the team to produce the design spec. (There may be a communication or coordination problem.)
  • Rel. 1.0 specs show much confusion around how to handle post-release requirements. (This should be resolved and communicated.)
  • The author called this a Low Level Design, but the plan called for only one level of design. (The plan should be updated.)
  • The specification template was not used.
  • Unplanned requirements shouldn't be tagged as defects in a design inspection. (The plan should be updated.)

To conclude, he reiterated the value of inspection feedback and stated that we should be continuously asking ourselves:

"What's the data telling me?"


Discussion Period

Q: When preparing the packets of material for the inspection team, how do you handle the situation where the material doesn't exist or it's in an inconvenient form (for example, on someone's white board)?

A: Jean said that you can't let the lack of upstream documents preclude an inspection. People can use common sense. She also reiterated the importance of making sure that the inspection team is prepared. If they aren't, then postpone the meeting.

Steve also said that prep is critical. He recommends that every inspection begin with a call for prep time. By doing this, he gets the data, helps determine if there was adequate prep, and provides a little peer-pressure.

Don agreed. He added that if you notice that the upstream documents are frequently missing, then that fact should be communicated as a process problem that may need addressing.

Q: (Directed at Jean) What was HP's peer-review efficiency and how do they make sure that the inspection process is being followed?

A: She did not have the efficiency data but believes that most people recognize that review and inspection time is time well spent. As for auditing the inspection process, the analysis of the data usually points out if the process is being used appropriately.

Q: (Directed at Steve) What is the moderator's role? The process that you described indicates a lot of administrative work.

A: The moderator forms the team, schedules the meeting, verifies the entry conditions, runs the meeting, and verifies that the rework occurred. Much of this is done electronically. It isn't as clerical as it sounds.

Jean mentioned that they started out doing something similar, but had a hard time recruiting moderators. As a result, they now have the Author do most of the prep work.

Q: (Directed at Jean) You mentioned that the moderator verifies the rework. Does that mean that moderators have to be very technical?

A: By "verify" she means that they make sure that it was done. They don't review the content of the work.

Q: (Directed at Peter) When instituting or tuning the inspection process, what if the first-line managers don't have the time to help, support, or participate?

A: First, go to the second-level manager and see where their priorities are. Individuals often take their priorities from above. Also, being "too busy" often means that there is some resistance or fear about the process. It is important to investigate this to see if there is some aspect of the process that needs clarification or changing.

Don agreed with this and mentioned that it has been his experience that (as a generalization) technical people are overt resisters and managers are covert resisters. As a result, you often have to push a little to recognize when managers are resisting (so that you can then address the root cause of the resistance.)

Q: (Directed at Steve) Can you address the fudging of data? How do you know that people aren't just giving you the data that they feel you want to see?

A: During training, Motorola puts emphasis on how metrics will be used. They also get the project team's input on which metrics *they* want to use. He hopes that this would preclude someone fudging the data.



- The "Software Inspections and Review Organization" (SIRO) has a Web Page at:

(SIRO is a volunteer organization devoted to the exchange of information about group-based examination of software work products; including the state of practice, emerging techniques, support resources, and industry use & metrics.)

- Some references:

  • Michael Fagan, "Design and Code Inspections to Reduce Errors in Program Development" in the IBM Systems Journal, Volume 15, no. 3, 1976.
  • Michael Fagan, "Advances in Software Inspections", IEEE Transactions on Software Engineering, vol. SE-12, no. 7, July 1986.
  • Daniel Freedman and Gerald Weinberg; "Handbook of Walkthroughs, Inspections, and Technical Reviews, Evaluating Programs, Projects, and Products"

Ed Maher works for Digital Equipment Corporation within the Open VMS Engineering organization.


May 1996 Meeting Report
by Ed Maher, courtesy of Digital Equipment Corporation

Topic: Focusing TQM Energy With Visual Language

Speaker: Larry Raymond (Lotus/IBM -- Manager of Process Engineering in the Global Operations and Administration function), author of "Reinventing Communication - A Guide to Using Visual Language for Planning, Problem Solving and Reengineering" (ASQC Quality Press, 1994)


The talk was about a unique communication and brainstorming technique. It involves:

  • Using colorful picture stickers (they are like colorforms) to build stories, which are called maps. The stickers are used as symbols that capture implicit information associated with the process. Picture maps are created to represent the current state of a process, the desired state of the process, and the actions needed to get from the current state to the desired state.
  • Using a very rigorous brainstorming approach to build these maps.

This was a fairly informal presentation -- he talked off-the-cuff, used props, and illustrated almost all of his points with his experiences. He came with a set of slides that he chose at the last minute not to use.



What is Visual Language?

It is a process whereby a team uses metaphorical pictures to communicate a process. He described its use in helping to formulate clear goals and as a way to achieve those goals. This is accomplished by mapping the current process, mapping the desired process, and mapping the path to get from current to desired. The picture maps are built by teams of people working closely together.

The two kinds of picture mapping that Larry described were:

  • "Village Mapping", which is used to describe processes -- usually independent of time. Sub-processes occur in buildings connected by roads. Some Village Mapping techniques:
    • The places where work is done (e.g., departments) are represented by buildings (factories, houses, office buildings, huts, farms, etc).
    • The relationships between the "buildings" are represented by roads.
    • Information produced and/or exchanged is represented by envelopes.
    • Problems are represented by things like: storms, fires and swamps.
  • "River Mapping", which is used to describe projects over time. The rivers flow through time and actions take place along them. Some River Mapping techniques:
    • Signs are used to explicitly identify actions in an appropriate sequence
    • Symbols such as: a chalkboard (for training), drafting tables (for design), a toe in water (for test), and a treasure chest (representing success) are used to illustrate the project activities
    • Obstacles (or risks) can be represented by things like wild animals, fallen trees, and storms.
    • Ways to mitigate obstacles and risks can also be illustrated (I didn't catch any good examples).


How is Visual Language Used?

He described the process using a real example involving a need to come up with a solid high-level strategy for performing product localization. The Visual Language steps followed to achieve this were:

  • An appropriate group of individuals (about 45) was assembled. The participants were selected because they understood the process, had the power to institute change, and were respected by their peers.
  • The sponsor reiterated the objective of the group.
  • Everyone went through a very small amount of training in the Visual Language techniques.
  • The group broke out into small teams (about five people per team). He mentioned that the Village Mapping teams seem to work better if they are made up of people who don't usually work together.
  • Each team then performed the following steps in order to come up with a Village Mapping of the current process:
    • They identified all the stakeholders (departments responsible for each step of the process), assigned each of them a building that "feels" like that group, and arranged the buildings.
    • Once that was complete, they identified the relationships between all the stakeholders, selected the most appropriate roads to represent these relationships (i.e., winding, dirt path, rocky, ...), and placed them between the buildings.
    • Next they identified the biggest problems, assigned symbols to these problems, and placed them where appropriate.
    • Then they identified the things going on that were out of their control, assigned symbols to them, and placed them in the picture.
  • Each team then got up (one at a time) and walked the entire group through their picture, explaining the process as well as their rationale for the selected symbols. Larry pointed out that it is against the rules to speak without touching some part of the map; this helps to focus the presentation on the process. There are no questions or discussions. These team presentations are done quickly -- a maximum of five minutes each.
  • Once all of the teams went through their presentations, the entire group worked to come up with a single consolidated map with which they could all agree. This was accomplished by:
    • First, identifying those things that were common in all of the maps and selecting the symbols that should be used to represent them in the consolidated map.
    • Then going through each team's map and pulling out the unique things that the group felt belonged in the consolidated map.
  • The group had then completed its first map.

(At this point, Larry mentioned that it's hard to hide when you're involved in one of these. If you don't know what's going on (and you're supposed to) it will quickly become fairly obvious.)

  • Next they shuffled the teams and mapped out the ideal process (following the same steps as before to come up with a consolidated map.) He described this as a "blue sky exercise" and said that participants shouldn't dwell on practicalities.
  • Once the consolidated map of the ideal state was assembled, they shuffled the teams one last time and designed a process to take them from the current state to the desired state (producing a River Map rather than a Village Map). He mentioned that the teams formed for a River Mapping exercise seem to work better if they are existing functional teams. The steps each team followed were:
    • First, they listed the necessary actions in logical time and sequenced them along one or more rivers.
    • They added the appropriate symbols for each action.
    • They identified any obstacles, selected symbols, and placed them on the map.
    • Then they identified those things that could assist in mitigating the obstacles, selected symbols, and placed them accordingly.
  • At this point the process became the same as was followed for the Village Mapping exercises. The result was a consolidated River Map of the process which would take them from their current state to their desired state.

Someone asked whether the group discussion ever involves people objecting to the direction of the group. He said that this would be unusual; generally people are very involved and are also in a constructive frame of mind, so that there is no need or inclination for serious objections.

The following are some ground rules for this process that came out during his talk:

  • The break-out teams should not have a leader.
  • Everyone must participate; opinions should be pro-actively solicited.
  • The people responsible for each area are strongly encouraged to participate.
  • If some stakeholder chooses not to participate, they have to agree up-front to not do anything down the road to block the resulting action.

Someone asked whether this process could be used by individuals. Larry said that it could, and described a situation that recently occurred during "take your daughter to work" day. Visual Language was successfully used with two teenage girls to map out what they needed to do to get into the college of their choice.


QUESTIONS & ANSWERS (paraphrased):

Q: When teams of people are working in parallel to come up with a map, do you always end up with one consolidated map, or sometimes do you end up with more than one?

A: It depends on the objective of the exercise. If the goal is for unity, then one map is a required outcome.

Q: How does cost fit in to a River Map?

A: It doesn't. The intent is to produce a clear goal and a way to achieve it. Consensus and clarity are important, but details like cost are not.

Q: Are the final river maps ever updated (after the resulting plans begin being executed)?

A: No. There is generally an underlying project plan that does need to be tracked and kept current.

Q: The actions listed on the example River Maps look fairly high level. How do you translate them into action plans?

A: This can be accomplished either by cascading the exercise or by using typical project planning techniques.



by Judi Brodman (Logos International and TSS, Inc.)

* Lessons Learned: Apply them carefully

* Change Agents: More advice -- this time from Watts Humphrey

Greetings once again, SPINners!

Fall is approaching much too rapidly this year and already my garden is waning. Much to my amusement, as I was planning this year's garden, I realized that I had developed a rigorous process for selecting the seeds and plants that I grow in the garden every year.

I am fastidious in maintaining a garden log for every plant that I grow in my garden; the log contains information such as the seed (or plant) description, date the plant was started indoors (if the plant was purchased -- where and when), date the plant was placed outdoors, condition of the plant during the growing season, bug attacks, yield of the plant, weather conditions during the summer (hot, dry, humid, etc.), and a summary in which I recommend growing that plant again or replacing it with another type of plant next year. In the middle of winter's worst snow storm, I sit in front of the fire, pull out the logs and seed catalogues, and decide which seeds to order based on their previous performance in my garden. I'm sure that many of you who are gardeners do the same thing. We try to make our gardens better and better each year, keeping some plants, replacing others -- always looking for the perfect blend of plants and conditions for our garden -- using lessons learned to improve next year's garden.

While performing software process improvement (SPI), SEPGers/change agents are attempting to learn from logs or lessons-learned written by other organizations. These lessons will help the organizations if the change agents remember, as in gardening, that the conditions in which one organization's process thrives and matures can vary greatly from the conditions in which another organization's process thrives and matures -- the culture (soil) may differ, management support (water) may differ, the team members (plants) will certainly differ, and tools and training (sunlight) will differ. Change agents who use these lessons need to understand the climate of the organization in which the lessons occurred.

In addition, a change agent must understand the climate in which he/she lives in order to make decisions about the application of someone else's lessons-learned -- i.e., will this lesson apply to my climate?

If you look carefully at the published lessons-learned, you will see that they are described in a few words and are at such a high level of abstraction that they are often difficult to apply to other organizations even if the climates of the organizations are the same.

As I look closely at these lessons, I see that they are really factors that have changed the outcome of a SPI program in an organization -- let's call them "enablers" and "barriers." If enablers are present in an organization, the chances that the organization will be successful at SPI are greatly enhanced; if barriers are present, the organization's chances for success are greatly diminished.

I've listed below some of the enablers that are most frequently associated with successful SPI initiatives:


  • management commitment, buy-in, sponsorship
  • active monitoring of SPI progress by Senior Management
  • demonstrated sponsorship for achieving Level 2 and 3
  • a strategic approach to process improvement
  • early definition and application of metrics
  • more guidance in the use of CMM
  • highly respected change agents (people)
  • dedicated SPI staff (sufficient time/resources)
  • clearly compensated assignment of responsibility
  • analysis of process for changes that are value-added (not change for change's sake)
  • introduction of a training program
  • active defect prevention process
  • understanding of the SPI venture
  • SPI goals: clearly stated, well understood at all levels of organization
  • total dedication
  • knowledge of what can be accomplished in an environment
  • tight link between SPI and business goals


The following barriers are most frequently associated with an organization's failure to achieve any SPI:

  • actively charged organizational politics
  • cynicism from previous unsuccessful improvement experiences
  • lack of sustained sponsor commitment
  • SPI considered "busy work"
  • belief that SPI gets in the way of real work
  • turf guarding
  • resistance to change
  • insufficient groundwork -- false starts
  • insufficient guidance on how to, not just what to, improve
  • inappropriate/unrealistic expectations -- "quick-fix" approach

Many of these phrases are casually thrown around and have become industry buzzwords -- management commitment, sponsorship, buy-in, goals, strategic approach -- but what do they really mean? Your ability -- as a manager and as a change agent -- to understand the meaning behind these buzzwords, to recognize the existence of real enablers and barriers in your organization, and then to use that information when planning your SPI initiatives, may make the difference between success and failure in your organization. If there is a particular enabler or barrier you are interested in discussing in this column, please let me know.

Now, for a little added advice for you SEPGers/change agents. Last October, I wrote a column answering your questions about staffing an SEPG and performing the duties required of that position. After I wrote it, I asked Watts Humphrey if he would add some thoughts on "an SEPG improvement strategy." The following paragraphs contain Watts's advice to those of you who are still struggling with your assignment as SEPGer/change agent and who have insufficient resources, or who are having trouble obtaining management "commitment/sponsorship" (hear the buzzes???).

Watts adds, "When SEPG folks complain about process improvement problems, their principal concern is often with their inability to get things done. Fundamentally, to make a process change, you need to cause action. To be effective, however, most of these improvement actions must change the behavior of the managers and project engineers. While the SEPG can do a great deal to initiate and support change, real change only happens on projects! Everything else is window dressing! The question, then, is how to get the project people to do things that do not directly contribute to delivering products? This is a difficult problem for many reasons. First, project people are always busy. Second, they may not understand what you want them to do or why. And third, they may not believe that what you propose will help them do their jobs.

"The first essential step in any process improvement program is to get the management team to recognize that they must behave differently. This is a principal role of assessments. As long as management is willing to live with the consequences of a chaotic process, they will continue to be at Level 1, regardless of anything you or anyone else does. The key things they need to insist on are an orderly system for making commitments, documented and signed-off plans for all projects, and a quality policy. If they don't buy this, nothing else will work.

"Assuming a basic level of management agreement, I suggest you then follow a rational approach to implementing change. That is, rely on facts, data, and plans. If the key engineers and their managers do not agree or are not convinced of the need for process improvement, keep working on it until they do agree or are at least willing to go along. Remember to focus on rational arguments and on their questions and concerns. While it is true that improvement actions are hard to initiate, they are practically impossible when the involved managers and professionals are opposed. It is always important to have strong senior management support, but if the troops do not agree that an action makes sense, it is almost impossible to make it happen.

"I suggest that the SEPG start by building a strong and logical position. An effective strategy for doing this is the following:


  1. "Make sure that the senior management team recognizes that process improvement is a line responsibility. If the projects do not participate in and actively support process improvement, it cannot happen. The SEPG can be an enormous help, but they cannot actually cause improvement. The SEPG can, however, provide technical skills and act as champion in motivating, planning, and generally facilitating the change process
  2. " Get agreement from the management team on a few critical improvement actions to accomplish first. If you have not successfully made a process improvement before, I suggest starting with only one change. Often management is interested in some vague goal like getting to Level 2 this year. To make any progress, however, you need to translate this into specific actions. One way to do this would be to focus on the actions required to satisfy one KPA. (I generally suggest that Software Project Planning be the first KPA that Level 1 organizations implement.) When management argues that they need to do much more, agree but suggest starting with one specific action. Once that action is well planned and implementation is fully under way, then start on another action, etc.
  3. "The next key step is to ensure that everybody knows their improvement responsibilities. It would be a good idea to get these in writing. This could be in a management policy statement about the importance of and responsibilities for process improvement. You should also include in this the role of the SEPG and the roles of the various project and support groups.
  4. "Then put together a plan. Keep it simple, narrowly focused, and very specific on roles and tasks. Make the resource needs clear, establish checkpoints, and get commitments for the needed resources. Make the plan reasonably aggressive, publicly commit to your planned improvement work, and ask the improvement team members to publicly commit to their actions and dates. Then put these commitments in writing.
  5. "Demonstrate what is involved in getting just one change accomplished. You can do this with monthly progress reports to management. Include a table of all the outstanding schedule and resource commitments and progress against them. Report actions at a detailed enough level to make it clear where you are getting support and where it is lacking. Also report the time and effort involved in doing the work. This will help everybody realize what is involved, and it will make it obvious that the problem is not work hours but availability of project people.
  6. "Make a big deal out of success. Periodically identify real achievements, credit the people responsible by name, and publicize their accomplishments. This builds enthusiasm, enlists allies, and demonstrates progress. Once you have demonstrated the ability to make one significant change, you will be well on your way.
  7. "The key to process improvement is making many small and simple process changes. The major benefits from most process changes come from a relatively few simple actions. Don't embroider the process. I suggest limiting a process definition to one page with action-oriented bullets. If you need more, reference standards or subprocesses but restrict each subprocess to one page. Include just enough so people know what to do and when. Get some initial feedback from the first people who are to use the process, then implement the change with one pilot group. You will probably have to train their people, and will almost certainly have to help them get started. While they are implementing the change, focus on getting feedback on how the process works in practice and use this feedback to adjust and enhance the process. You will learn a great deal more from implementing one simple process change than from planning or talking about any number of complex changes. Learn from practice, not opinion. Also get comments and suggestions on the improvement process itself.

"If project management does not agree to assigning project people to the improvement tasks, you are pushing a rope. Don't waste a lot of time wheedling or pleading. Go back to senior management and make it clear that the projects are essential but they are not participating. Emphasize that without the projects' involvement, process improvement is a waste of time and money. Under these conditions, either management steps in and helps or you will have to regroup.

"If senior management will not handle the problem, they are not truly convinced of the need for improvement. Regardless of what they say, you need to start over and convince management that process improvement is important for them now.

"If you can't do this, you only have two choices. Discontinue the improvement effort until you have management's attention or try to enlist some allies. If you know one or more project managers who are supportive, get agreement to use one of their projects as a pilot. Then follow the strategy outlined above. The idea is to get one quick success. The best evidence of the benefits of improvement is a supportive project manager and team. While improvement data can be helpful, it is hard to get and, once you have a project success, it is unnecessary.

"The key is to take process improvement a step at a time. Make sure that management is in agreement with the need for process improvement, and that they know its likely costs and benefits. This approach works, but it takes time, patience, and unfailing optimism. It also takes consistent leadership and a can-do attitude. If you can convince management of the need for change, this approach will work. If you can't, nothing will work."

I personally would like to thank Watts for taking the time to address these issues for us. I hope his advice and encouragement will help you accomplish those first few successes you need to become a believable software process improvement force in your organization.

This column is for you; let's make a difference!! Send your comments and questions to "Dear SPIN Doctor" at or directly to the Editor. Sign them or use a "pen-name" -- I respect your confidentiality!!

-- The SPIN Doctor

Back to top


Job Bank

No new postings


The Boston SPIN is a forum for the free and open exchange of software process improvement experiences and ideas. Meetings are usually held on third Tuesdays, September to June.

We thank our sponsors, GTE and Raytheon. We also thank U/Mass at Lowell for hosting our Web page, and Digital Equipment Corporation for Ed Maher's SPIN Meeting Reports.

For information about SPINs in general, including ***HOW TO START A SPIN***, contact: DAWNA BAIRD of SEI, (412) 268-5539,

Boston SPIN welcomes volunteers and sponsors.

For more information about our programs and events contact: CHARLIE RYAN
Technical Assessments, Inc.
ESC/ENS (Bldg 1704)
5 Eglin St
Hanscom AFB MA 01731-2116
(617) 377-8324; FAX (617) 377-8325;

IN THE SPIN is published monthly or bimonthly, September to June. Letters, notices (including job postings), and contributed articles are welcome. Articles do not necessarily represent the opinions of anyone besides their authors. We do not publish advertisements or job searches, but we will gladly publish job postings.

IN THE SPIN is available only by email. TO SUBSCRIBE send email address to

SEND letters-to-editor, notices, job postings, calendar entries, quips, quotes, anecdotes, articles, offers, and general correspondence to Sallie Satterthwaite, (508) 369-2365, If possible, please format input as Courier text with explicit line breaks and a maximum line length of 65 characters. Send SPIN Doctor questions to the address given in the SPIN Doctor column.


HTML formatting by Thomas E. Janzen.

Return to Newsletter Index

Back to top