Established September, 1992

Newsletter of the Boston SPIN

Issue 7, November 1995

CALENDAR

DEC 12 (Tuesday) -- BCS Software Quality Group program

Contact: Adam Sacks,
asacks@world.std.com

DEC 19 (Tuesday) -- Boston SPIN Monthly Meeting

Real Process Improvement: Benefit & Risk
"Come hear Tim Lister be critical of Process Improvement at a SPIN meeting!!!"
6:30 PM (refreshments), 7:00-8:30 PM (meeting)
GTE, Building #5, 77 A Street, Needham, MA
(Admission Free, Wheelchair accessible)

MAY 20-23, 1996 -- 8th Software Engineering Process Group (SEPG) Conference

Atlantic City, N.J.,
"Broadening the Perspective for the Next Century"


NOTICES

We would like to thank the University of Lowell for making our Web page possible!

We have started a Job Bank bulletin board at meetings. Job opportunities may also be submitted to this newsletter. See the new "Job Bank" section.

Our meeting reporter, Ed Maher, was unable to make the November meeting ("The Software Acquisition Capability Maturity Model"). Would someone else like to volunteer a summary of that meeting? It need not be long -- people will be grateful for the key points.


MEETING REPORTS

June 1995 Meeting Report
by Ed Maher, courtesy of Digital Equipment Corporation

Topic:

The Air Force Software Process Improvement Program

Speaker:

Michael Reed - Division Chief, Software Management Division at the Air Force Command, Control, Communications, and Computer Agency, Scott AFB, IL

Summary

This presentation explained how the Air Force is implementing a software process improvement program. Central to this activity are mandatory Software Process Assessments and the requirement that all organizations move toward Level 3 on the CMM maturity scale.

As background, the Air Force employs about 8000 people doing software engineering. They have 38 different software houses and work in many different domains (e.g., embedded systems, command and control, MIS).


Detail

The Software Management Division at Scott AFB, IL, acts as a corporate Software Engineering Process Group (SEPG) for the entire Air Force. Each unit has its own SEPG. This corporate SEPG is a resource that the local SEPGs can use along with other external resources. His staff usually leads the assessments, with team representation from the local staff. They are using the IDEAL (Initiating, Diagnosing, Establishing, Acting, and Leveraging) approach to process improvement. He stressed the importance of all aspects of this approach -- especially the Initiating phase. "Initiating" is where you establish the business justification for the improvement, identify and align the sponsors, and establish the infrastructure for the improvement. They evaluate how the intent of IDEAL is being met as part of their assessments.

He listed their Process Improvement Principles:

  1. Support from the top.
  2. Involve everyone.
  3. The goal is to attain knowledge of the current process.
  4. Change is continuous.
  5. Software process change requires conscious effort and periodic reinforcement.
  6. Software process improvement requires investment.

He then identified risks to a process improvement program:

  • Lack of commitment from the top (saying "I paid for it; what more do you want?" doesn't cut it)
  • Resistance to change
  • Inadequate resources applied to improvement (not just inadequate budget)
  • Unrealistic goals
  • Cultural inertia
  • Inadequate planning and tracking


Their process improvement program was kicked off in September 1991 with the issuing of a policy stating that every organization would be assessed by October 1994 and at Level 3 by 1998. The first "process improvement plan" did not turn out to be a plan for improving process -- it was a plan for conducting assessments. Recognizing that as a problem, they soon made sure that everyone acknowledged that the goal would be improvement, not just assessment.

His corporate SEPG organization offered support both before and after the assessment; they continually focused on improvement and helped with defining new processes and with identifying appropriate metrics to measure the improvement. He mentioned in passing that they are also conducting a Malcolm Baldrige-based quality improvement program -- no details were provided.

They trained a number of people in how to lead assessments and began assessing the organizations. The first go-around resulted in 33 of the 38 organizations being assessed. The five organizations that were not assessed were small, and they will be assessed later. Sliced by headcount, the 33 assessed organizations contain 98% of the total software-related headcount.

In response to a question, he stated that most organizations are acting on the assessment results -- though a few did just put the report on the shelf. Twenty-five of the 33 assessed organizations produced an action plan.

After the first pass, they found that 73% of the organizations were at Level 1, 21% at Level 2, and 6% at Level 3. Sliced by people, 41% of the software professionals were in Level 1 organizations, 47% in Level 2 organizations, and 12% in Level 3 organizations.
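
The two slices differ because the more mature organizations tended to be the larger ones. As a rough illustration, here is a minimal Python sketch; the per-organization headcounts are invented for illustration, and only the percentages come from the talk:

    # Hypothetical headcounts, chosen so the people-based percentages land
    # near the reported 41% / 47% / 12%. The org counts (24/7/2 of 33)
    # follow from the reported 73% / 21% / 6%.
    orgs = [("Level 1", 24, 134),   # (CMM level, org count, avg headcount)
            ("Level 2", 7, 527),
            ("Level 3", 2, 470)]

    total_orgs = sum(n for _, n, _ in orgs)
    total_people = sum(n * staff for _, n, staff in orgs)
    for level, n, staff in orgs:
        print(f"{level}: {100 * n / total_orgs:.0f}% of organizations, "
              f"{100 * n * staff / total_people:.0f}% of people")
    # Level 1: 73% of organizations, 41% of people
    # Level 2: 21% of organizations, 47% of people
    # Level 3: 6% of organizations, 12% of people

The same assessment data can thus tell two different stories, depending on whether you count organizations or the people working under mature processes.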

He mentioned that they are aware that there could be a "conflict of interest" problem related to having a Level 3 goal and determining it by internal assessment, as opposed to an independent Capability Evaluation. {Remember that the primary intent of an assessment is to provide a basis for process improvement with the determination of a level being secondary. If people know that they are being "goaled" on achievement of a level, it is hard for everyone to maintain their objectivity when participating in the assessment.} He did not go into how they are managing that potential problem. He did say that no two organizations are ever compared based on the CMM Level -- the only comparisons are based on business indicators.

The objectives of the assessment are to:

  • Understand the current practices
  • Identify the key areas for improvement (not identify solutions)
  • Provide a framework for the subsequent improvement

For smaller organizations (30 people or fewer), they have a different process which takes less time, uses fewer people, and includes producing recommended solutions. {The audience was very interested in these scaled-down assessments.} He was not able to state whether these small-scale assessments were more or less successful than a standard assessment -- not enough data yet.

He was fairly candid about the fact that there still are some obstacles, for example:

  • Process improvement is still perceived as being something extra, not a better way to do business.
  • There are not enough reinforcement mechanisms for succeeding at process improvement.
  • There are too many "waves of change" that can get in each other's way and cause people to tune out any proposed change.


Some Questions and Answers:

Q: Do you think that assessment of a unit that has many projects produces findings applicable to all the projects?

A: Yes, because part of the assessment planning involves scoping out the projects that are assessed to ensure that they are representative of the entire organization.

Q: Have you tailored the SPA process based on the type of software work that is done?

A: No. The assessment process was the same regardless of what kinds of development were being done. However, the intent of some of the CMM key process areas does change based on the kinds of software projects being assessed. As mentioned above, they do modify the assessment process based on organization size.

Q: Have you done anything with "interim profiles"? {An interim profile is a low-cost assessment method that is performed between assessments and provides an indication of progress toward the next maturity level. Bruce Hoffman from Unisys presented some experiences with interim profiles at the May SEPG Conference, and the SEI has a technical report on the subject.}

A: They have tried it once (a pilot); it is too early to judge the results.

Q: When you say to Senior Management that they have to be behind this, what happens?

A: It runs the spectrum; some just pay lip service and some really do "walk the talk."

Ed Maher works for Digital Equipment Corporation within the OpenVMS Engineering organization.


September 1995 Meeting Report
by Ed Maher, courtesy of Digital Equipment Corporation

Topic:

Can Software Projects Learn?

Speaker:

Paul Brenner, President, Arthur D. Little Program Systems Management Company

Summary

This presentation revolved around one organization's attempt at making significant process change. A central point of the pitch was that the ability to learn is an important skill that is far too often ignored. His approach for addressing this was an iterative cycle of Awareness, Understanding, and Action.

(This was a nice complement to Eileen Steets Quann's keynote at the recent SEPG Conference, "Transforming Your Workplace Into a Learning Organization". Both covered the techniques and benefits of formal learning. Eileen's talk was more general, put more emphasis on why learning is so important, and included some tips on how to go about it. Paul provided more detail on one approach to organizational learning and primarily illustrated the benefits using this real-life success story.)

Detail

The first thing that Paul did was to conduct an exercise with the audience to get a sense of where the represented organizations were with learning ("A Litmus Test for Learning Organizations"). This is described at the end of this report.

He based most of his talk on a real situation that Arthur D. Little is working on. It involves a large organization responsible for monitoring commercials for advertisers to ensure that they are broadcast as expected (quantity, placement, timing). This organization is moving toward a computerized solution in which computers receive the signals and do the analysis.

When getting ready to implement this system, they set up a rigorous software management process, including the following changes to their "standard" process:

  • Switching to object technology
  • Analyzing requirements intensively
  • Using function point analysis
  • Planning with a detailed work breakdown structure
  • Dividing labor efficiently (between Engineering and Operations)
  • Soliciting inputs from all stakeholders (finance, marketing, etc.)
  • Assigning a decisive project leader
  • Utilizing expert consultants

Of course everything did not work out as planned:

  • They found themselves in requirements analysis gridlock.
  • The people following the new methodologies did not understand why they were doing many things.
  • Morale was very low.
  • The project leader started becoming more directive.
  • People & organizations started focusing on identifying who was at fault -- as opposed to solving problems.

Someone from the audience asked the rhetorical question: "Has anyone else seen a similar situation?"


How did all of their grand planning result in such a situation? A quick analysis pointed out that many of their planned improvements contributed to the problems:

  • The switch to object technology, along with the planned use of function point analysis, ended up contributing to the requirements gridlock, so the product requirements were never completed.
  • The "detailed" work breakdown structure turned out to be too detailed. The result was a rigid process that left no room for experimentation.
  • The "efficient" division of labor caused stovepipes to develop with insufficient communication.
  • All the inputs that were solicited were just thrown over the wall and not integrated into the process.
  • The decisive project leader ended up being more "dictative" than decisive. This was like pushing a rope with no pull from the people responsible for the actions.
  • The use of expert consultants resulted in people not understanding the problems that were getting in their way. In addition, the focus seemed to be on fixing the short-term crisis with insufficient awareness of the long-term. People in the organization were not learning how to solve problems.

Back to the topic at hand. What does all of this have to do with organizational learning? Like many things in the process improvement domain, organizational learning involves an iterative cycle:

  • Create Shared Awareness
  • Develop Common Understanding
  • Produce Aligned Action

He described the "how and why" of each of these, along with some real-world examples (unfortunately, I missed many of the examples):

Create Shared Awareness

Awareness is the first step in any learning cycle. A learning organization has an infrastructure that allows for continuous assimilation of information (internal and external). This needs to be proactive and to be everyone's responsibility. It won't work as well if certain people are designated as being responsible for soliciting and processing information. He gave a series of examples of companies that are successful at this. Two examples:

  • In AT&T, the Chairman brings in 150 managers from around the world several times a year to share experiences.
  • NUMMI (an auto shop) has a policy of rotating people through different jobs. This has a negative impact on productivity, but that is offset by the benefit of having people being familiar with each others' responsibilities.


Develop Common Understanding

Make sure that everyone is playing from the same deck: everyone has the same understanding of the requirements, of the business rationale for the requirements, of the organizational goals, and of the problems that are currently getting in the way. Two examples:

  • ABB holds quarterly meetings with representatives from all over the company; these meetings are dedicated to interpreting business data.
  • DuPont maintains, and makes available to all employees, a manual of all business processes.

Produce Aligned Action

This is the real payoff for a learning organization. Action plans are committed based on a common understanding. Change is explicitly managed and communicated. Change activity includes training and motivation. Controlled experiments are performed before introducing any change across an organization.

As this cycle iterates, the learning is built upon and extended.

What are the kinds of things you have to do to be successful at implementing this approach?

To ensure that you have effective change management and leadership, you must have:

  • good internal communication
  • some formal teaching/training/coaching
  • a reward system that is aligned with the change activity.

(I think that this point about the reward system has now been made by five SPIN presenters in a row, all from different perspectives. Maybe there is something to it?)

Another recommendation was that if you want to accelerate improvement, don't forget that people have to learn how to learn. For example:

  • L. L. Bean has a team of people responsible for improving their process improvement process.
  • Many organizations have begun setting up databases of shared knowledge (Notes files, Web pages, ...). This can be any kind of knowledge, such as activity review reports, what other companies are doing, best practices, things attempted that were not so successful, analysis of why things were or were not successful, etc.


Back to the case study -- what did they do to recover?

Tracing this to the three aspects of a learning organization, they:

Created the shared awareness

As part of project planning, they explicitly planned to learn.

  • They prepared and trained people to solve problems -- as opposed to focusing exclusively on preparing contingency plans.
  • They studied the implementation challenges of others, researching why some groups that switched to object technology succeeded and others failed. This resulted in a decision to simulate and prototype everything, including one rapid prototype of an entire vertical slice of the product. This was done as much to learn about their process as to learn about their product.
  • Everything that they did involved the entire organization.
  • They experimented with a streamlined QA organization.
  • The project team actively managed each phase of their life cycle. All goals (company, project team, and individual) were shared using Lotus Notes. Gap analysis of goals was done to identify any inconsistencies or obstacles. They also looked for the unwritten rules -- an acknowledgment of the importance of knowing your culture.

Developed a common understanding

They created a company-wide enterprise model and established five cross-functional teams (Communication, Project Management, Development, Quality Assurance, and "Change & Learning").

Produced aligned action

Each process leader (from the cross-functional teams) experimented with new approaches -- many of which were successful.

Someone in the audience asked about the "dictative" project leader. Did he change his behavior? Unfortunately, the answer is "no". He is no longer on the project and they brought someone in who has a more people-oriented management style.

At this point Paul identified a great new metric:

Grumbles and Snickers

A "grumble" is an instance of someone complaining about how a change is disrupting their life; a "snicker" is someone observing that a change is disrupting someone else's life. One measure used to assess the success of a change was to have someone go around talking to people and get a sense for how much grumbling and snickering was occurring.

He pointed out the distinction between creating new knowledge and transferring existing knowledge. He did not make a value judgment of one over the other as learning vehicles.

Knowledge Creation      Knowledge Transfer
Experimentation         Training
Risk taking             Rigor
External information    Internal information
Right brain             Left brain

He pointed out that software engineering requires both training and creativity, as contrasted with art (which requires mostly creativity) and accounting (which requires mostly training). The CMM seems to encourage training and rigor at the lower levels and not to address creativity until Levels 4 and 5. He suggests that this is a flaw in the model and that focusing on creativity at the lower levels may speed maturity progression.

Some random points:

  • If all of your experiments are a success, then you are not taking enough risks.
  • Don't spoon-feed people; design the process so that people have to think and contribute to the success of the organization.
  • He referenced the article "Beating Murphy's Law" by Chew, Bohn, and Leonard-Barton in the Spring 1991 Sloan Management Review.
  • He mentioned that he was happy to see all the focus that the process improvement community has started to put on culture. (Many aspects of his talk reinforced points made by Stu Jeans in his "Culture" presentation this past May and by Bill Hefley in his "People CMM" presentation in March.)

Questions:

Q: How do we know that the next project in the case study won't fail?

A: The changes in place were not just to the project team. They changed many things in the infrastructure and culture. These won't just go away. (He also said that he would recommend that the project team in question be split up as a way to spread their experience around.)

Q: Isn't it dangerous to experiment in ways that cut corners -- that is, such that if an experiment failed, the end product would suffer?

A: They were careful not to experiment with the attributes of the end product -- only with the process used to get there.


The exercise:

There were eight questions relating to organizational learning that we were asked to rate on a scale of 1 to 5. In addition, we were asked which of the questions represented the things that were most important to us. The three that came out most important were:

  1. Knowledge Transfer

    (how things learned in the organization are proactively managed and communicated across the organization)

    The average response indicated that this is seldom being done.

  2. Review and Record

    (how much is captured and communicated after a team or project finishes its assignment)

    The average response indicated that this is also seldom being done.

  3. Learn from Experience

    (avoiding repeating mistakes)

    The average response was that this is sometimes occurring.

He was asked if he had any expectation for the outcome of this exercise. He didn't. Someone else expressed surprise that "Learn to Learn" wasn't the most important factor (defined as: the organization hones its skills for generating, acquiring, and using knowledge by learning from the learning processes of other organizations). He said that this isn't always acknowledged due to the "not invented here" syndrome.

Ed Maher works for Digital Equipment Corporation within the OpenVMS Engineering organization.


COST-JUSTIFYING A TEST COVERAGE ANALYZER TOOL
by Michael Caron

An SEI Level 1 software shop generally establishes a competent functional testing organization before investing in other testing activities. Code and specification walkthroughs and inspections are the next priority within most testing organizations, followed by test automation. SEI's Capability Maturity Model does not recommend test coverage goals until Level 3. A test coverage analyzer is a way to verify that these test coverage goals are being met by quantifying how much of the code is executed during testing.

Test coverage goals can be identified for unit, integration, functional, and system testing. A project or organization that is finding large quantities of defects during functional or system testing or after release, and that has already implemented walkthroughs, is an ideal candidate for a test coverage analyzer. Another prime candidate is a project or organization that is finding more than 25 percent of its defects through unstructured testing activities during functional and system testing.

On the other hand, meeting test coverage goals will identify few defects in software that is table-driven, or defects resulting from omissions (always a strong argument for black-box testing over white-box testing). However, recognizing low test coverage is always useful because it indicates faults in the test process [5]. For a discussion of the different forms of coverage, see Myers [6].

Commercial software development organizations typically execute 45 to 55 percent of their code during formal testing [2]. These organizations can achieve 80-percent branch coverage with very little additional effort. These organizations generally find at least seven defects per 1,000 lines of non-commented source statements after unit and integration testing, but before product release. Another three to ten defects per 1,000 lines are found after product release.

The premise for cost savings from a test coverage analyzer is that more defects will be found in unit and integration testing rather than in later testing phases. The cost to fix those defects will be lower than if they were found and fixed later in the development cycle.

However, it is important to recognize that test drivers are needed for unit and integration testing, and there may be little reuse of the unit test drivers at the integration test level. Therefore, focusing test coverage goals on integration testing rather than unit testing should reduce testing costs by reducing the effort needed to develop the unit test drivers. Also, research by L. Lauterbach and W. Randall showed that 2.7 times more defects were found by branch coverage at integration testing than at unit testing [2,4]. Therefore, setting an 80-percent branch coverage goal at the integration testing level will probably yield the most benefit.

The following analysis scenario is based on a seven-person, eleven-month coding effort to generate 50,000 lines of C or C++ code.

Four additional months of testing and defect correction are needed prior to releasing the software.

The defect density after unit and integration testing and before release is seven defects per 1,000 lines of code (KLOC). The post-release defect density is three defects per KLOC.


Some assumptions for these calculations are:

  • The loaded annual cost of an engineer is $115,000 (about $9,600/month).
  • It will take one person-month to select and deploy the tool.
  • The purchase price for the tool will be $10,000.
  • It will take an average of 6.3 hours to find, fix, and retest a defect [2].
  • It will take three days for each engineer to become proficient in using the tool.
  • The engineers will not spend any additional time writing integration tests that provide 80-percent test coverage [2].
  • We will find 13 percent more defects during integration testing because of test coverage than if we did not use the tool [3,5].

Given the above assumptions, our cost savings come from finding and fixing the defects in a subsystem while the engineer is still working in that subsystem.

For this application, we would normally (if not doing test coverage analysis) find

350 defects (50 KLOC * 7 defects/KLOC)
during functional and system testing, and
150 defects (50 KLOC * 3 defects/KLOC)
after release.

If we use a test coverage analyzer as outlined above, we can now expect to find and fix an additional 65 defects (13 percent of 500) while in integration testing. According to Boehm [1], it will cost twice as much to fix an error found during functional and system testing (Boehm refers to this phase as development test) as opposed to fixing the error during unit and integration testing (Boehm includes unit and integration testing as part of the coding phase). Since 46 of those defects are now fixed in integration testing as opposed to functional and system testing, we will save 1.6 person-months in repair costs (46 defects * 6.3 hours/defect * (2 - 1 relative cost index) * 1 day per 8 hours * 1 month per 22 days).

There is a considerable savings in finding a post-release defect while in integration testing. According to Boehm [1], it will cost five times as much to fix an error found after release as opposed to fixing the error during unit and integration testing. Therefore the savings will amount to 2.7 person-months (19 defects * 6.3 hours/defect * (5 - 1 relative cost index) * 1 day per 8 hours * 1 month per 22 days).

The total savings is 4.3 person-months, or $41,280, from using this tool.

The net savings are $12,080 and 2.3 person-months, while pulling in the schedule by 7 days.
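
For readers who want to replay this arithmetic with their own numbers, here is a minimal Python sketch of the calculation above. The dollar figures and relative cost factors are the article's; the variable names are mine, and the split of the 65 extra defects into 46 and 19 follows the article's rounding:

    # The article's cost model. Dollar figures and relative cost factors
    # come from the article and its references; the code is illustrative.
    KLOC = 50                  # application size, thousands of lines
    HOURS_PER_DEFECT = 6.3     # find, fix, and retest one defect [2]
    MONTHLY_COST = 9_600       # loaded engineer cost, $/month
    HOURS_PER_MONTH = 8 * 22   # 8-hour days, 22 working days per month

    pre_release = KLOC * 7     # 350 defects in functional/system testing
    post_release = KLOC * 3    # 150 defects after release
    # 13 percent more defects are found at integration testing [3,5];
    # the article splits the 65 extra defects as 46 pre-release and 19
    # post-release.
    shifted_test, shifted_field = 46, 19

    def saved_pm(defects, boehm_factor):
        # Person-months saved by fixing a defect at integration testing
        # instead of later; Boehm's factors are 2x for functional/system
        # testing and 5x for post-release [1].
        hours = defects * HOURS_PER_DEFECT * (boehm_factor - 1)
        return round(hours / HOURS_PER_MONTH, 1)

    gross_pm = saved_pm(shifted_test, 2) + saved_pm(shifted_field, 5)
    gross = gross_pm * MONTHLY_COST              # 4.3 pm -> $41,280
    net = gross - 2 * MONTHLY_COST - 10_000      # less training, start-up, tool
    print(f"gross: {gross_pm:.1f} person-months (${gross:,.0f}); "
          f"net: ${net:,.0f}")
    # gross: 4.3 person-months ($41,280); net: $12,080

Plugging in a different tool price or defect-shift percentage shows quickly whether the purchase still pays for itself.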

Furthermore, the next version of the commercial product will take even less time and involve little or no start-up or training costs, leading to even greater savings.


COST/BENEFIT ANALYSIS

ITEMS            COSTS                        BENEFITS
Training         1 person-month ($9,600)
Start-up costs   1 person-month ($9,600)
Repair time                                   4.3 person-months ($41,280)
Tool cost        $10,000
Time to market                                7 days

More details on test coverage tools can be found at the United States Air Force Software Technology Support Center's website, http://www.stsc.hill.af.mil/, under the Software Test Technologies Report.

Also, GCT, a public-domain test coverage tool for UNIX/GCC users, is available at ftp://cs.uiuc.edu/pub/testing.

References

  1. Boehm, B., "Software Engineering Economics." Englewood Cliffs, NJ: Prentice-Hall, Inc., 1981, pp. 40, 382.

  2. Grady, Robert, "Practical Software Metrics for Project Management and Process Improvement." Englewood Cliffs, NJ: Prentice-Hall, Inc., 1992, pp. 16, 60, 178.

  3. Howden, William, "Applicability of Software Validation Techniques to Scientific Programs." Transactions on Programming Languages and Systems, vol. 2, no. 3, pp. 307-320, July 1980.

  4. Marick, Brian, "A Survey of Test Effectiveness and Cost Studies." Urbana, Illinois: University of Illinois, Report No. UIUCDCS-R-90-1652, December 1990. (Available through Research Access, Pittsburgh, PA.)

  5. Marick, Brian, "Experience with the Cost of Different Coverage Goals for Testing." Internal paper from Motorola, Inc.

  6. Myers, Glenford, "The Art of Software Testing." New York, NY: John Wiley & Sons, 1979.

Michael Caron has 15 years of experience developing real-time and commercial software products, including the application of technical management and SEI skills. He is currently between positions. He can be reached at mrcaron@ici.net.


THE SPIN DOCTOR
(Judi Brodman)

Dear SPINners:

I hope that you all had an enjoyable and productive summer (or winter for those of you in the southern hemisphere). Here in New England, fall is in full bloom! How did it happen so quickly? Yesterday, when I looked out, the trees were all luscious green -- today, the trees are magnificent red, yellow, and orange! Did this change actually occur overnight? We know that it didn't. The change started as far back as August, when the color of individual leaves began to change. The change was so subtle that we didn't really see it until a majority of the leaves had changed, creating the breathtaking sight we see today.

Change within an organization occurs the same way -- seemingly overnight. But in reality the change is occurring little by little over a long period of time. Individuals within the organization, like leaves on a tree, change their "color" at different rates, but eventually, like the tree, the entire organization will change its color. There is another lesson we could learn from nature -- the color change occurs from the top down on a tree -- leaves at the top change first. Has your management changed their color yet?!

This discussion of the process of change in nature leads to the subject of my column this month -- the process of change in the software organization. Many SEPG leaders and process change agents have written to express frustration and anger at their management for placing them in a "no-win" situation. They feel like "failures" because they are not able to make improvements or change happen in their organizations. They have been given a "mandate to raise the organization to a certain level of maturity by a certain date yet everything else has more priority than the software process improvement tasks". I don't have room to print and answer each of the letters I received on the subject of change in an organization, but I will try to address the issues raised in these letters collectively in this column.

In his book "Managing for Innovation -- Leading Technical People," Watts Humphrey states that "a change agent provides the energy, enthusiasm, and direction needed to overcome resistance and cause change". He also says, in "Managing the Software Process," that "the SEPG fills this role by providing the skilled resources, the creativity, and the management leverage needed to make things happen". These two simple statements contain all the ingredients needed to produce a successful SEPG or change agent.


  1. -- Skilled Resources

    First, let's discuss skilled resources. You write that skilled resources to perform process improvement tasks are not available to you, that you, yourself, are not skilled enough to perform process change, and that you were "assigned" the responsibility of being the process change agent.

    I read somewhere that process agents should be recruited -- not conscripted. What a novel idea! Watts states "Care is required, however, to guard against getting the lame, the halt, and the tired." This means that management should not assign people to these roles just because they lack a charge number or they know "something about quality".

    What skills can help to make a person successful in the role of SEPG leader/change agent? First, the candidate needs a wide breadth of software experience. This experience allows the candidate to understand the changes that are needed on individual projects as well as across the organization, and to understand the impact of certain changes on projects and on the organization.

    Second, the candidate needs to be able to plan and manage process improvement as though it were a software project. Watts states that the SEPG leader "must be a knowledgeable manager with a demonstrated ability to make things happen, the respect of the project professionals, and the support of the top management".

    • Do you, as the chosen leader, have the qualities and background necessary to guide a process improvement program to successful conclusions in your organization?
    • Are you a manager who can make things happen?
    • Do you have the respect of project leaders and organizational managers such that you can perform the tasks required of you as the change agent?
    • Do you have the energy and enthusiasm needed to make the organization change?

    Management must learn to recruit people who fit this "job description". To make it easier to recruit qualified candidates, organizations can rotate the role of change agent throughout the organization. Candidates can volunteer knowing that the duration of their assignment will be a certain period of time. The rotation of people through these roles exposes a great number of software personnel to the tasks and responsibilities involved in performing process improvement.

  2. -- Creativity

    Second, let's discuss creativity. Creativity appears, on the surface, to be a rather strange quality to associate with the description of the change agent. But remember, the change agent is a leader in the organization. As a leader, he or she needs to take the organizational vision of process improvement -- to reach a certain level of maturity by a certain date, to increase customer satisfaction, etc. -- to reality. That transition from vision to reality is no easy task!

    The SEPG leader or process agent must be able to "create the kind of tension that will help men rise from the depths..." says Peter Senge in "The Fifth Discipline" -- the tension being called "creative tension". You need to motivate personnel, create enthusiasm, and energize the organization. You need to make the organization understand both the vision and the current reality relative to that vision.

    Some of this understanding will come when you translate assessment outputs (weaknesses and strengths) into action plans. You need to be creative in finding solutions to project and organizational weaknesses. But you also have to guide the pace at which change occurs in the organization. Watts states "if [the pace of change] is too slow, progress will be limited, while too rapid a pace will be disruptive and self-defeating".

    You need to schedule tasks that can be accomplished on time, within budget AND successfully. Successful completion of tasks at the start of your process improvement program is extremely important to the success of the program and to the acceptance of you as the leader. If you are not creative, imaginative, AND careful, you may never succeed in moving from the current reality to the vision. Instead, you may end up pulling the vision into the current reality!

  3. -- Management Leverage

    Third, let's discuss management leverage. You wrote "I was appointed by the ....... Director and it was assumed that that in itself would carry with it some power, but this has not worked with the projects. The issue of power and responsibilities was not really discussed".

    The SEPG leader or change agent, assigned certain duties and responsibilities by management, needs the authority or power to make things happen. The organization must understand what the process change agent's role and authority is and that the agent has the full backing and authority of management to perform the process improvement tasks.

    You also write that you have "put together a schedule of activities that would work but have been unable to meet many of the deadlines due to SEPG members needing to focus on project priorities".

    Your resources are people chosen from on-going projects who still have project duties to perform. In this case, process improvement tasks are assigned a very low priority in the organization -- project duties always come first -- and you find that your action plans and schedules are out of date as soon as you publish them.

    It sounds like process improvement work is viewed by both the project managers and the organization as "busy work". Management needs to be made aware of this attitude. Process improvement tasks need to be planned and managed as a project, with reporting responsibilities to management similar to those of other projects in the organization. Watts states that "the decision to make changes must rest with line management, but the SEPG should provide the technical guidance on what changes are most important. They must be aware of the state of the software process, appreciate the areas needing improvement, and present practical improvement recommendations to management".

    In these statements, a very cooperative, working relationship between management and the SEPG is described -- a relationship that SEPGers and change agents need to strive for.


I asked Mark Paulk to make a few comments on the issues raised in your letters and his comments are as follows:

"Does management pay attention to what they are doing? If they (the SEPG) have to present (to management) on what they've accomplished every month, the visibility gets the message across that this (software process improvement tasks) is something we should really spend time on."

"Do it as formal presentations to management, maybe rotate the presentation responsibilities (among SEPG members). That usually helps -- and encourages management sponsorship at the same time." Mark made some very valid points. I especially like the idea of rotating presentations to management among the SEPG members; it's a great way to have the SEPG members recognize the importance of the process improvement tasks and the accountability that the SEPG has to management."

Think of some of the more visible SEPG leaders -- Ray Dion, who headed Raytheon's process movement, or Ron Willis, who heads the process improvement movement at Hughes -- and know that it is possible to be successful in the change agent role when all three ingredients are present: skilled resources, creativity, and management leverage!!

If you have any questions or comments on this column, please send them along to me so we can all learn from them! That's why I'm here!!

This column is for you; let's make a difference!! Send your comments and questions to "Dear SPIN Doctor" at brodman@tiac.net or directly to the Editor. Sign them or use a "pen-name" -- I respect your confidentiality!!

-- The SPIN Doctor

P.S.

To Paris, France:

I am still working on finding LOC-based estimation techniques for C, C++, and 68K assembler. Does anyone have information on estimation techniques for these languages?


JOB BANK

Inso Corp.

31 St. James Ave.
Boston, MA 02116-4101
Phone - 617 753-6770
Fax - 617-753-6666
Email - ssprague@inso.com (Susan Sprague)

Inso Corporation develops software products that help people enhance the quality of their written communications and use information and ideas more effectively. We specialize in the fields of computational linguistics, language-focused software engineering, and information-based technology.

Specialists in these fields contribute to Inso Corporation's second-to-none service and its commitment to improving the quality of OEM products and the writing of end users. Inso is an Equal Opportunity Employer and offers a comprehensive benefits program. Compensation for our available positions is commensurate with experience.

Position Description:

Software Engineering Process Engineer

The successful candidate serves as a focal point for process improvement and oversees and manages task force activities and process implementation in product groups, as outlined in the SEI Capability Maturity Model. Qualifications include a good understanding of software processes and in-depth knowledge of process methods and practices. Must possess strong communication skills. The ideal candidate will have project development and application expertise. B.A./B.S. or equivalent preferred.


Eliassen Group, Inc.

591 North Ave., 5B
Wakefield, MA 01880
617-246-1600 / 800-428-9073 (phone)
617-245-6537 (fax)
eliassen@world.std.com

Eliassen Group, Inc., is a dynamic computer consulting and permanent placement firm with an excellent reputation for providing quality service to distinguished clients and consultants. Our clientele consists of many top organizations in the country, from small start-ups to Fortune 500 firms. We specialize in the development of distributed computing architectures and client-server based applications.

CONTRACT opportunities -- CONTACT Laina or Jennie with JOB#

JOB#: 3815 APPLICATIONS TEST/UI/NOTES QA

LOCATION: Cambridge, MA
DURATION: 3 Months
REQUIRED: Software QA.

Our client is looking for a qualified individual to perform testing on their product. Testing will be mainly manual and unstructured. They are looking for a candidate with a strong applications testing background as well as knowledge of the QE process; 3+ years would be ideal. The candidate really must understand UI and all of the issues associated with it (this group is responsible for all of the UI editor testing). Only candidates with three (3) years of commercial experience will be considered.

JOB#: 3682 SR.SQA ENG/WINDOWS/WIN95/TEST PLANS/
BUG REPORTS/IPX/NETBIOS

LOCATION: Andover, MA
DURATION: 3 Months
REQUIRED: Test Plans, Windows.
PLUSES: Windows 95, Chicago, IPX, MS-Test, NetBIOS.

Our client is looking for a qualified individual who has extensive Windows knowledge; Windows 95 a definite plus. Will be developing and implementing test plans and bug reports, and installing and configuring modems and sound cards. IPX and NetBIOS knowledge a plus. Knowledge of MS-Test preferred. Must be able to work with minimal supervision. Only candidates with three (3) years of commercial experience will be considered.

JOB#: 3772 WINDOWS NT/QA

LOCATION: Woburn Area, MA
DURATION: 6 Months
REQUIRED: Quality Assurance, NT.
PLUSES: MS-Test, Win Runner, Test Plans, Test Tools.

Our client is looking for someone with 2+ years of Windows NT, 2+ years of experience with automated test tools (such as MS-Test, WinRunner, or equivalent), and 2+ years of QA background. The candidate will need the ability to generate test plans and procedures, and a good technical skill set. People on this team will be working on the administration interface for the client's new platform and hardware base. This is an existing, high-profile project for the company. The first delivery will be to the state's largest cellular service provider. Must be able to start in November. Only candidates with three (3) years of commercial experience will be considered.

JOB#: 3687 WIN 95/NT/QA/MS-TEST
Recruiter CONTACT: Eileen or Jennie

LOCATION: Burlington Area, MA
DURATION: 6 Months
REQUIRED: Quality Assurance, Software QA, MS-Test, either NT or Win95.
PLUSES: Chicago, NT, both NT and Win95.


Software Quality Partners

Attn: Human Resources
One Van de Graaff Drive,
Burlington, MA 01803,
Email: ACSI@delphi.com, Fax: 617-272-2433
Voice: 617-272-7393

Software Quality Partners is a leading national consulting organization providing Software Quality and test automation services. Our vision to be the leader in Software Quality services is achieved by hiring the best in Software Quality.

Currently, we are seeking Software Quality professionals to join our consulting staff on either a permanent/direct or contract basis to help us attain our aggressive growth plans. We are seeking professionals at a variety of skill levels who possess excellent Software Quality capabilities and hands-on technical skills. Experience with automated test tools and with developing test plans and procedures is helpful. The ability to work in C/C++, UNIX, and Client/Server environments is a plus.

Opportunities also exist for SEI-trained auditors with experience working on process improvement for very large projects (many hundreds of millions of dollars).

Positions involve national travel.


ACSI

Human Resources,
One Van de Graaff Drive, Burlington, MA 01803
617-272-8841 x234
Fax: 617-272-2433
acsi@delphi.com

SOFTWARE CONSULTANTS

ACSI is a leading provider of simulation systems and commercial software development services. We are currently seeking motivated professionals who are interested in a challenging consulting career working with state-of-the-art technology. We have an immediate need for candidates with excellent software development skills in the following:

*Sybase, C, SQL

*Sybase, C, SQL, UNIX, and PowerBuilder

*MS-Access v2.0

*Strong C/UNIX with Pro*C, SQL, and Oracle a plus

*Visual Basic, Sybase, C, and Windows

*IBM/MVS environment with COBOL, CICS, DB2, TSO/ISPF, SPF, JCL, and VSAM. TELON experience a plus.

A background in financial services is desirable.


ANSYS, Inc.

Human Resources
PO Box 28
Houston, PA 15342
(412)873-3094
ddd@ansys.com
An Equal Opportunity Employer - M/F/H/V

TESTING SPECIALISTS

ANSYS, Inc., a major developer of state-of-the-art engineering software, has two positions open for software testing engineers in our Corporate Quality Department.

Testing Specialist/Usability and Integration Testing (Job #320):

This person must have experience in testing software usability in terms of ease of use, productivity, intuitiveness, and appearance. A solid understanding of state-of-the-art software and software testing techniques is required. The candidate should have a BS or MS in Computer Science or Engineering.

Testing Specialist/Graphical User Interface Testing (Job #321):

This person must have two or more years of experience in testing graphical user interfaces and menu-driven software. A solid understanding of state-of-the-art software and software testing techniques is required. The candidate should have a BS or MS in Computer Science or Engineering.


MASTHEAD

The Boston SPIN is a forum for the free and open exchange of software process improvement experiences and ideas. Meetings are usually held on third Tuesdays, September to June.

We thank our sponsor, GTE.

For information about SPINs in general, including ***HOW TO START A SPIN***, contact:
DAWNA BAIRD of the SEI, (412) 268-5539, dbaird@sei.cmu.edu.

Boston SPIN welcomes volunteers and new sponsors. For more information about our programs and events contact:
CHARLIE RYAN, Technical Assessments, Inc.,
ESC/ENS (Bldg 1704), 5 Eglin St, Hanscom AFB MA 01731-2116;
(617) 377-8324; FAX (617) 377-8325; rprice@ma.ultranet.com (Ron Price).

SEND letters-to-editor, notices, job postings, calendar entries, quips, quotes, anecdotes, articles, offers, and general correspondence to Sallie Satterthwaite, (508) 369-2365, sallie@world.std.com. If possible, please format input as text with explicit line breaks and the maximum line length seen here. Send SPIN Doctor questions to the address given in the SPIN Doctor column.

Our WEB HOME PAGE is at
http://www.cs.uml.edu/Boston-SPIN/
The following will also work:
http://www.cs.uml.edu/Boston-SPIN/index.html

This document was converted to HTML by Ken Phipps, GTE Government Systems.
