
Established September, 1992

Newsletter of the Boston SPIN

Issue 3, March 1995




APR 11 (second Tue), 7-9 pm

-- Monthly BCS/SQG (Software Quality Group) meeting -- "Problems in Testing", conducted by Don Willett -- BCS office, Waltham -- Info: Adam Sacks, 617-951-6057 (w), 617-648-3643 (h)

APR 17 (third Tue), 6:30-8:30 pm

-- Monthly SPIN meeting: "Process Improvement + Metrics = Performance Improvement", by Dr. Howard Rubin. GTE Building 5, 77 A St, Needham

APR 14 (Fri)

-- Deadline for early registration for SEPG Workshop


-- BCS/SQG Saturday PDS series conducted by Johanna Rothman -- Info: Adam Sacks, 617-951-6057 (w) -- Registration ($195/$244): BCS, 617-290-5700

APR 22 "Best-in-Class Software Engineering"

MAY 20 "A Pragmatic Approach to Software Quality"

JUN 17 "High Tech Project Leadership"

MAY 22-25 (Mon-Thu)

-- PROCESS AT WORK, the 1995 SEPG Workshop, Boston Sheraton -- Annual national conference for software process improvement, this year sponsored by SEI and Boston SPIN -- Info: 412-268-6467, 412-268-7388



SEPG/95 SPIN Booth Hosts Wanted

Spend a fun-filled hour hosting the Boston SPIN booth -- no conflict with scheduled conference events. A great way to network! Please contact Al Jost, (508) 443-7244.

Fly with the Eagles

At the May meeting next year's Steering Committee will be elected. Joe Farinello is chair of the Nominating Committee and welcomes your offers to run and suggestions for candidates. The elected officers are Chair, Vice Chair, Treasurer, Secretary, and two At-Large members.

To reach Joe Farinello:
ESC/ENS (Bldg 1704)
5 Eglin Street
Hanscom AFB, MA 01731-2116
Work Phone: 617-377-8561

Looking for Contacts

I work for the Liberty Mutual Insurance Company in the Information System department and am in the process of starting an SEPG. Would you be able to put me in contact with SPIN members who are in the Financial Services/Insurance/Banking industry? If not, how about SPIN members who work in an internal information systems department?

Any help you can provide is greatly appreciated.

Jim Trusselle
Liberty Mutual Ins. Co.
225 Borthwick Ave.
Portsmouth, NH 03801
(603) 431-8400 x 53217

Programs "R" You

Your input is critical to our putting on good programs. Our Program Committee welcomes your suggestions and your offers to present. Please contact Ken Oasis at (617) 498-5693.



Thanks

to Digital Equipment Corporation for granting permission to reuse the SPIN meeting reports that Ed Maher writes each month, and to Ed for creating reusable versions (deDECed). We will have two per month for a while (something old, something new), as Ed started doing these reports last October.


to all who have volunteered to help at the SEPG Workshop or who submitted logo ideas for SPIN or for the Workshop. The SPIN logo decision is still in the works.



October 1994 Meeting Report
by Ed Maher, courtesy of Digital Equipment Corporation


Risk Management -- Limitations Of The Rearview Mirror


Clyde Chittester (SEI Risk Management Program Director)


The presentation was focused around a new program at the SEI that addresses risk management in the context of software acquisition. This work, like most of the SEI work, is influenced by the model of a single purchaser of a large computer system. The goal is that organizations consciously assess what can go wrong and what the severity of the impact would be by continuously identifying, communicating, and resolving risks.

I found this to be one of the more interesting talks that I have seen at a SPIN.


The SEI suggests that software projects put an explicit risk management plan in place that includes iterations of: identification, analysis, planning, tracking, and controlling. Most of the discussion involved identification, analysis, and planning. The process he described has been piloted on about 45 projects (industry and government sectors). He described these steps as follows:

  • Risk Identification

    Risks would be identified by something that resembles a scaled-down version of an SEI assessment. There is a fairly detailed set of questions asked of representatives of all participating functions, including people outside the software domain.

  • Risk Analysis

    All of the risks would be collected and grouped. He presented a taxonomy for grouping the risks. It contains 64 different attributes, distributed among 13 elements contained within three major classes (Product Engineering, Development Environment, and Program Constraints). These risks would then be prioritized with a goal of keeping your eye on all of them, but only trying to manage 10-15 of them.

  • Planning

    For those risks that you decide to manage, mitigation strategies should be formulated that identify how they could be avoided and what will be done should they unfold.

  • Tracking/controlling

    The mitigation strategies should be tracked just like other project activities are tracked. He did not spend much time on tracking and controlling as the SEI work in this area has just begun.
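The identify-analyze-prioritize-plan loop described above can be sketched as a small program. The exposure scoring (likelihood times severity) and the cutoff of 10-15 actively managed risks follow the talk; the field names, the three taxonomy class labels aside, and the example risks are illustrative assumptions, not SEI artifacts:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    taxonomy_class: str   # one of the three major classes from the talk
    probability: float    # estimated likelihood, 0.0-1.0
    impact: int           # severity if the risk unfolds, 1 (minor) to 10 (severe)
    mitigation: str = ""  # filled in during the Planning step

def exposure(risk):
    """A common risk-exposure measure: likelihood times severity."""
    return risk.probability * risk.impact

def select_managed(risks, limit=15):
    """Keep an eye on everything, but actively manage only the top `limit`."""
    return sorted(risks, key=exposure, reverse=True)[:limit]

risks = [
    Risk("Requirements still changing", "Product Engineering", 0.8, 9),
    Risk("Key tool vendor may slip", "Development Environment", 0.3, 6),
    Risk("Budget cut mid-project", "Program Constraints", 0.2, 10),
]
for r in select_managed(risks, limit=2):
    print(f"{r.description}: exposure {exposure(r):.1f}")
```

The point of the cutoff is the one Clyde made: nothing is dropped from the list, but only the highest-exposure items get mitigation plans and tracking.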

He described a Risk Repository that the SEI would maintain that organizations could populate with their own risk information and could access for help in their own risk management activities. This data would be anonymous and would be categorized by the type of business.

To accompany this work, the SEI will be publishing a Software Acquisition Maturity Model (SAMM) that will be complementary to the CMM but focused more on the customer. A central piece of the SAMM has to do with managing risks. It will be in the same format as the CMM (levels, key process areas, key practices, ...) and will be integrated with an Improvement Method Description and a Pilot Usage Report that will include data from all the pilots.

Some Interesting Things That Came Up:

  • Clyde repeatedly stated that there is nothing wrong with taking risks; what's important is that you do so knowingly.
  • Risks can come from a number of places, but usually manifest as problems with cost or schedule.
  • Software risks (those risks associated with the software you are using in your process) are among the least measured or managed risks.
  • The data thus far shows a high percentage of risks coming from one of the following: Requirements, the Management Process, Resources, and the Customer. After that, there is a fairly significant drop off. The slides contain a more detailed breakdown of risk areas. He did caution that they don't really have a lot of data yet (about 45 pilots and about 850 risks).
  • Most organizations and managers believe that they do a good job of managing risk. What they don't realize is that they may have an ad-hoc, people-dependent process and they might not recognize that many of the fires they fight could have been avoided with a formal risk management program.
  • Someone from the audience suggested that if the DoD formally adopted the SAMM, then contractors would make up risks so that they could show the government how well these risks were being managed.


March 1995 Meeting Report
by Ed Maher, courtesy of Digital Equipment Corporation


An Introduction To The People CMM (P-CMM)


Bill Hefley (SEI)


The intent of this model is to provide assistance in:

  • characterizing the maturity of an organization's people management skills,
  • setting priorities for improving these skills,
  • integrating talent growth with process improvement,
  • establishing a culture of software engineering excellence that attracts and retains the best talent


The P-CMM is being developed in acknowledgment of the widely held belief that the SEI work in general, and the CMM in particular, has been flawed because it did not pay enough attention to people.

There seems to be no disagreement on the significance that personnel capabilities can have for the success of an engineering organization:

  • The SEI believes (based on their data) that variation in individual performance can result in as much as a 28 to 1 variation in the overall performance of the organization.
  • Barry Boehm's widely referenced 1981 book "Software Engineering Economics" places personal capabilities as the greatest determinant of productivity -- by a wide margin. (It's graphed on the cover.)

In addition, the SEI has noticed in their dealings with many different software engineering organizations that:

  • Process improvement programs generally focus on process, tools, or technology and not on people.
  • Many organizations know that they should address people/cultural issues as part of their improvement activities, but they do not know how to go about it.
  • High maturity organizations have seen that their success required changes in the way they manage people and that these changes were not reflected in the CMM.

The meeting had an interesting digression after he put up a supporting slide representing "What Vice Presidents think". The gist of the slide was that VPs recognize people as being the most important aspect of their organization (not process, not tools, not technology, etc.). This led to a discussion around a belief by many that senior management often says things like that as a way to avoid process improvement and as a way to pander to their employees.

Another belief that came out in discussion is that many senior managers think that they have great people (and do not need process) because they see effective fire-fighters and do not recognize the equal or greater value of people who prevent fires.

Getting back to the topic at hand -- Once this need was recognized, the obvious solution was for them to apply what they knew about a maturity model (from their experience with the CMM) to the people factors domain.

The basic framework of the P-CMM is similar to the CMM; there are Key Process Areas (KPAs) and Maturity Levels. The KPAs are organized by maturity level and the maturity level indicates staff capability.

The people management functions that they explicitly address are:

  • recruiting,
  • selection,
  • performance management,
  • training,
  • compensation,
  • career development,
  • organizational design, and
  • team and culture development.

What follows is a brief description of the five maturity levels along with the names of the associated key process areas:

Level 1 is the "herded" level (as with the CMM, there are no Level 1 KPAs). A Level 1 organization treats people management as overhead. Managers are not generally trained in how to manage people, and there is little analysis done on the effectiveness of the people management activities. In a P-CMM Level 1 organization, there is a lot of time spent on personal agendas and there is usually a high turnover rate.

At Level 2, management feels responsible for the people and there are people management activities that focus on individual contribution. The level 2 P-CMM key process areas are:

  • Work Environment
  • Compensation and Reward
  • Training
  • Performance Management
  • Staffing

At Level 3, the organization is more formally developing its knowledge and skills. It has identified its primary competencies and the people are aligned with them. People management activities are planned. The Level 3 P-CMM key process areas are:

  • Participatory Culture
  • Career Development
  • Competency Based Practices
  • Competency Based Development
  • Knowledge and Skills Analysis
  • People Management Planning

At Level 4 the organization is measuring its knowledge, skills and performance. It is quantitatively managing organization growth in people management capabilities, and establishing competency-based teams. The Level 4 P-CMM key process areas are:

  • Organizational Performance Alignment
  • Organizational Competency Management
  • Team Based Practices
  • Team Building
  • Mentoring

At Level 5 there is continuous improvement of the methods which are used to develop personnel and organizational competence. The Level 5 P-CMM key process areas are:

  • People Management Innovation
  • Coaching
  • Personnel Competency Development
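The level/KPA structure listed above is essentially a mapping from maturity level to key process areas. As a sketch, here are the KPAs collected exactly as given in the talk (Level 1 has none, as with the CMM); the helper function and its cumulative-through-level interpretation are an illustrative assumption, not part of the P-CMM itself:

```python
# P-CMM key process areas by maturity level, as listed in the talk.
P_CMM_KPAS = {
    2: ["Work Environment", "Compensation and Reward", "Training",
        "Performance Management", "Staffing"],
    3: ["Participatory Culture", "Career Development", "Competency Based Practices",
        "Competency Based Development", "Knowledge and Skills Analysis",
        "People Management Planning"],
    4: ["Organizational Performance Alignment", "Organizational Competency Management",
        "Team Based Practices", "Team Building", "Mentoring"],
    5: ["People Management Innovation", "Coaching", "Personnel Competency Development"],
}

def kpas_through(level):
    """A level-N organization is expected to satisfy the KPAs of levels 2..N."""
    return [kpa for lvl in sorted(P_CMM_KPAS) if lvl <= level
            for kpa in P_CMM_KPAS[lvl]]

print(len(kpas_through(3)))  # 5 Level-2 KPAs + 6 Level-3 KPAs = 11
```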

Just as with the CMM, they are planning on having a process for assessing an organization against the P-CMM. This is in the preliminary stages. The assessment will serve two purposes: to identify people management strengths and weaknesses and align them for improvement, and to identify the readiness of a team for a specific project.

He said that assessments against the P-CMM would be similar to CMM-based Assessments, but that they may introduce some new concepts. The use of a widely distributed survey form was the only new concept mentioned.

Some questions that came up:

Q: Why is Training a Level 2 key process area in the P-CMM, when it is a Level 3 key process area in the CMM?

A: They address different aspects of training. At level 2 of the P-CMM, people in the organization should receive the training that they need -- how this is determined or managed is not as important. At Level 3 of the CMM, there should be organizational infrastructure for managing a training program and there should be training plans for all projects.

Q: Would it be likely that a CMM-based assessment found an organization to be at Level 3, but a P-CMM-based assessment found them to be a Level 1?

A: It is not precluded. The converse (a high P-CMM maturity org that was low CMM maturity) would be much less likely because it would be unusual for them to be disciplined enough to get their people management skills in order, without having applied that discipline to their other processes.

Q: Is this tied to software engineering?

A: Even though their domain is software, they found that what they came up with would apply to just about any domain (certainly any engineering domain). He joked about how some of the feedback from the first reviewers included concern about there not being enough references to software. (They do not see that as a problem.)

Q: Does the P-CMM apply to an org that has a distributed responsibility (across a team for example)?

A: Yes, there are practices which address this and none that preclude it.

Q: Does the P-CMM apply to all levels of people management (for example, the VP's management of his/her directors)?

A: Yes

Ed Maher works for Digital Equipment Corporation within the Open VMS Engineering organization.

Information update: V0.3 of the P-CMM is due at end of April; V1 this fall. Validation pilots are planned for 1996-97.

For more information,
contact Marlene MacDonald,
SEI, Carnegie Mellon U,
Pittsburgh PA

P-CMM can be ftped in a variety of formats from


CONGRATULATIONS to Al Lehotsky (Quality Software Management) and the Open Software Foundation for receiving Addison-Wesley's Excellence in Quality award.

Al's winning description of the application of TQM to achieve process improvements in the OSF/1 project is summarized below:

Excellence in Quality at the Open Software Foundation

OSF/1 is a large, complex software product, consisting of over 2 million lines of source code. In developing and maintaining OSF/1, we faced a series of problems, including these:

  • We needed rapid product release cycles -- major computer vendors such as IBM and Digital had critical dependencies on OSF/1 technologies.
  • We needed improved product quality and reliability.
  • We needed increased productivity from a smaller group of software engineers.
  • We were required to reduce our defect backlog by an order of magnitude (from over 1000 unfixed bugs down to ~100).
Our engineering team met these challenges by using TQM methods, including:
  • Cross-functional teams to examine the software development process and identify improvements (critical to the success of this was making the engineers responsible for defining the necessary process changes)
  • Easy-to-measure, relevant metrics that provided insight into our product and processes (both defect metrics and performance metrics were established)
  • Root-cause analysis to identify common problems.
Implementing the basic improvements led to measurable benefits, including:
  • Basic build-test-release cycle reduced from one month to one week
  • Defect backlog reduced by a factor of 10, while reducing the effort necessary to fix and document each bug-fix
  • MTBF increased from ~10 hours under stress to greater than 160 hours under heavy load
  • Two major product upgrades delivered with increasing quality and functionality while undergoing greater than 25% reduction in staffing for each release
  • Predictable schedules established -- product release dates scheduled at 12-15 month intervals were met plus or minus two weeks.
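The MTBF figures above use the standard definition: total operating time divided by the number of failures observed. A minimal sketch -- the hour and failure counts below are invented to reproduce the reported before/after magnitudes, not OSF's actual test data:

```python
def mtbf(total_hours, failures):
    """Mean time between failures: operating time divided by failure count."""
    if failures == 0:
        raise ValueError("no failures observed; MTBF is undefined (or censored)")
    return total_hours / failures

# Hypothetical stress-test tallies, chosen to match the reported figures:
before = mtbf(total_hours=480, failures=48)  # ~10 hours, like the starting point
after = mtbf(total_hours=480, failures=3)    # 160 hours, like the improved figure
print(f"before: {before:.0f} h, after: {after:.0f} h")
```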


from Rachel Silber

Software Process Sites on the Web

If you've got access to the World-Wide Web, you've probably enjoyed the wealth of software process information that's available if you go looking. Here are some shortcuts to places with useful information or collections of links.

WWW Virtual Library - Software Engineering

This site is one of the primary collections of links on software engineering. A good jumping-off point to whatever you are interested in.

CreaCon Presents: "The Quality Wave"

This site is a link to information about quality in general, including information on TQM. There are links to information about system dynamics and risk analysis, and the page seems to be growing. This page invites you to "surf the quality wave", providing a link to a Dilbert cartoon of the day.

Comp.Software.Testing Archive: Main Index

The main usenet forum on software quality assurance issues. This is a complete archive, organized by date, and also searchable by keyword.

Drilling down to a couple of specific software process topics, design patterns and collaborative work have lots of information available on the Web.

The Patterns Home Page
URL: (Patterns tab)

Applying design patterns to produce better, more reusable software, and conveying that knowledge to other software designers, is the hot topic in this set of links. Research papers available on the net, upcoming conferences, and a couple of mailing lists are all referenced here.

Collaborative Networked Communication: MUDs as Systems Tools

There's lots of work out on the web about collaborative development and even some about using the WWW to support collaborative development, but much of it is speculative. This paper reports on something that you can do immediately -- "text-based virtual reality" to improve internal team communication. It seems to have worked for this group! I found this paper to be readable, practical, and thought-provoking.

I'd love to write a longer column on the Web for a future SPIN issue. Send me your favorite links, and I'll write up all the links that fit.


Using the CMM as a Career Tool

Where do you want to work next? Figuring out what potential employers are really offering can be an uncertain proposition. Paul Morris of Paul Morris Personnel suggests that the professional software developer may well want to take advantage of the CMM scale to characterize the kind of working environment they'd like to have. "Look for a level 3, mature object-based software environment," he said in a telephone interview. "Beware of companies who claim C++ and object-oriented software but haven't yet shown a commitment to software process."

He went on to explain that companies in the early stages of conversion to C++ and object oriented systems may get discouraged when O-O technology alone does not make a dramatic or quick difference in productivity. They may underestimate the time needed to retrain existing staff and rework their existing code base.

"Identify the software culture you want to work in," Morris said. "Most engineers would like to work where people do things right." He believes that attention to the level of software capability you've worked at can be professionally important. "You only have one career. Be aware of what you are getting for your time."

The software culture question is applicable in both the defense and commercial software industries. While most commercial companies do not characterize themselves on the CMM scale, the criteria of having a track record with object-oriented technology and of undergoing a natural evolution of the corporate software process can be just as valuable. Asking interview questions with these issues in mind can reveal important information about the organization.


Who Does It Right?

One of the software process improvement tools available to us is studying other companies to find role models and inspiration. Finding the best practices, benchmarking them, and starting a disciplined program to influence an organization toward them can be a long-term project for a large organization.

Tom Peters, in "The Pursuit of WOW!", comes up with a more intuitive approach:

Suppose you commit to achieving new heights in quality or service here and now. In your own mind, you're an instant Nordstrom (retail) or Motorola (manufacturing). But your next task ... is to go through your boring in-basket.

What an opportunity! Respond to the first item that turns up as you imagine a Nordstrom or Motorola exec would.

In pursuit of process improvement, you can't get more direct than that.

My personal inspiration is PureCoverage from Pure Software. Other coverage tools I'd worked with had been clumsy, and it was hard to get the information I needed from them. In some cases, they'd required complete recompilation, a long process with a large system. When I installed PureCoverage, I got useful results within a couple of hours.

The user interface put the information I needed to know right at my fingertips. And most surprising of all, this was happening with a 1.0 release of the software.

So, who is your role model in the software field? And what experience did you have with their product or service to inspire you? Send in your replies; they will be the basis for a future In the SPIN article.


conducted by Judi Brodman

Vernal Equinox Greetings!

As I took out the computer to begin this column, I thought of how many advances the hardware community has made in the last 10 years. My notebook is smaller than my handbag and more powerful than my office computer! I can take it with me when I travel and answer your letters, via the built-in modem, from anywhere in the world. Because of its interfaces, I can hook up my color printer to it, add a CD ROM, change the screen.... This is all so terrific and yet their advancements only serve to highlight how slowly the software field is advancing in comparison to the hardware field. I found proof of our "inch" step advances in my office last week while I was doing some Spring cleaning.

First, Spring cleaning in my office means "weeding" through the volumes of information that I accumulate on a yearly basis ... technical reports, journals, magazines, etc. This spring, I attacked my beloved IEEE journals; for me, their life span is about 10 years! And even then, I can't just throw them out; I have to check the table of contents and sections like "Quality Time", and tear out and file articles that I feel are still timely. What I found shocking was that we are still "talking" about the same software issues that authors were writing about in 1985... measurement (metrics) and software quality! Why do we continue to struggle in these areas? Why can't we find solutions to these problems? We continue to collect metrics that our customers IMPOSE on us.

Twenty years ago, these same metrics didn't help us manage ongoing software projects; they didn't help us estimate size, schedule, and costs for new projects; and they still don't help us to manage or estimate now! Why can't we decide for ourselves what data benefits each of our organizations ... and then, having done that, look at the data collectively as a group and define metrics that really work for management and estimation of projects? Jum Nunnally, in his fascinating book "Psychometric Theory" (McGraw-Hill, 1967), stated that before a science can move forward, a measurement method is needed ("In fact it seems that major advances in... all sciences, are preceded by breakthroughs in measurement methods"). How long are we going to "wait" for these breakthroughs to happen in software??

Because of my impatience, I propose that we dedicate this column for the next few months to open discussion of the underlying issues that are hampering our "breakthrough". If you are making advancements, let us hear from you; if you have found a particular metric(s) or quality "booster" that works for your organization, share it with us; if you are struggling with these areas, write and describe the problem. Let us use this column as a way to BEGIN THE BREAKTHROUGH!!

Now to our letters...

TO KEVIN IN TUCSON: [see last month's In the SPIN]

I sent your letter along with my answer to MARK PAULK (SEI) and asked him to add any thoughts he might have on your problem, and here is what he had to say...

"These are good comments, but I think I would add that the driver for change is usually dissatisfaction with the status quo. If things are working well now, there's no incentive for the discomfort of change -- and we always wonder whether it's just change or really improvement.

"Some leaders envision challenges in the future and are proactive in attacking the future. In that environment, there's a perpetual `dissatisfaction' with the status quo and a feeling that `we're leading edge, the pioneers.'

"The far more common circumstance is to have noticeable problems (sometimes the problem is that your customer isn't happy with functionality, quality, cost, or schedule and is insisting on an SCE to "qualify" their suppliers!). This discomfort - hopefully in terms of measurable business objectives - is translated into management commitment.

"Frequently the staff in the trenches doing the work, who have been struggling to "work around the system" to get useful work done, are well aware of the problems! Sometimes their view of the sources of the problems is somewhat different from management's :-). Getting their buy-in may be a matter of overcoming entrenched cynicism and changing processes that have evolved to `work' a dysfunctional system. This is the challenge of organizational change!

"You need management sponsorship to build ORGANIZATIONAL capability. You need staff and middle-management buy-in to actually do the improvement. You don't need 100% participation to get the journey started, but to drive to a world-class software organization, you'll eventually have to get pretty close to 100% deployment - and that means everyone affected needs to see the net benefits."

Kevin, I hope both of these perspectives -- mine last month and Mark's this month -- have helped you approach your problems in a new way. Let us know what works and what doesn't....

I also received a letter from someone struggling with a way to introduce quality into an organization that never had to worry about quality before. Because I wanted to get comments that were pertinent to his situation, I sent his letter off to HERB KRASNER, one of my quality "experts" for R & D.

First, the letter:

Dear SPIN doctor:

Until about two years back, we used to think of ourselves here at the Research Institute as people who do research and develop newer and better algorithms. We are now interested in developing, selling and even supporting products. Because of this history, people have no experience in really supporting products. For example, they have never seen a thick pile of bug reports. How do I sell Quality to people who have never faced Quality problems - I need some tips.

Searching for Quality in R to D

Now, Herb's response:

"Dear Mr. Searching for Quality in R to D:

"The blunt answer from my experiences with both research and product development organizations, is that you cannot "sell" quality in this situation because the current culture isn't even aware that quality is an issue.

"However, there are several things that you can do to speed up the process of creating an awareness of software quality issues.

  1. The `Q' word means little to quality-impaired researchers, so start giving basic software quality 101 talks at lunchtime to introduce concepts, establish terminology and raise issues.
  2. Find out more about current quality attitudes and the level of consensus about quality issues without using the `Q' word. I would recommend you start with a face-to-face interview of a few selected staff opinion leaders that focuses on what they think the major challenges will be in going from a research to product delivery organization.
  3. After you have determined what the issues are in the current lexicon/culture, then you can begin to seek the level of consensus through a more broadly distributed survey form with more focused questions. Once again, defining quality in terms that the organization will understand is the challenge. The real trick is to start the discussions about software quality without using the term `Quality' because that concept will be rejected by the current paradigm.

"Try the following - go into the office of one of your reasonably friendly researchers and say: `I am concerned about our ability to develop, sell and support software products -- and I would like to learn more about what you think our major challenges are for delivering good products - but since I don't even know what to ask you, perhaps you can help me out by telling me what questions you would ask if you were me'. Once you have the questions, ask them. I'll bet you will learn a lot more this way.

"Yours truly,

"The SPIN Doctor (Software Quality Improvement Network)"

This column is for you; let's make a difference!! Send your comments and questions to "Dear SPIN Doctor" at or directly to the editor.

Question of the month: If you could change one thing about your organization, what would it be??!! Give me a quick answer and I'll post the results next month!

The SPIN Doctor



The only rule I have in management is to ensure that I have good people -- real good people -- and that I grow good people, and that I provide an environment where good people can produce.

--A corporate Vice President quoted by Curtis, Krasner & Iscoe in "Communications of the ACM", Nov. 1988, and requoted by Bill Hefley at our March meeting (contributed by Johanna Rothman)

It is recommended that organizations monitor the performance evaluation process to include more objective measures of job performance and ensure that the performance appraisal process is administered fairly.

--Magid Igbaria and Wayne Wormley, in a study report finding differences in the way "black" and "white" IS staff within one large company were performance rated, "Communications of the ACM", March 1995

...[S]taff members' assessment of their project's coordination strongly correlates with customers' satisfaction with the software development company and the software it produces.... [S]enior management's assessment of the quality of projects is unrelated to other measures of project success....

--Robert Kraut and Lynne Streeter, in a study report finding that both formal and informal communications are needed for successful projects and recommending avenues of exploration to improve current practices, "Communications of the ACM", March 1995

If the code and the comments disagree, then both are probably wrong.

--attributed to Norm Schryer

(If anyone knows who Norm Schryer is, it would be nice to know.)



The Boston SPIN is a forum for the free and open exchange of software process improvement experiences and ideas. Meetings are held on third Tuesdays, September to June.

We thank our sponsors: GTE and BULL INFORMATION SYSTEMS.

For information about SPINs in general, including ***HOW TO START A SPIN***, contact:

DAWNA BAIRD of SEI, (412) 268-5539,

Boston SPIN welcomes volunteers and new sponsors. For more information about our programs and events contact:

CHARLIE RYAN, Technical Assessments, Inc.,
ESC/ENS (Bldg 1704), 5 Eglin St, Hanscom AFB MA 01731-2116;
(617) 377-8324; FAX (617) 377-8325;

IN THE SPIN is published monthly September to June. Letters, notices (including job postings), and contributed articles are welcomed. Articles do not necessarily represent the opinions of anyone besides their authors.

IN THE SPIN is available only by email.

TO SUBSCRIBE send email address to Ron Price

SEND letters-to-editor, notices (including job postings but not advertisements), calendar entries, quips, quotes, anecdotes, articles, offers, and general correspondence to Sallie Hoffman, (508) 369-2365
