Evaluation suffers from the stigma of being an afterthought, an add-on, or, worse yet, an extra expense. The irony is that evaluation happens all the time. What is not happening enough, however, is measurement—the purposeful gathering of information and comparison of what you learn to some standard or expectation. Measurement should be ingrained in every step of the HPT process, beginning with the decision to invest in a solution, continuing through development, and ending sometime after implementation. Figure 1 below shows when measurement should occur and what the output, or measurement deliverable, is at each stage of the process. The model is a communication tool you can use to facilitate discussions with customers, co-workers, solution providers, and other stakeholders about the evidence used to justify an investment in a solution, ensure its success, and eventually prove its worth. Figure 2 goes further to show the elements of each measurement product.

Figure 1. Three-Point Measurement Model.

  • Phase I (Measure Need). Output: a business case or rationale for action.
  • Phase II (Create & Measure). Output: confirmed workability and accuracy, plus recommended corrective action based on formative and leading or predictive indicators.
  • Phase III (Measure Results). Output: a report of results and outcomes.

Figure 2. Three-Point Measurement Model and Product Elements.

  • Phase I (Measure Need): Business Case. Set the baseline; define KPIs* or accomplishments; confirm feasibility.
  • Phase II (Create & Measure): Formative Evaluation. Confirm workability, utility, reaction, and learning; check accuracy, completeness, and technology compatibility; identify leading indicators.
  • Phase III (Measure Results): Summative Evaluation. Measure transfer and impact.

(*KPIs are key performance indicators: the measures the organization expects to see change or improve as the result of one or more interventions.)

The model introduces the business case—an argument for dedicating resources to achieve some goal—which, in turn, opens the door to discussions about the behaviors or results an intervention is expected to add, eliminate, or change in some way. Most goals require people (employees or customers) to change their behaviors. Some even require a solution to be compatible with the current technology, culture, or laws. You can discuss the feasibility of a solution producing the intended benefits by asking whether the organization has the funds, infrastructure, and commitment to implement and support it long term. Every solution requires sustained attention and funding. Most require acceptance by workers, customers, suppliers, distributors, or regulatory officials. Some require additional or advanced technology, equipment, and specialized expertise to be effective. What you are doing is gathering the evidence that supports taking some type of action and that addresses the feasibility of that action.

Even before a solution is decided on, you can use the model to explain the importance of measuring during the design, development, or acquisition stages to ensure the solution is workable and compatible with other systems. Once a solution is under development, the model can help you discuss how the pieces and parts will be tested to ensure they work in your environment. This is also the time, if it was not done during the business case, to identify leading indicators, interim behaviors, and results that indicate a solution is being adopted and ingrained in the organization at the expected rate. Unfortunately, I see too many clients relying on the hope-and-pray model of implementation. They roll out a solution and then pray it works. They fail to shepherd a solution through the initial stages of the change process. The pilot or launch is only the first step of institutionalizing and sustaining new behaviors.

Summative evaluation measures the speed at which a solution is being adopted in the workplace and how soon results will appear. Transfer is measured by tracking how quickly, by how many people, and to what degree a change in results is occurring. However, to claim a solution is successful requires you to have established a baseline against which you can compare the new results. The model encourages discussion about who will measure the effectiveness of a solution, what will be used as the baseline, how the data will be captured, and how much to capture and for how long.
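The baseline comparison can be made concrete with a minimal sketch. The KPI name and the numbers below are hypothetical illustrations, not figures from the article:

```python
def percent_change(baseline, current):
    """Percent change of a KPI relative to its Phase I baseline."""
    return (current - baseline) / baseline * 100

# Baseline set in Phase I (business case); results measured in Phase III.
baseline_error_rate = 8.0  # hypothetical: errors per 1,000 orders before the solution
current_error_rate = 5.0   # hypothetical: errors per 1,000 orders after implementation

change = percent_change(baseline_error_rate, current_error_rate)
print(f"Error rate changed by {change:.1f}% versus baseline")  # -37.5%
```

Without the Phase I baseline, the Phase III number would be uninterpretable; the comparison is what turns a measurement into a claim of success.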

The three-point measurement model can help bring to the surface information that is assumed versus known, yet is expected to be used to:

  • Support the argument for change or the need for a specific solution
  • Assess a solution’s workability and feasibility
  • Judge the degree a solution was effective

    Judith Hale, PhD, CPT, is the author of Performance-Based Management, The Performance Consultant’s Fieldbook, Performance-Based Certification, and Performance-Based Evaluation. She has been a consultant to management in the public and private sectors for more than 25 years. She specializes in needs assessments, certification programs, evaluation protocols, and the implementation of major interventions. She is a past president of ISPI. Judith was awarded a BA from Ohio State University, an MA from Miami University, and a PhD from Purdue University. She may be reached at Haleassoci@aol.com.


Would you like to advertise in this space? Contact marketing@ispi.org




by Carol Haig, CPT and Roger Addison, CPT, EdD

This month we turn the spotlight on Dale Brethower, PhD, past ISPI president, 2004 Honorary Life Member, and Professor Emeritus of Psychology, Western Michigan University. Dale, who may be reached at dalebrethower@earthlink.net, provides performance-based training and management consulting through his work at Performance-Based Systems.

Top Three Predictions
Dale offers these “fearless” predictions for the near future. First, he tells us that history will continue to repeat itself within the HPT domain in the form of fads in products and services that will wax and wane in a predictable pattern.

Second, and related to the first prediction, is that the next few fads will be easy to recognize and are already known to us. We can look at recent history for examples such as quality circles, T-groups, and self-directed work teams.

Third, timeliness will be increasingly important in the delivery of instruction and performance improvement solutions. New fads play into this very real need as organizations seek to become ever more responsive to their customers, suppliers, investors, the global community, and the resulting requirements for upgrading employees’ current skills and knowledge.

Why These Predictions
In our HPT history, we have many examples of fads that came on the scene and then followed a predictable five-phase pattern that we can easily recognize:

  1. A few people do outstanding work.
  2. Some others understand how this work was accomplished and replicate the process successfully; others imitate surface features without applying the key principles and produce a poor imitation.
  3. Imitators brand their work as a new “best thing,” creating excited interest in the marketplace that then fades.
  4. Consumers say they tried the new “best thing,” say it did not work, and go looking for the next new “best thing.”
  5. When a similar “best thing” is discovered, it is renamed and re-branded so that it will not bring up the bad memories associated with its predecessor; what information is left from the first “best thing” will not include any history of supporting principles or lessons learned.

Dale is confident in this pattern because it is the nature of people in the workplace to continually search for new solutions to problems and opportunities. We are eternally hopeful that the newest “best thing” will enable us to meet our biggest challenges. In the excitement of discovery and our rush to replicate, we habitually ignore the lessons of the pattern described above and frequently fall into imitating the surface features of the “best thing,” ensuring that it will fail our clients and us. And so it goes.

Teaching machines are one example. B.F. Skinner and his colleagues built a teaching machine and used it to deliver instruction at Harvard University. A few others did similar things in New Mexico, Pennsylvania, and other locations. People saw the programs and started imitating them; others saw the machines and started imitating them.

Some of the instructional programs imitated the process of developing programmed learning (focusing on learning outcomes, developmental testing, and data-based revision), and other programs imitated surface features (small steps, sentences with blanks in them). The programs imitating surface features did not work very well and helped programmed instruction to fall from favor.

A closer look at the history of fads will quickly show significant evolutionary relationships. This is why Dale can predict specifically what we can expect. Take the teaching machines just mentioned. That technology begat computer-assisted instruction (CAI), computer-based training (CBT), and today’s e-learning. These fads all promised solutions to a real need: efficient, economical, flexible, on-demand, widely disseminated instructional delivery. Each iteration moved us forward, and each has fallen, or will likely fall, victim to a marketplace polluted with poor imitations that will propel us forward to the next related fad.

Potential future fads that are already with us may include instructional objects, Internet learning, incentive and recognition systems, and 360° feedback. When a performance innovation is not based on fundamental principles of human learning, motivation, and performance, or connected to business needs, it will become a fad and pass into history.

Timely delivery of instruction has been of great interest to both HPT practitioners and our clients for some time. As the speed of commerce continues to increase and the Internet extends its reach, workers will require accessible, current skills and knowledge directly related to their tasks and responsibilities. Many organizations have tackled this problem with mixed results, largely because they neglected to consider the whole organizational system when they developed and implemented their solutions. As we in HPT know, any intervention made in one organizational sector will affect others and should be addressed in the initial planning.

How Organizations Will Be Different
Actually, organizations will not be different, Dale tells us, unless we, as performance improvement specialists, help our clients recognize the fad pattern and consciously break it when the next “best thing” presents itself. How can we break the pattern? By building supportive management systems to work in concert with changes to organizational culture, work processes, work groups, or individual workers. By ensuring that all initiatives link to and validate business issues. By sharing our principles and practices across organizational functions so that everyone contributes to results. By ensuring that any replication of the “best thing” is done with care and that the key principles, critical business links, and cultural norms are molded together for success.

Implications for Performance-Based Systems
For Dale’s work at Performance-Based Systems, the task is clear: help clients break the fad pattern. An effective approach is to ask just one question: How will this intervention or solution help the customer, the organization, and the employees? By guiding clients to link their performance improvement activities to their critical business or work process issues, we can help them succeed in breaking the fad pattern and achieve meaningful success.

If you have any predictions about the future of HPT that you feel would be of interest to the PerformanceXpress readership, please contact Carol Haig, CPT, at carolhaig@earthlink.net or Roger Addison, CPT, at roger@ispi.org.


Each year, ASTD surveys and benchmarks a large number of organizations to develop a State of the Industry Report for the training industry. Longtime ISPI member Brenda Sugrue, CPT, now heads up this effort for ASTD.

The 2003 report had some interesting revelations regarding changes in the number and salaries of instructional designers and performance consultants for their Benchmarking Service (BMS) organizations from 1999 to 2002. The BMS statistics represent a broad range of organizations from around the world.



Year   Responding      Instructional Designers      Performance Consultants
       Orgs (N)        Avg. N    Mean Salary        Avg. N    Mean Salary
1999   405             21        $44,765            6         $51,525
2000   394             11        $52,349            5         $64,937
2001   270             12        $42,601            3         $42,557
2002   276             12        $56,900            10        $80,671

Source: ASTD 2003 State of the Industry Report, page 20.

Their Benchmarking Forum (BMF), which has a smaller sample size, showed similar trends. The BMF organizations consist of Fortune 500 companies and other large public sector organizations.



Year   Responding      Instructional Designers      Performance Consultants
       Orgs (N)        Avg. N    Mean Salary        Avg. N    Mean Salary
1999   42              67        $58,484            12        $64,661
2000   45              66        $56,360            14        $64,954
2001   34              76        $61,625            15        $72,343
2002   21              20        $69,499            15        $86,287

Source: ASTD 2003 State of the Industry Report, page 20.

While there has been a decline in the number of instructional designers from 1999 to 2002, there has been an increase in the number of performance consultants. Within the BMS organizations, the average number of instructional designers fell from 21 in 1999 to 12 in 2002. The numbers are even more dramatic for the BMF organizations where the average number of instructional designers fell from 67 in 1999 to 20 in 2002.

From 1999 to 2002, the number of performance consultants grew from 6 to 10 for BMS organizations and from 12 to 15 for BMF organizations. Salaries for performance consultants have greatly outpaced those of instructional designers in both the BMS and BMF organizations. It will be interesting to see if the trends of the past four years continue in the future.
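The percent changes behind these trends are easy to verify. The sketch below uses the 1999 and 2002 figures reported in the ASTD tables (assuming, as the surrounding text indicates, that the four rows in each table run from 1999 through 2002); the arithmetic is ours, not ASTD’s:

```python
def pct_change(start, end):
    """Percent change from start to end, rounded to one decimal place."""
    return round((end - start) / start * 100, 1)

# (1999 value, 2002 value) pairs from the ASTD 2003 State of the Industry Report.
trends = {
    "BMS avg. instructional designers": (21, 12),
    "BMS avg. performance consultants": (6, 10),
    "BMF avg. instructional designers": (67, 20),
    "BMF avg. performance consultants": (12, 15),
    "BMS ID mean salary ($)": (44_765, 56_900),
    "BMS PC mean salary ($)": (51_525, 80_671),
}

for label, (start, end) in trends.items():
    print(f"{label}: {start} -> {end} ({pct_change(start, end):+.1f}%)")
```

The salary comparison makes the “greatly outpaced” claim explicit: BMS instructional designer salaries rose about 27% over the period, while performance consultant salaries rose about 57%.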

Where have all the instructional designers gone?
Some factors that have contributed to the decrease in numbers include downsizing of organizations, decreased expenditures on training, outsourcing, off-shoring (exporting of jobs to other countries), and migration of instructional designers to performance consulting.

Perhaps it’s time for you to consider a move to performance consulting. For more information on becoming a Certified Performance Technologist (CPT), visit www.certifiedpt.org. Join the others who have been certified for their performance improvement efforts.







A couple of months ago, I started an online RAME activity asking for your advice for the ISPI President-elect. The activity was conducted in three rounds:

  • Round 1. Contribute a piece of advice for exemplary behavior as a President-elect.
  • Round 2. Review a set of advice from other players and select the top two. (Other players will review your advice and compare it with other pieces of advice.)
  • Round 3. Review the best pieces of advice selected by different groups and select the top two “best-of-the-best” pieces of advice.

RAME Results
Here’s the winning piece of advice (which received 120 points) from Nancy Jokovich:

Try to involve as many members—from Board to chapters to individuals—in the activities and processes of ISPI. Involvement will help strengthen the association and grow the next generation of leaders. Good luck!!

Thanks, Nancy. I will definitely implement your advice. Please send me a nasty note if I fall short of your expectations.

Closely following Nancy’s advice is this piece of advice from Elin Soderholm (which received 109 points):

Increase ISPI’s visibility and influence with corporate decision makers. We need to promote the performance improvement approach and our own expertise.

This is definitely something that we plan to do. After all, increasing ISPI’s visibility and influence is one of the main missions of our organization. I shall work with other Board members, chapters, and individuals to figure out how we can implement this suggestion effectively and efficiently. Thanks, Elin.

Here’s the third piece of advice from Elsa (which received 100 points):

Remain curious. Request help. Ask questions. Listen. Observe. Communicate much and often. Set realistic goals. Develop steps to get us from where we are to where you want us to be. Continuously improve. Measure progress. Give thanks and credit. Be inclusive. Recognize efforts. Celebrate successes.

Thanks, Elsa, for your generosity. I ask for one piece of advice and you give me 14. I’ll type them up on an index card and carry the card in my wallet. I am curious, though: What made you come up with this list? I am requesting your help: Can you send me an email note with suggestions for effectively involving our membership in implementing these wonderful ideas? And what role would you like to play in this venture? (OK, I have made a start with the first two items.)

Here’s the fourth piece of advice from Brenda Thorpe:

Actively engaging the new students of the field in the operation of the organization and creating a mentoring and internship system for students to learn firsthand from the top individuals in HPT.

Although this item did not receive as many points as the earlier ones, I have a special affinity for it. After all, I live in a college town, many of my best friends are going to school at Chico, and I am proud to be on the faculty of the Boise State program. Thanks, Brenda, for your suggestion. Be assured that I (and my colleagues) will vigorously pursue it. A curious note: Do we see a gender-based trend among the winning suggestions?

My special thanks to the 45 participants in this RAME who contributed valuable suggestions and astute choices. For a complete list of all suggestions, click here.

Community Discussions
Last month, I invited ISPI members to participate in an OQ (Open Question) discussion about our seven professional communities. We received 36 different comments that seem to fall into these categories:

  • Questions about the benefits of the community structure
  • Re-examination of the logic behind the current categorization (and suggesting alternatives)
  • Sources of additional background information
  • Questions about what we plan to do with the community structure
  • Suggestions about what could be done with the community structure

This easy-to-use OQ structure is still open (and will remain open until my 93rd birthday). Please take a few minutes to add your thoughts about ISPI’s communities and to review other people’s thoughts.

This Month’s RAME
Recently, Bill Corbin from the Front-Range Chapter (Denver, CO) raised some issues about the relationship between national ISPI and chapters. Thanks for your comments, Bill. Guy Wallace responded to him offline with additional suggestions on how to continue the dialogue. Thanks, Guy, for your prompt response.

I would like to continue this dialogue about national-chapter relationships. So, I have created an online RAME activity for brainstorming effective ideas. Please join me in this activity by spending less than a minute to register by clicking here and completing a three-item online form. We will use a three-round process similar to the one that we used for seeking advice for the President-elect. Let’s keep the conversations going. Please join me in this venture.


On my return eight-hour flight from the last Board of Directors meeting, I savored a few precious pressurized moments and read the May issue of the Harvard Business Review. It featured the article “Building Better Boards” by David A. Nadler. How apropos! My homework assignment was to write this column about serving on ISPI’s Board of Directors. Before I give you my scintillating sales pitch and convince you to submit your name to the nominations committee, let me highlight a few of Nadler’s insightful points. They are keenly relevant to our own Board.

“What Boards should be: seats of challenge and inquiry that add value without meddling and make CEOs more effective but not all powerful.”

True. As Executive Director, Rick Battaglia provides continuity, corporate memory, and business acumen. It’s the Board’s job to set direction and make strategic decisions. Rick is at the helm, but our Board determines the desired course. Collectively, we combine our perspectives and roles to navigate through the Society’s doldrums and storms.

“The high performance Board…is competent, coordinated, collegial, and focused on an unambiguous goal.”

The competence of the ISPI Directors is vetted through the nominations and election process. As volunteers, the Directors are a collegial bunch. However, the yearly revolving membership of the Board presents some interesting performance challenges. To achieve high performance, the individual ISPI Directors need to have depth and breadth in their HPT and ISPI experiences. In addition to their own areas of expertise, each needs to be a consummate team player. The calendar doesn’t allow the luxury of extended maneuvers through the typical stages of group development. We hit the ground running. From the start, each Director needs to demonstrate respect and operate from a corporate perspective. No personal agendas allowed.

“A team is only as good as its members, and high quality members are alarmingly scarce.”

We’re fortunate—ISPI is unique. Our membership is rich in talent, knowledge, experience, and dedication. Our challenge is not so much in finding high-quality members but in cultivating and grooming our future leaders. Opportunities abound. Just identify your interest and volunteer.

“Agendas dictate what the Board discusses and at what length. To control the agenda is to control the work of the Board.”

True. Over the last two-plus years, I’ve observed that packed agendas seem to be the norm for our Board meetings. There’s always more to do than time permits. Serving on the Board means accepting schedules; meeting deadlines; and completing pre-work, post-work, and special assignments. Preparation is key for each meeting.

“Boards should find ways to stay engaged with the company’s issues outside of regular meetings as well.”

We do! Directors coordinate with the designated Committee and Task Force Chairs on a regular basis, present at chapter events, meet with Advocates and partners, participate in ISPI Institutes and related professional conferences, and, most important, listen to members. We solicit member feedback through a variety of forums, including surveys and web-based discussions (just check out Thiagi’s innovative games in this newsletter each month).

A Director position does come with some potentially intoxicating risks. Members and outside professionals recognize the power and influence of our Board position. It is essential that Board members maintain balance with their charge. We share the views of constituents honestly and debate our own opinions passionately. Yet, when Board decisions are reached, the Directors commit to the corporate position for the Society.

“Boards face a huge information challenge.”

Definitely! In addition to the frequent exchange of emails, we have web discussions, teleconferences, and myriad documents for late-night reading. Communication is critical and challenging. During one’s tenure on the Board, your mailbox and your brain will not be empty.

“Boards cannot easily change their cultures. The closer Directors get to an engaged culture, the closer they are to being the best Board possible.”

Cultures are systems of informal yet powerful norms based on shared values and behaviors. For the ISPI Board to be its best, the Directors must address their collective norms, beliefs, and values. We do so in our very first official Board meeting. We created a “BOD placemat” as a visible reminder of our collective agreements.

“Good Boards become great ones…when Boards define their optimal roles and tasks and marshal the people, agendas, information, and culture to support them.”

Serving on the ISPI Board of Directors is a privilege, a responsibility, and a commitment. We take our roles and responsibilities seriously. But realize there is life beyond the Board. We have jobs, families, hobbies, and personal commitments. These are respected, valued, and shared. The Board experience is life changing: the learning, the friendships, the professional support, the travel, and the powerful intrinsic rewards of service and contributing to our Society. We each gain much more than we give.

And for my scintillating sales pitch—that was it. For more information on nominations, please click here, or read the article found near the end of this issue.



The quality profession,
like many others, is subject to fashionable concepts. Many of them are very narrowly focused, and often pass from the scene after a few short years as the “next big thing.” Others, mainly due to their more comprehensive nature, have more staying power.

Three “quality” models are widely adopted at the moment:

  • ISO 9000
  • Six Sigma
  • The Baldrige National Quality Program Criteria for Performance Excellence

Let’s take a look at each of these models as a possible basis for performance improvement.

ISO 9000
Originally released in 1987, the ISO 9000 standard is now in its third release (ISO 9001:2000). It was first promoted by the European Community as a means of defining and enforcing minimum quality-system standards for companies around the globe that would be supplying products to EC countries. It has since been embraced worldwide, largely because it establishes a standardized basis for evaluating quality systems.

The ISO 9000 standard defines the elements that are expected to be present in an acceptable quality system. Detailed requirements of the standard are derived from eight management principles:

  1. Customer focus
  2. Leadership
  3. Involvement of people
  4. Process approach
  5. System approach to management
  6. Continual improvement
  7. Factual approach to decision making
  8. Mutually beneficial supplier relationships

Particular emphasis is placed on documentation—procedures and the data that shows these procedures are being followed. There is little judgment placed on whether the procedures are good or effective, as long as they are well controlled, and it can be proven that they are being followed. Very little attention is focused on actual business results. ISO 9000 registration simply indicates that a company complies with its own quality system, however the company has defined it.

The standard is developed by an International Organization for Standardization technical committee, along with other international organizations, both governmental and non-governmental. Draft standards developed by the committee are circulated to the member bodies for voting and are implemented if at least 75% of the member bodies casting a vote approve of the draft.

Six Sigma
While Six Sigma is certainly enjoying a huge wave of publicity at the moment, it is not truly a “quality systems model” in any strict sense. It is, rather, a rigorous model for problem solving and process improvement. Six Sigma places great emphasis on defining problems in customer terms, using strict numerical measurements for defining the problem, establishing the improvement goal, and measuring the results of an improvement project.

Initially popularized by Motorola in 1988, it has since been embraced by many large companies. The fundamental discipline is a five-step problem-solving model. Normally abbreviated DMAIC, the steps are:

  1. Define
  2. Measure
  3. Analyze
  4. Improve
  5. Control

The Six Sigma tools are the standard tools of the quality engineering profession that have been used and refined for many years. The key contribution of Six Sigma in the tools area has been to carefully define the appropriate places for using the various tools in the overall problem-solving process.

There is neither a standard defining the exact nature of Six Sigma nor any widely recognized body responsible for ensuring that it has a consistent meaning or content. Therefore, companies, consultants, authors, and professional organizations are all free to define it as they wish.
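Six Sigma’s insistence on strict numerical measurement can be illustrated with the standard conversion from a defect rate, expressed as defects per million opportunities (DPMO), to a “sigma level.” This sketch uses the conventional 1.5-sigma shift; the defect counts themselves are illustrative assumptions:

```python
from statistics import NormalDist

def sigma_level(defects, opportunities):
    """Process sigma level from a defect rate, using the conventional 1.5-sigma shift."""
    dpmo = defects / opportunities * 1_000_000
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# By convention, 3.4 defects per million opportunities corresponds to "six sigma."
print(round(sigma_level(3.4, 1_000_000), 1))     # 6.0
# A process producing 66,807 DPMO sits at roughly three sigma.
print(round(sigma_level(66_807, 1_000_000), 1))  # 3.0
```

Defining a problem in these terms is what forces the strict before-and-after measurement that the DMAIC steps depend on.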

The Baldrige Criteria
The Baldrige National Quality Program Criteria for Performance Excellence serve as the basis for evaluating applications for the Malcolm Baldrige National Quality Award. The award was created by an act of Congress in 1987, with the goal of strengthening U.S. business competitiveness. The criteria are revised every year to be sure that they remain valid in the face of an ever-changing business environment.

Over the years, many state-level and international quality awards have been created using the Baldrige criteria as the model. Also, a large number of companies use the Baldrige criteria as the basis for internal evaluation of their businesses, with no intention of applying for any external award. Separate criteria have been developed for evaluating health care and educational organizations, and work is under way to develop criteria for non-profit institutions, including governmental bodies. The business criteria cover seven categories:

  1. Leadership
  2. Strategic Planning
  3. Customer and Market Focus
  4. Measurement, Analysis, and Knowledge Management
  5. Human Resource Focus
  6. Process Management
  7. Business Results

The focus on results is one of the strongest aspects of the Baldrige criteria. In evaluations of formal award applications, 450 out of 1,000 total points are allocated to the Business Results category. The Baldrige criteria are developed by the staff of the National Institute of Standards and Technology (NIST), along with a team of highly experienced performance improvement professionals.

Relationship to HPT
The table below shows the rough “goodness of fit” of the three quality system models with the critical attributes of human performance technology (HPT), as described by Harold Stolovitch and Erika Keeps in Chapter 1, “What is Human Performance Technology?” in the Handbook of Human Performance Technology.

Systemic (takes a total-systems view of the organization)
  • ISO 9000: Medium—evaluates most aspects of a business from a process point of view, but tolerant of a “siloed” view, if the company defines its quality system that way
  • Six Sigma: Medium—places much focus on exhaustive analysis to find the root causes of problems, wherever they originate within the system
  • Baldrige Criteria: Very High—criteria cover all elements of a business system, and place heavy emphasis on the alignment between these elements

Grounded in scientifically derived theories and the best available empirical evidence
  • ISO 9000: Medium—standard is developed by a technical committee and approved by vote of member bodies
  • Six Sigma: Medium—an assemblage of standard problem-solving tools, applied in a very disciplined fashion
  • Baldrige Criteria: High—criteria are developed by very experienced performance improvement professionals

Open to all means, methods, and media
  • ISO 9000: Medium—standard is quite prescriptive, and updates are released every six to seven years
  • Six Sigma: Medium—range of tools is quite extensive, but confined to the realms of process design and problem solving
  • Baldrige Criteria: Very High—criteria are very comprehensive, and are updated annually to reflect the latest business practices

Focused on achievements that human performers and the system value
  • ISO 9000: Low—standard just requires a well-documented quality system, supported by evidence that it is being followed; very little focus on actual worthy performance
  • Six Sigma: Medium—strong emphasis on careful definition of problems in customer terms and achieving measurable goals, but there is no “front end” process for ensuring that the most important issues are being addressed
  • Baldrige Criteria: Very High—nearly half of the weighted criteria involve measured results, clearly aligned with the most important aspects of the business

References and Related Readings
ANSI/ISO/ASQ. (2001). Q9001-2000: Quality management systems: Requirements.

NIST. (2004). Baldrige National Quality Program Criteria for Performance Excellence.

Pyzdek, T. (2003). The Six Sigma handbook. New York: McGraw-Hill.

Stolovitch, H.D., & Keeps, E. J. (1999). Handbook of human performance technology (2nd ed.). San Francisco: Jossey-Bass/Pfeiffer.

Pat McMahon, CPT, is a Principal Advisor with Proofpoint Systems, Inc. He is a Certified ISO 9000 Quality Systems Auditor, a Six Sigma Black Belt, and a Malcolm Baldrige National Quality Award Senior Examiner. He may be reached at pmcmahon@prodigy.net.



The 2004 Performance-Based Instructional Systems Design Conference: Focusing on Results, September 27-October 2, 2004, in Chicago, Illinois, is devoted to presenting, discussing, and debating the latest models, practical methods, tools, and case studies for the design, measurement, and evaluation of learning programs and interventions.

Today’s models and methods must look beyond learning to address performance results. ISD will be examined as a systematic approach to improving performance—that is, to defining and delivering success. 

To learn more about the Keynote, Masters Series, and Concurrent sessions being presented during this conference, click here. Attendees will take away valuable hands-on solutions to their most critical challenges in ISD, and will return to their employers and clients with the tools needed to improve performance and deliver success, as defined by their particular organization.

Register to attend this conference through the ISPI website, or call ISPI at 301.587.8570. Networking lunches are included in the conference, workshop, and institute registration fees. CPTs who attend this conference may receive up to 24 points toward re-certification.


Order your copy of ISD Revisited in conjunction with your conference registration and save 25% off the list price.

B.F. Skinner once remarked that his most important contribution was use of rate of response as the key measure in his research (Hall, 1967). This is often surprising to people who think of Skinner—considered by members of the American Psychological Association to be the most influential psychologist of the 20th century—as the behaviorist responsible for pioneering research in such areas as reinforcement schedules, behavior shaping, the study of verbal behavior, engineered communities, programmed instruction, and so on. But like most big advances in science, Skinner’s natural science revolution in psychology stemmed from a new measurement technology: use of response rate or frequency (count per unit of time) as the primary indicator of response strength or behavior probability.

One of my favorite conversations in recent years was with our colleague Professor Richard Clark, the esteemed cognitive psychologist and instructional technologist. He said, “There are lots of things I think Skinner got wrong, but I certainly think he got the measurement part right!” I’ve had a warm place in my heart for Dick ever since!

Skinner’s daughter, herself a well-known educational psychologist, wrote that “teaching is not only producing new behavior, it is also changing the likelihood that a student will respond in a certain way. Since we cannot see a likelihood, we look instead at how frequently a student does something. We see how fast he can add. The student who does problems correctly at a higher rate is said to know addition facts better than one who does them at a lower rate” (Vargas, 1977, p. 62). In other words, rate or frequency is a sensitive indicator of the likelihood that someone can or will do something, a potential measure of both competence and motivation.

As performance technologists, we must measure performance. Perhaps because most of us have lived since childhood with the ubiquitous percent correct, we accept it in our professional lives more or less unconsciously. As discussed in an earlier column, however, using percentages in the absence of the numbers from which they are derived can be a dangerous practice. When we calculate a percentage, the units in the numerator and denominator cancel out to form a “dimensionless quantity” (Johnston & Pennypacker, 1980, p. 139), eliminating information about the actual number of items, questions, behaviors, or accomplishments involved (Was it 10 or was it 1,000?) and ignoring the length of time during which the performance occurred. Consequently, we can’t unambiguously describe or repeat the actual performance without additional information. For this reason, percent is not a true measure of performance!

In contrast, count per minute is an incredibly sensitive measure of performance that directly translates into observables. Whether we’re counting instances of behavior (e.g., responding to a customer’s question), of accomplishments (e.g., defect-free widgets), or of business results (e.g., dollar value of transactions), using counts per unit of time is a straightforward and unambiguous metric. If we count both “good” (meeting accuracy or quality criteria) and “bad” (failing to meet accuracy or quality criteria) items, we have a direct measure of productivity or performance, as well as of accuracy or quality (in the ratio between the two).
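This distinction can be made concrete with a small calculation. The function and numbers below are purely illustrative (not from the column): two performers with identical percent-correct scores can differ tenfold in count per minute.

```python
# Minimal sketch: percent correct vs. count per minute.
# Reps A and B both score 90% correct, but their rates differ tenfold.

def rate_metrics(correct, errors, minutes):
    """Return (correct per minute, errors per minute, proportion correct)."""
    correct_rate = correct / minutes          # "good" count per unit time
    error_rate = errors / minutes             # "bad" count per unit time
    proportion_correct = correct / (correct + errors)  # what percent reports
    return correct_rate, error_rate, proportion_correct

rep_a = rate_metrics(correct=90, errors=10, minutes=10)  # (9.0, 1.0, 0.9)
rep_b = rate_metrics(correct=9, errors=1, minutes=10)    # (0.9, 0.1, 0.9)

# Percent correct is identical and hides the difference in performance;
# count per minute makes it visible.
assert rep_a[2] == rep_b[2] == 0.9
assert rep_a[0] == 9.0 and rep_b[0] == 0.9
```

Counting both correct and error responses per unit of time yields productivity (the two rates) and quality (their ratio) from the same observation, which a bare percentage cannot.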

Ogden Lindsley’s application of Skinner’s rate measure to education, in place of percent correct, produced unparalleled improvements in educational effectiveness with the methodology he called precision teaching (Binder & Watkins, 1990). Likewise, using count-per-minute correct and error frequencies has empowered HPT professionals, for example, the call center managers and trainers (Binder & Sweeney, 2002) who defined count-per-minute practice goals during a new-hire training program, after which new representatives exceeded call center productivity benchmarks by more than 60% (from just under six calls per hour to just over nine calls per hour). Using count per minute as a measure of performance has produced enormous gains in learning efficiency because its greater sensitivity provides more useful feedback to learners, teachers, and program designers (Binder, 2003).

Any project or program that measures skill, knowledge, productivity, or quality will benefit from substituting count per minute (or per hour, per day) for percent scores and setting fluency-based performance standards rather than percent correct criteria.

I encourage you to add timed measurement and rate of response to your projects as a way to test this assertion. As usual, if you have comments or questions, please send them along.

Binder, C. (2003). Doesn’t everybody need fluency? Performance Improvement, 42(3), 14-20.

Binder, C., & Sweeney, L. (2002). Building fluent performance in a customer call center. Performance Improvement, 41(2), 29-37.

Binder, C., & Watkins, C.L. (1990). Precision teaching and direct instruction: Measurably superior instructional technology in schools. Performance Improvement Quarterly, 3(4), 74-96.

Hall, M.H. (1967). An interview with “Mr. Behaviorist”: B.F. Skinner. Psychology Today, 1(5), 20-23, 68-71.

Johnston, J.M., & Pennypacker, H.S. (1980). Strategies and tactics of human behavioral research. Hillsdale, NJ: Lawrence Erlbaum Associates.

Vargas, J.S. (1977). Behavioral psychology for teachers. New York: Harper & Row.

Dr. Carl Binder is a Senior Partner at Binder Riha Associates, a consulting firm that helps clients improve processes, performance, behavior, and the measurement of results. His easy-to-remember email address is CarlBinder@aol.com and you can read other articles by him at www.Binder-Riha.com/publications.htm. See past issues of this column by clicking on the “Back Issues” link at the bottom of the blue navigation bar to the left.



Rate or frequency is a sensitive indicator of the likelihood that someone can or will do something.

Ever wonder what other members of ISPI do in their jobs? Are you curious about the income profiles of professionals working in the performance improvement field? What do they like and not like about working as performance technology professionals?

ISPI, with input from members of the Instructional Systems Technology Department at Indiana University, has designed and developed an online survey that addresses the above questions. The survey contains 20 questions and takes less than 10 minutes to complete.

We encourage all ISPI members to complete the survey. Your participation is critical. The data we generate will provide valuable information for all of our members and those interested in our field. By developing a profile detailing the professional practices of our members, we will have a better understanding of our profession.

The survey will be sent to all ISPI members by email in early July. So, watch for it and please respond. Assuming a successful response rate, we will begin sharing the findings this fall.


This past semester I taught a graduate course about performance technology for 20 on-campus students and 24 distance students. One assignment, dubbed PT Makeover, asked graduate students to choose a past project that would benefit from some serious PT “magic.” Over the next year, in several issues of PerformanceXpress, I’ll share some of their stories with you.

In the article below, Quality—not Quantity—Analysis, Rhea Fix makes a strong case for using analysis to decide what to do and what not to do. Many want to teach it all, or almost all, just in case we might leave something out. Rhea shows the benefits of getting very clear about what matters and of encouraging employees to look at themselves in light of these priorities.

A Program Built on Best Practices
The telesales division of a Wisconsin-based utility company did everything it was supposed to do. A new suite of products and services had been launched in previous months. While top telesales reps at the company achieved the desired numbers and results, other reps continued to turn in lackluster numbers—even 90 days after launch. The question was how to get those other reps up to speed. The product presentations delivered at launch just weren’t enough.

So, the training team set out to gather data. They documented best practices, analyzed needs, and spoke to several levels of leadership in sales and product marketing. They had a team of stakeholders and experts who contributed regularly to the project. But something went wrong: in the end they had a training program, yet reps complained that it didn’t give them a clear idea of how to sell. They understood the strategy and marketing. They had refreshed basic skills. Still, they reported that they weren’t sure what they were supposed to do differently. Even after the study, the program developers didn’t know which parts of the program needed fixing. They needed a makeover but didn’t know where to begin. Our team was asked to put fresh eyes on the problem.

Collecting Best Practices (and a lot of best opinions)
We started by looking at the assessment approach from the original project team. They had set a scope, discussed goals, and determined a means of collecting best practices and measuring results. The assessment included:

  • Shadowing reps to observe practices and effectiveness
  • Reviewing recorded calls to listen for best practices and effectiveness
  • Reviewing existing training on sales and support at the company
  • Reviewing current and future organizational initiatives (for consistency)
  • Interviewing reps for best practices
  • Interviewing executives for company goals, field performance issues, and organizational data
  • Interviewing subject matter experts for best practices
  • Interviewing stakeholders for goals, optimals, and perceived performance issues

Their resulting course came out too long, about 90 minutes, and included so much information that the essence of effective selling skills was lost.

Scaling the Scope
Before starting the fresh analysis, we scaled the scope back to the original project’s goal, which was to give the reps a few things they could do differently to sell this line of products. Success would be measured by the growth in participants’ sales numbers compared to that of the company’s top performers over the same period. Comparing against top performers would control for other factors affecting sales performance, such as promotions and market incentives.
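That measurement logic can be sketched in a few lines. All numbers below are hypothetical: comparing participants’ sales growth to top performers’ growth over the same period lets market-wide influences cancel out.

```python
# Hypothetical sketch: judge program success by participants' sales growth
# relative to top performers' growth over the same period, so market-wide
# factors (promotions, incentives) affect both groups and cancel out.

def relative_uplift(participant_before, participant_after,
                    top_before, top_after):
    """Participant growth ratio divided by top-performer growth ratio."""
    participant_growth = participant_after / participant_before
    benchmark_growth = top_after / top_before  # proxy for outside factors
    return participant_growth / benchmark_growth

# Participants: 80 -> 110 sales; top performers: 200 -> 220 sales.
# Raw participant growth is +37.5%, but +10% of it is market-wide.
uplift = relative_uplift(80, 110, 200, 220)
assert round(uplift, 3) == 1.25  # ~25% gain beyond the benchmark trend
```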

Getting the Right Kind of Data
Armed with a more manageable project, we reassessed. We needed to find out two things: what top performers were doing or not doing (optimals) and what low performers were doing or not doing (actuals). Our assessment approach included:

  • Sponsor and stakeholder questions
    • What seems to be the source of the problem?
    • What has been tried already?
    • What is the biggest problem in terms of sales for this new product line?
    • What is considered good performance in selling this product line?
    • What resources are available to reps in selling this product?
    • What do top performers say they do to meet these goals?
    • Do low performers for this product excel in selling other products?
    • Do good or low performers exist on the same teams or in different areas of the company?
    • What outside influences may be affecting sales?
    • Is performance tied to tenure with the company? Experience in selling?
    • Do top sellers experience a high rate of customer complaints or returns?
  • Telesales center performance reports (good samples and bad samples)
  • Mystery caller data (good samples and bad samples)
  • Sales performance data
  • Customer escalation records
  • Interviews with experts in sales, customer satisfaction, and executive complaint departments
  • Online survey of new hire telesales reps
  • Online survey of telesales rep managers
  • Focus group data from top reps

Defining the Course Solution with Data
The resulting data revealed some interesting things. It pinpointed where reps were having the most trouble, and it disclosed some things that couldn’t be resolved with a training solution. The resulting course gave reps a road map for success, and made them aware of pitfalls that could send them spinning in the wrong direction.

The first analysis revealed a lot of “stuff,” but not the right kinds of stuff, so it was difficult to build a program that equipped reps with observable skills. The new program, by contrast, was defined and limited to a few observable skills, making it easier for reps to perform the desired actions and improve sales. In the initial program, the steps and skills that reps needed were buried beneath executive strategy and product and marketing information. The new program compared effective and ineffective approaches, highlighted the actions with the most promise, and allowed reps to identify their weaknesses and adopt new approaches.

Rhea Fix is President of Red Pepper Consulting, a training and consulting firm specializing in e-learning solutions and implementation. She has a background in film, telecommunications, and writing and is a self-acclaimed technophile. She is completing her final year of study in San Diego State University’s online master’s program in Instructional Technology. She is a member of ISPI and ASTD Arkansas. Rhea may be reached at rhea@redpepperconsulting.com.



It is time once again for you, the ISPI membership, to determine the future direction of ISPI by nominating those members whom you feel have the qualifications, experience, and vision to lead our Society. Up for nomination this year are the President-elect and two Board members. They will join the President, three continuing Board members, and the non-voting Executive Director who make up the eight-member Board.

The duties of the Board are to manage the affairs of the Society and determine the strategic direction and policy of the Society.

Brief Job Descriptions
The President-elect assumes the presidency of ISPI for a one-year term at the conclusion of his or her one-year term as President-elect. The President-elect’s efforts are directed to assuming the Presidency, and assignments are designed to prepare for that transition. The President-elect serves to provide continuity of programs, goals, objectives, and strategic direction in keeping with policy established by the Board of Directors.

Each Director on the Board serves a two-year term and is a leader in motivating support for established policy. He or she helps develop new policy and works to build support for ISPI’s programs. A Director should provide an objective point of view in open discussion on issues affecting the membership and profession, thoroughly analyze each problem considered, vote responsibly, and then support those actions adopted by majority vote. Individually, each member of the Board is considered a spokesperson for ISPI and is expected to demonstrate integrity, dedication, and loyalty to established policy.

The deadline for nominations is August 27, 2004. If you would like to nominate a member, please send the following information to nomination@ispi.org:

  • The candidate’s name and contact information.
  • The position for which the candidate is being nominated.
  • Your name and contact information.
  • A 250-word statement on the candidate’s qualifications.

If you are interested in additional information on the nominations process, or the complete job descriptions and qualifications required, click here.



The International Society for Performance Improvement’s 43rd Annual International Performance Improvement Conference & Exposition in Vancouver, British Columbia, Canada, April 10-15, 2005 will feature several opportunities for you to develop your professional skills, learn new HPT tools and techniques, and hear the latest research findings in our field.

How can you participate? Attend! Present! Volunteer! It is not too early to mark these dates on your calendar:

  • July 30, 2004: Deadline to submit workshop proposal
  • August 31, 2004: Deadline to submit session proposal and early speaker registration for conference
  • April 10-12, 2005: Attend an HPT Institute prior to the conference
  • April 11-12, 2005: Attend a pre-conference workshop
  • April 12-15, 2005: Attend ISPI’s 43rd Annual Conference & Exposition

Here are some suggestions to help you prepare a successful conference proposal submission, especially if you are a novice speaker at ISPI:

  • Review the 2005 Call for Proposals, which outlines the review criteria for session proposals. Then, download the Session Proposal Template.
  • Review the Sample Session Proposal. This is an example of an accepted session proposal, updated to include all of the required information for 2005.
  • Download and review the Sample Handout and Sample Performance Tool as these will provide guidance as you are preparing your session proposal.
  • Consider a coach! Review the 2004 Conference Program, and see if you recognize anyone you might contact to provide feedback on your proposal.

If you have any questions or would like additional information, please contact ISPI at 301.587.8570 or by email at conference@ispi.org.



Performance Marketplace is a convenient way to exchange information of interest to the performance improvement community. Take a few moments each month to scan the listings for important new events, publications, services, and employment opportunities. To post information for our readers, contact ISPI Director of Marketing, Keith Pew at keithp@ispi.org or 301.587.8570.

Books and Reports
This second edition of Fundamentals of Performance Technology contains two new appendices that describe the ISPI-developed Standards of Performance Technology and map the content of Fundamentals of Performance Technology and Performance Improvement Interventions to those Standards.

Serious Performance Consulting According to Rummler uses an extensive case study to illustrate what a serious performance consulting engagement looks like, and what a serious performance consultant does. Do you have what it takes to be an SPC?

Training Ain't Performance is a whimsical, entertaining, and solidly written book that addresses human performance. From beginning to end, readers are guided toward an understanding of human performance improvement and how to use it for real organizational value.

Conferences, Seminars, and Workshops
Add performance and pizzazz to your training. Whether it’s a 45-minute presentation or a week-long workshop, Thiagi can make your training come alive with interactive experiential activities. Nobody does instructional design faster, cheaper, and better than Thiagi.

Darryl L. Sink & Associates, Inc. is offering these workshops for Fall 2004: Designing Instruction for Web-Based Training, Atlanta, September 14-16; The Instructional Developer Workshop, Los Angeles, September 21-23; The Criterion Referenced Testing Workshop, Oak Brook, IL, October 5-6. Visit www.dsink.com to register!





Job and Career Resources
ISPI Online CareerSite is your source for performance improvement employment. Search listings and manage your resume and job applications online.

Magazines, Newsletters, and Journals
The International Journal of Coaching in Organizations (IJCO) is a professional journal, published quarterly to provide reflection and critical analysis of coaching in organizations. The journal offers research and experiential learning from experienced practitioners representing various coaching schools and methodologies.

Performance Improvement Quarterly, co-published by ISPI and FSU, is a peer-reviewed journal created to stimulate professional discussion in the field and to advance the discipline of Human Performance Technology through literature reviews, experimental studies with a scholarly base, and case studies. Subscribe today!

Resource Directories
ISPI Online Buyers Guide offers resources for your performance improvement, training, instructional design and organizational development initiatives.




Are you working to improve workplace performance? Then, ISPI membership is your key to professional development through education, certification, networking, and professional affinity programs.

If you are already a member, we thank you for your support. If you have been considering membership or are about to renew, there is no better time to join ISPI. To apply for membership or renew, visit www.ispi.org, or simply click here.



ISPI is looking for Human Performance Technology (HPT) articles (approximately 500 words and not previously published) for PerformanceXpress that bridge the gap from research to practice (no product or service promotion, please). Below are a few examples of the article formats that can be used:

  • Short “I wish I had thought of that” Articles
  • Practical Application Articles
  • The Application of HPT
  • Success Stories

In addition to the article, please include a short bio (2-3 lines) and a contact email address. All submissions should be sent to april@ispi.org. Each article will be reviewed by one of ISPI’s on-staff HPT experts, and the author will be contacted if it is accepted for publication. If you have any further questions, please contact april@ispi.org.




Feel free to forward ISPI’s PerformanceXpress newsletter to your colleagues or anyone you think may benefit from the information. If you are reading someone else’s PerformanceXpress, send your complete contact information to april@ispi.org, and you will be added to the PerformanceXpress emailing list.

PerformanceXpress is an ISPI member benefit designed to build community, stimulate discussion, and keep you informed of the Society’s activities and events. This newsletter is published monthly and will be emailed to you at the beginning of each month.

If you have any questions or comments, please contact April Davis, ISPI’s Senior Director of Publications, at april@ispi.org.

1400 Spring Street, Suite 260
Silver Spring, MD 20910 USA
Phone: 1.301.587.8570
Fax: 1.301.587.8573