By Don Winiecki, EdD, PhD, Boise State University

In any field of study–whether it be focused on application, dedicated to research, or a combination of the two–one thing is common: an orientation toward truth. This is the case whether one is interested in data-driven findings of a needs assessment, decisions made in the development and delivery of any sort of intervention, or the evaluation of interventions or programs in their current state. Without some confidence that the data one collects, the metrics and measures one applies, or the principles upon which one bases one’s work are true, our practice is simply superstition.

But the definition of truth is elusive, and there seem to be no standards within performance technology to help us in this matter. Fortunately, the philosophy of knowledge (epistemology) provides tools for assessing the truth of our claims. For example, Ford (1975, pp. 80-89) offers a four-level taxonomy of claims to truth that are common in everyday practical activity. As in all taxonomies, items higher on the list rely on each lower item for support.

  • Truth 1: This is the level upon which all of the other levels rest in turn. Truth 1 is simply the claim that something is true. The only basis for this level is faith–it is “just true.”
  • Truth 2: This is the stability of consensus–things that are consistent with the beliefs and practices of individuals who are “supposed to know” satisfy this level. If enough of the “right people” believe something to be true, we are supposed to feel safe in going along with them.
  • Truth 3: This is the truth offered by statistical tests and probabilities, which make use of verifiable empirical data. Anything that can be said to be mathematically or logically consistent with claims to truth 4 fits this level.
  • Truth 4: This is empirical truth–that which is based on one’s verifiable perceptions or those provided by equipment designed to increase the sensitivity of our perceptions. This is what practitioners of natural and physical science attempt to achieve in their laboratory or field-based research.

What is important in all this is the ability it gives us to check the relation between the levels of truth claimed by our clients and the demands of a field dedicated to verifiable, data-driven analysis.

For example, it is common for human performance technology (HPT) practitioners to be simply told by management that employees “need training.” However, in the absence of performance-based evidence to verify a gap in knowledge or skill, this is an appeal to truth 1 camouflaged by the sometimes unsubstantiated claim of knowledge held by a member of the “right group” (truth 2).

Similarly, a consultant’s claim that some particular intervention is necessary to solve a problem or realize an opportunity because “it worked for all of our other clients” may simply be an appeal to faith (truth 1) under the cover of truth 2 and truth 3. Like the previous example, without direct evidence derived from actual performers in actual practice, it remains largely a matter of faith.

The lesson here is that in all cases we should base our work–and, ultimately, our reputations–on data, which gives us direct access to claims of truth 4, the empirical truth demanded by the sciences. However, it is always possible that data that appears to meet this standard is affected by error or even outright falsification (Winiecki, 2006). In such cases we risk undermining the efficacy of truth 4 by using it as a cover for unsubstantiated claims to truth 1.

But the brief examples presented above miss one important source of the problems we face. This threat comes from within our own field, from proponents of universals–claims to truth 1 based on one’s unmediated and special knowledge of “out there” values and always-was-and-always-will-be truths, without any reliance upon research-based evidence. We have many so-called “models” available to us, and claims that “it works for me” or that “it is the right thing to do” may be convincing depending on the charisma of their proponents, but they may also obscure factors that fail our most demanding and necessary tests for credibility.

In all of our work then, the only recourse is to be skeptical and thorough in vetting all data and even all processes that produce the data we aim to use in our work–even processes that originate in our own field of effort. Verifying the credibility of data and processes is indeed demanding, but no less demanding than the work of those who have earned the labels “scientist” and “engineer.” Our work is just as important and entirely deserving of this demand.

While not for the fainthearted and often difficult to accomplish, the ongoing questioning, testing, breaking, and revising of the claimed universals of one’s field of practice or study are essential to converting a field of practice into a legitimate discipline. Only by gradually removing superstition and charismatically asserted truths can a field gain credibility and, ultimately, legitimate status.

REFERENCES
Ford, J. (1975). Paradigms and Fairy Tales: An Introduction to the Science of Meanings (Vol. 1). London: Routledge & Kegan Paul.

Winiecki, D. (2006). Systems, measures and workers: Producing and obscuring the system and making systemic performance improvement difficult. In J. Pershing (Ed.), The Handbook of Human Performance Technology (3rd ed., pp. 1224-1250). San Francisco: Jossey-Bass/Pfeiffer.

About the Author
Don Winiecki, EdD, PhD, is a professor in the Instructional & Performance Technology department and adjunct professor in the Sociology department at Boise State University. He teaches courses in needs assessment and ethnographic research in organizations. His current research focuses on the system of values and beliefs in HPT. He may be reached at dwiniecki@boisestate.edu.