By Robbie Proulx and Tiffany Smith Tales from the Field, a monthly column, consists of reports of evidence-based performance improvement practice and advice, presented by graduate students, alumni, and faculty of Boise State University’s Organizational Performance and Workplace Learning department.
The Challenge of the E-Book
Here is a quick question: How many personal smart devices are on your desk right now? Many of you are probably using a PC, laptop, or tablet along with a smartphone, and likely using those devices to access information for personal and professional purposes. Libraries are stepping into the future along with the rest of us, adopting new technology and asking questions that will reshape the public library system. As the number of books lining library shelves shrinks, libraries now share a role in helping everyone in the community gain access to the newest digital medium: the e-book.
Digital Inclusion Concerns
Digital literacy and digital inclusion are primary concerns of the Anytown City Library (pseudonym), located in a mid-sized city in Arizona. In response to the need for increased digital inclusion, the library manager implemented the E-Book Training Program in late 2010. This program, conceived by library management team members, provides step-by-step technical assistance for patrons when downloading digital resources using personal devices, such as iPads® or Kindles®. From early 2012 until March 2013, the library’s training evolved to the point that it focused less on structured classes and more on drop-in support sessions.
Evaluating the E-Book Training Program
In the spring of 2013, a team of two Boise State University students conducted an evaluation of the library's training and drop-in sessions. During the four months of this case study, the evaluation team examined the effectiveness of the E-Book Training Program and asked one question: Is the E-Book Training Program implemented at the Anytown City Library contributing to increased digital inclusion among adults in the community? The team planned to show the library whether its training initiative was helping to bridge the digital divide by increasing awareness of the library's electronic resources.
Models Were Essential
To begin, the evaluation team donned a "detective" hat and talked with both library management and employees to research and create a program logic model (see Table 1). This shorthand model, based on the W.K. Kellogg Foundation's guidelines (2004, p. 12), helped distill the library's training program into its basic components: resources/inputs, activities, outputs, outcomes, and impacts. The team also created a training impact model, based on Brinkerhoff's success case method guidelines (2006, pp. 71-82).
Table 1: E-Book Training Program Logic Model
|Short- and Long-Term Outcomes||Short-term:|
||Long-term: Library staff have fewer hard-copy book problems (because more patrons are downloading)|
Creating these models was a valuable, and critical, step in the evaluation. Both models allowed the team to work systematically when examining the training program's impact, and they encouraged the team to think systemically about the potential effects of the training. For example, both models enabled the team to systematically review the library's basic system components. Taking a systemic view, the team also reviewed the library's components as a whole to understand how they were interrelated. Using these dual research approaches ultimately helped the team determine what it would recommend to the client. Because the library's training program was already in place, the team performed a summative evaluation of this mature evaluand, the E-Book Training Program, and focused on the outcomes of the program logic model more than on its processes. After comparing the information each model provided, the team decided to use the program logic model for the next important step: defining the dimensions of the evaluation.

Defining Dimensions and Ranking

At this point, the team worked with library management to identify the aspects of the training program most important to the library and its patrons. In evaluation lingo, these aspects are termed dimensions. To keep the evaluation manageable within the short time frame, the study was limited to three primary dimensions, each associated with a specific question about the training program (see Table 2).
Table 2: E-Book Training Evaluation Dimensions Weighting and Reasoning
|Process||1. Availability of library resources: Are enough resources available for trainees to use at the library, including drop-in sessions that fit patrons' schedules, available and fast Wi-Fi connections, and e-books and resources that satisfy patrons' interests?||Very important||According to stakeholders, library resources are very important during the e-book training. Without certain resources, such as Wi-Fi availability, the library would be unable to offer its training programs or drop-in sessions to patrons.|
|Outcome||2. Need for assistance: How much assistance is needed after trainees complete the classes?||Important||After completing e-book training or drop-in sessions, stakeholders expect a reduced number of questions for library staff. This direct correlation with the training makes reduced need for assistance an important, "nice to have" dimension, though not a critical one.|
|Outcome||3. Usage of library resources: Is there increased use of library resources and programs?||Extremely important||The library's overall goal of digital inclusion suggests that there will be an increased use of digital resources over time. Stakeholders believe this training impact relates strongly to digital inclusion and is extremely important in the long term.|
Methodology: Dealing with Data

Defining and structuring the team's data instruments was the next issue at hand. After discussing the best ways to gather the required data, the team settled on three main data inquiry techniques: open-ended, semi-structured interviews; web- and paper-based surveys; and library usage records. While it was difficult to find a large number of trainees to survey, a reasonable number of staff members completed the survey and participated in telephone interviews. The team found the staff interviews particularly revealing: Not only did staff answer questions, but many also had ideas about how the training might be improved.

During any evaluation, it is important to use triangulation (drawing on different sources of data) and critical multiplism (using multiple types of information that complement each other) to increase the credibility of conclusions. For example, if library employees stated that patrons were performing more downloads in 2013 than in 2012, the team examined the library's download records for validation. Figure 1 illustrates the team's thought process during data triangulation and how the team used multiple sources and types of information to support its findings.
Figure 1: Triangulation process
The next step was challenging: determining the key survey and interview questions. After tossing out several first drafts, the team finally zeroed in on suitable questions that tied employee interviews and surveys to each of the three dimensions. Finally, the survey data was ready for entry into Qualtrics®, and the team could conduct its online surveys and telephone interviews.

Deciphering Data and Drawing Conclusions

In any evaluation project, ethics are a fundamental consideration. The team strived to conduct all aspects of the evaluation using methodologies consistent with established professional principles from ISPI's Code of Ethics (2014). For example, the evaluation team:
- Added value by listening closely to the client’s feedback regarding the most valuable dimensions to the E-Book Training Program. As a result, the team included these dimensions in the final report to the client.
- Promoted validated practice by using industry standards to guide the evaluation, including:
- The AEA’s Guiding Principles for Evaluators
- Boise State University’s Institutional Review Board (BSU IRB) guidelines
After the team's initial detective work was completed, reviewing the compiled data and seeing how the results supported the three dimensions was relatively easy. The team's initial hard work paid off! Based on the triangulated data, the evaluation team rated the quality of each dimension on a 3-point scale (good, mediocre, poor).

As shown in Table 3, the team concluded that Dimension One, availability of library resources, could be rated good, based on the availability and speed of Wi-Fi connections and on the variety of e-books and other resources offered to engage trainees and spark their interest in downloading. Dimensions Two and Three, need for assistance and usage of library resources, were rated mediocre and good, respectively. Need for assistance was rated mediocre because questions from trainees did not seem to decrease after they attended the training. Usage of library resources was rated good because both survey results and library data indicated that use of electronic resources, particularly e-books, was increasing.

After reviewing all three dimensions against the synthesis rubric, the evaluation team concluded that the overall quality of the training, as it related to increased digital inclusion, was good. Even though questions from trainees did not seem to decrease after training, usage of the library's electronic resources was increasing at a rapid pace, and the library was providing many varied resources to serve its patrons.
Table 3: Dimensions, Overall Results and Synthesized Result
|Dimension||Importance Weighting||Overall Results||Synthesized Result|
|1. Availability of library resources||Very important||Good||Good|
|2. Need for assistance||Important||Mediocre|| |
|3. Usage of library resources||Extremely important||Good|| |
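The article does not spell out how the three importance-weighted ratings were rolled up into the overall "good" result. As an illustration only, here is a minimal Python sketch of one way ratings on a 3-point scale could be combined with importance weights; the numeric mappings for ratings and weights are assumptions for the sketch, not the team's actual synthesis rubric.

```python
# Illustrative sketch of a weighted synthesis rubric.
# The numeric mappings below are assumptions, not the evaluation team's method.

RATING = {"good": 3, "mediocre": 2, "poor": 1}
WEIGHT = {"extremely important": 3, "very important": 2, "important": 1}

def synthesize(dimensions):
    """Weighted average of dimension ratings, mapped back to the nearest label.

    dimensions: list of (name, importance, rating) tuples.
    """
    total = sum(WEIGHT[imp] * RATING[rating] for _, imp, rating in dimensions)
    weight_sum = sum(WEIGHT[imp] for _, imp, _ in dimensions)
    avg = total / weight_sum
    # Map the weighted average back to the closest 3-point label.
    return min(RATING, key=lambda label: abs(RATING[label] - avg))

# The three dimensions as rated in Table 3.
dims = [
    ("Availability of library resources", "very important", "good"),
    ("Need for assistance", "important", "mediocre"),
    ("Usage of library resources", "extremely important", "good"),
]

print(synthesize(dims))  # -> good
```

Under these assumed mappings, the heavily weighted "good" ratings outweigh the single "mediocre" one, which is consistent with the team's overall conclusion.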
With its conclusions in hand, the team provided the library with targeted information on the strengths and weaknesses of the E-Book Training Program, as well as suggestions for improvement. One of the best parts of the evaluation was being able to "sit at the table" with library management and discuss the team's findings with confidence. The team was also pleasantly surprised by how much this relatively small project revealed about the library's training program. The interviews with selected library employees, in particular, were invaluable. Would the team perform another evaluation? In a heartbeat! Sound evaluation is well worth the effort, regardless of the size or focus of the training.

References

Brinkerhoff, R. O. (2006). Telling training's story: Evaluation made simple, credible, and effective. San Francisco, CA: Berrett-Koehler.

ISPI. (2014). Code of ethics. Retrieved from http://www.ispi.org/content.aspx?id=1658

W. K. Kellogg Foundation. (2004). Logic model development guide. Retrieved from http://www.wkkf.org/knowledge-center/resources/2006/02/wk-kellogg-foundation-logic-model-development-guide.aspx
About the Authors

Robbie Proulx is an independent consultant who has worked extensively with organizations to create and implement effective performance solutions. She specializes in e-learning and blended learning. Robbie is a 2013 graduate of Boise State's Organizational Performance and Workplace Learning master's program and also holds BSU's Workplace e-Learning and Performance Support graduate certification. She can be reached at email@example.com.
Tiffany Smith is a former network engineer who has worked in the IT profession for 17 years and has transitioned into the learning and development profession over the last four years. Tiffany plans to leverage her past technology experience and recent degree to help organizations develop intervention programs designed to improve individual and work team efficiency and performance. Tiffany is a 2014 graduate of Boise State University’s Organizational Performance and Workplace Learning master’s program and holds BSU’s Workplace e-Learning and Performance Support graduate certification. She can be reached at firstname.lastname@example.org.