By Jessica Scheufler, Ratondrea O’Neal, Rose M. Nicholson, and Jenny Hargett
Tales from the Field, a monthly column, consists of reports of evidence-based performance improvement practice and advice, presented by graduate students, alumni, and faculty of Boise State University’s Organizational Performance and Workplace Learning Department.
Over the last 10 years, a university installed interactive whiteboards in more than 30 classrooms. Even though instructors had access to these boards and the university provided training workshops and resources on how to use them, the client was concerned that instructors were not maximizing the technology’s interactive capabilities and suspected that some instructors did not use the boards at all. The client wanted to find out who was using them and how they were using them.
Because the client was mostly concerned about finding out who used the technology and how instructors used it in the classrooms, our evaluation team conducted a summative evaluation in the spring of 2015 to answer the following dimensional questions:
- Alignment: How well do the training and resources align with the effective use of the interactive whiteboards?
- Usage: Which instructors are using the interactive whiteboards and how?
- Preparation and delivery: How has using interactive whiteboards affected the way instructors prepare and deliver classroom instruction?
- Student engagement: How has using interactive whiteboards in the classroom affected student engagement?
After defining the dimensional evaluation questions, the team met with the client to discuss how important the different dimensions were to the university. The team and client prioritized the dimensions using a weighting system. Dimensions were categorized as either important or very important, as shown in Table 1.
Table 1. Overview of Dimensions and Importance Weighting
| Dimension | Importance |
|---|---|
| 1. Alignment | Very Important |
| 2. Usage | Very Important |
| 3. Preparation and Delivery | Important |
| 4. Student Engagement | Important |
The team developed data collection methods based on Brinkerhoff’s success case method (SCM). Although the SCM is most commonly used to evaluate the impact of training, one of its purposes is “creating examples of success for use in marketing a program or otherwise convincing others of its value” (Brinkerhoff, 2006, p. 59). With this in mind, the team decided that reporting success cases would provide value to the client: success cases offer information about the perceived benefits of using the boards and can help the university improve its training.
Adhering to evaluation best practices, the team used Scriven’s Key Evaluation Checklist (KEC) guidelines (as cited in Davidson, 2005), collected data from multiple sources, and used more than one data collection method. The team collected data using an online survey of instructors, teaching assistants, and students. The team also interviewed an instructional designer who not only used interactive whiteboards but also helped develop the training and resources for using them. The purpose of the surveys and interviews was twofold:
- To clarify actual usage and perceptions of the technology
- To identify high-success or low-success cases
Although the team received survey responses from 12 faculty (of 111) and 46 students (of 3,517), it was not able to interview success cases because no one who reported successful interactive whiteboard use volunteered to participate in interviews. In the absence of sufficient data to support continuing with the SCM (Brinkerhoff, 2006), the team analyzed the data using a theory-driven approach. The theory-driven approach was a good fit for analyzing the existing data because:
- There is ample documentation about best practices for using interactive whiteboards to facilitate student engagement.
- The data collected centered on the use of best practices.
The team used weighted rubrics to evaluate the quality of each dimension.
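The mechanics of such a weighted-rubric roll-up can be sketched in code. This is an illustrative sketch only, not the team’s actual instrument; the numeric scale, the importance weights, and the label boundaries are all assumptions made for the example.

```python
# Illustrative weighted-rubric roll-up (not the evaluation team's actual
# instrument). Quality labels and importance labels are mapped to assumed
# numeric values, a weighted average is computed, and the result is mapped
# back to a quality label.

QUALITY_SCORES = {"poor": 1, "fair": 2, "good": 3, "excellent": 4}
IMPORTANCE_WEIGHTS = {"important": 1, "very important": 2}  # assumed weights

def overall_rating(dimensions):
    """dimensions: list of (quality_label, importance_label) tuples."""
    weighted_sum = 0.0
    total_weight = 0.0
    for quality, importance in dimensions:
        weight = IMPORTANCE_WEIGHTS[importance.lower()]
        weighted_sum += weight * QUALITY_SCORES[quality.lower()]
        total_weight += weight
    score = weighted_sum / total_weight
    # Map the averaged score back to the nearest label (assumed boundaries).
    if score < 1.5:
        return "poor"
    if score < 2.5:
        return "fair"
    if score < 3.5:
        return "good"
    return "excellent"

# The four dimensions and ratings reported in this evaluation:
ratings = [
    ("fair", "very important"),   # Alignment
    ("poor", "very important"),   # Usage
    ("good", "important"),        # Preparation and Delivery
    ("good", "important"),        # Student Engagement
]
print(overall_rating(ratings))
```

With these assumed weights, the heavily weighted “fair” and “poor” dimensions pull the average down despite the two “good” ratings, which is consistent with the overall judgment the team reached below.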
Dimension 1: Alignment; Overall Rating: Fair
The team found that while most of the technology-related resources the university provides to instructors explain how to operate the boards, only a few resources go beyond the beginner level. Survey data confirmed that most of the resources focus on the technical aspects of the boards (intermediate level) rather than on how to use them to engage students in active learning (advanced and expert levels). Fifty-six percent of instructors agreed or strongly agreed that the training helped them create engaging content for students, which resulted in a rating of “good,” while only 34% agreed or strongly agreed that the resources helped them create engaging content, which resulted in a rating of “fair.” Combining the weighted ratings for training and resources with the dimension’s importance resulted in an overall rating of “fair” for this dimension.
Dimension 2: Usage; Overall Rating: Poor
The data revealed that only half of the instructors surveyed reported using the boards in their classrooms, and those who did use the technology reported using it only occasionally. Additionally, 69% of students reported that their instructors used the boards rarely or not at all.
Dimension 3: Preparation and Delivery; Overall Rating: Good, but limited
Although the use of the technology was quite limited, instructor reports and interview data suggested that instructors who used the technology prepared differently when they had access to it, and they used it to prepare and deliver content in a more interactive manner. The overall rating for this dimension was good, but because actual use of the boards was limited, data on this dimension made only a limited contribution to the overall assessment of program quality.
Dimension 4: Student Engagement; Overall Rating: Good, but limited
Survey data findings indicated that faculty who used the boards did use them to affect student engagement, and student survey data confirmed this finding.
Table 2 shows each dimension, its importance rating, and its evaluation quality rating.
Table 2. Overview of Dimensions, Weighting, and Dimensional Quality
| Dimension | Importance | Quality Rating |
|---|---|---|
| 1. Alignment | Very Important | Fair |
| 2. Usage | Very Important | Poor |
| 3. Preparation and Delivery | Important | Good, but limited |
| 4. Student Engagement | Important | Good, but limited |
Because the alignment and usage dimensions were both weighted as “very important” and received ratings of only “fair” and “poor,” respectively, the team rated overall interactive whiteboard usage as between fair and poor. Given the results of the evaluation, the team identified the strengths and weaknesses of current interactive whiteboard usage, as shown in Table 3.
Table 3. Strengths and Weaknesses
Although the team recognized the difficulty of generating comprehensive recommendations from limited data (e.g., a low survey return rate), as a group of practitioners the team knew it was important to help the client make sense of the findings. So, the team made several evidence-based recommendations it believed would help the university make improvements. One of the most significant findings was that only 50% of instructors reported using the boards. This is significant because the university incurs costs to maintain the boards. Due to time constraints, the team was not able to evaluate the costs of the boards, so it recommended that the university conduct a cost–benefit analysis to determine whether keeping all the boards is worth the maintenance costs.
In addition to conducting a cost–benefit analysis, the team also recommended using this evaluation as a starting point to improve the training. The data suggested that the training did not match the instructors’ skill levels or interests, so the team recommended that the university assess the needs of its instructors and use the needs-based information to improve training.
References
Brinkerhoff, R. O. (2006). Telling training’s story: Evaluation made simple, credible, and effective. San Francisco, CA: Berrett-Koehler Publishers.
Davidson, E. J. (2005). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage Publications.
About the Authors
Jessica Scheufler is a graduate assistant and student in the Organizational Performance and Workplace Learning program at Boise State University and plans to graduate in the spring of 2016. She also holds a journalism degree from the University of Georgia and is an experienced photographer and digital content strategist. When Jessica graduates, she plans on using her experience and skills in communications and performance improvement to help organizations improve performance. Jessica can be reached at email@example.com.
Ratondrea O’Neal is a consumer learning North America instructional designer at Citigroup Financial Services with over 17 years of experience in training and development. Currently a graduate student at Boise State University’s College of Engineering, Ratondrea is scheduled to complete an MS degree in Organizational Performance and Workplace Learning (OPWL) in fall 2015 and holds a Graduate Certificate in Workplace E-Learning and Performance Support (WELPS) from Boise State University and a BA in Sociology from the University of South Florida. Ratondrea can be reached at firstname.lastname@example.org or email@example.com.
Rose M. Nicholson is a learning systems strategist at Booz Allen Hamilton with over 20 years of learning and development experience. Currently a graduate student at Boise State University’s College of Engineering, Rose is scheduled to complete an MS degree in Organizational Performance and Workplace Learning (OPWL) in summer 2016 and holds Certified Professional in Learning and Performance (CPLP) credentials and a BA in Human Resources from Saint Leo University.