By Kate Aument, Kathy Pillion, and Christopher Stubbert

Tales from the Field, a monthly column, consists of reports of evidence-based performance improvement practice and advice, presented by graduate students, alumni, and faculty of Boise State University’s Organizational Performance and Workplace Learning Department.

Overview

University (a pseudonym) is one of the top 15 professional pharmacy programs in the U.S. News & World Report education rankings (2015). Around 2002, the Office of Academic Affairs staff at University’s professional program implemented a peer observation program as a component of the overall curriculum assessment strategy. At the time, feedback about instructor teaching performance was based solely on student evaluations completed at the end of each course. Other institutions have implemented peer observation programs to complement students’ feedback, address content and subject matter, develop teaching skills, inform curriculum decisions, and help instructors adapt to change in higher education (Barnett & Matthews, 2009; Hammersley-Fletcher & Orsmond, 2004; O’Keefe, Lecouteur, Miller, & McGowan, 2009; Sullivan, Buckle, Nicholls, & Atkinson, 2012). The peer observation program was created to provide an additional source of evaluation of an instructor’s teaching performance by:

  • Measuring the effectiveness of a faculty member’s instructional efforts
  • Gathering information about teaching behaviors that may triangulate data received from student evaluation forms and feedback
  • Providing faculty with corroborating evidence from colleagues to support applications for promotion and tenure, as well as for internal and external teaching awards

The program provides formative feedback to individual instructors about their instructional practice or a summative review for an instructor to use when applying for promotion and tenure. This voluntary program serves all ranks of instructors but is used most often by instructors pursuing promotion and tenure. A logic model of the program was developed with input from stakeholders to outline the process and intended outcomes or impacts.

The evaluation team worked with the client to conduct a formative evaluation of the peer observation program to help identify program improvements. The program had been implemented 13 years earlier, and since then dramatic changes in the teaching environment and curriculum (e.g., distance education, technology in the classroom, and paperless courses) had produced new needs for instructors. After consultation with key stakeholders, the team investigated the following evaluation question:

How may the peer observation program be improved to better meet its goals?

The program goals are:

  • Strengthen the college’s teaching evaluation plan to ensure alignment with Accreditation Council for Pharmacy Education (ACPE) accreditation standards
  • Provide instructors with formative feedback about their instructional practice
  • Provide summative feedback about instructors’ teaching performance for them to use in their dossier for promotion and tenure
  • Promote collaboration between colleagues to help maintain and improve the quality of teaching in the curriculum
  • Investigate any associated outcomes from participation in the peer observation program, including instructors receiving internal and external teaching awards

Methodology

The evaluation team used a systematic process to complete the evaluation by following the main framework of Michael Scriven’s (2007) key evaluation checklist: executive summary, methodology, background and context, stakeholders, resources, values, process, outcomes, synthesis, recommendations, and reporting.

The team conducted a mixed goal-based and goal-free evaluation, assessing how well the current program met its intended goals (goal-based) and whether it contributed to any unexpected outcomes (goal-free). The evaluation dimensions focused primarily on the value of the program to the organization (worth-based evaluation). However, the team also considered the merit of the program by reviewing published literature about similar programs and incorporating perspectives on the program’s intrinsic value into its recommendations (Davidson, 2005).

Working with the client, the team determined that the evaluation would be based on five dimensions. The clients and stakeholders discussed with the team their perceptions of the importance of each dimension and completed a matrix for determining the relative importance of components, using a scale that distinguished between extremely important, very important, and important (Davidson, 2005). All dimensions except one were weighted extremely important; the fifth dimension was weighted important. Dimension weightings are shown in Table 1.

Table 1. Peer Observation Program Dimensions and Importance Weightings
(Stakeholder matrix ratings were provided by the assistant dean and three department heads.)

Process
  1. ACPE accreditation standards: How well does the peer observation program align with the ACPE accreditation standards?
     Importance weighting: Extremely Important (ratings: Extremely Important, 3; Very Important, 0; Important, 1)
  2. Formative feedback: How well does the design of the peer observation program provide instructors formative feedback about their instructional practice?
     Importance weighting: Extremely Important (ratings: Extremely Important, 3; Very Important, 1; Important, 0)
  3. Performance review: How well does the peer observation program design provide summative information about the teaching performance of an instructor?
     Importance weighting: Extremely Important (ratings: Extremely Important, 2; Very Important, 1; Important, 0; Uncertain, 1)

Outcome
  4. Teaching performance: How well does the peer observation program help positively change an instructor’s teaching performance?
     Importance weighting: Extremely Important (ratings: Extremely Important, 3; Very Important, 0; Important, 1)
  5. Faculty participation outcomes: What expected (e.g., teaching awards) and unexpected outcomes does the peer observation program contribute to producing?
     Importance weighting: Important (ratings: Extremely Important, 1; Very Important, 1; Important, 1; Uncertain, 1)
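As a minimal sketch of how the stakeholder matrix in Table 1 might be tallied, the Python snippet below aggregates the four stakeholders’ ratings with a simple majority rule. The majority rule is an assumption introduced for illustration only; per the methodology above, the final weightings were settled in discussion with the clients and stakeholders (Davidson, 2005), not computed.

```python
from collections import Counter

# Stakeholder ratings for each dimension, transcribed from Table 1
# (assistant dean and three department heads).
ratings = {
    "1. ACPE accreditation standards":   ["Extremely Important"] * 3 + ["Important"],
    "2. Formative feedback":             ["Extremely Important"] * 3 + ["Very Important"],
    "3. Performance review":             ["Extremely Important"] * 2 + ["Very Important", "Uncertain"],
    "4. Teaching performance":           ["Extremely Important"] * 3 + ["Important"],
    "5. Faculty participation outcomes": ["Extremely Important", "Very Important", "Important", "Uncertain"],
}

def consensus_weighting(votes):
    """Return the most common rating, ignoring 'Uncertain' responses.

    Majority vote is an assumed aggregation rule for illustration; in the
    evaluation, the weightings were settled in discussion with the client.
    """
    tally = Counter(v for v in votes if v != "Uncertain")
    return tally.most_common(1)[0][0]

for dimension, votes in ratings.items():
    print(f"{dimension}: {consensus_weighting(votes)}")
```

Note that a plain majority rule does not reproduce the weighting for dimension 5, whose split vote was resolved to Important through stakeholder discussion; any automated tally would need a similar tie-breaking convention.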

Data Collection

The team incorporated qualitative data from interviews and focus groups to supplement quantitative data gathered from surveys. The interviews and web-based surveys offset limitations of the extant data, such as incomplete records, while the extant data offset potential limitations of the qualitative data collection, such as variation across individual interview experiences (Butterfoss, Francisco, & Capwell, 2000, pp. 310-311).

The response rates for participation in this evaluation were positive. The evaluation team’s conclusions are based on the following perspectives:

  • Twenty-seven instructors completed the web-based survey out of 51 invitations (53%).
  • Eight peer observers completed the survey out of 11 invitations (73%).
  • Eight instructors attended the two instructor focus group sessions.
  • Four peer observers attended the peer observer focus group.
  • Three department heads participated in a semi-structured interview about summative review.
  • The assistant dean and associate dean participated in semi-structured interviews relating to ACPE accreditation standards.

The paragraphs below provide an overview of how the data collected for each dimension were triangulated and of the qualitative and quantitative data sources used to measure each dimension.

Dimension 1: ACPE Accreditation Standards
The team used the 2007 and 2016 ACPE accreditation standards and interviews with the assistant and associate deans to triangulate the data and evaluate the dimensional question: How well does the peer observation program align with the ACPE accreditation standards? According to the college’s 2014 ACPE self-study report, the program successfully supported five standards. However, ACPE revised the standards effective 2016, and the college will have to prepare to meet the new version. The program successfully supported four of the revised 2016 standards, but the peer observation form could be revised to help meet 2016 ACPE Accreditation Standard 19b: Distance education considerations. In both qualitative interviews, the program was regarded highly as a means of supporting the college’s overall teaching evaluation and assessment plan. The only concern mentioned related to the capacity of the program to meet all faculty requests, given the limited number of peer observers.

Dimension 2: Formative Feedback
The team administered web-based surveys and conducted focus groups with instructors and peer observers, then triangulated the data to answer the dimensional question: How well does the design of the peer observation program provide instructors formative feedback about their instructional practice? Instructors noted that undergoing a peer observation does not negatively affect the classroom experience for students and that their relationships with peer observers after the observation remained the same or, in a few cases, strengthened as a result. However, instructors reported that scheduling problems occasionally prevented them from participating in the program and that communication about the program’s availability and purpose was not always clear. Suggested amendments to the peer observation form included addressing different delivery methods, instructional objectives, and assessment strategies.

Dimension 3: Performance Review
The team administered web-based surveys and conducted focus groups with instructors and peer observers, as well as interviews with department heads, and triangulated the data to evaluate the dimensional question: How well does the peer observation program design provide summative information about the teaching performance of an instructor? Department heads identified the program as valuable for spotting “red flags” about instructors’ teaching performance when reviewing their dossiers for promotion and tenure. However, they also discussed limitations of the program because the perspective comes from one peer about one class session. Instructors valued the feedback from a peer observation during their first few years of teaching but were concerned about the implications the results might have on their record. Teaching performance expectations are not clearly communicated to new instructors. The peer observation program process guidelines state that department heads should not have access to formative evaluations unless the instructor requests it; however, there is no system in place to assure instructors that this is the case. Instructors, department heads, and peer observers all mentioned opportunities to update the peer observation form to address distance learning and assessment strategies and to incorporate more versatile and innovative teaching methods. This concern also relates to Dimension 2: Formative Feedback.

Dimension 4: Teaching Performance
The team triangulated the data obtained from the web-based surveys and focus groups with instructors and peer observers to answer the dimensional question: How well does the peer observation program help positively change an instructor’s teaching performance? Instructors noted that participating in the program did change their teaching performance; however, the feedback did not always point to concrete changes. Peer observers noted that instructors did incorporate their feedback but found it difficult to see the long-term impact of the program on the individual instructors they observed.

Dimension 5: Faculty Participation Outcomes
The team triangulated the data obtained from the web-based surveys and focus groups with instructors and peer observers to answer the question: What expected (e.g., teaching awards) and unexpected outcomes does the peer observation program contribute to producing? Faculty members (both instructors and peer observers) commented on the value of participating in the program. There were some positive comments about participation from both groups; however, both groups also suggested improvements, such as providing tangible recognition to peer observers for their efforts. As noted previously, instructors lacked awareness of the program and its connection to promotion and teaching awards.

Conclusions

The evaluation team focused on addressing the evaluation question, “How may the peer observation program be improved to better meet its goals?” As illustrated in Table 2, all process and outcome dimensions were rated on a three-level quality scale with importance weighting determined by the client. As a result of the evaluation, all dimensions were rated as some improvement needed.

Table 2. Peer Observation Program Evaluation Results
(Quality rating scale: exceeded or met expectations; some improvement needed; significant improvement needed.)

Process
  1. ACPE accreditation standards: Some improvement needed (importance weighting: Extremely Important)
  2. Formative feedback: Some improvement needed (importance weighting: Extremely Important)
  3. Performance review: Some improvement needed (importance weighting: Extremely Important)

Outcome
  4. Teaching performance: Some improvement needed (importance weighting: Extremely Important)
  5. Faculty participation outcomes: Some improvement needed (importance weighting: Important)
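To make the synthesis step concrete, the hypothetical sketch below pairs each dimension’s quality rating from Table 2 with its importance weighting to produce an ordered list of improvement priorities. The numeric values assigned to the rating and weighting labels are assumptions introduced for this example; the evaluation itself synthesized these judgments qualitatively, as described above.

```python
# Quality ratings and importance weightings transcribed from Table 2.
dimensions = [
    ("1. ACPE accreditation standards",   "Some improvement needed", "Extremely Important"),
    ("2. Formative feedback",             "Some improvement needed", "Extremely Important"),
    ("3. Performance review",             "Some improvement needed", "Extremely Important"),
    ("4. Teaching performance",           "Some improvement needed", "Extremely Important"),
    ("5. Faculty participation outcomes", "Some improvement needed", "Important"),
]

# Assumed numeric scales for illustration; the evaluation report used the labels only.
NEED = {
    "Exceeded or met expectations": 0,
    "Some improvement needed": 1,
    "Significant improvement needed": 2,
}
WEIGHT = {"Important": 1, "Very Important": 2, "Extremely Important": 3}

# Order dimensions by improvement need weighted by importance, highest first.
priorities = sorted(dimensions, key=lambda d: NEED[d[1]] * WEIGHT[d[2]], reverse=True)

for name, quality, weighting in priorities:
    score = NEED[quality] * WEIGHT[weighting]
    print(f"{name}: priority score {score} ({quality}; {weighting})")
```

Because every dimension was rated some improvement needed, the ordering in this sketch is driven entirely by the importance weightings.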

Recommendations and Reporting

The evaluation team presented the results to the assistant dean and shared the document with the associate dean. Based on the evidence collected during the evaluation, the team recommended that stakeholders consider the following next steps:

  1. Review the archiving process of the peer observation program. Provide instructors with electronic copies of the peer observation form, and add a section to the form that specifies whether the peer observation conducted was formative or summative.
  2. Review and revise the peer observation form based on identified needs.
  3. Improve the scheduling process for peer observations.
  4. Improve the communication about the program to instructors and department heads.
  5. Create and continue to support opportunities for instructors to receive informal feedback about their teaching.

As a result of the evaluation, the 2015 peer observation training session for new peer observers provided participants with an additional job aid to help guide observers through pre- and post-observation meetings. New peer observers also received a slightly modified peer observation form that incorporates questions asking directly about engaging the distant campus and about teaching innovations in the classroom. Any additional major proposed changes to the peer observation program must be evaluated and approved by the University’s Curricular Assessment Committee and further ratified by the faculty. These recommendations will be shared with the Curricular Assessment Committee when the timing is appropriate.

References

Barnett, C. W., & Matthews, H. W. (2009). Teaching evaluation practices in colleges and schools of pharmacy. American Journal of Pharmaceutical Education, 73(6), 1-8.

Butterfoss, F. D., Francisco, V., & Capwell, E. M. (2000). Choosing effective evaluation methods. Health Promotion Practice, 1(4), 307-313.

Davidson, J. E. (2005). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage.

Hammersley-Fletcher, L., & Orsmond, P. (2004). Evaluating our peers: Is peer observation a meaningful process? Studies in Higher Education, 29(4), 489-503.

O’Keefe, M., Lecouteur, A., Miller, J., & McGowan, U. (2009). The colleague development program: A multidisciplinary program of peer observation partnerships. Medical Teacher, 31, 1060-1065.

Scriven, M. (2007). Key evaluation checklist. Retrieved from http://www.wmich.edu/evalctr/archive_checklists/kec_feb07.pdf

Sullivan, P. B., Buckle, A., Nicholls, G., & Atkinson, S. H. (2012). Peer observation of teaching as a faculty development tool. BMC Medical Education, 12(26), 1-6.

U.S. News & World Report LP. (2015). Graduate Schools: Pharmacy. Retrieved from http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-health-schools/pharmacy-rankings

About the Authors

Kate Aument is an instructional designer with five years of experience developing continuing education and training programs. She plans to graduate from Boise State with a Master of Science in Organizational Performance and Workplace Learning in spring 2016. Kate may be reached at kateleifheit@gmail.com.

 

Kathy Pillion brings more than 25 years’ experience as a human resources consultant and facilitator to her role as director at SG Learning & Development, and throughout her career has provided leadership development, career coaching, and organizational change consultancy to a diverse client base in both public and private organizations, as well as in the vocational education and training sector. Kathy may be reached at kathy@sglearning.com.au.

Christopher Stubbert is a training development officer with the Canadian Armed Forces in Ottawa, Ontario. He holds a Bachelor of Commerce from Saint Mary’s University and has close to 20 years of experience in the military. He plans to complete his M.S. in Organizational Performance and Workplace Learning from Boise State University in fall 2016. Chris may be reached at chris.stubbert@gmail.com.