By: Annette Wisniewski, Debbie Campbell, and Brad Inderbitzen
Tales from the Field, a monthly column, consists of reports of evidence-based performance improvement practice and advice, presented by graduate students, alumni, and faculty of Boise State University’s Instructional and Performance Technology department.
How do you conduct a robust evaluation with insufficient data? A team of graduate students from Boise State University recently confronted this problem during a summative evaluation of Adventure Scouts, a non-traditional Scouting program currently offered in select Northeast Illinois communities. In this case, the evaluand was a fairly young, rapidly growing organization funded solely by donations. Because of its limited resources and lack of maturity, record keeping was often inconsistent or nonexistent, which made it difficult for the evaluation team to acquire extant data. Gathering additional data was also challenging because of the very limited timeframe in which to conduct the evaluation. Despite these obstacles, the team was committed to helping Adventure Scouts leadership determine the program's effectiveness, since the stakeholders hoped to convince other communities in the area to sponsor additional locations.
Purpose of Evaluation
In September of 2011, Joanne Osmond, Vice President of Membership for the Northeast Illinois Council (NEIC) of the Boy Scouts of America (BSA), and other NEIC members were eager to have an independent party evaluate the Adventure Scouts program, a modified version of Cub Scouts. The main question that stakeholders wanted the evaluation to answer was:
“Is the Adventure Scouts program effective as an alternative Scouting experience for underprivileged minority boys?”
The NEIC began the Adventure Scouts program in 2008 to provide character development and values-based leadership training to boys in Lake County, Illinois. The program differs from traditional Scouting programs in that it has been tailored to better serve Hispanic and African American families in low socioeconomic-status areas.
Type of Evaluation
To determine the effectiveness of the Adventure Scouts program, the team conducted a summative evaluation that assessed participation and retention rates of Hispanic and African American boys in schools offering the Adventure Scouts program, and compared those rates against rates at schools that offered only a traditional Scouting program to underprivileged minority boys. The evaluation also examined the program's ability to help develop good moral character and leadership skills.
The evaluators used W. K. Kellogg Foundation’s (2004) Logic Model to identify primary elements of the program and to recognize relationships among the elements. This ensured that all stakeholders had a shared definition of the program, purpose of the evaluand, and goals of the evaluation project. Evaluators also used the Key Evaluation Checklist (Scriven, 2007) as the framework for gathering data and conducting the evaluation. The first step in the process was to determine dimensions of merit for the evaluand and the relative importance of each dimension. After interviewing two primary stakeholders, the evaluators identified six dimensions of merit (see Table 1).
Table 1: Dimensions of Merit
| Category | Dimension of Merit | Importance Weighting | Rationale |
| --- | --- | --- | --- |
| Process | Content Alignment | Extremely Important (3) | If the program content aligns with less than 80% of the BSA standard curriculum, the program would be at risk of losing stakeholder and sponsor support, since the quality and effectiveness of the standard curriculum are well documented. |
| Process | Participation Rate | Extremely Important (3) | Involvement in Scouting is entirely voluntary. The boys will not remain involved in the program unless they continue to find it appealing and socially acceptable within their culture. |
| Process | Cultural Accessibility | Extremely Important (3) | The boys have easy access to den meetings, which are held in their schools or community centers. Accessibility of supplemental programs is determined by measuring participation in activities that are not specifically required for boys to achieve their ranks. |
| Outcome | Retention Rate | Extremely Important (3) | In most cases, the boys voluntarily give up recess or free time to participate in Scouting. If the boys are enjoying the program, they are more likely to continue on to the next rank. |
| Outcome | Leadership Skills | Important (2) | Leadership skills are not the primary focus of Cub Scouts, but they become more important as the boys reach middle school. Thus, the skills are introduced here and reinforced in Boy Scouts. |
| Outcome | Character Building | Important (2) | Good moral character development, as defined by the Cub Scouts' 12 core values, is an integral part of the Adventure Scouts program. It is important that the program help to support the boys' ethical development, but it is only one part of the program's agenda. |
Data Collection Methods
The team used surveys and extant data reviews to gather data about the program that would be required to effectively evaluate each dimension of merit.
Initially, the team planned to include all Adventure Scouts locations in the evaluation. Unlike the traditional Cub Scouts program, which was well documented, the Adventure Scouts program was new, and standards for record keeping had not yet been established. Since the team could not acquire data from all locations, the evaluation focused on the Adventure Scouts sessions held at schools for which attendance records existed. In addition to record reviews, the team surveyed direct staff to verify that Scouts were still developing good ethical conduct in the adapted Cub Scouts program.
Evaluation Results and Overall Significance
After analyzing the data, the evaluation team determined that the Adventure Scouts program achieved an overall rating of Very Good (see Tables 2, 3, and 4). Of the three process dimensions, Content Alignment and Participation Rate both received ratings of Excellent. The content of the Adventure Scouts program strongly aligned with the curriculum of the traditional Cub Scouts program; the primary difference between the two programs was that Adventure Scouts covered the curriculum over a three-year rotation. The Adventure Scouts program resulted in a much greater percentage of total available youth (TAY) participating in Scouts than the traditional programs offered in similar communities. Cultural Accessibility could not be evaluated because of a lack of data.
Of the three outcome dimensions, Retention Rate received a rating of Barely Adequate. While the Adventure Scouts program was able to attract high percentages of boys into the program, data revealed that the number of boys continuing on to the next level of Scouts was unexpectedly low. However, the other two dimensions, Leadership Skills and Character Building, both received a rating of Very Good. Boys participating in Adventure Scouts demonstrated that they were learning leadership skills and developing good character.
Table 2. Dimension and overall program quality ratings.
| Dimension | Excellent | Very Good | Good | Barely Adequate | Poor | Importance Weighting |
| --- | --- | --- | --- | --- | --- | --- |
| Content Alignment | X | | | | | Extremely Important |
| Participation Rate | X | | | | | Extremely Important |
| Cultural Accessibility | Data unavailable | | | | | Extremely Important |
| Retention Rate | | | | X | | Extremely Important |
| Leadership Skills | | X | | | | Important |
| Character Building | | X | | | | Important |

Overall Quality: Very Good
Table 3. Dimension weighting, rating, and scores.
| Dimension | Importance Weighting | Quality Rating | Score |
| --- | --- | --- | --- |
| Content Alignment | Extremely Important (3) | Excellent (5) | 15 |
| Participation Rate | Extremely Important (3) | Excellent (5) | 15 |
| Cultural Accessibility | Extremely Important (3) | Insufficient Data | N/A |
| Retention Rate | Extremely Important (3) | Barely Adequate (2) | 6 |
| Leadership Skills | Important (2) | Very Good (4) | 8 |
| Character Building | Important (2) | Very Good (4) | 8 |
| Overall Score | | Very Good | 52 |
Table 4. Program overall rubric.
| Excellent | Very Good | Good | Barely Adequate | Poor |
| --- | --- | --- | --- | --- |
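The overall score in Table 3 is the sum of each dimension's importance weighting multiplied by its quality rating, with the unevaluated dimension excluded rather than counted as zero. A minimal sketch of that calculation (weights and ratings are taken from Table 3; the function and variable names are our own illustration, not part of the evaluation instruments):

```python
# Weighted scoring of the dimensions of merit (values from Table 3).
# A rating of None marks a dimension with insufficient data; it is
# excluded from the total rather than scored as zero.
dimensions = {
    "Content Alignment":      (3, 5),     # Extremely Important, Excellent
    "Participation Rate":     (3, 5),     # Extremely Important, Excellent
    "Cultural Accessibility": (3, None),  # Extremely Important, Insufficient Data
    "Retention Rate":         (3, 2),     # Extremely Important, Barely Adequate
    "Leadership Skills":      (2, 4),     # Important, Very Good
    "Character Building":     (2, 4),     # Important, Very Good
}

def overall_score(dims):
    """Sum weight * rating over every dimension that has a rating."""
    return sum(weight * rating
               for weight, rating in dims.values()
               if rating is not None)

print(overall_score(dimensions))  # 52, matching the overall score in Table 3
```

The resulting total of 52 is then mapped to an overall quality label (here, Very Good) using the program rubric in Table 4.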
Recommendations and Explanations
Stakeholders need to further investigate the cause of the poor retention rate, as well as determine whether the program meets the Cultural Accessibility dimension, which could not be evaluated because of insufficient data. To assist them, the evaluation team made several recommendations for improving future data gathering and evaluation efforts. With more consistent and complete data, the Adventure Scouts stakeholders would be able to more accurately determine the impact of the program, more clearly document its strengths, and further clarify areas for improvement.
- Use consistent record keeping across locations (same format, same data categories). Provide identical forms to direct staff, lead staff, and administrators so that they know exactly what data to gather. Provide guidelines on how frequently updates are required, and then periodically review the data to ensure that records are completed fully and in a timely manner.
- Track attendance consistently for both weekly and optional activities. Develop a standard practice for conducting roll call. Keep a spreadsheet of all families and optional activities so that a quick check mark can identify who attended what event.
- Document what boys are asked to do at home. Create a companion to the curriculum that would provide a place for direct staff to note what tasks boys are given to take home.
- Log when boys take home books, have parents sign, and return books. Add another section in the curriculum companion where direct staff could note that boys have had parental signoff in their Scout manuals.
- Find effective ways to solicit feedback from parents on the Adventure Scouts program, such as beginning-of-year and end-of-year surveys.
- Consider presenting or publishing Adventure Scout success stories and lessons learned to prospective families and communities.
Stakeholders should also consider conducting a formative evaluation to find ways to strengthen or improve the program by examining, among other things, the delivery of the program, the quality of its implementation, and its procedures. A formative evaluation would assess the ongoing program to help identify any discrepancies between the program's expected outputs and what is actually happening. It would help to further analyze strengths and weaknesses; uncover obstacles, barriers, or unexpected opportunities; and generate insight into how the program's implementation could be improved.
References

Scriven, M. (2007). Key evaluation checklist. Retrieved from http://www.wmich.edu/evalctr/checklists/kec_feb07.pdf

W. K. Kellogg Foundation. (2004). Logic model development guide. Retrieved from http://www.wkkf.org/knowledge-center/resources/2006/02/WK-Kellogg-Foundation-Logic-Model-Development-Guide.aspx
Annette Wisniewski is an instructional systems design professional with extensive experience in training development, including e-learning, instructor-led training, virtual classroom, and blended learning solutions. She is president of Treetop Lane Consulting, Inc. (www.treetop-lane.com) and an IPT graduate student at BSU. Annette earned her Workplace E-learning and Performance Support certification in the summer of 2011 and will earn her MS in IPT in the spring of 2012. Annette can be reached at Annette@treetop-lane.com.
Debbie Campbell is an independent learning and performance consultant with Xcelerant Solutions Group in Atlanta, GA. She is pursuing an MS degree in Instructional and Performance Technology at Boise State University and will earn a graduate certificate in Human Performance Technology at BSU in the spring of 2012. Debbie can be reached at firstname.lastname@example.org.
Brad Inderbitzen is a Lead Instructional Designer at AT&T Mobility in Atlanta, GA. He will complete a Master of Science degree in Instructional and Performance Technology (IPT) and a Human Performance Technology (HPT) graduate certificate from Boise State University in 2012. Brad may be reached at JI7187@att.com.