By Gordon A. Hood, Verle-Ranae L. Hoskins, and Jaymie Rietmann

Tales from the Field, a monthly column, consists of reports of evidence-based performance improvement practice and advice, presented by graduate students, alumni, and faculty of Boise State University’s Organizational Performance and Workplace Learning department.

The Ship Rider Program
One of the U.S. Coast Guard’s statutory missions is Living Marine Resource Protection. Coast Guard maritime law enforcement personnel conduct at-sea boardings of fishing vessels to verify compliance with federal law, protecting the long-term sustainability of the $24 billion domestic fishing industry (USCG, 2011). To train these boarding teams on the laws and marine resources specific to each operating area, the Coast Guard operates five Regional Fisheries Training Centers, part of the Maritime Law Enforcement Academy and Force Readiness Command. The North Pacific Regional Fisheries Training Center (NPRFTC) in Kodiak, Alaska, was established in 1995 to train Coast Guard boarding officers enforcing fisheries laws in the Gulf of Alaska and Bering Sea. These boarding officers and boarding team members, assigned to ships from a variety of West Coast homeports, spend four days in highly dynamic, hands-on classroom training, learning applicable regulations and species identification, and practicing their skills in several different realistic mock boarding scenarios.

Coast Guard ships conducting Alaska fisheries patrols often rotate between many different missions and operating areas throughout the Pacific, so to provide them additional expertise and experience with the Alaska fisheries mission, an NPRFTC instructor often rides along with Coast Guard law enforcement personnel who recently completed the classroom course. The instructors act as subject matter experts, in an advisory role, during at-sea boardings of commercial fishing vessels. The purpose of this Ship Rider program is to improve crewmember confidence and increase effectiveness of the enforcement patrol.

Evaluation Design
We conducted a summative evaluation of the Ship Rider program to assess its overall quality. The primary question this evaluation sought to answer was: Does the Ship Rider program improve boarding team proficiency with fisheries regulations? The evaluation explored the worth of the program, specifically assessing whether the Ship Rider program meets the service need of improved proficiency in enforcing fisheries laws.

We followed Scriven’s (2007) key evaluation checklist and utilized both Kirkpatrick’s (1996) evaluation model and Brinkerhoff’s (2006) success case method while developing the evaluation methodology. We collected data through document review, instructor interviews, a participant survey, and participant interviews to identify success and non-success cases.

This evaluation focused specifically on four dimensions of the program. The dimensions were selected to evaluate the program’s process and outcomes, specifically considering Kirkpatrick’s (1996) model:

  1. Program design (process)–How adequately is the current program designed to facilitate the achievement of organizational goals?
  2. Knowledge (level 2 outcome)–Is the program improving crewmember knowledge?
  3. Confidence in Application (level 3 outcome)–Is the program increasing crewmember confidence in the application of fisheries regulations?
  4. Effectiveness (level 4 outcome)–Is the program increasing effectiveness of cutter (ship) patrols?

We used selected stakeholder input, organizational need, and program theory, as suggested by Davidson (2005), to determine how to weight the relative importance of the four dimensions, as shown in Table 1:

  1. Selected Stakeholder Input: The client and senior stakeholders at NPRFTC, who are experienced and have a clear understanding of organizational needs and values, were asked to determine the importance of the dimensions.
  2. Organizational Need: Dimensions were weighted based on their link and benefit to organizational needs and values–specifically the statutory requirement to enforce Living Marine Resource laws.
  3. Program Theory: Kirkpatrick’s (1996) levels were influential in the weighting process, as this program appears to target learning (level 2), behavior (level 3), and results (level 4).

The approach of this evaluation was goal-based; we familiarized ourselves with the program’s goals at the outset of the evaluation. Furthermore, this evaluation looked beyond just the crew’s performance when the Ship Rider instructor was aboard, to report on knowledge transfer, lasting behavioral change, and other results occurring after the Ship Rider instructor departed.

Evidence-Based Practice
For each dimension, we collected data from multiple sources, which allowed us to triangulate the information collected and compare it against evidence previously compiled regarding the success of the Ship Rider program. It was important to have the most accurate possible understanding of the variables that affected each dimension.

We began by compiling available program documents and extant data pertaining to how long a boarding took to complete and the number of violations identified. This review provided comparative data to quantify success or non-success cases. We also conducted a web-based survey of crewmembers who participated in the NPRFTC training within the last two years. The data collected from the survey provided input regarding knowledge change, behavioral change, and organizational results from the learner-participant perspective. Then, during crewmember interviews, we were able to focus on success cases to expand upon and aid interpretation of the survey results. One-on-one interviews of instructors provided further input on unrealized value and a comparison of this program–which does not have a curriculum–to other programs with a standardized curriculum.

We focused on evidence-based practice for this evaluation, deliberately gathering information about what works and facts supporting a conclusion. Instructor interviews and participant comments provided specific evidence, not just opinion, of increased knowledge or changed behavior. Surveys collected factual data to be used as evidence of program worth. The documented comparative data also provided triangulation of evidence.

Survey scores were quantified on a 5-point scale, and the mean was measured against a grading rubric. Instructor interview responses were compared to each other, using the majority answer to determine each question’s score. We used the success case method interviews to further validate the survey and interview data and to draw compelling conclusions about program outcomes and unrealized value. Each dimension was scored Poor, Good, or Excellent by applying the grading rubric. We then applied importance weighting to each raw dimension score using the overall quality rubric.
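The scoring and synthesis steps above can be sketched in a few lines of code. In this sketch the rubric thresholds, the numeric mapping of importance labels, and the example mean scores are all hypothetical illustrations; the article does not publish the actual NPRFTC rubric values or raw data.

```python
# Illustrative sketch of the scoring and weighted-synthesis approach.
# Thresholds, weights, and scores below are hypothetical examples,
# not the actual NPRFTC rubric or data.

def rubric_rating(mean_score, thresholds=(2.5, 4.0)):
    """Map a mean 5-point survey score to a rubric rating (assumed cutoffs)."""
    if mean_score < thresholds[0]:
        return "Poor"
    if mean_score < thresholds[1]:
        return "Good"
    return "Excellent"

# Hypothetical mean survey scores per dimension (5-point scale)
dimension_scores = {
    "Program design": 4.3,
    "Knowledge": 3.6,
    "Confidence in Application": 4.5,
    "Effectiveness": 2.1,
}

# Importance weights, mapped from the stakeholder labels to numbers
weights = {
    "Program design": 3,             # Very Important
    "Knowledge": 2,                  # Important
    "Confidence in Application": 3,  # Very Important
    "Effectiveness": 4,              # Extremely Important
}

# Rate each dimension, then synthesize a weighted overall rating
ratings = {d: rubric_rating(s) for d, s in dimension_scores.items()}
overall = sum(dimension_scores[d] * weights[d] for d in weights) / sum(weights.values())
print(ratings)
print("Overall:", rubric_rating(overall))
```

With these example numbers, the per-dimension ratings and the Good overall rating happen to mirror the results reported below, which makes the weighting logic easy to trace.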

Evaluation Results
Using a weighted rubric to synthesize results of the separate dimensions, we concluded that the overall quality of the Ship Rider program is Good. Specific dimension scores are reported in Table 1.

Table 1. Results of Evaluation: Overall quality of the Ship Rider program

Overall Quality of the Ship Rider Program: Good

  1. Program design – Weighting: Very Important – Rating: Excellent. Focus areas for this dimension included lag time in having a Ship Rider assigned, consistency of Ship Rider performance, and design in achieving organizational goals.
  2. Knowledge – Weighting: Important – Rating: Good. Crewmembers generally credit their knowledge to the classroom course and prior experiences; however, crewmembers find the resources and extra onboard training provided by Ship Riders invaluable to supporting and further developing their knowledge.
  3. Confidence in Application – Weighting: Very Important – Rating: Excellent. Having the expertise of the Ship Rider available helps new crewmembers to have confidence in their enforcement decisions.
  4. Effectiveness – Weighting: Extremely Important – Rating: Poor. We concluded that the Poor quality result of program effectiveness is due to the selected criteria and methodology, and may not be a good representation of the dimension.

One dimension addressed the process of the Ship Rider program: program design. Without a formal curriculum in place for the Ship Rider program, we relied on feedback from instructors as administrators of the program and crewmembers as program beneficiaries. The focus areas for the program design dimension included lag time in having a Ship Rider assigned to a ship, consistency of Ship Rider performance, and design in achieving organizational goals. Analysis of the data indicated that the design of the Ship Rider program is Excellent.

Three dimensions addressed the program’s outcomes: knowledge, confidence in application, and effectiveness. Based on the data from crewmember survey responses and instructor interviews, the knowledge dimension resulted in a quality rating of Good. Crewmembers generally credit their knowledge to the classroom course and prior experiences; however, crewmembers find the resources and extra onboard training provided by Ship Riders invaluable to supporting and further developing their fisheries enforcement knowledge.

Survey results, confirmed by instructor interviews, showed that the Ship Rider program has an Excellent quality rating for the confidence in application dimension. The majority of respondents (18 of 23) felt that their confidence in enforcing Alaska living marine resource regulations was improved and remained high after participating in the Ship Rider program. In addition, 20 of 23 respondents agreed that the Ship Rider helped improve their ability to enforce these laws. Instructor interviews unanimously confirmed an increase in crewmember confidence, evidenced by being able to taper the amount of support provided over the course of the ride-along. The success case method interviews reinforced the Excellent quality rating by revealing that while crewmembers could technically do the job without a Ship Rider, it is of greater long-term value to have the support available in the operational environment. In addition, having the expertise of the Ship Rider available gives boarding team members and command cadre an on-site resource to clarify emergent questions and reinforce confidence in their enforcement decisions.

We had identified effectiveness as an extremely important dimension in determining the overall quality of the Ship Rider program. However, this dimension rated Poor: the survey responses did not show any quantifiable improvement in the time a boarding took to complete, the number and type of violations identified, or the compilation of the evidence package for the violations. We chose these criteria early in the process, based on data from a white paper and after-action reports. However, based on interviews with instructors and crewmembers, we later decided these criteria did not adequately measure the effectiveness of the program, due to external factors outside the program’s control: size and condition of the fishing vessel, type of boarding, public familiarity with regulations, weather conditions, and the fishermen’s cooperation or compliance with the boarding process.

Suggesting that the program is indeed effective, 18 of 23 survey respondents agreed with the statement, “having a Ship Rider increases effectiveness of cutter patrols.” Individual statements from the success case method interviews were even stronger:

“For the guys themselves–impact on effectiveness is night and day…”
“The boarding time decreases. Quicker boardings improve relations.”
“Is [the program] effective and worth it? Yes, period, it is worth it.”
“The Ship Rider gave us a process that made the boardings faster, more systematic.”

Although the evaluation rubric rated effectiveness as Poor, the overwhelming feedback to the contrary, together with the lack of additional data on boarding times, violations identified, and quality of case packages, affects the validity of that rating. Considering these factors, we concluded that the selected criteria skewed the rating; if the related survey questions were eliminated, the dimension would score Good.

Results indicate that the answer to the evaluation question is “Yes. Overall, the program does improve boarding team proficiency with fisheries regulations.” Moreover, the program is Good at meeting this goal.

To apply Brinkerhoff’s (2006) success case method, we identified interview volunteers through the participant survey. Although the survey indicated several potential non-success cases, none of the personnel reporting them responded to our interview request. We were, however, able to interview four crewmembers as success cases; the data collected during their interviews corroborated the information we had gathered by interviewing four instructors qualified to conduct ship rides. These data led us to draw two compelling conclusions:

  1. Increased confidence is the most valuable outcome of this program.
  2. The program offers unrealized value, specifically:
    • Ship Riders are knowledgeable of the latest priorities of the operational commander, and current updates from the National Marine Fisheries Service.
    • Ship Riders are a great resource for other law enforcement questions, e.g., relating to the inspection of equipment required for safety of life at sea.
    • Ship Riders offer credibility. Being immersed in the local community, the Ship Riders speak the technical language of the fishermen.
    • The Ship Rider program offers a feedback mechanism for the classroom instruction by allowing the instructors to watch their students perform in a real-life environment.

Recommendations
One of the most valuable products of an evaluation is the set of recommendations it makes to systematically improve a program. We generated the following recommendations to help the program produce, or better measure, valued results for each dimension.

Program Design: To improve consistency of performance among Ship Riders, we recommend creating and adhering to a standard curriculum and a more organized instructional process. Additionally, a Kirkpatrick Level One reaction survey could provide continual feedback on the design of the program.

Knowledge: To measure crewmember knowledge change resulting from Ship Rider presence, we recommend developing an evaluation tool, such as a before-and-after knowledge test or other practical assessment of what was learned during the on-the-job training.

Confidence in Application: To improve crewmembers’ confidence in enforcing Alaska living marine resources regulations, we recommend assigning Ship Riders based on crewmember training and experience. Very experienced crewmembers may benefit less from the program than those new to the mission or area of operations. New or less-experienced boarding teams may also benefit from the Ship Rider being aboard for a longer period of the patrol.

Effectiveness: To better measure the effectiveness of the program, we recommend collecting and comparing quantitative data, such as length of boarding, violations identified, and other non-enforcement actions such as on-the-spot fixes in a before–during–after format, for future program analysis.

References

Brinkerhoff, R. O. (2006). Telling training’s story: Evaluation made simple, credible, and effective. San Francisco, CA: Berrett-Koehler.

Davidson, E. J. (2005). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage.

Kirkpatrick, D. (1996). Evaluating training programs: The four levels. San Francisco, CA: Berrett-Koehler.

Scriven, M. (2007). Key evaluation checklist. Retrieved from http://www.wmich.edu/evalctr/archive_checklists/kec_feb07.pdf

USCG. (2011). Living marine resources. Retrieved from http://www.uscg.mil/hq/cg5/cg531/LMR.asp

About the Authors
Gordon A. Hood is an active duty Lieutenant in the U.S. Coast Guard and has served in various leadership positions aboard ships, most recently as executive officer of a buoy tender in Kodiak, Alaska. He also spent an assignment on faculty as an instructor and liaison at the U.S. Naval Academy. Gordon is currently a full-time graduate student of Organizational Performance and Workplace Learning at Boise State University and will complete his master’s degree in August 2015. He can be reached at gordon.a.hood@uscg.mil

Jaymie Rietmann lives and works in Boise, Idaho. She currently serves as the HR training officer for Easter Seals-Goodwill, where she works to improve employee performance through onboarding, supervisor training, and performance improvement consulting. Jaymie completed Boise State’s Workplace Instructional Design (WIDe) certificate in 2013 and will complete the master’s degree in Organizational Performance and Workplace Learning program in 2015. Jaymie can be reached at Jaymie.Kaye@gmail.com

Verle-Ranae L. Hoskins currently lives and works in Seattle, WA. After graduating with her B.A. in Applied Behavioral Analysis, she spent five years working with children and families at a learning center. She is currently finishing up her master’s degree in Organizational Performance and Workplace Learning through Boise State University. She can be reached at verle-ranaehoskins@u.boisestate.edu