Program
Evaluation
A formal 10-Step Evaluation identifies critical dimensions of, and areas for improvement in, an application process designed to biannually select the top five of roughly 300 qualified candidates for admission to an exclusive tactical training program.
Note: This is information from an actual evaluation project. The Corporation, OP, OPIC and related terms are pseudonyms used to protect the privacy of the client and to safeguard sensitive information. No associations are claimed, nor should be assumed, from any images, data or information herein.
Background
Organization
The Corporation (a pseudonym) fills a vital role in national and global physical security, providing tactical and strategic weapons support. The OnPoint Officer (OP) is a key leadership role within the Corporation. OPs require high-level specialized training, delivered by OnPoint Instructors (OPIs).
The five-and-one-half-month OnPoint Instructor Course (OPIC) is offered only twice per year and accepts only 4–6 of approximately 300 eligible OPs per class. In 2020, the OPIC developed a scoresheet to screen candidates’ applications before selecting a maximum of 12 for in-person interviews. The introduction of the scoresheet effectively created a two-phase OPIC application process:
- Phase-I – Formal, standardized application process built around the scoresheet
- Phase-II – Invitation-only (based on Phase-I) in-person selection process
Program Stakeholders
The Corporation OPIC leadership (client) developed Phase-I, and they were the main reviewers and graders for the Phase-I assessment. As upstream stakeholders, their main concern was determining the effectiveness of Phase-I in selecting the best qualified candidates for OPIC.
Future OPIC applicants are direct stakeholders as users of Phase-I, as are the OPIC staff who depend upon the utility of Phase-I and the quality of the selected candidates it produces for OPIC training.
The U.S. and foreign military partners are downstream stakeholders. The critical function performed by OPIC graduates and the OPs they train will significantly contribute to those partners’ success in ways that can neither be quantified nor disclosed in this report and are beyond the scope of this evaluation.
Evaluation Request and Purpose
Following the 2020 Phase-I rollout, the client requested a program evaluation to determine its effectiveness in consistently identifying the top 2% of 300 qualified OPIC candidates.
The program evaluation project team consisted of three master’s degree students enrolled in the Evaluation course offered by the Organizational Performance and Workplace Learning (OPWL) department at Boise State University. The team conducted this program evaluation as its OPWL 530 Evaluation course project (Robertson et al., 2021).
The client intended to use the evaluation findings to identify areas for improvement in Phase-I of the OPIC application process. Accordingly, the evaluation team conducted a formative goal-based evaluation to determine how well the Phase-I scoresheet system selected candidates with the highest potential to succeed in OPIC. The team also incorporated a goal-free approach to investigate whether the recent addition of the Phase-I scoresheet had produced any unexpected results.
Methodology
Process
The evaluation team employed an evidence-based 10-step evaluation process (Chyung, 2019) to conduct the identification, planning, and implementation phases of the project. (See Figure 1)
Figure 1. 10-Step Evaluation Process
Dimensions
Having identified the stakeholders of the Phase-I evaluand and the formative focus of the evaluation, the team developed a Program Logic Model (PLM) (Appendix A). Using the PLM, the team selected the three program dimensions most pertinent for investigation and assigned each a relative importance weight (IW), as listed in Figure 2.
Figure 2. Three Evaluation Dimensions
1. Applicant characteristics
How well does Phase-I capture and facilitate the five macro-characteristics of candidates: adaptive, cognitive, interpersonal, character, technical?
• PLM: Resources & Activities
• IW: Very Important
2. Application interpretation
How consistently do OPIC staff score applications?
• PLM: Activities
• IW: Important
3. Process clarity
How clear is the application process to individual applicants?
• PLM: Resources & Activities
• IW: Fairly Important
Data Collection
The evaluation team followed the best-practice ethical guidelines of the American Evaluation Association (AEA) (2018) and the Joint Committee on Standards for Educational Evaluation (JCSEE) (n.d.) while designing instruments and while collecting and reporting extant data, survey responses, and in-person interviews. The team selected the following data sources to investigate the three dimensions, using the instruments listed in Table 1:
- Phase-I application scoresheets of current and prior students (extant data)
- Analysis of OPIC candidates’ Commanders’ recommendation letters (extant data)
- Current OPIC staff and students (in-person semi-structured interviews)
- Prior OPIC staff, students and applicants (web-based self-administered survey & extant data)
Table 1. Summary of Data Collection Instruments
Results
The three dimensions were measured individually using multiple data sources and measurement methods. The evaluation team applied “critical multiplism” in the selection of methods and triangulated the resulting data sets, verifying what any one source indicated against information from the other sources (Chyung, 2019, p. 118). A customized two-stage (instrument/dimension) rubric was used to score each dimension of Phase-I’s performance as an OPIC candidate selection tool, based on the desired (expected) performance relative to each dimension. (See Instruments and Rubrics Development Worksheet, Appendix B)
(See Table 2)
Table 2. Evaluation of OPIC Phase-I Performance Relative to Each Dimension
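The two-stage rubric combined per-dimension scores with the importance weights from Figure 2. A minimal sketch of this kind of weighted aggregation, assuming hypothetical numeric mappings for both the weights and the rubric ratings (the actual rubric values and scales are not part of this summary):

```python
# Hypothetical sketch of weighted rubric aggregation across the three
# evaluation dimensions. The numeric weight and rating values below are
# illustrative assumptions, not figures from the actual evaluation.

# Importance weights (IW) mapped to assumed numeric values
WEIGHTS = {
    "Applicant characteristics": 3,   # Very Important
    "Application interpretation": 2,  # Important
    "Process clarity": 1,             # Fairly Important
}

def weighted_score(ratings: dict) -> float:
    """Combine per-dimension rubric ratings (here an assumed 1-4 scale)
    into a single weighted average of Phase-I performance."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[d] * r for d, r in ratings.items()) / total_weight

# Example: hypothetical ratings on a 1 (poor) to 4 (excellent) scale
ratings = {
    "Applicant characteristics": 2.0,
    "Application interpretation": 2.5,
    "Process clarity": 3.0,
}
print(round(weighted_score(ratings), 2))  # → 2.33
```

A weighted average like this keeps the most important dimension (applicant characteristics) dominant in the overall judgment while still letting weaker dimensions register, which mirrors the relative-importance ranking the team established in Figure 2.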
Conclusions
"Overall, the OPIC Phase-I process needs improvement in all dimensions that were evaluated in order to meet the expectations of the client for the process itself."
(Robertson et al., 2021, p. 20)
Specifically, regarding the primary and most important dimension (Dimension #1), and the ability of Phase-I to assess applicants’ five macro-characteristics:
"As the process currently stands, there is no way to determine the five macro-characteristics of an OPIC applicant from the application itself. However, there are several avenues available to gather this data within this process."
(Robertson et al., 2021, p. 21)
Recommendations
Dimension 1:
To better assess relevant characteristics of applicants during Phase-I, use additional sources of data to gain more in-depth information (e.g., personality and aptitude tests, and direct supervisor surveys).
Dimension 2:
To make Phase-I scoring more objective and consistent, the scoresheet should mirror the application format and include a clear scoring guide for each area of the application.
Dimension 3:
To better prepare applicants for Phase-I, market the OPIC program and its application process to leadership and potential applicants using a comprehensive multimedia approach within the Corporation.
References
American Evaluation Association. (2018). American Evaluation Association guiding principles for evaluators. Retrieved March 14, 2021, from https://www.eval.org/About/Guiding-Principles
Chyung, S. Y. (2019). 10-Step evaluation for training and performance improvement. Sage.
Joint Committee on Standards for Educational Evaluation. (n.d.). Program evaluation standards. https://evaluationstandards.org/program/
Robertson, J., Blanchard, M., & McCraw, V. (2021). ‘OnPoint’ instructor course (OPWL 530 Evaluation Project Spring 2021) [MS Word].