(Your Name), Academic Affiliation
Target journal: Human Factors in Aviation or International Journal of Aerospace Psychology

Abstract

Objective: To design and empirically validate a Situational Judgement Test (SIT) tailored for Unmanned Aircraft System (UAS) operators (SIT-UAS), assessing non-technical skills such as decision-making, situational awareness, and risk management.

Background: Traditional technical assessments for UAS pilots focus on procedural knowledge, yet most operational failures stem from poor judgement under ambiguous or time-critical conditions. Existing selection tools lack scenario-based items specific to UAS challenges (e.g., beyond-visual-line-of-sight operations, lost-link procedures).

Method: A three-phase mixed-methods approach: (1) critical-incident interviews with 20 expert UAS operators to generate realistic scenarios; (2) an expert panel (n = 10) to establish correct/incorrect response keys; (3) validation with 150 UAS trainees, comparing SIT-UAS scores against instructor ratings and simulator performance.

Sample item key (excerpt): Correct (expert-rated best): B (immediate RTH prioritizes safety over mission). Least effective: A (continued flight risks loss of aircraft).

Results: The final 25-item SIT-UAS demonstrated good internal consistency (Cronbach's α = 0.84) and inter-rater agreement for scoring (Fleiss' κ = 0.76). SIT-UAS scores correlated significantly with instructor-rated non-technical competence (r = 0.61, p < .001) and predicted simulator mission success (OR = 3.4 per SD increase).
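As a minimal illustration of the internal-consistency statistic reported above (Cronbach's α), the sketch below computes α from an item-response matrix. The function and the demo data are hypothetical, not taken from the study; any real analysis would use the trainees' actual item-level scores.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of each respondent's total
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Synthetic demo: 3 respondents x 2 perfectly consistent items
demo = np.array([[1, 1],
                 [2, 2],
                 [3, 3]])
print(cronbach_alpha(demo))  # -> 1.0 (items covary perfectly)
```

An α of 0.84, as reported for the 25-item SIT-UAS, indicates that the items covary strongly enough to be treated as a single scale; values near 1.0 (as in the contrived demo) occur only when items are essentially redundant.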