What the camera sees
Ron Bowles, PhD (c)

Introduction
What you see is a function of where you stand and what you look at. This project explores what the camera sees in an immersive simulation setting. Specifically, this applied research project compares the data provided by video from cameras using differing points of view against evaluation criteria representing different types of learning outcomes in a simulation involving paramedic, police, and fire recruits.

Methods
We staged two sets of simulations using the JIBC Donald B. Rix Public Safety Simulation Centre. In the first simulation, Fire and Paramedic recruits performed a "layered response" call for a multiple trauma patient. The second scenario started with a Police incident involving a shooting; a Paramedic crew responded to manage the injured suspect.

The simulations were recorded using a mix of 6 static (mounted, wide angle) and operated cameras from 3 points of view:
• Overhead POV:
  • Static camera (fixed, wide angle)
  • Operated (handheld; camera operator zooms and changes focus to follow activity)
• Floor level, "external" to the simulation POV:
  • Static camera (fixed, wide angle, from the traditional evaluator POV)
  • Operated (handheld; camera operator zooms and changes focus to follow activity)
• Floor level, "engaged" in the simulation POV:
  • Roving (camera operator as a bystander with a camcorder, moving throughout the simulation)
  • Head cam (helmet camera worn by a participant in the simulation)

Program evaluators were given marking forms and a video of one of the camera angles. They marked the call, noting what aspects of the call could not be adequately assessed from that perspective. The evaluators were then given a video with a synchronized "collage" of 4 POVs and marked the call again. Finally, the evaluators completed a questionnaire that explored their impressions and experiences in using the various video POVs and formats to mark the simulations.

Findings
1) What does the camera see?
• No single camera angle captures all observable behaviours.
• Video alone is not an effective replacement for the existing simulation evaluation model.
• Video is an excellent audio record of what was said and done.

2) How does point of view change what is seen?
• Operated is better than static, and context matters. Camera operators focused on key activities and aspects that the wide-angle shots missed. The roving cameras were able to move around and obtain clearer views of critical activities.
• Camera functions and POVs, ranked from most to least effective for evaluation:
  1. Overhead, roving camera
  2. Handheld, roving camera ("outside" the simulation)
  3. Handheld, roving camera (operated by a bystander who is part of the simulation)
  4. Overhead static camera
  5. Floor level static camera

3) How do you use video, along with other media, to re-present assessment and evaluation of a call?
• The collage view was seen as extremely useful. The quad split is compelling, and it was interesting to observe evaluators using it to evaluate students: they would constantly shift focus from one POV to another throughout the simulation (a sketch of how such a quad split can be assembled follows).
• The evaluators using the videos provided more qualitative feedback, while "live" evaluators focused more on quantitative issues. Feedback from video evaluation focused on process (interaction, decision-making), while live evaluators focused on procedure (skill performance).
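The poster does not describe how the synchronized quad-split collage was produced. As an illustration only, the following minimal Python/OpenCV sketch composes four recordings into a 2x2 collage; the file names, tile size, and the assumption that the four feeds are already time-aligned and share a frame rate are all hypothetical.

    # Hypothetical sketch: compose a quad-split "collage" from four camera
    # recordings, assuming the files are already time-aligned and share a
    # common frame rate. File names and tile size are invented.
    import cv2
    import numpy as np

    SOURCES = ["overhead.mp4", "floor_static.mp4", "roving.mp4", "helmet.mp4"]
    TILE_W, TILE_H = 640, 360  # each POV is scaled into one quadrant

    caps = [cv2.VideoCapture(path) for path in SOURCES]
    fps = caps[0].get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreadable
    out = cv2.VideoWriter(
        "collage.mp4",
        cv2.VideoWriter_fourcc(*"mp4v"),
        fps,
        (TILE_W * 2, TILE_H * 2),
    )

    while True:
        frames = []
        for cap in caps:
            ok, frame = cap.read()
            if not ok:
                frames = None  # stop when the shortest recording ends
                break
            frames.append(cv2.resize(frame, (TILE_W, TILE_H)))
        if frames is None:
            break
        # 2x2 grid: top row holds the first two POVs, bottom row the last two
        top = np.hstack(frames[:2])
        bottom = np.hstack(frames[2:])
        out.write(np.vstack([top, bottom]))

    for cap in caps:
        cap.release()
    out.release()

In practice the four feeds would first need to be synchronized (for example, against a common start cue), which this sketch assumes has already been done.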
[Photos: Attendant's POV, helmet cam; Overhead static POV; Evaluator's POV, static camera; Floor level POV, roving camera]

Discussion
Interestingly, the static camera situated where live evaluators are usually located in classroom simulations was ranked as the least effective POV for evaluation. The overhead roving camera was ranked as the most useful, followed by the floor level roving cameras. The camera operators were able to follow the flow of the calls and zoom or move to highlight critical activities.

The helmet camera was not useful for evaluation. The narrow angle and unpredictable movement of the operator's head rarely showed what the various participants in the simulation were doing. Comparison of the helmet video with wider-angle video also showed that the attendants tend to "point" their heads towards the middle of the scene, then use peripheral vision when looking at specific activities. Thus, the helmet cameras were also not useful for showing what the attendants were focusing on during the calls. The helmet cameras did, however, provide the best audio source; several evaluators commented that the audio was more useful than the video in some instances.

All evaluators commented that they did not think video evaluation would work with the existing checklist-based evaluation model, which focuses on skill performance and sequencing. This impression was corroborated by analysis of the feedback the evaluators gave. However, the evaluators noted that the video was very useful for assessing overall performance, particularly aspects such as teamwork, leadership, decision-making, and time management.

Justice Institute of British Columbia