An agency’s body-worn camera video contains multiple data points that can be operationalized to benchmark officer performance and inform training. Harnessing this wealth of knowledge is the mission of David A. Makin, Ph.D., associate professor of criminal justice and criminology at Washington State University and director of WSU’s Complex Social Interactions Laboratory.

Using data analytics and machine learning, Makin and his team code and catalog key variables in body cam videos associated with a range of outcomes specified by agencies participating in the research. Importantly, work undertaken in the lab captures situational and environmental factors, such as geographic location, ambient noise level, time of day, and the presence and actions of bystanders, to better contextualize and therefore better understand interactions between the police and the community.
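
To make the coding idea concrete, here is a minimal sketch of what one coded interaction record could look like. The field names are hypothetical stand-ins for whatever variables a participating agency specifies; they are not the lab’s actual schema.

```python
# Hypothetical per-interaction coding record (illustrative field names only;
# the lab's actual schema is agency-specific and not public).
from dataclasses import dataclass, field

@dataclass
class CodedInteraction:
    video_id: str
    location: str                 # geographic location of the encounter
    time_of_day: str              # e.g., "day" or "night"
    ambient_noise_db: float       # ambient noise level
    bystanders_present: bool
    bystander_actions: list[str] = field(default_factory=list)  # e.g., ["filming"]
    outcome: str = ""             # agency-specified outcome of interest
```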

Recently, the WSU research team reached a milestone of 20,000 hours (nearly 120 weeks) of analyzed footage. I sat down with Dr. Makin to discuss how this research can help improve police-community interactions and create data-driven solutions to improve situational awareness, officer safety and de-escalation.


“What does de-escalation look like objectively? From the video reviews we’ve conducted, I’d say it’s not so much de-escalation as non-escalation.” (Getty Images)

Can you describe the objectives of your research?

Our job is really to help agencies view body-worn cameras as less of an unfunded expense or mandate and more of an investment to advance training beyond the classroom, mitigate risk, and improve performance.

We are conducting a few different studies and are in Phase I of a project to develop a training repository to support the FTO process and CIT training. For the latter, we examined samples of BWC footage for observable signs of cognitive impairment.

The objectives of the project are to examine the extent to which officers identify behavioral cues and to explore the factors associated with the detection of these cues. For example, are CIT-trained officers better at identifying these behavioral cues?

We are also running a multi-agency project examining differential treatment in traffic stops, which is an extension of work originally supported by the WA State Traffic Safety Commission. In this study, we assess the essential components of procedural justice – these are mostly objective measures. For example, how often do officers indicate the reason for the stop? We spent several weeks working with each agency on developing an instrument that best represents procedural justice at the objective level.

There are a whole range of other projects, although central to these is helping agencies maximize their BWC program.

When an agency contacts you, what is the first step in the process?

A lot of what agencies want has to do with benchmarking, so the first thing to do is establish an agency’s baseline. So, for example, when it comes to analyzing use of force, our goal is not to attach a label saying whether an incident is good or bad, but to objectively model an agency’s use of force. We review every interaction they have recorded to analyze things like:

  • What is the first point of contact?
  • How quickly is force used?
  • What is the duration of the applied force?

Once you have all that data, you start to see patterns where you can say, “OK, officers seem to be faster here or slower here in using force.” Again, we don’t put labels on it to say whether an interaction is good or bad, but provide data so that agencies are informed to ask questions such as: “What is too fast when using force?” or “What’s too slow?” This allows agencies to review all the patterns and identify problem areas, as well as what officers are doing correctly, according to policy.
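
As a rough illustration of that kind of benchmarking, the sketch below computes time-to-force and force duration from coded incident timestamps. The field names and numbers are invented for the example, not drawn from the lab’s data.

```python
# Illustrative sketch: benchmark how quickly force is used and how long it
# lasts, from coded incident timestamps (all names and values are invented).
from statistics import mean

incidents = [
    {"officer": "A", "contact_s": 0.0, "force_start_s": 95.0, "force_end_s": 110.0},
    {"officer": "B", "contact_s": 0.0, "force_start_s": 12.0, "force_end_s": 40.0},
]

def time_to_force(inc):
    """Seconds from first point of contact to first application of force."""
    return inc["force_start_s"] - inc["contact_s"]

def force_duration(inc):
    """Seconds the applied force lasted."""
    return inc["force_end_s"] - inc["force_start_s"]

print(f"mean time to force:  {mean(map(time_to_force, incidents)):.1f}s")
print(f"mean force duration: {mean(map(force_duration, incidents)):.1f}s")
```

Aggregated per officer, per shift or per agency, figures like these are what make the “faster here, slower here” comparisons possible without labeling any single incident.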

Another example would be the use of directed profanity. If you were to hear officers using a phrase like “Like, that’s not shit,” some people might have a problem with that, but others might say it’s just our way of communicating. However, if an officer says, “Why are you an asshole?” that is directed profanity. We have coding for that, so if we look at body camera footage of traffic stops or other types of interactions, we can send data back to an agency showing a random sample of those interactions and the percentage of times officers use directed profanity.

If agencies want to, they can then start learning from each other and say, “Your directed profanity rate is zero, but ours is 8%; what are you doing differently to achieve that?”
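
That rate comparison amounts to a simple proportion over a random sample of coded interactions. A minimal sketch, with an invented flag name, might look like this:

```python
# Minimal sketch: estimate a directed-profanity rate from a random sample of
# coded interactions (the "directed_profanity" flag is an invented name).
import random

def directed_profanity_rate(interactions, sample_size, seed=0):
    """Share of sampled interactions coded as containing directed profanity."""
    rng = random.Random(seed)
    sample = rng.sample(interactions, min(sample_size, len(interactions)))
    return sum(1 for i in sample if i["directed_profanity"]) / len(sample)

# Toy data with an 8% underlying rate, echoing the example above.
coded = [{"directed_profanity": False}] * 92 + [{"directed_profanity": True}] * 8
print(f"{directed_profanity_rate(coded, 50):.0%}")
```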

We also code for what the person of interest or bystanders are doing, because policing is complex and human interactions can be complex. So we can then give context to when an officer might use directed profanity.

Can you talk about the work you are doing to use body camera video data to improve field training?

We’re building a repository for a small agency that doesn’t get a lot of certain types of calls. They would like their trainees, when going through the FTO process, to be exposed to certain interactions, such as crisis contacts, domestic violence, or interactions with a certain level of intensity. The easy part is that we have this great database we can tap into; the harder part, of course, is identifying and understanding what makes for a good police-citizen interaction. In other words, you have to sit down with trainers, which we’re going to do in the fall. What I’ve found is that the hardest thing for an agency is to quantify what makes an interaction “good.” The long-term objective is to work with trainers and police experts capable of analyzing interactions.

We’ve spent so much time talking about accountability in the context of body cam video as a record of when something goes wrong, but there’s another way to look at this technology: it’s also a record of exemplary behavior. We should really learn from these examples as well.

Is this data and information only available to agencies that have registered with you, or can any agency view it?

At this time, only agencies partnering with us can view the data as we adhere to strict privacy guidelines as a research laboratory. But if agencies want our code books, we are happy to share them.

Could the data you collect be used alongside early intervention programs where the review process could take place in real time?

We have a provisional patent for software that could accomplish this. It’s called QPI, which stands for “Quantifying Police Interactions.” It’s built on a semi-automated machine learning platform, and the goal of the software is to do exactly what we do in the lab, but empower agencies to do it themselves. They could use the software to objectively review and identify what officers should do or, in some cases, not do. But our software is not designed to function as a “gotcha”; it is built on the foundation of evidence-based practices, so you can identify when interventions were needed. Importantly, we offer QPI as software as a service, with pricing that covers maintaining the software while keeping the cost as low as possible.

At the level of early intervention, here is an example. We did a project for an agency’s domestic violence unit where the unit sergeant wanted to analyze how, or if, officers were using trauma-informed practices. We reviewed the video and provided an objective analysis of how often officers referred victims to services, explained next steps and so on. Law enforcement spends a lot of money on training, so the goal is to find out if it works.

How would agencies use this data for training?

Take directed profanity. You can do a random review and then sit down and talk with officers. Maybe you just have a conversation at every shift briefing about how officers talk to people.

Or if you’re training officers on procedural justice, where there are X number of things an agency expects of officers, you can filter those traffic stop interactions and get data on how often your officers do each of them – what we would call key performance indicators (KPIs). Say you were measuring eight things; you could find the “hit rate” on those, then look at how those KPIs vary by driver race or gender. You can even do this by location to see if there’s a differential effect based on neighborhood.
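
A minimal sketch of that hit-rate analysis, assuming each coded stop records which expected behaviors were observed (the KPI names and grouping field are hypothetical):

```python
# Illustrative KPI "hit rate" by group; all field and KPI names are invented.
from collections import defaultdict

KPIS = ["stated_reason_for_stop", "explained_next_steps"]  # an agency might track eight

stops = [
    {"driver_race": "white", "stated_reason_for_stop": True,  "explained_next_steps": True},
    {"driver_race": "black", "stated_reason_for_stop": False, "explained_next_steps": True},
]

def hit_rates_by_group(stops, group_key):
    """Per-group share of expected KPI behaviors that were observed."""
    met, total = defaultdict(int), defaultdict(int)
    for stop in stops:
        group = stop[group_key]
        for kpi in KPIS:
            met[group] += int(stop[kpi])
            total[group] += 1
    return {g: met[g] / total[g] for g in total}

print(hit_rates_by_group(stops, "driver_race"))  # could also group by gender or location
```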

Final thoughts?

We’ve already analyzed 20,000 hours of footage, so we can look at the interactions to see all the different ways they’ve been handled and start isolating what works and what doesn’t.

The most important thing for me is de-escalation. What does de-escalation look like objectively? You’d probably find 100 different ways trainers teach de-escalation, but what we can start to identify is whether those techniques are effective. From the video reviews we’ve conducted, I’d say it’s not so much de-escalation as non-escalation. Like, don’t do these things. For example, never say, “Because I told you so.” Don’t threaten to arrest people if they don’t listen to you.

The data can show officers: we told you not to do these things, and here’s what happens when you do them. So these are not just anecdotes. It’s not just the trainer saying this or that; it’s objective data saying do these things and don’t do those, and the trainer can engage with the video footage. Agencies need to learn from this footage.

How can agencies reach you to find out more about your work?

Our website is the best way to reach us. It contains our contact details and an FAQ page for agencies.
