News & Events

Benchmarking: Are we comparing apples and oranges, and how are we using Process Evolution tools?


The third session of our recent conference, led by Matt Gill, examined the intricacies of benchmarking in policing. Focusing primarily on the Response Profiler tool, Matt’s presentation aimed to unravel the complexities of comparing police forces and their data effectively, emphasising the importance of understanding contextual nuances.

Key Insights on Benchmarking:

  • Comparison over Competition: Matt challenged the notion that public service benchmarking must emulate competitive private sector practices. Instead, he advocated for a nuanced approach, suggesting that benchmarking in policing involves comparing universally measured metrics to identify outliers. The key questions arising from such comparisons are whether there is a rational explanation for differences and, if not, whether the specific metric is being measured appropriately within a particular force.
  • Diverse Perspectives on Response Profiler: One eye-opening revelation was the diversity of approaches our users take with the Response Profiler. Some input all demand from the control room, while others focus solely on demand actioned by the force, or specifically by the Response function. Matt emphasised that while each of these approaches is valid, understanding the contextual backdrop is crucial when comparing data, outcomes, or Process Evolution’s models across different forces.
  • Matching Processing Portions to Demand Input: Matt underscored the importance of aligning the processing portions of policing models with the chosen demand input. If a model is designed to profile utilisation and effectiveness based on data from the Response function, it’s essential to accurately reflect forcewide data for relevant aspects such as time on scene or arrests on site. This alignment ensures the accuracy and relevance of the models being developed.
  • Data Integrity Variations: A critical observation was the variation in data integrity among forces. Matt highlighted instances where Time on Scene in the Response Profilers varied by more than 70 minutes between forces. For professionals comparing data across forces, it becomes pivotal to understand whether a given metric has a wider or narrower range of input than it does in their own force.
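The outlier-spotting step described above can be sketched in a few lines. This is an illustrative example only, not a Process Evolution tool: the force names, Time on Scene figures, and the z-score threshold are all hypothetical assumptions chosen to show the idea of flagging a force whose universally measured metric sits far from its peers.

```python
# Illustrative sketch: flagging outlier forces on a shared metric.
# Force names and values are hypothetical, not real force data.
from statistics import mean, stdev

def flag_outliers(metric_by_force, z_threshold=1.5):
    """Return forces whose metric lies more than z_threshold standard
    deviations from the cross-force mean."""
    values = list(metric_by_force.values())
    mu, sigma = mean(values), stdev(values)
    return {
        force: value
        for force, value in metric_by_force.items()
        if sigma > 0 and abs(value - mu) / sigma > z_threshold
    }

# Hypothetical mean Time on Scene (minutes) per force.
time_on_scene = {
    "Force A": 45, "Force B": 50, "Force C": 48,
    "Force D": 47, "Force E": 120,  # over 70 minutes above the others
}

outliers = flag_outliers(time_on_scene)
# Each flagged force then prompts the session's two key questions: is there
# a rational explanation for the difference, and if not, is the metric being
# measured appropriately (or even consistently) in that force?
```

A flagged outlier is the start of the conversation, not the end of it: as the session stressed, a 70-minute gap may reflect a genuine operational difference or simply a wider range of input feeding that force's figures.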

Matt’s session not only illuminated the complexities of benchmarking in policing but also made the case for a nuanced, context-driven approach. As policing professionals navigate benchmarking, understanding the diverse perspectives, aligning processing portions with demand input, and accounting for data integrity variations become crucial for meaningful comparisons.

For more insights and updates, stay connected with our ongoing conversations or reach out to us via email.