What do drones, AI and proactive policing have in common?
Data.
Ellen Joyner-Roberson, CFE, Global Marketing Manager, Security Intelligence Practice, SAS
In Marvel Studios’ Iron Man movies, industrialist and genius engineer Tony Stark transforms himself into a superhero who battles evil. In the ultimate display of proactive policing, he’s aided by “Jarvis” (Just a Rather Very Intelligent System), a highly advanced artificial intelligence created by Stark to manage almost everything in his life, but especially to help him fight crime.
Jarvis is just an AI in a superhero movie. But what if you could actually use smart machines to fight crime and make communities safer in real life? Today that's not so futuristic: this kind of technology for proactive policing may be coming soon to a community or country near you.
For example, in 2017 at least 167 US fire and police agencies acquired drones for proactive policing, more than double the number that obtained unmanned aircraft in 2015. More agencies acquired drones in 2016 than in the previous three years combined, according to a report by the Center for the Study of the Drone at Bard College. Law enforcement agencies like Indiana's Noble County Sheriff's Department are using drones to locate suspects during pursuits. In another example, the Oakland, CA, fire department used a drone after a deadly warehouse fire to scan for hot spots, a job that's both difficult and dangerous for a human. This is a perfect example of using data derived from technology to save lives.
Managing the Intelligence Life Cycle: A More Effective Way to Tackle Crime
Faced with vast amounts of varied data, your agency needs to make information available to the appropriate personnel for grading, analysis and review. Higher-priority items need to be routed and escalated accordingly. This requires a solution that is rigorous enough to enforce correct procedures, fast enough to avoid delays and flexible enough to change with organizational needs.
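As a loose illustration of this kind of triage, graded items can be modeled as a priority queue so the highest-priority item is always reviewed first. The sketch below is a minimal example only; the grading scheme and item descriptions are hypothetical, not a real agency's workflow.

```python
# Minimal triage-queue sketch. The grading scheme and item names are
# hypothetical illustrations, not any particular agency's schema.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class IntelItem:
    priority: int                           # lower value = higher priority
    description: str = field(compare=False)

queue = []
heapq.heappush(queue, IntelItem(3, "routine field report"))
heapq.heappush(queue, IntelItem(1, "credible threat tip"))
heapq.heappush(queue, IntelItem(2, "pattern match from analytics"))

# Analysts review and escalate the highest-priority item first.
while queue:
    item = heapq.heappop(queue)
    print(f"route priority {item.priority}: {item.description}")
```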
Analyzing visual data captured by drones
The increasing use of drones represents a practical application of AI and machine learning. While drones are capable of doing much more than visual surveillance, advances in object detection have greatly expanded the use of drones for image and video stream analytics. Object detection and classification are basic tasks in video analytics and are at the forefront of research in AI and machine learning. A variety of deep-learning algorithms, including YOLO (You Only Look Once) and other CNN (convolutional neural network) architectures, are at the heart of these systems and the basis for developing more complex applications.
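To make this concrete, here is a minimal sketch of running a pretrained CNN detector on a single frame. It uses the open-source torchvision library's Faster R-CNN model rather than YOLO (YOLO implementations follow a similar load, predict and filter pattern), and "frame.jpg" is a hypothetical still captured from a drone video stream.

```python
# Minimal object-detection sketch using a pretrained CNN detector.
# Assumes PyTorch and torchvision (0.13+) are installed; "frame.jpg"
# is a hypothetical still from a drone video stream.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a COCO-pretrained Faster R-CNN (one CNN-based detector among many).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("frame.jpg").convert("RGB")
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# Keep only detections above a confidence threshold.
for label, score, box in zip(predictions["labels"],
                             predictions["scores"],
                             predictions["boxes"]):
    if score > 0.8:
        print(f"class {label.item()} at {box.tolist()} "
              f"(score {score.item():.2f})")
```

In a real deployment, the same loop would run over decoded video frames, with the detections feeding downstream analytics such as tracking or alerting.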
Historically, object detection and video analytics approaches were manual and time consuming, requiring extensive human involvement. Most importantly, experts had to reduce image quality and perform a variety of preprocessing steps before training object detection algorithms. This led to high error rates and sometimes questionable results, given the algorithmic limitations and computing capacity of the time. It wasn't so long ago that 10 frames per second was considered fast. Today, however, it's not uncommon to perform analysis at 100 frames per second or more with human-level accuracy – or better. Our capacity to analyze images at scale and with high quality has never been greater, and the technology continues to grow and expand.
Keeping pace with technology
There are a few restrictions slowing progress on using drones and AI for proactive policing: laws, policy and privacy. Local and state agencies in the US have passed rules imposing strict regulations on how law enforcement and other government agencies can use drones. Indiana law specifies that police departments can use drones for search-and-rescue efforts, to record crash scenes and to help in emergencies, but otherwise a warrant is required to use a drone. That means police probably won’t be able to fly them near large gatherings unless a terrorist attack or crime is under way.
Similarly, many countries have procured drones for law enforcement and aerial surveillance, and European and other international institutions are taking a keen interest in AI. These institutions play a role in establishing suitable international ethical and regulatory frameworks for such developments, including, for example, the new General Data Protection Regulation (GDPR) and emerging frameworks for the ethical use of AI, such as transparency of decision making.
Other forms of AI for proactive policing
Drones are not the only types of AI being considered for proactive policing. What about identifying stressed-out police officers who may need a break? A system developed by Rayid Ghani at the University of Chicago increases the accuracy of identifying these "at-risk" officers by 12 percent and reduces false positives by a third. The system is also used by the Charlotte-Mecklenburg (NC) Police Department.
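For a flavor of what such a system involves, the sketch below trains a simple binary classifier to flag at-risk cases. It is illustrative only: the features, the synthetic data and the model choice are hypothetical assumptions, not the University of Chicago system's actual method.

```python
# Hedged sketch of an "at-risk" classifier, assuming scikit-learn.
# Features, data and model are hypothetical illustrations only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical features per officer:
# [overtime_hours, complaint_count, high_stress_calls]
X = rng.normal(size=(1000, 3))
# Synthetic label: risk rises with all three features, plus noise.
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```

In practice, the hard part is not the classifier but careful feature design and validation, which is where the reported gains in accuracy and false-positive reduction come from.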
AI essentials in proactive policing
Law enforcement and public safety agencies at all levels of government must exploit disparate and diverse data sets to be effective in their operations – that's what drones, AI and proactive policing have in common. But massive and ever-increasing quantities of data can strain limited operational resources. And without appropriate focus, the risk of failing to identify areas of increasing threat multiplies. The challenge for agencies is further intensified by the need to adhere to mandatory legislative processes. To succeed in this environment, agencies need modern tools capable of joining and interpreting huge quantities of data, generating alerts, detecting anomalies and applying AI techniques, all while supporting officers through the rigors of due process.
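As one small example of the anomaly detection piece, the sketch below flags unusual records with scikit-learn's IsolationForest. The feature names and data are hypothetical illustrations, not a real agency schema.

```python
# Minimal anomaly-detection sketch, assuming scikit-learn is available.
# Features and data are hypothetical illustrations only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-record features: [events_per_day, avg_severity]
normal = rng.normal(loc=[10.0, 2.0], scale=[2.0, 0.5], size=(500, 2))
outliers = rng.normal(loc=[30.0, 8.0], scale=[1.0, 0.5], size=(5, 2))
records = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(records)  # -1 marks likely anomalies

print(f"{np.sum(flags == -1)} records flagged for analyst review")
```

Flagged records would then feed the alert-generation and review workflow described above, rather than being acted on automatically.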
The reality is that AI is showing improved results. Intelligence analysts are testing and using deep learning, natural language processing and machine learning techniques in real-life law enforcement scenarios that can help change our world for the good. Although we haven't reached superhero status in our accomplishments (though it might appear we had, with the filming of Iron Man 3 at our own SAS headquarters), who knows what the future holds? Like anything truly worthy and good for humankind, there will always be pitfalls and challenges, but now is the time to seize the possibilities that lie ahead.
About the Author
Ellen Joyner-Roberson, CFE, is Global Marketing Manager at SAS, where she defines industry strategy and messaging for the global fraud and security markets in banking, insurance, health care and government. With more than 30 years of experience in information technology, she helps clients capitalize on the power of analytics to combat fraud and keep the public safe. This includes bringing greater awareness of how to apply machine learning and AI to detect evolving fraud tactics while realizing ROI in technology investments. In addition, she consults with clients to reduce fraud losses and mitigate risk across their enterprises. Joyner-Roberson graduated from Sweet Briar College with a degree in math and computer science.