The impact of AI-driven surveillance on policing in America

The boom in AI in the United States heralds a new frontier in law enforcement surveillance, equipped with the power to see, analyse, and predict with unprecedented precision. It transcends traditional monitoring to forge a future where crime fighting is as much about data as it is about detective work.

IFSEC Global's Video Surveillance Report 2023 underscores the growing reliance on AI-driven systems that process and analyse video data in real time, transforming the way police identify suspects, track movements, and anticipate crime hotspots.

Facial recognition technology (FRT), via platforms like Clearview AI, empowers law enforcement to pinpoint individuals in crowded spaces, enhancing suspect identification and tracking capabilities.

Meanwhile, AI's behaviour analysis and data integration capacities promise a comprehensive approach to understanding and preventing crime. This includes leveraging insights from social media, public records, and live video feeds.

However, this promising horizon is not without its concerns, raising serious questions about privacy and ethics.


The double-edged sword: efficiency vs. ethics

There is a paradox within AI-driven surveillance: the quest to advance law enforcement capabilities must be balanced against the need for transparency and accountability.

Studies, such as one by the London Metropolitan Police, show the trade-offs that come with facial recognition, pointing to instances of wrongful arrests and human rights violations. AI can adopt and amplify existing biases, such as racially prejudiced facial recognition algorithms, exposing a critical gap in human oversight and accountability in automated policing tools.

Adding to this complexity is AI's role in reviewing bodycam footage. Yes, this process aims to improve police accountability and review practices, but it faces substantial obstacles. Concerns over the confidentiality of the review process and the scarcity of public disclosure regarding the outcomes mean that AI, in this instance, needs to be handled with rigorous ethical standards and comprehensive privacy protections to maintain public trust.


The path forward with the public in mind

Surveys show a divided landscape of excitement and apprehension in the American public's view of AI in law enforcement. For example, 46% think police use of FRT is good for society, while 27% believe it is a bad idea and 27% are unsure. Additionally, 57% think crime would be unaffected by the widespread use of FRT, while 33% think crime would decrease and 8% believe it would rise.

The potential for drones, biometrics, and AI-analysed bodycam footage to enhance public safety is clear. However, concerns continue to centre on whether these technologies will be applied transparently.

The development and deployment of AI in policing must be guided by ethical frameworks and regulatory oversight. 

Public engagement and policy evolution will determine the extent to which AI can serve the public good while safeguarding individual freedoms. 

An ongoing dialogue around innovation and integrity, efficiency and ethics, is imperative to shape the evolving narrative of AI in police surveillance. Engaging with and listening to community voices will be key to managing AI in law enforcement surveillance and ensuring that these tools are used responsibly.


Our redaction solution, Secure Redact, is at the forefront of balancing AI's capabilities with privacy needs.

How? Sign up today to find out.
