Hey!
How’s it going? I’m taking some leave at the moment but I wanted to share something that I’ve been thinking about.
A few months back, I wrote about how the Australian Federal Police (AFP) were using a little-known but widely used retail surveillance tool called Auror without considering the privacy risk to the public. Internal documents showed that more than 100 police staff began using it before the agency had even considered whether it was safe to use. (After my reporting, the AFP suspended its use of the technology.)
The story of Auror’s uptake is strikingly similar to how the AFP also started using technology from the controversial facial recognition company Clearview AI. Just before I put on my out-of-office, I reported that the company, which broke Australia’s privacy law and whose technology’s use by the AFP was condemned by the privacy commissioner, was again meeting with the AFP, including appearing at a high-level meeting of Australia’s top child abuse cops to pitch facial recognition technology.
The common thread between these two stories is that police have repeatedly adopted new technologies to try to catch bad guys without considering the consequences for citizens. Both stories came about because of freedom of information requests; the AFP weren’t forthcoming about what they were doing and gave only scant responses to questions about the documents I obtained.
Another story that people might have forgotten is 2021’s Anom sting, a transnational operation set up by the AFP, the FBI and, as it turns out, the Lithuanian Criminal Police Bureau, among other agencies. It was an audacious operation that involved setting up a supposedly secure messaging app, distributed to criminals, that, unbeknownst to its users, cc’d every message to police. It was an enormously successful and extremely well-resourced operation that police were happy to promote once it was made public. What’s been given less attention are the legal challenges, as defendants have tried to establish whether the evidence collected through the app was gathered lawfully, and how it appears the police agencies jurisdiction-shopped to find a country to route the messages through so they could collect the data while sidestepping domestic privacy protections.
I can imagine it must be frustrating to be a police officer who sees criminals using new tech while having to deal with the bureaucracy and scrutiny that come with the job. But this bureaucracy and scrutiny exist for a reason! If police have powers that go beyond what citizens have, it’s reasonable to place higher expectations on them as well. There’s a big difference between an individual dicking around with facial recognition software (not to excuse this completely) and a police officer using dodgy tech to wrongly arrest someone. Plus, as was made evident by the Anom app, police agencies have significantly more resources than all but the biggest crime organisations.
At the very least, I think it’s crucial that information about these technologies is made public, so that we can discuss whether we think it’s okay for police to use them and understand how they affect all of us. I hope to do more reporting on this in the future when I get back to work.
Do you know of police (or other government agencies) using new tech like this? Please contact me here to talk about this or literally anything else. Confidentiality guaranteed!
Yours,
CW.