NZ police are using AI to catch criminals – but the law urgently needs to catch up too (The Conversation)
The use of artificial intelligence (AI) by New Zealand police is putting the spotlight on policing tactics in the 21st century.
A recent Official Information Act request by Radio New Zealand revealed the use of SearchX, an AI tool that can draw connections between suspects and their wider networks.
SearchX works by instantly finding connections between people, locations, criminal charges and other factors likely to increase the risk of harm to officers.
Police say SearchX is at the heart of a NZ$200 million front-line safety programme, primarily developed after the death of police constable Matthew Hunt in West Auckland in 2020, as well as other recent gun violence.
But the use of SearchX and other AI programmes raises questions about the invasive nature of the technology, inherent biases and whether New Zealand’s current legal framework will be enough to protect the rights of everyone.
Controversial technologies
At this stage, New Zealanders have only a limited view of the AI programmes being used by the police. While some of the programmes are public, others are being kept under wraps.
Police have acknowledged using Cellebrite, a controversial phone hacker technology. This programme extracts personal data from iPhones and Android mobiles and can access more than 50 social media platforms, including Instagram and Facebook.
The police have also acknowledged using BriefCam, which aggregates video footage, including facial recognition and vehicle licence plates.
BriefCam allows police to focus on and track a person or vehicle of interest. Police claim BriefCam can reduce the time spent analysing CCTV footage from three months to two hours.
Other AI tools such as Clearview AI – which takes photographs from publicly accessible social media sites to identify a person – were tested by police before being abandoned.
The use of Clearview was particularly controversial as it was trialled without the clearance of the police leadership team or the Privacy Commissioner.
Eroding privacy?
The promise of AI is that it can predict and prevent crime. But there are also concerns over the use of these tools by police.
Cellebrite and BriefCam are highly intrusive programmes. They enable law enforcement to access and analyse personal data without people realising, much less providing consent.
But under current legislation, the use of both programmes by police is legal.
The Privacy Act 2020 allows government agencies – including police – to collect, withhold, use or disclose personal information in a way that would otherwise breach the act, where necessary for the “maintenance of the law”.
AI’s biased decisions
Privacy is not the only issue raised by the use of these programmes. There is a tendency to assume decisions made by AI are more accurate than those made by humans – particularly as tasks become more difficult.
This bias in favour of AI decisions means investigations may harden towards the AI-identified perpetrator rather than other suspects.
Some of the mistakes can be tied to biases in the algorithms. In the past decade, scholars have begun to document the negative impacts of AI on people with low incomes and the working class, particularly in the justice system.
Research has shown ethnic minorities are more likely to be misidentified by facial recognition software.
AI’s use in predictive policing is also an issue as AI can be fed data from over-policed neighbourhoods, which fails to record crime occurring in other neighbourhoods.
The bias is compounded further as AI increasingly directs police patrols and other surveillance onto these already over-policed neighbourhoods.
This is not just a problem overseas. Analyses of the New Zealand government’s use of AI have raised a number of concerns, such as the issue of transparency and privacy, as well as how to manage “dirty data” – data with human biases already baked in before it is entered into AI programmes.
We need updated laws
There is no legal framework for the use of AI in New Zealand, much less for police use of it. This lack of regulation is not unique, though. Europe's long-awaited AI law has still not been implemented.
That said, New Zealand Police is a signatory to the Australia New Zealand Police Artificial Intelligence Principles. These establish guidelines around transparency, proportionality and justifiability, human oversight, explainability, fairness, reliability, accountability, privacy and security.
The Algorithm Charter for Aotearoa New Zealand covers the ethical and responsible use of AI by government agencies.
Under the principles, police are meant to continuously monitor, test and develop AI systems and ensure data are relevant and contemporary. Under the charter, police must have a point of contact for public inquiries and a channel for challenging or appealing decisions made by AI.
But these are both voluntary codes, leaving significant gaps in legal accountability and little check on police reluctance to comply.
And it’s not looking good so far. Police have failed to implement one of the first – and most basic – steps of the charter: to establish a point of inquiry for people who are concerned by the use of AI.
There is no special page on the police website dealing with the use of AI, nor is there anything on the main feedback page specifically mentioning the topic.
In the absence of a clear legal framework, with an independent body monitoring the police’s actions and enforcing the law, New Zealanders are left relying on police to monitor themselves.
AI is barely on the radar ahead of the 2023 election. But as it becomes more pervasive across government agencies, New Zealand must follow Europe’s lead and enact AI regulation to ensure police use of AI doesn’t cause more problems than it solves.
How Australian undercover police ‘fed’ an autistic 13-year-old’s fixation with Islamic State (The Guardian)
Court finds counter-terrorism unit’s conduct fell ‘profoundly short’ of minimum standards expected of law enforcement officers.
Counter-terrorism police encouraged an autistic 13-year-old boy in his fixation on Islamic State in an undercover operation after his parents sought help from the authorities.
The boy, given the pseudonym Thomas Carrick, was later charged with terror offences after an undercover officer “fed his fixation” and “doomed” the rehabilitation efforts Thomas and his parents had engaged in, a Victorian children’s court magistrate found.
Thomas spent three months in custody before he was granted bail in October 2022, after an earlier bail was revoked because he failed to comply with conditions.
Thomas, an NDIS recipient with an IQ of 71, was first reported to police by Victoria’s Department of Health and Human Services (DHHS) and then by his parents because of his fixation with Islamic State, which included him accessing extremist material online and making threats to other students.
On 17 April 2021, his parents went to a police station and asked for help because Thomas was watching Islamic State-related videos on his computer and had asked his mother to buy bomb-making ingredients such as sulphur and acetone.
Thomas was investigated and charged with two terror offences by the Joint Counter Terrorism Team (JCTT), which comprises Australian federal police, Victoria police and Asio members.
The court granted a permanent stay on the charges in October last year, but a copy of the decision has only recently been published.
“The community would not expect law enforcement officers to encourage a 13-14 year old child towards racial hatred, distrust of police and violent extremism, encouraging the child’s fixation on ISIS,” magistrate Lesley Fleming said in the decision.
“The community would not expect law enforcement to use the guise of a rehabilitation service to entice the parents of a troubled child to engage in a process that results in potential harm to the child.
“The conduct engaged in by the JCTT and the AFP falls so profoundly short of the minimum standards expected of law enforcement offices [sic] that to refuse this [stay] application would be to condone and encourage further instances of such conduct.”
Fleming found the JCTT also deliberately delayed charging Thomas with offences until after he turned 14, as it made it harder for him to rely on the defence of doli incapax – the presumption that a child under 14 is incapable of criminal responsibility for their actions.
Police also inappropriately searched Thomas’s property shortly before he was charged, Fleming found.
“There was a deliberate, invasive and totally inappropriate search of [Thomas’s] bedroom without lawful excuse.
“The search involved multiple Victoria Police members under the guise of attending to provide support to the family within the CVE [Countering Violent Extremism] framework.
“The conduct of the law enforcement officers involved subterfuge.”
Fleming, who noted that English was not the first language of Thomas’s parents, found his father told police “he was prepared to sacrifice my son for the safety of the Australian community”.
There was no evidence the AFP took any action in relation to the DHHS complaint, Fleming found. An online persona which later communicated with Thomas was activated a day earlier.
How the undercover operation unfolded
After Thomas’s parents spoke to Victoria police, Fleming found a decision was made by the force to manage Thomas “therapeutically”.
His parents provided Victoria police access to Thomas, their home, his phone, his mother’s phone, and to personal information about his school and psychologist.
Less than a month after Victoria police started working with Thomas, a case manager was told by a psychologist who was working with them that Thomas's "verbalisations need to be considered within the context of his ASD [autism spectrum disorder] and possible cognitive impairment.
“One of the key diagnostic criteria for ASD is highly restricted, fixated interests that are abnormal in intensity or focus,” the psychologist told the case worker.
“It is suggested that ISIS represents a circumscribed interest: an intense, narrow preoccupying interest that provides intense focus, social identity for him, a topic to be researched … as well as a topic of conversation that brings him attention.”
A police officer who compiled a report based on information downloaded from Thomas's phone found that he appeared fascinated with China and symbols of the Chinese Communist party, and that there were no religious images or verses from the Qur'an present.
Victoria police also arranged for an Imam to meet regularly with Thomas to discuss Islam and answer any questions he may have had.
But three months after his parents went to police, the JCTT started an operation targeting Thomas, code-named Bourglinster.
It would run in parallel with the efforts to counter his violent extremism.
An online covert operative was tasked with communicating with Thomas using two personae: a 24-year-old Muslim man from NSW, and a more extreme person located overseas.
The purpose of the operation was to find Thomas online and “engage him in chat to ascertain his intent if any”, the operative told the court. The strategy was to gather intelligence and information that could be used to charge Thomas with terrorism offences.
On the first occasion Thomas spoke with the operative online, he asked the officer: “are you a spy” and “do you work with the Asio”, to which the operative, in the role of the first persona, responded “I hate these killab [dog]”.
The operative then wrote “should I ask the same of you akhi” to which Thomas replied “I am 13 years old”.
The operative chatted with Thomas on 55 of the next 71 days, including during breaks at school and late at night.
The operative told an operational psychologist, who was expected to provide advice to him about how to communicate effectively online with Thomas, that “this … is a kid on the spectrum, I’m letting him do all the talking [and] just building rapport”.
There were 1,400 pages of online chats between the pair, Fleming found.
The first persona introduced Thomas to the second, more extreme, persona, who encouraged him to make a bomb or kill an AFP member.
But the operative gave evidence that Thomas was naive, and living a “fantasy life online”, including by asking questions like whether he could join the kids’ section of Islamic State.
On 8 August 2021, Thomas sent a photo to the operative which showed him wearing his school uniform, a hoodie and a face mask and holding a knife with “ISIS” written on it in marker.
His house was searched within days, and he was charged less than two months later.
Fleming found that AFP assistant and deputy commissioners had been involved in authorising the operation which resulted in Thomas being charged, and that “the AFP was at all times aware of TC’s age, his complex mental health issues, and his fixation on ISIS”.
A decision to arrest Thomas was authorised by an assistant commissioner after a detective superintendent failed to pass on information that the undercover operation was undermining efforts to change Thomas's behaviour therapeutically.
Fleming said the prospect of diverting and rehabilitating Thomas was always destined to fail once the operative started communicating with him, and the magistrate could not accept evidence given by police that these efforts had primacy over the criminal investigation.
“It is a nonsense to expect this Court to accept that an effective rehabilitation process can be undertaken when there is a seasoned covert operator online engaging TC, encouraging TC’s fixation and that TC’s rehabilitation team, his parents and his psychologist are oblivious to the existence of the [covert operator].
"The rehabilitation of TC was doomed once the [operator] connected online … befriended TC and fed his fixation, providing him with a new terminology, new boundaries and an outlet for him to express, what was in part, his fantasy world."