On National’s Selling Of Its “Social Investment” Policy
There’s a 19th century flavour to National’s “social investment” strategy, in that it aims to seek capital from philanthropists and charitable organisations – some of them with their own religious agendas – to fund and deliver social services. Beyond that point, the details are remarkably scarce.
Regardless, “social investment” has become the buzzword for National’s approach to welfare and to the state’s social spending in general. It will involve the use of data-driven, algorithmically-based models that assign priority scores to individuals and families receiving benefits, with these scores being based (to a significant extent) on the recipients’ previous record of contact with state agencies.
The “social investment” approach was developed by Bill English and Paula Bennett in the mid 2010s, on the back of data matching work already well underway at the Treasury and the Statistics Department by 2013. This same combination of the state’s stores of Big Data with more privatised and profit-driven modes of welfare delivery has recently been resurrected by Christopher Luxon. Social investment was at the core of his first announcements as National Party leader. It provided the centrepiece of his keynote speech at last year’s National Party conference. As Luxon promised the party faithful:
National will bring back the long-term, social investment approach so that resources are directed where they can do the most good. That means developing targeted interventions to steer at-risk young people in a direction that gives them the chance of a positive and productive life.
Since then, deputy Nicola Willis has taken over the messaging details on social investment, partly because Luxon has amply demonstrated that mastering the details of policy just isn’t one of his strengths. National lost office in 2017 before Bill English could put ‘social investment’ into full working order. However, some of the Auckland University academics (notably Professor Rhema Vaithianathan) who had helped to devise its methodology then proceeded to promote the use of those tools in the United States.
In Pittsburgh, Pennsylvania for example, social investment is better known as the Allegheny Family Screening Tool. With minor tweaks, it has also been implemented in parts of Florida, Maine, Oregon and southern California. The main developers and promoters of the Allegheny Family Screening Tool (and some other digital models inspired by it) were Rhema Vaithianathan and her US colleague Emily Putnam-Hornstein.
Now… If Big Data really could safely and reliably identify the families most likely to be at risk of (a) welfare dependency and (b) the harmful social behaviours associated with it, the social investment approach would have a lot fewer critics. It is only a tool after all. It can be used to target people for the punitive interventions deemed necessary to avoid the bad social outcomes identified by the model. But theoretically at least, it could also enable the state to direct its resources more efficiently and help to break the cycle of welfare dependence. That sounds good, right?
But here’s the thing. Last week, investigative journalists at the Associated Press reported that the US Justice Department is currently “scrutinising” Allegheny County’s use of artificial intelligence tools in welfare case management on the grounds that they may actually harden existing inequalities, and lead to punitive and prejudicial treatment of (a) people with disabilities and/or mental health problems, or (b) people who belong to ethnic minorities. From the AP report:
The Justice Department has been scrutinizing a controversial artificial intelligence tool used by a Pittsburgh-area child protective services agency following concerns that the tool could lead to discrimination against families with disabilities….
The interest from federal civil rights attorneys comes after an AP investigation revealed potential bias and transparency issues surrounding the increasing use of algorithms within the troubled child welfare system in the U.S. While some see such opaque tools as a promising way to help overwhelmed social workers predict which children may face harm, others say their reliance on historical data risks automating past inequalities.
As mentioned, being in receipt of a benefit and having had prior contact with welfare agencies will be treated by the algorithm as a risk indicator of ongoing reliance on benefit support, aka welfare dependency. In which case, it is pretty easy to see how a vicious circle could be forged. Interventions by welfare agencies will beget further interventions, and the model will thereby become self-validating.
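That feedback loop can be made concrete with a deliberately simplified sketch. This is a hypothetical toy model – not the actual Allegheny or MSD algorithm, whose weights and features are not public – but it shows how counting “prior agency contact” as a risk factor lets each intervention raise the next year’s score:

```python
# Hypothetical toy model -- NOT the real screening tool -- illustrating how
# "prior contact with agencies" as a risk feature becomes self-validating.

THRESHOLD = 2.0  # assumed cut-off above which a family gets flagged


def risk_score(prior_contacts: int, underlying_risk: float) -> float:
    """Toy score: prior agency contact weighted alongside everything else."""
    return 0.6 * prior_contacts + underlying_risk


def simulate(prior_contacts: int, underlying_risk: float, years: int = 5) -> list:
    """Each flag triggers an intervention, which is itself logged as a contact."""
    flags = []
    for _ in range(years):
        flagged = risk_score(prior_contacts, underlying_risk) >= THRESHOLD
        flags.append(flagged)
        if flagged:
            prior_contacts += 1  # today's intervention is tomorrow's risk factor
    return flags


# Two families with identical underlying risk, differing only in past contact:
family_a = simulate(prior_contacts=1, underlying_risk=0.5)  # never flagged
family_b = simulate(prior_contacts=3, underlying_risk=0.5)  # flagged every year
```

Under these invented weights, the family with more historical contact is flagged every year, and each flag adds another contact to its record – the vicious circle described above, in ten lines.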
To its supporters, the algorithm provides an “objective” numerical rationale for a gamut of carrot and stick responses – up to and including benefit termination and child removals. New Zealanders aware of the criticisms regularly levelled at Oranga Tamariki will find this part of the AP report very familiar:
Child protective services workers can face critiques from all sides. They are assigned blame for both over-surveillance and for not giving enough support to the families who land in their view. The system has long been criticized for disproportionately separating Black, poor, disabled and marginalized families and for insufficiently addressing – let alone eradicating – child abuse and deaths.
As mentioned, the people most likely to be targeted by the algorithms tend to be members of ethnic minorities who live in poor neighbourhoods beset by high rates of drug and alcohol addiction and criminal activity. Or they will tend to be people with disabilities since – for obvious reasons – families with members who have disabilities are likely to interact with support agencies more often than the general population.
Pointedly, Luxon has indicated that he expects the disabled to be subjected to the same work “incentives” that the social investment approach will direct at other people on benefits. The pejorative comments from centre-right political parties about people on benefits suggest that the interventions triggered by the social investment model are far more likely to be punitive than benign. (More tough-love stick, fewer liberal carrots.) ACT, for instance, has been urging that the state use advances in digital technology to monitor and control how people on benefits spend their money. Again, note the Victorian overtones. “Social investment” risks creating a digital poorhouse.
Defenders of the social investment approach tend to argue that discretion will always remain with social workers to override the dictates of the algorithms. In practice however, the algorithms readily become the default setting. The likely prevalence of their use will put the onus back on social workers to come up with a rationale as to why the algorithmic findings should be ignored in any particular instance.
In such situations, the line of least resistance will become the norm. Given how often Oranga Tamariki has been accused of simultaneously (a) acting too slowly to protect children at risk and (b) acting too quickly in removing children from family contexts of potential danger, one can see the attraction of letting an algorithm make the decision and cop the blame. The data made me (not) do it.
New Zealand Made
Last week, the Pittsburgh Post-Gazette expanded on the news of the Justice Department’s interest in the local Allegheny Family Screening Tool. As Werewolf reported in mid 2022, US media accounts of the use of AI in welfare management have often acknowledged New Zealand’s pioneering work in developing the social investment approach. As long ago as 2017, however, Illinois stopped using its predictive analytics tool because it was found to be ineffective in predicting the worst cases of abuse. Reportedly, it also generated a volume of digital “alerts” that swamped the ability of the state’s social workers to respond in a timely fashion. The digital noise was obscuring the genuine cases of concern.
Last year, Oregon (which had followed the Allegheny County precedent) also ceased using predictive analytics to target its responses to children at risk, because of earlier AP research findings that the procedure can lead to the racial stereotyping of the very people and communities it was supposedly trying to help.
Big Data Matching
In line with what National is proposing here, the algorithm being used in Allegheny County taps into personal data from government data sets—including Medicaid, mental health, and jail and probation records—to calculate numerical risk scores of likely anti-social behaviours and/or the likelihood of chronic welfare dependency and abuse.
Last July, Stuff’s Charlie Mitchell wrote a terrific backgrounder on the recent history (and alarming extent) of the data-matching capability now available to the pool of researchers granted access to the massive Integrated Data Infrastructure (IDI) data bank held by the Statistics Department.
One of the concerns with National’s plans to radically de-centralise welfare delivery has to do with whether the charitable bodies and religious groups that National plans to entrust with this work will also be given access to the highly personal data contained within the IDI. (While data is anonymised, the IDI matching process can, with a very high level of probability, readily identify individuals.)
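The parenthetical point – that linked “anonymised” data can still single people out – is worth unpacking. The sketch below uses invented records, not the IDI itself, to show the well-known quasi-identifier problem: each attribute alone is common, but the combination is often unique:

```python
# Hypothetical illustration (invented records, not IDI data) of why linked
# "anonymised" records can still identify individuals: combinations of
# ordinary attributes -- quasi-identifiers -- are frequently unique.

records = [
    {"id": "a", "birth_year": 1998, "region": "Northland", "sex": "F", "benefit": True},
    {"id": "b", "birth_year": 1998, "region": "Auckland",  "sex": "F", "benefit": True},
    {"id": "c", "birth_year": 1998, "region": "Northland", "sex": "M", "benefit": True},
    {"id": "d", "birth_year": 1975, "region": "Northland", "sex": "F", "benefit": False},
]


def matches(birth_year: int, region: str, sex: str) -> list:
    """Return every record sharing this combination of attributes."""
    return [r for r in records
            if r["birth_year"] == birth_year
            and r["region"] == region
            and r["sex"] == sex]


# Just three everyday attributes narrow four records down to one person.
unique = matches(1998, "Northland", "F")
```

At the IDI’s scale the same logic applies: the more data sets are matched together, the more attribute combinations collapse to a single individual, which is why “anonymised” is doing a lot of work in official reassurances.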
One 2017 research paper obtained by Mitchell under the OIA used the IDI to examine around 50,000 children born in 1998. It then tracked them through the IDI to the point where the individuals under scrutiny either passed or failed NCEA Level 2:
What factors in their early lives best predicted whether they would fail? There were five that stood out. The proportion of time the child was supported by a benefit; being male; being Māori; mother’s age at first child; and mother’s education.
Again, note how the mere receipt of a benefit is being associated with anti-social outcomes, and especially so if the recipients are Māori, and/or are women. By 2019, this list had expanded to include 26 variables, and has since expanded again to contain 37 behavioural characteristics. As Mitchell indicated, the potential for stigmatisation was recognised:
Documents reveal that more variables were considered – and could technically have been added, because they were in the IDI – but were rejected due to the “high risk of potential stigma”. Among them were whether the child had suffered an accident at home (such as poisoning), and the number of times a parent had been hospitalised for mental health reasons, including self-harm.
Other objections aside, the privacy concerns are obvious. Bring up to 37 factors to bear, and it will be pretty clear which individuals the officials will then have in their sights for corrective action.
Baby and bathwater
Supposedly, the variables deemed relevant to social investment procedures can be screened for bias, and the algorithm can also be tweaked to lower the risk of negative social stereotyping. Overseas, some of these tweaks have not done much to allay the underlying concerns. For example: the Allegheny Family Screening Tool reportedly makes regular use of “proxy” variables to make up for gaps in the statistical data, or in situations where the data is contradictory. Yet reportedly, these proxies include “co-referrals” whereby the families in question have been the subject of complaints lodged by ordinary members of the public.
Problem being… These “co-referral” proxies are themselves reflective of the same racial stereotypes, given that poor families from ethnic minorities are many times more likely to be the subject of such complaints than white families. The American Civil Liberties Union (ACLU) reached this pessimistic conclusion in its essay about family surveillance by algorithm:
Historically over-regulated and over-separated communities may get caught in a feedback loop that quickly magnifies the biases in these systems. Even with fancy — and expensive — predictive analytics, the family regulation system risks surveilling certain communities simply because they have surveilled people like them before. Or, as one legal scholar memorably framed it, “bias in, bias out.”
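The ACLU’s “bias in, bias out” point has a simple mechanical core. In this hypothetical sketch (all numbers invented for illustration), a naive model fitted to historical referral records learns each community’s *referral* rate rather than its true rate of harm:

```python
# Hypothetical sketch of "bias in, bias out": a naive model trained on
# historical referral records can only echo the labels it was given.
# All rates below are invented for illustration.

historical_data = {
    # community: (true_harm_rate, historical_referral_rate)
    "over-surveilled":  (0.05, 0.20),  # same true risk, four times the referrals
    "under-surveilled": (0.05, 0.05),
}


def learned_score(community: str) -> float:
    """The model's output mirrors the referral rate in its training data."""
    _, referral_rate = historical_data[community]
    return referral_rate


# Identical true risk, yet the learned scores differ fourfold -- so the
# over-surveilled community keeps being flagged, generating still more
# referral records for the next round of training.
ratio = learned_score("over-surveilled") / learned_score("under-surveilled")
```

Because each new flag produces another referral record, the gap does not merely persist – it feeds the next training cycle, which is exactly the feedback loop the ACLU describes.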
In sum, the alleged risks and the technical “fixes” being trialled to allay concerns about “social investment” strongly indicate that the entire approach is still very much a work in progress. No one is saying Big Data cannot ever be a useful tool, but only if used with care, within prescribed contexts. Needless to say, that’s not what’s happening.
It is also safe to surmise that… If the US Justice Department was “scrutinising” a key Labour policy because federal agencies had concerns about the potential for negative social outcomes from say, Fair Pay Agreements, this would be making front page news in New Zealand. Instead, National’s portrayal of social investment as an entirely benign process is being taken at face value here.
Plainly, National needs to be far more transparent about how far it intends to pursue the social investment approach, and how it aims to address its potential harms. Especially given that some of the same people who helped to devise the original English/Bennett model have been central to developing the tools now reportedly being scrutinised in the US, by the Justice Department.
Footnote: For starters, here are a few areas where we need more information before voters give National the power to put its “social investment” policies into practice:
1. How much does National expect to invest annually in its mooted Social Investment Fund (a) during its start-up phase, and (b) once it has become fully operational?
2. How will the private sector firms, NGOs and charities involved in the delivery of the social investment approach be vetted for participation in the programme, and how will their activities be regulated by the state?
3. Will the private sector participants be given the power to impose sanctions and recommend the termination of benefit supports?
4. Will the philanthropists and charities urged to invest in National’s Social Investment Fund be able to claim those “donations” as tax deductions? If so, how will National ensure these costs are not falsely inflated? What will be the estimated annual cost (a) of the tax deductions and (b) of the additional IRD monitoring?
5. Given the stress National is placing on gaining “results”, will the social investment approach treat (a) poverty alleviation or (b) a reduction in benefit numbers as its hallmark of success?
6. If poverty alleviation is to be the primary goal of social investment, how does National propose to measure whether its social investment programme has succeeded or failed in alleviating poverty?
7. Will the participating private sector firms, NGOs and charities be allowed to make a profit over and above their operating costs from their involvement in the Social Investment Fund process – and if so, by what means and at what acceptable rate of return?
8. Is National proposing to underwrite all of the upfront costs of the private sector engagement in social investment?
Social investment – or privatisation?
In an Orwellian twist, the “social investment” approach does not identify – let alone “invest” in correcting – any of society’s structural factors that contribute to poverty, criminal behaviours, drug and alcohol addiction, gang membership or welfare dependency. Instead, the approach treats individual behaviours as the sole genesis of the problem, and then tracks the traits and outcomes that aggregates of such individuals share in common. Being on a benefit – and getting off a benefit – are treated as a lifestyle choice, or as a preference that the state would be well advised to treat with a firm hand.
Theoretically, the state could use Big Data to direct practical assistance – in the shape of higher benefit levels? – to the individuals, families and communities that the data has identified as being at risk. Fat chance of that. In practice, the welfare algorithms more often become a targeting tool aimed at the underclass – and as mentioned, they routinely target the individuals and families who have had prior contact with Police, welfare workers and the courts.
That’s because the algorithms base the likely risk scores of offending or reliance on welfare upon the historical outcomes for people coming from similar family backgrounds of abuse or violence, and with similar reliance on income from benefits, and with similar racial and gender characteristics etc. Interventions by Police and social workers will then routinely ensue. For those on the receiving end, this data-driven approach can look less like a lifeline, and more like a harassment device.
Many critics of social investment have pointed out its similarity to the Tom Cruise film Minority Report – in that action by the authorities is likely to be triggered less by actual behaviours than by AI tools designed to pre-empt potential anti-social behaviours before they happen, largely based on a risk calculus model. It’s not what you did, it’s what the data has told Big Brother about what you might be likely to do in future. Poverty, in this context, is at risk of being treated as a thought crime.
Bonding the state
So far the outcomes in New Zealand from the social investment approach have been banal, but instructive. In line with its enthusiasm for data-driven social interventions, the English/Bennett duo launched a “social bonds” method of winning private sector involvement in welfare delivery. The pilot attempt failed.
The government's first social bond has collapsed, with negotiations breaking down and the provider walking away. The largely untested social bond model uses private investors' money to pay a provider for a social service. If the service is successful, the government pays out.
And here’s one big reason why the wheels fell off:
The government said when it announced the funding last year that it would not face any liability for the scheme and would only pay if the programme delivered a result. In April, RNZ reported the programme was at a standstill because investors were apparently reluctant due to uncertainty over whether the government would guarantee the security of the private funding. RNZ understands the government has now offered money to investors to get them over the line and put money in the bonds.
Got that? The Key/English government did its best to insert the private sector and its fabled market efficiencies into welfare delivery. Yet the prospective investors backed off because the government wouldn’t guarantee to underwrite all of the costs the providers might conceivably be at risk of incurring, upfront.
This sounds less like a free market, and more like a guaranteed gravy train. Similarly in future, any upfront costs to the private sector from their involvement with the social investment strategy could well be met entirely by a National government. Quite possibly this “investment” will not only be made risk free, but tax deductible, and with a margin on top as a bonus for delivering the right “results.”
You might well wonder why – other than for ideological reasons – a National government would want to pay a private sector middleman upfront to deal with families in need when its own agencies are doing the same job, often in tandem with the same limited number of community groups.
Or maybe… Does National have a different job in mind, with a different definition of what would constitute successful “results”? One immediate way that revenue could be saved via the social investment model would be if as many people as possible were to be removed from the welfare rolls, mainly by hiring private firms and organisations to do this dirty work, for a fee.
This isn’t (just) a conspiracy theory. After all… The private sector participants will have a clear financial incentive to reduce the benefit rolls because this reduction will – presumably – be the “results” that determine whether their contracts with the Social Investment [Slush] Fund get renewed. In that respect, social investment promises to be less about “investing” in alleviating poverty than about policing those in need more rigorously, such that they eventually fall off the books.
Footnote One: While not as gaffe-prone as her leader, Nicola Willis has hardly been a model of clarity as to why an increase in the privatisation of welfare delivery is desirable, or necessary. To repeat: The Social Investment Fund would duplicate the existing work of state agencies that already co-operate with (and already help to fund) a considerable number of NGOs, charities and community groups.
These happen to include many of the same grassroots activists – community organiser David Letele, mental health activist Mike King and KidsCan’s Julie Chapman – cited by Willis to RNZ’s Corin Dann as being likely to benefit from what National has in mind. Except, as Dann pointed out, they are already receiving funding from the current government.
No matter. Willis also promised to Dann that “I will bring those [social investment] principles to bear across our government departments, across our policy making… this approach has to be brought to bear on all of the investment we do across government, so that we are intervening earlier in people’s lives, using data and evaluation… scaling up the things that are working, scaling down the things that aren’t.”
Right. So, Dann reasonably asked, did this promised extension of social investment principles into all areas of the state’s social spending mean – for example – that National would be bringing back private prisons? “No, that’s not our policy, to do it with private prisons.” No, not that. “But look,” she indicated shortly afterwards, “this isn’t about one particular way of doing things.” Hmm. So National is going to apply data-driven social investment discipline to all areas of the government’s social spending programme – except where it won’t, because this isn’t the only way of doing things. Clear as mud.
Footnote Two: One item on Bill English’s otherwise failed “social bonds” programme remains in existence. As Willis pointed out on RNZ, the Genesis Youth Trust was working with Oranga Tamariki and with 1,000 youth offenders, trying to keep them on the straight and narrow.
There’s been an investment partner linked to that project, Willis continued. Someone who said, she explained: “Well, if you’re going to invest in the people and the resources you need, you’re going to need upfront cash. We will give you that upfront cash. And the government has said well, we will pay for that, so that money goes to the investor and they get that money over time.”
Righto. Again, it is unclear why the government didn’t simply channel all the money through Oranga Tamariki, rather than pay a middleman a premium for his or her efforts… Yet going by Willis’ own account, it would seem that the current government, Oranga Tamariki, and a community provider in the shape of the Genesis Youth Trust are all already working together on a project aimed at reducing youth re-offending. In this limited instance, ‘social investment’ seems to be happening already.
True, National does plan to massively scale it up. Interestingly, this route is being taken even though, as Willis indicated to RNZ, the ultimate success or failure rate of the Genesis Youth Trust scheme was, as yet, still unknown to her.
Logic would suggest that it might therefore have been wise to wait and assess those outcomes before promising to apply something very much like the same model, nationwide. But no such luck. Social investment is going to be scaled up by National, regardless. Other objections aside, there are sound historical reasons for feeling worried whenever the National Party promises to Think Big.
Footnote Three: The “social investment” approach to welfare delivery virtually begs to be satirised.