As the above shows, automated systems do not always produce fair or
reasonable outcomes, particularly where they encounter situations which have
not been anticipated in their programming.185 Automated systems can also
contain inbuilt direct and indirect discrimination within their programming.186
Without transparency regarding the development and operation of automated
systems, accountability and remedy for these errors and discrimination are not
possible.
2.87 As Richard Pope has pointed out, the UC system is currently very one-sided:
it collects large amounts of personal data and has a detailed, real-time view of
how the public are using the service. However, those wishing to hold the
government to account for its actions have little information to go on. Being
transparent about how DWP’s systems work and how they change will enable
185 As demonstrated by the recent Post Office Horizon scandal. Between 2000 and 2014, 736 sub-
postmasters and sub-postmistresses were prosecuted for theft, fraud and false accounting on the basis of
information from the Horizon IT system, an electronic point of sale and accounting system used in
post office branches. Some went to prison, and many were financially ruined. In December 2019 the
High Court found that the Horizon system contained numerous bugs, errors and defects and that there
was a ‘material risk’ that the shortfalls in branch accounts were caused by the IT system (Bates v Post
Office Limited [2019] EWHC 3408 (QB)). In April 2021 the Court of Appeal quashed the convictions
of 39 sub-postmasters, clearing the way for many of the others to challenge their convictions (Hamilton
v Post Office Limited [2021] EWCA Crim 577); see also K. Peachey, ‘Post Office scandal: What the
Horizon saga is all about’ (BBC, 23 April 2021). More than 20,000 parents were falsely accused of
child benefit fraud by an automated fraud detection system in the Netherlands (‘A benefits scandal
sinks the Dutch government’ (The Economist, 21 January 2021)). In Ontario, Canada, a predictive
analytics programme which supplemented case workers’ assessments of eligibility and benefit levels
was found to have had 2,400 serious defects at the time of its launch, which affected clients’ eligibility
for benefits and the payments they received. See Ministry of Community and Social Services, ‘SAMS –
Social Assistance Management System’ in 2015 Annual Report of the Office of the Auditor General of
Ontario (2015).
186 For example, the risk factors used by the algorithm in the benefits-fraud detection system in the
Netherlands included parents’ dual nationality as a fraud risk, which amounted to ethnic profiling.
See Dutch Parliamentary Inquiry Committee, Unprecedented Injustice (December 2020). The data used
to train an AI system may introduce implicit biases, and parameters with limited adaptability to
individual situations can result in indirect discrimination. This concern was raised in relation to an
authentication algorithm piloted in California, since the questions risked creating additional obstacles
for marginalised and migrant populations, for example by assuming established residential ties. See
Coalition of California Welfare Rights Organisations, Inc., ‘Advocate Response to DSS Options for
Replacing the Statewide Fingerprint Imaging System (SFIS)’ (2017).