
As the above shows, automated systems do not always produce fair or reasonable outcomes, particularly where they encounter situations which have not been anticipated in their programming. Automated systems can also contain inbuilt direct and indirect discrimination within their programming.186 Without transparency regarding the development and operation of automated systems, accountability and remedy for these errors and discrimination are not possible.

2.87  As Richard Pope has pointed out, the UC system is currently very one-sided: it collects large amounts of personal data and has a detailed, real-time view of how the public are using the service. However, those wishing to hold the government to account for its actions have little information to go on. Being transparent about how DWP's systems work and how they change will enable effective scrutiny of UC, which will contribute to public support for, and confidence in, the system.187

2.88  A number of consultees we spoke to raised questions or concerns about what the Intelligent Automation Garage is doing. There is very little information about it in the public domain. The DWP told us that to date it has built and deployed 50 automations. These are applied to mundane processes and

v Post Office Limited [2021] EWCA Crim 577 (see also K. Peachey, 'Post Office scandal: What the Horizon saga is all about' (BBC, 23 April 2021)). More than 20,000 parents were falsely accused of childcare benefit fraud by an automated fraud detection system in the Netherlands ('A benefits scandal sinks the Dutch government' (The Economist, 21 January 2021)). In Ontario, Canada, a predictive analytics programme which supplemented case workers' assessments of eligibility and benefit levels was found, at the time of its launch, to have had 2,400 serious defects, which affected clients' eligibility for benefits and the payments they received. Ministry of Community and Social Services, 'SAMS – Social Assistance Management System' in 2015 Annual Report of the Office of the Auditor General of Ontario (2015).
186  For example, the risk factors used by the algorithm in a childcare benefits-fraud detection system in the Netherlands included parents' dual nationality as a fraud risk, which amounted to ethnic profiling. See Dutch Parliamentary Inquiry Committee, Unprecedented Injustice (December 2020). The data used to train an AI system may introduce implicit biases, and parameters with limited adaptability to individual situations can result in indirect discrimination. This concern was raised in relation to an authentication algorithm piloted in California, since the questions risked creating additional obstacles for marginalised and migrant populations, for example by assuming established residential ties. See Coalition of California Welfare Rights Organisations, Inc., 'Advocate Response to DSS Options for Replacing the Statewide Fingerprint Imaging System (SFIS)' (2017).
187  R. Pope, Universal Credit: Digital Welfare (see n. 56 above) pp. 100-101.

