Are robots an answer to the ‘care crisis’?

This piece first appeared on the ANZSOG blog

An ANZSOG-funded research project is exploring the increasing use of robots in care services to replace or complement the roles of humans. In this article, researchers explore how the growth of robots in care services is changing how we think about care, and what we need to do to ensure that the ethics of care are maintained. The full report is expected to be published in the near future.

By Catherine Smith, Helen Dickinson, Gemma Carey & Nicole Carey

It is well-established within policy and practice circles that we are facing an impending crisis of care.

Australia, like a number of other advanced liberal democracies, is anticipating a future with an older population and a more complex mix of chronic illness and disease. Many care organisations already operate under tight fiscal constraints and report challenges in recruiting sufficient numbers of appropriately qualified staff. In the future, a smaller working-age population and a growing number of retirees will compound this problem. Add to this equation the fact that consumer expectations are rising, and future care services start to look like they are facing something of a perfect storm.

Robots are increasingly becoming a feature of our care services, capable of fulfilling a number of roles from manual tasks through to social interaction. Their wider use has been heralded as an important tool in dealing with our impending care crisis.

We have recently completed an ANZSOG-funded research project exploring the roles robots should play in care settings, with particular attention to what this tells us about how care is defined. In our research we explored how robots are currently being used across a range of care services (health, disability, education and aged care) and the areas where they are likely to develop further in the future. We found that care is not a simple concept but a complex and relational set of practices, which has important implications for policy.

One thing that we were interested in exploring with interviewees is: if robots are an answer to the care crisis, then what is it that we mean by care? Care is one of those terms that we all use regularly, but don’t often stop to define precisely what it is that we mean. So what activities do we think robots might undertake and what are the implications?

What is care?

Typically, when public agencies design care services they pull together a series of different activities (e.g. cleaning, washing, feeding and supporting other practical needs of day-to-day living) that comprise those services. Indeed, if we think back to when care services were first outsourced from local governments, this was often done by individuals observing workers and listing the different activities that they undertook.

In our research, although care was defined in terms of different sets of activities, other facets were also thought to be crucial: namely, that care is a relational and responsive activity.

Participants aligned the concepts of robots and care with a definition of care that largely focused on the relational aspect of care service practices. Most people reasoned that humans and human interaction are essential to care relationships, and that robots would not be able to replace this.

However, in the care of people with needs associated with autism and dementia the non-human qualities of a robot are seen to be a strength in relational care. In both situations, participants identified that robots are able to undertake repetitive tasks without experiencing the monotony and potential boredom of a human. Robots were described in these scenarios as having no emotional baggage, being patient and unable to get angry.

This was seen as an opportunity to remove a potential stressor from the relationship between the primary carer and those being cared for – an augmentation of their care relationship, not a replacement. It was also identified as an opportunity to give the carer additional time to address other activities.

Robots were identified as a way to combat loneliness and isolation, but with the caveat that they could also generate further isolation if their ‘company’ is used to replace human contact. In most cases, the robot is conceptualised as facilitating relationships. Some participants saw that robots provided a conversation piece and relational bridge between the cared-for and other people in their wider community, such as peers or family members from other generations.

Care is therefore seen as something that is defined in terms of a relationship, and where responsiveness to the needs of the cared-for is elemental to success. An element which arises in much of the care literature is one of reciprocity, where there is a synergy that develops in such a relationship. The role of the cared-for and the carer can be fluid, with the cared-for strengthened by the value they can bring to the relationship, and the reward that is felt in the giving of care.

These themes of responsiveness and reciprocity arose particularly in discussions of ‘Paro’ – a robotic seal that responds with sound and movement to the touch of another. The robot is soft to touch and invites actions of nurture. This was identified as particularly useful for people with conditions such as dementia and autism, where its primary use was settling erratic behaviour. Beyond this, the opportunity to provide for responsiveness and reciprocity was largely unexplored, aside from general discussions around the importance of empathy and the need for human carers to achieve it.

Ethics of care and implications for policy and practice

Describing care as a responsive, relational activity aligns closely with an ‘ethics of care’ perspective.

In care ethics, care involves bestowing value on the cared for and activity that provides for their needs. Tronto identified that good care comes about when both of these dimensions – caring about and caring for – are present. Care is oriented toward particular beliefs, including concern and the ability to discern the risks of interference over the risks of inaction; interpretation of the responsibilities in each situation as opposed to aligning to a rigid set of rights; and responsiveness aligned with the setting and the individual. Privacy, dignity and agency are all of particular concern in the provision of care in services as a result of these orientations.

If we define care practice in terms of ethics, then accountability of the relationships of care goes beyond the cared for and the carer. It also includes those who have determined the ethical systems that guide robot behaviour, and therefore expands the care relationship into opaque and impersonal elements that require consideration. This has important implications in terms of policy and practice. If we replace some or part of a care process with a robot, it may have far-reaching implications.

We therefore need to carefully consider how robot technologies fit within models of care. Without this there is a danger that we will not use these tools to their full effect, or will create unanticipated consequences.

Members of the research team:

  • Helen Dickinson, Public Service Research Group, University of New South Wales, Canberra
  • Nicole Carey, Self-Organizing Systems Research Group, Harvard University
  • Catherine Smith, Youth Research Centre, University of Melbourne
  • Gemma Carey, Centre for Social Impact, University of New South Wales

Where should we use robots in care services?

Rarely a day goes by without a media story about robots and the threats and opportunities they pose to various aspects of our day-to-day lives. The Public Service Research Group has recently been awarded a research grant by the Australia and New Zealand School of Government to investigate the use of robots in care services and the implications for government in stewarding these technologies. Led by me and Gemma Carey, this project also involves Catherine Smith (Melbourne Graduate School, University of Melbourne) and Nicole Carey (Self-Organizing Systems Research Group, Harvard University).

The good folk over at The Mandarin have published a piece from us today on this project (you can find it here).  If you are interested in finding out more about this project or potentially being a case study then please get in touch with us.

Robo-debt, Centrelink and collaborative working

There is a story that has featured fairly prominently in the Australian media of late that hits many of my research interests. The story focuses on Centrelink, a program within the Department of Human Services that delivers a range of government payments and services for people who are unemployed, people with disabilities, carers, parents, Indigenous Australians and others. The majority of Centrelink’s work involves disbursing social security payments.

Last year an automated debt recovery system (the snappily titled Online Compliance Intervention) was introduced with the aim of recovering $4.5 million in welfare debt every day. The program matches data gathered from other government agencies (e.g. the Australian Tax Office) against what has been reported to Centrelink. The aim is to work out where people have been overpaid benefits and then to reclaim those overpayments.

In the past the same kind of data matching was used, but referrals were passed to a Centrelink officer who would investigate before sending out a letter asking for more information about any discrepancies. Between July and December of last year, 170,000 compliance notices were sent out automatically, where previously only about 20,000 a year were issued.

The problem came about because many people reported not receiving these letters (some went to old addresses, or recipients did not check their MyGov account), and they were then contacted by private debt collectors working for the department. Essentially, if you do not respond to one of these letters within 21 days and provide more information, this is taken as evidence of an overpayment and the process of reclaiming the money begins.
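
The decision logic described above can be sketched roughly as follows. This is an illustrative reconstruction from the reporting, not the department’s actual code; the class, field and function names are all hypothetical.

```python
from dataclasses import dataclass

RESPONSE_DEADLINE_DAYS = 21

@dataclass
class ComplianceCase:
    apparent_discrepancy: float  # averaged ATO income minus income reported to Centrelink
    responded: bool              # did the recipient supply more information?
    days_since_notice: int

def proceeds_to_debt_recovery(case: ComplianceCase) -> bool:
    """Sketch of the contested logic: silence is treated as proof of a debt."""
    if case.apparent_discrepancy <= 0:
        return False  # no apparent overpayment, nothing to pursue
    if case.responded:
        return False  # information supplied, so the claim can be checked
    # No response within 21 days is taken as evidence of an overpayment,
    # even if the letter went to an old address or an unchecked MyGov inbox.
    return case.days_since_notice > RESPONSE_DEADLINE_DAYS
```

The striking design choice is the final branch: non-response, rather than any positive evidence, is what tips a case into debt recovery.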

Although the official figures aren’t known, it has been suggested that about 20 per cent of these notifications are issued in error. Given the nature of Centrelink’s work, this typically means that individuals and families in the lowest socio-economic groups are being wrongfully pursued for debts they don’t owe. In some cases people were asked for information (such as payslips) going back to 2012, and had to pay back debts they did not owe because they could not prove otherwise.

As this story hit the news it gained the moniker ‘robo-debt’, and a number of early stories blamed the robots for incorrectly calculating figures. These stories pointed to decisions that would not have got past ‘human’ quality control: the system calculates fortnightly income by dividing annual income by 26, fails to pick up mis-spellings of employer names, and has other egregious flaws that resulted in people being inappropriately pursued for debt.
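
The income-averaging flaw is easy to show with a toy example. The numbers below are hypothetical: a recipient who earned $2,000 a fortnight for half the year, then truthfully reported $0 while on benefits for the other half.

```python
FORTNIGHTS_PER_YEAR = 26

# Hypothetical recipient: 13 fortnights at $2,000, then 13 fortnights at $0
# while on benefits (all correctly reported at the time).
actual_fortnightly = [2000] * 13 + [0] * 13
annual_total = sum(actual_fortnightly)  # 26000

# The contested approach: assume the annual figure was earned evenly.
averaged = annual_total / FORTNIGHTS_PER_YEAR  # 1000.0 per fortnight

# Averaging wrongly attributes $1,000 of income to each of the 13
# fortnights in which the person truthfully reported $0, producing an
# apparent discrepancy where none exists.
false_discrepancies = [averaged - reported
                       for reported in actual_fortnightly
                       if reported == 0]
```

Every one of those thirteen ‘discrepancies’ is an artefact of the averaging, not evidence of misreporting.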

I’ve been getting interested in the use of robots in public services of late (more to follow soon), and here were robots being blamed for poor people being pursued for money they didn’t owe. What seems fairly well established now is that the robots weren’t to blame for this outcome – they were just doing what they had been programmed to do. What had changed is that the human quality-control step had been removed, and the algorithms used to identify debts had not been revisited in light of what we know about how many of the cases flagged under the old system were actually followed up. The process of communicating with those being pursued for debts was in practice more problematic than the initial process of identification, and led to distress for many of those caught up in it.

The final bit of the story, which has come out more recently, is even more interesting with respect to what went wrong. In a recent ABC news story, Henry Belot found that there was no formal briefing on this issue between the Department of Social Services (DSS) and the Department of Human Services (DHS). Broadly speaking, DSS is responsible for developing debt recovery policy, which DHS then leads on implementing. So the department that developed the policy didn’t formally brief the department that implemented it.

What this shows is the importance of developing policy with those who implement it, and the challenges that arise when you don’t. This is a stark illustration of the policy implementation gap, a key theme of the new research group that I head – the Public Service Research Group at UNSW. The example further demonstrates the importance of collaborative working, and what happens when different agencies don’t work together. You can find more analysis and expert commentary from me and others in Henry’s piece, which is here.