Building on work that Gemma Carey and I have previously done on the role of feminist theory in public policy, we had the great privilege of working with Eva Cox on a special issue of the Australian Journal of Public Administration. Titled Gender, power and the use of evidence in policy, it sought to bring a gender analysis and/or feminist lens to a diverse range of policy and public administration literature, ‘slanting’ how we perceive and understand them.
In 2015, Gemma and I curated an online special issue for the Australian Journal of Public Administration in which we surveyed research published in the journal archive relating to issues of gender and feminism. We found that two major silences exist in public administration concerning gender. The first is the place of women and gender equity within public service workforces. The second silence is the role that feminist theories could play in tackling contemporary public management challenges. We argued that there are particular contributions that feminist theories could make in relation to topics such as collaboration, boundary-spanning and skill requirements for future public sector workers. From this work, we conceived a special issue dedicated to addressing these silences.
In 2016 we put out a call for papers to address this space. We challenged authors not just to consider gender in their work, but to explore how a feminist approach might enhance their various domains of policy work. While feminist policy is not a new idea, we believe this collection provides a much-needed foray into the practical application of feminism across a breadth of policy work. In a parallel process, we also took a feminist approach to putting together this special issue. Rather than the traditional blind peer review process, all three editors reviewed each paper multiple times – working with authors to craft their research. The aim was to replace the traditional authoritarian review process with a more constructive and collaborative practice. In doing so, we provided a robust peer-review process that paralleled the theoretical approaches reflected in the work included in this special issue.
The special issue includes an array of great papers.
Ultimately we recognise that this is a long-term project. Like policy itself, change is often frustratingly incremental when it comes to both the way we think about women and, more broadly, altering the paradigms in which we operate.
A few years back I was lucky enough to be part of a team funded to explore the impact that the NDIS is having on service providers. The work was funded by the Australia and New Zealand School of Government and reported a while ago, but now the journal articles from this project are starting to emerge.
The first is in Health and Social Care in the Community and the abstract appears below:
As governments worldwide turn to personalised budgets and market‐based solutions for the distribution of care services, the care sector is challenged to adapt to new ways of working. The Australian National Disability Insurance Scheme (NDIS) is an example of a personalised funding scheme that began full implementation in July 2016. It is presented as providing greater choice and control for people with lifelong disability in Australia. It is argued that the changes to the disability care sector that result from the NDIS will have profound impacts for the care sector and also the quality of care and well‐being of individuals with a disability. Once established, the NDIS will join similar schemes in the UK and Europe as one of the most extensive public service markets in the world in terms of numbers of clients, geographical spread, and potential for service innovation. This paper reports on a network analysis of service provider adaptation in two locations—providing early insight into the implementation challenges facing the NDIS and the reconstruction of the disability service market. It demonstrates that organisations are facing challenges in adapting to the new market context and seek advice about adaptation from a stratified set of sources.
If you have seen science fiction television series such as Humans or Westworld, you might be imagining a near future where intelligent, humanoid robots play an important role in meeting the needs of people, including caring for children or older relatives.
The reality is that current technologies in this sector are not yet very humanoid, but nonetheless, a range of robots are being used in our care services including disability, aged care, education, and health.
Our new research, published today by the Australia and New Zealand School of Government, finds that governments need to carefully plan for the inevitable expansion of these technologies to safeguard vulnerable people.
Care crisis and the rise of robots
Australia, like a number of other advanced liberal democracies, is anticipating a future with an older population, with a more complex mix of chronic illness and disease. A number of care organisations already operate under tight fiscal constraints and report challenges recruiting enough qualified staff.
In the future, a smaller working-age population and an increased number of retirees will compound this problem. If we then add to this equation the fact that consumer expectations are increasing, it starts to look like future care services are facing something of a perfect storm.
Robots are increasingly becoming a feature of our care services, capable of fulfilling a number of roles from manual tasks through to social interaction. Their wider use has been heralded as an important tool in dealing with our impending care crisis. Countries such as Japan see robots playing a key role in filling their workforce gaps in care services.
A number of Australian residential aged care facilities are using Paro, a therapeutic robot that looks and sounds like a baby harp seal. Paro interacts by moving its head, heavily-lashed wide eyes and flippers, making sounds and responding to particular forms of touch on its furry coat.
Paro has been used extensively in aged care in the United States, Europe and parts of Asia, typically among people living with dementia.
Nao is an interactive companion robot developed in a humanoid form but standing just 58cm tall.
Our research explored the roles robots should and, even more critically, should not play in care delivery. We also investigated the role of government as a steward in shaping this framework through interviews with 35 policy, health care and academic experts from across Australia and New Zealand.
We found that despite these technologies already being in use in aged care facilities, schools and hospitals, government agencies don’t typically think strategically about their use and often aren’t aware of the risks and potential unintended consequences.
This means the sector is largely being driven by the interests of technology suppliers. Providers are in some cases purchasing these technologies to differentiate themselves in the market, but are not always engaging in critical analysis of their use.
Our study participants identified that robots were “leveraged” as something new and attractive to keep young people interested in learning, or as “a conversation starter” with prospective families exploring aged care providers.
But there are significant risks as the technologies become more developed. Drawing on research in other emerging technologies, our participants raised concerns about addiction and reliance on the robot. What would happen if the robot broke or became obsolete, and who would be responsible if a robot caused harm?
As artificial intelligence develops, robots will develop different levels of capabilities for “knowing” the human they are caring for. This raises concerns about potential hacking and security issues. On the flip side, it raises questions of inequity if different levels of care are available at different price points.
Participants were also concerned about the unintended consequences of robot relationships on human relationships. Families may feel that the robot proxy is sufficient companionship, for instance, and leave their aged relative socially isolated.
What should governments do?
Government has an important role to play by regulating the rapidly developing market.
We suggest a responsive regulatory approach, which relies on the sector to self- and peer-regulate, and to escalate issues as they arise for subsequent regulation. Such engagement will require education, behaviour change, and a variety of regulatory measures that go beyond formal rules.
Government has an important role in helping providers understand the different technologies available and their evidence base. Care providers often struggle to access good evidence about technologies and their effectiveness. As such, they’re largely being informed by the market, rather than high quality evidence.
Many of the stakeholders we spoke to for our research also see a role for government in helping generate an evidence base that’s accessible to providers. This is particularly important where technologies may have been tested, but in a different national context.
Many respondents called for establishment of industry standards to protect against data and privacy threats, and the loss of jobs.
Finally, governments have a responsibility to ensure vulnerable people aren’t exploited or harmed by technologies. And they must also ensure robots don’t replace human care and lead to greater social isolation.
An ANZSOG-funded research project is exploring the increasing use of robots in care services to replace or complement the roles of humans. In this article, researchers explore how the growth of robots in care services is changing how we think about care, and what we need to do to ensure that the ethics of care are maintained. The full report is expected to be published in the near future.
By Catherine Smith, Helen Dickinson, Gemma Carey & Nicole Carey
It is well-established within policy and practice circles that we are facing an impending crisis of care.
Australia, like a number of other advanced liberal democracies, is anticipating a future with an older population, with a more complex mix of chronic illness and disease. A number of care organisations already operate under tight fiscal constraints and report challenges in recruiting sufficient numbers of appropriately qualified staff. In the future, a smaller working-age population and an increased number of retirees will compound this problem. If we then add to this equation the fact that consumer expectations are increasing, then it starts to look like future care services are facing something of a perfect storm.
Robots are increasingly becoming a feature of our care services, capable of fulfilling a number of roles from manual tasks through to social interaction. Their wider use has been heralded as an important tool in dealing with our impending care crisis.
We have recently completed an ANZSOG-funded research project exploring the roles robots should play in care settings, with particular attention to what this tells us in terms of definition of care. In our research we explored how robots are currently being used across a range of care services (health, disability, education and aged care) and areas where they will likely develop further in the future. We found that care is not a simple concept but a complex and relational set of practices which has important implications for policy.
One thing that we were interested in exploring with interviewees is: if robots are an answer to the care crisis, then what is it that we mean by care? Care is one of those terms that we all use regularly, but don’t often stop to define precisely what it is that we mean. So what activities do we think robots might undertake and what are the implications?
What is care?
Typically, when public services think about designing care services they inevitably pull together a series of different activities (e.g. cleaning, washing, feeding and supporting other practical needs in day-to-day living) that comprise those services. Indeed, if we think back to when care services were first outsourced from local governments, this was often done by individuals observing workers and listing the different activities that they undertook.
In our research, although care was defined in terms of different sets of activities, other facets were also thought to be crucial, namely that care is a relational and responsive activity.
Participants aligned the concepts of robots and care with a definition of care that largely focused on the relational aspect of care service practices. Most people reasoned that humans and human interaction are essential to care relationships, and that robots would not be able to replace this.
However, in the care of people with needs associated with autism and dementia the non-human qualities of a robot are seen to be a strength in relational care. In both situations, participants identified that robots are able to undertake repetitive tasks without experiencing the monotony and potential boredom of a human. Robots were described in these scenarios as having no emotional baggage, being patient and unable to get angry.
This was seen as an opportunity to remove a potential stressor from the relationship between the primary carer and those being cared for, and as an augmentation of their care relationship, not a replacement. It was identified as an opportunity to provide the carer with the additional time to address other activities.
Robots are identified as a way to combat loneliness and isolation but with a caveat of concern that they could also generate further isolation if their ‘company’ is used to replace human contact. In most cases, the robot is conceptualised as facilitating relationships. Some participants saw that they provided a conversation piece and relational bridge for the cared for and other people in their wider community, such as peers or family members from other generations.
Care is therefore seen as something that is defined in terms of a relationship, and where responsiveness to the needs of the cared-for is elemental to success. An element which arises in much of the care literature is one of reciprocity, where there is a synergy that develops in such a relationship. The role of the cared-for and the carer can be fluid, with the cared-for strengthened by the value they can bring to the relationship, and the reward that is felt in the giving of care.
Concerns of this nature arose particularly in discussions of ‘Paro’ – a robotic seal that responds with sound and movement to the touch of another. The robot is soft to touch and invites actions of nurture. This was identified as particularly useful for people with conditions such as dementia and autism, where its primary use was settling erratic behaviour. The opportunity to provide for responsiveness and reciprocity was otherwise largely unexplored beyond general discussions around the importance of empathy and the need for human carers to achieve it.
Ethics of care and implications for policy and practice
Describing care as a responsive, relational activity is very much in line with an ‘ethics of care’ perspective on this practice.
In care ethics, care involves bestowing value on the cared for and activity that provides for their needs. Tronto identified that good care comes about when both of these dimensions – caring about and caring for – are present. Care is oriented toward particular beliefs, including concern and the ability to discern the risks of interference over the risks of inaction; interpretation of the responsibilities in each situation as opposed to aligning to a rigid set of rights; and responsiveness aligned with the setting and the individual. Privacy, dignity and agency are all of particular concern in the provision of care in services as a result of these orientations.
If we define care practice in terms of ethics, then accountability of the relationships of care goes beyond the cared for and the carer. It also includes those who have determined the ethical systems that guide robot behaviour, and therefore expands the care relationship into opaque and impersonal elements that require consideration. This has important implications in terms of policy and practice. If we replace some or part of a care process with a robot, it may have far-reaching implications.
We therefore need to carefully consider how robot technologies fit within models of care. Without this there is a danger that we will not use these tools to their full effect, or will create unanticipated consequences.
Members of the research team:
Helen Dickinson, Public Service Research Group, University of New South Wales, Canberra
Nicole Carey, Self-Organizing Systems Research Group, Harvard University
Catherine Smith, Youth Research Centre, University of Melbourne
Gemma Carey, Centre for Social Impact, University of New South Wales
A few times a year the good people over at Power to Persuade turn over the blog to the Public Service Research Group for a week. Over that week we run a number of pieces based on work that we are doing across the group. In the recent PSRG week one of the pieces that ran was based on a chapter that I wrote with colleagues at PSRG on the development and recruitment of the public service workforce. This chapter comes from the book I was involved in editing on the future of the public service, published by Springer.
In this piece we delve into issues of development and recruitment employing a social learning framework. We outline four distinct elements to this approach that we argue can serve as a framework for building workforce capability and supporting change within the public service.
The piece can be found here and if you go to the site you can also find the other great pieces from PSRG colleagues.
It has been a number of years in the making, but our edited book Reimagining the Future Public Service Workforce (edited with Catherine Needham, Catherine Mangan and Helen Sullivan) has just been published by Springer. This book investigates the professional needs and training requirements of an ever-changing public service workforce in Australia and the United Kingdom. It explores the nature of future roles, the types of skills and competencies that will be required and how organisations might recruit, train and develop public servants for these roles.
The book draws on leading international research, and practitioners make recommendations for how organisations can equip future public servants with the skills and professional capacities that these shifting professional demands will require.
Drawing on ideas that have been developed in the Australian and UK context, the book delves into the major themes involved in re-imagining the public service workforce and the various forms of capacities and capabilities that this entails. It then explores delivery of this future vision, and its implications in terms of development, recruitment and strategy.
I have been working on a paper on 3D printing and public administration for just over two years now. I am fascinated by 3D printing technologies and wanted to find a way to write about this and am delighted that I have just had a paper published by Public Administration Review on this topic. The abstract is copied below and you can access the paper free (thanks to UNSW, Canberra for the funding to do this) here.
In recent years, developments in 3D printing have grasped the public’s attention. There are a range of different applications for these technologies that have a number of social, economic, and environmental implications. This essay considers these advancements and what the role of government should be in overseeing these technologies. It argues that although these technologies have been absent from the public administration literature to date, there is an important role that the field can play in supporting governments in this endeavor. In illustrating this, the final section of the essay considers how a multilevel governance framework of technology might allow us to consider the broader implications of these technologies.
In recent decades, governments in industrialised nations worldwide have been embracing market-based models for health and social care provision, including the use of personalised budgets. The Australian National Disability Insurance Scheme (NDIS), which commenced full implementation in 2016, is an example of a personalised funding scheme that has involved a substantial expansion of public funding in disability services. The scheme involves the creation of a competitive quasi-market of publicly funded disability service providers who had previously been block funded and had historical practices of communication and collaborative working.
Research has shown that introducing or increasing competition can impact collaborative efforts between service providers. I am part of a team who have recently released a report based on qualitative interview data from disability service providers during the roll out of the NDIS to examine the effects of the introduction of a more competitive environment on collaborative working between providers who had historical relationships of working together. The data shows that while collaborative efforts were largely perceived to be continuing, there are signs of organisations shifting to more competitive relationships in the new quasi market. This shift has implications for care integration and care co-ordination, providing insight into how increasing competition between providers may affect care integration.
You can find the full report through the Centre for Social Impact here.
This is the era of the so-called ‘sandwich generation’, with busy professionals caring for children and ageing parents. Imagine being able to more effectively manage both sets of care relationships via a series of new technologies – and better look after yourself in the process. That’s the future being promoted by a number of startup tech firms at a recent showcase.
Here we saw tech that allows you to monitor your children via smart devices. Through this you can check out where they are, how they are performing in school, how much screen time they are consuming (and remotely cease this if you think it is too much). The next big consumer boom in the med tech space is predicted to be in genomic testing. So you will know just what to feed your children given your knowledge of their predispositions to certain conditions and intolerances. Your smart kitchen ensures that you are always fully stocked on necessities, by automatically ordering products you run out of.
When you have a few minutes in your day, you check in with your robot life coach to view your own vitals and see how you are tracking in relation to a number of your life goals. Maybe you even do this while moving around in your autonomous vehicle, which is safer than you personally driving the vehicle and frees you up to work on the move. Your home personal assistants even monitor your speech patterns to check for symptoms of depression or Parkinson’s.
All of this you can do safe in the knowledge that your parents are well and being constantly monitored via wearables or in-home robots. These will tell you if they should suffer a fall or if one of a number of pulse, blood oxygen or other readings indicate something of concern. If anything should cause worry you can be immediately connected to a healthcare professional who can also access your parent’s personal data and advise on courses of action – all supported by artificial intelligence.
Sounds pretty cool, right? There are a huge number of companies emerging that are keen to support you to more effectively “manage” your personal and collective caring responsibilities. But what costs might this come at, and are there aspects of this we should be concerned about?
These potential applications raise a number of important questions, many of which involve ethical and moral dilemmas. How safe is the data being shared, and who owns it? Blockchain is widely employed as a way of ensuring that data is kept and transmitted safely, but is it infallible? If your DNA is being profiled, who are you happy to have access it? Maybe you want your GP to see this, but what about your insurance company? What about researchers? With big enough datasets we might be able to make some new breakthroughs in the health arena. So should we all consent to share our anonymised and aggregated data? The recent response to the My Health Record scheme suggests that many of us are wary about this.
Would knowledge of genomic predispositions make us behave more “rationally”? If you knew that you were more likely to develop heart disease would you eat more healthily? Conversely, if this wasn’t a worry for you would you then engage in more “risky” behaviours? If we know one thing for sure it is that people don’t always behave in ways that are predictable or considered the more ‘rational’ option.
Some of the companies we spoke with talked about offering incentives to individuals to share different aspects of their data in return for vouchers or discounts off other products and services. While this avoids a situation where individuals are not rewarded for the use of their data, it raises potential equity issues. Those most likely to respond to such incentives are likely to be from lower socio-economic groups. Indeed, the Australian Human Rights Commission has recently raised a number of concerns relating to technologies and their potential to enhance inequities.
Although the discourse around many of these technological developments is that they should make us more “safe”, there is already evidence that suggests reason for concern. Addiction to technology is a real concern for many, particularly in relation to younger children. China has introduced legislation addressing this after concerns about wellbeing and intensive play of the online game Honour of Kings. The developer, Tencent, responded by limiting the amount of time users can spend on the game and at what times. Although welcomed by some, this is a blunt approach and does little to address our obsession with electronic devices more broadly.
A feeling of safety in many of the examples we came across typically also involved some significant surveillance, either via camera, by data or a combination. Although this might lead to some of us feeling more secure, for others this could come with concerns about issues of human rights. A recent story in the New York Times highlights the issues associated with the expansion of facial recognition software and surveillance in China and some significant concerns about this within the context of an authoritarian regime.
It is clear that there are some exciting developments to come in the technology space that will have a profound impact on our everyday lives. But these developments also bring with them a series of potential negative impacts and associated ethical and moral concerns. Although some of these developments are some way in the future, many already exist in our everyday lives. Yet one of the issues we are encountering in our research into the use of robots in care services is that governments are not yet systematically having widespread conversations about what these technologies mean for the ways in which we design and deliver public services, and the challenges they raise. A failure to consider these issues now will likely mean that we first really grapple with them when some sort of incident arises, by which time it will be too late.
While there is much to be excited about in terms of the future of care services, there are some developments that should give us pause for thought. It is a future we need to prepare for to ensure that we get the type of care services we want and need.
This post contributed by Catherine Smith, Melbourne Graduate School of Education, University of Melbourne and Helen Dickinson, Public Service Research Group, University of New South Wales, Canberra.
In this post, Gemma Carey (@gemcarey), Helen Dickinson (@drhdickinson), Michael Fletcher and Daniel Reeders (@engagedpractx) examine the role of National Disability Insurance Scheme (NDIS) actuaries, describing their purpose in the scheme, the limitations in the ways they are used and the implications. A full account of their research can be found here.
Most of us are familiar with actuarial approaches, though we may not be aware of them. If you have house insurance, insure your car or have a job (where you are covered by work cover) the premiums you pay are based on actuarial modelling.
Actuaries and actuarial modelling are central to the operation of the National Disability Insurance Scheme (NDIS). Internationally, the way that actuaries are used within the NDIS is very unusual, although it is something that has not been written about extensively. If you have heard about actuaries and the NDIS, it is probably because the outsourcing of this function made the news, largely due to the $2.3 million being paid for it over five years.
In this piece we unpack this role, describing the function of actuaries in the scheme and the limitations in the ways in which we are using them.
The NDIS Act outlines that the Scheme Actuary is responsible for overseeing and ensuring the financial sustainability of the scheme. Official duties of the Actuary are to assess: (i) the financial sustainability of the NDIS; (ii) risks to that sustainability; and (iii) on the basis of information held by the NDIA, any trends in provision of supports to people with disability, including (a) the causes of those risks and trends; and (b) estimates of future expenditure of the NDIS. However, the Act does not authorize public monitoring and evaluation of how well the scheme is meeting its goals of ensuring choice, control, and better outcomes for individuals.
Supports to be provided under the scheme are based on the principle of providing ‘necessary and reasonable care’. This implies that estimating future costs requires not only adequate data on life expectancy, but also the life-long impacts of factors such as the medical progression of disabilities, the impact of new technologies on what might be regarded as ‘reasonable’, and changes in family circumstances affecting the availability of informal care. Ideas such as ‘reasonable and necessary care’ are inherently fuzzy and difficult to operationalise. Moreover, the NDIS Act authorizes expenditures only indirectly, as a necessary implication of a provision which requires that expenditures ‘represent value for money.’ This introduces a role for the Scheme Actuary into almost all aspects of the system, since pricing of services and planning personalized budgets all impact upon value for money.
How is evaluation of the scheme done?
Neither the Act nor the initial design outline provisions for meaningful and ongoing monitoring and evaluation of impact, whether against the policy objectives or the participants’ self-identified goals. As a result, ‘value for money’ can only be judged in terms of efficiency – units of service delivered rather than outcomes achieved.
To fulfil the mandate set out in the NDIS Act, scheme actuaries require complex and longitudinal data, particularly to ensure continuous monitoring. Serious questions remain over how these data are obtained and their quality, with a current lack of transparency around the monitoring framework being designed by the actuaries and implemented by the NDIA, an agency whose capacity has come under considerable scrutiny. The Productivity Commission report (the blueprint for the scheme) argued that actuarial modelling would also play an important role in evaluating specific services and interventions funded under the NDIS. How this has translated into practice is unknown, as a result of limited transparency around both the actuaries and the NDIA.
The scheme is overly-focused on costs
Normally, actuarial cost modelling of services works through estimating costs based on independent information about prices and expenditures. However, in the NDIS, actuaries set the prices of services and supports, and, to some degree, also make decisions regarding what services are to be provided to whom through the NDIA and planners. For example, the actuaries have advised planners to not be afraid to make large upfront investments in equipment. As noted in the rules for the scheme actuary, the role is to “monitor, assess, and report on consistency of resource allocation across regions, planners, disability type, and other groupings as appropriate”. This could potentially see them involved in planning in a much more hands-on way in the future.
The actuarial modelling of NDIS performance focuses solely on costs. As the Productivity Commission notes: “Financial (or actuarial) models measure any discrepancies between expected and actual costs and outcomes, and the adequacy of revenues to meet projected costs over the long-term”. The models explain why such discrepancies may have occurred, and analyse their implications for the financial sustainability of the scheme and its objectives for achieving outcomes for people with disability (either in aggregate or in specific categories). By itself, this modelling is limited in its ability to measure personal wellbeing or social and economic outcomes. It also cannot assess whether participants’ goals are being met, or whether participants experience their choice and control as purely formal (i.e. I get to choose who provides the service) or substantive (i.e. I get to choose how the service is provided). For a more robust evaluation of wellbeing, outcomes, and goals – which is after all the fundamental objective of the NDIS – alternative methods are needed, and, as the NDIS Costs Report points out, this is a more difficult task than measuring costs against cost expectations.
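To make the narrowness of this kind of monitoring concrete, the comparison of expected and actual costs can be sketched in a few lines. This is a hypothetical illustration only: the cohort names and dollar figures are invented and do not come from the NDIA or the Scheme Actuary, and real actuarial models are of course far more sophisticated.

```python
# Minimal sketch of expected-vs-actual cost monitoring.
# All names and figures below are invented for illustration.

def cost_discrepancy(expected: float, actual: float) -> dict:
    """Return the absolute and relative gap between projected and actual spend."""
    gap = actual - expected
    return {
        "gap": gap,                        # positive = overspend vs projection
        "gap_pct": gap / expected * 100.0,
    }

# Invented package budgets for three participant cohorts (millions AUD)
projections = {"cohort_a": 120.0, "cohort_b": 85.0, "cohort_c": 40.0}
actuals = {"cohort_a": 131.0, "cohort_b": 82.5, "cohort_c": 44.0}

for cohort, expected in projections.items():
    d = cost_discrepancy(expected, actuals[cohort])
    print(f"{cohort}: gap={d['gap']:+.1f}m ({d['gap_pct']:+.1f}%)")
```

The point of the sketch is what is absent: nothing in a calculation like this captures wellbeing, goal attainment, or whether choice and control were experienced as substantive rather than formal.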
To date, there is limited information on benefits to individuals and families, which means that it is not possible to conduct a proper cost-benefit analysis. The NDIA has developed and piloted what it calls the NDIS Short Form Outcomes Framework, which comprises eight participant domains (including choice and control, daily activities, relationships, home environment, health and wellbeing and life-long learning) and five family carer domains (e.g. whether families have the support they need, whether they know their rights, if they can gain access to desired services). The short form questionnaire does not attempt to assess whether participants feel the services delivered contribute to achieving their stated personal goals, largely because personal goals are so diverse and the instruments being used are not able to measure this.
In other words, while packages in the NDIS are personalized, the measures for success of the scheme are not. The NDIS needs a proper monitoring and evaluation framework that goes beyond assessing costs if we are to understand its real impact on lives.