Writing for academics?

My recent article on nudge and co-design with Emma Blomkamp received quite a bit of attention on social media after it was published. People had some good things to say about it on Twitter, and our research was also featured on a few blogs.

It made me pause and reflect on how important it is to ensure that papers and articles are available and accessible to everyone, not only those with journal subscriptions.

A link to one of the blogs is here:

https://www.anzsog.edu.au/resource-library/research/nudge-and-co-design-complementary-or-contradictory-approaches

COVID-19 and behavioural insights: Questions for public policy

Photo by Michael Rivera on Wikimedia Commons

This article was first published on the Crawford School of Public Policy website as part of their COVID-19 insights, experts and analysis series.

06 July 2020

By Colette Einfeld

My local supermarket has stickers on the floor at the checkout, marking out one and a half metres from the person in front. There are also signs asking me to keep ‘one trolley’ distance away from others. Elsewhere in Australia, signs ask you to stay one ‘kangaroo’ away; in Canada, one hockey stick. In the parks of Copenhagen and New York City, squares and circles on the grass guide people on how far to sit apart. These social distancing measures are examples of how behavioural insights are helping during the coronavirus pandemic.

Behavioural insights apply theory and principles from psychology and behavioural economics to policy problems. The approach is built on “nudges”, the idea that because people don’t always act in their self-interest, it is the responsibility of policymakers to help them make better decisions.

Behavioural insights have come to the fore as an evidence-driven, expert-led approach to policymaking, and have been hailed as ensuring cheaper, more effective government. Examples of behavioural insights projects undertaken overseas and here in Australia include encouraging more people to pay their fines on time by changing the wording of a letter, increasing cervical screening rates by prompting women to write down appointment details on their fridge, and nudging people to register as organ donors. Behavioural insights have been applied to areas as diverse as education, obesity, tax, and energy. Governments around the world have also been applying behavioural insights in the pandemic, driven in part by the idea that, without a vaccine or cure, it is only people’s behaviour that can limit the spread of coronavirus.

Governments have used behavioural insights to encourage social distancing, as in the examples above, as well as to design signage about handwashing and to encourage the use of hand sanitiser. In the UK, behavioural insights teams have worked with the NHS to design text messages to send to vulnerable people isolating at home.

However, a recent Guardian article accused behavioural insights of overstepping. The head of the UK’s behavioural insights team is part of the government’s Scientific Advisory Group for Emergencies (SAGE). He raised concerns about ‘behavioural fatigue’, advice that has been linked to the ill-fated ‘herd immunity’ response and the decision not to lock down the country. Since then, over 600 academics have signed an open letter to the UK government raising concerns about the lack of evidence for behavioural fatigue, and a succession of blog posts has questioned the role of behavioural insights in the pandemic.

These commentaries raise legitimate concerns, ones I am exploring in my research on the use of behavioural insights in Australia. For me, the pandemic has also raised three key questions about the future of the approach, questions that bear on public policy more broadly.

  1. What counts as evidence? Behavioural insights teams have actively promoted randomised controlled trials (RCTs) as the ‘best way’ of understanding whether an approach works. In an RCT, a sample of the population might receive a new version of a letter encouraging tax compliance, while a ‘control’ group receives the original letter or no letter; the difference in tax compliance can then be measured to see whether the intervention worked, before it is rolled out to the wider target population (a minimal sketch of this logic follows this list). While some teams are embracing broader experimental methodologies, RCTs still seem to be privileged as the ‘best way’ to find out what works, and the technical ability to run RCTs can even dictate which projects are undertaken. However, the rapid escalation of the pandemic means that interventions, whether to encourage handwashing, to promote social distancing, or to understand ‘behavioural fatigue’, cannot always be tested before they are rolled out to the whole population. Will RCTs continue to be the gold standard of evidence, or will other types of evidence and knowledge become more prominent in developing public policy?
  2. Who are the experts? When RCTs cannot be undertaken before an intervention is rolled out, understanding the local context, local perspectives, local experiences, and local experts becomes even more important for knowing what works in a community or population. Yet behavioural insights and nudge have been criticised as being the domain of experts who decide what the ‘best choice’ is for people, and in response a number of alternatives and extensions of behavioural insights have been offered in efforts to include citizens in making decisions about nudges. However, this pandemic has seen a turn to experts and expertise, as scientists and public health experts have been sought out to answer questions and offer advice in a rapidly evolving situation. What might the pandemic mean for the ongoing role of experts, and local experts, in behavioural insights and public policy?
  3. What about vulnerable groups? It would be remiss not to consider the impacts on vulnerable groups and the ethics of nudging. One of the persistent criticisms of behavioural insights has been whether it is an ethical approach to policymaking. The techniques that behavioural insights use may disproportionately affect those who are most vulnerable, driving calls to consider distributional justice in the use of nudges. Behavioural insights may be used to target vulnerable groups, as in the NHS example above, but they may also be applied across the whole population without taking into account the different effects on parts of that population. In some areas, social distancing, isolating at home, and regular handwashing may not be possible. Without widespread testing of behavioural insights interventions, and without using local knowledge about these groups’ experiences, there is a danger that such techniques will amplify adverse effects on more vulnerable groups.
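To make the mechanics of the first question concrete, here is a minimal sketch in Python of how the tax-letter trial described above might be simulated and analysed. The group sizes and compliance rates are hypothetical, invented purely for illustration, and not drawn from any actual trial.

    # A minimal, hypothetical sketch of an RCT on letter wording.
    import random
    from statistics import mean

    random.seed(42)  # fixed seed so the illustration is reproducible

    # Assume the original letter yields 60% compliance and the reworded
    # letter 65%; each arm is a random sample of 5,000 taxpayers.
    control = [1 if random.random() < 0.60 else 0 for _ in range(5000)]
    treatment = [1 if random.random() < 0.65 else 0 for _ in range(5000)]

    print(f"Control compliance:   {mean(control):.1%}")
    print(f"Treatment compliance: {mean(treatment):.1%}")
    print(f"Estimated effect: {mean(treatment) - mean(control):+.1%}")

In practice an analyst would also test whether the difference is statistically significant before recommending a rollout; the point of random assignment is that any difference in outcomes can be attributed to the letter itself rather than to pre-existing differences between the groups.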

Commentary on the use, and misuse, of behavioural insights in the coronavirus pandemic seems to reflect a growing recognition that enthusiasm for the approach needs to be tempered, and that it should not always be the first or only response. It suggests the need for a reframing from ‘how can behavioural insights help?’ to ‘can behavioural insights help in this situation?’. More broadly, there is an opportunity to continue the work of behavioural insights, perhaps even of public policy generally, to embrace different types of evidence and expertise, ensuring that such interventions are developed with people and ethics at their centre.

Not everything that counts can be counted, and not everything that can be counted counts

This thought piece was first published as part of the NextGen Research page. The NextGen project is Australia’s largest study on engagement and infrastructure delivery.

Over the last few months, in my role as a Chief Investigator (CI) on the NextGen engagement project, I’ve been lucky to talk to a range of dedicated community engagement professionals and committed infrastructure practitioners. We launched our successful national survey, and facilitated launch events and workshops all over the country. I have been talking to people about why they think community engagement fails, and what is needed. In our workshops, we asked:

‘What do you need, and what do you want to learn?’

We’ve heard lots of different responses, but one idea that keeps coming up is that we need data. In our workshops, for example, practitioners asked “are there more quantitative means of measuring engagement?” and “is there a way to quantify the value of engagement?”.

Practitioners are telling us that quantitative data and metrics are needed because this is the language of finance, of engineers, of business and senior executives. Community engagement practitioners need to be able to speak this language to show why community engagement is important. Data could prove how many days, and therefore how much money, is lost on a project through community protest or blockades when community engagement is lacking. It could also demonstrate the cost of government intervention on behalf of communities if a project does not invest in developing a social licence. Quantitative evidence on the importance of engaging with the community, and on the risks to projects if the community isn’t engaged, is seen as playing an important role in building the business case internally. Building the evidence base builds the case for resources, and for ensuring community engagement is a focus of infrastructure development and delivery.

I know from my own experience working with community engagement professionals that they have one eye outward, engaging with the community, and one inward, defending and arguing for the importance of what they do. Very often this is with colleagues whose backgrounds are in finance or engineering, and whose day-to-day business is calculating and working with numbers and data.

And yet, I can’t help thinking: what, then, is the role of other types of data?

In my other research in public policy, I have recently written about the evidence-based policy movement. Here, academics and practitioners are calling for data to inform and evaluate policy, seeking evidence on ‘what works’. On the surface, there isn’t much to argue with. After all, why wouldn’t we want our government’s policies to be based on evidence? But there are concerns that a particular type of evidence, evidence using quantitative data, is seen as more valid and more legitimate than other forms of knowledge. This leads us to ask: what, then, is the role of the knowledge of policymakers and the public service in understanding what works? What about the experiences and expertise of the stakeholders, NGOs, or frontline staff who may be involved in delivering policies? Or what, even, of the stories of the clients, or ‘policy targets’?

The parallels to community engagement are, to me at least, clear. I suggest that in using data and metrics as evidence for community engagement, there is a risk that other forms of knowledge will be sidelined. What becomes of the expertise of community engagement staff, or of the knowledge and experiences of the affected community? These experiences and this knowledge are, I would argue, equally important, but they are often qualitative in nature. Indeed, what about reasons beyond saving time and money – is there a role for arguing that community engagement is simply the right thing to do? Or is the failure of this argument what is driving the call for metrics?

What do you think? Is this a valid concern, or are the risks to community engagement of not speaking this language a greater concern?

Where’s the community? The elephant in the room

This thought piece was first published as part of the NextGen Research page. The NextGen project is Australia’s largest study on engagement and infrastructure delivery.

The NextGen engagement project asks:

How can we do community engagement better and, in doing so, deliver the infrastructure we need more efficiently and with better outcomes for impacted communities?

We are adopting a co-design approach, hosting a number of facilitated discussions and undertaking a national survey with our project partners. Our partners include leading infrastructure bodies, research partners, and civil society groups. The ultimate aim is to develop a program of research for the next 3-5 years that addresses some of the issues raised during the project.

Our situational analysis asks a range of critical questions, such as:

  • If we ask communities for more input and give stakeholders more say, how do we ensure that their ideas and advice are incorporated into project planning and delivery in a meaningful way? At the same time, how do we ensure that Australia still gets the infrastructure it needs, rather than just the infrastructure that is popular?
  • How can projects allow for consultation about whether or not a project should be started, not just how a project is delivered?
  • What is the earliest point at which engagement is likely to be most meaningful and most effective, for all parties concerned?

We are working with dedicated, engaged professionals who are committed to delivering great outcomes for infrastructure and for affected communities. So some have (rightly) asked: if this is about community engagement, if community is meant to be at the fore of the project, if we are arguing that communities should be involved and listened to… where, in the NextGen project, is the community?

It is the elephant in the room. Involving the community is something we have grappled and wrestled with, discussed, and (heatedly!) debated. Should we be including community groups at this stage? If so, which groups? If our focus is on industry, are we reinforcing the notion that ‘we know best’ when to engage with the community, the very thing we are trying to question?

Some of our concerns around ‘who represents the community’ and ‘which community’ acknowledge that capturing all community perspectives at this stage is simply beyond the scope of the project. We could still seek input, perhaps by selecting one community impacted by infrastructure development. But this leads to one of our key concerns: research fatigue.

Here we draw on our experience as researchers, academics, and social scientists who, between us, have spent many years working with and in affected communities. That experience suggests many communities are research-fatigued. Research fatigue occurs when people, particularly because of their identity or, importantly here, their location, become involved in so many research projects that they tire of talking to researchers (Clark 2008). This fatigue can come from the frustration of providing time, knowledge, and experience, and then seeing little impact or outcome from a project. Some communities in the Hunter Valley, for example, may have multiple coal mines nearby, as well as CSG companies, consultants, and academics, all wanting to understand and measure the community’s experiences. This observation has led to projects such as the Hunter Valley Mining Dialogue and to our work encouraging the consideration of Cumulative Impacts. Cumulative Impacts, and Cumulative Impact Assessments, recognise that communities may be affected by multiple projects, across time and industries; coordinated approaches to researching and measuring impacts both acknowledge the total impact of these projects and can reduce the burden on communities.

For the NextGen project, we want to ensure communities are meaningfully engaged. For us, this means respecting the time and knowledge that communities bring to our research, so we want to be clear that when these are provided they will be used and useful. We want to be able to take industry’s ideas to communities and ask them: this is what industry is saying the challenges are, what do you think? We want to understand whether communities agree or disagree and why, what is missing, and what is needed from their perspective, in a way that respects their experiences and knowledge. What this looks like, and who we talk to, will likely come at the next stage of the project, but we’re interested to hear from you on this one. Is there a way at this early stage we can include communities, and which communities would they be? Are our pragmatism and our concerns about meaningful engagement outweighing our obligations to have all groups represented? We’d love to hear what you think.