Ethics and risks of generative AI in psychology

Australian Psychological Society

Generative AI is influencing clinical practice at lightning speed, but discussion of ethical issues and risks has lagged. Academic Dr Erika Penney shares her approach – from whether to use generative AI in diagnoses, to the cultural barriers that could create inequity.

Generative artificial intelligence (GenAI) tools are evolving rapidly – in scope, diversity and sophistication. It may be only a matter of time before they become mainstream in clinical practice. Yet, when it comes to ethics and risks, little guidance is available to psychologists.

Dr Erika Penney, education-focused academic at the University of Technology Sydney, breaks the ethics and potential risks of GenAI down into three categories.

The first is its appropriateness in making diagnoses, writing clinical notes and developing treatment plans. The second is its use as a tool in therapy, from generating ideas for explaining complex concepts to role-playing. The third is equity, particularly economic and cultural barriers to access.

“[Psychologists] are waking every day to a new type of AI. It can be overwhelming. Many are fearful, and there’s a kernel of truth in that fear. It speaks to how well psychologists are trained in ethics,” says Dr Penney, who is speaking at the APS AI and Psychology Symposium on 13 October 2023.

“But it also creates resistance. And strong abstinence and rigid rules around technological advances don’t, historically, tend to be successful. Instead, [psychologists] [should] bring the critical thinking and clinical reasoning skills they already have to this new field.”

Diagnoses, clinical notes and treatment plans

For Dr Penney, the use of GenAI in making diagnoses, writing clinical notes and developing treatment plans is a “black and white” issue. She does not recommend it – for three reasons.

First, it requires the inputting of confidential, sensitive and/or identifying personal information, which raises a host of ethical and legal risks.

“You might breach privacy laws and fail to meet your lawful obligations for the transparent and secure use of information,” she says. “Even if an AI tool claims not to store data, there could be underlying platforms that do.”

Second, it raises the challenge of consent.

“If you fail to inform a client that GenAI is involved in making important clinical decisions about their diagnosis or therapy, that’s a violation of the principle of informed consent,” she says.

Further, gaining consent is difficult because this requires ensuring the client understands all risks and benefits, which are complex and far-reaching – from the theft of data to the likelihood of an unsound result.

“It’s hard to imagine that, if someone understood [all this], they’d agree to their personal and health data being given to a GenAI,” says Dr Penney.

Third, reliance on GenAI, particularly without clinical oversight, could lead to incorrect diagnoses and inappropriate treatment plans.

GenAI in other aspects of practice

Using GenAI in other aspects of practice is a "more nuanced" area to which psychologists can bring their clinical reasoning and critical thinking.

“Without inputting confidential information, you could use GenAI to soundboard ideas. For example, you could ask for analogies about how to explain graded exposure to a child,” says Dr Penney.

“But, before employing the ideas generated, use your pre-existing skills to decide if they have merit, if they properly explain the concept, and so on.”

The same applies to using GenAI during therapy.

“Say you have a client who’s been practising assertiveness. You could suggest they ask ChatGPT to pretend to be their boss while they practise asking for leave.

“But, you would educate the client beforehand. For example, you might explain that AI doesn’t fully understand emotion or isn’t always culturally sensitive. You would put a plan in place for what would happen afterwards, if the client felt upset.”

In this regard, GenAI is similar to any other strategy.

“You can come up with all sorts of creative ideas, but you should always discuss with the client the pros and cons, limitations, things they should be aware of, and ask yourself, ‘Is it in line with confidentiality and privacy principles? Have I given the client enough education? Am I aware of all the limitations?'”

Equity and biases

Whichever way psychologists use GenAI, they should be sensitive to economic and cultural barriers to access. Economically, the issue is a matter of cost. While some versions of GenAI are free, others – usually the more capable versions – require subscriptions.

Cultural barriers are more complicated. “We cannot underestimate potential cultural biases,” says Dr Penney.

“GenAI’s source-pool is one-third of the Internet, where most research is by Westerners and in English. What’s the likelihood it’s going to adequately represent linguistically and culturally diverse perspectives?

“There’s potential for bias, incomplete data and reinforcing stereotypes. GenAI might not be appropriate for some groups. We need to hold these limitations in mind – just as we would when looking up treatment literature through scientific databases, where we also generally find more Western-oriented research.

“On a wider scale, we have a responsibility not to get all our information from GenAI. At this stage, we don’t want to let it do too much without our oversight.

“And, wherever [psychologists] have opportunities to advocate, they should, by writing letters and giving feedback to creators that they want GenAI tools with diverse perspectives.”

Looking ahead

The ethical issues and risks raised by GenAI will likely change as it adapts to clinicians’ and patients’ needs, and the law catches up.

“Some applications are now being purpose-built for healthcare. For example, in a 2020 paper, Eyigoz et al. discussed the development of a program that could predict whether a person would develop Alzheimer’s by analysing their text messages. Hopefully, such purpose-built apps will be more thoughtful about privacy and health laws,” says Dr Penney.

“We also need stricter regulations, as the Australian Medical Association called for earlier this year after discovering that doctors in Perth-based hospitals had been using ChatGPT to write clinical notes.”

So, flexibility is key.

“Rather than trying to rote-learn what we can and can’t do, we should bring clinical reasoning and an attitude of life-long learning.

“What we learn is feasible today – or in a month, or a year, or five years – might change, but what won’t change is our ability to reason clinically and think critically. We will come to different decisions as the technology evolves.”
