Can AI write laws? Lawyer puts ChatGPT to the test

Charles Darwin University

A Charles Darwin University (CDU) academic has answered one of the modern-day legal world’s most burning questions: Can Artificial Intelligence (AI) write laws?

New research by CDU Associate Professor Guzyal Hill put ChatGPT to the test by asking it to compare, analyse and produce domestic violence legislation, assessing the quality of its legal drafting against work by the Australian Law Council.

Given the complexity of domestic violence as a deeply human issue and the growing prevalence of AI, it was a natural next step for Associate Professor Hill to explore whether the technology could develop effective recommendations and legislation.

“Domestic violence represents a complex human problem, with up to 50 women dying every year in Australia alone,” Associate Professor Hill said.

“The federal, state and territory governments introduced the joint National Plan to end violence against women and children within one generation. Can ChatGPT help in producing a high-quality definition of domestic violence?”

“After running several tests and comparing the results with the definition produced by the Australian Law Council, the answer is ‘not yet’ – human drafting is still superior. ChatGPT, however, was very useful in classifying and identifying underlying patterns in types of domestic violence.

“For non-lawyers, ChatGPT and similar LLMs should never be used for legal advice. Many of ChatGPT’s references draw on US law. Law in Australia is simply different, let alone the differences between, say, Queensland and South Australia. I have noticed ChatGPT now includes a disclaimer that it cannot provide legal advice.”

Associate Professor Hill, a lawyer and former legislative drafter, said given the prevalence of AI, more research was needed to explore its place in the legal profession.

“For lawyers and law students, AI is an area where we must upskill,” Associate Professor Hill said.

“Avoiding or ignoring AI has many unpredictable drawbacks and at least several predictable dangers, such as making major mistakes through the misuse of AI; missing the opportunity to lead the debate on the development of law as AI emerges; and allowing experts from other fields to develop solutions that do not consider fundamental human rights or that contradict foundational principles of the rule of law.

“Without any doubt, AI poses serious risks and threats if used unchecked. Lawyers and law students should treat AI in a way that is practical, cautious, and yet curious. At this point, AI systems are an augmentation of human acuity rather than an abrogation of legal analysis and reasoning. We, as lawyers, have an opportunity to inhabit this new AI domain with the potential to transform law and the way we approach law globally.”

