Artificial Intelligence (AI) offers considerable promise in the legal field, but its benefits are counterbalanced by serious risks, particularly when used uncritically in legal proceedings. Across Australia, courts and regulators are sounding alarm bells as AI tools become more widely adopted without adequate oversight. These warnings are grounded in real-world experiences of fabricated citations, misleading briefs, and concerns about professional accountability.


AI “Hallucinations” and Fabricated Citations


Generative AI models such as ChatGPT and DeepSeek can produce “hallucinations”: entirely invented case law, citations, or legal terminology, presented with high confidence and superficial plausibility.


This issue is not simply technical; it reflects the reality that generative models do not truly “know” law or fact. They are trained to predict sequences of words based on patterns in data, which means they optimise for plausibility rather than accuracy. This makes them inherently risky when used without critical legal scrutiny, especially in jurisdictions like Australia, where citation accuracy is non-negotiable.
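
A toy sketch of why this happens: the generator below strings together statistically plausible fragments, much as a language model samples likely next tokens, and reliably produces citations that look authentic but correspond to no real case. The party names, report series, and number ranges are invented for illustration only.

```python
import random

# Invented fragments that "look like" Australian medium-neutral citations.
# A real model works the same way at vastly larger scale: it samples
# likely-looking sequences, with no lookup step to confirm the case exists.
PARTIES = ["Smith", "Chen", "Minister for Immigration", "Nguyen"]
SERIES = ["HCA", "FCA", "FCAFC", "NSWSC"]

def plausible_citation() -> str:
    """Assemble a confident-looking, entirely fictitious citation."""
    return (f"{random.choice(PARTIES)} v {random.choice(PARTIES)} "
            f"[{random.randint(1995, 2024)}] {random.choice(SERIES)} "
            f"{random.randint(1, 900)}")

if __name__ == "__main__":
    print(plausible_citation())  # plausible format, zero factual grounding
```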


AI-Generated Legal Errors: A Growing Concern


The legal sector has already seen several high-profile examples of AI going wrong. In early 2025, a Melbourne law firm was publicly reprimanded after submitting court documents containing AI-generated citations to cases that simply did not exist. The fictitious authorities were presented as genuine, but on scrutiny the court found them to be entirely fabricated. The firm had relied on a generative AI tool without verifying its output.


In another matter, a lawyer in an NSW immigration appeal used AI to draft submissions that cited seventeen non-existent cases. The matter was delayed as the court attempted to untangle the error, and the practitioner faced a formal review.


These are not isolated incidents. A family law matter in Victoria also had to be adjourned after a solicitor used an AI tool to generate a list of case citations, none of which were real. The matter was reported to the Legal Services Board, and the court reiterated the duty of lawyers to personally verify all authorities relied upon.


Such cases are particularly concerning in immigration law, where facts are case-specific and every document, whether a statement, a statutory declaration, or a skills assessment, is critical to the application’s success. A seemingly minor AI error can mean the difference between a visa grant and a refusal.


NSW Supreme Court Restrictions on AI Use


In response to such incidents, the Supreme Court of New South Wales issued Practice Note SC Gen 23, which came into effect on 3 February 2025. The practice note sets out when AI may and may not be used in court proceedings:


Affidavits, witness statements, character references, and expert reports must not be drafted using AI. These documents require a declaration that they were prepared without the assistance of AI tools.


AI-generated expert evidence may only be submitted with the court’s prior permission and must include a clear disclosure of the nature and extent of the AI use.


AI may be used for low-risk and administrative tasks, such as preparing chronologies, indexes, and document summaries, but lawyers must personally review and verify all legal content before relying on it.


Use of AI in written submissions is permitted only if the practitioner has verified every legal authority and factual claim included.
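
At a practical level, that verification duty lends itself to a final pre-filing pass over every citation in a draft. The sketch below assumes a hypothetical confirm_citation lookup; in practice it would query AustLII, a commercial citator, or the court’s published records, and the practitioner would still review every authority by hand.

```python
import re

# Medium-neutral citations such as [2024] HCA 12. The pattern is a
# simplification; real drafts also contain report-series citations.
CITATION = re.compile(r"\[(?:19|20)\d{2}\]\s+[A-Z][A-Za-z]*\s+\d+")

def confirm_citation(citation: str) -> bool:
    """Hypothetical lookup against a trusted source. It always returns
    False here, so every citation stays flagged until a real citator
    service is connected."""
    return False

def unverified_citations(draft: str) -> list[str]:
    """List every citation in the draft that could not be confirmed,
    for the practitioner to check personally before filing."""
    return sorted(c for c in set(CITATION.findall(draft))
                  if not confirm_citation(c))

draft = "As held in Smith v Jones [2023] NSWSC 101 and [2024] HCA 12 ..."
print(unverified_citations(draft))  # both flagged until verified by hand
```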


These rules are aimed at preserving the integrity of evidence and submissions before the court. They reflect a broader trend among Australian jurisdictions to control the use of generative AI in legal practice and to hold practitioners accountable for any misuse.


In immigration law, AI is also increasingly being used behind the scenes for form population, client screening, and even preliminary risk assessments. While this can improve efficiency, it must never replace full legal review. Natural language processing (NLP) models can extract passport details or employment history from uploaded documents and pre-fill forms, but these models lack the judgment needed to identify contradictions or inconsistencies across evidence.
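
A deterministic cross-check can at least surface the obvious mismatches for human review, though it is no substitute for judgment about why the records disagree. A minimal sketch, assuming an upstream extraction step has already produced per-document field dictionaries (the field names and values here are illustrative only):

```python
# Fields as extracted from two client documents by an upstream NLP/OCR
# step. The schema and values are invented for this example.
passport = {"passport_no": "PA1234567", "date_of_birth": "1990-04-12"}
declaration = {"passport_no": "PA1234567", "date_of_birth": "1990-04-21"}

def field_conflicts(*docs: dict[str, str]) -> dict[str, set[str]]:
    """Return every field whose extracted values disagree across the
    documents, so a human can decide which record is wrong and why."""
    seen: dict[str, set[str]] = {}
    for doc in docs:
        for field, value in doc.items():
            seen.setdefault(field, set()).add(value)
    return {field: values for field, values in seen.items() if len(values) > 1}

print(field_conflicts(passport, declaration))
# {'date_of_birth': {'1990-04-12', '1990-04-21'}} -> flag for human review
```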


AI Is a Tool, Not a Fact-Checker


The key issue with AI in legal work is not whether it can be useful; it is that its output is frequently inaccurate. AI tools are trained on large datasets and use pattern recognition to generate plausible content. However, they are known to “hallucinate”, generating false or misleading information that appears legitimate.


In legal matters, where the accuracy of citations, statutory interpretation, and precedent is critical, even one false claim can derail a case. That’s why it is essential to understand:


AI can assist, but must not replace human legal judgment.


Every output from an AI tool must be independently verified, including case law, legislation references, and legal arguments.


Disclosure is essential. If AI was used in preparing any part of a legal submission, that fact must be made known to the court where required.


AI does not take responsibility; you do. The legal practitioner or any relevant person drafting the submissions remains fully responsible for every word submitted to the court, regardless of whether it was initially written by a machine.


Practitioners using AI to analyse past trends, for example in visa refusal patterns or tribunal appeal success rates, must remember that predictive tools do not account for recent policy changes or legislative shifts. Moreover, most AI is trained on public or outdated data, not DIA or tribunal-specific outcomes. Overreliance on such patterns can lead to misleading advice and false confidence.
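
One simple safeguard is a date guardrail: refuse to surface a trend-based prediction when the rules have changed since the dataset was assembled. The cutoff and commencement dates below are invented for illustration.

```python
from datetime import date

# Both dates are placeholders: the dataset cutoff comes from whoever
# assembled the historical outcomes, and the commencement date from the
# most recent legislative or policy instrument relevant to the matter.
DATASET_CUTOFF = date(2023, 6, 30)
LATEST_POLICY_CHANGE = date(2024, 12, 7)

def trend_is_current() -> bool:
    """A dataset frozen before the latest policy change cannot reflect it."""
    return LATEST_POLICY_CHANGE <= DATASET_CUTOFF

if not trend_is_current():
    print("Warning: historical outcomes predate the current policy "
          "settings; treat any prediction as indicative only.")
```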


AI chatbots are also increasingly used in client communications. While these tools can reduce the administrative burden and answer general queries, they must be programmed carefully and should never be relied upon for personalised legal advice. Inaccurate responses can mislead vulnerable clients and potentially expose a firm to liability.
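
“Programmed carefully” can be as blunt as a hard escalation rule: anything that looks like a question about the visitor’s own circumstances is routed to a human rather than answered. A minimal sketch with an invented trigger list:

```python
# Phrases that suggest the visitor is asking about their own matter.
# The list is illustrative; a production system would be far broader
# and err heavily on the side of escalating to a person.
ESCALATION_TRIGGERS = ("my visa", "my application", "my case", "should i")

ESCALATION_REPLY = ("This looks like a question about your specific "
                    "circumstances. The chatbot cannot give personalised "
                    "legal advice; a member of our team will follow up.")

def route_message(message: str) -> str:
    """Answer general queries from a script; escalate anything personal."""
    if any(trigger in message.lower() for trigger in ESCALATION_TRIGGERS):
        return ESCALATION_REPLY
    return "Thanks for your question. Here is our general guidance: ..."

print(route_message("Should I lodge my application before the deadline?"))
```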


Conclusion


AI is a powerful, efficiency-enhancing tool, but it is far from flawless. In Australian legal practice, the consequences of overreliance on AI have already surfaced in fabricated citations, court rebukes, and regulatory action.


AI remains indicative, not definitive. It can guide research, help generate drafts, and support decision-making, but human professionals must always verify and validate its outputs. Ethical, accurate legal work depends on rigorous oversight, transparency, and the enduring application of expert judgment.


As Alex Kaufman MMIA remarked in a 2025 Migration Institute of Australia presentation: “AI won’t replace lawyers, but lawyers who use AI will replace lawyers who don’t.” The message, however, is clear: use it wisely, or risk consequences that your licence, reputation, and clients cannot afford.