
‘Robo-Justice’: A Critical Examination of Artificial Intelligence (AI) Adoption in Ghana’s Legal System

Abstract 

This article examines how Artificial Intelligence (AI) is increasingly influencing Ghana’s legal system and the opportunities and risks it presents. While AI can improve efficiency in legal research, drafting and judicial administration, careless use can lead to fabricated cases, inaccurate citations, and ethical breaches, as seen in several foreign courts. This article stresses that AI cannot replace lawyers or judges because it lacks human judgment, ethical reasoning, and consistency. Even proposals to use AI in empanelling judges must remain strictly advisory since the constitutional authority and discretion to empanel judges rests solely with the Chief Justice. The authors call for training, verification guidelines, and stronger collaboration between the Bench and Bar to ensure AI supports, rather than undermines, the integrity and credibility of justice delivery in Ghana.

Keywords: Artificial Intelligence, Ghana Legal System, Judiciary, Legal Profession, Chief Justice. 

 

Introduction 

One of us once questioned the use of AI, and a lawyer who is also a senior public servant offered a powerful analogy in response. He explained that our forefathers relied on primitive tools like stones and axes for farming, but with technological advancement, we progressed to cutlasses and other more sophisticated equipment. Would anyone willingly return to the old methods? Absolutely not. Who in this world does not want to live a soft life? We must embrace innovation just as we have in agriculture. The key is to learn, adapt, and harness these new tools wisely. This explanation transformed the doubting author’s opinion about the use of Artificial Intelligence (AI), and he now sees its potential to transform the legal profession and enhance the delivery of justice.

The integration of AI into Ghana’s legal system, which we, the writers, have referred to as “Robo-Justice,”[1] raises critical questions about fairness, transparency, accountability, and the protection of constitutional rights. This article shall examine the prospects, challenges, and implications of adopting AI into Ghana’s legal system, assessing whether technological innovation can coexist with the fundamental principles that underpin the rule of law. 

AI and Law in Ghana

Artificial Intelligence (AI), though a relatively new concept, has garnered significant attention from the general public as a convenient and innovative technology.[2]  Its rapid evolution is increasingly reshaping legal systems worldwide, prompting jurisdictions to explore how technological innovation may be deployed to enhance the efficiency, accessibility and overall delivery of justice.[3] Within legal practice, AI-powered tools are transforming traditional processes by offering lawyers unprecedented speed, efficiency and convenience in tasks such as legal research, case analysis and the drafting of court documents.[4]

In Ghana, ongoing legal and judicial reforms, coupled with a sustained quest for institutional efficiency, have sparked growing interest in leveraging AI to support legal research, the empanelling of judges, case management, judicial decision-making, and access to justice. The rapid advancement of AI is profoundly reshaping how knowledge is produced, accessed, and evaluated in contemporary society.[5] Ghana, like the rest of the world, has firmly entered the digital era, and the legal regime is marching in tandem with the times.[6]  

However, the careless or unethical use of Artificial Intelligence (AI) in legal writing carries significant risks, including factual inaccuracies, fabrication of authorities, and ethical breaches.[7] Courts in various jurisdictions have already sanctioned lawyers for submitting filings containing fictitious case citations or misleading arguments generated by AI without proper verification.[8]  In his response to the Order to Show Cause Relating to Artificial Intelligence in Wadsworth v Walmart Inc,[9] Michael Morgan stated that:

“Artificial Intelligence is a powerful tool when used properly and dangerous when used carelessly, and I understand and appreciate the need for the Court to deter careless use in the new age of Artificial Intelligence.”[10]

The careless use of AI threatens the integrity of the legal process and the duty of candour owed to the courts. Experiences from other jurisdictions, where lawyers have been sanctioned for filing AI-generated false citations, serve as a cautionary lesson for Ghana. 

What is Artificial Intelligence?

Artificial Intelligence (AI) refers broadly to the capability of machines or computer systems to perform tasks that would ordinarily require human intelligence. This includes learning from experience, recognizing patterns, understanding natural language, and making decisions.[11] The Organization for Economic Co-operation and Development (OECD), through its AI Experts Group (AIGO), provides a functional and operational definition of AI, describing it as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”[12] According to the OECD, an AI system utilises machine and/or human-based inputs to perceive real and/or virtual environments and abstracts these perceptions into models automatically, through techniques such as machine learning.[13]

According to the IAPP Glossary, Artificial Intelligence can be defined as: “A broad term used to describe a designed system that uses various computational techniques to perform or automate tasks. This can include techniques such as machine learning, in which machines learn from experience, adjusting to new input data and potentially performing tasks previously performed by humans. More specifically, AI is a field of computer science dedicated to simulating intelligent behaviour in computers. It can include automated decision-making.”[14] AI or machine intelligence is any form of intelligence demonstrated by machines as opposed to natural intelligence from human beings and animals,[15] geared towards problem solving, planning, learning and even speech recognition.[16] 

Artificial Intelligence (AI) has broken into a dizzying gallop. Each day seems to herald some new AI-powered algorithmic wonder. As a general-purpose technology, AI has been dubbed “the new electricity.”[17] AI has profoundly transformed the modern world, permeating various aspects of daily life. It has evolved from merely mimicking human behaviour, where all actions must be pre-programmed by a developer, to advanced applications such as machine learning (ML) and neural networks (NNs) that adjust their behaviour based on new information.[18] 

In the legal profession, AI manifests in various ways. Legal professionals increasingly interact with AI through tools such as legal research assistants, contract analysis software, predictive analytics for litigation outcomes, automated document review, and even virtual courtrooms. These applications not only streamline legal workflows but also significantly alter the traditional methods of legal analysis, strategy, and decision-making. AI offers unprecedented opportunities for advancing human rights through enhanced access to fundamental services and greater societal equity.[19] 

Professor Eugene Volokh argues that if AI technology reaches the point that it can “create persuasive opinions, capable of regularly winning opinion-writing competitions against human judges,” then “we should in principle accept it as a judge.”[20] As Volokh recognizes, this is a “thought experiment,” as AI technology is currently far from this point.[21] Nevertheless, such a thought experiment can provoke important discussions about the proper role of humans versus Artificial Intelligence in the legal field.[22] Professor Volokh’s argument that we should replace judges with AI is contingent on them passing what he calls the “Modified John Henry Test,” an opinion-writing competition wherein “a computer program is arrayed against, say, ten average performers” in the given field, and if “the computer performs at least as well as the average performer,” then it passes the test and is an “adequate substitute for humans.”[23] According to Webster, Artificial Intelligence refers to the capability of computer systems or algorithms to imitate intelligent human behaviour.[24] 

Origin of Artificial Intelligence (AI)

The philosophical and technical foundations of AI date back to the mid-twentieth century. In 1950, British mathematician and computer scientist Alan Turing posed a seminal question in his paper “Computing Machinery and Intelligence”—“Can machines think?”—which laid the conceptual groundwork for Artificial Intelligence.[25] This inquiry introduced the idea that a machine’s ability to imitate human reasoning could be the basis of evaluating intelligence, leading to the now-famous “Turing Test.”   

However, the term “Artificial Intelligence” was formally introduced in 1955 through the Dartmouth Summer Research Conference Proposal, a pioneering document authored by John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude Shannon. This proposal articulated the vision of simulating every aspect of learning, or any other feature of intelligence, using machines.[26] John McCarthy, who is typically thought to have coined the term AI, was an American computer and cognitive scientist and one of the founders of the AI discipline. Marvin Lee Minsky was an American cognitive scientist in the field of AI and one of its main theorists.[27] 

Justice Demands Competence and Integrity

Ghana’s legal system is anchored in the 1992 Constitution, which serves as the supreme law of the land[28] and provides the basis for governance, justice delivery, and the protection of fundamental human rights.[29] As a common law jurisdiction,[30] Ghana draws heavily from English common law traditions,[31] customary law principles,[32] statutory enactments,[33] judicial precedents,[34] and, where applicable, international law.[35] These sources collectively shape legal doctrine, interpretation, and adjudication within Ghana’s courts. The judiciary, comprising the Superior Courts and Lower Courts,[36] functions independently and is entrusted with safeguarding the rule of law, protecting rights, and ensuring justice.

The image of successful lawyers suggests that their success depends on the ability to think on their feet, to use words persuasively, to mesmerize by their fluency, and to impress by their very presence and charisma those whom they address.[37] Certainly, those skills help. However, the factual basis for their confidence is effective preparation for the task they are undertaking.[38] The practicing lawyer seeks the law he or she needs in order to answer extremely specific legal questions posed by a particular problem on which the client seeks advice.[39] Lord Denning once said: 

"God forbid that a lawyer knows all the law, but a good lawyer is one who knows where to find the law." 

Legal research is the lifeblood of advocacy and adjudication. Lawyers, judges, and scholars rely on research to interpret the law, resolve disputes, and develop persuasive arguments. Accurate and thorough research ensures consistency, coherence, and predictability, all of which underpin a credible justice system. Bad research and poorly drafted documents risk misleading the court, weakening arguments, or undermining the administration of justice. 

Integrity remains a cornerstone of legal practice in Ghana. The Ghanaian legal profession, governed by the General Legal Council,[40] upholds strict ethical standards meant to safeguard public confidence in the justice system. Lawyers are required to adhere to principles of honesty, diligence,[41] confidentiality, and respect for the courts. A lawyer’s duty is not only to the client but also to the court and the administration of justice.[42] 

The core legislation governing the professional conduct and ethics of lawyers comprises the Legal Profession Act, 1960 (Act 32); the Legal Profession (Professional Conduct and Etiquette) Rules, 2020 (LI 2423); and the Ghana Bar Association Code of Ethics.

According to LI 2423, a lawyer must not engage in conduct, whether in the course of practice or otherwise, that is dishonest or calculated, or likely to a material degree, to be prejudicial to the administration of justice, or that adversely prejudices the ability of the lawyer to practise in accordance with the Rules. This means lawyers must avoid misleading the court and must instead present facts truthfully while upholding the dignity of the profession at all times. Rule 62(e) of the Legal Profession (Professional Conduct and Etiquette) Rules states that: 

A lawyer, in the offer of legal services, shall not use means that bring the profession or the administration of justice into disrepute.[43]

Thus, any act or conduct, whether in the course of research or otherwise, must not bring the administration of justice into disrepute. 

Integrity of Evidence in the Era of AI

We live in an era of “React First, Think Later” mentalities, where content posted by any person is believed to be true despite presenting only one side of the story. This tendency fuels rumours, disinformation, and misinformation, eventually leading to serious consequences for the other party.[44]  

Evidence plays an integral role in almost all civil and criminal proceedings, and judicial decisions are based on the strength of the evidence adduced by disputants in Court. If a party goes to court without any evidence, he will lose that legal battle. As the court held in The State v Ali Kasena,[45] “… a multitude of allegations does not make a proof.”[46] To succeed in a court action, the Plaintiff or the Prosecution will have to produce relevant evidence before the Court to support the action. On the same footing, the Defendant or the accused person must equally produce evidence to counter the claims or allegations levelled against him.[47]

In litigation, the rule of the game is evidence. Section 179(1) of the Evidence Decree, 1975 (NRCD 323) defines evidence as “testimony, writings, material objects or other things presented to the senses which are offered to prove the existence or non-existence of a fact.”[48] It is trite law that before any evidence can be admitted, it must first be relevant. Evidence is relevant where it has the tendency to prove or disprove a fact in issue or a matter in controversy between the parties. Lord Simon in DPP v Kilbourne[49] stated that “Evidence is relevant if it is logically probative or disprobative of a matter which requires proof.” In the case of Majolagbe v Larbi,[50] Ollennu J (as he then was) defined proof in law as: 

“The establishment of facts by proper legal means where a party makes an assertion capable of proof in some way, e.g. by producing documents, description of things, references to other facts, instances or circumstances, and his averment is denied, he does not prove it by merely going into the witness box and repeating that averment on oath, or having it repeated on oath by his witness. He proves it by producing other evidence of facts and circumstances, from which the court can be satisfied that what he avers is true.”[51]

From the above definition articulated by the late Justice Ollennu, some acceptable means of proving an allegation include the testimony of a witness or expert, documents, photographs, etc. 

In this era, the most prevalent means of proving an allegation is by producing or tendering a document[52] in electronic form. Digital evidence is defined as any probative information stored or transmitted digitally which a party to a judicial dispute may use in a trial. Examples of digital/electronic evidence are: email, digital photographs, word processing documents, videos, etc.  

The Electronic Transactions Act, 2008 (Act 772) refers to electronic evidence as an “electronic record”, which includes data generated, sent, received or stored by electronic means, voice where voice is used in an automated transaction, and a stored record.[53] The Act provides specific rules by which courts should admit electronic evidence.[54] 

It is important to note, however, that the general rules regarding admissibility of evidence are not, and cannot be, replaced by the rules on admitting electronic evidence. The fundamental rules of “relevancy” contained in the Evidence Decree, 1975 (NRCD 323)[55] are applicable in assessing the admissibility of any form of evidence, including that which is electronic in nature.  

In 2014, in the case of Anvar P.V. v P.K. Basheer,[56] the Supreme Court of India thoroughly examined how general rules of evidence apply to electronic records and how specific rules for digital evidence should be handled. The Court noted that electronic records can be easily altered or manipulated and are not the same as traditional documents. 

The case of Anvar P.V. v P.K. Basheer has had a significant impact beyond its specific facts, as it laid the groundwork for future developments in digital evidence law. Anvar remains a pivotal case in the area of digital evidence, shaping how electronic records are admitted as proof in India, and other common law jurisdictions can draw on this insightful decision. It places a significant burden on judiciaries across common law jurisdictions to develop and implement enhanced methods for authenticating electronic evidence, particularly in the era of Artificial Intelligence. 

The learned law lecturer Maxwell Opoku Agyemang, in his recent article on the digital space, stated that: 

“In advocating for the consideration of the information technology in our legal system, we must also not be oblivious of the suspicion of the courts in admitting digital or electronic evidence.”[57]

This suspicion of the courts may be seen in the US case of St. Clair v Johnny’s Oyster & Shrimp Inc:[58]

“While some look to the internet as an innovative vehicle for communication, the court continues to warily and wearily view it largely as one large catalyst for rumour, innuendo and misinformation. So as not to mince words, the court reiterates that this so-called web provides no way of verifying the authenticity of the alleged contentions.”[59]

The courts are enjoined to treat digital or electronic records as authentic where they satisfy the prescribed standards and are properly filed by authorised persons. Notwithstanding this position, ambiguities persist regarding the meaning and scope of the requirement of “due care,” a matter often left to be interrogated at the evidentiary stage of proceedings. The flexibility of the law, while beneficial, may inadvertently create room for abuse, including the admission of forged documents or manipulated audio-visual materials such as AI-generated videos and recordings. 

Given the presumption of authenticity accorded to electronic records, rigorous or painstaking verification processes may be overlooked, thereby diminishing the probative value of such evidence, particularly amid growing concerns about digital manipulation and tampering. Furthermore, increased reliance on digital systems exposes the evidentiary process to cybersecurity risks. Where such systems are compromised through hacking or unauthorised interference, electronic records may be altered or corrupted, further undermining their reliability and integrity as evidence.

Notwithstanding these concerns, there is no doubt that the courts cannot function effectively without adopting and adapting to the digital age. Consequently, trial judges and legal practitioners must develop a firm mastery of the rules governing the admissibility of digital or electronic evidence, which has become a prominent and unavoidable feature of modern evidentiary practice.[60] 

Grimm J predicted in Lorraine v Markel American Insurance Company that, 

“Because it can be expected that electronic evidence will constitute much if not most of the evidence used in future practice or at a trial, counsel should know how to get it right on the first try.”[61] 

Her Ladyship Dr. Dorinda Smith Arthur, in Akosua Serwaah Fosuh v Abusua-Panin Kofi Owusu and Others,[62] the famous Daddy Lumba case, cautioned that: 

“…in the era of photo shoots and Artificial Intelligence, the court is cautious in accepting photographs alone without further credible corroborating documentary evidence where proof of a fact demands strict documentary proof.” 

Our apex Court, in the cases of Cubbage v Asare & Others[63] and Abena Pokua v ADB,[64] has considered the intrusion of digital recordings and their admissibility in a trial. Though both cases concerned the admissibility of improperly obtained evidence, the court, especially in the Cubbage case, discussed the issue of privacy and the individual’s right to be left alone even in the digital age. 

Artificial Intelligence poses significant challenges to the admissibility of electronic evidence, primarily because AI tools can fabricate hyper-realistic audio and video content. Courts rely on electronic evidence as an accurate reflection of events, but AI blurs the line between authentic and manipulated media, raising concerns over reliability and authenticity. Given that AI tools can convincingly alter appearances, voices, and even behaviour, such content is often difficult to distinguish from authentic footage, particularly in the absence of advanced forensic analysis. 

Lawyers Must Think: AI Cannot Do It for You 

‘AI has turned the business world on its head and no more so than in the legal sector, where knowledge-intensive roles going toe-to-toe with knowledge-based tech is driving a significant shift in the legal industry.’[65] Nearly 70 years ago, a conference at Dartmouth College established Artificial Intelligence (AI) as a field of study.[66] The proposal submitted by the conference conveners described the project as an attempt “to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”[67] 

The question of whether machines can be intelligent has captivated humanity since the early days of computers and programming. When John McCarthy organized the now-famous first summer project at Dartmouth in 1956, he coined the term Artificial Intelligence (AI) to refer to this field. Through the creation of real machines (computers or robots) that display intelligent behaviour, the field of Artificial Intelligence aims to answer this question.[68]  

‘The public was first able to access ChatGPT, a generative AI tool, in November 2022. Everyone now has access to AI, including attorneys. Reports of the tool being used without verification quickly surfaced, especially when it was used to highlight fictitious court decisions. The question of whether lawyers can use AI solutions was raised by this. But this is not a novel topic—every new technology raises a similar query. Cloud computing serves as an example of how lawyers can use new technologies in a morally and legally acceptable way. Nonetheless, competence and adequate preparation are required.’[69] 

‘The quick development of Artificial Intelligence (AI) has significantly changed the way that knowledge is created, accessed, and assessed. This shift is especially noticeable in higher education, where research, instruction, and learning are increasingly incorporating AI-powered tools like ChatGPT and large language models (LLMs). Although these technologies offer many benefits, like increasing information accessibility, automating repetitive tasks, and improving written content, they also raise serious questions about how they will affect fields like legal profession which rely on ethical judgment, interpretative analysis, and nuanced reasoning.’[70]   

‘The rise of Artificial Intelligence (AI) tools has transformed legal practice, offering lawyers unprecedented speed and convenience in drafting court documents. However, the careless or uncritical use of AI in legal writing carries significant risks, including inaccuracies, fabrication of authorities, and ethical breaches. Courts in various jurisdictions have already sanctioned lawyers for submitting filings containing fictitious case citations or misleading arguments generated by AI without proper verification.’[71] AI should not be used as a substitute for critical and logical thinking. 

Sir Sam Jonah recently in one of his speeches said:

Artificial Intelligence: No algorithm, no machine, no AI can replace your own creativity, your own intelligence, your own grit. And above all no technology can replace authentic human communication.[72]

AI may possess vast knowledge, but it does not possess wisdom. That is why it must be guided by human judgment. When one fails to exercise proper oversight over AI, it reflects poorly on one’s own prudence and responsibility.

AI Errors, Fake Cases, Real Sanctions

This section briefly examines the dangers of relying on AI in legal drafting and reviews key court decisions from other jurisdictions where lawyers were penalised for their reckless use of AI tools, emphasising the importance of diligence and professional responsibility in the age of AI.  

Unknowingly citing non-existent authorities

Certain mistakes frequently occur when legal practitioners use AI carelessly to draft court documents. The following is a summary of these errors, which have resulted in judicial condemnation and penalties in multiple instances. One of the most serious consequences of careless AI use is submitting court documents that cite cases which do not exist. AI tools can generate convincing but entirely fictitious legal authorities, and lawyers who fail to verify these citations risk misleading the court and breaching their ethical duties.  

Below are examples of foreign cases where courts discovered that fictitious legal authorities were submitted due to AI usage:

In the case of Northbound Processing (Pty) Ltd v. South African Diamond and Precious Metals Regulator & Others,[73] the applicant’s legal team submitted written heads of argument that cited several non-existent legal authorities. The fictitious citations were later acknowledged by counsel to have originated from an AI-powered legal research tool (“Legal Genius”) which had generated plausible-sounding but fabricated case law, a phenomenon known as AI hallucination. The court noted that counsel failed to verify the citations against authoritative sources, which amounted to negligence and posed a serious risk to the integrity of judicial proceedings. Even though counsel apologised and stated there was no intent to mislead the court, the judge stressed that both deliberate and negligent misrepresentations of law undermine public confidence in the justice system. DJ Smit AJ observed that: 

“I gave counsel for Northbound further opportunity to respond to these issues. In response to a direct question from the court whether the incorrect citations constituted so-called Artificial Intelligence “hallucinations”, Mr Barclay-Beuthin confirmed that they appeared to be so. He explained that, aside from the time pressure caused by various factors – including the urgency of this application, severe time-pressure to complete the heads and the indisposition of Mr Nowitz (who initially acted as Northbound’s junior counsel but fell out of the matter after doing an initial draft of the heads without the incorrect citations) – he used an online subscription tool called “Legal Genius” which claimed that it was “exclusively trained on South African legal judgments and legislation.” Mr Subel SC also apologised unreservedly for the oversight on behalf of Northbound’s legal team. He stated that it was inconceivable to him that the authorities had been identified in the manner that Mr Barclay-Beuthin explained. He also explained that he relied upon an experienced legal team (which included two competent junior counsel) upon whom he believed he could (and indeed did) rely. He only did a “sense-check” on Northbound’s heads before they were filed and did not have sufficient opportunity to check the accuracy of the citations but considered that the propositions to which they related were trite and did not even require case law references. Finally, he emphasised that he independently prepared his oral argument, which made no reference to the heads as filed.”[74]

The judge referred the conduct of the applicant’s lawyers to the Legal Practice Council for investigation and possible disciplinary action. 

In the case of Ayinde v The London Borough of Haringey; Al-Haroun v Qatar National Bank QPSC,[75] the claimant submitted a witness statement that cited eighteen fictitious cases, as well as several genuine cases that did not support the arguments for which they were invoked. The claimant’s solicitor adopted these same citations in his own witness statement, without independently verifying their accuracy. The court characterized the solicitor’s failure to check the citations as “lamentable,” noting that “a lawyer cannot delegate responsibility for the accuracy of legal authorities or quotations to a lay client when submitting documents to the court.”[76] However, the court accepted that the lawyer was unaware the citations were fabricated and had no intention of misleading the court. On that basis, it concluded that the threshold for contempt had not been reached. The lawyer subsequently reported the matter to the Solicitors Regulation Authority himself, and the court also elected to make a referral. 

Also, in the case of Mavundla v. Member of the Executive Council, Department of Co-operative Government and Traditional Affairs,[77] the Court discovered that most of the cited cases did not exist, and others were misrepresented or irrelevant. Upon investigation, it emerged that the citations were drafted by a candidate attorney (Ms Farouk) without proper verification and incorporated into submissions by counsel (Ms Pillay) and the attorney of record (Mr Singh) without review. The court repeatedly adjourned proceedings to give the legal team an opportunity to substantiate the citations, which they failed to do. The court found the conduct of the applicant’s lawyers irresponsible, unprofessional, and a breach of their duty to the court. It stressed that lawyers must independently verify any authority cited and cannot rely on unverified research from juniors or clients. The judgment also noted that some of the fabricated authorities appeared to have come from AI-generated research, despite denials. 

E Bezuidenhout J observed that: 

“During the course of writing this judgment it came to my knowledge that the case reference or citation for Pieterse might be incorrect. I checked my notes and asked the chief stenographer to listen to the recording, but this was the exact reference provided by Ms Pillay. There is no such case reported in the South African Law Reports, nor in the All-South African Law Reports, and no reference to such a case could be found on the website of the South African Legal Information Institute, referred to as ‘SAFLII’. No reference could likewise be found for Burgers, Dube or Aon SA. I requested the two law researchers employed at the Pietermaritzburg High Court to peruse the supplementary notice of appeal and to provide all the cited cases to me. Of the nine cases referred to and cited, only two could be found to exist, albeit that the citation of one was incorrect.”[78]

The application for leave to appeal was dismissed with costs. 

In the American case of Eric Coomer v Michael J. Lindell,[79] the court considered issues arising from defective legal filings. The defendants’ brief opposing a motion in limine contained nearly thirty erroneous citations, including misquotes, misrepresentations of legal principles, incorrect attributions, and even references to non-existent cases. When questioned, defence counsel, Mr. Kachouroff, admitted that the filing had been run through generative AI and that he had failed to verify the citations. The court ordered counsel to show cause why sanctions should not be imposed. In their response, the defendants argued that the errors were due to the mistaken filing of a draft version, not intentional misconduct, and that corrective action was taken once the mistake was discovered. Despite this explanation, the court found the conduct inexcusable and ruled that sanctions were warranted against Mr. Kachouroff, the law firm McSweeney Cynkar & Kachouroff PLLC, and Ms. DeMaster, both jointly and individually.

In the case of Thomas Dexter Jakes v Duane Youngblood,[80] Attorney Blackburn, representing Defendant Youngblood, filed a motion to dismiss the Plaintiff’s complaint with prejudice but submitted briefs containing fabricated quotations and misrepresented case law, including false citations to the court’s own prior opinion. The Plaintiff, Jakes, identified these issues in his opposition. Rather than addressing the misconduct, Blackburn accused Jakes of similar misrepresentations, but the court found no such errors in Jakes’ filings. The court concluded that only Blackburn’s briefs contained fabricated and misleading authorities, and noted that Blackburn repeated the misconduct in his reply brief, which the court deemed a serious and troubling ethical violation. 

Judicial Caution on AI in Legal Research

In the case of Ayinde v The London Borough of Haringey; Al-Haroun v Qatar National Bank QPSC,[81] the President of the King’s Bench Division made the following observations on the use of AI during legal research by lawyers and advocates: ‘In the context of legal research, the risks of using Artificial Intelligence are now well known. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.’[82]

‘Those who use Artificial Intelligence to conduct legal research notwithstanding these risks have a professional duty to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example).’ [83] ‘There are serious implications for the administration of justice and public confidence in the justice system if Artificial Intelligence is misused. In those circumstances, practical and effective measures must be taken by those within the legal profession with individual leadership responsibilities and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services understands and complies with their professional and ethical obligations and their duties to the court if using Artificial Intelligence.’[84]  

The court has a range of powers to ensure that lawyers comply with their duties to the court. Where those duties are not complied with, the court’s powers include public admonition of the lawyer, the imposition of a costs order, striking out a case, referral to a regulator, the initiation of contempt proceedings, and referral to the police. The course of action followed will depend on the circumstances of the case.[85] Where a lawyer places false citations before the court (whether because of the use of Artificial Intelligence without proper checks being made, or otherwise) that is likely to involve a breach of ethical and regulatory requirements, and it is likely to be appropriate for the court to make a reference to the appropriate regulator.[86]

Best Practices for Ghanaian Lawyers When Using AI

While AI tools can enhance efficiency and support research, their careless use has led to ethical breaches and professional misconduct in several jurisdictions. Ghanaian legal practitioners should adopt the following best practices to use AI responsibly:

Verify All AI-Generated Content 

United States Senator Daniel Patrick Moynihan famously remarked: “You are entitled to your opinion, but you are not entitled to your own facts.” These cases of AI hallucinations demonstrate that AI seems to feel very entitled to its own facts. And therein lies the danger.[87] It is therefore prudent always to double-check any citations, legal principles, and factual claims generated by AI against authoritative and trusted sources (e.g., Law Reports, Statutes, and online databases such as The Law Platform, Dennislaw, and Superlawgh). Do not assume that an AI tool’s output is correct; treat it as a draft or suggestion, not a finished product. In Ayinde v The London Borough of Haringey; Al-Haroun v Qatar National Bank QPSC,[88] the court observed that: 

“Those who use Artificial Intelligence to conduct legal research notwithstanding these risks have a professional duty to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work.”[89]

Train and Educate your Team 

The truth is, we encounter AI daily whether we like it or not. Log into your email and you are offered an auto-reply; schedule a virtual consultation and you are greeted by an automated note-taker; upload a PDF to Adobe and it offers to summarize it instantly. Word processors now suggest full sentences; Grammarly politely offers to “rephrase in professional English”. There is simply no escaping the constant pop-up invitations to allow some computerized “assistant” to do in seconds what once required deliberate thought.[90]

Flowing from the above, it has become imperative for senior lawyers to educate their support staff, clerks, and juniors about the risks of AI (fabricated citations or facts) and how to detect them. Supervise their use of AI tools and check their work thoroughly before filing or presenting it. The case of Mavundla v. Member of the Executive Council, Department of Co-operative Government and Traditional Affairs[91] illustrates the critical need to train and supervise support staff, clerks, and juniors on the risks of using AI tools for legal research. In that case, a candidate attorney prepared a supplementary notice of appeal that cited numerous fabricated or non-existent cases, apparently relying on faulty AI-generated “research.” Neither the candidate nor the supervising attorneys verified the authorities before filing and presenting them in court. The court described this conduct as irresponsible and unprofessional, underscoring the lawyers’ duty to avoid misleading the court. Additionally, lawyers and staff should take courses and training programs on AI. Some recommended options include: AI Law and Legal Training, AI Fluency, AI for Lawyers: Law and Policy, and AI Security and Governance.

Be Sceptical of Unfamiliar Authorities 

If AI suggests a case or statute you have never seen, or one that seems unusual, pause and independently locate and read it. Use trusted primary sources such as the Constitution, Acts of Parliament, Ghana Law Reports, Dennislaw, The Law Platform Law Reports, Superlawgh, etc. A lawyer’s failure to do this was seen in Parker v Forsyth NO and Others,[92] where the plaintiff’s lawyers, at the request of the court and the defendant’s lawyers, submitted a list of authorities to the defendant’s lawyers. The defendant’s lawyers could not locate any of the cases referenced and requested the source of the authorities. The plaintiff’s lawyers eventually admitted that they had neither accessed nor read the cases cited and could not source them. It then came to light that the cases referenced had been sourced from an Artificial Intelligence chatbot, namely ChatGPT.

Understand the Limits of AI 

Be aware that AI does not “understand” law; it generates patterns based on data, which can include outdated, irrelevant, or fictional information. Treat its output as a working draft. Recognize that AI lacks nuance and context, especially for Ghanaian legal issues, which may not be well represented in its training data.

Exercise Judgment Under Pressure 

Even when deadlines are tight or clients are demanding, resist the temptation to rely blindly on AI outputs. Remember that professional responsibility to the court and client outweighs convenience.

Maintain Confidentiality 

Never enter confidential client information into public AI tools, which may not guarantee data privacy. Many publicly available AI tools (such as ChatGPT, Gemini, or Claude) operate as cloud-based services: any information entered into them is transmitted to servers controlled by the AI provider, often outside Ghana, where it may be stored, logged, or even reviewed for training purposes. This means your client’s private details may reside indefinitely on servers outside your control.

Ghanaian Courts and AI Missteps: Are They Ready?

While Ghanaian courts have not yet made judicial pronouncements specifically addressing lawyers’ careless use of AI-generated content, the risk is real and likely already manifesting in subtle ways. These issues may already be occurring unnoticed, unchallenged, or incorrectly attributed to simple errors rather than to systemic problems with how lawyers (or their juniors) are using technology. 

Courts have an important duty to safeguard the integrity of judicial proceedings by ensuring that submissions made before them are accurate, honest, and grounded in proper legal authority. This duty is twofold: it rests primarily on the lawyer, who must not mislead the court, and secondarily on the judge, who must carefully scrutinize submissions, interrogate citations when something appears amiss, and be equipped to detect misleading or fabricated material. Judges cannot abdicate this responsibility simply because the duty to verify lies first with the advocate.  

To fulfil this duty in the age of generative AI, the Ghanaian judiciary should proactively build the capacity of judges and their research assistants to identify the hallmarks of AI-fabricated authorities and to understand the limitations of AI tools. Judges should not rely simply on instinct or experience when confronting novel citations, as the sophistication of AI-hallucinated content can make errors difficult to detect without deliberate scrutiny.

Lessons from Other Jurisdictions

Many jurisdictions have already recognized the need to train judges and their legal research teams in technology-related competencies: 

In South Africa, following cases like Mavundla and Parker v Forsyth, some divisions of the High Court began running internal judicial colloquia on the ethical risks of AI and methods to verify questionable citations, often involving judicial researchers.  

In the United States, the Federal Judicial Center (FJC) has developed specific programs on AI literacy for judges, including workshops on identifying “hallucinated” legal authorities and understanding the ethical implications of AI-assisted advocacy. The FJC has also produced an extensive guide titled An Introduction to Artificial Intelligence for Federal Judges, which equips judges with technical background and guidance on identifying risks such as AI-generated hallucinated legal citations.  

In the United Kingdom, the Judicial College has incorporated modules on digital literacy and AI-related risks into its continuing judicial education programs. In some circuits, judges are provided access to verified legal databases and are trained to use them to check authorities quickly during hearings. In Canada, the National Judicial Institute has published guidelines and delivered seminars on technology in the courtroom, including the potential for AI to mislead courts if lawyers are careless.

Empanelling Judges by the Chief Justice using AI

The Chief Justice is the Head of the Judiciary and is charged with the administration, management, and supervision of the Judiciary. Article 125(4) of the 1992 Constitution states that: 

“The Chief Justice shall, subject to this Constitution, be the Head of the Judiciary and shall be responsible for the administration and supervision of the Judiciary.”

Additionally, the Chief Justice has the administrative duty to empanel Justices of the Supreme Court and other Courts in the country. This important role was upheld by the Supreme Court in the case of Agyei Twum v. Attorney General and Another.[93]

On 10th November, 2025, while being vetted by Parliament, the then Chief Justice Nominee, His Lordship Justice Paul Baffoe-Bonnie, outlined several initiatives he intended to pursue. Notably, he proposed employing Artificial Intelligence in the empanelling of judges. This view has generated significant debate among lawyers as well as other social and political commentators across the board.

In considering the use of AI to support this function, it is important to emphasise that AI can only serve as an advisory or administrative aid, such as helping to analyse caseloads, manage scheduling, or identify judges’ specialisations. It cannot replace the constitutional mandate of the Chief Justice. Empanelling involves sensitive judicial considerations, including expertise, conflicts of interest, seniority, and public confidence. These require human judgment, constitutional responsibility, and accountability, none of which AI can assume.  

Thus, while AI may enhance efficiency, the constitutional and jurisprudential position remains unequivocal: the Chief Justice alone bears ultimate responsibility for empanelling judges, and this power cannot be delegated to or replaced by AI. This is because, where a statute confers power on an individual, that person is duty-bound to perform the function personally.[94]

Why AI Cannot Replace Lawyers and Judges

Experts at a Harvard Law School event agreed that while AI tools like ChatGPT, Gemini, and Claude are increasingly used in legal practice, they are not ready to replace judges or act as final decision-makers. Judges and lawyers already use AI for research, drafting, and case preparation, but serious limitations remain. Judge Kevin Newsom’s experience using ChatGPT highlighted that AI can help explore ordinary meaning in textualist interpretation, yet it should not decide cases. Research presented by Carissa Chen showed that large language models give inconsistent, unpredictable answers, are influenced by irrelevant information, and can be manipulated through poisoned data, meaning they fail to interpret text the way humans do.[95]

Although AI might assist in identifying areas where consensus exists on ordinary meaning, its randomness and lack of stable reasoning make it unreliable for legal judgment. Panellists agreed that AI may become more useful if trained on empirical linguistic data and used with transparency, but for now, human judges must retain control. AI can support legal work, but it cannot replicate human wisdom, constitutional responsibility, or the stable interpretive reasoning required in adjudication.[96] AI will not replace lawyers entirely, but lawyers who learn to use it wisely will inevitably outpace those who resist it.[97]

The following are some of the reasons:

Legal Work Requiring Contextual and Ethical Judgement

A significant portion of legal work involves decisions that depend not only on written rules but also on ethical values and complex social considerations. AI, while excellent at analysing data and finding patterns, cannot replace lawyers who understand the social context, values, or moral aspects of a case.

AI Helps, But Doesn't Take Over Creativity and Strategy

The work of a lawyer or legal consultant often requires creative thinking and strategic decision making in dealing with unforeseen situations. AI can provide data and pattern analysis, but the final decision about legal strategy or the best way to approach a case still requires human experience.                                                                                               

Human Interaction that Cannot Be Replaced

A significant portion of legal work involves direct interaction with clients. Lawyers often serve as personal advisors who provide emotional support, understanding, and strategic advice that goes beyond just legal interpretation. In this regard, lawyers rely heavily on communication skills, empathy, and interpersonal abilities, which are very difficult for AI to replicate.

AI Requires Human Oversight and Final Decision-Making

Although AI can assist with administrative or technical tasks, oversight from legal professionals is still needed to ensure that decisions made are ethical, legal, and socially appropriate. AI can provide recommendations or analysis, but the final decision about the legal actions taken still rests with human lawyers. Lord Patrick Devlin in his renowned book entitled ‘The Judge’ stated: 

"If a judge leaves the law and makes his own decision, even if in substance they are just, he loses the protection of the law and sacrifices the appearance of impartiality which is given by adherence to the law. He expresses himself personally to the dissatisfied litigant and exposes himself to criticism. But if the stroke is inflicted by law, it leaves no sense of individual injustice; the losing party is not a victim who has been singled out; it is the same for everybody he says. And how many a defeated litigant has salved his wounds with the thought that the law is an ass."[98]

Limitations in Handling Unique or Unpredictable Situations

AI excels at solving structured problems based on historical data, but many legal situations are unique or unpredictable, where AI might struggle to offer the right solution. Law is a dynamic field that often involves uncertainty or gaps in the law that require human judgment.

AI and Legal Processes That Can Be Automated

While AI will not replace lawyers entirely, there are many repetitive and administrative tasks in the legal profession that can be automated, for example: drafting standard contracts, document search, document review, and managing large volumes of data for compliance and risk analysis.

Changes in the Role of Legal Professionals

It is more realistic to say that AI will change the role of legal professionals rather than replace them. Lawyers and other legal professionals are likely to focus more on higher-value tasks, such as legal strategy, negotiation, consultation, and complex litigation, while AI will handle the more administrative and technical aspects.

Recommendations 

Given the risks already observed in other jurisdictions, Ghana cannot consider itself immune. There have been instances of fabricated cases and authorities appearing in some lower courts’ rulings and judgments, as well as in certain submissions filed by lawyers before the courts. 

To prevent this practice from becoming widespread in our courts and legal practice, the Ghanaian Judiciary and the Ghana Bar Association, through the Judicial Training Institute and Bar Conferences, would be well advised to:

  • Develop short courses or training modules for Judges, Lawyers, Research Assistants and firm Clerks on the impact of AI on legal practice and submissions.
  • Create quick-reference guides to assist practitioners in verifying citations, detecting inaccuracies, and identifying red flags in AI-generated work.
  • Promote sustained professional dialogue between the Bench and the Bar on preserving integrity and ethical standards within an AI-enabled legal environment.

For the avoidance of doubt, L.I. 2423 ought to be amended to include sanctions for lawyers who submit fabricated authorities to the court. 

It is time to regulate Artificial Intelligence (AI) in Ghana in a manner that removes barriers and encourages the adoption of AI applications across sectors.

The regulators of legal education should also incorporate the ethical use of Artificial Intelligence into the academic curriculum, as AI is here to stay.

Conclusion

Artificial Intelligence offers Ghana’s legal system significant opportunities for efficiency, improved research, and enhanced judicial administration. Yet its benefits come with equally serious risks when used carelessly, including fabricated authorities, ethical breaches, and damage to public confidence in the justice system.  

As seen in other jurisdictions, unverified AI-generated content can lead to sanctions, professional misconduct findings, and erosion of judicial integrity. Ghana must therefore adopt AI cautiously, embracing its advantages while reinforcing the irreplaceable role of human judgment, ethical reasoning, and constitutional responsibility. 

Lawyers are reminded to remain diligent, verifying every citation and exercising independent thought, while judges should also be equipped to detect AI-related errors. Training, oversight, and collaborative guidance from the Bench and Bar are essential to ensuring that AI strengthens rather than undermines justice delivery. In the end, AI should serve as a tool, not a substitute, for the competence, integrity, and wisdom that define Ghana’s legal profession.

------------

[1] ‘Robo’ was coined from the word Robot. According to Britannica, robot is any automatically operated machine that replaces human effort, though it may not resemble human beings in appearance or perform functions in a humanlike manner. Although there is no universally accepted definition of Justice, Black’s Law Dictionary 11th Edition defines Justice as the fair treatment of people. The concept of justice is traced to the great Roman jurist, Ulpian, who defined justice as, ‘To live honourably, not to harm your neighbour and to give everyone his due.’

[2] Ibid

[3] Y Guo & D Lee ‘Leveraging ChatGPT for enhancing critical thinking skills’ (2023) Journal of Chemical Education 4877.

[4] Paul Mukibi and Aziz Kitaka, The Dangers of Using AI while Drafting Court Documents: An Analysis of Court Decisions on Lawyers’ Careless Use of AI (AI and Legal Practice, Guidance Paper No 1, July 2025).

[5] Ibid 

[6] Philip Ebow Bondzi-Simpson, Company Law in Ghana (3rd edn, Avant Associates Ltd 2020) 9.

[7] Ibid 

[8] Ibid 

[9] No 2:23-CV-118-KHR (D Wyo)

[10] Wadsworth v Walmart Inc and Jetson Electric Bikes LLC, Case No 2:23-CV-118-KHR (D Wyo, Jury Trial Demanded, Declaration of T Michael Morgan in Response to Order to Show Cause)

[11] Research Briefing on AI by the House of Commons Library titled ‘Artificial Intelligence: A reading list’ https://researchbriefings.files.parliament.uk/documents/CBP-10003/CBP-10003.pdf

[12] Organization for Economic Co-operation and Development’s (OECD) definition of an “AI System” as of 3 May 2024.

[13] The OECD, ‘Artificial Intelligence in Society’ (2019) OECD Publishing 36, 47 <https://doi.org/10.1787/eedfee77-en > accessed 17 July 2025 

[14] Caiky Avellar, Artificial Intelligence Governance Professional (AIGP) Study Notes (2025).

[15] Ziyad Saleh, ‘Artificial Intelligence: Definition, Ethics and Standards’ (2019) 

[16] Ibid 

[17] United Nations Development Programme, Human Development Report 2025: A matter of choice — People and possibilities in the age of AI (United Nations Development Programme, New York, 2025) Print ISBN 9789211576092, PDF ISBN 9789211542639, Print ISSN 0969-4501, Online ISSN 2412-3129.

[18] M Bearman & R Luckin ‘Preparing university assessment for a world with AI: Tasks for human intelligence’ in M Bearman and others Re-imagining university assessment in a digital world 2020 51: machine learning is commonly seen in AI systems that outperform humans in strategy-based games like chess, drive autonomous vehicles, process spoken language and verify identities in airport e-passport gates. It is also widely used in spam filtering, where email providers continuously refine their ability to detect junk mail by learning from new patterns.

[19] L Zhdankina, Protecting Human Dignity and Equality in the Era of Artificial Intelligence: Beyond Efficiency (PhD, University of Glasgow; The British Academy ECR, UK).

[20] Eugene Volokh, Chief Justice Robots, 68 DUKE L.J. 1135, 1138 (2019).

[21] Id. at 1137. This paper thus falls more into what has been called the “futurist” category of AI literature. See Harry Surden, Artificial Intelligence and Law: An Overview, 35 GA ST. L. REV. 1305, 1306 (2019) (“A key motivation in writing this article is to provide a realistic, demystified view of AI that is rooted in the actual capabilities of the technology. This is meant to contrast with discussions about AI and law that are decidedly futurist in nature.”).

[22] Andrew C Michaels, ‘Artificial Intelligence, Legal Change, and Separation of Powers’ (2020) 88 University of Cincinnati Law Review 1083

[23] Volokh, supra note 1, at 1138-39

[24] “Artificial Intelligence Definition & Meaning - Merriam-Webster,” available at: https://www.merriam-webster.com/dictionary/artificial%20intelligence (last visited December 14, 2025).

[25] Ibid 

[26] Alzbeta Krausova, ‘Intersections between Law and Artificial Intelligence’ (2017) 27(1) International Journal of Computer (IJC) 55, 68 <https://core.ac.uk/download/pdf/229656008.pdf> accessed 17 July 2025

[27] Daniel Ben-Ari, Yael Frish & Adam Lazovski, "Danger, Will Robinson"? Artificial Intelligence in the Practice of Law: An Analysis and Proof of Concept Experiment, 23 Rich. J.L. & Tech 1 (2022). <https://core.ac.uk/download/pdf/229656008.pdf > accessed 17 July 2025

[28] Constitution of Ghana 1992, Article 1(2)

[29] Ibid 

[30] Thomas Kojo Quansah and Theophilus Edwin Coleman, ‘Presence as a Ground for Jurisdiction in Common Law Africa’ (2025) 12(1) Journal of Comparative Law in Africa 103; W C Daniels, ‘Administrative Jurisdiction in Ghana’ (1973) 5 Review of Ghana Law 164; W B Harvey, ‘The Evolution of Ghana Law Since Independence’ (1962) 27(4) Law and Contemporary Problems 581.

[31] Constitution of Ghana 1992, Article 11 (1) (e)

[32] Constitution of Ghana 1992, Article 11 (3)

[33] Constitution of Ghana 1992, Article 11 (1) (b)

[34] Constitution of Ghana 1992, Article 11 (2)

[35] Constitution of Ghana 1992, Article 73

[36] Constitution of Ghana 1992, Article 126(1)

[37] The City Law School. Opinion Writing and Case Preparation. Edited by Nigel Duncan and Allison Wolfgarten. Oxford: Oxford University Press, 2024.

[38] Ibid 

[39] Ibid 

[40] Section 1 of Legal Profession Act, 1960 (Act 32) establishes the General Legal Council as the body responsible for the organization of legal education and for upholding standards of professional conduct.

[41] LI 2423, rule 10

[42] Rule 1 of the Legal Profession (Professional Conduct and Etiquette) Rules, 2020 (LI 2423) provides that the rules are to be interpreted in a manner that recognized that: a) A lawyer has a duty in the practice of law to discharge his responsibilities to his client, the Court, the public and any other lawyer with integrity. b) A lawyer has a special responsibility by virtue of the privileges afforded the legal profession and the important role the profession plays in a free and democratic society and in the administration of justice, including a special responsibility to recognize the diversity of the Ghanaian community, to protect the dignity of the individuals and to respect human rights laws in force in the country c) It is the duty of the lawyer at all times to uphold the dignity and standing of the legal profession.

[43]  L.I. 2423, rule 62(e)

[44] Sifa P Inamdar, ‘Accountability in the Age of Deepfake: Legal and Ethical Challenges in AI-Driven Manipulations’ in Pradeep Kumar and Gaurav Gupta (eds), Artificial Intelligence and Law (Satyam Law International 2024)

[45] [1962] 1 GLR 144

[46] Ibid

[47] Benjamin Tachie Antiedu, ‘Reading the Law’ (2019). Pentecost Printing Press page 247

[48] Evidence Act, 1975 (NRCD 323), Section 179(1)

[49] [1973] AC 729

[50] [1959] GLR 190

[51] Ibid at 192

[52] Document refers to anything upon which evidence or information is recorded in any manner intelligible to the senses or capable of being made intelligible by use of equipment. Examples, Photocopies, Pen drives, etc. See the cases of Dubai Bank v Galadari (No.7) [1992] 1 All ER 658; R v Daye [1908] 2 KB 333; Grant v Southwestern & County Properties Ltd [1975] 2 All ER 465; Delby & Co v Weldon (No. 9) [1991] 2 All ER 901

[53] Electronic Transactions Act 2008 (Act 772), Section 144.

[54] Electronic Transactions Act 2008 (Act 772), Section 7(1).

[55] Evidence Act 1975 NRCD 323, Part IV.

[56] Anvar P V v P K Basheer AIR 2015 Supreme Court 180

[57] Maxwell Opoku Agyeman, ‘The New Frontier of Law on Trespass and Unlawful Search of Property’ (unpublished manuscript)

[58] (1999) 76 F Supp 2d 773 at 774

[59] Ibid 

[60] “It appears that the future about which Grimm J (cited infra) spoke is upon us (or at least very nearly upon us) in Ghanaian practice. Digital technology permeates every facet of our daily lives. From the means by which most people communicate, to how we regularly access news and general information, broadcast our thoughts and opinions, advertise our businesses and navigate unknown locations, digital technology plays a role. This means that evidence can be extracted from this technology to attempt to prove different types of facts in issue.” See Elorm Kwame Kota Zormelo, ‘Admissibility of Electronic Evidence in Ghanaian Practice’ (2018) Ghana Law Hub https://ghanalawhub.com/admissibility-of-electronic-evidence-in-ghanaian-practice/ accessed 14 December 2025.

[61] Lorraine v Markel American Insurance Company 241 F.R.D. 534 (D.Md. May 4, 2007).

[62] Suit No GJ12/20/2026 (High Court, Kumasi) (unreported)

[63] Raphael Cubagee v Michael Yeboah Asare, K Gyasi Company Limited, Assembly of God Church [2018] DLSC141 

[64] Mrs Abena Pokuaa Ackah v Agricultural Development Bank [2017] DLSC17580

[65] Maria Ward Brennan, ‘Legora CEO: Law Degree Is Not Enough — AI Knows the Law, What Do You Bring?’ City A.M. (2 December 2025)

[66] Nathalie A Smuha (ed), The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence (Cambridge University Press 2022).

[67] John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955

[68] Ibid at 17

[69] AI in the Work of an Attorney-at-Law: Recommendations on How Attorneys-at-Law Should Use AI-Based Tools (1st edn, Warsaw 2025).

[70] Lizelle le Roux, ‘Artificial Intelligence, Legal Education and the Imperatives of Thinking and Human Judgment’ in Charles Maimela (ed), Legal Pedagogy, Practice and Curriculum Transformation: What Does the Future Hold and Look Like? (Pretoria University Law Press (PULP) 2025).

[71] Paul Mukibi and Aziz Kitak, The Dangers of Using AI while Drafting Court Documents: Analysis of Court Decisions on Lawyers’ Careless Use of AI (AI and Legal Practice Guidance Paper No 1, July 2025).

[72] Joy News, 26 September 2025

[73] (High Court of South Africa, Gauteng Division, Johannesburg, Case No: 2025-072038)

[74] Ibid at page 20 paragraph 29

[75] [2025] EWHC 1383

[76] Ibid

[77] 2025 (3) SA 534 (KZP)

[78] Ibid at page 10 paragraph 20

[79] Civil Action No. 22-cv-01129-MYW-SB

[80] Civil Action No. 2:24-cv-1608

[81] [2025] EWHC 1383,

[82] Ibid paragraph 6

[83] Ibid paragraph 7

[84] Ibid paragraph 8

[85] Ibid paragraphs 23-24

[86] Ibid paragraph 29

[87] Philile Shandu, ‘The Wonders and Dangers of Artificial Intelligence in Legal Research’ (December 2025) Bar News (KwaZulu-Natal Bar) 18

[88] [2025] EWHC 1383

[89] Ibid

[90] Shandu (n 87) 17

[91] Ibid

[92] [2023] ZAGPRD 1

[93] [2005-2006] SCGLR 732

[94] See Tsikata vs. Chief Justice & Attorney-General [2002] DLSC1275

[95] ‘Can ChatGPT Replace Judges?’ (Harvard Law School)

[96] Ibid

[97] Shandu (n 87) 18

[98] Patrick Devlin, The Judge (1979) 4