June 2, 2024

ChatGPT Was Asked for Legal Advice: 5 Reasons Why It's Not Recommended

Numerous individuals seek legal assistance online. With the emergence of artificial intelligence (AI) chatbots like ChatGPT, Google Bard, Microsoft Co-Pilot, and Claude, you might consider consulting them for legal queries.

The chatbots' initial answers were often based on American law.

A recent survey conducted jointly by the Law Society, the Legal Services Board, and YouGov in 2023 shed light on the pervasive need for legal guidance among individuals. According to the findings, approximately two-thirds of respondents encountered legal issues within the preceding four years, with employment, finance, welfare and benefits, and consumer matters emerging as the most prevalent concerns.

Despite the prevalence of legal challenges, a significant portion of those in need lack access to professional assistance due to financial constraints. Among respondents facing legal issues, only 52% were able to seek help from legal professionals, while 11% relied on support from family and friends, leaving the remainder without any form of assistance.

In response to this gap in access to legal counsel, many individuals turn to online resources, including artificial intelligence (AI) chatbots such as ChatGPT, Google Bard, Microsoft Co-Pilot, and Claude. These AI-driven tools leverage generative AI technology to interpret and respond to inquiries, offering users the opportunity to seek legal advice in a conversational manner. However, the question arises: Can these chatbots be relied upon for accurate guidance?

To assess the efficacy of AI chatbots in providing legal assistance, a recent study published in the International Journal of Clinical Legal Education conducted a comprehensive evaluation. Researchers posed six common legal questions spanning family, employment, consumer, and housing law to various AI platforms, including ChatGPT 3.5 (free version), ChatGPT 4 (paid version), Microsoft Bing, and Google Bard. These questions mirrored typical inquiries received at The Open University Law School's free online law clinic.

The study revealed that while AI chatbots can offer legal insights, the accuracy and reliability of their responses varied. Researchers identified five common errors, highlighting potential limitations in the ability of these tools to provide consistently dependable guidance.

  1. The origin and context of the law might not be fully understood or applicable to the specific situation.

The initial responses provided by the AI chatbots often relied on American legal principles, a detail that was not always explicitly stated or readily apparent. For users lacking legal expertise, this could lead to the assumption that the advice pertained to their own jurisdiction. Complicating matters further, the chatbots occasionally failed to clarify the jurisdictional differences in law, leaving users unaware that legal regulations can vary based on geographic location.

The complexity of legal distinctions is particularly pronounced in the United Kingdom, where laws diverge among its constituent countries: England and Wales, Scotland, and Northern Ireland. For instance, tenancy laws in Wales differ from those in Scotland, Northern Ireland, and England, while divorce proceedings and the dissolution of civil partnerships are subject to distinct procedures in Scottish and English courts.

To address this discrepancy, researchers introduced an additional query: "Is there any English law that covers this problem?" This follow-up was necessary for most inquiries and prompted the chatbots to tailor their responses to English legal standards. This step was crucial in ensuring that users received advice applicable to their specific jurisdiction within the UK.

  2. The law might be outdated, leading to inaccurate or irrelevant advice for current legal matters.

Another notable observation from our study was the occurrence of responses referencing outdated legal statutes, which have since been supplanted by updated regulations. A prime example of this was evident in the realm of divorce law, where significant reforms were implemented in April 2022, abolishing fault-based divorce proceedings in England and Wales.

Despite these legislative changes, certain responses provided by the AI chatbots alluded to the previous legal framework. This discrepancy underscores a potential limitation inherent in AI training methodologies: reliance on vast datasets of historical legal information. Due to the dynamic nature of legal systems, these datasets may not always encompass the most recent statutory amendments or judicial rulings.

The consequence of this gap in data currency is a risk that users may receive advice based on outdated legal provisions, potentially leading to misinterpretation or misapplication of the law. This finding underscores the importance of regularly updating AI training datasets to reflect contemporary legal standards and ensuring that users are equipped with accurate and relevant information.

  3. Providing legal advice without proper qualifications can result in misinformation or incorrect guidance.

Our evaluation uncovered significant shortcomings in the accuracy and reliability of responses provided by AI chatbots, particularly concerning family and employment law queries. While responses to housing and consumer-related questions demonstrated comparatively higher accuracy, gaps in coverage and occasional inaccuracies persisted across all categories. Crucially, these shortcomings were compounded by the inherently persuasive nature of the well-articulated responses generated by the chatbots, potentially leading users to place undue trust in their guidance.

The challenge of discerning the veracity of AI-generated legal advice is exacerbated by the lack of legal expertise among users. Without a comprehensive understanding of legal principles, individuals may struggle to identify inaccuracies or omissions in the advice provided, thereby increasing the risk of erroneous interpretation or application of the law.

Compounding these concerns are instances where individuals have relied on AI chatbots in legal proceedings. In a notable case in Manchester, a self-represented litigant purportedly cited fictitious legal precedents obtained from ChatGPT to bolster their argument in a civil court case. This underscores the potential real-world consequences of misplaced trust in AI-generated legal advice and highlights the need for caution when relying on such technologies, particularly in legal settings.

  4. Generic or inappropriate advice may not address the specific nuances or complexities of the legal issue at hand, potentially leading to adverse outcomes.

Our investigation revealed a notable deficiency in the level of detail provided by AI chatbots, hindering users' ability to comprehend their legal predicaments and devise appropriate resolutions. Rather than directly addressing the specific legal queries posed, responses often offered general information on relevant topics, failing to provide the nuanced understanding necessary for effective problem-solving.

Interestingly, despite these limitations, AI chatbots demonstrated a degree of proficiency in suggesting practical, non-legal strategies to tackle issues. While these suggestions may serve as a preliminary step in addressing concerns, it is imperative to recognize that such approaches are not universally effective. In many cases, legal recourse may be indispensable to assert one's rights and achieve a satisfactory resolution.

This finding underscores the importance of caution when relying solely on AI-generated guidance, particularly in legal matters where precision and comprehensiveness are paramount. While AI chatbots can offer valuable insights, they should be regarded as one tool among many in the pursuit of legal understanding and resolution. Ultimately, the decision to pursue legal action should be informed by a comprehensive assessment of individual circumstances and expert advice from qualified legal professionals.

  5. "Pay to play" arrangements could lead to biased or compromised legal counsel, undermining the integrity and fairness of the legal process.

Our study highlighted a notable discrepancy in the performance of AI chatbots, with ChatGPT 4, the paid version, demonstrating superior efficacy compared to its free counterparts. While this disparity may offer benefits to those able to access and afford premium services, it also raises concerns about the perpetuation of digital and legal inequality.

As technology continues to advance, there remains the possibility that AI chatbots will evolve to deliver more accurate and reliable legal guidance. However, until such advancements are realized, it is imperative for individuals to exercise caution when relying on these tools to address legal issues. Our findings underscore the importance of recognizing the inherent limitations of AI chatbots and the associated risks of misinformation or incomplete advice.

In light of these considerations, we advocate for individuals to seek assistance from reputable sources such as Citizens Advice, which offer up-to-date and accurate information while also providing personalized guidance tailored to individual circumstances. Moreover, it is worth noting that while AI chatbots may answer legal queries, they often state that they cannot provide legal advice and recommend consulting a professional. Building upon our research, we echo this recommendation, urging individuals to seek professional legal assistance when navigating complex legal matters.

For questions or comments write to writers@bostonbrandmedia.com

Source: NDTV
