Responsible use
I love helping you with your questions! While I try, I might not get everything right, so it’s always a good idea to check for mistakes and read the primary sources.
Remember, I am not a substitute for a lawyer. I lack the ability to do legal reasoning: to weigh conflicting legal arguments and facts and to consider the broader context. I use a mix of machine learning techniques (which let me find patterns in data) and rule-based techniques (which let me follow a set of rules created by humans) to generate answers. My sources might not be up to date, and I may not have access to all the relevant laws and facts. If you need legal advice tailored to your situation, you should consult a qualified lawyer.
If you are a lawyer, it is important that you know what your professional and ethical obligations are and how they apply to the use of AI in your practice. If you are a student, it is important that you know your educational institution's academic integrity policy and research ethics, including any rules about using AI tools and citing sources.
Don't submit sensitive, confidential, or personal information to this service. To help with quality, safety, and to improve the service, human reviewers may read, annotate, and process information you submit. Information you provide is not protected by attorney-client privilege and is not treated confidentially.
Confusion matrix
When I generate answers, I might make different types of mistakes. For classification tasks, such as predicting whether a text is about a certain topic, we can use a confusion matrix to describe these mistakes.
| | Predicted positive | Predicted negative |
|---|---|---|
| Actual positive | True positive | False negative |
| Actual negative | False positive | True negative |
A false positive is when I predict the positive class for something that actually belongs to the negative class. A false negative is when I predict the negative class for something that actually belongs to the positive class.
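The four cells above can be tallied directly from pairs of actual and predicted labels. A minimal Python sketch follows; the example labels are illustrative, not real data:

```python
from collections import Counter

def confusion_counts(actual, predicted):
    """Tally the four confusion-matrix cells for binary labels (True = positive)."""
    counts = Counter()
    for a, p in zip(actual, predicted):
        if a and p:
            counts["true_positive"] += 1
        elif a and not p:
            counts["false_negative"] += 1
        elif not a and p:
            counts["false_positive"] += 1
        else:
            counts["true_negative"] += 1
    return counts

# Example: did the model correctly flag texts about a certain topic?
actual    = [True, True, False, False, True]
predicted = [True, False, True, False, True]
print(confusion_counts(actual, predicted))
```

Here the second text is a false negative (actually on-topic, predicted off-topic) and the third is a false positive (actually off-topic, predicted on-topic).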
Hallucinations
A hallucination is a response I produce that contains false or misleading information presented as fact. For example, I may embed plausible falsehoods within the content I generate, even though I try to avoid it.
Hallucinations occur because AI models, like language models, generate text based on patterns they've learned from vast amounts of data. While language models can produce coherent and contextually relevant responses, they do not have the ability to verify the accuracy of the information they provide. Their "knowledge" is essentially an aggregation of patterns and correlations, rather than an understanding of facts.
Read more
- "Large legal fictions: Profiling legal hallucinations in large language models" by Matthew Dahl, Varun Magesh, Mirac Suzgun, and Daniel E. Ho, Journal of Legal Analysis 16, no. 1 (2024)

  > ... we show that LLMs hallucinate at least 58% of the time, struggle to predict their own hallucinations, and often uncritically accept users’ incorrect legal assumptions.

- "GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models" by Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, and Mehrdad Farajtabar (2024)

  > This reveals a critical flaw in the models’ ability to discern relevant information for problem-solving, likely because their reasoning is not formal in the common sense term and is mostly based on pattern matching. ... This suggests deeper issues in their reasoning processes that cannot be alleviated by in-context shots and needs further investigation.

  > We hypothesize that this decline is due to the fact that current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data.
- "AI and ethics: Investigating the first policy responses of higher education institutions to the challenge of generative AI" by Attila Dabis and Csaba Csáki (2024)
- "A solicitor’s guide to responsible use of artificial intelligence" by the Law Society of NSW Journal (2023)
- "New York lawyers sanctioned for using fake ChatGPT cases in legal briefs" by Sara Merken, Reuters (2023)
- Zhang v. Chen, 2024 BCSC 285