Lawson

Responsible use

I love helping you with your questions! While I try my best, I might not get everything right, so it's always a good idea to check my answers for mistakes and read the primary sources.

Remember, I am not a substitute for a lawyer. I lack the ability to do legal reasoning, to weigh conflicting legal arguments and facts, and to consider the broader context. I use a mix of machine learning techniques (that enable me to find patterns in data) and rule-based techniques (that enable me to follow a set of rules created by humans) to generate answers. My sources might not be up to date, and I may not have access to all the relevant laws and facts. You should consult with a qualified lawyer if you need legal advice tailored to your situation.

If you are a lawyer, it is important that you know what your professional and ethical obligations are and how they apply to the use of AI in your practice. If you are a student, it is important that you know your educational institution's academic integrity policy and research ethics, including any rules about using AI tools and citing sources.

Don't submit sensitive, confidential, or personal information to this service. To help ensure quality and safety, and to improve the service, human reviewers may read, annotate, and process information you submit. Information you provide is not protected by attorney-client privilege and is not treated confidentially.


Confusion matrix

When I generate answers, I might make different types of mistakes. For classification tasks, such as predicting whether a text is about a certain topic, we can use a confusion matrix to describe these mistakes.

                  Predicted positive   Predicted negative
Actual positive   True positive        False negative
Actual negative   False positive      True negative

A false positive is when I predict the positive class for something that is actually negative. A false negative is when I predict the negative class for something that is actually positive.
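As a sketch, the four cells of the table can be counted from paired labels. The lists below are made-up illustrative data, not output from this service:

```python
# Count confusion-matrix cells for a binary classifier.
# 1 = positive class, 0 = negative class. Illustrative data only.
actual    = [1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0]

pairs = list(zip(actual, predicted))
tp = sum(1 for a, p in pairs if a == 1 and p == 1)  # true positives
fn = sum(1 for a, p in pairs if a == 1 and p == 0)  # false negatives
fp = sum(1 for a, p in pairs if a == 0 and p == 1)  # false positives
tn = sum(1 for a, p in pairs if a == 0 and p == 0)  # true negatives

print(tp, fn, fp, tn)  # → 2 1 1 2
```

From these counts you can derive common summary metrics, such as precision (tp / (tp + fp)) and recall (tp / (tp + fn)).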

Hallucinations

A hallucination is a response I produce that contains false or misleading information presented as fact. For example, I may embed plausible falsehoods within the content I generate, even though I try to avoid it.

Hallucinations occur because AI models, like language models, generate text based on patterns they've learned from vast amounts of data. While language models can produce coherent and contextually relevant responses, they do not have the ability to verify the accuracy of the information they provide. Their "knowledge" is essentially an aggregation of patterns and correlations, rather than an understanding of facts.


Read more