Artificial Intelligence (AI) and Large Language Model (LLM) Notice
November 2025
Executive Introduction
Artificial intelligence (AI) tools are changing how legal work is delivered, helping to improve accuracy, efficiency, and access to information. We use these tools carefully and responsibly, always ensuring that professional judgment, confidentiality, and ethical standards remain at the centre of every piece of work. This notice explains how AI and large language models (LLMs) may be used at Kinshi, the safeguards in place, and how client trust and data protection are maintained at all times.
Commitment to Responsible and Ethical Innovation
Artificial Intelligence (AI) technologies, including Large Language Models (LLMs), may be used to support and enhance the delivery of legal and related services. These technologies are implemented in a way that upholds the professional and ethical standards expected of solicitors, including duties of competence, integrity, confidentiality, and transparency.
This notice explains how AI and LLMs are used, their limitations, and the ethical and governance principles guiding their use.
Distinguishing AI and LLMs in Legal Applications
Artificial Intelligence (AI)
AI refers to computer systems capable of performing tasks that typically require human intelligence, such as prediction, automation, or data analysis. In practice, AI may assist with:
contract review and analytics
document classification and tagging
workflow automation and case triage
Large Language Models (LLMs)
LLMs are a subset of AI focused on understanding and generating human language. In a legal context, they may be used for:
drafting and summarising documents
assisting with research
generating template-based clauses
powering chat-based support tools
LLMs do not possess legal understanding or reasoning. They generate text based on patterns in data, and may produce inaccurate or “hallucinated” outputs that appear plausible but are incorrect.
Main Limitations of AI and LLMs in Legal Practice
Hallucinations and Inaccuracy
LLMs can generate fictitious information or incorrect conclusions. All outputs are reviewed by a suitably qualified and experienced solicitor before use in any legal context.
Lack of Legal Reasoning
LLMs do not “understand” the law or the context of a matter and must never be relied upon for legal judgment or advice.
Bias
AI tools may reflect or amplify biases present in their training data. Steps are taken to identify, assess, and mitigate such risks, consistent with a commitment to fairness and inclusion.
Confidentiality and Data Protection
Personal data and confidential client information are not entered into public or insecure AI tools. Where internal or third-party systems are used, appropriate data protection, confidentiality, and access controls are applied to ensure compliance with the UK General Data Protection Regulation (UK GDPR) and duties of confidentiality.
No Model Training or Fine-Tuning
AI or LLM tools are used solely to generate or analyse text. No client, personal, or confidential information is used for the purpose of training, fine-tuning, or otherwise contributing to any AI model.
Transparency and Accountability
AI and LLM processes can be opaque. Where relevant, clients are informed about how these tools are used, and a suitably qualified and experienced solicitor retains full responsibility for all legal outputs.
Rapidly Evolving Standards
Regulatory developments and ethical guidance from the Solicitors Regulation Authority (SRA) and other professional bodies are monitored to ensure practices remain current and compliant.
Ethical and Governance Framework
Human Oversight
No legal advice is provided solely by AI or LLMs. All outputs are reviewed and approved by a suitably qualified and experienced solicitor before use in any legal context.
Accuracy and Verification
AI and LLM tools are used as research and drafting aids only. Outputs are always verified against primary legal sources, such as legislation, case law, or official guidance.
Confidentiality and Data Protection
Full compliance with the UK GDPR and professional confidentiality obligations is maintained at all times. Client information is only processed using AI/LLM systems where adequate safeguards are in place.
Bias Mitigation
AI outputs are reviewed to identify and correct potential bias. Fairness and non-discrimination are core principles in the delivery of all services.
Accountability
The use of AI or LLMs does not diminish professional accountability or a client’s ability to rely on legal advice provided. Responsibility for all outputs and advice remains with the solicitor.
Continuous Review and Governance
This notice is kept under regular review and updated in line with technological developments, regulatory changes, and evolving professional best practices, including emerging standards under the UK AI governance framework, the EU AI Act, and international guidance such as ISO/IEC 42001 (AI management systems).
Questions and Contact
If you have any questions about the use of AI or LLMs in legal work, or would like further information, please contact us at Operations@kinshi-lodelaw.com.