The lecturers, together with the participants, analysed in detail the legal, ethical, and regulatory aspects of using artificial intelligence in legal practice, namely:
1. Global Regulatory Landscape on AI in Law.
2. European AI Act and Its Implications for Legal Practice.
3. AI in the Focus of English Courts: Warnings and Recommendations.
4. AI Hallucinations in Court Cases: Case Examples in the UK, US and Australia.
In characterising the use of artificial intelligence in legal practice, the following points were emphasised:
1. Global Regulatory Landscape on AI in Law:
According to Article 3(1) of the EU AI Act (Regulation (EU) 2024/1689), an “AI system” is defined as:
A machine-based system that is designed to operate with varying levels of autonomy, may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs - such as predictions, content, recommendations, or decisions - that can influence physical or virtual environments.
Cross-border Challenges:
1) Jurisdictional conflicts.
2) Harmonization problems for global law firms and multinational businesses.
3) Enforcement.
Key regulatory frameworks:
· The EU Artificial Intelligence Act;
· OECD & UNESCO Guidelines – international soft-law frameworks on trustworthy AI, ethics, and human rights.
2. European AI Act and Its Implications for Legal Practice
The EU AI Act (adopted in 2024, entering into force gradually) is the first comprehensive AI regulation worldwide, introducing risk-based classification and strict rules for high-risk systems (legal services may fall into this category).
China’s Regulatory Framework:
Proactive regulation with specific laws on:
1) Algorithms;
2) Deepfakes;
3) Generative AI.
Spain inaugurated the first national AI supervisory agency (AESIA) in September 2023, responsible for regulating AI systems and ensuring compliance with both national rules and the forthcoming EU AI Act.
Through Royal Decree 817/2023, Spain introduced Europe’s inaugural AI sandbox, enabling controlled testing of high-risk systems under regulatory supervision to foster innovation and ensure safety.
A draft bill (March 2025) targets AI-generated content, “deepfakes,” and manipulative algorithms, with penalties of up to €35 million or 7% of global turnover for failing to label AI content or for unethical practices.
France: The Digital Republic Act (2016) added a "right to explanation" for algorithm-based administrative decisions. Citizens can request details about the algorithm’s role, data sources, and logic – extending beyond automated decisions covered by GDPR.
3. AI in the Focus of English Courts: Warnings and Recommendations
Artificial Intelligence (AI) Guidance for Judicial Office Holders: the practice of designing, developing, and deploying AI with certain values, such as being trustworthy, ethical, transparent, explainable, fair, robust, and upholding privacy rights.
Key risks identified:
1. AI outputs are often not authoritative and may include fabricated or misleading cases (“hallucinations”).
2. Public AI tools are trained on US-centric data and do not always reflect English law.
3. Potential breaches of confidentiality and privacy if sensitive data is input into public chatbots.
Responsible AI Use:
1. AI can be a useful secondary tool, but never a substitute for legal expertise. AI can assist with: summarising large bodies of text, administrative tasks (emails, notes, scheduling), and drafting presentations.
2. Judicial Guidance (2025) stresses:
· Verification, confidentiality, and accountability are paramount.
· Secure systems (such as Copilot) should replace public tools.
· Courts must protect justice against hallucinations, bias, and deepfakes.
3. Sets a benchmark for responsible AI use in English courts.
AI Action Plan for Justice (published 31 July 2025):
Three strategic priorities:
1. Strengthen foundations: Improve AI leadership, governance, ethics, data quality, digital infrastructure, and procurement frameworks. Establish a Justice AI Unit, an AI Ethics Framework, an AI Steering Group, and a communications strategy for transparency.
2. Embed AI across services: Use a “Scan, Pilot, Scale” model to apply AI to high-impact areas (transcription, knowledge retrieval, operational support, and citizen-facing tools) to improve justice services.
3. Invest in people and partnerships: Build AI capability across the justice workforce, recruit technical talent, create AI Champions, and collaborate with regulators, legal service providers, and innovators.
4. AI Hallucinations in Court Cases: Case Examples in the UK, US and Australia
Risk Outlook report: the use of artificial intelligence in the legal market (20 November 2023):
All computers can make mistakes. AI language models such as ChatGPT, however, can be more prone to this. That is because they work by anticipating the text that should follow the input they are given, but do not have a concept of ‘reality’. The result is known as ‘hallucination’, where a system produces highly plausible but incorrect results.
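The mechanics behind this can be illustrated with a short sketch. A language model simply continues a prompt with the statistically most likely text, so a prompt that looks like the opening of a legal citation will be completed with a plausible-sounding citation whether or not any such case exists. The following is a minimal Python illustration, assuming the Hugging Face transformers library and the small, publicly available GPT-2 model (both are illustrative choices, not tools referenced in the report):

    # Minimal sketch: a language model continues a prompt with the
    # most likely text; it has no notion of whether that text is true.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # A prompt shaped like the start of a legal authority will be
    # completed with something citation-like, real or not.
    prompt = "The leading authority on this point is"
    result = generator(prompt, max_new_tokens=20, do_sample=False)
    print(result[0]["generated_text"])

Whatever the model prints is simply the highest-probability continuation of the prompt; nothing in the generation step checks the output against any database of real cases, which is precisely the gap the report describes.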
Case Study 1 UK: Ayinde v London Borough of Haringey & Al Haroun v Qatar National Bank:
· Issue: A solicitor submitted AI-generated case citations (“hallucinations”).
· Court finding: The duty to the court prevails; the solicitor is personally liable even though the client was the source of the material.
· Sanction: Personal costs order imposed.
· Lesson: Lawyers cannot delegate verification to clients or AI tools.
UK: Ayinde v London Borough of Haringey & Al Haroun v Qatar National Bank:
“all legal representatives are responsible for the material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate”.
Case Study 2 USA: Shahid v. Esaam (Court of Appeals of Georgia, First Division, June 30, 2025):
· Issue: A lawyer submitted fictitious AI-generated citations, which made their way into the trial judgment and were repeated on appeal.
· Court finding: AI use is not improper per se, but responsibility for accuracy lies with the lawyer.
· Sanction: Costs order against the husband’s lawyer.
· Lesson: Courts will penalise negligence in supervising AI outputs.
Case Study 3 Australia: Murray on behalf of the Wamba Wemba Native Title Claim Group v State of Victoria [2025] FCA 731:
· Issue: A junior solicitor relied on Google Scholar/AI, leading to false citations.
· Court finding: The errors caused delay and inconvenience and undermined the administration of justice.
· Sanction: Indemnity costs order against law firm.
· Lesson: Courts treat AI errors as serious professional misconduct.