
The lecturer, together with the participants, analyzed corporate policy on artificial intelligence in detail, namely:
1. Legal terminology overview.
2. Why does a company need an AI policy?
3. What shall be included in the AI corporate policy? Tips for drafting.
In characterizing corporate AI policy, the following points were emphasized:
1. Legal terminology overview
The Organisation for Economic Co-operation and Development (OECD), leading definition:
AI (Artificial Intelligence): an AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
The EU Artificial Intelligence Act, 2024:
«AI system» means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
USA Artificial Intelligence Research, Innovation, and Accountability Act of 2024 (bill):
«AI system» means an engineered system that: (a) generates outputs, such as content, predictions, recommendations, or decisions for a given set of objectives; and (b) is designed to operate with varying levels of adaptability and autonomy using machine and human-based inputs.
UK Artificial Intelligence (Regulation) Bill [HL]:
In this Act “artificial intelligence” and «AI» mean technology enabling the programming or training of a device or software to —
- perceive environments through the use of data,
- interpret data using automated processing designed to approximate cognitive abilities, and
- make recommendations, predictions or decisions, with a view to achieving a specific objective.
AI includes generative AI, meaning deep or large language models able to generate text and other content based on the data on which they were trained.
Ayinde, R v The London Borough of Haringey [2025]:
«Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained».
2. Why does a company need an AI policy?
Case review:
- The Ayinde case: Helen Evans KC and Melody Hadfield (instructed by Clyde & Co LLP) for Sarah Forey (barrister); Andrew Edge (instructed by Kingsley Napley LLP) for Victor Amadigwe (solicitor), Sunnelah Hussain (paralegal) and Haringey Law Centre.
- The Al-Haroun case: David Lonsdale (instructed by Primus Solicitors) for Abid Hussain (solicitor) and Primus Solicitors. Case Nos: AC-2024-LON-003062 and CL-2024-000435; Neutral Citation Number: [2025] EWHC 1383 (Admin).
Artificial intelligence is a powerful technology. It can be a useful tool in litigation, both civil and criminal. It is used for example to assist in the management of large disclosure exercises in the Business and Property Courts.
Those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example). Authoritative sources include the Government’s database of legislation, the National Archives database of court judgments, the official Law Reports published by the Incorporated Council of Law Reporting for England and Wales and the databases of reputable legal publishers.
AI hallucinations:
- Artificial Intelligence (AI) Guidance for Judicial Office Holders: AI hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, the model’s statistical nature, incorrect assumptions made by the model, or biases in the data used to train the model.
- Risk Outlook report: the use of artificial intelligence in the legal market, 20 November 2023: All computers can make mistakes. AI language models such as ChatGPT, however, can be more prone to this. That is because they work by anticipating the text that should follow the input they are given, but do not have a concept of ‘reality’. The result is known as ‘hallucination’, where a system produces highly plausible but incorrect results.
The EU Artificial Intelligence Act, 2024 (Article 95 (1,2)): The AI Office and the Member States shall encourage and facilitate the drawing up of codes of conduct.
Reasons to introduce AI policy:
- To promote ethical decision-making.
- To ensure transparency and accountability.
- To protect data security and privacy, thereby reducing legal risk.
3. What shall be included in the AI corporate policy? Tips for drafting
ISACA - Information Systems Audit and Control Association.
Key Considerations When Adopting a Generative AI Policy:
- Scope of impact
- AI protection
- Ethical principles
- Good behavior
- Data handling and training
- Transparency and attribution
- Legal and compliance requirements
- Limitations and risks involved
- Links to other policies
- Reporting and investigation of violations
- Other.
The EU Artificial Intelligence Act, 2024 (Article 5: Prohibited AI Practices):
The EU AI Act prohibits certain uses of artificial intelligence (AI).
- These include AI systems that manipulate people's decisions or exploit their vulnerabilities, systems that evaluate or classify people based on their social behavior or personal traits, and systems that predict a person's risk of committing a crime.
- The Act also bans AI systems that scrape facial images from the internet or CCTV footage, infer emotions in the workplace or educational institutions, and categorize people based on their biometric data.
- However, some exceptions are made for law enforcement purposes, such as searching for missing persons or preventing terrorist attacks.