This paper explores the global and national regulatory approaches to artificial intelligence, with a focus on Georgia’s legal system. It examines cybercrime challenges, AI-generated content, and the implementation gaps in Georgian law, offering comparative insights and practical recommendations to enhance legal protections and align with international standards.
To Cite: Chimchiuri, L. (2025). AI and Law: Global and National Regulatory Approaches - The Case Study of Georgia. Cyber Jurisprudence International Initiative Research. Article No.1.
Contents
1. Introduction
2. The History of Legal Regulation of Artificial Intelligence
3. The Essence of Artificial Intelligence (AI) and the Issue of Its Legal Regulation
4. Conclusion
1. Introduction
In the modern world, cybercrime has become a significant challenge. Its legal regulation and cybersecurity policy are global issues; however, the problem is especially acute for developing countries, which often lack the appropriate equipment or legislation to protect themselves from cyberattacks. Such countries become attractive targets for cybercriminals. This poses a particular threat to both individuals' and the state's data, which can be leaked and cause serious harm to the state and its citizens. Moreover, the absence of a universally accepted definition of cybercrime in the Cybercrime Convention creates serious gaps in legislation. Due to the transnational nature of this crime, a swift and effective response is often not possible, and perpetrators frequently remain unpunished. The South Caucasus region, including Georgia, has experienced a visible increase in cyberattacks over the past decade. However, legal responses to these incidents are, in most cases, neither immediate nor effective. This is due to legal diversity in the region, uneven integration of international standards, and varying levels of institutional capacity, all of which complicate international and regional cooperation against cybercrime. All of this has led to the emergence of two new fields: cyber law and cybersecurity.
The research aims to serve the improvement of legislation and the development of recommendations for its effective implementation in practice, which is a crucial prerequisite for strengthening cybersecurity in Georgia. While the legislation of developed countries (for example, the European Union’s General Data Protection Regulation – GDPR) is evolving dynamically, countries of the Global South, including Georgia, still lag in establishing harmonized, enforceable, and effective legal systems. This lag poses a threat to both Georgia’s national cybersecurity systems and its digital sovereignty, international cooperation, the stability of the investment environment, and the protection of fundamental rights, including the right to privacy and freedom of expression.
Therefore, the problem is twofold: on one hand, the population is highly vulnerable to crimes such as identity theft, unlawful surveillance, and data misuse; on the other hand, weak legal norms or the complete absence of regulations seriously hinder swift response and effective justice.
The study employs the historical-legal method, the comparative legal method, and the dogmatic research method. Using the historical-legal method, I examined the origin of cybercrime and the process of forming legal norms against it. By applying the comparative legal method, I identified the similarities and differences between the Budapest Convention and Georgia's Criminal Code; the legal gaps existing in the Code's provisions; the extent to which these norms comply with international standards; and what measures should be taken to address these gaps. This method supports a thorough investigation of the problem and the identification of existing shortcomings, i.e., what needs improvement and how Georgia can strengthen its legal system. Using the dogmatic research method, I analyzed the views and concepts proposed by scholars.
This study will be dedicated to analyzing formal legal texts (laws, regulations, and court decisions) related to obtaining financial gain through the impairment of computer data and/or computer systems, as well as the creation of false official computer data. Additionally, I will review publications by recognized scholars in the field of law. This will help us better understand how laws function in theory and what challenges they face in practice. Furthermore, the presented research aims to assess the deficit in regulation of cybercrime committed using AI systems in Georgia and the necessity of implementing the European Union’s Artificial Intelligence Act provisions on high-risk systems (Articles 5 and 6); the legal need to prohibit unethical use of biometric data, facial recognition, behavioral monitoring, and discriminatory classification of individuals; and finally, this research will provide necessary recommendations.
It is essential to prevent cybercrime through legal norms to ensure the effective and fundamental protection of human rights, which remains an ongoing challenge. Establishing a balance requires the involvement of multiple stakeholders such as governments, technology companies, civil society, law enforcement agencies, and others. Through joint efforts, they will develop mechanisms, legal norms, and create a secure and legally protected digital environment.
2. The History of Legal Regulation of Artificial Intelligence
Scientists once recognized only natural intelligence, which is commonly described in terms of two measures - EQ (Emotional Quotient) and IQ (Intelligence Quotient). EQ, or emotional intelligence, is a person's ability to perceive their feelings, express empathy, and regulate emotions. IQ, or intelligence quotient, reflects a person's cognitive ability to identify and solve problems. According to psychologists, intelligence levels vary among individuals and can be measured quantitatively. Despite the diversity in defining and understanding intelligence, the concept remained a narrow field of study until 1942, when American science fiction writer Isaac Asimov published his short story "Runaround." This work inspired generations of scientists in the fields of robotics and artificial intelligence.[2]
In the 1950s, John McCarthy emerged as a key figure; he first coined the term "Artificial Intelligence" (AI) in 1956.[3] McCarthy is known as the "Father of Artificial Intelligence." He co-founded MIT's artificial intelligence project in 1959, together with Marvin Minsky, and founded the Stanford Artificial Intelligence Laboratory in 1963, serving as its director from 1965 to 1980.[4]
The evolution of artificial intelligence from the 1990s to the present has been marked by remarkable milestones. One notable event was in 1997 when IBM’s Deep Blue, a chess-playing computer, defeated the world chess champion Garry Kasparov. In the 2020s, generative pretrained transformers (GPTs) such as OpenAI’s GPT-3 revolutionized language models by enabling human-like text generation across various applications. In 2022, ChatGPT, based on GPT-3.5, further enhanced AI interaction through a conversational interface.[5]
Over the past 50 years, the field of AI has grown and developed so much that today it helps solve many problems, such as blocking spam, recognizing images and voices, enabling high-quality search within systems, and much more.
The concept of cybercrime committed by artificial intelligence is still evolving. To date, there has been no incident in which AI has independently committed a cybercrime. However, cybercriminals use AI to carry out offenses such as creating fake official computer data and launching cyberattacks. They also create viruses and particularly malicious tools such as deepfakes. Simply put, deepfakes are synthetic media files in which the image, video, or audio of a specific individual is manipulated and replaced with another person's face or voice. These creations are typically powered by generative models known as Generative Adversarial Networks (GANs), in which two networks - a generator and a discriminator - are trained against each other until the generated output becomes difficult to distinguish from real data.[6] For example, in 2019, cybercriminals used deepfake audio to imitate the voice of a German company's CEO and, posing as him, instructed the head of the company's UK subsidiary to transfer €220,000. The transfer was made, and the cybercriminals stole the money, causing significant harm to the company. This incident highlights the potential of AI-generated content and voice technologies to facilitate cyber fraud. As artificial intelligence becomes more sophisticated, these risks grow, making it essential to inform and educate the public about such threats.[7]
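The adversarial dynamic behind GANs can be pictured with a deliberately toy sketch (this is an illustration of the generator-versus-discriminator idea only, not a real neural-network GAN; all names and parameters here are hypothetical):

```python
import random

# Toy illustration of the adversarial idea behind GANs: a "generator"
# with a single parameter tries to produce samples the "discriminator"
# cannot tell apart from real data drawn around a true mean of 5.0.
# This is a schematic sketch, not an actual deepfake pipeline.

random.seed(42)
TRUE_MEAN = 5.0

def real_sample():
    # A sample from the "real" data distribution.
    return TRUE_MEAN + random.gauss(0, 0.5)

def discriminator_score(x, estimate):
    # Higher score = sample looks more "real" (closer to the
    # discriminator's current estimate of the real data's mean).
    return -abs(x - estimate)

theta = 0.0      # generator parameter: the mean of its fake samples
estimate = 0.0   # discriminator's running estimate of the real mean
lr = 0.05

for step in range(2000):
    # The discriminator improves by observing real data.
    estimate += lr * (real_sample() - estimate)
    # The generator emits a fake sample and nudges its parameter in
    # the direction that raises the discriminator's score for it.
    fake = theta + random.gauss(0, 0.5)
    grad = 1.0 if fake < estimate else -1.0
    theta += lr * grad

print(round(theta, 2))  # converges near the real mean of 5.0
```

The point of the sketch is the feedback loop: as the "discriminator" gets better at modeling real data, the "generator" is pushed to produce output ever closer to it, which is exactly why mature deepfakes become hard to distinguish from genuine media.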
Regulating AI is complicated by the fact that there is currently no universally accepted, comprehensive definition of artificial intelligence, as its concept remains broad, dynamic, and continuously evolving. However, the Council of Europe defines AI as: "a combination of sciences, theories, and technologies aimed at reproducing human cognitive abilities by machines. Given the current level of development, artificial intelligence means delegating complex intellectual tasks, normally performed by humans, to machines."[8]
The emerging threats posed by artificial intelligence alarmed states and compelled them to begin working on ways to regulate it. On May 17, 2024, the Council of Europe adopted the Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law, which was opened for signature in Vilnius on September 5, 2024.[9] The convention aims to protect transparency and ethical standards in artificial intelligence, as well as individuals and society. The European Union, for its part, has adopted the Artificial Intelligence Act, whose provisions are being phased in gradually. Its goal is to establish safe and ethical standards for the use of AI. We will discuss these instruments in more detail in the following chapters.
In addition to the aforementioned, the following norms have been developed worldwide: the European Union’s General Data Protection Regulation (GDPR) adopted in 2016, which regulates the use of data by AI; the AI Principles adopted in 2019 by the Organisation for Economic Co-operation and Development (OECD), which emphasize the ethical use of AI, transparency, and the protection of human rights; and UNESCO’s AI Ethics Recommendations from 2021, which aim to establish global standards for protecting human rights, fairness, and transparency.
3. The Essence of Artificial Intelligence (AI) and the Issue of Its Legal Regulation
When we talk about artificial intelligence, it is essential to emphasize that AI is not and cannot be a subject of criminal liability; it is merely a tool or instrument for carrying out a specific action and achieving a result. As for the subject of the crime,[10] in this case it is a natural or legal person who uses artificial intelligence to create false official data.
The subjective side[11] includes both direct and indirect intent; the act cannot be committed without premeditation. The motive of the crime is irrelevant for qualification. It is also important to note that the use of AI may complicate the determination of intent: the person's awareness is essential - they must have known that the AI was creating false data, which is a precondition for criminal liability.
In this case, the object of the crime is computer data, as well as the reliability of state governance, public administration, and legal processes.[12] As for the objective side, the objective elements of the crime include the creation of false computer data and/or presenting such data as real, specifically: the generation of text, scripts, or documents; the automatic or semi-automatic entry of records into a registry; and the falsification of visual or audio data using deepfake technologies.
Given these factors, another problem arises: artificial intelligence may technically function correctly but still be legally unacceptable. This raises the question - how can we distinguish between a technical error by AI during data generation and a deliberate crime?
In 2021, Johnson, who was serving a prison sentence, filed a lawsuit against Alabama prison officials for failing to protect him from prison violence. To defend the case in court, the Alabama Attorney General's Office hired the law firm Butler Snow, which for years had received millions of dollars from the state to defend Alabama's troubled prison system. During the proceedings, an attorney from the firm, working with lead counsel Lunsford, cited cases generated by artificial intelligence that later turned out not to exist. What happened here is unacceptable. Tempted by the convenience of AI, the lawyer improperly used generative AI to prepare the filing and failed to verify the citations it produced, which turned out to be "hallucinations" of the AI system. This is one of a growing number of cases in which lawyers across the country use artificial intelligence and, as a result, introduce false, AI-generated information into official legal documents. A specialized database has identified 106 such cases worldwide in which courts discovered AI "hallucinations" in legal documents.[13]
In such cases, negligence may be involved. Under Georgian criminal law, an act is committed by negligence if the person did not foresee the possibility of an unlawful consequence, although they were obliged and able to foresee it. For example, if a person lacked knowledge or prior information about the AI's flaws (e.g., it was their first time using the tool, and not for professional purposes), liability would arise for a crime committed by negligence rather than intentionally. On the other hand, if the person is a lawyer, journalist, or student who knew the risks, had adequate information, and was obliged to verify the output but continued their actions anyway, then indirect intent (dolus eventualis) may be present.[14]
When qualifying the aforementioned crimes, several key problems emerge: the difficulty of identifying the subject when technological mechanisms are used; determining intent, especially with regard to accomplices; and obtaining digital evidence and assessing its reliability. Currently, there is no universally accepted definition of either cybercrime or artificial intelligence that fully encompasses their nature: scholars, legal instruments, and organizations each use their own, somewhat differing, definitions. Let us consider the definitions provided by the Organisation for Economic Co-operation and Development (OECD), the European Parliament, and the European Commission. First, it is important to note that researchers distinguish four different approaches to artificial intelligence: machines that think rationally, machines that act rationally, machines that think like humans, and machines that act like humans. Many functions and applications of AI technologies are defined based on these characteristics.[15]
Accordingly, the European Commission’s definition of artificial intelligence is as follows: “Artificial Intelligence (AI) refers to systems that display intelligent behavior by analyzing their environment and performing autonomous actions to achieve specific goals to a certain degree. AI-based systems can exist solely as software operating in virtual environments (for example, voice assistants, image analysis programs, search engines, speech and facial recognition systems) or be embedded in technical devices (such as modern robots, autonomous vehicles, drones, or Internet of Things (IoT) applications).”[16]
The Organisation for Economic Co-operation and Development (OECD) provides the following definition of AI: “Artificial intelligence is a machine-based system that, for explicit or implicit objectives, makes inferences about how to produce outcomes such as predictions, content, recommendations, or decisions that can influence a physical or virtual environment. Different AI systems vary in their levels of autonomy and adaptability after deployment.”[17]
The definition of artificial intelligence developed by the European Parliament, as found in the AI Act and derived from the OECD's definition, is as follows: "'Artificial intelligence system' means a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."[18]
Similarly, the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) have defined AI as a set of enhanced information and communication technologies (ICTs) capable of imitating certain functions of human intelligence, including functions such as perception, learning, training, and even reasoning.[19]
The European Union seeks to harness the opportunities presented by artificial intelligence and attract investment in the field. At the same time, it places great importance on the protection of human rights and freedoms, and on the preservation of European values in the use of technological innovations - an area that poses significant challenges for the EU. To address these challenges, the EU has introduced several legal instruments. Notably, on April 27, 2016, it adopted the General Data Protection Regulation (GDPR), which came into force on May 25, 2018. Later, on April 21, 2021, the European Commission proposed the Artificial Intelligence Act (AI Act); a final political agreement was reached on December 8, 2023, and the Act entered into force on August 1, 2024, with full application expected by 2026. In addition, a recent international treaty worth highlighting is the Council of Europe's Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, adopted on May 17, 2024.

Let us now examine the EU Artificial Intelligence Act. Under the Act, AI system providers and deployers must take appropriate measures to ensure that their personnel and all individuals involved in the development or use of AI systems possess an adequate level of knowledge about AI.[20] In addition, the AI Act prohibits the following AI practices:
Ø The placing on the market, putting into service, or use of an AI system that employs subliminal techniques or deliberately manipulative and deceptive methods aimed at influencing the behavior of an individual or group, leading them to make a decision they would not otherwise have made, and which may cause significant harm to that individual or group.[21]
Ø The use of an artificial intelligence system to obtain material benefit is prohibited if it exploits any vulnerability of a natural person or a specific group of individuals, such as their age, disability, or particular social or economic situation.[22]
Ø The use of AI systems for monitoring, evaluating, or classifying individuals based on their social behavior is prohibited when it results in a “social score” that leads to discriminatory treatment of individuals or groups, especially when such treatment is unrelated to the context in which the data was originally generated or collected (For example, if a teacher is dismissed from their job solely for expressing a critical opinion about educational programs on social media, despite having no violations in their professional conduct.)[23]
Ø The use of an AI system to assess or predict the risk of a natural person committing a criminal offense is prohibited when based solely on the person’s profile or the evaluation of their personality traits and characteristics.[24]
Ø The creation of databases using facial recognition AI systems based on information collected from the internet, social media, CCTV, or other sources is prohibited, except when strictly necessary for legitimate legal purposes.[25]
Ø The use of artificial intelligence (AI) systems to monitor or analyze a natural person’s emotions or behaviors in the workplace or educational institutions is prohibited.[26]
Ø The use of biometric categorization systems that classify individuals based on their biometric data and group them according to race, political opinions, trade union membership, religious or philosophical beliefs, sexual life, or sexual orientation is prohibited.[27]
Ø The Act clearly states that it is necessary to categorize artificial intelligence systems based on the level of risk they pose, which determines the corresponding legal obligations and restrictions.[28]
Regarding the European Union’s General Data Protection Regulation (GDPR), it is important to highlight its seven fundamental principles for the processing of personal data, which must be taken into account from the very start of developing an artificial intelligence system: lawfulness, fairness, and transparency; purpose limitation (processing data only for specified, explicit, and legitimate purposes); data minimization (collecting only the data necessary for the intended purpose); accuracy (ensuring personal data is accurate and kept up to date); storage limitation (retaining personal data only as long as necessary); integrity and confidentiality (ensuring the security of personal data); and accountability (responsibility for compliance with data protection principles).[29]
The GDPR guarantees three fundamental rights[30]: the Right of Access - every individual has the right to obtain access to the personal data being processed about them; the Right to be Forgotten - the data subject has the right to request the deletion of processed data, and the data controller is obliged to fulfil this request unless there is a legal obligation to continue processing the data[31]; and the Right to Explanation - the data controller must provide explanations regarding questions related to the processing of personal data.[32]
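As a purely illustrative sketch, the three rights can be pictured as operations on a data controller's record store. The class and method names below are hypothetical, invented for illustration; they are not drawn from the GDPR's text or from any real compliance library:

```python
# Hypothetical sketch of the three GDPR rights described above:
# access, erasure ("right to be forgotten"), and explanation.
# All names here are illustrative assumptions, not legal terms of art.

class DataController:
    def __init__(self):
        self._records = {}   # subject id -> personal data
        self._purposes = {}  # subject id -> purpose of processing

    def store(self, subject_id, data, purpose):
        self._records[subject_id] = data
        self._purposes[subject_id] = purpose

    def access(self, subject_id):
        # Right of Access: the subject may see what is processed.
        return self._records.get(subject_id)

    def erase(self, subject_id, legal_obligation_to_keep=False):
        # Right to be Forgotten: delete unless a legal duty to
        # retain the data overrides the request.
        if legal_obligation_to_keep:
            return False
        self._records.pop(subject_id, None)
        self._purposes.pop(subject_id, None)
        return True

    def explain(self, subject_id):
        # Right to Explanation: state the purpose of processing.
        return self._purposes.get(subject_id, "no processing")

controller = DataController()
controller.store("s1", {"name": "N."}, "contract performance")
print(controller.access("s1"))   # {'name': 'N.'}
print(controller.erase("s1"))    # True: no retention duty applies
print(controller.access("s1"))   # None after erasure
```

The sketch makes one design point concrete: erasure is conditional, not absolute, because the Regulation itself subordinates the right to deletion to continuing legal obligations of the controller.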
The Council of Europe’s Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law obliges participating states to take appropriate measures to ensure that activities carried out throughout the lifecycle of artificial intelligence systems comply with obligations to protect human rights, as defined in applicable international and national legal instruments.[33]
Each party is also obligated to take appropriate measures to ensure the privacy rights of individuals and the protection of their data, including in accordance with relevant domestic and international laws, standards, and frameworks.[34] Parties should also develop measures that enhance the reliability of artificial intelligence systems and increase trust in their outputs.[35]
In parallel with preventing possible negative impacts on human rights, democracy, and the rule of law, and to promote innovation, each party is obliged, as necessary, to establish legal frameworks that enable the development, experimentation, and testing of artificial intelligence systems in controlled environments under the supervision of competent authorities.[36]
Also important are UNESCO’s Recommendations on the Ethics of Artificial Intelligence, adopted on November 23, 2021. These recommendations deliberately do not provide a fixed definition of artificial intelligence, since such a definition must evolve alongside technological developments. Instead, the recommendations aim to establish a foundation ensuring that AI systems serve the well-being of humanity, individuals, societies, the environment, and ecosystems, while preventing harm. They also seek to promote the peaceful use of artificial intelligence systems.
According to UNESCO’s Recommendations, states should avoid, prevent, and mitigate unacceptable harms (security risks) and vulnerabilities to attacks (defensive risks) throughout the lifecycle of AI systems, to ensure the safety and protection of humans, the environment, and ecosystems.[37]
AI actors should promote social justice and ensure all forms of legal equality and freedom from discrimination, as defined by international law. Privacy, one of the fundamental rights, must also be protected. Member states should ensure ethical and legal accountability toward natural or legal persons in matters related to AI systems and their recommendations.[38]
Transparency and explainability of artificial intelligence systems are often essential prerequisites for ensuring the respect, protection, and promotion of human rights, freedoms, and ethical principles. AI actors and member states are obligated to respect and uphold human rights and fundamental freedoms, promote the protection of the environment and ecosystems, and recognize their ethical and legal responsibilities in accordance with national and international legislation.[39]
UNESCO’s recommendations also advise states to carry out awareness-raising campaigns about artificial intelligence, conduct training sessions, organize public lectures, develop specialized courses, and more.[40] These activities should be carried out jointly by governments, international organizations, civil society, academia, the media, and the private sector. The process must take into account existing linguistic, social, and cultural diversity to ensure effective public engagement. This will also enable everyone to make informed decisions regarding the use of artificial intelligence systems and to guard against undue influence.

As already mentioned, Georgia is a party to several international treaties, including the Budapest Convention on Cybercrime and the Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. As for the UN Convention against Cybercrime, adopted on December 24, 2024, the list of signatory states has not yet been made public, and the official signing ceremony is set to take place in Vietnam in 2025. This Convention is particularly important for strengthening international cooperation against cybercriminals.
The General Data Protection Regulation (GDPR) and the European Union’s Artificial Intelligence Act apply only to EU member states and cannot be ratified by Georgia. However, Georgia can adopt and integrate essential and relevant provisions from these legal instruments that would help improve its legislation and better regulate cyberspace, such as the norms discussed in the previous section.
As for UNESCO’s Recommendations on the Ethics of Artificial Intelligence, these are non-binding and recommendatory in character, meaning Georgia is free to consider or disregard them. Nevertheless, for the sake of international integration and to ensure legal protection against unlawful and deliberate acts by cybercriminals, it would be preferable for Georgia to apply these recommendations.
The creation of fake official documents using artificial intelligence is possible, and cybercriminals exploit this maliciously. The legislation currently in force in Georgia does not respond quickly to the rapidly changing technological reality: the Criminal Code does not address cybercrimes committed through AI systems, which complicates the situation. I believe it is necessary to close this gap. It would be beneficial to adopt certain provisions from the European Union’s Artificial Intelligence Act, specifically Articles 5 and 6, which directly concern AI regulation, namely:
Ø The placing on the market, trade, and operation of AI systems that can be deliberately manipulated to influence the behavior of individuals or groups in ways that can cause them significant harm should be prohibited;
Ø The use of AI systems that analyze individuals’ social behavior for classification purposes, which may lead to discrimination against natural persons, should be prohibited;
Ø The use of AI systems for facial recognition and the collection of personal information from the internet, social networks, CCTV, and other sources should be banned, because cybercriminals could misuse such information to create databases for unlawful purposes beyond legitimate legal aims;
Ø The use of biometric categorization systems that classify individuals based on their biometric data into groups by race, political beliefs, sexual orientation, or other characteristics should be prohibited, as this carries a high risk of discrimination in society.
It should also be noted that AI systems fall into different categories, so it is necessary to classify them by risk level, which in turn determines the relevant legal obligations and restrictions. Vague generalizations in laws related to cybercrime and artificial intelligence are unacceptable, because such ambiguity could endanger constitutionally guaranteed rights and freedoms, particularly freedom of speech and privacy. The law must respect the principle of proportionality and clearly distinguish between malicious actions and those aimed at public benefit.
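The risk-based logic described above can be sketched schematically. The four tier names follow the AI Act's commonly described risk pyramid; the obligations listed are simplified summaries for illustration, not the Act's exhaustive legal requirements:

```python
# Simplified illustration of risk-based classification in the spirit
# of the EU AI Act's four-tier risk pyramid. The obligation summaries
# are schematic assumptions, not the Act's full legal text.

OBLIGATIONS = {
    "unacceptable": "prohibited practice (e.g., social scoring, manipulative systems)",
    "high": "conformity assessment, risk management, human oversight",
    "limited": "transparency duties (e.g., disclosing that one is interacting with AI)",
    "minimal": "no specific obligations",
}

def obligations_for(risk_tier: str) -> str:
    # Map a risk tier to its (schematic) legal consequence; an
    # unknown tier is rejected rather than silently defaulted,
    # mirroring the paper's point that vague categories are unacceptable.
    if risk_tier not in OBLIGATIONS:
        raise ValueError(f"unknown risk tier: {risk_tier}")
    return OBLIGATIONS[risk_tier]

print(obligations_for("unacceptable"))
```

The design point is that the legal consequence attaches to the tier, not to the individual system: once a system is classified, its obligations follow mechanically, which is why precise statutory classification criteria matter so much.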
Moreover, the Association Agreement with the European Union, to which Georgia is a party, obliges the parties to strengthen cooperation in the fields of freedom, security, and justice, to reinforce the rule of law, and to respect human rights and fundamental freedoms. Accordingly, Georgia is obliged to strengthen justice and bring its crime-fighting mechanisms into compliance with international standards.
Additionally, the manual prepared by the General Prosecutor’s Office of Georgia on the compliance of crimes under the Criminal Code with international standards (published in 2021) lists Budapest Convention norms not yet implemented in Georgia’s Criminal Code. These include the creation, storage, or distribution of child pornography by means of computer systems, as well as the unlawful storage or distribution of pornography.
Implementing these provisions in the Criminal Code is important because countries lacking appropriate cybercrime legislation are easy targets for cybercriminals. Addressing such gaps is crucial to creating a safe cyber environment for everyone, especially minors.
Other problems in Georgia hinder the fight against these crimes and rapid response. Specifically, law enforcement agencies in the regions often suffer from a lack of adequate technical equipment and software. The state needs to increase its budget to provide law enforcement with the necessary infrastructure.
Moreover, public awareness of cybercrime is low. Elderly citizens and teenagers, who are active on social networks, are prime targets for cybercriminals and often fall victim to them.
Schools and higher education institutions do not sufficiently integrate cybersecurity courses, wide-scale training is lacking, and there are no adequate programs to raise awareness. Therefore, certified training sessions, awareness campaigns, and qualified personnel training are necessary. The state should update its cybersecurity strategy based on international experience, which will support combating cybercrime and ensure fast and effective responses.
1. Abramishvili, S. (2024). Exploring the Application of AI in the Public Sector: The Case of Estonia and Lessons for Georgia. Georgian Foundation for Strategic and International Studies.
2. AvicenaTech Corp. (2024). The History of Artificial Intelligence. Retrieved September 3, 2024, from https://www.avicena.tech
3. Butler Snow LLP. (2025). Response to Order to Show Cause, Case No. 2:21-cv-01701-AMM, U.S. District Court, Northern District of Alabama, Southern Division. Document 195, May 19, 2025.
4. Chrastil, N. (2025, May 24). Alabama paid a law firm millions to defend its prisons. It used AI and turned in fake citations. The Guardian. Retrieved June 18, 2025, from https://www.theguardian.com
5. Council of Europe. (2024). Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Treaty Series, No. 225. Retrieved March 1, 2025, from https://rm.coe.int/1680afae3c
6. Council of Europe. (2024). Glossary of Key Terms on Artificial Intelligence, Data, and Cybersecurity. Retrieved September 17, 2024, from https://www.coe.int/en/web/human-rights-rule-of-law/artificial-intelligence/glossary
7. European Commission, High-Level Expert Group on Artificial Intelligence. (2018). A Definition of AI: Main Capabilities and Scientific Disciplines. Brussels.
8. European Parliament and Council of the European Union. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation – GDPR). Official Journal of the European Union. Retrieved February 10, 2025, from https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng
9. European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 on Harmonised Rules on Artificial Intelligence (AI Act). Official Journal of the European Union, June 13, 2024. Retrieved February 5, 2025, from https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
10. Falade, P. V. (2023). Decoding the Threat Landscape: ChatGPT, FraudGPT, and WormGPT in Social Engineering Attacks. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 9(5), 185–198. https://doi.org/10.32628/CSEIT2390533
11. Gabisonia, Z. (2022). Internet Law and Artificial Intelligence (ინტერნეტ სამართალი და ხელოვნური ინტელექტი). World of Lawyers, Tbilisi.
12. Goderdzishvili, N. (2020). Handbook on Artificial Intelligence: Essence, International Standards, Ethical Norms and Recommendations (ხელოვნური ინტელექტი: არსი, საერთაშორისო სტანდარტები, ეთიკური ნორმები და რეკომენდაციები). Institute for Development of Freedom of Information (IDFI).
13. KPMG. (2023). Deepfakes: Real Threat. KPMG LLP. Retrieved September 3, 2024, from https://kpmg.com
14. Montasari, R. (2023). Countering Cyberterrorism: The Confluence of Artificial Intelligence, Cyber Forensics, and Digital Policing in US and UK National Cybersecurity. Advances in Information Security, Vol. 101. Springer. https://doi.org/10.1007/978-3-031-21920-7
15. Natsqebia, G., & Todua, N. (2019). Criminal Law: General Part (სისხლის სამართალი: სახელმძღვანელო, ზოგადი ნაწილი) (4th ed.). Tbilisi.
16. OECD. (2023). Explanatory Memorandum on the Updated OECD Definition of an AI System. DSTI/CDEP/AIGO (2023)8/FINAL.
17. UNESCO. (2019). Preliminary Study on the Ethics of Artificial Intelligence. SHS/COMEST/EXTWG-ETHICS-AI/2019/1, Paris.
18. UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. 41st Session of the General Conference, Paris.
[1] LLM, Young Researcher, Assistant to the Rector, Sokhumi State University, Tbilisi, Georgia, United Nations Representative of the ECOSOC NGO CIRID accredited to UNOG (Geneva), UNOV (Vienna), and UNHQ (New York)
Email: lika.chimchiuri@sou.edu.ge
Scholar at CyJurII
[2] Montasari, Reza. Countering Cyberterrorism: The Confluence of Artificial Intelligence, Cyber Forensics, and Digital Policing in US and UK National Cybersecurity. Advances in Information Security, vol. 101, Springer, 2023, p. 82, https://doi.org/10.1007/978-3-031-21920-7, Accessed 3 September 2024
[3] Gabisonia, Z. (2022). Internet Law and Artificial Intelligence (ინტერნეტ სამართალი და ხელოვნური ინტელექტი). World of Lawyers, Tbilisi, p. 441
[4] Montasari, Reza. Countering Cyberterrorism: The Confluence of Artificial Intelligence, Cyber Forensics, and Digital Policing in US and UK National Cybersecurity. Advances in Information Security, vol. 101, Springer, 2023, p. 83, https://doi.org/10.1007/978-3-031-21920-7, Accessed date 03 September 2024
[5] AvicenaTech Corp. The History of Artificial Intelligence. 2024, p. 4, www.avicena.tech (The-History-of-AI_Avicena.pdf), Accessed 3 September 2024
[6] KPMG. Deepfakes: Real Threat. KPMG LLP, 2023, p. 4, https://kpmg.com, Accessed 3 September 2024
[7] Falade, Polra Victor. "Decoding the Threat Landscape: ChatGPT, FraudGPT, and WormGPT in Social Engineering Attacks." International Journal of Scientific Research in Computer Science, Engineering and Information Technology, vol. 9, no. 5, Sept.-Oct. 2023, pp. 185 – 186, https://doi.org/10.32628/CSEIT2390533, Accessed date 03 September 2024
[8] Council of Europe. Glossary of Key Terms on Artificial Intelligence, Data, and Cybersecurity. https://www.coe.int/en/web/human-rights-rule-of-law/artificial-intelligence/glossary, Accessed 17 September 2024
[9] Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law
[10] Natsqebia, G., & Todua, N. (2019). Criminal Law: General Part (სისხლის სამართალი: სახელმძღვანელო, ზოგადი ნაწილი) (4th ed.). Tbilisi, p. 130
[11] Ibid., p. 131
[12] Ibid.
[13] Chrastil, Nicholas. “Alabama Paid a Law Firm Millions to Defend Its Prisons. It Used AI and Turned in Fake Citations.” The Guardian, 24 May 2025, https://www.theguardian.com, Accessed 18 June 2025
[14] Natsqebia, G., & Todua, N. (2019). Criminal Law: General Part (სისხლის სამართალი: სახელმძღვანელო, ზოგადი ნაწილი) (4th ed.). Tbilisi, p. 176
[15] Abramishvili, Salome. Exploring the Application of AI in the Public Sector: The Case of Estonia and Lessons for Georgia. Georgian Foundation for Strategic and International Studies, 2024. ISBN 978-9941-8-7045-3, p. 3
[16] European Commission, High-Level Expert Group on Artificial Intelligence. A Definition of AI: Main Capabilities and Scientific Disciplines. 18 Dec. 2018, Brussels, p.1
[17] OECD. Explanatory Memorandum on the Updated OECD Definition of an AI System. OECD, 2023. DSTI/CDEP/AIGO (2023)8/FINAL, p. 4
[18] European Parliament and Council. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828. Official Journal of the European Union, L 1689, 13 June 2024, https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng, Accessed Date 5 Feb. 2025, article 4
[19] UNESCO, World Commission on the Ethics of Scientific Knowledge and Technology (COMEST). Preliminary Study on the Ethics of Artificial Intelligence. SHS/COMEST/EXTWG-ETHICS-AI/2019/1, 26 Feb. 2019, Paris, p. 5
[20] European Parliament and Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Various Regulations and Directives (Artificial Intelligence Act). Official Journal of the European Union, 13 June 2024, https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng, Accessed Date 5 Feb. 2025, article 4
[21] Ibid., article 5
[22] Ibid., article 5
[23] Ibid., article 5
[24] European Parliament and Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Various Regulations and Directives (Artificial Intelligence Act). Official Journal of the European Union, 13 June 2024, https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng, Accessed Date 5 Feb. 2025, article 5
[25] Ibid.
[26] Ibid.
[27] Ibid.
[28] European Parliament and Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Various Regulations and Directives (Artificial Intelligence Act). Official Journal of the European Union, 13 June 2024, https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng, Accessed Date 5 Feb. 2025, article 6
[29] Goderdzishvili, N. (2020). Handbook on Artificial Intelligence: Essence, International Standards, Ethical Norms and Recommendations (ხელოვნური ინტელექტი: არსი, საერთაშორისო სტანდარტები, ეთიკური ნორმები და რეკომენდაციები). Institute for Development of Freedom of Information (IDFI), p. 19
[30] European Parliament and Council of the European Union, Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), Official Journal of the European Union, 27 Apr. 2016, https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng, Accessed Date 10 Feb. 2025, article 15
[31] Ibid., article 17
[32] Ibid., article 12
[33] Council of Europe. Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. 5 Sept. 2024, Vilnius. Council of Europe Treaty Series, no. 225, https://rm.coe.int/1680afae3c, Accessed Date 1 March, 2025, article 1
[34] Ibid., article 11
[35] Ibid., article 12
[36] Ibid., article 13
[37] United Nations Educational, Scientific and Cultural Organization. Recommendation on the Ethics of Artificial Intelligence. 41st session of the General Conference, 9–24 Nov. 2021, Paris, UNESCO, pp. 7-10
[38] Ibid., p. 8
[39] Ibid., p. 9
[40] Ibid., p. 10