Artificial Intelligence and Regulatory Challenges: Trends and Prospects Analysis
The rapid development of artificial intelligence (AI) creates new opportunities for the economy but also requires attention to ethical and regulatory aspects. Ukraine, following the European Union, is formulating its AI regulatory strategy, balancing innovation with the protection of citizens’ rights.
According to a 2023 KPMG study, 74% of global business leaders consider generative AI one of the technologies most likely to impact their companies in the coming years. Goldman Sachs analysts predict that global investment in AI will reach $200 billion by 2025, and that AI could add roughly 7% to global GDP over the next decade through productivity gains. Internet statistics likewise point to surging interest in AI: notably, ChatGPT entered the top 50 most-visited sites worldwide within just one year.
However, ethical and social challenges are growing alongside technological progress. AI systems can act as "black boxes," making it difficult to explain how their algorithms reach decisions. This is especially concerning in areas where algorithms influence critical outcomes, such as hiring or creditworthiness assessment. There are also threats to data privacy, since AI processes vast amounts of personal information, as well as concerns about automation and its impact on the labor market.
At the international level, the AI market is regulated in different ways. The US takes a liberal approach, relying on self-regulation and market mechanisms. The European Union has adopted the AI Act, a regulation binding across its 27 member states that prioritizes human rights and ethics. China has chosen a policy of maximum state oversight.
Ukraine, striving for European integration, is aligning its legislation with European standards. In 2025, for example, the government approved a bill requiring international digital platforms to report to Ukrainian tax authorities. Such steps not only bring part of the online economy out of the shadows but also help Ukrainian businesses adapt to the new conditions on mutually beneficial terms.
Ukraine is currently developing a national AI regulatory strategy that draws on European experience. The first phase relies on voluntary self-regulation and prepares businesses for future requirements; the second phase introduces binding legislation harmonized with the European AI Act. This phased approach allows the new rules to be implemented smoothly without stalling innovation.
Ukraine is not only regulating but also actively implementing digital innovations in the public sector. The "Diia" platform provides hundreds of services online, and the government collaborates with startups to test AI solutions in real-world conditions, creating a unique environment for experimentation. Experts see this strategy as an opportunity to combine innovative approaches with effective regulation, laying a foundation for the stable development of the IT sector.
| Country | AI Regulation Approach |
|---|---|
| USA | Liberal, self-regulation, market mechanisms |
| EU | Regulatory, priorities: human rights, ethics |
| China | Maximum state oversight |
| Ukraine | Harmonization with the EU, self-regulation, innovations |