The increasing adoption and development of artificial intelligence (AI) technologies are having a significant and multifaceted impact on the political sphere. AI is viewed both as a fundamental pillar for modernizing political processes and as a potential threat to democratic integrity and stability. Artificial intelligence is shaping politics in many ways, most notably in elections, public trust in political institutions, and policy-making.

Generative AI in campaigns
The emergence of generative AI systems like ChatGPT in 2022 accelerated both capabilities and concerns about AI’s political impact. During 2024, election cycles around the world saw widespread use of AI for campaign content creation, voter targeting, and real-time sentiment analysis. Twenty major tech companies pledged to combat AI misuse in elections, reflecting industry recognition of the technology’s potential for democratic harm.
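The real-time sentiment analysis mentioned above can be illustrated with a minimal sketch using the open-source Hugging Face transformers library. The model choice and sample messages below are illustrative assumptions, not a description of any campaign’s actual tooling.

```python
# Minimal sketch: classifying campaign-related messages by sentiment
# with an off-the-shelf model. Model and inputs are illustrative only.
from transformers import pipeline

# Load a general-purpose sentiment model (downloaded on first use).
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

messages = [
    "The new transit plan will finally fix my commute.",
    "Another broken promise from city hall.",
]

for msg, result in zip(messages, classifier(messages)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:>8} ({result['score']:.2f})  {msg}")
```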

Potential benefits of AI in politics
Artificial intelligence is increasingly utilized in the political sphere, with proponents arguing that it offers various potential benefits to democratic processes. AI tools can facilitate improved communication between citizens and public administration. Some say that these technologies present an opportunity to enhance the democratic process, enabling citizens to gain a better understanding of political issues and participate more easily in democratic discourse. Politicians have been utilizing AI to promote strategies and foster closer communication with citizens, potentially increasing democratic participation and educating the public on policy matters. For example, the Danish Synthetic Party is led by an AI responsible for its political program, and Denmark’s Prime Minister Mette Frederiksen used ChatGPT in a parliamentary speech to highlight AI’s potential. Supporters of artificial intelligence have said that AI applications like chatbots or machine learning tools can foster more direct and persuasive contact with people, educate citizens on democratic principles and policy matters, and motivate them to express their opinions to governments and politicians. The integration of AI can also make political campaigns more efficient and cost-effective, allowing for quick execution and the ability to capture citizen queries and predict their needs for more targeted engagement.
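As a toy illustration of capturing citizen queries and predicting their needs, the sketch below routes incoming questions to policy topics by text similarity. The topics, queries, and scikit-learn approach are hypothetical simplifications, not a description of any deployed government system.

```python
# Toy sketch of routing citizen queries to policy topics by text
# similarity. Topics and queries are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topics = {
    "housing": "rent affordable housing zoning construction permits",
    "transport": "bus train road traffic commute transit fares",
    "healthcare": "hospital clinic insurance doctor waiting times",
}

vectorizer = TfidfVectorizer()
topic_matrix = vectorizer.fit_transform(topics.values())

def route_query(query: str) -> str:
    """Return the policy topic most similar to the citizen's query."""
    scores = cosine_similarity(vectorizer.transform([query]), topic_matrix)
    return list(topics)[scores.argmax()]

print(route_query("When will the new bus line open?"))       # transport
print(route_query("I can't find an affordable apartment."))  # housing
```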

Policy regulations and AI
Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power, such as the IEEE and the OECD.
Since 2016, numerous AI ethics guidelines have been published in order to maintain social control over the technology. Regulation is deemed necessary to both foster AI innovation and manage associated risks.
Furthermore, organizations deploying AI have a central role to play in creating and implementing trustworthy AI, adhering to established principles, and taking accountability for mitigating risks.
Regulating AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.

Public administration and policy considerations generally focus on the technical and economic implications of AI and on trustworthy and human-centered AI systems, regulation of artificial superintelligence, the risks and biases of machine-learning algorithms, the explainability of model outputs, and the tension between open-source AI and unchecked AI use.
There have been both hard law and soft law proposals to regulate AI. Some legal scholars have noted that hard law approaches to AI regulation face substantial challenges. Among them, AI technology is evolving rapidly, leading to a “pacing problem” in which traditional laws and regulations often cannot keep up with emerging applications and their associated risks and benefits. Similarly, the diversity of AI applications challenges existing regulatory agencies, which often have limited jurisdictional scope. As an alternative, some legal scholars argue that soft law approaches to AI regulation are promising because they offer greater flexibility to adapt to emerging technologies and the evolving nature of AI applications. However, soft law approaches often lack substantial enforcement potential.
AI regulation could derive from basic principles. A 2020 Berkman Klein Center for Internet & Society meta-review of existing sets of principles, such as the Asilomar Principles and the Beijing Principles, identified eight such basic principles: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and respect for human values. AI law and regulation have been divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues. A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and ‘AI society’, defined as workforce substitution and transformation, social acceptance of and trust in AI, and the transformation of human-machine interaction. The development of public sector strategies for the management and regulation of AI is deemed necessary at the local, national, and international levels and in a variety of fields, from public service management and accountability to law enforcement, healthcare (especially the concept of a Human Guarantee), the financial sector, robotics, autonomous vehicles, the military and national security, and international law.

In 2025, the UK and US governments declined to sign an international agreement on AI at the AI Action Summit in Paris. The agreement was described as proposing an open, inclusive and ethical approach to AI development, including environmental protection measures. US Vice President JD Vance argued that the agreement would be detrimental to the growth of the AI industry. The UK government added that the agreement “didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security”.

AI influence and public trust in politics
Artificial intelligence (AI) profoundly affects public trust in politics by introducing significant risks. The use of AI in politics raises serious ethical and legal concerns. AI tools can process massive amounts of data to analyze user trends and behaviors, enabling highly targeted and persuasive campaign messages that can manipulate public opinion and erode the direct, unmediated character of political communication. This can lead to widespread deception and damage public trust in democratic institutions, as seen with AI-generated attack videos in political campaigns. The lack of a uniform and binding regulatory framework for AI further exacerbates concerns about privacy and security, and raises questions about accountability for false or biased outputs produced by AI systems.
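To make the targeting mechanism concrete, the sketch below shows its basic building block: clustering voters into segments from behavioral features, with each segment a candidate audience for tailored messaging. The features and data are synthetic and purely illustrative.

```python
# Toy sketch of audience segmentation, the building block of
# microtargeted messaging. Features and data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic per-voter features: [age, engagement score, issue interest].
voters = rng.normal(size=(200, 3))

# Group voters into segments; each segment could receive its own message.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(voters)
print(np.bincount(segments))  # number of voters per segment
```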

Furthermore, AI systems are not neutral; they are embedded in social, political, cultural, and economic structures and designed to benefit existing dominant interests, often amplifying hierarchies and encoding narrow classifications. This means that AI systems can reproduce and intensify existing structural inequalities, particularly when used in sensitive areas like the justice system or welfare distribution. AI development often obscures its material and human costs, including energy consumption, labor exploitation, and mass data harvesting, further distancing the public from understanding its true impact. Despite the proliferation of AI ethics frameworks, many lack representation from the communities most affected, are often unenforceable, and may prioritize profit over ethical concerns, leading to a persistent asymmetry of power in which technical systems extend inequality. This dynamic makes it challenging to build trust, as the public struggles to discern truth from AI-generated misinformation and to hold those responsible for AI’s negative consequences accountable.

AI in elections
Artificial intelligence is increasingly affecting elections globally, with growing concerns that powerful generative AI systems and deepfakes will destabilize democracies. These technologies make it easy for anyone with a smartphone and an imagination to create fake yet convincing content aimed at fooling voters. AI deepfakes tied to elections in Europe and Asia have spread through social media since 2023, serving as a warning for future elections in other nations. Recent examples include AI-generated audio recordings of Slovakia’s liberal party leader discussing vote rigging and raising beer prices, a video of Moldova’s pro-Western president throwing support behind a Russian-friendly party, and a robocall impersonating U.S. President Joe Biden urging voters to abstain from a primary election. As the public becomes more aware that video and audio can be convincingly faked, some may exploit this by denouncing authentic media as deepfakes.

The deployment of AI in the political arena falls into a high-risk category due to its potential problems. AI tools, when deployed on social media, can generate misleading content at a speed and scale that outpace governmental oversight and society’s ability to manage the consequences. Some nations, including Russia, Iran, and China, have leveraged AI in their influence operations to tailor polarizing content and spread synthetic media. Authorities worldwide are trying to establish guardrails, with efforts including a U.S. ban on AI-generated voices in robocalls, a pact among major tech companies to prevent AI from disrupting elections, and the EU’s AI Act, which imposes obligations for transparency, detection, and tracing of AI-generated material. Many U.S. states have introduced legislation requiring disclosure of AI use in election content. However, enforcing regulations is a significant hurdle, given that deepfakes are challenging to detect and trace to their source, and the technology is rapidly advancing. A comprehensive, multifaceted approach combining regulatory tools, technical solutions like watermarking and detection software, and public digital literacy initiatives is considered crucial to safeguarding democratic elections.
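As a rough illustration of the watermarking idea, the sketch below embeds and checks a simple least-significant-bit tag in synthetic image data. This is a deliberately naive toy; production provenance schemes rely on robust, often cryptographic watermarks or signed metadata, and an LSB tag like this one is destroyed by any recompression or editing.

```python
# Toy least-significant-bit (LSB) watermark: embed a known bit pattern
# in an image's pixel data, then verify it. Illustrative only; real
# provenance watermarks survive compression and editing, this does not.
import numpy as np

TAG = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical tag

def embed(image: np.ndarray) -> np.ndarray:
    flat = image.copy().ravel()
    # Overwrite the LSB of the first len(TAG) pixels with the tag bits.
    flat[: len(TAG)] = (flat[: len(TAG)] & 0xFE) | TAG
    return flat.reshape(image.shape)

def verify(image: np.ndarray) -> bool:
    flat = image.ravel()
    return bool(np.array_equal(flat[: len(TAG)] & 1, TAG))

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
marked = embed(img)
print(verify(marked), verify(img))  # True False (almost certainly)
```

Detection software faces the mirror-image problem: absent a cooperative watermark, classifiers must infer synthetic origin from statistical artifacts alone, which is part of why enforcement remains such a hurdle.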