
Is AI becoming democracy’s little helper? Without more transparency, how would we know?
- Written by Ruona Meyer
- Illustration by Andrei Pacea
From bill drafting and amendment to transcription, translation and the classification of historic and current legislative documents, AI is becoming a tool for parliaments worldwide.
Somewhere between settling family debates and choosing what to cook for dinner, ChatGPT was used by a Brazilian lawmaker to produce the world’s first entirely AI-drafted bill.
In June 2023, Councilman Ramiro Rosario, 39, secretly used the chatbot to submit a draft law to prevent residents of Porto Alegre from being charged for stolen water meters. Rosario only revealed his “experiment” six months later, a week after the unanimously passed bill officially became law.
From Arizona to Brussels and Costa Rica, self-disclosure of AI use has become a means of showing innovation, and even an avenue of self-promotion for lawmakers and legislative bodies. On the other hand, not every democratic worker is happy to disclose their AI use.
On April Fools’ Day 2025, Politico asked: “Is ChatGPT doing the (European) Commission’s homework?” after Members of the European Parliament accused their peers, in closed-door meetings, of using AI on account of “robotic replies” in executive communication. The answer to that question remains shrouded in secrecy, much as US officials kept mum after critics accused President Donald Trump of using AI to churn out “badly formatted” and “sloppy” executive orders.
But lawmakers specifically appear to have a confusing relationship with artificial intelligence. They are variously trying to regulate or police AI through legislation, accusing each other of using it, and serving as users and deployers themselves: actors within democratic systems where AI is increasingly used to navigate the bureaucratic and financial constraints of lawmaking.
So what trends are emerging as AI use takes hold, moulding democracies one parliament and one piece of legislation at a time?

Emerging Trends
The Inter-Parliamentary Union (IPU), a global coalition of national parliaments established in 1889, provides some insight. As of July 6, 2025, self-reports from governments show 72 case studies of AI use across eight countries and one multilateral body (the European Union).
Three trends are noticeable here.
1. Parliamentary use of AI does not appear to prioritise public engagement or tools for citizens to engage deeply with members of parliament; there are only seven “public engagement and open parliament” case studies, currently implemented by Bahrain, Brazil and Italy.
2. The current global superpowers are either not on the list or, when they are, do not appear to be heavy deployers of AI for legislative processes (though they may simply have chosen not to report to the IPU).
3. AI use for cybersecurity may need to catch up with the scale of institutional cyberattacks; there are only two reported cases, one each from Finland and Brazil.
What concerns arise regarding each of these trends?
Concern 1: Highly autonomous AI tools, but less public transparency
As lawmakers prioritise AI-driven productivity tools for their work, rather than open parliament uses, they appear to forget they are deployers of AI.
The same legislative offices and officers that exist only due to the votes and taxes of the people must tangibly show the very transparency they seek from tech companies, other deployers and end-users of AI systems.
The biggest contradiction comes from the EU parliament. Despite the EU having created the world’s first comprehensive AI legislation (the EU AI Act), and recent insistence that its implementation won’t be postponed, the highs and lows of AI use by Brussels actors are discussed in closed-door meetings, or through unpublished screenshots and quotes from anonymous staff who “aren’t authorized to speak publicly” to journalists.
Worth repeating are the first two sentences of recital 61 in the EU AI Act’s preamble. The first: “Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, the rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial.”
The second adds: “In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to be used by a judicial authority or on its behalf to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts.”
The glaring challenge here is that the EU AI Act does not make the direct link between judges and democratic processes occurring outside the courts. Specifically, there is no clear provision classifying as high-risk the AI use that occurs when a lawmaker enlists AI to assist in the creation of legislation, e.g., requesting the initiation of legislation, proposing amendments, or substantially rejecting or modifying it. Yet the lawmakers and the Council of the European Union then approve that legislation, which in turn directly assists judges in deciding what constitutes justice for the people, year in, year out.
Secondly, the last part of recital 62 of the EU AI Act says that “AI systems intended to be used to influence the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda should be classified as high-risk AI systems.”
Here, there is no clear consideration that parliamentarians and their offices traditionally exist to influence voting patterns in their favour at every election, and that they use AI to research and interpret facts for the procedures and debates that further these goals. Legislators are practically using AI to bake cakes (i.e., laws) that are to be shared by all, in the name of democracy. Yet there is no provision for the reality that AI’s deployment bias* or feedback loop bias* can result in tainted training data (i.e., ingredients) that can indeed influence election and referendum outcomes.
Simply put: if you are a councillor who uses AI to generate a pasta recipe at home, that is different from your office using AI to discuss, amend or contribute to a law that will influence whom the courts order to pay a fine. The hope is that these dots, the influence of AI on democratic processes, are connected globally, soon, and in depth. Because as the trend towards autonomous AI tools takes off, lawmakers and parliaments may be forgetting that they exist for the people, and that every piece of AI they use can, and should, be termed high-risk, because it will affect humans and human endeavours.
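To make the feedback-loop point concrete, here is a toy simulation, entirely our own illustration rather than anything drawn from a parliamentary system, of how a mild skew compounds once a model’s outputs are fed back in as training data:

```python
# A toy simulation, our own illustration, of feedback-loop bias: when a
# model's outputs re-enter its training data, a mild initial skew in the
# corpus can compound over successive training cycles.
import random

def train(data):
    # "Training" here reduces to learning the share of the majority view.
    return sum(data) / len(data)

def generate(share, n):
    # Assumption: the model slightly sharpens whatever majority it sees,
    # a common failure mode when models imitate their most frequent inputs.
    p = share ** 2 / (share ** 2 + (1 - share) ** 2)
    return [1 if random.random() < p else 0 for _ in range(n)]

data = [1] * 550 + [0] * 450            # seed corpus with a mild 55/45 skew
for cycle in range(1, 6):
    share = train(data)
    data += generate(share, len(data))  # model outputs re-enter the corpus
    print(f"cycle {cycle}: majority share is now {train(data):.2f}")
```

Within a handful of cycles, the 55/45 split drifts towards near-unanimity; substitute political talking points for the toy “views” and the electoral stakes become obvious.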
Meanwhile, the list of countries that appear to have advanced in using AI to run a democracy* is interesting.

Concern 2: Discourse should focus on pioneer deployers, not wider optics
Worldwide, only 32 countries, located mainly in the Global North, have AI-specialised data centres, and US and Chinese companies reportedly control at least 90% of them.
While the North-South divide on AI in general rages on, the conversation around AI for legislative purposes needs to prioritise lessons and challenges from the actual early deployers, because being a global supercomputing power does not necessarily equate to mainstream use of AI for democratic processes. The US, which houses 41% of the 10,126 data centres worldwide, has lawmakers who appear mainly sceptical and still AI-curious. Meanwhile Bahrain, with just seven data centres, reports more legislative use of AI than Canada, which houses at least 277.
In the Global South, Brazil and Chile are clear legislative outliers. Brazil is using AI for cybersecurity, attempting to secure its voting processes through facial recognition of senators, to stop scandals such as those reported in 2021, when aides voted on behalf of parliamentarians.
Meanwhile, Chile’s legislative AI suite includes CAMINAR-A, used to monitor budgetary spending down to the verification of individual receipts. Also significant is that Chile is spearheading collaborations with dozens of institutions across Latin America and the Caribbean to build LatamGPT, a generative AI model that “caters to the contexts and needs of Latin American users.”
Concern 3: More attention, information needed on AI systems for Cybersecurity
Parliaments are routinely targeted by cyberattacks*, with parliamentary offices in New Zealand and the US recently falling victim. Attacks have been timed to prevent lawmakers in at least two countries from casting votes on bills, and have wiped out the network servers connecting computers in a third. Parliamentary email addresses are major targets; the UK parliament alone deals with at least three million attempted cyberattacks every month.
Relevant here is that as legislative offices expand their testing and use of AI, more vulnerabilities and opportunities for cyberattacks will emerge. As AI lowers the cost of mounting attacks, parliaments’ AI models could be targeted directly to produce outputs skewed towards selected outcomes; systems could be undermined at critical moments to derail legislative processes; or parliaments’ chatbots could be infiltrated to spread disinformation to the public, one response at a time.
Worth noting is Finland’s approach, four years after suffering a cyberattack. To protect parliamentary applications, Finland’s AI tool checks the code used by app developers at various stages, and audits of the process are then performed by a separate third party. More proactive use of AI in this way is welcome, to prevent the monetary and national-security consequences of cyberattacks, which the World Bank estimates will cost at least US$265 billion a year by 2031.
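Finland’s actual system is not publicly documented, but the pattern it describes, an automated AI pass over developer code whose verdicts are logged for later third-party audit, can be sketched generically. The model name, prompt and log file below are our own placeholders, assuming an OpenAI-compatible API and a git repository:

```python
# A generic sketch of AI-assisted code review with an auditable trail.
# Our own illustration, not Finland's parliamentary tooling.
import json
import subprocess
from datetime import datetime, timezone
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment

def review_diff(diff: str) -> str:
    """Ask a model to flag security-relevant changes in a code diff."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Briefly flag any security risks in this diff:\n" + diff,
        }],
    )
    return response.choices[0].message.content

# Review the latest commit, then append the verdict to an audit log that
# a separate third party can inspect after the fact.
diff = subprocess.run(
    ["git", "diff", "HEAD~1", "HEAD"], capture_output=True, text=True
).stdout
entry = {
    "time": datetime.now(timezone.utc).isoformat(),
    "verdict": review_diff(diff),
}
with open("ai_review_audit.jsonl", "a") as log:
    log.write(json.dumps(entry) + "\n")
```

The design point is the separation of duties: the AI flags risks at each stage, while an append-only log gives the independent auditor something verifiable to check.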
It’s time to catch up
Worldwide, legislatures are the most populous (and expensive) arm of government. The return on AI investment here needs to be justified, and publicised.
Everyone, from tech giants seeking looser or delayed regulation to civil servants arguing for sovereignty, talks about winning “the AI race.” Yet there is no comparable hurry to develop systems for the public audit of AI use in legislative offices and by legislative actors. Environmental impact and computing transparency are ongoing frontiers of global (in)action, and it is time our civil service corps caught up. Because although AI can greatly advance democracy when deployed in democratic processes, public accountability around legislative AI use should not be an afterthought, or non-existent.
We live in a world where crucial information about AI and its impact remains hidden behind think-tank paywalls, restricted policy briefs, reams of legalese and niche institutional databases. If elected officials want to hitch themselves to the “adopt AI immediately” wagon, they must prioritise the following: transparency of data storage and management; publicly accessible environmental impact assessments of their AI use; and open communication with journalists seeking information. To put it simply: is your parliamentary office batch-prompting its AI tools, or is it consuming more water than it would take to simply hire humans for the same job? Lawmakers using or building AI tools need to show the same level of transparency their AI Bills and Acts expect from other deployers of AI.
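For the non-technical reader, batch prompting simply means bundling many requests into one model call instead of paying the fixed overhead of a separate call each time. A minimal sketch, assuming a generic OpenAI-compatible API; the client, model name and clauses are placeholders, not any parliament’s actual setup:

```python
# Batch prompting versus one-request-per-item, in miniature.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment
clauses = ["Clause 1 ...", "Clause 2 ...", "Clause 3 ..."]

# Unbatched: one round-trip, and one repetition of the instructions,
# per clause.
# for clause in clauses:
#     client.chat.completions.create(
#         model="gpt-4o-mini",
#         messages=[{"role": "user", "content": f"Summarise: {clause}"}],
#     )

# Batched: a single request covers every clause, cutting round-trips and
# redundant instruction tokens, which is where the compute, energy and
# cooling-water savings come from.
numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(clauses))
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Summarise each numbered clause in one sentence:\n" + numbered,
    }],
)
print(response.choices[0].message.content)
```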
After all, there is something terribly hypocritical about lawmakers in some of the world’s most water-challenged countries also being the ones at the forefront of water-guzzling AI use.
*Bahrain is a constitutional monarchy. The reported AI use is by the country’s Shura Council and its Council of Representatives.
*Deployment Bias “occurs when an AI system that works well in a test environment performs poorly when deployed in the real world.” Feedback Loop Bias is “when the output of an AI system influences future inputs, potentially reinforcing and amplifying existing biases over time.”
*Cyberattacks “are attempts to misuse information, by stealing, destroying or exposing it and they aim to disrupt or destroy computer systems and networks.”
About the Author
Ruona Meyer is a journalist, researcher and media trainer. Her most notable AI-based work was the design and delivery of this Framework for the European Network of Equality Bodies.
No AI was used in the writing of this editorial, or the design of the graphics.