Global Development Institute Blog

by Louisa Hann

If today’s Silicon Valley billionaires are to be believed, AI is about to supercharge your quality of life, boost your productivity, and provide access to “wildly abundant” intelligence. As Sam Altman, CEO of OpenAI, puts it, the future “can be vastly better than the present”, with the world “getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before”.

Beyond the hype and hubris, however, the future of the emerging technologies lumped under the term “artificial intelligence” is subject to fiery debate. Alongside discussions of a looming ‘jobpocalypse’, AI has been associated with a host of operational issues, from botched surgeries to the pollution of academic papers with fake citations.

So, while the tech elite push utopian narratives, what can we learn from academics engaging in sober critique grounded in evidence? With AI adoption inevitably impacting the development landscape and wider geopolitics, many GDI academics are busy analysing its potentials and pitfalls – and we’ve rounded up their latest findings below…

 

AI threatens farmers’ livelihoods in the Global South

AI has been touted as a gamechanger for the agricultural sector, promising to improve productivity, efficiency, and sustainability in the face of pressing challenges such as climate change, soil degradation, and high input costs. However, as Katarzyna Cieslik argues in a recent intervention, the imposition of AI on the complex web of material relations that sustains rural land and livelihoods – especially in the Global South – can erode vital agrarian knowledges and compound postcolonial inequalities.

So, what does this mean in practice? Those pushing AI-based agricultural systems claim the data they collect and process can aid farmers by, for example, providing insights into harvest cycles and farm profitability. However, as Cieslik explains, the material hardware involved in these systems – such as sensor networks and livestock collars – requires consistent material care, generating new forms of agricultural labour. This can harm farm workers’ autonomy, affording them less time to care for the land and gradually eroding local agricultural knowledge developed over centuries. With knowledge-based decisions increasingly replaced by the profit imperatives of tech giants, crop diversity diminishes for the sake of automation and efficiency – with severe social and ecological consequences.

As explained in the introduction to the collection to which Cieslik contributes, this problem reflects broader capitalist dynamics, as “the growth of AI is adding another layer to […] logics of extraction and rentier relations, in ways that often deepen existing inequalities as well as create new ones.” As AI giants promote the benefits of automation across various spheres, therefore, we must be vigilant about whose interests they truly serve.

 

AI for forest restoration – a potential boon (with caveats…)

At first glance, the case for AI adoption to support reforestation efforts is very compelling. As Mariana Hernandez-Montilla and colleagues explore in a recent article on forest restoration, AI-based algorithms represent just one in a whole host of developments that could shape the field in coming years. They could bring significant technical advances in tree species identification, quantification of carbon stocks, monitoring processes, and more. New AI platforms such as Earth Index also have the potential to democratise knowledge and data access surrounding land use.

However, dig a little deeper, and the politics of AI adoption become more complex. The authors point out that much AI technology requires sophisticated monitoring equipment that means it is not truly free for public use. While platforms such as Google Earth provide some access to very high-resolution imagery, much data is still ringfenced for those able to pay. Such restrictions may disadvantage the Indigenous peoples and local communities based in reforestation areas, especially if they preclude the incorporation of local knowledge in decision-making processes. As such, the authors suggest AI should be approached as just one of several tools in broader efforts to achieve equitable reforestation.

 

Authoritarian regimes are using AI to govern

While AI’s cheerleaders tend to emphasise the technology’s potential to liberate humanity from monotonous tasks, we can’t ignore the potential for oppressive actors to harness AI for their own ends. As Arash Beidollahkhani explores in a recent article for Democratization, authoritarian regimes are using AI to bolster state control, surveillance, and repression. While autocrats have always relied on such methods, AI technologies are enhancing their efficacy through, for example, facial recognition and predictive analytics that pre-empt individuals’ involvement in resistance movements and suppress dissent.

Beidollahkhani focuses on three states to exemplify the potential dangers of AI under authoritarian control – Iran, Saudi Arabia, and the United Arab Emirates. In Iran, for example, the government has harnessed AI-assisted tools to enforce morality laws, using facial recognition technologies to identify and punish women who fail to comply with dress codes. In Saudi Arabia, the government uses targeted AI-generated messaging to shore up support for the regime:

During diplomatic rifts or human rights criticisms, bots are reprogrammed in real-time to propagate reconciliation narratives or discredit whistleblowers. This dynamic responsiveness distinguishes Saudi Arabia’s AI-enabled disinformation strategy from earlier static campaigns, allowing it to frame international discourse while suppressing reputational threats through soft-power manipulation.

However, as the article explains, AI and other technologies may create the foundations for novel forms of digital resistance, helping activists evade surveillance through decentralised networks, for example. As the world grapples with AI’s rapid development, then, we must remain alert both to its societal consequences and to its potential for dissident use, especially given the prevalence of authoritarian creep throughout the world.

 

We should proactively consider AI’s undesirable applications

The way we talk about AI often contains a spirit of fatalism or inevitability. Whether we’re discussing job automation or the future of creative pursuits, many believe that if AI is capable of automating a task, then humanity will submit to the quicker and cheaper option. But should we automate a task just because we can? And how can we assess and push back against the potential drawbacks of AI?

As Anuradha Ganapathy explains in a blog post, her recent fieldwork with communities in rural India shed some light on these tricky questions. Her aim was to learn more about a new digital tool designed to provide communities with comprehensive information on the landscape and its resources, such as water stress, forest health, soil type, and flora and fauna biodiversity. The aim of digitising and distributing this data was to enhance awareness surrounding resource distribution in village planning processes.

As Ganapathy notes, she expected AI to provide the foundation for this technology, but evidence of AI use wasn’t forthcoming:

[The villagers] did not need workflows to be automated. They needed data to challenge and dismantle the structures that upheld these workflows. Structures built on top-down norms of budget and target allocation. Structures enabled by caste hierarchies and power asymmetries. Structures that rendered them invisible or unimportant.

 

They were not looking to be educated on how the algorithm works, what rules it was coded on, what norms guided data use. Instead, they asked if the tool could amplify their voices, legitimise their rights and entitlements, capture systematic violations and erasures, and hold local village councils to account.

For many of us, the ways in which AI tools operate remain shrouded in mystery. When ChatGPT first launched, for example, few were aware of how OpenAI trained its technologies. As Ganapathy’s story reveals, maintaining oversight of the purpose and processes behind a tool can ensure it works in the interest of users, as opposed to broader systems that may subjugate them.

It’s for this reason that she advocates a critical approach to AI that is sensitive to potential ‘non-use cases’. In other words, identifying the areas where AI presents a threat to equity or social justice can help focus our efforts to resist technology’s anti-liberatory mechanisms. Rather than focusing on ‘use cases’ alone, this approach reminds us that AI adoption is not mandatory, but something we can choose to implement as we work towards a better future.

 

Keep up with GDI’s latest research

Want to discover more about GDI’s technology-focused research? Check out the latest updates from the Centre for Digital Development by heading to their ICT4D blog.

You can also read more about GDI’s latest output by visiting the GDI website or searching publications in Research Explorer. If you’re a social media user, keep track of our latest updates via Instagram, LinkedIn, and Bluesky. For those looking to receive updates and commentary straight in your inbox, we recommend signing up to our monthly newsletter.

Top image by Ales Nesetril on Unsplash.

Note: This article gives the views of the author/academic featured and does not necessarily represent the views of the Global Development Institute as a whole.

Please feel free to use this post under the following Creative Commons license: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0). Full information is available here.