by Anuradha Ganapathy, PhD Researcher at the Global Development Institute
Can you write about AI without ever once referring to it?
This was my first thought when I came back from the field, learning from and with communities in rural India who were experimenting with a new AI tool. Built on geospatial data and machine learning algorithms, the tool provided data on socio-ecological indicators such as water stress, forest health, soil type, and flora and fauna biodiversity, to enable communities to develop a shared understanding of their landscape.
It facilitated a participatory digital mapping process in which community resource persons worked alongside community members to disaggregate and contextualise this data at the village level, integrate lived realities of access to and use of resources, and document household characteristics (e.g., caste groups and livelihood information), in order to create a comprehensive socio-ecological profile of the village.
Building on existing Participatory Rural Appraisal methods, the tool digitises and systematises this process by strengthening the evidence base of community demands for natural resource assets (such as wells or ponds), making inequities in the distribution of and access to these assets more visible, and enhancing transparency and accountability in village planning processes.
Throughout the project, I kept looking for AI. As a technology, as a marker of significance, as a key actor. I didn’t find it. It’s only now that I understand why: I was looking for it in spaces it didn’t belong.
The people I met didn’t need an algorithm to make rules for them; they needed safeguards to ensure that the codified rules of the commons were not misappropriated for the benefit of a powerful few.
They did not need workflows to be automated. They needed data to challenge and dismantle the structures that upheld these workflows. Structures built on top-down norms of budget and target allocation. Structures enabled by caste hierarchies and power asymmetries. Structures that rendered them invisible or unimportant.
They were not looking to be educated on how the algorithm worked, what rules it was coded on, or what norms guided data use. Instead, they asked if the tool could amplify their voices, legitimise their rights and entitlements, capture systematic violations and erasures, and hold local village councils to account.
We talked about changing rainfall patterns. About human-animal conflict. About encroachments on forest land. About persistent caste conflicts. Far away from the imagined dystopias of “driver-less” cars, “employee-less” workplaces and “job-less” human beings, these were the lived dystopias of the everyday. They played out in daily encounters, posing real threats to lives and livelihoods.
Amidst these, there were many utopias to look forward to. But these were not ignited by claims to human-like intelligence. Or by some grand ideas of a tool doing something better than a human ever could. Instead, they came from being able to distort, even if only partially, the structures of entrenched power. From being able to participate as “equals” in spaces where regressive caste norms and practices continued to persist. From navigating unpleasant confrontations and difficult conversations.
Where did AI fit in here? It’s hard to say. Maybe as a data point or a technical assistant. As an insight into depleting groundwater resources or changing cropping patterns. As telling evidence of uneven access. As a digital annotator and archive, a note-taker and record-keeper. Or even as a glitch in the app. But it was never recognised as AI. Did AI matter? Yes and no. Either way, the real question that mattered was: are communities getting any closer to realising the utopias they desired?
AI as the uninvited guest
The connections we make between what we believe AI can do, what we want it to do, and what it actually does are always tenuous, and usually wrong. This is because, in today’s context, AI is made to appear in our lives as an uninvited guest at the dinner table: it arrives without warning and without being asked, but always expects to be served.
Perhaps we need to think differently about what constitutes AI literacies. Perhaps AI literacies should begin with developing a shared (and decisive) understanding of AI intrusions – that is, of spaces where AI simply assumes authority without being given any legitimacy in the first place. The debate is not about which tasks AI can or cannot perform better than human beings. Or about the fact that it fails miserably at certain tasks (which it does) and therefore needs to be contained. This framing concedes too much to a narrow logic of efficiency and performance.
The debate must become normative – it must move to the spaces, material or discursive, that we want to (need to?) protect – our lands, our languages, our rights, or our ways of knowing and being. The spaces where human beings, with all their vulnerabilities, failings, and limitations, remain indispensable, because their presence anchors responsibility, judgment, and moral accountability to the process.
From use cases to non-use cases of AI
As we passionately contest “use cases”[1] of AI for socio-economic development, we should also start earmarking “non-use” cases for AI. Spaces that we don’t want AI to transgress into. Roles it shouldn’t be assuming. Work that should be out-of-bounds for it.
Non-use, as I see it, is the vocabulary of power and resistance. It doesn’t privilege technological prowess, nor is it beholden to a starry-eyed vision of a glitzy tech future. It shifts the discourse from what AI models can or can’t do to what we as a society are willing to accept as a legitimate function of AI. It turns the gaze to those who are most impacted by a current or potential AI intervention and asks: did they want it in the first place? And if they didn’t, then why is it here? The futures that we cannot do without need a clearer articulation and imagination of the AI intrusions that we are unwilling to live with.
[1] In software testing language, use cases define functionality of a tool from the user’s perspective, highlighting what the system should do.
Top image by Growtika on Unsplash.
Note: This article gives the views of the author/academic featured and does not necessarily represent the views of the Global Development Institute as a whole
Please feel free to use this post under the following Creative Commons license: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0). Full information is available here.