Digital downsides: inequities, intrusion and oppression
Each passing week seems to generate yet another poke in the eye for technocratic, utopian visions of a future digitally developed world. Earlier this month, the World Bank’s landmark review of internet for development, which IDS and Nesta are launching in the UK next week, found that while digital technologies have spread quickly, the benefits are yet to be realized by most of the world’s population. Moreover, there are structural features of digital innovations and markets that need to be tackled if these widespread inequities are not to become deeply and permanently entrenched. (Look out for my follow-up blog next week on the UK launch event hosted by IDS and Nesta.)
Last week saw security concerns come to the forefront, with major security issues identified with the Internet of Things. The technology site Ars Technica revealed a search engine that allows users to access live video from any device with an internet-connected webcam without adequate security precautions – the most disturbing example being videos of sleeping babies whose parents use webcams as a monitoring device. The engine was developed to reveal the fragility of the Internet of Things, which is being largely ignored as device manufacturers drive down costs in the race to connect everything from toasters to cars to medical devices to the internet. Analogous challenges have been identified by Privacy International in their excellent work on ‘Aiding Surveillance’, which argued that ‘the collection and use of personal information in these development and aid initiatives is without precedent, and subject to few legal safeguards.’
And this week, IDS has re-launched the IDS Bulletin as open access, with the inaugural issue appropriately enough on the topic of ‘opening governance’. It contains numerous accounts of the limitations, and risks, of technology for achieving social, political and economic change for citizens. As the editors Rosie McGee and Duncan Edwards put it in their editorial summary: ‘tech initiatives which push on open doors succeed but ones which push on closed or locked doors don’t; it is not the technology that leads to the accountability impact but the agency, organisational, institutional and cultural aspects of the context.’ More worryingly, there are clear instances of governments using the empowering potential of technology to exert control and oppress citizens. In Mexico, one contributor finds that ‘citizens have to struggle against increasingly sophisticated techniques of control and repression that successfully exploit the very mechanisms that many consider to be emancipatory technologies’.
Inequality and exclusion; surveillance and intrusions; technological biases; control and repression. It may be easy to write these downsides of digital development off. Haven’t all technologies throughout history come with opportunities and costs? Isn’t the whole point of effective policy to ensure benefits are realised while navigating pitfalls? But these downsides are emerging so frequently, and with such credible evidence behind them, that one cannot help but ask what might be done to address these challenges.
Principles for digital development
A useful starting point might be the principles for digital development programmes and interventions (see figure 1) to which many donors and international organisations have signed up.
Applied systematically, and used as a means of holding projects to account, these principles have the potential to mitigate some of the downsides of digital development. However, in practice the principles are too often used as aspirations, and not enough as a means of ensuring accountability. Moreover, while the principles do include privacy and security, there is nothing explicit on broader issues of ethics, values and morals.
There is also a vital question of ‘whose principles count?’ – as they stand, the principles are almost exclusively the province of international organisations, and not of governments, national actors and communities who are, after all, the primary stakeholders in the development process. Although one may assume that these actors are seen as the ‘user’, there are two problems here. First, in common with development more generally, by the time ‘users’ are brought into the mix, the objectives and parameters of the project are often clearly defined, and the user only gets to make incremental changes to how projects might be implemented.
Second, there is also the question of who one understands the user to be. While some may assume users are poor marginalized citizens whose voices aren’t being heard, others may think that users are NGOs or operational organisations deploying the technology in question. ‘Design with the deployer’ means digital design can and does happen without any reality-checks with conditions and contexts on the ground, much less with the needs, demands, priorities and realities of poor and marginalized people.
And while there is a focus in the principles on reusing and improving, and ‘failing quickly’, this is almost entirely based on technological adjustments – there is almost nothing on taking responsibility for deeper failures in projects and programmes, let alone those that go further and impact negatively on the lives of people and communities in developing countries. Consider the following three well-publicised examples:
- The now-infamous case of Uganda banning all mobile health applications in 2012 because the health system was under considerable strain from the proliferation of apps and tools with no coordination or oversight.
- Digital schools in South Africa leading to large volumes of tablet orders, spikes in muggings of students and burglaries in schools, but still not enough trained teachers.
- India’s digital biometric identity card, Aadhaar, initially presented as a voluntary mechanism, with its pro-poor benefits lauded, but increasingly being seen as a tool for identification and surveillance, rather than identity and protection.
Let’s not mince words here. These are all examples of some pretty fundamental failures of digital development, whether principled or otherwise. They have caused, or have the potential to cause, considerable damage to the people who are most vulnerable and most in need, and whose lives development efforts are supposed to help improve.
These failures can be attributed, in part at least, to the fact that dominant interests and needs – of private individuals, states, markets, and development agencies themselves – appear to have been placed above the needs of citizens.
These examples have all used the language, labels and codes of development – and so, cynically, they may be seen as the result of deliberate and wilful obfuscation and the co-option of development goals. More charitably, they are the result of blundering incompetence. And these are just the obvious and immediate examples – little wonder that some have started to use the phrase ‘digital white elephants’ to describe this new generation of technocratic development failures.
Oaths and Watchdogs?
So how do we deal with this? In the parlance of Silicon Valley, how do we ensure that these and similar failures are bugs, rather than features of the digital development movement? I’d like to make two suggestions, drawing on various movements underway in the wider realm of digital ethics and accountability.
First, building on reflections and efforts that are already underway (see here, here and here), we need a ‘zeroth principle’ for digital development efforts that sits above all of the above and comes foremost in all our calculations, assessments, designs, plans and interventions. Such a principle needs to be simple and have the quality of a Hippocratic Oath: above all, do no harm. Just like the physician’s version, this would be a symbolic and practical attestation to undertake digital development in the best interests of poor and vulnerable people in developing countries. It would also call upon organisations to demonstrate their commitment by being transparent about how they fulfil this principle.
Second, to monitor this and the other principles, and to hold efforts to account, we need to add a new kind of digital development role. I see this as a new breed of digital development specialist who works with – and occasionally in spite of – the current bubbling mix of designers, researchers, technologists, facilitators, investors and users. We are going to need more accountability specialists, watchdogs and advocates, who can work to give the principles teeth, to highlight and mitigate the risks of digital development, and to lead much more objective evaluation of digital development efforts, as a complement to the aspirational work already underway on principles. They will need to scrutinise specific initiatives, but also take a broader, more systemic perspective on the pluses and minuses of digital development (IDS has plans in this latter area, about which more soon).
These digital development watchdogs, accountability specialists and evaluators are not going to be popular or lauded; in fact, they are likely to draw much opprobrium and criticism from many quarters, including from within the development sector. But as long as they do their job correctly and with diligence, such criticisms should be seen as nothing less than an indicator of success and a firm endorsement of their value. Different organisations increasingly seem to be picking up on this: Privacy International’s work mentioned above, Amnesty’s new work focusing on technology, Human Rights Watch’s work on digital rights, and the Responsible Data Forum’s work on the ethics and practices of data in development.
Without these and similar measures playing a more central role in digital development efforts, the digital development agenda risks moving forward with insufficient reflection on its deeper failures and unintended consequences, and with a worrying degree of naivety about its capture by dominant interests and groups. In short: left unchecked, the dark side will prevail.