Many of the most important documents that codify human rights were written before the age of digital interdependence. They include the Universal Declaration of Human Rights; the International Covenant on Economic, Social and Cultural Rights and the International Covenant on Civil and Political Rights; the Convention on the Elimination of All Forms of Discrimination Against Women; and the Convention on the Rights of the Child.
The rights these treaties and conventions codify apply in full in the digital age – and often with fresh urgency.
Digital technologies are widely used to advocate for, defend and exercise human rights – but also to violate them. Social media, for example, has provided powerful new ways to exercise the rights to free expression and association and to document rights violations. It is also used to violate rights by spreading lies that incite hatred and foment violence, often at terrible speed and with the cloak of anonymity.
The most outrageous cases make the headlines. The live streaming of mass shootings in New Zealand. Incitement of violence against an ethnic minority in Myanmar. The #gamergate scandal, in which women working in video games were threatened with rape. The suicides of a British teenager who had viewed self-harm content on social media and an Indian man bullied after posting videos of himself dressed as a woman.
But these are manifestations of a problem that runs wide and deep: one survey of UK adult internet users found that 40 percent of 16–24 year-olds reported encountering some form of harmful online content, with examples ranging from racism to harassment and child abuse. Children are at particular risk: almost a third of under-18s report having recently been exposed to “violent or hateful contact or behavior online”. Elderly people are also more vulnerable to online fraud and misinformation.
Governments have increasingly sought to cut off social media in febrile situations – such as after a terrorist attack – when the risks of rapidly spreading misinformation are especially high. But denying access to the internet can also be part of a sustained government policy that itself violates citizens’ rights, including depriving people of access to information. Across the globe, governments directed 188 separate internet shutdowns in 2018, up from 108 in 2017.
Protecting Human Rights in the Digital Age
Universal human rights apply equally online as offline – freedom of expression and assembly, for example, are no less important in cyberspace than in the town square. That said, in many cases, it is far from obvious how human rights laws and treaties drafted in a pre-digital era should be applied in the digital age.
There is an urgent need to examine how time-honored human rights frameworks and conventions – and the obligations that flow from those commitments – can guide actions and policies relating to digital cooperation and digital technology.
The Panel’s recommendation urges the UN Secretary-General to begin a process that invites views from all stakeholders on how human rights can be meaningfully applied to ensure that no gaps in protection are caused by new and emerging digital technologies.
Such a process could draw inspiration from many recent national and global efforts to apply human rights for the digital age. Illustrative examples include:
- India’s Supreme Court has issued a judgment defining what the right to privacy means in the digital context.
- Nigeria’s draft Digital Rights and Freedom Bill tries to apply international human rights law to national digital realities.
- The UN Global Compact and UNICEF have developed guidance on how businesses should approach children’s rights in the digital age.
- UNESCO has used its Rights, Openness, Accessibility and Multi-stakeholder participation (ROAM) framework to discuss AI’s implications for rights including freedom of expression, privacy, equality, and participation in public life.
- The Council of Europe has developed recommendations and guidelines, and the European Court of Human Rights has produced case law, interpreting the European Convention on Human Rights in the digital realm.
We must collectively ensure that advances in technology are not used to erode human rights or avoid accountability. Human rights defenders should not be targeted for their use of digital media. International mechanisms for human rights reporting by states should better incorporate the digital dimension.
In the digital age, the role of the private sector in human rights is becoming increasingly pronounced. As digital technologies and digital services reach scale so quickly, decisions taken by private companies are increasingly affecting millions of people across national borders.
The roles of government and business are described in the 2011 Guiding Principles on Business and Human Rights. Though not binding, they were unanimously endorsed by the UN Human Rights Council and the UN General Assembly. They affirm that while states have the duty to protect rights and provide remedies, businesses also have a responsibility to respect human rights, evaluate risk and assess the human rights impact of their actions.
There is now a critical need for clearer guidance on what is expected of private companies with respect to human rights as they develop and deploy digital technologies. The need is especially pressing for social media companies, which is why our recommendation calls on them to put in place procedures, staff, and better ways of working with civil society and human rights defenders to prevent violations or redress them quickly.
We heard from one interviewee that companies can struggle to understand local context quickly enough to respond effectively in fast-developing conflict situations and may welcome UN or other expert insight in helping them assess concerns being raised by local actors. One potential avenue for information sharing is the UN Forum on Business and Human Rights, through which the Office of the High Commissioner for Human Rights in Geneva hosts regular discussions among the private sector and civil society.
Civil society organizations would like to go beyond information sharing and use such forums to identify patterns of violations and hold the private sector to account. Governments also are becoming less willing to accept a hands-off regulatory approach: in the UK, for example, legislators are exploring how existing legal principles such as “duty of care” could be applied to social media firms.
As any new technology is developed, we should ask how it might inadvertently create new ways of violating rights – especially of people who are already often marginalized or discriminated against. Women, for example, experience higher levels of online harassment than men. The development of personal care robots is raising questions about the rights of elderly people to dignity, privacy, and agency.
The rights of children need especially acute attention. Children go online at ever younger ages, and under-18s make up one-third of all internet users. They are especially vulnerable to online bullying and sexual exploitation. Digital technologies should promote the best interests of children and respect their agency to articulate their needs, in accordance with the Convention on the Rights of the Child.
Online services and apps used by children should be subject to strict design and data consent standards. Notable examples include the United States’ Children’s Online Privacy Protection Rule, updated in 2013, and the draft Age Appropriate Design Code announced by the UK Information Commissioner in 2019, which sets standards for apps, games, and many other digital services even when they are not aimed specifically at children.
Human Dignity, Agency, and Choice
We are delegating more and more decisions to intelligent systems, from how to get to work to what to eat for dinner. This can improve our lives, by freeing up time for activities we find more important. But it is also forcing us to rethink our understandings of human dignity and agency, as algorithms are increasingly sophisticated at manipulating our choices – for example, to keep our attention glued to a screen.
It is also becoming apparent that ‘intelligent’ systems can reinforce discrimination. Many algorithms have been shown to reflect the biases of their creators. This is just one reason why employment in the technology sector needs to be more diverse – as noted in our recommendation, which calls for improving gender equality. Gaps in the data on which algorithms are trained can likewise automate existing patterns of discrimination, as machine learning systems are only as good as the data that is fed to them.
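To make the mechanism concrete, the minimal sketch below (in Python, with entirely invented data and group names) shows how a system that simply learns from past decisions reproduces whatever skew those decisions contain: nothing in the code is prejudiced except the historical record it is given.

```python
# Illustrative sketch only: hypothetical loan-decision history, not real data.
# The "model" learns each group's past approval rate and applies it to new cases,
# so a historical skew is automated into future decisions.
from collections import defaultdict

history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict(group: str) -> bool:
    """Approve if the group's historical approval rate exceeds 50 percent."""
    approvals, total = counts[group]
    return approvals / total > 0.5

# Otherwise identical applicants receive different outcomes purely because the
# training data encodes past discrimination.
print(predict("group_a"))  # True
print(predict("group_b"))  # False
```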
Often the discrimination is too subtle to notice, but the real-life consequences can be profound when AI systems are used to make decisions about, for example, who is eligible for home loans or for public services such as health care. The harm caused can be complicated to redress. A growing number of initiatives, such as the Institute of Electrical and Electronics Engineers (IEEE)’s Global Initiative on Ethics of Autonomous and Intelligent Systems, are seeking to define how developers of artificial intelligence should address these and similar problems.
Other initiatives are looking at questions of human responsibility and legal accountability – a complex and rapidly-changing area. Legal systems assume that decisions can be traced back to people. Autonomous intelligent systems raise the danger that humans could evade responsibility for decisions made or actions taken by the technology they designed, trained, adapted, or deployed. In any given case, legal liability might ultimately rest with the people who developed the technology, the people who chose the data on which to train the technology, and/or the people who chose to deploy the technology in a given situation.
These questions come into sharpest focus with lethal autonomous weapons systems – machines that can autonomously select targets and kill. UN Secretary-General António Guterres has called for a ban on machines with the power and discretion to take lives without human involvement, a position which this Panel supports.
The Panel supports, as stated in our recommendation, the emerging global consensus that autonomous intelligent systems be designed so that their decisions can be explained, and humans remain accountable. These systems demand the highest standards of ethics and engineering. They should be used with extreme caution to make decisions affecting people’s social or economic opportunities or rights, and individuals should have meaningful opportunities to appeal. Life and death decisions should not be delegated to machines.
The Right to Privacy
The right to privacy has become particularly contentious as digital technologies have given governments and private companies vast new possibilities for surveillance, tracking, and monitoring, some of which are invasive of privacy. As with so many areas of digital technology, there needs to be a society-wide conversation, based on informed consent, about the boundaries and norms for such uses of digital technology and AI. Surveillance, tracking, or monitoring by governments or businesses should not violate international human rights law.
Notions and expectations of privacy also differ across cultures and societies. How should an individual’s right to privacy be balanced against the interest of businesses in accessing data to improve services and government interest in accessing data for legitimate public purposes related to law enforcement and national security?
Societies around the world debate these questions heatedly when hard cases come to light, such as Apple’s 2016 refusal of a request from the United States Federal Bureau of Investigation (FBI) to help unlock an iPhone belonging to a shooting suspect. Different governments are taking different approaches: some are forcing technology companies to provide technical means of access, sometimes referred to as “backdoors”, so that the state can access personal data.
Complications arise when data is located in another country: in 2013, Microsoft refused an FBI request to provide a suspect’s emails that were stored on a server in Ireland. The United States of America (USA) has since passed a law obliging American companies to comply with warrants for the data of American citizens even if it is stored abroad. The law also enables other governments to negotiate separate agreements to access their citizens’ data stored by American companies in the USA.
There currently seems to be little alternative to handling cross-border law enforcement requests through a complex and slow-moving patchwork of bilateral agreements – the attitudes of people and governments around the world differ widely, and the decision-making role of global technology companies is evolving. Nonetheless, it is possible that regional and multilateral arrangements could develop over time.
For individuals, what companies can do with their personal data is a question not only of legality but also of practical understanding: managing permissions for every single organization we interact with would be prohibitively time-consuming and confusing. How to give people greater and more meaningful control over their personal data is an important question for digital cooperation.
Alongside the right to privacy is the important question of who realizes the economic value that can be derived from personal data. Consumers typically have little awareness of how their personal information is sold or otherwise used to generate economic benefit.
There are emerging ideas to make data transactions more explicit and to share the value extracted from personal data with the individuals who provide it. These could include business models which give users greater privacy by default: promising examples include the web browser Brave and the search engine DuckDuckGo. They could also include new legal structures: the UK and India are among the countries exploring the idea of a third-party ‘data fiduciary’ whom users can authorize to manage their personal data on their behalf.
Trust and Social Cohesion
The world is suffering from a “trust deficit disorder”, in the words of the UN Secretary-General addressing the UN General Assembly in 2018. Trust among nations and in multilateral processes has weakened as states focus more on strategic competition than common interests and behave more aggressively. Building trust, and underpinning it with clear and agreed standards, is central to the success of digital cooperation.
Digital technologies have enabled some new interactions that promote trust, notably by verifying people’s identities and allowing others to rate them. Although not reliable in all instances, such systems have enabled many entrepreneurs on e-commerce platforms to win the trust of consumers, and given many people on sharing platforms the confidence to invite strangers into their cars or homes.
In other ways, digital technologies are eroding trust. Lies can now spread more easily, including through algorithms that generate and promote misinformation, sowing discord and undermining confidence in political processes. The use of artificial intelligence to produce “deep fakes” – audio and visual content that convincingly mimics real humans – further complicates the task of telling truth from misinformation.
Violations of privacy and security are undermining people’s trust in governments and companies. Trust between states is challenged by new ways to conduct espionage, manipulate public opinion and infiltrate critical infrastructure. While academia has traditionally nurtured international cooperation in artificial intelligence, governments are incentivized to secrecy by the awareness that future breakthroughs could dramatically shift the balance of power.
The trust deficit might in part be tackled by new technologies, such as training algorithms to identify and take down misinformation. But such solutions will pose their own issues: could we trust the accuracy and impartiality of the algorithms? Ultimately, trust needs to be built through clear standards and agreements based on mutual self-interest and values and with wide participation among all stakeholders, and mechanisms to impose costs for violations.
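To illustrate why those doubts arise, the brief sketch below (in Python, assuming the scikit-learn library is available and using invented example posts and labels) shows the kind of text classifier the previous paragraph alludes to: its judgment of what counts as misinformation is inherited entirely from the human-assigned labels it is trained on, so questions of accuracy and impartiality apply to the labelling as much as to the algorithm itself.

```python
# Illustrative sketch only: a toy misinformation classifier trained on
# hypothetical, human-labelled posts (1 = flagged as misinformation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "miracle cure doctors don't want you to know about",
    "official health agency publishes updated vaccination guidance",
    "secret plot behind the election results exposed",
    "election commission releases certified vote totals",
]
labels = [1, 0, 1, 0]

# Convert posts to word-frequency features, then fit a simple linear model.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# The model scores new posts; a platform might review or remove high-scoring ones.
# Whatever patterns the labellers rewarded or penalized are what the model enforces.
print(classifier.predict_proba(["shocking secret cure revealed"])[0][1])
```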
All citizens can play a role in building societal resilience against the misuse of digital technology. We all need to deepen our understanding of the political, social, cultural, and economic impacts of digital technologies and what it means to use them responsibly. We encourage nations to consider how educational systems can train students to thoughtfully consider the sources and credibility of information.
There are many encouraging instances of digital cooperation being used to build individual capacities that will collectively make it harder for the irresponsible use of digital technologies to erode societal trust. Examples drawn to the Panel’s attention by written submissions and interviews include:
- The 5Rights Foundation and British Telecom developed an initiative to help children understand how the apps and games they use make money, including through techniques designed to hold their attention for longer.
- The Cisco Networking Academy and United Nations Volunteers are training youth in Asia and Latin America to explore how digital technologies can enable them to become agents of social change in their communities.
- The Digital Empowerment Foundation is working in India with WhatsApp and community leaders to stop the spread of misinformation on social media.