Create an IPCC-like body to harness benefits and combat harms of digital tech

Search engines, online banking, social-media platforms and large language models, such as ChatGPT, are among the many computational systems that offer (or could offer) tremendous benefits. They provide people with unprecedented access to information. They help to connect hundreds of millions of individuals. And they could make all sorts of tasks easier, from writing computer code to preparing scientific manuscripts.

Such innovations also come with risks.

The speed at which content can be generated and shared creates new possibilities for amplifying hate speech, misinformation and disinformation1,2. Decision-making that is augmented by algorithms can exacerbate existing societal biases and create new forms of inequity, for instance in policing and health care3. And generative artificial-intelligence (AI) systems that create visual and written content at scale could be used in ways for which the world is not prepared, culturally or legally.

Although an increasing number of universities, institutes, think tanks and government organizations are attempting to make sense of and improve the digital world, technology companies are, in our view, deploying a range of tactics to influence debate about the tools they are developing. As independent researchers studying the societal impacts of digital information technologies, we continually weigh the risks of corporations taking legal action against us for the most basic scholarly activities: collecting and sharing data, analysing findings, publishing papers and distributing results. Also, the type of data made available tends to focus research efforts on the behaviour of users, rather than on the design of the platforms themselves.

The challenges now stemming from the global information ecosystem resemble those of climate change and ecosystem degradation in complexity, scale and importance. Just as bodies such as the United Nations Intergovernmental Panel on Climate Change (IPCC) conduct assessments of global environmental change that inform evidence-based policy, an analogous panel is now needed to understand and address the impact of emerging information technologies on the world’s social, economic, political and natural systems.

An Intergovernmental Panel on Information Technology would have more leverage when it comes to persuading technology companies to share their data than would independent researchers or non-profit groups, such as the Coalition for Independent Technology Research, which defends the right of researchers to study the impact of technology on society. A panel would also have credibility in non-Western countries — this will be increasingly crucial as the impacts of digital communication technologies play out in different cultural contexts.

We are aware that a non-profit organization called PeaceTech Lab in Washington DC — headed by current and former technology, media and telecommunications executives who have partnered with Microsoft, Amazon and Facebook — is assembling a panel with a similar charge to that of the body we are proposing (see go.nature.com/3vctvmb). We question, however, whether such a group can operate with independence.

Help or harm?

It is often straightforward to understand the short-term incentives that lead people to adopt digital information technologies. It is much harder to predict long-term responses, and to work out which harms will need to be mitigated down the road while preserving the potential benefits.

Machine-learning algorithms designed to guide landlords on rental pricing, for example, have supported cartel-like dynamics with respect to rent pricing and supply restrictions. And algorithms that direct police towards ‘high-crime areas’, using data on the locations of past arrests, can exacerbate existing biases in the criminal justice system.
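The mechanics of that second feedback loop are simple enough to simulate. The following sketch (in Python, with purely illustrative numbers of our own) gives two neighbourhoods identical offence rates but allocates patrols in proportion to past arrests; the arrest record then keeps ‘confirming’ the initial imbalance.

```python
import random

random.seed(1)

# Two neighbourhoods with IDENTICAL underlying offence rates, patrolled
# in proportion to past arrests. All numbers are illustrative assumptions.
OFFENCE_RATE = 0.1            # same in both neighbourhoods
TOTAL_PATROLS = 100
arrests = [12, 10]            # a small, arbitrary initial imbalance

for year in range(10):
    share = arrests[0] / sum(arrests)       # neighbourhood 0's 'crime' score
    patrols = [round(TOTAL_PATROLS * share),
               TOTAL_PATROLS - round(TOTAL_PATROLS * share)]
    for i in (0, 1):
        # Arrests scale with patrol presence, not with crime itself.
        arrests[i] += sum(random.random() < OFFENCE_RATE
                          for _ in range(patrols[i]))

print(arrests)
# Both areas offend at the same rate, yet the record keeps 'confirming'
# that neighbourhood 0 is the high-crime area, directing patrols back to it.
```

The arrest data end up measuring where police were sent, not where crime occurs, and the allocation rule then treats that artefact as evidence.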

Meanwhile, generative AI threatens the workplace structure of entire industries, and challenges people’s ideas of proof, evidence and veracity. Using generative AI systems for text, such as ChatGPT, could undermine public understanding of science by driving the industrial-scale production of texts containing falsehoods and irrelevancies. Conversely, such systems could also level the playing field in international science by reducing or eliminating language barriers.

A kiosk in Accra, Ghana, selling money-transfer services for mobile phones. Credit: Ernest Ankomah/Bloomberg via Getty

Various groups have been trying to gain insights about the impacts of digital information technologies on society, but their efforts have often been stymied by threats of legal action or actual lawsuits. Our own experiences, and those of many others working in this area, suggest to us that technology companies are increasingly using various tactics to hamper external scientific study and influence public discourse.

Access to data about even basic quantitative patterns of use and user behaviour is often tightly restricted by technology platforms. In 2021, Meta, the owner of Facebook, sent a cease-and-desist notice to researchers at New York University after they had created a browser extension to gather data on targeted advertising on the platform. From conversations with colleagues, we know that others have since been dissuaded from conducting this type of work.

Technology corporations are also selective in what they publish from their own research teams. For example, with external collaborators, Meta has published findings on the benefits of Facebook when people are grieving4, and the platform’s tendency to encourage altruism5. Yet we hear little about Meta’s own research on potential harms. Documents made available to the US Securities and Exchange Commission in 2021 reveal that Facebook’s ranking algorithm scored emoji reactions as five times more valuable than ‘likes’ for three years — even while internal data showed that posts that sparked the angry emoji were more likely to include potentially harmful and false content, such as incorrect claims about vaccines.
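To see how such a weighting shapes what gets ranked highest, consider a toy scoring function. This is our own reconstruction from the reported five-to-one weighting, not Meta’s actual ranking code, and the post figures are invented:

```python
# Toy engagement score reconstructing the reported weighting: emoji
# reactions counted five times as much as 'likes'. Illustrative only.
REACTION_WEIGHT = 5
LIKE_WEIGHT = 1

def engagement_score(likes: int, emoji_reactions: int) -> int:
    return LIKE_WEIGHT * likes + REACTION_WEIGHT * emoji_reactions

calm_post = engagement_score(likes=500, emoji_reactions=20)    # 600
angry_post = engagement_score(likes=100, emoji_reactions=150)  # 850

print(calm_post, angry_post)
# The post that provokes reactions (often anger) outranks the one that is
# merely liked, despite receiving far fewer interactions in total.
```

Under such a rule, content that provokes strong reactions is systematically promoted over content that is simply approved of.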

We think that as awareness of the issues around potentially harmful and misleading content has grown, companies have steered research agendas towards putting responsibility for problems on individual users. Twitter, Facebook and Jigsaw (a think tank within Google) have cooperated extensively with academic researchers working on warning labels for misleading content, for instance. In the absence of other interventions, this approach essentially places the burden of stopping the spread of misinformation on the user. (Research so far suggests that although these labels raise awareness and can reduce sharing by individuals, when used in isolation they are unlikely to make a significant dent in the spread of false information overall6.)
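A back-of-envelope branching-process calculation helps to explain this gap between individual and population effects (a sketch; every number here is an illustrative assumption, not a measured value):

```python
# Branching model with hypothetical numbers: each share of a false post
# reaches `contacts` people, each of whom reshares it with probability p.
# The post keeps spreading while R = contacts * p > 1.
contacts = 40             # people reached per share (assumed)
p_share = 0.05            # baseline resharing probability (assumed)
label_effect = 0.25       # labels cut individual resharing by 25% (assumed)

r_before = contacts * p_share                       # 2.0: exponential growth
r_after = contacts * p_share * (1 - label_effect)   # 1.5: still above 1

print(r_before, r_after)
# The label measurably reduces sharing by individuals, yet R stays above 1,
# so the post still spreads widely. Labels alone do not cross the threshold.
```

Unless an intervention pushes the effective reproduction number below one, a measurable per-user reduction still leaves the overall cascade largely intact.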

What is not interrogated — or reported publicly — is how the core capabilities of a platform result in harmful outcomes, or how design choices intended to increase engagement and revenue also exacerbate the spread of inflammatory, sensationalized, false and even violence-inciting content.

Most companies require a specified project description before they will share any data, which effectively gives corporations full editorial discretion. In combination with companies providing limited information about platform design, the net result is that the data that independent investigators need to measure and mitigate harms are generally unavailable.

A consolidated approach

Since 1988, the IPCC has brought together international experts to improve understanding of the causes and impacts of greenhouse-gas emissions, and what might be done to manage them. Likewise, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), established in 2012, has improved knowledge and raised awareness about the world’s failing ecosystems and their importance to human development and health. These bodies are tasked with consolidating existing information, collecting more data as needed, synthesizing that knowledge and sharing it with decision makers. Although the bodies do not have regulatory authority, their global assessment reports increase awareness and enable evidence-based policy.

As a society, we are failing to adequately address the use of digital information technologies as a global challenge that affects nearly every aspect of modern life. Like climate change and ecosystem degradation, the use of these technologies has difficult-to-predict consequences that span generations and continents.

So far, efforts to better manage online information ecosystems have largely involved implementing guard rails. The US Blueprint for an AI Bill of Rights, for instance, promises to provide individuals with options for privacy, and freedom from harm caused by AI, but it is vague on how harm could be reliably assessed and averted. A first step towards proper stewardship is creating an infrastructure that can consolidate and summarize the state of knowledge on the potential societal impacts of digital communications technologies, in a format that is digestible and accessible for policymakers around the world.

An Intergovernmental Panel on Information Technology, including experts in policy, law, physical and social sciences, engineering, the humanities, government and ethics, offers the best possibility for achieving this. As with the IPCC and the IPBES, the goal would not be to establish international consensus on how to manage the digital world, or to hand down regulatory recommendations — but to provide a knowledge base to underpin the decisions of actors such as governments, humanitarian groups and even businesses.

Such an organization will face challenges that are distinct from those of the IPCC and the IPBES. Climate change and ecosystem degradation are problems characterized by rich data, comparatively well understood causes and consequences, and measurable and obvious economic harm in the long term. As such, the IPCC and the IPBES can base their reports on shared assumptions regarding food security, natural disasters and sustainability.

By contrast, most researchers investigating the impacts of digital technologies are severely limited in their access to data about the systems that they study. Researchers are also confronting a rapidly moving target: many firms, from social-media platforms and search engines to ride-sharing services and news outlets, are constantly running suites of A/B tests on their users — adjusting some feature in the interface, and assessing what effect this has on user behaviour.
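Each such experiment typically follows the same basic pattern, sketched below. This is a generic illustration with assumed effect sizes, not any firm’s actual pipeline: users are randomly assigned to an interface variant, and a behavioural metric is compared between groups.

```python
import random
import statistics

random.seed(0)

def session_minutes(variant: str) -> float:
    """Simulated minutes a user spends per session (assumed effect size)."""
    base = random.gauss(mu=12.0, sigma=4.0)
    lift = 1.5 if variant == "B" else 0.0   # variant B nudges engagement up
    return max(0.0, base + lift)

# Randomly assign 10,000 simulated users to interface variant A or B.
minutes = {"A": [], "B": []}
for _ in range(10_000):
    variant = random.choice("AB")
    minutes[variant].append(session_minutes(variant))

print({v: round(statistics.mean(m), 2) for v, m in minutes.items()})
# The higher-engagement variant ships. The cumulative effect on users of
# many such tests is rarely examined or disclosed.
```

Each individual test is mundane; it is the continual accumulation of such changes, optimized for engagement and invisible to outside observers, that makes the systems so hard to study.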

An intergovernmental panel, representing the interests of UN member states, could identify where current levels of transparency are not generating sufficient insight. It could also galvanize research and motivate regulators to enact policy that engenders increased transparency, accountability and auditing. Although countries such as China, Russia and the United States might not agree on how various platforms and services should be deployed or constrained, the consequences of the digital world play out across international borders, so any hope of negotiation between nations requires a clearer picture of what is happening and why, and what policy responses are available.

Working with ethicists and human-rights organizations, such a panel would be guided by shared goals, such as those enshrined in international human-rights treaties and norms. It would also be guided by emerging formulations of people’s rights in the face of a fast-changing digital environment — such as rights to meaningful privacy and consent, a healthier information ecology and better safety online. With this framing, an Intergovernmental Panel on Information Technology could, for example, agree to gather information on the prevalence and economic consequences of fraud or the effects of social media on adolescent mental health. It could gather reliable indicators of illegitimate interference or manipulation in elections, or assess the downstream effects of unregulated financial dynamics.

Progress will require nations to negotiate. In many cases, progress might be at odds with the short-term interests of some of the world’s largest corporations — and with some of its most powerful governments. But this is nothing new for an intergovernmental organization. Certainly, the problems emerging from the online information ecosystem will not be fixed at national scales, within distinct academic disciplines or through a network of academic and non-profit institutions with little power and conflicting incentives.

The ongoing transformation of the digital public sphere is characterized by risks, benefits and trade-offs. The COVID-19 pandemic has demonstrated the power of the online information ecosystem when it comes to mobilizing thousands of researchers across all of science to solve a societal problem. But as with other historical transformations, such as those brought by the Industrial Revolution, management and adaptation are key to ensuring that digital communication technologies promote beneficial dynamics.
