May 29, 2024
Will Biden’s Meetings with A.I. Companies Make Any Difference?

On Friday, the Biden Administration announced that seven leading American artificial-intelligence companies had agreed to put some voluntary guardrails around their products. Google, Microsoft, Meta, Amazon, OpenAI, Anthropic, and Inflection pledged to ensure that their products meet safety requirements before they are released to the public; to engage outside experts to test their systems and report any vulnerabilities; and to develop technical mechanisms to let users know when they are looking at A.I.-generated content, likely through some kind of watermarking system. They also said that they were committed to investigating and mitigating the societal risks posed by A.I. systems, including “harmful” algorithmic bias and privacy breaches. There are three ways to greet the announcement: with hope that it could protect people from the most dangerous aspects of A.I., with skepticism that it will, or with cynicism that it is a ploy by Big Tech to avoid governmental regulation of real consequence.

The deal was the latest effort by the White House to use what limited power it has to rein in A.I. Over the past ten months, the Administration has issued a Blueprint for an A.I. Bill of Rights, an Executive Order to root out bias in technology, including artificial intelligence, and an updated National Artificial Intelligence Research and Development Strategic Plan, all of which are well-considered but largely aspirational. In that time, OpenAI released ChatGPT, its game-changing chatbot, which is capable of answering queries with striking fluency, and of writing code; Google released Bard, its own impressive chatbot; Microsoft added ChatGPT to its search engine, Bing, and is integrating it into a number of its popular products; Meta, the owner of Facebook, débuted a large language model called LLaMA; and both OpenAI and the startup Stability AI introduced platforms that can generate images from text prompts.

A.I.’s rapidly evolving skills and capacities have engendered a collective global freak-out about what might be coming next: an A.I. that supplants us at work; an A.I. that is smarter and more intellectually agile than we are; an insensible A.I. that annihilates human civilization. Sam Altman, the C.E.O. of OpenAI, warned Congress, “If this technology goes wrong, it can go quite wrong,” and called on lawmakers to regulate it.

They appear to be trying. In January, Congressman Ted Lieu, a California Democrat with a degree in computer science, introduced a nonbinding measure urging House members to regulate A.I., which he generated using ChatGPT. In June alone, members of Congress introduced three bills addressing different aspects of A.I., all with bipartisan support. One would require the U.S. to inform users when they are interacting with artificial intelligence in government communications and establish an appeals process to challenge A.I.-mediated decisions, while another proposes to hold social-media companies responsible for spreading harmful material created by artificial intelligence by denying them protection under Section 230, the part of the Communications Decency Act that immunizes tech platforms from liability for what they publish. Lieu joined with colleagues on both sides of the aisle to propose the creation of a twenty-person, bipartisan commission to review, recommend, and establish regulations for A.I.

As those proposals wend their way through the legislative process, Senate Majority Leader Chuck Schumer has taken a more circumspect tack. In a recent address at the Center for Strategic and International Studies, he outlined a plan to bring lawmakers up to speed on emerging technology by convening at least nine panels of experts to give them a crash course in A.I. and help them craft informed legislation. (One assumes that he aims, in part, to avoid the embarrassing ignorance on display over the years when members of Congress discussed regulating social media.) It is questionable, though, whether Schumer’s good intentions will deliver substantial results before the next election.

Meanwhile, the E.U. is moving with greater dispatch. In May, the European Parliament’s comprehensive Artificial Intelligence Act moved out of committee and the entire Parliament voted to advance its version of the bill to the Council of the European Union, which will determine its final details. If all goes according to plan, it is expected to be enacted by the end of the year. Among its provisions are a prohibition on the use of facial recognition and a requirement that chatbot creators disclose the copyrighted material used to train their models. (This has become a matter of contention in the United States; recently, the comedian Sarah Silverman joined plaintiffs in a class-action suit accusing OpenAI and Meta of violating copyright by using their written work without permission.) In its broadest strokes, the act will prohibit the use of artificial-intelligence technologies that pose an “unacceptable level of risk to people’s safety.” Assuming that it passes, it will become the world’s first comprehensive legal framework for A.I.

With nothing comparable on the docket in this country, it seemed likely that U.S. tech companies would continue to advance their products unabated, come what may. That’s why last Friday’s news was significant. Here were Google, Microsoft, Meta, Amazon, OpenAI, Anthropic, and Inflection, seeming to concede that they might not be able to control their A.I. platforms on their own. In a blog post published the day of the announcement, Brad Smith, the president of Microsoft, summarized the goal of the agreement in three words: “safe, secure, trustworthy.”

But there is a paradox: all of the signatories have already released generative-A.I. systems, so it is hard to imagine how they plan to keep the public safe from the dangers these already pose, such as writing malicious code or spreading noxious misinformation; the agreement makes no mention of removing these products from the market until they have been vetted by experts. (It is also not clear who those experts will be, how they will be chosen, whether the same experts will be tasked with examining all the systems, and by what measure they will determine risk.) Just days before the White House announcement, Meta released an open-source version of its large language model, LLaMA 2. The company said it is available free of charge for research and commercial use, which means that once it is in the wild, Meta may not be able to control who has access to it. According to Dame Wendy Hall, the Regius Professor of Computer Science at the University of Southampton, speaking on British television, open-sourcing A.I. is “a bit like giving people a template to build a nuclear bomb.”

The companies’ commitment to watermarking the material generated by their A.I. products is a welcome and necessary safety provision. As Anne Neuberger, Deputy National Security Advisor for Cyber and Emerging Technologies, told me, this “will reduce the dangers of fraud and deception as information spreads, since there will now be ways to trace generated content back to its source.” But it is not straightforward. According to Sam Gregory, the executive director of the nonprofit organization Witness, which uses technology to protect human rights, “part of the challenge is that there is no shared definition of watermarking,” and many watermarks “can generally be easily cropped out.” (They could also be forged or manipulated.) Gregory mentioned a more informative approach, advanced by Microsoft, that would create a detailed metadata trail that reflects the history of a given image. He argued that disclosures should not reveal the people who use A.I. tools, however. “Obliging or nudging people to confirm media origins seems promising until you place this approach in a global context of privacy risks, dissident voices, and authoritarian laws that target free speech.”
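To make the distinction concrete, here is a minimal, hypothetical sketch, in Python, of the kind of “metadata trail” Gregory describes: a generator attaches a signed record tying an image to its origin, and anyone can later check whether the file still matches that record. The record format, key handling, and function names here are invented for illustration; real provenance systems, such as the industry’s content-credentials standards, are far more elaborate.

```python
import hashlib
import hmac
import json

# In practice a provenance scheme would use an asymmetric key pair; a shared
# secret is used here only to keep the sketch short.
SIGNING_KEY = b"demo-key-held-by-the-generator"


def make_provenance_record(image_bytes: bytes, generator: str, model: str) -> dict:
    """Build a record tying the image's hash to a claim about how it was made."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": generator,
        "model": model,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(image_bytes: bytes, record: dict) -> bool:
    """Check that the file still matches the record and the record is unaltered."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(expected, record.get("signature", ""))
    file_ok = claimed.get("sha256") == hashlib.sha256(image_bytes).hexdigest()
    return signature_ok and file_ok


if __name__ == "__main__":
    image = b"...stand-in bytes for a generated image..."
    record = make_provenance_record(image, generator="ExampleLab", model="image-gen-v1")
    print(verify_provenance(image, record))            # True: the trail is intact
    print(verify_provenance(image + b"edit", record))  # False: the file was altered
```

The sketch also makes Gregory’s caveat visible: the record travels with the file only by convention, and it can be stripped or detached unless platforms actually check for it.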

Perhaps the most controversial part of the brokered agreement is what the companies have, for the most part, agreed not to share: the numerical parameters known as the “weights” of their models. These values, which are learned during training, determine how a model turns a prompt into a response. Some argue that secrecy in this area is a good thing: Neuberger told me that “notwithstanding open-sourcing,” shielding model weights from public view will make it harder for bad actors to “steal state-of-the-art models and fine-tune them to be better at generating malware, or other A.I.-driven approaches to novel cyberattacks.” But Kit Walsh, a senior staff attorney at the Electronic Frontier Foundation, pointed out that withholding key information about how the models are constructed undermines transparency and could conceal sources of bias. “Keeping the details of AI technologies secret is likely to thwart good-faith researchers trying to protect the public interest, as well as competition, and open science,” she wrote in an e-mail. “This is especially true as AI is deployed to make decisions about housing, employment, access to medicine, criminal punishment, and a multitude of other applications where public accountability is essential.”
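For readers who have not encountered the term, the following toy sketch, in Python, shows what “weights” mean in the most literal sense: numbers learned from training data that the model then consults to produce output. The example is invented for illustration and bears no resemblance to a production system, but it makes the stakes of the dispute concrete: whoever holds the learned numbers can both run the model and keep training it.

```python
import random
from collections import defaultdict


def train(corpus: str) -> dict:
    """Learn the weights: how often each word follows each other word."""
    weights = defaultdict(lambda: defaultdict(int))
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        weights[prev][nxt] += 1
    return weights


def generate(weights: dict, start: str, length: int = 10) -> str:
    """Use the learned weights to produce a response, one word at a time."""
    out = [start]
    for _ in range(length):
        followers = weights.get(out[-1])
        if not followers:
            break
        words, counts = zip(*followers.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)


if __name__ == "__main__":
    # "Training" produces the weights; whoever has them can generate text.
    w = train("the model learns the weights and the weights shape the answers")
    print(generate(w, "the"))

    # "Fine-tuning" is just more training layered onto the same weights.
    extra = "the weights can be misused".split()
    for prev, nxt in zip(extra, extra[1:]):
        w[prev][nxt] += 1
    print(generate(w, "the"))
```

Fine-tuning, in this picture, is simply more training on top of weights that someone else has already learned, which is why access to them is the crux of both Neuberger’s and Walsh’s arguments.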

Compliance with the agreement is voluntary, and there is no enforcement mechanism to hold these seven—or any other—companies to account. It could represent a small step on what is likely to be a long and twisting road to consequential government regulation. It could also be a way for Big Tech to write its own rules. In May, Altman criticized the E.U.’s proposed A.I. regulations in front of a British audience. He warned that if they are too strict, there’s a chance that OpenAI could cease operating in Europe. His remarks were a glimpse of the obstacles ahead. A company that doesn’t like the rules could threaten to pack up and leave. Then what? ♦
