Ethicists fire back at ‘AI Pause’ letter they say ‘ignores actual harm’

By Webdesk


A group of well-known AI ethicists has penned a counterpoint to this week’s controversial letter calling for a six-month “pause” on AI development, criticizing it for focusing on hypothetical future threats while the real harm is attributable to misuse of the technology today.

Thousands of people, including household names such as Steve Wozniak and Elon Musk, signed the Future of Life Institute’s open letter earlier this week, which proposed suspending development of AI models such as GPT-4 in order to avoid “loss of control of our civilization,” among other threats.

Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell are all major figures in AI and ethics, known (in addition to their work) for being pushed out of Google over a paper criticizing the capabilities of AI. They are currently working together at the DAIR Institute, a new research group focused on studying, exposing, and preventing AI-related harms.

But they were not among the signatories, and they have now published a rebuke pointing out the letter’s failure to engage with the existing problems the technology is already causing.

“Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,” they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures, and the further concentration of those power structures in fewer hands.

Choosing to worry about a Terminator- or Matrix-style robot apocalypse is a red herring when, at the same time, we have reports of companies like Clearview AI being used by police to essentially frame an innocent man. A T-1000 isn’t necessary when there are Ring cams on every front door, accessible via online rubber-stamp warrant factories.

While the DAIR crew agrees with some of the letter’s aims, such as identifying synthetic media, they emphasize that action must be taken now, against today’s problems, with the remedies already at our disposal:

What we need is regulation that enforces transparency. Not only should it always be clear when we encounter synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures. The responsibility for creating tools that are safe to use should lie with the companies that build and implement generative systems, meaning that the builders of these systems should be held accountable for the output of their products.

The current race towards ever-expanding “AI experiments” is not a predetermined path where our only choice is how fast to run, but rather a series of decisions driven by the profit motive. Companies’ actions and choices must be shaped by regulations that protect people’s rights and interests.

It is indeed time to act: but the focus of our concern should not be on imaginary ‘powerful digital minds’. Instead, we should focus on the very real and very present exploitative practices of the companies that claim to build them, which are rapidly centralizing power and widening social inequality.

Incidentally, this letter echoes a sentiment I heard from Uncharted Power founder Jessica Matthews yesterday at the AfroTech event in Seattle: “You shouldn’t be afraid of AI. You have to be afraid of the people who build it.” (Her solution: become the people who build it.)

While it is vanishingly unlikely that any major company would agree to pause its research efforts in accordance with the open letter, it is clear, judging by the engagement the letter received, that the risks of AI – real and hypothetical – are of great concern across many segments of society. But if those companies won’t act, perhaps someone will have to do it for them.


