AI think tank calls GPT-4 a risk to public safety
APR 13, 2023

An AI think tank has filed a complaint with the FTC in a bid to stop OpenAI from further commercial deployments of GPT-4.


The Center for Artificial Intelligence and Digital Policy (CAIDP) claims OpenAI has violated section five of the FTC Act, accusing the company of deceptive and unfair practices.


Marc Rotenberg, Founder and President of the CAIDP, said:


“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4.


We are specifically asking the FTC to determine whether the company has complied with the guidance the federal agency has issued.”


The CAIDP claims that OpenAI’s GPT-4 is “biased, deceptive, and a risk to privacy and public safety”.


The think tank cited passages in the GPT-4 System Card that describe the model’s potential to reinforce biases and worldviews, including harmful stereotypes and demeaning associations for certain marginalised groups.


In the aforementioned System Card, OpenAI acknowledges that it “found that the model has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.”


Furthermore, the document states: “AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”


Other harmful outcomes that OpenAI says GPT-4 could lead to include:


Advice or encouragement for self-harm behaviours


Graphic material such as erotic or violent content


Harassing, demeaning, and hateful content


Content useful for planning attacks or violence


Instructions for finding illegal content


The CAIDP claims that OpenAI released GPT-4 to the public without an independent assessment of its risks.


Last week, the FTC told American companies advertising AI products:


“Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors.


Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.”


With its filing, the CAIDP calls on the FTC to investigate the products of OpenAI and other operators of powerful AI systems, prevent further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.
