Intervention:

Require Labels on AI-Created Content

Definition: Enact legislation mandating the clear, prominent disclosure of AI-generated content.
Kind of Intervention:
Humans Only
Reversible:
Challenging to Rollout
Suitability:
General
Technical Difficulty:
Challenging
Legislative Target:
Yes

To be effective, this would have to be:

  1. Particularized: Rather than a one-time banner on a site saying "This site may contain AI-generated material," require a fine-grained assessment of which material is AI-generated. This could be as narrow as identifying which individual characters were created by an AI, something that could be achieved relatively easily with an extension of the Unicode standard.
  2. Legally Enforced: There are too many incentives, for individual content creators and organizations alike, to strip these labels for compliance to be purely voluntary.
  3. Assisted by Automation: Watermarking strategies on the AI side could help, as could automated crawlers that look for machine-generated content that isn't labeled. While these approaches would be imperfect, they would discourage undisclosed use.
  4. Tied into Platform Controls: Users should be able to express a preference such as "I don't want to see anything that was more than 50% made by a computer."
  5. Cross-Medium: Whether AI is used over phone lines, in chat messages, on social media, in banner advertisements, or in printed works, each medium should require clear and conspicuous disclosure.
  6. Commonly Readable Format: For features like search to build on these labels, there would need to be universal technical standards for recording the provenance of media and content, and those standards would need to be widely adhered to.
  7. Without Exception: Legislation on this front would face wave after wave of criticism from individual industries, each insisting that it deserves a carve-out. For this intervention to have any impact, it would have to apply without exception.
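To make the "particularized" requirement in item 1 concrete, here is a minimal sketch of character-level provenance tracking. The `ProvenanceSpan` type and the labeling scheme are invented for illustration; no such standard currently exists.

```python
from dataclasses import dataclass

@dataclass
class ProvenanceSpan:
    start: int   # inclusive character offset
    end: int     # exclusive character offset
    source: str  # e.g. "human" or "ai"

def label_fractions(text: str, spans: list) -> dict:
    """Return the fraction of characters attributed to each source."""
    counts: dict = {}
    for s in spans:
        counts[s.source] = counts.get(s.source, 0) + (s.end - s.start)
    return {src: n / len(text) for src, n in counts.items()}

# A document whose first 35 characters were machine-drafted,
# with the remainder edited in by a person.
text = "The report was drafted by a model and edited by a person."
spans = [ProvenanceSpan(0, 35, "ai"), ProvenanceSpan(35, len(text), "human")]
print(label_fractions(text, spans))
```

An actual Unicode-level mechanism would carry this metadata in the text itself rather than in a sidecar structure, but the per-character bookkeeping would be equivalent.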
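The platform control in item 4 reduces to a simple threshold filter once per-item AI fractions exist. The `ai_fraction` field below is a hypothetical label a platform might attach, not an existing API.

```python
def visible_items(items: list, max_ai_fraction: float) -> list:
    """Filter a feed to items at or below the user's AI-content threshold.

    `items` is a list of dicts carrying a hypothetical `ai_fraction`
    field (0.0-1.0) derived from fine-grained provenance labels.
    """
    return [it for it in items if it["ai_fraction"] <= max_ai_fraction]

feed = [
    {"id": 1, "ai_fraction": 0.0},   # fully human-made
    {"id": 2, "ai_fraction": 0.8},   # mostly machine-made
    {"id": 3, "ai_fraction": 0.4},   # mixed
]
# A user preference of "no more than 50% made by a computer":
print(visible_items(feed, 0.5))
```

The filtering itself is trivial; the hard part, as the surrounding items argue, is getting trustworthy `ai_fraction` values in the first place.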