Description: Text-to-image models trained on the LAION-5B dataset, such as Stable Diffusion and Imagen, were able to regurgitate private medical record photos that had been used as training data without consent or recourse for removal.
Entities
Alleged: Stability AI, Google, and LAION developed an AI system deployed by Stability AI and Google, which harmed people having medical photos online.
Incident Stats
Incident ID
465
Report Count
1
Incident Date
2022-03-03
Editors
Khoa Lam
Applied Taxonomies
CSETv1_Annotator-1 Taxonomy Classifications
Taxonomy Details
Incident Number
The number of the incident in the AI Incident Database.
465
Incident Reports
Reports Timeline
arstechnica.com · 2022
Late last week, a California-based AI artist who goes by the name Lapine discovered private medical record photos taken by her doctor in 2013 referenced in the LAION-5B image set, which is a scrape of publicly available images on the web. A…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
Similar Incidents
Sexist and Racist Google Adsense Advertisements
· 27 reports
All Image Captions Produced are Violent
· 28 reports
AI-Designed Phone Cases Are Unexpected
· 7 reports