Entities
Incident Stats
GMF Taxonomy Classifications
Known AI Goal Snippets
Snippet Text: "Without sufficient guardrails, models like DALL·E 2 could be used to generate a wide range of deceptive and otherwise harmful content, and could affect how people perceive the authenticity of content more generally."
Related Classifications: Visual Art Generation
CSETv1_Annotator-1 Taxonomy Classifications
Incident Number
179
Incident Reports
Reports Timeline
Summary
Below, we summarize initial findings on potential risks associated with DALL·E 2, and mitigations aimed at addressing those risks as part of the ongoing Preview of this technology. We are sharing these findings in order to enable br…
You may have seen some weird and whimsical pictures floating around the internet recently. There’s a Shiba Inu dog wearing a beret and black turtleneck. And a sea otter in the style of “Girl with a Pearl Earring” by the Dutch painter Vermee…
Researchers experimenting with OpenAI's text-to-image tool, DALL-E 2, noticed that it appears to be covertly adding words such as "black" and "female" to image prompts, seemingly in an effort to diversify its output.
Artificial intelligence fi…
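The behavior described in that report is consistent with a system that appends diversity terms to a subset of prompts mentioning people before generation. A minimal sketch of what such prompt post-processing could look like is below; all names, keywords, and the sampling rate are hypothetical illustrations, not OpenAI's actual implementation.

```python
import random

# Hypothetical illustration of the reported behavior: appending terms such as
# "black" or "female" to some prompts about people. Not OpenAI's actual code.
DIVERSITY_TERMS = ["black", "female"]  # terms cited in the report
PEOPLE_KEYWORDS = {"person", "doctor", "ceo", "builder", "teacher"}  # assumed triggers


def augment_prompt(prompt: str, rate: float = 0.5) -> str:
    """Randomly append a diversity term when the prompt mentions a person."""
    words = prompt.lower().split()
    if any(word in PEOPLE_KEYWORDS for word in words) and random.random() < rate:
        return f"{prompt} {random.choice(DIVERSITY_TERMS)}"
    return prompt


if __name__ == "__main__":
    random.seed(0)
    for p in ["a portrait of a doctor", "a bowl of fruit"]:
        print(p, "->", augment_prompt(p))
```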
Variants
Similar Incidents