Description: Nabla, a French digital care company, tested GPT-3’s capabilities for medical documentation, diagnosis support, and treatment recommendation, and found that its inconsistency and lack of scientific and medical expertise made it unviable and risky for healthcare applications. This incident has been downgraded to an issue because it does not meet current ingestion criteria.
GMF Taxonomy Classifications
Known AI Goal Snippets
One or more snippets that justify the classification.
Snippet Text: "Our unique multidisciplinary team of doctors and machine learning engineers at Nabla had the chance to test this new model to tease apart what’s real and what’s hype by exploring different healthcare use cases."
Related Classifications: Question Answering
Incident Reports
Reports Timeline
github.com · 2022
The following former incidents have been converted to "issues" following an update to the incident definition and ingestion criteria.
21: Tougher Turing Test Exposes Chatbots’ Stupidity
Description: The 2016 Winograd Schema Challenge highli…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
Similar Incidents
- All Image Captions Produced are Violent · 28 reports
- Wikipedia Vandalism Prevention Bot Loop · 6 reports