A team of scientists from the University of Science and Technology of China and Tencent's YouTu Lab has developed a tool to combat the issue of "hallucination" in artificial intelligence (AI) models. Hallucination occurs when an AI model confidently generates outputs that do not appear to be grounded in the information present in its training data. The problem is particularly evident in large language models (LLMs) such as OpenAI's ChatGPT and Anthropic's Claude.
The researchers have named their tool "Woodpecker," and they claim it can effectively correct hallucinations in multimodal large language models (MLLMs). MLLMs, such as GPT-4 and GPT-4V, combine additional modalities, such as vision, with text-based language modeling.
Woodpecker uses a combination of three separate AI models that act as evaluators to identify hallucinations in the MLLM being corrected: GPT-3.5 Turbo, Grounding DINO and BLIP-2-FlanT5. Correction then proceeds in five stages: key concept extraction, question formulation, visual knowledge validation, visual claim generation and, finally, hallucination correction.
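To make those stages concrete, the following Python sketch shows how such a pipeline could chain together. The function names, stub outputs and the assignment of roles to the three evaluator models are illustrative assumptions based on the article's description, not the researchers' actual code.

```python
# A minimal sketch of Woodpecker's five-stage pipeline. Function names,
# stub outputs and the role assigned to each evaluator model are
# illustrative assumptions, not the researchers' implementation.

def extract_key_concepts(response: str) -> list[str]:
    # Stage 1: an LLM (GPT-3.5 Turbo per the article) pulls the main
    # objects mentioned in the MLLM's response, e.g. "man", "dog".
    return ["man", "dog"]  # placeholder for an LLM call

def formulate_questions(concepts: list[str]) -> list[str]:
    # Stage 2: turn each concept into verification questions.
    return [f"How many {c}s are in the image?" for c in concepts]

def validate_visual_knowledge(image, questions: list[str]) -> dict[str, str]:
    # Stage 3: answer the questions against the image itself -- e.g. an
    # open-set detector (Grounding DINO) for counting/locating objects
    # and a VQA model (BLIP-2-FlanT5) for attributes (roles assumed).
    return {q: "1" for q in questions}  # placeholder for model calls

def generate_visual_claims(answers: dict[str, str]) -> list[str]:
    # Stage 4: convert question-answer pairs into declarative claims
    # that serve as grounded evidence for the final rewrite.
    return [f"{q} Answer: {a}" for q, a in answers.items()]

def correct_hallucinations(response: str, claims: list[str]) -> str:
    # Stage 5: the LLM rewrites the original response so that it is
    # consistent with the visual claims.
    return response  # placeholder: would return the corrected text

def woodpecker_pipeline(image, response: str) -> str:
    concepts = extract_key_concepts(response)
    questions = formulate_questions(concepts)
    answers = validate_visual_knowledge(image, questions)
    claims = generate_visual_claims(answers)
    return correct_hallucinations(response, claims)

if __name__ == "__main__":
    # Toy run with a dummy image handle and a possibly hallucinated caption.
    print(woodpecker_pipeline(image=None, response="A man walking two dogs."))
```

The notable design choice here is that the validation stage grounds each claim in the image itself via the detector and VQA models rather than relying on the language model alone, which is what lets the final stage rewrite the response against concrete visual evidence.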
The researchers claim that Woodpecker provides additional transparency and a significant improvement in accuracy over the baseline model. They evaluated several "off-the-shelf" MLLMs using their method and believe Woodpecker can be easily integrated into other MLLMs. The tool represents a step forward in addressing AI hallucination, which has been a notable obstacle in the development of large language models.
(TRISTAN GREENE, COINTELEGRAPH, 2023)