Using a 7B Model + RAG to Identify and Edit Word-level Hallucinations
Finetuning a 7B model to outperform GPT-4 for hallucination detection.
January 20, 2024 · 2 min read
Provides a taxonomy of different types of hallucinations.
32k context length retrieval models with a sub-quadratic attention mechanism.
Using persuasion to jailbreak LLMs.
Presents a 7B parameter embedding model.
Text-to-image diffusion models learn and use a secret language.