Quick Answer
Yes, all AI models hallucinate. ChatGPT has improved but still fabricates facts, especially on obscure topics.
Detailed Answer
ChatGPT can confidently state false information, cite non-existent sources, or invent statistics. GPT-5 reduced hallucination rates significantly, but the problem persists for niche topics, recent events, and specific numbers. Always verify critical facts. Tools that ground answers in sources, such as Perplexity, or that compare multiple AI responses side by side, such as Council, help catch hallucinations before they cause problems.
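The cross-checking idea above can be sketched in a few lines: ask several models the same question and flag disagreement as a signal to verify. This is a minimal illustration, not Council's actual method; the model names and canned responses below are hypothetical stand-ins for real API calls.

```python
from collections import Counter

# Hypothetical responses; in practice each would come from a separate model API call.
responses = {
    "model_a": "The Eiffel Tower is 330 metres tall.",
    "model_b": "The Eiffel Tower is 330 metres tall.",
    "model_c": "The Eiffel Tower is 424 metres tall.",
}

def majority_answer(responses):
    """Return the most common answer and whether every model gave it."""
    counts = Counter(responses.values())
    answer, votes = counts.most_common(1)[0]
    unanimous = votes == len(responses)
    return answer, unanimous

answer, unanimous = majority_answer(responses)
if not unanimous:
    print("Models disagree; verify against a primary source.")
```

Agreement between models is no guarantee of truth (they can share the same mistake), but disagreement is a cheap, reliable cue that a claim needs checking.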
Find Your Own Answer
The best way to judge for yourself is to try the models directly. Council lets you compare multiple AI responses to the same question.