2 Comments
Jeremy

I wonder whether it would be feasible for developers to dampen the tendency to hallucinate, even without major new breakthroughs.

Michael G Wagner

You can minimize hallucinations. One way is to have a second model monitor the output of the first, sort of an AI teacher-student relationship. :) Most reasoning models do this in one way or another.
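
To make the pattern concrete, here is a minimal Python sketch of that generator/verifier setup. The model names, prompt wording, YES/NO verdict format, and retry budget are all illustrative assumptions, not how any particular reasoning model actually implements it; the OpenAI SDK is used only as an example interface.

```python
# Sketch of a generator/verifier loop: one model drafts an answer,
# a second model checks it for unsupported claims before it is returned.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    # First model: produces the draft answer.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable generator model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def verify(prompt: str, answer: str) -> bool:
    # Second model (the "teacher"): flags unsupported claims in the draft.
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: a stronger model as the verifier
        messages=[{
            "role": "user",
            "content": (
                f"Question:\n{prompt}\n\nProposed answer:\n{answer}\n\n"
                "Does the answer contain claims not supported by the "
                "question or by common knowledge? Reply only YES or NO."
            ),
        }],
    )
    return resp.choices[0].message.content.strip().upper().startswith("NO")

def answer_with_check(prompt: str, max_retries: int = 2) -> str:
    # Regenerate until the verifier accepts, up to a small retry budget.
    for _ in range(max_retries + 1):
        draft = generate(prompt)
        if verify(prompt, draft):
            return draft
    return draft  # fall back to the last draft if nothing passes
```

This only reduces, rather than eliminates, hallucinations: the verifier can itself be wrong, so the retry budget trades extra cost and latency for a lower error rate.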
