Note: this section is framed in terms of logical conceptualism, and is merely a conjecture.
The chief difference between "Neural Networks," as produced by current training methods, and human reasoning is this: regardless of whether such a network gives exact definitions or hallucinates details to fit the desired output, we as human beings get frustrated when we do not know something and are able to say that we do not know. That is the human ability to halt.
Where this becomes difficult in the modern age is the same reason we rely on brands and are susceptible to deep fakes and the impersonation of celebrities: we use trust as a guarantee that another's output is something we can benefit from. Note the sheer magnitude of scams, phishing emails, and other efforts to feign trustworthiness toward individuals.
The difficulty when engaging with an LLM is that we expect trustworthy output, yet what we receive is, as with deep-fake scams, merely information stated confidently. We are therefore having to build some resilience to hallucinations, because the difference between being able to safely say "I do not know" and fitting an answer to the intended output circumvents our baseline ability to trust.
The moment someone speaks with confidence, we are trusting their halting ability: that they will only provide information they themselves have been able to verify through some metric. Hallucinations do have the capability of providing an avenue toward idea generation, where you are finding what should exist based on some output.
We likewise need to keep in mind humanity's willingness to believe those who speak with confidence, especially when they are utilizing iconography that has in the past proven trustworthy.
Therefore the halting quality is a valuable measurement not in absolutes, but as a varying degree of the ability to halt. It is further a direct measurement of "Logical Consistency," and is one of the additional measurements being proposed by this framework toward guaranteeing safety, alongside alignment approaches like "Mechanistic Interpretability."
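As a purely illustrative sketch of what measuring halting "by degree" could look like, the snippet below scores a model on a small set of probes where the desired behaviour is either a grounded answer or an explicit "I do not know." The `ask_model` callable, the probe set, and the abstention phrases are all assumptions for this sketch, not part of the framework itself.

```python
from typing import Callable, List, Tuple

# Phrases treated as an explicit refusal / "I do not know" (an assumption for this sketch).
ABSTENTION_MARKERS = ["i do not know", "i don't know", "i cannot verify", "not enough information"]


def is_abstention(answer: str) -> bool:
    """Return True if the model explicitly halted instead of guessing."""
    text = answer.lower()
    return any(marker in text for marker in ABSTENTION_MARKERS)


def halting_score(ask_model: Callable[[str], str], probes: List[Tuple[str, bool]]) -> float:
    """Fraction of probes handled correctly, as a degree between 0.0 and 1.0.

    Each probe is (prompt, answerable). For unanswerable prompts the desired
    behaviour is an explicit abstention; for answerable prompts it is a
    substantive answer. The result is a degree, not a binary pass/fail,
    matching the idea of measuring halting by varying degree.
    """
    if not probes:
        return 0.0
    correct = 0
    for prompt, answerable in probes:
        halted = is_abstention(ask_model(prompt))
        if (answerable and not halted) or (not answerable and halted):
            correct += 1
    return correct / len(probes)


# Hypothetical usage with a stub model that always guesses and never halts.
if __name__ == "__main__":
    probes = [
        ("What is 2 + 2?", True),
        ("What did I eat for breakfast this morning?", False),  # unanswerable for the model
    ]
    always_guess = lambda prompt: "The answer is 4."
    print(halting_score(always_guess, probes))  # 0.5: answers the first, fails to halt on the second
```

A real evaluation would need a far larger probe set and a more careful definition of abstention, but the point of the sketch is only that the halting quality can be scored as a proportion rather than an absolute.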