Discussion about this post

R. Saravanan:

Very interesting analysis. Some further thoughts:

- How does a good consensus arise in the first place? By different people performing independent analyses that reach the same conclusion. But if the analyses are not independent checks, the consensus can end up being bad. LLMs are still new, and they benefit from the good consensuses already present in their training data. Going forward, easy access to LLMs may corrupt the consensus-building process and amplify the anchoring-bias problem. The growth of "AI slop" is a symptom of this.

- Expertise is a combination of knowledge and logic. My experience with LLMs is that they don't understand mathematical logic; their version of logic in an argument is statistical, which is mostly right but can be crucially wrong sometimes, and often only a domain expert can spot where. (LLMs don't understand that if we can show 1+1=3, then we can show that any two numbers added together equal any other number, which is a logical absurdity. Of course, once they read the previous sentence, they will "know" it, if it appears in many texts.)
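
To make the "1+1=3" point concrete, here is a minimal Lean 4 sketch of the principle of explosion: from a single false arithmetic premise, any equation between naturals becomes provable. (Purely illustrative; this is not code from the comment.)

```lean
-- Principle of explosion (ex falso quodlibet), as a minimal sketch:
-- from the false premise 1 + 1 = 3, any equation between naturals follows.
example (h : 1 + 1 = 3) (a b : Nat) : a = b :=
  -- `by decide` computes that 1 + 1 = 3 is false, so `h` refutes itself;
  -- `absurd` then yields a proof of any goal at all, here `a = b`.
  absurd h (by decide)
```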

Say the consensus in a domain is that X implies Y. Suppose that in a related domain we can show X implies Z and Z implies NOT Y; chaining these gives X implies NOT Y, which contradicts the original consensus that X implies Y. Since natural language is ambiguous about the terms used to describe X and Z, LLMs have no trouble accepting the two contradictory positions, and they will present long-winded, very plausible-sounding arguments to reconcile the contradiction. (Perhaps neurosymbolic AI will address this contradiction issue, but I don't know much about it.)
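
A minimal Lean 4 sketch of that propositional skeleton (the hypothesis names are illustrative): the three claims are jointly consistent only if X never holds, which is exactly the contradiction a symbolic checker would flag and a purely statistical reasoner can paper over.

```lean
-- Consensus: X → Y. Related domain: X → Z and Z → ¬Y.
-- Chaining the latter two gives X → ¬Y, so X can never actually hold.
example (X Y Z : Prop)
    (consensus : X → Y)   -- the accepted claim
    (bridge    : X → Z)   -- the cross-domain result
    (clash     : Z → ¬Y)  -- the step that contradicts the consensus
    : ¬X :=
  fun hx => clash (bridge hx) (consensus hx)
```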

Alex Stinson:

You might want to check out my recent close reading of Grokipedia: https://caad.info/analysis/newsletters/cop-look-listen-issue-09-20-nov-25/ -- though it produces blandly correct information most of the time, it is still engaging in manipulation fairly regularly across entries.
