"universal unstructured information processor that can simulate any intuitive procedure (reasoning) given sufficient resources and the proper context"
the "reasoning" part is called into question somewhat here:
https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
"despite sophisticated self-reflection mechanisms, these models fail to develop generalizable reasoning capabilities beyond certain complexity thresholds. We identified three distinct reasoning regimes: standard LLMs outperform LRMs at low complexity, LRMs excel at moderate complexity, and both collapse at high complexity."
at one's most cynical, one might be forgiven for walking away from this short paper with the conclusion that the current crop of large models amounts to a super-expensive, super-sophisticated autocomplete. too cynical??