Geoffrey Hinton says LLMs are no longer just predicting the next word: newer models learn by reasoning and by identifying contradictions in their own logic. He warns that this kind of unbounded self-improvement will "end up making it much smarter than us."
submitted by /u/MetaKnowing