Saturday, February 21

Reddit

I fact-checked the “AI Moats are Dead” Substack article. It was AI-generated and got its own facts wrong.

A Substack post by Farida Khalaf argues AI models have no moat, using the Clawbot/OpenClaw story as proof. The core thesis, that models are interchangeable commodities, is correct. I build on top of LLMs and have swapped models three times with minimal impact on results. But the article itself is clearly AI-generated, and it's full of errors that prove the opposite of what the author intended.

The video: The article includes a 7-second animated explainer. Pause it and you find Anthropic spelled as "Fathropic," Claude as "Clac#," OpenAI as "OpenAll," and a notepad reading "Cluly fol Slopball!" The article's own $300B valuation claim shows up as "$30B" in the video. There's no way the author watched this before publishing...

The timeline is fabricated: The article claims OpenAI "panic-shipped" ...
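The "swapped models three times with minimal impact" point is easy to see in code: if the application only talks to a thin, provider-agnostic interface, the model behind it is just a config choice. A minimal sketch of that pattern (the class and function names here are illustrative, not the author's actual stack; model identifiers are assumptions):

```python
from dataclasses import dataclass
from typing import Protocol

class ChatModel(Protocol):
    """Anything with a complete() method can back the app."""
    def complete(self, prompt: str) -> str: ...

@dataclass
class OpenAIModel:
    model: str = "gpt-4o-mini"  # assumed model name for illustration
    def complete(self, prompt: str) -> str:
        from openai import OpenAI
        resp = OpenAI().chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

@dataclass
class AnthropicModel:
    model: str = "claude-3-5-sonnet-latest"  # assumed model name
    def complete(self, prompt: str) -> str:
        import anthropic
        resp = anthropic.Anthropic().messages.create(
            model=self.model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

def summarize(model: ChatModel, text: str) -> str:
    # Application code sees only the protocol; swapping vendors
    # means constructing a different dataclass, nothing else changes.
    return model.complete(f"Summarize in one sentence:\n{text}")
```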
I built a free local AI image search app — find images by typing what's in them

Built Makimus-AI, a free, open-source app that lets you search your entire image library using natural language. Just type "girl in red dress" or "sunset on the beach" and it finds matching images instantly; it even works with image-to-image search. Runs fully locally on your GPU, with no internet needed after setup. [Makimus-AI on GitHub](https://github.com/Ubaida-M-Yusuf/Makimus-AI) I hope it will be useful.
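Text-to-image search like this is typically built on a joint text/image embedding model such as CLIP: every image is embedded once, and a query is answered by embedding the text and ranking images by cosine similarity. A minimal sketch of that general technique using the sentence-transformers CLIP wrapper (an illustration of the approach, not Makimus-AI's actual code; the "photos" folder is a placeholder):

```python
from pathlib import Path
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP maps images and text into the same vector space.
model = SentenceTransformer("clip-ViT-B-32")

# Index step: embed every image once (a real app would cache this).
paths = sorted(Path("photos").glob("*.jpg"))
img_emb = model.encode([Image.open(p) for p in paths], convert_to_tensor=True)

def search(query, k=5):
    # Works for text queries; pass a PIL Image instead of a string
    # to get image-to-image search from the same index.
    q_emb = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, img_emb, top_k=k)[0]
    return [(paths[h["corpus_id"]], h["score"]) for h in hits]

for path, score in search("girl in red dress"):
    print(f"{score:.3f}  {path}")
```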
Knowledge graph of the transformer paper lineage — from Attention Is All You Need to DPO, mapped as an interactive concept graph [generated from a CLI + 12 PDFs]

Wanted to understand how the core transformer papers actually connect at the concept level - not just "Paper B cites Paper A" but what specific methods, systems, and ideas flow between them.

I ran 12 foundational papers (Attention Is All You Need, BERT, GPT-2/3, Scaling Laws, ViT, LoRA, Chain-of-Thought, FlashAttention, InstructGPT, LLaMA, DPO) through https://github.com/juanceresa/sift-kg (an open-source CLI): point it at a folder of documents plus any LLM and get a knowledge graph. The result is a 435-entity knowledge graph with 593 relationships, for ~$0.72 in API calls (gpt-4o-mini).

Graph: https://juanceresa.github.io/sift-kg/transformers/graph.html - interactive, runs in the browser.

Some interesting structural patterns:

- GPT-2 is the most connected node - it's the hub everything flows through. BERT extends ...
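The general recipe behind document-to-knowledge-graph tools is: chunk the documents, ask an LLM to emit (subject, relation, object) triples as JSON, and merge the triples into a graph. A rough sketch of that generic pattern with networkx and the OpenAI SDK (this illustrates the technique, not sift-kg's actual pipeline; the prompt and JSON schema are made up for illustration):

```python
import json
import networkx as nx
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Extract knowledge-graph triples from the text below. "
    'Reply with JSON: {"triples": [{"subject": "...", '
    '"relation": "...", "object": "..."}]}\n\n'
)

def extract_triples(chunk: str) -> list[dict]:
    # JSON mode keeps the reply machine-parseable.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": PROMPT + chunk}],
    )
    return json.loads(resp.choices[0].message.content)["triples"]

def build_graph(chunks: list[str]) -> nx.MultiDiGraph:
    g = nx.MultiDiGraph()
    for chunk in chunks:
        for t in extract_triples(chunk):
            # Merging by surface name is the naive approach; real tools
            # also deduplicate and canonicalize entity names.
            g.add_edge(t["subject"], t["object"], relation=t["relation"])
    return g
```

Structural findings like "GPT-2 is the most connected node" then fall out of simple degree queries, e.g. `max(g.degree, key=lambda kv: kv[1])`.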
Machine learning helps solve a central problem of quantum chemistry

"By applying new methods of machine learning to quantum chemistry research, Heidelberg University scientists have made significant strides in computational chemistry. They have achieved a major breakthrough toward solving a decades-old dilemma in quantum chemistry: the precise and stable calculation of molecular energies and electron densities with a so-called orbital-free approach, which uses considerably less computational power and therefore permits calculations for very large molecules. [...] How electrons are distributed in a molecule determines its chemical properties—from its stability and reactivity to its biological effect. Reliably calculating this electron distribution and the resulting energy is one of the central functions of quantum chemistry. These calculations form th...
Machine learning algorithm fully reconstructs LHC particle collisions

"Machine learning can be used to fully reconstruct particle collisions at the LHC [Large Hadron Collider]. This new approach can reconstruct collisions more quickly and precisely than traditional methods, helping physicists better understand LHC data. [...] Each proton–proton collision at the LHC sprays out a complex pattern of particles that must be carefully reconstructed to allow physicists to study what really happened. For more than a decade, CMS has used a particle-flow (PF) algorithm, which combines information from the experiment's different detectors, to identify each particle produced in a collision. Although this method works remarkably well, it relies on a long chain of hand-crafted rules designed by physicists. The new CMS machine-learning-based particle-flow (MLPF) alg...
The AI Report