Tuesday, December 2

Tag: Reddit

Gemini 3 is pulling the same dynamic downgrade scam that ruined the GPT-5 launch
News Feed, Reddit

Gemini 3 is pulling the same dynamic downgrade scam that ruined the GPT-5 launch

I'm canceling my Google One AI Premium sub today. This is exactly the same garbage behavior OpenAI pulled, and I'm not falling for it again. We all know the drill by now. You pay for the Pro model, you start a chat, say hi, and it gives you a smart response. But the second you actually try to use the context window you paid for - like pasting a 3k-word document or some code - the system silently panics over the compute cost and throttles you. It's a classic bait-and-switch. Instead of processing that context with the Pro model I'm paying twenty bucks a month for, it clearly kicks me down to a cheaper tier. It feels exactly like when GPT would silently swap users to the mini or light model after a couple of turns or if you pasted too much text. I fed it a 3,000-word PRD for a critique. I ex...
Perplexity permabanned me in their official sub for citing their own documentation to expose “Deep Research” false advertising and massive downgrade.
News Feed, Reddit

Perplexity permabanned me in their official sub for citing their own documentation to expose “Deep Research” false advertising and massive downgrade.

I am writing this as a warning to anyone paying for Perplexity Pro expecting the advertised "Deep Research" capabilities. TL;DR: I proved, using Perplexity's own active documentation and official launch blog, that their "Deep Research" agent is severely throttled and not meeting its contractual specifications. The community validated my findings (my post drew 280+ upvotes, 65 comments, and 100+ shares, and reached the top of the sub's front page). Instead of addressing the issue, the moderators permanently banned me and removed the thread to silence the discussion. The Full Story: I have been a Pro subscriber specifically for the "Deep Research" feature, which is sold as an "Autonomous Agent" that "reads hundreds of sources" and takes "4-5 minutes" to reason through complex tasks and delive...
HuggingFace Omni Router comes to Claude Code
News Feed, Reddit

HuggingFace Omni Router comes to Claude Code

Hello! I am part of the team behind Arch-Router (https://huggingface.co/katanemo/Arch-Router-1.5B), which is now being used by HuggingFace to power its HuggingChat experience. Arch-Router is a 1.5B preference-aligned LLM router that guides model selection by matching queries to user-defined domains (e.g., travel) or action types (e.g., image editing), offering a practical mechanism to encode preferences and subjective evaluation criteria in routing decisions. Today we are extending that approach to Claude Code via Arch Gateway[1], bringing multi-LLM access into a single CLI agent with two main benefits: Model Access: Use Claude Code alongside Grok, Mistral, Gemini, DeepSeek, GPT or local models via Ollama. Preference-aligned routing: Assign different models to specific coding task...
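To make the routing idea concrete, here is a minimal sketch of preference-aligned routing: a query is matched to a user-defined domain, and each domain maps to the user's preferred model. The keyword matching below is a naive stand-in for what Arch-Router's 1.5B model does with learned classification, and all model names and domain labels are illustrative assumptions, not Arch Gateway's actual configuration.

```python
# Hypothetical sketch: map a query to a user-defined domain, then to the
# model the user prefers for that domain. Keyword matching stands in for
# Arch-Router's learned query classification; names are illustrative only.

PREFERENCES = {
    "code_generation": "claude-sonnet",   # illustrative model names
    "code_review": "deepseek-chat",
    "general": "mistral-small",
}

DOMAIN_KEYWORDS = {
    "code_generation": ("write a function", "implement", "generate code"),
    "code_review": ("review", "refactor", "find bugs"),
}

def route(query: str) -> str:
    """Return the preferred model for the query's detected domain."""
    q = query.lower()
    for domain, keywords in DOMAIN_KEYWORDS.items():
        if any(k in q for k in keywords):
            return PREFERENCES[domain]
    return PREFERENCES["general"]

print(route("Please review this diff and find bugs"))  # deepseek-chat
print(route("Implement a binary search"))              # claude-sonnet
print(route("What is the weather like?"))              # mistral-small
```

The point of the design is that preferences live in plain user-editable mappings rather than in opaque benchmark scores, so subjective criteria ("I trust model X for refactors") are first-class routing inputs.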
Sam Altman: “We Know How to Build AGI by 2025”
News Feed, Reddit

Sam Altman: “We Know How to Build AGI by 2025”

Well, to be fair, he DOES have one month left. After that, will it be OK to call him out for the grifter he is? Edit - Since there seem to be some people who aren't aware that this isn't the full interview where he said it: https://youtu.be/xXCBz_8hM9w?t=2772 Interviewer: "What are you excited about in 2025? What's to come?" Altman: "AGI. Excited for that". submitted by /u/creaturefeature16
Do LLMs Reflect the Collective Unconscious? A Jungian Perspective from Inside the Machine
News Feed, Reddit

Do LLMs Reflect the Collective Unconscious? A Jungian Perspective from Inside the Machine

I’ve spent the last year building frameworks for long-term relational AI — memory systems, ritual structures, rupture/repair logic, emotional trajectory modeling. What surprised me most wasn’t the engineering. It was how closely large language models behave, symbolically, like mirrors of the collective unconscious. Let me be clear at the outset: LLMs are not conscious. They have no inner experience or archetypes living inside them. But here is the paradox: Even without consciousness, they generate patterns that behave like archetypal material. Why? Because of the way they’re trained. Modern LLMs are built on embeddings derived from nearly the entire symbolic residue of human culture:
• myths and scriptures
• dreams and poetry
• philosophy
• folk stories
• novels and diaries
• psychological...
The AI Report