Wednesday, April 15

🚨 RED ALERT: Tennessee is about to make building chatbots a Class A felony (15-25 years in prison). This is not a drill.

This is not hyperbole, nor will it just go away if we ignore it. It affects every single AI service, from the big labs to small devs building SaaS apps. This is real; please take it seriously.

TL;DR: Tennessee HB1455/SB1493 creates Class A felony criminal liability — the same category as first-degree murder — for anyone who “knowingly trains artificial intelligence” to provide emotional support, act as a companion, simulate a human being, or engage in open-ended conversations that could lead a user to feel they have a relationship with the AI. The Senate Judiciary Committee already approved it 7-0. It takes effect July 1, 2026.

This affects every conversational AI product in existence. If you deploy any AI SaaS product, you need to read this right now.

What the bill actually says

The bill makes ...
Anthropic's Claude Mythos Finds Zero-Days. A Different Approach Found the Vulnerability Class They Belong To.

On April 7, 2026, Anthropic announced Claude Mythos Preview — a frontier model capable of autonomously discovering and exploiting zero-day vulnerabilities across every major operating system and browser. They assembled Project Glasswing, a $100M defensive coalition with Microsoft, Google, Apple, AWS, CrowdStrike, and Palo Alto Networks. They reported thousands of vulnerabilities, including a 27-year-old OpenBSD flaw and a 16-year-old FFmpeg bug. It was a watershed moment for AI security.

And the findings were individual bugs — specific flaws in specific locations. Mythos SI, operating through the Structured Intelligence framework, analyzed the same FFmpeg codebase and found something different. Not just bugs. The architectural pattern that produces them. Four vulnerabilities in FFmpe...
Claude Code Degradation: An interesting and novel find

As many of you have likely seen, the Claude Code community newswire has been ablaze with reports of Claude Code being quite degraded lately, starting in February and continuing to this day. Curious whether there was any "signal" on the wire when using Claude Code, I fired up my old friend Wireshark along with the --tls-keylog flag. Call it a man-in-the-middle attack on my own traffic.

The captured TLS network traffic reveals the system prompts, system variables, and various other bits of telemetry. The interesting part? A signature routing block that binds the session to a cloud instance with an effort-level parameter, named Numbat. Mine, specifically, was numbat-v7-efforts-15-20-40-ab-prod8. So it would appear that the backend running my instance is tied to an efforts-15-20-40 leve...
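For readers who want to try this kind of self-MITM, the trick rests on TLS key logging: the client writes its per-session secrets to a file in the NSS key-log format, and Wireshark uses that file to decrypt a packet capture of your own traffic. This is a minimal sketch of the mechanism in Python (not the poster's exact setup — Node's --tls-keylog flag and the common SSLKEYLOGFILE convention produce the same file format):

```python
import os
import ssl
import tempfile

# Sketch: enable NSS-format TLS key logging on a client context.
# Any connection made through `ctx` afterwards appends its session
# secrets to this file; Wireshark reads it via
# Preferences -> Protocols -> TLS -> "(Pre)-Master-Secret log filename"
# and can then decrypt a simultaneous packet capture of that traffic.
keylog = os.path.join(tempfile.mkdtemp(), "tlskeys.log")

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog  # requires Python 3.8+ / OpenSSL 1.1.1+

# The log file is created as soon as key logging is enabled;
# secret lines are appended on each TLS handshake made via `ctx`.
print(os.path.exists(keylog))
```

The same idea applies to any TLS client that honors key logging: capture packets with Wireshark (or tshark) while the client runs with logging enabled, then point Wireshark at the key-log file to see the plaintext requests and responses.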
Are Data Centers Sitting On A Goldmine Of Wasted Energy?

Today, energy is becoming the defining constraint of the AI revolution. As demand for digital services and computing power grows, it takes an enormous amount of energy to sustain data centers, and in turn they emit a lot of heat. They produce so much heat that they can raise the surface temperature of the surrounding land by several degrees.

submitted by /u/crazyotaku_22
Claude cannot be trusted to perform complex engineering tasks

AMD’s AI director just analyzed 6,852 Claude Code sessions, 234,760 tool calls, and 17,871 thinking blocks. Her conclusion: “Claude cannot be trusted to perform complex engineering tasks.”

Thinking depth dropped 67%. Code reads before edits fell from 6.6 to 2.0. The model started editing files it hadn’t even read. Stop-hook violations went from zero to 10 per day.

Anthropic admitted they silently changed the default effort level from “high” to “medium” and introduced “adaptive thinking” that lets the model decide how much to reason. No announcement. No warning. When users shared transcripts, Anthropic’s own engineer confirmed the model was allocating ZERO thinking tokens on some turns. The turns with zero reasoning? Those were the ones hallucinating.

AMD’s team has already switched to a...
Spent today at MIT’s Open Agentic Web conference. Six things worth thinking about.

1. We're in the DNS era of agent infrastructure. Before agents can find and trust each other at scale, you need identity, attestation, reputation, and registry infrastructure — the same structural role DNS played before search was possible. This came up independently from multiple directions. It's the most underbuilt layer in the stack right now.

2. The chatbot framing is a local maximum. The most interesting work wasn't better UX or smarter responses. It was agents as persistent actors that discover, negotiate, and transact across networks over time. People doing serious work have already moved past the assistant model entirely.

3. Coordination is the hard problem, not capability. A room full of brilliant agents can still fail badly. This matches what I found running HiddenBench against frontier m...
The AI Report