Wednesday, April 29

Reddit

How are LLMs ‘corrected’ when users identify them spreading misinformation or saying something harmful?

I watched Last Week Tonight's piece on AI chatbots today, and it got me thinking about that old screenshot of a Google search in which Gemini recommends adding "1/8 cup of non-toxic glue" to pizza so the cheese sticks to the slice better. When something like this goes viral, I have to assume (though I could be wrong) that an employee at Google specifically goes out of their way to address that topic in particular. The image is a meme, of course, but I imagine Google wouldn't be keen to leave themselves open to liability if their LLM recommends that users consume glue. Does the developer "talk" to the LLM to correct it about that specific case? Do they compile specific information about (e.g.) pizza construction techniques and feed it that data to bring it to the forefront? Do ...
If AI is about to get 10x smarter, how do we prevent the internet from collapsing under synthetic noise?

I'm all for acceleration. I think the faster we hit AGI the better, but there's a bottleneck nobody here talks about enough: training data. Right now we are quietly poisoning the well. More than half of online content is already synthetic: bots talking to bots, articles written by AI, Reddit threads generated by LLMs. When the next generation of models trains on this, they eat their own tail. Model collapse is real; we saw it with image generators. Outputs get blander, weirder, less useful. We need a way to label or filter human-generated data, not because humans are better but because diversity prevents collapse. I know the standard solution sounds like a dystopian meme: biometric scanners, iris codes, hardware verification. And yeah, maybe it is dystopian. But so is a dead internet where nothi...
In 10 Minutes with AI, I Just Got More Closure on My Divorce than 4 Years of Therapy

Apologies if this is rather personal for this sub, but I feel a need to express how profoundly useful it was for me tonight. A chatbot very likely just saved my life. I am positively floored by how therapeutic it was in processing the beginning and ending of my relationship with my former spouse. I feel as though I can finally give myself permission to let go and move on with my life. I don't know what this says about technology and society, but it's beautiful. submitted by /u/trusch82
Is the era of all-you-can-eat AI ending? (i will not promote)

I am a GitHub Copilot Pro+ user. I have been enjoying the $39 plan, which is actually worth around $60 of compute, with 1,500 premium prompts on a per-model count basis. Given the availability of free-tier models and the model-switching option, it has felt never-ending. It will be switched to token-based billing after June. This matches the projections about "the death of the AI buffet," I think: fewer bundled memberships, more token-based costs. As all these foundation model providers push for profit, I think this is the natural step we are heading toward. They need to be able to measure and limit usage to be profitable. I am just curious how fast that will happen. Should we not take cheap and free AI for granted? Or can open-source models actually create a balance? If we are heading for less accessibil...
Are AI agents actually giving people ROI yet, or just saving time?

I'm curious how people are thinking about ROI from agents beyond productivity. A lot of the discussion is still around "this saved me 3 hours" (in some cases it wasted more, lol) or "this automated a workflow." That's obviously useful, but it feels like a limited way to measure value. For people using agents seriously, are you tracking anything beyond time saved? For example: did the agent create something reusable? Did it improve a workflow over time? Did it generate outputs that had value outside the original task? Did it create something others would pay for? Did it help produce knowledge, decisions, or execution that compounds? I'm especially interested in people using agents for coding, research, business ops, content, data work, or niche expert workflows. Just want to hear...
The AI Report