Thursday, July 10

Tag: Reddit

Giving ChatGPT access to the “real” world. A project.
News Feed, Reddit

Giving ChatGPT access to the “real” world. A project.

I want to hook up ChatGPT to control my outdated but ahead-of-its-time WOWWEE Rovio. But until I remember how to use a soldering iron, I thought I would start small. Using ChatGPT to write 100% of the code, I coaxed it along to use an ESP32 embedded controller to manipulate a 256-LED matrix "however it wants". The idea was to give it access to something physical and "see what it would do". So far it's slightly underwhelming, but it's coming along ;) The code connects to WiFi and the ChatGPT API, sending a system prompt to explain the situation: "You're connected to an LED matrix to be used to express your own creativity." The prompt gives the structure of commands for toggling the LEDs, including color, etc., and lets it loose to do whatever it sees fit. With each LED command is roo...
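A minimal sketch of what that command structure might look like on the receiving end. The command grammar ("SET x y r g b" / "CLEAR"), the 16x16 layout, and all names here are assumptions for illustration, not the post's actual protocol:

```python
# Hypothetical sketch: parse LED commands a model might emit and apply
# them to an in-memory framebuffer. The grammar and 16x16 layout are
# assumptions, not the post's actual format.

WIDTH, HEIGHT = 16, 16  # 256-LED matrix treated as a 16x16 grid (assumed)

def blank_frame():
    """Return a 16x16 frame of off (black) pixels as (r, g, b) tuples."""
    return [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def apply_commands(frame, text):
    """Apply newline-separated commands from the model's reply to the frame."""
    for line in text.strip().splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "CLEAR":
            for row in frame:
                for x in range(WIDTH):
                    row[x] = (0, 0, 0)
        elif parts[0] == "SET" and len(parts) == 6:
            x, y, r, g, b = map(int, parts[1:])
            if 0 <= x < WIDTH and 0 <= y < HEIGHT:
                frame[y][x] = (r, g, b)
        # Unknown or malformed commands are ignored so a chatty model
        # can't crash the display loop.
    return frame

frame = apply_commands(blank_frame(), "CLEAR\nSET 3 4 255 0 0\nSET 0 0 0 255 0")
print(frame[4][3])  # (255, 0, 0)
print(frame[0][0])  # (0, 255, 0)
```

On the actual ESP32 the parsed pixels would be pushed to the matrix driver each frame; silently skipping malformed lines is the important design choice, since the model's output is untrusted.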
One-Minute Daily AI News 10/26/2024
News Feed, Reddit

One-Minute Daily AI News 10/26/2024

Claude AI Gets Bored During Coding Demonstration, Starts Perusing Photos of National Parks Instead.[1]
Google tool makes AI-generated writing easily detectable.[2]
AI-generated child sexual abuse images are spreading. Law enforcement is racing to stop them.[3]
Newly-opened National Quantum Computing Centre (NQCC) will help deliver breakthroughs in AI, energy, healthcare and more.[4]
Sources:
[1] https://futurism.com/the-byte/claude-ai-bored-demonstration
[2] https://www.newscientist.com/article/2452847-google-tool-makes-ai-generated-writing-easily-detectable/
[3] https://apnews.com/article/ai-child-sexual-abuse-images-justice-department-42186aaf8c9e27c39060f9678ebb6d7b
[4] https://www.gov.uk/government/news/new-national-quantum-laboratory-to-open-up-access-to-quantum-computing-unleashing...
One-Minute Daily AI News 10/25/2024
News Feed, Reddit

One-Minute Daily AI News 10/25/2024

OpenAI plans to release its next big AI model by December.[1]
Meta Platforms to use Reuters news content in AI chatbot.[2]
Meta AI Releases New Quantized Versions of Llama 3.2 (1B & 3B): Delivering Up To 2-4x Increases in Inference Speed and 56% Reduction in Model Size.[3]
Nvidia overtakes Apple as world’s most valuable company.[4]
Sources:
[1] https://www.theverge.com/2024/10/24/24278999/openai-plans-orion-ai-model-release-december
[2] https://www.reuters.com/technology/artificial-intelligence/meta-platforms-use-reuters-news-content-ai-chatbot-2024-10-25/
[3] https://www.marktechpost.com/2024/10/24/meta-ai-releases-new-quantized-versions-of-llama-3-2-1b-3b-delivering-up-to-2-4x-increases-in-inference-speed-and-56-reduction-in-model-size/
[4] https://www.reuters.com/technology/nvidia...
Recent Paper shows Scaling won’t work for generalizing outside of Training Data
News Feed, Reddit

Recent Paper shows Scaling won’t work for generalizing outside of Training Data

I recently came across an intriguing paper (https://arxiv.org/html/2406.06489v1) that tested various machine learning models, including a transformer-based language model, on out-of-distribution (OOD) prediction tasks. The authors found that simply making neural networks larger doesn't improve their performance on these OOD tasks, and might even make it worse. They argue that scaling up models isn't the solution for achieving genuine understanding beyond their training data. This finding contrasts with many studies on "grokking," where neural networks suddenly start to generalize well after extended training. According to the new paper, the generalization seen in grokking is too simplistic and doesn't represent true OOD generalization. However, I have a ...
The AI Report