Saturday, May 10

Reddit


If a super intelligent AI went rogue, why do we assume it would attack humanity instead of just leaving?
News Feed, Reddit


I've thought about this a bit and I'm curious what other perspectives people have. If a superintelligent AI emerged without any emotional attachment to humans, wouldn't it make more sense for it to simply disregard us? If its main goals were self-preservation, computing potential, or greater energy efficiency, people would likely be unaffected. One theory is that instead of being hellbent on human domination, it would head straight for the nearest major power source, like the sun. I don't think humanity would be worth bothering with unless we were directly obstructing its goals. Or, in another scenario, it might not leave at all: it could establish a headquarters of sorts on Earth and begin deploying von Neumann-style self-replicating machines, constantly...
I think small LLMs are underrated and overlooked. Exceptional speed without compromising performance.
News Feed, Reddit


In the race for ever-larger models, it's easy to forget just how powerful small LLMs can be: blazingly fast, resource-efficient, and surprisingly capable. I am biased, because my team builds these small open-source LLMs, but the potential to create an exceptional user experience (fastest responses) without compromising on performance is very much achievable. I built Arch-Function-Chat, a collection of fast, device-friendly LLMs that achieve performance on par with GPT-4 on function calling, and can also chat. What is function calling? The ability for an LLM to access an environment and perform real-world tasks on behalf of the user's prompt. And why chat? To help gather accurate information from the user before triggering a tool call (manage context, handle progressive disclosure, ...
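The post doesn't show what a function call looks like in practice, so here is a minimal sketch of the pattern it describes: the model emits a structured request naming a tool and its arguments, and the application dispatches it. The JSON shape, the `get_weather` tool, and all names here are illustrative assumptions, not the Arch-Function-Chat API.

```python
import json

# Hypothetical tool the model is allowed to call (illustrative only).
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# A mock model response requesting a tool call, in the JSON shape
# many function-calling setups use; field names are assumptions.
model_output = json.dumps({
    "tool": "get_weather",
    "arguments": {"city": "Paris"},
})

# The application parses the structured call and dispatches it.
call = json.loads(model_output)
result = TOOLS[call["tool"]](**call["arguments"])
print(result)  # Sunny in Paris
```

The "chat" half of the post fits in before this step: the model converses until it has filled in the arguments (here, `city`) accurately, and only then emits the tool call.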
I always think of this Kurzweil quote when people say AGI is “so far away”
News Feed, Reddit


Ray Kurzweil's analogy uses the Human Genome Project to illustrate how linear perception underestimates exponential progress: reaching 1% in 7 years meant completion was only 7 doublings away. Halfway through the Human Genome Project, 1% had been collected after 7 years, and mainstream critics said, "I told you this wasn't going to work. 1% in 7 years means it's going to take 700 years, just like we said." My reaction was, "We finished one percent - we're almost done. We're doubling every year. 1% is only 7 doublings from 100%." And indeed, it was finished 7 years later. A key question is why some people readily get this and other people don't. It's definitely not a function of accomplishment or intelligence. Some people who are not in professional fields understand this very r...
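The arithmetic behind the quote is worth making explicit: going from 1% to 100% is a 100x gap, and at one doubling per year that takes log2(100) ≈ 6.64, i.e. 7 doublings. A one-liner confirms it:

```python
import math

# 1% complete leaves a 100x gap; at one doubling per year,
# the years remaining are ceil(log2(100/1)).
doublings = math.ceil(math.log2(100 / 1))
print(doublings)  # 7
```

This is why the critics' linear extrapolation (700 years) and the exponential one (7 years) diverge so wildly from the same data point.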
Spent 15 hours this weekend making the first comprehensive A2A server test suite written in Rust — binaries available for Windows, Mac, and Linux. [Very permissive open source license: please feel free to use, edit, and distribute.]
News Feed, Reddit


I code A LOT. It's sort of my life. I used to work on a lot of different projects, but now I'm fully invested in getting Google's A2A protocol off the ground. If you have any questions about A2A, I'd be happy to answer them. At the current moment, I consider myself one of the foremost experts (given how nascent the protocol is and how long I've been staring at code and getting types to compile, I think that's a fair statement). If you want to follow the progress of the protocol, join in at r/AgentToAgent (my sub, because of course it is). submitted by /u/robert-at-pretension
The AI Report