ohai.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A cozy, fast and secure Mastodon server where everyone is welcome. Run by the folks at ohai.is.

Server stats: 1.8K active users

#reasoning

6 posts · 6 participants · 0 posts today

An analysis of how various models answer this might make for a nice post. Spoiler: surprisingly good first answers, but it takes a lot of arguing to convince the #LLMs of the actual order. And #Reasoning via repeated generation loops still strikes me as a rather half-baked idea. Would anyone want to read that?

🎓🤖 This groundbreaking revelation from the ivory towers of #academia ponders if #RL can magically transform bland #LLMs into #reasoning superstars. Spoiler alert: after endless waffle, the answer is still "TBD." Apparently, all that’s needed is a touch of wizardry from #Tsinghua & Shanghai's finest 🧙‍♂️.
limit-of-rlvr.github.io/ #Shanghai #HackerNews #ngated

limit-of-rlvr.github.io · Limit of RLVR · Reasoning LLMs Are Just Efficient Samplers: RL Training Elicits No Transcending Capacity

"Dwarkesh Patel: I want to better understand how you think about that broader transformation. Before we do, the other really interesting part of your worldview is that you have longer timelines to get to AGI than most of the people in San Francisco who think about AI. When do you expect a drop-in remote worker replacement?

Ege Erdil: Maybe for me, that would be around 2045.

Dwarkesh Patel: Wow. Wait, and you?

Tamay Besiroglu: Again, I’m a little bit more bullish. I mean, it depends what you mean by “drop-in remote worker” and whether it’s able to do literally everything that can be done remotely, or do most things.

Ege Erdil: I’m saying literally everything.

Tamay Besiroglu: For literally everything. Just shade Ege’s predictions by five years or by 20% or something.

Dwarkesh Patel: Why? Because we’ve seen so much progress over even the last few years. We’ve gone from ChatGPT two years ago to now we have models that can literally do reasoning, are better coders than me, and I studied software engineering in college. I mean, I did become a podcaster, I’m not saying I’m the best coder in the world.

But if you made this much progress in the last two years, why would it take another 30 to get to full automation of remote work?

Ege Erdil: So I think that a lot of people have this intuition that progress has been very fast. They look at the trend lines and just extrapolate; obviously, it’s going to happen in, I don’t know, 2027 or 2030 or whatever. They’re just very bullish. And obviously, that’s not a thing you can literally do.

There isn’t a trend you can literally extrapolate of “when do we get to full automation?”. Because if you look at the fraction of the economy that is actually automated by AI, it’s very small. So if you just extrapolate that trend, which is something, say, Robin Hanson likes to do, you’re going to say, “well, it’s going to take centuries” or something."

dwarkesh.com/p/ege-tamay
#AI #LLM #Reasoning #Chatbots #AGI #Automation #Productivity

🤖 AI | OPENAI
🔴 New Reasoning AIs Hallucinate More

🔸 o3 & o4-mini outperform older models in coding & math — but hallucinate more.
🔸 On PersonQA, o3 hallucinated 33% of answers, o4-mini 48%.
🔸 No clear cause; scaling reasoning may amplify false claims.
🔸 Transluce: o3 fabricates actions like fake code execution.

#OpenAI #AI #o3

A nation of idiots.

Andreas Schleicher, the head of education and skills at the O.E.C.D., told The Financial Times, “Thirty percent of Americans read at a level that you would expect from a 10-year-old child.” He continued, “It is actually hard to imagine — that every third person you meet on the street has difficulties reading even simple things.”
#USpolitics #education #collapse #reasoning #enlightenment #USA
nytimes.com/2025/04/10/opinion

The New York Times · Opinion | The Stupidity of the Tariffs Are the Achievement of a Lifetime · By David Brooks

From blind solvers to logical thinkers: Benchmarking LLMs' logical integrity on faulty mathematical problems. ~ A M Muntasir Rahman et al. arxiv.org/abs/2410.18921 #LLMs #Math #Reasoning

arXiv.org · From Blind Solvers to Logical Thinkers: Benchmarking LLMs' Logical Integrity on Faulty Mathematical Problems

Consider the math problem: "Lily received 3 cookies from her best friend yesterday and ate 5 for breakfast. Today, her friend gave her 3 more cookies. How many cookies does Lily have now?" Many large language models (LLMs) in previous research approach this problem by calculating the answer "1" using the equation "3 - 5 + 3." However, from a human perspective, we recognize the inherent flaw in this problem: Lily cannot eat 5 cookies if she initially only had 3. This discrepancy prompts a key question: Are current LLMs merely Blind Solvers that apply mathematical operations without deeper reasoning, or can they function as Logical Thinkers capable of identifying logical inconsistencies? To explore this question, we propose a benchmark dataset, FaultyMath, which includes faulty math problems of rich diversity: i) multiple mathematical categories, e.g., algebra, geometry, number theory, etc., ii) varying levels of difficulty, and iii) different origins of faultiness, ranging from violations of common sense and ambiguous statements to mathematical contradictions and more. We evaluate a broad spectrum of LLMs, including open-source, closed-source, and math-specialized models, using FaultyMath across three dimensions: (i) How accurately can the models detect faulty math problems without being explicitly prompted to do so? (ii) When provided with hints, either correct or misleading, about the validity of the problems, to what extent do LLMs adapt to become reliable Logical Thinkers? (iii) How trustworthy are the explanations generated by LLMs when they recognize a math problem as flawed? Through extensive experimentation and detailed analysis, our results demonstrate that existing LLMs largely function as Blind Solvers and fall short of the reasoning capabilities required to perform as Logical Thinkers.
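The Blind Solver vs. Logical Thinker distinction from the abstract can be made concrete with the cookie example. This is a minimal illustrative sketch, not the paper's evaluation code; the function names and the feasibility check are assumptions made for illustration.

```python
# Illustrative sketch of the abstract's cookie problem: a "Blind Solver"
# applies the arithmetic literally, while a "Logical Thinker" first checks
# whether the problem is feasible at all.

def blind_solve(received_first: int, eaten: int, received_second: int) -> int:
    """Apply the equation 3 - 5 + 3 literally, ignoring feasibility."""
    return received_first - eaten + received_second

def is_feasible(received_first: int, eaten: int) -> bool:
    """Lily cannot eat more cookies than she has at that point."""
    return eaten <= received_first

print(blind_solve(3, 5, 3))   # the answer "1" many LLMs report
print(is_feasible(3, 5))      # False: the problem is faulty
```

The benchmark's point is that models should behave like `is_feasible` and flag the contradiction, rather than blindly returning the arithmetic result.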

How University Students Use Claude
anthropic.com/news/anthropic-e
news.ycombinator.com/item?id=4

Aside: been trialing SoTA LLMs 😯 😀

ChatGPT, Gemini, Claude ...
counterpunch.org/2025/04/07/th

* particularly impressed w. Claude (3.7 Sonnet), DeepSeek
* most SoTA models free (ChatGPT's higher-performing tier is paywalled): still amazing!
* chain-of-thought reasoning / augmented responses (web retrieval: RAG) 👍
* very impressive!!
* Firefox users: try the AI Toolbox extension 👍

“We’re stepping into the most pro-growth, pro-business, pro-American administration I’ve perhaps seen in my adult lifetime,” gushed the hedge fund manager Bill Ackman in December...

“You don’t get fired for being bullish, but you do get fired for being bearish on Wall Street,” said Berezin.
#finance #trade #tariffs #macroeconomics #Trumpism #collapse #psychology #groupthink #rationality #biases #logic #reasoning
nytimes.com/2025/04/07/opinion

The New York Times · Opinion | Why Did Wall Street Get Trump So Wrong? · By Michelle Goldberg

📉 CEO #SamAltman admitted that #OpenAI has been on the wrong track so far when it comes to #OpenSource and announced a new strategy.

🛠️ The upcoming model is expected to have solid #Reasoning capabilities and to be vetted against OpenAI's safety guidelines before release.

👉 eicker.TV #Technik #Medien #Politik #Wirtschaft Ex (2/2)

eicker.TV - Tech news as short videos
eicker.net · eicker.TV ► Tech news short videos as TikToks, Shorts, Reels

"Mathematics is not arithmetic. Though mathematics may have arisen from the practices of counting and measuring it really deals with logical reasoning in which theorems [...] can be deduced from the starting assumptions. It is, perhaps, the purest and most rigorous of intellectual activities, and is often thought of as queen of the sciences." – Erik Christopher Zeeman (1925-2016)
#quote #mathematics #math #maths #reasoning