ohai.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A cozy, fast and secure Mastodon server where everyone is welcome. Run by the folks at ohai.is.


#gemini

115 posts · 96 participants · 4 posts today

People have low expectations for AI. I asked ChatGPT (paid account, not the free tier) what would make ChatGPT better. I think the answer is a lot higher quality than what people keep saying one should expect. This is what I got:

--- My Prompt:
With DeepSeek, there are things it is very good at, but it fails quickly and rather spectacularly. Your models tend to break down as well, but somewhere there must be a model that will be able to stay on topic, even when the topic changes, so that conversations move around like human conversations move around subject-to-subject. What would take ChatGPT over the top?

--- ChatGPT reply:
Here’s what it’ll take to get closer to that human-like conversation flow:

  1. Long-Term Memory That Actually Works
    Most models either have no memory or they remember only in a clunky, artificial way.

A better memory system would not just recall facts, but contextual relevance—like “we talked about LLM efficiency two days ago and you hate cheerleader language.”

It has to be smart recall, not just data storage.

  2. Thread Awareness
    Imagine a model that can say: “You mentioned DeepSeek earlier. Should I compare it again now that we’ve brought up open-source tools?”
    That requires models to track not just words, but conversational branches—like a tree, not a straight line.

  3. Graceful Fallbacks
    When a model doesn’t know, it should gracefully pivot, not hallucinate or go off-topic. This kind of conversational “stability” is rare and subtle.

Most models that fail “spectacularly,” as you said, are trying to bluff through complexity instead of narrowing scope or clarifying.

  4. Hybrid Contextual Reasoning
    The future model needs to reason across subjects rather than switching on and off.

If we're talking about language models, and you shift to science fiction, it should be able to bring up Asimov or neural networks in storytelling—bridging topics, not just hopping between them.

Right now, most models either:

  - Stay sharp but shallow (good at Q&A, bad at memory), or
  - Drift into friendly nonsense (chatty, but brittle).

The sweet spot will be a system that grows with you across time and topics—and doesn’t forget what you value in a conversation.
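The "thread awareness" and "smart recall" ideas in the reply can be made concrete with a toy sketch. This is purely illustrative (no real product works this way, and all names here are made up): conversation turns are stored in a tree of topic nodes, so a subject change opens a branch instead of overwriting context, and recall searches every branch, not just the active one.

```python
# Toy illustration of "conversational branches - like a tree, not a
# straight line". All class and method names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class TopicNode:
    topic: str
    utterances: list = field(default_factory=list)
    children: list = field(default_factory=list)


class ConversationTree:
    def __init__(self, root_topic: str):
        self.root = TopicNode(root_topic)
        self.current = self.root

    def say(self, text: str) -> None:
        """Record an utterance under the currently active topic."""
        self.current.utterances.append(text)

    def branch(self, topic: str) -> None:
        """A topic shift opens a child branch; the old branch survives."""
        node = TopicNode(topic)
        self.current.children.append(node)
        self.current = node

    def recall(self, keyword: str) -> list:
        """Search the whole tree, not just the active branch."""
        hits, stack = [], [self.root]
        while stack:
            node = stack.pop()
            hits.extend((node.topic, u) for u in node.utterances
                        if keyword.lower() in u.lower())
            stack.extend(node.children)
        return hits


tree = ConversationTree("language models")
tree.say("DeepSeek is good at some things but fails spectacularly.")
tree.branch("science fiction")          # subject change, like a human chat
tree.say("Asimov's robot stories anticipate alignment debates.")
print(tree.recall("deepseek"))          # the earlier branch is still reachable
```

The point of the sketch is the last line: after the conversation has drifted to science fiction, the DeepSeek remark is still recoverable because the earlier topic lives on as an un-pruned branch, which is the "tree, not a straight line" property the reply describes.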

....

When you see stuff posted online meant to minimize the quality of output possible, remember: this was a single question with no leading prompt. I think this is the kind of answer we want from our AI devices.

Thoughts? Feel free to tear this response apart. It needs that kind of human analysis.

#ai #openai #deepseek


Ich glaube, es hackt!: #Google wants to watch your screen - all the time

In this episode, Rüdiger Trost and Tobias Schrödel talk about how artificial intelligence affects users' #Privatsphäre. They take a look at the new features of #GoogleAI #Gemini that allow the #KI to see the user's screen and analyze personal data.

That naturally raises some serious data-protection questions - above all, how and where such data is stored and used. The role of #Microsoft and other big players in this space also comes up.

One thing becomes clear: the whole business of data, #Privatsphäre, and security in the digital age is pretty complex.

The two discuss the challenges that arise and the misunderstandings that can result from analyzing personal data.

Episode website:
igeh.podigee.io/66-shouldersur

Media file: audio.podigee-cdn.net/1853673-

🧠 How did #Google make #Gemini 2.5 Pro perform so well? It's not just computing power, but a strategic approach to developing the #AI model

👉 A summary: linkedin.com/posts/alessiopoma

___ 

✉️ If you want to stay up to date on these topics, subscribe to my newsletter: bit.ly/newsletter-alessiopomar


Lol - I don't want to "get rid of" Gemini at all! The AI overview in web search doesn't bother me in the least. Quite the opposite 🤩
#Google #Gemini #AI #KI #press
Google web search: How to get rid of Gemini now - a simple trick removes the annoying AI overview

Source: GoogleWatchBlog
search.app/Jjf3u

Shared via the Google app

GoogleWatchBlog · Google web search: How to get rid of Gemini now - a simple trick removes the annoying AI overview. Google has turned web search into a Gemini product. Anyone who wants to switch off the AI overview can now do so with a trick.

🦾 New Leader - Gemini 2.5 Pro
📖Google has released Gemini 2.5 Pro — a powerful AI that handles tasks in logic, mathematics, and coding.
📖 Complex tests were used to evaluate its capabilities, such as GPQA - a set of graduate-level questions in physics, chemistry, and biology - and AIME, a mathematics competition for high school students. These tests demonstrate how well the AI solves problems that are difficult even for experts.
chat.problembo.com
#chat #GPT #gemini #google