ohai.social is one of the many independent Mastodon servers you can use to participate in the fediverse.

#ollama


🌘 Nvidia and Ollama on NixOS WSL: LLMs on your gaming PC, up 24/7
➤ Run large language models on your gaming PC with ease
yomaq.github.io/posts/nvidia-o
This post details how the author configured an Nvidia GPU and Ollama in a NixOS-on-WSL environment on a gaming PC to keep LLM models running around the clock. The author works through VRAM locking, WSL shutting itself down, and NixOS's initially weak Nvidia support, and shares the configuration steps in detail: settings to keep WSL running, installing and configuring NixOS, setting up the Nvidia Container Toolkit, configuring the Ollama Docker container, and integrating Tailscale to simplify networking.
+ This post is really useful. I've been wanting to try a local LLM, but the setup always seemed like too much hassle. This approach looks much more doable!
+ NixOS looks powerful, but the learning curve is a bit steep. Still, to run an LLM locally, I…
#NixOS #WSL #Nvidia #Ollama #LLM #Docker

yomaq · Nvidia on NixOS WSL - Ollama up 24/7 on your gaming PC: Convenient LLMs at Home
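The container piece of the setup described above can be sketched with Ollama's stock Docker invocation (the image name, port, and flags are Ollama's documented defaults; the NixOS, WSL, and Tailscale pieces from the post are not shown):

```shell
# Run Ollama in Docker with NVIDIA GPU access
# (requires the NVIDIA Container Toolkit on the host).
# 11434 is Ollama's default API port; "ollama" is a named
# volume that persists downloaded models across restarts.
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and chat with a model inside the container:
docker exec -it ollama ollama run llama3
```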

You can now use an Ollama model with Continue.dev to do “agentic” code generation in VSCode. However, if you use LiteLLM to manage your model access, you can't take advantage of this feature just yet: github.com/continuedev/continu

#selfhosted #ai #ollama (brainsteam.co.uk/notes/2025/04)

GitHub · Support of "Agents" for models with "openai" provider · Issue #5044 · continuedev/continue · by ibuziuk
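For models served directly by Ollama (rather than routed through LiteLLM's OpenAI-compatible endpoint), Continue can be pointed at the `ollama` provider. A minimal `config.json` sketch, assuming a locally pulled llama3.1 model (the model name and title are illustrative, not from the post):

```json
{
  "models": [
    {
      "title": "Llama 3.1 8B (local)",
      "provider": "ollama",
      "model": "llama3.1:8b"
    }
  ]
}
```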

Good or bad? My laptop with a 12th-gen i7-12800H and an #nvidia A1000 GPU can sustain a 3.8 GHz turbo on all cores, almost 1 GHz of sustained turbo over the 2.9 GHz #intel promises. I am using Ollama and Gemma3:27b to beat on it. The GPU is a bit of a lap potato, of course, and hovers around 65 °C, while the CPU rides at 96 °C.

Tokens per second is about 25.

#ollama #AI #LLM
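A tokens-per-second figure like that can be read straight out of Ollama's API response instead of eyeballed. A small sketch using the standard `/api/generate` response fields `eval_count` and `eval_duration` (the sample numbers below are invented, not from the post):

```python
# Ollama's /api/generate response reports eval_count (tokens
# generated) and eval_duration (nanoseconds spent generating).
def tokens_per_second(resp: dict) -> float:
    """Generation speed from an Ollama response dict."""
    return resp["eval_count"] / resp["eval_duration"] * 1e9

# Made-up example: 250 tokens generated in 10 seconds.
sample = {"eval_count": 250, "eval_duration": 10_000_000_000}
print(round(tokens_per_second(sample), 1))  # → 25.0
```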

I've done some #vibehosting yesterday... I couldn't be bothered investigating why #fail2ban keeps banning my IP after fetching emails from my email server, so I've decided to delegate my issues to #ollama.

I've set up a knowledge base with all the necessary config and log files, etc., and asked #QwQ to investigate... Since it's a #localLLM, I had no issues submitting even the most sensitive information to it.

QwQ did come up with tailored suggestions on how to fix the problem. #openwebui
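For reference, the usual manual fix for this kind of self-ban is fail2ban's `ignoreip` setting. A minimal `jail.local` fragment (the address is a placeholder for the mail-fetching host, not something from the post):

```ini
# /etc/fail2ban/jail.local — addresses in ignoreip are never banned.
# 203.0.113.42 is a placeholder; substitute the client's real IP.
[DEFAULT]
ignoreip = 127.0.0.1/8 ::1 203.0.113.42
```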

Hello clever Fediverse, I'm currently diving into a completely absurd #Rabbithole, and thanks to #LlmStudio and #Ollama my M1 MacBook has rediscovered its fan… Locally, #LLM currently tops out at 8-12b for me (32 GB RAM). Are there #Benchmarks anywhere that will please talk me out of the idea that an M4 with >48 GB RAM would be drastically better? Or would something else entirely be smarter? Or a different hobby? It has to be mobile (reachable), because my lifestyle is too unsettled for a desktop. Recommendations welcome in the comments.
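As a rough sanity check for sizing the next machine, a common rule of thumb (an assumption, not a benchmark) is that a Q4-quantized model needs about half a byte per parameter for weights, plus ~20% overhead for KV cache and runtime:

```python
# Back-of-envelope memory estimate for running a local LLM.
# Assumption: Q4 quantization ≈ 0.5 bytes per parameter, plus
# roughly 20% overhead for KV cache and runtime state.
def est_memory_gb(params_b: float, bytes_per_param: float = 0.5) -> float:
    weights_gb = params_b * bytes_per_param  # params in billions → GB
    return round(weights_gb * 1.2, 1)        # +20% overhead

for size in (8, 12, 32, 70):
    print(f"{size}B @ Q4 ≈ {est_memory_gb(size)} GB")
```

By this estimate an 8-12b model fits comfortably in 32 GB, while a 70B Q4 model lands around 42 GB — which is roughly why >48 GB of unified memory changes what is runnable.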

🚀 New post on Firebleu Website!

Today I'm exploring self-hosting a powerful AI with Deepseek, an impressive model to try at home!
Want to keep control of your data? This guide is for you 🧠🔒

👉 Lien : firebleu.website/deepseek-et-l

- · Deepseek and self-hosting: the miracle solution? -

Introducing MoonPiLlama

Adam Jenkins has made a YouTube video showing how to install #MoodleBox and Ollama on a Raspberry Pi 4. MoodleBox is a custom distribution of #Moodle built specifically for the Raspberry Pi, but the Moodle part includes good general information on getting Ollama and Moodle to work together. It also includes gratuitous use of a yellow rubber duck (yes, Pi 4, not Pi 5).

youtube.com/watch?v=KqQfzhJJFP


@nothingfuture I haven't dug too deep into it (yet), but you can find more information about the training data used for the models they list in the Google Doc.

For instance, huggingface.co/microsoft/Phi-3 was "trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties."

#nercomp25 #AI #ollama

huggingface.co/microsoft/Phi-3-medium-128k-instruct · Hugging Face · We're on a journey to advance and democratize artificial intelligence through open source and open science.