This looks like an interesting idea: #crowdsourcing GPU compute for running #llm inference https://petals.dev/