The issue with modern AI is the data centers and the central control, no? How feasible would an AI be whose code is FOSS and that is trained and run in a decentralised way?
The hardest issues would probably be financing and motivation.
GPUs are expensive, and electricity is expensive. All the current major LLMs are huge loss leaders for giant players with deep pockets. A distributed AI service would be run by smaller players with neither the financing nor the motivation to front all of that cost.
There is Folding@home, where you donate time on your hardware for scientific calculations, but that’s quite different from donating time on your hardware so some random stranger can generate AI cat images or summarise a news article.
Lemmy, Mastodon, etc. have a comparatively modest monetary (and energy/environmental) cost, and the benefit is building communities and bringing people together. For distributed AI the cost (monetary and energy/environmental) is higher, and the benefit is limited.
This is only a problem in a world where we spend no time optimising these models, which is what we do today: we just throw more power at them rather than engineering them to be… better. Look at how China is doing AI: their more limited access to hardware has forced them to invest in LLMs that run on more modest hardware with much less power. Development has to keep moving in that direction for this to become a viable product that the average person can engage with, without an oppressive mega-corporation footing the infrastructure bill (and poisoning the surrounding population at the same time).
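For a sense of what “modest hardware” already means in practice: below is a minimal sketch (purely illustrative, not something any project mentioned here ships) of running a 4-bit quantised model locally with llama-cpp-python. The model filename is a placeholder for whatever GGUF checkpoint you download.

```python
# Minimal sketch: a quantised local model via llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a downloaded 4-bit GGUF
# checkpoint; the path below is a placeholder, not a recommendation.
from llama_cpp import Llama

llm = Llama(model_path="./some-7b-model-q4_k_m.gguf", n_ctx=2048)

out = llm("Summarise in one sentence why decentralised AI is hard to finance.",
          max_tokens=64)
print(out["choices"][0]["text"])
```

A 4-bit 7B model like that fits in roughly 4–5 GB of RAM and runs on an ordinary laptop CPU, which is the kind of efficiency gain I’m pointing at.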
Edit: I’ll add that I am broadly not in favor of AI as a whole, but the tech is here and has novel use cases. Making the models more efficient is a necessary step towards actually realising whatever usefulness this tech has.