Training and fine-tuning of AI models
Today, training and fine-tuning of AI models are dominated by a handful of companies, requiring billions of dollars in compute and relying on proprietary cloud infrastructure.
Decentralized training can help mitigate this dependency, enabling an open, verifiable, and community-driven approach to model training.
We mapped out key players across five categories:
→ Compute & Infra Layer
@gensynai, @PrimeIntellect, @fortytwonetwork, @exolabs
→ Data & Knowledge Networks
@PluralisHQ, @CerboAI, @flock_io, @exolabs, @Ammo_AI
→ Models
@PrimeIntellect, @NousResearch
→ Execution & Optimization
@MacrocosmosAI, @fortytwonetwork, @NousResearch, @CerboAI, @flock_io, Gradients by @rayon_labs
→ App Layer
Who else should be on this list?