Training and fine-tuning of AI models

Darshan Gandhi

Mar 8, 2025

Today, the training and fine-tuning of AI models are dominated by a handful of companies, requiring billions of dollars in compute and relying on proprietary cloud infra.

Decentralized training can help mitigate this dependency, enabling an open, verifiable, and community-driven approach to model training.

We mapped out key players across five categories:

→ Compute & Infra Layer

@gensynai, @PrimeIntellect, @fortytwonetwork, @exolabs

→ Data & Knowledge Networks

@PluralisHQ, @CerboAI, @flock_io, @exolabs, @Ammo_AI

→ Models

@PrimeIntellect, @NousResearch

→ Execution & Optimization

@MacrocosmosAI, @fortytwonetwork, @NousResearch, @CerboAI, @flock_io, Gradients by @rayon_labs

→ App Layer

@NousResearch, @Ammo_AI

Who else should be on this list?

Polaris Fund © 2025
