TAO Times #37: Distributed Training
Subnet Summer, Instos buying, & heaps of research

Subnet Summer, real or fake? Fact or fiction?
I have some thoughts around this, but I don’t want to be brash, so instead, I will focus purely on delivering the week’s news… and maybe, just maybe, I will write long form about the mythical pot of gold at the end of the rainbow that is Subnet Summer.
Nonetheless, here’s what happened this week:
TL;DR
Subnet Expansion & Governance Debates: The community is actively debating a proposed cap of 128 subnets, while major protocol upgrades (Yuma v3, `start_call` enforcement, and the Crowdloan pallet) are reshaping validator incentives and strengthening network scalability.
Subnet Milestones: SN3 (Templar) and SN9 (IOTA) released detailed papers outlining novel distributed training architectures, while SN17 (404gen) showcased real-world 3D game asset generation using Gaussian splats.
Capital Inflows & Institutional Adoption: Safello and TaoBase announced strategic TAO accumulation, signaling growing institutional conviction in Bittensor as the decentralized base layer for AI.
Product & App Integrations: TaoFi deployed Uniswap V3 on Bittensor EVM; Chutes hit 100B tokens processed/day and was integrated into Veldt’s unlimited chat app.
Valuation Frameworks Emerging: Thought leaders like Sami, Seth, Micah, and Teng Yan are reframing how subnet tokens are valued in the dTAO era, arguing that FDV alone is misleading and calling attention to yield, emissions, and time-horizon-based metrics.
Highlights of the Week
⚡️ The community is debating a proposed cap of 128 subnets to manage network scalability and reduce emissions dilution from inactive or speculative subnets. Under the proposal, subnets with the lowest emissions may be deregistered, with staked ALPHA refunded in TAO. While supporters argue this enforces quality and improves efficiency, critics warn it may penalize legitimate builders and stifle innovation.
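The proposed rule can be sketched as a simple selection over emissions (a hypothetical illustration of the proposal as described, not actual Bittensor chain logic; the function and data shapes are assumptions):

```python
# Hypothetical sketch of the proposed subnet cap: if the number of
# subnets exceeds the cap, the lowest-emission subnets are selected
# for deregistration (their staked ALPHA would be refunded in TAO).

SUBNET_CAP = 128  # cap under the proposal being debated

def subnets_to_deregister(subnets, cap=SUBNET_CAP):
    """Return subnets that would be cut under the proposed cap.

    `subnets` is a list of (subnet_id, emissions) pairs.
    """
    if len(subnets) <= cap:
        return []  # under the cap: nothing is deregistered
    ranked = sorted(subnets, key=lambda s: s[1])  # lowest emissions first
    return ranked[: len(subnets) - cap]
```

For example, with a toy cap of 3 and four subnets, only the single lowest-emission subnet would be flagged for deregistration.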
⚡️ Templar (SN3) has released a technical report introducing Gauntlet, its incentive mechanism for distributed AI training. The report details the successful training of a 1.2B parameter language model using permissionless pseudo-gradient contributions from over 200 GPUs across 20,000 training cycles. Results show competitive benchmark performance against industry-standard methods, validating the approach. Looking ahead, Templar aims to scale to 70B+ models and become a foundation-layer platform for distributed, crowd-powered AI development. You can also use the model on Chutes.
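The pseudo-gradient idea can be sketched in miniature (a simplified illustration of the general technique, not Templar's actual Gauntlet implementation): each miner trains locally, the delta between its local weights and the shared global weights is its pseudo-gradient, and the contributions are averaged into one global update.

```python
# Simplified pseudo-gradient aggregation (illustrative only; not
# Templar's code). Plain lists of floats stand in for model parameters.

def pseudo_gradient(local_weights, global_weights):
    # A miner's pseudo-gradient is the change its local training
    # produced relative to the last shared global weights.
    return [lw - gw for lw, gw in zip(local_weights, global_weights)]

def aggregate(global_weights, pseudo_grads, lr=1.0):
    # Average all miners' pseudo-gradients and apply them as one step.
    n = len(pseudo_grads)
    avg = [sum(g[i] for g in pseudo_grads) / n
           for i in range(len(global_weights))]
    return [gw + lr * d for gw, d in zip(global_weights, avg)]
```

With two miners pulling the global weights toward [1.0, 0.0] and [0.0, 1.0] respectively, the aggregated update lands at [0.5, 0.5].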
⚡️ Iota (SN9) also released its white paper introducing IOTA (Incentivized Orchestrated Training Architecture), a new framework for decentralized, scalable pretraining of LLMs. IOTA addresses limitations from SN9’s earlier work by shifting from isolated model training to a coordinated pipeline-parallel architecture, allowing model parameters to be distributed across heterogeneous miners. IOTA will officially launch its mainnet on June 2, 2025.
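Pipeline parallelism of this kind can be sketched as partitioning a model's layers into contiguous stages, one per miner (a generic illustration of the concept; IOTA's actual orchestration is specified in the white paper):

```python
# Generic pipeline-parallel layer partitioning (illustrative only).
# Splits contiguous layers into per-miner stages as evenly as possible,
# with earlier stages absorbing any remainder.

def partition_layers(num_layers, num_miners):
    """Return a list of (start, end) half-open layer ranges,
    one pipeline stage per miner."""
    base, rem = divmod(num_layers, num_miners)
    stages, start = [], 0
    for i in range(num_miners):
        size = base + (1 if i < rem else 0)  # spread the remainder
        stages.append((start, start + size))
        start += size
    return stages
```

For example, a 10-layer model split across 3 miners yields stages (0, 4), (4, 7), and (7, 10); in practice an orchestrator could weight stage sizes by each miner's hardware, which is what makes heterogeneous participation possible.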
⚡️ As institutional conviction around TAO grows, both Safello and TaoBase have emerged as early movers prioritizing TAO accumulation and ecosystem engagement. Safello, the leading Nordic crypto exchange, recently reallocated part of its Bitcoin treasury into TAO. Meanwhile, TaoBase, a newly launched U.S.-based operating company, aims to accelerate Bittensor's growth by acquiring TAO and incubating subnets, among other initiatives.