Amid the flurry of releases before the summit opened, something happened quietly. On February 7, the day Sarvam released its Bulbul V3 speech-synthesis model, Deedy Das — the man who had written "embarrassing" in May 2025 — posted on X, opening with three words: "I was wrong." He said that a year earlier he had believed training small Indic-language models was the wrong direction. "But they pulled off the pivot. They have the best speech synthesis, speech recognition, and text recognition models for Indic languages — that's something genuinely valuable." From "embarrassing" to "I was wrong" took eight months, and what triggered the turnaround wasn't a large model but a voice product.
Our primary finding is that dynamic-resolution vision encoders perform best overall, and especially well on high-resolution data. The comparison between dynamic resolution with 2048 versus 3600 maximum tokens is particularly interesting: the latter roughly corresponds to native HD 720p resolution and enjoys a substantial boost on high-resolution benchmarks, particularly ScreenSpot-Pro. Reinforcing the high-resolution trend, we find that multi-crop with S2 outperforms standard multi-crop despite using fewer visual tokens (i.e., fewer crops overall). The dynamic-resolution technique produces the most tokens on average; because of their tiling subroutine, S2-based methods are constrained by the original image resolution and often use only about half the maximum token budget. Based on these experiments we choose the SigLIP-2 NaFlex variant as our vision encoder.
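The 720p correspondence can be checked numerically: with a typical 16-pixel ViT patch, a 1280×720 image tiles into exactly an 80×45 = 3600-patch grid, which is why a 3600-token budget admits native HD input without downscaling. The sketch below illustrates this counting; the patch size and the aspect-preserving downscaling rule are simplifying assumptions for illustration, not the paper's exact preprocessing:

```python
import math

PATCH = 16  # assumed ViT patch size (illustrative, not from the paper)

def dynamic_resolution_tokens(width, height, max_tokens, patch=PATCH):
    """Token count for a NaFlex-style dynamic-resolution encoder
    (simplified): keep the native patch grid if it fits the budget,
    otherwise downscale both sides by a common factor so it does."""
    cols, rows = math.ceil(width / patch), math.ceil(height / patch)
    tokens = cols * rows
    if tokens <= max_tokens:
        return tokens
    # shrink the grid uniformly until it fits under max_tokens
    scale = math.sqrt(max_tokens / tokens)
    return math.floor(cols * scale) * math.floor(rows * scale)

# Native HD 720p fits the 3600-token budget exactly:
print(dynamic_resolution_tokens(1280, 720, max_tokens=3600))  # → 3600
# Under a 2048-token budget the same image must be downscaled:
print(dynamic_resolution_tokens(1280, 720, max_tokens=2048))  # → 1980
```

Under this model, the 2048-token configuration forces 720p inputs below their native resolution, which is consistent with the 3600-token variant's advantage on high-resolution benchmarks such as ScreenSpot-Pro.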