30B-Parameter MiroThinker 1.5 Unveiled, Challenging the 'Bigger is Better' Mantra

TMTPOST -- In an AI arms race toward trillion-parameter models, Miromind.ai is making a contrarian bet. The company has released MiroThinker 1.5, a new family of agentic AI models where its 30-billion-parameter version demonstrates performance on par with, or in some cases exceeding, that of 1-trillion-parameter giants.

This marks a potential turning point in the AI industry, challenging the long-held "bigger is better" scaling law and introducing a new paradigm focused on "intelligence density" rather than sheer size.

Why This Matters: A Paradigm Shift in AI Efficiency

Just as other innovators have disrupted the AI landscape by decoupling performance from brute-force capital expenditure, MiroThinker 1.5 is poised to do the same for agentic reasoning. The AI industry has been trapped in an overheated race for computational mass, but Miromind believes that true intelligence isn't about memorizing the internet — it's about finding and verifying information.

"The future of AI is not about bigger brains. It's about better researchers," the company stated. This shift from "memorized intelligence" to "native intelligence" is at the heart of their mission.

A New Scaling Law: Interactive Scaling

Miromind's core innovation is Interactive Scaling. Instead of training a model to be a digital oracle, MiroThinker is trained to be an expert researcher. It learns to interact with the external world — to search for evidence, verify facts, and revise its own conclusions. This "reason-verify-revise" loop is embedded in its architecture, allowing it to function more like a scientist than a test-taker.
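
To make the loop concrete, here is a minimal Python sketch of a reason-verify-revise cycle. Everything in it (the Answer type and the llm_reason, search_web, llm_verify, and llm_revise helpers) is a hypothetical stand-in, since the article does not describe MiroThinker's actual interfaces; a real agent would call the model and a search tool where this sketch returns canned values.

from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    claims: list[str]

# Hypothetical stand-ins: real implementations would call the language
# model and a search API instead of returning placeholder values.

def llm_reason(question: str) -> Answer:
    # Draft an initial answer and extract the claims it rests on.
    return Answer(text=f"Draft answer to: {question}", claims=[question])

def search_web(claims: list[str]) -> list[str]:
    # Gather external evidence for each claim (stubbed here).
    return [f"evidence for: {c}" for c in claims]

def llm_verify(answer: Answer, evidence: list[str]) -> bool:
    # Check the draft against the gathered evidence (stubbed here).
    return len(evidence) >= len(answer.claims)

def llm_revise(answer: Answer, evidence: list[str]) -> Answer:
    # Rewrite the draft in light of the evidence (stubbed here).
    return Answer(text=answer.text + " (revised)", claims=answer.claims)

def research(question: str, max_rounds: int = 5) -> str:
    # The reason-verify-revise loop: draft, check against the world,
    # revise, and stop once the evidence supports the answer.
    answer = llm_reason(question)
    for _ in range(max_rounds):
        evidence = search_web(answer.claims)
        if llm_verify(answer, evidence):
            return answer.text
        answer = llm_revise(answer, evidence)
    return answer.text

if __name__ == "__main__":
    print(research("What moved the Nasdaq this week?"))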

This approach leads to a dramatic increase in efficiency. By focusing on interaction, Miromind has created a model that is not only powerful but also sustainable.

Performance That Punches Above Its Weight

MiroThinker 1.5 outperformed its peers across a range of benchmarks designed to test agentic search and complex reasoning.

Table 1: MiroThinker 1.5 models (highlighted) demonstrate competitive performance against significantly larger models.

As the table indicates, the 235B model is a top performer, while the 30B model holds its own against models over 30 times its size. To visualize this, the chart below zooms in on the BrowseComp scores.

Figure 1: BrowseComp score comparison shows the 30B MiroThinker model competing with models orders of magnitude larger.

When compared directly with Kimi-K2-Thinking, a 1-trillion-parameter model, MiroThinker-1.5-30B not only achieved a higher score on the Chinese-language BrowseComp benchmark but did so at 1/20th of the inference cost and with significantly faster response times.

From Theory to Practice: Real-World Applications

Beyond benchmarks, MiroThinker 1.5 is already demonstrating its capabilities on complex, real-world tasks. Here are a few example prompts users can try in the company's live demo:

● Financial Markets: Predict the impact of next week's events on the Nasdaq Index

● Entertainment: Forecast the most likely Best Picture nominee for the 2026 Oscars

● Sports: Predict the most likely teams to reach the 2026 Super Bowl

These examples highlight the model's ability to go beyond simple information retrieval to perform multi-step reasoning and analysis in domains filled with uncertainty.

A Commitment to Open Research

In a move that echoes the strategy of other disruptive players in the AI space, Miromind is open-sourcing its 30B- and 235B-parameter models on Hugging Face and GitHub.
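
For readers who want to try the open-sourced weights, the sketch below shows how such a checkpoint is typically loaded with the Hugging Face transformers library. The repository id is a guess for illustration only; the actual name should be taken from Miromind's Hugging Face model card.

# Typical loading pattern for an open-weights chat model via transformers.
# The repo id below is hypothetical; look up the real one on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "miromind-ai/MiroThinker-1.5-30B"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Predict the most likely teams to reach the 2026 Super Bowl."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))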

This commitment to open research allows the community to build upon their work and verify the impressive efficiency gains for themselves.

Miromind's methodology has already been validated by its success in topping the global leaderboard on Polymarket, a prediction market platform where predictive accuracy is paramount.

The Road Ahead

With the release of MiroThinker 1.5, Miromind.ai, founded by renowned entrepreneur Tianqiao Chen and AI scientist Jifeng Dai, has thrown down the gauntlet, not just to its direct competitors, but to the entire paradigm of scaling laws. Their achievement suggests that the future of AI may not be a race to build the biggest model, but a competition to create the most efficient and effective researcher.

As the industry grapples with the immense costs and energy consumption of large-scale AI, this focus on "intelligence density" may prove to be the more sustainable and ultimately more powerful path forward.
