Moreover, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an ample token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks