Will the Noise Step paper work for LLMs?
Resolved NO (Jun 25)

Will the noise_step method hold up in 6 months?

A new method for training neural networks, called noise_step, has gained traction on Twitter. The core claim is that it allows neural networks to be trained at 1.58-bit precision instead of the standard fp16 precision.
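
For context, "1.58-bit" refers to ternary weights drawn from {-1, 0, +1}: each weight carries log2(3) ≈ 1.58 bits of information, the same figure used by BitNet b1.58. The sketch below only illustrates what ternary quantization of a weight tensor looks like; the absmean threshold is an assumption borrowed from BitNet-style quantization, and this is not noise_step's training procedure.

```python
import numpy as np

def ternarize(weights: np.ndarray) -> np.ndarray:
    """Quantize a weight tensor to the ternary set {-1, 0, +1}.

    Uses a per-tensor mean-absolute-value (absmean) scale, a common
    choice in BitNet-style ternary quantization. This is only an
    illustration of 1.58-bit weights, not the noise_step algorithm.
    """
    scale = np.mean(np.abs(weights))              # per-tensor scale
    return np.clip(np.round(weights / (scale + 1e-8)), -1.0, 1.0)

# Example: quantize a small random fp16 weight matrix to ternary values.
w = np.random.randn(4, 4).astype(np.float16)
print(ternarize(w))
```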

You can explore more about noise_step through the following links:

This market resolves YES if, within six months:

  • A model with at least 100 million parameters is successfully trained using noise_step at 1.58-bit precision,

  • With verifiable results that align with the original claims (e.g., comparable or better performance than f16-based training).

The market resolves NO if:

  • The method is debunked,

  • Demonstrated to be ineffective,

  • Or if no verifiable evidence of its success on a model of the specified size emerges within the time frame.


I see no evidence that anyone has even tried to replicate noise_step, let alone that it works. BitNet has continued to release papers, but they aren't using the same method as noise_step, and they have the same issue that Brickner was originally trying to fix (i.e., they did backprop, and they did it in full fp16), so BitNet v2 does not qualify.
