Bit is all we need: binary normalized neural networks
PaulHoule, 2 days ago
https://arxiv.org/abs/2509.07025
modeless, 2 days ago
> each parameter exists in two forms simultaneously during training: a full-precision 32-bit floating-point value (p) used for gradient updates, and its binarized counterpart (pb) used for forward computations

So this is only for inference. Also activations aren't quantized, I think?
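
For intuition, a minimal sketch of that dual representation, assuming PyTorch (not the paper's actual code): the full-precision weight p receives the gradient updates, while a binarized copy pb is what the forward pass computes with, via a straight-through estimator.

    import torch

    class BinarizedLinear(torch.nn.Linear):
        def forward(self, x):
            # pb: binarized counterpart used for forward computation
            pb = torch.where(self.weight >= 0,
                             torch.ones_like(self.weight),
                             -torch.ones_like(self.weight))
            # straight-through estimator: compute with pb, but let gradients
            # flow back to the full-precision weight p as if pb were p
            w = self.weight + (pb - self.weight).detach()
            return torch.nn.functional.linear(x, w, self.bias)

At inference time only pb would need to be stored, which is where the memory saving comes from; the fp32 copy exists only for training.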

nighthawk454 → modeless, 2 days ago
Yeah, but it's 'quantization aware' during training too, which presumably is what allows the quantization at inference to work
benob → modeless, 2 days ago
I wonder if one could store only the binary representation at training and sample a floating point representation (both weights and gradient) during backprop.
adastra22 → benob, 2 days ago
Back propagation on random data that is then thrown away would be pretty useless.
jampekka → modeless, 2 days ago
> Also activations aren't quantized, I think?

The very last conclusion: "Future work will focus on the implementation of binary normalization layers using single-bit arrays operations, as well as on quantizing layer activations to 8 or 16-bit precision. These improvements are expected to further enhance the efficiency and performance of the binary neural network models."
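
For reference, quantizing activations to 8-bit usually means something like symmetric per-tensor quantization; a minimal sketch, assuming PyTorch (the paper doesn't specify its scheme):

    import torch

    def quantize_activations_int8(x: torch.Tensor):
        # one scale for the whole tensor; per-channel scales are also common
        scale = x.abs().amax().clamp_min(1e-8) / 127.0
        q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
        return q, scale  # recover approximate values with q.float() * scale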

kouteiheika → modeless, 2 days ago
You don't necessarily have to store the parameters in fp32 for gradient updates; I experimented with it and got full fine-tuning of all parameters working with weights as low as 3-bit (a little bit more than 3-bit, because the block-wise scales were higher precision), which is essentially as low as you can go before "normal" training starts breaking down.
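A rough sketch of the kind of block-wise scheme described, with the bit width and block size as illustrative assumptions rather than the commenter's exact setup: low-bit integer codes per block plus one higher-precision scale per block, which is why the effective cost is "a little bit more than 3-bit".

    import torch

    def blockwise_quantize(w: torch.Tensor, bits: int = 3, block: int = 64):
        # assumes w.numel() is a multiple of the block size
        q_max = 2 ** (bits - 1) - 1                        # 3-bit -> codes in [-3, 3]
        w = w.reshape(-1, block)
        scale = w.abs().amax(dim=1, keepdim=True).clamp_min(1e-8) / q_max
        q = torch.clamp(torch.round(w / scale), -q_max, q_max)
        return q.to(torch.int8), scale                     # low-bit codes + fp scales

    def blockwise_dequantize(q: torch.Tensor, scale: torch.Tensor):
        return q.to(scale.dtype) * scale

With fp16 scales and 64-weight blocks that works out to 3 + 16/64 = 3.25 bits per parameter.
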
noosphr → modeless, a day ago
Yes, that's been the downside of these forever.

If you use quantized differentiation you can get away with using integers for gradient updates. Explaining how takes a paper and in the end it doesn't even work very well.

At university, way back at the end of the last AI winter, I ended up using genetic algorithms to train the models. It was very interesting because the weights were trained along with the hyperparameters. It was nowhere near practical, though, because gradient descent is so much better at getting real-world results in reasonable time frames - surprisingly, because it's more memory efficient.
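For flavor, a tiny sketch of that kind of setup (an evolution-strategy-style loop; the population size, mutation scheme, and co-evolved mutation scale are assumptions, not what was actually used back then):

    import numpy as np

    def evolve(loss_fn, dim, pop=50, gens=200, seed=0):
        rng = np.random.default_rng(seed)
        weights = rng.normal(size=(pop, dim))        # candidate weight vectors
        sigma = np.full(pop, 0.1)                    # per-individual mutation scale
        for _ in range(gens):
            fitness = np.array([loss_fn(w) for w in weights])
            keep = np.argsort(fitness)[: pop // 2]   # keep the better half
            weights, sigma = weights[keep], sigma[keep]
            # the "hyperparameter" (mutation scale) evolves along with the weights
            child_sigma = sigma * np.exp(rng.normal(0.0, 0.1, size=sigma.shape))
            child_w = weights + rng.normal(size=weights.shape) * child_sigma[:, None]
            weights = np.concatenate([weights, child_w])
            sigma = np.concatenate([sigma, child_sigma])
        return weights[np.argmin([loss_fn(w) for w in weights])]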

AmazingTurtle, 2 days ago
I'm gonna refer to this one here: https://news.ycombinator.com/item?id=45361007
s20n → AmazingTurtle, 2 days ago
Attention Is All You Need - The Beatles ft. Charlie Puth
JKCalhoun → s20n, a day ago
b/w "All You Need is Love".
amelius → s20n, a day ago
Attention is all Google needs. Apparently.

I'm sick of BigTech fighting for my attention.

mnky9800n → AmazingTurtle, 2 days ago
Yes, calling your paper this now makes me think it has no interesting results. It's kind of the opposite of what's intended.
gloomyday → AmazingTurtle, 2 days ago
This naming trend has been going on for 8 years. Incredible.
IshKebab → gloomyday, a day ago
It's on my naughty list, together with "... considered harmful", "The unreasonable effectiveness of ...", "... for fun and profit", "Falsehoods programmers believe about ...", "The rise and fall of ...".
fxtentacle, 2 days ago
These techniques are not new. And the reason why they’re usually not used is on page 9 in the paper. They require about 10x as many training iterations.
typpilol → fxtentacle, 2 days ago
Yea I saw that training perplexity and thought hmmm...
shomp → fxtentacle, 2 days ago
Turns out using floats is a feature and not a bug?
Dylan16807 → shomp, 2 days ago
No, I don't think so, in that I don't think anyone has ever called that a bug.
shomp → Dylan16807, a day ago
In the paper summary they did not call it a bug explicitly, but they do say there's a 32x improvement from using single bits instead.
reactordev → shomp, a day ago
To memory, sure. At the cost of 32x slower speeds.
Dylan16807 → shomp, a day ago
That's an obvious exaggeration. The competition is using smaller weights already, some of which are floating point and some of which aren't.

And they use full size floats for training.

imtringued → Dylan16807, 15 hours ago
That means their paper is actually worse than SOTA, which is concerned with QAT that trains natively in fp4, without keeping full-precision [0] weights around.

[0] "full precision" in ML usually means 16-bit floats like bfloat16

Dylan16807 → imtringued, 6 minutes ago
I wouldn't say "worse". It's focusing on inference cost and leaving training at a default for now.
personalityson → fxtentacle, 2 days ago
Unless each iteration is 90% faster
amelius → personalityson, a day ago
This.

In fact, it can be slower because hardware is probably not optimized for the 1-bit case, so there may be a lot of low-hanging fruit for hardware designers and we may see improvements in the next iteration of hardware.
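
For context, the standard way 1-bit weights are made fast on today's word-oriented hardware is bit packing: 64 binary values go into one 64-bit word, and a binary dot product becomes XNOR plus popcount, so each word-sized operation processes 64 weights at once. A minimal sketch of the idea, with plain Python integers standing in for machine words:

    def binary_dot(a_bits: int, b_bits: int, n: int = 64) -> int:
        # a_bits, b_bits: n sign bits packed into one integer (1 = +1, 0 = -1)
        agree = ~(a_bits ^ b_bits) & ((1 << n) - 1)   # XNOR: 1 where the signs match
        matches = bin(agree).count("1")               # popcount
        return 2 * matches - n                        # +1 per match, -1 per mismatch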

nlitened → amelius, a day ago
Isn't digital (binary) hardware literally optimized for 1-bit case by definition?
reactordev → nlitened, a day ago
People are confusing word size…

The CPU can handle up to word-size bits at once. I believe they mean that a lot of assembly was written for integer math (word size 4+), not bit math. However, it is unlikely we'll see improvements in this area because, by definition, using 64-bit floats already uses the max word size. So that's the max throughput. Sending 1 bit vs 64 bits would be considerably slower, so this entire approach is funny.

observationist → nlitened, a day ago
No, because there are algorithmic shortcuts that allow approximations and skipped steps compared to a strict bit-by-bit calculation: in-memory bit reads and implicit rules, among other structural advantages in how GPU and CPU instruction sets are implemented in hardware.
nickpsecurity → amelius, a day ago
FPGAs could be highly competitive for models with unusual but small bit widths, especially single bits, since their optimizers will handle that easily.
PaulHoule → fxtentacle, a day ago
When I was working for startups trying to develop foundation models circa 2015 we were concerned with training more than inference.

Today, with models that are actually useful, training cost matters much less than inference cost. A 10x increase in training costs is not necessarily prohibitive if you get a 10x decrease in inference costs.

nickpsecurity → PaulHoule, a day ago
I still don't have a GPT-3-class model that was trained without copyright infringement. I'd have so many uses for it, from research to production. What's stopping me is the $30 million training cost for 180B models. Even a 30B like Mosaic cost over a million dollars.

So, I strongly disagree, unless we're talking about the five or six companies that already spend tens of millions on training and keep repeating that. Outside of them, the medium to large models are done infrequently, or as one-offs, by a small number of other companies. Then, most of us are stuck with their pretraining efforts because we can't afford it ourselves.

On my end, I'd rather see a model that drops pretraining costs to almost nothing but costs 10-32x more to do inference. My uses would produce mere MB of output vs hundreds of GB to TB that pretraining requires. A competitive use that costs 32x current prices would probably be profitable for me. Optimizations, which are plentiful for inference, might bring it down further.

arthurcolle → nickpsecurity, a day ago
Why are you making something cheap more expensive than it needs to be?
nickpsecurity → arthurcolle, 21 hours ago
It's not cheap. It costs millions to $100 million depending on the model. I was responding to this tradeoff:

"A 10x increase in training costs is not necessarily prohibitive if you get a 10x decrease in inference costs."

Given millions and up, I'd like that to be 10x cheaper while inference was 10x more expensive. Then, it could do research or coding for me at $15/hr instead of $1.50/hr. I'd just use it carefully with batching.

imtringued → nickpsecurity, 15 hours ago
Calculating the gradient requires a forward pass (inference) and a backward pass (back propagation).

They cost roughly the same, with the backward pass being maybe 50% more expensive. So a full training step, forward plus backward, is let's say three times the cost of a forward pass alone.

You can't make training faster by making inference slower.

nickpsecurity → imtringued, 11 hours ago
I was responding to their claim by starting with an assumption that it may be correct. I don't know the cost data myself. Now, I'll assume what you say is true.

That leaves computation and memory use of two passes plus interlayer communication.

I think backpropagation doesn't occur in the brain, since it appears to use local learning, but global optimization probably happens during sleep/dreaming. I have a lot of papers on removing backpropagation, Hebbian learning, and local learning rules.

From there, many are publishing how to do training at 8-bit and below. A recent one did a mix of low-bit training with sub-1-bit storage for weights. The NoLayer architecture might address interlayer communication better.

People keep trying to build analog accelerators, but there have been mismatches between the models' features and the hardware. Recent work has come up with analog NNs that work well with analog hardware.

A combination of those would likely get cost down dramatically on both inference and training. Also, energy use would be lower.

PaulHoule → nickpsecurity, a day ago
I think you're right but there has to be a limit. If I'm training a model I'm going to do a significant amount of inference to evaluate it and support the training.
pixelpoet, 2 days ago
The critical "1" is missing from the title...
Dylan16807 → pixelpoet, 2 days ago
"Bit" being singular gets the intent across just fine.
pixelpoet → Dylan16807, 2 days ago
Disagree
hirako2000 → pixelpoet, a day ago
I also* disagree; otherwise we would say "a kilo of meat is enough"?
pixelpoet → hirako2000, a day ago
Yes, that's the point I was making, and the other person said it's fine without saying how many bits, not me.
hirako2000 → pixelpoet, a day ago
My bad, I meant that I disagree with parent, I edited it. I agree with you.
user823749 → Dylan16807, a day ago
Yes, just like in "16 bit integer". No confusion at all.
jongjong, 2 days ago
This reminds me of my university days. For one of the assignments, we had to write our own ANN from scratch for handwriting recognition, and we implemented a step activation function because that was easier than a sigmoid; basically, each layer would output one or zero, though I guess the weights themselves were scalars. It's just the node outputs that were 1 or 0. But this was convenient because the output of the final layer could be interpreted as a binary number, which could be converted straight into an ASCII character for comparison and backpropagation.
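Presumably something like the following sketch, reconstructed from the description (the 8-output head and shapes are assumptions, not the original assignment code):

    import numpy as np

    def step(x):
        return (x > 0).astype(np.float64)        # each node outputs 0 or 1

    def forward(x, layers):                      # layers: list of (W, b) pairs
        for W, b in layers:
            x = step(x @ W + b)
        return x                                 # final layer: 8 binary outputs

    def to_ascii(bits):
        code = int("".join(str(int(b)) for b in bits), 2)
        return chr(code)                         # e.g. [0,1,0,0,0,0,0,1] -> 'A'
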
meindnoch → jongjong, 2 days ago
>could be interpreted as a binary which could be converted straight into an ASCII character for comparison and backpropagation.

There's nothing to backpropagate with a step function. The derivative is zero everywhere.

steppi → meindnoch, a day ago
It sounds like jongjong was probably using surrogate gradients. You keep the step activation in the forward pass but replace it with a smooth approximation in the backward pass.
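A minimal sketch of a surrogate gradient, assuming PyTorch: a hard step in the forward pass, the derivative of a sigmoid standing in for it in the backward pass.

    import torch

    class StepSurrogate(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return (x > 0).float()               # hard 0/1 step in the forward pass

        @staticmethod
        def backward(ctx, grad_out):
            (x,) = ctx.saved_tensors
            s = torch.sigmoid(x)
            return grad_out * s * (1 - s)        # smooth sigmoid derivative instead
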
bjourne → steppi, a day ago
Yeah, but then there is no performance benefit over plain old SGD.
steppi → bjourne, a day ago
Yeah, I think surrogate gradients are usually used to train spiking neural nets where the binary nature is considered an end in itself, for reasons of biological plausibility or something. Not for any performance benefits. It's not an area I really know that much about though.
nickpsecurity → steppi, a day ago
There are performance benefits when they're implemented in hardware. The brain is a mixed-signal system whose massively parallel, tiny, analog components keep it ultra-fast at ultra-low energy.

Analog NNs, including spiking ones, share some of those properties. Several chips, like TrueNorth, are designed to take advantage of that on the biological side. Others, like Mythic AI's, are accelerating normal types of ML systems.

jongjong → steppi, 12 hours ago
I can't remember the name of the algorithm we used. It wasn't doing gradient descent but it was a similar principle; basically adjust the weights up or down by some fixed amount proportional to their contribution to the error. It was much simpler than calculating gradients but it still gave pretty good results for single-character recognition.
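That sounds roughly like a perceptron-style rule; a purely illustrative guess at what the update might have looked like (not the actual assignment code):

    import numpy as np

    def update_weights(W, x, y_true, y_pred, step=0.01):
        err = y_true - y_pred               # per-output error, entries in {-1, 0, 1}
        # fixed-size nudge, scaled by each input's contribution to that error
        return W + step * np.outer(x, err)
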
forntentacl, 2 days ago
This paper ignores 50+ years of research in the domain of quantized networks and quantized training algorithms, and reaches wrong conclusions out of sheer ignorance.

TLDR abstract of a draft paper I wrote years ago, for those interested in the real limits of quantized networks:

We investigate the storage capacity of single‐layer threshold neurons under three synaptic precision regimes—binary (1‐bit), ternary (≈1.585‐bit), and quaternary (2‐bit)—from both information‐theoretic and algorithmic standpoints. While the Gardner bound stipulates maximal loads of α=0.83, 1.5 and 2.0 patterns per weight for the three regimes, practical algorithms only reach α_alg≈0.72, 1.0 and 2.0, respectively. By converting these densities into storage‐efficiency metrics—bits of synaptic memory per stored pattern—we demonstrate that only quaternary weights achieve the theoretical optimum in realistic settings, requiring exactly 1 bit of memory per pattern. Binary and ternary schemes incur 39 % and 58 % overheads, respectively.
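
The overhead figures follow from dividing bits of storage per weight by the achievable patterns per weight; a quick check of the numbers:

    import math

    regimes = {                        # bits per weight, achievable load alpha_alg
        "binary":     (1.0,          0.72),
        "ternary":    (math.log2(3), 1.0),    # ~1.585 bits
        "quaternary": (2.0,          2.0),
    }
    for name, (bits, alpha) in regimes.items():
        bits_per_pattern = bits / alpha
        print(f"{name}: {bits_per_pattern:.2f} bits per stored pattern "
              f"({(bits_per_pattern - 1) * 100:.0f}% overhead)")
    # binary: 1.39 bits per stored pattern (39% overhead)
    # ternary: 1.58 bits per stored pattern (58% overhead)
    # quaternary: 1.00 bits per stored pattern (0% overhead)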

naasking → forntentacl, a day ago
Is this actually equivalent to classical forms of quantization, though? The paper has an extensive discussion of quantization on pages 2 and 3. This paper is not just a rehash of earlier work, but pushes single-bit precision to more parts of the system.
thijson, a day ago
How does this compare to:

https://arxiv.org/pdf/1811.11431