DeepSeek’s AI breakthrough rivals top models at a fraction of the cost, proving open source innovation is reshaping AI’s future. Is this an AI race or an open vs. closed battle?
It’s tricky. There is code involved, and the code is open source. There is a neural net involved, and it is released as open weights. What is not available is the “input” that went into the training. Releasing models as both “open source” and “open weights” in this sense is common, but you wouldn’t necessarily be able to replicate the outcome with $5M (or whatever it takes to train the foundation model), since you’d have to guess at what they used as their input training corpus.
Definitions are tricky, especially for terms the general public broadly considers virtuous or positive (cf. “organic”). I tend to deny that something is open source unless you can recreate any binaries/output AND it is presented in the “preferred form for modification” (i.e., the way the GPLv3 defines the “source form”).
A disassembled/decompiled binary might nominally be in some programming language (suitable input to a compiler for that language), but that doesn’t actually make it the source code for that binary, because it is not in the form that the entity best able to modify the binary (normally the original author) would prefer to make modifications in.
I view the training data as the source code of the model. The code supplied is a bespoke compiler for it, which emits a binary blob (the weights). A compiler is itself written in code, just like any other program. So what they released is the equivalent of the compiler’s source code, plus the binary blob it output when fed the training data (the source code), which they did NOT release.
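A toy sketch of the analogy (all names and the “training” logic here are hypothetical, chosen only to make the compiler metaphor concrete): the released training code plays the role of a compiler, the withheld corpus plays the role of source code, and the weights are the binary it emits.

```python
def train(corpus):
    """The released training code: a bespoke 'compiler' that turns a
    corpus (the 'source') into a blob of parameters (the 'binary').
    Real training is gradient descent; here we just count words as a
    stand-in for learned weights."""
    weights = {}
    for word in corpus.split():
        weights[word] = weights.get(word, 0) + 1
    return weights

# With an open-weights release you get train() (the compiler's source)
# and a finished weights blob -- but NOT the corpus, so you cannot
# re-run the 'compilation' and reproduce the blob yourself.
corpus = "the training data is the source code"  # hypothetical corpus
weights = train(corpus)
print(weights["the"])  # -> 2
```

Shipping `train()` and `weights` without `corpus` is exactly the situation described above: the compiler is open source, the binary is public, but the source that produced it stays private.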
This is probably the best explanation I’ve seen so far and really helped me actually understand what it means when we talk about “weights” for LLMs.