The gap between open and closed models has been narrowing.
Resolves True if an open-weights model scores within 2% of the leading closed model on each of MMLU, HumanEval, and GPQA.
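The criterion can be sketched as a simple check. This is a minimal sketch under two assumptions not stated in the source: "within 2%" is read as 2 absolute percentage points, and it must hold on every listed benchmark; all scores below are hypothetical placeholders, not real model results.

```python
# Sketch of the resolution check. Assumes "within 2%" means within
# 2 absolute percentage points on every listed benchmark.

BENCHMARKS = ["MMLU", "HumanEval", "GPQA"]

def resolves_true(open_scores: dict, closed_scores: dict, margin: float = 2.0) -> bool:
    """True iff the open model trails by at most `margin` points on every benchmark."""
    return all(
        closed_scores[b] - open_scores[b] <= margin
        for b in BENCHMARKS
    )

# Hypothetical example: the open model trails by 1.5, 0.8, and 2.4 points.
open_model = {"MMLU": 86.5, "HumanEval": 89.2, "GPQA": 57.6}
closed_model = {"MMLU": 88.0, "HumanEval": 90.0, "GPQA": 60.0}

print(resolves_true(open_model, closed_model))  # GPQA gap of 2.4 exceeds 2.0
```

A relative reading (within 2% of the closed model's score) would give a slightly looser threshold on high-scoring benchmarks; the question text does not specify which is intended.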
Arguments for narrowing:
- Meta is investing heavily in open-weights releases.
- Open training techniques are steadily closing the quality gap.
- Compute costs are dropping, lowering the barrier to training competitive models.
Arguments against:
- Frontier labs retain proprietary data advantages.
- Post-training techniques remain closely held by the leading labs.
- Benchmark parity does not imply real-world parity; a model can match headline scores while lagging on open-ended tasks.