Saturday, January 25, 2025

"Why everyone in AI is freaking out about DeepSeek"

From VentureBeat, January 23:

As of a few days ago, only the nerdiest of nerds (I say this as one) had ever heard of DeepSeek, a Chinese AI subsidiary of the equally evocatively named High-Flyer Capital Management, a quantitative analysis (or quant) firm launched in 2015.

Yet within the last few days, it’s been arguably the most discussed company in Silicon Valley. That’s largely thanks to the release of DeepSeek-R1, a new large language model (LLM) that performs “reasoning” similar to OpenAI’s current best-available model o1 — taking multiple seconds or minutes to answer hard questions and solve complex problems as it reflects on its own analysis in a step-by-step, or “chain of thought” fashion.

Not only that, but DeepSeek-R1 scored as high as or higher than OpenAI’s o1 on a variety of third-party benchmarks (tests to measure AI performance at answering questions on various subjects), and was reportedly trained at a fraction of the cost (around $5 million), with far fewer graphics processing units (GPUs), the most advanced of which are under a strict export embargo imposed by the U.S., OpenAI’s home turf.

But unlike o1, which is available only to paying ChatGPT subscribers of the Plus tier ($20 per month) and more expensive tiers (such as Pro at $200 per month), DeepSeek-R1 was released as a fully open-source model, which also explains why it has quickly rocketed up the charts of the AI code-sharing community Hugging Face’s most downloaded and active models.

Also, thanks to the fact that it is fully open-source, people have already fine-tuned and trained many variations of the model for different task-specific purposes, such as making it small enough to run on a mobile device, or combining it with other open-source models. And if you want to use it for development purposes, DeepSeek’s API costs are more than 90% lower than those of the equivalent o1 model from OpenAI....
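For readers curious what "using it for development purposes" looks like in practice: DeepSeek's API is generally described as OpenAI-compatible, so a request to the R1 reasoning model can be built as an ordinary JSON chat-completion payload. This is a minimal sketch, not from the article; the endpoint URL and the `deepseek-reasoner` model name are assumptions and should be checked against DeepSeek's own documentation.

```python
import json

# Assumed OpenAI-compatible endpoint; verify against DeepSeek's docs.
DEEPSEEK_ENDPOINT = "https://api.deepseek.com/chat/completions"

def build_r1_request(prompt: str) -> str:
    """Build a JSON chat-completion request body for the R1 model."""
    body = {
        "model": "deepseek-reasoner",  # assumed model id for DeepSeek-R1
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(body)

# The resulting payload would be POSTed to DEEPSEEK_ENDPOINT with an
# Authorization: Bearer <api-key> header, exactly as with OpenAI's API.
payload = build_r1_request("Prove that the square root of 2 is irrational.")
```

Because the request shape matches OpenAI's, existing OpenAI client code can typically be pointed at DeepSeek by swapping only the base URL, model name, and API key.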

....MUCH MORE