When the Chinese firm DeepSeek dropped a large language model called R1 last week, it sent shock waves through the US tech industry. Not only did R1 match the best of the homegrown competition, it was built for a fraction of the cost--and given away for free.

The US stock market lost $1 trillion, President Trump called it a wake-up call, and the hype was dialed up yet again. "DeepSeek R1 is one of the most amazing and impressive breakthroughs I've ever seen--and as open source, a profound gift to the world," Silicon Valley's kingpin investor Marc Andreessen posted on X.

But DeepSeek's innovations are not the only takeaway here. By publishing details about how R1 and a previous model called V3 were built and releasing the models for free, DeepSeek has pulled back the curtain to reveal that reasoning models are a lot easier to build than people thought. The company has closed the gap with the world's very top labs.

The news kicked competitors everywhere into gear. This week, the Chinese tech giant Alibaba announced a new version of its large language model Qwen, and the Allen Institute for AI (AI2), a top US nonprofit lab, announced an update to its large language model Tulu. Both claim that their latest models beat DeepSeek's equivalent.