DeepSeek has sent shockwaves across the business and technology worlds. Amid headlines about geopolitical tensions and collapsing share prices, knowing precisely what DeepSeek's new AI models mean for businesses – from long-term AI strategy to day-to-day technology experimentation – is challenging.

So, to give technology and business leaders some grounding, we've answered five key questions that can help you move forward, however the news and hype cycle evolves in the weeks and months to come.

Why is DeepSeek in the news?

DeepSeek is a Chinese startup that released two new AI models – DeepSeek-R1 and DeepSeek-R1-Zero – on January 20, 2025. It's in the news because the models' performance appears to match that of rivals such as Llama, Gemini, Claude and OpenAI's o1 "reasoning model." This is despite them reportedly being trained using NVIDIA chips that are less advanced than the manufacturer's top-tier chips used by established vendors. (NVIDIA developed these chips specifically to comply with US government regulations on which chips can be exported to China – they have deliberately limited capabilities, which DeepSeek engineers mitigated through considerable ingenuity in their code.)

There have been significant consequences. NVIDIA's market value has fallen sharply, and the US tech industry more broadly has been left reeling at a Chinese player apparently beating them at their own game – despite only having access to ostensibly inferior hardware.

Is DeepSeek going to lower the cost of using AI for businesses?

DeepSeek-R1 comes in several smaller 'distilled' sizes that can be run on commodity hardware. This is significant: the ability to run a model that matches the performance of OpenAI's o1 yourself – instead of being beholden to third-party API costs – is a big deal. It's especially important if you're trying to do advanced things like agentic AI, where the AI may require many cycles to get the job done successfully.

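As a rough illustration of why those distilled sizes matter, here's a back-of-envelope sketch of the memory needed just to hold a model's weights. The parameter counts and precision levels below are illustrative assumptions, not official DeepSeek figures:

```python
def weights_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate memory (GB) needed to hold a model's weights.

    Ignores activations and KV-cache overhead, so real usage is higher.
    """
    return num_params * bytes_per_param / 1e9

# A hypothetical 7B-parameter distilled model in 16-bit floats (2 bytes/param):
print(round(weights_memory_gb(7e9, 2), 1))    # 14.0 GB
# The same model 4-bit quantized (0.5 bytes/param):
print(round(weights_memory_gb(7e9, 0.5), 1))  # 3.5 GB
```

At 4-bit quantization, a distilled model of that size fits comfortably on a single consumer GPU or a well-specced laptop, which is what makes local experimentation plausible in the first place.
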
Although precisely how much cheaper is hard to determine, it's believed DeepSeek's hardware and training costs were a fraction of those incurred by its rivals (some industry voices have disputed the figures, but there are strong indications the claims made are broadly accurate). In theory, then, this should make AI much cheaper for businesses: not only is the foundational model itself cheaper to train, using and running the model – in, say, an application – is cheaper too.

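To see why cheaper inference matters commercially, consider a simple comparison between paying a third-party API per token and running a model on your own capacity. Every number here is a made-up placeholder for illustration, not a real price:

```python
def monthly_cost_api(tokens: float, price_per_million: float) -> float:
    """Cost of a usage-billed third-party API for a month's traffic."""
    return tokens / 1e6 * price_per_million

def monthly_cost_self_hosted(fixed_infra: float) -> float:
    """Cost of self-hosting: roughly fixed, regardless of traffic."""
    return fixed_infra

tokens = 500e6      # hypothetical: 500M tokens/month
api_price = 10.0    # hypothetical: $10 per million tokens
infra = 2_000.0     # hypothetical: $2,000/month of GPU capacity

print(monthly_cost_api(tokens, api_price))  # 5000.0
print(monthly_cost_self_hosted(infra))      # 2000.0
```

The point of the sketch: API costs scale linearly with usage while self-hosting is roughly fixed, so the more token-hungry your workload (agentic AI especially), the more a cheap, runnable model changes the economics.
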
However, this is just an assumption; lots of questions remain. For instance, DeepSeek's cheaper infrastructure may come with tradeoffs yet to be identified. Even more importantly, it's worth bearing in mind what's known as the Jevons paradox: efficiency gains, rather than reducing overall costs as you might expect, actually lead to increased demand which, in turn, offsets the decrease in price.

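The Jevons paradox can be made concrete with a toy constant-elasticity demand model (the elasticity value and quantities below are illustrative assumptions, not forecasts): when demand is elastic enough, a price drop raises total spending rather than lowering it.

```python
def total_spend(price: float, base_demand: float, base_price: float, elasticity: float) -> float:
    """Total spend under constant-elasticity demand: Q = Q0 * (P/P0)^(-e)."""
    quantity = base_demand * (price / base_price) ** (-elasticity)
    return price * quantity

# Illustrative: the price halves, with a demand elasticity of 2.
before = total_spend(price=1.0, base_demand=100.0, base_price=1.0, elasticity=2.0)
after = total_spend(price=0.5, base_demand=100.0, base_price=1.0, elasticity=2.0)
print(before, after)  # 100.0 200.0
```

Halving the price quadruples demand in this toy model, so total spend doubles – the "cheaper AI" story could play out as "far more AI" rather than smaller bills.
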
Will DeepSeek reduce energy consumption?

DeepSeek's models suggest you can unlock an incredibly high standard of performance without the scale of electricity required by other established models. Consequently, many are now questioning the energy assumptions underpinning current AI investment. One of the potential benefits, though, is that it could help drive the adoption of green computing, in which the environmental impacts of computing are addressed through greater efficiency. (That said, this may also lead us back to the Jevons paradox, where energy consumption goes up as efficiency gains are realized.)

What this means for the likes of OpenAI and Google remains to be seen. These companies have planned huge investments in data centers and resources in the years to come: if DeepSeek really has proven you can do more with less, perhaps we will see these companies pivot. While that's currently just speculation, there's no doubt DeepSeek is forcing the industry to rethink how much energy is required to build and then run an effective AI system.

Will this spur another wave of AI innovation?

What DeepSeek appears to have achieved will likely encourage greater focus on efficiency – doing more with less. The challenges in the field have typically been framed in terms of scale: more computing power, more intensive model training, bigger models. One of the biggest lessons of DeepSeek is that there may be ways of innovating in AI that don't require greater scale but, instead, ingenuity and optimization.

It's also worth noting that DeepSeek-R1 is what's sometimes described as an 'open-weight' model – open, but not quite fulfilling the strict requirements to be called open source. This means it can be adapted and used in ways that proprietary systems cannot, arguably challenging the current dominance of proprietary models. When you combine this with decreased costs, the door is potentially open for a whole new set of companies to consider building their own models.

What should my next steps be?

The AI landscape moves so fast that advancements like this are going to keep happening. That's why it's vital to make sure your experimentation pipeline and your process for evaluating tools are agile enough to adapt to change. You never know: we could get yet another new model from a different vendor next week.

Right now, though, there are many potential use cases worth exploring, from building a simple chat application to leveraging it for coding. It's undeniably powerful, so see what you can do with it. As has been observed, "the foundation model layer being hyper-competitive is great for people building applications" – this is certainly an exciting time for organizations seeking to bring generative AI into production environments.

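For the simple chat application mentioned above, many DeepSeek deployments (self-hosted or hosted) expose an OpenAI-compatible HTTP API. A minimal sketch of building such a request follows; the model identifier is an assumption you'd replace with whatever your deployment actually serves:

```python
import json

def build_chat_payload(model: str, system_prompt: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_payload(
    model="deepseek-r1",  # assumed model identifier for your deployment
    system_prompt="You are a concise assistant.",
    user_message="Summarize the Jevons paradox in one sentence.",
)
print(json.dumps(payload, indent=2))
```

In a real application you would POST this body to your deployment's chat-completions endpoint (or use an OpenAI-compatible client pointed at it); because the request shape is the de facto standard, swapping models in and out of an experimentation pipeline stays cheap.
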
That all being said, it's nevertheless important to be mindful of the privacy risks. While that's true whatever AI model you use, some security and privacy experts have raised specific warnings about DeepSeek: they've expressed concern at how the Chinese government may be able to leverage DeepSeek data.

We're excited to experiment with DeepSeek and encourage the wider industry to continually evaluate and share the value they get from it. That's ultimately how we learn and innovate. Most importantly, it will help us continue to deliver more value for customers.