# Microsoft's Rapid Adoption of DeepSeek - The Verge

**Original Title:** Inside Microsoft’s quick embrace of DeepSeek
**Source:** The Verge
**Publication Date:** 30.01.2025
**URL:** https://www.theverge.com/notepad-microsoft-newsletter/603170/microsoft-deepseek-ai-azure-notepad
**Author:** Tom Warren

# AI-Generated Summary

Microsoft quickly adopted DeepSeek's R1 model, showing agility in incorporating AI advances, with CEO Satya Nadella leading the effort.

- Microsoft quickly adopted DeepSeek's R1 model.
- Microsoft engineers worked to integrate the R1 model into Azure AI Foundry and GitHub within 10 days.
- Microsoft CEO Satya Nadella was prepared for this shift.
- Nadella had already warned about algorithmic breakthroughs that result in compute efficiency.
- DeepSeek claims the model's training cost US$5.6 million.
- Distilled R1 models can run locally on Copilot Plus PCs.
- Microsoft is deploying the model on Azure AI Foundry, GitHub, and Copilot Plus PCs.

# Original Article

![[Clippings/publish/attachments/247141_NOTEPAD_DEEPSEEK_AI_MICROSOFT_CVIRGINIA-1.jpg]]

![[Clippings/publish/attachments/profilephoto.0.jpg|Tom Warren]]

[Tom Warren](https://www.theverge.com/authors/tom-warren) is a senior editor and author of [*Notepad*](https://www.theverge.com/notepad-microsoft-newsletter), who has been covering all things Microsoft, PC, and tech for over 20 years.

The Chinese startup DeepSeek shook up the world of AI last week after showing its supercheap R1 model could compete directly with OpenAI’s o1. While it wiped nearly $600 billion off Nvidia’s market value, Microsoft engineers were quietly working at pace to embrace the partially open-source R1 model and get it ready for Azure customers.

It was a decision that came from the very top of Microsoft. Sources familiar with Microsoft’s DeepSeek R1 deployment tell me that the company’s senior leadership team and CEO **Satya Nadella** moved with haste to get engineers to test and [deploy R1 on Azure AI Foundry and GitHub](https://www.theverge.com/news/602162/microsoft-deepseek-r1-model-azure-ai-foundry-github) over the past 10 days. For a corporation the size of Microsoft, it was an unusually quick turnaround, but there are plenty of signs that Nadella was ready and waiting for this exact moment.

While the open-source model has upended Wall Street’s idea of how much AI costs, Nadella seemed to know that something like DeepSeek was coming eventually. Appearing on [the *BG2* podcast in early December](https://www.youtube.com/watch?v=9NtsnzRFJ_o), he warned of the exact thing DeepSeek went on to achieve weeks later: an algorithmic breakthrough that results in compute efficiency.

How this breakthrough has been achieved is still up for debate. Microsoft and OpenAI have [reportedly been investigating](https://www.theverge.com/news/601195/openai-evidence-deepseek-distillation-ai-data) whether the Chinese rival used OpenAI’s API to train DeepSeek’s models using a technique called distillation. That, too, was a threat Nadella warned about.

“It’s impossible to control distillation,” Nadella said on December 12th. “You don’t even have to do anything. You just … reverse engineer that capability, and you do it in a more compute efficient way.” He even joked that this approach was “kind of like piracy.”

On Christmas Day, DeepSeek released its V3 reasoning model, the foundation for the R1 release early last week.
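For readers unfamiliar with the distillation technique Nadella describes above, here is a minimal, hypothetical sketch of the idea, assuming a Python environment with PyTorch and Hugging Face `transformers`: a small "student" model is fine-tuned to imitate answers written by a stronger "teacher". When the teacher is only reachable through a chat API, its internal logits are not available, so the student simply trains on the teacher's sampled output text. The prompts, the placeholder `collect_teacher_outputs` helper, and the tiny `sshleifer/tiny-gpt2` student are illustrative only and say nothing about how DeepSeek or OpenAI actually operate.

```python
# Minimal, hypothetical sketch of sequence-level distillation: fine-tune a
# small student on answers written by a stronger teacher. All names here are
# placeholders, not a description of any real DeepSeek or OpenAI pipeline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def collect_teacher_outputs(prompts):
    """Stand-in for querying a teacher model over a chat API.
    A real pipeline would send one chat-completions request per prompt
    and keep the returned text as the training target."""
    return [(p, f"(teacher answer to: {p})") for p in prompts]

tokenizer = AutoTokenizer.from_pretrained("sshleifer/tiny-gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token
student = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")

pairs = collect_teacher_outputs(["What is 2 + 2?", "Name a prime number."])
texts = [f"{prompt}\n{answer}" for prompt, answer in pairs]

batch = tokenizer(texts, return_tensors="pt", padding=True)
# Ignore padding positions when computing the loss.
labels = batch["input_ids"].masked_fill(batch["attention_mask"] == 0, -100)

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

# One supervised step: plain cross-entropy against the teacher-written text,
# i.e. the student learns to imitate the teacher's behaviour.
outputs = student(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
print(f"distillation loss: {outputs.loss.item():.3f}")
```

A real distillation run would use a very large prompt set and many training steps; the point of the sketch is only that the signal being copied is the teacher's output text, which is why Nadella argued it is effectively impossible to control.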
DeepSeek’s progress might have looked like it came out of nowhere to Wall Street, but anyone following AI closely, like Nadella, will have witnessed the progress the Chinese AI lab has made with its consistent releases throughout 2024. DeepSeek claims its final training run cost $5.6 million, and AI labs in the US are currently replicating the R1 recipe to see if DeepSeek’s numbers are accurate.

It looks like Microsoft is happy with the quality of the model either way, as it’s not just [Azure AI Foundry and GitHub](https://www.theverge.com/news/602162/microsoft-deepseek-r1-model-azure-ai-foundry-github) where the software maker is looking to deploy R1. Distilled R1 models [can now run locally on Copilot Plus PCs](https://blogs.windows.com/windowsdeveloper/2025/01/29/running-distilled-deepseek-r1-models-locally-on-copilot-pcs-powered-by-windows-copilot-runtime/), starting with Qualcomm Snapdragon X first and Intel chips later.
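As a companion illustration of what "ready for Azure customers" means in practice, below is a hedged sketch of client code calling a hosted R1 deployment. The endpoint URL, header name, model identifier, and environment variable are assumptions made for illustration; the request body follows the widely used OpenAI-style chat-completions shape, and the real values and schema come from the deployment's own page in Azure AI Foundry.

```python
# Hypothetical client call to a hosted R1 deployment. The endpoint, key
# variable, header, and model name are placeholders, not confirmed values.
import os
import requests

ENDPOINT = "https://<your-resource>.services.ai.azure.com/models/chat/completions"  # placeholder
API_KEY = os.environ["AZURE_AI_KEY"]  # assumed environment variable

payload = {
    "model": "DeepSeek-R1",  # assumed deployment name
    "messages": [
        {"role": "user", "content": "Summarize what model distillation is."}
    ],
    "max_tokens": 256,
}

response = requests.post(
    ENDPOINT,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```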