Facebook is developing its own chips for machine learning and video transcoding

Recently, The Information reported, citing two sources, that Facebook is developing a machine learning chip to handle tasks such as recommending content to users. The sources also revealed that Facebook is developing a separate video transcoding chip to improve the experience of watching videos and live streams in its apps.
In fact, Facebook brought in a team of engineers for chip design back in 2019 and partnered with Qualcomm, Intel, Broadcom, and other chip companies to jointly develop semi-custom ASICs for AI inference and video transcoding. Unlike that earlier approach of modifying designs together with external chip companies, this time Facebook intends to develop fully custom, in-house ASICs for specific purposes, though they will not replace the existing chips in its data centers.
As demand for services such as video and advertising grows, the earlier semi-custom ASIC processors may no longer be able to keep up with Facebook's growing inference and transcoding workloads in its data centers. According to public information, the semi-custom ASICs in Facebook's data centers process nearly 250 million videos every day. A fully custom, in-house ASIC would better match the company's algorithms and deliver higher energy efficiency.
When it comes to in-house data center chips, Facebook has actually been more conservative than the other internet giants. Google recognized the processing bottleneck in its data centers as early as 2013 and began developing the TPU, putting it into service in 2015 to support Search, Street View, Translate, and other products. Following that success in data center chips, Google has also pushed into mobile: its upcoming Pixel 6 series phones are expected to use Google's self-developed Tensor SoC.
As the world's leading cloud service provider, Amazon designed the Graviton server processors for its customers in 2018, and by 2021 there were reports that Amazon was developing networking chips to reduce its dependence on Broadcom and other chip makers.
Domestically, Alibaba's chip subsidiary T-Head (Pingtouge) launched its first self-developed AI inference chip, the Hanguang 800, in 2019; Baidu launched its Kunlun AI chip series in 2018 and reached mass production in early 2020, with some 20,000 units already deployed at scale.
Cambricon, an AI chip company listed on the STAR Market in 2020, noted in its annual report that as artificial intelligence technology, represented by deep learning, becomes widely applied in daily life and traditional industries, demand for underlying chip computing power has grown rapidly, far outpacing Moore's Law. AI computing is typically characterized by large compute volumes, high concurrency, and frequent memory access, and the computing patterns in different sub-fields (such as vision, speech, and natural language processing) are highly diverse. All of this poses huge challenges to chip design, integration, manufacturing processes, and even the supporting system software.
In addition, compared with general-purpose processors, custom ASICs are actually much cheaper to develop. For the internet giants, their own demand for chips is large enough: self-developed chips can better match their own algorithms, meet their own needs, and reduce overall costs, while also letting them build competitive advantages through hardware-software integration. Reducing dependence on external chip suppliers matters all the more amid the industry-wide chip shortage of recent years.
