AI Chip Hegemony: Huawei's AI Chip

Release time: 2017-08-29 18:01:32 | Author: admin | Views: 2547

Yu Chengdong, senior vice president of Huawei, posted a video on his Weibo account to build momentum for Huawei's upcoming AI chip. He said that "the pursuit of speed never stops at imagination" and predicted that the AI chip will appear at IFA 2017 on September 2.

At Huawei's mid-year performance media communication meeting last month, Yu Chengdong revealed that the AI chip will be released this autumn and that Huawei will be the first manufacturer to bring an artificial intelligence processor into smartphones. In addition, at the 2017 China Internet Conference, Yu Chengdong said that the chip manufactured by Huawei HiSilicon will integrate CPU, GPU and AI functions, and may be based on the new AI chip design that Arm unveiled at Computex this year.

According to Yu Chengdong's video today, Huawei's AI processor is expected to significantly improve the Kirin 970's data processing speed. If the AI chip makes it into the Huawei Mate 10, due to be released in October, the Mate 10's data processing capability will be well worth looking forward to.

Like Huawei, global technology giants such as Intel, Lenovo, NVIDIA, Google and Microsoft are actively embracing AI, and staking out positions in AI chips has become a top priority for them.


Intel

Regarding the importance of AI chips, Song Jiqiang, President of Intel China Research Institute, pointed out in an interview with Xinzhiyuan this month that technology is needed to process huge volumes of data and turn them into value for customers, and that in this process chips are without doubt extremely important:

By 2020, a conservative estimate puts the number of interconnected devices worldwide at 50 billion. Future data will come from all kinds of device terminals, no longer just from people making phone calls, using their phones and sending emails. Driverless cars, smart homes and cameras are all generating data.

In the future, every driverless car will effectively be a server, and each car will generate more than 4,000 GB of data per day. That volume of data cannot be transmitted over 5G, so much of it must be processed and analyzed locally and only uploaded selectively. Locally, this will require many technologies that surpass today's server technology.
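To put those figures in perspective, here is a minimal back-of-envelope sketch (a hypothetical illustration, assuming the quoted 4,000 GB is generated evenly over 24 hours) of the sustained uplink one car would need if all of that data were sent off-board:

```python
# Back-of-envelope check of the "4,000 GB per car per day" figure quoted above.
# Assumption (not from the article): the data is generated evenly over 24 hours.

GB_PER_DAY = 4000                   # figure quoted by Song Jiqiang
SECONDS_PER_DAY = 24 * 60 * 60      # 86,400 seconds

avg_rate_mb_per_s = GB_PER_DAY * 1000 / SECONDS_PER_DAY   # megabytes per second
avg_rate_mbit_per_s = avg_rate_mb_per_s * 8               # megabits per second

print(f"Sustained uplink per car: {avg_rate_mb_per_s:.1f} MB/s "
      f"(~{avg_rate_mbit_per_s:.0f} Mbit/s, around the clock)")
```

A sustained uplink of several hundred megabits per second per vehicle, multiplied across a whole fleet, is exactly why the argument above calls for local processing with only selective uploading.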

As a traditional leading chip manufacturer, Intel launched a new generation of Xeon server chips in July this year, with significantly improved performance and 2.2 times the deep learning capability of the previous generation of servers, able to handle both training and inference tasks. In addition, Intel demonstrated field-programmable gate array (FPGA) technology, which will play a significant role in the AI field in the future, and plans to launch the Lake Crest processor dedicated to deep learning.


Lenovo

Yang Yuanqing, President of Lenovo Group, said that "general-purpose AI processor chips are the strategic high ground of the artificial intelligence era". He Zhiqiang, senior vice president of Lenovo Group and president of Lenovo Capital, the group's venture capital arm, also pointed out:

In the era of the intelligent Internet, AI chips are the engine of artificial intelligence and will play a decisive role in the development of the intelligent Internet.

Just last week, Lenovo Capital, Alibaba's venture capital arm and other top investors jointly invested in Cambricon Technologies, known as "the first unicorn in the global AI chip industry".


NVIDIA

NVIDIA has shifted its business focus to AI and deep learning in the past few years. In May this year, NVIDIA released a heavyweight processor for artificial intelligence applications: Tesla V100.

With 21 billion transistors, the chip is far more powerful than the 15-billion-transistor Pascal processor NVIDIA released a year earlier. Although it is only about the size of an Apple Watch face, it has 5,120 CUDA (Compute Unified Device Architecture) processing cores and delivers 7.5 TFLOPS of double-precision floating-point performance. NVIDIA CEO Jensen Huang (Huang Renxun) said that NVIDIA spent $3 billion to build this chip, and the price will be $149,000.

Google

Google, which has announced a strategic shift to "AI first", released the TPU (Tensor Processing Unit), custom-built for machine learning, last year. Compared with CPUs and GPUs, the TPU improves efficiency by 15-30 times and cuts energy consumption by 30-80 times.

At Google's developer conference in May this year, Google released a new product, the Cloud TPU, which has four processing chips and delivers 180 teraflops of compute. Connecting 64 Cloud TPUs forms what Google calls a pod, a supercomputer with 11.5 petaflops of computing power (1 petaflops is 10^15 floating-point operations per second) - a very important basic tool for research in the AI field.
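The pod figure follows directly from the per-board number; a minimal sketch of the arithmetic, using only the values quoted above:

```python
# Sanity check of the Cloud TPU pod figure quoted above.

TFLOPS_PER_CLOUD_TPU = 180     # one Cloud TPU board (four chips), as stated
TPUS_PER_POD = 64              # Cloud TPUs connected into one pod

pod_tflops = TFLOPS_PER_CLOUD_TPU * TPUS_PER_POD   # 11,520 TFLOPS
pod_pflops = pod_tflops / 1000                     # 1 PFLOPS = 1,000 TFLOPS = 10**15 FLOP/s

print(f"Pod throughput: {pod_tflops:,} TFLOPS = {pod_pflops:.2f} PFLOPS")
```

which matches the roughly 11.5 petaflops Google quotes for a full pod.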

At present, TPUs have been deployed across almost all Google products, including Google Search and Google Assistant, and they even played a key role in the Go showdown between AlphaGo and Lee Sedol.


Microsoft

Last month, media reported that Microsoft will add an independently designed AI coprocessor to the next-generation HoloLens, allowing the device to analyze what users see and hear locally, without wasting time sending the data to the cloud for processing. The AI chip is currently under development and will be integrated into the holographic processing unit (HPU) of the next-generation HoloLens. Microsoft said this AI coprocessor will be the first chip it has designed for mobile devices.

In recent years, Microsoft has been committed to developing its own AI chips: it developed a motion-tracking processor for the Xbox Kinect gaming system; to compete with Google and Amazon in cloud services, it custom-built field-programmable gate arrays (FPGAs); and it has also purchased programmable chips from Altera, an Intel subsidiary, and written customized software to meet its needs.


Last year, Microsoft demonstrated at a conference how thousands of AI chips could translate all of English Wikipedia, about 5 million articles, into Spanish in less than 0.1 second. Next, Microsoft hopes to let customers of its cloud services complete tasks with these AI chips, such as identifying images in massive data sets or predicting consumer purchasing patterns with machine learning algorithms.
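Taken at face value, the two numbers in that demonstration imply an enormous aggregate throughput; a minimal sketch using only the figures quoted above (the rate is an average over the whole fleet of chips, not a per-chip figure):

```python
# Implied aggregate throughput of the Wikipedia translation demo described above,
# using only the figures quoted in the article.

ARTICLES = 5_000_000        # "about 5 million articles"
TOTAL_TIME_S = 0.1          # "less than 0.1 second"

articles_per_second = ARTICLES / TOTAL_TIME_S   # across all of the AI chips combined
print(f"Implied aggregate rate: at least {articles_per_second:,.0f} articles per second")
```

That is at least 50 million articles per second across the fleet as a whole, which gives a sense of why Microsoft pitches these chips for bulk cloud workloads such as large-scale image recognition.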