JW Insights: China’s homegrown AI GPU developers closely pursue the ChatGPT-triggered AI boom
Chinese article by 李映
English Editor 张未名
07-21 13:33

By Greg Gao

(JW Insights) Jul 13 -- Chinese GPU developers are racing to build homegrown AI GPUs, aiming to become a new force in AI computing power, according to a recent report by JW Insights.

At the recently held 2023 World Artificial Intelligence Conference (WAIC) in Shanghai, they showcased their major products: Enflame’s Suisi 2.0 and Suisi 2.5, Biren Technology’s BR100 series, Iluvatar CoreX’s Zhikai 100, MetaX Tech’s Xisi N100, and AzurEngine’s RPP-R8 chips.

These Chinese GPU developers are trying to meet market demand now that the US has banned Nvidia and AMD from selling their higher-performance AI GPUs to China.

China’s tech companies are also losing no time in rolling out their own AI large language models (LLMs). They include SenseTime’s SenseNova, Huawei Cloud’s Pangu, Alibaba Cloud’s Tongyi Qianwen, and Baidu’s Ernie Bot.

The practical application of large language models has shifted the three core elements of AI from “data, algorithms, and computing power” to “scenarios, products, and computing power.” As the parameter counts of AI-Generated Content (AIGC) models expand to the scale of hundreds of billions, the demand for AI computing power is growing exponentially. Countless tech veterans and newcomers in China are flocking to the computing power race.

Dr. Liang Gang, a partner at Biren Technology, a Shanghai-based designer and developer of intelligent processors, pointed out at last month’s conference in Shanghai that in addition to high computing power, support for various data precisions, and high-bandwidth interconnectivity, it is also crucial to have a robust software ecosystem. This includes support for general-purpose programming languages, various LLM training and inference frameworks, and adaptation of algorithms for LLMs.

Zhao Lidong, Enflame’s founder and chairman, emphasized the new requirements for the full-stack capabilities of general-purpose GPUs in the era of general AI. It is necessary to achieve high performance, high bandwidth, high storage capacity, high versatility, efficient distributed computing, and efficient cluster interconnection to meet the computational demands of LLMs, he said.

So far, overseas players such as Amazon, Google, and OpenAI have made the strongest impression in the consumer sector. Some Chinese experts pointed to greater potential for general-purpose generative AI applications in vertical fields, where it can be integrated with industries such as healthcare and education. The emphasis there is on the overall solution’s performance, functionality, and cost-effectiveness, they said.

Although China’s domestic high-performance computing chip vendors have collectively made significant progress, they are lagging behind major international rivals in terms of technological capability, especially in the software ecosystem, an analyst from JW Insights noted.

In the current market landscape, China is still following established trends and technology paths, and there are no shortcuts. It will take solid, sustained effort to refine products and build ecosystems, pointed out Liang Gang of Biren.
