Yicai Global: Chinese tech giant Tencent launches pepped-up computing cluster to help with AI push
Chinese article by 爱集微
English Editor 张未名
04-20 15:36

(JW Insights) April 20 -- Chinese tech giant Tencent launched a new high-performance computing cluster on April 14 to meet the growing demand for developing and training artificial intelligence models, according to Yicai Global.

The new cluster uses Tencent’s self-developed Star Lake servers, US chip giant Nvidia’s H800 GPUs, and ultra-high inter-server communication bandwidth of 3.2 terabits per second, the Shenzhen-based firm said at a press conference. This allows it to provide cluster-scale computing for training large AI models, autonomous driving, and scientific computing applications.

Tencent said the new cluster can shorten the training time for its self-developed natural language processing model Hunyuan from 11 days to four days with the same data set.

US AI startup OpenAI’s ChatGPT bot has proved hugely popular around the world, and Chinese companies are rushing to develop similar products.

Tencent previously revealed that it had set up a project team called HunyuanAide to research large AI models in fields including natural language processing and computer vision, Yicai Global reported earlier.

Song Dandan, director of heterogeneous computing products at Tencent Cloud, said the rush to develop AI models has also increased demand for high-performance computing power that is both scalable and stable.

AI’s computing power needs fall into two phases: training and inference. The training phase requires a large amount of computing power over a short period, while in the inference phase, large models need more cost-effective computing power and faster connections to end-user device applications, Song added.

(Yuan XY)
