BEIJING, July 10 (TiPost) — A growing number of Chinese technology companies have jumped on the artificial intelligence (AI) bandwagon, and four giants are fueling the race with new large language models (LLMs), the technology behind popular chatbots such as ChatGPT.
Credit: Visual China
Huawei rolled out Huawei Cloud Pangu Models 3.0, the latest version of its Pangu pre-trained deep learning AI model, at the Huawei Developer Conference 2023. Unlike mainstream large models such as ChatGPT, Pangu Models 3.0 is not designed to compose poems; instead, it performs concrete industry tasks, guided by three aims for innovation: reshaping industries, honing technologies and sharing success, so as to keep building core competitiveness and better serve customers, industry partners and developers, said Zhang Pingan, executive director of Huawei and CEO of Huawei Cloud.
As an industry-oriented large model, Pangu Models 3.0 is made up of three layers in a so-called "5+N+X" architecture. The foundation layer, L0, represents the "5". It provides various skills to meet the needs of different industry scenarios through its five foundational models for natural language processing, computer vision, multimodal learning, prediction and scientific computing. Pangu Models 3.0 is available as a series of foundational models in four parameter counts — 10 billion, 38 billion, 71 billion and 100 billion — to satisfy customers' diverse industry demands in terms of scenario, latency and response speed.
L1, the second layer, offers a variety of industry-specific models, focusing on fields such as e-government, finance, manufacturing, mining and meteorology. Customers can also train their own exclusive large models on their own data, built on the L0 and L1 layers. The third layer, L2, provides multiple scenario-specific models for particular industry applications or business scenarios, such as government service contact, screening for drug development, foreign object detection on conveyor belts and typhoon path prediction.
During the 2023 World Artificial Intelligence Conference (WAIC), Alibaba's cloud unit released its latest AI image generation model, Tongyi Wanxiang. In Chinese, Wanxiang means tens of thousands of images. The generative AI model allows users to input prompts in either Chinese or English to generate detailed images in an array of styles, ranging from watercolor, oil and Chinese painting to animation, sketch, flat illustration and 3D cartoon. Moreover, users can have the model turn any image into a new one in whatever visual style they designate while preserving the content of the original image.
Tencent announced at WAIC an upgrade of its Cloud MaaS (Model as a Service), expanding applications of its customized enterprise LLMs into new scenarios such as financial risk management, simultaneous translation and digital smart customer service. Among the new applications, Tencent's first LLM enables businesses to improve the efficiency of their financial risk management tenfold compared with traditional risk control. For interactive translation, Tencent's model no longer requires millions of training samples and can achieve better results after training on relatively small datasets. In digital human technology, Tencent's model can significantly reduce costs, as it needs only a small amount of data to produce 2D digital clones within 24 hours.
SenseTime at the same event unveiled upgrades to its SenseNova foundation model sets, which include an impressive array of new products and applications of large models. SenseMirage 3.0, SenseTime's proprietary generative large model, has grown its parameter count from 1 billion to 7 billion since its initial release in April this year, enabling the model to produce photos with professional-level detail. SenseAvatar 2.0 has improved speech and lip-syncing fluency by over 30% and supports 4K video effects; it also supports AIGC image generation and digital singing capabilities. SenseSpace 2.0 has increased reconstruction efficiency by 20% and rendering performance by 50%, allowing it to complete the mapping of a 100-square-kilometer scene in just 38 hours with the support of 1,200 TFLOPS of computing power. Additionally, SenseThings 2.0 reproduces the texture and material of small objects with millimeter-level precision, overcoming the challenge of capturing highly reflective and mirror-like objects.