June 9, 2023

Tongyi Qianwen "reactivates" Alibaba Cloud

ChatGPT has set off a new wave of AI competition, and Alibaba is one of the latest giants to join the battle.

On April 11, Zhang Yong, chairman of Alibaba's board of directors, officially released Alibaba's large language model, "Tongyi Qianwen," at the Alibaba Cloud Summit, and announced that Alibaba's entire "family bucket" of products would be connected to it.

Before the launch event, Tongyi Qianwen's "Niaoniao" video had already been circulating on social media. For Alibaba, entertaining the public is welcome, but the more important question is how the capabilities of large AI models will be woven into the "workflows" of its B-side customers in the future; that is where Alibaba Cloud's attention is focused.

Therefore, Alibaba Cloud Intelligence Group also revealed the next strategic direction – the integration of cloud and intelligence.

In the era of large models, can Alibaba Cloud branch out into new growth? And how will it develop next?

Alibaba's "Tongyi Family Bucket"

On April 7, Alibaba demonstrated its large model's capabilities to the outside world for the first time by opening an invitation-only test of Tongyi Qianwen. At Alibaba Cloud's annual summit on April 11, Zhang Yong announced the official release of Alibaba's large model product, Tongyi Qianwen.

This was also the first time Zhang Yong led a team onstage at the Alibaba Cloud Summit since taking on the concurrent role of CEO of Alibaba Cloud Intelligence Group, which underscores the significance of the event.

Tongyi Qianwen provides copywriting, dialogue and chat, knowledge Q&A, logical reasoning, code writing, text summarization, and image and video understanding | Alibaba Cloud

At the event, Alibaba Cloud CTO Zhou Jingren spent about two minutes walking through several of Tongyi Qianwen's capabilities with slides, followed by a three-minute short video showing the possibilities the model opens up.

In this 3-minute clip, the smart assistant powered by Tongyi Qianwen is shown in 3 scenes.

  • In the office scene, once granted access to interfaces on different platforms, the intelligent assistant can automatically order drinks, book flights, plan routes, and hail taxis; it can also generate meeting minutes, invitations, and posters.

  • In the home scene, story ideas for children's writing, background music for workouts, cooking suggestions, and more can all be obtained by calling on the smart assistant.

  • In the shopping scene, after prompts such as "What does an 18-year-old girl need for her first makeup kit?" or "Grandpa turns 80 next month, give me ideas for his birthday party," the intelligent assistant automatically generates suggestions and attaches shopping links.

In fact, thanks to the generation, reasoning, and multi-turn dialogue capabilities that large models bring, it has become something of a consensus that AI can cut into a much broader "operational flow."

For example, consumer decision-making used to be scattered across platforms: discovering products on Xiaohongshu, buying on Taobao, booking tickets on Ctrip, hailing rides on Didi. Now, through GPT-style question answering, users can get "knowledge enhancement" automatically on a single platform; once a task is given, the entire process can be completed more efficiently and conveniently in one place.

And Alibaba, with its "family bucket" of services, is naturally positioned to use large models to enhance existing businesses and cut into more of these "operational flows." At the event, Zhang Yong said: "Based on the Tongyi Qianwen foundation model, we hope to connect all of Alibaba's businesses in the future."

All Alibaba products will be connected to the "Tongyi Qianwen" large model in the future

Among the internal businesses given priority access to test the Tongyi model, DingTalk and Tmall Genie may be the first to ship. On the day of the launch, the official accounts of DingTalk and Tmall Genie each released previews of upcoming features.

With Tongyi Qianwen connected, DingTalk can automatically generate group chat summaries, assist with content creation, summarize meeting minutes, and turn a photo of a hand-drawn functional sketch into a mini program. Tmall Genie, for its part, gains multi-turn dialogue and becomes a smarter companion robot.

It is not just Alibaba's internal businesses. Zhou Jingren explained that, in the future, every enterprise on Alibaba Cloud will be able not only to invoke the full capabilities of Tongyi Qianwen, but also to combine its own industry knowledge and application scenarios to train its own enterprise-specific large model.

Regarding the latter, Zhou Jingren said that Tongyi Qianwen is a general-purpose large model; on top of it, an enterprise-specific model can be built around the enterprise's scenarios, knowledge base, and industry-specific needs to solve practical problems in that industry.
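The article does not describe Alibaba Cloud's tooling for this, but the general recipe of continuing the training of a general-purpose base model on in-house text can be sketched roughly as follows. This is a minimal illustration, assuming the open-source Hugging Face `transformers` and `datasets` libraries, a tiny public checkpoint, and an invented two-line corpus as placeholders; it is not Alibaba's actual method.

```python
# A minimal sketch of adapting a general base model to enterprise text.
# The checkpoint and the two-line "corpus" below are placeholders only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "sshleifer/tiny-gpt2"  # tiny public checkpoint standing in for a large general model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical in-house corpus: product manuals, FAQ answers, support tickets, etc.
corpus = Dataset.from_dict({"text": [
    "Q: How do I reset the device? A: Hold the power button for ten seconds.",
    "Q: What is the warranty period? A: Two years from the date of purchase.",
]})
tokenized = corpus.map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

# Continued pre-training (domain adaptation) of the base model on enterprise text.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="enterprise-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2,
                           report_to=[]),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice an enterprise would swap in its real base model, a much larger domain corpus, and parameter-efficient techniques such as adapters or LoRA to keep training costs manageable.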

Tongyi Qianwen is an important "node" not only in the field of large models, but also for the Alibaba Cloud platform itself.

Cloud computing enters the "large model era"

Since December 2022, the GPT interface offered by OpenAI has shown the outside world the potential of "Model as a Service" (MaaS) as a business model: a large language model can be consumed as a service and called on demand. For cloud service providers, it also means designing their own cloud product systems around large models.
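The gist of MaaS is that the model stays in the cloud and applications consume it through an API call. The endpoint URL, header, and response field in the sketch below are hypothetical placeholders (not OpenAI's or Alibaba Cloud's actual API), meant only to show how thin the client side of "model as a service" can be.

```python
# A minimal sketch of consuming a hosted model as a service over HTTP.
# Endpoint, auth scheme, and request/response fields are hypothetical.
import requests

API_URL = "https://example.com/v1/llm/generate"  # hypothetical hosted-model endpoint
API_KEY = "YOUR_API_KEY"                         # credential issued by the provider

def ask(prompt: str) -> str:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": 256},  # assumed request schema
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response schema

if __name__ == "__main__":
    print(ask("Draft a short meeting invitation for Friday at 10 am."))
```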

Zhou Jingren believes: "Model training and model inference are inseparable from cloud computing. Today the three layers of IaaS, PaaS, and MaaS are gradually taking shape, and even new architectures such as SaaS. Alibaba Cloud has long planned for this, designing the technology and architecture of the cloud itself, as well as the related product systems, around this concept." At the main forum of the launch event, he went into detail on Lingjun, the cluster that provides high-performance computing power for AI, and PAI, the platform for model training and inference, among other products.


He further explained the concept of Model as a Service: it should effectively support the entire life cycle of a model. Today, from initial research and development and data cleaning to training and testing, a finished model can be published to a unified model repository site, which lets users quickly find and use it, lowers the barrier to adoption, and allows more people to enjoy a whole range of AI applications with just a few lines of code.
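As an illustration of the "few lines of code" point, here is roughly what pulling a published model from a public repository and running it looks like today, using the open-source Hugging Face `transformers` pipeline as a stand-in for the kind of unified model site described above (the article does not name a specific platform).

```python
# A minimal sketch: fetch a published model from a model hub and run it locally.
# "distilgpt2" is a small public checkpoint used here only as a placeholder.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
print(generator("Cloud computing in the AI era", max_new_tokens=40)[0]["generated_text"])
```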

In fact, since the release of ChatGPT, large models and generative AI seem to have become a must for cloud vendors at home and abroad. The scope of the new opportunity is widely debated, spanning computing infrastructure, large models, middleware, application software, and hardware, but the most certain opportunity among them is cloud computing.

Rui Yong, CTO of Lenovo Group, once gave Geek Park an example to explain the computing power that generative AI requires. He believes we have entered the third era of computing, the era of AI. Compared with the previous two eras (the PC and the internet, then smartphones and the cloud), generative AI and large models demand hundreds or even thousands of times more computing power. With today's search engines, most content already exists and is simply retrieved via an inverted index; generative AI, by contrast, has to generate its content on the spot, which requires far more computation. This will benefit companies doing high-performance computing.
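To make the contrast concrete, here is a toy sketch of the retrieval side: a search engine answers a query largely by looking things up in a precomputed inverted index, whereas a generative model must compute every token of its answer at query time. The three "documents" below are invented for illustration.

```python
# Toy inverted index: answering a query by lookup over precomputed postings lists,
# in contrast to a generative model that computes every output token on the spot.
from collections import defaultdict

docs = {
    1: "cloud computing provides elastic infrastructure",
    2: "large models need massive computing power",
    3: "alibaba cloud released the tongyi qianwen model",
}

# Built once, offline: term -> set of document ids containing that term.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query: str) -> set:
    """Retrieval is just intersecting precomputed postings lists."""
    postings = [index[t] for t in query.lower().split() if t in index]
    return set.intersection(*postings) if postings else set()

print(search("cloud computing"))  # {1} -- answered by lookup, not generation
```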

Beyond raw computing power, there is also cloud architecture: workloads in different scenarios place different demands on the computing architecture. In AI scenarios, cloud computing provides the underlying infrastructure for model training and inference, and a sufficiently intelligent platform product can get the most out of big data and machine learning. Cloud vendors are therefore eager to co-create with industry customers and rebuild a whole series of cloud products and experiences around the model.

At this new juncture, Zhang Yong said that as customers pursue intelligence, Alibaba Cloud's next goal and commitment is to bring the cost of computing power down to one tenth or even one hundredth of today's. That implies the computing architecture underpinning large models still has a great deal of room for optimization.

“Tongyi Qianwen is a node in the established route, not the starting point, nor the end point.” Zhou Jingren said.

 

 
