March 25, 2023
(Main image source: OpenAI)

[Not only can it explain pictures, it can also turn hand-drawn sketches into web page code] The new multimodal GPT-4 is here, and simultaneous text-and-image input is its headline feature

Just two weeks after launching its paid API service, OpenAI has made another big move: releasing the multimodal GPT-4 model. Its biggest change is that it accepts text and images as input at the same time. In a live demo, OpenAI co-founder Greg Brockman photographed a design sketch he had hand-drawn in his notebook and fed the photo to GPT-4, which automatically generated the code for the corresponding web page. Not only did the resulting page look almost identical to the sketch, it also included buttons wired to JavaScript event handlers. Brockman emphasized that this will greatly change how websites are designed.
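As a rough illustration of what "text and images at the same time" means at the API level, the sketch below builds a chat-completion payload that pairs a text prompt with an inlined image. This is a hypothetical example, not OpenAI's official demo code: the model name is a placeholder, and it assumes an image-capable chat endpoint that accepts base64 data URLs in the message content.

```python
import base64

def build_vision_request(prompt: str, image_bytes: bytes,
                         model: str = "gpt-4-vision-preview") -> dict:
    """Build a chat-completion payload pairing a text prompt with an image.

    The image is inlined as a base64 data URL, one common way to attach
    image input to a multimodal chat request.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"},
                    },
                ],
            }
        ],
    }

# Example: ask the model to turn a photographed sketch into web page code.
# The image bytes here are placeholder data; in practice you would read
# the photo of the hand-drawn sketch from disk.
payload = build_vision_request(
    "Generate HTML and JavaScript for the web page in this sketch.",
    b"\x89PNG placeholder",
)
```

The payload would then be sent to the chat completions endpoint with an ordinary authenticated HTTP POST; the model's reply would contain the generated HTML and JavaScript as text.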

Ewen Eagle

I am the founder of Urbantechstory, a technology-based blog where you'll find all kinds of trending technology, gaming news, and much more.
