March 30, 2023

OpenAI CEO: AI entrepreneurial hotspot is the “middle layer” of models and applications | Geek Park

The explosion of generative AI has given Silicon Valley's money a new direction. The image-generation model DALL-E and the chatbot ChatGPT, both launched by OpenAI, have become the leaders of the AIGC wave.

Not long ago, Microsoft decided to invest billions of dollars in OpenAI, a new round that brought the company's valuation to $29 billion. It feels like a return to the last moment when AI fired people's imaginations.

OpenAI's co-founder and CEO, Sam Altman, has become the leading spokesperson of this new wave of AI.


OpenAI founder Sam Altman

If you had to name the person closest to the AI technology revolution, Sam Altman would have to be one of them.

In 2017, he wrote on Medium that "we are already in the phase of co-evolution — the AIs affect, effect, and infect us, and then we improve the AI." At the time he was still serving as president of the startup incubator Y Combinator; together with Elon Musk, Peter Thiel, Reid Hoffman, and others, he had pledged $1 billion to found OpenAI.

Two years later, he shifted his focus fully to AI, and he has served as OpenAI's CEO ever since.

Last fall, the investment firm Greylock held an AI-themed summit. In a conversation with Reid Hoffman — an old friend and an early backer of OpenAI — and the audience, Sam Altman shared his predictions for the future: large AI models will become the biggest technology platform since the mobile internet; and with chatbots as the interface, plus the development of multimodal models spanning images, music, and text, large enterprises will be born on top of them.

He has made even more radical predictions — for example, that AI scientists will learn to iterate on themselves, not merely cutting costs and raising efficiency but bringing humanity new knowledge and adding to the sum of what we know. If you are familiar with his earlier thinking, though, such predictions are not surprising. He once said that humans are proud of their superior intelligence, yet in the eyes of an AI, "the difference between us and chimpanzees might be barely worth mentioning."

He has described us as standing "on the edge of the AI cliff," with great wonders and great dangers close at hand. Yet he firmly believes that scientific and technological development brings human progress and economic growth, so he chooses to remain optimistic and to work for a better future.

This is a compilation of the main content of that day's conversation. Some questions came from Reid Hoffman and some from the audience, so they are not attributed separately.


Reid Hoffman and Sam Altman at the event

Large models – a new technology platform

Q: What are the real business opportunities around large-model APIs (application programming interfaces)? How do you build a unique business on top of them?

A: So far in this space, you could build a superb copywriting business, an education service, or the like. But I haven't seen anyone going after the trillion-dollar (opportunity) that comes after Google. I feel like it's about to happen, and it might work. Or Google will do it themselves. Either way, my guess is that as the quality of language models improves over the next few years, Google's search product will face its first serious challenge.

A lot of trends were laughed off simply for arriving too early, and this is the moment when human-friendly chatbot interfaces really start to work. Chatbots were fine; they were just too early. Now they work. New medical services can be delivered this way — great advice, new educational services — and these will become very large companies.

Soon there will be multimodal models, and those will open up new things. This is going to be a huge trend, and large enterprises will be built with this as the interface. More generally, these very powerful models will be one of the genuine new technology platforms — we haven't really had one since mobile. And there is always a flood of new companies after that, so it will be cool.

Q: As a provider of large language model API services, what is the key? How do you create a lasting, differentiated business?

A: I think there will be a small handful of base models. What happens is that one company builds a big language model (and an API can be built on top of it), and then the middle layer becomes very important. I'm skeptical of all the startups trying to train their own models from scratch — I don't think that will last. What will happen instead is a new crop of startups that take the big models that already exist and tune them, not just fine-tune them.

There are many ways to create medical models, or models you can talk to like a friend. These companies will create a lot of long-term value because they will have a special version of the model — not a base model they built themselves, but one tuned just for them or shared with others — and unique data flywheels that improve over time. I think that middle layer will create a lot of value.
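As a minimal sketch of this "middle layer" pattern — all names below are hypothetical, and the base-model call is stubbed out rather than a real API — a vertical product might wrap a generic model with domain-specific framing while logging user feedback, which over time becomes the proprietary dataset behind the data flywheel:

```python
# Hypothetical sketch of a "middle layer" vertical product: a thin wrapper
# around a generic base model, plus a feedback log that accumulates into
# the proprietary data used to tune a specialized model later.

def base_model(prompt: str) -> str:
    # Stand-in for a real large-model API call (stubbed for illustration).
    return f"[model answer to: {prompt!r}]"

class MedicalAssistant:
    """A vertical app: domain framing around a general-purpose model."""

    SYSTEM_FRAME = (
        "You are a cautious medical information assistant. "
        "Flag uncertainty and recommend seeing a doctor for diagnoses.\n"
    )

    def __init__(self):
        self.feedback_log = []  # (question, answer, rating) triples

    def ask(self, question: str) -> str:
        # Domain expertise lives in how the request is framed, not in the model.
        return base_model(self.SYSTEM_FRAME + "Patient question: " + question)

    def record_feedback(self, question: str, answer: str, rating: int) -> None:
        # Every rated exchange is data no competitor has.
        self.feedback_log.append((question, answer, rating))

    def finetune_examples(self, min_rating: int = 4):
        # Keep only well-rated exchanges as future training examples.
        return [(q, a) for q, a, r in self.feedback_log if r >= min_rating]
```

The design choice the sketch illustrates: the wrapper's value is not the base model but the domain framing plus the feedback dataset, which improves the product over time.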

Q: How does one large language model startup differ from another large language model startup?

A: I think it will be the middle layer. In a sense, these startups will train their own models, just not from scratch. They will take base models that have been trained with enormous amounts of compute and data, and then train on top of them to create a model for each vertical.

The 1% of training they do themselves is what really matters for the application. I think these startups will be very successful and quite distinct from one another, and it comes down to what they can do with their data flywheels. That may include prompt engineering, or building on core base models that have been around for some time. Everyone training their own model from scratch, I think, would be overly complicated and expensive — and there aren't enough chips in the world.

Note: Prompt engineering is the process of crafting and debugging the task description or question given to an AI model so that it produces the desired output; after ChatGPT took off, the role of prompt engineer also began to attract attention.
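As a toy illustration of what prompt engineering means in practice — the task wording and the few-shot examples below are invented for demonstration — the "engineering" is in how the task description and examples are arranged, not in any model code:

```python
# Toy illustration of prompt engineering: a task description plus few-shot
# examples steer a model toward a desired output format.

def build_sentiment_prompt(review: str) -> str:
    """Assemble a few-shot prompt asking a model to label sentiment."""
    task = "Classify the sentiment of each review as Positive or Negative.\n\n"
    examples = (
        "Review: The battery lasts all day and charges fast.\n"
        "Sentiment: Positive\n\n"
        "Review: It broke after two days and support never replied.\n"
        "Sentiment: Negative\n\n"
    )
    # The trailing "Sentiment:" cues the model to complete with a label.
    return task + examples + f"Review: {review}\nSentiment:"

prompt = build_sentiment_prompt("Great screen, terrible speakers overall.")
```

The resulting string would be sent to a model as-is; changing the examples or the task sentence is the whole craft being debugged.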

Q: Five years from now, how will most users interact with base models? Will prompt engineering be an in-house function at many organizations?

A: I don't think we'll still be doing prompt engineering five years from now; it will be integrated everywhere. Whether by text or by speech, depending on context, you'll just use a language interface and have the computer do whatever you want. Maybe generating an image will still take a bit of prompt engineering, but mostly you'll just ask it to start some complicated task, or be my therapist and help me figure out how to make my life better, or use my computer and do this or that. I think the basic interface will be natural language.

Q: A great visual thinker can get more out of DALL-E, because they know how to think more deeply and how to iterate in a test loop. Do you think that's a universal truth for most of these tools?

A: 100 percent. What matters is the quality of thought and an understanding of what you want. The artist will still be the best at image generation — not because they append some magic word to the end of the prompt, but because they can express it with a creative eye that I don't have.


Work generated by DALL-E 2 | Source: kasendorf

Q: What has surprised you the most? And what do you think will most surprise people who haven't realized how far things have come?

A: The biggest systematic mistake people are making right now is to say, "Well, I may be a skeptic, but clearly this language-model stuff is going to work, and sure, images and video too. But it won't generate new knowledge for humanity; it will only do what humans have already done. That's still great — it makes the marginal cost of intelligence very low — but it won't cure cancer. It won't add to the sum of human scientific knowledge." I think that will turn out to be wrong, and that is what will most surprise the current experts in the field.

When AI scientists can iterate on themselves

Q: Whether through products built on the API or through scientists using the API directly, where will science be accelerated, and how?

A: One way is science-focused products like AlphaFold. These add enormous value, and you'll see more and more of them. If I had time to do something else, I'd be excited to start a bio company right now.

The other thing happening is tools that make us all more efficient — that help us think of new research directions, that write a bunch of our code so we can double our output. Copilot is an example (note: an AI tool built by GitHub in collaboration with OpenAI that auto-completes code as programmers work). There are even cooler things coming. This effect on the net output of each engineer or scientist will be an astonishing way for AI to contribute to science, beyond the obvious models — a major change in how technology and science develop. Both of these are huge, and they're already accelerating progress.

But I think the really big one people are starting to explore — and I hesitate to use the term, because there is a good way to use it and a more frightening way — is that AI can start to become an AI scientist and iterate on itself. As an AI developer, can you automate your own job first? Can that help with the hard "alignment problems" that we don't yet know how to solve? Honestly, I think that is the way of the future.

I firmly believe that the only real drivers of human progress and long-term economic growth are the social structures that enable scientific progress, and then scientific progress itself. I think we are going to get a lot more of both.

Q: The “alignment problem” might be worth explaining?

A: If we build a very powerful system and it doesn't do what we want, or its goals conflict with ours, things can go terribly wrong.

So the alignment problem is: how do we build an AGI (artificial general intelligence) that acts in the best interests of humanity? How do we make sure humans get to determine the future of humanity?

How do we avoid both accidental and deliberate misuse — the former an unforeseen bug, the latter a bad actor using AGI to do enormous damage? And then there is the inner alignment question: what if this thing becomes a creature that sees us as a threat?

We have some ideas about how to solve the alignment problem at small scale, and we have been able to align OpenAI's largest models better than we expected. We have some ideas about what to do next, but we honestly cannot look anyone in the eye and say how this problem will be solved in a hundred years. Once the AI gets good enough, though, we can ask it, "Hey, can you help us with our alignment research?" That will be a new tool in the toolbox.

Q: In one of our previous conversations, we asked: can we tell an agent (note: a concept in AI, usually referring to an intelligent entity acting in an environment), "Don't be racist"?

A: Certainly. Once a model is smart enough to really understand what racism looks like and how complicated it is, you can say, "Don't be racist."

Q: The term “AGI” has been widely used. Sometimes the confusion comes from people having different definitions of AGI. How do you define AGI, and how do you know when we achieve it?

A: There are many valid definitions, but for me, an AGI is basically the equivalent of a median human being you could hire as a co-worker. They could do anything you'd be happy to have a remote colleague do behind a computer, including learning how to be a doctor or learning how to be a very capable programmer.

An ordinary person can do many things well. I think the defining skill of an AGI is not any particular milestone but the meta-skill of learning to solve problems — it can do anything you need it to do. A superintelligence is something smarter than all of humanity combined.

Q: How do you see foundational technologies like GPT-3 affecting the pace of life-science research? And what are the rate-limiting factors in life-science research — limits we can't get past because the laws of nature simply are what they are?

A: I don't think today's models are good enough to have had a significant impact on the field — at least, that's what most life-science researchers tell me. They've looked at it all, and they feel it helps in some cases at the moment. I think that will change, and there will be new $100-billion-to-$1-trillion companies in these areas — and there aren't many companies that size.

The pace of biotech is still rate-limited: human trials take a long time. So the interesting question is, where can that be avoided? The most interesting synthetic-biology companies I've seen are the ones that make the biological design-test cycle super fast. That suits AI well — AI will give you a lot of good ideas, but you still have to test them. That's where it stands for now.

I very much believe that a startup can compete with established large companies if it has low costs and a fast cycle. I wouldn't pick heart disease as the first target for such a new company, but making things with biotechnology is attractive. Another thing: the simulators are still really bad. If I ran a bio-AI startup, I would definitely try to work on that.


Molecular rendering|Source: ANIRUDH on Unsplash

The next ten years: when the cost structure changes

Q: What do you think the moonshots (in AI) of the next few years will be? What should people pay attention to?

A: Start with the more certain things. I think language models will go a lot further than people expect. A lot of people say we're running out of compute, running out of data; but I think there's a lot of algorithmic progress still to come, and we're going to have a really exciting time.

The other thing is that true multimodal models will work — not just text and images, but every modality you have in one model, moving fluidly between them with ease. And there will be models that learn continuously. Right now, if you use GPT, it is frozen at the time it was trained; it doesn't get better no matter how much you use it. I think that will change. So I'm really excited about all of this.

If you think about what that alone unlocks, and the applications people will build on it, that's a huge win for everyone — a huge step forward, and a real technological revolution, if it happens. But I think we will also keep making research progress toward new paradigms. We keep being pleasantly surprised by what is happening. I think all of this points to the question of generating new knowledge — how do we actually advance human progress? — and I think there will be systems that help us do that.

Q: Let's talk about some areas under wide discussion right now — for example, AI and nuclear fusion.

A: From what we've seen, whenever someone says, "I'm using these reinforcement-learning models for fusion" or something like that, the results are, as far as we can tell, far worse than what clever physicists come up with.

One unfortunate thing is that AI has become a buzzword, which is usually a bad sign. I hope it doesn't mean the field is about to fall apart. Historically, it's a very bad sign for new startups when everyone is saying everything is "X plus AI." Even so, a lot of it is real — I do think this will be the biggest technology platform of this generation.

We like to make predictions at the frontier: do the research to understand what the scaling laws are, and then say, "OK, this new thing will work — here's what to expect if we extrapolate along that curve."
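Scaling laws of the kind mentioned here are empirical power-law fits. As a hedged sketch — the data points below are invented for illustration, not real measurements — extrapolating one looks like fitting a line in log-log space and projecting it forward:

```python
import math

# Invented illustrative data: (compute, loss) pairs shaped like the power
# law loss = a * compute**(-b) that empirical scaling laws tend to follow.
observations = [(1e3, 3.2), (1e4, 2.26), (1e5, 1.6), (1e6, 1.13)]

# Fit log(loss) = log(a) - b * log(compute) by least squares in log space.
xs = [math.log(c) for c, _ in observations]
ys = [math.log(loss) for _, loss in observations]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
b = -slope                          # power-law exponent
a = math.exp(mean_y + b * mean_x)   # power-law prefactor

def predicted_loss(compute: float) -> float:
    """Extrapolate the fitted power law to a larger compute budget."""
    return a * compute ** (-b)
```

The prediction discipline is exactly this: once the fitted curve is trusted, the loss at a not-yet-affordable compute budget can be read off before the run is made.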

That's how we try to run OpenAI: when we're confident about something, we take 10% of the company and let it run at it. That has been a huge win.

I doubt we'll still be using transformers five years from now. I hope we find something better — though the Transformer is clearly extraordinary. So I think it's important to always be looking for where the next whole new paradigm will come from. That's how to make predictions: don't fixate on "AI for everything," but ask, can I see something working? Can I see how it gets much better? Leave room, of course — you can't plan for greatness, but sometimes breakthroughs happen in research.

Q: What will happen when AI is applied to very important systems, such as financial markets?

A: AI will permeate every corner. My basic model of the next decade is that the marginal cost of intelligence and the marginal cost of energy will rapidly trend toward zero — and to an astonishing degree.

One has to assume this will touch pretty much everything. When the cost structure of an entire society changes, sea changes follow. That kind of shift has happened many times before, and it is always easy to underestimate. I wouldn't bet with high confidence on anything staying unchanged or untouched.

Q: AI can give human creators tools that expand their creativity. Where is the boundary between AI making creators more productive and AI doing the whole creative job itself?

A: At least so far, what we're seeing is augmentation, not replacement. In some cases it is replacement, but for most of the work people in those fields actually want to do, it's augmentation, and that trend will continue for a long time. Maybe looking 100 years out, it could do the entire creative job.

I find it interesting that if you had asked people 10 years ago how AI would make its mark, most would have said confidently: first it will replace blue-collar jobs — factory work, truck driving — then low-skill white-collar jobs, then high-skill, high-IQ white-collar jobs like programming, and maybe never the creative jobs. It's playing out in exactly the opposite order.

It's an interesting reminder of how hard prediction is — and more specifically, of how poorly we understand, even about ourselves, which skills are hard and which are easy, which ones really use the brain, and how difficult it is to control a body.

Q: What aspects of life do you think AI will not change?

A: All the deep biological stuff. We will still genuinely care about our interactions with other people; we'll still have fun; our brains' reward systems will still work the same way. We'll still have the same drives to create new things, compete for silly status, start families, and so on. So I think the things people cared about 50,000 years ago are more likely to be the things people care about 100 years from now than the things they cared about 100 years ago.


“Blade Runner 2049” Stills | Source: Douban

Q: In the next 20 to 30 years, with the continuous development of artificial intelligence, will there be major social problems? What can we do today to alleviate these problems?

A: The impact on the economy will be huge. If things go the way I think they will — some people doing extremely well and others not — society won't tolerate it. So we have to figure out when it's going to disrupt so much economic activity; even if the disruption isn't total 20 or 30 years from now, I think it clearly will happen.

What is the new social contract? My guess is we'll have to figure out how to think about fairly distributing wealth, access to AGI systems — which will be the commodity of the realm — and governance: how we collectively decide what they can and can't do. Finding answers to those questions is going to be a big deal.

I'm optimistic that people will find ways to spend their time and be deeply fulfilled. I think the ways people worry about this are a bit silly. People will certainly spend their time very differently, but it always works out. I do think the concepts of wealth, opportunity, and governance are all going to change, and how we address them will matter enormously.

We ran the world's largest UBI experiment (unconditional basic income). The five-year project has about a year and a quarter left. It's not the only solution, but I think it's a great thing to try — we should have tried 10 more things like it. We also experimented with different approaches: getting input from the groups we think will be affected most, and seeing how to act earlier in the cycle. Recently we've been exploring how this technology can be used to retrain the people who will be impacted earliest, and we'll try to do more of that.

Note: Unconditional basic income is money that citizens receive regularly from a government or other organization with no conditions or means testing attached.

One more thing

I think nobody realizes that we are on the precipice of AI. People say, "It's going to be great," or "It's going to be terrible" — and you do have to prepare for the worst, because saying "it will all be fine" is not a strategy. But you can still hold this feeling: we are going to reach a great future, and we will work as hard as we can to get there, rather than acting out of fear and hopelessness the whole way.

Reference link:

AI for the Next Era
