
OpenAI CEO Looks to AGI Economics: Three Observations Reveal Disruptive Change in the Next Decade

By Sam Altman, Chief Executive Officer, OpenAI



OpenAI's mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.

OpenAI believes that systems pointing toward AGI are coming into view, so it is important to understand the moment we are in. AGI is a weakly defined term, but in general OpenAI takes it to mean a system that can tackle increasingly complex problems, at a human level, across many domains.

Comments: The fact that the definition of AGI remains vague reflects the cutting-edge, uncertain nature of current AI development. OpenAI's acknowledgement of this vagueness also implies that the concept of AGI is still evolving.

Humans are tool builders, with an innate drive to understand and create, which makes the world a better place for all of us. Each generation has stood on the discoveries of its predecessors to create even more powerful tools: electricity, transistors, computers, the Internet, and soon AGI.

Over time, the steady march of human innovation has intermittently brought unprecedented prosperity and improvement to almost every aspect of people's lives.

In one sense, AGI is just one more tool in the ever-higher scaffolding of progress that humanity is building together. In another sense, AGI is a beginning, and it's hard not to say, "This time is different"; the economic growth before us looks staggering, and we can now imagine a world where we can cure all diseases, have more time to spend with our families, and realize our full creative potential.

Comments: Sam Altman depicts an optimistic future driven by AGI, including medical breakthroughs and productivity gains. However, this rosy vision contrasts with the potential risks and ethical challenges that AGI can bring, such as job displacement and concentration of power.

Perhaps in a decade, every person on the planet will have the ability to accomplish more than the most influential people today can.

OpenAI continues to see rapid progress in AI development. Here are three observations about the economics of AI:

  1. The intelligence of an AI model is roughly equal to the logarithm of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous, predictable gains; the scaling laws that predict this hold across many orders of magnitude.
  2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped roughly 150x over that period. Moore's Law changed the world at 2x every 18 months; this is unbelievably stronger.
  3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. One consequence is that OpenAI sees no reason for exponentially increasing investment to stop in the near future.

Comments: These three observations reveal the core economic drivers of AI development: the positive correlation between compute investment and gains in intelligence, the ubiquity brought about by rapidly falling costs, and the enormous socioeconomic returns to greater intelligence. However, the sustainability of this exponential growth, and its impact on environmental and social resources, deserve deeper consideration.
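The compound-growth arithmetic behind the second observation can be sketched as follows. The 10x-per-12-months and 2x-per-18-months rates are the essay's own figures; the `factor` helper is purely illustrative:

```python
# Compare the claimed AI cost decline (10x cheaper every 12 months)
# with Moore's law (2x improvement every 18 months) over the ~18 months
# from GPT-4 (early 2023) to GPT-4o (mid-2024).

def factor(per_period: float, period_months: float, months: float) -> float:
    """Total improvement factor after `months`, given `per_period`
    improvement every `period_months` (compound growth)."""
    return per_period ** (months / period_months)

ai_factor = factor(10, 12, 18)     # headline 10x/year trend over 18 months (~31.6x)
moore_factor = factor(2, 18, 18)   # Moore's law over the same window (2x)

print(f"AI cost trend over 18 months:   {ai_factor:.1f}x")
print(f"Moore's law over 18 months:     {moore_factor:.1f}x")
print(f"Observed GPT-4 -> GPT-4o drop:  ~150x")
```

Note that the observed ~150x token-price drop outpaces even the headline 10x-per-year trend, which itself dwarfs Moore's law over the same window.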

If these three observations continue to hold true, the impact on society will be enormous.

OpenAI is now beginning to roll out AI agents, which will eventually feel like virtual coworkers.

Let's imagine the case of a software engineering agent, which OpenAI expects to be particularly important. Imagine that this agent will eventually be able to do most of the things a software engineer with a few years of experience at a top company could do, on tasks up to a couple of days long. It won't have the greatest new ideas, it will require a lot of human supervision and guidance, and it will be very good at some things but surprisingly bad at others.

Nonetheless, think of it as a real but relatively elementary virtual coworker. Now imagine 1,000 such agents. Or a million. Now imagine such an agent for each knowledge work area.

Comments: The emergence of the AI agent heralds a profound change in work patterns, and Sam Altman emphasizes its potential as a "virtual colleague," especially in knowledge-intensive fields such as software engineering. However, it remains to be seen what the limitations of AI agents are in terms of creativity, complex problem solving, and human interaction, as well as their potential impact on existing jobs.

In some ways, AI could end up being economically like transistors - a massive scientific discovery that scales well and penetrates almost every corner of the economy. We don't think much about transistors or transistor companies; the benefits are very widely distributed. But we do want our computers, TVs, cars, toys, etc. to work wonders.

The world is not going to change all at once; it never has. In the short term life will be largely business as usual, and people in 2025 will spend their time mostly in the same way as they did in 2024. We'll still be falling in love, starting families, arguing online, hiking in nature, and so on.

But the future will come to us in a way that cannot be ignored, and the long-term changes to our societies and economies will be dramatic. We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may be very different from what we do today.

Motivation, willpower, and determination can be invaluable. Making the right decisions about what to do, and figuring out how to navigate an ever-changing world, will be of immense value. Resilience and adaptability will be valuable skills to cultivate. AGI will be the greatest lever on human willpower ever created, and it will give individuals more influence than ever before, not less.

OpenAI expects the impact of AGI to be uneven. While some industries will see little change, scientific advances could be much faster than they are today; this impact of AGI could outweigh everything else.

Prices of many goods will eventually fall dramatically (right now, the cost of intelligence and the cost of energy constrain many things), while the prices of luxury goods and a few inherently scarce resources, such as land, may rise even more sharply.

Technically, the path ahead for OpenAI looks fairly clear. But public policy and collective opinion about how AGI should be integrated into society matter greatly; one reason OpenAI ships early and often is to give society and the technology time to evolve together.

Comments: OpenAI emphasizes the synchronization of technological development and social adaptation. It is a positive strategy to launch products as early as possible so that society has time to adapt and shape the direction of AGI. However, how to balance the speed of innovation, ethical considerations and social equity in this process of "co-evolution" remains a complex issue.

AI will permeate every area of the economy and society; we will come to expect everything to be smart. Many at OpenAI expect that people will need more control over the technology than ever before, including more open source, and will have to accept that there is a balance to strike between safety and individual empowerment.

While OpenAI never wants to act recklessly, and some major decisions and limits related to AGI safety may be unpopular, directionally OpenAI strongly prefers tilting toward more individual empowerment as AGI draws closer. The other path OpenAI can see is one in which authoritarian governments use AI to control their populations through mass surveillance and loss of autonomy.

Ensuring that the benefits of AGI are widely distributed is critical. The historical impact of technological advances suggests that the vast majority of the indicators we care about (health outcomes, economic prosperity, etc.) get better on average and over the long term, but improving equality does not seem to be determined by technology, and doing so well may require new ways of thinking.

Comments: Sam Altman expressed concern about potentially negative scenarios for AGI, such as authoritarian governments using AI for social control. At the same time, he emphasized the importance of striking a balance between security and individual empowerment, and the need to ensure that AGI benefits everyone. This awareness of the double-edged sword of technology is commendable.

In particular, the balance of power between capital and labor could easily get disrupted, which may require early intervention. OpenAI is open to ideas that sound strange, such as giving everyone on Earth a "compute budget" so they can use a great deal of AI, but it also sees many paths to the desired outcome simply by relentlessly driving down the cost of intelligence.

Anyone in 2035 should be able to marshal intellectual capacity equivalent to everyone in 2025; everyone should have access to unlimited genius to direct however they can imagine. Today, a great deal of talent lacks the resources to express itself fully; if we change that, the resulting creative output of the world will be an enormous benefit to us all.

Special thanks to Josh Achiam, Boaz Barak and Aleksander Madry for reviewing drafts.

*OpenAI's use of the term AGI here is intended to communicate clearly; OpenAI does not intend to change or reinterpret the definitions and processes that define its relationship with Microsoft, and it fully intends to work with Microsoft over the long term. This footnote may seem silly, but on the other hand, OpenAI knows that some journalists will try to get clicks by writing silly things, so it is preemptively discouraging such silliness here.

Comments: In a footnote at the end of the article, OpenAI emphasizes its longstanding relationship with Microsoft and pre-emptively refutes any possible "hype" that might arise, reflecting OpenAI's cautious approach to technology development and business partnerships.
