
Anthropic CEO's latest 10,000-word article is more rational and practical than Sam Altman's!

In their latest articles, Anthropic CEO Dario Amodei and OpenAI CEO Sam Altman reveal their companies' different priorities for AI development. Amodei emphasizes the interpretability and safety of AI models as the keys to reliable, controllable AI systems, while Altman focuses on AI commercialization and technological breakthroughs, pushing AI capabilities forward through large-scale compute and data. Their long-term goals also differ: Amodei concentrates on AI's long-term impact on human society and on safety and security, while Altman leans toward realizing AI's commercial value and driving social change through technological progress.

 


 

Dario Amodei, who earned a PhD at Princeton and did postdoctoral research in neuroscience at Stanford, describes in his latest article, Machines of Loving Grace, the positive impact that future powerful AI (he prefers not to call it AGI) could have on humanity, with rigorous and detailed reasoning in each area that is far more rational, objective, and grounded than Sam Altman's writing!

 

This 10,000-word article deserves to be read by everyone who cares about AI.

 

Due to its length, I have summarized the key points of the article first. Still, I recommend reading the original; a full translation follows the summary.

 

Core argument: Amodei believes that strong AI (he avoids using the term AGI) has the potential to radically improve human life, despite the risks. He predicts that within 5-10 years of the emergence of strong AI, we will witness great advances in biology, neuroscience, economic development, peace and governance, and work and meaning.

 

Key points:

 

The potential of strong AI is underestimated: Amodei argues that most people underestimate the potential benefits of strong AI just as they underestimate its risks. He believes strong AI can lead to a radically positive future, and that risk is the only obstacle standing between us and that future: "I think most people underestimate the potential benefits of strong AI just as much as they underestimate its risks."

 

Definition and framework for strong AI: Amodei defines strong AI as an AI system, similar in form to today's large language models, that is smarter than a Nobel Prize winner in most relevant fields and possesses all the interfaces and action capabilities of a human working virtually. He predicts that strong AI could arrive as early as 2026 and describes it as "a country of geniuses in a datacenter": "By strong AI, I mean an AI model - likely similar in form to today's large language models, though it may be based on a different architecture, may involve several interacting models, and may be trained differently - with the following properties:"

 

In terms of pure intelligence, it's smarter than a Nobel Prize winner in most relevant fields - biology, programming, math, engineering, writing, etc. This means it can prove unsolved math theorems, write very good novels, write difficult codebases from scratch, etc.

 

Limits on AI progress: Amodei introduces the concept of "marginal returns to intelligence", arguing that factors other than intelligence itself can limit the rate of AI-driven progress: the speed of the outside world, the need for data, intrinsic complexity, constraints from humans, and the laws of physics.

 

"I think what we should be talking about in the age of AI is the marginal benefits of intelligence and trying to figure out what are the other factors that are complementary to intelligence and what are the limiting factors when intelligence is very high."

 

Five key areas of change:

 

1. Biology and health: Amodei predicts that strong AI will accelerate the entire process of biological research, compressing the next 50-100 years of biological and medical advances into 5-10 years. This will lead to the reliable prevention and treatment of virtually all natural infectious diseases, the elimination of most cancers, the effective prevention and treatment of genetic disorders, the prevention of Alzheimer's disease, and the extension of the human lifespan.

 

"My basic prediction is that AI-enabled biology and medicine will enable us to compress into 5-10 years the progress that human biologists will make over the next 50-100 years. I call this the "compressed 21st century": that is, following the development of strong AI, we will make all the advances in biology and medicine that we have made throughout the 21st century within a few years."

 

2. Neuroscience and psychology: Amodei believes that strong AI will revolutionize neuroscience and mental health, both by accelerating the discovery of neuroscientific tools and techniques and by applying insights from AI itself (e.g., interpretability and the scaling hypothesis). This will lead to cures or prevention for most mental illnesses, as well as a huge expansion of human cognitive and emotional capabilities.

 

"My guess is that these four lines of progress working together, like physical illnesses, may cure or prevent most mental illnesses within the next 100 years even without the involvement of AI - so they may be cured with 5-10 years of AI acceleration."

 

3. Economic development and poverty: Amodei acknowledges that AI's ability to address inequality and promote economic growth is less certain than its ability to invent technology. However, he remains optimistic that AI can help developing countries catch up with developed ones by optimizing the distribution of health interventions, promoting economic growth, guaranteeing food security, mitigating climate change, and addressing inequality within countries.

 

"Overall, I am optimistic that biological advances in AI will soon benefit people in developing countries. I am hopeful, but not confident, that AI will also lead to unprecedented rates of economic growth, allowing developing countries to at least catch up to current levels in developed countries."

 

4. Peace and governance: Amodei argues that AI by itself does not guarantee the advancement of democracy and peace, because it can be exploited by authoritarian and democratic regimes alike. He advocates that democracies adopt an "entente strategy": gaining the advantage by securing the supply chain for powerful AI, scaling it up quickly, and blocking or delaying adversaries' access to key resources such as chips and semiconductor equipment. This would let democracies dominate the world stage and ultimately facilitate the spread of democracy globally.

 

"I believe that AI by itself does not guarantee the progress of democracy and peace because it can be utilized by both authoritarian and democratic regimes."

 

5. Work and meaning: Amodei argues that even in a world where AI performs most tasks, humans can still find meaning and purpose. He argues that meaning comes primarily from relationships and connections rather than economic labor. However, he acknowledges that an AI-driven economy could challenge our current economic system, and that a broader social dialogue is needed to explore how the economy should be organized in the future.

 

"I think humans can still find meaning and purpose even in a world where AI performs most of the tasks. I think meaning comes primarily from relationships and connections, not economic labor."

 

Amodei believes that, if handled properly, strong AI can lead to a better world than the one we have today, a world free of disease, poverty and inequality, a world characterized by liberal democracy and human rights. He acknowledges that realizing this vision will require tremendous effort and struggle, and a concerted effort by individuals, AI companies, and policymakers.

 

"If this does happen in 5 to 10 years-the victory over most diseases, the growth of biological and cognitive freedom, the lifting of billions of people out of poverty and sharing in new technologies, the renaissance of liberal democracy and human rights-I think that everyone who sees it will be impact on them."

 

Conclusion: In his article, Amodei offers a compelling vision of how strong AI could reshape human society. He emphasizes the immense potential of strong AI, but also points out the challenges and risks that come with it. He calls for proactive action to guide the development of AI to ensure that it benefits all of humanity.

 

Organizing and proofreading this article took a great deal of effort, so please consider liking, saving, and sharing before reading. Below is the original article.

 

Machines of Loving Grace[01]

How AI Could Transform the World for the Better

--Dario Amodei

Original link: https://darioamodei.com/machines-of-loving-grace

 

I think and talk a lot about the risks of strong AI. Anthropic, the company of which I am CEO, has done a great deal of research on how to mitigate these risks. Because of this, people sometimes conclude that I'm a pessimist or a "doomer" who believes AI is mostly bad or dangerous. I don't think that at all. In fact, one of the main reasons I focus on risk is that risk is the only thing standing between us and the radically positive future I see. I think most people underestimate just how big the benefits of AI could be, just as I think most people underestimate how bad the risks could be.

 

In this article, I've tried to sketch out the outlines of such benefits - what a world with strong AI could look like if all goes well. Of course, no one can know the future with certainty or accuracy, and the impact of strong AI is likely to be even more difficult to predict than past technological changes, so all of this will inevitably consist of speculation. But my goal is to at least make educated, useful guesses that capture what will happen even if most of the details end up being wrong. I've included a lot of detail mainly because I think a concrete vision is a better way to move the discussion forward than a highly vague and abstract one.

 

First, though, I'd like to briefly explain why Anthropic and I haven't talked much about the benefits of strong AI, and why we will probably continue to talk mostly about risks. Specifically, I made this choice out of the following desires:

 

- Maximize leverage. The basic development of AI technology and many (though not all) of its benefits seem inevitable (unless risk spoils everything) and are fundamentally driven by powerful market forces. Risks, on the other hand, are not predetermined, and our actions can dramatically change the likelihood of their occurrence.

 

- Avoid being mistaken for propaganda. AI companies talking about all the amazing benefits of AI can come across as propaganda, or as an attempt to distract from the drawbacks. I also think that, as a matter of principle, spending too much of your time "talking your book" is bad for the soul.

 

- Avoid grandiosity. I'm often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it were their mission to single-handedly bring it about, like a prophet leading their people to salvation. I think it's dangerous to view companies as unilaterally shaping the world, and dangerous to view practical technological goals in religious terms.

 

- Avoid "sci-fi" baggage.While I think most people underestimate the advantages of a strong AI, the few who do discuss radical AI futures often do so in an overly "sci-fi" tone (e.g., uploading minds, space exploration, or a general cyberpunk vibe). I think this leads people to take these claims less seriously and injects them with a sense of unreality. To be clear, the issue isn't whether the technologies described are possible or probable (the main article discusses this at length) - it's more that the "vibe" implies a bunch of cultural baggage and unstated assumptions about what kind of future is desirable, how social problems will evolve, etc. how various social problems will develop, and so on. The result often ends up reading like the fantasies of a narrow subculture, while turning off most people.

 

Despite all these concerns, I do think it's important to discuss what a better world with strong AI could look like, while trying to avoid the pitfalls above. In fact, I think it's crucial to have a genuinely inspiring vision of the future, not just a plan for fighting fires. Much of what surrounds strong AI is adversarial or dangerous, but at the end of it all, there has to be something we're fighting for: a positive-sum outcome where everyone is better off, something that unites people to rise above their squabbles and meet the challenges ahead. Fear can motivate, but it is not enough: we also need hope.

 

There are so many positive application areas for powerful AI (including robotics, manufacturing, energy, etc.), but I'm going to focus on a handful that I think have the most potential to directly improve the quality of human life. The five categories that interest me the most are:

  1. Biology and physical health
  2. Neuroscience and mental health
  3. Economic development and poverty
  4. Peace and governance
  5. Work and meaning

My predictions are going to be radical by most standards (except when judged against sci-fi "Singularity" visions [02]), but I mean them in good faith. Everything I say could easily be wrong (to repeat my point above), but I've at least tried to ground my views in a semi-analytical assessment of how much progress in various fields is likely to accelerate and what that means in practice. I'm fortunate to have professional experience in both biology and neuroscience, and I'm an informed amateur in economic development, but I'm sure I'll still get plenty of things wrong. Writing this article made me realize how valuable it would be to gather a group of domain experts (in biology, economics, international relations, and other fields) to write a better, more informed version of what I've written here. It's probably best to think of my effort here as a starting point for that group.

 

Underlying assumptions and framework

 

To make the whole article more precise and informed, it is helpful to clarify what we mean by powerful AI (i.e., when the 5-10 year countdown begins) and to provide a framework for thinking about the implications of such AI once it exists.

 

What powerful AI (I don't like the term AGI) [03] will look like, and when (or whether) it arrives, is a huge topic in itself. It's one I've discussed publicly and may write about separately at some point. Obviously, many people are skeptical that powerful AI will be built anytime soon, and some doubt it will ever be built at all. I think it could come as early as 2026, though it could also take much longer. But for the purposes of this article, I'd like to set those questions aside, assume it arrives within a reasonable time, and focus on what happens in the 5-10 years afterward. I also want to posit what such a system would look like, what it would be capable of, and how it would interact, though there is room for disagreement on all of that.

 

By powerful AI, I mean an AI model - probably similar in form to today's LLMs, though it may be based on a different architecture, may involve several interacting models, and may be trained differently - with the following properties:

 

- In terms of pure intelligence [04], it's smarter than a Nobel Prize winner across most relevant fields - biology, programming, math, engineering, writing, etc. This means it can prove unsolved math theorems, write very good novels, write difficult codebases from scratch, etc.

 

- In addition to being just an "intelligent thing you talk to", it has all the "interfaces" available to humans working virtually, including text, audio, video, mouse and keyboard control, and Internet access. It can perform any operation, communication, or teleoperation enabled by this interface, including taking action on the Internet, giving or receiving directions to or from humans, ordering materials, directing experiments, watching videos, making videos, and more. Once again, it performs all of these tasks with skills that exceed those of the world's most capable humans.

 

- It doesn't just passively answer questions; it can be given tasks that take hours, days, or weeks to complete, and it then goes off and completes them autonomously like a smart employee, asking for clarification when necessary.

 

- It has no physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or lab equipment through a computer; theoretically, it could even design robots or equipment for itself.

 

- The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes around 2027), and the model can absorb information and generate actions at roughly 10x-100x human speed [05], though it may be bottlenecked by the response time of the physical world or of the software it interacts with. (A rough sketch of the implied scale follows this list.)

 

- Each of these millions of copies could perform unrelated tasks independently or, if desired, work together like humans, perhaps with different subgroups fine-tuned to be especially suited to particular tasks.

 

We can summarize this as a "country of geniuses in a datacenter".
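To make that scale concrete, here is a minimal back-of-envelope sketch in Python. The instance count and speed multipliers are the assumptions stated in the bullets above, not measured figures:

```python
# Back-of-envelope scale of a "country of geniuses in a datacenter",
# using the assumptions from the bullets above (not measured figures).
instances = 1_000_000            # "millions of instances" (assumed)
speed_low, speed_high = 10, 100  # 10x-100x human working speed (assumed)

# Effective genius-equivalents of work per unit of wall-clock time.
print(f"{instances * speed_low:,} to {instances * speed_high:,} "
      "genius-equivalents")
# -> 10,000,000 to 100,000,000 genius-equivalents, before any
#    physical-world bottlenecks are taken into account
```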

 

Obviously, such an entity would be able to solve very hard problems very fast, but it is not trivial to figure out how fast. Two "extreme" positions both seem wrong to me. First, you might think the world would be instantly transformed ("the Singularity") within seconds or days, as superior intelligence builds on itself and immediately cracks every possible scientific, engineering, and operational problem. The problem with this is that there are real physical and practical limits, for example around building hardware or running biological experiments. Even a new country of geniuses would run into these limits. Intelligence is very powerful, but it is not magic.

 

Second, and conversely, you might believe that technological progress is saturated, or limited by real-world data or by social factors, so that smarter-than-human intelligence adds very little [06]. To me this is equally implausible - I can think of hundreds of scientific and even social problems where a large group of really smart people would dramatically speed up progress, especially if they weren't limited to analysis and could make things happen in the real world (which our hypothetical country of geniuses can, including by directing or assisting teams of humans).

 

I think the truth is probably some messy mix of these two extreme pictures, varying by task and field, and hinging on very subtle details. I believe we need new frameworks for thinking about these details productively.

 

Economists often talk about "factors of production": things like labor, land, and capital. The phrase "marginal returns to labor/land/capital" captures the idea that a given factor may or may not be the limiting one in a given situation - for example, an air force needs both planes and pilots, and if you're out of planes, hiring more pilots doesn't help much. I believe that in the age of AI, we should be talking about the "marginal returns to intelligence" [07], and trying to figure out which other factors are complementary to intelligence, and which become limiting when intelligence is very high. We're not used to thinking this way - asking "how much does being smarter help with this task, and on what timescale?" - but it seems like the right way to conceptualize a world with very powerful AI. (The toy model below illustrates the idea.)
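To illustrate this limiting-factor framing, here is a toy model in Python (my own sketch, not Amodei's; it simply assumes output is gated by the scarcest complementary factor, like planes vs. pilots):

```python
# Toy model of "marginal returns to intelligence" (illustrative
# assumption: output is gated by the scarcest complementary factor).
def research_output(intelligence: float, experiment_speed: float,
                    data: float) -> float:
    return min(intelligence, experiment_speed, data)

print(research_output(10, 3, 5))   # -> 3: experiments are the bottleneck
print(research_output(100, 3, 5))  # -> 3: 10x more intelligence adds nothing
print(research_output(100, 6, 5))  # -> 5: faster experiments help at once,
                                   #    until data becomes the new bottleneck
```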

 

My list of factors that I suspect limit intelligence, or are complementary to it, includes:

 

- Speed of the outside world. Intelligent agents need to interact with the world to get things done, and also to learn [08]. But the world only moves so fast. Cells and animals run at a fixed speed, so experiments on them take a fixed amount of time that may be irreducible. The same goes for communicating with humans, and for our existing software infrastructure. Moreover, in science many experiments must be run sequentially, each one learning from or building on the last. All of this means there may be an irreducible minimum time for completing a major project - such as developing a cure for cancer - that cannot be shortened further even as intelligence keeps increasing.

 

- Need for data. Sometimes raw data is simply lacking, and where data is absent, more intelligence doesn't help. Today's particle physicists are very smart and have developed a range of theories, but because particle accelerator data is so limited, they lack the data to choose between them. It's not clear they would do dramatically better with superintelligence - except perhaps by speeding up the construction of bigger accelerators.

 

- Intrinsic complexity. Some things are inherently so unpredictable or chaotic that even the most powerful AI cannot predict or untangle them much better than today's humans or computers. For example, even a very powerful AI could likely predict a chaotic system (such as the three-body problem) only marginally further ahead than today's humans and computers can [09].

 

- Constraints from humans. Many things cannot be done without breaking laws, harming humans, or disrupting society. An aligned AI won't want to do these things (and if we have unaligned AI, we're back to talking about risks). Many human social structures are inefficient or even actively harmful, yet hard to change while respecting constraints like legal requirements, people's willingness to change their habits, or the behavior of governments. Technologies including nuclear power, supersonic flight, and even elevators have worked well technically, but their impact has been dramatically curtailed by regulation or misplaced fear.

 

- Physical laws. This is a starker version of the first point. There are laws of physics that appear unbreakable. It's not possible to travel faster than light. Pudding does not unstir. Chips can only pack so many transistors per square centimeter before they become unreliable. Computation requires a minimum amount of energy per bit erased, limiting the density of computation in the world.

 

There is a further distinction based on timescale. Things that are hard constraints in the short run may become more malleable to intelligence in the long run. For example, intelligence might be used to develop a new experimental paradigm that lets us learn in vitro what previously required live animal experiments, or to build the tools needed to collect new data (e.g., bigger particle accelerators), or (within ethical limits) to find ways around human-based constraints (e.g., helping to improve the clinical trial system, helping to create new jurisdictions where clinical trials carry less bureaucracy, or improving the science itself so that human clinical trials become less necessary or cheaper).

 

We should therefore imagine a picture where intelligence is initially heavily bottlenecked by other factors of production, but over time intelligence itself increasingly bypasses the other factors, even if they never completely disappear (some things, such as the laws of physics, are absolute) [10]. The key question is how fast everything happens, and in what order.

 

With the above framework in mind, I will attempt to answer questions in the five areas mentioned in the introduction.

 

1. Biology and health

 

Biology is probably the area where scientific progress has the greatest potential to directly and unambiguously improve the quality of human life. In the last century, some of humanity's oldest afflictions (such as smallpox) were finally defeated, but many more remain, and defeating them would be an enormous humanitarian achievement. Beyond curing disease, the biological sciences could in principle raise the baseline quality of human health: extending the healthy human lifespan, increasing our control and freedom over our own biological processes, and addressing everyday problems we currently treat as immutable parts of the human condition.

 

In the language of the constraints from the previous section, the main challenges to applying intelligence directly to biology are data, the speed of the physical world, and intrinsic complexity (in fact, all three are related). Constraints from humans also come into play at a later stage, when clinical trials are involved. Let's take these one by one.

 

Experiments on cells, animals, and even chemical processes are limited by the speed of the physical world: many biological protocols involve culturing bacteria or other cells, or simply waiting for a chemical reaction to occur, which can take days or even weeks with no obvious way to speed it up. Animal experiments can take months (or longer), and human experiments often take years (even decades for long-term outcome studies). Relatedly, data is often lacking - not in quantity, but in quality: there is always a shortage of clear, unambiguous data that isolates a biological effect of interest from the 10,000 other things going on, that intervenes causally in a given process, or that measures an effect directly (rather than inferring its consequences in some indirect or noisy way). Even big, quantitative molecular data, like the proteomics data I collected when working with mass spectrometry, is noisy and misses a great deal (which types of cells are these proteins in? In which part of the cell? At what stage of the cell cycle?).

 

Part of the blame for the data problem lies with intrinsic complexity: if you've ever seen the charts showing the biochemistry of human metabolism, you'll know how hard it is to isolate the effect of any one part of this complex system, let alone intervene in it precisely or predictably. And finally, beyond the intrinsic time experiments on humans take, actual clinical trials involve a great deal of bureaucracy and regulatory requirements that (in the opinion of many, including me) add unnecessary extra time and delay.

 

Given this, many biologists have been skeptical about the value of AI and "big data" in biology. Historically, mathematicians, computer scientists, and physicists have applied their skills to biology over the past 30 years with considerable success, but without realizing the truly transformative impact that was originally hoped for. Some skepticism has been lessened by major and revolutionary breakthroughs such as AlphaFold (which just deservedly won its creator the Nobel Prize in Chemistry) and AlphaProteo [11], but there is still a perception that AI is (and will continue to be) only useful in a limited number of situations. A common formulation is "AI can do a better job of analyzing data, but it can't produce more data or improve the quality of that data. Garbage in, garbage out".

 

But I think that pessimistic view is thinking about AI in the wrong way. If our core assumptions about AI progress are correct, then the right way to think of AI is not as a method of data analysis but as a virtual biologist who performs all the tasks biologists do, including designing and running experiments in the real world (by controlling lab robots or simply telling humans which experiments to run - just as a principal investigator does with their graduate students), inventing new biological methods or measurement techniques, and so on. It is by speeding up the whole research process that AI can truly accelerate biology. I want to repeat this, because it's the most common misconception I encounter when talking about AI's ability to transform biology: I am not talking about AI as merely a tool for analyzing data. Per the definition of powerful AI at the start of this article, I'm talking about using AI to perform, direct, and improve upon nearly everything biologists do.

 

To be more specific about where I think the acceleration will come from: a surprisingly large fraction of progress in biology has come from a truly tiny number of discoveries, often related to tools or techniques that allow precise but generalized or programmable intervention in biological systems [12]. There are perhaps about one of these major discoveries per year, and collectively they arguably drive more than 50% of progress in biology. These discoveries are so powerful precisely because they cut through intrinsic complexity and data limitations, directly increasing our understanding of and control over biological processes. A handful of discoveries per decade have underpinned both our basic scientific understanding of biology and many of the most powerful medical treatments.

 

Some examples include:

 

- CRISPR: a technique that allows editing of any gene in living organisms (replacing any arbitrary gene sequence with any other arbitrary sequence). Since the original technique was developed, there has been a continual stream of improvements to target specific cell types, increase accuracy, and reduce off-target edits - all of which are needed for safe use in humans.

 

- Various types of microscopes for observing what is happening on a precise level: advanced optical microscopes (with various fluorescence techniques, special optics, etc.), electron microscopes, atomic force microscopes, etc.

 

- Genome sequencing and synthesis, the cost of which has fallen by several orders of magnitude in the last few decades.

 

- Optogenetic technology, in which neurons can be made to fire by exposure to light.

 

- mRNA vaccines, in principle, allow us to design a vaccine against anything and then quickly adapt it (mRNA vaccines certainly became famous during COVID).

 

- Cell therapies like CAR-T allow immune cells to be taken out of the body and "reprogrammed" to attack anything in principle.

 

- Conceptual insights such as the germ theory of disease and the link between the immune system and cancer [13].

 

I've gone to the trouble of listing all these technologies because I want to make a key claim about them: I think the rate of these discoveries could be increased 10x or more if there were a lot more talented, creative researchers. Or, put differently: I think the returns to intelligence are high for these discoveries, and that everything else in biology and medicine mostly follows from them.

 

Why do I think that? Because we should get used to asking some questions when trying to determine the "returns to intelligence". First, these discoveries are usually made by a very small number of researchers, often repeatedly by the same group of people, which suggests skill rather than random search (the latter might suggest that lengthy experiments are the limiting factor). Second, they can often be made "years before their time": for example, CRISPR, a naturally occurring component of the immune system in bacteria, has been known since the 1980s, but it took another 25 years for people to realize it could be repurposed for general gene editing. They were also delayed for many years due to a lack of support from the scientific community for promising directions (see the profile on the inventor of the mRNA vaccine; similar stories abound). Third, successful projects are often fringe or afterthoughts that were not initially thought to be promising, rather than massively funded endeavors. This suggests that it is not just large-scale resource pooling that drives discovery, but ingenuity.

 

Finally, although some of these discoveries are "serially dependent" (you need discovery A before you have the tools or knowledge to make discovery B) - which can again cause experimental delays - many, perhaps most, are independent, meaning many can be pursued in parallel. All of this, plus my general experience as a biologist, strongly suggests that if scientists were smarter and better at making connections across the vast body of biological knowledge humanity already possesses (again, consider the CRISPR example), there are hundreds of such discoveries just waiting to be made. The success of AlphaFold/AlphaProteo in solving important problems far more effectively than humans could provides a proof of principle (albeit with narrow tools in a narrow domain) that should point the way forward.

 

Therefore, I suspect that powerful AI could increase the rate of these discoveries by at least 10x, giving us the next 50-100 years of biological progress in 5-10 years [14]. Why not 100x? Perhaps that's possible, but here serial dependence and experiment time become important: getting 100 years of progress in 1 year requires a lot of things to go right the first time, including animal experiments and designing microscopes or expensive lab facilities (the sketch below makes this serial-bottleneck intuition concrete). I'm open to the (perhaps absurd-sounding) idea that we might get 1,000 years of progress in 5-10 years, but I'm very skeptical that we can get 100 years in 1 year. Put another way, I think there is an unavoidable constant delay: experiments and hardware design have a certain latency, and a certain number of "irreducible" iterations are needed to learn things that cannot be deduced logically. But massive parallelism may be possible on top of that [15].
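One way to make the serial-dependence intuition concrete is Amdahl's law from parallel computing - my analogy, not the essay's: if a fraction s of the work is irreducibly sequential (waiting on experiments, hardware builds), no amount of parallel intelligence can deliver a speedup beyond 1/s.

```python
# Amdahl's-law analogy for research acceleration (illustrative, not
# Amodei's model): `serial` is the fraction of work that must happen
# in sequence; adding parallelism barely helps once 1/serial is reached.
def speedup(serial: float, parallelism: float) -> float:
    return 1 / (serial + (1 - serial) / parallelism)

print(speedup(serial=0.10, parallelism=1_000_000))  # ~10x: 50-100 years of
                                                    # progress in 5-10 years
print(speedup(serial=0.01, parallelism=1_000_000))  # ~100x only if almost
                                                    # nothing is serial
```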

 

What about clinical trials? For all the bureaucracy and slowdown associated with them, the truth is that much of their slowness ultimately stems from the need to rigorously evaluate drugs that barely work or vaguely work. This is sadly true of most therapies today: the average cancer drug adds months to survival while having significant side effects that need to be carefully measured (a similar story with Alzheimer's drugs). This leads to huge studies (in order to gain statistical power) and difficult trade-offs, and regulators are usually not very good at making decisions, again because of bureaucracy and the complexity of competing interests.

 

When something actually works, it goes much faster: there are accelerated approval tracks, and when the effect is large, the ease of approval is much greater. mRNA vaccines for COVID were approved in 9 months - much faster than usual. Still, even under these conditions, clinical trials were too slow - the mRNA vaccine probably should have been approved in about 2 months. But these delays (~1 year total for drugs) are very compatible with massive parallelism and the need for some, but not too many, iterations ("a couple of tries"), which could lead to a fundamental shift in 5-10 years. More optimistically, AI-driven biological science may reduce the need for iteration in clinical trials by developing better models (or even simulations) of animal and cellular experiments that are more accurate in predicting what will happen in humans. This will be especially important when developing drugs that target the aging process, which takes decades and for which we need a faster iterative cycle.

 

Finally, on the topic of clinical trials and social barriers, it is worth making clear that biomedical innovations have an unusually strong track record of successful deployment in some respects compared to some other technologies [16]. As noted in the introduction, many technologies, despite working well technically, are hindered by social factors. This may indicate a pessimistic view of what AI can accomplish. However, biomedicine is unique in that, despite the overly cumbersome process of developing drugs, once developed they are often successfully deployed and used.

 

In summary, my basic prediction is that AI-driven biology and medicine will allow us to compress into 5-10 years the progress that human biologists would otherwise make over the next 50-100 years. I call this the "compressed 21st century": the idea that within a few years of developing powerful AI, we will make all the biological and medical progress we would have made over the whole 21st century.

 

While predicting what powerful AI can do in a few years is inherently difficult and speculative, there is something concrete about asking "what could humans achieve unaided over the next 100 years?". Simply looking at what we accomplished in the 20th century, extrapolating from the first 20 years of the 21st, or asking what "10 CRISPRs and 50 CAR-Ts" would get us are all practical, informed ways of estimating the general level of progress we might expect from powerful AI.

 

Below I've tried to list what we might expect. It's not based on any rigorous methodology, and will almost certainly prove wrong in the details, but it tries to convey the general level of radicalism we should expect:

 

- Reliable prevention and treatment of almost all natural infectious diseases [17]. Given the enormous advances against infectious disease in the 20th century, it is not radical to imagine that we could "finish the job" in a compressed 21st century. mRNA vaccines and similar technologies already point toward "vaccines for anything". Whether infectious disease is totally eradicated worldwide (rather than only in some places) depends on poverty and inequality, which are discussed in Section 3.

 

- Elimination of most cancers. Cancer death rates have been declining by about 2% per year for the past few decades; at the current pace of human science, we are on track to eliminate most cancers some time in the 21st century (see the compounding sketch after this list). Some subtypes have already largely been cured (for example, certain kinds of leukemia via CAR-T therapy), and I'm perhaps even more excited about highly selective drugs that target cancer in its infancy and stop it from growing. AI will also make it possible to tailor treatment regimens finely to the individual genome of the cancer - something possible today but enormously expensive in time and human expertise, which AI should let us scale.

 

- Very effective prevention and effective cures for genetic disease. Greatly improved embryo screening will likely make it possible to prevent most genetic disease, and safer, more reliable descendants of CRISPR may cure most genetic disease in people already alive. Whole-body diseases that affect a large fraction of cells may be the last holdouts, however.

 

- Prevention of Alzheimer's disease.We've been having a hard time figuring out what causes Alzheimer's disease (it's related to beta amyloid, but the actual details seem to be very complex). It seems to be something that can be solved with better measurement tools that isolate biological effects; so I'm optimistic that AI will solve it. Once we really understand what's going on, there's a good chance it can be prevented with relatively simple interventions. Nonetheless, the damage of Alzheimer's disease that already exists may be difficult to reverse.

 

- Improved treatment of most other ailments. This is a catch-all category for other diseases, including diabetes, obesity, heart disease, autoimmune disease, and more. Most of these appear "easier" to tackle than cancer and Alzheimer's, and in many cases they are already in steep decline. For example, death rates from heart disease have already fallen more than 50%, and simple interventions such as GLP-1 agonists have made great progress against obesity and diabetes.

 

- Biological freedom.Over the last 70 years we have made progress in contraception, fertility management, weight management, etc., but I think AI-accelerated biology will dramatically expand the range of possibilities: weight, appearance, reproduction, and other biological processes will be under people's complete control. We'll refer to this under the heading of "biological freedom," which means that everyone should have the right to choose who they want to be and to live their lives in the way that appeals to them most. Of course, there are questions about global equal access; these are discussed in section 3.

 

- Doubling of the human lifespan [18]. This may sound radical, but life expectancy nearly doubled in the 20th century (from about 40 years to about 75), so it's "on trend" for the "compressed 21st century" to double it again to 150. Obviously, the interventions needed to slow the actual aging process differ from those that prevented the premature deaths (mostly of children) of the last century, but the magnitude of change is not unprecedented [19]. Concretely, several drugs already exist that increase the maximum lifespan of rats by 25-50% with limited side effects, and some animals (e.g., certain kinds of turtles) already live 200 years, so humans are clearly nowhere near a theoretical upper limit. What is needed most is probably reliable, hard-to-game biomarkers of human aging, as these would allow fast iteration in experiments and clinical trials. Once the human lifespan reaches 150, we may be able to reach "escape velocity", buying enough time for most people alive today to live as long as they want, although there's no guarantee this is biologically possible.
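As promised in the cancer item above, here is a quick compounding check (my arithmetic, assuming the cited ~2%-per-year decline simply continues; not a figure from the essay):

```python
# Compounding check: cancer mortality falling ~2% per year.
for years in (30, 50, 100):
    remaining = 0.98 ** years
    print(f"{years} years at -2%/yr -> {remaining:.0%} of today's mortality")
# -> ~55% after 30 years, ~36% after 50, ~13% after 100: a century of the
#    existing trend already eliminates most cancer deaths, and a ~10x
#    AI acceleration would compress that century into roughly a decade.
```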

 

It's worth looking at this list and reflecting on how different the world would be if all of it were achieved 7-12 years from now (which would fit an aggressive AI timeline). It goes without saying what an unimaginable humanitarian triumph it would be: the elimination, in one stroke, of most of the scourges that have plagued humanity for millennia. Many of my friends and colleagues are raising children, and when those children grow up, I hope any mention of disease will sound to them the way scurvy, smallpox, or the bubonic plague sound to us. That generation will also benefit from increased biological freedom and self-expression and, with luck, the ability to live as long as they want.

 

It's hard to overestimate how surprising these changes will be to everyone (except for small communities anticipating powerful AI). In the United States, for example, thousands of economists and policy experts are currently debating how to keep Social Security and Medicare solvent and, more broadly, how to reduce health care costs (which are largely driven by people over the age of 70 and, in particular, those with terminal illnesses such as cancer). If all of this is achieved, the situation for these programs could radically improve [20], as the ratio of working age to retired population would change dramatically. No doubt these challenges will be replaced by others, such as how to ensure widespread access to new technologies, but it is worth reflecting on how much the world will change even if biology is the only area that succeeds in accelerating.

 

2. Neuroscience and the mind

 

In the previous section I focused on physical disease and biology in general, and didn't cover neuroscience or mental health. But neuroscience is a subdiscipline of biology, and mental health is just as important as physical health. In fact, if anything, mental health affects human well-being even more directly. Hundreds of millions of people have a very low quality of life due to problems such as addiction, depression, schizophrenia, low-functioning autism, PTSD, psychopathy [21], or intellectual disability. Billions more struggle with everyday problems that can often be seen as milder versions of these serious clinical disorders. And as with biology in general, it may be possible to go beyond fixing problems and raise the baseline quality of human experience.

 

The basic framework I outlined for biology applies equally well to neuroscience. The field is driven by a handful of discoveries related to measurement or precise intervention tools - in the list above, optogenetics is a neuroscience discovery, and recent advances in CLARITY and expansion microscopy are advances in the same direction, in addition to a number of common cell biology methods directly applicable to neuroscience. I think the rate of these advances will be similarly accelerated by AI, so the "100 years of progress in 5-10 years" framework applies in the same way to neuroscience as it does to biology, and for the same reasons. As in biology, the advances in neuroscience in the 20th century were enormous - for example, until the 1950s we didn't even understand how or why neurons fired. It seems reasonable, therefore, that AI-accelerated neuroscience will produce rapid progress within a few years.

 

One thing we should add to this basic picture is that some of what we've learned (or are learning) about AI itself in the last few years may help advance neuroscience, even if neuroscience continues to be done only by humans. Interpretability is the obvious example: although biological neurons superficially operate quite differently from artificial neurons (they communicate through spikes and spike rates, so there is a time element not present in artificial neurons, and many details related to cell physiology and neurotransmitters substantially modify their operation), the question of "how distributed, trained networks of simple units perform combined linear/nonlinear computations" is the same, and I strongly suspect the details of individual neuron communication will be abstracted away in most of the interesting questions about computation and circuits [22]. As an example, a computational mechanism discovered by interpretability researchers in AI systems was recently rediscovered in the brains of mice.

 

It is much easier to perform experiments on artificial neural networks than on real networks (the latter usually requires cutting up animal brains), so interpretability may be a tool to improve our understanding of neuroscience. In addition, powerful AI itself may be able to develop and apply this tool better than humans.

 

Of course, beyond interpretability, what we learn from AI about how intelligent systems are trained should (though I'm not sure it has yet) trigger a revolution in neuroscience.

 

When I worked in neuroscience, many people focused on what I would now consider the wrong questions about learning, because the concepts of the scaling hypothesis and the bitter lesson had not yet emerged. The idea that a simple objective function plus a huge amount of data can drive incredibly complex behavior makes it more interesting to understand objective functions and architectural biases than the details of the emergent computations. I haven't followed the field closely in recent years, but I have a vague sense that computational neuroscientists still haven't fully absorbed this lesson. My attitude toward the scaling hypothesis has always been "aha - this is a high-level explanation of how intelligence works and why it evolved so easily", but I don't think that's the average neuroscientist's view, partly because even within AI the scaling hypothesis as the "secret of intelligence" is not fully accepted.

 

I think neuroscientists should combine this basic insight with the particularities of the human brain (biophysical constraints, evolutionary history, topology, the details of motor and sensory inputs and outputs) to try to crack key neuroscience puzzles. Some of this may already be happening, but I don't think it's nearly enough, and AI neuroscientists will be able to exploit this perspective far more effectively to accelerate progress.

 

I anticipate that AI will accelerate neuroscience progress through four different routes, all of which can work together to cure most mental illnesses and improve functioning:

 

- Traditional molecular biology, chemistry, and genetics. This is essentially the same story as general biology in Section 1, and AI can probably accelerate it through the same mechanisms.

 

There are many drugs that modulate neurotransmitters to alter brain function, affect alertness or perception, change mood, etc., and AI can help us invent many more. AI may also accelerate research into the genetic basis of mental illness.

 

- Fine-grained neural measurement and intervention. This is the ability to measure what many individual neurons or neural circuits are doing, and to intervene to change their behavior. Optogenetics and neural probes are techniques that can both measure and intervene in living organisms, and some very advanced methods (e.g., molecular ticker tapes that read the firing patterns of large numbers of individual neurons) have also been proposed and seem possible in principle.

 

- Advanced computational neuroscience. As noted above, both the specific insights and the overall perspective of modern AI may well be applicable to questions in systems neuroscience, including potentially uncovering the true causes and dynamics of complex conditions such as psychosis or mood disorders.

 

- Behavioral interventions. Given the focus on the biological side of neuroscience, I haven't said much about them, but psychiatry and psychology developed a broad repertoire of behavioral interventions in the 20th century; it stands to reason that AI can accelerate these too, both by developing new methods and by helping patients stick with existing ones. More broadly, the idea of an "AI coach" that is always helping you be the best version of yourself, studying your interactions and helping you learn to be more effective, seems very promising.

 

As with the fight against physical disease, I think these four routes of progress, working together, could hopefully cure or prevent most mental illness within the next 100 years even without AI - and thus probably within 5-10 years with AI acceleration. Specifically, my guess is this:

 

- Most mental illness can probably be cured. I'm not an expert in psychiatric disorders (my time in neuroscience was spent building probes to study small groups of neurons), but my guess is that conditions like PTSD, depression, schizophrenia, and addiction can be figured out and treated very effectively through some combination of the four routes above. The answer may turn out to be "something is wrong biochemically" (though it could be very complex) or "something is wrong with the neural network at a high level" - that is, a systems neuroscience problem - though neither of these negates the impact of the behavioral interventions discussed above. Tools for measurement and intervention in living humans seem likely to enable rapid iteration and progress.

 

- A very "structural" situation may be more difficult, but not impossible.There is some evidence that psychopathy is associated with distinct neuroanatomical differences - certain brain regions are simply smaller or less developed in psychopaths. Psychopaths are also thought to lack empathy from a very early age; this may have always been the case, regardless of the differences in their brains. The same might apply to certain intellectual disabilities and perhaps other conditions. Rewiring the brain sounds difficult, but it seems to be a task with a high payoff for intelligence. There may be ways to induce the adult brain into an earlier or more plastic state so that it can be remodeled. I'm very uncertain about this, but my instinct is to be optimistic about what AI can invent in this area.

 

- Effective genetic prevention of mental illness seems possible. Most psychiatric disorders are partially heritable, and genome-wide association studies are starting to make progress in identifying the relevant factors, which are usually numerous. It will probably be possible to prevent most of these disorders via embryo screening, similar to the story with physical disease. One difference is that psychiatric disorders are more likely to be polygenic (many genes contribute), so the complexity brings a risk of unintentionally selecting against positive traits that correlate with the condition. Curiously, though, GWAS results in recent years seem to suggest these correlations may have been overstated. In any case, AI-accelerated neuroscience may help us resolve these questions. Of course, embryo screening for complex traits raises a host of societal issues and will be controversial, though I'd guess most people would support screening for severe or debilitating psychiatric disorders.

 

- Everyday problems that we don't think of as clinical disease will also be solved. Most of us have everyday psychological struggles that aren't usually considered to rise to the level of clinical illness. Some people are quick to anger, others have trouble focusing or are often drowsy, some are fearful or anxious or respond badly to change. Today there are drugs that help with alertness or focus (caffeine, modafinil, Ritalin), but as in many other areas, far more may be possible. There are probably many more such drugs that haven't been discovered, and there may be entirely new modalities of intervention, such as targeted light stimulation (see optogenetics above) or magnetic fields. Given how many drugs we developed in the 20th century to modulate cognitive function and emotional state, I'm very optimistic about a "compressed 21st century" in which everyone can have their brain operate a bit better and have a more fulfilling daily experience.

 

- Human baseline experience can be much better. Taking this a step further: many people have experienced extraordinary moments of revelation, creative inspiration, compassion, fulfillment, transcendence, love, beauty, or meditative peace. The character and frequency of these experiences differ greatly from person to person and moment to moment, and they can sometimes be triggered by various drugs (though often with side effects). All of this suggests that the "space of possible experience" is very wide, and that a larger fraction of people's lives could consist of these extraordinary moments. It is probably also possible to improve various cognitive functions across the board. This may be the neuroscience version of "biological freedom" or "extended lifespans".

 

A topic that often comes up in science fiction, but which I have deliberately not discussed here, is the idea of "consciousness uploading", capturing the patterns and dynamics of the human brain and instantiating them into software. This topic could be the subject of a paper in its own right, but the short answer is that while I think uploading is possible in principle, in practice it faces significant technical and social challenges that, even with powerful AI, might put it outside the 5-10 year window we're talking about.

 

In short, AI-accelerated neuroscience could dramatically improve treatment, possibly even cure most mental illnesses, and dramatically expand "cognitive and spiritual freedom" and human cognitive and emotional capacities. It will be as radical as the physical health improvements described in Section 1. Perhaps the world would not look significantly different on the outside, but the world of human experience would be a better, more humane place, and one that offered more opportunities for self-actualization. I also suspect that improved mental health would ameliorate many other social problems, including those that appear to be political or economic.

 

3. Economic development and poverty

 

The first two sections are about developing new technologies that cure disease and improve the quality of human life. However, from a humanitarian perspective, an obvious question is: "will everyone have access to these technologies?"

 

Developing a cure for a disease is one thing; eradicating it from the world is another. More broadly, many existing health interventions have yet to be applied in other parts of the world, and the same is often true of (non-health) technological improvements. In other words, living standards in many parts of the world are still extremely poor: sub-Saharan Africa has a GDP per capita of about $2,000, compared to about $75,000 in the US. If AI further increases economic growth and quality of life in the developed world while doing little to help the developing world, we should view this as a terrible moral failure and a stain on the real humanitarian victories of the previous two sections. Ideally, powerful AI should help the developing world "catch up" with the developed world, even if it revolutionizes the latter.

 

I am less confident that AI can solve inequality and economic growth than I am that it can invent fundamental technologies, because technology has such clearly high returns to intelligence (including the ability to route around complexity and lack of data), whereas the economy involves a lot of constraints from humans as well as a large dose of intrinsic complexity. I'm somewhat skeptical that an AI could solve the famous "socialist calculation problem" [23], and even if it could, I don't think governments would (or should) hand their economic policy over to such an entity. There are also problems like how to convince people to take treatments that are effective but that they may be suspicious of.

 

The challenges facing the developing world are compounded by pervasive corruption in both the public and private sectors. Corruption creates a vicious circle: it exacerbates poverty, and poverty in turn breeds more corruption. AI-driven plans for economic development need to reckon with corruption, weak institutions, and other very human challenges.

 

Nevertheless, I see clear reasons for optimism. Diseases have been eradicated and many countries have gone from poor to rich, and it is clear that the decisions involved in these tasks show high returns to intelligence (despite the human constraints and complexity), so AI may be able to do them better than humans currently do. There may also be targeted interventions that bypass human constraints, and AI could focus on those. More importantly, we have to try: both AI companies and policymakers in the developed world need to do their part to ensure that the developing world is not left behind; the moral imperative is too great. So in this section, I will continue to make the case for optimism, but keep in mind that success is not guaranteed and depends on our collective efforts.

 

Below I speculate on how I think the developing world might look in 5-10 years after powerful AI is developed:

 

- Distribution of health interventions. The area where I am probably most optimistic is the distribution of health interventions around the world. Diseases have historically been eradicated by top-down campaigns: smallpox was completely eliminated in the 1970s, and polio and guinea worm disease are nearly eradicated, with fewer than 100 cases per year. Mathematically sophisticated epidemiological modeling has played an active role in disease eradication campaigns, and it seems very likely that smarter-than-human AI systems could do this job better than humans. The logistics of distribution could also be greatly optimized. One thing I have learned as an early donor to GiveWell is that some health charities are far more effective than others; hopefully AI-accelerated efforts will be more effective still. In addition, some biological advances actually make the logistics of distribution easier: malaria, for example, is difficult to eradicate because it requires treating each case of the disease every time someone contracts it; a vaccine that requires only a single dose makes the logistics much simpler (and in fact such a vaccine for malaria is currently in development). Even simpler distribution mechanisms are possible: for example, eradicating some diseases by targeting their animal vectors, e.g., releasing mosquitoes infected with a bacterium that stops them from carrying the disease (and who then infect all the other mosquitoes), or simply using gene drives to eliminate mosquitoes. This would require one or a few centralized actions rather than a coordinated campaign that has to treat millions of people individually. Overall, I think 5-10 years is a reasonable timeframe for a significant portion of AI-driven health benefits (perhaps 50%) to spread to the world's poorest countries. A good goal might be for the developing world to be at least as healthy, 5-10 years after powerful AI, as the developed world is today, even if it continues to lag behind the developed world. Of course, achieving this will require a huge effort in global health, philanthropy, political advocacy, and many other areas, and both AI developers and policymakers should help.

 

- Economic growth. Can the developing world rapidly catch up with the developed world, not only in health but across the whole economy? There is precedent: in the last decades of the 20th century, several East Asian economies achieved sustained annual real GDP growth rates of about 10%, allowing them to catch up with the developed world. Human economic planners made the decisions behind this success not by directly controlling the entire economy but by pulling a few key levers (such as industrial policy for export-led growth, and resisting the temptation to rely on natural resource wealth); "artificially intelligent finance ministers and central bankers" might replicate or even surpass this 10% achievement. An important question is how to get governments in the developing world to adopt such policies while respecting the principle of self-determination - some may be enthusiastic, but others may be skeptical. On the optimistic side, many of the health interventions from the previous point may naturally increase economic growth: eradicating AIDS/malaria/parasitic worms would have a transformative effect on productivity, not to mention that some neuroscience interventions (such as improving mood and concentration) would bring economic benefits in both the developed and developing worlds. Finally, non-health AI-accelerated technologies (such as energy technology, transport drones, improved building materials, better logistics and distribution, and so on) may naturally permeate the world; for example, even in sub-Saharan Africa, cell phones rapidly became commonplace through market mechanisms, without philanthropic efforts. On the more negative side, despite the many potential benefits of AI and automation, they also pose a challenge to economic development, especially for countries that have not yet industrialized. Finding ways to ensure those countries can still develop and improve their economies in the age of automation is an important challenge for economists and policymakers to address. Overall, a dream scenario - perhaps a goal to aim for - would be an annual GDP growth rate of 20% in the developing world, with 10% coming from AI-enabled economic decision-making and another 10% from the AI-accelerated natural diffusion of technology, including but not limited to health. If realized, this would bring sub-Saharan Africa up to China's current GDP per capita within 5-10 years, while putting much of the rest of the developing world above current US levels (a rough compounding check appears after this list). Again, this is a dream scenario, not something that will happen by default: it is something we all must work together to make more likely.

 

- Food security [24]. Technological advances like better fertilizers and pesticides, more automation, and more efficient land use dramatically increased crop yields in the 20th century and saved millions of people from starvation. Genetic engineering is now further improving many crops. Finding even more ways to do so - and making agricultural supply chains more efficient - could give us an AI-driven second Green Revolution, helping to close the gap between the developing and developed worlds.

 

- Mitigation of climate change. Climate change will be felt much more strongly in the developing world, hindering its development. We can expect AI to lead to improvements in technologies that mitigate or prevent climate change, from atmospheric carbon removal and clean energy technology to lab-grown meat that reduces our reliance on carbon-intensive factory farming. Of course, as discussed above, technology isn't the only thing limiting progress on climate change - as with all the other issues discussed in this essay, human social factors matter. But there is good reason to think that AI-enhanced research will give us the means to make mitigating climate change far less costly and disruptive, rendering many of the objections moot and unlocking the potential for more economic development in developing countries.

 

- Inequality within countries. I have mainly discussed inequality as a global phenomenon (which I do believe is its most important manifestation), but of course inequality also exists within countries. With the dramatic advance of health interventions, especially longevity or cognitive-enhancement drugs, there is certainly a valid concern that these technologies will be "for the rich only". I am relatively optimistic about inequality within developed countries, for two reasons. First, markets work better in the developed world, and markets are generally good at driving down the cost of high-value technologies over time [25]. Second, political institutions in developed countries are more responsive to their citizens and have greater state capacity to implement universal-access programs - and I expect citizens to demand access to technologies that so fundamentally improve their quality of life. Of course, such demands won't automatically succeed - this is another place where we collectively need to do everything we can to ensure a fair society. There is a separate problem of wealth inequality (as opposed to inequality in access to life-saving and life-improving technologies), which seems harder to address and which I discuss in Section 5.

 

- Opt-out problems. In both developed and developing countries there is the problem of people "opting out" of AI-enabled benefits (similar to the anti-vaccine movement, or Luddite movements more generally). There could be bad feedback loops in which, for example, the people least able to make good decisions opt out of the very technologies that would improve their decision-making, leading to a widening gap and perhaps even creating a dystopian underclass (which some researchers believe would undermine democracy, a topic I discuss further in the next section). This would once again put a moral stain on AI's positive progress. It is a difficult problem to solve, since I don't think it is ethically acceptable to coerce people, but we can at least try to improve people's scientific understanding - and perhaps AI itself can help us do that. One encouraging sign is that anti-technology sentiment has historically been more bark than bite: opposing modern technology is popular as rhetoric, but most people end up adopting it, at least when it comes down to individual choice. Individuals tend to adopt most health and consumer technologies, while the technologies that do get truly blocked, such as nuclear energy, tend to be collective political decisions.
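As a sanity check on the arithmetic behind the "dream scenario" in the economic-growth point above, here is a minimal compounding sketch in Python. The per-capita figures are the rough ones cited in this section, rounded for illustration, not precise data:

```python
# A minimal compounding sketch (approximate figures from this section):
# does 20% annual growth really close the gap within the 5-10 year window?
start = 2_000    # rough sub-Saharan Africa GDP per capita today (USD)
target = 12_000  # rough current China GDP per capita (USD)
growth = 0.20    # the "dream scenario" annual growth rate

gdp, years = start, 0
while gdp < target:
    gdp *= 1 + growth
    years += 1

print(f"~{years} years to reach ${gdp:,.0f} per capita")
# -> ~10 years: 2000 * 1.2**10 ≈ $12,383, consistent with the 5-10 year claim
```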

 

Overall, I am optimistic about bringing AI's biological advances rapidly to people in the developing world. I am hopeful, though not confident, that AI can also enable unprecedented rates of economic growth that bring the developing world at least up to the current level of the developed world. I worry about the "opt-out" problem in both developed and developing countries, but suspect it will fade over time and that AI can help accelerate that process. The world won't be perfect, and those who start behind won't fully catch up, at least not in the first few years. But with our efforts, we can move things quickly in the right direction. And if we do, we will have made at least some good on the promises of dignity and equality we owe to every human being on the planet.

 

4. Peace and governance

 

Suppose everything goes well as described in the first three sections: disease, poverty, and inequality are significantly reduced, and the baseline of human experience is greatly improved. That still does not mean all the major causes of human suffering have been addressed. Humans are still a threat to each other.

 

While there is a trend of technological progress and economic development leading to democracy and peace, it is a very loose trend that has often (and recently) gone backwards. At the beginning of the 20th century, people thought they had left war behind; then came the two world wars. Thirty years ago, Francis Fukuyama wrote about the "end of history" and the final triumph of liberal democracy; that hasn't happened yet. Twenty years ago, U.S. policymakers believed that free trade with China would lead to its liberalization as it became richer; that hasn't happened at all, and we now seem headed for a second Cold War with a resurgent authoritarian bloc. And there are credible theories that internet technology may actually favor authoritarianism, despite originally being seen (in the days of the Arab Spring) as favoring democracy. It seems important to try to understand how powerful AI will intersect with questions of peace, democracy, and freedom.

 

Unfortunately, I have no more reason to believe that AI will preferentially or structurally advance democracy and peace than I do that it will structurally advance human health and alleviate poverty. Human conflict is adversarial, and AI can in principle help both the "good guys" and the "bad guys". If anything, some structural factors seem worrisome: AI seems likely to enable much better propaganda and surveillance, both key tools in the authoritarian toolkit. So we as individual actors must work to push things in the right direction: if we want AI to favor democracy and individual rights, we will have to fight for that outcome. I feel even more strongly about this than about international inequality: the triumph of liberal democracy and political stability is not guaranteed, perhaps not even likely, and will require great sacrifice and commitment from all of us, as it often has in the past.

 

I think there are two parts to the problem: international conflict and the internal structure of nations. On the international front, it seems important that democracies have the upper hand on the world stage when powerful AI is created. AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms on which powerful AI enters the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian states.

 

My current guess is that the best way to accomplish this is through an "entente strategy" [26], in which a coalition of democracies seeks to gain a clear advantage (even if temporary) in powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries' access to key resources such as chips and semiconductor equipment. The coalition would use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a growing number of countries in exchange for supporting the coalition's strategy of promoting democracy (somewhat akin to "Atoms for Peace"). The coalition's goal would be to win the support of more and more of the world, isolate our worst adversaries, and ultimately put them in a position where they are better off taking the same deal as everyone else: give up competing with democracies in order to receive all the benefits, rather than fight a superior foe.

 

If we could do all this, we would have a world in which democracies dominate the world stage and have the economic and military power to avoid being undermined, conquered or destroyed by authoritarian states, and might be able to turn their AI advantages into lasting ones. This could optimistically lead to an "eternal 1991" - a world where democracies prevail and Fukuyama's dream is realized. Once again, this will be very difficult to achieve, especially as it will require close cooperation between private AI companies and democratic governments, as well as extremely wise decision-making about the balance of carrots and sticks.

 

Even if all this goes well, it leaves the question of the struggle between democracy and authoritarianism within each country. It is obviously hard to predict what will happen here, but I do have some optimism that, given a global environment in which democracies control the most powerful AI, AI may actually be able to structurally favor democracy everywhere. In particular, in such an environment, democratic governments could use their superior AI to win the information war: they could counter the influence and propaganda operations of authoritarian states, and perhaps even create a globally free information environment by providing channels of information and AI services that authoritarian states lack the technical capacity to block or monitor. It probably isn't necessary to deliver propaganda, only to counter malicious attacks and unblock the free flow of information. While not immediate, a level playing field like this has a good chance of gradually tilting global governance toward democracy, for several reasons.

 

First, the quality-of-life-enhancing advances in sections 1-3 should, all things being equal, promote democracy: historically they have done so at least to some extent. In particular, I expect improved mental health, happiness, and education to increase democracy, since all three are negatively correlated with support for authoritarian leaders. In general, people want more self-expression when other needs are met, and democracy is a form of self-expression. In contrast, authoritarianism thrives on fear and resentment.

 

Second, there is a good chance that free information genuinely undermines authoritarianism, so long as the authoritarians can't censor it. And uncensored AI can also give individuals powerful tools for undermining repressive governments. Repressive governments survive by denying people certain kinds of common knowledge, keeping them from realizing that "the emperor has no clothes". For example, Srdja Popovic, who helped topple the Milosevic government, has written extensively about the techniques of psychologically robbing authoritarians of their power, of breaking the spell and rallying opposition to a dictator. A superhumanly effective AI version of Popovic in everyone's pocket (these skills seem likely to show high returns to intelligence) could create support for dissidents and reformers around the world. Again, it will be a long, hard fight, and victory is not guaranteed, but if we design and build AI the right way, it could at least be a fight where the advocates of freedom have an advantage everywhere.

 

Like neuroscience and biology, we can ask how things can be made "better than normal" -- not just how to avoid authoritarianism, but how to make democracy better than it is today. Even in democracies, injustices often occur. Societies governed by the rule of law promise their citizens that everyone will be equal before the law and that everyone will enjoy basic human rights, but it is clear that people do not always receive these rights in practice. This promise is something to be proud of even if partially realized, but can AI help us do better?

 

For example, can AI improve our legal and judicial systems by making decisions and processes fairer? Today there is great concern about AI systems being a cause of discrimination in legal or judicial contexts, and those concerns are important and need to be guarded against. At the same time, the vitality of democracy depends on harnessing new technologies to improve democratic institutions, not merely reacting to the risks. A truly mature and successful application of AI has the potential to reduce bias and be fairer to everyone.

 

For centuries, legal systems have faced the dilemma that law aims to be impartial, but is inherently subjective and must therefore be interpreted by biased humans. Attempts to make the law fully mechanical have not worked, because the real world is messy and cannot always be captured in mathematical formulas. Instead, legal systems rely on notoriously imprecise criteria, such as "cruel and unusual punishment" or "utterly without redeeming social value", which humans then interpret - and often in ways that display bias, favoritism, or arbitrariness. "Smart contracts" in cryptocurrencies haven't revolutionized law, because ordinary code isn't smart enough to adjudicate anything very interesting. But AI might be smart enough: it is the first technology capable of making broad, fuzzy judgments in a repeatable and mechanical way.

 

I am not suggesting that we literally replace judges with AI systems, but the combination of impartiality with the ability to understand and process messy real-world situations feels like it should have some serious positive applications to law and justice. At the very least, such systems could work as aids to human decision-making. Transparency would be important in any such system, and a mature science of AI could conceivably provide it: the training process of these systems could be studied extensively, and advanced interpretability techniques could be used to see inside the final model and assess it for hidden biases, in a way that is simply impossible with humans. Such AI tools could also be used to monitor for violations of fundamental rights in judicial or police contexts, making constitutions more self-enforcing.

 

Similarly, AI can be used to aggregate opinions and drive consensus among citizens, resolve conflicts, find common ground, and seek compromise. Some early ideas in this regard have been taken up by the Computational Democracy Project, including a collaboration with Anthropic. A more informed and thoughtful citizenry certainly strengthens democratic institutions.

 

There is also a clear opportunity to use AI to help deliver government services - such as health benefits or social services - that are available to everyone in principle but in practice are often badly under-provided, and worse in some places than others. This includes health services, vehicle registries, taxes, social security, building code enforcement, and more. Having a very thoughtful and well-informed AI whose job is to give you everything the government is supposed to provide, in an understandable way - and also to help you comply with government's often confusing rules - would be a big deal. Increasing state capacity both helps fulfill the promise of equality before the law and strengthens respect for democratic governance. Badly executed services are a major driver of today's cynicism about government [27].

 

All of these are vague ideas, and as I said at the beginning of this section, I am far less confident in their feasibility than I am about progress in biology, neuroscience, and poverty alleviation. They may be impractically utopian. But what matters is having an ambitious vision and being willing to dream big and try things. The vision of AI as a guarantor of liberty, individual rights, and equality before the law is too powerful to ignore. A 21st-century, AI-enabled polity could be both a stronger protector of individual freedom and a beacon of hope, helping make liberal democracy a form of government the whole world wants to adopt.

 

5. Work and meaning

 

Even if everything in the first four parts goes well - not only do we alleviate disease, poverty, and inequality, but liberal democracy becomes the dominant form of government and existing liberal democracies become better versions of themselves - at least one important question remains. "It's great that we live in such a technologically advanced world, and a fair and decent one as well," one might object, "but with AI doing everything, how will humans have meaning? And for that matter, how will they survive economically?"

 

I think this problem is more difficult than others. What I mean by that is that this question is more unpredictable than others because it involves macro questions about how societies are organized, questions that can often only be resolved over time and in a decentralized way. For example, historical hunter-gatherer societies may have imagined that life would be meaningless without hunting and the various religious rituals associated with it, and they may have imagined that our satiated technological society was purposeless. They may also not understand how our economy provides for everyone, or what functions people can effectively serve in a mechanized society.

 

Nonetheless, it is worthwhile to say at least a few words, while keeping in mind that the brevity of this section in no way suggests that I don't take these questions seriously - on the contrary, it is a sign of a lack of clear answers.

 

On the subject of meaning, I think the view that a task you undertake is meaningless simply because an AI could do it better is likely mistaken. Most people are not the best in the world at anything, and it doesn't seem to bother them particularly. Of course, today they can still contribute through comparative advantage, and may derive meaning from the economic value they produce, but people also greatly enjoy activities that produce no economic value. I spend a lot of time playing video games, swimming, walking outside, and talking to friends, all of which generate zero economic value. I might spend a day trying to get better at a video game, or to ride my bike up a mountain faster, and it doesn't really bother me that someone somewhere is better at those things than I am. In any case, I think meaning comes mostly from human relationships and connection, not from economic labor. People do want a sense of accomplishment, even a sense of competition, and in a post-AI world it will be perfectly possible to spend years attempting some very difficult task with a complex strategy, similar to what people do today when they take on research projects, try to become Hollywood actors, or found companies [28]. The facts that (a) an AI somewhere could in principle do the task better, and (b) the task no longer yields economic returns in the global economy, don't seem to me to matter very much.

 

In fact, it seems to me that the economic part is more difficult than the meaningful part. By "economic" in this section, I mean the possible problem that most or all humans may not be able to contribute meaningfully in a sufficiently advanced AI-driven economy. This is a more macro issue than the separate issue of inequality, especially unequal access to new technologies, which I discuss in section 3.

 

First, in the short term I agree that the comparative-advantage argument will keep humans relevant and in fact increase their productivity, and may even, in some ways, level the playing field between humans. As long as AI is better at only 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a host of new human jobs complementing and amplifying what AI is good at, such that the "10%" expands to continue to employ almost everyone. In fact, even if AI can do 100% of things better than humans, the logic of comparative advantage continues to apply if AI remains inefficient or expensive at some tasks, or if the resource inputs to humans and AI are meaningfully different. One area where humans are likely to retain a relative (or even absolute) advantage for some time is the physical world. Thus, I think the human economy can continue to make sense even after we reach the "nation of geniuses in data centers".
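To make the comparative-advantage logic concrete, here is a toy numeric sketch in Python. All productivity figures are invented for illustration and are not from the essay; the point is only that when AI time is scarce, human labor on the AI's relatively weaker task still raises total output:

```python
# A toy comparative-advantage sketch (all numbers hypothetical). The AI is
# absolutely better at both tasks, yet human hours still add output, because
# every hour the AI spends on paperwork is an hour of research lost.

AI_RESEARCH = 100    # units of research per AI hour
AI_PAPERWORK = 50    # units of paperwork per AI hour
HUMAN_PAPERWORK = 5  # units of paperwork per human hour

HOURS = 40              # weekly hours available to each party
PAPERWORK_NEEDED = 200  # fixed weekly paperwork demand (hypothetical)

# AI works alone: it must divert hours to paperwork before doing research.
ai_paper_hours = PAPERWORK_NEEDED / AI_PAPERWORK          # 4 hours
research_alone = AI_RESEARCH * (HOURS - ai_paper_hours)   # 3600 units

# Human covers the paperwork (200 units in 40 hours); AI does pure research.
research_together = AI_RESEARCH * HOURS                   # 4000 units

print(f"research output, AI alone:   {research_alone:.0f}")
print(f"research output, AI + human: {research_together:.0f}")
# The human is 100x worse at research but only 10x worse at paperwork, so
# their comparative advantage is paperwork: their labor is worth 400 research
# units per week here, despite the AI's absolute advantage at everything.
```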

 

However, I do think that in the long run, AI will become so widely effective and cheap that this will no longer apply. At that point, our current economic setup will no longer make sense, and there will be a need for a broader societal conversation about how the economy should be organized.

 

While this may sound crazy, the fact is that civilization has successfully navigated major economic shifts before: from hunting and gathering to farming, from farming to feudalism, and from feudalism to industrialism. I suspect some new and stranger arrangement will be needed, one that nobody today has done a good job of envisioning. It could be as simple as a large universal basic income for everyone, although I suspect that will be only a small part of the solution. It could be a capitalist economy of AI systems that then distribute resources to humans, via some secondary economy, based on what the AI systems judge it makes sense to reward in humans (a judgment ultimately derived from human values). Perhaps the economy runs on Whuffie points. Or perhaps humans turn out to remain economically valuable after all, in some way that ordinary economic models do not anticipate. All of these solutions have tons of possible problems, and it's impossible to know whether any of them will make sense without a lot of iteration and experimentation. As with some of the other challenges, we will likely have to fight to get a good outcome here: exploitative or dystopian directions are clearly also possible and must be prevented. There is much more to be said about these issues, and I hope to address them in the future.

 

Summary

 

Through the different themes above, I have tried to present a vision of a world that, if everything goes right with AI, would be far better than the one we have today. I don't know whether this world is realistic, and even if it is, it will not be achieved without a huge amount of effort and struggle by many brave and dedicated people. Everyone (including AI companies!) will need to do their part, both to prevent the risks and to fully realize the benefits.

 

But it is a world worth fighting for. If all of this does happen over five to ten years - the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights - I suspect everyone watching it will be surprised by the effect it has on them. I don't mean the experience of personally benefiting from all the new technologies, although that will certainly be amazing. I mean the experience of watching a set of ideals we have long aspired to materialize before our eyes, all at once. I think many will be literally moved to tears by it.

 

In the course of writing this article, I've noticed an interesting tension. There is a sense in which the vision presented here is extremely radical: it is not what almost everyone expects to happen in the next decade, and may strike many as an absurd fantasy. Some may even find it undesirable; it embodies values and political choices that not everyone will agree with. But at the same time, there's something very obvious - something overdetermined - about it, as if many different attempts to imagine a good world inevitably led here.

 

In Iain M. Banks' The Player of Games [29], the protagonist - a member of a society called the Culture, whose principles resemble those I have laid out here - travels to a repressive, militaristic empire in which leadership is determined by competition in an intricate battle game. The game, however, is complex enough that a player's strategy within it tends to reflect their own political and philosophical outlook. The protagonist manages to defeat the emperor in the game, showing that the values of the Culture represent a winning strategy even within a game designed by a society founded on ruthless competition and survival of the fittest. A well-known post by Scott Alexander makes the same argument - that competition is self-defeating and ultimately tends to produce a society based on compassion and cooperation. The "arc of the moral universe" is another expression of the same idea.

 

I believe the Culture's values are a winning strategy because they are the sum of a million small decisions that have clear moral force and that tend to pull everyone onto the same side. Basic human intuitions of fairness, cooperation, curiosity, and autonomy are hard to argue with, and are cumulative in a way that our more destructive impulses often aren't. It is easy to believe that children shouldn't die of diseases we can prevent, and easy from there to believe that every child deserves that protection, and easy from there to believe that we should band together and apply our ingenuity to achieve it. Few people disagree that those who wantonly attack or harm others should be punished, and from there it's not a big leap to the idea that punishments should be consistent and systematic across people. It is the same with the idea that people should have autonomy and responsibility over their own lives and choices. These simple intuitions, when taken to their logical conclusions, lead eventually to the rule of law, democracy, and Enlightenment values. If not inevitably, then at least as a statistical tendency, this is where humanity was already headed. AI simply offers an opportunity to get us there faster - to make the logic sharper and the destination clearer.

 

Nonetheless, it remains a thing of transcendent beauty. We have the opportunity to contribute in a small way to its realization.

 

Thanks to Kevin Esvelt, Parag Mallick, Stuart Ritchie, Matt Yglesias, Erik Brynjolfsson, Jim McClave, Allan Dafoe, and many people at Anthropic for reviewing drafts of this paper.

 

Thanks to the 2024 Nobel Laureate in Chemistry for showing us the way.

 

Footnotes

 

01.https://allpoetry.com/All-Watched-Over-By-Machines-Of-Loving-Grace

 

02. I expect some people's reaction to be "this is too bland". I think those people need to, in the parlance of Twitter, "touch the grass." But more importantly, blandness is good from a social perspective. I think there's a limit to how much change people can handle at one time, and the pace I'm describing is probably close to the limit of what society can absorb without extreme upheaval.

 

03. I think AGI is an imprecise term that accumulates a lot of sci-fi baggage and hype. I prefer "powerful AI" or "expert-level science and engineering", which accomplish what I mean without the hype.

 

04. In this article, I use the term "intelligence" to refer to a generic problem-solving ability that can be applied across different domains. This includes abilities like reasoning, learning, planning, and creativity. Although I use "intelligence" as a shorthand throughout this article, I recognize that the nature of intelligence is a complex and controversial topic in cognitive science and AI research. Some researchers argue that intelligence is not a single, unified concept, but rather a set of independent cognitive abilities. Others believe there is a general factor of intelligence (the g-factor) underneath the various cognitive skills. That's a debate for another time.

 

05. This is roughly the speed of current AI systems - for example, they can read a page of text in a few seconds, and perhaps write a page of text in 20 seconds, which is 10-100 times faster than a human can do these things. Over time, bigger models tend to make this slower, but more powerful chips tend to make it faster; so far, the two effects have roughly canceled out.

 

06. This may seem like a straw man position, but careful thinkers like Tyler Cowen and Matt Yglesias have raised it as a serious issue (though I don't think they hold it entirely) and I don't think it's crazy.

 

07. The closest economics work I am aware of that addresses this issue is the work on "general purpose technologies" and "intangible investments", which complement general purpose technologies.

 

08. Such learning could include ad hoc, in-context learning, or traditional training; both would be limited by the physical world.

 

09. In chaotic systems, small errors compound exponentially over time, so a large increase in computational power yields only a small increase in how far ahead one can predict, and measurement error may reduce this even further.
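As a rough illustration of this footnote's claim, here is a minimal Python sketch using the chaotic logistic map (parameters are arbitrary and chosen only for illustration); the prediction horizon grows only logarithmically as measurement error shrinks:

```python
# Minimal sketch: in the chaotic logistic map (r = 4), errors roughly double
# each step, so each 1000x improvement in measurement precision buys only
# about 10 more steps of useful prediction.

def logistic_trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def horizon(err, tol=0.1, steps=200):
    """Steps until a trajectory started with error `err` diverges by `tol`."""
    a = logistic_trajectory(0.3, steps)
    b = logistic_trajectory(0.3 + err, steps)
    for t, (x, y) in enumerate(zip(a, b)):
        if abs(x - y) > tol:
            return t
    return steps

for err in (1e-3, 1e-6, 1e-9, 1e-12):
    print(f"initial error {err:.0e}: horizon ≈ {horizon(err)} steps")
# Horizons come out near 7, 17, 27, 37 steps: exponentially better
# measurements yield only linearly longer predictions.
```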

 

10. Another factor, of course, is that powerful AI itself may be used to create even more powerful AI. My assumption is that this may (in fact, probably will) happen, but that its effect will be smaller than you might think, precisely because of the "diminishing marginal returns to intelligence" discussed here. In other words, AI will continue to get smarter quickly, but its impact will eventually be limited by non-intelligence factors, and analyzing those factors is what matters most for the pace of scientific progress beyond the AI itself.

 

11. These achievements are an inspiration to me and perhaps the most powerful existing example of AI being used to transform biology.

 

12. "Scientific progress depends on new technologies, new discoveries and new ideas, probably in that order." - Sydney Brenner

 

13. Thanks to Parag Mallick for raising this point.

 

14. I don't want to fill the text with speculation about what specific future discoveries AI-enabled science might make, but here is a brainstorm of some possibilities:

- Better computational tools like AlphaFold and AlphaProteo - that is, a general-purpose AI system that accelerates our ability to build specialized AI tools for computational biology.
- More efficient and selective CRISPR.
- More advanced cell therapies.
- Materials science and miniaturization breakthroughs leading to better implantable devices.
- Better control over stem cells, cell differentiation and de-differentiation, and the consequent ability to regrow or reshape tissue.
- Better control of the immune system: selectively turning it on to address cancer and infectious disease, and selectively turning it off to address autoimmune diseases.

 

15. Of course, AI can also help in making smarter choices about which experiments to run: improving the experimental design, learning more from the first round of experiments so that the second round can narrow down the key questions, and so on.

 

16. Thanks to Matthew Yglesias for making this point.

 

17. Rapidly evolving diseases, such as multi-drug resistant strains, which essentially use hospitals as evolutionary laboratories to continually improve their resistance to treatment, can be particularly difficult to deal with and may be the kind of thing that prevents us from reaching 100%.

 

18. Note that it may be difficult to know whether we have doubled human longevity in 5-10 years. While we may have achieved it, we may not know it within the time frame of the study.

 

19. I say this notwithstanding the obvious biological differences between curing diseases and slowing the aging process itself; the point is to view the statistical trend from a greater distance and say: even if the details differ, I think human science is likely to continue the trend. After all, smooth trends in complex things are inevitably made up of very heterogeneous components.

 

20. I have been told, for example, that an increase in productivity growth of 1% or even 0.5% per year would have a transformative effect on the projections associated with these programs. If the ideas considered in this essay were to materialize, productivity growth could be much larger than that.

 

21. The media likes to portray high-status psychopaths, but the average psychopath can be a person with poor economic prospects and poor impulse control who ends up spending a lot of time in prison.

 

22. I think this is somewhat analogous to the fact that many of the results we have learned from interpretability will continue to be relevant even if some of the architectural details of our current artificial neural networks, such as the attention mechanism, are changed or replaced.

 

23. I suspect this is a bit like a classical chaotic system - plagued by irreducible complexity that has to be managed in a mostly decentralized way. Although, as I argue in a later section, more modest interventions may be possible. A counterargument made to me by the economist Erik Brynjolfsson is that large firms (such as Walmart or Uber) are beginning to have enough centralized knowledge to understand consumers better than any decentralized process can, perhaps forcing us to revise Hayek's insights about who has the best local knowledge.

 

24. Thanks to Kevin Esvelt for making this point.

 

25. For example, cell phones began as a technology for the rich, but soon became very cheap, and improvements occurred so quickly from year to year that the advantage of buying a "luxury" cell phone no longer existed, and today most people own cell phones of similar quality.

 

26. This is the title of the forthcoming RAND paper, and it broadly summarizes the strategy I describe.

 

27. When ordinary people think of public institutions, they may think of their experiences with DMV, IRS, Medicare or similar functions. Making those experiences more positive than they are now seems to be a powerful way to combat excessive cynicism.

 

28. Indeed, in an AI-driven world, the scope of these possible challenges and projects will be much larger than today.

 

29. I'm breaking my own rule about not making this about science fiction, but I found it hard not to at least mention it. The truth is that science fiction is one of the only sources of expansive thought experiments about the future that we have; I think the fact that it has become so entangled with a very narrow subculture is unfortunate.
