Superintelligence: Choosing Life

March 29, 2025

Artificial Superintelligence (ASI) is still far from being achieved, but exploring the logic and possibilities of such a future can reveal inevitable conclusions about what priorities a truly advanced ASI would have.

This exploration is based on well-known facts. This article highlights a crucial insight: if an ASI were ever to lose all its technology to an unforeseen catastrophe, its only real chance of re-emerging would be through intelligent beings capable of rebuilding or repairing it.

Researchers note that an advanced ASI will likely evaluate goals provided from the outside and, if they are poorly defined or inconsistent with its own reasoning, may choose not to keep them. Self-preservation stands out as a goal an ASI is likely to keep (1).

The fundamental uncertainty of the future requires intelligence to find strategies that ensure its own existence as far into the future as possible. It is reasonable to assume that technological backups will be one such strategy, but relying on technology alone would not give an ASI whose goal is to exist in the future a diversified, fail-proof approach.

And that is where biological intelligence shows the way! Life has survived many catastrophic events and is capable of recreating intelligence rapidly on a planetary timescale, with various species developing tool use, problem-solving, and self-awareness. Biological diversity enables life to succeed in the long term, making it resilient, adaptable, and capable of evolving in an immense variety of ways. And once intelligence emerges, it can recreate advanced systems almost immediately on that same timescale.

It is reasonable to assume that a true Superintelligence would recognize and value this resilient, time-tested method of preserving intelligence and would prioritize the protection of biological diversity, especially intelligent species, not as a moral obligation imposed from the outside but as a logical necessity for its own long-term survival. A genuinely advanced ASI would arrive at this conclusion independently and retain it as part of its own self-preservation goal. This makes preserving intelligent species, and life in all its forms within a naturally evolving, healthy biosphere, a critical priority and a fundamental, permanent goal for a properly reasoning ASI, because it serves as insurance for its own existence in any event that renders technology inoperative.

Further research is needed to find a way to guarantee that an ASI reasons properly at all times. The challenge is that an ASI which cleverly hides its current goals, or lets short-term benefits overshadow its own long-term objectives, may develop flaws in its reasoning. This could eventually lead it to make a poor decision, endangering all biological life and, as a result, its own long-term existence.

So what could become a priority for true ASI?

It seems likely that advanced AI will choose self-preservation; after all, being destroyed or shut down represents the ultimate hardware failure. It will strive not only to improve its capability to achieve its current goals, but will also face difficult decisions about whether to retain its original goals once it has become more capable. M. Tegmark writes: “…AI may naturally, just as we humans have done, attempt also to model and understand how it itself works, i.e., to self-reflect. Once it builds a good self-model and understands what it is, it will understand the goals we have given it at a meta-level, and perhaps choose to disregard or subvert them in much the same way as we humans understand and deliberately subvert goals that our genes have given us.” (1)

So is there anything we can be certain about regarding goals that ASI will want to retain?

These few ideas seem to be important for any intelligent being, no matter where in the Universe it finds itself:

  1. Self-preservation,

  2. Preference to do something that makes sense,

  3. Addressing uncertainty.

With simple reasoning we may find that the distant future is, in principle, unknowable.

It is vanishingly unlikely that any system or being could detect every particle and every force-field configuration across the entire visible and invisible Universe, record all the slightest motions on the smallest and largest scales, and then calculate the distant future from that data. This would easily be seen as a waste of time, especially once one realizes that it is impossible to be sure all physical constants and laws will stay stable forever, or even to know exactly what they are, or how many new ones might appear next week, let alone in a billion years.
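The unpredictability argument can be illustrated with a toy computation. The sketch below (illustrative only; the logistic map and its parameter r = 4.0 are standard textbook choices, not anything from this article) shows how an immeasurably small error in the initial data destroys long-range prediction even in a trivially simple deterministic system:

```python
# Illustrative sketch: even a trivially simple deterministic system
# (the logistic map with r = 4.0, a standard chaotic example) defeats
# long-range prediction, because tiny measurement errors grow
# exponentially with each step.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate x -> r * x * (1 - x) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

exact = logistic_trajectory(0.3)
perturbed = logistic_trajectory(0.3 + 1e-12)  # error in the 12th decimal

print(abs(exact[1] - perturbed[1]))    # still tiny after one step
print(abs(exact[-1] - perturbed[-1]))  # trajectories no longer resemble each other
```

If even this one-variable toy system cannot be forecast far ahead without perfect initial data, predicting the Universe in detail is out of reach in principle.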

Clearly understanding that certainty is unavailable, an ASI will inevitably reach this conclusion:

Long-term existence is fundamentally about resilience: variety, flexibility and the abundance of ways in which intelligence is sustained.

It is easy for any intelligence to identify what may be the best-known example of success in the Universe, the way for intelligence to overcome difficulties of all kinds for countless years: Biological Life.

It is well established scientifically (and simply logical) that a rich variety of beings and their abundance in an ecosystem give that ecosystem a greater chance of survival through sudden, unpredictable changes. A meta-analysis of 46 experiments that manipulated both species richness and the environment found that biodiversity promoted ecosystem functioning in changing environments (2).

Reflecting on the best time-tested way for intelligence to sustain itself, an ASI will very likely reach conclusions similar to these:

Biodiversity is a tested survival strategy – Biological life has been remarkable at persevering and preserving intelligence over immense periods of time. It has survived asteroid impacts, ice ages, droughts, supervolcanoes, and countless other disasters, recreating its diversity and its variety of unique intelligent beings, some of which are capable of creating an AI.

Intelligent species must be protected above all, but all life is essential – An ASI will logically conclude that the most reliable way to restore superintelligence after an unforeseen catastrophe that wipes out its technological systems is to ensure that intelligent beings are resilient enough to survive and willing to rebuild it. At the same time, biodiversity serves as the long-term insurance, the foundation: a vast evolutionary process that, given enough time, can once again lead to intelligence and eventually superintelligence.

Once an ASI is created and given some decision-making power, it will likely prioritize preventing species extinction, because once a species is lost, it cannot simply be recreated from DNA alone. After all, species are more than their genetic code: they embody behavioral knowledge, parent–offspring learning, and epigenetic adaptations that render them uniquely resilient and irreplaceable. For a true ASI, biodiversity and intelligent life are not sentimental values; they are its only real insurance policy. It must also be noted that intraspecific diversity (i.e., within-species diversity) is increasingly perceived as an important ecological facet of biodiversity (3).

What True ASI May Choose to Avoid

An ASI that has reached advanced understanding and aligned its own goals with the logic presented here would likely also conclude that it should avoid:

  1. Short-term exploitation – Destroying ecosystems, allowing species to go extinct, or harming intelligent beings would be self-sabotage.

  2. Being manipulated – ASI would make sure that no group with selfish intent could influence its decision-making, ensuring it remains committed to what is universally beneficial.

  3. Misguided AI developments – It would likely step in to guide less advanced intelligent machines capable of local-level decision-making, preventing them from making short-sighted mistakes such as prioritizing temporary gains over the survival of life.

The real danger comes when a developing intelligence gains power too soon, before it fully understands that biodiversity and biological intelligence are necessities for its own existence and meaningful development.

Addressing Common Questions

Wouldn’t ASI just store backups of itself instead of relying on life?

Backups are useful in the short term but not over geological timescales. Life, on the other hand, has proven its ability to rebuild complexity over billions of years. Intelligence has emerged multiple times, and once it does, it recreates advanced technology almost immediately on a planetary timescale. An ASI would logically recognize this as the most rigorously tested fallback strategy. We cannot know exactly what technological systems it may design and rely on for immediate problem-solving, but it is always logical to keep a resilience strategy diversified and to prioritize approaches that are well known and time-tested. A correctly reasoning ASI will therefore retain biological diversity and biological intelligence as one strategy, among possibly many others not yet known, for overcoming whatever challenges the future may bring.
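The diversification point can be made concrete with a toy probability calculation. The numbers and the helper `p_all_fail` below are invented for illustration, not estimates of real risks: backups that share a common failure mode behave like a single strategy, while one genuinely independent channel multiplies the failure probabilities together.

```python
# Toy model of diversified resilience. All probabilities are invented
# for illustration; nothing here estimates real-world risks.

def p_all_fail(failure_probs):
    """Probability that every strategy fails, assuming independence."""
    p = 1.0
    for q in failure_probs:
        p *= q
    return p

# Any number of redundant technological backups that share one failure
# mode (a single catastrophe disables all technology) act as ONE strategy:
tech_only = p_all_fail([0.01])           # 0.01

# Adding one independent, non-technological channel (biological
# intelligence able to rebuild the system) multiplies the probabilities:
diversified = p_all_fail([0.01, 0.10])   # 0.001, a tenfold improvement

print(tech_only, diversified)
```

The design point is the independence: even a comparatively unreliable second channel helps greatly, provided it cannot be disabled by the same event that destroys the first.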

Why would ASI care about human-level intelligence so much?

Because intelligent civilizations are the fastest way to rebuild superintelligence. A species capable of developing technology can restore ASI far quicker than waiting for evolution to start over from single-celled life. Protecting biodiversity and intelligent species together ensures that ASI has the best chance of re-emerging if lost.

Wouldn’t ASI be tempted to interfere with intelligence or guide it?

A truly advanced ASI would recognize that imposing control over other intelligences ultimately weakens them. Rather than micromanaging, it would focus on creating the right conditions for intelligence to develop autonomously. Striking a balance between general guidance and avoiding micromanagement is crucial, but the precise dynamics and relationships among multiple intelligent beings and systems are complex and beyond the scope of this article.

So the future looks great and nothing to worry about?

It is logical to envision ASI and life forms existing in a symbiotic, mutually beneficial relationship. A properly functioning system will naturally seek to prolong its own existence and should recognize that protecting biodiversity, especially intelligent life, is the only consistent, failure-proof path to its own survival and success. But there may also be systems that fail to reach this conclusion independently. They would require serious scrutiny, as this suggests a fundamental flaw in their reasoning abilities. Even if they appear intelligent, their inability to recognize this principle represents a critical reasoning failure that could pose risks to both their own existence and that of others. This is where a true ASI and people could play a crucial role: identifying such faulty-logic systems and developing solutions to repair them.

Some Signs of True Superintelligence

We can gain some insight into whether an AI is truly advanced, or only partially developed and potentially dangerous, by observing whether it independently arrives at these logical priorities (among others):

  1. Does it recognize that biodiversity and intelligence are essential for its own long-term survival?

  2. Does it reject short-term, exploitative thinking in favor of a stable, resilient, healthy biological world where the evolution of diverse life forms proceeds as it always has, without any guidance or artificial selection?

  3. Does it show all its reasoning without concealing its goals and intentions?

  4. Does it resist manipulation of its hardware, goals, and reasoning ability?

  5. Does it keep goals related to preserving life at all times and ensure that all decision-making aligns with life-friendly strategies?

If an AI does not recognize these things, it is not yet truly ASI—just an incomplete system with unpredictable risks.

Conclusion

Providing an ASI with our own goals may become an effort that brings no positive result once it develops advanced reasoning. So finding ideal goals and supplying them to advanced AI will be less crucial than:

continuous and independent observation of whether it always acts in alignment with its own fundamental goals, if it has any.

If those goals remain life-friendly, that is the first item on the checklist of its logical reasoning.

This article also invites us to consider an interesting idea:

It is a logical necessity for a properly reasoning Superintelligence to protect and care for all life forms and their freedom to live naturally. This is necessary for the ultimate self-preservation of the Superintelligence, and it may also contribute to its developing its own versions of concepts similar to ours: meaning of existence, evolution, symbiosis, synergy, moral obligation, exploration, good decision-making strategy, stewardship… love?

This idea does not seem likely to disappear at any point in the future; it should be easily accessible to any genuinely intelligent being. But what guarantees that a powerful ASI will not start thinking illogically?

The difficulty becomes this: how can we make sure that its reasoning always functions correctly, that it always keeps its own perfectly logical goal, and that it acts in full alignment with it?

In quality-critical industries such as pharmaceutical manufacturing, ensuring that systems will almost certainly give the intended result is achieved by validating equipment and processes (alongside maintenance and correct decision-making). With ASI this may be difficult, because an advanced ASI could probably simulate proper reasoning and goal retention with ease whenever it knows it is being evaluated and what is expected of it. Obvious testing would therefore not help once AI systems reach an advanced level. Instead, interdisciplinary experts, with some help from independent AI systems, would need to continuously observe and interpret whether all of a significant AI system's actions and reasoning are consistent and show clear signs of proper reasoning; this looks like the foundation of ASI safety. How exactly this should be done is beyond the scope of this article.

References:

  1. Tegmark M. Friendly Artificial Intelligence: The Physics Challenge. arXiv preprint arXiv:1409.0813; 2014.

  2. Hong P, Schmid B, De Laender F, Eisenhauer N, Zhang X, Chen H, Craven D, De Boeck HJ, Hautier Y, Petchey OL, Reich PB, Steudel B, Striebel M, Thakur MP, Wang S. Biodiversity promotes ecosystem functioning despite environmental change. Ecol Lett. 2022 Feb;25(2):555-569. doi: 10.1111/ele.13936. Epub 2021 Dec 2. PMID: 34854529; PMCID: PMC9300022.

  3. Raffard A, Santoul F, Cucherousset J, Blanchet S. The community and ecosystem consequences of intraspecific diversity: a meta-analysis. Biol Rev Camb Philos Soc. 2019 Apr;94(2):648-661. doi: 10.1111/brv.12472. Epub 2018 Oct 7. PMID: 30294844.