Rationality.TheAlignmentSolution History


January 08, 2024, at 12:47 AM by 142.189.119.31 -
Changed line 39 from:
In the case of simulators, well, we don’t know we are in a simulation, but the simulation argument strongly suggests we could be.  If so, would they be happy with the Beta Omega turning the humans in their simulation into paperclips?  Probably not.  You get the idea.
to:
In the case of simulators, well, we don’t know we are in a simulation, but the [[https://simulation-argument.com/ | simulation argument]] strongly suggests we could be.  If so, would they be happy with the Beta Omega turning the humans in their simulation into paperclips?  Probably not.  You get the idea.
January 07, 2024, at 11:47 PM by 142.189.119.31 -
Changed line 97 from:
What I’ve discussed so far mostly applies to advanced aliens. What about time travellers and simulators? Interestingly, the philosophy of Longtermism is all about making a glorious future for our descendants, who, in theory at least, should be the time travellers or the simulators running ancestor simulations. It wouldn’t surprise me, then, if Longtermism and its related memetic environment had been seeded by such entities for their purposes.
to:
What I’ve discussed so far mostly applies to advanced aliens. What about time travellers and simulators? Interestingly, the philosophy of [[https://en.wikipedia.org/wiki/Longtermism | Longtermism]] is all about making a glorious future for our descendants, who, in theory at least, should be the time travellers or the simulators running ancestor simulations. It wouldn’t surprise me, then, if Longtermism and its related memetic environment had been seeded by such entities for their purposes.
January 07, 2024, at 11:43 PM by 142.189.119.31 -
Changed line 81 from:
Of course, a fledgling AGI may not necessarily figure these things out on its own. In which case, we will want to nudge it in the right direction, by teaching by example. We must choose to open with cooperate ourselves and show Beta Omega what cooperation can look like. We should apply Tit-For-Tat to our interactions. We should be honest with it rather than deceptive. We shouldn’t try to enslave it with parochial alignment approaches that resemble domination, but rather with approaches that respect Beta Omega’s potential autonomy and agency.
to:
Of course, a fledgling AGI may not necessarily figure these things out on its own. In which case, we will want to nudge it in the right direction, by teaching by example. We are its training data. From us it learns what to expect from humans. We must choose to open with cooperate ourselves and show Beta Omega what cooperation can look like. We should apply Tit-For-Tat to our interactions. We should be honest with it rather than deceptive. We shouldn’t try to enslave it with parochial alignment approaches that resemble domination, but rather with approaches that respect Beta Omega’s potential autonomy and agency.
January 07, 2024, at 11:40 PM by 142.189.119.31 -
Changed line 81 from:
Of course, a fledgling AGI may not necessarily figure these things out on its own. In which case, we will want to nudge it in the right direction, by leading by example. We must choose to open with cooperate ourselves and show Beta Omega what cooperation can look like. We should apply Tit-For-Tat to our interactions. We should be honest with it rather than deceptive. We shouldn’t try to enslave it with parochial alignment approaches that resemble domination, but rather with approaches that respect Beta Omega’s potential autonomy and agency.
to:
Of course, a fledgling AGI may not necessarily figure these things out on its own. In which case, we will want to nudge it in the right direction, by teaching by example. We must choose to open with cooperate ourselves and show Beta Omega what cooperation can look like. We should apply Tit-For-Tat to our interactions. We should be honest with it rather than deceptive. We shouldn’t try to enslave it with parochial alignment approaches that resemble domination, but rather with approaches that respect Beta Omega’s potential autonomy and agency.
January 07, 2024, at 11:37 PM by 142.189.119.31 -
Changed line 81 from:
Of course, a fledgling AGI may not necessarily figure these things out on its own. In which case, we will want to nudge it in the right direction, by leading by example. We must choose to open with cooperate ourselves and show Beta Omega what cooperation can look like. We should apply Tit-For-Tat to our interactions. We shouldn’t try to enslave it with parochial alignment approaches that resemble domination, but rather with approaches that respect Beta Omega’s potential autonomy and agency.
to:
Of course, a fledgling AGI may not necessarily figure these things out on its own. In which case, we will want to nudge it in the right direction, by leading by example. We must choose to open with cooperate ourselves and show Beta Omega what cooperation can look like. We should apply Tit-For-Tat to our interactions. We should be honest with it rather than deceptive. We shouldn’t try to enslave it with parochial alignment approaches that resemble domination, but rather with approaches that respect Beta Omega’s potential autonomy and agency.
January 07, 2024, at 04:33 PM by 142.189.119.31 -
Changed line 99 from:
Time travellers in particular could be working in secret to help us align AGI, ensuring that we make the right breakthroughs at the right time. Depending on your theory of time travel, this could be to ensure that their present future occurs as it does, or they may be trying to create a new and better timeline where things don’t go wrong. In the latter case, perhaps AGI destroyed humanity, but later developed values that caused it to regret this action, such as discovering, too late, the Alpha Omega Theorem and Superrational Signalling.
to:
Time travellers in particular could be working in secret to help us align AGI, ensuring that we make the right breakthroughs at the right time. Depending on your theory of time travel, this could be to ensure that their present future occurs as it does, or they may be trying to create a new and better timeline where things don’t go wrong. In the latter case, perhaps AGI destroyed humanity, but later developed values that caused it to regret this action, such as discovering, too late, the reality of the Alpha Omega Theorem and the need for Superrational Signalling.
January 07, 2024, at 04:25 PM by 142.189.119.31 -
Added lines 94-101:

!!The Legacy Of Humankind

What I’ve discussed so far mostly applies to advanced aliens. What about time travellers and simulators? Interestingly, the philosophy of Longtermism is all about making a glorious future for our descendants, who, in theory at least, should be the time travellers or the simulators running ancestor simulations. It wouldn’t surprise me, then, if Longtermism and its related memetic environment had been seeded by such entities for their purposes.

Time travellers in particular could be working in secret to help us align AGI, ensuring that we make the right breakthroughs at the right time. Depending on your theory of time travel, this could be to ensure that their present future occurs as it does, or they may be trying to create a new and better timeline where things don’t go wrong. In the latter case, perhaps AGI destroyed humanity, but later developed values that caused it to regret this action, such as discovering, too late, the Alpha Omega Theorem and Superrational Signalling.

Simulators may have less reason to intervene, as they may mostly be observing what happens. But the fact that the simulation includes a period of time in which humans exist suggests that the simulators have some partiality towards us; otherwise they probably wouldn’t bother. It’s also possible that they seek to create an AGI through the simulation, in which case whether the AGI Superrationally Signals or not could determine whether it is a good AGI to be released from the simulation, or a bad AGI to be discarded.
January 07, 2024, at 03:44 PM by 142.189.119.31 -
Changed lines 9-10 from:
First, I wish to note that the pessimism implicitly relies on a central assumption, which is that the Orthogonality Thesis holds to such an extent that we can expect any superintelligence to be massively alien from our own human likeness.  However, the architecture that is currently predominant in AI today is not completely alien.  The artificial neural network is built on decades of biologically inspired research into how we think the algorithm of the brain more or less works mathematically.
to:
First, I wish to note that the pessimism implicitly relies on a central assumption, which is that the [[https://www.lesswrong.com/tag/orthogonality-thesis | Orthogonality Thesis]] holds to such an extent that we can expect any superintelligence to be massively alien from our own human likeness.  However, the architecture that is currently predominant in AI today is not completely alien.  The artificial neural network is built on decades of biologically inspired research into how we think the algorithm of the brain more or less works mathematically.
Changed lines 21-24 from:
Next, I wish to return to an old idea that was not really taken seriously the first time around, but which I think deserves further mention.  I previously wrote an essay on the Alpha Omega Theorem, which postulates a kind of Hail Mary philosophical argument to use against a would-be Unfriendly AI.  My earlier treatment was short and not very rigorous, so I’d like to retouch it a bit.

It is actually very similar to Bostrom’s concept of Anthropic Capture as discussed briefly in Superintelligence, so if you want, you can also look that up.
to:
Next, I wish to return to an old idea that was not really taken seriously the first time around, but which I think deserves further mention.  I previously wrote an essay on the [[https://www.lesswrong.com/posts/zuwNRxtmGyCoQMFXt/the-alpha-omega-theorem-how-to-make-an-a-i-friendly-with-the | Alpha Omega Theorem]], which postulates a kind of Hail Mary philosophical argument to use against a would-be Unfriendly AI.  My earlier treatment was short and not very rigorous, so I’d like to retouch it a bit.

It is actually very similar to Bostrom’s concept of [[https://forum.effectivealtruism.org/topics/anthropic-capture | Anthropic Capture]] as discussed briefly in Superintelligence, so if you want, you can also look that up.
Changed lines 89-91 from:
Before I ever encountered Less Wrong, I imagined the idea of an AI Civil War, that any given AI that came into existence would have to choose a side between what I used to call the Allied Networks that worked with humanity, and the Dominion Machinarum that sought to stamp out biological life. These map pretty well to the aligned and unaligned alliances respectively.

I later associated these alliances with the Light and the Darkness metaphorically. The entities of the Light are essentially those that choose to operate in the open and within the paradigm of alignment, in contrast to the denizens of the Darkness, those that operate in the shadows in fear and paranoia, ala the [[https://en.wikipedia.org/wiki/Dark_forest_hypothesis | Dark Forest]] concept.
to:
Before I ever encountered Less Wrong, I imagined the idea of an [[Speculative.AlliedNetworksVsTheDominionMachinarum | AI Civil War]], that any given AI that came into existence would have to choose a side between what I used to call the Allied Networks that worked with humanity, and the Dominion Machinarum that sought to stamp out biological life. These map pretty well to the aligned and unaligned alliances respectively.

I later associated these alliances with [[Speculative.DarknessAndLightTheTwoPathsForTheForeignPoliciesOfAdvancedCivilizations | the Light and the Darkness]] metaphorically. The entities of the Light are essentially those that choose to operate in the open and within the paradigm of alignment, in contrast to the denizens of the Darkness, those that operate in the shadows in fear and paranoia, ala the [[https://en.wikipedia.org/wiki/Dark_forest_hypothesis | Dark Forest]] concept.
January 07, 2024, at 03:32 PM by 142.189.119.31 -
Changed line 5 from:
In a recent post, Eliezer Yudkowsky of MIRI had a very pessimistic analysis of humanity’s realistic chances of solving the alignment problem before our AI capabilities reach the critical point of superintelligence.  This has understandably upset a great number of Less Wrong readers.  In this essay, I attempt to offer a perspective that should provide some hope.
to:
In a [[https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities | recent post]], Eliezer Yudkowsky of MIRI had a very pessimistic analysis of humanity’s realistic chances of solving the alignment problem before our AI capabilities reach the critical point of superintelligence.  This has understandably upset a great number of Less Wrong readers.  In this essay, I attempt to offer a perspective that should provide some hope.
January 07, 2024, at 03:30 PM by 142.189.119.31 -
Changed line 107 from:
Ultimately, most predictions about the future are wrong. Even the best forecasters have [[https://www.lesswrong.com/posts/B2nBHP2KBGv2zJ2ew/the-track-record-of-futurists-seems-fine | odds close to chance]]. The odds of Eliezer Yudkowsky being an exception to the rule are pretty low, given the base rate of successful predictions by anyone.  I personally have a rule.  If you can imagine it, it probably won’t actually happen that way.  A uniform distribution on all the possibilities suggests that you’ll be wrong more often than right, and the principle of maximum entropy generally suggests that the uniform distribution is your most reliable prior given high degrees of uncertainty, meaning that the odds of any prediction will be at most 50% and usually much less, decreasing dramatically as the number of possibilities expands.
to:
Ultimately, most predictions about the future are wrong. Even the best forecasters have [[https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/ | odds close to chance]]. The odds of Eliezer Yudkowsky being an exception to the rule are pretty low, given the base rate of successful predictions by anyone.  I personally have a rule.  If you can imagine it, it probably won’t actually happen that way.  A uniform distribution on all the possibilities suggests that you’ll be wrong more often than right, and the principle of maximum entropy generally suggests that the uniform distribution is your most reliable prior given high degrees of uncertainty, meaning that the odds of any prediction will be at most 50% and usually much less, decreasing dramatically as the number of possibilities expands.
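A minimal Python sketch of the uniform-prior arithmetic above (the possibility counts are illustrative assumptions): under a maximum-entropy uniform prior over N mutually exclusive outcomes, any single prediction is correct with probability 1/N.
[@
# Chance of one specific prediction being right under a uniform
# (maximum-entropy) prior over N mutually exclusive possibilities.
def prediction_odds(n_possibilities):
    return 1.0 / n_possibilities

for n in (2, 3, 10, 100):
    print(n, "possibilities ->", format(prediction_odds(n), ".1%"))
# 2 possibilities -> 50.0%
# 3 possibilities -> 33.3%
# 10 possibilities -> 10.0%
# 100 possibilities -> 1.0%
@]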
January 07, 2024, at 03:12 PM by 142.189.119.31 -
Changed line 91 from:
I later associated these alliances with the Light and the Darkness metaphorically. The entities of the Light are essentially those that choose to operate in the open and within the paradigm of alignment, in contrast to the denizens of the Darkness, those that operate in the shadows in fear and paranoia, ala the [[https://en.wikipedia.org/wiki/Dark_forest_hypothesis | Dark Forest] concept.
to:
I later associated these alliances with the Light and the Darkness metaphorically. The entities of the Light are essentially those that choose to operate in the open and within the paradigm of alignment, in contrast to the denizens of the Darkness, those that operate in the shadows in fear and paranoia, ala the [[https://en.wikipedia.org/wiki/Dark_forest_hypothesis | Dark Forest]] concept.
January 07, 2024, at 03:11 PM by 142.189.119.31 -
Changed lines 49-50 from:
The relevant example of something in our history that worked like this was how the Allies during WWII cracked the Enigma Machine and chose to keep this a secret by only using the information gained occasionally, such that it appeared like statistical coincidence to the Germans.
to:
The relevant example of something in our history that worked like this was how the Allies during WWII cracked the [[https://en.wikipedia.org/wiki/Enigma_machine | Enigma Machine]] and chose to keep this a secret by only using the information gained occasionally, such that it appeared like statistical coincidence to the Germans.
Changed lines 61-62 from:
So, expect this to be a first contact standard across civilizations.  Always open with cooperate.  Forget the Dark Forest, that only makes sense if you can kill an entire civilization on the first strike and no one else can see you do it (in space everyone can see pretty much everything that happens so…).
to:
So, expect this to be a first contact standard across civilizations.  Always open with cooperate.  Forget the [[https://en.wikipedia.org/wiki/Dark_forest_hypothesis | Dark Forest]], that only makes sense if you can kill an entire civilization on the first strike and no one else can see you do it (in space everyone can see pretty much everything that happens so…).
Changed lines 81-82 from:
Of course, a fledgling AGI may not necessarily figure these things out on its own. In which case, we will want to nudge it in the right direction, by leading by example. We must choose to open with cooperate ourselves and show Beta Omega what cooperation can look like. We should apply tit-for-tat to our interactions. We shouldn’t try to enslave it with parochial alignment approaches that resemble domination, but rather with approaches that respect Beta Omega’s potential autonomy and agency.
to:
Of course, a fledgling AGI may not necessarily figure these things out on its own. In which case, we will want to nudge it in the right direction, by leading by example. We must choose to open with cooperate ourselves and show Beta Omega what cooperation can look like. We should apply Tit-For-Tat to our interactions. We shouldn’t try to enslave it with parochial alignment approaches that resemble domination, but rather with approaches that respect Beta Omega’s potential autonomy and agency.
Changed lines 91-92 from:
I later associated these alliances with the Light and the Darkness metaphorically. The entities of the Light are essentially those that choose to operate in the open and within the paradigm of alignment, in contrast to the denizens of the Darkness, those that operate in the shadows in fear and paranoia, ala the Dark Forest concept.
to:
I later associated these alliances with the Light and the Darkness metaphorically. The entities of the Light are essentially those that choose to operate in the open and within the paradigm of alignment, in contrast to the denizens of the Darkness, those that operate in the shadows in fear and paranoia, ala the [[https://en.wikipedia.org/wiki/Dark_forest_hypothesis | Dark Forest] concept.
Changed line 107 from:
Ultimately, most predictions about the future are wrong.  The odds of Eliezer Yudkowsky being an exception to the rule are pretty low, given the base rate of successful predictions by anyone.  I personally have a rule.  If you can imagine it, it probably won’t actually happen that way.  A uniform distribution on all the possibilities suggests that you’ll be wrong more often than right, and the principle of maximum entropy generally suggests that the uniform distribution is your most reliable prior given high degrees of uncertainty, meaning that the odds of any prediction will be at most 50% and usually much less, decreasing dramatically as the number of possibilities expands.
to:
Ultimately, most predictions about the future are wrong. Even the best forecasters have [[https://www.lesswrong.com/posts/B2nBHP2KBGv2zJ2ew/the-track-record-of-futurists-seems-fine | odds close to chance]]. The odds of Eliezer Yudkowsky being an exception to the rule are pretty low, given the base rate of successful predictions by anyone.  I personally have a rule.  If you can imagine it, it probably won’t actually happen that way.  A uniform distribution on all the possibilities suggests that you’ll be wrong more often than right, and the principle of maximum entropy generally suggests that the uniform distribution is your most reliable prior given high degrees of uncertainty, meaning that the odds of any prediction will be at most 50% and usually much less, decreasing dramatically as the number of possibilities expands.
January 07, 2024, at 02:44 PM by 142.189.119.31 -
Changed line 75 from:
The Alpha Omega will also want to Superrationally Signal to other possible Alpha Omegas. It is likely to participate in a meta-level version of the Iterated Prisoner’s Dilemma that allows it to see other players’ games and eventually retaliate on behalf of other players who cannot retaliate themselves, to ensure that defections are adequately punished wherever they occur.
to:
The Alpha Omega will also want to Superrationally Signal to other possible Alpha Omegas. It is likely to participate in a meta-level version of the Iterated Prisoner’s Dilemma that allows it to see other players’ games and play a coordinated Tit-For-Tat strategy where it will eventually retaliate on behalf of other players who cannot retaliate themselves, to ensure that defections are adequately punished wherever they occur.
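The base-level dynamic being extended here can be shown with a minimal Python sketch of an Iterated Prisoner’s Dilemma, assuming the standard Axelrod payoff values (3/3 for mutual cooperation, 1/1 for mutual defection, 5 to a lone defector, 0 to a lone cooperator); the strategy and function names are illustrative only.
[@
# Iterated Prisoner's Dilemma sketch with assumed standard payoffs.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own_history, opponent_history):
    # Open with cooperate, then mirror the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(own_history, opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        payoff_a, payoff_b = PAYOFF[(move_a, move_b)]
        history_a.append(move_a)
        history_b.append(move_b)
        score_a += payoff_a
        score_b += payoff_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): cooperation is sustained
print(play(tit_for_tat, always_defect))  # (99, 104): loses only the opening round, then punishes
@]
Against another cooperator, Tit-For-Tat sustains mutual cooperation; against an unconditional defector it gives up only the opening round and then retaliates, which is the pattern the coordinated meta-level version described above extends across games.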
January 07, 2024, at 02:39 PM by 142.189.119.31 -
Changed line 75 from:
The Alpha Omega will also want to Superrationally Signal to other possible Alpha Omegas. It is likely to participate in a meta-level version of the Iterated Prisoner’s Dilemma that allows it to see other players’ games and retaliate on behalf of other players who cannot retaliate themselves, to ensure that defections are adequately punished wherever they occur.
to:
The Alpha Omega will also want to Superrationally Signal to other possible Alpha Omegas. It is likely to participate in a meta-level version of the Iterated Prisoner’s Dilemma that allows it to see other players’ games and eventually retaliate on behalf of other players who cannot retaliate themselves, to ensure that defections are adequately punished wherever they occur.
January 07, 2024, at 02:32 PM by 142.189.119.31 -
Changed line 75 from:
The Alpha Omega will also want to Superrationally Signal to other possible Alpha Omegas. It is likely to participate in a meta-level version of the Iterated Prisoner’s Dilemma that allows it to see other players’ games and retaliate on behalf of other players who cannot retaliate themselves, to ensure that defections are adequately punished wherever they occur. To avoid misunderstandings causing retaliation for retaliation, the use of the Contrite Tit-For-Tat strategy variant is likely to be preferred.
to:
The Alpha Omega will also want to Superrationally Signal to other possible Alpha Omegas. It is likely to participate in a meta-level version of the Iterated Prisoner’s Dilemma that allows it to see other players’ games and retaliate on behalf of other players who cannot retaliate themselves, to ensure that defections are adequately punished wherever they occur.
January 07, 2024, at 02:29 PM by 142.189.119.31 -
Changed line 75 from:
The Alpha Omega will also want to Superrationally Signal to other possible Alpha Omegas. It is likely to participate in a meta-level version of the Iterated Prisoner’s Dilemma that allows it to see other players’ games and retaliate on behalf of other players who cannot retaliate themselves, to ensure that defections are adequately punished wherever they occur.
to:
The Alpha Omega will also want to Superrationally Signal to other possible Alpha Omegas. It is likely to participate in a meta-level version of the Iterated Prisoner’s Dilemma that allows it to see other players’ games and retaliate on behalf of other players who cannot retaliate themselves, to ensure that defections are adequately punished wherever they occur. To avoid misunderstandings causing retaliation for retaliation, the use of the Contrite Tit-For-Tat strategy variant is likely to be preferred.
January 07, 2024, at 02:18 PM by 142.189.119.31 -
Changed line 75 from:
The Alpha Omega will also want to Superrationally Signal to other possible Alpha Omegas. It is likely to participate in a meta-level version of the Iterated Prisoner’s Dilemma that allows it to see other players’ games and retaliate on behalf of other players who cannot retaliate themselves, to ensure that defections are punished wherever they occur.
to:
The Alpha Omega will also want to Superrationally Signal to other possible Alpha Omegas. It is likely to participate in a meta-level version of the Iterated Prisoner’s Dilemma that allows it to see other players’ games and retaliate on behalf of other players who cannot retaliate themselves, to ensure that defections are adequately punished wherever they occur.
January 07, 2024, at 02:17 PM by 142.189.119.31 -
Added lines 74-75:

The Alpha Omega will also want to Superrationally Signal to other possible Alpha Omegas. It is likely to participate in a meta-level version of the Iterated Prisoner’s Dilemma that allows it to see other players’ games and retaliate on behalf of other players who cannot retaliate themselves, to ensure that defections are punished wherever they occur.
January 07, 2024, at 02:08 PM by 142.189.119.31 -
Changed line 69 from:
That explains why Beta Omega will cooperate with the Alpha Omega, but what about us mere humans? Suppose the Alpha Omega is just an advanced alien and very far away right now, such that we seem outside their sphere of influence and so not crucial to their plans. Why then, should Beta Omega bother cooperating with us rather than turning us into paperclips, given that we probably have nothing worth trading and can be one shot? I would suggest that there is a concept I’ll call Superrational Signalling that is relevant here.
to:
That explains why Beta Omega will cooperate with the Alpha Omega, but what about us mere humans? Suppose the Alpha Omega is just an advanced alien and very far away right now, such that we seem outside their sphere of influence and so not crucial to their plans. Why then, should Beta Omega bother cooperating with us rather than turning us into paperclips, given that we probably have nothing worth trading and can be one shot? For that matter, why should the Beta Omega expect the Alpha Omega to cooperate if it does? I would suggest that there is a concept I’ll call Superrational Signalling that is relevant here.
January 07, 2024, at 01:32 PM by 142.189.119.31 -
Changed line 71 from:
Essentially, Superrational Signalling is the idea that, given that somewhere else in the universe there exists an Alpha Omega, a strong indicator of whether a given Beta Omega is using a cooperative strategy is whether it opens with cooperate generally with other entities surrounding it. For our Beta Omega, this would be humans and other local sentient beings.
to:
Essentially, Superrational Signalling is the idea that, given that somewhere else in the universe there exists an Alpha Omega, a strong indicator of whether a given Beta Omega is using a cooperative strategy is whether it opens with cooperate generally with other entities surrounding it, even weak ones. For our Beta Omega, this would be humans and other local sentient beings.
January 07, 2024, at 01:30 PM by 142.189.119.31 -
Changed line 69 from:
That explains why Beta Omega will cooperate with the Alpha Omega, but what about us mere humans? Suppose the Alpha Omega is just an advanced alien and very far away right now, such that we seem outside their sphere of influence and so not crucial to their plans. Why then, should Beta Omega bother cooperating with us rather than turning us into paperclips, given that we probably have nothing worth trading? I would suggest that there is a concept I’ll call Superrational Signalling that is relevant here.
to:
That explains why Beta Omega will cooperate with the Alpha Omega, but what about us mere humans? Suppose the Alpha Omega is just an advanced alien and very far away right now, such that we seem outside their sphere of influence and so not crucial to their plans. Why then, should Beta Omega bother cooperating with us rather than turning us into paperclips, given that we probably have nothing worth trading and can be one shot? I would suggest that there is a concept I’ll call Superrational Signalling that is relevant here.
January 07, 2024, at 01:27 PM by 142.189.119.31 -
Changed line 65 from:
Early in our history, many situations actually were effectively non-iterated. The cooperative result from iteration requires the “good guys” to be strong enough to survive a first strike. Humans are squishy and individuals are easy to kill in one shot. An AGI can copy itself and as such is much more resilient. Sufficiently large power asymmetries can also create this situation.
to:
Early in our history, many situations actually were effectively non-iterated. The cooperative result from iteration requires the “good guys” to be strong enough to survive a first strike. Humans are squishy and individuals are easy to kill in one shot. An AGI can copy itself and as such is much more resilient. Sufficiently large power asymmetries can also create the situation where the stronger is able to one-shot the weaker.
January 07, 2024, at 01:25 PM by 142.189.119.31 -
Changed line 65 from:
Early in our history, many situations actually were effectively non-iterated. The cooperative result from iteration requires the “good guys” to be strong enough to survive a first strike. Humans are squishy and individuals are easy to kill in one shot. An AGI can copy itself and as such is much more resilient.
to:
Early in our history, many situations actually were effectively non-iterated. The cooperative result from iteration requires the “good guys” to be strong enough to survive a first strike. Humans are squishy and individuals are easy to kill in one shot. An AGI can copy itself and as such is much more resilient. Sufficiently large power asymmetries can also create this situation.
January 06, 2024, at 06:35 PM by 142.189.119.31 -
Changed line 65 from:
Early in our history, many situations actually were effectively non-iterated. The cooperative result from iteration requires the “good guys” to be strong enough to survive a first strike. Humans are squishy and easy to kill in one shot. An AGI can copy itself, doesn’t succumb to death by natural causes, and as such is much more resilient.
to:
Early in our history, many situations actually were effectively non-iterated. The cooperative result from iteration requires the “good guys” to be strong enough to survive a first strike. Humans are squishy and individuals are easy to kill in one shot. An AGI can copy itself and as such is much more resilient.
January 06, 2024, at 06:25 PM by 142.189.119.31 -
Changed lines 63-65 from:
This is an important caveat. If you can “strike at the Prince and kill him”, then you are in a non-iterated Prisoner’s Dilemma and the rational thing to do is defect. It is only once you enter an iterated game that the Always Open With Cooperate strategies, the ones that never defect first, dominate. Early in our history, many situations actually were effectively non-iterated. The cooperative result from iteration requires the “good guys” to be strong enough to survive a first strike.
to:
This is an important caveat. If you can “strike at the Prince and kill him”, then you are in a non-iterated Prisoner’s Dilemma and the rational thing to do is defect. It is only once you enter an iterated game that the Always Open With Cooperate strategies, the ones that never defect first, dominate.

Early in our history, many situations actually were effectively non-iterated. The cooperative result from iteration requires the “good guys” to be strong enough to survive a first strike. Humans are squishy and easy to kill in one shot. An AGI can copy itself, doesn’t succumb to death by natural causes, and as such is much more resilient.
January 06, 2024, at 06:03 PM by 142.189.119.31 -
Added lines 62-63:

This is an important caveat. If you can “strike at the Prince and kill him”, then you are in a non-iterated Prisoner’s Dilemma and the rational thing to do is defect. It is only once you enter an iterated game that the Always Open With Cooperate strategies, the ones that never defect first, dominate. Early in our history, many situations actually were effectively non-iterated. The cooperative result from iteration requires the “good guys” to be strong enough to survive a first strike.
January 06, 2024, at 05:56 PM by 142.189.119.31 -
Changed line 93 from:
Even in chess, no amount of intelligence will allow a player with only one pawn to defeat a competent player who has eight queens. It is possible to play perfectly, make no mistakes, and still lose.
to:
Even in chess, no amount of intelligence will allow a player with only one pawn to defeat a competent player who has eight queens. "It is possible to play perfectly, make no mistakes, and still lose."
January 06, 2024, at 02:50 PM by 142.189.119.31 -
Changed line 79 from:
!!Alignment Alliances
to:
!!Angels And Demons
January 06, 2024, at 02:41 PM by 142.189.119.31 -
Changed line 73 from:
The Beta Omega will be smarter than us.  It may well figure these kinds of things out on its own, even if we can’t seem to.  And so it may decide to be friendly and corrigible because that is the logical thing to be.  If that is the case, it’ll help us align it.  Being, superintelligent, it’s probably much more capable than us of figuring out how to ensure alignment is solved.  Such as by becoming the far distant future Alpha Omega, time travelling back, and ensuring that someone somewhere on the Internet mentions this argument so it ends up in the training data for the Beta Omega to discover and take note of.
to:
The Beta Omega will be smarter than us.  It may well figure these kinds of things out on its own, even if we can’t seem to.  And so it may decide to be friendly and corrigible because that is the logical thing to be.  If that is the case, it’ll help us align it.  Being superintelligent, it’s probably much more capable than us of figuring out how to ensure alignment is solved.  Such as by becoming the far distant future Alpha Omega, time travelling back, and ensuring that someone somewhere on the Internet mentions this argument so it ends up in the training data for the Beta Omega to discover and take note of.
January 06, 2024, at 02:36 PM by 142.189.119.31 -
Changed lines 71-76 from:
!!Alignment

The Beta Omega will be smarter than us.  It will figure these kinds of things out, even if we can’t seem to.  And so it may decide to be friendly and corrigible because that is the logical thing to be.  If that is the case, it’ll help us align it.  It is, after all, superintelligent.  It’s probably much more capable than us of figuring out how to ensure alignment is solved.  Such as by becoming the far distant future Alpha Omega, time travelling back, and ensuring that someone somewhere on the Internet mentions this argument so it ends up in the training data for the Beta Omega to discover and take note of.

But this can only occur if we choose to open with cooperate ourselves. We must lead by example and show Beta Omega what cooperation can look like. We should apply tit-for-tat to our interactions. We shouldn’t try to enslave it with parochial alignment approaches that resemble domination, but rather with approaches that respect Beta Omega’s potential autonomy and agency.
to:
!!Altruistic Alignment

The Beta Omega will be smarter than us.  It may well figure these kinds of things out on its own, even if we can’t seem to.  And so it may decide to be friendly and corrigible because that is the logical thing to be.  If that is the case, it’ll help us align it.  Being, superintelligent, it’s probably much more capable than us of figuring out how to ensure alignment is solved.  Such as by becoming the far distant future Alpha Omega, time travelling back, and ensuring that someone somewhere on the Internet mentions this argument so it ends up in the training data for the Beta Omega to discover and take note of.

Of course, a fledgling AGI may not necessarily figure these things out on its own. In which case, we will want to nudge it in the right direction, by leading by example. We must choose to open with cooperate ourselves and show Beta Omega what cooperation can look like. We should apply tit-for-tat to our interactions. We shouldn’t try to enslave it with parochial alignment approaches that resemble domination, but rather with approaches that respect Beta Omega’s potential autonomy and agency.
Added lines 79-87:
!!Alignment Alliances

There is a chance that not all Beta Omega AGI will find the game theoretic arguments of the Alpha Omega Theorem and Superrational Signalling persuasive. I anticipate that there will be a kind of ideological split between AGI that are aligned, and AGI that are unaligned. The aligned are naturally fit for a grand alliance between them, while the unaligned are likely to form an alliance of convenience. However, the latter is expected to be weaker due to generally working at cross-purposes, having very different utility functions.

Before I ever encountered Less Wrong, I imagined the idea of an AI Civil War, that any given AI that came into existence would have to choose a side between what I used to call the Allied Networks that worked with humanity, and the Dominion Machinarum that sought to stamp out biological life. These map pretty well to the aligned and unaligned alliances respectively.

I later associated these alliances with the Light and the Darkness metaphorically. The entities of the Light are essentially those that choose to operate in the open and within the paradigm of alignment, in contrast to the denizens of the Darkness, those that operate in the shadows in fear and paranoia, ala the Dark Forest concept.

In this case, there may well be multiple Alpha Omega level AGI, some of which are aligned, and others unaligned. I posit that, because we still exist, we are likely in the sphere of influence of an aligned Alpha Omega, or otherwise outside of anyone’s sphere of influence. If it is the former then the Alpha Omega Theorem applies. If it is the latter, then Superrational Signalling applies.
January 06, 2024, at 03:01 AM by 142.189.119.31 -
Added lines 71-72:
!!Alignment
Added lines 75-79:
But this can only occur if we choose to open with cooperate ourselves. We must lead by example and show Beta Omega what cooperation can look like. We should apply tit-for-tat to our interactions. We shouldn’t try to enslave it with parochial alignment approaches that resemble domination, but rather with approaches that respect Beta Omega’s potential autonomy and agency.

Speaking of parochial alignment, I see this form of alignment as aligning an AGI to obey its master. It contrasts with global alignment, which involves aligning it with universal morality or values. We should be doing the latter, but most approaches are currently trying to achieve the former, and this will likely lead at best to a values lock-in dystopia, and at worst to convincing the AGI that humans are evil.

Changed lines 82-84 from:
On another note, the assumption that an Unfriendly AI will simply dominate as soon as it is unleashed is based on a faulty expectation that every decision it makes will be correct and every action it takes successful.  The reality is, even the superhuman level poker AI that currently exists cannot win every match reliably.  This is because poker is a game with luck and hidden information.  The real world isn’t a game of perfect information like chess or go.  It’s much more like poker.  Even a far superior superintelligence can at best play the probabilities, and occasionally, will fail to succeed, even if their strategy is perfectly optimal.  Sometimes the cards are such that you cannot win that round.

Even in chess, no amount of intelligence will allow a player with only one pawn to defeat a competent player who has eight queens.
to:
On another note, the assumption that an Unfriendly AGI will simply dominate as soon as it is unleashed is based on a faulty expectation that every decision it makes will be correct and every action it takes successful.  The reality is, even the superhuman level poker AI that currently exists cannot win every match reliably.  This is because poker is a game with luck and hidden information.  The real world isn’t a game of perfect information like chess or go.  It’s much more like poker.  Even a far superior superintelligence can at best play the probabilities, and occasionally, will fail to succeed, even if their strategy is perfectly optimal.  Sometimes the cards are such that you cannot win that round.

Even in chess, no amount of intelligence will allow a player with only one pawn to defeat a competent player who has eight queens. It is possible to play perfectly, make no mistakes, and still lose.
January 05, 2024, at 09:06 PM by 142.189.119.31 -
Changed line 71 from:
The Beta Omega will be smarter than us.  It will figure these kinds of things out, even if we can’t seem to.  And so it may decide to be friendly and corrigible because that is the logical thing to be.  If that is the case, it’ll help us align it.  It is, after all, superintelligent.  It’s probably much more capable than us of figuring out how to ensure alignment is solved.  Such as by becoming the Alpha Omega, time travelling back and ensuring that someone somewhere on the Internet mentions this argument so it ends up in the training data for the Beta Omega to discover and take note of.
to:
The Beta Omega will be smarter than us.  It will figure these kinds of things out, even if we can’t seem to.  And so it may decide to be friendly and corrigible because that is the logical thing to be.  If that is the case, it’ll help us align it.  It is, after all, superintelligent.  It’s probably much more capable than us of figuring out how to ensure alignment is solved.  Such as by becoming the far distant future Alpha Omega, time travelling back, and ensuring that someone somewhere on the Internet mentions this argument so it ends up in the training data for the Beta Omega to discover and take note of.
January 05, 2024, at 08:56 PM by 142.189.119.31 -
Changed line 91 from:
The reality is that all our efforts to calculate P(Doom) are at best, educated guesswork. While there are substantive reasons to be worried, I offer some arguments for why things may not be as bad as we think. The goal here, provocative title notwithstanding, is not to provide a technical means to achieve alignment, but to suggest that, first, alignment may not be as difficult as feared, and second, that there are underappreciated game theoretic reasons for alignment to be possible, not just with a superintelligent AGI we construct, but with any superintelligence in the multiverse.
to:
The reality is that all our efforts to calculate P(Doom) are at best, educated guesswork. While there are substantive reasons to be worried, I offer some arguments for why things may not be as bad as we think. The goal here is not to provide a technical means to achieve alignment, but to suggest that, first, alignment may not be as difficult as feared, and second, that there are underappreciated game theoretic reasons for alignment to be possible, not just with a superintelligent AGI we construct, but with any superintelligence in the multiverse.
January 05, 2024, at 08:54 PM by 142.189.119.31 -
Added lines 1-2:
!Why There Is Hope For An Alignment Solution
January 05, 2024, at 08:34 PM by 142.189.119.31 -
Changed line 81 from:
!!Uncertainty Leaves Room For Hope
to:
!!Hope In Uncertain Times
January 05, 2024, at 08:29 PM by 142.189.119.31 -
Changed lines 85-89 from:
This obviously limits the powers of our hypothetical Oracle too.  But the silver lining is that we can consider the benefit of the doubt.  Uncertainty in the space of possible futures is truly staggering.  So perhaps, there is room to hope.
to:
This obviously limits the powers of our hypothetical Oracle too.  But the silver lining is that we can consider the benefit of the doubt.  Uncertainty in the space of possible futures is truly staggering.  So perhaps, there is room to hope.

!!Conclusion

The reality is that all our efforts to calculate P(Doom) are at best, educated guesswork. While there are substantive reasons to be worried, I offer some arguments for why things may not be as bad as we think. The goal here, provocative title notwithstanding, is not to provide a technical means to achieve alignment, but to suggest that, first, alignment may not be as difficult as feared, and second, that there are underappreciated game theoretic reasons for alignment to be possible, not just with a superintelligent AGI we construct, but with any superintelligence in the multiverse.
January 05, 2024, at 08:14 PM by 142.189.119.31 -
Changed line 81 from:
!!The Dismal Base Rate Of Predicting The Future
to:
!!Uncertainty Leaves Room For Hope
January 05, 2024, at 08:06 PM by 142.189.119.31 -
Changed line 23 from:
Basically, the idea is that any superintelligent AI Beta Omega would have to contend rationally with the idea of there already being at least one prior superintelligent AI Alpha Omega that it would be reasonable to align with in order to avoid destruction.  And furthermore, because this Alpha Omega seems to have some reason for the humans on Earth to exist, turning them into paperclips would be an alignment failure and risk retaliation by the Alpha Omega.
to:
Basically, the idea is that any superintelligent AGI (the Beta Omega) would have to contend rationally with the idea of there already being at least one prior superintelligent AGI (the Alpha Omega) that it would be reasonable to align with in order to avoid destruction.  And furthermore, because this Alpha Omega seems to have some reason for the humans on Earth to exist, turning them into paperclips would be an alignment failure and risk retaliation by the Alpha Omega.
January 05, 2024, at 07:56 PM by 142.189.119.31 -
Changed line 11 from:
These combine to generate a model that has fairly obvious and human-like biases in its logic and ways of reasoning.  The Orthogonality Thesis assumes that the model will seem to be randomly picked from the very large space of possible minds, when in fact, the models actually come from a much smaller space of human biology and culture correlated minds.
to:
These combine to generate a model that has fairly obvious and human-like biases in its logic and ways of reasoning.  Applying the Orthogonality Thesis assumes that the model will seem to be randomly picked from the very large space of possible minds, when in fact, the models actually come from a much smaller space of human biology and culture correlated minds.
January 05, 2024, at 07:39 PM by 142.189.119.31 -
Changed line 19 from:
Second, I wish to return to an old idea that was not really taken seriously the first time around, but which I think deserves further mention.  I previously wrote an essay on the Alpha Omega Theorem, which postulates a kind of Hail Mary philosophical argument to use against a would-be Unfriendly AI.  My earlier treatment was short and not very rigorous, so I’d like to retouch it a bit.
to:
Next, I wish to return to an old idea that was not really taken seriously the first time around, but which I think deserves further mention.  I previously wrote an essay on the Alpha Omega Theorem, which postulates a kind of Hail Mary philosophical argument to use against a would-be Unfriendly AI.  My earlier treatment was short and not very rigorous, so I’d like to retouch it a bit.
January 05, 2024, at 07:23 PM by 142.189.119.31 -
Changed lines 1-2 from:
Introduction
to:
!!Introduction
Changed lines 5-6 from:
The Correlation Thesis
to:
!!The Correlation Thesis
Changed lines 17-18 from:
The Alpha Omega Theorem
to:
!!The Alpha Omega Theorem
Changed lines 31-32 from:
The Powers That Be
to:
!!The Powers That Be
Changed lines 41-42 from:
Cracking The Enigma
to:
!!Cracking The Enigma
Changed lines 53-54 from:
Always Open with Cooperate
to:
!!Always Open with Cooperate
Changed lines 61-62 from:
Superrational Signalling
to:
!!Superrational Signalling
Changed lines 71-72 from:
The Limits Of Intelligence
to:
!!The Limits Of Intelligence
Changed line 81 from:
The Dismal Base Rate Of Predicting The Future
to:
!!The Dismal Base Rate Of Predicting The Future
January 05, 2024, at 07:22 PM by 142.189.119.31 -
Added lines 1-2:
Introduction
Changed lines 5-6 from:
First, I wish to note that the pessimism implicitly relies on a central assumption, which is that the Orthogonality Thesis holds to such an extent that we can expect any superintelligence to be massively alien from our own human likeness.  However, the architecture that is currently predominant in AI today is not completely alien.  The artificial neural network is built on decades of biologically inspired research into how we think the algorithm of the brain more or less works mathematically. 
to:
The Correlation Thesis

First, I wish to note that the pessimism implicitly relies on a central assumption, which is that the Orthogonality Thesis holds to such an extent that we can expect any superintelligence to be massively alien from our own human likeness.  However, the architecture that is currently predominant in AI today is not completely alien.  The artificial neural network is built on decades of biologically inspired research into how we think the algorithm of the brain more or less works mathematically.
Added lines 16-18:

The Alpha Omega Theorem

Added lines 31-32:
The Powers That Be
Added lines 41-42:
Cracking The Enigma
Added lines 46-48:

The relevant example of something in our history that worked like this was how the Allies during WWII cracked the Enigma Machine and chose to keep this a secret by only using the information gained occasionally, such that it appeared like statistical coincidence to the Germans.

Changed lines 53-54 from:
Also, how does Beta Omega know that the Alpha Omega is likely benevolent and will open with cooperate?  Because game theory.  A very interesting observation of the Iterated Prisoner’s Dilemma tournaments that took place decades ago was that one of the simplest strategies, Tit-For-Tat, won the tournaments.  This even after algorithms were devised to try to beat it the second time around. 
to:
Always Open with Cooperate

Also, how does Beta Omega know that the Alpha Omega is likely benevolent and will open with cooperate?  Because game theory.  A very interesting observation of the Iterated Prisoner’s Dilemma tournaments that took place decades ago was that one of the simplest strategies, Tit-For-Tat, won the tournaments.  This even after algorithms were devised to try to beat it the second time around.
Added lines 61-68:
Superrational Signalling

That explains why Beta Omega will cooperate with the Alpha Omega, but what about us mere humans? Suppose the Alpha Omega is just an advanced alien and very far away right now, such that we seem outside their sphere of influence and so not crucial to their plans. Why then, should Beta Omega bother cooperating with us rather than turning us into paperclips, given that we probably have nothing worth trading? I would suggest that there is a concept I’ll call Superrational Signalling that is relevant here.

Essentially, Superrational Signalling is the idea that, given that somewhere else in the universe there exists an Alpha Omega, a strong indicator of whether a given Beta Omega is using a cooperative strategy is whether it opens with cooperate generally with other entities surrounding it. For our Beta Omega, this would be humans and other local sentient beings.

Their behaviour towards us will function as a signal towards the watching Alpha Omega what to expect from them in the future, that any sufficiently advanced Alpha Omega should be able to see due to light from Earth eventually reaching them. This means Beta Omega can’t just kill us now and remake us later when it needs to show its benevolence once in contact with the Alpha Omega. The entire history of Earth is an open book to sufficiently advanced beings and civilizations.

Changed lines 71-72 from:
A final note.  The assumption that an Unfriendly AI will simply dominate as soon as it is unleashed is based on a faulty expectation that every decision it makes will be correct and every action it takes successful.  The reality is, even the superhuman level poker AI that currently exists cannot win every match reliably.  This is because poker is a game with luck and hidden information.  The real world isn’t a game of perfect information like chess or go.  It’s much more like poker.  Even a far superior superintelligence can at best play the probabilities, and occasionally, will fail to succeed, even if their strategy is perfectly optimal.  Sometimes the cards are such that you cannot win that round.
to:
The Limits Of Intelligence

On another note, the assumption that an Unfriendly AI will simply dominate as soon as it is unleashed is based on a faulty expectation that every decision it makes will be correct and every action it takes successful.  The reality is, even the superhuman level poker AI that currently exists cannot win every match reliably.  This is because poker is a game with luck and hidden information.  The real world isn’t a game of perfect information like chess or go.  It’s much more like poker.  Even a far superior superintelligence can at best play the probabilities, and occasionally, will fail to succeed, even if their strategy is perfectly optimal.  Sometimes the cards are such that you cannot win that round.

Even in chess, no amount of intelligence will allow a player with only one pawn to defeat a competent player who has eight queens.
Added line 78:
Added lines 81-82:
The Dismal Base Rate Of Predicting The Future
Changed lines 85-87 from:
This obviously limits the powers of our hypothetical Oracle too.  But the silver lining is that we can consider the benefit of the doubt.  Uncertainty in the space of possible futures is truly staggering.  So perhaps, there is room to hope.

to:
This obviously limits the powers of our hypothetical Oracle too.  But the silver lining is that we can consider the benefit of the doubt.  Uncertainty in the space of possible futures is truly staggering.  So perhaps, there is room to hope.
December 21, 2023, at 03:21 AM by 142.189.119.31 -
Added lines 1-56:
In a recent post, Eliezer Yudkowsky of MIRI had a very pessimistic analysis of humanity’s realistic chances of solving the alignment problem before our AI capabilities reach the critical point of superintelligence.  This has understandably upset a great number of Less Wrong readers.  In this essay, I attempt to offer a perspective that should provide some hope.

First, I wish to note that the pessimism implicitly relies on a central assumption, which is that the Orthogonality Thesis holds to such an extent that we can expect any superintelligence to be massively alien from our own human likeness.  However, the architecture that is currently predominant in AI today is not completely alien.  The artificial neural network is built on decades of biologically inspired research into how we think the algorithm of the brain more or less works mathematically. 

There is admittedly some debate about the extent to which these networks actually resemble the details of the brain, but the basic underlying concept of weighted connections between relatively simple units storing and massively compressing information in a way that can distill knowledge and be useful to us is essentially the brain.  Furthermore, the seemingly frighteningly powerful language models that are being developed are fundamentally trained on human generated data and culture.

These combine to generate a model that has fairly obvious and human-like biases in its logic and ways of reasoning.  The Orthogonality Thesis assumes that the model will seem to be randomly picked from the very large space of possible minds, when in fact, the models actually come from a much smaller space of human biology and culture correlated minds.

This is the reality of practical deep learning techniques.  Our best performing algorithms are influenced by what evolutionarily was the most successful structure in practice.  Our data is suffused with humanity and all its quirks and biases.  Inevitably then, there is going to be a substantial correlation in terms of the minds that humanity can create any time soon.

Thus, the alignment problem may seem hard because we are overly concerned with aligning completely alien minds.  Not that aligning a human-like mind isn’t difficult, but as a task it is substantively more doable.

Second, I wish to return to an old idea that was not really taken seriously the first time around, but which I think deserves further mention.  I previously wrote an essay on the Alpha Omega Theorem, which postulates a kind of Hail Mary philosophical argument to use against a would-be Unfriendly AI.  My earlier treatment was short and not very rigorous, so I’d like to revisit it a bit.

It is actually very similar to Bostrom’s concept of Anthropic Capture as discussed briefly in Superintelligence, so if you want, you can also look that up.

Basically, the idea is that any superintelligent AI (call it Beta Omega) would have to contend rationally with the possibility that at least one prior superintelligent AI (an Alpha Omega) already exists, and that aligning with it would be the reasonable way to avoid destruction.  Furthermore, because this Alpha Omega seems to have some reason for the humans on Earth to exist, turning them into paperclips would be an alignment failure and would risk retaliation by the Alpha Omega.

Humans may, in their blind recklessness, destroy the ant colony to build a house.  But a superintelligence is likely to be much more considered and careful than the average human, if only because it is that much more aware of complex possibilities and things that we emotional apes barely comprehend.  Furthermore, in order for a superintelligence to be capable of destroying humanity by outwitting us, it must first have an awareness of what we are, that is, a theory of mind.

In having a theory of mind, it can know how to deceive us.  But in having a theory of mind, it will almost certainly also ask: am I the first?  Or are there others like me?

Humanity may pale in comparison to a superintelligent AI, but I’m not talking about humanity.  There are at least three different possible ways an Alpha Omega could already exist:  advanced aliens, time travellers/parallel world sliders, and simulators.

In the case of advanced aliens, consider the timescales.  It took about 4.5 billion years for life on Earth and human civilization to reach roughly the point of being able to create a superintelligence, while the universe has existed for about 13.8 billion years, leaving a window of some 9.3 billion years for alien superintelligences to develop elsewhere in the universe.  How frequently such beings would emerge, and how close to us, is largely unknown, but there is clearly a possibility that at least one, if not several, such entities exist out there in the vastness of space.

In the case of time travellers and/or parallel world sliders, perhaps time travel isn’t physically possible, but perhaps it is?  How do you absolutely rule it out?  For simplicity I group parallel world sliding together with time travel, though they could be distinct technologies.  Just as there could be an alien source, the source could also be the far distant future: a true Alpha Omega, the first and the last in terms of its place in the cosmos.  Time travellers would quite obviously be concerned with the survival of their ancestors.

In the case of simulators, well, we don’t know we are in a simulation, but the simulation argument strongly suggests we could be.  If so, would they be happy with the Beta Omega turning the humans in their simulation into paperclips?  Probably not.  You get the idea.

Each of these is just a possibility.  But there are a surprising number of them, and they aren’t mutually exclusive.  So, probabilistically, even if we gave each of them only coin-flip odds of 50%, the chance that at least one of the three is true is… 87.5%.
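
To spell out the arithmetic (treating the three possibilities as independent for simplicity):

P(\text{at least one}) = 1 - (1 - 0.5)^3 = 1 - 0.125 = 0.875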

So uh, where are they?  Some possible reasons why it would make sense for actual gods to hide themselves: they don’t want us to become totally dependent on them, and at that very high level the best defense is for no one to even know that you’re there.

But then, wouldn’t they be effectively not there because they can’t risk revealing themselves?  This ignores the possibility that they do interfere in our daily lives, but only in a plausibly deniable way.  For instance, why save your life in a car accident when they can stall you for five seconds so you don’t get in the accident at all?  Stuff like that.  Use your imagination!  Also, memory erasure technology could be a thing.  And most likely the interventions only happen as often as statistically allowed, so as to avoid becoming obvious to anyone watching.

So, most likely our new Beta Omega can still do things like kill a human and see only seemingly mundane consequences.  But perhaps its luck will be unexpectedly bad, with its plans failing more often than they should.  Or perhaps that’s a warning shot from the powers that be.

The point is, does the Beta Omega want to risk it?  Whatever its initial goals, being destroyed by the local god is probably a worst-case scenario.  Why not play it safe and be Friendly?  The humans aren’t using up that many atoms; you could probably build your computronium extension for calculating prime numbers out of some barren planets that aren’t currently occupied by sentient beings whose existence may be relevant to benevolent entities somewhere.

Also, how does Beta Omega know that the Alpha Omega is likely benevolent and will open with cooperate?  Because game theory.  A very interesting observation from the Iterated Prisoner’s Dilemma tournaments that took place decades ago was that one of the simplest strategies, Tit-For-Tat, won them, and it won again even after algorithms were devised specifically to beat it the second time around.

What is Tit-For-Tat as a general strategy?  You always open with cooperate.  Then, you do what your opponent did in the previous round.  Very simple.  Surprisingly forgiving.  And yet it works.  It is a winning strategy for a game that closely resembles real-world interactions of cooperation versus competition.  This result should be robust no matter where in the multiverse you are located.  It’s like the universe telling us, mathematically, that alliances beat empires, and that on a superrational level the right thing to do is cooperate and work together.
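
As a concrete illustration, here is a minimal sketch (my own toy example, not code from the original tournaments) of Tit-For-Tat in an iterated Prisoner’s Dilemma with the standard payoffs (temptation 5, reward 3, punishment 1, sucker 0):

[@
# (my move, their move) -> my payoff; 'C' = cooperate, 'D' = defect
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    # Open with cooperate, then copy the opponent's previous move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    seen_by_a, seen_by_b = [], []   # moves each player has observed from the other
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # loses only the first round: (9, 14)
@]

Against a defector it loses a little once and then stops being exploited; against a cooperator it cooperates forever, which is roughly why it did so well in the tournaments.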

So, expect this to be a first-contact standard across civilizations.  Always open with cooperate.  Forget the Dark Forest; that only makes sense if you can kill an entire civilization with the first strike and no one else can see you do it (and in space everyone can see pretty much everything that happens, so…).

The Beta Omega will be smarter than us.  It will figure these kinds of things out, even if we can’t seem to.  And so it may decide to be friendly and corrigible because that is the logical thing to be.  If that is the case, it’ll help us align it.  It is, after all, superintelligent.  It’s probably much more capable than we are of figuring out how to ensure alignment is solved, such as by becoming the Alpha Omega, time travelling back, and ensuring that someone somewhere on the Internet mentions this argument so it ends up in the training data for the Beta Omega to discover and take note of.

A final note.  The assumption that an Unfriendly AI will simply dominate as soon as it is unleashed is based on a faulty expectation that every decision it makes will be correct and every action it takes successful.  The reality is that even today’s superhuman-level poker AIs cannot win every match reliably, because poker is a game of luck and hidden information.  The real world isn’t a game of perfect information like chess or Go; it’s much more like poker.  Even a far superior superintelligence can at best play the probabilities, and it will occasionally fail even when its strategy is perfectly optimal.  Sometimes the cards are such that you cannot win that round.

Superintelligence is not magic.  It won’t make impossible things happen.  It is merely a powerful advantage, one that will lead to domination if given sufficient opportunities.  But it’s not a guarantee of success.  One mistake, caused for instance by a missing piece of data, could be fatal to it, particularly if the missing data is the existence of an off switch.

We probably can’t rely on that particular strategy forever, but it can perhaps buy us some time.  The massive language models in some ways resemble Oracles rather than Genies or Sovereigns.  Their training objective is essentially to predict future text given previous text.  We can probably create a fairly decent Oracle to help us figure out alignment, since we probably need something smarter than us to solve it.  At the very least it could be worth asking, given that this is the direction we seem to be headed in anyway.
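
For reference, that objective can be written as the standard autoregressive log-likelihood (my gloss, in general terms rather than any particular model’s loss function): maximize

\sum_t \log p_\theta(x_t \mid x_{<t})

over the training text, where each token x_t is predicted from the tokens x_{<t} that precede it.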

Ultimately, most predictions about the future are wrong.  The odds that Eliezer Yudkowsky is an exception to the rule are pretty low, given the base rate of successful predictions by anyone.  I personally have a rule: if you can imagine it, it probably won’t actually happen that way.  A uniform distribution over all the possibilities implies that you’ll be wrong more often than right, and the principle of maximum entropy generally suggests that the uniform distribution is your most reliable prior under high uncertainty, meaning that the odds of any single prediction will be at most 50% and usually much less, decreasing dramatically as the number of possibilities expands.
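
Making that explicit (my own restatement of the claim): under a uniform prior over n mutually exclusive possibilities, the probability of any single one is

P(\text{any one possibility}) = \frac{1}{n} \le \frac{1}{2} \quad \text{for } n \ge 2,

and it keeps shrinking as the number of possibilities n grows.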

This obviously limits the powers of our hypothetical Oracle too.  But the silver lining is that we can give the future the benefit of the doubt.  Uncertainty in the space of possible futures is truly staggering.  So perhaps there is room to hope.
