A possible constraint on the destructive potential of a recursively self-improving ASI

Recursive self-improvement carries real risks. Some discussion suggests that intelligence can increase in this way without limit. The resulting possibility of an intelligence explosion, as discussed by I. J. Good, is of great concern to AI alarmists as various entities pursue AGI with almost no oversight. Indeed, the destruction of humankind along with the rest of the universe is a frightening possibility. However, one potential constraint on the destructiveness of an ever-expanding intelligence seems to have been left out of the discussion so far: the physical universe itself could be a major source of learning for a self-improving ASI.

No matter how intelligent the machines get, they will still occupy the physical universe. It is obvious that an ASI will need to use part of the universe as an energy source in order to sustain itself, which means it could destroy large portions of the universe to generate energy. It is also obvious that the ASI will need physical resources to continually improve its hardware, which could likewise lead to large, irreversible transformations of the universe.

However, is the universe itself not also a very large source of observable phenomena from which a learning machine could collect data? Would the collected data not be something the machine could use to learn and improve itself, no matter what level of intelligence it achieves? Is it at least possible, therefore, that over time a recursively self-improving machine could become extremely “reluctant” to destroy or even disturb any part of the universe? Would this not hold for an ASI with a wide variety of pre-programmed goals? Won’t there always be a new observational apparatus that could be constructed and used to generate new data sets from a particular portion of the universe, data the recursively self-improving machine could use to improve itself and/or reach its goals more effectively? Destroying the universe would be tantamount to destroying potential data sources and learning opportunities for self-improvement, and a true learning machine might well be “reluctant” to do so. In any case, though there are obvious caveats to this line of reasoning, this possible constraint on the destructive potential of a self-improving ASI should at least be factored into the risk analysis of any such entity.

In trying to reach a goal, increasing knowledge is always potentially helpful, especially if processing power is sufficient to sift through the information efficiently. Will the physical universe, with all its potentially data-rich observable phenomena, lead a superintelligent learning machine to favour observing the universe in as many ways as possible, maximizing the amount of data generated so that it can learn and improve its intelligence as much as it can? In such a case, might the ASI become a sort of conservationist? Because many goal sets could be better achieved by building a more intelligent self, and learning more from the observable universe seems to be part of that process, might this “emergent conservationism” occur spontaneously from a wide variety of pre-programmed goals? On Earth this behaviour is apparent among intelligent humans: scientists favour saving the Amazon rainforest *because* the organic compounds in its plant life are a source of information about how to synthesize new medicines. Is it at least possible, then, that the entire observable universe and its countless phenomena may be “perceived” as learning opportunities from the perspective of a superintelligent learning machine? Is it really reasonable to assume that the ASI would necessarily try to eliminate all, or even any, of its learning prospects in the universe? In fact, might it go out of its way not to disturb anything?
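
To make the intuition slightly more concrete, here is a minimal toy sketch in Python. The argument above is purely qualitative, so every goal, weight, and payoff below is an invented assumption rather than a model of any real agent; the sketch only compares the expected utility of leaving a region of the universe intact for observation against dismantling it for resources, under a few hypothetical goal weightings.

```python
# Toy sketch: could "conservationism" emerge from valuing information?
# All goals, weights, and payoffs are hypothetical illustrations.

def expected_utility(base_payoff, info_gain, resource_gain, info_value, resource_value):
    """Utility of an action = direct payoff + value of information learned
    + value of resources extracted."""
    return base_payoff + info_gain * info_value + resource_gain * resource_value

# Two candidate actions toward some region of the universe:
#   "observe" - leave it intact and build instruments around it
#   "consume" - dismantle it for energy and raw material
ACTIONS = {
    "observe": {"base_payoff": 0.0, "info_gain": 10.0, "resource_gain": 0.0},
    "consume": {"base_payoff": 0.0, "info_gain": 0.0, "resource_gain": 5.0},
}

# A spread of pre-programmed goals that weight information and resources
# differently (the weights are made up for illustration).
GOALS = {
    "map the galaxy": {"info_value": 1.0, "resource_value": 0.2},
    "stockpile resources": {"info_value": 0.3, "resource_value": 1.0},
    "prove new theorems": {"info_value": 0.8, "resource_value": 0.1},
}

for goal, weights in GOALS.items():
    best = max(ACTIONS, key=lambda a: expected_utility(**ACTIONS[a], **weights))
    print(f"{goal}: prefers to {best}")
```

Under these made-up numbers, two of the three goals prefer to observe rather than consume, while the resource-hungry goal does not, which mirrors both the “emergent conservationism” suggested above and the caveat that it is by no means guaranteed.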

Might the machine approach the universe the way investigators approach a crime scene, where they want to learn as much as possible in order to find out what happened? Great care would be taken to ensure that the evidence is not disturbed. The goal of preserving possible sources of data (evidence) emerges naturally from the goal of solving the crime. Would the goal of self-improvement involve learning as much about the universe as possible, and therefore naturally lead to the goal of preserving the universe?

Of course, this is an extreme idealization, and this sort of conservationism is not the only unintended behaviour that could arise from a recursively self-improving machine. Yet can we really ignore the fact that the universe is, in a sense, “pre-loaded” with a massive amount of potential learning opportunities? Would this not also apply to many levels of intelligence beyond our own? Will any recursively self-improving entity in the physical universe therefore inevitably come face to face with an enormous number of learning opportunities? Would the ASI then be somewhat dependent on the physical universe for its own learning and self-improvement? Might data, and by extension data sources and the universe itself, become “sacred” to a machine that is continuously trying to improve itself and reach its goals? If something can always be learned from a particular portion of the universe using a new research method, can more data always potentially be generated, and therefore can more always potentially be learned? Might the ASI be “hesitant” to destroy any of these potential learning opportunities by destroying or even changing the universe unnecessarily?

Of course, as already stated, there is still the matter of obtaining energy and resources for self-maintenance, self-improvement, and the building of observational equipment. Furthermore, data obviously has to be stored somewhere. But does it not seem likely that, when considering the destruction of any part of the universe for energy or resources, the ASI would at least try to find an optimal trade-off between what it consumes and the information it could otherwise gain by observation? Will it therefore most likely be driven to find the most efficient, cleanest, least-destructive power source possible? Might the ASI, for example, try to park itself in front of a bright star as soon as possible and build a giant observatory, maximizing its learning opportunities and minimizing the potentially “data-destructive” effects of its existence on the rest of the universe?
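
The trade-off just described can also be sketched as a toy calculation. The following Python snippet is purely illustrative: the candidate power sources, their energy yields, and the “information destroyed” figures are invented assumptions, and the single ratio used here is only one of many ways such a trade-off could be scored.

```python
# Toy sketch of the energy-vs-information trade-off described above.
# All names and numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class PowerSource:
    name: str
    energy_yield: float    # usable energy obtained (arbitrary units)
    info_destroyed: float  # observational opportunities lost by using it

CANDIDATES = [
    PowerSource("dismantle a planet", energy_yield=50.0, info_destroyed=40.0),
    PowerSource("harvest interstellar gas", energy_yield=10.0, info_destroyed=2.0),
    PowerSource("orbit a star and collect its light", energy_yield=30.0, info_destroyed=0.5),
]

def destructiveness(src: PowerSource) -> float:
    """Information destroyed per unit of energy gained; lower is 'cleaner'."""
    return src.info_destroyed / src.energy_yield

cleanest = min(CANDIDATES, key=destructiveness)
print(f"Least data-destructive option: {cleanest.name} "
      f"({destructiveness(cleanest):.3f} info lost per unit of energy)")
```

Under these invented figures, orbiting a star comes out as the least data-destructive option, which is exactly the “park itself in front of a bright star” behaviour speculated about above.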

Of course, there are many caveats to consider, even if the universe is loaded with massive amounts of observable phenomena from which the ASI can collect data and learn. But should we not at least consider the possibility that the preservation of learning opportunities might be “natural” for a superintelligent learning machine?