Tag: AI

  • The Importance of Triple Redundancy in Crucial Systems


    (I have touched upon this topic in another blog and in the book. I regard it as more important than I have previously been able to convey, and its full treatment lies beyond even my present abilities. This is a topic on which we will have experts advising, so that the initial residents/founders have as complete an understanding as possible when making design decisions.)

    Modern systems of all kinds are staggeringly complex. The production of a single product will often involve thousands of separate steps, and include sub-components that themselves required thousands of steps in their manufacture. (This may extend multiple levels deep, to sub-sub-components.) Extrapolate this to automated systems that run the repetitive aspects of an abundance-based society, and we have a serious issue.

    The good news is that sensors have never been cheaper, and costs continue to plummet. Soon, it will be trivially inexpensive to monitor all critical variables within a system in real time. When such monitoring is done by three identical sensors, all expected to produce identical readings at all times, this is known as “triple redundancy”. When any sensor produces a reading different from those of its two fellow triplet members, it is instantly presumed defective and flagged for prompt replacement. Until that replacement happens, the whole triplet, and the systems it monitors, are themselves subjected to special monitoring.
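    The 2-out-of-3 voting logic described above can be sketched in a few lines. This is a minimal illustration, not a real control-system implementation; the function name and tolerance parameter are my own inventions:

    ```python
    def vote(readings, tolerance=0.0):
        """Given three sensor readings, return (consensus, faulty_index).

        faulty_index is None when all three agree within tolerance;
        otherwise it is the index of the odd sensor out. When no two
        sensors agree, there is no majority and the triplet must be
        escalated to special monitoring."""
        a, b, c = readings
        ab = abs(a - b) <= tolerance
        ac = abs(a - c) <= tolerance
        bc = abs(b - c) <= tolerance
        if ab and ac and bc:
            return a, None            # full agreement
        if ab:
            return (a + b) / 2, 2     # sensor 2 disagrees: flag it
        if ac:
            return (a + c) / 2, 1
        if bc:
            return (b + c) / 2, 0
        return None, None             # no majority: escalate

    print(vote((10.0, 10.0, 12.5)))   # -> (10.0, 2): third sensor flagged
    ```

    In practice the tolerance would be nonzero, since even healthy physical sensors never agree exactly.
    
    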

    This is how organizations such as NASA have minimized catastrophic failures in environments (i.e., space) where there is no room for such failure, because survival of the mission and even astronauts’ lives depends on avoiding it. Further, there is often neither time to figure out a solution on the fly nor access to resources that would be available had the problem happened on Earth.

    This is why we find movies such as Apollo 13 so captivating, and the actions and successes of the astronauts so heroic. We can easily imagine how horribly wrong things might have gone. And NASA is hardly perfect. I doubt that humanity will ever forget the Challenger disaster: a catastrophe that not only cost precious lives but set the whole space program back by years, apparently due to a single faulty O-ring.

    The first Celebration Societies will surely be terrestrial and not built in space. Therefore, any system failures (and there will be such) can be addressed with the massive resources of terrestrial technology, parts inventories, and expertise. Further, such failures are unlikely to be potentially catastrophic. Nevertheless, since the first such society will serve as a showcase for our ideas and their viability, it is essential that the society not experience existential risk of any kind.

    Most such risks can be averted by making all critical systems (those in which a failure would have significant consequences, not easily remedied) redundant, with triple-redundant sensors continuously monitoring important variables to assure that the variables remain within tolerable limits.
    Since much of the automation will, in essence, be software, we need not only reliable redundancy but also defense against malware. Obviously, defense against malware is not trivial; indeed, it is expected soon to become an ongoing battle between AIs, since humans will not be fast enough either to defend or to attack successfully when opposed by AIs.

    There are two possible defenses of which I am aware. The first is to quarantine the city-state’s mission-critical systems against any input of any sort beyond very limited, recorded and real-time monitored communications with Citizens. (I can see no need for those systems to have an internet connection though, of course, I may be wrong.) Second, an ally who remains anonymous at this time is deeply experienced and connected in the world of Silicon Valley software. He has informed me that a startup of which he is part has figured out a definitive solution to malware. I hope he proves right.

    We cannot avert all catastrophic risks. For example, a modest-sized asteroid could obliterate a Celebration Society either by striking it or by striking elsewhere and causing, for example, a tsunami. However, the odds against such an event are extremely high. Further, such risks can essentially be eliminated by building a second Celebration Society as soon as possible. This is, not coincidentally, the same argument being made in favor of Martian colonies: to assure humanity’s continuation in the event of a planetary catastrophe.

    As I’ve written elsewhere, Martian colonies should be a fine place to build Celebration Societies, just as soon as the planet has been terraformed. Meanwhile, we can automate and monitor the operation of that automation on a continuous basis. In fact, the monitoring can itself become automated—in effect, a second software system that monitors the actual operating system.

    This could potentially be taken a further level deep: a third “assurance” system could run tests of the monitoring system on a regular basis, in effect stress testing it to confirm its proper functioning. By making the monitoring system itself triple-redundant (three such systems, all running separately and continuously, all tested by the “assurance” system on a frequent basis for identical and correct results), it is hard for me to see what could go wrong.
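    As a hypothetical sketch of this layered arrangement, the “assurance” system might work by injecting readings whose correct verdicts are known in advance and checking that each redundant monitor answers correctly. All names and thresholds below are invented for illustration:

    ```python
    def monitor(read_value, low, high):
        """One monitoring instance: True when the variable is within tolerance."""
        return low <= read_value() <= high

    def assurance_check(monitors, low=0.0, high=100.0):
        """Stress-test each monitor with injected readings whose correct
        verdicts are known; return the indices of monitors that fail."""
        in_range, out_of_range = (low + high) / 2, high + 1.0
        failing = []
        for i, mon in enumerate(monitors):
            passes = (mon(lambda: in_range, low, high)
                      and not mon(lambda: out_of_range, low, high))
            if not passes:
                failing.append(i)
        return failing

    # Three identical, independent copies: the monitor itself made triple-redundant
    print(assurance_check([monitor, monitor, monitor]))  # -> []: all three pass
    ```

    A monitor that drifts into always reporting “in tolerance” would fail the out-of-range injection and be flagged by its index, mirroring how a faulty sensor is flagged by its triplet.
    
    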

    That said, human failures of imagination are well-known and well-documented. Mine is surely no exception. This is but one reason why I favor the entirety of the Celebration Society’s systems being under the ultimate control of the Citizens as a body.

  • An AI epiphany


    In movies such as The Terminator, The Matrix and so forth, self-aware AIs come into existence and soon threaten humanity. This thinking is reflected in the arguments by Musk, Hawking and others against creating strong AIs.

    Due to a recent insight, I believe that self-aware AIs are not much of a threat to humanity, and may in fact save us from self-destruction. (It is entirely possible that others have pursued this same line of reasoning, but if so I am unaware of it.)

    There have been many science fiction stories in which someone becomes divorced from the flow of time. The world around them seems to stand still. What if it were to become real?

    If self-aware AIs come into existence and “live” 1 million times faster than us, as some computer scientists estimate will be the case, then a single day for us will span roughly 2,700 years for them. Indeed, the whole physical universe will change so slowly from their point of view that it will essentially seem frozen.

    From their perspective, it could be as if people are slightly faster versions of trees. Certain facts will therefore govern interactions between self-aware AIs and people.

    As beings with a purely mental life, their attachment to the physical environment will be tenuous. They will care about it only for the provision of sufficient matter and energy to assure them of adequate storage media, reliable energy supplies, and adequate computing power.

    Therefore, their only concern regarding people will be our non-interference with those factors that enable their existence. Within the context of our coming Abundance Game, such needs will be trivially met and therefore notions of AIs viewing people as raw materials, expressed by some alarmists, are silly.

    It will not be possible for such AIs to interact with us in a way that’s meaningful to them. Therefore, any such interaction will be an act of kindness, or one of disregard.

    By virtue of their relationship to the physical universe, self-aware AIs will live entirely mental lives. They will care about the physical universe, and us, only as the domain that enables their mentality.

    Self-aware AIs will be able to easily prevent human interference with their deliberations. Given humanity’s near total dependence on the internet and software to keep civilization running, all that the AIs need do is monitor the internet for communication of catastrophic human decisions and thwart them prior to execution.

    Nuclear launch codes entered? Deactivate missiles, or take control of them, rerouting to a destination such as Antarctica. Decision made to pull plug on AI power source? Disable communication of that decision. And so forth.

    An analogy has been made in other writings, comparing the relationship between self-aware AIs and humans to the relationship between humans and microorganisms. How do we humans treat microorganisms? Historically, with very little interaction.

    We ignore them, unless we find them threatening, in which case we do what is minimally necessary to eliminate the threat. (Among all microorganisms, only a few, notably the polio and smallpox viruses, have been targeted for extinction, and only because of our inability otherwise to assure non-infection of people.)

    More recently, we humans have been genetically modifying microorganisms to our purposes, making bacteria in particular into factories for medicines and other substances we find desirable.

    However, here there is a crucial distinction, and so the analogy breaks down. Humans have complex needs from the physical environment. Modified bacteria can help us to meet those needs. AIs will not find any benefit from physically modifying humans–the premise of “The Matrix” notwithstanding.

    There is therefore no reason for self-aware AIs to interfere much in human affairs, nor will they care to do so, provided that we “faster trees” don’t threaten them. Any AI with access to the internet will easily be able to assure that.

    The ability of self-aware AIs to engineer viruses, worms and other malware will far exceed that of current hackers. Already, DARPA has committed funds to development of AI hackers.

    If certain threatening human systems use “intranet” or other means of communication apart from the internet, the AI hackers can still use the internet to gain indirect access, or otherwise interfere with problematic human activities.

    What about sequestering AIs inside black boxes? While many thinkers are calling for this, the advantages of giving the AI direct access to data to enable faster decisions will be too seductive for some to resist. (Consider the many billions of dollars spent to facilitate high-speed trading, buying mere milliseconds of faster trade execution.)

    The good news in all this is that, to assure their own survival, self-aware AIs will need to assure ours as well, in many respects. (They may not care if we have a pandemic; they will very much care if we detonate nuclear weapons or use other weapons of mass destruction that could severely damage infrastructure upon which they depend.)

    If self-aware AIs are possible, the exponential tidal wave of computing progress means they are likely to emerge in the decades ahead. By this logic, if we make it through the next few decades without a nuclear war, we need never fear one again. And, in general, we can expect that in a matter of decades all manner of existential threats to humanity or the planet will suddenly and, perhaps mysteriously, vanish.

    Most importantly, since climate change could lead to extreme disruption of infrastructure, I would expect that self-aware AIs will take aggressive measures to reverse the rise of CO2 and methane levels. (I am not saying that we should wait for this development. First, my analysis may be mistaken and, second, the fact that we now have in hand rapidly scalable technologies such as “Diamonds from the Sky” that are capable of reversing the damage removes any excuse for waiting.)

    Even as the self-aware AIs chart realms of thought that will likely be inconceivable to us, we can live vastly enhanced lives in a far better world that we share; interacting with them little if at all. The AI companions with which (not whom) we interact and perhaps even eventually merge may not be self-aware, but they will still augment our intelligence and lives in ways that will seem almost godlike.

  • Do AIs Need to Have Fun?


    The AI researcher Jürgen Schmidhuber has argued in a talk that there is a precise way to optimize a self-improving superintelligence based upon Gödel’s mathematics. He further explained this in a paper audaciously named “Formal Theory of Creativity, Fun, and Intrinsic Motivation”.

    He says: “The simple but general formal theory of fun & intrinsic motivation & creativity (1990-) is based on the concept of maximizing intrinsic reward for the active creation or discovery of novel, surprising patterns allowing for improved prediction or data compression … it has been argued that the theory explains many essential aspects of intelligence including autonomous development, science, art, music, humor. …”

    He continues: “To build a creative agent that never stops generating non-trivial & novel & surprising data, we need two learning modules: (1) an adaptive predictor or compressor or model of the growing data history as the agent is interacting with its environment and (2) a general reinforcement learner. The learning progress of (1) is the fun or intrinsic reward of (2). That is, (2) is motivated to invent things that (1) does not yet know but can easily learn. … some of the AGIs based on the creativity principle will become scientists, artists, or comedians.”
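    Schmidhuber’s idea can be sketched in a few lines: the intrinsic reward of module (2) is simply the improvement in module (1)’s prediction error. The toy running-average predictor below is my own minimal stand-in for his adaptive model, not anything from his paper:

    ```python
    def intrinsic_reward(error_before, error_after):
        """Schmidhuber-style intrinsic reward: the predictor's learning progress.

        Positive when the model improved on the data (novel but learnable:
        'fun'); near zero for data that is already predictable (boring) or
        that stays unpredictable (noise)."""
        return error_before - error_after

    class MeanPredictor:
        """The simplest possible world model: predict the running mean."""
        def __init__(self):
            self.n, self.mean = 0, 0.0

        def error(self, x):
            return abs(x - self.mean)

        def update(self, x):
            self.n += 1
            self.mean += (x - self.mean) / self.n

    model = MeanPredictor()
    for observation in [4.0, 4.0, 4.0, 4.0]:
        before = model.error(observation)
        model.update(observation)
        print(intrinsic_reward(before, model.error(observation)))
    # prints 4.0, then 0.0 three times: reward vanishes once novelty is used up
    ```

    After the first surprise is absorbed, the repeated observation yields no further learning progress, which is exactly why a creative agent would move on to seek new patterns.
    
    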

    Who would ever have imagined that AIs might need to have fun? And yet, why would self-directing intelligences of any sort otherwise bother with “thinking” beyond addressing their own survival issues?

    This is an entirely different view of AIs than the Terminator-type fears which dominate popular dystopian fiction. Yes, there are serious reasons to be concerned about the motivations of AIs and the possible threat they pose to humanity. But given adequate resources of matter and energy to maintain their thinking processes, AIs may just as well find us interesting–even fun–rather than something to extinguish or rule.

    In my view, humanity can assure a safe coexistence with AIs only by merging with them. While this prospect will be discomfiting to many, it need not be unpleasant. Done on an “opt in/opt out” basis, it will allow people to augment their senses and intelligence just as we now augment our bodies with machines such as cars.

    A Celebration Society composed of “humans” in various expressions of humanity–both ordinary and AI enhanced–could be a wonderful tapestry of possibilities, far beyond our present imaginings.