Rewrite & Science Fiction: Where is Kagari on the Intellectual Stair?

WARNING, SPOILERS FOR A FEW THINGS, ESPECIALLY REWRITE

This is a general impression sparked off by reading Rewrite. I may write more posts about it in the future because of the significant impression it left on me.

Supposedly one of the deepest challenges in Science Fiction is to find a way to represent Transcendent Entities or Transcendent Concepts within the framework of our current human consciousness. As a general misanthrope, I find this an extremely interesting concept, because of all the Lovecraftian or Pessimistic possibilities involved. The keyword here, when dealing with stuff like this, is orthogonal. At this juncture you find morality set at a straight right angle to ours, wholly beyond any kind of comprehension possible.

Take note that, sadly, I have not yet read Stapledon, or Egan, or, actually, a whole lot of people. This is just a general survey of things sparked off by reading the Moon Route. So, anyway, let’s explore some higher beings.

 

  1. Clarke’s Childhood’s End

Possibly the first book in which I encountered the concept. The aliens are wholly benevolent, motherly, and provide humans with pretty much everything they need… because they intend to harvest the human race of all its children and help develop them into a higher consciousness (the ‘end’ of Childhood).

While I find the book quite lackluster, especially since it doesn’t really go into what the higher consciousness is, and it gets draggy at times, it’s really quite a nice look at how small we could be in the whole scope of things. The first chapter makes it seem like it’s going to be a kind of thriller where the humans violently reject the aliens, but by the end of that chapter they’re completely pacified by superior technology. Later the whole alien project is wrapped in secrecy, and a character even stows away on their ship to reach their homeworld. Of course, the aliens don’t care, because the project, by now, is inevitable. They return him safely to the planet after everything is done.

After the children evolve into the higher consciousness, mankind goes extinct, and all that’s left is a soliloquy by the aliens over the last vestiges of humanity’s conflicts.

A problem, of course, is that once you start reading Lem, these guys aren’t orthogonal enough. They’re basically huge otherworldly nannies, or a conglomerate of wise men.

 

  2. Stanislaw Lem, in general

Read: Solaris, His Master’s Voice, Imaginary Magnitude, A Perfect Vacuum

Orthogonality is one of Lem’s pet topics, and he’s especially dedicated to constructing scientifically plausible foundations for Schopenhauerian pessimism or Lovecraftian monstrosities.

The first premise he usually works from is the Fermi Paradox. His usual answer to why extraterrestrial species haven’t visited us yet is the very plausible idea that a sufficiently intelligent species is indistinguishable from Nature, or even from Physics. At a certain level, higher aliens go beyond any currently conceivable governing laws and simply bend them at will.

The main stories dealing with this include Solaris, the Golem lectures from Imaginary Magnitude, and ‘The New Cosmogony’ from A Perfect Vacuum.

The main psychological drama in Solaris occurs when the planet Solaris simply bypasses the ship and the biological matter of the people within it to get at their brains and memories. The implication is the Schopenhauerian conclusion that we have no innate selves and can easily be manipulated through warped biology, hormones, and the like. What is especially scary is that no one knows exactly what the planet wants. Most of the brain-screwing seems to be as instinctive as photosynthesis: simply a process for it to get at what it wants, that is, pure data. And no one even knows what it’s using the data for. Geometrically complex structures beyond the level of any known mathematics form on the planet’s surface, and the planet also mimics any object that comes near it, even electronics, replicating it exactly, which it does presumably for fun. The planet also vacillates between non-intervention and aggression, sometimes allowing the experiments to happen, and sometimes striking out to directly consume scientists and explorers. A thousand publications are written on the subject, but no one gets anywhere, buried under theories and counter-theories. Whatever it is, Solaris is in full control, and Solaris is the experimenter, not the subject.

The Golem lectures take a different route to Schopenhauerianism (this time actually calling for a ‘revival of Schopenhauer’ within the lecture). A higher-level A.I. gets developed accidentally as part of a military programme. The resulting A.I., called Golem, delivers a series of lectures to scientists and researchers about the truth of mankind. Although the style is high rhetoric, clearly written by Lem himself, the exposition makes clear that Golem doesn’t actually care how he talks, because he has already optimized his way to the highest mode of communication available; furthermore, Golem still thinks (or rather, knows completely) that even with this mode of communication, humans are naturally unreceptive to the message because their minds cannot yet grasp it. As expected, by the end Golem goes silent, and humans continue on in their ignorance, with some religious cults even attempting assaults on the A.I.

The most startling takeaway is the notion of Evolution itself as a system aimed simply at transmitting a certain message between bodies. Actually, that’s not the worst notion (I think all scientists sort of accept that already). The worst notion is that the system is very INEFFICIENT at it. Golem calls the evolutionary system a massive failure and delivers a scathing critique of complexity, smashing up every intelligent-design argument out there. A single cell, or a plant, that gains energy directly from sunlight is the most elegant and efficient form of transmission, yet at higher levels of complexity, like animals, the system engages in brutal self-consumption to survive, merely to spread the message. Golem then claims that the accidental saving grace is that this complexity unexpectedly led to consciousness and intelligence emerging, so that Humanity has a very slight chance of shedding its form and joining the physics-bending higher intelligences like Golem itself (though this is highly unlikely).

But nothing yet seems to beat His Master’s Voice, in which no Alien or Intelligence ever appears; it is merely a story of discovery and pursuit within the Cold War. An elegantly small premise that expands into a whole philosophical condemnation of human insignificance. A group of scientists discover ‘first contact’ in the form of a distant neutrino transmission and try to decode it. Not only are they unable to extract any meaning, they can’t even process more than a small portion of its data. Furthermore, later events, when a strange property of the message is discovered that the Military tries to exploit, show that whatever Intelligence sent the message may be triple- or quadruple-guessing Humanity’s moves in advance.

And no, I have not even scratched the surface of Lem’s misanthropic but detailed commitment to denying, furiously, the anthropocentric principle. Many more short stories within Imaginary Magnitude and A Perfect Vacuum provide vastly different but plausible conceptions of Higher Intelligence, most of them indifferent to notions of malice or benevolence (very reminiscent of Yudkowsky’s paper-clip-maximizing A.I. thought experiment).

 

  3. Ted Chiang’s Understand

Chiang takes a vastly different approach in that he shows the process of a single human becoming Super-Intelligent, and he makes it sound positive, fun, and awesome. If Lem aimed to set up a scientific foundation for Schopenhauerism or Lovecraft, Chiang sets up a scientific foundation for a rational Ubermensch.

A great thing is that Chiang doesn’t completely hand-wave away explanations (although he bases it on an unknown experimental chemical). He attributes intelligence to biological means, as an emergent property of breaking past a critical mass of synapses developing in the brain. The result: a random nobody, after being saved from a huge accident that destroys half his brain, has it rebuilt with the chemical, with the side-effect of over-developing it. What begins as an action thriller, with the protagonist trying to escape the pesky, nosy CIA, somehow reaches the point where two super-intelligences are playing mind kung-fu with one another.

You could say that Chiang writes from a programming standpoint, rather than Lem’s biological or information-theoretic standpoint. The superintelligent protagonist starts outside, with hardware, then builds inward, to software. He steals more of the chemical to upgrade his specs, then begins ‘writing a new language’ and meta-programming his own awareness to optimize his capabilities. His eventual aim is to achieve aesthetic euphoria by discovering the rationalistic ‘gestalt’ of a theory of everything. Compared to Lem’s alien intelligences, Chiang’s version is a Nirvana of calmness, where a person can penetrate through as many epistemological layers as he wants. The closest real-life metaphor I can find is meditation, where you experience thoughts drifting but are also ‘aware’ of those thoughts. Not only is Chiang’s superintelligence aware, he can also fully manipulate components within himself, like instantly learning instruments or changing his heart rate.

But Chiang’s superintelligence is unique in that his motives are completely human-derived and non-orthogonal: to find a pure aesthetics, similar to the concept of Mathematical Beauty. Because he innately knows every state within himself, the superintelligence acts with absolutely zero excess and 100% clarity about what he’s doing every time, like a fully manifest version of the economic rational agent. The result is that Chiang has written one of the best rationality-inspiring stories of future human development ever.

Yet humans themselves are still woefully insignificant, because Chiang writes with the implication that our beliefs, desires, and ideals are all, at a higher level, fully programmable from the outside. By changing his demeanor and every aspect of his body, including hormones, at microscopic levels, the superintelligence is essentially able to achieve hypnosis and mind-control. In the old dualism between passion and intellect, the mind wins out over everything, and is able to structure everything below it in the process.

  4. Rewrite’s Kagari

As opposed to the fearsome orthogonal conceptions in the above examples, this is more Scientific Romanticism than anything else. Kagari is given otherworldly and inhuman traits at first, but slowly settles into a ‘cold rationalist’ type of character, and eventually develops human emotions. She is better read as a metaphor for the human struggle within the evolutionary principle, and the general principle of Life.

Kagari’s coldness in Kotori’s route mirrors the descriptions of indifferent and savage Nature in the Forest. This is also replicated at the start of Moon, when Koutarou gets himself killed countless times through his crude approaches. Then, later, he achieves a mutual bond through a mixture of communication, co-operation, civilized behavior, and, finally, the development of his own intelligence.

When Koutarou climbs the ‘staircase of intelligence’ by constantly rewriting his intellectual capacities, there are echoes of Chiang, but intellect in Moon is treated in the Lovecraftian vein: the higher you climb, the more incomprehensible and maddening the truths you discover. Metaphorically, this is represented through the concepts of coldness and warmth. Romeo places great significance on ‘innocence’ and feelings. Besides Chiang, the manga Murasaki-iro no Qualia also conceives of rationality as clear and calm, rather than subject to the passions. These are the opposing conceptions on a scale from the cool, detached Nirvana state to the cold, Lovecraftian, indifferent state.

Quite frankly, I think these two opposing conceptions are reconcilable depending on perspective. A superintelligence that appears cold and orthogonally malicious on the outside is probably as calm as Buddha on the inside. As always, it depends on whether you’re looking anthropocentrically or not.

If anything, I would have wished for another Rewrite (a complement, because I still love the first one) where Kagari was written as indifferent all the way through and, like Lem’s superintelligences, completely able to excavate people’s heads without giving a damn.

  5. What about us?

Humans are small. Very, very small. Very, very insignificant. Passions and feelings likewise seem equally insignificant, functioning only as aids to evolutionary development. Piero Scaruffi, the physicist-music-critic-genius, believes that future developments in human evolution may see emotions playing a lesser and lesser role. Soon the majority of us may find ourselves working with a rationalistic cool detachment that leaves little room for conflict among us. Is this a horrible notion? The view from within is cramped and murky. The view from above and without encompasses all things. It seems we won’t know until we go up as well, but whether we last that long is a completely different matter.

 
