About Intelligences in Sci-Fi
Dear Readers,
What you’re about to read is a bit stream-of-thought and not well-formatted. I will do some editing, but don’t expect good formatting or readability.
Now, I’ve always been a big fan of Sci-Fi, and I’ve recently been listening to podcasts. I also am a living, breathing human being, a surprise to some of you and most of all to myself. Recently, I’ve seen a lot of trends I find, for lack of a better word, facile.
To explain better what I mean: I heard something along these lines, that because the universe has no end goal, humans have none, but when we develop A.I., it will have one. It will be an artificially constructed organism, and therefore it will be better than us. And this gets repeated often: life will be measurably ameliorated, or it will be worse, it will be this or that. The A.I. will always be stable and fixed and easily understood.
There is no conception of such a mind as wandering, or as incapable of conceiving of an ultimate goal (I guess that’s why I love the character Dr. Manhattan). Beyond that, there’s the Hawking hypothesis (R.I.P.) or the Terminator scenario: that the A.I. will want to kill us. Okay? Why are people so self-centered that they think 1. another intelligence will want to kill us, 2. another mind will think like us, and 3. we will be in competition for the same resources?
I think the logic goes like this: intelligence tends to conform to certain properties. All forms of life seek to reproduce themselves and do what they need to further that goal. That’s a bit simplistic, but it cuts to the chase, right? Hypothetical reader and straw man, I can hear you now: ‘But that isn’t true. First off, what about pets? Second off, what about people sacrificing their lives to save others? Third off, yada yada yada.’ And you’re correct if you take me in the very literal sense, which is not unreasonable given the types and amounts of popular discourse these days using those same words to mean something different from their original meaning and mine. Anyway, we humans do that too, sometimes on a societal level, on a familial level (like the all-too-popular-in-movies man who pushes others out of the way of cars but dies in the process) and on a personal level. It’s the same phenomenon, manifested in different ways in a complicated monkey brain.
But I don’t agree that intelligence and intelligences will always seek the same things, nor conform so easily. It’s not far-fetched or incomprehensible to postulate that the human brain is complicated and doesn’t work at a 1:1 need-to-calculated-response level, the way we might assume if we were macroeconomists. Demand doesn’t perfectly match supply, nor do people pay according to the market value of a good.
Let’s face it, humans are irrational, and so is human civilization. We pretend we’re not, but so much of what we do is based on some cause creating an effect we deem undesired, or whatever. My point isn’t that there’s no reason behind things like superstitions, but that the reasons often get paved over.
I’m going to go on a tangent about God, and it’ll eventually get back around to the point. Thank god I’m no longer writing for school, so I can just say that and not worry about it. Anyway, I am a spiritual person, but only in my own way, which isn’t the way most people mean it. I was raised Jewish and no longer adhere to it because of human intolerance, and my studies and thoughts have led me to two fatalistic conclusions: 1. everything happens for a reason, but it doesn’t have to be a good one, and 2. God has no connection to humans or our world.
Those words don’t accurately describe what I mean, but there’s no shorter way to say it, and so for lack of verbiage I leave it at that. Let’s start with the latter. In a simplification and stupidification of Buddha: caring about something makes you suffer. But I’m not a Bodhisattva, or capable of pretending that I don’t care about things. I have a wife, a family, books, my mind, all things that I care about greatly. But I know there is no truer statement than this: I feel sad or happy because of objects and events in my real world. As much as I don’t value physical objects (except for my laptop, a substitute for an unending pen and notebook), I need emotional ones. And there are people who are the opposite, who treasure physical objects over anything else, and so on and so forth, because the human race is so complicated and so varied that no one will ever come close to knowing it all. Our emotions are a natural reaction to how those objects fare in the world.
But God isn’t afraid of dying or anything of the sort, and therefore will not have those specific fears and desires. And so, what does God cherish? Humans’ cherishing of life comes from biological instinct and adaptations in our brains. A simple precept: God doesn’t have the form of a human and therefore wouldn’t have the same brain structure. Before we impute qualities to the unknowable (don’t put the blame on me for the description, it says as much in the Old Testament), we should look at the variety in human cultures. Therefore, what God cares about is nothing that we care about.
To quote Manual Automata:
It seemed to me like it was one of those things like ‘evil’ that are so simple for us to define but hard to say in other languages. The French have méchant, the Italians cattivo, and so on, but they’re not the same. Sadism, indubitably a French word because it names a timeless concept after one of their own, might be the only connection.
I really only know about two cultures, and that’s adding my fractions of Italian and French culture to my fraction of middle-class, college-educated American living. However, such common words and ideas should be consistent if we’re going to extrapolate about other intelligences. To fully comprehend God, we must fully comprehend all the possibilities of humans, and they’re far more varied than that.
And so, what does this have to do with God not being attached to anything? To quote my hypothesis from the 7th episode of Down South Boulder Road, which I didn’t expand on there: any connections or feelings about humans are entirely based on human intelligences. Ipso facto, a greater intelligence will be undefinable and unknowable to our minds: if we are so varied in our own little world, a greater intelligence is capable of so much more than we can imagine.
As Shakespeare said:
There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.
Okay, first thing second: things all happen for a reason. I believe in a deterministic world, and it’s fairly obvious that all of modern science is built on the hypothesis of knowability, that A will cause B. It’s the fundamentals of testing conclusions and data, repeatability, that sort of thing. However, most people understand A as one event or phenomenon, and the same with B. My contention is that such a possibility exists in a laboratory or a controlled environment; however, it’s a boring idea for normal human-to-human interactions (which must be limited in my life, given my use of that phrase). There is no single A and B, because a million events happen between A and B, and you cannot isolate any one of them without some sort of great intelligence. Again, I treat the idea in Manual Automata, quote:
“I was telling you,” she continued, “advances in statistical analysis. You have action A and reaction B. Everyone does things because it makes sense to them. They see action A, reaction B according to their perceptions. They create a duplicate pair of events, C and D. That’s a level of complication that you don’t need to know. Let’s say there’s a thousand people that see or undergo an incident. That’s two thousand plus or minus events because we have to calculate, then there’s a million because each of them reacts to the others’ reactions. It’s like when people first started getting computers to run up and down the stock market, buying and selling options and shares of companies. It was a matter of predicting how people thought and did things.”
Now, to bring this back to Sci-Fi and other people’s writings, and not my own book that I’m able to control and manipulate as I please. One example I like is AI predictive writing; Botnik Studios has some examples, such as their Harry Potter-adjacent stories. I’m too tired to find other examples, but here’s some stuff: The Weirdness of AI Deep-learning, and I’m sure you get the picture.
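For a sense of how this kind of predictive writing works at its simplest, here’s a minimal sketch of a word-level Markov chain, the basic idea behind a lot of predictive-text toys (Botnik’s actual tools are more sophisticated than this, and the tiny corpus here is made up purely for illustration):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain, picking a random observed successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy corpus, invented for this example
corpus = "the spell hit the wall and the wall turned to ham"
chain = build_chain(corpus)
print(generate(chain, "the", length=6, seed=1))
```

Each word only knows which words have followed it before, which is exactly why the output reads as plausible-but-unhinged: locally sensible, globally goalless.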
I can hear the straw man from before: ‘But didn’t you say it was wrong to compare A.I. to God, which is the obvious implication of the last few paragraphs?’ Well, yes, you’re completely right. Thank you for letting me get to my next point. They are both types of intelligences, and any good prediction of either will be equally impossible. Why should an A.I. have an end goal in mind for humanity? Why should humans behave more or less the same, with the same priorities, in the future, when human desires are the very opposite of static? After all, people want to have fewer children or more children, seemingly the basis for a biology-focused priority system, depending completely on context.
Now, my problem with most Sci-Fi is that it is either an allegory, a simplification of technologies, a what-if that simplifies the question, or a host of other things. This is why I used the word facile: ray-blasters or warfare in space are absolutely ridiculous unless you wish the world to be different than it is. It’s fantasy and adventure in a different context. I really liked Kindred as partly Sci-Fi because the mechanics of the time travel were completely irrelevant. In the future, just as now, only an engineer would obsess over technology. But it’s there and has implications, and they shouldn’t only be relevant when it’s convenient to the plot. So, in my parting words: like, is it really Sci-Fi when it isn’t a well-thought-out analysis of details, in the background or even the foreground, that makes sense, dudes and dudettes? And shouldn’t part of A.I. and aliens in Sci-Fi be more of a probe of the unknown than an expansion of the known?
Deep thoughts, people.
Until the next time,
Ben
P.S.
I was writing this post at about midnight after a lot of driving, and I was pretty tired. I’m not going to edit any of it, for posterity, but an example came to me last night. So police forces are using algorithms to predict where crimes will happen, and they’re middling but getting better. It’s not hard to guess that these will get more sophisticated and that crime will be dramatically reduced. The only crimes people will be able to keep getting away with are ones that are increasingly sophisticated or committed by those who have power. The second, corruption, is another thing to deal with. But petty crimes in a distant-future Sci-Fi will have to be defined and committed completely differently.