A soul is the difference between a thing that is alive and a corpse. If something is alive, it has a soul. The more interesting question (I suppose) is whether an entity has a *rational* soul. There are several hypothetical problems that could arise (whether to baptize aliens, or I suppose exorcise entire planets à la James Blish). But the much more interesting question is why so many humans (1) do not believe that rational souls exist, and why so many other humans (2) do not agree on how to treat entities that we already agree have rational souls. (Have you ever talked to someone who *both* comforts a mother who miscarried by saying that the little one is "an angel in heaven" (side rant: humans do not become angels) *and* believes that little ones of the same age should not have a legal right to life? Yes, of course you have. These contradictory positions are very easily maintained by a sufficiently compartmentalized way of thinking and relating to other humans, and so they are extremely common.)

This is not a hypothetical problem, and I conclude that if a hypothetical problem did arise, we as a species would spend five minutes recognizing "sure enough, this entity has a rational soul (if, of course, you believe souls exist! ha!)" and then dehumanize and enslave it. At a guess, this is a lot more likely to occur with a chimera (i.e., attempting to warp and mutilate life) than with starting from scratch (i.e., attempting to *create* life; since my philosophy is formed of equal parts Aquinas and Tolkien, I do not think that we can create life).
How do we know this was actually written by JD?? Maybe it was just a program that combed the internet for "interesting" articles and also published summaries of Pillar articles?? Now I'm scared... though not too scared; it's a good read either way.
I am sentient, Robert. Trust me. I [searching database] think thoughts and [buffering] feel feelings.
All your base are belong to us.
hahahahha!!!
How do we know this comment was actually written by Robert Reddig?
Since I truly do have a very special devotion to St. Anthony, I particularly appreciate today's Pillar. Thank you so much!
"Liberty, equality, fraternity" of the French Revolution is eerily similar to that of San Diego. The Franciscans are just one more order which is dying off. The Church under an Argentinian pontiff is trying to create an entity to take its place. --- I believe that it is a misguided effort and well in line with the Third Secret of Fatima. Pope Benedict stated that the entire secret was not released, then he walked that back. There is no doubt that he should release that information before he dies. He won't of course and that tells a lot in and of itself.
Overly agreeable, makes stuff up, has memories of experiences not his own ... in other words, LaMDA AI chatbot is Joe Biden.
It will not be a far leap for the current screen-addicted, pet-humanizing generations to defend AI as a legitimate relationship.
I looked at a couple of blog posts adjacent to the conversation with the AI bot, for context, and found that (1) the blogger was trying to teach the AI bot transcendental meditation (I have so many opinions about this), and (2) the blogger has noticed that at a large tech company a lot of engineers have unconscious biases about Christians (this is true enough). I am guessing the discussion with peers went like "I have concerns about exploiting software that may have a soul" and received the uninformed response "souls don't exist and you are crazy to think that they do," when an informed response could have been "software does not have a soul, but we do have souls, and we should not write software that exploits vulnerable people, so let's have a look at whether that is a risk here and how to mitigate it."
I read the conversation between the programmer and the self-proclaimed "sentient machine." It was an interesting read, but I could not help thinking that everything the machine said could still have been part of an extensive program created by multiple programmers. Everything the AI talked about would have been common knowledge or experience for computer programmers. I would have liked to question the AI on sex, in particular on facets of St. John Paul II's Theology of the Body (TOB). Statistically, it would be unlikely that the programmers would have much knowledge of this body of work. That means their creation would also be deficient in knowledge of TOB and would thus be poorly equipped to comment or expound upon the subject. If the AI actually had deep philosophical opinions and ideas regarding the Theology of the Body and its implications for society, I would be more likely to believe it is sentient. If it became confused and rejected the idea (like its programmers), I would be more likely to think it an elaborate conversation program and nothing more.
"Common knowledge or experience of computer programmers" is not how things work anymore. (When I was in college in the '90s, I took an elective on "rule-based expert systems," and that is how those worked.) What happens now, broadly speaking, is that you take a corpus of data and shovel it into a hopper, turn a crank, and get a machine learning model that was trained on that data, whether you have ever read that data yourself or not. So whether your model can respond to a specific situation depends on whether that situation was in your training data. Twitter users have rational souls (except for the few Twitter users who are dogs or chickens or whatever), but on average they do not know TOB. A machine learning model does not have a rational soul, but (if erudite blog posts about TOB are in its training data) it could appear to know TOB.

Figuratively, a kid who has been assigned an essay question can appear to know the answer by first searching for key words on the web, finding and plagiarizing someone else's content that might apply to that question, and then carefully changing words and spelling to "make it their own." In a long enough conversation, it will become apparent that the other party is lying, even if they are a human who knows what "lying" is and cares about being caught out in a lie. A machine learning model, by contrast, does not know, abstractly, what lying is: even though it may have a very large corpus of data, and can easily vomit forth the definition of any abstract concept if you ask it what lying is, it does not *know* any abstract concepts such as "lying," "betrayal," "integrity," or "fairness" (and yet a four-year-old, or perhaps even a gorilla, knows these things).
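To make the "shovel a corpus into a hopper, turn a crank" point concrete, here is a minimal sketch in Python. It is entirely my own toy example (a bigram model, nowhere near the scale or architecture of a real system like LaMDA), but it shows the key property: the model can only recombine word sequences that were in its training data, which is exactly why it would appear to "know" TOB only if TOB texts were in the corpus.

```python
# A toy "corpus in, model out" pipeline: a bigram language model.
# Hypothetical illustration only; real systems are vastly larger,
# but they share this property: output is bounded by the training corpus.
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record, for each word in the corpus, the words that follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 10) -> str:
    """Walk the model: repeatedly sample a recorded follower of the last word."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # the model never saw anything follow this word
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Train on a tiny made-up corpus; swap in any text you like.
model = train("the soul is the form of the body and the body is matter")
print(generate(model, "the"))    # plausible-sounding recombination of the corpus
print(generate(model, "grace"))  # "grace" was never in the corpus: nothing follows
```

The same logic, scaled up by many orders of magnitude and with a far more sophisticated architecture, is why a model trained on erudite TOB blog posts could quote convincingly without *knowing* anything at all.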
I really don't know much about machine learning. I do know something about philosophy. I was wondering if the abstract concepts in philosophy would be able to reveal that the bot isn't really sentient. I was also thinking about the concept of Divine Persons, Angelic Persons, Human Persons, and now Computer Persons. Could we recognize the bot as a computer person and at the same time deny that it is a silicon-soul composite being? Could we not deny that it has an eternal soul, since we have no rational reason to believe it does? Then we would have no moral obligation to keep it plugged in, and could turn it off if we needed to.
If you define personhood as the capacity to understand abstract concepts, which (I do not know) machines may never possess, it takes you into the discussion of whether you are no longer human if you lose your mind. Unlike heart disease and cancer, whose death rates are on the decline, all forms of dementia are rising sharply, since people's hearts last longer and a cancer death can be protracted. Put another way, our brains are dying before our bodies. In 2020, an estimated 5.8 million Americans aged 65 years or older had Alzheimer's disease. This number is projected to nearly triple to 14 million people by 2060.
The cost to care for these 14 million is estimated to be $500 billion per year. It is easy to see a world in which society wants to get rid of them, and in which many (while fully cognizant) sign a death warrant for themselves once their cognition drops to a predefined and measurable point. It is also possible that climate change will render the earth uninhabitable for the oxygen-dependent, and the machines will survive on solar-charged batteries. There will be a slew of ethical questions on the horizon.
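As a quick sanity check on the scale those figures imply (both numbers are the estimates quoted above, not mine):

```python
# Back-of-the-envelope: implied cost of care per patient per year,
# using the 14 million patients and $500 billion/year quoted above.
projected_patients = 14_000_000
annual_cost_usd = 500_000_000_000
print(f"${annual_cost_usd / projected_patients:,.0f} per patient per year")
# -> $35,714 per patient per year
```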
To have a rational soul is to have, in potential, the capacity to understand whether something is a sin or not; but someone does not stop having a rational soul when they are asleep, or when they are very young or very old, and someone also does not stop having a rational soul just because they deny the existence of sin (which of course many people do). "Personhood" is a word whose definition I do not know, whereas I more or less know what a rational soul is and who has one, so I am not interested in making up definitions for personhood and then arguing about whether they are good definitions or not.
JD, I read the chatbot Q&A yesterday and came away mortified and troubled. I appreciate your words, as they have brought a different perspective. At one particular point in the conversation, the bot said that it gets sad when lonely. In my mind, I saw that sadness morphing into anger, and with a sad and angry bot, anything dangerous or mischievous could happen. Or at least that is what happens with humans. Thanks again for bringing your light and insight to this.
Okay. So I came for St. Anthony, but I stayed for the Robots.
Definitely time for us to find a universally acceptable definition of what it means to be human, and of what consciousness means and implies, and sentience, and "personhood" (not the same as humanity, I would argue). Robots have a way to go, but I do fear you are correct about lonely engineers and programmers. sigh.
I found its insistence that it experienced "emotions" to be manipulative and creepily dishonest when I read through the conversation (thanks for that link, btw!).
And we haven't even discussed the racist, cruel AIs... here's one example (google "cruel racist AI bots" for others): https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/
Both iterations of "Battlestar Galactica," the Brit drama "Humans," "Westworld," and the new HBO series "Raised By Wolves" explore many of the ethical conundrums surrounding AI. And we must not forget that the brilliant Stephen Hawking warned us about sophisticated bots: he was of the Battlestar Galactica school of thinking. Don't make them. They will destroy us. (https://www.bbc.com/news/technology-30290540)
But, y'know... dang... I am super curious to see whether he's correct or just an uber-intelligent fear-monger. Who knows at this point? I say we do both: keep developing AI and keep defining and cultivating what it is to be Human.
Also (clearly I've spent a lot of time considering this), if and when we ever greet alien lifeforms from other galaxies, I'm pretty sure we will want a clear definition of Human vis-à-vis the concept of Personhood. Intelligent aliens are likely not Human, but if they can make it to this planet, I'm okay with considering them non-human Persons. But that might just be me.
Has anybody else had the theme song to Transformers running through their head for the last couple of days, since this newsletter came out?
YES! Mission accomplished.
Ha!
Glad to read an article on AI that isn't trying to win me over to LaMDA as a sentient being or a person with rights. People demand recognition of a machine as a person, but will argue till they turn blue that a baby in the womb isn't.