Casey mentioned that a key problem for him is picking a side…nominally represented by Geoffrey Hinton on the one side and Jürgen Schmidhuber on the other. I won’t restate Casey’s discourse here except to say that I agree with him…AND…I have other problems too.
First among my other problems is that I am a human and to me, this seems to be a human issue as much as it is a technical issue.
Part of our current discourse about AI involves us asking ourselves what we think about an LLM technology that holds up to us a mirror that shows us…us. And rather than showing us a complete representation of us, the reflection seems heavily weighted toward the social part of us because, by and large, these LLMs are trained on the contents of the internet.
In particular, the reflection that emerges from our interactions with these LLMs seems to be one of how we treat our world, the other inhabitants in our world, and each other.
If what is coming back from these LLMs is itself based on humanity, how do I write about something where I am a part of the story?
How do we discuss this topic if what we are discussing is, in part, ABOUT us?
As I think about myself as a “representative” of the human race, it occurs to me that I am likely bathed in a barely submerged guilt or shame about how I treat others and my world.
If this is true, am I worried about being judged one day by these LLMs in the same way that I have judged others? I have a traumatic flashback to the character Q judging humanity and Picard in the first episode of Star Trek: The Next Generation.
And to be clear, I have judged others.
At my worst, I have judged some of those “others” to be lesser. [Yes, some have judged me to be lesser, but I suspect that I don’t hold as much shame about those judgments.]
As a “for example”: every time my woke, liberal, elite ass (from my fancy schools, with my fancy profession and fancy degrees) looks at people convicted of seditious conspiracy, I can’t help but think that, for some reason, they deserve it. And at my worst, I think of them as “bad people.”
And that’s only talking about humans.
What makes me think that it is appropriate to consume beef, or pork, or chicken? And let us be crystal clear — as of the time of writing this — I very much like hamburgers, bacon, and chicken parmesan.
Do I think that I am entitled to participate (as a consumer) in the industrial meat food chain because I am better than cows, pigs, or chickens? Is my “I am better than” view based on a belief that humans are more intelligent than animals? Isn’t that quite similar to a racist’s view of blacks and other so-called “minorities”?
In other words, aren’t I a “speciesist”? [Let’s put a pin in that.]
When I think about “Artificial Intelligence” I am quick to dismiss the intelligence part. The LLMs are NOT yet at Artificial General Intelligence (AGI). But let’s pretend that the models are able to do “some” things more adeptly, efficiently, and accurately than humans. [I don’t think that presumption is a stretch.] So let’s just say that, in some ways, the LLM models are ALREADY more intelligent than humans.
Are we ready for that?
If these LLMs are more intelligent than we are, and if our position in society is based on us being more intelligent than other inhabitants of this world, what then?
[And…to be sure, there has been a crack in this justification for the primacy of humans for a while. We still value children more than animals…and children are NOT valued because of their intelligence.]
Do we humans shift our primacy to something else? Our empathy? Our ability to feel or to experience emotions?
[Let’s come back to that pin about me being a speciesist.]
My dog clearly feels pain. She clearly misses my son when he goes to school each day. She KNOWS when one of us is depressed…and she comes and comforts us no matter what else is going on.
And yet, my dog…as much as I love her…is not a co-equal member of our household. She eats after us, she is at our mercy for entering and leaving the house, and she is routinely left alone for large stretches of time.
Clearly, I am a speciesist.
And if I am worried about AI becoming something that will displace humans or judge humans, how can I write — or think — objectively?
Thank god I am not a reporter.
It is hard to write about this stuff. Especially when I am PART OF the story.
More to come — as I clearly have OTHER problems writing about AI. My thoughts on those other problems will be posted here in due course.
Thanks for reading.