We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
— Roy Amara, past president, Institute for the Future
As regards Artificial Intelligence, I have spent much of the past several months struggling to extract reliable signals of genuine long-term concern from the vast noise of short-term “hype” on one hand and doomerism on the other (the latter marked in particular by conversations that begin with a query about my personal p-doom). [BTW, as of October 1, 2023, my personal p-doom is 5%.]
What frustrates me most is my tendency toward an over-focus on the short term and an under-focus on the long term. Social media is my most recent glaring example. When Facebook first appeared, I worked hard to promote the “democracy of ideas” narrative…even while Tristan Harris was outlining to my wife and me the core arguments later presented in his documentary The Social Dilemma.
I am working to resist that tendency this time around, but I am also noticing that inside the hall of mirrors of my mind, I chase my tail. One path out is to work harder on what goes into my mind.
To that end, this month I begin three classes at Stanford Continuing Studies. Two of the classes are focused on AI, and the third is focused on society at large and the 2024 Election in particular. All three are open to the public, and all three are offered via Zoom.
Some are again asking “why does the site use the moniker J²?”
It’s really a two-part story.
I am a “jr”. That means that I am named “after my Dad”. His name was Jerry. Thus, I am Jerry Jr. Interestingly, I have never really been called “Jerry Jr.” From as early as I can remember, I have been called “J.J.” — except of course, at work, where I was simply called “Jerry”.
Here ends Part One — My name is “J.J.”
I attended UC Davis for my law degree. As is typical for grad students, I decided not to focus entirely on the law while I was at Davis. One of the “diversions” was learning how to scuba dive. Now, it is important to realize that I was at a university. Everything must fit into a semester-long format. It matters not whether it is (i) “non-linear differential equations” or (ii) learning how to start a charcoal grill. One semester. No longer. No shorter.
So, this was not your one-hour resort scuba class.
And the class was taught by wacko ex-Navy SEALs — at least, I recall that they were ex-Navy SEALs — they felt like wacko ex-Navy SEALs — so I will just go with that. It’s not like you can ever track down someone like that and ask.
These folks did the “I only have one name” thing way before Cher or Madonna or Rihanna. Their leader was a guy named “Omaha”.
For this class, we had to buy a whole bunch of “stuff”. They called it “snorkel gear” but I think that they got kickbacks from the local dive shop — but I digress. There was a lot of gear.
On the first day of class, these folks asked us to take out our little bottles of white-out (yes, “white-out” was labeled “snorkel gear” on the syllabus — or more appropriately, it was “for labeling our snorkel gear”) and place our “mark” on each item of our gear.
“I want some small unique mark that clearly identifies the item as yours,” Omaha says. “For example, J.J.’s mark should be J²”.
Casey mentioned that a key problem for him is picking a side…nominally represented by Geoffrey Hinton on the one side and Jürgen Schmidhuber on the other. I won’t restate Casey’s discourse here except to say that I agree with him…AND…I have other problems too.
First among my other problems is that I am a human and to me, this seems to be a human issue as much as it is a technical issue.
Part of our current discourse about AI involves us asking ourselves what we think about an LLM technology that holds up to us a mirror that shows us…us. And rather than showing us a complete representation of us, the reflection seems heavily weighted toward the social part of us because, by and large, these LLMs are trained on the contents of the internet.
In particular, the reflection that emerges from our interactions with these LLMs seems to be one of how we treat our world, the other inhabitants in our world, and each other.
If what is coming back from these LLMs is itself based on humanity, how do I write about something where I am a part of the story?
How do we discuss this topic if what we are discussing is, in part, ABOUT us?
As I think about myself as a “representative” of the human race, it occurs to me that I am likely bathed in a barely submerged guilt or shame about how I treat others and my world.
If this is true, am I worried about being judged one day by these LLMs in the same way that I have judged others? I have a traumatic flashback to the character Q judging humanity and Picard in the first episode of Star Trek: The Next Generation.
And to be clear, I have judged others.
At my worst, I have judged some of those “others” to be lesser. [Yes, some have judged me to be lesser, but I suspect that I don’t hold as much shame about those judgments.]
As a “for example”, every time my woke, liberal, elite ass (from my fancy schools, with my fancy profession and fancy degrees) looks at people convicted of seditious conspiracy, I can’t help but think that — for some reason — they deserve it. And at my worst, I think of them as “bad people.”
And that’s only talking about humans.
What makes me think that it is appropriate to consume beef, or pork, or chicken? And let us be crystal clear — as of the time of writing this — I very much like hamburgers, bacon, and chicken parmesan.
Do I think that I am entitled to participate (as a consumer) in the industrial meat food chain because I am better than cows, pigs, or chickens? Is my “I am better than” view based on a belief that humans are more intelligent than animals? Isn’t that quite similar to a racist’s view of blacks and other so-called “minorities”?
In other words, am I not a “speciesist”? [Let’s put a pin in that.]
When I think about “Artificial Intelligence,” I am quick to dismiss the intelligence part. The LLMs are NOT yet at Artificial General Intelligence (AGI). But let’s assume that the models are able to do “some” things more adeptly, efficiently, and accurately than humans. [I don’t think that assumption is a stretch.] So let’s just say that, in some ways, the LLMs are ALREADY more intelligent than humans.
Are we ready for that?
If these LLMs are more intelligent than we are, and if our position in society is based on us being more intelligent than other inhabitants of this world, what then?
[And…to be sure, there has been a crack in this justification for the primacy of humans for a while. We still value children more than animals…and children are NOT valued because of their intelligence.]
Do we humans shift our primacy to something else? Our empathy? Our ability to feel or to experience emotions?
[Let’s come back to that pin about me being a speciesist.]
My dog clearly feels pain. She clearly misses my son when he goes to school each day. She KNOWS when one of us is depressed…and she comes and comforts us no matter what else is going on.
And yet, my dog…as much as I love her…is not a co-equal member of our household. She eats after us, she is at our mercy for entering and leaving the house, and she is routinely left alone for large stretches of time.
Clearly, I am a speciesist.
And if I am worried about AI becoming something that will displace humans or judge humans, how can I write — or think — objectively?
Thank god I am not a reporter.
It is hard to write about this stuff. Especially when I am PART OF the story.
More to come — as I clearly have OTHER problems writing about AI. My thoughts on those other problems will be posted here in due course.
My name is Jerry Chacon. This is my personal blog. If you have been around for a while, you will note that this blog is a reboot. The old blog can be found at the Wayback Machine. I won’t spend any time talking about why I ditched the old one and replaced it with this one other than to say: (i) I am too old to remember (and too lazy to re-learn) how to import old posts from an old database, and (ii) those old posts are likely only interesting to me. So, to quote Forrest Gump, “That’s all I have to say about that.”
So why am I here? What am I doing?
But most importantly, why?
My answer: Stochastic Parrots. Fire, Electricity, and Stochastic Parrots.
I will begin by stating…for the record…that I will soon end a three-decade legal career. Specifically, I am retiring from my day job at the end of April 2023.
After all that time, I wanted to goof off. Really I did.
I can completely relate. I remain naive to this day. I want life to be simple. Nuance is not my strong suit as it relates to how I want to live…or how I want to train an AI to live. Like the Avengers, I don’t even fully understand my own values. And as a lawyer, I got *really* good at nuance.
But, of course, the alignment problem is not my only concern.
Just because I don’t believe that I can add a verse today, doesn’t mean that I can’t add a verse tomorrow.
This world is very different than it was 20 years ago. I am very different than I was 20 years ago. What hope have I to predict what the world needs in the next 20 years or even what capacities I will have over the course of that time span?
Add to this my firm belief that this current moment in time is akin to the invention of the combustion engine. Others describe it as akin to the invention of fire or electricity. It matters. It is important. I can’t just walk on by.
So here I am. And this blog is to help me get all of these thoughts out of my head…for all of you to poke at and help me contemplate. This is what I am doing. This is why I am here.