In search of meaning...and well functioning Stochastic Parrots

Month: May 2023

May 16 Senate Hearing on AI

I’m just going to put this here for easy access:

Oversight of A.I.: Rules for Artificial Intelligence

Credit: PBS NewsHour

Witnesses: Christina Montgomery (IBM), Gary Marcus (NYU), and Sam Altman (OpenAI)

Senate Hearing Video Archive

Explaining J²

Some are again asking “why does the site use the moniker J²?”

It’s really a two-part story.

Part One:

I am a “Jr.” That means that I am named “after my Dad”. His name was Jerry. Thus, I am Jerry Jr. Interestingly, I have never really been called “Jerry Jr.” From as early as I can remember, I have been called “J.J.” — except of course, at work, where I was simply called “Jerry”.

Here ends Part One — My name is “J.J.”

Part Two:

I attended UC Davis for my law degree. As is typical for grad students, I decided to not focus entirely on the law while I was attending Davis. One of the “diversions” was learning how to Scuba dive. Now, it is important to realize that I was at a university. Everything must fit into a semester-long format. It matters not whether it is (i) “non-linear differential equations” or (ii) learning how to start a charcoal grill. One semester. No longer. No shorter.

So, this was not your one-hour resort scuba class.

And the class was taught by wacko ex-Navy SEALs. At least, I recall that they were ex-Navy SEALs; they certainly felt like wacko ex-Navy SEALs, so I will just go with that. It’s not like you can ever track down someone like that and ask.

These folks did the “I only have one name” thing way before Cher or Madonna or Rihanna. Their leader was a guy named “Omaha”.

For this class, we had to buy a whole bunch of “stuff”. They called it “snorkel gear,” but I think they got kickbacks from the local dive shop. But I digress. There was a lot of gear.

On the first day of class, these folks asked us to take out our little bottles of white-out (yes, “white-out” was labeled “snorkel gear” on the syllabus — or more appropriately, it was “for labeling our snorkel gear”) and place our “mark” on each item of our gear.

“I want some small unique mark that clearly identifies the item as yours,” Omaha says. “For example, J.J.’s mark should be J²”.

And, well, that mark has stuck.

The difficulty of writing about difficult things

Casey Newton (http://platformer.news) recently posted a piece highlighting his difficulty writing about AI.  This post resonated with me quite profoundly.  

Casey mentioned that a key problem for him is picking a side…nominally represented by Geoffrey Hinton on the one side and Jürgen Schmidhuber on the other.  I won’t restate Casey’s discourse here except to say that I agree with him…AND…I have other problems too.

First among my other problems is that I am a human, and to me this seems to be a human issue as much as it is a technical issue.

Part of our current discourse about AI involves us asking ourselves what we think about an LLM technology that holds up to us a mirror that shows us…us.  And rather than showing us a complete representation of us, the reflection seems heavily weighted toward the social part of us because, by and large, these LLMs are trained on the contents of the internet.

In particular, the reflection that emerges from our interactions with these LLMs seems to be one of how we treat our world, the other inhabitants in our world, and each other.

My problem? 

If what is coming back from these LLMs is itself based on humanity, how do I write about something where I am a part of the story?

How do we discuss this topic if what we are discussing is, in part, ABOUT us?

As I think about myself as a “representative” of the human race, it occurs to me that I am likely bathed in a barely submerged guilt or shame about how I treat others and my world.

If this is true, am I worried about being judged one day by these LLMs in the same way that I have judged others? I have a traumatic flashback to the character Q judging humanity and Picard in the first episode of Star Trek: The Next Generation.

And to be clear, I have judged others.  

At my worst, I have judged some of those “others” to be lesser.  [Yes, some have judged me to be lesser, but I suspect that I don’t hold as much shame about those judgments.] 

As a “for example”: every time my woke, liberal, elite self (from my fancy schools, with my fancy profession and fancy degrees) looks at people convicted of seditious conspiracy, I can’t help but think that, for some reason, they deserve it. And at my worst, I think of them as “bad people.”

And that’s only talking about humans.

What makes me think that it is appropriate to consume beef, or pork, or chicken?  And let us be crystal clear — as of the time of writing this — I very much like hamburgers, bacon, and chicken parmesan.  

Do I think that I am entitled to participate (as a consumer) in the industrial meat food chain because I am better than cows, pigs, or chickens?  Is my “I am better than” view based on a belief that humans are more intelligent than animals?  Isn’t that quite similar to a racist’s view of blacks and other so-called “minorities”?

In other words, am I not a “speciesist”?  [Let’s put a pin in that.]

When I think about “Artificial Intelligence” I am quick to dismiss the intelligence part.  The LLMs are NOT yet at Artificial General Intelligence (AGI).  But let’s pretend that the models are able to do “some” things more adeptly, efficiently, and accurately than humans.  [I don’t think that presumption is a stretch.]  So let’s just say that, in some ways, the LLMs are ALREADY more intelligent than humans.

Are we ready for that?

If these LLMs are more intelligent than we are, and if our position in society is based on us being more intelligent than other inhabitants of this world, what then?

[And…to be sure, there has been a crack in this justification for the primacy of humans for a while.  We still value children more than animals…and children are NOT valued because of their intelligence.]

Do we humans shift our primacy to something else?  Our empathy?  Our ability to feel or to experience emotions?

[Let’s come back to that pin about me being a speciesist.]

My dog clearly feels pain.  She clearly misses my son when he goes to school each day.  She KNOWS when one of us is depressed…and she comes and comforts us no matter what else is going on.

And yet, my dog…as much as I love her…is not a co-equal member of our household.  She eats after us, she is at our mercy for entering and leaving the house, and she is routinely left alone for large stretches of time. 

Clearly, I am a speciesist.

And if I am worried about AI becoming something that will displace humans or judge humans, how can I write — or think — objectively?

Thank god I am not a reporter.  

It is hard to write about this stuff.  Especially when I am PART OF the story.

More to come — as I clearly have OTHER problems writing about AI.  My thoughts on those other problems will be posted here in due course.

Thanks for reading.
