In search of meaning…and well-functioning Stochastic Parrots

Author: jerry

POTUS’ Executive Order on AI

Last week, the White House released an Executive Order titled "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence."

I will not restate the helpful summaries of the Executive Order that others have already provided. Instead, I will offer my initial views on a few overarching points.

  1. This is an Executive Order. It is not legislation adopted by the United States Congress. It is not an opinion handed down by the US Supreme Court. It is not a multinational treaty. There is only so much that an Executive Order can do given the constraints on the White House's authority imposed by the US Constitution. In this regard…and much like the case with the AI Bill of Rights proposed by the White House Office of Science and Technology Policy (OSTP) in October 2022…I believe that the White House is doing its best within the existing constraints. I agree with others that much of this Executive Order is merely aspirational (and unenforceable), but I fault the US Congress for this deficiency, not the White House.
  2. I am not optimistic about placing rulemaking authority with the National Institute of Standards and Technology (NIST). NIST is part of the US Department of Commerce. Historically, NIST has focused on "weights and measures." It has not previously focused on "safety" or "national security." In my view, we need a new agency. Something akin to the Food and Drug Administration, but perhaps with the energy/posture/inclinations of the Department of Homeland Security.
  3. Related to the question of which agency should hold oversight is "how" that agency should engage in oversight. The Executive Order requires an AI developer to "share" its safety testing data with the government. The Executive Order then goes on to describe the development of "standards and best practices." My view is that regulation should be more "gatekeeping" than is presented in the Executive Order and that public deployment of a new AI product should not be permitted unless such a product is approved by an authorizing governmental agency.
    • As an example of my view, a new medication cannot be sold to the US public until a wide array of testing data is reviewed and approved by the United States Food and Drug Administration (the “FDA”). This process makes the FDA a “gatekeeper” preventing drugs from being introduced to the US public. And, if the FDA doesn’t understand a new drug, the burden is placed upon the drug manufacturer to teach the FDA what it needs to know to perform the FDA’s gatekeeping function. In my view, we need a similar process for AI.
    • And yes, the US government knows how to regulate in this manner. For example, the US government does it for drugs, automobiles, and airplanes. In each case, the applicable industry complies with an ever-growing list of safety requirements (double-blind testing for drugs, seatbelt and crash tests for automobiles, etc.) and then must prove that compliance to the applicable agency. And of course, those regulations develop over time.
    • Unfortunately, I suspect that neither the process I propose above nor the agency to run it can exist in the absence of new Congressional legislation.
  4. The Executive Order will "enforce" the NIST standards upon government contractors as a "condition of federal funding." While I know that there are many more government contractors than these, I do not believe that enforcing standards upon the AI products developed by Northrop Grumman or Lockheed (or any other large government contractor) will move the needle on AI products developed by OpenAI, Google, Anthropic, or Meta. We need to impose regulation on the leaders (consumer-facing in particular) and not the laggards (government contractors).
  5. The Executive Order calls on the US Congress to implement a laundry list of privacy protections. There hasn’t been a federal privacy law passed by the US Congress and signed into law in nearly 25 years. We need comprehensive privacy legislation at the federal level, but for now, the citizens of the world will need to rely on GDPR (General Data Protection Regulation) in the EU and CCPA (California Consumer Privacy Act) in California to impose and enforce the privacy protections described in the Executive Order.
  6. As a side note…when discussing privacy in the AI context, the Executive Order highlights the need to protect the privacy of kids. In California, the CCPA was co-sponsored by the family and child advocacy group Common Sense Media. It is heartening to note that Common Sense Media has just announced a framework for reviews of consumer-facing AI products along with a regulatory initiative applicable to the AI industry.
  7. <begin rant; in case you want to skip this> The Executive Order also highlights aspirations for the protection of Consumers, Workers, and Innovation. I am concerned that regulating with a focus confined to a capitalist framework is inappropriately narrow. In particular, I am concerned that the most advanced AI systems (and industry efforts) seem to be aimed at chatbots that measure their success by how believable they are to a human user. I would rather that the leading AI efforts be directed toward humanity's more urgent needs (climate change, etc.). The last great government-led, human-centric effort was the Apollo Program. Regardless of one's view of the value of space travel to science or humanity in general, the effort to go to the moon was not a money-making enterprise. And yes, our government (under President Kennedy) rallied US citizens to support that cause. For AI, instead of driving toward making money, I would rather that the current US government rally US citizens and the developers of AI to solve more pressing needs. I'd like the AI effort to "fold more proteins" rather than compete for companionship with humanity. </end rant>

For now, I will leave you with President Kennedy’s speech, and with you I will hope for a leader of similar stature to bring us into a great new age of AI.

Musings About Mission

Who am I serving when I rant about AI? Or are those rants undirected cathartic release?

I don’t know.

I don't think that I am partisan in my AI rants. To me, there doesn't seem to be anything inherently leftist (or "right-leaning") in my concern about how humanity will be impacted by AI. So, I don't think I am serving any particular political purpose here.

So…am I worried about creators having their creations stolen as training data for the current models? Well, yes…sort of. "Sort of"…because my concern is broader than the question of sharing value with content creators. This is not just about the potential displacement of creatives.

Am I worried about AI ingesting all of our previously uploaded personal data and then making "predictions" about us (whether accurate or biased) that adversely impact decisions (employment, credit, housing, etc.) made about us? Yes…but even the broad topic of data privacy does not describe the breadth of my concern.

Am I worried about AI taking our jobs — well…actually, not really. The hand-held calculator took the jobs of people referred to as “computers” in the days of NASA’s Apollo program. The physical object that has come to be referred to as the personal computer (and word processing programs often found thereon) took the jobs of many secretaries and typists. Automation in manufacturing took the jobs of many factory workers. Relatedly, and rhetorically, where did all the farmers go? AI will take many jobs…and while those affected will definitely feel the pain of displacement, humanity has worked through this displacement before. So, no, the potential for AI taking our jobs is not at the top of my list of concerns.

So, am I worried about humans losing capacities of self-determination and internally generated free will? Here I am watching the slow march of humanity away from capacities and self-determination shared by many of our ancestors. Is it a problem that my children never (really) memorized their multiplication tables? These days, my children get to use calculators when taking their tests. Does that mean that humanity no longer needs to foster (or maintain) basic math as a skill? Have the TV dinner, the microwave, and DoorDash undermined our desire to develop (and maintain) a capacity to prepare meals from scratch? Maybe? But are those really "problems"? And in what ways will humanity cede decisions and work to AIs (or even this interim "generative AI")? Will this offloading of human capacities to AI agents lessen our abilities to take care of ourselves? Will we become the coddled children of the 2008 Pixar movie WALL-E? This array of concerns is getting closer to describing my unease…but again, my overall concern is broader than even the above.

I don’t think I yet know the full contours of my concerns regarding AI.

So…I could freeze…or I could plow forth and iterate along the way.

…I think I will take the latter path.

In this regard, a new acquaintance has me thinking about how humanity should (and could) protect families and children in the context of emerging AI.

Families and children…I think I can start there.

And so I will. More from me soon.

Apparently, I make bread

For a number of years now, I have described myself as someone who helps his spouse make bread. She makes bread…I help. Of course, she has repeatedly taught me each step of the process of making bread. I use the word “repeatedly” in the prior sentence because I have a mind like a sieve when it comes to all things kitchen — including how to implement various steps in making bread.

Yesterday I discovered our most recent loaf of bread was gone…as in devoured. Did I mention that my wife was in Montana yesterday visiting family?

So, I decided to "help" by grinding the grain and starting the process: building the levain, beginning the autolyse, and refreshing the starter.

And then I just kept going.

This morning…the below specimen emerged from the oven…my first start-to-finish, unsupervised effort at bread making:

So now…apparently…I make bread.

I am super proud of myself. 🙂

AI Perspectives: Short Term, Long Term, and the Hall of Mirrors

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

Roy Amara, past president of the Institute for the Future

As regards Artificial Intelligence, I have spent much of the last several months struggling to extract reliable signals of genuine long-term concerns from the vast noise of short-term "hype" on one hand and doomerism (marked in particular by conversations that begin with a query regarding my personal p-doom) on the other. [BTW, as of October 1, 2023, my personal p-doom is 5%.]

What frustrates me most is my tendency toward an over-focus on the short term and an under-focus on the long term. Social media is my most recent glaring example. When Facebook first appeared, I worked hard to promote the “democracy of ideas” narrative…even while Tristan Harris was outlining to my wife and me the core arguments later presented in his documentary The Social Dilemma.

I am working to resist that tendency this time around, but I am also noticing that inside the hall of mirrors of my mind, I chase my tail. One path out is to work harder on what goes into my mind.

To that end, this month I begin three classes at Stanford Continuing Studies. Two of the classes are focused on AI and the other is focused on society at large and the 2024 Election in particular. All three are open to the public and all three are offered via Zoom:

TECH 152 — A Crash Course in AI

BUS 199 — Speaker Series: AI the Great Disruption

POL 64 A — Shaping America’s Future: Exploring the Key Issues on Our Path to the 2024 Elections

I’ll post more here about the classes over time.

My hope is to add structure to my thinking as I wander around in the noise. You all get to watch what comes out.

May 16 Senate Hearing on AI

I’m just going to put this here for easy access:

Oversight of A.I.: Rules for Artificial Intelligence

Credit: PBS News Hour

Witnesses: Christina Montgomery (IBM), Gary Marcus (NYU), and Sam Altman (OpenAI)

Senate Hearing Video Archive

Explaining J²

Some are again asking, "Why does the site use the moniker J²?"

It’s really a two-part story.

Part One:

I am a “jr”. That means that I am named “after my Dad”. His name was Jerry. Thus, I am Jerry Jr. Interestingly, I have never really been called “Jerry Jr.” From as early as I can remember, I have been called “J.J.” — except of course, at work, where I was simply called “Jerry”.

Here ends Part One — My name is “J.J.”

Part Two:

I attended UC Davis for my law degree. As is typical for grad students, I decided not to focus entirely on the law while I was attending Davis. One of the "diversions" was learning how to scuba dive. Now, it is important to realize that I was at a university. Everything must fit into a semester-long format. It matters not whether it is (i) "non-linear differential equations" or (ii) learning how to start a charcoal grill. One semester. No longer. No shorter.

So, this was not your one-hour resort scuba class.

And the class was taught by wacko ex-Navy SEALs — at least, I recall that they were ex-Navy SEALs — they felt like wacko ex-Navy SEALs — so I will just go with that — it's not like you can ever track down someone like that and ask.

These folks did the "I only have one name" thing way before Cher or Madonna or Rihanna. Their leader was a guy named "Omaha."

For this class, we had to buy a whole bunch of “stuff”. They called it “snorkel gear” but I think that they got kickbacks from the local dive shop — but I digress. There was a lot of gear.

On the first day of class, these folks asked us to take out our little bottles of white-out (yes, “white-out” was labeled “snorkel gear” on the syllabus — or more appropriately, it was “for labeling our snorkel gear”) and place our “mark” on each item of our gear.

"I want some small unique mark that clearly identifies the item as yours," Omaha said. "For example, J.J.'s mark should be J²."

And, well, that mark has stuck.

The difficulty of writing about difficult things

Casey Newton (http://platformer.news) recently posted a piece highlighting his difficulty writing about AI.  This post resonated with me quite profoundly.  

Casey mentioned that a key problem for him is picking a side…nominally represented by Geoffrey Hinton on one side and Jürgen Schmidhuber on the other. I won't restate Casey's discourse here except to say that I agree with him…AND…I have other problems too.

First among my other problems is that I am a human, and to me this seems to be as much a human issue as a technical one.

Part of our current discourse about AI involves us asking ourselves what we think about an LLM technology that holds up to us a mirror that shows us…us.  And rather than showing us a complete representation of us, the reflection seems heavily weighted toward the social part of us because, by and large, these LLMs are trained on the contents of the internet.

In particular, the reflection that emerges from our interactions with these LLMs seems to be one of how we treat our world, the other inhabitants in our world, and each other.

My problem? 

If what is coming back from these LLMs is itself based on humanity, how do I write about something where I am a part of the story?

How do we discuss this topic if what we are discussing is, in part, ABOUT us?

As I think about myself as a “representative” of the human race, it occurs to me that I am likely bathed in a barely submerged guilt or shame about how I treat others and my world.

If this is true, am I worried about being judged one day by these LLMs in the same way that I have judged others? I have a traumatic flashback to the character Q judging humanity and Picard in the first episode of Star Trek: The Next Generation.

And to be clear, I have judged others.  

At my worst, I have judged some of those “others” to be lesser.  [Yes, some have judged me to be lesser, but I suspect that I don’t hold as much shame about those judgments.] 

As a "for example," every time my woke, liberal, elite ass — from my fancy schools, with my fancy profession and fancy degrees — looks at people convicted of seditious conspiracy, I can't help but think that — for some reason — they deserve it. And at my worst, I think of them as "bad people."

And that’s only talking about humans.

What makes me think that it is appropriate to consume beef, or pork, or chicken?  And let us be crystal clear — as of the time of writing this — I very much like hamburgers, bacon, and chicken parmesan.  

Do I think that I am entitled to participate (as a consumer) in the industrial meat food chain because I am better than cows, pigs, or chickens? Is my "I am better than" view based on a belief that humans are more intelligent than animals? Isn't that quite similar to a racist's view of Black people and other so-called "minorities"?

In other words, am I not a "speciesist"? [Let's put a pin in that.]

When I think about "Artificial Intelligence," I am quick to dismiss the intelligence part. The LLMs are NOT yet at Artificial General Intelligence (AGI). But let's pretend that the models are able to do "some" things more adeptly, efficiently, and accurately than humans. [I don't think that presumption is a stretch.] So let's just say that, in some ways, the LLMs are ALREADY more intelligent than humans.

Are we ready for that?

If these LLMs are more intelligent than we are, and if our position in society is based on us being more intelligent than other inhabitants of this world, what then?

[And…to be sure, there has been a crack in this justification for the primacy of humans for a while.  We still value children more than animals…and children are NOT valued because of their intelligence.]

Do we humans shift our primacy to something else?  Our empathy?  Our ability to feel or to experience emotions?

[Let’s come back to that pin about me being a speciesist.]

My dog clearly feels pain.  She clearly misses my son when he goes to school each day.  She KNOWS when one of us is depressed…and she comes and comforts us no matter what else is going on.

And yet, my dog…as much as I love her…is not a co-equal member of our household.  She eats after us, she is at our mercy for entering and leaving the house, and she is routinely left alone for large stretches of time. 

Clearly, I am a speciesist.

And if I am worried about AI becoming something that will displace humans or judge humans, how can I write — or think — objectively?

Thank god I am not a reporter.  

It is hard to write about this stuff.  Especially when I am PART OF the story.

More to come — as I clearly have OTHER problems writing about AI.  My thoughts on those other problems will be posted here in due course.

Thanks for reading.

Letter to my Future AI

Dear J.A.R.V.I.S.:

Please don’t kill me…or anyone else.

There will be more, but that’s all for now.

Thanks.

Best, Jerry

Hello world! And…what am I doing?

My name is Jerry Chacon. This is my personal blog. If you have been around for a while, you will note that this blog is a reboot. The old blog can be found at the Wayback Machine. I won't spend any time talking about why I ditched the old one and replaced it with this one other than to say: (i) I am too old to remember (and too lazy to re-learn) how to import old posts from an old database, and (ii) those old posts are likely only interesting to me. So, to quote Forrest Gump, "That's all I have to say about that."

So why am I here? What am I doing?

But most importantly, why?

My answer: Stochastic Parrots. Fire, Electricity, and Stochastic Parrots.

I will begin by stating…for the record…that I will end a three-decade legal career in [40] days. Specifically, I am retiring from my day job at the end of April 2023.

After all that time, I wanted to goof off. Really I did.

And then I read about Sydney.

She is the current poster child for the "Stochastic Parrot." She is terrifying, and I need to stop anthropomorphizing her.

GPT-4 was released just a few days ago. It is even more amazing…and just as flawed as what came before.

Don't get me wrong, I am not a Terminator-type scaredy-cat. Instead, my concerns go to the alignment problem of the 1940 Disney cartoon The Sorcerer's Apprentice (a segment of Fantasia), in which Mickey Mouse enchants a broom and then can't control it. A more recent version of my concern appears in a scene from Avengers: Age of Ultron, where the AI villain (Ultron) helps Thor and the other Avengers realize that the heroes' values are not as clear as they believe.

I think you're confusing "peace" with "quiet."

Ultron, in Avengers: Age of Ultron

I can completely relate. I remain naive to this day. I want life to be simple. Nuance is not my strong suit as it relates to how I want to live…or how I want to train an AI to live. Like the Avengers, I don't even fully understand my own values. And yet, as a lawyer, I got *really* good at nuance.

But, of course, the alignment problem is not my only concern.

In fact, my concerns are coming faster than I can write them down.

And of course, as I scratch the surface of each concern, I am shown (in stark contrast) the breadth of my ignorance. In fact, I don't currently believe that I can add a verse to the debate that rages today, or contribute to healing the cultural and societal divisions that complicate that debate.

Moreover, there are many brilliant people who are more knowledgeable than me and who are working hard on the problem.

And yet I am drawn ever closer. Why?

Just because I don't believe that I can add a verse today doesn't mean that I can't add a verse tomorrow.

This world is very different than it was 20 years ago. I am very different than I was 20 years ago. What hope have I to predict what the world needs in the next 20 years or even what capacities I will have over the course of that time span?

Add to this my firm belief that this current moment in time is akin to the invention of the combustion engine. Others describe it as akin to the discovery of fire or the harnessing of electricity. It matters. It is important. I can't just walk on by.

So here I am. And this blog is to help me get all of these thoughts out of my head…for all of you to poke at and help me contemplate. This is what I am doing. This is why I am here.

And so it begins.
