Last week, the White House released an Executive Order titled “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
I will not restate the helpful summaries of the Executive Order that others have already provided. Instead, I will offer my initial views on a few overarching points.
- This is an Executive Order. It is not legislation adopted by the United States Congress. It is not an opinion handed down by the US Supreme Court. It is not a multi-national treaty. There is only so much that an Executive Order can do given the constraints on the White House’s authority imposed by the US Constitution. In this regard…and much like the case with the Blueprint for an AI Bill of Rights proposed by the White House Office of Science and Technology Policy (OSTP) in October 2022…I believe that the White House is doing its best from within the existing constraints. I agree with others that much of this Executive Order is merely aspirational (and unenforceable), but I fault the US Congress for this deficiency, not the White House.
- I am not optimistic about placing rulemaking authority with the National Institute of Standards and Technology (NIST). NIST is part of the US Department of Commerce. Historically, NIST has focused on “weights and measures”; it has not previously focused on “safety” or “national security.” In my view, we need a new agency. Something akin to the Food and Drug Administration, but perhaps with the energy/posture/inclinations of Homeland Security.
- Related to the question of which agency should hold oversight authority is how that agency should exercise it. The Executive Order requires an AI developer to “share” its safety testing data with the government. It then goes on to describe the development of “standards and best practices.” My view is that regulation should involve more “gatekeeping” than the Executive Order contemplates: public deployment of a new AI product should not be permitted unless that product is approved by an authorizing governmental agency.
- As an example of my view, a new medication cannot be sold to the US public until a wide array of testing data is reviewed and approved by the United States Food and Drug Administration (the “FDA”). This process makes the FDA a “gatekeeper,” preventing unapproved drugs from being introduced to the US public. And, if the FDA doesn’t understand a new drug, the burden is placed upon the drug manufacturer to teach the FDA what it needs to know to perform its gatekeeping function. In my view, we need a similar process for AI.
- And yes, the US government knows how to regulate in this manner. For example, the US government does it for drugs, automobiles, and airplanes. In each case, the applicable industry complies with an ever-growing list of safety requirements (double-blind testing for drugs, seatbelt and crash tests for automobiles, etc.) and then must prove that compliance to the applicable agency. And of course, those regulations develop over time.
- Unfortunately, I suspect that neither the process I propose above nor the agency to administer it can exist in the absence of new Congressional legislation.
- The Executive Order will “enforce” the NIST standards upon government contractors as a “condition of federal funding.” While I know that there exists a large number of government contractors, I do not believe that enforcing standards upon the AI products developed by Northrop Grumman or Lockheed (or any other large governmental contractor) will move the needle on AI products developed by OpenAI, Google, Anthropic, or Meta. We need to impose regulation on the leaders (consumer-facing ones in particular) and not the laggards (government contractors).
- The Executive Order calls on the US Congress to implement a laundry list of privacy protections. There hasn’t been a federal privacy law passed by the US Congress and signed into law in nearly 25 years. We need comprehensive privacy legislation at the federal level, but for now, the citizens of the world will need to rely on GDPR (General Data Protection Regulation) in the EU and CCPA (California Consumer Privacy Act) in California to impose and enforce the privacy protections described in the Executive Order.
- As a side note…when discussing privacy in the AI context, the Executive Order highlights the need to protect the privacy of kids. In California, the CCPA was co-sponsored by the family and child advocacy group Common Sense Media. It is heartening to note that Common Sense Media has just announced a framework for reviews of consumer-facing AI products along with a regulatory initiative applicable to the AI industry.
- <begin rant; in case you want to skip this> The Executive Order also highlights aspirations for the protection of Consumers, Workers, and Innovation. I am concerned that regulating with a focus that exists entirely inside of a capitalist framework is inappropriately narrow. In particular, I am concerned that the most advanced AI systems (and industry efforts) seem to be aimed at chatbots that measure their success by how believable they are to a human user. I would rather that the leading AI efforts be directed toward humanity’s more urgent needs (climate change, etc.). The last great government-led, human-centric effort was the Apollo Program. Regardless of one’s view of the value of space travel to science or humanity in general, the effort to go to the moon was not a money-making enterprise. And yes, our government (under President Kennedy) rallied US citizens to support that cause. For AI, instead of driving toward making money, I would rather that the current US government rally US citizens and the developers of AI to solve more pressing needs. I’d like the AI effort to “fold more proteins” rather than compete for companionship with humanity. </end rant>
For now, I will leave you with President Kennedy’s speech, and, like you, I will hope for a leader of similar stature to bring us into a great new age of AI.