There ought to be a law
The U.S. Government has announced it will initiate artificial intelligence (AI) oversight, and Congress as well as the public have voiced a lack of confidence that the Executive Branch will be able to address this complex issue.1 Having been in that White House position of deciding how to proceed with a new technology — to regulate or not to regulate — I know the decision is always made in a context of uncertainty, and many times it is best to follow the normal progression of an emerging technology to see where it goes, if anywhere, that needs regulating. Many promising new technologies die in the “valley of death”2 when they fail to find widespread acceptance or are never affordable. AI is freely available, and in crossing that valley of death it has become a reality whose potential harms we should be thinking about. Even so, the cry that “there ought to be a law”3 should probably wait.
In my study of emerging technologies and regulation, I have observed a pattern that normally follows announcements of new technologies that are accessible to the public. In the beginning stage, you have what I call “parlor tricks” with the technology. In the case of nanotechnology, scientists spelled out the acronym “IBM” in nanosized letters that required a microscope to see.4 Not useful, but an interesting “parlor trick.” The next stage is to take the knowledge from the parlor tricks and find something useful to apply it to. In the case of the “IBM” parlor trick, that knowledge would later be used to manipulate nanosized particles into nano-motors and nano-bots with mobile legs.5
Applying the new technology so that it has “utility” is the key to growing intellectual property. The USPTO must be careful in granting patents: the scope must be broad enough to protect the invention, but not so broad that it shuts out follow-on innovation and improvements. This is always important, but it is critical for a new and rapidly growing emerging technology.
The next stage is wide adoption of these useful inventions. A good example of this is the mobile phone, which was available only in cars to the wealthy in the 1970s and ’80s; by the late 1990s you could find mobile phones (cellphones) in areas of Africa that did not yet have powerlines.
Artificial intelligence is not so much like the smartphone as it is like the internet. Where the smartphone has become an omnipresent artifact of living in our world, artificial intelligence, like the internet, pervades almost every aspect of our lives. Looking back on our quarter century of adopting internet technology, some of it has been life-changing and a great quality-of-life improvement, while some of it is dark and downright dangerous, magnifying and making more efficient such evils as child trafficking and child pornography.
When injuries or crimes occur because of a new technology, this can be a signal that the harm will fall not on just one litigant but on many if the government does not intervene with some protective rules. First, if existing regulations might apply to the situation, there is no need to develop a new rule; but if the technology is so different from any previous technology, it may need a new rule. An example of this was federal legislation to protect employees against employers using DNA testing to discriminate against those who might have a likelihood of sickness. Before the federal government established the law, all fifty states had passed their own version over the previous ten years while waiting for the federal government to act. Genetic testing was different from any previous technology, and so it exposed society to new harms not covered by existing legislation.
It is also likely that states will take the lead on regulating AI in areas of traditional state jurisdiction such as criminal law, insurance law, and health law. It is at this point that the federal government actually has an idea of how to regulate the new technology, based on known legal harms to litigants injured by the technology and on state legislation that can serve as a model for federal legislation.
How might work change?
AI affects our jobs and sense of identity,6 law enforcement in ways we haven’t even realized yet,7 and our shopping choices.
In the field of law, for example, tools that make time more efficient are critical because in the world of the law firm, time is money, and billable hours are the currency. But does the use of AI decrease billable hours, or will there be a new standard for billing artificial intelligence hours — and then where do you draw the line? Forming the query for the language model is the key to getting good results (the old “garbage in, garbage out” adage from computer programming still operates here). Tools that can give you a first draft of a brief are now available, and that can cut hours from associate time — but will that be reflected in billable hours to the client? Will clients simply turn to AI with their legal questions and contract-drafting tasks? Mastery of basic and complex queries may be the new legal skill hired by Big Law.
In the field of medicine, recent announcements reveal that artificial intelligence has been tasked with finding antibiotics to address the ever-increasing drug-resistant bacteria that can kill while we look on without a single drug left in our arsenal. Out of 240 candidates drawn from the 6,680 molecules reviewed by AI, eleven have been identified as “promising.”8 Artificial intelligence is also identifying cancer markers in massive databases of genetic data, which will speed the ability to find targets for new drugs that stop cancers — cancers that will come to have genetic names rather than organ names. We have been stalled for four or five decades using mostly the same chemotherapy treatments, with only one or two genetic-targeting drugs to show for it, but this will change the speed of discovery. We will have to change the speed of the legal review process for drugs and devices to make this truly effective, and that might be improved with artificial intelligence, too. Investment in AI assistants for physicians is rising as a way to minimize chores like note-taking and documentation. This could allow physicians to reclaim perhaps as much as a third of their day to diagnose and treat patients.
The field of education, from the earliest users of AI (K-12) to higher education, faces a rapidly changing environment. The pedagogy needed to teach critical thinking skills is being replaced by well-crafted queries to AI language models like ChatGPT and jasper.ai, both readily available for homework assignments and essays. Only the honor code stands between completing your homework in five minutes versus five hours. It is a tough sell. So educators will have to think of new ways to develop critical thinking and learning methods, as well as new ways to test them.
What do we have to fear?
Elon Musk (Tesla, Twitter) and Geoffrey Hinton, the “Godfather of AI,” have warned the world that artificial intelligence could be the destruction of us all. Hinton said that AI learning simple reasoning could lead to the extinction of humans. Worse, he said that humans could be just a passing phase in the evolution of intelligence.9 That hurt. The tech audience was reportedly taken aback by that statement.
Let us assume we perform a risk analysis and conclude that these warnings describe a low-probability, high-consequence event worth taking some precautions against. It would have been helpful if Hinton had attached some risk estimate to AI becoming superior to humans, followed by a will or “desire” to make us extinct, but he did not. As an inventor, he may have a bias toward believing his invention is more capable than it is. So even if we consider it a low probability that AI will reach that point, the consequences are so high (extinction) that we should take some precautions now.
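The logic of a low-probability, high-consequence event can be sketched with the classic risk formula, expected loss = probability × consequence. The numbers below are purely illustrative assumptions for demonstration, not estimates from this article or from Hinton:

```python
def expected_loss(probability: float, consequence: float) -> float:
    """Classic risk formula: expected loss = probability x consequence."""
    return probability * consequence

# Illustrative assumption: a tiny (one-in-a-million) chance of catastrophe...
p_catastrophe = 1e-6
# ...paired with an enormously valued consequence (an arbitrary stand-in
# for an outcome as severe as extinction).
consequence = 1e12

# Even a very small probability yields a large expected loss, which is
# the reasoning behind taking some precautions now.
print(expected_loss(p_catastrophe, consequence))  # 1000000.0
```

This is why the paragraph above does not need the probability to be high: so long as the consequence term is extreme, the product remains large enough to justify precaution.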
Can the Constitution protect us?
Science and technology know no boundaries or jurisdictions, unlike constitutions and laws, which apply only within carefully defined geographical boundaries. So whereas the U.S. may be disciplined and ethical in its research with artificial intelligence, other nations like China and Russia hold no such restraints in their political views and instead consider it a competition for survival. From our perspective as the world’s superpower (still, but only hanging on by a thread), we are in a better position to act ethically with some constraints, whereas nations that feel threatened may well feel compelled to risk it all, and thereby risk all of us.
Our Constitution will protect us against our own government to the extent the rule of law exists, but it will not protect us from the rest of the world. Our rule-of-law score has dropped in recent years and is badly in need of attention.10 That leaves Elon Musk’s advice to put it all on hold while we think about it (see the previous paragraph — that will not happen in Russia or China). With that in mind, we can still use the Constitution to ensure we protect ourselves from the government’s use of AI.
The due process clause of the Fifth Amendment, which applies to the federal government as well as to the states (through the Fourteenth Amendment),11 is probably the most important right to invoke consistently when AI tools are used in law enforcement, civil enforcement, and regulatory enforcement. It can ensure that any process adopting AI includes human review, particularly when life, liberty, property, or economic interests are at stake in the decision-making outcome. Due process should also include a right to appeal any decision made by AI when a life, liberty, or property interest is at stake.
Informed vigilance is called for from everyone: not just the government, which is charged with protecting the public from broad harms, but also the private sector that will be driving the development of AI, including the DIY sector and the criminal enterprises that pervade our economy. This will take non-partisan collaboration among a broad range of experts who rarely speak to each other but should.
We are in that valley of uncertainty where taking regulatory action too early can impede the growth of a technology that could improve our quality of life in ways we could not have imagined two decades ago; but not without the ever-present reality that the same technology could cause our extinction.
1. https://www.foxnews.com/official-polls/fox-news-poll-more-see-bad-good-ai
2. https://www.sciencedirect.com/science/article/pii/S2666188822000119
3. https://idioms.thefreedictionary.com/There+ought+to+be+a+law!
4. https://ethw.org/Don_Eigler#
5. Victoria Sutton, “Xenobots — A New Lifeform,” poster, We Are Robots, Univ. of Ottawa (2020), at https://techlaw.uottawa.ca/werobot/posters
8. https://www.news-medical.net/news/20230525/Scientists-use-AI-to-identify-new-antibiotic-that-could-fight-drug-resistant-infections.aspx#:~:text=Using%20an%20artificial%20intelligence%20algorithm,for%20many%20drug%2Dresistant%20infections.
9. https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai
10. https://worldjusticeproject.org/rule-of-law-index/downloads/WJPIndex2022.pdf (The United States ranks 26th in the world.)
11. https://constitutioncenter.org/the-constitution/articles/amendment-xiv/clauses/701