Artificial Intelligence isn’t something new that dropped out of the sky in 2023, although it may feel that way! I first encountered AI in the last millennium, as a graduate student under Dr. Chou at the University of Toledo, where he showed us how neural networks could be used to analyze pavement deterioration. Back in 1999, neural networks were not on the tip of everyone’s tongue or part of dinner-table conversation, though I thought they deserved a place in everyday conversation. Alas!
Fast forward to 2023: Artificial Intelligence (AI) is at the forefront of every conversation. A confluence of factors has led to this heightened focus on AI, and rightly so. We are at a critical juncture in history with the acceleration and adoption of AI-related technologies. We should consider this moment akin to an encounter with an unknown species from another galaxy, one with the ability to significantly alter our way of life.
This encounter with AI, however, is less glamorous than the various alien encounters shown in Hollywood productions. Importantly, AI is not as agentic as the characters portrayed in movies and sci-fi novels. Yet it has the potential to produce impacts that are far more perilous, or far more beneficial, than its Hollywood alien counterparts.
Unlike the Hollywood characters, AI is rather unsexy. It is a probabilistic model. It can, nonetheless, alter our way of life by predicting a particular outcome. Equally, it can change decisions by eliding options. And unlike the uncontrollable otherworldly characters of Hollywood, we can work with AI: we can make it work toward a common good and alter the fundamental way of human life for the better.
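To make “probabilistic model” concrete, here is a minimal sketch of a model that predicts the probability of an outcome and then elides options by thresholding. Everything in it, from the pavement features (a nod to Dr. Chou’s class) to the 0.5 cutoff, is hypothetical and chosen purely for illustration.

```python
# A minimal sketch: AI as a probabilistic model. All data, features, and
# the decision threshold below are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: pavement age (years) and annual traffic load (kESALs).
X = np.array([[2, 50], [5, 120], [10, 200], [15, 260], [20, 300], [25, 340]])
# Hypothetical labels: 1 = needs rehabilitation within 5 years, 0 = does not.
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# The model does not "know" the answer; it assigns a probability.
p_rehab = model.predict_proba([[12, 220]])[0, 1]
print(f"P(rehabilitation needed) = {p_rehab:.2f}")

# A decision rule then elides options: anything below the threshold
# silently drops out of consideration.
THRESHOLD = 0.5
decision = "schedule rehabilitation" if p_rehab >= THRESHOLD else "defer"
print(decision)
```

Note that the model never outputs “the” answer; it outputs a probability, and a human-chosen threshold turns that probability into a decision. That is precisely where options get elided.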
I am not advocating stalling AI research or development; on the contrary, we should increase scientific and engineering engagement across multiple fields while deploying AI cautiously. To that end, the US Federal government has taken a few steps from 2019 to today (October 30, 2023).
There are many definitions of AI floating around; however, the National AI Initiative Act of 2020 provides a good common starting point:
A *machine-based* system that can, for a given set of *human-defined objectives*, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to:
perceive real and virtual environments;
abstract such perceptions into models through analysis in an automated manner; and
use model inference to formulate options for information or action.
(National Artificial Intelligence Initiative Act of 2020, 2020)
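Read operationally, the Act’s definition describes a three-step loop: perceive an environment, abstract those perceptions into a model in an automated manner, and use model inference to formulate options. Below is a minimal sketch of that loop; the simulated sensor data and every function name are my hypothetical illustration, not anything prescribed by the Act.

```python
# A hypothetical sketch of the Act's three steps: perceive, abstract, infer.
import numpy as np

def perceive() -> np.ndarray:
    """Step 1: perceive a (here, simulated) real or virtual environment."""
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=50)           # hypothetical sensor readings
    y = 2.0 * x + rng.normal(0, 1, size=50)   # hypothetical measured response
    return np.column_stack([x, y])

def abstract(observations: np.ndarray) -> np.poly1d:
    """Step 2: abstract perceptions into a model, in an automated manner."""
    x, y = observations[:, 0], observations[:, 1]
    return np.poly1d(np.polyfit(x, y, deg=1))  # fit a line, automatically

def formulate_options(model: np.poly1d, candidates: list[float]) -> dict:
    """Step 3: use model inference to formulate options for action."""
    return {c: float(model(c)) for c in candidates}

observations = perceive()
model = abstract(observations)
print(formulate_options(model, [1.0, 5.0, 9.0]))
```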
I added emphasis to the definition above to draw the attention of Subject Matter Experts (SMEs) across different fields. Notice that, according to the Act, AI is a machine-based system working toward human-defined objectives. It is not an abstract, otherworldly system, and it is of great consequence to SMEs in Engineering, Medicine, and Law.
Education, training, and, importantly, licensure are the mechanisms through which our society places trust in SMEs. We cannot replace that trust with a black-box approach to decisions.
To address these issues of trust, the US President, in December 2020, signed Executive Order (EO) 13960 to promote the trustworthy use of AI. The first principle of the EO states that when designing, developing, and using AI, agencies shall adhere to “our Nation’s values.” Additional principles state that SMEs shall mediate the results and, crucially, that use shall be Responsible and Traceable. Note that these principles do not apply only to AI development; they cover the usage of AI as well. SMEs cannot use AI and claim it is a black box.
Today, October 30, 2023, the US President signed another EO. The primary goal of this EO, according to the White House, is to improve “AI safety and security.” One critical takeaway is its attention to the transparency of AI-generated content: the EO directs the Department of Commerce to develop comprehensive guidelines for labeling such content. The purpose of these guidelines is to enable AI companies to create adequate labeling and watermarking tools that differentiate AI-generated content from other content. By doing so, the White House hopes to promote transparency and accountability in the use of AI-generated content.
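The EO leaves the actual labeling mechanism to the forthcoming Commerce guidelines, so any concrete example is speculative. As a thought experiment only, here is one minimal way a label could travel with content: a provenance record bound to the exact bytes, plus a keyed signature so the label itself can be checked. Every field name and the signing scheme are hypothetical; no actual standard is implied.

```python
# A hypothetical content label. Nothing here follows an actual standard;
# the EO leaves the real guidelines to the Department of Commerce.
import hashlib, hmac, json

SECRET_KEY = b"hypothetical-signing-key"  # in practice, a properly managed key

def label_content(content: bytes, generator: str) -> dict:
    """Attach a provenance record that binds the label to the exact bytes."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,   # e.g., the model that produced the content
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return record

def verify_label(content: bytes, record: dict) -> bool:
    """Check both the signature and that the content was not swapped out."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        signature, hmac.new(SECRET_KEY, payload, "sha256").hexdigest())
    ok_hash = claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    return ok_sig and ok_hash

label = label_content(b"a generated paragraph...", "hypothetical-model-v1")
print(verify_label(b"a generated paragraph...", label))  # True
print(verify_label(b"tampered text", label))             # False
```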
In addition to actions by the government, here are a few actions we can take in this historic moment of encounter with AI.
- Beyond our roles as SMEs, we should engage with AI-related discussions and legislation as citizens. We would be more engaged citizens if we perceived this moment as an existential crisis.
- Demand increased transparency from software vendors and data curators. Professor Julia Stoyanovich of NYU’s Center for Responsible AI uses the nutritional label as an analogy: just as nutritional labels convey information about ingredients and calories, data consumers, like food consumers, need to know a dataset’s components and outputs (see the sketch after this list).
- SMEs across fields should stop taking a black-box approach to AI solutions.
- Several disciplines, Engineering and Medicine to name a couple, have developed robust quality-control mechanisms over many decades. We should resist the urge to throw out these processes while chasing the shiny new toy. In other words, do not throw the baby out with the bathwater.
- Many specialized fields, including Engineering, Medicine, and Law, have continuing-education requirements. Representative groups, such as ACEC and ASCE, should work with state boards to incorporate continuing education that meets current needs. Even though AI might appear to be a technology topic outside the core specialty, it should be part of continuing education.
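To make the nutritional-label analogy from the list above concrete, here is a hypothetical sketch of what a “label” for a dataset might record. The fields are my own illustration, not Professor Stoyanovich’s actual specification.

```python
# A hypothetical "nutrition label" for a dataset. The fields are illustrative;
# they are not Professor Stoyanovich's actual specification.
from dataclasses import dataclass, field

@dataclass
class DatasetLabel:
    name: str
    source: str            # where the "ingredients" came from
    collected: str         # when the data was gathered
    intended_use: str      # what the data was curated for
    known_gaps: list[str] = field(default_factory=list)  # who or what is missing
    caveats: list[str] = field(default_factory=list)     # known biases, etc.

label = DatasetLabel(
    name="pavement-condition-2019",        # a hypothetical dataset
    source="state DOT inspection records",
    collected="2015-2019",
    intended_use="training deterioration models",
    known_gaps=["rural county roads under-sampled"],
    caveats=["inspector ratings are subjective"],
)
print(label)
```

As with food, the point is not that these particular fields are exhaustive but that they are standardized and visible before the data is consumed.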
In short, we are at an exciting and critical juncture, and we have the agency to make AI what we want it to be!