Misinformation … or Manipulation? Why Mobility Airmen Must Approach AI Advancements with Caution

By MRS. LAUREN FOSNOT, STAFF WRITER

Do you have a minute? Great. Step into this time machine and press this button. You are now en route to mid-18th century Britain. As you are chauffeured through the most quintessential English villages, please keep your hands and feet in the vehicle—although these folks could certainly use a hand!

Upon arriving, you will most likely notice that much of the villagers’ work is completed by hand. Tasks such as farming, making clothing, and forging metal can be grueling. Thankfully, technological advances would soon drastically improve the villagers’ way of life.

Seen enough? What you witnessed was a period just shy of the Industrial Revolution. The transition from creating goods by hand to producing them with machines, which began around 1760 in Britain, transformed industries, economies, and entire societies—so much so that historians mark it as one of the most significant eras of human advancement.

While we wish our time machine could visit the future (sorry, beta model!), we can still predict that we are in the midst (or perhaps just the beginning!) of an era of similar, if not greater, significance. Starting with the creation of the internet in the late 20th century, we are now dipping our toes into the vast sea of possibilities of artificial intelligence (AI).

By the way, you can step out of the time machine now—we are back in 2024. AI tools have become increasingly sophisticated, and the potential for misinformation and manipulation becomes more of a pressing concern each day.

The AI tools quickly circulating and gaining popularity are primarily large language models (LLMs)—including ChatGPT, an OpenAI product—that can generate human-sounding text to convey ideas and concepts. “Big tech” companies, such as Google, Microsoft, and Meta, seem to be in a race to develop and release the biggest, best, and most novel AI products.

Among the most remarkable AI innovations is the ability to generate images by entering just a few keywords into a text-to-image model. That image of Darth Vader roller skating with Elvis? Sorry, that might not be real. All joking aside, many realistic AI-generated images floating around online go undetected as artificial (often referred to as “deepfakes”).

Ethan Mollick, Associate Professor of Management and Co-Director of the Generative AI Lab at the Wharton School of the University of Pennsylvania, cautioned his followers about the proliferation of AI-generated images after encountering one at the top of a Google search result.

“It isn’t just AI-generated text that is starting to bleed over into search results,” Mollick explained. “The main image, if you do a Google search for Hawaiian singer Israel Kamakawiwo‘ole (whose version of Somewhere Over the Rainbow you have probably [heard]), is a Midjourney creation right from Reddit.”

While Google has released a statement about a new tool that may help users determine whether images are “real or not,” it is important to be wary of anything and everything online—AI-generated content has the potential to fly under the radar of even the most discerning eye.

In this modern age, we have more information than we know what to do with and seemingly less time to verify its accuracy. This combination makes it difficult to navigate online information. Generation Z has been called lazy by older generations, but often, its members are simply pulled in more directions than previous generations ever were. This vulnerability increases the chance of false content slipping through the cracks.

AI tools can produce not only falsehoods but also bias. For example, racial bias has been documented in AI algorithms used in healthcare, leading to disparities in the allocation of resources. While this example pertains to the medical realm, similar biases could manifest in military applications, affecting recruitment, promotions, and more.

Unintentionally biased information is frightening enough, but what if it is created intentionally? Bad actors could cause mayhem in myriad ways—they already do. One example is the 2019 AI-generated deepfake voice scam, in which cybercriminals mimicked a CEO’s voice to deceive an employee into transferring funds from the company’s account into a fraudulent one.

Further, Microsoft recently reported that it had detected and disrupted instances of U.S. adversaries using or attempting to exploit generative AI. NYU professor and former AT&T Chief Security Officer Edward Amoroso believes this could snowball, and that malicious use of generative AI “will eventually become one of the most powerful weapons in every nation-state military’s offense.”

The misuse of AI is certainly a topic U.S. Air Force leadership must address; however, each Airman can also guard themselves and others against it.

Below are a few “dos and don’ts” for navigating the cyber world safely.

DO:

  • Think about who would benefit from spreading confusing information.
  • Fact check. The Washington Post’s Fact Checker, Snopes, and PolitiFact are a few sites used for fact-checking information circulating online.
  • Look for telltale signs that images and videos were generated by AI, including jumbled-up text, hands with too many fingers, eyes that do not sync with movement, blurry or distorted details, an overly glossy appearance, and more.
  • Use an AI-generated content detection tool.
  • Encourage the establishment of a code of ethics.

DO NOT:

  • Upload sensitive information; anything entered into an LLM could become available to others, including adversaries.
  • Assume information generated is factually correct.
  • Share or publish information before verifying it as true.
  • Avoid exploring AI tools. Learning how they work is the best way to safeguard yourself and sensitive information.

Yes, the future setting on the time machine would be a nice button to push right now. Unfortunately, no one knows how this revolution will play out. With AI’s ability to widen its reach exponentially, there could soon be unfathomable advancements (and consequences!). Just like the villagers shortly before 1760, who had no idea of the significance their new tools would hold for human history, we cannot foresee where today’s tools will take us.

AI tools inspire both excitement and fear. In light of these challenges, Mobility Airmen must approach AI advancements with a critical eye and a commitment to ethical conduct. Embrace them, but embrace them cautiously.