Jeemes Akers

Sleepwalking Toward The Precipice


 

“If we are all going to be destroyed by an atomic bomb [or A.G.I.], let that bomb when it comes find us doing sensible and human things—praying, working, teaching, reading, listening to music, bathing the children, playing tennis [or pickleball], chatting to our friends over a pint and a game of darts—not huddled together like frightened sheep.”

 

                                                                               C. S. Lewis

                                                                               (1948 essay)

 

“Even if A.G.I. does turn out to be dangerous, many in Silicon Valley argue, wouldn’t it be better for it to be controlled by an American company, or by the American government, rather than by the government of China or Russia, or by a rogue individual with no accountability?”

 

                                                                               Ben Goldhaber[1]

                                                                                                                    

It is rare that a single magazine article I read online prompts a missive. By my own count, this is my eighth short article dealing with the topic of artificial intelligence (A.I.), generative A.I., and, more significantly, artificial general intelligence (A.G.I.). Indeed, I have written more about A.I. than any other single topic, with the exception of those missives touching on my faith in the Lord Jesus Christ. In this vein, my attention was especially drawn (like a technology-curious moth to a flame) to a recent article by Andrew Marantz appearing in The New Yorker under the title “Among the A.I. Doomsayers,” concerning the ongoing intellectual debate—primarily on the West Coast—between those who think machine intelligence will transform humanity for the better and those who fear A.I. may destroy us.[2]

Why, you may ask, do I remain so interested in this topic? 

After all: the U.S. presidential election is only 236 days away (and isn’t that, really, the most important thing to human civilization in today’s world); the Gaza war continues (with Israel and the U.S. differing on the upcoming push against Rafah), prompting criticism from celebrities at the Oscars (and aren’t they the most important people to listen to), street demonstrations, and a rising tide of antisemitism; Hezbollah in Lebanon launched a new volley of missiles at Israel, prompting an immediate response in the Bekaa Valley; the Houthis remain a threat to shipping in the Red Sea; Ukraine says more weapons are desperately needed to forestall a new Russian advance (with the war now entering its third year); Russian and Chinese naval forces will join Iranian naval units in a major maritime exercise in the Middle East; the F.B.I. Director testified before Congress last week that buried among the estimated 8-10 million illegal immigrants who have flooded across our southern border during the Biden administration are likely ISIS recruits and an untold number of Chinese young men of military age; China and Russia announce they will collaborate to build an unmanned nuclear station at the lunar South Pole; Elon Musk’s SpaceX successfully launches its heavy Starship on the third attempt; Congress is moving to ban TikTok over its ties to the Chinese Communist Party; domestic and international fallout continues in the wake of President Biden’s uncommonly feisty State of the Union address (did Biden actually apologize for using the term “illegal” to describe the suspect in Laken Riley’s murder while failing to offer condolences to her family or mention her by name?); and the announcement that soon we will be paying more in interest on the national debt than the entire outlay for Defense Department expenditures.

But we can always print more money, right?

And in the midst of all this, Jeemes, you want to write another piece about A.I.?

Why?

As an aside here—to those of you who may be interested—the second novel in my futuristic techno-Christian trilogy, Prawnocuos Resplendent, will be available to order any day now (please go to my website, jeemesakers.com, for more information). In brief, the new book continues the story of how a group of Christian youths (the biblical remnant, or, as I call the group in my novels, “The Society”), living some 30-35 years in the future, deal with an increasingly techno-paganist world as they, and their newfound friends, frantically race around the globe in a bid to halt the next pandemic. Many of you know that this trilogy has been a labor of love: I’ve been working on it, and updating the technology involved, for the last three decades. The novels feature a new post-war reality, A.I., drones, new medical diagnostic tools, robotic humanoids, new virtual reality devices, body-powered communications and identification systems, futuristic art forms, and global technological megacorporations with powerful enforcement arms to protect proprietary claims, among other things.

Please forgive me for including a shameless plug for my new book.

Back to the missive. During those many years of writing, I tried to predict (incorrectly in some areas) many of the events we are now seeing on an almost daily basis. I also assumed that many of these changes would take much longer to gestate and materialize.

At any rate, when I write and think about the future—and what it will mean for the spiritual destinies of my children and grandchildren—the most difficult futuristic piece for me to fit into the puzzle has been how to gauge the progress to be made by artificial intelligence. Specifically, will A.I. advance exponentially toward the so-called “singularity,” that point where computer-based intelligences become indistinguishable from human-based intelligence? (By the way, I read an article this morning reporting that Ray Kurzweil, the futurist and former Google engineer who first brought the notion of the future “singularity” into popular techno-parlance, has moved up his prediction for its occurrence from 2045 to 2029.[3]) And that, my friends, is not too far away.

Or, on the other hand, will A.I. march forward in sporadic fits and starts of breakthroughs? If you had asked me that question three years ago, I would have said all the available evidence supported that trajectory. But that was before the ChatGPT revolution and today’s race to develop ever more powerful A.I. training models.[4]

Even harder to believe (at least for me) is that ChatGPT is already yesterday’s news in an exponentially changing technology landscape. Today, for example, technology watcher Will Knight reports on a start-up called Cognition AI that has released an A.I. program called “Devin,” the latest and most polished of an emerging class of A.I. “agents” which, instead of providing answers or advice about a problem presented by a human, can take action to solve it.[5]

More to the point, whether you think A.I. is an irresistible force charging relentlessly forward or a technology that will advance in fits and spurts, the real question is what the world will look like for Christian believers (and my grandchildren) in 30-35 years. A case in point: my grandson Joshua will graduate from high school this year. He plans to go to the same college (now a university) I attended and to study psychology. I am going to recommend to him, if he wants to stay in that field, that he specialize in an area that combines human psychology with working alongside automated systems. Right now, that seems like sound advice.

The perceptual tension between two A.I.-related visions of the future dominates today’s debate, and it is the essence of Marantz’s fascinating article. On one side are the techno-optimists (they call themselves “effective accelerationists,” or e/accs), who essentially believe that A.I. will usher in a utopian future for all humanity. That is, as long as the worriers get out of the way. On social media, they troll doomsayers as “decels” or, even worse, “regulation-loving bureaucrats.”[6]

Standing at the opposite extreme are the doomsayers, or the P(doom) camp, whose “timelines” are predictions of how soon A.I. will pass particular benchmarks, such as writing a Top Forty pop song or a best-selling novel, making a Nobel-worthy scientific breakthrough, or achieving artificial general intelligence (that point at which a machine can do any cognitive task a person can do). P(doom) is the probability that, if A.I. does become smarter than people, it will, either on purpose or by accident, annihilate everybody on the planet.
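For the probability-minded, a loose way to write that definition as a conditional probability (my own shorthand, not Marantz’s):

$$P(\text{doom}) = P(\text{humanity is annihilated} \mid \text{A.I. becomes smarter than people})$$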

From our present vantage point, it looks like A.I.-enhanced technologies are destined to become the skeletal framework upon which the other advances—in biogenetics, communication technologies, the metaverse, quantum applications, etc.—will hang. (As I have written previously, all of this assumes the absence of a totally unpredictable, but game-changing, “Black Swan” event over the next decade or so. And we are long overdue for one.)

Perhaps this is a long-winded way of explaining why A.I.-related topics have preoccupied my thinking for decades.

What makes this quest especially unusual is that my three professions—college history professor, intelligence analyst, and lawyer—rely on very different spheres of thinking, and different perceptual approaches, to arrive at a conclusion on the topic.

Perhaps I should have put this apology right up front; it is the future, and what it holds for believers, that turns my intellectual wheels and triggers my creative juices. Sorry, it is the way I am wired …

But A.I. is so much more than a topic for futurists and technologists to discuss at Bay Area “scenes.”

As I was strolling through a Borders bookstore last weekend with my son-in-law and two grandkids, I noticed the recently published book 2054 by Elliot Ackerman and James Stavridis, concerning the role of A.I. in future conflicts.[7] The same two men wrote one of my favorite books on the topic, 2034 (in which China neutralizes the U.S. “eyes in the sky” advantage with a sneak attack and wins the opening bouts of a future war in the Pacific).[8] At any rate, the authors co-wrote an essay in The Wall Street Journal this week asserting, among other things, that drones appear to be a manageable threat on today’s battlefield, but that in the future, when hundreds of them are harnessed to A.I. technology, they will become a tool of conquest. As they note in the piece: “the drone will change the face of warfare when employed in swarms directed by AI. This moment hasn’t yet arrived, but it is rushing to meet us. If we’re not prepared, these new technologies deployed at scale could shift the global balance of military power.”[9]

How true. As a former military analyst in the intelligence community, I remember being invited to a military “game” scenario set 50 years in the future in the Taiwan Strait. It was an incredible experience. I learned first-hand how attached naval leaders were to their high-ticket platforms, such as aircraft carriers. (In the recent naval deployment following the Oct. 7 Hamas massacre and the Israeli response in Gaza, the U.S. sent two carrier battle groups, one headed by the most expensive warship in history, the $13 billion USS Gerald R. Ford, on its maiden deployment. For that same cost, a nation could purchase over 650,000 Iranian-made Shahed drones.[10])
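The arithmetic behind that comparison is worth pausing over. It implies a unit cost of roughly $20,000 per Shahed drone (a commonly cited ballpark figure, though the division below is my own back-of-the-envelope sketch, not a number taken from the essay):

$$\frac{\$13{,}000{,}000{,}000}{\$20{,}000\ \text{per drone}} = 650{,}000\ \text{drones}$$

One carrier, in other words, costs as much as an enormous drone fleet.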

The essay also talks about how A.I. pattern-recognition capabilities are changing the “OODA loop”—observe, orient, decide, act—advanced in the 1950s by USAF fighter pilot John Boyd. In a conflict, the theory holds, the side that can move through its OODA loop fastest possesses a decisive battlefield advantage. Transformational warfare in the future will not be a race for the best platforms but rather for the best A.I. directing those platforms—in their words: “warfare is headed toward a brain-on-brain conflict” … “a war of OODA loops, swarm versus swarm.” At present, the U.S. insists that a human decision-maker must always remain in the loop before any A.I.-based system may conduct a lethal strike. Will our adversaries show similar restraint?[11]
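To see why loop speed is decisive, consider a simple illustration with numbers of my own choosing (not Boyd’s or the authors’): if one side completes its OODA cycle every 20 seconds and the other every 30, then over a single minute

$$\frac{60\ \text{s}}{20\ \text{s/cycle}} = 3\ \text{decisions} \qquad \text{versus} \qquad \frac{60\ \text{s}}{30\ \text{s/cycle}} = 2\ \text{decisions},$$

and the faster side is always acting on fresher information, an advantage that compounds with every cycle. The authors’ point is that A.I. promises to shrink those cycle times far below anything a human crew can match.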

I doubt it.

By the way, did I tell you that my grandson Joshua last week received a card to register for the Selective Service (draft)?

“Sigh.”

A.I. changes will affect my grandson’s future decisions, the political process (our first true A.I. presidential election replete with “deepfakes”), and the very nature of war.

Stay tuned.

 


[1] The quote is cited by Andrew Marantz, “Among the A.I. Doomsayers,” The New Yorker, Mar. 11, 2024. Goldhaber runs a highly respected A.I.-safety group.

[2] Marantz, “Among the A.I. Doomsayers.”

[3] Anthony Cuthbertson, “Google’s AI prophet fast tracks singularity prediction,” Independent, Mar. 14, 2024.

[4] See my missive on the topic: Jeemes Akers, “ChatGPT: Revisiting the A.I. Issue,” Feb. 10, 2023.

[5] Will Knight, “The Age of AI Agents Is Fast Approaching,” Fast Forward (WIRED), Mar. 14, 2024.

[6] Marantz, “Among the A.I. Doomsayers.”

[7] Elliot Ackerman and Admiral James Stavridis, 2054, Penguin Press, Mar. 2024.

[8] I discuss the book 2034 in my missive entitled “AI and the Future of War,” May 2021.

[9] Elliot Ackerman and James Stavridis, “Drone Swarms Are About to Change the Balance of Military Power,” (Essay), The Wall Street Journal, Mar. 14, 2024.

[10] Ibid.

[11] Ibid.
