AI: Not Always Right, But Never in Doubt

Originally published at Forbes.com

A few weeks ago, I published a column on ChatGPT, explaining why it won’t replace CEOs—at least not yet. Now I’m back with another look at artificial intelligence (AI) systems, which aren’t always right but are seemingly never in doubt.

That’s the thing with AI these days: The possibilities are endless, so it’s easy to get carried away experimenting with the likes of ChatGPT. Wasting two or three hours playing around with different prompts is par for the course.

With so much we don’t know (yet), today’s anxieties are understandable and legitimate. Will AI wipe out jobs? Will it take over human decision-making in disturbing, uncontrollable ways?

Perhaps, or perhaps not. Only time will tell.

The fear that machines will replace humans—not just for repetitive work, but in leadership positions—is not a new one. I recently rewatched Stanley Kubrick’s 1968 classic 2001: A Space Odyssey, which warned against essentially the same threat that AI now poses—more than a half-century ago. While the film concludes with an open-ended reflection on mankind’s place in the universe, its first two-thirds form a parable: a story about the use of tools as the fundamental breakthrough in human evolution.

The ultimate measure of that tool revolution, which was always separate from the evolution of human capability itself, is embodied in HAL, the “AI” computer running every single function on the spaceship. Spoiler alert: HAL eventually becomes self-aware and decides that the human crew is an impediment to the mission’s success. And so he kills all but one of them and locks the survivor outside the ship. This part of the movie ends with the crewman finding a way to “manually” re-enter the ship and disconnect HAL, powering him down. It is a major “phew” moment.

We should retain the same reassurance today. Human beings can hold onto the power to disconnect and override AI systems, as long as we remain fully cognizant of why those systems exist in the first place. Remember: Like search engines, they are being built not necessarily to serve mankind, but to enhance the commercial prospects of their owners: more usage, more eyeballs, and ultimately more data about us to sell.

Human beings also need to understand that we lose the power to “disconnect and override” as we embed AI into the center of more and more critical life functions. The opportunity to breathe a sigh of relief diminishes over time.

I, for one, don’t want to be complacent or put blind faith in AI companies. We can maintain effective fail-safes and overrides, but only with a firm commitment to implementing those checks and balances. Again, we need to understand that reducing humanity’s dependence on AI is not in the interest of those building and selling products like ChatGPT.

In the short run, what needs far more attention is understanding AI’s limits—that it is regularly wrong, contradictory, or even nonsensical. Keeping those limitations in the public consciousness will be integral to insisting on “disconnect and override” requirements.

Here’s a fun, instructive example, based on my recent Forbes article (“Will ChatGPT Replace CEOs? Not So Fast”): I shared a draft of the piece—before publication—with a friend of mine who uses AI, and we fed that question into one of the most popular AI-based search engines. The modern-day HAL quickly gave us an essay that led with a list of reasons why AI would be better than humans at CEO functions, including but not limited to:

  • More data leads to better decisions

  • Higher rates of accuracy

  • The elimination of human bias

  • Working 24/7 (it doesn’t need sleep)

Even at first glance, such assertions are tenuous and laughable:

  • “More data” cuts both ways. Judgment is equally about experience and a vision of the future environment—and its consequences.

  • With data, it still comes down to “garbage in, garbage out.” The inputs determine output accuracy.

  • There is bias built into the design and algorithms of AI systems, in addition to bias in the content they draw on when compiling responses to prompts.

  • Who wants a boss who is “on” 24/7?

After my Forbes column was published, we posed the same question to the same AI search engine. This time, the response was a direct assertion that, while AI may be a useful adjunct to a CEO’s work, it simply cannot replace the judgment, experience, and strategic vision needed to lead people and communicate effectively. (I’d like to think that HAL read my article.)

The point is this: We received two different answers to the exact same question a mere three weeks apart—both delivered with confidence and conviction. But which one is right?

As we continue to cope with our anxieties about AI, let’s not forget to focus on the accuracy component. Our skepticism is justified, and it should lead human beings to use tools like ChatGPT more judiciously—recognizing their limitations. That, and keeping the “on/off” switch nearby.
