As Academia Competes Over AI, Who Becomes The “Referee”?

Originally published at Forbes.com

Business schools and information technology programs are going head-to-head. These days, it’s one of the most important competitions within academia, with the winner potentially shaping the world of work for decades to come.

The question is this: Who will train the millions of artificial intelligence professionals we need to power one of America’s next big industries? And how will they contribute to the already rapid pace of AI development (see: Nvidia’s $2 trillion valuation)?

But there is another, equally important question: Who is missing a seat at the table?

Academia’s current tug-of-war is not a new problem. With each introduction of a disruptive technology, business schools and IT programs (among others, such as engineering departments) have developed new offerings and specialties, hoping to prepare the next generation of talent to harness new inventions. All of them want to groom the next wave of leadership, especially for an industry as lucrative as AI is becoming.

This competition dates back to when computers replaced bookkeeping departments in maintaining company records. Back then, computers took over the handling of customer transactions from traditional channels like bank teller windows, trading floors, and retail points of sale. And then along came the internet, with the ability to advertise, sell, deliver, and even employ talent outside of physical business locations.

Business schools and others quickly developed new programs to teach students how to turn online disruption into business advantage. But there was always a third party to these lurches of progress: University accounting departments trained the certified public accountants (CPAs) who confirmed the accuracy of financial statements, and later trained the auditors of key business processes as well.

And it was a workable balance. Technologists were driven to try the next new thing and build capabilities with unknown potential uses, even to them. The MBA graduates who became managers had to figure out “how” to use these new technologies and where they could be used to make money. The accountants and auditors had no stake in either of those two games. They were charged with ensuring transparency so that the investing and lending public could rely on the published financial statements. Through accounting and auditing, the general public gained the necessary confidence that those statements reflected a company’s actual financial condition.

As business processes became increasingly automated, complex, and opaque, accountants and auditors were also tasked with ensuring that those processes were resistant to fraud: giving the customer what was promised and safeguarding their assets.

However, with AI, it’s not clear that we have figured out who needs to have the third seat at the table. Who is the accountant or auditor in the case of AI?

This is a critical question to answer, since AI has perhaps more potential for both utility and deception than any technology before it. The technology companies that are building and will be selling or leasing AI systems are motivated by maximum usage and revenue. But the customers ultimately using their systems have no way of knowing the accuracy or reliability of what is being delivered. The “truth” is unclear because we have not yet established the expertise to be “truth-testers.”

Who is going to fill that role? Someone must, but there are no perfect answers. Federal and state governments are being pressured to regulate AI. But they will likely either be too heavy-handed with their regulation, at the expense of private innovation, or do too little, too late to be effective.

Perhaps accounting firms can morph into that third-party role. But it will need to be a different kind of involvement, because their base expertise lies in financial reporting, where you have access to the source code and the underlying transactions and data points. AI currently provides access to none of those.

Academia needs to think through the importance of AI’s “referee,” and where that archetype might be taught and trained. New schools may emerge to build this new class of referees. Business and engineering programs may build joint ventures to teach and train referees, but liability is a concern: If and when AI goes wrong, who will be on the hook?

We also can’t rule out the private sector, and the idea of an “AI detective” to hold technology companies accountable. This kind of detective might do enough sampling to rate the relative reliability of various AI services.

In the end, all we know for sure is this: The game is on. We’re past the preseason and into the games that count for AI, except there are no referees in sight—yet.
