
How Credible Are the White House’s AI Regulation Principles?


What is the Trump administration’s real goal in suggesting new national regulations governing the use of artificial intelligence?

The presence of artificial intelligence in our lives will continue to grow. Considering the degree of alarm that AI has triggered in the general population, we can expect a heavier dose of laws and regulations governing how the technology is deployed, used, and managed.

It’s a bit amusing that the Trump administration should caution agencies against “overreach” as they consider whether, when, and how to regulate AI. The reach of any regulatory regime should be commensurate with the reach of the phenomenon being regulated. I doubt, however, that Trump has a crystal ball that tells him how extensively AI will disrupt our world in coming years.

Actually, it’s curious that this president would undertake a new regulatory initiative of any sort. Trump is implacably hostile to any and all environmental, health and safety, anti-trust, and other regulations that have benefited Americans immensely for many generations. Earlier this month, the White House Office of Science and Technology Policy (OSTP) released a set of principles to guide federal agencies when regulating the use of AI in the private sector. Release of the document kicked off a 90-day public commentary period. At the end of that, agencies will have 180 days to decide how to implement the principles.

On the unlikely chance that Trump-appointed agency heads will eventually implement these principles, let’s consider what the document’s current draft actually says. As summarized here, the principles state that agencies “must promote reliable, robust, and trustworthy AI applications.” They also advocate cross-agency consistency and public participation in the rulemaking process, require security, transparency, and fairness in how AI is used, and call for flexible regulatory updates to adapt to technological advances. They also encourage industry self-regulation where feasible over heavy-handed government regulation of AI development, deployment, and utilization.

That’s all well and good, and even a Democratic administration would probably put out something similar. But I almost lost it when the document stated that issuing new regulations on AI’s use requires “scientific evidence” to inform the necessary upfront risk assessments and cost-benefit analyses.

I’m sorry, but how dumb does this administration think we are? This principle has little credibility coming from the most irrationally anti-scientific president in US history. Among other atrocities, Trump has rolled back numerous regulations that were instituted to address climate change. Under Trump, private business is being given free regulatory rein — without interference from pesky scientific authorities — to heat the planet, pollute our environment, and endanger the safety of workers, consumers, and everybody else.

Image: Tashatuvango – stock.adobe.com

Even if we accept the OSTP document’s requirement of “scientific evidence” in the rulemaking process without a shred of cynicism, we need to ask who exactly would determine what constitutes such evidence for the purpose of framing specific agency regulations that govern AI. This administration has ruthlessly suppressed credible scientific studies that were produced by government employees and contractors. More than that, scientific professionals — including the data scientists most competent to advise on AI regulations — have been told in no uncertain terms that their skills are no longer needed under this administration and that it would be best for them to leave public service entirely.

If you’re hoping that US federal agencies’ engagement with other nations’ AI experts would make up for this scientific brain drain, you’re sadly mistaken. Trump shot down that hope when he rejected US participation with other G7 nations in the Global Partnership on AI, which seeks to establish shared regulatory principles governing the technology’s use around the globe.

If you’re a US taxpayer, you’d best believe that the people remaining at the federal level to adjudicate what constitutes credible scientific evidence will be some unholy alliance of pseudoscientific quacks and ideological hacks.


It’s no surprise that regulations over AI’s use in US society — such as for facial recognition — are starting to take root at the state and local levels. Though an unidentified Trump administration official recently characterized those efforts as “over-regulation,” you could very plausibly argue that they are nothing of the sort, but, rather, a justified grass-roots campaign to counter egregious under-regulation at the national level.

Besides, it’s not at all clear whether Trump and his administration truly care about such AI downsides as privacy encroachment, biased decision-making, and so on. Though some headlines claim otherwise, these new principles are not intended to make AI “safer,” which would imply that some sort of consumer-protection impulse motivates this effort.

Though US CTO Michael Kratsios expressed concern about “the rise of authoritarian governments that have no qualms with AI being used to track, surveil, and imprison their own people,” his boss at 1600 Pennsylvania has no qualms about openly admiring practically every dictator who walks the Earth.

When you look at it, these principles are designed to hamstring efforts by federal agencies to ensure that private businesses manage AI responsibly for the benefit of all Americans. More to the point, Trump’s primary interest in AI is nationalistic: as a weapons-grade asset for maintaining US global dominance. As Kratsios stated here, the ulterior purpose of these principles is to “maintain and strengthen the US position of leadership” on AI.

One would hope that the purpose is, at least in part, to ensure that AI is managed responsibly to benefit all humanity, but apparently that’s too much liberal folderol for this administration to stomach. If you seek a set of AI governance principles that put people first, with ethics (not power politics) at their core, check out such initiatives as this.

Interestingly, Trump advocates a laissez-faire AI regulatory regime domestically while, consistent with this nationalistic philosophy, going the opposite direction internationally. The administration recently instituted an export ban that forbids US companies from selling software abroad that uses AI to analyze satellite imagery without a license. This ban is quite clearly intended to deny China, in particular, access to such technology, though China has obviously made huge domestic investments of its own and can probably get by without US-developed AI software for this use case.

So let’s get real here. No matter how much merit these proposed AI regulation principles might possess in the abstract, they’re an obvious ploy for the Trump administration to retaliate against the left-leaning Silicon Valley companies that are driving the AI revolution. Demonizing AI is an effective smokescreen for Trump to lash out against the likes of Amazon, Microsoft, Google, Facebook, and other powerful tech companies that have bet their futures in part on their AI prowess.

Even if this administration were promulgating these principles in good faith, they come almost a year after Trump’s signing of the “American AI Initiative,” an executive order that puts forth a high-level strategy for guiding AI development within the US but includes no new federal funding to give the initiative a chance of succeeding. If Trump were truly trying to strengthen the US’s AI competencies, he would already have proposed a substantial federal outlay in this regard.

Let’s hope that whatever administration follows Trump actually institutes responsible regulation of AI at the federal level, while funding the R&D needed to develop credible tooling and approaches to manage AI responsibly wherever it touches our lives.

For more on AI, check out these recent articles:

A Realistic Framework for AI in the Enterprise

How to Manage the Human-Machine Workforce

The Facial Recognition Debate

Restart Data and AI Momentum This Year

James Kobielus is Futurum Research’s research director and lead analyst for artificial intelligence, cloud computing, and DevOps.
