President Trump attended an AI summit hosted by the All‑In Podcast and the Hill & Valley Forum in Washington.
President Trump’s newly released “Winning the AI Race: America’s AI Action Plan” outlines sweeping ambitions for U.S. leadership in artificial intelligence—along with complex challenges for corporate boards.
The plan calls for accelerating AI innovation and infrastructure while reducing what it labels as burdensome regulation. Yet this deregulatory approach may force companies to fill the gap by strengthening their own internal risk controls and oversight.
Key Pillars of Trump’s AI Action Plan
The plan includes over 90 federal policy actions across three policy pillars – “Accelerating Innovation,” “Building American AI Infrastructure,” and “Leading in International Diplomacy and Security” – that will be rolled out over time.
The pillars focus on exporting American AI; promoting the rapid buildout of data centers; removing “onerous” federal regulations that hinder AI development and deployment; and updating federal procurement guidelines to ensure that artificial intelligence models procured by the government prioritize truthfulness and ideological neutrality.
The New York Times described the action plan as “opening the door for companies to develop the technology unfettered from oversight and safeguards,” while seeking to ensure that AI remains free of “partisan bias.”
The plan is seen as a way for the U.S. to lead in AI by avoiding broad government regulations that could slow development. This marks a change in approach for the U.S. and a departure from other governments, such as the European Union, that have moved to implement regulatory standards for AI.
At the same time, President Trump signed three executive orders: one bans the federal government from buying AI tools seen as politically biased; another speeds up AI infrastructure projects; and the third controls exports of American AI products.
How Corporate Boards May Respond
Corporate boards will probably respond to the AI action plan in two main ways. One path involves detailed discussions between the board and management about how the plan affects the company’s AI strategy. These conversations will likely focus on establishing internal mechanisms to monitor the rollout of the plan, and on identifying opportunities under the plan to enhance the company’s approach to AI acquisition and deployment.
The other, more challenging path involves board and management conversations about whether the limited commitment to federal AI regulation could create increased liability and reputational exposure for the company.
Any such conversation would be grounded in an honest self-evaluation of the board’s current proficiency in AI-related matters. Is the board prepared to accelerate and monitor the company’s use of AI? In such an exercise, the National Association of Corporate Directors’ 2024 Blue Ribbon Commission Report, “Technology Leadership in the Boardroom,” might serve as an effective measuring stick.
Managing AI Risks And Compliance
Discussions about liability and risk would focus on improving the board’s AI oversight and adding review standards to ensure reliability, safety, and trust. The board may also consider whether additional oversight is necessary to responsibly manage issues historically associated with the misuse of AI: societal harms such as fraud, discrimination, bias, and disinformation; anti-competitive behavior; and the displacement and disempowerment of workers.
The board will also likely consult with its advisors on whether to expand compliance programming to address possible government enforcement policies regarding the objectivity of AI systems in general, and their treatment of such controversial topics as diversity, equity, and inclusion; climate change; and misinformation.
Following this path will require the board to spend additional time reviewing, discussing, and consulting beyond its current AI efforts.
The ultimate governance question arising from the AI action plan is whether boards will find it necessary to compensate for the absence of comprehensive AI regulation with increased internal oversight ‒ and whether they can do so without creating an expensive internal bureaucracy.