Table of Contents
- 1. TL;DR: Trump’s AI strategy boosts Big Tech and limits regulations
- 2. Trump and the adoption of AI in the White House
- 3. Trump’s executive order and its impact on state laws
- 4. Pressure from AI companies and the $100 million PAC
- 5. The Senate and the elimination of federal AI legislation
- 6. Trump’s threats over federal funding and state regulation
- 7. The role of Big Tech in Trump’s AI strategy
- 8. Criticism of the lack of regulation in AI development
- 9. The future of AI regulation in the United States
- 10. Trump’s AI strategy: Implications for the technological future
TL;DR: Trump’s AI strategy boosts Big Tech and limits regulations
- Trump has signed an executive order that limits state regulations on AI.
- Major tech companies like OpenAI and Google benefit from this strategy.
- A PAC with $100 million has been formed to influence elections.
- The lack of regulation raises concerns about safety and ethics in AI.
- The future of AI regulation in the U.S. is uncertain, with a federal rather than state-level approach.
Trump and the adoption of AI in the White House
Since his return to the White House, Donald Trump has embraced artificial intelligence (AI) as a key technology for economic growth and global competitiveness. The administration has worked closely with U.S. technology companies to encourage investment in AI, a sector in which China has also made significant advances. This strategy is part of a broader context of technological competition between the U.S. and China, where AI is considered a fundamental pillar for the future.
Trump has emphasized the need for U.S. companies to be free to innovate without the constraints of state regulations he considers excessive. In this regard, his administration has promoted an approach that prioritizes the creation of a single federal regulatory framework, which seeks to eliminate the fragmentation that could hinder the development of AI in the country. This approach has been well received by major technology companies that have lobbied to limit regulations they consider burdensome.
However, this strategy has drawn criticism. Detractors argue that the lack of regulation could leave citizens vulnerable to risks associated with AI, such as algorithmic discrimination and a lack of transparency in AI models. The Trump administration has responded to these concerns by stating that a unified regulatory framework is essential to maintain U.S. competitiveness on the global stage.
In addition, the administration has begun working with Congress to establish national standards that would preempt state laws conflicting with federal policies, including in areas such as copyright protection and child safety. This has fueled debate about the balance between fostering innovation and protecting citizens.
Trump’s executive order and its impact on state laws
The main objective of the executive order signed by Trump is to limit states’ ability to regulate AI. The move is justified by the need for a cohesive approach that does not hinder innovation. The administration argues that state regulations can create a confusing and complicated environment for companies seeking to operate across multiple jurisdictions.
One of the state laws that has been the subject of controversy is California’s AB 2013, which requires AI companies to disclose the data used to train their models. This law, along with other similar initiatives in states such as Colorado and Tennessee, has been criticized by leaders in the AI industry who argue that such regulations could inhibit the growth and competitiveness of U.S. companies in the global market.
The executive order also states that the Department of Commerce will evaluate state laws for conflicts with the administration’s AI priorities. This could result in the withholding of federal funds from those states that implement regulations deemed too restrictive. This strategy has been seen as an attempt by Trump to consolidate federal power over state decisions, raising concerns about the erosion of state autonomy.
Pressure from AI companies and the $100 million PAC
AI companies have stepped up their lobbying efforts in Washington, launching a super political action committee (PAC) with a budget of at least $100 million to influence the 2026 midterm elections. This PAC aims to promote candidates who support the AI deregulation agenda and who are willing to work on creating a regulatory framework favorable to the industry.
The creation of this PAC reflects AI companies’ growing concern about the possibility that regulations could be implemented that might limit their ability to operate and grow. Industry leaders, such as Sam Altman of OpenAI, have stated that a fragmented approach to regulating AI is not only ineffective, but could also put the U.S. at a disadvantage compared with competitors like China.
The pressure exerted by these companies has led to a political climate in which deregulation is presented as a solution to foster innovation and economic growth. However, this strategy has also drawn criticism from those who warn about the risks of an unregulated environment, which could result in abuses and harm to citizens.
The Senate and the elimination of federal AI legislation
In June 2025, the U.S. Senate voted overwhelmingly to eliminate a proposed piece of legislation that sought to establish a federal regulatory framework for AI. The decision was seen as an endorsement of the Trump administration’s deregulation agenda and a blow to those advocating stricter oversight of the sector.
Senator Marsha Blackburn, who has been a prominent voice in defense of state regulation, argued that states are best able to protect citizens in the virtual space. However, the majority of the Senate appears to have adopted the position that a federal approach is necessary to maintain U.S. competitiveness in the global race for AI.
The failure of this federal legislation raises questions about the future of AI regulation in the U.S. and whether state laws will remain effective in protecting citizens. Without a federal regulatory framework, AI companies could operate without the necessary oversight, with potentially negative consequences for society.
Trump’s threats over federal funding and state regulation
Trump has threatened to withhold federal broadband funding from states that implement AI regulations his administration considers restrictive. This measure is part of a broader effort to ensure that state policies do not hinder U.S. dominance in technology.
The White House AI adviser, David Sacks, has stated that the executive order will allow the federal government to roll back state regulations deemed “burdensome.” This includes reviewing state laws for conflicts with the administration’s AI priorities, which could result in those states being excluded from significant federal funds.
The threat to withhold funds has been criticized by lawmakers who warn that it could create a “Wild West” environment for AI companies, where a lack of regulation could put citizens at risk. This strategy has sparked a debate about the legality and ethics of using federal funding as a tool to influence state policies.
The role of Big Tech in Trump’s AI strategy
Large technology companies, such as OpenAI and Google, have been key beneficiaries of Trump’s AI strategy. These companies have been at the forefront of developing AI technologies and have pushed to limit regulations they consider excessive. The administration has responded to these demands by promoting a federal regulatory framework that favors innovation and growth.
Collaboration between the government and technology companies has been seen as a way to ensure that the U.S. maintains its competitive edge in the AI arena. However, this relationship has also raised concerns about the disproportionate influence that large companies can have in policymaking.
Critics argue that the lack of regulation could allow abuses and unfair practices in the development and use of AI. The pressure exerted by big tech to limit regulations could result in an environment where corporate interests prevail over the protection of citizens.
Criticism of the lack of regulation in AI development
The lack of a clear regulatory framework for AI has generated significant concerns among experts and rights advocates. Many argue that the absence of adequate regulations could leave citizens vulnerable to the risks associated with AI, including algorithmic discrimination and a lack of transparency.
Criticism focuses on the fact that a deregulated approach could allow AI companies to operate without oversight, which could result in harm to citizens. The lack of regulation could also make it difficult to hold companies accountable if their technologies cause harm or fail.
Advocates of stricter regulation argue that it is essential to establish standards that protect citizens and ensure that AI is developed ethically and responsibly. However, the Trump administration has maintained its stance that deregulation is necessary to foster innovation and economic growth.
The future of AI regulation in the United States
The future of AI regulation in the U.S. is uncertain, especially after the Senate’s rejection of proposed federal legislation and given the Trump administration’s deregulation strategy. As AI companies continue to grow and evolve, the need for a clear regulatory framework becomes increasingly urgent.
Pressure from big tech and the lack of consensus on how to regulate AI have led to an environment in which state regulations may be insufficient to address the challenges posed by this technology. The administration has emphasized the importance of a federal approach, but implementing an effective regulatory framework remains a challenge.
As we move toward a future where AI will play an increasingly important role in society, it is essential to find a balance between innovation and the protection of citizens. The lack of regulation could result in an environment where the risks associated with AI are not adequately managed, which could have serious consequences for society.
Trump’s AI strategy: Implications for the technological future
Trump’s strategy has led to an erosion of states’ power to regulate AI, which could have long-term consequences for the protection of citizens. The lack of a clear regulatory framework could result in an environment where companies operate without oversight.
The strengthening of big tech
Large technology companies have been key beneficiaries of Trump’s deregulation strategy, allowing them to operate with greater freedom. However, this has also raised concerns about the disproportionate influence these companies may have in policymaking.
Challenges and criticisms of AI policy
The lack of regulation has prompted significant criticism, with experts warning about the risks associated with a deregulated environment. The need to establish standards that protect citizens becomes increasingly urgent as AI continues to evolve.
Future outlooks in AI regulation
The future of AI regulation in the U.S. is uncertain, with the need to find a balance between innovation and the protection of citizens. As AI companies continue to grow, implementing an effective regulatory framework will be essential to address the challenges posed by this technology.