- The incoming administration will likely take a more light-handed approach to AI regulation than the current administration.
- Many of the Biden administration’s AI executive actions and agency actions will be rolled back.
- AI legislation will continue to proliferate at the state level.
- Employers should prioritize transparency measures and self-regulatory efforts as attainable goals for managing the risk of bias inherent in AI tools.
Because the United States has so far adopted a light-handed and decentralized approach to regulating artificial intelligence (AI) in the workplace, the reelection of President Donald Trump is unlikely to have as dramatic an impact on AI regulations as it may on other labor and employment matters. Nevertheless, the second Trump administration will have significant influence on issues related to AI from January 2025 to January 2029, a critical time for the rapidly evolving AI regulatory environment. Overall, the incoming administration is likely to take a more light-handed approach to AI regulation than did the Biden administration.
Executive Actions
The incoming Trump administration is expected to roll back most of the Biden administration’s AI regulatory efforts regarding the workplace, especially any measures that might be viewed as hampering innovation, limiting free speech, or being overtly pro-union. Most significantly, the incoming Trump administration is expected to repeal the Biden administration’s executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” issued on October 30, 2023, because of concerns that it stifles innovation.
The AI executive order gave broad directives to a wide variety of federal agencies to address AI and laid out a roadmap intended to safeguard workers and consumers from the risk of bias and other potential harms posed by AI. The incoming Trump administration will also likely withdraw the “Blueprint for an AI Bill of Rights,” which addressed the possibility of employers using technologies for anti-union purposes. The incoming Trump administration may turn to some version of Trump’s 2019 executive order titled “Maintaining American Leadership in Artificial Intelligence,” which called on federal agencies to develop a roadmap for integrating machine learning technologies into the private sector to help America remain at the forefront of AI research, development, and deployment. While dated in some respects given the rapidly evolving field, Trump’s previous executive order on AI maintains some risk management principles, including upskilling workers in AI technologies and “protect[ing] civil liberties, privacy, and American values.”
The incoming Trump administration will likely turn to many of the same concepts developed during his first administration. For instance, in December of 2020, Trump issued an executive order that stated AI should be used to improve government operations in a manner that remains consistent with all applicable laws, including those related to privacy, civil rights, and civil liberties. Among the other principles to guide the federal government in its use of AI are accuracy, reliability, security, responsibility, traceability, transparency, and accountability. In January of 2021, President Trump signed the National Defense Authorization Act for 2021 into law, which directed the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) to develop a voluntary risk-management framework for trustworthy AI systems. NIST’s framework sets forth guidelines and best practices that can help organizations manage the risks associated with AI systems, and provides a structured approach to identifying, assessing, and mitigating risks through the lifecycle of an AI system. Because the NIST framework was spearheaded by President Trump and is non-partisan and congressionally authorized—and precedes the Biden AI executive order—it is likely to remain an important point of continuity.
Moreover, the incoming Trump administration will likely put greater emphasis on voluntary industry self-governance than on prescriptive federal mandates. The Biden administration focused its efforts on securing voluntary agreements from major tech companies to address AI, and the second Trump administration will likely accelerate this approach. Similarly, the incoming Trump administration will likely encourage companies to continue to embrace self-regulation to foster responsible AI development and deployment. Indeed, it has become standard practice for many leading companies to develop or publish their own AI principles or guidelines, establish diverse partnerships, and institute other measures to help ensure proper guardrails are in place and to demonstrate that AI is being used in a responsible and legally compliant way. These self-regulatory efforts will likely accelerate in the second Trump administration.
However, there may be a push for the Trump administration to at least consider certain AI regulations, especially due to the influence of key informal advisors such as Elon Musk. Historically, Musk has supported the creation of a regulatory body to oversee AI and ensure that it does not present a danger to the public. In addition, Musk supported California Senate Bill (SB) 1047, which would have put in place safety regulations for large AI models but was vetoed by Governor Gavin Newsom. Recent reports suggest that Trump is considering naming an “AI czar” to coordinate federal AI policy and governmental use of such technologies, and Musk is expected to be involved with the coordination in some capacity.1
Congressional Actions
President-elect Trump will also have Republican majorities in both houses of Congress, which will put the Trump White House in a powerful position to advance AI legislative priorities. Trump and the Republican Congress may work together on legislation to address the growing patchwork of laws across the nation and conflicting agency requirements, which present compliance challenges for employers. More specifically, it is possible that President-elect Trump will work with Congress on legislation establishing a national standard that simplifies regulatory compliance and preempts conflicting regulatory frameworks at the state and local levels. The Trump administration may, however, adopt a more laissez-faire approach to regulating AI, as has been the case in other areas of employment law.
Agency Actions
The shift in power will also lead to changes in how the U.S. Department of Labor (DOL), the U.S. Equal Employment Opportunity Commission (EEOC), and the National Labor Relations Board (NLRB) address AI’s growing influence in the workplace. At the federal agency level, the incoming Trump administration is expected to withdraw certain AI guidance documents issued during the Biden administration that might be viewed as stifling innovation, limiting free speech, or favoring unions. The incoming Trump administration will likely want to address AI-related wage and hour concerns through opinion letters. An opinion letter is an official written opinion from an agency on how a statute, its implementing regulations, and related case law apply to a specific situation presented by the person or entity requesting the opinion. During the first Trump administration, the DOL’s Office of Federal Contract Compliance Programs began issuing opinion letters, and that agency may also issue opinion letters to address AI in the workplace.
Overall, the incoming Trump administration’s immediate impact on federal agencies’ AI regulations may be limited since Democrats will likely continue to have a majority on the EEOC and NLRB for a significant part of his administration. But President-elect Trump will be able to appoint the EEOC and NLRB chairs. For AI regulatory efforts this is significant, especially since all EEOC AI guidance thus far has been issued solely by the chair. Furthermore, the EEOC chair can appoint the agency’s legal counsel who heads the Office of Legal Counsel, which provides legal advice to the chair and the Commission on a wide range of substantive, administrative, and procedural matters. Equally important, the EEOC’s Office of Legal Counsel is responsible for developing Commission rules and guidance, including the agency’s AI guidance.
President-elect Trump is also expected to fire the current EEOC and NLRB general counsels and name their replacements, which could also have a significant impact on AI regulatory efforts. This impact is aptly illustrated by the NLRB’s general counsel memorandum issued in 2022 that warns employers that using electronic surveillance and automated management technologies presumptively violates employee rights under the National Labor Relations Act. The EEOC’s general counsel is responsible for managing and coordinating the enforcement litigation program, providing overall direction to all legal unit components within the agency, and filing amicus briefs. Given the EEOC’s AI Initiative launched in 2021 and the agency’s recent Strategic Enforcement Plan emphasizing the need to address AI in the workplace, the EEOC’s general counsel will play a critical role.
Another unknown factor is the impact of President-elect Trump’s selection of Lori Chavez-DeRemer to serve as labor secretary. Congresswoman Chavez-DeRemer was widely supported by labor unions, and she co-sponsored the controversial Protecting the Right to Organize (PRO) Act, far-reaching legislation that contains what may be considered a wish list of union demands. Even though the DOL’s involvement with traditional labor law and union-employer relations is minimal, Chavez-DeRemer may be hesitant to rescind certain pro-union policies, including those related to AI.
In addition, the incoming Trump administration will likely roll back interagency agreements, known as memoranda of understanding (MOUs). MOUs are generally unenforceable, non-binding agreements signed between various agencies that clarify the agencies’ respective jurisdictions, assign regulatory tasks, and establish ground rules for information-sharing, investigation, training, enforcement, and other informal arrangements. The Biden administration put a strong emphasis on the use of MOUs to address the purported misuse of AI. For instance, in October 2022, the current NLRB general counsel issued a memorandum emphasizing that the agency would use MOUs with several other federal agencies, including the DOL, the Federal Trade Commission, and the U.S. Department of Justice, to facilitate coordinated enforcement against employers for their use of monitoring technologies. We expect most of these MOUs to be rescinded.
State Level
Overall, the incoming administration’s impact on regulating AI will be minor because most AI regulatory efforts are occurring at the state level. States have been more proactive in developing laws around AI use. In 2024, at least 40 states introduced bills addressing AI use, including discrimination and automated employment decision-making. AI legislation will continue to proliferate at the state and local levels. In Democratic-led states, we may see an uptick in AI regulations that establish a counterpoint to what is happening at the federal level.
Conclusion
The change in administrations will generate many questions over how employers can comply with the law in light of AI regulatory changes. Key indicators suggest that the incoming Trump administration will adopt a deregulatory approach to AI, allowing states to fill the void. Amid the trend toward more self-regulation and the patchwork of emerging laws, employers should prioritize transparency measures and proactive audits as attainable goals for managing the risk of bias inherent in AI tools.
Footnotes
1 See Mike Allen, Scoop: Trump eyes AI czar, Axios (Nov. 26, 2024), https://www.axios.com/2024/11/26/trump-ai-czar-role-elon-musk.