With Trump expected to repeal Biden’s 2023 AI Executive Order, we’re on the cusp of a shift in how AI will be regulated. Biden’s order introduced voluntary standards aimed at reducing risks in AI, including bias, data security, and intellectual property protections, which are key considerations for any recruiter using AI to support fair and accurate decision-making. Trump’s administration, however, labels these protections as obstacles to innovation, pledging a more unrestrained approach that could change the landscape of AI-powered recruiting tools.
This approach could bring both benefits and risks for recruiters. With fewer restrictions, innovation may speed up, driving advancements in AI’s capabilities for sourcing and assessing talent. On the other hand, this could mean fewer safety nets around bias mitigation, potentially affecting how candidate data is processed and ranked. As in all things, rushing can lead to bad results! Recruiters may find they need to keep a closer eye on their AI tools’ performance to ensure candidate scoring and screening remain equitable.
An Innovation-First Approach to AI Development
The Trump administration is likely to prioritize rapid AI development to keep the U.S. ahead of competitors like China. With a “speed first” attitude, there could be a stronger emphasis on fast-tracking new AI features that are central to recruiting, such as predictive analytics, natural language processing, and advanced candidate assessment tools. This could mean a wave of powerful, next-gen tools arriving at recruiters’ desks sooner than expected, and these tools could save time, improve accuracy, and create a more efficient hiring process.
Yet with a focus on speed, we might see a reduction in transparency standards for AI. For recruiters, this could make it harder to understand why an AI tool recommends certain candidates over others. The “black box” nature of many AI solutions may deepen, limiting your ability to explain or justify AI-driven hiring decisions to clients or candidates.
Just as problematic are the decisions these tools make that we never see: great candidates hidden because a focus on speed pushed vendors to train on biased, “same-think” data to get a product out the door quickly. In effect, the new standard becomes the old average.
The Uncertain Future of the U.S. AI Safety Institute
Biden’s administration established the U.S. AI Safety Institute (AISI) to help manage potential risks in high-stakes applications, including hiring. While the Institute has gained bipartisan support, the Trump administration views it with skepticism, and its future remains uncertain. For recruiters, a diminished AISI might mean fewer mandatory safeguards in hiring technology. This raises an important consideration: choosing vendors who prioritize ethical and responsible AI development, even without federal mandates, could become increasingly vital for recruiters seeking to avoid biased or opaque outcomes.
A Push Toward “America-First” AI
Trump’s policies are likely to take an “America-first” approach, where limiting foreign access to U.S.-developed AI is key. The administration is expected to tighten export controls on technologies such as AI chips, keeping competitive advantages within the U.S. While this approach could strengthen domestic AI development, enabling U.S.-based recruiting tools to gain a technical lead, the competitive landscape may tighten as local companies race to capitalize on these advancements.
This policy focus on protectionism could also lead to greater federal support for infrastructure like data centers and chip manufacturing. For recruiters, that might mean faster, more reliable AI-powered tools at their fingertips, backed by a robust infrastructure. As these advances unfold, staying current with AI vendors at the forefront of new technology will be essential to staying competitive.
State-Level AI Regulation Likely to Increase
If federal regulations are rolled back, states may step in to set their own standards for AI use, particularly those with more progressive stances on technology. California recently passed laws requiring transparency in AI model training and protections for workers against unauthorized AI-generated voice cloning. These measures could influence recruiting tools in those markets.
As state-level regulations increase, recruiters might experience varied compliance requirements depending on where they work. Vendors may start tailoring their products to meet stricter regional standards, particularly in states like California, Colorado, and Illinois. For recruiters in these areas, understanding and adapting to these localized regulations could impact how you select and use AI tools in your daily work.
A Divided Advisory Camp on AI Policy
The Trump administration’s AI approach is influenced by a coalition of advisors who hold varying views on regulation. This division could lead to selective enforcement of AI guidelines, with minimal restrictions in some areas but possible oversight in high-risk sectors like defense and security. Notable advisors, such as Marc Andreessen and JD Vance, advocate for minimal AI regulation, arguing that warnings about risks are exaggerated by industry leaders to limit competition. On the other hand, prominent figures like Elon Musk stress the importance of AI safety, warning of existential risks associated with unchecked development. It’s worth noting that most AI engineers favor a more regulated environment than we have now, even those who ultimately argue for lighter regulation.
For us as recruiters, this dynamic mix of perspectives could mean that while certain AI functionalities will flourish in a less-regulated environment, some aspects of AI, especially those involving sensitive candidate data, may still face compliance measures. The outcome will likely depend on Trump’s appointments and advisory team, which will shape how federal agencies manage AI in practical applications like hiring.
Increased International Tensions and AI in Hiring
A key concern in Trump’s AI agenda is staying competitive with China. Tighter restrictions on exporting AI technology and components may be imposed to prevent Chinese companies from leveraging U.S.-developed tools. For recruiters, this could signal an advantage in having exclusive access to cutting-edge U.S.-based AI technologies, as competitors in other regions face limitations on the same.
However, the potential trade barriers may also affect vendor partnerships and the cost of technology development, as restrictions on chip manufacturing and AI software exports may lead to higher costs. Keeping a pulse on this environment will help you navigate any shifts in vendor relationships and evaluate the potential impact on the availability or affordability of the tools you depend on.
Adapting to a New AI Landscape in Recruiting
The direction of AI policy under Trump’s administration will introduce both opportunities and uncertainties for recruiters. As regulations shift, the need for adaptability in selecting AI tools will be more important than ever. Prioritize vendors who demonstrate a commitment to ethical AI use and transparency, and remain vigilant in assessing your tools for fairness and data security.
Ultimately, the recruiter’s success with AI in this evolving landscape will hinge on staying informed, being proactive in tool selection, and maintaining best practices for fair and compliant candidate evaluation. By approaching these changes with a strategic mindset, you can harness the full potential of AI to enhance your recruiting operations while navigating the new policy landscape responsibly.
Stop working in a silo! Get the support you need from expert coaches and a group of high performing peers. Learn more below.
Tricia Tamkin, headhunter, advisor, coach, and gladiator. Tricia has spoken at over 50 recruiting events, been quoted in multiple national publications, and her name is often dropped in groups as the solution to any recruiter’s challenges. She brings over 30 years of deep recruiting experience and offers counsel in a way that is perspective-changing and entertaining.