
AI will also be on the ballot on November 5

A ballot paper with the word AI written on it goes into the ballot box. The choice Americans make in November will determine whether they continue to lead a collective effort to shape the future of AI according to democratic principles. Illustration: edited by Erik English; original from DETHAL via Adobe.

Artificial intelligence represents one of the most consequential technologies of our time, promising enormous benefits while posing serious risks to the country’s security and democracy. The 2024 elections will determine whether America takes the lead or retreats from its crucial role in ensuring that AI develops safely and in accordance with democratic values.

AI promises extraordinary benefits – from accelerating scientific discovery to improving healthcare and boosting productivity across our economy. But realizing these benefits will require what experts call “safe innovation,” developing AI in ways that protect America’s safety, security and values.

Despite these benefits, the range of risks associated with artificial intelligence is significant. Unregulated AI systems can reinforce societal biases, leading to discrimination in crucial decisions about jobs, loans and healthcare. The security challenges are even greater: AI-powered attacks can probe power grids for vulnerabilities thousands of times per second, and can be launched by individuals or small groups rather than requiring the resources of nation states. During public health or safety emergencies, AI-enabled disinformation can disrupt critical communications between emergency responders and the public, undermining life-saving response efforts. Perhaps most alarming, AI could allow malicious actors to develop chemical and biological weapons more easily and quickly than they could without the technology, putting devastating capabilities within the reach of individuals and groups who previously lacked the necessary expertise or research skills.

The Biden-Harris administration recognized these risks and developed a comprehensive approach to AI governance, including the milestone Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The administration’s framework directs federal agencies to address the full spectrum of AI challenges. It establishes new guidelines to prevent AI discrimination, promotes research that serves the public interest, and creates new initiatives within government to help society adapt to AI-driven changes. The framework also addresses the most serious security risks by ensuring that powerful AI models are rigorously tested so that safeguards can be developed to block their potential misuse – such as helping to create cyber attacks or bioweapons – in ways that threaten public safety. These safeguards preserve America’s ability to lead the AI revolution while protecting our security and values.

Critics who argue that this framework would stifle innovation would do well to consider other transformative technologies. The strict safety standards and air traffic control systems developed through international cooperation have not held back the aviation industry, but rather enabled it. Today, millions of people board planes without a second thought because they trust the safety of air travel. Aviation became a cornerstone of the global economy precisely because countries worked together to create standards that won public trust. Likewise, catalytic converters didn’t hold back the auto industry: they helped cars meet growing global demands for both mobility and environmental protection.

Just as the Federal Aviation Administration ensures safe air travel, dedicated federal oversight working with industry and academia can ensure responsible use of artificial intelligence applications. Via the recently released National Security Memorandum, the White House has now established the AI Safety Institute within the National Institute of Standards and Technology (NIST) as the U.S. government’s primary liaison to private sector AI developers. This institute will facilitate voluntary testing – both before and after public deployment – to ensure the safety, security and reliability of advanced AI models. But since threats such as bioweapons and cyber attacks respect no borders, policymakers must think globally. That’s why the government is building a network of AI safety institutes together with partner countries to harmonize standards worldwide. This is not about going it alone, but about leading a coalition of like-minded countries to ensure that AI evolves in ways that are both transformative and trustworthy.

Former President Trump’s approach would differ sharply from that of the current administration. The Republican National Committee platform proposes to “revoke Joe Biden’s dangerous Executive Order that hinders AI innovation and imposes radical left ideas on the development of this technology.” This position contradicts the public’s growing concern about technological risks. For example, Americans have witnessed the dangers children face from unregulated social media algorithms. That’s why the U.S. Senate recently came together in an unprecedented display of bipartisanship to pass the Kids Online Safety Act by a vote of 91-3. The bill offers young people and parents tools, safeguards and transparency to protect themselves against online harm. The stakes with AI are even higher. And for those who think setting technology guardrails will hurt America’s competitiveness, the opposite is true: Just as travelers came to prefer safer planes and consumers demanded cleaner vehicles, they will push for trustworthy AI systems. Companies and countries that develop AI without adequate safeguards will find themselves at a disadvantage in a world where users and companies demand assurance that their AI systems will not spread disinformation, make biased decisions, or enable dangerous applications.

The Biden-Harris Executive Order on AI forms a foundation on which to build further. Strengthening the United States’ role in setting global AI safety standards and expanding international partnerships is essential to maintaining American leadership. This will require working with Congress to secure strategic investments in AI security research and oversight, as well as investments in defensive AI systems that protect the nation’s digital and physical infrastructure. As automated AI attacks become more sophisticated, AI-powered defenses will be critical to protect power grids, water systems and emergency services.

The window for establishing effective global governance of AI is narrow. The current administration has built a growing ecosystem for safe, secure, and trustworthy AI – a framework that positions America as a leader in this critical technology. If we step back now and dismantle these carefully constructed safeguards, we would not only be giving up America’s technological edge, but also its ability to ensure that AI evolves in accordance with democratic values. Countries that do not share the United States’ commitment to individual rights, privacy and security would then have a greater voice in setting the standards for a technology that will reshape every aspect of society. This election represents a crucial choice for America’s future. The right standards, developed in collaboration with allies, will not hinder AI development; they will ensure that it reaches its full potential in the service of humanity. The choice Americans make in November will determine whether they continue to lead a concerted effort to shape the future of AI according to democratic principles, or whether they surrender that future to those who would use AI to undermine our country’s security, prosperity and values.