
Biden issues U.S.' first artificial intelligence executive order, requiring safety assessments, civil rights guidance, research on labor market impact

The first U.S. executive order on artificial intelligence, signed by President Joe Biden, requires safety reviews, equality and civil rights advice, and labour market studies.

Law enforcement agencies have warned that they will apply existing law to Artificial Intelligence abuses, and Congress is studying the technology to write new rules, but the executive order could have an instant impact. As with all executive orders, it “has the force of law,” a senior administration official told reporters Sunday.


According to the White House, the executive order has eight main parts:


1. Creating new Artificial Intelligence safety and security standards, such as requiring some AI companies to share safety test results with the federal government, directing the Commerce Department to develop AI watermarking guidance, and creating a cybersecurity programme to develop Artificial Intelligence tools to identify critical software flaws

2. Protecting consumer privacy by setting criteria for agencies to examine Artificial Intelligence privacy solutions

3. Promoting equity and civil rights by advising landlords and federal contractors on how to avoid AI algorithms worsening discrimination and developing best practices for Artificial Intelligence use in sentencing, risk assessments, and crime forecasting

4. Protecting consumers by instructing the Department of Health and Human Services to develop a programme for reviewing potentially hazardous Artificial Intelligence-related health-care practices, and by creating materials to help educators use AI capabilities safely

5. Producing a report on Artificial Intelligence’s labour market implications and investigating how the federal government could help workers affected by a labour market disruption


6. Promoting innovation and competition by increasing AI research funding for climate change and modernising the criteria that allow highly skilled immigrant workers with vital expertise to stay in the U.S.

7. Implementing global AI standards with foreign partners

8. Guiding federal agencies' AI use and procurement and speeding up government recruiting of AI experts

The directive is “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust,” White House Deputy Chief of Staff Bruce Reed stated.


It builds on voluntary commitments that prominent AI companies made to the White House and is the first binding government action on the technology. It precedes a U.K. AI safety summit.


Fifteen major American technology companies have agreed to voluntary AI safety commitments, but the senior administration official said that "is not enough" and that Monday's executive order is a first step towards concrete regulation of the technology's development.


“The President, several months ago, directed his team to pull every lever, and this order does that: bringing the federal government’s power to bear in a wide range of areas to manage artificial intelligence’s risk and harness its benefits,” the official added.


In a Monday White House speech, Biden said he'll meet with Senate Majority Leader Chuck Schumer, D-N.Y., and a Schumer-led bipartisan group on Tuesday. He said the meeting will "underscore the need for congressional action."



“This executive order represents bold action, but we still need Congress to act,” Biden added.


Biden's executive order requires large corporations to share safety test results with the government before releasing AI systems. It also prioritises the National Institute of Standards and Technology's AI "red-teaming" requirements for stress-testing systems for defects and vulnerabilities. The Commerce Department will create guidelines for watermarking AI-generated content.


The order also addresses training data for large AI systems and requires authorities to assess how they gather and use commercially accessible data, including data from data brokers, especially where it contains personal information.


The Biden administration is also expanding its AI workforce. The senior administration official said federal government job listings for AI experts will begin appearing Monday.


The administration official said Sunday that the “most aggressive” timeline for some safety and security parts of the order is 90 days, while others might take a year.

Monday's presidential order follows months of White House action on AI, including research efforts and recommended guidelines.


Since ChatGPT went viral in November 2022, it has become the fastest-growing consumer app in history, according to a UBS study. Generative AI has caused public worries, legal disputes, and legislative questions. Microsoft was criticised for toxic speech days after integrating ChatGPT into Bing, and popular AI picture generators have been criticised for racial bias and stereotypes.


On the Sunday press call, the administration official said Biden’s executive order requires the Department of Justice and other federal agencies to set criteria for “investigating and prosecuting civil rights violations related to AI.”



The official said the President’s executive order compels landlords, government assistance programmes, and federal contractors to receive specific guidance to prevent AI algorithms from worsening discrimination.


In August, the White House challenged thousands of hackers and security experts to probe the top generative AI models from OpenAI, Google, Microsoft, Meta, and Nvidia for flaws. The contest was hosted at DefCon, the world's largest hacking conference.


“It is accurate to call this the first-ever public assessment of multiple LLMs,” a White House Office of Science and Technology Policy spokesman told CNBC.

The competition follows a July White House meeting with seven leading AI companies: Alphabet, Microsoft, OpenAI, Amazon, Meta, Anthropic, and Inflection.

Each company left the meeting agreeing to a set of voluntary AI development commitments, including allowing independent experts to assess tools before their public debut, researching AI-related societal risks, and allowing third parties to test system vulnerabilities, like at DefCon.
