The heads of Google, Microsoft, and two other companies working on artificial intelligence met with Vice President Kamala Harris on Thursday as the Biden administration launched initiatives to ensure that the rapidly evolving technology improves lives without jeopardizing people’s rights and safety.
Commercial investment has surged in AI tools that can write convincingly human-like prose and generate new graphics, music, and computer code since the debut of the well-known AI chatbot ChatGPT late last year.
According to White House officials on Thursday, even President Joe Biden has tried it.
However, the ease with which it can pass for a human has also prompted governments worldwide to think about how it could rob people of their jobs, deceive citizens, and spread misinformation.
The Democratic administration announced a $140 million investment to establish seven new AI research institutes.
Additionally, the White House Office of Management and Budget is expected to release guidance in the coming months on how federal agencies can use AI tools.
Top AI developers have also separately agreed to submit their systems to public evaluation in August at the DEF CON hacker convention in Las Vegas.
According to Adam Conner of the liberal Center for American Progress, the White House must also take further action because the AI systems developed by these companies are being incorporated into hundreds of consumer applications.
In the coming months, Conner believes, “we’ll decide whether or not we lead on this or cede leadership to other parts of the world, as we have in other tech regulatory spaces like privacy or regulating large online platforms.”
Harris and other administration officials met on Thursday with Google CEO Sundar Pichai, Microsoft CEO Satya Nadella, and the leaders of two influential startups, Microsoft-backed OpenAI and Google-backed Anthropic, to address the dangers they perceive in current AI development.
The message from the administration's top officials to the companies is that they have a role to play in reducing risks and can work with the government to do so.
After the closed-door meeting, Harris said in a statement that she told the executives that "the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products."
British authorities also said Thursday that they are examining the risks of AI.
Britain's competition watchdog announced that it had opened a review of the AI market, focusing on the technology underlying chatbots such as OpenAI's ChatGPT.
Last month, President Joe Biden noted that while AI can help combat disease and climate change, it also has the potential to threaten national security and destabilize the economy.
Biden also briefly attended Thursday's meeting. The president has been thoroughly briefed on ChatGPT and knows how it works.
Concerns regarding automated systems’ ethical and societal implications have grown due to a flurry of new “generative AI” tools like chatbots and image generators.
Some companies, including OpenAI, have kept secret the data they use to train their AI systems.
This has made it harder to understand why a chatbot gives incorrect or biased answers, or to address questions about whether it is plagiarizing copyrighted works.
According to Margaret Mitchell, chief ethics scientist at the AI company Hugging Face, organizations worried about being held accountable for something in their training data may also lack incentives to track it carefully.
“I think it might not be possible for OpenAI to detail all of its training data at a level of detail that would be useful in terms of some of the concerns around consent, privacy, and licensing,” Mitchell said in a Tuesday interview.
“That just isn’t done, from what I understand of tech culture.”
In theory, a disclosure rule could compel AI providers to open their platforms to greater outside scrutiny.
However, because AI systems are built on top of earlier models, it will not be easy for companies to provide more transparency after the fact.