{"id":2780917,"date":"2023-07-21T17:00:38","date_gmt":"2023-07-21T21:00:38","guid":{"rendered":"https:\/\/wordpress-1016567-4521551.cloudwaysapps.com\/plato-data\/google-tests-new-ai-tool-capable-of-writing-news\/"},"modified":"2023-07-21T17:00:38","modified_gmt":"2023-07-21T21:00:38","slug":"google-tests-new-ai-tool-capable-of-writing-news","status":"publish","type":"station","link":"https:\/\/platodata.io\/plato-data\/google-tests-new-ai-tool-capable-of-writing-news\/","title":{"rendered":"Google Tests New AI Tool Capable of Writing News"},"content":{"rendered":"

As the AI wave continues to boom with new updates every day, major AI-focused tech companies have voluntarily agreed with the White House to commitments on responsible AI development. <\/strong><\/p>\n

AI companies were scheduled to meet at the White House on Friday, July 21, to announce their voluntary commitments, after which President Biden would deliver remarks. The promises include investments in cybersecurity and in watermarking systems that show whether content is AI-generated.<\/p>\n

Since OpenAI launched ChatGPT in November last year, a spate of new AI systems has been unleashed onto the market, worrying many about the potential risks the technology may pose to humanity. This has led global leaders to scramble to come up with regulatory frameworks that govern the industry in a way that encourages innovation while keeping it safe.<\/p>\n

Big names<\/h2>\n

In light of this, executives of seven big tech companies \u2013 Meta, Microsoft, OpenAI, Amazon, Google, Anthropic, and Inflection \u2013 have agreed to address the many risks that AI poses to humanity.<\/p>\n

\u201cUS companies lead the world in innovation, and they have a responsibility to do that and continue to do that, but they have an equal responsibility to ensure that their products are safe, secure and trustworthy,\u201d White House chief of staff Jeff Zients told<\/a> NPR.<\/p>\n

\n

\u201cWe will use every lever that we have in the federal government to enforce these commitments and standards. At the same time, we do need legislation,\u201d he added.<\/p>\n<\/blockquote>\n

However, it is not yet clear how the government will verify that the companies meet their commitments, or what action it would take if they fail to do so.<\/p>\n

Also read: ChatGPT\u2019s Performance Declines: The Quest for Balance<\/a><\/em><\/strong><\/p>\n

The commitments<\/h2>\n

The commitments center on safety, information sharing, and transparency, as well as reporting vulnerabilities as soon as they arise.<\/p>\n


According to reports, the White House<\/a> sees the development as an attempt to strike a balance between the supposed benefits of AI and the risks associated with the technology.<\/p>\n

This comes as the government has been lobbying for safeguards to be put in place. Zients also pointed to the need for pressure-testing products, safeguarding the market against cyberattacks, and preventing discrimination against particular groups of people.<\/p>\n

The tech firms themselves have committed to third-party testing of their products before release, although there is no clarity yet on who the third parties will be or how they will be selected.<\/p>\n

Additionally, the firms are taking responsibility for ensuring users can distinguish<\/a> between AI-generated content and original human-made content.<\/p>\n

Last month, the EU asked online platforms<\/a> to label AI-generated content amid moves to combat disinformation.<\/p>\n

A starting point<\/h2>\n

TS2<\/a> says the voluntary commitments expose the limitations of \u201cwhat the Biden administration can do to regulate advanced AI models.\u201d<\/p>\n

The White House<\/a>, however, views this as a stepping stone.<\/p>\n

\u201cThe commitments the companies are making are a good start, but it\u2019s just a start,\u201d said Zients.<\/p>\n

\u201cThe key here is implementation and execution in order for these companies to perform and earn the public\u2019s trust.\u201d<\/p>\n

However, there are concerns that the approach of heavily involving big tech in regulating the sector could backfire, as the companies will seek to shape the rules to their own benefit.<\/p>\n

Ifeoma Ajunwa, a law professor at Emory who studies the intersection of technology and work, said the approach was \u201cdisappointing.\u201d<\/p>\n

\n

\u201cWe also want to ensure that we are including other voices that don\u2019t have a profit motive,\u201d she said. \u201cWe should definitely invite corporate leaders and tech titans to be part of this conversation, but they should not be leading the conversation.\u201d<\/p>\n<\/blockquote>\n

Victor Menaldo, a political economy professor at the University of Washington, warned that involving big tech firms in crafting the regulatory framework is risky, as they may take advantage of it to elbow out up-and-coming businesses.<\/p>\n

\u201cThe bigger established firms can kind of game it to benefit them, and the newcomers don\u2019t have a say,\u201d he said.<\/p>\n

\u201cBig companies love to do these kind of things because they\u2019re already established, so they\u2019re like \u2018Oh, the rules of the road are going to benefit us.\u2019\u201d<\/p>\n