The news article discusses Google's decision to let web publishers choose whether their content can be used to train the company's AI models, such as Bard. Publishers can opt out by adding a rule for the "Google-Extended" user agent to their site's robots.txt file, the standard that tells web crawlers which content they may access. While Google claims to develop its AI in an ethical manner, the article argues that training AI models is a fundamentally different use case from indexing the web: Google has already trained its models on vast amounts of data collected without consent, and is only now asking for permission after the fact in order to appear ethical. It also mentions that Medium has announced its decision to block AI crawlers until a better solution is found.
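The opt-out described above works through an ordinary robots.txt rule. A minimal sketch of what a site-wide block might look like (the rule shown is illustrative, not taken from any particular site):

```
# Block Google's AI-training crawler (the Google-Extended product token)
# from the entire site.
User-agent: Google-Extended
Disallow: /

# Normal search indexing via Googlebot is unaffected by the rule above,
# since Google-Extended is a separate user-agent token.
User-agent: Googlebot
Allow: /
```

Because Google-Extended is a distinct product token from Googlebot, disallowing it controls only AI-training access and does not change how the site appears in Google Search.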
The article is particularly critical of how the choice is framed. Having exploited unfettered access to the web's content for years, Google now presents the opt-out as a gesture of goodwill rather than an acknowledgment that it has been taking something without permission. Given the company's previous actions, the article argues, this framing is not authentic. Medium, meanwhile, has joined other web publishers in blocking AI crawlers until a more granular solution is available.
In conclusion, the article questions the sincerity of Google's new opt-out: the company collected data without consent first and is seeking permission only after the fact, and publishers such as Medium are responding by blocking AI crawlers until a more suitable solution is developed.