In brief Google’s AI-powered internet search chatbot, Bard, isn’t available for netizens in the EU and Canada.
CEO Sundar Pichai announced Bard had expanded its coverage to 180 countries at this year’s Google I/O conference, and it currently supports three languages: English, Korean, and Japanese. But the European Union and Canada are missing from the list of countries and territories that can use the software.
Chatbots like Bard are new, and their impact is still being studied while governments around the world ramp up efforts to regulate the technology. Officials in Italy, Spain, France, Germany, and Canada have launched investigations into ChatGPT over data privacy concerns.
Experts are concerned that the tools can extract personal information that has leaked onto the internet, and it’s not clear how any data they process is used and stored. They may indeed be violating the EU’s GDPR rules.
Italy has since lifted its ban, but the upcoming AI Act suggests companies building Bard-like chatbots will be subject to rules that require they disclose any outputs generated by AI, and roll out filters that prevent them from producing illegal or copyrighted content.
For its part, Google is pressing on. “We’ll gradually expand to more countries and territories in a way that is consistent with local regulations and our AI principles,” the Chocolate Factory confirmed.
Deepfakes on the dark web
Demand for realistic AI deepfake videos – from crooks hoping to run cryptocurrency scams or defeat online biometric verification systems to steal money – is on the rise.
Cybersecurity biz Kaspersky trawled dark web forums for posts seeking developers to generate deepfakes mimicking targets like celebrities or politicians, and found they can charge anywhere from $300 to $20,000 depending on the complexity of the job. Analysts found that miscreants most often wanted fake content promoting cryptocurrency scams, material to crack into people’s online accounts – or, of course, pornography.
“Our research found that there’s a significant demand for deepfakes – which far outweighs the supply of them. Individuals who’ll agree to create fake videos are being desperately searched for,” Kaspersky observed in a blog post. “And this is quite disturbing, since, as we all know, demand creates supply; thus, we predict that in the nearest future we’ll indeed see a significant increase in incidents involving high-quality deepfakes.”
The security vendor also said, however, it was optimistic that tools detecting whether a video is authentic will eventually become widely available to tackle fraud, identity theft, and disinformation.
“The most obvious but depressing advice is simply ‘never trust your eyes or ears ever again.’ However, there is hope. The same artificial intelligence technologies that are helping create deepfakes can be used to distinguish genuine videos, pictures and audio from the fakes. And such tools are slowly emerging on the market. Let’s hope that in the nearest future media outlets, messengers and maybe even browsers will be equipped with such technologies.”
Expanding Claude’s context window
Anthropic’s large language model, Claude, can now process up to 100,000 tokens. That means users can submit hundreds of pages of documents for it to analyze in one go.
A model’s context window – the amount of text, measured in tokens (sequences of characters, like words), that it can take in when generating an output – affects its performance and capabilities. A model with a larger context window can handle higher volumes of text and carry out more complex tasks, like search or summarization.
“The average person can read 100,000 tokens of text in ~5+ hours, and then they might need substantially longer to digest, remember, and analyze that information. Claude can now do this in less than a minute,” the startup explained.
“For example, we loaded the entire text of The Great Gatsby into Claude-Instant (72K tokens) and modified one line to say Mr Carraway was ‘a software engineer that works on machine learning tooling at Anthropic.’ When we asked the model to spot what was different, it responded with the correct answer in 22 seconds.”
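Anthropic’s “~5+ hours” claim is easy to sanity-check with back-of-envelope arithmetic. The figures below are illustrative assumptions, not Anthropic’s numbers: roughly 0.75 English words per token and an average reading speed of about 250 words per minute.

```python
# Back-of-envelope check: how long would a human take to read 100,000 tokens?
# Assumed conversion factors (not from Anthropic):
WORDS_PER_TOKEN = 0.75     # rough average for English text
WORDS_PER_MINUTE = 250     # typical adult reading speed

def reading_time_hours(tokens: int) -> float:
    """Estimate human reading time in hours for a given token count."""
    words = tokens * WORDS_PER_TOKEN
    return words / WORDS_PER_MINUTE / 60

print(f"{reading_time_hours(100_000):.1f} hours")  # prints "5.0 hours"
```

Under those assumptions, 100,000 tokens works out to about 75,000 words, or five hours of solid reading – consistent with the startup’s estimate.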
A larger context window opens up new applications, such as summarizing financial reports, research papers, or court filings. Users can call the company’s API to pull relevant information out of a document without having to read through pages of text first, and developers could quiz Claude on technical documentation to better understand a particular section.
Bots are listening to AI-generated songs, and Spotify doesn’t like that
Tens of thousands of AI-generated tracks uploaded to Spotify have been removed as the audio streaming platform tackles bot accounts artificially inflating listener numbers, amid wider copyright concerns.
Universal Music, a major record label, alerted officials to suspicious activity on tracks created by Boomy on Spotify. Boomy creates tools to help people make AI-generated music, but listener counts for tens of thousands of its songs appeared to increase miraculously over a short period of time – suggesting that numbers had been inflated by online bots.
After an investigation, Spotify removed 7 percent of the tracks uploaded by Boomy, according to the Financial Times. “Artificial streaming is a longstanding, industry-wide issue that Spotify is working to stamp out across our service,” it said. AI-generated music is controversial – especially if it clearly copies styles from human artists.
Universal Music urged streaming services to take down a fake rap song that ripped off Drake and The Weeknd’s voices, saying it violates copyright law. Spotify, SoundCloud, and YouTube reportedly removed the track, though copies can still be found online.
The label’s CEO, Lucian Grainge, reportedly told investors that “the recent explosive development in generative AI will, if left unchecked, both increase the flood of unwanted content on platforms and create rights issues with respect to existing copyright law.” ®