Security

Urgent warning to millions of Facebook users as it’s feared hackers have found new way to break into devices


FACEBOOK users have been given an urgent warning as it’s feared hackers have discovered a new way to break into phones.

Meta issued the warning after its security team found malicious software that can give hackers access to personal devices.

Facebook users have been warned that hackers have found a new way to break into devices. Credit: Alamy

The software claimed to offer ChatGPT-based tools for Facebook users but discreetly contained malware designed to give hackers full access to their gadgets.

“From a bad actor’s perspective, ChatGPT is the new crypto,” Meta chief information security officer Guy Rosen said, likening the situation to the surge in cryptocurrency scams.

Since March, Meta said it had blocked the sharing of more than 1,000 malicious web addresses that purported to be linked to ChatGPT and related tools.

Meta said it had “investigated and taken action against malware strains taking advantage of people’s interest in OpenAI’s ChatGPT to trick them into installing malware pretending to provide AI functionality”.

“Our research and that of security researchers has shown time and again that malware operators, just like spammers, try to latch onto hot-button issues and popular topics to get people’s attention,” it said.

“With an ultimate goal to trick people into clicking on malicious links or downloading malicious software, the latest wave of malware campaigns have taken notice of generative AI tools becoming popular.”

ChatGPT has been taking the internet by storm but people are still baffled over how the artificial intelligence app actually works.

The app can generate human-like text through language models after a user feeds it a prompt.

And it can even hold conversations, learning from things that you’ve said.

In some ways, it’s like a search engine – but potentially much more powerful.

However, you should be warned that it’s entirely possible ChatGPT will give you false or misleading information.

“These models were trained on vast amounts of data from the internet written by humans, including conversations, so the responses it provides may sound human-like,” OpenAI explained.

“It is important to keep in mind that this is a direct result of the system’s design (i.e. maximizing the similarity between outputs and the dataset the models were trained on).

“And that such outputs may be inaccurate, untruthful, and otherwise misleading at times.”

OpenAI warns: “ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content.”



