Google CEO Sundar Pichai speaks in conversation with Emily Chang during the APEC CEO Summit at Moscone West on November 16, 2023, in San Francisco, California. The APEC summit is being held in San Francisco and runs until November 17.
Justin Sullivan | Getty Images News | Getty Images
MUNICH, Germany – Rapid advances in artificial intelligence could help strengthen defenses against security threats in cyberspace, according to Google CEO Sundar Pichai.
Amid growing concerns about the potentially nefarious use of AI, Pichai said AI tools can help governments and companies detect and respond more quickly to threats from hostile actors.
“We are right to be worried about the impact on cybersecurity. But I think AI actually strengthens our defense on cybersecurity,” Pichai told delegates at the Munich Security Conference last weekend.
Cyberattacks are increasing in volume and sophistication as malicious actors increasingly use them as a way to exert power and extort money.
According to cyber research firm Cybersecurity Ventures, cyberattacks caused an estimated $8 trillion loss to the global economy in 2023 – a figure that is expected to grow to $10.5 trillion by 2025.
A January report from Britain’s National Cyber Security Centre – part of GCHQ, the country’s intelligence agency – said AI would only amplify those threats, lowering the barrier to entry for cyber hackers and enabling more malicious cyber activity, including ransomware attacks.
“AI disproportionately helps the people doing defense because you’re getting a tool that can have a bigger impact than the people trying to exploit.”
Sundar Pichai
CEO of Google
However, Pichai said AI is also shortening the time defenders need to detect and respond to attacks. This, he said, would ease the defender’s dilemma, whereby attackers have to succeed only once to breach a system while defenders have to succeed every time to protect it.
“AI disproportionately helps the people doing defense because you’re getting a tool that can have a bigger impact than the people trying to exploit,” he said.
“So, in some ways, we are winning the race,” he said.
Google last week announced a new initiative offering investments in AI tools and infrastructure designed to boost online security. A free, open-source tool called Magika aims to help users detect malware, or malicious software, while a white paper proposes measures and research to build guardrails around AI, the company said in a statement.
Pichai said the tools are already in use in the company’s products, such as Google Chrome and Gmail, as well as its internal systems.

“AI is at a definitive crossroads – one where policymakers, security professionals, and civil society have a chance to finally tilt the cybersecurity balance from attackers to cyber defenders,” the company said.
The release comes as major companies signed an agreement at the MSC pledging to exercise “due diligence” to prevent AI tools from being used to disrupt democratic votes in 2024’s bumper election year and beyond.
Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok and others were among the signatories of the agreement.
It comes as the internet becomes a significant sphere of influence for both individual and state-backed malicious actors.
Former US Secretary of State Hillary Clinton on Saturday described cyberspace as “a new battlefield”.
“The technology arms race has gone up another notch with generative AI,” she said in Munich.
“If you can run a little faster than your opponent, you’ll perform better. That’s really what AI is giving us defensively.”
Mark Hughes
President of Security at DXC
A report published last week by Microsoft found that state-backed hackers from Russia, China and Iran have been using large language models (LLMs) from its partner OpenAI to step up their efforts to deceive targets.
Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments are all said to have relied on the tools.
Mark Hughes, president of security at IT services and consulting firm DXC Technology, told CNBC that bad actors were increasingly relying on a ChatGPT-inspired hacking tool called WormGPT to conduct tasks such as reverse engineering code.
However, he said he also sees “significant benefits” from similar tools that help engineers detect and reverse-engineer attacks at speed.
“It gives us the ability to accelerate,” Hughes said last week. “Most of the time in cyber, what you have is the time advantage that the attackers have against you. That’s often the case in any conflict situation.”
“If you can run a little bit faster than your opponent, you’ll perform better. That’s really what AI is giving us defensively at the moment,” he said.
