AI & Cloud Security: Two Growing Forces Collide

AI & Cloud Security are two hyper-growth industries that are not only foundational to our future but also share striking similarities in the battles being fought right now between open source startups & proprietary incumbents. They're on an unavoidable collision course!
Today's 🧵 explores those similarities on the open source front, shares a few ways AI will be used to further enhance cloud security, and looks at the new challenges security professionals must overcome. Let's dig in 👇
A battle over Open Source Software (OSS) in AI & Cloud Security rages daily between startups that fundamentally believe these technologies should be open source first and proprietary incumbents that gatekeep the tech & pursue closed commercial GTM strategies! The similarities:
OSS in AI & ☁️ a. Transparency & trust: In AI, open sourcing algorithms lets the community understand how decisions are made and detect biases. In ☁️ sec, open sourcing tools gives the community a common baseline for fighting threats!
b. Cost savings fuel innovation: In AI, OSS libraries save companies from having to develop their own algorithms, which jumpstarts vertical startups in the space. In ☁️ sec, OSS can consolidate the 75+ expensive proprietary tools organizations use on average.
c. Community development: In AI, open source libraries such as TensorFlow and PyTorch are widely used by a large community of developers. In ☁️ sec, OSS tools such as ThreatMapper are used and improved upon by the community.
Beyond the parallel battles over OSS at the heart of each industry, AI & cloud security are on a collision course because of the amplifying effect AI will have on attacks from threat actors & on the corresponding detection and response efforts in the ☁️!
First, AI can accelerate and exponentially expand the toolkit of threat actors. a. It can be used to write polymorphic malware - infosecurity-magazine.com/news/chatgpt-creates-polymorphic-malware/ b. Cybercriminals have already been detected using it in the wild for a number of purposes - research.checkpoint.com/2023/opwnai-cybercriminals-starting-to-use-chatgpt/
Second, AI can be used for detection & response efforts in the ☁️. a. Writing steps for remediating compliance configuration issues. b. Writing detection and response playbooks for SIEM/SOAR. c. Analyzing large stores of threat intel found across the dark web.
d. Detecting behavioral abnormalities. e. Modeling the behavior of human and machine identities to identify anomalies and inappropriate actions. f. Modeling the risk surface of exposed assets and assessing vulnerabilities & exposures that might arise from config changes. (A rough sketch of d & e follows below.)
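To make d & e concrete, here's a minimal sketch of flagging anomalous identity behavior from cloud audit logs. It assumes Python with scikit-learn; the event fields, feature choices, and numbers are purely illustrative assumptions, not Deepfence's implementation or a production pipeline.

```python
# Minimal sketch: flagging anomalous identity behavior in cloud audit logs.
# Assumes scikit-learn is available; the fields and features below are
# illustrative only, not a production detection pipeline.
from sklearn.ensemble import IsolationForest
import numpy as np

# Hypothetical per-identity, per-hour features extracted from audit logs
# (e.g. CloudTrail-style events):
# [api_calls, distinct_services, failed_auths, new_regions]
baseline = np.array([
    [120, 4, 0, 0],
    [ 95, 3, 1, 0],
    [140, 5, 0, 0],
    [110, 4, 0, 1],
    [100, 3, 2, 0],
])

# Fit on known-good behavior, then score new activity windows.
model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

new_activity = np.array([
    [115, 4, 1, 0],    # looks like the baseline
    [900, 12, 35, 3],  # burst of calls, failures, and new regions
])

for window, label in zip(new_activity, model.predict(new_activity)):
    verdict = "anomalous" if label == -1 else "normal"
    print(window, "->", verdict)
```

The idea is simply to learn a baseline of normal per-identity activity and surface windows that deviate sharply from it; a real deployment would feed far richer features and tune the anomaly threshold.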
The AI in Cybersecurity Market was estimated at USD 14.1 Bn in 2022 and is projected to reach USD 41.94 Bn by 2027, growing at a CAGR of 24.36%. The market is HUGE and the industries will collide. The critical question: will that future be an open one?
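A quick back-of-the-envelope check of that projection (a sketch in Python; the 2022 base, CAGR, and five-year horizon are taken from the figures above):

```python
# Sanity check: USD 14.1B growing at a 24.36% CAGR over 5 years (2022 -> 2027).
start, cagr, years = 14.1, 0.2436, 5
projected = start * (1 + cagr) ** years
print(round(projected, 2))  # ~41.94, matching the USD 41.94B projection
```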
To find out more about how @deepfence supports an open future in ☁️ 🔐 and utilizes AI & ML to improve detection & response, schedule a quick call with our Head of Product, @ryancsmith2222: go.deepfence.io/15-minute-demo