The cybersecurity world has never been short on tools. Most security teams already have dashboards, scanners, endpoint platforms, cloud logs, identity alerts, SIEM systems, ticketing workflows, and more signals than they can realistically handle in a normal day. The problem is not only that threats are growing. The bigger issue is that security teams are being asked to move faster while still doing too much of the work by hand.
That is the problem Lior Div is trying to solve with 7AI.
Div is not a new name in cybersecurity. He is widely known as the co-founder of Cybereason, a company built around helping enterprises detect and respond to advanced cyber threats. With 7AI, he is taking on a newer challenge: bringing autonomous AI agents into security operations so human analysts can spend less time buried in repetitive work and more time making high-value decisions.
The idea sounds simple at first. Let AI agents handle the heavy, repetitive, time-sensitive parts of security operations. In reality, it points to a much deeper shift in how the modern security operations center, often called the SOC, could work in the years ahead.
Who is Lior Div?
Lior Div is a cybersecurity entrepreneur with a long track record in enterprise security. Before launching 7AI, he co-founded Cybereason, where he helped build a company focused on detecting, investigating, and stopping cyberattacks inside large organizations.
That background matters because cybersecurity is not an industry where trust comes easily. A founder building in this space needs to understand how attackers think, how defenders work, and how enterprise buyers evaluate risk. Div’s earlier work gave him direct experience with the pain points that security teams face every day, from alert overload to slow investigations and the constant pressure to stop threats before they spread.
With 7AI, Div is building from that experience, but he is not simply repeating the same playbook. He is stepping into a different cybersecurity era, one where AI is not just a feature inside a security product. It is becoming part of the operating model itself.
Why Lior Div started 7AI
The modern SOC is under pressure from every side. Security teams are expected to protect cloud systems, endpoints, identities, SaaS apps, remote employees, APIs, and growing amounts of sensitive data. At the same time, attackers are becoming faster, more automated, and more creative.
For many teams, the daily work still involves a large amount of manual effort. Analysts review alerts, collect context, check logs, compare signals, decide whether something is serious, and then document or escalate the issue. This work is important, but it can also be repetitive and exhausting.
That is where 7AI enters the picture.
The company is built around the belief that AI agents can take on large parts of security investigation work. Instead of only showing alerts to a human analyst, an AI agent can help investigate what happened, gather supporting evidence, enrich the alert with context, and recommend or take the next step depending on the workflow.
For Div, the opportunity is not just to make existing tools slightly faster. The bigger idea is to rethink how security work gets done when AI agents can operate across tools, data sources, and workflows.
What 7AI does in cybersecurity
7AI builds AI agents for security operations. These agents are designed to support SOC teams by handling tasks that normally take analysts a lot of time, especially around alert investigation, triage, enrichment, and response.
In plain terms, 7AI is trying to help security teams answer questions like these faster:
Is this alert real or just noise?
What systems, users, or applications are involved?
Has this behavior appeared anywhere else?
What evidence supports the decision?
What should happen next?
Traditional automation usually follows fixed rules. It can be useful, but it often breaks down when an investigation becomes messy or unfamiliar. Agentic AI is different because it can work through a task with more flexibility. An AI agent can gather information, reason through the steps, use available tools, and adapt as more context appears.
That does not mean humans disappear from the process. In cybersecurity, trust and control still matter. The stronger vision is a SOC where human analysts supervise, guide, and improve AI agents while the agents handle much of the repetitive work at machine speed.
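That supervised workflow can be pictured as a simple loop: the agent enriches an alert, proposes an action, and a human gate approves or overrides it. The sketch below is purely illustrative and is not 7AI's actual product or API; every name in it (`Alert`, `enrich`, `recommend`, `run_agent`) is a made-up assumption.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str
    severity: str
    evidence: list = field(default_factory=list)

def enrich(alert: Alert) -> Alert:
    # Gather supporting context from (hypothetical) tool integrations.
    alert.evidence.append(f"context pulled for {alert.source}")
    return alert

def recommend(alert: Alert) -> str:
    # A real agent would reason over the evidence; this sketch
    # simply maps severity to a next step.
    return "isolate_host" if alert.severity == "high" else "close_as_noise"

def run_agent(alert: Alert, approve) -> str:
    # Human-in-the-loop gate: the analyst approves the proposed
    # action or the case is escalated for manual review.
    alert = enrich(alert)
    action = recommend(alert)
    return action if approve(action) else "escalate_to_analyst"
```

For example, `run_agent(Alert("endpoint-42", "high"), approve=lambda a: True)` returns `"isolate_host"`, while a rejected recommendation falls back to `"escalate_to_analyst"`. The point of the shape, not the details, is what matters: the agent does the legwork at machine speed, and the human keeps control of the consequential step.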
How autonomous agents are changing the security operations center
The SOC has always been a place where speed matters. A delayed investigation can give an attacker more time to move through a network, steal data, or disrupt systems. But speed is hard to achieve when analysts are buried under thousands of alerts.
Autonomous agents can change that workflow by taking the first pass at investigation. Instead of waiting for a human to manually open every alert, collect data, and check each tool, an AI agent can begin the process immediately.
This can help in several practical ways.
It can reduce alert fatigue by filtering out low-value noise before it reaches analysts.
It can speed up investigation by pulling together context from different systems.
It can make response more consistent by following approved playbooks.
It can give junior analysts stronger support while letting experienced analysts focus on harder cases.
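The "first pass" described above amounts to deduplicating alerts, suppressing low-value noise, and enriching what remains before a human sees it. Here is a minimal sketch of that idea; the field names and the risk threshold of 30 are assumptions for illustration, not anything from 7AI.

```python
def first_pass(alerts, seen=None):
    """Deduplicate, filter, and enrich alerts before they reach analysts."""
    seen = set() if seen is None else seen
    queue = []
    for alert in alerts:
        key = (alert["rule"], alert["host"])
        if key in seen or alert["risk"] < 30:
            # Duplicate or low-value noise: suppress it so analysts
            # only see alerts worth a human decision.
            continue
        seen.add(key)
        # Enrichment step: pull context a human would otherwise
        # have to look up by hand across several tools.
        alert["context"] = f"owner and history looked up for {alert['host']}"
        queue.append(alert)
    return queue
```

Feeding in three alerts where two share a rule/host pair and one scores below the threshold leaves a single enriched alert in the analyst queue. Real systems are far messier, but the division of labor is the same: software absorbs the repetitive filtering, humans get the judgment calls.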
This is why 7AI is important in the broader cybersecurity conversation. The company is not only selling another security dashboard. It is pushing toward a model where AI agents become active participants in the SOC.
Lior Div’s vision for an AI-native cybersecurity company
Many companies are adding AI features to existing products. 7AI is different because it is being built as an AI-native cybersecurity company from the start.
That distinction matters. A company built around AI agents has to think differently about product design, workflow, trust, and customer value. It has to ask what work should be handled by software, what should stay with humans, and how the two should work together in a real security environment.
Lior Div brings a useful mix of founder experience and cybersecurity depth to that challenge. He has already seen how complex enterprise security can be. He knows that security teams do not adopt tools just because they sound advanced. They adopt tools that can prove value, fit into existing workflows, and reduce risk instead of creating new uncertainty.
That is especially important with autonomous agents. In a normal business workflow, a small AI mistake might be annoying. In cybersecurity, a mistake can be serious. A false negative can miss a real attack. A false positive can waste time or interrupt business operations. A poorly controlled response can create more damage than the original threat.
For 7AI to succeed, its agents need to be fast, but they also need to be reliable, explainable, and useful inside the real pressure of security operations.
The funding milestone that put 7AI in the spotlight
7AI gained major attention after raising a $130 million Series A round in December 2025. The round was led by Index Ventures, with participation from Blackstone Innovations Investments, Greylock, CRV, and Spark Capital.
For a young cybersecurity company, that kind of funding sends a strong signal. It shows that investors see agentic cybersecurity as more than a trend. They see a real market need as enterprises look for ways to handle more threats without simply adding more people, more dashboards, and more complexity.
The funding also gives 7AI room to grow. It can support product development, hiring, customer expansion, research, and deeper integrations with the tools that security teams already use.
But the funding itself is only part of the story. The bigger achievement is the timing. 7AI is building at a moment when enterprises are actively asking how AI can improve security operations. Many companies are already experimenting with copilots and automation. The next step is moving from AI that assists humans to AI agents that can carry out security work with greater autonomy.
Why 7AI matters for SOC teams
For SOC teams, the value of 7AI comes down to time, focus, and scale.
Security analysts often spend a large part of their day sorting through alerts that may not lead anywhere. Some are duplicates. Some are low-risk. Some lack enough context. Others require several manual checks before an analyst can decide whether the issue matters.
If AI agents can take over more of that early work, the human team gets breathing room. Analysts can focus on complex incidents, threat hunting, strategy, and decisions that require experience and judgment.
This could also help with burnout. SOC work can be intense, especially when teams face constant alerts, after-hours incidents, and pressure from leadership to reduce risk quickly. By removing some of the repetitive load, AI agents may make the job more sustainable.
For CISOs and security leaders, the appeal is also clear. They want stronger security outcomes without endlessly expanding headcount or adding tools that create more noise. A platform that can process alerts, investigate issues, and support response workflows could become a serious advantage.
How 7AI fits into the rise of agentic cybersecurity
The phrase agentic cybersecurity refers to the use of AI agents that can work through security tasks with a level of autonomy. These agents are not just answering questions in a chat box. They are designed to take action, use tools, follow workflows, gather evidence, and complete security tasks.
This trend is growing because the old SOC model is struggling under the weight of modern threats. Security teams need more than visibility. They need action. They need systems that can help turn alerts into decisions and decisions into outcomes.
7AI fits directly into that shift. Its focus on autonomous security agents places it in one of the most active areas of cybersecurity innovation. The company is aiming at a clear pain point: too much security work still depends on manual investigation at a time when attackers are moving faster.
As AI becomes more common on both sides of cybersecurity, defenders will need tools that can keep up. Attackers can use automation to write phishing emails, scan for weaknesses, generate malware variations, and move quickly across systems. Defensive teams need their own speed advantage, and AI agents may become a key part of that response.
What makes Lior Div’s second cybersecurity chapter important
The story of Lior Div and 7AI is interesting because it is not just a founder launching another startup. It is a second major cybersecurity chapter built on lessons from the first.
With Cybereason, Div helped shape a previous wave of enterprise defense. That era focused heavily on detection, endpoint visibility, and helping teams understand attacks across their environments.
With 7AI, the focus is different. The question is no longer only whether a platform can detect a threat. The question is whether AI agents can help security teams investigate, decide, and act faster than before.
That makes 7AI part of a bigger shift from security visibility to security execution. Dashboards still matter. Data still matters. Detection still matters. But security teams increasingly need systems that can do more of the work after the alert appears.
Div’s experience gives this story more weight. He has seen how large companies adopt cybersecurity products. He understands that the best technology does not win if it does not fit the way teams actually work. That practical founder experience may help 7AI avoid becoming just another AI promise in a crowded market.
Challenges 7AI will need to solve
The opportunity for 7AI is large, but the company will also face serious challenges.
The first challenge is trust. Security teams will not hand important work to AI agents unless they can understand what the agents are doing and why. Analysts need clear evidence, transparent reasoning, and controls that allow them to approve or adjust actions.
The second challenge is integration. A SOC is rarely clean or simple. Large companies use many different security tools, cloud platforms, identity systems, and ticketing processes. For AI agents to be useful, they need to work across that environment without creating more complexity.
The third challenge is accuracy. Cybersecurity is full of edge cases. Alerts can be noisy. Data can be incomplete. Attackers can behave in unusual ways. AI agents need to handle uncertainty carefully and avoid overconfident decisions.
The fourth challenge is enterprise adoption. Many organizations are excited about AI, but security buyers are cautious for good reason. 7AI will need to show measurable improvements in speed, workload reduction, investigation quality, and security outcomes.
These challenges do not weaken the story. They make it more realistic. Any company trying to bring autonomous agents into cybersecurity must prove that the technology can work in real environments, not only in demos.
Why Lior Div and 7AI are worth watching
Lior Div is building 7AI at a moment when cybersecurity teams are ready for a new way of working. The pressure on SOC teams is growing, the volume of alerts is still a major problem, and AI is changing what both attackers and defenders can do.
The company’s focus on autonomous agents gives it a strong position in the next phase of cybersecurity. If 7AI can help analysts move faster, reduce repetitive work, and make investigations more consistent, it could become an important name in the future of security operations.
Div’s track record with Cybereason adds credibility to the effort, but 7AI will need to earn trust through product performance and real customer outcomes. That is what makes the company’s journey worth following. It sits at the center of a bigger question facing the industry: how much of cybersecurity work can AI agents take on, and how should humans guide them?
For now, Lior Div is betting that autonomous agents will not simply support the SOC. They will help redefine it.