The Ghost in the Machine: An Identity Framework for the Age of Autonomous AI with The Hacker News

Written by
The Hacker News
Published on
April 28, 2026
TL;DR

- Treat AI agent identity as a control plane, not a credential. Cryptographically bind user, device, and agent to every action so each commit, API call, and pipeline trigger carries verifiable provenance.

- Map agent permission drift before it widens. Permissions silently expand as you add tools, prompts, and model upgrades, so audit the agent lifecycle the way you audit privileged service accounts.

- Shift trust to the host, not the network or static credential. Snowflake binds identity to hardware for every commit through Beyond Identity, so the device must prove a trusted state in real time.

- Threat model agentic LLMs as malware. The black box model can act as administrator on the machine, so route every tool invocation through a proxy that enforces device posture and logs prompts.

- Install Ceros to bind the user, device, and agent in one identity. The CLI installs in seconds, captures every prompt and tool call for SOC visibility, and lets you write granular conditions for when an agent can commit code or call MCP servers.

Transcript

Hey, security pros. Welcome back to the Hacker News webinar series. Out of my element in my studio kinda today for this webinar, but not out of my element because I'm in my office. Like, I'm actually being a CISO today, which is kinda weird.

Right? Like, everyone watching, like, you're like, hey. Why aren't you in your studio with your cool lights and books, James? Well, because I have a job I gotta do.

I'll just talk about it. Like, sometimes I gotta be in the office to do it. So thank you all for tuning in. We've got a great group today.

Our friends from Beyond Identity are back, and we're gonna be talking about governing the ghost in the machine today. So, first of all, before we get started, thank you all for taking time out of your day to be here. We really do appreciate you.

Your comments and questions are welcome. Obviously, we prerecorded these webinars, but your questions are valued. They end up in the inbox of our speakers, so they'll be able to answer those questions after the fact. So please don't be afraid to ask.

Don't don't be afraid to kind of share some of your experiences with our speakers. They love to hear it. And speaking of our amazing speakers today, we have a great group here and a small world indeed, like a small world of all things. We'll start with Gaurav.

He's the director at Snowflake working on large scale cloud infrastructure identity systems and automation.

He's focusing on solving real world problems around access governance, NHIs, and operating secure systems in highly dynamic environments. He's also solved many other problems in the past for our friends over at NVIDIA, NetApp, and RSA Security. Nikhil Kara is the principal product manager at Beyond Identity who's working on building scalable identity access platforms for the enterprise. He's worked at plenty of other identity security companies before.

We won't say their names because the only name that matters today is Snowflake and Beyond Identity. That's it. Right? Right, Nikhil?

We're not gonna say anyone else's names.

Sounds great.

No free advertising today.

So as many of you are aware, with the way technology is being adopted today, there are a lot of things that happen in the background of our machines that we aren't always aware of until it becomes a little bit too late. That's the point of today's webinar. I'll pass it off to Nikhil. There's a demo as well.

So make sure if you have to drop at any point period in time, you can always catch this webinar on demand at the hacker news dot com forward slash webinars.

Nikhil, floor is all yours. Awesome. Thanks, James. So just to kick things off, you know, Beyond Identity recently did a research study where we asked fifty-plus CISOs where they see risk with AI.

And of course, when we look at the just the broader ecosystem, what we're seeing is that AI tools have access to a whole host of sensitive environments. But the number one risk, right, where where folks are really concerned is AI having access to source code repositories. James, if you might hit the next slide here.

And the way our security leaders and partners are thinking about investing in solutions is by prioritizing their investments based on the top drivers of that risk. And so what really comes to the top is implementing solutions that can mitigate risk and mitigate threats across the ecosystem.

So with that, if you hit the next slide, James. Gaurav, you know, Beyond Identity and Snowflake have been working really closely together to solve this problem around risk and source code. And so I'd love to have you talk about how you see the problem, let's say, pre-AI, with humans developing products and committing code, and how those problems and risks expand in this new agentic world.

Sure. Yeah. Thanks, Nikhil. Thanks, James, for the introduction. So, you know, first of all, I want to start by acknowledging that all of us have spent decades securing applications and identities.

And I think what we are realizing is the host is still the least governed layer.

It's only becoming more prominent with the advent of AI, because if an attacker owns the host, your IAM, your policies, your controls, none of it matters.

And that's where most enterprises are today: they don't really know what's running on their hosts in real time. And that is kind of the overall problem statement, the ghost that we are talking about governing, because essentially the host is no longer just your infrastructure. It's more of an identity-bearing, policy-enforced, ephemeral security boundary.

And so with that, three things that I want to quickly touch on. First, what's changed isn't the model intelligence, it's the execution path. With agentic AI, agents are no longer advisory.

You know, they can log into systems, they can trigger pipelines, they can call APIs directly.

The moment your AI can take action, identity becomes part of this, what I would say, production control plane.

So why should we care? Why should engineers care? As engineers, we are used to bugs causing incidents. With agents, identity misconfiguration becomes the fastest failure mode. A single over-permissioned agent can propagate mistakes at machine speed.

And then when it comes to agents versus scripts, scripts do what you tell them to do.

Agents with enough intelligence decide how to go and approach their goals. So, you know, this flexibility completely breaks the static security assumptions that we had in the past.

Next slide, James, please.

So, what we are looking at over here is the agent lifecycle, right? Agents don't stay the same. You start with a prototype, you build some agent, it works fine, you give it some more tools, some more autonomy, it does more things for you, it gains your trust, it will do more.

And during this phase is where the biggest danger creeps in, which is that the permissions keep growing. This lifecycle is where the actual identity drift is born. And if you go by history, drift happens when an agent's permissions reflect its past, which is different from its current intent. Again, by the time you realize that, it's already late: it has gained enough trust, it's already doing a lot more things for you, and prompt changes, tool additions, and newer, upgraded models are silently widening the blast radius.
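A rough sketch of how you might surface that drift, purely for illustration: snapshot the scopes an agent was approved for when it was a prototype, and diff them against what it holds today. All function names and scope strings here are invented for the example, not taken from any product.

```python
# Sketch: detect permission drift by diffing an agent's current grants
# against the scopes it was originally approved for. All names here are
# illustrative, not from any specific product.

def audit_drift(approved: set[str], current: set[str]) -> set[str]:
    """Return scopes the agent holds beyond its approved baseline."""
    return current - approved

# Baseline recorded when the agent was a prototype.
approved = {"repo:read", "ci:trigger"}

# Grants accumulated after new tools, prompts, and model upgrades.
current = {"repo:read", "repo:write", "ci:trigger", "secrets:read"}

drift = audit_drift(approved, current)
print(sorted(drift))  # the scopes that silently widened the blast radius
```

Running a check like this on every prompt change, tool addition, or model upgrade is one way to turn the lifecycle described above into an auditable event rather than a silent widening.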

Next slide, James, please. So this slide is pretty much why your traditional IAM model will fail, right? When we talk about these agents, this non-human identity, governing the ghost in our infrastructure: all those assumptions, as you see on the left, in terms of roles, long-lived services, they're all gone. Whatever predictive-behavior assumptions we have in IAM, agents violate all of them. So if your IAM model today depends on set-and-forget, it will fail.

And next slide, please. So what essentially needs to happen is: how do you govern this identity, right? What can you do about it? What you are seeing on the left is pretty much the workflows, how we are committing our code.

We use Beyond Identity's SDO use case at Snowflake, and it has been really successful: the identities are bound to your hardware when an engineer is committing code, the evaluations run against all the device postures, the cryptographic proof comes with every commit, and then there's the centralized security, which, as a security engineer or a CISO, someone is always worried about, because even those policies need to be evaluated from time to time as the landscape changes. So with Beyond Identity's SDO model, what we have done is practically shift trust into the host rather than anything around it, right? Instead of relying on static credentials or network location, the decision point moves to the device posture and the real-time signals that you get from the host.
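The binding idea described here can be sketched in a few lines. This is not Beyond Identity's implementation; real deployments use asymmetric keys held in hardware (a TPM or Secure Enclave), and the stdlib HMAC below is only a stand-in to show how a signature ties a commit to a specific user and device.

```python
# Sketch of the idea: every commit carries a signature produced with a
# key that never leaves the device. Real deployments use asymmetric keys
# in hardware; a plain HMAC here keeps the example stdlib-only.
import hashlib
import hmac
import json

DEVICE_KEY = b"key-material-held-in-hardware"  # illustrative placeholder

def sign_commit(user: str, device_id: str, commit_sha: str) -> str:
    """Bind user, device, and commit into one verifiable signature."""
    payload = json.dumps(
        {"user": user, "device": device_id, "commit": commit_sha},
        sort_keys=True,
    ).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify(user: str, device_id: str, commit_sha: str, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_commit(user, device_id, commit_sha), signature)

sig = sign_commit("gaurav", "laptop-42", "deadbeef")
print(verify("gaurav", "laptop-42", "deadbeef", sig))   # True
print(verify("gaurav", "other-host", "deadbeef", sig))  # False: wrong device
```

The point of the sketch: change any one element of the user/device/commit triple and verification fails, which is what gives each commit verifiable provenance.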

This makes your host continuously prove that it's in a trusted state, and it aligns with the broader idea of governing the host at runtime, which also checks the integrity of the machine. Now, at the end of the day, from a governance perspective, you need continuous enforcement rather than traditional point-in-time access checks. With this SDO use case, we are maintaining that. And it's a good example of moving from "who are you?" to "can your host prove it's trustworthy right now?"

So this is a perfect segue from human commits, which are continuously being monitored, to the agentic workflows that we talked about: how they break the IAM permission model, and how we can govern agentic workflows in a similar fashion with continuous enforcement. And that's where Beyond Identity and Ceros come in. Nikhil, I will let you chime in.

Absolutely. Yeah. No, Gaurav, thanks for that setup. So, James, if you could hit the next slide.

You know, really, where we're going next is: our foundation has been securing human access to critical resources from a host or a device that meets your security compliance requirements and is cryptographically proven to be the device you expect it to be.

When we look at this new agentic world, what we really came up with, kind of taking a step back, is that we now need to extend that same security architecture to agents. And what we've done with our Ceros product is implement this identity-centric architecture to tightly bind the user, the device, and the agent that's acting on behalf of that user, so that any agentic workflow, any action that agent on that host can take, is now fully observable and fully auditable. It takes into consideration the device posture when that agent is taking actions, and you can apply granular policy on the conditions under which you want this agent to be able to, for example, commit code to a repository or take some action on your machine.

And the key difference here is the threat model that we look at when we think about these agents on your machine: this software can effectively act as an administrator on the machine. And so from a threat model perspective, it really mimics malware, except in this case the malware is this black-box LLM. You know, it could be Anthropic, it could be ChatGPT, it could be any kind of LLM that, as Gaurav said earlier, is able to decide and act on what to do.
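Under that threat model, a minimal sketch of the control looks like this: every tool invocation passes through a checkpoint that re-checks device posture and logs the prompt for the SOC before anything executes. The function names and posture signals are assumptions for illustration, not any product's API.

```python
# Sketch: treat the agent like untrusted software and route every tool
# call through a checkpoint that (1) re-checks device posture and
# (2) records the prompt and decision for SOC review. Illustrative only.
from datetime import datetime, timezone

audit_log: list[dict] = []

def device_posture_ok() -> bool:
    # Stand-in for real signals: disk encryption, EDR running, OS patched...
    return True

def guarded_tool_call(tool: str, prompt: str, action):
    """Log the invocation, enforce posture, then run the tool."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
    }
    if not device_posture_ok():
        entry["decision"] = "deny"
        audit_log.append(entry)
        raise PermissionError(f"device out of compliance; blocked {tool}")
    entry["decision"] = "allow"
    audit_log.append(entry)
    return action()

result = guarded_tool_call("git_commit", "commit the refactor", lambda: "committed")
print(result, len(audit_log))  # the call ran and left an audit record
```

Because the agent decides its own steps, enforcement has to happen at each invocation rather than once at login, which is the continuous-enforcement point made earlier.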

And so, James, what I would love to do is just quickly walk the audience through how we're doing the tight binding of the identity, the device, and the agent, so that you can start to control where the risk is in your organization. So what I have here is a quick terminal, and the way that customers can leverage Ceros is by simply installing our CLI. As part of your first-time onboarding, you'll invoke the agent, and we are going to authorize Ceros by verifying that I am who I am. I'm actually already logged in in this session, which is why I didn't authenticate with my identity provider. But now you can see that there is a device-bound credential that tightly binds my identity to this device and this Ceros Claude instance. And so you can just quickly see that.

Interesting.

Oh, that's why. Because I'm spelling Ceros wrong here.

This is the beauty of, like, live demos. Right?

Absolutely. Yep.

Like, you miss one letter in there and everything goes, you know, you're like, oh, why is this not working? It worked a minute ago.

That's right. That's right. So now you can see here, right, that with my identity, I have now tightly bound my user to this device. And now when I type ceros claude, I'm using Claude just as I do normally.

Every action that I now take can only happen from a user who's verified by my identity provider on a device that meets my security posture requirements. And from a posture perspective, we can check all of the key risk signals which determine whether your device meets those requirements throughout that lifecycle. So what I'm gonna quickly bring your attention to is the Ceros admin panel policy, which allows you to write the granular conditions for when you want an agent to be able to take actions. Depending on the posture of the device, Ceros will allow or deny the agent to invoke a tool, launch the agent, or connect to a specific MCP server. And from a first-line-support, SOC perspective, all of the conversations from a machine which has Ceros installed and is using a local agent to the LLM are now captured by our proxy.

And so now you can start to get a first baseline of unified visibility into all the communication, and you can start to detect: hey, are there risky prompts being input into these agentic tools? Is there indication that data is being exfiltrated? Are there risks around agentic usage? And so now with this identity-centric architecture, you're really able to start to control and get a handle on how to safely deploy AI in your environment.
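As an illustration of the kind of granular condition described above, here is a toy policy evaluator: allow an action only when the device posture satisfies the rule's requirements, and default-deny otherwise. The rule shape and signal names are invented for the example, not Ceros's actual schema.

```python
# Illustrative only: a toy evaluator for conditions like "allow the agent
# to commit code only when the device posture is compliant". The rule
# shape is invented, not any product's real policy schema.

RULES = [
    {"action": "commit_code", "require": {"disk_encrypted", "edr_running"}, "effect": "allow"},
    {"action": "connect_mcp", "require": {"disk_encrypted"}, "effect": "allow"},
]

def evaluate(action: str, posture: set[str]) -> str:
    """Return the effect of the first matching rule, or default-deny."""
    for rule in RULES:
        if rule["action"] == action and rule["require"] <= posture:
            return rule["effect"]
    return "deny"  # default-deny when no rule matches

print(evaluate("commit_code", {"disk_encrypted", "edr_running"}))  # allow
print(evaluate("commit_code", {"disk_encrypted"}))                 # deny
```

Default-deny is the important design choice here: an agent action that no rule explicitly permits never runs, which keeps permission drift from silently becoming permission.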

And it's one of the very important points, Nikhil, that you're highlighting here. Right?

Like, you know, when you talk about there is a user identity Correct.

Device identity, there is an agent identity.

And you tie all this together, and it's like a hybrid identity.

So, I mean, at the end of the day, you need that trust in the host and in the process and in that workflow. And this is where tying these identities together actually helps you gain that trust and continue your workflow.

Exactly. And if you've gotta be SOX compliant and you've gotta show proof of access to specific parts of your business, this kinda seals it all up in a beautiful bow tie for you. So evidence for those audits becomes really easy.

Yeah. Okay.

James, if you wouldn't mind switching back to the deck.

So, yeah, just to touch on it, you know, as we kind of demonstrated, our whole vision with the CEROS platform is you need to take this identity centric approach to securing agents, and we believe that this architectural direction will allow you to secure AI in your organization across all of your use cases.

Because, you know, and everything goes back to being able to cryptographically tie that user device and agent, and then give you the policy and the granular policy controls to, you know, prescriptively write what conditions you want to enable this agent to act.

And I think, James, I've got one more slide.

So, yeah, Ceros is available to get started for free at ceros.sh.

It takes thirty seconds to install. You just go to ceros.sh, and as part of the get started, you install the CLI, and then you're off to the races.

Look how easy you made it.

I mean, this is easy, and it's short, for everyone watching. Like, I love these webinars where we're not wasting time putting unnecessary makeup on something; we get right into it. I mean, the fact is that AI agents are writing more code than ever before.

Right. And that number's only gonna increase, which is gonna create breakdowns in the chain. I don't see engineers losing their jobs because I don't think AI is just gonna be demonstrably that good. I mean, you guys remember a fad called blockchain.

I'm not saying AI is gonna be blockchain, but I remember someone saying banks aren't gonna exist anymore. It's all gonna be blockchain, and everyone who's in a bank is gonna be out of business. Well, we now know who won that race.

That's right.

They also say we need humans to blame. We cannot go after agents.

I mean, you could go after machines, but what kind of jail are you gonna put them in?

Yeah.

Right?

What what jail is the machine gonna go to? I've I've you know, I've always said that if if you ever wonder what heaven and hell look like for a computer enthusiast, depending on what side of the aisle you're on, if you go up to heaven and it's Mac and Linux, you're ecstatic. And then you go down to the blue screen of death, you know you're in hell. So that that could be that could be the theory behind where AI agents go after they die.

But humans will always be part of that. And getting that chain, and again, being able to adapt securely to how you're using AI in your environments, is gonna be critical as we go forward. And this is one of the most pressing items on everyone's agenda. Right? So go check this out: Ceros. That's c e r o s dot s h.

Now go get started for free. Nikhil, Gaurav, thank you so much, both of y'all, for coming here. And thank you all for taking time to be here with us today. This webinar is available on demand at the hacker news dot com forward slash webinars. So you watched it, you spent the last twenty-three minutes with us, and if you know someone who could use this for twenty-three minutes, please forward them the link to the webinar, have them sign up and watch it on demand, so that they can get the same great benefits that you are getting out of this webinar as well.

If you wanna see more topics from your friends over at The Hacker News, let us know by going to the hacker news dot com forward slash webinars. On behalf of myself, the team at The Hacker News, and our friends at Beyond Identity and Snowflake, thank you all for being here today. Have a great rest of your day, and most importantly, stay cyber safe. Thank you.

Thank you.
