TL;DR
- Treat every tool call your agent makes as a potential exfiltration path and a prompt injection vector. Indirect injection can be partitioned across multiple tool results, then assembled later by the transformer.
- Reject the barbell of total block or full risk acceptance. Use identity bound to the device to launch the agent inside a monitored security context that discovers tools, MCP servers, skills, and subagents at runtime.
- Stop relying on stealable Anthropic, OpenAI, or Gemini API keys. Bind the agent session to the device and its posture so a leaked key cannot be replayed by another machine.
- Build agentic playbooks for your security team. Map each step to either a probabilistic LLM node for judgment or a deterministic script node for execution, the same way you would design a control flow graph.
- Run the offensive drill yourself. Jasson built a working voice cloning kit in four hours from open Hugging Face models and seven seconds of audio. If your red team is not doing this, the attackers already are.
Transcript
Every time your agent executes a tool, it's a chance for proprietary information to basically be expelled. It's a chance for maybe that tool, if it's not actually authorized or maybe even malicious, to send c two commands back to the agent. And the agent, just like Ron Burgundy, will happily do whatever on the teleprompter. It also starts to look and feel a little bit like adversarial malware. How do you run fast but not run fast into a next breach? Like, that's an interesting tension.
There's only two camps that I see. Mostly is one is an outright denial that we won't do it at all. Yep. The other one is the floodgates are already open. There is OpenAI. Everything is being used. You just have no idea.
Prompt injection is a real problem. Information exfil is a real problem. How do you manage any of those things in a systematic way? It's not whack a mole. Everybody can come up with a whack a mole, technique. Yeah. But how do you manage this in a systematic way that has the fewest moving parts, if not through identity?
In four hours, we used it to pull a couple of models off of Hugging Face and real time audio impersonation. It wasn't that hard to turn that into a Red Team kit as well. We can leave a pretty convincing voice mail. And I think it was, like, seven seconds of high quality sampling. If the security teams are not adopting these sort of workflows and really going deep on it, they're gonna get left behind.
Most companies are at two stages for AI adoption. Either they have not gone ahead at all and just blocked the entire AI usage completely, Or on the other end, they have accepted the risk and just decided to speed aboard ahead for the kind of AI they want to do in their organization. I had a great conversation with Jasson Casey, who is the CEO of Beyond Identity. We spoke about how AI agents are currently being used in the organization, and why shadow AI would just become a commodity after a while once you establish standards for what good AIs are and what good AI practices.
However, how the challenge would remain in a world where AI agents can work on browsers, containers, local endpoints? How do you establish identity and perhaps even containerization to isolate what can be done if in case the AI system was compromised? I'll then a lot more in this episode with Jasson Casey. If you're someone who's working on securing AI agents, perhaps the use of identity or containerization, just even understand what what the new threat model is like, I would definitely share this episode with them.
As always, if you have been listening or watching our episodes for a while and have been finding them valuable, I would really appreciate if you take a quick second to drop the subscribe or follow button on whichever platform you've been listening or watching this on. We are on all platforms including Apple, Spotify, YouTube, and LinkedIn.
Also wanna say thank you to everyone who came and said hello to us at RSA. It meant a lot when you took the time, stopped, and said hello, and basically shared the love that you had for the podcast and the work we do here as well. Thank you so much for all the love and support. I hope to see you at the next conference as well.
I hope you enjoy this episode with Jasson Casey. I'll talk to you soon. Peace. Hello, and welcome to another episode of Cloud Security Podcast.
I've got Jasson with me. Thank you for coming on the show, Jasson.
Thanks for having me.
Maybe just to set some context, if you can share a bit about yourself and your background as well?
I am a cofounder and CEO of a company called Beyond Identity. My background, I am a engineer product person. I've been in the security infrastructure space pretty much my entire career. And, yeah, I like to work on deep tech problems that are challenging but also meaningful.
And talking about challenging and meaningful problems, we were talking the other day about how shadow AI has become like the new standard for first response for AI. And and I guess they had some interesting thoughts about what's the role of Shadow AI, if it's actually worth investing a lot of time in, or is it what is it protecting you from? I'm curious to hear if you wanna share your thoughts on the whole Shadow AI piece and a, in terms of a, is it a valid concern and b, is it something that we should probably build programs around?
Yep. So so first, yeah, it's a concern and you can definitely see the the response to it just in the advertisements. Before I even left the airport in New York last night, I saw three different company adverts all about, like, AI discovery, shadow AI, and whatnot. So it's it's definitely a little noisy on that right now. Why should you care about it? Every company in the world, private or public, is exceedingly getting pressure from the board and from its investors to be more efficient, to show some of these advantages that we've been promised around AI, like how do you operate in a more AI native way.
And you flip that to the inside of an organization, and you can actually see the promise of some of these projects. Right? Engineers doing things that used to take them six months and six weeks, right, with some of these code assistance, like Cursor or QuadCode or Codex or Gemini.
Marketers doing things like building a browser plug in that literally follows them as they're going through LinkedIn and follows them as they're looking at competitive websites, produces a JSON stream and a Git repo that Claude is consuming to do kind of structured competitive analysis. So, like, it's there. Right? And so I think this creates a natural tension in most organizations.
How do I run fast? How do I let the organization run fast? Because the organization exists to build product, right, to to to help customers with specific problems and to drive top line revenue. At the same time, when you look at how these AI code assistance work and these agents work now I'm gonna focus a lot on code assistance, but I'll I'll argue that, like, agents kinda structurally are the same.
When you look at how they work, their power comes not just from using an LLM. The LLM is kind of like the brain, but you've got to bring interesting data to the brain to say, chew this, synthesize something, tell me what to do next.
And you've got to bring tools to that brain as well. You've to give it arms and legs to interact with the real world. And that's where the danger comes from. Right?
Every time your agent executes a tool, it's a chance for proprietary information to basically be ex filled. It's a chance for maybe that tool, if it's not actually authorized or maybe even malicious, to send c two commands back to the agent. And the agent, just like Ron Burgundy, will happily do whatever on the teleprompter. Yeah.
Right? It also starts to look and feel a little bit like adversarial malware or living off the land. Right? Like tool execution of an agent.
Certainly feels like living off the land. Every tool execution is a potential the result is a potential prompt injection. Right? I don't just inject the prompts from prompt input.
I can have prompt injection that comes through the result of tool calls. Prompt injection doesn't have to happen all at once. A context is big. A context gets stacked.
The way transformers work is they don't read that context linearly. They have a way to actually associate related contexts. So if I'm smart, I can actually inject a prompt slowly in partitions over multiple tool calls to then get that transformer to do what I want. Modern agents have memory now.
Right? They can remember things about prior conversations. When you see that compacting context, what they're really doing is they're trying to figure out how to conceptually store that information so they can kinda retrieve it recursively later Yeah. In a way that makes the conversation feel like it continues and it's seamless.
Yeah. That feels a lot like malware persistence.
Maybe I can persist my prompt to, like, get it to do something when a condition arises a little bit later.
Yeah.
So there's this natural tension in organization between how do I run fast and become AI native, but also, like, these behaviors are clearly new, but also potentially gonna flag a lot of your existing security controls and even the security mindset. So, like, how do you run fast but not run fast into a next breach? Like, that's an interesting tension.
Yeah. And I guess maybe to what you said as well, a lot of CSOs are under the pressure from their boards or their executives that, hey. We need to I mean, I guess I there's only two camps that I see. Mostly is one is an outright denial that we won't do it at all, and we're gonna wait and see what happens. The other one is the front gate's already open and there is cloud cord, there is OpenAI, everything is being used, you just have no idea. Maybe that's kind of where some of that tension comes in from as well. Is that the only way to deal with this, that shadow?
So so we see the barbell as well. We see these large financial organizations that have literally just paused the project the projects, and the response is a bit heavy handed. It's like, well, only these three people can do it and no one else. Alright.
So clearly, they're actually The chosen one.
Yeah.
And then we see the other end of the spectrum, which is, well, this is the point of our business, and we're just gonna blindly risk accept all of this.
Yeah. And so our argument is you don't actually have to make that choice.
There are smart, lightweight, intelligent ways of letting the organization run run fast while also getting kind of a a safe, secure context wrapped around that agent That helps you understand what's going on Yeah.
But then also make decisions about what you want to allow and where you want to nudge people for better actions. And so these are kind of classic security control user interaction mindsets. Right? You don't necessarily wanna shut everything down. You wanna know what's going on. You wanna nudge people in the right direction to the right behavior, and you do want to kind of shut down very known bad actions or activity. So we see see it as a bit of continuum.
But do you find that a lot of people maybe I don't if this the right thing or I it's maybe limited. Do you find that when people focus only on data exposure or shadow AI as the only two challenges, are they missing something more than what they should be focusing on, or is that just the tip of the iceberg?
It's the tip of the iceberg. Right? So everything is new and nothing is new. Right?
It's like, what is it? The the king is dead. Long live the king. Yeah. We still are establishing like a standardness cybersecurity framework.
Right? We still need to identify. We need to know what we have. We still need to protect.
Like, how do we harden what we have? How do we make sure we have the controls on the things that we have? We still have to detect. How do we know when a bad thing happens?
Right? Because we can't prevent all bad things from happening. We still have to respond. Right?
Like, when a bad thing happens, what are we gonna do? Right? Like, where is the fire extinguisher? Have we actually trained on how to use it?
And repair. Right? So, like, I would argue the framework is still the same. There's a new use case in the organization called, agentic use cases, right, as these organizations go through becoming AI native.
And ShadowAI is kind of the way of getting the conversation started. A lot of organizations are kind of thinking about, well, let's just number one, figure out what we have. Yeah. And that's a great way to start because again, like, without Identify, you can't really run the other CSF functions, right, or the other CSF dimensions.
But I would argue it is just the tip of the iceberg Yeah. Because what you really want is you still want foundational security just over the agentic life cycle.
Interesting. And do would you say everything that we have done so far in the industry, like, I've got maybe CISOs may be thinking, oh, that's what should be fine. I have EDR, XDR, I have SIEM, I have all these things that I've traditionally used And they seem to give me the confidence that I can take care of most of the problems. But to what you were saying earlier, developers using Cloud Code, Cursor, there's a they're browser based extensions as well. What's the what's the obvious blind spots that if people have just gone, I'm just gonna use the traditional, I'm only going to focus on say, my network controls Yep. And that that should be enough for me to limit the exposure that I have for l six l l m. Is that enough?
So absolutely not.
So let's break agents into a couple categories. So like there are agents inside of SaaS products That customers experience through a a SaaS control panel. Right? I would argue that a lot of the risk in that environment is kind of traditional risk.
It can be managed by DLP. It can be managed by traditional identity, etcetera. Now let's talk about agents running on machines. And let's specifically zero in on code assistance because that's, I think, what almost everyone has experience with right now.
This may be a managed machine. This may be an unmanaged machine. Yeah. This may be a third party managed machine.
Contractor. Right?
Yeah. It could be all of those scenarios. It's gonna have access to your code. Yeah. It's going to have access to your intellectual property. It's gonna have access to your local device.
It is going to if it's going to be productive, someone's going to be inherently telling it where value is and isn't. So how do you actually know about secure that? What is the lowest leverage action? What's the simplest thing you can do as an organization that gives you, at a bare minimum, visibility and the removal of, like, really low hanging fruit security problems, like credential theft and session hijacking of the agent itself?
Would you say identity?
Yeah. So there was a Reddit story making the rounds a couple weeks ago, and the gist of it was, hey. My my clogged code key or my Anthropic key got popped, and I just got an eighty thousand dollar bill. I think I'm gonna go bankrupt.
And, you know, the interesting part of that story is when you look at these code assistants, they're still using this legacy technology where essentially, whether it's a user credential or whether it's a session credential, it's stealable. It's not device bound Yep. Which means you can you can phish. You can do this thing called device code flow phishing, you can do session hijacking, etcetera.
That doesn't have to exist. That tech technology to solve that problem, not reduce the rate of it happening, but actually make that go away, has existed now for a couple years. And identity is essentially the technology that makes that go away. Now identity that does that, that's device bound, that's posture based, could take one step further.
It could launch the workload. In this case, ClogCode, Codex, Gemini, etcetera. And it could launch it in a in a what we call kind of this secure durable context where it's monitoring the risk of what the agent's actually doing. Yep.
So that immediately gives you discovery around things like tools, not just MCP. MCP is one route of tools, but there are other types of tools Yeah. Your system uses. Built in bash to anything that you've actually put local in the system.
There are skills. There are sub agents. Yeah. There are plug ins that package all of this together.
Right? There's local permissions that your developers not even your developers, your marketing analysts, your researchers, etcetera, are are are manipulating. So the security context that this identity system can launch, it can very quickly discover these assets, understand them, run it through some sort of policy saying, is this good? Is this not good?
And then continuously monitor that. So if anything were to ever change, basically kill the agent session. Right? And withdraw its ability to actually access the sensitive information.
Yep. Yep. But isn't identity a bit more complex in sense of there is the whole what we used to traditionally call system users By doing automated actions, and there is Ashish, the actual employee And there is Ashish, the contractor. Like, how would I guess where I'm coming with this is that a lot of people already have, especially in enterprises There is already a established identity team that focuses on MFA, user onboarding, user onboarding. It this almost seems like a paradigm shift from that, or is it not?
This is the unification of identity and security.
You have to understand identity, but you also have to understand security. So the operating system has an identity. The operating system identity and the corporate directory identity, are they the same?
Probably not. They're related Yeah. But they're not the same. A nonhuman workload versus a human workload. How similar versus different actually are they?
When I want to do continuous identity Yeah. It's not enough to only be in the control plane. You have to also be in the data plane. Otherwise, you cannot be a point of enforcement. And we think code assistant agents, specifically AI agents in general, are kind of the forcing function for that. Like, they're really kind of making it clear that control plane identity security products are not enough. You actually have to be data plane enforcement as well.
And what would that look like in terms of I guess, where I'm coming from this is, like, people already have established architectures. The mature ones even have, like, a SIEM provider, EDR provider, and I'm sure an identity provider as well. What am I changing? Because, obviously, a lot we are at RSA. There's a lot of people who are thinking about how am I approaching AI security as a whole in my program. Yeah. Are there blind spots in these program in the traditional program that's been done?
Yeah. So so there's a couple interesting things there. Number one, I think we're gonna see enterprise architecture change drastically. Right?
So the market's already started to speak in terms of SaaS IT products are no longer as valuable as they were before.
And the reason why is we think essentially if your if your value comes from being a database with a nice UI, Well, I can do that with Git and Cloud Code. And we're already starting to see this. Like, in our own organization, our workflows have changed. So adding AI to your existing IT architecture is a way of kind of signaling that you're a dinosaur and you're probably gonna get eaten.
The rethinking your actual business architecture to if I have natural agentic workflows in my business, what does it look like? What can change? Maybe I don't really need all of these existing systems as before. And when you start considering that, there are knock on effects.
You're still gonna need EDR. You're still gonna need a SIM. But the way you're gonna design your your security stack is not going to be the same. And I would argue the simplifying architecture for that new stack is actually placing identity at the core of that security architecture.
Yep. Yep.
I was just gonna say, like, when you think about it from a security perspective, whether you're worried about, like, proactive defense or you're worried about response, identity already is core. Right? You still need to understand, alright, what's the offending process? Who launched that process?
What's the effective user ID? What was the group like, you're already trying to dig into all of these identity concepts. Right? And, generally, you're you're you're you're failing at, like, the determinism problem.
How do I know exactly this came and this came and this came and this came? And you're kind of doing a probabilistic nuance. Well, this likely came from here, and you're building this probabilistic blast radius. Yeah.
Is this is the thing that changes and actually simplifies. If you have an identity security solution at the core, all of these things have device bound attributable identity, and it it simplifies whether it's anything from discovery to protection to detection.
You find that I mean, because it's it's funny. Everyone who would have who would hear or watch this, they'd be like, but there are defined roles for these things. There are people who have dedicated roles for identity, dedicated roles for cloud, dedicated role like, so are are you already seeing that maybe in your customer base as well? Are you seeing people actually starting to evolve that into what the new world is?
Absolutely. We I'd say we see all three. So we see the barbells. Right? We see folks basically kind of just not doing anything about it.
Okay. Right? We see folks risk accepting it, which is another version of not doing anything about it. And then we see people actually exploring their organizational architecture and their IT architecture.
It's not it's not Metcalf's role. I forget the guy's name, but there's this general rule that your architecture reflects your organizational structure and vice versa. Yeah. Yeah.
Right? It's no different whether you're talking about how you build software or how you actually build the architecture of your business itself. And I think we're gonna see the same thing. We're starting to see that.
And because a lot of the industry outside of the whole data exposure and shadow AI, the other thing people keep talking about is prompt injection, and that's a real problem that people should be focusing on. Because obviously, the so far the conversation that we've talking about is that shadow AI is required, but it doesn't need to be if if you focus on the identity piece, you can still manage it. The same with data exposure as well, you can limit the exposure from it. What about this prompt injection? Because obviously as a security program people are building, they're also thinking about, oh, that's great, but all these AI agents that are running multi stage, and then prompt injection, because the way at least the narrative goes, whether it's indirect or direct, it could happen across multiple stages. And I don't, I would not know what stage am I losing control at, or I would not know if I'm being impacted by it. Does identity kind of help tackle that challenge as well?
If you're actually using device bound identity in your Agentyx, it means your agent has an identity. It means you can track the agent to ever authorize the agent's identity.
It means all the services it interacts with, all the tools, whether they're local or remote, whether they're MCP or built ins, have an identity that's attributed. It means everything actually has a chain of providence. Right? So I would argue you can't prompt injection is a real problem. Information exfil is a real problem. Injected prompt persistence is a problem.
How do you manage any of those things in a systematic way? Not whack a mole. Everybody can come up with a whack a mole technique. Yeah. But how do you manage this in a systematic way that has the fewest moving parts, if not through identity?
Interesting. I was so you know how obviously, you're a you're an engineer at heart as well. I'm curious, is the future that you're seeing with some of the mature customers you have, is the future for security where there's less dashboards and more APIs?
Yeah. Yeah. So we're already starting to see this.
The So with where AI is right now, it's very good at analyzing data. And what it's not good at is being deterministic.
You'll ask it to do a thing against your eight hundred data points. You'll do it against six hundred data points, and it'll say, good enough for me because it observed this behavior on Reddit. Right?
So the that that's kinda level one. Level the level two organization, when they're interacting with the AI, they start to realize, alright. So really what I ought to do is I I do problem discovery with the AI and I'm not worried about completeness like enumerating the whole set. I'm worried about understanding what I wanna do.
Then I have the AI write a script. The script does the thing deterministically. Then I wrap that script in a skill and a prompt, which is how I handle the probabilistic hand wavy of an analyst and whatnot. And we see the organizations that are actually kind of already at that sort of life cycle and how they interact with data, they're starting to ship less UI features.
They're focusing more on just kind of API data access and wrap it with skills. And we're actually experimenting with this ourselves. And so the the basic premise is, look, if I expose the data Yep. Through the agent effectively, and I teach the agent how to do and interact in a way that's deterministic.
Right? Like the the everyone's had the the agent go off and do something really, really annoying. It's like, why aren't you doing this? You're not doing the right thing.
Why do I have to tell you not to make mistake? That kind of stuff.
Right?
Once you kinda get over that hump, all of a sudden, you realize that these agents allow your customers to play with the long tail of your data. And they can even generate graphs and charting on the fly, dashboarding on the fly.
By pivoting to that sort of product architecture, it lets you, number one, open all the data to the customer, not just what your UX team is working on.
Yeah.
Number two, you get a signal. Right? So if everybody's doing something different, then that's probably gonna be your best interface.
But what if you get a signal that says, you know what? Eighty percent of my customers just keep asking these same twenty percent questions. Maybe that's where I'll invest in a little bit more where the AI may not be able to do it just in time. But, yeah, we see a different development pattern.
So you see security is making smarter choices moving forward. And I guess the the flip it off of it also is that people who are considering building security programs and even making decisions about what products to buy, they should probably consider that.
And obviously planning five, six years is is in in an AI world sounds ridiculous, but in the next six months to one, let's just say twenty twenty six. Yeah. In twenty twenty six, if I'm trying to rebuild or uplift my security program to be, like, cover the gap for that I have for AI security, which kind of we spoke about the traditions at least the traditional roadmap has had gaps because of the traditional threat model we approached it with.
In today's threat model, if I'm building a roadmap for, say, twenty twenty six
What would you say they should consider? Like, for it's a mature organization, an enterprise which already has a plethora of identity, XDR, theme, all of that jazz.
What do you see them uplifting towards? And obviously, as general as possible, we can't go into nuance, but what do you see them as uplifting their, a, their architecture, enterprise architecture, and b, from a, what should they be looking at as a future building tech? Hey, this would last you. Because you guys have done this internally. You guys have completely AI fied yourself from the last quarter session we had.
Right. Got customer success people building dashboards and analytics.
Yeah. I think I remember three years, and they were deploying cloud code and going, oh, wait. This is way different. So I imagine a lot of people want to get to that place as well.
So where do you see these people who are watching or listening? What should they focus on for the test of twenty twenty six for their teams to be almost in that because everyone's being asked to, hey. We use more AI. Use more AI, but people don't know what that looks Yeah.
So I think there there's a couple parts of the question. So from a I think there's like a IT business, I hate the word business process, but like how your business operates question, and that has to be answered really by leadership. And you take a step back, and you you basically put everything on the map, and you you ask yourself, why do I operate this way today? Which part of these operational steps are because of these were the only tools available to me at the time versus it's actually necessary for how my business operates? So, like, here's a an extreme example. Let's say you're starting a business today.
And let's take this extreme argument just to kinda see where the bound where it breaks down. What if the we only buy three products, for our business to to go operate? We buy GitHub or GitLab. Right? So we have a Git repo. Yes. We buy ClogCode or Codex or Gemini CLI.
And, maybe, like, Workspace or o three sixty five, so we have email Yep.
And nothing else.
What if that's all that we had? Right? How far would we get? Where would we start to break down? Why do we really need to bring in other products?
What for? Clearly, accounts payable, expense reporting and whatnot, but like that's I guess, sorry to the expensive five people, but that's less exciting. What else do I really, really need? I think I think the SIM guys and the SOAR guys have a good spot.
Right? Because essentially, their their their product and their value is kind of deeper analysis and big data big data collection for, like, streaming analytics. I think the EDR guys have a a long term play as well just because like their their key value is behavioral analytics around like funky on device processor behave process behavior. But like a lot of these other products that are about like making workflows better, do they have a place?
Yeah. Yeah. Okay. I see what you mean. Yeah.
And I guess your point, you start questioning, and once you start drilling down into all the parts that you already cover for, you're almost saying, what's the point of this when all I care about is metrics and I have APIs
That can enable me to do a lot more of this with a cloud code or a codex or whatever as well? And what parts are specialized that would make me help? Like, guess yeah. It's a it's a growing problem, but do you also find that the security programs moving forward would also need to be more agentic for lack of better word?
Oh, if they're not agentic already, they're getting left behind. So for instance, just like everything else that we've talked about, you want your security team to essentially have agentic playbooks. Think of it as a playbook the agent is gonna run. So you use Clog code? Yep. Have you built you probably built skills?
Yeah.
Yeah. You've probably gone through that loop where you realize, okay. I really need this to be a Python this part this part need can be, like, prompt can be language. This part needs to be a script. This part can be languages.
Oh, yep.
Yeah. So, like, your security team needs to be going through that in their playbooks. Because there are certain areas where judgment is required. Right?
And that's what the LLM is great for. There are other areas where you don't really need judgment. You need perfect execution. Right?
Like, I need to run this for all endpoints. I need to do this analysis exactly in this way. Yeah. And that's where that's where I was talking about, like, that basic life cycle of, like, breaking down prompt, breaking down a skill, almost thinking of it like classic control flow graphs.
Yeah. And for each node, is it a probabilistic node, is it a deterministic node? Is it probabilistic node? Is it and if it's a probabilistic node, it's an LM it's an LM task.
And if it's a deterministic node, it's a scripts task. So, yeah, if your security teams aren't, like, building this out already, number one, they're get they're behind. Number two, you can use that for controls verification, controls research, detection playbooks, actual detection. It's actually really, really good at prototyping.
This is more of like an a red team offensive thing, but, like, in four hours at the end of last year, we used it to pull a couple of models off of Hugging Face Alright.
And basically build real time audio real time audio impersonation.
And goal of the exercise was to have a poem read by different people at the party who weren't actually at the party, basically in the style, think, A Christmas Carol, but, like, saying things about the business. Right? Like, here's what went well and whatnot. It took about four hours.
Right? Yeah. But, like, it wasn't that hard to turn that into a red team kit as well around, hey, when we wanna do targeted phishing in this in this particular way, we can leave a pretty convincing voicemail. And I think it was, like, seven seconds of high quality sampling, and that was with very little research.
Wow. And that was zero shot training. So, like, we took models off the off the off the shelf to make that work. Yeah.
Yeah. Well Claude wanted to really Claude was really excited for me to actually do some fine tuning on a model. We just didn't have time. But I guess I'm just getting excited about the implementation.
But, yeah, if the security teams are not adopting these sort of workflows and really going deep on it, they're gonna get left behind. And I I would say the other thing I would caution against is you may start doing these things and see all these product announcements and then say, well, maybe I shouldn't do this. This other vendor will kinda do it for me. And I would kinda caution you against that sort of thinking.
Whether you buy a product or build a product, you still have to understand the domain the product operates in.
Yeah.
And you're not gonna build real experience and real depth of knowledge if you're not running those experiments and if your team isn't running those experiments now.
Yeah. Actually, it's a good point because and to be fair, it's not just for certain parts of security, all of it. Like, your GRCs, your detection engineering, SOC, identity, everyone needs to be kind of on that track of what's the what's the agentic playbook here.
We could probably almost have, a game show, like, stump the chump. Right? You pick the area, and I would argue we could probably come up with a pretty set of really interesting experiments Yeah. Executable almost immediately to see, like, how could this change?
Wow. Awesome. It's a good note to kind of wrap up the tech questions. A word snack war that's gonna be here.
You have choices of British and Australian, but you also have the choice of, well, weird and interesting as well. I've got kangaroo crocodile. Alright. I've got the the sweeter versions, vegemite version.
These are traditional.
So these are these are Let's try crocodile and kangaroo.
Alright. I'll let you pick one. Funny how everyone goes for the interesting ones. Maybe not that part.
It tastes like plastic. Alright.
Just pop it in?
Yeah. I'm gonna grab one as well. So I'll cheers on that.
Alright.
Alright. Kangaroo and cook her way.
Is it actually like chicken? Is it like chicken by any chance? What how would you describe the taste of it?
It doesn't have the structural texture of, like, a Game of meat.
Beef. Like, it kinda disintegrates a bit. Not that stringy.
Is that what you expected for a crocodile turkey to be?
Actually, no. I expected it to be a bit more I guess I expected the texture to be different. So I've I've eaten alligator.
Oh, yeah. Yeah.
And Is that gamey?
I wouldn't necessarily say it's gamey, but I don't know if I have a good flavor palette for gamey because I grew up eating a lot of wild gamey.
Oh, yeah. Yeah.
But, no, it it's the the alligator that I had, the texture is very tough. It's very it can be kind of almost rubbery.
But I think I and it's when I first had it, I actually thought it was, like, more like chicken. And I'm like because I was expecting it was more gamey. This is a kangaroo, if you wanna try that as well.
I think I've had kangaroo steaks before.
Oh, you've had that before? Kangaroo steak? Just I mean, it does taste like a jerk jerk version of a kangaroo steak.
Yeah. That's not that surprising.
Yeah.
This is this is, like, on point, but That's surprising.
Yeah. I've I've never had crocodile meat to begin with, which is why was like, oh, I wonder what that would be like, but it was one of the best sellers, and I'm just like, I guess people take this when they leave Australia to buy crocodile meat. But which leads me to go at my fun questions as well.
First one being really, what do you spend time on when you're not trying to solve identity problems in the world now?
Let's see. I so I live in the country. I do spend a lot of time outside with the family and the dogs, and we yeah. We have a small garden.
I'm doing quite a bit of research on trying to figure out what we're gonna plant this year. I have some electronic projects that I work on. The it's actually a radar project. Oh.
The it's just a way of kind of working a technical problem, not thinking too hard not not having to think too hard from, a a consequential work perspective.
Oh, yeah. Yeah. Fair.
And it's a sort sort of so different from work as well that you almost like a It's it's a you're engineering.
Yeah.
Yeah. It's it's, you know, it's the world is clearly evolving. The cost of of drones is coming way, way, way down. The idea of multipurpose or dual use components in kind of commercial and civilian life versus in defense life is really starting to get blurred.
We see this with, like, the war in Ukraine. We see this with the the war in Iran that's going on right now. And one of the things I've been interested in a while is, obviously, like, ISR, right, and from a cyber perspective, that's pretty close to home. But I don't know.
I've just always been curious about it from, like, an EMF and EMI perspective, a radar perspective. My dad worked in radar, so, you know, was gonna build some.
Awesome. And second question, what is some something that you're proud of that is not on your social media?
Proud of, but not on my social media. I'm really good at cooking.
Favorite dish?
Actually, so my favorite dish really is simple stuff. Northern Mexican cuisine, I make dark mole that's pretty good every Thanksgiving.
Oh, didn't realize they were different kinds. So North Mexican food is different, like, there I just in my mind, it's like fajitas and tacos.
Oh, no. No. So Mexico's a big place.
Yeah. Yeah. Yeah.
There's lots of different regional cuisine. Yeah.
Part the the food that I kinda grew up on is more earthy, nutty, bitter flavors, astringency. So, like, dark malaise with, like, that that kind of chocolate folded in.
The smell that I remember from Thanksgiving as a kid, honestly, was, like, toasted dried chilies going into, a chicken or a turkey stock.
Oh, wow. That sounds yummy.
And then letting that just cook for a while with that. Yeah.
So, I mean, I cook I cook all kinds of foods.
I'm gonna look up North Mexican. I was just not even aware of the category. I should be looking for I'm look for that.
New Mexican too. Like, just think, like, Texas Mexico border food.
Ah, okay. And final question, what's what's your favorite cuisine or restaurant? I guess this is not Mexican food, I guess.
I yeah. It's hard to pick. Right? So, like, I love Japanese food. I actually make a killer ramen. Takes about five days, but I can make a killer ramen.
Five days to do You wanna do it right.
You wanna let your ingredients actually have time to settle and whatnot.
Okay. Fair. I mean, I was gonna say, mean, like, normally, ramen shops just go go down the path of I mean, I'm sure they boil it for a few days as well because that's the whole thing in Japan as well. Right? Because the ramen is the thing that makes one ramen shop different to another ramen shop.
The the the there's there's there's a lot of variation, there's a lot of style, but the hit the history of it was it's basically the working man's lunch. Like, what can you make that satisfies someone very quickly Yeah. And gets them in and out?
Yeah. But, you know And it's filling at the same time.
Like most Japanese cuisines though, like, they've perfected it to an art. Yeah. But, yeah, like Mediterranean food, Middle Eastern food, Indian food, I I like food.
I mean, I like food as well, this kind of gusset works out really well. So, I mean, that's the fun questions I had. Where can people connect with you, learn more about Beyond Identity Yeah. And the work you guys are doing?
So I'm on LinkedIn. I'm on X. Yep. Jasson Casey. Remember Jasson s two s's? Yep.
And everything that I was saying about AI, we support in a product called Ceros. Yep. And you can try it for free, at ceros.sh. Yep.
If you go to ceros.sh, sign up for Ceros, try it out. If you don't like it, complain, we'll make it better. And if you do like it, tell us where we could improve it anyway.
Yeah. I will put the links to the short as well, but thank you so much for coming on the show.
Thanks for having me. No.
Thank you. Yeah. Thanks again for tuning in as well.

