Episode 9: AI Is Going To Solve Everything

Rayna Stamboliyska:

Hello, and welcome to What the Hack Is a CISO. This podcast will help you in your journey to be a better cybersecurity leader. It's sponsored by AWS, the world's biggest cloud company, and by Sysdig, the company on a mission to make every cloud deployment reliable and secure. I'm Rayna Stamboliyska, an EU Digital Ambassador and the CEO at RS Strategy. And in this week's episode, we're looking into the future.

Rayna Stamboliyska:

How will CISOs, and security leaders more broadly, approach their roles in, say, 2030? What potentially complex issues will they have to deal with? How will they manage security in their organizations? And what will, sadly, still be the same? On today's episode, to discuss this with me, I have two wonderful and, may I say, opinionated guests.

Rayna Stamboliyska:

The first one is Ed Targett. Hi, Ed.

Ed Targett:

Hi, Rayna. Great to be here.

Rayna Stamboliyska:

Thank you for joining us. So you're the publisher and editor in chief at The Stack, a publication tracking technology and security for senior leaders. And since you cofounded the site in 2022, you've spoken to CISOs and CIOs at some of the largest companies in the world, like Bank of America, JPMorgan Chase, and Nomura to discover their approaches and the people behind those strategies. And also with us, we have Sergej Epp. Hi, Sergej.

Sergej Epp:

Hi, Rayna. Thanks for having me.

Rayna Stamboliyska:

Thank you for joining us. You are the CISO at Sysdig, and, also, you were one of Capital Magazine's top 40 under 40. And before joining Sysdig, you led security at Deutsche Bank and lectured on cybersecurity at the Frankfurt School of Finance and Management. In this episode, we'll be taking a trip in a cybersecurity time machine, if you like, and looking at the role of cybersecurity leaders in 2030. And we chose to discuss what's coming and how to face it because when we look to the future, there are no surprises.

Rayna Stamboliyska:

There is just a lack of foresight. So, just for our listeners, this episode has been, well, less prepared than any other one before, and on purpose, because we value honest and authentic conversations here, and we do want our guests to say that they don't know when they don't, and to acknowledge that, just like everyone else, we can be wrong. But that's fine, because there are no wrong answers about the future. So let's warm up and learn a little more about each of you two. And let's start with you, Sergej.

Rayna Stamboliyska:

Could you share a pivotal moment for you that shaped your understanding of how security leadership needs to evolve?

Sergej Epp:

One of the pivotal moments in my career was really making the jump from working for a large bank to the vendor space. Having led cyber defense and forensics for Deutsche Bank before, an organization where you sometimes have 1,000, 2,000, 3,000 people working in cybersecurity, you sort of get a very, very narrow view, with departments like reversing, forensics, hardware integrity, board security, and so on. And then, joining a security vendor (in a previous role, before I joined Sysdig, I was pretty much a field CISO and not responsible for internal security), you start to talk to a lot of different industries and a lot of different people, not just security leaders but also business leaders. And you sort of build this generalized wisdom and ask yourself, on a completely different level, what security is supposed to look like. Not just from an architecture point of view, whether this company is more about network security or detection and response, or whether it can modernize its software stack instead of buying more security controls, but really building this business security architecture. And I think what I've really learned from my leadership career is that you need, first of all, a very strong technical background, but then also, second, this business experience.

Sergej Epp:

Right? Either having worked in the business, or in consulting, or for a vendor, to get this higher-level view and understand the value of security.

Rayna Stamboliyska:

Yeah. I mean, I understand where you're coming from, having gone back and forth between the CISO side and consulting myself, and the realities you have to juggle. And how about you, Ed? What specific event or discussion shaped your understanding of how security leadership needs to evolve?

Ed Targett:

I wouldn't say there's been one particular conversation. I think for me, what's really stood out over the years is how security culture evolves and how CISOs can best establish a security culture. Because I speak to not just a lot of security leaders, but a lot of CTOs and CIOs as well. So how senior leadership interfaces around security has always been interesting for me. There was a conversation that stood out with a chief security officer at a very large vendor a couple of years ago, where I was talking to him about his responsibilities and his reporting lines.

Ed Targett:

Unusually, he reported to the head of legal rather than a CIO or a COO, and we talked about why they'd set it up that way. We were going through all the sort of standard questions you might ask somebody at a security leadership level, and I said to him, what keeps you up at night, then? Your sort of stock final question.

Ed Targett:

What keeps you up at night? What are you most worried about? And there was this very long pause. And then he said: me. He said, you know, it is an incredibly stressful job. It was just incredible candor.

Ed Targett:

You know? I'm struggling to switch off, and I'm worried about my mental health. In a lot of the conversations I have with leadership, establishing a security culture and the personal and institutional resilience that you need have been big themes, and themes that I've always been interested in, and in how they've evolved over the years in different organizations.

Rayna Stamboliyska:

So how about you in 2030? What's your morning routine today, in 2030? Where are you? I like this anecdote that you just told us. So, out of the blue, what are you doing today, in 2030?

Rayna Stamboliyska:

One sentence. No thinking. Go ahead, Sergej.

Sergej Epp:

I mean, according to some people, all of us cybersecurity experts are going to be off chasing superintelligence. Right? Look, to me, I think I will definitely be somewhere at this intersection between, you know, deep cyber, business, and product. I think this is where the role is evolving and where, technically, we also have the most fun. Right?

Rayna Stamboliyska:

Right. And how about you, Ed? What are you doing today in 2030?

Ed Targett:

It's a great question. I have a horrible feeling that I'm going to be talking to a security leader at a large organization who's got thousands of vendors trying to sell them lots of incredibly cutting-edge, very shiny AI-augmented things. And they're still going to be getting popped by people clicking phishing emails or, you know, exposed endpoints or VPNs without MFA set up on them. So I'm not hugely hopeful that the future on the security front is going to look dramatically different from how it looks in 2025, but I'm ready to be proven wrong.

Rayna Stamboliyska:

Yeah. I hope you are wrong on this one. Talking about this, and about superintelligence or whatever the hype topic is, we still have this question, and it weighs on me as well: we need capabilities to evolve, both within organizations and in the abilities people have, right? Cybersecurity leaders, CISOs, whatever the title is. So what would be, say, three essential capabilities that organizations and cybersecurity leaders need to develop to thrive in the future?

Sergej Epp:

I don't think there will be anything surprising, because, as I've mentioned, security hasn't really changed much, I would say, in the last decades. First of all, the role is becoming much more communication-heavy and business-heavy. Right? Even if you're the best security expert out there, if you are unable to communicate the risks and get buy-in, it will be very, very hard to enforce security in an organization. Me, for instance, having joined just a month and a half ago, my boss, our CEO, Bill Welch, and I agreed, first of all, that we're going to present cybersecurity at every board meeting.

Sergej Epp:

Right? And I think this is also what I see across the industry: many security leaders are getting more and more exposed, not just to the boards but also to different business leaders. And at the same time, there's more pull for that information, because more business leaders are interested in security across entire ecosystems. So very often, I see that boards are not just getting briefed by internal security leaders, but also by certain partners and by certain regulators, at the board level. Right?

Sergej Epp:

And I think this is amazing to see, because it creates more education and awareness. And I guess the second point, to me, would be to make sure that there's a sort of shared experience between core security and the business, and culture is potentially the right word for that. First of all, having real experience across both blue and red domains, so being on both the offense and the defense side, is quite important. Having worked in those areas, I can just say this is something I would definitely recommend every security leader go and build. And then also really trying, and that's a constant debate we're having in cybersecurity, to find the right metrics for how to measure ourselves.

Sergej Epp:

And I'm a big fan of focusing not so much on measuring the outcomes in cybersecurity, but really trying to measure the process which generates and/or guarantees those outcomes. So, to summarize: in our digital jungle, we need to be both cartographers and guides at the same time, and this is very, very difficult. Right? That's what I see as really important.

Rayna Stamboliyska:

Thank you. And how about you, Ed?

Ed Targett:

I actually want to ask Sergej a question, if that's okay. And I don't want to take us down too dry a rabbit warren, although it's a very interesting one. But I'd love to hear a little bit more about measuring outcomes and your thinking around that, and how you're approaching it. This is not an area I can add any expertise to whatsoever, but I'd love to hear more about what Sergej is doing there, if that's okay, briefly.

Sergej Epp:

Yeah. Absolutely. I mean, the question is, how do you translate or communicate the most relevant cybersecurity topics to the board? Right? That's a question a lot of CISOs are asking themselves right now, and I'm in the position of having to define that for my new organization.

Sergej Epp:

And, you know, there are multiple ways to do that. First and foremost, I guess, what's important is to create transparency around cybersecurity, because as a CISO, I think we need to be in a position to communicate not only the risks we're seeing, but also the different scenarios which can happen, in an easily digestible way, to the business leaders. So that's one. Right? Second, we need to find a way to make sure that they take responsibility.

Sergej Epp:

Now, that's very easy to say, but take this analogy with CFOs. Right? As a CISO, you go to the board, you present the risks and scenarios, and then you go away with all the different tasks you have to solve. As a CFO, if you report to the board that the company is doing badly, all the business leaders are going to take some responsibility and some action points. And I think that's the situation we need to get to in cybersecurity.

Sergej Epp:

So as the CISO, I think my first responsibility is to create as much transparency into the different scenarios as possible, and then to get the business leaders to pull those risk scenarios and understand what the action points are for them and why it's important to them, rather than me trying to push certain action points onto them and, you know, asking them to fix vulnerabilities. Because risk in cybersecurity is not always about fixing everything. Right? We need to find the right balance between what is important and what's not. And as a CISO, I can't do that alone.

Sergej Epp:

It's a responsibility we have to share across different stakeholders.

Rayna Stamboliyska:

It's an interesting one, because it reminds me of the conversation we used to have, say, 10 years ago, when the startup scene was booming. I remember distinctly a piece arguing that every startup needs a CFO. And I'm yet to see that sort of solid, affirmative stance about why every company, or even every business line, needs a cybersecurity-minded lead. But, yeah, it's an interesting one, giving responsibility and more transparency to everyone. Right?

Rayna Stamboliyska:

Knowing that not everyone deals well with that uncertainty that comes from it. Right?

Sergej Epp:

You know, there's also a very simple trick for that. If you're new in the position and you have this leverage with the supervisory board, with the board, just ask the board members to ask the business leaders about the value security provides in the organization. Perhaps there won't be much they can tell the board or the supervisory board at the beginning, but they will start to pull this information and become interested. Right? Rather than asking the CISO: now show me how well the different business departments are doing.

Sergej Epp:

Because then I think the risk is that the responsibility for security will still stay within the CISO's department and organization, which we definitely don't want. Right?

Rayna Stamboliyska:

Or with the cyber insurance provider. But this is me being very impolite here.

Ed Targett:

That's such a brave recommendation from Sergej there, because in a lot of organizations, if you ask lines of business what value security provides, you're not necessarily going to get a very positive answer. Whether you want that getting fed back to a board or not, it's a really bold suggestion. Just going back to that board communication side of things, and to how much security has evolved over the years: from the conversations that I've had, one area where there has been improvement is board awareness of the issue and of the risk.

Ed Targett:

And I think a lot of organizations have been quite proactive now about putting people with security expertise on boards, and there's some ongoing movement there, which I think is really positive to see. Obviously, there is sometimes still a massive gap, and this is not exactly a parallel, but just to give you an example of some of the gaps I sometimes see between leadership and people at the operational side of things: I was speaking to a former government CTO yesterday, a very, very senior CTO right in the middle of His Majesty's Government, and he was called in to meet a new digital minister. And this new digital minister, you know, could barely turn on a computer.

Ed Targett:

Right? And she sat through his presentation about all the amazing applications and the architecture they're building. And then she asked him: can you color-code my Word document for me? So that was a particularly egregious, awful example he gave me of what it's like trying to run technology in central government. But I think in the private sector as well, there are probably a lot of CISOs who've had that sort of level of interaction with people at the board level.

Ed Targett:

And I do think that is beginning to change, and that's something positive to see. So, yeah, I just wanted to throw that out there.

Rayna Stamboliyska:

Just a small thing. We tend to think that only, let's say, older generations are kind of tech impaired, so to speak. But because I'm also teaching, what I'm seeing is an increasing number of students giving ChatGPT a PDF of their essay or whatever and asking ChatGPT to tell them how many words there are in it. And you're like, excuse me? There are different ways of being tech non-savvy or tech impaired today, unfortunately.

Rayna Stamboliyska:

Which brings me again to my question about the three essential capabilities that organizations and cybersecurity leaders need to develop to thrive in the future, especially as we see people who have other skill sets and other competencies, who can provide value in many different ways, but who still come across as tech impaired compared to our expectations of what someone with such a title should know, or what someone ready to jump into the job market should be able to do. You know? So what are your three takes?

Ed Targett:

I'd love to put a slightly different twist on that, or hopefully give a slightly unexpected answer. One of the most interesting conversations I've had on this recently was with a head of platform engineering at a bank. And she thought that the most tech-impaired people, as you put it, that she was working closely with were on the security team. She had a security team of people who were used to configuring firewalls, putting a perimeter around on-premises stuff, and maybe making sure a file share was locked down tightly. But they did not have a clue about cloud-native security.

Ed Targett:

They didn't have a clue about how to make sure that Kubernetes was secure in some shape or form. And for her, security at the level she was working at, on the platform side of things, was essentially redundant. The skills simply weren't there, and it was the platform engineering team that was essentially leading security across all the new infrastructure. But it wasn't really codified as security at that point. I found that really interesting, and it's a conversation I've picked up with a couple of folks since.

Ed Targett:

So I think that's one interesting area, which obviously speaks to skills and the upskilling that's needed, and it's something you're nibbling around the edges of there with your question. There are big skills gaps in security. Right? There are big skills gaps in a lot of places in tech, and we've seen people taking some quite creative approaches to that.

Ed Targett:

BAE Systems is an organization that springs to mind. They've hired a former hairdresser. They've hired a former animator from a BBC children's show. They've trained them up, and they're working their socks off on pen-testing engagements and the like.

Ed Targett:

So I think people taking quite unusual approaches to upskilling is something we're keeping a close eye on, and it's continuously interesting.

Rayna Stamboliyska:

You know, building upon this, the question is even more pressing because both of you mentioned AI earlier, and we're in the full hype cycle of AI, but it's here to stay. And there are things that will change because of the increase, if you like, in autonomous decision-making being done by tools, in ways that don't necessarily, at least not yet, replace humans, human oversight and human decision-making, but that take on tasks which for now are performed by humans, whether they like it or not. When we talk about AI and cyber, what comes to mind very quickly is alert analysis. And a lot of well-intentioned people may very well welcome this with open arms and say: look, we have this huge analyst fatigue that we know about. We've known about it for years.

Rayna Stamboliyska:

So here is a machine that can deal with those things. But if we go further with this, we could also imagine a scenario where, in say five to seven years from now, the human role in cyber has changed in such a way that security professionals have transformed from direct defenders, if you like, into AI trainers and strategic overseers, people who make sure that no unintended consequences emerge from autonomous decision-making. And this conflicts with many other moves towards more government, or at least regulatory, oversight, and also with tendencies where technological sovereignty is, at least in some people's minds, becoming as crucial as physical territorial sovereignty. So you have these different, contradictory trends. Looking at emerging challenges or even emerging technologies, what keeps you awake at night, and why?

Sergej Epp:

Wow. There's a lot to unpack. I think we're going to turn this into a four-hour podcast, at least, based on that question. Everything you've mentioned makes total sense to me. Right?

Sergej Epp:

I guess the really horrible scenario is going to be if we see regulatory or sovereignty oversight in certain countries that moves cybersecurity to a centralized, nation-state type of control. And to me, that's like going from democracy to dictatorship, to a certain degree. Because let's step back: what is cybersecurity about? Cybersecurity is, first of all, about being able to apply controls as close as possible to the data and systems. This is where cybersecurity works best.

Sergej Epp:

Right? And about taking decisions at the level where we have the most context. Very often, that's very close to the data and system level. Sometimes it's the enterprise level, because we have to do forensics and so on.

Sergej Epp:

And having, for instance, a regulatory body trying to do oversight, as we've already seen happening in some authoritarian states, is very, very bad, because it's going to decrease, first of all, the speed of innovation, increase censorship, and also quite heavily decrease cybersecurity, because no central authority will ever be in a position to understand what good security should look like. Right? So I think this is definitely going to be a challenge, and I have to think a lot about that. I mean, we as a company are also very supportive of trying to understand how to bring security as close as possible to the systems, by supporting sovereign clouds and so on.

Sergej Epp:

But you don't want to go in the other direction and try to centralize everything possible. That, I think, is the worst-case scenario. More realistically, I think AI is definitely going to play a very important and vital role, and there's obviously a lot of hype right now around AI. But the way I like to see it is pretty much the term the former head of AI at Tesla, Andrej Karpathy, coined back in 2017 with Software 2.0, where he laid out the transition from explicitly written code to neural networks: AI trained on data, where programmers or coders focus on creating datasets and designing architectures rather than writing specific instructions.

Sergej Epp:

Right? And today people talk about agents, and we already see this in some tools like Cursor, Lovable, Replit, and so on. In cybersecurity, we're starting to see similar patterns, where we don't really spend so much time on repeatable tasks. You brought up the example of security operations centers; that's my home turf. And one of the biggest issues in security operations is always trying to understand what really happened, trying to reconstruct, for every event or high signal: is it actually a threat?

Sergej Epp:

Was data exfiltrated? Is it a serious attack or not? To do that, you have to analyze a lot of data, but also try to understand how to put this data in context. And, typically, this type of forensics would take weeks, sometimes even months or years. Right? So now, with Gen AI, we obviously have this powerful tool.

Sergej Epp:

And you've mentioned this example before. You can just feed a lot of logs and unstructured data, which you could hardly make sense of otherwise, into a Gen AI system to help you understand it and bring context to it.

Rayna Stamboliyska:

Right.

Sergej Epp:

Now, I will not call myself a Kubernetes expert. But when investigating certain alerts in a large environment, I can simply start asking questions: how is this alert related to this namespace, and so on? And get help directly from Gemini to reduce the workload. So I think that's very, very interesting.
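To make the kind of workflow Sergej describes a little more concrete, here is a minimal sketch of asking a Gen AI assistant about an alert together with its Kubernetes context. It is an illustration only: the ask_llm stand-in, the alert fields, and the example data are assumptions, not Sysdig's or Gemini's actual API.

```python
import json

def ask_llm(prompt: str) -> str:
    """Stand-in for whatever Gen AI assistant you use (Gemini, an internal
    model, a vendor copilot); replace this with a real client call."""
    raise NotImplementedError("wire up your model client here")

def triage_alert(alert: dict, k8s_context: dict) -> str:
    """Combine a runtime alert with the Kubernetes context it fired in and
    ask the model to relate them, the way an analyst would ask in chat."""
    prompt = (
        "You are helping a security analyst triage a runtime alert.\n"
        f"Alert:\n{json.dumps(alert, indent=2)}\n\n"
        f"Kubernetes context:\n{json.dumps(k8s_context, indent=2)}\n\n"
        "Explain how this alert relates to this namespace and workload, "
        "and whether it looks like a real threat or benign activity."
    )
    return ask_llm(prompt)

# Hypothetical example data, just to show the shape of the inputs.
alert = {"rule": "Terminal shell in container", "pod": "payments-api-7f9c"}
context = {"namespace": "payments", "workload": "payments-api", "labels": {"tier": "backend"}}
# print(triage_alert(alert, context))
```

The value is less the code than the pattern: the alert and its context get stitched into one natural-language question, so the analyst does not need to be a Kubernetes expert to ask it.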

Sergej Epp:

I think what we are really missing right now, and what we should all work on as an industry, is having the proper benchmarks we can use to anchor ourselves, because there's a lot of homework to be done before we will see secure AI. Right? We know that now, with all the AI coders, we're going to see tons of different applications, 10x, 100x more code being generated, most of it insecure. Right? And we still don't know how to secure these AI systems properly, so we have to assume breach.

Sergej Epp:

We have to assume that the supply chain is going to be breached. So how do we really measure this type of progress in AI? In cybersecurity, a good example of that is MITRE ATT&CK for endpoint detection and response type capabilities. I think we need the same for AI, and it's coming slowly. There's, for instance, a very nice framework called CyberSecEval from Meta, which measures this type of setting.

Sergej Epp:

But this is just a start. Right? And this is something we really need to start benchmarking ourselves against, to understand where we're going to end up in the next five years.

Rayna Stamboliyska:

Yeah. It's more strategic planning at this point. Right? Five years is, like, not yesterday, but nearly there. And how about you, Ed?

Rayna Stamboliyska:

What are you seeing, and what's your take on those challenges? Because, by comparison with Sergej or myself, you have a very different view, a fresh pair of eyes. Right? You're not heads-down all day long in operations and strategizing about security. You're looking at it from a different perspective.

Rayna Stamboliyska:

So what are you seeing happening? What are the questions people are asking themselves?

Ed Targett:

I feel like if my eyes were really fresh, I shouldn't be quite this cynical. Yeah, let me try and answer the question. I think there are great opportunities to deploy generative AI; I'm not as close to the coalface on this as Sergej and some folks.

Ed Targett:

Whether it's helping map controls to regulations and just making that a lot easier, or helping strip out false positives, or tracing attack paths, and some of the other things that Sergej mentioned, there are huge opportunities there. Upskilling people, just being able to use natural language to inquire about certain issues: it's all huge. I think there's also so much pressure to build generative AI-based applications. There's a lot of pressure in most organizations from business leaders to just ship stuff as fast as possible.

Ed Targett:

And anytime you have that, whether it's AI or not, there's security risk inherent in it. Right? I'm mindful of, and have a lot of conversations about, how this is helping build up a potentially massive new attack surface. And I think that's the other side of the AI coin.

Ed Targett:

You know, prompt-injection-related issues and all that sort of stuff. A lot of the security people I talk to are very mindful of how basic a lot of successful attacks still are, and AI, buying something really shiny, is not necessarily the answer. We see time and time again, from Colonial Pipeline to UnitedHealthcare to, you know, you name it, that it's an unpatched Windows XP box sitting somewhere, Internet-facing, or somebody's left the company and nobody shut down their VPN account, or it's GitHub credentials in a public Amazon S3 bucket, or whatever it is. Right?

Ed Targett:

And I'm just not sure that AI is the answer to any of that. You know? It's cultural stuff. It's basic hygiene, and there's still so much work to do on hygiene. I love talking to red teamers.

Ed Targett:

I think that's one of the most interesting areas of the space. I talk to red teamers, and some of the ways they get in are hilarious. So here's a great example. Again, not AI-related, but we'll tie it back. I spoke to a red teamer based out in Australia a couple of years ago.

Ed Targett:

He'd been tasked with having a pop at a large pharmaceutical company's network. Right? And it had spent hundreds of millions on multilayered zero trust, you-name-it tooling. He worked out what the most common typos were for its domain and spun up a mail server on a typosquatted domain.

Ed Targett:

It cost him $15, and he sat back and waited for the emails to come in. And he had a couple of emails come in. Within a week, he'd had an email from somebody looking for IT support. Hi, IT support. I'm having an issue with this, that, or the other.

Ed Targett:

And he thought: great, fantastic, jackpot. He replied, yes, really happy to help you.

Ed Targett:

Let's get on a call. You just need to install this, and you need to install that. And boom, he was inside. He was on her machine, it cost him $15, and there's the beachhead.

Ed Targett:

Right? And I said, okay. Great. Once you're in there, how difficult was lateral movement? Yeah.

Ed Targett:

He was just so dismissive. Once you're inside most organizations, lateral movement is an absolute doddle. He literally said: I just do a Ctrl+F for "password" on the file share. And most people have got a Word document or some sticky notes or something with all their passwords sitting there.

Ed Targett:

That's still so common. So common. Right? And there are numerous other examples. So I'm really skeptical of the idea that AI is just the magic shiny new thing that's going to help organizations tackle that kind of issue at a really meaningful level.

Ed Targett:

So, yeah, sorry. That's my rather jaded view with my fresh eyes.
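For readers wondering what "working out the most common typos for a domain" actually involves, here is a minimal defensive sketch: enumerating simple look-alike domains so a security team can monitor or pre-register them. It is a rough approximation of what open-source tools like dnstwist do, not the red teamer's actual tooling, and example.com is a placeholder.

```python
def typo_variants(domain: str) -> set[str]:
    """Generate naive look-alike variants of a domain name: a missing letter,
    a doubled letter, or two adjacent letters swapped."""
    name, _, tld = domain.partition(".")
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])        # omission: "exmple"
        variants.add(name[:i] + name[i] + name[i:])  # doubling: "exaample"
        if i < len(name) - 1:                        # transposition: "examlpe"
            variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    variants.discard(name)  # drop anything identical to the real name
    return {f"{v}.{tld}" for v in variants if v}

# Candidate domains you might watch for in mail logs or register defensively.
for candidate in sorted(typo_variants("example.com")):
    print(candidate)
```

Defenders use exactly this kind of list to pre-register the worst offenders or to alert when mail starts flowing to one of them.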

Sergej Epp:

Look, I think there are financial services firms still running mainframes. Right? And I think they've benefited in the past from security by obscurity, because it's very hard to understand how a mainframe works and to be able to configure RACF and all those protocols. Nowadays, on the other hand, it's becoming easier by using AI, obviously with Gen AI.

Sergej Epp:

And we saw this, for instance, with some nation-state actors who were using Gen AI and ChatGPT to reverse-engineer the protocols of satellites they had intercepted. Right? So that's also a good question we have to ask as an industry: to what degree is Gen AI going to help attack us versus defend us? That conversation is ongoing. But, yeah, I agree with that.

Sergej Epp:

I think the biggest question will be: are we going to be in a position to actually modernize our software stack, versus trying to add more and more security everywhere? And to be honest with you, there are two views on that. There's a realistic view, and then there's the view where everybody is drawing the picture that AI is going to solve everything, which is not going to happen.

Rayna Stamboliyska:

I mean, let me push it even further, because you mentioned mainframes. You know? And that's true. We still have COBOL applications running, and I've never seen COBOL in my life. Right?

Rayna Stamboliyska:

So we also need to think about the long-term effects and impacts of AI on security. Right? I mean, will we end up, and this is both possible and plausible, in a situation like the one with COBOL applications, where teams and companies are just too afraid to turn off those applications because they have no idea how those tools work or how to update them, and so on and so forth? They just know that they are crucial.

Rayna Stamboliyska:

How do we, as an ecosystem of professionals, collectively decide what is okay and what is not acceptable? Let me ask it differently: give me the security decision, for example, that must not be left to AI, or that must always involve human oversight. What would that be? One decision, the security decision that should always be taken by humans, not by AI.

Sergej Epp:

Wow. I think that's a hard one, to be honest with you, because, look, I think AI is just the next step in the progression of whatever we automate next. Right? And having worked in the bank again: you've got mainframes on one side.

Sergej Epp:

On the other side, 60% of your trades are already completely automated, and that was 10 years ago. I guess, yeah, the creation of privileged accounts or identities could be something like that, where you definitely still want a human in the loop. And perhaps, just to nail it down a bit: even if we lived in a world without AI, I think what is still going to be very important is having the right incentives for cybersecurity.

Sergej Epp:

This is still a question which is not going to go away, at the micro and macro level. Getting the right regulations in place, not trying to regulate the "what" but the outcomes. Right? But then also going down and trying to understand how to measure security in the right way, so that we fight the snake-oil problem we have in cybersecurity, because cybersecurity is very, very hard to measure, and that generates the worst thing in cybersecurity: assuming that you are secure when you are not.

Sergej Epp:

It's much worse than having no security at all, and I think that's something we need to try to solve somehow.

Rayna Stamboliyska:

And there is this dichotomy, or this opposition, if you like, between your actual level of security and your feeling of being protected. You know? And you have two big categories of people. Right? Those who, even if they have those big doors and everything, are still afraid of getting burgled.

Rayna Stamboliyska:

And on the other side, you have people who leave their doors and windows open, and they don't care. Right? And they still feel fine. So how about you, Ed? What's that decision for you?

Rayna Stamboliyska:

The security decision that should not, or must not, be left to autonomous, non-human decision-making.

Ed Targett:

I think so many security decisions are cultural, and, not to belabor the point, they're cultural decisions anyway. You simply couldn't automate them. You want, for example, to embed security in lines of business. Right? It's about people and processes.

Ed Targett:

There's automation that can be used to augment that, but ultimately it's an organizational decision to make sure you've got security and lines of business interfacing much better. You can look at things like tabletop exercises. Most organizations, in my view, simply don't do enough of them. And you see that when you run a really good tabletop exercise and you're wargaming what happens when you're hit by ransomware, for example. It's a multiplayer game.

Ed Targett:

Right? You need legal in there. You need comms in there. You need, you know, business units in there. And that's so important to build awareness of, you know, everybody's role.

Ed Targett:

If you do get hit, knowing how to build resilience, and just awareness, security awareness more broadly. And I'm sure there are all sorts of useful ways that AI can be used to help make that exciting and useful and colorful and interesting. But, ultimately, it's about discipline. It's about organizational structure. It's about culture.

Ed Targett:

It's about communication. So, yeah, I'd flag that. And just another thing, very quickly: obviously, there are so many useful ways that manual steps can be automated in organizations, and that doesn't need to be AI. You don't need some great probabilistic algorithm spitting out hallucinations to automate something.

Ed Targett:

There's building better controls into your (horrible phrase, but) DevSecOps pipelines, for example. There's lots you can do there automation-wise, and it's not necessarily Gen AI.
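As a small illustration of the kind of deterministic, non-AI pipeline control Ed is pointing at, here is a hedged sketch of a check that could run in a CI job or pre-commit hook and fail the build when obvious hardcoded credentials appear. The patterns and the file walk are simplified assumptions; real pipelines usually rely on established scanners such as gitleaks or trufflehog.

```python
import re
import sys
from pathlib import Path

# Deliberately simple demonstration patterns; real scanners use far richer
# rule sets plus entropy checks.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(password|api[_-]?key)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def scan(root: Path) -> list[str]:
    """Walk the repository and report lines that look like hardcoded secrets."""
    findings = []
    for file in root.rglob("*"):
        if not file.is_file() or file.stat().st_size > 1_000_000:
            continue
        try:
            text = file.read_text(errors="ignore")
        except OSError:
            continue
        for pattern in SECRET_PATTERNS:
            for match in pattern.finditer(text):
                findings.append(f"{file}: {match.group(0)[:20]}...")
    return findings

if __name__ == "__main__":
    hits = scan(Path("."))
    for hit in hits:
        print("possible secret:", hit)
    sys.exit(1 if hits else 0)  # a non-zero exit code fails the pipeline step
```

The point is Ed's: a boring, predictable check wired into the pipeline catches the "credentials in a public bucket" class of mistake without any probabilistic model involved.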

Rayna Stamboliyska:

That's very true. I'm all for more Ansible and Puppet and less ChatGPT, but to each their own. Okay, so we're coming to the end of our conversation, and I would like to pick your brains on a few closing reflections. We've talked about bad situations.

Rayna Stamboliyska:

Those are still important, because where we see challenges, we can also see opportunities. Right? So what opportunities do you see emerging from these challenges? And, even more broadly, what gives you hope about the future of cybersecurity leadership?

Ed Targett:

I think 15 years ago a lot of CISOs had very much come from the IT coalface and didn't necessarily know how to communicate so well with the business and with the board, and I think that's evolved a lot. There's an ongoing debate about how technical a CISO should be, and CISOs probably should have good engineering chops, but I see more and more CISOs coming through who can communicate really well with the business and the board, and who are really good at building strong teams with diverse backgrounds, in a way that perhaps wasn't the case 10 years ago. And that gives me a degree of hope and positivity. In terms of some of the tooling and cultural stuff I see, I think people are also getting really good at gamifying security, making it fun for people.

Ed Targett:

It's not just your annual tick-box, punish-people-for-clicking-a-phishing-exercise type of thing down at the coalface with your staff. It's encouraging developers to upskill, having little game boards and rewarding people for good security practices. And I think that's changed a lot over the years as well.

Sergej Epp:

Just to add, I really like this point about fun. I just spoke yesterday to a CISO who had user experience among his top three projects for this year. And I feel this is also something we've started to do really, really well. Right? Even at the protocol level: if you consider the FIDO2 protocol, for instance, to replace passwords, that has been built and designed with user experience in mind.

Sergej Epp:

And there are so many different examples where security is becoming more transparent and easier to adopt. And, you know, to me, cybersecurity is not a problem of a lack of solutions. Right? We have solutions for all the security issues out there. It's really a problem of a lack of execution and adoption.

Sergej Epp:

And, again, we find that having a better user experience definitely helps. And then, obviously, we have to mention AI as well. Right? Gen AI is definitely helping. I have a couple of things on my wish list that I hope we're going to see, like some sort of zero-effort compliance going forward, because then we won't need to deal with all these 500-page-long regulations; we can just ask our chat or agent.

Sergej Epp:

But also, with all these systems and AI coding becoming a sort of Software 2.0, where software is going to be written by AI, I still have hope that we'll find a way to make this software more secure and more reliable. I know we are far away from that, but that's definitely on my wish list as well.

Ed Targett:

I love that answer, and I love that focus on user experience. I think that's so important, particularly given that there's so much tooling sprawl out there. Talking about banks again: I was talking to the CTO at a bank recently, and they've got 80, 90 security tools. Right?

Ed Targett:

The platforms in place. And the big focus, for literally everybody I talk to, is consolidation. So if you're consolidating down to fewer and fewer tools, you want to make sure that they're really accessible and usable, and that you can use all the firepower they'll deliver to you as a security function. So, yeah, I love that answer from Sergej.

Sergej Epp:

Absolutely. You know, being the hamster in the wheel doesn't work anymore, I think. Right? You cannot come up with a new solution for every new security problem every time; that has to stop. On the other hand, you need to find the right ways to adopt innovation.

Sergej Epp:

I think this is where user experience is going to play a very, very important role.

Rayna Stamboliyska:

Talking to people. I know. That sounds scary for so many of us out there. Right? So thank you so much.

Rayna Stamboliyska:

This has been wonderful, and I'm happy that I managed to ask you difficult questions. That was on my to do list for this episode. So thank you for joining us.

Sergej Epp:

Thanks for having us, Rayna. We're looking forward to reviewing this in a couple of years and challenging ourselves on our predictions.

Ed Targett:

Yeah. Thanks, Rayna. Thanks, Sergej. Really engaging, enjoyable conversation. And like Sergej, I'm looking forward to reflecting on this in a few years' time and being proven right about most of the things I said.

Rayna Stamboliyska:

That's all for today, for this episode. Thank you for discussing the issues that we might have to face over the next five-ish, or even more, years. And how about you, dear listener? What gives you hope about the future of cybersecurity leadership? Did you learn any lessons that you will apply yourself, or was there another factor that we overlooked?

Rayna Stamboliyska:

If so, please do share in the comments or on social networks. And, of course, if you enjoyed the episode, please leave us a five-star review wherever you get this podcast. That's all for this episode of What the Hack Is a CISO, sponsored by AWS, the world's most comprehensive and broadly adopted cloud company, and by Sysdig, the company making every cloud deployment reliable and secure. I'm Rayna Stamboliyska, and I'll see you next time.

Creators and Guests

Rayna Stamboliyska
Host
Strategy & Foresight. Award-winning writer. Former 🧬 scientist.

Supported by Sysdig with 💚