Intersection of AI and Security [Cloud 2030 Dec 17 2020]

Mar 21, 2021 22:14 · 9980 words · 47 minute read

This is a session from our archives: the December 17, 2020 discussion about how artificial intelligence and security are interplaying, and what we’re going to have to do to make things work together. As always, Cloud 2030 discussions cover a lot of ground, and I think you will enjoy this one. Thanks, Mark.

00:23 - I don’t think your law holds. I want to either prove your law or disprove it.

00:31 - Basically, the main thing I said — I won’t start typing too much — is: what about all the services that are created that no one ever actually uses? The issue is nothing new. Where I think you’re going is that if you create something, people start assuming that you have to start supporting it, right? That’s not the case.

You basically should wait before you support things, even though people start rushing towards them.

01:14 - Man, I started trying to come up with all the examples where the services came and then they just stopped. So basically, just because AWS comes up with a new type of service doesn’t mean everyone else has to start thinking about how they’re going to support that type of service too. Right, right. No, I mean, it’s a good point.

01:35 - And certainly, it’s not meant to be much more than a joke of a law. But for the most part, my own history has been that I learned the hard way, frankly. I try to do somebody a favor, create some sort of access, or give them a new tool. And before you know it, the whole department is demanding the same access, or every person that does that job through the entire division of the company wants to be able to do that same thing.

And it was never anything more than a favor to begin with. Now I’ve got to either figure out how to justify shutting it off, or figure out how to justify supporting it in real terms. And what I was thinking about was edge specifically. A lot of the companies that I’m talking to that are enabling things inside stores, as an example, are enabling some minor features that individually don’t seem to be a big deal at the store.

And if any one of them were off, who would complain? Why would they think it a big deal? But the combination of those services is almost the web that holds the store together now, relative to modern technology. And when the group of those services goes away, it’s almost as if you’ve taken away the point-of-sale tool, and people all of a sudden have to figure out how to do math in their heads and record sales on a piece of paper.

And so I was just thinking about it from my own personal experience, and from that angle: you do, generally speaking, have to have a plan for rolling out any new service.

03:27 - Because if you don’t — and you think you’re just doing it because, well, I’ve got extra cycles, or I’ve got extra CPUs, or I’ve got extra bandwidth — before you know it, somebody is going to be requiring that the favor you turned on become a justified, approved, and supported service. Well, isn’t that why people use terms like beta, or — what do you call it — before beta? Right? Well, that’s the thing, Larry. When you’re talking about something that goes through major project approval, you get all those kinds of things, right? You have teams that do beta and user acceptance testing, and all that stuff.

But a lot of times, there are services that can be turned on, especially from a pure infrastructure standpoint, that don’t require a whole lot of team investigation and user acceptance approval. In fact, when you turn on something as a favor for someone, the only approval is the person that asked for the favor. And if they start using it, and then other people decide they want to use it — well, at that point your ability to roll it out in a traditional release-to-production model goes down the tubes.

And if you’re a service provider, you get really bogged down, just like with a lot of other things. I don’t know if we’ve talked about it in terms of having model clients — whether you want to have the early clients that you get bogged down with as your sponsor, spending extra time with them.

05:01 - Well, there is — I think there’s a linkage there, Larry. I mean, I wasn’t thinking that way, but I do think there’s a linkage there. It’s funny, I was just advising a startup — one of the startups that I advise — the other day. And I was talking to them about the risk of getting too embedded in an early but very large customer, and focusing on delivering exactly what that customer wants, whether or not it’s what the product should have for every other customer.

And they have to recognize two things. One is that they may be creating cycles that are used only by one customer. And two, they’re effectively pulling money out of their 401(k) account, as far as the schedule for their product or service and their available hours of time, because they never recover that. So it’s a snowball effect, unfortunately.

06:01 - Rob, did I take your lead? I took your conversation away from you. No, no, no, I’m fascinated by this conversation. And you’re poking me in all the places that make me want to shout, which qualifies as excellent.

06:17 - To me. And by the way, I poked Paul on Twitter this week, with his new boss — I think the head of Equinix — based off our conversation from last week. I asked him a question about something he had mentioned about open hardware. And then when he said something, I said, I’ve been talking about this — cc Paul. And then Paul went on a seven-point thread. I hope he doesn’t get mad at me.

06:55 - No, I’m sure he will appreciate, one, the elevation of it, and two, the discussion, because Paul is a loves-a-discussion type of person.

07:06 - So, along the lines of what Mark said: Inktomi had a separate release — well, had a separate test path — for AOL for their network caching product, because AOL was their big customer.

07:30 - And I saw the same thing at Cloud.com, where every release was driven by certain customers who said, “if we only had…” And yeah, it was just a treadmill, rat-race kind of thing.

07:49 - Yeah, we ran into the same things at DocuSign. If we got down the rabbit hole of what a large customer might want, it would distract from what we were trying to do for the larger community.

07:59 - Exactly the point. Yeah, the first example where I was in a position to recognize the problem, and be partially responsible for trying to fix it, was when I was with ServiceMesh and we were working with Bank of America.

08:17 - And Bank of America, you know, they spent $6 billion on IT. And they were doing a proof of concept with us that was going to last a year.

08:26 - And they were asking for all kinds of things that were never on our original roadmap. But it’s really hard to say no to somebody who might end up spending millions with you if they get it right. But it’s a dangerous path. It’s the same sort of risk that you have when you end up doing too much professional services as part of a software release, and being dependent on those dollars.

That sets a bad precedent for focus in the company, and it takes you away from product design, right? So, what you’re talking about — Excel, right, has a whole bunch of built-in functions, but it’s fundamentally fungible, which is what makes it so adaptable. If you’ve identified where you can do customization, I think it’s a reasonable thing, because we’re doing exactly what you’re describing.

Although, literally, we do almost no custom work — there’s very little customization. Things that our customers ask us to do become productized after three iterations, or sooner, depending on what they do. But that’s the design — it’s the fungible stuff, right? We extend the product using a plug-in system, so the API changes don’t become permanent until there’s grounds for it.
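A minimal sketch of that kind of plug-in seam — customer-specific behavior lives behind a named hook, so the core API stays stable until an extension earns productization. All names here are hypothetical illustrations, not the actual product’s API:

```python
# Minimal plug-in registry: customizations attach to hooks instead of
# modifying core code, so API changes stay reversible until productized.
PLUGINS = {}

def register(hook_name):
    """Decorator that registers a function under a named extension hook."""
    def wrapper(fn):
        PLUGINS.setdefault(hook_name, []).append(fn)
        return fn
    return wrapper

def run_hook(hook_name, payload):
    """Run every plug-in registered for the hook; core logic never changes."""
    for fn in PLUGINS.get(hook_name, []):
        payload = fn(payload)
    return payload

# A customer-specific customization, isolated behind the hook:
@register("post_process")
def add_audit_tag(record):
    record["audited"] = True
    return record

result = run_hook("post_process", {"id": 1})
```

If three customers end up asking for the same plug-in, that is the signal to fold it into the product proper.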

09:57 - You’ve figured out how to do it without getting trapped down the rabbit hole — you’ve got the abstraction layer in there. Whereas with a lot of startups, when you’re a startup, it’s just “we need this.” And that is how spaghetti code happens, and how the other issues happen.

10:16 - It’s “okay, the customer just needs this one thing, and they’ll buy more product.” And even if they buy more product, you’re locking into a single provider if you haven’t architected it so that it’s expandable to different applications. You’re making me smile, because I’m thinking about my Infrastructure as Code work. I did a five-minute Infrastructure as Code summary for a local group, and one of the points was: if you’re just duct-taping things together, you’re not really doing Infrastructure as Code.

I mean, this isn’t just a product problem; it’s an internal company problem. I was talking to a company yesterday — they have two different products that we would replace and extend.

11:07 - And so they’re like, yeah, we want one thing that’s standard. But at the same time — to Mark’s original point — it’s so enmeshed in their organization that pulling it out is going to be very hard. It’s very hard to shut down services once it gets to what you said before, where you get the scope sprawl, where these applications end up getting spidered into an enterprise in all these unclear and indeterminate ways.

So you end up with dependencies that you don’t necessarily foresee when you go to move these sorts of things out. We see that a lot in the data center space over here at Equinix.

11:47 - One of the only good things about having AT&T as a customer at Inktomi was that their network was so freakin’ messed up that we had to send a team of test engineers out to their site to work on their network and develop test suites that could find the problems with their proxies. And the only good thing about that was that we then had a solution for other companies who had just as messed-up a network.

12:32 - So it happened to become a general application, but it started in search of satisfying a single customer, and it took a lot of resources.

12:49 - That’s dangerous. And, you know, it’s going to be different for every situation, right? It depends on how your code is designed. It depends on how the team that’s involved in the proof of concept is dependent, or not dependent, on helping you build the rest of your code. There are all kinds of things in there that determine the value. But to Larry’s part of the question, my part of the question, and Rob’s point, I think it points to the fact that you just can’t go into building something for people without a plan for how to get out of it.

And whether getting out of it is a positive thing or a negative thing is all a factor of how you planned to get it rolled out in the first place. Whose hours were involved? Once you’re done with it, who’s going to support it? And when you’re doing it, have you made a decision that’s viable and justifiable against effort that could have been put into something the business could have perceived as more valuable? Right.

So the bottom line is, when I was doing people favors as an infrastructure weenie at HP in the ’90s, before I started learning my lesson, I wasn’t going to my CIO or my head of infrastructure and asking, could I be spending my time more successfully on something else? I was making a customer happy, and I thought that was a good enough reason to do something. And that’s just a very simple example. But it’s no different from how a government agency might put in a road, and then next year ask you for money in taxes for the support and upkeep of the road.

No, that should have been part of the plan. You don’t build the road unless you’ve got a pool of money to support the road after it’s gone into place. Otherwise, the road was never justified in the first place.

14:51 - And our projects are not dissimilar, regardless of whether we sell them to a customer or are running them ourselves inside our business. That reminds me of the need for a go-back plan: what is your plan to go back to the starting line if you encounter a hard-stop situation? When I was deploying IP phone systems and contact centers, I used to have to build those all the time, because if something went sideways, I had to get back to where we started before business opened.
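The go-back plan above can be sketched as a deployment wrapper that snapshots the known-good state first and restores it automatically on any hard stop. This is only the skeleton of the idea; the `apply`/`restore` callbacks are hypothetical stand-ins for real snapshot and deployment tooling:

```python
# Sketch of a "go-back plan" baked into a deployment: capture the current
# state before changing anything, and restore it on any failure.
def deploy_with_rollback(current_config, new_config, apply, restore):
    """Apply new_config; on any failure, restore the saved snapshot."""
    snapshot = dict(current_config)  # capture the known-good starting line
    try:
        apply(new_config)
        return "deployed"
    except Exception:
        restore(snapshot)            # get back before business opens
        return "rolled-back"

# Toy apply/restore callbacks operating on an in-memory "system":
state = {}

def apply(cfg):
    if cfg.get("bad"):
        raise RuntimeError("deployment went sideways")
    state.update(cfg)

def restore(snapshot):
    state.clear()
    state.update(snapshot)

outcome = deploy_with_rollback({"version": 1}, {"bad": True}, apply, restore)
```

The key design choice is that the rollback path is written and tested before the change is made, not improvised during the outage.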

15:21 - Yep. This is the value of abstraction layers from that perspective — and of not consuming things directly. Although, you know, in the chat I was ranting on Terraform, which is a brilliant strategy for becoming an essential product without any permissions or controls in place at all. They’re still at zero-dot — they’re now at 0.14 — and they introduced significant breaking changes between 0.12, 0.13, and 0.14, whole degrees of breaking changes across their whole architecture.

But it’s like, well, it’s not released software yet — and it’s what, six years old now, and it’s not released. So there are elements here where it’s sort of inexcusable, and then we all jump on the bandwagon because, to Mark’s point, it’s become an essential service. My favorite — no, I’m going to tell the story, but then I do want to pivot us to inflection points, because this is, I think, our last real meeting until we meet for our summit.

16:32 - Oh, no, we’ll probably do some unofficial planning meetings between the holidays.

16:43 - You know, very famously, Microsoft re-implemented APIs. When they came out, I think with Windows 95, they re-implemented bugs that were in the previous versions of Windows, because those bugs had become embedded in their software ecosystem. It took them years to get this right.

17:07 - But they had to test every single game — they went to their ecosystem and tested and tested and tested, and they would re-implement API defects that people had been using as workarounds, because if they didn’t, the games that had already been sold would fail. And to keep the market, they had to re-implement those things. Today’s market is very much burn-the-bridges. This is what drives me nuts.

We had this conversation in one of the Amazon recaps that I was in, where it’s like, yeah, Amazon says, we’re changing this service around, you’ve got six months to get off of it. Or CentOS: oops, hey, sorry, you can’t depend on CentOS for production anymore, thank you very much. We’ve gotten to a point where we’re expecting people to build systems in a way that tries to break Mark’s law. And edge is definitely not going to be like that — you put a device in the field…

It’s in the field. Yeah, there’s no going back — from a security risk standpoint, and from a data collection and value-of-data standpoint. I mean, if somebody takes some small feature that you implemented for something else and turns it into a safety feature, that becomes fundamental to the operation of a heavy piece of equipment, or robotics, or traffic planning, or something.

18:34 - And a year later, when it fails and a car crashes, somebody is going to come and say: what the fuck, why wasn’t this working? Why wasn’t it protected? Why wasn’t it being updated? Why weren’t the security patches applied? And you’re like, I only deployed that to collect the weather information for a day, and left it there.

18:51 - Yeah, or it was a hack because I was waiting for a patch to come out, but it got rolled out, and then somebody writes something on top of it.

18:57 - Yeah. Right. Do you think that the supply chain hack with SolarWinds is going to — Doug’s excited about SolarWinds. I shouldn’t have named SolarWinds; that was a huge mistake.

19:11 - SolarWinds’ stock is holding up, so don’t feel bad. But the supply chain aspect of what we’re talking about: one, you’ve got to be able to update quickly, but two, rolling out those updates can potentially compromise your whole system. It seems like an impossible bind.

19:42 - It really does. And I don’t mean to, you know, jump in in front of everyone else.

19:46 - But when I heard about the SolarWinds hack, it reminded me of the supply chain hacks that our own NSA was doing with disk drives out of Taiwan — updating firmware to allow for remote hacking before the drives ever arrived on people’s desks, and even unboxing and re-boxing them to look identical to the original box. So it’s not as if Russia or China or whoever did this was the first to do it. But certainly they did it really, really well.

I’m frankly terrified. And I’m glad I don’t run a large infrastructure organization anymore, because I’m terrified of not only the precedent that this sets, but the onus that it will put on large organizations from a security standpoint. I mean, you work for Equinix, right? Think about the onus that many of your buyers — certainly many of the buyers of the companies that I’ve worked with in the past — put on the security and operational perfection that you will provide them as your customer.

Imagine having to do that for every vendor that supports the delivery of your service.

21:05 - Yep. It’s horrifying. The scale and breadth of what this represents is really the most terrifying element of it. Because it’s the first one we’ve uncovered — I think we’d be delusional to think it’s the only one out there.

21:21 - Yeah, I would agree. When you read about how the hack happened — and that it was sitting in the code for six months and all that — were you surprised? Does it surprise you how it happened, how they inserted it, and all that? — Andrea, historically, I’m a little bit of a spy junkie, although I don’t do that much reading in that area anymore.

Because, unfortunately, I’ve given up my fun-time reading, and I tend to do it more for work these days, and for the last couple of years.

21:56 - Plus, the social justice side of me has taken over, and I feel like I’m ruining society if I’m reading a Tom Clancy novel instead of The Color of Law or something like that. But all that being equal, the history around spycraft — especially from Russia and China, to take two perfect examples — is that they will build systems that they don’t expect rewards from for anywhere from 18 months to 18 years.

So it doesn’t surprise me a bit, when you see the target delivery audience for SolarWinds. I would have put something in with a project planner who didn’t even plan to put the code into the product for another three years. I would have been all over that if I was a Russian spy, because the opportunity was so large, right? So it really doesn’t surprise me. Actually, I don’t know that it surprises me that we didn’t do a better job at validating backdoors and stuff like that.

I mean, when you consider the complexity of these products, knowing all that before you implement just seems like too much to ask.

23:14 - But, you know, as we started talking about, it seems like maybe somehow that’s going to be the criteria going forward.

23:20 - It gets back to a problem that’s existed since, well, software was developed, right? Security has always been a third or fourth thought in product development, and that history is being exploited by the attackers, because they’ve been paying attention to which screen doors were left unlocked. No, no — the windows didn’t have any glass. But would this be the inflection point that changes that? Or are you surprised that the inflection hasn’t happened already? We didn’t really answer it.

Somebody — I don’t know if it was Rob or Larry — asked that question earlier: whether this would finally change how people prioritize security. I think, from my perspective, the simple perspective is that we have to be careful of getting into a position where we get ever better at chasing the symptoms of a hack. We need to get better at finding ways of keeping the hack from having any value.

And that may sound counterintuitive, but the tools that we’re building these days — some for good purpose and some for nefarious reasons — may be AI tools or machine learning tools.

24:46 - To me, there really is no reason why, over time, we can’t just assume that our networks will be hacked, and that AI will be managing the anomalies on that network — and the hack too — unless the AI itself has been hacked.

25:07 - That’s always a possibility, right? Would SolarWinds have been caught, then, if the system sat for six months? Yeah, it would have been. Because, in real terms, when you think about the potential power of an AI solution — the fact that packets were leaving encrypted, hidden as something else, that would have been exactly what you would have asked AI to look for. AI would have been smart enough to realize it was not normal traffic, and it would have flagged it, I would think. But is anyone to blame? I don’t remember hearing once that anyone’s to blame for this hack.
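The detection idea being described — flag egress traffic that deviates sharply from a learned baseline — can be reduced to a toy statistical skeleton. A real system would use far richer features (timing, TLS fingerprints, DNS patterns) and a real model; the destinations and numbers below are invented for illustration:

```python
# Toy anomaly flagging: mark destinations whose observed egress volume is
# far above the baseline distribution (a crude stand-in for the "AI would
# have noticed abnormal traffic" argument).
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Return destinations whose observed byte count exceeds the baseline
    mean by more than `threshold` standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [dest for dest, count in observed.items()
            if sigma > 0 and (count - mu) / sigma > threshold]

# Baseline: normal daily egress volumes (MB) over the past week or so.
baseline = [10, 12, 9, 11, 10, 13, 11, 12]
# Observed: one destination suddenly shipping far more data out.
observed = {"update.vendor.example": 11, "beacon.unknown.example": 95}

suspects = flag_anomalies(baseline, observed)
```

Even this crude skeleton shows why a six-month dwell time is an opportunity for the defender: the longer the exfiltration runs, the more baseline there is to compare it against.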

25:52 - Was there anything? Well, there is — it’s Tim Crawford’s problem; throw him right under the bus as soon as he walks in. He didn’t realize there’s a bus coming for him. So yeah, it’s going to be hard to play the blame game.

26:09 - Because with all these companies and all these teams doing agile and lean and quick turnaround and CI/CD and whatnot, there are a number of techniques that were used long ago and far away that have fallen by the wayside. Back in the day, there used to be QA — and QA at the design level, at the design spec level, where there was a team specifically there to question designs: testability, usability, security, etc.

Security has always kind of been a second-class citizen, but QA was always the champion of those downstream concerns. QA doesn’t exist anymore — hasn’t for years. And they’re all talking about shifting left? Well, that’s because they shifted way out of the way to begin with. So will they find someone to blame? Only if they can find an individual, no longer at the company, who specifically injected these particular bits.

And if they find that the bits were injected over a course of time — a little bit here, a little bit there — they’re not going to find blame. They’re just going to say, oh, we need to improve our process, so that it happens some other place, not here. Gina actually has experience here — she was there, and was part of the product marketing for this product. But it is a big problem with agile, in that it goes so fast that there’s no time to have a systems-level perspective.

27:58 - I don’t know, Rocky. I mean, I get your point.

28:01 - But in the teams that I’ve helped with agile — you know, I don’t want to call myself anything approaching a DevOps expert.

28:09 - So that would be, I think — oh, can I comment here? I tell him everything.

28:16 - Thank you, Andrea. Thank you — no wonder I get all that spam. But, you know, the first time I put together a team that was similar in theory to DevOps was at Gilliat, in the 2004–2005 timeframe. And we got rid of the change management team and all that stuff. But the point of the DevOps process is not to remove those ugly hindrances — people worried about what gets put into a firewall, or people worried about whether the release to production will cause a disruption or a security hole.

Rather, it’s to automate the things that are known, understood processes, and actually allow you to deliver code and solutions more quickly and more accurately, right? You can do it wrong — you can definitely do it wrong. We make jokes: yeah, you could automate something that’s bad, and it just makes it bad more often. But the point of automation — and DevOps certainly has a major component of automation associated with it — is to repeat good behavior without having to talk to people every time you do it.
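“Automating the known, understood process” can be sketched as turning the old change-management meeting into machine-checked release gates. The specific checks and change fields below are hypothetical illustrations, not any particular pipeline’s API:

```python
# Sketch of a release gate: the standard pre-release questions become
# automated checks that run on every change, instead of a meeting.
def gate_release(change):
    """Run the standard pre-release checks; return (approved, reasons)."""
    failures = []
    if not change.get("tests_passed"):
        failures.append("test suite did not pass")
    if change.get("opens_firewall_port") and not change.get("security_review"):
        failures.append("firewall change lacks security review")
    if not change.get("rollback_plan"):
        failures.append("no rollback plan recorded")
    return (len(failures) == 0, failures)

# A change that touches the firewall but skipped security review:
ok, reasons = gate_release({
    "tests_passed": True,
    "opens_firewall_port": True,
    "security_review": False,
    "rollback_plan": True,
})
```

The gate encodes good behavior once and repeats it on every release — and, per the joke above, it will also faithfully repeat bad behavior if the checks themselves are wrong, which is why the checks need the same review the code gets.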

And I would say, if I were to look back at some of the QA teams that I’ve worked with — not necessarily been responsible for — like as recently as Apcera: maybe there is a difference between QA for the success of code from a customer perspective, and QA for the quality of code, as in, is the code efficient, etc. But not necessarily QA of the content. Like, if people read the code, and it makes sense, and it doesn’t seem to break anything — are they even testing against it? Do they need to test against it? How do they evaluate some string of code that they’re not even worried about looking at, because as far as they’re concerned, it doesn’t put any functionality at risk?

And I think that’s a hard nut to crack — and agile makes it harder. That’s why you go back to the space stuff, and the original IoT, where you’re not touching anything. How long has Voyager been sitting out there, transmitting data that’s actually useful? With that software, you make one small mistake in an upgrade and you’ve lost the system. So there’s a much different approach for those sorts of systems.

And back in the day, it made a bigger difference than it does today, because everything is throwaway code now. And DevOps — definitely, automating the systems makes it easier to validate and replicate and find the problems, because then you’re spending time on the hard stuff that you need human minds on, rather than computing cycles.

31:33 - But without going back and accepting — whatever you want to call it; it won’t be called QA this next time around — the analysis process that was used in the past to figure out whether code is architected reasonably well, and whether it accounts for the stakeholders’ needs — and even wants, sometimes, although wants is that thing Mark was talking about; you don’t want to have to put all the wants in there — until there is an analysis process that’s systemic…

…there are going to be a lot more of these holes found over time, because we’ve gotten away from applying system-level thinking to the software development process. And DevOps is, in some ways, a patch.

32:40 - And the productization process that was required to add the quality doesn’t exist in most of the code that never exits a company.

32:50 - So with a product, you have a better time of it. Except — I have a question. Are we talking about this from the vendor perspective, or are we talking about how the customer develops code? Because I see those as two different things. Are we talking about SolarWinds as a company, and how companies like SolarWinds develop code? Or are we talking about the customers that were impacted, and how they develop code or consume products and code? — SolarWinds, the company producing the product code. But part of the issue along those lines is also that a product like SolarWinds does have customers, and if you’re not careful in how you produce SolarWinds and take into account the foibles of your customers, they can actually inject their own exploits that make SolarWinds susceptible.

33:57 - So I’m trying to distinguish between the two because I see them as two completely separate issues.

34:05 - Yep. So: the product. Part of the issue is that the same care that was taken in creating the product — validating that it was supposedly secure and usable and everything — isn’t taken with internal code, such as the build server. The code that was put on that doesn’t go through the same rigorous check process, the same rigorous test and analysis process.

34:36 - But from a SolarWinds perspective, the supply chain aspect was — you know, an individual change would have a much smaller blast radius; it’s much less likely to have an injection by somebody who’s trying to take advantage of it. I mean, we’re down in the details. But what makes it complex is — and this to me is an inflection point that I’m tracking on the list, right? We have a whole bunch of scale-related inflection points, and the complexity of IT systems is part of that inflection point. Are we talking around a problem? And I don’t think AI makes things less complex.

It just doesn’t make things less complex. But for instance, if you apply big data analysis, and if you have everything observable — so you turn on every single marker — this to me is where things are going, which is a challenge, I think. And we’ve talked about this in multiple sessions: there’s a complexity explosion. We are seeing an increase in complexity. Just look at the Kubernetes landscape.

And that product is designed to increase complexity in IT organizations.

36:03 - Rob, there actually always has been — this is not new. I mean, Mark and I have talked about this in the past. Complexity in IT is a reality, and the sooner you can admit it and accept it, the sooner you can start addressing it. But we’re not going to get simpler as we go through time.

36:26 - So from that perspective, let me see if I can run the inflection point to ground. Is there a breaking point with complexity, where we need to simplify things as they get too complex, and we start changing? Or are we the frog in the complexity hot water? Or is it actually where we already are — just an acceptable status quo that we’re going to build into our systems? — You know, the frog in the boiling water, I think, is a great analogy for this.

We’re past that point. People just don’t — they either don’t realize it, or they’re not looking at it from that perspective. But the water temperature has definitely been increasing, and the frog never realized it. And so we’re past that inflection point.

37:21 - I mean, the analogy is that you can cook a frog by gradually turning up the water. They acclimate; they don’t notice that the water is getting too hot. They would jump out if they felt it was hot, but if you slowly increase the temperature, they get used to it. — Didn’t CIOs know? Everyone knew this was happening, right? We keep designing systems — I mean, SolarWinds sells a product to mitigate operational complexity by having this behemoth of a monitoring infrastructure.

It’s like, oh, we’re just adding a new widget in.

38:02 - I mean, SolarWinds literally came in and said: you’ve got this incredibly complex, diverse environment, and we’re going to sell you a tool that helps you monitor all the things. And that let them sneak in something under the radar that you didn’t even realize was there.

38:21 - But I guess that’s the point I’m trying to make: this is not unique to SolarWinds. SolarWinds is just the one that we’re aware of at this point, and I’m certain there are more out there. I mean, this goes back to my whole thing of IT running with scissors. So I ask the question: are we talking about the vendor, or are we talking about the consumer of the products? Right? We take this product approach, or project approach, where we say, okay, we’re going to put SolarWinds in, and then we’re off to the next thing.

And we don’t necessarily come back to it, or manage it in a programmatic way. Right? It’s a product.

39:02 - And so this is one of the core fundamental issues. I mean, there are several; it’s not any one thing, but several of these kind of came together. And I think that’s why you’re seeing this hit the fan, if you will.

39:18 - Yeah, I think the customers run with grenades without pins.

39:23 - Same analogy, yeah. But is there an inflection point where we slow down, where people say, whoa, wait a second, I don’t want to implement the new-new? That’s the central thing.

39:43 - There are lots of companies that don’t implement the new-new, and they’re considered old and stodgy, or just slow, second tier, third tier, or whatnot. But there are a lot of conservative communities and verticals. And I think we’re seeing a lot of even those conservative companies start to move towards bleeding-edge technologies, and they get upset when they bleed.

40:21 - Yeah. Especially the ones who have been so conservative and have all these processes, change control and all these other things, and then they move to the new stuff. And suddenly things, like you said, bleed, and they sit there and go, whoa, why is everything breaking? They forget that they’ve changed their processes, their whole culture. Yeah, digitalization brings out a lot of the weaknesses of company process.

40:57 - Yeah. And it’s probably a failed idea before it gets started, but I’ve used the analogy before; we talked about it a little at the beginning of the discussion today. If you find yourself needing to bucket water out of the bottom of the dinghy faster and faster, the answer is not to get more buckets. The answer is to fix the leaks.

41:28 - Right. And in this particular case, the combination of the discussion around complexity and breadth that we’ve included in the topic from the beginning leads me to believe that, long term, the only real successful answer is to have something that mitigates the risk by monitoring the entire system, almost regardless of how it’s put together.

41:55 - Yep. So you can take whatever building blocks you want to build the environment you need.

42:01 - But the system of governance that monitors that environment is what tells you whether or not there are leaks, or breaches, or risks to availability, et cetera. And I realize that’s a pipe dream today.

42:16 - But this problem is not going to get easier; it’s only going to get harder. And where does it all come together? At the hardware level? At the application level? How do you even build that? Yeah, I mean, I’m not an AI specialist, right? I’m just looking at it from the perspective of when I’ve done process-of-elimination-type tools for things like intrusion detection, or worked with partners who have helped me.

And when I say I’ve done them, I haven’t done shit, I couldn’t code myself out of a paper bag.

42:51 - So when I’ve done those things, it’s about understanding what the environment is supposed to look like, in very, very simple terms, and recognizing when there are anomalies. And there are already AIOps-oriented systems that let you take a look at an environment, understand where failure may occur, and do remediation, in many cases without ever having to touch any code or send someone into a data center.

And those same practices have been used in small increments, or in portions of infrastructure and applications. There are already companies that automate the process of identifying anomalous behavior in network traffic patterns, peer-to-peer networking risks, and things like that, to identify what actually might be considered anomalous or a security threat and let you address it. I just see that if we continue to focus on treating the symptoms, the people that are breaking in will be breaking in with AI systems before we have AI systems to help protect against those break-ins.
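As a toy illustration of the kind of traffic-anomaly detection being described here, and not any particular vendor’s product, a baseline plus a standard-deviation threshold is the classic starting point; the function name and the numbers below are invented for the sketch:

```python
import statistics

def find_anomalies(samples, baseline_size=20, threshold=3.0):
    """Flag samples that deviate more than `threshold` standard
    deviations from a baseline learned on the first `baseline_size`
    observations (think bytes/sec on a network link)."""
    baseline = samples[:baseline_size]
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # avoid divide-by-zero on flat baselines
    return [
        (i, value)
        for i, value in enumerate(samples[baseline_size:], start=baseline_size)
        if abs(value - mean) / stdev > threshold
    ]

# Steady traffic around 100 bytes/sec, then a burst that looks like exfiltration.
traffic = [100 + (i % 5) for i in range(20)] + [101, 99, 5000, 102]
print(find_anomalies(traffic))
```

Real AIOps products layer seasonality, multiple signals, and learned models on top of this, but the core idea, know what normal looks like and flag deviations, is the same.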

So I say, fucking jump the shark. Screw this short-term, turn-the-bolts-faster approach, and go right to what we really need to protect the environment in a future where threat actors are likely using AI to pummel networks and firewalls and every other system in your environment.

44:24 - Mainframes 2030: get on my platform, folks. Actually, what you’re talking about is where I’m going with some of my notes about this, from an inflection point perspective. Because at some point, what we’re really saying is that we’re going to have to pull the plug and keep people off infrastructures.

44:44 - Well, you hit the big red button on the line, and then you sit there and figure out where the problems are, and you fix the manufacturing line. But we haven’t hit that big red button yet.

44:59 - Well, what I’m saying is, if there is no way... if Mark’s right, and our APIs are getting less and less secure, and we can’t determine that our supply chain is safe, we’re just going to dig an electronic moat around our buildings, not let traffic in or out, and say, you know what, we don’t have remote work coming into the country.

What’s happening in the world? Well, this is why it’s an inflection point.

45:32 - Question: yeah, do we get to a point where it’s undefendable, and you have to resort to taking dramatic action?

45:38 - Sorry. You know, I hate to say this. I’m all for supporting Mark in any way I can, and I hate to say it, but he’s right.

45:49 - He’s absolutely right. But here’s the problem: the train’s already left the station.

45:55 - And so it’s not as simple as just pulling the plug and not doing it. I can’t see how that would realistically play out.

46:06 - Because when you start to think about the entire value chain, from infrastructure and the most basic technology all the way to the consumer, meaning each of us going into a grocery store, or a gas station, or a doctor’s office, there’s absolutely no way that we can go backwards at this point. I mean, companies even struggle today with outages: how do they go back to a manual process to address this? I agree with Mark, but there’s got to be some solution in the middle.

But here’s the piece. In some of the communities that I’m party to and participate in, one of the things we have been talking about for some time, some time being a few years now, is that the threat actors absolutely are using AI. And some of the vendors that produce these types of products, like Microsoft and Amazon, are actually trying to identify when someone is using it for malicious reasons as opposed to productive purposes.

And unfortunately, that’s incredibly hard to do. No, I agree with you, but that’s remediation; all we’ve been talking about here is remediation. The question is, if you can’t pull the plug, how do you get to the root? And I think it goes back to something I’ve said before, either here or on the Tuesday call: we have to get back to valuing good, solid engineering and engineering principles.

And we, as engineers and technologists, have to do a better job of telling the business: look, you’re about to go off a cliff; if you make this decision, you will go off the cliff, and here is the data that demonstrates it. Do you want to be the next CEO, like Target’s, who got fired because they ignored good engineering principles? Period. We as engineers have to face it: the train has left the station, and I think the next train is legislation.

It’s not that we’ve gotten advanced; the problem is that we’ve thrown out all the things we’ve learned along the way. You know, and I’m not beating up on schools or anything like that, but now you go get a certificate in coding and you’re a developer. Go take a course in Python, and now you can code. And then you’ve got Google and all the rest of them selling a quick way in.

I mean, AWS made its whole business on: you don’t have to worry about this infrastructure stuff, you don’t have to worry about good principles, just push the code and we’ll take care of it. We have got to get back to solid engineering principles.

49:11 - That’s good. That’s the message.

49:14 - That’s the message from re:Invent, too: hey, we’re going to invent some APIs that take care of this for you. Just keep inventing, and we’re going to improve the operational stuff on the backend, right? I mean, Keith, this is what scares me. We’re making it easier and easier for people to leverage incredibly powerful tools.

49:35 - Without understanding, you know, the consequences, or how to do it right. Back in the 80s, we used to have books about people doing a bio lab in their basement and unleashing a killer virus. Hard to imagine, I know, but we’re getting to that point with AI and technology.

50:00 - There were tools for biohacking, too.

50:06 - Well, I like the sentiment from Keith.

50:13 - And I think that should be a target goal for every organization. But unfortunately, the reality of humans makes that effort largely fail in the long term for any organization. And I’ll give you a simple example. As part of my job, I’ve also had to worry about physical security for data centers, et cetera. When you’ve got somebody watching a monitor, for outside traffic as an example, you never leave them on the monitor for more than an hour. It doesn’t matter how much you pay that person; they cannot watch the monitor for more than an hour and be 100% effective at identifying threats or risks entering the property.

And when you’re the soldier guarding a camp, you’re never allowed to walk the border for more than a certain amount of time, because the assumption is that your patience and attention will wander. And security, unfortunately, is the same thing. You run security for two years without a threat, and people begin to get complacent. That’s just the nature of humanity. And we can fight it, like saying, well, just don’t have sex, because we don’t want to give you condoms.

And we all know how well that works. So we have to account for how humans live with this stuff if we’re going to figure out how to address the problem. Which gets to the point that, at some point, your firefighters become arsonists.

51:52 - I agree with Mark on this: those sorts of things, the monitoring and such, are perfect places where AI is currently capable of addressing a lot of the pitfalls. But we’re in the second phase of expert systems at this point. Back when expert systems were the first way of freeing up the expensive people through AI, the experts would provide their knowledge to an AI developer, who would come up with a set of rules that they used to avoid these problems and whatnot.

52:40 - Now we’re in training. But again, the limitation is: if you don’t know what to train the system on, and all the different inflection points there, it’s going to have as many holes as the junior engineer, or junior whoever, that’s doing it. So we still have the problem that AI can address some of the more ordinary, repetitive issues, but can’t actually do the knowledge worker’s heavy lifting.
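That first phase, experts handing their knowledge to a developer as a set of rules, can be caricatured in a few lines; the rule names, event fields, and thresholds below are invented for illustration, and the point is that anything the experts never articulated falls straight through:

```python
# A hand-coded "expert system" for login events: each rule is one piece of
# captured expert knowledge. A novel attack pattern matches no rule at all,
# which is exactly the hole the speaker describes.
RULES = [
    ("failed_logins", lambda e: e.get("failed_logins", 0) > 10, "possible brute force"),
    ("odd_hours",     lambda e: e.get("hour", 12) < 5,          "activity at odd hours"),
    ("new_country",   lambda e: e.get("new_country", False),    "login from new country"),
]

def assess(event):
    """Return the verdicts of every rule the event trips."""
    return [verdict for name, rule, verdict in RULES if rule(event)]

print(assess({"failed_logins": 25, "hour": 3}))
```

Training-based systems replace the hand-written rules with learned ones, but as the speaker notes, they inherit an analogous limitation: they only cover what was in the training data.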

53:21 - So, going back to what Mark was just talking about, there’s an SD-WAN company out of Beaverton that I’ve worked with in the past. They just announced some risk monitoring software, using light AI, I guess is how I would put it. But the stats that they found on the number of alerts... one of the things that we don’t talk about a lot is the fatigue.

Right, it’s alert fatigue that gets in there. I think, you know, it’s basically not uncommon for IT and ops managers to see hundreds of thousands, and sometimes millions, of alert emails each day.

54:04 - You know, at what point can a human not process that? They can’t even really process hundreds, let alone thousands. I mean, that’s what I was talking about earlier: tools to help eliminate false positives and things like that, and look for true anomalous behavior.

54:23 - Right? Exactly. What’s the action? What’s really actionable? Yeah, I can’t tell you how many conversations I’ve had with my teams around just this one issue: great, now we’ve put in this monitoring product, and now we’re getting pummeled with alerts. And so what they do is turn up the squelch so they get fewer alerts, but then they miss the core pieces until after something is falling apart.

And so the challenge there is, how do you set it up just right? And granted, this is not a new problem.

55:01 - This is an old problem. I mean, I’ve been having this conversation with my teams for, shoot, 20 years. More than that, actually. But I think the point here is: we need better intelligence to understand what is real and what is not. The technology is there; I don’t think we’re necessarily applying it, or our mindset, in the right way. This isn’t just a technology problem. And that’s the other piece here: there isn’t a silver bullet that we just drop in, and automagically everything gets addressed. There has to be human intuition that comes into play.
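The squelch trade-off being described, raise the severity cutoff and the noise goes quiet but so do the early warnings, is easy to make concrete; the severity scale and the sample alerts below are invented for illustration:

```python
def filter_alerts(alerts, squelch):
    """Keep only alerts at or above the squelch severity (0-10 scale)."""
    return [a for a in alerts if a["severity"] >= squelch]

alerts = [
    {"msg": "disk 70% full",      "severity": 3},
    {"msg": "retry storm on API", "severity": 5},  # early sign of a real failure
    {"msg": "primary DB down",    "severity": 9},
]

# Low squelch: everything gets through, and the team drowns in alerts.
print(len(filter_alerts(alerts, 1)))
# High squelch: quiet, but the retry storm (the "core piece") is silenced
# until the database is already down.
print(len(filter_alerts(alerts, 8)))
```

A single global threshold can only trade noise against coverage; that is why the discussion keeps coming back to smarter filtering, correlating and deduplicating alerts rather than just raising the bar.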

And I think that’s the piece we’re ultimately missing, whether it’s from the vendor perspective or the consumer’s, the buyer’s, perspective. And I agree. Basically, there still need to be people to manage the software and handle the incoming alerts and the dashboards, whether internally or through, for example, a managed security service provider, to actually be monitoring these things and take action when necessary.

56:08 - Otherwise, you’re just paying money for software that you’re not going to use. And one of the things I heard from the co-founder of Bigleaf, when I was talking to him about how he built it, was: the tech, the AI, is not enough. You’ve got to have people that have been in that industry long enough to understand what is truly actionable and what is just noise, because the AI is not going to be able to process that.

Yes, we have to have people that have been in that scene. But that’s why... wait, wait.

56:40 - Oh, hold on. I need one minute for logistics.

56:44 - Because I actually think we could do four hours on this, and everybody would be animated and excited, and that’d be easy.

56:52 - Actually, I’ve been taking this conversation and the inflection points and breaking them back into questions. But here’s my suggestion before we go into the seventh: are people interested in shifting this meeting to Tuesday morning or Wednesday morning for the next two sessions, and lining out the questions a little bit? Basically, it’s going to be me; I have to plan logistics for the framework, but I also have to start publicizing it.

So I’ll be tapping all of you to help encourage people to come to the session. But if you’re not game, that’s fine; I’ll pull this together, take my notes, and do the topic during the DevOps time.

57:38 - Well, the next slot is a book club, and I wasn’t going to do one next week. I was thinking of doing the same 8am Pacific slot on the 22nd or the 23rd, and then doing the same thing on the 29th or the 30th, whichever we pick.

57:56 - As long as you don’t do Mondays. Sundays I’m always up late for some strange reason. Even when I’m not working, everybody knows I stay up till three on Sundays.

58:09 - I get it. Though my thought would be to do it on Tuesdays, to stay further away from the holidays. But I’m game. Okay. Oh, wait, Tim said he’s game.

58:23 - So I’m not game. I’m out. Ha ha ha. All right, I will do the modifications for those two events on the Tuesdays. It’s not like my head is exploding; it’s that we’re getting to these interconnected components of these discussions, because it’s a long thread. And I think if we do a little bit more, then the Thursday conversation on the seventh is going to be an amazing conversation, and even more will come out of it.

But I am going to ask for help getting it publicized, because if we get about 20 people, I think it’ll be a critical mass.

59:05 - So I just want to say that Mark’s right about the AI; it’s really got to pick up the pace. But AI is deep. And so the real key is figuring out where we apply the human knowledge, and actively and cognitively saying: this is handleable by the AI with a training system, and this is where you have to turn real human eyes on it. And you define the levels so that you can actually put in intelligent alerting systems, because you need to turn down the alert levels, and you have to use AI for that.

But you have to be able to say: this isn’t handleable by AI at the moment; we don’t have a system that’s deep enough, or with enough knowledge or expertise, to apply it in a training situation or whatnot. So, yeah, the real key, and somebody’s going to make lots of money on this, is figuring out how much AI is enough, and yet not too much. Anybody want to start a company? Yeah, well, I mean, we can all assume what AI will eventually accomplish. It’s an easy stretch of the imagination to picture it as two dragons fighting each other: one AI attempting to break in, and another AI trying to figure out how to keep it out.

Right. And who gets the keep-out AI in first may be what saves the planet from the break-in AI, and vice versa.

61:02 - Yeah, I also see those two dragons: over-Tokyo dragons.

61:08 - The dragon masters overseeing them and tweaking them here and there. But this is like Godzilla in Tokyo. Because the one that you’re going to use is Mothra.

61:21 - Right? No, but there are people behind them, trying to steer. What’s going to happen, though, is that even the good robot, even the good AI, is going to wreak havoc as it shuts down systems trying to protect them. It’s going to say, I need this building out of the way; sorry, all you users, I’m knocking you over until I’ve dealt with the threat.

61:43 - Right? I mean, this is an incredibly complex discussion topic, right? We could literally spend a weekend around, you know, a pod fire, and not cover all of it. I mean, never mind, let’s just start with the pod.

62:03 - But if you think about how complex our systems already are, and then you think about how difficult so many organizations find it to keep bias out of AI, imagine someone who thinks they can instantaneously make a change to the way AI is doing something, without better understanding what the side effects of that change might be as they ripple through the entire environment.

62:34 - I love how you did that without even saying the word Zuckerberg.

62:38 - Yeah. Well, everybody, I’ll officially move it in Cloud 2030. I already moved the invites, and we’ll keep going. This is amazing. Thank you, everyone.

62:51 - Thank you, guys. Wow, it’s amazing to go back even a couple of months and realize how many of these core topics, automation, AI, and controls, play out over and over again on a routine basis for us. So if these discussions are important to you, and you have an opinion, please join us.