Hello, I’m Rob Hirschfeld, CEO and co-founder of RackN, and this is the DevOps Lunch and Learn from March 16. The topic of the day is edge computing, and we talk all around edge and its use cases and how it would work. If you’re interested in edge and you’ve been talking about it for a while, you’ll find a lot of themes that we’ve covered in the past here, and maybe some new insights. I mean, for the most part, what’s sitting in town is getting into an S3 bucket. Limelight has, I forget, 40 gigabits of direct peering with Amazon just to get things out of S3 buckets.
00:40 - So the interesting thing to me is, this is giving me flashbacks to my OpenStack days, specifically around the AWS compatibility arguments.
01:00 - And there was a ton of time that Randy Bias spent pounding the table talking about compatibility with the AWS APIs.
01:08 - And the thing that was interesting is, I never found that to be the problem. The thing that always burned us was the operational patterns in AWS. Where those were different, it was much harder to work around than when the APIs were different. So something like needing a directory for the bucket name, you know, that ended up hard coded and stuff, and you can change the API, but deep, deep assumptions about a path, or an operational practice, or, you know, whether or not you can set the SSH parameter...
01:50 - This was much more damaging from a compatibility perspective.
01:55 - And the shift from ACLs to IAM. Yeah, no, that was interesting.
02:07 - So, I had a thing that I was thinking to talk through. And, you know, once I started thinking about it, I realized this might grow into something bigger.
02:19 - But I was wondering if we wanted to talk about edge operations: like, walk through the idea of what it would take to run real infrastructure in, and I’m not too worried about defining edge here, a lights-out, small-footprint, non-typical data center, and go through what the operational constraints of that would be, or what it would look like to do a good job of it.
02:49 - And when I started thinking about it, that’s why I was thinking about bringing it up today and having the group sort of riff on that.
02:58 - It’s obviously an interesting topic for me, but it strikes me that it could actually be a multi-day topic, and we could pull in people more broadly, because some of the communities that are talking about this stuff are doing a really weird job of it, for us.
03:20 - It’s, sadly, it’s like, I’ve been watching the OpenStack stuff go.
03:26 - The open infrastructure stuff. And they’ve had this edge thing where they’re writing white papers and things like that, and they just always start with: how do we make OpenStack work in this use case? I don’t feel like they start with: how do I operate this data center? I don’t feel like they’re doing that, and I haven’t been to a lot of meetings lately, so maybe they’ve leveled out and they’re flying right.
03:51 - They’ve got a ton of smart people, but I don’t feel like they’re having the conversations in actionable ways, more broadly actionable ways.
04:03 - I guess we’re talking about latency. Why? I wish we’d stop talking about latency, because there’s no use case for it. Alright, unroll that; I’m super interested in hearing the rant behind that, and then why, and what they should be talking about instead.
04:31 - Think about the spectrum of latencies and applications.
04:36 - Right? On the extreme edge of that, you have power grid management. Latency expectations: they would accept two milliseconds; they’d prefer something sub-millisecond.
04:50 - There’s not a network on the planet that can meet that.
04:54 - Nor is there going to be anytime in the foreseeable future, and I’m not talking the next five years; I’m talking the next 50 years.
05:03 - Right? So that’s one step of things. Then start moving up that stack: what are the next things that drive applications? So in the 20 to 30 millisecond range, you have fast-twitch gaming, right? And that’s being generous, because human perception is eight milliseconds; a latency requirement needs to be faster than what I can actually perceive.
05:30 - Okay, so that’s your next real kind of use case in these things. And you think about things like voice assistants: we don’t like the delays and these other components to it.
05:41 - So what are the speeds of a wired network today? What are the speeds of 5G? Right, we originally talked about building those things in, like, the sub-five-millisecond range, then it started getting talked about in the 10 millisecond range. Next time you watch one of those Sprint commercials where they do the speed test download, watch the milliseconds: they’re 23 to 25. Right? Because there was no business driver to build a lower latency network, even though we had the technology.
06:11 - Okay, so ultra low latency is just a way of saying I’m clueless.
06:20 - And so that’s one aspect of it. And then, thinking about the edge, it just gets down to envisioning what this edge infrastructure looks like versus the cloud. If you look at the way Microsoft built out their data centers, for us, it has been two very different strategies in the data center roll-out. If I look at Amazon, they’re big and tall; they built vertical data centers. If I look at Azure, they built out; they built multiple data centers.
So Azure in Germany has five data centers, right? Those are all certainly near access networks; they’re all certainly going to be in that sub-20 millisecond range. So why do I need to go any deeper? Not for latency.
07:07 - Good. I’ve often wondered about that.
07:13 - In the metros, right, the metros are pretty well served by becoming mainstream, commercial cloud data centers.
07:22 - So what do you think? What’s the upper edge? Yeah, well, so first of all, think about Microsoft right now: all the major metros, the dense population stuff. I don’t need to go any deeper than that. I mean, I spent five years trying to find use cases for this shit.
07:51 - And could not find it. And this is part of the problem, but I can’t talk about the data center stuff. But that was really the thing.
07:58 - So there’s kind of four things that would drive things to edge computing. Everyone talks about latency and throughput, and it’s just not there.
08:05 - I mean, with things like BBR, we solved throughput problems in TCP stacks.
08:10 - You know, and so, yeah, the second driver towards edge is volume, right? The sheer amount of data that is being generated can’t be carried over traditional backhaul. Right. And connected cars is the use case everyone goes to on those.
08:29 - But when I get to those, my question on that is: what’s the end-use case that justifies it? Where’s the compute processing power today? It’s in the vehicle.
08:45 - Right, like Ford has a backlog, I think it was Ford, because they had a shortage of chips. Right? So if we need to do data aggregation or data distillation, why wouldn’t we use the compute capacity in the vehicle? Great, it’s generating massive amounts of data, but how much of that data has value outside of the vehicle? So I’m tempted to answer that question, but if we jump down that rabbit hole, will we be able to pull back up to your other two items? The other two, and then I’ll explain. So the third one we get to is locality.
Right, which is: the data is only consumed locally. And I tend to use Zoom as an example of something that has good locality to it, right? The odds are, if you’re doing your Zoom video, you’re consuming it near where the video is actually generated. There’s no reason to backhaul it to a central server and then access it from there.
09:47 - Or the other one you get into with that one is data sovereignty, where you’re simply not allowed to move it from a location. Sure, right. It’s restricted to an enterprise, it’s restricted to a country like Sweden, right, and that’s kind of the pieces to it. And then the fourth factor, which is the only one that I think has merit, is scale: the sheer volume of transactions means you can’t do it out of a centralized data center.
So, um, yeah, there’s a company in Cupertino having problems with handsets and the number of connections they need to maintain, and we’re talking about a few billion handsets. Right? What happens in 10 years when those devices are in the trillions? Right, so of all the things that I think are leaning towards edge compute frameworks, I think it’s the explosion of, not edge devices, I know there’s a lot of edge devices, but there’s going to be an explosion of consumer devices and enterprise devices that results in a scaling problem that we probably can’t solve with our traditional footprint.
But more importantly, probably can’t solve with our traditional software methodologies.
10:56 - I agree with that, more at the data level than at the infrastructure level.
11:10 - But when you scale up, again, like when you say trillions or more, it becomes very hard to put that data in an ACID-compliant data store. You will have to go to, like, eventual consistency, whether it’s sharding or some other approach. But be more specific, right? It’s impossible.
11:43 - Yeah, well, at least with the current technology, yes.
11:50 - I think it’s impossible unless it’s happening in a geographically tightly bounded area. It’s just flat out impossible.
12:00 - Yeah, and I still don’t see us being ready for that, not from a society perspective, but from a community perspective.
12:23 - To take that as a given, there are still too many cases where the assumption is that the data is always correct.
12:35 - I mean, you end up with situations like Amazon’s outage from last year, where they just couldn’t handle that much information anymore. And that’s because they put it all in one central source of truth. When you scale up, you cannot have one source of truth anymore, which means that you cannot have absolute truth.
13:03 - But even what they have today is at the limits of absolute truth.
13:09 - Amazon offers a range of solutions around that, right, that they use internally. DynamoDB is kind of one of the better examples, where they’ve actually given you a tool that’s pretty good: it surfaces the conflict-resolution problem up to the application developer. It’s just like doing a git commit: if there’s a conflict, you have to resolve it at the application level.
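A minimal sketch of what “resolve it at the application” looks like, assuming a toy key-value store; the replica dicts and the `resolve_cart` merge are entirely invented for illustration, not DynamoDB’s actual API:

```python
# Toy sketch of application-level conflict resolution, in the spirit of the
# discussion above: the store hands back conflicting versions and the
# application decides how to merge them. All names here are illustrative.

def read_versions(replicas, key):
    """Collect every distinct version of `key` seen across replicas."""
    versions = []
    for replica in replicas:
        value = replica.get(key)
        if value is not None and value not in versions:
            versions.append(value)
    return versions

def resolve_cart(versions):
    """App-specific merge: union the shopping-cart items (Dynamo's classic example)."""
    merged = set()
    for v in versions:
        merged |= set(v)
    return sorted(merged)

# Two replicas diverged while partitioned:
replica_a = {"cart:42": ["book"]}
replica_b = {"cart:42": ["book", "lamp"]}

conflicting = read_versions([replica_a, replica_b], "cart:42")
resolved = resolve_cart(conflicting)  # application-level merge, like a git conflict
```

The point is that the data store stays simple and available, and the merge policy (union the cart, last-writer-wins, etc.) lives in the application, exactly where a git user resolves a conflicting commit.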
13:37 - But so anyway, those are kind of the four. And the way I would draw this on a whiteboard is: take those four quadrants; the more impact those quadrants have, the more likely it is for that application to be pushed to the edge. Right. So, yeah, that would drive the economics of it. But, um, yeah, it’s not latency. Sorry, that’s my rant.
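The whiteboard model could be sketched as a toy scoring function; the per-driver scores, the weights, and the threshold below are all invented for illustration of the idea, not anything from the discussion:

```python
# Toy sketch of the "four quadrants" model: score an application on each
# driver (0-10) and call it an edge candidate when the combined pull is
# strong. The scores and threshold are invented for illustration only.

DRIVERS = ("latency", "volume", "locality", "scale")

def edge_pull(scores, threshold=20):
    """Sum the per-driver scores; above the threshold, economics favor edge."""
    total = sum(scores[d] for d in DRIVERS)
    return total, total >= threshold

# Per the rant: scale is the driver with real merit, latency mostly isn't.
connected_cars = {"latency": 3, "volume": 7, "locality": 6, "scale": 8}
total, is_edge = edge_pull(connected_cars)
```

It is just a way of making the “more impact in those quadrants pushes you to the edge” statement concrete enough to argue about specific numbers.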
14:07 - Oh, are we back? Yeah, I don’t think it’s the short latency. That said, I think that there is a latency component.
14:19 - But it’s interesting that the thing that I see is actually a combination of the pieces that you’re talking about.
14:31 - Where, you know, I see edge as more environmentally based computing. So you’re reacting; you have local things reacting to local things. And the latency, to me, is not, you know, a game server to the device. The latency is device one talking to device two and having to trombone through a remote cloud interface, and, you know, not having access when that’s down. But, you know, when I look at what I could see happening, if we can fix the scalability and operational challenges, is that you could have a much lower cost to add an environmental device, whether it’s a sensor or camera or a motor, something that’s reacting to its environment, into the environment that it needs to operate in, and then have it get and share data locally to that system.
The car example, and the reason why I was tempted to go down the rabbit hole on the car, is that the idea that the car is doing all the processing, which is where it has to be right now, is very much like, oh, we’re replacing drivers with computers.
15:47 - But that’s not the better design.
15:51 - Ultimately, I would expect us to have intersections that manage traffic through the intersection. The intersection would track all the pedestrian traffic, track the flow of cars through it, communicate to all the cars in the system, and talk to other intersections to see what’s coming to them. We’re not going to share all that information into the car; the car should be, you know, controlling its local function, receiving the information it needs from the local traffic controls.
And then, you know, limiting it, because the idea of putting more and more compute into a car, one, has a finite bound, and two, it needs to level off and decrease, because it’s going to be super expensive and power hungry to make cars into data centers.
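The intersection-as-local-controller idea might look something like this sketch; the message structure, field names, and positions are all invented for illustration:

```python
# Toy sketch of the idea above: an intersection controller aggregates its
# local sensor state and publishes only the distilled advisory each car
# needs, rather than shipping raw feeds to a cloud or to every car.
# All structures and fields here are invented for illustration.

def advisory_for(intersection_state, car_position):
    """Distill full intersection state down to what one car needs to know."""
    return {
        "signal": intersection_state["signal"],
        "pedestrians_in_crosswalk": intersection_state["pedestrians"] > 0,
        # Only report cars ahead of this one, not the whole local picture.
        "cars_ahead": sum(1 for p in intersection_state["car_positions"]
                          if p > car_position),
    }

state = {"signal": "red", "pedestrians": 2, "car_positions": [5, 12, 30]}
advisory = advisory_for(state, car_position=12)
```

The design choice this illustrates is the one in the transcript: the heavy state lives at the intersection, and the car only receives the small slice relevant to its local function.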
17:00 - I’ve had that conversation on two different planes, right.
17:04 - You know, one is a 5G network where you have smart streets, or whatever you want to call them, connected in. And yeah, there was at one point a company we worked with that was building intelligent light poles.
17:20 - They actually put data centers inside of them; they actually had a cell on top, with power, interconnected with fiber.
17:27 - And actually, they looked at providing, like, a drone landing pad, where you could actually land a drone and provide drone coordination, an air traffic controller for drones, if you would. And we actually talked about running, you know, Kubernetes as a distributed control plane on top of those at a city level, to coordinate that kind of stuff. And the communications piece was either one of two things, or a combination. There’s a number of things being worked on for vehicle-to-vehicle.
Right. So it’s short range, think Bluetooth-like interconnection between vehicles, so the vehicle three cars in front of you, behind the truck you can’t see, you could still get its view. Right, so that’s one set of possibilities. The other possibility, obviously, is small cells on 5G or, more importantly, private 5G.
Yeah, the stuff they’re doing with private 5G, they’re doing it almost always around underprivileged communities.
18:36 - Yeah, they do terminate IP at the cell tower, which is not the case with the others. So if you needed to be lower than 20 milliseconds, you know, private 5g could potentially do that, or some combination of those things together. So we have talked about those different types of use cases.
18:57 - I know of no deployments where anyone’s really tried to do that stuff. The closest thing that came to that was looking at the drone example: doing filming with drones that fly around windmills and see if they need repairs, right? You have to do the high-definition video components, or go to a damage site and do those pieces.
19:20 - There’s quite a bit happening on the drone side.
19:24 - And then, I had to check that in a year or so, the problem they had is that the chips in the majority of those drones are coming out of China. And then basically the US government said no, and a whole new series of companies has to be started.
19:38 - So there’s that piece of it. But even, like... yeah, go ahead. Did you ever hear anything more on that? It’s been a while since he was working on this. Eventually it gets to, like, TCP congestion control and a bunch of other things.
20:01 - So Dan had a thing where he basically declared we got the internet wrong; we did a bad job.
20:08 - And started advocating. And really the premise for it was that the cost of storage is decreasing faster than the cost of compute is decreasing.
20:26 - John, we’re losing your audio again. He started modeling and advocating what he called content-centric networking: what you wanted to find was a piece of content, and how you found it, you didn’t care about. But it relied heavily on device-to-device communication, which may be closer to where you’re thinking. Right. The challenge we always had with these things is: are you willing to give up storage on your cell phone for better access, or storage on whatever device you have in your home? So the trade-off is, you trade off storage on the device for access. Yeah, so let’s imagine you’re on an airplane that’s got no Wi-Fi.
And you want a copy of The New York Times, and someone else on the airplane happens to have a copy of The New York Times. Right? So I could obviously access it from that device and download it. Yes, that’s what I mean by device-to-device access. So think about that type of thing, right, your file-sharing services. But the average consumer, both because it’s their device and they want to store their own data on it, and because of privacy concerns, generally doesn’t engage in that kind of behavior.
21:58 - Interesting. The reason why I’m thinking about that type of stuff: it’s convenient, but I’m not sure that any amount of storage distributed through a group is going to solve your “can I find the thing I’m looking for” problem, especially because you have to also index and track it, which then requires some external, some distributed index that people participate in. So you mean like IPFS? The tracking is very much like routing.
Right? If you’re building a BGP network, I don’t need to know the entirety of the network.
22:52 - I just need to... We keep losing John’s audio.
23:00 - You cut out, John. Sorry. It could be either. I’m just saying he treated it like a router: there was a policy, like keep content for 14 days, and if the content wasn’t there for the request, the request got forwarded up the routing chain. It was actually very good.
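The treat-it-like-a-router idea, serve from a local store under a retention policy and forward misses up the chain, could be sketched like this; it’s a toy, not any real CCN/NDN implementation, and all the names are invented:

```python
# Toy sketch of a content-centric node: serve a request from the local
# store if the retention policy still holds the item, otherwise forward
# the request up the routing chain. Purely illustrative.

class ContentNode:
    def __init__(self, upstream=None, keep_days=14):
        self.upstream = upstream      # next hop up the routing chain
        self.keep_days = keep_days    # local retention policy
        self.store = {}               # name -> (content, age_days)

    def put(self, name, content, age_days=0):
        self.store[name] = (content, age_days)

    def get(self, name):
        entry = self.store.get(name)
        if entry and entry[1] <= self.keep_days:
            return entry[0]               # local hit under policy
        if self.upstream:
            return self.upstream.get(name)  # forward the request up
        return None

core = ContentNode()
core.put("nytimes/front-page", "today's paper")
edge = ContentNode(upstream=core)

hit = edge.get("nytimes/front-page")  # local miss, served from upstream
```

Like a BGP router, the edge node never needs the entirety of the network’s content; it only needs its policy and a next hop.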
23:19 - Just too radical for people. Yeah. I mean, it’s part of what we turned up with the 2030 stuff, when we went off on the ten-year future. Go ahead, Greg. So Sony did a research project on it, where they used the cars as basically HTTP caches, because one of the biggest problems in, like, downtown Tokyo is that they overload the cellphone towers. And so they were setting up, quote, smart traffic intersections and were basically using the cars as HTTP caches, CDNs.
And it basically cut the outbound traffic to certain sets of the websites and stuff. Overall, it cut down the traffic by like 50% out of that cell region, just by basically temporarily caching webpages on cars and other things.
24:21 - And so when somebody was walking around on the street, asking for the latest Times or whatever, the cars didn’t know; you weren’t actually asking them for the data, you were just passing it around.
24:34 - It was an interesting use of that kind of technology, and it seriously offloaded the upstream cell towers. But you still have to have, you know, a bunch of infrastructure in place to create localized elements. And to me, that’s the part of the edge that we don’t necessarily talk about: that kind of set of things where you’re like, well, that’s actually a win.
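As a back-of-the-envelope check on that 50% number (my own toy arithmetic, the traffic figures are invented, not Sony’s data): the upstream offload scales directly with the fraction of requests the local caches can answer.

```python
# Back-of-the-envelope: how much upstream tower traffic a local caching
# layer removes. The volume numbers are invented for illustration.

def upstream_traffic(total_gb, cache_hit_rate):
    """Traffic still leaving the cell region after local cache hits."""
    return total_gb * (1.0 - cache_hit_rate)

before = upstream_traffic(1000, 0.0)  # no caching at all
after = upstream_traffic(1000, 0.5)   # half the requests served locally
```

So a 50% cut out of the cell region corresponds to the local layer answering roughly half of the requests, which is why the carrier cares even if nobody else finds it exciting.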
25:01 - It’s not amazing, and it’s not really awesome.
25:03 - But the carrier cares a lot. So that may be a reason to do it.
25:10 - Yeah, you know, the way most people would get onto that is, we did something similar: we basically added compute at cell towers. And if you’re at an airport, you know how good the quality is there. I mean, to me, I’m in strong agreement: the idea of having local infrastructure that you can use in fungible ways seems like a really valuable component for this, right.
25:46 - That, to me, is where we get off: a lot of things we’re building for edge right now feel very much like IT silos, or rebuilding Amazon in, you know, my neighborhood, but it doesn’t feel like either of those is quite right. Yeah. Sorry, let me close my window.
26:06 - But I think, the way people think about edge today, where I was getting to: the dream for the longest time was extending my Amazon to the edge, right? That’s going to be driven by an IoT application, or it could be driven by just an extension of my centralized compute down. I’m not a strong believer. I think possibly in 10 years the use cases may come around for that. But what I really have started to understand is metropolitan-based applications, not edge, but regional applications.
Like I said before, I could start to see a number of regional applications evolve that weren’t necessarily, say, edge. Right.
26:51 - But they certainly weren’t a centralized cloud concept of how we’re going to expand these things out. And when you talk about the connected cars, if you think about it at a regional, not a cloud-centric, view, I think you can find things that start to make sense.
27:15 - So let me go down that path. If I have a regional connected-car application, I might be able to maintain all the traffic control, all the components, within an appropriate milliseconds of latency, including camera feeds from the intersections, into a regional data center with cost-effective networking. Is that sort of where we’re going with that? Sure. I think, you know, think about how planes move in and out of basically different zones that are monitored by different air traffic controllers.
27:55 - And if you think about infrastructure in that same light, right, you would think about moving into the San Francisco zone: my vehicle is now connected into that network. Right? So the parallels with the way aircraft have built these out, with autonomous coordinated control zones, that actually kind of makes sense. Hmm.
28:19 - So I’m thinking about the Swim AI cases, where they’re building AI and digital twinning into the sensor networks. And maybe this is where there’s a mismatch, right? Because they’re doing a fair bit of AI and sensor processing and event processing on devices within the local zone, and then you’re subscribing to updates from that digital twin. Very different than what we’re talking about from a centralized data center.
28:56 - Maybe I’ll just cross the streams. And that’s not helpful.
29:02 - Right, because, I mean... go ahead. I think it’s a class of application inside of this spectrum: doing data distillation.
29:12 - Right. Okay, you’re going through some of these streams to basically cut it down, to eliminate the noise, and get the data down to the things that people are actually interested in, and then provide a mechanism to access it. The IoT case is just a class of applications, and that distillation can occur at multiple points inside the network.
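Data distillation at an edge node could be sketched like this: filter a raw stream down to the readings anyone downstream actually cares about. The sensor values and the threshold are invented for illustration.

```python
# Toy sketch of edge data distillation: drop the noise locally and only
# backhaul readings that are interesting. Threshold is invented.

def distill(readings, baseline, tolerance=2.0):
    """Keep only readings that deviate from baseline by more than tolerance."""
    return [r for r in readings if abs(r - baseline) > tolerance]

raw = [20.1, 20.0, 19.9, 27.5, 20.2, 14.0]  # e.g. a temperature sensor
interesting = distill(raw, baseline=20.0)    # backhaul only these
```

The same function could run in the vehicle, at the intersection, or at a regional site, which is the “multiple points inside the network” part of the argument.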
29:44 - So if you have your bank transactions, and every day you backhaul them into a central location, the technology still provides value, right? It doesn’t require being placed where they actually derive the value, right? I guess maybe we’re looking at the telescope from the wrong end.
30:16 - Why centralized? What value is the cloud providing in this transaction that would lead us to not distribute? Because centralized is cheaper and simpler. And simpler, yeah. I mean, when you’re scaling up, you start with centralized. Even if you’re starting at the edge, like a single location, that’s your centralized location; then you grow, and you say, okay, I’ve got to lift and shift this to the cloud.
So it can be bigger. It’s still the same design, the same architecture, which is centralized.
31:04 - Only after you start outgrowing the centralized setup in the cloud is when you start considering, okay, let’s distribute it.
31:14 - So it’s a matter of inertia. The inertia part, it’s part of it, right? So the initial part, for example: when Microsoft is building out data centers, they have a 30% year-over-year growth rate for just generic computing. Right? So they have an inertia that allows them to build out and do things that enable edge applications that, if you’re just trying to make a pure edge play, you don’t have. Right? So in our case, if we’re trying to do, for example, SD-WAN distributed down to the cell, how many nodes do I need to deploy for an SD-WAN? What’s my command-and-control stack? Am I going to put OpenStack out there, and how many nodes do I need for that? A lot.
So, you know, when you get into these things, the metaphorical streetlights and cars and vehicles, the command and control to work in those environments needs to be fundamentally re-architected. Which brings up a good opportunity for me to open another line of thinking. So, Rob, like you mentioned, the “why centralized, what’s the benefit”: another way to look at it, I think, is to look at examples of edge computing that have worked.
32:51 - And the thing that comes to my mind, related to command and control, is botnets. Botnets are inherently edge computing: you have the command and control there, and you have mobile command and control in many cases as well.
33:10 - So, that’s an interesting way to think about it. Yeah. In terms of edge computing design, they’re doing pretty well. I mean, it’s also a very domain-specific implementation, but I think, from an architectural perspective, you may have some lessons to learn from that. You know, I almost go back the reverse way: one of the things I wanted to do when we were working with the various service providers was to get access to their NetFlow feeds.
Right. That’s the power that the botnet operators have: they’re coming in on channels that are hard to trace, because we don’t have access to the data. You give me NetFlow, I’ll find the botnet.
34:01 - Right. I can track down where that traffic is coming through, I can track down which nodes it went to, and I can figure out the source of whoever was using that particular botnet network. But that’s actually, you know, that’s stopping botnets.
34:16 - Right, it requires access to data. But it’s actually a good edge compute use case, right? Because I can do security and aggregate data at the operator level. I can do a bunch of really interesting things.
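The “give me NetFlow, I’ll find the botnet” idea could be sketched as a toy aggregation at the operator level: many distinct internal hosts all talking to the same external endpoint is a classic command-and-control signal. The flow records, addresses, and threshold here are all invented for illustration.

```python
# Toy sketch of operator-level flow analysis: flag external endpoints
# contacted by many distinct internal hosts, a rough C2 heuristic.
# Record shape and threshold are invented for illustration.

from collections import defaultdict

def suspected_c2(flows, min_hosts=3):
    """Return external endpoints contacted by >= min_hosts internal hosts."""
    by_dst = defaultdict(set)
    for src, dst in flows:
        by_dst[dst].add(src)
    return sorted(d for d, srcs in by_dst.items() if len(srcs) >= min_hosts)

flows = [
    ("10.0.0.1", "203.0.113.9"), ("10.0.0.2", "203.0.113.9"),
    ("10.0.0.3", "203.0.113.9"), ("10.0.0.1", "198.51.100.7"),
]
candidates = suspected_c2(flows)
```

The point being made above is that this kind of aggregation only works where the flow data already is, at the operator edge, not in a central cloud that never sees it.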
34:31 - I guess, then, the botnets are a good, interesting example to me, because if edge is us building a whole bunch of thin devices,
34:42 - edge is not going to happen. But if we keep putting processing capability out there, if it keeps getting cheaper and more ubiquitous, right? That’s where, I guess, to me, the botnets are along those lines of thinking: then there’s a lot of compute infrastructure available
35:01 - environmentally, right at the edge. It’s not a question of, you know, building something new; it’s just a question of leveraging capabilities that are there.
35:12 - Yeah, spare cycles. A centralized architecture, or, more to the point, a very siloed design, leads inherently to the waste of resources. And in many cases it’s warranted, because you might need it, or you need to guarantee it, right. But in another case, you really only need it for burst. You see this a lot in CI/CD.
35:55 - There is a very clear rift between two approaches. One is, you have your dedicated worker instances for CI.
36:10 - The other one is, you have, like, on-demand billing: you bring up, you consume the resources, and then they go back to a pool.
36:21 - I mean, the latter is the more cost-effective one when you have short workloads; the former is the more cost-effective one when you have longer workloads. Like, for example, if your compile time takes an hour even on a Threadripper, you’re going to need those dedicated instances.
36:47 - Particularly if you need to cache the intermediate results to cut down on build time. Yeah.
36:54 - I mean, if you look at CircleCI, sure, they do dynamic caching between builds for their workers, but they have a limit. Like, if you go above a certain thing, which was 500 megs cache size, it gets invalidated.
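The dedicated-versus-on-demand trade-off can be checked with toy arithmetic; all prices below are invented for illustration, not any provider’s actual rates:

```python
# Toy cost comparison for CI workers: dedicated (flat monthly) versus
# on-demand (per build-minute). Prices are invented for illustration.

def monthly_cost(builds_per_month, minutes_per_build,
                 dedicated_flat=400.0, on_demand_per_min=0.05):
    dedicated = dedicated_flat
    on_demand = builds_per_month * minutes_per_build * on_demand_per_min
    return dedicated, on_demand

# Short builds: on-demand wins.
ded_short, od_short = monthly_cost(200, 10)
# Hour-long compiles: the dedicated instance wins.
ded_long, od_long = monthly_cost(200, 60)
```

The crossover point, the build duration where the two lines meet, is exactly the “short versus long workloads” distinction made above.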
37:22 - But would that mean that we could, in that analogy, move that into more and more environmental compute? Like, what keeps somebody from... servers are relatively cheap, right? Maybe this is the right question to ask for edge. Right, as somebody who has, you know, at least five computers running around my house, usually idle: is it reasonable that all that compute power can be used somehow? And then would that be edge computing? I mean, this is like the SETI stuff and things like that.
But that’s not particularly useful to me; at least, it’s just a distributed algorithm, just reusing compute cycles.
38:11 - It is a good question. I mean, it poses a fundamental problem: it’s not just the free compute cycles that you have to consider, it’s also the cost of running those cycles.
38:31 - And we’ve kind of gotten used to having our devices on 24/7, but that didn’t used to be the case.
38:42 - And part of it is that by having them on 24/7 like that, we have them available 24/7, right? But there is the cost, the power cost.
38:56 - Well, please don’t run SETI on my phone; I’d like to have a battery life of more than five minutes.
39:03 - Exactly. Well, this is where I went: your Tesla could be a Bitcoin miner when it’s parked and fully charged and plugged in, right? There’s plenty of GPU there for you to be doing computing; you know, you could be using those GPUs at night when the car’s not in operation.
39:27 - There’s also the question of ownership. In this case, people are going to not be happy with allowing other people to use their devices. Like, it’s mine and I do with it as I want. And even for the people who are willing to either donate it or make it available at some price,
39:58 - there’s the possibility of abuse of it as well. Well, but to me, you’ve just backed into why, what is it, the telcos would potentially want to do that, right? Because you’re basically saying, I have the potential of compute resources, and I don’t know who’s going to run on it, but they are going to have a set of applications, so they’re going to want to run something and have a choice. Because in some regards, everything you’ve talked about has some policy implication to it.
Right? “I really don’t like cancer research, so don’t ever run a protein-folding program on my device.” Right? You have all sorts of that kind of thing. And part of my question then becomes: okay, is it really a telco’s place to put that at the edge for people to choose arbitrarily? I think we’re all struggling to say what’s right.
40:55 - And the only one that functionally comes to my mind, if we flip it to "why not centralized?", is the 5G streaming wave that I envision happening. That's where data locality matters, and being able to stash data at a regional or potentially even at a cell level becomes hugely important, because I can't backhaul all that back to even a regional site and still have my cell networks survive.
41:35 - Yeah, yeah. So maybe you've gotten to the point I was trying to make. To put it a different way: I started out by arguing that, yes, we have all of this waste of computing power.
41:57 - I also get why we have this waste, because, again, of the conflicts of interest, by the way.
42:07 - And that's kind of my point: the way our computing architecture, or the way our computing power distribution, is currently designed, it's inherently centralized, and it is going to require a very radical redesign to make it possible to decentralize it. Going back to your argument, Greg, about the telco thing: would you consider the Comcast Xfinity thing to be edge computing, where they use people's home internet for wireless offload? I don't know why not.
43:02 - I mean, I think it is effectively an edge computing system right now. It's very proprietary, very localized, very focused. But sure, why restrict what somebody could run there, if they came up with something meaningful to provide? You could see, for example, Ring wanting to offload
43:27 - camera analysis to that box instead of pushing it back through the network.
43:38 - Because on those connections, there's no compute capacity to handle that.
43:44 - That's what I was trying to get to in bringing that up; I got stuck.
43:51 - Where I take this: Comcast, in their case, is effectively putting users in the same conundrum that you brought up, in that the user has no control over what the third party is using the bandwidth for.
44:12 - They could be looking at other people's kids' pictures.
44:20 - And I mean, whose responsibility is it if that network is used for illegal content? Compared to general internet traffic, DOCSIS is probably more secure, actually quite a bit more secure. Security was built in from day one because they didn't want people stealing their content; right, the days of "hey, I found the cable and plugged in." So the content flowing over that is scrambled, and the key changes every 10 seconds.
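(Editor's note: to illustrate the rotating-key point, here is a conceptual sketch of time-bucketed key rotation. The derivation scheme below is hypothetical; real DOCSIS uses BPI+ with its own key-management protocol, not this code.)

```python
import hashlib
import secrets
import time

ROTATION_INTERVAL_S = 10  # the "key changes every 10 seconds" idea


def current_epoch(now: float) -> int:
    """Number the 10-second windows since the Unix epoch."""
    return int(now // ROTATION_INTERVAL_S)


def derive_traffic_key(master_secret: bytes, epoch: int) -> bytes:
    """Derive a short-lived traffic key for one window; a captured
    key stops working as soon as the window rolls over."""
    return hashlib.sha256(master_secret + epoch.to_bytes(8, "big")).digest()


master = secrets.token_bytes(32)
epoch = current_epoch(time.time())
key_now = derive_traffic_key(master, epoch)
key_next = derive_traffic_key(master, epoch + 1)
assert key_now != key_next  # every interval yields a fresh key
```

The security property being gestured at in the conversation is exactly this: even if one traffic key leaks, it is only useful for one short window.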
45:03 - So we don't have anything on the general internet that even comes close to what you see in DOCSIS 3.1 or its predecessors. But I can tell you what Comcast cares about: how do I tack another five bucks onto your bill?
45:17 - What service can I add in there, whether it's game acceleration or extended parental security? They're trying to figure out how to monetize that footprint they have, knowing that the value they have today is effectively the access, the cable in the ground. And, you know, five years from now, as usage moves more to 5G, it's no longer about physical connectivity; as we become more of a wireless society, that real-estate value diminishes.
So it's all about how to generate additional billing. If they thought they had an application that involved edge compute and they could do a rev share, they might do it. But think about the pragmatics of that: practically, all these use cases are only fine if you take them out of the real world.
46:09 - But let's imagine you had to switch out 100,000 of them.
46:19 - What do you mean, that’s a draft? Sorry, Rocky, I didn’t see you.
46:26 - That's okay. But yeah, I'm an Xfinity user, and I'm really pissed off that they're exploiting the fact that I need to have their box (I can't use a plain old modem in my house). Since their box is in my house, they're using it to sell phone service to other people, sucking down my bandwidth and not providing me the bandwidth they claim they're providing.
46:51 - So yeah, they want to get those boxes in your house and charge you for them. And the only reason they have those boxes in your house is so they can tack on other people's uses. I'm one of those people: I talked them into letting me use my own modem; I refuse to use their hardware. For that reason, I have to actually wire the house before I can do that, so I have to get under the house and put in two cable connections.
Once those are in, then I can go onto the modem I actually purchased and get off of theirs. But yeah, it's an interesting issue, because I have unstable bandwidth through them,
47:45 - even without other uses. But they're the only game in town in San Jose.
47:53 - They're the only people who can provide better than about 10 megabits per second right now.
47:59 - Oh, same for me. My only other choice is 4G.
48:05 - Yep. These edge conversations always leave me more confused than when I started.
48:18 - Sorry. No, it's good. Because what I ultimately think is that we have a lot of different things that we're all calling edge.
48:28 - And the real use cases aren't that well understood. I think there are a lot of people talking about it theoretically.
48:40 - The challenge is running it down, right? You asked about the case where the cost of deploying the infrastructure generates enough return to make it worthwhile, versus the physical limitations. Like Bitcoin mining: you probably don't want just any PCs you have in your house to be Bitcoin mining.
49:04 - That's right. Well, yeah. And the Bitcoin... I want to be respectful of the time, so we'll pick this up. I want to think about how to narrow it down, because there are operational questions that I think get really interesting to ask if we can narrow the set of use cases and then start thinking through how we manage infrastructure like that. And it's hard because, first, we have to talk about what the use case is and how we use the infrastructure.
So I have to think through how to bracket the conversation so that we can talk about actually managing real, distributed infrastructure. Way back at the OpenDev conference that OpenStack ran, we came out with four different use cases in that edge computing OpenDev work. So it might be worth revisiting what they've done since then.
50:21 - Or just revisiting the original white paper. But we got the telecom perspective very well from Beth Cohen, and then there were a couple of others, and they had, I believe, four generic areas of use cases. Yeah.
50:42 - But I don't think they had botnets on it. Yeah, that's fair, actually. I've never thought of that before: botnets as edge computing.
50:54 - Yeah, you know, let me reach out to Jason Hoffman. Oh, I love Jason. Yeah, please. I mean, I'd be curious to see where they go with MobiledgeX and the use cases they came up with; they were pretty active. What I liked about Jason is he wasn't the average telco (no offense) whose use cases were not real world.
51:21 - I think Jason was trying to do a much more usable thing. Let me ping him. Okay. Yeah, it'd be fun to bring him in.
51:29 - Cool. All right, everybody. Thank you for entertaining my conversation on edge. I appreciate everybody's time. Thank you for joining us for a DevOps Lunch and Learn and a great conversation about edge. I keep leaving these edge conversations feeling like I know less than when we started, and that shows there's a lot still to be thought about, and that we really need to figure out how to narrow the topic. I'm going to be doing just that and trying to come back with edge operations, covering technologies and approaches for maybe some smaller-footprint use cases in the future.
So please join us for those sessions. The2030.cloud is where you go to RSVP and get into the sessions. Thanks.