Istio on Kubernetes Deep Dive

Dec 5, 2020 09:00 · 4200 words · 20 minute read

Welcome, my name is Damian Hanson and I'm a software engineer with Cisco. You've probably seen this application called Bookinfo, so quickly: it's made up of several microservices, a product page front end, a details service, a ratings service, and a reviews service that consists of multiple versions, with each version running in a separate pod. v1 shows no stars, v2 shows black stars, and v3 shows red stars.

Now, Istio architecture. Istio is made up of a control plane and a data plane. The control plane consists of components called Mixer, Pilot, and the CA, also referred to as Istio Security or Istio Auth. The data plane consists of an enhanced version of the Envoy proxy. If you've been to any of the Istio sessions, you know that this Envoy proxy gets deployed with every pod within your cluster.

To do that, you can either do a manual injection, which is an approach that creates the necessary resources within your manifest on the fly, or you can use an initializer. Starting with Kubernetes 1.7 there's this concept of an initializer. If you're unfamiliar with what an initializer is, it really consists of two pieces: pre-initialization tasks that need to be completed before the containers that make up your pod actually run, and an initialization controller that's responsible for implementing those tasks. There are a couple of flags you need to enable on your Kubernetes API server to run the initialization controller. The initialization controller is a type of dynamic admission controller. An admission controller sits in the path after an API request comes into the kube-apiserver and gets authenticated, but before the object is actually persisted. So if I go ahead and say "create deployment", after that create request is authenticated, the initialization controller gets called.
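
Before moving on, a quick aside on the manual injection option mentioned above: a minimal sketch of that workflow with istioctl looks like the following. The manifest path is illustrative, and depending on the Istio release, kube-inject may need extra flags pointing at the injection or mesh configuration.

```sh
# Manual sidecar injection: istioctl rewrites the manifest on the fly, adding the
# init container and the Envoy proxy container, and the result is applied like
# any other manifest.
istioctl kube-inject -f bookinfo.yaml > bookinfo-injected.yaml
kubectl apply -f bookinfo-injected.yaml

# Or pipe it straight through:
istioctl kube-inject -f bookinfo.yaml | kubectl apply -f -
```
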
So let's take a look at an initializer pod and its config. You can see my cluster is a simple development cluster with a master and a node, and you can see the pods running in the istio-system namespace: the control plane components, the CA, the Mixer, the Pilot, a few additional optional control plane components, the initializer we've been talking about, as well as an ingress (we'll talk more about that later). Let's take a look at the initializer really quick. I'll do a kubectl get po for the initializer in the istio-system namespace and output it to YAML. The first thing to point out is that the initializer is simply a web server listening on port 8083. There are a couple of other things I want to point out as well. There's a configuration that gets mounted at /etc/istio/config, and that configuration actually comes from a ConfigMap; you can see the ConfigMap referenced here.

So let's look at that ConfigMap. It tells the initializer a few things, such as what image to use for the init and proxy containers. Another important piece is that it daisy-chains an additional configuration component, the mesh-wide configuration, using another ConfigMap. Let's look at that ConfigMap really quick as well. This is the mesh-wide configuration: do we want to enable mutual TLS? That option is for the control plane; there's also a mutual TLS option for the data plane, so that the proxies use TLS as they communicate with one another. Do I want to enable tracing? You can use the links I provided to go through this in more detail. It also provides things like how to reach the discovery address, and what the Zipkin address is if I'm using distributed tracing, and so on.

Let's look at the initializer logs. Again, it's basically an HTTP server listening on port 8083, and here's its configuration. You can see some of the configuration pieces I pointed out, telling the initializer, when it initializes these proxies, which init and proxy images to use.

One thing I neglected to point out when we looked at the pods, so let me jump back there really quick: here is one of the application pods, and it's made up of multiple containers. We've got our app container (this one is details), and there's a proxy container running in the details pod, but there's also an init container in the pod. When you do a kubectl get pods, you see two containers running per pod. If we weren't running an Istio mesh, we would only see one container within each pod, but we actually see two, because not only do we have the app container (in this example the details container), we also have the proxy. There's one other container that gets started, this init container, but if you notice, it gets terminated. The init container is part of that initialization: there are things we need to do before actually running the proxy and the app containers. What the init container does is go into iptables, within the pod, not on the actual node or host, and redirect all inbound and outbound traffic to the proxy container. That's exactly how the Envoy proxy is able to intercept all traffic between your app container and the rest of your service mesh, and vice versa.

I mentioned that we can use this initializer to automatically inject these sidecars, but you can be granular in how you do that. There's an initializer configuration where you say: here are the API groups and the kinds of objects I want to use for this initialization. If you look here, I point out deployments; if you look at any of the Bookinfo examples, deployments are used for deploying the applications. So we see the supported kinds, and we use an initializer config to say which types of resources we want to use for this pre-initialization. And take a look here: there's a kind Deployment, but it says false. Within the deployment manifest we can use annotations to say: I don't want this deployment to be initialized with a sidecar proxy. In this example with my Bookinfo services, though, you can see that the sidecars are getting injected.
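
To make that opt-out mechanism concrete, here's a rough sketch of a deployment that declines injection via an annotation. The annotation key shown (sidecar.istio.io/inject) is the one current Istio releases use; the deployment name and image are made up for illustration, so treat this as a sketch rather than the exact manifest from the talk.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: no-sidecar-demo                 # made-up name for illustration
spec:
  replicas: 1
  selector:
    matchLabels:
      app: no-sidecar-demo
  template:
    metadata:
      labels:
        app: no-sidecar-demo
      annotations:
        sidecar.istio.io/inject: "false"   # opt this workload out of sidecar injection
    spec:
      containers:
      - name: app
        image: example/no-sidecar-demo:1.0  # made-up image
        ports:
        - containerPort: 9080
```
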
Now let's look at the pod details for the Envoy sidecar. We'll go back and use the details pod as the example. I showed you that there are three containers, two of which are running: the proxy and the app container. We pass arguments in to the proxy; we're basically saying, hey proxy, here's the mode I want you to act in, which is sidecar mode (we'll talk about another mode a little later). We're also passing in the bootstrap configuration of the proxy, so the proxy knows the path to its configuration file, where to find the binary, where to go for discovery (the Pilot address), and so on.

Then let's look at the runtime configuration of the proxy. I'll exec into the details pod, into the istio-proxy container, and open the configuration file at the path I showed you. You're going to see this revision number. After that bootstrap occurs, the Envoy proxy can speak to Pilot, get its configuration, and keep it up to date, so that as the mesh configuration changes we don't have to go out and reprogram these proxies: we push changes to Pilot, and Pilot pushes any changes out to the proxies. The Envoy proxy supports hot restart, so it's able to change its configuration without having to be fully reloaded. You're going to see a lot of the configuration parameters here that were passed in via the bootstrap configuration.

One thing I want to point out is this local admin port and address. Now that I'm inside the proxy, I can curl that endpoint, and it exposes several different sub-endpoints, such as listeners: here are all the listeners this proxy is listening on. We're going to pay close attention to 9080, because that's the port this sample Bookinfo application uses for all of the services I showed in the diagram. Since it has a listener for 9080 (and all these other listeners), the proxy also has routes. These routes basically say: for this particular listener, like 9080, here's my route table. What you'll see is that there are tons of routes on 9080, again because all the components of the sample Bookinfo application listen on 9080: details is listening on 9080, and so are the product page, reviews, ratings, and so on.

So now we can look at one of these routes. Here's a route that says: match everything from the root path and send it out this particular cluster, the out.details default service cluster. I'll copy that, then talk to the clusters endpoint on the same admin port, output it to a file so we can grep, and grep for that cluster. If we put ourselves in the mind of the proxy: we're listening on those ports, a 9080 request comes in, we look up our route, and we say, oh, you're supposed to go to this cluster. Here's the cluster we're seeing for details. It gives us settings like maximum connections, whether this is a canary (no), and so on, and you're also seeing an endpoint here. This is how the proxy knows where to forward the traffic: this IP address ending in 2.69 is the IP address of the details pod. The proxy can do load balancing; it's not going to load balance here because we only have a single endpoint. The proxy can also do health checking; it's doing active health checking of the endpoints in a cluster.
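
If you want to poke at the same admin interface yourself, a sketch of the commands is below. The pod name is a placeholder, the admin port is assumed to be 15000 (the port Istio typically configures for Envoy's admin interface), and this assumes curl is available in the istio-proxy image; all of those details can vary by release.

```sh
# List the listeners the sidecar knows about (expect 9080 among them).
kubectl exec details-v1-xxxxxxxxxx-xxxxx -c istio-proxy -- \
  curl -s localhost:15000/listeners

# Dump the clusters the proxy has been programmed with and pull out the
# details cluster, which is where the 9080 route above points.
kubectl exec details-v1-xxxxxxxxxx-xxxxx -c istio-proxy -- \
  curl -s localhost:15000/clusters > clusters.txt
grep details clusters.txt
```
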
If you've got multiple endpoints within a cluster, the proxy can incorporate any policies we create on how we want to load balance across those endpoints. But again, in this example we only have a single endpoint, so why don't we look at a different cluster? We said it's the reviews service that has multiple pods, and we now see that we've got multiple endpoints (IPs ending in .71 and so on). If we have any type of policy set up saying we want to load balance based on certain HTTP headers, or based on certain labels from Kubernetes, and so forth, the proxy can apply it across those endpoints.

So that was the data plane, and this enhanced version of the Envoy proxy is the key piece of the Istio data plane. Now let's talk about the control plane. The first component is Pilot. Pilot is responsible for maintaining the canonical representation of the service model, and it's the responsibility of adapters, such as the Kubernetes adapter, to populate that model accordingly. With the Kubernetes adapter, for example, the adapter implements a controller that watches the Kubernetes API server for certain resource registrations, like deployments and pods, and it takes those resources and populates the model that Pilot maintains. What that does is allow Pilot to present an interface that's neutral to any of these adapters, so the mesh isn't locked down to a specific platform implementation. Once the model is populated by the adapter, Pilot can take that information and push it out to the Envoy proxies.

One of the key features of Pilot is service discovery. Pilot expects that a service registry exists, like kube-dns within Kubernetes. The expectation is that when services are created, the registry is updated, and when services are removed, those services are removed from the registry as well. This allows Envoy to dynamically find out which services exist within the mesh. What's neat about Envoy is that it doesn't just rely on what it learns from service discovery. It can populate all of those endpoints we just saw based on service discovery, but it's also doing active health checking against each one of those endpoints to ensure they're healthy, and if they're not, it won't load balance traffic to that endpoint.
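
To make the registry point concrete: in the Kubernetes case, the entries Pilot discovers are just ordinary Kubernetes Services resolved through kube-dns. The Bookinfo reviews Service looks roughly like this (a sketch; check the sample manifests in your Istio release for the exact labels):

```yaml
# All of the Bookinfo services listen on 9080, which is why that port showed up
# all over the sidecar's listeners and routes above.
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
```
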
Now we can take a quick look at the Pilot configuration. The first thing that jumps out is that we've got a pilot container, but we also have a proxy container. So not only does the Envoy proxy get deployed with our application pods, like reviews or ratings, it actually gets deployed with the Pilot control plane component as well. You'll see that Pilot mounts the service account secret so that it can communicate with the Kubernetes API server; it's in-cluster authentication, since Pilot does need to speak to the API server. You'll also see that /etc/certs gets mounted, so that the proxy deployed with Pilot is able to do mutual TLS for the control plane traffic between the Envoy proxies and Pilot, or between Pilot and Mixer.

Here are the logs of Pilot when it starts up. I mentioned that mesh-wide configuration the initializer uses; that ConfigMap is not only used by the initializer, it's also used by each of the control plane components. The control plane components don't get initialized by the initializer (the only things the initializer injects are the sidecar proxies), but we still tell each of the control plane components to use this ConfigMap for its mesh config, so that the proxies sitting with the control plane components can communicate with the sidecar proxies as well. And since we're using the Kubernetes adapter, you can see that adapter getting registered.

Let's jump over to the Mixer. The Mixer is essentially an attribute processing engine. Each of the Envoy sidecar proxies produces attributes, and which attributes they produce is up to the user or operator: you create things called attribute manifests that say, hey Envoy, here are the important attributes I care about for the traffic that comes through you, send this information to Mixer. Mixer, just like we showed with Pilot, has a pluggable backend of infrastructure components; those backends could be for logging, telemetry, authentication, and so forth. Mixer is responsible for taking the attributes from your attribute producers (again, those producers being your Envoy proxies), routing them to the appropriate backend component, letting that component make its decision if it's a policy decision, and returning the result to Mixer, which then funnels it back down to your Envoy.
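
The attribute manifests mentioned above are themselves just Istio config resources. A rough sketch of one is below; it follows the config.istio.io/v1alpha2 attributemanifest shape, the name is made up, and the exact set of attributes you declare depends on what you want Mixer to see.

```yaml
# A small, illustrative attribute manifest: it declares a handful of attributes
# the proxies may produce, along with their types, so Mixer knows how to
# interpret what the proxies send it.
apiVersion: config.istio.io/v1alpha2
kind: attributemanifest
metadata:
  name: example-attributes        # made-up name for illustration
  namespace: istio-system
spec:
  attributes:
    source.service:
      valueType: STRING
    destination.service:
      valueType: STRING
    request.headers:
      valueType: STRING_MAP
    response.code:
      valueType: INT64
```
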
Let's look at the Mixer config, which is going to look very similar to the Pilot configuration. The Mixer also has the proxy deployed with it, and as with Pilot, we mount the service account secret so that Mixer can communicate with the Kubernetes API server securely. We also mount /etc/certs; these are the certs used so that the proxy for the Mixer can communicate with the sidecar Envoy proxies. You also see this custom config file, which I want to look at really quick. I didn't show it for Pilot, but Pilot also has a custom configuration file. A few minutes earlier in the presentation I showed you the configuration file of the sidecar proxy and how it gets bootstrapped and then dynamically managed by Pilot; these control plane components, by contrast, have static configuration files. So if we exec into the Mixer, kubectl exec -it into the proxy container of the Mixer pod, and open that config file, this is the static configuration we pass in. It looks a bit different from the sidecar proxies: we don't have tons of different clusters and services. Each of the sidecar Envoy proxies runs the cluster discovery service, the service discovery service, and the other discovery services to help construct that chain between the listeners, the routes, and the clusters, with Envoy essentially doing the proxying based on that chain of configuration. That was the Mixer config file we just looked at.

Now let's look at the Mixer logs. I'm going to speed things up a little bit, since we're down to our final ten minutes, and just point out a few things from the logs. The Mixer is listening on multiple ports. Here's something to notice: if the control plane is doing mutual TLS, why does the Mixer container have no certificates or keys or anything like that in its configuration? That's because, again, the sidecar proxy gets deployed with the Mixer and is responsible for doing the mutual TLS. Between Mixer and the rest of the service mesh, Mixer is not actually doing any TLS termination; that happens at the proxy sitting right next to it. You also see this empty config store setting, which basically says to use the in-cluster configuration. I showed you how that secret gets mounted into the pod, and this tells Mixer to use that in-cluster configuration to authenticate to the Kubernetes API server. And here's a capture I did: an Envoy proxy sent some traffic to Mixer and said, hey, take a look at this. You can see all of the attributes we configured for Mixer that got pushed down to the proxies. When we create a policy within Istio, we can base it on source service and destination service, or any of these other attributes I don't have highlighted. And because my environment has no route rules and nothing configured, you see zero actions taking place here, because there are no rules.

On to Istio security. One of the pods is the Istio CA. It's probably the one area I'm not going to do a deep dive into, but at a high level, Istio security is responsible for delivering those TLS assets we saw in /etc/certs. So if we exec into details again, instead of looking at the clusters, let's look at the certs. This information was actually delivered by the Istio CA. The key piece of Istio security is authentication, or really identity: without Istio security we don't have any identity information. We can see that here's a source service and here's a destination service, but actually attaching an X.509 certificate to each of these services is what gives us strong identity. We're able to authenticate based on that identity, and we're able to do things like mutual TLS, so that all the communication between the different services is protected without those services having to be configured for TLS themselves. If we configure the mesh for TLS, then the Envoy proxies, as they intercept traffic, will create a mutual TLS connection with any of the other proxies they need to forward traffic to.
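
If you want to see those CA-delivered assets for yourself, a sketch of the command is below; the pod name is a placeholder, and the exact file names can vary by release.

```sh
# The workload certificate, private key, and root cert delivered by the Istio CA
# are mounted into the sidecar at /etc/certs.
kubectl exec details-v1-xxxxxxxxxx-xxxxx -c istio-proxy -- ls /etc/certs
# Typically: cert-chain.pem  key.pem  root-cert.pem
```
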
There's also the ingress component. How do I have services outside of my Kubernetes cluster communicate with the services running in my mesh? Istio leverages the Ingress resource within Kubernetes, and if you're familiar with Ingress, it needs an ingress controller. When you do a kubectl get pods in the istio-system namespace, you'll see istio-ingress: that's the ingress controller responsible for watching the Ingress resource and then, based on the rules you define in that resource, permitting traffic into the mesh. The ingress controller also runs as an Envoy proxy, but instead of running in sidecar mode (the sidecar proxies need the iptables rules set up by that init container to intercept all traffic), the ingress runs in ingress mode, and it only captures traffic based on the rules you put into the Ingress resource. The Ingress resource I set up basically allows all subpaths from root, so for the product page example, whether it's /login, /logout, /productpage, or any of those endpoints, it's going to allow them into the cluster. A rough sketch of that resource is included at the end of this post for reference.

Egress: by default your services cannot communicate outside of the mesh. Why is that? Because those sidecar proxies, again, are configured via iptables to intercept all the traffic, and those sidecar proxies know nothing about any networking outside of the mesh. We can create egress rules, which support HTTP and HTTPS, that allow you to selectively permit outbound traffic. If you don't want to use an egress rule, say because it's not HTTP traffic, you can set up includeIPRanges. That's a configuration parameter you add to the ConfigMap for the initializer, so that when the initializer injects the sidecar proxies, it tells them to only intercept this traffic. The way it does that, again, is through the init container and iptables: instead of redirecting all traffic, it only redirects traffic for, in this example, the given /16 network. Thank you.
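
For reference, an Ingress resource along the lines of what was described above looks roughly like this. It is a sketch rather than the exact manifest from the demo: the single rule with no path sends all subpaths from root to the productpage service on 9080, and the ingress.class annotation is what hands the resource to the Istio ingress controller.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bookinfo-gateway                   # illustrative name
  annotations:
    kubernetes.io/ingress.class: "istio"   # let the Istio ingress controller claim this resource
spec:
  rules:
  - http:
      paths:
      - backend:                           # no path: all subpaths from root go to productpage
          serviceName: productpage
          servicePort: 9080
```
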