
Rogue Tech Talks: August, 2019

INTRODUCTION
Monty Zukowski presented a Tech Talk on Containerization: the concept of virtualizing a customized collection of resources to isolate an application so that it runs, essentially, like an RTOS app.

OVERVIEW OF VIRTUAL MACHINES
Monty launched his presentation by asking whether everyone was familiar with virtual machines. Most people acknowledged that they were. So, for the one person who indicated they were not too familiar with virtual machines, he started with an overview.

Operating Systems are collections of software programs that act as an interface between application programs and the hardware in order to facilitate communication between the software processes and the disk drives, the network, the peripheral equipment, the displays, and the memory; all the system resources. [This is convenient. In the Old Days, programmers had to include additional code in every application program to manage individual system resources to, for example, tell the system printer how to print a job.]

WHY WE NEED AN OPERATING SYSTEM; SPECIAL PURPOSE COMPUTERS vs. GENERAL PURPOSE COMPUTERS: An example of a Special Purpose Computer is a hand-held calculator; it has one purpose: Do one thing (add) really fast! (Note that it does negative addition for subtraction, and repetitive adding and negative adding for multiplication and division, respectively.) In contrast, a General Purpose Computer is capable of letting humans control the processing via application programs. With all that power available to humans, something has to exert some control in the fight for speed and space (the primary system resources), so the Operating System acts as the Resource Manager, dedicating memory and allocating processing time to an app.

VIRTUAL MACHINES
A virtual machine (VM) pretends it is the hardware—but not the raw hardware; it’s an emulation. What that means is: you can be running on your hardware (laptop | desktop), and spin-up (start up | boot up | launch) virtual machines that might each be running a different operating system—or a different version of the same operating system.

THE WHY
Data Centers use VMs a lot because they don’t want to purchase a computer for every client—or workstation. Instead, they purchase a larger machine [faster clock, more memory] that can run a number of VMs for all their various applications.

BUILDING A VM
When you work with a VM, you initially create what’s called an image [template] by working with virtual commands to collect and set up [initialize | instantiate] the software. The resulting image is then saved to disk. You can then make a copy of that image and start it up with all the application software ready-to-go. The advantage is: you don’t have to go through the complete setup process [called provisioning] every time you want to run the VM on another computer.

DOCKER; LAYERS
Docker took that concept and formalized it. Again, it’s a VM that you create and then build up by adding layers to the file system. You begin with your base image [the template], then install application software and save the resulting image as a new layer. Instead of saving the entire file system again, Docker saves only the difference between the two. You can build on that and add more files to create yet another image. This method standardizes the process of creating the VMs.

Docker’s main contribution is the layered file system. The idea is that you can tailor the file system to match your company’s file system with all the security and basic file packages, save that image, and that becomes the launch point—or basis—for all your new Docker Images. When you run the images, all the system has to do is pull the differences. That’s called a container; everything is abstracted within the container. The software thinks it has a local file system, networking, etc. but all the resources are virtual. This has become a popular way to package software.
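
A hedged sketch of that pattern (the company base-image name, the package, and the file paths are illustrative assumptions, and the base is assumed to be Debian-based): every new Dockerfile starts from the shared, pre-hardened base image, and only the layers added on top are stored and pulled as the difference.

    # Hypothetical Dockerfile: extend a shared company base image that already
    # contains the security hardening and standard file packages. Each
    # instruction below adds one layer; only these layers are pulled later.
    FROM mycompany/base:2019.08
    RUN apt-get update && apt-get install -y openjdk-11-jre-headless
    COPY build/app.jar /opt/app/app.jar
    CMD ["java", "-jar", "/opt/app/app.jar"]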

CONTAINERIZATION FRAMEWORK
What’s interesting about the containerization framework, from a development point of view, is: your developers can create and use a Docker Image as they do their development work and then, when they hand it off to Q.A. on its way to Production, the developer can hand off the exact environment in which they were running. It’s no longer the case that the Q.A. system is running on a different environment; the software package is in a container. This is a clever way to manage all the dependencies. The combination is frozen and it goes to Production that way. Using containers ensures that the environment is stable from Q.A. through Production and beyond; it’s wrapped into a package.

Docker is now an ecosystem where there are standard images; Python Images, Java Images, MySQL Images, Web Server Images, Database Images, etc. These are typically built by people making the changes, so their expertise comes with the package.

Google uses containers all over the place; one estimate is that they spin-up one billion containers every week, constantly upgrading … scaling up … scaling down. So, when you are doing that and setting up hundreds of containers out in a cloud, there are some problems you need to solve, such as networking: you want the containers to set up a service discovery mechanism [to be aware of other containers] and some kind of failure detection system.

KUBERNETES
To that end, Google initially set up Borg, which eventually became Kubernetes. Kubernetes [Greek for “helmsman” or “pilot”; the root of “governor”] lets you specify, for example, that you want 25 copies of this process [e.g., a Java web server] running at all times, that you want a load balancer in front of it, and that you only want to expose that one entry point through the firewall, with everything else blocked off. Kubernetes is the Resource Manager: it locates hardware where it can run these instances, starts them up, connects the networking layers so they can talk to each other, and handles the load balancing across the 25 running instances, round-robin.
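
A hedged sketch of what that declaration might look like as Kubernetes manifests (the replica count of 25 and the load balancer come from the example above; the names, image, and ports are illustrative assumptions):

    # Hypothetical Deployment: keep 25 copies of a Java web server running.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: java-web
    spec:
      replicas: 25
      selector:
        matchLabels:
          app: java-web
      template:
        metadata:
          labels:
            app: java-web
        spec:
          containers:
          - name: java-web
            image: mycompany/java-web:1.0   # assumed image name
            ports:
            - containerPort: 8080
    ---
    # Hypothetical Service: load-balance across the 25 pods and expose only this port.
    apiVersion: v1
    kind: Service
    metadata:
      name: java-web
    spec:
      type: LoadBalancer
      selector:
        app: java-web
      ports:
      - port: 80
        targetPort: 8080

Kubernetes then keeps reconciling reality against that declaration: scheduling the 25 pods onto available nodes, wiring up the networking, and spreading traffic across them.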

Kubernetes can know when one instance goes down and needs to be replaced. You can add rules so that if the CPU usage gets too high then you spin-up more instances to handle the load. That can feed back up into the Kubernetes layer where it can actually determine that there are not enough machines to run that many processes, and you can decide to bring up more machines. You can configure things like how much CPU time is required, how much memory a process uses, etc.
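
The CPU-based scale-up rule and the per-process CPU/memory settings Monty mentions correspond, roughly, to a HorizontalPodAutoscaler and to resource requests and limits on the container; a hedged sketch (the thresholds and numbers are arbitrary assumptions):

    # Hypothetical autoscaler: add replicas when average CPU usage gets too high.
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: java-web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: java-web
      minReplicas: 25
      maxReplicas: 100
      targetCPUUtilizationPercentage: 80

The “how much CPU time, how much memory” configuration would live in the Deployment’s pod spec as resources.requests and resources.limits; if the cluster itself runs out of room, adding machines is the separate, higher-level decision described above.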

So Kubernetes is this huge orchestration layer that endeavors to make your services fault tolerant and auto-scalable.

Those are the basic highlights.

Monty, this was really a great talk; very informative! Any questions?

Q&A
Q: With all this fragmentation that is happening with the containers and instances needing to communicate with each other, and taking into consideration the concept of Internet Time (whether or not they are actually using the Internet to communicate), is Internet Time an issue, or is that what Kubernetes does?

Monty: It can be an issue. Typically, the Docker VM will defer to its host, the actual [physical] hardware, to get the correct time. Then, typically on the host, you run the NTP servers so the clocks remain synchronized. Because the time is correct on the hardware, Docker uses the hardware’s clock. Interestingly, the different containers can be set up for different time zones, but those differentials are handled by the NTP servers. There are also some aspects of the network layer that monitor for communication delays and stuff like that … determining which elements are busy and whether more resources are required.


Q: When people start developing in Docker, are they typically locked-into that? Or, can they switch to a container product offered by another vendor? Because Docker is only one vendor that does this, right? It sounds like you’re being specific on the Docker Container and it would be really difficult to take the Docker Container that’s specific to the hardware and O.S. and move it to another container from another vendor. Or, could you port it anywhere? Because, obviously, virtualization allows you to move things anywhere if the hardware is the same but if you’re talking about emulation software, I wouldn’t think you could move that around.

Monty: I think it would be difficult. I don’t think it would be impossible. Typically, when you build a Docker Container, you often create what’s called a Dockerfile, which is like a main—or, base—file that contains build instructions. Each line has a specific instruction such as: 1) start from this base image; 2) add some files [such as source code] from the current local [developer’s] machine; 3) run an actual command, like installing Nginx [“engine x”] or other software, and the list of build instructions goes on and on. If you keep it simple enough, it wouldn’t be that difficult or tedious to build the container and move it to another vendor’s container product, because it would be something fairly similar. There are some things that are Docker-specific that might not work but, since I don’t use those in my daily work, I can’t name any specifically.
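
A minimal sketch of the kind of Dockerfile Monty is describing (the base image, paths, and the Nginx example are assumptions drawn from his three steps):

    # Hypothetical Dockerfile mirroring the three build instructions above.
    # 1) Start from a base image.
    FROM debian:buster
    # 2) Add files (e.g., source code) from the developer's local machine.
    COPY ./src /app
    # 3) Run an actual command, e.g., install Nginx.
    RUN apt-get update && apt-get install -y nginx
    # Command to run when the container starts.
    CMD ["nginx", "-g", "daemon off;"]

Building it (docker build -t myimage .) produces an image that can be pushed and then run unchanged on another machine, which is the portability point being discussed.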

However, there are also other initiatives, like Google’s build tooling around Bazel, that kind of bypass the Docker architecture and build an image directly, in a format you could think of as an alternate vendor’s. So, in Google’s case, they are very specifically trying to narrow the focus of the image.

When you create a container, you typically want it to do one job: I want to try my Java Web Server, or I want to try Nginx. It’s already behind a firewall. It’s already on a host that’s running the security stuff that you care about, so you don’t need to have a full-blown O.S. You certainly don’t need an email server … you don’t need a lot of the standard stuff. So one of the trends has been to make a minimal image and determine how much of the O.S. you can get rid of and still be able to run, say, a Python program. By doing that, not only do you limit your attack surface (because you don’t have an email server running), it also makes the image smaller, so it’s quicker to spin-up and easier to move around. So the Bazel image by Google is an attempt to make it smaller; to give it only the minimal O.S. elements that are required to run one program—the one that I’m building, which is typically a Go program. [Go is a language Google uses to write much of their software.] So you almost get to the point where the container itself is simply an executable that happens to have all the facilities of an O.S. and access to the network but none of the extraneous software that would normally run on an O.S. [You’re reducing the footprint (required resources) of the process—both speed and space.]
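
One common way to get at the minimal-image idea Monty describes is a multi-stage Dockerfile that compiles a Go program and then copies only the binary into an essentially empty image; a hedged sketch (the program name and paths are assumptions, and the Bazel-based tooling he mentions reaches a similar result without Docker):

    # Hypothetical multi-stage build: compile in a full Go image, then ship
    # only the static binary: no shell, no package manager, no email server.
    FROM golang:1.12 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /server .

    FROM scratch
    COPY --from=build /server /server
    ENTRYPOINT ["/server"]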

You can also go to extremes. Some people turn individual commands on their computers into single-purpose Docker Containers, so they have complete control over all of the dependencies, and the end result is a very small Docker Container that’s almost as efficient as simply running an executable. You bypass the O.S.’s libraries and simply include what you need in the image. Once you’ve committed to something like Kubernetes, you’re going to be using Docker; that’s simply what people use. And Kubernetes is unlikely to switch to another vendor. It would be a big investment. And I have not heard of anyone wanting to use someone else’s containers.

Q: The motivation to even use this type of architecture is … because you have a lot of application software that might need to run with a specific version of an O.S. So, when you need to upgrade, that impacts the application software … Or is there some other motivation to go with Virtual Machines—or not? Especially when the I.T. people at a [customer] site are not developers but simply customers of developers. And, you must consider support from the vendor …

Monty: The main reason is to avoid having to upgrade software—and having to manage dependencies. What’s nice about the Docker approach, at least in our organization, is that we already have standards: a set of libraries we’re going to use that have already been scanned for vulnerabilities, and a pipeline with a process, so as we rebuild things we reduce the amount of security work we have to deal with. It’s not terribly difficult to rebuild Docker Containers—it’s automatable—so it’s easy to keep things up to date and, if something breaks in the upgrade process, you’re usually not that stuck … you can use the old version until the new one is fixed. Alternatives for keeping things up to date are tools like Puppet or Chef or Ansible, which are configuration management tools. These products are small pieces of software you install on every server you have; they track what’s supposed to be installed and, if it’s not installed, they correct the deficiency. Many shops take that approach; it’s somewhat automated. You’re expecting that you have the same software on all the nodes, that you have a basic image, and that you standardize on one O.S. Whereas, with the Docker Container, you’re not restricted to one O.S. You can use different Operating Systems for different purposes, if that’s advantageous to you.

Q: So, what are the reasons you would do the minimalist version of Docker? For speed?

Monty: Yes. To reduce the footprint; the less memory—or disk space—you need for running whatever process you’re running. Another reason is to keep things isolated; to make it so you don’t have everything running on one O.S. When it’s isolated like that, you can more specifically target policies, memory management, and CPU management.

Q: Is there a concept of upstream/downstream? You gave an example of a Python Container that I would expect has a Python application as its reason for being there but, if it’s in its own container, how does it interact with other software?

Monty: Typically, the Python Container is like a stripped-down Debian with the standard Python executable and Python libraries. Then, as you build on that … well, actually, just given that, you can do things such as … Python has a built-in Web Server. So, if what you want to do is serve a few files, you can have the command that kicks off the server [Python background server] and give it the directory of the files, and then it will serve those. So, in that case, you would just start from the base image, add your HTML files, package that up, and launch it. And then, if you have a full-fledged Python application, you would start with the Python image, install any O.S. packages that you need (a MySQL client, say), and then you would have your Python requirements file and use pip to install it. Then you would add your Python files; whatever application files you need. The last piece would be to specify the command to start things up. So now we have a standard distribution layered with the packages that you need, plus your code, and it’s all ready to go.
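
A hedged sketch of the layering just described (the image tag, package names, and file names are assumptions):

    # Hypothetical Dockerfile for the Python application described above.
    FROM python:3.7-slim
    # Install any O.S. packages the app needs (the MySQL client is the example given).
    RUN apt-get update && apt-get install -y default-mysql-client
    # Install the Python requirements with pip.
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # Add the application files.
    COPY . .
    # Specify the command that starts things up.
    CMD ["python", "app.py"]

For the serve-a-few-files case, the command could be as simple as python -m http.server, pointed at the directory of HTML files.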

In Kubernetes, there is a concept of a job—and a concept of a pod. A pod wraps one or more containers and provides networking, etc. You can have 25 pods based off one image, so they are all running in concert, round-robin. Or, you can have a job. A job is something like … O.K., spin up a pod, run this command, and when it’s done, kill it. So, it’s more like a batch job at night. If your Python program is only getting data from the database, crunching it, and generating reports, it doesn’t have to be a server.
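
A hedged sketch of such a Job (the names, image, and command are assumptions): spin up a pod, run one command, and let it finish rather than serve.

    # Hypothetical Kubernetes Job for a nightly report run.
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: nightly-report
    spec:
      template:
        spec:
          containers:
          - name: report
            image: mycompany/report-job:latest   # assumed image name
            command: ["python", "generate_reports.py"]
          restartPolicy: Never
      backoffLimit: 2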

Q: Would you say there are any drawbacks to using Containers, especially Docker Containers?

Monty: Well, yes. The architecture is still relatively new, so there are still things that you need to debug. It’s getting much better. Two years ago I was trying to write a container that only ran cron [a unix scheduler]: give it a cron file [with weird syntax] and, every day at 1:00, run this program. Because Docker has a layered file system, each time you add more stuff, it creates a new layer of the file system. The way they implemented that, it happens to use a whole bunch of links at the disk layer, so there’s a lot of indirection. It turns out that, in Debian, the cron program specifically won’t run a cron file that is linked in a nested way, as a security precaution. So, when you run it, it doesn’t work. I ended up going into the source code and doing a lot of searching on Stack Overflow and finally learned that it was this linked file system issue. But that kind of stuff is going away. Enough people are now using Docker that operating systems are more Docker-friendly. There’s a cost to virtualization, though. If you have an application where performance is an issue, you won’t use Docker for that.

That said, places like Amazon and Google and Microsoft’s Azure are offering native Kubernetes-type services. It used to be that you would have to run your own special configurations on the hardware in order to provide the services that Kubernetes provides. The bottom line is: there are some things you wouldn’t want to use Kubernetes for, because it is a virtual machine providing abstractions that make it difficult to diagnose certain types of problems. In our business, we have certain applications that we prefer not to use Docker for. Because we’re an AWS shop, we’re going to run our VPN using the AWS setup, not our own setup; it doesn’t make sense to run that on Kubernetes. So, for company-wide networking, we don’t want to use Kubernetes.

Q: Are you familiar with Singularity?

Monty: No, I’m not.

FollowUp Question: It’s a fork of Docker explicitly made for multi-user environments like HPC Clusters because Docker Images have to run as root, which is a security risk … Singularity does not have to run as root.

Monty: And Docker is addressing that, too. There are two components to it. One is that, within the Docker Image, the convention (which people are starting to be aware of) is to run as root, because you’re doing all this customizing (installing O.S. packages, etc.), and that’s obviously not a secure thing. You don’t have to do it that way; it’s simply easier. On the other side, the host that is running the Docker VM requires root.

G: It does. You can grant permission for other users to launch them and, when they launch, they basically set your ID.

Monty: OK. Yeah; that may be what Singularity is doing.

Q: Do you think there’s a risk with these big companies that are housing all this data, to decide one day that maybe they don’t like your application and they’re going to shut it off? Because, you just said that you’ve committed to this one product. Let’s say your organization relies on it and [the cloud] says, “we don’t know what you’re building … you’re done” … And you said it’s very difficult to build somewhere else … What would you do in that case? Because, for me, that’s a big concern because, if you’re in the Cloud, you’re relying on someone else’s thoughts, processes, security—and ethics! So, what do you think about that?!

Monty: Well …

FollowUp Question: I mean, it’s a big question. Are we choosing the right vendor? Do they have the same beliefs as the company? Because, in the past, everything was all sane. Now, we’re seeing these companies start to decide how they want their world to be.

Monty: Hmmm. Yeah. Hmmm. Generally, Amazon has policies about that and they will shut you down if you’re doing things that are illegal or that violate some policies; those kinds of Terms Of Service. Beyond providing the environment that has the hardware on it, they don’t really know what you’re running and they don’t monitor … they certainly … we don’t have concerns about it being a threat that Amazon is going to shut us down.

FollowUp Question: I don’t think most people do; most people think that’s fine. But [what I’m asking about] is that something that, in the future, people will have to consider?

Comment: I think that’s something in favor of using Docker, because it’s so ubiquitous … you can do it on Azure and Google and AWS (to mention the larger companies), but other places as well.

Q: Can you bring that in house? If so, then it wouldn’t be an issue, in this case.

G: Yes. Worst case is you set up your own hardware yourself. You’ve got multiple vendors to choose from … if one shuts you down, you can select another.

C: I’m just saying, if you could port your application over easily, that wouldn’t be an issue.

G: Because it’s a Docker Container, there’s no porting.

C: So, Docker, I guess … I was thinking that Docker is a specific vendor’s product.

G: It’s an open source standard.

C: OK. It’s more of a standard that goes between …

G: I think so. There’s Docker Hub; that might be commercial.

Monty: I’m not sure. I’ll have to check how open it is.

G: It almost has to be. It started as some technology and advanced from that point.

C: I’m concerned about putting all of this stuff in the Cloud when you finally have enough bandwidth to do it—that’s a big step for me. I’m getting ready to build a connection to Glacier and seeing how that works out. I’m excited about that. But they’re designing these hybrid systems right now where you can run locally and in the Cloud and, if you needed to, you could switch back and forth. So that’s really the ideal scenario for us.

G: In fact, Amazon’s got a new [AWS] program where you buy your hardware from them and what you get is Amazon-Spec’d hardware that directly interacts with AWS.

C: Really?!

G: It’s called Outposts. And it just behaves like another Amazon Availability Zone, only it’s on premises.

C: Exactly. So, you stay on the Internet and then, when it comes back …

G: Yes. And that means, if you’re tied to Amazon for your implementation, let’s say EKS (AWS Elastic Kubernetes Service) or Lambda, you’ve got compatible stuff running in house.

C: I can’t imagine a small business justifying the cost of that.

G: It actually scales; same server. Probably a ten-grand server …

C: We just received a notice; we have a new scam we have to worry about that’s specific to 911: we are now seeing people sending malware links to 911 via text messages. In case you all don’t already know, you can now text to 911. We actually run a Web application that receives text messages, and what we’ve seen is malicious links being sent to 911, which is obviously a pretty generic number. We’re locked down so it doesn’t happen, but if the link is clicked, it affects the local machine. We never considered that avenue of attack. We don’t control it via a firewall because that all comes in through a third-party vendor who provides the service. Our solution was to not worry, because those machines don’t have Internet access.

G (to Monty): Were you there when they started the transition to Docker?

Monty: Yeah. I was the one who did it.

G: So, how easy was that?

Monty: It was a big learning curve. Basically … let’s see … 2.5 years ago, I had been there about 3 months, had become familiar with their code base and fixed a few bugs and stuff like that, and was just finishing a rewrite of the system in Java when they wanted to go with Kubernetes and Docker. So I started learning that. We only had 20 people and, since I was the most recent hire and therefore didn’t yet have my niche, I got thrown into it; good and bad.

It was good for them because I’m fully capable of doing that, and not so good for me because dev ops (system administration) isn’t my first choice. But I put in limited hours; like 30 hours per week. They finally hired one contractor to start helping out in November, and then in February they started hiring, and now we have a team of professionals who are taking my work and rewriting it in a more modern way. So, I’m hoping that, within a month, I’ll be able to transition off that team, but it will probably take two months. I’ve been waiting for two years now! I’ll definitely be on the back end team, on the server side of things. My specialty is programming language analysis, transformation, code analysis, code generators, that kind of stuff. And we have a chat bot application. They kind of built their own application to more or less rate these conversations that you have, and it’s not well-designed; it’s in need of attention. So my hope is I can jump in with the language stuff, because I think that’s where I’m most valuable to them. So, we’ll see …

G: You said you have a Java dependency?

Monty: Yeah.

G: So how are you handling the change in Java licensing?

C: I was going to ask that, too.

Monty: So far, we haven’t run into any problems. I don’t know what they’re going to do.

G: Well, this is a commercial product? Or are you designing servers?

Monty: It’s most definitely a commercial product.

G: So, you sell software?

Monty: No, we don’t sell software.

G: Or, ship software as part of a product?

Monty: It’s all internal servers. Our product is the application; backend server. We’re not reselling.

G: I’m thinking about the terms of the Java licensing … for development and testing, generally, it’s still free to use. But if your customers need Java to run it, they have to license it; that’s the problem.

C: Did they just change that recently?

G: Yeah. It was effective February 1st … somewhere around Java 8.1.24, so if you want the latest Java, you have to pay Oracle to use it. Or, you can go with OpenJDK; Red Hat manages that—and they also charge for licensing. And I just came across another one: AdoptOpenJDK, built from the OpenJDK source; looks like that one’s going to be useful. Panasonic.

K: As an open-source JDK offering, is that something I should look into?

G: Yes, you should. One of the drawbacks with OpenJDK is that it’s about 100 releases behind the actual current code. So, let’s say the most recent Java 8 is 8.221 and the most recent OpenJDK is 84. There are a lot of bug fixes and a lot of security patches that came out in between.


J: We buy permits from various states, and they’ve switched their models from going only through big companies. We used to have to go through Comdata to order our permits, and now they have us order direct because they’ve outsourced all of their websites, but they never upgraded the versions of Java, so we’ve got some machines that can only run Java 6, because it won’t work if you’ve got Java 7. They finally started raising their levels, but it’s been crazy because we’ve had to keep machines on Java 6; that was the only way we could connect to their website.

G: It should work. Java 8 is backwards-compatible. If you switch to Java 9 which is modularized Java, you’re screwed.

KC: Containerization, Jon!

J: It wasn’t us; it’s the states.

C: Radware still uses Java in the browser so we’ve got this old trunk station that we use and I’m dreading the day that someone upgrades the browser.

G: Which no longer runs Java…

C: Yeah. Because I’m having a hard time…because you can run Firefox ESR, some specific version, but even the newest version of Firefox ESR does not support Java.

G: And, it’s a Linux box?

C: It’s Mac … I wish it was Linux.

G: If it was Windows, then Explorer still supports Java, so you’d shut off Edge and start Explorer and then you could run Java.

C: Oh! Interesting.

G: What you do is: you start Edge and then you go, Edge, run this page in Explorer.

C: Oh, I’ve heard about that. It’s some sort of compatibility thing.

G: Explorer is still installed …

C: Does it still support SilverLight?

G: Don’t know.

K: I kinda got SilverLight working in Linux for NetFlix; but don’t know how.

G: That’s desperation!

K: I got it working for a short period and then something happened …

C: I came across an XP machine the other day that I didn’t expect … it was still working.

Man, those machines lasted one or two years, tops!


E: The latest Mac OSX, Catalina, breaks Firefox; crashes on launch.

KC: Did you report it?

E: I did. I found a forum where it’s a known issue and the Firefox people said it was traced back to some Apple Graphics Process, so Apple has been notified and now we wait for Apple.

G: That’s interesting, because life’s going to get harder for Firefox now: Microsoft dropped their own Explorer developers and they’re just rebranding Chrome; taking the Gecko Engine [actually Chromium’s Blink engine] and rebranding it Explorer [as the new Edge]. So, it’s going to be a Chrome World.

E: I missed that …

G: Microsoft is no longer developing Explorer. They’re licensing the Gecko Engine from Apple [more precisely, the new Edge is built on Chromium’s Blink engine, which descends from Apple’s WebKit]. So why would you bother with Firefox? (For the security …)

G: Fortunately, Chrome does have a significantly better JavaScript engine, by probably an order of magnitude.

K: So, with Containers, is there any sort of resource sharing of libraries, at all, on the orchestration side?

Monty: I don’t think so.

G: In some of the micro architectures, like the shrunken-down [micro]VMs that AWS uses, called Firecracker, there’s what they call layers. So you can build a library layer, say Python, so you don’t have to load Python into every Firecracker instance. You just have it as a layer, and then all your Firecracker containers, which is what Lambda runs on, can use that as a resource. So, again, the containers are all micro-sized.

KC: I vote you deliver a Tech Talk on Firecracker.

G: OK.

C: I guess with containers, in theory, they would use more memory, because they’ve got that micro kernel that has to be loaded into memory every time. And it’s not a significant amount, unless you’re running [upward of] 2,000.

G: It’s a shared object…like any other shared object. Load once and all concurrent processes use the same library.

C: Oh.

G: It’s only the differences …

K: Huh. I played around with it 6 or 7 years ago, when it was really new, for a few weeks. And then I got busy with other things. But our company was talking about it back then and whether or not we were going to support it. I think, to a certain extent, we support it, but within AWS, which is kind of weird because we have our own data centers. We just have customers who come in and say they already have this container infrastructure and need support. So we tell them we’ll support it. But I don’t think we were actually hosting any containers. And with scanning containers for vulnerabilities, it gets kinda weird. Like Tenable … how does that work? Well, you give us the containers to host. Well, that kind of defeats the purpose of us hosting our own containers and saving money on overhead. But I think they use Amazon, as well. And it’s extremely expensive. It’s just hard to scan them. I don’t know if Tenable has a mechanism for you to scan containers on premises. We talked about doing that, but it turned out to be a lot more work than we wanted to do.

K: Are there Windows Containers? Like as a base image?

Monty: They’re working on it … or, maybe it’s already there.

K: Does Microsoft use Docker? Natively? Because I thought Windows was going to support containers without installing anything (like Docker).

G: They support containers and Docker containers, but Windows, as far as I know … Azure will stick with their own virtualization, called Hyper-V. I think that’s what Azure is based on. But they do support Docker Images. They now support Linux Docker Images.

C: It’s weird seeing Microsoft doing things with Linux…for the longest time, they resisted.

K: I know a Linux engineer who works for Microsoft; it’s the weirdest thing!

G: When you look at Microsoft supporting the Linux core … But it means you can now run SQL Server on a Linux server.

C: I don’t know if that’s a good idea …

K: I’m just a die-hard Linux fan. I’m just more familiar with MySQL. Actually, most of our customers … we don’t have any large customers that use MySQL.

G: Postgres?

K: Yeah. For the higher performance.

G: Not just higher performance, though. MySQL is owned by Oracle; there’s a licensing fee; it’s not free.

K: That’s true. They just ruin everything!

G: That is Larry Ellison; that is his purpose in life…how many things can he ruin?

K: Do you think things are simply getting more and more fragmented and we’re going back to square one, from 50 years ago?

K: Yeah. It’s just harder to run things for free—or, for low cost.

G: At the same time, Linux is what enabled all that.

KC: That’s true.

G: If you look at things from the perspective of your home router, firewall, DOCSIS modems: they all run on Linux kernels now. It’s because they got out of those licensing fees. It used to be you’d have to license QNX or some other RTOS to run that, and that is really expensive. Do you remember when SUN actually made native Java computers? JavaOS was the operating system, and it got to the point where they had little smart cards, which were credit-card-size IDs with a little chip inside running Java that contained your identity information.

SUMMARY

The idea of Containerization is to assist the I.T. Department in being efficient and effective, in line with the mainstream of business operations. To paraphrase Strunk & White [authors of The Elements of Style]: Omit Needless Subroutines.

Links:

Understanding the Different Layers of Routing & Switching

Docker and Container Provenance

Docker Image Signing and Provenance

Excerpt from Kubernetes Blog: https://kubernetes.io/blog/page/10/

The most important precursor to Kubernetes was the rise of application containers. Docker, the first tool to really make containers usable by a broad audience, began as an open source project in 2013. By containerizing an application, developers could achieve easier language runtime management, deployment, and scalability. This triggered a sea change in the application ecosystem. Containers made stateless applications easily scalable and provided an immutable deployment artifact that drastically reduced the number of variables previously encountered between test and production systems.

While containers presented strong stand-alone value for developers, the next challenge was how to deliver and manage services, applications, and architectures that spanned multiple containers and multiple hosts.

Google had already encountered similar issues within its own IT infrastructure. Running the world’s most popular search engine (and several other products with millions of users) led to early innovation around, and adoption of, containers. Kubernetes was inspired by Borg, Google’s internal platform for scheduling and managing the hundreds of millions, and eventually billions, of containers that implement all of our services.

Kubernetes is more than just “Borg, for everyone.” It distills the most successful architectural and API patterns of prior systems and couples them with load balancing, authorization policies, and other features needed to run and manage applications at scale. This in turn provides the groundwork for cluster-wide abstractions that allow true portability across clouds.

End Excerpt.

Author: Karen
Written: 8/1/19
Published: 8/18/19
Copyright © 2019, FPP, LLC. All rights reserved.