AWS Products. Copyright © 2019, FPP, LLC. All rights reserved.

Rogue Techies Monthly Meeting: March 2019

INTRODUCTION
Gunnar Engelbach presented an overview of the vast array of AWS products.

The AWS home page offers a visual catalog (a hierarchical arrangement) of all AWS products by category: 25 categories in all, each holding anywhere from 1 to 19 products, for a nominal total of 176 products, currently. [But wait! There's more … read on.] If you thought that the Storage category offered the most solutions, you'd be wrong; that honor belongs to Machine Learning (19), followed by Management & Governance (18) and then Security, Identity & Compliance (17). Cloud storage is probably the most talked-about, but Amazon's focus seems to be on business. IOW, Storage is the bait to hook you into their suite of product offerings that solve long-term issues. And AWS security is regarded within the cybersecurity community as serious.

Gunnar began his presentation with an overview of what's available on AWS: the nomenclature and terms, an idea of what services are available, how it all works, and a little bit about costing. He then demonstrated how to set up one of the products.

AWS Regions and Availability Zones. Copyright © 2019, FPP, LLC. All rights reserved.

As a starting point, AWS is divided into Regions and Availability Zones. A Region, as you'd expect, is a geographic region, which means they have a Data Center there. Availability Zones means that, within a specific Region, they have multiple Data Centers, physically isolated by enough distance that, if one of the Data Centers is hit by a disaster, the other Data Centers in the Region would still be operational. And a lot of their services automatically replicate among these Availability Zones. So when you set up storage (something like an S3 Storage Bucket, for example) it's automatically available in multiple Zones. If one Zone goes down, all your data is still there and you didn't have to do anything to plan for that.

Q: How often does that happen?
Gunnar: The automatic replication of data throughout a Region happens the moment you write something to a Zone. That's not always the case, though; when you get into databases, the automatic data replication process works a bit differently.
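
For the curious, here's a minimal sketch of how you might list a Region's Availability Zones yourself, using the Python SDK (boto3); the Region name is just an example.

    # List the Availability Zones in one Region (Oregon, as an example).
    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")  # us-west-2 = Oregon
    for az in ec2.describe_availability_zones()["AvailabilityZones"]:
        print(az["ZoneName"], az["State"])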

Referencing the Regions And Availability Zones Map, above, Gunnar indicated that the yellow circles are Regions and the numbers within those circles are the number of Zones within that Region.

So, Northern California has three Availability Zones, Oregon has four, Northern Virginia has six, Ohio has three. The green circles are CloudFront. Among other things, AWS runs its own CDN (Content Delivery Network), like Akamai.

These are pipes, Amazon's own endpoints, and they put distribution servers there so that, if you want to use a CDN to speed up local access to your website, you have something that's geographically closer to your users.

Glen, Jon, Jacek. Copyright © 2019, FPP, LLC. All rights reserved.

Services Available
Now we move to what services are available. There are currently about 500 different web services that Amazon provides. I won't go into much detail on most of them, partly because we only have 30 minutes and partly because I don't know most of them, but we'll try to get some detail on the most critical ones, in case you're actually thinking about a cloud move.

Elastic Compute Cloud (EC2)
From a traditional standpoint, when you think of running servers, that's where most of the compute stuff comes in, especially the first one in this group of images: Amazon EC2 (Elastic Compute Cloud). If you want to create a server for doing something, you set up an EC2 Instance; an instance is simply a VM (Virtual Machine). And to make it easier, Amazon has a bunch of pre-defined images, with plenty of options as to what kind of hardware you want to run an image on.
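
As a rough sketch of what launching an instance looks like through the Python SDK (boto3); the AMI ID and key-pair name below are made-up placeholders:

    # Launch a single t2.micro from a pre-defined image (AMI).
    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: pick a real AMI in your Region
        InstanceType="t2.micro",          # the free-tier instance type
        KeyName="my-key-pair",            # placeholder: an existing EC2 key pair
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", resp["Instances"][0]["InstanceId"])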

Auto Scaling
Associated with an EC2 Instance is Auto Scaling. Say you're setting up an EC2 Instance; you're running your web server off of it, but you want to make sure that as more users hit it, you can handle the load. At the same time, you want to save money. So you pick the lowest EC2 Instance type, the cheapest one your application can run on, and then set up rules to expand when your traffic picks up. That's what an Auto Scaling Group does. Basically, you create a set of rules that say: when traffic or CPU utilization or memory utilization runs over a pre-defined threshold, create another instance, put it online, and load-balance between them. An Auto Scaling Group lets you do that automatically.

Q: So, would this also auto-retract?
Gunnar: Yes.

Q: And, this is on an on-going basis?
Gunnar: It’s all automated. Once you set it up, add a couple of rules and it works. It’s called scale-in and scale-out. The rules work in both directions.
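
A minimal sketch of such a rule, expressed as a target-tracking policy via boto3 (the group name is a made-up placeholder); target tracking handles both scale-out and scale-in automatically:

    # Keep the group's average CPU around 50%; AWS adds and removes
    # instances (scale-out and scale-in) to hold that target.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-west-2")
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",  # placeholder: an existing Auto Scaling Group
        PolicyName="cpu-target-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        },
    )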

CodeDeploy
Auto Scaling Groups come in handy for other things, like when you want to do automated updates and distribute the update to everything. Or, going a step further: you can keep your software in a code repository and run your servers from standard images, but of course you still have to load your software onto them. So there's a service called CodeDeploy that lets you automate the process of loading or updating software, deploying your code onto an instance. And when you tie that into an Auto Scaling Group, you can give it a rule: "don't put this instance online until you have triggered this code deploy and my software is on it and verified to be running correctly."

The whole thing is pretty painless.

Git
Q: Does CodeDeploy work in a manner similar to Git, but automated?
Gunnar: GitHub would be a piece of it. If you push a code commit into GitHub, one of the Amazon services detects that and automatically downloads the change and applies it to your servers. And, of course, you can specify whether you want the changes from GitHub to go to your test servers, your production servers, wherever. All of that is automated.

Amazon Git
Amazon does have their own internal Git repository; it is not publicly accessible. You need an Amazon account to access it. One of the popular things Amazon likes to say, at least internally, is: Security is Job Zero; not Job One but Job Zero. They take security very seriously. When you see banks running on their cloud, hosting all kinds of sensitive stuff, they would be dead if they had a serious security incident and they know it. For example, Capital One runs on AWS; they're something like the fifth largest bank in the world. [Actually, Capital One ranks #7 in the U.S. with total assets of $304,657,685,000, following JP Morgan Chase Bank, BofA, Wells Fargo, Citibank, U.S. Bank, and PNC Bank. JP Morgan Chase is ranked #6 in the world, following four Chinese banks and one Japanese bank.]

Xen
Getting away from the traditional approach of set up a server, manage a server, load your software on it, and run things that way (the old, traditional IT method, which is what the EC2 service is geared towards, except that it's virtualized): as an aside, at its base, what Amazon has built upon is Xen. So it's all Linux-based hypervisors.

Fargate
You can do Docker Images. They have their own Docker service, including their own version of Kubernetes for setting up clusters of Docker Images, or whole suites. They also have their own Docker Registry, so you can create your own Docker Images; or, since they're linked directly to and trusted by Docker Hub, you can download things from Docker Hub and automatically deploy them within the Amazon infrastructure. That service is called Fargate. And then you can go beyond that: instead of running a full VM, now you're just running your software plus supporting packages inside a Docker Image, so you no longer have to manage a server. Now we can take it a step further than that, with this thing called Lambda.

Lambda
Lambda is: you simply write a little piece of code that handles one simple task (e.g., handling the action off a web URL) or, maybe it's tied into a pipeline, so something's triggered by an event somewhere that sends data to a Lambda Function and it executes the defined task. If you think of object-oriented programming, where you have functions declared as part of an object, a Lambda Function can be just a function, and then you implement your product as a whole bunch of Lambda Functions. To support that, Amazon invented their own micro VMs. The project is called Firecracker. It's like a Docker Image, but stripped down as small as they can get it, with two key goals: 1) fast startup times and 2) very secure. And it has absolutely minimal support in it. The startup time is about 125 ms, or something like 250 instances per second that they can start up with a Lambda Function. [And then, there's this strategy for Optimizing AWS Lambda Costs.]
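
A Lambda Function really can be that small; here's a minimal sketch of a Python handler (the "name" field in the event is made up for illustration):

    # A complete Lambda Function: one handler for one simple task.
    def lambda_handler(event, context):
        # 'event' carries whatever triggered us (an HTTP request, a queue
        # message, etc.); here we assume a hypothetical "name" field.
        name = event.get("name", "world")
        return {"statusCode": 200, "body": f"Hello, {name}!"}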

Outposts
With Outposts, you can take the AWS services and run them on your own on-premises hardware; that gives you some advantages, and they have things that let you connect and take advantage of that combination.

Lightsail
Lightsail … When you set up a web server, you typically set up a LAMP server (Linux | Apache | MySQL | PHP), so you have to set up all those services. That's pretty typical for a web service. Lightsail simplifies the process. Instead of going in, setting everything up, and configuring it, you simply say, "Lightsail, I want this kind of server, this kind of database, etc." and Lightsail goes out and sets up everything for you. Done.

Storage
One of the key storage services is S3. S3 is not actually a file system. If you look at the interface, it looks and behaves like one, but it really isn't. It's an object storage system; actually more like a database, with a huge capacity. The downside is: you can't simply mount it to a computer and use it like a file system. You typically have to use one of their APIs or the CLI to get stuff in and out of it.
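
Here's a minimal sketch of getting stuff in and out of S3 through the API, using boto3; the bucket name is a made-up placeholder and would have to exist already:

    # Write an object into a bucket, then read it back.
    import boto3

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="my-example-bucket",  # placeholder bucket name
        Key="notes/hello.txt",       # keys look like file paths, but S3 isn't a file system
        Body=b"Hello, S3!",
    )
    obj = s3.get_object(Bucket="my-example-bucket", Key="notes/hello.txt")
    print(obj["Body"].read().decode())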

Elastic Block Store
EBS (Elastic Block Store) is block storage; that's something you can actually mount to a running instance. In fact, when you create an EC2 Instance, the boot drive that becomes part of it is an EBS volume. And there are two ways to go with that. The EBS volume created with your EC2 Instance is tied to it; if you ever delete the EC2 Instance, that volume gets deleted with it. But if you create a second EBS volume and mount it to your EC2 Instance, that one survives on its own, even if the instance goes away. So anytime you want persistent data in a locally accessible file system, you set up a new EBS volume. (If you want a file system you can mount to multiple machines at the same time, that's what EFS, next, is for.)
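
A minimal sketch of creating that second, independent volume and attaching it to a running instance (the instance ID is a made-up placeholder; a volume must be created in the same Availability Zone as its instance):

    # Create a 20 GiB volume and attach it to an existing instance.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")
    vol = ec2.create_volume(AvailabilityZone="us-west-2a", Size=20, VolumeType="gp2")
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(
        VolumeId=vol["VolumeId"],
        InstanceId="i-0123456789abcdef0",  # placeholder instance ID
        Device="/dev/sdf",                 # appears as a block device on the instance
    )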

Elastic File System
EFS is similar, but it's more like a traditional file share; think of an NFS mount. The thing with these is cost: EBS is fairly expensive relative to the others, and EFS is fairly expensive too. S3 is really cheap, comparatively. That's why S3 is the popular place to store stuff; it has huge capacity and it's very cheap. Glacier is even cheaper. But Glacier is not readily accessible. Once your data is in Glacier, which is archival storage, and you say, "OK, I need access to my data again," you have to pull your data back to S3 before you can access it. And you're charged based upon how much data you pull out, and how quickly you want it to be available. Immediate availability is the most expensive, and the options run all the way down to "it could be a couple of days before your data is available to you."

Q: And, this is per instance?
Gunnar: This is the total amount of data you pull, on a monthly cycle. And it gets a little more complicated because there are multiple types of EBS volumes. Everything, by default, is on SSD drives; in fact, it looks like it's all NVMe these days, which is not really cheap.

But, if that's not good enough, you can go to I/O-optimized SSDs. That's really expensive, but it supports very high data rates or multiple access. Or you can go to the cheaper end and go back to magnetic storage: good data transfer rates, not great access times, but also really cheap. Those are all things you have to keep in mind when you're setting up your system.

Glacier
Glacier is archive storage.
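
Pulling archived data back out is an explicit (and billed) request. Here's a minimal sketch of restoring a Glacier-archived object through the S3 API with boto3; the bucket and key are made-up placeholders, and the retrieval tier is what governs how fast (and how expensively) the data comes back:

    # Ask for a Glacier-archived object to be made readable for 7 days.
    import boto3

    s3 = boto3.client("s3")
    s3.restore_object(
        Bucket="my-archive-bucket",   # placeholder bucket name
        Key="backups/2018.tar.gz",    # placeholder object key
        RestoreRequest={
            "Days": 7,  # how long the restored copy stays available
            # Tiers: "Expedited" (fast, costly), "Standard", "Bulk" (slow, cheap).
            "GlacierJobParameters": {"Tier": "Standard"},
        },
    )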

Storage Gateway
Storage Gateway is a device you put on your local network and basically do your copies to it and it uploads to your AWS System.

The Snow Family
This collection refers to what AWS calls a Snowball. If you have huge amounts of data, there's this physical thing called a Snowball, a device they mail to you. You upload your data to it, mail it back, and they insert the data into your account. And it's not only a hard drive; it's actually a big hardened box with a computer in it and a disk array. So it's redundant, it's hardened (armored, even), and it's encrypted. Because, again, Security Is Job Zero. They care about the integrity of your data and the security of it.

FSx for Lustre and FSx for Windows File Server
Those are new so Gunnar couldn’t tell us much about them.

And, naturally, there's a backup service, too.

DATABASES

RDS (Relational Databases)
Amazon makes multiple options available under the RDS umbrella. When you request a relational database, they ask you which flavor you want: MySQL, PostgreSQL, Oracle, SQL Server, MariaDB (which is a MySQL fork); it can be any of those. In fact, when you set it up, it will ask you which compatibility, or version, you want.

The next thing it will ask is, "How big? How capable a database do you want?" The answer determines what kind of instance it sets up. The more CPU/memory/network capability you want, the more it's going to cost you.
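
A minimal sketch of those choices (flavor, version, size) made through boto3; every value here is an example placeholder:

    # Create a small MySQL-flavored RDS instance.
    import boto3

    rds = boto3.client("rds", region_name="us-west-2")
    rds.create_db_instance(
        DBInstanceIdentifier="example-db",
        Engine="mysql",                 # the flavor: mysql, postgres, mariadb, oracle-*, sqlserver-*
        EngineVersion="5.7.23",         # example version; pick the compatibility you need
        DBInstanceClass="db.t2.micro",  # the size; more CPU/memory/network costs more
        MasterUsername="admin",
        MasterUserPassword="change-me-please",  # placeholder; use a real secret
        AllocatedStorage=20,            # GiB
    )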

Dynamo
Basically, Dynamo is Amazon's answer to a Mongo database: a NoSQL, or document, database.

Q: Is it based on Mongo?
A: Probably, but they don’t explicitly say.

One of the things about Amazon is: they take big advantage of open-source projects—to the point where some open-source projects are now changing their licenses to try and shut out entities like Amazon because they’re tired of having huge companies making use of their product without any recompense.

Q: They won’t contribute back to the developer?
A: Not specifically to things like that, that I know of. But I mentioned Firecracker … they actually put the whole thing on GitHub, under the Apache License; it's very open.

ElastiCache
ElastiCache is simply a memory cache; if you have frequently accessed data, you can set up a memory cache to get much faster access to that data.

Redshift
Redshift is for data warehousing; it's probably Hadoop. [More likely not: Amazon's Hadoop offering is EMR (Elastic MapReduce); Redshift descends from PostgreSQL.]

Neptune
Neptune is a graph database.

Blockchain
AWS now supports blockchain, so they have a database for that.

Amazon Aurora
Amazon Aurora is what they are moving RDS toward. It's Amazon's in-house-developed database engine, but it is compatible with MySQL, PostgreSQL, and maybe some others. One of its benefits is that you can run it serverless.

All of these database products are "serverless" in the sense that, once you've set up a MySQL database, you don't get access to a server. The database is actually running on a VM instance, but you don't have access to it; it's managed by Amazon for you. But it's still a physical server.

Aurora is not tied to a physical server; it's completely serverless. It's distributed. It scales. One of the tricks they do is the same as with Lambda Functions: when somebody makes a request to a Lambda Function, they get their own unique Lambda Function until that request is serviced. If 100 people make requests to the same service at the same time, 100 Lambda instances start up and handle the 100 requests.

Q: What does that do for performance?
A: It's fantastic! It also means that it scales, effectively without limit, up to the capacity of Amazon's computing and network infrastructure.

Cool.
Yeah. There's no load balancing; there's nothing you have to worry about. It simply scales, and can be distributed. It's the same with Aurora: when I say serverless, that's the fashion it works in; per connection, it simply runs.

Virtual Private Cloud (VPC)
Virtual Private Cloud is actually the core. If you set up an Amazon account, at the very top level, architecturally, is a VPC: your own virtual private cloud. Every account gets five of them. A VPC lives in a Region you choose; within the VPC, you define sub-nets, and within those sub-nets, you define gateways to determine whether each is a public or private sub-net. Each sub-net sits in one Availability Zone, but different sub-nets can be in different Availability Zones, so your VPC can span all of the Availability Zones in its Region at the same time. The default VPC will tend to have a sub-net in every Availability Zone within the Region where you place it (reference the initial Region/Availability Zone Map).
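
A minimal sketch of carving out your own VPC with boto3; the CIDR ranges are example placeholders, and attaching an Internet gateway is part of what makes a sub-net public:

    # Create a VPC, one sub-net pinned to one Availability Zone, and an Internet gateway.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]  # example address range
    subnet = ec2.create_subnet(
        VpcId=vpc["VpcId"],
        CidrBlock="10.0.1.0/24",
        AvailabilityZone="us-west-2a",  # this sub-net lives in one Zone
    )["Subnet"]
    igw = ec2.create_internet_gateway()["InternetGateway"]
    ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"])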

Route 53
Route 53 is their DNS system. You can also register your domains through Route 53. But it's also tied to all of these other services, so it understands things like the IDs that are assigned to them. When you stand up an EC2 Instance, it's given a machine ID; there's also an ARN (Amazon Resource Name), a long string that uniquely identifies that particular instance, and you can use that ARN in your routing table. So if a request comes in for a given address, it gets routed to the proper system. That system can be an EC2 Instance, or a load balancer that sits in front of a set of systems and handles all of that traffic. And the spiffy thing about load balancers, especially the application load balancer, is that they handle the SSL sessions; in fact, Amazon is their own top-level certificate authority, so there's no charge for creating certificates. You simply create certificates for your domains and the load balancer handles all the certificate negotiation. Behind the scenes it can be plain old HTTP, because it's all on a private network (not really critical), but everyone gets the same certificate and everything else is handled invisibly beyond that.
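
A minimal sketch of adding a DNS record through boto3; the hosted-zone ID and IP address are made-up placeholders:

    # Point www.example.com at an IP address via Route 53.
    import boto3

    r53 = boto3.client("route53")
    r53.change_resource_record_sets(
        HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted-zone ID
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",  # create the record, or update it if it already exists
                "ResourceRecordSet": {
                    "Name": "www.example.com.",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],  # placeholder IP
                },
            }]
        },
    )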

Free Tier Services
What Amazon calls Free Tier services … there are always free tiers available. Those are things like the first 1,000 database requests every month, or the first one million Lambda invocations; those are permanently free. Others are on a 12-month period. So when you set up an EC2 Instance, if you use what they call the Free Tier instance (a T2 Micro, which means one virtual processor and a gig of RAM), that's free, but only for the first 12 months after creating your account.

Security is important, and they have a bunch of stuff for it. Every service and every storage type, including databases, includes the ability to encrypt, either using standard keys from their key management system, or you can provide and manage your own keys and they will do the encryption with those. Or, you can encrypt data before you send it to them.

CloudWatch
A lot of logging happens, but CloudWatch also does events and, in fact, can trigger actions. You simply make a rule like: when this server goes over 80% CPU utilization, send me an email. Or: when a build fails, send me an email.
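
Here's a minimal sketch of exactly that 80%-CPU rule via boto3; the instance ID and the SNS topic (the piece that actually sends the email) are made-up placeholders:

    # Alarm when an instance averages over 80% CPU for five minutes.
    import boto3

    cw = boto3.client("cloudwatch", region_name="us-west-2")
    cw.put_metric_alarm(
        AlarmName="high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        Statistic="Average",
        Period=300,  # seconds
        EvaluationPeriods=1,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        # Placeholder SNS topic; subscribing your email address to it sends the mail.
        AlarmActions=["arn:aws:sns:us-west-2:123456789012:ops-alerts"],
    )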

CloudTrail is more about tracking user activity: who did what, when.

Systems Manager
Systems Manager does things like automatic updates, and a recent update to it lets you have command-line access to running instances even if they don't have a public interface. Normally, up until now, the way you'd manage a system is: you make sure it's publicly accessible and then you secure-shell into it, or you set up a bastion host that you secure-shell into and then hop to another machine from there. The SSM part of Systems Manager lets you connect directly to something that has no public network interface and is on a private sub-net.

Command Line Interface (CLI)
The Command Line Interface is largely used for interaction with the AWS services. It's a Python application that you download to your local system. Part of setting up user accounts is generating a key pair (a couple of really long hex strings) and, once you register your user account with the CLI you've installed on your local machine, you can take actions within your VPC from your local machine, wherever you happen to be, as that user.

CloudFormation
Let's go back to CloudFormation; third on the left in the array-of-services image. All of this stuff is nice, and it's cool all the things you can do, but take that to the next step. One of the problems you have with setting up environments, if you go back to the traditional thing, is pulling a computer out of a box, setting it up, attaching the network, changing the firewall rules, creating the database … all those traditional steps of setting up services. Well, on Amazon that's all automated. What CloudFormation does is let you create a script (you get to choose whether to define it in JSON or YAML) that defines all those things: this is my typical setup, I want this kind of server, this AMI image on it, this firewall rule, this security group. It's now reproducible. Anytime you want to spin up that environment, run the script and it's done. That's what they call "infrastructure as code," and it is one of the most powerful things about AWS.

In fact, when I go into the console and start showing you stuff, the console is a JavaScript application written on top of their SDK. Their SDK is supported in multiple languages, so everything you see me do in the console, you can do using their SDK, by writing your own program and running it locally and having AWS execute it. That SDK can be Python, C++, PHP, Node.js, .NET. The Management Console doesn't expose everything you can do; only the most critical parts.
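
A minimal sketch of infrastructure as code: a tiny YAML template, here embedded in a Python string and launched with boto3; the AMI ID and stack name are made-up placeholders:

    # Define one server in YAML, then ask CloudFormation to build it.
    import boto3

    TEMPLATE = """
    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      WebServer:
        Type: AWS::EC2::Instance
        Properties:
          ImageId: ami-0123456789abcdef0   # placeholder AMI ID
          InstanceType: t2.micro
    """

    cfn = boto3.client("cloudformation", region_name="us-west-2")
    cfn.create_stack(StackName="example-stack", TemplateBody=TEMPLATE)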

Step Functions
So, for pasting things together: Simple Queue Service, Simple Notification Service, Message Queue. If you have multiple functions, multiple servers, whatever, running out there, and you need to communicate between them, these are the kinds of services you'd use. Whether you want the Notification Service or a queue: you might have one process that handles the front end and fills a queue with bits of data, and another process that works things out of that queue, so you're not feeding things directly from one to the other and creating a catastrophe when one end can't keep up. Step Functions take that a bit further. You get to draw out "here are my different services and this is how I want them connected" in a little graphic designer, press GO, and Amazon sets that up for real.
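
A minimal sketch of that producer/consumer decoupling, using SQS via boto3 (the queue name is an example):

    # One side fills the queue; the other drains it at its own pace.
    import boto3

    sqs = boto3.client("sqs", region_name="us-west-2")
    queue_url = sqs.create_queue(QueueName="work-queue")["QueueUrl"]

    # Producer: push a job onto the queue.
    sqs.send_message(QueueUrl=queue_url, MessageBody="job #1")

    # Consumer: pull work off when ready; delete only what's been handled.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        print("Handling:", msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])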

Developer Tools
There are debugging tools here as well.

CodeDeploy we mentioned earlier.

CodePipeline is a service for tying all these things together. You simply say, "when this thing happens, take this action, maybe in parallel with this other action; if it fails, notify me; if it succeeds, go on to the next step." So you get to tie all those things together. And that's part of how you do something like publish a code change to GitHub: CodePipeline will notice that and create an event, so there's no initiation on your part; it happens automatically. Oh, you made a commit. I'm going to build your code for you, and test it, and push it out to the server … all of that stuff. It's done for you.

Q: And, this is Amazon’s internal Git?
A: Either. You can use your own Git or Amazon's, or you can have it watch an S3 Bucket. So you can simply copy to an S3 Bucket and it will take all these actions based on that.

Cost Management
They give you a lot of tools for managing costs.

Alexa For Business
Alexa For Business goes a bit beyond the Alexa you know. Alexa is entirely programmable through Amazon, so you can actually create custom commands and actions associated with your Alexa instance.

Q: So, the customer at home—or at a business—can interact with Alexa?
A: Yes, you can say, “Alexa, feed the cat.” And then you set up a rule set in Amazon so that, whenever you say that, it happens.

Q: That sounds like a real security situation for the people on the other end, is that right?
A: Yeah. It’s going to be limited to Alexa Instances that you have control over. You can’t install it on someone else’s Alexa.

Q: Are you talking about a customer who happens to have Alexa? Who’s got the Alexa here?
A: If you buy Alexa for your home, I can’t make any changes to it. If I buy one, I can make changes to mine because I have the account information. If I have account-level access to a customer’s Alexa, I can manipulate it.

Amazon Workspaces
You can do things like Terminal Services: a Windows desktop, or other desktops, that you use remotely.

Game Engine
They have their own Game Engine (Lumberyard), hosted on AWS.

IoT, including their own FreeRTOS distribution that they've compiled for a number of microcontrollers and processors.

Machine Learning
<some of these get a little scary>

Elastic Transcoder
With Elastic Transcoder, you can do things like send it an audio stream and have it transcribed to written text, or reverse that, as you do with the Alexa Tool Kit: "Say this phrase, in this language," and Alexa does the translation.

They also have photo analysis; you can send it a picture and ask it to analyze it. What are you looking for in this photo? What are you looking for in this video stream?

Mobile Device Farm
A mobile-device farm, used for testing anything you make on real mobile devices.

Ground Station Service
This is used for communicating with satellites.

Screen Shot. Copyright © 2019, FPP, LLC. All rights reserved.

Demo
Gunnar logged into the Amazon Console (live, over WiFi) on his Amazon account. He created a computer using the default VPC, which is actually very insecure. If you actually use Amazon, what you want to do is create your own VPC, gateways, and sub-nets and then delete the default they give you. Once you do that, the default actions will all be secured by default. The default VPC is meant to get you up and running for doing free-tier stuff and learning about the services, but you don't want to use it for production applications.

Amazon Machine Images
Regarding AMIs (Amazon Machine Images): Amazon does their own version of Linux, called Amazon Linux; it's CentOS-based. This one is CentOS 7-based and has some additional Amazon tools on it, plus they run some of their own mirrors for the RPM (RedHat Package Manager, used for installing software) archives, which are internal to Amazon, so things like updates run a lot faster.

For most things, this is the quick-and-easy thing to do … but there are lots of VMs: Windows, Windows Server, headless Windows Server, and these are simply the ones that Amazon makes available. There's also an Amazon Marketplace. And you can create your own images and save them. So if you take one of the Amazon images, load a bunch of software on it, and don't want to go through that process again, you can save it as your own AMI; now that's your starting point, and you're done.

Q: You can’t do this on Mac?
A: No. Mac has a special something in the BIOS that won’t let it boot on anything but a Mac.

Q: How long has this been evolving?
A: 10 years?

The breadth of this is breathtaking.
It is. This is now the biggest moneymaker for Amazon, too. They built it for themselves, to support their online business and then their CTO came in and said, “You know, we have some good stuff here, why don’t we start selling it?”

Hundreds of Products
You can see hundreds of products; these are all commercial products that are ready to go. You simply load one up, create an instance, run it, license it, and start paying for it. That means two costs to you: one to whomever made the AMI and is licensing it, and the other to Amazon for whatever compute resources you use to run it.

Community AMI
Then there are Community AMIs; there are hundreds of these … maybe thousands; in fact, 98,505.

And most of these will be free…at least for the AMI. But you still have to pay for whatever services you use.

Amazon Linux
Amazon Linux 2 is free. The account I’m using is less than a year old, so it will be free—until my 12 months are up. This is an EC2 Instance.

[NOTE: It could take a person a year to discover—and learn—all the offerings Amazon provides, so it’s worth it to work with an AWS Consultant to dig-in to the various menus and get a feel for what’s available before you invest in development and then discover you want to do things another way and have to transfer your work to another product after your one year free period ends.]

Placement Groups
Here's a list of the hardware you get to run it on … virtualized hardware. However, there is a thing called Placement Groups that lets you set rules like, "I want my instance to run on private hardware that only I have access to." That costs you a little bit more, but it's no longer a shared server; you get your own private server for running your stuff on. Provided by Amazon, in their data center, but you're guaranteed to be the only tenant on that piece of hardware.

Placement Groups can also be used for things like making sure something is distributed. Say you have an application and you want to make sure it's running in multiple places, so it's not susceptible to a problem in one data center: you tell the Placement Group to be a distributed Placement Group and it will put each new instance on separate hardware. Or maybe you're running something like a cluster, where high-speed communication is really important and everything needs to be located in the same place: you use a co-located Placement Group and it guarantees they are all in the same place, possibly down to the same rack, to get the highest data transfer rate.

So these are the different tiers of hardware; virtual definitions of machines you can choose to run on. Running down the tiers: the T2 Micro has one virtual CPU, 1 GB RAM, uses an EBS volume, and doesn't have great network performance; there are lots of these. At the top end, you can be running up to 96 virtual processors with a 25 Gbit connection (10 Gbit on some of these). Some have graphics processors; some have really high-end graphics processors. There's a huge variety of machines you can run on. And the naming conventions (T2, E2, G2) are loosely based upon what they're optimized for: memory, GPU, network throughput, etc.

T2 Micro Setup
This is the default VPC. I'd create another VPC, but that's not working right now. I'm in the Oregon Region. I can choose whatever Region I want to set up in; Oregon is my default. Oregon has four Availability Zones and already has sub-nets set up, so by picking one of these sub-nets, I determine which Data Center this image is created in. The default is to give it a public IP address. I haven't set up a Placement Group.

Q: Once it auto-assigns an IP address, is that static?
Gunnar: No. Public IPs are dynamic; they're DHCP. In fact, that's a good thing to cover. There is something called Elastic IP Addresses. Everything Amazon is Elastic SomethingOrOther.

Elastic IP Addresses
An Elastic IP is a public IP address you can assign to an instance, and that one is static. Each account gets five, and that's all, so use them wisely. In fact, you shouldn't have to use them at all if you set things up properly. But if you're still stuck with the traditional way of having to set up a server, make it available, etc., an Elastic IP is what gets you there. The spiffy thing is: you can assign it to something like a load balancer, and behind that can be any number of machines the load balancer is distributing traffic to; you don't have to worry about that part of it. You've just got that one public IP that you've set up.
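
A minimal sketch of allocating one of those five Elastic IPs and pinning it to an instance with boto3 (the instance ID is a made-up placeholder):

    # Allocate a static public IP and attach it to an instance.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")
    alloc = ec2.allocate_address(Domain="vpc")  # counts against your limit of five
    ec2.associate_address(
        AllocationId=alloc["AllocationId"],
        InstanceId="i-0123456789abcdef0",       # placeholder instance ID
    )
    print("Static IP:", alloc["PublicIp"])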

IAM Roles
IAM Roles are a very granular system for defining what can do what, where. Every single service on Amazon has a category for an IAM Role, and then a set of actions that break out under it, such as being able to read it, write to it, get a list … whatever is pertinent for that particular thing. That growing (or, expanding) list is defined in JSON, and they have a huge number pre-defined for you, but you can go in and customize it, to the point of saying things like "this user can access this port on this IP address," or "this particular database instance is accessible from these three machines, but only in read-only mode." That's the kind of granularity you can get into. And the thing is, if you don't set up an IAM Role, stuff doesn't talk to each other; so you have to use them.

Here's an example IAM Role I set up earlier: whatever resource has this S3-admin-access IAM Role is allowed to read and write S3 Buckets. So if I want to copy something to or from an S3 Bucket on this machine I'm creating, I have to give the machine access to it. And that's what this particular Role does.
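
A minimal sketch of what such a JSON permission document looks like, created via boto3; the policy name and bucket are made-up placeholders (and deliberately narrower than true admin access):

    # Define a policy allowing read/write on one bucket, and register it with IAM.
    import boto3, json

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::my-example-bucket",    # the bucket itself (for listing)
                "arn:aws:s3:::my-example-bucket/*",  # the objects inside it
            ],
        }],
    }

    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="s3-read-write-example",  # placeholder name
        PolicyDocument=json.dumps(policy),
    )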

There is one advanced option that's worth going into. When you create an instance, a little "advanced" box lets you set some [so-called] user data. Basically, you create a shell script and, anytime this instance is started, that shell script is executed. So you can do typical things like a software update as soon as the instance comes up; that's the first thing it does, even before it's made available to you.
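
A minimal sketch of passing exactly that kind of update-on-boot script as user data with boto3; the AMI ID is a made-up placeholder, and the script assumes an Amazon Linux-style (yum) system:

    # Launch an instance that patches itself before you ever touch it.
    import boto3

    USER_DATA = "#!/bin/bash\nyum update -y\n"

    ec2 = boto3.client("ec2", region_name="us-west-2")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,  # boto3 base64-encodes this for you
    )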

So I’m going to give it an 8 gig boot device. Since I’m still in free tier, I can give it up to 30 gig as a boot device but as soon as I go over that, they’re going to start charging me.

Storage Services are charged based on what you provision it as, so regardless of how much of it gets used, if you set up a terabyte, you get charged for a terabyte, even if it’s empty.

It's going to be SSD. I can make it a provisioned-IOPS volume if I want to.

This is the last step: Security Group.
Security Group is Amazon's term for firewall. This particular one, the default, is very straightforward: port 22, TCP, any IP address in the world is allowed to access it. IOW, I'll be able to secure-shell into this instance once it's started. And, one last step: I'm going to launch this, and a pop-up appears. One of the things it's going to do is install on the instance the public key of a public/private key pair that belongs to my account. I've already created one, but I could create a new one. That's how I'm going to access it; there are no passwords, it's all key-based authentication. The thing is: that key pair's private half does not exist on Amazon. If I had not downloaded it when I created the pair, it would be gone, and my only choice would be to create a new one and start from there. I do have access to it, so I'm going to launch the instance. And there it is; it's being set up. Initiating server.
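
A minimal sketch of creating that kind of SSH-only Security Group with boto3; the VPC ID is a made-up placeholder (and opening port 22 to the whole world is only reasonable for a throwaway demo):

    # A firewall (Security Group) allowing SSH from anywhere.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")
    sg = ec2.create_security_group(
        GroupName="ssh-only",
        Description="Allow SSH from anywhere (demo only)",
        VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # any IP address in the world
        }],
    )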

We wish to thank Gunnar for another excellent, learned presentation!

LINKS:
Apple Pays Amazon Over $30M/Month

NOTES related to link, above:

  • Waukee, IA: West of Des Moines, via I-80.
  • Newark, CA: Northeast of Palo Alto, across the San Francisco Bay from Palo Alto, via the Bayfront Expressway.
  • Prineville, OR: Northeast of Bend, via some back roads. Located in Crook County, Oregon.

Buy your real estate early!

EXCERPT: “AWS’ total revenue last year was $25.66 BILLION.”

FINANCIAL NEWS:
During this Tech Talk, I mentioned that I recently heard a rumor that Amazon is considering buying Fedex. At that time, FDX was trading at $170/share. As of 4/24/19, FDX is trading at $197/share, a 17.5% increase in about 30 days. Boom! Old Stockbroker’s Adage: Buy on the rumor; sell on the news.

Author: Karen
Written: 4/1/19
Published: 4/7/19
Copyright © 2019, FPP, LLC. All rights reserved.