NOTE: The following is primarily transcribed narrative from Nick Card, captured during a recent Tech Talk at the Rogue Techies Monthly Meeting. Some minor changes—a word or phrase here and there—were made to translate conversational to narrative format, or to clarify a concept. Enjoy!
Nick is the System Administrator at Combined Transport & Logistics, Inc., a family-owned logistics services provider, located in Central Point. His background includes producing some amazing applications hosted on a Raspberry Pi.
Nick launched his presentation by explaining that the Raspberry Pi was developed in 2012 to promote the teaching of basic computer science in schools and developing countries. The market eventually expanded to include uses such as robotics. The price point is $35.00.
The original devices had 256MB of RAM, split between the GPU and the CPU. The example device Nick brought (a third-generation model) has 1GB of RAM and the following additional features: USB 2.0 (4 ports), Ethernet, HDMI, audio, and composite video out over the TRRS connector (Tip-Ring-Ring-Sleeve; the 3.5mm headphone-style jack that replaced the separate RCA composite connector on the earliest models).
There’s a new version out: the Raspberry Pi 3 Model B+, which has some new features, like a 1.4GHz 64-bit quad-core processor, dual-band wireless LAN, Bluetooth 4.2 with BLE, faster Ethernet, and PoE (Power-over-Ethernet) support via an add-on PoE HAT.
Nick explained that the Raspberry Pi took off because it was useful for way more things than robotics. Obviously, it’s useful as a general computer. For $35, when it came out back in 2012, the idea of getting a full-blown computer, with a monitor, keyboard, and mouse, for less than $100 was insane. You boot off a microSD card, although the newer ones can also boot off a USB stick, so you don’t need the microSD card. You can even boot off an external USB hard drive. That’s kinda cool.
Question: Does this support BOOTP; boot over the network?
The new one does support a PXE network-boot option. The new ones also do PoE more or less natively: the board has an extra 4-pin header that breaks out the power from the Ethernet jack, but you need the add-on PoE HAT with its own power circuitry to use it. This older model, by contrast, requires a separate PoE splitter to convert the voltage down to a level sufficient to power it over USB.
The Raspberry Pi Foundation has a Linux distro called Raspbian, based on Debian, that runs on these things. Any Linux distro built for ARM (Advanced RISC Machine) processors will boot. There are a few special distros like Pidora, Slackware ARM, and RISC OS Pi (not Linux, but it runs), so if you really want to re-live the Glory Days, you can.
There’s so much more you can do with this; it is a computer, right? You can run server or standalone appliance applications on this tiny thing. A very common thing people do is turn it into a gaming center with RetroPie, or a media center with OpenELEC, or run Pi-hole, a network-level ad and internet-tracker blocker that acts as a DNS sinkhole: it runs its own DNS server, configured to take in all the ad-block lists, and blocks the ads at the DNS level.
Then there is the 40-pin GPIO connector, which is what really makes this device super-powerful. The 40-pin header offers both 5V and 3.3V power, so you can run any standard digital part off of it. It gives you digital I/O (for analog sensors you add a cheap external ADC chip, since the Pi’s GPIO is digital-only), so anything you can imagine—buttons and knobs and sensors—can be integrated with this device. That’s why it’s a super-popular device for the Internet of Things (IoT).
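To make the GPIO idea concrete, here is a minimal Python sketch using the standard RPi.GPIO library that ships with Raspbian: it reads a push button and lights an LED. The pin numbers are arbitrary choices for the example, not anything from Nick’s setup.

    # Minimal sketch: read a push button and light an LED through the 40-pin header,
    # using the RPi.GPIO library that ships with Raspbian. Pin numbers are arbitrary.
    import time
    import RPi.GPIO as GPIO

    BUTTON_PIN = 17   # BCM numbering; button wired between this pin and ground
    LED_PIN = 27      # LED (with resistor) between this pin and ground

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.setup(LED_PIN, GPIO.OUT)

    try:
        while True:
            pressed = GPIO.input(BUTTON_PIN) == GPIO.LOW   # pulled up, so LOW = pressed
            GPIO.output(LED_PIN, pressed)
            time.sleep(0.05)
    finally:
        GPIO.cleanup()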
Windows 10 IoT Core can run on the Raspberry Pi; Microsoft built it specifically for small boards like the Raspberry Pi.
Chromium—the open-source browser project that Google Chrome is built on—runs on the Raspberry Pi.
There are lots of lightweight IoT applications. For example, thermal sensors that observe body heat, capturing statistics for occupancy detection in a home automation system. The difference between thermal sensors and PIR (Passive InfraRed) sensors is that PIR looks for changes in heat, whereas a thermal sensor reports absolute temperatures and where they are. For example, if I’m sitting on a couch and not moving, I’m not going to trigger a PIR sensor, but I would be visible to a thermal sensor because I’m much hotter than my surroundings.
And then, because it’s got network connections and runs Linux, it’s trivial to take this data and then push it to whatever system you might want to use to analyze the data with scripting tools.
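A rough Python sketch of that thermal-occupancy idea: compare each pixel of a small thermal array against the ambient average and push an “occupied” flag to another system over the network. The sensor-read function and the endpoint URL are placeholders, not a specific product or part of Nick’s setup.

    # Sketch of the thermal-occupancy idea: compare each pixel of a small
    # thermal array against the ambient average and report the result to
    # another system. The sensor read and the URL are placeholders.
    import time
    import requests

    def read_thermal_frame():
        # Placeholder: replace with a real sensor read (e.g., an 8x8 thermal array driver).
        return [[22.0] * 8 for _ in range(8)]

    def occupied(frame, delta=3.0):
        pixels = [t for row in frame for t in row]
        ambient = sum(pixels) / len(pixels)
        # A person shows up as a cluster of pixels well above ambient temperature.
        return sum(1 for t in pixels if t > ambient + delta) >= 3

    while True:
        frame = read_thermal_frame()
        requests.post("http://automation.example.local/api/occupancy",   # placeholder endpoint
                      json={"room": "living_room", "occupied": occupied(frame)})
        time.sleep(10)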
Question: I know someone who’s running an irrigation system off of a Raspberry Pi.
Nick: Yes! There’s a system called OpenSprinkler specifically designed to run on a Raspberry Pi. Oh, and this has WiFi too, which makes it really easy to connect anywhere. The way it works is: in addition to weather-forecast data from the Internet, you can get little WiFi humidity sensors and stick them around your yard to get actual local readings. The system tracks how much water it has put down in different areas and runs each watering station for exactly the amount of water it needs. You need one Raspberry Pi master controller to manage all the sensors.
So, the box you buy (~$100) accepts input from the Raspberry Pi and converts it into the signals the various sprinkler valves require, performing all the voltage stepping, feedback, and analog-to-digital conversion.
What we at Combined Transport are doing with these devices is a project that came about organically. We had a desire to have digital dashboards around the company; digital signage. Instead of having a big white board where people have to manually write stuff … well, the info is all in the (back end) system already … so, we decided to pull that out and display it, in a real-time way.
We figured out how to pull out that data using MediaWiki; I set up a bunch of dashboards that take in data from our main database and refresh regularly. The next challenge was to figure out how to display it. People could log in at their own station and see the information, but nobody wants to keep a browser window open just to glance at a snapshot of statistics from time to time. So we went out and purchased a bunch of TVs, plugged them into an old computer, and pointed it at the web page we created in MediaWiki. We zoomed in so you could read the information from across the room and turned on auto-refresh at about every minute; it updates throughout the day.
[NOTE: ECSO & CORE also use similar TV displays.]
When we looked at digital signage solutions, we decided they were way too expensive. [Others agreed.] So I came up with this idea to use Raspberry Pi devices for the project. We began configuring the devices and discovered it turned into a maintenance nightmare! Either Chrome wouldn’t work properly, or it would stop refreshing, or somehow someone would click on it and it would crash. So I ended up creating what I call Pi Herald: a way to script the devices from a central server, to make it easy for me to manage these digital signage devices. Pi Herald is written in Perl because, as a System Admin, that’s my go-to for scripting things. Then I had to figure out how to script these displays.
Nick explained the setup at Combined Transport as follows:
I ended up creating an entire controller/server infrastructure around the Raspberry Pi devices. I have a VM that runs Debian; that’s my master server and it has all the control software on it and runs all the screens.
The way I ended up setting this up is: the master server creates each screen and runs Chrome in it, and a VNC (Virtual Network Computing) server lives in each one of the screens. Each Raspberry Pi then runs a VNC client that connects to its screen and displays whatever is on it. That makes it much easier for me because now I don’t have to change what’s being displayed at the client level; I can change it in one place on the server and the change is automatically propagated throughout the bank of machines.
The server runs and, when the screens come online, they broadcast. I have a full installer I created so that, when I install Raspbian and then run the installer, it copies all the software, pulls in all the dependencies, and configures everything for me. What it spits out is a client that is ready to accept communication from my server; it broadcasts to my server and says, “Hey! I’m online. This is my name and I’m awaiting instructions.” I can then go into my little Perl database and decide which screen it should connect to. The server pushes out a configuration file that tells the client which screen it’s supposed to connect to over VNC, and it shares an SSH key so I can communicate with the client securely and no one else can simply remote in. That’s not a big problem, but better to be safe.
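Pi Herald itself is written in Perl and its protocol is Nick’s own; purely to illustrate the “Hey, I’m online” handshake he describes, here is a tiny Python sketch of a client broadcasting its name and waiting for a configuration to come back. The port numbers and message format are invented for the example.

    # Tiny sketch of the client's "Hey, I'm online" handshake (Pi Herald itself
    # is Perl; ports and message format here are invented for the example).
    import json
    import socket

    ANNOUNCE_PORT = 5005     # where the server listens for new clients (made up)
    REPLY_PORT = 5006        # where this client waits for its configuration (made up)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.bind(("", REPLY_PORT))

    # "Hey! I'm online. This is my name and I'm awaiting instructions."
    announce = {"client": socket.gethostname(), "status": "awaiting instructions"}
    sock.sendto(json.dumps(announce).encode(), ("255.255.255.255", ANNOUNCE_PORT))

    # The server looks this client up in its database and pushes back a config
    # saying which VNC screen it should connect to.
    config, server_addr = sock.recvfrom(4096)
    print("Config from", server_addr, ":", json.loads(config))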
And then I can run commands from the server. If I’m going to restart a screen because there’s a problem, or I want to display something different that’s not in the configuration, or I want to change something with Chrome, or I need to wipe a corrupted cache and start over, I can reboot that screen. The server says, “Hey, I’ve got all of these clients attached to that screen,” tells all of those clients to disconnect their VNC session, reboots and refreshes the screen, and then, once it’s back up and confirmed, goes out to all those clients and tells them to reconnect to the screen.
I use the Lite version of Raspbian, which is headless; that’s part of my install script. Then I configure LXDE (Lightweight X11 Desktop Environment), a super-lightweight desktop used a lot in the Debian world, so the client stays as performant as possible. That matters because, as we’ve grown and used more and more screens, each screen configuration is its own VNC session being displayed (when I say screen, I don’t necessarily mean TV—each TV has one Pi, but several TVs can show the same screen). For example: Parts Dashboard, Dispatch Dashboard, Shop Dashboard. Every screen I add means spinning up a whole virtual desktop, and they’re all running HD because they’re all HD TVs, so it puts more and more load on my hypervisor. That server is the most demanding virtual machine I have.
Comment: So, on your server, you’re building the web page, rendering it once, and using the clients just to display the information.
Question: Have you considered using XRDP (the open-source Remote Desktop Protocol server) instead of VNC? It runs faster than VNC does. But it might not be compatible with LXDE.
Nick: I’ll have to check it out. (General discussion about alternatives…)
I have 23 TVs running with 8 different screens.
Question: So, you’re only managing 8 windows on your VM Server?
Nick: Yes. It’s one VM, but I’ve got scripts so that, every time I create a new screen, it spawns a new X session on its own virtual terminal, starts the VNC server for it, and then starts the Chrome process.
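Nick’s actual scripts are Perl, but the shape of what they do can be sketched in a few lines of Python: spin up a virtual X display for a new screen, attach a VNC server to it, and start Chromium pointed at the dashboard. The display number, port, and URL below are placeholders, and the tools shown (Xvfb, x11vnc, chromium-browser) are common choices rather than a confirmed list of what his server runs.

    # Sketch of spinning up one virtual "screen": a virtual X display, a VNC
    # server attached to it, and Chromium showing the dashboard page. Display
    # number, port, and URL are placeholders; Nick's real setup is Perl-driven.
    import os
    import subprocess
    import time

    def start_screen(display_num, vnc_port, url):
        display = ":" + str(display_num)
        # Virtual 1080p framebuffer acting as this screen.
        subprocess.Popen(["Xvfb", display, "-screen", "0", "1920x1080x24"])
        time.sleep(1)  # give the X server a moment to come up
        # VNC server exported on this display, which the Pi clients connect to.
        subprocess.Popen(["x11vnc", "-display", display, "-rfbport", str(vnc_port),
                          "-forever", "-shared"])
        # Chromium in kiosk (full-screen) mode showing the dashboard.
        subprocess.Popen(["chromium-browser", "--kiosk", url],
                         env={**os.environ, "DISPLAY": display})

    start_screen(10, 5910, "http://wiki.example.local/Dispatch_Dashboard")  # placeholder URL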
The legacy process is working so I’m hesitant to change the way things work.
One of the problems I ran into with MediaWiki is the Sidebar (on the left), which is really useful when you’re a human being interacting with stuff, but when you’re displaying data on a monitor, you don’t want to see that Sidebar. Well, I can’t simply pan the page to push it off-screen, because I’m constantly refreshing and clearing the cache when I refresh, in order to avoid pulling stale data. As a result, it keeps coming back and displaying the Sidebar.
Well, I figured out that you can disable the Sidebar if you log in as a user and set a flag in that user’s CSS to hide it. But then I’d need that user to log in on every session. I finally figured out how to hack it into the page itself, so certain pages simply never display the Sidebar. That solved all my problems with needing custom Chrome profiles.
My dream was to be able to run PowerPoint presentations as well, which is a hard thing because of OpenOffice: its Word and Excel support is pretty good, but its handling of the latest PowerPoint format is really bad.
So then, I had to make a decision. Microsoft’s PowerPoint can convert presentations to .pdf, so I could upload a PowerPoint, send it to the general-purpose Windows server to convert it in PowerPoint, and get back a .pdf that I could then display, because the .pdf files display better. But I never got around to that. I simply make everybody go through MediaWiki.
The data is coming from MediaWiki. There’s an extension that pulls the data directly from the database, so as long as the data is updated in our database, MediaWiki is just the rendering layer—a better GUI for this data.
MediaWiki has an extension called ParserFunctions, so instead of having to write custom extensions for the different displays, I can run limited parser functions on the data to, for example, bold a value that is below a specified threshold or make the background of a specific cell red.
One advantage of Digital Signage is that, when users are constantly viewing the data and they notice something wrong, they are the ones who need to fix the data. For example, if a salesperson notices their sales numbers are different than they should be, they can go in and update their sales figures. Both the quality of the data and the efficiency of the system have improved; it’s a great motivational tool.
This is what we’re talking about; it’s a small, $35 computer.
Oh, I forgot to mention the Raspberry Pi Zero; it’s about half the size of this thing. Cost: $5.00. And the new one, the Raspberry Pi Zero W, is $10, but it adds WiFi.
I use one of these at home. My projector has serial control; I use the Pi as a serial-to-network pass-through, so I can SSH in, get at the serial connection, and control the projector without a computer having to be physically connected. That means, for example, that if I push my wall button to start theater time, my home automation system can send the commands to power up my projector.
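A minimal sketch of that pass-through idea, assuming the pyserial package: SSH into the Pi and run something like this to poke the projector’s RS-232 port. The device path, baud rate, and command bytes are placeholders—every projector has its own serial protocol, and the talk doesn’t specify Nick’s.

    # Sketch of the serial pass-through idea, assuming the pyserial package.
    # The device path, baud rate, and command bytes are placeholders; every
    # projector has its own serial protocol.
    import serial

    def send_projector_command(cmd, port="/dev/ttyUSB0"):
        with serial.Serial(port, baudrate=9600, timeout=2) as conn:
            conn.write(cmd)
            return conn.read(32)   # whatever acknowledgement the projector sends back

    # Hypothetical "power on" command; check the projector's manual for the real one.
    print(send_projector_command(b"PWR ON\r"))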
This guy also does a couple of other things for me. He runs for the sole purpose of creating a quorum for my hypervisor cluster. I have two hypervisors running Proxmox, a Debian-based hypervisor. (A hypervisor is a physical server that runs virtual machines—you can run all sorts of virtual machines on it.)
Question: What is Proxmox?
Nick: I can’t remember what it runs in; what the actual hypervisor is.
Comment: Either KVM or Xen?
Nick: It’s neither one of those.
Comment: Hmmm.
Nick: It’s QEMU; that’s what’s used for the virtualization. It also has a Linux kernel piece; it basically does both containers and virtual machines. Pretty cool. It’s free. You have to do a little tweaking: by default it only pulls updates if you have a license key, but they have a testing repository, so you have to manually go in and switch the repos over to the testing repo, which is free.
In a Proxmox cluster, in order to make changes, I have two hypervisors set up; that way I can take one down for maintenance and bring it back up. In practice, I usually keep one of them offline because of the power: they’re mostly Dell workstations with dual processors. They’re really great for virtualization, but they suck a ton of power—about 300 watts at idle.
I usually keep one of them offline but, because it’s a cluster, you have to have quorum, and you cannot have quorum with just one node in Proxmox—which means you cannot make configuration changes unless you have two devices online.
What you can do is run just the quorum software on the Pi instead of a whole hypervisor. Because it’s a separate, standalone device, I now have quorum no matter what else is going on, and I can make changes to my configuration. Those changes get replicated, so as soon as I turn the other machine on, it goes, “Oh, yeah, I’m out of date,” sees the quorum, and updates itself correctly.
I also use this to manage all of my UPSs. I’ve got a bunch of uninterruptible power supplies in my theater space that run my home audio, my projector, and some of the servers; those all report in here, and this guy knows how to manage the power for all of them as the different UPSs come online. If my house loses power, it has a priority list of what equipment to shut down. It immediately relays, via a UPS-over-network process, to the individual devices that they are on battery and need to shut down right away—safely, because it’s a controlled shutdown. Other devices stay up longer but, as the UPSs reach certain thresholds, it kicks those devices offline too. Eventually, this Pi is the last thing running, and it dies when the power finally goes. Because of the nature of the Raspberry Pi, it’s super-resilient against sudden power loss.
When power is restored, the Pi comes back up, waits for the UPSs to cross their thresholds in sequence, and then initiates wake-on-LAN to bring the different servers and devices back online. So, even if I’m away from home, everything shuts down cleanly and then comes back cleanly.
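Here is a rough Python sketch of that staged shutdown and wake-up, assuming Network UPS Tools’ upsc client (the talk doesn’t name which UPS-over-network software Nick uses). The host names, MAC addresses, and thresholds are invented for the example.

    # Sketch of the staged UPS shutdown/wake-up. Assumes Network UPS Tools'
    # `upsc` client is installed; host names, MAC addresses, and charge
    # thresholds below are invented for the example.
    import socket
    import subprocess

    PRIORITY = [                      # shut down when charge drops below the threshold
        ("media-server", "aa:bb:cc:dd:ee:01", 75),
        ("nas",          "aa:bb:cc:dd:ee:02", 50),
        ("core-switch",  "aa:bb:cc:dd:ee:03", 25),
    ]

    def battery_charge(ups="ups@localhost"):
        # `upsc <ups> battery.charge` prints the remaining charge as a percentage.
        out = subprocess.run(["upsc", ups, "battery.charge"],
                             capture_output=True, text=True, check=True)
        return int(out.stdout.strip())

    def shutdown(host):
        # Controlled shutdown over SSH (keys already shared with each device).
        subprocess.run(["ssh", "root@" + host, "poweroff"])

    def wake(mac):
        # Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times.
        payload = bytes.fromhex("ff" * 6 + mac.replace(":", "") * 16)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, ("255.255.255.255", 9))

    # On battery: walk the priority list and power things off as charge falls.
    charge = battery_charge()
    for host, mac, threshold in PRIORITY:
        if charge < threshold:
            shutdown(host)
    # When power returns, the same list is walked in reverse, calling wake(mac).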
Comment: You must have an amazing home theater space …
Comment: I want to know when you have time to watch your TV Projector …
Nick explained that he maintains a wiki to journal all his ideas about fantasy worlds.
He also has a full backup system, a notification system, and Salt (a configuration-management tool), which abstracts away the details of managing his machines.
E Ink has just introduced a proper color eInk device: a 32-inch screen with a slow refresh time, but it will work for eBooks. Looks like a book. About $5,000. Then he could have actual pictures on display. A discussion of the features and benefits of eInk followed, transitioning to smart mirrors and wall screens.
Question: Is the Arduino a competitor?
Nick: No. It’s kind of complementary. You can’t run Linux on an Arduino; the Arduino is a microcontroller. Where the Arduino competes with the Raspberry Pi is the GPIO portion. In fact, the Arduino has a much richer collection of add-on boards—called shields, the Arduino equivalent of the Pi’s HATs (Hardware Attached on Top)—which are little plug-in modules. So you can buy the X shield, Y shield, or Z shield, plug it in, and do your project. The Arduino is a bit like an FPGA (Field-Programmable Gate Array) in how you use it: you write the code, compile it onto the Arduino, and then the Arduino is a single-purpose computer until you go back to the drawing board, write another bunch of code, compile that new code onto the Arduino, and run it.
There are a lot of other companies that produce this form factor of a full-fledged computer. Orange Pi is one. They do the exact same thing a Raspberry Pi does, except with more memory, a faster processor, and a full USB 3 bus. One of the problems with the Pi is that, if you want to move a lot of data, all of the communication goes over the USB 2 bus: this one only has a 10/100 port, and although the new ones have gigabit, you’re still limited to roughly 300 Mbps because the port hangs off the USB bus. Still, nobody competes with the Raspberry Pi in that budget space. It runs Windows 10 IoT, which is like Windows Embedded with a nicer interface. What’s super-cool about this is: if the application you want to build works in Windows 10 IoT, there is no easier way to get started. You go to the Microsoft Store and download the Windows 10 IoT tool; it prompts you to stick in an SD card and then—boom!—it’s installed. You put the card in the Pi and it immediately connects to your computer and communicates over the network; very user-friendly. But you’re limited to one thing at a time, and it has to be one of whatever is in the Microsoft silo, of which there are currently a limited number.
Question: Have you seen the rack mount? You can put 100 of these in a rack.
Nick: Turns out these are almost exactly 1U (one rack unit—the standard server height increment, 1.75 inches) tall, so you can stack them all across and deep.
Then there’s the Raspberry Pi Compute Module platform, which costs about $100 and is about half the size of a piece of paper. It has a bunch of slots for which you can buy Raspberry Pi Compute Modules: memory modules, processing modules, and I/O FPGA processing modules. Where those first became interesting was during the Bitcoin mining craze. You could take a Compute Module board, put in the FPGAs, program them with the specific Bitcoin hashing logic, put on a big old heat sink, overclock the heck out of it, and this little compute thing, which cost you about two or three hundred dollars overall, was comparable to—actually way better than—a $1,000 GPU. Once people figured that out, they figured out that a custom-designed ASIC (Application-Specific Integrated Circuit) was way faster still. Today, the fastest Bitcoin mining rigs of yesteryear look like dinosaurs compared to what the ASICs put out; they’re custom-designed for the task.
End of the Tech Talk … but wait! There’s more!
Nick shared that he automated his pumpkins this year: self-lighting. He didn’t want to have to go out and light the candles in his pumpkins, so he made little circuits with an Arduino connected to a mix of red and yellow LEDs, turning them on and off to look like a flame. It runs off four AA batteries in a little battery holder, with the chip clocked down in a low-power mode, so it barely uses any power, and because the setup has a time-keeping chip, he can have it shut down anytime he wants. He set it to run from half an hour before sunset until 3:00 a.m.
Question: How much are Arduinos?
Nick: A ripoff; about $20—you might as well buy a Raspberry Pi. If you buy Arduino components from AliExpress, they’re about $5.00, or, if you order enough of them, a dollar each including shipping … you just have to wait 45 days for them to show up. Nick has a whole closet of Arduino components after going crazy one year.
With the whole automation thing, he recently went through and re-did all his light switches. For the most part, everything’s communicating. He’s now working on routines that will let his home do his thinking for him, so he doesn’t have to touch a switch; sense when he enters a room and turn on the lighting; sense when he turns on the shower and adjust the fan to keep the humidity low. To do all this, he needs a ton of sensors. Right now, he’s at the point where he can automate the things but he has no idea what he should be doing because his system doesn’t know about him or his house.
The Arduino will be the key to automating his home. There are a lot of sensor chips available: humidity, thermal, temperature, vibration … there’s a kit you can get that contains 105 sensors.
That’s part of the reason he uses Proxmox. He used to use XenServer, which was Citrix, but XenServer is Xen-based and it did not pass through hardware … well, it does now … you can pass through a PCI device … oh, yeah, that’s what it was: he couldn’t pass through a USB stick device. He would have had to purchase a PCI controller … and on and on.
That’s a very common application for the Raspberry Pi. People run a system called Home Assistant; there’s a full OS image (“Hass”)—Raspbian with all the pieces installed plus some helper functions—so it’s really easy to get the system going. All you need is a Z-Wave device and Home Assistant can manage it all and pull it together. And you can use Python to run your own stuff.
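As an example of the “run your own stuff in Python” part, here is a small script in the style of Home Assistant’s python_script integration: it turns on a room’s light when that room’s motion sensor is active. The file name and entity IDs are placeholders; hass and data are objects the integration provides.

    # A small script in the style of Home Assistant's python_script integration
    # (saved as, e.g., python_scripts/motion_light.py -- the name is made up).
    # `hass` and `data` are provided by the integration; entity IDs are placeholders.
    room = data.get("room", "living_room")
    motion = hass.states.get("binary_sensor." + room + "_motion")

    if motion is not None and motion.state == "on":
        hass.services.call("light", "turn_on", {
            "entity_id": "light." + room,
            "brightness_pct": 60,
        })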
Occupancy detection is really hard—adjusting the lighting based on the number of people in a room. He’s been working on building a smart, probabilistic occupancy algorithm, and he wants to create a data set to train his model.
Suggestion: Use TensorFlow to train it. AWS has a training system in their cloud services.
The model Nick is building is simple enough; the challenge is aggregating the data to train it.
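To show the shape of a probabilistic occupancy model, here is a toy Bayesian update in Python: start from a prior and fold in each sensor observation. The likelihood numbers are invented for illustration—figuring out the real ones is exactly what the training data would be for.

    # Toy probabilistic-occupancy update: start with a prior and fold in each
    # sensor observation with Bayes' rule. The likelihood numbers are invented
    # for illustration; finding the real ones is what the training data is for.
    def update(prior, p_obs_if_occupied, p_obs_if_empty):
        """Return P(occupied | observation)."""
        num = p_obs_if_occupied * prior
        den = num + p_obs_if_empty * (1.0 - prior)
        return num / den

    p = 0.30                      # prior: the room is usually empty
    p = update(p, 0.90, 0.05)     # PIR sensor fired
    p = update(p, 0.60, 0.30)     # thermal pixels slightly above ambient
    print("P(occupied) = %.2f" % p)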
Comment: That’s one of the problems with machine learning, especially neural nets; it takes hundreds of thousands of iterations before you get a usable outcome. And even then it will only work in narrow circumstances, because it matches how it was trained. So the more common sense you can code into the process, the more you can shortcut that.
Discussion: Capturing training data can be a real challenge. Example: a store captures training data from January through June, which is then out of date during July through December. Things change. Behaviors change. Models overfit to stale data. Cyclical periods are not constant. Weather throws things off.
Applications where people are using the Raspberry Pi:
- Robotics.
- Display Boards
- PLC Controllers
- Water Controllers; big Motorola controllers
- Automating arcade machines (from Walmart)
The new Raspberry Pis are quad-core at 1.4 GHz. You could easily emulate an N64.
MAME: Multiple Arcade Machine Emulator
The arcade machines are going for about $299.
General discussion among hardware geeks ensued.
It’s like an unmanaged x86. But, yeah, a lot of the controls are gone. But you can simply plug those in and, basically, you have an x86 processor. Yeah, the 3 B.
Some of this stuff was driven by the Bitcoin craze; it artificially drove up prices because the processors were being produced to meet the demand for Bitcoin mining. Now that Bitcoin has settled (it lost 50% of its value in the past few months), that demand has evaporated.
ASIC manufacturing costs are as cheap as they’ve ever been. Suddenly there was this huge demand for building custom chips, so fabs (fabrication plants) all over the world re-tooled to do custom chip work to meet that demand. As the price of Bitcoin falls off, you’re going to see it become very inexpensive for companies with very specialized compute needs to go out and buy that capability. And you’re seeing Apple, Nvidia, and Google get into the chip-design space. Actually, Apple is a bad example … no, that’s the rumor; they’re getting back into designing their own chips. Well, they have been designing their own chips for the iPhone; now they’re moving away from Intel chips and going back to proprietary designs.
RISC Processors are going to be competitive with x86.
Nick: That’s really fascinating; I’m really interested to see where that goes.
The reason x86 was so good is that x86 is really good at task-switching; it’s about shallow pipelines with a rich, complicated instruction set, so a missed branch prediction is not that costly. In a modern computer, where you’ve got all sorts of activities going on in parallel, simultaneously, keeping that pipeline depth short lets you really take advantage of that. To go back to the drawing board and try to get to where x86 is today seems ridiculous; it would take a long time and I don’t know whether you would ever catch up with x86 processors.
But where ARM and Qualcomm are going with this—which I think is a very different, but arguably better, long-term strategy—is they’re saying: “Look, if we optimize our software, and we have lots and lots of cores, then we don’t have to task-switch on any one core, and we can avoid missed or bad branch predictions because we optimize for that at the compile stage instead of dealing with it at runtime.” So now you have a really different way of approaching this.
Comment: It’s a whole lot easier to update software than hardware.
Comment: Oh, yes.
Nick: So it will be really interesting to see where that goes, because you can pack so many cores onto these tiny, tiny dies.
Comment: That’s been the big thing for 20 years now; that was really why it all got started—being able to do stuff in software instead of hardware. So I’m surprised it’s not more optimized to take advantage of that kind of application.
Comment: That depends where you’re at. Most of the really big servers are still RISC-based. Like IBM.
Nick: Our server is RISC-based.
Comment: Cray is a RISC-based system.
Nick: So there’s definitely a practical application to that, still, to this day. If it was bad, it would have gone away. It’s obviously not bad. It’s simply never found its footing in the general computing market. But smart phones really changed that because smart phones did very few things; you wanted those few things to be done well and quickly. So a small chip that could run very fast is what you were looking for. You didn’t want the complexity, the heat, the power consumption of a full x86 processor in your smart phone because it would kill the battery and make your phone hot. And it was an unnecessary complexity.
But today, we’re able to compile software that’s better suited to that, we’re able to build better chips, and that power thing is huge. Power is everything now—especially with mobile phones, but even with servers. I get advertisements every day from Dell, HP, whoever, telling me how power-efficient their servers are. They’re not advertising how fast their servers are, they’re not advertising how many I can pack into a rack—they’re advertising how power-efficient their processors are. Their setups optimize everything for the lowest power draw, and if they save me 5 more watts than their competitor does, it adds up to so many thousands of dollars per year.
Comment: I think virtualization delivered bigger savings than anything else, even more than power-efficiency gains. We no longer have physical servers for every job; we have hosts.
Comment: The servers are still there but you’re making multiple use of them.
Comment: They’re more efficient now because, instead of having individual servers handle specific tasks—30 or 40 servers sitting there—they can all run on one host. But back then, that was the way you had to build them.
Comment: I wonder if there’s any room for a comeback for Transmeta [very interesting reading…]. It was a cool idea that never really panned out. Transmeta built a CPU that is mutable; in other words, it can emulate other CPUs. I can’t remember how they do it—they must be running microcode inside the hardware. I don’t know what their base CPU is, yet it could act like a RISC or an x86 or whatever.
Comment: That would be handy, because there are times when it would be nice to virtualize some of the old mainframes and get at proprietary stuff running on whatever.
Comment: I’m sure they have to do an implementation for every single CPU they support; that can’t be cheap.
The discussions ended.
Thanks to Nick for a superb presentation … sort of like trying to get a drink of water from a fire hose … about the Raspberry Pi and the vast variety of applications he has created!
Author: Karen
Written: 12/15/18
Published: 1/8/19
Copyright © 2019, FPP, LLC. All rights reserved.