How the libuavcan project, a C++ implementation of the UAVCAN protocol, is using Buildkite, Raspberry Pi, and some delicious Python custard to provide hardware-in-the-loop test automation for pull requests and CI builds.
Minty Punk Console
Being the digital creature I am by day, I wanted to play around with something purely analog at night. Some Googling surfaced a fun toy, the "Hello World" of analog synthesizer projects: the Atari Punk Console.
Quickly breadboarding this little gem I decided to see how portable I could make the actual build. As such I give you the…
About the Design
I always have Adafruit Perma-Proto boards for Altoids Smalls on hand so I decided to use this form factor. Next I sourced the knobs and pots by feel rather than by brand (I found some $2 Philmore pots at Fry's that actually felt better than much more expensive parts I bought on Digikey). Given the space constraints I went with all SMT passives and ICs, sandwiching the SOIC adapters directly onto the perf board. Finally, after a fruitless search for a 3.7 to 9v DC/DC power module that could source 500mA and fit on half a minty board, I was forced to go with a more complex design. Using TI's surprisingly useful WEBENCH Power Designer I generated an integrated design built around the LM2735 1.6MHz DC-DC regulator. With this rather expensive part ($2.80 on Digikey, actually the most expensive single part on the MPC) I was finally able to supply enough voltage and current from a 3.7v, 350mAh lipo battery to drive a speaker connected to the LM386 amplifier I had on hand.
Here's the full portable, and rechargeable, result (courtesy of my son DJ Huck):
Details
Power
- 3.7v, 350mAh lipo battery from Adafruit.
- 9v rail using a boost switching supply based on the TI LM2735 1.6MHz DC-DC regulator.
- 3.3v rail using a linear regulator: the Microchip MCP1700.
- Microchip's MCP73833 lipo charge management controller to charge the battery from a USB Mini B connector.
The big lesson from this part of the build was the awesomeness of the MCP73833. It's completely standalone (except for a couple of resistors to program the output) and the datasheet describes a circuit that should be useful for just about any portable maker project using lipo batteries. You can buy a breakout for this chip from Adafruit but the chip is so simple that it's hardly worth the space taken up by the extra PCB.
Audio
- TI's two-555s-in-one IC, the LM556, in a 14-SOIC package.
- TI LM386 Audio amplifier.
- The handsfree speaker from a dissected BlackBerry Curve (8300).
- 2 Philmore 500k linear taper potentiometers.
I made two improvements to the APC design when stuffing it into the mint tin. The first is to use the LM386 both to drive a speaker and to provide the proper output bias given a ground reference input and a single voltage supply. The original APC design simply used a serial capacitor on the output which makes the voltage offset frequency dependent. Using a properly biased amp means this design's output response should be far flatter than the naive implementation.
The second improvement was breaking out the two 555 timers' outputs into a mini "patch bay" on the front. This adds some educational utility to the toy by allowing scopes, inserted at each stage, to show how the two square waves interfere to produce the MPC's shrill tones.
Materials
APC Values for a 3.3v Supply
For the MPC I decided to run the 555 timers using 3.3 volts. The Wikipedia design is based on a 9v supply so I had to recalculate the passives:
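For anyone re-deriving values like these, the selection is driven by the standard 555 astable timing relation; here's a quick sketch of the math with purely hypothetical component values (not the ones on the MPC board):

# standard 555 astable frequency: f = 1.44 / ((R1 + 2*R2) * C)
R1=1000          # ohms
R2=500000        # ohms (a 500k pot at full resistance)
C=0.00000001     # farads (a 0.01uF timing capacitor)
awk -v r1="$R1" -v r2="$R2" -v c="$C" \
    'BEGIN { printf "f = %.1f Hz\n", 1.44 / ((r1 + 2 * r2) * c) }'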
3.7 to 9v Power Supply
Part | Manufacturer | Part Number | Quantity | Notes |
---|---|---|---|---|
Cf | Yageo America | CC0805KRX7R9BB821 | 1 | |
Cin | MuRata | GRM188R60J106ME47D | 1 | |
Cout | MuRata | GRM21BR61C106KE15L | 1 | |
D1 | Diodes Inc. | B220-13-F | 1 | VF at Io: 0.5V, Io: 2A, VRRM: 20V |
L1 | Bourns | SRN6045-6R8Y | 1 | L: 6.8uH, DCR: 0.047Ohm, IDC: 2.8A |
Renable | Vishay-Dale | CRCW040210K0FKED | 1 | Resistance: 10kOhm, Tolerance: 1%, Power: 0.063W |
Rfbb | Vishay-Dale | CRCW040210K0FKED | 1 | Resistance: 10kOhm, Tolerance: 1%, Power: 0.063W |
Rfbt | Vishay-Dale | CRCW040261K9FKED | 1 | Resistance: 61.9kOhm, Tolerance: 1%, Power: 0.063W |
U1 | Texas Instruments | LM2735XMF/NOPB | 1 | |
"MCP", "Minty Punk Console", and all designs licenced CC 4.0 Attribution
Eclipse CDT Indexing With Makefiles
The Eclipse CDT (C/C++ Development Tooling) has been around for a while. While it has some usefulness as a generic IDE, I prefer it because it works with external makefiles and cross-compilation toolchains. This means I don't need to maintain two separate project structures, and anything I do on the command line is the same as what the CDT does when using built-in commands. Also, it works on OS X where most vendor-supplied IDEs only support Windows.
While powerful, the Eclipse CDT isn't for the faint of heart. It most definitely doesn't work without some manual configuration and can be quite frustrating when it doesn't do what you expect. This blog post will take you through one particular use case: setting up the Eclipse CDT to build and index native source using only an existing set of Makefiles and an external toolchain. Specifically, we'll build the blinky example in the Nordic Semiconductor nRF5 SDK using the GNU ARM Embedded Toolchain (arm-none-eabi).
Prerequisites
Mostly this post is about getting setup with the CDT indexer so it'll be useful across a wide range of SDKs and toolchains. If you want to follow along though here's what you'll need:
- Eclipse CDT – I'm using the Neon release 9.0.0 but these steps should apply to 8.8.1/Mars as well
- nRF5-SDK – If you want to actually run the code you'll need to buy one of the evaluation boards supported by their SDK but that's not germane to this blog post. You only need to download this free SDK to try the techniques we'll discuss.
- GNU Arm Embedded Toolchain – I think you can get this through brew on OS X; otherwise you'll have to download it from launchpad.net or build it from source. On Ubuntu, try apt-get. Given ARM's dominance these days it's not a hard toolchain to locate.
Setup
- Install the Eclipse CDT.
- Unpack the nRF5 SDK.
- Open an Eclipse workspace.
A brief digression: I always use this bash script to launch eclipse. It starts an Eclipse instance with the workspace set as the directory the shell script is located within and uses JAVA_HOME to find a jre (if set).
#!/usr/bin/env bash
# From http://stackoverflow.com/questions/59895/can-a-bash-script-tell-what-directory-its-stored-in
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
  DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
  SOURCE="$(readlink "$SOURCE")"
  [[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"

if [ -z ${ENV_ECLIPSE_PATH+x} ]; then
  if [[ "$OSTYPE" == "darwin"* ]]; then
    ENV_ECLIPSE_PATH=/Applications/Eclipse.app/Contents/MacOS/eclipse
  else
    ENV_ECLIPSE_PATH=eclipse
  fi
fi

echo "Opening eclipse workspace at ${DIR}"
echo "(${ENV_ECLIPSE_PATH})"

if [ -z ${JAVA_HOME+x} ]; then
  JAVA_VM_ARG=
else
  JAVA_VM_ARG="-vm ${JAVA_HOME}/jre/bin"
fi

${ENV_ECLIPSE_PATH} -data ${DIR} ${JAVA_VM_ARG} &> /dev/null &
I'll start my example by opening a new workspace directly under the nRF5 SDK root folder.
Blinky
Now start a new project in the workspace you just opened: File > New > New Makefile Project with Existing Code
We'll create the project right in the blinky directory. In the nRF5 SDK this is found under examples/peripheral/blinky.
We select the Cross GCC toolchain here because we're going to modify it in a later step but note that we don't actually build using this toolchain.
Next open the project properties (right click on the project or cmd+i) and then C/C++ Build. Here you'll see the Build Command the CDT uses when building. For many projects you'll need to tweak this command. For the nRF5 SDK we need two additional parameters:
make GNU_INSTALL_ROOT=/usr/local/ VERBOSE=1
- GNU_INSTALL_ROOT=[path to your gcc] – This is needed to provide the makefile with the base path for the GCC ARM Embedded toolchain. On my system this is /usr/local.
- VERBOSE=1 – This one is really important. Many makefiles suppress compiler output for aesthetic reasons but in the next step we're going to tell the indexer to read this output. Luckily the Nordic makefiles provide this VERBOSE override. Some other makefiles will have to be manually modified to emit the gcc commands. This usually involves removing prepended @ directives in object rules.
- You also need to change the build directory to the location of the Makefile. For the Nordic SDK this is [board]/blank/armgcc under the blinky project, where [board] is the name of one of the Nordic evaluation boards (use pca10040 if you just want to build something and don't have a board). The equivalent command-line invocation is sketched just after this list.
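For reference, here's roughly what Eclipse ends up running with those settings, and it's worth trying it in a terminal first to make sure the makefile itself is happy (this assumes the pca10040 board and a toolchain under /usr/local; the exact path may vary between nRF5 SDK versions):

# run from the nRF5 SDK root (the Eclipse workspace root in this setup)
cd examples/peripheral/blinky/pca10040/blank/armgcc
make GNU_INSTALL_ROOT=/usr/local/ VERBOSE=1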
Now when you build (cmd+B) you should see a bunch of scrollback in the console window and the blinky example should build. You'll see a binary size report at the end of the console if the build succeeded.
Indexer
If you open main.c in our blinky project you'll see a sad state of affairs: lots of little beetles and red squiggly lines. If the build succeeded then why are there so many errors in the source? The CDT build and indexer are independent systems and it's the indexer that reports these errors. To get the indexer to use our external makefiles and toolchain we need to modify two "providers": the CDT GCC Build Output Parser and the CDT Cross GCC Built-in Compiler Settings. To get to the list of providers open the project properties (right click or cmd+i) and go to C/C++ General > Preprocessor Include Paths > Providers.
GCC Build Output Parser
This little gem reads the console output from your build and parses any -I or -D directives it finds. The trick is it only looks for arguments to known compiler aliases. If you are using a compiler like avr-gcc or, as in this example, arm-none-eabi-gcc, you'll need to modify the default pattern appropriately. It takes a regular expression so you can get fancy with this if you like. For this example I've just appended an alternative for arm-none-eabi-gcc to the default compiler pattern: |(arm-none-eabi-gcc)
CDT Cross GCC Built-in Compiler Settings
The next provider we are going to use is a bit of a hack but it seems to work brilliantly. The Built-in Compiler provider is supposed to provide information about system headers and defines for the Cross GCC toolchain. We're not really using this per se, but we can aim it at our compiler to provide the indexer with the same information. Again I've just hard-coded in arm-none-eabi-gcc but you can do this with whatever compiler you are using. Just be sure to supply the right parameters to get it to dump its system include paths and defines to standard out. Open a console and try this manually first if you aren't sure what the options are. If you are still unsure whether this is working, check the "Allocate console in the Console View" box for this provider. When the indexer runs you should see all the system paths and -D defines dumped to one of your CDT consoles.
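If you're not sure which options do that, the flags below are what I believe the provider passes by default; running the command by hand (treat the exact invocation as a sketch) shows you what the indexer will pick up:

# -E: preprocess only, -P: no line markers, -v: print the include search paths,
# -dD: dump the compiler's built-in #defines; the empty stdin stands in for a source file
arm-none-eabi-gcc -E -P -v -dD - < /dev/null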
Unicorns and Rainbows
So that's it. You should now have fully indexed source for this blinky example. If you don't you can try coercing the indexer by right clicking on the project and selecting Index > Rebuild. Now you can simply cmd+click on any type, define, or include to hyperlink right into the source. You should also see the elf output for this example in your Project Explorer pane. This expands to reveal all the source files that went into building it. With more work you can get the Eclipse CDT to debug, write hex images to boards, manage your SCM (i.e. git), etc. All this and it's all open-source, free, and cross-platform.
Working With the AOSP Framework
What is "the Android Framework"? In short; it is the layer of Android that defines APIs, services, and environments for Android applications. For a better explanation you should see AOSP's description.
Note that this post starts with the assumption that you already have a working Android build, have sourced build/envsetup.sh, and have run lunch. The best place to get help with this is the AOSP Downloading and Building documentation.
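For anyone who needs the reminder, that setup boils down to something like this from the root of an AOSP checkout (the lunch combo shown is only an example; pick the one that matches your device):

# configure the build environment for this shell
source build/envsetup.sh
# choose a build target; aosp_arm-eng is just an example combo
lunch aosp_arm-eng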
Iterating Within the Java Framework
In a Nutshell
#!/usr/bin/env bash
# source this script. For example:
# > . iterate.sh

# build the framework module
mmm frameworks/base || return

# ensure adb is running as root
adb root;

# make the system partition on the device writeable
adb remount;

# synchronize files between the local system folder under
# out/target/product/<device>/system and the device's system partition
adb sync;

# restart the framework
adb shell stop;
adb shell start;
The Android framework module is built by the frameworks/base/Android.mk makefile. In AOSP mm builds the module for the current directory and mmm <dir> builds the module in <dir> (or in a list of dirs).
ProTip: elinux.org's wiki is a good cheatsheet
You'll need a rooted device to iterate on the framework without having to flash the entire system each time. This is because the adb sync command is copying just the parts of the framework you have rebuilt to the device's /system partition. For the Java framework this will be out/target/product/<device name>/system/framework/framework.jar.
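A quick sanity check after adb sync is to compare the framework jar on the device against the one in the out tree (substitute your target's product name for <device name>, as above):

# the on-device jar's timestamp should match your freshly built local copy
adb shell ls -l /system/framework/framework.jar
ls -l out/target/product/<device name>/system/framework/framework.jar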
Alternative System Build
It's worth mentioning that there's an alternative way to get a system image which contains a java framework onto a device. Once you've run the mmm part of the build and have the build intermediates you want to get onto a device you can run
make snod
which stands for "Make System with No Dependencies". This simply packages up whatever is in the out folder into a new system image. You can then reboot into the bootloader (`adb reboot bootloader`) and use `fastboot flash system` to install the entire system image onto a connected device.
Careful here though: if this is the first time you are flashing a device from your local build you'll want to do a full build and flash, or your device may get stuck in the infamous infinite optimize-apps-and-reboot loop. See the Android Open Source Project's docs for full details on building AOSP and get past the point where you run:
fastboot flashall
before starting to use make snod or adb sync.
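Putting it together, once that initial full flash is done the snod workflow looks roughly like this (treat it as a sketch; the system image lands under out/target/product/<device>/ just like the files used by adb sync):

# rebuild just the framework module and repackage the system image
mmm frameworks/base
make snod
# reboot into the bootloader and flash only the system partition
adb reboot bootloader
fastboot flash system out/target/product/<device>/system.img
fastboot reboot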
Being System
The signature and signatureOrSystem protection levels become more important when developing for the Android application framework. That said, don't define signatureOrSystem permissions unless you understand the ramifications doing so has for your Android distribution (I'll have to write a follow-up post some day about this). For the signature level, the framework itself is signed using a key that establishes a signature defining which packages belong to the system and which packages are from another source (i.e. "downloaded"). Because of this your package will have to have the same signature as the framework to obtain signature-level permissions defined by the system.
Under build/target/product/security you'll see several keys that sign various parts of the system. We're mostly concerned with the platform key when working with the application framework.
To create an application keystore with the same signature as the application framework build you'll need to create a Java keystore that contains the platform key and certificate found in the build. To do this you'll need two things: the JDK's keytool executable and openssl. Once you have these on your path, a working AOSP build, and an Android application development environment, you can create a new debug keystore like so:
openssl pkcs8 \
  -in build/target/product/security/platform.pk8 \
  -inform DER \
  -nocrypt \
  -out platform.pem;

openssl pkcs12 -export \
  -in build/target/product/security/platform.x509.pem \
  -out platform.p12 \
  -inkey platform.pem \
  -password pass:android \
  -name androiddebugkey;

keytool -importkeystore \
  -deststorepass android \
  -destkeystore platform.jks \
  -srckeystore ./platform.p12 \
  -srcstoretype PKCS12 \
  -srcstorepass android;
You can check that this is a valid keystore using keytool:
keytool -list -keystore platform.jks
Enter keystore password:
> android
Now all you have to do is either point your app build at this keystore or replace your randomly generated debug keystore with it:
mv platform.jks ~/.android/debug.keystore
If you've already installed an application package and get an Application Installation Failed, INSTALL_FAILED_UPDATE_INCOMPATIBLE error, this is because the signature has changed. Uninstall the package and you should be able to install the version with the new signature without further errors.
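For example (the package name here is hypothetical):

# remove the old package so the newly signed build installs cleanly
adb uninstall com.example.myapp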
Free Rein
WARNING! This hack disables critical security measures in Android. Don't install any real user accounts on a development device running this hack.
Sometimes you want to test integration with apps that are production-signed and that limit access to certain activities by signature. If production-signing your development app is too much of a burden you can just disable the signature check. In Lollipop this was found in ActivityStackSupervisor.java:1430. By commenting out the real security check and always reporting PERMISSION_GRANTED you can start any Activity you want on your hacked system regardless of signatures:
- final int startAnyPerm = mService.checkPermission(
-         START_ANY_ACTIVITY, callingPid, callingUid);
- final int componentPerm = mService.checkComponentPermission(aInfo.permission, callingPid,
-         callingUid, aInfo.applicationInfo.uid, aInfo.exported);
+ final int startAnyPerm = PERMISSION_GRANTED;
+ //mService.checkPermission(
+ //        START_ANY_ACTIVITY, callingPid, callingUid);
+ final int componentPerm = PERMISSION_GRANTED;
+ //mService.checkComponentPermission(aInfo.permission, callingPid,
+ //        callingUid, aInfo.applicationInfo.uid, aInfo.exported);
Tools
Here are some resources that you'll find useful while hacking AOSP:
AndroidXREF – Online OpenGrok index of the full AOSP codebase organized by release branch. This resource is invaluable for developing AOSP and applications. Frankly, the existence of this site is one of the key benefits of Android over iOS for developers. There's no scouring docs and trying to reverse-engineer strange behaviours. You just go to the source and you know exactly what the problem is.
App Detective – A useful app for enumerating all the activities, services, providers, permissions, etc available on a system. This can help with building rich integrations with other popular apps.
The Amazon Experience
Last year I left Amazon after being there for 7 years. When I read the recent (August 15th 2015) New York Times article on Amazon's corporate culture it set off a period of reflection on my time there. Because I learned so much from the talented people I worked with I find it important to revisit those lessons with perspective and objectivity. This blog post is my own personal retrospective of my time at Amazon and a retroactive application of "vocally self critical" if my former colleagues will allow me the conceit.
The Bad Year
I received one bad review in my 7 years at Amazon. It happened that this review period included the time my wife was in the hospital for over two months of bed rest while carrying our twins and the time afterwards when our children were in the NICU after being born fairly small. I, obviously, was juggling a lot between work and family at that point. Was it fair that my job performance was rated without taking that into account? I'm not sure. I admit I didn't do my best work and in a vacuum the review was fair. But the review didn't feel like an isolated analysis of my engineering output. It felt like a critique of my overall ability and ongoing value as an employee. It felt like a reproach for letting my personal life affect my work.
At the same time a colleague under the same manager was going through a divorce. He was "managed out". He was smarter than me and more qualified but he wasn't working as many hours. That situation demonstrated an extreme lack of empathy from Amazon management. To be fair, this all happened over 5 years ago and the groups I worked for in subsequent years were a real joy. Still, over the years there was a steady stream of people being placed on "performance improvement plans" (which always ended in termination) that seemed to be based more on a need to fill some hidden quota than on actual problems.
Amazon is a big company and I was there for a (relatively) long time. I could provide a long list of points on both sides of the good and evil scale. In all I don't think Amazon is especially "evil" compared to any other American firm but it does have a lot of areas it needs to improve on if it wants to hire and retain experienced talent.
The Good Years
During my time at Amazon I did work for many managers that I would rate as the best of the best. These were people who were empathetic and many had families of their own, yes; but they also were managers who used data, intellect, and wisdom to amplify the effect of dozens of people all working together towards the same end. My experience in these good times proved to me that the principles developed and adopted by Amazon can be interpreted in a way that makes an excellent and effective work culture.
My good years at Amazon were spent building things people wanted, that weren't completely frivolous, while working for smart people with high but reasonable expectations. These good managers had a keen understanding of what was happening "on the ground" even as they were translating more detached directives from upper management. One of my current co-workers, John Ikeda, described this as being the "master sergeant manager." The highest ranking NCO, so-to-speak, that protects their platoon from the shit they take from the officers. Based on my considerable military experience with "Saving Private Ryan" and "Call of Duty" I find it an apt analogy.
What I learned from watching these managers ("You're one of the good ones." "Some of my best friends are managers.") is to use Data as a way to avoid micromanagement rather than the inverse. Good managers watch the data flow and get to know it like the feel of an engine through the deck of a ship. They are able to stand back when the data "feels" right and only interfere when they feel a strange vibration coming up from the data beneath their feet.
Customer Obsession
Thinking about where groups within Amazon were good and where they were bad I find one key difference in how the group interpreted Amazon's "god principle"; namely "customer obsession." Laser focus on the customer in all aspects and at all levels of the business is one of, if not the, key(s) to Amazon's success. Interpreted by reasonable people when optimizing decisions and making hard calls this principle can reduce the amount of vanity and self-indulgent bull-shit a company engages in. Where it goes wrong is when this principle is used as an excuse for making people work long or unreasonable hours. The argument goes "customers want to be able to buy widgets at 1am and our widget service is down! You must stop everything else you are doing and fix this now!" Amazon needs to understand the inalienable '0' principle: "We don't love our customers more than our families. We don't love our customers more than ourselves." I don't think any rational person would expect that any retailer be made up of individuals that love them more than life. Any organization that attempts to practice this sort of perverse customer worship is as much a cult as a business.
But this sort of religious zealousness wasn't the most common abuse of "customer obsession" I saw. More typically the principle was perverted as an excuse to take shortcuts. It became a rug under which all manner of poor engineering practices were swept. The reasoning goes like this, "I can have my engineers spend the next week fixing a problem that happens once a month and is easily fixed by a human with 5 minutes of effort or I can have them build a new feature our customers want." Perhaps in isolation this tradeoff seems reasonable but compounded the "once a month" problems begin to add up until the systems owned by the group need constant human intervention to keep them running. The problems snowball and engineers leave the group in frustration further compounding the problem for those that remain. This is a pattern that Amazon repeats time and time again in its rush to build new services and features (For a concrete example, read this article). Even during the last months of my employment I became aware of a team that is doing just this; building systems held together with bubble-gum and duct tape in a mad rush to meet unreasonable schedules. This team was clearly at the start of the cycle I'm describing here and all the "customer obsession" excuses were deployed in typical fashion to justify the situation.
In this operational death spiral the final insult comes when new employees, normally young college graduates, are used to keep the derelict systems running until they can be replaced. To do this managers use the fact that new hires are not allowed to move to another group for the first year after they are hired. Furthermore, if they do not get a good review they are not allowed to move until their performance improves. This means young developers are thrown into these grinders where they have to work with poorly designed infrastructure that often fails and is hard to fix. The engineers that built the software are gone and there is little documentation to be found. They are all but set up to fail, which means a bad review and yet another year enslaved by a pager.
These quagmires can only be avoided by better strategic thinking. Rather than design software with only the first release in mind managers and engineers must first consider what the evolution of the systems could be. The immediate counter argument I hear when I say this is "we wouldn't get anything done if we waited around until the future was clear." There seems to be this notion that agility and innovation require poor planning and short-sighted thought. Failed engineering organizations start like a chess player that thrusts out some random pawn without first devising a strategy. They are forever reactive and unable to respond to the demands placed upon them. In the face of uncertainty there has to be some evaluation of "possible moves" beyond the first one. A business, a service, a technology must be planned as both its first iteration and the possible changes that will come next. You don't need to know every move in the game but you do need a framework that supports your strategy for winning. This planning means preparing for success as much as failure. Time and time again I've seen groups struggle because they were successful and weren't prepared for the load that a robust user base placed on them. They weren't ready for the next move even though they were given a fleeting opportunity to play offense.
The Conclusion at the End
I'm not drawing any simple lessons with this post. There are some: that ambition, hard work, and fearlessness actually can't replace intelligence, planning, and craftsmanship when making things; that being kind is hard but valuable; that large companies are made up of individuals, each with their own strengths, weaknesses, and habits; but in the end the last seven years were more about raising my two kids Huck and Esme and making a life for myself here in Seattle with my wife Kathleen. Amazon was work. Good work, great pay, and really awesome health insurance, but in the end it was a job. If you are currently working at Amazon and it's more than just a job, that's when you need to stop and evaluate your priorities. That's when you can find reasons to work yourself into the ground and be miserable. Work hard, yes, but for god's sake, if you're not having fun you're doing it wrong.