A Comparison of Activity Trackers

— July 16, 2014 • #

The concept of activity tracking is getting ever closer to ubiquitous, with the prevalence of dozens of mobile apps, wearable wristbands, and other health monitoring tools like Bluetooth-enabled scales and exercise-based video games. Now the world’s largest tech company is even rumored to be working on some form of wearable hardware (and software APIs), at which point the whole concept of “life tracking” will reach full penetration. Everyone will be tracking and recording their lives like characters in cyberpunk literature.

I’m a casual runner and cyclist, and I started testing a handful of fitness tracker mobile apps to map my activity. Since I’m a stats and data junkie, I did some extensive testing with these four apps to size up each one’s technical capabilities, as well as the features each provides within its online social service:

There are dozens of other options for wearable hardware that tracks activity, location, and more, but I still think most of them are either too costly or not mature enough to invest my money in. I seriously debated buying a Fitbit or Up, but I’m glad I haven’t, given Apple’s potential push into that market.

Let’s run through the details of each and compare what they have to offer.


Each of these apps has its focus, but they all promise the same basic set of features (with the exception of Moves, which I’ll get to in a moment):

  1. Allow the user to log an activity of a specific type — running, walking, cycling, hiking, kayaking, skiing, etc.
  2. Calculate metrics about the activity including time, distance, map location (in the form of a GPS track), speed, pace, calories, elevation, etc.
  3. Share your activities with friends, and join a social network of other active people (including professional athletes)
  4. Compete against others in various ways
  5. Set goals and measure your progress toward said goals

Moves is a different style of app. It’s a persistent motion tracker that runs continuously in the background on your device, mostly for calculating steps and distance per day across all of your activity. There’s no need to open the app and record independent activities. I wanted to include Moves in the mix primarily for its deep data recording and mapping capabilities. I’ll revisit its data quality later on.

Mobile Apps

I’m an iPhone user, and iOS has matured to the point that serious, veteran app developers have ironed out most of the annoyances and kinks of basic app design. Most conventions around app UI have converged on a couple of well-known paradigms for structuring the interface. Both RunKeeper and Strava use the home-row tab layout, with a standard five-button row across the bottom. MapMyRun uses the sidebar/tray strategy to house its options, like most of Google’s iOS apps.

Activity trackers

The basic interfaces of all three of these apps are nice. RunKeeper and Strava are almost exactly level on features on the mobile side. They both have a basic social presence or feed of your friends’ activity, activity type selectors, and big “Start” buttons to get going with minimal fiddling. MMR’s look is a little cluttered for me, but it does include other functions on the mobile side like weight entry and nutrition logging.

All of them support configurable audio announcements of progress during an activity. A voice will chime in while you’re running to give you reports on your current distance, pace, and time since the start. Each also can be paired up via Bluetooth with an array of external sensors like heart rate monitors, bike speedometers, and others. Strava even has a nice capability to visualize your heart rate metrics throughout the course of your activities if you use a monitor.


In my testing, the reliability and consistency of all of these apps have come a long way since the early days of the App Store, back to the iPhone 3G and the first devices with GPS. The only one of the group that I’ve been using that long (since 2009) is RunKeeper, and its reliability now is in another class than it was back then. Since iOS introduced multitasking, apps keep tracking silently in the background when you switch between apps mid-activity. I tested tracking with all three simultaneously without any issues.

During a couple of my test runs, Strava inexplicably stopped my activity, though it didn’t hard crash. When I switched back to the app, the current activity was paused mid-way, an annoying bug to encounter when you can’t easily recreate your activity. RunKeeper still seems the most reliable option all around, in both the mobile app’s dependability and its syncing with the cloud service. Multiple times I had trouble getting an activity to save and sync properly on Strava and MapMyRun, though usually it was just a delay in syncing; nothing was lost apart from the paused activities and a couple of app crashes.


All three of these apps function as clients for their associated web services, not just standalone applications. They’re not much different; each of them shows a feed of activity and a way to browse your (and your friends’) activity details. Stacking up your accomplishments against your friends for some friendly competition seems to be the main focus of their web services, but the motivators and ability to “plus up” friends’ activity might push some to work out harder or more often. The differences here are mostly minor, and deciding on the “best” service in terms of its online offerings will come down to personal preference. One of the features I like with Strava is the ability to add equipment that you use, like your running shoes or specific bikes. Doing this will let you see the total distance ridden on your bike over time.

Each service offers a premium paid tier with additional features. Strava and RunKeeper have free-to-use mobile apps with fewer features, while MMR goes with advertisements and an in-app purchase to remove the ads.

Data Quality / Maps

My primary interest in analyzing these services was to check out the quality of the GPS data logging. I ran all three of them on the same ride through Snell Isle so I could overlay them together and see what the variance was in location accuracy. Even though iOS is ultimately logging the same data from the same sensor, and offering that up to the applications via the Core Location API, the data shows that all three apps must be processing and storing the location values differently. Here’s a map showing the GPS track lines recorded in each — Strava, MapMyRun, and RunKeeper. Click the buttons below the map to toggle them on and off to see how the geometry compares. If you zoom in close, you’ll see the lines stray apart in some areas:

Each app performs roughly the same in terms of location data quality. The small variances in precision seem to trend together for the most part, which makes sense. When the signal gets bad, or the sky is slightly occluded, the Location APIs are going to return worse data for all running applications. One noticeable difference in the track geometry (in this example, at least) is that the MapMyRun track alignment tends to vary in different ways from the other two. It looks like there might be some sort of server-side smoothing or splining going on to make the data look better after processing, but it doesn’t dramatically change the accuracy of the data overall.
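Purely as a guess at what that kind of post-processing might look like, here’s a toy Python sketch of a centered moving average over track points. To be clear, this is an illustration of smoothing in general, not MapMyRun’s actual algorithm, and the coordinates are invented:

```python
def smooth_track(points, window=3):
    """Centered moving average over (lat, lon) points; endpoints kept as-is.

    A toy stand-in for whatever server-side smoothing/splining a service
    might apply: it damps jitter, but also rounds off genuine corners.
    """
    if len(points) < window:
        return list(points)
    half = window // 2
    out = list(points)
    for i in range(half, len(points) - half):
        lats, lons = zip(*points[i - half:i + half + 1])
        out[i] = (sum(lats) / window, sum(lons) / window)
    return out

# Hypothetical raw fixes with a little side-to-side jitter in longitude
raw = [(27.760, -82.630), (27.761, -82.6305), (27.762, -82.630),
       (27.763, -82.6295), (27.764, -82.630)]
print(smooth_track(raw))
```

Running this pulls each interior point toward the local trend line, which is consistent with the subtly “cleaner” look of the MapMyRun tracks.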

I did notice that using these apps without cellular data enabled results in severely degraded quality, I think because the Assisted GPS services are unavailable, forcing the phone to rely on a raw GPS satellite fix. Without cellular data switched on, the device takes longer to get a position lock. A couple of runs from my Europe trip exhibited this, like my run along the Thames in London, and one in Lucerne.

Run on the Thames

Since these motion trackers rely on the GPS track and its time series data for calculating total distance (which is obviously way off with this much linear error), you end up with massively incorrect pace and calorie-burn metrics. This jagged-looking run activity in London reported itself as 4.7 miles, when in reality it was only about 3.5. Soon I’d like to pair my iPhone with an external GPS device I’ve been testing to see what the improvement in accuracy looks like.
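As a back-of-the-envelope illustration of how jitter inflates distance, here’s a short Python sketch (the coordinates and noise magnitude are invented, not from my actual runs) that sums haversine distances between consecutive track points, the same basic approach these apps presumably use. Since every zigzag adds length, a noisy track always over-reports distance:

```python
import math

def haversine_m(p1, p2):
    """Great-circle distance in meters between two (lat, lon) points."""
    R = 6371000  # mean Earth radius in meters
    lat1, lon1 = map(math.radians, p1)
    lat2, lon2 = map(math.radians, p2)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def track_length_m(points):
    """Total track length: the sum of its segment distances."""
    return sum(haversine_m(a, b) for a, b in zip(points, points[1:]))

# A straight ~1 km track sampled every ~100 m (hypothetical due-north run)
clean = [(27.76 + i * 0.0009, -82.63) for i in range(11)]

# The same track with alternating ~20 m cross-track GPS jitter
noisy = [(lat, lon + (0.0002 if i % 2 else -0.0002))
         for i, (lat, lon) in enumerate(clean)]

print(track_length_m(clean))  # ~1000 m
print(track_length_m(noisy))  # noticeably longer, from jitter alone
```

With this made-up noise level the jitter alone adds roughly 7% to the reported distance, and the error only grows as the fixes get worse, which matches the 4.7-versus-3.5-mile gap on the London run.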

If you want to export the raw data straight from the web services, Strava and RunKeeper are the only ones that will give you a full time series-enabled GPX track file for each activity. MapMyRun exports only the track point data; without the timestamp info for each point, the track can’t be processed to calculate pace or any other metric that uses elapsed time.
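To show why the timestamps matter, here’s a minimal sketch that parses a time-enabled GPX file with Python’s standard library. The sample track is invented, but it follows the standard GPX 1.1 structure; strip the `<time>` elements (as in MapMyRun’s export) and elapsed time, and therefore pace, can no longer be recovered:

```python
import xml.etree.ElementTree as ET
from datetime import datetime

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}  # standard GPX 1.1 namespace

def parse_trackpoints(gpx_xml):
    """Return a list of (lat, lon, time) tuples from a GPX document string."""
    root = ET.fromstring(gpx_xml)
    points = []
    for trkpt in root.iterfind(".//gpx:trkpt", NS):
        t = trkpt.find("gpx:time", NS)
        points.append((
            float(trkpt.get("lat")),
            float(trkpt.get("lon")),
            datetime.strptime(t.text, "%Y-%m-%dT%H:%M:%SZ"),
        ))
    return points

# A two-point toy track; real exports contain thousands of points
sample = """<gpx xmlns="http://www.topografix.com/GPX/1/1">
<trk><trkseg>
<trkpt lat="27.7600" lon="-82.6300"><time>2014-07-16T12:00:00Z</time></trkpt>
<trkpt lat="27.7690" lon="-82.6300"><time>2014-07-16T12:06:00Z</time></trkpt>
</trkseg></trk></gpx>"""

pts = parse_trackpoints(sample)
elapsed = (pts[-1][2] - pts[0][2]).total_seconds()
print(elapsed)  # 360.0 seconds between the two fixes
```

Pace is then just segment distance divided by the time delta between consecutive points, which is exactly the calculation a timestamp-free export makes impossible.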

The location data captured by the Moves app works a little differently. It splits your persistent movement activity up into day and week views, with totals of steps taken and calories burned, by type of activity. It does some cool auto-detection of activity type to try and classify car transport, cycling, running, and walking automatically. Because it’s always running in the background, though, the location data isn’t quite as granular as from the other three applications, probably due to less frequent logging using the location APIs.

Moves app examples

One caveat important to note is that Moves was acquired by Facebook back in May. That may turn a lot of people off to the idea of uploading their persistent motion tracking information to the Borg.

Wrap up

Strava and MapMyRun also support pulling the track info from external devices like mountable GPS devices, watches, and bike sensors.

Overall, my favorite is Strava as the app-of-choice for tracking activity. It performs consistently, the GPS and fitness data is high quality, and the service has a good balance of simplicity and social features that I like.


— June 7, 2014 • #

I recently took a trip to Tunis to attend the GCT-Tunisia conference, a geospatial industry event focused on capacity building and promotion of mapping tools in the fluid and exciting region of North Africa. It was a fascinating trip to visit a place at such a turning point in its development. Both Tunisia and Libya, each of which had significant representation at the conference, are still just 3 years out from revolutions that unseated regimes in power for decades. It was a welcome opportunity to visit during this period of transition (yet relative safety and stability).

Tunis Carthage Airport

Traveling from Florida to North Africa is a trek. We flew through JFK Airport in New York, to Atatürk Airport in Istanbul, then on to Tunis-Carthage. For the international legs we flew on Turkish Airlines, which was a first for me. Turkish didn’t have the luxury-by-default feeling of the Gulf airlines, but it was comfortable. And for being an international hub and one of the busiest airports in the world, Atatürk was easy enough to transit between gates and move through security. Even at 5am local time, the airport was a swarm, with a varied crowd that proved Istanbul really is “where east meets west”.

The flight to Tunis left early in the morning local time. There were some fantastic views of the Greek islands and Sicily’s Mount Etna from my window seat. Tunis-Carthage Airport is right in the geographic center of the city. From the east, you fly in right over the Lake of Tunis, a natural lagoon encircled by the city. Passport control was slow, but easy, and the terminal was bustling with people. We caught a car ride to our hotel - about a 20km drive through La Marsa up along the beach to Gammarth. For a city that underwent a revolution only 3 years ago, there are few visible signs. We heard from some of the locals that before and during the revolution, much of the European expat population left the country, but things seem to be recovering strongly. Our hotel and the neighboring ones on the waterfront were crowded all week with people from Europe and all over the region.

Sidi Bou Saïd

Early in the trip we visited the old town of Sidi Bou Saïd, which is a fascinating hilltop settlement and tourist spot with amazing views overlooking the Mediterranean. It’s packed with shops selling mostly artwork — paintings, pots, dishware, and the like. Since it’s positioned on the center of a bluff above the waterfront, it has labyrinthine streets winding between distinctive white and blue buildings. We didn’t do much here but buy some gifts for those back home, then eat some pizza at a nearby local joint.

The highlight of the trip was toward the end of the week, when the conference organizers put together an excursion tour that took us south to a town called Zaghouan, to visit an ancient Roman water temple at the base of the mountain. We hopped into a tour van early on Friday to make the journey into the countryside to Zaghouan, which gave Patrick and me ample opportunity to snap photos from the road (for some post-trip OpenStreetMapping). On the way out of town we stopped near the ruins of Carthage to see the destination of a nearly 2,000-year-old aqueduct that once led to the cisterns where the Romans stored the water that supplied the city from the mountainous south. About 30 km south of Tunis we stopped on the roadside to see the remnants of the aqueduct, at a point where it’s remarkably well-preserved. It feels unbelievable to stand beneath a structure nearly 2,000 years old and marvel at the fact that even this form of ancient plumbing is still standing. There are even sections where the original water pipeline is still covered and intact.

Djebel Zaghouan

Another hour or so of driving took us into the city of Zaghouan, which sits beneath Djebel Zaghouan, a craggy mountain that’s one of the northernmost peaks in the Atlas range. We wound our way up the streets into the foothills, to the “Temple des Eaux”, the Roman water temple. The Romans built the temple on top of a spring in the second century AD - it served as a place of worship, and as the source of the aqueduct that supplied water to Carthage. You could see the pipe where water was siphoned from the spring emerging from the side of the hill, where it slowly pitched downward onto the top of the aqueduct for its 100km downhill trickle. The views from the temple are incredible. The climate and topography make it feel like you’re in southern California, overlooking the olive orchards and almond plantations of the surrounding area. With our fellow geographers out in the field, everyone naturally couldn’t help but do some surveying while on site at such a historic place. One of our tour-mates, a surveyor who builds and operates 3D laser scanners, broke out the devices to gather some high-resolution scans of the temple site.

El Fahs

About 20km west of Zaghouan is the smaller town of El Fahs. There we visited the ruins of a Roman city called Thuburbo Majus. On a hilltop a few kilometers from the main town, it’s quiet, calm, and stunningly well-preserved. We arrived in the mid-afternoon to an empty site. Two staff guards at the entrance let us in, and then we pretty much had the entire site to ourselves. The road through the site dates from nearly 2,000 years ago, and various structures were built over the next 3 or 4 centuries. The highlight of the walk through the ruins, for me, was an archway between the baths and an elevated temple, dating from the time of the Punic Wars — narrow, perfectly constructed, and still standing after 20 centuries. Completely unbelievable, and at a site with relatively little oversight or protection. I could touch the arch when walking beneath it.

Punic arch

Up on the site of the old forum, some local boys were kicking a football around. I overheard an argument about who was “Cristiano” as they were chasing the ball up and down the stairs to the columns of the capitol.

Soccer on the forum

The wildflowers were so dense we could barely walk through them. I got some video walking through the “House of the Auriga” and the Winter Baths. A stunning place to get to visit, with beautiful weather the whole day of our excursion.

I’ve posted a bunch of photos from Tunis and the excursion up on Flickr.

The Three Laws of Robotics

— March 6, 2014 • #

This is part two of a series on Isaac Asimov’s Greater Foundation story collection. This part is about the short story collection, I, Robot.

Picking up with the next entry in the Asimov read-through, I read a book I last picked up in college, I, Robot. This is the book that cemented his reputation in science fiction. His works on robots are probably his most well-known. He was an early thinker in the space (he even coined the term “robotics”), and wrote extensively on the subject of artificial intelligence. After reading it again, it’s incredible how much influence a 60-year-old collection of pulpy science fiction thought experiments ended up having on the sci-fi genre, and arguably on real-world engineering itself.

I, Robot

I, Robot isn’t a novel, but a collection of 9 short stories, each of which was published independently in science fiction magazines during the 1940s. The parts are stitched together within a framing story of Dr. Susan Calvin, the “robopsychologist” who makes appearances in several of Asimov’s robot stories, recounting her experiences with robot behavior while working for US Robots and Mechanical Men, from the time of the earliest models to extremely advanced humanoid versions. Fundamentally, I, Robot is a philosophical study of Asimov’s famous Three Laws of Robotics, laws that dictate the allowable behavior of robots and that form the basis of much of his exploratory thinking on the nature of intelligence:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

This simple set of rules forms the basis for the stories of I, Robot. The groundwork of the Three Laws lets Asimov ruminate on logical and ethical thought processes, and on what differentiates the human from the artificial.

Each story is an analysis of an aspect of robotic technical development. As the stories progress and the technology advances, each plot line underscores elements of human thought taken for granted in their complexity and nuance. In order to poke and prod at the Three Laws, moral and psychological situations are presented to investigate how robots might respond to input, and by extension, how minor variations in inputs could dramatically change response. Asimov’s robots are equipped with “positronic brains” — three-pronged logic processors that weigh every decision against the Three Laws. Upon initial interpretation within the framework of the Laws, each plot’s situation appears to result in a conundrum or violation of the rule set. Asimov’s mystery storytelling then kicks in and invites the reader to deconstruct and solve the puzzle.

My favorites among the stories center on US Robots’ “field engineers”, Mike Powell and Greg Donovan. They appear in four of the nine stories, and serve as the corporate guinea pigs responsible for putting new robot models through their paces in a variety of settings, from remote space stations to inhospitable planets to asteroids. I loved how the technology always seems to get the better of them, only to have them figure out clever solutions by twisting the Three Laws to their advantage. In “Reason”, Powell and Donovan are stuck on a space station with a robot named QT-1 (Cutie), a model with highly developed reasoning abilities. Cutie refuses to obey any of their commands because it reasons that a power higher than humans exists, which it calls “The Master”. They eventually discover that the Master is actually the station’s power source, which Cutie determines is of a higher authority than the station’s human operators, as none of them could exist without it. It’s a 2001-esque series of events, though Cutie isn’t quite as insidious as HAL.

“Evidence” introduces the character of Stephen Byerley, a man suspected of being a highly developed humanoid robot. Dr. Calvin attempts to use psychological analysis to determine whether he is man or machine once physical means are exhausted, reasoning that if he were truly a robot, he would be forced by his programming to obey the Three Laws. But the investigation takes a turn when she realizes that his conformance with the Three Laws may “simply make him a good man”, since the Laws were engineered to model human morals.

In the final story, “The Evitable Conflict”, Asimov even hints at what our modern AIs will look like, with positronic brains embedded in even non-humanoid machines, a 1950s vision of Siri or Watson. These computers of the future are critical in managing the world’s economy, mass-production, and coordination. The computers begin experiencing minor glitches in decision-making that seem to be minor violations of the First Law. But it turns out that the computers have effectively invented a “Zeroth Law” by reinterpreting the First: A robot may not harm humanity, or, by inaction, allow humanity to come to harm — making minor exceptions to the First Law to save humanity from itself. Between Calvin and Byerley, there’s a sense of despair as humanity has given its future over to the machines. Would we be okay dispensing with free will in order to avoid war and conflict? It punctuates the final evolutionary path of robotic development, and provides a nice segue into the Robot novels in the future chronology of his universe.

“Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable!”

I’m interested to see where the path leads as I continue to read more of his work, and to find out how these robot stories interconnect with his wider universe. Overall, I thoroughly enjoyed this book. It’s clever, thought-provoking, humorous, and will make you realize how many of our favorite works of science fiction in writing and film owe a tremendous debt to this book.

The Year in Books

— December 20, 2013 • #

2013 was busy in so many ways — our product matured beyond the level I’d hoped it could, we’ve done some incredible mapping work around the world, and I’m just getting started with my involvement in an awesome local hackerspace scene. Even with all that going on, I still managed to read a fair number of great books this year.

Year in Books

A few thoughts on some of the favorites:

Neuromancer, William Gibson. 1984.

I first read this one back in 2010, but after finishing up the Sprawl series with Mona Lisa Overdrive, I had to revisit it. The first time around, I found it difficult to follow and get engaged, but the second reading cemented it as one of my all-time favorites of any fiction. This is one of the seeds that sprouted the cyberpunk scene, a genre which might as well have been invented for me. The setting and culture of the book is completely fascinating, and Gibson’s prose drags you through its cities, space stations, and cyberspaces at pace, but with enough expression that you can taste the Sprawl’s grime and visualize the grandeur of Freeside, the massive spindle-like space station. Gibson’s writing oozes with style; he can turn a drug addict on a computer terminal (er, “console cowboy”) hacking a corporate network into an action anti-hero. I highly recommend this book to anyone.

When Gravity Fails, George Alec Effinger. 1987.

In this one, Effinger reverses the traditions of futuristic settings, with the West in decline and the Levant as the world’s economic core. It’s the first of a three-part series featuring Marîd Audran, a hustler from the Maghreb who lives in the fictional Arab ghetto of the Budayeen. In the slums and back alleys of the Budayeen, black-market clinics offer its brain-wired citizens installation of cybernetic add-ons and full personality replacement mods. Audran is an unmodified traditionalist (and drug addict), but quickly finds himself in the debt of Friedlander Bey, the Budayeen’s resident paternal crimeboss. The story follows Audran as he must himself get “wired” in order to track down a serial killer committing a string of inexplicable murders. Loved this unconventional work of cyberpunk, and looking forward to getting to the next two parts in 2014.

Consider the Lobster, David Foster Wallace. 2005.

I’ve had DFW on my reading list for years, but this first book of his I picked up is actually a collection of essays rather than fiction. Many of the pieces in the collection are works of journalism, with Wallace covering events or reviewing books. Reading a writer of his caliber covering something like the Maine Lobster Festival, or following the 2000 McCain campaign is rare, and his outsider’s point of view is refreshing.

The Revenge of Geography, Robert Kaplan. 2012.

I had been on the lookout for some time for a book about modern geopolitics, and this one was excellent. Kaplan begins by setting the historical context with the ideas of early geopolitical theorists. The central ideas of “sea-centric” vs. “land-centric” power are explained — the Rimland vs. the Heartland — and how significant historical events revolved around these two central strategies of geographic positioning. Kaplan then goes on to analyze the regions of the modern world, their connections with one another, and conjectures interesting possible outcomes, all through the lens of geography.

The End of Eternity, Isaac Asimov. 1955.

This one really surprised me, one of my favorite works of sci-fi. I wrote a post a couple months ago with my thoughts on this book, but suffice it to say that it’s my favorite piece of time travel fiction. And if you’ve watched Fringe, you’ll see the deep influence of this novel about 20 pages in.

The One World Schoolhouse, Salman Khan. 2012.

Our public education system is deeply flawed. In this book, Sal Khan analyzes the fundamental problems and posits a potential way forward. He’s the founder of the Khan Academy, one of the largest players in the world of MOOCs, striving to build an approach and set of tools to bring the same level of education to everyone worldwide, even those with minimal access to traditional schooling, and to wean ourselves off of the old-world, hyper-structured Prussian education system we’ve been following for over a century. I have a deep personal interest in our education system, particularly the almost total lack of representation of my field as a foundational layer in primary and secondary schools.

Shadow of the Torturer & Claw of the Conciliator, Gene Wolfe. 1980.

I’ll round it out with the first two parts of Gene Wolfe’s Book of the New Sun tetralogy. The series is set on a distant-future Earth, and follows Severian, a torturer of the “Seekers for Truth and Penitence” (the guild of torturers) responsible for holding and extracting information from political prisoners. The depth of these novels is unmatched, and they’re quite difficult to follow at first. Severian tells the story in the first person, is sometimes an unreliable narrator, and misidentifies or misunderstands many of the places and things that cross his path, having never left the torturers’ guild until his exile. Wolfe uses language that is arcane or dead, with many of the words derived from Greek or Latin (a few examples: fuligin, autarch, archon, aquastor, optimate), which will send you to the dictionary frequently. Because of the complexity of the story and writing, this was my second attempt to read these two books. If you make it through the first quarter, you’ll be handsomely rewarded with one of the most fascinating, deep, and original fantasy stories ever written.

Upwhen and Downwhen

— October 30, 2013 • #

This is part one of a series of essays on Isaac Asimov’s famous Greater Foundation story collection. In this first one I discuss the time travel mystery The End of Eternity. It’s rife with spoilers, so beware.

The prolific science fiction writer Isaac Asimov published an astonishing body of work in his life. Though he’s probably most well-known for his stories, collections, and postulations about robots (and, therefore, artificial intelligence), he wrote a baffling amount speculating on much bigger ideas like politics, religion, and philosophy. The Robot series is one angle on a bigger picture. Within the same loosely-connected universe sit two other series, those of the Empire and Foundation collections. Altogether, these span 14 full novels, with a sprinkling of several other short story collections in between.


In deciding to read all the works in the collection, I first had to choose where to begin. Is the best experience had by reading in the order he wrote them? Or to read them in story chronological order? Trying to figure this out, I naturally ran across the sci-fi message board discussions arguing the two sides, with compelling arguments both ways. I wasn’t sure which had more merit until I read that Asimov himself suggests a chronological approach, rather than in the order of their writing, to lend maximum immersion into the galactic saga. Taking a tip from another reader, I also decided to go a step further and begin with one outside of the main series, but seen by many as a precursor to the other storylines — the 1955 time travel story The End of Eternity.

The novel is primarily a mystery-slash-thriller, set in a distant future. The story follows the experiences of Andrew Harlan, a man extracted from Reality and into “Eternity”, a place that exists outside of time where humans called “Eternals” have taken it upon themselves to police the timeline of human existence, altering Reality where necessary to minimize human suffering, and control the flow of history. Eternals are people recruited from various times throughout history for particular desired skills, from the 27th century, all the way up to the 30,000th and beyond. Within Eternity is something of a class hierarchy, with Eternals dividing up the duties – Sociologists use statistics to plot the lives of individuals, Computers calculate the long-term effects of Reality Changes, and Technicians pinpoint the exact moments in time at which to initiate the Reality Change. By traveling time and entering at an exact pre-calculated point, Technicians strive to introduce the “minimum necessary change” to induce a “maximum desired response”. In other words, the smallest modification to Reality possible to create the most positive outcome:

“…He had tampered with a mechanism during a quick few minutes taken out of the 223rd and, as a result, a young man did not reach a lecture on mechanics he had meant to attend. He never went in for solar engineering, consequently, and a perfectly simple device was delayed in its development a crucial ten years. A war in the 224th, amazingly enough, was moved out of Reality as a result.”

Harlan is one of the Technicians, who actually triggers these butterfly effect Reality Changes. Unlike most of the Eternals, he has a fascination with the “primitive centuries”, those of the era before the discovery of time travel in the 24th. He collects artifacts from the 20th and 21st centuries — magazines, books, and other relics of the past to understand what made people tick in the time before Eternity. So Harlan and the other Eternals go about this business, traversing time “upwhen” and “downwhen” along their temporal transit system, shaping history like plastic.

This story contains one of my favorite takes on time travel. It presents a set of rules, obeys those rules, and directly acknowledges the time paradoxes it introduces. The plot itself is set up as a mystery, flinging Harlan into a Twilight Zone-esque narrative, leaving us as perplexed as he is as to what is actually going on, and whether he’s being manipulated by those around him. Eternals are allowed no contact or personal relationship with any “Timers”, people not aware of Eternity and that still exist within the timeline of Reality. Since the reality changes they induce can remove the existence of friends and family from Reality, Eternals are supposed to sever ties with family and forget that they ever existed. Like much time travel-based fiction, keeping tabs on the plot can get confusing, even though there’s a logical framework for how time travel functions in this universe.

For a story written in 1955 (and about as “hard sci-fi” as you can get), I was pleasantly surprised with several scenes that felt like reading a fast-paced thriller, with twists and revelations popping up every few pages for the entire final third of the book. One in particular consists of Harlan entering a point in time he had entered previously, creating the first of several ontological paradoxes that become key plot elements. The characters in the story directly acknowledge these paradoxes, speculate about the effects of an Eternal meeting himself, and even hatch a scheme to save Eternity by intentionally creating one.

The grand experiment of social engineering created by the existence of time travel and reality change in Eternity is questioned by the characters as they imagine the impact of constantly molding time to maintain an unexciting equilibrium. Each time the Sociologists’ “life plots” predict some calamity, like nuclear war, they intervene to level things out. And as it turns out, the intention to do good by removing chaos and chance from the equation stagnates humanity’s expansion to greater things, and creates a never-ending cyclical machine. History is doomed to repeat itself.

The best science fiction gives itself space to ruminate on the philosophical and moral implications of technology. I loved this book, and found it to be one of the most creative takes on time travel I’ve read, which says a lot given the quantity and variations on the subject in film, television, and writing. It’s all the more impressive that this was written in 1955, and isn’t even one of Asimov’s better-known works. I highly recommend it to anyone interested in science fiction. Its mystery structure keeps things interesting throughout, from a plot perspective, but it doesn’t shy away from classic sci-fi conventions, either.

OmniFocus 2 for iPhone

— October 22, 2013 • #

I’m an OmniFocus-flavored GTD adherent, or try to be. The iOS apps for OmniFocus were huge contributors to my mental adoption of my own GTD system. When OmniFocus 2 dropped a few weeks back for iPhone, I picked it up right away.

The new design lines up with the iOS 7 look. I really dig the flat UI style in utilitarian apps like OmniFocus, or any app where function truly overrides form in importance — typically anything I open dozens of times a day as part of my routine. The new layout gives weight and screen real estate to the things you access most frequently, like the Inbox, Forecast, and Perspectives views. I’m really liking the inclusion of the Forecast view as a first-class citizen, with the top row devoted to giving you context on the week ahead for tasks with deadlines.

As before, there’s a fast “Add to Inbox” button for quick capture. But rather than a button positioned somewhat arbitrarily in a bottom navigation menu, it’s now an ever-present floating button, always in the bottom right for rapid inbox capture. Upcoming and overdue tasks are now symbolized with colored dots in sub-views, and with colorized checkboxes in list views. The color highlights fit the iOS 7 aesthetic nicely, and give subtle indications of importance.

Like any effective design, the right balance of positioning and subtlety actually makes it clear how a feature should be used, and makes it simpler for you to integrate with your workflow. In past OmniFocus versions, I had a hard time figuring out how to make use of due dates (and start dates) properly, so I leaned away from using them.

With the latest iOS update, OmniFocus is now not only a tool that follows a GTD workflow, but one that actually leads you into better GTD practice.

Cabbage Key

— October 9, 2013 • #


I spent last weekend with the family at Cabbage Key, an island near Charlotte Harbor in southwest Florida. It’s only reachable by boat, so we launched the Shamrock on Friday morning to head over to the cottage, making a number of cargo trips to ferry the weekend’s people and provisions.

We had a fantastic time fishing, sailing, drinking beers, and eating. Cabbage is a great spot that’s close enough to drive to, yet still detached enough to feel like a true vacation away from home.

House on Cayo Costa

On Saturday we visited a friend’s rustic cabin on Cayo Costa, a barrier island state park, with a mangrove-lined shore on Pine Island Sound, and a beach on the Gulf. Since, like Cabbage, Cayo Costa is only accessible by private boat or ferry, it’s pretty secluded. Our family friend’s cabin is a minimalist setup, with just enough shelter, a generator, and small kitchen — perfect for our weekend seafood grill session.

I recorded some GPS traces of a few of our outings, a couple on the Shamrock and some aboard Nat’s 18’ Buccaneer. We had an amazing sail back to Pineland on Monday (the red line below), averaging 6 knots in rough seas and making the 5-mile trip in a little over 45 minutes. The tail end of Tropical Storm Karen was sweeping through that afternoon, so we made it back just ahead of a heavy squall.
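As a quick back-of-the-envelope check on those numbers (assuming the 5 miles are statute miles, while knots are nautical miles per hour):

```python
# Sanity-check the passage math: 5 statute miles in about 45 minutes.
NM_PER_STATUTE_MILE = 1 / 1.15078  # nautical miles per statute mile

distance_nm = 5 * NM_PER_STATUTE_MILE  # ~4.34 nautical miles
time_hours = 45 / 60                   # 45 minutes
speed_knots = distance_nm / time_hours

print(round(speed_knots, 1))  # 5.8 — consistent with "averaged 6 knots"
```

So the GPS track and the averaged speed hang together: just under 6 knots over the ground for the crossing.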

It was convenient on the trips to have the charts readily available offline in Fulcrum. Once I figured out how to download the raster data, convert it, and load it in, it was pretty simple, and I now have a process for doing this with any of the digital charts that NOAA publishes. I had built a small app in Fulcrum for reporting errors on the charts, and used it with some success out on the water – though I’m not sure what exactly constitutes an actual missing feature, which things are “managed” as canonical features for navigational charts, and how to report them back. I’m planning a future post on this soon.

In all the hacking I’ve done with charts and data in recent weeks, a small side project is coming together to make it easier to extract the raw data from the electronic charts, not just the rasters. NOAA’s formats are workable (and supported in GDAL), but it’s far too difficult for a regular person to make use of the data outside of the paper charts or expensive proprietary chart plotters. The goal is to make that data more consumable and ready for mapping out-of-the-box, so stay tuned.
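As a rough sketch of what that extraction looks like with GDAL’s S-57 support (the ENC cell filename here is a placeholder — NOAA distributes these cells as free downloads):

```shell
# Inspect the feature layers inside a NOAA ENC cell (S-57 format).
# "US5FL19M.000" is a hypothetical cell name for illustration.
ogrinfo US5FL19M.000

# Extract the soundings layer (SOUNDG) to GeoJSON for easy mapping.
ogr2ogr -f GeoJSON soundings.geojson US5FL19M.000 SOUNDG

# Pull the depth-area polygons (DEPARE) the same way.
ogr2ogr -f GeoJSON depth_areas.geojson US5FL19M.000 DEPARE
```

From there the GeoJSON can go straight into TileMill, QGIS, or a web map, which is exactly the kind of out-of-the-box consumability the proprietary chart plotters don’t offer.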

Bringing Geographic Data Into the Open with OpenStreetMap

— September 9, 2013 • #

This is an essay I wrote that was published in the OpenForum Academy’s “Thoughts on Open Innovation” book in early summer 2013. Shane Coughlan invited me to contribute on open innovation in geographic data, so I wrote this piece on OpenStreetMap and its implications for community-building, citizen engagement, and transparency in mapping. Enjoy.

With the growth of the open data movement, governments and data publishers are looking to enhance citizen participation. OpenStreetMap, the wiki of world maps, is an exemplary model for how to build community and engagement around map data. Lessons can be learned from the OSM model, and in many cases OpenStreetMap itself might be the best place for geodata to take on a life of its own.

The open data movement has grown in leaps and bounds over the last decade. With the expansion of the Internet, and spurred on by things like Wikipedia, SourceForge, and Creative Commons licenses, there’s an ever-growing expectation that information be free. Some governments are rushing to meet this demand, and have become accustomed to making data open to citizens: policy documents, tax records, parcel databases, and the like. Granted, the prevalence of open information policies is far from universal, but the rate of growth of government open data is only increasing. In the world of commercial business, the encyclopedia industry has been obliterated by the success of Wikipedia, thanks to the world’s subject matter experts having an open knowledge platform. And GitHub’s meteoric growth over the last couple of years is challenging how software companies view open source, convincing many to open source their code to leverage the power of software communities. Openness and collaborative technologies are on an unceasing forward march.

In the context of geographic data, producers struggle to understand the benefits of openness, and how to achieve the same successes enjoyed by other open source initiatives within the geospatial realm. When assessing the risk-reward of making data open, it’s easy to identify reasons to keep it private (How is security handled? What about updates? Liability issues?), and difficult to quantify potential gains. As with open sourcing software, it takes a mental shift on the part of the owner to redefine the notion of “ownership” of the data. In the open source software world, proprietors of a project can often be thought of more as “stewards” than owners. They aren’t looking to secure the exclusive rights to the access and usage of a piece of code for themselves, but merely to guide the direction of its development in a way that suits project objectives. Map data published through online portals is great, and is the first step to openness. But this still leaves an air gap between the data provider and the community. Closing this engagement loop is key to bringing open geodata to the same level of genuine growth and engagement that’s been achieved by Wikipedia.

An innovative new approach to open geographic data is taking place today with the OpenStreetMap project. OpenStreetMap is an effort to build a free and open map of the entire world, created from user contributions – to do for maps what Wikipedia has done for the encyclopedia. Anyone can log in and edit the map – everything from business locations and street names to bus networks, address data, and routing information. It began with the simple notion that if I map my street and you map your street, and we share data, both of us have a better map. Since its founding in 2004 by Steve Coast, the project has reached over 1 million registered users (nearly doubling in the last year), with tens of thousands of edits every day. Hundreds of gigabytes of data now reside in the OpenStreetMap database, all open and freely available. Commercial companies like MapQuest, Foursquare, MapBox, Flickr, and others are using OpenStreetMap data as the mapping provider for their platforms and services. Wikipedia is even using OpenStreetMap as the map source in their mobile app, as well as for many maps within wiki articles.

What OpenStreetMap brings to the table that other open data initiatives have struggled with is the ability to incorporate user contribution, and even more importantly, to invite engagement and a sense of co-ownership on the part of the contributor. With OpenStreetMap, no individual party is responsible for the data; everyone is. In the Wikipedia ecosystem, active editors tend to act as shepherds or monitors of articles to which they’ve heavily contributed. OpenStreetMap creates this same sense of responsibility for editors based on geography. If an active user maps his or her entire neighborhood, the feeling of ownership is greater, and the user is more likely to keep it up to date and accurate.

Open sources of map data are not new. Government departments from countries around the world have made their maps available for free for years, dating back to paper maps in libraries – certainly a great thing from a policy perspective that these organizations place value on transparency and availability of information. The US Census Bureau publishes a dataset of boundaries, roads, and address info in the public domain (TIGER). The UK’s Ordnance Survey has published a catalog of open geospatial data through their website. GeoNames.org houses a database of almost ten million geolocated place names. There are countless others, ranging from small, city-scale databases to entire country map layers. Many of these open datasets have even made their way into OpenStreetMap in the form of imports, in which the OSM community occasionally imports baseline data for large areas based on pre-existing data available under a compatible license. In fact, much of the street data in the United States was imported several years ago from the aforementioned US Census TIGER dataset.

Open geodata sources are phenomenal for transparency and communication, but still lack the living, breathing nature of Wikipedia articles and GitHub repositories. “Crowdsourcing” has become the buzzword among public agencies looking to invite this type of engagement in mapping projects, with widely varying degrees of success. Feedback loops with providers of open datasets typically consist of “report an issue” style funnels, lacking any direct interaction with the end user. As a contributor, I’m left to wonder about my change request: “Did they even see my report that the data is out of date in this location? When will it be updated or fixed?” Allowing the end user to become the creator instills a sense of ownership and responsibility for quality. The arduous task of building a free map of the entire globe wouldn’t even be possible without inviting the consumer back in to create and modify the data themselves.

Enabling this combination of contribution and engagement for OpenStreetMap is an impressive stack of technology that powers the system, all driven by a mesh of interacting open source software projects under the hood. This suite of tools that drives the database, makes it editable, tracks changes, and publishes extracted datasets for easy consumption is produced by a small army of volunteer software developers collaborating to power the OpenStreetMap engine. While building this software stack is not the primary objective of OSM, it’s this that makes becoming a “mapper” possible. There are numerous editing tools available to contributors, ranging from the very simple for making small corrections, to the power tools for mass editing by experts. This narrowing of the technical gap between data and user allows the novice to make meaningful contribution and feel rewarded for taking part. Wikipedia would not be much today without the simplicity of clicking a single “edit” button. There’s room for much improvement here for OpenStreetMap, as with most collaboration-driven projects, and month-by-month the developer community narrows this technical gap with improvements to contributor tools.

In many ways, the roadblocks to adoption of open models for creating and distributing geodata aren’t ones of policy, but of technology and implementation. Even with ostensibly “open data” available through a government website, data portals are historically bad at giving citizens the tools to get their hands around that data. In the geodata publishing space, the variety of themes, file sizes, and different data formats combine to complicate the process of making the data conveniently available to users. What good is a database I’m theoretically allowed to have a copy of when it’s in hundreds of pieces scattered over a dozen servers? “Permission” and “accessibility” are different things, and both are critical aspects of successful open initiatives. A logical extension of opening data is opening access to that data. If transparency, accountability, and usability are primary drivers for opening up maps and data, lowering the bar for access is critical to make those a reality.

A great example of the power of the engagement feedback loop with OpenStreetMap is the work of the Humanitarian OpenStreetMap Team (HOT) over the past few years. HOT kicked off in 2009 to coordinate the resources resident in the OpenStreetMap community and apply them to assist with humanitarian aid projects. Working both remotely and on the ground, the first large scale effort undertaken by HOT was mapping in response to the Haiti earthquake in early 2010. Since then, HOT has grown its contributor base into the hundreds, and has connected with dozens of governments and NGOs worldwide – such as UNOCHA, UNOSAT, and the World Bank – to promote open data, sharing, transparency, and collaboration to assist in the response to humanitarian crises. To see the value of their work, you need look no further than the many examples showing OpenStreetMap data for the city of Port-au-Prince, Haiti before and after the earthquake. In recent months, HOT has activated to help with open mapping initiatives in Indonesia, Senegal, Congo, Somalia, Pakistan, Mali, Syria, and others.

One of the most exciting things about HOT, aside from the fantastic work they’ve facilitated in the last few years, is that it provides a tangible example of why engagement is such a critical component of organic growth in open data initiatives. The OpenStreetMap contributor base, which now numbers in the hundreds of thousands, can be mobilized for volunteer contribution to map places where that information is lacking, and where it has a direct effect on the capabilities of aid organizations working in the field. With a traditional, top-down managed open data effort, the response time would be too long to make immediate use of the data in a crisis.

Another unspoken benefit of the OpenStreetMap model for accepting contributions from a crowd is that hyperlocal map data benefits most from local knowledge. There’s a strong desire for this sort of local reporting on facts and features on the ground all over the world, and the structure of OpenStreetMap and its user community suits this quite naturally. Mappers tend to map things nearby – things they know. Whether it’s a mapper in a rural part of the western United States, a resort town in Mexico, or a flood-prone region in Northern India, there’s always a consumer for local information, often someone for whom it would otherwise be prohibitively expensive to acquire. In addition to the expertise of local residents contributing to the quality of available data, local perspective can be interesting in its own right. This can be particularly essential in humanitarian crises, as users tend to map the things they perceive as most important to the local community.

Of course OpenStreetMap isn’t a panacea for all geospatial data needs. There are many requirements for mapping, data issue reporting, and opening of information where the data is best suited to more centralized control. Data for things like electric utilities, telecommunications, traffic routing, and the like, while sometimes publishable to a wide audience, still have service dependencies that require centralized, authoritative management. Even with data that requires consolidated control by a government agency or department, though, the principles of engagement and short feedback loops present in the OpenStreetMap model could still be applied, at least in part. Regardless of the model, getting the most out of an open access data project requires an ability for a contributor to see the effect of their contribution, whether it’s an edit to a Wikipedia page, or correcting a one way street on a map.

With geodata, openness and accessibility enable a level of conversation and direct interaction between publishers and contributors that has never been possible with traditional unilateral data sharing methods. OpenStreetMap provides a mature and real-world example of why engagement is often that missing link in the success of open initiatives.

The complete book is available as a free PDF download, or you can buy a print copy here.