Coleman McCormick

Archive of posts with tag 'Work'

November 20, 2024 • #

Where You Work Shapes How You Work →

Our levels of productivity, creativity, and inspiration have an intimate, hard-to-articulate connection to our environments. And we all have different predilections — quiet vs. noisy, calm vs. bustling, light vs. dark. Each quality creates a climate that pulls something different out of us.

Our surroundings shape how we work, yet we also have the power to choose and to mold them ourselves.

✦

What if Government Paid Better?

October 14, 2020 • #

In his book Political Order and Political Decay, Francis Fukuyama has a section on corruption in political systems and how it impacts economic development:

There are many reasons why corruption impedes economic development. In the first place, it distorts economic incentives by channeling resources not into their most productive uses but rather into the pockets of officials with the political power to extract bribes. Second, corruption acts as a highly regressive tax: while petty corruption on the part of minor, poorly paid officials exists in many countries, the vast bulk of misappropriated funds goes to elites who can use their positions of power to extract wealth from the population.

Today the most famously corrupt regimes lead the least liberal, least free societies. In these unstable environments, government jobs are among the most attractive to ambitious people. In part it’s because those jobs are more reliable (and sometimes easier to get and retain) than weak, inconsistent private sector jobs, but also because the ease with which rents can be extracted in corrupt systems attracts people eager to build personal wealth.

You see the inverse of this phenomenon in states with strong free market systems. A certain class of ambitious person is still attracted to government, but more often for reasons of celebrity or power than financial reasons. The potential for personally-enriching rent extraction is much lower. Brain drain happens in the public sector because many of the most ambitious for wealth and status see faster, more lucrative paths in the private market. So paradoxically, the lack of this personally-enriching career path could be impeding potential economic development, just as in poisoned systems, but for different reasons.

It’s unfortunate that we squander the performance of our hard-won, corruption-resistant1 government system because we can’t find the funding to pay our public servants better. Our federal (and state) agencies don’t realize how efficient this allocation of capital would be compared to the many channels through which we hemorrhage money year after year. What would happen if we paid civil servants better? How many of the ambitious, entrepreneurial class would stick around and increase the state’s capacity if they didn’t become disillusioned with personal stagnation?

  1. Of course we’re far from immune here. But when juxtaposed with the political systems of Liberia or the DRC, we’re doing pretty well. ↩

✦

The State of Distributed Work

August 12, 2020 • #

Like most teams, we’ve been fully remote and distributed since March 13th, almost exactly 5 months since we moved a team of 50+ to remote work with no upfront plan for how best to organize ourselves.

About 20 of our team were already remote (scattered across the lower 48) before the COVID lockdown started, though several of them were in the office fairly regularly. That still left 30+ who were forced to figure out a remote work setup overnight. Even the previously remote staff had to get used to changes in communication as the rest of the team adjusted in-flight.

Distributed work

So what’s worked and what hasn’t? What’s the overall impact been?

My general view is that it hasn’t materially impacted productivity overall. After a few weeks to find stability with the work-from-home reality, things settled into a regular cadence for the most part. Aside from many of us with kids and other home disruptions having to manage school closures, e-learning Zoom classes, and cabin-fevered children, the work cycle leveled off into a predictable flow.

Zoom life

Zoom fatigue is a serious thing. I don’t think we’re having more meetings or conversations on a minute-to-minute basis, but as many have pointed out during this lockdown, there’s something different about voice and video interaction that absorbs more energy, draws greater attention bandwidth, or something. It also seems that with everyone remote, there’s a creeping tendency to inflate invitation lists and make meetings bigger than their in-person versions would be. I have no hard evidence of this, but it feels like we’ve got a higher average number of people per meeting than pre-quarantine. And for me, the more heads on the Zoom session, the more draining it tends to be.

Being on persistent video doesn’t help, since it pressures you to sit still and be visible in the frame, when in person we’d often be up and about, at the whiteboard, leaning back in chairs, or getting something from the fridge. We haven’t had enough time yet to develop the social norms about what’s acceptable and not while on remote calls. Personally, I’m inclined toward seeing other people and having them see me, since we’re all starved for the ability to interact face to face, but perhaps over time we’ll work out some norms about when it’s expected to be present at the desk and when voice-only would suffice.

Documents, artifacts, and async work

With collaboration, we’ve been far less impacted than I expected; we’ve been able to make do. Most product design groups live and breathe by sketching, drawing, or whiteboarding ideas, and I’ve yet to find any good distributed digital methods for replacing the exploratory process of sketching something out in a group setting. I’ve done a few Zoom calls where I’ll screenshare the iPad with Concepts open. It’s excellent that we’ve got tools like this today to do visual collaboration without too much friction, but it’s still very one-way — the iPad sketcher is drawing, but others can’t “take a pen” themselves and add to it like they would at a whiteboard. I’m sure someone will develop a live, Google Docs-like multiplayer sketchboard to fill this need (hint: someone please do this!).

Even in pre-COVID times, most of the company has always been pretty solid with asynchronous work. Things are facilitated through Slack as a foundational communication layer, with plenty of collaborative Google Docs and Sheets on top of that for interactive work. We recently set up Confluence, too, as a better central location for content — something like an internal blog, and a better place for collaborative work on documents than Docs. The truth here is that there’s no shortage of tooling to help teams with async work; it’s a human behavior and comfort problem to get everyone in the right tempo of working this way.

Serendipity

One of the biggest benefits of co-located teams is the random hallway interaction you get that’s very hard to replicate remotely.

In some ways, removing the random hallway chatter is what we often long for — a way to add more time to our day for deep work. But lots of hallway chatter results in not only human social connection, but also real work discussion and idea exchange. There still doesn’t seem to be a good mechanism to replace what gets lost here when you’re distributed. You do recover some productivity with more time for deep work (if you can keep the meeting-creep down), but it’s less clear what the longer-term cost will be in ideas never discovered or pursued for lack of those random encounters. For a couple months some of us were doing regular “social hour” Zooms to fill this void. They were great for maintaining interpersonal connections, but didn’t solve the problem for new product ideas or work topics.

It still remains to be seen how many companies return to full in-person work models after all of this settles down. I’m sure many will go back to something almost resembling the pre-shelter model, but I’d bet that there’s plenty of residual work-from-home that’ll happen even in the most face-to-face-leaning organizations. Over time we’ll surely all adapt to some sort of regular pattern, hopefully landing on something more effective than we had before. I know that hybrid models have shown poor results in the past, but I think there’s a way to get to a place like that that works for everyone, now that we’re all subject to the same costs and benefits of working remotely.

✦

Fulcrum's Report Builder

July 5, 2020 • #

After about 6-8 months of forging, shaping, research, design, and engineering, we’ve launched the Fulcrum Report Builder. One of the key use cases with Fulcrum has always been using the platform to design your own data collection processes with our App Builder, perform inspections with our mobile app, then generate results through our Editor, raw data integrations, and, commonly, PDF reports generated from inspections.

For years we’ve offered a basic report template along with an ability to customize the reports through our Professional Services team. What was missing was a way to expose our report-building tools to customers.

With the Report Builder, we now have two modes available: a Basic mode that allows any customer to configure some parameters about the report output through settings, and an Advanced mode that provides a full IDE for building your own fully customized reports with markup and JavaScript, plus a templating engine for pulling in and manipulating data.

Under the hood, we overhauled the generator engine using a library called Puppeteer, a Node.js API for controlling headless Chrome that can, among many other things, convert web pages to documents or screenshots. It’s lightning fast and allows for a live preview of your reports as you’re working on your template customization.
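The advanced mode’s flow can be sketched roughly like this: fill a report template with record data, then hand the resulting HTML to Puppeteer to produce a PDF. This is a minimal illustration only, not Fulcrum’s actual engine — the `{{field}}` syntax, function names, and record fields here are hypothetical.

```javascript
// Sketch of a template-to-PDF pipeline (hypothetical names throughout).
// Step 1: a tiny templating function that replaces {{key}} placeholders
// with values from a record object, leaving unknown keys untouched.
function renderTemplate(template, record) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in record ? String(record[key]) : match
  );
}

const template = '<h1>Inspection {{id}}</h1><p>Status: {{status}}</p>';
const html = renderTemplate(template, { id: 'A-102', status: 'Passed' });
// html is now '<h1>Inspection A-102</h1><p>Status: Passed</p>'

// Step 2: in a real setup, Puppeteer would turn the rendered HTML
// into a PDF, along these lines:
//   const browser = await puppeteer.launch();
//   const page = await browser.newPage();
//   await page.setContent(html);
//   await page.pdf({ path: 'report.pdf', format: 'Letter' });
//   await browser.close();
```

Because the report is just a web page until the final step, the same rendering path can drive both the live preview and the PDF output.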

Feedback so far has been fantastic, as this has been one of the most requested capabilities on the platform. I can’t wait to see all of the ways people end up using it.

We’ve got a lot more in store for the future. Stay tuned to see what else we add to it.

✦

Weekend Reading: Invading Markets, Sleep Deprivation, and the Observer Effect

June 13, 2020 • #

🎖️ Commandos, Infantry, and Police

Jeff Atwood on Robert X. Cringely’s descriptions of three groups of people you need to “attack a market”:

Whether invading countries or markets, the first wave of troops to see battle are the commandos. Woz and Jobs were the commandos of the Apple II. Don Estridge and his twelve disciples were the commandos of the IBM PC. Dan Bricklin and Bob Frankston were the commandos of VisiCalc.

Grouping offshore as the commandos do their work is the second wave of soldiers, the infantry. These are the people who hit the beach en masse and slog out the early victory, building on the start given them by the commandos. The second-wave troops take the prototype, test it, refine it, make it manufacturable, write the manuals, market it, and ideally produce a profit.

What happens then is that the commandos and the infantry head off in the direction of Berlin or Baghdad, advancing into new territories, performing their same jobs again and again, though each time in a slightly different way. But there is still a need for a military presence in the territory they leave behind, which they have liberated. These third-wave troops hate change. They aren’t troops at all but police.

😴 Why Sleep Deprivation Kills

Behind all this is the astonishing, baffling breadth of what sleep does for the body. The fact that learning, metabolism, memory, and myriad other functions and systems are affected makes an alteration as basic as the presence of ROS quite interesting. But even if ROS is behind the lethality of sleep loss, there is no evidence yet that sleep’s cognitive effects, for instance, come from the same source. And even if antioxidants prevent premature death in flies, they may not affect sleep’s other functions, or if they do, it may be for different reasons.

📥 The Observer Effect: Marc Andreessen

A new interview series from Sriram Krishnan:

The Observer Effect studies interesting people and institutions and tries to understand how they work.

He kicks it off big with an interview with Marc Andreessen.

✦

Weekend Reading: The State and the Virus, Future of Work, and Stephen Wolfram's Setup

April 18, 2020 • #

🏛 The Individual, the State, and the Virus

I agree with most of Kling’s takes here on the role the state should play in the coronavirus crisis.

👩🏽‍💻 Mapping the Future of Work

A nice comprehensive list of SaaS products for the workplace, across a ton of different categories. Great work by Pietro Invernizzi putting this database together.

⌨️ Stephen Wolfram’s Personal Infrastructure

Mathematician and computer scientist Stephen Wolfram wrote this epic essay on his personal productivity infrastructure.

✦

2020 Ready: Field Data Collection with Fulcrum

February 4, 2020 • #

Today we hosted a webinar in conjunction with our friends at NetHope and Team Rubicon to give an overview of Fulcrum and what we’re collectively doing in disaster relief exercises.

Both organizations deployed to support recent disaster events for Cyclone Idai and Hurricane Dorian (the Bahamas) and used Fulcrum as a critical piece of their workflow.

Always enjoyable to get to show more about what we’re doing to support impactful efforts like this.

✦

Fall All Hands 2019

November 9, 2019 • #

We just wrapped up our Fall “all hands” week at the office. Another good week to see everyone from out of town, and an uncommonly productive one at that. We got a good amount of planning discussion done for future product roadmap additions, did some testing on new stuff in the lab, fixed some bugs, shared some knowledge, and ate (a lot).

Looking forward to the next one!

✦

San Juan

October 21, 2019 • #

We’re in San Juan this week for the NetHope Global Summit. Through our partnership with NetHope, a non-profit devoted to bringing technology to disaster relief and humanitarian projects, we’re hosting a hands-on workshop on Fulcrum on Thursday.

NetHope Summit

We’ve already connected with several of the other tech companies in NetHope’s network — Okta, Box, Twilio, and others — leading to some interesting conversations on working together more closely on integrated deployments for humanitarian work.

Fortin San Geronimo de Boqueron

Looking forward to an exciting week, and maybe some exploring of Old San Juan. Took a walk last night out to dinner along the north shore overlooking the Atlantic.

✦

Data as a Living Asset

September 20, 2019 • #

This is a post from the Fulcrum archives I wrote 3 years back. I like this idea, and there’s more to be written on the topic of how companies treat their archives of data. Especially in data-centric companies like those we work with, it’s remarkable how quickly data is thrown on a shelf, atrophies, and is never used again.

In the days of pen and paper collection, data was something to collect, transcribe, and stuff into a file cabinet to be stored for a minimum of 5 years (lest those auditors come knocking). With advances in digital data capture — through all methods including forms software, spreadsheets, or sensors — many organizations aren’t rethinking their processes and thus haven’t come much further. The only difference is that the file cabinet’s been replaced with an Access database (or gasp a 10-year-old spreadsheet!).

Many organizations collect troves of legacy data in their operations, or at least as much as they can justify the cost of collecting. But because data management is a complicated domain in and of itself, oftentimes the same data is re-collected over and over, with all cost and no benefit. Once data makes its way into corporate systems somewhere after its initial use, it’s forgotten and left on the virtual shelf.

Data is your company’s memory. It’s the living, institutional knowledge you’ve invested in over years or decades of doing business, full of latent value.

But there are a number of challenges that stand in the way when trying to make use of historical data:

  • Compatibility — File formats and versions. Can I read my old data with current tools?
  • Access — Data silos and where your data is published. Can my staff get to archives they need access to without heartburn?
  • Identification — A process for knowing what pieces are valuable down the road. Within these gigabytes of data, what is useful?

If you give consideration to these issues up-front as you’re designing a data collection workflow, you’ll make your life much simpler down the road when your future colleagues are trying to leverage historical data assets.

Let’s dive deeper on each of these issues.

Formats and Compatibility

I call this the “Lotus 1-2-3” problem, which happens whenever data is stored in a format that dies off and loses tool compatibility1. Imagine the staggering amount of historical corporate data locked up in formats that no one can open anymore. This is one area where paper can be an advantage: if stored properly, you can always open the file.

Of course there’s no way to know the future potential of a data format on the day you select it. We don’t have the luxury of that kind of hindsight. I’m sure no one would’ve selected Lotus’s .123 format back in ‘93 had they known that Excel would come to dominate the world of spreadsheets. Look for well-supported open standards like CSV or JSON for long-term archival. Another good habit is to revisit your data archives for general “hygiene” every few years. Are your old files still usable? The faster you can convert dead formats into something more future-proof, the better.
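To make the open-formats point concrete, here’s a minimal sketch (the record fields and function name are invented for illustration) of flattening records into CSV, the kind of plain, future-proof format that nearly any tool, now or decades from now, can read:

```javascript
// Sketch: serialize records into plain CSV for long-term archival.
// Field names here are hypothetical examples.

// Convert an array of flat objects into CSV text with a header row.
function toCsv(records) {
  if (records.length === 0) return '';
  const headers = Object.keys(records[0]);
  // Quote any field containing commas, quotes, or newlines (per CSV convention).
  const escape = (value) => {
    const s = String(value);
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  };
  const rows = records.map((r) => headers.map((h) => escape(r[h])).join(','));
  return [headers.join(','), ...rows].join('\n');
}

const inspections = [
  { id: 1, site: 'Main St, Tampa', result: 'pass' },
  { id: 2, site: 'Oak Ave', result: 'fail' },
];
const csv = toCsv(inspections);
// id,site,result
// 1,"Main St, Tampa",pass
// 2,Oak Ave,fail
```

The same records could just as easily be archived as JSON with `JSON.stringify(inspections)`; the important choice is the open, documented format, not the particular tool that wrote it.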

Accessibility

This is one of the most important issues when it comes to using archives of historical data. Presuming a user can open 10-year-old files because you’ve stored them effectively in open formats — is the data somewhere that staff can get to it? Is it published in a shared workspace for easy access? Data usually isn’t squirreled away in a hard-to-reach place intentionally; more often it’s done for the sake of organization, cleanliness, or savings on storage.

Anyone who works frequently with data has heard of “data silos,” which arise when data is holed up where it doesn’t get shared, accessible only to individual departments or groups. Avoiding this issue can also involve internal corporate policy shifts or revisiting your data security policies. In larger organizations I’ve worked in, however, the tendency is toward over-securing data to the point of uselessness. In some cases it might as well be deleted, since it’s effectively invisible to the entire company. This is a mistake and a waste of large past investments in collecting that data in the first place.

Look for publishing tools that make your data easy to get to without sacrificing controls over access and security. But resist the urge to continuously wall off past data from your team.

Identifying the Useful Things

Now, assuming your data is in a useful format and it’s easily accessible, you’re almost there. When working with years of historical records it can be difficult to extract the valuable bits of information, but that’s often because the first two challenges (compatibility and accessibility) have already been standing in your way. If your data collection process is built around your data as an evergreen asset rather than a single-purpose resource, it becomes much easier to think of areas where a dataset could be useful 5 or 6 years down the road.

For instance, if your data collection process includes documenting inspections with thorough before-and-after photographs, those could be indispensable in the event of a dispute or a future issue in years’ time. With ease of access and an open format, it could take two clicks to resolve a potentially thorny issue with a past client. That is, if you’ve planned your process around your data becoming a valuable corporate resource.

A quick story to demonstrate these practices:

I’m currently working with a construction company on re-roofing my house, and they’ve been in business for 50+ years. Over that time span, they’ve performed site visits and accurately measured so many roofs in the area that when they get calls for quotes, they often can pull a file from 35 years ago when they went out and measured a property. That simple case is an excellent example of realizing latent value in a prior investment in data: if they didn’t organize, archive, and store that information effectively, they’d be redoing field visits every week. Though they aren’t digital with most of their process, they’ve nailed a workflow that works for them. They use formats that work, make that data accessible to their people, and know exactly what information they’ll find useful over the long term.

Data has value beyond its immediate use case, but you have to consider this up front. Design sustainable workflows that let you continuously update data and make use of archival data over time. You’ve spent a lot to create it; you should be leveraging it to its fullest extent.

  1. Lotus 1-2-3 was a spreadsheet application popular in the 80s and 90s. It succumbed to the boom of Microsoft Office and Excel in the 1990s. ↩

✦

9/11

September 11, 2019 • #

I’m always proud of our annual tributes. It matters to keep perspective on how bad things can get, how good most of us really have it, and to remember the first responders, public servants, and national security forces that work to keep it that way.

Fulcrum 9/11
✦

Group Training

August 28, 2019 • #

Our SNI running club on Strava keeps expanding. We’ve got 12 members now and counting. Two people are committed to marathons in the fall, and two of us to half-marathons.

Somewhere in my reading about marathon training, I learned that the community aspect of a training plan is one of the most important parts: finding a group of people around you for mutual support and motivation along the way. Proper training (aside from the physical effort) is time-consuming and requires consistency to get 4 or more activities in per week without falling off the wagon. It certainly helps to have the visibility of those around you keeping their habits going as a motivator to push yourself.

When we do our semi-annual All Hands events with the whole team in the office for a week, we now have something of a tradition of doing a group run sometime when we’re all together. I think we’ve done it for 2 or 3 years now pretty consistently. It looks like the upcoming November event we’ll be mobilizing about 15 of us or so to get out there and do at least a 5K. There’s a half-dozen of us that are real active and do this routinely, but it’s awesome to see the communal gravitational pull working, attracting many to join in who are really just trying to get moving on building the habit.

This’ll be right after my half-marathon, so it might be the first recovery run after that race.

✦

Shipping the Right Product

August 14, 2019 • #

This is one from the archives, originally written for the Fulcrum blog back in early 2017. I thought I’d resurface it here since I’ve been thinking more about continual evolution of our product process. I liked it back when I wrote it; still very relevant and true. It’s good to look back in time to get a sense for my thought process from a couple years ago.

In the software business, a lot of attention gets paid to “shipping” as a badge of honor if you want to be considered an innovator. Like any guiding philosophy, it’s best used as a general rule rather than as the primary yardstick by which you measure every individual decision. Agile, scrum, TDD, BDD — they’re all excellent practices for keeping teams focused on results. After all, the longer you’re polishing your work and not putting it in the hands of users, the less you know about how they’ll be using it once you ship it!

These systems, followed as gospel (particularly with larger projects or products), can lead to attention on the how rather than the what — thinking about the process in terms of shipping “lines of code” or which text editor you’re using rather than useful results for users. Loops of user feedback are essential to building the right solution for the problem you’re addressing with your product.

Shipping the right product

Thinking more deeply about the desire to both ship _something_ rapidly and ensure it aligns with product goals brings to mind a few questions to reflect on:

  • What are you shipping?
  • Is what you’re shipping actually useful to your user?
  • How does the structure of your team impact your resulting product?

How can a team iterate and ship fast while also delivering the product they’ve promised customers, one that solves the expressed problem?

Defining product goals

In order to maintain a high tempo of iteration without simply measuring numbers of commits or how many times you push to production each day, the goals need to be oriented around the end result, not the means used to get there. Start by defining what success looks like in terms of the problem to be solved. Harvard Business School professor Clayton Christensen developed the jobs-to-be-done framework to help businesses break down the core linkages between a user and why they use a product or service1. Looking at your product or project through the lens of the “jobs” it does for the consumer helps clarify problems you should be focused on solving.

Most of us that create products have an idea of what we’re trying to achieve, but do we really look at a new feature, new project, or technique and truly tie it back to a specific job a user is expecting to get done? I find it helpful to frequently zoom out from the ground level and take a wider view of all the distinct problems we’re trying to solve for customers. The JTBD concept is helpful to get things like technical architecture out of your way and make sure what’s being built is solving the big problems we set out to solve. All the roadmaps, Gantt charts, and project schedules in the world won’t guarantee that your end result solves a problem2. Your product could become an immaculately built ship that’s sailing in the wrong direction. For more insight into the jobs-to-be-done theory, check out This is Product Management’s excellent interview with its co-creator, Karen Dillon.

Understanding users

On a similar thread as jobs-to-be-done, having a deep understanding of what the user is trying to achieve is essential in defining what to build.

This quote from the article gets to the heart of why it matters to understand with empathy what a user is trying to accomplish; it’s not always about our engineering-minded technical features or bells and whistles:

Jobs are never simply about function — they have powerful social and emotional dimensions.

The only way to unravel what’s driving a user is to have conversations and ask questions. Figure out the relationships between what the problem is and what they think the solution will be. Internally we talk a lot about this as “understanding pain.” People “hire” a product, tool, or person to reduce some sort of pain. Deep questioning to get to the root causes of pain is essential. Oftentimes people want to self-prescribe their solution, which may not be ideal. Just look how often a patient browses WebMD, then goes to the doctor with a preconceived diagnosis, without letting the expert do their job.

On the flip side, product creators need to enter these conversations with an open mind, and avoid creating a solution looking for a problem. Doctors shouldn’t consult patients and make assumptions about the underlying causes of a patient’s symptoms! They’d be in for some serious legal trouble.

Organize the team to reflect goals

One of my favorite ideas in product development comes from Steven Sinofsky, former Microsoft product chief of Office and Windows:

“Don’t ship the org chart.”

The salient point being that companies have a tendency to create products that align with areas of responsibility within the company3. However, the user doesn’t care at all about the dividing lines within your company, only the resulting solutions you deliver.

A corollary to this idea is that over time companies naturally begin to look like their customers. It’s clearly evident in the federal contracting space: federal agencies are big, slow, and bureaucratic, and large government contracting companies start to reflect these qualities in their own products, services, and org structures.

With our product, we see three primary points to make sure our product fits the set of problems we’re solving for customers:

  • For some, a toolbox — For small teams with focused problems, Fulcrum should be seamless to set up, purchase, and self-manage. Users should begin relieving their pains immediately.
  • For others, a total solution — For large enterprises with diverse use cases and many stakeholders, Fulcrum can be set up as a total turnkey solution for the customer’s management team to administer. Our team of in-house experts consults with the customer for training and on-boarding, and the customer ends up with a full solution and the toolbox.
  • Integrations as the “glue” — Customers large and small have systems of record and reporting requirements with which Fulcrum needs to integrate. Sometimes this is simple, sometimes very complex. But always the final outcome is a unique capability that can’t be had another way without building their own software from scratch.

Though we’re still a small team, we’ve tried to build up the functional areas around these objectives. As we advance the product and grow the team, it’s important to keep this in mind so that we’re still able to match our solution to customer problems.

For more on this topic, Sinofsky’s post on “Functional vs. Unit Organizations” analyzes the pros, cons, and trade-offs of different org structures and their impacts on product. A great read.

Continued reflection, onward and upward 📈

In order to stay ahead of the curve and Always Be Shipping (the Right Product), it’s important to measure user results, constantly and honestly. The assumption should be that any feature could and should be improved, if we know enough from empirical evidence how we can make those improvements. With this sort of continuous reflection on the process, hopefully we’ll keep shipping the Right Product to our users.

  1. Christensen is most well known for his work on disruption theory. ↩

  2. Not to discount the value of team planning. It’s a crucial component of efficiency. My point is the clean Gantt chart on its own isn’t solving a customer problem! ↩

  3. Of course this problem is only minor in small companies. It’s of much greater concern to the Amazons and Microsofts of the world. ↩

✦

Weekend Reading: Rhythmic Breathing, Drowned Lands, and Fulcrum SSO

July 20, 2019 • #

🏃🏻‍♂️ Everything You Need to Know About Rhythmic Breathing

I tried this out the other night on a run. The technique makes some intuitive sense that it’d reduce impact (or at least level it out side to side). Surely to notice any result you’d have to do it consistently over distance. But I’ve had some right knee soreness whose origin I don’t totally know, so I thought I’d start trying this out. I found it takes a lot of concentration to keep up consistently. I’ll keep testing it out.

🏞 Terrestrial Warfare, Drowned Lands

A neat historical, geographical story from BLDGBLOG:

Briefly, anyone interested in liminal landscapes should find Snell’s description of the Drowned Lands, prior to their drainage, fascinating. The Wallkill itself had no real path or bed, Snell explains, the meadows it flowed through were naturally dammed at one end by glacial boulders from the Ice Age, the whole place was clogged with “rank vegetation,” malarial pestilence, and tens of thousands of eels, and, what’s more, during flood season “the entire valley from Denton to Hamburg became a lake from eight to twenty feet deep.”

Turns out there was local disagreement on flood control:

A half-century of “war” broke out among local supporters of the dams and their foes: “The dam-builders were called the ‘beavers’; the dam destroyers were known as ‘muskrats.’ The muskrat and beaver war was carried on for years,” with skirmishes always breaking out over new attempts to dam the floods.

Here’s one example, like a scene written by Victor Hugo transplanted to New York State: “A hundred farmers, on the 20th of August, 1869, marched upon the dam to destroy it. A large force of armed men guarded the dam. The farmers routed them and began the work of destruction. The ‘beavers’ then had recourse to the law; warrants were issued for the arrest of the farmers. A number of their leaders were arrested, but not before the offending dam had been demolished. The owner of the dam began to rebuild it; the farmers applied for an injunction. Judge Barnard granted it, and cited the owner of the dam to appear and show cause why the injunction should not be made perpetual. Pending a final hearing, high water came and carried away all vestige of the dam.”

🔐 Fulcrum SAML SSO with Azure and Okta

This is something we launched a few months back. There’s nothing terribly exciting about building SSO features in a SaaS product — it’s table stakes to move up in the world with customers. But for me personally it’s a signal of success. Back in 2011, imagining that we’d ever have customers large enough to need SAML seemed so far in the future. Now we’re there and rolling it out for enterprise customers.

✦

On Retention

July 12, 2019 • #

Earlier this year at SaaStr Annual, we spent 3 days with 20,000 people in the SaaS market, hearing about best practices from the best in the business, from all over the world.

If I had to take away a single overarching theme this year (not by any means “new” this time around, but louder and present in more of the sessions), it’s the value of customer success and retention of core, high-value customers. It’s always been one of SaaStr founder Jason Lemkin’s core focus areas in his literature about how to “get to $10M, $50M, $100M” in revenue, and interwoven through many sessions were topics and questions in this area — onboarding, “aha moments,” retention, growth, community development, and continued incremental increases in product value through enhancements and new features.

Mark Roberge (former CRO of Hubspot) had an interesting talk that covered this topic. In it he focused on the power of retention and how to think about it tactically at different stages in the revenue growth cycle.

If you look at growth (adding new revenue) and retention (keeping and/or growing existing revenue) as two axes on a chart of overall growth, a couple of broad options present themselves to get the curve arrow up and to the right:

Retention vs. growth

If you have awesome retention, you have to figure out adding new business. If you’re adding new customers like crazy but have trouble with customer churn, you have to figure out how to keep them. Roberge summed up his position after years of working with companies:

It’s easier to accelerate growth with world class retention than fix retention while maintaining rapid growth.

The literature across industries is also in agreement on this. There’s an adage in business that it’s “cheaper to keep a customer than to acquire a new one.” But to me there’s more to this notion than the avoidance of the acquisition cost for a new customer, though that’s certainly beneficial. Rather it’s the maximization of the magic SaaS metric: LTV (lifetime value). If a subscription customer never leaves, their revenue keeps growing ad infinitum. This is the sort of efficiency every SaaS company is striving for — to maximize fixed investments over the long term. It’s why investors are valuing SaaS businesses at 10x revenue these days. But you can’t get there without unlocking the right product-market fit to switch on this kind of retention and growth.
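To make the LTV point concrete, here’s a minimal sketch of the standard constant-churn lifetime-value arithmetic. The function and every number in it are purely illustrative — not figures from any of the talks:

```python
# Illustrative only: a simple constant-churn LTV model.
# LTV = monthly revenue per account (ARPA) x gross margin / monthly churn.

def ltv(arpa_monthly, gross_margin, monthly_churn):
    """Expected lifetime value of one customer under constant churn."""
    return arpa_monthly * gross_margin / monthly_churn

# Halving churn doubles lifetime value, all else equal:
print(ltv(100, 0.80, 0.02))  # $100/mo account, 2% monthly churn -> 4000.0
print(ltv(100, 0.80, 0.01))  # same account, 1% monthly churn -> 8000.0
```

As churn approaches zero the denominator vanishes — the “never leaves, revenue grows ad infinitum” case.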

So Roberge recommends keying in on this factor. One of the key first steps in establishing a strong position with any customer is to have a clear definition of when they cross a product fit threshold — when they reach the “aha” moment and see the value for themselves. He calls this the “customer success leading indicator”, and explains that all companies should develop a metric or set of metrics that indicates when customers cross this mark. Some examples from around the SaaS universe of how companies are measuring this:

  • Slack — 2000 team messages sent
  • Dropbox — 1 file added to 1 folder on 1 device
  • Hubspot — Using 5 of 20 features within 60 days

Each of these companies has correlated these figures with strong customer fits. When these targets are hit, there’s a high likelihood that a customer will convert, stick around, and even expand. It’s important that the selected indicator be clear and consistent between customers and meet some core criteria:

  • Observable in weeks or months, not quarters or years — need to see rapid feedback on performance.
  • Measurement can be automated — again, need to see this performance on a rolling basis.
  • Ideally correlated to the product core value proposition — don’t pick things that are “measurable” but don’t line up with our expectations of “proper use.” For example, in Fulcrum, whether the customer creates an offline map layer wouldn’t correlate strongly with the core value proposition (in isolation).
  • Repeat purchase, referral, setup, usage, ROI are all common (revenue usually a mistake — it’s a lagging rather than a leading indicator)
  • Okay to combine multiple metrics — derived “aggregate” numbers would work, as long as they aren’t overcomplicated.

The next step is to understand what portion of new customers reach this target (ideally all customers reach it) and when, then measure by cohort group. Putting together cohort analyses allows you to chart the data over time, and make iterative changes to early onboarding, product features, training, and overall customer success strategy to turn the cohorts from “red” to “green”.

Retention cohorts
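The cohort exercise described above can be sketched in a few lines of Python. The customer records and the 60-day activation window here are hypothetical placeholders for whatever your product analytics actually captures:

```python
# A minimal sketch of cohort-based activation tracking. Data is invented:
# each record is (signup_month, days_to_hit_activation_metric or None).
from collections import defaultdict

customers = [
    ("2019-01", 12), ("2019-01", None), ("2019-01", 30),
    ("2019-02", 8),  ("2019-02", 15),
    ("2019-03", None), ("2019-03", 5),
]

def activation_rate_by_cohort(records, within_days=60):
    """Fraction of each signup cohort hitting the 'aha' metric in time."""
    totals, activated = defaultdict(int), defaultdict(int)
    for cohort, days in records:
        totals[cohort] += 1
        if days is not None and days <= within_days:
            activated[cohort] += 1
    return {c: activated[c] / totals[c] for c in sorted(totals)}

print(activation_rate_by_cohort(customers))
```

Charting these rates month over month is what turns the cohorts from “red” to “green” as onboarding and product changes land.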

We do cohort tracking already, but it’d be hugely beneficial to analyze and articulate this through the filter of a key customer success metric and track it as closely as MRR. I think a hybrid reporting mechanism that tracks MRR, customer success metric achievement, and NPS by cohort would show strong correlation between each. The customer success metric can serve as an early signal of customer “activation” and, therefore, future growth potential.

Customer success leading indicator

I also sat in on a session with Tom Tunguz, VC from RedPoint Ventures, who presented on a survey they had conducted with almost 600 different business SaaS companies across a diverse base of categories. The data demonstrated a number of interesting points, particularly on the topic of retention. Two of the categories touched on were logo retention and net dollar retention (NDR). More than a third of the companies surveyed retain 90+% of their logos year over year. My favorite piece of data showed that larger customers churn less — the higher products go up market, the better the retention gets. This might sound counterintuitive on the surface, but as Tunguz pointed out in his talk, it makes sense when you think about the buying process in large vs. small organizations. Larger customers are more likely to have more rigid, careful buying processes (as anyone doing enterprise sales is well aware) than small ones, which are more likely to buy things “on the fly” and also invest less time and energy in their vendors’ products. The investment poured in by an enterprise customer makes them averse to switching products once on board1:

Enterprise churn is lower

On the subject of NDR, Tunguz reports that the tendency toward expansion scales with company size, as well. In the body of customers surveyed, those that focus on the mid-market and enterprise tiers report higher average NDR than SMB. This aligns with the logic above on logo retention, but there’s also the added factor that enterprises have more room to go higher than those on the SMB end of the continuum. The higher overall headcount in an enterprise leaves a higher ceiling for a vendor to capture:

Enterprise expansion
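For reference, net dollar retention is computed from a cohort’s starting revenue plus expansion, minus contraction and churn. A quick sketch, with invented dollar figures:

```python
# Standard NDR arithmetic for one cohort over one period. Figures are
# made up for illustration.

def net_dollar_retention(start_mrr, expansion, contraction, churned):
    """NDR = (start + expansion - contraction - churned) / start."""
    return (start_mrr + expansion - contraction - churned) / start_mrr

# A cohort starting at $100k MRR that expands $20k, contracts $5k,
# and loses $10k to churn finishes the period above 100% NDR:
ndr = net_dollar_retention(100_000, 20_000, 5_000, 10_000)
print(f"{ndr:.0%}")
```

NDR above 100% means a cohort grows even with zero new sales — the enterprise-expansion effect Tunguz’s data points to.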

Overall, there are two big takeaways worth bringing home and incorporating:

  1. Create (and subsequently monitor) a universal “customer success indicator” that gives a barometer for measuring the “time to value” for new customers, and segment accordingly by size, industry, and other variables.
  2. Focus on large Enterprise organizations — particularly their use cases, friction points to expansion, and customer success attention.

We’ve made good headway on a lot of these findings with our Enterprise product tier for Fulcrum, along with the sales and marketing processes to get it out there. What’s encouraging about these presentations is that we already see numbers leaning in this direction, aligning with the “best practices” each of these speakers presented — strong logo retention and north of 100% NDR. We’ve got some other tactics in the pipeline, as well as product capabilities, that we’re hoping will bring even greater efficiency, along with the requisite additional value to our customers.

  1. Assuming there’s tight product-market fit, and you aren’t selling them shelfware! ↩

✦

Andy Grove on Meetings

June 21, 2019 • #

You hear the criticism all the time around the business world about meetings being useless, a waste of time, and filling up schedules unnecessarily.

A different point of view on this topic comes from Andy Grove in his book High Output Management. It’s 35 years old, but much of it is just as relevant today as back then, with timeless principles on work.

Grove is adamant that for the manager, the “meeting” is an essential piece in the managerial leverage toolkit. From page 53:

Meetings provide an occasion for managerial activities. Getting together with others is not, of course, an activity—it is a medium. You as a manager can do your work in a meeting, in a memo, or through a loudspeaker for that matter. But you must choose the most effective medium for what you want to accomplish, and that is the one that gives you the greatest leverage.

This is an interesting distinction from the way you hear meetings described often. That they should be thought of as a medium rather than an activity is an important difference in approach. When many people talk about the uselessness of meetings, I would strongly suspect that the medium is perhaps mismatched to the work that needs doing. Though today we have many media through which to conduct managerial work — meetings, Slack channels, emails, phone calls, Zoom video chats — the point is you shouldn’t ban the medium entirely if your problem is really something else. I know when I find myself in a useless meeting, its “meetingness” isn’t the issue; it’s that we could’ve accomplished the goal with a well-written document with inline comments, an internal blog post, an open-ended Slack chat, or a point-to-point phone call between two people. Or, alternately, it could be that a meeting is the optimal medium, but the problem lies elsewhere in planning, preparation, action-orientation, or the who’s who in attendance1.

We should focus our energies on maximizing the impact of meetings by fitting them in when they’re the right medium for the work. As Grove notes on page 71:

Earlier we said that a big part of a middle manager’s work is to supply information and know-how, and to impart a sense of the preferred method of handling things to the groups under his control and influence. A manager also makes and helps to make decisions. Both kinds of basic managerial tasks can only occur during face-to-face encounters, and therefore only during meetings2. Thus I will assert again that a meeting is nothing less than the medium through which managerial work is performed. That means we should not be fighting their very existence, but rather using the time spent in them as efficiently as possible.

  1. A major issue I see in many meetings (as I’m sure we all do) is a tendency to over-inflate the invite list. A fear of someone missing out often crowds the conversation, spends human hours unnecessarily, and invites the occasional “I’m here so I better say something” contributions from those with no skin in the outcome. ↩

  2. This shows some age as we have so many more avenues for engagement today than in 1983, but his principle about fitting the work to the medium still holds. ↩

✦

The Second Phase: allinspections

June 3, 2019 • #

This post is part 3 in a series about my history in product development. Check out the intro in part 1 and all about our first product, Geodexy, in part 2.

Back in 2010 we decided to halt our development of Geodexy and regroup to focus on a narrower segment of the marketplace. With what we’d learned in our go-to-market attempt on Geodexy, we wanted to isolate a specific industry we could focus our technology around. Our tech platform was strong, we were confident in that. But at the peak of our efforts with taking Geodexy to market, we were never able to reach a state of maturity to create traction and growth in any of the markets we were targeting. Actually, targeting is the wrong word — truthfully that was the issue: we weren’t “targeting” anything because we had too many targets to shoot at.

We needed to take our learnings, regroup on what was working and what wasn’t, and create a single focal point we could center all of our effort around, not just the core technology, but also our go-to-market approach, marketing strategy, sales, and customer development.

I don’t remember the specific genesis of the idea (I think it was part internal idea generation, part serendipity), but we connected on the notion of field data collection for the property inspection market. So we launched allinspections.

allinspections

That industry had the hallmarks of one ripe for us to show up with disruptive technology:

  • Low current investment in technology — Most folks were doing things on paper with lots of transcribing and printing.
  • Lots of regulatory basis in the workflow — Many inspections are done as a requirement by a regulatory body. This meant consistent, widespread needs that crossed geographic boundaries, and an “always-on” use case for a technology solution.
  • Phased workflow with repetitive process and “decision tree” problems — a perfect candidate for digitizing the process.
  • Very few incumbent technologies to replace — if there were competitors at all, they were Excel and Acrobat.
  • Smartphones ready to amplify a mobile-heavy workflow — Inspections of all sorts happen in-situ somewhere in the field.

While the market for facility and property inspections is immense, we opted to start on the retail end of the space: home inspections for residential real estate. There was a lot to like about this strategy for a technology company looking to build something new. We could identify individual early adopters, gradually understand what made their business tick, and index on capability that empowered them. There was no need immediately to worry about selling to massive enterprise organizations, which would’ve put a heavy burden on us to build “box-checking” features like hosting customization, access controls, single sign-on, and the like. We used a freemium model which helped attract early usage, then shifted to a free trial one later on after some early traction.

Overall the biggest driver that attracted us to residential was the consistency of the work. Anyone who’s bought property is familiar with the process of getting a house inspected before closing, but that sort of inspection is low volume compared to those associated with insurance underwriting. Our first mission was this: to build the industry-standard tool for performing these regulated inspections in Florida — wind mitigation, 4-point, and roof certification. These were (and still are) done by the thousands every day. They were perfect candidates for us for the reasons listed above: simple, standard, ubiquitous, and required1. There was a built-in market for automating the workflow around them and improving the data collected, which we could use as a beachhead to get folks used to using an app to conduct their inspections.

Our hypothesis was that we could apply the technology for mobile data collection we’d built in Geodexy and “verticalize” it around the specialty of property inspection with features oriented around that problem set. Once we could spin up enough technology adoption for home inspection use cases at the individual level, we could then bridge into the franchise operations and institutions (even the insurance companies themselves) to standardize on allinspections for all of their work.

We had good traction in the early days with inspectors. It didn’t take us long before we connected with a half-dozen tech-savvy inspectors in the area to work with as guinea pigs to help us advance the technology. Using their domain expertise in exchange for usage of the product, we were able to fast-forward on our understanding of the inspection workflow — from original request handling and scheduling, to inspecting on-site, then report delivery to customer. Within a year we had a pretty slick solution and 100 or so customers that swore by the tool for getting their work done.

But it didn’t take us long to run into friction. Once we’d exhausted the low-hanging fruit of the early adopter community, it became harder and harder to find more of the tech-savvy crowd willing to splash some money on something new and different. As you might expect, the community of inspectors we were targeting were not technologists. Many of these folks were perfectly content with their paperwork process and enjoyed working solo. Many had no interest in building a true business around their operation, nor in growing into a company with multiple inspectors covering wider geographies. Others were general contractors doing inspections as a side gig, so it wasn’t even their core day-to-day job. With that kind of fragmentation, it was difficult to reach the economies of scale we were looking for to be able to sell something at the price point where we needed to be. We had some modest success pursuing the larger nationwide franchise organizations, but our sales and onboarding strategy wasn’t conducive to getting those deals beyond the small pilot stage. It was still too early for that. We wanted to get to B2B customer sizes and margins, but were ultimately still selling a B2C application. Yes, a home inspector has a business that we were selling to, but the fundamentals of the relationship share far more in common with a consumer product relationship than a corporate one.

By early 2012 we’d stalled out on growth at the individual level. A couple of opportunities to partner with inspection companies on a comprehensive solution for carriers failed, partially for technical reasons, but also because of the immaturity of our market. We didn’t have a reference base sizable enough to jump all the way up to selling 10,000 seats without enormous burden and too much overpromising on what we could do.

We shut down operations on allinspections in early 2012. We had suspected this would have to happen for a while, so it wasn’t a sudden decision. But it always hurts to have to walk away from something you poured so much time and energy into.

I think the biggest takeaway for me at the time, and in the early couple years of success on Fulcrum, was how relatively little the specifics of your technology matter if you mess up the product-market fit and go-to-market steps in the process. The silver lining in the whole affair was (like many things in product companies) that there was plenty to salvage and carry on to our next effort. We learned an enormous amount about what goes into building a SaaS offering and marketing it to customers. Coming from Geodexy where we never even reached the stage of having a real “customer success” process to deal with, allinspections gave us a jolt in appreciation for things like identifying the “aha moment” in the product, increasing usage of a product, tracking usage of features to diagnose engagement gaps, and ultimately, getting on the same page as the customer when it comes to the final deliverable. It takes working with customers and learning the deep corners of the workflow to identify where the pressure points are in the value chain, the things that keep the customer up at night when they don’t have a solution.

And naturally there was plenty of technology to bring forward with us to our next adventure. The launch of Fulcrum actually pre-dates the end of allinspections, which tells you something about how we were thinking at the time. We weren’t thinking of Fulcrum as the “next evolution” of allinspections necessarily, but we were thinking about going bigger while fixing some of the mistakes made a year or two prior. While most of Fulcrum was built ground-up, we brought along some code, and a whole boatload of lessons learned on systems, methods, and architecture, which helped us launch and grow Fulcrum as quickly as we did.

Retrospectives like this help me to think back on past decisions and process some of what we did right and wrong with some separation. That separation can be a blessing in being able to remove personal emotion or opinion from what happened and look at it objectively, so it can serve as a valuable learning experience. Sometime down the road I’ll write about this next evolution that led to where we are today.

  1. Since the mid-2000s, all three of these inspection types are required for insurance policies in Florida. ↩

✦

Discovering QGIS

May 29, 2019 • #

This week we’ve had Kurt Menke in the office (of Bird’s Eye View GIS) providing a guided training workshop for QGIS, the canonical open source GIS suite.

It’s been a great first two days covering a wide range of topics from his book titled Discovering QGIS 3.

The team attending the workshop is a diverse group with varied backgrounds. Most are GIS professionals using this as a means to get a comprehensive overview of the basics of “what’s in the box” on QGIS. All of the GIS folks have the requisite background using Esri tools throughout their training, but some of us that have been playing in the FOSS4G space for longer have been exposed to and used QGIS for years for getting work done. We’ve also got a half dozen folks in the session from our dev team that know their way around Ruby and Python, but don’t have any formal GIS training in their background. This is a great way to get folks exposure to the core principles and technology in the GIS professional’s toolkit.

Kurt’s course is an excellent overview that covers the ins and outs of using QGIS for geoprocessing and analysis, and touches on lots of the essentials of GIS (the discipline) along the way. All of your basics are in there — clips / unions / intersects and other geoprocesses, data management, editing, attribute calculations (with some advanced expression-based stuff), joins and relates, and a deep dive on all of the powerful symbology and labeling engines built into QGIS these days1.

The last segment of the workshop is going to cover movement data with the Time Manager extension and some other visualization techniques.

  1. Hat tip to Nyall Dawson of North Road Geographics (as well as the rest of the contributor community) for all of the amazing development that’s gone into the 3.x release of QGIS! ↩

✦

Weekend Reading: Data Moats, China, and Distributed Work

May 25, 2019 • #

🏰 The Empty Promise of Data Moats

In the era of every company trying to play in machine learning and AI technology, I thought this was a refreshing perspective on data as a defensible element of a competitive moat. There’s some good stuff here in clarifying the distinction between network effects and scale effects:

But for enterprise startups — which is where we focus — we now wonder if there’s practical evidence of data network effects at all. Moreover, we suspect that even the more straightforward data scale effect has limited value as a defensive strategy for many companies. This isn’t just an academic question: It has important implications for where founders invest their time and resources. If you’re a startup that assumes the data you’re collecting equals a durable moat, then you might underinvest in the other areas that actually do increase the defensibility of your business long term (verticalization, go-to-market dominance, post-sales account control, the winning brand, etc).

Companies should perhaps be less enamored of the “shiny object” of derivative data and AI, and instead invest in execution in areas challenging for all businesses.

🇨🇳 China, Leverage, and Values

An insightful piece this week from Ben Thompson on the current state of the trade standoff between the US and China, and the blocking of Chinese behemoths like Huawei and ZTE. The restrictions on Huawei will mean some major shifts in trade dynamics for advanced components, chip designs, and importantly, software like Android:

The reality is that China is still relatively far behind when it comes to the manufacture of most advanced components, and very far behind when it comes to both advanced processing chips and also the equipment that goes into designing and fabricating them. Yes, Huawei has its own system-on-a-chip, but it is a relatively bog-standard ARM design that even then relies heavily on U.S. software. China may very well be committed to becoming technologically independent, but that is an effort that will take years.

The piece references this article from Bloomberg, an excellent read on the state of affairs here.

⌨️ The Distributed Workplace

I continue to be interested in where the world is headed with remote work. Here InVision’s Mark Frein looks back at what traits make for effective distributed companies, starting with history of past experiences of remote collaboration from music production, to gaming, to startups. As he points out, you can have healthy or harmful cultures in both local and distributed companies:

Distributed workplaces will not be an “answer” to workplace woes. There will be dreary and sad distributed workplaces and engaged and alive ones, all due to the cultural experience of those virtual communities. The key to unlocking great distributed work is, quite simply, the key to unlocking great human relationships — struggling together in positive ways, learning together, playing together, experiencing together, creating together, being emotional together, and solving problems together. We’ve actually been experimenting with all these forms of life remote for at least 20 years at massive scales.

✦

Weekend Reading: Product Market Fit, Stripe's 5th Hub, and Downlink

May 11, 2019 • #

🦸🏽‍♂️ How Superhuman Built an Engine to Find Product/Market Fit

As pointed out in this piece from Rahul Vohra, founder of Superhuman, most indicators around product-market fit are lagging indicators. With his company he was looking for leading indicators so they could more accurately predict adoption and retention after launch. His approach is simple: polling your early users with a single question — “How would you feel if you could no longer use Superhuman?”

Too many example methods in the literature on product development orient around asking for user feedback in a positive direction — things like “how much do you like the product?” or “would you recommend it to a friend?” Coming at it from the counterpoint of “what if you couldn’t use it?” reverses this. It makes the user think about their own experience with the product, versus a disembodied imaginary user that might use it. It brought to mind a piece of the Paul Graham essay “Startup Ideas” on what happens if you go with the wrong measures of product-market fit:

The danger of an idea like this is that when you run it by your friends with pets, they don’t say “I would never use this.” They say “Yeah, maybe I could see using something like that.” Even when the startup launches, it will sound plausible to a lot of people. They don’t want to use it themselves, at least not right now, but they could imagine other people wanting it. Sum that reaction across the entire population, and you have zero users.

🛤 Stripe’s Fifth Engineering Hub is Remote

Remote work is creeping up in adoption as companies become more culturally okay with the model, and as enabling technology makes it more effective. In the tech scene it’s common for companies to hire remote, to a point (as Benedict Evans joked: “we’re hiring to build a communications platform that makes distance irrelevant. Must be willing to relocate to San Francisco.”) It’s important for the movement that large and influential companies like Stripe take this on as a core component of their operation. Companies like Zapier and Buffer are famously “100% remote” — a new concept that, if executed well, gives companies an advantage to compete in markets they might never reach otherwise.

🛰 Downlink

A neat Mac app that puts real-time satellite imagery on your desktop background. Every 20 minutes you can have the latest picture of the Earth.

✦

Weekend Reading: Gene Wolfe, Zoom, and Inside Spatial Networks

April 27, 2019 • #

📖 Gene Wolfe Turned Science Fiction Into High Art

Wolfe’s work, particularly his Book of the New Sun “tetralogy”, is some of my favorite fiction. He just passed away a couple weeks ago, and this is a great piece on his life leading up to becoming one of the most influential American writers. I recommend it to everyone I know interested in sci-fi. Even reading this made me want to dig up The Shadow of the Torturer and start reading it for a third time:

The language of the book is rich, strange, beautiful, and often literally incomprehensible. New Sun is presented as “posthistory”—a historical document from the future. It’s been translated, from a language that does not yet exist, by a scholar with the initials G.W., who writes a brief appendix at the end of each volume. Because so many of the concepts Severian writes about have no modern equivalents, G.W. says, he’s substituted “their closest twentieth-century equivalents” in English words. The book is thus full of fabulously esoteric and obscure words that few readers will recognize as English—fuligin, peltast, oubliette, chatelaine, cenobite. But these words are only approximations of other far-future words that even G.W. claims not to fully understand. “Metal,” he says, “is usually, but not always, employed to designate a substance of the sort the word suggests to contemporary minds.” Time travel, extreme ambiguity, and a kind of poststructuralist conception of language are thus all implied by the book’s very existence.

📺 Zoom, Zoom, Zoom! The Exclusive Inside Story Of The New Billionaire Behind Tech’s Hottest IPO

Zoom was in the news a lot lately, not only for its IPO, but also the impressive business they’ve put together since founding in 2011. It’s a great example of how you can build an extremely viable and healthy business in a crowded space with a focus on solid product execution and customer satisfaction. This profile of founder Eric Yuan goes into the core culture of the business and the grit that made the success possible.

🗺 A Look Inside The GIS World With Anthony Quartararo, CEO Of Spatial Networks

The folks over at FullStackTalent just published this Q&A with Tony in a series on business leaders of the Tampa Bay area. It gives some good insight into how we work, where we’ve come from, and what we do every day. There’s even a piece about our internal “GeoTrivia”, where my brain full of useless geographical information can actually get used:

Matt: What’s your favorite geography fun fact?

Tony: Our VP of Product, Coleman McCormick, is the longest-reigning champion of GeoTrivia, a competition we do every Friday. We just all give up because he [laughter], you find some obscure thing, like what country has the longest coastline in Africa, and within seconds, he’s got the answer. He’s not cheating, he just knows his stuff! We made a trophy, and we called it the McCormick Cup.

All that time staring at maps is finally useful!

✦

Weekend Reading: Running Maps, Thinking, and Remote Work

April 20, 2019 • #

🏃🏻‍♂️ On the Go Map

Found via Tom MacWright, a slick and simple tool for doing run route planning built on modern web tech. It uses basic routing APIs and distance calculation to help plan out runs, which is especially cool in new places. I used it in San Diego this past week to estimate a couple of the distances I ran. It also has a cool sharing feature to save and link to routes.
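
On the Go Map’s internals aren’t shown, but the distance piece of a tool like this is straightforward: sum great-circle (haversine) distances between the route’s vertices. A quick sketch of that idea in Python — not the site’s actual code, and the sample coordinates are purely illustrative:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    R = 6371000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def route_distance_km(points):
    """Sum pairwise distances along an ordered list of (lat, lon) vertices."""
    meters = sum(haversine_m(*a, *b) for a, b in zip(points, points[1:]))
    return meters / 1000

# A short out-and-back near San Diego's waterfront (approximate coordinates)
route = [(32.7157, -117.1611), (32.7211, -117.1692), (32.7157, -117.1611)]
print(round(route_distance_km(route), 2))
```

A real routing tool snaps the line to the road network first, so its vertex list is much denser — but the summation is the same.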

🔮 As We May Think

I mentioned scientist Vannevar Bush here a few days back. This is a piece he wrote for The Atlantic in 1945, looking forward at how machines and technology could become enhancers of human thinking. So many prescient segments foreshadowing current computer technology:

One can now picture a future investigator in his laboratory. His hands are free, and he is not anchored. As he moves about and observes, he photographs and comments. Time is automatically recorded to tie the two records together. If he goes into the field, he may be connected by radio to his recorder. As he ponders over his notes in the evening, he again talks his comments into the record. His typed record, as well as his photographs, may both be in miniature, so that he projects them for examination.

👨🏽‍💻 Best Practices for Managing Remote Teams

I thought this was an excellent rundown of remote work, who is suited for it, how to manage it, and the psychology of this new method of teamwork.

Let’s first cover values. Remote work is founded on specific core principles that govern this distinct way of operating which tend to be organization agnostic. They are the underlying foundation which enables us to believe that this approach is indeed better, more optimal, and thus the way we should live:

  • Output > Input
  • Autonomy > Administration
  • Flexibility > Rigidity

These values do not just govern individuals, but also the way that companies operate and how processes are formed. And like almost anything in life, although they sound resoundingly positive, they have potential pitfalls if not administered with care.

I found nearly all of this very accurate to my perception of remote work, at least from the standpoint of someone who is not remote, but manages and works with many that are. I’m highly supportive of hiring remote. With our team, we’ve gotten better in many ways by becoming more remote. And another (perhaps counterintuitive) observation: the more remote people you hire, the better the whole company gets at managing it.

✦

Spring 2019 All Hands

April 8, 2019 • #

Today kicked off our Spring 2019 All Hands. The 59-person team makes for an exciting, hectic, energizing, and fun week! Getting us all in a single room is pretty challenging these days. This morning Tony did his semiannual “AMA” to talk company strategy, focus, and what’s new in the business.

Spring 2019 All Hands
✦

Weekend Reading: T Cells, Creating Proteins, and SNI Awards

April 6, 2019 • #

🦠 T is for T Cell

After reading The Breakthrough, I’ve been doing more reading on immunotherapy, how it works, and what the latest science looks like. Another book in my to-read list is An Elegant Defense, a deeper study of how the immune system works. The human defensive system of white blood cells is a truly incredible evolutionary machine — a beautiful and phenomenally complex version of antifragility.

🧬 Engineering Proteins in the Cloud with Python

This stuff is crazy. Using modern compute, data science, and gene sequencing, you can now design proteins from your laptop:

Amazingly, we’re pretty close to being able to create any protein we want from the comfort of our jupyter notebooks, thanks to developments in genomics, synthetic biology, and most recently, cloud labs. In this article I’ll develop Python code that will take me from an idea for a protein all the way to expression of the protein in a bacterial cell, all without touching a pipette or talking to a human. The total cost will only be a few hundred dollars! Using Vijay Pande from A16Z’s terminology, this is Bio 2.0.
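
The article’s actual pipeline drives gene synthesis and a cloud lab, none of which is reproduced here. But just to make the “proteins from a notebook” idea concrete, here’s a tiny, self-contained sketch (not the author’s code) translating a coding DNA sequence into its protein using the standard genetic code:

```python
# Standard genetic code: 64 codons ordered T/C/A/G, first base slowest
BASES = "TCAG"
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {
    b1 + b2 + b3: aa
    for aa, (b1, b2, b3) in zip(
        AMINO_ACIDS, ((x, y, z) for x in BASES for y in BASES for z in BASES)
    )
}

def translate(dna):
    """Translate a coding DNA sequence into a protein, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3].upper()]
        if aa == "*":  # stop codon
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGGGTTCTTAA"))  # ATG GGT TCT TAA -> MGS
```

Going the other direction — from a desired protein back to synthesizable, codon-optimized DNA — is where the real pipeline (and the few hundred dollars) comes in.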

👩🏽‍💻 Spatial Networks Named a “Top Place to Work in Tampa Bay”

This is a fun one. I’ve been at Spatial Networks almost 10 years now. When I joined we were maybe 10 or 12 people, now we’re about 60 and still going up. It’s exciting to see the hard work paying off and validated — but like I say to our team all the time: it feels like we’re just getting started.

✦

Entering Product Development: Geodexy

March 27, 2019 • #

I started with the first post in this series back in January, describing my own entrance into product development and management.

When I joined the company we were in the very early stages of building a data collection tool, primarily for internal use to improve speed and efficiency on data project work. That product was called Geodexy, and the model was similar to Fulcrum in concept, but in execution and tech stack, everything was completely different. A few years back, Tony wrote up a retrospective post detailing out the history of what led us down the path we took, and how Geodexy came to be:

After this experience, I realized there was a niche to carve out for Spatial Networks but I’d need to invest whatever meager profits the company made into a capability to allow us to provide high fidelity data from the field, with very high quality, extremely fast and at a very low cost (to the company). I needed to be able to scale up or down instantly, given the volatility in the project services space, and I needed to be able to deploy the tools globally, on-demand, on available mobile platforms, remotely and without traditional limitations of software CDs.

Tony’s post was an excellent look back at the business origin of the product — the “why we decided to do it” piece. What I wanted to cover here was more on the product technology end of things, and our go-to-market strategy (if you could call it that). Prior to my joining, the team had put together a rough go-to-market plan trying to guesstimate TAM, market fit, customer need, and price points. Of course without real market feedback (as in, will someone actually buy what you’ve built, versus saying they would buy it one day), it’s hard to truly gauge the success potential.

Geodexy

Back then, the modern web frameworks in use today were around, but there were very few and none were yet mature — Rails and its peers were still young. It’s astonishing to think back on the tech stack we were using in the first iteration of Geodexy, circa 2008. That first version was built on a combination of Flex, Flash, MySQL, and Windows Mobile1. It all worked, but was cumbersome to iterate on even back then. This was not even that long ago, and at the time that was a reasonable suite of tooling; now it looks antiquated, and Flex was abandoned and donated to the Apache Foundation a long time ago.

We had success with that product version for our internal efforts; it powered dozens of data collection projects in 10+ countries around the world, allowing us to deliver higher-quality data than we could before. The mobile application (which was the key to the entire product achieving its goals) worked, but still lacked native integration of richer data sources — primarily photos and GPS data. The former could be done with some devices that had native cameras, but the built-in sensors were too low quality on most devices. The latter almost always required an external Bluetooth GPS device to integrate the location data.

It was all still an upgrade from pen, paper, and data transcription, but not free from friction on the ground at the point of data collection. Being burdened by technology friction while roaming the countryside collecting data doesn’t make for a smooth user experience or prevent problems. We still needed to come up with a better way to make it happen, for ourselves and absolutely before we went to market touting the workflow advantages to other customers.

Geodexy Windows Mobile

In mid-2009 we spun up an effort to reset on more modern technology we could build from, learning from our first mistakes and able to short-circuit a lot of the prior experimentation. The new stack was Rails, MongoDB, and PostgreSQL, which looking back from 10 years on sounds like a logical stack to use even today, depending on the product needs. Much of what we used back then still sits at the core of Fulcrum today.

What we never got to with the ultimate version of Geodexy was a modern mobile client for the data collection piece. Those were still the early days of the App Store, and I don’t recall how mature the Android Market (predecessor to Google Play) was back then, but we didn’t have the resources to start off with two mobile clients anyway. We actually had a functioning Blackberry app first, which tells you how different the mobile platform landscape looked a decade ago2.

Geodexy’s mobile app for iOS was, on the other hand, an excellent window into the potential iOS development unlocked for us as a platform going forward. In a couple of months, one of our developers who knew his way around C++ learned some Objective-C and put together a version that fully worked — offline support for data collection, automatic GPS integration, photos, the whole nine yards of the core toolset we always wanted. The new platform, with a REST API, online form designer, and iOS app, allowed us to up our game on Foresight data collection efforts in a way that we knew would have legs if we could productize it right.

We didn’t get much further along with the Geodexy platform as it was before we refocused our SaaS efforts around a new product concept that’d tie all of the technology stack we’d built around a single, albeit large, market: the property inspection business. That’s what led us to launch allinspections, which I’ll continue the story on later.

In an odd way, it’s pleasing to think back on the challenges (or things we considered challenges) at the time and think about how they contrast with today. We focused so much attention on things that, in the long run, aren’t terribly important to the lifeblood of a business idea (tech stack and implementation), and not enough on the things worth thinking about early on (market analysis, pricing, early customer development). Part of that I think stems from our indexing on internal project support first, but also from inexperience with go-to-market in SaaS. The learnings ended up being invaluable for future product efforts, and still help to inform decision making today.

  1. As painful as this sounds, we actually had a decent tool built on WM. But its usability was terrible, which, if you can recall the time period, was par for the course for mobile applications of all stripes. ↩

  2. That was a decade ago. Man. ↩

✦

Promoting GIS at Hunter College

November 14, 2018 • #

As premier sponsors of the American Geographical Society, we try to do our part in promoting geographic literacy, education, and the future of geo sciences.

Hunter College

Part of our efforts this week is participating in the GIS Career Fair at Hunter College in Manhattan. Bill and I were there to showcase how geography fits into our business and talking with students about what it means to build a career centered around GIS and mapping. We talked to dozens of people about all aspects of the industry, with a diverse group interested in environmental science, energy, space, and more.

It’s good to see the energy and excitement in the geospatial industry.

✦

All Hands 2018

October 14, 2018 • #

Spatial Networks is past 50 employees now, with a sizable remote group scattered all over the country. Even though we’ve grown substantially in 2018, we’ve been able to scale our processes, tools, and org chart to maintain pretty effective team dynamics and productivity. When we first started hiring remote folks back in 2010, we had nowhere near the foundation in place to have an effective distributed team.

This week is our 2nd “All Hands” of the year, where our entire remote team comes to St. Petersburg HQ for a week of teamwork, group projects, and fun camaraderie. A total of 18 people representing 11 states will be in town. These weeks are at once energizing, exciting, and exhausting — but also always a positive exercise. I’m glad to work at a place where we’ve consistently valued this investment and made the effort to keep it going as we’ve scaled.

✦

The Missing Communication Link

October 8, 2018 • #

Slack grew huge on the idea that it would “replace email” and become the digital hub for your whole company. In some organizations (like ours), it certainly has, or has at least subsumed almost all internal-only communication. Email still rules for long form official stuff. It’s booming into a multi-billion dollar valuation on its way to an IPO on this adoption wave.

But over the last couple of years there’s been something of a backlash to “live chat” systems. Of course any new tool can be abused to the point of counter-productivity. As tools like Slack and Intercom (a live chat support software) have become widespread, people and companies need to find normal patterns of use that are comfortable for everyone. In our company, Slack is where nearly everything happens — including quite a bit that, on the surface, looks like noise and random chatter (our #random is something to behold). One common argument is that people now spend more time keeping up with Slack conversation than they ever did with email. Maybe so, maybe not. But regardless, isn’t analysis of the time spent on one versus the other missing the point?

My general argument “pro-chat” is that a world with Slack adds the layer of communication that should have been happening all along and wasn’t. For me, I know that I’m better informed about the general activity of the business with Slack than without. It takes some care and attention to keep it from becoming a distraction when it’s unnecessary, but I’m willing to make the effort.

Anyone who compares the world of Corporate Slack to the prior one would notice a striking similarity in work patterns. Workplaces are social; people are people, and will talk, joke, commiserate, and enjoy each other’s company. I try to picture a world where we could effectively work as a distributed team with 50+ people dispersed over 11 states without tools like Slack. Looking at it that way, it’s easy to see the downsides as manageable things we’ll figure out.

Effectively using new systems for collaboration is just as much about adapting our own behavior as it is about the feature set of the new tool. No tool is perfect for everything (as much as their marketing might say so). I think much of the pushback comes from those who don’t want to change. They want all the benefits of a system that conforms around their comfort zone.

✦

A Product Origin Story

September 11, 2018 • #

Fulcrum, our SaaS product for field data collection, is coming up on its 7th birthday this year. We’ve come a long way: from a bootstrapped, barely-functional system at launch in 2011 to a platform with over 1,800 customers, healthy revenue, and a growing team expanding it to ever larger clients around the world. I thought I’d step back and recall its origins from a product management perspective.

We created Fulcrum to address a need we had in our business, and quickly realized its application to dozens of other markets with a slightly different color of the same issue: getting accurate field reporting from a deskless, mobile workforce back to a centralized hub for reporting and analysis. While we knew it wasn’t a brand new invention to create a data collection platform, we knew we could bring a novel solution combining our strengths, and that existing tools on the market had fundamental gaps in areas we saw as essential to our own business. We had a few core ideas, all of which combined would give us a unique and powerful foundation we didn’t see elsewhere:

  1. Use a mobile-first design approach — Too many products at the time still considered their mobile offerings afterthoughts (if they existed at all).
  2. Make disconnected, offline use seamless to a mobile user — They shouldn’t have to fiddle. Way too many products in 2011 (and many still today) took the simpler engineering approach of building for always-connected environments. (requires #1)
  3. Put location data at the core — Everything geolocated. (requires #1)
  4. Enable business analysis with spatial relationships — Even though we’re geographers, most people don’t see the world through a geo lens, but should. (requires #3)
  5. Make it cloud-centric — In 2011 desktop software was well on the way out, so we wanted a platform we could cloud host with APIs for everything. Building from primitive building blocks let us scale horizontally on the infrastructure.
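
Principle #2 above boils down to a local-first write path with opportunistic sync: the field user always writes to device storage, and uploads happen whenever connectivity allows. Fulcrum’s real implementation isn’t shown here, so this is only a toy sketch of the pattern, with every name invented for illustration:

```python
import json
from collections import deque

class OfflineRecordQueue:
    """Buffer field records locally and flush them whenever connectivity exists."""

    def __init__(self, send, is_online):
        self._pending = deque()   # stands in for an on-device store (e.g. SQLite)
        self._send = send         # stands in for an HTTP POST to the server API
        self._is_online = is_online

    def save(self, record):
        # Always write locally first: the field user never waits on the network.
        self._pending.append(json.dumps(record))
        self.flush()

    def flush(self):
        # Opportunistically drain the buffer when a connection is available.
        while self._is_online() and self._pending:
            self._send(self._pending.popleft())

# Demo: save while offline, then sync once connectivity returns
sent = []
online = {"up": False}
q = OfflineRecordQueue(sent.append, lambda: online["up"])
q.save({"lat": 32.7157, "lon": -117.1611, "note": "hydrant"})  # buffered only
online["up"] = True
q.flush()  # record uploads now
```

A production version also has to handle retries, conflicts, and partial uploads — but the core “fiddle-free” experience comes from this local-first ordering.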

Regardless of the addressable market for this potential solution, we planned to invest and build it anyway. At the beginning, it was critical enough to our own business workflow to spend the money to improve our data products, delivery timelines, and team efficiency. But when looking outward to others, we had a simple hypothesis: if we feel these gaps are worth closing for ourselves, the fusion of these ideas will create a new way of connecting the field to the office seamlessly, while enhancing the strengths of each working context. Markets like utilities, construction, environmental services, oil and gas, and mining all suffer from a similar body of logistical and information management challenges we did.

Fulcrum wasn’t our first foray into software development, or even our first attempt to create our own toolset for mobile mapping. Previously we’d built a couple of applications: one that never went to market and was completely internal-only, and one we did bring to market for a targeted industry (building and home inspections). Both petered out, but we took away revelations about how to do it better and apply what we’d done to a wider market. In early 2011 we went back to the whiteboard and conceptualized how to take what we’d learned the previous years and build something new, with the foundational approach above as our guidebook.

We started building in early spring, and launched in September 2011. It was free accounts only, didn’t have multi-user support, there was only a simple iOS client and no web UI for data management — suffice it to say it was early. But in my view this was essential to getting where we are today. We took our infant product to FOSS4G 2011 to show what we were working on to the early adopter crowd. Even with such an immature system we got great feedback. This was the beginning of learning a core competency you need to make good products, what I’d call “idea fusion”: the ability to aggregate feedback from users (external) and combine with your own ideas (internal) to create something unified and coherent. A product can’t become great without doing these things in concert.

I think it’s natural for creators to favor one path over the other — either falling into the trap of only building specifically what customers ask for, or creating based solely on their own vision in a vacuum, with little guidance from customers on what their pains actually look like. The key I’ve learned is to find a pleasant balance between the two. Unless you have razor-sharp predictive capabilities and total knowledge of customer problems, you end up chasing ghosts without course correction based on iterative user feedback. Mapping your vision to reality is challenging, and it assumes your vision is perfectly clear.

On the other hand, waiting at the beck and call of your user to dictate exactly what to build works well in the early days when you’re looking for traction, but without an opinion about how the world should be, you likely won’t do anything revolutionary. Most customers view a problem with a narrow array of options to fix it, not because they’re uninventive, but because designing tools isn’t their mission or expertise. They’re on a path to solve a very specific problem, and the imagination space of how to make their life better is viewed through the lens of how they currently do it. Like the quote (maybe apocryphally) attributed to Henry Ford: “If I’d asked customers what they wanted, they would’ve asked for a faster horse.” In order to invent the car, you have to envision a new product completely unlike the one your customer is even asking for, sometimes even requiring other industries to build up around you at the same time. When automobiles first hit the road, an entire network of supporting infrastructure existed around draft animals, not machines.

We’ve tried to hold true to this philosophy of balance over the years as Fulcrum has matured. As our team grows, the challenge of reconciling requests from paying customers and our own vision for the future of work gets much harder. What constitutes a “big idea” gets even bigger, and the compulsion to treat near term customer pains becomes ever more attractive (because, if you’re doing things right, you have more of them, holding larger checks).

When I look back to the early ‘10s at the genesis of Fulcrum, it’s amazing to think about how far we’ve carried it, and how evolved the product is today. But while Fulcrum has advanced leaps and bounds, it also aligns remarkably closely with our original concept and hypotheses. Our mantra about the problem we’re solving has matured over 7 years, but hasn’t fundamentally changed in its roots.

✦

Mapping Kabul

February 29, 2012 • #

We’ve just posted a map of Kabul, Afghanistan built from Spatial Networks map data. I built this a couple of months back (with TileMill) for some mobile field collection project work we were doing with Fulcrum. This is the sort of challenging work that our company is out there doing, bringing high-tech (yet cheap and simple) solutions to up-and-coming communities like Kabul.

✦