Our levels of productivity, creativity, and inspiration have an intimate, hard-to-articulate connection to our environments. And we all have different predilections – quiet vs. noisy, calm vs. bustling, light vs. dark. Each quality creates a climate that pulls something different out of us.
In his book Political Order and Political Decay, Francis Fukuyama has a section on corruption in political systems and how it impacts economic development:
There are many reasons why corruption impedes economic development. In the first place, it distorts economic incentives by channeling resources not into their most productive uses but rather into the pockets of officials with the political power to extract bribes. Second, corruption acts as a highly regressive tax: while petty corruption on the part of minor, poorly paid officials exists in many countries, the vast bulk of misappropriated funds goes to elites who can use their positions of power to extract wealth from the population.
Today the most famously corrupt regimes lead the least liberal, least free societies. In these unstable environments, government jobs are among the most attractive to ambitious people. In part that's because those jobs are more reliable than weak, inconsistent private sector jobs (and sometimes easier to get and retain), but the ease with which rents can be extracted in corrupt systems also attracts people ambitious to build personal wealth.
You see the inverse of this phenomenon in states with strong free market systems. A certain class of ambitious person is still attracted to government, but more often for reasons of celebrity or power than financial reasons. The potential for personally-enriching rent extraction is much lower. Brain drain happens in the public sector because many of the most ambitious for wealth and status see faster, more lucrative paths in the private market. So paradoxically, the lack of this personally-enriching career path could be impeding potential economic development, just as in poisoned systems, but for different reasons.
It's unfortunate that we squander our hard-won strong, corruption-resistant1 government system's performance because we can't find the funding to better pay our public servants. Our federal (and state) agencies don't realize how efficient this allocation of capital would be, compared to the many channels through which we hemorrhage money year after year. What would happen if we paid civil servants better? How many of the ambitious, entrepreneurial class would stick around and increase the state's capacity if they didn't become disillusioned with personal stagnation?
Of course we're far from immune here. But when juxtaposed with the political systems of Liberia or the DRC, we're doing pretty well.
Great insights here into some of the workings of Stripe from Patrick. Though I've never been on that ride, it's a marvel that a hypergrowth company (even with its flaws and stumbles) can be made functional at all.
Stripe seems exceptional in many ways, but in others it's just a solid example of the "doer" culture that great SV tech companies are known for. Stripe's Atlas product now looks like a polished, full-featured platform for creating a company, but as Patrick notes in this example, even well-funded tech companies have to be frugal, iterative, and resourceful to get ideas off the ground:
When I was on Stripe Atlas, a small, focused team with many high-horsepower generalists who largely didn't have a huge amount of entrepreneurial experience, part of my job was bringing skills and connections and part was just standing up portions of our offering by sheer force. We wanted helpful advice for founders and didn't have it; I locked myself in a room for a month and wrote a 30,000 word guide plus the ERB to put it on the Internet. We wanted to inculcate an Atlas community; I installed Discourse, wrote our SSO code for it, sent out invites, and commented on every thread for months.
On the default "bias to action" as part of this culture:
The returns to pushing your cadence to faster are everywhere and they compound continuously, for years. Don't send the email tomorrow. Don't default to scheduling the meeting for next week. Don't delay a worthy sprint until after the next quarterly planning exercise. Design control and decisionmaking structures to bias heavily in favor of preserving operating cadence.
I don't think Stripe is uniformly fast. I think teams at Stripe are just faster than most companies, blocked a bit less by peer teams, constrained a tiny bit less by internal tools, etc etc. There are particular projects which have been agonizingly long to ship; literally years after I would have hoped them done. But across the portfolio, with now hundreds of teams working, we just get more done than we "should" be able to.
Like most teams, we've now been fully remote and distributed since March 13th – almost exactly 5 months after moving a team of 50+ to fully remote, with no upfront plan on how best to organize ourselves.
About 20 of our team were already remote (scattered across the lower 48) before the COVID lockdown started, though several of them were in the office fairly regularly. That still leaves 30+ who were forced to figure out a remote work setup overnight. Even the previously remote staff had to get used to changes in communication, with the rest of the team adjusting in-flight.
So what's worked and what hasn't? What's the overall impact been?
My general view is that it hasn't impacted productivity overall in a terribly material way. After a few weeks to find stability with the work-from-home reality, things settled into a regular cadence for the most part. Aside from those of us with kids and other home impacts having to manage school closures, e-learning Zoom classes, and cabin-fevered children, the work cycle leveled off into a predictable flow.
Zoom life
Zoom fatigue is a serious thing. I don't think we're having more meetings or conversation on a minute-to-minute basis, but as many have pointed out during this lockdown, there's something different about voice and video interaction that absorbs more energy, draws greater attention bandwidth, or something. It also seems that with all people remote, there's a bit of a creeping tendency to inflate invitation lists and make meetings bigger than their in-person versions would be. No hard evidence of this, but it feels like we've got a higher average people-per-meeting than pre-quarantine. And for me, the more heads on the Zoom session, the more draining it tends to be.
Being on persistent video doesn't help, since it pressures you to sit still and be visible in the frame, when in person we'd often be up and about, at the whiteboard, leaning back in chairs, or getting something from the fridge. We haven't had enough time yet to develop the social norms about what's acceptable and not while on remote calls. Personally, I'm inclined toward seeing other people and having them see me, since we're all starved for the ability to interact face to face, but perhaps over time we'll work out some norms about when it's expected to be present at the desk and when voice-only would suffice.
Documents, artifacts, and async work
With collaboration, we've been far less impacted than I expected; we've been able to make do. Most product design groups live and breathe by sketching, drawing, or whiteboarding ideas, and I've yet to find any good distributed digital methods for replacing the exploratory process of sketching something out in a group setting. I've done a few Zoom calls where I'll screenshare the iPad with Concepts open. It's excellent that we've got tools like this today to do visual collaboration without too much friction, but it's still very one-way – the iPad sketcher draws, but others can't "take a pen" themselves and add to it like they would at a whiteboard. I'm sure someone will develop a live Google Docs-like multiplayer sketchboard to fill this need (hint: someone please do this!).
Even in pre-COVID times, most of the company has always been pretty solid with asynchronous work. Things are facilitated through Slack as a foundational communication layer, then plenty of collaborative Google Docs and Sheets on top of that for interactive work. We recently set up Confluence, too, where we want to have a better central location for content – like an internal blog and a place that's better for collaborative work on documents than Docs. The truth here is that there's no shortage of tooling to help teams with async work; it's a human behavior and comfort problem to get everyone in the right tempo of working this way.
Serendipity
One of the biggest benefits of co-located teams is the random hallway interaction you get that's very hard to replicate remotely.
In some ways removing the random hallway chatter is what we often long for – a way to add more time to our day for deep work. But lots of hallway chatter results not only in human social connection, but also in real work discussion and idea exchange. There still doesn't seem to be a good mechanism to replace what gets lost here when you're distributed. You do recover some productivity with more time for deep work (if you can keep the meeting-creep down), but it's less clear what longer-term detriments there will be from the ideas never discovered or pursued that would have come out of those random encounters. For a couple months some of us were doing regular "social hour" Zooms to fill this void. They were great for maintaining interpersonal connections, but didn't solve the problem for new product ideas or work topics.
It still remains to be seen how many companies return to full in-person work models after all of this settles down. I'm sure many will go back to something almost resembling the pre-shelter model, but I'd bet that there's plenty of residual work-from-home that'll happen even in the most face-to-face-leaning organizations. Over time we'll surely all adapt to some sort of regular pattern, hopefully landing on something more effective than we had before. I know that hybrid models have shown poor results in the past, but I think there's a way to get to a place like that that works for everyone, now that we're all subject to the same costs and benefits of working remotely.
A thoughtful and transparent post from Figma founder Dylan Field on their plans for re-establishing a hybrid / hub / remote model for the future of their teams.
In many ways, we followed our normal design process in revising this policy. We thought about the competitive landscape and ecosystem, collected data, explored different options and then converged on a solution. However, unlike making a product change, this was weirdly emotional for me – small changes in policy sometimes felt like we were leaping between potential worlds. I didn't feel like I had a framework for the decision until the end of our process and at times it was as if there was nothing to hold onto.
The idea of hybrid sounds attractive to me personally, but I can see the possible risks he describes here in ending up with sort of the worst of both worlds – an expensive but oft-deserted office and a lack of clarity on who's in or out each day. We're unsure what our plan looks like specifically, but there's no immediate pressure to overcommit on a particular plan just yet. This is interesting food for thought on what others are doing.
After about 6-8 months of forging, shaping, research, design, and engineering, we've launched the Fulcrum Report Builder. One of the key use cases with Fulcrum has always been using the platform to design your own data collection processes with our App Builder, perform inspections with our mobile app, then generate results through our Editor, raw data integrations, and, commonly, PDF reports generated from inspections.
For years we've offered a basic report template along with an ability to customize the reports through our Professional Services team. What was missing was a way to expose our report-building tools to customers.
With the Report Builder, we now have two modes available: a Basic mode that allows any customer to configure some parameters about the report output through settings, and an Advanced mode that provides a full IDE for building your own fully customized reports with markup and JavaScript, plus a templating engine for pulling in and manipulating data.
Under the hood, we overhauled the generator engine using a library called Puppeteer, a headless Chrome node.js API for doing many things, including converting web pages to documents or screenshots. It's lightning fast and allows for a live preview of your reports as you're working on your template customization.
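If you haven't used Puppeteer before, the core pattern is simple. Here's a minimal sketch of the render-HTML-to-PDF approach (not our actual generator code – the HTML string and output path are placeholders):

```js
const puppeteer = require('puppeteer');

// Render an HTML report template to a PDF file using headless Chrome.
async function renderReport(html, outputPath) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Load the rendered report markup into the headless browser.
  await page.setContent(html, { waitUntil: 'networkidle0' });

  // Print the page to PDF, preserving CSS backgrounds and margins.
  await page.pdf({
    path: outputPath,
    format: 'Letter',
    printBackground: true,
    margin: { top: '0.5in', bottom: '0.5in', left: '0.5in', right: '0.5in' },
  });

  await browser.close();
}

renderReport('<h1>Inspection Report</h1>', 'report.pdf').catch(console.error);
```

Because the same HTML is what Chrome renders on screen, the same template can drive both the live preview and the final PDF output.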
Feedback so far has been fantastic, as this has been one of the most requested capabilities on the platform. I can't wait to see all of the ways people end up using it.
We've got a lot more in store for the future. Stay tuned to see what else we add to it.
Jeff Atwood on Robert X. Cringely's descriptions of three groups of people you need to "attack a market":
Whether invading countries or markets, the first wave of troops to see battle are the commandos. Woz and Jobs were the commandos of the Apple II. Don Estridge and his twelve disciples were the commandos of the IBM PC. Dan Bricklin and Bob Frankston were the commandos of VisiCalc.
Grouping offshore as the commandos do their work is the second wave of soldiers, the infantry. These are the people who hit the beach en masse and slog out the early victory, building on the start given them by the commandos. The second-wave troops take the prototype, test it, refine it, make it manufacturable, write the manuals, market it, and ideally produce a profit.
What happens then is that the commandos and the infantry head off in the direction of Berlin or Baghdad, advancing into new territories, performing their same jobs again and again, though each time in a slightly different way. But there is still a need for a military presence in the territory they leave behind, which they have liberated. These third-wave troops hate change. They aren't troops at all but police.
Behind all this is the astonishing, baffling breadth of what sleep does for the body. The fact that learning, metabolism, memory, and myriad other functions and systems are affected makes an alteration as basic as the presence of ROS quite interesting. But even if ROS is behind the lethality of sleep loss, there is no evidence yet that sleep's cognitive effects, for instance, come from the same source. And even if antioxidants prevent premature death in flies, they may not affect sleep's other functions, or if they do, it may be for different reasons.
Interesting perspective here on the effectiveness of hybrid in-office/WFH models of work, which are likely to be popular as companies reopen cautiously:
Hybrid structures are hard to get right because the baseline working methodology will be entirely different. And I don't mean pair programming, which can still be done using VS Code Share or what have you. I mean the decision making structures. How an organisation makes decisions sets the culture, defines your hiring and creates the career path for leadership.
A nice comprehensive list of SaaS products for the workplace, across a ton of different categories. Great work by Pietro Invernizzi putting this database together.
Today we hosted a webinar in conjunction with our friends at NetHope and Team Rubicon to give an overview of Fulcrum and what we're collectively doing in disaster relief exercises.
Both organizations deployed to support recent disaster events for Cyclone Idai and Hurricane Dorian (the Bahamas) and used Fulcrum as a critical piece of their workflow.
Always enjoyable to get to show more about what we're doing to support impactful efforts like this.
Today we announced this investment from Kayne and Kennet in Spatial Networks, to help us keep scaling Fulcrum in 2020 and beyond. This effort has been one of my main missions for the better part of 2019, so it's rewarding to get to this milestone to build from. Our new partners at Kayne and Kennet each bring unique perspectives and experience to help us move faster and expand.
Spatial Networks, the creator of Fulcrum, the leading geospatial data collection and analysis platform for field operations, today announced that it has closed an investment of $42.5 million led by Kayne Partners, the growth equity group of Kayne Anderson Capital Advisors, L.P., and Kennet Partners, Ltd. The funding will primarily be used to scale the company's sales and marketing capabilities, accelerate its product development roadmap, and further expand the Fulcrum data collection platform into international markets. The company has appointed Jim Grady CEO to oversee all aspects of the company's strategy and execution globally.
I've spent the majority of the last few months working on our expansion strategy for next year and beyond. We've got some big ideas and plans for the product that I'm excited about in 2020.
We just wrapped up our Fall "all hands" week at the office. Another good week to see everyone from out of town, and an uncommonly productive one at that. We got a good amount of planning discussion done for future product roadmap additions, did some testing on new stuff in the lab, fixed some bugs, shared some knowledge, and ate (a lot).
We're in San Juan this week for the NetHope Global Summit. Through our partnership with NetHope, a non-profit devoted to bringing technology to disaster relief and humanitarian projects, we're hosting a hands-on workshop on Fulcrum on Thursday.
We've already connected with several of the other tech companies in NetHope's network – Okta, Box, Twilio, and others – leading to some interesting conversations on working together more closely on integrated deployments for humanitarian work.
Fortin San Geronimo de Boqueron
Looking forward to an exciting week, and maybe some exploring of Old San Juan. Took a walk last night out to dinner along the north shore overlooking the Atlantic.
Bryan wrote this up about the latest major release of Fulcrum, which added Views to the Editor tool. This is a cool feature that allows users doing QA and data analysis to save sets of columns and filters, akin to how views work in databases like PostgreSQL. We have some plans next to let users share or publish Views, and also to expose them via our Query API, with the underlying data functioning just like a database view does.
This'll be a foundational feature for a lot of upcoming neat stuff.
This is a post from the Fulcrum archives I wrote 3 years back. I like this idea, and there's more to be written on the topic of how companies treat their archives of data. Especially in data-centric companies like those we work with, it's remarkable how quickly data is often thrown on a shelf, atrophies, and is never used again.
In the days of pen and paper collection, data was something to collect, transcribe, and stuff into a file cabinet to be stored for a minimum of 5 years (lest those auditors come knocking). With advances in digital data capture – through all methods including forms software, spreadsheets, or sensors – many organizations aren't rethinking their processes and thus haven't come much further. The only difference is that the file cabinet's been replaced with an Access database (or, gasp, a 10-year-old spreadsheet!).
Many organizations collect troves of legacy data in their operations, or at least as much as they can justify the cost of collecting. But because data management is a complicated domain in and of itself, oftentimes the same data is re-collected over and over, with all cost and no benefit. Once data makes its way into corporate systems somewhere after its initial use, it's forgotten and left on the virtual shelf.
Data is your company's memory. It's the living, institutional knowledge you've invested in over years or decades of doing business, full of latent value.
But there are a number of challenges that stand in the way when trying to make use of historical data:
Compatibility – File formats and versions. Can I read my old data with current tools?
Access – Data silos and where your data is published. Can my staff get to archives they need access to without heartburn?
Identification – A process for knowing what pieces are valuable down the road. Within these gigabytes of data, what is useful?
If you give consideration to these issues up-front as you're designing a data collection workflow, you'll make your life much simpler down the road when your future colleagues are trying to leverage historical data assets.
Let's dive deeper into each of these issues.
Formats and Compatibility
I call this the "Lotus 1-2-3" problem, which happens whenever data is stored in a format that dies off and loses tool compatibility1. Imagine the staggering amount of historical corporate data locked up in formats that no one can open anymore. This is one area where paper can be an advantage: if stored properly, you can always open the file.
Of course there's no way to know the future potential of a data format on the day you select it as your format of choice. We don't have the luxury of that kind of hindsight. I'm sure no one would've selected Lotus's .123 format back in '93 had they known that Excel would come to dominate the world of spreadsheets. Look for well-supported open standards like CSV or JSON for long term archival. Another good practice is to revisit your data archives as a general "hygiene" practice every few years. Are your old files still usable? The faster you can convert dead formats into something more future-proof, the better.
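As a concrete example of that kind of hygiene pass, a small script can walk an archive and flag files stored in formats you consider at risk. A hypothetical sketch using only Node built-ins (the extension list is illustrative, not a recommendation):

```js
const fs = require('fs');
const path = require('path');

// Extensions we consider at risk of losing tool support (illustrative list).
const RISKY_EXTENSIONS = new Set(['.123', '.wk4', '.mdb', '.dbf', '.xls']);

// Recursively walk an archive directory and collect files in risky formats.
function auditArchive(dir, flagged = []) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      auditArchive(fullPath, flagged);
    } else if (RISKY_EXTENSIONS.has(path.extname(entry.name).toLowerCase())) {
      flagged.push(fullPath);
    }
  }
  return flagged;
}

console.log(auditArchive('./archive'));
```

Run something like this on a schedule and the "are your old files still usable?" question answers itself before the formats go fully extinct.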
Accessibility
This is one of the most important issues when it comes to using archives of historical data. Presuming a user can open files of 10 year old data because you've stored it effectively in open formats – is the data somewhere that staff can get it? Is it published somewhere in a shared workspace for easy access? Most often data isn't squirreled away in a hard-to-reach place intentionally. It's often done for the sake of organization, cleanliness, or savings on storage.
Anyone who works frequently with data has heard of "data silos", which arise when data is holed up in a place where it doesn't get shared, accessible only by individual departments or groups. Avoiding this issue can also involve internal corporate policy shifts or revisiting your data security policies. In larger organizations I've worked in, however, the tendency is toward over-securing data to the point of uselessness. In some cases it might as well be deleted, since it's effectively invisible to the entire company. This is a mistake and a waste of large past investments in collecting that data in the first place.
Look for publishing tools that make your data easy to get to without sacrificing controls over access and security. But resist the urge to continuously wall off past data from your team.
Identifying the Useful Things
Now, assuming your data is in a useful format and it's easily accessible, you're almost there. When working with years of historical records it can be difficult to extract the valuable bits of information, but that's often because the first two challenges (compatibility and accessibility) have already been standing in your way. If your data collection process is built around your data as an evergreen asset rather than a single-purpose resource, it becomes much easier to think of areas where a dataset could be useful 5 or 6 years down the road.
For instance, if your data collection process includes documenting inspections with thorough before-and-after photographs, those could be indispensable in the event of a dispute or a future issue in years' time. With ease of access and an open format, it could take two clicks to resolve a potentially thorny issue with a past client. That is, if you've planned your process around your data becoming a valuable corporate resource.
A quick story to demonstrate these practices:
I'm currently working with a construction company on re-roofing my house, and they've been in business for 50+ years. Over that time span, they've performed site visits and accurately measured so many roofs in the area that when they get calls for quotes, they often can pull a file from 35 years ago when they went out and measured a property. That simple case is an excellent example of realizing latent value in a prior investment in data: if they didn't organize, archive, and store that information effectively, they'd be redoing field visits every week. Though they aren't digital with most of their process, they've nailed a workflow that works for them. They use formats that work, make that data accessible to their people, and know exactly what information they'll find useful over the long term.
Data has value beyond its immediate use case, but you have to consider this up front. Design sustainable workflows that allow you to continuously update data and make use of archival data over time. You've spent a lot to create it; you should be leveraging it to its fullest extent.
Lotus 1-2-3 was a spreadsheet application popular in the 80s and 90s. It succumbed to the boom of Microsoft Office and Excel in the 1990s.
This is a cool post on a study done by a research team in the City of Saskatoon, looking at the perceptions of safety in a downtown area. They used Fulcrum to collect survey data using a safety audit developed to capture the on-the-ground intelligence from residents:
Because we were interested in perceptions and fear at a very micro-level, the study area was confined to the blocks and laneways within a four block area. We used our new app to collect information from 108 micro-spatial locations within a radius of 30 meters (100 feet) of each location, and then we also collected 596 additional intercept surveys with members of the public on the street at the time.
The urban design strategy known as Crime Prevention Through Environmental Design (CPTED) is about creating safer neighborhoods through specifically constructing the built environment. From their takeaways in the study:
Interestingly, the respondents' night-time perceptions did not appear as negative as we expected. Some parts were so inactive at night that we obtained very few interview responses. While CPTED surveys conducted by one team concluded these underactive areas were anxiety-provoking, when late-night social events and festivals activated the area, it positively influenced the perceptions in our surveys with the public.
I'm always proud of our annual tributes. It matters to keep perspective on how bad things can get, how good most of us really have it, and the first-responders, public servants, and national security forces that work to keep it that way.
Our SNI running club on Strava keeps expanding. We've got 12 members now and counting. Two people are committed to marathons in the fall, and two of us to half-marathons.
Somewhere in my reading about marathon training I came across the idea that the community aspect of a training plan is one of the most important parts: finding a group of people around you for mutual support and motivation along the way. Proper training (aside from the physical effort) is time-consuming and requires consistency to get 4 or more activities in per week without falling off the wagon. It certainly helps to have the visibility of those around you keeping their habits going as a motivator to push yourself.
When we do our semi-annual All Hands events with the whole team in the office for a week, we now have something of a tradition of doing a group run sometime while we're all together. I think we've done it for 2 or 3 years now pretty consistently. It looks like at the upcoming November event we'll be mobilizing about 15 of us or so to get out there and do at least a 5K. There's a half-dozen of us that are really active and do this routinely, but it's awesome to see the communal gravitational pull working, attracting many to join in who are really just trying to get moving on building the habit.
This'll be right after my half-marathon, so it might be the first recovery run after that race.
This is one from the archives, originally written for the Fulcrum blog back in early 2017. I thought I'd resurface it here since I've been thinking more about the continual evolution of our product process. I liked it back when I wrote it, and it's still very relevant and true. It's good to look back in time to get a sense for my thought process from a couple years ago.
In the software business, a lot of attention gets paid to "shipping" as a badge of honor if you want to be considered an innovator. Like any guiding philosophy, it's better used as a general rule than as the primary yardstick by which you measure every individual decision. Agile, scrum, TDD, BDD – they're all excellent practices to keep teams focused on results. After all, the longer you're polishing your work and not putting it in the hands of users, the less you know about how they'll be using it once you ship it!
When these systems are followed as gospel (particularly on larger projects or products), they can shift attention to the how rather than the what – thinking about the process in terms of shipping "lines of code" or which text editor you're using rather than delivering useful results for users. Loops of user feedback are essential to building the right solution for the problem you're addressing with your product.
Thinking more deeply about how to ship _something_ rapidly while ensuring it aligns with product goals brings to mind a few questions to reflect on:
What are you shipping?
Is what you're shipping actually useful to your user?
How does the structure of your team impact your resulting product?
How can a team iterate and ship fast, while also delivering the product they're promising to customers, that solves the expressed problem?
Defining product goals
In order to maintain a high tempo of iteration without simply measuring numbers of commits or how many times you push to production each day, the goals need to be oriented around the end result, not the means used to get there. Start by defining what success looks like in terms of the problem to be solved. Harvard Business School professor Clayton Christensen developed the jobs-to-be-done framework to help businesses break down the core linkages between a user and why they use a product or service1. Looking at your product or project through the lens of the "jobs" it does for the consumer helps clarify problems you should be focused on solving.
Most of us that create products have an idea of what we're trying to achieve, but do we really look at a new feature, new project, or technique and truly tie it back to a specific job a user is expecting to get done? I find it helpful to frequently zoom out from the ground level and take a wider view of all the distinct problems we're trying to solve for customers. The JTBD concept is helpful to get things like technical architecture out of your way and make sure what's being built is solving the big problems we set out to solve. All the roadmaps, Gantt charts, and project schedules in the world won't guarantee that your end result solves a problem2. Your product could become an immaculately built ship that's sailing in the wrong direction. For more insight into the jobs-to-be-done theory, check out This is Product Management's excellent interview with its co-creator, Karen Dillon.
Understanding users
On a similar thread as jobs-to-be-done, having a deep understanding of what the user is trying to achieve is essential in defining what to build.
This quote from the article gets to the heart of why it matters to understand, with empathy, what a user is trying to accomplish – it's not always about our engineering-minded technical features or bells and whistles:
Jobs are never simply about function – they have powerful social and emotional dimensions.
The only way to unravel what's driving a user is to have conversations and ask questions. Figure out the relationships between what the problem is and what they think the solution will be. Internally we talk a lot about this as "understanding pain". People "hire" a product, tool, or person to reduce some sort of pain. Deep questioning to get to the root causes of pain is essential. Oftentimes people want to self-prescribe their solution, which may not be ideal. Just look at how often a patient browses WebMD, then goes to the doctor with a preconceived diagnosis, without letting the expert do their job.
On the flip side, product creators need to enter these conversations with an open mind, and avoid creating a solution looking for a problem. Doctors shouldn't consult patients and make assumptions about the underlying causes of a patient's symptoms! They'd be in for some serious legal trouble.
Organize the team to reflect goals
One of my favorite ideas in product development comes from Steven Sinofsky, former Microsoft product chief of Office and Windows:
"Don't ship the org chart."
The salient point is that companies have a tendency to create products that align with areas of responsibility within the company3. However, the user doesn't care at all about the dividing lines within your company, only the resulting solutions you deliver.
A corollary to this idea is that over time companies naturally begin to look like their customers. It's clearly evident in the federal contracting space: federal agencies are big, slow, and bureaucratic, and large government contracting companies start to reflect these qualities in their own products, services, and org structures.
With our product, we see three primary ways to make sure it fits the set of problems we're solving for customers:
For some, a toolbox – For small teams with focused problems, Fulcrum should be seamless to set up, purchase, and self-manage. Users should begin relieving their pains immediately.
For others, a total solution – For large enterprises with diverse use cases and many stakeholders, Fulcrum can be set up as a total turnkey solution for the customer's management team to administer. Our team of in-house experts consults with the customer for training and on-boarding, and the customer ends up with a full solution and the toolbox.
Integrations as the "glue" – Customers large and small have systems of record and reporting requirements with which Fulcrum needs to integrate. Sometimes this is simple, sometimes very complex. But always the final outcome is a unique capability that can't be had another way without building their own software from scratch.
Though we're still a small team, we've tried to build up the functional areas around these objectives. As we advance the product and grow the team, it's important to keep this in mind so that we're still able to match our solution to customer problems.
For more on this topic, Sinofsky's post on "Functional vs. Unit Organizations" analyzes the pros, cons, and trade-offs of different org structures and their impacts on product. A great read.
Continued reflection, onward and upward
In order to stay ahead of the curve and Always Be Shipping (the Right Product), it's important to measure user results, constantly and honestly. The assumption should be that any feature could and should be improved, if we know enough from empirical evidence how we can make those improvements. With this sort of continuous reflection on the process, hopefully we'll keep shipping the Right Product to our users.
Not to discount the value of team planning. It's a crucial component of efficiency. My point is the clean Gantt chart on its own isn't solving a customer problem!
Of course this problem is only minor in small companies. It's of much greater concern to the Amazons and Microsofts of the world.
I tried this out the other night on a run. The technique makes some intuitive sense that it'd reduce impact (or level it out side to side, anyway). Surely to notice any result you'd have to do it over distance consistently. But I've had some right knee soreness that I don't totally know the origin of, so I thought I'd start trying this out. I found it takes a lot of concentration to keep it up consistently. I'll keep testing it out.
A neat historical, geographical story from BLDGBLOG:
Briefly, anyone interested in liminal landscapes should find Snell's description of the Drowned Lands, prior to their drainage, fascinating. The Wallkill itself had no real path or bed, Snell explains, the meadows it flowed through were naturally dammed at one end by glacial boulders from the Ice Age, the whole place was clogged with "rank vegetation," malarial pestilence, and tens of thousands of eels, and, what's more, during flood season "the entire valley from Denton to Hamburg became a lake from eight to twenty feet deep."
Turns out there was local disagreement on flood control:
A half-century of "war" broke out among local supporters of the dams and their foes: "The dam-builders were called the 'beavers'; the dam destroyers were known as 'muskrats.' The muskrat and beaver war was carried on for years," with skirmishes always breaking out over new attempts to dam the floods.
Here's one example, like a scene written by Victor Hugo transplanted to New York State: "A hundred farmers, on the 20th of August, 1869, marched upon the dam to destroy it. A large force of armed men guarded the dam. The farmers routed them and began the work of destruction. The 'beavers' then had recourse to the law; warrants were issued for the arrest of the farmers. A number of their leaders were arrested, but not before the offending dam had been demolished. The owner of the dam began to rebuild it; the farmers applied for an injunction. Judge Barnard granted it, and cited the owner of the dam to appear and show cause why the injunction should not be made perpetual. Pending a final hearing, high water came and carried away all vestige of the dam."
This is something we launched a few months back. There's nothing terribly exciting about building SSO features in a SaaS product – it's table stakes to move up in the world with customers. But for me personally it's a signal of success. Back in 2011, imagining that we'd ever have customers large enough to need SAML seemed so far in the future. Now we're there and rolling it out for enterprise customers.
Earlier this year at SaaStr Annual, we spent 3 days with 20,000 people in the SaaS market, hearing about best practices from the best in the business, from all over the world.
If I had to take away a single overarching theme this year (not by any means "new" this time around, but louder and present in more of the sessions), it's the value of customer success and retention of core, high-value customers. It's always been one of SaaStr founder Jason Lemkin's core focus areas in his literature about how to "get to $10M, $50M, $100M" in revenue, and interwoven in many sessions were topics and questions relevant to things in this area – onboarding, "aha moments," retention, growth, community development, and continued incremental product value increases through enhancement and new features.
Mark Roberge (former CRO of Hubspot) had an interesting talk that covered this topic. In it he focused on the power of retention and how to think about it tactically at different stages in the revenue growth cycle.
If you look at growth (adding new revenue) and retention (keeping and/or growing existing revenue) as two axes on a chart of overall growth, a couple of broad options present themselves for getting the curve moving up and to the right:
If you have awesome retention, you have to figure out adding new business. If you're adding new customers like crazy but have trouble with customer churn, you have to figure out how to keep them. Roberge summed up his position after years of working with companies:
It's easier to accelerate growth with world class retention than fix retention while maintaining rapid growth.
The literature across industries is also in agreement on this. There's an adage in business that it's "cheaper to keep a customer than to acquire a new one." But to me there's more to this notion than the avoidance of the acquisition cost for a new customer, though that's certainly beneficial. Rather it's the maximization of the magic SaaS metric: LTV (lifetime value). If a subscription customer never leaves, their revenue keeps growing ad infinitum. This is the sort of efficiency every SaaS company is striving for – to maximize fixed investments over the long term. It's why investors are valuing SaaS businesses at 10x revenue these days. But you can't get there without unlocking the right product-market fit to switch on this kind of retention and growth.
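To make the arithmetic concrete, here's a back-of-the-envelope sketch using the standard simple LTV approximation (the dollar figures and churn rates are made up for illustration):

```js
// Simple LTV approximation: average revenue per account times gross margin,
// divided by monthly churn rate (expected customer lifetime is roughly 1 / churn).
function lifetimeValue(monthlyRevenue, grossMargin, monthlyChurn) {
  return (monthlyRevenue * grossMargin) / monthlyChurn;
}

// Same $500/month customer at 80% gross margin:
console.log(lifetimeValue(500, 0.8, 0.03)); // ~3% monthly churn -> ~$13,333
console.log(lifetimeValue(500, 0.8, 0.01)); // ~1% monthly churn -> ~$40,000
// Cutting churn by two thirds triples LTV without adding a single new customer.
```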
So Roberge recommends keying in on this factor. One of the key first steps in establishing a strong position with any customer is to have a clear definition of when they cross a product fit threshold – when they reach the "aha" moment and see the value for themselves. He calls this the "customer success leading indicator", and explains that all companies should develop a metric or set of metrics that indicates when customers cross this mark. Some examples from around the SaaS universe of how companies are measuring this:
Slack – 2000 team messages sent
Dropbox – 1 file added to 1 folder on 1 device
Hubspot – Using 5 of 20 features within 60 days
Each of these companies has correlated these figures with strong customer fits. When these targets are hit, there's a high likelihood that a customer will convert, stick around, and even expand. It's important that the selected indicator be clear and consistent between customers and meet some core criteria:
Observable in weeks or months, not quarters or years – need to see rapid feedback on performance.
Measurement can be automated – again, need to see this performance on a rolling basis.
Ideally correlated to the product core value proposition – don't pick things that are "measurable" but don't line up with our expectations of "proper use." For example, in Fulcrum, whether the customer creates an offline map layer wouldn't correlate strongly with the core value proposition (in isolation).
Repeat purchase, referral, setup, usage, ROI are all common (revenue usually a mistake – it's a lagging rather than a leading indicator)
Okay to combine multiple metrics – derived "aggregate" numbers would work, as long as they aren't overcomplicated.
The next step is to understand what portion of new customers reach this target (ideally all customers reach it) and when, then measure by cohort group. Putting together cohort analyses allows you to chart the data over time, and make iterative changes to early onboarding, product features, training, and overall customer success strategy to turn the cohorts from "red" to "green".
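Here's a rough sketch of what that looks like mechanically – the customer records and threshold are hypothetical, but the idea is the same: check each account against the chosen indicator, then roll up activation rates by signup cohort:

```js
// Hypothetical customer records: signup month plus a usage metric we've chosen
// as our "customer success leading indicator" (e.g. records collected).
const customers = [
  { id: 1, cohort: '2019-01', metric: 2500 },
  { id: 2, cohort: '2019-01', metric: 150 },
  { id: 3, cohort: '2019-02', metric: 4000 },
  { id: 4, cohort: '2019-02', metric: 3200 },
];

const ACTIVATION_THRESHOLD = 2000; // illustrative, not a real benchmark

// Percentage of each signup cohort that has crossed the activation threshold.
function activationByCohort(records, threshold) {
  const cohorts = {};
  for (const c of records) {
    const bucket = cohorts[c.cohort] || (cohorts[c.cohort] = { total: 0, activated: 0 });
    bucket.total += 1;
    if (c.metric >= threshold) bucket.activated += 1;
  }
  return Object.fromEntries(
    Object.entries(cohorts).map(([month, b]) => [month, b.activated / b.total])
  );
}

console.log(activationByCohort(customers, ACTIVATION_THRESHOLD));
// { '2019-01': 0.5, '2019-02': 1 }
```

Charting that output month over month is what tells you whether onboarding and product changes are actually moving cohorts from "red" to "green".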
We do cohort tracking already, but it'd be hugely beneficial to analyze and articulate this through the filter of a key customer success metric and track it as closely as MRR. I think a hybrid reporting mechanism that tracks MRR, customer success metric achievement, and NPS by cohort would show strong correlation between each. The customer success metric can serve as an early signal of customer "activation" and, therefore, future growth potential.
I also sat in on a session with Tom Tunguz, VC from RedPoint Ventures, who presented on a survey they had conducted with almost 600 different business SaaS companies across a diverse base of categories. The data demonstrated a number of interesting points, particularly on the topic of retention. Two of the categories touched on were logo retention and net dollar retention (NDR). More than a third of the companies surveyed retain 90+% of their logos year over year. My favorite piece of data showed that larger customers churn less – the higher products go up market, the better the retention gets. This might sound counterintuitive on the surface, but as Tunguz pointed out in his talk, it makes sense when you think about the buying process in large vs. small organizations. Larger customers are more likely to have more rigid, careful buying processes (as anyone doing enterprise sales is well aware) than small ones, which are more likely to buy things "on the fly" and also invest less time and energy in their vendors' products. The investment poured in by an enterprise customer makes them averse to switching products once on board1.
On the subject of NDR, Tunguz reports that the tendency toward expansion scales with company size as well. In the body of customers surveyed, those that focus on the mid-market and enterprise tiers report higher average NDR than SMB. This aligns with the logic above on logo retention, but there's also the added factor that enterprises have more room to go higher than those on the SMB end of the continuum. The higher overall headcount in an enterprise leaves a higher ceiling for a vendor to capture.
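For reference, net dollar retention for a cohort is typically computed as that cohort's ending recurring revenue (after expansion, contraction, and churn) divided by its starting revenue, ignoring newly acquired customers. A quick sketch with made-up numbers:

```js
// NDR for a cohort over a period: how existing-customer revenue changed,
// excluding any revenue from newly acquired customers.
function netDollarRetention({ startingMRR, expansion, contraction, churned }) {
  return (startingMRR + expansion - contraction - churned) / startingMRR;
}

// Illustrative enterprise-heavy cohort: lots of seat expansion, little churn.
console.log(netDollarRetention({ startingMRR: 100000, expansion: 20000, contraction: 3000, churned: 5000 })); // 1.12

// Illustrative SMB cohort: less expansion headroom, more churn.
console.log(netDollarRetention({ startingMRR: 100000, expansion: 5000, contraction: 4000, churned: 15000 })); // 0.86
```

Anything above 1.0 means the existing customer base grows on its own, which is exactly the dynamic the up-market data points to.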
Overall, there are two big takeaways worth bringing home and incorporating:
Create (and subsequently monitor) a universal "customer success indicator" that gives a barometer for measuring the "time to value" for new customers, and segment accordingly by size, industry, and other variables.
Focus on large Enterprise organizations – particularly their use cases, friction points to expansion, and customer success attention.
We've made good headway on a lot of these findings with our Enterprise product tier for Fulcrum, along with the sales and marketing processes to get it out there. What's encouraging about these presentations is that we already see numbers leaning in this direction, aligning with the "best practices" each of these guys presented – strong logo retention and north of 100% NDR. We've got some other tactics in the pipeline, as well as product capabilities, that we're hoping will bring even greater efficiency, along with the requisite additional value to our customers.
Assuming there's tight product-market fit, and you aren't selling them shelfware!
Ryan Singer and the Basecamp team just released their new ebook on product development, called Shape Up, made available for free online. Some of our team here and I have already dug into it and are finding some interesting ideas to experiment with in our own product development cycles.
On shaping and wireframing:
When design leaders go straight to wireframes or high-fidelity mockups, they define too much detail too early. This leaves designers no room for creativity.
Appetites:
Whether we're champing at the bit or reluctant to dive in, it helps to explicitly define how much of our time and attention the subject deserves. Is this something worth a quick fix if we can manage? Is it a big idea worth an entire cycle? Would we redesign what we already have to accommodate it? Will we only consider it if we can implement it as a minor tweak?
Breadboarding:
Deciding to include an indicator light and a rotary knob is very different from debating the chassis material, whether the knob should go to the left of the light or the right, how sharp the corners should be, and so on.
Similarly, we can sketch and discuss the key components and connections of an interface idea without specifying a particular visual design. To do that, we can use a simple shorthand. There are three basic things we'll draw:
Places: These are things you can navigate to, like screens, dialogs, or menus that pop up.
Affordances: These are things the user can act on, like buttons and fields. We consider interface copy to be an affordance, too. Reading it is an act that gives the user information for subsequent actions.
Connection lines: These show how the affordances take the user from place to place.
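As a thought experiment, that shorthand maps neatly onto a tiny data structure. This sketch is my own (an invented "invite a teammate" flow), not an example from the book:

```js
// A breadboard for a hypothetical "invite a teammate" flow:
// places you can navigate to, affordances you can act on,
// and connection lines showing where each affordance takes you.
const breadboard = {
  places: ['Dashboard', 'Invite Dialog', 'Confirmation'],
  affordances: {
    Dashboard: ['Invite button'],
    'Invite Dialog': ['Email field', 'Send invite button'],
    Confirmation: ['Invite sent copy', 'Done button'],
  },
  connections: [
    { from: 'Invite button', to: 'Invite Dialog' },
    { from: 'Send invite button', to: 'Confirmation' },
    { from: 'Done button', to: 'Dashboard' },
  ],
};

// No visual design decided yet; only which parts exist and how they connect.
console.log(breadboard.connections.map(c => `${c.from} -> ${c.to}`).join('\n'));
```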
I'm planning to go through it completely this weekend. There are a couple of ideas here to try out right out of the gate.
Like many working in product, I've been a follower-slash-admirer of how Basecamp works for years.
This model of working in 6-week "cycles" sounds like an attractive option for organizing a team, without falling into the onion-slicing trap of what agile can become – where more time is spent micro-scoring, tracking, and measuring velocities than on defining what needs to get done and why.
Once a six week cycle is over, we take one or two weeks off of scheduled projects so everyone can roam independently, fix stuff up, pick up some pet projects we've wanted to do, and generally wind down prior to starting the next six week cycle. Ample time for context switching. We also use this time to firm up ideas that we'll be tackling next cycle.
I like the idea of spacing the cycles for the inevitable bugfixes, polishes, and the randomness that crosses the transom unexpectedly. And, importantly, it leaves time for the product team to think through planning out what to slot for the next cycle, and exactly how things should be built (see more about Ryan Singer's hill charts). Another topic is how to tackle The Big Stuff. Fried's point on how they do that is to do some "scope hammering" – chop it down to whatever the 6-week version looks like:
The secret to making this possible is something we call scope hammering. We take the chisel to the big block of marble and figure out how to sculpt, nip, and tuck a feature into the best six-week version possible. It's all about looking carefully at a feature and figuring out the true essence. Not what can it be, but what does it need to be?
Before any project is included in a cycle, we've already figured out what we think the six week version is. We don't include planning in the cycle time – all the planning and consideration happens in the pitch. It has to happen before the work is slated to be done by a team. That way the six weeks is all implementation and execution. No time is spent on big unknowns – we try to make sure all the big stuff is known enough before we get started.
This process would improve things for many folks, but could very well be unworkable for others. The article links to an example kickoff note that shows what one of the Basecamp work cycles looks like. I'm always fascinated to see new ideas on how teams work together.
I've always been largely agnostic to systems in terms of which is better than another writ large. If a system increases your team's output from 3 to 5, or from 4 to 7 on the whole, it's a good system. That doesn't mean there aren't better ways to tweak or optimize, but there's too much "that sucks, this is better" literature out there that confuses folks.
Use what works, and continue experimenting cautiously but confidently.
Our friend and colleague Kurt Menke of Bird's Eye View GIS recently conducted a workshop in Hawaii working with folks from the Pacific Islands (Samoa, Marianas, Palau, and others) to teach Fulcrum data collection and QGIS for mapping. Seeing our tech have these kinds of impacts is always enjoyable to read about:
The week was a reminder of how those of us working with technology day-to-day sometimes take it for granted. Everyone was super excited to have this training. It was also a lesson in how resource rich we are on the continent. One of my goals with Bird's Eye View is to use technology to help make the world a better place. (Thus my focus on conservation, public health and education.) One of the goals of the Community Health Maps program is to empower people with technology. This week fulfilled both and was very gratifying.
Most of the trainees had little to no GIS training yet instantly knew how mapping could apply to their work and lives. They want to map everything related to hurricane relief, salt water resistant taro farms, infrastructure related to mosquito outbreaks etc. A benefit of having the community do this is that they can be in charge of their own data and it helps build community relationships.
You hear the criticism all the time around the business world about meetings being useless, a waste of time, and filling up schedules unnecessarily.
A different point of view on this topic comes from Andy Grove in his book High Output Management. It's 35 years old, but much of it is just as relevant today as back then, with timeless principles on work.
Grove is adamant that for the manager, the "meeting" is an essential piece in the managerial leverage toolkit. From page 53:
Meetings provide an occasion for managerial activities. Getting together with others is not, of course, an activity – it is a medium. You as a manager can do your work in a meeting, in a memo, or through a loudspeaker for that matter. But you must choose the most effective medium for what you want to accomplish, and that is the one that gives you the greatest leverage.
This is an interesting distinction from the way you hear meetings described often. That they should be thought of as a medium rather than an activity is an important difference in approach. When many people talk about the uselessness of meetings, I would strongly suspect that the medium is perhaps mismatched to the work that needs doing. Though today we have many media through which to conduct managerial work – meetings, Slack channels, emails, phone calls, Zoom video chats – the point is you shouldn't ban the medium entirely if your problem is really something else. I know when I find myself in a useless meeting, its "meetingness" isn't the issue; it's that we could've accomplished the goal with a well-written document with inline comments, an internal blog post, an open-ended Slack chat, or a point-to-point phone call between two people. Or, alternately, it could be that a meeting is the optimal medium, but the problem lies elsewhere in planning, preparation, action-orientation, or the who's who in attendance1.
We should focus our energies on maximizing the impact of meetings by fitting them in when they're the right medium for the work. As Grove notes on page 71:
Earlier we said that a big part of a middle manager's work is to supply information and know-how, and to impart a sense of the preferred method of handling things to the groups under his control and influence. A manager also makes and helps to make decisions. Both kinds of basic managerial tasks can only occur during face-to-face encounters, and therefore only during meetings2. Thus I will assert again that a meeting is nothing less than the medium through which managerial work is performed. That means we should not be fighting their very existence, but rather using the time spent in them as efficiently as possible.
A major issue I see in many meetings (as I'm sure we all do) is a tendency to over-inflate the invite list. A fear of someone missing out often crowds the conversation, spends human hours unnecessarily, and invites the occasional "I'm here so I better say something" contributions from those with no skin in the outcome.
This shows some age as we have so many more avenues for engagement today than in 1983, but his principle about fitting the work to the medium still holds.
This post is part 3 in a series about my history in product development. Check out the intro in part 1 and all about our first product, Geodexy, in part 2.
Back in 2010 we decided to halt our development of Geodexy and regroup to focus on a narrower segment of the marketplace. With what we'd learned in our go-to-market attempt on Geodexy, we wanted to isolate a specific industry we could focus our technology around. Our tech platform was strong; we were confident in that. But at the peak of our efforts with taking Geodexy to market, we were never able to reach a state of maturity to create traction and growth in any of the markets we were targeting. Actually, targeting is the wrong word – truthfully that was the issue: we weren't "targeting" anything because we had too many targets to shoot at.
We needed to take our learnings, regroup on what was working and what wasnât, and create a single focal point we could center all of our effort around, not just the core technology, but also our go-to-market approach, marketing strategy, sales, and customer development.
I don't remember the specific genesis of the idea (I think it was part internal idea generation, part serendipity), but we connected on the notion of field data collection for the property inspection market. So we launched allinspections.
That industry had the hallmarks of one ripe for us to show up with disruptive technology:
Low current investment in technology – Most folks were doing things on paper with lots of transcribing and printing.
Lots of regulatory basis in the workflow – Many inspections are done as a requirement by a regulatory body. This meant consistent, widespread needs that crossed geographic boundaries, and an "always-on" use case for a technology solution.
Phased workflow with repetitive process and "decision tree" problems – a perfect candidate for digitizing the process.
Very few incumbent technologies to replace – if there were competitors at all, they were Excel and Acrobat.
Smartphones ready to amplify a mobile-heavy workflow – Inspections of all sorts happen in-situ somewhere in the field.
While the market for facility and property inspections is immense, we opted to start on the retail end of the space: home inspections for residential real estate. There was a lot to like about this strategy for a technology company looking to build something new. We could identify individual early adopters, gradually understand what made their businesses tick, and index on capabilities that empowered them. There was no immediate need to worry about selling to massive enterprise organizations, which would've put a heavy burden on us to build "box-checking" features like hosting customization, access controls, single sign-on, and the like. We used a freemium model to attract early usage, then shifted to a free trial model later on after some early traction.
Overall, the biggest driver that attracted us to residential was the consistency of the work. Anyone who's bought property is familiar with the process of getting a house inspected before closing, but that sort of inspection is low volume compared to those associated with insurance underwriting. Our first mission was this: build the industry-standard tool for performing Florida's regulated inspections – wind mitigation, 4-point, and roof certification. These were (and still are) done by the thousands every day. They were perfect candidates for us for the reasons listed above: simple, standard, ubiquitous, and required1. There was a built-in market for automating the workflow around them and improving the data collected, which we could use as a beachhead to get folks used to using an app to conduct their inspections.
Our hypothesis was that we could apply the technology for mobile data collection we'd built in Geodexy and "verticalize" it around the specialty of property inspection, with features oriented around that problem set. Once we could spin up enough technology adoption for home inspection use cases at the individual level, we could then bridge into the franchise operations and institutions (even the insurance companies themselves) to standardize on allinspections for all of their work.
We had good traction in the early days with inspectors. It didn't take long to connect with a half-dozen tech-savvy inspectors in the area who worked with us as guinea pigs to help advance the technology. Trading usage of the product for their domain expertise, we were able to fast-forward our understanding of the inspection workflow – from original request handling and scheduling, to inspecting on-site, to report delivery to the customer. Within a year we had a pretty slick solution and 100 or so customers who swore by the tool for getting their work done.
But we soon ran into friction. Once we'd exhausted the low-hanging fruit of the early adopter community, it became harder and harder to find more of the tech-savvy crowd willing to spend money on something new and different. As you might expect, the community of inspectors we were targeting was not made up of technologists. Many of these folks were perfectly content with their paperwork process and enjoyed working solo. Many had no interest in building a true business around their operation or growing into a company with multiple inspectors covering wider geographies. Others were general contractors doing inspections as a side gig, so it wasn't even their core day-to-day job. With that kind of fragmentation, it was difficult to reach the economies of scale we needed to sell at our target price point. We had some modest success pursuing the larger nationwide franchise organizations, but our sales and onboarding strategy wasn't conducive to getting those deals beyond the small pilot stage. It was still too early for that. We wanted B2B customer sizes and margins, but were ultimately still selling a B2C application. Yes, a home inspector has a business that we were selling to, but the fundamentals of the relationship had far more in common with a consumer product relationship than a corporate one.
By early 2012 we'd stalled out on growth at the individual level. A couple of opportunities to partner with inspection companies on a comprehensive solution for carriers failed, partially for technical reasons, but also because of the immaturity of our existing market. We didn't have a reference base sizable enough to jump all the way up to selling 10,000 seats without enormous burden and too much overpromising on what we could do.
We shut down operations on allinspections in early 2012. We had suspected this would have to happen for a while, so it wasn't a sudden decision. But it always hurts to have to walk away from something you poured so much time and energy into.
I think the biggest takeaway for me at the time, and in the early couple of years of success with Fulcrum, was how relatively little the specifics of your technology matter if you mess up the product-market fit and go-to-market steps in the process. The silver lining in the whole affair was (like many things in product companies) that there was plenty to salvage and carry on to our next effort. We learned an enormous amount about what goes into building a SaaS offering and marketing it to customers. Coming from Geodexy, where we never even reached the stage of having a real "customer success" process to deal with, allinspections gave us a jolt of appreciation for things like identifying the "aha moment" in the product, increasing usage, tracking feature usage to diagnose engagement gaps, and ultimately, getting on the same page as the customer when it comes to the final deliverable. It takes working with customers and learning the deep corners of the workflow to identify where the pressure points are in the value chain – the things that keep the customer up at night when they don't have a solution.
And naturally there was plenty of technology to bring forward with us to our next adventure. The launch of Fulcrum actually pre-dates the end of allinspections, which tells you something about how we were thinking at the time. We weren't thinking of Fulcrum as the "next evolution" of allinspections necessarily, but we were thinking about going bigger while fixing some of the mistakes made a year or two prior. While most of Fulcrum was built ground-up, we brought along some code, but also a whole boatload of lessons learned on systems, methods, and architecture that helped us launch and grow Fulcrum as quickly as we did.
Retrospectives like this help me think back on past decisions and process what we did right and wrong with some separation. That separation can be a blessing: it removes personal emotion and opinion from what happened so you can look at it objectively and turn it into a valuable learning experience. Sometime down the road I'll write about the next evolution that led to where we are today.
Since the mid-2000s, all three of these inspection types have been required for insurance policies in Florida. …
This week we've had Kurt Menke of Bird's Eye View GIS in the office providing a guided training workshop for QGIS, the canonical open source GIS suite.
It's been a great first two days, covering a wide range of topics from his book Discovering QGIS 3.
The team attending the workshop is a diverse group with varied backgrounds. Most are GIS professionals using this as a means to get a comprehensive overview of the basics of "what's in the box" with QGIS. All of the GIS folks have the requisite background using Esri tools from their training, but some of us who have been playing in the FOSS4G space for longer have used QGIS for years to get work done. We've also got a half dozen folks in the session from our dev team who know their way around Ruby and Python but don't have any formal GIS training in their background. This is a great way to expose folks to the core principles and technology in the GIS professional's toolkit.
Kurt's course is an excellent overview that covers the ins and outs of using QGIS for geoprocessing and analysis, and touches on lots of the essentials of GIS (the discipline) along the way. All of your basics are in there – clips, unions, intersects and other geoprocesses, data management, editing, attribute calculations (with some advanced expression-based stuff), joins and relates, and a deep dive on all of the powerful symbology and labeling engines built into QGIS these days1.
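To give a flavor of what those basics look like outside the GUI, here's a minimal sketch run from the QGIS Python console. It isn't from the workshop material, and the layer names ("parcels" and "flood_zone") are hypothetical – it just strings together the kind of clip-and-calculate steps the course walks through interactively:

```python
# A minimal sketch (QGIS Python console), assuming two layers named
# "parcels" and "flood_zone" are already loaded -- hypothetical names.
from qgis import processing
from qgis.core import QgsProject, QgsField, edit
from qgis.PyQt.QtCore import QVariant

parcels = QgsProject.instance().mapLayersByName("parcels")[0]
flood_zone = QgsProject.instance().mapLayersByName("flood_zone")[0]

# Clip parcels to the flood zone with a built-in processing algorithm
clipped = processing.run("native:clip", {
    "INPUT": parcels,
    "OVERLAY": flood_zone,
    "OUTPUT": "memory:clipped_parcels",
})["OUTPUT"]

# Add a calculated area attribute (area is in layer CRS units, so this is
# only square meters if the project uses a meter-based projected CRS)
with edit(clipped):
    clipped.addAttribute(QgsField("area_m2", QVariant.Double))
    clipped.updateFields()
    idx = clipped.fields().indexOf("area_m2")
    for f in clipped.getFeatures():
        clipped.changeAttributeValue(f.id(), idx, f.geometry().area())

QgsProject.instance().addMapLayer(clipped)
```

The same operations are all available through the Processing Toolbox and field calculator, which is how the workshop covers them.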
The last segment of the workshop is going to cover movement data with the Time Manager extension and some other visualization techniques.
Hat tip to Niall Dawson of North Road Geographics (as well as the rest of the contributor community) for all of the amazing development that's gone into the 3.x release of QGIS! …
In the era of every company trying to play in machine learning and AI technology, I thought this was a refreshing perspective on data as a defensible element of a competitive moat. There's some good stuff here in clarifying the distinction between network effects and scale effects:
But for enterprise startups – which is where we focus – we now wonder if there's practical evidence of data network effects at all. Moreover, we suspect that even the more straightforward data scale effect has limited value as a defensive strategy for many companies. This isn't just an academic question: It has important implications for where founders invest their time and resources. If you're a startup that assumes the data you're collecting equals a durable moat, then you might underinvest in the other areas that actually do increase the defensibility of your business long term (verticalization, go-to-market dominance, post-sales account control, the winning brand, etc).
Companies should perhaps be less enamored of the "shiny object" of derivative data and AI, and instead invest in execution in areas challenging for all businesses.
An insightful piece this week from Ben Thompson on the current state of the trade standoff between the US and China, and the blocking of Chinese behemoths like Huawei and ZTE. The restrictions on Huawei will mean some major shifts in trade dynamics for advanced components, chip designs, and importantly, software like Android:
The reality is that China is still relatively far behind when it comes to the manufacture of most advanced components, and very far behind when it comes to both advanced processing chips and also the equipment that goes into designing and fabricating them. Yes, Huawei has its own system-on-a-chip, but it is a relatively bog-standard ARM design that even then relies heavily on U.S. software. China may very well be committed to becoming technologically independent, but that is an effort that will take years.
I continue to be interested in where the world is headed with remote work. Here InVision's Mark Frein looks back at what traits make for effective distributed companies, drawing on past examples of remote collaboration from music production, to gaming, to startups. As he points out, you can have healthy or harmful cultures in both local and distributed companies:
Distributed workplaces will not be an "answer" to workplace woes. There will be dreary and sad distributed workplaces and engaged and alive ones, all due to the cultural experience of those virtual communities. The key to unlocking great distributed work is, quite simply, the key to unlocking great human relationships – struggling together in positive ways, learning together, playing together, experiencing together, creating together, being emotional together, and solving problems together. We've actually been experimenting with all these forms of life remote for at least 20 years at massive scales.
Our design and marketing team put together this awesome shop for company-branded gear – shirts, mugs, and other swag with product brands and other fun stuff (I even got my own tribute). I have to say my personal favorite is Caleb's custom Joy Division homage:
Of course we're not in the business of using this as a money-making venture. All of the proceeds from anything purchased here are going to an organization called Stop Soldier Suicide, co-founded by our friend Nick Black. As should be more widely known, veterans returning home from deployments often struggle with the process of reintegrating into "normal" civilian life. SSS was founded in 2010 to help stop this terrible problem for veterans and their families. Here's more about SSS from their website:
With more than 45,000 veteran service organizations recognized by the IRS, getting help can be difficult, confusing, and time-consuming. We're here to change that. Stop Soldier Suicide works 1-on-1 with troops, veterans, and military family members to help navigate the maze of services, programs, and assistance available.
As pointed out in this piece from Rahul Vohra, founder of Superhuman, most indicators around product-market fit are lagging indicators. With his company he was looking for leading indicators so they could more accurately predict adoption and retention after launch. His approach is simple: polling your early users with a single question – "How would you feel if you could no longer use Superhuman?"
Too many methods in the product development literature orient around asking for user feedback in a positive direction – things like "how much do you like the product?" or "would you recommend it to a friend?" Coming at it from the counterpoint of "what if you couldn't use it?" reverses this. It makes users think about their own experience with the product, rather than a disembodied imaginary user who might use it. It brought to mind a piece of the Paul Graham essay "Startup Ideas" about what happens if you go with the wrong measures of product-market fit:
The danger of an idea like this is that when you run it by your friends with pets, they don't say "I would never use this." They say "Yeah, maybe I could see using something like that." Even when the startup launches, it will sound plausible to a lot of people. They don't want to use it themselves, at least not right now, but they could imagine other people wanting it. Sum that reaction across the entire population, and you have zero users.
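Back to Vohra's single question: turning it into a trackable leading indicator is just a matter of tallying responses. A minimal, hypothetical sketch (not from his article; the answer options and the commonly cited ~40% "very disappointed" benchmark are assumptions drawn from the standard version of this survey):

```python
# Hypothetical sketch: compute the share of users who would be
# "very disappointed" if they could no longer use the product.
from collections import Counter

# Made-up survey answers for illustration
responses = [
    "very disappointed", "somewhat disappointed", "very disappointed",
    "not disappointed", "very disappointed", "somewhat disappointed",
]

counts = Counter(responses)
share = counts["very disappointed"] / len(responses)
# The commonly cited rule of thumb treats ~40% or higher as a strong signal,
# though the exact threshold is a judgment call.
print(f"{share:.0%} would be very disappointed")
```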
Remote work is creeping up in adoption as companies become more culturally okay with the model, and as enabling technology makes it more effective. In the tech scene it's common for companies to hire remote, to a point (as Benedict Evans joked: "we're hiring to build a communications platform that makes distance irrelevant. Must be willing to relocate to San Francisco."). It's important for the movement that large and influential companies like Stripe take this on as a core component of their operation. Companies like Zapier and Buffer are famously "100% remote" – a new concept that, if executed well, gives companies an advantage to compete in markets they might never be able to reach otherwise.
Wolfe's work, particularly his Book of the New Sun "tetralogy", is some of my favorite fiction. He just passed away a couple weeks ago, and this is a great piece on his life leading up to becoming one of the most influential American writers. I recommend it to everyone I know interested in sci-fi. Even reading this made me want to dig up The Shadow of the Torturer and start reading it for a third time:
The language of the book is rich, strange, beautiful, and often literally incomprehensible. New Sun is presented as "posthistory" – a historical document from the future. It's been translated, from a language that does not yet exist, by a scholar with the initials G.W., who writes a brief appendix at the end of each volume. Because so many of the concepts Severian writes about have no modern equivalents, G.W. says, he's substituted "their closest twentieth-century equivalents" in English words. The book is thus full of fabulously esoteric and obscure words that few readers will recognize as English – fuligin, peltast, oubliette, chatelaine, cenobite. But these words are only approximations of other far-future words that even G.W. claims not to fully understand. "Metal," he says, "is usually, but not always, employed to designate a substance of the sort the word suggests to contemporary minds." Time travel, extreme ambiguity, and a kind of poststructuralist conception of language are thus all implied by the book's very existence.
Zoom was in the news a lot lately, not only for its IPO, but also for the impressive business they've put together since founding in 2011. It's a great example of how you can build an extremely viable and healthy business in a crowded space with a focus on solid product execution and customer satisfaction. This profile of founder Eric Yuan goes into the core culture of the business and the grit that made the success possible.
The folks over at FullStackTalent just published this Q&A with Tony in a series on business leaders of the Tampa Bay area. It gives some good insight into how we work, where we've come from, and what we do every day. There's even a piece about our internal "GeoTrivia", where my brain full of useless geographical information can actually get used:
Matt: What's your favorite geography fun fact?
Tony: Our VP of Product, Coleman McCormick, is the longest-reigning champion of GeoTrivia, a competition we do every Friday. We just all give up because he [laughter], you find some obscure thing, like what country has the longest coastline in Africa, and within seconds, he's got the answer. He's not cheating, he just knows his stuff! We made a trophy, and we called it the McCormick Cup.
We've been supporting the Santa Barbara County Sheriff through Fulcrum Community this year for evacuation reporting during emergency preparation and response. It feels great to have technology that can make a real-world, immediate impact like this. The gist of their workflow (right now) is using the app to log where evacuation orders have been posted and where they haven't yet notified residents, and tracking all of it with the slim resources available even in a time of need. Centralizing the reporting has made a big difference:
All of this information is uploaded in real time and is accessible to incident commanders who can follow the progress as an evacuation order is implemented.
"It's really sped up the process, and given us more accurate information," said Nelson Trichler, an incident commander for the sheriff's Search and Rescue Team. "It's a tool we can go back to statistically to see who is responding to these evacuations."
Found via Tom MacWright: a slick and simple tool for run route planning built on modern web tech. It uses basic routing APIs and distance calculation to help plan out runs, which is especially handy in new places. I used it in San Diego this past week to estimate a couple of runs I did. It also has a cool sharing feature to save and link to routes.
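The distance half of that is simple enough to sketch. Here's a minimal, hypothetical version (not the tool's actual code) that sums great-circle distances between consecutive points along a planned route; the sample coordinates are rough, made-up points:

```python
# Sketch: total length of a route given as an ordered list of (lat, lon) points.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def route_length_km(points):
    """Sum the leg distances between consecutive waypoints."""
    return sum(
        haversine_km(lat1, lon1, lat2, lon2)
        for (lat1, lon1), (lat2, lon2) in zip(points, points[1:])
    )

# Example: a short out-and-back (made-up coordinates)
route = [(32.7157, -117.1611), (32.7270, -117.1680), (32.7157, -117.1611)]
print(f"{route_length_km(route):.2f} km")
```

The real tool snaps routes to the road network via a routing API before measuring, which is what makes the distances trustworthy; the straight-line version above is just the underlying arithmetic.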
I mentioned scientist Vannevar Bush here a few days back. This is a piece he wrote for The Atlantic in 1945, looking ahead at how machines and technology could become enhancers of human thinking. So many prescient segments foreshadowing current computer technology:
One can now picture a future investigator in his laboratory. His hands are free, and he is not anchored. As he moves about and observes, he photographs and comments. Time is automatically recorded to tie the two records together. If he goes into the field, he may be connected by radio to his recorder. As he ponders over his notes in the evening, he again talks his comments into the record. His typed record, as well as his photographs, may both be in miniature, so that he projects them for examination.
I thought this was an excellent rundown of remote work, who is suited for it, how to manage it, and the psychology of this new method of teamwork.
Let's first cover values. Remote work is founded on specific core principles that govern this distinct way of operating which tend to be organization agnostic. They are the underlying foundation which enables us to believe that this approach is indeed better, more optimal, and thus the way we should live:
Output > Input
Autonomy > Administration
Flexibility > Rigidity
These values do not just govern individuals, but also the way that companies operate and how processes are formed. And like almost anything in life, although they sound resoundingly positive, they have potential pitfalls if not administered with care.
I found nearly all of this very accurate to my perception of remote work, at least from the standpoint of someone who is not remote but manages and works with many who are. I'm highly supportive of hiring remote. With our team, we've gotten better in many ways by becoming more remote. And another (perhaps counterintuitive) observation: the more remote people you hire, the better the whole company gets at managing it.
Today kicked off our Spring 2019 All Hands. The 59-person team makes for an exciting, hectic, energizing, and fun week! Getting us all in a single room is pretty challenging these days. This morning Tony did his semiannual "AMA" to talk company strategy and focus, and to cover what's new in the business.
After reading The Breakthrough, I've been doing more reading on immunotherapy, how it works, and what the latest science looks like. Another book on my to-read list is An Elegant Defense, a deeper study of how the immune system works. The human defensive system of white blood cells is a truly incredible evolutionary machine – a beautiful and phenomenally complex version of antifragility.
This stuff is crazy. Using modern compute, data science, and gene sequencing, you can now design proteins from your laptop:
Amazingly, we're pretty close to being able to create any protein we want from the comfort of our jupyter notebooks, thanks to developments in genomics, synthetic biology, and most recently, cloud labs. In this article I'll develop Python code that will take me from an idea for a protein all the way to expression of the protein in a bacterial cell, all without touching a pipette or talking to a human. The total cost will only be a few hundred dollars! Using Vijay Pande from A16Z's terminology, this is Bio 2.0.
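To give a sense of what the very first step of a pipeline like that looks like, here's a tiny sketch (not from the article – just an illustration using Biopython) of going from a designed protein sequence to a DNA sequence you could send off for synthesis. The peptide and the one-codon-per-amino-acid table are hypothetical; real workflows codon-optimize for the expression host:

```python
# Hypothetical sketch: back-translate a designed protein into orderable DNA.
from Bio.Seq import Seq  # Biopython

# Made-up short peptide we'd like a bacterial cell to express
protein = Seq("MKTAYIAKQR")

# Naive back-translation: one fixed codon per amino acid (illustration only)
codon_table = {
    "M": "ATG", "K": "AAA", "T": "ACC", "A": "GCC", "Y": "TAT",
    "I": "ATT", "Q": "CAG", "R": "CGT",
}
dna = Seq("".join(codon_table[aa] for aa in protein) + "TAA")  # append stop codon

# Sanity check: translating the DNA back should recover the original protein
assert str(dna.translate(to_stop=True)) == str(protein)
print(dna)
```

The interesting parts of the article come after this point – ordering the synthesis and running the expression through a cloud lab – which is what makes the "never touch a pipette" claim possible.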
This is a fun one. I've been at Spatial Networks almost 10 years now. When I joined we were maybe 10 or 12 people; now we're about 60 and still growing. It's exciting to see the hard work paying off and validated – but like I say to our team all the time, it feels like we're just getting started.
We just finished up a months-long effort updating the design and branding of Fulcrum, from the logo to typefaces to web design and all. As happens with these things, it took longer than we wanted it to when we started, but I'm very pleased with the results.
Tim's post here covers the background and approach we took to doing this refresh:
Sometimes it seems companies change their logos like people change their socks. Maybe they got a new marketing director who wanted to shake things up or a designer came up with something cool while experimenting after hours. We, on the other hand, have never changed our logo. The brief came down the pipeline in 2011 to create a logo for a new initiative called Fulcrum. Many pages of sketches and a few Adobe Illustrator iterations later, the only logo Fulcrum would know for 8 years was born.
We don't take projects for rebranding lightly. Changing this kind of thing too often doesn't impact the bottom line value to your users, can be a confusing moving target for brand recognition in the marketplace, and just plain takes time away from more valuable things. But in our case the need was two-fold: bring the look and feel in line with our family of other brands, and clean it all up after 8 years with our old look.
I started with the first post in this series back in January, describing my own entrance into product development and management.
When I joined the company we were in the very early stages of building a data collection tool, primarily for internal use to improve speed and efficiency on data project work. That product was called Geodexy, and the model was similar to Fulcrum in concept, but in execution and tech stack everything was completely different. A few years back, Tony wrote up a retrospective post detailing the history of what led us down the path we took, and how Geodexy came to be:
After this experience, I realized there was a niche to carve out for Spatial Networks but I'd need to invest whatever meager profits the company made into a capability to allow us to provide high fidelity data from the field, with very high quality, extremely fast and at a very low cost (to the company). I needed to be able to scale up or down instantly, given the volatility in the project services space, and I needed to be able to deploy the tools globally, on-demand, on available mobile platforms, remotely and without traditional limitations of software CDs.
Tony's post was an excellent look back at the business origin of the product – the "why we decided to do it" piece. What I wanted to cover here is more on the product technology end of things, and our go-to-market strategy (if you could call it that). Prior to my joining, the team had put together a rough go-to-market plan trying to guesstimate TAM, market fit, customer need, and price points. Of course, without real market feedback (as in, will someone actually buy what you've built, versus saying they would buy it one day), it's hard to truly gauge the success potential.
Some of the modern web frameworks in use today existed back then, like Rails and its peers, but they were few and not yet mature. It's astonishing to think back on the tech stack we were using in the first iteration of Geodexy, circa 2008. That first version was built on a combination of Flex, Flash, MySQL, and Windows Mobile1. It all worked, but it was cumbersome to iterate on even back then. This was not that long ago, and at the time it was a reasonable suite of tooling; now it looks antiquated, and Flex was abandoned and donated to the Apache Foundation long ago. We had success with that product version for our internal efforts: it powered dozens of data collection projects in 10+ countries around the world, allowing us to deliver higher-quality data than we could before. The mobile application (which was the key to the entire product achieving its goals) worked, but still lacked native integration of richer data sources – primarily photos and GPS data. The former could be done with some devices that had native cameras, but the built-in sensors were too low quality on most devices. The latter almost always required an external Bluetooth GPS device to integrate the location data. It was all still an upgrade from pen, paper, and data transcription, but not free from friction on the ground at the point of data collection. Being burdened by technology friction while roaming the countryside collecting data doesn't make for a smooth user experience, and it invites problems. We still needed to come up with a better way to make it happen – for ourselves, and absolutely before we went to market touting the workflow advantages to other customers.
In mid-2009 we spun up an effort to reset on more modern technology we could build from, learning from our first mistakes and short-circuiting a lot of the prior experimentation. The new stack was Rails, MongoDB, and PostgreSQL, which, looking back from 10 years on, sounds like a logical stack to use even today, depending on the product needs. Much of what we used back then still sits at the core of Fulcrum today.
What we never got to with the ultimate version of Geodexy was a modern mobile client for the data collection piece. Those were still the early days of the App Store, and I don't recall how mature the Android Market (predecessor to Google Play) was back then, but we didn't have the resources to start off with two mobile clients anyway. We actually had a functioning BlackBerry app first, which tells you how different the mobile platform landscape looked a decade ago2.
Geodexy's mobile app for iOS was, on the other hand, an excellent window into the potential iOS unlocked for us as a platform going forward. In a couple of months, one of our developers who knew his way around C++ learned some Objective-C and put together a version that fully worked – offline support for data collection, automatic GPS integration, photos, the whole nine yards of the core toolset we always wanted. The new platform, with a REST API, online form designer, and iOS app, allowed us to up our game on Foresight data collection efforts in a way we knew would have legs if we could productize it right.
We didn't get much further along with the Geodexy platform as it was before we refocused our SaaS efforts around a new product concept that'd tie all of the technology stack we'd built around a single, albeit large, market: the property inspection business. That's what led us to launch allinspections, which I'll continue the story on later.
In an odd way, it's pleasing to look back on the challenges (or things we considered challenges) at the time and see how they contrast with today. We focused so much attention on things that, in the long run, aren't terribly important to the lifeblood of a business idea (tech stack and implementation), and not enough on the things worth thinking about early on (market analysis, pricing, early customer development). Part of that, I think, stems from our indexing on internal project support first, but also from inexperience with go-to-market in SaaS. The learnings ended up being invaluable for future product efforts, and still help to inform decision making today.
As painful as this sounds, we actually had a decent tool built on WM. But its usability was terrible, which, if you can recall the time period, was par for the course for mobile applications of all stripes. …
We recently began a corporate sponsorship for the QGIS project, the canonical open source desktop GIS. I got back into doing some casual cartography work using QGIS in December after a years-long hiatus (I don't get to do much actual work with data these days). Bill wrote up a quick post about how we rely on it every day in our work:
As a Mac shop, the availability of a sophisticated, high-quality, powerful desktop GIS that runs natively on MacOS is important to us. QGIS fits that bill and more. With its powerful native capability and its rich third-party extension ecosystem, QGIS provides a toolset that would cause some proprietary software, between core applications and the add-on extensions needed to replicate the full power of QGIS, to cost well more than a QGIS sponsorship.
The team over at Lutra Consulting, along with the rest of the amazing developer community around QGIS, has been doing fantastic work building a true cross-platform application that stacks up against any other desktop GIS suite.
As premier sponsors of the American Geographical Society, we try to do our part in promoting geographic literacy, education, and the future of geo sciences.
Part of our efforts this week is participating in the GIS Career Fair at Hunter College in Manhattan. Bill and I were there to showcase how geography fits into our business and to talk with students about what it means to build a career centered around GIS and mapping. We talked with dozens of people about all aspects of the industry – a diverse group interested in environmental science, energy, space, and more.
It's good to see the energy and excitement in the geospatial industry.
Spatial Networks is past 50 employees now, with a sizable remote group scattered all over the country. Even though we've grown substantially in 2018, we've been able to scale our processes, tools, and org chart to maintain pretty effective team dynamics and productivity. When we first started hiring remote folks back in 2010, we had nowhere near the foundation in place to have an effective distributed team.
This week is our 2nd "All Hands" of the year, where our entire remote team comes to St. Petersburg HQ for a week of teamwork, group projects, and fun camaraderie. A total of 18 people representing 11 states will be in town. These weeks are at once energizing, exciting, and exhausting – but also always a positive exercise. I'm glad to work at a place where we've consistently valued this investment and made the effort to keep it going as we've scaled.
Slack grew huge on the idea that it would "replace email" and become the digital hub for your whole company. In some organizations (like ours), it certainly has, or has at least subsumed almost all internal-only communication. Email still rules for long-form official stuff. Slack is booming into a multi-billion dollar valuation on its way to an IPO on this adoption wave.
But over the last couple of years there's been something of a backlash to "live chat" systems. Of course any new tool can be abused to the point of counter-productivity. As tools like Slack and Intercom (a live chat support software) have become widespread, people and companies need to find normal patterns of use that are comfortable for everyone. In our company, Slack is where nearly everything happens – including quite a bit that, on the surface, looks like noise and random chatter (our #random is something to behold). One common argument is that people now spend more time keeping up with Slack conversation than they ever did with email. Maybe so, maybe not. But regardless, isn't analysis of the time spent on one versus the other missing the point?
My general argument "pro-chat" is that a world with Slack adds the layer of communication that should have been happening all along and wasn't. For me, I know that I'm better informed about the general activity of the business with Slack than without. It takes some care and attention to keep it from becoming a distraction when it's unnecessary, but I'm willing to make the effort.
Anyone who compares the world of Corporate Slack to the prior one would notice a striking similarity in work patterns. Workplaces are social; people are people, and will talk, joke, commiserate, and enjoy each other's company. I try to picture a world where we could effectively work as a distributed team of 50+ people dispersed over 11 states without tools like Slack. Looking at it that way, it's easy to see the downsides as manageable things we'll figure out.
Effectively using new systems for collaboration is just as much about adapting our own behavior as it is about the feature set of the new tool. No tool is perfect for everything (as much as their marketing might say so). I think much of the pushback comes from those who don't want to change – they want all the benefits of a system that conforms to their comfort zone.
Fulcrum, our SaaS product for field data collection, is coming up on its 7th birthday this year. We've come a long way: from a bootstrapped, barely-functional system at launch in 2011 to a platform with over 1,800 customers, healthy revenue, and a growing team expanding it to ever larger clients around the world. I thought I'd step back and recall its origins from a product management perspective.
We created Fulcrum to address a need we had in our business, and quickly realized its application to dozens of other markets with a slightly different color of the same issue: getting accurate field reporting from a deskless, mobile workforce back to a centralized hub for reporting and analysis. While we knew a data collection platform wasn't a brand new invention, we knew we could bring a novel solution combining our strengths, and that other existing tools on the market had fundamental gaps in areas we saw as essential to our own business. We had a few core ideas which, combined, would give us a unique and powerful foundation we didn't see elsewhere (a rough sketch of what they imply for the data model follows the list):
Use a mobile-first design approach – Too many products at the time still considered their mobile offerings afterthoughts (if they existed at all).
Make disconnected, offline use seamless to a mobile user – They shouldn't have to fiddle. Way too many products in 2011 (and many still today) took the simpler engineering approach of building for always-connected environments. (requires #1)
Put location data at the core – Everything geolocated. (requires #1)
Enable business analysis with spatial relationships – Even though we're geographers, most people don't see the world through a geo lens, but should. (requires #3)
Make it cloud-centric – In 2011 desktop software was well on the way out, so we wanted a platform we could host in the cloud with APIs for everything. Building from building-block primitives let us scale horizontally on the infrastructure.
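To make those principles concrete, here's a hypothetical sketch (not Fulcrum's actual schema or code) of the kind of client-side record model they imply: every record is geolocated, can be created with no connection, and waits in a local queue until the cloud API acknowledges it.

```python
# Hypothetical sketch of an offline-first, geolocated record model.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import uuid4

@dataclass
class Record:
    form_id: str        # which form/schema the record belongs to
    latitude: float     # location captured for every record (principle #3)
    longitude: float
    values: dict        # form field values keyed by field name
    id: str = field(default_factory=lambda: str(uuid4()))
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    synced: bool = False  # False until the cloud API acknowledges it

class OfflineQueue:
    """Holds records created in the field until connectivity returns (principle #2)."""
    def __init__(self):
        self.pending = []

    def add(self, record: Record):
        self.pending.append(record)

    def sync(self, push_to_api):
        """Push each pending record; keep any that fail for the next attempt."""
        still_pending = []
        for record in self.pending:
            if push_to_api(record):   # push_to_api is any callable hitting the cloud API
                record.synced = True
            else:
                still_pending.append(record)
        self.pending = still_pending
```

Nothing about this is sophisticated on its own; the point of the list above is that combining these behaviors by default, rather than bolting them on later, was the differentiator.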
Regardless of the addressable market for this potential solution, we planned to invest and build it anyway. At the beginning, it was critical enough to our own business workflow to spend the money to improve our data products, delivery timelines, and team efficiency. But when looking outward to others, we had a simple hypothesis: if we felt these gaps were worth closing for ourselves, the fusion of these ideas would create a new way of connecting the field to the office seamlessly, while enhancing the strengths of each working context. Markets like utilities, construction, environmental services, oil and gas, and mining all suffer from a body of logistical and information management challenges similar to ours.
Fulcrum wasn't our first foray into software development, or even our first attempt to create our own toolset for mobile mapping. Previously we'd built a couple of applications: one that never went to market and was completely internal-only, and one we did bring to market for a targeted industry (building and home inspections). Both petered out, but we took away revelations about how to do it better and apply what we'd done to a wider market. In early 2011 we went back to the whiteboard and conceptualized how to take what we'd learned the previous years and build something new, with the foundational approach above as our guidebook.
We started building in early spring and launched in September 2011. It was free accounts only, with no multi-user support, only a simple iOS client, and no web UI for data management – suffice it to say it was early. But in my view this was essential to getting where we are today. We took our infant product to FOSS4G 2011 to show what we were working on to the early adopter crowd. Even with such an immature system we got great feedback. This was the beginning of learning a core competency you need to make good products, what I'd call "idea fusion": the ability to aggregate feedback from users (external) and combine it with your own ideas (internal) to create something unified and coherent. A product can't become great without doing these things in concert.
I think it's natural for creators to favor one path over the other – either falling into the trap of only building specifically what customers ask for, or creating based solely on their own vision in a vacuum with little guidance from customers on what pains actually look like. The key, I've learned, is to find a pleasant balance between the two. Unless you have razor-sharp predictive capabilities and total knowledge of customer problems, you end up chasing ghosts without course correction based on iterative user feedback. Mapping your vision to reality is challenging to do, and it assumes your vision is perfectly clear.
On the other hand, waiting at the beck and call of your user to dictate exactly what to build works well in the early days when you're looking for traction, but without an opinion about how the world should be, you likely won't do anything revolutionary. Most customers view a problem with a narrow array of options to fix it – not because they're uninventive, but because designing tools isn't their mission or expertise. They're on a path to solve a very specific problem, and the imagination space for how to make their life better is viewed through the lens of how they currently do it. Like the quote (maybe apocryphal) attributed to Henry Ford: "If I'd asked customers what they wanted, they would've asked for a faster horse." In order to invent the car, you have to envision a new product completely unlike the one your customer is even asking for, sometimes even requiring other industries to build up around you at the same time. When automobiles first hit the road, an entire network of supporting infrastructure existed around draft animals, not machines.
We've tried to hold true to this philosophy of balance over the years as Fulcrum has matured. As our team grows, the challenge of reconciling requests from paying customers with our own vision for the future of work gets much harder. What constitutes a "big idea" gets even bigger, and the pull to treat near-term customer pains becomes ever stronger (because, if you're doing things right, you have more of them, holding larger checks).
When I look back to the early '10s at the genesis of Fulcrum, it's amazing to think about how far we've carried it, and how evolved the product is today. But while Fulcrum has advanced leaps and bounds, it also aligns remarkably closely with our original concept and hypotheses. Our mantra about the problem we're solving has matured over 7 years, but hasn't fundamentally changed in its roots.
Great post from Tim on our recent all-hands sprint at our new office.
I've seen our company transform itself from all on-site to very distributed inside of 5 years. It's been an interesting evolution, finally clicking after some initial false starts. Growing with a dispersed team is a challenge, to say the least, in finding the right patterns for communication and striking a balance between face-to-face interaction and teleworking. Getting it right allows us to grow our team with the right people, instead of arbitrarily restricting ourselves to people who live nearby.
We've just posted a map of Kabul, Afghanistan built from Spatial Networks map data. I built this a couple of months back (with TileMill) for some mobile field collection project work we were doing with Fulcrum. This is the sort of challenging work our company is out there doing – bringing high-tech (yet cheap and simple) solutions to up-and-coming communities like Kabul.