What this product does
Gecko makes robots that inspect industrial assets like tanks, pressure vessels, pipes, and boilers — anything huge and ferrous — and the software that delivers the inspection data to customers. This app is where customers review that data and use it to track damage, assess risk, and decide what to do next.
What we got wrong the first time
Two years ago, I led the first version of this product. It worked for the basic use case of reviewing inspection data, but as Gecko experimented with more types of customers, the design failed to evolve with the diverging needs of the expanding customer base.
Here's what I learned.
Hardcoded templates are too brittle for core platform pages
Different user types care about different information on the same asset, and different customers have entirely different classes of assets, from storage tanks to aircraft carriers. One rigid asset-page layout couldn't accommodate that breadth, so each new case required custom engineering and added complexity to the cases that already existed.
An archipelago of features does not add up to a platform workflow
Over two years of customer engagements, I embedded with teams building features for specific users — asset integrity managers, field engineers, and executives. Each little island of features worked for its user type, but didn't connect to the next user and their task, resulting in dead ends instead of a workflow loop.
A new iteration
Two years later, Gecko has renewed its focus on the core workflow of inspection delivery and industrial asset management. Here's how I'm approaching it differently this time.
Starting from jobs, not screens
I led the team through a jobs-to-be-done analysis of the entire asset lifecycle: every job and task we'd learned about across two years of customer work, mapped by lifecycle stage and user type. The resulting map illustrates the flow of work between users, and how it loops back around: inspections produce information, information feeds decisions, decisions lead to actions, and actions' results are evaluated at the next inspection.
The key product insight
Mapping the jobs made clear that Gecko's customers aren't buying data, they're buying confidence. Asset integrity managers face a recurring, high-stakes, murky question: of hundreds of assets, which are in worst shape, which pose the most risk, and how should a limited budget be spent to keep them alive? Their existing tools for answering this — for generating enough confidence to take action — are data warehouses and homespun pivot-table workflows. So Gecko's software can do more than just present inspection data; it can authoritatively answer this question and make customers more confident in their plans than ever before.
With this framing, we identified the key risk-related use cases that bring each user type into the app, and we charted the jobs that would need to happen to satisfy each case.
Modular design strategy
We knew that every customer thinks about assets primarily in terms of risk. But the way risk is calculated and acted on varies by customer, industry, asset type, and user. So rather than designing a single "risk-centric asset page" to serve them all, I proposed a modular approach: compose the page from independent blocks that work on their own and can be assembled differently to meet each user's needs.
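A minimal sketch of the idea, in TypeScript. Everything here is illustrative, not the actual implementation: `AssetModule`, `composePage`, and the example blocks are hypothetical names standing in for the real system.

```typescript
// Hypothetical sketch of the modular-page idea; all names are illustrative.
// Each block knows how to render itself against asset data, so a page
// layout is just a selection and ordering of blocks per user type.

interface AssetModule {
  id: string;
  title: string;
  render(asset: Record<string, unknown>): string;
}

// Composing a page is nothing more than running the chosen modules in order.
function composePage(
  modules: AssetModule[],
  asset: Record<string, unknown>
): string {
  return modules.map((m) => `${m.title}\n${m.render(asset)}`).join("\n\n");
}

const conditionSummary: AssetModule = {
  id: "condition-summary",
  title: "Condition summary",
  render: (asset) => `Risk rank: ${asset["riskRank"]}`,
};

const assetInfo: AssetModule = {
  id: "asset-info",
  title: "Asset information",
  render: (asset) => `Built: ${asset["dateBuilt"]}`,
};

// Different user types get different assemblies of the same blocks.
const integrityManagerPage = composePage([conditionSummary, assetInfo], {
  riskRank: 3,
  dateBuilt: "1987",
});
```

The point of the pattern is that adding a new customer or user type means choosing and ordering blocks, not building a new page.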
Rethinking the data model
In parallel with the JTBD work, I started a project with our eng team to iterate on our data model. The existing model was heavy with entities needed to parse and display inspection data, but light on entities that matched user mental models. Missing concepts from the latter category were kludged into the former, resulting in awkward interactions like representing assets' repairs as markup annotations.
We clustered the jobs by customer concept to determine which concepts were missing and which entities needed to become first-class objects: Repairs, Actions, Plans, and a protocol for Events.
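To make the entity work concrete, here is a hedged sketch of what "first-class objects plus an Event protocol" could look like. The entity names come from the work above; every field and the `repairToEvent` helper are assumptions for illustration only.

```typescript
// Illustrative sketch; entity names are from our modeling work, but all
// fields here are assumed for the example.

interface Repair {
  id: string;
  assetId: string;
  description: string;
  completedOn?: string; // ISO date
}

interface Plan {
  id: string;
  assetId: string;
  name: string;
  actionIds: string[];
}

interface Action {
  id: string;
  planId: string;
  description: string;
  dueOn?: string;
}

// Events are a protocol rather than a table: anything with a subject asset
// and a timestamp can appear on that asset's timeline.
interface AssetEvent {
  assetId: string;
  occurredOn: string;
  kind: string; // e.g. "inspection" | "repair" | "action"
  summary: string;
}

// A completed repair projects into the Event protocol, so the timeline
// module can display it without knowing about Repairs specifically.
function repairToEvent(r: Repair): AssetEvent {
  return {
    assetId: r.assetId,
    occurredOn: r.completedOn ?? "",
    kind: "repair",
    summary: r.description,
  };
}
```

Modeling repairs this way, instead of as markup annotations, lets them participate in plans, actions, and timelines directly.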
Prototyping in code
These entities and their clusters of jobs seemed likely to become our modules. To establish a hierarchy within each module, I ranked the jobs it could do by how discoverable they needed to be: obvious, easy, or possible. Then I sketched very lo-fi UI, just enough of a foothold to start prototyping in code.
The result
I built our initial set of modules, customized them for our first customer, and composed them into an asset detail page for that customer's asset integrity user. Using the updated data model, we hydrated the page with data from this customer's real assets and inspections, and we shared it with them for feedback.
The modules
- Asset condition summary with overall risk rank (customizable with the customer's own risk-ranking logic)
- List of damage mechanisms observed in inspections, with a corresponding detail pane
- List of health plans — the customer's term for damage-mitigation scenarios — and a plan detail pane where users can create Actions
- List of Actions with a corresponding Action detail pane
- Timeline that displays all Events related to the asset
- Block of asset information such as date built and nominal thickness
- Lightweight file uploader/viewer for asset docs
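The first module in the list notes that risk ranking is customizable with the customer's own logic. One way to sketch that separation, with hypothetical names and an invented wall-loss formula standing in for any real customer's method:

```typescript
// Hypothetical sketch: the customer supplies their own risk-ranking
// function, and the condition-summary module calls it without knowing
// the formula. The wall-loss logic below is invented for illustration.

type Reading = { thickness: number; nominal: number };
type RiskRanker = (readings: Reading[]) => number;

// One customer might rank by worst fractional wall loss, bucketed 1-5.
const wallLossRanker: RiskRanker = (readings) => {
  const worstLoss = Math.max(
    ...readings.map((r) => (r.nominal - r.thickness) / r.nominal)
  );
  return Math.min(5, Math.max(1, Math.ceil(worstLoss * 10)));
};

// The module renders whatever rank the injected logic produces.
function renderConditionSummary(
  ranker: RiskRanker,
  readings: Reading[]
): string {
  return `Overall risk rank: ${ranker(readings)}`;
}

const summary = renderConditionSummary(wallLossRanker, [
  { thickness: 0.42, nominal: 0.5 }, // 16% wall loss
  { thickness: 0.48, nominal: 0.5 }, // 4% wall loss
]);
```

Swapping in a different customer means swapping the `RiskRanker`, not rebuilding the module.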
Where this is headed
Early customer response to this design has been strong. Next for me: more onsite visits with this customer and others to learn more about the users, to see how this design breaks, and to run sprints of iteration like the ones that worked so well on Power Docs.
