How do you verify trust?

Wed, 26 May 2021 - Gertjan Filarski

Developing software requires knowledge in two domains: the subject matter and engineering. Most members of a team are experts in one and not the other. In my blog last week, I explored what happens when we cannot prove that subject-matter experts reliably transferred their knowledge to software engineers. At that point team members can only hope an application will do the right things, and projects start suffering from Hopitis. For further reading - please follow this link.

This week I will look at the way we try to deal with Hopitis and start to explore a more quantifiable approach.

Current remedies

Dealing with trust within a team is considered a 'soft' issue and is often the responsibility of a project manager or Scrum master. This reflects, not entirely coincidentally, two fundamentally different approaches to project management. But spoiler alert: neither cures Hopitis.

The first is the rigid approach. This is commonly called 'waterfall' management, but I disagree with the somewhat dismissive meaning attached to that name. I prefer 'rigid'. Rigid is not bad: it is sturdy, and it provides many organizations with a system of control. Maybe it is not the most effective system, and it is certainly not my first choice, but it often gets the job done and it integrates well with other business units and departments. Rigid project management methods revolve around governance, responsibility, and accountability. Within that regulatory framework, subject-matter experts and software developers are often surprisingly free to get the job done. Rigid systems deal in advance with expected problems. If something like X happens, who is responsible for solving it, and who is accountable? Many rigid systems excel in risk analysis. But signalling the potential for Hopitis - if it is even recognized - does nothing to remedy it. A proper risk assessment includes remedial actions, but when trust is lost within a team, it is exceptionally difficult to regain. Trust takes years to build, seconds to break, and forever to fix.

The second approach is known as agile and seeks to keep trust issues from developing into problems through short feedback loops. After each iteration, users, domain experts, and developers sit together and review new functionality. This demonstration is intended to build and retain trust. One team member has a dedicated role (the product owner) to mediate between subject-matter experts and software engineers. After consultation with users and other stakeholders, the product owner signs off on the features that the team will work on. But intermediation is a double-edged sword. Although it avoids the pitfall of engineers needing to become domain experts, it introduces another step in communication. The subject-matter experts may eventually trust the product owner in the team. But that trust is not automatically extended to the software developers. How do we confirm that knowledge was reliably and consistently transferred from the subject-matter experts to the product owner? And from the product owner to the engineers?

Virtually all project management methods agree that knowledge transfer is important. But none can guarantee the reliability of that transfer. In most instances, the problem is recognized and piled together with the other soft communication issues that teams need to deal with. Moreover, no current method leaves a structured trail of trust that future teams can use to review and assess the code, and to trust it as a basis for continued sound development.

Trust and verifiable understanding

Trust grows incrementally, like a step function. In most professional relationships we start with a little bit of initial trust. Let's call that the benefit of the doubt. Trust will not increase by itself: something has to happen to move it upwards over time. The most important 'something' that pushes trust up is the moment when we get to verify that we understand each other. If we fail to do that for a prolonged period of time, trust even tends to decrease.

I started Fourdays with Jauco because we wanted to help subject-matter experts and software engineers understand each other without having to go through the process of learning a new language. Although learning one definitely doesn't hurt, it should never be a requirement for success. Besides, a shared language does not provide anyone else with tangible and reproducible proof of understanding.

Although I don't believe you can quantify trust, I do think we can create a metric - or better, a model - for verifiable understanding. To reach understanding, both sides need to agree on a shared mental model. That model is ambiguous, and everyone has their own interpretation of it. To clarify what we mean, we often start writing project plans, propositions, and assessments. Although these may be less ambiguous than a shared mental model, they are still texts that require interpretation. When our interpretations (seem to) align, people (think they) understand each other, and when there is a mismatch - well, they don't.

Explicit understanding == trust

I think Fourdays can make the shared mental model explicit by defining hypotheses: "If X then assert Y" - much akin to unit tests. But as I claimed in my earlier blog: unit tests check if an application does things right. Now we want to assert whether or not the application is doing the right thing. Individual hypotheses reflect the knowledge transferred from the domain expert to the software engineer: e.g. "If the number of hours that a consultant is expected to work per year is below 1636, then report they do not meet EU subsidy requirements.", "If the age of a user is younger than 18, then a user record cannot be created." etc.
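To make this concrete, here is a minimal sketch of the second hypothesis written as an executable check, in the style of a unit test with pytest. The function create_user_record and the PermissionError it raises are assumptions made purely for illustration; they are not part of any existing codebase.

    import pytest

    # Hypothetical application code, assumed purely for illustration.
    def create_user_record(age: int) -> dict:
        if age < 18:
            raise PermissionError("users younger than 18 cannot be registered")
        return {"age": age}

    # Hypothesis: "If the age of a user is younger than 18,
    # then a user record cannot be created."
    def test_minor_cannot_create_record():
        with pytest.raises(PermissionError):
            create_user_record(age=17)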

The trick is that we need to phrase the hypotheses in such a way they can be automatically tested. When we manage to do that, the model can grow over time as the project progresses. After every change in the code we can reassert the model. When the application passes the assessment we have verified that software engineers and subject-matter experts still understand each other and the code is doing the right thing. If it fails we can flag the disparity immediately and discuss it. I think continuous verification is the bedrock for trust. It allows all team members to trust that the application reliably reflects the transferred knowledge between the subject-matter experts and the engineers.
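As a sketch of what 'reasserting the model' could mean in practice, every recorded hypothesis could be stored alongside its provenance - who stated it and when - and re-run against the current code after every change, with any failure reported as a disparity to discuss. The Hypothesis structure and the can_create_record helper below are assumptions for illustration, not a prescription.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Hypothesis:
        expert: str                 # who transferred the knowledge
        stated_on: str              # when the statement was recorded
        statement: str              # the "If X, then assert Y" phrasing
        check: Callable[[], bool]   # an executable version of the statement

    def reassert_model(model: List[Hypothesis]) -> List[Hypothesis]:
        # Re-run every hypothesis against the current code; return the ones that fail.
        return [h for h in model if not h.check()]

    # Hypothetical application behaviour, assumed purely for illustration.
    def can_create_record(age: int) -> bool:
        return age >= 18

    model = [
        Hypothesis(
            expert="Subject-matter Expert X",
            stated_on="2020-09-27 16:44:23",
            statement="If the age of a user is younger than 18, "
                      "then a user record cannot be created.",
            check=lambda: not can_create_record(age=17),
        ),
    ]

    # Run after every change in the code; each failure is a disparity to flag and discuss.
    for failed in reassert_model(model):
        print(f"Disparity: {failed.expert} ({failed.stated_on}): {failed.statement}")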

Example

Subject-matter Expert X on September 27, 2020 16:44:23 - "If the age of a user is younger than 18, then a user record cannot be created."

Subject-matter Expert Y on November 12, 2020 12:51:42 - "If a user registers as a legal guardian, then they need to send Documents as proof."

Subject-matter Expert Y on November 12, 2020 12:53:38 - "If the Documents match some condition, then the legal guardian user can create new records for users younger than 18."

When the team implements the feature 'add legal guardian users', they reassert their shared model to verify understanding. The model immediately raises the issue that on September 27th expert X claimed a conflicting truth. The model does not care whether expert X was wrong, whether the transferred knowledge has become outdated, or whether the software engineer did not understand X properly. It only flags that there is no longer a consistent shared understanding of what the application is supposed to do.

The team of engineers and experts decides to solve the issue by modifying the model:

Subject-matter Expert Y on November 19, 2020 17:04:10 - "If the age of a user is younger than 18 and the record is not being created by a verified legal guardian user, then a user record cannot be created."
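As a sketch, under the same illustrative assumptions as the earlier examples, resolving the conflict amounts to retiring the September 27th hypothesis and asserting the amended one in its place; the can_create_record helper below is again hypothetical.

    # Hypothetical application behaviour after the change, for illustration only.
    def can_create_record(age: int, created_by_verified_guardian: bool = False) -> bool:
        return age >= 18 or created_by_verified_guardian

    # Amended hypothesis (Subject-matter Expert Y, replacing the 2020-09-27 statement):
    # "If the age of a user is younger than 18 and the record is not being created
    #  by a verified legal guardian user, then a user record cannot be created."
    def test_minor_record_requires_verified_guardian():
        assert can_create_record(age=17, created_by_verified_guardian=True)
        assert not can_create_record(age=17, created_by_verified_guardian=False)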

Every time we change and reassert the model, trust between software developers and subject-matter experts grows. And years from now, when new engineers and experts need to add a feature to the codebase, they too can rely on this model. Verifiable understanding leaves no space for vagueness or ambiguity. It makes the expectations of the code transparent and reproducible.

Implementation

At Fourdays, Jauco and I have only started to think about modelling verifiable understanding. We intend to continue working on the subject over the coming weeks and months. In blogs and videos we will document our work and share our discoveries with you. Let's see if we can create a platform tailored for building a shared mental model. A tool that enables software engineers to build and implement applications (like any other development platform), but which is also useful for subject-matter experts. We envision a product that invites everyone to experiment with the code, whether you are a developer or not. A product that demands that you prove every addition to the model, and which tracks the consequences meticulously. We started Fourdays to help domain experts and engineers to verifiably trust each other. Instead of hoping that things will work out in the end.

Can we help you?

Audits & Consultancy

An extra set of eyes to see if you are where you want to be, or going where you had imagined.

Grant Acquisition

Both editing and authoring of (EU) funding proposals.

Interim Management

Temporary assignments at home or abroad to get projects and teams up to speed - or back on the rails.