Tackle conflict
Most humans are reasonable and want to succeed. Most are also averse to open conflict; it's in our nature to collaborate, not to confront. Conflict usually surfaces when reality does not meet expectations, when the project team is not reaching the desired outcomes.
The first step in resolving conflict and charting a path forward is to truly understand the origin of the conflict and the desired outcomes. Sometimes it may be about our failure, and sometimes it may be related to a stakeholder’s job security or an insecurity with new technologies.
Here are a few areas that we’ve observed to cause conflict:
- Failure (i.e., missed expectations around quality or speed). Primarily an issue with either our work product or how we manage the process; more on this below.
- Religious. Often a result of ingrained familiarity with a tech stack or process. Individuals who feel threatened by change and new technologies will often resist and raise conflict. The same applies to process change and adoption.
- Infrastructure access and stability. Either a lack of DevOps skills or highly regulated access leads to conflict between our teams and client infrastructure teams.
- Values misalignment. The values practiced by a specific individual or the key sponsor clash with our values. Instead of delivering results, individuals focus on producing activity. Instead of being transparent, stakeholders play double-sided politics. If values are too far apart, the relationship will not stand the test of time.
Taking ownership of failure
It doesn’t happen often, but we fail. We’re in the business of tackling complex business challenges using bespoke software; we’re predestined to trip up here and there. While sometimes unavoidable, failure teaches everyone an important lesson about perseverance, and some of the strongest, longest relationships are forged while navigating heated projects. No matter the scenario, we always look out for our clients, and we always ship.
Our clients are used to vendors failing. How we react to failure, however, defines the type of partner that we are.
We win as a team, and we lose as a team. As we identify the root cause, we focus on learning from the experience as well as investing our own skin in the game to right the wrong. This behavior is unique to us as a business and unlike anything that our large competitors can do. We sincerely, honestly, and candidly take ownership of our failures, and we always deliver.
You may be wondering what failure means when it comes to our work. Here are some examples:
- Failure to anticipate complexity. The team finds the implementation to be of much higher complexity than what was researched and proposed during the sales process. The project will take longer and cost more money.
- Failure to manage performance. Feature delivery is prioritized over technical debt, which leads to production stability issues. Clients lose trust in our technical maturity and ability to complete the work.
- Failure to meet sprint commitments. Often the burden of a freshly formed team or a team that struggles with leadership and ownership. Manifests in highly volatile velocity over time and an unpredictable roadmap and go-to-market.
- Failure to manage expectations (and stakeholder behavior). The team faces mid-sprint changes to scope, growth of scope in the backlog, and promises made to the client that the team cannot realistically keep.
- Failure of competency. Happens rarely; manifests in the team making decisions detrimental to the product, for example, a poorly designed data schema or an uninformed user experience design.
In most scenarios above, failure is shared not only by the team but also by the client. That, however, is expected in our delivery model and our value of taking ownership. Even that which is outside our control . . . is something we need to own and manage.
Building trust is easy through delivery. Rebuilding trust is very hard and takes time, communication, and honesty.
Analyzing the root cause of the above failures is outside the scope of this book. To start rebuilding trust, however, we need to be honest and own the areas of failure within our control.
Clearly identifying the real crisis
The first thing to do when a crisis is escalated is to acknowledge that further investigation is needed. Perceptions are biased, the team is biased, the client is biased, and so the crisis needs to be approached as if there is not yet any concrete evidence.
Collect evidence by interviewing three parties: the client sponsor/stakeholder, the product manager / design manager, and the engineering team lead. While the managing director is in charge, everyone should be involved in gathering evidence with the understanding that conflicting opinions are inevitable. The first step in conflict resolution is realigning the whole team.
Conflict is a process to reach a desired outcome. Make sure you understand the motivations and outcomes before trying to solve the conflict.
Capture the varied perspectives on the issue. Document feedback and get ready to work toward a shared resolution. As a team, work to understand the current state, empathize with everyone involved, and chart a speedy recovery as the ultimate goal.
Proof: Owning performance issues forged a path forward
An event management enterprise manages the audiovisual equipment for large shows; its custom-built ERP system coordinates inventory, work orders, billing, account management, and all other functions that run the company. The client was introduced to Devbridge after the previous vendor was unsuccessful, struggling with the complexity of the platform. Incompetent architecture decisions had led to an unstable tool with massive performance issues, technical debt, and even faulty business logic with unreliable data.
After a short discovery period, we proposed a two-stage approach to salvage the platform:
- stabilize the core; and
- release necessary features to make the tool viable for the business.
During the first phase, the business did not see much value in our work—after all, we were fixing things that were supposed to work. During the second phase, we accelerated feature development to catch up to the desired go-to-market schedule for all properties that the client was managing. We delivered according to plan and raced to the finish line.
Mounting costs and an extended schedule for going live were frustrating for stakeholders. All attention and prioritization went to features and to enabling all the client’s locations on the platform, which meant deprioritizing performance. Predictably enough, technical debt accumulated along the way, and performance issues on several pages in one production instance were escalated as showstoppers.
In retrospect, it’s comical how similar our failure was to the failure of the previous vendor, even if the issues were skin deep and quickly addressed.
The client’s perception is reality, and we did not manage it effectively.
It wasn’t long before an email notified us that the platform would be cut and the investment written off unless we demonstrated our competence and were able to right the course moving forward.
We rallied. We didn’t act on feedback from a single stakeholder alone. We hosted interviews with all parties involved and collected perceived challenges from our product manager and technical team lead as well as the client. Each viewpoint was different, though all were correct, and all were critical to formulating a strategy to move forward and recover trust. We discovered that performance was the core issue, but only on certain pages and only with certain large data sets: two application screens timed out on substantial work orders, and the remaining issues were specific to launching the application in one select location.
To better understand the severity of the issue (architectural, a defect, or bad implementation of said page), we pulled in third-party technical expertise. Fortunately, it wasn’t as bad as everyone thought. We also discovered that the team had deprioritized analytics and performance monitoring in favor of feature releases. Whether that was a good idea or a bad idea, the whole team was responsible for these decisions, including the priorities set by the stakeholders. If performance had been monitored as a nonfunctional requirement, these issues could have been picked up sooner and addressed before becoming painful. To be fair, it was on Devbridge not only to make this recommendation but also to escalate it if our recommendations went unheard.
Furthermore, during the discovery process, we learned that our own team was out of sync: communication and trust between our product manager and our technical team lead were low, and their perceptions of reality differed. This could be attributed to stress, approaching deadlines, lack of focus, and many other reasons. Regardless, our client now perceived a lack of technical competence in our team.
Truly understanding the issue and removing individual bias (client, team, etc.) was incredibly important. We took the following steps to recover:
- Addressed priorities. Stakeholders were notified that we would pull a few members from each team to form a new performance-centric group whose roadmap would include the resolution of all issues and payback of technical debt. Some features would have to wait.
- Demonstrated swift recovery. We identified key pages to fix and immediately pushed those fixes out into the platform. Because the issues were not architectural in nature, they took hours instead of weeks.
- Showed personal attention. We flew out to meet with the client’s full executive team. We shared the story and ran a live demo during the session, showing how key pages loaded immediately. We provided a full list of fixes and informed the team that they would be resolved within several days.
- Demonstrated goodwill. Because of our failure to manage performance, we threw a highly desirable feature into the roadmap at no cost to the client. In this case, it was digital contract signatures using DocuSign.
- Rebuilt trust. We built a communication strategy for our primary stakeholder that helped win back trust from all users of the system (remember, the platform was live while these issues were taking place). This strategy included weekly update emails announcing new feature releases, performance improvements, bug fixes, and more.
- Paved the way forward. We scheduled a video recording with one of the client locations to interview their team and demonstrate how the platform was working successfully for the client.
The key lesson to take away is that each engagement, each account needs to be proactively managed. Each relationship is like a thread that keeps unspooling continuously—only individual attention and meticulous care can maintain it and guarantee it won’t snap.
Monitoring risk
While you will not be able to avoid failure altogether, you can be proactive in monitoring high-risk areas of the project:
- Assumptions. Regularly evaluate the assumptions captured during the requirements workshop. Teams tend to forget specifics, and clients tend to forget initial assumptions. Review on a per-sprint basis.
- Project health. Consistently report on the product’s health status, with a focus on scope and spend. Distribute PowerUp reports and track status in the internal project health reports.
- Project maturity. Track technical and product maturity scores; advise the client on how to raise them.
- Demo participation. When stakeholders participate in product demos and the sponsor is aware of Devbridge’s progress, then everyone is aligned on the product build. Enforce participation.
- Team health. Have everyone on the team participate in the retrospectives—and encourage an open and critical conversation during those sessions.
- Scope and spend. Perform a round-trip analysis of estimated spend versus actual spend per epic, checking epic delivery and spend trends. It often happens that project estimates are ignored during delivery. Use gates and microreleases to demonstrate value and track progress.
- Demonstrating value. Actively communicate the delivered value through demos and narration of the delivered functionality to a wide group of stakeholders. Contrast the work with past experiences and have analytics in place that demonstrate trending to a metric established early in the planning stages.
- Defect trending. Log all generated defects and track time spent remedying defects throughout the engagement. Bugs are normal, but a team whose defect trend keeps climbing needs to reflect on what isn’t working.
Proof: Building one of the largest accounts through failure
One of our largest clients gave us a replatforming project for a legacy product, absorbed through an acquisition. Lo and behold, we found ourselves neck deep in black-box business logic that even the client was not aware of, with the project estimate and schedule ballooning to four times the initial numbers.
Could we have done a better job at discovery? Perhaps. When we warned the client of high complexity, we were informed that the budget available for this integration project was limited and, if we couldn’t get it done, they would look elsewhere. So we took the job, knowing it was going to hurt us long term.
Halfway through the effort, the final numbers started to crystallize, and we needed to collectively decide the future of the product. The work completed to date gave the client trust in our quality and delivery capability, but the organization was not in a position to spend several extra million dollars halfway through the budget year. We determined that we would split the overages down the middle, which meant the year-and-a-half-long project incurred significant losses for Devbridge. As part of the agreement, however, the client promised a certain volume of spend the following year, a means for us to earn some of the money back.
After successful delivery of the product, we were introduced to several new branches of the business, all with unique product development needs. The ownership we demonstrated elevated our relationship to a new level and significantly grew the account.