In this blog, we examine the human and organisational impact of a major cyberattack that brought a prominent business-to-business services provider in the UK to a halt. Through an in-depth interview with the organisation’s Managing Director, we uncover the technical recovery process, the emotional toll on staff, personal feelings of guilt, and the leadership challenges faced during the crisis. The blog also highlights key lessons on resilience, culture, and preparation that can make the difference between recovery and disaster.
It’s every organisation’s worst nightmare: a cyberattack. URM spoke with the Managing Director of a UK-based B2B services organisation who faced exactly that scenario. One Friday morning, a cyberattack encrypted all operational data and brought business to a standstill.
Confronted with a critical decision, the Leadership Team chose to rebuild from scratch in a clean environment. Within two weeks, systems were restored, but the technical, emotional, and organisational toll was immense.
In this interview, the Managing Director shares his reflections on the attack, the recovery journey, and the lessons learned.
Q&A
Q: What happened on the day of the attack?
A: Everything froze. Our data had been encrypted, and we knew operations were effectively paralysed. We decided early we’d rebuild from scratch. We had our suspicions about how the breach had occurred, but without certainty around the vulnerability that made the attack possible, restoring systems in the existing environment risked reinfection. So, rebuilding in a new environment was our best option. The key was that we had a clean, segregated backup. Without that, we’d have been in serious trouble.
Q: How did you approach recovery?
A: It took us two full weeks to get fully back online. Throughout that period, the development team worked tirelessly to restore systems, taking an incremental approach out of necessity. In an ideal world, we’d have waited those two weeks and switched everything back on at once, but we couldn’t leave clients with 0% availability of services for that long. So we switched things back on piece by piece. It wasn’t perfect, but even 20% or 40% availability was better than none, and every hour counted.
Q: What were the biggest challenges during recovery?
A: Fatigue and pressure. Attempts to bring services back online didn’t always go as planned. When the pressure is that great and people are that exhausted and stressed, it’s inevitable that mistakes get made. You’d switch something on, and it wouldn’t work. Then you’d have to stop everything and investigate why. These setbacks felt devastating at the time and added to the already significant mental strain.
Q: What was your strategy?
A: We learned to prioritise: the team had to think strategically about which functionalities to restore first. Where applicable, they focused on enabling the early stages of customers’ end-to-end processes, rather than trying to get everything up and running at once. If a customer needs to place an order, for example, you can prioritise the functionality that allows them to start that process, because it might be a few days before the later part of the process kicks in. That gives you some breathing room to get the rest of the workflow back online. But then you have to work at full pelt to make sure it’s back by the time they reach that later stage.
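To make that prioritisation approach a little more concrete, the sketch below shows one way a recovery team might capture a restore order as simple data, bringing back the functionality that lets customers start their process before the later stages. It is purely illustrative: the service names, stages, and priorities are hypothetical and are not drawn from the organisation interviewed here.

```python
# Purely illustrative: a hypothetical restore plan that brings back the
# functionality customers need first (e.g. placing an order) before the
# later stages of their end-to-end process.
from dataclasses import dataclass


@dataclass
class RestoreItem:
    service: str         # hypothetical service name
    customer_stage: str  # the stage of the customer journey it supports
    priority: int        # 1 = restore first


RESTORE_PLAN = [
    RestoreItem("order-intake-api", "place order", priority=1),
    RestoreItem("customer-portal-login", "place order", priority=1),
    RestoreItem("fulfilment-scheduler", "process order", priority=2),
    RestoreItem("invoicing", "billing", priority=3),
    RestoreItem("reporting-dashboard", "retrospective reporting", priority=4),
]


def restore_order(plan: list[RestoreItem]) -> list[str]:
    """Return service names in the order they should be brought back online."""
    return [item.service for item in sorted(plan, key=lambda item: item.priority)]


if __name__ == "__main__":
    for name in restore_order(RESTORE_PLAN):
        print(f"Restore next: {name}")
```

Even something this simple forces the conversation about which parts of each customer workflow genuinely need to come back first, and gives an exhausted team a shared, visible order to work through.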
Q: Did prior preparation help?
A: Yes, we’d run a desktop exercise simulating a similar incident just months earlier, with a scenario that involved changing environments. This meant we’d already started to consider how we’d achieve that and were in the process of drawing up plans. Although the team found the plans lacking in detail in some areas, they at least gave us a framework to build from and a head start in the recovery process. It certainly saved us time and relieved some of the pressure.
Q: How did you and the team cope on a personal level?
A: Honestly, it was brutal. It hit hard, and I couldn’t shake the feeling of personal responsibility. I kept asking myself, how did this happen on my watch? What did I overlook? The pressure of the situation was heightened by an awareness of what was at stake. There's always the thought in the back of your mind that if you can't get up and running, there could be no business, in which case people lose their jobs and their livelihoods. That’s a big fear, amongst many others.
Everything was pushed aside to deal with the attack and support recovery, both in terms of work-related priorities and on a personal level, with one individual even choosing to cancel a holiday. We didn’t ask them to do that, and of course we reimbursed the cost, but it was a significant personal sacrifice. The team worked long hours, often 16 to 17 hours a day. Sleep was fragmented, meals were skipped or hastily ordered, and the mental strain was constant. That’s manageable for a few days, but once you’re over a week in, it’s really not healthy.
What was amazing was that despite the intensity, a remarkable sense of camaraderie emerged. There were no raised voices or attempts to blame, and people really supported each other. There were many informal gestures of care, such as arranging grocery deliveries, or colleagues alerting management if a particular individual seemed to be struggling and in need of a lighter workload.
There was stress, of course, but also humour, solidarity, and a real sense of shared responsibility.
Q: What role did leadership play in managing burnout?
A: At first, we tried to stagger people taking breaks, but found that nobody wanted to stop while others were still working. So, we made rest mandatory, not optional, and started scheduling group downtime. Those breaks, however short, kept us going.
Q: How did you manage communications internally?
A: Managing internal communication during the incident presented its own set of challenges. While a core group of individuals were directly involved in the recovery effort, we were conscious that the wider organisation still needed clarity, and striking the right balance between transparency and caution was essential.
Early on, we called an all-hands meeting to provide a high-level overview of the situation, while deliberately withholding specific details due to the sensitivity of the incident. These briefings continued twice a week, offering updates to those not directly involved in the technical response. Throughout, we had to be careful: while we wanted to communicate more with our colleagues, there were legal constraints around what could be shared. A key message was the need for discretion, with staff reminded to refer any external queries to a designated communications channel.
Q: And externally?
A: We quickly realised that communication with customers needed to be tightly managed, and we brought in a specialist PR firm and legal advisors almost immediately to help shape communications. Standardised email responses were developed for the majority of customers, while senior leaders handled more complex or sensitive conversations with key clients. Some customers were understanding, but others had legal and commercial teams ready to press for answers and accountability.
We were fortunate that media interest was limited, but it wasn’t absent, with a few articles surfacing during the recovery period. Again, our PR firm played a crucial role in managing responses and ensuring that public statements were aligned with legal advice. The key was saying enough to keep the press satisfied, but not so much that they just kept asking more questions!
Q: What happened after systems were back online?
A: Although we were technically back online in two weeks, the impact of the incident persisted long after systems were restored. A significant amount of ‘clean-up’ work remained, and while the technical team could eventually step back from the intensity of the rebuild, pressure on customer-facing teams continued. There’s a tendency to focus on the people who get the systems back online, but the people who deal with the fallout on the customer-facing side, they’re heroes too.
Our customers who had experienced disruption were understandably frustrated, and many were vocal in expressing their dissatisfaction. Support teams had to manage a high volume of queries, often from angry clients who were themselves experiencing significant impacts from the attack. We were deeply conscious that the incident had a far-reaching impact, with other businesses and reputations being affected, which added further stress to the situation. We had to hold discussions with certain clients and reach agreements that were reasonable and fair.
Q: What made the biggest difference?
A: Without a shred of doubt, it was the strength of the organisation’s culture that ultimately carried us through the crisis. Culture is something you nurture over time, but it’s in moments like this that you find out whether it’s real or just words on a page.
During the most intense periods of the recovery, our team demonstrated extraordinary commitment. Individuals stepped up without being asked, consistently supported each other, and made personal sacrifices so that they could focus on aiding the organisation’s recovery.
In my opinion, in organisations where the culture is weak or toxic, a crisis such as this would be far more difficult to recover from. I just don’t think that people would be willing to put in that extra effort. But the fact that our team was willing, and that we had that culture, is what really made the difference.
Q: What key lessons have you taken from this experience?
A: Segregation is essential; your backup must be isolated from your live environment. Otherwise, you risk losing both. In our case, having a separate backup is what made the difference between recovery and complete disaster. I don’t know what we’d have done without it.
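The interview doesn’t describe how the organisation’s backup was segregated, but one common pattern is to copy backups to storage that the live environment’s credentials cannot modify or delete, for example an object-store bucket with write-once retention. The sketch below, using boto3 against a hypothetical bucket, is a minimal illustration of that pattern under those assumptions; it is not a description of the interviewee’s actual setup.

```python
# Illustrative sketch: copying a backup archive to an object-store bucket that
# production credentials cannot overwrite or delete. The bucket name, key and
# archive file are hypothetical, and Object Lock must have been enabled when
# the bucket was created.
from datetime import datetime, timedelta, timezone

import boto3  # AWS SDK for Python (pip install boto3)

# Use credentials dedicated to backups, not the ones the live environment runs
# under, so a compromise of production does not grant access to the backup copy.
s3 = boto3.client("s3")

BACKUP_BUCKET = "example-org-isolated-backups"  # hypothetical bucket name
retain_until = datetime.now(timezone.utc) + timedelta(days=30)

with open("nightly-backup.tar.gz", "rb") as archive:
    s3.put_object(
        Bucket=BACKUP_BUCKET,
        Key="backups/nightly-backup.tar.gz",
        Body=archive,
        # COMPLIANCE mode prevents the object being deleted or overwritten
        # before the retention date passes, even by an administrator.
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )
```

Whatever tooling is used, the essential point is the one made above: the backup copy must sit outside the blast radius of the live environment, so that ransomware running in production cannot encrypt or delete it.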
The more detailed the incident response plans, the better. Ours were too high level, and when you’re under real pressure, the more detail you have, the easier it is to navigate the response. It just helps to take a bit of stress out of the situation.
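What “more detail” looks like in practice is a judgement call, but one approach is to keep each step of the plan as structured data with an owner, an out-of-band contact route, and concrete actions, rather than a few high-level sentences. The fragment below is a hypothetical sketch along those lines; the steps, owners, and contact notes are invented for illustration.

```python
# Hypothetical fragment of an incident response runbook kept as structured
# data, so each step carries an owner, a contact route and concrete actions
# rather than a high-level statement of intent.
RUNBOOK = [
    {
        "step": "Confirm scope of the incident",
        "owner": "Head of Infrastructure",
        "contact": "mobile number on the printed wallet card",
        "actions": [
            "Isolate affected hosts from the network",
            "Record which systems and data stores are unreachable",
        ],
    },
    {
        "step": "Stand up a clean recovery environment",
        "owner": "Development Lead",
        "contact": "mobile number on the printed wallet card",
        "actions": [
            "Provision new, segregated infrastructure",
            "Restore only from the isolated backup copy",
        ],
    },
    {
        "step": "Notify insurers, legal advisors and PR",
        "owner": "Managing Director",
        "contact": "pre-agreed external numbers",
        "actions": ["Work from the contact sheet agreed before the incident"],
    },
]


def print_runbook(runbook: list[dict]) -> None:
    """Print the runbook steps in the order they should be worked through."""
    for number, step in enumerate(runbook, start=1):
        print(f"{number}. {step['step']} (owner: {step['owner']})")
        for action in step["actions"]:
            print(f"   - {action}")


if __name__ == "__main__":
    print_runbook(RUNBOOK)
```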
Before the incident, there was naturally a strong focus on cyber security, but since the attack there has been a far greater focus on cyber resilience, and it’s absolutely front of mind now. You need to be prepared for when it happens, not if. As a leader, you almost need to begin every week by asking yourself what you would do if you had an incident this week: who’s around, who’s not, and so on.
Get your experts lined up. When something like this happens, you need the help of specialists who know exactly what they’re dealing with. So, make sure you have your PR advisors, forensics team, specialist legal firm, insurance partners, etc., ready before you need them, so that when the time comes you can pick up the phone straight away and ask for their help.
Also on the insurance point, you need to make sure not only that you have cyber insurance, but also that it gives you the cover you need. If an incident occurs and you haven’t considered this, you might find that the excess is too high or the headroom isn’t enough, leaving you exposed to significant costs at the worst possible time.
Maintain backup communications. If your main systems are down, you still need a way to coordinate and communicate. Following the incident, we introduced wallet cards for all staff with the mobile numbers of key team members, so that people can get in contact with each other if our usual communication channels are affected.
Q: If you could give one piece of advice to another MD, what would it be?
A: Don’t assume it won’t happen to you. Test your resilience, test your people, and know what you’d do tomorrow if everything went dark.
Conclusion
This conversation highlights two realities that many organisations overlook. First, a cyberattack tests every aspect of an organisation: not just its technical capabilities, but also its leadership, culture, and the personal resilience of its staff. Second, it’s not a question of if, but when. Incidents can occur in even the most secure of organisations, and those that plan for these scenarios and invest in resilience as well as security will be best placed to recover quickly and with minimal impact when the worst occurs.
How URM Can Help
If your organisation would benefit from enhancing its cyber security, resilience, and business continuity capabilities, URM can provide tailored support to help you reduce both the likelihood of your organisation suffering an attack, and the impact if one were to occur.
Penetration Testing
As a CREST-accredited provider of penetration testing, URM can offer a range of pen testing services to identify the vulnerabilities affecting your environment and assets before they can be exploited by a threat actor—thus strengthening your overall security posture and reducing the risk of a breach. For example, we can offer network and infrastructure penetration testing against all IP addresses associated with your organisation, location or service from either an internal or external perspective. We can also conduct cloud penetration testing, web and mobile app testing, as well as business-led pen testing, in which the scope of the penetration test is determined by your organisation’s unique issues and concerns.
Business Continuity
To ensure you are positioned to recover quickly in the event of an attack or other disruptive incident, URM can provide BC services and guidance informed by recognised best practice, as well as extensive practical experience. If you would benefit from assistance conducting a business impact analysis (BIA), we can provide BIA support, helping you establish your BIA methodology and giving you a clear picture of what you will need to recover first during a disruption, how quickly, and to what level. Here, you can also utilise our BIA tool, Abriska® 22301, which simplifies the BIA process and helps you create your business continuity plan (BCP). Having conducted the BIA, URM can also help you develop and implement bespoke BCPs or incident management plans (IMPs), which are always developed with your organisation’s unique needs in mind. Once these have been developed, we can offer tailored BC exercise services, devising challenging, bespoke scenarios to exercise your BCPs and IMPs and providing a report on your team’s response, including any recommendations for improvement.
If you are looking to certify to ISO 22301, the International Standard for Business Continuity Management Systems (BCMS), URM can guide you through the entire process, from conducting a BC gap analysis to helping you build the BCMS, ensuring you are prepared for a successful certification assessment.
In addition to our consultancy services, URM will be delivering a free, 1-hour webinar on Improving Your Organisation’s Resilience With ISO 22301. Register for the webinar on 3 December at 11am, where you will gain practical insights on implementing and preparing to certify against ISO 22301, as well as actionable steps and real-world tips on how to safeguard critical operations and prepare for unexpected disruptions.