On the evening of 12 March 2026, starting at approximately 20:45 CET, a subset of Zaptec chargers experienced intermittent connectivity issues. This caused temporary disruptions to charging sessions for affected devices. The issue was resolved by the morning of 13 March 2026.
Up to 50,000 devices, primarily Zaptec Go and Go2 chargers, were affected across all markets. Because our devices are distributed across multiple independent infrastructure clusters, the disruption was limited to a subset of the total fleet. Devices that maintained connectivity continued to operate normally throughout the incident.
12 March 2026
20:49 – An elevated rate of devices going offline was detected.
20:58 – Automated monitoring triggered a critical alert.
21:00 – The on-call engineering team began investigating.
21:07 – Initial status update communicated internally.
21:19 – The affected infrastructure was scaled up in an attempt to restore connectivity.
21:30 – Scope of the incident was refined and communicated.
21:40 – The team evaluated a regional failover but determined it would affect more devices than the incident itself, so it was kept as a fallback rather than executed.
21:52 – A support case was raised with our cloud infrastructure provider.
22:00 – Diagnostic analysis confirmed the connectivity errors originated from an infrastructure-level service disruption outside of Zaptec's control.
22:03 – Devices began reconnecting.
22:30 – Recovery trend continued steadily.
22:34 – Our cloud infrastructure provider confirmed a regional capacity issue affecting device connectivity services in the relevant region.
22:45 – The majority of devices had recovered. Status moved to monitoring.
13 March 2026
08:15 – The incident was marked as resolved.
The disruption was caused by a capacity constraint within our cloud infrastructure provider's device connectivity services in the affected region. This resulted in intermittent service unavailability for a subset of our chargers.
Zaptec's own applications and backend services were not the source of the issue. Once connectivity was restored by the infrastructure provider, all systems resumed normal operation without any additional intervention.
Our engineering team identified the affected device segment within minutes and took immediate steps to mitigate the impact, including scaling up infrastructure resources. A regional failover was evaluated but intentionally held back, as it would have introduced a broader disruption than the incident itself. Once the root cause was confirmed as an external infrastructure issue, the team monitored the recovery closely until full connectivity was restored.
Failover readiness: We will conduct controlled failover exercises in production to build confidence in, and speed up, our response to larger-scale disruptions.
Faster offline detection: We are improving our monitoring systems to detect and respond to connectivity issues more quickly, reducing time to detection and resolution in future incidents.
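As a rough illustration of the kind of fleet-level check this improvement points toward, the sketch below flags a sharp drop in the number of connected devices within a short window, rather than waiting for individual device-offline timeouts to accumulate. The window size, threshold, and minute-by-minute sampling are illustrative assumptions, not a description of Zaptec's actual monitoring stack.

```python
# Hypothetical illustration only: a minimal fleet-level offline-rate check.
# Thresholds, window sizes, and sampling cadence are assumptions.

from dataclasses import dataclass


@dataclass
class OfflineRateAlert:
    window_minutes: int
    dropped_fraction: float
    message: str


def check_offline_rate(
    connected_counts: list[int],
    window_minutes: int = 5,
    drop_threshold: float = 0.02,
) -> OfflineRateAlert | None:
    """Flag a sharp drop in connected devices within a short window.

    `connected_counts` holds one sample per minute, oldest first. If more than
    `drop_threshold` (e.g. 2%) of the previously connected fleet disappears
    within `window_minutes`, an alert is returned.
    """
    if len(connected_counts) < window_minutes + 1:
        return None  # not enough samples yet

    baseline = connected_counts[-(window_minutes + 1)]
    current = connected_counts[-1]
    if baseline == 0:
        return None

    dropped = (baseline - current) / baseline
    if dropped > drop_threshold:
        return OfflineRateAlert(
            window_minutes=window_minutes,
            dropped_fraction=dropped,
            message=(
                f"{dropped:.1%} of previously connected devices went offline "
                f"in the last {window_minutes} minutes"
            ),
        )
    return None


if __name__ == "__main__":
    # Example: a fleet where ~40,000 of 500,000 devices drop off within five minutes.
    samples = [500_000, 499_800, 492_000, 478_000, 466_000, 460_000]
    alert = check_offline_rate(samples)
    if alert:
        print(alert.message)  # e.g. "8.0% of previously connected devices went offline ..."
```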