Previous incidents

February 2025
Feb 02, 2025
1 incident

Reporting Service Disruption

Degraded

Resolved Feb 02 at 04:42pm GMT

ATMSi Ad-Hoc reporting services have now been restored. Apologies for any inconvenience caused.

January 2025
Jan 17, 2025
1 incident

Isotrak ELD Service Disruption

Degraded

Resolved Jan 17 at 12:28pm GMT

Our partner has resolved the issue and the ELD portal is now available for login.

Jan 10, 2025
1 incident

Mobile Network Service Issues

Degraded

Resolved Jan 10 at 02:52pm GMT

We are now observing a return to normal inbound data levels. We will continue to monitor and provide further updates if necessary. Thank you for your patience and understanding.

Jan 05, 2025
1 incident

Isotrak ELD - Portal and API Service Disruption

Degraded

Resolved Jan 05 at 10:14am GMT

Our partner has resolved the issue, and the portal and API services are now available.

December 2024
Dec 18, 2024
1 incident

ATMSi App - Users Unable to Log In

Degraded

Resolved Dec 18 at 07:10am GMT

All queued messages were processed, and normal service levels resumed.

Dec 13, 2024
1 incident

Isotrak ELD - API Service Disruption

Degraded

Resolved Dec 13 at 10:32am GMT

Our partner has resolved the issue, and the portal and API services are now available.

Dec 11, 2024
1 incident

Message Queues

Degraded

Resolved Dec 11 at 05:12pm GMT

All outstanding messages in the queues have now been processed, and we are seeing a significant improvement in performance. We will post updates on the remaining resources as they come back online. Thank you for your patience during this time, and apologies for any inconvenience.

Dec 10, 2024
1 incident

Inbound Device Data Outage

Degraded

Resolved Dec 11 at 04:50pm GMT

Summary: Microsoft applied patches in Azure on Tuesday evening, resulting in estate-wide VM restarts. Some services did not start up correctly after the restarts. Our monitoring system is set up to alert on this, but no alerts were sent to the out-of-hours team; users discovered and reported the issue instead.

Root cause: Our monitoring system’s schedule was not set correctly due to human error, so alerts were not sent to our out-of-hours team. They were unable to respond to the issue as they ...
