When a person dials for help, every second on the line is a plea for the government to care fast enough.
On November 16, Malaysia launched a new emergency call platform to modernise how distress calls turn into ambulances.
Instead of progress, the rollout unravelled: within days, medical dispatchers quietly reverted to the old system.
That wasn’t just a stumble — it exposed a fragile skeleton under an ambitious upgrade.
This isn’t simply a list of technical failures. It’s a demand: life-critical systems must be designed, deployed, and governed with much more care than a typical digital product.
The question isn’t “can we digitise emergency calls?” but “do we have the structures to do so safely?”
Over a billion ringgit was allocated to reinvent how ambulances are dispatched.
If it doesn’t deliver faster response times, better coverage, or fewer lost calls, what exactly are we paying for?
The emphasis appears to have been on looking digital rather than being effective.
You can build a sleek app and still have no ambulance available to send.
Without a well-maintained fleet, trained dispatchers, and a reliable maintenance programme, the tech layer is superficial: decorative, not lifesaving.
Algorithms without guardrails are not enough
One alarming report: ambulances being dispatched across district borders purely because a computer algorithm thought it was “closest.”
That kind of routing overlooks realities like traffic, jurisdiction, staff capability, and hospital capacity.
In life-or-death situations, we need a human check. Dispatchers must have the authority to override algorithmic suggestions, not just as a fallback, but as a core part of system safety.
This isn’t technophobia; it’s common sense. Expecting an algorithm alone to make these calls is a gamble with lives.
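The principle described above — the algorithm suggests, the human decides — can be sketched in a few lines. This is an illustrative outline only; the data fields, capability labels, and the cross-district rule are assumptions for the example, not details of the actual platform:

```python
from dataclasses import dataclass

@dataclass
class Ambulance:
    unit_id: str
    district: str
    eta_minutes: float    # travel-time estimate, not straight-line distance
    capability: str       # e.g. basic ("BLS") vs advanced ("ALS") life support

def suggest_units(call_district: str, required_capability: str, units: list) -> list:
    """Rank capable units by ETA, but only *suggest* -- never auto-dispatch.

    Any cross-district suggestion is flagged for explicit dispatcher
    approval, so jurisdiction and local knowledge stay in the loop.
    """
    capable = [u for u in units if u.capability == required_capability]
    ranked = sorted(capable, key=lambda u: u.eta_minutes)
    return [
        {
            "unit": u.unit_id,
            "eta_minutes": u.eta_minutes,
            # the dispatcher must confirm before a unit leaves its district
            "needs_override_review": u.district != call_district,
        }
        for u in ranked
    ]
```

The key design choice is that the function returns a ranked list of options with flags, rather than a single dispatch decision: the final call always belongs to a trained dispatcher.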
Available data suggests the new platform was hit with a surge of calls after its launch, and many were not genuine emergencies.
Prank calls, silent calls, or accidental dials can swamp a routing system, draining its capacity when real crises happen.
Yes, there are technical fixes: caller verification, smart filtering, tiered routing, and more.
But if we calibrate filters too aggressively, we risk blocking vulnerable people: the elderly, the panicked, or those unable to speak clearly.
Quality testing with real users, not just in labs, is critical.
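One way to reconcile filtering with access is tiered routing that fails open: suspect calls are deprioritised or sent to a human screener, but never dropped outright. The sketch below is a hypothetical policy for illustration; the queue names and thresholds are assumptions, not how any real 999 platform behaves:

```python
def triage_call(caller_verified: bool, has_audio: bool, repeat_prank_count: int) -> str:
    """Tiered routing that filters without ever blocking a caller.

    A silent or unverified call still reaches a person, because the
    caller may be unable to speak -- the filter only chooses *which*
    queue handles the call, never whether it is answered.
    """
    if repeat_prank_count >= 3 and not has_audio:
        return "review_queue"          # human screens it; the call is not dropped
    if not has_audio:
        return "silent_call_protocol"  # dispatcher probes with yes/no prompts
    if caller_verified:
        return "priority_queue"
    return "standard_queue"
```

Note that every branch ends at a human: aggressive filtering shows up here only as a change in ordering, which is the property worth testing with real users rather than only in labs.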
When an emergency service fails, the public must demand clarity. How many calls went unanswered? How many ambulances were misrouted, and where? Were there documented patient harms?
A public, independent post-mortem should be released. We need honest data: system uptime, error logs, misdispatch rates, and clinical outcomes.
Auditors must be empowered to inspect the system, not just rubber-stamp it.
Lessons from abroad
Malaysia’s experience echoes warnings from other nations that have tripped in their own 999 / 000 modernisation journeys.
Australia: In November 2023, Optus suffered a nationwide outage that left more than 2,100 people unable to dial 000 (Australia’s emergency number).
The Australian Communications and Media Authority (ACMA), the telecom regulator, later fined Optus more than A$12 million (over RM33 million) for breaches of emergency call rules.
During the investigation, regulators also found Optus had failed to perform welfare checks on hundreds of customers who could not reach emergency services.
In a later response, Optus admitted it “had not followed established processes” during its upgrade.
The fallout has spurred major reforms: stronger regulations, more frequent system testing, and greater obligations on telecom companies to guarantee emergency access.
United Kingdom: In June 2023, a software fault in a major telecom provider’s 999 emergency call infrastructure prevented more than 9,000 callers from connecting for nearly 80 minutes, according to a government review.
The regulator uncovered that backup systems failed, reporting procedures were weak, and disaster-recovery capacity was inadequate.
In response, the government committed to new safeguards such as better backup coordination and public alerting during future disruptions.
These aren’t theoretical risks. When emergency systems break, it’s not just an inconvenience: it’s a systemic failure with potentially deadly consequences.
What Australia and the UK demonstrate is stark: if regulators don’t enforce hard accountability, digital systems meant to help can become a new point of failure.
A roadmap for sanity
Short-term fixes (weeks to months):
Restore human oversight by making trained dispatchers the default, not backup.
Publish a public incident report with granular data: call volumes, failure rates, misrouted dispatches, and any known clinical outcomes.
Introduce caller-validation measures to filter prank or silent calls, while preserving access for vulnerable or distressed users.
Medium-term reforms (6–24 months):
Invest in ambulance capacity — lease new vehicles, retire the broken ones, and set performance targets for availability and response times.
Run staged pilots of any algorithmic dispatch system, co-designed with first responders and independently reviewed.
Embed enforceable performance standards in contracts, with meaningful penalties for system failures that threaten public safety.
A case for dignity-led reform
Technology should amplify care, not replace accountability.
It is not enough that a system looks modern. Success must be measured by how often help truly arrives when someone cries out for it.
If the government is willing to allocate more than a billion ringgit to digitise our emergency system, then it must also demand that its partners face real consequences when they fail.
No more vague promises. No more black-box solutions. We need open audits, structural investments, and human judgment where it matters most.
When a 999 call goes unanswered, that is not a technical glitch. It is a moral failure. A lapse in governance, oversight, and public prioritisation.
Time is what saves lives, and without the right systems, our newest digital upgrade risks costing more time than it saves.
The views expressed are those of the writer and do not necessarily reflect those of FMT.
