- Fitch updates Global Reinsurance 2018 Outlook to incorporate recent significant catastrophe losses
- AIR Worldwide releases “2017 Global Modeled Catastrophe Losses” report
- Standard & Poor's looks at main concerns for European insurers
- Swiss Re new sigma report says many life insurers are focused on enhancing the value of existing business
- EIOPA publishes an Opinion on monetary incentives and remuneration between providers of asset management services and insurers
- Equifax Touchstone reports total UK sales of protection products increased by 1.4% in Q3 2017 to £149.1m, a new five-year high
- Allianz now a telematics service provider for Marmalade's young driver offering
- Clyde & Co launches in-house data analytics lab that builds on relationship with University College London (UCL)
- First ILS blockchain transaction completed
- Lloyd's selects REG to assist in recording data and to help reduce the administrative burden on UK coverholders
- NIIT announces release of version 4.0 of its exposure management tool Exact
- Tropics announces new features for its Breeze workers' comp solution
9th August 2017
Clyde & Co seminar panel discuss the difference between autonomous and assisted-driven vehicles and the issues arising after a claim
‘Assisted’ and ‘autonomous’ driving may seem similar, but the difference between them could prove to be a challenging battleground for the insurance industry in the development of self-driving vehicles. This was the view of a panel of experts speaking at global law firm Clyde & Co’s seminar on the future of autonomous vehicles last month.
The panel, comprising motor experts and Clyde & Co partners from around the world, said that there was a legal grey area between a car fully under its own control and a car which was nominally driving under its own control but still required driver supervision, even though the driver was not playing an active role.
Clyde & Co partner Mark Hemsted highlighted the potential legal issues that could be caused by an advanced car switching between different driving modes. He said “Imagine a car driving in autonomous mode on a motorway and then switching to assisted mode when it turns onto a trunk road as conditions change. If the change was triggered in error, this could lead to product liability issues if an accident resulted. There’s also the question of whether the driver is aware of the change. The same could be the case with switching between assisted and manual modes.”
Mark Wing, a Clyde & Co partner, noted the difficulty insurers would face when challenging motor manufacturers over possible faults in their intelligent vehicles. He said “Experience shows that motor manufacturers do not readily acknowledge design flaws in their products. They also know the vehicle far better than the insurer, which leads to the question of how any insurer will be able to prove fault.”
He also pointed out that product liability claims focusing on the car’s artificial intelligence (AI) system would be particularly problematic. He said “A fully autonomous car is dependent on its sensors and AI. That AI is likely to be a connected service rather than a product built into the vehicle. What happens if the AI itself goes wrong? How can we understand the decisions made by an incredibly complex AI system? We may have to revisit the systems we have for assigning liability.”
The Automated and Electric Vehicles Bill, announced during the recent Queen's Speech, places liability for accidents involving autonomous vehicles on the motor insurer, with the possibility of subrogation against the manufacturer. Panel members speculated that manufacturers may provide their own insurance for drivers if the traditional motor insurance market failed to respond favourably.
Hemsted observed that data storage in autonomous vehicles would bring about major changes for the management of insurance claims. He said “Automation is clearly going to affect evidence. Autonomous vehicles bristle with sensors so will record any accident. That’s going to provide a perfect factual history of the collision. Specifically, that data will show what role, if any, the driver played, which will be vital to these types of claims.
“If autonomous cars become the perfect witness in an accident, traditional methods of accident investigation and gathering physical evidence will no longer be necessary.”
To illustrate the issues surrounding assisted and autonomous driving modes, the panel highlighted the death in May last year of the driver of a Tesla car in Florida. The car’s Autopilot system requires driver supervision. Data recovered from the vehicle revealed that the car had warned its driver seven times to place his hands on the wheel. The data also showed that during the 37 minutes the car had driven in this assisted mode, the driver’s hands had been on the wheel for just 25 seconds. The panel felt that, at present, there was a risk that some assisted driving systems, modes in which cars effectively drive themselves but under the supervision of the human occupant, were so reliable they could lull drivers into a false sense of security.
The panel also felt that the brand names of assisted driving technologies such as Tesla’s Autopilot could confuse motorists as to the scope of their capabilities. This April, Tesla owners in the US filed a class action against the manufacturer for allegedly mischaracterising the capabilities of its new Autopilot 2 feature to consumers.