EU law, as well as the laws of most Member States, does not include any specific provisions concerning the liability of robots or, more generally, the use of AI-powered technologies.

As automation and machine learning become the rule rather than the exception, a new, straightforward liability regime is very much needed.

To that end, EU institutions are currently working to regulate the civil liability of robots for the first time in the history of the Union and to set new global standards in this field.

According to the EU Parliament's Recommendations on Civil Law Rules on Robotics, new legal standards should start from the assumption that robots cannot be held liable per se for acts or omissions that cause damage to third parties.

In fact, existing EU rules on liability only cover cases where the cause of a robot's act or omission can be traced back to a specific human agent, such as the manufacturer, the operator or the user. In the Parliament's view, this principle should remain unchanged.

Is this going to change with the spread of AI-powered robots?

The answer is far from simple. However, the EU Commission's Working Document on Liability for emerging digital technologies underlines that, in an increasingly automated world, liability is necessarily spread across a vast technological chain.

From the manufacturers of autonomous machines to the programmers of their algorithms, from big data providers to IT suppliers, software houses and cybersecurity consultants: welcome to the robotics liability chain.

Finally, here are some open-ended questions: once the (legal) person responsible for the damage has been identified (AI manufacturer, programmer, supplier, user, etc.), should their liability be proportional to the "degree of autonomy" of the robot or AI system?

If so, how should that degree of autonomy be factored into the determination of compensation, sanctions or civil liability in general?

To that end, the EU is investigating the creation of a "quasi-legal" personality for robots (the e-Person) that could protect manufacturers and users against liability, much as the separate legal personality of a company shields its shareholders.

Legal personality for robots would not mean the birth of robot personhood, though; it should simply make things easier from a liability perspective - shouldn't it?