Undoubtedly, the technology sector is developing ever more programs, applications, and other software built on artificial intelligence. We use artificial intelligence in some form through the devices we rely on daily: phones, tablets, computers, and various applications. Whether we realize it or not, artificial intelligence is an integral part of our lives. For those who doubt it, we would note that even the latest versions of PowerPoint include such features. Programs and applications using AI are found in many sectors, such as finance and capital markets, accounting, trade, medicine, and many others. Yet in these times of rapid technological progress and innovation, few people outside the professional community ask an extremely important question: who will be responsible in case of an artificial intelligence error?
The use of AI raises a number of legal and legislative issues for which suitable solutions must be found. At the moment, this puts IT companies at great risk. For example, diagnosing a health problem and prescribing treatment with AI is gaining popularity and momentum. Such procedures will, sooner rather than later, become the subject of a brand-new legal and regulatory framework. Patients should not and cannot be left to make vital decisions for which they are not competent. It is necessary to work towards a system in which the conclusions reached by the AI are thoroughly checked and confirmed by a doctor, who verifies both the diagnosis and the treatment with their signature. In addition, from an ethical point of view, the responsibility borne by developers of AI for medical purposes should be clearly spelled out. Of course, the patient's own responsibility in giving informed consent and agreeing to the diagnosis and treatment should not be neglected.
Intensive processes are currently underway at both the global and European levels to develop a complete and comprehensive legal framework for AI. In the financial and accounting sectors, where AI is perhaps most widely used, the harmful effects and losses could be immeasurably high. Consider an AI system that collects financial documentation and data. There should be a human factor to ensure the reliability of the information the AI collects and processes, as well as a set of oversight mechanisms that effectively verify the accuracy of the data.
We believe that IT companies should provide for this in advance. In this way, the provision of information and the processing and analysis of data by several counterparties remain traceable, and with such a mechanism it will be known from whom responsibility can be sought.
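Purely by way of illustration, and not as a prescription of any particular product, the traceability we describe could take the form of an append-only audit log in which every party that supplies, processes, or verifies data leaves a dated record. All names in the sketch below (AuditTrail, record, who_did, and so on) are hypothetical and chosen only for readability:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch: an append-only audit trail so that every actor
# who supplies, processes, or verifies data can later be identified.
@dataclass(frozen=True)
class AuditEntry:
    actor: str       # who handled the data (person, company, or AI system)
    action: str      # e.g. "collected", "processed", "verified"
    detail: str      # free-text description of what was done
    timestamp: datetime

@dataclass
class AuditTrail:
    entries: list = field(default_factory=list)

    def record(self, actor: str, action: str, detail: str) -> None:
        # Entries are only ever appended, never edited or removed,
        # so responsibility can always be traced back to a named actor.
        self.entries.append(
            AuditEntry(actor, action, detail, datetime.now(timezone.utc))
        )

    def who_did(self, action: str) -> list:
        # Answers "from whom can responsibility be sought?" for a given step.
        return [e.actor for e in self.entries if e.action == action]

# Example: the AI collects the data, a human verifies it.
trail = AuditTrail()
trail.record("invoice-ai-v2", "collected", "Extracted Q3 invoices")
trail.record("jane.doe@firm.example", "verified", "Checked totals against ledger")
print(trail.who_did("verified"))  # -> ['jane.doe@firm.example']
```

The essential design choice is that the log is append-only: a record of who did what cannot be rewritten after the fact, which is exactly what makes it useful when responsibility is disputed.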
Things are no different in trade. Let's say a client places an order through an AI interface and, instead of 10 pieces, the AI marks 1,000. Who will be responsible for the transportation costs? Things could go even further: suppose these are the last 1,000 units of a product, and in the meantime another customer who wants to buy exactly that number of units is told the product is out of stock, when in fact it is not. Should the developer or supplier of the AI not be liable not only for the damages but also for the lost profits?
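To make the trade example concrete, here is a minimal, hypothetical sketch of the kind of safeguard an AI ordering interface could include: an order whose quantity deviates sharply from the customer's usual volume is held for explicit human confirmation rather than executed automatically. The threshold, function names, and the notion of a "typical quantity" are assumptions made purely for illustration:

```python
# Hypothetical safeguard: hold anomalous AI-generated orders for confirmation.
def requires_confirmation(quantity: int, typical_quantity: int, factor: int = 10) -> bool:
    """Flag an order that exceeds the customer's typical quantity by `factor` times."""
    return quantity >= typical_quantity * factor

def place_order(quantity: int, typical_quantity: int) -> str:
    if requires_confirmation(quantity, typical_quantity):
        # The AI does not commit stock or shipping on its own;
        # a human (the customer or a clerk) must confirm first.
        return f"HELD: order of {quantity} units needs explicit confirmation"
    return f"ACCEPTED: order of {quantity} units"

print(place_order(10, typical_quantity=10))    # ACCEPTED
print(place_order(1000, typical_quantity=10))  # HELD for confirmation
```

With such a guard in place, the 10-versus-1,000 error never binds the seller's stock or triggers transport costs without a human decision in between, which also makes it far easier to say who is responsible if something still goes wrong.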
Companies developing AI should clearly define the division of responsibilities between the creator, the supplier, the user, and the affected entity, whether a customer or a business partner. For example, when a company develops AI for a financial institution, the parties themselves should decide who will be responsible for any error caused by the AI in matters that fall outside the existing legal framework.
In healthcare, our law firm firmly believes that the process of AI diagnosis and treatment should remain under human oversight. We certainly cannot leave people's lives in the hands of machines, or at least we are still far from those times. It should be noted that if a patient decides to follow the diagnosis and treatment prescribed by the AI despite the disagreement of the medical staff, the patient should sign documentation relieving the doctor and the medical institution of any responsibility. It is obvious that here, in addition to IT companies, there is also a risk for medical institutions.
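Purely as an illustration of the sign-off workflow described above, and not a statement of how any actual medical system works, the release rule could be expressed as follows: an AI diagnosis leaves the system only if a doctor countersigns it, or if the patient has signed the waiver accepting it against medical advice. Every name here is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of the release rule described in the text.
@dataclass
class AIDiagnosis:
    text: str
    doctor_signature: Optional[str] = None  # doctor who confirmed the AI's conclusion
    patient_waiver: Optional[str] = None    # patient who accepted it against advice

def may_release(d: AIDiagnosis) -> bool:
    # Either a doctor confirms the AI's conclusion, or the patient signs
    # a waiver relieving the doctor and the institution of responsibility.
    return d.doctor_signature is not None or d.patient_waiver is not None

dx = AIDiagnosis("suspected condition X, treatment Y")
assert not may_release(dx)            # nothing leaves the system unsigned
dx.doctor_signature = "Dr. A. Petrov"
assert may_release(dx)
```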
In any case, we at Ivanchov and Partners strongly believe that working with artificial intelligence calls for a specialized program, developed by IT companies, that can carry out supervision and provide a two-factor mechanism for controlling the calculations and conclusions of AI systems. Regulators, both locally and globally, should ensure clarity of accountability through transparent rules. Of course, in a connected global economy, businesses and industries should take a firm stand on AI.
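In a very simplified and hypothetical form, the two-factor supervision we have in mind could look like the sketch below: an AI result takes effect only when an independent recomputation agrees with it and a human supervisor approves it. This is a sketch of the idea under those two assumptions, not a reference implementation:

```python
from typing import Callable

# Hypothetical two-factor control: an AI conclusion takes effect only if
# an independent recomputation agrees AND a human supervisor approves.
def two_factor_accept(
    ai_result: float,
    independent_check: Callable[[], float],
    human_approves: Callable[[float], bool],
    tolerance: float = 0.01,
) -> bool:
    recomputed = independent_check()
    if abs(ai_result - recomputed) > tolerance:
        return False  # factor 1 failed: the independent calculation disagrees
    return human_approves(ai_result)  # factor 2: explicit human sign-off

# Example: the AI computed an invoice total; a separate routine recomputes it.
items = [19.99, 5.50, 74.51]
accepted = two_factor_accept(
    ai_result=100.00,
    independent_check=lambda: round(sum(items), 2),
    human_approves=lambda total: total < 10_000,  # stand-in for a real human review
)
print(accepted)  # True: the recomputation matches and the approval rule passes
```

The point of the two factors is that neither the AI's own output nor a single reviewer can commit a result alone, which is precisely what makes responsibility assignable afterwards.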
Last but not least, technology companies should work towards the development of insurance coverage for AI-related incidents. It is not yet clear to what extent insurance companies would respond to such claims, but if methodologies are developed for people to control AI at every step, ensuring its logical and practical use, we believe the insurance sector should at least consider seizing the opportunity.
If you want us to help you resolve legal issues related to Artificial Intelligence, contact us at +359 893 483 463 or write us at lawyer@ivanchovandpartners.com.