
Legal Responsibility in Investment Decisions Using Algorithms and AI

Makoto Chiba, Mikari Kashima, Kenta Sekiguchi (Bank of Japan)

Research LAB No.19-E-1, April 26, 2019

Keywords: algorithm; artificial intelligence; AI; investment decision; duty to explain; duty of due care of a prudent manager; market manipulation; insider trading

JEL Classification: K22

Contact: makoto.chiba@boj.or.jp

Abstract

This article provides an overview of the report released by a study group on legal issues regarding financial investments using algorithms/artificial intelligence (AI).¹ The report focuses on legal issues that arise when financial investment decisions are automated, or black-boxed, through the use of algorithms/AI. Specifically, it discusses points for consideration in applying laws regarding (1) regulations on, and the civil liability of, business operators engaged in investment management or investment advisory activities, and (2) regulations on market misconduct. The report shows that the application of some existing laws requires the presence of a certain mental state (such as purpose and intent), which is unlikely to be present when investment decisions are made using algorithms/AI. To deal with this problem, the report considers the necessity of introducing new legislation.

  1. The whole text is available in Japanese only.

Introduction

Technologies relating to artificial intelligence (AI) are advancing significantly on the back of improvements in the data processing capabilities of computers and the increase in data availability. The use of AI is expanding in a wide range of fields, and its application to investment decisions is one such example.

The term AI has no standard definition. When AI is defined broadly, the following two cases both count as applications of AI to investment decisions. In the first case, humans decide the investment rules (that is, they define the criteria for investment decisions), and investments are executed automatically by algorithms based on those criteria. In the second case, the algorithms themselves set the criteria for investment decisions by learning from data. The second case can be further subdivided: either humans select which variables ("features") to use in the data analysis, or the algorithms discover the relevant features through deep learning techniques. In the latter subcase, the use of deep learning can make it difficult for humans to comprehend the criteria for investment decisions.
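
To make the distinction concrete, the following minimal Python sketch contrasts the two cases. The momentum rule, the feature setup, and the use of scikit-learn are illustrative assumptions of this article, not drawn from the report:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Case 1: humans define the criteria for investment decisions, and the
# algorithm merely executes them. A hypothetical momentum rule: buy when
# the 5-day moving average exceeds the 20-day moving average.
def rule_based_signal(prices: np.ndarray) -> bool:
    return prices[-5:].mean() > prices[-20:].mean()  # True = buy

# Case 2: the algorithm sets the criteria by learning from data. Here
# humans still choose the features (e.g., past returns over several
# horizons); the learned weights, not a human-written rule, decide.
# With deep learning, even the features would be discovered by the model,
# which is what can make the resulting criteria hard to interpret.
def learned_signal(past_features: np.ndarray, past_outcomes: np.ndarray,
                   current_features: np.ndarray) -> bool:
    model = LogisticRegression().fit(past_features, past_outcomes)
    return bool(model.predict(current_features.reshape(1, -1))[0])
```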

Considering the wide range of applications of algorithms/AI to investment decisions, legal issues will differ from case to case. The following two cases are particularly notable in that respect.

The first is the automated execution of transactions by algorithms/AI without human judgment.

The second is investment decisions made by algorithms/AI that are beyond human understanding -- i.e., the "black box" problem. When the criteria for investment decisions are set by humans, we can examine how changes in the social and economic environment influence investment decisions. However, criteria set by algorithms/AI through learning from data could be incomprehensible to humans, making it difficult to predict what kind of investment decisions will be made.

The report released in September 2018 by the study group on legal issues regarding financial investments using algorithms/AI, for which the Bank of Japan's Institute for Monetary and Economic Studies served as the secretariat, discusses the following: (1) Who will be subject to regulations stipulated by the Financial Instruments and Exchange Act (FIEA) when business operators offer investment decision services using algorithms/AI to customers; (2) How and to what extent the operators will bear liability for any losses incurred as a consequence of their services; and (3) How regulations on market misconduct will be applied when financial transactions are executed automatically using algorithms/AI.

This article summarizes the discussions of the report related to (2) and (3) above.

Civil Liability of an Investment Management Business Operator

Suppose that an investment management business operator has a discretionary investment contract with a client. If the investment incurs a loss, the client may seek to hold the operator liable. Possible grounds include the following: (1) The risks of the investment are not sufficiently explained by the operator; (2) The management service is not conducted in a way suitable for the client; (3) The operator carries out transactions in its own interest at the expense of the client; or (4) Investment decisions are clearly irrational.

The investment management business operator can potentially be held liable based on the aforementioned reasons even when algorithms/AI are used for investment decision-making. However, when the grounds for investment decisions of the algorithms/AI are difficult to comprehend and hence difficult for the operator to explain to the client, will the operator always be in breach of the duty to explain? Moreover, since the rationality of the investment decisions will not be clear in such cases, how should the liability of the operator be assessed?

The "black box" problem and the duty to explain

Before entering into a discretionary investment contract, an investment management business operator must explain, for example, the risks of the investment instruments and the basic investment policy. However, the operator is not required to explain what information is used, or how it is weighted, in reaching investment decisions. Accordingly, even if the operator uses algorithms/AI and cannot explain what information they use or how it is factored into their decisions, the operator is not necessarily in breach of the duty to explain, so long as the basic investment policy is explained to the client.

Therefore, the black box problem of algorithms/AI does not in itself put the operator in breach of the duty to explain. However, a client may mistakenly regard algorithms/AI as infallible, or be unaware of risks unique to investment strategies using algorithms/AI. In such cases, the operator is obliged to explain the risks involved appropriately and to the extent necessary in light of the client's characteristics.

Rationality of investment decisions and duty of due care of a prudent manager²

When losses are incurred as a result of investments conducted by an investment management business operator, the rationality of the investment policy and investment decisions is a factor in determining the operator's liability for breach of the duty of due care of a prudent manager.

Likewise, when algorithms/AI are used to make investment decisions, the rationality of the criteria for investment decisions is a factor in determining the operator's liability. However, if the criteria are incomprehensible to humans, the operator's liability cannot be determined in this way. The report proposes that, in such cases, the operator should not necessarily be held liable for breach of the duty of due care simply because it adopted algorithms/AI subject to the black box problem; instead, the rationality of the decision to use the algorithms/AI for investment decisions should be examined. Such an assessment takes into consideration various aspects of the algorithms/AI, such as the scope of the data used for machine learning, the data processing procedures, and the results of simulations using test data and of experimental operations.
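
As a rough illustration of the kind of evidence such an assessment might draw on, the sketch below simulates a decision rule on held-out test data. The strategy, the synthetic data, and the metrics are hypothetical assumptions of this article, not requirements stated in the report:

```python
import numpy as np

def backtest(signal_fn, prices: np.ndarray, lookback: int = 20) -> np.ndarray:
    """Simulate daily returns of a long-or-flat strategy on test prices."""
    daily_returns = []
    for t in range(lookback, len(prices) - 1):
        position = 1.0 if signal_fn(prices[: t + 1]) else 0.0
        daily_returns.append(position * (prices[t + 1] / prices[t] - 1.0))
    return np.array(daily_returns)

# Hypothetical usage: evaluate a decision rule (human-defined or learned)
# on synthetic test prices that played no part in setting its criteria.
rng = np.random.default_rng(0)
test_prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, 500))
rets = backtest(lambda p: p[-5:].mean() > p[-20:].mean(), test_prices)

# Recording what data were used and how the strategy performed (returns,
# drawdowns) is the sort of evidence that could support the rationality of
# adopting an algorithm even when its internal criteria are opaque.
equity = np.cumprod(1 + rets)
max_drawdown = (np.maximum.accumulate(equity) - equity).max()
print(f"mean daily return: {rets.mean():.5f}, max drawdown: {max_drawdown:.3f}")
```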

In practice, however, the rationality of using particular algorithms/AI is often unclear. To reduce the risk of liability claims from clients, investment management business operators need to explain the characteristics of the algorithms/AI appropriately and to clarify the allocation of liabilities in the contract.

  2. The report also analyzes cases where investment management business operators entrust the development of algorithms/AI to a system vendor. It examines the liability of the system vendor for any irrational investment decisions made by the algorithms/AI, and points out that the vendor is likely to be held liable if it fails to provide the operator with explanations of, for example, the characteristics and constraints of the algorithms/AI.

Trading Using Algorithms/AI and Regulation Prohibiting Market Manipulation³

Article 159, paragraph (2), item (i) of the FIEA prohibits market manipulation through trading that seeks to cause fluctuations in market prices of securities. While various interpretations of the scope of the regulation have been discussed, the Supreme Court has ruled that the regulation applies when the following two requirements are satisfied: (1) "Price-fluctuation-causing trades," i.e., a series of sales and purchases of securities that cause fluctuations in market prices, are conducted; and (2) they are based on the "purpose of inducement," i.e., their purpose is to induce investors to sell or purchase securities in a securities market by misleading them into believing that the prices of the securities are formed by the natural relation between supply and demand when in fact they are made to fluctuate by artificial manipulation.⁴ Since "price-fluctuation-causing trades" cover an excessively wide range of transactions, the "purpose of inducement" requirement, which relates to the mental state (such as purpose and intent), plays an important role in circumscribing the scope of the regulation.

When trades are conducted using algorithms/AI, how are regulations on market manipulation to be applied? The report examines this issue by assuming that a corporation sells or purchases securities for itself using algorithms/AI.

If the person in charge of trading at the corporation establishes, with the purpose of inducement, algorithms/AI that conduct price-fluctuation-causing trades, the person and/or the corporation is in breach of the regulation. Alternatively, if the person in charge becomes aware that the algorithms/AI are forming manipulative quotations and nevertheless continues to conduct transactions using them, the person and/or the corporation may be regarded as having the purpose of inducement and thus be in breach of the regulation.

On the other hand, when algorithms/AI continuously analyze the impact of their trading on market quotations and engage in manipulative trading based on this analysis, the person in charge could be unaware that the algorithms/AI are forming manipulative quotations. In this case, the algorithms/AI appear to act with the purpose of inducement, but they do not in fact possess any mental state. Therefore, provided that the person in charge does not hold such a purpose, neither the person nor the corporation can be held criminally liable or be subject to administrative monetary penalties for contravening the regulation.

Financial instruments business operators (FIBOs) registered under the FIEA are required to establish sufficiently robust trading management frameworks to avoid the formation of manipulative quotations. Thus, if a FIBO trades with algorithms/AI that form manipulative quotations, it will be ordered to improve its business operations for failing to discharge this requirement. However, breach of this requirement is not subject to criminal liability or administrative monetary penalties. Moreover, non-financial corporations and individuals are not subject to the requirement to establish such trading management frameworks.
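
One conceivable building block of such a framework is monitoring of an algorithm's own footprint in the market. The following sketch is purely illustrative; the thresholds, names, and the chosen proxy are assumptions of this article, not requirements under the FIEA:

```python
import numpy as np

def flag_suspicious_intervals(own_volume: np.ndarray,
                              market_volume: np.ndarray,
                              price_change: np.ndarray,
                              share_limit: float = 0.3,
                              move_limit: float = 0.02) -> np.ndarray:
    """Flag intervals where the firm's own trading dominates market volume
    while prices move sharply -- a crude proxy for the risk that the
    algorithm is itself forming the quotations it then trades against."""
    own_share = own_volume / np.maximum(market_volume, 1e-9)
    return (own_share > share_limit) & (np.abs(price_change) > move_limit)

# Flagged intervals would be escalated for human review -- which could also
# be the point at which the person in charge becomes "aware" in the sense
# discussed above.
```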

Given this situation, and from the perspective of safeguarding the integrity of financial markets, the report proposes imposing the requirement to establish such trading management frameworks on all users of algorithms/AI for trading. In addition, administrative monetary penalties and/or criminal penalties could be imposed for breach of this requirement. At the same time, the report notes that a key challenge for this legislative approach is identifying the types of trading that should be prevented.

  3. The report also discusses the application of insider trading regulations to a corporation that sells or purchases securities for itself using algorithms/AI. The report points out that even when algorithms/AI accidentally gain access to insider information of a listed company and such information is used to trade the company's securities, the corporation is not necessarily charged with violating such regulations as long as the person in charge of trading is unaware of the insider information.
  4. Supreme Court Decision of July 20, 1994 (Supreme Court Reports (criminal cases), Vol. 48, No. 5, p. 201).

Conclusion

The discussion above highlights that whereas issues regarding the civil liability of investment management business operators can be dealt with through the interpretation of existing laws, issues regarding regulations on market misconduct may require new legislation. The reason for this difference is that the latter requires a certain mental state, which algorithms/AI do not possess.

Thus, when algorithms/AI replace human actions, not only careful application of existing laws but also consideration of new legislation is vital. To ensure that the legal system does not hinder technological innovation, the legal implications of such innovation should be continuously examined.

Reference

Study group report on "Legal Responsibility in Investment Decisions Using Algorithms and AI" (2018), published in Kin'yu Kenkyu (Monetary and Economic Studies), 38(2), Institute for Monetary and Economic Studies, Bank of Japan, 2019 (available in Japanese only).

Note

The views expressed herein are those of the authors and do not necessarily reflect those of the Bank of Japan.