This summer marked the third time in eight years that the Securities and Exchange Commission has brought a high-profile case against an asset manager that allegedly concealed errors in its quantitative models. Financial remediation and penalties paid by the firms ranged from $35 million to more than $200 million.

All three cases also involved sanctions against individual executives, ranging from relatively modest fines of $25,000 and $65,000 in the most recent case to more than $12 million in another instance. Two of the cases led to firm leaders (a co-founder in one, the CEO in another) being barred from the industry.

As quantitative models become increasingly common, fund complexes can draw some valuable lessons from this trio of cases.

The Cases

The first case came in February 2011. The SEC’s order found that a fund firm’s senior management had learned of a material error in the firm’s core model that effectively misweighted risk. The order alleged that, instead of disclosing and fixing the error, senior management engineered a cover-up and left the problem unaddressed, which compounded the risks to clients. Speaking about the case, Robert Khuzami, the SEC’s then-Director of Enforcement, suggested that the firm’s decision to separate quantitative modeling staff from other parts of the business had frustrated the firm’s ordinary management and compliance oversight.[1]

The next case came nearly four years later, in December 2014, when a large provider of ETF-based investment models was charged with falsely advertising back-tested quantitative performance as actual performance. While the crux of the case was the firm’s false statements, the SEC also alleged that performance was inflated because of a known quantitative modeling error, which analysts had reported to the CEO and which the firm then ignored.[2]

Then, earlier this year, the SEC announced an action against another fund firm for alleged misconduct involving its quantitative models. In this latest case, the SEC said that the firm’s models contained numerous errors and did not work as promised. The agency suggested this was in part because the models had been developed by a junior analyst who worked without adequate oversight. According to the SEC, 15 different funds and separate accounts were launched based on the models, all with insufficient pre-launch testing and confirmations. When the errors were discovered, the firm stopped using the models, allegedly without telling investors or disclosing the errors.[3]

Tips for Avoiding Issues with Models

 Taking a step back, asset managers can learn several lessons from these cases.

 Some are straightforward: 

  1. When using a quantitative model, take reasonable steps, both before the product launch and over time, to determine that it operates as intended. Rigorously test the coding and the underlying math. When possible, follow a maker-checker approach, which separates testing from the original coding and design (see the sketch following this list).
  2. If there are material issues related to the model, fix them promptly and make appropriate disclosures.
  3. If there are material limitations or risks associated with use of the model, disclose them before an issue arises.
  4. Address any identified weaknesses promptly.
  5. Be thoughtful about organizational choices. Do not isolate the development and oversight of quantitative models from core management and compliance functions. Do not leave models to inexperienced staff or new hires without taking care to supervise them and integrate them with the firm.
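To make the maker-checker idea in the first item concrete, below is a minimal Python sketch. It is purely illustrative and hypothetical, not drawn from any of the cases above: one function plays the “maker” (a simple inverse-volatility weighting model), and a separate check verifies the output against the written specification rather than by re-running the maker’s code. All function names and figures are assumptions for illustration only.

```python
# Hypothetical maker-checker sketch: the model and the check are written as two
# independent implementations of the same written specification, namely
# "weight each asset by 1 / volatility, normalized so the weights sum to 1."

def inverse_volatility_weights(volatilities):
    """Maker's implementation: inverse-volatility weights, normalized to 1."""
    inverses = [1.0 / v for v in volatilities]
    total = sum(inverses)
    return [x / total for x in inverses]


def check_inverse_volatility_weights(volatilities, weights, tol=1e-9):
    """Checker's independent verification, coded without looking at the maker's source.

    It confirms the basic invariants of the specification (weights sum to 1, and
    every pair of weights is in inverse proportion to the assets' volatilities),
    rather than re-running the maker's own code.
    """
    assert abs(sum(weights) - 1.0) < tol, "weights must sum to 1"
    for v_i, w_i in zip(volatilities, weights):
        for v_j, w_j in zip(volatilities, weights):
            # Specification implies w_i / w_j == v_j / v_i for every pair of assets.
            assert abs(w_i / w_j - v_j / v_i) < tol, "weights not proportional to 1/volatility"
    return True


if __name__ == "__main__":
    vols = [0.10, 0.20, 0.40]  # hypothetical annualized volatilities
    weights = inverse_volatility_weights(vols)
    check_inverse_volatility_weights(vols, weights)
    print("maker output:", [round(w, 4) for w in weights])
    print("checker: all invariants satisfied")
```

In practice, the checker’s test would be written and maintained by someone other than the model’s developer; the value of the control comes from that separation, not from the particular invariants tested here.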

 Other lessons point to the complexity of overseeing quantitative models:

  1. Applying regulatory and compliance oversight to quantitative models requires at least some technical capacity. Senior management and control functions need sufficient understanding of the models to ask probing questions. 
  2. Oversight draws on diverse functions. Portfolio management, trading, technology, operations, product support, and legal and compliance all have a role.  (“New product” protocols at most large firms are already cross-functional, but a firm vetting its first quantitative model should not assume existing protocols will be sufficient.)   
  3. Trade secrets and intellectual property are important, but they cannot override compliance and fiduciary obligations. Protecting a proprietary model is not a reason to limit the access that compliance and legal need to confirm that all regulatory requirements are being met.

And then there is a reality, sometimes obscured by the enforcement cases, that coders and model builders are in the same risk-taking, judgment-driven business as analysts and portfolio managers. That means not every coding or model error is necessarily material, and not every misstep is fairly labeled an “error”; some reflect ordinary design and risk judgments. Recognizing that difference can be the most important oversight challenge.

[1] https://www.sec.gov/news/press/2011/2011-37.htm

[2] https://www.sec.gov/news/pressrelease/2014-289.html

[3] https://www.sec.gov/news/press-release/2018-167