Steps Ahead

MOSAIC SMART DATA IS PROUD TO BE A SPONSOR OF FRONTIERS IN QUANTITATIVE FINANCE


Frontiers in Quantitative Finance

A monthly seminar series that brings novel research on quantitative modelling in finance and risk management to a broad audience of academic researchers, industry professionals and regulators. Students, academics and professionals from the finance and government sectors are welcome to attend.

The seminar will be held in central London and requires prior online registration.

Date:
Thursday 12th September 2024
The seminar begins at 6:00 PM (Seating cannot be guaranteed for late arrivals)

Venue:
The Auditorium, Citigroup Centre, London, E14 5LB

Speaker:
Dr Olivier Daviaud
Olivier Daviaud is a research analyst at JP Morgan, where he focuses on systematic option trading. His role is to create new systematic strategies and to provide perspective to clients on quantitative investing topics. Before moving to the sell side, he spent most of his career at Brevan Howard, a macro hedge fund, in risk management and trading capacities. He studied at the École Normale Supérieure Paris-Saclay, holds a PhD in Mathematics from Stanford University, and is a CFA Charterholder. His peer-reviewed publications include articles in Risk.net’s Cutting Edge and in the Annals of Probability.

Event synopsis:
Profit and loss attribution for options: a new framework

Abstract: We introduce a new way to decompose the profit and loss of a delta hedged option, and apply that decomposition to revisit several key trading questions, such as the fair value of implied volatility, the best choice for a delta hedging scheme, and the ex-ante risk profile of an option portfolio.
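
For readers who want a concrete starting point, the Python sketch below simulates the classical gamma-theta view of a delta-hedged call under Black-Scholes assumptions (hedging P&L driven by the gap between realised and implied variance). It is only the textbook baseline that the talk's new decomposition goes beyond, and all parameters are illustrative.

```python
# Classical gamma-theta baseline for a delta-hedged call under Black-Scholes
# assumptions (the talk's new decomposition is not reproduced); parameters illustrative.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, tau, sigma):
    """Black-Scholes call price, delta and gamma with zero rates."""
    d1 = (np.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    price = S * norm.cdf(d1) - K * norm.cdf(d2)
    return price, norm.cdf(d1), norm.pdf(d1) / (S * sigma * np.sqrt(tau))

S, K, T = 100.0, 100.0, 0.25
sigma_impl, sigma_real = 0.20, 0.25        # option marked at 20% vol, stock realises 25%
n, dt = 63, 0.25 / 63
rng = np.random.default_rng(0)

pnl, gamma_theta = 0.0, 0.0
for i in range(n - 1):
    tau = T - i * dt
    price, delta, gamma = bs_call(S, K, tau, sigma_impl)
    dS = S * (np.exp(-0.5 * sigma_real**2 * dt + sigma_real * np.sqrt(dt) * rng.standard_normal()) - 1.0)
    price_new, _, _ = bs_call(S + dS, K, tau - dt, sigma_impl)
    pnl += (price_new - price) - delta * dS                          # hedged P&L increment
    gamma_theta += 0.5 * gamma * (dS**2 - (S * sigma_impl)**2 * dt)  # gamma-theta approximation
    S += dS

print(f"delta-hedged P&L: {pnl:.4f}   gamma-theta approximation: {gamma_theta:.4f}")
```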


6th June

This paper parsimoniously generalizes the VIX variance index by constructing model-free factor portfolios that replicate skewness and higher moments. It then develops an infinite series to replicate option payoffs in terms of the stock, bond, and factor returns. The truncated series offers new formulas that generalize the Black-Scholes formula to hedge variance and skewness risk.
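
As background to the variance leg of this construction, the sketch below shows the standard model-free (VIX-style) replication of a variance swap strike from out-of-the-money option prices. The strike grid and prices are hypothetical, and the paper's higher-moment factor portfolios are not reproduced here.

```python
import numpy as np

def variance_swap_strike(strikes, otm_prices, forward, T, r=0.0):
    """Model-free variance strike from out-of-the-money option prices
    (standard log-contract / VIX-style discretisation); inputs are illustrative."""
    strikes = np.asarray(strikes, dtype=float)
    otm_prices = np.asarray(otm_prices, dtype=float)
    dK = np.gradient(strikes)                       # spacing between adjacent strikes
    k0 = strikes[strikes <= forward].max()          # first strike at or below the forward
    var = (2.0 / T) * np.exp(r * T) * np.sum(dK / strikes**2 * otm_prices)
    var -= (1.0 / T) * (forward / k0 - 1.0) ** 2    # correction for the strike nearest the forward
    return var

# hypothetical one-month option chain
strikes = np.arange(80, 121, 5.0)
otm = np.array([0.3, 0.6, 1.1, 2.0, 3.5, 2.1, 1.2, 0.6, 0.3])   # puts below 100, calls above
print(variance_swap_strike(strikes, otm, forward=100.0, T=1/12))
```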


16th May

A wide variety of solutions have been proposed in order to cope with the deficiencies of Modern Portfolio Theory. The ideal portfolio should optimise the investor’s expected utility. Robustness can be achieved by ensuring that the optimal portfolio does not diverge too much from a predetermined allocation. Information geometry proposes interesting and relatively simple ways to model divergence. These techniques can be applied to the risk budgeting framework in order to extend risk budgeting and to unify various classical approaches in a single, parametric framework. By switching from entropy to divergence functions, the entropy-based techniques that are useful for risk budgeting can be applied to more traditional, constrained portfolio allocation. Using these divergence functions opens new opportunities for portfolio risk managers. This presentation is based on two papers published by the BNP Paribas QIS Lab, ‘The properties of alpha risk parity’ (2022, Entropy) and ‘Turning tail risks into tailwinds’ (2020, The Journal of Portfolio Management).
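
For orientation, the sketch below solves the classical risk budgeting problem (matching each asset's risk contribution to a target budget); the divergence-based generalisation described in the two papers is not reproduced. The covariance matrix and budgets are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def risk_budget_weights(cov, budgets):
    """Classical risk budgeting: find long-only, fully invested weights whose
    risk contributions w_i * (Sigma w)_i / (w' Sigma w) match the given budgets."""
    n = len(budgets)

    def objective(w):
        port_var = w @ cov @ w
        rc = w * (cov @ w) / port_var          # risk contributions, summing to 1
        return np.sum((rc - budgets) ** 2)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(1e-6, 1.0)] * n
    res = minimize(objective, np.full(n, 1.0 / n), bounds=bounds, constraints=cons)
    return res.x

# illustrative 3-asset covariance matrix and risk budgets
cov = np.array([[0.04, 0.006, 0.002],
                [0.006, 0.09, 0.01],
                [0.002, 0.01, 0.16]])
print(risk_budget_weights(cov, np.array([0.5, 0.3, 0.2])))
```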


22nd February

We build statistical models to describe how market participants choose the direction, price, and volume of orders. Our dataset, which spans sixteen weeks for four shares traded on Euronext Amsterdam, contains all messages sent to the exchange and includes algorithm identification and member identification. We obtain reliable out-of-sample predictions and report the top features that predict direction, price, and volume of orders sent to the exchange. The coefficients from the fitted models are used to cluster trading behaviour and we find that algorithms registered as Liquidity Providers exhibit the widest range of trading behaviour among dealing capacities. In particular, for the most liquid share in our study, we identify three types of behaviour that we call (i) directional trading, (ii) opportunistic trading, and (iii) market making, and we find that around one third of Liquidity Providers behave as market makers. This is based on work with Álvaro Cartea, Saad Labyad, Leandro Sánchez-Betancourt and Leon van Veldhuijzen. View the working paper here.
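
A minimal sketch of the general approach, using synthetic data in place of the proprietary Euronext messages: fit one order-direction classifier per member or algorithm, then cluster the fitted coefficients to group trading behaviour. The feature names, model choices and cluster count below are assumptions, not the authors' specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# synthetic stand-in for the message data: per-member features and order direction (1 = buy, 0 = sell)
n_members, n_obs, n_feat = 20, 500, 4        # e.g. imbalance, spread, last return, inventory proxy
coef_matrix = []
for m in range(n_members):
    X = rng.standard_normal((n_obs, n_feat))
    true_beta = rng.standard_normal(n_feat)
    y = (X @ true_beta + 0.5 * rng.standard_normal(n_obs) > 0).astype(int)
    model = LogisticRegression().fit(X, y)   # one direction model per member/algorithm
    coef_matrix.append(model.coef_.ravel())

# cluster the fitted coefficients to group members by trading behaviour
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(np.array(coef_matrix))
print(labels)
```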


January 11th 

Automated Market Makers have emerged quite recently, and Uniswap is one of the most widely used platforms. This protocol is challenging from a quantitative point of view, as it allows participants to choose where they wish to concentrate liquidity. In this talk, we revisit Uniswap v3’s principles in detail to build an unambiguous knowledge base; we analyze the Impermanent Loss of a liquidity provider without assumptions about swap trades or other liquidity providers; we introduce the notion of a liquidity curve and show how to statically replicate options with Uniswap v3; last, we provide a closed-form approximation formula for the collected fees and discuss its accuracy in practice.
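
The position value and impermanent loss of a Uniswap v3 range position follow directly from the standard concentrated-liquidity formulas; the sketch below implements them, ignoring fees and other liquidity providers. The price range and liquidity figure are illustrative.

```python
import numpy as np

def v3_amounts(L, p, pa, pb):
    """Token amounts held by a Uniswap v3 position with liquidity L on [pa, pb]
    at current price p (standard concentrated-liquidity formulas)."""
    p = np.clip(p, pa, pb)
    x = L * (1.0 / np.sqrt(p) - 1.0 / np.sqrt(pb))   # risky token
    y = L * (np.sqrt(p) - np.sqrt(pa))               # numeraire token
    return x, y

def impermanent_loss(p0, p1, pa, pb, L=1.0):
    """Relative loss of the LP position versus simply holding the initial tokens,
    ignoring collected fees (illustrative sketch)."""
    x0, y0 = v3_amounts(L, p0, pa, pb)
    x1, y1 = v3_amounts(L, p1, pa, pb)
    value_lp = x1 * p1 + y1
    value_hold = x0 * p1 + y0
    return value_lp / value_hold - 1.0

# position opened at price 100 with range [80, 125]; price then moves to 110
print(impermanent_loss(p0=100.0, p1=110.0, pa=80.0, pb=125.0))
```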


December 7th 

In the contemporary AI landscape, Large Language Models (LLMs) stand out as game-changers. They redefine not only how we interact with computers via natural language but also how we identify and extract insights from vast, complex datasets. This presentation delves into the nuances of training and customising LLMs, with a focus on their applications to quantitative finance.


November 9th 

Empirical studies consistently find that the price impact of large trades approximately follows a nonlinear power law. Yet tractable formulas for the portfolios that optimally trade off predictive trading signals, risk, and trading costs are only available for quadratic costs corresponding to linear price impact. In this paper, we show that the resulting linear strategies achieve virtually optimal performance even for realistic nonlinear price impact, provided the “effective” quadratic cost parameter is chosen appropriately. To wit, for a wide range of risk levels, this leads to performance losses below 2% compared to the numerical Viterbi algorithm of Kolm and Ritter (2014) run at very high accuracy. The effective quadratic cost depends on the portfolio risk, but can be computed without any sophisticated numerics by simply maximising an explicit scalar function.
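
A stylised illustration of the linear strategies in question: trade partially toward the frictionless (Markowitz) target implied by a mean-reverting signal. The adjustment speed is set by hand here; the paper's contribution is precisely how to choose the "effective" quadratic cost, and hence that speed, so the rule stays near-optimal under nonlinear impact. All parameters are illustrative.

```python
# Stylised linear trading rule with quadratic costs; not the paper's calibration.
import numpy as np

rng = np.random.default_rng(2)
T, rho = 2000, 0.97
gamma, sigma2, lam = 5e-4, 0.02**2, 2e-5      # risk aversion, return variance, quadratic cost

alpha = np.zeros(T)                           # mean-reverting (AR(1)) return forecast
for t in range(1, T):
    alpha[t] = rho * alpha[t - 1] + 1e-3 * rng.standard_normal()

target = alpha / (gamma * sigma2)             # frictionless (Markowitz) position
speed = 0.1                                   # partial-adjustment rate implied by the effective cost
x = np.zeros(T)
for t in range(1, T):
    x[t] = x[t - 1] + speed * (target[t] - x[t - 1])   # trade part of the way toward the target

cost = lam * np.sum(np.diff(x) ** 2)          # quadratic trading cost incurred along the path
print(x[-3:], cost)
```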


June 1st 

Brokers face an essential challenge in Transaction Cost Analysis (TCA): correctly estimating price impact considering a client’s alpha. Using live trading experiments, causal regularization estimates price impact when client alpha is unknown. In addition to satisfying best execution requirements, such an analysis empowers trading algorithms that rely on price impact models, such as Almgren and Chriss’ shortfall execution algorithm. Using a realistic order simulator, the paper quantifies these benefits for a canonical live trading experiment. This is joint work with Nick Westray.


October 28th 

October’s Frontiers In Quantitative Finance seminar was given by Prof. Alexander Lipton, Co-Founder and Chief Information Officer of Sila and Connection Science Fellow at MIT. He presented an automated market-making (AMM) cross-settlement mechanism for digital assets on interoperable blockchains, focusing on central bank digital currencies (CBDCs) and stablecoins.


September 30th 

This month’s seminar was given by Dr Nicholas Westray, Courant Institute of Mathematical Sciences, New York University with a talk on extracting alpha from the limit order book using deep learning.

Firstly, Dr Westray explained the many challenges firms face in generating alpha signals from limit order books: the enormous amounts of data generated; the need for specialist infrastructure to store, process and analyse that data; the fact that the data are noisy, non-stationary and fat-tailed; and the extreme competitiveness of the field. The current approach is for quants to extract features using expert domain knowledge.

Dr Westray then described how neural networks have transformed problems that had previously used hand-crafted approaches and gave a summary of neural network architectures: the multi-layer perceptron, recurrent neural networks, long short-term memory models and convolutional neural networks.

Dr Westray then outlined the problem, which is to predict returns from limit order book data: the returns are stationary, but they are formulated in terms of prices and volumes, and the prices are non-stationary. The bid and ask prices were therefore transformed into order flow and order flow imbalance, which are better suited to predicting returns.
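
As an illustration of the kind of transformation involved, the sketch below computes the standard top-of-book order flow imbalance of Cont, Kukanov and Stoikov (2014) from best bid and ask quotes; Dr Westray's exact feature construction may differ, and the quote sequence is made up.

```python
import numpy as np

def order_flow_imbalance(bid_px, bid_sz, ask_px, ask_sz):
    """Standard top-of-book order flow imbalance: positive contributions when the bid
    price/size rises, negative contributions when the ask price/size falls."""
    bid_px, bid_sz = np.asarray(bid_px, float), np.asarray(bid_sz, float)
    ask_px, ask_sz = np.asarray(ask_px, float), np.asarray(ask_sz, float)

    e_bid = (np.where(bid_px[1:] >= bid_px[:-1], bid_sz[1:], 0.0)
             - np.where(bid_px[1:] <= bid_px[:-1], bid_sz[:-1], 0.0))
    e_ask = (np.where(ask_px[1:] <= ask_px[:-1], ask_sz[1:], 0.0)
             - np.where(ask_px[1:] >= ask_px[:-1], ask_sz[:-1], 0.0))
    return e_bid - e_ask          # per-update contributions; sum over a window for the OFI

# tiny illustrative quote sequence
bid_px = [99.99, 100.00, 100.00, 99.99]
bid_sz = [500, 300, 450, 200]
ask_px = [100.01, 100.01, 100.02, 100.02]
ask_sz = [400, 250, 600, 550]
print(order_flow_imbalance(bid_px, bid_sz, ask_px, ask_sz).sum())
```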

Several models were compared: an autoregressive model, a multi-layer perceptron, a standard LSTM, an LSTM feeding a multi-layer perceptron, a multi-layer LSTM and a CNN feeding into an LSTM, tested on 13 months of NASDAQ stock data. The models were fitted to each stock, implemented in Python using TensorFlow and Keras, and run on GPUs.

Dr Westray concluded from his results that stationarity of the inputs was critical to getting good outcomes and that, since the model results had strong microstructural dependencies, universality also needs to be taken into consideration. He also noted that the simple models gave similar performance to the more complicated ones.

Dr Westray concluded his talk with opportunities to extend the approach, such as Bayesian deep nets, volume prediction, and other venues and asset classes such as foreign exchange or futures.


June 17th 

June’s seminar was given by Carol Alexander, Professor of Finance at the University of Sussex, and visiting Professor at Peking University HSBC Business School.

She gave an overview of her joint research with: Jun Deng at the University of International Business and Economics, Beijing, Daniel Heck and Andreas Kaeck at the University of Sussex Business School, and Bin Zou at the Department of Mathematics, University of Connecticut on “Hedging and Speculation with Bitcoin Derivatives.” 

Firstly, Professor Alexander introduced both direct and inverse perpetual futures contracts and auto-liquidations, focusing on the question of which derivatives on the main perpetual exchanges are best for speculation.

This was followed by an overview of research on price and volatility transmissions in six exchanges, observing that volume emerges mostly from Binance Asia, and volatility transmits from Bybit and the spot exchanges Coinbase and Binance USD.

To conclude, Professor Alexander presented findings on which exchange is best for hedging, observing that direct USD Tether perpetuals were more effective than inverse contracts; however, Binance, Bybit and OKEx seem to offer similar protection over shorter horizons. Also, when determining the optimal hedge ratio that minimises the probability of auto-liquidation, they found that the optimum lies between 40% and 80% of the position to be hedged and changes over time.


May 6th 

May’s seminar was given by Dr Sasha Stoikov, Senior Research Associate at Cornell Financial Engineering, Manhattan.

He gave an overview of his joint research with Peter Decrem, Director at Citi, Yikai Hua, Quantitative Analyst at Citi, and Anne Shen, Graduate at Cornell Financial Engineering on “Market Microstructure for Cointegrated Assets”.

The aim of the research was to define the notion of “micro-price” for multiple cointegrated assets, a generalisation of a previous model to compute the fair price of a single asset. This yields a notion of fair prices as a function of the observable state of multiple order books.

To test this model, the micro-prices of two highly cointegrated assets were computed, using data collected from Interactive Brokers. In particular, the study focused on two exchange-traded funds linked to the performance of the S&P 500: SH, which targets the inverse of the daily return of the S&P 500, and SDS, which targets twice the inverse of the daily return of the S&P 500.

The model relied on two empirical features of the data. Firstly, the regression residual between the two ETFs was noted to be a good predictor of the performance of a long/short portfolio. Secondly, the imbalance at the top of each order book was observed to be a good predictor of the next price move. The goal of the model was then to combine these factors with the mid-price to obtain a generalised formula for the micro-price of each ETF.
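
As a rough illustration of the imbalance feature, the sketch below computes the top-of-book imbalance and the imbalance-weighted mid, a simple first-order proxy for a fair price; the full micro-price of the talk adds a data-driven adjustment that is not reproduced here. The quoted prices and sizes are hypothetical.

```python
def imbalance_weighted_mid(bid_px, bid_sz, ask_px, ask_sz):
    """Top-of-book imbalance and the imbalance-weighted mid, a simple first-order
    proxy for a fair price; the full micro-price adds a further adjustment."""
    imbalance = bid_sz / (bid_sz + ask_sz)
    weighted_mid = imbalance * ask_px + (1.0 - imbalance) * bid_px
    return imbalance, weighted_mid

# illustrative top-of-book snapshot for one ETF
imb, wmid = imbalance_weighted_mid(bid_px=21.40, bid_sz=900, ask_px=21.41, ask_sz=300)
print(imb, wmid)
```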

Multiple trading strategies were compared based on the cumulative profit and loss of the buys and sells, both in sample and out of sample, and it was found that this approach outperformed three of the other trading strategies.

To conclude, it was found that execution algorithms based on this approach could save half the bid-ask spread.

Artificial Intelligence (AI) and machine learning adoption in the FX market

Webinar: Make your FX workflow smart with AI and machine learning

AI and machine learning have permeated virtually every corner of everyday life, but when it comes to FX, how can you harness this technology to make your workflow smarter and improve performance?

To answer these questions and more, Mosaic Smart Data invites you to attend a webinar on AI and machine learning adoption in the FX market.

DATE:  27th April 2021

Colin Lambert
TheFullFX.com (Host)

Professor Rama Cont
Chair of Mathematical Finance, University of Oxford

Matthew Hodgson
Founder and CEO, Mosaic Smart Data

Stephane Alex
Head of Global Customer Marketing Group, MUFG

Stephane Malrait
Head of Market Structure & Innovation, ING Financial Markets

John Estrada
Global Head of eMacro, Credit Suisse

The expert panel will discuss how AI is capable of optimising the whole FX workflow, from pre- and post-trade analysis to trade execution, risk management and reporting – and, critically, the foundational steps you must take before effectively applying this ground-breaking technology to your FX business.

Don’t miss this unique opportunity to learn how to adapt and thrive in the future of FX.


April 15th 

April’s seminar was given by Petter Kolm, Clinical Professor and Director of the Mathematics in Finance Master’s program at NYU’s Courant Institute of Mathematical Sciences. His talk was titled “Hedging an options book with reinforcement learning”.

He gave an overview of his joint research with Gordon Ritter, Adjunct Professor at NYU’s Courant Institute of Mathematical Sciences, on how to optimally hedge an options book in practice, where trading decisions are discrete and trading costs can be nonlinear and difficult to model, by proposing a model based on reinforcement learning.

Firstly, Professor Kolm gave some background, referring to the seminal work of Black, Scholes and Merton on replicating and hedging an option position, and explaining that in practice perfect replication is impossible, so an optimal hedging strategy will depend on the desired trade-off between replication error and trading costs.

After a brief literature review of previous work in this area, he gave an overview of the reinforcement learning technique, detailing the implementation: agent, state space, transaction cost and reward function, which together amount to an automatic hedger prepared to optimise the trade-off between cost and variance.
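
A minimal sketch of a per-step reward of this mean-variance type, assuming a simple linear cost model: the agent is rewarded for the change in hedged wealth and penalised for its square, with kappa controlling the cost-versus-variance trade-off. The numbers and cost parameters are illustrative, not those used in the paper.

```python
# Sketch of a per-step mean-variance reward of the kind described; the linear
# cost model and all numbers are illustrative assumptions.
def step_reward(option_pnl, hedge_shares, stock_move, traded_shares,
                cost_per_share=0.01, kappa=0.1):
    trading_cost = cost_per_share * abs(traded_shares)
    wealth_change = option_pnl + hedge_shares * stock_move - trading_cost
    return wealth_change - 0.5 * kappa * wealth_change**2   # kappa sets cost vs variance trade-off

# short call gains 8.5 as the stock falls 0.2; the agent holds 45 shares and buys 5 more
print(step_reward(option_pnl=8.5, hedge_shares=45, stock_move=-0.2, traded_shares=5))
```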

Professor Kolm then explained how this was applied to a simple example of a European call option with a given strike price and expiry on a non-dividend-paying stock, first taking the strike and maturity as fixed and assuming a zero risk-free rate, and then extending this to a range of strikes and maturities.

He reported that the implementation was flexible, accurate and promising for real-world applications. A key strength of the reinforcement learning approach, he pointed out, is that it makes no assumptions about the form of the trading cost and learns the minimum-variance hedge subject to the transaction cost function provided, requiring only an environment in which transaction costs and option prices can be simulated accurately.


March 18th

March’s seminar was given by Bastien Baldacci, PhD student at École Polytechnique, with a talk titled “Adaptive trading strategies across liquidity pools”. This was joint work with Dr Iuliia Manziuk, Post-Doctoral Researcher also at École Polytechnique.

Firstly, Bastien gave a brief literature review of optimal trading strategies, distinguishing three groups of models: deterministic trading curves using market orders, stochastic approaches for OTC markets, and stochastic approaches in limit order books. He mentioned that optimal liquidation is usually tackled first by computing a deterministic trading curve, where the amount to liquidate is split between several venues, and then by using a smart order router to execute across several liquidity pools.

In order to address the problem of designing an optimal execution across several liquidity pools and accounting for a dynamic market, he proposed a framework for optimal trading in an asset listed on different venues. This takes into account the different execution patterns observed in each venue, allowing for partial execution of limit orders at different limits as well as market orders.

To analyse the behaviour of the trader he compared solutions obtained from a finite difference scheme and a deep reinforcement learning algorithm.

Finally, he suggested some extensions to this approach, including short-term as well as mid- to long-term and path-dependent price signals, market impact and hidden liquidity sources.


February 4th

February’s seminar was given by Dr Nicolas Gaussel, Metori Capital Management. He discussed Environmental, Social and Governance (ESG) investing at a portfolio level, resulting from joint research with Laurent Le Saint, with a talk titled, “ESG risk rating of alternative strategies”.

Firstly, Dr Gaussel gave some background on ESG investing: its recent growth, how it is measured, and examples of ratings. He then stressed the need for a methodology to aggregate ESG ratings at a portfolio level, since the common approach of taking a weighted average of the individual scores is meaningless for hedge funds, which may have leveraged or short positions in an asset.

Dr Gaussel then considered the problem of ESG ratings on long/short and leveraged positions, proposing several interpretations of ESG risk and ESG rating. He showed that the risk of a short position is similar to that of a long position. He proposed that the ESG correlation can be inferred from the correlation of returns on a portfolio, which can then be used to define the overall and ESG specific risk. The ESG rating is then calculated as the ratio of the ESG specific risk to the total risk.

Finally, Dr Gaussel gave an example of applying this methodology to a fund that included futures on stock indices, bonds, currencies and short-term interest rates. He concluded the talk by reiterating the need for a portfolio-level ESG rating and summarising the methodology he proposed, which derives a risk score from the ESG ratings of the portfolio’s constituents.


January 7th

January’s Frontiers in Quantitative Finance Seminar was given by Professor Darrell Duffie, Stanford University, who discussed his latest research, “New approaches to dynamic credit-spread benchmarks”, a collaboration with Professor Antje Berndt and Dr Yichao Zhu from Australian National University and with research student Zachry Wang, Stanford University.

Professor Duffie began his talk by referring to the new risk-free interest rate benchmarks that were introduced to replace LIBOR. The replacement U.S. benchmark, SOFR, has caused dissatisfaction among a number of banks because it does not reflect the increase in their funding costs when credit spreads widen. To address this issue, Professor Duffie proposed his idea for a new credit spread benchmark that he calls the “Across-the-Curve Credit Spread Index” (AXI), which he stressed is just one example of an index with more desirable properties than SOFR.

Firstly, Professor Duffie outlined the criteria that any new benchmark would need to meet: being highly correlated with banks’ cost of funding; being statistically robust and difficult to manipulate, by virtue of being based on a large number of transactions; and remaining adaptable as banks change their approach to funding. He then explained how AXI, an index of the credit spreads on unsecured debt instruments issued by U.S. bank holding companies and their commercial bank subsidiaries, meets all these requirements. The index is a weighted average of credit spreads with maturities ranging from overnight to five years, where the weights reflect both issuance volumes in the primary market and transaction volumes in the secondary market.
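
As a rough illustration of the weighting idea (not the official AXI methodology), the sketch below forms a volume-weighted average of credit spreads across maturity buckets, with weights combining primary issuance and secondary transaction volumes. All figures are hypothetical.

```python
import numpy as np

def weighted_spread_index(spreads_bps, primary_volume, secondary_volume):
    """Illustrative volume-weighted average of credit spreads across maturity buckets,
    in the spirit of the description above; the official AXI weighting is more detailed."""
    weights = np.asarray(primary_volume, float) + np.asarray(secondary_volume, float)
    weights = weights / weights.sum()
    return float(np.dot(weights, spreads_bps))

# hypothetical maturity buckets from overnight out to five years
spreads_bps = [8, 15, 25, 40, 60]      # observed credit spreads, basis points
primary = [50, 30, 20, 15, 10]         # primary-market issuance volumes, $bn
secondary = [10, 20, 25, 20, 15]       # secondary-market transaction volumes, $bn
print(weighted_spread_index(spreads_bps, primary, secondary))
```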

Professor Duffie suggested that there is an additional efficiency gain if banks use a credit spread index in their lending; however, he pointed out that this was preliminary work and further research was needed.

Professor Duffie found that extending this concept to investment-grade corporate bonds increased the volume of transactions nearly five-fold, and that the larger number of transactions further improved the robustness of the index, since it was less prone to statistical noise and manipulation. He referred to this as the Financial Conditions Spread Index (FXI), which he found to be highly correlated with AXI spreads over the last few years.


December 10th

Modelling the Oil Squeeze: Storage and Trading Opportunities in Oil Derivatives

December’s Frontiers in Quantitative Finance Seminar was given by Dr Ilia Bouchouev, Managing Partner of Pentathlon Investments and former Global Head of Derivatives at Koch Supply & Trading. He discussed the background and impact of negative oil prices in his talk titled “Modelling the Oil Squeeze: Storage and Trading Opportunities in Oil Derivatives”.

Firstly, Dr Bouchouev gave some background, referring to the event in April 2020 when, for the first time in history, the price of a WTI oil future was negative. He pointed out that whilst negative prices are common in commodities markets, the magnitude of the negative price was unusual, with the contract closing at minus $40 per barrel.

Dr Bouchouev then went on to dispel a number of misconceptions, such as the idea that this only affected the futures market; in fact, the impact on the spot price meant that oil producers were paying their consumers to take oil. He then gave an overview of oil markets, explaining that the roll yield is the main contributor to returns and, crucially, that “inventory hedgers” are the largest participants in the oil futures market.

He stressed that oil price models must take into consideration storage costs and capacity limits. Dr Bouchouev showed that in early 2020 the impact of COVID-19 caused falling demand for oil, which in turn led to rising inventories. In addition, transporting oil incurs a cost, which is reflected in the spread between storage facilities.

To conclude, Dr Bouchouev explained that storage is important and has an impact on futures, reasoning that as inventories increase, inventory hedgers sell futures, leading to systematic bias in trading signals. He suggested that for options, a Bachelier normal model gave improved performance compared with a Black lognormal model, and that the method of linearisation provides a simple analytic solution.
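
For reference, the Bachelier call price used in this comparison has a simple closed form that remains well defined for negative forwards and strikes; the sketch below implements it with illustrative inputs.

```python
import numpy as np
from scipy.stats import norm

def bachelier_call(F, K, T, sigma_n, r=0.0):
    """Bachelier (normal) model call price on a forward F; unlike the Black lognormal
    model it remains well defined for negative forwards and strikes.
    sigma_n is the normal (absolute, $/bbl) volatility."""
    s = sigma_n * np.sqrt(T)
    d = (F - K) / s
    return np.exp(-r * T) * ((F - K) * norm.cdf(d) + s * norm.pdf(d))

# illustrative: a deeply negative forward, as seen in April 2020, still prices cleanly
print(bachelier_call(F=-10.0, K=5.0, T=0.05, sigma_n=15.0))
```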


November 19th

November’s seminar was given by Dr Richard Martin, Visiting Professor at Imperial College London.  He discussed negative prices and strikes in the Black Model with a talk titled, “Embedded Optionalities in Commodity Markets”.

Firstly, Dr Martin gave some background, referring to the event in April 2020 when, for the first time in history, the price of WTI crude was negative as a consequence of storage reaching maximum capacity. He discussed the implications of this for option pricing and the proposed solution of abandoning the lognormality assumptions of the Black model (which cannot handle negative prices or strikes) in favour of the Bachelier model.

Dr Martin suggested that the Bachelier model approach is problematic given market familiarity with the Black-Scholes model (which, until this time, had worked well across all commodity products) and related properties of the Black model, such as the construction of volatility surfaces and the use of Greeks.

Dr Martin proposed that the Black model can be repaired, given its ability to deliver closed-form pricing solutions and its extension to Merton jump-diffusions specified as exponential Lévy processes, which are easier to implement. He therefore reasoned that there was little merit in moving to the Bachelier model. Finally, he suggested that rather than using local volatility, a better approach is to look for embedded optionality in the assets.


October 29th

On the 29th October, the Frontiers in Quantitative Finance Seminar Series, hosted by the Mathematical and Computational Finance Group at the University of Oxford and sponsored by Mosaic Smart Data, met for its monthly webcast. This month’s speaker was Dr Charles-Albert Lehalle, Head of Data Analytics at Capital Fund Management, Paris, who gave a talk on Reinforcement Learning in High Frequency Finance. The talk focused on his joint work with Dr Othmane Mounjid (University of California, Berkeley).

Dr Lehalle started his talk by mentioning how reinforcement learning has recently been used for applications in financial markets such as optimal trading and deep hedging. He stressed the importance of finding the optimal learning rate and of the exploitation-exploration trade-off, and highlighted results from the literature on obtaining convergence. He explained his methodology for selecting the learning rate and how to improve the rate of convergence.

Dr Lehalle then described his implementation, called the PASS algorithm, which is equivalent to a line search for exploration-exploitation problems. He then showed results from applying the PASS algorithm to several problems taken from the optimal trading literature, namely the optimal placement of a limit order and the optimal liquidation of a large number of shares, and compared the convergence rates with a benchmark.

Dr Lehalle showed that on both problems the PASS algorithm was more efficient and demonstrated faster convergence at the beginning of the learning phase.


September 24th

On the 24th September, the monthly Frontiers in Quantitative Finance Seminar Series commenced for the forthcoming academic year, hosted by the Mathematical and Computational Finance Group at the University of Oxford and sponsored by Mosaic Smart Data.

This month’s speaker was Dr Vladimir Piterbarg, Head of Quantitative Research Analytics at NatWest Markets, and his talk was about LIBOR reform and the Arc-Sine Law.

First, Dr Piterbarg gave a brief history of LIBOR rates and the need to replace them with alternative benchmarks based on overnight risk-free rates (RFRs). The biggest challenge this will pose is the trillions of dollars of existing derivatives contracts that reference the LIBOR rates. He then outlined the ISDA Fallback protocol, a mechanism for switching legacy contracts from LIBOR to RFRs, with most requiring a fixed spread, which he termed the LIBOR Adjustment Spread, to be applied to the RFR. This spread is defined as the median of the spread between LIBOR and compounded Overnight Index Swap (OIS) rates over the five years preceding the LIBOR cessation announcement date, which is expected at the end of 2021.

Dr Piterbarg proposed a model of the future evolution of spreads using a numerically efficient algorithm to approximate the expected median value using the known historical data. The future values of the spread were modelled as a Brownian motion with time-dependent mean where a critical parameter was determined using the Arc-Sine Law.
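
A brute-force Monte Carlo version of the quantity being approximated, assuming an illustrative spread history and a constant drift for the future leg; the paper instead uses a time-dependent mean calibrated via the Arc-Sine Law and an efficient approximation rather than simulation.

```python
# Brute-force Monte Carlo for the expected median spread over the five-year look-back;
# the spread history and dynamics below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
hist = 20.0 + np.cumsum(0.3 * rng.standard_normal(750))       # known LIBOR-OIS spread history, bps

n_paths, n_future, drift, vol = 5_000, 500, 0.002, 0.3        # days left to the cessation announcement
future = hist[-1] + np.cumsum(drift + vol * rng.standard_normal((n_paths, n_future)), axis=1)

window = np.hstack([np.tile(hist, (n_paths, 1)), future])[:, -1250:]   # ~five years of daily data
medians = np.median(window, axis=1)                            # LIBOR Adjustment Spread per path
print(medians.mean())                                          # Monte Carlo estimate of its expectation
```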

Results highlighted differences between implied expectations of the spread in the market versus maximum theoretical values determined by the model. 

Future work is still in progress.


June 25th

For the second Frontiers in Quantitative Finance webcast in June, Professor Loriana Pelizzon from Goethe University Frankfurt gave a presentation on ‘Loss sharing in Central Counterparties: Winners and Losers’. Her research on the subject is a collaboration with Christian Kubitza from the University of Bonn and Mila Getmansky Sherman from the University of Massachusetts.

The aim of their work is to determine the relative benefits of central clearing to different participants following default losses in derivatives transactions. Professor Pelizzon started the webcast by giving a brief background of the over-the-counter (OTC) derivatives market and default losses before the role of the Central Counterparty (CCP) was mandated. She described the role of the CCP and outlined two central clearing mechanisms: multilateral netting, which offsets gains and losses across multiple counterparties; and loss-sharing in the event of a member defaulting, in which losses are shared among the remaining members.
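
As a rough illustration of the first mechanism, the sketch below nets a small matrix of hypothetical bilateral exposures multilaterally, as a CCP would, and compares the result with the gross bilateral claims. The numbers are made up.

```python
import numpy as np

# Illustrative bilateral exposure matrix: entry [i, j] is what party j owes party i (in $m).
# With a CCP, each member's obligations net multilaterally into a single position.
exposures = np.array([[0, 10, 0, 5],
                      [2, 0, 8, 0],
                      [6, 0, 0, 4],
                      [0, 3, 1, 0]], dtype=float)

gross_bilateral = exposures.sum()                                # total gross claims without netting
net_positions = exposures.sum(axis=1) - exposures.sum(axis=0)    # each member's net claim on the CCP
print(gross_bilateral, net_positions, np.abs(net_positions).sum() / 2)
```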

For their model, Professor Pelizzon and her co-researchers have considered systematic risk and portfolio directionality using a network consisting of dealers only, and another with all users, and then observed how losses are managed or shared by the CCP.

Their research concludes that central clearing favours dealers over other market participants such as asset managers, hedge funds and non-dealing banks, since the latter are penalised by the loss-sharing rules adopted by CCPs. Furthermore, this imbalance in benefits is a reason why participants are reluctant to clear centrally when it is not mandated.


June 11th

For June’s Frontiers in Quantitative Finance webcast, Professor Marco Avellaneda from New York University explored Statistical Clustering, Hierarchical Principal Component Analysis and Portfolio Management. The research presented was carried out in collaboration with Juan Serur from New York University.

Professor Avellaneda discussed different types of factor models comparing those based on explicit versus mathematical factors. He went on to describe Principal Component Analysis (PCA) and the first eigen portfolio, which is the market portfolio. He then showed some results of PCA applied to stock markets and concluded that it is difficult to find a financial explanation for higher order eigen portfolios.
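
For orientation, the sketch below extracts the first eigen portfolio from a synthetic panel of returns (weights proportional to the leading eigenvector of the correlation matrix divided by each stock's volatility), which corresponds to the market-like portfolio referred to above. The data are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)

# illustrative panel of daily returns: 250 days x 50 stocks with one common market factor
market = 0.01 * rng.standard_normal((250, 1))
returns = market + 0.02 * rng.standard_normal((250, 50))

corr = np.corrcoef(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)                 # eigenvalues in ascending order
v1 = eigvecs[:, -1]
v1 = v1 if v1.sum() > 0 else -v1                        # sign convention: mostly positive loadings

vols = returns.std(axis=0)
weights = v1 / vols                                     # first eigen portfolio weights
weights /= weights.sum()
print(weights[:5])                                      # roughly equal, market-like weights
```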

Professor Avellaneda continued by introducing Hierarchical Principal Component Analysis (HPCA), which aims to partition the stock market into clusters, explaining how he and his research partner constructed the correlation matrix, selecting a representative portfolio for each sector, which extends the work of Cont and Kan [2011]. He showed the results when applied to S&P 500 returns. Further analysis on four major global equity markets, in the US, Europe, Emerging Markets and China, showed that HPCA gave clear higher-order eigen portfolios.

Comparing PCA against HPCA using the percentage of variance explained, he concluded that PCA rises faster since it is a less greedy algorithm, while HPCA has lower concentration for a given number of components and is therefore more interpretable.

He ended the talk with a brief explanation of Statistically Generated Clusters and gave some suggestions for future work.


May 7th

May’s Frontiers in Quantitative Finance webcast, co-sponsored by Mosaic Smart Data, was led by Dr Brian Healy from Decision Science Ltd and Professor Andrew Papanicolaou from New York University, who gave a joint presentation on Principal Component Analysis (PCA) of Implied Volatility Surfaces.

The presentation was based on new research carried out in collaboration with Professor Marco Avellaneda from New York University and Professor George Papanicolaou from Stanford University.

As an introduction to the subject, Dr Healy began by explaining how they applied PCA to U.S. equities’ returns. He discussed issues such as how many components should be removed for the residuals to be random, how to construct the explanatory factors and the natural structure of the data. He then demonstrated this by applying some results from random matrix theory to the data.

Professor Andrew Papanicolaou followed by discussing the application of PCA using tensors to implied volatility surface data, which is a higher-dimensional problem. When performing matrix PCA on the volatility surface data, they concluded that the number of significant components is 9. As a result of the structure contained in option prices, this is lower than is typical for equity returns, where the number is usually 20.

In a similar approach to a market capitalisation weighted equity factor or index, Healy and Papanicolaou used the open interest on the option and its vega to create a portfolio of implied volatility returns that tracks the volatility surface ‘EigenPortfolio’ – suggesting this can be used in similar ways to the VIX, the S&P 500 volatility index.

Healy and Papanicolaou concluded the talk by explaining how retaining the tensor structure in the ‘EigenPortfolio’ improves how well it tracks the open interest weighted implied volatility returns portfolio.

The online event was hosted by the Oxford Mathematical and Computational Finance Group and was attended by more than 130 participants from around the world. Follow Mosaic Smart Data on LinkedIn to ensure you don’t miss information on the next Group meeting which will be in June.



April 16th

In April’s Frontiers in Quantitative Finance webcast, co-sponsored by Mosaic Smart Data, Professor Johannes Ruf from the London School of Economics (LSE) spoke about the use of neural networks as estimation tools for the hedging of options.

As Professor of Mathematics at the LSE, he has conducted research with one of his PhD students, Weiguan Wang. The aim of Ruf and Wang’s work was to investigate the application of neural networks (NNs) as a tool for non-parametric estimation, discussing their implementation, comparing the results to several benchmarks, and analysing the results to give possible explanations.

Professor Ruf discussed how NNs have been applied to the pricing and hedging of derivatives over the past 30 years, commenting that most research papers have claimed that NNs have superior performance over other benchmarks.

He and Wang have implemented an NN, ‘HedgeNet’, which is trained to minimise the hedging error rather than the pricing error. Professor Ruf described their implementation of the model, the choice of input features and the time periods used. The data sets used were the end-of-day and tick prices of S&P 500 and Euro Stoxx 50 options.
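
A toy analogue of the idea (not the authors' HedgeNet): a small Keras network outputs a hedge ratio from option features, and the training loss is the squared hedging error, the gap between the hedge P&L and the option price change, rather than a pricing error. The features, architecture and synthetic data below are assumptions.

```python
# Toy analogue of training on the hedging error (not the authors' HedgeNet):
# the network maps option features to a hedge ratio, and the loss is the MSE
# between delta * dS and the option price change dV. All data here are synthetic.
import numpy as np
from tensorflow.keras import layers, Model

rng = np.random.default_rng(0)
n = 10_000
features = rng.random((n, 3)).astype("float32")          # stand-ins for moneyness, maturity, vol
dS = rng.standard_normal((n, 1)).astype("float32")       # one-period underlying moves
dV = 0.5 * dS + 0.01 * rng.standard_normal((n, 1)).astype("float32")  # option price changes

feat_in = layers.Input(shape=(3,), name="features")
dS_in = layers.Input(shape=(1,), name="dS")
h = layers.Dense(32, activation="relu")(feat_in)
h = layers.Dense(32, activation="relu")(h)
delta = layers.Dense(1, activation="sigmoid", name="delta")(h)   # call hedge ratio in [0, 1]
hedge_pnl = layers.Multiply()([delta, dS_in])                    # delta * dS

model = Model([feat_in, dS_in], hedge_pnl)
model.compile(optimizer="adam", loss="mse")   # MSE(delta * dS, dV) = mean squared hedging error
model.fit([features, dS], dV, epochs=5, batch_size=256, verbose=0)
```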

He compared their results using Black-Scholes and linear regression as benchmarks. The results showed that HedgeNet outperformed the Black-Scholes benchmark significantly but not the linear regression model, which incorporated the leverage effect. 

Professor Ruf argued that the outperformance of NNs previously reported in the mathematical literature is most likely due to a lack of data hygiene, such as incorrect treatment of the time-series data or an incorrect split of the training and test data.

The online event was hosted by the Oxford Mathematical and Computational Finance Group and was attended by 167 people from around the world.

To read the full presentation of Hedging with Neural Networks, by Professor Johannes Ruf and Weiguan Wang, click here.


February 20th

In February’s Frontiers in Quantitative Finance event co-sponsored by Mosaic Smart Data, Professor Olivier Guéant from the Université Paris 1 spoke about how automation in financial trading is taking the next step in market making.

Professor Guéant discussed research that he and his team have conducted on how to build market making algorithms for options on liquid assets. He discussed problems faced by market makers and the models that price-setting is based upon.

He formulated the problem as one of optimisation, which is complicated by the ‘curse of dimensionality’. His research uses the Greeks to achieve dimensionality reduction, reducing the problem to a system of linear ordinary differential equations, which is easier to solve.

In giving numerical results he referred to assumptions he and his team have made, as well as ideas for further work in the algo market making space.

The event was hosted by the Oxford Mathematical and Computational Finance Group and held at Citi Stirling Square.