Program Book for the 27th International Congress on Insurance: Mathematics and Economics

UTC-5

Program Outline

Mon, July 08

10:00 AM – 05:00 PM SAC 161 Short course on Decentralized Insurance
07:00 PM – 08:00 PM LPSC Dining Hall Reception

Detailed Agenda

Jul 08

10:00 AM–11:00 AM SAC 161 Short Course

Marco Mirabella

Decentralized Finance and Blockchain: Implications for the Insurance Industry

Jul 08

11:00 AM–11:15 AM SAC 161 Coffee Break

Jul 08

11:15 AM–01:00 PM SAC 161 Short Course

Jan Dhaene

Decentralized risk sharing: definitions, properties, and characterizations

Jul 08

01:00 PM–02:30 PM SAC 161 Boxed lunch

Jul 08

02:30 PM–04:15 PM SAC 161 Short Course

Arthur Charpentier

Collaborative insurance, unfairness, and discrimination

Jul 08

04:15 PM–04:30 PM SAC 161 Coffee Break

Jul 08

04:30 PM–05:00 PM SAC 161 Short Course

Runhuan Feng

Decentralized insurance: bridging the gap between industry practice and academic theory

Jul 08

07:00 PM–08:00 PM LPSC Dining Hall Reception

Tue, July 09

09:00 AM – 09:10 AM LC/SAC/OCH Welcome Remarks
09:10 AM – 10:25 AM LC/SAC/OCH Parallel Sessions 09A
SAC 280 Risk Modeling: Capital Allocation
SAC 292 Mortality Modeling I
SAC 294 Pension Mathematics: Fund Management
OCH 202 Machine Learning I
OCH 460 Climate Risk: Extreme Events
LC 201 Optimal Control: Portfolio Optimization
LC 408 Dependence Modeling: Copula
LC 507 P&C Insurance: Estimation Techniques
10:25 AM – 10:50 AM LC 308 Coffee Break
10:50 AM – 12:30 PM LC/SAC/OCH Parallel Sessions 09B
SAC 280 Risk Measure: Dynamic Setting
SAC 292 Longevity Risk
SAC 294 Pension Mathematics: Product Design
OCH 202 Statistical Methods: Estimation and Simulation
OCH 460 Climate Risk: Carbon Emission
LC 201 Ruin Theory II
LC 408 Insurance Economics: P2P Insurance
LC 507 P&C Insurance: Ratemaking I
01:00 PM – 02:00 PM LPSC Dining Hall Lunch
02:00 PM – 03:40 PM LC/SAC/OCH Parallel Sessions 09C
SAC 280 Risk Modeling: Catastrophic Risk
SAC 292 Mortality Modeling II
SAC 294 Pension Mathematics: Product Valuation
OCH 202 Machine Learning II
OCH 460 Quantitative Finance
LC 201 Optimal Control: Dividend Strategy
LC 408 (Re)Insurance Design I
LC 507 InsurTech I
03:40 PM – 04:00 PM Outside LPSC 120 Coffee Break
04:00 PM – 04:10 PM LPSC 120 A&B Opening Ceremony
04:10 PM – 05:10 PM LPSC 120 A&B Keynote Session 1 – Patricia Born
06:00 PM – 07:00 PM LPSC Dining Hall Networking Event (Buffet Dinner)

Detailed Agenda

Jul 09

09:00 AM–09:10 AM LC/SAC/OCH Welcome Remarks

Jul 09

09:10 AM – 10:25 AM

Parallel Session

SAC 280 Risk Modeling:
Capital Allocation
Chair: Daniel Bauer
09:10 AM–09:35 AM

Daniel Bauer

Asset and Liability Risks in Financial Institutions
09:35 AM–10:00 AM

Yiqing Chen

Asymptotic Capital Allocation based on the Higher Moment Risk Measure
10:00 AM–10:25 AM

Pietro Millossovich

Stress Testing with f-divergences
SAC 292 Mortality Modeling I Chair: Brian Hartman
09:10 AM–09:35 AM

Brian Hartman

Multivariate Spatiotemporal Models for County Level Mortality Data in the Contiguous United States
09:35 AM–10:00 AM

Thomas Landry

Modelling seasonal mortality: An age–period–cohort approach
10:00 AM–10:25 AM

Yin Yee Leong

A Spatial Approach to Model Mortality Rates
SAC 294 Pension Mathematics:
Fund Management
Chair: Tianxiang Shi
09:10 AM–09:35 AM

Tianxiang Shi

Pension Fund Management with a Machine Learning Strategy
09:35 AM–10:00 AM

Xiaoqing Liang

Turnpike Properties for Stochastic Pension Fund Control Problems
10:00 AM–10:25 AM

Mei-ling Tang

Pension fund optimization under life-cycle investment with discretionary stopping time
OCH 202 Machine Learning I Chair: Kwangmin Jung
09:10 AM–09:35 AM

Minjeong Park

Flow-based Deep Insurance Ratemaking
09:35 AM–10:00 AM

Canchun He

Machine Learning and Insurer Insolvency Prediction
10:00 AM–10:25 AM

Kwangmin Jung

Matrix-based factor analysis on the prediction of claims probability
OCH 460 Climate Risk:
Extreme Events
Chair: Wei Wei
09:10 AM–09:35 AM

Zhen Dong Chen

Joint Extremes of Precipitation and Wind Speed
09:35 AM–10:00 AM

Shimeng Huang

Building Disaster Resilience: Leveraging High-Resolution Weather Data in Index Insurance
10:00 AM–10:25 AM

Yue Shi

Assessing the dependence between extreme rainfall and extreme insurance claims: A bivariate POT method
LC 201 Optimal Control:
Portfolio Optimization
Chair: Jinchun Ye
09:10 AM–09:35 AM

Yueman Feng

Constrained portfolio optimization in the DC pension fund management
09:35 AM–10:00 AM

Rosario Maggistro

Multi-Agent Dynamic Financial Portfolio Management: A Differential Game Approach
10:00 AM–10:25 AM

Jinchun Ye

Stochastic Utilities with Subsistence and Satiation: Optimal Consumption, Life Insurance Purchase, and Portfolio Management
LC 408 Dependence Modeling:
Copula
Chair: Nariankadu Shyamalkumar
09:10 AM–09:35 AM

Rosy Oh

Introducing Normal-Gamma copula and its application to collective risk model
09:35 AM–10:00 AM

Nariankadu Shyamalkumar

A study of one-factor copula models from a tail dependence perspective
10:00 AM–10:25 AM

Christopher Blier-Wong

Rank-based sequential tests for copulas
LC 507 P&C Insurance:
Estimation Techniques
Chair: Georgios Pitselis
09:10 AM–09:35 AM

Georgios Pitselis

Semi-linear Credibility Distribution Estimation
09:35 AM–10:00 AM

Chudamani Poudyal

Robust Credibility Models – The Winsorized Approach
10:00 AM–10:25 AM

Sebastian Calcetero Vanegas

Bridging the gap between aggregate and individual claim reserving via a population sampling framework

Jul 09

10:25 AM–10:50 AM LC 308 Coffee Break

Jul 09

10:50 AM – 12:30 PM

Parallel Session

SAC 280 Risk Measure:
Dynamic Setting
Chair: Roger Laeven
10:50 AM–11:15 AM

Jose Da Fonseca

Wishart conditional tail risk measures: A dynamic and analytic approach
11:15 AM–11:40 AM

Roger Laeven

Dynamic Return and Star-Shaped Risk Measures via BSDEs
11:40 AM–12:05 PM

Jinzhu Li

The Principle of a Single Big Jump from the perspective of Tail Moment Risk Measure
12:05 PM–12:30 PM
SAC 292 Longevity Risk Chair: Marie-Claire Koissi
10:50 AM–11:15 AM

Jackie Wong Siaw Tze

Bayesian model comparison for mortality forecasting: Coherent Quantification of Longevity Risk
11:15 AM–11:40 AM

Marie-Claire Koissi

Robust-Regression and Applications in Longevity Risk Modeling
11:40 AM–12:05 PM

Mario Marino

Frailty and lifetime shifting: Investigating a new paradigm in mortality modeling and forecasting
12:05 PM–12:30 PM
SAC 294 Pension Mathematics:
Product Design
Chair: Xiaobai Zhu
10:50 AM–11:15 AM

Yumin Wang

Equilibrium Intergenerational Risk Sharing Design for a Target Benefit Pension Plan
11:15 AM–11:40 AM

Xiaobai Zhu

Welfare-Enhancing Annuity Divisor for Notional Defined Contribution Design
11:40 AM–12:05 PM

Yunxiao Wang

Reverse mortgages strategies for families with early bequests and altruism
12:05 PM–12:30 PM

Yuxin Zhou

Multi-state Health-contingent Mortality Pooling: An Actuarially Fair and Self-sustainable Product that Allows Heterogeneity
OCH 202 Statistical Methods:
Estimation
Chair: Vytaras Brazauskas
10:50 AM–11:15 AM

Vytaras Brazauskas

Smoothing and Measuring Discrete Risks on Finite and Infinite Domains
11:15 AM–11:40 AM

Jackson Lautier

A discrete-time, semi-parametric time-to-event model for left-truncated and right-censored data
OCH 460 Climate Risk:
Carbon Emission
Chair: Ruediger Kiesel
10:50 AM–11:15 AM

Xinran Dai

Pricing carbon emission permits under the cap-and-trade policy
11:15 AM–11:40 AM

Leonard Gerick

Managing carbon risk: The impact on optimal portfolio composition
11:40 AM–12:05 PM

Ruediger Kiesel

Net Zero: Fact or Fiction
12:05 PM–12:30 PM

Haibo Liu

Climate-Related Disclosure, Emission-Conscious Investment, and Internal Carbon Price
LC 201 Ruin Theory II Chair: Jae Kyung Woo
10:50 AM–11:15 AM

Jae Kyung Woo

Approximating renewal function with Laguerre series expansion for various Applications
11:15 AM–11:40 AM

Xinyi Zeng

The structures of the number of claims until first passage times and their applications
11:40 AM–12:05 PM

Enrique Thomann

Poverty Trapping in proportional losses models.
12:05 PM–12:30 PM

Yasutaka Shimizu

Estimation of scale functions for spectrally negative Lévy processes with applications to risk theory
LC 408 Insurance Economics:
P2P Insurance
Chair: Markus Huggenberger
10:50 AM–11:15 AM

Markus Huggenberger

Stochastic Dominance and Financial Pricing in Peer-to-Peer Insurance
11:15 AM–11:40 AM

Jiajie Yang

On the optimality of linear residual risk sharing
11:40 AM–12:05 PM

Tao Li

Moral hazard in peer-to-peer insurance with social connection
12:05 PM–12:30 PM

Michael Zhu

Pareto-Efficient Contracts in Centralized vs. Decentralized Insurance Markets, with an Application to Flood Risk
LC 507 P&C Insurance:
Ratemaking I
Chair: TBA
10:50 AM–11:15 AM

Jae Youn Ahn

Generalization of the Laplace approximation and its application to insurance ratemaking
11:15 AM–11:40 AM

Tsz Chai Fung

Statistical Learning of Trade Credit Insurance Network Data with Applications to Ratemaking and Reserving
11:40 AM–12:05 PM

Juan Sebastian Yanez

Weekly dynamic motor insurance ratemaking with a telematic signals bonus-malus score
12:05 PM–12:30 PM

Jul 09

01:00 PM–02:00 PM LPSC Dining Hall Lunch

Jul 09

02:00 PM – 03:40 PM

Parallel Session

SAC 280 Risk Modeling:
Catastrophic Risk
Chair: Ruodu Wang
02:00 PM–02:25 PM

Maud Thomas

Assessing Extreme Risk using Stochastic Simulation of Extremes
02:25 PM–02:50 PM

Samuel Eschker

Analyzing the Effects of Catastrophic Natural Disaster Losses on Mortgage Delinquency Risk using State Space Models
02:50 PM–03:15 PM

Ruodu Wang

Infinite-mean Pareto distributions in decision making
03:15 PM–03:40 PM
SAC 292 Mortality Modeling II Chair: Jinchun Ye
02:00 PM–02:25 PM

Rokas Puišys

Survival with random effect
02:25 PM–02:50 PM

Jinchun Ye

Random distribution kernels and three types of defaultable contingent payoffs
02:50 PM–03:15 PM

Qing Cong

Continuous-time mortality modeling with delayed effects
03:15 PM–03:40 PM
SAC 294 Pension Mathematics:
Product Valuation
Chair: Jonathan Ziveyi
02:00 PM–02:25 PM

Jean-François Bégin

Benefit volatility-targeting strategies in lifetime pension pools
02:25 PM–02:50 PM

Gayani Thalagoda

Variable annuity portfolio valuation with Shapley additive explanations
02:50 PM–03:15 PM

Jonathan Ziveyi

The valuation and assessment of retirement income products: A unified Markov chain Monte Carlo framework
03:15 PM–03:40 PM
OCH 202 Machine Learning II Chair: Arnold Shapiro
02:00 PM–02:25 PM

Panyi Dong

Automated Machine Learning in Insurance
02:25 PM–02:50 PM

Arnold Shapiro

Insurance applications of support vector machines
02:50 PM–03:15 PM

Eric Dong

Distributional Forecasting via Interpretable Actuarial Deep Learning
03:15 PM–03:40 PM

Dylan Liew

Hush Hush: Keeping Neural Network Claims Modelling Private, Secret, Decentralised, and Distributed Using Federated Learning
OCH 460 Quantitative Finance Chair: Maciej Augustyniak
02:00 PM–02:25 PM

Churui Li

Portfolio selection based on the Herd Behavior Index
02:25 PM–02:50 PM

Xinyi Wang

Loan Profit Prediction under the Framework of Innovative Fusion Model
02:50 PM–03:15 PM

Maciej Augustyniak

Optimal quadratic hedging in discrete time under basis risk
03:15 PM–03:40 PM
LC 201 Optimal Control:
Dividend Strategy
Chair: Hansjoerg Albrecher
02:00 PM–02:25 PM

Hansjoerg Albrecher

Optimal dividend strategies for a catastrophe insurer
02:25 PM–02:50 PM

Eric Cheung

Optimal periodic dividend strategies when dividends can only be paid from profits
02:50 PM–03:15 PM

Dante Mata Lopez

On an optimal dividend problem with a concave bound on the dividend rate
03:15 PM–03:40 PM
LC 408 (Re)Insurance Design I Chair: Jinggong Zhang
02:00 PM–02:25 PM

Jinggong Zhang

Blended Insurance Scheme: A Synergistic Conventional-Index Insurance Mixture
02:25 PM–02:50 PM

Jing Zhang

Optimal Bundling Reinsurance Contract Design
02:50 PM–03:15 PM

Yaodi Yong

Optimal reinsurance design under distortion risk measures and reinsurer’s default risk with partial recovery
03:15 PM–03:40 PM

Wanting He

Multi-constrained optimal reinsurance model from the duality perspectives
LC 507 InsurTech I Chair: Maxim Bichuch
02:00 PM–02:25 PM

Xinjie Ge

Risk Preference and Bubble-crash Experience in the Crypto Insurance Market
02:25 PM–02:50 PM

Zhiyu Quan

Privacy-Preserving Collaborative Information Sharing through Federated Learning
02:50 PM–03:15 PM

Maxim Bichuch

Pricing by Stake in DeFi Insurance
03:15 PM–03:40 PM

Jul 09

03:40 PM–04:00 PM Outside LPSC 120 Coffee Break

Jul 09

04:00 PM–04:10 PM LPSC 120 A&B Welcome and Opening Remarks

Jul 09

04:00 PM–04:10 PM LPSC 120 A&B In-person opening

Jul 09

04:10 PM–05:10 PM LPSC 120 A&B

Keynote Session 1 – Patricia Born

Patricia Born

Opportunities, Challenges, and Pitfalls in the Application of Catastrophe Models

Jul 09

06:00 PM–07:00 PM LPSC Dining Hall Networking Event (Buffet Dinner)

Wed, July 10

09:00 AM – 10:00 AM LPSC 120 A&B Keynote Session 2 – Steven Kou
10:00 AM – 10:30 AM Outside LPSC 120 Coffee Break
10:30 AM – 12:10 PM LC/SAC/OCH Parallel Sessions 10A
SAC 280 Discrimination-free Insurance Pricing
SAC 292 Mortality and Other Factors
SAC 294 Pension Mathematics: Decentralized Annuity
OCH 202 Cyber Risk
OCH 460 Life Annuities
LC 201 Optimal Control: Optimal (Re)Insurance
LC 408 Insurance Economics: Other Topics
LC 507 P&C Insurance: Ratemaking II
12:10 PM – 01:10 PM LPSC Dining Hall Lunch
LC 308 IME editorial meeting
01:30 PM – 03:10 PM LC/SAC/OCH Parallel Sessions 10B
SAC 280 Risk Measure: Theoretical Development I
SAC 292 Mortality Modeling III
SAC 294 Dependence Modeling in P&C Insurance
OCH 202 Statistical Methods: GLM
OCH 460 Climate Risk: Other Topics
LC 201 Optimal Control: Actuarial Applications
LC 408 Insurance Economics: Risk Attitude and Decision
LC 507 P&C Insurance: Theoretical Development
03:10 PM – 03:40 PM Outside LPSC 120 Coffee Break
03:40 PM – 04:40 PM LPSC 120 A&B Keynote Session 3 – Jose Blanchet
05:00 PM – 06:00 PM 600 E Grand Ave Shuttle to Navy Pier
06:00 PM – 07:00 PM On the Cruise Start Boarding Cruise
07:00 PM – 10:00 PM On the Cruise Banquet on the Cruise; Dance teaching

Detailed Agenda

Jul 10

09:00 AM–10:00 AM LPSC 120 A&B

Keynote Session 2 – Steven Kou

Steven Kou

Anonymized Risk Sharing

Jul 10

10:00 AM–10:30 AM Outside LPSC 120 Coffee Break

Jul 10

10:30 AM – 12:10 PM

Parallel Session

SAC 280 Discrimination-free
Insurance Pricing
Chair: Arthur Charpentier
10:30 AM–10:55 AM

Arthur Charpentier

Using optimal transport to mitigate unfair predictions
10:55 AM–11:20 AM

Olivier Côté

A Fair price to pay: exploiting causal graphs for fairness in insurance
11:20 AM–11:45 AM

Lydia Gabric

A Bayesian approach to discrimination-free insurance pricing
11:45 AM–12:10 PM

Hong Beng Lim

An adversarial reweighting scheme for discrimination-free insurance pricing
SAC 292 Mortality and
Other Factors
Chair: Ching-Syang Yue
10:30 AM–10:55 AM

Jens Robben

The association between environmental variables and mortality: Evidence from Europe
10:55 AM–11:20 AM

Ching-Syang Yue

Marital Status and All-cause Mortality Rates in Younger and Older Adults: A Study based on Taiwan Population Data
11:20 AM–11:45 AM

Jianjie Shi

Bayesian MIDAS Regression Tree: Understanding Macro/Financial Effect on Mortality Movement
11:45 AM–12:10 PM

Dan Zhu

Mortality and Macro-economic Conditions: A Mixed-frequency FAVAR Approach
SAC 294 Pension Mathematics:
Decentralized Annuity
Chair: Moshe Arye Milevsky
10:30 AM–10:55 AM

Runhuan Feng

Generalized Tontine
10:55 AM–11:20 AM

Peixin Liu

A Unified Theory of Multi-Period Decentralized Insurance and Annuities
11:20 AM–11:45 AM

Nan Zhu

Adverse Selection in Tontines
11:45 AM–12:10 PM

Moshe Arye Milevsky

The Riccati Tontine
OCH 202 Cyber Risk Chair: Linfeng Zhang
10:30 AM–10:55 AM

Yousra Cherkaoui

Classification of Cyber Events Through CART Trees: Excitation Split Criteria Based on Hawkes Process Likelihood
10:55 AM–11:20 AM

Changyue Hu

NLP-Powered Repository and Search Engine for Academic Papers: A Case Study on Cyber Risk Literature with CyLit
11:20 AM–11:45 AM

Olivier Lopez

Modeling and anticipating massive cloud failure in cyber insurance
11:45 AM–12:10 PM

Linfeng Zhang

Machine Learning Methods for Designing Optimal Multiple-peril Cyber Insurance
OCH 460 Life Annuities Chair: Antonino Zanette
10:30 AM–10:55 AM

Antonino Zanette

Enhancing Valuation of Variable Annuities in Lévy Models with Stochastic Interest Rates
10:55 AM–11:20 AM

I-Chien Liu

Analytic Formulae for Valuing Guaranteed Minimum Withdrawal Benefits under Stochastic Interest Rate
11:20 AM–11:45 AM

Kelvin Tang

Valuing equity-linked insurance products for couples
11:45 AM–12:10 PM

Wenchu Li

Basis Risk in Variable Annuity Separate Accounts
LC 201 Optimal Control:
Optimal (Re)Insurance
Chair: Dongchen Li
10:30 AM–10:55 AM

Emma Kroell

Optimal reinsurance in a monotone mean-variance framework
10:55 AM–11:20 AM

Dongchen Li

Strategic Underreporting and Optimal Deductible Insurance
11:20 AM–11:45 AM

Sixian Zhuang

Robust investment and insurance choice under habit formation
11:45 AM–12:10 PM

Bin Zou

Optimal Insurance to Maximize Exponential Utility when Premium is Computed by a Convex Functional
LC 408 Insurance Economics:
Other Topics
Chair: TBA
10:30 AM–10:55 AM

Haiying Jia

Marine claims and the human factor
10:55 AM–11:20 AM

Ko-Lun Kung

Does currency risk mismatch affect the life insurance premium?
11:20 AM–11:45 AM

Morton Lane

The Natural Catastrophe Loss File, 2001-2023 - and some Questions
11:45 AM–12:10 PM

Renata Alcoforado

Socioeconomic benefits of the Brazilian INSS AtestMed programme
LC 507 P&C Insurance:
Ratemaking II
Chair: Himchan Jeong
10:30 AM–10:55 AM

Himchan Jeong

Tweedie multivariate semi-parametric credibility with the exchangeable correlation
10:55 AM–11:20 AM

Minji Park

Unbiased commercial insurance premium
11:20 AM–11:45 AM

Raissa Coulibaly

Comparison of offset and weighted regressions in Tweedie case
11:45 AM–12:10 PM

Jul 10

12:30 PM–01:30 PM LPSC Dining Hall Lunch

Jul 10

01:30 PM – 03:10 PM

Parallel Session

SAC 280 Risk Measure:
Theoretical Development I
Chair: Silvana Pesenti
01:30 PM–01:55 PM

Peng Liu

Law-invariant factor risk measures
01:55 PM–02:20 PM

Kathleen Miao

Robustifying Elicitable Functionals under Kullback-Leibler Misspecification
02:20 PM–02:45 PM

Silvana Pesenti

Differential Sensitivity in Discontinuous Models
02:45 PM–03:10 PM

Yunran Wei

On Vulnerability Conditional Risk Measures: Comparisons and Applications in Cryptocurrency Market
SAC 292 Mortality Modeling III Chair: Kenneth Zhou
01:30 PM–01:55 PM

Yechao Meng

Mortality Prediction: a Parameter Transfer Approach
01:55 PM–02:20 PM

Hongjuan Zhou

On the fractional volatility models: characteristics and impact analysis on actuarial valuation
02:20 PM–02:45 PM

Tsai Tzu-Hao

A Cohort and Empirical Based Multivariate Mortality Model
02:45 PM–03:10 PM

Kenneth Zhou

A new paradigm of mortality modeling via individual vitality dynamics
SAC 294 Dependence Modeling
in P&C Insurance
Chair: Lluis Bermudez
01:30 PM–01:55 PM

Lluis Bermudez

A finite mixture approach to model jointly claim frequency and severity
01:55 PM–02:20 PM

Benjamin Côté

Tree-based Markov random fields with Poisson marginal distributions
02:20 PM–02:45 PM

Lu Yang

Dynamic Prediction of Outstanding Insurance Claims Using Joint Models for Longitudinal and Survival Outcomes
02:45 PM–03:10 PM
OCH 202 Statistical Methods:
GLM
Chair: Emiliano Valdez
01:30 PM–01:55 PM

Vali Asimit

Efficient GLM Solutions
01:55 PM–02:20 PM

Zinoviy Landsman

A Minimum Variance Approach to Linear Regression with application to actuarial and financial problems
02:20 PM–02:45 PM

Emiliano Valdez

Flexible Modeling of Hurdle Conway-Maxwell-Poisson Distributions with Application to Mining Injuries
02:45 PM–03:10 PM

Paul Wilsens

Reducing the dimensionality and granularity in hierarchical categorical variables
OCH 460 Climate Risk:
Other Topics
Chair: Wenjun Zhu
01:30 PM–01:55 PM

Eugenia Fang

Climate Extremes and Financial Crises: Similarities, Differences, and Interplay
01:55 PM–02:20 PM

Yuhao Liu

Pricing green bonds subject to multi-layered uncertainties
02:20 PM–02:45 PM

Ilaria Stefani

Modeling the greenium term structure
02:45 PM–03:10 PM

Wenjun Zhu

El Niño’s Enduring Legacy: An Analysis of Its Impact on Life Expectancy
LC 201 Optimal Control:
Actuarial Applications
Chair: Wenyuan Li
01:30 PM–01:55 PM

Xiaoyu Song

Optimal Consumption and Investment under Uncertain Lifetime and Lifetime Income
01:55 PM–02:20 PM

Chunli Cheng

Reference Health and Investment Decisions
02:20 PM–02:45 PM

Wenyuan Li

Optimal life insurance and annuity decision under money illusion
02:45 PM–03:10 PM

David Saunders

Portfolio Optimization with Reinforcement Learning in a Model with Regime-Switching
LC 408 Insurance Economics:
Risk Attitude and Decision
Chair: Mario Ghossoub
01:30 PM–01:55 PM

Mario Ghossoub

Pareto-Optimal Risk Sharing under Monotone Concave Schur-Concave Utilities
01:55 PM–02:20 PM

Inhwa Kim

Tipping the Scales: Financial Safeguarding and Individual Risk Appetites
02:20 PM–02:45 PM

Benxuan Shi

Optimal insurance with uncertain risk attitudes and adverse selection.
02:45 PM–03:10 PM
LC 507 P&C Insurance:
Theoretical Development
Chair: Edward (Jed) Frees
01:30 PM–01:55 PM

Edward (Jed) Frees

Why Markowitz is better for insurance than investments
01:55 PM–02:20 PM

Jiandong Ren

Credibility theory using fuzzy numbers
02:20 PM–02:45 PM

Matúš Maciak

Functional techniques in chain ladder claims reserving
02:45 PM–03:10 PM

Jul 10

03:10 PM–03:40 PM Outside LPSC 120 Coffee Break

Jul 10

03:40 PM–04:40 PM LPSC 120 A&B

Keynote Session 3 – Jose Blanchet

Jose Blanchet

Navigating Decisions with Imperfect Data Models

Jul 10

05:00 PM–06:00 PM 600 E Grand Ave Depart to Navy Pier

Jul 10

06:00 PM–07:00 PM On the Cruise Start boarding cruise

Jul 10

07:00 PM–10:00 PM On the Cruise Banquet on the Cruise

Thu, July 11

09:00 AM – 09:30 AM LPSC 120 A&B Remarks & Closing Ceremony  
09:30 AM – 09:40 AM Short break
09:40 AM – 10:55 AM LC/SAC/OCH Parallel Sessions 11A
SAC 280 Industry Trend
SAC 292 Mortality Estimation
SAC 294 Pension Mathematics: Consumption and Investment
SAC 270 Statistical Methods: Regression Model
LC 405 Option Valuation
LC 406 Optimal Control: Investment and Consumption
LC 408 Ruin Theory I
LC 506 InsurTech II
10:55 AM – 11:20 AM LC 308 Coffee Break
11:20 AM – 01:00 PM LC/SAC/OCH Parallel Sessions 11B
SAC 280 Risk Measure: Theoretical Development II
SAC 292 Mortality Analysis in Long-Term Care
SAC 294 Dependence Modeling
SAC 270 Machine Learning III
LC 405 Climate Risk: Government’s Role
LC 406 Optimal Stopping
LC 408 (Re)Insurance Design II
01:00 PM – 01:00 PM LC 308 Lunch Box

Detailed Agenda

Jul 11

09:00 AM–09:30 AM LPSC 120 A&B Farewell Remarks & Closing Ceremony  

Jul 11

09:30 AM–09:40 AM Building 1 Short break

Jul 11

09:40 AM – 10:55 AM

Parallel Session

SAC 280 Industry Trend Chair: William Marella
09:40 AM–10:05 AM

William Marella

Actuarial Faculty Development Program: Global Launch-Summer 2024
10:05 AM–10:30 AM

Adam Lewis

Chat GPT and the Insurance Landscape
10:30 AM–10:55 AM

B John Manistre

The Risk Adjusted Scenario Set I
SAC 292 Mortality Estimation Chair: Andrey Ugarte Montero
09:40 AM–10:05 AM

Andrey Ugarte Montero

Incorporating Information on Insured Amounts to Improve Survival Rate Estimates in Liabilities
10:05 AM–10:30 AM

Hsin-Chung Wang

Estimating Life Expectancy for Small Populations
10:30 AM–10:55 AM
SAC 294 Pension Mathematics:
Consumption and Investment
Chair: Fabio Viviano
09:40 AM–10:05 AM

Fabio Viviano

Optimal life annuitisation and investment strategy in a stochastic mortality and financial framework
10:05 AM–10:30 AM

Ziqi Zhou

Optimal Consumption and Retirement Investments under Loss Aversion and Mental Accounting
10:30 AM–10:55 AM

Jiyuan Wang

Stuck in Poverty? The Short and Longer Impacts of Earthquake on Pension Choices and How Government Aids Could Help
SAC 270 Statistical Methods:
Regression Model
Chair: Marie-Pier Côté
09:40 AM–10:05 AM

Marie-Pier Côté

Recent Implementations of Gradient Boosting for Decision Trees: A Comparative Analysis for Insurance Applications
10:05 AM–10:30 AM

Zhiwei Tong

Predictive Subgroup Logistic Regression for Classification with Unobserved Heterogeneity
10:30 AM–10:55 AM

Michal Pesta

Semi-continuous time series for sparse losses with volatility clustering
LC 405 Option Valuation Chair: Anatoliy Swishchuk
09:40 AM–10:05 AM

Hangsuck Lee

Pricing equity indexed annuities with American lookback features
10:05 AM–10:30 AM

Anatoliy Swishchuk

Option pricing with exponential multivariate general compound Hawkes processes
10:30 AM–10:55 AM

Yahui Zhang

Pricing High Dimensional American Options in Stochastic Volatility and Stochastic Interest Rate Models
LC 406 Optimal Control:
Investment and Consumption
Chair: Michel Vellekoop
09:40 AM–10:05 AM

Yevhen Havrylenko

Asset-liability management with liquid and fixed-term assets
10:05 AM–10:30 AM

Michel Vellekoop

Constrained Consumption and Investment for General Preferences
10:30 AM–10:55 AM

Ning Wang

Investment-consumption Optimization with Transaction Cost and Learning about Return Predictability
LC 408 Ruin Theory I Chair: Alexander Melnikov
09:40 AM–10:05 AM

Melanie Averhoff

Experience Rating in the Cramér-Lundberg Model
10:05 AM–10:30 AM

Alexander Melnikov

On estimation of ruin probability when value process is optional semimartingale
10:30 AM–10:55 AM

Zijia Wang

Last Exit Times for Generalized Drawdown Processes
LC 506 InsurTech II Chair: Zhiyu Quan
09:40 AM–10:05 AM

Ian Weng Chan

Data Mining of Telematics Data: Unveiling the Hidden Patterns in Driving Behaviour
10:05 AM–10:30 AM

Muhsin Tamturk

Actuarial Modelling by Quantum Computing
10:30 AM–10:55 AM

Jul 11

10:55 AM–11:20 AM LC 308 Coffee Break

Jul 11

11:20 AM – 01:00 PM

Parallel Session

SAC 280 Risk Measure:
Theoretical Development II
Chair: Fabio Gomez
11:20 AM–11:45 AM

Fabio Gomez

Robustness of the Divergence-Based Risk Measure
11:45 AM–12:10 PM

Xia Han

Monotonic mean-deviation risk measures
12:10 PM–12:35 PM

Qinyu Wu

Duet expectile preferences
12:35 PM–01:00 PM

Fabio Bellini

Orlicz premia and geometrically convex risk measures
SAC 292 Mortality Analysis
in Long-Term Care
Chair: Mengyi Xu
11:20 AM–11:45 AM

Mengyi Xu

Understanding the Length of Stay of Older Australians in Permanent Residential Care
11:45 AM–12:10 PM

Colin Zhang

Expected Length of Stay at Residential Aged Care Facilities in Australia: Investigating the Impact of Dementia
12:10 PM–12:35 PM

Zhihang Huang

Healthy working life expectancy and flexible retirement options for subdivided populations in China
12:35 PM–01:00 PM
SAC 294 Dependence Modeling Chair: Jianxi Su
11:20 AM–11:45 AM

Melina Mailhot

Directional tails dependence for multivariate extended skew-t distributions
11:45 AM–12:10 PM

Liyuan Lin

Negatively dependent optimal risk sharing
12:10 PM–12:35 PM

Jianxi Su

Some results of the multivariate truncated normal distributions with actuarial applications in view
SAC 270 Machine Learning III Chair: Etienne Marceau
11:20 AM–11:45 AM

Xing Wang

Multi-output Extreme Spatial Model for Complex Production Systems
11:45 AM–12:10 PM

Etienne Marceau

Tree-based Ising models: mean parameterization, efficient computation methods and stochastic ordering
12:10 PM–12:35 PM

Despoina Makariou

Using causal machine learning to predict treatment heterogeneity in the primary catastrophe bond market
12:35 PM–01:00 PM

Eduardo F. L. de Melo

Challenges in Actuarial Learning for Loss Modeling of Brazilian Soybean Crops
LC 405 Climate Risk:
Government’s Role
Chair: Tobias Huber
11:20 AM–11:45 AM

Tobias Huber

The Effect of Governmental Disaster Relief on Individuals’ Incentives to Mitigate Losses
11:45 AM–12:10 PM

Shan Yang

The Role of a Regional Risk Insurance Pool in Supporting Climate Resilience
12:10 PM–12:35 PM

Yanbin Xu

Bridging the Protection Gap: A Tax Redistribution Solution Under the Private-Public Partnership Framework
12:35 PM–01:00 PM

Fu Yang

Can environmental pollution liability insurance improve firms’ ESG performance? Evidence from listed industrial firms in China
LC 406 Optimal Stopping Chair: Servaas van Bilsen
11:20 AM–11:45 AM

Servaas van Bilsen

Optimal Consumption and Portfolio Choice in the Presence of Risky House Prices
11:45 AM–12:10 PM

Junyi Guo

Optimal Branching Times for Branching diffusions and its Application in Insurance
12:10 PM–12:35 PM

Jose Manuel Pedraza Ramirez

Optimal Stopping for Exponential Lévy Models with Weighted Discounting
12:35 PM–01:00 PM
LC 408 (Re)Insurance Design II Chair: Qiuqi Wang
11:20 AM–11:45 AM

Qiuqi Wang

A Revisit of the Excess-of-Loss Optimal Contract
11:45 AM–12:10 PM

Ziyue Shi

Worst-case reinsurance strategy with likelihood ratio uncertainty
12:10 PM–12:35 PM

Ziyue Shi

Insurance design under a performance-based premium scheme
12:35 PM–01:00 PM

Jul 11

01:00 PM–01:00 PM LC 308 Lunch Box

Abstract Book

Decentralized Finance and Blockchain: Implications for the Insurance Industry

Marco Mirabella (Ensuro)

This presentation will examine the leading blockchain platforms and their most prominent decentralized finance (DeFi) protocols, exploring the potential impacts on the financial sector, with a focus on the insurance and reinsurance industry. We will analyze how DeFi protocols could disrupt the insurance value chain, presenting both opportunities and challenges. Major decentralized insurance protocols will be introduced, shedding light on their underlying operational models. As a case study, we will take a deeper look at Ensuro, a decentralized capital provider for parametric risk coverage. Key concepts like crypto-backed stablecoins, liquidity pools, decentralized asset management, and the role of decentralized oracles in claims processing will be explained in the context of Ensuro’s approach. The presentation aims to equip audiences with insights into this emerging intersection of blockchain, DeFi, and the insurance realm.

Decentralized risk sharing: definitions, properties, and characterizations

Jan Dhaene (KU Leuven)

In this mini-workshop, we give an overview of some recent results on decentralized risk-sharing, i.e., pooling risks without a central insurer. We consider risk-sharing pools, where each participant is compensated from the pool for his loss and pays in return an ex-post contribution to the pool. The contributions of the participants follow from an appropriate risk-sharing rule, which is chosen such that the pool is self-financing. We introduce and discuss a list of relevant properties of risk-sharing rules. A number of candidate risk-sharing rules are considered, including the simple uniform risk-sharing rule, as well as the conditional mean risk-sharing rule and the quantile risk-sharing rule. Their compliance with the proposed properties is investigated. Axiomatic characterizations of the above-mentioned risk-sharing rules are also considered.
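
For readers less familiar with the terminology, the following display (an editor's illustration in generic notation, not taken from the course material) records the self-financing requirement and two of the rules mentioned above, for individual losses \(X_1,\dots,X_n\) with aggregate loss \(S = X_1 + \dots + X_n\) and contributions \(h_i\):

\[ \sum_{i=1}^{n} h_i(X_1,\ldots,X_n) = S, \qquad h_i^{\mathrm{unif}}(X_1,\ldots,X_n) = \frac{S}{n}, \qquad h_i^{\mathrm{cm}}(X_1,\ldots,X_n) = \mathbb{E}\left[X_i \mid S\right]. \]

The conditional mean rule charges each participant the expected value of their own loss given the pool's aggregate loss, which makes the pool self-financing by construction.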

Collaborative insurance, unfairness, and discrimination

Arthur Charpentier (UQAM)

In this course, we will return to the mathematical properties of risk sharing on networks with reciprocal contracts. We will discuss conditions based on stochastic dominance, proving that policyholders might have an interest in sharing risks with "friends". We will then address fairness issues for such risk-sharing mechanisms. While fairness has recently been studied intensively, either through group or individual fairness, there is not yet much literature on fairness on networks. It is important to address these issues, since perceived discrimination is usually associated with networks. We will see why the topology of the network matters, both for designing peer-to-peer risk-sharing schemes and for assessing whether perceived discrimination is associated with global disparate treatment.

Decentralized insurance: bridging the gap between industry practice and academic theory

Runhuan Feng (Tsinghua University)

In this final segment of the short course, we shall explore from an academic perspective various business models of tokenomics within DeFi insurance protocols. The objective is to establish a theoretical framework that bridges academic theories of risk-sharing with practical applications in DeFi insurance and risk management mechanisms.

Asset and Liability Risks in Financial Institutions

Daniel Bauer (UW Madison)

We analyze economic models of financial institutions with risky assets and liabilities. Markets are incomplete, firm counter-parties are risk-averse, and risk capital is costly. In such a setting, profit-maximizing firms hold risk capital and incur costs associated with different positions. We investigate how institutions measure, allocate, and price asset and liability risks. While we find that risk margins and allocation can be represented by risk measures as they are used in practice, we also find that it is not appropriate to focus on the net portfolio position of assets minus liabilities, as is common practice. Rather, we show that asset and liability risks should be assessed separately, with different risk measures and different associated costs.

Asymptotic Capital Allocation based on the Higher Moment Risk Measure

Yiqing Chen (Drake University)

We investigate capital allocation based on the higher moment risk measure at a confidence level \(q \in (0,1)\). To reflect the excessive prudence of today’s regulatory frameworks in banking and insurance, we consider the extreme case with \(q \uparrow 1\) and study the asymptotic behavior of capital allocation for heavy-tailed and asymptotically independent/dependent risks. Some explicit asymptotic formulas are derived, demonstrating that the capital allocated to a specific line is asymptotically proportional to the Value at Risk of the corresponding individual risk. In addition, some numerical studies are conducted to examine their accuracy.
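
In symbols, the proportionality statement above can be sketched as follows (an editor's hedged paraphrase, not the authors' exact result): if \(K_i(q)\) denotes the capital allocated to line \(i\) at confidence level \(q\), then

\[ K_i(q) \sim c_i \, \mathrm{VaR}_q(X_i) \quad \text{as } q \uparrow 1, \]

where the constant \(c_i\) depends on the tail behaviour and the asymptotic (in)dependence structure of the risks.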

Stress Testing with f-divergences

Pietro Millossovich (Bayes Business School, City, University of London)

We discuss how sensitivity analysis and (reverse and forward) stress testing of a risk management model can be tackled by solving an optimisation problem in which the f-divergence of an alternative scenario is minimised under some constraints. The special cases of the KL- and \(\chi^2\)-divergences receive particular attention, and some features of the general f-divergence case are investigated.
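
As a hedged sketch of the generic optimisation problem described above (notation chosen by the editor, not the talk's), the stressed model solves

\[ \min_{Q \ll P} \; D_f(Q \,\|\, P) = \mathbb{E}_P\!\left[ f\!\left(\frac{dQ}{dP}\right) \right] \quad \text{subject to } \mathbb{E}_Q[g_j] = c_j, \quad j = 1,\dots,m, \]

and in the Kullback–Leibler case \(f(x) = x \log x\) the optimal change of measure is the familiar exponential tilting \(dQ^*/dP \propto \exp\big(\sum_j \lambda_j g_j\big)\).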

Multivariate Spatiotemporal Models for County Level Mortality Data in the Contiguous United States

Brian Hartman (Brigham Young University)

Using a number of modern predictive modeling methods, we seek to understand the factors that drive mortality in the contiguous United States. The mortality data we use is indexed by county and year as well as grouped into 18 different age bins. We propose a model that adds two important contributions to existing mortality studies. First, instead of building mortality models separately by age or treating age as a fixed covariate, we treat age as a random effect. This is an improvement over previous models because it allows the model in one age group to borrow strength and information from other age groups that are nearby. The result is a multivariate spatiotemporal model and is estimated using Integrated Nested Laplace Approximations (INLA). Second, we utilize Gaussian Processes to create nonlinear covariate effects for predictors such as unemployment rate, race, and education level. This allows for a more flexible relationship to be modeled between mortality and these important predictors. Understanding that the United States is expansive and diverse, we also allow for many of these effects to vary by location. The amount of flexibility of our model in how predictors relate to mortality has not been used in previous mortality studies and will result in a more accurate model and a more complete understanding of the factors that drive mortality. Both the multivariate nature of the model as well as the non-linear predictors that have an interaction with space will advance the study of mortality beyond what has been done previously and will allow us to better examine the often complicated relationships between the predictors and mortality in different regions.

Modelling seasonal mortality: An age–period–cohort approach

Thomas Landry (Université du Québec à Montréal (UQAM))

Age–period–cohort (APC) mortality models have become the standard approach in actuarial science to project mortality improvements for uses such as pricing life insurance, annuities, and setting contributions in pension plans. Annual mortality rates are sufficient for such long-term applications; yet, for understanding excess mortality due to, e.g., epidemics and heat waves, annual observations have important limitations, and high-frequency data (i.e., daily death counts and exposure levels) need to be used. In this presentation, we introduce a seasonal overlay that can be used in the context of APC models. Based on a cyclic spline, this extra layer allows the model to capture seasonal features parsimoniously. In an empirical application, we fit a CBDX variant of the APC family to daily mortality data from the province of Quebec. Our dataset covers over 3.5 million individuals aged at least 60 between 1996 and 2019. Our results show significant seasonal patterns consistent with the flu season, which are similar between males and females. We also test different parametric models and find that the shape of seasonality has remained constant over time for most age categories, except for individuals older than 90 years old. As part of a sensitivity analysis, we investigate intra-annual mortality patterns between subgroups and report that the local climate (proxied via the average annual temperature) and individual economic status (i.e., annual income) affect mortality patterns in Quebec. However, seasonal patterns seem to be virtually identical among most subgroups.
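
Schematically (an editor's sketch in generic notation, not the authors' exact specification), the seasonal overlay can be pictured as

\[ \log \mu_{x,t,d} = \eta^{\mathrm{APC}}_{x,t} + s(d), \]

where \(\eta^{\mathrm{APC}}_{x,t}\) is the annual age-period-cohort predictor (here a CBDX variant), \(d\) is the day of the year, and \(s(\cdot)\) is a cyclic spline constrained to be periodic over the year so that the seasonal effect joins smoothly across year ends.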

A Spatial Approach to Model Mortality Rates

Yin Yee Leong (Department of Risk Management and Insurance, Feng Chia University, Taiwan)

Human longevity has been experiencing its largest increase since the end of World War II, and modeling the mortality rates is often the focus of many studies. Among all mortality models, the Lee–Carter model is a popular approach since it is easy to use and has good accuracy in predicting mortality rates. However, empirical studies from several countries have shown that the age parameters of the Lee–Carter model are not constant in time. Many modifications of the Lee–Carter model have been proposed to deal with this problem, including adding an extra cohort effect and adding another period effect. In this study, we propose a spatial modification of the Lee–Carter model and use simulation results to explain why the proposed approach can be used to deal with the problem of the age parameters. Mortality rates are usually recorded by age and time, and thus we can treat mortality rates as 2-dimensional values and apply tools of spatial analysis to them. For example, clusters are areas with unusually high (or low) mortality rates compared with their neighbors, and we use popular cluster detection methods, such as spatial scan statistics, to identify locations with mortality rates that cannot be described well by the Lee–Carter model. We first use computer simulation to demonstrate that the cluster effect is a possible source causing the problem of the age parameters not being constant and that adding the cluster effect can solve the non-constant problem. We also apply the proposed approach to mortality data from Japan, France, the USA, and Taiwan. The empirical results show that our approach has better fitting results and smaller mean absolute percentage errors than the Lee–Carter model.
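
For reference, the baseline model discussed above is the classical Lee–Carter specification (standard notation, not specific to this talk):

\[ \log m_{x,t} = a_x + b_x\,\kappa_t + \varepsilon_{x,t}, \]

where \(m_{x,t}\) is the central death rate at age \(x\) in year \(t\), \(a_x\) and \(b_x\) are age parameters, and \(\kappa_t\) is the period index; the empirical instability of the age parameters over time is the issue the proposed spatial modification targets.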

Pension Fund Management with a Machine Learning Strategy

Tianxiang Shi (Temple University)

Managing pension funds has become an increasingly challenging task for defined benefit pension sponsors. Subject to a series of funding regulatory requirements, pension funds are pressured to generate consistent returns to meet their long-term pension liabilities while mitigating the investment risk at the same time in order to avoid pension shortfalls. In this paper, we introduce a novel Autoformer-based approach, a time series prediction method evolved from the Transformer, to predict stock returns and design trading strategies. Subsequently, we employ the Genetic Algorithm, a search and optimization technique based on the mechanics of natural selection and genetics, to further determine the optimal asset allocations among various stocks and bonds, controlling the pension shortfall risk. Our results show that the proposed method significantly outperforms traditional pension fund investment strategies.

Turnpike Properties for Stochastic Pension Fund Control Problems

Xiaoqing Liang (Hebei University of Technology)

We consider an optimal investment problem for a defined benefit pension fund and examine its exponential turnpike property. We first present a mean-variance model, which can be solved by embedding it in a stochastic linear-quadratic (LQ, for short) problem. Pension managers choose the optimal investment and contribution rate to minimize a combination of the contribution rate risk and the solvency risk. To explore the turnpike property, we construct a static optimization problem corresponding to the optimal control problem. We show that the optimal fund process and its associated optimal strategy, in a suitable sense, closely approximate the steady states or equilibrium points related to the solutions of the static optimization problem as the time horizon lengthens. Additionally, we show that the value function of the stochastic LQ problem approaches that of the static optimization problem in a time-average sense.
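
For orientation (an editor's sketch of the standard embedding technique referred to above, not the paper's exact formulation), a mean-variance objective over the terminal fund level \(X_T\) is typically handled by solving the family of stochastic LQ problems

\[ \min_{\pi} \; \mathbb{E}\big[(X_T - \gamma)^2\big], \qquad \gamma \in \mathbb{R}, \]

whose solutions, traced over \(\gamma\), recover the mean-variance efficient strategies; the turnpike analysis then compares these dynamic solutions with the corresponding static optimization problem over long horizons.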

Pension fund optimization under life-cycle investment with discretionary stopping time

Mei-ling Tang (Department of Financial Engineering and Actuarial Mathematics, Soochow University)

This research discusses the setting of a random retirement time and the variation in the representative's terminal retirement wealth with respect to changes in the optimal retirement time caused by the random environment. Based on the life-cycle model framework of Farhi and Panageas (2007), this research extends the setting to allow a stochastic interest rate and a discretionary stopping time in the dynamic programming process. The optimal consumption and portfolio choices relative to changes in the optimal stopping time explicitly guide the representative's investment-consumption decisions for matching his/her lifestyling preference and terminal financial objectives. The numerical experiment shows that the stochastic stopping time derived from the first hitting time in the barrier option pricing framework occurs earlier than that based on the life-cycle model of Farhi and Panageas (2007). Nonetheless, the barrier option framework provides richer information for the dynamic optimization of pension funds because it not only estimates the exact hitting time of the retirement wealth thresholds but also gives the corresponding cumulative and marginal probabilities of the stopping point.

Flow-based Deep Insurance Ratemaking

Minjeong Park (Department of Statistics, Ewha Womans University)

In the realm of predictive analysis for insurance, deep learning models have gained popularity. Until now, neural networks have mainly been used to model the mean loss. For instance, in the context of LocalGLMnet proposed by Richman and Wüthrich (2023), the predictive mean loss is estimated as a complex function of the a priori risk characteristics, while the corresponding distribution is described by the exponential dispersion family, which is usually parameterized by a few parameters. However, in insurance ratemaking, the estimation of the distribution itself is just as important as the estimation of the mean. To address this issue, our study utilizes normalizing flows, a family of generative models with tractable distributions, to estimate arbitrary given distributions. Through simulation studies, we demonstrate the efficiency of our proposed flow-based method in ratemaking across diverse insurance scenarios. Additionally, we demonstrate the impact of tail thickness on ratemaking and how to control the tail thickness in the proposed method. Finally, a real data analysis illustrates the performance of the proposed method. Reference: [1] Richman, Ronald, and Mario V. Wüthrich. "LocalGLMnet: interpretable deep learning for tabular data." Scandinavian Actuarial Journal 2023.1 (2023): 71-95.
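
As background on the model class named above (a standard identity, not the authors' specific architecture), a normalizing flow with an invertible map \(f_\theta\) and base density \(p_Z\) represents the loss density through the change-of-variables formula

\[ p_Y(y) = p_Z\big(f_\theta(y)\big)\,\left|\det \frac{\partial f_\theta(y)}{\partial y}\right|, \]

so the full severity distribution, including its tail thickness, can be learned by maximizing the resulting log-likelihood rather than by fitting only a conditional mean.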

Machine Learning and Insurer Insolvency Prediction

Canchun He (Peking University)

Solvency and its related ruin theory have long been a central topic in actuarial science and insurance economics. We explore the Structural Artificial Neural Network (SANN), a component-wise machine learning algorithm, to predict the insolvency and failure of insurance companies. This new algorithm enables exploring non-redundant predictive information from Big Data. We show that SANN significantly improves the out-of-sample failure prediction, resulting in an average increase of AUC between 1.83% and 9.55%, compared to traditional machine learning and logistic models, based on a sample of 2,424 insurers from 17 European countries. We also show that macroeconomic and yield information is important in predicting insurer failures, in addition to firm characteristics. This research contributes a new machine learning method, SANN, to the insolvency modelling and prediction in the sense that SANN enables grouping predictors into economic categories, and thus allows firm characteristics and macroeconomic priors as separate disciplines to interact within and between categories.

Matrix-based factor analysis on the prediction of claims probability

Kwangmin Jung (Pohang University of Science and Technology (POSTECH))

We propose a model to predict the probability of insurance claims with a matrix-based factor analysis. The proposed model employs projected principal component analysis, where one can estimate unobserved latent factors more accurately by projecting the data matrix onto a linear space spanned by covariates. Our approach can help overcome the curse of dimensionality when the number of insured-specific features and insurance coverages is large; thus, it can lead actuaries to better estimates of the claims probability than conventional methods such as generalized linear models. Using a health insurance customer dataset provided by one of the largest life insurers in the Korean market, we show that the proposed model performs better in predicting the probability of claims than other machine learning benchmarks (logistic regression and XGBoost). We further find that our model reduces the computational time by around 86% and 98% compared to logistic regression and XGBoost, respectively. Our approach can help insurers increase the efficiency of claims management and thus strengthen their financial stability through highly accurate claims prediction.

Joint Extremes of Precipitation and Wind Speed

Zhen Dong Chen (UNSW)

The compounding effects of heavy precipitation and strong winds can lead to severe disasters, resulting in catastrophic losses. This paper presents a novel approach to modeling joint extreme climate events involving precipitation and wind speed. A case study is undertaken, utilizing data gathered from weather observatories spanning across New South Wales, Australia. Regarding methodology, we implement an automated declustering technique to address inherent challenges in applying extreme value theory to data with temporal dependence. Moreover, we utilize the INLA package for spatial modeling across a significant number of datasets. Remarkably, these methodological advancements are applicable to various other forms of joint extremes. Furthermore, we introduce a proxy derived from these joint extremes and create a heatmap to identify any significant trends of heavy precipitation and strong winds. This proxy offers a refined understanding of the compounding effects of joint extremes and holds the potential to provide valuable insights for insurance pricing.

Building Disaster Resilience: Leveraging High-Resolution Weather Data in Index Insurance

Shimeng Huang (University of Wisconsin-Madison)

With rising frequency, natural disasters are leading to dramatically increasing financial costs in recent decades. Yet, the insurance protection gap for natural disasters is large, with over 60% of global economic losses uninsured from 2012 to 2021. In contrast to traditional indemnity insurance with high adjustment costs and long delays in claims settlement, index-based contracts can greatly reduce the cost and time of claims settlement while offering much needed protection against catastrophic events. The question arises: can index insurance be utilized to bridge the protection gap, as an alternative to indemnity insurance? While promising, index insurance faces basis risk due to the mismatch between the chosen index and actual losses. By leveraging comprehensive weather data and deep learning models, we intend to construct a modeled index capable of forecasting the ultimate losses from flood events for a given claim with reduced basis risk. Moreover, we will explore the potential of the proposed index insurance to improve the coverage of disaster insurance under a utility framework. We aim to show that a high-quality index with low basis risk is crucial for improving insurance coverage. This offers valuable insights for policymakers, insurers, and policyholders on leveraging insurance innovations for greater disaster resilience.

Assessing the dependence between extreme rainfall and extreme insurance claims: A bivariate POT method

Yue Shi (NHH Norwegian School of Economics)

Climate change has increased the frequency and intensity of extreme weather events. For insurance companies, it is essential to identify and quantify extreme climate risk. They must set aside enough capital reserve to bear the costs of extreme events; otherwise, they may face bankruptcy. In this paper, I employ the state-of-the-art bivariate peak over threshold (POT) method to study the dependence between extreme rain events and extreme insurance claims. I utilize a novel insurance dataset on home insurance claims related to rainfall-induced damage in Norway and select two typical Norwegian municipalities to investigate the impact of heavy rain on large claims. Based on the model estimates and tail dependence measures, I find evidence that extreme insurance claims have the strongest dependence with rainfall intensity and daily rain amounts. I also identify the region-specific difference in rainfall variables as a key indicator of home insurance risk. The findings offer insights into the complex dynamics between extreme rain and home insurance claims and provide modeling advice to insurance companies in the face of a changing climate.
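
As a minimal illustration of the marginal step of a POT analysis (an editor's sketch on synthetic data in Python; the variable names are hypothetical and the bivariate dependence model used in the paper is not reproduced here):

import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
# synthetic stand-ins for daily rainfall and claim amounts (not the paper's data)
rain = rng.gamma(shape=2.0, scale=10.0, size=5000)
claims = rng.lognormal(mean=8.0, sigma=1.2, size=5000)

# choose a high threshold and fit a generalized Pareto distribution to the exceedances
u_rain = np.quantile(rain, 0.95)
xi, _, sigma = genpareto.fit(rain[rain > u_rain] - u_rain, floc=0)
print(f"rainfall GPD shape={xi:.3f}, scale={sigma:.3f}")

# crude empirical check of joint tail behaviour: how often do both series
# exceed their 95% quantiles? (about 0.0025 if they were independent)
u_claims = np.quantile(claims, 0.95)
joint = np.mean((rain > u_rain) & (claims > u_claims))
print(f"empirical joint exceedance probability: {joint:.4f}")

In the actual analysis, the joint exceedances would be modeled with a bivariate extreme-value dependence structure rather than the naive empirical count above.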

Constrained portfolio optimization in the DC pension fund management

Yueman Feng (The University of Hong Kong)

This paper examines a constrained portfolio optimization problem in the context of defined contribution (DC) pension fund management. We address two primary risks: financial risk stemming from stochastic interest and inflation rates, and mortality risk associated with the possibility of an individual passing away before reaching retirement age. Throughout the accumulation period, individuals dynamically allocate their wealth across a stock index, two nominal bonds, an inflation-linked bond, and a life insurance policy in order to maximize both their death benefit and terminal wealth. Simultaneously, they must obey convex-set trading constraints, which encompass non-tradeable-asset, no-short-selling, and no-borrowing constraints as special cases. To tackle this problem, we construct an artificial market to derive the dual problem and introduce a dual control neural network approach to compute tight lower and upper bounds for the initial problem. Our proposed algorithm demonstrates greater applicability in comparison to the simulation of artificial markets strategies (SAMS) approach presented by Bick et al. (2013). In conclusion, our findings suggest that when taking trading constraints into account, individuals tend to decrease their demand for life insurance.

Multi-Agent Dynamic Financial Portfolio Management: A Differential Game Approach

Rosario Maggistro (University of Trieste, Italy)

In this work, we consider a multi-agent portfolio optimization model with life insurance for two players with random lifetimes under a dynamic game approach. Each player is a price-taker, receives no income, and invests in the market to maximize her own utility of consumption and bequest. The market is complete and consists of n different assets, of which n-1 are risky with prices driven by geometric Brownian motion, while one is risk-free. We analyze both the non-cooperative and cooperative scenarios, and by considering the family of CRRA utility functions, we determine closed-form expressions for the optimal consumption, investment, and life insurance for both players. A sensitivity analysis is provided both to illustrate the impact of the biometric and risk aversion parameters on the optimal controls and to compare the non-cooperative strategies with the cooperative ones. As a result, we suggest that cooperation favours consumption optimality, while non-cooperation promotes coverage of the risk of death.

Stochastic Utilities with Subsistence and Satiation: Optimal Consumption, Life Insurance Purchase, and Portfolio Management

Jinchun Ye (Independent Researcher)

We introduce stochastic utilities such that utility of any fixed amount of interest is a stochastic process or random variable. Also, there exist stochastic (or random) subsistence and satiation levels associated with stochastic utilities. Then, we consider optimal consumption, life insurance purchase and investment strategies to maximize the expected utility of consumption, bequest and pension with respect to stochastic utilities. We use the martingale approach to solve the optimization problem in two steps. First, we solve the optimization problem with an equality constraint which requires that the present value of consumption, bequest and pension is equal to the present value of initial wealth and income stream. Second, if the optimization problem is feasible, we obtain the explicit representations of the replicating life insurance purchase and portfolio strategies. As an application of our general results, we consider a family of stochastic utilities which have hyperbolic absolute risk aversion (HARA).
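
The equality constraint described above can be written schematically as follows (an editor's sketch in generic martingale-approach notation with state-price density \(H_t\), death time \(\tau\), and retirement date \(T\); the authors' exact formulation may differ):

\[ \mathbb{E}\!\left[\int_0^{\tau \wedge T} H_t\, c_t\, dt \;+\; H_{\tau} B_{\tau}\,\mathbf{1}_{\{\tau \le T\}} \;+\; H_T P_T\,\mathbf{1}_{\{\tau > T\}}\right] \;=\; x_0 \;+\; \mathbb{E}\!\left[\int_0^{\tau \wedge T} H_t\, y_t\, dt\right], \]

where \(c_t\) is consumption, \(B_\tau\) the bequest, \(P_T\) the pension, \(y_t\) the income stream, and \(x_0\) the initial wealth; the second step then reads off the replicating life insurance purchase and portfolio strategies from the optimal claim.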

Introducing Normal-Gamma copula and its application to collective risk model

Rosy Oh (Korea Military Academy)

This study proposes a new copula family and a corresponding statistical method for modeling the joint distribution of frequency and average severity in the collective risk model (CRM). Copula methods applied to the CRM in the literature so far have limitations: owing to the complex intrinsic dependence between frequency and average severity, they are incapable of describing the increasing/decreasing conditional variance property of a bivariate distribution. We propose a new copula family derived from the Normal-Gamma distribution that addresses this limitation. The effectiveness of the proposed copula family in modeling the joint distribution of frequency and average severity is evaluated through numerical studies and real data analysis.

A study of one-factor copula models from a tail dependence perspective

Nariankadu Shyamalkumar (The University of Iowa)

Modeling multivariate dependence in high dimensions is challenging, with popular solutions resorting to the construction of a multivariate copula as a composition of lower dimensional copulas. Pair-copula constructions do so with the use of bivariate linking copulas, but their parametrization, in size, being quadratic in the dimension, is not quite parsimonious. Besides, the number of regular vines grows super-exponentially with the dimension. One popular parsimonious solution is factor copulas, and in particular, the one-factor copula is touted for its simplicity - with the number of parameters linear in the dimension - while being able to cater to asymmetric non-linear dependence in the tails. In this paper, we will add nuance to this claim from the point of view of a popular measure of multivariate tail dependence, the Tail Dependence Matrix (TDM). In particular, we focus on the one-factor copula model with the linking copula belonging to the BB1 family. For this model, we derive tail dependence coefficients and study their basic properties as functions of the parameters of the linking copulas. Based on this, we study the representativeness of the class of TDMs supported by this model with respect to the class of all possible TDMs. We establish that since the parametrization is linear in the dimension, it is no surprise that the relative volume is zero for dimensions greater than three, and hence, by necessity, we present a novel manner of evaluating the representativeness that has a combinatorial flavor. We formulate the problem of finding the best representative one-factor BB1 model given a target TDM and suggest an implementation along with a simulation study of its performance across dimensions. Finally, we illustrate the results of the paper by modeling rainfall data, which is relevant in the context of weather-related insurance.
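
For completeness, the (upper) tail dependence coefficient underlying the TDM has the standard definition (not specific to this paper)

\[ \lambda_U = \lim_{u \uparrow 1} \Pr\big(U_2 > u \mid U_1 > u\big) = \lim_{u \uparrow 1} \frac{1 - 2u + C(u,u)}{1-u}, \]

where \(C\) is the copula of the pair \((U_1, U_2)\); the TDM collects these pairwise coefficients into a matrix, and the question studied above is which such matrices a one-factor BB1 model can reproduce.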

Rank-based sequential tests for copulas

Christopher Blier-Wong (University of Toronto)

In multivariate insurance data analysis, the slow arrival of data and the complexity of dependence challenge the effectiveness of classical statistical tests. Constrained by their finite-time validity, traditional copula tests fall short in real-time analysis settings. This talk introduces a framework to develop always-valid tests for copulas, using normalized sequential ranks and sequential e-variables to construct a test martingale. This approach allows for continuous data monitoring, surpassing traditional tests’ limitations by ensuring validity at any given moment and demonstrating enhanced robustness against peeking effects. We showcase the method’s superior performance in fixed-time and real-time testing scenarios. The ability to monitor the result of tests in real-time will enable risk managers and insurance companies to arrive at conclusions more quickly, enabling insurers to mitigate risks more effectively and optimize their portfolios through data-driven insights.

Semi-linear Credibility Distribution Estimation

Georgios Pitselis (University of Piraeus)

Semi-linear credibility, introduced by De Vylder (1976b), is based on transformed data of the form Y = f(X), where f is some arbitrary transformation. The function f may be given in advance, or we may try to find the one that provides the best fit to the data. This paper proposes semi-linear credibility distribution estimation and provides a measure of process uncertainty in claim estimation. The optimal projection theorem is applied for the semi-linear credibility distribution estimation. In addition, credibility distribution estimators are established and numerical illustrations are presented. Some examples of semi-linear credibility distribution estimation based on insurance data are also provided.

Robust Credibility Models – The Winsorized Approach

Chudamani Poudyal (University of Wisconsin-Milwaukee)

Credibility theory offers quantitative tools that help insurance companies balance the experience of individual policyholders against that of the overall portfolio. In this presentation, we introduce a robust alternative to classical Buhlmann credibility for premium estimation, using winsorized loss data to reduce the impact of extreme values. This method provides explicit formulas for calculating structural parameters and premiums across typical risk models in insurance. The corresponding asymptotic normality properties are established within the general framework of L-statistics. Through simulation studies, it is shown that this robust approach is more stable and less sensitive to model assumptions than traditional methods. We also discuss the use of non-parametric estimates in credibility and present a case study to demonstrate how this model effectively captures risk behavior and mitigates the effect of misspecification.
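
A minimal sketch of the general idea, assuming a balanced nonparametric Buhlmann setting with losses winsorized at empirical quantiles (not the authors' estimators or formulas):

    import numpy as np

    def winsorize(x, lower_q=0.05, upper_q=0.95):
        # Clamp losses at empirical quantiles to limit the influence of extremes.
        lo, hi = np.quantile(x, [lower_q, upper_q])
        return np.clip(x, lo, hi)

    def buhlmann_premiums(losses_by_risk, lower_q=0.05, upper_q=0.95):
        # Nonparametric Buhlmann credibility on winsorized losses.
        # losses_by_risk: list of 1-D arrays, one per policyholder;
        # a balanced design (equal n per risk) is assumed for simplicity.
        data = [winsorize(np.asarray(x, float), lower_q, upper_q) for x in losses_by_risk]
        n = len(data[0])
        means = np.array([x.mean() for x in data])
        within = np.mean([x.var(ddof=1) for x in data])        # E[s^2(Theta)] estimate
        between = max(means.var(ddof=1) - within / n, 0.0)     # Var[m(Theta)] estimate
        k = np.inf if between == 0 else within / between
        z = n / (n + k)                                        # credibility factor
        collective = means.mean()
        return z * means + (1 - z) * collective

    rng = np.random.default_rng(0)
    book = [rng.pareto(3.0, size=20) * 10 for _ in range(5)]
    print(buhlmann_premiums(book))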

Bridging the gap between aggregate and individual claim reserving via a population sampling framework

Sebastian Calcetero Vanegas (University of Toronto)

Claim reserving in insurance has been explored within two major frameworks: the macro-level approach, which estimates reserves at an aggregate level (e.g., Chain-Ladder), and the micro-level approach, which estimates reserves individually (e.g., Antonio and Plat, 2014). These approaches, rooted in entirely different theoretical foundations, are somewhat incompatible. This disparity in methodologies impedes insurance practitioners from adopting more flexible models, leaving them to rely on simplistic versions of macro-level approaches. In this talk, we introduce a novel approach grounded in population sampling theory, offering a unified statistical framework for reserving methods. We demonstrate that macro- and micro-level models represent extreme yet natural instances of an augmented inverse probability weighting (AIPW) estimator. This AIPW estimator bridges the gap between these contrasting methodologies, enabling a seamless transition between them by integrating principles from both aggregate and individual models into more accurate estimations. Additionally, we explore how the population sampling perspective can enhance the robustness and effectiveness of current reserving models.
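
The generic AIPW estimator underlying this discussion has the form sum_i [ m(X_i) + R_i * (Y_i - m(X_i)) / pi(X_i) ], with R_i a reporting indicator, pi the reporting probability, and m a claim-level model. A rough sketch under these assumptions (not the authors' reserving implementation):

    import numpy as np

    def aipw_total(y, reported, pi_hat, m_hat):
        # Generic AIPW estimate of a total.
        # y:        observed claim amounts (arbitrary where not reported)
        # reported: 1 if the claim is observed, 0 otherwise
        # pi_hat:   estimated reporting probabilities (the "sampling" weights)
        # m_hat:    model-based predictions of the claim amount (the "micro" part)
        # Pure IPW is recovered with m_hat = 0; the pure model-based estimate
        # is recovered with reported = 0.
        y, r, pi, m = map(np.asarray, (y, reported, pi_hat, m_hat))
        return np.sum(m + r * (y - m) / pi)

    # toy example: 100 claims, 70% reporting probability
    rng = np.random.default_rng(1)
    true = rng.gamma(2.0, 500.0, size=100)
    rep = rng.binomial(1, 0.7, size=100)
    pred = np.full(100, true[rep == 1].mean())    # crude severity model
    print(aipw_total(true * rep, rep, 0.7 * np.ones(100), pred))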

Wishart conditional tail risk measures: A dynamic and analytic approach

Jose Da Fonseca (Auckland University of Technology)

This study introduces a new dynamic and analytical framework for quantifying multivariate risk measures. Using the Wishart process, a stochastic process with values in the space of positive definite matrices, we compute several conditional tail risk measures which, thanks to the remarkable analytical properties of the Wishart process, can be obtained explicitly up to a one- or two-dimensional integration. Exploiting the stochastic differential equation representation of the Wishart process, we show how a dynamic as well as an intertemporal view of these risk measures can be embedded in the proposed framework. Several numerical examples show that the framework is versatile and operational, thus providing a useful tool for risk management.

Dynamic Return and Star-Shaped Risk Measures via BSDEs

Roger Laeven (University of Amsterdam)

This paper establishes characterization results for dynamic return and star-shaped risk measures induced via backward stochastic differential equations (BSDEs). We first characterize a general family of static star-shaped functionals in a locally convex Fréchet lattice. Next, employing the Pasch-Hausdorff envelope, we build a suitable family of convex drivers of BSDEs inducing a corresponding family of dynamic convex risk measures of which the dynamic return and star-shaped risk measures emerge as the essential minimum. Furthermore, we prove that if the set of star-shaped supersolutions of a BSDE is not empty, then there exists, for each terminal condition, at least one convex BSDE with a non-empty set of supersolutions, yielding the minimal star-shaped supersolution. We illustrate our theoretical results in a few examples and demonstrate their usefulness in two applications, to capital allocation and portfolio choice.

The Principle of a Single Big Jump from the perspective of Tail Moment Risk Measure

Jinzhu Li (Nankai University)

In this work, we establish a formulation of the principle of a single big jump based on a risk measure defined via tail moments. We conduct our study under a framework in which the individual risks are pairwise asymptotically independent and have distributions from the Fréchet or Gumbel max-domain of attraction. The asymptotic behavior of the tail mixed moments is also discussed in detail. The results obtained are applied to the optimal capital allocation problem based on a tail mean-variance model. We also extend parts of our results to the case involving randomly weighted risks.

Bayesian model comparison for mortality forecasting: Coherent Quantification of Longevity Risk

Jackie Wong Siaw Tze

Stochastic models are appealing for mortality forecasting because of their ability to generate intervals that represent the uncertainties underlying the forecasts. This is particularly crucial in the quantification of longevity risk, which translates into monetary terms in actuarial computations. In this talk, we present a fully Bayesian implementation of the age-period-cohort-improvement (APCI) model with overdispersion, which is compared with the well-known Lee–Carter model with cohorts. We show that naive prior specification can yield misleading inferences, and we propose a Laplace prior as an elegant solution. We also perform model averaging to incorporate model uncertainty. Overall, our approach allows coherent inclusion of multiple sources of uncertainty, producing well-calibrated probabilistic intervals. Our findings indicate that the APCI model offers a better fit and forecast for England and Wales data spanning 1961–2016. For more details, please see https://academic.oup.com/jrsssc/article/72/3/566/7083938.

Robust-Regression and Applications in Longevity Risk Modeling

Marie-Claire Koissi (University of Wisconsin-Eau Claire)

Life expectancy has generally increased worldwide: the global life expectancy at birth increased by an average of 6.5 years, from 66.8 years in 2000 to 73.3 years in 2019. Longevity risk, from the perspective of an insurance company or a defined contribution plan, is the unexpected probability that individuals will live longer than anticipated and therefore potentially outlive their retirement assets. Improved models for mortality (and therefore life expectancy) forecasting give more accurate estimates of life expectancy and are undoubted steps in mitigating longevity risk. In a robust (or resistant) regression, less weight is put on outliers or influential points. In this project, we analyze several models for mortality and life expectancy forecasting using robust regression and non-regression-based fitting techniques. Applications to selected mortality-linked insurance products are presented.
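
As a generic illustration of robust regression (not the specific models analyzed in the project), a Huber-weighted fit of a linear trend in log death rates can be compared with ordinary least squares; the data below are simulated.

    import numpy as np
    import statsmodels.api as sm

    # Robust linear trend for log death rates at a fixed age, with Huber weights
    # down-weighting an outlying year; illustrative data only.
    years = np.arange(2000, 2020)
    log_mx = -4.0 - 0.02 * (years - 2000) + np.random.default_rng(3).normal(0, 0.02, 20)
    log_mx[15] += 0.3                      # an artificial outlier year

    X = sm.add_constant(years - years[0])
    robust_fit = sm.RLM(log_mx, X, M=sm.robust.norms.HuberT()).fit()
    ols_fit = sm.OLS(log_mx, X).fit()
    print("robust slope:", robust_fit.params[1], " OLS slope:", ols_fit.params[1])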

Frailty and lifetime shifting: Investigating a new paradigm in mortality modeling and forecasting

Mario Marino (University of Trieste)

Longer lives are an achievement, and the course of longevity is a compelling matter of interest for policymakers, demographers, and actuaries. Various pioneering studies concerning mortality modeling and forecasting have been introduced in the literature, all of which share an implicit assumption: that age flows chronologically. However, recent research has highlighted the concept of non-chronological age, suggesting new insights for mortality modeling and forecasting. The present work therefore aims to investigate to what extent a potentially non-chronological pace of aging affects the usual mortality and longevity analysis. Since the course of the lifespan is increasingly influenced by unobservable risk factors altering the chronological pace of aging, we exploit the concept of individual frailty as an unobservable random quantity shifting the chronological human lifetime. We first illustrate how to construct the non-chronological lifetime starting from a chronological Gompertz mortality framework. Thereafter, we define the mortality intensity under the non-chronological lifetime, discussing the associated fitting and forecasting methods. In particular, we refer to the Poisson maximum likelihood framework for fitting purposes, while a multivariate random walk with drift is employed to project the mortality intensity parameters. Finally, we empirically test our methodological framework on Italian mortality data for both genders. Our analysis highlights both the goodness of fit and the prediction accuracy of the non-chronological mortality intensity on the observed mortality experience.

Equilibrium Intergenerational Risk Sharing Design for a Target Benefit Pension Plan

Yumin Wang (University of Manitoba)

In this paper, we develop a risk sharing pension design for a target benefit pension plan by considering both the individual and the generational discount functions. An intergenerational Nash equilibrium design that balances interests across different generations is explicitly derived. In contrast to some alternative designs, we find that the equilibrium design is more robust to the choices of generational weights and time preferences, and is thus more sustainable in the long run.

Welfare-Enhancing Annuity Divisor for Notional Defined Contribution Design

Xiaobai Zhu (The Chinese University of Hong Kong)

In this study, we examine the problem of setting the annuity divisor for a notional defined contribution plan. Our analysis uncovers that both the constant annuity divisor and the actuarial annuity divisor, despite their widespread use in reality, inevitably assign lower weights to low-income groups and therefore result in an unintended wealth transfer from low-income to high-income individuals. To address this limitation, we formulate the problem with the optimization framework of the weighted social welfare function, and derive the optimal annuity divisor explicitly by employing optimal control techniques. We also provide discussions regarding the S-Gini welfare function and explore the shape of the optimal solution under specific population structures. Through the calibration of the model with Chinese data, we propose a progressive annuity conversion formula and numerically demonstrate its redistributive impact, showcasing its potential to enhance overall social welfare.

Reverse mortgages strategies for families with early bequests and altruism

Yunxiao Wang (UNSW Sydney, School of Risk & Actuarial Studies; Australian Research Council Centre of Excellence in Population Ageing Research (CEPAR))

Reverse mortgages allow older homeowners to access their housing wealth while ageing in place. However, reverse mortgage markets remain small internationally, with one frequently cited reason being bequest motives. We study the role of reverse mortgages in intergenerational financial planning as a tool for families to bring forward bequests. We develop a new two-generation lifecycle model with altruism to compare the welfare gains of bequests and early bequests (inter vivos gifts) for homeowning parents and adult children seeking to purchase their first home. The two-generation model accounts for house price risk, interest rate risk, wage growth, uncertain long-term care costs, and means-tested pensions. The model results suggest that families across a wide range of wealth levels can enjoy large welfare gains when the parent uses a reverse mortgage for retirement income and to gift the adult child a first home deposit. We also find that by replacing parent bequest with altruism the model no longer underestimates welfare gains and can better capture improvements to the future well-being of the child from early bequest. However, Australian reverse mortgage market data suggests that gifting is not widespread. This presents an opportunity for reverse mortgage providers to increase awareness of the ‘gifting function’ of reverse mortgages, which can potentially address the low demand for these products.

Multi-state Health-contingent Mortality Pooling: An Actuarially Fair and Self-sustainable Product that Allows Heterogeneity

Yuxin Zhou (UNSW Business School, CEPAR)

There is a growing need for higher retirement incomes to cover the higher long-term care (LTC) costs of functionally disabled and ill retirees. However, most of the existing mortality pooling products in the literature do not consider the health status of members. Hence, they do not provide higher retirement incomes to members who have LTC needs due to functional disability and morbidity. To address this issue, we propose a health-contingent mortality pooling product that is actuarially fair at every point of time and self-sustainable, with the income payments dependent on the health states of individuals. The proposed product uses forward iterations to determine the health-contingent income payments. Our framework allows free transitions between health states so that recovery from functional disability is allowed. The framework has the flexibility to allow any number of health states, while we use a five-state model with the health states constructed from two dimensions, which are functional disability and morbidity. Meanwhile, the product allows heterogeneity so members can have different ages, contributions, initial health statuses, and rates of investment returns. In addition, new members are allowed to join to make it an open pool, which helps increase the pool size and generate more stable income payments. We find that the proposed health-contingent pooling product provides significantly higher retirement incomes for members with functional disability and morbidity while having little impact on the retirement incomes of healthy members. The income payments remain stable and are higher in the less healthy states than in the healthy states, even in the worst-case scenario. Moreover, the jump in income payments happens immediately when there is a transition to a less healthy state, allowing members to quickly obtain higher incomes to cover the higher costs incurred by being functionally disabled or ill. Furthermore, we find that the product can consistently pay higher incomes to individuals who stay in a less healthy state.

Smoothing and Measuring Discrete Risks on Finite and Infinite Domains

Vytaras Brazauskas (University of Wisconsin-Milwaukee)

In this talk, we will introduce a new methodology for smoothing quantiles of discrete random variables, which can be defined on a finite (e.g., Bernoulli variable) or infinite domain (e.g., Poisson variable). Smoothed discrete quantile functions facilitate a straightforward transition from the risk measurement literature of continuous loss variables to that of discrete ones. Large-sample properties of sample estimators of smoothed quantiles are established by using the theory of fractional order statistics, which originated with Stigler (1977). A Monte Carlo simulation study illustrates the methodology for several distributions, such as the Poisson, negative binomial, and their zero-inflated versions, which are commonly used in insurance to model claim frequencies. Additionally, we propose a very flexible bootstrap-based approach for use in practice. Finally, using automobile accident data and their modifications, we compute what we have termed the conditional five number summary (C5NS) for the tail risk and construct confidence intervals for each of the five quantiles making up C5NS. The results show that the smoothed quantile approach classifies the tail riskiness of portfolios more accurately. Moreover, it produces lower coefficients of variation in the estimation of tail probabilities than those obtained using the linear interpolation approach.

A discrete-time, semi-parametric time-to-event model for left-truncated and right-censored data

Jackson Lautier (Bentley University)

Insurance companies have large investment holdings in asset-backed securities (ABS). For the purposes of risk management, asset-liability management, and asset management generally, it is desirable to perform asset-level financial modeling of these ABS holdings. Any model calibration will rely upon empirical analysis of time-to-event data that is sampled from ABS trusts. This financial time-to-event observational data will be subject to discrete-time sampling, left-truncation, and right-censoring, with a known, finite duration. This incomplete data combination has not received thorough study, however. In this paper, we propose a semi-parametric, discrete-time lifetime model that is attuned to left-truncated and right-censored data with a known, finite duration. We do not assume any form for the left-truncation distribution, which offers more flexibility than the uniform assumption of length-biased sampling. We then derive general stationary point theorems to maximize the likelihood function in both the case of left-truncation only and that of left-truncation with right-censoring. Our results significantly simplify a multiparametric constrained optimization problem into a single-parameter optimization problem. In the case of modeling lifetime data with a right-truncated geometric distribution, which is theoretically reasonable for payment time-to-event consumer loan data, we derive analytical results for the maximum likelihood estimates in both the case of left-truncation and that of left-truncation with right-censoring. All theoretical results are illustrated with consumer auto loan data sampled from the Drive Auto Receivables Trust 2017-1 ABS bond.
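
A standard nonparametric point of comparison, assuming the usual life-table risk-set adjustment for left truncation and right censoring (not the paper's semi-parametric MLE), is the discrete-time hazard estimate h(t) = d_t / n_t:

    import numpy as np

    def discrete_hazard(entry, exit, event, horizon):
        # Nonparametric discrete-time hazard h(t) = d_t / n_t with
        # left truncation (entry age) and right censoring (event = 0).
        # A loan aged `entry` at sampling is only at risk for t > entry.
        h = np.zeros(horizon + 1)
        for t in range(1, horizon + 1):
            at_risk = np.sum((entry < t) & (exit >= t))
            events = np.sum((exit == t) & (event == 1))
            h[t] = events / at_risk if at_risk > 0 else 0.0
        surv = np.cumprod(1.0 - h[1:])
        return h[1:], surv

    # toy data: entry = age at which the loan enters the sample (left truncation)
    entry = np.array([0, 0, 2, 3, 1, 0])
    exit_ = np.array([4, 6, 5, 7, 3, 2])      # age at termination or censoring
    event = np.array([1, 0, 1, 1, 1, 0])      # 1 = observed termination, 0 = censored
    haz, surv = discrete_hazard(entry, exit_, event, horizon=8)
    print(np.round(haz, 3), np.round(surv, 3))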

Pricing carbon emission permits under the cap-and-trade policy

Xinran Dai (UNSW)

Government policies are generally considered the most powerful and efficient tools for climate mitigation. In this paper, we explore a cap-and-trade framework in which the total amount of carbon emissions is limited by a government-imposed cap. We determine the carbon price through a competitive trading market involving both green and brown sectors. To achieve this, we develop a dynamic stochastic general equilibrium model in which energy input serves as a proxy for integrating carbon emissions into the production process. We calculate the optimal carbon price by examining the social planner’s problem from a top-down perspective and the sectors’ maximization problems from a bottom-up perspective, respectively. Taxes on the brown sector are necessary for achieving the first best. Our model shows that the government allocation scheme will affect the sectors through their stock prices and financial situations.

Managing carbon risk: The impact on optimal portfolio composition

Leonard Gerick (Macquarie University & Ulm University)

Climate change and the concomitant adaptations to it pose major challenges for companies and confront them with new risks. Investors, too, are compelled to factor these risks into their investment decisions. This study delves into the implications of mitigating carbon risk on the optimal composition of investment portfolios based on their ability to maximize expected terminal utility. Our approach involves modeling the carbon risk of each company over time and assuming that the investor is seeking to progressively diminish carbon risk exposure while optimizing expected terminal utility. Through the imposition of an increasingly stringent carbon risk constraint, we derive a time-dependent optimal portfolio composition through a closed-form solution. Different carbon risk metrics are employed to quantify the carbon risk, and resulting optimal portfolios are compared. In addition, the model is expanded to incorporate the impact of carbon risk on the individual company’s stock price development, thus extending beyond the portfolio-wide carbon risk constraint.

Net Zero: Fact or Fiction

Ruediger Kiesel (University Duisburg-Essen)

Companies flood the public with net-zero promises, but often leave the path to net zero unexplained or blurred. In this paper, we show the deficiencies in the analysis of carbon risks and develop a methodology to capture net-zero promises and carbon risks probabilistically. In our model, we assume that carbon emissions follow a Geometric Brownian Motion and, in a first version, we estimate the drift and the volatility from historical emission reduction rates. We extend the model to allow for a non-constant carbon emissions drift which is based on sectoral net-zero emissions scenarios. Using estimated carbon emissions, we compute the probability of the firm reaching its emission reduction targets. Moreover, we provide the probability of respecting the carbon budget implied by the calibrated net-zero emissions pathway. The probabilities can be updated with the arrival of new data, resulting in a time series of probabilities that can be used to assess the progress of firms on the net-zero transition path. We extend our analysis by incorporating forward-looking transition plans that companies disclose, e.g., in their annual and sustainability reports. To classify text paragraphs extracted from these reports, we apply different fine-tuned versions of ClimateBERT, a transformer-based language model that has been pretrained on climate-related text. Using the quantities of sentences captured, we construct a textual measure called the forward-looking index (FLI), which is based on the ratio of specific forward-looking transition-relevant statements to total transition-relevant statements. The measure is furthermore balanced to consider the overall amount of transition-relevant statements in a report, the sentiment of the captured text paragraphs, and the relation to indicators that might undermine the creditworthiness of a net-zero statement. We investigate the relationship between the proposed textual measure and the environmental performance of a company of interest by running a panel regression with firm and sectoral fixed effects. We then adjust the historical probability of sticking to a net-zero budget by including the forward-looking transition information in Bayesian Nets, which provide a way of structured reasoning under uncertainty. In the proposed Bayesian Net, we combine the historical probability of sticking to a net-zero budget with the probability that the environmental performance deduced from the textual measure is in line with the net-zero budget. The probability weights used to combine the backward- and forward-looking probabilities are subjective and can be refined by having an expert evaluate the underlying forward-looking statements or by using further AI methods.
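
Under the geometric Brownian motion assumption, the probability of meeting an emission target at horizon T has the closed form P(E_T <= target) = Phi((ln(target/E_0) - (mu - sigma^2/2) T) / (sigma sqrt(T))); a small sketch with hypothetical parameter values:

    from math import log, sqrt
    from scipy.stats import norm

    def prob_target_met(e0, target, mu, sigma, T):
        # P(E_T <= target) when emissions follow a GBM dE = mu*E dt + sigma*E dW;
        # mu and sigma would be estimated from historical reduction rates.
        d = (log(target / e0) - (mu - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        return norm.cdf(d)

    # hypothetical firm: 100 ktCO2e today, pledges a 50% cut in 6 years,
    # historical reduction rate of 5% p.a. with 15% volatility
    print(prob_target_met(100.0, 50.0, mu=-0.05, sigma=0.15, T=6.0))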

Climate-Related Disclosure, Emission-Conscious Investment, and Internal Carbon Price

Haibo Liu (Purdue University)

In this paper, we develop a theoretical framework that a regulated firm could use to align its climate-related investment goals with its climate-related financial disclosure requirements. For the firm’s asset manager, we solve a series of optimization problems to identify the optimal investment strategies under carbon emission constraints. Using the most recent Scope 1 and Scope 2 emissions data, security price data, and company fundamentals data from Bloomberg and S&P Global, we derive the asset manager’s empirical emission-efficient frontier. We demonstrate how the firm could use the efficient frontier to derive its internal carbon price for disclosure purposes, by striking a balance between emission reduction and investment return.

Approximating the renewal function with a Laguerre series expansion for various applications

Jae Kyung Woo (UNSW Sydney)

In this paper, we develop an exact formula for the renewal function as the sum of the well-known linear asymptotic formula and a correction term that approaches zero as time gets large. Under some mild conditions on the inter-arrival time distribution, the correction term can be expressed as a Laguerre series, where the Laguerre coefficients can be computed recursively. For computational purposes, the Laguerre series needs to be truncated, leading to an approximation of the renewal function. Error bounds are also considered; in particular, when the inter-arrival time has a decreasing failure rate, the error bounds are expressed in terms of the moments of the inter-arrival times. Applications of our results in actuarial science, queueing theory, reliability, and inventory management are also provided.
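
For context, the linear asymptotic formula referred to above is m(t) ≈ t/mu + E[X^2]/(2 mu^2) - 1, where mu is the mean inter-arrival time. A quick sketch checking it against a Monte Carlo estimate for gamma inter-arrival times (the Laguerre correction itself is not reproduced here):

    import numpy as np

    def renewal_mc(sample_interarrival, t, n_paths=20000):
        # Monte Carlo estimate of the renewal function m(t) = E[N(t)].
        counts = np.empty(n_paths)
        for i in range(n_paths):
            s, n = 0.0, 0
            while True:
                s += sample_interarrival()
                if s > t:
                    break
                n += 1
            counts[i] = n
        return counts.mean()

    rng = np.random.default_rng(7)
    shape, scale = 2.0, 1.5                      # gamma inter-arrival times
    mu = shape * scale
    ex2 = shape * (shape + 1) * scale ** 2       # E[X^2] for the gamma distribution
    t = 30.0
    linear_asym = t / mu + ex2 / (2 * mu ** 2) - 1.0
    print("asymptotic:", linear_asym,
          " simulated:", renewal_mc(lambda: rng.gamma(shape, scale), t))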

The structures of the number of claims until first passage times and their applications

Xinyi Zeng (Western University, Department of Statistics and Actuarial Science)

In this talk, we will examine first-passage problems for insurance risk processes. While previous literature has primarily focused on ruin-related quantities, such as the time of ruin, our study expands the scope by exploring the structures of the number of claims preceding the first passage times. These identified structures enable us to further explore the risk management implications for the risk process. Lastly, we present some practical applications of our main results and illustrate them via numerical examples.

Poverty Trapping in proportional loss models

Enrique Thomann (Department of Mathematics, Oregon State University)

The talk will describe recent work on modeling risk processes in low-income households, for which a proportional loss model can capture the catastrophic effects of income loss. The mathematical analysis, combined with numerical calculations, helps determine conditions under which the insured population can avoid a certainty of ruin.

Estimation of scale functions for spectrally negative Lévy processes with applications to risk theory

Yasutaka Shimizu (Waseda University)

The scale function holds significant importance within the fluctuation theory of Lévy processes, particularly in addressing exit problems. However, it is defined through its Laplace transform and thus lacks an explicit representation in general. This paper introduces a novel series representation for the scale function, employing Laguerre polynomials to construct a uniformly convergent approximating sequence. Additionally, we develop statistical inference based on specific discrete observations, presenting estimators of scale functions that are asymptotically normal.
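
For the special case of a Brownian motion with drift, X_t = mu t + sigma B_t with mu > 0, the scale function is known in closed form, W(x) = (1 - exp(-2 mu x / sigma^2)) / mu, and the ruin probability is P_x(ruin) = 1 - mu W(x). A quick sketch of this benchmark (not the paper's Laguerre-based estimator):

    import numpy as np

    def scale_function_bm(x, mu, sigma):
        # Closed-form 0-scale function W(x) of X_t = mu*t + sigma*B_t (mu > 0),
        # i.e. the inverse Laplace transform of 1/psi(beta) with
        # psi(beta) = mu*beta + sigma^2 * beta^2 / 2.
        return (1.0 - np.exp(-2.0 * mu * x / sigma ** 2)) / mu

    def ruin_probability_bm(x, mu, sigma):
        # P_x(ruin) = 1 - psi'(0+) * W(x) = exp(-2*mu*x/sigma^2)
        return 1.0 - mu * scale_function_bm(x, mu, sigma)

    x = np.array([0.5, 1.0, 2.0, 5.0])
    print(ruin_probability_bm(x, mu=0.3, sigma=1.0))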

Stochastic Dominance and Financial Pricing in Peer-to-Peer Insurance

Markus Huggenberger (University of St.Gallen)

This paper studies the economic benefits of peer-to-peer (P2P) or mutual insurance in a general setting with heterogeneous risks. We apply market-consistent arbitrage-free pricing to characterize a P2P insurance contract that can generate utility gains for all risk-averse pool participants without requiring further information on their individual preferences. We derive necessary and sufficient conditions on the policyholders’ risks for the occurrence of utility gains from the proposed risk pooling arrangement. Our results provide new insights for the pricing and the design of P2P insurance products and can help to identify constraints on the composition of insurance portfolios which make P2P insurance beneficial for all participants.

On the optimality of linear residual risk sharing

Jiajie Yang (University of Illinois at Urbana-Champaign)

In this paper, we propose a new risk-sharing paradigm among multiple participants. The proposed framework improves the current literature by considering a wider scope of candidates in the optimization problem. Besides, we provide a closed-form solution for optimal risk sharing under several realistic constraints, which have been numerically considered in previous literature. Moreover, a graphical explanation will be provided to intuitively illustrate the risk mitigation effect in our model. These results motivate us to consider a general problem with two sets of constraints. The result of the general problem provides a series of choices to motivate engagement from participants with different risk characteristics. It can be practically used in the current peer-to-peer insurance market. Finally, an equivalent problem is considered to provide an alternative understanding of our model.

Moral hazard in peer-to-peer insurance with social connection

Tao Li (Faculty of Economics and Business, Katholieke Universiteit Leuven)

Peer-to-peer insurance is an emerging phenomenon in the InsurTech industry worldwide. By gathering communities of participants who are acquainted with each other, peer-to-peer insurance has been providing affordable insurance coverage that is believed to be largely associated with the social network among the participants. This paper analyzes the issue of moral hazard within the framework of peer-to-peer insurance from a theoretical perspective. We investigate how the social network within the community affects the participants’ incentives to spend effort on precautionary loss prevention. Using a quantitative framework to study moral hazard in peer-to-peer insurance, we show that participants’ efforts at Nash equilibria lie within a hyper-rectangle influenced by their social network. In addition, we investigate how participants’ tendency to spend effort on reducing risk varies according to the structure of the social network.

Pareto-Efficient Contracts in Centralized vs. Decentralized Insurance Markets, with an Application to Flood Risk

Michael Zhu (University of Waterloo)

We study the form of Pareto-efficient contracts in both centralized and decentralized insurance market structures, when the preferences of each agent are given by distortion risk measures. In the centralized market, we assume that there is a single monopolistic insurer that uses a coherent risk measure. We obtain representations of Pareto-efficient contracts and Stackelberg equilibria (SE) in the centralized setting, and we show that contracts induced by a SE are Pareto-efficient. However, we note that SE do not provide a welfare gain to the policyholders, echoing the conclusions of recent work in the literature. To remedy this, we also investigate the setting of a decentralized insurance market or peer-to-peer market, where the policyholders may share risk among themselves without interacting with a central entity. We obtain a closed-form characterization of Pareto-efficient contracts in the decentralized setting, and we show that while the total welfare gain is lower in the decentralized market, the negative implications of a Stackelberg equilibrium can be avoided. Our results are illustrated through a numerical analysis of the flood risk insurance market in the United States.

Generalization of the Laplace approximation and its application to insurance ratemaking

Jae Youn Ahn (Department of Statistics, Ewha Womans University)

Laplace approximation provides a Gaussian approximation of a posterior distribution via a second-order Taylor expansion. While the Bernstein–von Mises theorem justifies this Gaussian approximation when an infinite amount of data is available, it may not always be suitable with a finite number of observations. This is particularly true when the posterior distribution is skewed, a common occurrence in the insurance ratemaking process, where the use of a Gaussian distribution may not yield an accurate approximation. In this study, by utilizing a generalized version of the Taylor expansion (Widder, 1928), we introduce a generalized version of the Laplace approximation in which the posterior distribution is approximated by various parametric distributions in the exponential family. We provide numerical analysis to demonstrate the effectiveness of the proposed method. Additionally, using real data, we demonstrate how the proposed method can be applied in insurance ratemaking. References: [1] Widder, D. (1928). A generalization of Taylor's series. Transactions of the American Mathematical Society, 30(1):126–154.
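
For reference, the classical (Gaussian) Laplace approximation that is being generalized replaces the posterior with a normal distribution centred at the mode, with variance equal to the inverse Hessian of the negative log-posterior. A one-parameter sketch for a toy Poisson-gamma ratemaking posterior (prior and data values hypothetical):

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm, gamma

    # Poisson likelihood with a Gamma(2, 1) prior on the claim rate theta (toy prior)
    claims = np.array([0, 1, 0, 2, 1])

    def neg_log_post(theta):
        if theta <= 0:
            return np.inf
        return -(gamma.logpdf(theta, a=2.0, scale=1.0)
                 + np.sum(-theta + claims * np.log(theta)))

    opt = minimize_scalar(neg_log_post, bounds=(1e-6, 10.0), method="bounded")
    mode = opt.x
    eps = 1e-4
    hess = (neg_log_post(mode + eps) - 2 * neg_log_post(mode)
            + neg_log_post(mode - eps)) / eps ** 2       # numerical curvature at the mode
    laplace = norm(loc=mode, scale=1.0 / np.sqrt(hess))  # Gaussian Laplace approximation
    print("posterior mode:", mode, "approx 95% interval:", laplace.ppf([0.025, 0.975]))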

Statistical Learning of Trade Credit Insurance Network Data with Applications to Ratemaking and Reserving

Tsz Chai Fung (Georgia State University)

Trade credit insurance (TCI) is an emerging business line of P&C insurance, which protects businesses against losses in case their customers fail to make payments. Developing predictive models for TCI claim frequencies and severities is a significant challenge due to the complex data structure. In addition to sharing certain data characteristics with conventional P&C insurance claim data, such as longitudinal structure and data truncation, TCI claim data introduces an additional layer of complexity. This arises from the presence of a complex buyer-seller directional network structure, making claim dependencies among policyholders significantly more intricate compared to traditional P&C claim data. In this presentation, we develop an expanded directed-network variant of the Generalized Linear Mixed Model (GLMM), which jointly models the claim frequency, severity, and reporting delay, while considering the impact of data incompleteness resulting from incurred but not reported (IBNR) claims. It also accounts for various levels of observed information, including buyer, seller, policy, and trade link information. Our proposed model is designed to effectively capture the diverse network dependencies discussed earlier. We also enhance the model by integrating an extended version of degree centrality (DC), which quantifies how the relative importance of each buyer or seller within the network influences claim predictions. We show that our method significantly outperforms existing methods in terms of claim prediction accuracy.

Weekly dynamic motor insurance ratemaking with a telematic signals bonus-malus score

Juan Sebastian Yanez (University of Barcelona)

We present a dynamic pay-how-you-drive pricing scheme for motor insurance using risky telematic signals (near-misses). More specifically, our approach allows the insurer to apply penalties to a baseline premium upon the occurrence of events such as harsh braking or acceleration. In addition, we incorporate a Bonus-Malus System (BMS) adapted for telematic data, providing a credibility component based on past observations of telematic signals to the claim frequency predictions. Purposefully, we consider a weekly setting for our ratemaking approach to benefit from the signals' high frequency and to encourage safe driving via dynamic premium corrections. Moreover, we provide a detailed method allowing our model to benefit from historical records as well as detailed telematic data collected weekly through an on-board device. We showcase our results numerically in a case study using data from an insurance company.

Assessing Extreme Risk using Stochastic Simulation of Extremes

Maud Thomas (ISUP-LPSM Sorbonne Université)

In the context of financial market stress testing, where the focus is on financial crises, the best-tailored tools to capture the risk that such events represent for financial institutions are tail-related risk measures. Hence, extraordinary market circumstances are of utmost importance, and they translate into the estimation of risk measures at extreme levels. However, the data size in these regions can be rather small, making the estimation a challenging task. In addition, the computation of these risk measures usually relies only on the risk factor of interest. However, it can be shown that these univariate assessments can be improved if one is able to identify a set of risk factors exhibiting asymptotic dependence with the risk factor of interest, and then include them in the quantification of its risk. The generalization of the simulation approach suggested in Legrand et al. (2023) to a dimension K > 2 solves these issues. It aims at expanding, jointly and conditionally, a sample of extremes for a set of asymptotically dependent risk drivers, through stochastic simulation based on a specific characterization of multivariate extremes with the Multivariate Generalized Pareto Distribution (MGPD). A numerical study shows the true benefit of employing this approach in terms of estimation accuracy.

Analyzing the Effects of Catastrophic Natural Disaster Losses on Mortgage Delinquency Risk using State Space Models

Samuel Eschker (Purdue University)

Insurance companies typically hold a significant portion of mortgage-related investments. When modeling the investment risk associated with bonds and stocks, conventional wisdom suggests that the occurrence of natural catastrophes and the performance of the financial market are independent. However, this assumption of independence is questionable when it comes to mortgage-related investments. Empirical evidence suggests that disaster events can lead to an abnormal volume of mortgage delinquencies due to property destruction and disruptions to a family’s regular income flow. In this presentation, we employ state space models to examine the impact of catastrophic losses from natural disasters on mortgage delinquency rates within the United States. We consider various model forms and compare their predictive abilities. Moreover, we discuss difficulties that arise in hyperparameter selection and diagnostic methods for resolving these challenges.

Infinite-mean Pareto distributions in decision making

Ruodu Wang (University of Waterloo)

We discuss a few intriguing new results related to infinite-mean Pareto distributions (IMP for short). First, under negative dependence, the weighted average of identically distributed IMP losses is larger than one such loss in the sense of first-order stochastic dominance. Second, under independence, a stronger diversification always hurts in a market of identically distributed IMP losses. Third, with an IMP multiplicative background risk, every rational decision maker becomes risk averse for gains and risk seeking for losses, offering a theoretical justification for the convex-concave shape utility in the cumulative prospect theory. These results lead to interesting implications in risk sharing and catastrophe risk management.
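
The diversification statement for independent losses is easy to visualize numerically: for a Pareto tail index below one, the equally weighted average tends to exceed a single loss at moderate-to-large thresholds. A small simulation sketch:

    import numpy as np

    rng = np.random.default_rng(42)
    alpha, n_sims, n_losses = 0.8, 200_000, 10     # tail index < 1: infinite mean

    def pareto(size):
        # Pareto(alpha) losses with survival function (1 + x)^(-alpha) (Lomax form)
        return rng.pareto(alpha, size=size)

    single = pareto(n_sims)
    diversified = pareto((n_sims, n_losses)).mean(axis=1)   # equally weighted portfolio

    for t in [5, 20, 100]:
        print(f"t={t:>4}:  P(single > t) = {np.mean(single > t):.4f}   "
              f"P(average of {n_losses} > t) = {np.mean(diversified > t):.4f}")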

Survival with random effect

Rokas Puišys (Vilnius University, Lithuania)

This presentation focuses on mortality models with a random effect, applied in order to evaluate human mortality more precisely. Such models are called frailty or Cox models. The main assertion shows that each positive random effect transforms the initial hazard rate (or density function) into a new absolutely continuous survival function. In particular, the well-known Weibull and Gompertz hazard rates and corresponding survival functions are analyzed with different random effects. These specific models are presented with detailed calculations of hazard rates and corresponding survival functions. A set of specific models with a random effect is applied to the same data set. The results indicate that the accuracy of the model depends on the data under consideration.
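
A standard special case of such a transform is a gamma-distributed random effect (mean 1, variance sigma^2) multiplying a Gompertz baseline hazard, which gives the closed-form population survival S(x) = (1 + sigma^2 H(x))^(-1/sigma^2), with H the baseline cumulative hazard. A short sketch with hypothetical parameters:

    import numpy as np

    def gompertz_cum_hazard(x, a, b):
        # H(x) = (a/b) * (exp(b*x) - 1) for baseline hazard h(x) = a * exp(b*x)
        return a / b * np.expm1(b * x)

    def frailty_survival(x, a, b, sigma2):
        # Population survival under a Gamma(mean 1, variance sigma2) frailty
        # multiplying the Gompertz baseline hazard.
        H = gompertz_cum_hazard(x, a, b)
        return (1.0 + sigma2 * H) ** (-1.0 / sigma2)

    ages = np.arange(60, 101, 10)
    print("baseline :", np.round(np.exp(-gompertz_cum_hazard(ages - 60, 1e-3, 0.1)), 4))
    print("frailty  :", np.round(frailty_survival(ages - 60, 1e-3, 0.1, sigma2=0.5), 4))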

Random distribution kernels and three types of defaultable contingent payoffs

Jinchun Ye (N/A)

We introduce the random distribution kernel on a product probability space and obtain representation results connecting the product and base probability spaces. Using a random variable with the random distribution kernel to model default/death time, we then consider three types of defaultable contingent payoffs. By allowing the survival conditioning time to be any time before the start time of the payoffs, between the start time and the end time, or after the end time of the payoffs, we provide a complete treatment of three types of defaultable contingent payoffs. As an application of the general results developed in this paper, we also provide more general results for the three types of defaultable contingent payoffs than those in the literature under the stochastic intensity framework.

Continuous-time mortality modeling with delayed effects

Qing Cong (Department of Statistics, The Chinese University of Hong Kong)

National mortality data are generally published on an annual basis, leading to the natural development of discrete-time mortality models. However, continuous-time models are valuable for pricing and hedging mortality risk by calibrating them to insurance and pension products. When fitting discrete data to continuous-time models, the Markovian property is often assumed, disregarding the serial dependence at the annual scale captured by discrete-time models. To address this limitation and allow a continuous-time model to capture range dependence at the annual scale, we propose a novel continuous-time mortality model with a delayed effect. Specifically, the mortality rate is modeled by a stochastic delay differential equation. By utilizing functional Itô calculus, we demonstrate that the model incorporates annual lags observed in discrete-time models and provides a closed-form solution for the survival probability. This tractability enables the derivation of closed-form pricing formulas for various life insurance products and longevity securities. Furthermore, we formulate an optimal mean-variance hedging strategy using risk-free and longevity bonds under the delayed mortality models. Numerical studies examine the comparative statics of different insurance products regarding the delayed effect and evaluate the efficiency of hedging. This model can be extended to address calendar time effects as well.

Benefit volatility-targeting strategies in lifetime pension pools

Jean-François Bégin (Simon Fraser University)

Lifetime pension pools—also known as group self-annuitization plans and tontines—allow retirees to convert a lump sum into lifelong income, with payouts linked to investment performance and the pool’s collective mortality experience. Existing literature has predominantly examined basic investment strategies like constant allocations and investments solely in risk-free assets. Recent studies, however, proposed volatility targeting, aiming to enhance risk-adjusted returns and minimize downside risk. Yet they only considered investment risk in the volatility target, neglecting the impact of mortality risk. This presentation thus aims to address this gap by investigating volatility-targeting strategies for both investment and mortality risks, offering a solution that keeps the risk associated with benefit variation as constant as possible through time. Practical investigations of the strategy demonstrate the effectiveness and robustness of the new dynamic volatility-targeting approach.
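
A generic volatility-targeting rule scales the risky allocation by target volatility over trailing realized volatility. The sketch below covers only this investment leg, with hypothetical parameters; it is not the presentation's combined investment-and-mortality target.

    import numpy as np

    def vol_target_weights(returns, target_vol=0.10, window=12, cap=1.5):
        # Generic volatility-targeting rule: risky-asset weight equals
        # target volatility / trailing realized volatility (annualized), capped.
        r = np.asarray(returns)
        weights = np.full(r.shape, np.nan)
        for t in range(window, len(r)):
            realized = r[t - window:t].std(ddof=1) * np.sqrt(12)   # monthly data
            weights[t] = min(target_vol / realized, cap) if realized > 0 else cap
        return weights

    rng = np.random.default_rng(5)
    monthly = rng.normal(0.005, 0.04, size=120)          # hypothetical fund returns
    w = vol_target_weights(monthly)
    print(np.nanmean(w), np.nanmin(w), np.nanmax(w))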

Variable annuity portfolio valuation with Shapley additive explanations

Gayani Thalagoda (University of New South Wales, Australia)

This study proposes a metamodeling method that utilizes Shapley additive explanations in selecting the representative sample in an explainable manner for valuing variable annuity portfolios. Selecting a representative sample with clear and explainable criteria is necessary when applying metamodeling techniques to principle-based calculations. The proposed method involves (i) training a surrogate neural network with existing valuations for the same portfolio across multiple market conditions and (ii) decomposing the overall risk of a policy into clearly separated contributions from each risk driver using Shapley additive explanations. This decomposition results in an informative and explainable representation of the insurance policy data which can later be used for selecting the representative sample under a new market condition. Furthermore, by fine-tuning the surrogate neural network with the carefully chosen representative sample, the proposed method offers a systematic way to improve the neural network’s estimation with limited data in the new market condition. The proposed method can assist users in explaining the reasoning behind selecting a policy as representative. Furthermore, the proposed method aligns with the U.S. National Association of Insurance Commissioners (NAIC)’s requirements for principle-based reserves for variable annuities, which necessitate representative policies to be selected in a manner that sufficiently reflects the impact of policy characteristics on the calculated risk measure. Our numerical analyses show that the proposed method outperforms conventional methods in the existing literature in prediction accuracy and goodness of fit.

The valuation and assessment of retirement income products: A unified Markov chain Monte Carlo framework

Jonathan Ziveyi (UNSW)

This paper devises a flexible assessment framework for a catalogue of existing retirement income products, including account-based pensions, group self-annuities, and variable annuity contracts. It utilises the Hamiltonian Monte Carlo approach, a proven computational technique for simulating conditional distributions in higher dimensions. Graphical illustrations of the risk-return trade-offs for each product are presented, which can readily be adopted by advisors and all stakeholders as a tool for enhancing the decision-making process for retirees. The key features of the retirement income products, which include variable annuities, account-based pensions, and group self-annuities, are presented in an easily understandable way. Sensitivity analysis on the investment options of the underlying fund provides insights for retirees to maximise income according to their risk preferences. We also devise a lifetime utility analysis framework for the comparison of lifetime utility from purchasing the retirement products.

Automated Machine Learning in Insurance

Panyi Dong (University of Illinois Urbana-Champaign)

Machine Learning (ML), the study of applying statistics and computational algorithms to learn from data and predict outcomes more accurately, has gained popularity in academic research and industrial applications during the past decades. However, the performance of most ML tasks heavily depends on data preprocessing, model selection, and hyperparameter optimization, all of which are domain-knowledge-, experience-, and labor-intensive procedures. Automated Machine Learning (AutoML) aims to automatically complete the full life-cycle of ML tasks and provide state-of-the-art ML models without any human intervention or supervision. This paper introduces an AutoML workflow that provides a robust and effortless ML deployment procedure requiring only a few lines of code, for those with neither domain knowledge nor previous experience. The proposed AutoML is tailored to insurance applications; specifically, the balancing step in the preprocessing process is designed to address the imbalanced nature of common insurance datasets. Full code and materials can be accessed at the GitHub repository (https://github.com/PanyiDong/InsurAutoML).

Insurance applications of support vector machines

Arnold Shapiro (Penn State University)

The purpose of this presentation is to discuss insurance applications of a machine learning algorithm called a support vector machine (SVM). The presentation begins with a brief overview of the history and nature of SVMs and their use in classification and regression situations. Both separable training data, where a hyperplane can entirely divide the data points into their respective classes, and non-separable training data are addressed, as are the choice of kernels (similarity functions), which are used to accommodate the latter. With the foregoing as background, we review a representative selection of insurance-related articles involving SVMs. Next we turn to a discussion of the pros and cons of SVMs and a comparison of SVMs with competing methodologies. The presentation ends with a commentary on the findings of this study.
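
As a minimal illustration of the classification use case (synthetic data, not any study reviewed in the presentation), an RBF-kernel support vector classifier can be fitted to predict claim occurrence:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    # Hypothetical features: policyholder age and vehicle value; target: claim (0/1)
    rng = np.random.default_rng(11)
    n = 2000
    age = rng.uniform(18, 80, n)
    value = rng.lognormal(10, 0.5, n)
    p_claim = 1 / (1 + np.exp(-(-4 + 0.03 * (80 - age) + 0.00005 * (value - 20000))))
    y = rng.binomial(1, p_claim)
    X = np.column_stack([age, value])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    svm.fit(X_tr, y_tr)
    print("hold-out accuracy:", svm.score(X_te, y_te))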

Distributional Forecasting via Interpretable Actuarial Deep Learning

Eric Dong (University of New South Wales)

Neural networks have recently seen extensive developments in the actuarial field (e.g., Richman, 2022). Adapting neural network technology to actuarial science can be difficult, as interpretability and distributional forecasting are highly important attributes of any actuarial distributional regression model. Existing parametric approaches, including the Combined Actuarial Neural Network (Schelldorfer and Wuthrich, 2019) and the Mixture Density Network (Delong et al., 2021; Al-Mudafer et al., 2022), rely on prespecified distributional assumptions. These assumptions may not be appropriate, can impair the accuracy of quantifying outcome variability, and can undermine the reliability of distributional forecasts. Semiparametric models that offer greater flexibility, such as Deep Distribution Regression (Li et al., 2019), often come at the expense of distributional interpretability. We propose Deep Distributional Boosting (DDB), a distributional regression approach leveraging interpretable deep learning to enhance distributional forecasting in actuarial science. Our framework excels in distributional flexibility and interpretability, thereby offering improvements over the abovementioned approaches.

Hush Hush: Keeping Neural Network Claims Modelling Private, Secret, Decentralised, and Distributed Using Federated Learning

Dylan Liew (Institute and Faculty of Actuaries (“IFoA”))

Federated Learning is a method of training Machine Learning models pioneered by Google in 2016, originally aimed at use on smartphones. It enables the direct training of machine learning models on users' devices, such as smartphones, eliminating the need to share or transfer potentially sensitive data to a centralized server. Unlike traditional machine learning methodologies, federated learning adopts a model where the algorithm is brought to the data, rather than the data being transferred to the algorithm. In this presentation, the Institute and Faculty of Actuaries (IFoA) Federated Learning Working Party (part of the IFoA Data Science Research Section) will illustrate how insurance companies can leverage this technique. Specifically, we will show how these companies can collaboratively develop a Neural Network model to predict claims frequency. This collaboration allows for the combination and utilization of their customer data without actually sharing or compromising any sensitive information. We achieve this using the Flower package in Python along with PyTorch. We simulate a car insurance market's claims data with 10 companies in it using the freMTPL2freq dataset. We find that if all insurers are allowed to share their confidential data with each other, they can collectively build a model that achieves an accuracy (Poisson Deviance Explained, or "% PDE") of c.5% on an unseen sample. However, if they are not allowed to share their customer data, none of them can achieve more than c.3% PDE on the same unseen sample. If they use Federated Learning, they can keep all of their customer data private and build a model that achieves nearly the same accuracy as if their confidential data were shared, reaching c.5% on the same unseen sample.
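
The aggregation step at the heart of this approach is federated averaging. The library-free sketch below illustrates it for a toy Poisson frequency model with simulated insurers; the working party's actual implementation uses Flower and PyTorch.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=50):
        # One insurer's local training step: Poisson regression (log link)
        # fitted by gradient descent on its private data only.
        w = weights.copy()
        for _ in range(epochs):
            mu = np.exp(X @ w)
            w -= lr * X.T @ (mu - y) / len(y)
        return w

    def fed_avg(client_weights, client_sizes):
        # Federated averaging: weight each insurer's update by its exposure count.
        sizes = np.asarray(client_sizes, float)
        return np.average(np.stack(client_weights), axis=0, weights=sizes)

    rng = np.random.default_rng(2024)
    true_w = np.array([-2.0, 0.3])
    clients = []
    for _ in range(10):                                   # 10 insurers, private data
        n = rng.integers(500, 2000)
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        y = rng.poisson(np.exp(X @ true_w))
        clients.append((X, y))

    global_w = np.zeros(2)
    for _ in range(20):                                   # communication rounds
        updates = [local_update(global_w, X, y) for X, y in clients]
        global_w = fed_avg(updates, [len(y) for _, y in clients])
    print("federated estimate:", global_w)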

Portfolio selection based on the Herd Behavior Index

Churui Li (KU Leuven)

In this paper, we propose to determine optimal portfolios using the Herd Behavior Index (HIX, Dhaene et al. (2012)). The HIX is a diversification measure that provides information about the extent to which stock prices move together in the same direction. The optimal minimum-HIX, as well as mean-HIX, portfolio can be seen as the most diversified one since it has the lowest degree of co-movement. We make use of a reformulation method for determining the minimum-HIX, and mean-HIX, portfolios, as their closed-form expressions are not readily available in a general setting. This also allows us to study the mean-HIX efficient frontier. We prove the existence of the minimum-HIX, and mean-HIX, portfolios in the general setting, and provide their closed-form expressions in the two-stock case. We also study how to determine the minimum-HIX, as well as mean-HIX, portfolios when short-selling is allowed. This requires us to generalize the definition of HIX in order to ensure that the (theoretical) comonotonic portfolio always exceeds the (actual) portfolio in convex order. [1] Dhaene, J., Linders, D., Schoutens, W. & Vyncke, D. (2012), ‘The herd behavior index: A new measure for the implied degree of co-movement in stock markets’, Insurance: Mathematics and Economics 50(3), 357–370.

Loan Profit Prediction under the Framework of Innovative Fusion Model

Xinyi Wang (Peking University)

To support credit business objectives and improve decision-making efficiency, this paper builds models based on LendingClub data to predict loan profitability levels. The paper proposes an innovative fusion model framework: by predicting the best model for each sample, different fusion strategies are set for each sample, realizing sample-based model fusion. Decision Tree, LGBM, and Neural Network models serve as base models, and a single model, a fusion model, and an innovative fusion model are obtained through the framework. Empirical results show that the prediction ability of the innovative fusion model framework is significantly improved compared to the base models and the Stacking fusion model. The framework improves upon the deficiencies of the traditional model fusion approach and improves prediction performance. The prediction of loan profit based on the innovative fusion model has high accuracy and stability, and the findings remain robust after replacing the multi-classifiers in the framework. The paper also explores the prediction effect on positive and negative samples separately: the framework delivers a more significant improvement and higher prediction accuracy for the profitable customers that the model focuses on. Finally, an interpretability analysis is conducted on the framework, showing that finance-related information has high variable importance.

Optimal quadratic hedging in discrete time under basis risk

Maciej Augustyniak (University of Montreal)

Basis risk arises whenever one hedges a derivative using an instrument different from the underlying asset. Recent literature has shown that this risk can significantly impair hedging effectiveness. This article derives new semi-explicit expressions for discrete-time quadratic hedging strategies in the presence of basis risk when the dynamics of the underlying asset and hedging instruments are driven by correlated processes with stationary and independent increments. The solutions are derived under the physical measure in terms of inverse Laplace (or Fourier) transforms and can be computed accurately in a fraction of a second. Several numerical experiments are conducted to evaluate the performance of quadratic hedging for different levels of basis risk. Overall, we find that this strategy can significantly improve hedging effectiveness for options with long-term maturities.
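
The building block of quadratic hedging in this setting is the one-period variance-minimizing hedge ratio h = Cov(dV, dF) / Var(dF) between the derivative and the proxy instrument. A simulation sketch of that ratio (not the paper's semi-explicit Laplace-transform solution):

    import numpy as np

    def min_variance_hedge_ratio(dV, dF):
        # One-period variance-minimizing (quadratic) hedge ratio
        # h = Cov(dV, dF) / Var(dF), where dV is the derivative's price change
        # and dF the change in the imperfectly correlated hedging instrument.
        dV, dF = np.asarray(dV), np.asarray(dF)
        return np.cov(dV, dF)[0, 1] / np.var(dF, ddof=1)

    rng = np.random.default_rng(8)
    n = 50_000
    dS = rng.normal(0.0, 0.02, n)                 # underlying return
    dF = 0.9 * dS + rng.normal(0.0, 0.01, n)      # proxy instrument: basis risk
    dV = 0.6 * dS                                 # delta-0.6 option exposure
    h = min_variance_hedge_ratio(dV, dF)
    resid = dV - h * dF
    print("hedge ratio:", round(h, 3),
          "variance reduction:", round(1 - resid.var() / dV.var(), 3))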

Optimal dividend strategies for a catastrophe insurer

Hansjoerg Albrecher (University of Lausanne)

We study the problem of optimally paying out dividends from an insurance portfolio, when the criterion is to maximize the expected discounted dividends over the lifetime of the company and the portfolio contains claims due to natural catastrophes, modelled by a shot-noise Cox claim number process. We solve the resulting two-dimensional stochastic control problem, and uniformly approximate the optimal value function through a discretization of the space of the free surplus of the portfolio and the current claim intensity level. It is shown that the nature of the barrier and band strategies known from the classical models with constant Poisson claim intensity carry over in a certain way to this more general situation, leading to action and non-action regions for the dividend payments as a function of the current surplus and intensity level. We also discuss some interpretations in terms of upward potential for shareholders when including a catastrophe sector in the portfolio.

Optimal periodic dividend strategies when dividends can only be paid from profits

Eric Cheung (UNSW Sydney)

In this paper, we consider the compound Poisson insurance risk model and analyze the optimal dividend strategy (that maximizes the expected present value of dividend payments until ruin) when dividends can only be paid periodically as lump sums. If one makes the usual assumption that dividends can be paid from the available surplus, then the optimal strategies are often of band or barrier type, resulting in a ruin probability of one (e.g. Albrecher et al. (2011a)). As opposed to such an assumption, we propose that dividends can only be paid from a certain fraction of the profits (i.e. the positive increment of the process between successive dividend decision times), and such a constraint allows the surplus process to have a positive survival probability. Some theoretical properties of the value function and the optimal strategy are derived in connection with the Bellman equation. These properties suggest that a bang-bang type of control can be a candidate for the optimal strategy, where dividends are paid at the highest possible amount as long as the surplus is high enough. The dividend function under the candidate strategy is subsequently derived, and we also provide numerical examples where the proposed strategy is indeed optimal.

On an optimal dividend problem with a concave bound on the dividend rate

Dante Mata Lopez (Université du Québec à Montréal (UQAM))

We study a version of De Finetti’s optimal dividend problem driven by a diffusion. In our version, the control strategies are assumed to have an absolutely continuous density, which is bounded above by an increasing, concave function. Under mild assumptions on the drift and diffusion coefficients, we provide sufficient conditions to show that an optimal strategy exists and lies within the set of generalized refraction strategies. In addition, we are able to characterize the optimal refraction threshold in our setting. This is joint work with Hélène Guérin, Jean-François Renaud and Alexandre Roch.

Blended Insurance Scheme: A Synergistic Conventional-Index Insurance Mixture

Jinggong Zhang (Nanyang Technological University)

Conventional indemnity-based insurance ("conventional insurance") and index-based insurance ("index insurance") represent two primary insurance types, each harboring distinct advantages depending on specific circumstances. This paper proposes a novel blended insurance whose payout is a mixture of the two, to achieve enhanced risk mitigation and cost efficiency. We present the product design with a machine learning-based framework that employs a multi-output neural network (NN) model to determine both the triggering type and the index-based payout level. The proposed framework is then applied to an empirical case involving soybean production coverage in Iowa. Our results demonstrate this blended insurance could generally outperform both conventional and index insurance in enhancing policyholders’ utility.

Optimal Bundling Reinsurance Contract Design

Jing Zhang (Department of Statistics and Actuarial Science. The University of Hong Kong)

The majority of existing studies on optimal reinsurance design under adverse selection primarily focus on customized mechanisms via the individual compatibility constraints, where each insurer is offered a unique tailor-made reinsurance policy. In reality, reinsurance companies usually can only offer a small number of policy options in the menu due to cost and management considerations. In this paper, we propose a novel mechanism for designing a menu of bundled reinsurance contracts, incorporating a quota-share component when the insurers adopt a general risk measure, with the aim of maximizing the reinsurance company’s profit. We introduce an iterative algorithm to find the optimal bundled policies, where only one new policy is introduced in each step. The analytic expression of optimal policies and the associated profits can be effectively calculated in the iterative process.

Optimal reinsurance design under distortion risk measures and reinsurer’s default risk with partial recovery

Yaodi Yong (Southern University of Science and Technology)

Reinsurers may default when they have to pay large claims to insurers but are unable to fulfill their obligations due to various reasons such as catastrophic events, underwriting losses, inadequate capitalization, or financial mismanagement. This paper studies the problem of optimal reinsurance design from the perspectives of both the insurer and reinsurer when the insurer faces the potential default risk of the reinsurer. If the insurer aims to minimize the convex distortion risk measure of his retained loss, we prove the optimality of a stop-loss treaty when the promised ceded loss function is charged by the expected value premium principle and the reinsurer offers partial recovery in the event of default. For any fixed premium loading set by the reinsurer, we then derive the explicit expressions of optimal deductible levels for three special distortion functions, including the TVaR, Gini, and Proportional hazard (PH) transform distortion functions. Under these three explicit distortion risk measures adopted by the insurer, we seek the optimal safety loading for the reinsurer by maximizing her net profit where the reserve capital is determined by the TVaR measure and the cost is governed by the expectation. This procedure ultimately leads to the Bowley solution between the insurer and the reinsurer. We provide several numerical examples to illustrate the theoretical findings. Sensitivity analyses demonstrate how different settings of default probability, recovery rate, and safety loading affect the optimal deductible values. Simulation studies are also implemented to analyze the effects induced by the default probability and recovery rate on the Bowley solution.

Multi-constrained optimal reinsurance model from the duality perspectives

Wanting He (The University of Hong Kong)

In the presence of multiple constraints such as the risk tolerance constraint and the budget constraint, many extensively studied (Pareto-)optimal reinsurance problems based on general distortion risk measures become technically challenging and have only been solved using ad hoc methods for certain special cases. In this paper, we extend the method developed in Lo (2017a) by proposing a generalized Neyman-Pearson framework to identify the optimal forms of the solutions. We then develop a dual formulation and show that the infinite-dimensional constrained optimization problems can be reduced to finite-dimensional unconstrained ones. With the support of the Nelder-Mead algorithm, we are able to obtain optimal solutions efficiently. We illustrate the versatility of our approach by working out several detailed numerical examples, many of which in the literature were only partially resolved.

Risk Preference and Bubble-crash Experience in the Crypto Insurance Market

Xinjie Ge (Peking University)

We measure risk preferences of policyholders in the crypto insurance market, using data from Nexus Mutual. We examine the impact of cryptocurrency bubble-crash experiences on the risk preferences of policyholders and show that cryptocurrency bubble-crashes cause higher levels of risk aversion. We show that policyholders who are more risk averse have lower token returns before the next cryptocurrency market downturn, followed by higher returns after the next cryptocurrency market downturn, given their tendency to hold safer assets. Furthermore, we identify lucky bubble-crash experiences, i.e. individuals experiencing a lucky cryptocurrency bubble period in which their returns are in the 90th percentile. We show that policyholders experiencing lucky cryptocurrency bubble-crashes become less risk averse, and subsequently have higher token returns before the next cryptocurrency market downturn, followed by lower returns after the next cryptocurrency market downturn, given their tendency to hold risky assets.

Privacy-Preserving Collaborative Information Sharing through Federated Learning

Zhiyu Quan (UIUC)

This research explores the critical balance between leveraging proprietary data for collaborative gains and protecting business-sensitive information and client privacy. Taking advantage of unique and comprehensive proprietary data from multiple insurance and InsurTech firms, we present the first comprehensive examination of the real economic benefits of the federated learning framework as a privacy-enhancing business solution for data collaboration. We cover a complete variety of dimensions across the possibilities of boundary-spanning multiple business entities in the business context. The proposed business solution could foster access to the collective intelligence of industry peers in scenarios where physical data aggregation is infeasible, thereby yielding a more comprehensive assessment of user-related features across entities and providing industry-level insights, enhancing standards, preventing fraud, improving efficiency, and driving innovation.
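As a concrete illustration of the kind of set-up discussed above, the sketch below implements the standard FedAvg aggregation step on synthetic data split across three hypothetical insurers; the study itself uses proprietary data and models that are not reproduced here.

    # Minimal FedAvg sketch: each "insurer" fits a local logistic-regression update on its
    # own data and shares only model coefficients; the coordinator averages them, weighted
    # by local sample size, so raw records never leave the firm. Purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([1.5, -2.0, 0.5])

    def local_data(n):                       # hypothetical per-insurer portfolio
        X = rng.normal(size=(n, 3))
        p = 1.0 / (1.0 + np.exp(-X @ true_w))
        return X, rng.binomial(1, p)

    clients = [local_data(n) for n in (800, 1200, 400)]

    def local_update(w, X, y, lr=0.1, epochs=20):
        for _ in range(epochs):              # plain gradient descent on the logistic loss
            p = 1.0 / (1.0 + np.exp(-X @ w))
            w = w - lr * X.T @ (p - y) / len(y)
        return w

    w_global = np.zeros(3)
    for _ in range(10):                      # communication rounds
        local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        w_global = np.average(local_ws, axis=0, weights=sizes)   # FedAvg aggregation

    print("federated estimate:", np.round(w_global, 2), "vs true:", true_w)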

Pricing by Stake in DeFi Insurance

Maxim Bichuch (University at Buffalo)

In blockchain architectures, consensus mechanisms, such as proof-of-stake, have been used for processing transactions and building new blocks in a blockchain. They require participants to stake digital assets to validate transactions in order to settle disagreements. The concept has been extended in Decentralized Finance (DeFi) applications to use staking mechanisms for decision-making in financial services. In this article, we examine pricing by stake in DeFi insurance applications, where the rate-making, typically done by centralized actuarial approaches in traditional insurance, is replaced by pricing formulas of digital assets staked on underwritten insurance policies. We show in this study that such a consensus mechanism may not always lead to a Nash equilibrium and, when it does, the mechanism may fail to reflect the true risk of underwritten policies. While such a mechanism is intuitive and can be the only viable approach for emerging risks such as smart contract risks without a long history of claims data, it may not be economically viable over time.

Opportunities, Challenges, and Pitfalls in the Application of Catastrophe Models

Patricia Born (Florida State University)

Catastrophe modeling stands at the forefront of risk assessment and management in today’s complex world. This presentation will explore the fundamental principles, methodologies, and applications of catastrophe modeling, focusing on its pivotal role in quantifying and predicting the impacts of natural and man-made disasters. Beginning with an overview of key concepts such as hazard analysis and vulnerability assessment, the presentation will delve into the modeling techniques used to simulate and forecast catastrophic events. Using examples from the Florida Commission on Hurricane Loss Projection Methodology over the past 30 years, we will explore how catastrophe models have evolved and consider the implications of model sensitivity to granularity in model inputs. Catastrophe models empower stakeholders across insurance, reinsurance, and government sectors to make informed decisions, optimize risk portfolios, and enhance resilience strategies. By examining current trends and future directions in catastrophe modeling, this presentation aims to equip attendees with a comprehensive understanding of its significance in mitigating the increasingly complex risks facing our global society.

Anonymized Risk Sharing

Steven Kou (Boston University)

Anonymized risk sharing, which requires no information on preferences, identities, private operations, and realized losses from the individual agents, is especially relevant in the digital economy, with applications such as P2P health care insurance, revenue sharing of digital music and videos and blockchain mining pools. Although there is extensive literature on axiomatic approaches in decision theory, so far there is no axiomatic theory for risk sharing. We present such a theory in the context of anonymized risk sharing. The theory is based on four axioms of fairness and anonymity that characterize the conditional mean risk sharing rule. Applications to the digital economy are presented.
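For reference, the conditional mean risk sharing rule characterized by these axioms allocates to each agent the expected value of their own loss given the pool's aggregate loss (notation ours):

    \[
      S = \sum_{j=1}^{n} X_j, \qquad
      h_i(X_1,\dots,X_n) \;=\; \mathbb{E}\!\left[ X_i \mid S \right], \qquad i = 1,\dots,n,
    \]

so the allocations always sum to \(S\) and each participant's payment depends on the individual losses only through the realized total.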

Using optimal transport to mitigate unfair predictions

Arthur Charpentier (UQAM)

The insurance industry is heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of said models is common, researchers have long pointed out that such practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical data biases, an elimination or at least mitigation is desirable. With the shift from more traditional models to machine-learning based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables in the pricing process can be shown to be ineffective. In this talk, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to ease the biases through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach we employ it on real data and discuss its implications.
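In one dimension the Wasserstein barycenter of the group-conditional score distributions has a simple form: it averages the group quantile functions, evaluated at each individual's within-group rank. The sketch below applies this correction to synthetic scores for two groups; it is only a schematic version of the approach discussed in the talk.

    # One-dimensional Wasserstein-barycenter correction of predicted scores:
    # each individual's score is replaced by the (group-weighted) average of the
    # group quantile functions evaluated at the individual's within-group rank.
    # Synthetic data; illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    group = rng.integers(0, 2, size=5000)                 # sensitive attribute (0/1)
    score = np.where(group == 0,                          # biased raw predictions
                     rng.gamma(2.0, 1.0, 5000),
                     rng.gamma(2.0, 1.3, 5000))

    probs = np.array([np.mean(group == g) for g in (0, 1)])

    def within_group_rank(s, g):
        r = np.empty_like(s)
        for k in (0, 1):
            idx = np.where(g == k)[0]
            r[idx] = (np.argsort(np.argsort(s[idx])) + 0.5) / len(idx)
        return r

    u = within_group_rank(score, group)
    # Barycentric score: mixture of the group quantile functions at the same rank u.
    corrected = sum(p * np.quantile(score[group == k], u) for k, p in enumerate(probs))

    for k in (0, 1):
        print(f"group {k}: raw mean {score[group == k].mean():.3f}, "
              f"corrected mean {corrected[group == k].mean():.3f}")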

A Fair price to pay: exploiting causal graphs for fairness in insurance

Olivier Côté (Université Laval)

In many jurisdictions, insurance companies must not discriminate on some given policyholder characteristics. Omission of prohibited variables from models prevents direct discrimination, but fails to address proxy discrimination, a phenomenon especially prevalent when powerful predictive algorithms are fed with an abundance of acceptable covariates. The lack of formal definition for key fairness concepts, in particular indirect discrimination, hinders the fairness assessment of methodologies. We review causal inference notions and introduce a causal graph tailored for fairness in insurance. Exploiting these, we discuss potential sources of bias, formally define direct and indirect discrimination, and study the properties of fairness methodologies. A novel categorization of fair methodologies into five families (best-estimate, unaware, aware, hyperaware, and corrective) is constructed based on their expected fairness properties. A comprehensive pedagogical example illustrates the practical implications of our findings: the interplay between our fair score families, group fairness criteria, and sources of discrimination.

A Bayesian approach to discrimination-free insurance pricing

Lydia Gabric (Arizona State University)

In recent years, many jurisdictions have implemented anti-discrimination regulations that require protected information to be excluded from insurance pricing calculations. With the rise of complex pricing methods, insurers are facing the dual challenge of complying with evolving anti-discrimination regulations while maintaining accurate pricing outcomes. In response, recent studies have sought to develop discrimination-free pricing methods through probabilistic inference. However, for these methods to produce unbiased and non-discriminatory results, the estimation process will require individual-level discriminatory data which are often prohibited to insurers due to regulatory constraints. In this study, we propose a Bayesian discrimination-free pricing method that integrates a hierarchical finite mixture model with latent variables in lieu of explicit discriminatory information. Supported by a simulation study and an empirical analysis based on real insurance data, our Bayesian approach is capable of inferring the hidden relationship between variables and consequently producing unbiased discrimination-free pricing results.

An adversarial reweighting scheme for discrimination-free insurance pricing

Hong Beng Lim (Chinese University of Hong Kong)

Regulators are interested in pricing methodologies which are guaranteed to be free of proxy discrimination or discrimination-free, especially against protected groups. One particularly valuable contribution to this area is the discrimination-free premium formula of Lindholm et al. (2022, ASTIN), which is prized for its simple analytical form. However, this formula in fact requires the modeler to calculate the discriminatory price against each group as an intermediate step. This is of concern since, for certain protected group labels such as race, regulators have been understandably hesitant to allow insurers to even collect and hold such information, let alone use it as part of pricing. Thus, a framework where a trusted third party first debiases the sample, and hands it to the insurer to fit any models needed, may be more desirable. We demonstrate that, using an appropriate reinterpretation of the discrimination-free premium formula, it is possible to decouple the two steps of debiasing the sample and fitting the discrimination-free model, allowing the modeler in the second step more freedom in choosing and changing the model used. Using the connection of this formula to causal inference, we propose a scheme involving adversarially reweighting the sample for the former step, and demonstrate that the resulting methodology allows the discrimination-free premium to be calculated in a more straightforward and flexible manner.
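For context, the discrimination-free price of Lindholm et al. (2022) averages the discriminatory best-estimate prices over the marginal, rather than the conditional, distribution of the protected attribute \(D\); schematically,

    \[
      h^{*}(x) \;=\; \sum_{d} \mu(x, d)\, \mathbb{P}(D = d),
      \qquad \text{whereas} \qquad
      \mu(x) \;=\; \sum_{d} \mu(x, d)\, \mathbb{P}(D = d \mid X = x),
    \]

with \(\mu(x,d) = \mathbb{E}[Y \mid X = x, D = d]\). On this reading, a reweighting that makes \(D\) approximately independent of \(X\) in the debiased sample lets a model fitted afterwards recover \(h^{*}\) directly, which is the decoupling described above.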

The association between environmental variables and mortality: Evidence from Europe

Jens Robben (KU Leuven)

Using fine-grained, publicly available data, this paper studies the association between environmental variables, i.e., weather factors and air pollution, and weekly mortality rates. Hereto, we develop a mortality modelling framework where a baseline model captures a region-specific, seasonal trend observed within the weekly mortality rates. Using a machine learning algorithm, we then explain deviations from this mortality baseline using anomalies constructed from the environmental data. We illustrate our proposed modelling framework through a case study on more than 550 regions located in 20 different European countries. Through interpretation tools, we unravel insights into which weather and environmental factors are most important when estimating excess or deficit mortality with respect to the baseline and explore how these factors interact. Additionally, we evaluate the relevance of weather and environmental information when aggregating the estimated mortality rates, resulting from our proposed model, on a country-level or annual basis. Our framework is helpful for actuaries and risk managers to assess the impact of environmental scenarios on mortality outcomes.

Marital Status and All-cause Mortality Rates in Younger and Older Adults: A Study based on Taiwan Population Data

Ching-Syang Yue (National Chengchi University)

Due to factors such as the death of a spouse and divorce, the proportion of people who have a spouse decreases with age. Women have lower mortality rates, and thus the proportion of widows is higher than that of widowers, with the difference increasing with age. Social changes in Taiwan have led to a decrease in the marriage rate and a later age at marriage. We want to explore how these changes relate to marriage-specific mortality. In this study, we use Taiwan’s marital data for the whole population (married, unmarried, divorced/widowed) to evaluate whether marital status is still a preferred criterion. We use visualization tools and mortality models, such as the Lee-Carter model, to show the mortality improvements for the various marital statuses. Because of the small population sizes involved, we also apply small area estimation methods to stabilize the parameter estimation. We find that married people generally have lower mortality rates and that this advantage tends to extend among older men over time.

Bayesian MIDAS Regression Tree: Understanding Macro/Financial Effect on Mortality Movement

Jianjie Shi (Monash University)

Understanding mortality patterns and trends is paramount across diverse domains such as actuarial science, public health, and insurance risk assessment. This paper develops a novel non-parametric mortality model using the Bayesian Additive Vector Autoregressive Tree (BAVART) model, providing a flexible framework capable of capturing the complex nonlinear relationships inherent in mortality data. Moreover, our approach incorporates a mixed data sampling (MIDAS) regression structure into the tree model, enabling the integration of a comprehensive array of high-frequency macro/financial indicators. This innovative methodology allows for the analysis of yearly mortality dynamics, shedding light on the intricate interplay between various factors and mortality outcomes. Through empirical analysis and validation using real-world mortality datasets, we showcase the efficacy and robustness of our approach in mortality prediction and risk assessment.

Mortality and Macro-economic Conditions: A Mixed-frequency FAVAR Approach

Dan Zhu (Monash University)

Age-specific mortality data are typically released on an annual basis, yet economic and financial decision-making often necessitates more timely updates. This paper explores the dynamic interrelationships between age-specific mortality rates and macroeconomic conditions through a Mixed-Frequency Factor-augmented Vector Autoregression (MF-FAVAR) model, accommodating various data release frequencies. In particular, we develop an efficient simulation approach for handling high-dimensional, mixed-frequency data under the Bayesian MF-FAVAR model. Focusing on U.S.-based mortality data and corresponding quarterly macroeconomic indicators, we employ the proposed model to estimate and provide nowcasts of quarterly age-specific mortality rates using the most recent macroeconomic information. Additionally, a comprehensive structural analysis is conducted to unravel the complex interdependencies between macroeconomic conditions and mortality trends using different identification strategies. Specifically, a Bayesian proxy structural Vector Autoregression (BP-SVAR) model is employed to investigate the impacts of health expenditure shocks on mortality rates with the aid of an external instrumental variable. Our findings indicate that a positive, unexpected shock in health expenditures contributes to a short-term reduction in the overall mortality trend.

Generalized Tontine

Runhuan Feng (Tsinghua University)

Tontine is a decentralized risk sharing mechanism where retirement benefits are divided among surviving participants. In contrast with annuities, all longevity risks are borne by participants as opposed to a third party such as an insurer. Most existing tontine designs suffer from a major disadvantage that a random amount may be lost after all participants have died, which can be difficult to implement in practice for those without any bequest motives. A new generalized tontine design is introduced in this paper to tackle this issue and to offer a greater variety of decentralized annuity schemes.

A Unified Theory of Multi-Period Decentralized Insurance and Annuities

Peixin Liu (University of Illinois Urbana-Champaign)

There are two streams of research work on decentralized risk sharing plans. On one side, there is a rich literature on classic risk sharing, and more recently, emerging topics on decentralized insurance, including peer-to-peer insurance, mutual aid, DeFi insurance, etc. On the other side, a body of literature is dedicated to decentralized annuities, such as tontines, group self-annuitization, etc. While the two are developed in vastly different contexts, participants offer indemnities for each other in decentralized ways in both types of mechanisms. This paper aims to bridge the gap between these two seemingly different topics and extend a theory of decentralized insurance introduced earlier to a general framework that encompasses not only multi-period decentralized insurance models but also decentralized annuities. This framework sheds light on hidden connections between spatial risk sharing and the dynamics of inter-temporal risk transfer. It can be used to compare and understand distinct features of various designs.

Adverse Selection in Tontines

Nan Zhu (Penn State University)

Several recent studies have cited the theoretical work of Valdez et al. (2006) as evidence that there is less adverse selection in tontine-style products than in conventional life annuities. We argue that the modeling work and results of Valdez et al. (2006) do not unconditionally support such a claim. Conducting our own analyses structured in a similar way but focusing on the relative instead of absolute change in annuity vs. tontine investments, we find that an individual with private information about their own survival prospect can potentially adversely select against tontines at the same, or even higher, levels than against annuities. Our results suggest that the investor’s relative risk aversion is the driving factor of the relative susceptibility of the two products to adverse selection.

The Riccati Tontine

Moshe Arye Milevsky (Schulich School of Business & The IFID Centre)

This paper introduces a Riccati tontine, which we name after two Italians, one a mathematician and the other a financier: Jacobo Riccati (b. 1676, d. 1754) and Lorenzo di Tonti (b. 1602, d. 1684). This accumulation-based (savings) tontine, which is a form of decentralized longevity risk sharing, differs from alternate designs via two novel features. The first is driven by securities regulators and the second by a personal hedging motive. First, the Riccati tontine is designed so that, conditional on dying or lapsation, the representative investor is expected to receive their money back. Second, in the Riccati tontine the underlying funds are allocated to investments (i.e. risky assets) whose return shocks are negatively correlated with stochastic mortality. Why negative? Well, it turns out that to maximize the expected payout to survivors, investments should hedge longevity risk. In other words, the investment funds within a Riccati tontine should not be indexed to the market. In sum, the mathematical contribution is to prove that the recovery schedule generating this dual outcome satisfies a first-order ODE that is quadratic in the unknown function, that is, a Riccati equation.

Classification of Cyber Events Through CART Trees: Excitation Split Criteria Based on Hawkes Process Likelihood

Yousra Cherkaoui (CREST-ENSAE/Milliman R&D)

The digital transformation of the global economy has significantly emphasized the need for cyber risk management within the insurance sector. As IT systems grow more interconnected, they become increasingly vulnerable to accumulation phenomena, challenging the pooling mechanism of insurance. Moreover, cyber attacks are heterogeneous, varying in type and degree of contagion: some attacks spread widely while others remain more contained. Recent studies (see [1] for example) have underscored the relevance of Hawkes processes in modelling the contagion and heterogeneity in the spread of cyber attacks. In [1], groups are formed manually based on sector and geography, which points to the need for an automated approach to group formation. To tackle this, this work introduces the use of the Classification And Regression Tree (CART) algorithm with the likelihood of a Hawkes process as its split criterion. This approach extends [2], where cyber events are classified with the likelihood of a Generalized Pareto Distribution as the splitting criterion. Here, the aim is to group trajectories by their self-excitation features, diverging from [2]’s focus on random variables. We first present the modified CART algorithm applied to Hawkes processes and show on simulated trajectories that it can correctly retrieve the underlying characteristics. The robustness of the derived estimators is analyzed with respect to sample size and imbalance. Furthermore, we investigate the application of this methodology to real-world datasets, such as the Privacy Rights Clearinghouse and Hackmageddon databases.
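A schematic version of such a split criterion is sketched below: the score of a node is the maximized log-likelihood of a univariate exponential-kernel Hawkes process fitted to the trajectories it contains, and a split is retained when the children's scores improve on the parent's. The estimation details, kernel choice and pruning rules of the actual algorithm may differ.

    # Sketch of a Hawkes-likelihood split criterion for a CART-style tree on event
    # trajectories. Exponential kernel alpha * exp(-beta * t); event times assumed sorted.
    import numpy as np
    from scipy.optimize import minimize

    def hawkes_loglik(times, T, mu, alpha, beta):
        # Log-likelihood of a univariate Hawkes process observed on [0, T].
        ll, decay, prev = -mu * T, 0.0, None
        for t in times:
            decay = 0.0 if prev is None else np.exp(-beta * (t - prev)) * (1.0 + decay)
            ll += np.log(mu + alpha * decay)
            prev = t
        ll -= (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - np.asarray(times))))
        return ll

    def node_score(trajectories, T):
        # Maximized joint log-likelihood of the node's trajectories (shared parameters).
        def neg_ll(p):
            mu, alpha, beta = p
            return -sum(hawkes_loglik(tr, T, mu, alpha, beta) for tr in trajectories)
        res = minimize(neg_ll, x0=[0.5, 0.5, 1.0], method="L-BFGS-B",
                       bounds=[(1e-4, None), (1e-4, None), (1e-3, None)])
        return -res.fun

    def split_gain(trajectories, covariate, threshold, T):
        # Likelihood gain of splitting the node on a covariate at a given threshold.
        left = [tr for tr, c in zip(trajectories, covariate) if c <= threshold]
        right = [tr for tr, c in zip(trajectories, covariate) if c > threshold]
        if not left or not right:
            return -np.inf
        return node_score(left, T) + node_score(right, T) - node_score(trajectories, T)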

NLP-Powered Repository and Search Engine for Academic Papers: A Case Study on Cyber Risk Literature with CyLit

Changyue Hu (University of Illinois Urbana Champaign)

As the body of academic literature continues to grow, researchers face increasing difficulties in effectively searching for relevant resources. Existing databases and search engines often fall short of providing a comprehensive and contextually relevant collection of academic literature. To address this issue, we propose a novel framework that leverages Natural Language Processing (NLP) techniques. This framework automates the retrieval, summarization, and clustering of academic literature within a specific research domain. To demonstrate the effectiveness of our approach, we introduce CyLit, an NLP-powered repository specifically designed for the cyber risk literature. CyLit empowers researchers by providing access to context-specific resources and enabling the tracking of trends in the dynamic and rapidly evolving field of cyber risk. Through the automatic processing of large volumes of data, our NLP-powered solution significantly enhances the efficiency and specificity of academic literature searches. Using NLP techniques, we aim to revolutionize the way researchers discover, analyze, and utilize academic resources, ultimately fostering advancements in various domains of knowledge.

Modeling and anticipating massive cloud failure in cyber insurance

Olivier Lopez (Center for Research in Economics and Statistics, IP Paris)

An attack on a cloud provider (or on another critical infrastructure) is a threat to a cyber insurance portfolio, because it could generate simultaneous claims for a large proportion of the portfolio. Such an accumulation of claims disrupts the mutualization principle which is a cornerstone of the economic model of insurance. The European Insurance and Occupational Pensions Authority identified this situation as a scenario requiring stress-testing. In this presentation, we will discuss how to design such a scenario via the connection between mathematical modeling and threat intelligence. We define an appropriate risk measure to quantify how exposed a portfolio is to such a situation. We deduce from this model a way to better target prevention against this risk, and to determine optimal insurance portfolios that achieve an equilibrium between profitability in a standard regime and reasonable exposure to these cloud failure scenarios.

Machine Learning Methods for Designing Optimal Multiple-peril Cyber Insurance

Linfeng Zhang (The Ohio State University)

In the current market practice, many cyber insurance products offer a coverage bundle for losses caused by various types of incidents (referred to as perils), and the coverage for each peril comes with a separate deductible and limit. This work formulates the task of determining the peril-specific deductible and limit amounts as a Pareto optimization problem such that the policy design is mutually beneficial to the insured and the insurer. The problem takes probabilistic predictions on peril occurrences as model inputs. The Cross-Entropy Method, a well-established genetic algorithm, is employed to solve this problem, and several computational challenges arise in this process owing to the non-convexity of the problem and the non-uniqueness of solutions. To obtain the predictions and address the computational challenges, we discuss the use of machine-learning methods and integrate them into a workflow suitable for the production environment. Real-world data on cyber incidents is used to illustrate the feasibility of this workflow. Several implementation improvement methods for practicality are also discussed.

Enhancing Valuation of Variable Annuities in Lévy Models with Stochastic Interest Rates

Antonino Zanette (University of Udine (Italy) - MathRisk Inria Paris (France))

This paper extends the valuation and optimal surrender framework for variable annuities with guaranteed minimum benefits under a Lévy equity market by considering a stochastic interest rate described by the Hull-White model. This framework allows for a realistic and dynamic financial environment. We provide a robust valuation mechanism based on a hybrid numerical method which combines tree methods for the interest rate with finite differences for the underlying asset price. This method is particularly suited for the complex nature of variable annuities, where periodic fees and mortality risks are significant considerations. Our findings reveal the impact of stochastic interest rates on the strategic decision-making process related to the surrender of these financial instruments. Through extensive numerical experiments, and using comparisons against the Longstaff-Schwartz Monte Carlo method, we demonstrate how our enhanced model can guide policyholders and issuers in structuring contracts that balance the interests of both parties, particularly in disincentivizing premature surrender while accommodating realistic fluctuations of the financial markets. Finally, a comparative analysis with varying interest rate parameters highlights the impact of the interest rate on the cost of the optimal surrender strategy and the relevance of proper modelling for the stochastic interest rate.

Analytic Formulae for Valuing Guaranteed Minimum Withdrawal Benefits under Stochastic Interest Rate

I-Chien Liu (Department of Insurance and Finance, National Taichung University of Science and Technology, Taichung, Taiwan.)

This research derives analytic formulae for valuing guaranteed minimum withdrawal benefits (GMWB) under the Hull and White (1990) stochastic interest rate model. The variable annuity product is assumed to invest in an investment portfolio with equity, bond, and cash components. Different from the existing literature dealing with the constant interest rate assumption (e.g. Milevsky and Posner (1998)), this paper shows that the probability density function of the infinite sum of the correlated assets under the Hull-White model follows the reciprocal gamma distribution. Thus, approximating the finite sum of the correlated assets by a reciprocal gamma distribution, we derive an analytic formula for pricing the GMWB guarantee. For comparison, we also derive analytic formulae using a lognormal distribution approximation and examine the accuracy of the analytic valuation formulae against Monte Carlo simulations. The numerical analysis shows that the proposed analytic formulae based on the reciprocal gamma distribution perform well. Ignoring the effect of the stochastic interest rate would underestimate the fair charge, especially for long-duration insurance contracts. Thus, pricing formulae that take the stochastic interest rate into account can better evaluate the embedded guarantees for long-duration insurance contracts.
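A common way to operationalize this approximation is moment matching: if the (finite or infinite) sum has first two moments \(M_1\) and \(M_2\), the fitted reciprocal gamma parameters are (in one standard parameterization, which may differ from the paper's)

    \[
      \alpha = \frac{2M_2 - M_1^{2}}{M_2 - M_1^{2}}, \qquad
      \beta = \frac{M_2 - M_1^{2}}{M_1 M_2},
    \]

which solve \(\mathbb{E}[Y] = 1/(\beta(\alpha-1)) = M_1\) and \(\mathbb{E}[Y^{2}] = 1/(\beta^{2}(\alpha-1)(\alpha-2)) = M_2\) for a reciprocal gamma variable \(Y\); the guarantee value then follows by evaluating the relevant expectation under the fitted distribution.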

Valuing equity-linked insurance products for couples

Kelvin Tang (University of New South Wales)

In the pricing of joint life contracts, it is often convenient to assume that the lifetimes of different persons are independent. However, some results in the literature have demonstrated that the lifetimes of a couple typically exhibit positive dependence, and failure to account for this dependence can lead to significant mispricing of benefits. In this presentation, we explore the valuation of some equity-linked products for a couple under the assumption that the lifetimes can be dependent. While traditional equity-linked products are usually only defined for a single life, we propose some products designed specifically for a couple where the benefit can depend on the death times of both lives. These include, for example, a reversionary annuity that pays the surviving spouse until his/her death, where the payment can depend on an underlying equity index which helps to guarantee the standard of living for the surviving spouse. By modelling the joint mortality function with a bivariate mixed Erlang distribution, our model generalises some existing results in the literature and finds closed-form solutions for the expected value of the products.

Basis Risk in Variable Annuity Separate Accounts

Wenchu Li (St. John’s University)

Variable annuities (VAs) are hybrid long-term financial products that combine both the insurance and investment functions. Many VA policies have complex long-term financial guarantees attached that expose insurers to a large amount of systematic equity risk. Hedging this risk is paramount, but the effort is complicated by basis risk, i.e., the discrepancy between changes to the value of VA guarantees in response to financial market movements and the returns of potential hedging instruments. There is an expanding strand of literature examining the dynamic hedging algorithm for VA portfolios, though the majority of them conduct mapping at the individual fund level with a single index or futures contract as the hedging instrument. In this study, we develop a novel portfolio mapping strategy that simultaneously addresses the three primary challenges insurers face in practice when hedging the VA liabilities: (i) reduce basis risk by producing a high-quality mapping; (ii) limit transaction costs incurred from rebalancing the mapping strategy; and (iii) keep the mapping tractable by using few instruments. We combine the sure independence screening (SIS) procedure with revised LASSO regression to reduce transaction costs while aiming to maximize the mapping efficiency of VA portfolios. To document the real-world effectiveness of our proposed approach, we conduct an empirical study of two U.S. VA providers: a market leader and a minor player. We examine historical monthly returns of the companies’ VA-underlying mutual funds from October 2008 to December 2021, with more than 600 ETFs serving as potential mapping instruments. We show that basis risk can be effectively diversified at the separate account level, and we conclude that basis risk can be much less of a concern to VA providers than previously suggested by presenting a practical method that helps insurers mitigate basis risk effectively and improve the quality of their VA hedging.
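A stripped-down version of the screening-then-LASSO mapping step is sketched below on simulated returns; the study itself works with the two providers' fund returns and more than 600 candidate ETFs, and uses a revised LASSO penalty that is not reproduced here.

    # Two-step fund mapping sketch: (i) sure independence screening keeps the ETFs most
    # correlated with the VA fund's returns, (ii) cross-validated LASSO picks a sparse
    # replicating portfolio among the survivors. Simulated returns; illustrative only.
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(2)
    n_months, n_etfs = 150, 600
    etf_returns = rng.normal(0.0, 0.04, size=(n_months, n_etfs))
    true_weights = np.zeros(n_etfs)
    true_weights[[10, 42, 313]] = [0.5, 0.3, 0.2]        # the fund really tracks three ETFs
    fund_returns = etf_returns @ true_weights + rng.normal(0.0, 0.005, n_months)

    # Step 1: SIS, i.e. rank ETFs by absolute marginal correlation and keep the top d.
    d = 30
    corr = np.array([np.corrcoef(etf_returns[:, j], fund_returns)[0, 1] for j in range(n_etfs)])
    keep = np.argsort(-np.abs(corr))[:d]

    # Step 2: LASSO on the screened ETFs, with the penalty chosen by cross-validation.
    lasso = LassoCV(cv=5, fit_intercept=False).fit(etf_returns[:, keep], fund_returns)
    nonzero = np.abs(lasso.coef_) > 1e-4
    print("mapping instruments:", keep[nonzero])
    print("mapping weights:", np.round(lasso.coef_[nonzero], 3))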

Optimal reinsurance in a monotone mean-variance framework

Emma Kroell (University of Toronto)

Optimizing criteria based on static risk measures often leads to time-inconsistent strategies. To mitigate this issue, we study the optimal behaviour of an insurer with monotone mean-variance preferences (Maccheroni et al. (2009)) who purchases reinsurance over a finite continuous-time horizon. Leveraging a dual representation of the monotone mean-variance criterion as a min-max problem, we solve for the insurer’s optimal time-consistent ceded loss function. Assuming a Cramér-Lundberg loss model and the expected value premium principle, we explicitly obtain the optimal contract and conclude with some numerical illustrations.

Strategic Underreporting and Optimal Deductible Insurance

Dongchen Li (York University)

We propose a theoretical insurance model to explain well-documented loss underreporting and to study how strategic underreporting affects insurance demand. We consider a utility-maximizing insured who purchases a deductible insurance contract and follows a barrier strategy to decide whether she should report a loss. The insurer adopts a bonus-malus system with two rate classes, and the insured will move to or stay in the more expensive class if she reports a loss. First, we fix the insurance contract (deductibles) and obtain the equilibrium reporting strategy in semi-closed form. A key result is that the equilibrium barriers in both rate classes are strictly greater than the corresponding deductibles, provided that the insured economically prefers the less expensive rate class, thereby offering a theoretical explanation for underreporting. Second, we study an optimal deductible insurance problem in which the insured strategically underreports losses to maximize her utility. We find that the equilibrium deductibles are strictly positive, suggesting that full insurance, often assumed in related literature, is not optimal. Moreover, in equilibrium, the insured underreports a positive amount of her loss. Finally, we examine how underreporting affects the insurer’s expected profit.

Robust investment and insurance choice under habit formation

Sixian Zhuang (The Chinese University of Hong Kong)

The literature extensively documents the significant influence of consumption habits on investment decision-making and insurance purchasing behavior. In the presence of investment opportunities, the inherent ambiguity surrounding financial market returns becomes an additional factor that interacts with the formation of consumption habits. This research undertakes a novel exploration of robust decision rules pertaining to insurance purchases and investment choices, with a particular focus on incorporating the effects of consumption habits and model ambiguity. To address this problem, we employ a duality framework within the G-expectation framework to account for both habit formation and model uncertainty. By formulating the problem as a robust optimization task encompassing general utility functions, we shed light on the economic implications arising from habit formation in worst-case scenarios. We further investigate the impact of historical consumption habits when the financial market undergoes a shift towards an ambiguous regime. For instance, we analyze the duration of pre-COVID-19 consumption habits during the unprecedented volatility experienced in the financial market during the COVID-19 period. By simultaneously considering both habit formation and market uncertainty, our study offers valuable insights into the demand for insurance products.

Optimal Insurance to Maximize Exponential Utility when Premium is Computed by a Convex Functional

Bin Zou (University of Connecticut)

We find the optimal indemnity to maximize the expected utility of terminal wealth of a buyer of insurance whose preferences are modeled by an exponential utility. The insurance premium is computed by a convex functional. We obtain a necessary condition for the optimal indemnity; then, because the candidate optimal indemnity is given implicitly, we use that necessary condition to develop a numerical algorithm to compute it. We prove that the numerical algorithm converges to a unique indemnity that, indeed, equals the optimal policy. We also illustrate our results with numerical examples.

Marine claims and the human factor

Haiying Jia (Norwegian School of Economics)

Over 90% of international seaborne trade is transported by ocean-going vessels, which carry valuable cargoes and are thus responsible for securing a safe transportation system by minimizing accidents. Marine insurance had a total premium size of USD 35.8 billion in 2022 (IUMI, 2023) and covers property (vessel hull & machinery) and Protection & Indemnity. This research investigates whether seafarers’ health, reflected in their health insurance claims, and the attrition rate have any impact on the marine insurance claims of the companies that employ these seafarers. Moreover, the geopolitical disruptions brought to the maritime industry by the Ukraine-Russia war have been profound, considering that Ukraine and Russia have traditionally been important countries for seafarer supply, particularly to European shipping companies. The situation at home has had a negative impact on seafarers who work onboard vessels sailing around the globe. Meanwhile, the difficulties in seafarers’ repatriation after duty have put extra pressure on seafarer retention. We investigate how the Ukraine-Russia war and the COVID-19 pandemic have impacted seafarers’ health insurance claims, and consequently the marine insurance claims of their employers. Utilizing over 40,000 health insurance records for seafarers across 36 employers, we employ econometric models to explore the research questions. The results show that the seafarer attrition rate does affect marine insurance claims, highlighting the importance of keeping a sustainable workforce, among other aspects.

Does currency risk mismatch affect the life insurance premium?

Ko-Lun Kung (Feng Chia University)

In this paper, I investigate the impact of the currency risk mismatch of life insurers on insurance pricing in Taiwan. Over 2000-2020, the foreign investment of Taiwanese life insurers grew from 5% to 65% of total assets, mostly in US corporate bonds. A possible explanation is that life insurers reach for yield due to the low-interest-rate environment (Becker and Ivashina, 2015). However, almost all liabilities are in the domestic currency. This creates a large currency risk mismatch and therefore regulatory pressure from RBC. Using hand-collected data on insurance prices, I quantify the effects of currency risk mismatch and estimate the cost pass-through to the policyholder.

The Natural Catastrophe Loss File, 2001-2023 - and some Questions

Morton Lane (University of Illinois at Urbana-Champaign)

The Insurance-Linked Securities [ILS] market is approximately a quarter of a century old. It is a small (but transparent and growing) window into the traditional (and nearly opaque) reinsurance market for natural catastrophes. Uniquely for the securities market, each ILS private placement comes with a formal quantified risk analysis, laying out the expected loss [EL] for that security. Also uniquely, since 2006 the ILS market has been the first to openly publish a climate-change metric based on the warm sea surface temperature in the North Atlantic. In 2023 the market further distinguished itself, for investors, by delivering very high double-digit returns. Moreover, its longer-run returns over 2001-2023 compare very favorably with other fixed income securities. This paper examines the record of the ILS market in addressing the following questions: a) Are actual losses consistent with expected losses [EL] in the risk analysis? b) Is there a bias towards overestimating or underestimating loss? c) Is the sea-surface metric consistent with a rise in actual loss? d) Is that metric rising or falling? e) What is the source of the high sustained returns? f) What is the source of the spectacular returns of 2023? g) Is it possible to discern which perils are better priced than others? These questions are examined in detail, based on available data over 23 years. Some questions are surprisingly awkward to answer, involving primary and secondary market data and implicit models of loss expectations. However, the analysis herein is an empirical one without heavy mathematics or statistics. For those concerned about climate change, the answers in the data might give one pause, or require a better framing of climate change questions. They also suggest that ELs are biased towards the conservative.

Socioeconomic benefits of the Brazilian INSS AtestMed programme

Renata Alcoforado (Department of Computer Science, Universidade Federal Rural de Pernambuco, Recife - Brazil)

This study analyses forecast modeling with time series to project INSS expenses related to the AtestMed programme, a recent socioeconomic initiative by the Brazilian government to streamline benefit compensation for temporary disability within the Social Security System. Aiming to reduce processing and payment delays, AtestMed significantly impacts the duration of disability social benefits and the associated costs. Our analysis extends from historical data from 2021 to 2023, inflation adjusted using the National Consumer Price Index (INPC), to future forecasts up to 2028. In our model we use variables related to the Mean Cost Delay, Cost Delay and Mean Value of Initial Salary Payment, all of which relate to the Social Security Temporary Disability Benefit. The study delineates INSS cost patterns with and without the AtestMed programme, introduced in July 2023, using six time series models (Simple, Holt and Holt-Winters Exponential Smoothing, ARMA, ARIMA, and SARIMA) applied to the Mean Delay Cost variable. Our forecasts for 2024 predict a R$ 5.6 billion saving, increasing to nearly R$ 30 billion over the next five years, thus highlighting AtestMed’s pivotal role in financial efficiency and operational improvement in social security management. This methodological framework not only unravels temporal and seasonal expenditure trends but also enhances strategic decision-making within the social security domain.
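To make the model suite concrete, the fragment below fits two of the six candidates (Holt-Winters exponential smoothing and a SARIMA specification) to a synthetic monthly cost series and produces a five-year projection of the kind described above; the series, frequency and model orders are placeholders, not the INSS data.

    # Illustration of two candidate models (Holt-Winters and SARIMA) on a synthetic
    # monthly "mean delay cost" series; orders and seasonal periods are placeholders only.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(3)
    idx = pd.date_range("2021-01-01", periods=36, freq="MS")
    trend = np.linspace(100, 140, 36)
    season = 10 * np.sin(2 * np.pi * np.arange(36) / 12)
    cost = pd.Series(trend + season + rng.normal(0, 3, 36), index=idx)

    hw = ExponentialSmoothing(cost, trend="add", seasonal="add", seasonal_periods=12).fit()
    sarima = SARIMAX(cost, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)

    horizon = 60                                   # five years ahead, as in the study
    print(hw.forecast(horizon).tail())
    print(sarima.get_forecast(horizon).predicted_mean.tail())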

Tweedie multivariate semi-parametric credibility with the exchangeable correlation

Himchan Jeong (Simon Fraser University)

This article proposes a framework for determining credibility premiums for multiple coverages in a compound risk model with Tweedie distribution. The framework builds upon previous results on the credibility premium and provides an explicit multivariate credibility premium formula that is applicable to the Tweedie family, assuming that the unobserved heterogeneity components for the multiple coverages share a common correlation. The practical applicability of the proposed framework is evaluated through simulation and empirical analysis using the LGPIF dataset, which includes claims and policy characteristics data for various types of coverages observed over time. The findings suggest that the proposed framework can be useful in ratemaking practice by incorporating a non-trivial dependence structure among the multiple types of claims.

Unbiased commercial insurance premium

Minji Park (Department of Statistics, Ewha Womans University)

An insurance premium is defined as the product of two elements: the a priori rate, which is a function of the policyholder’s risk characteristics observable at the time of contract, and the a posteriori rate, which is the residual component not explained by the a priori information. This paper explores the mathematical structure of the a posteriori rate and its corresponding statistical properties. In most cases, such as the Bayes premium and the credibility premium, the a posteriori rate is a function of both the a priori rate and the claim history. However, in certain scenarios like the bonus-malus premium in auto insurance, the a posteriori rate is solely dependent on the claim history. Building upon the concept of the bonus-malus premium, we propose a new type of insurance premium, which we term the commercial insurance premium, where the a posteriori rate is exclusively dependent on the claim history. Despite the simplistic mathematical structure of the commercial insurance premium potentially facilitating better communication with policyholders, such limitations on the mathematical structure may lead to undesirable outcomes. Indeed, several empirical studies in the insurance literature indicate that the bonus-malus premium is subject to a systematic bias, known as the double counting problem. Expanding upon the empirical study of the bonus-malus premium, we present theoretical findings that demonstrate the prevalence of the double counting problem across all types of commercial insurance premiums. We also propose a solution to address this issue. The effectiveness of our proposed method is validated through data examples.

Comparison of offset and weighted regressions in Tweedie case

Raissa Coulibaly (Chaire CARA - UQAM)

Traditional pricing models assume risk exposure is proportional to the number of claims. Consequently, the insurer will give a 50% discount to an insured person who leaves during the term and who is only covered for half a year. Although a penalty for cancellation during the term exists, it is based on compensation for administrative costs linked to an early departure and not connected to claim frequency, which would be higher for this kind of insured. In this paper, however, we show that policyholders who leave mid-term have much more claims experience than those who do not. We also show that mid-term cancellations impact the parameters of the Tweedie distribution. More specifically, we consider the Tweedie distribution, equivalent to a compound Poisson-gamma mixture, as discussed in Delong et al. (2021). We then compare the premium calculation approaches with offset and weights to identify the Tweedie model that adequately considers the claims experience of policyholders who leave during the term. We refer to Denuit et al. (2021) for comparisons.
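The two premium-calculation approaches being compared can be written down directly as Tweedie GLMs: exposure enters either as an offset on the log scale for the aggregate loss, or as a variance weight applied to the loss per unit of exposure. A minimal statsmodels sketch on simulated data (the variance power 1.5 is arbitrary) is:

    # Offset vs. weighted Tweedie GLMs: the offset model targets aggregate losses with
    # log(exposure) added to the linear predictor, the weighted model targets loss per
    # unit of exposure with exposure as var_weights. Simulated data; illustrative only.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 5000
    x = rng.normal(size=n)
    exposure = rng.uniform(0.1, 1.0, n)              # mid-term cancellations => partial years
    mu = exposure * np.exp(0.2 + 0.4 * x)            # expected aggregate loss
    # crude Tweedie-like outcome: Poisson number of claims times gamma severities
    claims = rng.poisson(mu / 2.0)
    loss = np.array([rng.gamma(2.0, 1.0, k).sum() for k in claims])

    X = sm.add_constant(x)
    fam = sm.families.Tweedie(var_power=1.5, link=sm.families.links.Log())

    offset_fit = sm.GLM(loss, X, family=fam, offset=np.log(exposure)).fit()
    weight_fit = sm.GLM(loss / exposure, X, family=fam, var_weights=exposure).fit()
    print(offset_fit.params, weight_fit.params)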

Law-invariant factor risk measures

Peng Liu (University of Essex)

In this talk, we present axiomatization results for factor risk measures. While risk measures only rely on a loss variable, in many cases risk needs to be measured relative to some major factors. We introduce a double-argument mapping as a risk measure to assess the risk of a loss variable relative to a vector of factors. A set of natural axioms are discussed, and in particular distortion, quantile, linear and coherent factor risk measures are introduced and characterized. We also introduce a large set of factor risk measures that include many of the existing risk measures in the literature, such as CoVaR and CoES. We will see that these risk measures can be readily used in risk-sharing problems. Finally, some numerical examples are presented.

Robustifying Elicitable Functionals under Kullback-Leibler Misspecification

Kathleen Miao (University of Toronto)

Elicitable functionals and (strictly) consistent scoring functions are of interest due to their utility in determining (uniquely) optimal forecasts, and thus the ability to effectively backtest predictions. However, in practice, assuming that a distribution is correctly specified is too strong a belief to reliably hold. To remediate this, we incorporate a notion of statistical robustness into the framework of elicitable functionals, meaning that our robust functional accounts for “small” misspecifications of a baseline distribution. Specifically, we propose a robustified version of elicitable functionals by using the Kullback-Leibler divergence to quantify potential misspecifications from a baseline distribution. We show that the robust elicitable functionals admit unique solutions lying at the boundary of the uncertainty region. Since every elicitable functional possesses infinitely many scoring functions, we propose the class of b-homogeneous strictly consistent scoring functions, for which the robust functionals maintain desirable statistical properties. In numerical experiments, we explore the behaviour of the robust functionals, such as the mean and quantiles, and demonstrate the impact of the uncertainty set and the choice of scoring function using Murphy diagrams.
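Schematically (our notation; details in the talk), the robust functional replaces the expected score under the baseline distribution \(F\) by its worst case over a Kullback-Leibler ball:

    \[
      T^{\varepsilon}(F) \;\in\; \arg\min_{z}\; \sup_{G:\, D_{\mathrm{KL}}(G \,\|\, F) \le \varepsilon} \mathbb{E}_{G}\!\left[ S(z, Y) \right],
    \]

where \(S\) is a strictly consistent scoring function for the functional of interest (for instance the squared error for the mean, or the pinball loss for a quantile) and \(\varepsilon\) controls the size of the admissible misspecification.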

Differential Sensitivity in Discontinuous Models

Silvana Pesenti (University of Toronto)

Differential sensitivity measures provide valuable tools for interpreting complex computational models, as used in applications ranging from simulation to algorithmic prediction. Taking the derivative of the model output in direction of a model parameter can reveal input-output relations and the relative importance of model parameters and input variables. Nonetheless, it is unclear how such derivatives should be taken when the model function has discontinuities and/or input variables are discrete. We present a general framework for addressing such problems, considering derivatives of quantile-based output risk measures, with respect to distortions to random input variables (risk factors), which impact the model output through step-functions. We prove that, subject to weak technical conditions, the derivatives are well-defined and we derive the corresponding formulas. We apply our results to the sensitivity analysis of compound risk models and to a numerical study of reinsurance credit risk in a multi-line insurance portfolio.

On Vulnerability Conditional Risk Measures: Comparisons and Applications in Cryptocurrency Market

Yunran Wei (Carleton University)

The paper introduces a novel systemic risk measure, the Vulnerability Conditional Expected Shortfall (VCoES), which is able to capture the "tail risk" of a risky position in scenarios where at least one of the other market participants is experiencing financial distress. It extends the theory of the newly proposed systemic risk measure Vulnerability-CoVaR (VCoVaR). We develop a set of systemic risk contribution measures, \(\Delta\mathrm{VCoVaR}\), \(\Delta^{R}\mathrm{VCoVaR}\), \(\Delta\mathrm{VCoES}\), and \(\Delta^{R}\mathrm{VCoES}\), employing both difference and relative ratio functions to compare VCoVaR with the traditional Value-at-Risk, as well as VCoES with the conventional Expected Shortfall. The study delves into various theoretical properties, including stochastic orders, of VCoVaR, VCoES, the multivariate conditional Value-at-Risk (MCoVaR), and the multivariate conditional expected shortfall (MCoES), alongside the corresponding risk contribution measures. We further introduce backtesting procedures for VCoES and \(\Delta\mathrm{VCoES}\). Through numerical examples, we validate our theoretical insights and further apply our newly proposed risk measures to an empirical analysis of cryptocurrencies, demonstrating their practical relevance and utility in capturing systemic risk.
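The sketch below gives a purely empirical flavour of CoVaR/CoES-type quantities on simulated bivariate data; it conditions on a single other participant being in distress rather than the "at least one" event underlying VCoVaR/VCoES, and the Gaussian losses and confidence levels are illustrative assumptions.

```python
# Minimal sketch: empirical CoVaR and CoES of X given Y in distress.
import numpy as np

rng = np.random.default_rng(7)
n = 500_000
# correlated losses of a position X and another market participant Y
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
X, Y = z[:, 0], z[:, 1]
alpha = beta = 0.95

var_Y = np.quantile(Y, beta)                   # distress threshold for Y
cond = Y >= var_Y                              # conditioning event
covar = np.quantile(X[cond], alpha)            # CoVaR of X given Y in distress
coes = X[cond][X[cond] >= covar].mean()        # CoES of X given Y in distress
print(f"VaR_X={np.quantile(X, alpha):.3f}  CoVaR={covar:.3f}  CoES={coes:.3f}")
```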

Mortality Prediction: a Parameter Transfer Approach

Yechao Meng (University of Prince Edward Island)

Borrowing information from populations with similar mortality patterns is a recognized strategy for the mortality prediction of a target population. This mirrors the concept of Transfer Learning, a popular and promising area in modern data mining and machine learning, which aims at improving the performance of target learners on target domains by transferring the knowledge contained in different but related source domains. This project focuses on applying transfer learning to actuarial applications of mortality predictions. We explore how data from other mortality datasets can be effectively integrated into the parameter transfer learning framework to improve mortality predictions for a target population. Our approach includes incorporating existing mortality prediction models into a regularization framework with closed-form solutions. Additionally, we develop an iterative updating algorithm for classic mortality models and penalty forms.
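A minimal sketch of one parameter-transfer penalty with a closed-form solution, in the spirit of the regularization framework described above (the design matrix, the source estimate, and the quadratic penalty are illustrative assumptions):

```python
# Shrink target-population coefficients toward a source-population fit:
# minimize ||y - X b||^2 + lam * ||b - b_source||^2  (closed form below).
import numpy as np

def transfer_ridge(X, y, b_source, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y + lam * b_source)

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
b_true = np.array([0.5, -0.3, 0.2, 0.0, 0.1])
y = X @ b_true + rng.normal(scale=0.5, size=200)
b_source = b_true + rng.normal(scale=0.05, size=5)     # related source estimate

for lam in (0.0, 10.0, 1000.0):                        # 0 = no transfer, large = full shrinkage
    print(lam, np.round(transfer_ridge(X, y, b_source, lam), 3))
```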

On the fractional volatility models: characteristics and impact analysis on actuarial valuation

Hongjuan Zhou (Arizona State University)

We study a fractional stochastic volatility model by integrating a compound Poisson jump process to effectively capture sudden fluctuations in the volatility of asset prices. The characteristics and asymptotics are analyzed, and a data-driven calibration process is proposed to estimate the parameters of the model. A stochastic model driven by fractional Brownian motion is utilized to characterize mortality dynamics, taking into account the long-range dependence evident in mortality data. To illustrate the joint impact of the generalized fractional stochastic volatility model and the long-memory mortality model, we consider a stylized equity-indexed annuity whose values are linked to the S&P 500. By implementing Monte Carlo simulations, sensitivity analysis is conducted to evaluate how the models’ parameters affect actuarial valuation results.
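As one building block of such long-memory models, the sketch below simulates fractional Brownian motion exactly on a uniform grid via the Cholesky factor of its covariance; the Hurst index and grid are illustrative choices, not the calibrated values of the talk.

```python
# Minimal sketch: exact simulation of fBm using its covariance function
# Cov(B_H(s), B_H(u)) = 0.5 * (s^{2H} + u^{2H} - |s - u|^{2H}).
import numpy as np

def simulate_fbm(n_steps, T, H, rng):
    t = np.linspace(T / n_steps, T, n_steps)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))  # small jitter for stability
    return t, L @ rng.standard_normal(n_steps)

rng = np.random.default_rng(42)
t, path = simulate_fbm(n_steps=250, T=1.0, H=0.7, rng=rng)
print("terminal value of the fBm path:", round(float(path[-1]), 4))
```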

A Cohort and Empirical Based Multivariate Mortality Model

Tsai Tzu-Hao (National Tsing Hua University)

This article proposes a cohort-age-period (CAP) model to characterize multi-population mortality processes using cohort, age, and period variables. Distinct from factor-based Lee-Carter-type decomposition mortality models, this approach is empirically based and includes the age, period, and cohort variables in the equation system. The model not only provides fruitful intuition for explaining multivariate mortality change rates but also performs better in forecasting future patterns. Using US and UK mortality data and performing ten-year out-of-sample tests, our approach shows smaller mean squared errors in both countries compared to the models in the literature.

A new paradigm of mortality modeling via individual vitality dynamics

Kenneth Zhou (Arizona State University)

The significance of mortality modeling extends across multiple research areas, including life insurance valuation, longevity risk management, life-cycle hypothesis, and retirement income planning. Despite the variety of existing approaches, such as mortality laws, intensity processes, and factor-based models, they often lack compatibility or fail to meet specific research needs. To address these shortcomings, our study introduces a novel approach centered on modeling the dynamics of individual vitality and defining mortality as the depletion of vitality level to zero. More specifically, we develop a four-component modeling framework to analyze the initial value, trend, diffusion, and sudden changes in vitality level over an individual’s lifetime. Our vitality-based mortality model stands out for its modularity, broad applicability, and enhanced interpretability, offering an alternative paradigm for mortality modeling. We demonstrate the model’s estimation and analytical capabilities through a numerical example, followed by a detailed discussion on its practical implementation and significance in various research directions.

A finite mixture approach to model jointly claim frequency and severity

Lluis Bermudez (Universitat de Barcelona)

In insurance ratemaking, employing the classical collective model necessitates the concurrent modeling of both claim frequency and severity. Empirical evidence shows the correlation between these two components, underscoring the importance of accounting for their interdependence. Recent studies have suggested joint models using copulas or other bivariate mechanisms suitable to deal with this issue (see, for example, Vernic et al., 2022). This paper seeks to further advance these methodologies. To address unobserved heterogeneity and clustering effects, we propose a finite mixture approach, segmenting the population into smaller subsets while still capturing the relationship between claim severity and frequency. This model not only offers deeper insights into describing the population under consideration but also facilitates enhanced premium calculations. We use a copula-based approach, extending our work in Bermudez and Karlis (2022), but now focusing on the dependencies between claim frequency and severity. A real data application is discussed.

Tree-based Markov random fields with Poisson marginal distributions

Benjamin Côté (Université Laval)

Insurance claims count data exhibit rich dependencies, motivating the need for multivariate distributions that can capture a wide variety of dependence structures. Encoding the dependence structure on trees, via tree-based Markov random fields, leverages their diversity of topologies to offer that flexibility. A new family of tree-based Markov random fields for a vector of discrete counting random variables is presented. By construction, the marginal distributions of these Markov random fields are all Poisson with the same mean and are untied from the strength or structure of their built-in dependence, which is uncommon for Markov random fields and convenient for dependence modelling of risks. The specific properties of this new family confer analytic expressions for the joint probability mass function and the joint probability generating function of the vector of counting random variables, thus avoiding message-passing algorithms and granting computational methods that scale well to vectors of high dimension. We study the distribution of the sum of random variables constituting a Markov random field from the proposed family, analyze a random variable’s individual contribution to that sum through expected allocations, and establish stochastic orderings to gain a broad understanding of their behavior. Via the design of a partial order on trees, we also provide a comparison-based analysis of the underlying tree’s shape, an angle of examination which opens new avenues of comprehension for MRFs.

Dynamic Prediction of Outstanding Insurance Claims Using Joint Models for Longitudinal and Survival Outcomes

Lu Yang (University of Minnesota, School of Statistics)

To ensure the solvency and financial health of the insurance sector, it is vital to accurately predict the outstanding liabilities of insurance companies. We aim to develop a dynamic statistical model that allows insurers to leverage granular transaction data on individual claims into the prediction of outstanding claim payments. However, the dynamic prediction of an insurer’s outstanding liability is challenging due to the complex data structure. The liability cash flow from a claim is generated by multiple stochastic processes: a recurrent event process describing the timing of the cash flow, a payment process generating the sequence of payment amounts, and a settlement process terminating both the recurrence and payment processes. We propose to use a copula-based marked point process to jointly model the three processes. Specifically, a counting process is employed to specify the recurrent event of payment transactions; the time-to-settlement outcome is treated as a terminal event for the counting process; and the longitudinal payment amounts are formulated as the marks associated with the counting process. The dependencies among the three components are induced using the method of pair copula constructions. Compared with existing joint models for longitudinal and time-to-event data such as random effect models, the proposed approach enjoys the benefits of flexibility, computational efficiency, and straightforward prediction.

Efficient GLM Solutions

Vali Asimit (Bayes Business School, City, University of London)

The generalised linear model (GLM) is a flexible predictive model that is widely used in practice by actuaries. An idealised GLM, the so-called proper GLM, is first discussed, and we explain the advantages and disadvantages of this setting. GLMs are implemented in practice via the well-known Iteratively Reweighted Least Squares (IRWLS) algorithm, which has a low computational cost but may not converge. We propose a self-concordant algorithm, applicable to Poisson and Gamma GLMs, which always converges and outperforms IRWLS. The effectiveness of our methods is assessed through an extensive numerical comparison of our estimates with those obtained using three built-in packages: MATLAB (fitglm), R (glm2), and Python (sm.GLM). Finally, we develop new penalised and shrinkage linear regression models to solve classical multiple regression problems more accurately. We then deploy IRWLS formulations based on our new linear regression solutions and show how GLM implementations could be improved.
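For reference, the sketch below implements the standard IRWLS iteration for a Poisson GLM with a log link (the baseline the talk improves upon, not the proposed self-concordant algorithm); the simulated design is an illustrative assumption.

```python
# Minimal sketch: IRWLS for a Poisson GLM with log link.
import numpy as np

def poisson_irwls(X, y, n_iter=25, tol=1e-10):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                    # fitted mean; also the working weight
        z = eta + (y - mu) / mu             # working response
        XtW = X.T * mu                      # X' W with W = diag(mu)
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(2000), rng.normal(size=(2000, 2))])
beta_true = np.array([0.2, 0.5, -0.3])
y = rng.poisson(np.exp(X @ beta_true))
print(np.round(poisson_irwls(X, y), 3))     # close to beta_true
```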

A Minimum Variance Approach to Linear Regression with application to actuarial and financial problems

Zinoviy Landsman (University of Haifa, Actuarial Research Center)

Uncertainty is a prevalent characteristic across various statistical, actuarial, and economic models. Accurate quantification and measurement of uncertainty are vital for making informed decisions and effectively managing risks. Several methods for assessing uncertainty are available, including standard deviation, variance, and confidence intervals. In a quest to find a reasonable measure of uncertainty, Landsman and Shushi (2022) introduced a novel risk functional called the Location of Minimum Variance Squared Distance (LVS). In this paper, our objective is to expand the application of LVS to capture uncertainty in regression models, which are commonly employed to analyze the relationship between a dependent variable and one or more independent variables in actuarial problems. This functional allows the generation of predictors in the Minimum Variance Squared Deviation (MVS) sense. We demonstrate that when the underlying distribution P of the predicted vector Y is symmetric, this functional closely resembles the traditional Minimum Expected Squared Deviation (MES) functional. However, for non-symmetric underlying distributions of Y, MES and MVS exhibit significant differences, with the disparity determined by the matrix of joint third moments of the distribution P and the covariance matrix of the vector Y. We derive an analytical closed-form expression for the MVS functional and explore a mixed combination of the MVS and MES functionals. Furthermore, we provide two numerical illustrations: predicting three important components of fire losses (buildings, contents, and profits) and predicting the returns of six market indices using the returns of their dominant stocks.

Flexible Modeling of Hurdle Conway-Maxwell-Poisson Distributions with Application to Mining Injuries

Emiliano Valdez (University of Connecticut)

Poisson regression is the most popular class of models for count data, but in the presence of excessive zeros and unequal dispersion the ordinary Poisson model may be unsuitable. In this paper, we apply a hurdle structure with the Conway-Maxwell-Poisson (CMP) distribution, together with a binary link function, as a better alternative to the Poisson and negative binomial models. We take a fully Bayesian approach to draw inference from the underlying models, with the Deviance Information Criterion (DIC), the Watanabe-Akaike information criterion (WAIC), and the Logarithm of the Pseudo Marginal Likelihood (LPML) used for model selection. These criteria incorporate the effective number of parameters to adjust for overfitting and can be calculated efficiently via a likelihood function decomposition. For the empirical investigation, we analyze mining injury data from the U.S. Mine Safety and Health Administration (MSHA). The hurdle regressions are additionally adjusted for exposure, measured by total employee working time in months; the proposed methodology is particularly suited to studying the occurrence and severity of injuries. We also test its competitiveness from a business perspective against other models by estimating the expected injury cost under adverse predictions. This is joint work with S. Yin, D.K. Dey, G. Gan, and X. Li.

Reducing the dimensionality and granularity in hierarchical categorical variables

Paul Wilsens (KU Leuven)

Hierarchical categorical variables often exhibit many levels (high granularity) and many classes within each level (high dimensionality). This may cause overfitting and estimation issues when including such covariates in a predictive model. In current literature, a hierarchical covariate is often incorporated via nested random effects. However, this does not facilitate the assumption of classes having the same effect on the response variable. In this paper, we propose a methodology to obtain a reduced representation of a hierarchical categorical variable. We show how entity embedding can be applied in a hierarchical setting. Subsequently, we propose a top-down clustering algorithm which leverages the information encoded in the embeddings to reduce both the within-level dimensionality as well as the overall granularity of the hierarchical categorical variable. In simulation experiments, we show that our methodology can effectively approximate the true underlying structure of a hierarchical covariate in terms of the effect on a response variable, and find that incorporating the reduced hierarchy improves model fit. We apply our methodology on a real dataset and find that the reduced hierarchy is an improvement over the original hierarchical structure and reduced structures proposed in the literature.

Climate Extremes and Financial Crises: Similarities, Differences, and Interplay

Eugenia Fang (UNSW)

Despite differences between the two concepts at first glance, extremes of climate and financial systems (that is, climate extremes and financial crises) actually share strong similarities. We identify some of the most important ones, including regime shifts, feedback loops, tipping points and non-linearity, and systemic impact. We argue that these similarities stem from their shared fundamental nature of stochasticity, deep uncertainty, and complex systems. Furthermore, we establish a connection between the two systems by discussing their interplay, especially the channels that link climate extremes to financial crises and the mechanisms by which financial distress exacerbates losses from climate extremes. In light of these parallels and this interplay, we highlight that some well-established actuarial and financial models and techniques, and the ideas behind them, may be potentially useful in modeling climate extremes and managing catastrophe risks.

Pricing green bonds subject to multi-layered uncertainties

Yuhao Liu (UNSW)

Green bonds, as an innovative financial instrument, have been playing an important role in capital mobilization towards climate change mitigation and adaptation. The market was established in 2007, gained momentum from 2014 onwards, and has since accumulated approximately US$3 trillion globally as of March 2024. The rapid market growth calls for a rigorous pricing task, an area of burgeoning importance. At the core of green bond pricing is the ‘greenium’—a premium that arises from investors’ pro-environmental motives. Green bonds are also subject to deep uncertainties, which encompass three dimensions: the physical and transition risks of climate change, market dynamics, and human perception of non-pecuniary utilities. This study establishes an integrated pricing framework that accounts for the uncertainties across these three dimensions and captures their effect on the ‘greenium’.

Modeling the greenium term structure

Ilaria Stefani (Bicocca University)

Green bonds provide financial backing for developing low-carbon initiatives and facilitating the transition towards a greener economy. The greenium effect refers to the potential premium bondholders are willing to pay to invest in green securities compared to investments with similar characteristics (such as maturity, coupon rate, and issuer credit profile). Despite recent interest in this topic in the literature, the determinants and dynamics of the greenium effect remain inadequately understood, particularly concerning its term structure and geographical dependencies. To address this, we propose a mathematical framework employing a bivariate Vector Autoregressive (VAR) model, encompassing risk-free rates and greenium premiums, alongside an affine Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model for the error terms. Leveraging likelihood estimation techniques and empirical bond price data from both green and conventional markets, our methodology aims to establish a comprehensive term structure for the greenium effect.

El Niño’s Enduring Legacy: An Analysis of Its Impact on Life Expectancy

Wenjun Zhu (Nanyang Technological University)

The El Niño–Southern Oscillation (ENSO) significantly influences global weather patterns, leading to extensive socioeconomic repercussions. Recent studies have highlighted the potential of anthropogenic changes to ENSO to disrupt the global economy. While the association between ENSO and human health, manifesting in epidemic outbreaks, floods, and food insecurity, is well documented, the specific impact of these anthropogenic modifications on human mortality improvements and life expectancy remains underexplored. This paper investigates the effects of El Niño events on human mortality improvements around the globe, emphasizing the identification of patterns, trends, and possible causal mechanisms. Our findings reveal that ENSO consistently affects mortality improvements, leading to a deceleration in life expectancy growth across Asia-Pacific countries. We attribute losses of 0.2 and 0.4 years in life expectancy to the 1997-98 and 1982-83 El Niño events, respectively. Under an emissions scenario aligned with current mitigation pledges, we project a 1.8-year reduction in life expectancy among the Asia-Pacific population due to increased ENSO amplitude and teleconnections from warming. However, these effects are influenced by stochastic variations in the sequence of El Niño and La Niña events. Our results underscore the critical impact of climate variability on mortality improvement and life expectancy growth, independent of warming, and highlight the potential for future losses driven by anthropogenic intensification of such variability.

Optimal Consumption and Investment under Uncertain Lifetime and Lifetime Income

Xiaoyu Song (University of Wisconsin-Madison)

We develop a life-cycle framework to study optimal asset allocation strategies for retirees under lifetime income such as pension income or social security. More precisely, we extend the analysis from Yaari (1965) and Leung (1994) by allowing for optimal investments in risky and riskless assets. Leveraging results from Leung (1994), we show that the consumer fully depletes wealth (if she does not die before that) and solely relies on the survival contingent income from some point forward. Relying on the maximum principle for stochastic control (Yong and Zhu, 2012), we characterize the optimal depletion time, optimal consumption, and optimal investment via a system of Forward-Backward Stochastic Differential Equations (FBSDEs). In numerical illustrations, we compare our results to situations where individuals ignore or can borrow against future income.

Reference Health and Investment Decisions

Chunli Cheng (Lingnan College, Sun Yat-sen University)

Reference points influence economic decisions. This paper considers how health reference points and their adaptation to decreasing health influence medical spending, consumption, and investment in a dynamic model. A static reference point implies an aspiration to offset health losses already at a high initial level. In contrast, the case of reference adaptation entails much lower lifetime healthcare expenditure that concentrates late in life. A projection bias, i.e., the agent’s failure to anticipate the reference adaptation, induces behavior that initially resembles the static reference case. With decaying health, choices approach, but remain distinct from, those derived for adaptive reference health.

Optimal life insurance and annuity decision under money illusion

Wenyuan Li (The University of Hong Kong)

This paper investigates the optimal consumption, investment, and life insurance/annuity decisions of a family in an inflationary economy under money illusion. The family can invest in a financial market that consists of nominal bonds, inflation-linked bonds, and a stock index. The breadwinner can also purchase life insurance or annuities, which are available continuously. The family’s objective is to maximize the expected utility of a mixture of nominal and real consumption, as they partially overlook inflation and tend to think in terms of nominal rather than real monetary values. We formulate this life-cycle problem as a random-horizon utility maximization problem and derive the optimal strategy. We calibrate our model to U.S. data and demonstrate that money illusion decreases (increases) life insurance demand for young adults (middle-aged workers) and reduces annuity demand for retirees. Our findings highlight the role of financial literacy in an inflationary environment.

Portfolio Optimization with Reinforcement Learning in a Model with Regime-Switching

David Saunders (University of Waterloo)

We study a reinforcement learning approach to portfolio selection when the underlying asset price process follows a regime-switching diffusion model. An analytical solution to the HJB equation for the exploratory mean-variance model under regime switching is derived and practical implementation will be discussed. Empirical results investigating robustness with respect to model mis-specification will also be presented.

Pareto-Optimal Risk Sharing under Monotone Concave Schur-Concave Utilities

Mario Ghossoub (University of Waterloo)

We characterize Pareto-efficient allocations in a pure-exchange economy where agents have monotone concave Schur-concave utilities. This covers a large class of utility functionals, including a variety of law-invariant robust utility functionals and monetary utility functionals. We show that Pareto optima exist and are comonotone, and we provide a crisp characterization thereof in the case of Schur-concave positively homogeneous monetary utility functionals. In the special case of law-invariant comonotone monetary utility functionals (Yaari-Dual utilities), we provide a closed-form characterization of Pareto optima. As an application, we examine risk-sharing markets, in particular, and obtain a closed-form characterization of Pareto-optimal allocations of an aggregate market risk, when all agents evaluate risk through coherent risk measures, a widely popular class of risk measures. As an illustration, we characterize Pareto-optimal risk-sharing for some special types of coherent risk measure.

Tipping the Scales: Financial Safeguarding and Individual Risk Appetites

Inhwa Kim (Harris-Stowe State University)

This study investigates the intricacies of financial decision-making, focusing on risk preparedness, insurance subscription choices, and the influence of risk preferences and premiums. Employing a quasi-hyperbolic discounting approach, the research uses stylized toy models to simulate decision-making under varying degrees of uncertainty. Through field surveys with two-stage experiments, the study establishes a direct correlation between simulated subjective noise and the inconsistency of preferences. Moreover, the research explores the interplay of risk preferences and decision-making procedures, offering valuable insights into individual approaches to achieving financial security in the insurance domain. The findings hold significance for both theoretical understanding and practical applications in financial planning and risk management.

Optimal insurance with uncertain risk attitudes and adverse selection

Benxuan Shi (University of Waterloo)

In this work, we investigate the revenue-optimal insurance contracts when the risk attitude of the buyer is uncertain. We propose a menu of insurance contracts aimed at ensuring consistent revelation of buyer risk preferences while maximizing the seller’s profit. Our approach involves designing contracts based on a dual utility framework and implementing a nonlinear pricing structure. We identify situations where deductible contracts or coverage limit contracts are optimal.

Why Markowitz is better for insurance than investments

Edward (Jed) Frees (University of Wisconsin-Madison, Australian National University)

This paper proposes an insurance version of the Markowitz portfolio optimization procedure commonly used in investments. In addition to providing a novel method of constructing insurable risk portfolios, a comparison of the insurance and investment problems provides insights into the data uncertainty issues that plague investment problems. Here, "data uncertainty" refers to the observation that small changes in the risk distribution parameters can lead to large, material changes in the optimal risk portfolio coefficients. We show that the data uncertainty problem that hinders applications in investments is not a material concern in insurance. We do so by presenting the results of a bootstrap analysis that confirms and provides insights into the data uncertainty issue. In addition, an examination of the Hessian associated with the constrained optimization Lagrangian explains why the insurance version does not suffer from this limitation. To provide additional insights into the various sources of data uncertainty, we develop sensitivities based on now-classical results from nonlinear constrained optimization. We also show how these sensitivities can be expressed in terms of asymptotic statistical distributions that help interpret data uncertainty in risk retention problems.
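For orientation, the sketch below shows the classical investments-side calculation that the paper adapts: closed-form minimum-variance weights under a budget constraint, and the sensitivity of those weights to a small perturbation of the inputs (the covariance matrix and perturbation are arbitrary illustrations of the "data uncertainty" point, not the paper's insurance formulation).

```python
# Minimal sketch: fully invested minimum-variance Markowitz weights.
import numpy as np

def min_variance_weights(cov):
    """w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)
    return x / (ones @ x)

cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
print("weights:", np.round(w, 3), " variance:", round(float(w @ cov @ w), 4))

# small input perturbation to illustrate sensitivity of the optimal weights
cov_perturbed = cov + 0.005 * np.diag([1.0, -1.0, 1.0])
print("perturbed weights:", np.round(min_variance_weights(cov_perturbed), 3))
```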

Credibility theory using fuzzy numbers

Jiandong Ren (University of Western Ontario)

This paper studies actuarial credibility theory when the information about the loss model or the prior distribution of its parameters is imprecise or vague. Several approaches, such as robust Bayesian methods and imprecise probability, have been proposed in the literature to study such problems. In this paper, we propose to represent imprecise/partial/vague information about model parameters as fuzzy numbers and derive formulas for ``fuzzy credibility premiums’’. The results extend those existing in the literature. We therefore argue that fuzzy set theory provides a useful approach to setting credibility premiums when the information about the model or prior distribution is vague.

Functional techniques in chain ladder claims reserving

Matúš Maciak (Charles University)

In the talk, we discuss a functional-based approach to stochastic claims reserving which handles the underlying loss development (chain ladder) triangles as functional profiles and predicts the overall claim reserve distribution using a permutation bootstrap. Three competitive functional-based reserving algorithms are described and their theoretical justification is provided. Finite sample properties are investigated through an empirical comparison with standard (parametric) reserving techniques based on several hundred real run-off triangles with known loss outcomes. The proposed functional-based reserving methodology offers effortless implementation, robustness against outliers, and, in general, very wide applicability.
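For context, the sketch below computes the classical deterministic chain-ladder benchmark that such functional methods are compared against, on a toy cumulative run-off triangle (the figures are made up).

```python
# Minimal sketch: chain-ladder development factors and reserves.
import numpy as np

tri = np.array([[100., 150., 170., 180.],
                [110., 168., 190., np.nan],
                [120., 175., np.nan, np.nan],
                [130., np.nan, np.nan, np.nan]])   # cumulative payments

n = tri.shape[1]
factors = []
for j in range(n - 1):
    rows = ~np.isnan(tri[:, j + 1])                # origin years with both columns observed
    factors.append(tri[rows, j + 1].sum() / tri[rows, j].sum())

completed = tri.copy()
for i in range(n):                                 # roll each origin year to ultimate
    for j in range(n - 1):
        if np.isnan(completed[i, j + 1]):
            completed[i, j + 1] = completed[i, j] * factors[j]

latest = np.array([tri[i, ~np.isnan(tri[i])][-1] for i in range(n)])
print("development factors:", np.round(factors, 3))
print("estimated reserves :", np.round(completed[:, -1] - latest, 1))
```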

Navigating Decisions with Imperfect Data Models

Jose Blanchet (Stanford University)

Massive data sets and scalable computing resources have significantly advanced the application of Operations Research (OR) methods in data-driven decision making. However, what happens when the data is corrupted or contains anomalies? Or when the deployment environment differs from the training environment due to distribution shifts? While these situations may seem similar as they both involve making decisions based on incorrect models, they are fundamentally different conceptually. In this talk, we will explore these differences and present a disciplined yet practical approach to addressing these challenges. We will also examine the statistical implications of this approach and discuss associated stochastic optimization algorithms for its implementation. Our discussion will highlight the significant differences between distributionally robust optimization and robust statistics within various applications.

Actuarial Faculty Development Program: Global Launch-Summer 2024

William Marella (ACTEX Learning)

Information on the Milliman-United Nations GAIN program, for which ACTEX Learning is a contributing partner, specifically leading the Actuarial Faculty Development Program (AFDP).

Benefit Corporation: ACTEX Learning is a mission-driven registered Benefit Corporation, advancing access to ever more affordable actuarial education globally. In 2022, ACTEX Learning (and our parent company) legally registered as a Benefit Corporation and formally committed to this mission.

AFDP program description: One of our most recent initiatives is our work with the United Nations Development Programme (UNDP) and Milliman on their GAIN program. The GAIN program has a number of educational and professional aspects; at ACTEX, we are focused on its Actuarial Faculty Development Program (AFDP), a pilot program in partnership with ACTEX and GAIN. It consists of four components: (1) an Actuarial Education Bootcamp for professors; (2) a Community of Practice (CoP) to support professors; (3) curriculum resources for classroom instruction; and (4) mentorship of faculty new to teaching actuarial science by experienced professors. The objective is to strengthen universities by supporting the development of faculty members and providing access to training materials, affordable education tools, and resources. The gap addressed: faculty members often lack actuarial science expertise or experience in the insurance field, which leads to a teaching faculty without exposure to the practical aspects of the curriculum and to students who find it hard to relate what they have learned in their formal education to the workplace. Professors, students, industry, and the profession all stand to benefit from the GAIN and AFDP programs, and spreading the word about AFDP/GAIN will help accelerate the success of the program. There is a modest fee for the AFDP; all parties agreed that if no fee is charged, the program simply will not be valued, a pattern demonstrated repeatedly across many industries and types of participants.

Additional background on GAIN: In September 2022, the United Nations Development Programme (UNDP)'s Insurance and Risk Financing Facility (IRFF) and Milliman launched the UNDP-Milliman Global Actuarial Initiative (GAIN), which aims to build the actuarial profession and expertise in developing countries, helping predict and prepare for risks in these uncertain times. UNDP has identified actuarial capacity and expertise, that is, assessing risk in insurance and finance with mathematical and statistical methods, as a necessary input for the achievement of the Sustainable Development Goals (SDGs). Working in partnership with the UNDP, Milliman will contribute to the growth of the actuarial profession and expertise so that governments and the insurance industry can better manage the increasing risks faced by people and enterprises in developing nations. The scope of this initiative includes building the actuarial capacity and expertise of local actuarial professionals; enhancing data availability through regulators and the insurance industry; supporting countries in adopting more resilient risk management in response to climate change; and supporting advocacy to governments, insurers, and others in achieving programme goals.

Goals: Through our engagement in many countries through GAIN, we have consistently recognized actuarial science education at local universities as an important pillar of the development of local actuarial talent. The local university route is important because it provides a more cost-effective approach to education relative to international universities or international actuarial exams, while also retaining actuarial talent in-country and incentivizing graduates to contribute to the growth of their local profession. In order to build actuarial capacity, it is important to strengthen universities so that sustainable structures are established to ensure quality actuarial graduates are produced. GAIN has identified several goals to improve actuarial science education in the countries where we are engaged: support the development of faculty members and provide access to training materials; provide access to affordable educational tools and resources for faculty members and students; increase awareness of the actuarial profession among prospective university students; and strengthen the linkage between industry and academia through better access to research opportunities.

Join an alliance with the UNDP-Milliman partnership: as a professional firm, we recognize that our skill sets and expertise at Milliman do not reflect all the academic needs of higher education institutions. Therefore, we invite professionals and academics with the relevant expertise to work with us to achieve our goals. Supporting GAIN allows organizations and individuals to gain exposure through their commitment to the actuarial profession, to share knowledge and learn from peers around the world, and to associate themselves with expertise and leadership in advancing the SDGs.

Chat GPT and the Insurance Landscape

Adam Lewis (Oliver Wyman)

Inspired by the recent popularization of Chat-GPT, this presentation will provide an overview of large language models and explore their potential applications in the insurance industry. GPT models have achieved groundbreaking results in a variety of natural language processing tasks. We will explain the theoretical foundation of how these models work and examine the potential impacts of GPT and similar models on the insurance industry and actuarial work.

The Risk Adjusted Scenario Set I

B John Manistre (Retired Practitioner)

Starting with a real-world set of N economic scenarios and an asset class to act as numeraire, the Risk Adjusted Scenario Set (RASS) is a subset of size N(1-a) which is calibrated to the current market and maximizes the present value of the Net Illiquid Liability of an insurer. The Net Illiquid Liability is the present value of risk-adjusted liability cash flows less the present value of risk-adjusted illiquid asset cash flows. This scenario set turns out to have many useful risk management applications, ranging from constructing a market-consistent balance sheet to pricing new illiquid instruments and A/LM. The RASS is extracted from the whole scenario set by solving a linear program, where the parameter a is a CTE level that controls the amount of conservatism. The model can be thought of as a compromise between the actuarial and financial engineering approaches to risk. We will argue that the model could be used not only for risk management but, with some adjustments, as a foundation for market-consistent financial reporting and regulatory reporting. The presentation will conclude with some practical and theoretical examples, one of which is putting a value on a 60-year insurance product with only 30 years of market data to work with.

Incorporating Information on Insured Amounts to Improve Survival Rate Estimates in Liabilities

Andrey Ugarte Montero (Research Center for Longevity Risk, University of Amsterdam)

Insurance companies need statistical tools to adequately assess the value of the risk associated with their liabilities. In the life insurance industry in particular, survival modeling is key to accurately assessing the value of insurance policies and annuity business. Traditional techniques, however, emphasize individual survival over time, regardless of the impact that an individual may have on liabilities through their sum insured. As a result, practitioners have resorted to different methods to account for the fact that discrepancies between actual and expected survival of individuals with a higher sum insured may be more critical to a company’s liabilities than those of individuals with lower benefits. In this context, our research focuses on formalizing and analyzing in depth some of the approaches used in the insurance industry to account for the role of the sum insured when developing survival models. As part of the study, we use a new private dataset with survival information on individuals buying annuity products in the Netherlands to investigate how weighting observations by the sum assured or pension benefit impacts mortality estimates and financial predictions. In our analysis, we focus both on well-established techniques based on maximum likelihood estimation with classical mortality laws and on generalized linear (additive) models, which allow us to account for the risk factors available in our dataset when modeling mortality.

Estimating Life Expectancy for Small Populations

Hsin-Chung Wang (Department of Statistical Information and Actuarial Science, Aletheia University)

Life expectancy captures the trend of changes in population mortality rates and is a crucial indicator for evaluating a country’s overall living conditions. It can reveal whether regional inequality exists within a country, as well as help assess the risk of life and pension products of insurance companies. Life expectancy is typically calculated using traditional life table methods, but the estimates can be influenced by population size and missing values. Generally, we can apply graduation methods or include data from a reference population to reduce bias and fluctuations when the target population is small. However, past studies have shown that these smoothing results rely heavily on whether the reference population has demographic attributes similar to those of the small population. In this study, we treat the Standardized Mortality Ratio (SMR), a relative measure of health inequality, as a clustering indicator to include data with similar attributes. We employ a similarity criterion, the heterogeneity index, within hierarchical and K-means clustering methods to merge small areas into a mortality-homogeneous population. Next, we use partial SMR (Lee, 2003) and Whittaker ratio graduation methods, together with life table construction methods, to calculate life expectancy. In addition, we compare the results of the proposed method to those obtained by estimating life expectancy directly from the SMR. We use township data in Taiwan (2007-2018) to evaluate these estimation methods. We find that selecting a homogeneous reference population can reduce estimation errors, particularly in areas with very few inhabitants. Incorporating the SMR into a linear model through reference populations is also a feasible alternative for estimating the life expectancy of small areas.
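As an illustration of the graduation step mentioned above, the sketch below implements basic Whittaker graduation (minimizing a weighted fidelity term plus a roughness penalty on higher-order differences); the simulated mortality rates, exposures, and smoothing parameter are illustrative assumptions, not the ratio variant or data of the study.

```python
# Minimal sketch: Whittaker graduation, solving (W + h K'K) v = W y.
import numpy as np

def whittaker(y, w, h=10.0, z=2):
    n = len(y)
    K = np.diff(np.eye(n), n=z, axis=0)        # z-th order difference matrix
    W = np.diag(w)
    return np.linalg.solve(W + h * K.T @ K, W @ y)

rng = np.random.default_rng(5)
ages = np.arange(60, 90)
true_rates = 0.0005 * np.exp(0.09 * (ages - 60))            # Gompertz-like rates
exposures = rng.integers(200, 2000, size=len(ages)).astype(float)
observed = rng.poisson(true_rates * exposures) / exposures  # crude rates
smoothed = whittaker(observed, w=exposures, h=50.0, z=3)
print(np.round(smoothed[:5], 5))
```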

Optimal life annuitisation and investment strategy in a stochastic mortality and financial framework

Fabio Viviano (Department of Economics, Statistics and Finance, University of Calabria)

This work tackles the problem of retired individuals’ choices regarding the allocation of their wealth between classical financial investments and life annuities during the post-retirement phase to maximize their future utility. We consider a complex mathematical framework in which equity, interest rate, and mortality risks are taken into account, as well as possible dependencies among the considered risk factors. Because of the complexity of the adopted mathematical framework, to solve the dynamic portfolio optimization problem we rely on simulation and regression techniques; in particular, we exploit the well-known Least-Squares Monte Carlo methodology, which is able to handle multivariate stochastic state variables by combining Monte Carlo simulation of the state variables with regression estimates of the conditional expectations in the dynamic programming equations. We consider a constant relative risk aversion (CRRA) utility function and provide extensive numerical experiments to analyse the effect of possible dependencies between the financial market and mortality, the bequest motive, and the level of the individual’s risk aversion on the optimal investment and annuitization strategies.
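To give a concrete feel for the Least-Squares Monte Carlo machinery referred to above, the sketch below applies the regression-based dynamic-programming recursion to a simple Bermudan put under geometric Brownian motion rather than to the annuitisation problem itself; the model, basis functions, and parameters are illustrative assumptions.

```python
# Minimal sketch: Longstaff-Schwartz-style LSMC for a Bermudan put.
import numpy as np

rng = np.random.default_rng(11)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
n_steps, n_paths = 50, 50_000
dt = T / n_steps
disc = np.exp(-r * dt)

z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.column_stack([np.full(n_paths, S0), S])

payoff = lambda s: np.maximum(K - s, 0.0)
V = payoff(S[:, -1])                             # value at maturity
for t in range(n_steps - 1, 0, -1):
    V *= disc                                    # discount to the current exercise date
    itm = payoff(S[:, t]) > 0                    # regress only on in-the-money paths
    X = np.vander(S[itm, t] / K, N=4)            # cubic polynomial basis
    cont = X @ np.linalg.lstsq(X, V[itm], rcond=None)[0]   # estimated continuation value
    exercise = payoff(S[itm, t]) > cont
    V[itm] = np.where(exercise, payoff(S[itm, t]), V[itm])

print("LSMC Bermudan put value:", round(float(disc * V.mean()), 3))
```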

Optimal Consumption and Retirement Investments under Loss Aversion and Mental Accounting

Ziqi Zhou (Sun Yat-sen University)

Under the global trend of replacing pay-as-you-go public pension schemes with fully funded pension accounts, financial and volatility risks of compulsory public retirement savings are transferred from states to individuals (pension participants). In our study, we set up a lifetime consumption and retirement investment model for a publicly funded pension participant with loss-aversion preferences and mental accounting, with an updating reference point that depends on the participant’s consumption history. We derive the auxiliary HJB equations for the optimal consumption-investment problem during and before retirement, using concave envelope techniques to tackle the non-differentiability of the S-shaped utility. Closed-form value functions are found for the problem under mild assumptions, and the optimal consumption and investment processes are therefore investigated.

Stuck in Poverty? The Short- and Longer-Term Impacts of an Earthquake on Pension Choices and How Government Aid Could Help

Jiyuan Wang (Central University of Finance and Economics)

This proposal aims to investigate the effects of earthquakes on individual well-being, with a specific focus on potential changes in individual pension choices and poverty status following the 2013 Lushan earthquake. Leveraging administrative data from two counties/districts, we exploit variations in distance to the epicenter and earthquake magnitude across different districts and years. Preliminary analysis suggests that individuals demonstrate a reduced likelihood of opting for a high-tier pension plan and are less likely to fall below the poverty line following the earthquake. This effect is more pronounced in districts farther from the epicenter compared to those closer to it. Then, we employ an event-study difference-in-differences approach to examine the potential longer-term evolution of pension choices and poverty status in the aftermath of the earthquake. In the subsequent phase, we aim to enhance our understanding of the earthquake’s impact, particularly by incorporating earthquake intensity measurements into the empirical analysis. Additionally, we will refine our analysis of heterogeneity by considering factors such as government aid, and extend our investigation into potential mechanisms driving the observed effects.

Recent Implementations of Gradient Boosting for Decision Trees: A Comparative Analysis for Insurance Applications

Marie-Pier Côté (Université Laval)

Generalized linear models (GLMs) are the most popular option for general insurance ratemaking. However, boosted decision trees have been shown to improve predictive performance over GLMs. Since the first gradient boosting algorithm based on decision trees was proposed by Friedman in 2001, many improvements and refinements have been proposed. We present the differences between XGBoost, LightGBM, CatBoost, NGBoost and Probabilistic Gradient Boosting Machines, and we critically discuss their advantages and disadvantages in the context of actuarial data. While the first three yield point estimates, the last two are designed for probabilistic regression. The results obtained on many actuarial datasets for claim frequency and severity show that none of the algorithms clearly stands out for point prediction accuracy, but there are winners in terms of computational efficiency. We also discuss model adequacy for probabilistic regression.
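A minimal sketch of this kind of comparison on a synthetic claim-frequency task, assuming the xgboost and lightgbm packages are installed; the data-generating process and hyperparameters are illustrative and not those of the study.

```python
# Compare two boosting implementations on simulated Poisson claim counts.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_poisson_deviance
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
n = 50_000
X = rng.normal(size=(n, 5))
lam = 0.1 * np.exp(0.1 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * X[:, 0] * X[:, 1])
y = rng.poisson(lam)                                           # claim counts
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "XGBoost":  XGBRegressor(objective="count:poisson", n_estimators=300,
                             learning_rate=0.05, max_depth=3),
    "LightGBM": LGBMRegressor(objective="poisson", n_estimators=300,
                              learning_rate=0.05, max_depth=3),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = np.clip(model.predict(X_te), 1e-6, None)            # deviance needs positive predictions
    print(name, "test Poisson deviance:", round(mean_poisson_deviance(y_te, pred), 4))
```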

Predictive Subgroup Logistic Regression for Classification with Unobserved Heterogeneity

Zhiwei Tong (University of Iowa)

Unobserved heterogeneity refers to the variation among subjects that is not accounted for by the observed features used in a model. Its presence poses a great challenge to statistical modeling. This study introduces the Predictive Subgroup Logistic Regression (PSLR) model, which extends the conventional logistic regression and is specifically designed to address unobserved heterogeneity in classification problems. The PSLR model incorporates subject-specific intercepts in the log odds and is fitted through a penalized likelihood approach with a concave pairwise fusion penalty. A novel two-step procedure is developed to perform out-of-sample predictions for new subjects whose subgroup membership labels are unknown. This procedure empowers the PSLR model for both inferential and predictive tasks. Through extensive simulation studies and an empirical application, the PSLR model not only demonstrates great performance across various aggregate accuracy metrics but also achieves a balanced effectiveness in sensitivity and specificity.

Semi-continuous time series for sparse losses with volatility clustering

Michal Pesta (Charles University, Prague, Czech Republic)

Our intention is to model the frequency and severity of sparse claims jointly. Time series of occasional losses contain a non-negligible portion of possibly dependent zeros, whereas the remaining observations are positive. They are regarded as GARCH processes consisting of non-negative values. Our first aim is the estimation of the omnibus model parameters, taking into account the semi-continuous distribution. The hurdle distribution, together with the dependent zeros, causes classical GARCH estimation techniques to fail, so two different quasi-likelihood approaches are employed. The second goal consists of predictions utilizing unsupervised learning. The empirical properties are illustrated in a simulation study, which demonstrates the computational efficiency of the employed methods. A real data analysis of sparse non-life insurance claims is performed.

Pricing equity indexed annuities with American lookback features

Hangsuck Lee (Sungkyunkwan University)

Equity-indexed annuities (EIAs) offer their investors the higher of the performance of the underlying index and a guaranteed minimum return. EIAs take the form of lookback options, which are a type of path-dependent option. This talk studies two product types of EIAs with American lookback features, floating-strike and fixed-strike, through a partial differential equation approach. First, in the floating-strike case, the payoff is the greater of the return adjusted according to the participation rate and the guaranteed minimum return. Second, in the fixed-strike case, the payoff is the higher of the maximum return adjusted by the participation rate and the fixed strike price. Using Mellin transforms, we derive analytic pricing formulas for these two contracts. Furthermore, we obtain integral forms of the free boundaries related to the early exercise policy of these products. In addition, we perform a numerical implementation, which includes identifying the break-even point and observing changes in option prices and free boundaries with respect to several parameters.

Option pricing with exponential multivariate general compound Hawkes processes

Anatoliy Swishchuk (University of Calgary)

We introduce a new model for a stock price based on an exponential multivariate general compound Hawkes process. Using this model, we present option pricing formulas, including analogues of the Black-Scholes and Margrabe formulas, as well as spread and basket option pricing formulas. Numerical examples based on real data will be presented as well.
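For readers unfamiliar with Hawkes dynamics, the sketch below simulates a one-dimensional Hawkes process with an exponential kernel via Ogata-style thinning, one building block of such compound models; the parameters are illustrative and the upper bound on the intensity is deliberately conservative.

```python
# Minimal sketch: thinning simulation of a self-exciting (Hawkes) process with
# intensity lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, rng):
    t, events = 0.0, []
    while True:
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events) + alpha
        t += rng.exponential(1.0 / lam_bar)          # candidate arrival under the bound
        if t > T:
            return np.array(events)
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:         # accept with probability lambda(t)/lam_bar
            events.append(t)

rng = np.random.default_rng(2024)
ev = simulate_hawkes(mu=1.0, alpha=0.6, beta=1.2, T=100.0, rng=rng)
print("events:", len(ev), "| expected count ~", round(100 * 1.0 / (1 - 0.6 / 1.2), 1))
```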

Pricing High Dimensional American Options in Stochastic Volatility and Stochastic Interest Rate Models

Yahui Zhang (Central University of Finance and Economics)

In this paper, we approach the problem of valuing high-dimensional American arithmetic options under advanced stochastic models. In particular, we consider a basket of assets that follows high-dimensional Heston and Black-Scholes models with stochastic interest rates. For this purpose, we present an algorithm called GPR-MC-CV, which is based on Monte Carlo (MC), machine learning, and control variate (CV) techniques. Specifically, we implement a backward dynamic programming algorithm and a Bermudan approximation of the American option. At each exercise date, the value of the option, equal to the maximum of the exercise value and the continuation value, is first calculated and then approximated by Gaussian Process Regression (GPR), taking the underlying asset prices as the explanatory variables and the option value as the response variable. We also employ the American geometric option price as a control variate to reduce the variance of the price estimators. Via a series of numerical exercises, we analyze the effect of changes in the stochastic model parameters on the mean and variance of American arithmetic option prices. Moreover, numerical tests show that the GPR-MC-CV method is fast and reliable in handling American arithmetic options under high-dimensional stochastic models, overcoming the curse of dimensionality.

Asset-liability management with liquid and fixed-term assets

Yevhen Havrylenko (University of Copenhagen)

Insurance companies and pension funds have asset-allocation processes that may involve multiple risk management constraints due to liabilities. Furthermore, the investment universe of such institutional investors often contains assets with different levels of liquidity, e.g., liquid stocks and illiquid investments in infrastructure projects or private equity. Therefore, we propose an analytically tractable framework for economic agents who maximize their expected utilities by choosing investment-consumption strategies subject to lower bound constraints on both intermediate consumption and the terminal value of assets, some of which are liquid, while others are fixed-term. Using the generalized martingale approach and a separation technique, we derive optimal decisions and analyze them from an economic perspective.

Constrained Consumption and Investment for General Preferences

Michel Vellekoop (University of Amsterdam)

Optimal strategies for consumption and investment are derived for general utility functions under linear constraints. The class of utility functions for which exact solutions can be found is shown to be closed under application of constrained Hamilton-Jacobi-Bellman operators, and we provide conditions which guarantee that value functions are differentiable. These results for discrete time problems are then used to approximate analogous problems in continuous time by defining sequences of problems in which preferences and the underlying stochastic asset dynamics are discretized. Solutions to these problems are shown to converge to the viscosity solution of the Hamilton-Jacobi-Bellman equation for the problem in continuous time. We illustrate our approach using known results for a number of different optimization problems and show how the method can be used for cases for which no closed-form solutions are available.

Investment-consumption Optimization with Transaction Cost and Learning about Return Predictability

Ning Wang (Australian National University)

In this paper, we investigate an investment-consumption optimization problem in a continuous-time setting, where the expected rate of return of a risky asset is predictable with an observable factor and an unobservable factor. Based on observable information, a decision-maker learns about the unobservable factor while making investment-consumption decisions. Both factors are assumed to follow a mean-reverting process. We also relax the assumption of perfect liquidity of the risky asset by incorporating proportional transaction costs incurred in trading the risky asset. In this way, a form of friction posing liquidity risk to the investor is examined. The dynamic programming principle coupled with a Hamilton–Jacobi–Bellman (HJB) equation is adopted to discuss the problem. Applying an asymptotic method with small transaction costs taken as a perturbation parameter, we determine the frictional value function by solving the first and second corrector equations. For the numerical implementation of the proposed approach, a Monte-Carlo-simulation-based approximation algorithm is adopted to solve the second corrector equation. Finally, numerical examples and their economic interpretations are discussed.

Experience Rating in the Cramér-Lundberg Model

Melanie Averhoff (Aarhus University)

This paper studies how experience rating on both claim frequency and severity impacts the solvency of an insurance business in the continuous-time Cramér–Lundberg model. This is done by treating the claim parameters as random outcomes and continuously updating the premiums using Bayesian estimators. In the analysis, the claim sizes conditional on the severity parameter are assumed to be light-tailed. The main contributions are large deviation results, where the asymptotic ruin probabilities are found for a model updating the premium based on both frequency and severity. This asymptotic ruin probability is compared to that of a model which updates the premium based solely on claim frequency. Our findings are illustrated with a parametric example in which the conditional claim sizes are exponentially distributed and the severity parameter is the outcome of a gamma distribution.
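As a point of reference for the frequency-only benchmark mentioned above, the sketch below shows the conjugate gamma-Poisson Bayesian update of the premium rate (prior parameters, the true rate, and the observation window are illustrative assumptions).

```python
# Minimal sketch: Bayesian experience rating of the claim frequency only.
import numpy as np

rng = np.random.default_rng(8)
a, b = 2.0, 4.0                      # gamma prior on the Poisson rate: mean a/b = 0.5
true_rate = 0.8
claims = rng.poisson(true_rate, size=10)

for year, n_claims in enumerate(claims, start=1):
    a, b = a + n_claims, b + 1.0     # conjugate gamma-Poisson posterior update
    print(f"year {year:2d}: observed {n_claims}, posterior-mean frequency {a / b:.3f}")
```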

On estimation of ruin probability when value process is optional semimartingale

Alexander Melnikov (University of Alberta)

Risk theory investigates stochastic models of risk in finance and insurance, and its classical problem is the estimation of ruin probability. In the talk we show how methods of optional processes work in this area. In our setting, the evolution of the capital of an insurance company is described as an optional semimartingale on a stochastic basis without the so-called “usual conditions”. In this general setting, the optional risk process may admit jumps from both the left and the right at any time. We study the estimation of ruin probability by developing an extended technique of stochastic exponentials with respect to optional semimartingales. We prove that under wide conditions the ruin probability for the optional risk process admits a certain exponential upper bound. It is shown that many well-known models and estimates in risk theory can be derived from our results. Giving a couple of illustrative examples, we also provide a reasonable motivation for using optional processes in this area.

Last Exit Times for Generalized Drawdown Processes

Zijia Wang (The Chinese University of Hong Kong)

In recent years, a significant amount of work has been dedicated to the study of the generalized drawdown process, owing to its extensive applications in insurance and finance. While existing studies have primarily focused on analyzing the associated first passage times, which signal adverse events, the investigation of last passage times should not be overlooked. Last passage times involve knowledge of the future and can thus offer additional insights. This paper aims to fill this gap in the literature by studying the last passage times for the generalized drawdown process with an independent exponential killing and discussing their applications to insurance risk. Our analysis focuses on spectrally negative Lévy processes, for which we derive the Laplace transforms of these random times. Additionally, we obtain new results on the joint distribution of the duration of the drawdown and the surplus level at killing. As applications, we implement our results in the loss-carry-forward tax and dividend models and investigate the valuation of a European digital drawdown option. Detailed numerical examples are presented for illustrative purposes.
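
As a point of reference (standard definitions, not necessarily the authors’ exact setup), the drawdown and generalized drawdown of a process \(X\) are commonly written as

\[
D_t \;=\; \overline{X}_t - X_t,
\qquad
Y_t \;=\; \gamma\big(\overline{X}_t\big) - X_t,
\qquad
\overline{X}_t \;=\; \sup_{0 \le s \le t} X_s,
\]

for a given function \(\gamma\); first passage times of \(Y\) above a threshold signal adverse events, whereas the last passage times below a threshold before an independent exponential killing time are the quantities studied here.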

Data Mining of Telematics Data: Unveiling the Hidden Patterns in Driving Behaviour

Ian Weng Chan (University of Toronto)

With the advancement in technology, telematics data, which capture vehicle movement information, are becoming available to more insurers. As these data capture actual driving behaviour, they are expected to improve our understanding of driving risk and facilitate more accurate auto-insurance ratemaking. In this paper, we analyze an auto-insurance dataset with telematics data collected from a major European insurer. Through a detailed discussion of the telematics data structure and related data quality issues, we elaborate on practical challenges in processing and incorporating telematics information in loss modelling and ratemaking. Then, with an exploratory data analysis, we demonstrate the existence of heterogeneity in individual driving behaviour, even within the groups of policyholders with and without claims, which supports the study of telematics data. Our regression analysis reiterates the importance of telematics data in claims modelling; in particular, we propose a speed transition matrix that describes discretely recorded speed time series and produces statistically significant predictors for claim counts. We conclude that large speed transitions, together with higher maximum speed attained, nighttime driving and increased harsh braking, are associated with increased claim counts. Moreover, we empirically illustrate the learning effects in driving behaviour: we show that both severe harsh events detected at a high threshold and expected claim counts are not directly proportional to driving time or distance, but increase at a decreasing rate.
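
A minimal sketch of how a speed transition matrix can be constructed from a discretely recorded speed series; the speed bins and the toy trip below are illustrative choices, not those of the paper.

import numpy as np
import pandas as pd

def speed_transition_matrix(speeds, bins=(0, 20, 40, 60, 80, 100, 120, np.inf)):
    # Map each recorded speed to a bin index, then count one-step transitions between bins.
    codes = pd.cut(pd.Series(speeds), bins=list(bins), labels=False, right=False).to_numpy()
    k = len(bins) - 1
    counts = np.zeros((k, k))
    for a, b in zip(codes[:-1], codes[1:]):
        if not (np.isnan(a) or np.isnan(b)):
            counts[int(a), int(b)] += 1
    # Normalize rows to empirical transition probabilities (rows with no observations stay zero).
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Toy example: second-by-second speeds (km/h) from a single trip.
trip = [0, 5, 18, 33, 47, 52, 61, 58, 44, 30, 12, 0]
P = speed_transition_matrix(trip)

Entries of P far from the diagonal correspond to the large speed transitions that the paper links to higher claim counts; in a ratemaking model, selected entries (or summaries of them) would enter as covariates alongside maximum speed, nighttime driving and harsh-braking counts.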

Actuarial Modelling by Quantum Computing

Muhsin Tamturk (Antares Global)

In this talk, we introduce actuarial modelling on gate-based quantum computers. We also provide an overview of current quantum computing technologies, including insights into quantum investments and the quantum advantage. Additionally, we explore the potential applications of quantum technologies in the insurance industry, highlighting benefits for the insurance sector.

Robustness of the Divergence-Based Risk Measure

Fabio Gomez (UNSW)

The divergence-based risk measure is defined by taking the supremum of the expectation over an uncertainty ball described by the \(\phi\) divergence. A notable example is the higher moment risk measure, a natural extension of the expected shortfall. This work examines various robustness issues related to the divergence-based risk measure. We demonstrate its robustness against optimization in a qualitative sense proposed by Embrechts, Schied, and Wang (2022, Operations Research). Additionally, we consider the distributionally robust optimization problem for linear portfolios, where ambiguity is quantified using the Wasserstein distance. Through this analysis, we highlight important quantitative aspects that enhance the robustness of the divergence-based risk measure.
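
A minimal sketch of the construction described above, with an illustrative radius \(r\):

\[
\rho_{\phi,r}(X) \;=\; \sup\Big\{ \mathbb{E}_Q[X] \;:\; D_\phi(Q \,\|\, P) \le r \Big\},
\qquad
D_\phi(Q \,\|\, P) \;=\; \mathbb{E}_P\!\left[ \phi\!\left( \frac{dQ}{dP} \right) \right],
\]

where the supremum is taken over probability measures \(Q \ll P\); for a power-type divergence function \(\phi\), this construction yields the higher moment risk measure as an extension of the expected shortfall.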

Monotonic mean-deviation risk measures

Xia Han (Nankai University)

Mean-deviation models, along with the existing theory of coherent risk measures, are well studied in the literature. In this paper, we characterize monotonic mean-deviation (risk) measures from a general mean-deviation model by applying a risk-weighting function to the deviation part. The form is a combination of the deviation-related functional and the expectation, and such measures belong to the class of consistent risk measures. The monotonic mean-deviation measures admit an axiomatic foundation via preference relations. By further assuming the convexity and linearity of the risk-weighting function, the characterizations for convex and coherent risk measures are obtained, giving rise to many new explicit examples of convex and nonconvex consistent risk measures. Further, we specialize to the convex case of the monotonic mean-deviation measure and obtain its dual representation. The worst-case values of the monotonic mean-deviation measures are analyzed under two popular settings of model uncertainty. Finally, we establish asymptotic consistency and normality of the natural estimators of the monotonic mean-deviation measures.

Duet expectile preferences

Qinyu Wu (University of Waterloo)

We introduce a novel axiom of co-loss aversion for a preference relation over the space of acts, represented by measurable functions in a suitable measurable space. This axiom means that the decision maker, facing the sum of two acts, dislikes the situation where both acts realize as losses simultaneously. Our main result is that, under strict monotonicity and continuity, the axiom of co-loss aversion characterizes preference relations represented by a new class of functionals, which we call the duet expectiles. A duet expectile involves two endogenous probability measures, and it becomes a usual expectile, a statistical quantity popular in regression and risk measures, when these two probability measures coincide. We discuss properties of duet expectiles and connections with fundamental concepts including probabilistic sophistication, risk aversion, and uncertainty aversion.
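
For orientation, the usual expectile that duet expectiles generalize is the unique solution \(e_\alpha(X)\) of

\[
\alpha\, \mathbb{E}\big[ (X - e_\alpha(X))_+ \big] \;=\; (1-\alpha)\, \mathbb{E}\big[ (e_\alpha(X) - X)_+ \big],
\qquad \alpha \in (0,1);
\]

loosely speaking, a duet expectile evaluates the two sides of this first-order condition under two different (endogenous) probability measures, and it reduces to the usual expectile when the two measures coincide.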

Orlicz premia and geometrically convex risk measures

Fabio Bellini (University of Milano-Bicocca)

Geometrically convex functions (GG-convex, for short) constitute an interesting class of functions obtained by replacing the arithmetic mean with the geometric mean in the definition of convexity. We introduce a notion of GG-convex conjugate parallel to the classical Fenchel transform and study its properties. We use GG-convex conjugation to give a general dual representation of GG-convex risk measures. We then focus on the special case of Orlicz premia, for which we give a slightly more general definition that includes expectiles and the logarithmic certainty equivalent as particular cases. We provide a novel axiomatization of Orlicz premia as the only elicitable GG-convex return risk measure. Finally, we discuss consistent families of scoring functions for Orlicz premia.
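
For reference, one common formulation of the Orlicz premium for a nonnegative risk \(X\) and a Young function \(\Phi\) with \(\Phi(1) = 1\) is

\[
H_\Phi(X) \;=\; \inf\Big\{ k > 0 \;:\; \mathbb{E}\Big[ \Phi\Big( \frac{X}{k} \Big) \Big] \le 1 \Big\};
\]

the talk works with a slightly more general definition under which expectiles and the logarithmic certainty equivalent also arise as particular cases.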

Understanding the Length of Stay of Older Australians in Permanent Residential Care

Mengyi Xu (Purdue University)

The length of stay in permanent residential care is a critical measure for understanding the utilization of institutional care and developing sustainable aged care policies. Understanding the length of stay is particularly significant in Australia due to its unique arrangement for paying accommodation costs in nursing homes, where residents can choose rental-style daily payments, a refundable lump sum, or a combination of both. This choice is likely influenced by the expected length of stay, among other factors. However, only a few studies examine this aspect for older Australians. Using survival analysis, we investigate the complete admission records of a cohort of older Australians first admitted to permanent residential care in 2008. Fitting several commonly used survival analysis distributions, we find the Generalized F distribution provides the best fit to the data due to its ability to capture the force of mortality for long-stay residents. Additionally, the nursing home’s organization type and service size, alongside resident characteristics found in international studies, significantly impact the length of stay.

Expected Length of Stay at Residential Aged Care Facilities in Australia: Investigating the Impact of Dementia

Colin Zhang (Macquarie University)

This paper investigates the key factors influencing the Length of Stay (LOS) in residential aged care facilities in Australia from 2008 to 2018. Utilising a dataset from the Australian Institute of Health and Welfare, we implement both a linear regression model and a random forest model to predict LOS. Our analysis reveals that the random forest model outperforms the linear regression model in terms of in-sample and out-of-sample fit. Key determinants of LOS include the Aged Care Funding Instrument (ACFI) variables (ADL_Score, BEH_Score, CHC_Level), as well as Admission_Year, Age, and Dementia_Flag. The significant role of dementia, even after adjusting for ACFI variables, highlights its potential impact on funding allocation, thus identifying a possible area for future research.
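
A minimal sketch of the two models compared, assuming a hypothetical extract with the column names listed above plus an LOS outcome in days; the file name, split and hyperparameters are illustrative, not those of the paper.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical extract: one row per resident, LOS in days (categorical ACFI levels assumed already encoded numerically).
df = pd.read_csv("aihw_los_extract.csv")
features = ["ADL_Score", "BEH_Score", "CHC_Level", "Admission_Year", "Age", "Dementia_Flag"]
X_train, X_test, y_train, y_test = train_test_split(df[features], df["LOS"], random_state=0)

lm = LinearRegression().fit(X_train, y_train)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)

# Out-of-sample comparison of the two models.
print("linear regression R^2:", r2_score(y_test, lm.predict(X_test)))
print("random forest R^2:", r2_score(y_test, rf.predict(X_test)))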

Healthy working life expectancy and flexible retirement options for subdivided populations in China

Zhihang Huang

Against the backdrop of population aging, delayed retirement has become a crucial policy choice. Policies that raise the retirement age are often tied to the extension of life expectancy; however, the health of workers is an important factor in determining whether such retirement schemes can proceed as expected. Poor health among workers may lead to a decline in labor productivity, absenteeism, or even early retirement, so sustained improvement in population health is a prerequisite for extending working life expectancy and achieving delayed retirement. Healthy working life expectancy refers to the number of years a population can expect to work in a healthy state and can therefore be used to assess the feasibility of delayed retirement schemes. Based on data from the China Health and Retirement Longitudinal Study (CHARLS) in 2020 and mortality data from the seventh national population census in China, this paper employs a Bayesian extension of Sullivan’s method to estimate the healthy working life expectancy of the population aged 50-65 across different regions, genders, and educational levels in China. Life expectancy is divided into four stages (healthy working life expectancy, unhealthy working life expectancy, healthy retirement life expectancy, and unhealthy retirement life expectancy) to analyze the distribution of health among different populations at these stages. From the perspective of workers’ health, the paper explores reasonable strategies for delaying the retirement age in China and, based on the results for different populations, offers policy suggestions for flexible retirement policies tailored to different demographic groups.

The results show that in 2020, the sum of age and healthy working life expectancy for different population groups in China exceeded the current statutory retirement age. This indicates that even after retiring at the current statutory age, people still have the physical capacity to continue working in good health, confirming the necessity and feasibility of delayed retirement policies. The working life expectancy and healthy working life expectancy of rural populations were higher than those of urban populations of the same age; however, their healthy life expectancy was lower than that of urban populations. Taking as a representative example 50-year-old individuals without formal education in rural areas, the working life expectancy, healthy working life expectancy, and healthy life expectancy were 19.09 years (95% CI: 18.71-19.43), 13.51 years (13.25-13.76), and 18.42 years (18.00-18.88), respectively. In contrast, for 50-year-old urban females with the same educational level, the corresponding values were 9.99 years (9.71-10.28), 7.64 years (7.40-7.89), and 21.85 years (21.14-22.63). Rural populations devote most of their remaining life expectancy, especially their healthy years, to work because of economic burdens, resulting in a shorter healthy life expectancy. The healthy working life expectancy of females is lower than that of males of the same age; for example, the healthy working life expectancy of urban females and males aged 50 with high school education or above is 8.33 years (8.09-8.58) and 13.19 years (12.87-13.48), respectively. Females tend to have longer retirement periods but spend more time in unhealthy stages. Furthermore, populations with higher educational levels had higher healthy working life expectancy but relatively shorter total working life expectancy.
For instance, for 50-year-old men with no formal education, junior high school education or below, and senior high school education or above, the healthy working life expectancy was 12.31 years (11.93-12.72), 12.75 years (12.49-13.01), and 13.19 years (12.87-13.48), respectively; the corresponding total working life expectancy was 14.34 years (13.88-14.75), 13.52 years (13.23-13.80), and 12.70 years (12.40-13.03). China’s policy of delaying the retirement age should take into account regional, gender, and educational disparities in health and employment. The government should encourage urban residents and individuals with higher education levels to delay retirement, make use of the strengths of the highly educated population, and improve the pension security of rural populations.
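
A minimal sketch of Sullivan’s method for a single subgroup; the person-years and prevalence values below are illustrative placeholders, not CHARLS or census estimates, and the Bayesian extension used in the paper would place posterior distributions on the prevalences.

import numpy as np

ages = np.arange(50, 66)                                   # single ages 50-65
person_years = np.full(ages.size, 0.97)                    # life-table person-years lived at each age per survivor at 50 (illustrative)
prev_healthy_working = np.linspace(0.75, 0.30, ages.size)  # prevalence of being healthy and in work at each age (illustrative)
survivors_at_50 = 1.0                                      # radix: life-table survivors at exact age 50

# Sullivan estimator: expected years spent healthy and working between ages 50 and 65.
hwle_50 = (person_years * prev_healthy_working).sum() / survivors_at_50
print(f"Healthy working life expectancy at age 50 (to 65): {hwle_50:.2f} years")

Repeating the calculation with prevalences of being unhealthy-and-working, healthy-and-retired, and unhealthy-and-retired splits remaining life expectancy into the four stages described above.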

Directional tails dependence for multivariate extended skew-t distributions

Melina Mailhot (Concordia University)

We study tail dependence for multivariate risks following a class of distributions notable for its ability to model asymmetric multivariate tails: the extended skew-t distributions. In particular, allowing for a chosen rotation, we obtain explicit expressions for the upper and lower tail dependence coefficients. The calculated coefficients are exemplified for dimensions 2 and 3 with simulated data. Finally, empirical estimates of the upper and lower tail dependence coefficients are compared with the theoretical ones.
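
For reference, the coefficients in question are, in their standard bivariate form,

\[
\lambda_U \;=\; \lim_{u \uparrow 1} \mathbb{P}\big( F_2(X_2) > u \mid F_1(X_1) > u \big),
\qquad
\lambda_L \;=\; \lim_{u \downarrow 0} \mathbb{P}\big( F_2(X_2) \le u \mid F_1(X_1) \le u \big),
\]

with rotated and higher-dimensional analogues giving the directional versions computed for the extended skew-t family.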

Negatively dependent optimal risk sharing

Liyuan Lin (University of Waterloo)

We analyze the problem of optimally sharing risk using allocations that exhibit counter-monotonicity, the most extreme form of negative dependence. Counter-monotonic allocations take the form of either "winner-takes-all" lotteries or "loser-loses-all" lotteries, and we respectively refer to these (normalized) cases as jackpot and scapegoat allocations. Our main theorem, the counter-monotonic improvement theorem, states that for a given set of random variables that are either all bounded from below or all bounded from above, one can always find a set of counter-monotonic random variables such that each component is greater than or equal to its counterpart in the convex order. We show that Pareto optimal allocations, if they exist, must be jackpot allocations when all agents are risk seeking. We essentially obtain the opposite when all agents have discontinuous Bernoulli utility functions, as scapegoat allocations maximize the probability of being above the discontinuity threshold. We also consider the case of rank-dependent expected utility (RDU) agents and find conditions that guarantee that RDU agents prefer jackpot allocations. We provide an application to the mining of cryptocurrencies and show that, in contrast to risk-averse miners, RDU miners with small computing power never join a mining pool. Finally, we characterize the competitive equilibria with risk-seeking agents, providing first and second fundamental theorems of welfare economics in this setting, where all equilibrium allocations are jackpot allocations.

Some results of the multivariate truncated normal distributions with actuarial applications in view

Jianxi Su (Purdue University)

The multivariate normal distributions have been widely advocated as an elegant yet flexible model, which uses a simple covariance matrix parameter to capture the intricate dependence involved in high-dimensional data. However, insurance loss random variables are often assumed to be non-negative, so the multivariate normal distributions must be properly truncated to be adopted in insurance applications. In this presentation, we review some fundamental statistical properties of the multivariate truncated normal distributions, including their independence, non-steepness and maximum likelihood estimation properties. For actuarial applications, we propose an efficient numerical scheme to compute the tail conditional allocation for the multivariate truncated normal distributions, which hinges on the holonomic gradient method for computing the probability content of a simplex structure.
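
As a crude point of comparison only (this is plain Monte Carlo, not the holonomic-gradient scheme proposed in the talk), the tail conditional allocation for a multivariate normal truncated to the nonnegative orthant can be approximated as follows; the mean vector, covariance matrix and confidence level are illustrative.

import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0, 1.5])
Sigma = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.5, 0.4],
                  [0.2, 0.4, 2.0]])

# Rejection sampling: draw multivariate normals and keep only the nonnegative vectors (the truncated law).
draws = rng.multivariate_normal(mu, Sigma, size=2_000_000)
draws = draws[(draws >= 0).all(axis=1)]

# Tail conditional allocation: E[X_i | S > VaR_q(S)] for the aggregate loss S.
S = draws.sum(axis=1)
var_q = np.quantile(S, 0.99)
tail = draws[S > var_q]
allocation = tail.mean(axis=0)
print(allocation, allocation.sum())   # the components sum to the tail conditional expectation of S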

Multi-output Extreme Spatial Model for Complex Production Systems

Xing Wang (Illinois State University)

Data-driven spatial models in machine learning have enabled efficient control of production systems. However, most machine learning models are devoted to modeling the mean response, so they are ill-suited to analyzing abnormal extreme events, which are often the main interest. Since extreme events from the tail of the distribution give rise to prohibitive expenditures in system management, extreme spatial models should be utilized to analyze extreme risks. Recent engineering applications of extreme modeling are limited to simple cases such as univariate modeling, which is insufficient for complex systems. Moreover, existing extreme spatial models in other domains cannot be directly applied to controllable systems. In this paper, we propose an extreme spatial model that enables the modeling of multi-output response control systems. Robust parameter estimation is proposed for marginal extreme distributions, and efficient composite likelihood estimation is devised to cope with high-dimensional problems. The proposed model is applied to the modeling of maximum residual stress in composite aircraft production.

Tree-based Ising models: mean parameterization, efficient computation methods and stochastic ordering

Etienne Marceau (Université Laval (Laval University))

High-dimensional multivariate Bernoulli distributions are essential in the modeling of binary data in actuarial contexts. Tree-based Ising models, a class of undirected graphical models for binary data, have proven useful in a variety of machine learning applications. We assess the advantages of expressing tree-based Ising models via their mean parameterization rather than their commonly chosen canonical parameterization. These include fixed marginal distributions, often convenient for dependence modeling, and the dispelling of the intractable normalizing constant that otherwise plagues Ising models. We derive an analytic expression for the joint probability generating function of mean-parameterized tree-based Ising models, which is used to build efficient computation methods for the sum of their constituent random variables. Similarly, we derive an analytic expression for their ordinary generating function of expected allocations, providing means for exact computations in the context of risk allocations. We furthermore show that Markov random fields with fixed Poisson marginal distributions may act as an efficient and accurate approximation for tree-based Ising models, in the spirit of Poisson approximation. In this vein, we examine how their tailor-fitted partial order on trees may suit tree-based Ising models as well. This provides grounds for a better understanding of tree-based Ising models, that is, for studying the impact of the shape of the underlying tree on the aggregate distribution.
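
For context, the canonical parameterization of an Ising model on a tree \(T = (V, E)\) takes the form (standard notation, assumed here rather than quoted from the paper)

\[
\mathbb{P}(X = x) \;=\; \frac{1}{Z(\theta)} \exp\Big( \sum_{v \in V} \theta_v x_v \;+\; \sum_{\{u,v\} \in E} \theta_{uv}\, x_u x_v \Big),
\qquad x \in \{0,1\}^{|V|},
\]

where the normalizing constant \(Z(\theta)\) is the intractable object that the mean parameterization (marginal means together with a dependence parameter per edge) dispels.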

Using causal machine learning to predict treatment heterogeneity in the primary catastrophe bond market

Despoina Makariou (University of St Gallen)

We introduce a causal machine learning approach to predict treatment effect heterogeneity in the primary catastrophe bond market. Studying the issuance timing is important for optimising the cost of capital and ensuring the success of a catastrophe bond offering for sponsors and investors. We find that issuance timing affects catastrophe bond spreads, but this result varies according to several market and bond specific factors.

Challenges in Actuarial Learning for Loss Modeling of Brazilian Soybean Crops

Eduardo F. L. de Melo

In Brazil, the agricultural sector plays a fundamental role, accounting for approximately 25% of the national Gross Domestic Product (GDP). Within this sector, soybean cultivation stands out as the most significant, with a forecast yield exceeding 150 million tons in the 2022/2023 harvest, equivalent to the combined production of all Organisation for Economic Co-operation and Development (OECD) nations. Nevertheless, soybean production is subject to considerable climatic vulnerabilities that not only affect the yield but also have a considerable impact on the formulation of crop insurance pricing. Drought and excessive rainfall in Brazil’s South and Central-West regions severely affected the 2021/2022 crop yield, leading to a substantial rise in the loss ratio of insurers’ portfolios. In this paper, we utilize actuarial learning models to predict losses in soybean crops, taking into account weather covariates, specifically rainfall and temperature, given their relevance to crop productivity. We assess the predictive performance of generalized linear models, generalized additive models and tree-based models within the two-stage frequency-severity framework. We compare the charged premium with the premium estimated using standard insurance modeling techniques. In addition, we carry out simulated stress tests to highlight the significance of weather-related variables in evaluating total losses during extreme events. Our assessment encompasses the estimation of losses across diverse scenarios. The implications of these results are significant for (re)insurance pricing, risk management, and solvency. They also bear substantial importance for formulating effective agricultural public policies.
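
A minimal sketch of the two-stage frequency-severity setup referred to above, using illustrative file and column names and a Poisson/Gamma GLM pair standing in for the full set of models compared.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

policies = pd.read_csv("soy_policies.csv")   # illustrative: one row per policy-year with claim_count, exposure, rainfall, temperature
claims = pd.read_csv("soy_claims.csv")       # illustrative: one row per claim with severity and the same weather covariates

# Stage 1: claim frequency, Poisson GLM with a log-exposure offset.
freq = smf.glm("claim_count ~ rainfall + temperature", data=policies,
               family=sm.families.Poisson(),
               offset=np.log(policies["exposure"])).fit()

# Stage 2: claim severity, Gamma GLM with a log link.
sev = smf.glm("severity ~ rainfall + temperature", data=claims,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Pure premium per policy = expected claim count x expected severity.
policies["pure_premium"] = (freq.predict(policies, offset=np.log(policies["exposure"]))
                            * sev.predict(policies))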

The Effect of Governmental Disaster Relief on Individuals’ Incentives to Mitigate Losses

Tobias Huber (UNSW Sydney)

In recent decades, global economic losses from natural catastrophes have substantially increased, highlighted by events like Hurricane Ian in 2022 (USD 113 billion losses in the US), flash flood Bernd in 2021 (USD 54 billion losses in Germany), and floods in Australia in 2022 (USD 6.6 billion). Governments, post-catastrophe, offer disaster relief such as Germany’s allocation of up to EUR 30 billion after flash flood Bernd. This paper explores disaster relief’s impact on investments in damage mitigation, differentiating between loss reduction and prevention. It reveals relief can decrease investments in both, potentially fostering increased moral hazard. The paper advocates providing disaster relief only in conjunction with measures for damage mitigation to enhance societal resilience.

The Role of a Regional Risk Insurance Pool in Supporting Climate Resilience

Shan Yang (UNSW)

Climate change itself and its damage to the economy are subject to deep uncertainty. Financial adaptation, different from physical adaptation, is a powerful tool to mitigate the economic risk generated by climate change. Regional risk insurance pools emerge as a prominent practice of financial adaptation, as recommended by the IPCC in its latest AR6 report. This work aims to quantify the economic effects of regional risk insurance pools on climate resilience. A two-stage integrated climate-economy model is developed to facilitate our study. In this model, climate shocks are categorized into insurable and uninsurable ones with damages subject to uncertainty, and the pool under consideration protects the region against insurable shocks. We achieve a quantitative understanding of the economic effects of the pool under uncertainty. We further generalize the study to a multi-stage setting similar to the regional dynamic integrated climate-economy model.

Bridging the Protection Gap: A Tax Redistribution Solution Under the Private-Public Partnership Framework

Yanbin Xu (Nanyang Technological University)

The escalating impacts of climate change have disproportionately increased catastrophe risks in high climate risk regions, significantly influencing the solvency capital requirements for insurers. This situation has led to elevated premium rates, creating a substantial protection gap in these vulnerable areas. Moreover, the increasing threat of climate-related catastrophes has prompted insurers to withdraw from regions with high catastrophe risk exposure, thereby reducing coverage capacity and further widening the climate protection gap. The uncovered climate exposure of high-risk regions produces externalities affecting residents in low-risk regions who lack direct climate exposure, with ex-post taxation-financed public disaster relief (PDR) following catastrophic events as a prime example. This paper addresses these challenges by presenting a novel analysis and solution. First, we demonstrate that the insurance coverage supply favours the regions with low climate risk exposure when the insurers’ required return exceeds the risk-free rate. Second, we discuss the dynamics between the demand for private insurance and the existence of ex-post taxation-financed public disaster relief in both high and low climate risk regions. Third, we propose an optimal tax redistribution plan that fosters public-private partnership through a coinsurance and premium subsidy scheme targeted at high climate risk regions. This plan is financed by policies written in low-risk areas, offering a trade-off of lower PDR liabilities. Our findings suggest the potential for a Pareto improvement among all residents under this tax redistribution framework, highlighting a pathway towards more equitable and effective catastrophe risk management in the face of climate change.

Can environmental pollution liability insurance improve firms’ ESG performance? Evidence from listed industrial firms in China

Fu Yang (Xi’an University of Finance and Economics)

As environmental issues continue to escalate, environmental pollution liability insurance (EPLI) becomes an essential financial tool for transferring environmental losses, reducing environmental risks, and achieving sustainable development. ESG (Environmental, Social, and Governance) is a comprehensive indicator used to evaluate firms’ environmental, social, and governance practices, reflecting the quality of their sustainable development efforts. Given that existing studies have not focused on the impact of EPLI on ESG performance, this study aims to investigate this impact and its mediation mechanism using data from 1004 Chinese A-share listed firms from 2013 to 2020. Additionally, the study explores the heterogeneous effects from both internal and external perspectives. The results indicate that EPLI significantly improves firms’ ESG performance. Firms that have EPLI in place demonstrate enhanced environmental risk management capabilities, improved quality of environmental information disclosure, and alleviation of financing constraints, thereby positively influencing corporate ESG performance. For state-owned firms, particularly during mature phases and in regions with more flexible environmental regulations, acquiring EPLI yields more evident effects on their ESG performance. These findings contribute valuable insights into the intrinsic relationship between EPLI and ESG performance. Furthermore, they can serve as a useful guide for policymakers in refining EPLI policies and assisting firms in enhancing their ESG practices, ultimately contributing to the pursuit of sustainable development.

Optimal Consumption and Portfolio Choice in the Presence of Risky House Prices

Servaas van Bilsen (University of Amsterdam)

This paper explores optimal consumption and portfolio decisions in the presence of risky house prices. We assume that changes in real interest rates and future rents directly impact house prices. A novel aspect of our model is that rent inflation rates and consumption inflation rates are cointegrated. We show that the individual prefers to be a home owner when young and a renter when old, which motivates the design of so-called sale-and-rent-back plans. Furthermore, she invests significantly less pension wealth in inflation-linked bonds than conventional wisdom suggests. Finally, we find that, for our parameter setting, the optimal mortgage changes from fixed-rate to adjustable-rate as the individual becomes older.

Optimal Branching Times for Branching diffusions and its Application in Insurance

Junyi Guo (Nankai University)

The optimal branching problem for a class of branching-type diffusions is studied. The particles of the branching diffusion move along the paths of a Feller process. A characterization of the value function is given by studying the corresponding viscosity solution. A branching-type risk model is then constructed, the optimal strategy for opening new businesses is obtained, and numerical examples are given.

Optimal Stopping for Exponential Lévy Models with Weighted Discounting

Jose Manuel Pedraza Ramirez (The University of Manchester)

In this talk, we consider an optimal stopping problem with weighted discounting, and the state process is modelled by a general exponential Lévy process. Due to the time inconsistency, we provide a new martingale method based on a verification theorem for the equilibrium stopping strategies. As an application, we generalise an investment problem with non-exponential discounting studied by Grenadier and Wang (2007) and Ebert et al. (2020) to Lévy models. Closed-form equilibrium stopping strategies are derived, which are closely related to the running maximum of the state process. The impacts of discounting preferences on the equilibrium stopping strategies are examined analytically.

A Revisit of the Excess-of-Loss Optimal Contract

Qiuqi Wang (Georgia State University)

Finding the optimal contract between two (or more) parties has received a great deal of attention in the actuarial science literature. The problem usually becomes technically challenging when multiple lines of business are taken into account. We propose a new statistical approach to the optimal reinsurance problem with large claim sizes from the perspective of asymptotic models. We obtain optimal solutions for several practical optimal reinsurance problems under deductible contracts for Value-at-Risk. We also show that our method can be naturally extended to general distortion risk measures. Numerical studies and asymptotic analysis are also presented.
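
For orientation, the excess-of-loss (deductible) contract in question transfers, for a loss \(X\) and deductible \(d \ge 0\),

\[
I_d(X) \;=\; (X - d)_+,
\qquad
X - I_d(X) \;=\; X \wedge d,
\]

and the deductibles (one per line of business) are chosen to minimize, for example, the Value-at-Risk of the retained loss plus the reinsurance premium; the asymptotic approach approximates this optimization when claim sizes are large.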

Worst-case reinsurance strategy with likelihood ratio uncertainty

Ziyue Shi (University of Waterloo)

In this paper, we explore a non-cooperative optimal reinsurance problem incorporating likelihood ratio uncertainty, aiming to minimize the worst-case risk of the total retained loss for the insurer. We establish a general relation between the optimal reinsurance strategy under the reference probability measure and the solution in the worst-case scenario. This relation can be generalized to insurance design problems quantified by tail risk measures. We provide a necessary and sufficient condition for the problem under the reference measure to have at least one optimal solution in common with its worst-case counterpart. As an application of this relation, optimal policies for the worst-case scenario quantified by the expectile risk measure are determined. Additionally, we explore the corresponding cooperative problem and compare its value function with that of the non-cooperative model.

Insurance design under a performance-based premium scheme

Ziyue Shi (University of Waterloo)

In this work, we introduce a reward-and-penalty premium scheme into an insurance design problem. In most classical insurance design models, the insurance premium is a constant, determined by a premium principle and paid upon the purchase of the insurance contract. The amount of the premium reflects the insurer’s attitude toward the riskiness of the loss, so it is natural that the insurer may use the premium to affect the policyholder’s choice of insurance contract. In a performance-based premium scheme, the premium paid may vary with realized reimbursement amounts; it therefore becomes a random variable whose distribution depends on the underlying loss and the reimbursement function. Introducing such a premium scheme, we investigate the optimal insurance contract chosen by the policyholder and then discuss the benefit the insurer can obtain from replacing a premium principle with this performance-based premium scheme.