Understanding Risk Models and Consumer Risk Profiles in Gambling Using Knowledge Extraction
BetBuddy presentation at NeSy 2017 demonstrating how knowledge extraction techniques (TREPAN and Random Forest Tree Explainer) can make black-box machine learning models interpretable for responsible gambling risk detection and personalized player interventions.
This presentation by Chris Percy (Lead Researcher, BetBuddy) at the 2017 Neural-Symbolic Integration Workshop (NeSy) explores how knowledge extraction techniques can transform complex machine learning models into interpretable, actionable insights for responsible gambling. The research addresses the fundamental tension between prediction accuracy and explainability in AI systems used for player protection.
Key findings:
- Regulatory Demand for Explainability: The UK Gambling Commission expects operators to use player analytics for harm detection, but industry stakeholders (70% in a 2016 poll) prefer a 75% accurate interpretable model over a 90% accurate black-box model, emphasizing transparency over raw performance.
- TREPAN Algorithm for Neural Network Interpretation: TREPAN successfully extracted compact decision trees (8 nodes) from neural networks with 33 features and 500+ weights, achieving competitive accuracy and fidelity while producing human-readable rules.
- Narrative Validation of Model Rules: Extracted rules can be converted into narratives for domain expert review. For example, “Men aged over 31 with high variability and increasing intensity are at higher risk” aligns with addiction research, while rules involving nationality were flagged as invalid indicators.
- Random Forest Tree Explainer: A novel approach traverses each tree in a 200-tree random forest to calculate weighted feature frequencies for individual players, enabling personalized risk explanations with Red/Amber classifications for targeted communications.
- Distinct Risk Profiles Among High-Risk Players: Case studies demonstrated that two players with similar overall risk scores exhibited fundamentally different behavioral patterns—one a new player with rapidly escalating risk (70% pattern-match to self-excluders), another an established player showing self-moderation attempts (62% pattern-match).
- Feature-Level Risk Analysis: Individual behavioral graphs revealed distinct risk drivers: Player 1 showed highly variable and increasing loss patterns, while Player 2 exhibited variable staking patterns without the same loss trajectory—demonstrating why uniform interventions may be ineffective.
- Personalization Research Foundation: Research on smoking cessation, physical fitness, alcohol reduction, and diabetes management supports personalized messaging over generic interventions. Problem gamblers are not a homogeneous group, and “responsible gambling” terminology may be perceived negatively.
- UK Government Behavioral Insights (EAST Framework): Effective interventions should be Easy (clear messages), Attractive (visual attention and incentives), Social (avoiding reinforcement of problematic norms), and Timely (close to flagged behavior with follow-up prompts).
- Early Evidence of RGI Effectiveness: Testing at PlayOLG (Ontario Lottery and Gaming) showed that among 1,455 responsible gambling interactions, 24% led to moderated behavior, 64% had minimal impact, and 12% led to increased behavior—providing encouraging early indications that targeted interventions can help players.
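The surrogate-tree idea behind TREPAN can be illustrated with a simplified sketch: train a compact, human-readable decision tree on the *predictions* of a black-box model (here a small neural network standing in for the risk-detection model), then measure its fidelity to that model. This is not the full TREPAN algorithm (it omits oracle-driven query sampling and m-of-n splits), and the dataset, model, and parameters are illustrative assumptions rather than anything from the presentation.

```python
# Simplified surrogate-tree sketch in the spirit of TREPAN.
# Data and models are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=10, random_state=1)

# Black-box model standing in for the risk-detection neural network.
blackbox = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                         random_state=1).fit(X, y)

# Surrogate: a compact tree (at most 8 leaves, echoing the 8-node trees
# reported above) trained to mimic the network's outputs, not the labels.
surrogate = DecisionTreeClassifier(max_leaf_nodes=8, random_state=1)
surrogate.fit(X, blackbox.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == blackbox.predict(X)).mean()
print(f"fidelity to black box: {fidelity:.1%}")

# The extracted tree is human-readable and can be reviewed by domain experts.
print(export_text(surrogate))
```

Fidelity (agreement with the black box) rather than accuracy against ground truth is the key metric here, since the goal is to explain what the network does.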
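The per-player feature-frequency idea behind the Random Forest Tree Explainer can likewise be sketched: for a single player, walk the decision path of every tree in the forest and tally how often each feature is tested, normalising the tallies into a per-player profile of risk drivers. This is a minimal sketch under assumed data; the feature names and dataset are invented for illustration and the weighting here is a simple path-frequency count, not necessarily the exact scheme used in the presentation.

```python
# Minimal per-player feature-frequency sketch for a random forest.
# Feature names and data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
feature_names = ["loss_trend", "stake_variability", "session_length",
                 "deposit_frequency", "bet_intensity"]

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def feature_frequencies(forest, x):
    """Normalised count of how often each feature appears on x's
    decision paths across all trees in the forest."""
    counts = np.zeros(x.shape[0])
    for est in forest.estimators_:
        tree = est.tree_
        # Node ids visited by this sample in this tree.
        node_ids = est.decision_path(x.reshape(1, -1)).indices
        for node in node_ids:
            f = tree.feature[node]
            if f >= 0:  # leaves are marked with a negative feature id
                counts[f] += 1
    return counts / counts.sum()

freqs = feature_frequencies(forest, X[0])
for name, share in sorted(zip(feature_names, freqs), key=lambda t: -t[1]):
    print(f"{name}: {share:.1%}")
```

Comparing two players' frequency profiles makes concrete why uniform interventions can misfire: players with similar overall risk scores can have very different dominant features.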
The presentation establishes the foundation for BetBuddy’s explainable AI approach, arguing that interpretable models are essential for regulatory compliance, stakeholder trust, and effective personalized player interventions.
For full details, please refer to the complete document.