22nd European Conference on Artificial Intelligence
BetBuddy presented research at the conference demonstrating that knowledge extraction can be successfully applied to make neural networks interpretable.
The research explores how machine learning—specifically neural networks—can be used to understand and predict harmful gambling behavior, focusing on self-exclusion as a proxy for problem gambling. It also investigates whether interpretable models can support safer gambling practices by enabling better industry and clinical interventions. Key findings:
• Improved Interpretability with TREPAN: A modified TREPAN algorithm extracted human-readable rules from neural networks with 87% fidelity and only a 1% drop in accuracy, making complex models more transparent (a rough sketch of this extraction idea follows this list).
• Behavioral Risk Indicators Identified: High variability in wagers and increased betting intensity were strong predictors of self-exclusion, especially when combined with demographic factors like age and gender.
• Compact Decision Tree Structure: TREPAN produced a simplified decision tree (9 leaves, height 5), outperforming traditional decision trees in both accuracy and readability.
• Stakeholder Preference for Transparency: 70% of industry professionals preferred interpretable models over more accurate black-box models, emphasizing the need for explainable AI in responsible gambling.
• Dual-Model Strategy Proposed: The study recommends using accurate black-box models for detection and interpretable models for communication and intervention, balancing precision with usability.
• Limitations in Scope: The model does not capture psychological or social factors (e.g., emotional vulnerability or substance abuse), highlighting the need for richer data in clinical applications.
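TREPAN itself is not available in mainstream ML libraries, and the study's exact feature set and data are not reproduced here. The sketch below only illustrates the general surrogate-extraction idea behind the findings: train a neural network, fit a small decision tree to mimic the network's predictions, and measure fidelity as agreement between the two. It uses scikit-learn with a plain CART tree in place of TREPAN's m-of-n splits and membership queries, synthetic data, and assumed feature names (wager_variability, betting_intensity, age, gender) loosely based on the indicators mentioned above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for player behaviour data; the four feature names
# below are assumptions for illustration, not the paper's actual inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train the black-box "detection" model.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# 2. Extract a surrogate tree: fit it on the *network's* labels rather than
#    the ground truth, so the tree mimics the network's behaviour. The size
#    limits mirror the compact tree reported in the study (9 leaves, depth 5).
surrogate = DecisionTreeClassifier(max_depth=5, max_leaf_nodes=9, random_state=0)
surrogate.fit(X_train, net.predict(X_train))

# 3. Fidelity: how often the tree agrees with the network on held-out data.
fidelity = accuracy_score(net.predict(X_test), surrogate.predict(X_test))
accuracy = accuracy_score(y_test, surrogate.predict(X_test))
print(f"fidelity to network: {fidelity:.2%}, accuracy vs. labels: {accuracy:.2%}")

# 4. The tree yields human-readable rules for communication and intervention.
print(export_text(surrogate,
                  feature_names=["wager_variability", "betting_intensity",
                                 "age", "gender"]))
```

This also captures the dual-model strategy in miniature: the network does the detecting, while the extracted rules are what an analyst or clinician would actually read.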
These insights support the integration of interpretable machine learning models into responsible gambling programs, while also highlighting the need for comprehensive, multi-faceted approaches. For full details, please refer to the complete document.