The brief summarises Playtech’s work on the responsible use of AI in gambling and on AI governance, reviewing demands for accountability and the approaches available to meet them.
The research explores how machine learning models identify at-risk gambling behavior and how these decisions can be explained to support more effective, personalized responsible gambling interventions. Key findings:
• Black Box vs. White Box Models: Most responsible gambling tools use complex “black box” models for risk prediction. While accurate, these models are difficult to interpret. Explainable Boosting Machines (EBMs) offer a promising middle ground, combining interpretability with strong performance (a minimal EBM sketch follows this list).
• Three Explanation Techniques Identified:
- Simplification: Approximates the model’s logic with decision trees (e.g. TREPAN), though the resulting trees can themselves become complex and hard to interpret (a surrogate-tree sketch follows this list).
- Feature Relevance: Techniques like SHAP and LIME quantify how much each behavioral feature contributes to a risk prediction; SHAP also shows the direction of influence (a SHAP sketch follows this list).
- Visual Explanation: Feature-risk curves show how risk changes with specific behaviors (e.g. night-time play), both at the model (global) and individual (local) level (a partial dependence sketch follows this list).
• No Single Technique is Sufficient: Each method has strengths and limitations. Combining multiple techniques provides a more complete and actionable understanding of why a player is flagged as at-risk.
• Actionable Insights Are Key: Not all explanations are equally useful for intervention. Effective responsible gambling requires identifying reasons that players recognize and can act on (e.g. deposit spikes vs. abstract statistical patterns).
• Ethical Considerations Matter: Models must be monitored for bias. A biased dataset can lead to misleading or unfair risk assessments across player groups (a simple group-wise check is sketched after this list).
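To make the EBM point concrete, here is a minimal sketch using the open-source interpret library. The data is synthetic and the behavioral features (deposit_frequency, night_play_ratio, loss_chasing_score) are hypothetical placeholders for real player telemetry; this is not Playtech’s model.

```python
# Minimal EBM sketch on synthetic behavioral features (hypothetical, not real data).
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "deposit_frequency": rng.poisson(3, n),      # deposits per week (illustrative)
    "night_play_ratio": rng.uniform(0, 1, n),    # share of play between 00:00 and 06:00
    "loss_chasing_score": rng.uniform(0, 1, n),  # re-deposits shortly after losses
})
# Synthetic "at-risk" label, for illustration only.
y = (X["loss_chasing_score"] + X["night_play_ratio"] > 1.2).astype(int)

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# Global explanation: one readable shape function per feature.
global_exp = ebm.explain_global()
# Local explanation: per-feature contributions for a single player.
local_exp = ebm.explain_local(X.iloc[:1], y.iloc[:1])
```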
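The simplification technique can be illustrated with a generic surrogate decision tree trained to mimic a black-box classifier’s predictions. TREPAN itself is not shown here; this sketch uses a plain scikit-learn tree as a stand-in, again on synthetic, hypothetical features.

```python
# Simplification by surrogate: approximate a black-box model with a shallow
# decision tree trained on the black box's own predictions (TREPAN-style idea).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "deposit_frequency": rng.poisson(3, n),
    "night_play_ratio": rng.uniform(0, 1, n),
    "loss_chasing_score": rng.uniform(0, 1, n),
})
y = (X["loss_chasing_score"] + X["night_play_ratio"] > 1.2).astype(int)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate learns to mimic the black box, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=list(X.columns)))
# Fidelity: how often the surrogate agrees with the black box.
print("fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())
```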
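For feature relevance, a short SHAP sketch shows both the magnitude and the direction of each feature’s contribution to one player’s prediction. The model, features, and data are again synthetic placeholders used only to demonstrate the technique named in the brief.

```python
# Feature relevance with SHAP: sign and size of each feature's contribution
# to a single player's predicted risk (synthetic model and data).
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "deposit_frequency": rng.poisson(3, n),
    "night_play_ratio": rng.uniform(0, 1, n),
    "loss_chasing_score": rng.uniform(0, 1, n),
})
y = (X["loss_chasing_score"] + X["night_play_ratio"] > 1.2).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # explanation for one player

for feature, value in zip(X.columns, shap_values[0]):
    direction = "raises" if value > 0 else "lowers"
    print(f"{feature}: {direction} predicted risk by {abs(value):.3f}")
```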
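Feature-risk curves of the kind described above can be approximated with partial dependence (global) and individual conditional expectation (local) curves. The sketch below assumes a scikit-learn model and the hypothetical night_play_ratio feature; it is an illustration, not the brief’s exact method.

```python
# Feature-risk curves: how predicted risk changes as one behavioral feature
# varies, globally (partial dependence) and per player (ICE). Synthetic data.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "deposit_frequency": rng.poisson(3, n),
    "night_play_ratio": rng.uniform(0, 1, n),
    "loss_chasing_score": rng.uniform(0, 1, n),
})
y = (X["loss_chasing_score"] + X["night_play_ratio"] > 1.2).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# kind="both" overlays the global average curve on per-player (local) curves.
PartialDependenceDisplay.from_estimator(
    model, X, features=["night_play_ratio"], kind="both"
)
plt.show()
```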
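Finally, a very simple bias-monitoring check compares flag rates across player groups. The age_band grouping and the 0.8 threshold are illustrative assumptions; real monitoring would use the operator’s actual player groups and an agreed fairness metric for the intervention being triggered.

```python
# Basic bias monitoring: compare flag rates of a risk model across player groups.
# Group column and threshold are hypothetical placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "risk_score": rng.uniform(0, 1, 1_000),
    "age_band": rng.choice(["18-24", "25-54", "55+"], 1_000),
})
df["flagged"] = df["risk_score"] > 0.8  # hypothetical intervention threshold

# Large gaps in flag rates between groups warrant a closer look at the
# training data and features before trusting the model's explanations.
print(df.groupby("age_band")["flagged"].mean())
```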
These insights support the use of explainable AI in responsible gambling, enabling more targeted, ethical, and effective interventions. For full details, please refer to the complete document.