Deloitte-HKU Lab for Organizational Transformation
Lab Directors
Prof. Matthias Fahn
Associate Professor, Management and Strategy
Prof. Jie Gong
Associate Professor, Economics and Management and Strategy
Mission
Our mission is to understand how organizations should adapt to the challenges and opportunities of artificial intelligence. We research effective responses and develop frameworks to help leaders successfully implement AI in their organizations.
Core Objectives
AI is transforming the foundations of organizational success – how firms attract talent, motivate people, and coordinate their efforts. Through rigorous research and real-world insights, we aim to:
Assess
the real-world state of AI transformation in firms across industries
Identify
the common pitfalls and strategic missteps that organizations should avoid
Develop
practical frameworks to help organizations adapt and thrive in the AI era
Research Projects
Better Technology, Worse Motivation: Gen AI’s Mediocrity Trap
Generative AI boosts efficiency but can lower work quality. In an experiment with professional illustrators, we found that AI accelerates early progress but limits further improvement. Many participants chose to trade quality for speed, revealing a key challenge: generative AI may undermine the motivation for creative excellence and innovation.
When Good Enough Becomes Optimal: The Agency Costs of Using AI in Organizations
We argue that widespread AI adoption raises the cost of incentivizing high human performance, as improved baseline outcomes make “good enough” results more acceptable to firms. Our analysis shows this “shirk-biased technological change” can reduce firm profitability and alter labor market dynamics—workers may benefit initially, but gains fade as competition grows. European regulatory frameworks, which give workers more say in AI adoption, may preserve higher value per worker but could limit competitiveness compared to less regulated regions. Ultimately, firms outside Europe may achieve higher profits per worker and outcompete their European counterparts over time.
AI Persuasion, Bayesian Attribution, and Career Concerns of Doctors
This project explores how AI influences doctors when they disagree on diagnoses. Disagreements can stem from differences in attention (objective and complementary) or in comprehension (subjective and substitutive). Uninterpretable AI can be more persuasive by letting doctors attribute disagreements to attention gaps, especially for those with weaker abnormality-detection skills or stronger career concerns. This can ultimately improve diagnostic accuracy.