Authors: Jie Gong and Lu Xiao

“We thought AI would make engineers faster. Instead, it showed us engineers were never the bottleneck.”

AI can accelerate individual tasks, but task-level acceleration does not automatically improve an organization's overall efficiency. After Decathlon China deployed AI in its software development process, coding efficiency increased by 20 to 40 percent, while overall delivery speed increased by only 10 to 15 percent. This gap forced the company to rethink how its time was actually spent and revealed that the real bottleneck was not technical execution, but coordination.

Background

Decathlon is the world’s largest sporting goods retailer by store count, with global revenue of €16.2 billion in 2024. The company entered the Chinese market in 2003 and now operates more than 200 stores in over 100 cities.

Decathlon China’s Digital, Tech & Data team employs approximately 150 full-time staff. It is responsible for the design and maintenance of customer platforms, e-commerce, retail systems, supply chain tools, and data and information systems applications. Recently, this team shifted from a project-driven to a product-driven operating model, assuming full responsibility for products, engineering, and data. Multiple product teams must collaborate continuously on shared systems; therefore, delivery speed depends directly on the fluidity of workflows between teams, rather than solely on each team’s coding speed.

Current State of Artificial Intelligence: Individual Gains, Organizational Disappointments

In 2025, Decathlon China’s digital leadership set an ambitious target: use AI to achieve a 2–5x improvement in software delivery productivity. Their approach was traditional: deploying code generation tools, automated testing, and early development agents to increase engineer productivity. Over the next year, these tools worked at the individual level. Coding and testing efficiency improved by 20 to 40 percent.

However, upon analyzing deliverables, the leadership team found that overall delivery speed had only increased by 10 to 15 percent. Projects were still taking weeks. Deadlines were still being missed. The gap between individual productivity and organizational performance was too large to be explained simply by adoption issues.

A deeper analysis of how engineers allocated their time revealed the cause. A significant portion of each week was spent not on development, but on clarifying requirements, attending cross-team coordination meetings, synchronizing dependencies, and waiting for releases. This pattern held across all projects and teams. AI had optimized only a small part of the problem: the tasks it accelerated, such as coding and testing, accounted for a small fraction of total project time. The majority of the time was spent coordinating between people, teams, and systems.
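The arithmetic behind this gap follows Amdahl's law: speeding up one fraction of the work bounds the total gain. The case does not report the exact share of delivery time spent on coding and testing, so the 30 percent used below is an illustrative assumption; with it, a 40 percent task speedup yields only a single-digit overall improvement, consistent with the observed 10 to 15 percent.

```python
def overall_speedup(task_share, task_speedup):
    """Amdahl's law: overall gain when only `task_share` of the
    total time is accelerated by a factor of `task_speedup`."""
    return 1 / ((1 - task_share) + task_share / task_speedup)

# Illustrative values (not reported in the case): if coding and
# testing are 30% of delivery time and become 40% faster, overall
# delivery improves by only about 9%.
gain = overall_speedup(0.30, 1.40) - 1
print(f"{gain:.1%}")
```

Under these assumptions, no amount of further coding acceleration could reach the 2–5x target: even infinitely fast coding caps the overall gain at 1 / (1 − 0.30) ≈ 1.43x.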

The team gradually realized that AI optimized tasks, not the system.

The NPS Experiment: From Traditional Pipeline to Multi-Agent Delivery

A cross-functional requirement presented an opportunity to try a different approach. The global Consumer & User Insights team needed Decathlon’s NPS survey system updated across multiple platforms, a contained task that would typically take four to six weeks through the standard development pipeline.

Traditional Approach: Following the established process, this requirement would pass through business requirements, product requirements, iteration planning, and multi-team execution. In practice, two development iterations were required: the first produced a working version; the team reviewed, discussed, and iterated; the second delivered the final version. A large portion of those four to six weeks was spent not on engineering development but on handoffs, reviews, and coordination meetings between iterations.

AI Approach: Instead of the traditional process, the team used the open-source multi-agent framework OpenClaw to establish four AI agents, each playing a different role: product manager, engineer, UX designer, and tester. Each agent was role-specific, context-aware, and aligned with project goals. These agents were instructed to deliver the feature collaboratively.

A usable prototype was completed in a single afternoon. It took about two hours for setup, two hours for agent-led development, and an hour and a half for debugging. What would have taken four to six weeks was compressed into less than a day.

But the deeper change was not speed; it was structure. In traditional processes, coordination requires meetings, knowledge transfer requires documentation, decision-making requires cross-team consultation, and scheduling constraints must be accommodated. In an agent-based model, contextual information is shared programmatically and roles interact within predefined structures. Coordination stops being an ongoing activity and becomes an inherent property of the system design itself. Coordination work shifts from recurring costs (daily stand-up meetings, review cycles, waiting on dependencies) to upfront costs (defining agent roles, encoding context, designing interaction structures).

Implications

This diagnosis has broad significance: if AI improves employee productivity and organizational performance fails to keep pace, the bottleneck has likely shifted from execution to coordination. Today, most companies apply AI at the task level, using it to speed up individual coding, drafting, or testing. Decathlon’s experience suggests that real change occurs at a higher level: not in speeding up tasks, but in compressing the coordination time between them. Extending multi-agent coordination to more complex systems will require addressing issues such as compliance and accountability. Nonetheless, the starting point remains the same: identify where the time actually goes and remove the barriers there.

*All Nano Case materials may be used for non‑commercial purposes with proper citation. Commercial use requires prior permission from CAMO.

Research Labs

Deloitte-HKU Lab for Organizational Transformation

Organizations succeed by attracting talent, motivating action, and coordinating efforts—areas AI is transforming, demanding structural adaptation. Our lab guides this shift, identifying effective responses and their interplay with markets, norms, and institutions. Through research and industry collaboration, we assess AI adoption, spotlight pitfalls, and craft frameworks for leaders.

Lab for AI-Agents in Business and Economics

Our mission is to pioneer AI-driven solutions for business challenges by developing multi-agent systems and domain-specific AI architectures, while guiding organizations through ethical, scalable, and transformative AI adoption. We focus on developing platforms for AI agents and multi-agent architecture for business and management, designing AI agents for specific business applications with deep domain knowledge, and studying the economic impact of AI transformation on human behavior, business and organizations.

AI Implementation (AI2) Lab

The AI Implementation (AI2) Lab is dedicated to turning AI and business research into real-world AI adoption. We collaborate with organizations to identify frictions, develop deployment strategies, and measure the impact of AI implementation. Our work focuses on helping firms adopt and scale AI, designing business models for the AI era, and incubating AI-related innovations through experimentation and prototyping.

Lab for the Future of Work and Well-being

We advance understanding of generative AI’s transformation of China’s labor markets, leisure, and wellbeing through rigorous, data-driven, causally robust research generating actionable insights. Our people-centered approach prioritizes human wellbeing, going beyond productivity and profitability to foster better jobs and lives.

The Human-Artificial Intelligence Lab

We study comparative intelligence in humans and artificial systems to develop evidence-based frameworks for effective human–AI collaboration.
