At the AI Programming Crossroads: Why Copilot Mode Is a Startup Trap
While much of the industry focuses on helping programmers code faster with Copilot-style assistants, one bold team is building the “car”: fully autonomous, end-to-end AI software generation, challenging the prevailing AI programming paradigm.

While the entire AI industry races to breed faster “horses” for programmers, one pioneering team has chosen to build the car instead.
“The development of large models is like a basketball game that has just finished the first quarter. Everyone is judging the outcome by the first-quarter score, but we believe there are second, third, and fourth quarters still to play,” said 宿文, CEO of KouTing Intelligence (AIGCode), offering a different perspective on the currently crowded AI coding track.
Since ChatGPT exploded globally at the end of 2022, AI coding has been widely regarded as the fastest and most commercially viable track for large language models (LLMs). From GitHub Copilot’s success to major tech giants and startups launching their own coding assistants, the industry seems to agree: AI is a “co-pilot” for programmers, primarily to boost coding efficiency.
However, 宿文 and his team at KouTing Intelligence aim to prove this is a misjudgment of the ultimate goal. In a recent interview, he shared three “non-consensus” views on AI coding.
Non-Consensus 1: The Foundation Model Is Still in Its Infancy
Innovation in Network Architecture Is Key to Breakthroughs
Many believe the battle over foundation models has already been settled and that startups can only find opportunities at the application layer. 宿文 sees it differently: “We believe large model technology, or the development of foundation models, is still in its infancy.”
He points out that current Transformer-based architectures have fundamental issues in learning mechanisms and knowledge compression. “Although MoE (Mixture of Experts) addresses some computational efficiency problems, its experts are ‘flat’ and lack collaboration, making the whole system a ‘black box’ relying on simple routing.”
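To make the criticism concrete: in a standard MoE layer the experts are structurally identical and independent, and a single linear router decides which ones fire for each token. The sketch below is a minimal, generic illustration in PyTorch (the dimensions, expert count, and top-k value are arbitrary and not taken from any particular model), not a description of any specific production system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlatTopKMoE(nn.Module):
    """Toy Mixture-of-Experts layer: identical, independent ("flat") experts
    chosen per token by a single linear router with top-k selection."""
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])
        self.router = nn.Linear(d_model, n_experts)  # the "simple routing"
        self.top_k = top_k

    def forward(self, x):                            # x: (tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)    # router score per expert
        weights, idx = probs.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):                  # weighted sum of the chosen experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 64)
print(FlatTopKMoE()(tokens).shape)   # torch.Size([16, 64])
```

Note that the router’s softmax is the only coordination between experts here; that is the “simple routing” and lack of collaboration being criticized.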
Since its founding, KouTing Intelligence has focused on developing its own foundation model. Their breakthrough lies in continuous iteration and innovation in network architecture. “We evolved from MoE to the more advanced PLE (Progressive Layered Extraction) architecture, which is already mature in recommendation and search domains.”
He explains that moving from MoE to MMoE decouples the experts by giving each task its own gate; PLE then addresses the conflicts and information loss that arise after decoupling, enabling fine-grained extraction of what tasks have in common and what is specific to each.
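For context, the sketch below illustrates the building block PLE stacks, the CGC (Customized Gate Control) layer, as it is commonly described in the multi-task recommendation literature: a pool of shared experts captures what tasks have in common, each task keeps its own experts for what is task-specific, and a per-task gate fuses only that task’s experts with the shared ones. Sizes and expert counts are arbitrary, and this is a generic illustration rather than AIGCode’s actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_expert(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())

class CGCLayer(nn.Module):
    """One Customized Gate Control layer (the building block of PLE):
    shared experts learn task commonalities, task-specific experts learn
    task particulars, and each task's gate fuses its own + shared experts."""
    def __init__(self, d_in=32, d_expert=16, n_tasks=2, n_shared=2, n_specific=2):
        super().__init__()
        self.shared = nn.ModuleList([make_expert(d_in, d_expert) for _ in range(n_shared)])
        self.specific = nn.ModuleList([
            nn.ModuleList([make_expert(d_in, d_expert) for _ in range(n_specific)])
            for _ in range(n_tasks)
        ])
        # one gate per task, over that task's experts plus the shared experts
        self.gates = nn.ModuleList([
            nn.Linear(d_in, n_specific + n_shared) for _ in range(n_tasks)
        ])

    def forward(self, x):                                  # x: (batch, d_in)
        shared_out = [e(x) for e in self.shared]
        fused = []
        for task_experts, gate in zip(self.specific, self.gates):
            outs = torch.stack([e(x) for e in task_experts] + shared_out, dim=1)
            w = F.softmax(gate(x), dim=-1).unsqueeze(-1)   # (batch, n_experts, 1)
            fused.append((w * outs).sum(dim=1))            # weighted fusion per task
        return fused                                       # one representation per task

x = torch.randn(8, 32)
task_a, task_b = CGCLayer()(x)
print(task_a.shape, task_b.shape)   # torch.Size([8, 16]) torch.Size([8, 16])
```

A full PLE model stacks several such layers so that shared and task-specific signals are separated progressively; a plain MMoE layer corresponds to the degenerate case with only shared experts and per-task gates.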

Evolution of multi-task learning network structures—from simple shared bottom to gated experts (MMoE, CGC) and progressive layered extraction (PLE), as shown in Gabriel Moreira’s Medium article.
宿文 states that architectural innovation gives their model unique potential in knowledge compression and understanding long logical chains.

Architecture diagram of the new AIGCoder model: it refines the conventional design with de-coupled expert modules, using Multi-Head Expert-Aware attention (MHEA) to dynamically activate experts and Customized Gate Control (CGC) to fuse their information, tackling the scaling bottlenecks of large models without adding compute cost.

Non-Consensus 2: Avoiding Big Tech’s Track Is a False Proposition
In AI, entrepreneurs often hear advice: “Don’t compete with big tech, or you’ll be crushed.”
宿文 believes this is a misconception: “If something is truly a big opportunity, why wouldn’t big companies go after it? Stated more precisely, the advice should be to avoid the low-hanging fruit.”
“The real moat isn’t about finding a niche that big companies overlook, but solving more complex, deeper problems within the same field.”
“Many current coding products are just integrating various APIs to produce decent demos—these are ‘low-hanging fruits.’ KouTing Intelligence’s strategy is to innovate at the core technology level to achieve a true ‘all-in-one’ solution.”
This integrated approach also shapes 宿文’s view of agent development. He notes that the industry habitually divides the technology stack into layers such as Infra, Foundation, OS, and Agent, which simply maps the architecture of the PC and mobile internet eras onto AI. “Transplanting that old blueprint isn’t meaningful for a new paradigm.”
He emphasizes that in the new paradigm all of these components are deeply coupled. “From a problem-solving perspective, we should solve it holistically. Dividing up the work into layers before the end-to-end result is proven is counterproductive.”
宿文 divides AI for coding into five stages:
- L1: Low-code platforms, not mainstream;
- L2: Copilot products, assisting programmers with prompt-based code generation, like GitHub Copilot and Cursor;
- L3: Autopilot products, capable of end-to-end programming without human intervention;
- L4: Multi-user collaboration, turning ideas into complete products;
- L5: Fully automated iteration and upgrade to mature software.
宿文 states, “Most AI coding products are currently at L2, while AutoCoder is designed from the start for L3.”
Moving from L2 to L3 isn’t simply a matter of scaling up. “Building the ultimate coding assistant doesn’t naturally lead to end-to-end software generation.” The technical challenges and optimization directions are fundamentally different: the former (Copilot) optimizes for coding efficiency, chiefly context understanding and precise completion; the latter (Autopilot) must understand complex business logic, decompose it, and sustain long chains of reasoning.
Moreover, L2 requires deep integration with IDEs, giving big companies an advantage, but for startups, it might be a risky detour.
Non-Consensus 3: Personalized Application Markets Will Explode, with Demand Far Exceeding Existing Markets
宿文 believes that committing to L3 is both a technical choice and a market judgment. The industry broadly agrees that the ultimate goal of AI coding is to let everyone build software, yet holds that, given technical bottlenecks and users’ knowledge gaps, the most practical path for now is to assist programmers and make existing work more efficient.
He sees that path as a “strategic detour”: L2 cannot naturally evolve into L3, so following it risks missing the real blue ocean, the massive, personalized incremental demand suppressed by today’s development model.
“The incremental demand far exceeds any replacement of the existing market. Programmers won’t disappear, but a new, much larger market will emerge.”
“It’s like Didi creating the ride-hailing market, or Meituan creating food delivery: once low-cost, efficient supply is available, the market explodes,” he explains. “Software development is full of suppressed demand. Small businesses, startups, even large departments may want something like an internal training system, which traditionally takes months, costs a lot, and carries a high risk of drifting away from the original requirements.”
KouTing Intelligence aims to reshape this process: “If the requirements are clear by morning, a deployable product can be ready by afternoon.”
The latest product, AutoCoder, is positioned as the “world’s first integrated front-end and back-end full software generation platform,” capable of simultaneously generating a highly usable front-end, database, and backend system. For example, inputting “Generate a tech company’s official website” produces not only the front page but also the backend for content management and user data.
This tool is aimed at product managers, designers, non-technical entrepreneurs, small business owners (like cafes, gyms), and early-stage founders—groups with clear digital needs but high barriers to traditional development.
宿文 cites a statistic: a comparable overseas product has already reached monthly active users equal to one-tenth of the user base GitHub has accumulated over its 20-year history, while GitHub’s own numbers show no decline, which suggests a new, incremental user market is emerging.
The most direct challenge to the L3 path is: what if the generated software has bugs? 宿文’s answer: “Why spend hours hunting bugs when you can regenerate a correct version in minutes?” As the marginal cost of software iteration approaches zero, the freedom to test and improve will be unprecedented.
Conclusion
宿文’s strategy rests on three non-consensus but logically coherent judgments: build their own foundation model, take the harder end-to-end path, and target the suppressed incremental demand.
Of course, choosing an unconventional path invites doubts and uncertainties. Just as early cars were slower and less reliable than horses, it will take time and market validation to see if KouTing’s “car” can reach the performance, stability, and reliability needed to compete with or surpass the “horse-drawn carriage” system.
But one thing is clear: the AI programming race has just begun. A challenger is playing a very different game, and from the user’s perspective, we look forward to a future where software creation becomes truly democratized.