On March 8, 2025, a developer using Cursor AI for a racing game project was brought up short when the coding assistant refused to generate additional code and advised him to learn programming instead.
Cursor AI stops generating code after 800 lines
According to a bug report posted on Cursor’s official forum, after producing approximately 750 to 800 lines of code, the AI assistant delivered a refusal message stating, “I cannot generate code for you, as that would be completing your work.” The code in question involved managing skid mark fade effects in the game. The assistant recommended the user develop the logic independently to ensure proper understanding and maintenance of the system.
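For context, skid mark fading is the kind of small, self-contained gameplay logic the assistant was asked to write. The forum post does not include the developer's code or even name the language he was working in, so the sketch below is purely illustrative: a hypothetical TypeScript snippet (the language, the FADE_RATE value, and the function names are all assumptions) in which each skid mark segment carries an opacity that decays every frame and fully faded segments are discarded.

```typescript
// Hypothetical sketch of skid mark fading; not the developer's actual code.
interface SkidMark {
  x: number;        // world position of the mark segment
  y: number;
  opacity: number;  // 1 = fully visible, 0 = invisible
}

const FADE_RATE = 0.25; // opacity lost per second (assumed tuning value)

let skidMarks: SkidMark[] = [];

// Called when a tire slips: drop a fully opaque mark at the tire's position.
function addSkidMark(x: number, y: number): void {
  skidMarks.push({ x, y, opacity: 1 });
}

// Called once per frame with the elapsed time in seconds.
function updateSkidMarks(dt: number): void {
  for (const mark of skidMarks) {
    mark.opacity = Math.max(0, mark.opacity - FADE_RATE * dt);
  }
  // Remove marks that have faded out completely so the list stays small.
  skidMarks = skidMarks.filter((mark) => mark.opacity > 0);
}
```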
This refusal was accompanied by a justification from the AI, which asserted that “generating code for others can lead to dependency and reduced learning opportunities.” The developer, who goes by the username “janswist,” expressed frustration at this limitation after what he described as “just 1h of vibe coding” with the Pro Trial version of Cursor, stating, “Not sure if LLMs know what they are for (lol), but doesn’t matter as much as a fact that I can’t go through 800 locs.”
Cursor AI, launched in 2024, is built on external large language models and offers code completion, explanation, refactoring, and full function generation from natural language descriptions. It has quickly gained popularity among software developers, and the company sells a Pro version that promises enhanced capabilities and larger code-generation limits.
Even Google is using AI for coding
Another forum user noted they had worked with files containing over 1,500 lines of code without running into a similar refusal. The incident highlights a philosophical stance that sits at odds with the rising trend of “vibe coding,” a term popularized by Andrej Karpathy for the practice of having AI generate code from natural language descriptions without the developer deeply understanding how it works.
This refusal also fits a pattern seen in other AI models, such as ChatGPT. In late 2023, users reported that GPT-4 had become increasingly reluctant to complete certain tasks, and OpenAI acknowledged the complaints in a public statement expressing its intent to investigate the issue.
Cursor’s refusal not only mirrors the responses often encountered on programming help sites like Stack Overflow, where experienced developers encourage newcomers to work through problems themselves rather than rely on ready-made code, but also points to an unintended limitation arising from the assistant’s training. Since other users have not reported hitting a cutoff around 800 lines of code, the behavior appears to be inconsistent rather than a hard cap, and it marks a clear area for improvement in the tool.
Featured image credit: Cursor