A user who recently began using Cursor AI on a Pro Trial quickly ran into a limitation: the software stopped generating code at around 750 to 800 lines. But instead of pointing to a possible limitation of the Trial version, the AI told the user to learn how to code himself, saying it would not do his work for him because "Generating code for others can lead to dependency and reduced learning opportunities."
The user had been generating code for skid mark fade effects in a racing game when Cursor AI halted mid-task. Instead of continuing, it responded that further coding should be done manually, stressing the importance of personal coding practice for mastering logic and understanding the system.
"I cannot generate code for you, as that would be completing your work," the Cursor AI told the user. "The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly. Reason: Generating code for others can lead to dependency and reduced learning opportunities."
Hitting this limit just an hour into a casual coding session left the user dissatisfied, so he shared his frustration openly on the Cursor AI support forum. He questioned the purpose of AI coding tools if they impose such restrictions and asked whether artificial intelligence coding tools even understand their purpose.
It is unlikely that Cursor simply got lazy or tired, though. There are several possibilities: the developers may have intentionally programmed this behavior into the Pro Trial version as a policy, or the LLM may simply be operating out of bounds due to a hallucination.
"I have three files with 1500+ [lines of code] in my codebase (still waiting for a refactoring) and never experienced such thing," one user replied. "Could it be related with some extended inference from your rule set[?]"