Claude Code is First Choice
Ok so I've now been using Claude Code for a little over a week. It has supplanted Cursor as my primary tool for AI assisted coding tasks, and the transition was very rapid. In fact, I'm not sure I'm going to continue to pay for Cursor.
Monorepo, Multiple Cursor Instances
One of the things that I discovered early on with Cursor was that it would frequently get "lost" in my monorepo. If it ever wanted to run an external command, such as installing another npm dependency, it would do it in the wrong directory (usually the repository root). The same thing would happen when trying to run the linter, or unit tests, etc. Almost every single time, I had to explicitly tell it to run a command in a given subdirectory, or just reject the command altogether and run it myself.
To address this problem, I began running separate Cursor instances for each major layer/service. That helped, but created another problem: I could no longer ask Cursor to implement a feature across the database, API, backend, and front end. Basically, I had to repeat myself a lot. That became quite tedious.
That is not a problem with Claude Code. I run CC from my repo root, and it has access to the entire repo. This approach has certainly simplified introducing app-wide features.
Claude is my Design Assistant
I've come to value Claude more as my design assistant than as my coding assistant. Before I do any work, I thoroughly discuss what I want to do with Claude, iteratively refining the content. Only after the item is sufficiently refined do I let Claude run with the code. This approach has significantly improved the quality of the output; I'm spending fewer cycles telling Claude that it got something wrong and to try again.
Improved Workflow
I'd like to share my current workflow. Hopefully others can benefit, or adapt it to their needs. Keep in mind that I am a solo dev, so the workflow would definitely need to change in a team environment.
The Feature Log
This is the part that would definitely need to change in a team environment. Before I start any major work, I define my vision for the feature. This is traditionally what Product does. My repo contains a directory I've simply named "feature-log", and in that directory I create a new markdown file for each feature I implement. In an enterprise environment, this information would live in something like Aha or Jira. But the main thing here is that it gives Claude the necessary context to begin refinement, and because it's part of the repo, Claude doesn't have to reach out to an external tool for the information.
I've refined the format and content of the feature log and reached a model that works well. I spend some time up front describing the vision of the feature, and explicitly cite any open discussion points or technology decisions. When I think I've provided enough context, I move on to refinement.
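To make this concrete, a feature log entry in this spirit might start from a skeleton like the one below. The section names and placeholder text here are illustrative, not a prescribed format:

```markdown
# Feature: <short name>

## Vision
What the feature should do and why, in a few paragraphs.

## Open Questions
- Unresolved discussion points to work through during refinement.

## Technology Decisions
- Choices already made, plus any still under consideration.

## Implementation Plan
(Added during refinement, then revised before execution.)

## Outcome
(Filled in after execution: what was actually done, divergences from
the plan, descoped items, and follow-up ideas.)
```

The later sections start out empty; they get filled in as the feature moves through refinement, planning, and execution.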
Refinement
It is at this point that I fire up a new instance of Claude. My first instructions are to ask Claude to read my project's documentation, to read my "feature template" (more on that later), and to read the feature log for the current feature.
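My opening instruction to Claude looks roughly like this (the file paths are illustrative, not my actual layout):

```text
Please read the project documentation in docs/, the feature template at
feature-log/feature-template.md, and the feature log for this feature at
feature-log/user-notifications.md. Do not make any changes yet; I want to
work through the requirements and design decisions with you first.
```

Telling Claude up front not to make changes matters, as the next paragraph explains.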
Claude dutifully reads all of the resources, and usually responds with suggestions on changes it can make immediately. I have to ask Claude to slow down and explain that I want to work through requirements and design/technical decisions first. At this point, Claude and I engage in collaborative discussion. I find this step of the process immensely fulfilling. I offer Claude a chance to ask questions, and I in turn ask Claude my own questions. This process can take literally hours, depending on the feature's complexity and the up-front work I've done on the feature definition.
After the refinement has achieved sufficient detail, I then ask Claude to define an implementation plan, and to update the feature log with that implementation plan. I'll review the plan, which offers another opportunity to iteratively improve things. When I think the plan is ready, I'll ask Claude to revise the plan in the feature log, and to proceed to implementation.
I normally give Claude specific instructions around my database schema and GraphQL API. If the feature requires changes to either, I request those be the first two items in the implementation plan. I also ask Claude to show me the changes and let me validate them externally before moving on. This process has allowed me to catch a few database changes that passed the eye test but were in fact invalid. And executing these two steps first saves time: if something is wrong, I don't have to replan or re-execute the code portions.
Execution
After I give Claude permission to execute its implementation plan, it's time to sit back and do something else. Claude will usually need permission to do a few things, so I can't let it run 100% unattended. Claude will spend some time chugging through the code, and when it's ready, I will validate. Eventually my automated testing story will be mature enough that hopefully Claude can perform the validation itself, but we're not there yet.
There are almost always problems. Sometimes it takes many attempts to work through the issues. But even with issues, the quality of the resultant changes is so much better than what I was getting before.
Wrapping Up The Work
After all the work - validation, passing unit tests, no linter or formatting issues - I ask Claude to update the feature log with full details of what was actually done. That includes ideas for improvements, things that were possibly descoped, or areas that diverged from the implementation plan. I find it extremely valuable to compare what was planned versus what was implemented, and use that as a source to create new tickets in Linear.
At that point, I stage the files (I still don't trust the agent for this), and then ask Claude to commit with a good message. That concludes the work, and I am ready to move on to the next item.
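The staging step is just ordinary git, done by hand before Claude ever touches the commit. A sketch of what that looks like (the file paths are invented for illustration):

```shell
# Stage only the files I have reviewed myself, rather than letting the agent do it.
git add src/api/schema.ts src/web/NotificationsPanel.tsx   # illustrative paths

# Confirm exactly what is staged before asking Claude to write the commit message.
git status --short
git diff --staged --stat
```

Only after that review do I hand the commit itself over to Claude.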
I have found this process works for everything from small items to large features. The only difference is the amount of prep work, and of course, larger implementations are more likely to require more iterations.
The Feature Template File
Earlier in the post, I mentioned a feature template file. After I followed this process a few times, I captured the main points into a file I called the feature template. It details the process I just outlined: feature vision/description, refinement, planning, execution, and documenting the actual results. I also included a section on some patterns implemented in various places (maybe this would be better in the CLAUDE.md file).
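An outline along those lines might look like the following. This is a sketch of the idea rather than my exact file; the wording is illustrative:

```markdown
# Feature Development Process

1. Vision/Description: define the feature and its motivation in the feature log.
2. Refinement: work through requirements and design decisions in discussion; no code yet.
3. Planning: write an implementation plan into the feature log; schema and API
   changes come first so they can be validated externally.
4. Execution: implement the plan, pausing for review where required.
5. Results: document what was actually done, including divergences from the plan
   and descoped items.

## Project Patterns
Notes on patterns used across the codebase (possibly better placed in CLAUDE.md).
```

Because the template lives in the repo alongside the feature logs, pointing Claude at it costs one instruction at the start of each feature.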
So when I start a new feature, I ask Claude to read this file so it understands the process I wish to follow. This has helped prevent Claude from being too eager to implement changes.
Conclusion
I've found a groove with Claude Code. I still have Cursor up so I can reference things more quickly, but my usage of Cursor for actually making changes has been reduced significantly. I'm not ready to completely give it up yet, but right now it's hard to justify its continued use.