My LLM Code Generation Workflow (for now)

tl;dr: Brainstorm stuff, generate a readme, plan a plan, then execute using LLM codegen. Discrete loops. Then magic. ✩₊˚.⋆☾⋆⁺₊✧

My steps are:

  • Step 1: Brainstorming with a web search-enabled LLM
  • Step 2: LLM-generated Readme-Driven Development 
  • Step 3: LLM planning of build steps
  • Step 4: LLM thinking model generation of coding prompts
  • Step 5: Let the LLM directly edit, debug and commit the code 

Return to Step 1 to start new features. Iterate on the later steps as needed.

Step 1: Brainstorming with a web search-enabled LLM

I use Perplexity AI Pro for this phase as it runs internet searches and does LLM summarisation. I create a few chat threads grouped into a space with a custom system prompt. Dictation also works well to save on typing.

To close out this phase, I start a new thread and ask it to generate a README.md for the feature we will build.
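As a rough illustration, that final prompt might read something like this (my phrasing, not a fixed template):

```
We have finished brainstorming in this space. Using everything
discussed, write a README.md for the feature. Cover the problem it
solves, the user-facing behaviour, what is explicitly out of scope,
and any open questions. Do not write any implementation code.
```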

Step 2: LLM-generated Readme-Driven Development 

Readme-driven development (RDD) involves creating a git branch and updating the README.md before writing any implementation logic. RDD forces me to think through the scope and outcomes. It also provides a blueprint that I add to the context window in all later steps.
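The mechanics are deliberately lightweight. A sketch (the branch name and section headings here are hypothetical, not a prescribed structure):

```
git checkout -b feature/csv-export   # hypothetical feature branch

# README.md then gains sections along these lines, before any code exists:
#   ## What this feature does
#   ## What is deliberately out of scope
#   ## Example usage
#   ## Open questions
```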

Step 3: LLM planning of build steps

I start a fresh chat with a thinking LLM and add the README.md to the context window. I ask it to make an implementation plan. This may include a sketch of the API or a list of unit tests to write. Getting the LLM to stop writing code during this planning phase is often hard. If the outline plan looks too complex, I trim down the README to describe a more basic prototype.

If you distract the LLM with minor corrections, costs can run up, you can hit your limits, and the model becomes unfocused. If things go off track, I create a fresh chat and copy over the best content. See Andrej Karpathy's videos for an expert explanation of why fresh chats work so well.
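For reference, my planning prompt in that fresh chat looks roughly like this (my wording, not a canonical template); the explicit "plan only, no code" instruction is what tackles the problem above:

```
Below is the README.md for the feature. Produce a step-by-step
implementation plan: the order of work, a sketch of the public API,
and the unit tests to write for each step. Keep each step small
enough to build and test on its own. Do not write any implementation
code yet; this is planning only.
```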

Step 4: LLM thinking model generation of coding prompts

I create a fresh chat, insert the README.md, paste the plan's next step, and ask the LLM to generate an exact prompt to write the code with unit tests.
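The meta-prompt can be short. Something along these lines (again my phrasing, not a template):

```
Here are the README.md and the next step of the implementation plan.
Write an exact, self-contained prompt I can give to a coding
assistant so that it implements this step with unit tests. Include
all the context the assistant needs, as it will not see this
conversation.
```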

Step 5: Let the LLM directly edit, debug and commit the code

I use aider, an open-source command-line tool that indexes the local git repo, edits files, debugs tests and commits the code. Mistakes are cheap: type /undo and aider rolls back its last commit.

aider needs to see the test failures, so type /run followed by the command that compiles and runs the unit tests, and let aider add the output to the chat. Then tell the LLM to fix any failures. At this point, the LLM writes and debugs the code for you ✩₊˚.⋆☾⋆⁺₊✧
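A typical session looks roughly like this (the file names and test command are placeholders; /run and /undo are real aider chat commands):

```
$ aider src/exporter.py tests/test_exporter.py
> (paste the generated coding prompt from Step 4)
> /run pytest tests/test_exporter.py
> the failing tests are both date-formatting bugs, please fix them
> /undo   # only if the latest change made things worse
```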

End Notes

This blog is inspired by Harper Reed's Blog and a conversation with Paul Underwood—many thanks to Paul for explaining his LLM codegen workflow to me.

My current tools are:

  • Perplexity AI Pro online, using Claude 3.7 Sonnet and US-hosted DeepSeek, for internet research and brainstorming
  • GitHub Copilot with Claude 3.7 Sonnet Thinking (in both Visual Studio Code and JetBrains IntelliJ)
  • the Continue plugin using DeepSeek-R1-Distill-Llama-70B hosted in the US by Together AI (in both Visual Studio Code and JetBrains IntelliJ)
  • aider for direct code editing, using both Claude Sonnet and DeepSeek-R1-Distill-Llama-70B hosted in the US by Together AI

aider defaults to Claude Sonnet. You can find my settings for a US-hosted DeepSeek at gist.github.com/simbo1905
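As a hedged sketch of what such settings can look like (the model identifier below is illustrative; check the aider and LiteLLM docs for the current name), aider reads a .aider.conf.yml from the repo root or your home directory:

```
# .aider.conf.yml -- point aider at a US-hosted DeepSeek on Together AI
model: together_ai/deepseek-ai/DeepSeek-R1-Distill-Llama-70B
# your Together AI API key must be set in the environment (see the aider docs)
```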

End.