Things:

- My favourite tools are [[Aider]], [[Claude]] (and Claude Code) and [[gptel]] in [[Emacs]].
- I use and create [[Model Context Protocol|MCP]] servers.
  - I use [[Model Context Protocol|MCP]] servers with [[Claude Code]].
  - I use MCP servers with [[Claude on desktop]].
- I'm not using function and tool calling to its full potential.
- I use [[gptel]] in [[Org-mode]] with `gptel-org-set-topic`.
  - I can execute code and show the code execution with [[Org-mode]] Babel. It's like [[Literate programming]].
  - I like to select some code and ask the [[Large language model|LLM]] to transform it based on my instructions.
- [[Claude Projects]] is magic. By the time you've opened your third browser tab, I've already found three solutions.
  - I use [[Repomix]] to get content into [[Claude Projects]].
    - See also [gitingest](https://gitingest.com), [[Simon Willison]]'s [`files-to-prompt`](https://github.com/simonw/files-to-prompt) and [Repo prompt](https://repoprompt.com/).
  - I have a [[Claude Projects|Claude project]] that creates [[Repomix]] commands for me, so I don't have to remember the syntax.
  - I use [[Claude Projects]] to create quick one-shot [[Python]] programs with [[uv]].[^1][^2] (A minimal sketch of one is included below.)
- It's quick and easy to make [[Command line interface|CLI]] programs into guided journeys, with automated progress tracking and contextual help (see the second sketch below). It costs nothing to implement this, and [[Large language model|LLMs]] rarely get it wrong. Before, I would have been too lazy to do things like that. So [[Large language model|LLMs]] affect [[User experience design]].
- Developers are lazy ("efficiency-seeking") and dislike tedious work. Detailed documentation, interactive tutorials and context-sensitive help can now be implemented with minimal effort. Good [[User experience|UX]] is democratised. It'll have a [[Ripple effect]] throughout society.
- Anything that could bring value but fell victim to laziness can now get done, and that will improve [[User experience]] a lot. You see this in README files, for example.
- [[Large language model|LLMs]] often write better [[Git]] commit messages than I would write myself. [[Aider]] is especially good at this. If I don't like the message, I just amend it. It serves as inspiration anyway.
- Once in a while I check how far [[Visual Studio Code]] and [[GitHub Copilot]] have come. I'm usually disappointed.
- I've tried [[Zed]], but it didn't impress me (might be a skill issue).
- Once in a while I check how [[Cursor]] is doing. It's better than [[Visual Studio Code]], but it's not polished.
- I often notice that people who don't use AI for coding waste a lot of mental bandwidth on mundane and repetitive tasks.
- I follow [[Simon Willison]]'s blog closely. He shares cutting-edge ideas.
- [AICodeKing](https://www.youtube.com/@AICodeKing) and [IndyDevDan](https://www.youtube.com/@indydevdan) have interesting takes on AI coding.
- The wrong prompt in [[Aider]] makes it go off the rails quickly. You have to be smart about writing prompts.
- In [[Aider]], use `/reset` and `/undo` often instead of piling "fixes" on top of previous mistakes. It's much better to undo than to keep bad code in the chat context; the bad code will inspire even more dumb stuff from the model. Don't talk to it like a human. Rewind time and ask again. Don't poison the context window with useless information. Keep a high [[Signal-to-noise ratio]] in the context.
- Prompt for small changes in [[Aider]]. Ask for a todo list of small changes or improvements and use it as inspiration for future prompts. Use `/architect` to make plans.
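- For reference, a minimal sketch of such a one-shot [[uv]] script, along the lines of the footnoted posts. It assumes uv's support for inline script metadata (PEP 723); the `httpx` dependency and the URL are only illustrations.

  ```python
  #!/usr/bin/env -S uv run --script
  # /// script
  # requires-python = ">=3.12"
  # dependencies = [
  #     "httpx",
  # ]
  # ///
  """One-shot script: uv resolves and installs its own dependencies on first run."""

  import httpx


  def main() -> None:
      # Illustrative only: fetch a page and print the HTTP status code.
      response = httpx.get("https://example.com")
      print(response.status_code)


  if __name__ == "__main__":
      main()
  ```

  Make it executable and run it directly; uv caches the environment, so later runs start quickly.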
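- And a toy sketch of the "guided journey" idea: a [[Command line interface|CLI]] that always tells you where you are in the flow and can explain each step. The step names here are hypothetical; only the pattern matters.

  ```python
  """Toy sketch: a guided CLI with progress tracking and contextual help."""
  import argparse

  # Hypothetical steps; a real tool would define its own.
  STEPS = {
      "init": "Create a working directory and a default config file.",
      "fetch": "Download the input data into ./data.",
      "build": "Run the build and summarise any warnings.",
  }


  def main() -> None:
      parser = argparse.ArgumentParser(description="Guided setup. Run the steps in order.")
      parser.add_argument("step", choices=list(STEPS), help="which step to run")
      parser.add_argument("--explain", action="store_true",
                          help="describe the step instead of running it")
      args = parser.parse_args()

      position = list(STEPS).index(args.step) + 1
      print(f"[step {position}/{len(STEPS)}] {args.step}")  # automated progress tracking
      if args.explain:
          print(STEPS[args.step])  # contextual help
          return
      print(f"Running: {STEPS[args.step]}")
      if position < len(STEPS):
          print(f"Next up: {list(STEPS)[position]}")  # nudge towards the next step


  if __name__ == "__main__":
      main()
  ```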
- Don't ask [[Aider]] to "improve" large portions of the code. It will try to add documentation and tests all at once, and it will overwhelm you. Unless you are [[Vibe coding]].
- Use `! make` to run commands with [[Aider]] so the output is fed into the chat context. You can ask about error messages and make it easy for the LLM to understand how the program looks and behaves.
- It takes practice to prompt well. Make lots of mistakes and always look for improvements.
- Just make something. You need to get good at [[Speedrunning]].
- The future is already here – it's just not evenly distributed. I see no point in AI pessimism. It's a new tool. Learn from the best, practice, be creative, and use it to its fullest potential.

I feel exactly the same way:

> My underlying fascination with the new technology is the only way I have managed to figure it out, so I am sympathetic when other engineers claim LLMs are “useless.”

## See also

- [crawshaw - How I program with LLMs](https://crawshaw.io/blog/programming-with-llms)
  - Pretty much mirrors my experience.
  - David Crawshaw is the co-founder of [[Tailscale]], by the way.
- [Ask HN: Recommendation for a SWE looking to get up to speed with latest on AI | Hacker News](https://news.ycombinator.com/item?id=42256093)
- [Things we learned about LLMs in 2024](https://simonwillison.net/2024/Dec/31/llms-in-2024/)
- [AI-enhanced development makes me more ambitious with my projects | Hacker News](https://news.ycombinator.com/item?id=35382698)
- [Repo prompt](https://repoprompt.com/)
- [Composio MCP servers](https://mcp.composio.dev/)

[^1]: [Using uv as your shebang line](https://akrabat.com/using-uv-as-your-shebang-line/)
[^2]: [Lazy self-installing Python scripts with uv](https://treyhunner.com/2024/12/lazy-self-installing-python-scripts-with-uv/)