I am starting a new project and I would like to experiment with some AI assistants when it comes to coding. It will be a Python project that will access Microsoft cloud using the Graph 1.0 API.
One advantage of the situation is that this project is a bit standalone and I don't have to feed it 2000 classes from an existing legacy codebase to give it context to work with.
What would you recommend?
Are there any dark patterns involved, e.g. de-facto un-cancellable subscriptions that will make me cancel my card instead?
I would be happy hearing your experience and tales from the battlefield.
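For reference, this is roughly the kind of Graph v1.0 call the project will be making; a stdlib-only sketch assuming you already have an OAuth access token (the helper names are mine, not from any SDK):

```python
import json
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def graph_url(resource: str) -> str:
    """Build a Graph v1.0 endpoint URL, e.g. graph_url('me/messages')."""
    return f"{GRAPH_BASE}/{resource.lstrip('/')}"

def auth_headers(access_token: str) -> dict:
    """Bearer-token headers that Graph expects on every request."""
    return {"Authorization": f"Bearer {access_token}"}

def get_me(access_token: str) -> dict:
    """Fetch the signed-in user's profile; needs a real token to actually run."""
    req = urllib.request.Request(graph_url("me"), headers=auth_headers(access_token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In practice you'd get the token via MSAL or the Graph SDK, but having a tiny baseline like this makes it easy to judge whether an assistant's generated code is sane.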
It is definitely inspired by Kiro by Amazon. (Unfortunately, I'm still on the waitlist.)
It works fine for me, and I would recommend this approach to understand how AI-assisted coding works.
If you do otherwise you're just creating legacy code at astonishing speed. This is fine as long as you throw it away after you're done.
- Don't let it create function signatures or structs/classes.
- Instruct it explicitly not to change variable names, or things unrelated to your query
- Pass it exactly the information and code it needs to solve the task, and no more
- If it starts going off the rails, or you're more than a few iterations in, start over. (Likely with modified context, including any parts of its results it did correctly)
From the interview I got the impression that AI can help you learn or rob you of learning, depending on how you use it. Like, you can go fast or you can go slower-but-more-educational... Depends on what you're after, I guess.
Take a couple hours to walk CC through your code and generate a CLAUDE.md. Note any architecture patterns you have already, or want to have, in your project.
This is probably the most important thing you can do to drive better results. As you work, try to ensure you're getting independently testable steps as you solve a problem. Take time planning, and always have it reference your CLAUDE.md and existing code patterns. At the end of each step, I have CC decide whether to update the CLAUDE.md if there are any foundational updates.
The trick is to have an idea of what you're expecting out of these tools. If you can use the tool to break down the work into individual pieces, you will find it a really fun and productive way to build software. You still have to think, but you cover a lot more ground faster. I can't type out 4 files that are in my brain in 10 seconds.
the thing with AI is that you'd better give it "direction": detailed instructions or expectations. it's better for you to decide than to ask it to (because it tends to always agree with you)
so in Claude you can set up a CLAUDE.md. there are no rules here, but with AI we're talking about tokens, and I usually write it in a "json" structure despite it being a markdown file. why json? it's more structured and, correct me if I'm wrong, most AIs "speak" it well
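A sketch of what that JSON-in-markdown idea might look like; the project name and rules below are made-up examples, and keeping the block valid JSON means you can sanity-check it mechanically:

```python
import json

# Hypothetical JSON block you could paste into CLAUDE.md; valid JSON keeps
# the structure unambiguous for the model (and lets you lint it yourself).
claude_rules = """
{
  "project": "graph-sync",
  "language": "python",
  "rules": [
    "do not rename existing variables",
    "do not touch files outside the task scope",
    "prefer small, independently testable functions"
  ]
}
"""

parsed = json.loads(claude_rules)  # sanity-check: raises ValueError if malformed
print(parsed["rules"][0])
```

Whether JSON genuinely beats plain markdown for the model is debatable, but the lint-ability alone is a nice side effect.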
- Use OpenCode if you want to experiment with different models, if not just use Claude Code
- Use Git to your advantage. Always start a prompt on a fresh commit. This will make it easier to see everything that was changed and makes it very easy to undo all the changes and start over.
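The fresh-commit rule can even be automated; a small sketch (function names are mine) that parses `git status --porcelain` output and refuses a dirty tree before you hand work to the agent:

```python
import subprocess

def tree_is_clean(porcelain_output: str) -> bool:
    """True if `git status --porcelain` printed nothing, i.e. no pending changes."""
    return porcelain_output.strip() == ""

def ready_for_prompt() -> bool:
    """Check the actual repo; run this before starting a new AI prompt."""
    out = subprocess.run(
        ["git", "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    ).stdout
    return tree_is_clean(out)
```

With a clean tree as the baseline, `git diff` shows exactly what the agent touched and `git reset --hard` undoes all of it in one step.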
Install globally, initialize it in your project and it will be available inside VS Code without it being an extension. Point your AI coding agent to use agm --help and it will then use the commands to index your entire codebase.
Core features

Transparency & Operator Parity
- Global flags: --trace, --dry-run, --json, --explain
- Receipts: .antigoldfishmode/receipts/*.json with digests
- Journal: .antigoldfishmode/journal.jsonl

Zero-Trust Policy Broker (local, auditable)
- agm policy status — show effective rules
- agm policy allow-command <cmd> — permit a command
- agm policy allow-path <glob> — permit a path
- agm policy doctor [--cmd] [--path] — explain pass/fail and print the fix
- agm policy trust <cmd> --minutes 15 — short-lived dev convenience token

Code-aware Index & Search
- agm index-code [--symbols] [--path .] [--include ...] [--exclude ...] (add --diff to skip unchanged files after an initial baseline run)
- agm search-code <query> [-k N] [--preview N] [--hybrid] [--filter-path ...] (hybrid FTS + vector rerank; sqlite-vss when available, otherwise local cosine fallback)

Air-gapped Context (.agmctx)
- agm export-context --out ./ctx.agmctx --type code [--zip] [--sign]
- agm import-context ./ctx.agmctx[.zip] (verification + receipts)
- Exports include: manifest.json, map.csv, vectors.f32, notes.jsonl, checksums.json, optional signature.bin + publickey.der (if signed)
- Supports a zipped bundle (ctx.agmctx.zip) with identical verification logic
- Deterministic integrity & exit codes (see Status / Air-gapped integrity)
An alternative is buying credit from a specific provider and using that with Aider, which is also not bad.
None of the major players are likely to abuse your card. Just make sure you're either using a prepaid plan or prepaid credits with no auto topups.
https://hn.algolia.com/?query=%22Claude%22&type=comment&dateRange=pastYear
https://hn.algolia.com/?query=%22Cursor%22&type=comment&dateRange=pastYear