Ask HN: Advice for someone who wants to try AI-assisted coding?
inglor_cz
6 days ago
Hi, I am a soon-to-be-47-year-old developer with C, C++, Java, PHP, Python, and TypeScript experience, working in the field since 2002.

I am starting a new project and I would like to experiment with some AI assistants when it comes to coding. It will be a Python project that accesses the Microsoft cloud using the Graph v1.0 API.
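
For context, the Graph side of this is plain REST over HTTPS; a minimal sketch of the kind of call I'll be making (token acquisition elided, it will come from MSAL or azure-identity):

  import requests

  GRAPH_BASE = "https://graph.microsoft.com/v1.0"
  token = "<access token from MSAL / azure-identity>"  # placeholder, not a real token

  # Smoke test: fetch the signed-in user's profile
  resp = requests.get(
      f"{GRAPH_BASE}/me",
      headers={"Authorization": f"Bearer {token}"},
      timeout=30,
  )
  resp.raise_for_status()
  print(resp.json().get("displayName"))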

One advantage of the situation is that this project is fairly standalone, so I don't have to feed the assistant 2,000 classes from an existing legacy codebase to give it context to work with.

What would you recommend?

Are there any dark patterns involved, e.g. de facto uncancellable subscriptions that will make me cancel my card instead?

I would be happy to hear about your experiences and tales from the battlefield.

viraptor 6 days ago
Cursor is kind of the default these days and has reasonable pricing. Both the completion and the agent work well. You could nitpick various options, but really it's ok and you can do a lot worse than Cursor.

An alternative is buying credits from a specific provider and using them with Aider, which is also not bad.

None of the major players are likely to abuse your card. Just make sure you're either on a prepaid plan or using prepaid credits with no auto top-ups.

bad_haircut72 (replying to viraptor) 6 days ago
Cursor certainly isn't the default! No idea where you're getting that? If anything I would say Claude Code is the default, but realistically all the tools are converging to be much of a muchness.
the__alchemist (replying to bad_haircut72) 6 days ago
Claude is so Mid-summer 2025. Crayon is the new hotness; everyone's using it and getting 11x gains.
viraptor (replying to bad_haircut72) 6 days ago
Mostly from what I see used. If you have actual stats that disagree, please post them. It's hard to find hard numbers.
subsection1h (replying to viraptor) 6 days ago
I don't know of any usage stats, but Claude has been mentioned in nearly twice as many comments on HN during the past year compared to Cursor:

https://hn.algolia.com/?query=%22Claude%22&type=comment&dateRange=pastYear

https://hn.algolia.com/?query=%22Cursor%22&type=comment&dateRange=pastYear

the__alchemist (replying to subsection1h) 5 days ago
It would behoove us to no longer rely on metrics like comment count from anonymous users going forward! This used to be a reasonable proxy, but I do not believe it is meaningful now.
viraptor (replying to subsection1h) 5 days ago
You're comparing it to "Claude", not "Claude Code", which brings up lots of unrelated entries.
uberman 6 days ago
All the free tiers of the top products will allow you a day's worth of coding if you take on bite-sized tasks, such as defining methods, and actually proofread the results so you understand them.
PretzelPirate 6 days ago
I started with GitHub Copilot and VS Code. It has a free tier and is an easy way to try out AI-assisted development.
snaga 6 days ago
I have tried spec-driven development with Gemini (Gemini CLI) and VS Code for a few weeks.

It is definitely inspired by Amazon's Kiro. (Unfortunately, I'm still on the waitlist.)

It works fine for me, and I would recommend this approach to understand how AI-assisted coding works.

tinytuna (replying to snaga) 5 days ago
It’s not too hard to modify Kiro’s extension source code to skip the waiting list :)
mikewarot 6 days ago
I've heard, and it seems reasonable to me, that you should never get completely out of the loop when it comes to the actual code. Type it all in yourself so that you can get a real sense of it. Do not copy and paste.

If you do otherwise, you're just creating legacy code at astonishing speed. That is fine as long as you throw it away after you're done.

the__alchemist 6 days ago

  - Don't let it create function signatures or structs/classes.
  - Instruct it explicitly not to change variable names, or things unrelated to your query
  - Pass it exactly the information and code it needs to solve the task, and no more
  - If it starts going off the rails, or you're more than a few iterations in, start over (likely with modified context, including any parts of its output that it got right).
cellis 6 days ago
Claude Code is certainly not as easy to engineer with, though it is less expensive. For instance, the @feature isn't as robust as Cursor's, in my experience. Also, the lack of shift+enter is quite a pain. Linting doesn't "just work"; Cursor with Claude 4.0 Max is really thorough, I think even better than GPT-5. It's not that Sonnet is better, but whatever "ensemble" of models Cursor uses with Sonnet seems to both adhere to instructions and make tool calls better than with GPT-5. GPT-5 often says what it will do and then says "say go and I'll go", or says "you should run command x", but doesn't just DO it. Also, for bug fixes in difficult codebases, nothing beats Gemini 2.5 Pro.
MilnerRoute 6 days ago
Lex Fridman did an interview with Ruby on Rails creator David Heinemeier Hansson. I think DHH actually keeps his AI in a separate window and insists on retyping the suggestions it displays. (The act of typing the code somehow plants it more deeply in his memory, and he finds it more educational.)

From the interview I got the impression that AI can help you learn or rob you of learning, depending on how you use it. Like, you can go fast or you can go slower-but-more-educational... Depends on what you're after, I guess.

bravesoul2 (replying to MilnerRoute) 5 days ago
I like the retyping suggestion. It sounds primitive, but I imagine there is more of a sense of ownership that way, and you can spot errors or things you don't understand and want to know more about.
mradek 5 days ago
Cursor + Claude Code.

Take a couple of hours to walk CC through your code and generate a CLAUDE.md. Note any architecture patterns you already have, or want to have, in your project.

This is probably the most important thing you can do to drive better results. As you work, try to ensure you're getting independently testable steps as you solve a problem. Take time planning, and always have it reference your CLAUDE.md and existing code patterns. At the end of each step, I have CC decide whether to update the CLAUDE.md if there are any foundational changes.
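
To make that concrete, a stripped-down CLAUDE.md might look something like this (every name in it is made up, it's only there to show the shape):

  # CLAUDE.md

  ## Project
  Python 3.12 service talking to Microsoft Graph v1.0 via requests.

  ## Conventions
  - Type hints everywhere; run ruff and mypy before calling a task done.
  - All Graph calls go through graph_client.py (a made-up module name); no raw HTTP from business logic.
  - Never rename existing variables or touch files outside the current task.

  ## Workflow
  - Plan first: list the files you intend to change and wait for approval.
  - Keep each change small and independently testable; add a pytest test per step.
  - After each step, record any new architectural decision back in this file.

Keep it short; it's there to capture your conventions, not to be documentation.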

The trick is to have an idea of what you're expecting out of these tools. If you can use the tool to break down the work into individual pieces, you will find it is a really fun and productive way to build software. You still have to think, but you are able to cover a lot more ground faster. I can't type out 4 files that are in my brain in 10 seconds.

jklein11 (replying to mradek) 5 days ago
Do you have any example CLAUDE.md files that I can use as a reference?
ismailrohaga 4 days ago
Any AI would do, but if you want to try Claude Code, instead of paying the $20/mo subscription you can start by just using the API (https://console.anthropic.com). Top up $5 and just try prompting.
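
Getting it running on pay-as-you-go is roughly this (assuming you already have Node installed):

  npm install -g @anthropic-ai/claude-code
  export ANTHROPIC_API_KEY=sk-ant-...   # key created at console.anthropic.com
  cd your-project && claude             # bills your API credits instead of a subscription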

The thing with AI is that you'd better give it "direction": detailed instructions or expectations. It's better for you to do that rather than asking it to decide (because these models tend to always agree with you).

So in Claude you can set up a CLAUDE.md. There are no rules here, but with AI we're talking about tokens, and I usually write mine in a "JSON" structure despite it being a Markdown file. Why JSON? It's more structured, and (correct me if I'm wrong) most AIs "speak" it well.
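
For example, something like this (purely illustrative; the keys are whatever makes sense for your project):

  {
    "project": "Python service talking to Microsoft Graph v1.0",
    "style": ["type hints everywhere", "black formatting", "pytest for new functions"],
    "never": ["rename existing variables", "touch files outside the current task"],
    "workflow": ["plan first", "small, independently testable steps"]
  }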

trcarney 3 days ago
I personally prefer the terminal-based approaches over the IDE integrations. So my recommendations are:

- Use OpenCode if you want to experiment with different models; if not, just use Claude Code.

- Use Git to your advantage. Always start a prompt on a fresh commit. This makes it easier to see everything that was changed, and it makes it very easy to undo all the changes and start over (rough sketch of the loop below).
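
The loop is nothing fancy, just plain git:

  git add -A && git commit -m "checkpoint before agent run"
  # ...let the agent make its changes...
  git diff                         # review exactly what it touched
  git restore . && git clean -fd   # unhappy? discard everything and re-prompt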

Jahboukie 2 days ago
I have a simple solution that you can try with no upfront cost: use GitHub Copilot in VS Code and try GPT-5 or Gemini 2.5 Pro. Also try this open-source tool, https://github.com/jahboukie/antigoldfish, an air-gapped, zero-trust persistent memory CLI for AI agents and developers. It makes code context and decisions durable, auditable, and portable without relying on any cloud services. It's built for regulated and offline environments where transparency and operator control are non-negotiable.

Install it globally, initialize it in your project, and it will be available inside VS Code without being an extension. Point your AI coding agent at agm --help and it will then use the commands to index your entire codebase.

Core features:

Transparency & Operator Parity
- Global flags: --trace, --dry-run, --json, --explain
- Receipts: .antigoldfishmode/receipts/*.json with digests
- Journal: .antigoldfishmode/journal.jsonl

Zero-Trust Policy Broker (local, auditable)
- agm policy status - show effective rules
- agm policy allow-command <cmd> - permit a command
- agm policy allow-path <glob> - permit a path
- agm policy doctor [--cmd] [--path] - explain pass/fail and print the fix
- agm policy trust <cmd> --minutes 15 - short-lived dev convenience token

Code-aware Index & Search
- agm index-code [--symbols] [--path .] [--include ...] [--exclude ...] (add --diff to skip unchanged files after an initial baseline run)
- agm search-code <query> [-k N] [--preview N] [--hybrid] [--filter-path ...] - hybrid FTS + vector rerank; sqlite-vss when available, otherwise a local cosine fallback

Air-Gapped Context (.agmctx)
- agm export-context --out ./ctx.agmctx --type code [--zip] [--sign]
- agm import-context ./ctx.agmctx[.zip] (verification + receipts)
- Exports now include: manifest.json, map.csv, vectors.f32, notes.jsonl, checksums.json, and optionally signature.bin + publickey.der (if signed)
- Supports a zipped bundle (ctx.agmctx.zip) with identical verification logic
- Deterministic integrity & exit codes (see Status / Air-gapped integrity)

revskill 2 days ago
Do not overprompt. Quality matters in prompts, including the context you provide.