Current AI is not even designed to do that. It is just a very sophisticated auto-complete.
It is sophisticated enough to fool some VCs into believing you can chop your round peg to fit a square hole. But there is no reason to expect a scalable solution.
Thousands of users. 40+ GitHub stars. The original draft took 30 minutes. I've since added numerous feature requests, each taking about 5 minutes a pop.
I never wrote a single line of that code.
Furthermore, my startup, https://gametorch.app/, has 110 sign-ups, paying users, and millions of impressions. I never wrote any of that code either. Typing it out at ~100 wpm is far too slow.
I’d want more control over what’s remembered and when. Curious if anyone here has used this yet — is it actually helpful in practice?
That said, I totally agree about control. I wish there were a more obvious way to “pause” or “reset” memory mid-session instead of diving into settings. It’s useful, but still a little opaque.
I use the "memory" feature of ChatGPT. Taking a look right now, it has about 30 items saved from me. Some are narrow, like "Is using egui for a UI task, particularly related to configuring smooth automatic scrolling in a scrollarea.", which was useful for maybe the ~3 chats I had about it; others, like "Prefers more accuracy in terminology and is looking to represent LLMs in a detailed and structured way.", are more broadly applicable.
You can remove any of them, and you can also add entries manually by explicitly telling it you want something remembered.
I'm not sure how useful it is. It's nice that it correctly "knows" I'm mostly on Arch Linux but run my servers on NixOS, so when I ask it for new unix commands I usually get something that works on both, or two versions. But sometimes it also infers something incorrectly: I didn't specify otherwise in the prompt because I didn't think of it, and it picked up the wrong assumption from the memories.
It works without AI, but there's an MCP server and such, so you should be able to connect Claude etc. to your emulator/device now.
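For context on what "connect Claude via MCP" means at the wire level: MCP is JSON-RPC 2.0 underneath, so a client invoking a tool the emulator exposes sends a `tools/call` request. A minimal sketch of that message shape, where the tool name `press_button` and its arguments are hypothetical (the real tool names depend on the server):

```python
import json

# Hypothetical MCP tool invocation: the client (e.g. Claude) asks the
# emulator's MCP server to run a tool named "press_button". The method
# "tools/call" comes from the MCP spec; the tool name and arguments here
# are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "press_button",
        "arguments": {"button": "A"},
    },
}

# Serialize to the JSON that would go over the transport (stdio or HTTP).
wire = json.dumps(request)
print(wire)
```

The server would reply with a JSON-RPC response carrying the tool's result, which the model then reads before deciding its next action.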
The AIs I find useful are still just LLMs, and an LLM's power comes from having a massive amount of text to work with, stringing together word math to come up with something OK. That's a lot of data coming together to get things... kinda right... sometimes.
I don't think there's that data set for "use an app" yet.
We've seen from "AI plays games" efforts that there have been some pretty spectacular failures. It seems like "use an app" is a different problem.
Yes and no, I'd say. On one hand, apps tend to be usable by even the dumbest person (they are "users" after all ;) ), and I'm sure there are more people out there who can adequately use most apps than people who can beat Pokemon, even if one might generally be easier than the other.
It's kind of hard to judge, though. I last played Pokemon (Red) when it launched in my country and I was 8 or so. Maybe I underestimate the average person, but I feel like I generally overestimate people.