And considering the current web, which of these problems are technically unsolvable at the moment? We could have a privacy-aware browser if we wanted to. Protocols are mostly OK. HTML and CSS are actually good in 2020. JavaScript can be disabled. Ads are filterable, and there's nothing preventing them from spreading through other channels.
If that’s conceptually just a downgrade to the bare minimum, how is it much better than Lynx?
Some people just want to experience a little bit of what that was like, and Lynx isn't it. That's how I tend to look at projects like Gemini and Gopherspace necromancy. It's something a little bit different, a little bit cool, not subject to Eternal September, and just technical enough to scare all the suits away because the suits already have the web.
If any of these projects have any amount of commercial success, then yes, we likely would see something like what happened with the commercialization of the web happen all over again. In the meantime, it's a place to put stuff that a small niche of like-minded people might discover and read and maybe experience a little bit of that early magic of the early web themselves. You could just have a website and a blog, or you can have a gopherspace and a... uh, glog? Splog? I'll leave that one for someone else to figure out. But more importantly, you can do so without someone trying to run remote code in a client on your machine, in a standard language that, for some reason, every client in the world thought it was a good idea to incorporate an interpreter for.
I don't know, Lynx dates to 1992. Can't get much older than that with Web software. It has MS-DOS versions and was written by someone who became a founding engineer of Netscape.
One of two things:
1. Nothing. They're trying to solve a social problem using technical methods.
2. The current UI ensures it never takes off. It remains a geek preserve, VC-oriented geeks need not apply.
Also, as I think they point out in an FAQ, you know that when you go to a Gemini site it will behave in a certain way. When you go to an http site, you don’t know if it will be HTML 1.0 style or if it won’t even render without masses of js. There’s no boundary between simple sites and tracking-filled garbage. In Gemini you know you’re clicking an http link and you accept the risks before doing so; or you can stay in your safe Gemini zone instead.
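As a concrete illustration of that boundary, here's a minimal sketch (the function names are my own, and it assumes the standard gemtext `=>` link-line syntax) of how a client could flag links that would leave the Gemini zone before you click them:

```python
# Sketch: classify gemtext link lines so a client can warn before
# following a link out of Gemini. In gemtext, a link line starts with
# "=>", followed by whitespace, a URL, and an optional label.
from urllib.parse import urlparse

def parse_link_line(line):
    """Return (url, label) for a gemtext link line, or None for other lines."""
    if not line.startswith("=>"):
        return None
    rest = line[2:].strip()
    if not rest:
        return None
    parts = rest.split(maxsplit=1)
    url = parts[0]
    label = parts[1] if len(parts) > 1 else url
    return url, label

def leaves_gemini(url):
    """True if following this link would leave Gemini (e.g. http/https)."""
    scheme = urlparse(url).scheme
    # Relative links have no scheme and resolve against the current
    # gemini:// host, so they stay inside the safe zone.
    return scheme not in ("", "gemini")
```

A client using this could render http links in a different color or pop a confirmation prompt, which is exactly the "accept the risks before doing so" step described above.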
I've been following the Gemini mailing list for some time now, and I am still not a fan. It feels a lot like reinventing the wheel, instead of working with existing protocols (like gopher). It also seems to be a passion project of one person, albeit with a large community of followers.
---
[1] Shameless plug: http://www.jaruzel.com/gopher/gopher-client-browser-for-windows
Not a fan of Gemini. Some of its decisions (e.g. no content length) are questionable given its stated goals (including simplicity of implementation and working on resource-constrained hardware).
Resource constrained hardware is not a stated goal, by the way, quite the opposite. See 2.13: https://gemini.circumlunar.space/docs/faq.html (and 2.11 for content-length).
I see how it prevents some features, not how it introduces complexity.
From the cited link:
> Even without this header, it is possible (unlike in Gopher) for clients to distinguish between a Gemini transaction which has completed successfully and one which has dropped out mid-transfer due to a network fault or malicious attack via the presence or absence of a TLS Shutdown message.
The former is more of a pain from a client's standpoint.
Take the dross out of the web and serve it to those with the skills (and/or determination) to spend their attention in a dramatically different way to the vacuous consumers of media who turn their brains off for hours a day playing Farmville (or whatever the latest cycle burner is on Farcebook).
What an amazing development, I hope nobody comes along and poisons the process for profit.
I often joke that making computers easy to use was, in retrospect, a huge mistake.