* Gopher: https://beta.shodan.io/search?query=port%3A70+gopher
* Finger: https://beta.shodan.io/search?query=product%3Afingerd
Create ~/.plan and clients can query it remotely, just like in the old days.
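For anyone who's forgotten how little there is to it: finger is just a one-line request over TCP port 79. A rough Python sketch (host and user name are placeholders):

    import socket

    def finger(user, host, port=79):
        # The whole protocol: send "user\r\n", read text until the server closes.
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(user.encode("ascii") + b"\r\n")
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    print(finger("someuser", "example.org"))   # prints ~/.plan, .project, etc.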
On the other hand, it was fun making it, given how ridiculously simple the Gopher protocol is. Sadly it seems that "Gemini" requires TLS, which kills that simplicity - Gopher can be implemented by anyone; TLS, I think, is a bit harder.
I'd be very interested in seeing an attempt at a minimal, portable standalone implementation of TLS suitable for Gemini's needs.
TLS on the other hand has multiple implementations, and the situation is different per OS. I'm not sure how stable the APIs are either, or if you can release - say - a binary that can expect the API to be there (like you can expect a socket API to be there).
(and besides, even if for what you're doing you'd need to implement TCP/IP yourself, requiring TLS is still additional work on top of that)
I think Windows does have a stable TLS API, but TBH I'm not sure about macOS (though it isn't like macOS is a bastion of stability), and AFAIK Linux doesn't have anything like that. I guess you could use OpenSSL and assume it's there.
A library could fix that, but it imposes several restrictions you may not want - like the language you'll use (e.g. I might want to use Free Pascal but only C libraries are available) or the compiler version (e.g. I might want to use C89 but only C99 libraries are available).
It is largely philosophical: I'd expect something like a 'tiny web for hackers' to be easily implementable by one person. It might not be practical, but then again, the practical choice is to just stick with the web.
The original idea was that it should be something anyone can implement as a weekend project. Most major languages seem to have fairly stable TLS libs (I know from the current clients that Go, Rust, Python, and some version of Lisp - can't remember which - all do).
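For what it's worth, the bare request/response part really is only a handful of lines once a stock TLS library is available. A rough sketch in Python using only the standard library (certificate verification is switched off here, which a real client would replace with TOFU-style pinning):

    import socket
    import ssl

    def gemini_fetch(url, host, port=1965):
        # Most Gemini servers use self-signed certs, so skip verification in
        # this sketch; a real client would pin the certificate instead.
        context = ssl.create_default_context()
        context.check_hostname = False
        context.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as raw:
            with context.wrap_socket(raw, server_hostname=host) as tls:
                tls.sendall(url.encode("utf-8") + b"\r\n")   # request = URL + CRLF
                response = b""
                while True:
                    data = tls.recv(4096)
                    if not data:
                        break
                    response += data
        header, _, body = response.partition(b"\r\n")         # "<status> <meta>" then body
        return header.decode("utf-8"), body

    header, body = gemini_fetch("gemini://gemini.circumlunar.space/",
                                "gemini.circumlunar.space")
    print(header)   # e.g. "20 text/gemini"

The remaining 100-200 lines of a "comfortable" client go into status-code handling, redirects, and rendering text/gemini.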
spec: https://portal.mozz.us/gemini/gemini.circumlunar.space/docs/spec-spec.txt
docs: https://portal.mozz.us/gemini/gemini.circumlunar.space/docs/
software: https://portal.mozz.us/gemini/gemini.circumlunar.space/software/
test your gemini client: https://portal.mozz.us/gemini/gemini.conman.org/test/torture/
using the protocol ...
known servers:

    printf 'gemini://gemini.circumlunar.space/servers/\r\n' | openssl s_client -connect gemini.circumlunar.space:1965 -ign_eof

spec:

    printf 'gemini.circumlunar.space/spec/spec-spec.txt\r\n' | socat - ssl:gemini.circumlunar.space:1965

docs:

    echo -e '/docs\r\n' | socat - ssl:gemini.circumlunar.space:1965

software:

    echo -e '/software\r\n' | socat - ssl:gemini.circumlunar.space:1965

the protocol allows for virtual hosting but does not (need to) use SNI:

    printf '/software/\r\n' | openssl s_client -connect 168.235.111.58:1965 -servername gemini.circumlunar.space -ign_eof
    printf 'gemini://gemini.circumlunar.space/software/\r\n' | openssl s_client -connect 168.235.111.58:1965 -ign_eof

    apt install build-essential rustc cargo libgtk-3-dev libgdk-pixbuf2.0-dev libssl-dev
Gemini sites load so fast, it's a little crazy. I played around with Gopher a bit, but mostly through browser add-ons and proxies. A native text-only browser is very fast.

Disclaimer: I'm the author of ncgopher.
The modern web still supports lightweight pages (like HackerNews).
> In particular, Gemini strives for simplicity of client implementation. Modern web browsers are so complicated that they can only be developed by very large and expensive projects. This naturally leads to a very small number of near-monopoly browsers, which stifles innovation and diversity and allows the developers of these browsers to dictate the direction in which the web evolves.
...
> Experiments suggest that a very basic interactive client takes more like a minimum of 100 lines of code, and a comfortable fit and moderate feature completeness need more like 200 lines. But Gemini still seems to be in the ballpark of these goals.
...
> Gemini is designed with an acute awareness that the modern web is a privacy disaster, and that the internet is not a safe place for plaintext. Things like browser fingerprinting and Etag-based "supercookies" are an important cautionary tale: user tracking can and will be snuck in via the backdoor using protocol features which were not designed to facilitate it. Thus, protocol designers must not only avoid designing in tracking features (which is easy), but also assume active malicious intent and avoid designing anything which could be subverted to provide effective tracking. This concern manifests as a deliberate non-extensibility in many parts of the Gemini protocol.
It seems to me these goals could be achieved while retaining browser compatibility. Just define a strict, automatically verifiable subset of the modern web stack. This could be privacy-friendly, and support easy implementation, while making it enormously more approachable, no?
How does their transport protocol stack up against HTTPS/TLS? Is there unnecessary complexity that can be avoided by reinventing the wheel here?
The transport protocol used by Gemini is protected via TLS, just like HTTPS. It has the great advantage that the only way for a user to identify themselves is via mutual TLS (client certificates), which is basically the only 100% secure way to do it (the fact that it's practically impossible to use mutual TLS on the web says a lot about the incentives going on there). Gemini absolutely does not re-invent the wheel here.
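For the curious, the client side of that is only a couple of extra lines on top of a plain TLS connection. A hedged sketch with Python's ssl module (the hostname and the cert/key paths are placeholders, and a real client would pin the server's certificate rather than skip verification):

    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE                   # TOFU pinning in a real client
    context.load_cert_chain("client.crt", "client.key")   # self-generated identity (placeholder paths)

    host = "example.gemini.host"                           # placeholder
    with socket.create_connection((host, 1965)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            # The server sees the client certificate during the handshake and
            # can treat its fingerprint as a persistent, user-controlled identity.
            tls.sendall(b"gemini://example.gemini.host/private/\r\n")
            print(tls.recv(4096).decode("utf-8", errors="replace"))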
The flags passed to `install` break on macOS, btw, where (annoyingly) it's -d instead of -D to create directories, so I've just been running it directly from the releases folder for now.
I'm not used to using GTK on macOS, and the performance is not good, but it does run and load pages.
Back in the 90s, it was good fun seeing all the stuff people put out on finger. Aside from the link someone else posted to Shodan, is there a list of finger servers anywhere?
(fingering the main host will give you a list of quirky options, including how to get the current server cpu temperature)
- written in C
- Dumb simple
- Fast
Bombadillo is a bit sluggish for my taste.
Also: I definitely agree a sacc-like client would be great!
    $ go version
    go version go1.13.1 openbsd/amd64
Also, I'd love to go back/forward with h/l.

I've read about children climbing up old towers and waiting hours (!) for the school website to load so they can access their homework during the quarantine days. I bet the same information could be delivered in seconds if it were a Gemini or Gopher page rather than a "modern" web page bloated with JavaScript frameworks and everything.
I would even suggest discussing the idea of making it mandatory for certain kinds of organizations to maintain a good lo-fi (e.g. Gemini/Gopher or pure HTML) site, the same way wheelchair ramps and proper fire safety equipment are required.
We need strong whitelist controls in all of the browsers, because when you connect to a website you have no idea if it's going to start downloading 30 MB worth of images and firing off 40 JavaScript requests.
It's gone completely out of control. I say this as a web dev on a rural connection. Not only is it slow, I PAY for this - bandwidth is limited. When some lazy dev doesn't compress their images and they download without me noticing, I pay actual money for that.
If browser preferences had options for auto-disabling JavaScript on a page that has initiated X outbound connections, or auto-stopping when there are 30 img requests on page load, I would be in control of not allowing that. I should also be in control of setting that threshold for different request types.
We don't need a second, lightweight web. We need to fix our tooling at the client level.
I doubt this is hard to implement. Somebody could build a Firefox/Chromium fork (or perhaps even an extension?) implementing this functionality, and they wouldn't have to be a genius. When I needed something like that for a while, I just used a local proxy to cache and limit what gets downloaded (rough sketch of that approach below).
The only problem is that actual web sites are not designed with the possibility of somebody wanting to control them this way in mind, and can easily end up unusable unless you let them download a ton of stuff.
I doubt the problem can be solved without some sort of enforcement by a major power (Google, governments or whatever). At the same time, trying to enforce anything on the existing web doesn't seem to make much sense to me.
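For what it's worth, the local-proxy approach doesn't have to be elaborate. A very naive sketch of a size-capping forward proxy in Python (plain HTTP only - no CONNECT/HTTPS, no robust header parsing - and the 5 MB cap is arbitrary):

    import socket
    import threading

    LIMIT = 5 * 1024 * 1024   # arbitrary per-response byte cap

    def handle(client):
        request = client.recv(65536)          # naive: assume the headers fit in one read
        host, port = None, 80
        for line in request.split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                value = line.split(b":", 1)[1].strip().decode()
                if ":" in value:
                    host, port = value.rsplit(":", 1)[0], int(value.rsplit(":", 1)[1])
                else:
                    host = value
        if not host:
            client.close()
            return
        upstream = socket.create_connection((host, port))
        upstream.sendall(request)
        relayed = 0
        while relayed < LIMIT:                # stop relaying once the cap is hit
            chunk = upstream.recv(65536)
            if not chunk:
                break
            client.sendall(chunk)
            relayed += len(chunk)
        upstream.close()
        client.close()

    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 8888))          # point the browser's HTTP proxy here
    server.listen()
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()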
For instance, in Apple Mail, there is a setting that is labeled: Prompt me to skip messages over X MB. I bet almost nobody ever sets this up. But I do. I get a prompt asking me if I want to download 20MB of attachments. It also lets me skip downloading remote content in messages, but I can override and allow it with a click of a button.
If browsers had similar advanced options, that users needed to manually enable, then those users would understand what's happening when a website isn't working because of the content blocking.
There are lots of possible UI implementations. A button that says "X things were blocked, would you like to reload the page with blocking disabled?" Or replace img tags (etc.) with placeholder buttons, where clicking the placeholder initiates the HTTP request.
Safari has recently done some great UI for controlling which websites have access to autoplay. I would like to see this expanded, so that I can choose to either blacklist or whitelist JavaScript, or images and videos over X MB.
I don't think it needs enforcement - sites already can't count on controlling anything on the client end. We already have APIs to check whether cookies etc. are enabled; we could expand this to check for other fine-grained controls, like whether images are enabled. Or just use the noscript tag, which devs should be doing anyway but usually don't.
Anyways. For those on slow connections... this problem will only get worse over time. Something will eventually need to be done about it, otherwise like 20% of the connections are going to get left behind unable to even use the web.
Neither as ubiquitous nor as necessary, I think.
This would require a community focus on tooling and content.
First, common tools must support Gopher/Gemini output:
* Pandoc
* Template engines like Jinja and Mustache
* Static site generators like Hugo, Gatsby & Jekyll
* Webservers like Apache and Caddy
Then those tools can be used to mirror content like:
* Wikipedia
* ReadTheDocs
* Docsets (e.g. for Dash)
* HackerNews, Lobste.rs, Reddit?
* RFC archives
* Github/Gitlab repo READMEs
Suddenly you could spend your whole day in pure-text mode and never open a browser that does enough twitter to kill your flow.
Recent work has gone into allowing git cloning over Gemini, and there is syntax support for the (admittedly minimal) text/gemini format in vim and emacs.
I would love to see pandoc support (and I believe someone recently mentioned working on it on the mailing list). Great ideas! Pick one and build!
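The format is small enough that tooling support is nearly trivial: text/gemini is little more than headings ("# "), plain lines, preformatted blocks, and link lines ("=> url label"). A rough sketch of generating an index page in plain Python (the post list is made up):

    # Generate a text/gemini index page from a list of (path, title) pairs.
    posts = [  # made-up example data
        ("2020-05-30-hello.gmi", "Hello, Geminispace"),
        ("2020-06-02-gopher.gmi", "Notes on Gopher vs Gemini"),
    ]

    lines = ["# My gemlog", ""]
    for path, title in posts:
        lines.append(f"=> {path} {title}")   # link line: "=> <url> <label>"

    with open("index.gmi", "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")

The same couple of lines would drop into a Jinja template or a static site generator's output pipeline just as easily.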
### 1.1 What is Gemini?
Gemini is a new application-level internet protocol for the distribution of arbitrary files, with some special consideration for serving a lightweight hypertext format which facilitates linking between files.
Gemini is intended to be simple, but not necessarily as simple as possible. Instead, the design strives to maximise its "power to weight ratio", while keeping its weight within "acceptable" limits. Gemini is also intended to be very privacy conscious.
You may think of Gemini as "the web, stripped right back to its essence" or as "Gopher, souped up and modernised a little", depending upon your perspective.
https://gemini.circumlunar.space/docs/faq.txt
Gopher and finger should be obvious, but just in case:
https://en.wikipedia.org/wiki/Gopher_(protocol)
https://en.wikipedia.org/wiki/Finger_protocol
https://www.sdf.org/
(It's what was originally linked to.)
They should either settle on a fully specified file format, e.g. CommonMark or HTML5 (with index.html as the default route), or not try to implement a file format inside a network protocol at all.
In my opinion, file structures (and layouts thereof) should have nothing to do with a network protocol.
The reason the web exploded was that HTML was the best technology for building custom web pages and, more importantly, for letting them interlink. If either of those two aspects is dropped, the concept won't work for the discovery and exploration of new content.
Remember the sparkling unicorn gifs and construction animations everywhere? That's what the web was about.
I do not agree with how JS has exploded over the years - hence writing my own web browser/scraper/proxy [1] - but I do agree with the reasons HTML5/CSS3 make sense on their own, ignoring the scriptable aspects.
For me, as someone trying to build a web browser, the text/gemini concept makes integrating it with the rest of the web super hacky and very prone to future errors. Faking and rendering another file format (depending on the runtime environment) should not be the recommendation.
A much simpler approach would be e.g. simply using "gemini://" for resources inside an HTML(5) file.
Additionally, a killer feature of HTTP/1.1 is resuming downloads of files that were only partially transferred. While I think the practical implementation of 206 ranges is pretty messed up (looking at you, nginx, which cannot count the number of ranges requested), I do agree with its positive aspects.
Something like this has to be integrated into a minimal network protocol; otherwise it cannot be adopted in areas stuck on slow mobile/2G connections.
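For reference, resuming in HTTP/1.1 is just a Range header plus a check for 206; a rough Python sketch with urllib (URL and filename are placeholders):

    import os
    import urllib.request

    url = "https://example.org/big-file.iso"   # placeholder
    path = "big-file.iso"                      # placeholder

    # Resume from wherever the previous download stopped.
    offset = os.path.getsize(path) if os.path.exists(path) else 0
    req = urllib.request.Request(url, headers={"Range": f"bytes={offset}-"})
    with urllib.request.urlopen(req) as resp, open(path, "ab") as out:
        # 206 means the server honoured the range; a 200 means it ignored it
        # and is sending the whole file again, so don't blindly append.
        if resp.status == 206:
            out.write(resp.read())

Gemini, as currently specified, has nothing equivalent: an interrupted transfer starts over from byte zero, which is exactly the concern here.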
[1] https://github.com/cookiengineer/stealth
Depends on which big bang you talk about.
Gopher lost to HTML because Gopher was TUI-oriented when HTML was GUI-oriented. And Gopher lost a second time when Web 2.0 (JS expansion) occurred.
TUI is not so vastly inferior to GUI - at least when it comes to reading and writing and sparse multimedia contents. The Web exploded because GUI appears to be more user-friendly. There are vastly more users that need "friendliness" than users that just need honesty ;-)
In my mind, if one wants to suggest an alternative to the Web, one either has to go back to the passive, text-oriented web or one-up the current Web with something even more interactive and even more graphical.
In my opinion, the only way to successfully achieve the latter is to provide a much easier way to make remote interactive graphical applications, with strong security guarantees of course. I believe that something like Squeak could do that job. It kind of already was done with Croquet/Cobalt.
As for the former approach, I would suggest 80 columns monospace text as the standard document format, with the footnote-style convention for links - not some oddity that's midway between text and HTML.
So in both cases I think one has to be extreme in order to succeed: either extreme sophistication or extreme simplicity.
The argument of focusing on text over styling is a fair one, but it would also have been easy to only support a subset of HTML/CSS.
If one has to create a new hypertext format, however, it should do more than HTML does and really innovate on the concept. It would have been interesting, for example, to make it more semantically structured and thus open up new possibilities, or to include ideas from other hypertext projects like Xanadu.
As it stands, I would prefer markdown or just plain html as well.
Because servers send MIME types in their responses, there is literally nothing stopping a client from implementing a full HTML rendering engine, a Markdown renderer, etc. for when it receives documents of those types.
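A Gemini success response header is just "2x <MIME type>", so the dispatch point is one string split. A rough sketch of what that hand-off might look like (only plain-text handling is shown; an HTML or Markdown renderer would slot in the same way):

    # Dispatch a Gemini response body on the MIME type from the header line.
    def handle_response(header: str, body: bytes) -> str:
        status, _, meta = header.partition(" ")
        if not status.startswith("2"):
            return f"non-success status {status}: {meta}"
        mime = meta.split(";")[0].strip()
        if mime.startswith("text/"):
            # text/gemini, text/markdown, text/html ... hand off to whichever
            # renderer the client actually implements; shown as plain text here.
            return body.decode("utf-8", errors="replace")
        return f"[binary body: {mime}, {len(body)} bytes]"

    print(handle_response("20 text/gemini",
                          b"# Hello\n=> gemini://example.org/ a link\n"))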