austin-cheney 7 hours ago

Every web server claims to be fast, so I wonder how they define that. As someone who has written their own supposedly fast web server, I only want configuration simplicity. Most web servers are far too complicated.

In a web server here is what I am looking for:

* Fast. That is just a matter of streams and pipes. More on this later. That said, the language the web server is written in is largely irrelevant to its real-world performance, so long as it can execute low-level streams and pipes.

* HTTP and WebSocket support. Ideally a web server will support both on the same port. It’s not challenging because you just have to examine the first incoming payload on the connection.

* Security. This does not have to be complicated. Let the server administrator define their own security rules and just execute those rules on incoming connections. For everything that fails just destroy the connection. Don’t send any response.

* Proxy/reverse proxy support. This is simpler than it sounds. It’s just a pipe to another existing local stream, or piping to a new stream opened to a specified location. If authentication is required, it can be the same authentication that sits behind the regular 403 HTTP response. The direction of the proxy is just a matter of who pipes to whom.

* TLS with and without certificate trust. I HATE certificates with extreme anger, especially for localhost connections. A good web server will account for that anger.

* File system support. Reading a specific resource from the file system by name should be a low-level stream via file descriptor, piped back to the response. If that resource is something internally required by the application, like a default homepage, it should be read only once and thereafter fetched from memory by variable name. Displaying file system resources, like a directory listing, doesn’t have to be slow or primitive or brittle.

  • 1dom 3 hours ago

    Can you share a link to your web server please?

    I'm finding it hard to make sense of your comment: I can't reconcile some of the stuff you're saying. My gut feeling is you're either ridiculously smart, so smart that defining and implementing a security rules engine for a web server is something genuinely trivial for you, and the world has a lot to learn from you. Or, you're really, really not aware of how much you don't know, so much so that you're going to end up doing something stupid/dangerous without realising.

    Either way, an example of a supposedly fast web server written by you should clear it up pretty quickly.

    Sorry, because this feels a little rude, and I don't mean it to be, but you're contradicting quite a lot of widely held common sense and best practice in a very blasé way, and I think that makes the burden of proof a little higher than normal.

    • solatic 3 hours ago

      Not OP, and also not a web server genius, but I read OP's comment as allowing server administrators to write policy in OPA, then just using https://github.com/microsoft/regorus/ to determine whether to allow or forbid the connection. The web server author can clearly document what is available in input/data to be checked against in the policy. Is it really more complicated than that?

      • 1dom 2 hours ago

        I honestly don't know. I'm in the same place as you: not a web server expert. But I did spend a bunch of time in security a while ago, so maybe it's my own bias to be sceptical of anyone who casually suggests building and implementing their own boundary security solutions.

        As well as that, the idea that the language any software is written in is largely irrelevant, especially in the context of performance, is not at all obvious or intuitive to me. I get that it would look that way if you reduce a web server down to its core functionality. But that is also a common mistake among educated but inexperienced early-career software engineers.

        I don't know this stuff, but I know enough to know how well I don't know this stuff. I'm trying to work out if the stuff I'm reading is from someone who I should learn from, or if it's from someone with a lot of confidence but limited experience. It could be either, I'm sincerely on the fence, but a git repo of their web server would help clear it up for me personally.

        > Is it really more complicated than that?

        I can't say without really doing a thorough review. Even if regorus is a 100% reliable rules engine, my understanding is that it's just a rules engine. I assume there's still a bunch of custom integration needed to manage and source the rules, feed them to the engine, and then implement the result effectively and safely across the web server. It can be done quickly and easily, but to consider everything and be confident it's done correctly and securely? I don't think that can be done trivially by the average human without some compromise.

    • sim7c00 3 hours ago

      A rules engine for a web server isn't difficult if it doesn't have to carry responsibility for the web app's security itself. Then it's as the poster outlined, really...

      That being said, it's common for new servers to have old vulns; not many coders will go over the CVE reports for Apache and nginx and test their own code against old vulns in those.

      I do find a lot of claims about performance or security are often unsupported, as with this server. It just says it in the readme but provides no additional context or proof or anything to back up those claims.

      My thought is that the original commenter got triggered by that, perhaps rightfully, and is pointing out this fact more than anything. If you want to claim high performance or security, back it up with proof.

      The simple fact that it's in Rust doesn't make it more secure, and using async in Rust doesn't imply good performance. It could, in both cases. Where's the proof?

      • dorianniemiec 3 hours ago

        Regarding the server performance, there are benchmark results on Ferron's website.

  • MaxBarraclough 4 hours ago

    > Fast. That is just a matter of streams and pipes. More on this later. That said, the language the web server is written in is largely irrelevant to its real-world performance, so long as it can execute low-level streams and pipes.

    I'm no expert, but that doesn't sound right to me. Efficiently serving vast quantities of static data isn't trivial; Netflix famously use kernel-level optimisations. [0] If you're serious about handling a great many concurrent web-API requests, you'll need to watch your step with concerns like asynchrony. Some languages make that much easier than others. Plenty of work has gone into nginx's efficiency, for example; it is highly asynchronous but written in C, a language that lacks features to aid asynchronous programming.

    If you aren't doing that kind of serious performance work, your solution presumably isn't performance-competitive with the ones that do. As you say, anyone can call their solution fast.

    [0] [PDF] https://freebsdfoundation.org/wp-content/uploads/2020/10/net...

  • xorcist 6 hours ago

    Most of these things are much harder to get right than you make them sound. Perhaps proxying most of all. It is a legitimately hard problem. Look at something like Varnish, which is likely one of the better proxies out there. It took many years to get good.

    I never had to write a proxy and am grateful for it. You have to really understand the whole network stack: window sizes and the effects of buffering, what to do about in-flight requests, and so on. Just sending stuff from the file system is comparatively easy, where you have things such as sendfile, provided you get the security implications of file paths right.

  • gizmo 3 hours ago

    Security is not that simple. Duplicate HTTP headers, UTF-8 in HTTP headers, case insensitivity, etc. have resulted in countless security vulnerabilities over the years. Do you reject suspicious requests? Do you process the request but filter out invalid headers? What about suspicious headers being returned from the app being served? You have to make choices here, and if you choose unwisely, bad things happen. The spec is of little help here, because web browsers and other web servers (and downstream app servers) don't adhere to the specs, being sometimes too lenient and at other times too restrictive in what they do.

    Just take a look at https://www.rfc-editor.org/rfc/rfc9110#section-5.5 to get an idea of how any choice made by a web server can blow up in your face.

  • koakuma-chan 6 hours ago

    In Rust, all web frameworks are fast because they all use the same stack: Tokio + Hyper.

    • dorianniemiec 5 hours ago

      By the way, Ferron web server also uses Tokio and Hyper.

password4321 12 hours ago

https://github.com/errantmind/faf is the fastest Rust static "web server" per the most recent TechEmpower Round 23 (Plaintext); it is purposely barebones (provide content via Rust callback!) The top 3 Composite scores are all Rust web frameworks, also not necessarily intended as general-purpose web servers.

https://github.com/static-web-server/static-web-server wins the SEO (and GitHub star) battle, though apparently it is old enough to have a couple unmaintained dependencies.

I use https://github.com/sigoden/dufs as my personal file transfer Swiss Army knife since it natively supports file uploads; I will check out Ferron as a lighter reverse proxy with automatic SSL certs vs. Caddy.

  • sshine 9 hours ago

    > the fastest ... purposely barebones

    This piece of fiber cable is the fastest static web server.

    It is purposely barebones, but I bet you, it does almost nothing to slow the delivery of a static website. The trick is: the website is already in its final state when it gets piped through the fiber cable, so no processing is required. The templating and caching mechanism is left open for maximum flexibility.

    I call it an OSI layer 1 web server.

    The trick is to use fiber instead of copper.

    Many webservers don't care about this.

  • dorianniemiec 12 hours ago

    Static Web Server is also old enough to use Hyper 0.14.x instead of the Hyper 1.x used by Ferron. I wish you good luck using Ferron then!

DoctorOW 9 hours ago

This is a really good Caddy replacement. The configuration format Caddy uses sometimes feels oversimplified in that complex configurations are hard to read. My instincts tell me this could scale better without getting more verbose. I'm definitely considering a migration if this project matures.

dorianniemiec 12 hours ago

The author of Ferron web server here. Thank you so much for submitting this, and thank you all for the support you have shown when I submitted the server on Hacker News.

  • indeyets 8 hours ago

    An important part of Caddy’s configuration is its defaults. For example, TLS and automatic certificates are on by default. It covers the most useful use case by default.

    Ferron is different.

    Is that a choice or just something you didn’t work on yet?

    • dorianniemiec 8 hours ago

      Well, Ferron has HTTP/2 and OCSP stapling enabled by default when HTTPS is enabled.

  • KennyBlanken 8 hours ago

    Some feedback: you really need to put a features list somewhere prominent and tell people what distinguishes your webserver from others in terms of its capabilities.

    Also, your FAQ really makes you come off as incredibly patronizing.

    • dorianniemiec 8 hours ago

      Why do you think that FAQ makes me come off as patronizing?

      • tommyage 7 hours ago

        I am also wondering about this.

        To me, your FAQ quickly addressed all questions I had to get a first grasp of the capabilities. It appears to me that you had a determined scope and I very much like that!

      • amenhotep 6 hours ago

        I wouldn't be as harsh as that, but "what is a web server" feels very out of place in how basic it is, and the final one that basically just says "read the docs" maybe also doesn't quite land.

        • echoangle 3 hours ago

          On the one hand, the "what is a web server" question seems pretty weird because it's something most people visiting the page would know. But on the other hand, stuff like this is something I really miss in other places. It's really annoying to get a link to some GitHub repo and have to spend 5 minutes just figuring out what you're even looking at.

titaphraz 3 hours ago

Kudos. It would have been nice to see benchmarks compared to Nginx, since it's extremely popular.

I'm not using any of the other servers in the benchmark so it's meaningless to me.

arnath 2 hours ago

Random thing I’ve been wondering: is there a point in including TLS support in web servers any more? Isn’t it always better to run a reverse proxy and terminate HTTPs at the edge?

  • dorianniemiec an hour ago

    The problem is that you will have more moving parts: a web server and an additional reverse proxy (which can add overhead). Also, Ferron itself can be configured as a reverse proxy.

no_wizard 12 hours ago

I wonder why they left nginx off their comparisons. Is it simply because nginx is still faster, I wonder?

  • yjftsjthsd-h an hour ago

    Especially because the details under that say,

    > The web servers serve a default page that comes with NGINX web server.

    so yeah, if you even refer to nginx when talking about benchmarks but leave it out, I'm going to favor adverse inference and assume that it's because nginx is faster.

  • dorianniemiec 12 hours ago

    Maybe because of marketing reasons or because I am biased in the comparisons?

    • no_wizard 12 hours ago

      So open question: is nginx faster?

      I think this is really cool. More competition in this space is better not worse, I am merely curious to know how it stacks up

      • alexpadula 11 hours ago

        You’re comparing a new project to nginx. Obviously nginx will be faster; maybe not across the board, but generally it probably is. As the project matures it will surely optimize! nginx has 21 years of development under its belt.

        • dijit 10 hours ago

          By that reasoning Apache should be faster than nginx, but alas.

          https://pressable.com/blog/head-to-head-performance-comparis...

          I think drawing any conclusions in the absence of benchmarks is unwise.

          • graemep 3 hours ago

            It is often a mistake to draw conclusions even with benchmarks. Are the benchmarks measuring what is relevant to your use case? Are the benchmarks unbiased?

            Your link does not seem to contain any benchmarks anyway.

            The Ferron benchmarks on their home page say Apache's prefork MPM outperforms Apache's event MPM, which seems odd to me.

          • MrFurious 8 hours ago

            What? Apache has had mpm_event for many years now; you don't need the prefork process model unless you want to use mod_php or something else that isn't thread-safe.

        • alexpadula 10 hours ago

          Very detailed post. True dat.

kapilvt 3 hours ago

Docs links lead to a 403 forbidden for me https://www.ferronweb.org/docs/

  • dorianniemiec 3 hours ago

    You went to the documentation page while I was uploading the website files after I updated the website. You can now refresh the documentation page.

nottorp 4 hours ago

Isn't Go better for writing servers, and as fast and memory safe as the second coming of $DEITY?

  • dorianniemiec 4 hours ago

    Go has a larger ecosystem of libraries for building web servers. You have FrankenPHP for running PHP, Lego for automatic TLS, etc. For Rust, there is the `tokio-rustls-acme` crate (used by Ferron) for automatic TLS, while for PHP there is a `php` crate that depends on an unsupported PHP version; Ferron uses FastCGI to communicate with the PHP-FPM daemon instead. However, Go uses a garbage collector, unlike Rust, which has a borrow checker to ensure memory safety.

    • rc00 4 hours ago

      Rust has garbage collection.

      • dorianniemiec 3 hours ago

        How? I thought it rather uses a borrow checker with ownership and borrowing rules.

bitbasher 3 hours ago

Why no benchmarks against Nginx?

timeflex 5 hours ago

The first thing on their main homepage is instructions to curl a shell script into bash using sudo. I find the argument that they prioritize security unconvincing.

  • dorianniemiec 4 hours ago

    Oh... For safety, it's recommended to check the installation script for suspicious commands. Or you can just pull the image for the Ferron web server from Docker Hub.

    • timeflex 3 hours ago

      You mean the script that you'd have to check every time you want to install? At least with Docker, unless you're running the container privileged, you have some isolation. However, a package manager is usually the recommended approach, since those apps are usually checked by maintainers and often routinely scanned for vulnerabilities. A package manager is my preferred approach.

      • echoangle 3 hours ago

        Just for arguments sake, how did you install docker engine? Did you add their apt source where they can push anything they like into their packages?

        And also, you shouldn’t rely on docker for safety, it might or might not work but docker isn’t a reason to just run an untrusted program.

        • timeflex 2 hours ago

          I'm not even using Docker. I use Podman in rootless mode installed using the system package manager. Even if an app found a way to break out of the container, it wouldn't have elevated privileges.

          I'm not saying security is about perfection, but encouraging people to curl something into the shell with sudo is poor practice. I get that it is a newer piece of software, so I am forgiving. But getting it packaged into Homebrew, WinGet, Nix, etc. would be better. Some of them verify signed packages, ensure reproducible builds, track changes for proper uninstalls, etc.

frontfor 12 hours ago

Are there benchmarks demonstrating its speed?

  • mleonhard 8 hours ago

    I'm also interested in looking at the benchmark code.

throwaway81523 10 hours ago

Yikes, there is a musician named Ferron who has been around forever, and her web site was formerly ferronweb.com. So I did a double take when I saw this. The musician's web site is ferronsongs.com now. Shadows on a Dime (from 1984) is a great album.

Tepix 9 hours ago

How much memory does it use? Is it suitable for memory-limited scenarios like a Raspberry Pi 1 with 256MB?

  • dorianniemiec 9 hours ago

    I am not exactly sure, but comparing Ferron 1.0.0-beta5 and Caddy 2.9.1 in a benchmark where HTTPS and HTTP/2 were enabled and the default Apache httpd page was served, Caddy used so much memory that the system with 16 GB of RAM ran out at 12,600 requests per second, while Ferron used far less, and the benchmark succeeded up to 20,000 requests per second. Maybe it's a bug in Caddy?

    • evantbyrne an hour ago

      Either that or misconfiguration with the benchmark setup. A quick google search indicates this can happen with some setups. Maybe try looking at other Caddy benchmark code.

    • nicce 7 hours ago

      It could also be a Go issue: the garbage collector didn't have time to free memory.

  • nicoburns 8 hours ago

    Almost certainly, given that pretty much all Rust web servers are, and this one is built on the same dependencies as the others.

    I run a few websites on fly.io VMs with 256 MB using Rust servers that never actually exceed 64 MB of usage.

liveafterlove 9 hours ago

Nice, does it support DTLS for WebRTC over the same port? Nginx only has a patch for it ATM.

  • dorianniemiec 9 hours ago

    Thank you! Unfortunately, Ferron doesn't support DTLS, although it can be used as a WebSocket reverse proxy for signaling in WebRTC applications...

alexpadula 11 hours ago

Cool project! The first feature put a smile on my face! “Built with rust so it’s fast” paraphrasing but yeah :)

  • dorianniemiec 11 hours ago

    Thank you!

    • alexpadula 11 hours ago

      You got it!! Keep it up :). Thank you for posting it

eptcyka 9 hours ago

How does it handle slow loris?

  • dorianniemiec 9 hours ago

    Hyper (the HTTP library used by Ferron) has a request header timeout of 30 s by default if a timer is set. Ferron sets the timer for Hyper so that the request header timeout works, thus mitigating Slowloris.

    • fatchan 5 hours ago

      Offering more detailed timeouts for other stages of the request would be great, too.

      For example, with HAProxy you can configure separate timeouts for just about everything: the time a request is queued (if you exceed the max connections), the time for the connection to establish, the time for the request to be received, inactivity timeouts for the client or server, inactivity timeouts for WebSocket connections... The list goes on: https://docs.haproxy.org/3.1/configuration.html#4-timeout%20...

      Slowloris is more than just the header timeout. What if the headers are received but the request body is sent, or the response consumed, very slowly? And even if this is handled with a "safe" default, it must be configurable to cater to a wide range of applications.

      • dorianniemiec 5 hours ago

        I also implemented timeouts for response processing (including reading the request body from the client), to protect against Slow HTTP POST attacks.

m00dy 12 hours ago

Looks like a Caddy clone in Rust ;) good luck. I think it is way better than Caddy. Auto TLS renewal is a banger. I was thinking of building the same thing but had no time to do it.

  • dorianniemiec 12 hours ago

    Thank you! I think that my web server is indeed similar to the popular Caddy web server.