When I get my way reviewing a codebase, I make sure that as much state as possible is saved in a URL, sometimes (though rarely) down to the scroll position.
I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place. It's mind-boggling and actually insulting as a user. Or grabbing a URL and sending to another person, only to find out it doesn't make sense.
Developing like this on small teams also tends, in my experience, to lead to better UX, because it makes you much more aware of how much state you're cramming into a view. I'll admit it makes development slower, but I'll take the hit most days.
I've seen some people in this thread comment on how having state in a URL is risky because it then becomes a sort of public API that limits you. While I agree this might be a problem in some scenarios, I think there are many others where that is not the case, as copied URLs tend to be short-lived (bookmarks and "browser history" are an exception), mostly used for refreshing a page (which will later be closed) or for sharing. In the remaining cases, you can always plug in some code to migrate from the old URL to the new URL when loading, which will actually solve the issue if you got there via browser history (won't fix for bookmarks though).
While I like this approach as well, these URLs ending up in the browser history isn’t ideal. Autocomplete when just trying to go to the site causes some undesired state every now and then. Maybe query params offer an advantage over paths here.
I think it’s a “use the right tool for the job” thing. Putting ephemeral information like session info in URLs sucks and should only be done if you need to pass it in a get request from a non-browser program or something, and even then I think you should redirect or rewrite the url or something after the initial request. But I think actual navigational data or some sort of state if it’s in the middle of an important action is acceptable.
But if you really just want your users to be able to hit refresh and not have their state change for non-navigational stuff like field contents or whatever, unless you have a really clear use case where you need to maintain state while switching devices and don't want to do it server-side, local storage seems like the idiomatic choice.
JS does have features for editing the history, but it's a trade-off of not polluting the history too much while still letting the user navigate back and forth
Does that handle the back button correctly? Nothing is more annoying than sites/apps that overwrite the history incorrectly, so when you press the back button it goes to the entry before you entered the website/app, rather than back into what you were doing in the website/app.
Both approaches (appending/rewriting) have their uses, the tricky part is using the right thing for the right action, fuck up either and the experience is abysmal.
It’s definitely possible to make a really stellar experience, but that winds up being the exception. The URL and history state are sort of “invisible” elements of the user experience but require thoughtful care and attention to what the user expects/wants at each step, a level of attention which is already a rarity in web development even in the most visible parts of a page…so frequently the history/back button stuff just totally sucks.
Yeah, in my experience you only get great stuff when both product and engineering have equal care for the final experience. If either party lacks care, you'll miss stuff, particularly things that are "invisible" as you say.
It's pretty weird, my impression is that the APIs are flexible enough to implement most sane behaviors, but websites keep managing to mess it all up. Perhaps it's just one of those things that no one bothers re-testing as the codebase changes.
In my experience, the problem is two-fold. First product managers/owners don't consider the URIs, so it ends up not being specified. They say "We should have a page when user clicks X, and then on that page, user can open up modal Y", but none of it is specified in terms of what happens with the URIs and history.
Then a developer gets the task to create this, and they too don't push back on what exact URIs are being used, nor how the history is being treated. Either they don't have time, don't have the power to send back tasks to product, simply don't care or just don't think of it. They happily carry along creating whatever URIs make sense to them.
No one is responsible for URLs, no one considers that part of UX and design, so no one ends up thinking about it, people implement things as they feel is right, without having a full overview over how things are supposed to fit together.
Anyways, that's just based on my experience; I'm sure there are other holes in the process that also exacerbate the issue.
As a UX designer, this is a failure of the UX designers, IMO. If you're a UX designer for web, you should be aware of web technology and be thinking about these things. Even if you don't know enough to fully specify it, you should know enough to have conversations with a developer and work together to fully spec it out.
That said, I've also worked with some developers that didn't like me intruding on their turf, so to speak. Though I've also worked with others that were more than happy to collaborate and very proactive about these sorts of things.
Furthermore, as a UX designer this is the sort of topic that we're unlikely to be able to meaningfully discuss with PMs and other stakeholders as it's completely non-visual and often trying to bring this up with them and discuss it ends up feeling like pulling teeth and them wondering why we're even spending time on it. So usually it just ended up being a discussion between me and the developers with no PM oversight.
Nothing weird about it, you see people arguing right here whether a site should add a new history entry when a filter is set.
Interacting with the URL from JS within the page load cycle is inherently complex.
For what it's worth, I'd also argue that the right behavior here is to replace.
But that of course also means that now the URL on the history stack for this particular view will always have the filter in it (as opposed to an initial visit without having touched anything).
Of course the author's case is the good/special one where they already visited the site with a filter in the URL.
But when you might be interested in using the view/page with multiple queries/filters/parameters, it might also be unexpected: for example, developers not having a dedicated search results page and instead updating the query parameters of the current URL.
Also, from the history APIs perspective, path and query parameters are interchangeable as long as the origin matches, but user expectations (and server behavior) might assign them different roles.
Still, we're commenting on a site where the main view parameter (item ID, including submission pages) is a query parameter. So this distinction is pretty arbitrary.
And the most extreme case of misusing pushState (instead of replaceState) is sites where each keystroke in some typeahead filter creates a new history entry.
All of this doesn't even touch the basic requirement that is most important and addressed in the article: being able to refresh the page without losing state and being able to bookmark things.
Manually implementing stuff like this on top of basic routing functionality (which should use pushState) in an SPA gets complex very quickly.
> But that of course also means that now the URL on the history stack for this particular view will always have the filter in it (as opposed to an initial visit without having touched anything).
I would have one state for when the user first entered the page, and then the first time they modify a filter, add a 2nd state. From there on, keep updating/replacing that state (a sketch follows the list below).
This way if the user clicks into the page, and modifies a dozen things they can
1. Refresh and keep all their filters, or share with a friend
2. Press back to basically clear all their filters (get back to the initial state of the page)
3. Only 1 more press of back to get back to wherever they came from
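For what it's worth, a minimal sketch of that strategy with the History API could look like this (the `filtersTouched` flag and `renderFilters` callback are illustrative names, not something from the comments above):

```js
// One extra history entry the first time filters are touched, then keep
// rewriting that same entry on every further change.
let filtersTouched = false;

function onFilterChange(filters) {
  const url = new URL(window.location.href);
  url.search = new URLSearchParams(filters).toString();

  if (!filtersTouched) {
    // First modification: push a new entry on top of the clean page.
    history.pushState({ filters }, "", url);
    filtersTouched = true;
  } else {
    // Later modifications: keep replacing that same entry.
    history.replaceState({ filters }, "", url);
  }
}

// Pressing back from the filtered entry lands on the clean initial page.
window.addEventListener("popstate", (event) => {
  filtersTouched = Boolean(event.state?.filters);
  renderFilters(event.state?.filters ?? {});
});
```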
Yeah I use a web app regularly for work where they have implemented their own "back" button in the app. The app maintains its own state and history so the browser back button is totally broken.
The problem here is that they've implemented an application navigation feature with the same name as a browser navigation feature. As a user, you know you need to click "Back" and your brain has that wired to click the browser back button.
Very annoying.
Having "Refresh" break things is (to me) a little more tolerable. I have the mental association of "refresh" as "start over" and so I'm less annoyed when that takes me back to some kind of front page in the app.
Still leaves the problem of not being able to simply send the current URL to someone else and know they'll see the same thing. Of course anchors can solve this, but not automatically
You probably don't want that most of the time, though. The time I'm most likely to send someone an article is once I've got to the end of it, but I don't want them to jump to the end of the article, I want them to start at the beginning again.
There are situations where you want to link to a specific part of a page, and for that anchors and text anchors work well. But in my experience it isn't the default behaviour that I want for most pages.
Even with JS, if it is classical synchronous JS it is much better than the modern blind push for async JS, which causes the browser to try to restore the position before the JS has actually created the content.
Also a reminder that "refresh" is just a code word for "restart (and often redownload) the whole bloody app". It's funny how in web-world people are so used to "refreshing" apps that they assume it's normal functionality (and not a failure mode).
Restoring state is just one of the features that can be implemented in any app if needed, with all the baggage that comes with a feature – testing, maintaining, etc. It's just that if a desktop app becomes so broken/unresponsive that the only way out is to restart it, we consider that a bad experience and bad software. On the web, "restarting the app" is a normal daily activity when something goes wrong with state/layout/fields/forms, etc.
Most desktop apps are buggy enough to occasionally require restarts or even crash. I don't currently use any program that has never crashed on me. On the web, "restarting the app" is seamless and doesn't imply anything went wrong. It's like the Erlang approach to errors, but on steroids.
The trouble with leaving state restoration to applications to do as they wish is that most of the time they will get it wrong. Also, most of them don't do any of this and never will. Good defaults matter.
I completely agree. In fact, I believe URL design should be part of UX design, and although I've worked with 30+ UX designers, I've never once received guidance on URLs.
As a UX designer that always gives guidance on URL design/strategy, I’ll say it’s not always well received. I’ve run into more than a few engineering or PM teams who feel that’s not w/in scope of design.
As a dev who cares about UX, this is crazy to hear but it resonates; I've gotten a few weird looks from people whenever I've mentioned some URL improvements. I've also worked with people who understood it. I've seen a correlation though: when people cared enough, I could share freely about this; when I did both the designer's and the dev work, I would just add it in (I'm def not a designer, so if I'm doing design work that means the owner doesn't care about design, let alone URLs).
I can imagine how you ran into that pushback in your situation as a pure designer, though. Sorry to hear it, and I wish other devs cared more. I've def been mentoring people to care about it, so I hope others do too.
> I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place. It's mind-boggling and actually insulting as a user. Or grabbing a URL and sending to another person, only to find out it doesn't make sense.
The two use cases are in slight conflict: most of the time, when I share a URL, I don't want to share a specific scroll position (which probably doesn't even make sense, if the other guy has a different screen size.)
Obviously the URL is not all state, it doesn’t save your cursor or IME input. So there is some distinction between “important” and “unimportant” state.
Perhaps a better example: should video URLs (like on youtube) include a timestamp or not?
Youtube gives you both options, and either can be what you want. Youtube also seems to be smart enough to roughly remember where you were in the video, when you are reloading the page.
Which, if you take the base64 encoded string, strip off the control characters, pad it out to a valid base64 string, you get
"eyJhZ2VuZGEiOnsiaWQiOm51bGwsImNlbnRlciI6Wy0xMTUuOTI1LDM2LjAwNl0sImxvY2F0aW9uIjpudWxsLCJ6b29tIjo2LjM1MzMzMzMzMzMzMzMzMzV9LCJhbmltYXRpbmciOmZhbHNlLCJiYXNlIjoic3RhbmRhcmQiLCJhcnRjYyI6ZmFsc2UsImNvdW50eSI6ZmFsc2UsImN3YSI6ZmFsc2UsInJmYyI6ZmFsc2UsInN0YXRlIjpmYWxzZSwibWVudSI6dHJ1ZSwic2hvcnRGdXNlZE9ubHkiOmZhbHNlLCJvcGFjaXR5Ijp7ImFsZXJ0cyI6MC44LCJsb2NhbCI6MC42LCJsb2NhbFN0YXRpb25zIjowLjgsIm5hdGlvbmFsIjowLjZ9fQ==", which decodes into:
I only know this because I've spent a ton of time working with the NWS data - I'm founding a company that's working on bringing live local weather news to every community that needs it - https://www.lwnn.news/
Nesting, mostly (having used that trick a lot, though I usually sign that record if originating from server).
I've almost entirely moved to Rust/WASM for browser logic, and I just use serde crate to produce compact representation of the record, but I've seen protobufs used as well.
Otherwise you end up with parsing monsters like ?actions[3].replay__timestamp[0]=0.444 vs {"actions": [,,,{"replay":{"timestamp":[0.444, 0.888]}}]}
Sorry but this is legitimately a terrible way to encode this data. The number 0.8 is encoded as base64 encoded ascii decimals. The bits 1 and 0 similarly. URLs should not be long for many reasons, like sharing and preventing them from being cut off.
Links with lots of data in them are really annoying to share. I see the value in storing some state there, but I don’t think there is room for much of it.
What makes them annoying to share? I bet it's more an issue with the UX of whatever app or website you're sharing the link in. Take that stackoverflow link in the comment you're replying to, for example: you can see the domain and most of the path, but HN elides link text after a certain length because it's superfluous.
> I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place.
The web has evolved a lot; as users we're seeing an incredible amount of UX behaviors which make any single action take on different semantics depending on context.
When on mobile in particular, there's many cases where going back to the page's initial state is just a PITA the regular way, and refreshing the page is the fastest and cleanest action.
Some implementations of infinite scroll won't get you to the top of the content in any simple way. Some sites are a PITA regarding filtering and ordering, and you're stuck with some of the choices that are inside collapsible blocks whose location you don't even remember. And there are myriad other situations where you just want the current page in a new and blank state.
The more you keep in the URL, the more resetting the UX is a chore. Sometimes just refreshing is enough, sometimes cleaning the URL is necessary, sometimes you need to go back to the top and navigate back to the page you were on. And those are situations where the user is already frustrated over some other UX issue, so needing additional effort just to reset is adding insult to injury IMHO.
This is a viable solution, but as the article mentions, you lose intent and readability (e.g. seeing a query parameter for “product=laptop” vs. “state=XBE4eHgU”). And in general, it’s unlikely you’ll run into issues with URL length. Two to eight thousand characters is a lot!
I remember bouncing into this limit once in a project because we wanted to make a deeply customized interface shareable without a backend, and while on the site itself we didn't hit a URL limit, when someone shared it via some email clients the client added its own tracking redirect onto the URL, which caused it to hit the limit and break.
Because a hash is by definition a one-way mapping, you'd then have to keep a map of the reverse mapping (hash -> state), which obviously gets impractical with state such as page index or search terms. Better to just make a two-way "compression" mapping.
To make this work better, URLs should standardize several common semantic query parameters and fragment identifiers (like lines, etc). There is utterly no need for every website to re-invent the wheel here. It would also enable browsers to display long URLs better. It could also reduce the amount of client JS once browsers pick up the job of executing some of the client-side interactions on very common fragment changes.
URL state should be descriptive, not prescriptive. Either way it is important. Unfortunately my experience on several teams is that businesses never care about stuff like this but users do.
I agree, and this reminds me: I really wish there was better URL (and DNS) literacy amongst the mainstream 'digitally literate'. It would help reduce risk of phishing attacks, allow people to observe and control state meaningful to their experience (e.g. knowing what the '?t=_' does in youtube), trimming of personal info like tracking params (e.g. utm_) before sharing, understanding https/padlock doesn't mean trusted. Etc. Generally, even the most internet-savvy age group, are vastly ill-equipped.
It doesn't help that URLs are badly designed. It's a mix of left- and rightmost significant notation, so the most significant part is in the middle of the URL and hard to spot for someone non-technical.
Really we should be going to com.ycombinator.news/item?id=45789474 instead.
> Generally, even the most internet-savvy age group, are vastly ill-equipped.
It’s a losing battle when even the tools (web browsers hiding URLs by default, heck even Firefox on iOS does it now!) and companies (making posters with nothing more than QR codes or search terms) are what they’re up against….
And with commercial software like Outlook being so ubiquitous and absolutely HORRENDOUS with url obfuscation, formatting, “in network” contacts, and seemingly random spam filtering.
Our company does phishing tests like most, and their checklist of suspicious behavior is 1 to 1 useless. Every item on the list is either 1: something that our company actually does with its real emails or 2: useless because outlook sucks a huge wang. So I basically never open emails and report almost everything I get. I’m sure the IT department enjoys the 80% false report rate.
Unfortunately, too many websites use tracking parameters in URLs, so when a URL is too long I tend to assume it's tracking and just remove all the extra parameters from it when saving or sending it to anyone.
Though I guess this won't happen if it's obvious at first glance what the parameters do and that they're all just plaintext, not b64 or whatever.
If the URL is your state container, it also becomes a leakage mechanism of internals that, at the very least, turns into a versioning requirement (so an old bookmark won’t break things). That also means that there’s some degree of implicit assumption with browsers and multi-browser passing. At some point, things might not hold up (Authentication workflows, for example).
That said, I agree with the point and expose as much as possible in the URL, in the same way that I expose as much as possible as command line arguments in command line utilities.
But there are costs and trade offs with that sort of accommodation. I understand that folks can make different design decisions intentionally, rather than from ignorance/inexperience.
Super old but still a very functional library for saving state as JSON in the URL, but without all the usual JSON clutter.
I first saw it used in Elastic's Kibana.
I used it on a fancy internal React dashboard project around 2016, and it worked like a charm.
Thank you!! There's a ton of projects where I've wanted something like that. I've previously cobbled together something ad hoc myself, but this looks way more thought out and (slightly) more standard than me making up my own thing.
When the system evolves, you need to change things. State structure also evolves and you will refactor and rework it. You'll rename things, move fields around.
URL is considered a permanent string. You can break it, but that's a bad thing.
So keeping state in the URL will constrain you from evolving your system. That's a bad thing.
I think it's more appropriate to treat the URL like a protocol. You can encode some state parameters into it and you can decode the URL into state on page load. You probably could even version it, if necessary.
For very simple pages, storing entire state in the URL might work.
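A rough sketch of that "URL as a versioned protocol" idea (the `v`, `sort`, and `page` parameter names and the v1-to-v2 rename are invented for illustration):

```js
const CURRENT_VERSION = 2;

function stateToSearch(state) {
  return "?" + new URLSearchParams({
    v: String(CURRENT_VERSION),
    sort: state.sortKey,
    page: String(state.pageIndex),
  });
}

function searchToState(search) {
  const params = new URLSearchParams(search);
  const version = Number(params.get("v") ?? "1");
  const state = {
    sortKey: params.get("sort") ?? "date",
    pageIndex: Number(params.get("page") ?? "0"),
  };
  if (version < 2) {
    // Hypothetical v1 called the sort parameter "order"; migrate old bookmarks on load.
    state.sortKey = params.get("order") ?? state.sortKey;
  }
  return state;
}
```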
I think it depends on the permanence of the thing you’re keeping state for. For example for a blog post, you might want to keep it around for a long time.
But sometimes it’s less obvious how to keep state encoded in a URL or otherwise (i.e for the convenience of your users do you want refreshing a feed to return the user to a marker point in the feed that they were viewing? Or do you want to return to the latest point in the feed since users expect a refresh action to give them a fresh feed?).
This is one of the things that bothered me the most from existing React libraries, if you wanted to update a single query parameter now you needed to do a lot of extra work. It bothered me so much I ended up making a library around this [1], where you can do just:
> Browsers and servers impose practical limits on URL length (usually between 2,000 and 8,000 characters) but the reality is more nuanced. As this detailed Stack Overflow answer explains, limits come from a mix of browser behavior, server configurations, CDNs, and even search engine constraints. If you’re bumping against them, it’s a sign you need to rethink your approach.
So what is the reality? The linked StackOverflow answer claims that, as of 2023, it is "under 2000 characters". How much state can you fit into under 2000 characters without resorting to tricks for reducing the number of characters for different parameters? And what would a rethought approach look like?
Each of those characters (aside from domain) could be any of 66 unique ones:
Uppercase letters: A through Z (26 characters)
Lowercase letters: a through z (26 characters)
Digits: 0 through 9 (10 characters)
Special: - . _ ~ (4 characters)
So you'd get a lot of bang for your buck if you really wanted to encode a lot of information.
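Rough arithmetic (mine, not from the comment above): each of those 66 characters carries log2(66) ≈ 6 bits, so even a conservative 2,000-character budget is on the order of 12,000 bits, roughly 1.5 KB of raw information before any clever encoding.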
Unless you have some kind of mapping to encode different states with different character blocks your possibilities are much more limited.
Like storing product ids or EAN plus the number of items.
Just hope the user isn’t on a shopping spree
I believe draw.io achieves complete state persistence solely through the URL. This allows you to effortlessly share your diagrams with others by simply providing a link that contains an embedded Base64-encoded string representing the diagram’s data. However, I’m uncertain whether this approach would qualify as a “state container” according to the definition presented in the article.
The new web standard initiative BRAID is trying to make the web more human and machine friendly with a synchronous web of state [1],[2],[3].
"Braid’s goal is to extend HTTP from a state transfer protocol to a state sync protocol, in order to do away with custom sync protocols and make state across the web more interoperable.
Braid puts the power of operational transforms and CRDTs on the web, improving network performance and enabling natively p2p, collaboratively-editable, local-first web applications." [4]
From a human user perspective, HATEOAS is effectively just the web. You follow links to get where you want, and forms let you send data where you want, all traversed from some root entrypoint.
From a machine client perspective, it's a different story. JSON-LD is more-or-less HATEOAS, and it works fine for ActivityPub. It's good when you want to talk to an endpoint that you know what data you want to get from it, but don't necessarily need to know the exact shape or URLs.
When you control both the server and client, HATEOAS is extra pain for little to no benefit, especially when it's implemented poorly (i.e. when the client still needs to know the exact shape of every endpoint anyway, and HATEOAS really just makes URLs opaque), and it interacts very badly when you need to parse the URL anyway, to pull parts from it or add query parameters.
Jokes aside, the crux of HATEOAS is having a dumb frontend which just displays content and links from backend responses. All logic is on the server side. It is more like a terminal connection than a browser based application.
Not at all. HATEOAS is about defining data formats that the client and server agree on ahead of time.
Browsers running Javascript referenced from HTML is a perfect example of HATEOAS, for example. browsers and web server creators agreed on the semantics of these two data formats, and now any browser in the world can talk to any web server in the world and display what was intended to be displayed to the user.
If the web design hadn't been HATEOAS, you'd need server specific code in your browser, like AOL had a long time ago, where your browser would know how to look up specific parts of the AOL site and display them. This is also how most client apps are developed, since both the client and the server are controlled by the same entity, and there is no problem in hardcoding URLs in the client.
The wild thing about this is that for the longest time, URLs were the mechanism for maintaining state on a page. It is only with the complete takeover of JavaScript-based web pages that we even got away from this being "just the way it is". Browsers and server-rendered pages have a number of features that folks try their best to recreate with javascript, and often recreate it rather poorly.
Yes and the comments in this thread don’t give me much hope that we will ever progress from the SPA mess to the idea that „simple is best“. Developers love to overengineer.
> #/dashboard - Single-page app routing (though it’s rarely used these days)
I actually use that for my self-hosted app, because hash routing doesn't require .htaccess or other URL rewriting functionality server-side. So yes, it's not ideal, but when you don't fully control the deployment environment, it's better to reduce the requirements as much as you can.
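For illustration, hash routing in its simplest form needs nothing from the server; the route names and the `render` function here are placeholders:

```js
const routes = {
  "#/dashboard": () => render("dashboard"),
  "#/settings": () => render("settings"),
};

function handleRoute() {
  // Everything after "#" never reaches the server, so no rewrite rules are needed.
  (routes[window.location.hash] ?? routes["#/dashboard"])();
}

window.addEventListener("hashchange", handleRoute);
window.addEventListener("DOMContentLoaded", handleRoute);
```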
I'm going to provide a dissenting opinion here. I think the URL is for location, not state. I believe that using the URL as a state container leads to unexpected and unwanted behaviour.
First, I think it's a fact that the average user does not consider a URL to be a state container. The fact that developers in this thread lament the "new school" React developers who don't use the URL as a state container is proof of this. It follows that since a React developer, no matter how inexperienced, is at least as knowledgeable about URLs as the average person, if they don't even consider the URL to be a valid container for state then neither does the average person.
Putting state in the URL breaks a fundamental expectation of the user that refreshing a page resets its state. If I put a page into an unwanted state, or god forbid there is a bug that places it in an impossible state, I expect a refresh of the page to reset the state back. Putting state in the URL violates this principle.
Secondly, putting state in a URL breaks the expectation of the user for sharing locations. When I receive Youtube links from friends, half of the time the "t" parameter is set to somewhere in the video and I don't know if my friend explicitly wanted to provide a timestamp. The general user has no idea what ?t=294833289 means in a URL. It would be better to store that state somewhere else and have the user explicitly create a link with a timestamp parameter if the desired outcome was to link to an explicit point in the video. As it stands now, when I send YouTube links to friends I have to remember to clear the ?t= parameter before sharing. This is not good UX.
There are other reasons why I think its a bad idea but I don't want this comment to be too long.
That doesn't mean not to use search parameters though. Consider a page for a t-shirt, with options for color and size. This is a valid use case for putting the color and size in the URL because it's a location property - the resource for a blue XL shirt is different from a red SM shirt, and that should be reflected in the URL.
That's not to say that state should never be put in the URL - in some cases it makes sense. But that's a judgement call that the developer should make by considering what behaviour the user expects, and how the link will most likely be used. For a trivial example, it's unlikely that a user wants to share their scroll position or if a dropdown is open when sharing a page. But they probably want to share the location they've navigated to on a map, as it's unlikely they're sharing a link to `maps.google.com` with others (although debatably that's not state, but rather a location property).
The amount of state that early video games stored in like 256 bytes of ram was actually quite impressive. I bet with some creativity one could do similarly for a web app. Just don’t use gzipped b64-encoded json as your in-url state store!
With a custom compression dictionary made against your JSON schema, I would bet you could still pack a surprising amount of data into 256 bytes that way.
I tried this once and discovered that for us it worked even better when populating the dictionary with a bunch of commonly seen URLs. Like that includes the same field names as the json schema, but none of the other JSON Schema cruft, and it also includes commonly used values etc. It seemed like the smarter I tried to be, the worse the results got.
I just used Pako.js which accepts a `{ dictionary: string }` option. Concat a bunch of common URLs together, done.
The only downside (with both our approaches) is if you add substantially many new fields / common values later on, you need to update the dictionary, and then old URLs don't work, so you'd need some sort of versioning scheme and use the right dictionary for the right version.
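A sketch of that dictionary trick, assuming pako and a made-up dictionary string and state shape (pako's deflate/inflate options do take a `dictionary`):

```js
import pako from "pako";

// Seed the dictionary with substrings that show up in real state/URLs.
const dictionary = "sort=date&page=&filter=&view=map&zoom=&tab=overview";

function stateToParam(state) {
  const bytes = new TextEncoder().encode(JSON.stringify(state));
  const packed = pako.deflateRaw(bytes, { dictionary });
  // base64url so it can sit in a query string without escaping.
  return btoa(String.fromCharCode(...packed))
    .replaceAll("+", "-").replaceAll("/", "_").replace(/=+$/, "");
}

function paramToState(param) {
  const bin = atob(param.replaceAll("-", "+").replaceAll("_", "/"));
  const packed = Uint8Array.from(bin, (c) => c.charCodeAt(0));
  const bytes = pako.inflateRaw(packed, { dictionary });
  return JSON.parse(new TextDecoder().decode(bytes));
}
```

As noted above, if the dictionary changes, old URLs stop inflating, so the dictionary effectively needs a version of its own.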
I like to keep state in the URL. It's nice when you can bookmark any section in an app and it brings you back to the exact same place, all the menus exactly the same. Also it's amazing for debugging. Any bug, I tell the user to send me the URL. I reproduce the issue instantly, fixed in 5 minutes. I wrote some very complex frontends without any tests thanks to this approach... Also it's great during development; when I make a change anywhere in the app, I just refresh the page... I never have to click through menus to get back to the part of the code I want to test. Really brings down my iteration time... Also I use Vanilla JavaScript Web Components so I don't have to wait for a transpiler or bundler. Then I use Claude Code. It's crazy how fast I can code these days when it's my own project.
Yes! This is a very under-utilized concept, especially with client-side execution (WASM etc!)
A few years back, I built a proof-of-concept of a PDF data extraction utility with the following characteristic: the "recipe" for extracting data from forms (think HIPAA etc) can be developed independently of confidential PDFs, signed by the server, and embedded in the URL on the client side.
The client can work entirely offline (save the HTML to disk, airgap if you want!) off the "recipe" contained in the URL itself, process the data in WASM, all client-side. It can be trivially audited that the server does not receive any confidential information, but the software is still "web-based", "browser-based" and plays nice with the online IDE - on dummy data.
Found a working demo link - nothing gets sent to the server.
I'm finishing building a framework at the moment. I'd rather say that they are state descriptors... They don't contain all the state, but they are some kind of hash key that allows you to retrieve application state.
"Hypertext as the engine of application state."
I'm not certain that I agree with this because a URL makes no claims about idempotency or side-effects or many other behaviors that we take for granted when building systems. While it is possible to construct such a system, URLs do not guarantee this.
I think the fundamental issue here is that semantics matter and URLs in isolation don't make strong enough guarantees about them.
I'm all for elegant URL design but they're just one part of the puzzle.
>If you need to base64-encode a massive JSON object, the URL probably isn’t the right place for that state.
Why?
I get it if we're talking about a size that flirts with browser limitations. But other than that I see absolutely no problem with this. In fact it makes me think the author is actually underrating the use-case of URL's as state containers.
Hot module replacement masks a lot of annoyances for end users. Yes, it's more instantaneous than reloading a page and relying on URLs for all of the state, and I am not advocating hard for abolishing HMR anymore, but it would be nice if we still used way more URL state than is currently the case. Browsers will also hibernate tabs to varying degrees, server sessions expire all the time, things are not shareable. The only thing that works as users expect is URL state. One thing I absolutely hate about iOS apps is how every bit of state is lost if I just have the app in the background for a few seconds; this even applies to major apps like YouTube, Google Maps, many email clients, etc. Why do we live in this stupid world where things are not getting better, just because someone made things more convenient for developers?
PS: and I curse the day the social media brainwashed marketing freak coined the term "deep link" to mean just a normal link working as it's supposed to.
Modern browsers have an "open clean link" feature that strips all the query parameters (everything after the '?' character in the URL).
This is because many sites cram the URL full of tracking IDs, and people like to browse without that.
So if you are embedding state in your URL, you probably want to be sure that your application does something sane if the browser strips all of that out.
It only strips known tracking parameters (like those utm_ query params). It does not remove all parameters; if that were the case, YouTube video links would stop working.
I really like this approach, and think it should be used more!
In a previous experiment, I created a simple webpage which renders media stored in the URL. This way, it's able to store and render images, audio, and even simple webpages and games. URLs can get quite long, so can store quite a bit of data.
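As a toy illustration of that idea (this markup-in-the-fragment scheme is my own, not necessarily what that experiment did):

```js
// Render whatever base64-encoded HTML the fragment carries; the page needs no backend.
window.addEventListener("DOMContentLoaded", () => {
  const encoded = decodeURIComponent(window.location.hash.slice(1));
  if (!encoded) return;
  const bytes = Uint8Array.from(atob(encoded), (c) => c.charCodeAt(0));
  // Fine for a toy; sanitize before doing this with untrusted input.
  document.body.innerHTML = new TextDecoder().decode(bytes);
});
```

Opening the page as page.html#PGgxPmhlbGxvPC9oMT4= would render an <h1>.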
Deeplinking is awesome! The Azure portal is my favorite example. You could be many layers deep in some configuration "blade" and the URL will retain the exact location you are in the UI.
Depending on which mechanism you use to construct your state URLs they will see them as different pages, so you may end up with a lot of extra traffic and/or odd SEO side effects. For SEO at least there are clear directives you can set that help.
Not saying you shouldn't do this - just things to consider.
I use the concept for https://libmap.org to save the state of the map. You can share the libmap link via mastodon social or bluesky to make it permanent.
To fully describe client side state you also need to look at DOM and cookies. The server can effectively see this stuff too (e.g., during form post).
I design my SSR apps so that as much state as possible lives in the server. I find the session cookie to be far more critical than the URL. I could build most of my apps to be URL agnostic if I really wanted to. The current state of the client (as the server sees it) can determine its logical location in the space of resources. The URL can be more of an optional thing for when we do need to pin down a specific resource for future reference.
Another advantage of not urlizing everything is that you can implement very complex features without a torturous taxonomy. "/workflow/18" is about as detailed as I'd like to get in the URL scheme of a complex back office banking product.
This entire article is an argument against your approach here, and you're not really addressing any of its points.
Basically, your approach is easier to code, and worse to use. Bookmarks, multiple tabs, the back button, sharing URLs with others, it all becomes harder for users to do with your design. I mean feel free, because with many tech stacks it is indeed easier, but don't pretend it's not a tradeoff. It's easier and worse.
Maybe I'm misunderstanding what you're saying but applications like this tend to be horrible to use. How do you handle somebody navigating in two tabs at once? What about the back button?
I guess they use something like sessionStorage to hold tab specific ids.
But something that can bite you with these solutions is that browsers allow you to duplicate tabs, so you also need some inter-tab mechanism (like the broadcast API or local storage with polling) to resolve duplicate ids.
This should be used more often. I wish websites like Google could respect the language given in the URL. It always tries to guess my language based on IP and fails.
One barrier to adoption is that big URLs are just ugly. Things are smooshed together without spaces, URL encoding, human-readable words mixed with random characters, etc. I think even devs who understand what they're looking at find it a little unsatisfying.
Maybe a solution is some kind of browser widget that displays query params in a user-friendly way that hides the ugliness, sort of like an object explorer interface.
One of my previous side projects used this idea in the extreme: It's a two-player online word game (scrabble with some twists) but all the state is stored in the URL so it doesn't need a backend.
Holding the snark aside for a second, I think there is some harsh truth here.
URL query params are not popular in the front-end developer world for some reason, probably because the fundamentals of web dev are often skipped in favor of learning leetcode and all the React hooks. The same could be said for SQL and CSS.
I also don't think it's a good look that the author is a CTO and is just discovering how useful URL query params are. That being said, it's a pretty good and well-written blog post.
Sure, and file names are state & attribute containers too. A URL is a uniform resource locator. You can hack it, of course, but this is no less kludgy than overloading the filename. It never ceases to amaze me, seeing the recycling of good and bad ideas in this field.
You are either changing the meaning of "state", or probably unaware of what it means. To start with, state of what? app (http server) or the http client?
Not quite. As the L in URL says, it is the locator or address of the state. The S in REST implies the same, indicating states as the content, not path to it.
But from the viewpoint of a web app where you navigate between different (versions of) pages, the state of that app can be the address of the currently displayed page.
I think you are talking about client's navigational state. The original title of this post was "app state ...". Still it is not clear about state of what.
Navigational state should not be confused with app state. Also, talking about "state" as in "state machine" etc. used to sound pretty academic, with an obscure meaning of the word "state". When someone says "state machine" they are basically saying "I'm a PhD and you are not". There are simpler and crisper ways to convey things than via obscurity.
I actually implemented a comment system where users just pick any arbitrary URL on the domain, ie, http://exampledomain.com/, and append /@say/ to the URL along with their comment so the URL is the UI. An example comment would be typed in the URL bar like,
And then my perl script tailing the webserver log file sees the line and adds the comment "Hey! Cool somepage. - Me" to the .html file on disk for comments.
1) You're moving state into an arbitrary, untrusted, easy-to-modify location.
2) You're allowing users to "deep link" into a page that is deep inside some funnel that may or may not be valid, or even exist at some future point in time, never mind skipping the messages/whatever further up.
You probably don't want to do either of those two things.
Hello, I am the author of the article and I can explain a few things.
First of all thank you for your words about the content.
I get why you might feel that way. English isn’t my first language, so I sometimes use GPT to help me polish phrasing or find a smoother rhythm for certain lines.
But the ideas, structure, and all the writing direction are mine. I don't ask it to write articles for me. It just helps me express things more clearly. I treat it more like an editor than a writer.
Is it really an LLM? It's not like real humans can't write in the same style; LLMs have picked up on an existing stylistic tendency. I hate these patterns as much as anyone, and I have noticed them since long before transformers were a thing.
Hanselman famously said “URLs are UI” and he’s absolutely right
A challenge for this is that the URL is the most visible part of an HTTP request but there are many other submerged parts that are not available as UI yet are significant to the http response composition.
Additionally, aside from very basic protocol, domain, and path, the URL is a very not human friendly UI for composing the state.
It's fast becoming a lost art (alongside ensuring the text can be read by the 10% of the male population that is colour blind). It's one thing to coach a junior dev on implementing it properly into a Nextjs app (or whatever is trendy at the time), but quite another to have to explain this stuff to a Product Manager. If you're going to spend copious amounts of time with a designer to make sure the site is pixel perfect visually you should also have time to get your URLs right.
This is a risky idea, actually — at least in its fully expanded form.
Sure, in the prismjs.com case, I have one of those comments in my code too. But I expect it to break one day.
If a site is a content generator and essentially idempotent for a given set of parameters, and you think the developer has a long-term commitment to the URL parameters, then it's a reasonable strategy (and they should probably formalise it).
Perhaps you implement an explicit "save to URL" in that case.
But generally speaking, we eliminated complex variable state from URLs for good reasons to do with state leakage: logged-in or identifying state ending up in search results and forwarded emails, leaking out in referrer logs and all that stuff.
It would be wiser to assume that the complete list of possible ways that user- or session-identifying state in a URL could leak has not yet been written, and to use volatile non-URL-based state until you are sure you're talking about something non-volatile.
Search keywords: obviously. Search result filters? Yeah. Sort direction: probably. Tags? Ehh, as soon as you see [] in a URL it's probably bad code: think carefully about how you represent tags. Presentation customisation? No. A backlink? No.
It's also wiser to assume people want to hack on URLs and cut bits out, to reduce them to the bit they actually want to share.
So you should keep truly persistent, identifying aspects in the path, and at least try not to merge trivial/ephemeral state into the path when it can be left in the query string.
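A concrete illustration (mine, not from the comment): keep the identifying part in the path, say /products/blue-widget, and leave the ephemeral part in the query string, say ?sort=price&view=grid, so that someone who trims the URL back to just the path still lands somewhere meaningful.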
I'm glad to see that prismjs site mentioned by the blog is doing the right thing - when it updates the URL, it replaces the current history item.
Lichess, by contrast, gets this wrong.
On lichess.org/analysis, each move you make adds a history item, lichess.org/analysis#1, #2, and so on.
Pretty annoying.
My personal take would be if it takes you to what's basically another page (such as the entire page being rewritten), then involve browser history.
Browser autocomplete behavior is reliably incorrect and infuriating either way, so it's not a good reason to avoid the utility of having bookmarkable/sharable urls.
Yeah it's an annoyance more than it helps. I always disable it.
I do as well - it's just irritating.
Same with search ahead.
> I make sure that as much state as possible is saved in a URL, sometimes (though rarely) down to the scroll position.
If your page is server-rendered, you get saved scroll position on refresh for free. One of many ways using JS for everything can subtly break things.
Chrome (at least?) solves linking to a specific part of a page via Text Fragments[0], which are a pure client side thing and require no server or site support.
This URI for example:
https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
Links to an instance of "The Referer" narrowed down via a start prefix ("downgrade:") and end suffix ("to origins").
These are used across Google I believe so many have probably seen them.
[0] https://developer.mozilla.org/en-US/docs/Web/URI/Reference/F...
Scroll position can't do this job, because it's not portable between devices.
Isn't there a way to instruct the browser to restore the scroll position only after a certain async thing completes?
I think the hack is to store html height/width locally and restore it as early as possible so the content will then load under the scrolled view
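There is a knob for exactly this; a sketch using history.scrollRestoration, where `loadContent` stands in for whatever async work builds the page:

```js
// Opt out of the browser's automatic restore so it can't fire before content exists.
if ("scrollRestoration" in history) {
  history.scrollRestoration = "manual";
}

window.addEventListener("beforeunload", () => {
  sessionStorage.setItem("scrollY", String(window.scrollY));
});

window.addEventListener("DOMContentLoaded", async () => {
  await loadContent(); // hypothetical async rendering step
  window.scrollTo(0, Number(sessionStorage.getItem("scrollY") ?? "0"));
});
```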
The web is similar to Android, and unlike desktop apps, in that restarting the whole thing is meant to not lose (much) state.
Actually it would be amazing if desktop applications were like this too, and we had a separate way to go back to the initial screen
Restoring state is just one feature that can be implemented in any app if needed, with all the baggage that comes with a feature – testing, maintenance, etc. It's just that if a desktop app becomes so broken/unresponsive that the only way out is to restart it, we consider that a bad experience and bad software. On the web, "restarting the app" is a normal daily activity when something goes wrong with state/layout/fields/forms, etc.
Most desktop apps are buggy enough to occasionally require restarts, or even crash. I don't currently use any program that has never crashed on me. On the web, "restarting the app" is seamless and doesn't imply anything went wrong. It's like the Erlang approach to errors, but on steroids.
The trouble with leaving state restoration to each application to handle as it wishes is that most of the time they get it wrong. Also, most of them don't do any of this and never will. Good defaults matter.
I completely agree. In fact, I believe URL design should be part of UX design, and although I've worked with 30+ UX designers, I've never once received guidance on URLs.
As a UX designer that always gives guidance on URL design/strategy, I’ll say it’s not always well received. I’ve run into more than a few engineering or PM teams who feel that’s not w/in scope of design.
As a dev who cares about UX, this is crazy to hear but it resonates. I've gotten a few weird looks from people whenever I mentioned URL improvements. I've also worked with people who understood it. I've seen a correlation though: when people cared enough, I could share freely about this; when I did both the designer's and the dev work, I would just add it in (I'm def not a designer, so if I'm doing design work it means the owner doesn't care about design, let alone URLs).
I can imagine how it goes in your situation as a pure designer, sorry to hear that, and I wish other devs cared more. I've def been mentoring people to care about it, so I hope others do too.
As a dev mentor, one of my first lessons is that the thing everybody has in common is design.
We all are trying to understand a problem and trying to figure out the best solution.
How each role approaches this has some low level specializations but high level learnings can be shared.
I can understand "shareable" state (scroll position), but _as much as possible_ seems like overkill.
Why not just use localStorage?
> Why not just use localStorage?
So that I can operate two windows/tabs of the same site in parallel without them stealing each other’s scroll position. In addition, the second window/tab may have originated from duplicating the first one.
You could work around that if needed with a unique id per tab (I was curious myself)
https://stackoverflow.com/questions/11896160/any-way-to-iden...
Yes, but how do you garbage-collect the stored per-tab state from the local storage? Note that it’s not just per tab, but per history entry of the tab. (When the user goes back, they want the respective state to be restored, and again when going forward in reverse.) Furthermore, with browser features like “reopen closed tab”. Better let the browser manage the state implicitly by managing the URLs.
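For what it's worth, the browser already offers per-history-entry storage that it garbage-collects for you: history.state. A rough sketch, using scroll position as the example state:

    // Attach small state (here: scroll position) to the *current* history entry
    // without touching the URL. The browser keeps it per entry and per tab, and
    // hands it back via popstate when the user goes back/forward.
    function saveEntryState(): void {
      history.replaceState({ ...history.state, scrollY: window.scrollY }, "");
    }

    window.addEventListener("popstate", (event) => {
      const scrollY = (event.state as { scrollY?: number } | null)?.scrollY;
      if (typeof scrollY === "number") {
        window.scrollTo(0, scrollY);
      }
    });

    // Save whenever it changes (throttling omitted for brevity).
    window.addEventListener("scroll", saveEntryState);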
Scroll position is _kind of_ fine. Typically I can link the ID in the URL as "state".
I was referring to mostly everything else
sessionStorage should treat the windows/tabs as separate
I would never structure my URLs for performance reasons. 100% for usability.
> I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place. It's mind-boggling and actually insulting as a user. Or grabbing a URL and sending to another person, only to find out it doesn't make sense.
The two use cases are in slight conflict: most of the time, when I share a URL, I don't want to share a specific scroll position (which probably doesn't even make sense, if the other guy has a different screen size.)
Scroll, as parent said, is usually not included.
Obviously the URL is not all state, it doesn’t save your cursor or IME input. So there is some distinction between “important” and “unimportant” state.
Perhaps a better example: should video URLs (like on youtube) include a timestamp or not?
Youtube gives you both options, and either can be what you want. Youtube also seems to be smart enough to roughly remember where you were in the video, when you are reloading the page.
> I make sure that as much state as possible is saved in a URL
Do you have advice on how to achieve this (for purely client-side stuff)?
- How do you represent the state? (a list of key=value pair after the hash?)
- How do you make sure it stays in sync?
-- do you parse the hash part in JS to restore some stuff on page load and when the URL changes?
- How do you manage previous / next?
- How do you manage server-side stuff that can be updated client side? (a checkbox that's by default checked and you uncheck it, for instance)
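One common pattern (a sketch, not the only answer) is exactly that: key=value pairs after the hash, with the URL treated as the single source of truth. render() below stands in for whatever the app actually does:

    // State lives in the hash as key=value pairs, e.g. #color=blue&size=M
    function readState(): Record<string, string> {
      return Object.fromEntries(new URLSearchParams(location.hash.slice(1)));
    }

    function writeState(state: Record<string, string>, newHistoryEntry = false): void {
      const hash = "#" + new URLSearchParams(state).toString();
      // pushState for navigation-like changes (so Back works),
      // replaceState for transient tweaks (so history isn't polluted).
      if (newHistoryEntry) {
        history.pushState(null, "", hash);
      } else {
        history.replaceState(null, "", hash);
      }
      render(readState());
    }

    // Back/forward and hand-edited URLs both land here; initial load too.
    window.addEventListener("hashchange", () => render(readState()));
    window.addEventListener("DOMContentLoaded", () => render(readState()));

    declare function render(state: Record<string, string>): void; // assumed app code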
One example I think is super interesting is the NWS Radar site, https://radar.weather.gov/
If you go there, that's the URL you get. However, if you do anything with the map, your URL changes to something like
https://radar.weather.gov/?settings=v1_eyJhZ2VuZGEiOnsiaWQiO...
Which, if you take the base64-encoded string, strip off the "v1_" version prefix, and pad it out to a valid base64 string, you get
"eyJhZ2VuZGEiOnsiaWQiOm51bGwsImNlbnRlciI6Wy0xMTUuOTI1LDM2LjAwNl0sImxvY2F0aW9uIjpudWxsLCJ6b29tIjo2LjM1MzMzMzMzMzMzMzMzMzV9LCJhbmltYXRpbmciOmZhbHNlLCJiYXNlIjoic3RhbmRhcmQiLCJhcnRjYyI6ZmFsc2UsImNvdW50eSI6ZmFsc2UsImN3YSI6ZmFsc2UsInJmYyI6ZmFsc2UsInN0YXRlIjpmYWxzZSwibWVudSI6dHJ1ZSwic2hvcnRGdXNlZE9ubHkiOmZhbHNlLCJvcGFjaXR5Ijp7ImFsZXJ0cyI6MC44LCJsb2NhbCI6MC42LCJsb2NhbFN0YXRpb25zIjowLjgsIm5hdGlvbmFsIjowLjZ9fQ==", which decodes into:
{"agenda":{"id":null,"center":[-115.925,36.006],"location":null,"zoom":6.3533333333333335},"animating":false,"base":"standard","artcc":false,"county":false,"cwa":false,"rfc":false,"state":false,"menu":true,"shortFusedOnly":false,"opacity":{"alerts":0.8,"local":0.6,"localStations":0.8,"national":0.6}}
I only know this because I've spent a ton of time working with the NWS data - I'm founding a company that's working on bringing live local weather news to every community that needs it - https://www.lwnn.news/
In this case, why encode the string instead of just having the options as plain text parameters?
Nesting, mostly (having used that trick a lot, though I usually sign that record if originating from server).
I've almost entirely moved to Rust/WASM for browser logic, and I just use serde crate to produce compact representation of the record, but I've seen protobufs used as well.
Otherwise you end up with parsing monsters like ?actions[3].replay__timestamp[0]=0.444 vs {"actions": [null, null, null, {"replay": {"timestamp": [0.444, 0.888]}}]}
Sorry but this is legitimately a terrible way to encode this data. The number 0.8 is encoded as base64 encoded ascii decimals. The bits 1 and 0 similarly. URLs should not be long for many reasons, like sharing and preventing them from being cut off.
The “cut off” thing is generally legacy thinking, the web has moved on and you can reliably put a lot of data in the URI… https://stackoverflow.com/questions/417142/what-is-the-maxim...
Links with lots of data in them are really annoying to share. I see the value in storing some state there, but I don’t think there is room for much of it.
What makes them annoying to share? I bet it's more an issue with the UX of whatever app or website you're sharing the link in. Take that stackoverflow link in the comment you're replying to, for example: you can see the domain and most of the path, but HN elides link text after a certain length because it's superfluous.
The URL spec already takes care of a lot of this, for example /shopping/shirts?color=blue&size=M&page=3 or /articles/my-article-title#preface
The OP gives great guidance on these questions.
The URL is a public facing interface. If anything goes into the URL, it should already be detailed in the design that the PR’d code is implementing.
> I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place.
The web has evolved a lot; as users we're seeing an incredible number of UX behaviors which make any single action take on different semantics depending on context.
When on mobile in particular, there's many cases where going back to the page's initial state is just a PITA the regular way, and refreshing the page is the fastest and cleanest action.
Some implementations of infinite scroll won't get you back to the top of the content in any simple way. Some sites are a PITA regarding filtering and ordering, and you're stuck with some of the choices hidden inside collapsible blocks you don't even remember the location of. And there are myriad other situations where you just want the current page in a new, blank state.
The more you keep in the URL, the more of a chore resetting the UX becomes. Sometimes just refreshing is enough, sometimes cleaning the URL is necessary, sometimes you need to go back to the top and navigate back to the page you were on. And those are situations where the user is already frustrated over some other UX issue, so needing additional effort just to reset is adding insult to injury IMHO.
To save the url length, why not hash all possible states and have the value of the variable in the query string refer to that?
This is a viable solution, but as the article mentions, you lose intent and readability (e.g. seeing a query parameter for “product=laptop” vs. “state=XBE4eHgU”). And in general, it’s unlikely you’ll run into issues with URL length. Two to eight thousand characters is a lot!
I remember bouncing into this limit once in a project because we wanted to make a deeply customized interface shareable without a backend. While on the site itself we didn't hit a URL limit, when someone shared it via some email clients, the client added its own tracking redirect onto the URL, which caused it to hit the limit and break.
base64(zstd(big state))
and where is the hash mapped back again?
Because a hash is by definition a one-way mapping, you'd have to keep a reverse map of hash -> state, which obviously gets impractical with state such as page index or search terms. Better to just make a two-way "compression" mapping.
They probably have meant something like base64 encode
If you base64 encode an ascii string it gets 33% longer
Would this hijack the back button though? Genuinely curious if modifying the URL adds to the location history.
I think you can customize this. You can decide whether each URL changes the location history.
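You can: both behaviours are one call away in the History API. A small sketch (setQueryParam is a made-up helper name):

    // pushState adds a new history entry (Back returns to the previous URL);
    // replaceState rewrites the current entry (Back skips this change entirely).
    function setQueryParam(key: string, value: string, addHistoryEntry: boolean): void {
      const url = new URL(window.location.href);
      url.searchParams.set(key, value);
      if (addHistoryEntry) {
        history.pushState(null, "", url.toString());    // e.g. opening a new "view"
      } else {
        history.replaceState(null, "", url.toString()); // e.g. tweaking a transient filter
      }
    }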
To make this work better, URLs should standardize several common semantic query parameters and fragment identifiers (like lines, etc). There is utterly no need for every website to re-invent the wheel here. It would also enable browsers to display long URLs better. It could also reduce the amount of client JS once browsers pick up the job of executing some of the client-side interactions on very common fragment changes.
URL state should be descriptive, not prescriptive. Either way it is important. Unfortunately, my experience on several teams is that businesses never care about stuff like this, but users do.
I agree, and this reminds me: I really wish there was better URL (and DNS) literacy amongst the mainstream 'digitally literate'. It would help reduce risk of phishing attacks, allow people to observe and control state meaningful to their experience (e.g. knowing what the '?t=_' does in youtube), trimming of personal info like tracking params (e.g. utm_) before sharing, understanding https/padlock doesn't mean trusted. Etc. Generally, even the most internet-savvy age group, are vastly ill-equipped.
It doesn't help that URLs are badly designed. It's a mix of left- and rightmost significant notation, so the most significant part is in the middle of the URL and hard to spot for someone non-technical.
Really we should be going to com.ycombinator.news/item?id=45789474 instead.
> Generally, even the most internet-savvy age group, are vastly ill-equipped.
It’s a losing battle when even the tools (web browsers hiding URLs by default, heck even Firefox on iOS does it now!) and companies (making posters with nothing more than QR codes or search terms) are what they’re up against….
And with commercial software like Outlook being so ubiquitous and absolutely HORRENDOUS with url obfuscation, formatting, “in network” contacts, and seemingly random spam filtering.
Our company does phishing tests like most, and their checklist of suspicious behavior is 1 to 1 useless. Every item on the list is either 1: something that our company actually does with its real emails or 2: useless because outlook sucks a huge wang. So I basically never open emails and report almost everything I get. I’m sure the IT department enjoys the 80% false report rate.
Unfortunately, too many websites use tracking parameters in URLs, so when a URL is too long I tend to assume it's tracking and just remove all the extra parameters from it when saving or sending it to anyone.
Though I guess this won't happen if it's obvious at first glance what the parameters do and that they're all just plaintext, not b64 or whatever.
If the URL is your state container, it also becomes a leakage mechanism of internals that, at the very least, turns into a versioning requirement (so an old bookmark won’t break things). That also means that there’s some degree of implicit assumption with browsers and multi-browser passing. At some point, things might not hold up (Authentication workflows, for example).
That said, I agree with the point and expose as much as possible in the URL, in the same way that I expose as much as possible as command line arguments in command line utilities.
But there are costs and trade offs with that sort of accommodation. I understand that folks can make different design decisions intentionally, rather than from ignorance/inexperience.
Remember when URLs became unstable wacky identifiers 10 years ago. Thankfully that trend died.
Recommendation:
https://github.com/Nanonid/rison
Super old but still a very functional library for saving state as JSON in the URL, but without all the usual JSON clutter. I first saw it used in Elastic's Kibana. I used it on a fancy internal React dashboard project around 2016, and it worked like a charm.
Sample: http://example.com/service?query=q:'*',start:10,count:10
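If I remember the library's API correctly (treat the exact function names as assumptions), usage looks roughly like:

    // Assuming the usual rison encode/decode exports (check the repo to confirm).
    import rison from "rison";

    const state = { q: "*", start: 10, count: 10 };

    // Produces something like (count:10,q:'*',start:10): compact and URL-friendly.
    // (rison.encode_object reportedly drops the outer parens, matching the sample above.)
    const encoded = rison.encode(state);
    const url = `http://example.com/service?query=${encodeURIComponent(encoded)}`;

    // And back again when the page loads.
    const restored = rison.decode(encoded);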
Thank you!! There's a ton of projects where I've wanted something like that. I've previously cobbled together something ad hoc myself, but this looks way more thought out and (slightly) more standard than making up my own thing.
RQL[0][1] or FIQL[2] might be of interest to you as well, Callum.
[0]: https://github.com/persvr/rql
[1]: https://github.com/jirutka/rsql-parser
[2]: https://datatracker.ietf.org/doc/html/draft-nottingham-atomp...
When the system evolves, you need to change things. State structure also evolves and you will refactor and rework it. You'll rename things, move fields around.
A URL is considered a permanent string. You can break it, but that's a bad thing.
So keeping state in the URL will constrain you from evolving your system. That's a bad thing.
I think it's more appropriate to treat the URL like a protocol. You can encode some state parameters into it and decode the URL into state on page load. You could probably even version it, if necessary.
For very simple pages, storing entire state in the URL might work.
I think it depends on the permanence of the thing you’re keeping state for. For example for a blog post, you might want to keep it around for a long time.
But sometimes it's less obvious how to keep state encoded in a URL or otherwise (e.g. for the convenience of your users, do you want refreshing a feed to return them to the marker point in the feed they were viewing? Or do you want to return to the latest point in the feed, since users expect a refresh action to give them a fresh feed?).
You can always do versioning.
This is one of the things that bothered me the most from existing React libraries, if you wanted to update a single query parameter now you needed to do a lot of extra work. It bothered me so much I ended up making a library around this [1], where you can do just:
Here's a slightly more complex example with a CodeSandbox[2]:
[1] https://crossroad.page/
[2] https://codesandbox.io/p/sandbox/festive-murdock-1ctv6
> Browsers and servers impose practical limits on URL length (usually between 2,000 and 8,000 characters) but the reality is more nuanced. As this detailed Stack Overflow answer explains, limits come from a mix of browser behavior, server configurations, CDNs, and even search engine constraints. If you’re bumping against them, it’s a sign you need to rethink your approach.
So what is the reality? The linked StackOverflow answer claims that, as of 2023, it is "under 2000 characters". How much state can you fit into under 2000 characters without resorting to tricks for reducing the number of characters for different parameters? And what would a rethought approach look like?
Each of those characters (aside from domain) could be any of 66 unique ones: A-Z, a-z, 0-9, and -._~ (the RFC 3986 unreserved set). So you'd get a lot of bang for your buck if you really wanted to encode a lot of information: at roughly 6 bits per character, ~2,000 characters is on the order of 1.5 KB of raw data.
Unless you have some kind of mapping to encode different states with different character blocks, your possibilities are much more limited. Like storing product ids or EANs plus the number of items. Just hope the user isn't on a shopping spree.
This and the lack of proper <a href> links is my biggest pet peeve with a lot of SPAs.
I disagree in the public URL, as either GPG --quick-generate in coining a counterpoint as a feature of anti-DDOS protocols.
Key is to generate capitol, which is being either a URL or playing hand in ball.
I believe draw.io achieves complete state persistence solely through the URL. This allows you to effortlessly share your diagrams with others by simply providing a link that contains an embedded Base64-encoded string representing the diagram’s data. However, I’m uncertain whether this approach would qualify as a “state container” according to the definition presented in the article.
i see the complaints around URL length limits and i raise you..
storing the entire state in the hash component of the URL
http://example.com/foo#abc
since this is entirely client-side, you can pretty much bypass all of the limits.
one place i've seen this used is the azure portal.. (payload | gzip | b64) make of that what you will.
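That (payload | gzip | b64) pipeline is easy to reproduce with the built-in CompressionStream API; a rough sketch (function names are made up):

    // Compress a state object and stash it in the URL hash: (payload | gzip | b64).
    async function writeStateToHash(state: unknown): Promise<void> {
      const bytes = new TextEncoder().encode(JSON.stringify(state));
      const gzipped = new Blob([bytes]).stream().pipeThrough(new CompressionStream("gzip"));
      const compressed = new Uint8Array(await new Response(gzipped).arrayBuffer());
      // btoa wants a binary string; base64url-ify so it's safe in a fragment.
      // (The spread is fine for modest payloads; chunk it for very large ones.)
      const b64 = btoa(String.fromCharCode(...compressed))
        .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
      history.replaceState(null, "", "#" + b64);
    }

    async function readStateFromHash(): Promise<unknown | null> {
      const b64 = location.hash.slice(1);
      if (!b64) return null;
      const std = b64.replace(/-/g, "+").replace(/_/g, "/");
      const compressed = Uint8Array.from(atob(std), (c) => c.charCodeAt(0));
      const stream = new Blob([compressed]).stream().pipeThrough(new DecompressionStream("gzip"));
      return JSON.parse(await new Response(stream).text());
    }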
The latest version of Microsoft Teams is absolutely terrible at this... just one URL for everything. No way to bookmark even a particular team.
The new web standard initiative BRAID is trying to make web to be more human and machine friendly with a synchronous web of state [1],[2],[3].
"Braid’s goal is to extend HTTP from a state transfer protocol to a state sync protocol, in order to do away with custom sync protocols and make state across the web more interoperable.
Braid puts the power of operational transforms and CRDTs on the web, improving network performance and enabling natively p2p, collaboratively-editable, local-first web applications." [4]
[1] A Synchronous Web of State:
https://braid.org/meeting-107
[2] Braid: Synchronization for HTTP (88 comments):
https://news.ycombinator.com/item?id=40480016
[3] Most RESTful APIs aren't really RESTful (564 comments):
https://news.ycombinator.com/item?id=44507076
[4] Braid HTTP:
https://jzhao.xyz/thoughts/Braid-HTTP
HATEOAS never gets the love it deserves until you call it something else..
Probably because it sounds like the most poorly named breakfast cereal ever.
From a human user perspective, HATEOAS is effectively just the web. You follow links to get where you want, and forms let you send data where you want, all traversed from some root entrypoint.
From a machine client perspective, it's a different story. JSON-LD is more-or-less HATEOAS, and it works fine for ActivityPub. It's good when you want to talk to an endpoint that you know what data you want to get from it, but don't necessarily need to know the exact shape or URLs.
When you control both the server and client, HATEOAS is extra pain for little to no benefit, especially when it's implemented poorly (i.e. when the client still needs to know the exact shape of every endpoint anyway, and HATEOAS really just makes URLs opaque), and it interacts very badly when you need to parse the URL anyway, to pull parts from it or add query parameters.
This has nothing to do with HATEOAS. Well, apart from both using URLs. But HATEOAS really isn’t about storing state in URLs.
I mean, at the end of the day it is a cerealization format…
Jokes aside, the crux of HATEOAS is having a dumb frontend which just displays content and links from backend responses. All logic is on the server side. It is more like a terminal connection than a browser based application.
Not at all. HATEOAS is about defining data formats that the client and server agree on ahead of time.
Browsers running Javascript referenced from HTML is a perfect example of HATEOAS, for example. browsers and web server creators agreed on the semantics of these two data formats, and now any browser in the world can talk to any web server in the world and display what was intended to be displayed to the user.
If the web design hadn't been HATEOAS, you'd need server specific code in your browser, like AOL had a long time ago, where your browser would know how to look up specific parts of the AOL site and display them. This is also how most client apps are developed, since both the client and the server are controlled by the same entity, and there is no problem in hardcoding URLs in the client.
The wild thing about this is that for the longest time, URLs were the mechanism for maintaining state on a page. It is only with the complete takeover of JavaScript-based web pages that we even got away from this being "just the way it is". Browsers and server-rendered pages have a number of features that folks try their best to recreate with javascript, and often recreate it rather poorly.
Yes and the comments in this thread don’t give me much hope that we will ever progress from the SPA mess to the idea that „simple is best“. Developers love to overengineer.
> #/dashboard - Single-page app routing (though it’s rarely used these days)
I actually use that for my self-hosted app, because hash routing doesn't require .htaccess or other URL-rewriting functionality server-side. So yes, it's not ideal, but when you don't fully control the deployment environment, it's better to reduce the requirements as much as you can.
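For anyone curious, a hash router needs very little code; a minimal sketch (the route names and render() are made up):

    // A minimal hash router: no server-side rewrites needed, since everything
    // after "#" never reaches the server.
    const routes: Record<string, () => void> = {
      "/": () => render("home"),
      "/dashboard": () => render("dashboard"),
      "/settings": () => render("settings"),
    };

    function handleRoute(): void {
      const path = location.hash.replace(/^#/, "") || "/";
      (routes[path] ?? routes["/"])();
    }

    window.addEventListener("hashchange", handleRoute); // back/forward work for free
    window.addEventListener("DOMContentLoaded", handleRoute);

    declare function render(view: string): void; // assumed app code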
I'm going to provide a dissenting opinion here. I think the URL is for location, not state. I believe that using the URL as a state container leads to unexpected and unwanted behaviour.
First, I think it's a fact that the average user does not consider a URL to be a state container. The fact that developers in this thread lament the "new school" React developers who don't use the URL as a state container is proof of this. A React developer, no matter how inexperienced, is at least as knowledgeable about URLs as the average person, if not more; so if they don't even consider the URL a valid container for state, then neither does the average person.
Putting state in the URL breaks a fundamental expectation of the user that refreshing a page resets its state. If I put a page into an unwanted state, or god forbid there is a bug that places it in an impossible state, I expect a refresh of the page to reset the state back. Putting state in the URL violates this principle.
Secondly, putting state in a URL breaks the expectation of the user for sharing locations. When I receive YouTube links from friends, half of the time the "t" parameter is set to somewhere in the video and I don't know if my friend explicitly wanted to provide a timestamp. The general user has no idea what ?t=294833289 means in a URL. It would be better to store that state somewhere else and have the user explicitly create a link with a timestamp parameter if the desired outcome was to link to an explicit point in the video. As it stands now, when I send YouTube links to friends I have to remember to clear the ?t= parameter before sharing. This is not good UX.
There are other reasons why I think its a bad idea but I don't want this comment to be too long.
That doesn't mean not to use search parameters though. Consider a page for a t-shirt, with options for color and size. This is a valid use case for putting the color and size in the URL because it's a location property - the resource for a blue XL shirt is different from a red SM shirt, and that should be reflected in the URL.
That's not to say that state should never be put in the URL - in some cases it makes sense. But that's a judgement call that the developer should make by considering what behaviour the user expects, and how the link will most likely be used. For a trivial example, it's unlikely that a user wants to share their scroll position or if a dropdown is open when sharing a page. But they probably want to share the location they've navigated to on a map, as it's unlikely they're sharing a link to `maps.google.com` with others (although debatably that's not state, but rather a location property).
I strongly agree with this, just couldn't be bothered to type it out. I've tried it both ways many times, and you are indeed right on the money.
The amount of state that early video games stored in like 256 bytes of ram was actually quite impressive. I bet with some creativity one could do similarly for a web app. Just don’t use gzipped b64-encoded json as your in-url state store!
My 8-bit IDE lets you share your ROM as a lzg/b64-encoded URL. Things get dicey when you go above 2000 characters or so.
With a custom compression dictionary made against your JSON schema, I would bet you could still pack a surprising amount of data into 256 bytes that way.
I tried this once and discovered that for us it worked even better when populating the dictionary with a bunch of commonly seen URLs. Like that includes the same field names as the json schema, but none of the other JSON Schema cruft, and it also includes commonly used values etc. It seemed like the smarter I tried to be, the worse the results got.
I just used Pako.js, which accepts a `{ dictionary: string }` option. Concat a bunch of common URLs together, done.
The only downside (with both our approaches) is if you add substantially many new fields / common values later on, you need to update the dictionary, and then old URLs don't work, so you'd need some sort of versioning scheme and use the right dictionary for the right version.
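A rough sketch of that approach, assuming I'm remembering pako's `dictionary` option correctly (the dictionary contents and field names here are invented):

    import pako from "pako";

    // Example dictionary: strings that appear in most payloads (field names plus
    // common values; real URLs reportedly worked even better, per the comment above).
    const dictionary = new TextEncoder().encode(
      '{"filters":{"brand":"","inStock":true},"sort":"price_asc","page":1,"color":"","size":""}'
    );

    function stateToParam(state: unknown): string {
      const bytes = new TextEncoder().encode(JSON.stringify(state));
      const compressed = pako.deflate(bytes, { dictionary });
      // base64url so it can go straight into a query string
      return btoa(String.fromCharCode(...compressed))
        .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
    }

    function paramToState(param: string): unknown {
      const b64 = param.replace(/-/g, "+").replace(/_/g, "/");
      const bytes = Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));
      // The same dictionary (and ideally a version marker) is needed to inflate.
      return JSON.parse(pako.inflate(bytes, { dictionary, to: "string" }));
    }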
That’s the reason i stay away and keep my customers away from SPAs. Good ole html forms do the trick for 99.95% use cases.
One might even say that hyperlinks are the engine of application state.
I like to keep state in the URL. It's nice when you can bookmark any section in an app and it brings you back to the exact same place, all the menus exactly the same. Also it's amazing for debugging. Any bug, I tell the user to send me the URL. I reproduce the issue instantly, fixed in 5 minutes. I wrote some very complex frontends without any tests thanks to this approach... Also it's great during development; when I make a change anywhere in the app, I just refresh the page... I never have to click through menus to get back to the part of the code I want to test. Really brings down my iteration time... Also I use vanilla JavaScript Web Components, so I don't have to wait for a transpiler or bundler. Then I use Claude Code. It's crazy how fast I can code these days when it's my own project.
Yes! This is a very under-utilized concept, especially with client-side execution (WASM etc!)
Few years back, I built a proof-of-concept of a PDF data extraction utility, with the following characteristic - the "recipe" for extracting data from forms (think HIPAA etc) can be developed independently of confidential PDFs, signed by the server, and embedded in the URL on the client-side.
The client can work entirely offline (save the HTML to disk, airgap if you want!) off the "recipe" contained in the URL itself, process the data in WASM, all client-side. It can be trivially audited that the server does not receive any confidential information, but the software is still "web-based", "browser-based" and plays nice with the online IDE - on dummy data.
Found a working demo link - nothing gets sent to the server.
https://pdfrobots.com/robot/beta/#qNkfQYfYQOTZXShZ5J0Rw5IBgB...
Finishing building a framework at the moment. I'd rather say that URLs are state descriptors... They don't contain all the state, but they are a kind of hash key that allows retrieving the application state. "Hypertext as the engine of application state."
I use URLs for pixel art: https://www.mathsuniverse.com/pixel-art?p=GgpUODLkg-N0JchwOF...
I'm not certain that I agree with this because a URL makes no claims about idempotency or side-effects or many other behaviors that we take for granted when building systems. While it is possible to construct such a system, URLs do not guarantee this.
I think the fundamental issue here is that semantics matter and URLs in isolation don't make strong enough guarantees about them.
I'm all for elegant URL design but they're just one part of the puzzle.
Yes, it does. HTTP PUT is idempotent.
The URL is not a HTTP method.
Reminds me of xlink:href with an #xpointer(xpath) — with it you could xinclude an inner XML node out of a remote file
I use this for my rss reader!
https://rssrdr.com/?rss=raw.githubusercontent.com/Roald87/Ha...
>If you need to base64-encode a massive JSON object, the URL probably isn’t the right place for that state.
Why?
I get it if we're talking about a size that flirts with browser limitations. But other than that I see absolutely no problem with this. In fact it makes me think the author is actually underrating the use-case of URL's as state containers.
Hot module replacement masks a lot of annoyances for end users. Yes, it's more instantaneous than reloading a page and relying on URLs for all of the state, and I'm not advocating hard for abolishing HMR anymore, but it would be nice if we still used way more URL state than is currently the case. Browsers will also hibernate tabs to varying degrees, server sessions expire all the time, things are not shareable. The only thing that works as users expect is URL state. One thing I absolutely hate about iOS apps is how every bit of state is lost if I just have the app in the background for a few seconds; this even applies to major apps like YouTube, Google Maps, many email clients, etc. Why do we live in this stupid world where things are not getting better, just because someone made things more convenient for developers?
PS: and I curse the day the social-media-brainwashed marketing freak coined the term "deep link" to mean just a normal link as it's supposed to work.
Modern browsers have an "open clean link" feature that strips all the query parameters (everything after the '?' character in the URL).
This is because many sites cram the URL full of tracking IDs, and people like to browse without that.
So if you are embedding state in your URL, you probably want to be sure that your application does something sane if the browser strips all of that out.
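"Something sane" usually just means falling back to defaults for anything missing; a small sketch (the parameter names are invented):

    // Read view state from the URL, falling back to defaults if a browser,
    // link shortener, or "copy clean link" feature stripped the query string.
    interface ViewState {
      page: number;
      sort: "newest" | "price";
      query: string;
    }

    const DEFAULTS: ViewState = { page: 1, sort: "newest", query: "" };

    function readViewState(search: string): ViewState {
      const params = new URLSearchParams(search);
      return {
        page: Number(params.get("page")) || DEFAULTS.page,
        sort: params.get("sort") === "price" ? "price" : DEFAULTS.sort,
        query: params.get("q") ?? DEFAULTS.query,
      };
    }

    const state = readViewState(window.location.search);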
> Everything after the '?' character.
It only strips known tracking parameters (like those utm_ query params). It does not remove all parameters; if it did, YouTube video links would stop working.
Hm, I didn't know that. Seems very easy to game then, just change your tracking parameter name to one that the browser doesn't strip.
I really like this approach, and think it should be used more!
In a previous experiment, I created a simple webpage which renders media stored in the URL. This way, it's able to store and render images, audio, and even simple webpages and games. URLs can get quite long, so can store quite a bit of data.
https://mkaandorp.github.io/hdd-of-babel/
Deeplinking is awesome! The Azure portal is my favorite example. You could be many layers deep in some configuration "blade" and the URL will retain the exact location you are in the UI.
Also to consider: bot traffic and SEO.
Depending on which mechanism you use to construct your state URLs they will see them as different pages, so you may end up with a lot of extra traffic and/or odd SEO side effects. For SEO at least there are clear directives you can set that help.
Not saying you shouldn't do this - just things to consider.
Canonical URLs come to the rescue.
Only for SEO - they don't help at all with aggressive AI scraper bots.
Letterboxd does this really well - each view is its own page! It's so pretty compared to other sites
I use the concept for https://libmap.org to save the state of the map. You can share the libmap link via mastodon social or bluesky to make it permanent.
This is a small hobby project, I am not in IT.
This is something you learn to appreciate when you do web scraping. I do overlook it for frontend webdev though
To fully describe client side state you also need to look at DOM and cookies. The server can effectively see this stuff too (e.g., during form post).
I design my SSR apps so that as much state as possible lives in the server. I find the session cookie to be far more critical than the URL. I could build most of my apps to be URL agnostic if I really wanted to. The current state of the client (as the server sees it) can determine its logical location in the space of resources. The URL can be more of an optional thing for when we do need to pin down a specific resource for future reference.
Another advantage of not urlizing everything is that you can implement very complex features without a torturous taxonomy. "/workflow/18" is about as detailed as I'd like to get in the URL scheme of a complex back office banking product.
This entire article is an argument against your approach here, and you're not really addressing any of its points.
Basically, your approach is easier to code, and worse to use. Bookmarks, multiple tabs, the back button, sharing URLs with others, it all becomes harder for users to do with your design. I mean feel free, because with many tech stacks it is indeed easier, but don't pretend it's not a tradeoff. It's easier and worse.
Maybe I'm misunderstanding what you're saying but applications like this tend to be horrible to use. How do you handle somebody navigating in two tabs at once? What about the back button?
Also bookmarks etc? For example if you have a view where you can have complex filters etc, you may want to bookmark this.
I guess they use something like sessionStorage to hold tab specific ids.
But something that can bite you with these solutions is that browsers allow you to duplicate tabs, so you also need some inter-tab mechanism (like the Broadcast Channel API or local storage with polling) to resolve duplicate ids.
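A rough sketch of that combination (the channel name, storage key, and message shapes are all made up):

    // Give each tab an id in sessionStorage, and use a BroadcastChannel to detect
    // the case where "duplicate tab" copied the id into a second tab.
    const channel = new BroadcastChannel("tab-ids"); // name is arbitrary

    let tabId = sessionStorage.getItem("tabId") ?? crypto.randomUUID();
    sessionStorage.setItem("tabId", tabId);

    // On startup, ask whether an existing tab already owns this id
    // (which happens when the browser duplicated the tab, copying sessionStorage).
    channel.postMessage({ type: "claim", id: tabId });

    channel.onmessage = ({ data }) => {
      if (data.type === "claim" && data.id === tabId) {
        // An existing tab owns this id: tell the newcomer to pick another.
        channel.postMessage({ type: "taken", id: tabId });
      }
      if (data.type === "taken" && data.id === tabId) {
        // We are the duplicate: re-roll our id.
        tabId = crypto.randomUUID();
        sessionStorage.setItem("tabId", tabId);
      }
    };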
Agreed. Also, when you paste somebody a URL, they should see what you saw... if at all possible.
This should be used more often. I wish websites like Google would respect the language given in the URL. They always try to guess my language based on IP and fail.
nuqs[0] is a great (React) library for managing state inside of the URL.
[0] https://nuqs.dev/
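Basic usage looks roughly like this (hedging slightly on the exact API; depending on the version you may also need its provider/adapter wrapper, and the "q" parameter is just an example):

    import { useQueryState } from "nuqs";

    export function SearchBox() {
      // "q" lives in the URL (?q=...), survives refresh, and is shareable.
      const [query, setQuery] = useQueryState("q"); // string | null until it's set

      return (
        <input
          value={query ?? ""}
          onChange={(e) => setQuery(e.target.value)}
          placeholder="Search…"
        />
      );
    }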
This is the first time I see this, thanks for sharing it
One barrier to adoption is that big URLs are just ugly. Things are smooshed together without spaces, URL encoding, human-readable words mixed with random characters, etc. I think even devs who understand what they're looking at find it a little unsatisfying.
Maybe a solution is some kind of browser widget that displays query params in a user-friendly way that hides the ugliness, sort of like an object explorer interface.
As an application developer I think this is very good advice, and I wish I would've been more strict about it earlier.
One of my previous side projects used this idea in the extreme: It's a two-player online word game (scrabble with some twists) but all the state is stored in the URL so it doesn't need a backend.
https://scrobburl.com/ https://github.com/Jcparkyn/scrobburl
React kid discovers the web
Holding the snark aside for second, I think there is some harsh truth here.
URL query params are not popular in the front-end developer world for some reason, probably bc the fundamentals of web dev are often skipped in favor of learning leetcode and all the React hooks. The same could be said for SQL and CSS.
I also don't think it's a good look that the author is a CTO and is just discovering how useful URL query params are. That being said, it's a pretty good and well-written blog post.
No snark. Genuinely happy. This is progress
Sure, and file names are state & attribute containers too. A URL is a uniform resource locator. You can hack it, of course, but this is no less kludgy than overloading file names. It never ceases to amaze me seeing the recycling of good and bad ideas in this field.
URLs have extra parts, like query parameters, to store that data. It's not a hack.
You are either changing the meaning of "state", or probably unaware of what it means. To start with, state of what? The app (HTTP server) or the HTTP client?
I think the author is referring to the state of the form.
The state of the form is its data.
Not quite. As the L in URL says, it is the locator or address of the state. The S in REST implies the same, indicating state as the content, not the path to it.
But from the viewpoint of a web app where you navigate between different (versions of) pages, the state of that app can be the address of the currently displayed page.
It's the state of your browser, not the app. App could be serving different pages to different clients at the same time.
State is just your location in state space.
An address book is not "state space". The country, land and things are the state.
Not every location represents a state, but every state can be considered a location.
If you want to argue against the use of URLs to represent state, I would concentrate on the “R” (resource) aspect.
I think you are talking about client's navigational state. The original title of this post was "app state ...". Still it is not clear about state of what.
Navigational state need not be confused with app state. Also talking about "state" as in "state machine" etc used to sound pretty academic with obscure meaning of the word "state". When someone says "state machine" they are basically saying "I'm a PhD and you are not". There are simpler and more crisp ways to convey things rather than via obscurity.
You can save so much data in the URL. I like how pocketcal.com stores the calendar information.
Any blob of bytes is a state container
>Scott Hanselman famously said “URLs are UI”
I actually implemented a comment system where users just pick any arbitrary URL on the domain, i.e., http://exampledomain.com/, and append /@say/ to the URL along with their comment, so the URL is the UI. An example comment would be typed in the URL bar like,
http://exampledomain.com/somefolder/somepage.html/@say/Hey! Cool somepage. - Me
And then my perl script tailing the webserver log file sees the line and adds the comment "Hey! Cool somepage. - Me" to the .html file on disk for comments.
Mmm.
You're doing two things:
1) you're moving state into an arbitrary, untrusted, easy-to-modify location.
2) you're allowing users to "deep link" into a page that is deep inside some funnel that may not be valid, or may not even exist at some future point in time, never mind skipping the messages/whatever further up.
You probably don't want to do either of those two things.
More good content with a bunch of GPT noise added, obvious from patterns like
No database. No cookies. No localStorage
Themes chosen. Languages selected. Plugins enabled.
Which have the pattern of rhetoric but no substance. Clearly the author put significant effort in, so why get an LLM to add noise?
Hello, I am the author of the article and I can explain a few things.
First of all thank you for your words about the content.
I get why you might feel that way. English isn’t my first language, so I sometimes use GPT to help me polish phrasing or find a smoother rhythm for certain lines.
But the ideas, structure, and all the writing direction are mine. I don't ask it to write articles for me. It just helps me express things more clearly. I treat it more like an editor than a writer.
Is it really an LLM? It's not like real humans can't write in the same style; LLMs have picked up on an existing stylistic tendency. I hate these patterns as much as anyone, and I have noticed them since long before transformers were a thing.
Duh :)
Hanselman famously said “URLs are UI” and he’s absolutely right
A challenge for this is that the URL is the most visible part of an HTTP request but there are many other submerged parts that are not available as UI yet are significant to the http response composition.
Additionally, aside from the very basic protocol, domain, and path, the URL is not a very human-friendly UI for composing state.
It's fast becoming a lost art (alongside ensuring the text can be read by the 10% of the male population that is colour blind). It's one thing to coach a junior dev on implementing it properly into a Nextjs app (or whatever is trendy at the time), but quite another to have to explain this stuff to a Product Manager. If you're going to spend copious amounts of time with a designer to make sure the site is pixel perfect visually you should also have time to get your URLs right.
This is a risky idea, actually — at least in its fully expanded form.
Sure, in the prismjs.com case, I have one of those comments in my code too. But I expect it to break one day.
If a site is a content generator and essentially idempotent for a given set of parameters, and you think the developer has a long-term commitment to the URL parameters, then it's a reasonable strategy (and they should probably formalise it).
Perhaps you implement an explicit "save to URL" in that case.
But generally speaking, we eliminated complex variable state from URLs for good reasons to do with state leakage: logged-in or identifying state ending up in search results and forwarded emails, leaking out in referrer logs and all that stuff.
It would be wiser to assume that the complete list of possible ways that user- or session-identifying state in a URL could leak has not yet been written, and to use volatile non-URL-based state until you are sure you're talking about something non-volatile.
Search keywords: obviously. Search result filters? Yeah. Sort direction: probably. Tags? Ehh, as soon as you see [] in a URL it's probably bad code: think carefully about how you represent tags. Presentation customisation? No. A backlink? No.
It's also wiser to assume people want to hack on URLs and cut bits out, to reduce them to the bit they actually want to share.
So you should keep truly persistent, identifying aspects in the path, and at least try not to merge trivial/ephemeral state into the path when it can be left in the query string.