As I discovered (to my chagrin), Mailman's current simple RFC 2369 List-Unsubscribe field doesn't carry a personalized link, even for personalized lists where there might be one in the footer.
My understanding from up-thread is that the RFC 2369 headers are added pre-personalization.
The token would be the encrypted (list, address) pair, possibly dated in the clear (to allow expiration of tokens and rotation of encryption keys). Unfortunately RFC 8058 doesn't discuss that; I guess John assumed that people would usually be unsubscribing from the most recent post.
I don't think we can assume that it will be the most recent post, but it would likely be _a_ recent post. If we add a timestamp to the encrypted token, I suggest allowing a pretty broad window (e.g. several months).
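Just to make that concrete, here's a minimal sketch of the dated-token idea, assuming we used the cryptography package's Fernet (whose tokens come out as URL-safe base64 and carry an authenticated-but-cleartext timestamp, so expiry and key rotation come along for free). The key handling and the window length are purely illustrative:

    # Sketch only: Fernet tokens embed a cleartext (but HMAC-protected)
    # timestamp and are URL-safe base64, so they can go straight into a URI.
    from cryptography.fernet import Fernet, InvalidToken, MultiFernet

    # In real life the key(s) would come from Mailman's config;
    # MultiFernet lets you rotate keys while old tokens keep working.
    crypt = MultiFernet([Fernet(Fernet.generate_key())])

    WINDOW = 180 * 24 * 3600   # the "pretty broad window"

    def make_token(list_id, address):
        return crypt.encrypt(f'{list_id}\n{address}'.encode()).decode()

    def check_token(token):
        try:
            data = crypt.decrypt(token.encode(), ttl=WINDOW)
        except InvalidToken:
            return None        # forged, corrupted, or expired
        list_id, address = data.decode().split('\n', 1)
        return list_id, address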
On the assumption that unsubscribes would be "rare", you wouldn't have to store the token in the database. (Have to think about that; you might be able to mount a DoS attack by sending lots of fake unsubscribes, and attack the encryption with known-plaintext attacks. Safety would suggest adding a random nonce, and that would need to be saved in the database.)
Do you just mean a salt? Or are you suggesting adding a per-subscription nonce stored in the DB? (In which case, that could simply be the URI token...) I agree with your analysis, but this would be a very specialized attack, especially if we don't provide direct feedback on whether the request was successful (which we are not required to do). I have trouble constructing a practical threat here, and I think the encrypted tuple, or a plaintext tuple plus a truncated HMAC (like how SRS works), would be sufficient.
The real risk without a per-subscription random unique identifier would be a replay attack -- the user is unsubscribed multiple times? That feels outside of the threat model.
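For comparison, the SRS-style alternative -- plaintext tuple plus truncated HMAC -- might look like the sketch below; the field layout, digest length, and window are all just assumptions for illustration. A replay only re-issues the same unsubscribe, and the embedded timestamp bounds how long even that is possible:

    # Sketch only: plaintext (timestamp, list, address) with a truncated
    # HMAC appended, in the spirit of SRS.  Nothing is stored per-token;
    # in practice the result would still want URL-safe encoding.
    import hashlib
    import hmac
    import time

    SECRET = b'site-wide secret from the config'   # assumption
    WINDOW = 180 * 24 * 3600                        # several months

    def _mac(payload):
        digest = hmac.new(SECRET, payload.encode(), hashlib.sha256)
        return digest.hexdigest()[:16]              # truncated, like SRS

    def make_token(list_id, address):
        payload = f'{int(time.time())}|{list_id}|{address}'
        return f'{payload}|{_mac(payload)}'

    def check_token(token):
        payload, sep, mac = token.rpartition('|')
        if not sep or not hmac.compare_digest(mac, _mac(payload)):
            return None                             # malformed or forged
        try:
            ts, list_id, address = payload.split('|', 2)
            if time.time() - int(ts) > WINDOW:
                return None                         # stale token
        except ValueError:
            return None
        return list_id, address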
Alternatively, you could reverse-proxy this directly through the front-end webserver to Mailman core with a new REST endpoint. That gives me bad vibes, because I can just see naive admins messing up the configuration of the single endpoint and "fixing" the problem by reverse-proxying port 8001 to the Internet. We'd have to be careful that the urlconf doesn't slop over to anything else; maybe something like "listen" on the Internet at (nginx notation):
location /mailman3/rfc8058/ { proxy_pass http://localhost:8001/3.2/PROXIED/rfc8058/; }
The "PROXIED" part is just to provide a REST namespace that is obviously separate from the domains, addresses, users, lists, etc. Maybe "SELF_AUTHORIZING" would be more accurate. I think that reverse proxy sepcification would be safe enough.
And you could combine the two strategies, which would allow sites that don't use Postorius to configure the reverse proxy, while sites that do would get it for "free", since we'd configure Postorius to do it.
I hadn't considered this -- I definitely like this approach, because it means the same function need not be implemented twice, differently. I guess the big question now is just what the simplest content of the URI should be.
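For what it's worth, with the proxy namespace sketched above, the headers on an outgoing post might end up looking something like this (the host and path are just the assumptions from the nginx example; the List-Unsubscribe-Post value is fixed by RFC 8058):

    List-Unsubscribe: <https://lists.example.com/mailman3/rfc8058/TOKEN>
    List-Unsubscribe-Post: List-Unsubscribe=One-Click

so the URI itself really only needs to carry the opaque token.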
Aside: RFC 8058 says
But anti-spam software often fetches all resources in mail header fields automatically, without any action by the user, and there is no mechanical way for a sender to tell whether a request was made automatically by anti-spam software or manually requested by a user.
Weirdly, the RFC doesn't address this problem at all. :-(
This has always been the risk with single-click anything. Given how common this is as a use case, I believe the anti-spam/anti-phishing systems have stopped clicking links willy-nilly, or at least special-case List-Unsubscribe URIs. I also see that inbound mail filters these days are more likely to rewrite clickable links to bounce through a redirector so the scan happens at access (rather than receipt) time -- so the link is only proxy-clicked when the user actually intends to follow it.
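One thing we can do at the proxy layer, though: RFC 8058 one-click requests have to be POSTs with a fixed body, so plain GET prefetches can be turned away before they ever reach core -- e.g. the earlier stanza could be tightened to something like (same hypothetical paths as before):

    location /mailman3/rfc8058/ {
        # One-click unsubscribes are POSTs; refuse anything else so a
        # link scanner that merely fetches the URI does nothing.
        limit_except POST { deny all; }
        proxy_pass http://localhost:8001/3.2/PROXIED/rfc8058/;
    }

A scanner that goes ahead and performs the full POST is, as the RFC concedes, indistinguishable from a real user.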
A bigger threat might be web archives that include the list headers and clickable links. The GenAI companies are absolutely _hammering_ anything they can get a hold of with spiders that ignore all robots.txt or rate limit indicators. I have to keep blocking user agents and IP ranges because I'm spending $100s/month in excess egress fees on badly-written LLM spiders. The Internet is an increasingly hostile place.
Regardless, not something we can do much about.
--Jered