On Firefox, web accessible resources are available at "moz-extension://<extension-UUID>/myfile.png". <extension-UUID> is not your extension's ID. This ID is randomly generated for every browser instance. This prevents websites from fingerprinting a browser by examining the extensions it has installed. https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...
The real friction in browser hopping isn't features — it's keeping your workflow portable. Bookmarks especially. Each browser has its own sync silo (Chrome → Google, Firefox → Mozilla, Safari → iCloud).
For multi-browser setups (Firefox for fingerprint resistance, Chrome for the sites that only work there), cross-browser bookmark sync is weirdly undersolved. Xbrowsersync, marksyncr, and a few others exist but most people don't know about them.
Anecdote: yesterday I exported my bookmarks into an HTML file and then asked for a script that would make a webpage out of them, with search and favicon download from each domain. Better than any bookmark bar, imho.
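Roughly, a sketch of what such a script can look like (assuming the usual Netscape-format bookmarks.html export and Node.js; the "favicon download" here is just pointing each entry at its domain's /favicon.ico, and titles aren't HTML-escaped):

    // Sketch only: turn an exported bookmarks.html (Netscape format) into a
    // single searchable page. File names and the favicon shortcut are illustrative.
    const fs = require("fs");

    const input = fs.readFileSync("bookmarks.html", "utf8");

    // Bookmark exports are mostly a flat list of <A HREF="...">Title</A> tags.
    const links = [...input.matchAll(/<A\s+HREF="([^"]+)"[^>]*>([^<]*)<\/A>/gi)]
      .map(([, url, title]) => ({ url, title: title || url }))
      .filter(({ url }) => url.startsWith("http")); // skip bookmarklets, place: URLs, etc.

    const items = links.map(({ url, title }) =>
      `<li><img src="${new URL(url).origin}/favicon.ico" width="16" height="16"> ` +
      `<a href="${url}">${title}</a></li>`).join("\n");

    fs.writeFileSync("index.html", `<!doctype html>
    <meta charset="utf-8">
    <input id="q" placeholder="search bookmarks" autofocus>
    <ul id="list">
    ${items}
    </ul>
    <script>
    const q = document.getElementById("q");
    const list = document.getElementById("list");
    q.addEventListener("input", () => {
      const needle = q.value.toLowerCase();
      for (const li of list.children) {
        li.hidden = !li.textContent.toLowerCase().includes(needle);
      }
    });
    </script>`);

    console.log("Wrote index.html with " + links.length + " bookmarks");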
This is a great idea, thanks. I built an IPv6 only webhost in Digital Ocean a while ago as a learning exercise and it’s been sitting idle. Making a personal portal sounds like a fun project.
I actually don't even care too much if they try to detect that I am the X from last time.
The issue is them selling the data, or using it in unrelated places, or trying to identify me as a person. And their programmers are not required to report such behavior to law agencies / the public, nor rewarded when they do. And the law does not punish it.
Anecdotally, I sometimes notice my computer fan spinning ferociously... it's almost always because I have left a firefox tab with linkedin open somewhere.
Are they Bitcoin mining or are they just incompetent?
Though LinkedIn in Firefox with uBlock Origin allowing just enough (not sure if that's relevant; I just haven't run it without) doesn't last long before CPU & memory usage rockets, the fan spins up, etc. (ime, anyway)
Doesn't the idea of swapping extension-specific IDs for browser-specific extension IDs mean that instead of your browser being identifiable, you become identifiable?
I mean, it goes from "Oh, they have X, Y, and Z installed" to "Oh, it's Jim Bob, only he has that unique set of extension IDs".
16 bytes is a lot. 4 bytes are within reach, we can scan all of them quickly, but even 8 bytes are already too much.
Kolmogorov said that computers do not help with naturally hard tasks; they raise a limit compared to what we can do manually, but above that limit the task stays as hard as it was.
Let's go a step further and just iterate through them on the client. I plan on having this phone well past the heat death of the universe, so this is guaranteed to finish on my hardware.
    // Generator that walks the whole 2^128 UUID space, treating the 16 bytes
    // as one big-endian counter and yielding each value formatted as a UUID.
    function* uuidIterator() {
      const bytes = new Uint8Array(16);
      while (true) {
        yield formatUUID(bytes);
        // Increment the 16-byte counter with carry propagation.
        let carry = 1;
        for (let i = 15; i >= 0 && carry; i--) {
          const sum = bytes[i] + carry;
          bytes[i] = sum & 0xff;
          carry = sum > 0xff ? 1 : 0;
        }
        if (carry) return; // counter wrapped: every UUID has been emitted
      }
    }

    // Render 16 bytes in the standard 8-4-4-4-12 hex grouping.
    function formatUUID(b) {
      const hex = [...b].map(x => x.toString(16).padStart(2, "0"));
      return (
        hex.slice(0, 4).join("") + "-" +
        hex.slice(4, 6).join("") + "-" +
        hex.slice(6, 8).join("") + "-" +
        hex.slice(8, 10).join("") + "-" +
        hex.slice(10, 16).join("")
      );
    }
I don't think that's the case. I have the Earth View extension installed which shows a random google earth image.
I have this set as my homepage in Firefox as moz-extension://<extension-id>/index.html, and this has not changed since installing the extension. The page still works.
Doing it on restart makes the mitigation de facto useless. How often do you see 10, 20, 30 days (or even longer) of desktop uptime these days? And no one regularly restarts their core applications while their desktop is still up.
There isn't enough energy in the solar system to count to 2^128. Now a UUID v4 "only" has 122 bits of entropy. Regardless, you cannot realistically scan the UUID domain. It's not even a matter of Moore's law; it is a limitation of physics that will stand until computers are no longer made of matter.
Why does the browser even allow a website to query for installed extensions? I really don't see what the point of that would be.
The website should never be able to tell what's running in my browser, or on my computer in general. The browser renders the page, maybe runs a little Javascript, but there's no reason why it should be able to query anything about my environment.
I wonder how much stuff would break if the Chrome sandboxing was extended to prevent access to chrome-extension:// from JavaScript loaded off random websites.
And just in case the magnitude of that isn't obvious to people, that means there are 340,282,366,920,938,463,463,374,607,431,768,211,456 total possible UUIDs. Good luck.
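Back-of-the-envelope, assuming an absurdly generous billion probes per second:

    // 2^122 possible v4 UUIDs, probed at 10^9 per second
    const totalUuids = 2 ** 122;                  // ~5.3e36
    const seconds = totalUuids / 1e9;             // ~5.3e27 s
    const years = seconds / (365.25 * 24 * 3600); // ~1.7e20 years
    console.log(years / 1.38e10);                 // ~1.2e10 times the age of the universe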
Yes, that's how browser fingerprinting works, and it is impossible to defeat because there are just too many variations: monitors (relevant for fonts), simple things like user agent, etc.
And browsers trying to mitigate fingerprinting are miserable to use (fixed window size with only Arial available, etc) and probably fingerprintable anyway.
Skimming the list, looks like most extensions are for scraping or automating LinkedIn usage. Not surprising as there's money to be made with LinkedIn data. Scraping was a problem when I worked there, the abuse teams built some reasonably sophisticated detection & prevention, and it was a constant battle.
In order to create the data source that LinkedIn's extension-fingerprinting relies on to work, someone (at LinkedIn*?) almost certainly violated the Chrome Web Store TOS—by (perversely*) scraping it.
* if LinkedIn didn't get it from an existing data source
Programmers don't appreciate the fact that you can just violate terms of service. You can just do it. It's okay. The police won't come after you. Usually.
I think the point is more "in order to prevent people from scraping their site, which is against their ToS, they scraped some other site, against its ToS".
Indeed. I read a lot of comments like the one you are responding to on HN. It seems like there is a type of person who thinks that writing down what their rules are has some magical power.
“This isn’t what it was intended for”. Who cares?
A long long time ago in a galaxy far far away I would encounter warnings on pirating websites saying “If you are an FBI agent you are not allowed to continue on this site”. Imagine their utter disbelief and shock if they were to be arrested by an FBI agent that clicked past the warning anyway.
I agree it must be programmers as a type that like rules a lot and, they think, what a perfect world it could be if people would follow them.
In the first place, no one said they needed to, only that they probably did.
Secondly, it's not "3000 extensions". They didn't somehow magically divine that the 2953 (+/-47) extensions we see here were the ones that they needed to download in order to be able to exploit the content-accessible resources described in their extension manifest. They looked at a much larger set, and it got filtered down to these 2953 that satisfied the necessary criteria.
Lol no, did you even read the list? You could pay someone to just search "LinkedIn" and "talent" and "recruiting" on the chrome web store and download each extension. It's probably harder to automate this than it is to do it manually. This is something you could develop in an afternoon and pay a small team of people to do for pennies on the dollar. Even ten thousand extensions is nothing. Spread that over years and this is trivial.
"The code" here you're referring to (fetch_extension_names.js[1]) isn't and doesn't claim to be LinkedIn's fingerprinting code. It's a scraper that the researcher behind this repo wrote themselves in order to create the CSV of the data that they're publishing here.
LinkedIn's fingerprinting code, as the README explains, is found in fingerprint.js[2], which embeds a big JSON literal with the IDs of the extensions it probes for. (Sickeningly enough, this data starts about two-thirds of the way through the file* and isn't the culprit behind the bulk of its 2.15 MB size…)
* On line 34394; the one starting:
    const r = [{
      id: "aacbpggdjcblgnmgjgpkpddliddineni",
      file: "sidebar.html"
By looking at the list, it seems like it is not really "sophisticated". It is just a list based on names (e.g. if there is "email" in the name). The majority of the extensions do not even ask for permission to access linkedin.com.
Do they respect my data? Why do they get to track me across sites when I clearly don't want them to, but someone can't scrape their data when they don't want them to? Why should big companies get a pass but individuals not? They clearly consider internet traffic fair game and are invasive and abusive about it, so it is not only fair to be invasive and abusive back, it is self-defense at this point.
Are you talking about Recall, which got such huge negative press they delayed it a year and added a clear opt-in? And never sent anything off the device itself?
If anyone has evidence of constant tracking and reporting then please share it.
Well, I won't touch Windows 11 with a ten-foot pole, and I don't know if what I am referring to is called "Recall". Not that much into the MS terminology. I also read about Windows 11 having all kinds of shenanigans to suddenly upload data into OneDrive. Wouldn't be surprised if that also included screenshots, or could "accidentally" lead to that happening. Screenshotting every few seconds is unacceptable even if it stays on the device per se. Once data exists, it has the potential to leak, and we have not even started considering malware infection yet. Huge risk to people's privacy and safety online.
We can stop pretending it's all alright at some point, can't we? We don't need more enshittification. Windows 11 is already a disaster that no one wants. It already starts with its idiotic HW requirements, trying to make perfectly fine HW obsolete. $$$
In this context, "protecting" means the interest of linkedin who aggressively sells the data. Users that give data to linkedin are not protecting their data either way.
> Oh right, companies change ToS and EULA and "agreements" without notice, without due process, and without recourse.
Companies change their terms of service all the time. They usually send emails about it.
I've responded to decline them a handful of times and asked for my account to be deleted. I chuckle slightly at the work it creates, but sometimes it has been easier to close an account that way.
I didn't want the web to turn into monolithic platforms. I abhor this status quo.
You cannot function without these enterprises, but that doesn't mean they're ideal or even ethical.
Microsoft wins because of network effects. It's impossible to compete. So I think it should be allowed to assail their monopoly here by any means. It's maximally fair for consumers and for free markets.
Ideally capitalism remains cutthroat and impossible to grow into undislodgeable titans.
Even more ideally, this would become a distributed protocol rather than a privately owned and guarded database.
I think they framed it this way because they don't consider scraping abuse (to be fair, neither do I, as long as it doesn't overload the site). Botting accounts for spam is clear abuse, however, so that's fair game.
No, I consider all data collection and scraping egregious. From that perspective, LinkedIn is hypocritical when Microsoft discloses every filesystem search I do locally to bing.
I'm sure there are issues with fake accounts for scraping, but the core issue is that LinkedIn considers the data valuable. LinkedIn wants to be able to sell the data, or access to it at least, and the scrapers undermine that.
They could stop all the scraping by providing a downloadable data bundle like Wikipedia.
Thinking more about it, I don't think it's a terrible thing that they prevent scraping. Their listings are already suffering from being flooded with garbage applications and having to sift through tons of noise; allowing scraping would just amplify that and make the platform almost entirely worthless.
I "scrape" LinkedIn in a roundabout way for personal use, and really what I've found is that I should just maybe not bother at all. I can't get through the noise even when I'm applying at places that heavily match my skill set, and just get automated rejection emails.
What is abuse? Is it anything that reduces my profit margin? Or is it anything that makes the world a worse place? The Flock CEO called Deflock terrorism, is he right?
We enjoy the fruits of an LLM or two from time to time, derived from hoards of ill-gotten data. LinkedIn has the resources to attempt to block scraping, but even at the resource scale of LI I doubt the effort is effective.
I am not denying that scraping is useful. If it wasn't people wouldn't do it. But if the site rules say you aren't allowed to scrape, then I don't think people should be hostile towards the people enforcing the rules.
Well, they can try to enforce the rules; that's perfectly fair. At the same time, there are many methods of "trying" which I would not consider valid or acceptable ones. "Enforcing the rules" does not give a carte blanche right to snoop and do "whatever's necessary." Sony tried that with their CD rootkits and got multiple lawsuits.
this exchange -- obvious critical / perhaps insurrection speech versus a stable voice of business economics -- should be within the purview of an orderly and predictable legal environment. BUT things moved quickly in the phone battles. Some people say that the legal system has never caught up to the data brokering, and in fact the surveillance state grew by leaps and bounds.
So, reasonable people may disagree. This is a fine place to mention it .. what if individual profiles built at LinkedIn are being combined with illegitimate and even directly illegal surveillance data and sold daily? Everyone stand up and salute when LinkedIn walks in the room? there has to be legal and direct ways to deal with change, and enforcement to complete an orderly and predictable economic marketplace.
>BUT things moved quickly in the phone battles. Some people say that the legal system has never caught up to the data brokering, and in fact the surveillance state grew by leaps and bounds.
Partially by discrepancy in how responsive you can be or comprehensive you must be to win the next round of cat-and-mouse, and partially because a private/corporate surveillance apparatus is useful to a government that might otherwise be hampered by constitutional bounds.
The big social media businesses deserve a Teddy Roosevelt character swooping in and busting their trusts, forcing them to play ball with others even if it destroys their moats. Boo hoo! Good riddance. World's tiniest violin.
This is a popular position across the aisle. Here's hoping the next guy can't be bought, or at least asks for more than a $400M tacky gold ballroom!
I mean, regardless of who they are or even if you don’t like what LinkedIn does themselves with the data people have given them, the random third parties with the extensions don’t additionally deserve to just grab all that data too, do they?
Eh. I worked at a company which made an extension which scraped LinkedIn. We provided a service to recruiters, who would start a hiring process by putting candidates into our system.
The recruiters all had LinkedIn paid accounts, and could access all of this data on the web. We made a browser extension so they wouldn’t need to do any manual data entry. Recruiters loved the extension because it saved them time.
I think it was a legitimate use. We were making LinkedIn more useful to some of their actual customers (recruiters) by adding a somewhat cursed API integration via a Chrome extension. Forcing recruiters to copy and paste didn't help anyone. Our extension only grabbed content on the page the recruiter had open. It was purely read-only and scoped to the user.
Doesn't sound like your operation was particularly questionable, but I can imagine there must be some of those 3,000 extensions where the data flow isn't just "DOM -> End User" but more of a "DOM -> Cloud Server -> ??? -> Profit!", with perhaps a little detour where the end user gets some value too as a hook to justify the extension's existence.
I started there but it felt like a dodgy way (as it could be seen to be illegal).
We then just went all official and went through Google search APIs with LinkedIn as the target.
Worked a treat and was cheaper than Recruiter!!!
So when you pay the biggest scraper, it's OK! Same data, different manner.
Chrome is the new IE6. Google set themselves up to be the next Microsoft and is "ad friendly" in all the creepy ways, because that's what Google is: an ad company. All they've contributed to security is diminishing the capability of ad blockers and letting malware do bad things to you as consumers.
However, they do contribute to security: Chrome was first to implement Site Isolation, sandboxing too. These are essential security features for modern browsers. They are also not doing too bad with patching and security testing.
That's because you're not aware enough of being spied on at every single step you make. The issues are now more or less invisible (the tracking being more, and the lobotomized adblockers being less)
Brave feels like using Chrome. The transition was seamless, even as a developer who uses the devtools. Obviously that's because it's almost the same code, but Brave is much more privacy-friendly, right?
Brave was found to be mostly just different adware years ago, I thought. It's essentially a de-Googled Chrome, but with their adware instead of Google's.
If you want a clean Chrome, use ungoogled-chromium. Like with IE6, some stuff just doesn't work in LibreWolf (a less scummy Firefox), so I use ungoogled-chromium when that happens, and I just don't do anything Google-ish on it so that it doesn't latch onto Google again.
A pure unregulated market doesn't guarantee free-market assumptions, does it? Capitalism doesn't need it. Without mechanisms that allow for the free entry/exit of competitors, fair and simultaneous access to information, prevention of cartels/price fixing, and the rest of the assumptions needed for a perfect free market, the market will tend towards monopolies due to cumulative advantage (known in economics as the Matthew effect), since small advantages compound into dominance.
Patch Firefox so navigator.webdriver is always false, then remote control it. Seems not easily detectable. You could still watch for fast input patterns...
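(The quick-and-dirty script-level stand-in, if you don't want to patch the source, is just shadowing the getter; easier for a page to notice than a real source patch, though:)

    // Script-level stand-in for the source patch suggested above:
    // shadow navigator.webdriver so simple checks see false.
    // A page can still spot the altered property descriptor, which is
    // why patching the browser itself is the stealthier option.
    Object.defineProperty(navigator, "webdriver", {
      get: () => false,
      configurable: true,
    });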
LinkedIn has been employing a lot of strange dark patterns recently:
* Overriding scroll speed on Firefox Web. Not sure why.
* Opening a profile on mobile web, then pressing back to go to the last page, takes me to the LinkedIn homepage every time.
* One of their analytics URLs is a randomly generated path on www.linkedin.com, supposedly to make it harder to block. Regex rules in uBlock Origin sufficiently stop this.
Giving them the benefit of the doubt here obviously, I know they're in an all out war with the contact database industry. Going from websoup to agents dialing out to rent-a-human services requires different tactics.
I've been wondering why my scroll speed was off on LinkedIn; I inspected scroll-related CSS without finding an answer and thought it was a bug. Anyone know what property does this? I might try to fix it with uBO scripts.
I think they want you to feel disoriented.
Why do they do all this bs and not fix the bug that happens when you insert Unicode U+202E in your name?
I've been having loads of fun with that but it's never been fixed. Anyone tagging me in a comment makes their input right-to-left unless they backspace the tag or insert newline. It also jumbles notification text because your name is concatenated to the notification static text.
You can also create an inverted link but it isn't clickable, just like other unicode links which aren't punycode-encoded on LinkedIn but aren't clickable (on the clients I've tried).
- scroll speed - unsure of ulterior motives, but i've seen this even on some foss things. i think some people just think it looks cool/modern/"responsive"/whatever (a sketch of the usual mechanism is below this list)
- back - hijacking it seems fairly common on malicious/dark-pattern sites to try to trap you on them. not sure why because you can just leave and it seems it would obviously piss someone off
- analytics paths - not everyone may know about/how to use regex rules for it or may use something else that doesn't support it (the stripped down ublock for chrome? i don't know if it can or not). sites seem to do this with malicious js code as well, presumably to prevent blocking
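Re scroll speed: purely a guess at the mechanism (not pulled from LinkedIn's code), but the usual way sites end up changing scroll speed is a non-passive wheel handler that cancels the native scroll and re-issues a scaled one:

    // One common pattern (speculative, not taken from LinkedIn's code):
    // cancel the native wheel scroll and replace it with a scaled version.
    window.addEventListener("wheel", (e) => {
      e.preventDefault();                       // suppress native scrolling
      window.scrollBy({ top: e.deltaY * 0.5 }); // re-scroll at half speed
    }, { passive: false });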
It could very much be confirmation bias, but I do feel like most "please use our app" popups appear after a mobile site breaks or refuses to load something
I also really don't understand why their subscription is so extremely expensive for someone who is not a recruiter.
It's already a sycophantic cesspool of corporate drones repeating mindless PR. I unfollow everyone who re"tweets" feel-good memes or corporate crap and I have very few people I follow left over :) Critical discussion doesn't exist, if I comment anything that's not 100% celebratory of so-called company successes I get blocked.
Fingerprinting. There are a few reasons you'd do it:
1. Bot prevention. If the bots don't know that you're doing this, you might have a reliable bot detector for a while. The bots will quite possibly have no extensions at all, or even better specific exact combination they always use. Noticing bots means you can block them from scraping your site or spamming your users. If you wanna be very fancy, you could provide fake data or quietly ignore the stuff they create on the site.
2. Spamming/misuse evasion. Imagine an extension called "Send Messages to everybody with a given job role at this company." LinkedIn would prefer not to allow that, probably because they'd want to sell that feature.
> The bots will quite possibly have no extensions at all
I imagine most users will also not have extensions at all, so this would not be a reliable metric to track bots. Maybe it might be hard to imagine for someone whose first thing to do after installing a web browser is to install some extensions that they absolutely can't live without (ublock origin, privacy badger, dark mode reader, noscript, vimium c, whatever). But I imagine the majority of casual users do not install any extensions or even know of its existence (Maybe besides some people using something like Grammarly, or Honey, since they aggressively advertise on Youtube).
I do agree with the rest of your reasons though, like if bots used a specific exact combination of extensions, or if there was an extension specifically for LinkedIn scraping/automation they want to detect, and of course, user tracking.
I wrote some automation scripts that are not triggered via browser extensions (e.g., open all my sales colleagues’ profiles and like their 4 most recent unliked posts to boost their SSI[1], which is probably the most ‘innocent’ of my use-cases). It has random sleep intervals. I’ve done this for years and never faced a ban hammer.
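(The "random sleep" bit is nothing fancy, just jittered delays between actions; the names and ranges below are illustrative, not my actual scripts:)

    // A jittered delay so automated actions don't fire at machine-regular intervals.
    const randomSleep = (minMs, maxMs) =>
      new Promise(resolve => setTimeout(resolve, minMs + Math.random() * (maxMs - minMs)));

    async function run(profileUrls) {
      for (const url of profileUrls) {
        // ...open the profile and like its recent posts here (site-specific, omitted)...
        await randomSleep(30_000, 120_000); // wait 30s to 2min between profiles
      }
    }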
Wonder if, with things like Moltbot taking the scene, a form of "undetectable LinkedIn automation" will start to manifest. At some point they won't be able to distinguish between a chronically online seller adding 100 people per day with personalized messages and an AI doing it with the same mannerisms.
Third-party tools don't bring money to LinkedIn, that's the issue. Rather than try to compete, it's much easier to force you to use their tools! Reddit did the same thing.
Easy solution is to sell a plan that explicitly allows third-party tool usage. Then they get the money and the users get the tooling LinkedIn is incapable of building themselves.
(except they won't, because they're not after money but engagement, and their built-in tools suck on purpose to maximize wasted time)
> This repository documents every extension LinkedIn checks for and provides tools to identify them.
I get that the CSV lists the extensions, and the tools are provided in order to show work (mapping IDs to actual software). But how was it determined that LinkedIn checks for extensions with these IDs?
Technical writeup from a few weeks ago by a vendor that explains how LinkedIn does it, then boasts that their approach is "quieter, harder to notice, and easier to run at scale":
Reading the fingerprint.js is interesting; it's not just the thousands of extensions. It looks like it's also probing for a long list of WebGL extensions, fonts, and other capabilities. There are reCAPTCHA v3 references in there too.
Perhaps an overly aggressive attempt to block bots.
The list of extensions being scanned for is pretty clear and obvious. What is really interesting to me are the extensions _not_ being scanned for that should be.
The big one that comes to mind is "Contact Out" which is scan-able, but LinkedIn seems to pretend like it doesn't exist? Smells like a deal happened behind the scenes...
Another thing... they alter the localStorage & sessionStorage prototype by wrapping the native methods with a wrapper that prevents keys not in their whitelist from being set.
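A minimal sketch of what that kind of wrapping looks like in general (the keys and whitelist here are invented; I haven't de-minified their exact version):

    // Illustrative only: wrap Storage.prototype.setItem so that writes to keys
    // outside an allow-list are silently dropped. Affects both localStorage and
    // sessionStorage, since both are Storage instances. Key names are made up.
    const allowedKeys = new Set(["li_theme", "li_lang"]); // hypothetical whitelist
    const nativeSetItem = Storage.prototype.setItem;
    Storage.prototype.setItem = function (key, value) {
      if (!allowedKeys.has(key)) return;           // drop non-whitelisted writes
      return nativeSetItem.call(this, key, value); // otherwise defer to the native method
    };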
LinkedIn has also started sending a great deal of spam:
A $7.5B chip merger
Pinterest prepares layoffs
Healthcare premiums surge
Autodesk to cut 7% of jobs
Ozempic keeps getting cheaper
Since the "unsubscribe" link does not lead to a working page, this seems like a trivial violation of even what laughable protections CAN-SPAM alleges to offer.
And what's with some of these? Badmouthing employers is an odd choice for a platform that makes its money from them. Or perhaps now all the revenue is ad-derived?
I didn't find popular extensions like uBlock or other ad blockers.
The list is full of scammy looking data collection and AI tools, though. Some random names from scrolling through the list:
- LinkedGPT: ChatGPT for LinkedIn
- Apollo Scraper - Extract & Export Apollo B2B Leads
- AI Social Media Assistant
- LinkedIn Engagement Assistant
- LinkedIn Lead Magnet
- LinkedIn Extraction Tool - OutreachSheet
- Highperformr AI - Phone Number and Email Finder
- AI Agent For Jobs
These look like the kind of tools scummy recruiters and sales people use to identify targets for mass spamming. I see several AI auto-application tools in there too.
> I suggest everyone take a look at the list of extensions and their names for some very important context[…] I didn't find popular extensions like uBlock
Unsurprising outcome since uBlock (specifically: uBlock Origin Lite, the only version available for Chrome on the Chrome Web Store) makes itself undetectable using this method. (All of its content-accessible resources have "use_dynamic_url" set to "true" in its extension manifest.) So its absence in this data is not dispositive of any actual intent by LinkedIn to exclude it—because they couldn't have included it even if they wanted to.
LinkedIn itself provides tools for scummy recruiters to mass spam, so this is just them protecting their business.
Also, not all of them are data collection tools. There are ad blockers listed (Hide LinkedIn Ads, SBlock - Super Ad Blocker) and just general extensions (Ground News - Bias Checker, Jigit Studio - Screen Recorder, RealEyes.ai — Detect Deepfakes Across Online Platforms, Airtable Clipper).
So every Chrome extension that wants to avoid being detected this way needs to proxy fetch() on the target site; imagine someone with a bunch of them installed having every legit HTTP request on the target site go through a big stack of proxies.
I started using Chrome at version 2 I think. It still had the 3D logo. It was such a breath of fresh air and the big innovation was running one process per tab. Firefox existed but the entire browser could (and did) hang. And IE was... well, IE.
I did have a relatively early beef with Chrome though, which was that I couldn't completely opt out of Flash. As in, I didn't even want it installed. This turned out to be an issue because Flash turned out to be one of the earliest vectors for so-called "zombie cookies".
Fingerprinting in general has been a longstanding problem and has become more and more advanced.
Add to this that Google is, first and foremost, an advertising business, and they've become increasingly hostile to ad-blocking tech for obvious reasons.
Basically what I'm getting at is something I couldn't have imagined a decade ago: I think I really have to switch away from Chrome to something that takes privacy and security seriously, so that LinkedIn can't do things like this. And I increasingly don't trust Google to do that.
I actually have more trust in Apple because they have historically been user-focused eg blocking Meta's third party cookies. But obviously Safari isn't an option because it's not cross-platform.
I'm not sure I trust the current state of Mozilla. What's the alternative? Brave? Is Opera still a thing? I honestly don't know.
What I really want is a cross-platform browser written in Rust that black-holes ads out of the box. Why Rust? Memory safety. I simply don't trust a large C/C++ codebase to never have buffer overruns. Memory safety has become too important.
I don't want my browser to provide information on what extensions I'm using to a site and that shouldn't be a thing I have to ask for or turn on in any way.
LinkedIn is such a shitty wannabe-HR adult-day-care recruiting BS platform; if it went offline tomorrow and never came back, not a single tear would be shed by any engineer.
I'm probably on the list. I made a LinkedIn Redactor that allowed you to add keywords and remove posts from your thread that included such words. It's the X feature but for LinkedIn. Anyway, I got a cease and desist from those lame fucks at LI. So I removed it from the Chrome store, but it's still available on GitHub.
No. Firefox always randomizes the extension ID used for URLs to web accessible resources on each restart [1]. Apparently, manifest v3 extensions on Chromium can now opt into similar behavior [2].
That's a different form of defense. The original claim in this thread was that LinkedIn's fingerprinting implementation was making cross-site requests to Chrome Web Store, and that they were reading back the response of those requests.
Firefox isn't susceptible to that, because that's not how Firefox and addons.mozilla.org work. Chrome, as it turns out, isn't susceptible to it, either, because that's also not how Chrome and the Chrome Web Store work. (And that's not what LinkedIn's fingerprinting technique does.)
(Those randomized IDs for content-accessible resources, however, do explain why the technique that LinkedIn actually uses is a non-starter for Firefox.)
An additional improvement added in manifest v3 in both Chromium and Firefox is that extensions can choose to expose web accessible resources to only certain websites. Previously, exposing a web accessible resource always made that resource accessible to all websites.
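For reference, both knobs (the per-site restriction and the use_dynamic_url opt-in mentioned upthread) live on web_accessible_resources entries in an MV3 manifest; roughly, with illustrative resource names and site:

    "web_accessible_resources": [
      {
        "resources": ["sidebar.html", "images/*.png"],
        "matches": ["https://example.com/*"],
        "use_dynamic_url": true
      }
    ]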
It doesn't work. The person who posted the comment you're responding to has absolutely no idea what he's talking about. He confabulated the entire explanation based on a single misunderstood block of code that contains the comment «Remove " - Chrome Web Store" suffix if present» in the (local, NodeJS-powered) scraper that the person who's publishing this data themselves used to fetch extension names.
From memory from working with these a couple of years ago:
Firefox extension asset URLs are random and long (there's a UUID in there iirc). The extension itself can discover its randomized base so that it can output its asset URLs, but webpage code can't.
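A rough sketch of that asymmetry (the path is just the MDN example quoted elsewhere in the thread):

    // Inside the extension's own code: resolves to the per-install random base,
    // e.g. moz-extension://<random UUID>/images/my-image.png
    const assetUrl = browser.runtime.getURL("images/my-image.png");
    // Webpage code has no equivalent call and can't guess the UUID,
    // so it can't probe for the resource the way chrome-extension:// URLs allow.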
I'm not sure how you'd patch that. Any request that's made from the current open tab / window is made on behalf of the user. From my point of view, it's impossible for the browser to know if the request is legit or not.
An ideal implementation of the same origin policy would make it impossible for a site (through a fetch call or otherwise) to determine whether an extension resource exists/is installed or the site simply lacks permission to access it.
Is there no browser setting to defend against this attack? If not, there should be, versus relying on extension authors to configure or enable such a setting.
I imagine that it would require browsers to treat web requests from JS differently from those initiated by the user, specifically pretending the JS-originating requests are by logged-out or "incognito" users (by, I suppose, simply not forwarding any local credentials along, but maybe there's more to it than that).
Which would probably wreak havoc with a lot of web apps, at least requiring some kind of same-origin policy. And maybe it messes with OAuth or something. But it does seem at least feasible.
How do you patch it? The extensions themselves (presumably) need to access the same web accessible resources from their content scripts. How do you differentiate between some extension’s content script requesting the resource and LinkedIn requesting it?
The file is then available using a URL like: moz-extension://<extension-UUID>/images/my-image.png
<extension-UUID> is not your extension's ID. This ID is randomly generated for every browser instance.
This prevents websites from fingerprinting a browser by examining the extensions it has installed.
It does by default, except for the files from the extension that the extension author has explicitly designated as content-accessible. It's explained ("Using web_accessible_resources") at the other end of the link.
If this is true, it's insane that this would work:
- why does CWS respond to cross-site requests?
- why is chrome sending the credentials (or equivalent) in these requests?
- why is the button enabled server-side and not via JS? Google must be confident in knowing the exact and latest state of your installed extensions enough to store it on their servers, I guess
It's not true. The person you're responding to has a habit of posting implausible-but-plausibly-plausible nonsense, and it's not how this works at all.
I made the mistake of trying to skim the code hastily before I had to leave to run an errand, and yes it turns out I was wrong, but please refrain from the personal comments, and no, I don't have any such "habit."
Wrong again. (PS: The fact that you have now replied—which automatically disables comment deletion—is the only thing that prevented my removing it just now. So great job.)
> The fact that you have now replied—which automatically disables comment deletion—is the only thing that prevented my removing it just now. So great job.
How was I supposed to know that you intended to delete it?
In any case, you may still have time to edit your comment, as I did with my erroneous root-level comment, since I can't delete that either, for the same reason.
Not interested. You also shouldn't have done that. You broke the thread—exactly what HN's no-deleting-comments-that-have-replies check was created to prevent.
I wrote an erroneous comment in haste, which I regret. However, this kind of thing happens countless times every day on HN. It's not unusual. Except perhaps the regret part: unlike me, many of those other commenters admit no error and express no regret.
If you truly cared about HN etiquette as much as you claim, you wouldn't post haughty hyperbole such as "Consider this: just stop being reckless" and "The person you're responding to has a habit of posting implausible-but-plausibly-plausible nonsense," which go against the HN guidelines, as you may already know. Be honest: do you actually care about the thread? Why would you care, when you ridiculed my top-level comment? Who are you trying to save the thread for, posterity? Nobody cares. The thread had already been downvoted to the bottom of the submission, and the top-level comment was misinformation, so I removed it, because no more people needed to read the misinformation or respond to it. Nothing of value was lost, and I thought my action was prudent, but in any case, the term "reckless" makes a mountain out of a molehill.
My impression is that you made a bigger deal out of this than is warranted because you appear to have some kind of strange, unexplained, preexisting grudge against me and take any minor fault as an excuse to bash me personally. I have no objection to correcting a falsehood, but please keep your personal feelings to yourself and the personal attacks out of the comments.
Isn't it enumerating web_accessible_resources? Below static collectFeatures(e, t) there is a mapping of extension IDs to files in the const r (Minified JS, obviously.)
This works by looking for web accessible resources that are provided by the extensions. For Chrome, these are available in a webpage via the URL chrome-extension://[PACKAGE ID]/[PATH] https://developer.chrome.com/docs/extensions/reference/manif...
On Firefox, web accessible resources are available at "moz-extension://<extension-UUID>/myfile.png". <extension-UUID> is not your extension's ID. This ID is randomly generated for every browser instance. This prevents websites from fingerprinting a browser by examining the extensions it has installed. https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...
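In practice the probe boils down to attempting to load one of those URLs and seeing whether it succeeds. A minimal sketch of the general idea (the extension ID is from the fingerprint.js excerpt upthread; "icon.png" is a hypothetical path, since an image probe only works for image resources and LinkedIn's actual loader for files like sidebar.html may differ):

    // General idea behind web-accessible-resource fingerprinting: try to load a
    // known resource from a known extension ID and see whether the load succeeds.
    // The ID below is from the fingerprint.js excerpt above; "icon.png" is a
    // hypothetical path, used because <img> probing only works for images.
    function probeExtension(id, path) {
      return new Promise(resolve => {
        const img = new Image();
        img.onload = () => resolve(true);   // resource loaded => extension installed
        img.onerror = () => resolve(false); // blocked or missing => probably not installed
        img.src = `chrome-extension://${id}/${path}`;
      });
    }

    probeExtension("aacbpggdjcblgnmgjgpkpddliddineni", "icon.png")
      .then(installed => console.log("installed:", installed));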