An iframe from googlesyndication.com tries to access the camera and microphone (techsparx.com)
562 points by authed on Dec 19, 2021 | hide | past | favorite | 271 comments


I think this sounds more like some sort of fingerprinting attempt. It's good to see that random access to these kinds of resources fails due to new(er) browser controls. However, this does not mean that the fingerprinting actually failed.

There is probably some way to determine if the request was denied automatically by the browser or manually by the user (e.g., time to get "response"), which is definitely something which can be used for fingerprinting.
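A sketch of that timing idea; the 50 ms threshold and the overall approach are illustrative assumptions, not something taken from the script in question:

```javascript
// Illustrative sketch: distinguish an instant, policy-level denial from a
// human clicking "Block" by timing how fast getUserMedia() rejects.
// The 50 ms threshold is an assumption for illustration, not a measured value.
function classifyDenial(elapsedMs) {
  // A browser/policy denial typically rejects within milliseconds;
  // a human reading a prompt takes far longer.
  return elapsedMs < 50 ? "auto-denied" : "user-denied";
}

async function probeMicDenial() {
  const start = performance.now();
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    stream.getTracks().forEach((t) => t.stop()); // don't actually hold the mic
    return "granted";
  } catch (e) {
    return classifyDenial(performance.now() - start);
  }
}

// Only meaningful in a browser:
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  probeMicDenial().then((r) => console.log("mic permission:", r));
}
```

Either outcome (granted, auto-denied, user-denied) is itself a small fingerprintable signal.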

Which reminds me of fingerprinting by tiny differences in the audio API provided by browsers [0]. Super interesting, but also a bit depressing. Also works for things like canvases and WebGL.

EFF allows you to check how fingerprintable your browser is [1]. Do note that the results may not be very accurate.

[0]: https://fingerprintjs.com/blog/audio-fingerprinting/

[1]: https://coveryourtracks.eff.org


As someone working on exactly this type of stuff, you're absolutely right. *.safeframe.googlesyndication.com is Google's implementation of the IAB's SafeFrame standard[0], which is basically a cross-origin iframe with an API that's exposed to the embedded 3rd party code (the ad). This is what its HTML looks like (some attributes removed for readability):

  <iframe src="https://*.safeframe.googlesyndication.com/safeframe/1-0-38/html/container.html" title="3rd party ad content" sandbox="allow-forms allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-top-navigation-by-user-activation" allow="attribution-reporting"></iframe>
As you can see, it has both sandbox[1] and allow[2] attributes. The former restricts certain behaviors of the embedded code (most notably, navigating the top window without user activation), and the latter restricts it from accessing certain APIs - this is why the author saw errors in the console.
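To make the mechanics concrete, here is a deliberately simplified model of how the allow attribute's feature list gates API access. The real Permissions Policy algorithm also takes origins and header-set policies into account; this only shows why camera/microphone calls in the ad frame fail:

```javascript
// Simplified model of the iframe "allow" attribute: a feature absent from
// the allowlist is denied to cross-origin embedded content. The real
// Permissions Policy algorithm is more involved (origins, headers, defaults);
// this only illustrates why camera/mic calls error out inside the ad frame.
function featureAllowed(allowAttr, feature) {
  const allowed = allowAttr
    .split(";")
    .map((s) => s.trim().split(/\s+/)[0]) // keep the feature name, drop origin lists
    .filter(Boolean);
  return allowed.includes(feature);
}

const adFrameAllow = "attribution-reporting"; // from the safeframe above

console.log(featureAllowed(adFrameAllow, "camera"));     // false -> console error
console.log(featureAllowed(adFrameAllow, "microphone")); // false -> console error
console.log(featureAllowed(adFrameAllow, "attribution-reporting")); // true
```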

The script at https://cdn.js7k.com/ix/talon-1.0.37.js is an ad verification library developed by Verizon Media (formerly Oath), and it does, among other things, fingerprinting for bot detection purposes (because they want to prevent ad fraud). It was served together with the actual ad media (the so-called "creative") into the safeframe.

This is a relatively benign case. I've seen much worse, from fingerprinting for user tracking to straight-up malware being served in ads. It's a wild west (or web).

[0]: https://www.iab.com/guidelines/safeframe/

[1]: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/if...

[2]: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/if...


great post.

That verizon JS is surprisingly not very obfuscated so if anyone is interested or just curious to hack around this is a great one to look at!

It looks like they are checking notificationPermission for notifications, and storing the results in (this.permissionStatus = "") & (this.notificationPermission = "")

I don't see any requestPermission() in the verizon js. So it's probably not the culprit?

I also don't think it would make sense for them to do it; it's probably a bad-faith advertiser.

I'm not sure if cross-origin permissions requests can be blocked by the parent safe frame yet. It looks like Chrome is proposing this, but I can't find any info on whether it has been implemented. [1] [2]

-------

I really enjoy fingerprinting. It just feels like 'hacking' in the basic sense of poking around with things, since I don't know enough to do actual, complicated vulnerability hacking. I've built a pretty big JS file for our own ads analytics & tracking.

The Verizon JS has most of the basic common things, but one that sticks out as cool is cssSelectorCheck & cssRuleCheck: they check a few selectors like div:dir(ltr), probably related to right-to-left languages, and properties like -moz-osx-font-smoothing: grayscale.
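A hedged sketch of what such a selector-support probe might look like; the probe list below is my own illustration, not taken from the Verizon script, and the document handle is a parameter so the logic can run outside a browser:

```javascript
// Sketch of selector-support probing: asking the engine to parse a selector
// it doesn't know throws a SyntaxError, and WHICH selectors throw differs
// per browser/version. In a browser you'd pass the real document; it's a
// parameter here so the logic is testable with a stub.
function probeSelector(doc, selector) {
  try {
    doc.querySelector(selector);
    return true; // selector parsed -> supported
  } catch (e) {
    return false; // engine rejected the selector
  }
}

function selectorFingerprint(doc, selectors) {
  // One bit per probe, e.g. "110" -> a few bits of entropy toward a print.
  return selectors.map((s) => (probeSelector(doc, s) ? "1" : "0")).join("");
}

// Illustrative probes in the same spirit as the ones mentioned above:
const probes = ["div:dir(ltr)", ":is(a)", "::-moz-selection"];

if (typeof document !== "undefined") {
  console.log(selectorFingerprint(document, probes));
}
```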

I also like the idea of the HONEYPOT_TAGS: it looks like they are adding a button to check for auto-click publisher fraud. But man, they should have obfuscated that name....

One interesting idea I've played with, expanding on the CSS testing they've started to use in a small way, is placing unique CSS features and @supports rules in styles and then measuring them, maybe passing variables to JS. Also a couple of @media sizes to see if it's lying about its size. You can also measure whether CSS/SVG animation is paused, for viewability.

There are a ton of new css features that are implemented in different browser versions so likely high entropy. Also would love to learn paintWorklet just to know it for design and also seems like a big surface area (svg too).

I'm kind of surprised they aren't doing a RTCPeerConnection to try and get any IPs and it doesn't look like they are doing actual webgl / audio prints.
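For reference, the RTCPeerConnection trick being alluded to looks roughly like the sketch below; note that modern browsers now mask local host candidates behind mDNS names, which blunts this considerably:

```javascript
// Sketch of the WebRTC address-gathering trick: collect ICE candidates and
// pull the address field out of each candidate string. ICE candidate lines
// have the shape:
//   "candidate:<foundation> <component> <proto> <priority> <address> <port> typ ..."
function candidateAddress(candidateLine) {
  const m = candidateLine.match(/^candidate:\S+ \d+ \S+ \d+ (\S+) \d+/);
  return m ? m[1] : null;
}

function gatherAddresses(onAddress) {
  const pc = new RTCPeerConnection({ iceServers: [] });
  pc.createDataChannel(""); // any media/data section triggers candidate gathering
  pc.onicecandidate = (ev) => {
    if (ev.candidate) {
      const addr = candidateAddress(ev.candidate.candidate);
      if (addr) onAddress(addr);
    }
  };
  pc.createOffer().then((offer) => pc.setLocalDescription(offer));
}

// Browser only:
if (typeof RTCPeerConnection !== "undefined") {
  gatherAddresses((addr) => console.log("ICE candidate address:", addr));
}
```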

Seeing the MIME type checks is validating to me. That's the latest check I added; it's pretty fast to execute, and I have something like 150 different codecs/MIME types to loop through, lol. Verizon is more sensible in checking only a couple, lmfao.
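The codec/MIME loop amounts to something like the sketch below. The probe list is illustrative, and canPlayType is passed in as a parameter so it can be stubbed outside a browser:

```javascript
// Sketch of the codec/MIME-support loop: HTMLMediaElement.canPlayType()
// returns "", "maybe", or "probably", and the pattern of answers differs
// across browsers and builds, so a string of answers is a small fingerprint.
function codecFingerprint(canPlayType, types) {
  const code = { "": "0", maybe: "1", probably: "2" };
  return types.map((t) => code[canPlayType(t)] ?? "?").join("");
}

// Illustrative probe list (a real check might have ~150 of these):
const probeTypes = [
  'video/mp4; codecs="avc1.42E01E"',
  'video/webm; codecs="vp9"',
  'audio/ogg; codecs="opus"',
  'video/mp4; codecs="hvc1"',
];

if (typeof document !== "undefined") {
  const v = document.createElement("video");
  console.log(codecFingerprint((t) => v.canPlayType(t), probeTypes));
}
```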

[1] https://docs.google.com/document/d/1iaocsSuVrU11FFzZwy7EnJNO... [2] https://dev.chromium.org/Home/chromium-security/deprecating-...


In my experience, with tools like Cover Your Tracks (apparently this is the new name for Panopticlick), the more you try and thwart fingerprinting, the more unique you appear. Although I still do everything I can to block and filter everything conceivable, I've given up on trying to figure out how identifiable I am on the web because it seems useless. If you don't try then you're identifiable, and if you do then you are probably more identifiable. Whatever.


I think this can also be a good thing, as long as you also use software which makes sure you have a distinct 'unique' fingerprint for each session.

Not that I am a huge fan of Brave, but I think they have implemented something like this for certain (or all) APIs. You will still have a unique fingerprint, but it should not match to any previous fingerprints you had in the past.

Edit: see https://brave.com/privacy-updates/3-fingerprint-randomizatio...


Fingerprinting with any accuracy is hard. As a legitimate use case, I had a corporate client who wanted their management software only accessible to sub-management employees from certain on-site locations. And they wanted this without sending those employees through a VPN or having a static IP for each location. So what I allowed them to do was to let a manager clear a given device's browser fingerprint (e.g. on the computer at a certain desk, or the employee's laptop) and be able to manage or revoke access for a limited number of those at a time.

This was fairly secure because even the same employee was unlikely to get the same fingerprint twice - it was only occasionally more convenient than generating a random hash every time they opened the browser. It became a huge pain for managers to be called constantly on the weekend to remotely reauthorize the devices they'd just authorized a few hours ago, or when Chrome suddenly updated itself for half the employees, so eventually we switched to a looser hybrid of fingerprints and local storage.
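The brittleness described above falls out of the construction itself: the print is a hash over many signals, so any one changed signal changes everything. A minimal sketch (FNV-1a is chosen only to keep the example dependency-free; this is not their actual scheme):

```javascript
// 32-bit FNV-1a hash, used here only to keep the sketch self-contained.
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

function deviceFingerprint(signals) {
  // Sort keys so the hash doesn't depend on collection order.
  const canonical = Object.keys(signals)
    .sort()
    .map((k) => `${k}=${signals[k]}`)
    .join("|");
  return fnv1a(canonical);
}

const before = deviceFingerprint({ ua: "Chrome/96", screen: "1920x1080", tz: "UTC+1" });
const after = deviceFingerprint({ ua: "Chrome/97", screen: "1920x1080", tz: "UTC+1" });
console.log(before !== after); // true: one browser auto-update invalidates the print
```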


But isn’t this exactly what client-side certificates are invented for?


Yeah. Maybe out of paranoia, there was concern that a rogue employee could snatch a client side key and reestablish a session from outside. The fingerprinting was aimed at making any attempt at that easily identifiable.


I attack it from a different direction. All these companies want to fingerprint your device and track you for one ultimate reason: showing you a targeted ad. Now, if they can't deliver that ad (because you have an ad blocker installed), all that tracking and fingerprinting they just did is moot, because there's nothing actionable they can do with it.

That's my rather naive opinion, idk am I just being naive?


I care a lot less about whether or not I see an ad than I do about the shadow dossier being compiled about me based on my browsing habits. So no, I don't think all the fingerprinting is moot. I'd rather see untargeted advertising than have my personal profile bought and sold.


Do you have ads blocked on google.com then? Because all ads there are contextual, not personalized.


That is wrong. Not sure where you got that idea from.


Search 'hr platform' and you only get ads for HR platforms. At no point will you see totally unrelated ads for stuff you didn't search for, since those will do much worse than contextual ones in the search context.


Yeah but if you search for something generic, Google will infer what you are searching for based on your profile.


But the context/question is whether or not the adverts are different, not whether the search results are different.


As a counterpoint, Google used to (still does?) recruit based on your search history: the famous "now you're speaking our language" popups



The adverts will also be different since Google would infer what to show you - both in the results and advertisements.


The problem is all activity gets sold to data brokers who build up a profile on you for future targeting.


Mimic the latest iPhone. They are very hard to fingerprint.


Those anti-fingerprinting tools should make you appear as the most common iPhone as much as possible.


What is the most common iPhone? Should the common iPhone browser experience be scaled up to a desktop resolution, or should the desktop browser limit itself to the common iPhone resolution? What about mobile Safari bugs or misfeatures, such as webRTC shortcomings, or CSS bugs, or viewport resizing/scaling/zoom bugs?

There are so many possible variations that it seems like preventing fingerprinting by pretending you're something you're not would be an impossible task and makes you even more unique, not less.


You could pretend to be something you are not but if you keep giving the same info everything will be grouped together.


the basicest of basic

live laugh love as the user-agent


> the more unique you appear

If your fingerprint is unique and doesn't change then yes, you stand out. But if your fingerprint changes on every page load, then you become indistinguishable from other users.
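A minimal sketch of per-session randomization (Brave calls this "farbling"): derive small noise from a per-session seed, so readouts are stable within a session but differ across sessions. The noise magnitude here is an arbitrary illustration:

```javascript
// Tiny deterministic PRNG (mulberry32): same session seed, same noise stream.
function mulberry32(seed) {
  return function () {
    seed |= 0;
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Perturb fingerprintable readouts (e.g. audio samples) with seeded noise:
// imperceptible to a user, but enough to change any hash taken over them.
function farbleSamples(samples, sessionSeed) {
  const rand = mulberry32(sessionSeed);
  return samples.map((s) => s + (rand() - 0.5) * 1e-4);
}

const audio = [0.21, -0.07, 0.4];
console.log(farbleSamples(audio, 1));
console.log(farbleSamples(audio, 2)); // different session seed, different print
```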


> the more you try and thwart fingerprinting, the more unique you appear.

This is presumably because most people don't attempt to thwart fingerprinting.

If a particular feature behaves differently between the three most common browsers, it can be used to distinguish them. If you disable it, now you don't look like any of the most common browsers, which puts you in a category with a smaller number of people in it.

Solution: Get more people in it by having more people install anti-fingerprinting extensions etc.
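The arithmetic behind "a smaller category": each observable attribute contributes log2(1/p) bits of identifying information, where p is the share of users sharing your value. The population shares below are made up for illustration:

```javascript
// Surprisal (self-information) of an attribute value: log2(1/p) bits,
// where p is the fraction of users who share that value.
function surprisal(probability) {
  return Math.log2(1 / probability);
}

// Hypothetical population shares, for illustration only:
const commonBrowser = surprisal(0.6);     // ~0.74 bits: blends in
const rareAntiFpSetup = surprisal(0.001); // ~9.97 bits: stands out

console.log(commonBrowser.toFixed(2), rareAntiFpSetup.toFixed(2));
// Independent attributes add up; roughly 33 bits are enough to single out
// one person among the world's population (2^33 ~ 8.6 billion).
```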


> the more you try and thwart fingerprinting, the more unique you appear.

Not if you use Tor Browser.


Then i just get put on another list :)


You get put on a different list each time. :)


Or you get put on the same list again. And again. And again. The list of users that have only visited once and never came back (because when you came back you were someone else).


"In my experience, with tools like Cover Your Tracks (apparently this is the new name for Panopticlick), the more you try and thwart fingerprinting, the more unique you appear."

In the interest of fair balance, I have had the opposite experience.

"I've given up..."

That's probably what "tech" companies are hoping you will do. I see this response repeatedly on HN when the fingerprinting topic comes up. I am wondering if the persons submitting these replies want others to "give up".

Is there a difference between users wanting to appear "the same" and a desire by users to stop supplying maximum amounts of free data/information to "tech" companies and exacerbating the problem of online advertising and associated surveillance.

If a user sends no fingerprinting data/information, then she might be "unique" because most users are sending excessive amounts of fingerprinting data/information. However, IMO, that is hardly a sound argument for continuing to send excessive amounts of fingerprinting data/information. I subscribe to the general principle of sending the least amount of information possible to successfully retrieve a page. This might be "unique" user behaviour, but I am confident it is the correct approach. The big picture IMHO is that "tech" companies, generally, are trying to collect data/information about users to inform online advertising. Uniquely identifying users is only a part of what they are trying to do.

It is a bit like telling a user to use/not use an ad blocker based on what other users are doing, so as to avoid being "unique". This might help with avoiding "uniqueness" but clearly there are gains to be had from using an ad blocker that are greater than the value of trying to appear "the same" as every other user.

Imagine users are all trying to appear exactly the same, so they embark upon coordinating with each other to make the exact same choices. It stands to reason that the number of choices each user has to make is going to be a factor in whether this is successful.

If every user is choosing to send large amounts of data/information (e.g., using browser defaults), then every user has to coordinate their choices on every single data point or bit of information. The higher the number of "correct" choices each user has to make, the less likely that all users succeed in being uniform. There are more chances for error. Whereas if we reduce the number of data points and bits of information so that every user is only sending one or two headers, with no Javascript, CSS, etc.,^1 then that is far easier for users to coordinate.

1. This has been tested heavily by yours truly for decades. One does not need a graphics layer or graphical browser features to make successful HTTP requests. I am not interested in being "invisible", I am interested in reducing the amount of free data/information I give to "tech" companies. Perhaps there is a difference between wanting to "blend in" and wanting to stop "feeding the beast".

"We do not know anything about User A. It looks like she is using TOPS-20 to browse the internet."

Is User A less or more likely to be unique. Probably more. Is User A a more or less viable target for online advertising. To me, it is the second question that matters the most.


"I've given up... That's probably what "tech" companies are hoping you will do.

You don't have "googlesyndication.com" blocked?


I prefer to take an "allow list" approach rather than "blocking". That domain is certainly not one I have any use for and it is not on the allow list. Not much for me to read or download from "googlesyndication.com". The browser I use to read HTML does not auto-load iframes. Iframes are not a "feature" that I find myself needing.


One thing you can do is use different computers for different purposes.


Just the other day I created a vm on my proxmox server of the Tails .iso. Makes it much easier to fire up rather than reboot something with a USB.


> the more you try and thwart fingerprinting, the more unique you appear.

That phenomenon is called the Streisand effect

https://en.wikipedia.org/wiki/Streisand_effect


> https://fingerprintjs.com/blog/audio-fingerprinting/

> It is particularly useful to identify malicious visitors attempting to circumvent tracking

Ah yes, the visitor trying to not be tracked is the malicious one. Barf.


The nerve of some people protecting themselves against browser exploits.


Must be terrorists or communists.




>allow-popups-to-escape-sandbox

That setting is exactly the sort of reason I'm locked in a war to block ads from Google and others. What good is an escapable sandbox, other than for Google?


well, while I definitely block ads as well (when I don't reverse engineer them), this directive does have a good reason. It means:

"Allows a sandboxed document to open new windows without forcing the sandboxing flags upon them".

If it were absent, when a user clicks the ad and it opens a new tab of the advertiser's website, that tab would inherit the sandbox directives from the safeframe, which might break it. To be clear, "sandbox" in this context refers to the iframe sandbox[0], not to be confused with the renderer process sandbox[1].

[0]: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/if...

[1]: https://chromium.googlesource.com/chromium/src/+/refs/heads/...


The iframe sandbox is not for you or google. It’s for sites that want to protect themselves from ads they embed on the page. You’ll also see this used on proxy websites that scrape your requested URL and embed the contents of that page in an iframe.


The result of [1] surprises me. I'm on a very standard Android device with a standard Browser and nevertheless I'm told that my USER AGENT and the HTTP_ACCEPT HEADERS are unique.

[1]: https://coveryourtracks.eff.org/


Very likely true. I am more and more of the opinion that more users should attempt click fraud. I know that advertising does pay for infrastructure, but most advertisers are mainly interested in their competition not being able to do more.

But since there is absolutely no consideration for privacy from corporations like Google or Facebook, I don't see the need to support their perverse business model.

If enough users participated in such schemes, maybe these privacy invasions would stop.


IIRC FF puts the domain on the ‘blocklist’ after the first manual choice, at least if the user selects ‘block’ instead of one-time ‘deny’ (haven't seen the dialog in a while).

Hopefully The Browser doesn't pester the user each time, either.


Does anyone have first-hand experience with a company using fingerprinting in practice for the more standard marketing uses?

How does it work? What value does it provide? Who are the major players?

I work in marketing but feel like it’s a completely other world.


> this does not mean that the fingerprinting actually failed

When does fingerprinting ever fail? Maybe in Brave and/or Tor, but I wouldn’t bet on it.


This is not google, but a third party ad network serving ads through google.

Google tries to sandbox the creatives in an attempt to prevent issues exactly like this, and develops browser features to prevent issues exactly like this.

This is likely a script that somehow avoided google's malware scanning pipelines.

This is definitely not google's malintent.

Disclaimer: ex-Googler, worked in ads, dealt with problems like this all the time.


> This is likely a script that somehow avoided google's malware scanning pipelines.

I can't think of a good reason for scripts through google ad syndication to be asking for camera and microphone permissions. I'd assume Google runs these scripts in something like a lab environment to see what's ultimately invoked before deploying them to production? If so, would this be indicative of both a deliberate controls bypass and a ToS violation by the ad network?

Sounds like Google Syndication may have taken care of this by enabling Permissions Policies(1) across its domains? I can't tell because the article references Feature Policy (a predecessor to Permissions Policies(2)) "in Safari" even though Feature and Permissions Policies, as best as I understand them, are delivered from the origin for implementation by the browser. So I'm kinda confused.

And if they don't implement it, it tells me they're totally fine with this kind of fingerprinting.

(1)https://developer.mozilla.org/en-US/docs/Web/HTTP/Feature_Po... (2)https://www.w3.org/TR/permissions-policy-1/

---

A prior version of this comment suggested Google should add permissions policies. I since edited it to clarify that I'm quite confused over whether it's something Google already implemented or something Safari overlaid on top of Google syndication origins since I can't verify using the origins themselves. The article seems to suggest it's something Safari specific even though the spec for both FP and PP involves receiving a set of permissions from the origin as a header and implementing them in the browser.


Yes - banner ads are constantly targeted by malicious actors. My employer pays a vendor something like $200k/mo for creative scanning to avoid issues like this. Google certainly spends tens of millions a year trying to avoid issues like this.

See vendors like “the media trust”


Hey, if you want, you can give me $200k/month and I'll scan your ads to make sure they're just flat fucking image files without any arbitrary bullshit code


It took 10 seconds of searching to find a flat fucking image file exploit. I even skipped the recent NSO group zero-click exploit. Enjoy - https://www.bleepingcomputer.com/news/security/new-stegano-e...


Doesn't seem like a great example since the image itself is perfectly safe. The exploit still needs JavaScript to extract and execute the payload embedded in the image, and then relied on Flash to install malware. Without JavaScript it's just an image like any other.

The NSO group iMessage exploit is a more interesting example, essentially turning a poorly bounded JBIG2 decompressor into a virtual machine.


haha, if a country wants to spend a 9 figure sum or someone comes back in time from the future to hack me, I'll be happy to consider myself pwned

It's a huge bummer that I HAVE TO block all ads as a security measure, though, and that people accept "Download advertisement.exe and run it in a half-assed sandbox or you're stealing that clickbait article" as the way things should be


> if a country wants to spend a 9 figure sum

After the initial investment in developing the exploit and before the vulnerability was patched, there would have been a near zero cost to hack any one user in particular.


This gets a lot harder to pull off when the images are transcoded by being decoded in one sandboxed process that outputs a bitmap, then encoded in another sandboxed process that outputs what ultimately reaches the user.


Do ad syndication networks like Google transcode images provided by advertisers? When an advertiser uploads an image, Google could transcode it using a strict decoder and their own safe encoder to produce a clean image for syndication.


Malicious script can (1) fingerprint and detect lab machines and therefore not do bad thing — this is arbitrarily easy if Google test lab is the first to ever execute their script, (2) over time build a graph of IPs, geos, test machine characteristics that essentially allow them to avoid the world’s entire test lab infrastructure (there’s only a fixed number of test lab providers in the world since it’s so expensive to set up this infrastructure), (3) yes, bypassing testing like this is a violation of ToS, but that is fairly meaningless as entities and IPs are cheap to incorporate, (4) not sure what you’re saying about permission policies across its domains? Google does have such policies unless I’m missing something.


"In a lab environment" -> Certainly happens, but what if the script targets "specific devices" like "samsung galaxy s10" which google won't be able list exhaustively?

What if bad actors figured out a way to identify google's emulators and avoid doing bad stuff in that situation?

The part where I said "google develops browser solutions to prevent issues like this" is exactly what Feature Policy will end up doing. Google's ad systems and Chrome features don't always move at the same speed, but you can be sure that whatever the ad malware team finds will help the Chrome team strengthen their defenses.


I thought they meant more like sandbox environment. Why would the API to access those things exist at all?


The API exists because zoom.com requesting microphone and camera through an iframe is a legit use case. As OP said below, CSPs exist so that enterprises can lock down those vulnerabilities, but it wouldn't make sense to lock it down for all consumers.


Sandbox is provided through things like ContentSecurityPolicy and Feature Policies. See my comment on this.


Javascript is Turing complete so it is impossible to determine what the script is going to do in all environments without running it in all environments.

The script could just detect the test environment and avoid triggering its malicious behavior.
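A sketch of that evasion pattern; the specific checks are common examples (navigator.webdriver, plugin counts, headless user agents), not taken from any particular ad script, and the environment is a parameter so the logic is testable:

```javascript
// Sketch of analysis-environment evasion: the payload probes for signs of a
// scanning/lab environment and stays dormant there, so the pipeline only
// ever observes benign behavior. In a browser these checks would read
// navigator/window directly; env is a parameter for testing.
function looksLikeLab(env) {
  return (
    env.webdriver === true ||            // navigator.webdriver set by automation
    env.pluginCount === 0 ||             // headless builds often expose no plugins
    /HeadlessChrome/.test(env.userAgent) // headless UA token
  );
}

function adPayload(env) {
  if (looksLikeLab(env)) {
    return "benign"; // behave normally under analysis
  }
  return "fingerprint"; // only misbehave on what looks like a real user device
}
```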


>Javascript is Turing complete so it is impossible to determine what the script is going to do in all environments without running it in all environments.

No, it just means that it is impossible to do for every program. Google could just make it reject scripts it is unable to handle.


IMO, if you serve content to the user, you're responsible for making sure you're not serving malware, e.g. by not allowing parties you cannot trust to serve arbitrary code.

And this doesn't apply just to the ad network, it also applies to the publisher. I'd really like the publishers to be held liable for malvertising they serve.

There is nothing inherently hard about serving safe ads, we've gotten pretty good at separating code from content nowadays. There is no reason why a "classic adsense" three-blue-links ad should be able to inject malware. Even images can be served safely. The only reason why this happens is because in pursuit of a few percent more profit (and more tracking) everyone allows everyone to include arbitrary scripts.


The browser is the main defense here. Google serves ads through iFrames. Browsers control resource access.


Not just ads, anyone can execute any code they like on tpc.googlesyndication.com. See: https://blog.dubbelboer.com/2016/06/10/embed-into-tpc-google...


> This is likely a script that somehow avoided google's malware scanning pipelines.

Why wouldn't google just block access to those API's? I mean I guess that's what this sandbox did.


Some of these APIs are not overridable inside JavaScript, and overriding is pretty much the only way this can be prevented, short of browser features like CSP and FP. The override also needs to be the first thing that runs, and there's no standard API to guarantee that in the browser, so things have flaws.

A static analysis is often relatively easy to circumvent; something like base64Decode(encodedMaliciousScript) can bypass it.
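A toy illustration of that bypass: a scanner that greps the delivered source for suspicious API names sees nothing, because the payload only exists as a base64 string until runtime (the payload here is constructed but deliberately never executed):

```javascript
// A naive static scanner: grep the delivered script for suspicious API names.
function naiveScan(script) {
  return /getUserMedia|RTCPeerConnection/.test(script) ? "flagged" : "clean";
}

// btoa("navigator.mediaDevices.getUserMedia({audio:true})"), precomputed:
const encoded = "bmF2aWdhdG9yLm1lZGlhRGV2aWNlcy5nZXRVc2VyTWVkaWEoe2F1ZGlvOnRydWV9KQ==";

// What the scanner actually sees: no suspicious literal anywhere.
const delivered = `var p = atob("${encoded}"); new Function(p);`;

console.log(naiveScan(delivered)); // "clean": the literal never appears in source

// The payload only becomes visible after decoding at runtime:
const decoded = typeof atob !== "undefined"
  ? atob(encoded)
  : Buffer.from(encoded, "base64").toString();

console.log(naiveScan(decoded)); // "flagged": only detectable post-decode
```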

Google does various runtime/dynamic analysis to figure out issues, but scripts can do interesting things to circumvent those too (like targeting specific devices through user agent and so on).

It's an arms race, often, where google catches up pretty fast, but bad actors move faster.

The Feature Policy check addresses these, but ad systems and Chrome features don't always move at the same speed, and often there are trade-offs that need to be addressed before it can be widely deployed.


I think you're right; also, the name of the company that owns this particular script is clear from the script's URL. If you know adtech companies well enough, it should be easy to spot.

Note: I also work in adtech, and my daily job is to maintain a library that has to load inside google's safe frame...


Why allow anything in an ad besides text or images? Why allow others to run arbitrary code through your network. Inexcusable by Google imo.


That would be anti-competitive by Google. These are different ad networks, not advertisers. Ad networks need to be able to do their own attribution and click spam detection.


You missed the question.

Why allow any thing on any advertisement, regardless of network, that is not image or text? Anything else opens up security issues.


He just told you?

"Ad networks need to be able to do their own attribution and click spam detection."


This doesn't answer why the end product (the person receiving the ad) is exposed to anything but images and text, which was the question.

Step outside technical approach mode and look at it from the highest level possible -- why are end users ever given more than images and text, or, why are ads anything more than a very simple a href tag.

Spam detection and other protection of the ad platform should be done long before this information ever goes to the eyeball product owners.


No - spam protection needs as many signals as possible, sadly.

You will have headless browsers simulating clicks and so on. This gets extremely hard to fake over time, and the more signals there are, the easier it is to spot anomalies.

In a way, the thinking is: a malicious actor will leave at least a few of those signals looking inorganic, and ML can detect them...

"Long before the information ever goes to the eyeball" => wish it was true.


The author is concerned that an ad might be able to surreptitiously turn on the camera or microphone, but these are not accessible by default. In this case, it isn't even getting as far as a permissions prompt because the default Feature Policy doesn't allow camera or mic access in cross-origin iframes. (Ex, for Chrome: https://sites.google.com/a/chromium.org/dev/Home/chromium-se...) Instead, I think the most likely thing happening here is that an advertiser is running a script that is trying to do fingerprinting, and which is blocked by the browser protection.

(That it's an iframe running on https://[random].safeframe.googlesyndication.com tells us it's an ad served through Google Ad Manager, and the contents of the iframe are supplied by the advertiser.)

Disclosure: I work for Google, speaking only for myself


Curiously no explanation why this sort of malicious behavior is accepted by "Google Ad Manager" in the first place.

If you haven't already installed: https://addons.mozilla.org/en-US/firefox/addon/ublock-origin... https://chrome.google.com/webstore/detail/ublock-origin/cjpa...


I'm not sure it is allowed; that's not a part of the business I know much about. Since ads can run arbitrary JS it's hard to enforce policy programmatically.

On the other hand, it's not clear to me that whatever this advertiser is trying to do is having any real effect, aside from causing a console message that it is being blocked. Access to the mic and camera from cross-origin iframes is blocked by default, and you can't even trigger a permissions prompt.

As for installing an ad blocker, even with all the messiness of advertising, I still prefer it to paywalls.


> Since ads can run arbitrary JS it's hard to enforce policy programmatically.

Letting ads run arbitrary JS is the policy, right? It's not like that's a requirement to make the internet work, that's just a Google policy that trades money for user experience.


> Letting ads run arbitrary JS is the policy, right?

I mean, anything on the Web can run arbitrary JS. The entire point is that it's a sandbox environment where arbitrary JS can't do any harm (excluding cases where vulnerabilities are found).

If you're not comfortable with arbitrary JS running on your computer, you'd have to either (a) not use the Web, (b) disable JavaScript, or (c) only visit sites which you have vetted and deem to be trustworthy. None of those are particularly practicable.

Most of us operate on the generally-reasonable assumption that the sandbox is effective, and therefore that we're OK to [click on that random link from HN like you did just now / open that news site which pulls in a bunch of tracking scripts / etc].

Either way, this is not somehow a problem that's specific to Google Ad Manager in any way at all. I don't know what else you could really expect of them.


> (a) not use the Web, (b) disable JavaScript, or (c) only visit sites which you have vetted and deem to be trustworthy.

> I don't know what else you could really expect of them.

Your option C is basically how it must work, and mostly does. To be safe online we go to sites we trust. When Google delivers malicious JS through ads, the site operator probably doesn't know Google has harmed the user on their behalf, so their trustworthiness becomes moot. Is there some "safe ads" codeless option for site operators who want to protect their users while still showing ads? Has Google made site operators aware they occasionally deliver malicious JS to users?


> To be safe online we go to sites we trust.

I'm fairly confident in saying that nobody, but nobody, only goes to sites they trust. Come on. You're on HN: do you mean to tell me that you never click to open a link from a post unless you've pre-vetted the website and know it to be trustworthy? That's simply not a practicable model.

And every site pulls in JavaScript which, to you, is arbitrary. You don't know that they won't add a new third-party script, and you don't know that any given third-party script won't change. You don't know that transitively for the scripts loaded by the scripts. Etc etc etc. Nobody can practise the approach you are setting out here, not both diligently and honestly.

Your complaint here is just about how the internet works. It's absurd to expect Google to vet the JavaScript that all of its users host, as much as it's absurd to expect Squarespace or Weebly or even AWS or Cloudflare to do the same. The model of the internet does not and cannot rely on any and all JavaScript being vetted for 'malice' by a trusted party before being loaded. It relies on the JavaScript runtime being a safely isolated sandbox where malice or the lack thereof doesn't matter either way.


I didn't say people always have a good reason for trusting the sites they go to. I trust that HN links are safe because I think that someone would make a stink, at least, if there were malware on a page linked here. It's not perfect but it is how things work. A nasty part of contemporary business and business in general is: as long as trust isn't visibly betrayed, people go on trusting. That doesn't excuse any of the actors who benefit from the ignorance.


Ads are allowed to run arbitrary JS, but that doesn't mean they are allowed to do arbitrary things by policy. That is, the technical restrictions are not able to be as strict as the policy.

A bunch of us were working on a project where ads would be fully declarative, and so no longer able to run arbitrary JavaScript, but this received very little interest outside of Google (advertisers didn't want to move to a new format, publishers didn't care) and we moved on.

(Still speaking only for myself)


I do wish google engineers would do something positive for society and switch to a career in subsistence farming. No one needs ads. Not arbitrary JS ads, not declarative ads, not personalised ads, not any ads.


But what do you see as the alternative for funding sites? The site we're on is funded by (declarative, non-personalized, non-obtrusive) ads. I would rather have ads than paywalls.


"Made for Adsense" sites have no value, if they get shut down, nothing is lost. Normal sites that switched to Adsense have become worse, because now they need clicks and engagement above all else, incentivizing click-bait and low-effort content. Nothing would be lost if they switched to subscription models and provided valuable content. There's little in between in my opinion, it's either "site doesn't use Adsense, it's a company site / personal blog / some institution", "site was made only to make money by adsense with the lowest price content possible, had a bunch of links bought and now provides a passive income to some SEO" and "site used to provide quality content, loses readership to spam-sites because Google has fucked up their sorting algos and now switches to low-value content as well, because anything else can't be financed".

Twitch shows that people are very willing to pay for content. I'd very much be happy to if that removed all of the spam.


If a website cannot survive without ads, maybe it doesn't need to actually exist in the first place. The world will go on.


The world can move on without having the site shutdown, then. Don't visit those sites and pretend they don't exist.


I do think the world would go on, but it's a world I would like less. Some things would move behind paywalls, others would move to the boundary of whatever was considered advertising (sponsored content? Product placement?)


I think of it as a bit of social selection for websites. If a website has content people value, they find ways to support the website. If not, well, maybe the site not existing is not a terrible loss.


Once upon a time, children, people made websites with no ads, and no paywalls. Sure, there weren't as many websites; and sure, they didn't have as many sliding panels and other gimmicks.

And for sure, I appreciate being able to buy stuff online. But I seem to be able to do that without viewing ads! Amazing!


> I would rather have ads than paywalls.

No offense but I've heard people who work at ad companies repeat this like a mantra, and it's a false dichotomy, akin to a coal company who dumps slag in rivers saying, "Well we think it's better than letting everyone freeze to death." We're not asking Google to stop advertising altogether and close up shop, just to make the internet ad ecosystem a little less awful and Orwellian.


> We're not asking Google to stop advertising altogether

I think my parent was: "No one needs ads. Not arbitrary JS ads, not declarative ads, not personalised ads, not any ads."


Fair enough, though the suggestion that Google engineers take up farming was probably not very serious. However, the idea that we have a choice between no ads or the most pernicious ads imaginable is certainly a false one.


> the idea that we have a choice between no ads or the most pernicious ads imaginable is certainly a false one

I agree with you. I spent a large part of 2018-2019 trying to make ads declarative, and am now working on (among other things) increasing the isolation of conventional ads [1] and implementing cross-site advertising without cross-site identity leakage [2].

But it sounds like you and my parent have very different views: there's a lot of space between "ads should be a lot better" and "ads should not exist".

[1] https://github.com/WICG/webpackage/issues/624

[2] https://github.com/WICG/turtledove/blob/main/FLEDGE.md


Why is it up to the advertisers and publishers?


If they tried to unilaterally make such a change and ban the old formats, both the advertisers and publishers would complain to competition regulators around the world. And whether you think there would be any merit to those complaints or not, the outcome would still be another round of lawsuits with billions on the line.


> with billions on the line

Not my money. Why should I care about lawsuits between advertising networks and their advertisers, regulators and so on?


> Why should I care

You don't need to. But the person is asking why Google is or is not doing a particular thing. So it is their interests that are relevant to the current discussion, not yours.


I suppose they're the paying customers.


how do you think google, an adtech company, makes money?


> Since ads can run arbitrary JS

What a fantastic idea to create a platform where anyone can pay to have code run on millions of end-user machines, embedded in random websites. What could possibly go wrong?


Haha, easy way to create plausible deniability if you just allow everything to run :D. Doesn't that also make it easy for anyone running an ad to run hivemind in the ad itself?

In addition, why is it hard to enforce a policy that disallows any ad to reach out to the camera and microphone? I don't understand why that is hard to enforce.


> Doesn't that also make it easy for anyone running an ad to run hivemind in the ad itself?

Are you talking about crypto mining? That's a good example of something which is against policy but difficult to fully prevent technically. The core problem for someone trying to exploit this, however, is that crypto mining in the browser is minimally profitable, so you need to do a huge amount before seeing noticeable returns. The more you try to do, the more likely you are to get caught, so while it does take some scrutiny from publishers and ad networks, it doesn't take very much.
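The economics are easy to sketch. All numbers below are made-up assumptions for illustration (real hash rates and payouts vary widely); the point is the orders of magnitude:

```javascript
// Back-of-envelope revenue from browser mining inside an ad.
// ALL inputs are illustrative assumptions, not measured figures.
function miningRevenueUSD(visitors, dwellSeconds, hashesPerSecond, usdPerMegahash) {
  const megahashes = (visitors * dwellSeconds * hashesPerSecond) / 1e6;
  return megahashes * usdPerMegahash;
}

// Assumed: 1000 impressions, 30 s on the page, 40 H/s per visitor,
// $0.01 per megahash of payout.
const revenue = miningRevenueUSD(1000, 30, 40, 0.01);
console.log(revenue.toFixed(3)); // roughly a cent, vs. ad CPMs typically measured in dollars
```

Under these assumed numbers, a thousand impressions of mining earn about a cent, while simply showing an ad in the same slot pays orders of magnitude more, which is why enforcement pressure quickly makes it a money-losing proposition.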

(still speaking only for myself; this isn't something I know very much about)


Yes, I am indeed talking about crypto mining. I am also talking mostly about the aspect of wasting compute cycles, and thus harming the environment, through Google's platform. Yes, it is minimally profitable, but that it is possible at all is weird if you ask me.


Sorry, what I meant was that it is minimally profitable at the best of times. Which means it doesn't take very much enforcement to shift it over to being negatively profitable, because it costs you more in time, engineering, etc. than you would make. And once it is a money losing proposition, the people trying to exploit users are no longer interested.


Because ads can use Javascript which is notorious for how hard it is to vet code and how easy it is to hide functionality.


But why do they need that? What purpose does an ad with javascript serve?


Animations, tracking, and fraud detection are the big areas. Ads were historically one of the strongest drivers of Flash, given how easy it made it for creatives to implement animations without needing a frontend developer. Ad buyers these days would instantly protest against any attempt to remove any of the three use cases.

And given that we are talking about sometimes eight figures worth of ad buying... no network will want to risk offending such clients.


I still don't understand it. I was under the impression that animations can also be served through HTML5? Both CSS and HTML should be sufficient. ~We're in the age where we're able to serve an entire sqlite database over a static website, yet an animation does require javascript?~

EDIT: I am wrong to think that serving and interacting with a sqlite database goes without javascript.


> We're in the age where we're able to serve an entire sqlite database over a static website

Making use of the SQLite database is entirely client side and requires JavaScript or WASM (the distinction is unimportant) - it requires running code on the frontend. This is not a great way to state your case.


You're right and I misinterpreted the sqlite implementation. I will edit my comment to reflect my error.


Animations can be CSS, yes. That's what you do in https://amp.dev/documentation/guides-and-tutorials/learn/a4a... (declarative ad format, no advertiser-written JS)
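A minimal sketch of the kind of script-free animation this enables (the class name is made up):

```html
<style>
  /* A looping pulse animation in pure CSS: no advertiser-written JS. */
  .ad-banner {
    animation: pulse 2s ease-in-out infinite;
  }
  @keyframes pulse {
    0%, 100% { transform: scale(1); }
    50%      { transform: scale(1.05); }
  }
</style>
<div class="ad-banner">Example ad creative</div>
```

This covers the common "make the creative move" use case; tracking and fraud detection are the parts that are harder to express declaratively.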


Then I don't understand the necessity for javascript in ads?


In this particular case, advertisers were not interested in moving to a declarative format, and publishers were not interested in requiring declarative ads. So it didn't end up going anywhere as a general purpose ad format.


How about the company you work for disallow arbitrary JS in the ads they serve? We already know the answer though. Bottom line over doing what is right.


As I wrote above, this was something we tried: https://news.ycombinator.com/item?id=29615583

This was the main thing I worked on in 2018-2019, along with many other engineers. I wrote about it some in https://www.jefftk.com/p/value-of-working-in-ads If this were something users were demanding I could see picking it up again, but as far as we could tell at the time there was minimal interest externally.


> Since ads can run arbitrary JS it's hard to enforce policy programmatically.

It's not that hard, at least not at my end. I just don't run ads.

Why on earth does Google think that running arbitrary JS on their visitors' computers is OK? I mean, I know this is the policy, and I assume the policy of other ad networks is at least as "liberal". So I'm sorry, chaps, but no ads run on this screen.

I wonder if this is a race to the bottom? I've noticed that TV ads these days are all for animal charities, equity release schemes, and incontinence pads. I don't know what they're running in web ads, but I'm pretty sure that the TV ads are so dire because everyone but old fogeys and poor people skip the ads. I'd assume the same old/poor people are the ones that have to see web ads.

Advertising to poor people has traditionally been a pretty bad pitch. So why isn't the online/TV ad industry crumbling? And how do I bet against their shares?


>Advertising to poor people has traditionally been a pretty bad pitch

Is that true? It seems like poor people spend, in aggregate, more than rich people, and they tend to buy the cheaper, more mass-produced stuff. The grocery business alone must be built on the commerce of poor people, right?


> The grocery business alone must be build on the commerce of poor people, right?

How often do you see grocers advertising own-brand baked beans? If you see baked beans advertised at all, they're drawing attention to the fact that their Heinz beans are 1p cheaper than $COMPETITOR's Heinz beans.

I'm not convinced that advertisers spend much money on pitching to poor people. Most of the ad pitches that I see are for high-end products like cars, holidays, and household appliances. There's not much point in advertising to people whose weekly budget doesn't stretch to luxuries. They will buy only what they need; they don't have choices.


If placement is considered a kind of advertising spend (and I think it is) then impulse buys are going to be an ad spend in general. Gambling, porn, junk food, that kind of thing.


> Since ads can run arbitrary JS

That's the problem!


The author is concerned that an ad might be able to surreptitiously turn on the camera or microphone

You are correct, that is the author's concern.

The reason the rest of us are concerned is that the general public has been conditioned by Google and others to just press "Accept" on any prompt that pops up, no matter how dangerous.


In this case there isn't a prompt: access to the camera and microphone is disallowed by default in cross-origin iframes. The site would have to specifically delegate permission to the ad before it could even trigger a prompt.
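Concretely, that delegation happens via the iframe's allow attribute; a sketch (ads.example is a placeholder):

```html
<!-- Default: camera/mic are blocked in the cross-origin frame,
     and no permission prompt is even possible. -->
<iframe src="https://ads.example/frame.html"></iframe>

<!-- Only with explicit delegation like this could the embedded page
     even trigger a camera/microphone permission prompt. -->
<iframe src="https://ads.example/frame.html"
        allow="camera; microphone"></iframe>
```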


> just press "Accept" any prompt that pops up

I auto-press [Accept]. I use an ad-blocker. I reject 3rd-party cookies. I disable JS by default, and re-enable it selectively for sites that refuse to work without JS. If that re-enablement involves more than a few clicks, I'll close the site - there are other fish in the sea.

What am I doing wrong?


This is a catch-22 though.

If something dangerous to privacy is being widely used in the world, then putting it behind a prompt creates an avalanche of prompts, and results in user apathy.

But not prompting requires you to choose a default, which either defaults to block and breaks things (if access was actually required) or defaults to allow.


I would happily set all browsers to always deny all ads and untrusted domains the ability to use the microphone and camera. So there's no need for an avalanche of alerts; it just shouldn't be permissible for a resource from an untrusted source to access them at all.

It may be widely used, but for a highly concentrated set of sites. I can't think of an occasion I've used it beyond Google, Microsoft, and Zoom properties. Perhaps Slack and Discord too? So there must be a better way.
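Site operators can already express roughly that always-deny preference with the Permissions-Policy response header (the successor to Feature-Policy). With an allowlist of (self), the page itself can still prompt, but cross-origin iframes are denied even if their allow attribute asks:

```
Permissions-Policy: camera=(self), microphone=(self)
```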


> deny… untrusted domains the ability to use microphone and camera at all times

I believe this is the default for most browsers now, right? (Or have I been spoiled by Firefox?) This is the best way, where the camera is always inaccessible, unless you enable it for a domain which needs it. Since I rarely ever see the camera/mic request option, I don’t think we’ve been conditioned to allow this type of request. (Compared to cookies, which show up on nearly every site.)

If you want to disable permissions dialogues altogether, then what’s a trusted domain? Just zoom and a handful of others? If you write yourself a nice app which uses the mic, do you have to email Google to get yourself added to the list of trusted domains? That would be pretty bad for the open web, so permissions dialogues are the alternative


The solution to this is simple, but not easy: block everything by default and do not prompt to enable it (but do show an indicator of what has been blocked).

This is how Firefox tracker protection, uMatrix, noscript, and a plethora of ad blockers and other privacy tools work.


What? In what way have you been conditioned to accept prompts?


I think there is most definitely conditioning going on. I watched my fiancée click one of the "accept all cookies" GDPR prompts (it's become an antipattern) a few days ago. She did it almost automatically, without thinking. I went to the same site on my laptop, and if you clicked decline, it immediately brought up modal dialogs that made the site unusable. I can see why 99% of people would be conditioned to just hit yes.


And you blame Google for that, instead of some brain-dead Eurocrat?


Who said anything about blaming google for users clicking “ok” without thinking about it?


The person to whom I replied!


To be fair, he said Google and others. I still don't know how much Google is responsible though.


Google goes way further and doesn't even serve a prompt. Instead they show you an opt-out plugin that you can download to run in your browser. The most blatant disregard of the rules, and holier-than-thou behaviour, if you ask me.


Link?



The site owner interacts with the user to get consent or not (often via a CMP) and then decides whether and how to invoke third-party scripts, including Google Analytics. The alternative would be for each third party script on the page to attempt to inject their own set of consent dialogues, but those would conflict with each other.

The extension you're linking allows someone to opt out of Google Analytics across all sites. That pretty much has to be a browser extension, because GA by default doesn't use any third-party cookies (and third party cookies are going away anyway).

(Disclosure: I work at Google, speaking only for myself)


Thank you for replying. Though, I do not understand why a plugin is required? It feels even more invasive to have such a thing running in the browser.

My understanding is that I have an advertisement ID attached to my user, and that enables Google to infer who I am and what my persona is about, to match me with personalized ads.

Wherever possible I have disallowed Google and all others from tracking me, having done so in my account. I find it odd that I also need to disallow tracking by Google through a third-party site (e.g., anyone's blog) even though I have already done so through every direct means possible. What do the consent settings in Google even mean, then, if I need to allow/disallow consent to tracking per separate website in addition?


Your understanding is correct for Google Ads, but in its default (and most common) configuration, Google Analytics does not interact with that third party cookie ID. GA is primarily a tool for helping publishers understand what's happening on their sites, and so uses a first-party (per-site) cookie.


Which Eurocrat required stupid popups? And since when did US corporations kow-tow to EU bureaucrats? This is stupid corporations, hiding behind GDPR to plant cookies.


Is it a good idea for Google to allow random advertisers to use privacy sensitive code (like audio access or fingerprinting) in advertiser supplied iframe content?

People may be visiting a trusted site and then are asked to allow audio and video, not realising that it is an iframe asking for the permission.


The Exhibit A why no one will ever convince me to turn off my ad blocker or switch away from Firefox. It's a great feeling to just not have to worry about this entire class of exploits.


Or use a computer without a mic/webcam permanently embedded or attached. I'm glad I'm constantly reminded that not having such peripherals can be a good thing.


Still leaves the fingerprinting vector.


Have you never built a computer before?


also tape your laptop's camera


My company is now giving us those to use in our company laptops:

https://m.media-amazon.com/images/I/61l+gnZORVL._AC_SY355_.j...

They're pretty convenient and look nice


Be careful with those on laptops. I had a 2016 MacBook Pro and put one of those on it. About a month later I had a nice big crack in the display, straight down the middle of the screen.

I'm not 100% certain that was the cause. But the guy at the Apple store seemed to think it was, which meant they wouldn't pay for it. And a quick google shows others who are convinced.

The bezel on that laptop was really tiny and I can easily see it might have contributed to the crack. I now have a 2021 model and the bezel is much thicker. But I'm not taking the chance.


>I'm not 100% certain that was the cause

I'll leave this,

https://support.apple.com/en-us/HT211148

>Make sure the camera cover is not thicker than an average piece of printer paper (0.1mm).

>If you install a camera cover that is thicker than 0.1mm, remove the camera cover before closing your computer.


Those are too thick for my laptop, so I use these adhesive stickers instead: https://supporters.eff.org/shop/laptop-camera-cover-set-ii


Our HP laptops have that built in


newer thinkpads do as well


That doesn't protect your microphone from being exposed though.


Disable it in system settings, browser settings, and prevent access through something like firejail. It is not as reliable as a hardware kill-off switch, but puts a lot of barriers to overcome.


That doesn't protect your microphone from being exposed though.

And ripping out your microphone doesn't stop evildoers from viewing the camera. What's your point?


this is why we need hardware switches for microphones


https://puri.sm/products/librem-14 - this device has the kill switch.


fwiw, there's a switch in every ext. mic. jack.

plug in an un-wired connector, cut off the wiring post, smooth it with a nail-file or put on a crowning drop of glue so it won't rip your bag, and you're done.


Often these are software, though. Yes there's a switch, but it just tells the audio subsystem to automatically select the plugged-in mic. You can still tell it to select the screen-frame mic instead.


A software switch. You can test this by plugging in your unwired plug, then going to any program and selecting the internal mic: it'll still work fine.


I just got a Framework laptop (https://frame.work) and it has hardware switches for both.

Take that, Apple fanboys.


Well tape the microphone too. With thick soft tape.


Or open up your laptop and (carefully) destroy it.

I can’t remember the last time I used the built-in mic on a laptop, much less the last time I bought a laptop with a mic that was actually worth using.


> Or open up your laptop and (carefully) destroy it.

OK, I destroyed it (carefully). Now it won't boot. What do I do next?


Get a refund from the guy who sold you that janky ass laptop that won’t boot without a microphone.


OIC. Well, I destroyed the laptop, not the microphone, because that's what OP said to do. I guess I didn't read the directions carefully enough. /s


Don’t talk to your computer. When you do talk, talk about stuff that you want them to look into.


An icepick/paperclip does.


You should also destroy any speakers.

https://arxiv.org/ftp/arxiv/papers/1611/1611.07350.pdf


Oof, that's unfortunate. Thanks for the tip.


That hack requires deep OS access and is not trivial, but for those with a nation-state agency behind them, it's good to keep in mind.


Yes, but travel is expensive and time consuming. Not to mention the effort it would take to track down the responsible party.


My laptop is mostly closed and plugged in to an external monitor. The mic is so muffled by then, it would take expert audio recovery to understand what was being said. (In other words, it would be hard to scale it for ad purposes, but obviously a targeted attack could still be devastating).


I really don’t care what advertisers think is necessary to do in order to ensure targeting/fingerprinting or countering fraud, but running any third party script isn’t anything I consider remotely acceptable from an ad. Worst case I could accept that some generic script from the ad network is run - but for the ad network to pipe through the advertisers third party script should mean they are adblocked by the browser without even requiring a plugin.


Is this just click bait? I don't know the intricacies of Google's ad serving, but is this not just someone (e.g., an ads customer) slipping a request for camera and mic access into an ad script? But the title seems to suggest Google is doing something malicious here.


Isn't Google supposed to vet whatever ads they send into the world?

I don't know if this looks any better if Google is negligent/incompetent instead of malicious.


They don't vet anything; anyone can execute any code on tpc.googlesyndication.com. See: https://blog.dubbelboer.com/2016/06/10/embed-into-tpc-google...


Why the _hell_ should a Google ads customer be able to "slip" in a request for camera and mic? That that is even possible is a large problem.


Allowing an ads customer to "just" slip a request for camera and mic access into an ad script is malicious.


Negligent, I wouldn't call it malicious but I would call it negligent.


The first time google served malicious JS on behalf of a customer was negligent. Maybe the second and third and fourth and fiftieth times too.

It's not 2003 anymore, we're far past negligence at this point.


You're right. Google would never compromise anyone to make some money. Never


Why does it matter whether Google did it or Google spread it around the world? It is equally repulsive behavior on their part.


> Is this just click bait?

It is not. It sounds like something that should have been taken care of by Google at least 10 years ago.


> But the title seems to suggest Google is doing something malicious here.

Aren’t they? They’re quite literally distributing malware.


Well, it's HN. HN has become a FUD machine.


"When disagreeing, please reply to the argument instead of calling names. 'That is idiotic; 1 + 1 is 2, not 3' can be shortened to '1 + 1 is 2, not 3."

"Please don't sneer, including at the rest of the community."

https://news.ycombinator.com/newsguidelines.html


Shitty ad code barfing errors onto the console is typical, unfortunately. The JS is not written by Google, it's written by the individual advertiser, with very limited oversight.


It is served by Google. Google has an enormous amount of resources to vet the code that Google serves. They are skirting their obligation of due diligence.


Yes, discounting Google's responsibility is inappropriate. They take money for this. They're responsible.


It's hard to change the status quo when people's livelihoods are at stake. It's not just about Google, it's about all the sites running Google ads, and all the companies advertising through Google. If Google makes things sufficiently less profitable for all those parties, they'll just move elsewhere. Incremental progress is being made, like SafeFrames, but it's never going to be completely solved all at once.


Oh no, how will the poor advertisers survive if someone tries to stop them from shipping malware?


The problem isn't that advertisers should be allowed to ship malware, it's that it's hard to distinguish malware from non-malware. Also the adjacent problem of "not quite malware, but shitty code that spams error logs and runs way slower than it should".


Why should ads be so free to run arbitrary code? It seems to me that in the end Google should be held responsible for anything they serve to others. If they'd be fined for this lack of oversight, perhaps they'd block javascript as a whole until they do have proper oversight in place?


Because advertisers pay more for it, and publishers like making money. If Google didn't offer it, someone else would, and they might do even worse than Google at sandboxing the code and trying to filter out abusive ads.

Not saying it's a great situation, just explaining why there's not an easy solution.


It's an odd argument to say that Google must do it in this poor way because it's at least better than how the others would do it. If Google stopped offering it, so as not to waste others' compute cycles or invade privacy too much, that would be a good choice. It seems Google would rather not lead by example, but instead take the profits and treat the risk of a fine as a cost of doing business. If Google stopped, at least >30% of worldwide ads would no longer run arbitrary code. A massive improvement for both user privacy and the climate; the wasted cycles are not free.

The solution is easy on Google's side. Just don't do it and accept the reduction in revenue. But I guess the economic incentives are just too big, like you say.


Shitty code should be rejected too, if they have any kind of standard of quality. If it runs slowly during testing, how do you think users will feel when they run it on their systems?

Code that makes your system appear infected with malware is indistinguishable from actual malware.


They already try to detect and ban poor performing ads, but it is hard to do perfectly.


Banning malware won't make things significantly less profitable. Quite the opposite: it will save their business, because if they allow these practices to persist, or even get worse, everyone will be forced to block ads to protect themselves.


Malware is already banned. The problem is detecting it with 100% accuracy, or sandboxing it with no ability to break out. A lot of people are working on it, both at Google and other companies.


> "with very limited oversight"

I think I found the problem.


It's a problem, but it's not "the" problem.

The problem is that letting advertisers write their own JS means advertisers are willing to pay more for the ad. If Google banned that practice, or put in a lot of oversight, people would pay less for ads through Google. But some other ad networks would still allow the bad practices, and thus be able to pay higher rates. So sites would just move more ads to those other networks.

That doesn't absolve Google of responsibility, but it does mean that we can't actually solve the problem just by being mad at Google.


I don’t think expecting Google to take responsibility for the ads they serve is unreasonable.

It is profoundly strange that we extol the virtues of succeeding in society and yet also act like said success doesn’t come with heavy responsibility.


The value in adsense is absolutely not in the ability to run unvetted javascript served from a google-hosted domain. The value is in the tracking and retargeting.

Serving javascript from a well-connected CDN isn't something that sets anyone apart, you can just sign up with cloudflare and have that working in a few minutes.


Those other networks would start getting blocked hard with adblock/DNS block/public shaming of sites using them.


Ad blockers already block all ad networks. And the status quo is that all ad networks have ads with shitty JS, and public shaming doesn't seem to have done much about it.


Google (and Apple) control their mobile platforms and take responsibility for apps served through their respective stores.

Why can't they do the same with their ad networks? Transpile any JS served from their network into a safe execution environment/API that only permits safely allowed resources?


The SafeFrame stuff is an attempt to move in that direction, but it is far from perfect. Hopefully that will continue to improve.


Does googlesyndication.com serve anything that is in the user's interest? I've had that domain blocked for several years and don't think I've ever noticed it hindering any experience.


It's always been in my HOSTS file too. Ditto for their analytics and "tag manager" domains.


Quite ironic that the website posting the article actually loads googlesyndication.com itself.


> These messages appeared in the JavaScript console on Safari while browsing multiple pages on techsparx.com. At first I saw it on one page, then checked other pages and got the same messages. This site is using Ezoic's advertising system, which in turn uses Google Ad Manager for some advertising.

Yeah he noticed it... on his own website.


Why, did you expect their techs to turn it off just for the one article?


I wonder what would happen if you visited in a browser set to allow video/audio with no prompts? Would it bail out with an "oh crap, they actually let us" or would it actually try to do something?


I have to wonder if anti-fingerprinting is the wrong approach to privacy invading advertising. There’s an inherent asymmetry between the resources available to those who build these systems and those who try to stop them.

I’d love to see more stuff like CCPA. As a California resident I can simply tell Google that my data is not for sale, and they’re obligated to respect that regardless of what fingerprinting happens.

This isn’t an ideal solution, but the whole issue of privacy seems like a people/politics problem we keep trying to solve with technology.


Laws take a long time to enact, especially when there are companies with almost unlimited money lobbying against them. Technology (such as browser updates and ad-blocking extensions) can be deployed immediately and can adapt quickly to changing threats.


On modern android devices there's a "Quick settings developer tiles" option called "Sensors Off" that's available after you enable developer mode.

After you enable that, a settings button will appear when you pull down your notification/settings menu for "Sensors Off". This disables the microphone, camera, fingerprint reader, accelerometer and other sensors.


I wonder if it's a fingerprinting attempt gone wrong? I can't see a reasonable, even malicious, reason for eavesdropping on mic/camera at that scale: you'd be capturing a ton of data you'd need to manually process and clean up, which takes time and resources; most people won't stay on the page long enough to capture much sensitive info; and even then, eavesdropped conversations seem pretty useless unless you also have the whole context and information on the person you're targeting to effectively misuse that data.


> eavesdropped conversations seem pretty useless unless you also have the whole context and information on the person you're targeting to be able to effectively misuse that data

Keywords (i.e. "perfume", "car", "phone", "notebook", "flowers", whatever) would probably be enough to "improve" targeted ads.

However, this would be a huge scandal if true, so your first suggestion, fingerprinting gone wrong, is more likely.


This sounds like one small piece of common fingerprinting techniques.

It would have been nice to see the author address that possibility, but it seems fingerprinting is not mentioned.


IMHO, fingerprinting would explain the enumeration attempt, but not the attempt to access these devices.
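One way a denied getUserMedia() call could still feed fingerprinting is timing: an automatic, policy-based denial tends to reject almost instantly, while a human clicking "Block" takes noticeably longer. A hypothetical sketch of that side channel (the function name and the 200 ms threshold are illustrative guesses, not taken from any real ad code):

```javascript
// Classify a getUserMedia() rejection by how long it took to arrive.
// A near-instant rejection suggests the browser or an iframe policy
// auto-denied; a slow one suggests a human clicked "Block".
function classifyDenial(elapsedMs, thresholdMs = 200) {
  return elapsedMs < thresholdMs ? "auto-denied" : "user-denied";
}

// In a browser it might be wired up like this (not runnable outside one):
// const t0 = performance.now();
// navigator.mediaDevices.getUserMedia({ audio: true })
//   .catch(() => console.log(classifyDenial(performance.now() - t0)));
```

Either outcome is another small signal that can be folded into a fingerprint.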


Does accessing them give you extra fingerprinting data though? I would imagine that you can then enumerate at least the resolution of the camera.


This only allows you to specify a preferred resolution, but doesn't return the resolution actually available. Any device info is returned in the MediaDeviceInfo object [1] on enumeration (which doesn't include resolution or similar data).

[1] https://developer.mozilla.org/en-US/docs/Web/API/MediaDevice...
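For context, a rough sketch (hypothetical, not the actual ad script) of what enumeration exposes without any prompt: before permission is granted the `label` fields in MediaDeviceInfo are empty strings, but device kinds and counts are still visible, which is already a little fingerprinting entropy:

```javascript
// Count devices by kind from a MediaDeviceInfo-style list. Before
// permission is granted, `label` is "", but `kind` is still populated.
function summarizeDevices(devices) {
  const counts = {};
  for (const { kind } of devices) {
    counts[kind] = (counts[kind] || 0) + 1;
  }
  return counts;
}

// Shape of the data a browser hands back pre-permission (labels empty):
const sample = [
  { kind: "audioinput", label: "" },
  { kind: "videoinput", label: "" },
  { kind: "audiooutput", label: "" },
];

// In a browser:
// navigator.mediaDevices.enumerateDevices().then(ds => summarizeDevices(ds));
```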


The laptop camera can be disabled with a small bit of black electrical tape, but I don't understand whose crazy idea it was to put microphones into laptops in the first place, especially without hardware kill switches like the Librems have. The same goes for modern cell phones, of course.


It's not that crazy of an idea. Laptop users want to be able to record videos, call people, and talk to others.


They recently changed their meet application for some reason. I use it without video, voice only; recently it has started trying to force me to use both - I have to refuse both, then once in the meet go and enable voice only. I loathe all things google but am forced to use meet for work.


I doubt you actually work there given your last sentence, but have you tried proposing an alternative (along with some reasons why it would be better) the next time someone asks you to use it?


> I doubt you actually work there given your last sentence,

What is your logic? My company uses Meet and it's "mandated" in the sense that it is used for all company meetings. I'm in a position that I might get us to switch if I pushed it, but Teams and Zoom aren't much better. I assure you I work at my company.


I meant not working for Google.

I'm not sure about Teams, but Zoom lets you use a standard SIP client to join meetings.


Why do you think they claimed to work for Google? I didn't get that at all.


The proof here is extremely weak, and the crowd has a very predictable anti-Chrome response even though this has nothing to do with Chrome. I think the domain is checking whether a camera and mic are present, not turning them on and accessing their content.


I don't use Chrome, and I don't give a sh*t about it. What I care about is ad networks, and the websites that vomit those ads into the browsers of their visitors.

<mode style="grumpy-old-man"> I simply won't have it. At the moment, with FF, an adblocker and a JS blocker, I think I'm hard to track (but certainly not impossible). If my blockers get blocked, I can live without the WWW. Be careful, Goo! You may own the web, but the web doesn't own us.

As someone once said, "It's just a fad". </mode>


What are you talking about? chrome is being mentioned by two people besides yourself and neither mention is negative. Did you even read the comments before commenting about their predictability?


Well, at least this requires clicking (not to diminish this report), but five years ago it was a zero-click proposition (like Forbes: https://www.networkworld.com/article/3021113/forbes-malware-...). While I don't want to diminish their revenue, the fact that blocking online ads significantly strengthens your security posture is not lost on private companies and governments (like US CISA: https://www.cisa.gov/sites/default/files/publications/Capaci...) alike.


Viewing this on a Framework Laptop that just came in today and I'm laughing because it's the first laptop I've had that actually has hardware switches for camera and microphone.


We have a chance to notice this when it's mediated by a browser. Native apps don't barf all over the console and thus escape such scrutiny.

I first considered this when a friend told me about a brand of lawnmower of which I had never heard let alone searched (mowing lawns is the least interesting activity I can imagine), and one minute later a podcast app had a big banner at the top by which I could purchase a lawnmower of that exact brand. I don't have important conversations in the vicinity of mobile phones anymore.


I've heard similar stories many times but every time it seems to be baader-meinhof effect/frequency illusion or that the linking is not via audio but something else. People have man-in-the-middle checked advertising traffic to see if they either stream audio or send spoken keywords to ad servers and they do not seem to do that. It's more likely that it saw your phone and your friends at similar locations and your friend had searched for the brand before, therefore linking your ad profile to the brand.

Do you think the apps on your phone real-time stream all mic audio or that they run speech-to-text on your device?


That's almost scarier. Seems like it could leak embarrassing information about what you've been searching or buying to your friends.


> it could leak embarrassing information about what you've been searching or buying to your friends.

Yes. Say no to tracking, regardless of if it listens to your voice. Even if it does not leak this way it's probable that one of the major ad brokers will leak data in the future.


Would it have to be audio? It could do TTS on the phone then send the text back.


I said "run speech-to-text on your device". Also TTS would be the other way around as it means text to speech.


Sorry I meant STT


This is a common belief that phones must be listening to us because ads are so targeted, but the scary truth is they aren’t[0] because they don’t need to. They have far more effective ways of targeting ads. For example, the reason you probably saw the lawnmower ad is because your friend searched for lawnmowers, and google knows they are friends with you, so they showed you targeted ads too because you might recommend that brand if you talk or you might subconsciously register that brand and reaffirm their decision to buy with something like “oh yeah I’ve heard of X, they’re supposed to be the best” without remembering where you saw that. It’s even possible you got the ad before your chat but didn’t notice because you had no reason to pay attention to a lawnmower ad.

This is why data privacy is so important even if you feel like you have nothing to hide.

0: as you note, they technically have the ability to do so and random apps could be but the amount of effort it would take to record, transcribe, and evaluate that much data just isn’t worth it when most users voluntarily give their info anyway. This is why I don’t use a phone, browser, or email service created by an ad tech company though and it boggles my mind how many people are ok with that.


In this case, the conservative explanation seemed unlikely for several reasons. This friend would be more precisely described as the elderly friend of my elderly parents. Neither of us are on social media, she doesn't know what podcasts are, and neither of us had even used a web browser in the preceding several hours. Neither of us was in the market for a new riding mower nor had been in the preceding decade. I was on her (remote, inconsistent-cell-reception) farm to help with some cattle. I had never heard of this brand before, and now over a year later I can't think of it.

Is it so unlikely that an ad network sketchy enough to pay its way onto random Android apps would also be sketchy enough to monitor conversations for keywords that can get it paid? I don't think that's unlikely at all.


But mobile apps are required to ask for mic/camera permissions.


There are exploits that circumvent this, of course.


I don't think you'd waste an exploit like that to serve a lawnmower ad.


Sure, but those exist for the web too…


Has anyone seen any well done research showing these effects?


Not a study, but I've heard explanations that big companies have so much data their models are just that good. For instance, they know who you're friends with, both your search histories, and location data. Knowing your friend searched for lawnmowers recently and now y'all are physically close, it's possible that brand was discussed.

Although I wouldn't be shocked at all if mics were being used. I just feel like that would have been leaked by someone by now.


no, they have so much data because their models are so bad. the vast data is used to give the illusion of good models by sheer volume, so little random matches are made more likely. that’s why google and the like want to hoover up our data, they’re desperate to keep the gravy train rolling long enough to create the good models and keep it going some more.


I've heard it from more folks than I'd like. And counting them all as crazy or paranoid is less believable than the alternative.


> And counting them all as crazy or paranoid

Nobody is saying that. It's simply a quirk of human psychology. We are pattern matching machines with a poor intuitive grasp of probability.


I've only experienced it first-hand on YouTube. After trying to learn Spanish and speaking Spanish on the phone (but searching nothing on YouTube in Spanish), YouTube offered recommendations in Spanish. Also, after I sneezed a couple of times, it recommended something related to hay fever. Lots more. I find these way more than circumstantial.


Another reason to disable JS globally whilst doing heavy surfing and only temporarily whitelisting/enabling it on sites you trust.


A physical, hard-disconnect on-off switch for mic and camera should be required by law.


Serious question - why do we need iframes? Can't we disable them?


An iframe allows one party to securely embed something from another party. Ads are one example of this, but so are embedded videos, tweets, etc.

In this case, having the ad in a cross-origin iframe is what keeps it from being able to read the content of the page, which is definitely something you'd want from a privacy/security perspective.

(Disclosure: I work on ads at Google, speaking only for myself)
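For illustration (the values here are assumed, not copied from any real embed), the host page can also explicitly deny camera and microphone to a frame via the `allow` attribute's Permissions Policy syntax, which is why the access attempts in the article failed:

```
<!-- Hypothetical embed: sandbox restricts behavior, allow denies device APIs -->
<iframe src="https://ads.example.com/container.html"
        sandbox="allow-scripts allow-same-origin"
        allow="camera 'none'; microphone 'none'">
</iframe>
```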


Injecting malware into ads is as old as ad networks. There are even ad networks that hijack the ads of other networks and replace the original ads with their own.

There are also many different types of clickjacking:

https://en.m.wikipedia.org/wiki/Clickjacking
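(For reference, the usual server-side defense against clickjacking is refusing to be framed at all; a site that never needs embedding can send both the modern and legacy headers:)

```
Content-Security-Policy: frame-ancestors 'none'
X-Frame-Options: DENY
```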


The best way to solve this would be a physical switch that cuts power to the webcam and microphone... sadly I don't know of any laptops that actually implement this.

At least some (e.g. Lenovo) have physical shutters to cover the webcam lens, so even if the cam turns on, it records only a piece of black plastic.


I don't think that it's Google's fault. Google often trades ads at auction, meaning they issue an HTTP request to partners asking "Hey, do you want to show an ad here?", the partners respond with a price and HTML code, the highest bidder wins, and the HTML code is inserted.

The HTML contains JavaScript, and theoretically anything can be executed within the browser (I've seen people mining bitcoin!).

Google can't monitor and execute every HTML snippet, but they do a pretty great job sampling responses and evaluating some of them. Fraudsters are smart and try to detect whether the code is being executed on Google's servers, but overall they are losing.

It seems like a case where Google's system didn't work.

By the way, all Google partners are listed here: https://developers.google.com/third-party-ads/adx-vendors. Usually it's possible to track down who exactly is responsible by looking at the dev console.


> I don't think that it's Google's fault

Of course it is. It's their ad network.

> Google can't monitor and execute every HTML snippet

Of course they can. There's no excuse for allowing this nonsense on their network.


Well, they do monitor snippets. There's a lot more going on here than meets the eye.

The problem is that bad actors are really good at evading detection through obfuscation and by dynamically serving different code depending on the IP address, so the creative behaves normally if it thinks it's running in a Chrome instance on a scanning server and does bad stuff for real people.

To make matters worse bad actors have automated their process, so when they discover they're blocked everywhere, they rotate to a new account, domain, change their obfuscated code to look different, and are back up in a few hours. This leaves everyone else playing whack-a-mole.

And even if Google sees through all of that, the code might never actually touch Google, but come from one of the many marketplaces or resellers being rendered through Google's Ad Server. For any given site, the list of what markets they work with is usually public. This site, https://techsparx.com/ads.txt, is doing business with way too many markets - 680 of which are resellers of other markets' inventory.

This means if you're a bad actor, you can evade anyone capable of seeing through your obfuscation entirely, select for marketplaces that have extremely poor quality control (I see a few), and wind up on this website.
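A quick hedged sketch of the kind of audit described above (field layout per the IAB ads.txt format: domain, account ID, DIRECT or RESELLER, optional certification authority ID; the sample entries below are made up):

```javascript
// Count DIRECT vs RESELLER entries in an ads.txt file. A large RESELLER
// count means many indirect paths to the site's ad inventory.
function countRelationships(adsTxt) {
  const counts = { DIRECT: 0, RESELLER: 0 };
  for (const raw of adsTxt.split("\n")) {
    const line = raw.split("#")[0].trim();      // strip trailing comments
    if (!line || line.includes("=")) continue;  // skip blanks and variables
    const fields = line.split(",").map((f) => f.trim().toUpperCase());
    if (fields[2] === "DIRECT" || fields[2] === "RESELLER") {
      counts[fields[2]] += 1;
    }
  }
  return counts;
}

// Made-up example input:
const sample = `
google.com, pub-1234, DIRECT, f08c47fec0942fa0
rubiconproject.com, 5678, RESELLER # via some exchange
contact=ads@example.com
openx.com, 9012, RESELLER
`;
```

Running this against a real site's /ads.txt would show how many resellers stand between an advertiser and the page.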


That still is Google's fault as far as I'm concerned as an end user.


If you're looking at it from an end-user perspective then it's the fault of techsparx.com.


If Google can't guarantee no malicious javascript then they should strip all javascript.

If I serve any content to my users, then I'm responsible for any malware it contains.



