>> If we continue to push back on bundling, we need to improve our tools and processes and policies to make it feasible to maintain unbundled packages. Otherwise, we need to build tools and processes and policies around bundled dependencies.
Or neither. The dependency hell created by that approach could be considered a bug with the software created that way - even a blocker. Like hey, your project isn't going to get included until it's not a giant spaghetti pile of dependencies.
I'm not really a fan of dynamic linking and shared packages. It's an idea that developed on hand-maintained systems used to process data and run CLI apps.
Snaps and APKs seem to work much better for large apps on installs meant to be replaceable commodities that can always be rebuilt.
I switched to Ubuntu recently, and I'm very glad I did. The app selection is much larger, and they just work without any fussing.
This is one of the places that the often-maligned Homebrew package manager does a good job.
App dependencies in languages like Python and Lua are fetched manually, with exact filenames and hashes, and installed into an isolated location just for that app. If they turn out to be used by several apps, then they get packaged normally.
I wish other package managers did it this way. It would make packaging complicated applications much easier, result in many fewer dependency conflicts, and allow devs to produce more stable software.
It's basically static linking but for Python packages. Sometimes you want shared libraries, but for one-off dependencies it's possibly more trouble than it's worth.
As the person who has probably influenced Homebrew's policy on this the most: I'm very pleased to read this, thanks for saying it!
I was also reading this link today to think more about how Homebrew can handle this stuff better.
To me the main differentiation is "is the package for an application or a library?". If it's a library: it's a trickier balance. If it's an application: vendoring e.g. the NPM/RubyGems modules is most likely to create the best, most useful experience for the user, the upstream creator and the Homebrew maintainers.
As a very very occasional and light-duty packaging contributor, IMO the best thing Homebrew can do at this point is start writing much better docs for packagers. The Ruby API doc is very difficult to navigate if you aren't already a Ruby programmer. And it's very hard to know what the "right way" is for doing anything, other than browsing other formulas and hoping you land on a useful example.
This is another area where Homebrew has an opportunity to do better than the alternatives.
Making it faster wouldn't hurt either, but that's another story ;)
The article suggests using debian’s contrib package category in this way. (Packages with such dependencies live in contrib, since they don’t live up to Debian’s quality standards, which is fine.)
I'm not sure about that. I think it's more akin to vendoring libraries in a C application by unpacking someone else's tarball into your own source tree.
The overall philosophy is that of AppImage I suppose. Flatpak doesn't really count because you still have "runtimes" that other apps depend on.
Any reproducible package manager would suffice. Debian aims for that, too, AFAIK.
With Homebrew you end up having to link the binary, and you cannot link different binaries with the same name.
I just use venvs for Python, and pipx takes care of that, together with topgrade which ensures all software repos remain updated, Homebrew or Nix or not. For Rust I use Cargo, etc etc.
I'd agree Nix does it more elegantly but it is a very different beast to handle. Anything I want to do with Nix(OS) I have to look up how they do it.
The whole point of the thread is that Debian doesn't currently vendor dependencies, so you need to package every single dependency. Reproducibly, but fully packaged and maintained nonetheless, and necessitating careful testing to avoid conflicts between versions.
My point is that Homebrew takes the vendoring approach instead.
I didn't know this about homebrew. Is this only true for installing homebrew python packages?
To be honest, I would `brew install python3` or whatever, then manage my own per-project virtual environments with direnv, so I'd bypass all of homebrew's efforts to encapsulate and package python packages...
I think using per-project venv is still the sensible thing to do, not just in Python but using the equivalent in Lua (hererocks), Ruby (Bundler), etc.
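For Python, a per-project environment needs nothing beyond the standard library. A minimal sketch (the temp directory here stands in for a real project checkout):

```python
import sys
import tempfile
import venv
from pathlib import Path

# Create an isolated per-project environment; with_pip=True would
# also bootstrap pip into it.
project = Path(tempfile.mkdtemp())
env_dir = project / ".venv"
venv.EnvBuilder(with_pip=False).create(env_dir)

# The env has its own interpreter and site-packages, independent of
# the system Python that apt or Homebrew installed.
py = env_dir / ("Scripts" if sys.platform == "win32" else "bin") / "python"
print(py, (env_dir / "pyvenv.cfg").exists())
```

Installing into that interpreter (after creating the env with `with_pip=True`) never touches the distro-managed Python.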
This is for system-wide installations of standalone CLI tools and other "apps".
However if you do find something on PyPI that is a standalone app, but isn't in Homebrew, you can create an isolated env for it automatically using Pipx https://pypa.github.io/pipx/.
I’ve only been using Debian b as a daily driver and a home server for the past 4 months, but I love it.
I had been on Ubuntu for years but ultimately turned away because of them pushing snap packages. Also I had been curious about Debian and wanted to give it a shot to see if it was too bare bones.
Most of the work I do is scientific python with a dash of web dev. The standard packages “just work” for most of my needs, and when I need to venture out for something like Signal or Slack, flatpak has me taken care of.
For projects I either use venv or docker, which feels cleaner to me anyway.
Just FYI, although the Ubuntu version naming scheme lends itself to being abbreviated to a single letter, this isn't true for Debian: the last three releases were Bookworm, Bullseye, and Buster.
And yet, saying you use Debian b could plausibly mean any of those anyway. Several of my servers are stuck on Debian b, and I am not 100% sure which b it is.
The Debian model of an artisanal handcrafted build for every library is just not sustainable. Integrating deeply with the language's packaging ecosystem might be one answer, but they did that with Perl to (IMO) not great results, and I don't think there's much appetite for doing that with Javascript (from either side).
> The Debian model of an artisanal handcrafted build for every library is just not sustainable.
Yet it has functioned for 25 years (apt turned 25 this year). There have been mistakes, like the OpenSSL "fix", but not many. One issue that I see, now that you mention Javascript, is stuff like npm, but to me that's more a sign of npm being badly broken by design, or at least designed for a different environment. Building Python projects on Debian, using apt as your dependency manager, works pretty well for systems that you expect to be stable over time.
One thing people seem to forget is that you can often pull in newer versions of libraries and other packages from Backports. That does mitigate some of the issues with packages being too old.
> Yet it has functioned for 25 years (apt turned 25 this year).
Up to a point. Even 15 years ago the issues were clear and people were having to work around them.
> There have been mistakes, like the OpenSSL "fix", but not many.
That was the single worst bug in a general-purpose computer system IMO, and a predictable result of the Debian way of building packages. But they never felt the need to change their policies.
> Building Python projects on Debian, using apt as your dependency manager, works pretty well
Python has notoriously terrible dependency management. It's a pretty low bar.
> One thing people seem to forget is that you can often pull in newer versions of libraries and other packages from Backports. That does mitigate some of the issues with packages being too old.
The fact that there is such a big ecosystem of workarounds for linux distro packaging should tell you there's something fundamentally wrong with the basic idea.
As a software developer who’s written a fair bit of code in JavaScript, C and recently Rust, the idea that I would look for my dependencies in apt on Debian is a bad joke. At best I’ll get some ancient version of the things I depend on. But more likely, 2/3rds of the libraries I use simply won’t be there at all. Is the rust compiler in apt? Is serde? Rand? What about the long tail of my programs dependencies?
I remember looking in apt for nodejs one time and the version it had was about 5 years out of date, and it was missing all sorts of quality of life features that I had long since taken for granted (like promises). And of course, half of the packages in npm failed to run at all as a result. It was a complete non starter. Maybe I should have searched apt for ancient versions of common JavaScript libraries from a similar vintage? Then I could install them system wide. Yay. And what would all that extra hassle get me? A JavaScript program that would only work on one crusty Linux distribution. Which is strictly worse than what nodejs ships with out of the box: npm. Up to date packages. Modern JavaScript. And compatibility a mile wide.
And I could tell the same story with rust and cargo, and plenty of other languages.
Apt really makes no sense at all to use in this context. Maybe it’s fine to install X and gvim. But that’s about it.
If Debian really wants to mirror npm and cargo and other languages’ package managers, why don’t they just automate it, and add the hundreds of thousands of packages en masse?
Manually adding specific versions of packages from npm and cargo sounds like a massive waste of human effort. At best it will only ever produce a pale imitation of the real package repositories. I just don’t see the point.
The idea of Debian is to have a stable system that doesn't change if it possibly can for the life of the release. If you want to constantly get new breaking changes in your packages, switch to testing or sid.
If you want the latest dependencies, you don't have a stable system: it shifts with the versions of your dependencies. In Debian, you would use an unstable/sid chroot with the new versions (see debootstrap).
That’s a weird usage of the term “stable”. Is the latest rustc release unstable? Are there bugs in nodejs? It’s been years since a nodejs bug affected me.
If anything you usually have fewer problems with newer toolchains, because new features get used by your dependencies. The version of rustc or nodejs that Debian considers “stable” won’t be able to run a lot of software that I want to run.
And despite the name, “unstable” Debian is still usually 6 months behind the latest version of most packages. That’s an eternity for new things like Zig.
This is what Debian's Stable name means: that, once released, the operating system remains relatively unchanging over time.
> That’s an eternity for new things like Zig
So why isn't Zig making sure that their latest releases are available in Debian Sid? Anyone can be a Debian Maintainer, and if they don't want to invest their time in it, they just need an existing DM to sponsor their uploads.
Yeah it’s relevant. It’s just, not how a lot of software is made today. I don’t want my browser or my compiler to remain unchanged. I want it to be evergreen. Stop updating your web browser and in a year or two websites will stop working properly.
As for why don’t the zig developers become official Debian maintainers, gentoo maintainers, redhat maintainers, FreeBSD maintainers, homebrew maintainers and so on - well, probably because they’re too busy making zig. There are way too many unix package managers to support without help. Especially if doing so requires joining mailing lists, manual testing and so on. I wish there was a way to automate making all those packages across different unix distributions, but I can see why that’s a hard problem.
Longer answer: In Debian's arrangement, the idea is that the distribution moves together, so it doesn't have a native idea of "fast moving packages on top of slow moving base packages". There are other systems where this isn't the case; AIUI you can run bleeding-edge packages on a stable FreeBSD base system (largely because the BSDs have a very strong separation of the base OS from everything on top), and NixOS is quite happy, especially with flakes but you could do it other ways, to go as far as "start with stable NixOS but then add packages X, Y, Z from unstable, and don't provide libFoo at all in a global context but provide libFoo 1.0 to package A, libFoo 2.1 to package B, and libFoo 3.0 straight from git for package C". But in Debian, the equivalent is probably running a newer version in a chroot or container, yes.
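The "stable base plus a few packages from unstable" pattern looks roughly like this as a Nix flake (a sketch, untested; the branch names, host name, and the choice of Zig as the unstable package are illustrative assumptions):

```nix
{
  inputs = {
    # Stable base and a second, faster-moving package set.
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
    nixpkgs-unstable.url = "github:NixOS/nixpkgs/nixos-unstable";
  };

  outputs = { self, nixpkgs, nixpkgs-unstable, ... }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ({ pkgs, ... }:
          let unstable = import nixpkgs-unstable { system = "x86_64-linux"; };
          in {
            # Most packages come from the stable set (pkgs);
            # a hand-picked few come from unstable.
            environment.systemPackages = [ pkgs.git unstable.zig ];
          })
      ];
    };
  };
}
```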
Debian is an end-user distribution, not a development tool. It bundles stable versions of software, which it releases at (somewhat) timed intervals. The Debian model of software development means that software developers target testing/sid for their new versions (and work to make sure the dependencies they need are in sid!) so that their software is automatically ready for inclusion when a new release is made.
There's two problems with that: many tools no longer have multiple development lines, which means that Debian can't keep its stable releases secure without manual backporting effort, and upstream developers don't care about getting their software in Debian. This means that Debian unstable already lags behind, as you notice, but also that the freeze-before-release takes longer than necessary. But in my view, those are mainly ecosystem problems, not just Debian problems.
What does “script language development” mean? Despite the name, I’m not using JavaScript to write scripts. I’m largely writing networked server applications - which are very much a systems problem.
And rust is very much not a scripting language but it’s in exactly the same boat. Actually it’s probably worse because async and http support isn’t built in to rust’s standard library. A simple hello world web server with tokio and serde will probably have on the order of 40 dependencies in the dependency tree, not to mention rustc and cargo. How many of them do you think are in apt? Maybe none since, contra Debian’s policy, the final build result is statically linked anyway. The rust compiler doesn’t support dynamic linking of rust dependencies.
The reality is that apt simply doesn’t provide a stable, useful, modern software development ecosystem. The task of providing a “stable” version of rustc or nodejs is these days the responsibility of the rust and nodejs projects - which both have CI with incredible testing infrastructure. And the effect of that is that the latest stable versions are going to work more reliably to run real JavaScript and rust programs than whatever crusty version Debian ships.
And that disharmony between what Debian wants and what developers want causes real problems when shipping software to end users, and when deploying software to servers.
> One issue that I see, now that you mention Javascript, is stuff like npm, but to me that's more a sign of npm being badly broken by design, or at least designed for a different environment.
npm being "badly broken by design" is one opinion. But the fact is that almost all modern language tooling is moving in the direction of becoming more npm-like or has been created from the beginning with heavy npm inspiration. See for instance Rust's Cargo, Dart's Pub, Go's modules, Python's virtual env, Esy for Ocaml, and more. Developers love being able to easily manage and add dependencies, and this problem is only going to grow for Linux distributions across language ecosystems.
Python's virtualenv is older than npm. For me, I think what makes Python's dependency management work is that there's a logical separation between the environment and the package managers. You create a new environment, using venv, pyenv, Conda, containers, whatever you like, it doesn't matter; then you install packages into that environment, which might be the latest, might be a specific version, depending entirely on your needs.
Something like npm or Ruby's Gems is confusing because I have no idea where the packages went. Apparently they live somewhere in the current directory, maybe? What if I want to use the same environment and packages elsewhere? I'm sure you can do it, but it doesn't seem logical to me that you don't need to specify which environment you're using. It's the same with Go: it's rather confusing where packages go, or which version; it's all squirreled away in the go command. In one way it is really nice that it just goes into the same project directory, but it's also a little confusing that your environment changes when you move directories.
For production and containers... you normally just have that one thing running, so into the global environment it goes, and here using apt to install the packages also ensures security updates (well, not in the containers perhaps, unless you rebuild them).
> It's the same with Go, it's rather confusing where packages go, or which version, it's all squirreled away in the go command
Something that was lost when we moved to modules - the system is still the same, it's just that if you never used the old system (GOPATH) or read the manual, then you might be confused about where things have gone.
Also of note is that vendoring was for a time Go's answer to dependency management and can still easily be done, and is supported by the tooling.
Or you could do the sane thing that Java/Maven did and have a classpath and have a single local repo and references just to stuff you need, without any conflicts (barring conflicts inherent to any package manager with transitive dependencies).
> Something like npm or Ruby's Gems is confusing because I have no idea where the packages went. Apparently they live somewhere in the current directory, maybe? What if I want to use the same environment and packages elsewhere? I'm sure you can do it, but it doesn't seem logical to me that you don't need to specify which environment you're using.
I find the opposite - in theory the Python way might be more orthogonal, but in practice all that does is gives you more chance to shoot yourself in the foot. Realistically I never want to run project A's code in project B's environment or vice versa, and I certainly don't want to update project B's environment with project A's dependencies - combined with the fact that there's no way to undo a `pip install` command or recreate the previous state, you can permanently screw up your development environment by mixing up your terminals.
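One partial mitigation for the "can't recreate the previous state" problem is to snapshot the environment before installing anything. A sketch using only the stdlib, roughly what `pip freeze` prints:

```python
from importlib import metadata

def freeze():
    """Return name==version for every installed distribution,
    roughly what `pip freeze` prints."""
    return sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in metadata.distributions()
    )

snapshot = freeze()
print(len(snapshot))
```

Saving that list to a file before experimenting means `pip install -r snapshot.txt` in a fresh venv can at least approximate the previous state.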
If you use Ruby with rvm or asdf the gems go in a directory inside your home, named after the ruby version you are using for a given project and possibly a gemset you defined, so two projects with different dependencies but the same ruby don't interfere with each other. Or just use bundle and activate only the gems in the local Gemfile. Or use docker and put everything inside a container.
If I can complain about something, it's that there are so many ways to do it that I can't remember their details when I'm away from my laptop, as I am now. I think I have at least one occurrence of all four methods on my laptop, for different customers. They decide, and everybody makes a different decision.
Other languages: same problems with different tools.
What's "badly broken" is JavaScript development as a whole, and TypeScript provides the closest thing to a workable solution where library upgrades along semver lines can be expected to just work. Python is also a highly dynamic language, but there's also a lot less churn in that ecosystem and this does help to a large extent wrt. packaging libraries for a distribution.
If it's functioned for 25 years why does no one use it for their ecosystem?
There's a wide range of package managers in the wild that exist to help build software using features that apt fundamentally does not support because of its broken model of the universe.
I don't know anyone who uses Debian and actually sticks with their packaged apps. The most important programs they're using on their computers will be installed either from third-party repositories, or some other tool (whether that's snap/flatpak/nix/etc., or a language-specific package manager like NPM), or built from source themselves, because Debian's "native" packaged versions of everything are chronically out of date.
That's the tradeoff Debian users consciously make. And it's a good one IMHO. Not sure why you make it sound like a bad thing.
For the stuff that you really _need_ to have the latest version, you have options to install it yourself. For the rest of the system, if you don't care about having the latest updates, you get a relatively stable system that doesn't potentially change every other week.
Of course some people do prefer having the whole system on the bleeding edge -- in which case Debian is obviously not for them, and there are many other distributions that cater to their needs. But the people who use Debian made a conscious choice to keep most of the system "stable" except for the important programs that they know well enough to "maintain" the latest versions themselves.
> For the stuff that you really _need_ to have the latest version, you have options to install it yourself. For the rest of the system, if you don't care about having the latest updates, you get a relatively stable system that doesn't potentially change every other week.
Those options exist, but Debian doesn't really support or steer people towards them. IMO their approach compares pretty poorly to e.g. FreeBSD where you have an explicit packages/ports split and support for using ports. (E.g. installing a newer version of an application on Debian is relatively easy, but if that application needs a library then it becomes quite hard, to the point that people resort to vendoring or static builds in one form or another)
> I use debian for work and never install anything outside their official repos unless it's not packaged or is unmaintained.
So you're saying you do install things outside their official repos, because their official repos don't package or maintain everything you want to use?
Anyway, I for one don’t think that staying with for example nginx for two years on the same version is a good idea. Even if they fix security issues, you still miss out on features.
> I don't know anyone who uses Debian and actually sticks with their packaged apps.
Many people in corporate, because that removes a lot of noise in "compliance" audits - show them that you have the main and security repository enabled and do regular update checks and installs, and that's it.
For me, the exception is Docker and Kubernetes, but that's easily explainable to auditors as a business need.
Especially for Linux, not everything is a desktop system. I have Debian installations running on embedded machines in the tens of thousands.
Besides our custom kernel modules and business applications, Debian has everything these systems need and more.
I find it way cooler to use Debian on embedded systems rather than some buildroot, because it's a widely available and reproducible environment and debugging and development tools are an "apt install" away.
> Besides our custom kernel modules and business applications, Debian has everything these systems need and more.
The thing is that even in embedded, people are starting to see the value of using libraries when writing those business applications, and they end up being the same libraries that parts of the base system will use. So either you need to align how the OS approaches those libraries with how the developer wants to, or you need a user-friendly way to have OS-managed versions and developer-managed versions coexist. Current Debian does neither of those things.
I daily drive Debian stable and the few packages that aren't from the repositories are by choice. My most important programs are Neovim, Alacritty (for vim keybindings) and Firefox. Before Bookworm I had to build Alacritty from source but that's it and these days they're an apt install away.
I build dwm and dmenu as well but they exist in the repos if I didn't like patching them. If I go outside the repositories that's a choice, and I think hard if I really need it on my system, or maybe a VM for isolation makes more sense if I don't trust the developer.
If one needs bleeding edge everything maybe Debian isn't the right choice. Why does everything always have to cater to the masses? Let Debian be boring for those of us who love it for what it is.
You're missing almost 1.5 years of bug fixes then (Bookworm+Unstable 0.7.2, Jun 26, 2022; Neovim stable 0.9.4, Oct 9, 2023). LSP support in particular has gotten much better in the meantime.
> If one needs bleeding edge everything maybe Debian isn't the right choice.
> You're missing almost 1.5 years of bug fixes then
No I'm not, I used vanilla vim without LSP until Bram (R.I.P) decided to write another vimscript, too much NIH for my taste. I'm fine with what I use and have my workflow. I'm considering trying out how Flatpaks work for me for the isolation though.
I do use Docker and even the occasional VM for complex environments and don't notice any difference in my use.
Depends on what you consider "most important". For me the most important is the kernel, the xorg, the WM, the infrastructure so to speak. And then if I run an app image on top with let's say Freecad, I don't mind.
I'd have run it anyway, because I want the latest software without the disadvantages of keeping my entire distro rolling.
Also I make use of running software in docker a lot.
> For me the most important is the kernel, the xorg, the WM, the infrastructure so to speak. And then if I run an app image on top with let's say Freecad, I don't mind.
The way I see it Freecad - the thing that you're actually doing something with on the computer - is the point, the kernel and all the rest of it are just there to support that.
> Also I make use of running software in docker a lot.
Yeah, I didn't put it in my list but that's another one I see a lot.
This is on point for Stable releases of Debian and is working as intended. However, there are many of us that run the Testing or Unstable releases of Debian. While this might not be the ideal way to run Debian, it is a way to run Debian with far more modern packages. While it's not a rolling release like other distributions, it is slightly closer to this with, perhaps, slightly less risk compared to a rolling release.
It's what OpenBSD ports have been doing for npm packages for a while, until they eventually gave up and just told people to install using npm.
There are very strong arguments for both approaches. Some people will tell you to use a version of a library in your distro's repo, because it's been vetted by a third party. It's nice to be able to choose an OS/distro that fits your needs.
I wish Opensuse's Build Service gained more traction. Automated, reproducible builds for most distributions, itself open source etc. One of the many underrated tools in the ecosystem.
Back when I wrote a lot of Perl code, I limited myself to the debian packages, since they were tested and API stability was enforced by the debian team. It was missing a few packages, but they were either niche or low quality.
CPAN was untested garbage by comparison. (In the same way that pip is today.)
Besides the obvious people scaling problem, trying to create a deb package per pip package or rust crate will also explode the size of the apt metadata to download.
I use Debian because it is boring and stable. My core system is small and never changes by design. For anything that needs bells and whistles I either spin up a VM, use another package manager or build from source.
Don't try to change Debian, not every package needs to be on every system.
Even unstable is nowadays boring and stable, and it's hard to get into trouble even with experimental!
A decade or two ago it was more exciting, when apt upgrade on unstable (or testing after a recent stable release) quite regularly got your computer into an unbootable state, the DE fubared, or packages into some interlocking dependency mess that could be more or less impossible to solve. Haven't had that excitement in many, many years now.
Yeah, but I don't know if that's true throughout all the ecosystem.
One of the main selling points ( if not THE selling point of Debian ) is the stability and lack of "surprises".
If that fails and Debian starts to be perceived as "once was the gold standard, now they are slacking", trust erodes. Even if you're right about 80% of it.
I didn't mean this as a criticism, but a flippant praise of how well Debian does things. Unstable is for most packages relatively bleeding edge.
Of course not e.g. on Arch level, where the excitement is there all the time and the bleeding could be even literal in some use cases. (In Arch this is mitigated somewhat though by the vastly superior packaging system that makes problems easier to fix manually than the awful .deb.)
This problem in various forms has been around for decades now. In 2007 it was clear to me that Debian style packaging was insufficient for handling anything beyond trivial deployments of any tools or services from the Python, Ruby, and Java ecosystems. Things have only gotten faster-churning and more complex since then. The old model of packages for shared libraries that are updated across the board just isn’t workable. Both from the packagers point of view, because of the busywork, but also from the users’ point of view, since the low package coverage means many things just haven’t been packaged anyway.
In my opinion it’s probably fine for Debian to stick to the old model still for the core OS features, but operators need to know when to use something different for their own needs.
The old model of packages for shared libraries that are updated across the board just isn’t workable
Why is it not workable, other than "developers no longer care about API stability"? Should we uncritically accept the state of the software development world as it is today, or should we strive for something better?
>"I'm talking about packaging xyz 1.3.1 and 2.0.1, as separate xyz-1 and xyz-2 packages, and allowing the use of both in build dependencies. Then, a package using xyz-1 can work with upstream to migrate to xyz-2, and when we have no more packages in the archive using xyz-1 we can drop it. "
What a naive soul, thinking anyone in the JS world cares even remotely about API stability. xyz-1 will become obsolete within 2.5 weeks and the dev will tell the users to deal with it. You can't bring accountability into that culture.
They should just drop gsa. It's a vulnerability scanner. People interested in that and in its web frontend are supposed to have the skill to follow the instructions on GitHub and install those two pieces of software. One has a Dockerfile, the other must be built from source. It can probably run in a container with the correct dependencies. Problem solved.
I use Debian on my laptop but in no way am I using their vendored languages or packages except what's needed to run the OS. Everything else moves too fast. It's either language managers or docker containers.
What I want from a distro is a solid foundation to run my desktop, editor and browser, then I care about everything else.
The article suggests moving it to contrib, and having it download its stuff post-install.
That seems completely reasonable. If it uses common npm libraries that don’t break api’s every few months, then those could be moved to stable.
I’d rather libraries packaged by Debian releases continue to work for more than 30 days than have them package unsupportable dev packages that bitrot my dev environment on every “apt-get update”.
It’s kind of amusing to me that there are so many comments saying this is unsustainable, given the fact that multiple BSDs maintain a quality bar similar to Debian’s.
There are a number of benefits that come from letting the distro maintain a piece of software, such as unattended upgrades, or even just a unified UI for updates.
I know, but they are usually months or years behind. If a customer tells me that they are working with Language x.y.z I need to install that version, no matter what my distro gives me. And if a project is N versions ahead of the version in my distro, I usually install from either their own PPA or docker image.
If I had one euro for every time I read something along the lines of "the version of our software available in Debian is terribly out of date; do not install it, it's full of bugs that we fixed over the last few years, and it's missing new features. Follow our installation procedure instead."
I had a think about how you could best deal with this and then realised I was probably just designing a worse Nix.
I think, unfortunately, the Debian model just can’t scale to the sheer volume of what it’s being asked to do. And, as pointed out by one of the original posters, it’s not clear what benefit creating a common standardised version of lodash buys anyone.
Distro-specific packagers are broken because the surface area and workload are too high, as it is essentially n packages * m distros to maintain. Language specific packagers are broken because they can't express dependencies across the language barrier in a good and automated way.
The latter seems easier to solve as long as there is someone that actually wants to solve it. Currently the language packager maintainers don't seem to be interested in a common solution (understandably, as it is a very complex problem).
> Language specific packagers are broken because they can't express dependencies across the language barrier in a good and automated way.
> The latter seems easier to solve as long as there is someone that actually wants to solve it.
This is solved by Nix. Packages of any type can depend on each other.
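For illustration, here is a minimal sketch of what that looks like in Nix. The package name "mytool" is invented, but the structure follows common nixpkgs conventions: a Python application declaring a C library (libxml2) and a Python library (lxml) as dependencies in the same expression, resolved by the same tool.

```nix
# Hypothetical sketch: a Python application whose dependency graph crosses
# the language barrier. "mytool" is an invented name.
{ pkgs ? import <nixpkgs> {} }:

pkgs.python3Packages.buildPythonApplication {
  pname = "mytool";
  version = "1.0.0";
  src = ./.;

  # C-level build dependency, drawn from the same package set
  buildInputs = [ pkgs.libxml2 ];

  # Python-level runtime dependency; lxml itself links against libxml2,
  # and Nix tracks that edge in the same graph
  propagatedBuildInputs = [ pkgs.python3Packages.lxml ];
}
```

There is no separate "Python world" and "C world" here: both edges live in one dependency graph, which is the property the parent comment is asking for.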
One difficulty with NPM packaging is that it's traditionally quite hard to work out whether your dependency spec is correct or not. You can either keep bumping the version, or leave it be and hope no one checks, because it's hard to test.
This duplicates your project in a temporary directory and locks all the direct dependencies to their lowest semver-compatible version. It's not infallible, but it helps.
The only problem is that some projects rely on their direct dependencies bumping versions in order to force updates of transitive dependencies. Which is horrible, and also fairly easy to avoid by using a tool like Renovate.
Pretty cool! Does it work with PNPM? I’ve mostly fully switched to that because NPM has for many years stood out as one of the worst package managers across ecosystems.
I'm afraid I've not tried it with PNPM, but it's unlikely to work without at least a small tweak as it executes an "npm run <script>" internally. You get to pick the script though.
If you're shipping a library, it's the consumers' lock files that are important. You probably want your lockfile to be as up-to-date as it can reasonably be, for testing, but you also (at least in my opinion) should keep your dependency bounds as wide as is correct so you're not forcing dependency upgrades on your consumers.
What Renovate recommend (and I concur) is specifying exact versions of development dependencies in your package.json, and broad version specifiers for peer and production dependencies. Then you use Renovate to test and bump the lock file versions, keeping them up-to-date.
Downgrade build then helps you ensure that the lower bound on your peer and production dependencies isn't set too low.
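Concretely, the split described above might look like this in a library's package.json. All package names and versions here are illustrative examples, not recommendations:

```json
{
  "name": "my-lib",
  "version": "1.4.0",
  "dependencies": {
    "lodash": "^4.17.0"
  },
  "peerDependencies": {
    "react": ">=17 <20"
  },
  "devDependencies": {
    "typescript": "5.4.5",
    "vitest": "1.6.0"
  }
}
```

Production and peer dependencies stay broad so consumers aren't forced into upgrades; dev dependencies are pinned exactly, and the lock file records the precise versions actually tested.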
This is a huge problem for Linux distributions. I've been thinking of trying to open source a project (it's a web application simulator) that we have internally, that is built using Javascript/NodeJS and Python/Flask. While the mechanics of slapping a GPLv3 license on the code we wanted to publish were fairly straightforward, severe problems arose when we wanted to package it for distribution with an OS like Debian. The application is a web app that uses both Node and Python, and the complexities of packaging it were overwhelming. For now we are planning to release it as a container image. I'm not a fan of doing this, but it seemed like a solution we could conceivably implement with the time and resources we had available to us.
Please just provide a list of dependencies with min/max versions, one for Build and one for Usage.
Then provide a detailed example of the config files, a systemd unit file, and maybe some config examples for the three main web servers.
People will then deal with it. The target audience for such a project is not a novice user, and it's more resource-friendly; it also helps people understand how to debug the app and how it works.
I'm testing around 10 new tools every week, and I hate when the project page only gives me a Docker/Flatpak/Snap solution. I always find a way to install it raw without any instructions, but it's painful.
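As a rough sketch of the kind of unit file being asked for here, applied to the web-app project upthread. The service name, user, and paths are all invented and would need to be adjusted to the real install layout:

```ini
# Hypothetical systemd unit for a Node-based web app.
# "webappsim", the user, and the paths are placeholders.
[Unit]
Description=Web application simulator
After=network.target

[Service]
Type=simple
User=webappsim
WorkingDirectory=/opt/webappsim
ExecStart=/usr/bin/node /opt/webappsim/server.js
Restart=on-failure
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
```

Dropping something like this into the repo alongside a dependency list costs the upstream almost nothing and saves every packager and self-hoster from reinventing it.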
Maybe I'm reading it wrong, but it sounds like they wanted it to be included in the distribution, not just to make a package for it. If that's the case, fpm wouldn't help. They'd still have to split out all their deps, get them packaged, and also hope the deps don't collide version-wise with existing deps.
I'd prefer the vendoring approach for all packages. I don't care about disk space or traffic, but I do care about pinning precise dependency versions. I'd even say I'd prefer to virtualize every application in a Docker-like manner. That's what I mostly do with server applications, using Docker or k8s, and it works for me. There's no reason it wouldn't work with desktop applications.
I agree for projects I maintain, where I can bump a package and the dependencies myself.
But on a server with 10 services, depending on 10 versions of a badly maintained project that requires an urgent security patch? Not so much. I'd rather just update a shared lib. It's a matter of scale and resources.
The great thing is that there are options, Flatpaks do what you want, no need to demand that Debian does the same.
If you don't care about disk space or traffic, does that mean that other people who do care about it are automatically wrong? What about people who care about software security and want to know which vulnerable versions of which software exist on their servers or desktops?
> There's no reason it wouldn't work with desktop applications
I think this is partially caused by the frankly anemic standard library some languages come with. Take Node, for example: it took years (almost a decade?) for "fetch" to be available, and you still can't connect to a database without external libraries, so you need to install those, which then bring in their own dependencies, because the standard library is so limited...
“Go” is cited as one of the problematic ecosystems and it’s generally considered a “batteries included” language.
The problem is not anemic standard libraries; it’s ecosystems where vendoring is common, because distros will unvendor packages so you use whatever Debian ships.
Which on one hand is understandable: if they ship a security fix in one of their packages they want every dependent on the system to get that fix; on the other hand there’s no guarantees whatsoever the package works once unvendored.
> there’s no guarantees whatsoever the package works once unvendored.
And it’s precisely the job of the maintainer to ensure that it does, and more generally that all the various things they ship together do not conflict with each other. Assembling a coherent system despite each individual developer’s understandably narrow view (in explicit opposition to that view, if need be) is and always was a distro maintainer’s job description.
Unfortunately, the bug reporting story has become worse over time. GNU packages included a configuration-time override for the bug reporting email to accommodate distributions and other modifications, but in most software today you’ll more often see a hardcoded link to the upstream homepage. And of course people will often just ask Google for the bug tracker address instead. This leads to understandable annoyance on the part of upstream developers. (It still doesn’t make acting against their intent improper in any way, though.)
TBH it's mostly caused by the lack of consistent semver use in the npm ecosystem, which in turn is driven by JavaScript being a highly dynamic language and not being designed for programming 'in the large'. The whole point of semantic versioning is to make it possible to auto-upgrade dependencies to the latest compatible version, at which point devendoring a dependency and packaging it separately starts to make a lot of sense. If every dependency upgrade requires a complex review of the code, there's no point in devendoring since it will just result in lots of bespoke package versions adding pointless clutter to the archive.
C makes it too inconvenient to pull in 300 dependencies. I think the recipe for dependency hell is an insufficient stdlib plus a decent included package manager.
C developers basically invented vendoring - loads of C libraries are distributed as "the whole library is in a single .h file, just copy it into your repo"