Make your own build system (jstimpfle.de)
107 points by jstimpfle on March 7, 2018 | 90 comments


> Although it adds to the code that must be maintained as part of the project, I think that having a custom build infrastructure that can grow with the project is a good idea for medium-sized and larger projects.

Please don't do this for open source projects. Other people may want to use your code, but they generally don't want yet another build system. There are already a lot of build systems that are building very complicated projects. Surely, one of these existing, widely used build systems can be made to work for your project.


TBH I don't have many fond memories of open-source build systems. Any widely used build system (that probably <10% of the developers know very well, anyway) is not going to help if the build realized with it is broken (too flexible, too confusing, too optimistic about others' environments). And frankly most uses of those build systems in the wild are "give up control and pray".

If it's about making it easy for uninitiated collaborators to add new source files, then the listed build description format is pretty self-explanatory to use. And the build script just needs to be run. It takes no arguments, so I don't think there is a problem with that.


> If it's about making it easy for uninitiated collaborators to add new source files, then the listed build description format is pretty self-explanatory to use.

This misses the point IMHO. In practice, however complex the build system is, it's generally easy to add a new source file: simply grep for an existing source file name, find the build files that refer to it, and insert a similar line for your new file.

> And the build script just needs to be run. It takes no arguments, so I don't think there is a problem with that.

What about: cross compilation? ccache injection? verbose builds? parallelism control? installation prefix? Try being a package maintainer for a GNU/Linux distribution, where all of the above needs to be kept under control. You will despise custom build systems.

Established build systems (like GNU Autoconf (e.g. gcc), and, to some extent, custom Autoconf-style configure scripts (ffmpeg, libvpx)) handle all these cases pretty well, and provide fixed command-line options (--prefix, --host) to control them. This means the project author doesn't get to decide how the option will be named and how it will work.

This makes a huge difference when you're trying to package dozens of projects from dozens of different authors for your distribution.

With a custom build system, the best case scenario is you have to re-learn all these options for each project. The worst case scenario is: one project doesn't support some use case (e.g. cross-compiling), so you have to maintain a patch that hacks into the build files to add the behavior you need.


>With a custom build system, the best case scenario is you have to re-learn all these options for each project.

The best case scenario would be that it A) handles all of those scenarios in a highly readable way and B) has high quality documentation that demonstrates the use of all the necessary features in context.

Make's approach appears to be to use cryptic flags and settings and the best case documentation appears to be "have you tried stack overflow?". It is not the be all and end all of build systems.

Additionally, for many of the non-C++ workflows I want to accommodate, the 'make' approach appears to be to hack around its deficiencies with bash scripting. Stringing together hacky bash scripts is not what I would describe as a best case scenario in any project.

I don't think this guy's project is a meaningful competitor to make/cmake, but equally, I can envisage other projects doing a better job than make on its home turf.


> I can envisage other projects doing a better job than make on its home turf.

Of course it's possible; GNU make is far from perfect. It's a pity we can't slowly evolve the language, for backward compatibility reasons. Autotools is also far from perfect.

- As a project author, I like writing custom simple makefiles and hate anything related to editing "configure.ac".

- As a package manager, I love projects using autoconf (or exposing autoconf's options, such as --host): they're the easiest to build because they all have the same "build interface" (alas, by default, there's no equivalent to "./configure --host" for cmake). And I despise projects using "custom simple makefiles", because it means that I'm going to have to learn yet another parochialistic BS (for build system) and have yet another special case for this specific project.

Maybe one day we will have a build system that's loved by both sides? :-)


I think the ideal build tool would be a framework written in a decent, readable high level scripting language.


> Make's approach appears to be to use cryptic flags and settings

You already memorize similar syntax for other languages. Why is it so hard to accept in this case? Comparatively there are a lot fewer of them to remember with make as well. Stop being lazy and read the manual.

> and the best case documentation appears to be "have you tried stack overflow?".

Make's documentation is fantastic. The manual is very comprehensive and well written. Just because you're too lazy to read a manual doesn't mean there isn't documentation.

Go read the documentation for make: https://gnu.org/s/make/manual/


>You already memorize similiar syntax things for other languages.

Yes, and cryptic configuration settings and syntax cause obscure, hard-to-track-down bugs in other languages too. Welcome to programming. I am not saying it isn't a problem elsewhere. I'm saying that if a competing build system manages to do the same things in a less cryptic way then that's a strong point in its favor.

Essentially make is a fully fledged stringly typed programming language - it's got conditionals, loops, variables, etc. It just wasn't a fully fledged Turing complete programming language that had a lot of thought put into syntax, usability and readability. By contrast, there are many languages which actually did have a lot of thought put into those things.

Btw, stringly typed is not commonly considered a 'good' thing.

>Stop being lazy and read the manual.

Did the fact that I commented on its quality make you think that I didn't look at it?

Try doing ctrl-F on the manual and searching for ccache and cross compilation - what the OP regarded as two of the most "necessary" features.

To be fair, I can find information about these things.... stack overflow has information on them.

Also, if you dig deep into the docs, the amount of context given is very thin. Look at the section "the two flavors of variable" - it took about 300 words to explain something that could have been illustrated with two simple examples.
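For comparison, here's roughly what those two examples could look like. In Python terms (an analogy for make's two assignment flavors, not actual make syntax):

```python
extra = ""
immediate = "gcc " + extra        # like  CFLAGS := gcc $(EXTRA)  (expanded now)
lazy = lambda: "gcc " + extra     # like  CFLAGS  = gcc $(EXTRA)  (expanded on use)

extra = "-O2"                     # later, EXTRA changes
print(immediate)                  # 'gcc '     - the := flavor kept the old value
print(lazy())                     # 'gcc -O2'  - the = flavor sees the change
```

That difference - simple variables snapshot their value, recursive variables re-expand every time they're referenced - is the whole section in a nutshell.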


>It just wasn't a fully fledged Turing complete programming language

It is Turing complete though.


>What about: cross compilation? ccache injection? verbose builds? parallelism control? installation prefix? Try being a package maintainer for a GNU/Linux distribution, where all of the above needs to be kept under control. You will despise custom build systems.

None of those are features that developers care about. They are only of interest to package maintainers, systems administrators and other lunatics.


Yea... how come "I don't care" became a virtue?


It's not a virtue, but it is what happens in real life.

And I'm one of those "lunatics".


I think that your build would be much more readable as make. Building object code and libraries from C is... kind of well traveled ground?

Making an idiosyncratic system is satisfying, but the usefulness of your work is decreased if it's decorated with locally invented barriers to comprehension.

Rock on; scratch your itches.


I don't entirely disagree. If you only compile C files this is not for you. This is about exploring what building blocks it takes to also support building all the custom things you need besides that in a larger project.


>If it's about making it easy for uninitiated collaborators to add new source files, then the listed build description format is pretty self-explanatory to use.

If you write a generic makefile the uninitiated collaborator doesn't have to do anything to add new source files.


That's the problem with make and why there are so many build systems built on top of it. Make alone isn't enough. It actually doesn't handle this case at all unless you use a compiler (-MMD) or any other external tool to basically generate the makefile. Otherwise you have to manage the source files and header dependencies yourself.
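To illustrate what "managing header dependencies yourself" means: something has to discover which headers each source file pulls in. A naive sketch in Python (it ignores comments and preprocessor conditionals, which is why real builds lean on the compiler's -MMD instead):

```python
import re

# Naive direct-include scan: find #include "..." lines in a C source file.
# System includes in <...> are left to the toolchain.
INCLUDE_RE = re.compile(r'^\s*#\s*include\s*"([^"]+)"', re.MULTILINE)

def direct_includes(source_text):
    return INCLUDE_RE.findall(source_text)

src = '''
#include <stdio.h>
#include "util.h"
#include   "gfx/shader.h"
int main(void) { return 0; }
'''
print(direct_includes(src))  # ['util.h', 'gfx/shader.h']
```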


It's out of scope for any general purpose DAG runner. Sure you can make a tool that can do that for language X, but make is used with a lot more than that. Make should not implement parsing of source code to find out what they include.


You can set up any build that way (no need to use Make) but I don't think that's a good idea.


I'd be interested to hear why you think that way. In my mind, if a file exists within the src directory but is not compiled as part of the project, its existence is lying to the developer. Keeping a separate list of files to be included makes it possible to lie to the developer. Therefore, wildcard builds should be used, so the list of files in the src directory and the list of files being compiled are one and the same.
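The wildcard idea, as a Python sketch (hypothetical layout; the glob result is the build list, with no separate manifest to fall out of sync):

```python
import tempfile
from pathlib import Path

def sources(root):
    # Wildcard build list: every .c under root gets compiled, no manifest to update.
    return sorted(p.name for p in Path(root).glob("*.c"))

with tempfile.TemporaryDirectory() as d:
    for name in ("main.c", "util.c", "notes.txt"):
        (Path(d) / name).touch()
    print(sources(d))  # ['main.c', 'util.c']
```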


I like to have an official place where the build rules and artifacts are clearly laid out. I don't think file system directories are a good place to put the truth. For one thing, parsing the information from the directory contents means relying on out-of-band data. For another, directories often contain files that "should not be there". Or maybe you accidentally delete an important file. The build couldn't detect the mistake.
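For what it's worth, the mistake-detection that motivates an explicit list is cheap to sketch (hypothetical manifest and layout, not anyone's actual build code):

```python
import tempfile
from pathlib import Path

def check_manifest(root, declared):
    """Compare an explicit build manifest against what's actually on disk."""
    present = {p.name for p in Path(root).glob("*.c")}
    missing = sorted(set(declared) - present)  # declared but deleted: a hard error
    strays = sorted(present - set(declared))   # on disk but never built: a warning
    return missing, strays

with tempfile.TemporaryDirectory() as d:
    (Path(d) / "main.c").touch()
    (Path(d) / "old_experiment.c").touch()
    print(check_manifest(d, ["main.c", "util.c"]))
    # (['util.c'], ['old_experiment.c'])
```

A wildcard build would silently compile old_experiment.c and silently drop util.c; the manifest version can fail loudly on both.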


I think for most projects, contributions are hard enough to come by without having to factor in a new build system.

Honestly even bazel isn't built with bazel. (it can be, but they do that as a test)

It's like rolling a whole new project into your existing ones... For no clear reason.

Perhaps it's a learning experience, however there's no reason to hobble a project with it.

Thankfully, I doubt most established projects would suddenly switch to a custom build system.

Also there are plenty of auxiliary tools developed for build systems which are, at best, difficult to reproduce.

If one needs to improve a build system... Try a pull request?


> having to factor in a new build system.

> there's no reason to hobble a project with it.

I think this underestimates the complexity and overhead of using and configuring some* existing build systems, and also overestimates the complexity of running (no need to edit the build config for most contributions to the project, provided the build system works well) a custom build system.

Established build systems are often complex due to being heavily generalised and configurable. I've seen projects with build configs that are longer and more complex than a custom build system for that project could be.

> there are plenty of auxiliary tools developed for build systems

This is the key comment. You can create your own custom build system while using lower level build-related libraries. There's no need to reinvent the wheel entirely.

* I said "some" existing build systems: I'm sure some are simpler than others, so this may not apply universally.


Yeah, about that last sentence. Are you actually a contributor to an existing build system? CMake? Make? Bazel? Anything?


I agree with the sentence. I've contributed to both cmake and gradle to fix very specific pain points that I've experienced. It's definitely been less work than writing an entire build system because I don't like how big and complex the existing systems are.


I once used Shake[1], a Haskell library/EDSL for expressing and running build rules, to make a build system for work. I highly recommend it over rolling your own, if you need custom build tooling at all.

Our use case was a developer tool that would take in project files, sources, and assets, and bundle them for various platforms, push code to devices, or launch simulators for you. Notably, Shake made it fairly easy to ensure that we never needed to implement a “clean” command—all dependencies were correctly tracked.

Having strong static types and the Haskell library ecosystem available was a huge win for convenience and correctness over something like Make. Shake makes it easy to do concurrent builds with full or limited parallelism, minimise incremental build time, write multi-stage rules where e.g. you need to run the compiler to calculate the dependencies of a file, and properly handle commands that generate multiple output files. Nowadays it even supports progress reporting, profiling, and some linting of build rules.

[1]: https://shakebuild.com/


Building your own build system sounds like reinventing the wheel. As a former build engineer and nowadays a devops, a custom build system would be enough for me not to take a job, because investing time to learn a custom build system is just wasted time: you're never going to use it again on the next project or next job or whatever.

Also, for a project, a custom build system means that you cannot hire anybody who knows the build system; if you are using any established build system, you can.

And also, software developers tend to forget that they are not the only users of a build system, and usually it needs to be integrated into a build&release pipeline.


Nice to see a minimal version of a system.

In my experience the biggest (or most annoying?) problem though is not setting up the dependencies and executing the steps along the DAG but doing the configuration part of a build: what libraries are installed on the system, giving the user ways to opt in/out of build-time features, figuring out how to build shared libraries on the target OS, how to cross-compile. There are tons of these that may need solving depending on your project.

ninja does a fantastic job of handling the core and leaving all the other annoying things for someone else to solve (like cmake with the ninja backend :)


This is awesome! Instead of getting all mixed up in the discussion about which of the existing build-systems is best, and instead of trying to figure out how to work around all of the (mostly arbitrary from user perspective) restrictions, it is a very rational and pragmatic choice to just get it done and make it do what you want.


Let me be honest, the last step of doing it all myself was quite a time sink over the last 3 weeks. But in the end I've learned something that will be of use in the future. I don't think this applies equally to the time I've put into off-the-shelf systems.


I completely agree! All the hours (more like days/weeks) I already spent trying to make CMake do what I want... It only strengthens my dependency on CMake in the end :-/


This is neat! If you're interested, Ninja (https://ninja-build.org/) is designed exactly to provide the dependency-graph execution underneath a program like yours that has a project-local higher-level description of the build goals.


Ninja is a very specific tool that does only one thing, and only somewhat well, though it is fast. Even cmake has trouble employing it for certain generated-file cases.


I've implemented build/task systems in two different ways in the past.

The first is a bash script that ends in

    "$1" "${@:2}"
or something in that region. Then I can define functions and call them directly from the command line. It's rather portable (if you have bash, or if I bothered to write POSIX sh compatible code).

The other was a simple DAG similar to how the author describes it. Each task was a root node and subtasks were nodes connected to it. To figure out which tasks to run I'd simply scan the entire task graph for whatever has no subtasks and spawn a thread/process for each runnable task. Once a task finishes, it's removed from the DAG. Successfully used to run database upgrade tasks (updating multiple tables at once if their queries don't depend on each other).
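A sequential Python sketch of that scheme (hypothetical task names; the real version would spawn a thread or process per ready task):

```python
def run_dag(deps, run):
    """deps maps each task to the set of tasks it depends on."""
    deps = {task: set(ds) for task, ds in deps.items()}
    done = []
    while deps:
        ready = [t for t, ds in deps.items() if not ds]  # no pending subtasks
        if not ready:
            raise RuntimeError("dependency cycle")
        for t in ready:  # the real version runs these concurrently
            run(t)
            done.append(t)
            del deps[t]
        for ds in deps.values():  # finished tasks unblock their dependents
            ds.difference_update(ready)
    return done

order = run_dag({"link": {"a.o", "b.o"}, "a.o": set(), "b.o": set()},
                lambda task: None)
print(order)  # a.o and b.o first (in some order), then 'link'
```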

The only thing I ask of a custom build system is that it A) includes the classic "all" and "clean" targets, B) doesn't have any dependencies beyond what I can expect from any random distro, C) autodetects libraries in common locations, not the locations on the original dev machine, and D) doesn't require root permissions, ever (outside "install" - and maybe we can insert a rant here about how a lot of packages don't have an "uninstall" or "remove" target, making me track down the files they shat onto my disk).


So he re-built Gradle, or Buck, or Bazel, or any number of roughly similar it-runs-a-dag build tools? Not sure I buy that his problem was real. Gradle's the only one I'm meaningfully familiar with and making custom commands over a dag is a breeze.


The "run the DAG" part is very small and simple (as you can verify). I don't think there is a good argument against "rebuilding" that.

As I explained, my motivation was primarily keeping my build description in my own data structures (see the text database). IMO that's a very worthwhile goal. I then went on to see what it takes to NIH everything, and it turns out that detecting what files need to be rebuilt is hairy, especially considering transitive C includes (which require a dependency cache). But there is no one solution to these problems, and I do think there is some value in controlling that logic.
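To illustrate the transitive-include hairiness: if main.c includes a.h and a.h includes b.h, touching b.h must rebuild main.o. A Python sketch of the closure plus a staleness check (hypothetical structures, not the author's actual code):

```python
import os

def transitive_includes(start, direct):
    """direct maps a file to the headers it directly includes."""
    seen, stack = set(), [start]
    while stack:
        for header in direct.get(stack.pop(), []):
            if header not in seen:
                seen.add(header)
                stack.append(header)
    return seen

def needs_rebuild(obj, source, direct, mtime=os.path.getmtime):
    inputs = {source} | transitive_includes(source, direct)
    try:
        obj_time = mtime(obj)
    except OSError:
        return True  # object file missing: must build
    return any(mtime(f) > obj_time for f in inputs)

direct = {"main.c": ["a.h"], "a.h": ["b.h"]}
print(sorted(transitive_includes("main.c", direct)))  # ['a.h', 'b.h']
```

A dependency cache then just memoizes these scans so only changed files get rescanned on the next build.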

And if you get the concepts right, it can be done. I've divided the code into loosely coupled building blocks. That hopefully makes it easier to understand. It should also be possible to recombine these blocks to make different build systems.


FWIW there's a scope threshold that many projects don't always have to cross that justifies using a "real" build tool - and it has to do with those "hairy" parts of file detection and automatically finding dependencies in the shared environment.

If the project allows it, though, your dependencies can be shrunk down until there's no issue. This is far more the case with target languages that already understand modules (e.g. Rust or Python versus C or JS), or "one big framework" projects where you are relying on linking in the one framework and having it do all the heavy lifting on the build side of things.

Actually customizing a build substantially beyond that, tends to be the case more often when you have something like a compiler within your own pipeline, like automatically transforming and clipping source art assets into optimized, UI-ready form. With those systems there is a lot to be said for a simple script that you can debug.


I suspect you could (continuing the gradle example) write gradle plugins that would accomplish what you want to do as far as using your own data structures goes. Seems to me though the majority of what's in that textual database is standard C compile/link stuff that'd be handled fine by https://guides.gradle.org/building-c-executables/


Yep - currently only my data structure generator is in the build. But I have plenty more to add. I have a pretty poor asset pipeline that I currently run manually. This stuff is very costly, so you want to have good control and manual intervention as far as possible. I also want to add more packaging workflows for a basic "component entity architecture" which means collecting information from various modules and reorganizing them into a more global database. These things are tricky and I'm pretty positive I simply want to do them myself.


To me, the fact that people seem to keep doing this suggests that these tools aren't all that great.


Seems nice from a "been there, done that" point of view. Usually rolling your own solution helps in better understanding the underlying problem.

But, as many people pointed out, having a custom build system calls for trouble (eg cross compilation, prefixes, feature control) and often produces a lot of work to maintain or cater to new needs.

Source: We roll a package based, distributed build and test system. Most build packages wrap around cmake. A core dev created it, and while he works on the build system, he can't work on the product. And if he ever decides to switch company or has a severe accident, well... (The code is mostly well written, but still the code base is huge since it was adapted to changing needs in the past decade; and while we're not tiny, we're small enough to lack the resources of having him train a second person in the guts of the system - I know probably 70% of it and am regularly lost).

If it was up to me, we'd take a look at bazel.


Our company is in the same boat - we have this weird, custom package management system that's a franken-hybrid of shell scripts, vanilla Python, Scons, and CMake. Every time I have to do anything even slightly outside of the norm with it, it breaks badly.

And there's this huge sunk-cost that makes it functionally impossible to suggest replacing with anything else. It mostly works but it's getting crustier by the day.

So I feel your pain.


I'll probably get some flak for this, but I'd like to put in a plug for autotools (if you don't need to target Windows).

Autotools has really come a long way since the bad old days back in the early 2000s. It's got some crusty corner cases, but I like it more and more as I use it.

With CMake, it's easy to bury yourself in custom macros. And before you know it, you have a build system that's complex, hard to debug, and impossible to re-target. Scons is similar (although at least you can step through the code in Python).

I'd encourage anybody to try it out. At least, if your codebase is C or C++ and you don't need to target Windows.


The biggest problem I see with the article is that you didn't say exactly what you're doing with this build system. And this gets people's imagination going.

What is the scope? Is this an open source project? How many people are working with it?

To me it looks like this is a personal project that only you are working on. I don't see any problem using this approach. I mean, I use a batch file as my build "system" for my personal projects. But for anything else it would be worth discussing the trade-offs versus using one of the available build systems.

I didn't understand how this works: scanCmd = 'scanincludes-%s' %(oPath). What is this scanincludes thing?


It's just supporting my own project, and I'm the only person working on the build and the project. The listed build description is all I build with it currently.

> scanCmd = 'scanincludes-%s' %(oPath)

I have each node in the DAG identified by a unique string, including command nodes. This one is a name I make up for the node that is responsible for rescanning the dependencies of a particular compilation (the compilation is identified by the .o path)


Oopsie. Must have to do with me enabling IPv6 at the request of another guy. Will fix this later when I come back. You can get the description from https://github.com/jstimpfle/learn-opengl/blob/master/build.... as well


>its configuration syntax is awkward

This argument and others similar to it are often repeated but I've yet to see any of those people describe what makes it awkward.

>and it is restricted to filesystem files and shell commands

It'd be pretty easy to make a makefile that depends on, say, a specific row in a SQL database using dummy files. Also changing the language for recipes is trivial and painfully obvious. If you can't do something as simple as `SHELL := /your/own/interpreter' then why do you think you are qualified to make a build system?


It's very easy to explain and there are many examples of bad Makefiles on the internet. Magic variables, unintelligible string replacement routines, unportable extensions. Almost nobody understands the finer points like = vs := and execution order. Using a simple scripting language with well known semantics can be relieving. Maybe try this? http://nibblestew.blogspot.de/2017/12/a-simple-makefile-is-u...

Creating dummy files is inefficient, inelegant, and unmaintainable. Changing SHELL does not solve the inconvenience of having start a new process and having to go through a command-line interface each time.

Btw. watch your language, please.


Agreed on dummy files, that's one of my least favorite things about Make. If you structure everything right though, you can usually figure out how to do it with the actual inputs and outputs.

GNU Make's documentation is very well written though. Like, shockingly well. Don't be afraid to read through it if you see some weird syntax that you can't figure out.


I know Make quite well, but I don't like the clever compressed syntax. My ideal is a structure where descriptive function names and clear semantics don't lead to unmaintainable verbosity.


Writing your own, general-purpose build system is a hard, multi-year project (spoken from experience[1]).

[1] https://build2.org


Ahh, but you don't need a general-purpose one do you?


If it takes you too long, maybe you're doing it wrong then? Are you sure you are solving a real problem with it?

I don't mean to insult you, so please don't take this the wrong way... The point I want to make is this: is it even a reasonable project to build a build system that does everything for everybody? It would appear - from plenty of evidence in most open-source projects - that most of them had to work around at least a couple of restrictions in their off-the-shelf build system, and I haven't seen a reasonably sized project that got away without additional scripting.

So if the developer already has the need for custom scripting, why not take Python? And if it takes less time+LOC to roll your own project-customized build system than it would take you to integrate a "general-purpose" build system... there is a very strong case for a small Python script like the one presented here.


From these articles, one can always tell who just hit the “need to really master make” stage in their project.


You mean the one build system that doesn't know the first thing about header files? The one build system that fails to detect that a header file changed and thus a certain set of source files must now be rebuilt?

If there's a case for an existing build system here, the system in question is not make. There are much better systems out there (scons, cmake...). Just my two cents.


Make isn’t just for building executables; if your nitpick is about make not having intimate knowledge of header files, you have a long way to go on your road to mastery of make.

Make can be used for automation of many different tasks, for example web page rendering or database creation, to name a few. And, it has built in rules for handling header files correctly; you would do well to study and understand them.


>You mean the one build system that doesn't know the first thing about header files?

It's a general purpose DAG runner. It's not really feasible to add parsing for all the different possible languages that one might want to use imports in. Luckily at least GCC and Clang both have the option -MMD, and any sane compiler for other languages should have something similar.


C/C++ has text-based inclusion, which is just not a sane form of importing stuff from other modules. This is why we can get crazy stuff like broken partial builds. Most other languages are saner to build and some even come with tailor-made build tools.

Using a saner tool than plain make for C/C++ is worth it every time in my book.

make can still be very useful for other stuff, though.


Scons and Cmake aren't too bad. With either one of them, you at least know that a team of developers and a large user-base have worked out most of the kinks.

Honestly though, raw GNU Make isn't too bad either, except for not having any good built-in mechanism for a 'configuration' step.


Nah, it can spot this if you add those header files to the rules, and/or add a rule to extract the dependencies using the compiler, like autotools and cmake do.


So I need an extra preprocessor run to output make rules into temporary files and then I have to hack make rules to make it work. That's what we had 20 years ago. Any decent C/C++ build system these days does not need extra rules for this. And that's how it should be.


There is no such thing as a “decent C/C++ build system”; that’s a fallacy.

Master the compiler front-end’s -M option.


Your world must be quite small then if all you have are compilers that understand that flag. There are perfectly workable compilers out there without this feature. And with a good build system (this thread has mentioned a few) you do not have to care. I can show you build scripts for C++ programs that know nothing about the compiler. They work with GCC on Linux, MSVC on Windows and clang on OS X without a single change. How is that not decent?


Scons and CMake are both heavier garbage, for mostly the same reasons: both require unnecessary dependencies, Scons requires Python which is a monster, and CMake is a bitch to build. Neither fully implements all of Make's capabilities, because both authors never really mastered Make and misunderstand it.

As a final slap to CMake, in the end it generates Makefiles, proving that Make is the be all, end all when it comes to build engines.

You better buy the book on Make and learn it and learn it well instead of wasting your time on toy tools. The longer you try to avoid it, the more time you'll waste.


Hey, if you think Make is so superior and flawless, I challenge you to show what you think my build would look like as a maintainable (non-generated) Makefile? The build description I've listed is not a trivial build but it's not super complex either.

I need to generate Visual Studio project files as well, but let's ignore that for now.


I would like to attempt this even though I'm not the person you're replying to.

There is just one problem though, your page isn't loading.


It works again. Had added IPv6 but broken IPv4 access.


Is the codebase you're trying to compile available somewhere?


Not in a shape you'll want to bother with, sorry. And cleaning that up is not one of my priorities right now (it's a toy project).


Then what was the point of your comment to begin with?


You could still make a Makefile to show how you'd do it. (But you don't have to).

Listen, I'm not fighting ego fights here. Your borderline personal attacks (one of which was apparently removed, either by you or a moderator) are not appreciated. And I'm not a clueless idiot. If you still want to discuss something for good, please write me an email. It does not need to go here. You can also easily find the repository with a web search, if you decide to bother with it.


>Your borderline personal attacks (one of which was apparently removed

Bitch please. You're the one who started with those.


To make things clear, I didn't start or even participate in anything. You posted https://news.ycombinator.com/item?id=16543557 which I considered inappropriate, so I complained in a comment. As a result you replied in a way that lacked any understanding and respect - instead you called my note "ad hominem", which wasn't the case at all, and harshly criticized me for writing in an unintelligible way (I accidentally left out one word in a sentence, sorry for that). This comment was later deleted. I didn't do anything apart from complaining about your behaviour once.

Again, your behaviour is extremely disrespectful. Please reconsider the way you treat people here.


>which I considered inappropriate

You never specified what about it is inappropriate. Your complaints about the language amounted at most to a pathetic ad hominem.

>As a result you replied in a way that missed any understanding and respect

There was nothing to understand, as you had not communicated anything beyond the fact that you found it somehow offensive. As the comment contains nothing offensive, you need to be more specific. Since you didn't do that, it comes off as just a dismissive personal attack.

>and harshly critized me for writing in an unintelligible way

This is because the sentence was unintelligible even after I'd asked multiple people to try to interpret it. This is not harsh criticism, it's just a statement of fact.

>(I accidentally left out one word in a sentence, sorry for that)

The lack of that word made the sentence completely nonsensical. Why are you whining to me about not understanding it? And it seems you never fixed the sentence so it's still just as unintelligible.

>Changing SHELL does not solve the inconvenience of having start a new process and having to go through a command-line interface each time.

What does this mean?


You are continuing to be quite uncivil.

> Since you didn't do that it comes of just as a dismissive personal attack.

I think (from the earlier comment):

> Also changing the language for recipes is trivial and painfully obvious. If you can't do something as simple as `SHELL := /your/own/interpreter' then why do you think you are qualified to make a build system?

is quite an obvious personal attack, and it got deservedly flagged multiple times. I'm sure you're able to find this yourself from two little paragraphs. No need to act as if you couldn't. And even if you couldn't, no need to be extremely offensive and start name-calling.

> This is because the sentence was unintelligible even when I'd asked multiple people to try to interpret it. This is not harsh criticism, it's just a statement of a fact.

Of course it was harsh criticism for an oversight that just happens. The comment is no longer there, but you chose a completely inappropriate and aggressive tone.

> What does this mean?

Replace "having start" with "having to start". And maybe replace "solve" with "remove", which might be a better fit here (I'm not a native speaker).


>Replace "having start" with "having to start". And maybe replace "solve" with "remove", which might be a better fit here (I'm not a native speaker).

The latter half of the sentence is still incomprehensible.


I thought it was perfectly understandable, but here is another way of putting it. By "going through a command-line interface" I mean that you have to start a new process, and hand over all state as command-line parameters (because it's a new process).

This can be impractical and error-prone, especially if you add shell AND make syntax on top. From within a decent scripting language (e.g. Python) instead, you can simply use the convenient built-in data structures (e.g. list/tuple of str) as command-line parameters for subprocesses.


>I mean that you have to start a new process, and hand over all state as command-line parameters (because it's a new process).

Do you mean to the shell's command-line parameters or the compiler's? Either way whatever state you need you can also give as macro expansions.

  MACRO := whatever
  SHELL := /usr/bin/python
  .RECIPEPREFIX := >
  
  .ONESHELL:
  .PHONY: all
  .SILENT:
  
  all:
  >foo="$(MACRO)"
  >print("{}".format(foo))
  >print("You can call your compiler from python if you want. You could even write the build system from OP here if you wanted to")

>This can be impractical and error-prone, especially if you add shell AND make syntax on top.

Make syntax is just simple macro expansion. Shouldn't be too hard. And no one is forcing you to use a bourne-style shell. You can use Lua, Perl, Python, Javascript, whatever you want.

>From within a decent scripting language (e.g. Python) instead

You can use Python.

>you can simply use the convenient built-in data structures (e.g. list/tuple of str) as command-line parameters for subprocesses.

Then do that here. You don't need to reinvent everything just because you dislike one part of an existing solution. This kind of NIH is why there are so many subpar solutions for everything and nothing is compatible with anything else.


Here's Casey Muratori on "reinventing the wheel". This is in the context of game development, but it sure applies to build systems for C/C++.

https://youtu.be/fQeqsn7JJWA?t=140


Actually the harder thing is building your own data/content build system, where not only do the assets change, but so do the compiler and toolchain used to build them.


I work on a project where a huge build system has been developed. It uses the same sources to build a dynamic library, the same library as a static one, and many test programs. It handles dependencies well regardless of the directory structure. It is developed in gnumake. I think you will be criticized less if you use gnumake instead of Python. It has all the features needed for dynamic generation of dependency rules. And if someone wants to improve the build system, a good knowledge of gnumake seems a reasonable prerequisite. The main defect is that it needs 10s before starting any compilation.


However, a few disadvantages to Make: tracking only files and timestamps means that the granularity of your project's modules needs to be at least as fine as the build requires (I think that's one reason why C builds are slow: too many files written). Pattern matches are not enough IMO. The scripting language is bad for advanced things. It's like implementing a large project in shell code.

Having the full power of Python really is something else. So I was considering generating simple repetitive Make rules. But I wanted to explore what it takes to do the rest myself, as well. Setting up dependencies for Make is not nice, either. Having the code under control means being able to adapt to any requirement. It would be easy for example to convert this into a build server kind of thing.

Having a 10s delay is not acceptable to me. I didn't experience that myself though.


The 10s delay is caused by the correct handling of dependencies in a full source tree (around 5000 source files). It is the price of perfectionism.

There are no simple repetitive make rules in the build system of the project I am on: all the rules are generated using macros. gnumake is a full scripting language dedicated to timestamp dependencies. It can easily manipulate lists of strings (as long as they do not contain spaces). What are the advanced things that require a better language?


> The 10s delay is caused by the correct handling of dependency in a full source tree (around 5000 source files)

There is something awfully wrong if processing 5000 files and their cached dependencies takes 10s. It should take a few ms at most, even in a scripting language like Python. I would not be surprised if this is because that build system heavily misuses GNU Make's variable substitution features, instead of being written in a language that features proper data structures (e.g. set, map).


Heck, even cmake does this better than that, without fancy data structures. My bet is this build either makes many shell calls or wildcards through the file system multiple times.



