Some build steps take input files that are not built per se: for example, ‘codesign’ takes an entitlements property list, and the linker may take a text file to embed in the linked executable.
It is now possible to have these files go through variable expansion via the ‘expand’ command, for example:
expand CS_ENTITLEMENTS "${dir}/Entitlements.plist"
This will run ‘Entitlements.plist’ through ‘bin/expand_variables’ and set ‘CS_ENTITLEMENTS’ to the resulting file.
For example, if we link an executable with an embedded property list, the linking step can be made dependent on the source property list by referencing a “${plist}” variable in LN_FLAGS, where “${plist}” has been set (via ‘expand’) to “${dir}/info.plist” or similar.
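A minimal sketch of what the generator could do with such an entry; the helper name, data layout, and output location below are made up for illustration and not the actual implementation:

    # Hypothetical: turn an `expand NAME "file"` entry into a build edge and
    # record the expanded path, so later references to ${NAME} resolve to it.
    def expand_entry(name, source, build_dir:, variables:)
      output = File.join(build_dir, File.basename(source))
      variables[name] = output  # e.g. ${CS_ENTITLEMENTS} now points at the expanded file
      { rule: "expand_variables", input: source, output: output }
    end

Any step that then references the variable implicitly depends on the expanded file, which in turn depends on the source property list.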
Generated Ragel and Cap’n Proto sources are required by both Intel and Arm targets, but as the output is the same, we cache and re-use the result from these transformations.
This however meant that our heuristic for finding generated headers and adding them to the include path would fail when we receive cached results (as we skip the intermediate, identical steps).
We now return the intermediate steps from the cache, but mark them as duplicates, so that they can be stripped when generating the build file.
Not an ideal solution, but the real issue is identifying generated headers and getting them added to the include path.
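A rough sketch of the idea with made-up step data; the real step objects and cache interface differ:

    # Cached results keep their intermediate steps but are flagged as duplicates,
    # so the generated-header heuristic still sees them while the build-file
    # generator strips them.
    fresh_steps  = [{ rule: "cc",    output: "build/arm64/plist_parse.o" }]
    cached_steps = [{ rule: "ragel", output: "build/gen/plist_parse.cc" }]

    steps = fresh_steps + cached_steps.map { |s| s.merge(duplicate: true) }

    scanned_for_headers = steps                            # heuristic sees everything
    build_edges = steps.reject { |s| s[:duplicate] }       # never emitted to the build file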
Previously we used a heuristic: if the value looked like a path, we checked that it actually existed before treating it as a dependency.
The new approach makes the rules simpler, avoids a file system check, and also ensures that non-existing files *will* become dependencies.
The latter is useful whether the missing file is generated by local.ninja, the result of a typo, or an actually unmet dependency.
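Roughly, the new rule could be as simple as the following; what exactly counts as “looks like a path” is an assumption here:

    # Hypothetical: treat every whitespace-separated word containing a directory
    # separator as a dependency, without checking that it exists on disk.
    def dependencies_for(value)
      value.split.select { |word| word.include?("/") }
    end

    dependencies_for("-framework Cocoa ${dir}/Entitlements.plist")
    # => ["${dir}/Entitlements.plist"]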
Instead we just add one to the last number of the last version.
With the new build system, the version of the build is extracted from the release notes, so this is the only source for the version number.
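A sketch of the extraction; the file name and the exact format of the release notes are assumptions:

    # Hypothetical: take the newest version mentioned in the release notes and
    # bump its last component, e.g. "2.0.23" → "2.0.24".
    notes   = File.read("CHANGES.md")
    current = notes[/\d+(?:\.\d+)+/]            # first version-looking number
    parts   = current.split(".")
    parts[-1] = (parts[-1].to_i + 1).to_s
    next_version = parts.join(".")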
Previously we would automatically pick up an Info.plist file copied using any of the CP_* keys and either move it to the correct location (when it belonged to the target being built) or ignore it (when it was copied from an imported target).
To simplify the logic in the build system, it is however better to be explicit about this, not least because we might actually want an Info.plist file among our copied files.
The advantage is that with -o we do not need to write to a temporary file and then, only if expand_variables exits successfully, overwrite the output with that temporary file.
This is only possible because we know that expand_variables does not touch the output file unless it succeeds. Updating the output file and failing would cause a rebuild to continue, as if the output had been successfully built.
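A minimal sketch of the invariant we rely on (the real expand_variables may be structured differently): the output path is written only once the entire result has been produced.

    # Expand the whole input in memory; only touch the output file on success.
    def expand_to(input_path, output_path, variables)
      expanded = File.read(input_path).gsub(/\$\{(\w+)\}/) do
        variables.fetch(Regexp.last_match(1))   # raises (and aborts) on an unknown variable
      end
      File.write(output_path, expanded)         # reached only if expansion succeeded
    end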
Using /bin/cp with -p (preserve) appears to round down the modification date, so we can end up with a file in the build directory that appears older than the source.
We also remove -R since this command is never used with directories.
We previously did this for InfoPlist.vstrings files, but the changed extension was only a temporary workaround for not allowing multiple filters to run on a build input.
Previously we would only allow a filter when going from the source directory to the build directory: since a filter does not change the output’s base name, applying it within the build directory would cause multiple targets with the same output name.
Since we now add the generator’s name to the output path, this is no longer a problem.
For example, we have a “generator” that ensures our *.strings files are UTF-16; if this is applied to files already in the build directory, the output path will now include the generator’s class name, ensuring we do not create a target with identical input and output paths.
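A sketch of the path construction; the class name, its placement in the path, and the directory layout are guesses for illustration:

    # Hypothetical: when a filter's output would collide with its input, insert
    # the generator's class name as an extra path component.
    def output_path(input, build_dir, generator_class)
      candidate = File.join(build_dir, File.basename(input))
      return candidate unless candidate == input
      File.join(build_dir, generator_class.to_s, File.basename(input))
    end

    output_path("build/app/da.lproj/Localizable.strings", "build/app/da.lproj", "EnsureUTF16")
    # => "build/app/da.lproj/EnsureUTF16/Localizable.strings"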
Previously we would download and archive the default bundles as part of ./configure and place the result in our source directory. This however both polluted the source directory with generated files and required the ./configure step to do a partial build, since we need to build the ‘bl’ executable to download bundles.
The extension is converted from ‘vstrings’ to ‘strings’ and the file is saved as UTF-16. Ideally this would not introduce a new extension (and would instead rely on the existing ‘strings’ filter to convert to UTF-16), but see the previous commit for the technical limitation preventing this.
TARGET_NAME and YEAR are predefined variables.
This relates to the build system: transformations that do not change the file name are called filters; examples are converting UTF-8 to UTF-16 or converting a property list to its binary representation (without changing the extension).
This currently works when the filter is applied to a file in the source directory, as we write the result to the build directory; but if the input is already in the build directory, we would create a new output with the same path, resulting in a malformed build file (multiple targets generating the same file and/or cycles in the dependency graph).
As a workaround, we only allow filters to be applied to files in the source directory. But it would be nice to lift this limitation.
For example we may specify a transform for ‘.rl → .cc’ and another for ‘.mm.rl → .mm’. For a file named ‘fsm.mm.rl’ the build system would previously use whichever rule it saw first; now it will always pick the latter, as it matches more of the suffix.
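A sketch of the selection, assuming transforms are keyed by the suffix they match (the actual representation differs):

    TRANSFORMS = { ".rl" => ".cc", ".mm.rl" => ".mm" }   # hypothetical table

    # Pick the rule whose suffix matches the most of the file name.
    def transform_for(filename)
      TRANSFORMS.select { |suffix, _| filename.end_with?(suffix) }
                .max_by { |suffix, _| suffix.length }
    end

    transform_for("fsm.mm.rl")   # => [".mm.rl", ".mm"]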
The default for ragel is to generate compact and flexible code. This is not an issue for our current use case (parsing only comments and strings in ASCII property lists), but in the future we may need to tweak the output, as ragel will be used for other things.
I think it will prevent multiple tests from being run in parallel, but while writing tests we may produce a lot of output that should not be buffered by ninja.
A future improvement could be to only use the console pool for the tests of the current target, but that will require two different rules to run tests.
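For reference, a sketch of what the generator could emit; the rule name and command are illustrative, but ‘pool = console’ is ninja’s built-in pool that gives unbuffered (and serialized) output:

    # Hypothetical test rule bound to ninja's console pool.
    TEST_RULE = <<~NINJA
      rule run_test
        command = $in && touch $out
        pool = console
    NINJA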
This is because third party framework headers are likely to be installed in directories with many other headers that can clash with some of our custom frameworks.
Closes #1441.
We don’t care about the trim mode, as nobody is reading the generated HTML, but we have to supply something. Previously we enabled everything, which now gives an error, despite the documentation saying that trim mode can be “one or more of the following modifiers”:
% enables Ruby code processing for lines beginning with %
<> omit newline for lines starting with <% and ending in %>
> omit newline for lines ending in %>
- omit blank lines ending in -%>
It would seem that ‘<>’ is mutually exclusive with ‘-’, so now we pass ‘%-’ as the trim mode.
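Concretely, with Ruby 2.6 or later (on older versions the trim mode is a positional argument to ERB.new); the template path below is just an example:

    require "erb"

    template = File.read("release_notes.html.erb")
    html = ERB.new(template, trim_mode: "%-").result(binding)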
This is because we ensure that each target copied to another target gets signed before we copy it.
We initially used ‘--deep’ but that never worked fully, as it didn’t find all executables in our bundle; presumably only embedded bundles like frameworks and plug-ins were found and signed.
This is because we rely on `-X` (skip extended attributes / resource forks) which is only available with Apple’s version of `cp`, and it is not unlikely that the user has GNU’s version of `cp` available via PATH.
This is to avoid redundancy: ninja_required_version is hardcoded in the generator script, and builddir is already passed via the --build-directory/-C option (and explicitly exported as a variable in the generator script).