Using gyb with benchmarks


(Pavol Vaskovic) #1

Hi!

I’ve been trying to use gyb to simplify maintenance of some tests in benchmark/single-source/.

I have created DropLast.swift.gyb, but it appears this approach is not currently supported by the build system. Doing a clean build, I got the following error:

+ /usr/local/bin/cmake --build /Users/mondo/Developer/swift-source/build/Ninja-ReleaseAssert/swift-macosx-x86_64 -- -j2 all swift-test-stdlib-macosx-x86_64 swift-benchmark-macosx-x86_64
ninja: error: '/Users/mondo/Developer/swift-source/swift/benchmark/single-source/DropLast.swift', needed by 'benchmark/Onone-x86_64-apple-macosx10.9/DropLast.o', missing and no known rule to make it
./swift/utils/build-script: fatal error: command terminated with a non-zero exit status 1, aborting

How do I make benchmarks work with gyb?

Best regards
Pavol Vaskovic


(Michael Gottesman) #2

On Apr 6, 2017, at 6:16 AM, Pavol Vaskovic via swift-dev <swift-dev@swift.org> wrote:

> Hi!
>
> I’ve been trying to use gyb to simplify maintenance of some tests in benchmark/single-source/.
>
> I have created DropLast.swift.gyb, but it appears this approach is not currently supported by the build system. Doing a clean build, I got the following error:
>
> + /usr/local/bin/cmake --build /Users/mondo/Developer/swift-source/build/Ninja-ReleaseAssert/swift-macosx-x86_64 -- -j2 all swift-test-stdlib-macosx-x86_64 swift-benchmark-macosx-x86_64
> ninja: error: '/Users/mondo/Developer/swift-source/swift/benchmark/single-source/DropLast.swift', needed by 'benchmark/Onone-x86_64-apple-macosx10.9/DropLast.o', missing and no known rule to make it
> ./swift/utils/build-script: fatal error: command terminated with a non-zero exit status 1, aborting
>
> How do I make benchmarks work with gyb?

Please do not do this. We have been talking about switching the benchmarks to use swiftpm instead of our own custom cmake goop. swiftpm does not support using custom things like gyb.

Michael




(Pavol Vaskovic) #3

>
> ...
> How do I make benchmarks work with gyb?

Upon further inspection, one needs to regenerate the harness _after_ modifying the gyb anyway, so manually invoking gyb works fine for now, given that [FILE].swift.gyb is ignored by the benchmarking and compilation machinery.
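
(For clarity, by "manually invoking gyb" I just mean running the utility directly from the source root, roughly like this; the exact path and flags are only illustrative:

    ./swift/utils/gyb --line-directive '' -o ./swift/benchmark/single-source/DropLast.swift ./swift/benchmark/single-source/DropLast.swift.gyb
)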

Is it so bad to add .gyb templates alongside the .swift sources and generate the boilerplate as a first step when generating the harness?
  

On Thursday, 6 April 2017 at 19:44, Michael Gottesman wrote:

> Please do not do this. We have been talking about switching the benchmarks to use swiftpm instead of our own custom cmake goop. swiftpm does not support using custom things like gyb.

What’s the motivation here? I’m guessing GYB will not be removed from other parts of the project… The benchmark files for sequence operations I’ve been looking at are ripe for templating, reducing the possibility of accidental errors when adding new variations.

I’m about to add coverage for a lot more sequence operations, as their current performance is horrible. These are almost identical, varying only by the concrete type of sequence/collection tested, plus lazy variants. Being able to automate this seems vital to me.
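
To make the idea concrete, the kind of template I have in mind would look roughly like this (a simplified, hypothetical sketch, not the actual DropLast.swift.gyb; the type list and workload are placeholders):

    %{
       # Hypothetical list of variants to generate; the real template would
       # enumerate the concrete sequence/collection types under test.
       Sequences = ['Array', 'AnySequence', 'AnyCollection']
    }%
    % for Seq in Sequences:
    @inline(never)
    public func run_DropLast${Seq}(_ N: Int) {
      let s = ${Seq}(0..<10_000)
      var result = 0
      for _ in 1...20 * N {
        for element in s.dropLast(1_000) {
          result += element
        }
      }
      // Keep `result` alive so the work is not optimized away;
      // a real benchmark would also verify the value.
      precondition(result > 0)
    }
    % end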

Best regards
Pavol Vaskovic



(Luke Larson) #4

On Apr 6, 2017, at 1:14 PM, Pavol Vaskovic via swift-dev <swift-dev@swift.org> wrote:

> > ...
> > How do I make benchmarks work with gyb?
>
> Upon further inspection, one needs to regenerate the harness _after_ modifying the gyb anyway, so manually invoking gyb works fine for now, given that [FILE].swift.gyb is ignored by the benchmarking and compilation machinery.
>
> Is it so bad to add .gyb templates alongside the .swift sources and generate the boilerplate as a first step when generating the harness?
>
> > Please do not do this. We have been talking about switching the benchmarks to use swiftpm instead of our own custom cmake goop. swiftpm does not support using custom things like gyb.
>
> What’s the motivation here? I’m guessing GYB will not be removed from other parts of the project… The benchmark files for sequence operations I’ve been looking at are ripe for templating, reducing the possibility of accidental errors when adding new variations.

One of the goals for the benchmark suite is that it not depend on anything outside the benchmark folder. It works with the Swift build system but doesn’t require it, so someone can distribute the benchmark folder as a standalone entity and use it without having to run the Swift build system.

I’d recommend using a scheme like the one we use for generating the harness files.

Luke




(Michael Gottesman) #5

On Apr 6, 2017, at 1:14 PM, Pavol Vaskovic <pali@pali.sk> wrote:

> > ...
> > How do I make benchmarks work with gyb?
>
> Upon further inspection, one needs to regenerate the harness _after_ modifying the gyb anyway, so manually invoking gyb works fine for now, given that [FILE].swift.gyb is ignored by the benchmarking and compilation machinery.
>
> Is it so bad to add .gyb templates alongside the .swift sources and generate the boilerplate as a first step when generating the harness?

I think in the short term this is fine. And even if we switch to swiftpm, we could use this same gyb approach. I do have one request, though. My overall concern with the way we generate the harness today is that people sometimes do not know about it and do not regenerate the file when they need to.

Do you think you could add a test to the validation suite that locally generates the gyb/harness files and performs a diff? This will ensure that at least the bots will catch it if someone forgets to run the update.

Specifically, if you look in ./validation-test/Python/, you will see a test called bug-reducer.test-sh.

I would create a separate folder called ./validation-test/benchmarks and, in that folder, create a file called "generate-harness.test-sh". This would just be a shell script along the lines of bug-reducer.test-sh that generates the harness/gyb files in a temp directory and makes sure that the diff against what is checked into the tree is empty. Then at least we will know if someone forgets to regenerate the harness or gyb files.
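
Roughly, I am imagining something along these lines (only a sketch: the lit substitutions, the copy-then-diff approach, and the generator path are illustrative assumptions, not the exact script):

    # generate-harness.test-sh (hypothetical sketch)
    # Copy the benchmark tree aside, regenerate in the copy, then require
    # the result to match what is checked into the tree.
    # RUN: rm -rf %t && mkdir -p %t
    # RUN: cp -r %S/../../benchmark %t/benchmark
    # RUN: %{python} %t/benchmark/scripts/generate_harness/generate_harness.py
    # RUN: diff -ru %S/../../benchmark %t/benchmark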

Michael
