PythonKit adding spectral_norm

I'm using PythonKit to load and run a PyTorch model. I had it working well, but the latest version of my trained model adds spectral normalization and I'm not sure how to add that step from Swift. In Python, I followed a process of defining a simple function to "apply" it:

import torch
import torch.nn as nn

def add_sn(m):
    # Wrap conv layers with spectral normalization; leave everything else untouched
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
        return torch.nn.utils.spectral_norm(m)
    else:
        return m

Then I call that on the model with: generator.apply(add_sn). (apply ignores the function's return value, but this still works because spectral_norm modifies each conv module in place.)

Any tips on how I might go about doing this in PythonKit?

What I've tried is:

import PythonKit

let torch = Python.import("torch")
let myModel = Python.import("myModel")

let myGenerator = myModel.Generator_CNN(10, 350)
myGenerator.apply(torch.nn.utils.spectral_norm(myGenerator))

but it fails with Fatal error: 'try!' expression unexpectedly raised an error: Python exception: ('weight',) ...
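In hindsight, I suspect the real problem is that this calls spectral_norm on the whole Generator immediately (and the top-level module has no weight parameter of its own, hence the ('weight',) exception), whereas apply expects a callable to invoke on each submodule. Here's a minimal sketch of what should mirror the Python version, assuming add_sn is defined at module level in myModel:

import PythonKit

let torch = Python.import("torch")
let myModel = Python.import("myModel")

let myGenerator = myModel.Generator_CNN(10, 350)
// Pass the Python function object itself; apply calls it on every
// submodule, and add_sn wraps only the conv layers in place.
myGenerator.apply(myModel.add_sn)

I believe newer PythonKit versions also offer a PythonFunction type for wrapping a Swift closure as a Python callable, if you'd rather keep the filtering logic on the Swift side.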

And yes, I did add spectral norm to my Generator; it was a tip I found in a blog post. Since I did a lot of hacking on the model to get it to converge (without mode collapse), I'm going to try training again without spectral norm, in case that works fine and simplifies my life!

Okay, I wound up adding spectral norm to my model definitions directly, so there was no problem loading after that. Also, I had stupidly forgotten to copy the revised Python files from my Ubuntu box over to my macOS machine after tweaking the architecture, so Python was trying to load the new checkpoints into the old architecture... gulp...
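For reference, with spectral norm applied inside the model's __init__ on the Python side, the PythonKit side reduces to plain loading again. A minimal sketch; the checkpoint filename here is a made-up placeholder:

import PythonKit

let torch = Python.import("torch")
let myModel = Python.import("myModel")

// Spectral norm now lives inside Generator_CNN's __init__, so the
// checkpoint's parametrized weights (weight_orig, weight_u, ...)
// match the module and load_state_dict needs no extra steps.
let myGenerator = myModel.Generator_CNN(10, 350)
let state = torch.load("generator.pt", map_location: "cpu")  // hypothetical path
myGenerator.load_state_dict(state)
myGenerator.eval()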