Just started to work on swift-mujoco

I've started to work on swift-mujoco. It is a Swift binding for the MuJoCo library (https://mujoco.org).

So far it is very light: I only have the interface to render simple models, such as ant.xml from OpenAI Gym. I plan to improve this by auto-translating more of the APIs over.

If you are interested in this and want to share ideas or collaborate, let me know.

3 Likes

I haven't updated this thread in a while. Since the original post, swift-mujoco has had two releases and now matches the main MuJoCo library's release cycle (currently on 2.2.1). The APIs are all documented (the auto-generator transfers the comments over) at: https://liuliu.github.io/swift-mujoco/

3 Likes

Looking through the MuJoCo documentation, this is really awesome! I once spent an entire summer working on a physics project, MultiPendulum. It has since been translated between multiple languages (Java -> Swift -> JavaScript -> Swift) and evolved into an AR app, but the core O(n^4) linear equation system solver is the same. That project taught me that double precision is pretty much the only viable precision for physics, and spurred some failed attempts to emulate FP64 on an Apple GPU. I will try again in the future with full force, going for full IEEE compliance, using SoftFloat as a reference.

Anyway, I'm happy to see that MuJoCo has similar performance on all platforms (since it's CPU-only) and didn't include CUDA-exclusive GPU acceleration (DeepMind is an AI company, so they might do that). This is an awesome addition to Swift's capabilities as a numerical computing language, and I hope it continues to stay up to date! @liuliu have you gotten the OpenGL views rendering in Jupyter, or is that too far-fetched?

Yes, it works; I wrote about it here: Fun with Motion-JPEG, Jupyter Notebook and How to Stream GUI and Animations in the Browser. There are some more involved dependencies before this can go into swift-mujoco directly (mostly, I need to use the libjpeg wrapper from s4nnc / ccv; the performance of SwiftJPEG is not ideal at the moment).

Now I see why you opened this issue in Swift-Colab. I'm very acquainted with the concepts of streaming resource-intensive stuff in a real-time context - that's the focal point of my in-progress Metal backend for Swift for TensorFlow, and why it will be so fast.

I look forward to creating a video streaming/interactive gaming* feature in Colab and using MuJoCo in one of my test notebooks. It will be hard to implement inside the heavily restricted Google Python 3 Compute Engine, but I've found creative workarounds for software restrictions before. One was colorized text output for compiler errors, which was impossible in Colab. So in v2.1, I parsed plain text and reconstructed the text colorization from scratch. Perhaps the next minor release can incorporate streaming, alongside support for compiling s4tf/models.

*Your blog's interactive demo showed a lot of unrealized potential here. Someone could write an interactive 3D app as a Swift package and make a cloud gaming experience with Swift-Colab. It would use Google Drive to store level progress independently of the source code, and require GPU runtimes which support OpenGL.

1 Like

MJPEG is great for a local network (and very easy to implement), but it unnecessarily consumes a lot of bandwidth when used over the internet. We should probably switch to a more capable codec for that (VP9 is pretty easy to embed these days, and should be fast enough).
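For reference, the whole MJPEG "protocol" is just a never-ending multipart HTTP response, independent of whichever networking library writes the bytes. A rough sketch of the framing (the boundary string is arbitrary):

```swift
import Foundation

// The boundary string is arbitrary; it just has to match the Content-Type header.
let boundary = "mjpegframe"

// Response header, sent once when the client connects.
let responseHeader = """
HTTP/1.1 200 OK\r
Content-Type: multipart/x-mixed-replace; boundary=\(boundary)\r
Cache-Control: no-cache\r
\r

"""

/// Wrap one encoded JPEG frame in its multipart framing.
func framePart(jpeg: Data) -> Data {
    var part = Data("""
    --\(boundary)\r
    Content-Type: image/jpeg\r
    Content-Length: \(jpeg.count)\r
    \r

    """.utf8)
    part.append(jpeg)
    part.append(Data("\r\n".utf8))
    return part
}

// Write `responseHeader` once, then `framePart(jpeg:)` for every rendered frame;
// the browser keeps replacing the image as new parts arrive.
```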

I saw that in your blog, the code for streaming video lives in external libraries. I'm wondering if it's possible to facilitate the entire video experience inside the main Jupyter kernel. That means minimal external dependencies, and possibly avoiding a dependency on swift-nio. In short, I'm asking:

Can the entire streaming experience, from networking to video encoding, happen in Swift files built from source inside the notebook? If not, can I implement each specific component using common Python libraries or Ubuntu system libraries? Google Colab has a plethora of external dependencies pre-installed, which could offer the functionality we need.

You can launch any Colab notebook and look at the installed libraries. Go to Files > /usr/local/lib/python3.7/dist-packages, and browse through the dependencies. We can have two different implementations of the streaming experience, one being SwiftNIO for local notebooks, the other based on Python libs for cloud notebooks.

[Screenshot: Colab's file browser showing the /usr/local/lib/python3.7/dist-packages directory]

For example, jpeg4py (v0.1.4) might allow for JPEG encoding in the motion-JPEG streaming format. PythonKit is built as part of the Jupyter kernel, so I can access this Python dependency with negligible overhead inside the kernel. Motion-JPEG seems more robust than other codecs that apply temporal compression, simpler to implement, and more flexible. Perhaps later, I can upgrade to a more complex codec.
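As a rough sketch of what the PythonKit glue could look like (I used PIL's encoder here because I'm more certain of its API; numpy and Pillow are both preinstalled on Colab):

```swift
import Foundation
import PythonKit

// Assumes numpy and Pillow are preinstalled in the runtime (they are on Colab).
let np = Python.import("numpy")
let PILImage = Python.import("PIL.Image")
let io = Python.import("io")

/// Encode one raw RGB frame to JPEG bytes by calling into Python.
/// `pixels` is row-major RGB data, `width * height * 3` bytes long.
func encodeJPEG(pixels: [UInt8], width: Int, height: Int, quality: Int = 80) -> Data {
    // Hand the pixel buffer to numpy as a flat array, then reshape to (H, W, 3).
    let flat = np.array(pixels.map { Int($0) }, dtype: np.uint8)
    let image = PILImage.fromarray(flat.reshape([height, width, 3]))
    // Encode into an in-memory buffer instead of a file.
    let buffer = io.BytesIO()
    image.save(buffer, format: "JPEG", quality: quality)
    let bytes = [Int](buffer.getvalue()) ?? []
    return Data(bytes.map { UInt8($0) })
}
```

If jpeg4py turns out to have an encode path, swapping it in here is just a different Python.import.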

Yes, Python should be "batteries-included" for this: Simple Python Motion Jpeg (mjpeg server) from webcam. Using: OpenCV, BaseHTTPServer · GitHub. I think http.server is included in Python, and it looks like PIL is already installed on Colab.

Would it be possible to write a tutorial notebook for MuJoCo that includes one or more interactive 3D experiences? I would like to add the tutorial to Swift-Colab's README. I'm already planning to add one for OpenCL (scroll to the bottom of the "Swift Tutorials" README section), and I want to expand upon the small set of tutorials Google created while sponsoring S4TF.

Yes, that is possible. There are a few things that need to happen, though: I need to make @taylorswift's SwiftJPEG work at a reasonable speed and then move HTTPRenderServer back into the swift-mujoco library.

Also, I really need to focus on what I plan to do with these tools in the next few days; I've spent too much time on the tools themselves over the past few weeks :slight_smile: (I recently also enhanced TensorBoard for Swift: https://liuliu.github.io/s4nnc/documentation/tensorboard/summarywriter)

1 Like

Perhaps we can brainstorm ideas in the meantime, maybe getting community input. For starters, it needs a short but descriptive name. I wanted to include "3D" in the name but avoid any prepositions before "3D". That would be too similar to "with OpenCL" in the other planned tutorial. What about "Simulating 3D Physics"?

Instead of integrating video streaming into the Jupyter kernel, showcasing it in a tutorial notebook is a better use of my time. The tutorial could import a Swift package that does encoding and networking. It would also default to a GPU-accelerated runtime for rendering.

hi Liuliu, thank you for your interest in SwiftJPEG! i’ve opened an issue to track improvements in the library’s performance, and i will set aside some time to look into this soon.

the goal of the v1 release was to implement the full JPEG specification and pass all the test suites, and unlike the PNG library, i never got around to optimizing the implementation to parity with its C-language counterpart. this was largely because i didn’t think there was all that much interest in pure-swift JPEG decoding, and i wasn’t dogfooding it as much as i was with the PNG library.

i also wrote the library about two years ago, and i know a lot more about high-performance swift now than i did back then, so i’m pretty confident the performance of the library can be improved significantly from where it’s at right now. to paraphrase a compiler dev from some years back, there are a lot of improvements to be made before we even get to the low-hanging fruit.


update: i’ve also drafted a plan to modernize swift-png’s benchmarking system, which if successful, could be applied to the JPEG library and greatly facilitate performance improvements there as well.

1 Like

I get that an improved SwiftJPEG would be good for swift-mujoco, but we can at least draft the tutorial before SwiftJPEG is finished. @liuliu's video rendering package can call into a C or Python library to encode JPEG - it already encodes JPEG images, as showcased in the blog. Once SwiftJPEG has the necessary optimizations, someone can modify the internal code of the packages that the tutorial imports. That's of no concern to the tutorial user.*

*It may actually make performance worse to write the JPEG encoder entirely in a dependency built from source. In Colab, everything compiles from sources in the notebook. It's much quicker to compile in debug mode, and users may get impatient or quit if builds take too long. What is the performance delta of SwiftJPEG compiled in debug vs. release mode? If we instead use a pre-installed Python library just for the tutorial, we get fully optimized performance while still compiling the package quickly in debug mode.

Given the current state of swift-mujoco, there has to be something we can already do. For example, can I call into any of the C functions from Swift? Can I initialize a physics body and move it 10 meters, then print its position to the console? I assume the answers to all of these questions are "yes".
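For example, here is roughly what I have in mind, going straight through the C API. I'm guessing at the module name (CMuJoCo below), but the mj_* functions themselves are from the MuJoCo C reference:

```swift
import CMuJoCo  // placeholder name for the vendored C module; check swift-mujoco's manifest

// Load a model, simulate one second of physics, and print the root body's position.
var error = [CChar](repeating: 0, count: 1024)
guard let model = mj_loadXML("ant.xml", nil, &error, Int32(error.count)) else {
    fatalError("Failed to load model: \(String(cString: error))")
}
defer { mj_deleteModel(model) }

guard let data = mj_makeData(model) else {
    fatalError("Failed to allocate mjData")
}
defer { mj_deleteData(data) }

// Step for one simulated second using the model's own timestep.
let steps = Int(1.0 / model.pointee.opt.timestep)
for _ in 0..<steps {
    mj_step(model, data)
}

// For a floating-base model like ant.xml, qpos[0...2] is the free joint's world position.
let qpos = data.pointee.qpos!
print("root position:", qpos[0], qpos[1], qpos[2])
```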

Side Note: I've made a 6-hour tutorial series for ARHeadsetKit, so I'm experienced with writing tutorials. I can get this rolling pretty fast; I just don't have expertise in MuJoCo.

Right, swift-mujoco is fully capable of simulating and rendering 3D physics (on macOS or Linux, with GLFW installed) in Swift today, with no need to call into any C functions. I was simply commenting on the "interactive notebook experience". It should be possible to simply run swift repl from the package and start coding just fine today (a roundabout way of saying it should also work fine in an Xcode Playground).

I didn't look too deeply into what you mean by tutorials. I thought you were curating a list of tutorials that people can run directly from Colab with Swift? If so, doing it interactively requires HTTPRenderServer to be re-implemented with PythonKit or SwiftJPEG. Otherwise we can only render statically or print out the simulated results. Or does Colab support remote desktop?

A tutorial is a Colab notebook; a tutorial notebook. Google made a few notebooks to teach people how to use Swift and S4TF. I also referred to my experience making DocC tutorials for ARHeadsetKit, which are a different kind of "tutorial". DocC tutorials are similar to Apple's online SwiftUI course, while "tutorial notebooks" are Jupyter notebooks.

I'm confused by what you mean by "interactively". Your blog showcased an interactive 3D simulation inside a Jupyter notebook experience. I read "interactively" to mean either of these:

  • The 3D simulation can run at all. Your current networking setup might work with local Jupyter notebooks, but not with cloud Colab notebooks. The hard-coded ports or URLs might not work in Colab; we would need different ports/URLs for the Colab environment in order to connect over HTTP.
  • The external dependencies compile quickly. Your demo relied on swift-nio, and you might mean that swift-nio takes way too long to compile in Colab. Thus, we should swap the dependency out for a Python library already installed in the Colab runtime. Same concept with SwiftJPEG vs. jpeg4py.

If it's the first interpretation, I am okay with the 3D simulation experience being broken at the moment. The tutorial has multiple parts. Some are 3D simulations, and others either "render [static images] or print out the simulated results". In the future, we will get the 3D simulation working and complete the rest of the tutorial.

Yeah, "interactively" I meant the 3D rendering / panning. If we forgo that, it can be compiled without SwiftNIO (it only requires C MuJoCo, in which it is self-contained (everything is vendored)).

would it help if we had binary distributions of swift-jpeg?

That would be awesome, especially if those distributions were compiled in release mode. However, SwiftPM currently only supports binary distributions on Apple platforms. Perhaps you could ship a binary distribution on its own, and I could find a way to integrate it into the notebook. I could download it with %system, then inject the .swiftmodule into a dependent library using some new magic commands coming in Swift-Colab v2.3. You would have to provide an option to compile the external package with swift-jpeg as an injected (rather than explicit) dependency. More on that here. That option could be enabled with an environment variable, and turned off by default.
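For example, the manifest of the rendering package could gate the dependency on an environment variable, so by default it resolves swift-jpeg from source and only skips it when the notebook injects the prebuilt module. A rough sketch (the package name, URL, and product name are placeholders):

```swift
// swift-tools-version:5.6
import PackageDescription
import Foundation

// Off by default: only skip the source dependency when the notebook asks for it.
let useInjectedJPEG = ProcessInfo.processInfo.environment["SWIFT_JPEG_INJECTED"] == "1"

var dependencies: [Package.Dependency] = []
var renderServerDependencies: [Target.Dependency] = []
if !useInjectedJPEG {
    // Placeholder URL and product name for the swift-jpeg package.
    dependencies.append(.package(url: "https://github.com/tayloraswift/jpeg", from: "1.0.0"))
    renderServerDependencies.append(.product(name: "JPEG", package: "jpeg"))
}

let package = Package(
    name: "HTTPRenderServer",
    dependencies: dependencies,
    targets: [
        .target(name: "HTTPRenderServer", dependencies: renderServerDependencies),
    ]
)
```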

1 Like

i don’t know how to do this, but if binary products would help you set up this workflow, i would be happy to look into generating such artifacts on the swift-jpeg side :slight_smile:

I'd like to put the self-flipping Tippe Top mentioned in this article into the tutorial. I've always been fascinated by the mechanics of spinning objects. I once taped a bunch of CR2025 batteries to a tiny motor and watched it act so weirdly because the constant influx of electrical energy counteracted entropy/friction.

Also, a 3D version of MultiPendulum that spins while swinging back and forth. Would MuJoCo handle it well if we linked a chain of 100 or 200 pendulums? The simulator uses damping, which makes it easier to implement and less computationally intensive. MultiPendulum was special in that it didn't have damping (unlike the several not-so-legit copies on the internet), letting the simulation run indefinitely.
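Generating the MJCF for a chain like that is easy enough to script; a rough sketch (the geometry, masses, and joint setup are just placeholders, and damping defaults to zero to match the original MultiPendulum):

```swift
// Build an MJCF string for an n-link pendulum chain: each link is a capsule
// hanging from a hinge joint, nested inside its parent link.
func pendulumChainMJCF(links: Int, linkLength: Double = 0.1, damping: Double = 0.0) -> String {
  func link(_ remaining: Int) -> String {
    guard remaining > 0 else { return "" }
    return """
      <body pos="0 0 \(-linkLength)">
        <joint type="hinge" axis="0 1 0" damping="\(damping)"/>
        <geom type="capsule" fromto="0 0 0 0 0 \(-linkLength)" size="0.005" mass="0.05"/>
        \(link(remaining - 1))
      </body>
      """
  }
  return """
    <mujoco model="pendulum-chain">
      <option timestep="0.001" gravity="0 0 -9.81"/>
      <worldbody>
        <body pos="0 0 1">
          <joint type="hinge" axis="0 1 0" damping="\(damping)"/>
          <geom type="capsule" fromto="0 0 0 0 0 \(-linkLength)" size="0.005" mass="0.05"/>
          \(link(links - 1))
        </body>
      </worldbody>
    </mujoco>
    """
}

// A 200-link chain is just pendulumChainMJCF(links: 200) fed to the XML loader.
```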

We have to get the Dzhanibekov effect in there too. I thought the Veritasium video on it was really fascinating.


Update: I revamped the Swift Tutorials section of Swift-Colab's README, adding dedicated sections for notebooks not made by Google. I settled on the name "Simulating 3D Physics" for this tutorial, unless someone has an alternative idea.