No! Why do you think us Julia fans are so insufferable and cannot stop ranting and raving about this language? 😆

You don't have to change the code!!! It is exactly the same code regardless of what you run on. This is why Julia kicks ass!

Just a clarification. Of course you need to put your arrays onto the GPU. But you can take a model you have tested and gotten working on the CPU and do just one function call to put the whole thing on the GPU. Then you can run the whole model through training just like normal. The bulk of your code is entirely unchanged.

You can see some examples here: https://fluxml.ai/Flux.jl/stable/gpu/
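To make "one function call" concrete, here is a rough sketch of the pattern the Flux docs describe (the model and input here are made up for illustration; this assumes you have a CUDA-capable GPU and the CUDA.jl package installed):

```julia
using Flux, CUDA

# Build and test a small model on the CPU first
model = Chain(Dense(10 => 5, relu), Dense(5 => 2))
x = rand(Float32, 10)

# One call moves the whole model (all its parameters) to the GPU
gpu_model = gpu(model)   # equivalently: model |> gpu

# The data must be moved too; the forward pass itself is unchanged
gpu_x = gpu(x)
y = gpu_model(gpu_x)
```

The nice part is that `gpu` is a no-op when no GPU is available, so the same script runs on both kinds of machines.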

As for deployment: I am personally still in the process of learning machine learning myself, so I don't deploy models. You would have to clarify to me what you see as the problem.

For instance, what would be the problem with deploying a Julia Flux model in a Docker container on AWS? Not sure if this would answer your question: https://discourse.julialang.org/t/how-to-deploy-a-machine-learning-model-as-a-rest-endpoint/18638
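For what it's worth, wrapping a Flux model in a REST endpoint could look roughly like this. This is only a sketch along the lines of that Discourse thread, using HTTP.jl and JSON3.jl; the model, the JSON shape, and the port are all made up:

```julia
using Flux, HTTP, JSON3

# Hypothetical model; in practice you would load a trained one from disk
model = Chain(Dense(4 => 8, relu), Dense(8 => 2), softmax)

function handler(req::HTTP.Request)
    # Expect a JSON body like {"features": [0.1, 0.2, 0.3, 0.4]}
    body = JSON3.read(req.body)
    x = Float32.(collect(body.features))
    y = model(x)
    return HTTP.Response(200, JSON3.write((prediction = y,)))
end

# Serve on port 8080; a Docker container would just expose this port
HTTP.serve(handler, "0.0.0.0", 8080)
```

Since this is all plain Julia, putting it in a Docker image on AWS should be no different from containerizing any other Julia program.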

Written by

Geek dad, living in Oslo, Norway with passion for UX, Julia programming, science, teaching, reading and writing.
