Consuming Machine Learning models in Haxe?

Does anyone know of, or have experience with, consuming machine learning models in Haxe?
Ideally I would like the same functionality across multiple targets.

So far I only have some experience with ML.NET. There you end up with a model.zip (or you can use someone else’s model file) and you consume it like so:

// C#
var mlContext = new MLContext();

// Load the saved model and its input schema
DataViewSchema predictionPipelineSchema;
ITransformer predictionPipeline = mlContext.Model.Load("./mymodel.zip", out predictionPipelineSchema);

// Wrap the pipeline in an engine for single predictions
PredictionEngine<Data, OutputData> predictionEngine = mlContext.Model.CreatePredictionEngine<Data, OutputData>(predictionPipeline);

var output = predictionEngine.Predict(new Data { low = new float[] { 0.1f, 0.1f }, original = new float[] { 2f, 2f } });

Console.WriteLine("prediction: " + output.predicted + " with score: " + output.score);

Right now I am looking to use a GloVe model, but it would be great to hear about any experience consuming machine learning models in Haxe.

I will probably use a PyTorch model instead and try to consume it in Haxe via the hxcpp target: Loading a TorchScript Model in C++ — PyTorch Tutorials 1.9.1+cu102 documentation

I learned about ONNX, which is a cross-platform initiative for consuming ML models; it also seems to include some pretrained models originating from PyTorch.
It uses a runtime said to be as small as 7.1973 MB in this article: Introducing ONNX Runtime mobile – a reduced size, high performance package for edge devices - Microsoft Open Source Blog
The runtime is available on pretty much any platform: Install ORT - onnxruntime
