Out of curiosity, why does Haxe transform function types into dynamics after compilation?
What are the performance implications of doing so?
Will have to bump this, following my ticket here: Function types being converted to dynamic types · Issue #997 · HaxeFoundation/hxcpp · GitHub. It does not seem to be taken seriously as an issue, in spite of the fact that it can cause major stuttering during gameplay because the GC performs unnecessary collections at runtime:
Using HxScout showed a large object footprint when using function types for callbacks, i.e. var func : Float -> Void; and then calling func(3.0). A "Float" object was created each frame which then needed to be garbage collected. This incurs a big cost at runtime because any time a callback is required, the garbage collector is clearing up potentially thousands of objects that have seemingly been generated for no reason.
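Roughly the pattern in question, boiled down (the class and field names here are just illustrative, not from my actual project):

class Game {
    // Callback stored via a function type; on the cpp target this currently
    // compiles down to a Dynamic-style closure call.
    var onUpdate : Float -> Void;

    public function new() {
        onUpdate = function(dt:Float) { /* per-frame work */ };
    }

    public function frame() {
        // Called once per frame: each call appears to box the Float argument,
        // which is what shows up in HxScout as garbage to collect.
        onUpdate(1.0 / 60.0);
    }

    static function main() {
        var game = new Game();
        for (i in 0...60) game.frame();
    }
}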
Does anyone know why this is the case?
The short answer is that Float -> Void could mean a lot of things:
function main() {
    var f:Float->Void;
    f = function(i:Null<Float>) {};
    f(3.0);
    f = function(d:Dynamic) {};
    f(3.0);
}
So in the general case, the call-site doesn't know what it's actually calling and has to be conservative.
Hi Simn. Many thanks for answering, I appreciate it.
I don't understand why it would need to be conservative in this case. In the examples you just listed, surely your function argument would change as per your use case:
var f : Null<Float> -> Void;
or
var f : Dynamic -> Void;
My understanding is that because it gets transformed to Dynamic at compile time (despite the fact it should know the type as given), an inference has to be made at runtime that leads to the creation of a "Float" object, hence the sudden explosion of objects in the game loop.
Why not just enforce an additional type safety check on the function parameter types?
I wanted to make a simple example, but I suppose it's a bit too simple. Maybe this is better:
function invoke(f:Float->Void) {
    f(3.0);
}

function main() {
    invoke(function(i:Null<Float>) {});
    invoke(function(d:Dynamic) {});
}
The call-site has no idea what it's actually calling at compile-time.
Based on the OP's initial issue, I would think that if you provide specific types, the Haxe compiler shouldn't make assumptions and should instead generate the necessary C/C++-style callback as per the callback defined in Haxe.
If there is a Dynamic type being supplied in a callback definition in Haxe, then assumptions have to be made, and the argument about the "call-site having no idea what it's actually calling" makes sense. However, I don't agree that assumptions should be made every time, in all circumstances.
If you type f:Float -> Void, you would expect a callback function of a similar type to be generated in the output, since the types are known at compile time. So, you should see void (*SomeCallback)(float) in the C++ output. Obviously that's a C-style callback (I don't use C++ often), but I think the Haxe compiler should at least infer that if specific types are given, no assumptions need to be made about the call-site.
If the values given do not work, I think that's the programmer's fault, not the fault of Haxe.
For clarification: why would a Float be created on each call when using the function type? Is it because of dynamics, or because of the casting from a C++ float to a Haxe Float?
As in tienery's reply, why can't the compiler avoid making assumptions and simply generate function pointers (templates/callables, etc.) when it sees a function expression? Maybe for cases like this:
public var f : Float->Void;

public function new() {
    this.f = func1;
    this.f(3.0);
    this.f = func2;
    this.f(3.0);
}

private function func1(value : Float) : Void { }
private function func2(value : Float) : Void { }
For background, I stumbled across this issue in two places:
If Float->Void doesn't evaluate to anything better, you can do the following (cpp target):
import haxe.Timer;

class Main {
    static function func1(value : Float) : Void { }
    static function func2(value : Float) : Void { }

    static function main() {
        var t;

        t = Timer.stamp();
        for (i in 0...1000000) {
            var f:Float->Void = func1;
            f(3.0);
            f = func2;
            f(3.0);
        }
        t = Timer.stamp() - t;
        trace(t);

        t = Timer.stamp();
        for (i in 0...1000000) {
            var fs = cpp.Function.fromStaticFunction(func1);
            fs(3.0);
            fs = cpp.Function.fromStaticFunction(func2);
            fs(3.0);
        }
        t = Timer.stamp() - t;
        trace(t);
    }
}
Main.hx:18: 0.0541986
Main.hx:29: 0.0039293
Approx. 13 times faster...
Hi there, thanks.
The issue with this approach is that it doesn't work if I am working with anything other than static functions. It also removes the ability to write platform-agnostic code. I'd have to hide this away in an abstraction, which I wouldn't want to do.
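For example (a rough sketch; the class and function names are just illustrative), the fast path only exists behind a cpp check and only for static functions:

class Callbacks {
    static function onValueStatic(v:Float):Void {}
    function onValue(v:Float):Void {}

    public function new() {
        #if cpp
        // Only static functions are accepted here, and this branch is
        // cpp-only, so every other target needs a different code path.
        var fast = cpp.Function.fromStaticFunction(onValueStatic);
        fast(3.0);
        #end

        // The portable version falls back to the function type,
        // with the boxing cost discussed above.
        var slow:Float->Void = onValue;
        slow(3.0);
    }

    static function main() {
        new Callbacks();
    }
}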
If you don't want abstracts (the most powerful runtime Haxe feature), you can always make a macro (the most powerful compile-time feature).
Abstracts are a compile-time feature as well.
Only if inlined; otherwise static functions (especially @:from) are used at runtime. Not to mention what happens if the underlying type is Dynamic.
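A quick sketch of what I mean (illustrative only, not from any real codebase):

abstract Callback(Float->Void) {
    public inline function new(f:Float->Void) {
        this = f;
    }

    // Inlined, so it disappears at compile time.
    public inline function invoke(v:Float):Void {
        this(v);
    }

    // @:from implicit casts compile to calls of this static function, so they
    // do have a runtime cost when not inlined; and the underlying type is
    // still a function type, i.e. Dynamic on the cpp target.
    @:from static function fromFunc(f:Float->Void):Callback {
        return new Callback(f);
    }
}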
The reason function pointers or fancier templates aren't used is due to Dynamic. Currently, hxcpp closures generated from Haxe anonymous functions all extend the hx::LocalFunc (or hx::LocalThisFunc) class, and the HX_BEGIN_LOCAL_FUNC you see in the generated cpp defines a class which extends one of those classes and defines a _hx_run function which contains the user's actual code.
We could try and make a fancier closure class using template parameter packs, which might look something like this:
template<class TReturn, class... TArgs>
struct HXCPP_EXTERN_CLASS_ATTRIBUTES TypedLocalFunc : LocalFunc
{
    virtual TReturn _hx_run(TArgs... args) = 0;
};
As a quick test to ensure things work, we can hand-write a closure to see what the cpp generator / hxcpp macros could potentially be changed to output. In this case our closure class inherits from a specialisation of our closure template.
struct HXCPP_EXTERN_CLASS_ATTRIBUTES testTyped : hx::TypedLocalFunc<float, float, int>
{
    float _hx_run(float v1, int v2)
    {
        return v1 * v2;
    }

    // These are needed to ensure compatibility with wrapping the typed closure in Dynamic.
    ::Dynamic __Run(const Array< ::Dynamic> &inArgs) { return _hx_run(inArgs[0], inArgs[1]); }
    ::Dynamic __run(const Dynamic &inArg0, const Dynamic &inArg1) { return _hx_run(inArg0, inArg1); }
};
At first glance this all seems to work rather well. We can then write code like this, and it works!
hx::TypedLocalFunc<float, float, int>* func = new testTyped();
Dynamic dyn = Dynamic(func);

::haxe::Log_obj::trace(func->_hx_run(5.7, 7), null());
::haxe::Log_obj::trace(dyn(5.7, 7), null());
We still retain the old Dynamic calling, and if we have the actual closure pointer the only cost we pay is the virtual function call, not a potential GC collection from primitives being boxed.
This approach falls apart when Dynamic is involved. If we take a look at a second, very similar closure object where the first argument is replaced with Dynamic: from a Haxe point of view these two functions should be compatible, but in cpp they're not.
struct HXCPP_EXTERN_CLASS_ATTRIBUTES otherTestTyped : hx::TypedLocalFunc<float, Dynamic, int>
{
    float _hx_run(Dynamic v1, int v2)
    {
        return v1 + v2;
    }

    ::Dynamic __Run(const Array< ::Dynamic> &inArgs) { return _hx_run(inArgs[0], inArgs[1]); }
    ::Dynamic __run(const Dynamic &inArg0, const Dynamic &inArg1) { return _hx_run(inArg0, inArg1); }
};
The problems start to appear once we try to cast between these two objects; attempting to do so will result in a null pointer because, as far as C++ is concerned, two different template specialisations are entirely different classes.
hx::TypedLocalFunc<float, float, int>* v1 = new testTyped();
hx::TypedLocalFunc<float, Dynamic, int>* v2 = new otherTestTyped();
v1 = (hx::TypedLocalFunc<float, float, int>*)v2;
// v1 is now null!
While this could probably be solved with some sort of adapter function, in true "all problems in computer science can be solved by another level of indirection" style, it's a fair bit more work than using function pointers or templates. While it would be nice to not have primitives boxed in these situations, it's probably easier to re-work any performance-sensitive code to avoid dynamic functions, and the current Dynamic functions are probably there just to make the cpp implementation easier.
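On that last point, a rough sketch of what "avoid dynamic functions" can look like on the Haxe side (names are illustrative; the idea is simply that interface/class method calls keep their typed signatures in the generated C++, so the Float argument shouldn't need to be boxed):

// Sketch: replace a Float->Void field with a typed interface so the
// per-frame call is a plain method call instead of a closure call.
interface UpdateListener {
    function onUpdate(value:Float):Void;
}

class Player implements UpdateListener {
    public function new() {}
    public function onUpdate(value:Float):Void { /* per-frame work */ }
}

class Game {
    var listener:UpdateListener;

    public function new(l:UpdateListener) {
        listener = l;
    }

    public function tick():Void {
        listener.onUpdate(3.0); // typed call, no function value involved
    }

    static function main() {
        new Game(new Player()).tick();
    }
}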
Hi Aidan, thanks for the reply. Very insightful.
I know other compilers can handle delegates/function types elegantly, yet it seems like the limitation here is because we initially transpile to C++ and then must play by its rules (i.e. how it handles dynamics).
I don't know if C++ supports function overloading, but if it does, my naive initial thought would be to generate something like this (in pseudocode):
TypedLocalFunc_FFI<float, float, int>* func1 = new testTyped();
TypedLocalFunc_FDI<float, dynamic, int>* func2 = new otherTestTyped();

TypedLocalFunc(float v1, float v2, int v3) {
    // Call func1
}

TypedLocalFunc(float v1, dynamic v2, int v3) {
    // Call func2
}
I will have a look into this later to see if I can solve my issue, as I feel function types do lead to more elegant code. It makes for simpler eventing, for example. It's just a shame that the hidden boxing and unboxing of primitive types leads to very noticeable frame stuttering at runtime.
Interesting hypothesis. My brain just thinks, "programmers are intelligent and the output should work. If it doesn't work, the programmer did something wrong."
I understand the concept that if Haxe were to generate an output similar to what you provided, Haxe could throw a compile error stating that, in this case (just on the C++ target), mixing Dynamic types with real types in function calls is invalid. This would actually reinforce type-safety and force programmers to be more careful about mixing unknown types with known types.
If working with unknown types, they should be separated from function arguments whose types are known at compile time. This also reduces ambiguity between function calls. But this is a decision for the Haxe team, of course; I'm just throwing a suggestion in here while we're on the topic.