I’m currently interested (and really always have been…) in drawing graphics, 2D first, then 3D. I tend to imagine in vector graphics, not raster. I see that most of the popular interactive frameworks (Heaps, Kha, OpenFL) all provide a class for doing just this in 2D (h2d/graphics, graphics2/graphics & graphicsExtension, graphics), with OpenFL and Heaps mimicking Flash’s Graphics class (fill, lineTo, etc.). (Kha’s graphics2/graphics class appears quite incomplete, with a bunch of empty drawing functions. Though I found some in graphicsExtension. Or maybe it’s in one of the other 10 files titled graphics…)
My question is: is there much of a difference in what goes on underneath? Let’s just say for an OpenGL / OpenGL ES (for iOS) implementation. Will they all perform about the same for drawing primitives via OpenGL (/ES)?
For example, ohhhh, let’s say you had a drawing iPad app, and you could use all 10 of your little fingers to draw stuff on it at the same time… and a kid went ballistic with it. Implementation-wise: not merely painting on a single bitmap, but actually dynamically creating a thick line (an extruded 2D polygon, like a flattened cylinder) along the path each finger took.
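To make that concrete, here’s a rough sketch (in TypeScript, with made-up names — just the geometry, none of the frameworks’ actual APIs) of what “dynamically creating a thick line along a path” boils down to: offsetting each point of the touch polyline perpendicular to the path by half the stroke width, giving a triangle-strip-ready vertex list.

```typescript
// Sketch: extrude a polyline (a finger's touch path) into thick-line
// vertices. Hypothetical names; this is just the geometry a graphics
// class has to generate before handing vertices to the GPU.
type Point = { x: number; y: number };

function extrudePolyline(path: Point[], width: number): Point[] {
  const half = width / 2;
  const verts: Point[] = [];
  for (let i = 0; i < path.length; i++) {
    // Direction at this point: previous point toward next point
    // (clamped at the ends of the path).
    const a = path[Math.max(i - 1, 0)];
    const b = path[Math.min(i + 1, path.length - 1)];
    const dx = b.x - a.x, dy = b.y - a.y;
    const len = Math.hypot(dx, dy) || 1;
    // Perpendicular (normal) to the path direction.
    const nx = -dy / len, ny = dx / len;
    // One vertex on each side of the path.
    verts.push({ x: path[i].x + nx * half, y: path[i].y + ny * half });
    verts.push({ x: path[i].x - nx * half, y: path[i].y - ny * half });
  }
  return verts; // render as a triangle strip: 2 * path.length vertices
}
```

Each new touch point only appends two vertices, so extending the stroke per frame is cheap — which is presumably why the immediate-mode approach can keep up with ten fingers.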
Or, another example: a shooter game with tons of lasers going every which way, but not appearing instantly: the filled rectangles sort of grow.
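The “growing laser” case is even simpler — a sketch (again hypothetical names, axis-aligned for brevity) of animating the rectangle’s length each frame and recomputing its four corners:

```typescript
// Sketch: a "growing" laser as a filled rectangle whose length is
// animated per frame. Hypothetical names; the point is that each frame
// only four corner positions need recomputing, which is trivial work
// before the vertices go to the GPU.
interface Laser { x: number; y: number; length: number; speed: number }

// Advance the laser by dt seconds and return its corner coordinates
// as a flat [x, y, x, y, ...] array (a real laser would be rotated).
function growLaser(l: Laser, dt: number, height: number): number[] {
  l.length += l.speed * dt;
  return [
    l.x, l.y,                     // top-left
    l.x + l.length, l.y,          // top-right
    l.x + l.length, l.y + height, // bottom-right
    l.x, l.y + height,            // bottom-left
  ];
}
```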
I don’t know how graphics really work, so that’s why I ask. I don’t know what magic those graphics classes do. I only know that I can use the Flash-like graphics-class drawing API to draw a thick line following the path of the touch points, or to draw a filled rectangle and increase its size over time. I would hope they are just sorta drawn directly on the GPU (send the polygon’s vertices and fill)… but I am at the limits of my knowledge here.
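My understanding (take it with a grain of salt) is that “send the vertices and fill” mostly means triangulating the shape, since GPUs only draw triangles. For a convex polygon that’s just a triangle fan — a tiny illustrative sketch; real graphics classes also handle concave shapes and curves, which need proper tessellation:

```typescript
// Sketch: the convex-only special case of what a fill() reduces to --
// turning an n-vertex convex polygon into a triangle fan by emitting
// index triples into the vertex list for the GPU. Hypothetical name.
function fanIndices(vertexCount: number): number[] {
  const indices: number[] = [];
  for (let i = 1; i < vertexCount - 1; i++) {
    indices.push(0, i, i + 1); // triangle: first vertex plus each edge
  }
  return indices;
}
```

So a filled quad becomes two triangles sharing an edge, a pentagon becomes three, and so on.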
Is this some rare case where OpenFL actually does just as well as the other two (Heaps ‘n Kha), or even better because it has some magical vector-graphics stuff underneath? Or are OpenFL ‘n Lime still just generally slower because they try to limit themselves to WebGL?…
Or maybe I’m thinking of all of this entirely wrong, and it’s better to just sorta paint on / re-use bitmaps?..
Anyway, Godot seems to support both ways: CanvasItem (its base 2D class; rendering_server_canvas.cpp) contains the straight drawing API (primitives, filled and not), and inherited classes (Node2D) like Polygon2D (and Polygon2D collision!), which you can still choose to render(?) as a simple solid color (filled) instead of rendering(?) a standard texture!… So maybe I should start with Godot first… then run back to Haxe when I can’t do something within Godot…
…but the dream was to use Haxe ‘n low-level exposed frameworks for everything, damnit!, not be stuck with another big tool. Maybe Godot will stay simple ‘n clean… maybe.
(note: I’ve been out of the computer world for 10 years, so go easy. This is my first post. )
(an aside: In the FlashPunk of 10 years ago, you could use Flash’s API to draw stuff, but it was completely independent of FlashPunk’s scene tree (world), which was sprite/texture based, so layering things became an issue… I vaguely remember having trouble with HaxePunk ‘n AIR, something to do with certain OpenGL functions not being supported on ES. I also still have nightmares about using Cocos2d, especially Obj-C, and anyway, they were both terribly slow on the old iPads.)