I don’t know much but I will say I’m currently launching a Hashlink binary via NodeJS in a Docker container. Presumably you could use Hashlink as your server with the right setup.
Thanks for the reply. I will be looking into using Hashlink as the sys target eventually.
Thanks, I think snake-server gives a full example of running a web server on sys targets, so I’m set for that. My problem in this case is figuring out how to integrate with tink_http.
I am at the point where I’m just going to bolt a routing system onto snake-server, with support for query params, POST variables, JSON serialization, and maybe the tink async stuff.
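To make that concrete, this is the rough shape I have in mind. It’s only a sketch and deliberately independent of snake-server’s actual API: the `handle` entry point is hypothetical, and whatever server this gets bolted onto would call it with the method and URL it has already read off the request line.

```haxe
import haxe.Json;

// A handler takes the parsed query params and returns a response body.
typedef Handler = (params:Map<String, String>) -> String;

class MiniRouter {
  final routes = new Map<String, Handler>();

  public function new() {}

  public function add(method:String, path:String, handler:Handler)
    routes.set('$method $path', handler);

  // Hypothetical entry point: the host server (snake-server or otherwise)
  // would call this with the method and URL it has already parsed.
  public function handle(method:String, url:String):String {
    final parts = url.split('?');
    final params = new Map<String, String>();
    if (parts.length > 1)
      for (pair in parts[1].split('&')) {
        final kv = pair.split('=');
        params.set(StringTools.urlDecode(kv[0]), kv.length > 1 ? StringTools.urlDecode(kv[1]) : '');
      }
    final handler = routes.get('$method ${parts[0]}');
    return handler != null
      ? handler(params)
      : Json.stringify({error: 'not found'});
  }

  static function main() {
    final router = new MiniRouter();
    router.add('GET', '/hello', params -> Json.stringify({greeting: 'hi ' + params.get('name')}));
    Sys.println(router.handle('GET', '/hello?name=world'));
  }
}
```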
I believe tink_tcp worked on sys targets (to some extent) last time I checked.
That’s entirely possible, but there’s a serious lack of examples or any help on how to do it.
The examples that come with tink_web and tink_http show Node.js and PHP containers, but the TcpContainer expects handler callbacks that do all the work, and I can’t figure it out (and I’m usually pretty good at figuring things out).
On the lower parts of its stack, tink has a few things that aren’t all that maintained, namely tink_runloop and tink_tcp. They are somewhat abandoned, because the compiler team started adding these things to the stdlib (the event loop, and at some point the asynchronous IO APIs), so it seemed a bit pointless to pour too much energy into them.
That said, the point of tink_http is to define clear abstractions that can be implemented on top of whatever APIs you want to use. In essence, tink_web turns a router object (or object hierarchy) into an IncomingHttpRequest -> Promise<OutgoingHttpResponse> function. You don’t even need to use the container abstraction of tink_http if you don’t want to: use whatever server you want, turn its request abstraction into an incoming request, pass it to tink_web, and write the result back through whatever API your server exposes.

That can be Node, it can be PHP, it can be snake-server, it could be some Java server, or it could be that you want to compile your tink_web application to Lua to run directly in nginx. No matter. The point is definitely not that you have to start on top of TCP (I mostly did that in the spirit of “because we can”), but rather that you start on top of whatever HTTP server you want to run your code in.
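For a rough sketch of what that boundary looks like (adapted from memory of the tink_web README, so check names like `Router`, `Context.ofRequest`, `OutgoingResponse.reportError`, and the actual type names `IncomingRequest`/`OutgoingResponse` against the current API): the `handle` function below is the entire application as far as the host server is concerned, and “integration” just means building an `IncomingRequest` from whatever your server gives you and writing the resulting `OutgoingResponse` back out.

```haxe
import tink.http.Request;
import tink.http.Response;
import tink.web.routing.*;

// The router object hierarchy that tink_web turns into a request handler.
class Root {
  public function new() {}

  @:get('/hello/$name')
  public function hello(name:String)
    return 'Hello, $name!';
}

class App {
  // The whole tink_web application boils down to this: a plain function
  // from an incoming request to a (future) outgoing response. A stock
  // container (Node, PHP, ...) is just something that calls a function
  // like this for you; any other server can do the same by constructing
  // an IncomingRequest itself and writing the response back.
  public static function handle(req:IncomingRequest) {
    final router = new Router<Root>(new Root());
    return router.route(Context.ofRequest(req))
      .recover(OutgoingResponse.reportError);
  }
}
```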
Good luck with writing your own web server; it’s a lot of fun, and if you need any help, I’ve walked the same road and would be happy to give guidance.
Regarding a full-featured web server with the features you want from the article, go2hx may already cover many of them. As an example:
Requirements:
- Haxe preview 1 (likely to be bumped to a nightly Haxe version soon)
Fair question. The performance is substantially worse at the moment: it will hold up for a few thousand connections, but past that it’s not capable enough, unlike Go, which can handle a million concurrent connections with low latency.
The reason is twofold:

1. Most importantly, the concurrency model in go2hx is not as scalable as Go’s: it relies on a one-thread-per-goroutine model (sketched below). Go, on the other hand, uses a coroutines-plus-threads model with a very sophisticated scheduler that shares work across threads and keeps resource and peak CPU usage down.
2. go2hx has had very little work done on optimizing the Haxe code it outputs. The practical reason is that it’s a large project with few hands working on it, so we’ve had to focus on correctness over everything else to get this far.
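To make the first point concrete, here is a minimal sketch of what a one-thread-per-goroutine mapping looks like on a Haxe sys target. This is not go2hx’s actual generated code, just an illustration of why the model gets expensive: every spawned task costs a full OS thread.

```haxe
import sys.thread.Thread;

class GoroutineSketch {
  // Hypothetical stand-in for Go's `go` statement under a
  // one-thread-per-goroutine model: each "goroutine" gets its own
  // OS thread, so thousands of concurrent connections mean
  // thousands of threads (stacks, context switches, scheduler load).
  static function go(fn:() -> Void):Void {
    Thread.create(fn);
  }

  static function main() {
    for (i in 0...4)
      go(() -> Sys.println('goroutine-like task $i'));
    Sys.sleep(0.1); // crude wait so the spawned threads get a chance to run
  }
}
```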
However, there is some good news on why go2hx is still worth trying out:

- go2hx is married to Haxe, and Haxe 5 is getting what is shaping up to be a well-engineered coroutine system built into the language.
- go2hx is still young, having only gotten a working web server in August of this year.
- It’s Haxe code, and open source. With more usage and contributions, optimizations will be found and it will move closer to parity with Go.