Compilation on Bitbucket Pipelines using Docker


I’m trying to use Bitbucket Pipelines to compile, test and deploy my project. I found the Docker image of Haxe, but then I don’t know what to do :woozy_face:
This is my first time with Docker, so I’m not sure how it works.

For now I’ve got this:

  options:
    docker: true

  pipelines:
    default:
      - step:
          script: # Modify the commands below to build your repository.
            - docker pull haxe
            - docker run haxe
            - docker run haxelib install openfl

Here is the error:

Unable to find image 'haxelib:latest' locally
docker: Error response from daemon: pull access denied for haxelib, repository does not exist or may require 'docker login'.
See 'docker run --help'.

It seems that I don’t have access to haxelib in the docker image…any clue?

I think this tells Docker to run a Docker image called haxelib, which… well, does not exist, as the error says.

Thanks for your feedback. That makes sense. But docker run haxe is working, and the Haxe Docker image is supposed to ship with haxelib. So how can I use the haxelib command?

docker run haxe does not run the haxe command. It starts up the Haxe container / Docker image.

I think you’re looking for the exec command:

Alternatively you could use lix to install haxe and haxelib:

    - step:
        script: # Modify the commands below to build your repository.
          - npm i -g lix
          - haxelib install openfl
          - ...

Thanks. I tried lix and sadly I’ve got errors (here are the logs):

haxelib install openfl

Neko seems to be missing. Attempting download ...


module.exports=function(e,t){"use strict";var n={};function __webpack_require__(t){…} [rest of the minified bundle output truncated]

TypeError: this is not a typed array.
    at Function.from (native)
    at Unzip.write (/bin/versions/node/v4.2.1/lib/node_modules/lix/bin/haxelibshim.js:2:14702)
    at write (/bin/versions/node/v4.2.1/lib/node_modules/lix/bin/haxelibshim.js:2:21963)
    at IncomingMessage.ondata (_stream_readable.js:528:20)
    at emitOne (events.js:82:20)
    at IncomingMessage.emit (events.js:169:7)
    at readableAddChunk (_stream_readable.js:146:16)
    at IncomingMessage.Readable.push (_stream_readable.js:110:10)
    at HTTPParser.parserOnBody (_http_common.js:109:22)
    at TLSSocket.socketOnData (_http_client.js:305:20)

Strange, because the log from npm i -g lix tells me that Neko is installed.


lix install haxelib:openfl


module.exports=function(e,n){"use strict";var t={};function __webpack_require__(n){…} [rest of the minified bundle output truncated]

TypeError: ye.alloc is not a function

at /bin/versions/node/v4.2.1/lib/node_modules/lix/bin/lix.js:2:223986
at Object.module.exports.n.63.s.constructor.e.ondone.t.on.t.on.a.futimes.a.fchown.e.(anonymous function).e.(anonymous function) (/bin/versions/node/v4.2.1/lib/node_modules/lix/bin/lix.js:2:224067)
at __webpack_require__ (/bin/versions/node/v4.2.1/lib/node_modules/lix/bin/lix.js:2:154)
at startup (/bin/versions/node/v4.2.1/lib/node_modules/lix/bin/lix.js:2:291)
at module.exports.n.60.e.exports.Object.freeze.Z_NO_FLUSH (/bin/versions/node/v4.2.1/lib/node_modules/lix/bin/lix.js:2:323)
at Object.<anonymous> (/bin/versions/node/v4.2.1/lib/node_modules/lix/bin/lix.js:2:333)
at Module._compile (module.js:435:26)
at Object.Module._extensions..js (module.js:442:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:311:12)

When I try docker exec -it haxe or docker exec -it haxelib after docker pull haxe, I get: No such container: haxe or No such container: haxelib. Also, when I run docker ps -l, the list is empty!

No! When I run this command, I’ve got:

Haxe Compiler 3.4.7 - (C)2005-2017 Haxe Foundation

 Usage : haxe -main <class> [-swf|-js|-neko|-php|-cpp|-cppia|-as3|-cs|-java|-python|-hl|-lua] <output> [options]

 Options :

  -cp <path> : add a directory to find source files

  -js <file> : compile code to JavaScript file

I think there is a bit of confusion here: docker run A does not execute program A, it starts a Docker container from an image named “A”. If there is no such image, Docker will complain (see docker run haxelib).
When you run a Docker image, it will call a predefined command line or script (baked into that image), which can then start a service, run a tool or do whatever.

In the case of Haxe’s docker images that startup script simply calls haxe - an executable that sits inside that image (along with neko and haxelib). So you are both correct.

However in order to get it to call haxelib ... you would have to trick it, e.g. by bypassing the startup script, going interactive or building your own derivative image.
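For example (an untested sketch, assuming the image lets you override its default command, as most images do), you could run haxelib in a throwaway container like this:

    docker run --rm haxe haxelib install openfl

Keep in mind that whatever haxelib installs here lives only inside that one container, so on its own this doesn’t help a later build step.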

I haven’t used Haxe’s docker images, so I’m not really sure how they are supposed to work.
The readme for Haxe images on docker hub indicates you are meant to build your own image(s) from those Haxe images, so they contain operations tailored to your project. So Haxe’s images are just there to provide the binaries.

You should also keep in mind that every time you docker run <image> you start a fresh, new instance, which doesn’t know anything about what happened before.

All in all it sounds like a lot of work, and I’m not sure you want to go in that direction.

Regarding lix: I think Node 4.2.1 is too old; try using a more up-to-date version. I think I’ve seen someone mention that at least version 6 was required. I usually go for 10.x.

Yeah, regarding lix it seems like pipelines ships with a rather old node version. It might have nvm too, or you can just start from a newer image:

image: node:10
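For completeness, a minimal sketch of what the whole file could then look like (assuming a lix-based setup with .haxerc and haxe_libraries committed to the repository; build.hxml stands in for your actual build file):

    image: node:10

    pipelines:
      default:
        - step:
            script:
              - npm i -g lix
              - lix download
              - npx haxe build.hxml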

Aaaah ok! Thank you very much for the clarification. So when I run the Haxe image, Docker calls the haxe binary because that’s what’s written in the Dockerfile. So I should build my own image from it, add all my haxelibs in my own Dockerfile, and not try to call haxelib from Pipelines like I did.

Thanks! It’s working, and it seems to be faster than the Docker approach. I just need to find out how to set the Haxe version, because my project is not compatible with Haxe 4 yet.

To download and set the haxe version I did this and it seems to work:

lix install haxe 3.4.7 --global
lix use haxe 3.4.7

But I can’t build for now; I don’t know why I get an ndll error.

haxelib run openfl build html5

Called from lime/system/System.hx line 694
Called from lime/system/System.hx line 371
Called from lime/system/System.hx line 481
Called from lime/system/System.hx line 306
Called from /usr/lib/haxe/std/neko/Lib.hx line 30
Uncaught exception - load.c(237) : Failed to load library : lime.ndll (lime.ndll: cannot open shared object file: No such file or directory)

Cloning might work, but usually you base your image on an existing image, so that you only have to add whatever is specific to your environment or needs.
E.g. Haxe’s alpine 3.10 Docker image is derived from alpine:3.10, so it inherits whatever is inside the
alpine 3.10 image and only adds the Haxe binaries (that’s what FROM alpine:3.10 in the Dockerfile is for).
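A derivative image for this thread’s use case might look something like this (an untested sketch; the library choice is just an example, and haxelib setup is assumed to accept a path argument non-interactively):

    FROM haxe:3.4.7

    # Bake the project's haxelibs into the image so CI steps don't reinstall them.
    RUN haxelib setup /haxelib \
     && haxelib install openfl --always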

Yeah, downloading, building and running a Docker image probably takes a long time.

In your project folder, run (after installing lix on your PC):

    lix scope create
    lix download haxe 3.4.7
    lix use haxe 3.4.7

You should now see a .haxerc that’s locked to Haxe 3.4.7 (you already have that part).
From there you add all your dependencies via lix install haxelib:<libname>, which will put them into the haxe_libraries folder. Then your project should be set up for use with lix.
You only have to do that once, because typically you put your .haxerc and haxe_libraries into source control.
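For reference, that .haxerc is just a tiny JSON file; with the setup above it might look something like this (exact keys depend on your lix version):

    {
      "version": "3.4.7",
      "resolveLibs": "scoped"
    }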

When building on a new PC or CI system you simply install lix and then run lix download which will take care of installing Haxe 3.4.7 and all libs for you.
Depending on whether you installed lix locally or globally, you either just run haxe, or you might need to run the tools via npx haxe, npx neko or npx haxelib.
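On the CI side, the restore then boils down to something like this (a sketch; build.hxml is a placeholder for your own build file):

    npm i -g lix       # puts the haxe / haxelib / neko shims on PATH
    lix download       # fetches the Haxe version from .haxerc plus every lib in haxe_libraries
    npx haxe build.hxml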

As @ablum said, I think you’re setting up the pipeline wrong.

At work I set up a number of Jenkins declarative pipelines to do similar work and, from what I could gather from the bitbucket pipeline documentation, the scenario should be pretty similar: you shouldn’t need to run any docker command by hand, but instead you only need to instruct the pipeline runtime on which image you’d like to base your steps upon. In other words, if you specify image: haxe at the top of the pipeline file, all the ensuing steps/scripts will be directly run inside a container that will be automatically initialized by the pipeline runtime for you from the official docker image for haxe.

If you look carefully at the example posted there (reported here), you’ll see that they’re only specifying the base image they are using (node:10.15.0) and then they run a series of npm commands from inside the container:

image: node:10.15.0

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
  tags:                         # add the 'tags' section
    release-*:                  # specify the tag
      - step:                   # define the build pipeline for the tag
          name: Build and release
          script:
            - npm install
            - npm test
            - npm run release
      - step:
          name: Clone
          script:
            - echo "Clone all the things!"

I couldn’t find a reference to this, but I assume that all the project contents besides the pipeline yml file are made available inside the working directory (WORKDIR) of the container.
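Applied to Haxe, the same pattern could be as small as this (an untested sketch; the script lines are placeholders for your actual build):

    image: haxe:3.4

    pipelines:
      default:
        - step:
            script:
              - haxelib install openfl --always
              - haxe build.hxml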

Hi @loudo

We use Bitbucket & Docker also and our setup is as follows:

image: mikkelmr/haxe4:haxe4node

pipelines:
  default:
    - step:
        name: Run all unit tests.
        script:
          - npm install ws
          - bash tools/pipeline/
          - bash tools/pipeline/

The Docker image we use is based on the official Haxe 4.0.0 image, plus a Node.js installation, which we needed to run our unit tests. The Dockerfile looks like this:

FROM haxe:4.0.0

# Brutally stolen from the node Dockerfile.

RUN groupadd --gid 1000 node \
  && useradd --uid 1000 --gid node --shell /bin/bash --create-home node

# gpg keys listed at
RUN set -ex \
  && for key in \
    94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
    FD3A5288F042B6850C66B31F09FE44734EB7990E \
    71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
    DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
    C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
    B9AE9905FFD7803F25714661B63B535A4C206CA9 \
    56730D5401028683275BD23C23EFEFE93C4CFFFE \
    77984A986EBC2AA786BC0F66B01FBB92821C587A \
    8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 \
  ; do \
    gpg --keyserver hkp:// --recv-keys "$key" || \
    gpg --keyserver hkp:// --recv-keys "$key" || \
    gpg --keyserver hkp:// --recv-keys "$key" ; \
  done


RUN ARCH= && dpkgArch="$(dpkg --print-architecture)" \
  && case "${dpkgArch##*-}" in \
    amd64) ARCH='x64';; \
    ppc64el) ARCH='ppc64le';; \
    s390x) ARCH='s390x';; \
    arm64) ARCH='arm64';; \
    armhf) ARCH='armv7l';; \
    i386) ARCH='x86';; \
    *) echo "unsupported architecture"; exit 1 ;; \
  esac \
  && curl -fsSLO --compressed "$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH.tar.xz" \
  && curl -fsSLO --compressed "$NODE_VERSION/SHASUMS256.txt.asc" \
  && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
  && grep " node-v$NODE_VERSION-linux-$ARCH.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
  && tar -xJf "node-v$NODE_VERSION-linux-$ARCH.tar.xz" -C /usr/local --strip-components=1 --no-same-owner \
  && rm "node-v$NODE_VERSION-linux-$ARCH.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
  && ln -s /usr/local/bin/node /usr/local/bin/nodejs


RUN set -ex \
  && for key in \
    6A010C5166006599AA17F08146C2130DFD2497F5 \
  ; do \
    gpg --keyserver hkp:// --recv-keys "$key" || \
    gpg --keyserver hkp:// --recv-keys "$key" || \
    gpg --keyserver hkp:// --recv-keys "$key" ; \
  done \
  && curl -fsSLO --compressed "$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" \
  && curl -fsSLO --compressed "$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" \
  && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz \
  && mkdir -p /opt \
  && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ \
  && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn \
  && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg \
  && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz

RUN haxe -v
RUN node --version

CMD [ "node", "haxe" ]