What are requirements for a Docker image to be used in BatchX?
These requirements are what we call "the BatchX contract". You can find detailed information here.
How do I send my code to BatchX?
How does BatchX compare to AWS Batch?
Both of them allow you to run Docker images on machines provisioned on-demand.
AWS Batch is a computing service within the AWS ecosystem, so it targets a technical audience with hands-on infrastructure experience. It is not ready to use out of the box; it needs to be architected in conjunction with other AWS primitives.
BatchX is a fully managed platform that is ready to use, comprising file and image storage, job history, and other collaboration and management features.
How does the BatchX runtime compare to Hadoop?
Even though Hadoop also targets batch processing, our computational model is radically different.
Single node computation
Hadoop splits the job input and delegates its processing to multiple collaborating worker nodes.
BatchX provisions a single machine large enough to perform the whole job computation.
Fixed programming model vs containerization
Hadoop requires implementing jobs using Map-reduce semantics.
BatchX jobs run in containers, so it is very easy to adapt any existing command-line program to run on BatchX.
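As an illustration, adapting an existing command-line tool usually comes down to packaging it in a Docker image with a small entrypoint. This is only a sketch: the tool name `mytool` and the wrapper script `run.py` are hypothetical, not part of BatchX itself.

```dockerfile
# Minimal sketch: wrapping a hypothetical CLI tool ("mytool") for BatchX.
FROM python:3.11-slim

# Install the existing command-line program (hypothetical package name).
RUN pip install --no-cache-dir mytool

# A small entrypoint script that reads the BatchX input message and
# invokes the tool with the corresponding flags.
COPY run.py /run.py
ENTRYPOINT ["python", "/run.py"]
```

The entrypoint is where the adaptation happens: it translates the JSON input message BatchX provides into the arguments the original program already understands.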
/batchx/input/input.json? Who creates that file?
The file contains the job input JSON message. BatchX creates it and maps it into the file system of the job container as a read-only file. Read more about this here.
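A minimal sketch of how a job's entrypoint might consume this file. The field names (`input_file`, `threshold`) are hypothetical; the actual schema depends on each image. Since BatchX only mounts the file inside a running job, the sketch first writes a sample message to a temporary directory so it can run standalone:

```python
import json
import os
import tempfile

# In a real job, BatchX mounts the message read-only at
# /batchx/input/input.json. Here we write a sample message to a temp
# directory so the sketch is runnable outside a BatchX container.
workdir = tempfile.mkdtemp()
input_path = os.path.join(workdir, "input.json")
with open(input_path, "w") as f:
    json.dump({"input_file": "/batchx/input/reads.fq", "threshold": 0.5}, f)

# What the container entrypoint would actually do: parse the message
# and pull out the parameters it needs.
with open(input_path) as f:
    params = json.load(f)

print(params["input_file"])  # hypothetical field
print(params["threshold"])   # hypothetical field
```

Inside a real container the entrypoint would simply `open("/batchx/input/input.json")` instead of building the sample file.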