
Expose Native, Dockerized Services in macOS CI/CD Workflows with Orka

Learn how to leverage native Docker in a macOS or iOS CI/CD pipeline with Orka.

CI/CD pipelines consist of a series of automated processes that are executed in a predetermined order each time the workflow runs. Standing up services in advance of a given workflow run – in order to execute one or more of these workflow-embedded processes – is a common pattern in modern, cloud-based CI/CD.

For example, it is a common design requirement of modern CI/CD that a workflow’s resulting build artifacts be written to cloud storage such as AWS S3. A team may stand up a service that handles the task of writing those artifacts to a storage bucket, which *spoiler alert* is exactly what we’re about to show you how to do in this blog.

More importantly, we’ll walk you through a process by which just about any Dockerized service you could dream of can be stood up in Orka.

Outside of the macOS world, Docker would be the tool of choice for standing up a service like this. And while Docker Desktop offers a simple path to nested virtualization on macOS, it comes with a startup penalty, because Docker Desktop has to start each time a new VM is spun up. This limitation is compounded by the limited headroom Docker has when running on a Linux VM within a macOS VM rather than natively on Linux.

Orka solves this problem by allowing you to stand up Dockerized services alongside macOS virtual machines, rather than nested within them using Docker Desktop. This results in markedly increased efficiency in a CI/CD pipeline for macOS or iOS, because there is no startup penalty each time the workflow executes. Also, the services aren’t directly consuming VM resources as the build itself executes.

Deployment Overview

You’ll need to deploy your Dockerized service to the Kubernetes sandbox associated with your Orka environment.

To do so, you’ll first need to Dockerize a service that listens for an HTTP request (or choose one that already exists), because the macOS and Linux environments will use HTTP to communicate. If building your own Docker image, you’ll then need to push that newly built image to an accessible Docker registry.

Our example today is a simple Bottle.py-based fileserver application that listens for a POST request, reads the attached file as bytes into memory, and writes that byte string to an S3 bucket. If you’re interested, you can check out the source code here.
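For reference, here is a minimal sketch of what such a service might look like. This is an approximation rather than the linked source; the /archive route, the artifact form field, and the environment variable names mirror the docker run and curl examples in this post.

import os

import boto3
from bottle import Bottle, request

app = Bottle()

# S3 client configured from the same environment variables the
# docker run example below passes into the container
s3 = boto3.client(
    "s3",
    region_name=os.environ["AWS_REGION_NAME"],
    aws_access_key_id=os.environ["AWS_ACCESS_KEY"],
    aws_secret_access_key=os.environ["AWS_SECRET_KEY"],
)

@app.post("/archive")
def archive():
    # "artifact" matches the --form field name in the curl example below
    upload = request.files.get("artifact")
    s3.put_object(
        Bucket=os.environ["S3_BUCKET_NAME"],
        Key=upload.filename,
        Body=upload.file.read(),  # read the attached file as bytes
    )
    return "archived %s\n" % upload.filename

if __name__ == "__main__":
    # bind to all interfaces so the container port mapping works
    app.run(host="0.0.0.0", port=8888)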

More importantly, whichever service you need to stand up simply needs to behave the way the following Dockerfile and docker run command do. That is, an application that listens for HTTP requests must run at container startup.

# Dockerfile
FROM python:3.9
COPY requirements.txt /tmp
RUN pip install -r /tmp/requirements.txt
RUN rm -rf /tmp/requirements.txt
COPY main.py .
CMD python main.py

# docker run command
sudo docker run \
  -e AWS_REGION_NAME="<us-west-2>" \
  -e S3_BUCKET_NAME="<bucket_name>" \
  -e AWS_ACCESS_KEY="<access_key>" \
  -e AWS_SECRET_KEY="<secret_key>" \
  -p 8888:8888 \
  jdvincent/orka-s3-archive:latest
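If you’re building your own image, you’d first build it and push it to a registry your Kubernetes cluster can pull from, along these lines (the registry and tag here are illustrative):

docker build -t <your_registry>/orka-s3-archive:latest .
docker push <your_registry>/orka-s3-archive:latest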

Deploying to Orka’s K8s Sandbox

With something like the above example in an accessible Docker registry, you can use the following as a template for creating the Kubernetes Deployment and Load Balancer Service that will allow the macOS and Linux environments to communicate.

# orka-s3-archive.yml
---
apiVersion: v1
kind: Service
metadata:
  name: s3archive-service
spec:
  selector:
    app: s3archive
  ports:
    - port: 80
      targetPort: 8888
  type: LoadBalancer
  externalIPs:
  - 10.221.188.13
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orka-s3-archive
  labels:
    app: s3archive
spec:
  replicas: 1
  selector:
    matchLabels:
      app: s3archive
  template:
    metadata:
      labels:
        app: s3archive
    spec:
      containers:
      - name: orka-s3-archive
        image: jdvincent/orka-s3-archive:intel
        env:
        - name: AWS_SECRET_KEY
          value: "<your_secret_key>"
        - name: AWS_ACCESS_KEY
          value: "<your_access_key>"
        - name: S3_BUCKET_NAME
          value: "<your_bucket_name>"
        - name: AWS_REGION_NAME
          value: "<us-west-2>"
        ports:
        - containerPort: 8888

NOTE: It is recommended that you introduce sensitive information with Kubernetes Secrets. The above approach to setting environment variables is simple and works, but it is less secure.
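For example, you could create a Secret once (the name s3archive-creds here is illustrative):

kubectl create secret generic s3archive-creds \
  --from-literal=AWS_ACCESS_KEY=<access_key> \
  --from-literal=AWS_SECRET_KEY=<secret_key>

and then reference it from the container spec instead of inlining the values:

env:
- name: AWS_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      name: s3archive-creds
      key: AWS_ACCESS_KEY
- name: AWS_SECRET_KEY
  valueFrom:
    secretKeyRef:
      name: s3archive-creds
      key: AWS_SECRET_KEY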

In the above YAML file, we have defined everything required to deploy a Dockerized service that will be accessible via HTTP from your macOS VMs, with high availability available by scaling the replica count.

Load Balancer Service

In the topmost section of the above file, we have defined the Load Balancer Service that will allow HTTP traffic to reach our Dockerized service. We gave it a name, a selector matching the app label of the pods it routes to, the port it will listen on (80 in this case), and the port it will forward traffic to on the container (8888 in this case).

Finally, we have set the externalIP to the IP of the cluster node where the Kubernetes Service is deployed. This means that you will need to deploy the service without externalIPs populated, collect the IP of the node onto which it lands, update this file with that value, and apply the definition once more.
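In practice, that two-step flow might look like the following (the label and file names match the manifest above):

kubectl apply -f orka-s3-archive.yml        # first pass, externalIPs omitted
kubectl get pods -l app=s3archive -o wide   # note the NODE column
kubectl get nodes -o wide                   # find that node's INTERNAL-IP
# add that IP under externalIPs in orka-s3-archive.yml, then:
kubectl apply -f orka-s3-archive.yml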

K8s Deployment

Next, we define a Deployment to manage our Dockerized service; with the replica count raised above one, it also provides high availability. Here, we again give it a name, and in this case, we set the app label to the one the above Load Balancer Service selects on. Then we set the number of replicas we want to spin up, and just like we did above, we tell it which app label to match.

Finally, we define which Docker image we want to run, potentially introduce environment variables, and set the containerPort value to the port on which our Dockerized service will be listening (again, in this case, it will be port 8888).

Run It

Once you’ve updated the above template to reflect the values associated with your Dockerized service, you can run the following to stand it up in Kubernetes:

kubectl apply -f orka-s3-archive.yml
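You can then confirm that the Deployment rolled out and that the Service is exposing the expected IP and port (the names match the manifest above):

kubectl rollout status deployment/orka-s3-archive
kubectl get service s3archive-service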

To communicate with your service from a macOS VM, you’ll simply need to send an HTTP request to the exposed Kubernetes Service we first defined. For example, in the above setup, we can now send a scripted curl request like the following to upload a zipped file to s3 storage.

curl --verbose --form "artifact=@/path/to/artifact.zip" http://10.221.188.13/archive

NOTE: The request is being sent to the externalIP and port we set on the Load Balancer Service. There is no need to set a Content-Type header by hand; curl’s --form flag generates the correct multipart header, boundary included.
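In a CI/CD pipeline, that curl call typically lands in a post-build step on the macOS build agent. A minimal sketch, assuming your build products end up in a build/ directory:

# post-build step on the macOS VM (paths are illustrative)
zip -r artifact.zip build/
curl --fail --form "artifact=@artifact.zip" http://10.221.188.13/archive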

TL;DR

Dockerized services are often part of modern CI/CD pipelines. Docker Desktop allows engineers to use this same approach in macOS-based pipelines for iOS. However, this approach can lead to performance issues when virtualizing macOS for ephemeral build agents, as Docker would then be running on a Linux VM inside of a macOS VM, rather than natively on Linux.

Orka solves this problem, and avoids Docker’s startup penalty on each workflow run, by offering a native, independent Docker environment alongside your macOS VMs rather than nested inside them with Docker Desktop.

Do you have additional questions about Orka? Visit our Orka Docs for more info, or join us on the MacStadium Community and discuss your experiences with Docker and Orka.
