Quest to Cloud Native Computing

Improve supply chain security with GitHub Actions, Cosign, Kyverno and other open source tools


Modified: Sun, 2022-Jan-02


I am using a product called MeiliSearch for my Jamstack website. It is a great search application that can be deployed on-premise or as a container. I wanted to use this application to practice supply chain security: we will secure it from build to deployment.

Here’s a summary of the workflow, from start to deployment:

  • Compile MeiliSearch and generate SHA256 checksums for the binaries

  • Get the base container image (Minideb), integrity checked with DCT

  • Copy the MeiliSearch binary into the container; the checksums of the binaries are verified

  • Sign the container, using Cosign

  • Scan the container with Trivy; the SARIF report is uploaded

  • Generate the SBOM, using Syft

  • Attest the SBOM, using Cosign

  • Deploy in Kubernetes, verified by a Kyverno policy

Re-compiling MeiliSearch

We start by building the binary and the Docker image of MeiliSearch. At the time of writing, the official MeiliSearch binaries do not ship with a checksum file, so the integrity of the binary cannot be verified. The official MeiliSearch Docker images also do not enable Docker Content Trust (DCT), so the integrity of the container image cannot be verified when we use it.

To solve this problem, I use GitHub Actions to compile MeiliSearch and generate the checksum file when the binary is produced. To keep the Dockerfile simple, the MeiliSearch binary is downloaded instead of being compiled inside the Dockerfile. Here’s the extract of the relevant code block:

      - name: Create checksum file for the binaries
        run: |
          cd target/${{ }}/release
          sha256sum meilisearch | awk '{print $1, "${{matrix.asset_name}}"}' > ${{matrix.asset_name}}.sha256sum
          if [ -e meilisearch-stripped ]; then
            sha256sum meilisearch-stripped | awk '{print $1, "${{matrix.asset_name}}-stripped"}' > ${{matrix.asset_name}}-stripped.sha256sum
          fi
After the binaries are built, they are uploaded to GitHub Packages together with the checksum files.
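On the consumer side, verifying a download then boils down to running 'sha256sum --check' against the published checksum file. Here is a minimal, self-contained sketch; the file name and content are stand-ins for the real release assets:

```shell
# Create a stand-in for a downloaded binary (hypothetical name and content),
# produce its checksum file in the same format as the workflow above,
# then verify it the way a consumer of the release would.
printf 'fake-binary-content' > meilisearch-linux-amd64
sha256sum meilisearch-linux-amd64 > meilisearch-linux-amd64.sha256sum
sha256sum --check --strict meilisearch-linux-amd64.sha256sum
# prints: meilisearch-linux-amd64: OK
```

If the binary is tampered with after the checksum is generated, the check exits non-zero and reports FAILED.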

Creating the container image

The next step is to build the container image. The choice of the base image matters. I am using the ARM platform, so the base image needs to support ARM. Besides that, to improve supply chain security, the base image should be signed using Docker Content Trust or Cosign.

Distroless is a good option. It is signed with Cosign, but the main drawback is that it does not come with a package manager, so I cannot install Debian packages easily. How about Alpine Linux? It is based on musl instead of glibc, which can cause performance problems. Besides that, Docker Content Trust (DCT) is enabled but outdated: newer releases are not signed.

I was using Debian as the base image, but DCT is not enabled for it on Docker Hub. So I switched to Minideb by Bitnami: it supports ARM, has a package manager (called install_packages) and is signed (DCT).
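With DCT enabled in the environment, 'docker pull' only succeeds for tags with valid signed trust data; a quick local sketch (the tag name is illustrative):

```shell
# With Docker Content Trust enabled, the pull fails for any tag
# that has no valid trust data (i.e. is not signed).
export DOCKER_CONTENT_TRUST=1
docker pull bitnami/minideb:bullseye
```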

After choosing the base image, we continue with the creation of the container image. We enable DCT by setting the environment variable 'DOCKER_CONTENT_TRUST'. Here’s the extract of the relevant code block:

      - name: Build and push
        uses: docker/build-push-action@v2

Note that I did not create or use my own signer or private key for Docker Content Trust. I use DCT only for pulling the base image; for pushing and signing the image, I use Cosign. We will see it in later steps of the article.

Instead of compiling MeiliSearch in the Dockerfile, we download the binaries from the GitHub package that we built in the previous section. We also verify the checksums of the binaries. Here’s the extract of the relevant code block:

RUN set -eux && \
    curl -L -v -O ${SOURCE_BINARY_BASEURL}/${MEILISEARCH_VERSION}/meilisearch-linux-$(/bin/uname -m)-stripped && \
    curl -L -v -o meilisearch.sha256sum ${SOURCE_BINARY_BASEURL}/${MEILISEARCH_VERSION}/meilisearch-linux-$(/bin/uname -m)-stripped.sha256sum && \
    sha256sum --check --strict meilisearch.sha256sum && \
    ln -s meilisearch-linux-$(/bin/uname -m)-stripped meilisearch

Since the commands are chained with the AND operator (&&), every command in the RUN step must exit with code 0. If any command fails, the build terminates.
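A tiny sketch of that short-circuit behaviour outside of Docker:

```shell
# The chain stops at the first failing command: "step2" is never printed,
# and the overall exit code is non-zero.
sh -c 'true && echo step1 && false && echo step2'
echo "exit code: $?"
# prints: step1
# prints: exit code: 1
```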


After the container image is built and pushed to the container registry, it is signed by Cosign. First, the Cosign GitHub Action is installed.

      - name: Install Cosign GH action
        uses: sigstore/cosign-installer@main

Then we sign the container image using Cosign. The option '--recursive' is used because the manifest contains multi-arch images (amd64 and arm64).

      - name: Use Cosign to sign the image recursively
        run: |
          # Sign the multiarch images; the signature is also pushed to the registry
          echo -n "${{ secrets.COSIGN_PRIVATE_KEY_PASSWORD }}" | \
            cosign sign --recursive --key <(echo -n "${{ secrets.COSIGN_PRIVATE_KEY }}") \
            "${{ needs.init-env.outputs.container_registry_base_uri }}:${{ env.REMOTE_BRANCH_NAME }}"

So, the code block references two secrets: COSIGN_PRIVATE_KEY_PASSWORD and COSIGN_PRIVATE_KEY. We use 'cosign generate-key-pair' to generate them on a secure computer or in a secure environment. Here’s a very good tutorial by Dan Lorenc. Next, we have to create the two secrets in GitHub (Settings → Secrets → New repository secret).
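Generating the key pair locally is a one-liner; a sketch (the password prompt is interactive, and the output file names are Cosign's defaults):

```shell
# Generates cosign.key (the password-encrypted private key) and cosign.pub
# in the current directory; you are prompted for the password.
cosign generate-key-pair
# cosign.key -> contents go into the COSIGN_PRIVATE_KEY secret
# cosign.pub -> kept for verification later in the article
```

The password you enter at the prompt is what goes into the COSIGN_PRIVATE_KEY_PASSWORD secret.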

Let’s check whether the images are signed, using Cosign. On a local computer, run 'cosign verify':

        $ cosign verify --key
Verification for --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key
  - Any certificates were verified against the Fulcio roots.

[{"critical":{"identity":{"docker-reference":""},"image":{"docker-manifest-digest":"sha256:41969fc06309c9988a23aa5a1ca677c171c9011399527d2c2120bab87ea9311a"},"type":"cosign container image signature"},"optional":null}]

It is verified; all good. We may use the 'keyless' feature of Cosign once it is stable.

Scanning the containers

Now we have built the container image. Next, we use open source security tools to scan it. The container image is multi-arch (amd64 and arm64), and I would like to separate the scan reports by platform. I used the tool 'skopeo' to inspect the manifest; from it we can get the digests (SHA256 hashes) of the amd64 and arm64 containers.

Here’s an example for amd64. By using '::set-output', we can reference the digest of the amd64 container in other steps of the GitHub Actions workflow.

      - name: "Get the digest of container (amd64)"
        id: get-container-digest-amd64
        run: |
          skopeo inspect --raw docker://${{needs.init-env.outputs.container_registry_base_uri}}:${{env.REMOTE_BRANCH_NAME}} | \
            jq -r '.manifests[] | select(.platform.architecture=="amd64" and .platform.os=="linux") | .digest' > /tmp/container-digest-amd64
          echo "::set-output name=container_digest::$(cat /tmp/container-digest-amd64)"
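To see what that jq filter does, here is the same filter run against a minimal hand-written manifest list (the digests are made up for illustration):

```shell
# A minimal manifest list with two platform entries; the filter picks
# only the digest of the linux/amd64 entry.
cat > /tmp/manifest.json <<'EOF'
{"manifests":[
  {"digest":"sha256:aaa","platform":{"architecture":"amd64","os":"linux"}},
  {"digest":"sha256:bbb","platform":{"architecture":"arm64","os":"linux"}}
]}
EOF
jq -r '.manifests[] | select(.platform.architecture=="amd64" and .platform.os=="linux") | .digest' /tmp/manifest.json
# prints: sha256:aaa
```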

A popular choice of container scanning tool is Trivy by Aqua Security. Please refer to my other article about using GitHub Actions for container scanning.

Here are the related GitHub Actions steps. The SARIF file is for viewing the code scanning alerts in the GitHub Security tab of the repository.

      - name: Scan container with Trivy
        uses: aquasecurity/trivy-action@master
        id: scan-by-trivy
        with:
          image-ref: '${{matrix.platform_image_uri}}'
          format: 'template'
          template: '@/contrib/sarif.tpl'
          output: '${{matrix.arch}}-container-trivy-results.sarif'
          severity: 'CRITICAL,HIGH'

      - name: Upload Trivy SARIF report to GitHub Security tab
        uses: github/codeql-action/upload-sarif@v1
        with:
          sarif_file: '${{matrix.arch}}-container-trivy-results.sarif'
          category: trivy-${{matrix.arch}}
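Trivy can also be run locally against the same image before it ever reaches CI; a sketch (the image reference is a placeholder for your own):

```shell
# Scan a container image locally, reporting only HIGH and CRITICAL
# vulnerabilities; replace the image reference with your own.
trivy image --severity HIGH,CRITICAL ghcr.io/OWNER/meilisearch:main
```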

Generating the SBOM and attestation

Besides Trivy, we can use tools like Syft by Anchore to generate the SBOM (software bill of materials). Then we can use Cosign to attest the SBOM: it signs the SBOM with the private key, links the signed result to the image, and uploads it to the container registry. Refer to this article by Anchore about SBOMs and securing the software supply chain.
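Generating an SBOM locally with Syft is a single command; a sketch (the image reference is a placeholder for your own):

```shell
# Generate an SPDX JSON SBOM for a container image;
# replace the image reference with your own.
syft ghcr.io/OWNER/meilisearch:main -o spdx-json > sbom.spdx.json
```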

Here’s the extract of the related GitHub actions.

      - name: Create SBOM attestation
        run: |
          # Create SBOM attestation and push it to the container registry
          echo -n "${{ secrets.COSIGN_PRIVATE_KEY_PASSWORD }}" | \
            cosign attest --predicate "${{matrix.arch}}-${{env.ANCHORE_SBOM_ACTION_PRIOR_ARTIFACT}}" \
            --key <(echo -n "${{ secrets.COSIGN_PRIVATE_KEY }}") \
            "${{ matrix.platform_image_uri }}"

Let’s try to verify the attestation of the SBOM. On a local computer, run 'cosign verify-attestation':

        $ cosign verify-attestation --key \ \
  | jq --slurp 'map(.payload | @base64d | fromjson | .predicate.Data| fromjson )'
Verification for --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key
  - Any certificates were verified against the Fulcio roots.
    "artifacts": [
      {
        "id": "6b9c58422ffa57f2",
        "name": "adduser",
        "version": "3.118",
        "type": "deb",
        "foundBy": "dpkgdb-cataloger",
        "locations": [
          {
            "path": "/var/lib/dpkg/status",
            "layerID": "sha256:1a1321fc25a057c78714d01b3b5bfa0523e1a763b733d10cba890ec640412ced"
.... trimmed for brevity ....

So the verification of the attestation is successful, and the SBOM linked to the container image is retrieved from the payload section.

Verification of the image at deployment

Now we want to verify the integrity of the container image when it is deployed to Kubernetes.

This article by Chip Zoller clearly explains how to choose between OPA and Kyverno. In this article, I use Kyverno because of its simplicity.

        $ helm repo add kyverno https://kyverno.github.io/kyverno/
$ helm repo update
$ helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace
# For Kyverno v1.5 and above
$ helm install kyverno-policies kyverno/kyverno-policies --namespace kyverno

The installation is straightforward using the Helm chart.

Here’s my policy:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image
  annotations:
    policies.kyverno.io/title: Verify Image
    policies.kyverno.io/category: Application
    policies.kyverno.io/severity: high
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/minversion: 1.5.2
    policies.kyverno.io/description: >-
      Verify my signed images in GHCR.
spec:
  validationFailureAction: enforce
  background: false
  webhookTimeoutSeconds: 30
  failurePolicy: Fail
  rules:
    - name: verify-image
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        - image: "*"
          key: |-
            -----BEGIN PUBLIC KEY-----
            -----END PUBLIC KEY-----

So the policy enforces that the MeiliSearch image from my GitHub container registry must be signed (by Cosign). If the provided public key validates the container image, it is allowed to run in Kubernetes. The public key is one half of the key pair we generated at the beginning of the article.

If we try to deploy an unsigned image, it does not succeed:

error: statefulsets.apps "meilisearch" could not be patched: admission webhook "mutate.kyverno.svc-fail" denied the request:

resource StatefulSet/meilisearch/meilisearch was blocked due to the following policies

  autogen-verify-image: 'image signature verification failed for
  failed to verify image: fetching signatures: remote image:
    MANIFEST_UNKNOWN: manifest unknown'

The problem is also reflected in the log of the Kyverno pod:

E1229 16:41:26.744471       1 server.go:333] WebhookServer/MutateWebhook
"msg"="image verification failed" "error"="

resource StatefulSet/meilisearch/meilisearch was blocked due to the following policies

autogen-verify-image: 'image signature verification failed for
failed to verify image: fetching signatures:
remote image: GET
MANIFEST_UNKNOWN: manifest unknown'"
"gvk"="apps/v1, Kind=StatefulSet" "kind"="StatefulSet" "name"="meilisearch" "namespace"="meilisearch"
"operation"="UPDATE" "uid"="a6b96aef-77eb-46b4-9548-12250a3b0c2c"

Finally, we update the StatefulSet of MeiliSearch to use the signed container. The Kyverno admission controller uses the provided public key and verifies that the container is signed correctly, so the update succeeds. Here’s the related log:

I1230 21:08:24.221800       1 imageVerify.go:131] EngineVerifyImages
"msg"="verifying image" "kind"="StatefulSet" "name"="meilisearch"
"namespace"="meilisearch" "policy"="verify-image"


In this article, we have used GitHub Actions and various open source tools to improve supply chain security, from building the container to deployment.

Further improvement or study

  • Use keyless signing when it becomes stable

  • Besides using Cosign to attest the SBOM, it is possible (and desirable) to use Cosign to attest the vulnerability scanning report and link it to the container.

  • We should use tools like Grype by Anchore to scan the SBOM periodically, instead of relying on scanning at build time only.
