Frequently Asked Questions (FAQ)

If a question you have is not answered below, please submit an issue.

☑️ Jib User Survey
What do you like best about Jib? What needs to be improved? Please tell us by taking a one-minute survey. Your responses will help us understand Jib usage and allow us to serve our customers (you!) better.

But, I'm not a Java developer.
My build process doesn't let me integrate with the Jib Maven or Gradle plugin
How do I run the image I built?
Where is bash?
What image format does Jib use?
Why is my image created 48+ years ago?
Where is the application in the container filesystem?
How are Jib applications layered?
Can I learn more about container images?
Which base image (JDK) does Jib use?

How-Tos
How do I set parameters for my image at runtime?
Can I define a custom entrypoint?
I want to containerize a JAR.
I need to RUN commands like apt-get.
Can I ADD a custom directory to the image?
I need to add files generated during the build process to a custom directory on the image.
Can I build to a local Docker daemon?
How do I enable debugging?
What would a Dockerfile for a Jib-built image look like?
How can I inspect the image Jib built?
I would like to run my application with a javaagent.
How can I tag my image with a timestamp?
How do I specify a platform in the manifest list (or OCI index) of a base image?
I want to exclude files from layers, have more fine-grained control over layers, change file ownership, etc.
Jib build plugins don't have the feature that I need.
I am hitting Docker Hub rate limits. How can I configure registry mirrors?
Where is the global Jib configuration file and how can I configure it?

Build Problems
How can I diagnose problems pulling or pushing from remote registries?
What should I do when the registry responds with Forbidden or DENIED?
What should I do when the registry responds with UNAUTHORIZED?
How do I configure a proxy?
How can I examine network traffic?
How do I view debug logs for Jib?
I am seeing Method Not Found or Class Not Found errors when building.
I am seeing Unsupported class file major version when building.
I am seeing NoClassDefFoundError: com/github/luben/zstd/ZstdOutputStream when building.

Launch Problems
I am seeing ImagePullBackoff on my pods.
Why won't my container start?

Jib CLI
How does the jar command support Standard JARs?
How does the jar command support Spring Boot JARs?
How does the war command work?


But, I'm not a Java developer.

Check out Jib CLI, a general-purpose command-line tool for building container images from filesystem content.

Also see rules_docker for a similar existing container image build tool for the Bazel build system. The tool can build images for languages such as Python, NodeJS, Java, Scala, Groovy, C, Go, Rust, and D.

My build process doesn't let me integrate with the Jib Maven or Gradle plugin

The Jib CLI can be useful for users with complex build workflows that make it hard to integrate the Jib Maven or Gradle plugin. It is a standalone application that is powered by Jib Core and offers two commands:

  • Build: Builds images from the filesystem content.

  • Jar: Examines your JAR and builds an image with optimized layers or containerizes the JAR as-is.

Check out the Jib CLI section of the FAQ for more information.

How do I run the image I built?

If you built your image directly to the Docker daemon using jib:dockerBuild (Maven) or jibDockerBuild (Gradle), you simply need to use docker run <image name>.

If you built your image to a registry using jib:build (Maven) or jib (Gradle), you will need to pull the image using docker pull <image name> before using docker run.
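
For example, assuming a hypothetical image named gcr.io/my-gcp-project/my-app:latest that was pushed with jib:build (the port mapping assumes your application listens on 8080):

docker pull gcr.io/my-gcp-project/my-app:latest
docker run --rm -p 8080:8080 gcr.io/my-gcp-project/my-app:latest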

To run your image on Kubernetes, you can use kubectl:

kubectl run jib-deployment --image=<image name>

For more information, see steps 4-6 of the Kubernetes Engine deployment tutorial.

Where is bash?

By default, Jib Maven and Gradle plugin versions prior to 3.0 used distroless/java as the base image, which did not have a shell program (such as sh, bash, or dash). However, recent Jib tools use default base images that come with shell programs: Adoptium Eclipse Temurin (formerly AdoptOpenJDK) and Jetty for WAR projects.

Note that you can always set a different base image. Jib's default choice of Temurin (or AdoptOpenJDK) does not imply any endorsement of it; you should do your due diligence to choose the image that works best for you. Also note that the default base image is unpinned (the tag can point to different images over time), so we recommend configuring a base image with a SHA digest for strong reproducibility.

  • Configuring a base image in Maven

    <configuration>
      <from>
        <image>openjdk:11-jre-slim@sha256:...</image>
      </from>
    </configuration>
  • Configuring a base image in Gradle

    jib.from.image = 'openjdk:11-jre-slim@sha256:...'
  • Configuring a base image in Jib CLI

    $ jib jar --from openjdk:11-jre-slim@sha256:... --target ... app.jar
    

What image format does Jib use?

Jib currently builds into the Docker V2.2 image format or OCI image format.

Maven

See Extended Usage for the <container><format> configuration.

Gradle

See Extended Usage for the container.format configuration.

Why is my image created 48+ years ago?

For reproducibility purposes, Jib sets the creation time of the container images to the Unix epoch (00:00:00, January 1st, 1970 in UTC). If you would like to use a different timestamp, set the jib.container.creationTime / <container><creationTime> parameter to an ISO 8601 date-time. You may also use the value USE_CURRENT_TIMESTAMP to set the creation time to the actual build time, but this sacrifices reproducibility since the timestamp will change with every build.

Setting creationTime parameter (click to expand)

Maven

<configuration>
  <container>
    <creationTime>2019-07-15T10:15:30+09:00</creationTime>
  </container>
</configuration>

Gradle

jib.container.creationTime = '2019-07-15T10:15:30+09:00'

Note that the modification time of the files that Jib puts into the image will still be 1 second past the epoch. The file modification time can be configured using <container><filesModificationTime> (Maven) or jib.container.filesModificationTime (Gradle).
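
If you prefer not to edit the build file, both parameters can also be passed as properties on the command line. A minimal sketch with Maven (the property names follow the jib.container.* convention; verify they are supported by your plugin version):

mvn compile jib:build \
  -Djib.container.creationTime=USE_CURRENT_TIMESTAMP \
  -Djib.container.filesModificationTime=2019-07-15T10:15:30+09:00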

Please tell me more about reproducibility!

Reproducible means that given the same inputs, a build should produce the same outputs. Container images are uniquely identified by a digest (a hash) of the image contents and image metadata. Tools and infrastructure (such as the Docker daemon, Docker Hub, registries, Kubernetes, etc.) treat images with different digests as being different.

To ensure that a Jib build is reproducible — that the rebuilt container image has the same digest — Jib adds files and directories in a consistent order, and sets consistent creation- and modification-times and permissions for all files and directories. Jib also ensures that the image metadata is recorded in a consistent order, and that the container image has a consistent creation time. To ensure consistent times, files and directories are recorded as having a creation and modification time of 1 second past the Unix Epoch (1970-01-01 00:00:01.000 UTC), and the container image is recorded as being created on the Unix Epoch. Setting container.creationTime to USE_CURRENT_TIMESTAMP and then rebuilding an image will produce a different timestamp for the image creation time, and so the container images will have different digests and appear to be different.

For more details see reproducible-builds.org.
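
One way to see this in action is to build the same project to the Docker daemon twice and compare the resulting image IDs (the ID is a digest of the image configuration). A sketch assuming the jib-maven-plugin is configured; with the default epoch timestamps the two IDs should be identical:

mvn compile jib:dockerBuild -Dimage=my-app:first
mvn compile jib:dockerBuild -Dimage=my-app:second
docker inspect --format='{{.Id}}' my-app:first my-app:second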

Where is the application in the container filesystem?

Jib packages your Java application into the following paths on the image:

  • /app/libs/ contains all the dependency artifacts
  • /app/resources/ contains all the resource files
  • /app/classes/ contains all the classes files
  • the contents of the extra directory (default src/main/jib) are placed relative to the container's root directory (/)

How are Jib applications layered?

Jib makes use of layering to allow for fast rebuilds - it will only rebuild the layers containing files that changed since the previous build and will reuse cached layers containing files that didn't change. Jib organizes files in a way that groups frequently changing files separately from large, rarely changing files. For example, SNAPSHOT dependencies are placed in a separate layer from other dependencies, so that a frequently changing SNAPSHOT will not force the entire dependency layer to rebuild itself.

Jib applications are split into the following layers:

  • All other dependencies
  • Snapshot dependencies
  • Project dependencies
  • Resources
  • Classes
  • Each extra directory (jib.extraDirectories in Gradle, <extraDirectories> in Maven) builds to its own layer
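
You can see the resulting layers and their sizes by inspecting an image you built locally, for example (my-app:latest is a placeholder image name):

docker history --no-trunc my-app:latest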

Which base image (JDK) does Jib use?

eclipse-temurin by Adoptium (formerly adoptopenjdk) and jetty (for WAR). See Default Base Images in Jib for details.

Can I learn more about container images?

If you'd like to learn more about container images, @coollog has a guide: Build Containers the Hard Way, which takes a deep dive into everything involved in getting your code into a container and onto a container registry.

Configuring Jib

How do I set parameters for my image at runtime?

JVM Flags

For the default base image, you can use the JAVA_TOOL_OPTIONS environment variable (note that other JRE images may require using other environment variables):

Using Docker: docker run -e "JAVA_TOOL_OPTIONS=<JVM flags>" <image name>

Using Kubernetes:

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: <name>
    image: <image name>
    env:
    - name: JAVA_TOOL_OPTIONS
      value: <JVM flags>

Note that many JVMs may only support a max length of 1024 characters for the JAVA_TOOL_OPTIONS environment variable, and anything longer than this may be cut off by the JVM.

For Java 9+, often you may want to use JDK_JAVA_OPTIONS instead of JAVA_TOOL_OPTIONS.

Other Environment Variables

Using Docker: docker run -e "NAME=VALUE" <image name>

Using Kubernetes:

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: <name>
    image: <image name>
    env:
    - name: NAME
      value: VALUE

Arguments to Main

Using Docker: docker run <image name> <arg1> <arg2> <arg3>

Using Kubernetes:

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: <name>
    image: <image name>
    args:
    - <arg1>
    - <arg2>
    - <arg3>

For more information, see the JAVA_TOOL_OPTIONS environment variable, the docker run -e reference, and defining environment variables for a container in Kubernetes.

Can I define a custom entrypoint at runtime?

Normally, the plugin sets a default entrypoint for java applications, or lets you configure a custom entrypoint using the container.entrypoint configuration parameter. You can also override the default/configured entrypoint by defining a custom entrypoint when running the container. See docker run --entrypoint reference for running the image with Docker and overriding the entrypoint command, or see Define a Command and Arguments for a Container for running the image in a Kubernetes Pod and overriding the entrypoint command.
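
For example, to bypass the configured entrypoint and open an interactive shell in the container (a sketch; my-app:latest is a placeholder, and the base image must provide a shell, as discussed in "Where is bash?"):

docker run --rm -it --entrypoint /bin/sh my-app:latest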

I want to containerize a JAR.

The intention of Jib is to add individual class files, resources, and dependency JARs into the container instead of a single JAR. This lets Jib choose an opinionated, optimal layout for the application on the container image, which also allows it to skip the extra JAR-packaging step.

However, you can set <containerizingMode>packaged (Maven) or jib.containerizingMode = 'packaged' (Gradle) to containerize a JAR, but note that your application will always be run via java -cp ... your.MainClass (even if it is an executable JAR). Some disadvantages of setting containerizingMode='packaged':

  • You need to run the JAR-packaging step (mvn package in Maven or the jar task in Gradle).
  • Reduced granularity in building and caching: if any of your Java source files or resource files are updated, not only does the JAR have to be rebuilt, but the entire layer containing the JAR in the image has to be recreated and pushed to the destination.
  • If it is a fat or shaded JAR embedding all dependency JARs, you are duplicating the dependency JARs in the image. Worse, it reduces build and cache granularity even further, as dependency JARs can be huge and all of them must be pushed repeatedly even if they do not change.
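
If these trade-offs are acceptable, the mode can also be switched on the command line through a property rather than the build file. A sketch (the jib.containerizingMode property should be supported by recent plugin versions):

mvn package jib:build -Djib.containerizingMode=packaged
gradle jib -Djib.containerizingMode=packaged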

Note that for runnable JARs/WARs, currently Jib does not natively support creating an image that runs a JAR (or WAR) through java -jar runnable.jar (although it is not impossible to configure Jib to do so at the expense of more complex project setup.)

I need to RUN commands like apt-get.

Running commands like apt-get slows down the container build process. We do not recommend or support running commands as part of the build.

However, if you need to run commands, you can build a custom image and configure Jib to use it as the base image.

Base image configuration examples (click to expand)

Maven

In jib-maven-plugin, you can then use this custom base image by adding the following configuration:

<configuration>
  <from>
    <image>custom-base-image</image>
  </from>
</configuration>

Gradle

In jib-gradle-plugin, you can then use this custom base image by adding the following configuration:

jib.from.image = 'custom-base-image'

Can I ADD a custom directory to the image?

Yes, using the extra directories feature. See the Maven and Gradle docs for examples.

I need to add files generated during the build process to a custom directory on the image.

If the current extra directories design doesn't meet your needs (e.g. you need to set up the extra files directory with files generated during the build process), you can use additional goals/tasks to create the extra directory as part of your build.

File copying examples (click to expand)

Maven

In Maven, you can use the maven-resources-plugin to copy files to your extra directory. For example, if you generate files in target/generated/files and want to add them to /my/files on the container, you can add the following to your pom.xml:

<plugins>
  ...
  <plugin>
    <groupId>com.google.cloud.tools</groupId>
    <artifactId>jib-maven-plugin</artifactId>
    ...
    <configuration>
      <extraDirectories>
        <paths>
          <path>${project.basedir}/target/extra-directory/</path>
        </paths>
      </extraDirectories>
    </configuration>
  </plugin>
  ...
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <version>3.2.0</version>
    <configuration>
      <outputDirectory>${project.basedir}/target/extra-directory/my/files</outputDirectory>
      <resources>
        <resource>
          <directory>${project.basedir}/target/generated/files</directory>
        </resource>
      </resources>
    </configuration>
  </plugin>
  ...
</plugins>

Note that as shown above, the copy-resources goal is not bound to a life-cycle phase. Since the files here come from your build output, either bind the goal (in an <executions> block) to a phase that runs after compile, or run it manually:

mvn compile resources:copy-resources jib:build

Gradle

The same can be accomplished in Gradle by using a Copy task. In your build.gradle:

jib.extraDirectories.paths = ['build/extra-directory']

task setupExtraDir(type: Copy) {
  from file('build/generated/files')
  into file('build/extra-directory/my/files')
}
tasks.jib.dependsOn setupExtraDir

The files will be copied to your extra directory when you run the jib task.

Can I build to a local Docker daemon?

There are several ways of doing this:

  • Use jib:dockerBuild for Maven or jibDockerBuild for Gradle to build directly to your local Docker daemon.
  • Use jib:buildTar for Maven or jibBuildTar for Gradle to build the image to a tarball, then use docker load --input to load the image into Docker (the tarball built with these commands will be located in target/jib-image.tar for Maven and build/jib-image.tar for Gradle by default); see the example after this list.
  • docker pull the image built with Jib to have it available in your local Docker daemon.
  • Alternatively, instead of using a Docker daemon, you can run a local container registry, such as Docker registry or other repository managers, and point Jib to push to the local registry.
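
For example, the tarball route with Maven might look like this (paths per the defaults noted above):

mvn compile jib:buildTar
docker load --input target/jib-image.tar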

How do I enable debugging?

Use the JAVA_TOOL_OPTIONS environment variable to pass along debugging configuration arguments. For example, to have the remote VM accept local debug connections on port 5005 but not suspend:

-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=localhost:5005

Then connect your debugger to port 5005 on your local host. You can port-forward the container port to a localhost port for easy access.

Using Docker: docker run -p 5005:5005 <image>

Using Kubernetes: kubectl port-forward <pod name> 5005:5005

Beware: in Java 8 and earlier, specifying only a port meant that the JDWP socket was open to all incoming connections, which is insecure. It is recommended to limit the debug port to localhost.
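
Putting the pieces together with Docker, a debugging session might be started like this (my-app:latest is a placeholder; the JDWP flag is the one shown above):

docker run -p 5005:5005 \
  -e "JAVA_TOOL_OPTIONS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=localhost:5005" \
  my-app:latest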

I would like to run my application with a javaagent.

You can run your container with a javaagent by placing it somewhere in the src/main/jib/myfolder directory to add it to the container's filesystem, then pointing to it using Jib's container.jvmFlags configuration.

Maven

<configuration>
  <container>
    <jvmFlags>
      <jvmFlag>-javaagent:/myfolder/agent.jar</jvmFlag>
    </jvmFlags>    
  </container>
</configuration>

Gradle

jib.container.jvmFlags = ['-javaagent:/myfolder/agent.jar']


How can I tag my image with a timestamp?

Maven

To tag the image with a simple timestamp, add the following to your pom.xml:

<properties>
  <maven.build.timestamp.format>yyyyMMdd-HHmmssSSS</maven.build.timestamp.format>
</properties>

Then in the jib-maven-plugin configuration, set the tag to:

<configuration>
  <to>
    <image>my-image-name:${maven.build.timestamp}</image>
  </to>
</configuration>

You can then use the same timestamp to reference the image in other plugins.

Gradle

To tag the image with a timestamp, simply set the timestamp as the tag for to.image in your jib configuration. For example:

jib.to.image = 'gcr.io/my-gcp-project/my-app:' + System.nanoTime()
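
With either build tool you can also inject a timestamp tag at invocation time through the jib.to.image property instead of hard-coding it; a sketch:

mvn compile jib:build -Djib.to.image=gcr.io/my-gcp-project/my-app:$(date +%Y%m%d-%H%M%S)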

What would a Dockerfile for a Jib-built image look like?

A Dockerfile that performs a Jib-like build is shown below:

# Jib uses Adoptium Eclipse Temurin (formerly AdoptOpenJDK).
FROM eclipse-temurin:11-jre

# Multiple copy statements are used to break the app into layers,
# allowing for faster rebuilds after small changes
COPY dependencyJars /app/libs
COPY snapshotDependencyJars /app/libs
COPY projectDependencyJars /app/libs
COPY resources /app/resources
COPY classFiles /app/classes

# Jib's extra directory ("src/main/jib" by default) is used to add extra, non-classpath files
COPY src/main/jib /

# Jib's default entrypoint when container.entrypoint is not set
ENTRYPOINT ["java", jib.container.jvmFlags, "-cp", "/app/resources:/app/classes:/app/libs/*", jib.container.mainClass]
CMD [jib.container.args]

When unset, Jib will infer the value for jib.container.mainClass.

Some plugins, such as the Docker Prepare Gradle Plugin, will even automatically generate a Docker context for your project, including a Dockerfile.

How can I inspect the image Jib built?

To inspect the image produced by the build, you can use commands such as docker inspect your/image:tag to view the image configuration, or you can export the image using docker save and inspect its contents manually. Other tools, such as dive, provide a nicer UI for inspecting the image.
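
For example, to list the contents of a locally available image without running it (my-app:latest is a placeholder; the layer archives inside the saved tarball are themselves tarballs that you can extract and list the same way):

docker save my-app:latest -o my-app.tar
tar -tvf my-app.tar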

How do I specify a platform in the manifest list (or OCI index) of a base image?

Newer Jib versions added an incubating feature that provides support for selecting base images with the desired platforms from a manifest list. For example,

Maven

  <from>
    <image>... image reference to a manifest list ...</image>
    <platforms>
      <platform>
        <architecture>amd64</architecture>
        <os>linux</os>
      </platform>
      <platform>
        <architecture>arm64</architecture>
        <os>linux</os>
      </platform>
    </platforms>
  </from>

Gradle

jib.from {
  image = '... image reference to a manifest list ...'
  platforms {
    platform {
      architecture = 'amd64'
      os = 'linux'
    }
    platform {
      architecture = 'arm64'
      os = 'linux'
    }
  }
}

When not specified, the default is a single "amd64/linux" platform, which is backward-compatible with the previous behavior.

When multiple platforms are specified, Jib creates and pushes a manifest list (also known as a fat manifest) after building and pushing all the images for the specified platforms.

As an incubating feature, there are certain limitations:

  • OCI image indices are not supported (as opposed to Docker manifest lists).
  • Only architecture and os are supported. If the base image manifest list contains multiple images with the given architecture and os, the first image will be selected.
  • Does not support using a local Docker daemon or tarball image for a base image.
  • Does not support pushing to a Docker daemon (jib:dockerBuild / jibDockerBuild) or building a local tarball (jib:buildTar / jibBuildTar).

Make sure to specify a manifest list in <from><image> (whether by a tag name or a digest (@sha256:...)). For troubleshooting, you may want to check what platforms the manifest list contains. To view a manifest list, enable experimental docker CLI features and then run the manifest inspect command.

$ docker manifest inspect openjdk:8
{         
   ...
   // This confirms that openjdk:8 points to a manifest list.
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         // This entry in the list points to the manifest for the ARM64/Linux image.
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         ...
         "digest": "sha256:1fbd49e3fc5e53154fa93cad15f211112d899a6b0c5dc1e8661d6eb6c18b30a6",
         "platform": {
            "architecture": "arm64",
            "os": "linux",
            "variant": "v8"
         }
      }
   ]
}

I want to exclude files from layers, have more fine-grained control over layers, change file ownership, etc.

See "Jib build plugins don't have the feature that I need".

Jib build plugins don't have the feature that I need.

The Jib build plugins have an extension framework that enables anyone to easily extend Jib's behavior to their needs. We maintain select first-party plugins for popular use cases like fine-grained layer control and Quarkus support, but anyone can write and publish an extension. Check out the jib-extensions repository for more information.

I am hitting Docker Hub rate limits. How can I configure registry mirrors?

See the Maven, Gradle or Jib CLI docs. Note that the example in the docs uses Google's Docker Hub mirror on mirror.gcr.io.

Starting from Jib build plugins 3.0, Jib by default uses base images on Docker Hub, so you may start to encounter the rate limits if you are not explicitly setting a base image.

Some other alternatives to get around the rate limits:

  • Prevent Jib from accessing Docker Hub (after Jib cached a base image locally).
    • Pin to a specific base image using a SHA digest. For example, jib.from.image='eclipse-temurin:11-jre@sha256:...'. If you are not setting a base image with a SHA digest (which is the case if you don't set jib.from.image at all), then every time Jib runs, it reaches out to the registry to check if the base image is up-to-date. On the other hand, if you pin to a specific image with a digest, then the image is immutable. Therefore, if Jib has cached the image once (by running Jib online once to fully cache the image), Jib will not reach out to the Docker Hub. See this Stack Overflow answer for more details.
    • (Maven/Gradle plugins only) Do offline building. Pass --offline to Maven or Gradle. Before that, you need to run Jib online once to cache the image. However, --offline means you cannot push to a remote registry. See this Stack Overflow answer for more details.
    • Retrieve a base image from a local Docker daemon. Store an image to your local Docker daemon, and set, say, jib.from.image='docker://eclipse-temurin:11-jre'. It can be slow for an initial build where Jib has to cache the image in Jib's format. For performance reasons, we usually recommend using an image on a registry.
    • Set up a local registry, store a base image in it, and read the base image from the local registry. Setting up a local registry is as simple as running docker run -d -p5000:5000 registry:2, but the whole process is a bit more involved; see the sketch after this list.
  • Retry with increasing backoffs. For example, using the retry tool.
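
As a rough sketch of the local-registry approach (the base image, port, and names are examples; reading from a plain-HTTP local registry may additionally require allowing insecure registries, e.g. -Djib.allowInsecureRegistries=true):

docker run -d -p 5000:5000 --name registry registry:2
docker pull eclipse-temurin:11-jre
docker tag eclipse-temurin:11-jre localhost:5000/eclipse-temurin:11-jre
docker push localhost:5000/eclipse-temurin:11-jre
# then set jib.from.image = 'localhost:5000/eclipse-temurin:11-jre'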

Where is the global Jib configuration file and how can I configure it?

See the Maven, Gradle or Jib CLI docs.

Build Problems

How can I diagnose problems pulling or pushing from remote registries?

There are a few reasons why Jib may be unable to connect to a remote registry. The most common ones, covered in the sections below, involve missing permissions, misconfigured credentials, proxies, and registries served over plain HTTP.

What should I do when the registry responds with Forbidden or DENIED?

If the registry returns 403 Forbidden or "code":"DENIED", it often means Jib successfully authenticated using your credentials but the credentials do not have permissions to pull or push images. Make sure your account/role has the permissions to do the operation.

Depending on registry implementations, it is also possible that the registry actually meant you are not authenticated. See What should I do when the registry responds with UNAUTHORIZED? to ensure you have set up credentials correctly.

What should I do when the registry responds with UNAUTHORIZED?

If the registry returns 401 Unauthorized or "code":"UNAUTHORIZED", it is often due to credential misconfiguration. Examples:

  • You did not configure auth information in the default places where Jib searches. (See also Authentication Methods).

    • Docker credential file (as generated by docker login or podman login) at

      • $XDG_RUNTIME_DIR/containers/auth.json, $XDG_CONFIG_HOME/containers/auth.json or $HOME/.config/containers/auth.json
      • $DOCKER_CONFIG/config.json
      • $HOME/.docker/config.json

      This is one of the configuration files for the docker or podman command line tool. See the configuration files document (in particular the credential store and credential helper sections) for how to configure auth. For example, you can run docker login to save auth in config.json, but it is often recommended to configure a credential helper (also configurable in config.json).

      If Jib was able to retrieve auth information from a Docker credential file, you should see a log message similar to Using credentials from Docker config (/home/myuser/.docker/config.json) where you can verify which credential file was picked up by Jib.

    • Jib configurations

  • For Google Container Registry (GCR), the Container Registry API is not yet enabled for your project.

    • You can enable the API from the Cloud Console or with the following Cloud SDK command: gcloud services enable containerregistry.googleapis.com
  • $HOME/.docker/config.json may also contain short-lived authorizations in the auths block that may have expired. In the case of Google Container Registry, if you had previously used gcloud docker to configure these authorizations, you should remove these stale authorizations by editing your config.json and deleting lines from auths associated with gcr.io (for example: "https://asia.gcr.io"). You can then run gcloud auth configure-docker to correctly configure the credHelpers block for more robust interactions with gcr.

  • Different auth configurations exist in multiple places, and Jib is not picking up the auth information you are working on.

  • You configured a credential helper, but the helper is not on $PATH. This is especially common when running Jib inside IDE where the IDE binary is launched directly from an OS menu and does not have access to your shell's environment.

  • Configured credentials have access to the base image repository but not to the target image repository (or vice versa).

  • Typos in username, password, image names, repository names, or registry names. This is a very common error.

  • Image names do not conform to the structure or policy that a registry requires. For example, Docker Hub returns 401 Unauthorized when trying to use a multi-level repository name.

  • Incorrect port number in image references (registry.hostname:<port>/...).

  • You are using a private registry without HTTPS. See How can I diagnose problems pulling or pushing from remote registries?.

Note, if Jib was able to retrieve credentials, you should see a log message like these:

Using credentials from Docker config (/home/user/.docker/config.json) for localhost:5000/java
Using credential helper docker-credential-gcr for gcr.io/project/repo
Using credentials from Maven settings file for gcr.io/project/repo
Using credentials from <from><auth> for gcr.io/project/repo
Using credentials from to.auth for gcr.io/project/repo

If you encounter issues interacting with a registry other than UNAUTHORIZED, check "How can I diagnose problems pulling or pushing from remote registries?".

How do I configure a proxy?

Jib currently requires configuring your build tool to use the appropriate Java networking properties (https.proxyHost, https.proxyPort, https.proxyUser, https.proxyPassword).
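
For example, with Maven the properties can be passed on the command line (the proxy host, port, and credentials here are placeholders); Gradle users would typically put equivalent systemProp.https.proxyHost=... entries in gradle.properties:

mvn compile jib:build \
  -Dhttps.proxyHost=proxy.example.com \
  -Dhttps.proxyPort=3128 \
  -Dhttps.proxyUser=me \
  -Dhttps.proxyPassword=secret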

How can I examine network traffic?

It can be useful to examine network traffic to diagnose connectivity issues. Jib uses the Google HTTP client library to interact with registries which logs HTTP requests using the JVM-provided java.util.logging facilities. It is very helpful to serialize Jib's actions using the jib.serialize property.

To see the HTTP traffic, create a logging.properties file with the following:

handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=ALL

# CONFIG hides authentication data
# ALL includes authentication data
com.google.api.client.http.level=CONFIG

And then launch your build tool as follows:

mvn --batch-mode -Djava.util.logging.config.file=path/to/logging.properties -Djib.serialize=true ...

or

gradle --no-daemon --console=plain --info -Djava.util.logging.config.file=path/to/logging.properties -Djib.serialize=true ...

Note: Jib Gradle plugins prior to version 2.2.0 have an issue generating HTTP logs (#2356).

You may want to enable the debug logs too (-X for Maven, or --debug --stacktrace for Gradle).

When configured correctly, you should see logs like this:

Mar 31, 2020 9:55:52 AM com.google.api.client.http.HttpResponse <init>
CONFIG: -------------- RESPONSE --------------
HTTP/1.1 202 Accepted
Content-Length: 0
Docker-Distribution-Api-Version: registry/2.0
Docker-Upload-Uuid: 6292f0d7-93cb-4a8e-8336-78a1bf7febd2
Location: https://registry-1.docker.io/v2/...
Range: 0-657292
Date: Tue, 31 Mar 2020 13:55:52 GMT
Strict-Transport-Security: max-age=31536000

Mar 31, 2020 9:55:52 AM com.google.api.client.http.HttpRequest execute
CONFIG: -------------- REQUEST  --------------
PUT https://registry-1.docker.io/v2/...
Accept:
Accept-Encoding: gzip
Authorization: <Not Logged>
User-Agent: jib 2.1.1-SNAPSHOT jib-maven-plugin Google-HTTP-Java-Client/1.34.0 (gzip)

How do I view debug logs for Jib?

Maven: use mvn -X -Djib.serialize=true to enable more detailed logging and serialize Jib's actions.

Gradle: use gradle --debug -Djib.serialize=true to enable more detailed logging and serialize Jib's actions.

I am seeing Method Not Found or Class Not Found errors when building.

Sometimes when upgrading your Gradle build plugin versions, you may experience errors due to mismatching versions of dependencies pulled in (for example: issues/2183). This can be due to the buildscript classpath loading behavior described on gradle forums.

This commonly appears in multi-module Gradle projects. A solution is to define all of your plugins in the root project and apply them selectively in your subprojects as needed. This should help alleviate the problem of the buildscript classpath using older versions of a library.

build.gradle (root)

plugins {
  id 'com.google.cloud.tools.jib' version 'x.y.z' apply false
}

build.gradle (sub-project)

plugins {
  id 'com.google.cloud.tools.jib'
}

I am seeing Unsupported class file major version when building.

When you use a recent Java version to write an app (or an old version of Jib), you may see this error coming from Jib when building an image (not when compiling your code):

Failed to execute goal com.google.cloud.tools:jib-maven-plugin:3.2.0:dockerBuild (default-cli) on project demo: Execution default-cli of goal com.google.cloud.tools:jib-maven-plugin:3.2.0:dockerBuild failed: Unsupported class file major version 61

Jib uses the ASM library to examine compiled Java bytecode to automatically infer a main class (in other words, the class that defines public static void main() to start your app). In this way, if you have only one such class, Jib can automatically infer and use that class to set an image entrypoint (basically, a command to start your app). When new Java versions come out, often the ASM library version used in Jib doesn't support the new bytecode format. If this is the case, check if you are using the latest Jib. If you still get the error with the latest Jib, file a bug to have the Jib team upgrade the ASM library.

Workaround: to prevent Jib from doing auto-inference, you can manually set your desired main class via <container><mainClass> (for example, <container><mainClass>com.example.your.Main</mainClass>). As with other Jib parameters, it can be set through system/Maven properties or on the command line (for example, -Djib.container.mainClass=...).

Note that although the ASM library is the common cause of this error coming from Jib, it may be due to other reasons. Always check the full stack (-e or -X for Maven and --stacktrace for Gradle) to see where the error is coming from.

I am seeing NoClassDefFoundError: com/github/luben/zstd/ZstdOutputStream when building.

Jib supports base image layers with the media type application/vnd.oci.image.layer.v1.tar+zstd, i.e. compressed with the zstd algorithm instead of gzip.

However, the dependency on zstd is optional, so pulling such layers will result in:

java.lang.NoClassDefFoundError: com/github/luben/zstd/ZstdOutputStream
at org.apache.commons.compress.compressors.zstandard.ZstdCompressorOutputStream.<init>

This can be solved by adding a dependency on the artifact com.github.luben:zstd-jni:1.5.2-3 to the plugin's classpath.

Launch Problems

I am seeing ImagePullBackoff on my pods (in minikube).

When you use your private image built with Jib in a Kubernetes cluster, the cluster needs to be configured with credentials to pull the image. This involves 1) creating a Secret, and 2) using the Secret as imagePullSecrets.

kubectl create secret docker-registry registry-json-key \
  --docker-server=<registry> \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<any valid email address>

kubectl patch serviceaccount default \
  -p '{"imagePullSecrets":[{"name":"registry-json-key"}]}'

For example, if you are using GCR, the commands would look like (see Advanced Authentication Methods):

kubectl create secret docker-registry gcr-json-key \
  --docker-server=https://gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat keyfile.json)" \
  --docker-email=<any valid email address>

kubectl patch serviceaccount default \
  -p '{"imagePullSecrets":[{"name":"gcr-json-key"}]}'

See more at Using Google Container Registry (GCR) with Minikube.

Why won't my container start?

There are some common reasons why containers fail on launch.

My shell script won't run.

Jib Maven and Gradle plugins prior to 3.0 used Distroless Java as the default base image, which does not have a shell. See Where is bash? for more details.

The container fails with exec errors.

A Jib user reported an error launching their container:

standard_init_linux.go:211 exec user process caused "no such file or directory"

On examining the container structure with Dive, the user discovered that the contents of the /lib directory had disappeared.

The user had used Jib's ability to install extra files into the image (Maven, Gradle) to install a library file by placing it in src/main/jib/lib/libfoo.so. This would normally cause libfoo.so to be installed in the image as /lib/libfoo.so. But /lib and /lib64 in the user's base image were symbolic links, and Jib does not follow such symbolic links when creating the image: the extra-files layer recorded /lib as a real directory. At container initialization time, that directory replaced the symbolic link, so none of the system shared libraries could be resolved and dynamically-linked programs failed.

Solution: The user installed the file in a different location.

Jib CLI

How does the jar command support Standard JARs?

The Jib CLI supports both thin JARs (where dependencies are specified in the JAR's manifest) and fat JARs.

The current limitation of using a fat JAR is that the embedded dependencies will not be placed into the designated dependencies layers. They will instead be placed into the classes or resources layer. Therefore, for efficiency, we recommend against containerizing fat JARs (Spring Boot fat JARs are an exception) if you can prepare thin JARs. We hope to have better support for fat JARs in the future.

A standard JAR can be containerized by the jar command in two modes, exploded or packaged.

Exploded Mode (Recommended)

Achieved by calling jib jar --target ${TARGET_REGISTRY} ${JAR_NAME}.jar

The default mode for containerizing a JAR. It will open up the JAR and optimally place files into the following layers:

  • Other Dependencies Layer
  • Snapshot-Dependencies Layer
  • Resources Layer
  • Classes Layer

Entrypoint: java -cp /app/dependencies/:/app/explodedJar/ ${MAIN_CLASS}

Packaged Mode

Achieved by calling jib jar --target ${TARGET_REGISTRY} ${JAR_NAME}.jar --mode packaged.

It will result in the following layers on the container:

  • Dependencies Layer
  • Jar Layer

Entrypoint: java -jar ${JAR_NAME}.jar

How does the jar command support Spring Boot JARs?

The jar command currently supports containerization of Spring Boot fat JARs. A Spring-Boot fat JAR can be containerized in two modes, exploded or packaged.

Exploded Mode (Recommended)

Achieved by calling jib jar --target ${TARGET_REGISTRY} ${JAR_NAME}.jar

The default mode for containerizing a JAR. It will respect layers.idx in the JAR (if present) or create optimized layers in the following format:

  • Other Dependencies Layer
  • Spring-Boot-Loader Layer
  • Snapshot-Dependencies Layer
  • Resources Layer
  • Classes Layer

Entrypoint: java -cp /app org.springframework.boot.loader.JarLauncher

Packaged Mode

Achieved by calling jib jar --target ${TARGET_REGISTRY} ${JAR_NAME}.jar --mode packaged

It will containerize the JAR as is. However, note that we highly recommend against using packaged mode for containerizing Spring Boot fat JARs.

Entrypoint: java -jar ${JAR_NAME}.jar

How does the war command work?

The war command currently supports containerization of standard WARs. It uses the official jetty image on Docker Hub as the default base image and explodes the WAR into /var/lib/jetty/webapps/ROOT on the container. It creates the following layers:

  • Other Dependencies Layer
  • Snapshot-Dependencies Layer
  • Resources Layer
  • Classes Layer

The default entrypoint when using a jetty base image will be java -jar /usr/local/jetty/start.jar --module=ee10-deploy unless you choose to specify a custom one.

You can use a different Servlet engine base image with the help of the --from option and customize --app-root, --entrypoint and --program-args. If you don't set the entrypoint or program-arguments, Jib will inherit them from the base image. However, setting the --app-root is required if you use a non-jetty base image. Here is how the war command may look if you're using a Tomcat image:

 $ jib war --target=<image-reference> myapp.war --from=tomcat:8.5-jre8-alpine --app-root=/usr/local/tomcat/webapps/ROOT