Reposted from: https://shaneutt.com/blog/rust-fast-small-docker-image-builds/
In this post I’m going to demonstrate how to create small, quickly built Docker images for Rust applications.
We’ll start by creating a simple test application, and then building and iterating on a Dockerfile.
Requirements
Ensure you have the following installed:

- rustup
- docker
Setup: demo app setup
Make sure you have, and are using, the latest stable Rust with rustup:

```sh
rustup default stable
rustup update
```
Create a new project called “myapp”:
```sh
cargo new myapp
cd myapp/
```
Setup: initial dockerfile
The following is a starting place for our docker build; create a file named Dockerfile in the current directory:

```dockerfile
FROM rust:latest

WORKDIR /usr/src/myapp
COPY . .

RUN cargo build --release
RUN cargo install --path .

CMD ["/usr/local/cargo/bin/myapp"]
```
And also create a .dockerignore file with the following contents:

```
target/
Dockerfile
```
You can test building and running the app with:

```sh
docker build -t myapp .
docker run --rm -it myapp
```

If everything is working properly, you should see the response `Hello, world!`.
Problems with our initial docker build
At the time of writing, Rust’s package manager cargo has an open issue: it lacks a --dependencies-only option for building dependencies independently. Without such an option, the application’s dependencies are rebuilt on every change to the src/ contents, when we really only want dependencies rebuilt when the Cargo.toml or Cargo.lock files change (e.g. when dependencies are added or updated).
As an additional problem, while the rust:latest Docker image is great for building, it’s a fairly large image coming in at over 1.5GB in size.
Improving builds so that dependencies don’t rebuild on src/ file changes
To avoid this problem and make better use of the docker build cache so that builds are quicker, let’s start by modifying our Cargo.toml to add a dependency:

```toml
[package]
name = "myapp"
version = "0.1.0"

[dependencies]
rand = "0.5.5"
```
We’ve added a new crate named rand as a dependency to our project; it provides convenient random number generation utilities.
Now if we run:

```sh
docker build -t myapp .
```

It will build the rand dependency and add it to the cache, but changing src/main.rs will invalidate that cache on the next build:
```sh
cat <<EOF > src/main.rs
fn main() {
    println!("I've been updated!");
}
EOF

docker build -t myapp .
```
Notice that this build again had to rebuild the rand dependency.
While we wait on a --dependencies-only build option for cargo, we can work around the problem by changing our Dockerfile to build the dependencies against a placeholder src/main.rs before we COPY any of our code into the build:
```dockerfile
FROM rust:latest

WORKDIR /usr/src/myapp

COPY Cargo.toml Cargo.toml
RUN mkdir src/
RUN echo "fn main() {println!(\"if you see this, the build broke\")}" > src/main.rs
RUN cargo build --release
RUN rm -f target/release/deps/myapp*

COPY . .
RUN cargo build --release
RUN cargo install --path .

CMD ["/usr/local/cargo/bin/myapp"]
```
The following line from the above Dockerfile causes the subsequent cargo build to rebuild only our application:

```dockerfile
RUN rm -f target/release/deps/myapp*
```
So now if we build:

```sh
docker build -t myapp .
```
And then make another change to src/main.rs:

```sh
cat <<EOF > src/main.rs
fn main() {
    println!("I've been updated yet again!");
}
EOF
```
We’ll find that subsequent docker build runs rebuild only myapp, while the dependencies remain cached for quicker builds.
Reducing the size of the image
The rust:latest image has all the tools we need to build our project, but it is over 1.5GB in size. We can shrink our final image dramatically by using Alpine Linux, an excellent, small Linux distribution. The Alpine team provides a docker image that is only several megabytes in size, yet still offers some shell functionality for debugging, making it a good small base image for our Rust builds.

Using multi-stage docker builds, we can do our build work in rust:latest and then simply copy the app into a final stage based on alpine:latest:
```dockerfile
# ------------------------------------------------------------------------------
# Cargo Build Stage
# ------------------------------------------------------------------------------

FROM rust:latest as cargo-build

WORKDIR /usr/src/myapp

COPY Cargo.toml Cargo.toml
RUN mkdir src/
RUN echo "fn main() {println!(\"if you see this, the build broke\")}" > src/main.rs
RUN cargo build --release
RUN rm -f target/release/deps/myapp*

COPY . .
RUN cargo build --release
RUN cargo install --path .

# ------------------------------------------------------------------------------
# Final Stage
# ------------------------------------------------------------------------------

FROM alpine:latest

COPY --from=cargo-build /usr/local/cargo/bin/myapp /usr/local/bin/myapp

CMD ["myapp"]
```
Now if you run:

```sh
docker build -t myapp .
docker images | grep myapp
```

You should see something like:

```
myapp    latest    03a3838a37bc    7 seconds ago    8.54MB
```
Next: Follow up - fixing and further improving our build
If you tried to run the above example with docker run --rm -it myapp, you probably got an error like:

```
standard_init_linux.go:187: exec user process caused "no such file or directory"
```
If you’re familiar with ldd, you can run the following to see that we’re missing shared libraries for our application:

```sh
docker run --rm -it myapp ldd /usr/local/bin/myapp
```
In the above examples we showed how to avoid rebuilding dependencies on every src/ file change, and how to reduce our image footprint from over 1.5GB to several megabytes. However, our build doesn’t currently work, because we need to build against MUSL libc, the lightweight, fast C standard library that is the default in alpine:latest.
Beyond that, we also want to make sure that our application runs as an unprivileged user inside the container so as to adhere to the principle of least privilege.
Building for MUSL Libc
To build for MUSL libc we’ll need to install the x86_64-unknown-linux-musl target so that cargo can be told to build for it with --target. We’ll also need to point Rust at the musl-gcc linker.

The rust:latest image comes with rustup pre-installed. rustup allows you to install new targets with rustup target add $NAME, so we can modify our Dockerfile as such:
```dockerfile
# ------------------------------------------------------------------------------
# Cargo Build Stage
# ------------------------------------------------------------------------------

FROM rust:latest as cargo-build

RUN apt-get update
RUN apt-get install musl-tools -y
RUN rustup target add x86_64-unknown-linux-musl

WORKDIR /usr/src/myapp

COPY Cargo.toml Cargo.toml
RUN mkdir src/
RUN echo "fn main() {println!(\"if you see this, the build broke\")}" > src/main.rs
RUN RUSTFLAGS=-Clinker=musl-gcc cargo build --release --target=x86_64-unknown-linux-musl
RUN rm -f target/x86_64-unknown-linux-musl/release/deps/myapp*

COPY . .
RUN RUSTFLAGS=-Clinker=musl-gcc cargo build --release --target=x86_64-unknown-linux-musl

# ------------------------------------------------------------------------------
# Final Stage
# ------------------------------------------------------------------------------

FROM alpine:latest

COPY --from=cargo-build /usr/src/myapp/target/x86_64-unknown-linux-musl/release/myapp /usr/local/bin/myapp

CMD ["myapp"]
```
Note the following line, which shows the new way we build the app for MUSL libc:

```sh
RUSTFLAGS=-Clinker=musl-gcc cargo build --release --target=x86_64-unknown-linux-musl
```
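As an alternative to setting RUSTFLAGS on every invocation, cargo can also be told which linker to use for a target via a Cargo configuration file; a sketch (the .cargo/config path is relative to the project root):

```toml
# .cargo/config
[target.x86_64-unknown-linux-musl]
linker = "musl-gcc"
```

With this in place, a plain `cargo build --release --target=x86_64-unknown-linux-musl` picks up the musl-gcc linker without the environment variable.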
Do a fresh build of the app and run it:

```sh
docker build -t myapp .
docker run --rm -it myapp
```

If everything worked properly, you should again see `I've been updated yet again!`.
Running as an unprivileged user
To follow the principle of least privilege, let’s create a user named “myapp” to run myapp as, instead of running it as the root user.

Change the Final Stage of the docker build to the following:
```dockerfile
# ------------------------------------------------------------------------------
# Final Stage
# ------------------------------------------------------------------------------

FROM alpine:latest

RUN addgroup -g 1000 myapp
RUN adduser -D -s /bin/sh -u 1000 -G myapp myapp

WORKDIR /home/myapp/bin/

COPY --from=cargo-build /usr/src/myapp/target/x86_64-unknown-linux-musl/release/myapp .
RUN chown myapp:myapp myapp

USER myapp

CMD ["./myapp"]
```
Update src/main.rs:

```sh
cat <<EOF > src/main.rs
use std::process::Command;

fn main() {
    let mut user = String::from_utf8(Command::new("whoami").output().unwrap().stdout).unwrap();
    user.pop();
    println!("I've once more been updated, and now I run as the user {}!", user)
}
EOF
```
And now build the image and run it:

```sh
docker build -t myapp .
docker run --rm -it myapp
```

If everything worked properly, you should see `I've once more been updated, and now I run as the user myapp!`.
Wrapup!
The complete Dockerfile we now have for building our app while we work on it looks like:
```dockerfile
# ------------------------------------------------------------------------------
# Cargo Build Stage
# ------------------------------------------------------------------------------

FROM rust:latest as cargo-build

RUN apt-get update
RUN apt-get install musl-tools -y
RUN rustup target add x86_64-unknown-linux-musl

WORKDIR /usr/src/myapp

COPY Cargo.toml Cargo.toml
RUN mkdir src/
RUN echo "fn main() {println!(\"if you see this, the build broke\")}" > src/main.rs
RUN RUSTFLAGS=-Clinker=musl-gcc cargo build --release --target=x86_64-unknown-linux-musl
RUN rm -f target/x86_64-unknown-linux-musl/release/deps/myapp*

COPY . .
RUN RUSTFLAGS=-Clinker=musl-gcc cargo build --release --target=x86_64-unknown-linux-musl

# ------------------------------------------------------------------------------
# Final Stage
# ------------------------------------------------------------------------------

FROM alpine:latest

RUN addgroup -g 1000 myapp
RUN adduser -D -s /bin/sh -u 1000 -G myapp myapp

WORKDIR /home/myapp/bin/

COPY --from=cargo-build /usr/src/myapp/target/x86_64-unknown-linux-musl/release/myapp .
RUN chown myapp:myapp myapp

USER myapp

CMD ["./myapp"]
```
From here, see my demo on deploying Rust to Kubernetes on DC/OS with Skaffold. Using some of the techniques in that demo, you could automate deployment of your application to Kubernetes for testing on a local minikube system using Skaffold.
Happy coding!