IMPORTANT LEGAL NOTICE: the Docker images produced by the code in this repository contain the UE4 Engine Tools in both source code and object code form. As per Section 1A of the Unreal Engine EULA, Engine Licensees are prohibited from public distribution of the Engine Tools unless such distribution takes place via the Unreal Marketplace or a fork of the Epic Games UE4 GitHub repository. Public distribution of the built images via an openly accessible Docker Registry (e.g. Docker Hub) is a direct violation of the license terms. It is your responsibility to ensure that any private distribution to other Engine Licensees (such as via an organisation's internal Docker Registry) complies with the terms of the Unreal Engine EULA.
This repository contains a set of Dockerfiles and an accompanying Python build script that allow you to build Docker images for Epic Games' Unreal Engine 4. Key features include:
- The images contain a full source build of the Engine and are suitable for use in a Continuous Integration (CI) pipeline.
- Both Windows containers and Linux containers are supported.
- Running automation tests is supported.
- conan-ue4cli support is included when building UE4 version 4.19.0 or newer.
- An image containing an Installed Build of the Engine is also created for use when packaging Shipping builds of projects, although this behaviour can be disabled by using the `--no-package` flag when invoking the build script.
- When building GPU-enabled Linux images for NVIDIA Docker and also building the `ue4-package` image, UE4Capture support is also built by default, although this behaviour can be disabled by using the `--no-capture` flag when invoking the build script. (An example invocation that disables both of these optional images is shown below.)
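For example, a build that skips both of these optional images might look like the following (the version number is purely illustrative):

```
# Build the images but skip the Installed Build (ue4-package) and UE4Capture (ue4-capture) images
python3 build.py 4.19.2 --no-package --no-capture
```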
For a detailed discussion on how the build process works, see the accompanying article on my website.
- Requirements
- Build script usage
- Building images
- Building a custom version of the Unreal Engine
- Specifying the Windows Server Core base image tag
- Specifying the isolation mode under Windows
- Specifying the directory from which to copy required Windows DLL files
- Building Linux container images under Windows
- Building GPU-enabled Linux container images for use with NVIDIA Docker
- Performing a dry run
- Upgrading from a previous version
- Running automation tests
- Usage with Continuous Integration systems
- Performing cloud rendering using the NVIDIA Docker images
- Troubleshooting common issues
- Windows `hcsshim` timeout issues
- Frequently Asked Questions
The common requirements for both Windows and Linux containers are:
- A minimum of 200GB of available disk space
- A minimum of 8GB of available memory (10GB under Windows)
- Python 3.6 or newer with `pip`
- The dependency packages listed in requirements.txt, which can be installed by running `pip3 install -r requirements.txt`
Building Windows containers also requires:
- Windows 10 Pro/Enterprise or Windows Server 2016 or newer
- Docker For Windows (under Windows 10) or Docker EE For Windows Server (under Windows Server)
- Under Windows 10, the Docker daemon must be configured to use Windows containers instead of Linux containers
- The Docker daemon must be configured to increase the maximum container disk size from the default 20GB limit by following the instructions provided by Microsoft. The 120GB limit specified in the instructions is not quite enough, so set a 200GB limit instead.
- Under Windows Server, you may need to configure the firewall to allow network access to the host from inside Docker containers
- Under Windows Server Core, you will need to copy the following DLL files from a copy of Windows 10 (or Windows Server 2016 with the Desktop Experience feature) and place them in the `C:\Windows\System32` directory:
  - `dsound.dll`
  - `opengl32.dll`
  - `glu32.dll`
- Note that the three DLL files listed above must be copied from the same version of Windows as the Windows Server Core host system (e.g. Windows Server 1709 needs DLLs from Windows 10 1709, Windows Server 1803 needs DLLs from Windows 10 1803, etc.) Although DLLs from an older system version may potentially work, Windows will refuse to load these DLL files if they have been copied from a version of Windows that is newer than the host.
Building Linux containers also requires:
- Windows 10 Pro/Enterprise, Linux or macOS
- Docker For Windows (under Windows 10), Docker CE (under Linux) or Docker For Mac (under macOS)
- Under Windows 10, the Docker daemon must be configured to use Linux containers instead of Windows containers
- Under Windows 10 and macOS, Docker must be configured in the "Advanced" settings pane to allocate 8GB of memory and a maximum disk image size of 200GB
First, ensure you have installed the dependencies of the Python build script by running `pip3 install -r requirements.txt`. (You may need to prefix this command with `sudo` under Linux and macOS.)
Then, simply invoke the build script by specifying the UE4 release that you would like to build using full semver version syntax. For example, to build Unreal Engine 4.19.1:
python3 build.py 4.19.1
(Note that you may need to replace the command `python3` with `python` under Windows.)
You will be prompted for the Git credentials to be used when cloning the UE4 GitHub repository (this will be the GitHub username and password you normally use when cloning https://github.com/EpicGames/UnrealEngine.) The build process will then start automatically, displaying progress output from each of the `docker build` commands that are being run.
Once the build process is complete, you will have up to five new Docker images on your system (where `RELEASE` is the release that you specified when invoking the build script):

- `adamrehn/ue4-build-prerequisites:latest` - this contains the build prerequisites common to all Engine versions and should be kept in order to speed up subsequent builds of additional Engine versions.
- `adamrehn/ue4-source:RELEASE` - this contains the cloned source code for UE4. This image is separated from the `ue4-build` image to isolate the effects of changing environment variables related to Git credentials, so that they don't interfere with the build cache for the subsequent steps.
- `adamrehn/ue4-build:RELEASE` - this contains the source build for UE4, and includes conan-ue4cli support for building Conan packages that are compatible with UE4 when building version 4.19.0 of the Engine or newer.
- `adamrehn/ue4-package:RELEASE` - this extends the `ue4-build` image and is designed for packaging Shipping builds of UE4 projects. Note that the image simply creates an Installed Build of the Engine in order to speed up subsequent build times, and is not required in order to package projects (the `ue4-build` image can be used to package projects, albeit with longer build times.) You can disable the build for this image by specifying `--no-package` when you run the build script.
- `adamrehn/ue4-capture:RELEASE` - this extends the `ue4-package` image with support for the UE4Capture plugin and is designed for capturing gameplay footage from inside NVIDIA Docker containers. This image will only be built when the `ue4-package` image is built with NVIDIA Docker compatibility. You can disable the build for this image by specifying `--no-capture` when you run the build script.
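Once the build has finished, you can check which of these images were produced on your system by listing them (the wildcard filter shown here is just one way to do this):

```
docker images "adamrehn/ue4-*"
```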
Each image extends its immediate predecessor, as depicted in the diagram below:
If you would like to build a custom version of UE4 rather than one of the official releases from Epic, you can specify "custom" as the release string and specify the Git repository and branch/tag that should be cloned:
python3 build.py custom -repo=https://github.com/MyUser/UnrealEngine.git -branch=MyBranch
When building a custom Engine version, both the repository URL and branch/tag must be specified. If you are performing multiple custom builds and wish to differentiate between them, it is recommended to add a custom suffix to the Docker tag of the built images:
python3 build.py custom -repo=https://github.com/MyUser/UnrealEngine.git -branch=MyBranch -suffix=-MySuffix
This will produce images tagged `adamrehn/ue4-source:custom-MySuffix`, `adamrehn/ue4-build:custom-MySuffix`, etc.
By default, Windows container images are based on the Windows Server Core release that best matches the version of the host operating system. However, Windows containers cannot run a newer kernel version than that of the host operating system, rendering the latest images unusable under older versions of Windows 10 and Windows Server. (See the Windows Container Version Compatibility page for a table detailing which configurations are supported.)
If you are building images with the intention of subsequently running them under an older version of Windows 10 or Windows Server, you will need to build images based on the same kernel version as the target system (or older.) The kernel version can be specified by providing the appropriate base OS image tag via the `-basetag=TAG` flag when invoking the build script:
python3 build.py 4.19.2 -basetag=ltsc2016 # Uses Windows Server 2016 (Long Term Support Channel)
For a list of supported base image tags, see the Windows Server Core base image on Docker Hub.
The isolation mode can be specified via the `-isolation=MODE` flag when invoking the build script. Valid values are `process` (supported under Windows Server only) or `hyperv` (supported under both Windows 10 and Windows Server.)
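For example, to force Hyper-V isolation mode when building (the version number is shown for illustration only):

```
python3 build.py 4.19.2 -isolation=hyperv
```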
By default, DLL files are copied from `%SystemRoot%\System32`. However, when building container images with an older kernel version than the host, the copied DLL files will be too new and the container OS will refuse to load them. A custom directory containing the correct DLL files for the container kernel version can be specified via the `-dlldir=DIR` flag when invoking the build script.
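For example, to build images based on Windows Server 2016 using DLL files gathered into a hypothetical local directory named `C:\dlls-ltsc2016`:

```
python3 build.py 4.19.2 -basetag=ltsc2016 -dlldir=C:\dlls-ltsc2016
```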
By default, Windows container images are built when running the build script under Windows. To build Linux container images instead, simply specify the `--linux` flag when invoking the build script.
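For example, to build the Linux container images for Unreal Engine 4.19.2 under Windows:

```
python3 build.py 4.19.2 --linux
```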
NVIDIA Docker provides a container runtime for Docker that allows Linux containers to access NVIDIA GPU devices present on the host system. This facilitates hardware acceleration for applications that use OpenGL or NVIDIA CUDA, and can be useful for Unreal projects that need to perform offscreen rendering from within a container. To build Linux container images that support hardware-accelerated OpenGL when run via NVIDIA Docker, simply specify the `--nvidia` flag when invoking the build script. If you would like CUDA support in addition to OpenGL support, also specify the `--cuda` flag.
Note that NVIDIA Docker version 2.x is required to run the built images (version 1.x is not supported) and that the images can only be run under a Linux host system with one or more NVIDIA GPUs. Images with CUDA support also have additional requirements on top of the requirements for OpenGL support.
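For example (the version number is illustrative; add the `--linux` flag as well if you are building under Windows 10):

```
# OpenGL support only
python3 build.py 4.19.2 --nvidia

# OpenGL and CUDA support
python3 build.py 4.19.2 --nvidia --cuda
```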
If you would like to see what `docker build` commands will be run without actually building anything, you can specify the `--dry-run` flag when invoking the build script. Execution will proceed as normal, except that all `docker build` commands will be printed to standard output instead of being executed as child processes.
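For example, to preview the commands for a 4.19.2 build without running them:

```
python3 build.py 4.19.2 --dry-run
```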
When upgrading to a newer version of the code in this repository, be sure to specify the `--rebuild` flag when invoking the build script. This will ensure all images are rebuilt using the updated Dockerfiles.
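For example:

```
python3 build.py 4.19.2 --rebuild
```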
There are three main approaches for running Automation Tests from the command line. These three approaches are illustrated below, accompanied by the recommended arguments for running correctly inside a Docker container:
- Invoking the Editor directly:

  `path/to/UE4Editor <UPROJECT> -game -buildmachine -stdout -fullstdoutlogoutput -forcelogflush -unattended -nopause -nullrhi -ExecCmds="automation RunTests <TEST1>+<TEST2>+<TESTN>;quit"`

- Using Unreal AutomationTool (UAT):

  `path/to/RunUAT BuildCookRun -project=<UPROJECT> -noP4 -buildmachine -unattended -nullrhi -run "-RunAutomationTest=<TEST1>+<TEST2>+<TESTN>"`

- Using ue4cli:

  `ue4 test <TEST1> <TEST2> <TESTN>`

  (Note that it is also possible to use the `uat` subcommand to invoke UAT, e.g. `ue4 uat BuildCookRun <ARGS...>`, but the `test` command is the recommended way of running automation tests using ue4cli.)
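As a purely illustrative sketch of combining one of these approaches with `docker run` under a Linux container (the image tag, project path and test name are placeholders, and this assumes the `ue4` command is available on the PATH inside the image):

```
docker run --rm \
    -v "/host/path/to/MyProject:/project" \
    -w /project \
    adamrehn/ue4-build:4.19.1 \
    ue4 test MyProjectTests
```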
Each of these approaches has its own benefits and limitations:
(Comparison table: the Editor, UAT and ue4cli approaches, compared in terms of supported Docker images, benefits and limitations.)
Irrespective of the invocation approach utilised, the following limitations apply when running automation tests inside Docker containers:
- Tests requiring sound output will not function correctly.
- Tests that require Virtual Reality (VR) or Augmented Reality (AR) devices or runtimes to be present will not function correctly.
- The Windows-specific plugins `WindowsMoviePlayer` and `WmfMedia` that are enabled by default as of UE4.19 both require Microsoft Media Foundation in order to function correctly. Under Windows Server Core, Media Foundation is provided by the `Server-Media-Foundation` optional feature. However, this feature has a history of being problematic inside Docker containers, and was removed from the Server Core container image in Windows Server, version 1803. As such, any tests that rely on these plugins will not function correctly.
The following resources document the use of these Docker images with the Jenkins Continuous Integration system:
- https://github.com/adamrehn/ue4-opencv-demo - provides an example of using Jenkins to build a UE4 project that consumes a third-party library package via conan-ue4cli.
The `ue4-capture` image is built when NVIDIA Docker support is enabled and the `ue4-package` image has also been built. The `ue4-capture` image can be used either to run Unreal projects directly or to build and package them for use inside any Docker container that is based on the nvidia/opengl or nvidia/cudagl base images. For more details on using NVIDIA Docker images, see the official documentation.
When running inside an OpenGL-enabled NVIDIA Docker container, the Unreal Engine will automatically default to offscreen rendering. You can capture the contents of the framebuffer using the UE4Capture plugin in exactly the same way as when running outside of a container.
To enable audio support inside an NVIDIA Docker container, you will need to provide access to the sound devices from the host system by specifying the arguments `--device /dev/snd` when invoking the `docker run` command.
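For example, a hypothetical invocation that exposes the host's sound devices to a GPU-enabled container might look like this (NVIDIA Docker 2.x syntax; the image tag and command are placeholders):

```
docker run --rm --runtime=nvidia --device /dev/snd adamrehn/ue4-capture:4.19.1 <COMMAND>
```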
If you are running containers inside a virtual machine that does not have access to any physical audio devices, you will need to utilise an alternative such as an ALSA loopback device, which can be enabled on most Linux distributions by using the command `sudo modprobe snd_aloop`. Note that this module is not available in the AWS-tuned Linux kernel that is used by default for AWS virtual machines, so you will need to switch to a vanilla Linux kernel in order to make use of an ALSA loopback.
If you are using the UE4Capture plugin to capture audio, you will need to ensure that you specify the argument `-AudioMixer` when running the Unreal project from which audio will be captured. Note that under some circumstances, packaged builds of Unreal projects will fail to open the default audio device, resulting in no audio output. To fix this, override the default device by specifying a value for the `AUDIODEV` environment variable. For example, to use the audio device called "front", you would issue the command `export AUDIODEV='front'`. (You can view the list of available ALSA audio devices using the `aplay` command from the alsa-utils package.) This issue does not appear to occur when running non-packaged projects from the Editor.
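A minimal sketch of this process (the device name "front" is just the example used above):

```
# List the available ALSA audio devices
aplay -L

# Override the default audio device before launching the packaged project
export AUDIODEV='front'
```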
For an example that demonstrates performing cloud rendering in the `ue4-capture` NVIDIA Docker image and then streaming the video to a web browser via WebRTC, see the ue4-cloud-rendering-demo repository.
- Building Windows containers fails with the message `hcsshim: timeout waiting for notification extra info` or the message `This operation ended because the timeout has expired`:

  This is a known issue when using Windows containers in Hyper-V isolation mode. See the Windows `hcsshim` timeout issues section below for a detailed discussion of this problem and the available workarounds.

- Building or running Windows containers fails with the message `The operating system of the container does not match the operating system of the host`:

  This error is shown in two situations:

  - The host system is running an older kernel version than the container image. In this case, you will need to build the images using the same kernel version as the host system or older. See the Specifying the Windows Server Core base image tag section above for details on specifying the correct kernel version when building Windows container images.
  - The host system is running a newer kernel version than the container image and you are attempting to use process isolation mode instead of Hyper-V isolation mode. (Process isolation mode is the default under Windows Server.) In this case, you will need to use Hyper-V isolation mode instead. See the Specifying the isolation mode under Windows section above for details on how to do this.
- Building Windows containers fails with the message `hcsshim::ImportLayer failed in Win32: The system cannot find the path specified`, or building Linux containers fails with a message about insufficient disk space:

  Assuming you haven't actually run out of disk space, this means that the maximum Docker image size has not been configured correctly.

  - For Windows containers, follow the instructions provided by Microsoft, making sure you restart the Docker daemon after you've modified the config JSON.
  - For Linux containers, use the Docker for Windows "Advanced" settings tab under Windows or the Docker for Mac "Disk" settings tab under macOS.

- Pulling the .NET Framework base image fails with the message `ProcessUtilityVMImage \\?\(long path here)\UtilityVM: The system cannot find the path specified`:

  This is a known issue when the host system is running an older kernel version than the container image. Just like in the case of the "The operating system of the container does not match the operating system of the host" error mentioned above, you will need to build the images using the same kernel version as the host system or older. See the Specifying the Windows Server Core base image tag section above for details on specifying the correct kernel version when building Windows container images.
- Cloning the UnrealEngine Git repository fails with the message `error: unable to read askpass response from 'C:\git-credential-helper.bat'` (for Windows containers) or `'/tmp/git-credential-helper.sh'` (for Linux containers):

  This typically indicates that the firewall on the host system is blocking connections from the Docker container, preventing it from retrieving the Git credentials supplied by the build script. (This is particularly noticeable under a clean installation of Windows Server, which blocks connections from other subnets by default.) The firewall will need to be configured appropriately to allow the connection, or else temporarily disabled. (Use the command `netsh advfirewall set allprofiles state off` under Windows Server.)

- Building the Engine in a Windows container fails with the message `The process cannot access the file because it is being used by another process`:

  This is a known bug in some older versions of UnrealBuildTool when using a memory limit that is not a multiple of 4GB. To alleviate this issue, specify an appropriate memory limit override (e.g. `-m 8GB` or `-m 12GB`.) For more details on this issue, see the last paragraph of the Windows `hcsshim` timeout issues section below.
Building the Engine in a Windows container fails with the message
fatal error LNK1318: Unexpected PDB error; OK (0)
:This is a known bug in some versions of Visual Studio, which only appears to occur intermittently. The simplest fix is to simply reboot the host system and then re-run the build script. Insufficient available memory may also contribute to triggering this bug.
-
Building an Unreal project in a Windows container fails when the project files are located in a directory that is bind-mounted from the host operating system:
Evidently the paths associated with Windows bind-mounted directories can cause issues for certain build tools, including UnrealBuildTool and CMake. As a result, building Unreal projects located in Windows bind-mounted directories is not advised. The solution is to copy the Unreal project to a temporary directory within the container's filesystem and build it there, copying any produced build artifacts back to the host system via the bind-mounted directory as necessary.
Note 1: This problem has currently only been observed when running containers under Hyper-V isolation mode and has not yet been observed to affect containers running under process isolation mode. However, it is still recommended that you implement the copy-based workaround in your own CI pipelines to ensure compatibility with both isolation modes.
Note 2: This problem does not apply to Linux containers.
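The copy-based workaround described above might look something like the following inside a Windows container (all paths are placeholders and the packaging arguments are illustrative only):

```
REM Copy the project out of the bind-mounted directory into the container's own filesystem
xcopy C:\hostdir\MyProject C:\temp\MyProject /E /I /Q

REM Build or package the project from the container-local copy
path\to\RunUAT BuildCookRun -project=C:\temp\MyProject\MyProject.uproject -noP4 -clientconfig=Shipping -cook -build -stage -pak -archive -archivedirectory=C:\temp\archived

REM Copy the produced build artifacts back to the bind-mounted directory
xcopy C:\temp\archived C:\hostdir\archived /E /I /Q
```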
Recent versions of Docker under Windows may sometimes encounter the error `hcsshim: timeout waiting for notification extra info` when building or running Windows containers. This issue appears to be related to Hyper-V isolation mode and has not been observed to affect containers running in process isolation mode. At the time of writing, Microsoft have stated that they are aware of the problem, but an official fix is yet to be released.

As a workaround until a proper fix is issued, it seems that altering the memory limit for containers between subsequent invocations of the `docker` command can reduce the frequency with which this error occurs. (Changing the memory limit when using Hyper-V isolation likely forces Docker to provision a new Hyper-V VM, preventing it from re-using an existing one that has become unresponsive.) Please note that this workaround has been devised based on my own testing under Windows 10 and may not hold true when using Hyper-V isolation under Windows Server.
To enable the workaround, specify the `--random-memory` flag when invoking the build script. This will set the container memory limit to a random value between 10GB and 12GB when the build script starts. If a build fails with the `hcsshim` timeout error, simply re-run the build script and in most cases the build will continue successfully, even if only for a short while. Restarting the Docker daemon may also help.

Note that some older versions of UnrealBuildTool will crash with an error stating "The process cannot access the file because it is being used by another process" when using a memory limit that is not a multiple of 4GB. If this happens, simply run the build script again with an appropriate memory limit (e.g. `-m 8GB` or `-m 12GB`.) If the access error occurs even when using an appropriate memory limit, this likely indicates that Windows is unable to allocate the full amount of memory to the container. Rebooting the host system may help to alleviate this issue.
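For example (the version number is illustrative):

```
# Enable the randomised memory limit workaround
python3 build.py 4.19.2 --random-memory

# Or specify an explicit memory limit that is a multiple of 4GB
python3 build.py 4.19.2 -m 8GB
```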
- Why are the Dockerfiles written in such an inefficient manner? There are a large number of `RUN` directives that could be combined to improve both build efficiency and overall image size.

  The Dockerfiles have been deliberately written in an inefficient way because doing so serves two very important purposes.

  The first purpose is self-documentation. These Docker images are the first publicly-available Windows and Linux images to provide comprehensive build capabilities for Unreal Engine 4. Along with the supporting documentation and articles on adamrehn.com, the code in this repository represents an important source of information regarding the steps that must be taken to get UE4 working correctly inside a container. The readability of the Dockerfiles is key, which is why they contain so many individual `RUN` directives with explanatory comments. Combining `RUN` directives would reduce readability and potentially obfuscate the significance of critical steps.

  The second purpose is debuggability. Updating the Dockerfiles to ensure compatibility with new Unreal Engine releases is an extremely involved process that typically requires building the Engine many times over. By breaking the Dockerfiles into many fine-grained `RUN` directives, the Docker build cache can be leveraged to ensure only the failing steps need to be repeated when rebuilding the images during debugging. Combining `RUN` directives would increase the amount of processing that needs to be redone each time one of the commands in a given directive fails, significantly increasing overall debugging times.

- Can the Windows containers be used to perform cloud rendering in the same manner as the Linux NVIDIA Docker containers?

  Unfortunately not. NVIDIA Docker only supports Linux at this time and I am aware of no available equivalent for Windows containers. It is possible that this situation may change in the future as Windows containers mature and become more widely adopted.
- Is it possible to build Unreal projects for macOS or iOS using the Docker containers?

  Building projects for macOS or iOS requires a copy of macOS and Xcode. Since macOS cannot run inside a Docker container, there is unfortunately no way to perform macOS or iOS builds using Docker containers.